WorldWideScience

Sample records for reporting parallel group

  1. Vectorization, parallelization and porting of nuclear codes (vectorization and parallelization). Progress report fiscal 1998

    International Nuclear Information System (INIS)

    Ishizuki, Shigeru; Kawai, Wataru; Nemoto, Toshiyuki; Ogasawara, Shinobu; Kume, Etsuo; Adachi, Masaaki; Kawasaki, Nobuo; Yatake, Yo-ichi

    2000-03-01

Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system and the Paragon system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. The results are reported in three parts: the vectorization and parallelization on vector processors part, the parallelization on scalar processors part, and the porting part. This report describes the vectorization and parallelization on vector processors: the vectorization of the General Tokamak Circuit Simulation Program code GTCSP, and the vectorization and parallelization of the Molecular Dynamics NTV (n-particle, Temperature and Velocity) Simulation code MSP2, the Eddy Current Analysis code EDDYCAL, the Thermal Analysis Code for Test of Passive Cooling System by HENDEL T2 code THANPACST2, and the MHD Equilibrium code SELENEJ, all on the VPP500. In the parallelization on scalar processors part, the parallelization of the Monte Carlo N-Particle Transport code MCNP4B2, the Plasma Hydrodynamics code using the Cubic Interpolated Propagation method PHCIP, and the Vectorized Monte Carlo code (continuous energy model / multi-group model) MVP/GMVP on the Paragon is described. In the porting part, the porting of the Monte Carlo N-Particle Transport code MCNP4B2 and the Reactor Safety Analysis code RELAP5 to the AP3000 is described. (author)

  2. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
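The map-reduce structure that makes an E step parallel can be sketched as follows. This is a hedged illustration using a simple latent-class model, not the paper's implementation; all names below are invented. Because the chunks are independent, the map over chunks is where processors (or a process pool's `map` with a module-level worker) can be applied.

```python
import numpy as np

def e_step_chunk(chunk, pi, p):
    """Expected counts for one chunk of respondents (toy latent-class model).
    chunk: (n, J) 0/1 responses; pi: (K,) class weights; p: (K, J) item probs."""
    ll = chunk @ np.log(p).T + (1 - chunk) @ np.log(1 - p).T   # (n, K) log-likelihoods
    post = np.exp(ll) * pi
    post /= post.sum(axis=1, keepdims=True)      # posterior class memberships
    return post.sum(axis=0), post.T @ chunk      # expected class / correct counts

def parallel_em_step(data, pi, p, n_chunks=4, map_fn=map):
    """One EM step. The per-chunk E steps are independent, so map_fn can be
    swapped for a pool's map; the M step reduces the pooled expected counts."""
    chunks = np.array_split(data, n_chunks)
    results = list(map_fn(lambda c: e_step_chunk(c, pi, p), chunks))
    n_k = sum(r[0] for r in results)
    r_kj = sum(r[1] for r in results)
    return n_k / len(data), r_kj / n_k[:, None]  # updated weights and item probs
```

The M step here is a closed-form update from pooled sufficient statistics, which is what makes the reduce step cheap relative to the map.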

  3. Parallel and Serial Grouping of Image Elements in Visual Perception

    Science.gov (United States)

    Houtkamp, Roos; Roelfsema, Pieter R.

    2010-01-01

    The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some…

  4. Parallel computational in nuclear group constant calculation

    International Nuclear Information System (INIS)

    Su'ud, Zaki; Rustandi, Yaddi K.; Kurniadi, Rizal

    2002-01-01

In this paper, a parallel computational method for nuclear group constant calculation using the collision probability method is discussed. The main focus is the calculation of the collision probability matrix, which requires a large amount of computational time. The geometry treated here is a concentric cylinder. The collision probability matrix is calculated semi-analytically using Bickley-Naylor functions. To accelerate the computation, several computers are used in parallel. On Linux, parallelization is based on PVM with C or Fortran; on Windows, socket programming with Delphi or C++ Builder is used. The results show the importance of assigning an optimal weight to each processor when processors of many different speeds are involved.
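As a hedged illustration of the speed-weighted work split the abstract alludes to (not the report's PVM/Fortran implementation), one can assign matrix rows to workers in proportion to their relative speeds. The `collision_row` kernel below is an invented placeholder, not the Bickley-Naylor physics.

```python
import numpy as np

def split_by_speed(n_rows, speeds):
    """Assign matrix rows to workers in proportion to relative speed,
    as the abstract's heterogeneous-cluster weighting suggests."""
    speeds = np.asarray(speeds, dtype=float)
    shares = np.floor(n_rows * speeds / speeds.sum()).astype(int)
    shares[: n_rows - shares.sum()] += 1          # distribute the remainder
    bounds = np.concatenate(([0], np.cumsum(shares)))
    return [range(bounds[i], bounds[i + 1]) for i in range(len(speeds))]

def collision_row(i, radii):
    """Placeholder kernel: one row of a toy 'collision' matrix for concentric
    annuli. Illustrative only -- not the semi-analytic Bickley-Naylor form."""
    r = np.asarray(radii)
    return np.exp(-np.abs(r - r[i]))

def assemble(radii, speeds):
    """Each worker's row range could run on its own node; results are merged."""
    rows = {}
    for worker in split_by_speed(len(radii), speeds):
        for i in worker:
            rows[i] = collision_row(i, radii)
    return np.vstack([rows[i] for i in range(len(radii))])
```

Rows are a natural unit of work here because each row of the collision matrix can be evaluated independently of the others.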

  5. Vectorization, parallelization and porting of nuclear codes (porting). Progress report fiscal 1998

    International Nuclear Information System (INIS)

    Nemoto, Toshiyuki; Kawai, Wataru; Ishizuki, Shigeru; Kawasaki, Nobuo; Kume, Etsuo; Adachi, Masaaki; Ogasawara, Shinobu

    2000-03-01

Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system and the Paragon system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. The results are reported in three parts: the vectorization and parallelization on vector processors part, the parallelization on scalar processors part, and the porting part. This report describes the porting: the porting of the Monte Carlo N-Particle Transport code MCNP4B2 and the Reactor Safety Analysis code RELAP5 to the AP3000. In the vectorization and parallelization on vector processors part, the vectorization of the General Tokamak Circuit Simulation Program code GTCSP, and the vectorization and parallelization of the Molecular Dynamics NTV Simulation code MSP2, the Eddy Current Analysis code EDDYCAL, the Thermal Analysis Code for Test of Passive Cooling System by HENDEL T2 code THANPACST2, and the MHD Equilibrium code SELENEJ on the VPP500 are described. In the parallelization on scalar processors part, the parallelization of the Monte Carlo N-Particle Transport code MCNP4B2, the Plasma Hydrodynamics code using the Cubic Interpolated Propagation method PHCIP, and the Vectorized Monte Carlo code (continuous energy model/multi-group model) MVP/GMVP on the Paragon is described. (author)

  6. Vectorization, parallelization and porting of nuclear codes. Vectorization and parallelization. Progress report fiscal 1999

    Energy Technology Data Exchange (ETDEWEB)

    Adachi, Masaaki; Ogasawara, Shinobu; Kume, Etsuo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishizuki, Shigeru; Nemoto, Toshiyuki; Kawasaki, Nobuo; Kawai, Wataru [Fujitsu Ltd., Tokyo (Japan); Yatake, Yo-ichi [Hitachi Ltd., Tokyo (Japan)

    2001-02-01

Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system, the SX-4 system and the Paragon system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. We dealt with 18 codes in fiscal 1999. The results are reported in three parts: the vectorization and parallelization part on vector processors, the parallelization part on scalar processors, and the porting part. This report describes the vectorization and parallelization on vector processors: the vectorization of the Relativistic Molecular Orbital Calculation code RSCAT, the microscopic transport code for high-energy nuclear collisions JAM, the three-dimensional non-steady thermal-fluid analysis code STREAM, the Relativistic Density Functional Theory code RDFT and the High Speed Three-Dimensional Nodal Diffusion code MOSRA-Light on the VPP500 system and the SX-4 system. (author)

  7. Establishing a group of endpoints in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.; Xue, Hanhong

    2016-02-02

A parallel computer executes a number of tasks, each task includes a number of endpoints, and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes: receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation being a data structure setting forth an organization of tasks and endpoints included in the global collection, and the user specification defining the set of endpoints without specifying any particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation and the user specification.
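A hedged toy reading of the claim: the user selects endpoints by ranges over an organization of tasks and endpoints, never by naming an individual endpoint. The grid data structure and range-based specification below are invented stand-ins for the patent's virtual representation.

```python
from itertools import product

def make_virtual_representation(n_tasks, endpoints_per_task):
    """A simple virtual representation: a task-major grid mapping
    (task, endpoint-slot) to a global endpoint id -- standing in for the
    patent's 'data structure setting forth an organization of tasks and
    endpoints'."""
    return {(t, e): t * endpoints_per_task + e
            for t, e in product(range(n_tasks), range(endpoints_per_task))}

def define_group(grid, task_range, endpoint_range):
    """Define a group from ranges over the virtual representation --
    the user never names a particular endpoint."""
    return sorted(gid for (t, e), gid in grid.items()
                  if t in task_range and e in endpoint_range)
```

For example, "all endpoints of the first two tasks" is expressed purely as ranges over the representation.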

  8. Parallel solutions of the two-group neutron diffusion equations

    International Nuclear Information System (INIS)

    Zee, K.S.; Turinsky, P.J.

    1987-01-01

Recent efforts to adapt various numerical solution algorithms to parallel computer architectures have addressed the possibility of substantially reducing the running time of few-group neutron diffusion calculations. The authors have developed an efficient iterative parallel algorithm and an associated computer code for the rapid solution of the finite difference method representation of the two-group neutron diffusion equations on the CRAY X/MP-48 supercomputer, which has multiple CPUs and vector pipelines. For realistic simulation of light water reactor cores, the code employs a macroscopic depletion model with trace capability for selected fission product transients and critical boron. In addition, moderator and fuel temperature feedback models are incorporated into the code. The physics models used in the code were benchmarked against qualified codes and proved accurate. This work extends previous work in that various feedback effects are accounted for in the system; the entire code is structured to accommodate extensive vectorization; and additional parallelism by multitasking is achieved not only for the solution of the matrix equations associated with the inner iterations but also for the other segments of the code, e.g., the outer iterations.

  9. Massively parallel read mapping on GPUs with the q-group index and PEANUT

    NARCIS (Netherlands)

    J. Köster (Johannes); S. Rahmann (Sven)

    2014-01-01

We present the q-group index, a novel data structure for read mapping tailored towards graphics processing units (GPUs) with a small memory footprint and efficient parallel algorithms for querying and building. On top of the q-group index we introduce PEANUT, a highly parallel GPU-based …
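The underlying q-gram lookup idea can be sketched serially as follows; the compressed, GPU-parallel q-group layout of PEANUT itself is not reproduced here. Each q-gram of a read votes for the reference position that would align the read there.

```python
from collections import defaultdict

Q = 4  # q-gram length

def build_qgram_index(reference):
    """Plain q-gram index: maps each length-Q substring to its positions.
    (The PEANUT q-group index is a compressed, GPU-friendly refinement of
    this idea; this sketch only shows the underlying lookup.)"""
    index = defaultdict(list)
    for i in range(len(reference) - Q + 1):
        index[reference[i:i + Q]].append(i)
    return index

def candidate_position(index, read):
    """Vote for reference offsets where the read's q-grams agree."""
    votes = defaultdict(int)
    for j in range(len(read) - Q + 1):
        for pos in index.get(read[j:j + Q], ()):
            votes[pos - j] += 1               # offset that aligns q-gram j at pos
    return max(votes, key=votes.get) if votes else None
```

The per-read queries are independent, which is what makes the scheme amenable to massive parallelism.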

  10. Parallel and serial grouping of image elements in visual perception

    NARCIS (Netherlands)

    Houtkamp, R.; Roelfsema, P.R.

    2010-01-01

The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that …

  11. Magnetohydrodynamics: Parallel computation of the dynamics of thermonuclear and astrophysical plasmas. 1. Annual report of massively parallel computing pilot project 93MPR05

    International Nuclear Information System (INIS)

    1994-08-01

    This is the first annual report of the MPP pilot project 93MPR05. In this pilot project four research groups with different, complementary backgrounds collaborate with the aim to develop new algorithms and codes to simulate the magnetohydrodynamics of thermonuclear and astrophysical plasmas on massively parallel machines. The expected speed-up is required to simulate the dynamics of the hot plasmas of interest which are characterized by very large magnetic Reynolds numbers and, hence, require high spatial and temporal resolutions (for details see section 1). The four research groups that collaborated to produce the results reported here are: The MHD group of Prof. Dr. J.P. Goedbloed at the FOM-Institute for Plasma Physics 'Rijnhuizen' in Nieuwegein, the group of Prof. Dr. H. van der Vorst at the Mathematics Institute of Utrecht University, the group of Prof. Dr. A.G. Hearn at the Astronomical Institute of Utrecht University, and the group of Dr. Ir. H.J.J. te Riele at the CWI in Amsterdam. The full project team met frequently during this first project year to discuss progress reports, current problems, etc. (see section 2). The main results of the first project year are: - Proof of the scalability of typical linear and nonlinear MHD codes - development and testing of a parallel version of the Arnoldi algorithm - development and testing of alternative methods for solving large non-Hermitian eigenvalue problems - porting of the 3D nonlinear semi-implicit time evolution code HERA to an MPP system. The steps that were scheduled to reach these intended results are given in section 3. (orig./WL)
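One of the report's listed results is a parallel Arnoldi algorithm. As a hedged serial sketch (the parallel version distributes the matrix-vector products and orthogonalizations), the Arnoldi process builds an orthonormal Krylov basis and a small Hessenberg matrix whose eigenvalues approximate those of the large non-Hermitian operator:

```python
import numpy as np

def arnoldi(A, v0, m):
    """m steps of the Arnoldi process: builds an orthonormal basis V of the
    Krylov space and an upper-Hessenberg H with A V_m = V_{m+1} H.
    The matrix-vector product and the Gram-Schmidt sweeps are the steps a
    parallel implementation distributes."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]                       # dominant (parallelizable) cost
        for i in range(j + 1):                # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:               # exact invariant subspace found
            return V[:, : j + 1], H[: j + 2, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

The eigenvalues of the small H (Ritz values) then approximate extremal eigenvalues of A, which is why the method suits the large non-Hermitian MHD eigenvalue problems mentioned above.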

  12. Magnetohydrodynamics: Parallel computation of the dynamics of thermonuclear and astrophysical plasmas. 1. Annual report of massively parallel computing pilot project 93MPR05

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-08-01

This is the first annual report of the MPP pilot project 93MPR05. In this pilot project four research groups with different, complementary backgrounds collaborate with the aim to develop new algorithms and codes to simulate the magnetohydrodynamics of thermonuclear and astrophysical plasmas on massively parallel machines. The expected speed-up is required to simulate the dynamics of the hot plasmas of interest which are characterized by very large magnetic Reynolds numbers and, hence, require high spatial and temporal resolutions (for details see section 1). The four research groups that collaborated to produce the results reported here are: The MHD group of Prof. Dr. J.P. Goedbloed at the FOM-Institute for Plasma Physics 'Rijnhuizen' in Nieuwegein, the group of Prof. Dr. H. van der Vorst at the Mathematics Institute of Utrecht University, the group of Prof. Dr. A.G. Hearn at the Astronomical Institute of Utrecht University, and the group of Dr. Ir. H.J.J. te Riele at the CWI in Amsterdam. The full project team met frequently during this first project year to discuss progress reports, current problems, etc. (see section 2). The main results of the first project year are: - Proof of the scalability of typical linear and nonlinear MHD codes - development and testing of a parallel version of the Arnoldi algorithm - development and testing of alternative methods for solving large non-Hermitian eigenvalue problems - porting of the 3D nonlinear semi-implicit time evolution code HERA to an MPP system. The steps that were scheduled to reach these intended results are given in section 3. (orig./WL).

  13. Psychodrama: A Creative Approach for Addressing Parallel Process in Group Supervision

    Science.gov (United States)

    Hinkle, Michelle Gimenez

    2008-01-01

    This article provides a model for using psychodrama to address issues of parallel process during group supervision. Information on how to utilize the specific concepts and techniques of psychodrama in relation to group supervision is discussed. A case vignette of the model is provided.

  14. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (parallelization). Progress report fiscal 1996

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Hideo; Kawai, Wataru; Nemoto, Toshiyuki [Fujitsu Ltd., Tokyo (Japan); and others

    1997-12-01

Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. The results are reported in three parts: the vectorization part, the parallelization part, and the porting part. This report describes the parallelization: the parallelization of the 2-dimensional relativistic electromagnetic particle code EM2D, the Cylindrical Direct Numerical Simulation code CYLDNS, and DGR, a molecular dynamics code for simulating radiation damage in diamond crystals. In the vectorization part, the vectorization of the two- and three-dimensional discrete ordinates simulation code DORT-TORT, the gas dynamics analysis code FLOWGR and the relativistic Boltzmann-Uehling-Uhlenbeck simulation code RBUU is described. In the porting part, the porting of the reactor safety analysis codes RELAP5/MOD3.2 and RELAP5/MOD3.2.1.2, the nuclear data processing system NJOY and the 2-D multigroup discrete ordinates transport code TWOTRAN-II is described. Finally, a survey on porting the command-driven interactive data analysis plotting program IPLOT is described. (author)

  15. CONSORT 2010 Explanation and Elaboration: Updated guidelines for reporting parallel group randomised trials

    DEFF Research Database (Denmark)

    Moher, David; Hopewell, Sally; Schulz, Kenneth F

    2010-01-01

Overwhelming evidence shows the quality of reporting of randomised controlled trials (RCTs) is not optimal. Without transparent reporting, readers cannot judge the reliability and validity of trial findings nor extract information for systematic reviews. Recent methodological analyses indicate … that inadequate reporting and design are associated with biased estimates of treatment effects. Such systematic error is seriously damaging to RCTs, which are considered the gold standard for evaluating interventions because of their ability to minimise or avoid bias. A group of scientists and editors developed …, this revised explanatory and elaboration document, and the associated website (www.consort-statement.org) should be helpful resources to improve reporting of randomised trials. …

  16. Northeast Artificial Intelligence Consortium Annual Report - 1988 Parallel Vision. Volume 9

    Science.gov (United States)

    1989-10-01

Annual report supporting the Northeast Artificial Intelligence Consortium (NAIC). Volume 9, Parallel Vision, Syracuse University; submitted by Christopher M. Brown and Randal C. Nelson.

  17. A Lightweight RFID Grouping-Proof Protocol Based on Parallel Mode and DHCP Mechanism

    Directory of Open Access Journals (Sweden)

    Zhicai Shi

    2017-07-01

A Radio Frequency Identification (RFID) grouping-proof protocol generates evidence of the simultaneous existence of a group of tags, and such protocols have been applied in many different fields. Current grouping-proof protocols still have flaws, such as low grouping-proof efficiency and vulnerability to trace attacks and information leakage. To improve security and efficiency, we propose a lightweight RFID grouping-proof protocol based on a parallel mode and the DHCP (Dynamic Host Configuration Protocol) mechanism. Our protocol involves multiple readers and multiple tag groups. During the grouping-proof period, one reader and one tag group are chosen by the verifier by means of the DHCP mechanism. When only some of the tags of the chosen group are present, the protocol can still produce evidence of their co-existence. Our protocol uses a parallel communication mode between reader and tags to ensure grouping-proof efficiency. It uses only a Hash function to complete the mutual authentication among verifier, readers and tags. It preserves the privacy of the RFID system and resists attacks such as eavesdropping, replay, trace and impersonation. The protocol is therefore secure, flexible and efficient, and because it relies only on lightweight operations such as a Hash function and a pseudorandom number generator, it is well suited to low-cost RFID systems.
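The core idea of hash-based co-existence evidence can be sketched as follows. This is a deliberately simplified illustration, not the paper's exact message flow: the real protocol adds the DHCP-based selection, mutual authentication, and pseudorandom nonces per tag.

```python
import hashlib, os

def h(*parts):
    """Hash helper standing in for the protocol's lightweight Hash function."""
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

def tag_response(tag_key, challenge):
    """Each present tag answers the reader's challenge with H(key, challenge)."""
    return h(tag_key, challenge)

def grouping_proof(tag_keys, challenge=None):
    """Collect responses (in parallel, in the protocol) and chain them into a
    single evidence value bound to the challenge."""
    challenge = challenge or os.urandom(16)
    responses = [tag_response(k, challenge) for k in tag_keys]
    evidence = challenge
    for r in responses:
        evidence = h(evidence, r)
    return challenge, evidence

def verify(tag_keys, challenge, evidence):
    """The verifier, knowing the tag keys, recomputes the chain."""
    return grouping_proof(tag_keys, challenge)[1] == evidence
```

A fresh challenge per run is what prevents straightforward replay of an old proof; the sketch makes that dependence explicit by seeding the chain with the challenge.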

  18. Vdebug: debugging tool for parallel scientific programs. Design report on vdebug

    International Nuclear Information System (INIS)

    Matsuda, Katsuyuki; Takemiya, Hiroshi

    2000-02-01

We report on a debugging tool called vdebug which supports debugging of parallel scientific simulation programs. It is difficult to debug scientific programs with existing debuggers because the volume of data the programs generate is too large for users to check in character form, which is how existing debuggers display data values. To alleviate this, we have developed vdebug, which makes it possible to check the validity of large amounts of data by displaying the values visually. Although vdebug had previously been restricted to sequential programs, we have made it applicable to parallel programs by implementing a function that merges and visualizes data distributed across the programs on each computer node. vdebug now works on seven kinds of parallel computers. In this report, we describe the design of vdebug. (author)
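The merge step can be sketched as follows; this is a hedged illustration only, since the report does not specify vdebug's distribution format. The `layout` convention and the ASCII rendering below are invented stand-ins for the tool's actual merge-and-visualize function.

```python
import numpy as np

def merge_node_data(chunks, layout):
    """Merge per-node blocks of a distributed 2-D array back into one grid.
    `layout` maps a node id to its (block_row, block_col) position -- a
    hypothetical convention for this sketch."""
    rows = {}
    for node, block in chunks.items():
        r, c = layout[node]
        rows.setdefault(r, {})[c] = np.asarray(block)
    return np.block([[rows[r][c] for c in sorted(rows[r])] for r in sorted(rows)])

def ascii_heatmap(a, levels=" .:*#"):
    """Coarse character rendering, a stand-in for graphical visualization."""
    lo, hi = float(a.min()), float(a.max())
    if hi == lo:
        scaled = np.zeros(a.shape, dtype=int)
    else:
        scaled = ((a - lo) / (hi - lo) * (len(levels) - 1)).astype(int)
    return "\n".join("".join(levels[v] for v in row) for row in scaled)
```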

  19. Coarse-grain parallel solution of few-group neutron diffusion equations

    International Nuclear Information System (INIS)

    Sarsour, H.N.; Turinsky, P.J.

    1991-01-01

The authors present a parallel numerical algorithm for the solution of the finite difference representation of the few-group neutron diffusion equations. The targeted architectures are multiprocessor computers with shared memory, like the Cray Y-MP and the IBM 3090/VF, where coarse granularity is important for minimizing overhead. Most past work that attempts to exploit concurrence has concentrated on the inner iterations of the standard outer-inner iterative strategy, which produces very fine granularity. To coarsen granularity, the authors introduce parallelism at the nested outer-inner level. The problem's spatial domain is partitioned into contiguous subregions, and a processor is assigned to solve each subregion independently of all other subregions and, hence, processors; i.e., each subregion is treated as a reactor core with imposed boundary conditions. Since the boundary conditions on interior surfaces, referred to as internal boundary conditions (IBCs), are not known, a third iterative level, the recomposition iterations, is introduced to communicate results between subregions.
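The subregion-with-IBC idea can be illustrated by a hedged 1-D analogue (the paper treats multidimensional few-group diffusion; this sketch uses a two-subdomain Poisson problem). Each subdomain is solved with imposed boundary values taken from the other's latest solution, and the recomposition loop repeats until the composed solution settles.

```python
import numpy as np

def solve_subdomain(f, left, right, h):
    """Direct solve of -u'' = f on one subdomain with Dirichlet values left/right."""
    n = len(f)
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2
    b = f.copy()
    b[0] += left / h ** 2
    b[-1] += right / h ** 2
    return np.linalg.solve(A, b)

def recomposition(f, sweeps=50):
    """Two overlapping subdomains alternately solve and exchange interface
    values ('internal boundary conditions') -- a 1-D analogue of the paper's
    recomposition iterations."""
    n = len(f)
    h = 1.0 / (n + 1)
    u = np.zeros(n)
    mid, ov = n // 2, 2                      # interface index and overlap width
    for _ in range(sweeps):
        u[: mid + ov] = solve_subdomain(f[: mid + ov], 0.0, u[mid + ov], h)
        u[mid - ov:] = solve_subdomain(f[mid - ov:], u[mid - ov - 1], 0.0, h)
    return u
```

In the parallel setting the two subdomain solves would run on separate processors, with only the interface values exchanged each recomposition sweep.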

  20. CONSORT 2010 Explanation and Elaboration: updated guidelines for reporting parallel group randomised trials (Chinese version)

    DEFF Research Database (Denmark)

    Moher, David; Hopewell, Sally; Schulz, Kenneth F

    2010-01-01

Overwhelming evidence shows the quality of reporting of randomised controlled trials (RCTs) is not optimal. Without transparent reporting, readers cannot judge the reliability and validity of trial findings nor extract information for systematic reviews. Recent methodological analyses indicate … that inadequate reporting and design are associated with biased estimates of treatment effects. Such systematic error is seriously damaging to RCTs, which are considered the gold standard for evaluating interventions because of their ability to minimise or avoid bias. A group of scientists and editors developed …, this revised explanatory and elaboration document, and the associated website (www.consort-statement.org) should be helpful resources to improve reporting of randomised trials. …

  1. Numeric algorithms for parallel processors computer architectures with applications to the few-groups neutron diffusion equations

    International Nuclear Information System (INIS)

    Zee, S.K.

    1987-01-01

A numeric algorithm and an associated computer code were developed for the rapid solution of the finite-difference method representation of the few-group neutron-diffusion equations on parallel computers. Applications of the numeric algorithm on both SIMD (vector pipeline) and MIMD/SIMD (multi-CPU/vector pipeline) architectures were explored. The algorithm was successfully implemented in the two-group, 3-D neutron diffusion computer code named DIFPAR3D (DIFfusion PARallel 3-Dimension). Numerical-solution techniques used in the code include the Chebyshev polynomial acceleration technique in conjunction with the power method of outer iteration. For inner iterations, a parallel form of red-black (cyclic) line SOR with automated determination of group-dependent relaxation factors and the iteration numbers required to achieve the specified inner iteration error tolerance is incorporated. The code employs a macroscopic depletion model with trace capability for selected fission product transients and critical boron. In addition, moderator and fuel temperature feedback models are incorporated into the DIFPAR3D code for realistic simulation of power reactor cores. The physics models used were proven acceptable in separate benchmarking studies.
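The parallelism in red-black SOR comes from the checkerboard ordering: points of one colour have only other-colour neighbours, so a whole colour can be updated at once. A hedged sketch of the point variant follows (the code above uses the line variant with automatically determined relaxation factors, and a diffusion operator rather than this plain Poisson stencil):

```python
import numpy as np

def red_black_sor(b, omega=1.7, sweeps=500):
    """Red-black point SOR for a Poisson-like inner iteration with zero
    Dirichlet boundaries: solves 4u - sum(neighbours) = b on a square grid.
    Each half-sweep updates one colour everywhere simultaneously -- the
    property exploited for vector/parallel inner iterations."""
    n = b.shape[0]
    u = np.zeros_like(b)
    red = (np.add.outer(np.arange(n), np.arange(n)) % 2 == 0)
    for _ in range(sweeps):
        for colour in (red, ~red):
            nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                  + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            nb[0, :] -= u[-1, :]; nb[-1, :] -= u[0, :]   # undo wrap-around:
            nb[:, 0] -= u[:, -1]; nb[:, -1] -= u[:, 0]   # zero outside the grid
            gs = (nb + b) / 4.0                          # Gauss-Seidel target
            u[colour] += omega * (gs[colour] - u[colour])
    return u
```

Recomputing the neighbour sums between half-sweeps is what preserves the Gauss-Seidel ordering: black points see the freshly updated red values.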

  2. Parallel Relational Universes – experiments in modularity

    DEFF Research Database (Denmark)

    Pagliarini, Luigi; Lund, Henrik Hautop

    2015-01-01

We describe Parallel Relational Universes, an artistic method used for the psychological analysis of group dynamics. The design of the artistic system, which mediates group dynamics, emerges from our studies of modular playware and remixing playware. Inspired by remixing modular playware, where users remix samples in the form of physical and functional modules, we created an artistic instantiation of the concept with Parallel Relational Universes, allowing arts alumni to remix artistic expressions. Here, we report the data that emerged from a first pre-test, run with gymnasium alumni. We then report both the artistic and the psychological findings, and discuss possible variations of the instrument. Positioned between an art piece and a psychological test, at a first cognitive analysis it seems to be a promising research tool.

  3. Parallel Expansions of Sox Transcription Factor Group B Predating the Diversifications of the Arthropods and Jawed Vertebrates

    Science.gov (United States)

    Zhong, Lei; Wang, Dengqiang; Gan, Xiaoni; Yang, Tong; He, Shunping

    2011-01-01

    Group B of the Sox transcription factor family is crucial in embryo development in the insects and vertebrates. Sox group B, unlike the other Sox groups, has an unusually enlarged functional repertoire in insects, but the timing and mechanism of the expansion of this group were unclear. We collected and analyzed data for Sox group B from 36 species of 12 phyla representing the major metazoan clades, with an emphasis on arthropods, to reconstruct the evolutionary history of SoxB in bilaterians and to date the expansion of Sox group B in insects. We found that the genome of the bilaterian last common ancestor probably contained one SoxB1 and one SoxB2 gene only and that tandem duplications of SoxB2 occurred before the arthropod diversification but after the arthropod-nematode divergence, resulting in the basal repertoire of Sox group B in diverse arthropod lineages. The arthropod Sox group B repertoire expanded differently from the vertebrate repertoire, which resulted from genome duplications. The parallel increases in the Sox group B repertoires of the arthropods and vertebrates are consistent with the parallel increases in the complexity and diversification of these two important organismal groups. PMID:21305035

  4. Vectorization, parallelization and porting of nuclear codes (porting). Progress report fiscal 1999

    Energy Technology Data Exchange (ETDEWEB)

    Kawasaki, Nobuo; Nemoto, Toshiyuki; Kawai, Wataru; Ishizuki, Shigeru [Fujitsu Ltd., Tokyo (Japan); Ogasawara, Shinobu; Kume, Etsuo; Adachi, Masaaki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Yatake, Yo-ichi [Hitachi Ltd., Tokyo (Japan)

    2001-01-01

Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system, the SX-4 system and the Paragon system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. We dealt with 18 codes in fiscal 1999. The results are reported in three parts: the vectorization and parallelization part on vector processors, the parallelization part on scalar processors, and the porting part. This report describes the porting: the porting of the Assisted Model Building with Energy Refinement code version 5 (AMBER5), the general-purpose Monte Carlo codes for neutron and photon transport calculations based on continuous-energy and multigroup methods (MVP/GMVP), the automatic editing system for the MCNP library (autonj), the neutron damage calculation codes for materials irradiations and for compounds (SPECTER/SPECOMP), the severe accident analysis code MELCOR and the COolant Boiling in Rod Arrays, Two-Fluid code (COBRA-TF) to the VPP500 system and/or the AP3000 system. (author)

  5. A SPECT reconstruction method for extending parallel to non-parallel geometries

    International Nuclear Information System (INIS)

    Wen Junhai; Liang Zhengrong

    2010-01-01

Due to its simplicity, parallel-beam geometry is usually assumed for the development of image reconstruction algorithms, and the established reconstruction methodologies are then extended to fan-beam, cone-beam and other non-parallel geometries for practical application. This situation occurs in quantitative SPECT (single photon emission computed tomography) imaging when inverting the attenuated Radon transform. Novikov reported an explicit parallel-beam formula for the inversion of the attenuated Radon transform in 2000. Thereafter, a formula for fan-beam geometry was reported by Bukhgeim and Kazantsev (2002 Preprint N. 99, Sobolev Institute of Mathematics). At the same time, we presented a formula for varying focal-length fan-beam geometry. In some non-parallel geometries, however, the reconstruction formula is so implicit that no explicit formula can be obtained. In this work, we propose a unified reconstruction framework for extending parallel-beam geometry to any non-parallel geometry using ray-driven techniques. Computer simulation studies demonstrated the accuracy of the presented unified reconstruction framework for extending parallel-beam to non-parallel geometries in inverting the attenuated Radon transform.
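The ray-driven idea can be sketched for the simplest case, an unattenuated parallel-beam forward projection: march each detector's ray through the grid in small steps and sum interpolated image values. This is a hedged toy version; the paper applies the same ray-driven machinery to the attenuated transform and to non-parallel geometries.

```python
import numpy as np

def parallel_beam_projection(image, angle, n_det=None):
    """Ray-driven parallel-beam forward projection of a square 2-D image:
    a simple nearest-neighbour line-integral model (illustrative only)."""
    n = image.shape[0]
    n_det = n_det or n
    c, s = np.cos(angle), np.sin(angle)
    centre = (n - 1) / 2.0
    ts = np.linspace(-centre, centre, 2 * n)      # sample points along each ray
    dets = np.linspace(-centre, centre, n_det)    # detector offsets
    step = ts[1] - ts[0]
    proj = np.zeros(n_det)
    for k, d in enumerate(dets):
        x = centre + d * c - ts * s               # ray direction is perpendicular
        y = centre + d * s + ts * c               # to the detector axis
        xi, yi = np.rint(x).astype(int), np.rint(y).astype(int)
        ok = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
        proj[k] = image[yi[ok], xi[ok]].sum() * step
    return proj
```

Extending to a non-parallel geometry amounts to changing how (x, y) is parameterized per detector; extending to the attenuated transform adds a weighting accumulated along the same marched samples.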

  6. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications.

    Directory of Open Access Journals (Sweden)

    Md Selim Hossain

In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), the key operation of an elliptic curve cryptography processor. Point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for the group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over binary fields recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports Koblitz and random curves for key sizes of 233 and 163 bits. For the group operations, the finite-field arithmetic operations, e.g. multiplication, are designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs in a Xilinx Virtex-7 FPGA for Koblitz and random curves, respectively, and 0.81 μs in ASIC 65-nm technology, the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, taking around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance ([Formula: see text]) and Area × Time × Energy (ATE) product of the proposed design are far better than those of the most significant studies found in the literature.
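The scan over the key bits that any ECPM hardware accelerates can be illustrated with a software double-and-add sketch. This is a hedged toy over a small prime-field curve in affine coordinates; the paper's hardware instead works over binary fields in Jacobian projective coordinates with a combined PDPA unit, but the bit-by-bit structure is the same.

```python
def ec_point_multiply(k, P, a, p):
    """Left-to-right double-and-add on y^2 = x^3 + a x + b (mod p), affine
    coordinates; None represents the point at infinity. Illustrative only."""
    def add(P, Q):
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                            # P + (-P) = infinity
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
        x3 = (lam * lam - x1 - x2) % p
        return x3, (lam * (x1 - x3) - y1) % p
    R = None
    for bit in bin(k)[2:]:                         # scan key bits msb-first
        R = add(R, R)                              # point doubling
        if bit == "1":
            R = add(R, P)                          # point addition
    return R
```

A combined doubling-and-addition datapath, as in the PDPA unit, merges the two `add` calls of each iteration to shorten the critical path.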

  7. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  8. Parallel implementation of multireference coupled-cluster theories based on the reference-level parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Brabec, Jiri; Pittner, Jiri; van Dam, Hubertus JJ; Apra, Edoardo; Kowalski, Karol

    2012-02-01

    A novel algorithm for implementing a general type of multireference coupled-cluster (MRCC) theory based on the Jeziorski-Monkhorst exponential Ansatz [B. Jeziorski, H.J. Monkhorst, Phys. Rev. A 24, 1668 (1981)] is introduced. The proposed algorithm utilizes processor groups to calculate the equations for the MRCC amplitudes. In the basic formulation, each processor group constructs the equations related to a specific subset of references. By a flexible choice of processor groups and of the subset of reference-specific sufficiency conditions designated to a given group, one can assure optimum utilization of available computing resources. The performance of this algorithm is illustrated on the examples of the Brillouin-Wigner and Mukherjee MRCC methods with singles and doubles (BW-MRCCSD and Mk-MRCCSD). A significant improvement in scalability and in reduction of time to solution is reported with respect to a recently reported parallel implementation of the BW-MRCCSD formalism [J. Brabec, H.J.J. van Dam, K. Kowalski, J. Pittner, Chem. Phys. Lett. 514, 347 (2011)].

  9. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    Science.gov (United States)

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
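
For context, the per-arm sample size of the fixed parallel-group comparator can be approximated with the standard normal-approximation formula n = 2(z_{1-α/2} + z_{1-β})²/δ², with the overall effect in a mixed population being a prevalence-weighted average of the subgroup effects. The helper names below are hypothetical; this is not the paper's simulation machinery.

```python
import math
from statistics import NormalDist

def n_per_arm(delta, alpha=0.05, power=0.80):
    """Per-arm sample size for a fixed parallel-group trial comparing two
    means with standardized effect size delta (two-sided z-test)."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)      # critical value for the type I error
    zb = z.inv_cdf(power)              # quantile for the desired power
    return math.ceil(2 * (za + zb) ** 2 / delta ** 2)

def overall_delta(delta_pos, delta_neg, prevalence):
    """Overall effect in a population mixing marker-positive patients
    (prevalence) with marker-negative patients (1 - prevalence)."""
    return prevalence * delta_pos + (1 - prevalence) * delta_neg
```

For example, detecting a standardized effect of 0.5 with 80% power at two-sided α = 0.05 requires 63 patients per arm under this approximation; diluting a subgroup effect through low prevalence drives the required size up quadratically.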

  10. Large-scale parallel configuration interaction. II. Two- and four-component double-group general active space implementation with application to BiH

    DEFF Research Database (Denmark)

    Knecht, Stefan; Jensen, Hans Jørgen Aagaard; Fleig, Timo

    2010-01-01

    We present a parallel implementation of a large-scale relativistic double-group configuration interaction (CI) program. It is applicable with a large variety of two- and four-component Hamiltonians. The parallel algorithm is based on a distributed data model in combination with a static load balanci...

  11. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  12. Parallelization of a spherical Sn transport theory algorithm

    International Nuclear Information System (INIS)

    Haghighat, A.

    1989-01-01

    The work described in this paper derives a parallel algorithm for an R-dependent spherical S N transport theory algorithm and studies its performance by testing different sample problems. The S N transport method is one of the most accurate techniques used to solve the linear Boltzmann equation. Several studies have been done on the vectorization of the S N algorithms; however, very few studies have been performed on the parallelization of this algorithm. Weinke and Hommoto have looked at the parallel processing of the different energy groups, and Azmy recently studied the parallel processing of the inner iterations of an X-Y S N nodal transport theory method. Both studies have reported very encouraging results, which have prompted us to look at the parallel processing of an R-dependent S N spherical geometry algorithm. This geometry was chosen because, in spite of its simplicity, it contains the complications of the curvilinear geometries (i.e., redistribution of neutrons over the discretized angular bins)

  13. Solution of the within-group multidimensional discrete ordinates transport equations on massively parallel architectures

    Science.gov (United States)

    Zerr, Robert Joseph

    2011-12-01

    The integral transport matrix method (ITMM) has been used as the kernel of new parallel solution methods for the discrete ordinates approximation of the within-group neutron transport equation. The ITMM abandons the repetitive mesh sweeps of the traditional source iteration (SI) scheme in favor of constructing stored operators that account for the direct coupling factors among all the cells and between the cells and boundary surfaces. The main goals of this work were to develop the algorithms that construct these operators and employ them in the solution process, determine the most suitable way to parallelize the entire procedure, and evaluate the behavior and performance of the developed methods for increasing numbers of processes. This project compares the effectiveness of the ITMM with the SI scheme parallelized with the Koch-Baker-Alcouffe (KBA) method. The primary parallel solution method involves a decomposition of the domain into smaller spatial sub-domains, each with its own transport matrices, coupled together via interface boundary angular fluxes. Each sub-domain has its own set of ITMM operators and represents an independent transport problem. Multiple iterative parallel solution methods have been investigated, including parallel block Jacobi (PBJ), parallel red/black Gauss-Seidel (PGS), and parallel GMRES (PGMRES). The fastest observed parallel solution method, PGS, was used in a weak scaling comparison with the PARTISN code. Compared to the state-of-the-art SI-KBA with diffusion synthetic acceleration (DSA), this new method without acceleration/preconditioning is not competitive for any problem parameters considered. The best comparisons occur for problems that are difficult for SI DSA, namely highly scattering and optically thick. SI DSA execution time curves are generally steeper than the PGS ones. However, until further testing is performed it cannot be concluded that SI DSA does not outperform the ITMM with PGS even on several thousand or tens of

  14. Mandibular advancement appliance for obstructive sleep apnoea: results of a randomised placebo controlled trial using parallel group design

    DEFF Research Database (Denmark)

    Petri, N.; Svanholt, P.; Solow, B.

    2008-01-01

    The aim of this trial was to evaluate the efficacy of a mandibular advancement appliance (MAA) for obstructive sleep apnoea (OSA). Ninety-three patients with OSA and a mean apnoea-hypopnoea index (AHI) of 34.7 were centrally randomised into three parallel groups: (a) MAA; (b) mandibular non......). Eighty-one patients (87%) completed the trial. The MAA group achieved mean AHI and Epworth scores significantly lower (P group and the no-intervention group. No significant differences were found between the MNA group and the no-intervention group. The MAA group had...

  15. High performance computing of density matrix renormalization group method for 2-dimensional model. Parallelization strategy toward peta computing

    International Nuclear Information System (INIS)

    Yamada, Susumu; Igarashi, Ryo; Machida, Masahiko; Imamura, Toshiyuki; Okumura, Masahiko; Onishi, Hiroaki

    2010-01-01

    We parallelize the density matrix renormalization group (DMRG) method, which is a ground-state solver for one-dimensional quantum lattice systems. The parallelization allows us to extend the applicable range of the DMRG to n-leg ladders, i.e., quasi two-dimensional cases. Such an extension is expected to bring about several breakthroughs in, e.g., quantum physics, chemistry, and nano-engineering. However, the straightforward parallelization requires all-to-all communication among all processes, which is unsuitable for multi-core systems, the mainstream of current parallel computers. Therefore, we optimize the all-to-all communication in two steps. The first is the elimination of the communication between all processes by merely rearranging the data distribution while keeping the communication data volume unchanged. The second is the avoidance of communication conflicts by rescheduling the calculation and the communication. We evaluate the performance of the DMRG method on multi-core supercomputers and confirm that our two-step tuning is quite effective. (author)

  16. Performance of a fine-grained parallel model for multi-group nodal-transport calculations in three-dimensional pin-by-pin reactor geometry

    International Nuclear Information System (INIS)

    Masahiro, Tatsumi; Akio, Yamamoto

    2003-01-01

    A production code, SCOPE2, was developed based on a fine-grained parallel algorithm using the red/black iterative method, targeting parallel computing environments such as a PC cluster. It can perform a depletion calculation in a few hours on a PC cluster with a model based on a 9-group nodal-SP3 transport method in 3-dimensional pin-by-pin geometry for in-core fuel management of commercial PWRs. The present algorithm guarantees a convergence process identical to that of serial execution, which is very important from the viewpoint of quality management. The fine-mesh geometry is constructed by hierarchical decomposition, with the introduction of an intermediate management layer, a block, which is a quarter piece of a fuel assembly in the radial direction. A combination of a mesh division scheme forcing even meshes on each edge and a latency-hiding communication algorithm made message passing simple and efficient, enhancing parallel performance. Inter-processor communication and parallel I/O access were realized using MPI functions. Parallel performance was measured for depletion calculations by the 9-group nodal-SP3 transport method in 3-dimensional pin-by-pin geometry with 340 x 340 x 26 meshes for full core geometry and 170 x 170 x 26 for quarter core geometry. A PC cluster of 24 Pentium-4 processors connected by Fast Ethernet was used for the performance measurement. Calculations in full core geometry gave better speedups than those in quarter core geometry because of larger granularity. The fine-mesh sweep and feedback calculation parts gave almost perfect scalability since the granularity is large enough, while the 1-group coarse-mesh diffusion acceleration gave only around 80%. The speedup and parallel efficiency for total computation time were 22.6 and 94%, respectively, for the calculation in full core geometry with 24 processors. (authors)
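
The red/black idea that makes this work is that cells of one colour depend only on cells of the other colour, so each half-sweep can be executed fully in parallel while reproducing exactly the same iterates as a serial sweep. A toy 1-D sketch (a hypothetical illustration on a Poisson model problem, not SCOPE2's nodal-SP3 kernel):

```python
def red_black_gauss_seidel(b, iters=200):
    """Red/black Gauss-Seidel for the 1-D Poisson system
    -u[i-1] + 2*u[i] - u[i+1] = b[i], with u = 0 outside the domain.
    Each colour's update reads only the other colour, so the inner loop
    over i is parallelizable without changing the convergence process."""
    n = len(b)
    u = [0.0] * n
    for _ in range(iters):
        for color in (0, 1):                 # 0 = red cells, 1 = black cells
            for i in range(color, n, 2):     # independent updates: parallel-safe
                left = u[i - 1] if i > 0 else 0.0
                right = u[i + 1] if i < n - 1 else 0.0
                u[i] = (b[i] + left + right) / 2.0
    return u
```

Because no two same-coloured cells are adjacent, the result is bit-for-bit identical regardless of how the half-sweep is distributed across processors, which is the quality-management property the abstract emphasizes.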

  17. Oral sumatriptan for migraine in children and adolescents: a randomized, multicenter, placebo-controlled, parallel group study.

    Science.gov (United States)

    Fujita, Mitsue; Sato, Katsuaki; Nishioka, Hiroshi; Sakai, Fumihiko

    2014-04-01

    The objective of this article is to evaluate the efficacy and tolerability of two doses of oral sumatriptan vs placebo in the acute treatment of migraine in children and adolescents. Currently, there is no approved prescription medication in Japan for the treatment of migraine in children and adolescents. This was a multicenter, outpatient, single-attack, double-blind, randomized, placebo-controlled, parallel-group study. Eligible patients were children and adolescents aged 10 to 17 years diagnosed with migraine with or without aura (ICHD-II criteria 1.1 or 1.2) from 17 centers. They were randomized to receive sumatriptan 25 mg, 50 mg or placebo (1:1:2). The primary efficacy endpoint was headache relief by two grades on a five-grade scale at two hours post-dose. A total of 178 patients from 17 centers in Japan were enrolled and randomized to an investigational product in double-blind fashion. Of these, 144 patients self-treated a single migraine attack, and all provided a post-dose efficacy assessment and completed the study. The percentage of patients in the full analysis set (FAS) population who reported pain relief at two hours post-treatment for the primary endpoint was higher in the placebo group than in the pooled sumatriptan group (38.6% vs 31.1%, 95% CI: -23.02 to 8.04, P = 0.345). The percentage of patients in the FAS population who reported pain relief at four hours post-dose was higher in the pooled sumatriptan group (63.5%) than in the placebo group (51.4%) but failed to achieve statistical significance (P = 0.142). At four hours post-dose, the percentages of patients who were pain free or had complete relief of photophobia or phonophobia were numerically higher in the pooled sumatriptan group compared to placebo. Both doses of oral sumatriptan were well tolerated. No adverse events (AEs) were serious or led to study withdrawal. The most common AEs were somnolence in 6% (two patients) in the sumatriptan 25 mg treatment group and chest

  18. Compactness of the automorphism group of a topological parallelism on real projective 3-space: The disconnected case

    OpenAIRE

    Rainer, Löwen

    2017-01-01

    We prove that the automorphism group of a topological parallelism on real projective 3-space is compact. In a preceding article it was proved that at least the connected component of the identity is compact. The present proof does not depend on that earlier result.

  19. Implementation and performance of parallelized elegant

    International Nuclear Information System (INIS)

    Wang, Y.; Borland, M.

    2008-01-01

    The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

  20. EDF Group - Annual Report 2010

    International Nuclear Information System (INIS)

    2011-01-01

    The EDF Group is one of the world's leading energy companies, active in all areas from generation to trading and network management. It has a sound business model, evenly balanced between regulated and deregulated activities. With its first-rate human resources, R and D capability, expertise in engineering and operating generation plants and networks, as well as its energy eco-efficiency offers, the Group delivers competitive solutions that help ensure sustainable economic development and climate protection. The EDF Group is the leader in the French and UK electricity markets and has solid positions in Italy and numerous other European countries, as well as industrial operations in Asia and the United States. Everywhere it operates, the Group is a model of quality public service for the energy sector. This document is EDF Group's annual report for the year 2010. It contains information about Group profile, governance, business, development strategy, sales and marketing, positions in Europe and international activities. The document is made of several reports: the Activity and Sustainable Development Report, the Financial Report, the Management Report, the Report by the Chairman of EDF Board of Directors on corporate governance and internal control procedures, the Milestones report, the 'EDF at a glance' report, and the Sustainable Development Indicators

  1. Cognitive synergy in groups and group-to-individual transfer of decision-making competencies

    NARCIS (Netherlands)

    Curseu, P.L.; Meslec, M.N.; Pluut, Helen; Lucas, G.J.M.

    2015-01-01

    In a field study (148 participants organized in 38 groups) we tested the effect of group synergy and one's position in relation to the collaborative zone of proximal development (CZPD) on the change of individual decision-making competencies. We used two parallel sets of decision tasks reported in

  2. EDF Group - Annual Report 2005

    International Nuclear Information System (INIS)

    2006-01-01

    The EDF Group is a leading player in the European energy industry, present in all areas of the electricity value chain, from generation to trading, and increasingly active in the gas chain in Europe. Leader in the French electricity market, the Group also has solid positions in the United Kingdom, Germany and Italy. In the electricity sector, it has the premier generation fleet and customer portfolio in Europe and operates in strategically targeted areas in the rest of the world. The Group is also the leading network operator in Europe, giving it a sound business model, equally balanced between regulated activities and those open to competition. This document is EDF Group's annual report for the year 2005. It contains information about Group profile, governance, business, development strategy, sales and marketing, positions in Europe and international activities. The document is made of several reports: the Activity and Sustainable Development report, the Financial Report, the Sustainable Development Report, the Sustainable Development Indicators, the Management Report, the Report by the Chairman of EDF Board of Directors on corporate governance and internal control procedures

  3. Gartner Group reports

    CERN Document Server

    Gartner Group. Stamford, CT

    Gartner Group is one of the leading independent providers of research and analysis material for IT professionals. Their reports provide in-depth analysis of dominant trends, companies and products. CERN has obtained a licence making these reports available online to anyone within CERN. The database contains not only current reports, updated monthly, but also some going back over a year.

  4. EDF Group - Annual Report 2009

    International Nuclear Information System (INIS)

    2010-01-01

    The EDF Group is a leading player in the energy industry, active in all areas of the electricity value chain, from generation to trading and network management, with expanding operations in the natural gas chain. It has a sound business model, evenly balanced between regulated and deregulated activities. The EDF Group is the leader in the French and British electricity markets and has solid positions in Germany and Italy and numerous other European countries, as well as industrial operations in Asia and the United States. Everywhere it operates, the EDF Group is a model of quality public service for the energy sector. With first-rate human resources, R and D capability and generation expertise in nuclear, fossil-fired and renewable energies, particularly hydro, together with energy eco-efficiency offers, the EDF Group delivers competitive solutions that help ensure sustainable economic development and climate protection. This document is EDF Group's annual report for the year 2009. It contains information about Group profile, governance, business, development strategy, sales and marketing, positions in Europe and international activities. The document is made of several reports: the Activity and Sustainable Development Report, the Financial Report, the Management Report, the Report by the Chairman of EDF Board of Directors on corporate governance and internal control procedures, the Milestones report, the 'EDF at a glance' report, and the Sustainable Development Indicators

  5. EDF Group - Annual Report 2006

    International Nuclear Information System (INIS)

    2007-01-01

    The EDF Group is a leading player in the European energy industry, present in all areas of the electricity value chain, from generation to trading, and increasingly active in the gas chain in Europe. Leader in the French electricity market, the Group also has solid positions in the United Kingdom, Germany and Italy. In the electricity sector, it has the premier generation fleet and customer portfolio in Europe and operates in strategically targeted areas in the rest of the world. The Group is also the leading network operator in Europe, giving it a sound business model, equally balanced between regulated activities and those open to competition. This document is EDF Group's annual report for the year 2006. It contains information about Group profile, governance, business, development strategy, sales and marketing, positions in Europe and international activities. The document is made of several reports: the Activity and Sustainable Development Report, the Financial Report, the Sustainable Development Report, the Sustainable Development Indicators, and the Report by the Chairman of EDF Board of Directors on corporate governance and internal control procedures

  6. EDF Group - Annual Report 2012

    International Nuclear Information System (INIS)

    2013-01-01

    The EDF Group is one of the world's leading energy companies, active in all areas from generation to trading and network management. It has a sound business model, evenly balanced between regulated and deregulated activities. With its first-rate human resources, R and D capability, expertise in engineering and operating generation plants and networks, as well as its energy eco-efficiency offers, the Group delivers competitive solutions that help ensure sustainable economic development and climate protection. The EDF Group is the leader in the French and UK electricity markets and has solid positions in Italy and numerous other European countries, as well as industrial operations in Asia and the United States. Everywhere it operates, the Group is a model of quality public service for the energy sector. This document is EDF Group's annual report for the year 2012. It contains information about Group profile, governance, business, development strategy, sales and marketing, positions in Europe and international activities. The document is made of several reports: the Activity and Sustainable Development Report, the Financial Report, the 'EDF at a glance' report, and the Sustainable Development Indicators

  7. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of...... in the optimal O(sort_P(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors....

  8. Parallelization of quantum molecular dynamics simulation code

    International Nuclear Information System (INIS)

    Kato, Kaori; Kunugi, Tomoaki; Shibahara, Masahiko; Kotake, Susumu

    1998-02-01

    A quantum molecular dynamics simulation code has been developed for the analysis of the thermalization of photon energies in molecules or materials at the Kansai Research Establishment. The simulation code has been parallelized for both a scalar massively parallel computer (Intel Paragon XP/S75) and a vector parallel computer (Fujitsu VPP300/12). Scalable speed-up has been obtained on both parallel computers by distributing particle groups across processor units. By distributing work to processor units not only by particle group but also by the finer-grained per-particle calculations, high parallelization performance is achieved on the Intel Paragon XP/S75. (author)

  9. Broadcasting a message in a parallel computer

    Science.gov (United States)

    Berg, Jeremy E [Rochester, MN; Faraj, Ahmad A [Rochester, MN

    2011-08-02

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
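
One simple way to obtain a Hamiltonian path in a 2-D mesh is boustrophedon ("snake") ordering, under which consecutive nodes are always mesh neighbours. The sketch below is a hypothetical illustration of the broadcast idea (message forwarded hop by hop from the logical root towards both ends of the path), not the patented method.

```python
def snake_path(rows, cols):
    """A Hamiltonian path through a rows x cols mesh: traverse each row,
    alternating direction, so consecutive path entries are mesh neighbours."""
    path = []
    for r in range(rows):
        line = [(r, c) for c in range(cols)]
        path.extend(line if r % 2 == 0 else reversed(line))
    return path

def broadcast(rows, cols, root, message):
    """Pass the message hop by hop along the Hamiltonian path, in both
    directions from the logical root; returns {node: message} for every
    node that received it."""
    path = snake_path(rows, cols)
    i = path.index(root)
    received = {root: message}
    for j in range(i + 1, len(path)):       # forward pipeline from the root
        received[path[j]] = message
    for j in range(i - 1, -1, -1):          # backward pipeline from the root
        received[path[j]] = message
    return received
```

Because every hop is a single mesh link, the broadcast uses only nearest-neighbour communication, which is the point of routing a collective along a Hamiltonian path in a point-to-point network.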

  10. A parallel algorithm for solving the multidimensional within-group discrete ordinates equations with spatial domain decomposition - 104

    International Nuclear Information System (INIS)

    Zerr, R.J.; Azmy, Y.Y.

    2010-01-01

    A spatial domain decomposition with a parallel block Jacobi solution algorithm has been developed based on the integral transport matrix formulation of the discrete ordinates approximation for solving the within-group transport equation. The new methodology abandons the typical source iteration scheme and solves directly for the fully converged scalar flux. Four matrix operators are constructed based upon the integral form of the discrete ordinates equations. A single differential mesh sweep is performed to construct these operators. The method is parallelized by decomposing the problem domain into several smaller sub-domains, each treated as an independent problem. The scalar flux of each sub-domain is solved exactly given incoming angular flux boundary conditions. Sub-domain boundary conditions are updated iteratively, and convergence is achieved when the scalar flux error in all cells meets a pre-specified convergence criterion. The method has been implemented in a computer code that was then employed for strong scaling studies of the algorithm's parallel performance via a fixed-size problem in tests ranging from one domain up to one cell per sub-domain. Results indicate that the best parallel performance compared to source iterations occurs for optically thick, highly scattering problems, the variety that is most difficult for the traditional SI scheme to solve. Moreover, the minimum execution time occurs when each sub-domain contains a total of four cells. (authors)
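
The parallel block Jacobi idea, solving each sub-domain exactly with coupling to its neighbours lagged from the previous iterate and then synchronizing, has a small linear-algebra analogue. The sketch below is a toy illustration on a 2x2-block system, not the transport code; all names are hypothetical.

```python
def solve2(A, b):
    """Direct solve of a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def block_jacobi(A, b, block=2, iters=50):
    """Block Jacobi: each 2x2 diagonal block (a 'sub-domain') is solved
    exactly, with coupling to other blocks taken from the previous
    iterate -- the analogue of iterating on interface boundary fluxes."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x_new = x[:]
        for s in range(0, n, block):
            rows = range(s, s + block)
            # move lagged off-block coupling to the right-hand side
            rhs = [b[i] - sum(A[i][j] * x[j] for j in range(n)
                              if not (s <= j < s + block)) for i in rows]
            sub = [[A[i][j] for j in range(s, s + block)] for i in rows]
            sol = solve2(sub, rhs)          # exact sub-domain solve
            for k, i in enumerate(rows):
                x_new[i] = sol[k]
        x = x_new   # synchronize: all sub-domains update simultaneously
    return x
```

Convergence of the outer iteration then hinges on the strength of the inter-block coupling, mirroring the abstract's observation that performance depends on problem parameters such as scattering ratio and optical thickness.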

  11. A double blind, randomised, parallel group study on the efficacy and safety of treating acute lateral ankle sprain with oral hydrolytic enzymes

    NARCIS (Netherlands)

    Kerkhoffs, G. M. M. J.; Struijs, P. A. A.; de Wit, C.; Rahlfs, V. W.; Zwipp, H.; van Dijk, C. N.

    2004-01-01

    Objective: To compare the effectiveness and safety of the triple combination Phlogenzym ( rutoside, bromelain, and trypsin) with double combinations, the single substances, and placebo. Design: Multinational, multicentre, double blind, randomised, parallel group design with eight groups structured

  12. Differences Between Distributed and Parallel Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brightwell, R.; Maccabe, A.B.; Rissen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  13. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (vectorization). Progress report fiscal 1996

    Energy Technology Data Exchange (ETDEWEB)

Nemoto, Toshiyuki; Kawai, Wataru [Fujitsu Ltd., Tokyo (Japan)]; Kawasaki, Nobuo [and others]

    1997-12-01

Several computer codes in the nuclear field have been vectorized, parallelized and ported on the FUJITSU VPP500 system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. The results are reported in three parts: vectorization, parallelization and porting. This report covers the vectorization part: the vectorization of the two- and three-dimensional discrete ordinates simulation code DORT-TORT, the gas dynamics analysis code FLOWGR and the relativistic Boltzmann-Uehling-Uhlenbeck simulation code RBUU. The parallelization part describes the two-dimensional relativistic electromagnetic particle code EM2D, the cylindrical direct numerical simulation code CYLDNS and DGR, a molecular dynamics code for simulating radiation damage in diamond crystals. The porting part describes the reactor safety analysis codes RELAP5/MOD3.2 and RELAP5/MOD3.2.1.2, the nuclear data processing system NJOY and the 2-D multigroup discrete ordinates transport code TWOTRAN-II, together with a survey on porting the command-driven interactive data analysis plotting program IPLOT. (author)

  14. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (porting). Progress report fiscal 1996

    Energy Technology Data Exchange (ETDEWEB)

Nemoto, Toshiyuki [Fujitsu Ltd., Tokyo (Japan)]; Kawasaki, Nobuo; Tanabe, Hidenobu [and others]

    1998-01-01

Several computer codes in the nuclear field have been vectorized, parallelized and ported on the FUJITSU VPP500 system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. The results are reported in three parts: vectorization, parallelization and porting. This report covers the porting part: the porting of the reactor safety analysis codes RELAP5/MOD3.2 and RELAP5/MOD3.2.1.2, the nuclear data processing system NJOY and the 2-D multigroup discrete ordinates transport code TWOTRAN-II, together with a survey on porting the command-driven interactive data analysis plotting program IPLOT. The parallelization part describes the two-dimensional relativistic electromagnetic particle code EM2D, the cylindrical direct numerical simulation code CYLDNS and DGR, a molecular dynamics code for simulating radiation damage in diamond crystals. The vectorization part describes the two- and three-dimensional discrete ordinates simulation code DORT-TORT, the gas dynamics analysis code FLOWGR and the relativistic Boltzmann-Uehling-Uhlenbeck simulation code RBUU. (author)

  15. Parallel auto-correlative statistics with VTK.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

This report summarizes the existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
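As an illustration of what an auto-correlative statistics engine computes, here is a minimal sketch of the lag-k sample autocorrelation. The actual engine is a C++/VTK class; the function name and the test signal below are invented for this example:

```python
# Minimal sketch of the auto-correlative statistic: the lag-k sample
# autocorrelation of a series, using the 1/n (biased) normalization.

def autocorrelation(series, lag):
    """Sample autocorrelation of `series` at the given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series) / n
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(n - lag)) / n
    return cov / var

# A lag-4-periodic signal correlates strongly with itself at lag 4.
signal = [0.0, 1.0, 0.0, -1.0] * 50
print(round(autocorrelation(signal, 4), 3))
```

With the biased 1/n normalization the lag-4 value comes out just below 1 (0.98 for this 200-sample signal) because the covariance sum has fewer terms than n.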

  16. EDF group - annual report 2003

    International Nuclear Information System (INIS)

    2004-01-01

    This document contains the magazine, the financial statements and the sustainable development report of Electricite de France (EdF) group for 2003: 1 - the magazine (chairman's statement, group profile, vision and strategy); 2 - the consolidated financial statements for the period ended 31 December 2003 (statutory auditors' report on the consolidated financial statements, EDF's summary annual financial statements); 3 - sustainable development report (transparency and dialogue, responsibility, commitment, partnerships for progress). (J.S.)

  17. EDF Group - Annual Report 2013

    International Nuclear Information System (INIS)

    2014-01-01

The EDF Group is emerging as a global leader in electricity and an industrial benchmark spanning the entire business, from generation and networks to sales and marketing. The group is growing stronger and changing. A long-term vision and a relentless determination to provide a modern public service underpin its robust business model. This document is EDF Group's annual report for the year 2013. It contains information about Group profile, governance, business, development strategy, sales and marketing, positions in Europe and international activities. The document comprises the Activity Report and the Sustainable Development Indicators.

  18. Pharmacokinetics of serelaxin in patients with hepatic impairment: a single-dose, open-label, parallel group study.

    Science.gov (United States)

    Kobalava, Zhanna; Villevalde, Svetlana; Kotovskaya, Yulia; Hinrichsen, Holger; Petersen-Sylla, Marc; Zaehringer, Andreas; Pang, Yinuo; Rajman, Iris; Canadi, Jasna; Dahlke, Marion; Lloyd, Peter; Halabi, Atef

    2015-06-01

Serelaxin is a recombinant form of human relaxin-2 in development for treatment of acute heart failure. This study aimed to evaluate the pharmacokinetics (PK) of serelaxin in patients with hepatic impairment. Secondary objectives included evaluation of immunogenicity, safety and tolerability of serelaxin. This was an open-label, parallel group study (NCT01433458) comparing the PK of serelaxin following a single 24 h intravenous (i.v.) infusion (30 μg kg⁻¹ day⁻¹) between patients with mild, moderate or severe hepatic impairment (Child-Pugh class A, B, C) and healthy matched controls. Blood sampling and standard safety assessments were conducted. Primary non-compartmental PK parameters [including the area under the serum concentration-time curve, AUC(0-48 h) and AUC(0-∞), and the serum concentration at 24 h post-dose (C24h)] were compared between each hepatic impairment group and healthy controls. A total of 49 subjects (including 25 patients with hepatic impairment) were enrolled, of whom 48 completed the study. In all groups, the serum concentration of serelaxin increased over the first few hours of infusion, reached steady state at 12-24 h and then declined following completion of infusion, with a mean terminal half-life of 7-8 h. All PK parameter estimates were comparable between each group of patients with hepatic impairment and healthy controls. No serious adverse events, discontinuations due to adverse events or deaths were reported. No serelaxin treatment-related antibodies developed during this study. The PK and safety profile of serelaxin were not affected by hepatic impairment. No dose adjustment is needed for the 48 h i.v. infusion of serelaxin in patients with hepatic impairment. © 2014 The British Pharmacological Society.
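For readers unfamiliar with non-compartmental PK parameters such as AUC(0-48 h), a minimal sketch of a trapezoidal-rule AUC computation is shown below. The concentration-time values are invented for illustration and are not data from the study:

```python
# Sketch of a non-compartmental AUC computation by the linear
# trapezoidal rule, the kind of PK parameter compared in the study.
# The sample times and concentrations below are invented.

def auc_trapezoid(times, concentrations):
    """Area under the concentration-time curve by linear trapezoids."""
    return sum((times[i + 1] - times[i])
               * (concentrations[i] + concentrations[i + 1]) / 2.0
               for i in range(len(times) - 1))

times = [0, 6, 12, 24, 30, 48]              # hours
conc = [0.0, 40.0, 55.0, 60.0, 25.0, 5.0]   # ng/mL, invented values
print(round(auc_trapezoid(times, conc), 1))
```

The rule simply sums the trapezoid areas between successive sampling times; for these invented values the AUC(0-48 h) is 1620.0 ng·h/mL.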

  19. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (vectorization). Progress report fiscal 1997

    International Nuclear Information System (INIS)

    Kawasaki, Nobuo; Ogasawara, Shinobu; Adachi, Masaaki; Kume, Etsuo; Ishizuki, Shigeru; Tanabe, Hidenobu; Nemoto, Toshiyuki; Kawai, Wataru; Watanabe, Hideo

    1999-05-01

Several computer codes in the nuclear field have been vectorized, parallelized and ported on the FUJITSU VPP500 system and/or the AP3000 system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. We dealt with 14 codes in fiscal 1997. The results are reported in three parts: vectorization, parallelization and porting. This report covers the vectorization part: the vectorization of ACE-3D, a multidimensional two-fluid model code for the evaluation of constitutive equations, the statistical decay code SD, and SSPHEAT, a three-dimensional thermal analysis code for the in-core test section (T2) of HENDEL. The parallelization part describes the cylindrical direct numerical simulation code CYLDNS44N, WSPEEDI, the worldwide version of the system for prediction of environmental emergency dose information, the extension of the quantum molecular dynamics code EQMD, and the three-dimensional non-steady compressible fluid dynamics code STREAM. The porting part describes the porting of the transient reactor analysis code TRAC-BF1 and the Monte Carlo radiation transport code MCNP4A on the AP3000, as well as a modification of the program libraries for the command-driven interactive data analysis plotting program IPLOT. (author)

  20. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (porting). Progress report fiscal 1997

    International Nuclear Information System (INIS)

    Ishizuki, Shigeru; Nemoto, Toshiyuki; Kawai, Wataru; Watanabe, Hideo; Tanabe, Hidenobu; Kawasaki, Nobuo; Adachi, Masaaki; Ogasawara, Shinobu; Kume, Etsuo

    1999-05-01

Several computer codes in the nuclear field have been vectorized, parallelized and ported on the FUJITSU VPP500 system and/or the AP3000 system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. We dealt with 14 codes in fiscal 1997. The results are reported in three parts: vectorization, parallelization and porting. This report covers the porting part: the porting of the transient reactor analysis code TRAC-BF1 and the Monte Carlo radiation transport code MCNP4A on the AP3000, as well as a modification of the program libraries for the command-driven interactive data analysis plotting program IPLOT. The vectorization part describes ACE-3D, a multidimensional two-fluid model code for the evaluation of constitutive equations, the statistical decay code SD, and SSPHEAT, a three-dimensional thermal analysis code for the in-core test section (T2) of HENDEL. The parallelization part describes the cylindrical direct numerical simulation code CYLDNS44N, WSPEEDI, the worldwide version of the system for prediction of environmental emergency dose information, the extension of the quantum molecular dynamics code EQMD, and the three-dimensional non-steady compressible fluid dynamics code STREAM. (author)

  1. EDF group - annual report 2003; Groupe EDF - rapport annuel 2003

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    This document contains the magazine, the financial statements and the sustainable development report of Electricite de France (EdF) group for 2003: 1 - the magazine (chairman's statement, group profile, vision and strategy); 2 - the consolidated financial statements for the period ended 31 December 2003 (statutory auditors' report on the consolidated financial statements, EDF's summary annual financial statements); 3 - sustainable development report (transparency and dialogue, responsibility, commitment, partnerships for progress). (J.S.)

  2. EDF Group - Annual Report 2016

    International Nuclear Information System (INIS)

    2017-01-01

EDF group is the world's leading electricity company and a global leader in low-carbon energy production. Particularly well established in Europe, especially France, the United Kingdom, Italy and Belgium, as well as North and South America, the Group covers all businesses spanning the electricity value chain - from generation to distribution and including energy transmission and trading activities - to continuously balance supply with demand. A marked increase in the use of renewables is bringing change to its electricity generation operations, which are underpinned by a diversified and complementary energy mix founded on nuclear power capacity. EDF offers products and advice to help residential customers manage their electricity consumption, to support the energy and financial performance of its business customers, and to help local authorities find sustainable solutions. This document is EDF Group's annual report for the year 2016. It contains information about Group profile, governance, business, development strategy, sales and marketing, positions in Europe and international activities. The document comprises the Group's activities and performances report and the 'EDF at a glance' 2017 report.

  3. Parallelization of TMVA Machine Learning Algorithms

    CERN Document Server

    Hajili, Mammad

    2017-01-01

This report reflects my work on the parallelization of TMVA machine learning algorithms integrated into the ROOT data analysis framework during a summer internship at CERN. The report consists of four important parts: the data set used in training and validation, the algorithms to which multiprocessing was applied, the parallelization techniques, and the resulting changes in execution time as the number of workers varies.

  4. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

This paper reports the development of a new approach towards analytical liquid-liquid-liquid membrane extraction, termed parallel artificial liquid membrane extraction. A donor plate and an acceptor plate create a sandwich in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  5. Parallel processing of neutron transport in fuel assembly calculation

    International Nuclear Information System (INIS)

    Song, Jae Seung

    1992-02-01

Group constants, which are used for reactor analyses by the nodal method, are generated by fuel assembly calculations based on neutron transport theory, since one fuel assembly, or a quarter of one, corresponds to a unit mesh in current nodal calculations. The group constant calculation for a fuel assembly is performed through spectrum calculations, a two-dimensional fuel assembly calculation, and depletion calculations. The purpose of this study is to develop a parallel algorithm, to be used on a parallel processor, for the fuel assembly calculation and the depletion calculations of the group constant generation. A serial program, which solves the neutron integral transport equation using the transmission probability method and the linear depletion equation, was prepared and verified by a benchmark calculation. Small changes to the serial program were enough to parallelize the depletion calculation, which has inherent parallel characteristics. In the fuel assembly calculation, however, efficient parallelization is not simple, because of the many coupling parameters in the calculation and the data communication among CPUs. In this study, the group distribution method is introduced for the parallel processing of the fuel assembly calculation to minimize data communication. The parallel processing was performed on a Quadputer with 4 CPUs operating in the NURAD Lab. at KAIST. Efficiencies of 54.3% and 78.0% were obtained in the fuel assembly calculation and the depletion calculation, respectively, leading to an overall speedup of about 2.5. As a result, it is concluded that the computing time consumed for group constant generation can be easily reduced by parallel processing on a parallel computer with a small number of CPUs.
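The relation between the reported per-phase efficiencies and speedups follows from the standard definitions (speedup = serial time / parallel time, efficiency = speedup / number of CPUs). A quick check, assuming the 4-CPU configuration stated above:

```python
# Parallel speedup and efficiency as used in the abstract above:
#   speedup    = T_serial / T_parallel
#   efficiency = speedup / n_cpus
# With 4 CPUs, the reported efficiencies imply these per-phase speedups.

def speedup_from_efficiency(efficiency, n_cpus):
    return efficiency * n_cpus

n_cpus = 4
assembly_speedup = speedup_from_efficiency(0.543, n_cpus)   # fuel assembly calc
depletion_speedup = speedup_from_efficiency(0.780, n_cpus)  # depletion calc

print(round(assembly_speedup, 2), round(depletion_speedup, 2))
```

The implied per-phase speedups of about 2.17 and 3.12 bracket the reported overall speedup of about 2.5, as expected for a run that mixes both phases.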

  6. CONSORT 2010 Explanation and Elaboration: Updated guidelines for reporting parallel group randomised trials

    DEFF Research Database (Denmark)

    Moher, David; Hopewell, Sally; Schulz, Kenneth F

    2010-01-01

The CONSORT 2010 checklist improves the wording and clarity of the previous checklist and incorporates recommendations related to topics that have only recently received recognition, such as selective outcome reporting bias. This explanation and elaboration document, intended to enhance the use, understanding, and dissemination of the CONSORT statement, has also been extensively revised. It presents the meaning and rationale for each new and updated checklist item, providing examples of good reporting and, where possible, references to relevant empirical studies. Several examples of flow diagrams are included.

  8. High Performance Parallel Processing Project: Industrial computing initiative. Progress reports for fiscal year 1995

    Energy Technology Data Exchange (ETDEWEB)

    Koniges, A.

    1996-02-09

This project is a package of 11 individual CRADAs plus hardware. This innovative project established a three-year multi-party collaboration that is significantly accelerating the availability of commercial massively parallel processing computing software technology to U.S. government, academic, and industrial end-users. This report contains individual presentations from nine principal investigators along with overall program information.

  9. Double-blind, parallel-group evaluation of etodolac and naproxen in patients with acute sports injuries.

    Science.gov (United States)

    D'Hooghe, M

    1992-01-01

The efficacy and safety of etodolac and naproxen were compared in a double-blind, randomized, parallel-group outpatient study. Patients with acute sports injuries were assigned to receive either etodolac 300 mg TID (50 patients) or naproxen 500 mg BID (49 patients) for up to 7 days. Assessments were made at the pretreatment screening (baseline) and at days 2, 3, 4, and 7 of treatment. Assessments included patient and physician global evaluations, spontaneous and induced pain intensity, range of motion, tenderness, heat, degree of swelling, and degree of erythema. Safety assessments, including laboratory profiles, were made at pretreatment and at final evaluation; patients' complaints were elicited at all visits. Both treatment groups showed significant (P ≤ 0.05) improvement from baseline for all efficacy parameters by day 2 and thereafter at all time points. Improvement was similar for the two groups. No patients in either group withdrew from the study because of drug-related adverse reactions. The results of this study indicate that etodolac (900 mg/day) is effective and well tolerated as an analgesic and anti-inflammatory in acute sports injuries and is comparable to naproxen (1000 mg/day).

  10. EDF Group - Annual Report 2015

    International Nuclear Information System (INIS)

    2016-01-01

EDF Group is the world's leading electricity company and it is particularly well established in Europe, especially France, the United Kingdom, Italy and Belgium. Its business covers all electricity-related activities, from generation to distribution and including energy transmission and trading activities to continuously balance supply with demand. A marked increase in the use of renewables is bringing change to its power generation operations, which are underpinned by a diversified low-carbon energy mix founded on nuclear power capacity. With activities across the entire electricity value chain, EDF is reinventing the products and services it offers to help residential customers manage their electricity consumption, to support the energy and financial performance of business customers and to support local authorities in finding sustainable solutions for the cities of the future. This document is EDF Group's annual report for the year 2015. It contains information about Group profile, governance, business, development strategy, sales and marketing, positions in Europe and international activities. The document comprises several reports: the 2016 Book, the '2016 at a glance' report, the Profile and Performance 2015 report, and the 2015 Reference Document - Annual Financial Report.

  11. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  12. The specificity of learned parallelism in dual-memory retrieval.

    Science.gov (United States)

    Strobach, Tilo; Schubert, Torsten; Pashler, Harold; Rickard, Timothy

    2014-05-01

    Retrieval of two responses from one visually presented cue occurs sequentially at the outset of dual-retrieval practice. Exclusively for subjects who adopt a mode of grouping (i.e., synchronizing) their response execution, however, reaction times after dual-retrieval practice indicate a shift to learned retrieval parallelism (e.g., Nino & Rickard, in Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 373-388, 2003). In the present study, we investigated how this learned parallelism is achieved and why it appears to occur only for subjects who group their responses. Two main accounts were considered: a task-level versus a cue-level account. The task-level account assumes that learned retrieval parallelism occurs at the level of the task as a whole and is not limited to practiced cues. Grouping response execution may thus promote a general shift to parallel retrieval following practice. The cue-level account states that learned retrieval parallelism is specific to practiced cues. This type of parallelism may result from cue-specific response chunking that occurs uniquely as a consequence of grouped response execution. The results of two experiments favored the second account and were best interpreted in terms of a structural bottleneck model.

  13. Parallelization for first principles electronic state calculation program

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Oguchi, Tamio.

    1997-03-01

In this report we study the parallelization of a first-principles electronic state calculation program. The target machines are the NEC SX-4 for shared-memory parallelization and the FUJITSU VPP300 for distributed-memory parallelization. The features of each parallel machine are surveyed, and parallelization methods suitable for each are proposed. It is shown that a 1.60-fold speedup is achieved with 2-CPU parallelization on the SX-4 and a 4.97-fold speedup with 12-PE parallelization on the VPP300. (author)

  14. Parallel Algorithms for Groebner-Basis Reduction

    Science.gov (United States)

    1987-09-25

Technical report: Parallel Algorithms for Groebner-Basis Reduction, produced under the project "Productivity Engineering in the UNIX Environment".

  15. Spent Fuel Working Group Report

    International Nuclear Information System (INIS)

    O'Toole, T.

    1993-11-01

The Department of Energy is storing large amounts of spent nuclear fuel and other reactor irradiated nuclear materials (herein referred to as RINM). In the past, the Department reprocessed RINM to recover plutonium, tritium, and other isotopes. However, the Department has ceased or is phasing out reprocessing operations. As a consequence, Department facilities designed, constructed, and operated to store RINM for relatively short periods of time now store RINM pending decisions on the disposition of these materials. The extended use of the facilities, combined with their known degradation and that of their stored materials, has led to uncertainties about safety. To ensure that extended storage is safe (i.e., that protection exists for workers, the public, and the environment), the conditions of these storage facilities had to be assessed. The compelling need for such an assessment led to the Secretary's initiative on spent fuel, which is the subject of this report. This report comprises three volumes: Volume I, Summary Results of the Spent Fuel Working Group Evaluation; Volume II, Working Group Assessment Team Reports and Protocol; and Volume III, Operating Contractor Site Team Reports. This volume presents the overall results of the Working Group's evaluation. The group assessed 66 facilities spread across 11 sites. It identified: (1) facilities that should be considered for priority attention; (2) programmatic issues to be considered in decision making about interim storage plans; and (3) specific vulnerabilities for some of these facilities.

  16. Parallelization characteristics of the DeCART code

    International Nuclear Information System (INIS)

    Cho, J. Y.; Joo, H. G.; Kim, H. Y.; Lee, C. C.; Chang, M. H.; Zee, S. Q.

    2003-12-01

This report describes the parallelization characteristics of the DeCART code and examines its parallel performance. Parallel computing algorithms are implemented in DeCART to reduce the tremendous computational burden and memory requirement involved in the three-dimensional whole-core transport calculation. In the parallelization of the DeCART code, the axial domain decomposition is first realized using MPI (Message Passing Interface), and then the azimuthal angle domain decomposition using either MPI or OpenMP. When MPI is used for both the axial and the angle domain decomposition, the concept of MPI grouping is employed for convenient communication within each communication world. All of the computing modules except for the thermal-hydraulic module are parallelized; these include the MOC ray tracing, CMFD, NEM, region-wise cross section preparation and cell homogenization modules. For distributed allocation, most of the MOC and CMFD/NEM variables are allocated only for the assigned planes, which reduces the required memory by the ratio of the number of assigned planes to the total number of planes. The parallel performance of the DeCART code is evaluated by solving two problems: a rodded variation of the C5G7 MOX three-dimensional benchmark problem and a simplified three-dimensional SMART PWR core problem. The DeCART code shows a good speedup of about 40.1 and 22.4 in the ray tracing module, and about 37.3 and 20.2 in the total computing time, when using 48 CPUs on the IBM Regatta and 24 CPUs on the LINUX cluster, respectively. In the comparison between MPI and OpenMP, OpenMP shows somewhat better performance than MPI. Therefore, it is concluded that the first priority in the parallel computation of the DeCART code is the axial domain decomposition using MPI, followed by the angular domain decomposition using OpenMP.
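As a rough consistency check (not part of the DeCART code), Amdahl's law S(N) = 1 / (s + (1 - s)/N) can be inverted to estimate the serial fraction s implied by the reported total-computing-time speedups:

```python
# Back-of-envelope check of the DeCART scaling numbers with Amdahl's law:
#   S(N) = 1 / (s + (1 - s) / N)
# Given an observed speedup S on N CPUs, the implied serial fraction is
#   s = (N / S - 1) / (N - 1).
# This is an illustrative estimate, not an analysis from the report.

def implied_serial_fraction(speedup, n_cpus):
    return (n_cpus / speedup - 1.0) / (n_cpus - 1.0)

# Reported total-computing-time speedups: 37.3 on 48 CPUs (IBM Regatta),
# 20.2 on 24 CPUs (Linux cluster).
s_regatta = implied_serial_fraction(37.3, 48)
s_cluster = implied_serial_fraction(20.2, 24)
print(round(s_regatta * 100, 2), round(s_cluster * 100, 2))  # percentages
```

Both estimates put the implied serial fraction below 1%, consistent with the good scalability reported.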

  17. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is formidable when processing the huge data stream generated by such recordings. In this study, we report the development of the Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphics processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing, such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox offers a 5× to 110× speedup compared with its CPU counterparts, depending on the algorithm. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing focused only on a few rudimentary algorithms, were not well optimized and often did not provide a user-friendly programming interface that fits into existing workflows, so there was a strong need for a comprehensive toolbox for massively parallel neural signal processing. The new toolbox offers significant speedups in processing signals from large-scale recordings of up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
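The per-sample independence that makes peak detection attractive on a GPU can be shown with a simple local-maximum predicate: every sample can be tested independently, so one GPU thread per sample applies the same test. This serial Python sketch uses an invented threshold rule, not NPE's actual algorithm:

```python
# Sketch of a data-parallel peak (local maximum) test: each sample is
# examined independently of the others, so the loop below maps naturally
# onto one GPU thread per sample. Threshold and window are illustrative.

def detect_peaks(signal, threshold):
    """Indices where the signal exceeds the threshold and both neighbours."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] > signal[i - 1]
            and signal[i] >= signal[i + 1]]

trace = [0.1, 0.2, 3.0, 0.3, 0.1, 0.2, 2.5, 0.4, 0.0]
print(detect_peaks(trace, 1.0))  # indices of the two spikes: [2, 6]
```

On a GPU, the surviving indices would then be gathered with a compact (stream compaction) operation, which is the parallel step the abstract refers to.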

  18. The efficacy of Femal in women with premenstrual syndrome: a randomised, double-blind, parallel-group, placebo-controlled, multicentre study

    DEFF Research Database (Denmark)

    Gerhardsen, G.; Hansen, A.V.; Killi, M.

    2008-01-01

Introduction: A double-blind, placebo-controlled, randomised, parallel-group, multicentre study was conducted to evaluate the effect of a pollen-based herbal medicinal product, Femal® (Sea-Band Ltd, Leicestershire, UK), on premenstrual sleep disturbances (PSD) in women with premenstrual syndrome (PMS). […] as the main symptom cluster makes this herbal medicinal product a promising addition to the therapeutic arsenal for women with PMS. (Publication date: June 2008)

  19. Roflumilast for the treatment of COPD in an Asian population: a randomized, double-blind, parallel-group study.

    Science.gov (United States)

    Zheng, Jinping; Yang, Jinghua; Zhou, Xiangdong; Zhao, Li; Hui, Fuxin; Wang, Haoyan; Bai, Chunxue; Chen, Ping; Li, Huiping; Kang, Jian; Brose, Manja; Richard, Frank; Goehring, Udo-Michael; Zhong, Nanshan

    2014-01-01

    Roflumilast is the only oral phosphodiesterase 4 inhibitor indicated for use in the treatment of COPD. Previous studies of roflumilast have predominantly involved European and North American populations. A large study was necessary to determine the efficacy and safety of roflumilast in a predominantly ethnic Chinese population. In a placebo-controlled, double-blind, parallel-group, multicenter, phase 3 study, patients of Chinese, Malay, and Indian ethnicity (N = 626) with severe to very severe COPD were randomized 1:1 to receive either roflumilast 500 μg once daily or placebo for 24 weeks. The primary end point was change in prebronchodilator FEV1 from baseline to study end. Three hundred thirteen patients were assigned to each treatment. Roflumilast provided a sustained increase over placebo in mean prebronchodilator FEV1 (0.071 L; 95% CI, 0.046, 0.095 L; P < .0001). Similar improvements were observed in the secondary end points of postbronchodilator FEV1 (0.068 L; 95% CI 0.044, 0.092 L; P < .0001) and prebronchodilator and postbronchodilator FVC (0.109 L; 95% CI, 0.061, 0.157 L; P < .0001 and 0.101 L; 95% CI, 0.055, 0.146 L; P < .0001, respectively). The adverse event profile was consistent with previous roflumilast studies. The most frequently reported treatment-related adverse event was diarrhea (6.0% and 1.0% of patients in the roflumilast and placebo groups, respectively). Roflumilast plays an important role in lung function improvement and is well tolerated in an Asian population. It provides an optimal treatment choice for patients with severe to very severe COPD.

  20. Radiation Protection Group Annual Report 2003

    CERN Document Server

    Silari, M

    2004-01-01

    The RP Annual Report summarises the activities carried out by CERN’s Radiation Protection Group in the year 2003. It includes contributions from the EN section of the TIS/IE Group on environmental monitoring. Chapter 1 reports on the measurements and estimations of the impact on the environment and public exposure due to the Organisation’s activities. Chapter 2 provides the results of the monitoring of the occupational exposure of CERN’s staff, users and contractors. Chapter 3 deals with operational radiation protection around the accelerators and in the experimental areas. Chapter 4 reports on RP design studies for the LHC and CNGS projects. Chapter 5 addresses the various services provided by the RP Group to other Groups and Divisions at CERN, which include managing radioactive waste, high-level dosimetry, lending radioactive test sources and shipping radioactive materials. Chapter 6 describes activities other than the routine and service tasks, i.e. development work in the field of instrumentation and res...

  1. Survey on present status and trend of parallel programming environments

    International Nuclear Information System (INIS)

    Takemiya, Hiroshi; Higuchi, Kenji; Honma, Ichiro; Ohta, Hirofumi; Kawasaki, Takuji; Imamura, Toshiyuki; Koide, Hiroshi; Akimoto, Masayuki.

    1997-03-01

    This report is intended to provide useful information on software tools for parallel programming, based on a survey of the parallel programming environments of six parallel computers installed at the Japan Atomic Energy Research Institute (JAERI): the Fujitsu VPP300/500, NEC SX-4, Hitachi SR2201, Cray T94, IBM SP, and Intel Paragon. In addition, the present status of R&D on parallel software (parallel languages, compilers, debuggers, performance evaluation tools, and integrated tools) is reported. This survey was made as part of our project to develop a basic software environment for parallel programming, designed around the concept of STA (Seamless Thinking Aid to programmers). (author)

  2. Hemostatic efficacy of TachoSil in liver resection compared with argon beam coagulator treatment: An open, randomized, prospective, multicenter, parallel-group trial

    DEFF Research Database (Denmark)

    Fischer, Lars; Seiler, Christoph M.; Broelsch, Christoph E.

    2011-01-01

    surgical trial with 2 parallel groups. Patients were eligible for intra-operative randomization after elective resection of ≥1 liver segment and primary hemostasis. The primary end point was the time to hemostasis after starting the randomized intervention to obtain secondary hemostasis. Secondary end...

  3. Coordinating Group report

    International Nuclear Information System (INIS)

    1994-01-01

    In December 1992, western governors and four federal agencies established a Federal Advisory Committee to Develop On-site Innovative Technologies for Environmental Restoration and Waste Management (the DOIT Committee). The purpose of the Committee is to advise the federal government on ways to improve waste cleanup technology development and the cleanup of federal sites in the West. The Committee directed in January 1993 that information be collected from a wide range of potential stakeholders and that innovative technology candidate projects be identified, organized, set in motion, and evaluated to test new partnerships, regulatory approaches, and technologies which will lead to improve site cleanup. Five working groups were organized, one to develop broad project selection and evaluation criteria and four to focus on specific contaminant problems. A Coordinating Group comprised of working group spokesmen and federal and state representatives, was set up to plan and organize the routine functioning of these working groups. The working groups were charged with defining particular contaminant problems; identifying shortcomings in technology development, stakeholder involvement, regulatory review, and commercialization which impede the resolution of these problems; and identifying candidate sites or technologies which could serve as regional innovative demonstration projects to test new approaches to overcome the shortcomings. This report from the Coordinating Group to the DOIT Committee highlights the key findings and opportunities uncovered by these fact-finding working groups. It provides a basis from which recommendations from the DOIT Committee to the federal government can be made. It also includes observations from two public roundtables, one on commercialization and another on regulatory and institutional barriers impeding technology development and cleanup

  4. Performance Analysis of Parallel Mathematical Subroutine library PARCEL

    International Nuclear Information System (INIS)

    Yamada, Susumu; Shimizu, Futoshi; Kobayashi, Kenichi; Kaburaki, Hideo; Kishida, Norio

    2000-01-01

    The parallel mathematical subroutine library PARCEL (Parallel Computing Elements) has been developed by the Japan Atomic Energy Research Institute to make typical parallelized mathematical codes easy to use in application programs on distributed-memory parallel computers. PARCEL includes routines for linear equations, eigenvalue problems, pseudo-random number generation, and fast Fourier transforms. Performance results for the linear-equation routines exhibit good parallelization efficiency on vector, as well as scalar, parallel computers. A comparison of the efficiency results with the PETSc (Portable, Extensible Toolkit for Scientific Computation) library is also reported. (author)
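
    The "parallelization efficiency" reported for the linear-equation routines is conventionally defined as speedup divided by processor count. A minimal sketch with made-up timings (not PARCEL measurements):

```python
def speedup(t_serial, t_parallel):
    """Ratio of serial to parallel wall-clock time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency: speedup divided by the number of processors."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical timings: 100 s serial, 8 s on 16 processors
e = efficiency(100.0, 8.0, 16)  # → 0.78125, i.e. ~78% efficiency
```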

  5. Safety and pharmacokinetics of single and multiple intravenous bolus doses of diclofenac sodium compared with oral diclofenac potassium 50 mg: A randomized, parallel-group, single-center study in healthy subjects.

    Science.gov (United States)

    Munjal, Sagar; Gautam, Anirudh; Okumu, Franklin; McDowell, James; Allenby, Kent

    2016-01-01

    In a randomized, parallel-group, single-center study in 42 healthy adults, the safety and pharmacokinetic parameters of an intravenous formulation of 18.75 and 37.5 mg diclofenac sodium (DFP-08) following single- and multiple-dose bolus administration were compared with diclofenac potassium 50 mg oral tablets. Mean AUC0-inf values for a 50-mg oral tablet and an 18.75-mg intravenous formulation were similar (1308.9 [393.0] vs 1232.4 [147.6]). As measured by the AUC, DFP-08 18.75 mg and 37.5 mg demonstrated dose proportionality for extent of exposure. One subject in each of the placebo and DFP-08 18.75-mg groups and 2 subjects in the DFP-08 37.5-mg group reported adverse events that were considered by the investigator to be related to the study drug. All were mild in intensity and did not require treatment. Two subjects in the placebo group and 1 subject in the DFP-08 18.75-mg group reported grade 1 thrombophlebitis; no subjects reported higher than grade 1 thrombophlebitis after receiving a single intravenous dose. The 18.75- and 37.5-mg doses of intravenous diclofenac (single and multiple) were well tolerated for 7 days. Additional efficacy and safety studies are required to fully characterize the product. © 2015, The American College of Clinical Pharmacology.
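
    Dose proportionality, as assessed through the AUC above, means that exposure scales linearly with dose, so dose-normalized AUCs should be roughly equal across dose levels. A toy check (the AUC values below are hypothetical, not the study data):

```python
def dose_normalized_auc(auc, dose):
    """AUC per unit dose; equal values across doses indicate proportionality."""
    return auc / dose

# Hypothetical AUC values for the two intravenous doses
low = dose_normalized_auc(600.0, 18.75)
high = dose_normalized_auc(1200.0, 37.5)
proportional = abs(low - high) / low < 0.2  # within a 20% tolerance
```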

  6. Radiation Protection Group annual report (1997)

    International Nuclear Information System (INIS)

    Hoefert, M.

    1998-01-01

    The Annual Report of the Radiation Protection Group is intended to inform the Host State Authorities, as well as the CERN Management and staff, about the radiological situation at CERN during the year 1997. The structure of the present report follows that of previous years and has five sections. It presents the results of environmental radiation monitoring, gives information about the radiation control on the sites of the Organization, describes the radiation protection activities around the CERN accelerators, reports on personnel dosimetry, calibration and instrumentation, and briefly comments on the non-routine activities of the Radiation Protection Group

  7. Radiation Protection Group annual report (1996)

    International Nuclear Information System (INIS)

    Hoefert, M.

    1997-01-01

    The Annual Report of the Radiation Protection Group is intended to inform the Host State Authorities, as well as the CERN Management and staff, about the radiological situation at CERN during the year 1996. The structure of the present report follows that of previous years and has five sections. It presents the results of environmental radiation monitoring, gives information about the radiation control on the sites of the Organization, describes the radiation protection activities around the CERN accelerators, reports on personnel dosimetry, calibration and instrumentation, and briefly comments on the non-routine activities of the Radiation Protection Group

  8. Radiation Protection Group annual report (1998)

    International Nuclear Information System (INIS)

    Hoefert, M.

    1999-01-01

    The Annual Report of the Radiation Protection Group is intended to inform the Host State Authorities, as well as the CERN Management and staff, about the radiological situation at CERN during the year 1998. The structure of the present report follows that of previous years and has five sections. It presents the results of environmental radiation monitoring, gives information about the radiation control on the sites of the Organization, describes the radiation protection activities around the CERN accelerators, reports on personnel dosimetry, calibration and instrumentation, and briefly comments on the non-routine activities of the Radiation Protection Group

  9. Radiation Protection Group annual report (1996)

    Energy Technology Data Exchange (ETDEWEB)

    Hoefert, M [ed.

    1997-03-25

    The Annual Report of the Radiation Protection Group is intended to inform the Host State Authorities, as well as the CERN Management and staff, about the radiological situation at CERN during the year 1996. The structure of the present report follows that of previous years and has five sections. It presents the results of environmental radiation monitoring, gives information about the radiation control on the sites of the Organization, describes the radiation protection activities around the CERN accelerators, reports on personnel dosimetry, calibration and instrumentation, and briefly comments on the non-routine activities of the Radiation Protection Group.

  10. Radiation Protection Group annual report (1998)

    Energy Technology Data Exchange (ETDEWEB)

    Hoefert, M [ed.

    1999-04-15

    The Annual Report of the Radiation Protection Group is intended to inform the Host State Authorities, as well as the CERN Management and staff, about the radiological situation at CERN during the year 1998. The structure of the present report follows that of previous years and has five sections. It presents the results of environmental radiation monitoring, gives information about the radiation control on the sites of the Organization, describes the radiation protection activities around the CERN accelerators, reports on personnel dosimetry, calibration and instrumentation, and briefly comments on the non-routine activities of the Radiation Protection Group.

  11. Radiation Protection Group annual report (1997)

    Energy Technology Data Exchange (ETDEWEB)

    Hoefert, M [ed.

    1998-04-10

    The Annual Report of the Radiation Protection Group is intended to inform the Host State Authorities, as well as the CERN Management and staff, about the radiological situation at CERN during the year 1997. The structure of the present report follows that of previous years and has five sections. It presents the results of environmental radiation monitoring, gives information about the radiation control on the sites of the Organization, describes the radiation protection activities around the CERN accelerators, reports on personnel dosimetry, calibration and instrumentation, and briefly comments on the non-routine activities of the Radiation Protection Group.

  12. Radiation Protection Group annual report (1995)

    International Nuclear Information System (INIS)

    Hoefert, M.

    1996-01-01

    The Annual Report of the Radiation Protection Group is intended to inform the Host State Authorities, as well as the CERN Management and staff, about the radiological situation at CERN during the year 1995. The structure of the present report follows that of previous years and has five sections. It presents the results of environmental radiation monitoring, gives information about the radiation control on the sites of the Organization, describes the radiation protection activities around the CERN accelerators, reports on personnel dosimetry, calibration and instrumentation, and briefly comments on the non-routine activities of the Radiation Protection Group

  13. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general-purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed-memory highly parallel computer, and the parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code was parallelized in this study. A technique was applied that maps histories to processors dynamically and maps the control process to a fixed processor. The efficiency of the parallelized code is up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)
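
    The dynamic history-to-processor mapping described above is essentially a work-queue pattern: idle workers pull the next batch of independent histories. A minimal single-node sketch using Python's multiprocessing (a toy pi estimator standing in for particle histories, not MCNP itself):

```python
import random
from multiprocessing import Pool

def run_history(seed):
    """One 'history': a single independent Monte Carlo sample."""
    rng = random.Random(seed)
    x, y = rng.random(), rng.random()
    return 1 if x * x + y * y <= 1.0 else 0

def estimate_pi(n_histories, n_procs=4):
    # The pool hands out chunks of histories dynamically, so faster
    # workers receive more work, mimicking the mapping described above.
    with Pool(n_procs) as pool:
        hits = sum(pool.imap_unordered(run_history, range(n_histories), chunksize=256))
    return 4.0 * hits / n_histories

if __name__ == "__main__":
    print(estimate_pi(100_000))
```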

  14. Randomized, parallel-group, double-blind, controlled study to evaluate the efficacy and safety of carbohydrate-derived fulvic acid in topical treatment of eczema

    Directory of Open Access Journals (Sweden)

    Gandy JJ

    2011-09-01

    Full Text Available Justin J Gandy, Jacques R Snyman, Constance EJ van RensburgDepartment of Pharmacology, Faculty of Health Sciences, University of Pretoria, Pretoria, South AfricaBackground: The purpose of this study was to evaluate the efficacy and safety of carbohydrate-derived fulvic acid (CHD-FA in the treatment of eczema in patients two years and older.Methods: In this single-center, double-blind, placebo-controlled, parallel-group comparative study, 36 volunteers with predetermined eczema were randomly assigned to receive either the study drug or placebo twice daily for four weeks.Results: All safety parameters remained within normal limits, with no significant differences in either group. Significant differences were observed for both severity and erythema in the placebo and CHD-FA treated groups, and a significant difference was observed for scaling in the placebo-treated group. With regard to the investigator assessment of global response to treatment, a significant improvement was observed in the CHD-FA group when compared with the placebo group. A statistically significant decrease in visual analog scale score was observed in both groups, when comparing the baseline with the final results.Conclusion: CHD-FA was well tolerated, with no difference in reported side effects other than a short-lived burning sensation on application. CHD-FA significantly improved some aspects of eczema. Investigator assessment of global response to treatment with CHD-FA was significantly better than that with emollient therapy alone. The results of this small exploratory study suggest that CHD-FA warrants further investigation in the treatment of eczema.Keywords: fulvic acid, eczema, anti-inflammatory, efficacy, safety

  15. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors has come to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks along with Graphical Processing Units have broadly enabled parallelism. Several compilers have been updated to address the emerging challenges of synchronization and threading. Appropriate program and algorithm classification will greatly benefit software engineers seeking opportunities for effective parallelization. In the present work we investigated current species-based classifications of algorithms; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms was chosen whose structures match different issues and perform a given task. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionalities to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants, and mathematical functions. With this, we can retain significant data which is not captured by the original species of algorithms. We implemented these new capabilities into the tool, enabling automatic characterization of program code.

  16. Mathematical Abstraction: Constructing Concept of Parallel Coordinates

    Science.gov (United States)

    Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.

    2017-09-01

    Mathematical abstraction is an important process in teaching and learning mathematics, so pre-service mathematics teachers need to understand and experience this process. One of the theoretical-methodological frameworks for studying this process is Abstraction in Context (AiC). In this framework, the abstraction process comprises the observable epistemic actions Recognition, Building-With, Construction, and Consolidation, known as the RBC + C model. This study investigates and analyzes how pre-service mathematics teachers constructed and consolidated the concept of Parallel Coordinates in a group discussion. It uses the AiC framework to analyze the mathematical abstraction of a group of four pre-service teachers learning Parallel Coordinates concepts. The data were collected through video recording, students’ worksheets, a test, and field notes. The results show that the students’ prior knowledge of the Cartesian coordinate system played a significant role in the process of constructing the Parallel Coordinates concept as new knowledge. The consolidation process was influenced by the social interaction between group members. The abstraction processes taking place in this group were dominated by empirical abstraction, which emphasizes identifying characteristics of manipulated or imagined objects during the processes of recognizing and building-with.
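
    As a concrete anchor for the concept the students were constructing: in parallel coordinates, each variable of a data point is drawn as a height on its own vertical axis, and the point becomes a polyline joining those heights. A minimal sketch (a hypothetical helper, not material from the study):

```python
def parallel_coordinates(point, mins, maxs):
    """Map one data point to normalized heights on equally spaced parallel axes."""
    return [(axis, (v - lo) / (hi - lo))
            for axis, (v, lo, hi) in enumerate(zip(point, mins, maxs))]

# A 2-variable point plotted on two axes; both coordinates land at half height
coords = parallel_coordinates([3.0, 10.0], [0.0, 0.0], [6.0, 20.0])
# → [(0, 0.5), (1, 0.5)]
```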

  17. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    Science.gov (United States)

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

    A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel-group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion has devoted insufficient consideration to the true virtues of the delayed-start design and its implications in terms of required sample size, overall information, and interpretation of the estimate in the context of small populations. We evaluated whether the delayed-start design offers real advantages, particularly in overall efficacy and sample size requirements, as a proposed alternative to the standard parallel-group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios for how the treatment effect develops over time, the advantages, limitations, and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs whose effects develop over time. In addition, the sample size will always increase, because the reduced time on placebo results in a decreased estimated treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel-group RCT in the field of rare disease without discussing the specific methodological needs of this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, larger sample size requirements than expected under a standard parallel-group design. This also impacts benefit-risk assessment.
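
    The central point, that diluting the treatment effect inflates the required sample size, follows directly from the standard two-sample approximation n per arm ≈ 2(z₁₋α/₂ + z₁₋β)²σ²/Δ². A minimal sketch (generic textbook formula, not tied to the paper's example):

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sample z-test."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)  # critical value for two-sided alpha
    z_b = z(power)          # critical value for the desired power
    return math.ceil(2 * (z_a + z_b) ** 2 * (sigma / delta) ** 2)

# Halving the detectable effect quadruples the required sample size
full_effect = n_per_arm(delta=1.0, sigma=2.0)  # → 63
diluted = n_per_arm(delta=0.5, sigma=2.0)      # → 252
```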

  18. Performance of the Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  19. A Randomized Single Blind Parallel Group Study Comparing Monoherbal Formulation Containing Holarrhena antidysenterica Extract with Mesalamine in Chronic Ulcerative Colitis Patients

    Directory of Open Access Journals (Sweden)

    Sarika Johari

    2016-01-01

    Full Text Available Background: Incidences of side effects and relapse are very common in chronic ulcerative colitis patients after termination of treatment. Aims and Objectives: This study aims to compare treatment with a monoherbal formulation of Holarrhena antidysenterica against Mesalamine in chronic ulcerative colitis patients, with special emphasis on side effects and relapse. Settings and Design: Patients were enrolled from an Ayurveda hospital and a private hospital in Gujarat. The study used a randomized, parallel-group, single-blind design. Materials and Methods: The protocol was approved by the Institutional Human Research Ethics Committee of Anand Pharmacy College on 23rd Jan 2013. Three groups (n = 10 each) were treated with Mesalamine (Group I), the monoherbal tablet (Group II), or a combination of both (Group III). Baseline characteristics, factors affecting quality of life, chronicity of disease, signs and symptoms, body weight and laboratory investigations were recorded. Side effects and complications developed, if any, were recorded during and after the study. Statistical Analysis Used: Results were expressed as mean ± SEM. Data were statistically evaluated using the t-test, Wilcoxon test, Mann-Whitney U test, Kruskal-Wallis test and ANOVA, wherever applicable, using GraphPad Prism 6. Results: All the groups responded positively to the treatments. All the patients were positive for occult blood in stool, which reversed significantly after treatment, along with a rise in hemoglobin. Patients treated with the herbal tablets alone showed maximal reduction in abdominal pain, diarrhea, bowel frequency and stool consistency scores compared with Mesalamine-treated patients. Treatment with the herbal tablet alone and in combination with Mesalamine significantly reduced stool infection. Patients treated with the herbal drug alone and in combination did not report any side effects, relapse or complications, while 50% of patients treated with Mesalamine exhibited relapse with

  20. Report of the specialized detector group

    International Nuclear Information System (INIS)

    Witherell, M.S.

    1984-01-01

    The Specialized Detector Group was assigned the task of studying the types of detectors, other than general purpose detectors, that might be suitable for the SSC. At the start of the Snowmass workshop, a number of physics topics were identified which could call for a specialized detector. The modest size of the specialized detector group dictated that we concentrate on a few of these detectors, and not try to consider all candidates. Subgroups were formed for each type of detector, and they worked completely independently on their very different problems. The members of a subgroup were also members of the corresponding group within the Physics area. Because of the wide variety of problems faced by the various subgroups, the detectors will be described in separate papers within these proceedings (some of them within the Physics group reports). Thus, this report gives a summary of these designs and discusses some general considerations

  1. 2002 annual report EDF group

    International Nuclear Information System (INIS)

    2002-01-01

    This document is the 2002 annual report of Electricite de France (EdF) group, the French electric utility. Content: Introductory section (EDF at a glance, Chairman's message, 2002 Highlights); Corporate governance and Group strategy (Corporate governance, sustainable growth strategy, EDF branches); Financial performance (Reaching critical mass, Margins holding up well, Balance sheet); Human resources (Launching Group-wide synergies, Optimising human resources); Customers (Major customers, SMEs and professional customers, Local authorities, Residential customers, Ensuring quality access to electricity); Generation (A balanced energy mix, Nuclear generation, Fossil-fuelled generation, Renewable energies); Corporate social responsibility (Global and local partnerships, Promoting community development)

  2. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed, and the report shows how these classes can be used...
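
    A sparse matrix-vector product of the kind studied in the report is typically parallelized row-wise over a compressed sparse row (CSR) layout, with each worker handling a contiguous block of rows. A minimal Python sketch (threads stand in for the MPI/OpenMP workers; the matrix is a hypothetical toy example):

```python
from concurrent.futures import ThreadPoolExecutor

def spmv_rows(indptr, indices, data, x, rows):
    """y[i] = sum_k data[k] * x[indices[k]] over CSR row i, for a block of rows."""
    out = []
    for i in rows:
        s = 0.0
        for k in range(indptr[i], indptr[i + 1]):
            s += data[k] * x[indices[k]]
        out.append(s)
    return out

def parallel_spmv(indptr, indices, data, x, n_workers=2):
    n = len(indptr) - 1
    block = (n + n_workers - 1) // n_workers  # rows per worker
    blocks = [range(b, min(b + block, n)) for b in range(0, n, block)]
    with ThreadPoolExecutor(n_workers) as ex:
        parts = ex.map(lambda r: spmv_rows(indptr, indices, data, x, r), blocks)
    return [v for part in parts for v in part]

# Toy 2x2 matrix [[1, 2], [0, 3]] in CSR form, multiplied by [1, 1]
y = parallel_spmv([0, 2, 3], [0, 1, 1], [1.0, 2.0, 3.0], [1.0, 1.0])
# → [3.0, 3.0]
```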

  3. Pharmacokinetics, safety, and tolerability of varenicline in healthy adolescent smokers: a multicenter, randomized, double-blind, placebo-controlled, parallel-group study.

    Science.gov (United States)

    Faessel, Helene; Ravva, Patanjali; Williams, Kathryn

    2009-01-01

    Varenicline is approved as an aid to smoking cessation in adults aged ≥18 years. The goal of this study was to characterize the multiple-dose pharmacokinetics, safety, and tolerability of varenicline in adolescent smokers. This multicenter, randomized, double-blind, placebo-controlled, parallel-group study enrolled healthy 12- to 16-year-old smokers (≥3 cigarettes daily) into high-body-weight (>55 kg) and low-body-weight (daily. The apparent renal clearance (CL/F) and volume of distribution (V/F) of varenicline and the effect of body weight on these parameters were estimated using nonlinear mixed-effects modeling. The high-body-weight group consisted of 35 subjects (65.7% male; 77.1% white; mean age, 15.2 years). The low-body-weight group consisted of 37 subjects (37.8% male; 48.6% white; mean age, 14.3 years). The pharmacokinetic parameters of varenicline were dose proportional over the dose range from 0.5 to 2 mg/d. The CL/F for a 70-kg adolescent was 10.4 L/h, comparable to that in a 70-kg adult. The estimated varenicline V/F was decreased in individuals of small body size, thus predicting a varenicline C(max) approximately 30% greater in low-body-weight subjects than in high-body-weight subjects. In high-body-weight subjects, steady-state varenicline exposure, as represented by the AUC(0-24), was 197.0 ng·h/mL for varenicline 1 mg BID and 95.7 ng·h/mL for varenicline 0.5 mg BID, consistent with values reported previously in adult smokers at the equivalent doses. In low-body-weight subjects, varenicline exposure was 126.3 ng·h/mL for varenicline 0.5 mg BID and 60.1 ng·h/mL for varenicline 0.5 mg once daily, values at the lower end of the range observed previously in adults at doses of 1 mg BID and 0.5 mg BID, respectively. Among high-body-weight subjects, adverse events (AEs) were reported by 57.1% of subjects in both the high- and low-dose varenicline groups and by 14.3% of subjects in the placebo group; among low-body-weight subjects, AEs

  4. Report of Industry Panel Group

    Science.gov (United States)

    Gallimore, Simon; Gier, Jochen; Heitland, Greg; Povinelli, Louis; Sharma, Om; VandeWall, Allen

    2006-01-01

    A final report is presented from the industry panel group. The contents include: 1) General comments; 2) Positive progress since Minnowbrook IV; 3) Industry panel outcome; 4) Prioritized turbine projects; 5) Prioritized compressor projects; and 6) Miscellaneous.

  5. Configuration affects parallel stent grafting results.

    Science.gov (United States)

    Tanious, Adam; Wooster, Mathew; Armstrong, Paul A; Zwiebel, Bruce; Grundy, Shane; Back, Martin R; Shames, Murray L

    2018-05-01

    A number of adjunctive "off-the-shelf" procedures have been described to treat complex aortic diseases. Our goal was to evaluate parallel stent graft configurations and to determine an optimal formula for these procedures. This is a retrospective review of all patients at a single medical center treated with parallel stent grafts from January 2010 to September 2015. Outcomes were evaluated on the basis of parallel graft orientation, type, and main body device. Primary end points included parallel stent graft compromise and overall endovascular aneurysm repair (EVAR) compromise. There were 78 patients treated with a total of 144 parallel stents for a variety of pathologic processes. There was a significant correlation between main body oversizing and snorkel compromise (P = .0195) and overall procedural complication (P = .0019) but not with endoleak rates. Patients were organized into the following oversizing groups for further analysis: 0% to 10%, 10% to 20%, and >20%. Those oversized into the 0% to 10% group had the highest rate of overall EVAR complication (73%; P = .0003). There were no significant correlations between any one particular configuration and overall procedural complication. There was also no significant correlation between total number of parallel stents employed and overall complication. Composite EVAR configuration had no significant correlation with individual snorkel compromise, endoleak, or overall EVAR or procedural complication. The configuration most prone to individual snorkel compromise and overall EVAR complication was a four-stent configuration with two stents in an antegrade position and two stents in a retrograde position (60% complication rate). The configuration most prone to endoleak was one or two stents in retrograde position (33% endoleak rate), followed by three stents in an all-antegrade position (25%). There was a significant correlation between individual stent configuration and stent compromise (P = .0385), with 31

  6. Group EDF annual report 2005 sustainable development

    International Nuclear Information System (INIS)

    2006-05-01

The EDF Group's Sustainable Development Report for 2005 is designed to report on Group commitments, particularly within its Agenda 21, its ethical charter, and the Global Compact. It has also been prepared with reference to external reference frameworks: the Global Reporting Initiative (GRI) guidelines and the French New Economic Regulations (NRE) contained in the May 15, 2001 French law. It contains the Chairman's statement, an evaluation of the Group's commitments to renewal and sharing with all stakeholders, its management of local issues, and EDF's responses to the challenges of the future. Indicators are also provided. (A.L.B.)

  7. Development of parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Sigmar, D.J.; Koniges, A.E.

    1996-01-01

    We report on our ongoing development of the 3D Fokker-Planck code ALLA for a highly collisional scrape-off-layer (SOL) plasma. A SOL with strong gradients of density and temperature in the spatial dimension is modeled. Our method is based on a 3-D adaptive grid (in space, magnitude of the velocity, and cosine of the pitch angle) and a second order conservative scheme. Note that the grid size is typically 100 x 257 x 65 nodes. It was shown in our previous work that only these capabilities make it possible to benchmark a 3D code against a spatially-dependent self-similar solution of a kinetic equation with the Landau collision term. In the present work we show results of a more precise benchmarking against the exact solutions of the kinetic equation using a new parallel code ALLAp with an improved method of parallelization and a modified boundary condition at the plasma edge. We also report first results from the code parallelization using Message Passing Interface for a Massively Parallel CRI T3D platform. We evaluate the ALLAp code performance versus the number of T3D processors used and compare its efficiency against a Work/Data Sharing parallelization scheme and a workstation version

  8. S3T working group. Report 1: group aims

    International Nuclear Information System (INIS)

    Pouey, M.

    1983-04-01

The S3T working group, whose aim is to design and develop devices based on unconventional holographic optics, is presented. These devices find applications classified here under four headings: high-resolution spectrometers, high-definition imaging, high-flux devices, and metrology and interferometry. The problems to be solved and the aims of the group in each of these cases are presented. Three lecture syntheses are included in this report. The main one concerns the stigmatism conditions of concave holographic gratings used at normal incidence. This new focusing process is of great interest for hot plasma diagnostics [fr

  9. Pharmacodynamic effects of steady-state fingolimod on antibody response in healthy volunteers: a 4-week, randomized, placebo-controlled, parallel-group, multiple-dose study.

    Science.gov (United States)

    Boulton, Craig; Meiser, Karin; David, Olivier J; Schmouder, Robert

    2012-12-01

Fingolimod, a first-in-class oral sphingosine 1-phosphate receptor (S1PR) modulator, is approved in many countries for relapsing-remitting multiple sclerosis, at a once-daily 0.5-mg dose. A reduction in peripheral lymphocyte count is an expected consequence of the fingolimod mechanism of S1PR modulation. The authors investigated whether this pharmacodynamic effect impacts humoral and cellular immunogenicity. In this double-blind, parallel-group, 4-week study, 72 healthy volunteers were randomized to steady-state fingolimod 0.5 mg or 1.25 mg, or to placebo. The authors compared T-cell-dependent and T-cell-independent responses to the neoantigens keyhole limpet hemocyanin (KLH) and pneumococcal polysaccharide vaccine (PPV-23), respectively, and additionally recall antigen response (tetanus toxoid [TT]) and delayed-type hypersensitivity (DTH) to KLH, TT, and Candida albicans. Fingolimod caused mild to moderate decreases in anti-KLH and anti-PPV-23 IgG and IgM levels versus placebo. Responder rates were identical between placebo and 0.5-mg groups for anti-KLH IgG (both > 90%) and comparable for anti-PPV-23 IgG (55% and 41%, respectively). Fingolimod did not affect anti-TT immunogenicity, and DTH response did not differ between placebo and fingolimod 0.5-mg groups. As expected, lymphocyte counts were substantially reduced in the fingolimod groups versus placebo but recovered by study end. Fingolimod was well tolerated, and the observed safety profile was consistent with previous reports.

  10. Differences between food group reports of low energy reporters and non-low energy reporters on a food frequency questionnaire

    Science.gov (United States)

    Millen, Amy E.; Tooze, Janet A.; Subar, Amy F.; Kahle, Lisa L.; Schatzkin, Arthur; Krebs-Smith, Susan M.

    2013-01-01

Background Low-energy reporters (LERs) and non-LERs differ with respect to a number of characteristics, including self-reported intake of foods. Limited data exist investigating food intake differences when LERs are identified using doubly labeled water (DLW). Objective In the Observing Protein and Energy Nutrition Study (September 1999-March 2000), differences were examined between food group reports of LERs and non-LERs on a food frequency questionnaire (FFQ) (n=440). Design LERs were identified using DLW. LERs' (n=220) and non-LERs' (n=220) reports of 43 food groups on the FFQ were examined in three ways: whether they reported consuming a food group (yes/no), how frequently they reported consuming it (times/day), and the reported portion size (small, medium, or large). Analyses were adjusted for total energy expenditure from DLW. Results LERs compared to non-LERs were less likely to report consumption for one food group among women (soft drinks/regular) and no food groups among men. Reported mean daily frequency of consumption was lower in LERs compared to non-LERs for 23 food groups among women and 24 food groups among men (18 food groups were similar in men and women). Additionally, reported mean portion sizes were smaller for LERs compared to non-LERs for 6 food groups among women and 5 food groups among men (3 food groups were similar in men and women). Results varied minimally by sex and body mass index (BMI). Conclusions LERs as compared to non-LERs were more likely to differ regarding their reported frequency of consumption of food groups than their reported consumption (yes/no) of the food groups or the food groups' reported portion sizes. Results did not vary greatly by sex or BMI. It remains to be determined whether improvements in questionnaire design, or additional tools or methods, would reduce differential reporting due to LER status on an FFQ. PMID:19559136

  11. First massively parallel algorithm to be implemented in Apollo-II code

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1994-01-01

    The collision probability (CP) method in neutron transport, as applied to arbitrary 2D XY geometries, like the TDT module in APOLLO-II, is very time consuming. Consequently RZ or 3D extensions became prohibitive. Fortunately, this method is very suitable for parallelization. Massively parallel computer architectures, especially MIMD machines, bring a new breath to this method. In this paper we present a CM5 implementation of the CP method. Parallelization is applied to the energy groups, using the CMMD message passing library. In our case we use 32 processors for the standard 99-group APOLLIB-II library. The real advantage of this algorithm will appear in the calculation of the future fine multigroup library (about 8000 groups) of the SAPHYR project with a massively parallel computer (to the order of hundreds of processors). (author). 3 tabs., 4 figs., 4 refs
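Parallelization over energy groups, as described above, amounts to giving each processor a contiguous block of the group indices. A minimal sketch of such a block partition (illustrative Python, not APOLLO-II code; the function name is invented):

```python
# Illustrative sketch: block-partition energy groups across processors,
# as in the CM5 implementation described above (not APOLLO-II source).

def partition_groups(n_groups: int, n_procs: int) -> list:
    """Split group indices 0..n_groups-1 into n_procs contiguous blocks,
    giving one extra group to each of the leading processors when the
    division is not even."""
    base, extra = divmod(n_groups, n_procs)
    blocks, start = [], 0
    for rank in range(n_procs):
        size = base + (1 if rank < extra else 0)
        blocks.append(range(start, start + size))
        start += size
    return blocks

# 99-group APOLLIB-II library on 32 processors:
blocks = partition_groups(99, 32)
assert sum(len(b) for b in blocks) == 99
```

With 99 groups on 32 processors, three processors carry four groups each and the rest carry three, which is the best balance a block partition can achieve.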

  12. First massively parallel algorithm to be implemented in APOLLO-II code

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1994-01-01

The collision probability method in neutron transport, as applied to arbitrary 2-dimensional geometries, like the two-dimensional transport module in APOLLO-II, is very time consuming. Consequently a 3-dimensional extension became prohibitive. Fortunately, this method is very suitable for parallelization. Massively parallel computer architectures, especially MIMD machines, bring a new breath to this method. In this paper we present a CM5 implementation of the collision probability method. Parallelization is applied to the energy groups, using the CMMD message passing library. In our case we used 32 processors for the standard 99-group APOLLIB-II library. The real advantage of this algorithm will appear in the calculation of the future multigroup library (about 8000 groups) of the SAPHYR project with a massively parallel computer (to the order of hundreds of processors). (author). 4 refs., 4 figs., 3 tabs

  13. EDF Group - Annual Report 2011. Electricity, long-term choices

    International Nuclear Information System (INIS)

    2012-01-01

    The EDF Group is one of the world's leading energy companies, active in all areas from generation to trading and network management. It has a sound business model, evenly balanced between regulated and deregulated activities. With its first-rate human resources, R and D capability, expertise in engineering and operating generation plants and networks, as well as its energy eco-efficiency offers, the Group delivers competitive solutions that help ensure sustainable economic development and climate protection. The EDF Group is the leader in the French and UK electricity markets and has solid positions in Italy and numerous other European countries, as well as industrial operations in Asia and the United States. Everywhere it operates, the Group is a model of quality public service for the energy sector. This document is EDF Group's annual report for the year 2011. It contains information about Group profile, governance, business, development strategy, sales and marketing, positions in Europe and international activities. The document is made of several reports: the Activity and Sustainable Development Report, the Financial Report, the Management Report, the Report by the Chairman of EDF Board of Directors on corporate governance and internal control procedures, the Milestones report, the 'EDF at a glance' report, and the Sustainable Development Indicators

  14. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  15. 2002 annual report EDF group; 2002 rapport annuel groupe EDF

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-07-01

    This document is the 2002 annual report of Electricite de France (EdF) group, the French electric utility. Content: Introductory section (EDF at a glance, Chairman's message, 2002 Highlights); Corporate governance and Group strategy (Corporate governance, sustainable growth strategy, EDF branches); Financial performance (Reaching critical mass, Margins holding up well, Balance sheet); Human resources (Launching Group-wide synergies, Optimising human resources); Customers (Major customers, SMEs and professional customers, Local authorities, Residential customers, Ensuring quality access to electricity); Generation (A balanced energy mix, Nuclear generation, Fossil-fuelled generation, Renewable energies); Corporate social responsibility (Global and local partnerships, Promoting community development)

  16. 2002 annual report EDF group; 2002 rapport annuel groupe EDF

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-07-01

    This document is the 2002 annual report of Electricite de France (EdF) group, the French electric utility. Content: Introductory section (EDF at a glance, Chairman's message, 2002 Highlights); Corporate governance and Group strategy (Corporate governance, sustainable growth strategy, EDF branches); Financial performance (Reaching critical mass, Margins holding up well, Balance sheet); Human resources (Launching Group-wide synergies, Optimising human resources); Customers (Major customers, SMEs and professional customers, Local authorities, Residential customers, Ensuring quality access to electricity); Generation (A balanced energy mix, Nuclear generation, Fossil-fuelled generation, Renewable energies); Corporate social responsibility (Global and local partnerships, Promoting community development)

  17. Development of a parallelization strategy for the VARIANT code

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Khalil, H.S.; Palmiotti, G.; Tatsumi, M.

    1996-01-01

The VARIANT code solves the multigroup steady-state neutron diffusion and transport equation in three-dimensional Cartesian and hexagonal geometries using the variational nodal method. VARIANT consists of four major parts that must be executed sequentially: input handling, calculation of response matrices, solution algorithm (i.e. inner-outer iteration), and output of results. The objective of the parallelization effort was to reduce the overall computing time by distributing the work of the two computationally intensive (sequential) tasks, the coupling coefficient calculation and the iterative solver, equally among a group of processors. This report describes the code's calculations and gives performance results on one of the benchmark problems used to test the code. The performance analysis on the IBM SPx system shows good efficiency for well-load-balanced programs. Even for relatively small problem sizes, respectable efficiencies are seen for the SPx. An extension to achieve a higher degree of parallelism will be addressed in future work. 7 refs., 1 tab

  18. DATA TRANSFER IN THE AUTOMATED SYSTEM OF PARALLEL DESIGN AND CONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Volkov Andrey Anatol'evich

    2012-12-01

Full Text Available This article covers data transfer processes in the automated system of parallel design and construction. The authors consider the structure of reports used by contractors and clients when large-scale projects are implemented. All necessary items of information are grouped into three levels, and each level is described by certain attributes. The authors devote particular attention to the integrated operational schedule, as it is the main tool of project management. Some recommendations concerning the forms and content of reports are presented. Integrated automation of all operations is a necessary condition for the successful implementation of the new concept. The technical aspect of the notion of parallel design and construction also includes the client-to-server infrastructure that brings together all processes implemented by the parties involved in projects. This approach should be taken into consideration in the course of review of existing codes and standards to eliminate any inconsistency between the construction legislation and the practical experience of the engineers involved.

  19. Parallel processing and learning in simple systems. Final report, 10 January 1986-14 January 1989

    Energy Technology Data Exchange (ETDEWEB)

    Mpitsos, G.J.

    1989-03-15

Work over the three-year tenure of this grant has dealt with interrelated studies of (1) neuropharmacology, (2) behavior, and (3) distributed/parallel processing in the generation of variable motor patterns in the buccal-oral system of the sea slug Pleurobranchaea californica. (4) Computer simulations of simple neural networks have been undertaken to examine neurointegrative principles that could not be examined in biological preparations. The simulation work has laid the basis for further simulations dealing with networks having characteristics relating to real neurons. All of the work has had the goal of developing interdisciplinary tools for understanding the scale-independent problem of how individuals, each possessing only local knowledge of group activity, act within a group to produce different and variable adaptive outputs, and, in turn, of how the group influences the activity of the individual. The pharmacologic studies have had the goal of developing biochemical tools with which to identify groups of neurons that perform specific tasks during the production of a given behavior but are multifunctional by being critically involved in generating several different behaviors.

  20. Cognitive synergy in groups and group-to-individual transfer of decision-making competencies

    Science.gov (United States)

    Curşeu, Petru L.; Meslec, Nicoleta; Pluut, Helen; Lucas, Gerardus J. M.

    2015-01-01

    In a field study (148 participants organized in 38 groups) we tested the effect of group synergy and one's position in relation to the collaborative zone of proximal development (CZPD) on the change of individual decision-making competencies. We used two parallel sets of decision tasks reported in previous research to test rationality and we evaluated individual decision-making competencies in the pre-group and post-group conditions as well as group rationality (as an emergent group level phenomenon). We used multilevel modeling to analyze the data and the results showed that members of synergetic groups had a higher cognitive gain as compared to members of non-synergetic groups, while highly rational members (members above the CZPD) had lower cognitive gains compared to less rational group members (members situated below the CZPD). These insights extend the literature on group-to-individual transfer of learning and have important practical implications as they show that group dynamics influence the development of individual decision-making competencies. PMID:26441750

  1. Cognitive synergy in groups and group-to-individual transfer of decision-making competencies.

    Science.gov (United States)

    Curşeu, Petru L; Meslec, Nicoleta; Pluut, Helen; Lucas, Gerardus J M

    2015-01-01

    In a field study (148 participants organized in 38 groups) we tested the effect of group synergy and one's position in relation to the collaborative zone of proximal development (CZPD) on the change of individual decision-making competencies. We used two parallel sets of decision tasks reported in previous research to test rationality and we evaluated individual decision-making competencies in the pre-group and post-group conditions as well as group rationality (as an emergent group level phenomenon). We used multilevel modeling to analyze the data and the results showed that members of synergetic groups had a higher cognitive gain as compared to members of non-synergetic groups, while highly rational members (members above the CZPD) had lower cognitive gains compared to less rational group members (members situated below the CZPD). These insights extend the literature on group-to-individual transfer of learning and have important practical implications as they show that group dynamics influence the development of individual decision-making competencies.

  2. Parallel community climate model: Description and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H. [and others

    1996-07-15

This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.

  3. Parallel Execution of Functional Mock-up Units in Buildings Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ozmen, Ozgur [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); New, Joshua Ryan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-06-30

    A Functional Mock-up Interface (FMI) defines a standardized interface to be used in computer simulations to develop complex cyber-physical systems. FMI implementation by a software modeling tool enables the creation of a simulation model that can be interconnected, or the creation of a software library called a Functional Mock-up Unit (FMU). This report describes an FMU wrapper implementation that imports FMUs into a C++ environment and uses an Euler solver that executes FMUs in parallel using Open Multi-Processing (OpenMP). The purpose of this report is to elucidate the runtime performance of the solver when a multi-component system is imported as a single FMU (for the whole system) or as multiple FMUs (for different groups of components as sub-systems). This performance comparison is conducted using two test cases: (1) a simple, multi-tank problem; and (2) a more realistic use case based on the Modelica Buildings Library. In both test cases, the performance gains are promising when each FMU consists of a large number of states and state events that are wrapped in a single FMU. Load balancing is demonstrated to be a critical factor in speeding up parallel execution of multiple FMUs.
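The execution scheme studied in the report, several model units advanced in lock-step by an explicit Euler solver with the per-step work fanned out to a pool of workers, can be sketched as a toy (Python threads standing in for the C++/OpenMP wrapper; `euler_run` and the unit representation are invented for illustration):

```python
# Toy sketch (not the ORNL FMU wrapper): each "unit" is a (state, deriv)
# pair; all units take one explicit Euler step in parallel, then
# synchronize before the next step, mirroring the OpenMP loop structure.
from concurrent.futures import ThreadPoolExecutor

def euler_run(units, n_steps, dt):
    """Advance every unit by n_steps Euler steps of size dt; return final states."""
    def step(unit):
        state, deriv = unit
        return state + dt * deriv(state), deriv
    with ThreadPoolExecutor() as pool:
        for _ in range(n_steps):
            units = list(pool.map(step, units))  # implicit barrier per step
    return [state for state, _ in units]

# two exponential-decay "tanks", dx/dt = -x, from different initial levels
final = euler_run([(1.0, lambda x: -x), (2.0, lambda x: -x)], 1000, 0.001)
```

If the units are unequal in cost, the per-step barrier makes the slowest unit dominate, which is the load-balancing effect the report highlights. (Python threads illustrate only the structure; the GIL prevents a real speedup here.)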

  4. Military Munitions Waste Working Group report

    International Nuclear Information System (INIS)

    1993-01-01

This report presents the findings of the Military Munitions Waste Working Group in its effort to achieve the goals directed under the Federal Advisory Committee to Develop On-Site Innovative Technologies (DOIT Committee) for environmental restoration and waste management. The Military Munitions Waste Working Group identified the following seven areas of concern associated with the ordnance (energetics) waste stream: unexploded ordnance (stockpiled; disposed at known locations, i.e., disposal pits; and discharged in impact areas and unknown disposal sites); contaminated media; chemical sureties/weapons; biological weapons; munitions production; depleted uranium; and rocket motor and fuel disposal (open burn/open detonation). Because of time constraints, the Military Munitions Waste Working Group has focused on unexploded ordnance and contaminated media with the understanding that remaining waste streams will be considered as time permits. Contents of this report are as follows: executive summary; introduction; Military Munitions Waste Working Group charter; description of priority waste stream problems; shortcomings of existing approaches, processes and technologies; innovative approaches, processes and technologies; work force planning, training, and education issues relative to technology development and cleanup; criteria used to identify and screen potential demonstration projects; list of potential candidate demonstration projects for the DOIT committee decision/recommendation; and appendices

  5. Military Munitions Waste Working Group report

    Energy Technology Data Exchange (ETDEWEB)

    1993-11-30

This report presents the findings of the Military Munitions Waste Working Group in its effort to achieve the goals directed under the Federal Advisory Committee to Develop On-Site Innovative Technologies (DOIT Committee) for environmental restoration and waste management. The Military Munitions Waste Working Group identified the following seven areas of concern associated with the ordnance (energetics) waste stream: unexploded ordnance (stockpiled; disposed at known locations, i.e., disposal pits; and discharged in impact areas and unknown disposal sites); contaminated media; chemical sureties/weapons; biological weapons; munitions production; depleted uranium; and rocket motor and fuel disposal (open burn/open detonation). Because of time constraints, the Military Munitions Waste Working Group has focused on unexploded ordnance and contaminated media with the understanding that remaining waste streams will be considered as time permits. Contents of this report are as follows: executive summary; introduction; Military Munitions Waste Working Group charter; description of priority waste stream problems; shortcomings of existing approaches, processes and technologies; innovative approaches, processes and technologies; work force planning, training, and education issues relative to technology development and cleanup; criteria used to identify and screen potential demonstration projects; list of potential candidate demonstration projects for the DOIT committee decision/recommendation; and appendices.

  6. Feasibility studies for final disposal of low and intermediate radioactive waste - summary with main conclusions and recommendations from three parallel studies. Report to the cross-departmental working group for preparing a decision basis for establishing a Danish radioactive waste disposal facility

    International Nuclear Information System (INIS)

    2011-05-01

In 2003, the Danish Parliament in resolution No. B 48 on the dismantling of the nuclear facilities at Risoe gave consent to the government to begin preparation of a decision basis for a Danish final repository for low and intermediate level waste. As a result, a working group under the Ministry of Health and Prevention in 2008 prepared the report 'Decision basis for a Danish final repository for low and medium level radioactive waste'. In this report it was recommended to prepare three parallel preliminary studies: one on repository concepts, with the aim of obtaining the necessary decision-making basis for selecting which concepts to analyze in the process of establishing a final repository; one on transportation of radioactive waste to the depot; and one on regional mapping, with the aim of characterizing areas as suitable or unsuitable for locating a repository. The present report contains the main conclusions of each of the three parallel studies in relation to the further localization process. The preliminary studies suggest 22 areas, of which it is recommended to proceed with six in the selection process. The preliminary studies also show that all investigated storage concepts will be possible solutions from a security standpoint. However, there will be greater risks associated with depots near the surface, because they are more subject to intentional or accidental intrusion. Overall, a medium deep repository will be the most appropriate solution, but it is also a more expensive solution than the near-surface repository. Both the subsurface and the deep repository concepts may be reversible, but reversibility is estimated to increase overall costs and may increase the risk of accidents. The preliminary studies establish a set of conclusions and recommendations concerning future studies related to repository concepts and safety analyses, including in relation to the specific geology of the selected locations. The transportation studies show that radio

  7. Parallel processing for artificial intelligence 2

    CERN Document Server

    Kumar, V; Suttner, CB

    1994-01-01

    With the increasing availability of parallel machines and the raising of interest in large scale and real world applications, research on parallel processing for Artificial Intelligence (AI) is gaining greater importance in the computer science environment. Many applications have been implemented and delivered but the field is still considered to be in its infancy. This book assembles diverse aspects of research in the area, providing an overview of the current state of technology. It also aims to promote further growth across the discipline. Contributions have been grouped according to their

  8. Neck collar, "act-as-usual" or active mobilization for whiplash injury? A randomized parallel-group trial

    DEFF Research Database (Denmark)

    Kongsted, Alice; Montvilas, Erisela Qerama; Kasch, Helge

    2007-01-01

Study Design. Randomized, parallel-group trial. Objective. To compare the effect of 3 early intervention strategies following whiplash injury. Summary of Background Data. Long-lasting pain and disability, known as chronic whiplash-associated disorder (WAD), may develop after a forced flexion-extension trauma to the cervical spine. It is unclear whether this, in some cases disabling, condition can be prevented by early intervention. Active interventions have been recommended but have not been compared with information only. Methods. Participants were recruited from emergency units and general practitioners within 10 days after a whiplash injury and randomized to: 1) immobilization of the cervical spine in a rigid collar followed by active mobilization, 2) advice to "act-as-usual," or 3) an active mobilization program (Mechanical Diagnosis and Therapy). Follow-up was carried out after 3, 6, and 12...

  9. Neck collar, "act-as-usual" or active mobilization for whiplash injury? A randomized parallel-group trial

    DEFF Research Database (Denmark)

    Kongsted, Alice; Montvilas, Erisela Qerama; Kasch, Helge

    2007-01-01

Study Design. Randomized, parallel-group trial. Objective. To compare the effect of 3 early intervention strategies following whiplash injury. Summary of Background Data. Long-lasting pain and disability, known as chronic whiplash-associated disorder (WAD), may develop after a forced flexion-extension trauma to the cervical spine. It is unclear whether this, in some cases disabling, condition can be prevented by early intervention. Active interventions have been recommended but have not been compared with information only. Methods. Participants were recruited from emergency units and general practitioners within 10 days after a whiplash injury and randomized to: 1) immobilization of the cervical spine in a rigid collar followed by active mobilization, 2) advice to "act-as-usual," or 3) an active mobilization program (Mechanical Diagnosis and Therapy). Follow-up was carried out after 3, 6, and 12...

  10. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 Mflops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction

  11. EDF group. Annual report 2001

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-05-01

    This document is the English version of the 2001 annual report of Electricite de France (EdF) Group, the French electric utility. It comprises 4 parts: introduction (statement of the chairman and chief executive officer, corporate governance, group key figures, sustainable growth indicators - parent company, energy for a sustainable future, EdF group worldwide); dynamics and balanced growth (financial results, EdF's strategy in building a competitive global group: consolidating the European network, moving forward in energy-related services, responding to increasing energy demand in emerging countries); sustainable solutions for all (empowering the customer: competitive solutions for industrial customers, anticipating the needs of residential customers and SMEs, environmental solutions to enhance urban life, upgrading the network and providing access to energy; a sound, sustainable and secure energy mix: a highly competitive nuclear fleet, the vital resource of fossil-fuelled plants, a proactive approach to renewable energies); a global commitment to corporate social responsibility (human resources and partnerships). (J.S.)

  13. A CS1 pedagogical approach to parallel thinking

    Science.gov (United States)

    Rague, Brian William

    Almost all collegiate programs in Computer Science offer an introductory course in programming primarily devoted to communicating the foundational principles of software design and development. The ACM designates this introduction to computer programming course for first-year students as CS1, during which methodologies for solving problems within a discrete computational context are presented. Logical thinking is highlighted, guided primarily by a sequential approach to algorithm development and made manifest by typically using the latest, commercially successful programming language. In response to the most recent developments in accessible multicore computers, instructors of these introductory classes may wish to include training on how to design workable parallel code. Novel issues arise when programming concurrent applications which can make teaching these concepts to beginning programmers a seemingly formidable task. Student comprehension of design strategies related to parallel systems should be monitored to ensure an effective classroom experience. This research investigated the feasibility of integrating parallel computing concepts into the first-year CS classroom. To quantitatively assess student comprehension of parallel computing, an experimental educational study using a two-factor mixed group design was conducted to evaluate two instructional interventions in addition to a control group: (1) topic lecture only, and (2) topic lecture with laboratory work using a software visualization Parallel Analysis Tool (PAT) specifically designed for this project. A new evaluation instrument developed for this study, the Perceptions of Parallelism Survey (PoPS), was used to measure student learning regarding parallel systems. The results from this educational study show a statistically significant main effect among the repeated measures, implying that student comprehension levels of parallel concepts as measured by the PoPS improve immediately after the delivery of

  14. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups from around the world to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high-performance multicomputers that have emerged over the last decade, which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine, and results in the programming of multicomputers being widely acknowledged as a difficult, time-consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  15. Parallel processing of Monte Carlo code MCNP for particle transport problem

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Kawasaki, Takuji

    1996-06-01

    It is possible to vectorize or parallelize Monte Carlo (MC) codes for photon and neutron transport problems by making use of the independence of the calculation for each particle. The applicability of existing MC codes to parallel processing is discussed. As for parallel computers, we have used both a vector-parallel processor and a scalar-parallel processor in the performance evaluation. We have performed (i) vector-parallel processing of the MCNP code on the Monte Carlo machine Monte-4 with four vector processors, and (ii) parallel processing on the Paragon XP/S with 256 processors. In this report we describe the methodology and results for parallel processing on these two types of parallel or distributed-memory computers. In addition, we discuss the evaluation of parallel programming environments for the parallel computers used in the present work, as a part of the work of developing the STA (Seamless Thinking Aid) Basic Software. (author)
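
    The particle-by-particle independence mentioned above is what makes this kind of code embarrassingly parallel. A minimal sketch in Python (my own toy 1-D slab model, not MCNP) splits particle histories across worker processes, each with its own random stream, and sums the tallies:

```python
import random
from multiprocessing import Pool

def simulate_batch(args):
    """Simulate one batch of independent particle histories (toy model)."""
    seed, n_particles = args
    rng = random.Random(seed)              # independent stream per worker
    absorbed = 0
    for _ in range(n_particles):
        x = 0.0
        while x < 5.0:                     # toy 1-D slab, 5 mean free paths thick
            x += rng.expovariate(1.0)      # free-flight distance to next collision
            if rng.random() < 0.3:         # assumed absorption probability
                absorbed += 1
                break
    return absorbed

if __name__ == "__main__":
    # four independent batches, one per worker process; tallies simply add up
    batches = [(seed, 10_000) for seed in range(4)]
    with Pool(4) as pool:
        total = sum(pool.map(simulate_batch, batches))
    print("absorption fraction:", total / 40_000)
```

    Because each history touches no shared state, the only communication is the final gather of tallies, which is why both vector and scalar-parallel machines handle such codes well.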

  16. Test generation for digital circuits using parallel processing

    Science.gov (United States)

    Hartmann, Carlos R.; Ali, Akhtar-Uz-Zaman M.

    1990-12-01

    Test generation for digital logic circuits is an NP-hard problem. Recently, the availability of low-cost, high-performance parallel machines has spurred interest in developing fast parallel algorithms for computer-aided design and test. This report describes a method of applying a 15-valued logic system to digital logic circuit test vector generation in a parallel programming environment. A concept called fault site testing allows for test generation, in parallel, that targets more than one fault at a given location. The multi-valued logic system allows results obtained by distinct processors and/or processes to be merged by means of simple set intersections. A machine-independent description is given for the proposed algorithm.
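
    The set-intersection merge can be sketched as follows. This is a hypothetical illustration: the signal-line names, the reduced three-valued universe {0, 1, X}, and the data layout are my assumptions, not the report's 15-valued system.

```python
def merge_requirements(per_process):
    """Merge per-process test requirements by elementwise set intersection.

    Each process reports, per signal line, the set of logic values consistent
    with detecting its target fault; intersecting the sets line by line keeps
    only assignments that cover all targeted faults at once.
    """
    lines = set().union(*(req.keys() for req in per_process))
    full = {"0", "1", "X"}                 # an unconstrained line (toy universe)
    return {line: set.intersection(*(req.get(line, full) for req in per_process))
            for line in lines}
```

    A real 15-valued system carries richer fault information in each value, but the merge step still reduces to the same elementwise intersection, which is what makes combining results from distinct processors cheap.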

  17. SSC muon detector group report

    International Nuclear Information System (INIS)

    Carlsmith, D.; Groom, D.; Hedin, D.; Kirk, T.; Ohsugi, T.; Reeder, D.; Rosner, J.; Wojcicki, S.

    1986-01-01

    We report here on results from the Muon Detector Group which met to discuss aspects of muon detection for the reference 4π detector models put forward for evaluation at the Snowmass 1986 Summer Study. We report on: suitable overall detector geometry; muon energy loss mechanisms; muon orbit determination; muon momentum and angle measurement resolution; raw muon rates and trigger concepts; plus we identify SSC physics for which muon detection will play a significant role. We conclude that muon detection at SSC energies and luminosities is feasible and will play an important role in the evolution of physics at the SSC

  18. SSC muon detector group report

    Energy Technology Data Exchange (ETDEWEB)

    Carlsmith, D.; Groom, D.; Hedin, D.; Kirk, T.; Ohsugi, T.; Reeder, D.; Rosner, J.; Wojcicki, S.

    1986-01-01

    We report here on results from the Muon Detector Group which met to discuss aspects of muon detection for the reference 4π detector models put forward for evaluation at the Snowmass 1986 Summer Study. We report on: suitable overall detector geometry; muon energy loss mechanisms; muon orbit determination; muon momentum and angle measurement resolution; raw muon rates and trigger concepts; plus we identify SSC physics for which muon detection will play a significant role. We conclude that muon detection at SSC energies and luminosities is feasible and will play an important role in the evolution of physics at the SSC.

  19. EDF group - Reference Document, Annual Financial Report 2014

    International Nuclear Information System (INIS)

    2015-01-01

    The EDF Group is the world's leading electricity company and very well established in Europe. Its business covers all electricity-related activities, from generation to networks and commerce. It is an important player in energy trading through EDF trading. This document is EDF Group's Reference Document and Annual Financial Report for the year 2014. It contains information about Group profile, governance, business, investments, property, plant and equipment, management, financial position, human resources, shareholders, etc. The document includes the half-year financial report

  20. Biomedical Research Group, Health Division annual report 1954

    Energy Technology Data Exchange (ETDEWEB)

    Langham, W.H.; Storer, J.B.

    1955-12-31

    This report covers the activities of the Biomedical Research Group (H-4) of the Health Division during the period January 1 through December 31, 1954. Organizationally, Group H-4 is divided into five sections, namely, Biochemistry, Radiobiology, Radiopathology, Biophysics, and Organic Chemistry. The activities of the Group are summarized under the headings of the various sections. The general nature of each section's program, publications, documents and reports originating from its members, and abstracts and summaries of the projects pursued during the year are presented.

  1. Report of the Nuclear Spectroscopy Group

    International Nuclear Information System (INIS)

    Lerry, T.B.; Wylie, W.; Hugo

    1978-01-01

    This is a report of the group working on Nuclear Spectroscopy. They held a general discussion covering personnel, present and future research interests, and general suggestions. (A.C.A.S.) [pt

  2. More parallel please

    DEFF Research Database (Denmark)

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    Abstract [en] More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give...

  3. The Acoustic and Perceptual Effects of Series and Parallel Processing

    Directory of Open Access Journals (Sweden)

    Melinda C. Anderson

    2009-01-01

    Full Text Available Temporal envelope (TE) cues provide a great deal of speech information. This paper explores how spectral subtraction and dynamic-range compression gain modifications affect TE fluctuations for parallel and series configurations. In parallel processing, algorithms compute gains based on the same input signal, and the gains in dB are summed. In series processing, output from the first algorithm forms the input to the second algorithm. Acoustic measurements show that the parallel arrangement produces more gain fluctuations, introducing more changes to the TE than the series configurations. Intelligibility tests for normal-hearing (NH) and hearing-impaired (HI) listeners show (1) parallel processing gives significantly poorer speech understanding than an unprocessed (UNP) signal and the series arrangement and (2) series processing and UNP yield similar results. Speech quality tests show that UNP is preferred to both parallel and series arrangements, although spectral subtraction is the most preferred. No significant differences exist in sound quality between the series and parallel arrangements, or between the NH group and the HI group. These results indicate that gain modifications affect intelligibility and sound quality differently. Listeners appear to have a higher tolerance for gain modifications with regard to intelligibility, while judgments for sound quality appear to be more affected by smaller amounts of gain modification.
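
    The difference between the two arrangements can be seen with a toy compressor stage; the threshold and gain values here are invented for illustration and are not taken from the study:

```python
def comp_gain_db(level_db):
    """Toy compressor stage: 6 dB attenuation above a 70 dB threshold."""
    return -6.0 if level_db > 70.0 else 0.0

def parallel_db(level_db, stages):
    # parallel: every stage sees the same input; the dB gains are summed
    return sum(g(level_db) for g in stages)

def series_db(level_db, stages):
    # series: each stage sees the previous stage's output level
    total = 0.0
    for g in stages:
        total += g(level_db + total)
    return total

# Two identical compressors at a 72 dB input: the parallel arrangement
# applies both attenuations (-12 dB), while in series the first stage pulls
# the level below the second stage's threshold, so only -6 dB is applied.
```

    This illustrates why the parallel arrangement tends to produce larger gain fluctuations, and hence larger temporal-envelope changes, than the series one.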

  4. On the Automatic Parallelization of Sparse and Irregular Fortran Programs

    Directory of Open Access Journals (Sweden)

    Yuan Lin

    1999-01-01

    Full Text Available Automatic parallelization is usually believed to be less effective at exploiting implicit parallelism in sparse/irregular programs than in their dense/regular counterparts. However, not much is really known because there have been few research reports on this topic. In this work, we have studied the possibility of using an automatic parallelizing compiler to detect the parallelism in sparse/irregular programs. The study with a collection of sparse/irregular programs led us to some common loop patterns. Based on these patterns new techniques were derived that produced good speedups when manually applied to our benchmark codes. More importantly, these parallelization methods can be implemented in a parallelizing compiler and can be applied automatically.
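
    One loop pattern common in such codes is a reduction through an index array. A hand-applied sketch of the usual compiler transformation, array privatization followed by a merge (written in Python for brevity rather than the paper's Fortran), looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def irregular_reduction(idx, vals, n_bins, n_workers=4):
    """Sum vals[i] into bins[idx[i]] in parallel via array privatization."""
    def partial(chunk):
        acc = [0.0] * n_bins               # private copy: no data races
        for i in chunk:
            acc[idx[i]] += vals[i]
        return acc

    chunks = [range(w, len(idx), n_workers) for w in range(n_workers)]
    with ThreadPoolExecutor(n_workers) as ex:
        partials = list(ex.map(partial, chunks))
    # merge phase: elementwise sum of the private copies
    return [sum(col) for col in zip(*partials)]
```

    The point is the transformation, not Python-level speedup: because idx is only known at run time, the writes are irregular, and privatizing the accumulator is what makes the loop safe to run in parallel.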

  5. Parallel PDE-Based Simulations Using the Common Component Architecture

    International Nuclear Information System (INIS)

    McInnes, Lois C.; Allan, Benjamin A.; Armstrong, Robert; Benson, Steven J.; Bernholdt, David E.; Dahlgren, Tamara L.; Diachin, Lori; Krishnan, Manoj Kumar; Kohl, James A.; Larson, J. Walter; Lefantzi, Sophia; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G.; Ray, Jaideep; Zhou, Shujia

    2006-01-01

    The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations. This chapter discusses recent work on leveraging these CCA efforts in parallel PDE-based simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. We explain how component technology helps to address the different challenges posed by each of these applications, and we highlight how component interfaces built on existing parallel toolkits facilitate the reuse of software for parallel mesh manipulation, discretization, linear algebra, integration, optimization, and parallel data redistribution. We also present performance data to demonstrate the suitability of this approach, and we discuss strategies for applying component technologies to both new and existing applications

  6. Directions in parallel processor architecture, and GPUs too

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Modern computing is power-limited in every domain of computing. Performance increments extracted from instruction-level parallelism (ILP) are no longer power-efficient; they haven't been for some time. Thread-level parallelism (TLP) is a more easily exploited form of parallelism, at the expense of programmer effort to expose it in the program. In this talk, I will introduce you to disparate topics in parallel processor architecture that will impact programming models (and you) in both the near and far future. About the speaker Olivier is a senior GPU (SM) architect at NVIDIA and an active participant in the concurrency working group of the ISO C++ committee. He has also worked on very large diesel engines as a mechanical engineer, and taught at McGill University (Canada) as a faculty instructor.

  7. Z-buffer image assembly processing in high parallel visualization processing

    International Nuclear Information System (INIS)

    Kaneko, Isamu; Muramatsu, Kazuhiro

    2000-03-01

    On a parallel computer with many processors, the domain decomposition method is a popular means of parallel processing. Now that simulation scales have become much larger and computations take far longer, visualization processing simultaneous with the actual computation is increasingly needed, and especially for real-time visualization the domain decomposition technique is indispensable. In parallel rendering, the rendered results must be gathered to one processor in the last stage to compose the integrated picture. This integration is usually conducted using Z-buffer values. This process, however, causes the crucial problems of much slower processing and local memory shortage when parallel processing exceeds several tens of processors. In this report, two new solutions are proposed: the adoption of a special operator (Reduce operator) in the parallelization process, and buffer compression by deleting the background information. This report includes performance results for these new techniques, investigating their effect with use of the parallel computer Paragon. (author)
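
    The Z-buffer integration step can be sketched as a pairwise merge. This is a simplified stand-in: the report's Reduce operator and background compression are Paragon-specific, and the pixel representation below is my assumption.

```python
from functools import reduce

def merge_zbuffers(a, b):
    """Per pixel, keep the (depth, color) fragment nearest the viewer."""
    return [min(pa, pb, key=lambda p: p[0]) for pa, pb in zip(a, b)]

def assemble(buffers):
    # pairwise reduction; on a parallel machine this runs as a log-depth tree
    return reduce(merge_zbuffers, buffers)
```

    Expressing the gather as a reduction is what lets the merge run in logarithmic depth instead of funneling every sub-image through a single processor, which is the bottleneck the report addresses.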

  8. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  9. Portable programming on parallel/networked computers using the Application Portable Parallel Library (APPL)

    Science.gov (United States)

    Quealy, Angela; Cole, Gary L.; Blech, Richard A.

    1993-01-01

    The Application Portable Parallel Library (APPL) is a subroutine-based library of communication primitives that is callable from applications written in FORTRAN or C. APPL provides a consistent programmer interface to a variety of distributed and shared-memory multiprocessor MIMD machines. The objective of APPL is to minimize the effort required to move parallel applications from one machine to another, or to a network of homogeneous machines. APPL encompasses many of the message-passing primitives that are currently available on commercial multiprocessor systems. This paper describes APPL (version 2.3.1) and its usage, reports the status of the APPL project, and indicates possible directions for the future. Several applications using APPL are discussed, as well as performance and overhead results.
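
    The idea of a thin portability layer can be illustrated with a miniature interface; the names and signatures below are hypothetical, not APPL's actual primitives. Application code calls generic send/recv, and the backend (here, in-process queues standing in for a real machine's message passing) can be swapped without touching the application.

```python
import queue

class Comm:
    """Toy message-passing interface with an exchangeable backend."""
    def __init__(self, n_nodes):
        self._boxes = [queue.Queue() for _ in range(n_nodes)]
    def send(self, dest, msg):
        self._boxes[dest].put(msg)          # non-blocking send to a node
    def recv(self, rank):
        return self._boxes[rank].get()      # blocking receive at a node

comm = Comm(2)
comm.send(1, ("rank0", [1.0, 2.0]))
```

    Porting to a new machine then means reimplementing the few primitives behind this interface, which is the effort-minimizing property the abstract describes.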

  10. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  11. EDF Group - Annual Report 2014. The people who power tomorrow

    International Nuclear Information System (INIS)

    2015-01-01

    The EDF Group is the world's leading electricity company and very well established in Europe. Its business covers all electricity-related activities, from generation to networks and commerce. It is an important player in energy trading through EDF trading. This document is EDF Group's annual report for the year 2014. It contains information about Group profile, governance, business, development strategy, sales and marketing, positions in Europe and international activities. The document is made of several reports: the Activity and Sustainable Development Report, the 'EDF at a glance' report, and the EDF Group Performance sheet

  12. ISOE EG-SAM interim report - Report on behalf of the Sub expert Group

    International Nuclear Information System (INIS)

    Harris, Willie; Miller, David W.; Djeffal, Salah; Anderson, Ellen; Couasnon, Olivier; Hagemeyer, Derek; Sovijarvi, Jukka; Amaral, Marcos A.; Tarzia, J.P.; Schmidt, Claudia; Fritioff, Karin; Kaulard, Joerg; Lance, Benoit; Schieber, Caroline; Hayashida, Yoshihisa; Doty, Rick

    2014-01-01

    During its November 2012 meeting, the expert group decided to develop an interim (preliminary) report before the end of 2013 (with a general perspective and discussion of specific severe accident management worker dose issues), and to finalize the report by organizing the international workshop of 2014 to address national experiences, which will be incorporated to the report. The work of the EG-SAM focuses on radiation protection management and organization, radiation protection training and exercises related to severe accident management, facility configuration and readiness, worker protection, radioactive materials, contamination controls and logistics and key lessons learned especially from the TMI, Chernobyl and Fukushima Dai-ichi accidents. This interim report was completed through intensive work of all Group members nominated by the ISOE, and was accomplished during EG-SAM meetings through 2012-2013. This document gathers the different presentations given by the sub expert groups in charge of each chapter of the report

  13. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  14. Energy Systems Group. Annual Progress Report 1984

    DEFF Research Database (Denmark)

    Grohnheit, Poul Erik; Larsen, Hans Hvidtfeldt; Villadsen, B.

    The report describes the work of the Energy Systems Group at Risø National Laboratory during 1984. The activities may be roughly classified as development and use of energy-economy models, energy systems analysis, energy technology assessment and energy planning. The report includes a list of staff...

  15. Final Report: Migration Mechanisms for Large-scale Parallel Applications

    Energy Technology Data Exchange (ETDEWEB)

    Jason Nieh

    2009-10-30

    Process migration is the ability to transfer a process from one machine to another. It is a useful facility in distributed computing environments, especially as computing devices become more pervasive and Internet access becomes more ubiquitous. The potential benefits of process migration, among others, are fault resilience by migrating processes off of faulty hosts, data access locality by migrating processes closer to the data, better system response time by migrating processes closer to users, dynamic load balancing by migrating processes to less loaded hosts, and improved service availability and administration by migrating processes before host maintenance so that applications can continue to run with minimal downtime. Although process migration provides substantial potential benefits and many approaches have been considered, achieving transparent process migration functionality has been difficult in practice. To address this problem, our work has designed, implemented, and evaluated new and powerful transparent process checkpoint-restart and migration mechanisms for desktop, server, and parallel applications that operate across heterogeneous cluster and mobile computing environments. A key aspect of this work has been to introduce lightweight operating system virtualization to provide processes with private, virtual namespaces that decouple and isolate processes from dependencies on the host operating system instance. This decoupling enables processes to be transparently checkpointed and migrated without modifying, recompiling, or relinking applications or the operating system. Building on this lightweight operating system virtualization approach, we have developed novel technologies that enable (1) coordinated, consistent checkpoint-restart and migration of multiple processes, (2) fast checkpointing of process and file system state to enable restart of multiple parallel execution environments and time travel, (3) process migration across heterogeneous

  16. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  17. Scientific programming on massively parallel processor CP-PACS

    International Nuclear Information System (INIS)

    Boku, Taisuke

    1998-01-01

    The massively parallel processor CP-PACS targets various problems in computational physics, and its architecture has been devised to handle a wide range of numerical processing. In this report, an outline of the CP-PACS and a programming example for the Kernel CG benchmark of NAS Parallel Benchmarks, version 1, are shown, and the pseudo-vector processing mechanism and the parallel-processing tuning of scientific and technical computation utilizing the three-dimensional hyper-crossbar network, the two great features of the CP-PACS architecture, are described. The CP-PACS uses PUs based on a RISC processor augmented with a pseudo-vector processor. Pseudo-vector processing is realized as loop processing by scalar instructions. The features of the network connecting the PUs are explained. The algorithm of the NPB version 1 Kernel CG is shown. The part that takes the most processing time in the main loop is the product of a matrix and a vector (matvec), and the parallel processing of the matvec is explained. The computation time on the CPU is determined. As performance evaluation, the execution time, the short-vector processing of the pseudo-vector processor based on a slide window, and a comparison with other parallel computers are reported. (K.I.)
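
    The matvec parallelization described above amounts to partitioning the rows among processors. A schematic version (mine, not the CP-PACS code, and in Python rather than the machine's own language) assigns each worker a contiguous block of rows:

```python
from concurrent.futures import ThreadPoolExecutor

def matvec_parallel(A, x, n_workers=2):
    """y = A @ x with rows partitioned into contiguous blocks per worker."""
    n = len(A)
    def block(lo, hi):
        # each worker computes its slice of the result independently
        return [sum(a * b for a, b in zip(A[i], x)) for i in range(lo, hi)]
    bounds = [(w * n // n_workers, (w + 1) * n // n_workers)
              for w in range(n_workers)]
    with ThreadPoolExecutor(n_workers) as ex:
        blocks = ex.map(lambda lh: block(*lh), bounds)
    return [y for blk in blocks for y in blk]
```

    On a real machine each block's inner loop is also vectorized (the pseudo-vector processing above), and only the vector x needs to be shared among processors.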

  18. Interim Report by Asia International Grid Connection Study Group

    Science.gov (United States)

    Omatsu, Ryo

    2018-01-01

    The Asia International Grid Connection Study Group Interim Report examines the feasibility of developing an international grid connection involving Japan. The Group has investigated different cases of grid connections in Europe, conducted research on electricity markets in Northeast Asia, and identified the barriers and challenges to developing an international grid network including Japan. This presentation introduces the basic contents of the interim report by the Study Group.

  19. Step by step parallel programming method for molecular dynamics code

    International Nuclear Information System (INIS)

    Orii, Shigeo; Ohta, Toshio

    1996-07-01

    Parallel programming of a molecular dynamics simulation code is carried out step by step using the two-phase method. Within a certain range of computing parameters, parallel performance is obtained by do-loop-level parallel programming, which distributes the iterations of do-loops across processors, on both the vector-parallel computer VPP500 and the scalar-parallel computer Paragon. VPP500 shows parallel performance over a wider range of computing parameters. The reason is that the time cost of the program parts that cannot be reduced by do-loop-level parallelization can be made negligible by vectorization; the time-consuming parts of the program are then concentrated in the fewer parts that do benefit from do-loop-level parallelization. This report presents the step-by-step parallel programming method and the parallel performance of the molecular dynamics code on VPP500 and Paragon. (author)
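The do-loop-level decomposition described above can be illustrated with a toy example: the outer index of an O(N^2) pair loop is split across workers. This is a hedged sketch, not the report's code, and the inverse-square potential is a stand-in.

```python
# Illustrative sketch of do-loop-level parallelism: the outer index of an
# O(N^2) molecular-dynamics energy loop is distributed over workers,
# mirroring how do-loop iterations were distributed over processors.
from concurrent.futures import ThreadPoolExecutor

def pair_energy(positions, i):
    # energy contributions of particle i against all later particles
    e = 0.0
    for j in range(i + 1, len(positions)):
        r2 = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j]))
        e += 1.0 / r2          # toy inverse-square potential (stand-in)
    return e

def total_energy(positions, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda i: pair_energy(positions, i),
                            range(len(positions))))

pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(total_energy(pos))  # 1/1 + 1/1 + 1/2 = 2.5
```

The outer iterations are independent, so any scheduler (threads here, processors in the report) can execute them concurrently and sum the partial results.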

  20. Energy Systems Group annual progress report 1984

    International Nuclear Information System (INIS)

    Grohnheit, P.E.; Larsen, H.; Villadsen, B.

    1985-02-01

    The report describes the work of the Energy Systems Group at Risoe National Laboratory during 1984. The activities may be roughly classified as development and use of energy-economy models, energy systems analysis, energy technology assessment and energy planning. The report includes a list of staff members. (author)

  1. A Model of Parallel Kinematics for Machine Calibration

    DEFF Research Database (Denmark)

    Pedersen, David Bue; Bæk Nielsen, Morten; Kløve Christensen, Simon

    2016-01-01

    Parallel kinematics have been adopted by more than 25 manufacturers of high-end desktop 3D printers [Wohlers Report (2015), p.118] as well as by research projects such as the WASP project [WASP (2015)], a 12 meter tall linear delta robot for Additive Manufacture of large-scale components for cons......

  2. Summary Report of Working Group 2: Computation

    International Nuclear Information System (INIS)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-01

    The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors, that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, order-of-magnitude speedups, and details of porting the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one trillion particle simulations, a sustained performance of 0.3 petaflops, and an eight times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV accelerator laser wakefield stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.

  3. Group Music Therapy for Prisoners

    DEFF Research Database (Denmark)

    Chen, Xi Jing; Hannibal, Niels; Xu, Kevin

    2014-01-01

    The prevalence of psychological problems is high in prisons. Many prisoners have unmet needs for appropriate treatments. Although previous studies have suggested music therapy to be a successful treatment modality for prisoners, more rigorous evidence is needed. This parallel randomised controlled study aims to investigate the effectiveness of group music therapy to reduce anxiety and depression, and raise self-esteem in prisoners. One hundred and ninety two inmates from a Chinese prison will be allocated to two groups through randomisation. The experimental group will participate in biweekly group music therapy for 10 weeks (20 sessions) while the control group will be placed on a waitlist. Anxiety, depression and self-esteem will be measured by self-report scales three times: before, at the middle, and at the end of the intervention. Logs by the participants and their daily routine...

  4. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  5. Group EDF annual report 2005 sustainable development; Groupe EDF rapport annuel 2005 developpement durable

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-05-15

    The EDF Group's Sustainable Development Report for 2005 is designed to report on Group commitments, particularly within its Agenda 21, its ethical charter, and the Global Compact. It has also been prepared with reference to external reference frameworks: the Global Reporting Initiative (GRI) guidelines and the French New Economic Regulations (NRE) contained in the May 15, 2001 French law. It contains the Chairman's statement, an evaluation of the commitment to renewing and sharing with all stakeholders, the management of local issues, and EDF's responses to the challenges of the future. Indicators are also provided. (A.L.B.)

  6. EDF Group - Annual Report 2008. Leading the energy change

    International Nuclear Information System (INIS)

    2009-01-01

    The EDF Group is a leading player in the energy industry, present in all areas of the electricity value chain, from generation to trading, along with network management and the natural gas chain. The Group has a sound business model, evenly balanced between regulated and deregulated activities. It is the leader in the French and British electricity markets and has solid positions in Germany and Italy. The Group has a portfolio of 38.1 million customers in Europe and the world's premier nuclear generation fleet. Given its R and D capability, its track record and expertise in nuclear generation and renewable energy, together with its energy eco-efficiency offers, EDF offers competitive solutions that reconcile sustainable economic development and climate preservation. EDF's goal is to deliver solutions that allow every customer to help create a world of competitive, low-carbon energies. This document is EDF Group's annual report for the year 2008. It contains information about Group profile, governance, business, development strategy, sales and marketing, positions in Europe and international activities. The document is made of several reports: the Activity and Sustainable Development Report, the Financial Report, the Sustainable Development Report and the Sustainable Development Indicators.

  7. Design paper: The CapOpus trial: a randomized, parallel-group, observer-blinded clinical trial of specialized addiction treatment versus treatment as usual for young patients with cannabis abuse and psychosis

    DEFF Research Database (Denmark)

    Hjorthøj, Carsten; Fohlmann, Allan; Larsen, Anne-Mette

    2008-01-01

    The major objective of the CapOpus trial is to evaluate the additional effect on cannabis abuse of a specialized addiction treatment program adding group treatment and motivational interviewing to treatment as usual. DESIGN: The trial is designed as a randomized, parallel-group, observer-blinded clinical...

  8. Combustion Dynamics Facility: April 1990 workshop working group reports

    Energy Technology Data Exchange (ETDEWEB)

    Kung, A.H.; Lee, Y.T.

    1990-04-01

    This document summarizes results from a workshop held April 5--7, 1990, on the proposed Combustion Dynamics Facility (CDF). The workshop was hosted by the Lawrence Berkeley Laboratory (LBL) and Sandia National Laboratories (SNL) to provide an opportunity for potential users to learn about the proposed experimental and computational facilities, to discuss the science that could be conducted with such facilities, and to offer suggestions as to how the specifications and design of the proposed facilities might be further refined to address the most visionary scientific opportunities. Some 130 chemical physicists, combustion chemists, and specialists in UV synchrotron radiation sources and free-electron lasers (more than half of whom were from institutions other than LBL and SNL) attended the five plenary sessions and participated in one or more of the nine parallel working group sessions. Seven of these sessions were devoted to broadening and strengthening the scope of CDF scientific opportunities and to detailing the experimental facilities required to realize these opportunities. Two technical working group sessions addressed the design and proposed performance of two of the major CDF experimental facilities. These working groups and their chairpersons are listed below. A full listing of the attendees of the workshop is given in Appendix A. 1 tab.

  9. Self-monitoring of urinary salt excretion as a method of salt-reduction education: a parallel, randomized trial involving two groups.

    Science.gov (United States)

    Yasutake, Kenichiro; Miyoshi, Emiko; Misumi, Yukiko; Kajiyama, Tomomi; Fukuda, Tamami; Ishii, Taeko; Moriguchi, Ririko; Murata, Yusuke; Ohe, Kenji; Enjoji, Munechika; Tsuchihashi, Takuya

    2018-02-20

    The present study aimed to evaluate salt-reduction education using a self-monitoring urinary salt-excretion device. Parallel, randomized trial involving two groups. The following parameters were checked at baseline and endline of the intervention: salt check sheet, eating behaviour questionnaire, 24 h home urine collection, blood pressure before and after urine collection. The intervention group self-monitored urine salt excretion using a self-measuring device for 4 weeks. In the control group, urine salt excretion was measured, but the individuals were not informed of the result. Seventy-eight individuals (control group, n 36; intervention group, n 42) collected two 24 h urine samples from a target population of 123 local resident volunteers. The samples were then analysed. There were no differences in clinical background or related parameters between the two groups. The 24 h urinary Na:K ratio showed a significant decrease in the intervention group (-1·1) compared with the control group (-0·0; P=0·033). Blood pressure did not change in either group. The results of the salt check sheet did not change in the control group but were significantly lower in the intervention group. The score of the eating behaviour questionnaire did not change in the control group, but the intervention group showed a significant increase in eating behaviour stage. Self-monitoring of urinary salt excretion helps to improve 24 h urinary Na:K, salt check sheet scores and stage of eating behaviour. Thus, usage of self-monitoring tools has an educational potential in salt intake reduction.

  10. Vectorization, parallelization and implementation of Quantum molecular dynamics codes (QQQF, MONTEV)

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Kaori [High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan); Kunugi, Tomoaki; Kotake, Susumu; Shibahara, Masahiko

    1998-03-01

    This report describes the parallelization, vectorization and porting of two simulation codes, the quantum molecular dynamics simulation code QQQF and the photon Monte Carlo molecular dynamics simulation code MONTEV, which were developed for analyzing the thermalization of photon energies in molecules and materials. QQQF has been vectorized and parallelized on the Fujitsu VPP, and has been ported from the VPP to the Intel Paragon XP/S and parallelized there. MONTEV has been ported from the VPP to the Paragon and parallelized. (author)

  11. Intensive versus conventional blood pressure monitoring in a general practice population. The Blood Pressure Reduction in Danish General Practice trial: a randomized controlled parallel group trial

    DEFF Research Database (Denmark)

    Klarskov, Pia; Bang, Lia E; Schultz-Larsen, Peter

    2018-01-01

    To compare the effect of a conventional to an intensive blood pressure monitoring regimen on blood pressure in hypertensive patients in the general practice setting. Randomized controlled parallel group trial with 12-month follow-up. One hundred and ten general practices in all regions of Denmark. One thousand forty-eight patients with essential hypertension. Conventional blood pressure monitoring ('usual group') continued usual ad hoc blood pressure monitoring by office blood pressure measurements, while intensive blood pressure monitoring ('intensive group') supplemented this with frequent...... a reduction of blood pressure. Clinical Trials NCT00244660.

  12. Report of the tunnel safety working group

    International Nuclear Information System (INIS)

    Gannon, J.

    1991-04-01

    On 18 February 1991 the Project Manager formed a working group to address the safety guidelines and requirements for the underground facilities during the period of accelerator construction, installation, and commissioning. The following report summarizes the research and discussions conducted by the group and the recommended guidelines for safety during this phase of the project

  13. Parallel iterative solvers and preconditioners using approximate hierarchical methods

    Energy Technology Data Exchange (ETDEWEB)

    Grama, A.; Kumar, V.; Sameh, A. [Univ. of Minnesota, Minneapolis, MN (United States)

    1996-12-31

    In this paper, we report results on the performance, convergence, and accuracy of a parallel GMRES solver for Boundary Element Methods. The solver uses a hierarchical approximate matrix-vector product based on a hybrid Barnes-Hut / Fast Multipole Method. We study the impact of various accuracy parameters on the convergence and show that with minimal loss in accuracy, our solver yields significant speedups. We demonstrate the excellent parallel efficiency and scalability of our solver. The combined speedups from approximation and parallelism represent an improvement of several orders in solution time. We also develop fast and parallelizable preconditioners for this problem. We report on the performance of an inner-outer scheme and a preconditioner based on a truncated Green's function. Experimental results on a 256 processor Cray T3D are presented.
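A minimal sketch of the design point made above: an iterative solver touches the matrix only through its matrix-vector product, so an approximate fast matvec (Barnes-Hut/FMM in the paper) can be swapped in without changing the iteration. For brevity a Jacobi iteration stands in for GMRES, and all names and numbers are illustrative.

```python
# Sketch: an iterative solver parameterized by its matvec. Swapping the
# exact O(n^2) product for a fast approximate one leaves the iteration
# unchanged. Jacobi stands in here for the paper's GMRES.
def jacobi_solve(matvec, diag, b, iters=50):
    x = [0.0] * len(b)
    for _ in range(iters):
        Ax = matvec(x)                     # the only place the matrix is touched
        x = [xi + (bi - axi) / di
             for xi, bi, axi, di in zip(x, b, Ax, diag)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]               # diagonally dominant toy system
exact_matvec = lambda x: [sum(a * v for a, v in zip(row, x)) for row in A]
x = jacobi_solve(exact_matvec, diag=[4.0, 3.0], b=[1.0, 2.0], iters=60)
print([round(v, 6) for v in x])            # converges to the solution of Ax = b
```

In the paper's setting, `matvec` would be the hierarchical Barnes-Hut/FMM approximation, traded against convergence as the accuracy parameters vary.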

  14. Parallelization of ITOUGH2 using PVM

    International Nuclear Information System (INIS)

    Finsterle, Stefan

    1998-01-01

    ITOUGH2 inversions are computationally intensive because the forward problem must be solved many times to evaluate the objective function for different parameter combinations or to numerically calculate sensitivity coefficients. Most of these forward runs are independent of each other and can therefore be performed in parallel. Message passing based on the Parallel Virtual Machine (PVM) system has been implemented in ITOUGH2 to enable parallel processing of ITOUGH2 jobs on a heterogeneous network of Unix workstations. This report describes the PVM system and its implementation in ITOUGH2. Instructions are given for installing PVM, compiling ITOUGH2-PVM for use on a workstation cluster, preparing an ITOUGH2 input file under PVM, and executing an ITOUGH2-PVM application. Examples are discussed, demonstrating the use of ITOUGH2-PVM
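The pattern described, many independent forward runs evaluated in parallel to build finite-difference sensitivities, can be sketched as follows; the forward model here is a hypothetical stand-in, not TOUGH2.

```python
# Hedged sketch of the pattern in the report: independent forward runs for
# parameter perturbations are farmed out to workers (PVM tasks in ITOUGH2,
# a thread pool here). forward_model is a toy stand-in.
from concurrent.futures import ThreadPoolExecutor

def forward_model(params):                 # hypothetical stand-in for a TOUGH2 run
    a, b = params
    return a * a + 3.0 * b

def sensitivities(params, h=1e-6):
    base = forward_model(params)
    perturbed = [list(params) for _ in params]
    for i, p in enumerate(perturbed):
        p[i] += h                          # perturb one parameter per run
    with ThreadPoolExecutor() as pool:     # independent forward runs in parallel
        runs = list(pool.map(forward_model, perturbed))
    return [(r - base) / h for r in runs]  # finite-difference sensitivities

print(sensitivities([2.0, 1.0]))  # approx [4.0, 3.0]: d/da = 2a, d/db = 3
```

Each perturbed run needs no data from the others, which is exactly what makes the master-worker dispatch over a workstation cluster effective.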

  15. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
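The link between undersampling and aliasing described above can be demonstrated numerically: keeping every other k-space (DFT) sample and inverse-transforming at half length folds the signal onto itself, s[n] + s[n+N/2]. This is a toy 1-D illustration, not scanner reconstruction code.

```python
# Toy demonstration that undersampling k-space aliases the image: taking
# every other DFT sample and inverting at half length yields s[n] + s[n+N/2].
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

signal = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
kspace = dft(signal)
aliased = idft(kspace[::2])                # acquire only every other k-space line
print([round(v.real, 6) for v in aliased])  # [6.0, 8.0, 10.0, 12.0] = s[n] + s[n+4]
```

SENSE-type reconstructions unfold exactly this superposition using the distinct spatial sensitivities of the receiver coils.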

  16. Comparative effectiveness of Pilates and yoga group exercise interventions for chronic mechanical neck pain: quasi-randomised parallel controlled study.

    Science.gov (United States)

    Dunleavy, K; Kava, K; Goldberg, A; Malek, M H; Talley, S A; Tutag-Lehr, V; Hildreth, J

    2016-09-01

    To determine the effectiveness of Pilates and yoga group exercise interventions for individuals with chronic neck pain (CNP). Quasi-randomised parallel controlled study. Community, university and private practice settings in four locations. Fifty-six individuals with CNP scoring ≥3/10 on the numeric pain rating scale for >3 months (controls n=17, Pilates n=20, yoga n=19). Exercise participants completed 12 small-group sessions with modifications and progressions supervised by a physiotherapist. The primary outcome measure was the Neck Disability Index (NDI). Secondary outcomes were pain ratings, range of movement and postural measurements collected at baseline, 6 weeks and 12 weeks. Follow-up was performed 6 weeks after completion of the exercise classes (Week 18). NDI decreased significantly in the Pilates {baseline: 11.1 [standard deviation (SD) 4.3] vs Week 12: 6.8 (SD 4.3); mean difference -4.3 (95% confidence interval -1.64 to -6.7); PPilates and yoga group exercise interventions with appropriate modifications and supervision were safe and equally effective for decreasing disability and pain compared with the control group for individuals with mild-to-moderate CNP. Physiotherapists may consider including these approaches in a plan of care. ClinicalTrials.gov NCT01999283. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  17. Spatially parallel processing of within-dimension conjunctions.

    Science.gov (United States)

    Linnell, K J; Humphreys, G W

    2001-01-01

    Within-dimension conjunction search for red-green targets amongst red-blue, and blue-green, nontargets is extremely inefficient (Wolfe et al, 1990 Journal of Experimental Psychology: Human Perception and Performance 16 879-892). We tested whether pairs of red-green conjunction targets can nevertheless be processed spatially in parallel. Participants made speeded detection responses whenever a red-green target was present. Across trials where a second identical target was present, the distribution of detection times was compatible with the assumption that targets were processed in parallel (Miller, 1982 Cognitive Psychology 14 247-279). We show that this was not an artifact of response-competition or feature-based processing. We suggest that within-dimension conjunctions can be processed spatially in parallel. Visual search for such items may be inefficient owing to within-dimension grouping between items.
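The distributional test attributed to Miller (1982) can be sketched as a check of the race-model bound, P(RT < t | both targets) <= P(RT < t | A) + P(RT < t | B); the reaction-time samples below are fabricated for illustration.

```python
# Hypothetical sketch of Miller's (1982) race-model inequality test: if two
# targets are processed by a parallel race of independent channels, the
# redundant-target RT distribution cannot exceed the sum of the
# single-target distributions at any time t.
def ecdf(sample, t):
    return sum(rt <= t for rt in sample) / len(sample)

def violates_race_bound(both, only_a, only_b, ts):
    # a violation at any t rules out a simple race between the channels
    return any(ecdf(both, t) > ecdf(only_a, t) + ecdf(only_b, t) for t in ts)

only_a = [400, 420, 450, 480, 500]   # fabricated single-target RTs (ms)
only_b = [410, 430, 455, 470, 495]
both   = [300, 310, 320, 330, 340]   # far too fast for a race -> violation
print(violates_race_bound(both, only_a, only_b, range(300, 500, 10)))  # True
```

In the study above, the observed redundant-target distribution was compatible with the bound's parallel-processing interpretation rather than violating serial predictions.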

  18. Health in Transportation Working Group 2016 Annual Report

    Science.gov (United States)

    2017-06-30

    The Health in Transportation Working Group 2016 Annual Report provides an overview of the Working Group's activities and accomplishments in 2016, summarizes other USDOT health-related accomplishments, and documents its progress toward the recommend...

  19. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    Science.gov (United States)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  20. pcircle - A Suite of Scalable Parallel File System Tools

    Energy Technology Data Exchange (ETDEWEB)

    2015-10-01

    Most software for file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on the ubiquitous MPI in cluster computing environments and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.
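The parallel checksumming feature can be sketched as chunk-level hashing: workers hash fixed-size chunks independently and the ordered chunk digests are combined into one signature. This is an illustrative sketch, not the pcircle implementation.

```python
# Illustrative sketch (not pcircle's code) of parallel data checksumming:
# fixed-size chunks are hashed independently by workers, then the ordered
# chunk digests are hashed together into one file-level signature.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def chunk_digest(args):
    data, offset, size = args
    return offset, hashlib.sha1(data[offset:offset + size]).hexdigest()

def parallel_checksum(data, chunk_size=4):
    tasks = [(data, off, chunk_size) for off in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:
        digests = sorted(pool.map(chunk_digest, tasks))  # restore chunk order
    combined = "".join(d for _, d in digests)
    return hashlib.sha1(combined.encode()).hexdigest()

print(parallel_checksum(b"hello parallel file system"))
```

Because chunk order is restored before combining, the result is deterministic regardless of which worker finishes first, the property a work-stealing scheduler relies on.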

  1. Abandoned Mine Waste Working Group report

    International Nuclear Information System (INIS)

    1993-01-01

    The Mine Waste Working Group discussed the nature of this class of waste problem and possible contributions to its solution at length. There was a consensus that the mine waste problem presented some fundamental differences from the other classes of waste addressed by the Develop On-Site Innovative Technologies (DOIT) working groups. Contents of this report are: executive summary; stakeholders address the problems; the mine waste program; current technology development programs; problems and issues that need to be addressed; demonstration projects to test solutions; conclusion-next steps; and appendices

  2. Report of the LOFT special review group. Technical report

    International Nuclear Information System (INIS)

    Ross, D.F. Jr.

    1981-02-01

    This report represents the results of the LOFT Special Review Group (LSRG) evaluation of the LOFT program and is submitted to the Commission as an aid in its decision whether to continue NRC support of the LOFT project beyond FY 1982. The principal consensus reached by the LSRG recommends continued NRC support of the LOFT program through FY 1983

  3. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  4. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.
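Two of the molecular dynamics decomposition strategies reviewed can be contrasted with a toy ownership rule: replicated-data assigns atoms round-robin, while a 1-D spatial decomposition assigns atoms to slabs of the simulation box. This is purely illustrative, not code from the review.

```python
# Toy sketch contrasting two decompositions reviewed in the report: in a
# replicated-data scheme processor p owns every p-th atom; in a spatial
# scheme it owns the atoms inside its slab of the simulation box.
def replicated_data_owner(atom_index, num_procs):
    return atom_index % num_procs           # round-robin over atom indices

def spatial_owner(x, box_length, num_procs):
    slab = box_length / num_procs           # 1-D slab decomposition
    return min(int(x / slab), num_procs - 1)

positions = [0.5, 2.4, 7.9, 9.6]            # atom x-coordinates, box length 10
print([replicated_data_owner(i, 2) for i in range(4)])   # [0, 1, 0, 1]
print([spatial_owner(x, 10.0, 2) for x in positions])    # [0, 0, 1, 1]
```

The trade-off the review discusses follows directly: replicated data balances load trivially but communicates all coordinates, while spatial decomposition localizes communication to slab boundaries at the cost of possible load imbalance.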

  5. A randomised, single-blind, single-dose, three-arm, parallel-group study in healthy subjects to demonstrate pharmacokinetic equivalence of ABP 501 and adalimumab.

    Science.gov (United States)

    Kaur, Primal; Chow, Vincent; Zhang, Nan; Moxness, Michael; Kaliyaperumal, Arunan; Markus, Richard

    2017-03-01

    To demonstrate pharmacokinetic (PK) similarity of biosimilar candidate ABP 501 relative to adalimumab reference product from the USA and European Union (EU) and evaluate safety, tolerability and immunogenicity of ABP 501. Randomised, single-blind, single-dose, three-arm, parallel-group study; healthy subjects were randomised to receive ABP 501 (n=67), adalimumab (USA) (n=69) or adalimumab (EU) (n=67) 40 mg subcutaneously. Primary end points were area under the serum concentration-time curve from time 0 extrapolated to infinity (AUCinf) and the maximum observed concentration (Cmax). Secondary end points included safety and immunogenicity. AUCinf and Cmax were similar across the three groups. The geometric mean ratio (GMR) of AUCinf was 1.11 between ABP 501 and adalimumab (USA), and 1.04 between ABP 501 and adalimumab (EU). The GMR of Cmax was 1.04 between ABP 501 and adalimumab (USA) and 0.96 between ABP 501 and adalimumab (EU). The 90% CIs for the GMRs of AUCinf and Cmax were within the prespecified standard PK equivalence criteria of 0.80 to 1.25. Treatment-related adverse events were mild to moderate and were reported for 35.8%, 24.6% and 41.8% of subjects in the ABP 501, adalimumab (USA) and adalimumab (EU) groups; incidence of antidrug antibodies (ADAbs) was similar among the study groups. Results of this study demonstrated PK similarity of ABP 501 with adalimumab (USA) and adalimumab (EU) after a single 40-mg subcutaneous injection. No new safety signals with ABP 501 were identified. The safety and tolerability of ABP 501 was similar to the reference products, and similar ADAb rates were observed across the three groups. EudraCT number 2012-000785-37; Results. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
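The equivalence calculation described can be illustrated with fabricated numbers: the geometric mean ratio is the exponential of the difference of log means, and its 90% CI is compared against the 0.80 to 1.25 window. A normal approximation replaces the t-distribution here for brevity; all values are invented for the example.

```python
# Worked toy example (fabricated data, normal approximation instead of the
# t-distribution) of a bioequivalence check: GMR = exp(diff of log means),
# with the 90% CI compared against the 0.80-1.25 equivalence window.
import math, statistics

def gmr_90ci(test, ref):
    lt = [math.log(v) for v in test]
    lr = [math.log(v) for v in ref]
    diff = statistics.mean(lt) - statistics.mean(lr)
    se = math.sqrt(statistics.variance(lt) / len(lt) +
                   statistics.variance(lr) / len(lr))
    z = statistics.NormalDist().inv_cdf(0.95)   # two-sided 90% CI, z ~ 1.645
    lo, hi = math.exp(diff - z * se), math.exp(diff + z * se)
    return math.exp(diff), lo, hi

gmr, lo, hi = gmr_90ci(test=[95, 102, 110, 99, 105], ref=[92, 100, 104, 98, 101])
print(round(gmr, 3), 0.80 <= lo and hi <= 1.25)  # equivalence if CI inside window
```

Working on the log scale is what turns the ratio of geometric means into a difference of arithmetic means, which is why PK equivalence statistics are computed this way.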

  6. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics, offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review is given of the parallel programming package CPS (Cooperative Processes Software), developed and used at Fermilab for offline reconstruction of Terabytes of data requiring the delivery of hundreds of Vax-Years per experiment. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing, starting with the ACP (Advanced Computer Project) Farms in 1986. The Fermilab UNIX Farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for managing, controlling and monitoring these large systems are also described. Possible future directions for parallel computing in High Energy Physics are given

  7. Nuclear Physics Group progress report

    International Nuclear Information System (INIS)

    Coote, G.E.

    1985-02-01

    This report summarises the work of the Nuclear Physics Group of the Institute of Nuclear Sciences during the period January-December 1983. Commissioning of the EN-tandem electrostatic accelerator continued, with the first proton beam produced in June. Many improvements were made to the vacuum pumping and control systems. Applications of the nuclear microprobe on the 3MV accelerator continued at a good pace, with applications in archaeometry, dental research, studies of glass and metallurgy

  8. Parallelization for X-ray crystal structural analysis program

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Hiroshi [Japan Atomic Energy Research Inst., Tokyo (Japan); Minami, Masayuki; Yamamoto, Akiji

    1997-10-01

    In this report we study the vectorization and parallelization of an X-ray crystal structural analysis program. The target machine is the NEC SX-4, a distributed/shared-memory vector parallel supercomputer. X-ray crystal structural analysis is surveyed, and a new multi-dimensional discrete Fourier transform method is proposed. The new method is designed to have a very long vector length, enabling performance 12.0 times higher than that of the original code. Beyond this vectorization, parallelization by micro-task functions on the SX-4 achieves a 13.7 times acceleration in the multi-dimensional discrete Fourier transform part with 14 CPUs, and a 3.0 times acceleration in the whole program. In total, a 35.9 times acceleration over the original 1-CPU scalar version is achieved with vectorization and parallelization on the SX-4. (author)
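    The speed-up described above comes from recasting the multi-dimensional DFT so that each pass sweeps a long vector. The underlying row-column decomposition can be sketched in pure Python (naive DFTs for illustration only; the SX-4 code is not reproduced here):

```python
import cmath

def dft1(seq):
    """Naive 1-D DFT (O(n^2)), adequate for illustration."""
    n = len(seq)
    return [sum(seq[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def dft2_rowcol(a):
    """2-D DFT by row-column decomposition: 1-D DFTs over rows, then
    over columns. On a vector machine each pass is batched over the
    other axis, which is what yields the long vector lengths."""
    rows = [dft1(r) for r in a]
    cols = [dft1(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def dft2_direct(a):
    """Direct 2-D DFT used to check the decomposition."""
    n, m = len(a), len(a[0])
    return [[sum(a[x][y] * cmath.exp(-2j * cmath.pi * (u * x / n + v * y / m))
                 for x in range(n) for y in range(m))
             for v in range(m)] for u in range(n)]

a = [[1.0, 2.0], [3.0, 4.0]]
b1 = dft2_rowcol(a)
b2 = dft2_direct(a)
```

    The same separability extends to any number of dimensions, one axis per pass.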

  9. Non-financial reporting, CSR frameworks and groups of undertakings

    DEFF Research Database (Denmark)

    Szabó, Dániel Gergely; Sørensen, Karsten Engsig

    2017-01-01

    The recently adopted Directive on non-financial reporting (Directive 2014/95/EU) and several CSR frameworks are based on the assumption that groups of undertakings adopt, report and implement one group policy. This is a very important but also rather unique approach to groups. This article first shows how the Directive as well as a few CSR frameworks are intended to be implemented in groups, and next it discusses potential barriers to doing so. Even though company law does not always facilitate the adoption, communication and implementation of a group CSR policy, it may not in practice be a problem to do so. However, it is shown that doing so may have unforeseen consequences for the parent undertaking. To avoid them, it is recommended to make adjustments to the implementation of the group policy.

  10. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman

    2010-01-01

    Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,

  11. Duloxetine for the management of diabetic peripheral neuropathic pain: evidence-based findings from post hoc analysis of three multicenter, randomized, double-blind, placebo-controlled, parallel-group studies

    DEFF Research Database (Denmark)

    Kajdasz, Daniel K; Iyengar, Smriti; Desaiah, Durisala

    2007-01-01

    peripheral neuropathic pain (DPNP). METHODS: Data were pooled from three 12-week, multicenter, randomized, double-blind, placebo-controlled, parallel-group studies in which patients received 60 mg duloxetine either QD or BID or placebo. NNT was calculated based on rates of response (defined as ≥30...

  12. UTM Data Working Group Demonstration 1: Final Report

    Science.gov (United States)

    Rios, Joseph L.; Mulfinger, Daniel G.; Smith, Irene S.; Venkatesan, Priya; Smith, David R.; Baskaran, Vijayakumar; Wang, Leo

    2017-01-01

    This document summarizes activities defining and executing the first demonstration of the NASA-FAA Research Transition Team (RTT) Data Exchange and Information Architecture (DEIA) working group (DWG). The demonstration focused on testing the interactions between two key components in the future UAS Traffic Management (UTM) System through a collaborative and distributed simulation of key scenarios. The summary incorporates written feedback from each of the participants in the demonstration. In addition to reporting the activities, this report also provides some insight into future steps of this working group.

  13. Working Group Report: Quantum Chromodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, J. M. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)

    2013-10-18

    This is the summary report of the energy frontier QCD working group prepared for Snowmass 2013. We review the status of tools, both theoretical and experimental, for understanding the strong interactions at colliders. We attempt to prioritize important directions that future developments should take. Most of the efforts of the QCD working group concentrate on proton-proton colliders: at 14 TeV, as planned for the next run of the LHC, and at 33 and 100 TeV, possible energies of the colliders that will be necessary to carry on the physics program started at 14 TeV. We also examine QCD predictions and measurements at lepton-lepton and lepton-hadron colliders, and in particular their ability to improve our knowledge of the strong coupling constant and parton distribution functions.

  14. IAEA INTOR workshop report, group 8

    International Nuclear Information System (INIS)

    Tamura, Sanae; Shimada, Ryuichi; Miya, Naoyuki; Shinya, Kichiro; Kishimoto, Hiroshi

    1979-10-01

    This report provides material for discussion in Group 8, Power Supply and Transfer, of the IAEA Workshop on INTOR. A new system for the poloidal field power supply for INTOR is proposed and its overall system design is described. The results of simulation calculation of the system are also given. (author)

  15. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message-passing overhead. Performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
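    Barrel-sort routes keys to processors by contiguous key ranges. The serial skeleton of such a bucket-style integer sort (a sketch of the idea, not the paper's implementation) looks like this:

```python
def bucket_sort_ints(keys, n_buckets=4):
    """Distribute integer keys into contiguous-range buckets (as a
    message-passing sort would route them to processors), then sort
    each bucket locally and concatenate the results."""
    lo, hi = min(keys), max(keys)
    width = (hi - lo) // n_buckets + 1
    buckets = [[] for _ in range(n_buckets)]
    for k in keys:
        buckets[(k - lo) // width].append(k)
    out = []
    for b in buckets:  # in a parallel version each processor sorts its bucket
        out.extend(sorted(b))
    return out

data = [42, 7, 99, 3, 15, 8, 77, 23]
```

    Because bucket ranges are ordered, the per-bucket sorts can run independently and the concatenation needs no merge step.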

  16. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    Energy Technology Data Exchange (ETDEWEB)

    PLIMPTON,STEVEN J.; SEIDEL,DAVID B.; PASIK,MICHAEL F.; COATS,REBECCA S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.

  17. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    International Nuclear Information System (INIS)

    Plimpton, Steven J.; Seidel, David B.; Pasik, Michael F.; Coats, Rebecca S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER
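    One basic load-balancing strategy for particle-in-cell codes like the one above is to cut the grid into contiguous slabs with roughly equal particle counts. A greedy 1-D sketch (illustrative only, not QUICKSILVER's actual scheme):

```python
def balance_slabs(particles_per_cell, n_procs):
    """Greedy 1-D slab decomposition: split a row of cells into
    contiguous slabs whose particle counts are as even as possible,
    so each processor pushes a similar number of particles."""
    total = sum(particles_per_cell)
    target = total / n_procs
    cuts, acc = [], 0
    for i, w in enumerate(particles_per_cell):
        acc += w
        if acc >= target and len(cuts) < n_procs - 1:
            cuts.append(i + 1)
            acc = 0
    bounds = [0] + cuts + [len(particles_per_cell)]
    return [(bounds[j], bounds[j + 1]) for j in range(len(bounds) - 1)]

cells = [10, 10, 80, 10, 10, 80, 10, 10]  # uneven particle density
slabs = balance_slabs(cells, 2)
```

    With this uneven density the two slabs carry 110 and 100 particles, far better than the 30/180 split a naive equal-cell-count cut would produce.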

  18. Comparable long-term efficacy, as assessed by patient-reported outcomes, safety and pharmacokinetics, of CT-P13 and reference infliximab in patients with ankylosing spondylitis: 54-week results from the randomized, parallel-group PLANETAS study.

    Science.gov (United States)

    Park, Won; Yoo, Dae Hyun; Jaworski, Janusz; Brzezicki, Jan; Gnylorybov, Andriy; Kadinov, Vladimir; Sariego, Irmgadt Goecke; Abud-Mendoza, Carlos; Escalante, William Jose Otero; Kang, Seong Wook; Andersone, Daina; Blanco, Francisco; Hong, Seung Suh; Lee, Sun Hee; Braun, Jürgen

    2016-01-20

    CT-P13 (Remsima®, Inflectra®) is a biosimilar of the infliximab reference product (RP; Remicade®) and is approved in Europe and elsewhere, mostly for the same indications as RP. The aim of this study was to compare the 54-week efficacy, immunogenicity, pharmacokinetics (PK) and safety of CT-P13 with RP in patients with ankylosing spondylitis (AS), with a focus on patient-reported outcomes (PROs). This was a multinational, double-blind, parallel-group study in patients with active AS. Participants were randomized (1:1) to receive CT-P13 (5 mg/kg) or RP (5 mg/kg) at weeks 0, 2, 6 and then every 8 weeks up to week 54. To assess responses, standardized assessment tools were used with an intention-to-treat analysis of observed data. Anti-drug antibodies (ADAs), PK parameters, and safety outcomes were also assessed. Of 250 randomized patients (n = 125 per group), 210 (84.0 %) completed 54 weeks of treatment, with similar completion rates between groups. At week 54, Assessment of SpondyloArthritis international Society (ASAS)20 response, ASAS40 response and ASAS partial remission were comparable between treatment groups. Changes from baseline in PROs such as mean Bath Ankylosing Spondylitis Disease Activity Index (BASDAI; CT-P13 -3.1 versus RP -2.8), Bath Ankylosing Spondylitis Functional Index (BASFI; -2.9 versus -2.7), and Short Form Health Survey (SF-36) scores (9.26 versus 10.13 for physical component summary; 7.30 versus 6.54 for mental component summary) were similar between treatment groups. At 54 weeks, 19.5 % and 23.0 % of patients receiving CT-P13 and RP, respectively, had ADAs. All observed PK parameters of CT-P13 and RP, including maximum and minimum serum concentrations, were similar through 54 weeks. The influence of ADAs on PK was similar in the two treatment groups. Most adverse events were mild or moderate in severity. There was no notable difference between treatment groups in the incidence of adverse events, serious adverse events

  19. Interim Report on ISO TC 163 Working Group 3. Annual progress report

    Energy Technology Data Exchange (ETDEWEB)

    Fairey, Philip [Florida Solar Energy Center, Cocoa, FL (United States)

    2009-04-02

    This report covers the initial-year efforts of the International Organization for Standardization (ISO) to develop international standards for rating the energy performance of buildings. The author of this report is a participant in this effort. This report summarizes the activities of the ISO Working Group charged with developing these standards and makes recommendations to the sponsors for future U.S. involvement in this ISO effort.

  20. Parallel single-cell analysis microfluidic platform

    NARCIS (Netherlands)

    van den Brink, Floris Teunis Gerardus; Gool, Elmar; Frimat, Jean-Philippe; Bomer, Johan G.; van den Berg, Albert; le Gac, Severine

    2011-01-01

    We report a PDMS microfluidic platform for parallel single-cell analysis (PaSCAl) as a powerful tool to decipher the heterogeneity found in cell populations. Cells are trapped individually in dedicated pockets, and thereafter, a number of invasive or non-invasive analysis schemes are performed.

  1. Beyond Silence: A Randomized, Parallel-Group Trial Exploring the Impact of Workplace Mental Health Literacy Training with Healthcare Employees.

    Science.gov (United States)

    Moll, Sandra E; Patten, Scott; Stuart, Heather; MacDermid, Joy C; Kirsh, Bonnie

    2018-01-01

    This study sought to evaluate whether a contact-based workplace education program was more effective than standard mental health literacy training in promoting early intervention and support for healthcare employees with mental health issues. A parallel-group, randomised trial was conducted with employees in 2 multi-site Ontario hospitals with the evaluators blinded to the groups. Participants were randomly assigned to 1 of 2 group-based education programs: Beyond Silence (comprising 6 in-person, 2-h sessions plus 5 online sessions co-led by employees who personally experienced mental health issues) or Mental Health First Aid (a standardised 2-day training program led by a trained facilitator). Participants completed baseline, post-group, and 3-mo follow-up surveys to explore perceived changes in mental health knowledge, stigmatized beliefs, and help-seeking/help-outreach behaviours. An intent-to-treat analysis was completed with 192 participants. Differences were assessed using multi-level mixed models accounting for site, group, and repeated measurement. Neither program led to significant increases in help-seeking or help-outreach behaviours. Both programs increased mental health literacy, improved attitudes towards seeking treatment, and decreased stigmatized beliefs, with sustained changes in stigmatized beliefs more prominent in the Beyond Silence group. Beyond Silence, a new contact-based education program customised for healthcare workers was not superior to standard mental health literacy training in improving mental health help-seeking or help-outreach behaviours in the workplace. The only difference was a reduction in stigmatized beliefs over time. Additional research is needed to explore the factors that lead to behaviour change.

  2. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with the effort invested. This paper aims to make a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  3. Online Diagnosis for the Capacity Fade Fault of a Parallel-Connected Lithium Ion Battery Group

    Directory of Open Access Journals (Sweden)

    Hua Zhang

    2016-05-01

    In a parallel-connected battery group (PCBG), capacity degradation is usually caused by inconsistency between a faulty cell and the other normal cells, and this inconsistency arises from two potential causes: an aging-inconsistency fault or a loose-contact fault. In this paper, a novel method is proposed to perform online, real-time capacity fault diagnosis for PCBGs. Firstly, based on an analysis of the parameter variation characteristics of a PCBG under different fault causes, it is found that PCBG resistance can serve as an indicator both for locating the faulty PCBG and for distinguishing the fault causes. On the one hand, the faulty PCBG can be identified by comparing resistance among PCBGs; on the other hand, the two fault causes can be distinguished by comparing the variance of the PCBG resistances. Furthermore, for online applications, a novel recursive-least-squares algorithm with restricted memory and constraint (RLSRMC), in which the constraint is added to eliminate the "imaginary number" phenomena in the parameters, is developed and used for PCBG resistance identification. Lastly, fault simulation and validation results demonstrate that the proposed methods have good accuracy and reliability.
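    The RLSRMC identification builds on the standard recursive-least-squares recursion. A one-parameter sketch (plain RLS with a forgetting factor, without the paper's restricted memory or constraint) that estimates a resistance from current and voltage-drop samples:

```python
def rls_estimate(currents, drops, lam=0.98, theta0=0.0, p0=1e3):
    """One-parameter recursive least squares with forgetting factor:
    estimate resistance R from the model drop = R * current.
    (The paper's RLSRMC adds restricted memory and a constraint to
    keep the parameters physical; this is only the base recursion.)"""
    theta, p = theta0, p0
    for i, v in zip(currents, drops):
        k = p * i / (lam + i * p * i)        # gain
        theta = theta + k * (v - i * theta)  # update estimate
        p = (p - k * i * p) / lam            # update covariance
    return theta

# Synthetic, noiseless data with a true resistance of 0.05 ohm
currents = [1.0, 2.0, 1.5, 3.0, 2.5]
drops = [0.05 * i for i in currents]
R = rls_estimate(currents, drops)
```

    On this noiseless data the estimate converges to the true 0.05 ohm within a few samples; the forgetting factor `lam` lets the estimator track slow resistance drift online.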

  4. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  5. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Francois Bodin

    2009-01-01

    Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware displays a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving the legacy code.

  6. Implementation of a parallel version of a regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Gerstengarbe, F.W. [ed.; Kuecken, M. [Potsdam-Institut fuer Klimafolgenforschung (PIK), Potsdam (Germany); Schaettler, U. [Deutscher Wetterdienst, Offenbach am Main (Germany). Geschaeftsbereich Forschung und Entwicklung

    1997-10-01

    A regional climate model developed by the Max Planck Institute for Meteorology and the German Climate Computing Centre in Hamburg, based on the 'Europa' and 'Deutschland' models of the German Weather Service, has been parallelized and implemented on the IBM RS/6000 SP computer system of the Potsdam Institute for Climate Impact Research, including parallel input/output processing, the explicit Eulerian time-step, the semi-implicit corrections, the normal-mode initialization and the physical parameterizations of the German Weather Service. The implementation utilizes Fortran 90 and the Message Passing Interface. The parallelization strategy used is a 2D domain decomposition. This report describes the parallelization strategy, the parallel I/O organization, the influence of different domain decomposition approaches on static and dynamic load imbalances, and first numerical results. (orig.)
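    A 2D domain decomposition of the kind mentioned above assigns each process a rectangular subdomain of the model grid. A minimal sketch (hypothetical helper, not the model's actual code) of mapping an MPI-style rank to its index ranges:

```python
def subdomain(rank, px, py, nx, ny):
    """2-D domain decomposition: map a rank in a px-by-py process
    grid to the index ranges of its subdomain of an nx-by-ny grid.
    Remainder cells are spread over the first few processes."""
    ix, iy = rank % px, rank // px

    def split(n, p, i):
        base, rem = divmod(n, p)
        start = i * base + min(i, rem)
        return start, start + base + (1 if i < rem else 0)

    return split(nx, px, ix), split(ny, py, iy)

# 100x80 grid distributed over a 4x2 process grid
bounds = [subdomain(r, 4, 2, 100, 80) for r in range(8)]
```

    Each process then exchanges halo rows and columns with its four grid neighbours, which is the communication pattern a 2D decomposition minimises relative to 1D strips.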

  7. Fiscal 2000 report on advanced parallelized compiler technology. Outlines; 2000 nendo advanced heiretsuka compiler gijutsu hokokusho (Gaiyo hen)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    Research and development was carried out on automatic parallelizing compiler technology, which improves the practical performance, cost/performance ratio and ease of use of the multiprocessor systems now used for constructing supercomputers and expected to provide a fundamental architecture for microprocessors in the 21st century. Efforts were made to develop an automatic multigrain parallelization technology for extracting multigrain parallelism from a program and making full use of it, and a parallelizing tuning technology for accelerating parallelization by feeding back to the compiler the dynamic information and user knowledge acquired during execution. Moreover, a benchmark program was selected, and studies were made to set execution rules and evaluation indexes for establishing technologies for evaluating the performance of parallelizing compilers on the existing commercial parallel processing computers, which was achieved through the implementation and evaluation of the 'Advanced parallelizing compiler technology research and development project.' (NEDO)

  8. Report of the Working Group on Publicity and Funding

    DEFF Research Database (Denmark)

    Gammeltoft, Peder

    2017-01-01

    The report presents the aims and activities of the working group and its efforts in raising awareness of the need for geographical names standardization and of the work of the Group of Experts, through a presence on the web and social media and through a Media Kit. The report also highlights efforts to find financial support for training and for representatives from developing countries attending UNCSGN Conferences and UNGEGN Sessions.

  9. A multitransputer parallel processing system (MTPPS)

    International Nuclear Information System (INIS)

    Jethra, A.K.; Pande, S.S.; Borkar, S.P.; Khare, A.N.; Ghodgaonkar, M.D.; Bairi, B.R.

    1993-01-01

    This report describes the design and implementation of a 16-node Multi Transputer Parallel Processing System (MTPPS), a platform for parallel program development. It is a MIMD machine based on the message-passing paradigm. The basic compute engine is an INMOS transputer IMS T800-20. A transputer with local memory constitutes the processing element (NODE) of this MIMD architecture. Multiple NODEs can be connected to each other in an identifiable network topology through the high-speed serial links of the transputer. A Network Configuration Unit (NCU) incorporates the necessary hardware to provide software-controlled network configuration. The system is modularly expandable, and more NODEs can be added to achieve the required processing power. The system is a back end to an IBM PC, which has been integrated into the system to provide the user I/O interface; PC resources are available to the programmer. The interface hardware between the PC and the network of transputers is INMOS compatible, so all commercially available development software compatible with INMOS products can run on this system. While giving the details of design and implementation, this report briefly summarises MIMD architectures, transputer architecture and parallel processing software development issues. A LINPACK performance evaluation of the system and solutions of neutron physics and plasma physics problems are discussed along with results. (author). 12 refs., 22 figs., 3 tabs., 3 appendixes

  10. Effect of probiotic yoghurt on animal-based diet-induced change in gut microbiota: an open, randomised, parallel-group study.

    Science.gov (United States)

    Odamaki, T; Kato, K; Sugahara, H; Xiao, J Z; Abe, F; Benno, Y

    2016-09-01

    Diet has a significant influence on the intestinal environment. In this study, we assessed changes in the faecal microbiota induced by an animal-based diet and the effect of the ingestion of yoghurt supplemented with a probiotic strain on these changes. In total, 33 subjects were enrolled in an open, randomised, parallel-group study. After a seven-day pre-observation period, the subjects were allocated into three groups (11 subjects in each group). All of the subjects were provided with an animal-based diet for five days, followed by a balanced diet for 14 days. Subjects in the first group ingested dairy in the form of 200 g of yoghurt supplemented with Bifidobacterium longum during both the animal-based and balanced diet periods (YAB group). Subjects in the second group ingested yoghurt only during the balanced diet period (YB group). Subjects who did not ingest yoghurt throughout the intervention were used as the control (CTR) group. Faecal samples were collected before and after the animal-based diet was provided and after the balanced diet was provided, followed by analysis by high-throughput sequencing of amplicons derived from the V3-V4 region of the 16S rRNA gene. In the YB and CTR groups, the animal-based diet caused a significant increase in the relative abundance of Bilophila, Odoribacter, Dorea and Ruminococcus (belonging to Lachnospiraceae) and a significant decrease in the level of Bifidobacterium after five days of intake. With the exception of Ruminococcus, these changes were not observed in the YAB group. No significant effect was induced by yoghurt supplementation following an animal-based diet (YB group vs CTR group). These results suggest that the intake of yoghurt supplemented with bifidobacteria played a role in maintaining a normal microbiota composition during the ingestion of a meat-based diet. This study protocol was registered in the University Hospital Medical Information Network: UMIN000014164.

  11. TIS General Safety Group Annual Report 2000

    CERN Document Server

    Weingarten, W

    2001-01-01

    This report summarises the main activities of the General Safety (GS) Group of the Technical Inspection and Safety Division (TIS) during the year 2000, and the results obtained. The different topics in which the Group is active are covered: general safety inspections and ergonomy, electrical, chemistry and gas safety, chemical pollution containment and control, industrial hygiene, the safety of civil engineering works and outside contractors, fire prevention and the safety aspects of the LHC experiments.

  12. Working group report: Cosmology and astroparticle physics

    Indian Academy of Sciences (India)

    This is the report of the cosmology and astroparticle physics working group at ... discussions carried out during the workshop on selected topics in the above fields. ... Theoretical Physics Division, Physical Research Laboratory, Navrangpura, ...

  13. Strength Training Parallel with Plyometric and Cross training Influences on Speed Endurance

    OpenAIRE

    C.C.Chandra Obul Reddy; Dr. K. Rama Subba Reddy

    2017-01-01

    The purpose of the study was to find out the influence of weight training parallel with plyometric and cross training on speed endurance. To achieve this purpose, forty-five male students studying at CSSR & SRRM Degree College, Kamalapuram, YSR (D), Andhra Pradesh, India were randomly selected as subjects during the year 2015-2016. They were divided into three equal groups of fifteen subjects each. Group I underwent weight training parallel with plyometric training for three sessions...

  14. Parallel generation of architecture on the GPU

    KAUST Repository

    Steinberger, Markus

    2014-05-01

    In this paper, we present a novel approach for the parallel evaluation of procedural shape grammars on the graphics processing unit (GPU). Unlike previous approaches that are either limited in the kind of shapes they allow, the amount of parallelism they can take advantage of, or both, our method supports state of the art procedural modeling including stochasticity and context-sensitivity. To increase parallelism, we explicitly express independence in the grammar, reduce inter-rule dependencies required for context-sensitive evaluation, and introduce intra-rule parallelism. Our rule scheduling scheme avoids unnecessary back and forth between CPU and GPU and reduces round trips to slow global memory by dynamically grouping rules in on-chip shared memory. Our GPU shape grammar implementation is multiple orders of magnitude faster than the standard in CPU-based rule evaluation, while offering equal expressive power. In comparison to the state of the art in GPU shape grammar derivation, our approach is nearly 50 times faster, while adding support for geometric context-sensitivity. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
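    The rule-scheduling idea described above, grouping pending derivation work by rule so one batch applies the same rule to many shapes, can be shown with a CPU-side sketch (illustrative rule ids and integer shape handles, not the paper's GPU code):

```python
from collections import defaultdict

def group_by_rule(work_items):
    """Group pending derivation tasks by rule id so that each batch
    executes one rule on many shapes, analogous to the dynamic
    grouping the paper performs in on-chip shared memory to avoid
    divergent execution on the GPU."""
    batches = defaultdict(list)
    for rule_id, shape in work_items:
        batches[rule_id].append(shape)
    return dict(batches)

# Hypothetical work queue: (rule id, shape handle) pairs
work = [("facade", 1), ("window", 2), ("facade", 3), ("window", 4)]
batches = group_by_rule(work)
```

    On a GPU, each batch then maps naturally onto a warp or block executing a single rule kernel, so threads in the batch follow the same control path.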

  15. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  16. ATLAS Future Framework Requirements Group Report

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    The Future Frameworks Requirements Group was constituted in Summer 2013 to consider and summarise the framework requirements from trigger and offline for configuring, scheduling and monitoring the data processing software needed by the ATLAS experiment. The principal motivation for such a re-examination arises from the current and anticipated evolution of CPUs, where multiple cores, hyper-threading and wide vector registers require a shift to a concurrent programming model. Such a model requires extensive changes in the current Gaudi/Athena frameworks and offers the opportunity to consider how HLT and offline processing can be better accommodated within the ATLAS framework. This note contains the report of the Future Frameworks Requirements Group.

  17. Web based parallel/distributed medical data mining using software agents

    Energy Technology Data Exchange (ETDEWEB)

    Kargupta, H.; Stafford, B.; Hamzaoglu, I.

    1997-12-31

    This paper describes an experimental parallel/distributed data mining system PADMA (PArallel Data Mining Agents) that uses software agents for local data accessing and analysis and a web based interface for interactive data visualization. It also presents the results of applying PADMA for detecting patterns in unstructured texts of postmortem reports and laboratory test data for Hepatitis C patients.

  18. Working group report: Cosmology and astroparticle physics

    Indian Academy of Sciences (India)

    This is the report of the cosmology and astroparticle physics working group ... origin of the accelerating Universe: Dark energy and particle cosmology by Y-Y Keum, .... Neutrino oscillations with two and three mass varying supernova neutrinos ...

  19. Transdiagnostic group CBT vs. standard group CBT for depression, social anxiety disorder and agoraphobia/panic disorder

    DEFF Research Database (Denmark)

    Arnfred, Sidse Marie Hemmingsen; Aharoni, Ruth; Pedersen, Morten Hvenegaard

    2017-01-01

    Background: Transdiagnostic Cognitive Behavior Therapy (TCBT) manuals delivered in individual format have been reported to be just as effective as traditional diagnosis-specific CBT manuals. We have translated and modified “The Unified Protocol for Transdiagnostic Treatment of Emotional Disorders” (UP-CBT) for group delivery in the Mental Health Service (MHS), and shown effects comparable to traditional CBT in a naturalistic study. As the use of one manual instead of several diagnosis-specific manuals could simplify logistics, reduce waiting time, and increase therapist expertise compared to diagnosis-specific CBT, we aim to test the relative efficacy of group UP-CBT and diagnosis-specific group CBT. Methods/design: The study is a partially blinded, pragmatic, non-inferiority, parallel, multi-center randomized controlled trial (RCT) of UP-CBT vs diagnosis-specific CBT for Unipolar Depression...

  20. Real-time SHVC software decoding with multi-threaded parallel processing

    Science.gov (United States)

    Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu

    2014-09-01

    This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline designed with multiple threads based on groups of coding tree units (CTU). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel i7 processor 2600 running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for those bitstreams generated with SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads is compared in terms of decoding speed and resource usage, including processor and memory.
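The CTU-group pipeline described above can be sketched as stages connected by queues, with each stage running in its own thread and a group of CTUs as the unit of work. A schematic Python illustration (not the SHM-2.0 implementation; stage functions are placeholders):

```python
import queue
import threading

def stage(fn, inq, outq):
    """Run one pipeline stage: apply fn to each work item, pass a sentinel on."""
    while True:
        item = inq.get()
        if item is None:           # sentinel: shut this stage down
            outq.put(None)
            break
        outq.put(fn(item))

entropy_q, recon_q, done_q = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=(lambda g: ("entropy", g), entropy_q, recon_q)),
    threading.Thread(target=stage, args=(lambda g: ("recon", g[1]), recon_q, done_q)),
]
for t in threads:
    t.start()
for ctu_group in range(4):         # four CTU groups flow through the pipeline
    entropy_q.put(ctu_group)
entropy_q.put(None)
results = []
while (item := done_q.get()) is not None:
    results.append(item)
for t in threads:
    t.join()
```

With one thread per stage and FIFO queues, CTU-group order is preserved while the stages overlap in time, which is the trade-off between parallelism and synchronization the abstract mentions.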

  1. Vectorization, parallelization and porting of nuclear codes. 2001

    International Nuclear Information System (INIS)

    Akiyama, Mitsunaga; Katakura, Fumishige; Kume, Etsuo; Nemoto, Toshiyuki; Tsuruoka, Takuya; Adachi, Masaaki

    2003-07-01

    Several computer codes in the nuclear field have been vectorized, parallelized and transported on the super computer system at Center for Promotion of Computational Science and Engineering in Japan Atomic Energy Research Institute. We dealt with 10 codes in fiscal 2001. In this report, the parallelization of Neutron Radiography for 3 Dimensional CT code NR3DCT, the vectorization of unsteady-state heat conduction code THERMO3D, the porting of initial program of MHD simulation, the tuning of Heat And Mass Balance Analysis Code HAMBAC, the porting and parallelization of Monte Carlo N-Particle transport code MCNP4C3, the porting and parallelization of Monte Carlo N-Particle transport code system MCNPX2.1.5, the porting of induced activity calculation code CINAC-V4, the use of VisLink library in multidimensional two-fluid model code ACD3D and the porting of experiment data processing code from GS8500 to SR8000 are described. (author)

  2. Parallelization of a three-dimensional whole core transport code DeCART

    Energy Technology Data Exchange (ETDEWEB)

    Jin Young, Cho; Han Gyu, Joo; Ha Yong, Kim; Moon-Hee, Chang [Korea Atomic Energy Research Institute, Yuseong-gu, Daejon (Korea, Republic of)]

    2003-07-01

    Parallelization of the DeCART (deterministic core analysis based on ray tracing) code is presented, which reduces the tremendous computing time and memory required in three-dimensional whole-core transport calculations. The parallelization employs the concept of MPI grouping as well as an MPI/OpenMP mixed scheme. Since most of the computing time and memory are used in the MOC (method of characteristics) and multi-group CMFD (coarse mesh finite difference) calculations in DeCART, variables and subroutines related to these two modules are the primary targets for parallelization. Specifically, the ray tracing module was parallelized using a planar domain decomposition scheme and an angular domain decomposition scheme. The parallel performance of the DeCART code is evaluated by solving a rodded variation of the C5G7MOX three-dimensional benchmark problem and a simplified three-dimensional SMART PWR core problem. In the C5G7MOX problem with 24 CPUs, a maximum speedup of 21 is obtained on an IBM Regatta machine and 22 on a LINUX cluster in the MOC kernel, which indicates good parallel performance of the DeCART code. In the simplified SMART problem, the memory requirement of about 11 GBytes in the single-processor case is reduced to 940 MBytes with 24 processors, which means that the DeCART code can now solve large core problems with affordable LINUX clusters. (authors)
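The speedup figures quoted in the record translate directly into parallel efficiency, i.e. speedup divided by processor count. A two-line check:

```python
def efficiency(speedup, nprocs):
    """Parallel efficiency: fraction of ideal linear speedup achieved."""
    return speedup / nprocs

# IBM Regatta: speedup 21 on 24 CPUs; LINUX cluster: speedup 22 on 24 CPUs
regatta = efficiency(21, 24)   # about 0.875
cluster = efficiency(22, 24)   # about 0.917
```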

  3. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  4. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  5. Parallel family trees for transfer matrices in the Potts model

    Science.gov (United States)

    Navarro, Cristobal A.; Canfora, Fabrizio; Hitschfeld, Nancy; Navarro, Gonzalo

    2015-02-01

    The computational cost of transfer matrix methods for the Potts model is related to the question: in how many ways can two layers of a lattice be connected? Answering the question leads to the generation of a combinatorial set of lattice configurations. This set defines the configuration space of the problem, and the smaller it is, the faster the transfer matrix can be computed. The configuration space of generic (q, v) transfer matrix methods for strips is in the order of the Catalan numbers, which grows asymptotically as O(4^m), where m is the width of the strip. Other transfer matrix methods with a smaller configuration space indeed exist, but they make assumptions on the temperature or number of spin states, or restrict the structure of the lattice. In this paper we propose a parallel algorithm that uses a sub-Catalan configuration space of O(3^m) to build the generic (q, v) transfer matrix in a compressed form. The improvement is achieved by grouping the original set of Catalan configurations into a forest of family trees, in such a way that the solution to the problem is now computed by solving the root node of each family. As a result, the algorithm becomes exponentially faster than the Catalan approach while still highly parallel. The resulting matrix is stored in a compressed form using O(3^m × 4^m) space, making numerical evaluation and decompression faster than evaluating the matrix in its O(4^m × 4^m) uncompressed form. Experimental results for different sizes of strip lattices show that the parallel family trees (PFT) strategy indeed runs exponentially faster than the Catalan Parallel Method (CPM), especially when dealing with dense transfer matrices. In terms of parallel performance, we report strong-scaling speedups of up to 5.7× when running on an 8-core shared memory machine and 28× for a 32-core cluster. The best balance of speedup and efficiency for the multi-core machine was achieved when using p = 4 processors, while for the cluster
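To see the gap the abstract describes, one can compare the Catalan count with 3^m directly: the Catalan numbers overtake 3^m near m = 17 and then grow exponentially faster, which is exactly the regime where a sub-Catalan O(3^m) configuration space pays off. A short computation:

```python
from math import comb

def catalan(m):
    """m-th Catalan number: C(2m, m) / (m + 1)."""
    return comb(2 * m, m) // (m + 1)

for m in (8, 16, 17, 24):
    print(m, catalan(m), 3 ** m)
# the Catalan count crosses above 3^m at around m = 17 and the ratio
# keeps growing, since catalan(m) ~ 4^m / (m^1.5 * sqrt(pi))
```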

  6. Working Group Report: Higgs Boson

    Energy Technology Data Exchange (ETDEWEB)

    Dawson, Sally; Gritsan, Andrei; Logan, Heather; Qian, Jianming; Tully, Chris; Van Kooten, Rick; et al.

    2013-10-30

    This report summarizes the work of the Energy Frontier Higgs Boson working group of the 2013 Community Summer Study (Snowmass). We identify the key elements of a precision Higgs physics program and document the physics potential of future experimental facilities as elucidated during the Snowmass study. We study Higgs couplings to gauge boson and fermion pairs, double Higgs production for the Higgs self-coupling, its quantum numbers and $CP$-mixing in Higgs couplings, the Higgs mass and total width, and prospects for direct searches for additional Higgs bosons in extensions of the Standard Model. Our report includes projections of measurement capabilities from detailed studies of the Compact Linear Collider (CLIC), a Gamma-Gamma Collider, the International Linear Collider (ILC), the Large Hadron Collider High-Luminosity Upgrade (HL-LHC), Very Large Hadron Colliders up to 100 TeV (VLHC), a Muon Collider, and a Triple-Large Electron Positron Collider (TLEP).

  7. Nuclear Physics Group progress report

    International Nuclear Information System (INIS)

    Coote, G.E.

    1985-07-01

    This report summarises the work of the Nuclear Physics Group of the Institute of Nuclear Sciences during the period January-December 1984. Commissioning of the EN-tandem accelerator was completed. The first applications included the production of ¹³N from a water target and the measurement of hydrogen depth profiles with a ¹⁹F beam. Further equipment was built for tandem accelerator mass spectrometry, but the full facility will not be ready until 1985. The nuclear microprobe on the 3 MV accelerator was used for many studies in archaeometry, metallurgy, biology and materials analysis.

  8. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  9. Final Report of the Advanced Coal Technology Work Group

    Science.gov (United States)

    The Advanced Coal Technology Work Group reported to the Clean Air Act Advisory Committee; this page contains the Work Group's final report to that committee.

  10. Vipie: web pipeline for parallel characterization of viral populations from multiple NGS samples.

    Science.gov (United States)

    Lin, Jake; Kramna, Lenka; Autio, Reija; Hyöty, Heikki; Nykter, Matti; Cinek, Ondrej

    2017-05-15

    Next generation sequencing (NGS) technology allows laboratories to investigate virome composition in clinical and environmental samples in a culture-independent way. There is a need for bioinformatic tools capable of parallel processing of virome sequencing data by exactly identical methods: this is especially important in studies of multifactorial diseases, or in parallel comparison of laboratory protocols. We have developed a web-based application allowing direct upload of sequences from multiple virome samples using custom parameters. The samples are then processed in parallel using an identical protocol, and can be easily reanalyzed. The pipeline performs de-novo assembly, taxonomic classification of viruses as well as sample analyses based on user-defined grouping categories. Tables of virus abundance are produced from cross-validation by remapping the sequencing reads to a union of all observed reference viruses. In addition, read sets and reports are created after processing unmapped reads against known human and bacterial ribosome references. Secured interactive results are dynamically plotted with population and diversity charts, clustered heatmaps and a sortable and searchable abundance table. The Vipie web application is a unique tool for multi-sample metagenomic analysis of viral data, producing searchable hits tables, interactive population maps, alpha diversity measures and clustered heatmaps that are grouped in applicable custom sample categories. Known references such as human genome and bacterial ribosomal genes are optionally removed from unmapped ('dark matter') reads. Secured results are accessible and shareable on modern browsers. Vipie is a freely available web-based tool whose code is open source.

  11. The Effects of Stress and Executive Functions on Decision Making in an Executive Parallel Task

    OpenAIRE

    McGuigan, Brian

    2016-01-01

    The aim of this study was to investigate the effects of acute stress on parallel task performance with the Game of Dice Task (GDT) to measure decision making and the Stroop test. Two previous studies have found that the combination of stress and a parallel task with the GDT and an executive functions task preserved performance on the GDT for a stress group compared to a control group. The purpose of this study was to create and use a new parallel task with the GDT and the Stroop test to elu...

  12. Multi-core parallelism in a column-store

    NARCIS (Netherlands)

    Gawade, M.M.

    2017-01-01

    The research reported in this thesis addresses several challenges of improving the efficiency and effectiveness of parallel processing of analytical database queries on modern multi- and many-core systems, using an open-source column-oriented analytical database management system, MonetDB, for

  13. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, such as CT/MR image reconstruction or DSA real-time subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such an algorithm. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and on how to find the appropriate algorithms. Finally, some results on the computation time and the usefulness of median filtering in radiographic imaging are given.
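The view of the median filter as a complete sort of the k window values, generalizing to rank-order operators, can be made concrete with a short (serial) sketch: sorting each 3×3 window and selecting a rank, where rank 4 of the 9 values is the median. This is an illustration of the principle, not the paper's pipeline implementation:

```python
def rank_filter(img, rank):
    """Apply a 3x3 rank-order filter to the interior of a 2D list image."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # borders are kept unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(
                img[y + dy][x + dx]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
            )
            out[y][x] = window[rank]       # rank 4 of 9 values = median
    return out

img = [[0, 0, 0, 0],
       [0, 9, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
filtered = rank_filter(img, 4)             # the isolated spike is removed
```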

  14. A massively parallel strategy for STR marker development, capture, and genotyping.

    Science.gov (United States)

    Kistler, Logan; Johnson, Stephen M; Irwin, Mitchell T; Louis, Edward E; Ratan, Aakrosh; Perry, George H

    2017-09-06

    Short tandem repeat (STR) variants are highly polymorphic markers that facilitate powerful population genetic analyses. STRs are especially valuable in conservation and ecological genetic research, yielding detailed information on population structure and short-term demographic fluctuations. Massively parallel sequencing has not previously been leveraged for scalable, efficient STR recovery. Here, we present a pipeline for developing STR markers directly from high-throughput shotgun sequencing data without a reference genome, and an approach for highly parallel target STR recovery. We employed our approach to capture a panel of 5000 STRs from a test group of diademed sifakas (Propithecus diadema, n = 3), endangered Malagasy rainforest lemurs, and we report extremely efficient recovery of targeted loci: 97.3-99.6% of STRs characterized with ≥10x non-redundant sequence coverage. We then tested our STR capture strategy on P. diadema fecal DNA, and report robust initial results and suggestions for future implementations. In addition to STR targets, this approach also generates large, genome-wide single nucleotide polymorphism (SNP) panels from flanking regions. Our method provides a cost-effective and scalable solution for rapid recovery of large STR and SNP datasets in any species without needing a reference genome, and can be used even with suboptimal DNA more easily acquired in conservation and ecological studies. Published by Oxford University Press on behalf of Nucleic Acids Research 2017.

  15. QuASAR-MPRA: accurate allele-specific analysis for massively parallel reporter assays.

    Science.gov (United States)

    Kalita, Cynthia A; Moyerbrailean, Gregory A; Brown, Christopher; Wen, Xiaoquan; Luca, Francesca; Pique-Regi, Roger

    2018-03-01

    The majority of the human genome is composed of non-coding regions containing regulatory elements such as enhancers, which are crucial for controlling gene expression. Many variants associated with complex traits are in these regions, and may disrupt gene regulatory sequences. Consequently, it is important to not only identify true enhancers but also to test if a variant within an enhancer affects gene regulation. Recently, allele-specific analysis in high-throughput reporter assays, such as massively parallel reporter assays (MPRAs), have been used to functionally validate non-coding variants. However, we are still missing high-quality and robust data analysis tools for these datasets. We have further developed our method for allele-specific analysis QuASAR (quantitative allele-specific analysis of reads) to analyze allele-specific signals in barcoded read counts data from MPRA. Using this approach, we can take into account the uncertainty on the original plasmid proportions, over-dispersion, and sequencing errors. The provided allelic skew estimate and its standard error also simplifies meta-analysis of replicate experiments. Additionally, we show that a beta-binomial distribution better models the variability present in the allelic imbalance of these synthetic reporters and results in a test that is statistically well calibrated under the null. Applying this approach to the MPRA data, we found 602 SNPs with significant (false discovery rate 10%) allele-specific regulatory function in LCLs. We also show that we can combine MPRA with QuASAR estimates to validate existing experimental and computational annotations of regulatory variants. Our study shows that with appropriate data analysis tools, we can improve the power to detect allelic effects in high-throughput reporter assays. http://github.com/piquelab/QuASAR/tree/master/mpra. fluca@wayne.edu or rpique@wayne.edu. Supplementary data are available online at Bioinformatics. © The Author (2017). Published by
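The beta-binomial model the record uses to absorb over-dispersion in barcoded read counts is easy to reproduce from first principles. A stdlib-only sketch of the beta-binomial pmf (this is a generic reimplementation with illustrative parameters, not QuASAR's estimation code):

```python
from math import exp, lgamma

def log_beta(a, b):
    """Log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    """P(K = k) for K ~ BetaBinomial(n, a, b) = C(n,k) B(k+a, n-k+b) / B(a, b)."""
    log_choose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_choose + log_beta(k + a, n - k + b) - log_beta(a, b))

# With a = b the mean allele ratio is balanced (0.5), but small a and b give
# much heavier tails than Binomial(n, 0.5): that extra spread is the
# over-dispersion a plain binomial test would miscalibrate on.
p_balanced = betabinom_pmf(10, 20, 2.0, 2.0)
p_extreme = betabinom_pmf(19, 20, 2.0, 2.0)
```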

  16. A parallel form of the Gudjonsson Suggestibility Scale.

    Science.gov (United States)

    Gudjonsson, G H

    1987-09-01

    The purpose of this study is twofold: (1) to present a parallel form of the Gudjonsson Suggestibility Scale (GSS, Form 1); (2) to study test-retest reliabilities of interrogative suggestibility. Three groups of subjects were administered the two suggestibility scales in a counterbalanced order. Group 1 (28 normal subjects) and Group 2 (32 'forensic' patients) completed both scales within the same testing session, whereas Group 3 (30 'forensic' patients) completed the two scales between one week and eight months apart. All the correlations were highly significant, giving support for high 'temporal consistency' of interrogative suggestibility.

  17. Parallel computation for distributed parameter system-from vector processors to Adena computer

    Energy Technology Data Exchange (ETDEWEB)

    Nogi, T

    1983-04-01

    Research on advanced parallel hardware and software architectures for very high-speed computation deserves and needs more support and attention to fulfil its promise. Novel architectures for parallel processing are being made ready. Architectures for parallel processing can be roughly divided into two groups. One is a vector processor in which a single central processing unit involves multiple vector-arithmetic registers. The other is a processor array in which slave processors are connected to a host processor to perform parallel computation. In this review, the concept and data structure of the Adena (alternating-direction edition nexus array) architecture, which is conformable to distributed-parameter simulation algorithms, are described. 5 references.

  18. Cache-aware data structure model for parallelism and dynamic load balancing

    International Nuclear Information System (INIS)

    Sridi, Marwa

    2016-01-01

    This PhD thesis is dedicated to the implementation of innovative parallel methods in the framework of fast transient fluid-structure dynamics. It improves existing methods within the EUROPLEXUS software in order to optimize the shared-memory parallel strategy, complementary to the original distributed-memory approach, bringing the two together into a global hybrid strategy for clusters of multi-core nodes. Starting from a sound analysis of the state of the art concerning data structuring techniques correlated to the hierarchical memory organization of current multi-processor architectures, the proposed work introduces an approach suitable for explicit time integration (i.e. with no linear system to solve at each step). A data structure of type 'structure of arrays' is conserved for the global data storage, providing flexibility and efficiency for current operations on kinematics fields (displacement, velocity and acceleration). On the contrary, in the particular case of elementary operations (generic internal force computations, as well as flux computations between cell faces for fluid models), which are particularly time consuming but localized in the program, a temporary data structure of type 'array of structures' is used instead, to force an efficient filling of the cache memory and increase the performance of the resolution, for both serial and shared-memory parallel processing. Switching from the global structure to the temporary one is based on a cell grouping strategy, following classic cache-blocking principles but specifically handling, for this work, the neighboring data necessary for the efficient treatment of ALE fluxes for cells on the group boundaries. The proposed approach is extensively tested, from the points of view of both the computation time and the access failures into cache memory, confronting the gains obtained within the elementary operations with the potential overhead generated by the data structure switch. Obtained results are very satisfactory, especially
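The switch the thesis describes, from a global structure-of-arrays (SoA) to a temporary array-of-structures (AoS) gathered per cell group, can be sketched in a few lines. Field and variable names below are illustrative, not EUROPLEXUS identifiers:

```python
# Global SoA storage: one contiguous array per kinematics field.
soa = {
    "displacement": [0.0, 0.1, 0.2, 0.3],
    "velocity":     [1.0, 1.1, 1.2, 1.3],
}

def gather_group(soa, cell_ids):
    """Build the temporary AoS for one cache-sized cell group: all fields
    of each cell packed together, so an element kernel touches contiguous data."""
    return [
        {field: values[i] for field, values in soa.items()}
        for i in cell_ids
    ]

group = gather_group(soa, [1, 3])  # the cells of one cache-blocking group
```

In a compiled implementation the gather copies into a fixed-size scratch buffer, and the cost of the copy is what the thesis weighs against the cache hits it buys inside the elementary operations.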

  19. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  20. Parallel tools GUI framework-DOE SBIR phase I final technical report

    Energy Technology Data Exchange (ETDEWEB)

    Galarowicz, James [Argo Navis Technologies LLC., Annapolis, MD (United States)]

    2013-12-05

    Many parallel performance, profiling, and debugging tools require a graphical way of displaying the very large datasets typically gathered from high performance computing (HPC) applications. Most tool projects create their graphical user interfaces (GUI) from scratch, many times spending their project resources on simply redeveloping commonly used infrastructure. Our goal was to create a multiplatform GUI framework, based on Nokia/Digia’s popular Qt libraries, which will specifically address the needs of these parallel tools. The Parallel Tools GUI Framework (PTGF) uses a plugin architecture facilitating rapid GUI development and reduced development costs for new and existing tool projects by allowing the reuse of many common GUI elements, called “widgets.” Widgets created include 2D data visualizations, a source code viewer with syntax highlighting, and integrated help and welcome screens. Application programming interface (API) design was focused on minimizing the time to getting a functional tool working. Having a standard, unified, and user-friendly interface which operates on multiple platforms will benefit HPC application developers by reducing training time and allowing users to move between tools rapidly during a single session. However, Argo Navis Technologies LLC will not be submitting a DOE SBIR Phase II proposal and commercialization plan for the PTGF project. Our preliminary estimates for gross income over the next several years were based upon initial customer interest and income generated by similar projects. Unfortunately, as we further assessed the market during Phase I, we grew to realize that there was not enough demand to warrant such a large investment. While we do find that the project is worth our continued investment of time and money, we do not think it worthy of the DOE's investment at this time. We are grateful that the DOE has afforded us the opportunity to make this assessment, and come to this conclusion.

  1. State of the art of parallel scientific visualization applications on PC clusters

    Energy Technology Data Exchange (ETDEWEB)

    Juliachs, M

    2004-07-01

    In this state of the art on parallel scientific visualization applications on PC clusters, we deal with both surface and volume rendering approaches. We first analyze available PC cluster configurations and existing software components for parallel graphics rendering. CEA/DIF has been studying cluster visualization since 2001, and this report is part of a study to set up a new visualization research platform. This platform, consisting of an eight-node PC cluster under Linux and a tiled display, was installed in collaboration with Versailles-Saint-Quentin University in August 2003. (author)

  2. Kalman Filter Tracking on Parallel Architectures

    International Nuclear Information System (INIS)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2016-01-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment
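The track fitting discussed in the record rests on the standard Kalman filter measurement update. A one-dimensional sketch of that update (illustrative only, not the experiments' code) shows the simple arithmetic that gets vectorized across many track candidates at once on Xeon and Xeon Phi:

```python
def kalman_update(x, p, z, r):
    """One scalar measurement update: state estimate x with variance p,
    measurement z with variance r. Returns the updated (x, p)."""
    k = p / (p + r)            # Kalman gain: how much to trust the measurement
    x_new = x + k * (z - x)    # blend prediction and measurement
    p_new = (1 - k) * p        # updated variance always shrinks
    return x_new, p_new

# prior x=0 with variance 4, measurement z=2 with variance 1:
x, p = kalman_update(0.0, 4.0, 2.0, 1.0)   # gain 4/5, so x -> 1.6, p -> 0.8
```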

  3. Parallel Scaling Characteristics of Selected NERSC User Project Codes

    Energy Technology Data Exchange (ETDEWEB)

    Skinner, David; Verdier, Francesca; Anand, Harsh; Carter, Jonathan; Durst, Mark; Gerber, Richard

    2005-03-05

    This report documents parallel scaling characteristics of NERSC user project codes between Fiscal Year 2003 and the first half of Fiscal Year 2004 (Oct 2002-March 2004). The codes analyzed cover 60% of all the CPU hours delivered during that time frame on seaborg, a 6080 CPU IBM SP and the largest parallel computer at NERSC. The scale in terms of concurrency and problem size of the workload is analyzed. Drawing on batch queue logs, performance data and feedback from researchers we detail the motivations, benefits, and challenges of implementing highly parallel scientific codes on current NERSC High Performance Computing systems. An evaluation and outlook of the NERSC workload for Allocation Year 2005 is presented.

  4. Algorithms for parallel and vector computations

    Science.gov (United States)

    Ortega, James M.

    1995-01-01

    This is a final report on work performed under NASA grant NAG-1-1112-FOP during the period March 1990 through February 1995. Four major topics are covered: (1) solution of nonlinear Poisson-type equations; (2) a parallel reduced-system conjugate gradient method; (3) orderings for conjugate gradient preconditioners; and (4) SOR as a preconditioner.

  5. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work, first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements are reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
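    The core problem in this record, assigning a chain of m modules to n processors, can be illustrated with a simple contiguous-partition variant. The sketch below minimizes the bottleneck (the maximum per-processor load) by binary search over candidate capacities; it is an illustrative stand-in, not the paper's O(nm log m) algorithm:

```python
def min_bottleneck(loads, n):
    """Smallest achievable maximum per-processor load when the module
    chain `loads` is split into at most `n` contiguous groups."""
    def feasible(cap):
        procs, current = 1, 0
        for w in loads:
            if w > cap:
                return False
            if current + w > cap:          # start filling the next processor
                procs, current = procs + 1, w
            else:
                current += w
        return procs <= n

    lo, hi = max(loads), sum(loads)
    while lo < hi:                          # binary search on the bottleneck value
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

    For example, splitting module loads [2, 3, 4, 5] over two processors gives a best bottleneck of 9 (either [2, 3 | 4, 5] or [2, 3, 4 | 5]).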

  6. Resolution of the neutron transport equation by massively parallel computer in the Cronos code

    International Nuclear Information System (INIS)

    Zardini, D.M.

    1996-01-01

    The feasibility of parallel resolution of neutron transport problems by the SN module of the CRONOS code is studied here. In this report we give the first data on parallel resolution of the transport equation by decomposition of the angular variable. Problems concerning parallel resolution by decomposition of the spatial variable and memory storage limits are also explained. (author)

  7. IAEA INTOR Workshop report, group 12

    International Nuclear Information System (INIS)

    1980-01-01

    This report gives the material for the IAEA INTOR Workshop data-base discussion in Group 12, Start-up, Burn and Shutdown. A number of problem areas, from the generation of a plasma to the termination of the discharge, are covered; these should be assessed to develop a scenario for sustaining a plasma for the whole duration of a pulse. The reactor-relevant burn pulse is also assessed. (author)

  8. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    International Nuclear Information System (INIS)

    Orii, Shigeo

    1998-06-01

    A benchmark specification for the performance evaluation of parallel computers for numerical analysis is proposed. Level 1 benchmarking, a conventional type of benchmark based on processing time, measures the performance of a computer running a code. Level 2 benchmarking, proposed in this report, explains the reasons for that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification for a molecular dynamics code. As a result, the main factors limiting parallel performance are found to be the maximum bandwidth and the start-up time of communication between nodes. The start-up time in particular is proportional not only to the number of processors but also to the number of particles. (author)
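    Level 2 benchmarking of this kind attributes performance to measurable causes such as communication start-up time and bandwidth. The standard latency-bandwidth (alpha-beta) model behind that analysis can be sketched as follows; all parameter values are illustrative, not the SP2's actual figures:

```python
def comm_time(nbytes, latency_s, bandwidth_bps):
    """Latency-bandwidth (alpha-beta) model of a single message."""
    return latency_s + nbytes / bandwidth_bps

def crossover_bytes(latency_s, bandwidth_bps):
    """Message size below which the start-up (latency) term dominates."""
    return latency_s * bandwidth_bps
```

    With a 10-microsecond start-up and 1 GB/s bandwidth, any message under about 10 KB is start-up dominated, which is consistent with the abstract's finding that start-up cost, not bandwidth, limits codes that exchange many small messages.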

  9. Parallel computing for homogeneous diffusion and transport equations in neutronics; Calcul parallele pour les equations de diffusion et de transport homogenes en neutronique

    Energy Technology Data Exchange (ETDEWEB)

    Pinchedez, K

    1999-06-01

    Parallel computing meets the ever-increasing requirements for neutronic computer code speed and accuracy. In this work, two different approaches have been considered. We first parallelized the sequential algorithm used by the neutronics code CRONOS, developed at the French Atomic Energy Commission. The algorithm computes the dominant eigenvalue associated with the PN simplified transport equations by a mixed finite element method. Several parallel algorithms have been developed for distributed memory machines. The performance of the parallel algorithms has been studied experimentally by implementation on a Cray T3D and theoretically by complexity models. A comparison of various parallel algorithms confirmed the chosen implementations. We next applied a domain sub-division technique to the two-group diffusion eigenproblem. In this modal-synthesis-based method, the global spectrum is determined from the partial spectra associated with the sub-domains. The eigenproblem is then expanded on a family composed of eigenfunctions associated with the sub-domains on the one hand, and of functions representing the contribution of the interfaces between sub-domains on the other. For a 2-D homogeneous core, this modal method has been validated and its accuracy has been measured. (author)
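    The dominant-eigenvalue computation mentioned above is, at its core, a power-type iteration. The dense-matrix sketch below shows only that abstract iteration (CRONOS actually works on a mixed finite element discretization, and the matrix here is purely illustrative):

```python
def power_iteration(A, x, iters=200):
    """Return an estimate of the dominant eigenvalue (by magnitude) of a
    small dense matrix A, together with the corresponding eigenvector."""
    lam = 0.0
    for _ in range(iters):
        y = [sum(a * b for a, b in zip(row, x)) for row in A]  # y = A x
        lam = max(abs(v) for v in y)       # infinity-norm as eigenvalue estimate
        x = [v / lam for v in y]           # renormalize for the next iterate
    return lam, x
```

    Parallelizing this kernel amounts to distributing the matrix-vector product and the norm/reduction across processors, which is exactly the kind of decomposition the record's parallel algorithms address.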

  10. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: ``Can parallel computers be used to do large-scale scientific computations?'' As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  11. Linear Collider Working Group reports from Snowmass '88

    International Nuclear Information System (INIS)

    Ruth, R.D.

    1989-03-01

    This report contains a summary of the Linear Collider Working Group. Papers on the following topics are discussed: parameters; damping ring; bunch compressor; linac; final focus; and multibunch effects

  12. Multilevel Parallelization of AutoDock 4.2

    Directory of Open Access Journals (Sweden)

    Norgan Andrew P

    2011-04-01

    Full Text Available Abstract Background: Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Results: Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Conclusions: Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.

  13. Multilevel Parallelization of AutoDock 4.2.

    Science.gov (United States)

    Norgan, Andrew P; Coffman, Paul K; Kocher, Jean-Pierre A; Katzmann, David J; Sosa, Carlos P

    2011-04-28

    Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.
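    The two ideas in this record, distributing independent docking jobs across workers while reusing the expensive grid maps from docking to docking, can be mimicked in miniature with Python threads and a cache. Everything here is a hypothetical stand-in (the names, the "grid map," and the scoring function are not AutoDock's API); Python threads stand in for the MPI rank pool:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

@lru_cache(maxsize=None)
def load_grid_map(receptor):
    # Stand-in for the expensive grid-map I/O; the cache mimics mpAD4's
    # reuse of grid maps from docking to docking on each node.
    return sum(ord(c) for c in receptor)

def dock(job):
    receptor, ligand = job
    grid = load_grid_map(receptor)            # loaded once, reused per ligand
    return ligand, (grid + len(ligand)) % 97  # toy "docking score"

jobs = [("receptorA", f"ligand{i:02d}") for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:  # stands in for the worker pool
    results = dict(pool.map(dock, jobs))
```

    In mpAD4 the outer level is MPI processes and the inner level is OpenMP threads within one docking job; the caching idea is the same: amortize one grid-map load over many dockings against the same receptor.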

  14. Report of the Working Group on Publicity and Funding

    DEFF Research Database (Denmark)

    Gammeltoft, Peder

    2014-01-01

    The report highlights the activities of the working group in raising awareness of the need for geographical names standardization and the work of the Group of Experts, particularly in advancing the digital presence of UNGEGN, through web presence and updated Media Kit and Wikipedia presence...

  15. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
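    The rsync-style idea in this record, comparing per-block checksums against a template checkpoint so that only changed blocks need to be written, can be sketched as follows (block size and hash choice are illustrative, not the patent's actual parameters):

```python
import hashlib

def block_checksums(data, block=4):
    """Checksum each fixed-size block of the byte string."""
    return [hashlib.sha256(data[i:i + block]).hexdigest()
            for i in range(0, len(data), block)]

def delta_checkpoint(current, template, block=4):
    """Keep only the blocks whose checksum differs from the template's."""
    tsums = block_checksums(template, block)
    delta = {}
    for i, csum in enumerate(block_checksums(current, block)):
        if i >= len(tsums) or csum != tsums[i]:
            delta[i] = current[i * block:(i + 1) * block]
    return delta

template = b"AAAABBBBCCCCDDDD"   # previously stored template checkpoint
current = b"AAAAXXXXCCCCDDDD"    # node state now; only block 1 changed
delta = delta_checkpoint(current, template)
```

    A node whose state matches the template contributes an empty delta, which is how the scheme cuts both transmission and storage when successive checkpoints are similar.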

  16. Neutrino Research Group. 2011-2014 activity report

    International Nuclear Information System (INIS)

    2014-01-01

    For the last two decades, neutrino physics has been producing major discoveries, including neutrino oscillations. These results gave clear confirmation that active neutrinos oscillate and therefore have mass, with three different mass states. This is a very important result, showing that the Minimal Standard Model is incomplete and requires an extension which is not yet known. The neutrino research field is very broad and active, at the frontier of today's particle physics. The Neutrino Research Group (GDR) was created in January 2005 with the aim of gathering CEA and CNRS research teams working on neutrino physics at the experimental or theoretical level. This document is the 2011-2014 activity report of the research group, ten years after its creation. It presents the results of the 5 working groups: 1 - Determination of neutrino parameters; 2 - Physics beyond the standard model; 3 - Neutrinos in the universe; 4 - Accelerators, detection means, R and D and valorisation; 5 - Tools common to all working groups. The research group structure, participating laboratories and teams, and the neutrino physics road-map are presented in the appendixes.

  17. State of the art of parallel scientific visualization applications on PC clusters; Etat de l'art des applications de visualisation scientifique paralleles sur grappes de PC

    Energy Technology Data Exchange (ETDEWEB)

    Juliachs, M

    2004-07-01

    In this state of the art on parallel scientific visualization applications on PC clusters, we deal with both surface and volume rendering approaches. We first analyze available PC cluster configurations and existing parallel rendering software components for parallel graphics rendering. CEA/DIF has been studying cluster visualization since 2001, and this report is part of a study to set up a new visualization research platform. This platform, consisting of an eight-node PC cluster running Linux and a tiled display, was installed in collaboration with Versailles-Saint-Quentin University in August 2003. (author)

  18. Report of the ERIC Management Review Group.

    Science.gov (United States)

    Carter, Launor F.; And Others

    The mission of the ERIC Management Review Group was to examine the practices and procedures used by Central ERIC Management in their guidance and management of the 19 ERIC clearinghouses. The major topics covered in this report are: recommendations; the role of the clearinghouses; the bibliographic and documentation function; the interpretation…

  19. Palladium-Catalyzed Enantioselective C-H Olefination of Diaryl Sulfoxides through Parallel Kinetic Resolution and Desymmetrization.

    Science.gov (United States)

    Zhu, Yu-Chao; Li, Yan; Zhang, Bo-Chao; Zhang, Feng-Xu; Yang, Yi-Nuo; Wang, Xi-Sheng

    2018-03-07

    The first example of Pd(II)-catalyzed enantioselective C-H olefination with non-chiral or racemic sulfoxides as directing groups was developed. A variety of chiral diaryl sulfoxides were synthesized with high enantioselectivity (up to 99%) through both desymmetrization and parallel kinetic resolution (PKR). This is the first report of Pd(II)-catalyzed enantioselective C(sp2)-H functionalization through PKR, and it represents a novel strategy to construct sulfur chiral centers. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Xyce Parallel Electronic Simulator Users' Guide Version 6.6.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Aadithya, Karthik Venkatraman [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mei, Ting [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Russo, Thomas V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schiek, Richard [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sholander, Peter E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thornquist, Heidi K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verley, Jason [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-11-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message-passing parallel implementation -- which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The information herein is subject to change without notice. Copyright (c) 2002-2016 Sandia Corporation. All rights reserved. Acknowledgements: The BSIM Group at the University of California, Berkeley developed the BSIM3, BSIM4, BSIM6, BSIM-CMG and BSIM-SOI models. The BSIM3 is Copyright (c) 1999, Regents of the University of California. The BSIM4 is Copyright (c) 2006, Regents of the University of California. The BSIM6 is Copyright (c) 2015, Regents of the University of California. The BSIM-CMG is Copyright (c)

  1. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    Science.gov (United States)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  2. Using the extended parallel process model to prevent noise-induced hearing loss among coal miners in Appalachia

    Energy Technology Data Exchange (ETDEWEB)

    Murray-Johnson, L.; Witte, K.; Patel, D.; Orrego, V.; Zuckerman, C.; Maxfield, A.M.; Thimons, E.D. [Ohio State University, Columbus, OH (US)

    2004-12-15

    Occupational noise-induced hearing loss is the second most commonly self-reported occupational illness or injury in the United States. Among coal miners, more than 90% of the population reports a hearing deficit by age 55. In this formative evaluation, focus groups were conducted with coal miners in Appalachia to ascertain whether miners perceive hearing loss as a major health risk and, if so, what would motivate the consistent wearing of hearing protection devices (HPDs). The theoretical framework of the Extended Parallel Process Model was used to identify the miners' knowledge, attitudes, beliefs, and current behaviors regarding hearing protection. Focus group participants had strong perceived severity of, and varying levels of perceived susceptibility to, hearing loss. Various barriers significantly reduced the self-efficacy and the response efficacy of using hearing protection.

  3. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  4. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix Functions and Characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  5. Three-dimensional gyrokinetic particle-in-cell simulation of plasmas on a massively parallel computer: Final report on LDRD Core Competency Project, FY 1991--FY 1993

    International Nuclear Information System (INIS)

    Byers, J.A.; Williams, T.J.; Cohen, B.I.; Dimits, A.M.

    1994-01-01

    One of the programs of the Magnetic Fusion Energy (MFE) Theory and Computations Program is studying the anomalous transport of thermal energy across the field lines in the core of a tokamak. We use the method of gyrokinetic particle-in-cell simulation in this study. For this LDRD project we employed massively parallel processing, new algorithms, and new formal techniques to improve this research. Specifically, we sought to take steps toward: researching experimentally relevant parameters in our simulations, learning parallel computing to have as a resource for our group, and achieving a 100x speedup over the performance of our starting-point Cray-2 simulation code.

  6. Self-Reported quality of life in adults with attention-deficit/hyperactivity disorder and executive function impairment treated with lisdexamfetamine dimesylate: a randomized, double-blind, multicenter, placebo-controlled, parallel-group study.

    Science.gov (United States)

    Adler, Lenard A; Dirks, Bryan; Deas, Patrick; Raychaudhuri, Aparna; Dauphin, Matthew; Saylor, Keith; Weisler, Richard

    2013-10-09

    This study examined the effects of lisdexamfetamine dimesylate (LDX) on quality of life (QOL) in adults with attention-deficit/hyperactivity disorder (ADHD) and clinically significant executive function deficits (EFD). This report highlights QOL findings from a 10-week randomized placebo-controlled trial of LDX (30-70 mg/d) in adults (18-55 years) with ADHD and EFD (Behavior Rating Inventory of EF-Adult, Global Executive Composite [BRIEF-A GEC] ≥65). The primary efficacy measure was the self-reported BRIEF-A; a key secondary measure was self-reported QOL on the Adult ADHD Impact Module (AIM-A). The clinician-completed ADHD Rating Scale version IV (ADHD-RS-IV) with adult prompts and Clinical Global Impressions-Severity (CGI-S) were also employed. The Adult ADHD QoL (AAQoL) scale was added while the study was in progress. A post hoc analysis examined the subgroup having evaluable results from both AIM-A and AAQoL. Of 161 randomized participants (placebo, 81; LDX, 80), 159 were included in the safety population. LDX improved AIM-A multi-item domain scores versus placebo; the LS mean difference for Performance and Daily Functioning was 21.6 (ES, 0.93); for Psychological Health, 12.1; for Life Outlook, 12.5; and for Relationships, 7.3. In a post hoc analysis of participants with both AIM-A and AAQoL scores, AIM-A multi-item subgroup analysis scores numerically improved with LDX, with a smaller difference for Impact of Symptoms: Daily Interference. The safety profile of LDX was consistent with amphetamine use in previous studies. Overall, adults with ADHD/EFD exhibited self-reported improvement in QOL on the AIM-A and AAQoL scales, in line with medium/large ES; these improvements were paralleled by improvements in EF and ADHD symptoms. The safety profile of LDX was similar to previous studies. ClinicalTrials.gov, NCT01101022.

  7. Acceleration and parallelization calculation of EFEN-SP_3 method

    International Nuclear Information System (INIS)

    Yang Wen; Zheng Youqi; Wu Hongchun; Cao Liangzhi; Li Yunzhao

    2013-01-01

    Because the exponential function expansion nodal-SP_3 (EFEN-SP_3) method needs further improvement in computational efficiency to routinely carry out PWR whole-core pin-by-pin calculations, coarse mesh acceleration and spatial parallelization were investigated in this paper. The coarse mesh acceleration was built by considering a discontinuity factor on each coarse mesh interface and preserving neutron balance within each coarse mesh in space, angle and energy. The spatial parallelization, based on MPI, was implemented by guaranteeing load balancing and minimizing communication cost to fully take advantage of modern computing and storage abilities. Numerical results based on a commercial nuclear power reactor demonstrate a speedup ratio of about 40 for the coarse mesh acceleration and a parallel efficiency of higher than 60% with 40 CPUs for the spatial parallelization. With these two improvements, the EFEN code can complete a PWR whole-core pin-by-pin calculation with 289 × 289 × 218 meshes and 4 energy groups within 100 s by using 48 CPUs (2.40 GHz frequency). (authors)
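    The reported figures follow the standard definitions of speedup and parallel efficiency, which can be made concrete with a quick sketch (the timings below are illustrative, not the paper's measurements):

```python
def speedup(t_serial, t_parallel):
    """Ratio of serial to parallel wall-clock time."""
    return t_serial / t_parallel

def parallel_efficiency(t_serial, t_parallel, n_cpus):
    """Speedup normalized by the number of CPUs (1.0 = ideal scaling)."""
    return speedup(t_serial, t_parallel) / n_cpus
```

    For example, a hypothetical 1000 s serial run finishing in 40 s on 40 CPUs gives a speedup of 25 and an efficiency of 62.5%, in line with the "higher than 60% with 40 CPUs" quoted above.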

  8. Filipino students' reported parental socialization of academic achievement by socioeconomic group.

    Science.gov (United States)

    Bernardo, Allan B I

    2009-10-01

    Academic achievement of students differs by socioeconomic group. Parents' socialization of academic achievement in their children was explored in self-reports of 241 students from two socioeconomic status (SES) groups in the Philippines, using a scale developed by Bempechat, et al. Students in the upper SES group had higher achievement than their peers in the middle SES group, but had lower scores on most dimensions of parental socialization of academic achievement. Regression analyses indicate that reported parental attempts to encourage more effort to achieve was associated with lower achievement in students with upper SES.

  9. Effects of policosanol on borderline to mildly elevated serum total cholesterol levels: a prospective, double-blind, placebo-controlled, parallel-group, comparative study

    Directory of Open Access Journals (Sweden)

    Gladys Castaño, PhD

    2003-09-01

    Full Text Available Background: Hypercholesterolemia is a major risk factor for coronary heart disease. Clinical studies have shown that lowering elevated serum cholesterol levels, particularly low-density lipoprotein cholesterol (LDL-C), is beneficial for patients with borderline to mildly elevated serum total cholesterol (TC) levels (5.0-6.0 mmol/L). Policosanol is a cholesterol-lowering drug made from purified sugar cane wax. The therapeutic range of policosanol is 5 to 20 mg/d. Objective: This study investigated the efficacy and tolerability of policosanol 5 mg/d in patients with borderline to mildly elevated serum TC levels. Methods: This 14-week, single-center, prospective, double-blind, placebo-controlled, parallel-group, comparative study was conducted in men and women aged 25 to 75 years with a serum TC level ≥4.8 to <6.0 mmol/L. After a 6-week run-in period in which patients were placed on therapeutic lifestyle changes, in particular a cholesterol-lowering diet, patients were randomly assigned to receive policosanol 5-mg tablets or placebo tablets once daily with the evening meal for 8 weeks, and the diet was continued throughout the study. Lipid profile variables, safety indicators, adverse events (AEs), and compliance with study medications were assessed. Results: One hundred patients (71 women, 29 men; mean [SD] age, 52 [10] years) entered the study after the dietary run-in period. After 8 weeks of treatment, the mean (SD) serum LDL-C level decreased significantly in the policosanol group (P<0.001 vs baseline and placebo), from 3.57 (0.30) mmol/L to 2.86 (0.41) mmol/L (change, −19.9%). Significantly more patients in the policosanol group (42 patients [84%]) achieved a ≥15% decrease in serum LDL-C than in the placebo group (2 patients [4%]) (P<0.001). Also in the policosanol group, the mean (SD) serum TC level decreased significantly, from 5.20 (0.22) mmol/L to 4.56 (0.44) mmol/L (P<0.001 vs baseline and placebo; change, −12.3%); the mean (SD) triglyceride (TG

  10. The General Safety Group Annual Report 2001/2002

    CERN Document Server

    Weingarten, W

    2003-01-01

    This report summarizes the main activities of the General Safety (GS) Group of the Technical Inspection and Safety Division during 2001 and 2002, and the results obtained. The different topics in which the group is active are covered: general safety inspections and ergonomics, electrical, chemical and gas safety, chemical pollution containment and control, industrial hygiene, the safety of civil engineering works and outside contractors, fire prevention and the safety aspects of the LHC experiments.

  11. Provably optimal parallel transport sweeps on regular grids

    International Nuclear Information System (INIS)

    Adams, M. P.; Adams, M. L.; Hawkins, W. D.; Smith, T.; Rauchwerger, L.; Amato, N. M.; Bailey, T. S.; Falgout, R. D.

    2013-01-01

    We have found provably optimal algorithms for full-domain discrete-ordinate transport sweeps on regular grids in 3D Cartesian geometry. We describe these algorithms and sketch a proof that they always execute the full eight-octant sweep in the minimum possible number of stages for a given Px × Py × Pz partitioning. Computational results demonstrate that our optimal scheduling algorithms execute sweeps in the minimum possible stage count. Observed parallel efficiencies agree well with our performance model. An older version of our PDT transport code achieves almost 80% parallel efficiency on 131,072 cores, on a weak-scaling problem with only one energy group, 80 directions, and 4096 cells/core. A newer version is less efficient at present, as we are still improving its implementation, but achieves almost 60% parallel efficiency on 393,216 cores. These results conclusively demonstrate that sweeps can perform with high efficiency on core counts approaching 10^6. (authors)

  12. Provably optimal parallel transport sweeps on regular grids

    Energy Technology Data Exchange (ETDEWEB)

    Adams, M. P.; Adams, M. L.; Hawkins, W. D. [Dept. of Nuclear Engineering, Texas A and M University, 3133 TAMU, College Station, TX 77843-3133 (United States); Smith, T.; Rauchwerger, L.; Amato, N. M. [Dept. of Computer Science and Engineering, Texas A and M University, 3133 TAMU, College Station, TX 77843-3133 (United States); Bailey, T. S.; Falgout, R. D. [Lawrence Livermore National Laboratory (United States)

    2013-07-01

    We have found provably optimal algorithms for full-domain discrete-ordinate transport sweeps on regular grids in 3D Cartesian geometry. We describe these algorithms and sketch a proof that they always execute the full eight-octant sweep in the minimum possible number of stages for a given P{sub x} x P{sub y} x P{sub z} partitioning. Computational results demonstrate that our optimal scheduling algorithms execute sweeps in the minimum possible stage count. Observed parallel efficiencies agree well with our performance model. An older version of our PDT transport code achieves almost 80% parallel efficiency on 131,072 cores, on a weak-scaling problem with only one energy group, 80 directions, and 4096 cells/core. A newer version is less efficient at present, as we are still improving its implementation, but achieves almost 60% parallel efficiency on 393,216 cores. These results conclusively demonstrate that sweeps can perform with high efficiency on core counts approaching 10{sup 6}. (authors)

  13. The role of parallelism in the real-time processing of anaphora.

    Science.gov (United States)

    Poirier, Josée; Walenski, Matthew; Shapiro, Lewis P

    2012-06-01

    Parallelism effects refer to the facilitated processing of a target structure when it follows a similar, parallel structure. In coordination, a parallelism-related conjunction triggers the expectation that a second conjunct with the same structure as the first conjunct should occur. It has been proposed that parallelism effects reflect the use of the first structure as a template that guides the processing of the second. In this study, we examined the role of parallelism in real-time anaphora resolution by charting activation patterns in coordinated constructions containing anaphora: Verb-Phrase Ellipsis (VPE) and Noun-Phrase Traces (NP-traces). Specifically, we hypothesised that an expectation of parallelism would incite the parser to assume a structure similar to the first conjunct in the second, anaphora-containing conjunct. Positing a similar structure would result in early postulation of the covert anaphora. Experiment 1 confirms that following a parallelism-related conjunction, first-conjunct material is activated in the second conjunct. Experiment 2 reveals that an NP-trace in the second conjunct is posited immediately where licensed, which is earlier than previously reported in the literature. In light of our findings, we propose an intricate relation between structural expectations and anaphor resolution.

  14. Reports from the working group on neutron scattering

    International Nuclear Information System (INIS)

    1979-06-01

    The present report contains papers dating from July 1978 until May 1979. During this period the experimental facilities were expanded: a new four-circle neutron spectrometer was installed; together with the Fritz Haber Institute, a measuring point was set up for investigations of ideal crystals; and the Compton scattering equipment was substantially improved. The report contains a contribution on the mechanics and the control of the neutron diffractometers existing at BER II. The main subjects of the scientific research work were magnetic structures and phase transitions, electron densities and chemical bonds, and the structure and dynamics of molecular crystals. At the BER II reactor, measuring opportunities could be offered to a number of guest groups, whose research activities are also reported. In addition to the measurements made at the Berlin reactor BER II, the working groups carried out measurements at the accelerator VICKSI of the Hahn-Meitner Institute and at the reactors of the Institut Laue-Langevin at Grenoble and of the Risø Research Establishment. (orig.) [de

  15. Applied nuclear physics group - activities report. 1977-1997

    International Nuclear Information System (INIS)

    Appoloni, Carlos Roberto

    1998-06-01

    This report presents the activities conducted by the Applied Nuclear Physics group of the Londrina State University (Applied Nuclear Physics Laboratory), Brazil, from the group's beginning in 1977 up to the end of 1997.

  16. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  17. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  18. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
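Montry's scattered-decomposition sieve itself is not reproduced here, but the basic idea (seed primes computed serially up to sqrt(N), after which each worker marks composites in its own sub-range independently) can be sketched as follows. Helper names are hypothetical, and a thread pool stands in for the hypercube's processing elements.

```python
from concurrent.futures import ThreadPoolExecutor
from math import isqrt

def small_sieve(limit):
    """Plain serial sieve; used only for the seed primes up to sqrt(N)."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, isqrt(limit) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return [p for p in range(2, limit + 1) if is_prime[p]]

def sieve_block(lo, hi, seeds):
    """Mark composites in [lo, hi) independently of all other blocks."""
    block = bytearray([1]) * (hi - lo)
    for p in seeds:
        start = max(p * p, (lo + p - 1) // p * p)  # first multiple of p >= lo
        for m in range(start, hi, p):
            block[m - lo] = 0
    return [lo + i for i, alive in enumerate(block) if alive and lo + i > 1]

def parallel_sieve(n, workers=4):
    """Each worker sieves one block of the range; results are concatenated."""
    seeds = small_sieve(isqrt(n))
    step = (n + workers) // workers
    ranges = [(lo, min(lo + step, n + 1)) for lo in range(2, n + 1, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda r: sieve_block(r[0], r[1], seeds), ranges)
    return [p for part in parts for p in part]

print(len(parallel_sieve(100)))  # → 25
```

Because the blocks share nothing but the read-only seed primes, the same decomposition generalizes to other sieves, which is the property the abstract highlights.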

  19. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  20. Trial protocol: a parallel group, individually randomized clinical trial to evaluate the effect of a mobile phone application to improve sexual health among youth in Stockholm County.

    Science.gov (United States)

    Nielsen, Anna; De Costa, Ayesha; Bågenholm, Aspasia; Danielsson, Kristina Gemzell; Marrone, Gaetano; Boman, Jens; Salazar, Mariano; Diwan, Vinod

    2018-02-05

    Genital Chlamydia trachomatis infection is a major public health problem worldwide affecting mostly youth. Sweden introduced an opportunistic screening approach in 1982 accompanied by treatment, partner notification and case reporting. After an initial decline in infection rate until the mid-1990s, the number of reported cases has increased over the last two decades and has now stabilized at a high level of 37,000 reported cases in Sweden per year (85% of cases in youth). Sexual risk-taking among youth is also reported to have significantly increased over the last 20 years. Mobile health (mHealth) interventions could be particularly suitable for youth and sexual health promotion as the intervention is delivered in a familiar and discreet way to a tech-savvy at-risk population. This paper presents a protocol for a randomized trial to study the effect of an interactive mHealth application (app) on condom use among the youth of Stockholm. 446 youth resident in Stockholm will be recruited in this two-arm, parallel-group, individually randomized trial. Recruitment will be from Youth Health Clinics or via the trial website. Participants will be randomized to receive either the intervention (which comprises an interactive app on safe sexual health that will be installed on their smart phones) or a control group (standard of care). Youth will be followed up for 6 months, with questionnaire responses submitted periodically via the app. Self-reported condom use over 6 months will be the primary outcome. Secondary outcomes will include presence of an infection, Chlamydia tests during the study period and proxy markers of safe sex. Analysis is by intention to treat. This trial exploits the high mobile phone usage among youth to provide a phone app intervention in the area of sexual health. If successful, the results will have implications for health service delivery and health promotion among the youth. From a methodological perspective, this trial is expected to provide

  1. Link failure detection in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Megerian, Mark G.; Smith, Brian E.

    2010-11-09

    Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
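The checkerboard grouping described above can be simulated in a few lines. The sketch below assumes simplified message semantics (one link and one test round per adjacent pair, rather than the link pairs in the disclosure): nodes whose coordinate sum is even form the first group, each sends a test message to every odd-parity neighbour, and a second-group node reports any link whose message never arrived.

```python
# Simulated link-failure detection on a 2-D rectangular mesh (illustrative).
# Group assignment by parity of (x + y) guarantees adjacent nodes differ.
def neighbors(x, y, nx, ny):
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < nx and 0 <= y + dy < ny:
            yield (x + dx, y + dy)

def detect_failed_links(nx, ny, failed_links):
    """Return links reported missing by second-group (odd-parity) nodes.

    failed_links is a set of frozensets {a, b} of node coordinates.
    A test message traverses a link only if that link is not failed.
    """
    received = set()                        # (sender, receiver) pairs delivered
    for x in range(nx):
        for y in range(ny):
            if (x + y) % 2 == 0:            # first group sends
                for nb in neighbors(x, y, nx, ny):
                    if frozenset({(x, y), nb}) not in failed_links:
                        received.add(((x, y), nb))
    reports = set()
    for x in range(nx):
        for y in range(ny):
            if (x + y) % 2 == 1:            # second group checks its inbox
                for nb in neighbors(x, y, nx, ny):
                    if (nb, (x, y)) not in received:
                        reports.add(frozenset({nb, (x, y)}))
    return reports

bad = {frozenset({(0, 0), (0, 1)})}
print(detect_failed_links(3, 3, bad))       # the injected failure is reported
```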

  2. PC6 acupoint stimulation for the prevention of postcardiac surgery nausea and vomiting: a protocol for a two-group, parallel, superiority randomised clinical trial.

    Science.gov (United States)

    Cooke, Marie; Rickard, Claire; Rapchuk, Ivan; Shekar, Kiran; Marshall, Andrea P; Comans, Tracy; Doi, Suhail; McDonald, John; Spooner, Amy

    2014-11-13

    Postoperative nausea and vomiting (PONV) are frequent but unwanted complications for patients following anaesthesia and cardiac surgery, affecting at least a third of patients, despite pharmacological treatment. The primary aim of the proposed research is to test the efficacy of PC6 acupoint stimulation versus placebo for reducing PONV in cardiac surgery patients. In conjunction with this we aim to develop an understanding of intervention fidelity and factors that support, or impede, the use of PC6 acupoint stimulation, a knowledge translation approach. 712 postcardiac surgery participants will be recruited to take part in a two-group, parallel, superiority, randomised controlled trial. Participants will be randomised to receive a wrist band on each wrist providing either acupoint stimulation via acupressure to PC6 or a placebo. Randomisation will be computer generated, use randomly varied block sizes, and be concealed prior to the enrolment of each patient. The wristbands will remain in place for 36 h. PONV will be evaluated by the assessment of both nausea and vomiting, use of rescue antiemetics, quality of recovery and cost. Patient satisfaction with PONV care will be measured and clinical staff interviewed about the clinical use, feasibility, acceptability and challenges of using acupressure wristbands for PONV. Ethics approval will be sought from the appropriate Human Research Ethics Committee/s before the start of the study. A systematic review of the use of wrist acupressure for PC6 acupoint stimulation reported minor side effects only. Study progress will be reviewed by a Data Safety Monitoring Committee (DSMC) for nausea and vomiting outcomes at n=350. Dissemination of results will include conference presentations at national and international scientific meetings and publications in peer-reviewed journals. Study participants will receive a one-page lay-summary of results. Australian New Zealand Clinical Trials Registry--ACTRN12614000589684. Published by the BMJ

  3. A National Quality Improvement Collaborative for the clinical use of outcome measurement in specialised mental healthcare: results from a parallel group design and a nested cluster randomised controlled trial.

    Science.gov (United States)

    Metz, Margot J; Veerbeek, Marjolein A; Franx, Gerdien C; van der Feltz-Cornelis, Christina M; de Beurs, Edwin; Beekman, Aartjan T F

    2017-05-01

    Although the importance and advantages of measurement-based care in mental healthcare are well established, implementation in daily practice is complex and far from optimal. To accelerate the implementation of outcome measurement in routine clinical practice, a government-sponsored National Quality Improvement Collaborative was initiated in Dutch-specialised mental healthcare. To investigate the effects of this initiative, we combined a matched-pair parallel group design (21 teams) with a cluster randomised controlled trial (RCT) (6 teams). At the beginning and end, the primary outcome 'actual use and perceived clinical utility of outcome measurement' was assessed. In both designs, intervention teams demonstrated a significant higher level of implementation of outcome measurement than control teams. Overall effects were large (parallel group d =0.99; RCT d =1.25). The National Collaborative successfully improved the use of outcome measurement in routine clinical practice. None. © The Royal College of Psychiatrists 2017. This is an open access article distributed under the terms of the Creative Commons Non-Commercial, No Derivatives (CC BY-NC-ND) license.
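The d values quoted are effect sizes, presumably Cohen's d: the between-group difference in means divided by the pooled standard deviation, with 0.8 or more conventionally labelled a large effect. A minimal computation on made-up numbers (not the study's data):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means over the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Made-up implementation scores for intervention vs control teams.
intervention = [72, 80, 77, 85, 78, 83]
control = [70, 76, 81, 74, 79, 72]
print(round(cohens_d(intervention, control), 2))  # → 0.87
```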

  4. Nuclear Structure Group annual progress report June 1974 -May 1975

    International Nuclear Information System (INIS)

    1975-06-01

    This is the first annual progress report of the Nuclear Structure Group of the University of Birmingham. The introduction lists the main fields of study of the Group as: polarisation phenomena and optical model studies using 3He and 4He probes; photonuclear physics; heavy-ion physics; and K-meson physics. The programme is related to particle accelerators at Birmingham, Oxford, Harwell and the Rutherford Laboratory. The body of the report consists of summaries of 38 experiments undertaken by members of the Group. The third section contains 10 notes on instrumentation topics. Appendices contain lists of (a) personnel, (b) papers published or submitted during the period. (U.K.)

  5. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-11-12

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer composed of compute nodes that execute a parallel application, each compute node including application processors that execute the parallel application and at least one management processor dedicated to gathering information regarding data communications. The PAMI is composed of data communications endpoints, each endpoint composed of a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources. Embodiments function by gathering call site statistics describing data communications resulting from execution of data communications instructions and identifying, in dependence upon the call site statistics, a data communications algorithm for use in executing a data communications instruction at a call site in the parallel application.

  6. Annual report to the Working Group on Technology, Growth, and Employment

    International Nuclear Information System (INIS)

    1985-04-01

    A meeting of the Working Group on High Energy Physics was convened in Brussels, Belgium, in July 1984, and impaneled new groups of technical experts to report on long-term planning, technical collaborations, and the identification of administrative obstacles experienced within the Summit countries that impede international collaboration. The charges to these three new groups are contained in this report under the section on the Brussels meeting. The reports prepared by the technical experts were then reviewed at the January 1985 meeting at Cadarache, France, and the results are contained in this report under the section on the Cadarache meeting. The Summit Working Group on High Energy Physics believes progress is being made toward cooperation among the Summit countries in the exploration of the scientific and technological development upon which, as the Summit Heads of State and Government declared at Versailles, revitalization and growth of the world economy will to a large extent depend. At Cadarache, the Group found that, since its establishment, international collaboration has increased in the use of present accelerators and in the planning for future accelerators. The Group also found that there are specific areas of technology in which near-term research cooperation is possible. Finally, the Group identified administrative regulations that hamper effective international collaboration in science and technology and that could be revised or eliminated through coordinated, high level Summit action. The major accomplishment of the Working Group thus far has been the creation of a forum for discussions on collaboration in a major field of science by seven industrialized countries. The Group recommends the continuation of its review of long-term plans for major facilities on an intergovernmental basis

  7. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" here also includes heterogeneous collection of networked computers.) Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  8. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of such problems include: sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
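One of the patterns named, the prefix scan, can be illustrated with the classic doubling (Hillis-Steele) formulation, in which every step's additions are mutually independent and could run concurrently. This is a generic sketch, not material from the presentation.

```python
def inclusive_scan(values, op=lambda a, b: a + b):
    """Hillis-Steele inclusive prefix scan: O(log n) steps, each fully parallel."""
    data = list(values)
    shift = 1
    while shift < len(data):
        # Every element reads only old values in this step, so on a real
        # parallel machine all the updates below could run simultaneously.
        data = [data[i] if i < shift else op(data[i - shift], data[i])
                for i in range(len(data))]
        shift *= 2
    return data

print(inclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # → [3, 4, 11, 11, 15, 16, 22, 25]
```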

  9. Annual report of the Summit Members' Working Group on Controlled Thermonuclear Fusion (Fusion Working Group (FWG))

    International Nuclear Information System (INIS)

    1987-04-01

    The Summit Members' Working Group on Controlled Thermonuclear Fusion [Fusion Working Group (FWG)] was established in 1983 in response to the Declaration of the Heads of State and Government at the Versailles Economic Summit meeting of 1982, and in response to the subsequent report of the Working Group on Technology, Growth and Employment (TGE) as endorsed at the Williamsburg Summit meeting, 1983. This document contains the complete written record of each of the three FWG meetings, including the minutes, lists of attendees, agendas, statements, and summary conclusions, as well as the full reports of the Technical Working Party. In addition, there is a pertinent exchange of correspondence between FWG members on the role of the Technical Working Party and a requested background paper on the modalities associated with a possible future ETR project

  10. Working group report: Flavor physics and model building

    Indian Academy of Sciences (India)

    © Indian Academy of Sciences. Vol. ... This is the report of the flavor physics and model building working group at ... those in model building have been primarily devoted to neutrino physics. ... [12] Andrei Gritsan, ICHEP 2004, Beijing, China.

  11. Working Group Report: Sensors

    Energy Technology Data Exchange (ETDEWEB)

    Artuso, M.; et al.,

    2013-10-18

    Sensors play a key role in detecting both charged particles and photons for all three frontiers in Particle Physics. The signals from an individual sensor that can be used include ionization deposited, phonons created, or light emitted from excitations of the material. The individual sensors are then typically arrayed for detection of individual particles or groups of particles. The mounting of new, ever-higher-performance experiments often depends on advances in sensors across a range of performance characteristics. These performance metrics can include position resolution for passing particles, time resolution on particles impacting the sensor, and overall rate capabilities. In addition, the feasible detector area and cost frequently provide a limit to what can be built and therefore are often another area where improvements are important. Finally, radiation tolerance is becoming a requirement in a broad array of devices. We present a status report on a broad category of sensors, including challenges for the future and work in progress to solve those challenges.

  12. Parallel computing for homogeneous diffusion and transport equations in neutronics

    International Nuclear Information System (INIS)

    Pinchedez, K.

    1999-06-01

    Parallel computing meets the ever-increasing requirements for neutronic computer code speed and accuracy. In this work, two different approaches have been considered. We first parallelized the sequential algorithm used by the neutronics code CRONOS developed at the French Atomic Energy Commission. The algorithm computes the dominant eigenvalue associated with the simplified P_N transport equations by a mixed finite element method. Several parallel algorithms have been developed for distributed memory machines. The performances of the parallel algorithms have been studied experimentally by implementation on a Cray T3D and theoretically by complexity models. A comparison of various parallel algorithms has confirmed the chosen implementations. We next applied a domain sub-division technique to the two-group diffusion eigenproblem. In the modal-synthesis-based method, the global spectrum is determined from the partial spectra associated with the sub-domains. The eigenproblem is then expanded on a family composed, on the one hand, of eigenfunctions associated with the sub-domains and, on the other hand, of functions corresponding to the contribution from the interfaces between the sub-domains. For a 2-D homogeneous core, this modal method has been validated and its accuracy has been measured. (author)
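Dominant-eigenvalue computations of this kind are typically carried out by power (source) iteration; the sketch below shows the bare idea on a toy operator. It is an illustration only: CRONOS itself uses a mixed finite element discretization and far richer operators than the stand-in matrix here.

```python
def dominant_eigenvalue(matvec, n, iters=200):
    """Power iteration: repeatedly apply the operator and renormalize.

    matvec applies the discretized operator to a flux-like vector; the
    max-norm scaling factor converges to the dominant eigenvalue.
    """
    x = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        y = matvec(x)
        lam = max(abs(v) for v in y)
        x = [v / lam for v in y]
    return lam, x

# Toy 2x2 operator standing in for the discretized two-group problem;
# its eigenvalues are 2 and 1, so the iteration should settle at 2.
def toy_op(v):
    return [2.0 * v[0] + v[1], v[1]]

lam, flux = dominant_eigenvalue(toy_op, 2)
print(round(lam, 6))  # → 2.0
```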

  13. Plutonium working group report on environmental, safety and health vulnerabilities associated with the Department's plutonium storage. Volume II, part 7: Mound working group assessment team report

    International Nuclear Information System (INIS)

    1994-09-01

    This is the report of a visit to the Mound site by the Working Group Assessment Team (WGAT) to assess plutonium vulnerabilities. Purposes of the visit were: to review results of the site's self assessment of current practices for handling and storing plutonium; to conduct an independent assessment of these practices; to reconcile differences and assemble a final list of vulnerabilities; to calculate consequences and probability for each vulnerability; and to issue a report to the Working Group. This report, representing completion of the Mound visit, will be compiled along with those from all other sites with plutonium inventories as part of a final report to the Secretary of Energy

  14. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are the Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, the Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  15. Working group report: Low energy and flavour physics

    Indian Academy of Sciences (India)

    This is a report of the low energy and flavour physics working group at ... that calculates the non-leptonic decay amplitudes including the long-distance contributions. There were three lectures that lasted for over seven hours, and were.

  16. Performance evaluation of the HEP, ELXSI and CRAY X-MP parallel processors on hydrocode test problems

    International Nuclear Information System (INIS)

    Liebrock, L.M.; McGrath, J.F.; Hicks, D.L.

    1986-01-01

    Parallel programming promises improved processing speeds for hydrocodes, magnetohydrocodes, multiphase flow codes, thermal-hydraulics codes, wavecodes and other continuum dynamics codes. This paper presents the results of some investigations of parallel algorithms on three parallel processors: the CRAY X-MP, ELXSI and the HEP computers. Introduction and Background: We report the results of investigations of parallel algorithms for computational continuum dynamics. These programs (hydrocodes, wavecodes, etc.) produce simulations of the solutions to problems arising in the motion of continua: solid dynamics, liquid dynamics, gas dynamics, plasma dynamics, multiphase flow dynamics, thermal-hydraulic dynamics and multimaterial flow dynamics. This report restricts its scope to one-dimensional algorithms such as the von Neumann-Richtmyer (1950) scheme

  17. A possibility of parallel and anti-parallel diffraction measurements on ...

    Indian Academy of Sciences (India)

    However, a bent perfect crystal (BPC) monochromator at monochromatic focusing condition can provide a quite flat and equal resolution property at both parallel and anti-parallel positions and thus one can have a chance to use both sides for the diffraction experiment. From the data of the FWHM and the / measured ...

  18. State of the art of parallel scientific visualization applications on PC clusters

    International Nuclear Information System (INIS)

    Juliachs, M.

    2004-01-01

    In this state of the art on parallel scientific visualization applications on PC clusters, we deal with both surface and volume rendering approaches. We first analyze available PC cluster configurations and existing parallel rendering software components for parallel graphics rendering. CEA/DIF has been studying cluster visualization since 2001. This report is part of a study to set up a new visualization research platform. This platform, consisting of an eight-node PC cluster under Linux and a tiled display, was installed in collaboration with Versailles-Saint-Quentin University in August 2003. (author)

  19. Parallel treatment of simulation particles in particle-in-cell codes on SUPRENUM

    International Nuclear Information System (INIS)

    Seldner, D.

    1990-02-01

    This report contains the program documentation and description of the program package 2D-PLAS, which has been developed at the Nuclear Research Center Karlsruhe in the Institute for Data Processing in Technology (IDT) under the auspices of the BMFT. 2D-PLAS is a parallel version of the treatment of the simulation particles in the two-dimensional stationary particle-in-cell code BFCPIC, also developed at the Nuclear Research Center Karlsruhe. This parallel version has been designed for the parallel computer SUPRENUM. (orig.) [de

  20. Structural Synthesis of 3-DoF Spatial Fully Parallel Manipulators

    Directory of Open Access Journals (Sweden)

    Alfonso Hernandez

    2014-07-01

    Full Text Available In this paper, the architectures of three degrees of freedom (3-DoF spatial, fully parallel manipulators (PMs, whose limbs are structurally identical, are obtained systematically. To do this, the methodology followed makes use of the concepts of the displacement group theory of rigid body motion. This theory works with so-called ‘motion generators’. That is, every limb is a kinematic chain that produces a certain type of displacement in the mobile platform or end-effector. The laws of group algebra will determine the actual motion pattern of the end-effector. The structural synthesis is a combinatorial process of different kinematic chains’ topologies employed in order to get all of the 3-DoF motion pattern possibilities in the end-effector of the fully parallel manipulator.

  1. A new decomposition method for parallel processing multi-level optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Min Soo; Choi, Dong Hoon

    2002-01-01

    In practical designs, most multidisciplinary problems involve large and complicated design systems. Since multidisciplinary problems comprise hundreds of analyses and thousands of variables, the grouping of the analyses and their order within each group affect the speed of the total design cycle. It is therefore very important to reorder and regroup the original design processes in order to minimize the total computational cost, by decomposing large multidisciplinary problems into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. In this study, a new decomposition method is proposed for parallel processing of multidisciplinary design optimization approaches such as Collaborative Optimization (CO) and the Individual Discipline Feasible (IDF) method. Numerical results for two example problems are presented to show the feasibility of the proposed method

  2. Modelling and parallel calculation of a kinetic boundary layer

    International Nuclear Information System (INIS)

    Perlat, Jean Philippe

    1998-01-01

    This research thesis addresses reliability and cost issues in the numerical simulation of flows in the transition regime. The first step was to reduce the calculation cost and memory footprint of the Monte Carlo method, which is known to provide performance and reliability for rarefied regimes. Vector and parallel computers allow this objective to be reached. Here, a MIMD (multiple instruction, multiple data) machine has been used, which supports parallel calculation at different levels of parallelization. Parallelization procedures have been adapted, and results showed that parallelization by calculation-domain decomposition was far more efficient. Because of the reliability issues tied to the statistical nature of Monte Carlo methods, a new deterministic model was needed to simulate gas molecules in the transition regime. New models and hyperbolic systems have therefore been studied. The chosen one allows the thermodynamic quantities (density, average velocity, temperature, deformation tensor, heat flow) present in the Navier-Stokes equations to be determined, and the evolution equations of these quantities are described for the mono-atomic case. Their numerical resolution is reported. A kinetic scheme is developed which complies with the structure of all the systems and which naturally expresses the boundary conditions. The resulting 14-moment model is validated on shock problems and on Couette flows.

  3. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Charles J.; Faraj, Daniel A.; Inglett, Todd A.; Ratterman, Joseph D.

    2018-01-30

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
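    The link-selection step described above can be sketched for the common case of a binary-tree combining network, where node n has children 2n+1 and 2n+2 and parent (n-1)//2 (an illustrative sketch with my own helper name, not the patent's implementation):

```python
def route_next_hop(node, dest):
    # In a binary tree where node n has children 2n+1 and 2n+2 and parent
    # (n-1)//2, forward a packet into the child subtree containing `dest`,
    # or up to the parent when `dest` is not below the current node.
    if node == dest:
        return None                   # packet has arrived
    hop = dest
    while hop > node:
        parent = (hop - 1) // 2
        if parent == node:
            return hop                # dest lies in this child's subtree
        hop = parent
    return (node - 1) // 2            # dest is elsewhere: forward to parent
```

    Repeating this selection at every hop delivers the packet along the unique tree path between the two nodes.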

  4. A portable, parallel, object-oriented Monte Carlo neutron transport code in C++

    International Nuclear Information System (INIS)

    Lee, S.R.; Cummings, J.C.; Nolen, S.D.

    1997-01-01

    We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to, and runs in parallel on, a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed

  5. Parallel implementation of the PHOENIX generalized stellar atmosphere program. II. Wavelength parallelization

    International Nuclear Information System (INIS)

    Baron, E.; Hauschildt, Peter H.

    1998-01-01

    We describe an important addition to the parallel implementation of our generalized nonlocal thermodynamic equilibrium (NLTE) stellar atmosphere and radiative transfer computer program PHOENIX. In a previous paper in this series we described data and task parallel algorithms we have developed for radiative transfer, spectral line opacity, and NLTE opacity and rate calculations. These algorithms divided the work spatially or by spectral lines, that is, distributing the radial zones, individual spectral lines, or characteristic rays among different processors and employ, in addition, task parallelism for logically independent functions (such as atomic and molecular line opacities). For finite, monotonic velocity fields, the radiative transfer equation is an initial value problem in wavelength, and hence each wavelength point depends upon the previous one. However, for sophisticated NLTE models of both static and moving atmospheres needed to accurately describe, e.g., novae and supernovae, the number of wavelength points is very large (200,000-300,000), and hence parallelization over wavelength can lead both to considerable speedup in calculation time and to the ability to make use of the aggregate memory available on massively parallel supercomputers. Here, we describe an implementation of a pipelined design for the wavelength parallelization of PHOENIX, where the necessary data from the processor working on a previous wavelength point is sent to the processor working on the succeeding wavelength point as soon as it is known. Our implementation uses a MIMD design based on a relatively small number of standard message passing interface (MPI) library calls and is fully portable between serial and parallel computers. © 1998 The American Astronomical Society
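    The pipelined design can be sketched with threads and queues: the wavelength sweep is split into contiguous blocks, one per stage, and each stage forwards its running state downstream as soon as its block is done, so several independent models are in flight at once (a toy sketch with my own naming, standing in for the MPI pipeline, not the PHOENIX implementation):

```python
import threading
import queue

def pipelined_sweep(initial_states, points, nstages):
    # Split the wavelength sweep into contiguous blocks, one per stage.
    size = -(-len(points) // nstages)          # ceiling division
    blocks = [points[i * size:(i + 1) * size] for i in range(nstages)]
    queues = [queue.Queue() for _ in range(nstages + 1)]

    def stage(i):
        while True:
            item = queues[i].get()
            if item is None:                   # shutdown marker: pass it on
                queues[i + 1].put(None)
                return
            label, state = item
            for w in blocks[i]:                # each point depends on the previous one
                state += w
            queues[i + 1].put((label, state))  # forward as soon as it is known

    threads = [threading.Thread(target=stage, args=(i,)) for i in range(nstages)]
    for t in threads:
        t.start()
    for label, s in enumerate(initial_states): # several models in flight at once
        queues[0].put((label, s))
    queues[0].put(None)
    results = {}
    while (item := queues[-1].get()) is not None:
        results[item[0]] = item[1]
    for t in threads:
        t.join()
    return results
```

    The cumulative sum over `points` plays the role of the wavelength-dependent radiation field handed from one processor to the next.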

  6. Development of structural schemes of parallel structure manipulators using screw calculus

    Science.gov (United States)

    Rashoyan, G. V.; Shalyukhin, K. A.; Gaponenko, EV

    2018-03-01

    The paper considers an approach to the structural analysis and synthesis of parallel-structure robots based on the mathematical apparatus of screw groups and on the concept of screw reciprocity. Results are presented for the synthesis of parallel-structure robots with different numbers of degrees of freedom, corresponding to the different screw groups. To this end, force screws are applied, based on the principle of static-kinematic analogy; the force screws are taken along the axes of the non-driven kinematic pairs of the corresponding connecting chain. Accordingly, the kinematic screws of the robot's output link, which are reciprocal to the force screws of the kinematic sub-chains, are determined simultaneously. The solution of certain synthesis problems is illustrated with practical applications. Closed screw groups are of eight types. Of greatest significance are the three-membered screw groups, as well as the four-membered [1] and six-membered ones. Three-membered screw groups correspond to translational guiding mechanisms, to spherical mechanisms, and to planar mechanisms. The four-membered group corresponds to the motion of the SCARA robot. The six-membered group includes all possible motions. From the works of A.P. Kotelnikov and F.M. Dimentberg, it is known that closed fifth-order screw groups do not exist. The article presents examples of mechanisms corresponding to the given groups.

  7. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and a multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms cluster multidimensional data by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVIDIA's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than was possible before.
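    The D²-sampling seeding step that these implementations parallelize can be sketched serially as follows (a minimal sketch; function and parameter names are mine):

```python
import random

def kmeans_pp_seeds(points, k, rng=None):
    # k-means++ seeding: pick the first seed uniformly, then draw each new
    # seed with probability proportional to the squared distance to the
    # nearest seed already chosen (D^2 sampling).
    rng = rng or random.Random(0)

    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    seeds = [points[rng.randrange(len(points))]]
    while len(seeds) < k:
        # The distance pass below is the embarrassingly parallel part.
        dists = [min(d2(p, s) for s in seeds) for p in points]
        r = rng.random() * sum(dists)
        acc = 0.0
        for p, d in zip(points, dists):
            acc += d
            if d > 0 and acc >= r:
                seeds.append(p)
                break
        else:                         # all remaining points coincide with a seed
            seeds.append(points[rng.randrange(len(points))])
    return seeds
```

    The per-point nearest-seed distance computation dominates the cost, which is why it maps well onto CUDA/Thrust, OpenMP, and the XMT.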

  8. Joint Action Group: public opinion poll: final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-31

    The Joint Action Group (JAG) for Environmental Cleanup of the Muggah Creek Watershed in Cape Breton, Nova Scotia is a new community-driven process in which a group of individuals have cooperated in one of the largest remediation projects in Canada. The group plays an advisory role to the government in identifying what should be done to remediate the Muggah Creek watershed and the Sydney Tar Ponds. The Muggah Creek watershed area includes a municipal landfill site, the coke ovens site and the Muggah Creek estuary (Sydney Tar Ponds). This report contains an analysis of the responses of a sample of 600 households in industrial Cape Breton to a telephone survey designed to measure community awareness and knowledge of JAG, its working groups, and the Muggah Creek Watershed Cleanup process, and identify community concerns regarding the process. tabs.

  9. Joint Action Group: public opinion poll: final report

    International Nuclear Information System (INIS)

    1998-01-01

    The Joint Action Group (JAG) for Environmental Cleanup of the Muggah Creek Watershed in Cape Breton, Nova Scotia is a new community-driven process in which a group of individuals have cooperated in one of the largest remediation projects in Canada. The group plays an advisory role to the government in identifying what should be done to remediate the Muggah Creek watershed and the Sydney Tar Ponds. The Muggah Creek watershed area includes a municipal landfill site, the coke ovens site and the Muggah Creek estuary (Sydney Tar Ponds). This report contains an analysis of the responses of a sample of 600 households in industrial Cape Breton to a telephone survey designed to measure community awareness and knowledge of JAG, its working groups, and the Muggah Creek Watershed Cleanup process, and identify community concerns regarding the process. tabs

  10. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition-time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem that parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section (SENSE, SMASH, g-SMASH and GRAPPA) selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts is shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed, and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
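    The unfolding step at the heart of SENSE-type reconstruction can be illustrated for a single aliased pixel: with speedup factor R, each folded pixel is a linear mix of R true pixels weighted by the coil sensitivities, so unfolding is a small per-pixel least-squares solve (a toy sketch assuming known sensitivities; names are mine, not from any MRI package):

```python
import numpy as np

def sense_unfold(folded, sens):
    # folded: (ncoils,) measured values at one aliased pixel location.
    # sens:   (ncoils, R) coil sensitivities at the R overlapping positions.
    # The measurement model is folded = sens @ true_pixels, so unfolding is
    # a least-squares solve (no regularization in this toy version).
    true_pixels, *_ = np.linalg.lstsq(sens, folded, rcond=None)
    return true_pixels
```

    With more coils than overlapping pixels the system is overdetermined, and the conditioning of `sens` is what the g-factor quantifies.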

  11. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program and, conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we encountered a number of difficulties that are probably neither limited to this particular application nor likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  12. A Clinical Pilot Study Comparing Sweet Bee Venom parallel treatment with only Acupuncture Treatment in patient diagnosed with lumbar spine sprain

    Directory of Open Access Journals (Sweden)

    Shin Yong-jeen

    2011-06-01

    Objectives: This study was carried out to compare Sweet Bee Venom (hereafter, Sweet BV) acupuncture parallel treatment with acupuncture-only treatment for patients diagnosed with lumbar spine sprain, and to find a better treatment. Methods: The subjects were patients diagnosed with lumbar spine sprain and hospitalized at Suncheon oriental medical hospital, randomly divided into a Sweet BV parallel-treatment group and an acupuncture-only group, with all other treatment conditions kept the same. A Visual Analogue Scale (VAS) was then used to compare the treatment periods of the two groups from VAS 10 to VAS 0, from VAS 10 to VAS 5, and from VAS 5 to VAS 0. Results & Conclusion: Comparing the respective treatment periods, the period from VAS 10 to VAS 5 was significantly shorter in the Sweet BV parallel-treatment group than in the acupuncture-only group, but the period from VAS 5 to VAS 0 did not show a significant difference. Sweet BV parallel treatment can therefore be said to be effective in shortening the treatment period and controlling early pain compared to acupuncture-only treatment.

  13. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.

  14. EDF Group - Annual Report 2007. European leader for tomorrow's energies

    International Nuclear Information System (INIS)

    2008-01-01

    The EDF Group is a leading player in the European energy industry, active in all areas of the electricity value chain, from generation to trading and network management. The leader in the French electricity market, the Group also has solid positions in the United Kingdom, Germany and Italy, with a portfolio of 38.5 million European customers and a generation fleet which is unique in the world. It intends to play a major role in the global revival of nuclear and is increasingly active in the gas chain. The Group has a sound business model, evenly balanced between regulated and deregulated activities. Given its R and D capability, its track record and expertise in nuclear, fossil-fired and hydro generation and in renewable energies, together with its energy eco-efficiency offers, EDF is well placed to deliver competitive solutions to reconcile sustainable economic growth and climate preservation. This document is EDF Group's annual report for the year 2007. It contains information about Group profile, governance, business, development strategy, sales and marketing, positions in Europe and international activities. The document is made of several reports: the Activity and Sustainable Development Report, the Financial Report, the Sustainable Development Report, the Sustainable Development Indicators, and the Report by the Chairman of EDF Board of Directors on corporate governance and internal control procedures

  15. Influence of Paralleling Dies and Paralleling Half-Bridges on Transient Current Distribution in Multichip Power Modules

    DEFF Research Database (Denmark)

    Li, Helong; Zhou, Wei; Wang, Xiongfei

    2018-01-01

    This paper addresses the transient current distribution in the multichip half-bridge power modules, where two types of paralleling connections with different current commutation mechanisms are considered: paralleling dies and paralleling half-bridges. It reveals that with paralleling dies, both t...

  16. User's guide of parallel program development environment (PPDE). The 2nd edition

    International Nuclear Information System (INIS)

    Ueno, Hirokazu; Takemiya, Hiroshi; Imamura, Toshiyuki; Koide, Hiroshi; Matsuda, Katsuyuki; Higuchi, Kenji; Hirayama, Toshio; Ohta, Hirofumi

    2000-03-01

    The STA basic system has been enhanced to accelerate support for parallel programming on heterogeneous parallel computers, through a series of R and D efforts on parallel processing technology. The enhancement extends the functions of the PPDE, the Parallel Program Development Environment in the STA basic system. The extended PPDE can: 1) automatically create a 'makefile' and a shell script for its execution; 2) perform multi-tool execution, in which tools on heterogeneous computers carry out a task on a computer with a single operation; and 3) perform mirror composition, reflecting the editing results of a file on one computer into all related files on the other computers. These additional functions enhance the efficiency of program development across computers. More functions have been added to the PPDE to help parallel program development. New functions were also designed to complement an HPF translator and a parallelization support tool working together, so that a sequential program can be efficiently converted into a parallel program. This report describes the use of the extended PPDE. (author)

  17. Fast electrostatic force calculation on parallel computer clusters

    International Nuclear Information System (INIS)

    Kia, Amirali; Kim, Daejoong; Darve, Eric

    2008-01-01

    The fast multipole method (FMM) and smooth particle mesh Ewald (SPME) are well-known fast algorithms for evaluating long-range electrostatic interactions in molecular dynamics and other fields. FMM is a multi-scale method that reduces the computation cost by approximating the potential due to a group of particles at a large distance using a few multipole functions. This algorithm scales like O(N) for N particles. The SPME algorithm is an O(N ln N) method based on an interpolation of the Fourier-space part of the Ewald sum, with the resulting convolutions evaluated using the fast Fourier transform (FFT). These algorithms suffer from relatively poor efficiency on large parallel machines, especially for mid-size problems of around hundreds of thousands of atoms. A variation of the FMM, called PWA, based on plane wave expansions, is presented in this paper. A new parallelization strategy for PWA, which takes advantage of the specific form of this expansion, is described. Its parallel efficiency is compared with SPME through detailed time measurements on two different computer clusters
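    The idea behind FMM's cost reduction, replacing the potential of a distant particle group by a few expansion terms, can be shown with the lowest-order (monopole) term alone (an illustrative sketch of the principle, not the paper's PWA method; helper names are mine):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def direct_potential(target, sources):
    # O(N) direct sum over point charges (Coulomb constant dropped).
    return sum(q / dist(target, p) for p, q in sources)

def monopole_potential(target, sources):
    # Replace the whole group by its total charge placed at the
    # charge-weighted centre: the cheapest multipole approximation,
    # accurate when the group is far from the target point.
    Q = sum(q for _, q in sources)
    centre = tuple(sum(p[i] * q for p, q in sources) / Q for i in range(3))
    return Q / dist(target, centre)
```

    FMM refines this with higher multipole terms and a hierarchy of boxes, which is what brings the overall cost down to O(N).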

  18. An iterative algorithm for solving the multidimensional neutron diffusion nodal method equations on parallel computers

    International Nuclear Information System (INIS)

    Kirk, B.L.; Azmy, Y.Y.

    1992-01-01

    In this paper the one-group, steady-state neutron diffusion equation in two-dimensional Cartesian geometry is solved using the nodal integral method. The discrete variable equations comprise loosely coupled sets of equations representing the nodal balance of neutrons, as well as neutron current continuity along rows or columns of computational cells. An iterative algorithm that is more suitable for solving large problems concurrently is derived based on the decomposition of the spatial domain and is accelerated using successive overrelaxation. This algorithm is very well suited for parallel computers, especially since the spatial domain decomposition occurs naturally, so that the number of iterations required for convergence does not depend on the number of processors participating in the calculation. Implementation of the authors' algorithm on the Intel iPSC/2 hypercube and Sequent Balance 8000 parallel computer is presented, and measured speedup and efficiency for test problems are reported. The results suggest that the efficiency of the hypercube quickly deteriorates when many processors are used, while the Sequent Balance retains very high efficiency for a comparable number of participating processors. This leads to the conjecture that message-passing parallel computers are not as well suited for this algorithm as shared-memory machines
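    The successive-overrelaxation acceleration named above can be sketched on a toy 5-point Laplacian standing in for the nodal-balance equations (illustrative only; function and parameter names are mine):

```python
def sor_solve(n, f, omega=1.7, tol=1e-10, iters=10000):
    # Solve the 5-point discrete Poisson problem
    #   4*u[i][j] - (sum of 4 neighbours) = f[i][j]
    # on an n x n grid with u = 0 on the boundary, using SOR:
    # each unknown is moved past its Gauss-Seidel value by the factor omega.
    u = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        change = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                gs = 0.25 * (u[i - 1][j] + u[i + 1][j]
                             + u[i][j - 1] + u[i][j + 1] + f[i][j])
                new = u[i][j] + omega * (gs - u[i][j])   # overrelaxed update
                change = max(change, abs(new - u[i][j]))
                u[i][j] = new
        if change < tol:
            break
    return u
```

    In the parallel setting of the abstract, the grid is split into subdomains and each processor sweeps its own block, exchanging boundary values between iterations.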

  19. A double blind parallel group placebo controlled comparison of sedative and mnesic effects of etifoxine and lorazepam in healthy subjects [corrected].

    Science.gov (United States)

    Micallef, J; Soubrouillard, C; Guet, F; Le Guern, M E; Alquier, C; Bruguerolle, B; Blin, O

    2001-06-01

    This paper describes the psychomotor and mnesic effects of single oral doses of etifoxine (50 and 100 mg) and lorazepam (2 mg) in healthy subjects. Forty-eight healthy subjects were included in this randomized double blind, placebo controlled parallel group study [corrected]. The effects of drugs were assessed by using a battery of subjective and objective tests that explored mood and vigilance (Visual Analog Scale), attention (Barrage test), psychomotor performance (Choice Reaction Time) and memory (digit span, immediate and delayed free recall of a word list). Whereas vigilance, psychomotor performance and free recall were significantly impaired by lorazepam, neither dosage of etifoxine (50 and 100 mg) produced such effects. These results suggest that 50 and 100 mg single dose of etifoxine do not induce amnesia and sedation as compared to lorazepam.

  20. Ignalina Safety Analysis Group's report for the year 1998

    International Nuclear Information System (INIS)

    Uspuras, E.; Augutis, J.; Bubelis, E.; Cesna, B.; Kaliatka, A.

    1999-02-01

    Results of the Ignalina NPP Safety Analysis Group's research are presented. The main fields of the group's activities in 1998 were the following: safety analysis of the reactor cooling system, safety analysis of the accident localization system, investigation of the graphite-fuel channel problem, reactor core modelling, and assistance to the regulatory body VATESI in drafting regulations and reviewing safety reports presented by the Ignalina NPP during the licensing of unit 1

  1. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing for a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  2. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, for execution on distributed-memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty-deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations, including local algorithm replacement, such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge of useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  3. Report of JLC site study group

    CERN Document Server

    Hasegawa, T; Yamashita, S

    2003-01-01

    This study group selected candidate sites for the construction of the JLC (electron-positron linear collider) on the basis of data surveys and field investigation. The aims, activities, use of the underground of private land, site conditions, present and future site selection, and a summary with proposals are reported. Nine sites (Hidaka, Kitakami, Murayama, Abukuma, Kitaibaraki, Aichi and Gifu, Takamatsu, Hiroshima and the Seburi range) were selected for construction on the basis of firm ground, and four sites (Okinawa, Harima, Tsukuba and Mutsuogawara) for development and research. The nine site areas consist of plutonic rock or old strata of the Paleozoic era. Problems at each site are reported. Three proposals are made: 1) the local governments of the candidate sites should be kept informed so that they understand the JLC and can support its construction; 2) a site evaluation committee should be formed of specialists in civil engineering, building, the social and natural environment, and disaster prevention; and 3) the vibration test should be carried out ...

  4. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-29

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by an instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.

  5. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  6. A parallel algorithm for the non-symmetric eigenvalue problem

    International Nuclear Information System (INIS)

    Sidani, M.M.

    1991-01-01

    An algorithm is presented for the solution of the non-symmetric eigenvalue problem. The algorithm is based on a divide-and-conquer procedure that provides initial approximations to the eigenpairs, which are then refined using Newton iterations. Since the smaller subproblems can be solved independently, and since Newton iterations with different initial guesses can be started simultaneously, the algorithm - unlike the standard QR method - is ideal for parallel computers. The author also reports on his investigation of deflation methods designed to obtain further eigenpairs if needed. Numerical results from implementations on a host of parallel machines (distributed and shared-memory) are presented
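    The refinement step can be illustrated with Rayleigh-quotient iteration, a Newton-type correction for eigenpairs that sharpens an initial approximation in a few steps (a sketch of the general idea, not necessarily the author's exact scheme; the function name is mine):

```python
import numpy as np

def refine_eigenpair(A, lam, v, iters=5):
    # Rayleigh-quotient iteration: inverse iteration with the shift updated
    # to the current Rayleigh quotient; near an eigenpair it converges
    # very rapidly, so independent refinements can run in parallel from
    # the divide-and-conquer starting guesses.
    for _ in range(iters):
        try:
            w = np.linalg.solve(A - lam * np.eye(len(A)), v)
        except np.linalg.LinAlgError:
            break                     # shift hit the eigenvalue to machine precision
        v = w / np.linalg.norm(w)
        lam = (v @ A @ v) / (v @ v)   # updated eigenvalue estimate
    return lam, v
```

    Because each starting guess is refined independently, the iterations parallelize trivially across eigenpairs, which is the property the abstract exploits.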

  7. Development of a parallelization method for KENO V.a

    International Nuclear Information System (INIS)

    Basoglu, B.; Bentley, C.; Dunn, M.

    1995-01-01

    The KENO V.a code is a widely used Monte Carlo code that is part of the SCALE modular code system for performing standardized computer analyses of nuclear systems for licensing evaluation. In the past few years, attempts have been made to speed up KENO V.a using new-generation computers. In this paper we report on the initial development of a parallel version of KENO V.a for the Kendall Square Research supercomputer (KSR1) at ORNL. Investigations thus far have shown that the parallel code provides accurate results with significantly reduced computation times relative to the conventional KENO V.a code

  8. The specification of Stampi, a message passing library for distributed parallel computing

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Takemiya, Hiroshi; Koide, Hiroshi

    2000-03-01

At CCSE, the Center for Promotion of Computational Science and Engineering, a new message passing library for heterogeneous and distributed parallel computing, called Stampi, has been developed. Stampi enables communication between any combination of parallel computers as well as workstations. Currently, a Stampi system is constructed from the Stampi library and Stampi/Java. It provides functions to connect a Stampi application not only with those on COMPACS, the COMplex Parallel Computer System, but also with applets which work on WWW browsers. This report summarizes the specifications of Stampi and details the development of its system. (author)

  9. Parallel real-time visualization system for large-scale simulation. Application to WSPEEDI

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Otani, Takayuki; Kitabata, Hideyuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun

    2000-01-01

    The real-time visualization system, PATRAS (PArallel TRAcking Steering system) has been developed on parallel computing servers. The system performs almost all of the visualization tasks on a parallel computing server, and uses image data compression technique for efficient communication between the server and the client terminal. Therefore, the system realizes high performance concurrent visualization in an internet computing environment. The experience in applying PATRAS to WSPEEDI (Worldwide version of System for Prediction Environmental Emergency Dose Information) is reported. The application of PATRAS to WSPEEDI enables users to understand behaviours of radioactive tracers from different release points easily and quickly. (author)

  10. Effectiveness of a mobile cooperation intervention during the clinical practicum of nursing students: a parallel group randomized controlled trial protocol.

    Science.gov (United States)

    Strandell-Laine, Camilla; Saarikoski, Mikko; Löyttyniemi, Eliisa; Salminen, Leena; Suomi, Reima; Leino-Kilpi, Helena

    2017-06-01

The aim of this study was to describe the protocol for a study evaluating the effectiveness of a mobile cooperation intervention to improve students' competence level, self-efficacy in clinical performance and satisfaction with the clinical learning environment. Nursing student-nurse teacher cooperation during the clinical practicum has a vital role in promoting the learning of students. Despite an increasing interest in using mobile technologies to improve the clinical practicum of students, there is limited robust evidence regarding their effectiveness. A multicentre, parallel group, randomized, controlled, pragmatic, superiority trial. Second-year pre-registration nursing students who are beginning a clinical practicum will be recruited from one university of applied sciences. Eligible students will be randomly allocated to either a control group (engaging in standard cooperation) or an intervention group (engaging in mobile cooperation) for the 5-week clinical practicum. The complex mobile cooperation intervention comprises mobile application-assisted nursing student-nurse teacher cooperation and training in the functions of the mobile application. The primary outcome is competence. The secondary outcomes include self-efficacy in clinical performance and satisfaction with the clinical learning environment. Moreover, a process evaluation will be undertaken. The ethical approval for this study was obtained in December 2014 and the study received funding in 2015. The results of this study will provide robust evidence on mobile cooperation during the clinical practicum, a research topic that has not been consistently studied to date. © 2016 John Wiley & Sons Ltd.

  11. UCLA Particle Physics Research Group annual progress report

    International Nuclear Information System (INIS)

    Nefkens, B.M.K.

    1983-11-01

    The objectives, basic research programs, recent results, and continuing activities of the UCLA Particle Physics Research Group are presented. The objectives of the research are to discover, to formulate, and to elucidate the physics laws that govern the elementary constituents of matter and to determine basic properties of particles. The research carried out by the Group last year may be divided into three separate programs: (1) baryon spectroscopy, (2) investigations of charge symmetry and isospin invariance, and (3) tests of time reversal invariance. The main body of this report is the account of the techniques used in our investigations, the results obtained, and the plans for continuing and new research. An update of the group bibliography is given at the end

  12. Domain Decomposition: A Bridge between Nature and Parallel Computers

    Science.gov (United States)

    1992-09-01

AD-A256 575. NASA Contractor Report 189709, ICASE Report No. 92-44: Domain Decomposition: A Bridge between Nature and Parallel Computers. The report cites "Domain Decomposition Algorithms for Indefinite Elliptic Problems," SIAM Journal of Scientific and Statistical Computing, Vol. 13, 1992, and discusses domain decomposition algorithms that can be effectively implemented on distributed memory multiprocessors. In 1990 (as reported in Ref. 38 using the tile algorithm), a 103,201-unknown 2D elliptic

  13. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  14. Use of bibloc and monobloc oral appliances in obstructive sleep apnoea: a multicentre, randomized, blinded, parallel-group equivalence trial.

    Science.gov (United States)

    Isacsson, Göran; Nohlert, Eva; Fransson, Anette M C; Bornefalk-Hermansson, Anna; Wiman Eriksson, Eva; Ortlieb, Eva; Trepp, Livia; Avdelius, Anna; Sturebrand, Magnus; Fodor, Clara; List, Thomas; Schumann, Mohamad; Tegelberg, Åke

    2018-05-16

    The clinical benefit of bibloc over monobloc appliances in treating obstructive sleep apnoea (OSA) has not been evaluated in randomized trials. We hypothesized that the two types of appliances are equally effective in treating OSA. To compare the efficacy of monobloc versus bibloc appliances in a short-term perspective. In this multicentre, randomized, blinded, controlled, parallel-group equivalence trial, patients with OSA were randomly assigned to use either a bibloc or a monobloc appliance. One-night respiratory polygraphy without respiratory support was performed at baseline, and participants were re-examined with the appliance in place at short-term follow-up. The primary outcome was the change in the apnoea-hypopnea index (AHI). An independent person prepared a randomization list and sealed envelopes. Evaluating dentist and the biomedical analysts who evaluated the polygraphy were blinded to the choice of therapy. Of 302 patients, 146 were randomly assigned to use the bibloc and 156 the monobloc device; 123 and 139 patients, respectively, were analysed as per protocol. The mean changes in AHI were -13.8 (95% confidence interval -16.1 to -11.5) in the bibloc group and -12.5 (-14.8 to -10.3) in the monobloc group. The difference of -1.3 (-4.5 to 1.9) was significant within the equivalence interval (P = 0.011; the greater of the two P values) and was confirmed by the intention-to-treat analysis (P = 0.001). The adverse events were of mild character and were experienced by similar percentages of patients in both groups (39 and 40 per cent for the bibloc and monobloc group, respectively). The study shows short-term results with a median time from commencing treatment to the evaluation visit of 56 days and long-term data on efficacy and harm are needed to be fully conclusive. In a short-term perspective, both appliances were equivalent in terms of their positive effects for treating OSA and caused adverse events of similar magnitude. Registered with Clinical

  15. Small arms proliferation. Report on working group 2

    International Nuclear Information System (INIS)

    1998-01-01

The working group reported on the proliferation of small arms, light weapons and non-lethal weapons, which have traditionally been given little attention in international peace talks, in contrast to nuclear weapons, which were tested during the Second World War but never used in war afterwards

  16. Steam Generator Group Project. Annual report, 1982

    International Nuclear Information System (INIS)

    Clark, R.A.; Lewis, M.

    1984-02-01

    The Steam Generator Group Project (SGGP) is an NRC program joined by additional sponsors. The SGGP utilizes a steam generator removed from service at a nuclear plant (Surry 2) as a vehicle for research on a variety of safety and reliability issues. This report is an annual summary of progress of the program for 1982. Information is presented on the Steam Generator Examination Facility (SGEF), especially designed and constructed for this research. Loading of the generator into the SGEF is then discussed. The report then presents radiological field mapping results and personnel exposure monitoring. This is followed by information on field reduction achieved by channel head decontaminations. The report then presents results of a secondary side examination through shell penetrations placed prior to transport, confirming no change in generator condition due to transport. Decontamination of the channel head is discussed followed by plans for eddy current testing and removal of the plugs placed during service. Results of a preliminary profilometry examination are then provided

  17. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  18. Annual report of the Summit Members' Working Group on Controlled Thermonuclear Fusion (Fusion Working Group (FWG))

    Energy Technology Data Exchange (ETDEWEB)

    none,

    1987-04-01

    The Summit Members' Working Group on Controlled Thermonuclear Fusion (Fusion Working Group (FWG)) was established in 1983 in response to the Declaration of the Heads of State and Government at the Versailles Economic Summit meeting of 1982, and in response to the subsequent report of the Working Group in Technology, Growth and Employment (TGE) as endorsed at the Williamsburg Summit meeting, 1983. This document contains the complete written record of each of the three FWG meetings which include the minutes, lists of attendees, agendas, statements, and summary conclusions as well as the full reports of the Technical Working Party. In addition, there is a pertinent exchange of correspondence between FWG members on the role of the Technical Working Party and a requested background paper on the modalities associated with a possible future ETR project.

  19. A PC parallel port button box provides millisecond response time accuracy under Linux.

    Science.gov (United States)

    Stewart, Neil

    2006-02-01

    For psychologists, it is sometimes necessary to measure people's reaction times to the nearest millisecond. This article describes how to use the PC parallel port to receive signals from a button box to achieve millisecond response time accuracy. The workings of the parallel port, the corresponding port addresses, and a simple Linux program for controlling the port are described. A test of the speed and reliability of button box signal detection is reported. If the reader is moderately familiar with Linux, this article should provide sufficient instruction for him or her to build and test his or her own parallel port button box. This article also describes how the parallel port could be used to control an external apparatus.
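The status-register decoding such a button box relies on can be sketched in a few lines. Everything below is an assumption based on the standard SPP register layout (status at BASE+1, input lines on bits 3-7, BUSY hardware-inverted), not code from the article; `BASE = 0x378` is merely the conventional first-port address, and the `portio` usage in the trailing comment is hypothetical.

```python
# Standard SPP layout (assumed): data register at BASE, status at BASE + 1.
BASE = 0x378
STATUS = BASE + 1

def status_to_buttons(status_byte: int) -> int:
    """Map a status-register byte to a 5-bit button state.

    Bits 3-7 of the status register are the usable input lines; bit 7
    (BUSY) is inverted by the port hardware, so flip it before shifting
    the five lines down to bits 0-4.
    """
    return ((status_byte ^ 0x80) >> 3) & 0x1F

# On real hardware under Linux the byte would come from the port itself,
# e.g. (root only; 'portio' is a third-party module, shown as an assumption):
#   import portio
#   portio.ioperm(BASE, 3, 1)
#   buttons = status_to_buttons(portio.inb(STATUS))
```

Polling a read like this in a tight loop is what yields the millisecond-level timing the article reports.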

  20. Integrated computer network high-speed parallel interface

    International Nuclear Information System (INIS)

    Frank, R.B.

    1979-03-01

As the number and variety of computers within Los Alamos Scientific Laboratory's Central Computer Facility grows, the need for a standard, high-speed intercomputer interface has become more apparent. This report details the development of a High-Speed Parallel Interface from conceptual through implementation stages to meet current and future needs for large-scale network computing within the Integrated Computer Network. 4 figures

  1. Parallel Hybrid Vehicle Optimal Storage System

    Science.gov (United States)

    Bloomfield, Aaron P.

    2009-01-01

    A paper reports the results of a Hybrid Diesel Vehicle Project focused on a parallel hybrid configuration suitable for diesel-powered, medium-sized, commercial vehicles commonly used for parcel delivery and shuttle buses, as the missions of these types of vehicles require frequent stops. During these stops, electric hybridization can effectively recover the vehicle's kinetic energy during the deceleration, store it onboard, and then use that energy to assist in the subsequent acceleration.

  2. Rasagiline as an adjunct to levodopa in patients with Parkinson's disease and motor fluctuations (LARGO, Lasting effect in Adjunct therapy with Rasagiline Given Once daily, study): a randomised, double-blind, parallel-group trial.

    OpenAIRE

    Rascol, O.; Brooks, D.J.; Melamed, E.; Oertel, W.; Poewe, W.; Stocchi, F.; Tolosa, E.; LARGO study group

    2005-01-01

    Lancet. 2005 Mar 12-18;365(9463):947-54. Rasagiline as an adjunct to levodopa in patients with Parkinson's disease and motor fluctuations (LARGO, Lasting effect in Adjunct therapy with Rasagiline Given Once daily, study): a randomised, double-blind, parallel-group trial. Rascol O, Brooks DJ, Melamed E, Oertel W, Poewe W, Stocchi F, Tolosa E; LARGO study group. Clinical Investigation Centre, Department of Clinical Pharmacology, University Hospital, Toulouse, France. ...

  3. Progress report, 1 Jan - 31 Dec 1989. Information Systems Group

    International Nuclear Information System (INIS)

    Loevborg, L.

    1990-04-01

    The report describes the work of the Information Systems Group at Risoe National Laboratory during 1989. The activities may be classified as research into human work and cognition, decision support systems, and process control and process simulation. The report includes a list of staff members. (author)

  4. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

Parallel channel interactions are examined. Results of phenomenon analysis and of the mechanisms of parallel channel interaction are presented for experimental investigations of nonstationary flow regimes in three parallel vertical channels, under adiabatic conditions, for single-phase fluid and two-phase mixture flow. (author)

  5. The QCD/SM Working Group: Summary Report

    International Nuclear Information System (INIS)

    Dobbs, M.

    2004-01-01

    Among the many physics processes at TeV hadron colliders, we look most eagerly for those that display signs of the Higgs boson or of new physics. We do so however amid an abundance of processes that proceed via Standard Model (SM) and in particular Quantum Chromodynamics (QCD) interactions, and that are interesting in their own right. Good knowledge of these processes is required to help us distinguish the new from the known. Their theoretical and experimental study teaches us at the same time more about QCD/SM dynamics, and thereby enables us to further improve such distinctions. This is important because it is becoming increasingly clear that the success of finding and exploring Higgs boson physics or other New Physics at the Tevatron and LHC will depend significantly on precise understanding of QCD/SM effects for many observables. To improve predictions and deepen the study of QCD/SM signals and backgrounds was therefore the ambition for our QCD/SM working group at this Les Houches workshop. Members of the working group made significant progress towards this on a number of fronts. A variety of tools were further developed, from methods to perform higher order perturbative calculations or various types of resummation, to improvements in the modeling of underlying events and parton showers. Furthermore, various precise studies of important specific processes were conducted. A significant part of the activities in Les Houches revolved around Monte Carlo simulation of collision events. A number of contributions in this report reflect the progress made in this area. At present a large number of Monte Carlo programs exist, each written with a different purpose and employing different techniques. Discussions in Les Houches revealed the need for an accessible primer on Monte Carlo programs, featuring a listing of various codes, each with a short description, but also providing a low-level explanation of the underlying methods. This primer has now been compiled and a

  6. Parallels in government and corporate sustainability reporting

    Science.gov (United States)

    D. J. Shields; S. V. Solar

    2007-01-01

    One of the core tenets of Sustainable Development is transparency and information sharing, i.e., government and corporate reporting. Governments report on issues within their sphere of responsibility to the degree that their constituents demand that they do so. Firms undertake reporting for two reasons: they are required to do so by law, and doing so makes good...

  7. IAEA INTOR workshop report, groups 2, 5, 7, 9, 10 and 15

    International Nuclear Information System (INIS)

    1980-02-01

    In order to prove scientific feasibility of magnetic confinement fusion, large fusion devices are under construction in several countries (JT-60 in Japan, T-15 in U.S.S.R., TFTR in U.S.A. and JET in EC). International Tokamak Reactor (INTOR) Workshop was organized by the International Atomic Energy Agency (IAEA) to identify roles, objectives and characteristics of the next generation fusion device. This report is a compilation of the home task reports of six groups on INTOR engineering aspects by Japan Atomic Energy Research Institute for workshop sessions 2 and 3 held in 1979. Tasks of the respective groups are group 2: first wall/blanket/shield, group 5: magnetics, group 7: systems integration and structure, group 9: assembly and remote maintenance, group 10: radiation shielding and personnel access, group 15: safety and environment. (author)

  8. Daily consumption of fermented soymilk helps to improve facial wrinkles in healthy postmenopausal women in a randomized, parallel-group, open-label trial

    Directory of Open Access Journals (Sweden)

    Mitsuyoshi Kano

    2018-02-01

Full Text Available Background: Soymilk fermented by lactobacilli and/or bifidobacteria is attracting attention due to the excellent bioavailability of its isoflavones. We investigated the effects of fermented soymilk containing high amounts of isoflavone aglycones on facial wrinkles and urinary isoflavones in postmenopausal women in a randomized, parallel-group, open-label trial. Healthy Japanese women were randomly divided into active (n = 44, mean age 56.3 ± 0.5) or control (n = 44, mean age 56.1 ± 0.5) groups, who consumed or did not consume a bottle of soymilk fermented by Bifidobacterium breve strain Yakult and Lactobacillus mali for 8 weeks. Maximum depth of wrinkles around the crow's feet area and other wrinkle parameters were evaluated as primary and secondary endpoints, respectively, at weeks 0, 4, and 8 during the consumption period. Urinary isoflavone levels were determined by liquid chromatography-mass spectrometry. Results: The active group demonstrated significant improvements in the maximum depth (p = 0.015) and average depth (p = 0.04) of wrinkles, and significantly elevated urinary isoflavones (daidzein, genistein, and glycitein; each p < 0.001) compared with the control during the consumption period. No serious adverse effects were recorded. Conclusion: These findings suggest that fermented soymilk taken daily may improve facial wrinkles and elevate urinary isoflavones in healthy postmenopausal women.

  9. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries, and to compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's 'tiling' dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  10. Charbonnages de France group. Annual report 99; Groupe Charbonnages de France. Rapport annuel 99

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-07-01

    This 1999 annual report of the French national collieries 'Charbonnages de France' (CDF) presents the turnover and financial data of the group, the situation of coal mining in France, the management of manpower, the rehabilitation of abandoned mine and plant sites, the impact of power market deregulation on the activities of the coal-fired power plants and cogeneration units of the national society of electric and thermal power (SNET) and of the SIDEC company, and the management of the real estate patrimony of the group in mining regions. Some conference texts written by engineers of the group are added at the end of the document and present the competences of CDF in environmental engineering (valorization of coal fly ash, cleansing of polluted sites, phyto-remediation) and development of biomass energy. (J.S.)

  11. Performance assessment of the SIMFAP parallel cluster at IFIN-HH Bucharest

    International Nuclear Information System (INIS)

    Adam, Gh.; Adam, S.; Ayriyan, A.; Dushanov, E.; Hayryan, E.; Korenkov, V.; Lutsenko, A.; Mitsyn, V.; Sapozhnikova, T.; Sapozhnikov, A.; Streltsova, O.; Buzatu, F.; Dulea, M.; Vasile, I.; Sima, A.; Visan, C.; Busa, J.; Pokorny, I.

    2008-01-01

Performance assessment and case study outputs of the parallel SIMFAP cluster at IFIN-HH Bucharest point to its effective and reliable operation. A comparison with results on the supercomputing system in LIT-JINR Dubna adds insight on resource allocation for problem solving by parallel computing. The solution of models asking for very large numbers of knots in the discretization mesh needs the migration to high performance computing based on parallel cluster architectures. The acquisition of ready-to-use parallel computing facilities being beyond limited budgetary resources, the solution at IFIN-HH was to buy the hardware and the inter-processor network, and to implement by own efforts the open software concerning both the operating system and the parallel computing standard. The present paper provides a report demonstrating the successful solution of these tasks. The implementation of the well-known HPL (High Performance LINPACK) Benchmark points to the effective and reliable operation of the cluster. The comparison of HPL outputs obtained on parallel clusters of different magnitudes shows that there is an optimum range of the order N of the linear algebraic system over which a given parallel cluster provides optimum parallel solutions. For the SIMFAP cluster, this range can be inferred to correspond to about 1 to 2 x 10^4 linear algebraic equations. For an algorithm of polynomial complexity N^α, the task sharing among p processors within a parallel solution mainly follows an (N/p)^α behaviour under peak performance achievement. Thus, while the problem complexity remains the same, a substantial decrease of the coefficient of the leading order of the polynomial complexity is achieved. (authors)
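The task-sharing behaviour quoted in the abstract can be restated as a toy cost model. The constants below are illustrative assumptions (α = 3, as for dense LINPACK-style factorization, and p = 8 workers), not measurements from the cluster:

```python
def model_time(n, p, alpha, c=1.0):
    """Assumed cost model from the abstract: per-processor time of the order
    c * (n/p)**alpha under peak performance, i.e. near-ideal task sharing
    among p processors."""
    return c * (n / p) ** alpha

# With alpha = 3 and p = 8 the model predicts a time ratio of p**alpha = 512
# relative to a single processor: the N**alpha complexity is unchanged, but
# the leading coefficient shrinks, as the abstract describes.
ratio = model_time(2.0e4, 1, 3) / model_time(2.0e4, 8, 3)
```

The model also suggests why an optimum range of N exists in practice: below it, fixed communication and startup costs (not modelled here) dominate the shrinking per-processor work.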

  12. COSPAR/PRBEM international working group activities report

    Science.gov (United States)

    Bourdarie, S.; Blake, B.; Cao, J. B.; Friedel, R.; Miyoshi, Y.; Panasyuk, M.; Underwood, C.

It is now clear to everybody that the current standard AE8/AP8 models for ionising particle specification in the radiation belts must be updated. But such an objective is quite difficult to reach: as a reminder, developing the AE8/AP8 models in the seventies took ten people working full time for ten years. It is clear that world-wide efforts must be combined, because no individual group has the human resources to produce these new models by itself. Under the COSPAR umbrella, an international group of experts, well distributed around the world, has been created to set up a common framework for everybody involved in this field. Planned activities of the international group of experts are to: define user needs; provide guidelines for a standard file format for ionising measurements; set up guidelines to process in-situ data on a common basis; decide in which form the new models will have to be; centralise all progress made world-wide to advise the community; and try to organise world-wide activities as a project to ensure complementarity and greater efficiency between all efforts. Activities of this working group since its creation are reported, as well as future plans

  13. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...... adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas...

  14. Vector-Parallel processing of the successive overrelaxation method

    International Nuclear Information System (INIS)

    Yokokawa, Mitsuo

    1988-02-01

The successive overrelaxation method, called the SOR method, is one of the iterative methods for solving linear systems of equations, and it has been computed serially with a natural ordering in many nuclear codes. After the appearance of vector processors, this natural SOR method was replaced by parallel algorithms such as the hyperplane or red-black method, in which the calculation order is modified. These methods are suitable for vector processors, and much faster calculation can be obtained compared with the natural SOR method on vector processors. In this report, a new scheme named the 4-colors SOR method is proposed. We find that the 4-colors SOR method can be executed on vector-parallel processors and that it gives the fastest calculation among all SOR methods, according to results of vector-parallel execution on the Alliant FX/8 multiprocessor system. It is also shown that the theoretical optimal acceleration parameters are equal among the five different ordering SOR methods, and the differences between the convergence rates of these SOR methods are examined. (author)
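The red-black ordering mentioned above (the two-colour special case of the report's 4-colour scheme) can be sketched for the model 2-D Poisson problem. The grid size, ω, and iteration count below are illustrative assumptions; the point is that every point of one colour depends only on points of the other colour, so each sweep is fully vectorizable:

```python
import numpy as np

def sor_red_black(b, omega=1.5, iters=500):
    """Red-black SOR for 4*u[i,j] - (N+S+E+W neighbours) = b[i,j] with zero
    Dirichlet boundaries (unit grid spacing absorbed into b)."""
    n = b.shape[0]
    u = np.zeros((n + 2, n + 2))          # ghost layer holds the boundary zeros
    for _ in range(iters):
        for color in (0, 1):              # red sweep, then black sweep
            for i in range(1, n + 1):
                for j in range(1, n + 1):
                    if (i + j) % 2 == color:
                        gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1]
                                     + u[i, j+1] + b[i-1, j-1])
                        u[i, j] = (1 - omega) * u[i, j] + omega * gs
    return u[1:-1, 1:-1]
```

In the natural ordering each update depends on the point computed immediately before it; after colouring, all same-coloured updates are mutually independent, which is what makes the hyperplane, red-black, and 4-colour variants suitable for vector and vector-parallel processors.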

  15. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers, both with shared memory and with distributed memory, are discussed. By analyzing the inherent laws of the mathematical and physical model of photon transport according to the structural features of parallel computers, using the strategy of 'divide and conquer', adjusting the algorithm structure of the program, dissolving the data dependencies, finding parallelizable ingredients and creating large-grain parallel subtasks, the sequential computing of photon transport is efficiently transformed into parallel and vector computing. The program was run on various high-performance parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup has been obtained
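The 'large-grain parallel subtasks' strategy can be illustrated with an embarrassingly parallel toy: estimating photon transmission through a slab by splitting the histories into independent, separately seeded chunks. All numbers (μ = 2, unit thickness, history counts) are illustrative assumptions, and a thread pool merely stands in for the vector and MPP hardware of the report:

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

MU = 2.0         # assumed attenuation coefficient (per unit length)
THICKNESS = 1.0  # assumed slab thickness

def transmitted_in_chunk(args):
    seed, n = args
    rng = np.random.default_rng(seed)   # per-chunk seed keeps chunks independent
    # A photon's free path is Exp(mean 1/MU); it escapes the slab if the
    # path exceeds the thickness (no scattering in this toy model).
    return int(np.count_nonzero(rng.exponential(1.0 / MU, size=n) > THICKNESS))

def transmission_estimate(n_total=200_000, n_chunks=8):
    """Large-grain decomposition: each chunk is a self-contained subtask."""
    per_chunk = n_total // n_chunks
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        counts = pool.map(transmitted_in_chunk,
                          [(seed, per_chunk) for seed in range(n_chunks)])
        return sum(counts) / (per_chunk * n_chunks)
```

The estimate should approach exp(-MU*THICKNESS) ≈ 0.135. Threads only illustrate the task structure; a production transport code would distribute such subtasks across MPI processes or vector pipelines as described in the abstract.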

  16. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up the calculation process, which usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes a square and symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and is executed on the GPU (graphics processing unit).
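The kernel being parallelized is a sparse matrix-vector product over a matrix of arbitrary, possibly rectangular and non-symmetric, shape, which is why graph partitioning's square-symmetric assumption fails. As a conceptual stand-in for the paper's hypergraph partitioning (which additionally minimizes communication volume), the sketch below pairs a CSR kernel with a simple nonzero-balanced row split; the function names are mine, not the paper's:

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """y = A @ x for A stored in CSR form; A may be rectangular."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = np.dot(data[lo:hi], x[indices[lo:hi]])
    return y

def nnz_balanced_row_split(indptr, n_parts):
    """Cut the row range so each part gets roughly equal nonzeros --
    the load-balance half of what a hypergraph partitioner provides."""
    nnz, n_rows = indptr[-1], len(indptr) - 1
    cuts = [int(np.searchsorted(indptr, k * nnz / n_parts))
            for k in range(1, n_parts)]
    return [0] + cuts + [n_rows]

# 4x3 example matrix [[1,0,2],[0,3,0],[4,5,6],[0,0,7]] in CSR form:
indptr = np.array([0, 2, 3, 6, 7])
indices = np.array([0, 2, 1, 0, 1, 2, 2])
data = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = csr_spmv(indptr, indices, data, np.array([1.0, 2.0, 3.0]))
```

On a GPU, each row (or slice of rows) becomes a work item for a CUDA thread; the partitioner's job is to balance both the nonzeros per part and the entries of x that each part must fetch.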

  17. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

Full Text Available This paper describes an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles, and it should be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.

  18. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    Full Text Available There is a need for compiler technology that, given the source program, will generate efficient parallel code for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet the use of parallel computation is limited by the high cost of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable, architecture-independent software for scientific computation, based on our experience with the equational programming language (EPL). Our approach rests on program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  19. Pthreads vs MPI Parallel Performance of Angular-Domain Decomposed S

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    2000-01-01

    Two programming models for parallelizing the Angular Domain Decomposition (ADD) of the discrete ordinates (Sn) approximation of the neutron transport equation are examined. These are the shared memory model based on the POSIX threads (Pthreads) standard, and the message passing model based on the Message Passing Interface (MPI) standard. These standard libraries are available on most multiprocessor platforms, making the resulting parallel codes widely portable. The question is: on a fixed platform, and for a particular code solving a given test problem, which of the two programming models delivers better parallel performance? Such a comparison is possible on Symmetric Multi-Processor (SMP) architectures, in which several CPUs physically share a common memory and in addition are capable of emulating message passing functionality. Implementation of the two-dimensional Sn Arbitrarily High Order Transport (AHOT) code for solving neutron transport problems using these two parallelization models is described. Measured parallel performance of each model on the COMPAQ AlphaServer 8400 and the SGI Origin 2000 platforms is described, and a comparison of the observed speedups for the two programming models is reported. For the case presented in this paper, the MPI implementation appears to scale better than the Pthreads implementation on both platforms.
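    The angular decomposition itself can be sketched in a few lines: split the quadrature set among workers, let each worker sweep its own subset of directions, and sum the quadrature-weighted partial scalar fluxes. The sketch below is our toy 1-D pure-absorber model with step differencing, not AHOT; threads stand in for Pthreads or MPI ranks.

```python
# Angular domain decomposition for one 1-D transport sweep: each worker
# sweeps its own discrete-ordinate directions; the scalar flux is the sum
# of the per-worker partial fluxes (toy pure-absorber, step differencing).
from concurrent.futures import ThreadPoolExecutor

def sweep(mu_w_pairs, sigma_t, source, dx, ncells):
    """Sweep the given (mu, weight) directions; return a partial scalar flux."""
    phi = [0.0] * ncells
    for mu, w in mu_w_pairs:
        cells = range(ncells) if mu > 0 else range(ncells - 1, -1, -1)
        psi_in = 0.0  # vacuum boundary
        for i in cells:
            psi = (source * dx + abs(mu) * psi_in) / (abs(mu) + sigma_t * dx)
            phi[i] += w * psi
            psi_in = psi
    return phi

def add_scalar_flux(quadrature, sigma_t, source, dx, ncells, nworkers):
    """Split the quadrature among workers and sum the partial fluxes."""
    chunks = [quadrature[i::nworkers] for i in range(nworkers)]
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        partials = pool.map(
            lambda q: sweep(q, sigma_t, source, dx, ncells), chunks)
        total = [0.0] * ncells
        for part in partials:
            total = [a + b for a, b in zip(total, part)]
    return total
```

Whatever the worker count, the summed scalar flux is the same (up to floating-point rounding), which is the property both the Pthreads and MPI versions must preserve.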

  20. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  1. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.
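    One of the smart load-balancing ingredients mentioned above can be illustrated with a generic heuristic: longest-processing-time (LPT) greedy assignment of unevenly sized tasks to processors. This is a textbook method, not the specific partitioners used in the paper.

```python
# LPT load balancing: sort tasks by decreasing cost and always assign the
# next task to the currently least-loaded processor (tracked in a min-heap).
import heapq

def lpt_partition(task_costs, nprocs):
    """Return a list of task-cost lists, one per processor."""
    loads = [(0.0, p, []) for p in range(nprocs)]  # (load, proc id, tasks)
    heapq.heapify(loads)
    for cost in sorted(task_costs, reverse=True):
        load, p, tasks = heapq.heappop(loads)
        tasks.append(cost)
        heapq.heappush(loads, (load + cost, p, tasks))
    return [tasks for _, _, tasks in sorted(loads, key=lambda t: t[1])]
```

For irregular workloads like unstructured meshes or N-body trees, the per-task costs would come from element or particle counts; the heuristic then keeps processor loads close to even.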

  2. ARTS - adaptive runtime system for massively parallel systems. Final report; ARTS - optimale Ausfuehrungsunterstuetzung fuer komplexe Anwendungen auf massiv parallelen Systemen. Teilprojekt: Parallele Stroemungsmechanik. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Gentzsch, W.; Ferstl, F.; Paap, H.G.; Riedel, E.

    1998-03-20

    In the ARTS project, system software has been developed to support smog and fluid dynamics applications on massively parallel systems. The aim is to implement and test specific software structures within an adaptive run-time system, separating the parallel core algorithms of the applications from the platform-independent runtime aspects. Only slight modifications to existing Fortran and C code are necessary to integrate the application code into the new object-oriented parallel integrated ARTS framework. The OO design offers easy control, re-use and adaptation of the system services, resulting in a dramatic decrease in application development time and easier maintenance of the application software in the future. (orig.) [Deutsch] In the ARTS project, basic software for supporting applications from the fields of smog analysis and fluid mechanics on massively parallel systems is developed and optimized. The focus is on testing suitable structures for locating system-level functionality in a runtime environment, thereby separating the parallel core algorithms of the application programs from the platform-independent runtime aspects. The application code comprises conventionally structured Fortran code, which must remain usable with minimal changes, as well as object-based C code that can exploit the full functionality of the ARTS platform. An object-oriented design permits simple control, reuse and adaptation of the basic services provided by the system, resulting in clearly reduced development and runtime effort for the application. ARTS creates an integrating platform that combines modern technologies from the field of object-oriented runtime systems with practical requirements from the field of scientific high-performance computing. (orig.)

  3. The Glasgow Parallel Reduction Machine: Programming Shared-memory Many-core Systems using Parallel Task Composition

    Directory of Open Access Journals (Sweden)

    Ashkan Tousimojarad

    2013-12-01

    Full Text Available We present the Glasgow Parallel Reduction Machine (GPRM, a novel, flexible framework for parallel task-composition based many-core programming. We allow the programmer to structure programs into task code, written as C++ classes, and communication code, written in a restricted subset of C++ with functional semantics and parallel evaluation. In this paper we discuss the GPRM, the virtual machine framework that enables the parallel task composition approach. We focus the discussion on GPIR, the functional language used as the intermediate representation of the bytecode running on the GPRM. Using examples in this language we show the flexibility and power of our task composition framework. We demonstrate the potential using an implementation of a merge sort algorithm on a 64-core Tilera processor, as well as on a conventional Intel quad-core processor and an AMD 48-core processor system. We also compare our framework with OpenMP tasks in a parallel pointer chasing algorithm running on the Tilera processor. Our results show that the GPRM programs outperform the corresponding OpenMP codes on all test platforms, and can greatly facilitate writing of parallel programs, in particular non-data parallel algorithms such as reductions.
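    The task-composition flavour of the merge sort experiment can be sketched with futures: the sort is expressed as a tree of tasks whose results are composed by a merge task. `concurrent.futures` stands in for GPRM's scheduler here; none of this is GPRM code.

```python
# Merge sort as composed tasks: each level spawns a task for one half,
# recurses on the other half inline, then merges the two futures' results.
from concurrent.futures import ThreadPoolExecutor

def merge(a, b):
    """Merge two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def task_sort(xs, pool, depth=2):
    """Sort xs, spawning parallel tasks down to the given depth."""
    if depth == 0 or len(xs) < 2:
        return sorted(xs)
    mid = len(xs) // 2
    left = pool.submit(task_sort, xs[:mid], pool, depth - 1)  # parallel task
    right = task_sort(xs[mid:], pool, depth - 1)              # inline
    return merge(left.result(), right)
```

The `depth` cutoff bounds the number of live tasks, a common precaution when recursive tasks share a bounded worker pool.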

  4. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...
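    The space argument can be made concrete: a naive n-by-n matrix multiply names n^3 independent scalar products, but a streaming evaluation reduces each row-column pair as its products are produced, keeping only O(n) live values beyond the output. A small Python sketch of ours, with generator expressions playing the role of data-parallel streams:

```python
# Streaming matrix multiply: the inner sum() consumes a generator, so the
# n scalar products of each row-column pair are reduced one at a time and
# never materialized together, unlike a fully materializing evaluation.
def matmul_streaming(A, B):
    n = len(A)
    cols = [[B[k][j] for k in range(n)] for j in range(n)]  # columns of B
    return [[sum(a * b for a, b in zip(row, col))  # O(1) live per reduction
             for col in cols]
            for row in A]
```

The result is identical to the materializing version; only the peak memory differs, which is the trade-off the streaming extensions to NESL and Accelerate automate.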

  5. Experimental study of parallel multi-tungsten wire Z-pinch

    International Nuclear Information System (INIS)

    Huang Xianbin; China Academy of Engineering Physics, Mianyang; Lin Libin; Yang Libing; Deng Jianjun; Gu Yuanchao; Ye Shican; Yue Zhengpu; Zhou Shaotong; Li Fengping; Zhang Siqun

    2005-01-01

    Implosion experiments with three-wire and five-wire parallel tungsten loads on the accelerator 'Yang' are reported. Tungsten wires (φ17 μm) with a separation of 1 mm were used. The pinch was driven by a 350 kA peak current with an 80 ns (10%-90%) rise time. By means of a pinhole camera and X-ray diagnostics, a non-uniform plasma column forming among the wires and a soft X-ray pulse are observed. The change of the load current is analyzed; the development of sausage and kink instabilities, the 'hot spot' effect, and the dispersion spot of the plasma column are also discussed. (authors)

  6. User's guide of parallel program development environment (PPDE). The 2nd edition

    Energy Technology Data Exchange (ETDEWEB)

    Ueno, Hirokazu; Takemiya, Hiroshi; Imamura, Toshiyuki; Koide, Hiroshi; Matsuda, Katsuyuki; Higuchi, Kenji; Hirayama, Toshio [Center for Promotion of Computational Science and Engineering, Japan Atomic Energy Research Institute, Tokyo (Japan); Ohta, Hirofumi [Hitachi Ltd., Tokyo (Japan)

    2000-03-01

    The STA basic system has been enhanced to accelerate support for parallel programming on heterogeneous parallel computers, through a series of R and D efforts on parallel processing technology. The enhancement has been made by extending the functions of the PPDE, the Parallel Program Development Environment in the STA basic system. The extended PPDE provides: 1) automatic creation of a 'makefile' and a shell script file for its execution, 2) multi-tool execution, which lets tools on heterogeneous computers execute a task on a computer with a single operation, and 3) mirror composition, which reflects the editing results of a file on one computer into all related files on the other computers. These additional functions will enhance work efficiency for program development across computers. More functions have been added to the PPDE to assist parallel program development. New functions were also designed to complement an HPF translator and a parallelizing support tool working together, so that a sequential program can be efficiently converted to a parallel program. This report describes the use of the extended PPDE. (author)

  7. Development of parallel benchmark code by sheet metal forming simulator 'ITAS'

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Suzuki, Shintaro; Minami, Kazuo

    1999-03-01

    This report describes the development of a parallel benchmark code based on the sheet metal forming simulator ITAS. ITAS is a nonlinear elasto-plastic analysis program using the finite element method for the simulation of sheet metal forming. ITAS adopts a dynamic analysis method that computes the displacement of the sheet metal at every time step, and utilizes the implicit method with a direct linear equation solver. The simulator is therefore very robust, but it requires a lot of computational time and memory capacity. In developing the parallel benchmark code, we designed the code with MPI programming to reduce the computational time. In numerical experiments on five kinds of parallel supercomputers at CCSE JAERI, i.e., SP2, SR2201, SX-4, T94 and VPP300, good performance is observed. The results will be made public through the WWW so that the benchmark results may serve as a guideline for research and development of parallel programs. (author)

  8. Effects of oral contraceptives containing ethinylestradiol with either drospirenone or levonorgestrel on various parameters associated with well-being in healthy women: a randomized, single-blind, parallel-group, multicentre study.

    Science.gov (United States)

    Kelly, Sue; Davies, Emyr; Fearns, Simon; McKinnon, Carol; Carter, Rick; Gerlinger, Christoph; Smithers, Andrew

    2010-01-01

    The combined oral contraceptive Yasmin (drospirenone 3 mg plus ethinylestradiol 30 microg [DRSP 3 mg/EE 30 microg]) has been shown to be a well tolerated and effective combination that provides high contraceptive reliability and good cycle control. Furthermore, DRSP 3 mg/EE 30 microg has been shown to have a positive effect on premenstrual symptoms and well-being/health-related quality of life, and to improve the skin condition of women with acne. To date, however, there have been relatively few studies that have compared the effects of DRSP 3 mg/EE 30 microg on the general well-being of women with those of other oral contraceptives. To compare the impact of DRSP 3 mg/EE 30 microg with that of levonorgestrel 150 microg/EE 30 microg (LNG 150 microg/EE 30 microg; Microgynon 30) on various parameters associated with well-being in healthy female subjects. This was a randomized, single-blind, parallel-group, multicentre study conducted using 21/7-day regimens of DRSP 3 mg/EE 30 microg and LNG 150 microg/EE 30 microg over seven cycles. Efficacy parameters included: changes in Menstrual Distress Questionnaire (MDQ) normative T scores; the proportion of subjects with acne; and menstrual symptoms. Cycle control and subjective well-being parameters were also assessed. Treatment with DRSP 3 mg/EE 30 microg had similar beneficial effects on symptoms of water retention and impaired concentration to LNG 150 microg/EE 30 microg, but was significantly better in alleviating negative affect symptoms during the menstrual phase (median difference in MDQ T score -3; p = 0.027; Wilcoxon rank sum test). The proportion of subjects with acne decreased from approximately 55% to approximately 45% in the DRSP 3 mg/EE 30 microg group, but remained static at approximately 60% in the LNG 150 microg/EE 30 microg group. Somatic and psychological symptoms occurred at the greatest intensity and for most subjects during the menstrual phase of the cycle in both groups. Both drugs had similar cycle

  9. Parallel discrete-event simulation of FCFS stochastic queueing networks

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments) which has proven effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction of a processor's future behavior, based on an analysis of the processor's simulation state. It is shown how lookahead can be computed for FCFS queueing network simulations; performance data are given that demonstrate the method's effectiveness under moderate to heavy loads, and performance tradeoffs between the quality of lookahead and the cost of computing lookahead are discussed.
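    The lookahead computation for an FCFS server can be sketched as follows (our simplification, not the paper's full protocol): because service times can be pre-sampled, the departures implied by already-queued jobs are known exactly, and any job not yet seen must depart after all of them plus at least a minimum service time. That future time is the "appointment" the server can promise its downstream neighbour.

```python
# FCFS lookahead sketch: queued jobs' departures are fully determined by
# their (pre-sampled) service times, so the server can safely promise that
# no other departure occurs before the last known one plus a minimum service.
def departure_schedule(now, service_times):
    """Exact future departure times implied by the jobs already queued."""
    t, out = now, []
    for s in service_times:
        t += s
        out.append(t)
    return out

def lookahead(now, service_times, min_service_time):
    """Earliest time at which an as-yet-unknown departure could occur."""
    last = departure_schedule(now, service_times)[-1] if service_times else now
    return last + min_service_time
```

The larger this promised horizon, the further a neighbouring processor can simulate without blocking, which is exactly the quality-versus-cost tradeoff the abstract discusses.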

  10. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming. Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  11. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  12. Parallel transport of long mean-free-path plasma along open magnetic field lines: Parallel heat flux

    International Nuclear Information System (INIS)

    Guo Zehua; Tang Xianzhu

    2012-01-01

    In a long mean-free-path plasma where temperature anisotropy can be sustained, the parallel heat flux has two components with one associated with the parallel thermal energy and the other the perpendicular thermal energy. Due to the large deviation of the distribution function from local Maxwellian in an open field line plasma with low collisionality, the conventional perturbative calculation of the parallel heat flux closure in its local or non-local form is no longer applicable. Here, a non-perturbative calculation is presented for a collisionless plasma in a two-dimensional flux expander bounded by absorbing walls. Specifically, closures of previously unfamiliar form are obtained for ions and electrons, which relate two distinct components of the species parallel heat flux to the lower order fluid moments such as density, parallel flow, parallel and perpendicular temperatures, and the field quantities such as the magnetic field strength and the electrostatic potential. The plasma source and boundary condition at the absorbing wall enter explicitly in the closure calculation. Although the closure calculation does not take into account wave-particle interactions, the results based on passing orbits from steady-state collisionless drift-kinetic equation show remarkable agreement with fully kinetic-Maxwell simulations. As an example of the physical implications of the theory, the parallel heat flux closures are found to predict a surprising observation in the kinetic-Maxwell simulation of the 2D magnetic flux expander problem, where the parallel heat flux of the parallel thermal energy flows from low to high parallel temperature region.
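    For reference, the two components referred to above are conventionally defined as parallel-direction moments of the distribution function. The following is a standard-notation sketch (up to convention-dependent factors), not the paper's closure relations; here m is the species mass, f its distribution function, v_∥ and v_⊥ the velocity components along and across the field, and u_∥ the parallel flow.

```latex
\[
  q_{\parallel\parallel} = \frac{m}{2}\int (v_\parallel - u_\parallel)^3 \, f \,\mathrm{d}^3v ,
  \qquad
  q_{\parallel\perp} = \frac{m}{2}\int (v_\parallel - u_\parallel)\, v_\perp^2 \, f \,\mathrm{d}^3v .
\]
```

The paper's closures express these two moments in terms of the lower-order fluid moments and field quantities listed in the abstract.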

  13. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often said to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo criticality program. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation, used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given for getting the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
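    The rendez-vous structure described above can be sketched as a generation loop: every cycle ends with a gather of the partial fission banks before population control runs and the next cycle can start. Below is a toy, deterministic illustration of ours (threads stand in for MPI ranks; `k_per_site` is a stand-in for actual fission sampling).

```python
# Toy criticality generation loop: workers transport their share of the
# fission bank in parallel, then all must stop at the rendez-vous where the
# new bank is gathered, k is estimated, and the population is renormalised.
from concurrent.futures import ThreadPoolExecutor

def run_generations(bank, n_workers, n_cycles, k_per_site=2):
    target = len(bank)
    k_estimates = []
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(n_cycles):
            chunks = [bank[i::n_workers] for i in range(n_workers)]
            # "transport": each source site deterministically yields k_per_site
            transport = lambda sites: [s for s in sites
                                       for _ in range(k_per_site)]
            # rendez-vous: no worker proceeds until all partial banks arrive
            new_bank = [s for part in pool.map(transport, chunks) for s in part]
            k_estimates.append(len(new_bank) / len(bank))
            bank = new_bank[:target]  # population control for the next cycle
    return k_estimates
```

The `pool.map` gather is the serializing rendez-vous the abstract identifies: its cost grows with processor count while the per-worker transport shrinks, which is why speedup saturates.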

  14. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often said to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo criticality program. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation, used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given for getting the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  15. Evaluation of pulsing magnetic field effects on paresthesia in multiple sclerosis patients, a randomized, double-blind, parallel-group clinical trial.

    Science.gov (United States)

    Afshari, Daryoush; Moradian, Nasrin; Khalili, Majid; Razazian, Nazanin; Bostani, Arash; Hoseini, Jamal; Moradian, Mohamad; Ghiasian, Masoud

    2016-10-01

    Evidence is mounting that magnet therapy can alleviate the symptoms of multiple sclerosis (MS). This study was performed to test the effects of pulsing magnetic fields on paresthesia in MS patients. The study was conducted as a randomized, double-blind, parallel-group clinical trial from April 2012 to October 2013. The subjects were selected among patients referred to the MS clinic of Imam Reza Hospital, affiliated to Kermanshah University of Medical Sciences, Iran. Sixty-three patients with MS were included in the study and randomly divided into two groups: 35 patients were exposed to a pulsing magnetic field (4 mT intensity, 15 Hz sinusoidal wave) for 20 min per session, 2 times per week, over a period of 2 months (16 sessions), and 28 patients were exposed to a magnetically inactive field (placebo) on the same schedule. The severity of paresthesia was measured by the numerical rating scale (NRS) at 30 and 60 days. The study's primary end point was the NRS change between baseline and 60 days; the secondary outcome was the NRS change between baseline and 30 days. Patients exposed to the magnetic field showed significant improvement in paresthesia compared with patients exposed to placebo. According to our results, pulsed magnetic therapy can alleviate paresthesia in MS patients, but trials with more patients and longer duration are needed to describe long-term effects. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. A prospective, parallel group, open-labeled, comparative, multi-centric, active controlled study to evaluate the safety, tolerability and benefits of fixed dose combination of acarbose and metformin versus metformin alone in type 2 diabetes.

    Science.gov (United States)

    Jayaram, S; Hariharan, R S; Madhavan, R; Periyandavar, I; Samra, S S

    2010-11-01

    The present study was a prospective, parallel group, open-labeled, comparative, multicentric, active controlled study to evaluate the safety, tolerability and benefits of a fixed dose combination of acarbose and metformin versus metformin alone in type 2 diabetic patients. A total of 229 patients with type 2 diabetes were enrolled at 5 medical centers across India. They received either acarbose (50 mg) + metformin (500 mg) bid/tid (n=115) or metformin monotherapy (500 mg) bid/tid (n=114) for 12 weeks. The primary objective was to evaluate safety and tolerability based on the adverse events reported. The secondary objective was efficacy assessment based on changes in fasting blood glucose, postprandial blood glucose and HbA1c values. In the acarbose + metformin group 10 patients reported 14 adverse events, while in the metformin group 9 patients reported 10 adverse events. No patient reported any serious adverse event or was withdrawn from the study because of adverse events. In the acarbose plus metformin group fasting blood glucose (FBG) decreased from a baseline of 158.85 +/- 18.14 mg/dl to 113.55 +/- 19.38 mg/dl (p < 0.0001) at 12 weeks, while in the metformin group fasting blood glucose decreased from a baseline of 158.31 +/- 26.53 mg/dl to 130.55 +/- 28.31 mg/dl (p < 0.0001) (decrease of 27.76 +/- 22.91 mg/dl) at 12 weeks. In the acarbose plus metformin group postprandial blood glucose (PPBG) decreased from a baseline of 264.65 +/- 34.03 mg/dl to 173.22 +/- 31.40 mg/dl (p < 0.0001) (decrease of 91.43 +/- 28.65 mg/dl) at 12 weeks, while in the metformin group PPBG decreased from a baseline of 253.56 +/- 36.28 mg/dl to 205.36 +/- 39.49 mg/dl (p < 0.0001) (decrease of 48.20 +/- 32.72 mg/dl) at 12 weeks. In the acarbose plus metformin group glycosylated haemoglobin (HbA1c) decreased from a baseline of 9.47 +/- 0.69% to 7.71 +/- 0.85% (p < 0.0001) (% decrease of 1.76 +/- 1.11) at 12 weeks, while in the metformin group HbA1c decreased from a baseline of 9.32 +/- 0.65% to 8.26 +/- 0.68% (p < 0.0001) (% decrease of 1.06 +/- 0.66) at 12 weeks. The

  17. Cosmic Shear With ACS Pure Parallels

    Science.gov (United States)

    Rhodes, Jason

    2002-07-01

    Small distortions in the shapes of background galaxies by foreground mass provide a powerful method of directly measuring the amount and distribution of dark matter. Several groups have recently detected this weak lensing by large-scale structure, also called cosmic shear. The high resolution and sensitivity of HST/ACS provide a unique opportunity to measure cosmic shear accurately on small scales. Using 260 parallel orbits in the Sloan F775W filter we will measure for the first time: the cosmic shear variance on small scales, Omega_m^0.5 with signal-to-noise (s/n) of 20, and the mass density Omega_m with s/n = 4. These measurements will be made at small angular scales where non-linear effects dominate the power spectrum, providing a test of the gravitational instability paradigm for structure formation. Measurements on these scales are not possible from the ground, because of the systematic effects induced by PSF smearing from seeing. Having many independent lines of sight reduces the uncertainty due to cosmic variance, making parallel observations ideal.

  18. LUCKY-TD code for solving the time-dependent transport equation with the use of parallel computations

    Energy Technology Data Exchange (ETDEWEB)

    Moryakov, A. V., E-mail: sailor@orc.ru [National Research Centre Kurchatov Institute (Russian Federation)

    2016-12-15

    An algorithm for solving the time-dependent transport equation in the P{sub m}S{sub n} group approximation with the use of parallel computations is presented. The algorithm is implemented in the LUCKY-TD code for supercomputers employing the MPI standard for the data exchange between parallel processes.

  19. Peer groups and operational cycle enhancements to the performance indicator report

    International Nuclear Information System (INIS)

    Stromberg, H.M.; DeHaan, M.S.; Gentillon, C.D.; Wilson, G.E.; Vanden Heuvel, L.N.

    1992-01-01

    Accurate performance evaluation and plant trending by the performance indicator program are integral parts of monitoring the operation of commercial nuclear power plants. The presentations of the NRC/AEOD performance indicator program have undergone a number of enhancements. The diversity of commercial nuclear plants, coupled with continued improvements in the performance indicator program, has resulted in the evaluation of plants in logical peer groups and has highlighted the need to evaluate the impact of plant operational conditions on the performance indicators. These enhancements allow a more meaningful evaluation of operating commercial nuclear power plant performance. This report proposes methods to enhance the presentation of the performance indicator data by analyzing the data in logical peer groups and displaying the performance indicator data based on the operational status of the plants. Preliminary development of the operational cycle displays of the performance indicator data was documented previously. This report extends those earlier findings and presents the continued development of the peer-group and operational-cycle trend and deviation data and displays. It describes the peer groups and enhanced PI data presentations, covering the operational cycle phase breakdowns, calculation methods, and presentation methods.

  20. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; achieving maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. The focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, and shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation, and great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors.

  1. Parallel S/sub n/ iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (S/sub n/) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional S/sub n/ transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial S/sub n/ algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial
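    As a concrete reference point for the serial baseline being parallelized, here is a minimal ordered source-iteration solver for a one-group, S2 slab problem with diamond differencing and vacuum boundaries (a sketch; the grid, cross sections and quadrature are illustrative, not taken from the paper):

```python
import math

def sn_source_iteration(nx=50, length=10.0, sigma_t=1.0, sigma_s=0.5,
                        q=1.0, tol=1e-8, max_iter=500):
    """Ordered source iteration for one-group S2 transport on a uniform slab.

    Diamond-difference sweeps in the two ordinates mu = +/- 1/sqrt(3),
    vacuum boundary conditions, isotropic scattering and a flat source q.
    Returns the scalar flux per cell and the iteration count."""
    dx = length / nx
    mu = 1.0 / math.sqrt(3.0)                     # S2 Gauss ordinate, weight 1
    phi = [0.0] * nx
    for it in range(1, max_iter + 1):
        src = [0.5 * (sigma_s * f + q) for f in phi]   # per-angle emission
        phi_new = [0.0] * nx
        psi = 0.0                                 # sweep left -> right (mu > 0)
        for i in range(nx):
            psi_out = ((mu - 0.5 * sigma_t * dx) * psi + src[i] * dx) \
                      / (mu + 0.5 * sigma_t * dx)
            phi_new[i] += 0.5 * (psi + psi_out)   # cell-average angular flux
            psi = psi_out
        psi = 0.0                                 # sweep right -> left (mu < 0)
        for i in reversed(range(nx)):
            psi_out = ((mu - 0.5 * sigma_t * dx) * psi + src[i] * dx) \
                      / (mu + 0.5 * sigma_t * dx)
            phi_new[i] += 0.5 * (psi + psi_out)
            psi = psi_out
        change = max(abs(a - b) for a, b in zip(phi_new, phi))
        phi = phi_new
        if change < tol:
            return phi, it
    return phi, max_iter
```

Each sweep is inherently sequential in space, which is precisely why the parallel schemes studied in the paper distribute work in other dimensions, or iterate chaotically, instead.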

  2. A comparison of two treatments for childhood apraxia of speech: methods and treatment protocol for a parallel group randomised control trial

    Directory of Open Access Journals (Sweden)

    Murray Elizabeth

    2012-08-01

    Full Text Available Abstract Background Childhood Apraxia of Speech is an impairment of speech motor planning that manifests as difficulty producing the sounds (articulation) and melody (prosody) of speech. These difficulties may persist through life and are detrimental to academic, social, and vocational development. A number of published single subject and case series studies of speech treatments are available. There are currently no randomised control trials or other well designed group trials available to guide clinical practice. Methods/Design A parallel group, fixed size randomised control trial will be conducted in Sydney, Australia to determine the efficacy of two treatments for Childhood Apraxia of Speech: (1) the Rapid Syllable Transition Treatment and (2) the Nuffield Dyspraxia Programme – Third edition. Eligible children will be English speaking, aged 4–12 years with a diagnosis of suspected CAS, normal or adjusted hearing and vision, and no comprehension difficulties or other developmental diagnoses. At least 20 children will be randomised to receive one of the two treatments in parallel. Treatments will be delivered by trained and supervised speech pathology clinicians using operationalised manuals. Treatment will be administered in 1-hour sessions, 4 times per week for 3 weeks. The primary outcomes are speech sound and prosodic accuracy on a customised 292 item probe and the Diagnostic Evaluation of Articulation and Phonology inconsistency subtest administered prior to treatment and 1 week, 1 month and 4 months post-treatment. All post assessments will be completed by blinded assessors. Our hypotheses are: (1) treatment effects at 1 week post will be similar for both treatments, (2) maintenance of treatment effects at 1 and 4 months post will be greater for Rapid Syllable Transition Treatment than Nuffield Dyspraxia Programme treatment, and (3) generalisation of treatment effects to untrained related speech behaviours will be greater for Rapid
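    For readers unfamiliar with the mechanics of a two-arm parallel-group allocation, permuted-block randomisation is one standard way to keep the groups balanced as participants are enrolled (a generic sketch only; the arm labels are shorthand, and this is not the trial's actual concealed allocation procedure):

```python
import random

def block_randomise(n_participants, arms=("ReST", "NDP3"), block_size=4, seed=7):
    """Permuted-block randomisation: within every block of `block_size`
    consecutive entrants, each arm appears equally often, in random order."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]
```

With a block size of 4 and two arms, the arm counts can never differ by more than 2 at any point during enrolment.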

  3. Basic design of parallel computational program for probabilistic structural analysis

    International Nuclear Information System (INIS)

    Kaji, Yoshiyuki; Arai, Taketoshi; Gu, Wenwei; Nakamura, Hitoshi

    1999-06-01

    In our laboratory, as part of 'development of damage evaluation methods for structural brittle materials by microscopic fracture mechanics and probabilistic theory' (nuclear computational science cross-over research), we are examining computational methods for a super-parallel computation system that couples a material strength theory, based on microscopic fracture mechanics for latent cracks, with a continuum structural model, in order to develop new structural reliability evaluation methods for ceramic structures. This technical report presents the results of a review of probabilistic structural mechanics theory, basic formulas, and parallel computation programming methods related to the principal elements in the basic design of the computational mechanics program. (author)
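    The probabilistic side of such an analysis often reduces to estimating a failure probability under a Weibull strength model for the ceramic. Below is a minimal Monte Carlo sketch (the two-parameter Weibull form and all parameter names are illustrative assumptions, not the report's formulation); because the samples are independent, the loop divides trivially among parallel processors:

```python
import math
import random

def weibull_failure_probability(stress, sigma_0, m, n_samples=100_000, seed=1):
    """Estimate P(strength < stress) for a two-parameter Weibull strength
    distribution (scale sigma_0, modulus m) by inverse-CDF Monte Carlo."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        u = rng.random()
        strength = sigma_0 * (-math.log(1.0 - u)) ** (1.0 / m)
        if strength < stress:
            failures += 1
    return failures / n_samples
```

The estimate can be checked against the closed form 1 - exp(-(stress/sigma_0)**m).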

  4. Basic design of parallel computational program for probabilistic structural analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kaji, Yoshiyuki; Arai, Taketoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Gu, Wenwei; Nakamura, Hitoshi

    1999-06-01

    In our laboratory, as part of 'development of damage evaluation methods for structural brittle materials by microscopic fracture mechanics and probabilistic theory' (nuclear computational science cross-over research), we are examining computational methods for a super-parallel computation system that couples a material strength theory, based on microscopic fracture mechanics for latent cracks, with a continuum structural model, in order to develop new structural reliability evaluation methods for ceramic structures. This technical report presents the results of a review of probabilistic structural mechanics theory, basic formulas, and parallel computation programming methods related to the principal elements in the basic design of the computational mechanics program. (author)

  5. Interim report of working group of Nuclear Fusion Committee

    International Nuclear Information System (INIS)

    Takuma, Hiroshi

    1986-01-01

    The conclusions of the working group were presented as an interim report to the general meeting of the Nuclear Fusion Committee, and became the basis for deciding the future plan. The report is the result of about half a year of intensive work by five Committee experts and 23 researchers, and its contents are substantial. At present, the petroleum supply has eased, and the view that a large, long-term investment in nuclear fusion research is problematic has gained strength. The importance of nuclear fusion research, of course, remains unchanged. The research projects of Heliotron E, Gekko 12, Gamma 10 and so on have advanced, and the base for comprehensively promoting the research has been completed. It is indispensable to decide the most effective plan for the next stage. The working group discussed the five-year plan, especially research based on a large project. The policy for the work and its problems, the progress of the work of the respective subgroups, and a summary are reported. Research on nuclear burning simulation, on currentless plasmas using an external conductor system, and on sustaining an axisymmetric high-beta torus in steady state was proposed. (Kako, I.)

  6. Summary report for the Microwave Source Working Group

    International Nuclear Information System (INIS)

    Westenskow, G.A.

    1997-01-01

    This report summarizes the discussions of the Microwave Source Working Group during the Advanced Accelerator Concepts Workshop held October 13-19, 1996 in the Granlibakken Conference Center at Lake Tahoe, California. Progress on rf sources being developed for linear colliders is reviewed. Possible choices for high-power rf sources at 34 GHz and 94 GHz for future colliders are examined. 27 refs

  7. Summary report for the Microwave Source Working Group

    Energy Technology Data Exchange (ETDEWEB)

    Westenskow, G.A.

    1997-01-01

    This report summarizes the discussions of the Microwave Source Working Group during the Advanced Accelerator Concepts Workshop held October 13-19, 1996 in the Granlibakken Conference Center at Lake Tahoe, California. Progress on rf sources being developed for linear colliders is reviewed. Possible choices for high-power rf sources at 34 GHz and 94 GHz for future colliders are examined. 27 refs.

  8. Parallelizing the spectral transform method: A comparison of alternative parallel algorithms

    International Nuclear Information System (INIS)

    Foster, I.; Worley, P.H.

    1993-01-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on the sphere and is widely used in global climate modeling. In this paper, we outline different approaches to parallelizing the method and describe experiments that we are conducting to evaluate the efficiency of these approaches on parallel computers. The experiments are conducted using a testbed code that solves the nonlinear shallow water equations on a sphere, but are designed to permit evaluation in the context of a global model. They allow us to evaluate the relative merits of the approaches as a function of problem size and number of processors. The results of this study are guiding ongoing work on PCCM2, a parallel implementation of the Community Climate Model developed at the National Center for Atmospheric Research

  9. A Set of Annotation Interfaces for Alignment of Parallel Corpora

    Directory of Open Access Journals (Sweden)

    Singh Anil Kumar

    2014-09-01

    Full Text Available Annotation interfaces for parallel corpora which fit in well with other tools can be very useful. We describe a set of annotation interfaces which fulfill this criterion. This set includes a sentence alignment interface, two different word or word group alignment interfaces and an initial version of a parallel syntactic annotation alignment interface. These tools can be used for manual alignment, or they can be used to correct automatic alignments. Manual alignment can be performed in combination with certain kinds of linguistic annotation. Most of these interfaces use a representation called the Shakti Standard Format that has been found to be very robust and has been used for large and successful projects. It ties together the different interfaces, so that the data created by them is portable across all tools which support this representation. The existence of a query language for data stored in this representation makes it possible to build tools that allow easy search and modification of annotated parallel data.

  10. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)
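    Question (i), adapting a serial algorithm to the parallel mode, is easiest to see for a reduction such as summation: partition the data, reduce each part concurrently, then combine the partial results (a sketch; with CPython threads this demonstrates the structure rather than a real speedup):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    """Two-level tree reduction: per-chunk partial sums computed
    concurrently, then a final serial combine of the partials."""
    if not data:
        return 0
    chunk = (len(data) + workers - 1) // workers   # ceil(len / workers)
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, parts))
    return sum(partials)
```

The adapted algorithm is numerically equivalent for integers; for floating point, the changed association order can alter rounding, one of the convergence questions (ii) alludes to.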

  11. Parallel processing for fluid dynamics applications

    International Nuclear Information System (INIS)

    Johnson, G.M.

    1989-01-01

    The impact of parallel processing on computational science and, in particular, on computational fluid dynamics is growing rapidly. In this paper, particular emphasis is given to developments which have occurred within the past two years. Parallel processing is defined and the reasons for its importance in high-performance computing are reviewed. Parallel computer architectures are classified according to the number and power of their processing units, their memory, and the nature of their connection scheme. Architectures which show promise for fluid dynamics applications are emphasized. Fluid dynamics problems are examined for parallelism inherent at the physical level. CFD algorithms and their mappings onto parallel architectures are discussed. Several examples are presented to document the performance of fluid dynamics applications on present-generation parallel processing devices

  12. Optimal task mapping in safety-critical real-time parallel systems; Placement optimal de taches pour les systemes paralleles temps-reel critiques

    Energy Technology Data Exchange (ETDEWEB)

    Aussagues, Ch

    1998-12-11

    This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance command-and-control systems, found in the nuclear domain and more generally in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution consists mainly of the following three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new operator for the synchronized product of state machines modelling task graphs; and the validation of the approach by its implementation and evaluation. The work particularly addresses the main problem of optimal task mapping onto a parallel architecture, such that the temporal constraints are globally guaranteed, i.e. the timeliness property holds. The results also incorporate optimality criteria for the sizing and correct dimensioning of a parallel system, for instance in the number of processing elements; these criteria are connected with operational constraints of the application domain. Our approach is based on the off-line analysis of the feasibility of the deadline-driven dynamic scheduling used to schedule tasks within one processor. From the synchronized product, a system of linear constraints is automatically generated, which allows the maximum load of a group of tasks to be calculated and their timeliness constraints to be verified. The communications, the verification of their timeliness, and their incorporation into the mapping problem are the second main contribution of this thesis. Finally, the global solving technique dealing with both task and communication aspects has been implemented and evaluated in the framework of the OASIS project at the LETI research center at CEA/Saclay. (author) 96 refs.
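    The off-line feasibility analysis of deadline-driven scheduling rests, in its simplest single-processor form, on a utilisation test (a textbook Liu-and-Layland style check for implicit-deadline periodic tasks; the thesis's synchronized-product analysis is considerably more general):

```python
import math

def edf_feasible(tasks):
    """EDF schedulability on one processor for periodic tasks given as
    (worst_case_execution_time, period) pairs with deadline == period:
    feasible iff total utilisation does not exceed 1."""
    return sum(c / t for c, t in tasks) <= 1.0

def min_processors(tasks):
    """A simple lower bound for dimensioning: no partitioning can succeed
    with fewer processors than the ceiling of the total utilisation."""
    return max(1, math.ceil(sum(c / t for c, t in tasks)))
```

The second function illustrates the dimensioning question the abstract raises: the task set's total utilisation bounds from below the number of processing elements any correct mapping needs.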

  13. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  14. Effect of differences in gas-dynamic behaviour on the separation performance of ultracentrifuges connected in parallel

    International Nuclear Information System (INIS)

    Portoghese, C.C.P.; Buchmann, J.H.

    1996-01-01

    This paper is concerned with the degradation of separation factors that occurs when groups of ultracentrifuges having different gas-dynamic behaviour are connected in parallel arrangements. Differences in gas-dynamic behaviour were expressed in terms of different tails pressures for the same operational conditions, namely feed flow rate, product pressure and cut. A mathematical model describing the ratio of the tails flow rates as a function of the tails pressure ratios and the feed flow rate was developed using experimental data collected from a pair of different ultracentrifuges connected in parallel. The model parameters were optimized using Marquardt's algorithm. The model was then used to simulate the separation factor degradation in parallel arrangements containing more than two centrifuges, and the results were compared with experimental data collected from different groups of ultracentrifuges. The calculated results were in good agreement with the experimental data. This mathematical model, whose parameters were determined in a two-centrifuge parallel arrangement, is useful for simulating the effect of quantified gas-dynamic differences on the separation factors of groups containing any number of different ultracentrifuges and, consequently, for analyzing cascade losses due to this kind of occurrence. (author)

  15. Parallel computation of rotating flows

    DEFF Research Database (Denmark)

    Lundin, Lars Kristian; Barker, Vincent A.; Sørensen, Jens Nørkær

    1999-01-01

    This paper deals with the simulation of 3‐D rotating flows based on the velocity‐vorticity formulation of the Navier‐Stokes equations in cylindrical coordinates. The governing equations are discretized by a finite difference method. The solution is advanced to a new time level by a two‐step process...... is that of solving a singular, large, sparse, over‐determined linear system of equations, and the iterative method CGLS is applied for this purpose. We discuss some of the mathematical and numerical aspects of this procedure and report on the performance of our software on a wide range of parallel computers.
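    CGLS, the iterative method named in the abstract, is conjugate gradients applied implicitly to the normal equations of min ||Ax - b||_2. A dense pure-Python sketch (a real solver would use a sparse matrix format and parallel matrix-vector products):

```python
def cgls(A, b, iters=50, tol=1e-12):
    """CGLS for the least-squares problem min ||A x - b||_2 with A given
    as a dense list of rows (typically over-determined: more rows than
    columns). Returns the approximate least-squares solution x."""
    m, n = len(A), len(A[0])
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    matvec_t = lambda v: [sum(A[i][j] * v[i] for i in range(m)) for j in range(n)]
    x = [0.0] * n
    r = list(b)                       # residual b - A x with x = 0
    s = matvec_t(r)
    p = list(s)
    gamma = sum(si * si for si in s)  # ||A^T r||^2
    for _ in range(iters):
        if gamma < tol:               # normal-equations residual small: done
            break
        q = matvec(p)
        alpha = gamma / sum(qi * qi for qi in q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        s = matvec_t(r)
        gamma_new = sum(si * si for si in s)
        p = [si + (gamma_new / gamma) * pi for si, pi in zip(s, p)]
        gamma = gamma_new
    return x
```

Because only matrix-vector products with A and its transpose are needed, the method parallelizes naturally by distributing rows of A across processors.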

  16. Overview of the Force Scientific Parallel Language

    Directory of Open Access Journals (Sweden)

    Gita Alaghband

    1994-01-01

    Full Text Available The Force parallel programming language designed for large-scale shared-memory multiprocessors is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared-memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that have been ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples illustrating parallel programming approaches using the Force are also presented.

  17. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  18. Waste Contaminants at Military Bases Working Group report

    International Nuclear Information System (INIS)

    1993-01-01

    The Waste Contaminants at Military Bases Working Group has screened six prospective demonstration projects for consideration by the Federal Advisory Committee to Develop On-Site Innovative Technologies (DOIT). These projects include the Kirtland Air Force Base Demonstration Project, the March Air Force Base Demonstration Project, the McClellan Air Force Base Demonstration Project, the Williams Air Force Base Demonstration Project, and two demonstration projects under the Air Force Center for Environmental Excellence. A seventh project (Port Hueneme Naval Construction Battalion Center) was added to the list of prospective demonstrations after the September 1993 Working Group Meeting. This demonstration project has not been screened by the working group. Two additional Air Force remediation programs are also under consideration and are described in Section 6 of this document. The following information on prospective demonstrations was collected by the Waste Contaminants at Military Bases Working Group to assist the DOIT Committee in making Phase 1 Demonstration Project recommendations. The remainder of this report is organized into seven sections: the Working Group Charter's mission and vision; contamination problems, current technology limitations, institutional and regulatory barriers to technology development and commercialization, and work force issues; the screening process for initial Phase 1 demonstration technologies and sites; demonstration descriptions -- good matches; demonstration descriptions -- close matches; additional candidate demonstration projects; and next steps

  19. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements High Performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
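    The heart of such a data-parallel model is the mapping from a distributed array's global index space to processors. A minimal owner-computes sketch for a BLOCK distribution (the HPF-style layout; the function names are illustrative, not PDDP's API):

```python
def block_owner(i, n, p):
    """Processor that owns global index i when an n-element array is
    BLOCK-distributed over p processors (the last block may be short)."""
    block = -(-n // p)                # ceil(n / p)
    return i // block

def local_range(rank, n, p):
    """Half-open global index range [lo, hi) stored on processor `rank`."""
    block = -(-n // p)
    lo = min(rank * block, n)
    return lo, min(lo + block, n)
```

Under the owner-computes rule, processor `block_owner(i, n, p)` performs the updates to element i; references to elements owned elsewhere become interprocessor communication.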

  20. Climate Change 2013. The Physical Science Basis. Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change - Abstract for decision-makers

    International Nuclear Information System (INIS)

    Stocker, Thomas F.; Qin, Dahe; Plattner, Gian-Kasper; Tignor, Melinda M.B.; Allen, Simon K.; Boschung, Judith; Nauels, Alexander; Xia, Yu; Bex, Vincent; Midgley, Pauline M.; Alexander, Lisa V.; Allen, Simon K.; Bindoff, Nathaniel L.; Breon, Francois-Marie; Church, John A.; Cubasch, Ulrich; Emori, Seita; Forster, Piers; Friedlingstein, Pierre; Gillett, Nathan; Gregory, Jonathan M.; Hartmann, Dennis L.; Jansen, Eystein; Kirtman, Ben; Knutti, Reto; Kumar Kanikicharla, Krishna; Lemke, Peter; Marotzke, Jochem; Masson-Delmotte, Valerie; Meehl, Gerald A.; Mokhov, Igor I.; Piao, Shilong; Plattner, Gian-Kasper; Dahe, Qin; Ramaswamy, Venkatachalam; Randall, David; Rhein, Monika; Rojas, Maisa; Sabine, Christopher; Shindell, Drew; Stocker, Thomas F.; Talley, Lynne D.; Vaughan, David G.; Xie, Shang-Ping; Allen, Myles R.; Boucher, Olivier; Chambers, Don; Hesselbjerg Christensen, Jens; Ciais, Philippe; Clark, Peter U.; Collins, Matthew; Comiso, Josefino C.; Vasconcellos de Menezes, Viviane; Feely, Richard A.; Fichefet, Thierry; Fiore, Arlene M.; Flato, Gregory; Fuglestvedt, Jan; Hegerl, Gabriele; Hezel, Paul J.; Johnson, Gregory C.; Kaser, Georg; Kattsov, Vladimir; Kennedy, John; Klein Tank, Albert M.G.; Le Quere, Corinne; Myhre, Gunnar; Osborn, Timothy; Payne, Antony J.; Perlwitz, Judith; Power, Scott; Prather, Michael; Rintoul, Stephen R.; Rogelj, Joeri; Rusticucci, Matilde; Schulz, Michael; Sedlacek, Jan; Stott, Peter A.; Sutton, Rowan; Thorne, Peter W.; Wuebbles, Donald

    2013-10-01

    strong commitment to assessing the science comprehensively, without bias and in a way that is relevant to policy but not policy prescriptive. This report consists of a short Summary in French for Policy-makers followed by the full version of the report in English comprising a longer Technical Summary and fourteen thematic chapters plus annexes. An innovation in this Working Group I assessment is the Atlas of Global and Regional Climate Projections (Annex I) containing time series and maps of temperature and precipitation projections for 35 regions of the world, which enhances accessibility for stakeholders and users. The Summary for Policy-makers and Technical Summary of this report follow a parallel structure and each includes cross-references to the chapter and section where the material being summarised can be found in the underlying report. In this way, these summary components of the report provide a road-map to the contents of the entire report and a traceable account of every major finding

  1. Design considerations for parallel graphics libraries

    Science.gov (United States)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  2. Three-dimensional magnetic field computation on a distributed memory parallel processor

    International Nuclear Information System (INIS)

    Barion, M.L.

    1990-01-01

    The analysis of three-dimensional magnetic fields by finite element methods frequently proves too onerous a task for the computing resource on which it is attempted. When non-linear and transient effects are included, it may become impossible to calculate the field distribution to sufficient resolution. One approach to this problem is to exploit the natural parallelism in the finite element method via parallel processing. This paper reports on an implementation of a finite element code for non-linear three-dimensional low-frequency magnetic field calculation on Intel's iPSC/2

  3. The island dynamics model on parallel quadtree grids

    Science.gov (United States)

    Mistani, Pouria; Guittet, Arthur; Bochkov, Daniil; Schneider, Joshua; Margetis, Dionisios; Ratsch, Christian; Gibou, Frederic

    2018-05-01

    We introduce an approach for simulating epitaxial growth by use of an island dynamics model on a forest of quadtree grids, and in a parallel environment. To this end, we use a parallel framework introduced in the context of the level-set method. This framework utilizes: discretizations that achieve a second-order accurate level-set method on non-graded adaptive Cartesian grids for solving the associated free boundary value problem for surface diffusion; and an established library for the partitioning of the grid. We consider the cases with: irreversible aggregation, which amounts to applying Dirichlet boundary conditions at the island boundary; and an asymmetric (Ehrlich-Schwoebel) energy barrier for attachment/detachment of atoms at the island boundary, which entails the use of a Robin boundary condition. We provide the scaling analyses performed on the Stampede supercomputer and numerical examples that illustrate the capability of our methodology to efficiently simulate different aspects of epitaxial growth. The combination of adaptivity and parallelism in our approach enables simulations that are several orders of magnitude faster than those reported in the recent literature and, thus, provides a viable framework for the systematic study of mound formation on crystal surfaces.
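    The adaptivity in this approach comes from refining quadtree cells only where the island boundary (the level-set interface) passes. Here is a serial toy version of that refinement pattern (a sketch; the paper's parallel forest-of-quadtrees machinery, boundary conditions and grid partitioning are not represented):

```python
class Quad:
    """A quadtree cell covering the square [x, x+size] x [y, y+size]."""
    def __init__(self, x, y, size, depth=0):
        self.x, self.y, self.size, self.depth = x, y, size, depth
        self.children = []

    def refine(self, needs_refining, max_depth):
        """Recursively split every cell flagged by `needs_refining`."""
        if self.depth < max_depth and needs_refining(self):
            h = self.size / 2
            self.children = [Quad(self.x + dx * h, self.y + dy * h, h,
                                  self.depth + 1)
                             for dx in (0, 1) for dy in (0, 1)]
            for child in self.children:
                child.refine(needs_refining, max_depth)

    def leaves(self):
        """All undivided cells; together they tile the root exactly."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]
```

Refining with an indicator that flags cells near a circular island boundary yields a fine grid only along the interface, while the leaf areas still sum to the full domain.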

  4. Chemical Safety Vulnerability Working Group report. Volume 2

    International Nuclear Information System (INIS)

    1994-09-01

    The Chemical Safety Vulnerability (CSV) Working Group was established to identify adverse conditions involving hazardous chemicals at DOE facilities that might result in fires or explosions, release of hazardous chemicals to the environment, or exposure of workers or the public to chemicals. A CSV Review was conducted in 148 facilities at 29 sites. Eight generic vulnerabilities were documented related to: abandoned chemicals and chemical residuals; past chemical spills and ground releases; characterization of legacy chemicals and wastes; disposition of legacy chemicals; storage facilities and conditions; condition of facilities and support systems; unanalyzed and unaddressed hazards; and inventory control and tracking. Weaknesses in five programmatic areas were also identified related to: management commitment and planning; chemical safety management programs; aging facilities that continue to operate; nonoperating facilities awaiting deactivation; and resource allocations. Volume 2 consists of seven appendices containing the following: Tasking memorandums; Project plan for the CSV Review; Field verification guide for the CSV Review; Field verification report, Lawrence Livermore National Lab.; Field verification report, Oak Ridge Reservation; Field verification report, Savannah River Site; and the Field verification report, Hanford Site

  5. Chemical Safety Vulnerability Working Group report. Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    1994-09-01

    The Chemical Safety Vulnerability (CSV) Working Group was established to identify adverse conditions involving hazardous chemicals at DOE facilities that might result in fires or explosions, release of hazardous chemicals to the environment, or exposure of workers or the public to chemicals. A CSV Review was conducted in 148 facilities at 29 sites. Eight generic vulnerabilities were documented related to: abandoned chemicals and chemical residuals; past chemical spills and ground releases; characterization of legacy chemicals and wastes; disposition of legacy chemicals; storage facilities and conditions; condition of facilities and support systems; unanalyzed and unaddressed hazards; and inventory control and tracking. Weaknesses in five programmatic areas were also identified related to: management commitment and planning; chemical safety management programs; aging facilities that continue to operate; nonoperating facilities awaiting deactivation; and resource allocations. Volume 2 consists of seven appendices containing the following: Tasking memorandums; Project plan for the CSV Review; Field verification guide for the CSV Review; Field verification report, Lawrence Livermore National Lab.; Field verification report, Oak Ridge Reservation; Field verification report, Savannah River Site; and the Field verification report, Hanford Site.

  6. Parallelizing AT with MatlabMPI

    International Nuclear Information System (INIS)

    2011-01-01

    The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems dealing with computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations, rather than programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to upgrade its performance to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, which were developed by MIT Lincoln Laboratory, to set up a message-passing environment that could be called within MATLAB, establishing the necessary prerequisites for multithread processing capabilities. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we were able to demonstrate incredibly efficient speed increments per processor in AT's beam-tracking functions. Extrapolating from these predictions, we can expect to reduce week-long computation runtimes to less than 15 minutes. This is a huge performance improvement and has enormous implications for the future computing power of the accelerator physics group at SSRL. However, one of the downfalls of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well-understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while

  7. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities...... for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler’s ability to generate loop-parallel code. We use this compilation system to modify two sequential...... benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...
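    The kind of source refactoring such a feedback system asks for can be shown in miniature. This is a hypothetical Python stand-in (the paper's benchmarks are compiled codes): a loop-carried accumulation, which obstructs parallelization, is rewritten so each iteration is an independent pure function that can be mapped over a worker pool, with the reduction done separately.

```python
from concurrent.futures import ThreadPoolExecutor

def work(i):
    """A pure function of the loop index: no shared mutable state,
    so iterations may run in any order or concurrently."""
    return i * i

n = 1000

# Serial form: the running total creates a loop-carried dependence.
total_serial = 0
for i in range(n):
    total_serial += work(i)

# Refactored form: independent iterations mapped over a pool,
# followed by a separate reduction step.
with ThreadPoolExecutor(max_workers=4) as pool:
    total_parallel = sum(pool.map(work, range(n)))

assert total_parallel == total_serial
```

(With CPython threads this illustrates only the refactoring pattern, not a CPU speedup; a compiler or process pool would exploit the same independence for real parallelism.)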

  8. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  9. TIBER II/ETR: Nuclear Performance Analysis Group Report

    International Nuclear Information System (INIS)

    1987-09-01

    A Nuclear Performance Analysis Group was formed to develop the nuclear technology mission of TIBER-II under the leadership of Argonne National Laboratory reporting to LLNL with major participation by the University of California - Los Angeles (test requirements, R and D needs, water-cooled test modules, neutronic tests). Additional key support was provided by GA Technologies (helium-cooled test modules), Hanford Engineering Development Laboratory (material-irradiation tests), Sandia National Laboratory - Albuquerque (high-heat-flux component tests), and the Idaho National Engineering Laboratory (safety tests). Support also was provided by Rensselaer Polytechnic Institute, Grumman Aerospace Corporation, and the Canadian Fusion Fuels Technology Program. This report discusses these areas and provides a schedule for their completion

  10. GROUP OF HEARING MOTHERS OF DEAF CHILDREN: INTERNSHIP EXPERIENCE REPORT

    Directory of Open Access Journals (Sweden)

    Rafaela Fava de Quevedo

    2017-03-01

    This experience report describes a group phenomenon, based on a case study of a group of hearing mothers of deaf children. The weekly group, in operation for over three years, provides support for families with deaf children. Observations were first made in the group for subsequent analysis of the data and to guide interventions. Categories containing the main features that emerged in the group were created in order to discuss the content found. The categories addressed by the mothers included: independence/autonomy of the child; adolescence and sexuality; discovery of deafness and reorganization of family dynamics; and matters beyond the group's goal. The categories related to the group process were: resistance; the containing function of the coordinator; transference; and interventions in the group field. The results lead to an understanding of the group as a facilitator and as a necessary support for the participants. On this basis, interventions were carried out to expand the space for reflection offered by the group, which supports adaptation to the different situations experienced by the participants.

  11. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  12. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  13. Split-mouth and parallel-arm trials to compare pain with intraosseous anaesthesia delivered by the computerised Quicksleeper system and conventional infiltration anaesthesia in paediatric oral healthcare: protocol for a randomised controlled trial.

    Science.gov (United States)

    Smaïl-Faugeron, Violaine; Muller-Bolla, Michèle; Sixou, Jean-Louis; Courson, Frédéric

    2015-07-10

    Local anaesthesia is commonly used in paediatric oral healthcare. Infiltration anaesthesia is the most frequently used, but recent developments in anaesthesia techniques have introduced an alternative: intraosseous anaesthesia. We propose to perform a split-mouth and parallel-arm multicentre randomised controlled trial (RCT) comparing the pain caused by the insertion of the needle for the injection of conventional infiltration anaesthesia, and intraosseous anaesthesia by the computerised QuickSleeper system, in children and adolescents. Inclusion criteria are patients 7-15 years old with at least 2 first permanent molars belonging to the same dental arch (for the split-mouth RCT) or with a first permanent molar (for the parallel-arm RCT) requiring conservative or endodontic treatment limited to pulpotomy. The setting of this study is the Department of Paediatric Dentistry at 3 University dental hospitals in France. The primary outcome measure will be pain reported by the patient on a visual analogue scale concerning the insertion of the needle and the injection/infiltration. Secondary outcomes are latency, need for additional anaesthesia during the treatment and pain felt during the treatment. We will use a computer-generated permuted-block randomisation sequence for allocation to anaesthesia groups. The random sequences will be stratified by centre (and by dental arch for the parallel-arm RCT). Only participants will be blinded to group assignment. Data will be analysed by the intent-to-treat principle. In all, 160 patients will be included (30 in the split-mouth RCT, 130 in the parallel-arm RCT). This protocol has been approved by the French ethics committee for the protection of people (Comité de Protection des Personnes, Ile de France I) and will be conducted in full accordance with accepted ethical principles. Findings will be reported in scientific publications and at research conferences, and in project summary papers for participants. Clinical

  14. Status of safety at Areva group facilities. 2007 annual report

    International Nuclear Information System (INIS)

    2007-01-01

    This report describes the status of nuclear safety and radiation protection in the facilities of the AREVA group and gives information on radiation protection in the service operations, as observed through the inspection programs and analyses carried out by the General Inspectorate in 2007. Having been submitted to the group's Supervisory Board, this report is sent to the bodies representing the personnel. Content: 1 - A look back at 2007 by the AREVA General Inspector: Visible progress in 2007, Implementation of the Nuclear Safety Charter, Notable events; 2 - Status of nuclear safety and radiation protection in the nuclear facilities and service operations: Personnel radiation protection, Event tracking, Service operations, Criticality control, Radioactive waste and effluent management; 3 - Performance improvement actions; 4 - Description of the General Inspectorate; 5 - Glossary

  15. Waste area Grouping 2 Phase I remedial investigation: Sediment and Cesium-137 transport modeling report

    International Nuclear Information System (INIS)

    Clapp, R.B.; Bao, Y.S.; Moore, T.D.; Brenkert, A.L.; Purucker, S.T.; Reece, D.K.; Burgoa, B.B.

    1996-06-01

    This report is one of five reports issued in 1996 that provide follow-up information to the Phase I Remedial Investigation (RI) Report for Waste Area Grouping (WAG) 2 at Oak Ridge National Laboratory (ORNL). The five reports address areas of concern that may present immediate risk to public health at the Clinch River and ecological risk within WAG 2 at ORNL. A sixth report, on groundwater, rounds out the series documenting WAG 2 RI Phase I results; all were part of project activities conducted in FY 1996. The five reports that complete activities conducted as part of Phase I of the RI for WAG 2 are as follows: (1) Waste Area Grouping 2, Phase I Task Data Report: Seep Data Assessment; (2) Waste Area Grouping 2, Phase I Task Data Report: Tributaries Data Assessment; (3) Waste Area Grouping 2, Phase I Task Data Report: Ecological Risk Assessment; (4) Waste Area Grouping 2, Phase I Task Data Report: Human Health Risk Assessment; (5) Waste Area Grouping 2, Phase I Task Data Report: Sediment and 137Cs Transport Modeling. In December 1990, the Remedial Investigation Plan for Waste Area Grouping 2 at Oak Ridge National Laboratory was issued (ORNL 1990). The WAG 2 RI Plan was structured with a short-term component to be conducted while upgradient WAGs are investigated and remediated, and a long-term component that will complete the RI process for WAG 2 following remediation of upgradient WAGs. RI activities for the short-term component were initiated with the approval of the Environmental Protection Agency, Region IV (EPA), and the Tennessee Department of Environment and Conservation (TDEC). This report presents the results of an investigation of the risk associated with possible future releases of 137Cs due to an extreme flood. The results are based on field measurements made during storms and computer model simulations

  16. FAVL work group: report and recommendations

    International Nuclear Information System (INIS)

    2011-01-01

    This document reports the work of a working group dedicated to the process of searching for a storage site for low-activity, long-lived radioactive wastes. The authors recall the history of this process, which started in the early 1990's and resulted in the selection of two sites, in Auxon and in Pars-les-Chavanges, and finally in the withdrawal of both towns. The authors then analyse the whole process in terms of the intervention and participation of local authorities and the information and participation of waste producers. They also discuss the roles of the ASN, IRSN, DGEC, ANDRA and ANDRA's Coesdic. They make recommendations regarding site selection, agenda, responsibilities, the preferred representative at the local level, public information, consultation, and project support

  17. Parallelization characteristics of a three-dimensional whole-core code DeCART

    International Nuclear Information System (INIS)

    Cho, J. Y.; Joo, H.K.; Kim, H. Y.; Lee, J. C.; Jang, M. H.

    2003-01-01

    Neutron transport calculation for a three-dimensional whole core requires not only a huge amount of computing time but also huge memory. Therefore, whole-core codes such as DeCART need both parallel computation and distributed memory capabilities. This paper implements such parallel capabilities, based on MPI grouping and memory distribution, in the DeCART code, and then evaluates the performance by solving the C5G7 three-dimensional benchmark and a simplified three-dimensional SMART core problem. In the C5G7 problem with 24 CPUs, a speedup of at most 22 is obtained on an IBM Regatta machine and 21 on a LINUX cluster for the MOC kernel, which indicates good parallel performance of the DeCART code. The simplified SMART problem, which needs about 11 GBytes of memory on a single processor, requires only about 940 MBytes per processor when the memory is distributed, which means that the DeCART code can now solve large core problems on affordable LINUX clusters

  18. Parallelization of 2-D lattice Boltzmann codes

    International Nuclear Information System (INIS)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. A high parallel efficiency of 95.1% is obtained by the vector parallel calculation on 16 processors with a 1152x1152 grid, and 88.6% by the scalar parallel calculation on 100 processors with an 800x800 grid. Performance models are developed to analyze the performance of the LB codes. Our performance models show that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability while keeping the available memory size of one processor element at its maximum. Our performance model predicts that the execution time of the vector parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method has, in general, a drawback in interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author)
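    The 1-D slab decomposition described above can be sketched in a few lines. This is a toy 1-D smoothing update in Python (standing in for the 2-D lattice Boltzmann step, which decomposes the same way): each worker owns a contiguous block of cells plus one ghost cell per neighbour, and the stitched result matches the serial update exactly.

```python
def serial_step(u):
    """One Jacobi-style smoothing step; the two boundary cells are held fixed."""
    return [u[0]] + [0.5 * (u[i - 1] + u[i + 1]) for i in range(1, len(u) - 1)] + [u[-1]]

def decomposed_step(u, nparts):
    """The same step computed slab-by-slab with one ghost cell per side,
    as each processor would do under a 1-D domain decomposition."""
    n = len(u)
    bounds = [n * k // nparts for k in range(nparts + 1)]
    out = []
    for k in range(nparts):
        lo, hi = bounds[k], bounds[k + 1]
        gl = max(lo - 1, 0)                      # halo start in global indexing
        halo = u[gl:min(hi + 1, n)]              # owned cells + ghost neighbours
        for i in range(lo, hi):
            j = i - gl                           # local index within the halo
            if i == 0 or i == n - 1:
                out.append(halo[j])              # fixed boundary cell
            else:
                out.append(0.5 * (halo[j - 1] + halo[j + 1]))
    return out

u = [float(i % 7) for i in range(20)]
assert decomposed_step(u, 4) == serial_step(u)   # stitched slabs match serial
```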

  19. Parallelization of 2-D lattice Boltzmann codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. A high parallel efficiency of 95.1% is obtained by the vector parallel calculation on 16 processors with a 1152x1152 grid, and 88.6% by the scalar parallel calculation on 100 processors with an 800x800 grid. Performance models are developed to analyze the performance of the LB codes. Our performance models show that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability while keeping the available memory size of one processor element at its maximum. Our performance model predicts that the execution time of the vector parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method has, in general, a drawback in interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author).

  20. Concurrent particle-in-cell plasma simulation on a multi-transputer parallel computer

    International Nuclear Information System (INIS)

    Khare, A.N.; Jethra, A.; Patel, Kartik

    1992-01-01

    This report describes the parallelization of a Particle-in-Cell (PIC) plasma simulation code on a multi-transputer parallel computer. The algorithm used in the parallelization of the PIC method is described. The decomposition schemes related to the distribution of the particles among the processors are discussed. The implementation of the algorithm on a transputer network connected as a torus is presented. The solutions of the problems related to global communication of data are presented in the form of a set of generalized communication functions. The performance of the program as a function of data size and the number of transputers shows that the implementation is scalable and represents an effective way of achieving high performance at acceptable cost. (author). 11 refs., 4 figs., 2 tabs., appendices
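    A torus topology like the one used for the transputer network implies a simple rank-to-grid mapping with wraparound neighbours. The helper below is a hypothetical Python sketch of that bookkeeping (the report's transputer code is not shown here):

```python
def torus_neighbors(rank, rows, cols):
    """Map a linear rank onto a rows x cols torus and return the ranks of
    its four neighbours (north, south, west, east), wrapping at the edges."""
    r, c = divmod(rank, cols)
    north = ((r - 1) % rows) * cols + c
    south = ((r + 1) % rows) * cols + c
    west = r * cols + (c - 1) % cols
    east = r * cols + (c + 1) % cols
    return north, south, west, east

# Corner of a 4x4 torus: every edge wraps around.
print(torus_neighbors(0, 4, 4))   # -> (12, 4, 3, 1)
```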

  1. Explorations of the implementation of a parallel IDW interpolation algorithm in a Linux cluster-based parallel GIS

    Science.gov (United States)

    Huang, Fang; Liu, Dingsheng; Tan, Xicheng; Wang, Jian; Chen, Yunping; He, Binbin

    2011-04-01

    To design and implement an open-source parallel GIS (OP-GIS) based on a Linux cluster, the parallel inverse distance weighting (IDW) interpolation algorithm has been chosen as an example to explore the working model and the principle of algorithm parallel pattern (APP), one of the parallelization patterns for OP-GIS. Based on an analysis of the serial IDW interpolation algorithm of GRASS GIS, this paper has proposed and designed a specific parallel IDW interpolation algorithm, incorporating both single process, multiple data (SPMD) and master/slave (M/S) programming modes. The main steps of the parallel IDW interpolation algorithm are: (1) the master node packages the related information, and then broadcasts it to the slave nodes; (2) each node calculates its assigned data extent along one row using the serial algorithm; (3) the master node gathers the data from all nodes; and (4) iterations continue until all rows have been processed, after which the results are outputted. According to the experiments performed in the course of this work, the parallel IDW interpolation algorithm can attain an efficiency greater than 0.93 compared with similar algorithms, which indicates that the parallel algorithm can greatly reduce processing time and maximize speed and performance.
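    The four steps above amount to a scatter of row assignments, independent per-row interpolation, and an ordered gather. The sketch below follows that pattern in plain Python (it is not the GRASS implementation; the sample points and grid size are made up) and checks the stitched grid against a serial computation.

```python
samples = [(0.0, 0.0, 1.0), (3.0, 0.0, 4.0), (0.0, 3.0, 2.0), (3.0, 3.0, 8.0)]

def idw(x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from the sample points."""
    num = den = 0.0
    for sx, sy, sz in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return sz                       # exact hit on a sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * sz
        den += w
    return num / den

def interpolate_rows(rows, ncols):
    """The work one slave node does for its assigned rows."""
    return [[idw(float(c), float(r)) for c in range(ncols)] for r in rows]

nrows = ncols = 4
nslaves = 3
# Master: scatter row indices round-robin; slaves interpolate; master gathers.
assignments = [range(k, nrows, nslaves) for k in range(nslaves)]

gathered = [None] * nrows
for rows in assignments:                    # each iteration = one slave node
    for r, row in zip(rows, interpolate_rows(rows, ncols)):
        gathered[r] = row                   # gather preserves row order

serial = interpolate_rows(range(nrows), ncols)
assert gathered == serial
```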

  2. A massively parallel algorithm for the collision probability calculations in the Apollo-II code using the PVM library

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1995-01-01

    The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time; for a typical 2D assembly calculation, about 90% of the computing time is consumed in the collision probability evaluations. Consequently, RZ or 3D calculations become prohibitive. In this paper we present a simple but efficient parallel algorithm based on the message-passing host/node programming model. Parallelization was applied to the energy group treatment. Such an approach permits parallelization of the existing code, requiring only limited modifications. Sequential/parallel computer portability is preserved, which is a necessary condition for an industrial code. Sequential performances are also preserved. The algorithm is implemented on a CRAY 90 coupled to a 128-processor T3D computer, a 16-processor IBM SP1 and a network of workstations, using the public domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best ones with the IBM SP1. Because of the heterogeneity of the workstation network, we did not ask for high performance from this architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) with a massively parallel computer, using several hundreds of processors. (author). 5 refs., 6 figs., 2 tabs
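    Parallelizing over the energy-group treatment means the host scatters group indices to the nodes and gathers per-group results back in group order. A minimal host/node bookkeeping sketch in Python follows (a hypothetical stand-in for the PVM scatter/gather calls; `node_work` is a placeholder for the per-group collision probability evaluation):

```python
def split_groups(ngroups, nnodes):
    """Host side: partition group indices into contiguous chunks,
    one chunk per node, as evenly as possible."""
    bounds = [ngroups * k // nnodes for k in range(nnodes + 1)]
    return [list(range(bounds[k], bounds[k + 1])) for k in range(nnodes)]

def node_work(groups):
    """Node side: placeholder for evaluating collision probabilities
    for the assigned energy groups; one result per group."""
    return [(g, f"pij(group {g})") for g in groups]

ngroups, nnodes = 99, 16                  # the 99-group library of the tests
chunks = split_groups(ngroups, nnodes)
results = []
for chunk in chunks:                      # each iteration runs on its own node
    results.extend(node_work(chunk))      # host gathers in chunk order

# Gathered results arrive in group order, and the load is balanced to ±1 group.
assert [g for g, _ in results] == list(range(ngroups))
assert max(len(c) for c in chunks) - min(len(c) for c in chunks) <= 1
```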

  3. A massively parallel algorithm for the collision probability calculations in the Apollo-II code using the PVM library

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1995-01-01

    The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time; for a typical 2D assembly calculation, about 90% of the computing time is consumed in the collision probability evaluations. Consequently, RZ or 3D calculations become prohibitive. In this paper the author presents a simple but efficient parallel algorithm based on the message-passing host/node programming model. Parallelization was applied to the energy group treatment. Such an approach permits parallelization of the existing code, requiring only limited modifications. Sequential/parallel computer portability is preserved, which is a necessary condition for an industrial code. Sequential performances are also preserved. The algorithm is implemented on a CRAY 90 coupled to a 128-processor T3D computer, a 16-processor IBM SP1 and a network of workstations, using the public domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best ones with the IBM SP1. Because of the heterogeneity of the workstation network, the author did not ask for high performance from this architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) with a massively parallel computer, using several hundreds of processors

  4. Report of the working group on detector simulation

    International Nuclear Information System (INIS)

    Price, L.E.; Lebrun, P.

    1986-01-01

    An ad hoc group at Snowmass reviewed the need for detector simulation to support detectors at the SSC. This report first reviews currently available programs for detector simulation, both those written for single specific detectors and those aimed at general utility. It then considers the requirements for detector simulation for the SSC, with particular attention to enhancements that are needed relative to present programs. Finally, a list of recommendations is given

  5. A simple approach to solving the kinematics of the 4-UPS/PS (3R1T) parallel manipulator

    Energy Technology Data Exchange (ETDEWEB)

    Gallardo-Alvarado, Jaime; Gracio-Murillo, Mario A. [Instituto Tecnologico de Celaya, Celaya (Mexico); Islam, Md. Nazrul [Universiti Malaysia Sabah, Sabah (Malaysia); Abedinnasab, Mohammad H. [Rowan University, New Jersey (United States)

    2016-05-15

    This work reports on the position, velocity and acceleration analyses of a four-degrees-of-freedom parallel manipulator, 4-DoF-PM for brevity, which generates Three-rotation-one-translation (3R1T) motion. Nearly closed-form solutions to solve the forward displacement analysis are easily obtained based on closure equations formulated upon linear combinations of the coordinates of three non-collinear points embedded in the moving platform. Then, the input-output equations of velocity and acceleration of the robot manipulator are systematically established by resorting to the theory of screws. To this end, the Klein form of the Lie algebra se(3) of the Euclidean group SE(3) is systematically applied to the velocity and reduced acceleration state in screw form of the moving platform cancelling the passive joint rates of the parallel manipulator. Numerical examples, which are confirmed by means of commercially available software, are provided to show the application of the method.

  6. A simple approach to solving the kinematics of the 4-UPS/PS (3R1T) parallel manipulator

    International Nuclear Information System (INIS)

    Gallardo-Alvarado, Jaime; Gracio-Murillo, Mario A.; Islam, Md. Nazrul; Abedinnasab, Mohammad H.

    2016-01-01

    This work reports on the position, velocity and acceleration analyses of a four-degrees-of-freedom parallel manipulator, 4-DoF-PM for brevity, which generates Three-rotation-one-translation (3R1T) motion. Nearly closed-form solutions to solve the forward displacement analysis are easily obtained based on closure equations formulated upon linear combinations of the coordinates of three non-collinear points embedded in the moving platform. Then, the input-output equations of velocity and acceleration of the robot manipulator are systematically established by resorting to the theory of screws. To this end, the Klein form of the Lie algebra se(3) of the Euclidean group SE(3) is systematically applied to the velocity and reduced acceleration state in screw form of the moving platform cancelling the passive joint rates of the parallel manipulator. Numerical examples, which are confirmed by means of commercially available software, are provided to show the application of the method.

  7. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
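    Reproducibility, one of the issues listed above, is commonly obtained by seeding each particle history from its history index rather than from the processor it runs on, so the global tally is identical under any decomposition of the histories. A small illustrative Python sketch (not either of the parallelized codes; the history "physics" is a placeholder):

```python
import random

def simulate_history(i):
    """Placeholder for one particle history; seeded by history index so the
    result does not depend on which processor runs it."""
    rng = random.Random(1234 + i)
    return rng.random()

def run(histories, nnodes):
    """Partition histories among nodes and accumulate the global tally."""
    tally = 0.0
    for node in range(nnodes):
        for i in range(node, histories, nnodes):   # this node's share
            tally += simulate_history(i)
    return tally

# Same answer (to rounding) regardless of the number of nodes used.
assert abs(run(1000, 1) - run(1000, 7)) < 1e-9
```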

  8. Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications

    Science.gov (United States)

    OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)

    1998-01-01

    This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).

  9. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.
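    The matrix-free ingredient of Newton-Krylov methods is accessing the Jacobian only through products J·v ≈ (F(x+εv) − F(x))/ε. The toy Python sketch below shows that ingredient on a 2-variable system (the project's actual solvers live in PETSc; here the "Krylov solve" is stood in for by building the tiny 2x2 Jacobian column-by-column from the same J·v products and solving it directly):

```python
def F(x):
    """Toy nonlinear residual with a root near (1.932, 0.518)."""
    return [x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0]

def jv(x, v, eps=1e-7):
    """Matrix-free Jacobian-vector product via a finite difference of F."""
    fx = F(x)
    fxe = F([x[0] + eps * v[0], x[1] + eps * v[1]])
    return [(fxe[i] - fx[i]) / eps for i in range(2)]

def newton(x, iters=20):
    for _ in range(iters):
        # Columns of J recovered from J*e1 and J*e2; a real Krylov method
        # would use the same products without ever forming J.
        c1, c2 = jv(x, [1.0, 0.0]), jv(x, [0.0, 1.0])
        det = c1[0] * c2[1] - c2[0] * c1[1]
        fx = F(x)
        # Cramer's rule for J * dx = -F(x).
        dx0 = (-fx[0] * c2[1] + fx[1] * c2[0]) / det
        dx1 = (-fx[1] * c1[0] + fx[0] * c1[1]) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x

x = newton([2.0, 0.5])
assert max(abs(r) for r in F(x)) < 1e-8
```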

  10. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics: Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity model taking into account motion and force transmissibility, and others.   This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  11. Parallelized Genetic Identification of the Thermal-Electrochemical Model for Lithium-Ion Battery

    Directory of Open Access Journals (Sweden)

    Liqiang Zhang

    2013-01-01

    Full Text Available The parameters of a well-predicting model can be used as health characteristics for a Lithium-ion battery. This article reports a parallelized parameter identification of the thermal-electrochemical model, which significantly reduces the time consumed by parameter identification. Since the P2D model has the most predictability, it is chosen for further research and expanded to the thermal-electrochemical model by coupling thermal effects and temperature-dependent parameters. A Genetic Algorithm is then used for parameter identification, but it takes too much time because of the long simulation time of the model. For this reason, a computer cluster was built from surplus computing resources in our laboratory based on the Parallel Computing Toolbox and Distributed Computing Server in MATLAB. The performance of two parallelized methods, namely Single Program Multiple Data (SPMD) and the parallel FOR loop (PARFOR), is investigated and then the parallelized GA identification is proposed. With this method, model simulations run in parallel and the parameter identification can be sped up more than a dozen times, and the identification result is better than that from the serial GA. This conclusion is validated by model parameter identification of a real LiFePO4 battery.
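The PARFOR-style pattern of the paper, evaluating each candidate's fitness in parallel because the model run dominates the cost, can be mimicked in Python with a process pool. The objective, target parameters, and GA settings below are toy stand-ins, not the thermal-electrochemical battery model:

```python
import random
from multiprocessing import Pool

def fitness(params):
    # Toy stand-in for the expensive model run: squared error against a
    # known target parameter vector (hypothetical, not the P2D model).
    target = (1.0, 2.0, 3.0)
    return sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(pop, n_gen=60, seed=0, pool=None):
    map_fn = pool.map if pool is not None else map
    rng = random.Random(seed)
    for _ in range(n_gen):
        scores = list(map_fn(fitness, pop))     # the PARFOR-style parallel step
        ranked = [p for _, p in sorted(zip(scores, pop))]
        elite = ranked[: len(pop) // 2]         # selection keeps the best half
        pop = elite + [tuple(g + rng.gauss(0, 0.2) for g in rng.choice(elite))
                       for _ in range(len(pop) - len(elite))]   # mutation
    return min(pop, key=fitness)

if __name__ == "__main__":
    rng = random.Random(1)
    population = [tuple(rng.uniform(0, 5) for _ in range(3)) for _ in range(20)]
    with Pool(4) as pool:                       # fitness evaluated in parallel
        best = evolve(population, pool=pool)
    print(fitness(best))
```

Because each fitness evaluation is independent, the speedup scales with the number of workers until the (serial) selection and mutation steps dominate.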

  12. Hydraulic Profiling of a Parallel Channel Type Reactor Core

    International Nuclear Information System (INIS)

    Seo, Kyong-Won; Hwang, Dae-Hyun; Lee, Chung-Chan

    2006-01-01

    An advanced reactor core consisting of closed multiple parallel channels was optimized to maximize the thermal margin of the core. The closed multiple parallel channel configuration has different characteristics from the open channels of conventional PWRs. The channels, usually assemblies, are isolated hydraulically from each other and there is no cross flow between channels. The distribution of inlet flow rate between channels is a very important design parameter in the core because the distribution of inlet flow is directly proportional to the margin for a certain thermal hydraulic parameter. The thermal hydraulic parameter may be the boiling margin, the maximum fuel temperature, or the critical heat flux. The inlet flow distribution of the core was optimized for the boiling margins by grouping the inlet orifices into several hydraulic regions. This procedure is called hydraulic profiling.

  13. Image processing with massively parallel computer Quadrics Q1

    International Nuclear Information System (INIS)

    Della Rocca, A.B.; La Porta, L.; Ferriani, S.

    1995-05-01

    To evaluate the image processing capabilities of the massively parallel computer Quadrics Q1, a convolution algorithm was implemented; it is described in this report. First the mathematical definition of discrete convolution is recalled together with the main Q1 hardware and software features. Then the different codifications of the algorithm are described and the Q1 performances are compared with those obtained on different computers. Finally, the conclusions report the main results and suggestions.
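For reference, the discrete 2D convolution being benchmarked can be sketched in a few lines of NumPy rather than on the Quadrics Q1; the image and kernel below are illustrative only:

```python
import numpy as np

def convolve2d(image, kernel):
    # "Valid" discrete 2D convolution: slide the flipped kernel over the image.
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    flipped = kernel[::-1, ::-1]          # convolution flips the kernel
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
box = np.ones((2, 2)) / 4.0               # 2x2 averaging kernel
print(convolve2d(img, box))               # each entry: mean of a 2x2 window
```

Each output pixel depends only on a small window of the input, which is why the operation maps naturally onto a massively parallel machine.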

  14. Resolutions of the Coulomb operator: VIII. Parallel implementation using the modern programming language X10.

    Science.gov (United States)

    Limpanuparb, Taweetham; Milthorpe, Josh; Rendell, Alistair P

    2014-10-30

    Use of the modern parallel programming language X10 for computing long-range Coulomb and exchange interactions is presented. By using X10, a partitioned global address space language with support for task parallelism and the explicit representation of data locality, the resolution of the Ewald operator can be parallelized in a straightforward manner, including use of both intranode and internode parallelism. We evaluate four different schemes for dynamic load balancing of the integral calculation using X10's work-stealing runtime, and report performance results for a long-range HF energy calculation of a large molecule with a high-quality basis set running on up to 1024 cores of a high-performance cluster machine. Copyright © 2014 Wiley Periodicals, Inc.

  15. Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation

    Science.gov (United States)

    Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.

    1996-01-01

    We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, given proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.
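The Krylov-plus-Schwarz combination can be sketched on a toy problem: CG preconditioned by a one-level additive Schwarz method, with two overlapping subdomains solved exactly, applied to the 1D Poisson matrix. The matrix, subdomain sizes, and overlap are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg, splu

def lap(m):
    # 1D Laplacian tridiagonal matrix, in CSC form for the sparse LU solve.
    return diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m), format="csc")

n = 40
A = lap(n)
b = np.ones(n)

# Two overlapping contiguous subdomains; a contiguous block of the 1D
# Laplacian is again a smaller Laplacian, so factor each block once.
doms = [np.arange(0, 24), np.arange(16, 40)]
lus = [splu(lap(len(d))) for d in doms]

def schwarz(r):
    z = np.zeros_like(r)
    for d, lu in zip(doms, lus):      # local solve on each subdomain, summed
        z[d] += lu.solve(r[d])
    return z

M = LinearOperator((n, n), matvec=schwarz)
x, info = cg(A, b, M=M)               # info == 0 means CG converged
print(info, np.linalg.norm(A @ x - b))
```

In a distributed setting each subdomain solve lives on one processor and only the overlap regions require neighbor communication, which is the point of the Schwarz preconditioner.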

  16. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    textabstractIn the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  17. Activity report of the Neutrino Research Group. Year 2006

    International Nuclear Information System (INIS)

    2007-01-01

    For the last two decades, neutrino physics has been producing major discoveries including neutrino oscillations. These results gave clear confirmation that active neutrinos oscillate and therefore have mass with three different mass states. This is a very important result showing that the Minimal Standard Model is incomplete and requires an extension which is not yet known. The neutrino research field is very broad and active, at the frontier of today's particle physics. The Neutrino Research Group (GDR) was created in January 2005 with the aim of gathering CEA and CNRS research teams working on neutrino physics at the experimental or theoretical level. This document is the 2006 activity report of the research group, two years after its creation. It presents the results of the 5 working groups: 1 - Determination of neutrino parameters; 2 - Physics beyond the standard model; 3 - Neutrinos in the universe; 4 - Accelerators, detection means, R and D and valorisation; 5 - Common tools to all working groups. The proposed neutrino physics road-map and the current and future short-, medium- and long-term projects are presented in appendixes. The Neutrino Research Group organization, the Memphys specific mission group, the research group participating laboratories and teams, as well as the Memphys project, are also presented.

  18. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and, relative to scalar calculations, it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90.
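The 99% vector-fraction figure can be related to the reported speedups through Amdahl's law; the numbers below are an illustrative back-of-the-envelope calculation, not taken from the report:

```python
def amdahl(parallel_fraction, factor):
    # Overall speedup when `parallel_fraction` of the work runs `factor`x faster.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / factor)

# With 99% of the work vectorized, overall speedup is capped at 1/0.01 = 100x
# no matter how fast the vector hardware is:
print(amdahl(0.99, 1e9))     # ≈ 100
# A vector unit roughly 180x faster on that 99% would give ~65x overall,
# the order of the speedups quoted above (illustrative, not from the report):
print(amdahl(0.99, 180))     # ≈ 64.5
```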

  19. Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian

    2017-02-01

    The THOR neutral particle transport code enables simulation of complex geometries for various problems, from reactor simulations to nuclear non-proliferation. It is undergoing a thorough verification and validation (V&V) effort that requires computational efficiency. This has motivated various improvements including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code’s efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL’s Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former’s accuracy is bounded by the variability of communication on Falcon while the latter has an error on the order of 1%.

  20. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms: level-synchronous, asynchronous, and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.
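The "level-synchronous" paradigm mentioned above can be illustrated with breadth-first search, which expands one frontier (level) at a time; in stapl the per-frontier work would be distributed, whereas this toy sketch is plain serial Python:

```python
def bfs_levels(adj, source):
    level = {source: 0}
    frontier = {source}
    d = 0
    while frontier:
        d += 1
        nxt = set()
        for u in frontier:             # in stapl this loop runs in parallel
            for v in adj[u]:
                if v not in level:
                    level[v] = d
                    nxt.add(v)
        frontier = nxt                 # barrier: synchronize between levels
    return level

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
print(bfs_levels(adj, 0))              # each vertex mapped to its BFS level
```

The asynchronous paradigm drops the per-level barrier at the cost of possibly revisiting vertices; the coarse-grained paradigm processes whole subgraphs per step.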

  1. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....

  2. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....

  3. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  4. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
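The abstract compares ROOT's MultiProc framework to Python's multiprocessing module; the map-then-merge pattern both provide can be sketched as follows, on toy data rather than ROOT trees (the chunking and per-chunk work are illustrative assumptions):

```python
from multiprocessing import Pool

def process_chunk(events):
    # Stand-in for per-chunk analysis work, e.g. filling a partial histogram.
    return sum(e * e for e in events)

if __name__ == "__main__":
    data = list(range(1000))
    chunks = [data[i::4] for i in range(4)]   # split the work across workers
    with Pool(4) as pool:
        partial = pool.map(process_chunk, chunks)
    print(sum(partial))                       # merge partial results: 332833500
```

The same structure generalizes to cluster-wide execution: replace the process pool with a distributed map (PROOF, Spark) and keep the merge step.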

  5. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  6. Nuclear Forensics: Report of the AAAS/APS Working Group

    Science.gov (United States)

    Tannenbaum, Benn

    2008-04-01

    This report was produced by a Working Group of the American Physical Society's Program on Public Affairs in conjunction with the American Association for the Advancement of Science Center for Science, Technology and Security Policy. The primary purpose of this report is to provide the Congress, U.S. government agencies and other institutions involved in nuclear forensics with a clear unclassified statement of the state of the art of nuclear forensics; an assessment of its potential for preventing and identifying unattributed nuclear attacks; and identification of the policies, resources and human talent to fulfill that potential. In the course of its work, the Working Group observed that nuclear forensics was an essential part of the overall nuclear attribution process, which aims at identifying the origin of unidentified nuclear weapon material and, should one occur, of an unidentified nuclear explosion. A credible nuclear attribution capability, and in particular nuclear forensics capability, could deter essential participants in the chain of actors needed to smuggle nuclear weapon material or carry out a nuclear terrorist act, and could also encourage states to better secure such materials and weapons. The Working Group also noted that nuclear forensics results would take some time to obtain and that neither internal coordination, nor international arrangements, nor the state of qualified personnel and needed equipment were currently enough to minimize the time needed to reach reliable results in an emergency such as would be caused by a nuclear detonation or the intercept of a weapon-size quantity of material. The Working Group assesses international cooperation to be crucial for forensics to work, since the material would likely come from inadequately documented foreign sources. In addition, international participation, if properly managed, could enhance the credibility of the deterrent effect of attribution. Finally the Working Group notes that the U.S. forensics

  7. Chemical Safety Vulnerability Working Group report. Volume 3

    Energy Technology Data Exchange (ETDEWEB)

    1994-09-01

    The Chemical Safety Vulnerability (CSV) Working Group was established to identify adverse conditions involving hazardous chemicals at DOE facilities that might result in fires or explosions, release of hazardous chemicals to the environment, or exposure of workers or the public to chemicals. A CSV Review was conducted in 148 facilities at 29 sites. Eight generic vulnerabilities were documented related to: abandoned chemicals and chemical residuals; past chemical spills and ground releases; characterization of legacy chemicals and wastes; disposition of legacy chemicals; storage facilities and conditions; condition of facilities and support systems; unanalyzed and unaddressed hazards; and inventory control and tracking. Weaknesses in five programmatic areas were also identified related to: management commitment and planning; chemical safety management programs; aging facilities that continue to operate; nonoperating facilities awaiting deactivation; and resource allocations. Volume 3 consists of eleven appendices containing the following: Field verification reports for Idaho National Engineering Lab., Rocky Flats Plant, Brookhaven National Lab., Los Alamos National Lab., and Sandia National Laboratories (NM); Mini-visits to small DOE sites; Working Group meeting, June 7--8, 1994; Commendable practices; Related chemical safety initiatives at DOE; Regulatory framework and industry initiatives related to chemical safety; and Chemical inventory data from field self-evaluation reports.

  8. Chemical Safety Vulnerability Working Group report. Volume 3

    International Nuclear Information System (INIS)

    1994-09-01

    The Chemical Safety Vulnerability (CSV) Working Group was established to identify adverse conditions involving hazardous chemicals at DOE facilities that might result in fires or explosions, release of hazardous chemicals to the environment, or exposure of workers or the public to chemicals. A CSV Review was conducted in 148 facilities at 29 sites. Eight generic vulnerabilities were documented related to: abandoned chemicals and chemical residuals; past chemical spills and ground releases; characterization of legacy chemicals and wastes; disposition of legacy chemicals; storage facilities and conditions; condition of facilities and support systems; unanalyzed and unaddressed hazards; and inventory control and tracking. Weaknesses in five programmatic areas were also identified related to: management commitment and planning; chemical safety management programs; aging facilities that continue to operate; nonoperating facilities awaiting deactivation; and resource allocations. Volume 3 consists of eleven appendices containing the following: Field verification reports for Idaho National Engineering Lab., Rocky Flats Plant, Brookhaven National Lab., Los Alamos National Lab., and Sandia National Laboratories (NM); Mini-visits to small DOE sites; Working Group meeting, June 7--8, 1994; Commendable practices; Related chemical safety initiatives at DOE; Regulatory framework and industry initiatives related to chemical safety; and Chemical inventory data from field self-evaluation reports

  9. Parallel Monitors for Self-adaptive Sessions

    Directory of Open Access Journals (Sweden)

    Mario Coppo

    2016-06-01

    Full Text Available The paper presents a data-driven model of self-adaptivity for multiparty sessions. System choreography is prescribed by a global type. Participants are incarnated by processes associated with monitors, which control their behaviour. Each participant can access and modify a set of global data, which are able to trigger adaptations in the presence of critical changes of values. The use of the parallel composition for building global types, monitors and processes enables a significant degree of flexibility: an adaptation step can dynamically reconfigure a set of participants only, without altering the remaining participants, even if the two groups communicate.

  10. Radio frequency feedback method for parallelized droplet microfluidics

    KAUST Repository

    Conchouso Gonzalez, David

    2016-12-19

    This paper reports on a radio frequency micro-strip T-resonator that is integrated with a parallel droplet microfluidic system. The T-resonator works as a feedback system to monitor uniform droplet production and to detect, in real time, any malfunctions due to channel fouling or clogging. Emulsions at different W/O flow-rate ratios are generated in a microfluidic device containing 8 parallelized generators. These emulsions are then guided towards the RF sensor, which is read using a Network Analyzer to obtain the frequency response of the system. The proposed T-resonator shows frequency shifts of 45 MHz for only a 5% change in the emulsion's water-in-oil content. These shifts can then be used as a feedback system to trigger alarms and notify production and quality control engineers about problems in the droplet generation process.

  11. Radio frequency feedback method for parallelized droplet microfluidics

    KAUST Repository

    Conchouso Gonzalez, David; Carreno, Armando Arpys Arevalo; McKerricher, Garret; Castro, David; Foulds, Ian G.

    2016-01-01

    This paper reports on a radio frequency micro-strip T-resonator that is integrated with a parallel droplet microfluidic system. The T-resonator works as a feedback system to monitor uniform droplet production and to detect, in real time, any malfunctions due to channel fouling or clogging. Emulsions at different W/O flow-rate ratios are generated in a microfluidic device containing 8 parallelized generators. These emulsions are then guided towards the RF sensor, which is read using a Network Analyzer to obtain the frequency response of the system. The proposed T-resonator shows frequency shifts of 45 MHz for only a 5% change in the emulsion's water-in-oil content. These shifts can then be used as a feedback system to trigger alarms and notify production and quality control engineers about problems in the droplet generation process.

  12. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    At present, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly ease parallel programming with high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming technique.
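In the SPMD model described above, every processor runs the same program and branches on its rank to pick its share of the data; SVOPP's job is to generate the communication for the shared variables automatically. A minimal sketch, with the ranks simulated serially and all names hypothetical:

```python
def spmd_kernel(rank, nprocs, shared_x):
    # Same code on every "processor"; the rank selects the owned block.
    lo = rank * len(shared_x) // nprocs
    hi = (rank + 1) * len(shared_x) // nprocs
    return sum(shared_x[lo:hi])             # local partial result

x = list(range(100))
partials = [spmd_kernel(r, 4, x) for r in range(4)]  # one call per "rank"
print(sum(partials))                        # global reduction over ranks: 4950
```

On a real distributed-memory machine the final reduction is exactly the kind of communication statement a precompiler like SVOPP would insert for the programmer.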

  13. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
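For reference, the Davidon-Fletcher-Powell method maintains an approximation to the inverse Hessian via a rank-two update; the matrix-vector and outer-product operations in that update are the kind of components that can be parallelized. A serial sketch on a toy quadratic (the problem and line search are illustrative, not from the paper):

```python
import numpy as np

def dfp(grad, x, H, steps, line_search):
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < 1e-10:
            break                      # already at a stationary point
        d = -H @ g                     # quasi-Newton search direction
        alpha = line_search(x, d)
        s = alpha * d
        x_new = x + s
        y = grad(x_new) - g
        # DFP rank-two update of the inverse-Hessian approximation
        H = H + np.outer(s, s) / (s @ y) - (H @ np.outer(y, y) @ H) / (y @ H @ y)
        x = x_new
    return x

# Toy quadratic f(x) = 0.5 x·Ax - b·x, with the exact line search that is
# valid for quadratics.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
exact_step = lambda x, d: -(grad(x) @ d) / (d @ A @ d)
x_min = dfp(grad, np.zeros(2), np.eye(2), 5, exact_step)
print(np.linalg.norm(grad(x_min)))   # ≈ 0 at the minimizer
```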

  14. Programming massively parallel processors a hands-on approach

    CERN Document Server

    Kirk, David B

    2010-01-01

    Programming Massively Parallel Processors discusses basic concepts about parallel programming and GPU architecture. "Massively parallel" refers to the use of a large number of processors to perform a set of computations in a coordinated parallel way. The book details various techniques for constructing parallel programs. It also discusses the development process, performance level, floating-point format, parallel patterns, and dynamic parallelism. The book serves as a teaching guide where parallel programming is the main topic of the course. It builds on the basics of C programming for CUDA, a parallel programming environment that is supported on NVIDIA GPUs. Composed of 12 chapters, the book begins with basic information about the GPU as a parallel computer source. It also explains the main concepts of CUDA, data parallelism, and the importance of memory access efficiency using CUDA. The target audience of the book is graduate and undergraduate students from all science and engineering disciplines who ...

  15. Continuous development of schemes for parallel computing of the electrostatics in biological systems: implementation in DelPhi.

    Science.gov (United States)

    Li, Chuan; Petukh, Marharyta; Li, Lin; Alexov, Emil

    2013-08-15

    Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods and even the standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li, et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculations components of the algorithm. The parallelization scheme utilizes different approaches such as space domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in speedup of several folds. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential, and electric field distributions are calculated for the bovine mitochondrial supercomplex illustrating their complex topology, which cannot be obtained by modeling the supercomplex components alone. Copyright © 2013 Wiley Periodicals, Inc.
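The space-domain parallelization mentioned above works because a finite-difference potential sweep updates every grid point independently of the others in the same sweep, so the grid can be split across workers. A toy serial sketch of a Jacobi sweep for the Laplace potential (the grid, boundary conditions, and sweep count are illustrative assumptions, not DelPhi's algorithm):

```python
import numpy as np

def jacobi_laplace(phi, fixed, sweeps=500):
    phi = phi.copy()
    for _ in range(sweeps):
        # Each point moves toward the average of its four neighbours; all
        # updates are independent, which is what makes the sweep parallel.
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                      + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = np.where(fixed, phi, avg)   # boundary values stay clamped
    return phi

n = 16
phi0 = np.zeros((n, n))
phi0[0, :] = 1.0                          # one edge held at potential 1
fixed = np.zeros((n, n), dtype=bool)
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True
sol = jacobi_laplace(phi0, fixed)
print(sol[n // 2, n // 2])                # interior potential, between 0 and 1
```

In a domain-decomposed run, each worker sweeps its own block and exchanges only the one-cell-deep halo with its neighbours between sweeps.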

  16. A new scheduling algorithm for parallel sparse LU factorization with static pivoting

    Energy Technology Data Exchange (ETDEWEB)

    Grigori, Laura; Li, Xiaoye S.

    2002-08-20

    In this paper we present a static scheduling algorithm for parallel sparse LU factorization with static pivoting. The algorithm is divided into mapping and scheduling phases, using the symmetric pruned graphs of L' and U to represent dependencies. The scheduling algorithm is designed for driving the parallel execution of the factorization on a distributed-memory architecture. Experimental results and comparisons with SuperLU_DIST are reported after applying this algorithm on real world application matrices on an IBM SP RS/6000 distributed memory machine.
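The two-phase idea, mapping tasks to processors and then ordering them subject to dependencies, can be illustrated with a generic list-scheduling toy; the DAG, unit task costs, and earliest-free-processor rule below are illustrative assumptions, not the paper's algorithm:

```python
from collections import deque

def list_schedule(deps, nprocs):
    # deps: task -> set of prerequisite tasks (a DAG).
    indeg = {t: len(d) for t, d in deps.items()}
    ready = deque(t for t, d in indeg.items() if d == 0)
    free_at = [0] * nprocs                # next free time of each processor
    done = {}                             # finish time of each task
    while ready:
        t = ready.popleft()
        p = min(range(nprocs), key=lambda i: free_at[i])   # earliest-free proc
        start = max([free_at[p]] + [done[d] for d in deps[t]])
        done[t] = start + 1               # unit task cost (illustrative)
        free_at[p] = done[t]
        for u, d in deps.items():         # release tasks whose deps finished
            if t in d:
                indeg[u] -= 1
                if indeg[u] == 0:
                    ready.append(u)
    return done

deps = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C"}}
print(list_schedule(deps, 2))             # finish times respecting the DAG
```

In the sparse LU setting the tasks are column/panel factorizations and the DAG comes from the pruned graphs of L' and U; a static schedule computed this way avoids runtime scheduling overhead.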

  17. Report of ITER Special Working Group 2

    International Nuclear Information System (INIS)

    Roberts, M.

    1994-01-01

    ITER Special Working Group 2 (SWG-2) was established by the terms of the ITER-EDA Agreement. According to that agreement, "SWG-2 shall submit guidelines for implementation of task assignments by the Home Teams to the Council for approval at its second meeting. This SWG shall also draft Protocol 2 to the ITER-EDA Agreement and submit a draft to the Council not later than by 21 May 1993." The members of SWG-2 for Protocol 2 drafting are listed. The rest of this paper is the verbatim report of SWG-2 on Protocol 2

  18. Report of the Quark Flavor Physics Working Group

    CERN Document Server

    Butler, J N; Ritchie, J L; Cirigliano, V; Kettell, S; Briere, R; Petrov, A A; Schwartz, A; Skwarnicki, T; Zupan, J; Christ, N; Sharpe, S R; Van de Water, R S; Altmannshofer, W; Arkani-Hamed, N; Artuso, M; Asner, D M; Bernard, C; Bevan, A J; Blanke, M; Bonvicini, G; Browder, T E; Bryman, D A; Campana, P; Cenci, R; Cline, D; Comfort, J; Cronin-Hennessy, D; Datta, A; Dobbs, S; Duraisamy, M; El-Khadra, A X; Fast, J E; Forty, R; Flood, K T; Gershon, T; Grossman, Y; Hamilton, B; Hill, C T; Hill, R J; Hitlin, D G; Jaffe, D E; Jawahery, A; Jessop, C P; Kagan, A L; Kaplan, D M; Kohl, M; Krizan, P; Kronfeld, A S; Lee, K; Littenberg, L S; MacFarlane, D B; Mackenzie, P B; Meadows, B T; Olsen, J; Papucci, M; Parsa, Z; Paz, G; Perez, G; Piilonen, L E; Pitts, K; Purohit, M V; Quinn, B; Ratcliff, B N; Roberts, D A; Rosner, J L; Rubin, P; Seeman, J; Seth, K K; Schmidt, B; Schopper, A; Sokoloff, M D; Soni, A; Stenson, K; Stone, S; Sundrum, R; Tschirhart, R; Vainshtein, A; Wah, Y W; Wilkinson, G; Wise, M B; Worcester, E; Xu, J; Yamanaka, T

    2013-01-01

    This report represents the response of the Intensity Frontier Quark Flavor Physics Working Group to the Snowmass charge. We summarize the current status of quark flavor physics and identify many exciting future opportunities for studying the properties of strange, charm, and bottom quarks. The ability of these studies to reveal the effects of new physics at high mass scales makes them an essential ingredient in a well-balanced experimental particle physics program.

  19. Annual report 2003 on the radiation and nuclear safety in the Republic of Slovenia

    International Nuclear Information System (INIS)

    Kostadinov, V.; Stritar, A.

    2004-07-01

    This report is a continuation of a practice which was introduced a year ago. This short report provides in condensed form the essential data on the situation in the country in the areas of radiation protection and nuclear safety, and is aimed at a wider group of the interested public. In parallel, an extended report was prepared consisting of all the details and data which would be of interest to a narrower group of professionals. It is available in electronic form on CD or at the home page of the SNSA. (author)

  1. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environments, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping of parallel algorithms, operating system functions, application libraries, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potential of optical and neural technologies for developing future supercomputers

  2. A Parallel Processing Algorithm for Remote Sensing Classification

    Science.gov (United States)

    Gualtieri, J. Anthony

    2005-01-01

    A current thread in parallel computation is the use of cluster computers created by networking a few to thousands of commodity general-purpose workstation-level computers using the Linux operating system. For example, on the Medusa cluster at NASA/GSFC, this provides supercomputing performance, 130 Gflops (Linpack Benchmark), at moderate cost, $370K. However, to be useful for scientific computing in the area of Earth science, issues of ease of programming, access to existing scientific libraries, and portability of existing code need to be considered. In this paper, I address these issues in the context of tools for rendering earth science remote sensing data into useful products. In particular, I focus on a problem that can be decomposed into a set of independent tasks, which on a serial computer would be performed sequentially, but with a cluster computer can be performed in parallel, giving an obvious speedup. To make the ideas concrete, I consider the problem of classifying hyperspectral imagery where some ground truth is available to train the classifier. In particular, I will use the Support Vector Machine (SVM) approach as applied to hyperspectral imagery. The approach will be to introduce notions about parallel computation and then to restrict the development to the SVM problem. Pseudocode (an outline of the computation) will be described and then details specific to the implementation will be given. Then timing results will be reported to show what speedups are possible using parallel computation. The paper will close with a discussion of the results.
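The decomposition into independent tasks described above maps directly onto a worker pool. A minimal sketch with Python's standard library (the tile data and the stand-in classifier are invented for illustration; the paper's actual computation is an SVM on hyperspectral tiles):

```python
from concurrent.futures import ProcessPoolExecutor

def classify_tile(tile):
    # Stand-in for classifying one image tile; in the paper this would be
    # an SVM prediction. Each tile is independent of the others.
    return sum(tile) % 2  # hypothetical class label

def classify_all(tiles, workers=4):
    # Independent tasks run concurrently on separate processes,
    # mirroring how cluster nodes each take a share of the tiles.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(classify_tile, tiles))

if __name__ == "__main__":
    tiles = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    print(classify_all(tiles))  # → [0, 1, 0]
```

Because the tasks share no state, the speedup approaches the number of workers minus scheduling overhead, which is the same argument the paper makes for the cluster case.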

  3. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R; Ratterman, Joseph D; Smith, Brian E

    2014-11-11

    Endpoint-based parallel data processing with non-blocking collective instructions in a PAMI of a parallel computer is disclosed. The PAMI is composed of data communications endpoints, each including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task. The compute nodes are coupled for data communications through the PAMI. The parallel application establishes a data communications geometry specifying a set of endpoints that are used in collective operations of the PAMI by associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry; registering in each endpoint in the geometry a dispatch callback function for a collective operation; and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.
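As a rough analogy (not the PAMI API), the scheme in the abstract, endpoints registered in a geometry, a dispatch callback per endpoint, and a collective that returns without blocking, can be mimicked with Python threads; every name below is invented for illustration:

```python
import queue
import threading

class Endpoint:
    """Toy stand-in for a PAMI endpoint: a task id, a mailbox, and a
    registered dispatch callback (all names invented for illustration)."""
    def __init__(self, task_id):
        self.task_id = task_id
        self.inbox = queue.Queue()
        self.callback = None

def broadcast_nonblocking(geometry, root_value):
    """Non-blocking toy 'collective': deliver root_value to every endpoint
    in the geometry on a helper thread and fire each dispatch callback."""
    def deliver():
        for ep in geometry:
            ep.inbox.put(root_value)
            if ep.callback is not None:
                ep.callback(ep, root_value)
    handle = threading.Thread(target=deliver)
    handle.start()   # returns immediately; the caller is not blocked
    return handle    # caller may later join() to wait for completion

if __name__ == "__main__":
    received = []
    geometry = [Endpoint(i) for i in range(4)]
    for ep in geometry:
        # register a dispatch callback for the collective operation
        ep.callback = lambda ep, v: received.append((ep.task_id, v))
    broadcast_nonblocking(geometry, 42).join()
    print(sorted(received))  # → [(0, 42), (1, 42), (2, 42), (3, 42)]
```

The `join()` at the end plays the role of advancing the context until the collective completes; in between, the caller is free to do other work.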

  4. Recommendations for reporting economic evaluations of haemophilia prophylaxis: a nominal groups consensus statement on behalf of the Economics Expert Working Group of The International Prophylaxis Study Group.

    Science.gov (United States)

    Nicholson, A; Berger, K; Bohn, R; Carcao, M; Fischer, K; Gringeri, A; Hoots, K; Mantovani, L; Schramm, W; van Hout, B A; Willan, A R; Feldman, B M

    2008-01-01

    The need for clearly reported studies evaluating the cost of prophylaxis and its overall outcomes has been recommended in previous literature. To establish minimal "core standards" that can be followed when conducting and reporting economic evaluations of hemophilia prophylaxis. Ten members of the IPSG Economic Analysis Working Group participated in a consensus process using the Nominal Groups Technique (NGT). The following topics relating to the economic analysis of prophylaxis studies were addressed: Whose perspective should be taken? Which is the best methodological approach? Is micro- or macro-costing the best costing strategy? What information must be presented about costs and outcomes in order to facilitate local and international interpretation? The group suggests studies on the economic impact of prophylaxis should be viewed from a societal perspective and be reported using a Cost Utility Analysis (CUA) (with consideration of also reporting a Cost Benefit Analysis [CBA]). All costs that exceed $500 should be used to measure the costs of prophylaxis (macro strategy), including items such as clotting factor costs, hospitalizations, surgical procedures, productivity loss and number of days lost from school or work. Generic and disease-specific quality of life and utility measures should be used to report the outcomes of the study. The IPSG has suggested minimal core standards to be applied to the reporting of economic evaluations of hemophilia prophylaxis. Standardized reporting will facilitate the comparison of studies and will allow for more rational policy decisions and treatment choices.

  5. Solution-processed parallel tandem polymer solar cells using silver nanowires as intermediate electrode.

    Science.gov (United States)

    Guo, Fei; Kubis, Peter; Li, Ning; Przybilla, Thomas; Matt, Gebhard; Stubhan, Tobias; Ameri, Tayebeh; Butz, Benjamin; Spiecker, Erdmann; Forberich, Karen; Brabec, Christoph J

    2014-12-23

    Tandem architecture is the most relevant concept to overcome the efficiency limit of single-junction photovoltaic solar cells. Series-connected tandem polymer solar cells (PSCs) have advanced rapidly during the past decade. In contrast, the development of parallel-connected tandem cells is lagging far behind due to the big challenge in establishing an efficient interlayer with high transparency and high in-plane conductivity. Here, we report all-solution fabrication of parallel tandem PSCs using silver nanowires as intermediate charge collecting electrode. Through a rational interface design, a robust interlayer is established, enabling the efficient extraction and transport of electrons from subcells. The resulting parallel tandem cells exhibit high fill factors of ∼60% and enhanced current densities which are identical to the sum of the current densities of the subcells. These results suggest that solution-processed parallel tandem configuration provides an alternative avenue toward high performance photovoltaic devices.

  6. Working Group on Ionising Radiations. Report 1987-88

    International Nuclear Information System (INIS)

    1989-01-01

    The programme of work for 1987/88 of the Health and Safety Commission's Working Group on Ionising Radiations, agreed in February 1988, included the main topics of continuing interest and concern in relation to ionising radiations in general and the Ionising Radiations Regulations 1985 (IRR 85) (Ref 1) in particular. These were: emergency dose limitation, occupational dose limitation, practical experience of the principle of keeping doses as low as reasonably practicable, experience of the regulatory requirements in respect of internal dosimetry, and the need for a standing advisory committee on ionising radiations. Calibration of radiotherapy equipment was also considered as a matter of principle following a specific incident involving cancer patients. This report of progress during the first year summarises the Group's opinions on each topic and gives recommendations. (author)

  7. H5Part A Portable High Performance Parallel Data Interface for Particle Simulations

    CERN Document Server

    Adelmann, Andreas; Shalf, John M; Siegerist, Cristina

    2005-01-01

    Large parallel particle simulations in six-dimensional phase space generate vast amounts of data. It is also desirable to share data and data analysis tools such as ParViT (Particle Visualization Toolkit) among other groups who are working on particle-based accelerator simulations. We define a very simple file schema built on top of HDF5 (Hierarchical Data Format version 5) as well as an API that simplifies the reading/writing of the data to the HDF5 file format. HDF5 offers a self-describing machine-independent binary file format that supports scalable parallel I/O performance for MPI codes on a variety of supercomputing systems and works equally well on laptop computers. The API is available for C, C++, and Fortran codes. The file format will enable disparate research groups with very different simulation implementations to share data transparently and share data analysis tools. For instance, the common file format will enable groups that depend on completely different simulation implementations to share c...

  8. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of research is the tools to support the development of parallel programs in C/C++. Methods and software which automate the process of designing parallel applications are proposed.

  9. An Introduction to Parallel Computation R

    Indian Academy of Sciences (India)

    How are they programmed? This article provides an introduction. A parallel computer is a network of processors built for ... and have been used to solve problems much faster than a single ... in parallel computer design is to select an organization which ..... The most ambitious approach to parallel computing is to develop.

  10. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte and exabyte scale. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine in reasonable time and fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the classroom. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.

  11. Distributed Cooperative Current-Sharing Control of Parallel Chargers Using Feedback Linearization

    Directory of Open Access Journals (Sweden)

    Jiangang Liu

    2014-01-01

    Full Text Available We propose a distributed current-sharing scheme to address the output current imbalance problem for the parallel chargers in the energy storage type light rail vehicle system. By treating the parallel chargers as a group of agents with output information sharing through a communication network, the current-sharing control problem is recast as the consensus tracking problem of multiagents. To facilitate the design, input-output feedback linearization is first applied to transform the nonidentical nonlinear charging system model into the first-order integrator. Then, a general saturation function is introduced to design the cooperative current-sharing control law which can guarantee the boundedness of the proposed control. The cooperative stability of the closed-loop system under fixed and dynamic communication topologies is rigorously proved with the aid of a Lyapunov function and LaSalle's invariance principle. Simulation using a multicharging test system further illustrates that the output currents of parallel chargers are balanced using the proposed control.
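The scheme lends itself to a toy illustration: after feedback linearization each charger behaves as a first-order integrator, and a consensus law steers every output toward the network average. The four-charger ring topology and gain below are invented for illustration (and the paper's saturation function is omitted):

```python
def consensus_step(states, neighbors, gain=0.2):
    """One step of a discrete-time consensus law on first-order
    integrators: each agent moves toward the outputs it can see
    over the communication network."""
    new = []
    for i, x in enumerate(states):
        u = sum(states[j] - x for j in neighbors[i])  # shared-output feedback
        new.append(x + gain * u)
    return new

# Hypothetical 4-charger ring communication topology
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
states = [10.0, 4.0, 8.0, 2.0]   # imbalanced output currents
for _ in range(100):
    states = consensus_step(states, neighbors)
print([round(s, 3) for s in states])  # → [6.0, 6.0, 6.0, 6.0]
```

On this symmetric ring the total current is conserved, so the agents converge to the common average (6.0), which is exactly the balanced current-sharing objective.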

  12. Cooperative parallel adaptive neighbourhood search for the disjunctively constrained knapsack problem

    Science.gov (United States)

    Quan, Zhe; Wu, Lei

    2017-09-01

    This article investigates the use of parallel computing for solving the disjunctively constrained knapsack problem. The proposed parallel computing model can be viewed as a cooperative algorithm based on a multi-neighbourhood search. The cooperation system is composed of a team manager and a crowd of team members. The team members aim at applying their own search strategies to explore the solution space. The team manager collects the solutions from the members and shares the best one with them. The performance of the proposed method is evaluated on a group of benchmark data sets. The results obtained are compared to those reached by the best methods from the literature. The results show that the proposed method is able to provide the best solutions in most cases. In order to highlight the robustness of the proposed parallel computing model, a new set of large-scale instances is introduced. Encouraging results have been obtained.
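The manager/members cooperation can be sketched in miniature: members each apply their own neighbourhood move, and the manager collects the candidates and shares the best with everyone. Everything below (objective, moves, parameters) is invented for illustration and runs sequentially rather than in parallel:

```python
import random

def cooperative_search(value_fn, start, strategies, rounds=50, seed=0):
    """Toy model of the manager/members scheme: each member explores with
    its own neighbourhood move; the manager collects the results and
    shares the best solution back with the team."""
    rng = random.Random(seed)
    best = start
    for _ in range(rounds):
        candidates = [move(best, rng) for move in strategies]  # members explore
        candidates.append(best)                                # keep current best
        best = max(candidates, key=value_fn)                   # manager selects & shares
    return best

# Two hypothetical neighbourhood strategies on integers
small_step = lambda x, rng: x + rng.choice([-1, 1])
big_jump   = lambda x, rng: x + rng.choice([-10, 10])

if __name__ == "__main__":
    obj = lambda x: -(x - 30) ** 2   # toy objective, maximum at x = 30
    print(cooperative_search(obj, 0, [small_step, big_jump]))
```

Because the manager only ever replaces the shared solution with a better one, the search improves monotonically, while the mixture of coarse and fine moves mirrors the multi-neighbourhood idea.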

  13. Parallel computation with molecular-motor-propelled agents in nanofabricated networks.

    Science.gov (United States)

    Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V

    2016-03-08

    The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
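The reported instance is small enough to check by brute force: every subset of {2, 5, 9} corresponds to one path an agent can take through the network, and the reachable exits are exactly the subset sums:

```python
from itertools import combinations

def subset_sums(values):
    """Enumerate every achievable subset sum — the set of 'exits' that
    agents exploring the combinatorial network can reach."""
    sums = set()
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            sums.add(sum(combo))
    return sorted(sums)

print(subset_sums([2, 5, 9]))  # → [0, 2, 5, 7, 9, 11, 14, 16]
```

The device answers the same membership question (is a target sum reachable?) by routing many agents through the network at once instead of enumerating the 2^n subsets sequentially.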

  14. Professional Parallel Programming with C# Master Parallel Extensions with NET 4

    CERN Document Server

    Hillar, Gastón

    2010-01-01

    Expert guidance for those programming today's dual-core processor PCs. As PC processors explode from one or two to now eight processors, there is an urgent need for programmers to master concurrent programming. This book dives deep into the latest technologies available to programmers for creating professional parallel applications using C#, .NET 4, and Visual Studio 2010. The book covers task-based programming, coordination data structures, PLINQ, thread pools, the asynchronous programming model, and more. It also teaches other parallel programming techniques, such as SIMD and vectorization. Teach

  15. Report of the 1997 LEP2 working group on 'searches'

    International Nuclear Information System (INIS)

    Allanach, B.C.; Blair, G.A.; Diaz, M.A.

    1997-08-01

    A number of research program reports are presented from the LEP2 positron-electron collider in the area of searches for Higgs bosons, supersymmetry and supergravity. The working groups' reports cover the prospective sensitivity of Higgs boson searches, radiative corrections to chargino production, charge- and colour-breaking minima in the Minimal Supersymmetric Standard Model, R-parity violation effects upon unification predictions, searches for new pair-produced particles, single sneutrino production, and searches related to effects similar to the HERA experiments. The final section of the report summarizes the LEP2 searches, concentrating on gains from running at 200 GeV and alternative paradigms for supersymmetric phenomenology. (UK)

  16. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    Work in the field of parallel processing has developed through research activities using several numerical Monte Carlo simulations related to current basic and applied problems of nuclear and particle physics. For applications using the GEANT code, development and improvement work was done on parts simulating low-energy physical phenomena like radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program of neutron tracking in the range of low energies up to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other work in this field refers to simulations for nuclear medicine applications such as the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness), and methods for dosimetric calculations. These calculations are particularly suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other work in the same field refers to simulation of electron channelling in crystals and simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment

  17. Summary report: injection group

    International Nuclear Information System (INIS)

    Simpson, J.; Ankenbrandt, C.; Brown, B.

    1984-01-01

    The injector group attempted to define and address several problem areas related to the SSC injector as defined in the Reference Design Study (RDS). It also considered the topic of machine utilization, particularly the question of test beam requirements. Details of the work are given in individually contributed papers, but the general concerns and consensus of the group are presented within this note. The group recognized that the injector as outlined in the RDS was developed primarily for costing estimates. As such, it was not necessarily well optimized from the standpoint of insuring the required beam properties for the SSC. On the other hand, considering the extraordinary short time in which the RDS was prepared, it is an impressive document and a good basis from which to work. Because the documented SSC performance goals are ambitious, the group sought an injector solution which would more likely guarantee that SSC performance not be limited by its injectors. As will be seen, this leads to a somewhat different solution than that described in the RDS. Furthermore, it is the consensus of the group that the new, conservative approach represents only a modest cost increase of the overall project well worth the confidence gained and the risks avoided

  18. Neoclassical parallel flow calculation in the presence of external parallel momentum sources in Heliotron J

    Energy Technology Data Exchange (ETDEWEB)

    Nishioka, K.; Nakamura, Y. [Graduate School of Energy Science, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan); Nishimura, S. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); Lee, H. Y. [Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of); Kobayashi, S.; Mizuuchi, T.; Nagasaki, K.; Okada, H.; Minami, T.; Kado, S.; Yamamoto, S.; Ohshima, S.; Konoshima, S.; Sano, F. [Institute of Advanced Energy, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan)

    2016-03-15

    A moment approach to calculate neoclassical transport in non-axisymmetric torus plasmas composed of multiple ion species is extended to include the external parallel momentum sources due to unbalanced tangential neutral beam injections (NBIs). The momentum sources that are included in the parallel momentum balance are calculated from the collision operators of background particles with fast ions. This method is applied to clarify the physical mechanism of the neoclassical parallel ion flows and the multi-ion-species effect on them in Heliotron J NBI plasmas. It is found that the parallel ion flow can be determined by the balance between the parallel viscosity and the external momentum source in the region where the external source is much larger than the thermodynamic-force-driven source in collisional plasmas. This is because the friction between C{sup 6+} and D{sup +} prevents a large difference between the C{sup 6+} and D{sup +} flow velocities in such plasmas. The C{sup 6+} flow velocities, which are measured by the charge exchange recombination spectroscopy system, are numerically evaluated with this method. It is shown that the experimentally measured C{sup 6+} impurity flow velocities do not clearly contradict the neoclassical estimations, and the dependence of the parallel flow velocities on the magnetic field ripples is consistent in both results.

  19. Structural Properties of G,T-Parallel Duplexes

    Directory of Open Access Journals (Sweden)

    Anna Aviñó

    2010-01-01

    Full Text Available The structure of G,T-parallel-stranded duplexes of DNA carrying similar amounts of adenine and guanine residues is studied by means of molecular dynamics (MD simulations and UV- and CD spectroscopies. In addition the impact of the substitution of adenine by 8-aminoadenine and guanine by 8-aminoguanine is analyzed. The presence of 8-aminoadenine and 8-aminoguanine stabilizes the parallel duplex structure. Binding of these oligonucleotides to their target polypyrimidine sequences to form the corresponding G,T-parallel triplex was not observed. Instead, when unmodified parallel-stranded duplexes were mixed with their polypyrimidine target, an interstrand Watson-Crick duplex was formed. As predicted by theoretical calculations parallel-stranded duplexes carrying 8-aminopurines did not bind to their target. The preference for the parallel-duplex over the Watson-Crick antiparallel duplex is attributed to the strong stabilization of the parallel duplex produced by the 8-aminopurines. Theoretical studies show that the isomorphism of the triads is crucial for the stability of the parallel triplex.

  20. High-speed parallel solution of the neutron diffusion equation with the hierarchical domain decomposition boundary element method incorporating parallel communications

    International Nuclear Information System (INIS)

    Tsuji, Masashi; Chiba, Gou

    2000-01-01

    A hierarchical domain decomposition boundary element method (HDD-BEM) for solving the multiregion neutron diffusion equation (NDE) has been fully parallelized, both for numerical computations and for data communications, to accomplish a high parallel efficiency on distributed-memory message-passing parallel computers. Data exchanges between node processors that are repeated during iteration processes of HDD-BEM are implemented without any intervention of the host processor that was used to supervise parallel processing in the conventional parallelized HDD-BEM (P-HDD-BEM). Thus, the parallel processing can be executed with only cooperative operations of node processors. The communication overhead was the dominant time-consuming part in the conventional P-HDD-BEM, and the parallelization efficiency decreased steeply with the increase of the number of processors. With parallel data communication, the efficiency is affected only by the number of boundary elements assigned to decomposed subregions, and the communication overhead can be drastically reduced. This feature can be particularly advantageous in the analysis of three-dimensional problems where a large number of processors are required. The proposed P-HDD-BEM offers a promising solution to the deterioration problem of parallel efficiency and opens a new path to parallel computations of NDEs on distributed-memory message-passing parallel computers. (author)

  1. Stampi: a message passing library for distributed parallel computing. User's guide, second edition

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Koide, Hiroshi; Takemiya, Hiroshi

    2000-02-01

    A new message passing library, Stampi, has been developed to realize computation across different kinds of parallel computers, with MPI (Message Passing Interface) as a single interface for communication. Stampi is based on the MPI-2 specification, and it realizes dynamic process creation on different machines and communication with the spawned processes within the scope of MPI semantics. The main features of Stampi are summarized as follows: (i) an automatic switch function between external and internal communications, (ii) message routing/relaying with a routing module, (iii) dynamic process creation, (iv) support of two types of connection, Master/Slave and Client/Server, (v) support of communication with Java applets. Vendors have implemented MPI libraries as closed systems within one parallel machine or their own systems, and did not support both functions: process creation and communication to external machines. Stampi supports both functions and enables distributed parallel computing. Currently Stampi has been implemented on COMPACS (COMplex PArallel Computer System) introduced in CCSE, five parallel computers and one graphic workstation, and moreover on eight kinds of parallel machines, fourteen systems in total. Stampi provides MPI communication functionality on them. This report mainly describes the usage of Stampi. (author)

  2. Principal working group No. 1 on operating experience and human factors (PWG1). Report of the task group on reviewing the activities

    International Nuclear Information System (INIS)

    2001-02-01

    A Task Group was formed by PWG-1 in the latter part of 1999 to review the mandate of PWG1 in light of new directions and assignments from CSNI, and to prepare a report that suggests future directions of the Working Group, in harmony with directions from CSNI. This report is the response of the Task Group. Principal Working Group no.1 was organized in September 1982. The group formed its charter, which included: - reviewing periodically activities for the collection, dissemination, storage and analysis of incidents reported under the IRS; - examining annually the incidents reported during the previous year in order to select issues (either technical or human-factor-oriented) with major safety significance and report them to CSNI; - encouraging feed-back through CSNI of lessons derived from operating experience to nuclear safety research programmes, including human factors studies; - providing a forum to exchange information in the field of human factors studies; - establishing short-term task forces, when necessary to carry out information exchange, special studies or any other work within its mandate; - making recommendations to CSNI for improving and encouraging these activities. The mandate of the working group was systematically re-examined in 1994. The purpose was to determine whether changes since the formation of the original mandate would indicate some need to refocus the directions of the working group. It was concluded that the main line of work (sometimes called the core business) of PWG1, which was shown to be an efficient tool for exchanging safety-significant operating experience and lessons learned from safety-significant issues, remained as valid and necessary in 1994 as it was in 1982. Some recommendations for improvement of efficiency were made, but the core business was unchanged. Very little of the mandate needed modification. With little change over nearly 20 years, these six items have constituted the mandate of PWG1. There have been twenty

  3. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  4. Parallel computing of physical maps--a comparative study in SIMD and MIMD parallelism.

    Science.gov (United States)

    Bhandarkar, S M; Chirravuri, S; Arnold, J

    1996-01-01

    Ordering clones from a genomic library into physical maps of whole chromosomes presents a central computational problem in genetics. Chromosome reconstruction via clone ordering is usually isomorphic to the NP-complete Optimal Linear Arrangement problem. Parallel SIMD and MIMD algorithms for simulated annealing based on Markov chain distribution are proposed and applied to the problem of chromosome reconstruction via clone ordering. Perturbation methods and problem-specific annealing heuristics are proposed and described. The SIMD algorithms are implemented on a 2048 processor MasPar MP-2 system which is an SIMD 2-D toroidal mesh architecture whereas the MIMD algorithms are implemented on an 8 processor Intel iPSC/860 which is an MIMD hypercube architecture. A comparative analysis of the various SIMD and MIMD algorithms is presented in which the convergence, speedup, and scalability characteristics of the various algorithms are analyzed and discussed. On a fine-grained, massively parallel SIMD architecture with a low synchronization overhead such as the MasPar MP-2, a parallel simulated annealing algorithm based on multiple periodically interacting searches performs the best. For a coarse-grained MIMD architecture with high synchronization overhead such as the Intel iPSC/860, a parallel simulated annealing algorithm based on multiple independent searches yields the best results. In either case, distribution of clonal data across multiple processors is shown to exacerbate the tendency of the parallel simulated annealing algorithm to get trapped in a local optimum.
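
The multiple-independent-searches scheme that wins on the high-synchronization-overhead MIMD machine can be sketched in a few lines. The weighted-adjacency objective, swap perturbation, and geometric cooling schedule below are illustrative assumptions, not the paper's exact heuristics:

```python
import math
import random

def cost(order, weights):
    # Optimal Linear Arrangement objective: sum of w[i][j] * |pos(i) - pos(j)|
    pos = {clone: p for p, clone in enumerate(order)}
    return sum(w * abs(pos[i] - pos[j]) for (i, j), w in weights.items())

def anneal(weights, n, steps=2000, t0=5.0, cooling=0.995, rng=None):
    # one sequential simulated-annealing search with swap perturbations
    rng = rng or random.Random()
    order = list(range(n))
    rng.shuffle(order)
    best, c = list(order), cost(order, weights)
    cb, t = c, t0
    for _ in range(steps):
        a, b = rng.randrange(n), rng.randrange(n)
        order[a], order[b] = order[b], order[a]      # swap perturbation
        c_new = cost(order, weights)
        if c_new <= c or rng.random() < math.exp((c - c_new) / t):
            c = c_new
            if c < cb:
                cb, best = c, list(order)
        else:
            order[a], order[b] = order[b], order[a]  # undo rejected move
        t *= cooling                                  # geometric cooling
    return best, cb

def multistart(weights, n, searches=4, seed=0):
    # MIMD-style scheme: fully independent searches, keep the overall best;
    # no communication is needed until the final reduction.
    runs = [anneal(weights, n, rng=random.Random(seed + k)) for k in range(searches)]
    return min(runs, key=lambda r: r[1])
```

On a real iPSC/860-class machine each call to `anneal` would run on its own processor, with a single gather of the best orderings at the end.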

  5. On synchronous parallel computations with independent probabilistic choice

    International Nuclear Information System (INIS)

    Reif, J.H.

    1984-01-01

    This paper introduces probabilistic choice to synchronous parallel machine models, in particular parallel RAMs. The power of probabilistic choice in parallel computations is illustrated by parallelizing some known probabilistic sequential algorithms. The authors characterize the computational complexity of time-, space-, and processor-bounded probabilistic parallel RAMs in terms of the computational complexity of probabilistic sequential RAMs. They show that parallelism uniformly speeds up time-bounded probabilistic sequential RAM computations by nearly a quadratic factor. They also show that probabilistic choice can be eliminated from parallel computations by introducing nonuniformity.

  6. Plutonium working group report on environmental, safety and health vulnerabilities associated with the department's plutonium storage. Volume II, part 11: Lawrence Berkeley Laboratory working group assessment team report

    International Nuclear Information System (INIS)

    1994-09-01

    President Clinton has directed an Interagency Working Group to initiate a comprehensive review of long-term options for the disposition of surplus plutonium. As part of this initiative, Secretary of Energy, Hazel O'Leary, has directed that a Department of Energy project be initiated to develop options and recommendations for the safe storage of these materials in the interim. A step in the process is a plutonium vulnerability assessment of facilities throughout the Department. The Plutonium Vulnerability Working Group was formed to produce the Project and Assessment Plans, to manage the assessments and to produce a final report for the Secretary by September 30, 1994. The plans established the approach and methodology for the assessment. The Project Plan specifies a Working Group Assessment Team (WGAT) to examine each of the twelve DOE sites with significant holdings of plutonium. The Assessment Plan describes the methodology that the Site Assessment Team (SAT) used to report on the plutonium holdings for each specific site. This report provides results of the assessment of the Lawrence Berkeley Laboratory.

  7. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
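
The whole-number tables described above follow from the reciprocal-sum rule 1/R_total = 1/R_1 + 1/R_2 + …; a quick exact-arithmetic check (a sketch, not tied to the article's tables):

```python
from fractions import Fraction

def parallel_resistance(*resistors):
    # 1/R_total = sum(1/R_i); exact rational arithmetic makes it obvious
    # when the combined resistance comes out as a whole number.
    total = sum(Fraction(1, r) for r in resistors)
    return 1 / total
```

For example, 6 Ω in parallel with 3 Ω gives 2 Ω, and 10 Ω with 15 Ω gives 6 Ω, both whole numbers of the kind the tables collect.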

  8. Parallelization methods study of thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Gaudart, Catherine

    2000-01-01

    The variety of parallelization methods and machines leaves programmers with a wide range of choices. In this study we suggest, in an industrial context, some solutions based on the experience acquired with different parallelization methods. The study concerns several scientific codes which simulate a large variety of thermal-hydraulics phenomena. A bibliography on parallelization methods and a first analysis of the codes showed the difficulty of applying our process to all the applications under study. It was therefore necessary to identify and extract a representative part of these applications and parallelization methods. The linear solver part of the codes emerged as the natural candidate, and several parallelization methods were applied to this particular part. From these developments one can estimate the work required for a non-specialist programmer to parallelize an application, and the impact of the development constraints. The parallelization methods tested are the numerical library PETSc, the parallelizer PAF, the language HPF, the formalism PEI, and the communication libraries MPI and PVM. In order to test several methods on different applications while minimizing the modifications to the codes, a tool called SPS (Server of Parallel Solvers) was developed. We describe the constraints on code optimization in an industrial context, present the solutions provided by the SPS tool, show the development of the linear solver part with the tested parallelization methods, and finally compare the results against the imposed criteria. (author) [fr]
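
As an illustration of the kind of work involved, a row-partitioned Jacobi sweep captures the structure of a parallelized iterative linear solver; the block layout and synchronous update below are a generic sketch, not the actual PETSc/PAF/HPF ports discussed in the study:

```python
def jacobi_partitioned(A, b, parts=2, iters=200):
    # Each "process" owns a contiguous block of rows and updates only its
    # unknowns from the previous global iterate (a synchronous Jacobi sweep).
    n = len(b)
    x = [0.0] * n
    bounds = [(p * n // parts, (p + 1) * n // parts) for p in range(parts)]
    for _ in range(iters):
        x_new = [0.0] * n
        for lo, hi in bounds:        # in a real code, each block runs on its own process
            for i in range(lo, hi):
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                x_new[i] = (b[i] - s) / A[i][i]
        x = x_new                    # stands in for the exchange of updated values
    return x
```

The single line marked as the exchange point is where the tested approaches diverge: message passing makes it explicit, while HPF or a parallelizing tool generates it from the data distribution.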

  9. Parallel factor analysis PARAFAC of process affected water

    Energy Technology Data Exchange (ETDEWEB)

    Ewanchuk, A.M.; Ulrich, A.C.; Sego, D. [Alberta Univ., Edmonton, AB (Canada). Dept. of Civil and Environmental Engineering; Alostaz, M. [Thurber Engineering Ltd., Calgary, AB (Canada)

    2010-07-01

    A parallel factor analysis (PARAFAC) of oil sands process-affected water was presented. Naphthenic acids (NA) are traditionally described as monobasic carboxylic acids. Research has indicated that oil sands NA do not fit classical definitions of NA. Oil sands organic acids have toxic and corrosive properties. When analyzed by fluorescence technology, oil sands process-affected water displays a characteristic peak at 290 nm excitation and approximately 346 nm emission. In this study, a parallel factor analysis (PARAFAC) was used to decompose process-affected water multi-way data into components representing analytes, chemical compounds, and groups of compounds. Water samples from various oil sands operations were analyzed in order to obtain EEMs. The EEMs were then arranged into a large matrix in decreasing process-affected water content for PARAFAC. Data were divided into 5 components. A comparison with commercially prepared NA samples suggested that oil sands NA is fundamentally different. Further research is needed to determine what each of the 5 components represent. tabs., figs.
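
PARAFAC decomposes a three-way array (here, sample x excitation x emission EEM data) into a sum of rank-one components. A minimal alternating-least-squares sketch in NumPy, unrelated to the authors' actual software, looks like this:

```python
import numpy as np

def khatri_rao(B, C):
    # column-wise Kronecker product, shape (J*K, R)
    J, R = B.shape
    K = C.shape[0]
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def parafac(X, rank, iters=100, seed=0):
    # CP-ALS: cycle through the three factor matrices, solving a linear
    # least-squares problem for each while holding the other two fixed.
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((d, rank)) for d in (I, J, K))
    X1 = X.reshape(I, J * K)                      # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)   # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(iters):
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

With 5 components, as in the study, each column of the emission/excitation factors is a candidate fluorophore profile, and the sample-mode factor tracks how strongly each component appears across the dilution series.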

  10. Energy Systems Group annual progress report 1 January - 31 December 1983

    International Nuclear Information System (INIS)

    Mackenzie, G.A.; Larsen, H.

    1984-03-01

    The report describes the work of the Energy Systems Group at Risoe National Laboratory during 1983. The activities may be roughly classified as energy planning, development and use of energy-economy models, energy systems analysis, and energy technology assessment. The report includes a list of staff members, as well as their experience and areas of interest. (author)

  11. User's guide of parallel program development environment (PPDE). The 2nd edition

    Energy Technology Data Exchange (ETDEWEB)

    Ueno, Hirokazu; Takemiya, Hiroshi; Imamura, Toshiyuki; Koide, Hiroshi; Matsuda, Katsuyuki; Higuchi, Kenji; Hirayama, Toshio [Center for Promotion of Computational Science and Engineering, Japan Atomic Energy Research Institute, Tokyo (Japan); Ohta, Hirofumi [Hitachi Ltd., Tokyo (Japan)

    2000-03-01

    The STA basic system has been enhanced to accelerate support for parallel programming on heterogeneous parallel computers, through a series of R and D efforts on parallel processing technology. The enhancement has been made by extending the functions of the PPDE, the Parallel Program Development Environment in the STA basic system. The extended PPDE provides: 1) automatic creation of a 'makefile' and a shell script for its execution, 2) multi-tool execution, which lets tools on heterogeneous computers carry out a task on a computer with a single operation, and 3) mirror composition, which reflects the edits made to a file on one computer into all related files on the other computers. These additional functions enhance the efficiency of program development that spans several computers. More functions have been added to the PPDE to support parallel program development. New functions were also designed to complement an HPF translator and a parallelizing support tool working together, so that a sequential program can be efficiently converted into a parallel program. This report describes the use of the extended PPDE. (author)

  12. Simulation Exploration through Immersive Parallel Planes

    Energy Technology Data Exchange (ETDEWEB)

    Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Smith, Steve [Los Alamos Visualization Associates

    2017-05-25

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
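
The data-side logic of the pairs-of-dimensions mapping, brushing, and time filtering can be sketched briefly; the record layout and function names here are invented for illustration and are not the system's actual API:

```python
def to_planes(obs, dim_pairs):
    # map an observation onto one 2-D point per plane (one plane per pair
    # of multivariate dimensions)
    return [(obs[x], obs[y]) for x, y in dim_pairs]

def brush(observations, dim_pairs, plane, rect, t_range=None):
    # select observations whose polyline vertex on `plane` falls inside the
    # brushed rectangle, optionally filtered by a time-"slider" interval
    (xmin, xmax), (ymin, ymax) = rect
    hits = []
    for obs in observations:
        if t_range and not (t_range[0] <= obs["t"] <= t_range[1]):
            continue
        px, py = to_planes(obs, dim_pairs)[plane]
        if xmin <= px <= xmax and ymin <= py <= ymax:
            hits.append(obs)
    return hits
```

In the full system the set returned by such a brush would both be highlighted in the immersive view and define the region of parameter space in which new simulations are launched.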

  13. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    Full Text Available As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, low self-weight/load ratio, good dynamic behavior and easy control; hence its range of application keeps expanding. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the reverse (inverse kinematics) solution and the limits on link lengths is introduced. This paper analyses the position workspace and orientation workspace of a six-degrees-of-freedom parallel robot. The results show that changing the lengths of the branches of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but does change its position.
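
The boundary-searching idea, evaluating the inverse kinematics at candidate poses and keeping those whose link lengths stay within limits, can be sketched for a simple planar mechanism; the three-anchor geometry and stroke limits below are invented for illustration:

```python
import math

def in_workspace(p, anchors, lmin, lmax):
    # inverse kinematics of a planar parallel mechanism: each leg length is
    # the distance from a fixed base anchor to the platform point; the pose
    # is reachable iff every leg respects its stroke limits
    for ax, ay in anchors:
        L = math.hypot(p[0] - ax, p[1] - ay)
        if not (lmin <= L <= lmax):
            return False
    return True

def workspace_grid(anchors, lmin, lmax, xs, ys):
    # numerical search: sweep a grid of candidate poses, keep reachable ones
    return [(x, y) for x in xs for y in ys
            if in_workspace((x, y), anchors, lmin, lmax)]
```

Widening the stroke limits (the analogue of lengthening the branches) grows the set of reachable grid points, echoing the paper's conclusion that branch length is the main lever on workspace size.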

  14. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.
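
A first ingredient of such distributed storage is assigning every mesh cell a unique owning process; a toy block partition (not the library's actual distribution algorithm) illustrates the bookkeeping:

```python
def partition_cells(n_cells, n_ranks):
    # block-partition global cell indices; each rank stores only its locally
    # owned cells (a real distributed mesh also keeps a layer of ghost cells,
    # omitted here)
    base, extra = divmod(n_cells, n_ranks)
    out, start = [], 0
    for r in range(n_ranks):
        size = base + (1 if r < extra else 0)
        out.append(range(start, start + size))
        start += size
    return out
```

The essential invariant, no matter how the partition is computed, is that the ranges are disjoint and together cover every cell exactly once.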

  15. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  16. Inertial confinement physics and technology group progress report (1994-1995)

    International Nuclear Information System (INIS)

    Associazione EURATOM-ENEA sulla fusione, Frascati

    1998-05-01

    The technical activities performed during the period 1994-1995 in the framework of the Inertial Fusion Physics and Technology Group are reported. The theoretical and numerical work, as well as experiments performed with the Frascati ABC facility, are described. [it]

  17. Collectively loading an application in a parallel computer

    Science.gov (United States)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Miller, Samuel J.; Mundy, Michael B.

    2016-01-05

    Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
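
The claimed sequence, identify a subset of nodes, pick a leader, have only the leader touch storage, then broadcast, can be mimicked in a few lines of plain Python; the node and storage structures are invented for the sketch, and a real machine would use an MPI-style broadcast rather than a loop:

```python
def collective_load(nodes, job, storage):
    # control-system sketch: choose nodes for the job, elect a leader,
    # and have the leader alone read the application image before fanning
    # it out to the whole subset
    subset = [n for n in nodes if n["free"]]       # identify compute nodes for the job
    leader = min(subset, key=lambda n: n["id"])    # select a job leader compute node
    image = storage[job]                           # only the leader touches storage
    for n in subset:                               # broadcast to the subset
        n["app"] = image
    return leader, subset
```

The point of the pattern is contention: one storage read followed by a network broadcast scales far better than every compute node opening the application file simultaneously.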

  18. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

    Full Text Available We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  19. Intranasal Midazolam versus Rectal Diazepam for the Management of Canine Status Epilepticus: A Multicenter Randomized Parallel-Group Clinical Trial.

    Science.gov (United States)

    Charalambous, M; Bhatti, S F M; Van Ham, L; Platt, S; Jeffery, N D; Tipold, A; Siedenburg, J; Volk, H A; Hasegawa, D; Gallucci, A; Gandini, G; Musteata, M; Ives, E; Vanhaesebrouck, A E

    2017-07-01

    Intranasal administration of benzodiazepines has shown superiority over rectal administration for terminating emergency epileptic seizures in human trials. No such clinical trials have been performed in dogs. To evaluate the clinical efficacy of intranasal midazolam (IN-MDZ), via a mucosal atomization device, as a first-line management option for canine status epilepticus and compare it to rectal administration of diazepam (R-DZP) for controlling status epilepticus before intravenous access is available. Client-owned dogs with idiopathic or structural epilepsy manifesting status epilepticus within a hospital environment were used. Dogs were randomly allocated to treatment with IN-MDZ (n = 20) or R-DZP (n = 15). Randomized parallel-group clinical trial. Seizure cessation time and adverse effects were recorded. For each dog, treatment was considered successful if the seizure ceased within 5 minutes and did not recur within 10 minutes after administration. The 95% confidence interval was used to estimate the true proportion of dogs that were successfully treated. Fisher's 2-tailed exact test was used to compare the 2 groups, and the results were considered statistically significant if P < .05. IN-MDZ and R-DZP successfully terminated status epilepticus in 70% (14/20) and 20% (3/15) of cases, respectively (P = .0059). All dogs showed sedation and ataxia. IN-MDZ is a quick, safe and effective first-line medication for controlling status epilepticus in dogs and appears superior to R-DZP. IN-MDZ might be a valuable treatment option when intravenous access is not available and for treatment of status epilepticus in dogs at home. Copyright © 2017 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
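
The reported comparison (14/20 vs. 3/15 successes) can be reproduced with a two-tailed Fisher's exact test; a from-scratch sketch using the hypergeometric distribution:

```python
from math import comb

def fisher_exact_two_tailed(a, b, c, d):
    # 2x2 table [[a, b], [c, d]]; the two-tailed p-value is the sum of the
    # probabilities of all tables (with the same margins) no more likely
    # than the observed one under the hypergeometric model
    n, r1, c1 = a + b + c + d, a + b, a + c
    denom = comb(n, r1)

    def p_of(x):  # probability of the table whose top-left cell equals x
        return comb(c1, x) * comb(n - c1, r1 - x) / denom

    p_obs = p_of(a)
    lo, hi = max(0, r1 - (n - c1)), min(c1, r1)
    return sum(p_of(x) for x in range(lo, hi + 1)
               if p_of(x) <= p_obs * (1 + 1e-9))

# 14/20 IN-MDZ successes vs 3/15 R-DZP successes
p = fisher_exact_two_tailed(14, 6, 3, 12)  # ≈ 0.0059, matching the reported P value
```

The tolerance factor on the comparison guards against floating-point ties, a standard precaution when summing exact hypergeometric probabilities in floating point.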

  20. Working Group Report: Lattice Field Theory

    Energy Technology Data Exchange (ETDEWEB)

    Blum, T.; et al.,

    2013-10-22

    This is the report of the Computing Frontier working group on Lattice Field Theory prepared for the proceedings of the 2013 Community Summer Study ("Snowmass"). We present the future computing needs and plans of the U.S. lattice gauge theory community and argue that continued support of the U.S. (and worldwide) lattice-QCD effort is essential to fully capitalize on the enormous investment in the high-energy physics experimental program. We first summarize the dramatic progress of numerical lattice-QCD simulations in the past decade, with some emphasis on calculations carried out under the auspices of the U.S. Lattice-QCD Collaboration, and describe a broad program of lattice-QCD calculations that will be relevant for future experiments at the intensity and energy frontiers. We then present details of the computational hardware and software resources needed to undertake these calculations.