WorldWideScience

Sample records for parallel group essential

  1. Group theory I essentials

    CERN Document Server

    Milewski, Emil G

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As the name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams or doing homework, and they remain a lasting reference source for students, teachers, and professionals. Group Theory I includes sets and mappings, groupoids and semi-groups, groups, isomorphisms and homomorphisms, cyclic groups, the Sylow theorems, and finite p-groups.

  2. Parallel and Serial Grouping of Image Elements in Visual Perception

    Science.gov (United States)

    Houtkamp, Roos; Roelfsema, Pieter R.

    2010-01-01

    The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some…

  3. Parallel computational in nuclear group constant calculation

    International Nuclear Information System (INIS)

    Su'ud, Zaki; Rustandi, Yaddi K.; Kurniadi, Rizal

    2002-01-01

    In this paper, a parallel computational method for nuclear group constant calculation using the collision probability method is discussed. The main focus is the calculation of the collision matrix, which requires a large amount of computational time. The geometry treated here is a concentric cylinder. The collision probability matrix is calculated semi-analytically using the Beckley-Naylor function. To accelerate the computation, several computers are used in parallel to solve the problem. Under Linux, parallelization is based on PVM software with C or Fortran; under Windows, socket programming with Delphi or C++ Builder is used. The results show the importance of assigning an optimal weight to each processor when processors of different speeds are involved.
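The row-wise work distribution described above can be sketched as follows. This is a minimal illustration: the region count and the kernel are invented placeholders standing in for the actual semi-analytic Beckley-Naylor evaluation.

```python
from multiprocessing import Pool

N = 8  # number of annular regions in the concentric-cylinder model (illustrative)

def collision_row(i):
    # Placeholder kernel standing in for the semi-analytic
    # Beckley-Naylor evaluation; the real entries depend on optical
    # path lengths between regions i and j of the concentric cylinder.
    return [1.0 / (1 + abs(i - j)) for j in range(N)]

def build_matrix(workers=2):
    # Whole rows are independent, so they can be handed to worker
    # processes, mirroring the per-processor work units the abstract
    # describes for PVM.
    with Pool(workers) as pool:
        return pool.map(collision_row, range(N))
```

With processors of different speeds, the batches of rows handed to each worker would be weighted accordingly, which is the load-balancing point the abstract makes.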

  4. Establishing a group of endpoints in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.; Xue, Hanhong

    2016-02-02

    A parallel computer executes a number of tasks; each task includes a number of endpoints, and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes: receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation is a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints, and the user specification defines the set of endpoints without a user specification of a particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification.

  5. Parallel solutions of the two-group neutron diffusion equations

    International Nuclear Information System (INIS)

    Zee, K.S.; Turinsky, P.J.

    1987-01-01

    Recent efforts to adapt various numerical solution algorithms to parallel computer architectures have addressed the possibility of substantially reducing the running time of few-group neutron diffusion calculations. The authors have developed an efficient iterative parallel algorithm and an associated computer code for the rapid solution of the finite difference representation of the two-group neutron diffusion equations on the Cray X-MP/48 supercomputer, which has multiple CPUs and vector pipelines. For realistic simulation of light water reactor cores, the code employs a macroscopic depletion model with trace capability for selected fission product transients and critical boron. In addition, moderator and fuel temperature feedback models are incorporated into the code. The validity of the physics models used in the code was benchmarked against qualified codes and proved accurate. This work extends previous work in that various feedback effects are accounted for in the system; the entire code is structured to accommodate extensive vectorization; and additional parallelism by multitasking is achieved not only for the solution of the matrix equations associated with the inner iterations but also for the other segments of the code, e.g., the outer iterations.
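The outer (power) iteration at the heart of such few-group solvers can be illustrated with a two-group infinite-medium balance; the cross-section values below are invented for the example and are not benchmark data.

```python
# Two-group infinite-medium neutron balance, solved by the outer
# (power) iteration that codes like the one above parallelize.
# Cross sections are illustrative numbers only.
SIG_A1, SIG_S12, NU_F1 = 0.010, 0.020, 0.005  # group-1 absorption, down-scatter, nu-fission
SIG_A2, NU_F2 = 0.080, 0.120                  # group-2 absorption, nu-fission

def power_iteration(tol=1e-10, max_outer=500):
    phi1, k = 1.0, 1.0
    for _ in range(max_outer):
        # "Inner" step: group-2 flux follows from the down-scatter source.
        phi2 = SIG_S12 * phi1 / SIG_A2
        # Fission source and k-eff update (outer iteration).
        source = NU_F1 * phi1 + NU_F2 * phi2
        k_new = source / ((SIG_A1 + SIG_S12) * phi1)
        phi1 = source / (k_new * (SIG_A1 + SIG_S12))  # renormalize group-1 flux
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k
```

In the infinite medium the iteration converges almost immediately; in the spatial finite-difference case each outer step additionally requires inner iterations for the within-group diffusion solves, which is where the vectorization and multitasking described above pay off.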

  6. Massively parallel read mapping on GPUs with the q-group index and PEANUT

    NARCIS (Netherlands)

    J. Köster (Johannes); S. Rahmann (Sven)

    2014-01-01

    We present the q-group index, a novel data structure for read mapping tailored towards graphics processing units (GPUs) with a small memory footprint and efficient parallel algorithms for querying and building. On top of the q-group index we introduce PEANUT, a highly parallel GPU-based

  7. Parallel and serial grouping of image elements in visual perception

    NARCIS (Netherlands)

    Houtkamp, R.; Roelfsema, P.R.

    2010-01-01

    The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that

  8. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  9. Psychodrama: A Creative Approach for Addressing Parallel Process in Group Supervision

    Science.gov (United States)

    Hinkle, Michelle Gimenez

    2008-01-01

    This article provides a model for using psychodrama to address issues of parallel process during group supervision. Information on how to utilize the specific concepts and techniques of psychodrama in relation to group supervision is discussed. A case vignette of the model is provided.

  10. Intensive versus conventional blood pressure monitoring in a general practice population. The Blood Pressure Reduction in Danish General Practice trial: a randomized controlled parallel group trial

    DEFF Research Database (Denmark)

    Klarskov, Pia; Bang, Lia E; Schultz-Larsen, Peter

    2018-01-01

    To compare the effect of a conventional to an intensive blood pressure monitoring regimen on blood pressure in hypertensive patients in the general practice setting. Randomized controlled parallel group trial with 12-month follow-up. One hundred and ten general practices in all regions of Denmark....... One thousand forty-eight patients with essential hypertension. Conventional blood pressure monitoring ('usual group') continued usual ad hoc blood pressure monitoring by office blood pressure measurements, while intensive blood pressure monitoring ('intensive group') supplemented this with frequent...... a reduction of blood pressure. Clinical Trials NCT00244660....

  11. How does social essentialism affect the development of inter-group relations?

    Science.gov (United States)

    Rhodes, Marjorie; Leslie, Sarah-Jane; Saunders, Katya; Dunham, Yarrow; Cimpian, Andrei

    2018-01-01

    Psychological essentialism is a pervasive conceptual bias to view categories as reflecting something deep, stable, and informative about their members. Scholars from diverse disciplines have long theorized that psychological essentialism has negative ramifications for inter-group relations, yet little previous empirical work has experimentally tested the social implications of essentialist beliefs. Three studies (N = 127, ages 4.5-6) found that experimentally inducing essentialist beliefs about a novel social category led children to share fewer resources with category members, but did not lead to the out-group dislike that defines social prejudice. These findings indicate that essentialism negatively influences some key components of inter-group relations, but does not lead directly to the development of prejudice. © 2017 John Wiley & Sons Ltd.

  12. A Lightweight RFID Grouping-Proof Protocol Based on Parallel Mode and DHCP Mechanism

    Directory of Open Access Journals (Sweden)

    Zhicai Shi

    2017-07-01

    A Radio Frequency Identification (RFID) grouping-proof protocol generates evidence of the simultaneous existence of a group of tags, and it has been applied in many different fields. Current grouping-proof protocols still have flaws such as low grouping-proof efficiency and vulnerability to trace attacks and information leakage. To improve security and efficiency, we propose a lightweight RFID grouping-proof protocol based on a parallel mode and a DHCP (Dynamic Host Configuration Protocol) mechanism. Our protocol involves multiple readers and multiple tag groups. During the grouping-proof period, one reader and one tag group are chosen by the verifier by means of the DHCP mechanism. When only some of the tags of the chosen group are present, the protocol can still give evidence of their co-existence. Our protocol uses a parallel communication mode between the reader and the tags to ensure grouping-proof efficiency. It uses only a hash function to complete mutual authentication among the verifier, readers, and tags. It preserves the privacy of the RFID system and resists attacks such as eavesdropping, replay, trace, and impersonation. The protocol is therefore secure, flexible, and efficient, and because it relies only on lightweight operations such as a hash function and a pseudorandom number generator, it is well suited to low-cost RFID systems.
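The parallel-mode idea, where tags answer the same challenge independently and the verifier combines the replies into one piece of evidence, can be sketched roughly as follows. The tag keys, message layout, and aggregation rule are invented for illustration and do not reproduce the paper's exact protocol messages.

```python
import hashlib
import secrets

# Toy sketch of a hash-based grouping proof in "parallel mode":
# every tag answers the same challenge independently, and the
# verifier folds the responses into a single evidence digest.
TAG_KEYS = {"tag1": b"k1", "tag2": b"k2", "tag3": b"k3"}  # invented secrets

def tag_response(key, challenge):
    # Each tag only needs one hash over its secret and the challenge.
    return hashlib.sha256(key + challenge).digest()

def grouping_proof(challenge, responses):
    # Sort before folding so the aggregation is order-independent:
    # parallel replies need no sequencing among tags.
    acc = bytes(32)
    for r in sorted(responses):
        acc = hashlib.sha256(acc + r).digest()
    return hashlib.sha256(challenge + acc).hexdigest()

challenge = secrets.token_bytes(16)
replies = [tag_response(k, challenge) for k in TAG_KEYS.values()]
proof = grouping_proof(challenge, replies)
```

Because the fold is order-independent, the verifier can also accept a proof over any subset of replies, which corresponds to the partial-group evidence the abstract mentions.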

  13. Coarse-grain parallel solution of few-group neutron diffusion equations

    International Nuclear Information System (INIS)

    Sarsour, H.N.; Turinsky, P.J.

    1991-01-01

    The authors present a parallel numerical algorithm for the solution of the finite difference representation of the few-group neutron diffusion equations. The targeted architectures are multiprocessor computers with shared memory, like the Cray Y-MP and the IBM 3090/VF, where coarse granularity is important for minimizing overhead. Most past work that attempts to exploit concurrency has concentrated on the inner iterations of the standard outer-inner iterative strategy, which produces very fine granularity. To coarsen granularity, the authors introduce parallelism at the nested outer-inner level. The problem's spatial domain is partitioned into contiguous subregions, and a processor is assigned to solve for each subregion independently of all the others; i.e., each subregion is treated as a reactor core with imposed boundary conditions. Since those boundary conditions on interior surfaces, referred to as internal boundary conditions (IBCs), are not known, a third iterative level, the recomposition iterations, is introduced to communicate results between subregions
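The recomposition idea can be illustrated on a toy 1-D diffusion problem: each subregion is solved with internal boundary conditions taken from its neighbor's latest iterate, and the outer loop repeats until those IBCs settle. The overlap width and the Gauss-Seidel inner solver are illustrative choices, not the paper's reactor-core configuration.

```python
# Model problem: -u'' = 1 on [0, 1], u(0) = u(1) = 0, finite differences.
# Two overlapping subregions, each solved independently with fixed
# internal boundary conditions (IBCs); the outer loop plays the role
# of the recomposition iterations.
N, H = 41, 1.0 / 40
u = [0.0] * N

def solve_subdomain(u, lo, hi, sweeps=200):
    # Plain Gauss-Seidel on interior points lo+1..hi-1, with the
    # values at lo and hi held fixed as (internal) boundary conditions.
    for _ in range(sweeps):
        for i in range(lo + 1, hi):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + H * H)

for outer in range(50):            # recomposition iterations
    solve_subdomain(u, 0, 24)      # subregion 1; IBC at i = 24
    solve_subdomain(u, 16, N - 1)  # subregion 2; IBC at i = 16
```

At convergence u matches the exact solution x(1-x)/2, so the midpoint value is 0.125; in each recomposition pass the two subregion solves are independent and could run on separate processors.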

  14. Numeric algorithms for parallel processors computer architectures with applications to the few-groups neutron diffusion equations

    International Nuclear Information System (INIS)

    Zee, S.K.

    1987-01-01

    A numeric algorithm and an associated computer code were developed for the rapid solution of the finite-difference representation of the few-group neutron-diffusion equations on parallel computers. Applications of the numeric algorithm on both SIMD (vector pipeline) and MIMD/SIMD (multi-CPU/vector pipeline) architectures were explored. The algorithm was successfully implemented in the two-group, 3-D neutron diffusion computer code named DIFPAR3D (DIFfusion PARallel 3-Dimension). Numerical-solution techniques used in the code include the Chebyshev polynomial acceleration technique in conjunction with the power method of outer iteration. For inner iterations, a parallel form of red-black (cyclic) line SOR is incorporated, with automated determination of the group-dependent relaxation factors and of the iteration numbers required to achieve a specified inner iteration error tolerance. The code employs a macroscopic depletion model with trace capability for selected fission product transients and critical boron. In addition, moderator and fuel temperature feedback models are incorporated into the DIFPAR3D code for realistic simulation of power reactor cores. The physics models used were proven acceptable in separate benchmarking studies.
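The red/black coloring that makes SOR parallelizable can be shown on a 1-D model problem. This point-SOR sketch with a fixed relaxation factor only illustrates the ordering; the code described above uses line SOR with automatically determined, group-dependent factors.

```python
import math

# Model problem: -u'' = 1 on [0, 1], u(0) = u(1) = 0.
N = 33
H = 1.0 / (N - 1)
OMEGA = 2.0 / (1.0 + math.sin(math.pi * H))  # optimal omega for this model problem

def rb_sor_sweep(u):
    # Red points (odd i) depend only on black neighbors and vice versa,
    # so within each color every update is independent -- that is the
    # parallelism the red/black (cyclic) ordering exposes.
    for start in (1, 2):  # 1 = red (odd indices), 2 = black (even indices)
        for i in range(start, N - 1, 2):
            gs = 0.5 * (u[i - 1] + u[i + 1] + H * H)
            u[i] += OMEGA * (gs - u[i])

u = [0.0] * N
for _ in range(100):
    rb_sor_sweep(u)
```

At convergence u matches x(1-x)/2 (midpoint 0.125); in the line-SOR variant, whole grid lines of one color are solved simultaneously instead of single points.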

  15. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  16. Two types of essential carboxyl groups in Rhodospirillum rubrum proton ATPase

    International Nuclear Information System (INIS)

    Ceccarelli, E.; Vallejos, R.H.

    1983-01-01

    Two different types of essential carboxyl groups were detected in the extrinsic component of the proton ATPase of Rhodospirillum rubrum. Chemical modification of R. rubrum chromatophores or its solubilized ATPase by Woodward's reagent K resulted in inactivation of photophosphorylating and ATPase activities. The apparent order of reaction was nearly 1 with respect to reagent concentration, and similar KI values were obtained for the soluble and membrane-bound ATPases, suggesting that inactivation was associated with modification of one essential carboxyl group located in the soluble component of the proton ATPase. Inactivation was prevented by adenine nucleotides but not by divalent cations. Dicyclohexylcarbodiimide completely inhibited the solubilized ATPase with a KI of 5.2 mM and a k2 of 0.81 min⁻¹. Mg²⁺ afforded nearly complete protection, with a Kd of 2.8 mM. Two moles of [¹⁴C]dicyclohexylcarbodiimide were incorporated per mole of enzyme for complete inactivation, but in the presence of 30 mM MgCl₂ only one mole was incorporated and there was no inhibition. The labeling was recovered mostly from the β subunit. The incorporation of the labeled reagent into the ATPase was not prevented by previous modification with Woodward's reagent K. It is concluded that the two reagents modify two different essential carboxyl groups in the soluble ATPase from R. rubrum

  17. Geometry I essentials

    CERN Document Server

    REA, The Editors of

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As the name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams or doing homework, and they remain a lasting reference source for students, teachers, and professionals. Geometry I includes methods of proof, points, lines, planes, angles, congruent angles and line segments, triangles, parallelism, quadrilaterals, geometric inequalities, and geometric

  18. Parallel Expansions of Sox Transcription Factor Group B Predating the Diversifications of the Arthropods and Jawed Vertebrates

    Science.gov (United States)

    Zhong, Lei; Wang, Dengqiang; Gan, Xiaoni; Yang, Tong; He, Shunping

    2011-01-01

    Group B of the Sox transcription factor family is crucial in embryo development in the insects and vertebrates. Sox group B, unlike the other Sox groups, has an unusually enlarged functional repertoire in insects, but the timing and mechanism of the expansion of this group were unclear. We collected and analyzed data for Sox group B from 36 species of 12 phyla representing the major metazoan clades, with an emphasis on arthropods, to reconstruct the evolutionary history of SoxB in bilaterians and to date the expansion of Sox group B in insects. We found that the genome of the bilaterian last common ancestor probably contained one SoxB1 and one SoxB2 gene only and that tandem duplications of SoxB2 occurred before the arthropod diversification but after the arthropod-nematode divergence, resulting in the basal repertoire of Sox group B in diverse arthropod lineages. The arthropod Sox group B repertoire expanded differently from the vertebrate repertoire, which resulted from genome duplications. The parallel increases in the Sox group B repertoires of the arthropods and vertebrates are consistent with the parallel increases in the complexity and diversification of these two important organismal groups. PMID:21305035

  19. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs and discovered a number of new results. Second, parallel algorithms for Fourier transforms over finite groups are developed, and preliminary parallel implementations of group transforms for dihedral and symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques and were used in the investigation of various physical phenomena.
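The simplest instance of a Fourier transform over a finite group is the cyclic case, where group convolution diagonalizes under the ordinary DFT. The sketch below checks the direct group-algebra sum against the transform route; the nonabelian dihedral- and symmetric-group transforms used in the thesis are more involved, so this is only the abelian special case.

```python
import cmath

def group_convolve(f, g):
    # Convolution over the cyclic group Z_n: (f * g)(i) = sum_j f(j) g(i - j).
    n = len(f)
    return [sum(f[j] * g[(i - j) % n] for j in range(n)) for i in range(n)]

def dft(v, sign=-1):
    # Naive discrete Fourier transform; sign=-1 forward, sign=+1 inverse (unscaled).
    n = len(v)
    return [sum(v[j] * cmath.exp(sign * 2j * cmath.pi * i * j / n)
                for j in range(n)) for i in range(n)]

def convolve_via_dft(f, g):
    # A Z_n-equivariant ("circulant") operator is diagonal in frequency
    # space, so convolution becomes a pointwise product.
    n = len(f)
    F, G = dft(f), dft(g)
    return [x.real / n for x in dft([a * b for a, b in zip(F, G)], sign=+1)]
```

For a general finite group the same idea holds with the irreducible representations replacing the exponentials, which is what makes the group-transform machinery useful for symmetric problems.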

  20. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    Science.gov (United States)

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.

  1. Large-scale parallel configuration interaction. II. Two- and four-component double-group general active space implementation with application to BiH

    DEFF Research Database (Denmark)

    Knecht, Stefan; Jensen, Hans Jørgen Aagaard; Fleig, Timo

    2010-01-01

    We present a parallel implementation of a large-scale relativistic double-group configuration interaction (CI) program. It is applicable with a large variety of two- and four-component Hamiltonians. The parallel algorithm is based on a distributed data model in combination with a static load balanci...

  2. Solution of the within-group multidimensional discrete ordinates transport equations on massively parallel architectures

    Science.gov (United States)

    Zerr, Robert Joseph

    2011-12-01

    The integral transport matrix method (ITMM) has been used as the kernel of new parallel solution methods for the discrete ordinates approximation of the within-group neutron transport equation. The ITMM abandons the repetitive mesh sweeps of the traditional source iteration (SI) scheme in favor of constructing stored operators that account for the direct coupling factors among all the cells and between the cells and boundary surfaces. The main goals of this work were to develop the algorithms that construct these operators and employ them in the solution process, determine the most suitable way to parallelize the entire procedure, and evaluate the behavior and performance of the developed methods for an increasing number of processes. This project compares the effectiveness of the ITMM with the SI scheme parallelized with the Koch-Baker-Alcouffe (KBA) method. The primary parallel solution method involves a decomposition of the domain into smaller spatial sub-domains, each with its own transport matrices, coupled together via interface boundary angular fluxes. Each sub-domain has its own set of ITMM operators and represents an independent transport problem. Multiple iterative parallel solution methods have been investigated, including parallel block Jacobi (PBJ), parallel red/black Gauss-Seidel (PGS), and parallel GMRES (PGMRES). The fastest observed parallel solution method, PGS, was used in a weak scaling comparison with the PARTISN code. Compared to the state-of-the-art SI-KBA with diffusion synthetic acceleration (DSA), this new method without acceleration/preconditioning is not competitive for any problem parameters considered. The best comparisons occur for problems that are difficult for SI DSA, namely highly scattering and optically thick ones. SI DSA execution time curves are generally steeper than the PGS ones. However, until further testing is performed it cannot be concluded that SI DSA does not outperform the ITMM with PGS even on several thousand or tens of
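The parallel block Jacobi (PBJ) structure mentioned above can be sketched on a tiny system: each sub-domain inverts its own diagonal block against a source lagged from the others, so all block solves within an iteration are independent. The 4x4 matrix below stands in for the per-sub-domain ITMM operators and is invented for the example.

```python
# Two 2x2 diagonal blocks play the role of two sub-domains; the
# off-diagonal entries play the role of interface coupling.
A = [[4.0, 1.0, 0.5, 0.0],
     [1.0, 4.0, 0.0, 0.5],
     [0.5, 0.0, 4.0, 1.0],
     [0.0, 0.5, 1.0, 4.0]]
b = [1.0, 2.0, 3.0, 4.0]

def solve2(a, rhs):
    # Direct solve of a 2x2 block by Cramer's rule.
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(rhs[0] * a[1][1] - rhs[1] * a[0][1]) / det,
            (a[0][0] * rhs[1] - a[1][0] * rhs[0]) / det]

def block_jacobi(iters=60):
    x = [0.0] * 4
    for _ in range(iters):
        # Both right-hand sides use only the previous iterate, so the
        # two block solves could run on different processors.
        r0 = [b[0] - A[0][2] * x[2] - A[0][3] * x[3],
              b[1] - A[1][2] * x[2] - A[1][3] * x[3]]
        r1 = [b[2] - A[2][0] * x[0] - A[2][1] * x[1],
              b[3] - A[3][0] * x[0] - A[3][1] * x[1]]
        x = solve2([[4.0, 1.0], [1.0, 4.0]], r0) + \
            solve2([[4.0, 1.0], [1.0, 4.0]], r1)
    return x
```

The red/black Gauss-Seidel variant (PGS) differs only in using the freshest neighbor values for one color of sub-domains, which typically halves the iteration count at the cost of a two-phase schedule.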

  3. Mandibular advancement appliance for obstructive sleep apnoea: results of a randomised placebo controlled trial using parallel group design

    DEFF Research Database (Denmark)

    Petri, N.; Svanholt, P.; Solow, B.

    2008-01-01

    The aim of this trial was to evaluate the efficacy of a mandibular advancement appliance (MAA) for obstructive sleep apnoea (OSA). Ninety-three patients with OSA and a mean apnoea-hypopnoea index (AHI) of 34.7 were centrally randomised into three, parallel groups: (a) MAA; (b) mandibular non......). Eighty-one patients (87%) completed the trial. The MAA group achieved mean AHI and Epworth scores significantly lower (P group and the no-intervention group. No significant differences were found between the MNA group and the no-intervention group. The MAA group had...

  4. High performance computing of density matrix renormalization group method for 2-dimensional model. Parallelization strategy toward peta computing

    International Nuclear Information System (INIS)

    Yamada, Susumu; Igarashi, Ryo; Machida, Masahiko; Imamura, Toshiyuki; Okumura, Masahiko; Onishi, Hiroaki

    2010-01-01

    We parallelize the density matrix renormalization group (DMRG) method, which is a ground-state solver for one-dimensional quantum lattice systems. The parallelization allows us to extend the applicable range of the DMRG to n-leg ladders, i.e., quasi-two-dimensional cases. Such an extension is expected to bring about breakthroughs in, e.g., quantum physics, chemistry, and nano-engineering. However, the straightforward parallelization requires all-to-all communications between all processes, which are unsuitable for multi-core systems, the mainstream of current parallel computers. Therefore, we optimize the all-to-all communications in two steps. The first is the elimination of communications between all processes by rearranging the data distribution while keeping the communication data volume unchanged. The second is the avoidance of communication conflicts by rescheduling the calculation and the communication. We evaluate the performance of the DMRG method on multi-core supercomputers and confirm that our two-step tuning is quite effective. (author)

  5. Performance of a fine-grained parallel model for multi-group nodal-transport calculations in three-dimensional pin-by-pin reactor geometry

    International Nuclear Information System (INIS)

    Masahiro, Tatsumi; Akio, Yamamoto

    2003-01-01

    A production code, SCOPE2, was developed based on a fine-grained parallel algorithm using the red/black iterative method, targeting parallel computing environments such as PC clusters. It can perform a depletion calculation in a few hours on a PC cluster, with a model based on a 9-group nodal-SP3 transport method in 3-dimensional pin-by-pin geometry, for in-core fuel management of commercial PWRs. The present algorithm guarantees a convergence process identical to that of serial execution, which is very important from the viewpoint of quality management. The fine-mesh geometry is constructed by hierarchical decomposition, with the introduction of an intermediate management layer, the block, which is a quarter of a fuel assembly in the radial direction. The combination of a mesh division scheme that forces even meshes on each edge and a latency-hiding communication algorithm makes message passing simple and efficient, enhancing parallel performance. Inter-processor communication and parallel I/O access were realized using MPI functions. Parallel performance was measured for depletion calculations by the 9-group nodal-SP3 transport method in 3-dimensional pin-by-pin geometry with 340 x 340 x 26 meshes for the full-core geometry and 170 x 170 x 26 for the quarter-core geometry. A PC cluster consisting of 24 Pentium 4 processors connected by Fast Ethernet was used for the performance measurement. Calculations in full-core geometry gave better speedups than those in quarter-core geometry because of the larger granularity. The fine-mesh sweep and feedback calculation parts gave almost perfect scalability, since their granularity is large enough, while the 1-group coarse-mesh diffusion acceleration reached only around 80%. The speedup and parallel efficiency for the total computation time were 22.6 and 94%, respectively, for the calculation in full-core geometry with 24 processors. (authors)

  6. Compactness of the automorphism group of a topological parallelism on real projective 3-space: The disconnected case

    OpenAIRE

    Rainer, Löwen

    2017-01-01

    We prove that the automorphism group of a topological parallelism on real projective 3-space is compact. In a preceding article it was proved that at least the connected component of the identity is compact. The present proof does not depend on that earlier result.

  7. Parallel tempering in full QCD with Wilson fermions

    International Nuclear Information System (INIS)

    Ilgenfritz, E.-M.; Kerler, W.; Mueller-Preussker, M.; Stueben, H.

    2002-01-01

    We study the performance of QCD simulations with dynamical Wilson fermions by combining the hybrid Monte Carlo algorithm with parallel tempering on 10⁴ and 12⁴ lattices. In order to compare tempered with standard simulations, covariance matrices between subensembles have to be formulated and evaluated using the general properties of autocorrelations of the parallel tempering algorithm. We find that rendering the hopping parameter κ dynamical does not lead to an essential improvement. We point out possible reasons for this observation and discuss more suitable ways of applying parallel tempering to QCD
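The tempering swap rule can be sketched for a toy double-well system. In the QCD study the tempered parameter is the hopping parameter κ rather than a temperature, but the Metropolis swap acceptance has the same form; the energy function, temperatures, and schedule below are invented for illustration, and the random seed is fixed for reproducibility.

```python
import math
import random

random.seed(1)
BETAS = [5.0, 1.0]  # inverse temperatures: cold replica, hot replica

def energy(x):
    # Toy double-well potential E(x) = (x^2 - 1)^2.
    return (x * x - 1.0) ** 2

def metropolis_step(x, beta, step=0.5):
    # Standard Metropolis update at fixed beta.
    y = x + random.uniform(-step, step)
    if random.random() < math.exp(-beta * max(0.0, energy(y) - energy(x))):
        return y
    return x

def run(n_sweeps=2000):
    xs = [1.0, 1.0]
    swaps = 0
    for sweep in range(n_sweeps):
        xs = [metropolis_step(x, b) for x, b in zip(xs, BETAS)]
        if sweep % 10 == 0:
            # Replica exchange, accepted with min(1, exp(d_beta * d_E)).
            d = (BETAS[0] - BETAS[1]) * (energy(xs[0]) - energy(xs[1]))
            if random.random() < math.exp(min(0.0, d)):
                xs.reverse()
                swaps += 1
    return swaps
```

The hot replica crosses the barrier between wells easily and hands its configurations down through swaps, which is the decorrelation mechanism parallel tempering is meant to provide.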

  8. Study of palmar dermatoglyphics in patients with essential hypertension between the age group of 20-50 years

    Directory of Open Access Journals (Sweden)

    Rudragouda S Bulagouda, Purnima J Patil, Gavishiddppa A Hadimani, Balappa M Bannur, Patil BG, Nagaraj S. Mallashetty, Ishwar B Bagoji

    2013-10-01

    Background: In the present study, we tried to determine significant palmar dermatoglyphic parameters in cases of essential hypertension in the age group of 20-50 years, and whether these parameters can be used for screening, i.e., early detection of hypertension. Method: Using the modified Purvis-Smith method, black duplicating ink (Kores, Bombay) was smeared on both hands one by one, and prints were taken by rolling the hands from the wrist creases to the fingertips on a roller covered with bond paper. White crystal bond paper, applied firmly over a wooden pad, was used for recording the inked epidermal ridge patterns. Rolled fingerprints were recorded after applying uniform pressure on white bond paper from the ulnar to the radial side. A complete palm impression, including the hollow of the palm, was obtained on paper. Thus one set of fingerprints and palm prints was obtained. The prints were immediately examined with a hand lens. Result: The right and left hands of both the male and female study groups showed more arches and more radial loops than controls. The right and left hands of both the male and female control groups showed more ulnar loops than the study groups. The right and left hands of the male control group showed more whorls than the study group, while in females the right-hand study group showed more whorls than the control group and the left-hand study group showed fewer whorls than the control group. Conclusion: The present study indicates that some genetic factors are involved in the causation of essential hypertension, and that it is possible to a certain extent to predict from dermatoglyphics an individual's chance of acquiring essential hypertension. Like clinical history, examination, and investigations, dermatoglyphics will play an important role in revealing the genetic

  9. Parallel programming practical aspects, models and current limitations

    CERN Document Server

    Tarkov, Mikhail S

    2014-01-01

    Parallel programming is designed for the use of parallel computer systems for solving time-consuming problems that cannot be solved on a sequential computer in a reasonable time. These problems can be divided into two classes: (1) processing of large data arrays (including processing of images and signals in real time); (2) simulation of complex physical processes and chemical reactions. For each of these classes, prospective methods are designed for solving problems. For data processing, one of the most promising technologies is the use of artificial neural networks, while the particle-in-cell method and cellular automata are very useful for simulation. Problems of the scalability of parallel algorithms and the transfer of existing parallel programs to future parallel computers are very acute now. An important task is to optimize the use of the equipment (including the CPU cache) of parallel computers. Along with parallelizing information processing, it is essential to ensure the processing reliability by the relevant organization ...

  10. Parallelization of quantum molecular dynamics simulation code

    International Nuclear Information System (INIS)

    Kato, Kaori; Kunugi, Tomoaki; Shibahara, Masahiko; Kotake, Susumu

    1998-02-01

    A quantum molecular dynamics simulation code has been developed at the Kansai Research Establishment for the analysis of the thermalization of photon energies in molecules or materials. The simulation code has been parallelized for both a scalar massively parallel computer (Intel Paragon XP/S75) and a vector parallel computer (Fujitsu VPP300/12). Scalable speed-up has been obtained on both parallel computers by distributing work to the processor units through division of the particle group. By distributing work to the processor units not only by particle group but also by the fine-grained calculations within the per-particle computation, high parallel performance is achieved on the Intel Paragon XP/S75. (author)
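
    The distribution-by-particle-group scheme described above can be sketched as a map/reduce over particle groups (an illustrative Python sketch of the decomposition pattern only — the original code, its physics, and its target machines are not reproduced here; a real MD code would assign each group to a separate processor unit):

```python
from concurrent.futures import ThreadPoolExecutor

def kinetic_energy(group):
    # Partial kinetic energy of one particle group: sum of (1/2) m v^2.
    return sum(0.5 * m * (vx * vx + vy * vy + vz * vz)
               for m, vx, vy, vz in group)

def total_kinetic_energy(particles, n_workers=4):
    # Divide the particle list into one group per worker, evaluate the
    # groups concurrently, and reduce the partial sums.
    size = (len(particles) + n_workers - 1) // n_workers
    groups = [particles[i:i + size] for i in range(0, len(particles), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(kinetic_energy, groups))
```

The finer-grained variant mentioned in the abstract would additionally split the work inside each particle's computation; here only the group-level division is shown.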

  11. Broadcasting a message in a parallel computer

    Science.gov (United States)

    Berg, Jeremy E [Rochester, MN; Faraj, Ahmad A [Rochester, MN

    2011-08-02

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
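
    The Hamiltonian-path idea can be illustrated with a serpentine path through one plane of a 2D mesh (a hypothetical simulation of the hop-by-hop relay, not the patented implementation): consecutive nodes on the path are always mesh neighbours, so the broadcast reduces to a pipeline of point-to-point transfers starting at the logical root.

```python
def serpentine_path(rows, cols):
    # A Hamiltonian path through a rows x cols mesh plane: traverse each
    # row in alternating direction, so consecutive path nodes are always
    # mesh neighbours (one network link apart).
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cs)
    return path

def broadcast(rows, cols, message):
    # The logical root (first node on the path) injects the message; every
    # node then relays it to its successor along the Hamiltonian path.
    path = serpentine_path(rows, cols)
    received = {path[0]: message}
    for sender, receiver in zip(path, path[1:]):
        received[receiver] = received[sender]  # one point-to-point hop
    return received
```

On a 4 × 5 mesh, `broadcast(4, 5, "payload")` delivers the message to all 20 nodes using 19 neighbour-to-neighbour hops.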

  12. A parallel algorithm for solving the multidimensional within-group discrete ordinates equations with spatial domain decomposition - 104

    International Nuclear Information System (INIS)

    Zerr, R.J.; Azmy, Y.Y.

    2010-01-01

    A spatial domain decomposition with a parallel block Jacobi solution algorithm has been developed based on the integral transport matrix formulation of the discrete ordinates approximation for solving the within-group transport equation. The new methodology abandons the typical source iteration scheme and solves directly for the fully converged scalar flux. Four matrix operators are constructed based upon the integral form of the discrete ordinates equations. A single differential mesh sweep is performed to construct these operators. The method is parallelized by decomposing the problem domain into several smaller sub-domains, each treated as an independent problem. The scalar flux of each sub-domain is solved exactly given incoming angular flux boundary conditions. Sub-domain boundary conditions are updated iteratively, and convergence is achieved when the scalar flux error in all cells meets a pre-specified convergence criterion. The method has been implemented in a computer code that was then employed for strong scaling studies of the algorithm's parallel performance via a fixed-size problem in tests ranging from one domain up to one cell per sub-domain. Results indicate that the best parallel performance compared to source iterations occurs for optically thick, highly scattering problems, the variety that is most difficult for the traditional SI scheme to solve. Moreover, the minimum execution time occurs when each sub-domain contains a total of four cells. (authors)
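
    The iterate-on-sub-domain-boundaries scheme can be illustrated with a toy 1D analogue (a hypothetical tridiagonal diffusion-like system, not the discrete ordinates equations): each sub-domain is solved exactly for given boundary values taken from its neighbours' previous iterate, and the boundary values are updated until the solution converges.

```python
def thomas(sub, main, sup, d):
    # Direct tridiagonal solve (sub-, main-, super-diagonal, right-hand
    # side), playing the role of the exact per-sub-domain solution.
    n = len(main)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / main[0], d[0] / main[0]
    for i in range(1, n):
        m = main[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (d[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def block_jacobi(n=8, blocks=2, tol=1e-12, max_iter=1000):
    # Solve tridiag(-1, 2, -1) u = 1: each block is solved exactly with
    # boundary values from the previous iterate of its neighbours, and the
    # iteration stops when the largest update falls below tol.
    size = n // blocks
    u = [0.0] * n
    for _ in range(max_iter):
        new = [0.0] * n
        for k in range(blocks):
            lo = k * size
            d = [1.0] * size
            if lo > 0:
                d[0] += u[lo - 1]        # incoming value from left neighbour
            if lo + size < n:
                d[-1] += u[lo + size]    # incoming value from right neighbour
            new[lo:lo + size] = thomas([0.0] + [-1.0] * (size - 1),
                                       [2.0] * size,
                                       [-1.0] * (size - 1) + [0.0], d)
        if max(abs(x - y) for x, y in zip(new, u)) < tol:
            return new
        u = new
    return u
```

For n = 8 the exact solution is u_i = i(9 − i)/2 (1-based indexing), which the boundary iteration reproduces; in the paper's setting the per-block solve is the exact within-group transport solution and the exchanged boundary data are angular fluxes.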

  13. A double blind, randomised, parallel group study on the efficacy and safety of treating acute lateral ankle sprain with oral hydrolytic enzymes

    NARCIS (Netherlands)

    Kerkhoffs, G. M. M. J.; Struijs, P. A. A.; de Wit, C.; Rahlfs, V. W.; Zwipp, H.; van Dijk, C. N.

    2004-01-01

    Objective: To compare the effectiveness and safety of the triple combination Phlogenzym ( rutoside, bromelain, and trypsin) with double combinations, the single substances, and placebo. Design: Multinational, multicentre, double blind, randomised, parallel group design with eight groups structured

  14. Intellectual Property Rights, Parallel Imports and Strategic Behavior

    OpenAIRE

    Maskus, Keith E.; Ganslandt, Mattias

    2007-01-01

    The existence of parallel imports (PI) raises a number of interesting policy and strategic questions, which are the subject of this survey article. For example, parallel trade is essentially arbitrage within policy-integrated markets of IPR-protected goods, which may have different prices across countries. Thus, we fully analyze two types of price differences that give rise to such arbitrage. First is simple retail-level trade in horizontal markets, because consumer prices may differ. Second i...

  15. Fundamental Parallel Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Nelson, Michael

    2008-01-01

    ... about the way cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present two sorting algorithms, a distribution sort and a mergesort. Our algorithms are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks. In addition, we study sorting lower bounds in a computational model, which we call the parallel external-memory (PEM) model, that formalizes the essential properties of our algorithms for private-cache CMPs.
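
    The prefix-sums building block mentioned above can be sketched as a three-phase block scan (a sequential Python simulation of the per-processor phases; the PEM cost model and the cache behaviour analyzed in the paper are not modelled here):

```python
def parallel_prefix_sums(values, p=4):
    # Three-phase block scan, the pattern behind parallel prefix sums:
    # 1) each of the p "processors" sums its private block,
    # 2) an exclusive scan over the p block sums gives per-block offsets,
    # 3) each processor scans its block locally, shifted by its offset.
    size = (len(values) + p - 1) // p
    blocks = [values[i:i + size] for i in range(0, len(values), size)]
    offsets, running = [], 0
    for blk in blocks:
        offsets.append(running)
        running += sum(blk)
    out = []
    for blk, off in zip(blocks, offsets):
        acc = off
        for v in blk:
            acc += v
            out.append(acc)
    return out
```

Phases 1 and 3 are embarrassingly parallel over the blocks; only the short scan of p block sums in phase 2 is sequential.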

  16. Parallel processing of neutron transport in fuel assembly calculation

    International Nuclear Information System (INIS)

    Song, Jae Seung

    1992-02-01

    Group constants, which are used for reactor analyses by the nodal method, are generated by fuel assembly calculations based on neutron transport theory, since one fuel assembly, or a quarter of one, corresponds to a unit mesh in current nodal calculations. The group constant calculation for a fuel assembly is performed through spectrum calculations, a two-dimensional fuel assembly calculation, and depletion calculations. The purpose of this study is to develop a parallel algorithm to be used in a parallel processor for the fuel assembly calculation and the depletion calculations of the group constant generation. A serial program, which solves the neutron integral transport equation using the transmission probability method and the linear depletion equation, was prepared and verified by a benchmark calculation. Small changes to the serial program were enough to parallelize the depletion calculation, which has inherently parallel characteristics. In the fuel assembly calculation, however, efficient parallelization is not simple because of the many coupling parameters in the calculation and the data communications among CPUs. In this study, the group distribution method is introduced for the parallel processing of the fuel assembly calculation to minimize the data communications. The parallel processing was performed on a Quadputer with 4 CPUs operating in the NURAD Lab. at KAIST. Efficiencies of 54.3% and 78.0% were obtained in the fuel assembly calculation and the depletion calculation, respectively, leading to an overall speedup of about 2.5. As a result, it is concluded that the computing time consumed for the group constant generation can easily be reduced by parallel processing on a parallel computer with small CPUs
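
    For reference, parallel efficiency E on p processors corresponds to a speedup S = E·p, so the reported figures imply per-phase speedups of about 2.17 and 3.12 on 4 CPUs. The overall factor of about 2.5 additionally depends on how total run time splits between the two phases, which the abstract does not give; the 70/30 split below is purely illustrative.

```python
def speedup(efficiency, p):
    # Parallel efficiency is E = S / p, so the implied speedup is S = E * p.
    return efficiency * p

def combined_speedup(fractions, speedups):
    # Amdahl-style combination for phases occupying the given fractions of
    # the serial run time (fractions must sum to 1).
    return 1.0 / sum(f / s for f, s in zip(fractions, speedups))

assembly = speedup(0.543, 4)    # 4 CPUs at 54.3 % efficiency -> 2.172
depletion = speedup(0.780, 4)   # 4 CPUs at 78.0 % efficiency -> 3.12
# Hypothetical 70/30 split of serial time between the two phases:
overall = combined_speedup([0.7, 0.3], [assembly, depletion])
```

Any split between the two phases yields an overall speedup between the slower and faster per-phase values, consistent with the reported 2.5.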

  17. Intensive versus conventional blood pressure monitoring in a general practice population. The Blood Pressure Reduction in Danish General Practice trial: a randomized controlled parallel group trial.

    Science.gov (United States)

    Klarskov, Pia; Bang, Lia E; Schultz-Larsen, Peter; Gregers Petersen, Hans; Benee Olsen, David; Berg, Ronan M G; Abrahamsen, Henrik; Wiinberg, Niels

    2018-01-17

    To compare the effect of a conventional to an intensive blood pressure monitoring regimen on blood pressure in hypertensive patients in the general practice setting. Randomized controlled parallel group trial with 12-month follow-up. One hundred and ten general practices in all regions of Denmark. One thousand forty-eight patients with essential hypertension. Conventional blood pressure monitoring ('usual group') continued usual ad hoc blood pressure monitoring by office blood pressure measurements, while intensive blood pressure monitoring ('intensive group') supplemented this with frequent home blood pressure monitoring and 24-hour ambulatory blood pressure monitoring. Mean day- and night-time systolic and diastolic 24-hour ambulatory blood pressure. Change in systolic and diastolic office blood pressure and change in cardiovascular risk profile. Of the patients, 515 (49%) were allocated to the usual group, and 533 (51%) to the intensive group. The reductions in day- and night-time 24-hour ambulatory blood pressure were similar (usual group: 4.6 ± 13.5/2.8 ± 8.2 mmHg; intensive group: 5.6 ± 13.0/3.5 ± 8.2 mmHg; P = 0.27/P = 0.20). Cardiovascular risk scores were reduced in both groups at follow-up, but more so in the intensive than in the usual group (P = 0.02). An intensive blood pressure monitoring strategy led to a similar blood pressure reduction to conventional monitoring. However, the intensive strategy appeared to improve patients' cardiovascular risk profile through other effects than a reduction of blood pressure. Clinical Trials NCT00244660. © The Author 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. Double-blind, parallel-group evaluation of etodolac and naproxen in patients with acute sports injuries.

    Science.gov (United States)

    D'Hooghe, M

    1992-01-01

    The efficacy and safety of etodolac and naproxen were compared in a double-blind, randomized, parallel-group outpatient study. Patients with acute sports injuries were assigned to receive either etodolac 300 mg TID (50 patients) or naproxen 500 mg BID (49 patients) for up to 7 days. Assessments were made at the pretreatment screening (baseline) and at days 2, 3, 4, and 7 of treatment. Assessments included patient and physician global evaluations, spontaneous and induced pain intensity, range of motion, tenderness, heat, degree of swelling, and degree of erythema. Safety assessments, including laboratory profiles, were made at pretreatment and at final evaluation; patients' complaints were elicited at all visits. Both treatment groups showed significant (P ≤ 0.05) improvement from baseline for all efficacy parameters by day 2 and thereafter at all time points. Improvement was similar for the two groups. No patients in either group withdrew from the study because of drug-related adverse reactions. The results of this study indicate that etodolac (900 mg/day) is effective and well tolerated as an analgesic and anti-inflammatory in acute sports injuries and is comparable to naproxen (1000 mg/day).

  19. The (Biological or Cultural) Essence of Essentialism: Implications for Policy Support among Dominant and Subordinated Groups.

    Science.gov (United States)

    Soylu Yalcinkaya, Nur; Estrada-Villalta, Sara; Adams, Glenn

    2017-01-01

    Most research links (racial) essentialism to negative intergroup outcomes. We propose that this conclusion reflects both a narrow conceptual focus on biological/genetic essence and a narrow research focus from the perspective of racially dominant groups. We distinguished between beliefs in biological and cultural essences, and we investigated the implications of this distinction for support of social justice policies (e.g., affirmative action) among people with dominant (White) and subordinated (e.g., Black, Latino) racial identities in the United States. Whereas, endorsement of biological essentialism may have similarly negative implications for social justice policies across racial categories, we investigated the hypothesis that endorsement of cultural essentialism would have different implications across racial categories. In Studies 1a and 1b, we assessed the properties of a cultural essentialism measure we developed using two samples with different racial/ethnic compositions. In Study 2, we collected data from 170 participants using an online questionnaire to test the implications of essentialist beliefs for policy support. Consistent with previous research, we found that belief in biological essentialism was negatively related to policy support for participants from both dominant and subordinated categories. In contrast, the relationship between cultural essentialism and policy support varied across identity categories in the hypothesized way: negative for participants from the dominant category but positive for participants from subordinated categories. Results suggest that cultural essentialism may provide a way of identification that subordinated communities use to mobilize support for social justice.

  20. The (Biological or Cultural) Essence of Essentialism: Implications for Policy Support among Dominant and Subordinated Groups

    Directory of Open Access Journals (Sweden)

    Nur Soylu Yalcinkaya

    2017-05-01

    Full Text Available Most research links (racial) essentialism to negative intergroup outcomes. We propose that this conclusion reflects both a narrow conceptual focus on biological/genetic essence and a narrow research focus from the perspective of racially dominant groups. We distinguished between beliefs in biological and cultural essences, and we investigated the implications of this distinction for support of social justice policies (e.g., affirmative action) among people with dominant (White) and subordinated (e.g., Black, Latino) racial identities in the United States. Whereas, endorsement of biological essentialism may have similarly negative implications for social justice policies across racial categories, we investigated the hypothesis that endorsement of cultural essentialism would have different implications across racial categories. In Studies 1a and 1b, we assessed the properties of a cultural essentialism measure we developed using two samples with different racial/ethnic compositions. In Study 2, we collected data from 170 participants using an online questionnaire to test the implications of essentialist beliefs for policy support. Consistent with previous research, we found that belief in biological essentialism was negatively related to policy support for participants from both dominant and subordinated categories. In contrast, the relationship between cultural essentialism and policy support varied across identity categories in the hypothesized way: negative for participants from the dominant category but positive for participants from subordinated categories. Results suggest that cultural essentialism may provide a way of identification that subordinated communities use to mobilize support for social justice.

  1. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  2. The specificity of learned parallelism in dual-memory retrieval.

    Science.gov (United States)

    Strobach, Tilo; Schubert, Torsten; Pashler, Harold; Rickard, Timothy

    2014-05-01

    Retrieval of two responses from one visually presented cue occurs sequentially at the outset of dual-retrieval practice. Exclusively for subjects who adopt a mode of grouping (i.e., synchronizing) their response execution, however, reaction times after dual-retrieval practice indicate a shift to learned retrieval parallelism (e.g., Nino & Rickard, in Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 373-388, 2003). In the present study, we investigated how this learned parallelism is achieved and why it appears to occur only for subjects who group their responses. Two main accounts were considered: a task-level versus a cue-level account. The task-level account assumes that learned retrieval parallelism occurs at the level of the task as a whole and is not limited to practiced cues. Grouping response execution may thus promote a general shift to parallel retrieval following practice. The cue-level account states that learned retrieval parallelism is specific to practiced cues. This type of parallelism may result from cue-specific response chunking that occurs uniquely as a consequence of grouped response execution. The results of two experiments favored the second account and were best interpreted in terms of a structural bottleneck model.

  3. Design, analysis and control of cable-suspended parallel robots and its applications

    CERN Document Server

    Zi, Bin

    2017-01-01

    This book provides an essential overview of the authors’ work in the field of cable-suspended parallel robots, focusing on innovative design, mechanics, control, development and applications. It presents and analyzes several typical mechanical architectures of cable-suspended parallel robots in practical applications, including the feed cable-suspended structure for super antennae, hybrid-driven-based cable-suspended parallel robots, and cooperative cable parallel manipulators for multiple mobile cranes. It also addresses the fundamental mechanics of cable-suspended parallel robots on the basis of their typical applications, including the kinematics, dynamics and trajectory tracking control of the feed cable-suspended structure for super antennae. In addition it proposes a novel hybrid-driven-based cable-suspended parallel robot that uses integrated mechanism design methods to improve the performance of traditional cable-suspended parallel robots. A comparative study on error and performance indices of hybr...

  4. Xyce parallel electronic simulator design.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed-memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus, and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, it is to be expected on long-term projects that there will be a certain amount of staff turnover as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document in one place a number of the software quality practices followed by the Xyce team. Also, it is hoped that this document will be a good source of information for new developers.

  5. Clinical evaluation of the essential oil of "Satureja Hortensis" for the treatment of denture stomatitis

    Directory of Open Access Journals (Sweden)

    Ali Mohammad Sabzghabaee

    2012-01-01

    Full Text Available Background: The prevalence of denture stomatitis has been shown to vary from 15 to 65% in complete denture wearers. Satureja hortensis L. has been considered to have antinociceptive, anti-inflammatory, antifungal and antimicrobial activities in vitro and exhibits a strong inhibitory effect on the growth of periodontal bacteria. The aim of this study was to evaluate the efficacy of a 1% gel formulation of S. hortensis essential oil for the treatment of denture stomatitis. Materials and Methods: A randomized, controlled clinical trial was conducted on 80 patients (mean age 62.91±7.34) in two parallel groups treated either with S. hortensis essential oil 1% gel or placebo, applied two times daily for two weeks. Denture stomatitis was diagnosed by clinical examination and paraclinical confirmation by sampling the palatal mucosa for Candida albicans. Data were analyzed using Chi-squared or Student's t-tests. Results: The erythematous lesions of the palatal area were significantly reduced (P<0.0001) in the treatment group who applied the 1% topical gel of S. hortensis essential oil, and Candida colony counts were reduced significantly (P=0.001). Conclusion: Topical application of the essential oil of S. hortensis could be considered an effective agent for the treatment of denture stomatitis.

  6. Strongly Essential Coalitions and the Nucleolus of Peer Group Games

    NARCIS (Netherlands)

    Brânzei, R.; Solymosi, T.; Tijs, S.H.

    2003-01-01

    Most of the known efficient algorithms designed to compute the nucleolus for special classes of balanced games are based on two facts: (i) in any balanced game, the coalitions which actually determine the nucleolus are essential; and (ii) all essential coalitions in any of the games in the class

  7. Essentials of cloud computing

    CERN Document Server

    Chandrasekaran, K

    2014-01-01

    Foreword; Preface; Computing Paradigms (Learning Objectives; Preamble; High-Performance Computing; Parallel Computing; Distributed Computing; Cluster Computing; Grid Computing; Cloud Computing; Biocomputing; Mobile Computing; Quantum Computing; Optical Computing; Nanocomputing; Network Computing; Summary; Review Points; Review Questions; Further Reading); Cloud Computing Fundamentals (Learning Objectives; Preamble; Motivation for Cloud Computing; The Need for Cloud Computing; Defining Cloud Computing; NIST Definition of Cloud Computing; Cloud Computing Is a Service; Cloud Computing Is a Platform; 5-4-3 Principles of Cloud Computing; Five Essential Charact

  8. Parallel operation of voltage-source converters: issues and applications

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, F.C.B.; Silva, D.S. [Federal University of Juiz de Fora (UFJF), MG (Brazil)], Emails: felipe.brum@engenharia.ufjf.br, salomaoime@yahoo.com.br; Ribeiro, P.F. [Calvin College, Grand Rapids, MI (United States); Federal University of Juiz de Fora (UFJF), MG (Brazil)], E-mail: pfribeiro@ieee.org

    2009-07-01

    Technological advancements in power electronics have prompted the development of advanced AC/DC conversion systems with high efficiency and flexible performance. Among these devices, the Voltage-Source Converter (VSC) has become an essential building block. This paper considers the parallel operation of VSCs under different system conditions and how they can assist the operation of highly complex power networks. A multi-terminal VSC-based High Voltage Direct Current (M-VSC-HVDC) system is chosen to be modeled, simulated and then analyzed as an example of VSCs operating in parallel. (author)

  9. The plaque- and gingivitis-inhibiting capacity of a commercially available essential oil product. A parallel, split-mouth, single blind, randomized, placebo-controlled clinical study.

    Science.gov (United States)

    Preus, Hans Ragnar; Koldsland, Odd Carsten; Aass, Anne Merete; Sandvik, Leiv; Hansen, Bjørn Frode

    2013-11-01

    Studies have reported commercially available essential oils with convincing plaque- and gingivitis-preventing properties. However, no tests have compared these essential oils, i.e. Listerine(®), against their true vehicle controls. To compare the plaque- and gingivitis-inhibiting effect of a commercially available essential oil (Listerine(®) Total Care) with a negative control (22% hydro-alcohol solution) and a positive control (0.2% chlorhexidine (CHX)) in an experimental gingivitis model. In three groups of 15 healthy volunteers, experimental gingivitis was induced and monitored over 21 days while being treated with Listerine(®) Total Care (test), a 22% hydro-alcohol solution (negative control) or a 0.2% chlorhexidine solution (positive control), respectively. The upper right quadrant of each individual received mouthwash only, whereas the upper left quadrant was subject to both rinses and mechanical oral hygiene. Plaque, gingivitis and side effects were assessed at days 7, 14 and 21. After 21 days, the chlorhexidine group showed significantly lower average plaque and gingivitis scores than the Listerine(®) and alcohol groups, whereas there was little difference between the latter two. Listerine(®) Total Care had no statistically significant effect on plaque formation as compared to its vehicle control.

  10. The efficacy of Femal in women with premenstrual syndrome: a randomised, double-blind, parallel-group, placebo-controlled, multicentre study

    DEFF Research Database (Denmark)

    Gerhardsen, G.; Hansen, A.V.; Killi, M.

    2008-01-01

    Introduction: A double-blind, placebo-controlled, randomised, parallel-group, multicentre study was conducted to evaluate the effect of a pollen-based herbal medicinal product, Femal (R) (Sea-Band Ltd, Leicestershire, UK), on premenstrual sleep disturbances (PSD) in women with premenstrual syndrome... as the main symptom cluster makes this herbal medicinal product a promising addition to the therapeutic arsenal for women with PMS. Publication date: 2008/6...

  11. Vectorization, parallelization and porting of nuclear codes (vectorization and parallelization). Progress report fiscal 1998

    International Nuclear Information System (INIS)

    Ishizuki, Shigeru; Kawai, Wataru; Nemoto, Toshiyuki; Ogasawara, Shinobu; Kume, Etsuo; Adachi, Masaaki; Kawasaki, Nobuo; Yatake, Yo-ichi

    2000-03-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system and the Paragon system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. These results are reported in three parts: the vectorization and parallelization on vector processors, the parallelization on scalar processors, and the porting. This report describes the vectorization and parallelization on vector processors. In that part, the vectorization of the General Tokamak Circuit Simulation Program code GTCSP, and the vectorization and parallelization of the Molecular Dynamics NTV (n-particle, Temperature and Velocity) Simulation code MSP2, the Eddy Current Analysis code EDDYCAL, the Thermal Analysis Code for the Test of the Passive Cooling System by HENDEL T2, THANPACST2, and the MHD Equilibrium code SELENEJ on the VPP500 are described. In the parallelization on scalar processors part, the parallelization of the Monte Carlo N-Particle Transport code MCNP4B2, the Plasma Hydrodynamics code using the Cubic Interpolated Propagation Method, PHCIP, and the Vectorized Monte Carlo codes (continuous energy model / multi-group model) MVP/GMVP on the Paragon is described. In the porting part, the porting of the Monte Carlo N-Particle Transport code MCNP4B2 and the Reactor Safety Analysis code RELAP5 to the AP3000 is described. (author)

  12. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications.

    Directory of Open Access Journals (Sweden)

    Md Selim Hossain

    Full Text Available In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for the group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary fields recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports both Koblitz and random curves for the key sizes 233 and 163 bits. For the group operations, the finite-field arithmetic operations, e.g. multiplication, are designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in 65-nm ASIC technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, taking around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance ([Formula: see text]) and Area × Time × Energy (ATE) product of the proposed design are far better than those of the most significant studies found in the literature.
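
    As a sketch of the group operations involved, here is double-and-add point multiplication in affine coordinates over a toy prime field. This is only an illustration of the algorithmic structure: the paper's hardware works over NIST binary fields in Jacobian projective coordinates with a combined doubling/addition (PDPA) unit, none of which this pure-Python toy reproduces. The curve parameters below are made up for the example and are not a NIST curve.

```python
# Toy curve y^2 = x^3 + A*x + B over GF(P_MOD); the point at infinity is None.
P_MOD, A, B = 17, 0, 7

def ec_add(P, Q):
    # Affine group law: point addition, with doubling as the special case.
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                              # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD   # doubling slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD          # addition slope
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def ec_mult(k, P):
    # Left-to-right double-and-add: one doubling per scalar bit,
    # one addition per set bit.
    R = None
    for bit in bin(k)[2:]:
        R = ec_add(R, R)
        if bit == "1":
            R = ec_add(R, P)
    return R
```

On this toy curve the point G = (1, 5) has order 9, so `ec_mult(9, G)` returns the point at infinity (`None`); the PDPA idea in the paper is to fuse the doubling and addition steps of each loop iteration into one hardware stage.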

  13. Concatenating algorithms for parallel numerical simulations coupling radiation hydrodynamics with neutron transport

    International Nuclear Information System (INIS)

    Mo Zeyao

    2004-11-01

    Multiphysics parallel numerical simulations are usually essential for research on complex physical phenomena in which several physics are tightly coupled, and how those coupled physics are concatenated is very important for fully scalable parallel simulation. Meanwhile, three objectives should be balanced: the first is efficient data transfer among the simulations; the second and third are efficient parallel execution and simultaneous development of the simulation codes. Two concatenating algorithms for multiphysics parallel numerical simulations coupling radiation hydrodynamics with neutron transport on unstructured grids are presented. The first algorithm, Fully Loosely Concatenation (FLC), focuses on independent code development and independent execution with optimal per-code performance. The second algorithm, Two-Level Tightly Concatenation (TLTC), focuses on the optimal tradeoff among the three objectives above. Theoretical analyses of communication complexity and parallel numerical experiments on hundreds of processors on two parallel machines have shown that both algorithms are efficient and can be generalized to other multiphysics parallel numerical simulations. In particular, algorithm TLTC is linearly scalable and has achieved optimal parallel performance. (authors)

  14. Hemostatic efficacy of TachoSil in liver resection compared with argon beam coagulator treatment: An open, randomized, prospective, multicenter, parallel-group trial

    DEFF Research Database (Denmark)

    Fischer, Lars; Seiler, Christoph M.; Broelsch, Christoph E.

    2011-01-01

    surgical trial with 2 parallel groups. Patients were eligible for intra-operative randomization after elective resection of ≥1 liver segment and primary hemostasis. The primary end point was the time to hemostasis after starting the randomized intervention to obtain secondary hemostasis. Secondary end

  15. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with Graphical Processing Units, have broadly empowered parallelism. Several compilers have been updated to address the resulting synchronization and threading challenges. Appropriate classification of programs and algorithms would greatly help software engineers identify opportunities for effective parallelization. In the present work we investigate current species-based classifications of algorithms; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms is chosen whose structures exhibit different issues while performing a given task. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionality to an existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant information that is not captured by the original species of algorithms. We implemented these new capabilities in the tool, enabling automatic characterization of program code.

  16. Mathematical Abstraction: Constructing Concept of Parallel Coordinates

    Science.gov (United States)

    Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.

    2017-09-01

    Mathematical abstraction is an important process in teaching and learning mathematics, so pre-service mathematics teachers need to understand and experience it. One theoretical-methodological framework for studying this process is Abstraction in Context (AiC). In this framework, the abstraction process comprises the observable epistemic actions Recognition, Building-With, and Construction, together with Consolidation, known as the RBC + C model. This study investigates and analyzes how pre-service mathematics teachers constructed and consolidated the concept of Parallel Coordinates in a group discussion. It uses the AiC framework to analyze the mathematical abstraction of a group of four pre-service teachers learning Parallel Coordinates concepts. The data were collected through video recording, students' worksheets, a test, and field notes. The results show that the students' prior knowledge of the Cartesian coordinate system played a significant role in constructing the Parallel Coordinates concept as new knowledge. The consolidation process was influenced by the social interaction between group members. The abstraction process in this group was dominated by empirical abstraction, which emphasizes identifying characteristics of manipulated or imagined objects during the processes of recognizing and building-with.
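
For readers unfamiliar with the representation the students were constructing: a point (x1, ..., xn) in parallel coordinates is drawn as a polyline that crosses the i-th vertical axis at the (normalized) value of xi. A minimal sketch of that mapping, with hypothetical data ranges:

```python
# Map a data point to its parallel-coordinates polyline vertices.
# Axis i is drawn vertically at horizontal position i; the polyline
# meets it at the i-th coordinate normalized to [0, 1]. The ranges
# (mins, maxs) are hypothetical per-axis data bounds.
def to_polyline(point, mins, maxs):
    return [(i, (v - lo) / (hi - lo))
            for i, (v, lo, hi) in enumerate(zip(point, mins, maxs))]
```

Plotting each data row with a function like this, and connecting consecutive vertices, produces the familiar parallel-coordinates display; the Cartesian-to-parallel mapping is exactly the prior knowledge the study found pivotal.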

  17. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    Science.gov (United States)

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

    A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel-group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion lacks sufficient consideration of the true virtues of the delayed-start design and of its implications in terms of required sample size, overall information, and interpretation of the estimate in the context of small populations. Our aim was to evaluate whether the delayed-start design offers real advantages, particularly in terms of overall efficacy and sample-size requirements, as a proposed alternative to the standard parallel-group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample-size requirements. Then, based on three scenarios for the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but only for drugs whose effects develop over time. In addition, the sample size will always increase, because the reduced time on placebo results in a decreased estimable treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel-group RCT in the field of rare disease, without discussing the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample-size requirements than expected under a standard parallel-group design. This also affects benefit-risk assessment.
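
The sample-size penalty the authors describe follows directly from the standard two-arm formula, in which the required n per arm scales with 1/δ². A hedged illustration under a normal approximation (the α, power, and effect sizes below are illustrative, not taken from the paper's real-life example):

```python
from math import ceil, erf, sqrt

# Per-arm sample size for a two-arm parallel-group trial comparing
# means, normal approximation: n = 2 (z_{1-a/2} + z_{power})^2 s^2 / d^2.
# Halving the estimable effect d roughly quadruples n, which is the
# paper's core argument against a shortened placebo period.

def z(p):
    """Standard normal quantile via bisection on the CDF."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + erf(mid / sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def per_arm_n(delta, sigma=1.0, alpha=0.05, power=0.80):
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * sigma ** 2 / delta ** 2)
```

For a standardized effect of 0.5 this gives the textbook 63 patients per arm; if the delayed-start schedule halves the estimable effect to 0.25, the requirement jumps to 252 per arm.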

  18. A homotopy method for solving Riccati equations on a shared memory parallel computer

    International Nuclear Information System (INIS)

    Zigic, D.; Watson, L.T.; Collins, E.G. Jr.; Davis, L.D.

    1993-01-01

    Although there are numerous algorithms for solving Riccati equations, there still remains a need for algorithms which can operate efficiently on large problems and on parallel machines. This paper gives a new homotopy-based algorithm for solving Riccati equations on a shared memory parallel computer. The central part of the algorithm is the computation of the kernel of the Jacobian matrix, which is essential for the corrector iterations along the homotopy zero curve. Using a Schur decomposition the tensor product structure of various matrices can be efficiently exploited. The algorithm allows for efficient parallelization on shared memory machines

  19. A Parallel Algorithm for Connected Component Labelling of Gray-scale Images on Homogeneous Multicore Architectures

    International Nuclear Information System (INIS)

    Niknam, Mehdi; Thulasiraman, Parimala; Camorlinga, Sergio

    2010-01-01

    Connected component labelling is an essential step in image processing. We provide a parallel version of Suzuki's sequential connected component algorithm in order to speed up the labelling process, and we modify the algorithm to enable labelling of gray-scale images. Because of the data dependencies in the algorithm, we used a pipeline-like method to exploit parallelism. The parallel algorithm achieved a speedup of 2.5 on an image of 256 x 256 pixels using 4 processing threads.
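
The paper's parallel pipeline is built on top of a sequential labelling pass. As a baseline illustration only (this is a generic two-pass union-find labeller for binary images, not Suzuki's algorithm or the authors' gray-scale extension):

```python
# Serial two-pass connected component labelling, 4-connectivity,
# binary input. First pass assigns provisional labels and records
# equivalences in a union-find structure; second pass resolves them.
def label(image):
    rows, cols = len(image), len(image[0])
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    labels = [[0] * cols for _ in range(rows)]
    next_label = 1
    for r in range(rows):
        for c in range(cols):
            if not image[r][c]:
                continue
            up = labels[r - 1][c] if r and image[r - 1][c] else 0
            left = labels[r][c - 1] if c and image[r][c - 1] else 0
            if up and left:
                labels[r][c] = min(up, left)
                union(up, left)             # two provisional labels meet
            elif up or left:
                labels[r][c] = up or left
            else:
                labels[r][c] = next_label   # new provisional component
                parent[next_label] = next_label
                next_label += 1
    for r in range(rows):                   # second pass: final labels
        for c in range(cols):
            if labels[r][c]:
                labels[r][c] = find(labels[r][c])
    return labels
```

The row-wise data dependency visible here (each pixel looks up and left) is exactly what forces the pipeline-style parallelization described in the abstract.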

  20. Modern algebra essentials

    CERN Document Server

    Lutfiyya, Lutfi A

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Modern Algebra includes set theory, operations, relations, basic properties of the integers, group theory, and ring theory.

  1. Essential element contents in food groups from the second Brazilian total diet study

    International Nuclear Information System (INIS)

    Ambrogi, J.B.; Avegliano, R.P.; Maihara, V.A.

    2016-01-01

    Total diet study (TDS) is considered one of the most appropriate approaches to estimating dietary exposure to essential elements. This paper presents preliminary results for concentrations and average daily dietary intakes of Ca, Co, Cr, Fe, K, Na, Se and Zn from the 2nd Brazilian TDS. Nineteen groups from a Food List representing the daily intake of the population of the Brazilian southeastern region were analyzed by instrumental neutron activation analysis. The daily dietary intake values for Ca (641 mg), Fe (19.6 mg), K (2738 mg), Na (2466 mg), Se (56.4 μg), and Zn (15.3 mg) were higher than in the 1st Brazilian TDS. (author)

  2. Scaling Behavior of Dilute Polymer Solutions Confined between Parallel Plates

    NARCIS (Netherlands)

    Vliet, J.H. van; Luyten, M.C.; Brinke, G. ten

    1992-01-01

    The average size and shape of a polymer coil confined in a slit between two parallel plates depends on the distance L between the plates. On the basis of numerical results, four different regimes can be distinguished. For large values of L the coil is essentially unconfined. For intermediate values

  3. On the Performance of the Python Programming Language for Serial and Parallel Scientific Computations

    Directory of Open Access Journals (Sweden)

    Xing Cai

    2005-01-01

    Full Text Available This article addresses the performance of scientific applications that use the Python programming language. First, we investigate several techniques for improving the computational efficiency of serial Python codes. Then, we discuss the basic programming techniques in Python for parallelizing serial scientific applications. It is shown that an efficient implementation of the array-related operations is essential for achieving good parallel performance, as for the serial case. Once the array-related operations are efficiently implemented, probably using a mixed-language implementation, good serial and parallel performance become achievable. This is confirmed by a set of numerical experiments. Python is also shown to be well suited for writing high-level parallel programs.
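
The article's central observation, that array operations must execute in compiled code for Python to perform well, can be illustrated with NumPy (assumed available here; the article itself evaluates several mixed-language options):

```python
import numpy as np

# The serial-performance point in miniature: an interpreted Python
# loop versus the same reduction delegated to compiled array code.
def loop_norm_sq(values):
    total = 0.0
    for v in values:          # one interpreter dispatch per element
        total += v * v
    return total

def vectorized_norm_sq(values):
    return float(np.dot(values, values))   # single call into BLAS

x = np.linspace(0.0, 1.0, 1001)
```

Both functions compute the same squared norm, but on large arrays the vectorized form is typically one to two orders of magnitude faster, and the same array layout is what later makes distributed-memory parallelization efficient.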

  4. Linear feedback controls the essentials

    CERN Document Server

    Haidekker, Mark A

    2013-01-01

    The design of control systems is at the very core of engineering. Feedback controls are ubiquitous, ranging from simple room thermostats to airplane engine control. Helping to make sense of this wide-ranging field, this book provides a new approach by keeping a tight focus on the essentials with a limited, yet consistent set of examples. Analysis and design methods are explained in terms of theory and practice. The book covers classical, linear feedback controls, and linear approximations are used when needed. In parallel, the book covers time-discrete (digital) control systems and juxtapos

  5. Configuration affects parallel stent grafting results.

    Science.gov (United States)

    Tanious, Adam; Wooster, Mathew; Armstrong, Paul A; Zwiebel, Bruce; Grundy, Shane; Back, Martin R; Shames, Murray L

    2018-05-01

    A number of adjunctive "off-the-shelf" procedures have been described to treat complex aortic diseases. Our goal was to evaluate parallel stent graft configurations and to determine an optimal formula for these procedures. This is a retrospective review of all patients at a single medical center treated with parallel stent grafts from January 2010 to September 2015. Outcomes were evaluated on the basis of parallel graft orientation, type, and main body device. Primary end points included parallel stent graft compromise and overall endovascular aneurysm repair (EVAR) compromise. There were 78 patients treated with a total of 144 parallel stents for a variety of pathologic processes. There was a significant correlation between main body oversizing and snorkel compromise (P = .0195) and overall procedural complication (P = .0019) but not with endoleak rates. Patients were organized into the following oversizing groups for further analysis: 0% to 10%, 10% to 20%, and >20%. Those oversized into the 0% to 10% group had the highest rate of overall EVAR complication (73%; P = .0003). There were no significant correlations between any one particular configuration and overall procedural complication. There was also no significant correlation between total number of parallel stents employed and overall complication. Composite EVAR configuration had no significant correlation with individual snorkel compromise, endoleak, or overall EVAR or procedural complication. The configuration most prone to individual snorkel compromise and overall EVAR complication was a four-stent configuration with two stents in an antegrade position and two stents in a retrograde position (60% complication rate). The configuration most prone to endoleak was one or two stents in retrograde position (33% endoleak rate), followed by three stents in an all-antegrade position (25%). There was a significant correlation between individual stent configuration and stent compromise (P = .0385), with 31

  6. Massive hybrid parallelism for fully implicit multiphysics

    International Nuclear Information System (INIS)

    Gaston, D. R.; Permann, C. J.; Andrs, D.; Peterson, J. W.

    2013-01-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided. (authors)

  7. Massive hybrid parallelism for fully implicit multiphysics

    Energy Technology Data Exchange (ETDEWEB)

    Gaston, D. R.; Permann, C. J.; Andrs, D.; Peterson, J. W. [Idaho National Laboratory, 2525 N. Fremont Ave., Idaho Falls, ID 83415 (United States)

    2013-07-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided. (authors)

  8. MASSIVE HYBRID PARALLELISM FOR FULLY IMPLICIT MULTIPHYSICS

    Energy Technology Data Exchange (ETDEWEB)

    Cody J. Permann; David Andrs; John W. Peterson; Derek R. Gaston

    2013-05-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided.

  9. First massively parallel algorithm to be implemented in Apollo-II code

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1994-01-01

    The collision probability (CP) method in neutron transport, as applied to arbitrary 2D XY geometries, like the TDT module in APOLLO-II, is very time consuming. Consequently RZ or 3D extensions became prohibitive. Fortunately, this method is very suitable for parallelization. Massively parallel computer architectures, especially MIMD machines, bring a new breath to this method. In this paper we present a CM5 implementation of the CP method. Parallelization is applied to the energy groups, using the CMMD message passing library. In our case we use 32 processors for the standard 99-group APOLLIB-II library. The real advantage of this algorithm will appear in the calculation of the future fine multigroup library (about 8000 groups) of the SAPHYR project with a massively parallel computer (to the order of hundreds of processors). (author). 3 tabs., 4 figs., 4 refs
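
Since the collision probability matrix is computed independently for each energy group, the group loop is a natural parallel axis. A schematic sketch of that decomposition (a Python thread pool stands in purely for the CMMD message-passing distribution on the CM5, and compute_cp_matrix is a made-up placeholder for the per-group work):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the expensive per-group collision
# probability computation; returns (group index, small matrix).
def compute_cp_matrix(group):
    return group, [[1.0 / (1 + group + i + j) for j in range(3)]
                   for i in range(3)]

def parallel_cp(num_groups, workers=4):
    """Distribute energy groups across workers; gather all matrices."""
    with ThreadPoolExecutor(workers) as ex:
        return dict(ex.map(compute_cp_matrix, range(num_groups)))
```

With 99 groups on 32 processors the load per processor is uneven by construction (99 is not divisible by 32), which is why the fine 8000-group library is where this decomposition really pays off.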

  10. First massively parallel algorithm to be implemented in APOLLO-II code

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1994-01-01

    The collision probability method in neutron transport, as applied to arbitrary 2-dimensional geometries, like the two dimensional transport module in APOLLO-II, is very time consuming. Consequently a 3-dimensional extension became prohibitive. Fortunately, this method is very suitable for parallelization. Massively parallel computer architectures, especially MIMD machines, bring a new breath to this method. In this paper we present a CM5 implementation of the collision probability method. Parallelization is applied to the energy groups, using the CMMD message passing library. In our case we used 32 processors for the standard 99-group APOLLIB-II library. The real advantage of this algorithm will appear in the calculation of the future multigroup library (about 8000 groups) of the SAPHYR project with a massively parallel computer (to the order of hundreds of processors). (author). 4 refs., 4 figs., 3 tabs

  11. Monte Carlo photon transport on shared memory and distributed memory parallel processors

    International Nuclear Information System (INIS)

    Martin, W.R.; Wan, T.C.; Abdel-Rahman, T.S.; Mudge, T.N.; Miura, K.

    1987-01-01

    Parallelized Monte Carlo algorithms for analyzing photon transport in an inertially confined fusion (ICF) plasma are considered. Algorithms were developed for shared memory (vector and scalar) and distributed memory (scalar) parallel processors. The shared memory algorithm was implemented on the IBM 3090/400, and timing results are presented for dedicated runs with two, three, and four processors. Two alternative distributed memory algorithms (replication and dispatching) were implemented on a hypercube parallel processor (1 through 64 nodes). The replication algorithm yields essentially full efficiency for all cube sizes; with the 64-node configuration, the absolute performance is nearly the same as with the CRAY X-MP. The dispatching algorithm also yields efficiencies above 80% in a large simulation for the 64-processor configuration
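
The replication strategy described above can be sketched in a few lines: every worker runs the complete simulation with its own independent random stream, and the tallies are averaged at the end. The pi estimator below is a stand-in for the photon-transport tallies, and a thread pool stands in for the hypercube nodes:

```python
import random
from concurrent.futures import ThreadPoolExecutor

# "Replication" parallel Monte Carlo: each worker runs the full
# simulation with an independent seed; results are averaged. Because
# workers never communicate during the run, efficiency stays near
# 100% regardless of worker count, matching the abstract's finding.
def simulate(seed, samples=20000):
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(samples))
    return 4.0 * hits / samples          # this run's estimate of pi

def replicated_estimate(num_workers=4):
    with ThreadPoolExecutor(num_workers) as ex:
        estimates = list(ex.map(simulate, range(num_workers)))
    return sum(estimates) / len(estimates)
```

The contrasting "dispatching" algorithm would instead hand out batches of histories from a master, which adds communication but balances load when histories have very uneven cost.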

  12. Design and implementation of parallel video encoding strategies using divisible load analysis

    NARCIS (Netherlands)

    Li, Ping; Veeravalli, Bharadwaj; Kassim, A.A.

    2005-01-01

    The processing time needed for motion estimation usually accounts for a significant part of the overall processing time of the video encoder. To improve the video encoding speed, reducing the execution time for motion estimation process is essential. Parallel implementation of video encoding systems

  13. The essential tension between leadership and power: when leaders sacrifice group goals for the sake of self-interest.

    Science.gov (United States)

    Maner, Jon K; Mead, Nicole L

    2010-09-01

    Throughout human history, leaders have been responsible for helping groups attain important goals. Ideally, leaders use their power to steer groups toward desired outcomes. However, leaders can also use their power in the service of self-interest rather than effective leadership. Five experiments identified factors within both the person and the social context that determine whether leaders wield their power to promote group goals versus self-interest. In most cases, leaders behaved in a manner consistent with group goals. However, when their power was tenuous due to instability within the hierarchy, leaders high (but not low) in dominance motivation prioritized their own power over group goals: They withheld valuable information from the group, excluded a highly skilled group member, and prevented a proficient group member from having any influence over a group task. These self-interested actions were eliminated when the group was competing against a rival outgroup. Findings provide important insight into factors that influence the way leaders navigate the essential tension between leadership and power. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  14. Components of action potential repolarization in cerebellar parallel fibres.

    Science.gov (United States)

    Pekala, Dobromila; Baginskas, Armantas; Szkudlarek, Hanna J; Raastad, Morten

    2014-11-15

    Repolarization of the presynaptic action potential is essential for transmitter release, excitability and energy expenditure. Little is known about repolarization in thin, unmyelinated axons forming en passant synapses, which represent the most common type of axons in the mammalian brain's grey matter. We used rat cerebellar parallel fibres, an example of typical grey matter axons, to investigate the effects of K(+) channel blockers on repolarization. We show that repolarization is composed of a fast tetraethylammonium (TEA)-sensitive component, determining the width and amplitude of the spike, and a slow margatoxin (MgTX)-sensitive depolarized after-potential (DAP). These two components could be recorded at the granule cell soma as antidromic action potentials and from the axons with a newly developed miniaturized grease-gap method. A considerable proportion of fast repolarization remained in the presence of TEA, MgTX, or both. This residual was abolished by the addition of quinine. The importance of proper control of fast repolarization was demonstrated by somatic recordings of antidromic action potentials. In these experiments, the relatively broad K(+) channel blocker 4-aminopyridine reduced the fast repolarization, resulting in bursts of action potentials forming on top of the DAP. We conclude that repolarization of the action potential in parallel fibres is supported by at least three groups of K(+) channels. Differences in their temporal profiles allow relatively independent control of the spike and the DAP, whereas overlap of their temporal profiles provides robust control of axonal bursting properties.

  15. Parallel processing for artificial intelligence 2

    CERN Document Server

    Kumar, V; Suttner, CB

    1994-01-01

    With the increasing availability of parallel machines and rising interest in large-scale and real-world applications, research on parallel processing for Artificial Intelligence (AI) is gaining greater importance in the computer science environment. Many applications have been implemented and delivered, but the field is still considered to be in its infancy. This book assembles diverse aspects of research in the area, providing an overview of the current state of technology. It also aims to promote further growth across the discipline. Contributions have been grouped according to their

  16. Neck collar, "act-as-usual" or active mobilization for whiplash injury? A randomized parallel-group trial

    DEFF Research Database (Denmark)

    Kongsted, Alice; Montvilas, Erisela Qerama; Kasch, Helge

    2007-01-01

    Study Design. Randomized, parallel-group trial. Objective. To compare the effect of 3 early intervention strategies following whiplash injury. Summary of Background Data. Long-lasting pain and disability, known as chronic whiplash-associated disorder (WAD), may develop after a forced flexion-extension trauma to the cervical spine. It is unclear whether this, in some cases disabling, condition can be prevented by early intervention. Active interventions have been recommended but have not been compared with information only. Methods. Participants were recruited from emergency units and general practitioners within 10 days after a whiplash injury and randomized to: 1) immobilization of the cervical spine in a rigid collar followed by active mobilization, 2) advice to "act-as-usual," or 3) an active mobilization program (Mechanical Diagnosis and Therapy). Follow-up was carried out after 3, 6, and 12...

  17. Neck collar, "act-as-usual" or active mobilization for whiplash injury? A randomized parallel-group trial

    DEFF Research Database (Denmark)

    Kongsted, Alice; Montvilas, Erisela Qerama; Kasch, Helge

    2007-01-01

    Study Design. Randomized, parallel-group trial. Objective. To compare the effect of 3 early intervention strategies following whiplash injury. Summary of Background Data. Long-lasting pain and disability, known as chronic whiplash-associated disorder (WAD), may develop after a forced flexion-extension trauma to the cervical spine. It is unclear whether this, in some cases disabling, condition can be prevented by early intervention. Active interventions have been recommended but have not been compared with information only. Methods. Participants were recruited from emergency units and general practitioners within 10 days after a whiplash injury and randomized to: 1) immobilization of the cervical spine in a rigid collar followed by active mobilization, 2) advice to "act-as-usual," or 3) an active mobilization program (Mechanical Diagnosis and Therapy). Follow-up was carried out after 3, 6, and 12...

  18. Massively Parallel and Scalable Implicit Time Integration Algorithms for Structural Dynamics

    Science.gov (United States)

    Farhat, Charbel

    1997-01-01

    Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because of the following additional facts: (a) explicit schemes are easier to parallelize than implicit ones, and (b) explicit schemes induce short range interprocessor communications that are relatively inexpensive, while the factorization methods used in most implicit schemes induce long range interprocessor communications that often ruin the sought-after speed-up. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet be offset by the speed of the currently available parallel hardware. Therefore, it is essential to develop efficient alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating the low-frequency dynamics of aerospace structures.
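
The Courant-type restriction mentioned above is easy to see on a single degree of freedom: explicit central-difference integration of u'' = -ω²u is stable only for Δt < 2/ω, so the stiffest element in a structural mesh dictates the global step. A minimal illustration:

```python
# Explicit central-difference integration of an undamped oscillator
# u'' = -omega^2 u. The recurrence u_{n+1} = 2 u_n - u_{n-1}
# - (dt*omega)^2 u_n is stable only for dt*omega < 2, the single-DOF
# analogue of the Courant condition the abstract refers to.
def central_difference(omega, dt, steps, u0=1.0):
    h2 = (dt * omega) ** 2
    u_prev = u0
    u = u0 - 0.5 * h2 * u0        # startup step, zero initial velocity
    for _ in range(steps):
        u_next = 2.0 * u - u_prev - h2 * u
        u_prev, u = u, u_next
    return u
```

With dt = 0.1 (well under the limit 2/ω) the solution stays bounded near its unit amplitude; with dt = 2.5 it diverges within a few dozen steps, which is why unconditionally stable implicit schemes are attractive for low-frequency response despite their factorization cost.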

  19. A CS1 pedagogical approach to parallel thinking

    Science.gov (United States)

    Rague, Brian William

    Almost all collegiate programs in Computer Science offer an introductory course in programming primarily devoted to communicating the foundational principles of software design and development. The ACM designates this introduction to computer programming course for first-year students as CS1, during which methodologies for solving problems within a discrete computational context are presented. Logical thinking is highlighted, guided primarily by a sequential approach to algorithm development and typically made manifest using the latest commercially successful programming language. In response to the most recent developments in accessible multicore computers, instructors of these introductory classes may wish to include training on how to design workable parallel code. Novel issues arise when programming concurrent applications, which can make teaching these concepts to beginning programmers a seemingly formidable task. Student comprehension of design strategies related to parallel systems should be monitored to ensure an effective classroom experience. This research investigated the feasibility of integrating parallel computing concepts into the first-year CS classroom. To quantitatively assess student comprehension of parallel computing, an experimental educational study using a two-factor mixed group design was conducted to evaluate two instructional interventions in addition to a control group: (1) topic lecture only, and (2) topic lecture with laboratory work using a software visualization Parallel Analysis Tool (PAT) specifically designed for this project. A new evaluation instrument developed for this study, the Perceptions of Parallelism Survey (PoPS), was used to measure student learning regarding parallel systems. The results from this educational study show a statistically significant main effect among the repeated measures, implying that student comprehension levels of parallel concepts as measured by the PoPS improve immediately after the delivery of

  20. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high-performance multicomputers that have emerged over the last decade, which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine, and results in the programming of multicomputers being widely acknowledged as a difficult, time-consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  1. Solving Large Quadratic|Assignment Problems in Parallel

    DEFF Research Database (Denmark)

    Clausen, Jens; Perregaard, Michael

    1997-01-01

    ...processors, and have hence not been ideally suited for computations essentially involving non-vectorizable computations on integers. In this paper we investigate the combination of one of the best bound functions for a Branch-and-Bound algorithm (the Gilmore-Lawler bound) and various testing, variable binding and recalculation of bounds between branchings when used in a parallel Branch-and-Bound algorithm. The algorithm has been implemented on a 16-processor MEIKO Computing Surface with Intel i860 processors. Computational results from the solution of a number of large QAPs, including the classical Nugent 20...
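The interplay of bounding, pruning and branching that the abstract describes can be shown in miniature. The toy code below solves a hypothetical 3x3 quadratic assignment problem with a deliberately naive admissible bound (the cost of the already-assigned part, valid because flows and distances are non-negative). It is not the Gilmore-Lawler bound and nothing here runs in parallel, but the bound is recalculated at every branching, as in the paper:

```python
from itertools import permutations

F = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]  # hypothetical flow matrix
D = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]  # hypothetical distance matrix

def qap_cost(perm):
    """Full QAP objective: sum of flow * distance over all facility pairs."""
    n = len(perm)
    return sum(F[i][j] * D[perm[i]][perm[j]] for i in range(n) for j in range(n))

def branch_and_bound():
    n = len(F)
    best = [float("inf"), None]  # best cost and permutation found so far

    def partial_cost(assigned):
        k = len(assigned)
        return sum(F[i][j] * D[assigned[i]][assigned[j]]
                   for i in range(k) for j in range(k))

    def branch(assigned, free):
        bound = partial_cost(assigned)  # admissible: extending only adds >= 0
        if bound >= best[0]:
            return                      # prune the whole subtree
        if not free:
            best[0], best[1] = bound, tuple(assigned)
            return
        for loc in sorted(free):
            free.remove(loc)
            branch(assigned + [loc], free)
            free.add(loc)

    branch([], set(range(n)))
    return best[0], best[1]
```

In a parallel Branch-and-Bound, independent subtrees of `branch` would be handed to different processors, with the incumbent `best` shared between them.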

  2. More parallel please

    DEFF Research Database (Denmark)

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    Abstract [en] More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give...

  3. The Acoustic and Perceptual Effects of Series and Parallel Processing

    Directory of Open Access Journals (Sweden)

    Melinda C. Anderson

    2009-01-01

    Full Text Available Temporal envelope (TE) cues provide a great deal of speech information. This paper explores how spectral subtraction and dynamic-range compression gain modifications affect TE fluctuations for parallel and series configurations. In parallel processing, algorithms compute gains based on the same input signal, and the gains in dB are summed. In series processing, output from the first algorithm forms the input to the second algorithm. Acoustic measurements show that the parallel arrangement produces more gain fluctuations, introducing more changes to the TE than the series configurations. Intelligibility tests for normal-hearing (NH) and hearing-impaired (HI) listeners show (1) parallel processing gives significantly poorer speech understanding than an unprocessed (UNP) signal and the series arrangement and (2) series processing and UNP yield similar results. Speech quality tests show that UNP is preferred to both parallel and series arrangements, although spectral subtraction is the most preferred. No significant differences exist in sound quality between the series and parallel arrangements, or between the NH group and the HI group. These results indicate that gain modifications affect intelligibility and sound quality differently. Listeners appear to have a higher tolerance for gain modifications with regard to intelligibility, while judgments for sound quality appear to be more affected by smaller amounts of gain modification.
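The parallel-versus-series distinction above is easy to state numerically: in parallel, both algorithms see the same input level and their dB gains add; in series, the second algorithm sees the level already modified by the first. The two gain rules below are illustrative stand-ins, not the paper's algorithms:

```python
import math

def compressor_gain_db(level_db, threshold_db=-20.0, ratio=2.0):
    """Toy dynamic-range compressor gain (illustrative stand-in)."""
    if level_db <= threshold_db:
        return 0.0
    return (threshold_db - level_db) * (1.0 - 1.0 / ratio)

def subtraction_gain_db(level_db, noise_db=-30.0, floor_db=-15.0):
    """Toy spectral-subtraction-style gain (illustrative stand-in)."""
    snr_linear = 10.0 ** ((level_db - noise_db) / 10.0)
    if snr_linear <= 1.0:
        return floor_db
    return max(floor_db, 10.0 * math.log10(1.0 - 1.0 / snr_linear))

level_db = -10.0

# Parallel: both gains computed from the same input level, summed in dB.
parallel = compressor_gain_db(level_db) + subtraction_gain_db(level_db)

# Series: the second stage sees the first stage's output level.
g1 = compressor_gain_db(level_db)
series = g1 + subtraction_gain_db(level_db + g1)

# The two arrangements generally apply different total gain to the envelope.
```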

  4. Parallel PDE-Based Simulations Using the Common Component Architecture

    International Nuclear Information System (INIS)

    McInnes, Lois C.; Allan, Benjamin A.; Armstrong, Robert; Benson, Steven J.; Bernholdt, David E.; Dahlgren, Tamara L.; Diachin, Lori; Krishnan, Manoj Kumar; Kohl, James A.; Larson, J. Walter; Lefantzi, Sophia; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G.; Ray, Jaideep; Zhou, Shujia

    2006-01-01

    The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations. This chapter discusses recent work on leveraging these CCA efforts in parallel PDE-based simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. We explain how component technology helps to address the different challenges posed by each of these applications, and we highlight how component interfaces built on existing parallel toolkits facilitate the reuse of software for parallel mesh manipulation, discretization, linear algebra, integration, optimization, and parallel data redistribution. We also present performance data to demonstrate the suitability of this approach, and we discuss strategies for applying component technologies to both new and existing applications

  5. Directions in parallel processor architecture, and GPUs too

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Modern computing is power-limited in every domain of computing. Performance increments extracted from instruction-level parallelism (ILP) are no longer power-efficient; they haven't been for some time. Thread-level parallelism (TLP) is a more easily exploited form of parallelism, at the expense of programmer effort to expose it in the program. In this talk, I will introduce you to disparate topics in parallel processor architecture that will impact programming models (and you) in both the near and far future.

    About the speaker: Olivier is a senior GPU (SM) architect at NVIDIA and an active participant in the concurrency working group of the ISO C++ committee. He has also worked on very large diesel engines as a mechanical engineer, and taught at McGill University (Canada) as a faculty instructor.

  6. Parallelization of a spherical Sn transport theory algorithm

    International Nuclear Information System (INIS)

    Haghighat, A.

    1989-01-01

    The work described in this paper derives a parallel algorithm for an R-dependent spherical S N transport theory algorithm and studies its performance by testing different sample problems. The S N transport method is one of the most accurate techniques used to solve the linear Boltzmann equation. Several studies have been done on the vectorization of the S N algorithms; however, very few studies have been performed on the parallelization of this algorithm. Weinke and Hommoto have looked at the parallel processing of the different energy groups, and Azmy recently studied the parallel processing of the inner iterations of an X-Y S N nodal transport theory method. Both studies have reported very encouraging results, which have prompted us to look at the parallel processing of an R-dependent S N spherical geometry algorithm. This geometry was chosen because, in spite of its simplicity, it contains the complications of the curvilinear geometries (i.e., redistribution of neutrons over the discretized angular bins)

  7. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  8. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
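The load-balancing issue raised above can be made concrete with a toy image-space decomposition: when rendering cost is concentrated in part of the frame, a contiguous block assignment overloads one processor, while interleaving scanlines spreads the hot region. The cost numbers are invented for illustration:

```python
def block_partition(costs, n_procs):
    """Assign contiguous blocks of scanlines to processors; return per-proc load."""
    n = len(costs)
    size = (n + n_procs - 1) // n_procs
    return [sum(costs[p * size:(p + 1) * size]) for p in range(n_procs)]

def interleaved_partition(costs, n_procs):
    """Assign scanlines round-robin (scanline i -> processor i mod P)."""
    return [sum(costs[p::n_procs]) for p in range(n_procs)]

# A frame whose rendering cost grows toward the bottom of the image:
costs = list(range(16))

block_max = max(block_partition(costs, 4))              # one proc gets all hot rows
interleaved_max = max(interleaved_partition(costs, 4))  # hot rows are spread out
```

The slowest processor sets the frame time, so the smaller maximum load of the interleaved assignment translates directly into better speed-up, at the cost of losing some image-space coherence within each processor's work.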

  9. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  10. A New Strategy to Reduce Influenza Escape: Detecting Therapeutic Targets Constituted of Invariance Groups

    Directory of Open Access Journals (Sweden)

    Julie Lao

    2017-03-01

    Full Text Available The pathogenicity of the different flu species is a real public health problem worldwide. To combat this scourge, we established a method to detect drug targets, reducing the possibility of escape. Besides being able to attach a drug candidate, these targets should have the main characteristic of being part of an essential viral function. The invariance groups that are sets of residues bearing an essential function can be detected genetically. They consist of invariant and synthetic lethal residues (interdependent residues not varying, or varying only slightly, when together). We analyzed an alignment of more than 10,000 hemagglutinin sequences of influenza to detect six invariance groups, close in space, and on the protein surface. In parallel we identified five potential pockets on the surface of hemagglutinin. By combining these results, three potential binding sites were determined that are composed of invariance groups located respectively in the vestigial esterase domain, in the bottom of the stem and in the fusion area. The latter target is constituted of residues involved in the spring-loaded mechanism, an essential step in the fusion process. We propose a model describing how this potential target could block the reorganization of the hemagglutinin HA2 secondary structure and prevent viral entry into the host cell.

  11. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
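One of the simplest algorithms in this family, odd-even transposition sort, maps directly onto the linear-array model such texts cover: in each phase, every disjoint adjacent pair can be compare-exchanged simultaneously, one comparator per processor, and n phases suffice for n items. A sequential sketch of that parallel schedule:

```python
def odd_even_transposition_sort(items):
    """Odd-even transposition sort.

    Within one phase all the compare-exchanges touch disjoint pairs, so on a
    linear processor array they would all execute in parallel; the sequential
    loop below just replays that schedule one comparator at a time.
    """
    a = list(items)
    n = len(a)
    for phase in range(n):
        start = phase % 2            # even phases: pairs (0,1),(2,3),...
        for i in range(start, n - 1, 2):  # odd phases: pairs (1,2),(3,4),...
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

With one item per processor this gives an O(n)-phase parallel sort on a linear array, compared with O(n log n) work sequentially.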

  12. Study of palmar dermatoglyphics in patients with essential hypertension between the age group of 20-50 years

    OpenAIRE

    Rudragouda S Bulagouda, Purnima J Patil, Gavishiddppa A Hadimani, Balappa M Bannur, Patil BG, Nagaraj S. Mallashetty, Ishwar B Bagoji

    2013-01-01

    Background: In the present study, we tried to determine significant palmar dermatoglyphic parameters in cases of essential hypertension in the age group of 20-50 years, and whether these parameters can be used for screening purposes, i.e., early detection of hypertension. Method: Using the modified Purvis-Smith method, black duplicating ink (Kores, Bombay) was smeared on both hands one by one, and prints were taken by rolling the hands from the wrist creases to the fingertips on the roller covered ...

  13. An Approach Using Parallel Architecture to Storage DICOM Images in Distributed File System

    International Nuclear Information System (INIS)

    Soares, Tiago S; Prado, Thiago C; Dantas, M A R; De Macedo, Douglas D J; Bauer, Michael A

    2012-01-01

    Telemedicine is a very important area of the medical field that is expanding daily, driven by many researchers interested in improving medical applications. In Brazil, development began in 2005 in the State of Santa Catarina on a server called the CyclopsDCMServer, whose purpose is to embrace HDF for the manipulation of medical images (DICOM) using a distributed file system. Since then, much research has been initiated in order to seek better performance. Our approach for this server adds a parallel implementation of the I/O operations, since HDF version 5 has an essential feature for our work: it supports parallel I/O based upon the MPI paradigm. Early experiments using four parallel nodes show good performance when compared to the serial HDF implementation in the CyclopsDCMServer.

  14. Design paper: The CapOpus trial: a randomized, parallel-group, observer-blinded clinical trial of specialized addiction treatment versus treatment as usual for young patients with cannabis abuse and psychosis

    DEFF Research Database (Denmark)

    Hjorthøj, Carsten; Fohlmann, Allan; Larsen, Anne-Mette

    2008-01-01

    The major objective of the CapOpus trial is to evaluate the additional effect on cannabis abuse of a specialized addiction treatment program adding group treatment and motivational interviewing to treatment as usual. DESIGN: The trial is designed as a randomized, parallel-group, observer-blinded clinical

  15. Self-monitoring of urinary salt excretion as a method of salt-reduction education: a parallel, randomized trial involving two groups.

    Science.gov (United States)

    Yasutake, Kenichiro; Miyoshi, Emiko; Misumi, Yukiko; Kajiyama, Tomomi; Fukuda, Tamami; Ishii, Taeko; Moriguchi, Ririko; Murata, Yusuke; Ohe, Kenji; Enjoji, Munechika; Tsuchihashi, Takuya

    2018-02-20

    The present study aimed to evaluate salt-reduction education using a self-monitoring urinary salt-excretion device. Parallel, randomized trial involving two groups. The following parameters were checked at baseline and endline of the intervention: salt check sheet, eating behaviour questionnaire, 24 h home urine collection, blood pressure before and after urine collection. The intervention group self-monitored urine salt excretion using a self-measuring device for 4 weeks. In the control group, urine salt excretion was measured, but the individuals were not informed of the result. Seventy-eight individuals (control group, n 36; intervention group, n 42) collected two 24 h urine samples from a target population of 123 local resident volunteers. The samples were then analysed. There were no differences in clinical background or related parameters between the two groups. The 24 h urinary Na:K ratio showed a significant decrease in the intervention group (-1·1) compared with the control group (-0·0; P=0·033). Blood pressure did not change in either group. The results of the salt check sheet did not change in the control group but were significantly lower in the intervention group. The score of the eating behaviour questionnaire did not change in the control group, but the intervention group showed a significant increase in eating behaviour stage. Self-monitoring of urinary salt excretion helps to improve 24 h urinary Na:K, salt check sheet scores and stage of eating behaviour. Thus, usage of self-monitoring tools has an educational potential in salt intake reduction.

  16. Intelligent spatial ecosystem modeling using parallel processors

    International Nuclear Information System (INIS)

    Maxwell, T.; Costanza, R.

    1993-01-01

    Spatial modeling of ecosystems is essential if one's modeling goals include developing a relatively realistic description of past behavior and predictions of the impacts of alternative management policies on future ecosystem behavior. Development of these models has been limited in the past by the large amount of input data required and the difficulty of even large mainframe serial computers in dealing with large spatial arrays. These two limitations have begun to erode with the increasing availability of remote sensing data and GIS systems to manipulate it, and the development of parallel computer systems which allow computation of large, complex, spatial arrays. Although many forms of dynamic spatial modeling are highly amenable to parallel processing, the primary focus in this project is on process-based landscape models. These models simulate spatial structure by first compartmentalizing the landscape into some geometric design and then describing flows within compartments and spatial processes between compartments according to location-specific algorithms. The authors are currently building and running parallel spatial models at the regional scale for the Patuxent River region in Maryland, the Everglades in Florida, and Barataria Basin in Louisiana. The authors are also planning a project to construct a series of spatially explicit linked ecological and economic simulation models aimed at assessing the long-term potential impacts of global climate change
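The compartment structure described above is what makes these landscape models parallelizable: within one time step, every cell's update reads only the previous state of its neighbors, so strips of the landscape can be updated on separate processors with only boundary cells exchanged. A minimal one-dimensional sketch with invented diffusion dynamics (not the project's actual models):

```python
def step(grid, alpha=0.25):
    """One explicit step of a toy flow model on a 1D row of compartments.

    Each new cell value depends only on the *previous* state of the cell and
    its neighbors, so all cells in a step are independent and could be
    computed in parallel (reflecting boundaries conserve total material).
    """
    n = len(grid)
    new = []
    for i in range(n):
        left = grid[i - 1] if i > 0 else grid[i]
        right = grid[i + 1] if i < n - 1 else grid[i]
        new.append(grid[i] + alpha * (left - 2.0 * grid[i] + right))
    return new

state = [0.0] * 8
state[3] = 100.0          # a pulse of material in one compartment
for _ in range(10):
    state = step(state)
# The pulse spreads to neighboring compartments; total material is conserved.
```

A parallel version would give each processor a strip of `grid` and exchange only the strip-edge cells each step, which is why communication stays cheap relative to computation as the landscape grows.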

  17. Left Ventricular Diastolic Function in Essential Hypertensive Patients: Influence of Age and Left Ventricular Geometry

    Directory of Open Access Journals (Sweden)

    Rosa Eduardo Cantoni

    2002-01-01

    Full Text Available PURPOSE - To evaluate diastolic dysfunction (DD) in essential hypertension and the influence of age and cardiac geometry on this parameter. METHODS - Four hundred sixty essential hypertensive patients (HT) underwent Doppler echocardiography to obtain the E/A wave ratio (E/A), atrial deceleration time (ADT), and isovolumetric relaxation time (IRT). All patients were grouped according to cardiac geometric patterns (NG - normal geometry; CR - concentric remodeling; CH - concentric hypertrophy; EH - eccentric hypertrophy) and to age (60 years). One hundred six normotensive (NT) persons were also evaluated. RESULTS - A worsening of diastolic function in the HT compared with the NT, including HT with NG (E/A: NT - 1.38±0.03 vs HT - 1.27±0.02, p<0.01), was observed. A higher prevalence of DD occurred parallel to age and cardiac geometry, also in the prehypertrophic groups (CR). Multiple regression analysis identified age as the most important predictor of DD (r²=0.30, p<0.01). CONCLUSION - DD was prevalent in this hypertensive population, being highly affected by age and less by heart structural parameters. DD is observed in incipient stages of hypertensive heart disease, and thus its early detection may help in the risk stratification of hypertensive patients.

  18. Parallel Relational Universes – experiments in modularity

    DEFF Research Database (Denmark)

    Pagliarini, Luigi; Lund, Henrik Hautop

    2015-01-01

    We here describe Parallel Relational Universes, an artistic method used for the psychological analysis of group dynamics. The design of the artistic system, which mediates group dynamics, emerges from our studies of modular playware and remixing playware. Inspired by remixing modular playware, where users remix samples in the form of physical and functional modules, we created an artistic instantiation of such a concept with the Parallel Relational Universes, allowing arts alumni to remix artistic expressions. Here, we report the data that emerged from a first pre-test, run with gymnasium alumni. We then report both the artistic and the psychological findings. We discuss possible variations of such an instrument. Between an art piece and a psychological test, at a first cognitive analysis, it seems to be a promising research tool...

  19. Oral sumatriptan for migraine in children and adolescents: a randomized, multicenter, placebo-controlled, parallel group study.

    Science.gov (United States)

    Fujita, Mitsue; Sato, Katsuaki; Nishioka, Hiroshi; Sakai, Fumihiko

    2014-04-01

    The objective of this article is to evaluate the efficacy and tolerability of two doses of oral sumatriptan vs placebo in the acute treatment of migraine in children and adolescents. Currently, there is no approved prescription medication in Japan for the treatment of migraine in children and adolescents. This was a multicenter, outpatient, single-attack, double-blind, randomized, placebo-controlled, parallel-group study. Eligible patients were children and adolescents aged 10 to 17 years diagnosed with migraine with or without aura (ICHD-II criteria 1.1 or 1.2) from 17 centers. They were randomized to receive sumatriptan 25 mg, 50 mg or placebo (1:1:2). The primary efficacy endpoint was headache relief by two grades on a five-grade scale at two hours post-dose. A total of 178 patients from 17 centers in Japan were enrolled and randomized to an investigational product in double-blind fashion. Of these, 144 patients self-treated a single migraine attack, and all provided a post-dose efficacy assessment and completed the study. The percentage of patients in the full analysis set (FAS) population who reported pain relief at two hours post-treatment, the primary endpoint, was higher in the placebo group than in the pooled sumatriptan group (38.6% vs 31.1%, 95% CI: -23.02 to 8.04, P = 0.345). The percentage of patients in the FAS population who reported pain relief at four hours post-dose was higher in the pooled sumatriptan group (63.5%) than in the placebo group (51.4%) but failed to achieve statistical significance (P = 0.142). At four hours post-dose, percentages of patients who were pain free or had complete relief of photophobia or phonophobia were numerically higher in the sumatriptan pooled group compared to placebo. Both doses of oral sumatriptan were well tolerated. No adverse events (AEs) were serious or led to study withdrawal. The most common AEs were somnolence in 6% (two patients) in the sumatriptan 25 mg treatment group and chest

  20. Parallel transmission techniques in magnetic resonance imaging: experimental realization, applications and perspectives; Parallele Sendetechniken in der Magnetresonanztomographie: experimentelle Realisierung, Anwendungen und Perspektiven

    Energy Technology Data Exchange (ETDEWEB)

    Ullmann, P.

    2007-06-15

    The primary objective of this work was the first experimental realization of parallel RF transmission for accelerating spatially selective excitation in magnetic resonance imaging. Furthermore, basic aspects regarding the performance of this technique were investigated, potential risks regarding the specific absorption rate (SAR) were considered and feasibility studies under application-oriented conditions as first steps towards a practical utilisation of this technique were undertaken. At first, based on the RF electronics platform of the Bruker Avance MRI systems, the technical foundations were laid to perform simultaneous transmission of individual RF waveforms on different RF channels. Another essential requirement for the realization of Parallel Excitation (PEX) was the design and construction of suitable RF transmit arrays with elements driven by separate transmit channels. In order to image the PEX results two imaging methods were implemented based on a spin-echo and a gradient-echo sequence, in which a parallel spatially selective pulse was included as an excitation pulse. In the course of this work PEX experiments were successfully performed on three different MRI systems, a 4.7 T and a 9.4 T animal system and a 3 T human scanner, using 5 different RF coil setups in total. In the last part of this work investigations regarding possible applications of Parallel Excitation were performed. A first study comprised experiments of slice-selective B1 inhomogeneity correction by using 3D-selective Parallel Excitation. The investigations were performed in a phantom as well as in a rat fixed in paraformaldehyde solution. In conjunction with these experiments a novel method of calculating RF pulses for spatially selective excitation based on a so-called Direct Calibration approach was developed, which is particularly suitable for this type of experiments. In the context of these experiments it was demonstrated how to combine the advantages of parallel transmission

  1. Essential Medicines in a High Income Country: Essential to Whom?

    Science.gov (United States)

    Duong, Mai; Moles, Rebekah J; Chaar, Betty; Chen, Timothy F

    2015-01-01

    To explore the perspectives of a diverse group of stakeholders engaged in medicines decision making around what constitutes an "essential" medicine, and how the Essential Medicines List (EML) concept functions in a high income country context. In-depth qualitative semi-structured interviews were conducted with 32 Australian stakeholders, recognised as decision makers, leaders or advisors in the area of medicines reimbursement or supply chain management. Participants were recruited from government, pharmaceutical industry, pharmaceutical wholesale/distribution companies, medicines non-profit organisations, academic health disciplines, hospitals, and consumer groups. Perspectives on the definition and application of the EML concept in a high income country context were thematically analysed using a grounded theory approach. Stakeholders found it challenging to describe the EML concept in the Australian context because many perceived it was generally used in resource scarce settings. Stakeholders were unable to distinguish whether nationally reimbursed medicines were essential medicines in Australia. Despite frequent generic drug shortages and high prices paid by consumers, many struggled to describe how the EML concept applied to Australia. Instead, broad inclusion of consumer needs, such as rare and high cost medicines, and consumer involvement in the decision making process, has led to expansive lists of nationally subsidised medicines. Therefore, improved communication and coordination is needed around shared interests between stakeholders regarding how medicines are prioritised and guaranteed in the supply chain. This study showed that decision-making in Australia around reimbursement of medicines has strayed from the fundamental utilitarian concept of essential medicines. Many stakeholders involved in medicine reimbursement decisions and management of the supply chain did not consider the EML concept in their approach. The wide range of views of what stakeholders

  3. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
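The undersampling-aliasing relationship described above can be made concrete with a toy one-dimensional example: dropping every other k-space line folds the image onto itself at half the field of view. The following pure-Python sketch is illustrative only (real parallel imaging operates on multi-coil 2-D data):

```python
import cmath

def dft(x):
    """Naive DFT: k-space 'acquisition' of a 1-D image x."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT: reconstruction from k-space samples."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# A toy 1-D "image" and its fully sampled k-space
image = [1.0, 2.0, 0.0, 3.0, 0.5, 0.0, 4.0, 1.0]
kspace = dft(image)

# R = 2 undersampling: keep every other k-space line, reconstruct from half the data
aliased = idft(kspace[::2])

# The aliased half-FOV image is the sum of the two halves of the true image
half = len(image) // 2
folded = [image[n] + image[n + half] for n in range(half)]
```

Parallel imaging algorithms such as SENSE exploit coil sensitivity information to undo exactly this kind of folding.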

  4. Comparative effectiveness of Pilates and yoga group exercise interventions for chronic mechanical neck pain: quasi-randomised parallel controlled study.

    Science.gov (United States)

    Dunleavy, K; Kava, K; Goldberg, A; Malek, M H; Talley, S A; Tutag-Lehr, V; Hildreth, J

    2016-09-01

    To determine the effectiveness of Pilates and yoga group exercise interventions for individuals with chronic neck pain (CNP). Quasi-randomised parallel controlled study. Community, university and private practice settings in four locations. Fifty-six individuals with CNP scoring ≥3/10 on the numeric pain rating scale for >3 months (controls n=17, Pilates n=20, yoga n=19). Exercise participants completed 12 small-group sessions with modifications and progressions supervised by a physiotherapist. The primary outcome measure was the Neck Disability Index (NDI). Secondary outcomes were pain ratings, range of movement and postural measurements collected at baseline, 6 weeks and 12 weeks. Follow-up was performed 6 weeks after completion of the exercise classes (Week 18). NDI decreased significantly in the Pilates group {baseline: 11.1 [standard deviation (SD) 4.3] vs Week 12: 6.8 (SD 4.3); mean difference -4.3 (95% confidence interval -1.64 to -6.7); P…}. Pilates and yoga group exercise interventions with appropriate modifications and supervision were safe and equally effective for decreasing disability and pain compared with the control group for individuals with mild-to-moderate CNP. Physiotherapists may consider including these approaches in a plan of care. ClinicalTrials.gov NCT01999283. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  5. Spatially parallel processing of within-dimension conjunctions.

    Science.gov (United States)

    Linnell, K J; Humphreys, G W

    2001-01-01

    Within-dimension conjunction search for red-green targets amongst red-blue and blue-green nontargets is extremely inefficient (Wolfe et al, 1990 Journal of Experimental Psychology: Human Perception and Performance 16 879-892). We tested whether pairs of red-green conjunction targets can nevertheless be processed spatially in parallel. Participants made speeded detection responses whenever a red-green target was present. Across trials where a second identical target was present, the distribution of detection times was compatible with the assumption that targets were processed in parallel (Miller, 1982 Cognitive Psychology 14 247-279). We show that this was not an artifact of response competition or feature-based processing. We suggest that within-dimension conjunctions can be processed spatially in parallel. Visual search for such items may be inefficient owing to within-dimension grouping between items.
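The parallel-processing test attributed to Miller (1982) compares redundant-target response times against the race-model bound. A minimal sketch with invented reaction times (function names and data are illustrative, not from the study):

```python
def ecdf(sample, t):
    """Empirical P(RT <= t)."""
    return sum(rt <= t for rt in sample) / len(sample)

# Hypothetical reaction times (ms) from single- and redundant-target trials
rt_single_a = [420, 450, 470, 500, 530, 560]
rt_single_b = [430, 455, 480, 505, 540, 570]
rt_redundant = [380, 400, 415, 430, 450, 480]

def race_model_violated(t):
    """Miller's race-model inequality: under an independent race,
    P(RT <= t | redundant) <= P(RT <= t | A) + P(RT <= t | B).
    A violation suggests coactive (parallel) processing of both targets."""
    return ecdf(rt_redundant, t) > ecdf(rt_single_a, t) + ecdf(rt_single_b, t)

# Check the inequality at the fast tail of the distribution
violations = [t for t in range(380, 460, 10) if race_model_violated(t)]
```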

  6. QR-decomposition based SENSE reconstruction using parallel architecture.

    Science.gov (United States)

    Ullah, Irfan; Nisar, Habab; Raza, Haseeb; Qasim, Malik; Inam, Omair; Omer, Hammad

    2018-04-01

    Magnetic Resonance Imaging (MRI) is a powerful medical imaging technique that provides essential clinical information about the human body. One major limitation of MRI is its long scan time. Implementation of advanced MRI algorithms on a parallel architecture (to exploit inherent parallelism) has great potential to reduce the scan time. Sensitivity Encoding (SENSE) is a Parallel Magnetic Resonance Imaging (pMRI) algorithm that utilizes receiver coil sensitivities to reconstruct MR images from the acquired under-sampled k-space data. At the heart of SENSE lies inversion of a rectangular encoding matrix. This work presents a novel implementation of a GPU-based SENSE algorithm, which employs QR decomposition for the inversion of the rectangular encoding matrix. For a fair comparison, the performance of the proposed GPU-based SENSE reconstruction is evaluated against single- and multi-core CPU implementations using OpenMP. Several experiments against various acceleration factors (AFs) are performed using multichannel (8, 12 and 30) phantom and in-vivo human head and cardiac datasets. Experimental results show that the GPU significantly reduces the computation time of SENSE reconstruction as compared to multi-core CPU (approximately 12x speedup) and single-core CPU (approximately 53x speedup) without any degradation in the quality of the reconstructed images. Copyright © 2018 Elsevier Ltd. All rights reserved.
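At the heart of SENSE is a least-squares inversion of a rectangular encoding matrix, which the record says is performed by QR decomposition. A small NumPy sketch of this idea on a toy 4-coil, acceleration-factor-2 problem (dimensions and values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SENSE setting: 4 coils, acceleration factor R = 2, so each aliased
# pixel mixes 2 true pixels -> encoding matrix E is (coils x R) = 4 x 2.
coils, R = 4, 2
E = rng.standard_normal((coils, R)) + 1j * rng.standard_normal((coils, R))
true_pixels = np.array([1.5 + 0.5j, -0.7 + 2.0j])
measured = E @ true_pixels            # aliased coil measurements (noise-free)

# QR decomposition of the rectangular encoding matrix: E = Q @ R_
Q, R_ = np.linalg.qr(E)               # Q: 4x2 with orthonormal columns
# Least-squares unfolding: solve R_ x = Q^H y (upper-triangular system)
unfolded = np.linalg.solve(R_, Q.conj().T @ measured)
```

On a GPU, the same factorization and triangular solve would be batched over all aliased pixel groups; this sketch only shows the per-pixel linear algebra.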

  7. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
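A parallel E step of the kind described can be sketched by splitting the data across workers and computing posterior responsibilities concurrently; the sketch below uses a toy two-component Gaussian mixture rather than a latent variable IRT model, and all names are illustrative:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def e_step_chunk(chunk, means, var=1.0, weights=(0.5, 0.5)):
    """E step on one data chunk: posterior responsibility of each component."""
    out = []
    for x in chunk:
        dens = [w * math.exp(-(x - m) ** 2 / (2 * var))
                for w, m in zip(weights, means)]
        s = sum(dens)
        out.append([d / s for d in dens])
    return out

data = [0.1, -0.2, 0.3, 4.9, 5.2, 5.1, 0.0, 5.0]
means = [0.0, 5.0]

# Parallel E step: chunks are processed concurrently; the M step would then
# aggregate the sufficient statistics from all chunks to update the parameters.
chunks = [data[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda c: e_step_chunk(c, means), chunks))
resp = [r for chunk_result in results for r in chunk_result]
```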

  8. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.

    2017-03-13

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, RBM is a deep learning algorithm that is used in this project to improve its time performance through an efficient parallel implementation with the OpenACC tool, applying the best possible optimizations to RBM to harness the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. RBM is a useful neural network building block for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of RBM are available using different models such as OpenMP and CUDA, but this project is the first attempt to apply the OpenACC model to RBM.

  9. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.; Khan, Ayaz H.

    2017-01-01

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, RBM is a deep learning algorithm that is used in this project to improve its time performance through an efficient parallel implementation with the OpenACC tool, applying the best possible optimizations to RBM to harness the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. RBM is a useful neural network building block for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of RBM are available using different models such as OpenMP and CUDA, but this project is the first attempt to apply the OpenACC model to RBM.

  10. Pharmacokinetics of serelaxin in patients with hepatic impairment: a single-dose, open-label, parallel group study.

    Science.gov (United States)

    Kobalava, Zhanna; Villevalde, Svetlana; Kotovskaya, Yulia; Hinrichsen, Holger; Petersen-Sylla, Marc; Zaehringer, Andreas; Pang, Yinuo; Rajman, Iris; Canadi, Jasna; Dahlke, Marion; Lloyd, Peter; Halabi, Atef

    2015-06-01

    Serelaxin is a recombinant form of human relaxin-2 in development for treatment of acute heart failure. This study aimed to evaluate the pharmacokinetics (PK) of serelaxin in patients with hepatic impairment. Secondary objectives included evaluation of immunogenicity, safety and tolerability of serelaxin. This was an open-label, parallel group study (NCT01433458) comparing the PK of serelaxin following a single 24 h intravenous (i.v.) infusion (30 μg/kg/day) between patients with mild, moderate or severe hepatic impairment (Child-Pugh class A, B or C) and healthy matched controls. Blood sampling and standard safety assessments were conducted. Primary non-compartmental PK parameters [including area under the serum concentration-time curve, AUC(0-48 h) and AUC(0-∞), and serum concentration at 24 h post-dose (C24h)] were compared between each hepatic impairment group and healthy controls. A total of 49 subjects (including 25 patients with hepatic impairment) were enrolled, of whom 48 completed the study. In all groups, the serum concentration of serelaxin increased over the first few hours of infusion, reached steady state at 12-24 h and then declined following completion of infusion, with a mean terminal half-life of 7-8 h. All PK parameter estimates were comparable between each group of patients with hepatic impairment and healthy controls. No serious adverse events, discontinuations due to adverse events or deaths were reported. No serelaxin treatment-related antibodies developed during this study. The PK and safety profile of serelaxin were not affected by hepatic impairment. No dose adjustment is needed for the 48 h i.v. infusion of serelaxin in patients with hepatic impairment. © 2014 The British Pharmacological Society.
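The non-compartmental parameters mentioned above, AUC by the trapezoidal rule and terminal half-life from a log-linear fit of the post-infusion decline, can be sketched as follows (the concentration-time values are invented and are not the study's data):

```python
import math

# Hypothetical serum concentration-time points (ng/mL) during and after a
# 24 h infusion; illustrative values only.
times = [0, 4, 12, 24, 28, 34, 48]          # h
conc  = [0, 8, 10, 10, 7, 4, 1.1]           # ng/mL

# AUC(0-48 h) by the linear trapezoidal rule
auc = sum((conc[i] + conc[i + 1]) / 2 * (times[i + 1] - times[i])
          for i in range(len(times) - 1))

# Terminal half-life: slope of ln(concentration) over the post-infusion tail
tail_t = times[4:]
tail_lnc = [math.log(c) for c in conc[4:]]
n = len(tail_t)
slope = (n * sum(t * c for t, c in zip(tail_t, tail_lnc))
         - sum(tail_t) * sum(tail_lnc)) / (n * sum(t * t for t in tail_t)
                                           - sum(tail_t) ** 2)
half_life = math.log(2) / -slope            # t1/2 = ln 2 / elimination rate
```

With these made-up values the fitted half-life lands in the 7-8 h range the abstract reports.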

  11. A SPECT reconstruction method for extending parallel to non-parallel geometries

    International Nuclear Information System (INIS)

    Wen Junhai; Liang Zhengrong

    2010-01-01

    Due to its simplicity, parallel-beam geometry is usually assumed for the development of image reconstruction algorithms. The established reconstruction methodologies are then extended to fan-beam, cone-beam and other non-parallel geometries for practical application. This situation occurs for quantitative SPECT (single photon emission computed tomography) imaging in inverting the attenuated Radon transform. Novikov reported an explicit parallel-beam formula for the inversion of the attenuated Radon transform in 2000. Thereafter, a formula for fan-beam geometry was reported by Bukhgeim and Kazantsev (2002 Preprint N. 99, Sobolev Institute of Mathematics). At the same time, we presented a formula for varying focal-length fan-beam geometry. In some non-parallel geometries, however, the reconstruction formula is so implicit that an explicit reconstruction formula cannot be obtained. In this work, we propose a unified reconstruction framework for extending parallel-beam geometry to any non-parallel geometry using ray-driven techniques. Computer-simulation studies demonstrated the accuracy of the presented unified reconstruction framework for extending parallel-beam to non-parallel geometries in inverting the attenuated Radon transform.

  12. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  13. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated-data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
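The force decomposition mentioned above can be sketched in a few lines: the pair list is partitioned across workers, each worker accumulates a partial force array, and the arrays are reduced. A toy 1-D Lennard-Jones example (thread-based and illustrative; a production code would use MPI and neighbor lists):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def lj_force(xi, xj, eps=1.0, sigma=1.0):
    """1-D Lennard-Jones pair force on particle i due to particle j."""
    r = xi - xj
    sr6 = (sigma / abs(r)) ** 6
    return 24 * eps * (2 * sr6 ** 2 - sr6) / r

positions = [0.0, 1.1, 2.3, 3.2, 4.6, 5.5]
pairs = list(combinations(range(len(positions)), 2))

def worker(pair_block):
    """Force decomposition: each worker sums forces for its block of pairs."""
    forces = [0.0] * len(positions)
    for i, j in pair_block:
        f = lj_force(positions[i], positions[j])
        forces[i] += f            # Newton's third law: equal and opposite
        forces[j] -= f
    return forces

blocks = [pairs[k::3] for k in range(3)]    # 3 workers share the pair list
with ThreadPoolExecutor(max_workers=3) as pool:
    partial = list(pool.map(worker, blocks))
total = [sum(col) for col in zip(*partial)]  # reduce the partial force arrays
```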

  14. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics, offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review is given of the parallel programming package CPS (Cooperative Processes Software), developed and used at Fermilab for offline reconstruction of Terabytes of data requiring the delivery of hundreds of Vax-Years per experiment. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing, starting with the ACP (Advanced Computer Project) farms in 1986. The Fermilab UNIX farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for managing, controlling and monitoring these large systems are also described. Possible future directions for parallel computing in High Energy Physics are given.

  15. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman

    2010-01-01

    Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,…

  16. Duloxetine for the management of diabetic peripheral neuropathic pain: evidence-based findings from post hoc analysis of three multicenter, randomized, double-blind, placebo-controlled, parallel-group studies

    DEFF Research Database (Denmark)

    Kajdasz, Daniel K; Iyengar, Smriti; Desaiah, Durisala

    2007-01-01

    peripheral neuropathic pain (DPNP). METHODS: Data were pooled from three 12-week, multicenter, randomized, double-blind, placebo-controlled, parallel-group studies in which patients received 60 mg duloxetine either QD or BID or placebo. NNT was calculated based on rates of response (defined as ≥30…

  17. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    International Nuclear Information System (INIS)

    Lu Liuyan; Lantz, Steven R.; Ren Zhuyin; Pope, Stephen B.

    2009-01-01

    ISAT strategy, the type and extent of redistribution is determined 'on the fly' based on the prediction of future simulation time. Compared to the PLP/ISAT strategy where chemistry calculations are essentially serial, a speed-up factor of up to 30 is achieved. The study also demonstrates that the adaptive strategy has acceptable parallel scalability.

  18. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated as a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially separated polarization components of a laser with a digital micromirror device and subsequently beam-combining them. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics, with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  19. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message-passing overhead. Performance results are given for the implementation of queue-sort on a Connection Machine CM-2 and of barrel-sort on a 128-processor iPSC/860. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
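Barrel-sort's key idea, partitioning keys by value range so that independently sorted buckets concatenate into a sorted whole, can be sketched as follows (thread-based and much simplified relative to a message-passing implementation; the function name is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_range_sort(keys, n_buckets=4):
    """Barrel-sort-style integer sort: partition keys by value range
    ('barrels'), sort each barrel independently, then concatenate.
    Because the barrels cover disjoint, increasing ranges, the
    concatenation of sorted barrels is globally sorted."""
    lo, hi = min(keys), max(keys)
    width = (hi - lo) // n_buckets + 1
    buckets = [[] for _ in range(n_buckets)]
    for k in keys:
        buckets[(k - lo) // width].append(k)
    with ThreadPoolExecutor(max_workers=n_buckets) as pool:
        sorted_buckets = list(pool.map(sorted, buckets))
    return [k for b in sorted_buckets for k in b]

data = [37, 5, 92, 14, 73, 5, 61, 28, 99, 0]
result = parallel_range_sort(data)
```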

  20. Beyond Silence: A Randomized, Parallel-Group Trial Exploring the Impact of Workplace Mental Health Literacy Training with Healthcare Employees.

    Science.gov (United States)

    Moll, Sandra E; Patten, Scott; Stuart, Heather; MacDermid, Joy C; Kirsh, Bonnie

    2018-01-01

    This study sought to evaluate whether a contact-based workplace education program was more effective than standard mental health literacy training in promoting early intervention and support for healthcare employees with mental health issues. A parallel-group, randomised trial was conducted with employees in 2 multi-site Ontario hospitals with the evaluators blinded to the groups. Participants were randomly assigned to 1 of 2 group-based education programs: Beyond Silence (comprising 6 in-person, 2-h sessions plus 5 online sessions co-led by employees who personally experienced mental health issues) or Mental Health First Aid (a standardised 2-day training program led by a trained facilitator). Participants completed baseline, post-group, and 3-mo follow-up surveys to explore perceived changes in mental health knowledge, stigmatized beliefs, and help-seeking/help-outreach behaviours. An intent-to-treat analysis was completed with 192 participants. Differences were assessed using multi-level mixed models accounting for site, group, and repeated measurement. Neither program led to significant increases in help-seeking or help-outreach behaviours. Both programs increased mental health literacy, improved attitudes towards seeking treatment, and decreased stigmatized beliefs, with sustained changes in stigmatized beliefs more prominent in the Beyond Silence group. Beyond Silence, a new contact-based education program customised for healthcare workers was not superior to standard mental health literacy training in improving mental health help-seeking or help-outreach behaviours in the workplace. The only difference was a reduction in stigmatized beliefs over time. Additional research is needed to explore the factors that lead to behaviour change.

  1. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far do not match the effort invested. This paper aims to make a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  2. Online Diagnosis for the Capacity Fade Fault of a Parallel-Connected Lithium Ion Battery Group

    Directory of Open Access Journals (Sweden)

    Hua Zhang

    2016-05-01

    Full Text Available In a parallel-connected battery group (PCBG), capacity degradation is usually caused by inconsistency between a faulty cell and the other, normal cells, and this inconsistency has two potential causes: an aging-inconsistency fault or a loose-contact fault. In this paper, a novel method is proposed to perform online, real-time capacity fault diagnosis for PCBGs. Firstly, based on an analysis of the parameter variation characteristics of a PCBG under the different fault causes, it is found that PCBG resistance can serve as an indicator both for locating the faulty PCBG and for distinguishing between the fault causes. On one hand, the faulty PCBG can be identified by comparing resistance among PCBGs; on the other hand, the two fault causes can be distinguished by comparing the variance of the PCBG resistances. Furthermore, for online applications, a novel recursive-least-squares algorithm with restricted memory and constraint (RLSRMC), in which the constraint is added to eliminate the "imaginary number" phenomena in the parameters, is developed and used for PCBG resistance identification. Lastly, fault simulation and validation results demonstrate that the proposed methods have good accuracy and reliability.
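The resistance-identification step can be illustrated with a plain recursive-least-squares update with a forgetting factor. This is a simplified stand-in for the paper's RLSRMC variant (it omits the restricted memory and the constraint), using a made-up battery model V = OCV - R·I and invented values:

```python
def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive-least-squares step with forgetting factor lam.
    theta = [OCV, R] parameter estimate, phi = [1, -I] regressor."""
    # Gain K = P phi / (lam + phi^T P phi)
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]
    # Innovation and parameter update
    err = y - (theta[0] * phi[0] + theta[1] * phi[1])
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    # Covariance update: P <- (P - K phi^T P) / lam
    P = [[(P[0][0] - K[0] * Pphi[0]) / lam, (P[0][1] - K[0] * Pphi[1]) / lam],
         [(P[1][0] - K[1] * Pphi[0]) / lam, (P[1][1] - K[1] * Pphi[1]) / lam]]
    return theta, P

true_ocv, true_r = 3.7, 0.05          # V, ohm (hypothetical PCBG values)
currents = [1.0, 2.5, 0.5, 3.0, 1.5, 2.0, 0.8, 2.8] * 5
theta, P = [0.0, 0.0], [[1000.0, 0.0], [0.0, 1000.0]]
for i in currents:
    v = true_ocv - true_r * i         # noise-free terminal voltage samples
    theta, P = rls_update(theta, P, [1.0, -i], v)
ocv_est, r_est = theta
```

With persistent excitation (varying current), the estimate converges to the true open-circuit voltage and group resistance.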

  3. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop…

  4. Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine

    Science.gov (United States)

    Lee, C. S. G.; Lin, C. T.

    1989-01-01

    The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme for mapping these algorithms to a reconfigurable parallel architecture is presented. Based on characteristics including the type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirements, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well suited to implementation on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual-network SIMD machine with internal direct feedback is introduced, and a systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.
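The "linear recursive structure" noted above is what makes these computations amenable to SIMD execution: a recurrence x_k = a_k x_{k-1} + b_k is an associative composition of affine maps, so all prefixes can be computed in O(log n) parallel rounds by recursive doubling (a Hillis-Steele scan). A minimal sketch (the loop below runs serially, but each round's updates are independent and would run concurrently on parallel hardware):

```python
def compose(f, g):
    """Associative combine for affine maps x -> a*x + b, stored as (a, b).
    compose(f, g) applies f first, then g: g(f(x)) = ga*(fa*x + fb) + gb."""
    fa, fb = f
    ga, gb = g
    return (ga * fa, ga * fb + gb)

def scan(maps):
    """Inclusive prefix scan by recursive doubling: after round s,
    element i aggregates maps[i-2^s+1 .. i]; log2(n) rounds total."""
    maps = list(maps)
    n = len(maps)
    step = 1
    while step < n:
        maps = [maps[i] if i < step else compose(maps[i - step], maps[i])
                for i in range(n)]
        step *= 2
    return maps

# Linear recurrence x_k = a_k * x_{k-1} + b_k with x_0 = 2
a = [1.0, 0.5, 2.0, 1.5, 0.25, 3.0]
b = [0.5, 1.0, -1.0, 0.0, 2.0, 0.5]
prefix = scan(zip(a, b))
x0 = 2.0
xs = [pa * x0 + pb for pa, pb in prefix]   # all x_k at once
```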

  5. Effective damping for SSR analysis of parallel turbine-generators

    International Nuclear Information System (INIS)

    Agrawal, B.L.; Farmer, R.G.

    1988-01-01

    Damping is a dominant parameter in studies to determine SSR problem severity and countermeasure requirements. To reach valid conclusions for multi-unit plants, it is essential that the net effective damping of unequally loaded units be known. For the Palo Verde Nuclear Generating Station, extensive testing and analysis have been performed to verify and develop an accurate means of determining the effective damping of unequally loaded units in parallel. This has led to a unique and simple algorithm which correlates well with two other analytic techniques

  6. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Francois Bodin

    2009-01-01

    Full Text Available Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware displays a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving the legacy code.

  7. Practical integrated simulation systems for coupled numerical simulations in parallel

    Energy Technology Data Exchange (ETDEWEB)

    Osamu, Hazama; Zhihong, Guo [Japan Atomic Energy Research Inst., Centre for Promotion of Computational Science and Engineering, Tokyo (Japan)

    2003-07-01

    In order for numerical simulations to reflect 'real-world' phenomena and occurrences, the incorporation of multidisciplinary and multi-physics simulations considering various physical models and factors is becoming essential. However, there still exist many obstacles which inhibit such numerical simulations. For example, it is still difficult in many instances to develop satisfactory software packages which allow for such coupled simulations, and such simulations will require more computational resources. A precise multi-physics simulation today will require parallel processing, which again makes it a complicated process. Under the international cooperative efforts between CCSE/JAERI and Fraunhofer SCAI, a German institute, a library called MpCCI, or Mesh-based Parallel Code Coupling Interface, has been implemented together with a library called STAMPI to couple two existing codes to develop an 'integrated numerical simulation system' intended for meta-computing environments. (authors)

  8. A parallel sweeping preconditioner for frequency-domain seismic wave propagation

    KAUST Repository

    Poulson, Jack

    2012-09-01

    We present a parallel implementation of Engquist and Ying's sweeping preconditioner, which exploits radiation boundary conditions in order to form an approximate block LDLT factorization of the Helmholtz operator with only O(N^(4/3)) work and an application (and memory) cost of only O(N log N). The approximate factorization is then used as a preconditioner for GMRES, and we show that essentially O(1) iterations are required for convergence, even for the full SEG/EAGE over-thrust model at 30 Hz. In particular, we demonstrate the solution of said problem in a mere 15 minutes on 8192 cores of TACC's Lonestar, which may be the largest-scale 3D heterogeneous Helmholtz calculation to date. Generalizations of our parallel strategy are also briefly discussed for time-harmonic linear elasticity and Maxwell's equations.

  9. The existence of an insulin-stimulated glucose and non-essential but not essential amino acid substrate interaction in diabetic pigs

    Directory of Open Access Journals (Sweden)

    Wijdenes Jan

    2011-05-01

    Full Text Available Abstract Background The generation of energy from glucose is impaired in diabetes and can be compensated by other substrates such as fatty acids (Randle cycle). Little information is available on amino acids (AA) as an alternative energy source in diabetes. To study the interaction between insulin-stimulated glucose and AA utilization in normal and diabetic subjects, intraportal hyperinsulinaemic euglycaemic euaminoacidaemic clamp studies were performed in normal (n = 8) and streptozotocin-induced (120 mg/kg) diabetic (n = 7) pigs of ~40-45 kg. Results Diabetic vs normal pigs showed basal hyperglycaemia (19.0 ± 2.0 vs 4.7 ± 0.1 mmol/L; P…). Essential AA clearance was largely unchanged (72.9 ± 8.5 vs 63.3 ± 8.5 mL/kg·min), although clearances of threonine (P… Conclusions The ratio of insulin-stimulated glucose versus AA clearance was decreased 5.4-fold in diabetic pigs, caused by a 3.6-fold decrease in glucose clearance and a 2.0-fold increase in non-essential AA clearance. In parallel with the Randle concept (glucose-fatty acid cycle), the present data suggest the existence of a glucose and non-essential AA substrate interaction in diabetic pigs, whereby reduced insulin-stimulated glucose clearance seems to be partly compensated by an increase in non-essential AA clearance, whereas essential AA are preferentially spared from an increase in clearance.

  10. Parallelization characteristics of the DeCART code

    International Nuclear Information System (INIS)

    Cho, J. Y.; Joo, H. G.; Kim, H. Y.; Lee, C. C.; Chang, M. H.; Zee, S. Q.

    2003-12-01

    This report describes the parallelization characteristics of the DeCART code and examines its parallel performance. Parallel computing algorithms are implemented in DeCART to reduce the tremendous computational burden and memory requirement involved in the three-dimensional whole core transport calculation. In the parallelization of the DeCART code, the axial domain decomposition is first realized by using MPI (Message Passing Interface), and then the azimuthal angle domain decomposition by using either MPI or OpenMP. When using MPI for both the axial and the angle domain decomposition, the concept of MPI grouping is employed for convenient communication in each communication world. For the parallel computation, all of the computing modules except the thermal hydraulic module are parallelized. These parallelized computing modules include the MOC ray tracing, CMFD, NEM, region-wise cross section preparation, and cell homogenization modules. For the distributed allocation, most of the MOC and CMFD/NEM variables are allocated only for the assigned planes, which reduces the required memory by the ratio of the number of assigned planes to the number of all planes. The parallel performance of the DeCART code is evaluated by solving two problems, a rodded variation of the C5G7 MOX three-dimensional benchmark problem and a simplified three-dimensional SMART PWR core problem. In terms of parallel performance, the DeCART code shows a good speedup of about 40.1 and 22.4 in the ray tracing module, and about 37.3 and 20.2 in the total computing time, when using 48 CPUs on the IBM Regatta and 24 CPUs on the LINUX cluster, respectively. In the comparison between MPI and OpenMP, OpenMP shows somewhat better performance than MPI. Therefore, it is concluded that the first priority in the parallel computation of the DeCART code is the axial domain decomposition using MPI, followed by the angular domain decomposition using OpenMP.
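    The plane-wise distributed allocation described above can be sketched in a few lines. This is an illustrative model only (not DeCART code); the function names are ours, and the assumption that per-rank memory scales with the number of assigned planes follows the report's description.

```python
# Toy sketch: contiguously distribute axial planes among ranks and estimate
# the per-rank memory fraction (assigned planes / all planes).

def assign_planes(n_planes, n_ranks):
    """Contiguously split plane indices 0..n_planes-1 among ranks."""
    base, extra = divmod(n_planes, n_ranks)
    assignment, start = [], 0
    for rank in range(n_ranks):
        count = base + (1 if rank < extra else 0)
        assignment.append(list(range(start, start + count)))
        start += count
    return assignment

def memory_fraction(assignment, n_planes):
    """Worst-case fraction of full-core memory one rank needs."""
    return max(len(p) for p in assignment) / n_planes

planes = assign_planes(24, 8)        # e.g. 24 axial planes on 8 ranks
print(memory_fraction(planes, 24))   # -> 0.125
```

With 24 planes on 8 ranks each rank holds 3 planes, i.e. one eighth of the full-core storage, matching the report's "ratio of assigned planes to all planes".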

  11. Effect of probiotic yoghurt on animal-based diet-induced change in gut microbiota: an open, randomised, parallel-group study.

    Science.gov (United States)

    Odamaki, T; Kato, K; Sugahara, H; Xiao, J Z; Abe, F; Benno, Y

    2016-09-01

    Diet has a significant influence on the intestinal environment. In this study, we assessed changes in the faecal microbiota induced by an animal-based diet and the effect of the ingestion of yoghurt supplemented with a probiotic strain on these changes. In total, 33 subjects were enrolled in an open, randomised, parallel-group study. After a seven-day pre-observation period, the subjects were allocated into three groups (11 subjects in each group). All of the subjects were provided with an animal-based diet for five days, followed by a balanced diet for 14 days. Subjects in the first group ingested dairy in the form of 200 g of yoghurt supplemented with Bifidobacterium longum during both the animal-based and balanced diet periods (YAB group). Subjects in the second group ingested yoghurt only during the balanced diet period (YB group). Subjects who did not ingest yoghurt throughout the intervention were used as the control (CTR) group. Faecal samples were collected before and after the animal-based diet was provided and after the balanced diet was provided, followed by analysis by high-throughput sequencing of amplicons derived from the V3-V4 region of the 16S rRNA gene. In the YB and CTR groups, the animal-based diet caused a significant increase in the relative abundance of Bilophila, Odoribacter, Dorea and Ruminococcus (belonging to Lachnospiraceae) and a significant decrease in the level of Bifidobacterium after five days of intake. With the exception of Ruminococcus, these changes were not observed in the YAB group. No significant effect was induced by yoghurt supplementation following an animal-based diet (YB group vs CTR group). These results suggest that the intake of yoghurt supplemented with bifidobacteria played a role in maintaining a normal microbiota composition during the ingestion of a meat-based diet. This study protocol was registered in the University Hospital Medical Information Network: UMIN000014164.

  12. Strength Training Parallel with Plyometric and Cross training Influences on Speed Endurance

    OpenAIRE

    C.C.Chandra Obul Reddy; Dr. K. Rama Subba Reddy

    2017-01-01

    The purpose of the study was to find out the influence of weight training parallel with plyometric and cross training on speed endurance. To achieve this purpose, forty-five male students studying at CSSR & SRRM Degree College, Kamalapuram, YSR (D), Andhra Pradesh, India were randomly selected as subjects during the year 2015-2016. They were divided into three equal groups of fifteen subjects each. Group I underwent weight training parallel with plyometric training for three sessions...

  13. Parallel generation of architecture on the GPU

    KAUST Repository

    Steinberger, Markus

    2014-05-01

    In this paper, we present a novel approach for the parallel evaluation of procedural shape grammars on the graphics processing unit (GPU). Unlike previous approaches that are either limited in the kind of shapes they allow, the amount of parallelism they can take advantage of, or both, our method supports state of the art procedural modeling including stochasticity and context-sensitivity. To increase parallelism, we explicitly express independence in the grammar, reduce inter-rule dependencies required for context-sensitive evaluation, and introduce intra-rule parallelism. Our rule scheduling scheme avoids unnecessary back and forth between CPU and GPU and reduces round trips to slow global memory by dynamically grouping rules in on-chip shared memory. Our GPU shape grammar implementation is multiple orders of magnitude faster than the standard in CPU-based rule evaluation, while offering equal expressive power. In comparison to the state of the art in GPU shape grammar derivation, our approach is nearly 50 times faster, while adding support for geometric context-sensitivity.

  14. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.
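    The final step of the claim, dividing data communications operations among the plurality of endpoints for one task, can be illustrated with a toy shard function. The PAMI API itself is not reproduced here; the function name and the round-robin policy are our own illustrative choices.

```python
# Illustrative only: split a collective operation's buffer across the several
# endpoints that belong to one task, so each endpoint carries part of the load.

def divide_among_endpoints(data, n_endpoints):
    """Round-robin split of a message buffer across a task's endpoints."""
    shards = [[] for _ in range(n_endpoints)]
    for i, item in enumerate(data):
        shards[i % n_endpoints].append(item)
    return shards

shards = divide_among_endpoints(list(range(10)), 3)
# Each endpoint transmits its shard; together the shards cover the buffer.
print(shards)   # -> [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```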

  15. Parallel implementation of multireference coupled-cluster theories based on the reference-level parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Brabec, Jiri; Pittner, Jiri; van Dam, Hubertus JJ; Apra, Edoardo; Kowalski, Karol

    2012-02-01

    A novel algorithm for implementing a general type of multireference coupled-cluster (MRCC) theory based on the Jeziorski-Monkhorst exponential Ansatz [B. Jeziorski, H.J. Monkhorst, Phys. Rev. A 24, 1668 (1981)] is introduced. The proposed algorithm utilizes processor groups to calculate the equations for the MRCC amplitudes. In the basic formulation, each processor group constructs the equations related to a specific subset of references. By a flexible choice of processor groups and of the subset of reference-specific sufficiency conditions designated to a given group, one can assure optimum utilization of the available computing resources. The performance of this algorithm is illustrated on the examples of the Brillouin-Wigner and Mukherjee MRCC methods with singles and doubles (BW-MRCCSD and Mk-MRCCSD). A significant improvement in scalability and in reduction of time to solution is reported with respect to the recently reported parallel implementation of the BW-MRCCSD formalism [J. Brabec, H.J.J. van Dam, K. Kowalski, J. Pittner, Chem. Phys. Lett. 514, 347 (2011)].
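    At its scheduling core, the reference-level parallelism described above amounts to partitioning the reference determinants among processor groups, each group then building the amplitude equations for its subset. A minimal sketch under our own naming (the actual code's data structures and sufficiency conditions are not modeled):

```python
# Hedged sketch: block-partition references among processor groups so each
# group handles a contiguous, near-equal share.

def split_references(references, n_groups):
    """Block-partition the reference determinants among processor groups."""
    base, extra = divmod(len(references), n_groups)
    out, start = [], 0
    for g in range(n_groups):
        count = base + (1 if g < extra else 0)
        out.append(references[start:start + count])
        start += count
    return out

groups = split_references([f"ref{i}" for i in range(7)], 3)
print(groups)
# -> [['ref0', 'ref1', 'ref2'], ['ref3', 'ref4'], ['ref5', 'ref6']]
```

Balancing the group sizes mirrors the abstract's point that the assignment of references to groups governs the utilization of the available resources.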

  16. Real-time SHVC software decoding with multi-threaded parallel processing

    Science.gov (United States)

    Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu

    2014-09-01

    This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline designed with multiple threads based on groups of coding tree units (CTU). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel i7 processor 2600 running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for those bitstreams generated with SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads are compared in terms of decoding speed and resource usage, including processor and memory.
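    The group-of-CTUs processing unit described above can be modeled very simply: CTUs are batched into fixed-size units, and each pipeline stage synchronizes per unit rather than per CTU. The sketch below uses our own naming; the group size is the tunable trade-off between parallelism and synchronization overhead mentioned in the abstract.

```python
# Toy model (not the SHM-2.0 decoder): batch CTU indices into fixed-size
# processing units for a pipelined decoder.

def ctu_groups(n_ctus, group_size):
    """Split CTU indices into fixed-size processing units for the pipeline."""
    return [list(range(i, min(i + group_size, n_ctus)))
            for i in range(0, n_ctus, group_size)]

units = ctu_groups(10, 4)
print(units)   # -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

A larger group size means fewer synchronization points between the entropy decoding, reconstruction, and in-loop stages, at the cost of coarser parallelism.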

  17. The 2003 essential. AREVA

    International Nuclear Information System (INIS)

    2004-07-01

    This document presents the essential activities of the Areva Group, a world nuclear industry leader. The group proposes technological solutions to produce nuclear energy and to transport electric power. It develops connection systems for the telecommunications, computer and automotive industries. Key data on program management, sustainable development activities and the different divisions are provided. (A.L.B.)

  18. 2nd International Conference on Cable-Driven Parallel Robots

    CERN Document Server

    Bruckmann, Tobias

    2015-01-01

    This volume presents the outcome of the second forum on cable-driven parallel robots, bringing the cable robot community together. It shows the new ideas of the active researchers developing cable-driven robots. The book presents the state of the art, including both summarizing contributions and the latest research and future options. The book covers all topics which are essential for cable-driven robots: classification; kinematics, workspace and singularity analysis; statics and dynamics; cable modeling; control and calibration; design methodology; hardware development; experimental evaluation; prototypes, application reports and new application concepts.

  19. Parallelization of a three-dimensional whole core transport code DeCART

    Energy Technology Data Exchange (ETDEWEB)

    Jin Young, Cho; Han Gyu, Joo; Ha Yong, Kim; Moon-Hee, Chang [Korea Atomic Energy Research Institute, Yuseong-gu, Daejon (Korea, Republic of)

    2003-07-01

    Parallelization of the DeCART (deterministic core analysis based on ray tracing) code is presented that reduces the computational burden of the tremendous computing time and memory required in three-dimensional whole core transport calculations. The parallelization employs the concept of MPI grouping and the MPI/OpenMP mixed scheme as well. Since most of the computing time and memory are used in the MOC (method of characteristics) and the multi-group CMFD (coarse mesh finite difference) calculations in DeCART, variables and subroutines related to these two modules are the primary targets for parallelization. Specifically, the ray tracing module was parallelized using a planar domain decomposition scheme and an angular domain decomposition scheme. The parallel performance of the DeCART code is evaluated by solving a rodded variation of the C5G7MOX three-dimensional benchmark problem and a simplified three-dimensional SMART PWR core problem. In the C5G7MOX problem with 24 CPUs, a maximum speedup of 21 is obtained on an IBM Regatta machine and 22 on a LINUX cluster in the MOC kernel, which indicates good parallel performance of the DeCART code. In the simplified SMART problem, the memory requirement of about 11 GBytes in the single-processor case reduces to 940 MBytes with 24 processors, which means that the DeCART code can now solve large core problems with affordable LINUX clusters. (authors)

  20. Essentialism Promotes Children's Inter-ethnic Bias

    Directory of Open Access Journals (Sweden)

    Gil eDiesendruck

    2015-08-01

    Full Text Available The present study investigated the developmental foundation of the relation between social essentialism and attitudes. Forty-eight Jewish Israeli secular 6-year-olds were exposed to either a story emphasizing essentialism about ethnicity, or stories controlling for the salience of ethnicity or essentialism per se. After listening to a story, children’s attitudes were assessed in a drawing and in an IAT task. Compared to the control conditions, children in the ethnic essentialism condition drew a Jewish and an Arab character as farther apart from each other, and the Jewish character with a more positive affect than the Arab character. Moreover, boys in the ethnic essentialism condition manifested a stronger bias in the IAT. These findings reveal an early link between essentialism and inter-group attitudes.

  1. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  2. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single-processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed on the parallel computing platform.

  3. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    "…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi…"

  4. The Effects of Stress and Executive Functions on Decision Making in an Executive Parallel Task

    OpenAIRE

    McGuigan, Brian

    2016-01-01

    The aim of this study was to investigate the effects of acute stress on parallel task performance with the Game of Dice Task (GDT) to measure decision making and the Stroop test. Two previous studies have found that the combination of stress and a parallel task with the GDT and an executive functions task preserved performance on the GDT for a stress group compared to a control group. The purpose of this study was to create and use a new parallel task with the GDT and the Stroop test to elu...

  5. A parallel form of the Gudjonsson Suggestibility Scale.

    Science.gov (United States)

    Gudjonsson, G H

    1987-09-01

    The purpose of this study is twofold: (1) to present a parallel form of the Gudjonsson Suggestibility Scale (GSS, Form 1); (2) to study test-retest reliabilities of interrogative suggestibility. Three groups of subjects were administered the two suggestibility scales in a counterbalanced order. Group 1 (28 normal subjects) and Group 2 (32 'forensic' patients) completed both scales within the same testing session, whereas Group 3 (30 'forensic' patients) completed the two scales between one week and eight months apart. All the correlations were highly significant, giving support for high 'temporal consistency' of interrogative suggestibility.

  6. Parallel computation for distributed parameter system-from vector processors to Adena computer

    Energy Technology Data Exchange (ETDEWEB)

    Nogi, T

    1983-04-01

    Research on advanced parallel hardware and software architectures for very high-speed computation deserves and needs more support and attention to fulfil its promise. Novel architectures for parallel processing are being made ready. Architectures for parallel processing can be roughly divided into two groups. One is a vector processor in which a single central processing unit involves multiple vector-arithmetic registers. The other is a processor array in which slave processors are connected to a host processor to perform parallel computation. In this review, the concept and data structure of the Adena (alternating-direction edition nexus array) architecture, which is conformable to distributed-parameter simulation algorithms, are described. 5 references.

  7. Cache-aware data structure model for parallelism and dynamic load balancing

    International Nuclear Information System (INIS)

    Sridi, Marwa

    2016-01-01

    This PhD thesis is dedicated to the implementation of innovative parallel methods in the framework of fast transient fluid-structure dynamics. It improves existing methods within the EUROPLEXUS software, in order to optimize the shared-memory parallel strategy, complementary to the original distributed-memory approach, brought together into a global hybrid strategy for clusters of multi-core nodes. Starting from a sound analysis of the state of the art concerning data structuring techniques correlated to the hierarchic memory organization of current multi-processor architectures, the proposed work introduces an approach suitable for explicit time integration (i.e. with no linear system to solve at each step). A data structure of type 'structure of arrays' is conserved for the global data storage, providing flexibility and efficiency for current operations on kinematics fields (displacement, velocity and acceleration). On the contrary, in the particular case of elementary operations (for generic internal force computations, as well as flux computations between cell faces for fluid models), which are particularly time consuming but localized in the program, a temporary data structure of type 'array of structures' is used instead, to force an efficient filling of the cache memory and increase the performance of the resolution, for both serial and shared-memory parallel processing. Switching from the global structure to the temporary one is based on a cell grouping strategy, following classic cache-blocking principles but handling, specifically for this work, the neighboring data necessary for the efficient treatment of ALE fluxes for cells on the group boundaries. The proposed approach is extensively tested, from the points of view of both the computation time and the access failures into cache memory, confronting the gains obtained within the elementary operations with the potential overhead generated by the data structure switch. The obtained results are very satisfactory, especially ...
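    The structure switch described above can be sketched in miniature (names are ours, not EUROPLEXUS'): the global storage is a structure of arrays, and for an elementary loop over one cell group the data are gathered into a temporary array of structures, so that each cell's fields sit contiguously in cache.

```python
# Global "structure of arrays" storage: one array per kinematics field.
soa = {
    "displacement": [0.0, 1.0, 2.0, 3.0],
    "velocity":     [0.1, 1.1, 2.1, 3.1],
    "acceleration": [0.2, 1.2, 2.2, 3.2],
}

def gather_group(soa, cell_ids):
    """Build the temporary "array of structures" for one cache-blocked group."""
    return [{field: soa[field][c] for field in soa} for c in cell_ids]

group = gather_group(soa, [1, 3])   # one cell group of two cells
# group[0] holds every field of cell 1 side by side, AoS-style.
```

In Python this is only a data-layout analogy; the performance effect discussed in the thesis comes from the same gather done in a compiled language, where the AoS copy fills cache lines with exactly the fields the elementary kernel touches.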

  8. Fundus autofluorescence in chronic essential hypertension.

    Science.gov (United States)

    Ramezani, Alireza; Saberian, Peyman; Soheilian, Masoud; Parsa, Saeed Alipour; Kamali, Homayoun Koohi; Entezari, Morteza; Shahbazi, Mohammad-Mehdi; Yaseri, Mehdi

    2014-01-01

    To evaluate fundus autofluorescence (FAF) changes in patients with chronic essential hypertension (HTN). In this case-control study, 35 eyes of 35 patients with chronic essential HTN (lasting >5 years) and 31 eyes of 31 volunteers without a history of HTN were included. FAF pictures were taken from the right eyes of all cases with the Heidelberg retina angiograph and then were assessed by two masked retinal specialists. In total, among the FAF images, including 35 images of hypertensive patients and 31 pictures of volunteers, three apparently abnormal patterns were detected. A ring of hyper-autofluorescence in the central macula (doughnut-shaped) was observed in 9 (25.7%) eyes of the hypertensive group but only in 2 (6.5%) eyes of the control group. This difference between the two groups was statistically significant (P = 0.036). Hypo- and/or hyper-autofluorescence patches outside the fovea were the other sign found more in the hypertensive group (22.9%) than in the control group (6.5%); however, the difference was not statistically significant (P = 0.089). The third feature was hypo-autofluorescence around the disk, noticed in 11 (31.4%) eyes of hypertensive patients compared to 8 (25.8%) eyes of the controls (P = 0.615). A ring of hyper-autofluorescence in the central macula forming a doughnut-shaped feature may be a FAF sign in patients with chronic essential HTN.

  9. Fundus Autofluorescence in Chronic Essential Hypertension

    Directory of Open Access Journals (Sweden)

    Alireza Ramezani

    2014-01-01

    Full Text Available Purpose: To evaluate fundus autofluorescence (FAF) changes in patients with chronic essential hypertension (HTN). Methods: In this case-control study, 35 eyes of 35 patients with chronic essential HTN (lasting >5 years) and 31 eyes of 31 volunteers without a history of HTN were included. FAF pictures were taken from the right eyes of all cases with the Heidelberg retina angiograph and then were assessed by two masked retinal specialists. Results: In total, among the FAF images, including 35 images of hypertensive patients and 31 pictures of volunteers, three apparently abnormal patterns were detected. A ring of hyper-autofluorescence in the central macula (doughnut-shaped) was observed in 9 (25.7%) eyes of the hypertensive group but only in 2 (6.5%) eyes of the control group. This difference between the two groups was statistically significant (P = 0.036). Hypo- and/or hyper-autofluorescence patches outside the fovea were the other sign found more in the hypertensive group (22.9%) than in the control group (6.5%); however, the difference was not statistically significant (P = 0.089). The third feature was hypo-autofluorescence around the disk, noticed in 11 (31.4%) eyes of hypertensive patients compared to 8 (25.8%) eyes of the controls (P = 0.615). Conclusion: A ring of hyper-autofluorescence in the central macula forming a doughnut-shaped feature may be a FAF sign in patients with chronic essential HTN.

  10. Development of GPU Based Parallel Computing Module for Solving Pressure Equation in the CUPID Component Thermo-Fluid Analysis Code

    International Nuclear Information System (INIS)

    Lee, Jin Pyo; Joo, Han Gyu

    2010-01-01

    In the thermo-fluid analysis code named CUPID, the linear system of pressure equations must be solved at each iteration step. The time for repeatedly solving the linear system can be quite significant because large sparse matrices of rank greater than 50,000 are involved and the diagonal dominance of the system hardly holds. Therefore, parallelization of the linear system solver is essential to reduce the computing time. Meanwhile, Graphics Processing Units (GPU) have been developed as highly parallel, multi-core processors for the global demand of high-quality 3D graphics. If a suitable interface is provided, parallelization using GPUs becomes available for engineering computing. NVIDIA provides a Software Development Kit (SDK) named CUDA (Compute Unified Device Architecture) to code developers so that they can manage GPUs for parallelization using the C language. In this research, we implement parallel routines for the linear system solver using CUDA and examine the performance of the parallelization. In the next section, we will describe the method of CUDA parallelization for the CUPID code, and then the performance of the CUDA parallelization will be discussed
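    As a stand-in for the GPU-parallelized solve (no CUDA shown here), the structure of one iterative pressure solve can be sketched with Jacobi iterations, whose per-row updates are independent and therefore map naturally onto one GPU thread each. The tiny matrix below is strongly diagonally dominant, unlike the weakly dominant systems mentioned in the abstract, so plain Jacobi converges.

```python
# Hedged sketch: damped-free Jacobi iterations on a small system A x = b.
# In CUPID the matrix is a large sparse pressure system; each row update
# below would be computed by its own GPU thread.

def jacobi(A, b, iters=50):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # Every component uses only the previous iterate: fully parallel.
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = jacobi(A, b)   # converges toward the exact solution (1/11, 7/11)
```

Production codes use preconditioned Krylov methods rather than plain Jacobi, but the row-wise independence shown here is exactly what makes the solve amenable to CUDA.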

  11. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
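    The mapping problem has a concrete linear-array flavor: assign m modules, kept in order, to n processors so that the most-loaded processor is as light as possible. The simple O(nm^2) dynamic program below only illustrates the problem being optimized; the paper's improved algorithms achieve the much better bounds quoted above.

```python
# Illustrative DP: minimal bottleneck load when assigning modules (in order)
# to contiguous blocks, one block per processor.

def min_bottleneck(weights, n_procs):
    m = len(weights)
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)
    INF = float("inf")
    # best[p][i]: minimal bottleneck placing the first i modules on p procs.
    best = [[INF] * (m + 1) for _ in range(n_procs + 1)]
    best[0][0] = 0.0
    for p in range(1, n_procs + 1):
        for i in range(m + 1):
            for j in range(i + 1):           # last processor takes modules j..i-1
                load = prefix[i] - prefix[j]
                best[p][i] = min(best[p][i], max(best[p - 1][j], load))
    return best[n_procs][m]

print(min_bottleneck([4, 2, 3, 5, 2], 2))   # -> 9, split [4,2,3] | [5,2]
```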

  12. Parallel computing for homogeneous diffusion and transport equations in neutronics; Calcul parallele pour les equations de diffusion et de transport homogenes en neutronique

    Energy Technology Data Exchange (ETDEWEB)

    Pinchedez, K

    1999-06-01

    Parallel computing meets the ever-increasing requirements for neutronic computer code speed and accuracy. In this work, two different approaches have been considered. We first parallelized the sequential algorithm used by the neutronics code CRONOS developed at the French Atomic Energy Commission. The algorithm computes the dominant eigenvalue associated with the simplified PN transport equations by a mixed finite element method. Several parallel algorithms have been developed on distributed memory machines. The performances of the parallel algorithms have been studied experimentally by implementation on a Cray T3D and theoretically by complexity models. A comparison of various parallel algorithms has confirmed the chosen implementations. We next applied a domain sub-division technique to the two-group diffusion eigenproblem. In the modal-synthesis-based method, the global spectrum is determined from the partial spectra associated with sub-domains. Then the eigenproblem is expanded on a family composed, on the one hand, of eigenfunctions associated with the sub-domains and, on the other hand, of functions corresponding to the contribution from the interfaces between the sub-domains. For a 2-D homogeneous core, this modal method has been validated and its accuracy has been measured. (author)
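    The dominant-eigenvalue computation at the heart of the first approach can be illustrated in miniature by power iteration on a small symmetric matrix; CRONOS itself uses a mixed finite element discretization and more elaborate solvers, so this is only an analogy.

```python
# Toy power iteration: repeatedly apply A and renormalize; the scaling factor
# converges to the dominant eigenvalue in magnitude.

def power_iteration(A, iters=100):
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(abs(v) for v in y)     # infinity-norm normalization
        x = [v / lam for v in y]
    return lam

A = [[2.0, 1.0], [1.0, 2.0]]
print(power_iteration(A))   # -> 3.0 (eigenvalues of A are 3 and 1)
```

Parallelizing this kernel reduces to parallelizing the matrix-vector product, which is where the distributed-memory algorithms of the thesis come in.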

  13. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  14. A review of parallel computing for large-scale remote sensing image mosaicking

    OpenAIRE

    Chen, Lajiao; Ma, Yan; Liu, Peng; Wei, Jingbo; Jie, Wei; He, Jijun

    2015-01-01

    Interest in image mosaicking has been spurred by a wide variety of research and management needs. However, for large-scale applications, remote sensing image mosaicking usually requires significant computational capabilities. Several studies have attempted to apply parallel computing to improve image mosaicking algorithms and to speed up calculation process. The state of the art of this field has not yet been summarized, which is, however, essential for a better understanding and for further ...

  15. An efficient parallel algorithm: Poststack and prestack Kirchhoff 3D depth migration using flexi-depth iterations

    Science.gov (United States)

    Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh

    2015-07-01

    This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, when it comes to 3D data migration, as the data size increases the resource requirement of the algorithm also increases. This challenges its practical implementation even on current-generation high performance computing systems. Therefore, a smart parallelization approach is essential to handle 3D data for migration. The most compute-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, due to its resource requirements such as memory/storage and I/O. In the current research work, we target this area and develop a competent parallel algorithm for poststack and prestack 3D Kirchhoff depth migration, using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth migrating data in parallel imaging space, using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations, which depends upon the available node memory and the size of the data to be migrated during runtime. Furthermore, it minimizes the requirements of storage, I/O and inter-node communication, thus making it advantageous over the conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM series supercomputer. Optimization, performance and scalability experiment results along with the migration outcome show the effectiveness of the parallel algorithm.
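    The flexi-depth idea, choosing the number of depth iterations at runtime from the node memory budget, boils down to a ceiling division. All names and sizes below are illustrative, not taken from the paper.

```python
# Back-of-the-envelope sketch: how many depth passes are needed if each pass
# must fit the migrated slices into node memory.
import math

def depth_iterations(total_depth_slices, slice_bytes, node_memory_bytes):
    """Number of passes when each pass holds as many slices as memory allows."""
    slices_per_pass = max(1, node_memory_bytes // slice_bytes)
    return math.ceil(total_depth_slices / slices_per_pass)

# e.g. 1000 depth slices of 80 MB each on a node with 16 GB usable memory:
print(depth_iterations(1000, 80 * 2**20, 16 * 2**30))   # -> 5
```

A node with more memory yields fewer, larger passes; a bigger survey yields more passes, which is exactly the runtime flexibility the abstract describes.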

  16. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
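
    The rsync-style template comparison can be illustrated with a minimal sketch. The block size, MD5 checksums, and helper names below are assumptions for illustration, not the patented implementation.

```python
import hashlib

def block_checksums(data, block_size):
    """Checksum each fixed-size block of a byte string (simplified rsync-style)."""
    return [hashlib.md5(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def delta_against_template(node_data, template_checksums, block_size):
    """Return only the (index, block) pairs whose checksum differs from the template;
    matching blocks need not be transmitted or stored again."""
    delta = []
    for i, csum in enumerate(block_checksums(node_data, block_size)):
        if i >= len(template_checksums) or csum != template_checksums[i]:
            delta.append((i, node_data[i * block_size:(i + 1) * block_size]))
    return delta

# Hypothetical node state: only the second 64-byte block changed since the template
template = b"A" * 64 + b"B" * 64
checkpoint = b"A" * 64 + b"C" * 64
delta = delta_against_template(checkpoint, block_checksums(template, 64), 64)
```

    Here only one of the two blocks would be sent over the interconnect, which is the source of the claimed reduction in transmitted and stored data.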

  17. Essential idempotents and simplex codes

    Directory of Open Access Journals (Sweden)

    Gladys Chalom

    2017-01-01

    Full Text Available We define essential idempotents in group algebras and use them to prove that every minimal abelian non-cyclic code is a repetition code. We also use them to prove that every minimal abelian code is equivalent to a minimal cyclic code of the same length. Finally, we show that a binary cyclic code is simplex if and only if it has length of the form $n=2^k-1$ and is generated by an essential idempotent.

  18. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2015-01-01

    Full Text Available The aim of this study is to present an approach to the introduction into pipeline and parallel computing, using a model of the multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. The modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is the essential topic to be included in the curriculum. At the same time, the topic is among the most motivating tasks due to the comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. It allows implementing an educational platform of constructivist learning process, thus enabling learners’ experimentation with the provided programming models, obtaining learners’ competences of the modern scientific research and computational thinking, and capturing the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The programming language C for developing programming models and message passing interface (MPI and OpenMP parallelization tools have been chosen for implementation.
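
    A minimal model of such a multiphase pipeline, here written with Python threads and queues rather than the C/MPI/OpenMP stack used in the study, just to show the phase-per-worker structure where queues connect consecutive phases:

```python
import queue
import threading

def pipeline(stages, items):
    """Run items through a chain of single-function stages, one thread per stage.

    Queues connect the phases, so different items are processed by different
    stages concurrently, as in a multiphase queueing system.
    """
    qs = [queue.Queue() for _ in range(len(stages) + 1)]
    done = object()  # sentinel marking the end of the stream

    def worker(fn, qin, qout):
        while True:
            item = qin.get()
            if item is done:
                qout.put(done)
                return
            qout.put(fn(item))

    threads = [threading.Thread(target=worker, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for item in items:
        qs[0].put(item)
    qs[0].put(done)
    results = []
    while True:
        out = qs[-1].get()
        if out is done:
            break
        results.append(out)
    for t in threads:
        t.join()
    return results

# Two phases: increment, then double
out = pipeline([lambda x: x + 1, lambda x: x * 2], range(5))
```

    Because each stage is a single FIFO worker, output order matches input order while the two phases overlap in time.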

  19. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  20. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  1. Acceleration and parallelization calculation of EFEN-SP_3 method

    International Nuclear Information System (INIS)

    Yang Wen; Zheng Youqi; Wu Hongchun; Cao Liangzhi; Li Yunzhao

    2013-01-01

    Due to the fact that the exponential function expansion nodal-SP_3 (EFEN-SP_3) method needs further improvement in computational efficiency to routinely carry out PWR whole core pin-by-pin calculation, the coarse mesh acceleration and spatial parallelization were investigated in this paper. The coarse mesh acceleration was built by considering a discontinuity factor on each coarse mesh interface and preserving neutron balance within each coarse mesh in space, angle and energy. The spatial parallelization based on MPI was implemented by guaranteeing load balancing and minimizing communication cost to fully take advantage of modern computing and storage abilities. Numerical results based on a commercial nuclear power reactor demonstrate a speedup ratio of about 40 for the coarse mesh acceleration and a parallel efficiency of higher than 60% with 40 CPUs for the spatial parallelization. With these two improvements, the EFEN code can complete a PWR whole core pin-by-pin calculation with 289 × 289 × 218 meshes and 4 energy groups within 100 s by using 48 CPUs (2.40 GHz frequency). (authors)
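
    Figures like these can be sanity-checked with the textbook definitions of speedup and parallel efficiency; the timing numbers in the example below are invented purely to illustrate the arithmetic.

```python
def speedup(t_serial, t_parallel):
    """Classical speedup: serial run time over parallel run time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency: speedup divided by the number of processors."""
    return speedup(t_serial, t_parallel) / n_procs

# A parallel efficiency of 60% on 40 CPUs corresponds to a speedup of 24;
# the 100.0 s serial time here is a made-up placeholder.
eff = efficiency(100.0, 100.0 / 24.0, 40)
```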

  2. Algorithms for parallel flow solvers on message passing architectures

    Science.gov (United States)

    Vanderwijngaart, Rob F.

    1995-01-01

    The purpose of this project has been to identify and test suitable technologies for implementation of fluid flow solvers -- possibly coupled with structures and heat equation solvers -- on MIMD parallel computers. In the course of this investigation much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer. A coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning every processor receives responsibility for exactly one block of grid points instead of several. This necessitates fine-grain pipelined program execution in order to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique that has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for the concentration on improving the performance of pipeline methods is their applicability in other types of flow solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines. From these it can be determined what is the optimal first-processor retardation that leads to the shortest total completion time for the pipeline process. Theoretical predictions of pipeline performance with and without optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of uncareful grid partitioning in flow solvers that employ pipeline algorithms. If grid blocks at boundaries are not at least as large in the wall-normal direction as those
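
    The fill/drain overhead behind pipelined execution can be illustrated with the standard balanced-pipeline completion-time model. This is a textbook approximation with invented names, not the paper's analytical load-imbalance expressions or its optimal first-processor retardation.

```python
def pipeline_completion_time(n_stages, n_items, t_stage):
    """Total time for n_items through a balanced n_stages pipeline:
    (n_stages - 1) steps to fill, then one result per step."""
    return (n_stages - 1 + n_items) * t_stage

def pipeline_efficiency(n_stages, n_items):
    """Fraction of stage-steps doing useful work; the fill/drain steps
    are the pipeline's inherent load imbalance."""
    return n_items / (n_stages - 1 + n_items)

# Hypothetical: 8 processors in the pipeline, 100 work items, unit stage time
t = pipeline_completion_time(8, 100, 1.0)
eff = pipeline_efficiency(8, 100)
```

    The model makes the fine-grain trade-off visible: efficiency approaches 1 only when the number of items greatly exceeds the pipeline depth.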

  3. Provably optimal parallel transport sweeps on regular grids

    International Nuclear Information System (INIS)

    Adams, M. P.; Adams, M. L.; Hawkins, W. D.; Smith, T.; Rauchwerger, L.; Amato, N. M.; Bailey, T. S.; Falgout, R. D.

    2013-01-01

    We have found provably optimal algorithms for full-domain discrete-ordinate transport sweeps on regular grids in 3D Cartesian geometry. We describe these algorithms and sketch a proof that they always execute the full eight-octant sweep in the minimum possible number of stages for a given P_x x P_y x P_z partitioning. Computational results demonstrate that our optimal scheduling algorithms execute sweeps in the minimum possible stage count. Observed parallel efficiencies agree well with our performance model. An older version of our PDT transport code achieves almost 80% parallel efficiency on 131,072 cores, on a weak-scaling problem with only one energy group, 80 directions, and 4096 cells/core. A newer version is less efficient at present - we are still improving its implementation - but achieves almost 60% parallel efficiency on 393,216 cores. These results conclusively demonstrate that sweeps can perform with high efficiency on core counts approaching 10^6. (authors)

  4. Provably optimal parallel transport sweeps on regular grids

    Energy Technology Data Exchange (ETDEWEB)

    Adams, M. P.; Adams, M. L.; Hawkins, W. D. [Dept. of Nuclear Engineering, Texas A and M University, 3133 TAMU, College Station, TX 77843-3133 (United States); Smith, T.; Rauchwerger, L.; Amato, N. M. [Dept. of Computer Science and Engineering, Texas A and M University, 3133 TAMU, College Station, TX 77843-3133 (United States); Bailey, T. S.; Falgout, R. D. [Lawrence Livermore National Laboratory (United States)

    2013-07-01

    We have found provably optimal algorithms for full-domain discrete-ordinate transport sweeps on regular grids in 3D Cartesian geometry. We describe these algorithms and sketch a proof that they always execute the full eight-octant sweep in the minimum possible number of stages for a given P{sub x} x P{sub y} x P{sub z} partitioning. Computational results demonstrate that our optimal scheduling algorithms execute sweeps in the minimum possible stage count. Observed parallel efficiencies agree well with our performance model. An older version of our PDT transport code achieves almost 80% parallel efficiency on 131,072 cores, on a weak-scaling problem with only one energy group, 80 directions, and 4096 cells/core. A newer version is less efficient at present - we are still improving its implementation - but achieves almost 60% parallel efficiency on 393,216 cores. These results conclusively demonstrate that sweeps can perform with high efficiency on core counts approaching 10{sup 6}. (authors)

  5. Cyclic di-AMP regulation of osmotic homeostasis is essential in Group B Streptococcus.

    Directory of Open Access Journals (Sweden)

    Laura Devaux

    2018-04-01

    Full Text Available Cyclic nucleotides are universally used as secondary messengers to control cellular physiology. Among these signalling molecules, cyclic di-adenosine monophosphate (c-di-AMP) is a specific bacterial second messenger recognized by host cells during infections, and its synthesis is assumed to be necessary for bacterial growth by controlling a conserved and essential cellular function. In this study, we sought to identify the main c-di-AMP dependent pathway in Streptococcus agalactiae, the etiological agent of neonatal septicaemia and meningitis. By conditionally inactivating dacA, the only diadenylate cyclase gene, we confirm that c-di-AMP synthesis is essential in standard growth conditions. However, c-di-AMP synthesis becomes rapidly dispensable due to the accumulation of compensatory mutations. We identified several mutations restoring the viability of a ΔdacA mutant, in particular a loss-of-function mutation in the osmoprotectant transporter BusAB. Identification of c-di-AMP binding proteins revealed a conserved set of potassium and osmolyte transporters, as well as the BusR transcriptional factor. We showed that BusR negatively regulates busAB transcription by direct binding to the busAB promoter. Loss of BusR repression leads to toxic busAB expression in the absence of c-di-AMP if osmoprotectants, such as glycine betaine, are present in the medium. In contrast, deletion of the gdpP c-di-AMP phosphodiesterase leads to hyperosmotic susceptibility, a phenotype dependent on a functional BusR. Taken together, we demonstrate that c-di-AMP is essential for osmotic homeostasis and that the predominant mechanism is dependent on the c-di-AMP binding transcriptional factor BusR. The regulation of osmotic homeostasis is likely the conserved and essential function of c-di-AMP, but each species has evolved specific c-di-AMP mechanisms of osmoregulation to adapt to its environment.

  6. Proceedings of the workshop on Compilation of (Symbolic) Languages for Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tick, E. (comp.)

    1991-11-01

    This report comprises the abstracts and papers for the talks presented at the Workshop on Compilation of (Symbolic) Languages for Parallel Computers, held October 31--November 1, 1991, in San Diego. These unrefereed contributions were provided by the participants for the purpose of this workshop; many of them will be published elsewhere in peer-reviewed conferences and publications. Our goal in planning this workshop was to bring together researchers from different disciplines with common problems in compilation. In particular, we wished to encourage interaction between researchers working in compilation of symbolic languages and those working on compilation of conventional, imperative languages. The fundamental problems facing researchers interested in compilation of logic, functional, and procedural programming languages for parallel computers are essentially the same. However, differences in the basic programming paradigms have led to different communities emphasizing different species of the parallel compilation problem. For example, parallel logic and functional languages provide dataflow-like formalisms in which control dependencies are unimportant. Hence, a major focus of research in compilation has been on techniques that try to infer when sequential control flow can safely be imposed. Granularity analysis for scheduling is a related problem. The single-assignment property leads to a need for analysis of memory use in order to detect opportunities for reuse. Much of the work in each of these areas relies on the use of abstract interpretation techniques.

  7. Ratings of Essentialism for Eight Religious Identities.

    Science.gov (United States)

    Toosi, Negin R; Ambady, Nalini

    2011-01-01

    As a social identity, religion is unique because it contains a spectrum of choice. In some religious communities, individuals are considered members by virtue of having parents of that background, and religion, culture, and ethnicity are closely intertwined. Other faith communities actively invite people of other backgrounds to join, expecting individuals to choose the religion that best fits their personal beliefs. These various methods of identification influence beliefs about the essentialist nature of religious identity. Essentialism is the view that social groups have deep, immutable, and inherent defining properties. In this study, college students (N=55) provided ratings of essentialism for eight religious identities: Atheist, Buddhist, Catholic, Hindu, Jewish, Muslim, Protestant, and Spiritual-but-not-religious. Significant differences in essentialism were found between the target groups. Results and implications for intergroup relations are discussed.

  8. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  9. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  10. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
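
    The decomposition idea behind such parallel sieves can be sketched serially: the range is split into segments, each segment is sieved independently using only a small shared base-prime table, and each loop iteration below stands in for one processing element. Function names and the chunking rule are illustrative, not the hypercube implementation.

```python
import math

def simple_sieve(n):
    """Plain Sieve of Eratosthenes, used to build the shared base-prime table."""
    flags = [True] * (n + 1)
    flags[0:2] = [False, False]
    for p in range(2, int(math.isqrt(n)) + 1):
        if flags[p]:
            for m in range(p * p, n + 1, p):
                flags[m] = False
    return [i for i, f in enumerate(flags) if f]

def sieve_segment(lo, hi, base_primes):
    """Mark composites in [lo, hi) using the base primes; return primes found.
    This is the work one processing element would do on its own segment."""
    is_prime = [True] * (hi - lo)
    for p in base_primes:
        start = max(p * p, ((lo + p - 1) // p) * p)  # first multiple of p in range
        for m in range(start, hi, p):
            is_prime[m - lo] = False
    return [n for n in range(max(lo, 2), hi) if is_prime[n - lo]]

def segmented_sieve(limit, n_chunks):
    """Split [0, limit] into n_chunks independent segments (serial stand-in
    for the scattered decomposition across processors)."""
    base = simple_sieve(int(math.isqrt(limit)))
    chunk = math.ceil((limit + 1) / n_chunks)
    primes = []
    for k in range(n_chunks):  # each iteration models one processor
        lo, hi = k * chunk, min((k + 1) * chunk, limit + 1)
        primes.extend(sieve_segment(lo, hi, base))
    return primes
```

    Because segments share nothing but the small base-prime table, the per-segment work parallelizes with essentially no communication, which is what makes the sieve efficient on large ensemble machines.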

  11. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  12. Link failure detection in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Megerian, Mark G.; Smith, Brian E.

    2010-11-09

    Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
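
    The group assignment described above is a checkerboard two-coloring of the mesh. A minimal sketch (function names and the message list are illustrative, not the patented method) shows that one round of first-group sends exercises every link of the mesh exactly once:

```python
def group_of(x, y):
    """Checkerboard coloring: adjacent mesh nodes always get different groups."""
    return (x + y) % 2

def link_test_round(width, height):
    """Every group-0 node sends a test message to each in-bounds neighbour;
    since neighbours are group-1, each mesh link carries exactly one message."""
    messages = []
    for x in range(width):
        for y in range(height):
            if group_of(x, y) == 0:
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < width and 0 <= ny < height:
                        messages.append(((x, y), (nx, ny)))
    return messages

# A 4 x 3 mesh has 9 horizontal + 8 vertical links, so 17 test messages
msgs = link_test_round(4, 3)
```

    A missing message at a group-1 receiver then pinpoints the failed link, which is the basis for notifying the user.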

  13. A National Quality Improvement Collaborative for the clinical use of outcome measurement in specialised mental healthcare: results from a parallel group design and a nested cluster randomised controlled trial.

    Science.gov (United States)

    Metz, Margot J; Veerbeek, Marjolein A; Franx, Gerdien C; van der Feltz-Cornelis, Christina M; de Beurs, Edwin; Beekman, Aartjan T F

    2017-05-01

    Although the importance and advantages of measurement-based care in mental healthcare are well established, implementation in daily practice is complex and far from optimal. To accelerate the implementation of outcome measurement in routine clinical practice, a government-sponsored National Quality Improvement Collaborative was initiated in Dutch specialised mental healthcare. To investigate the effects of this initiative, we combined a matched-pair parallel group design (21 teams) with a cluster randomised controlled trial (RCT) (6 teams). At the beginning and end, the primary outcome 'actual use and perceived clinical utility of outcome measurement' was assessed. In both designs, intervention teams demonstrated a significantly higher level of implementation of outcome measurement than control teams. Overall effects were large (parallel group d = 0.99; RCT d = 1.25). The National Collaborative successfully improved the use of outcome measurement in routine clinical practice. None. © The Royal College of Psychiatrists 2017. This is an open access article distributed under the terms of the Creative Commons Non-Commercial, No Derivatives (CC BY-NC-ND) license.

  14. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-11-12

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer composed of compute nodes that execute a parallel application, each compute node including application processors that execute the parallel application and at least one management processor dedicated to gathering information regarding data communications. The PAMI is composed of data communications endpoints, each endpoint composed of a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources. Embodiments function by gathering call site statistics describing data communications resulting from execution of data communications instructions and identifying in dependence upon the call site statistics a data communications algorithm for use in executing a data communications instruction at a call site in the parallel application.

  15. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of...... in the optimal O(psortN + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and psortN is the parallel I/O complexity of sorting N elements using P processors....

  16. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  17. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.

  18. Parallel computing for homogeneous diffusion and transport equations in neutronics

    International Nuclear Information System (INIS)

    Pinchedez, K.

    1999-06-01

    Parallel computing meets the ever-increasing requirements for neutronic computer code speed and accuracy. In this work, two different approaches have been considered. We first parallelized the sequential algorithm used by the neutronics code CRONOS developed at the French Atomic Energy Commission. The algorithm computes the dominant eigenvalue associated with the simplified P_N transport equations by a mixed finite element method. Several parallel algorithms have been developed on distributed memory machines. The performance of the parallel algorithms has been studied experimentally by implementation on a T3D Cray and theoretically by complexity models. A comparison of various parallel algorithms has confirmed the chosen implementations. We next applied a domain sub-division technique to the two-group diffusion eigenproblem. In the modal synthesis-based method, the global spectrum is determined from the partial spectra associated with sub-domains. Then the eigenproblem is expanded on a family composed, on the one hand, of eigenfunctions associated with the sub-domains and, on the other hand, of functions corresponding to the contribution from the interface between the sub-domains. For a 2-D homogeneous core, this modal method has been validated and its accuracy has been measured. (author)

  19. Essentialism goes social: belief in social determinism as a component of psychological essentialism.

    Science.gov (United States)

    Rangel, Ulrike; Keller, Johannes

    2011-06-01

    Individuals tend to explain the characteristics of others with reference to an underlying essence, a tendency that has been termed psychological essentialism. Drawing on current conceptualizations of essentialism as a fundamental mode of social thinking, and on prior studies investigating belief in genetic determinism (BGD) as a component of essentialism, we argue that BGD cannot constitute the sole basis of individuals' essentialist reasoning. Accordingly, we propose belief in social determinism (BSD) as a complementary component of essentialism, which relies on the belief that a person's essential character is shaped by social factors (e.g., upbringing, social background). We developed a scale to measure this social component of essentialism. Results of five correlational studies indicate that (a) BGD and BSD are largely independent, (b) BGD and BSD are related to important correlates of essentialist thinking (e.g., dispositionism, perceived group homogeneity), (c) BGD and BSD are associated with indicators of fundamental epistemic and ideological motives, and (d) the endorsement of each lay theory is associated with vital social-cognitive consequences (particularly stereotyping and prejudice). Two experimental studies examined the idea that the relationship between BSD and prejudice is bidirectional in nature. Study 6 reveals that rendering social-deterministic explanations salient results in increased levels of ingroup favoritism in individuals who chronically endorse BSD. Results of Study 7 show that priming of prejudice enhances endorsement of social-deterministic explanations particularly in persons habitually endorsing prejudiced attitudes. 2011 APA, all rights reserved

  20. Delta's Key to the Next Generation TOEFL[R] Test: Essential Grammar for the iBT

    Science.gov (United States)

    Gallagher, Nancy

    2012-01-01

    Although the TOEFL iBT does not have a discrete grammar section, knowledge of English sentence structure is important throughout the test. Essential Grammar for the iBT reviews the skills that are fundamental to success on tests. Content includes noun and verb forms, clauses, agreement, parallel structure, punctuation, and much more. The book may…

  1. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  2. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
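
    The "parallel compact" style of peak detection can be sketched as a per-sample predicate followed by an index compaction. The serial Python below (threshold rule and names are illustrative, not NPE's actual kernels) shows the two steps that would each map to one data-parallel GPU primitive:

```python
def detect_peaks(signal, threshold):
    """Peak detection as predicate + compaction.

    Step 1: an independent per-sample test (a local maximum above threshold),
    which is trivially data-parallel on a GPU.
    Step 2: compaction of the indices where the predicate holds, which on a
    GPU would use a parallel compact (prefix-scan) primitive.
    """
    is_peak = [threshold < signal[i] and signal[i - 1] < signal[i] >= signal[i + 1]
               for i in range(1, len(signal) - 1)]          # predicate step
    return [i + 1 for i, flag in enumerate(is_peak) if flag]  # compact step

# Hypothetical single-channel trace with two supra-threshold local maxima
sig = [0.0, 0.2, 1.5, 0.3, 0.1, 2.0, 2.0, 0.4, 0.0]
peaks = detect_peaks(sig, 1.0)
```

    Because every sample's predicate is independent, the speedup of the GPU version scales with the number of channels and samples processed at once.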

  3. A possibility of parallel and anti-parallel diffraction measurements on ...

    Indian Academy of Sciences (India)

    However, a bent perfect crystal (BPC) monochromator at monochromatic focusing condition can provide a quite flat and equal resolution property at both parallel and anti-parallel positions and thus one can have a chance to use both sides for the diffraction experiment. From the data of the FWHM and the / measured ...

  4. Roflumilast for the treatment of COPD in an Asian population: a randomized, double-blind, parallel-group study.

    Science.gov (United States)

    Zheng, Jinping; Yang, Jinghua; Zhou, Xiangdong; Zhao, Li; Hui, Fuxin; Wang, Haoyan; Bai, Chunxue; Chen, Ping; Li, Huiping; Kang, Jian; Brose, Manja; Richard, Frank; Goehring, Udo-Michael; Zhong, Nanshan

    2014-01-01

    Roflumilast is the only oral phosphodiesterase 4 inhibitor indicated for use in the treatment of COPD. Previous studies of roflumilast have predominantly involved European and North American populations. A large study was necessary to determine the efficacy and safety of roflumilast in a predominantly ethnic Chinese population. In a placebo-controlled, double-blind, parallel-group, multicenter, phase 3 study, patients of Chinese, Malay, and Indian ethnicity (N = 626) with severe to very severe COPD were randomized 1:1 to receive either roflumilast 500 μg once daily or placebo for 24 weeks. The primary end point was change in prebronchodilator FEV1 from baseline to study end. Three hundred thirteen patients were assigned to each treatment. Roflumilast provided a sustained increase over placebo in mean prebronchodilator FEV1 (0.071 L; 95% CI, 0.046, 0.095 L; P < .0001). Similar improvements were observed in the secondary end points of postbronchodilator FEV1 (0.068 L; 95% CI 0.044, 0.092 L; P < .0001) and prebronchodilator and postbronchodilator FVC (0.109 L; 95% CI, 0.061, 0.157 L; P < .0001 and 0.101 L; 95% CI, 0.055, 0.146 L; P < .0001, respectively). The adverse event profile was consistent with previous roflumilast studies. The most frequently reported treatment-related adverse event was diarrhea (6.0% and 1.0% of patients in the roflumilast and placebo groups, respectively). Roflumilast plays an important role in lung function improvement and is well tolerated in an Asian population. It provides an optimal treatment choice for patients with severe to very severe COPD.

  5. Structural Synthesis of 3-DoF Spatial Fully Parallel Manipulators

    Directory of Open Access Journals (Sweden)

    Alfonso Hernandez

    2014-07-01

    In this paper, the architectures of three degrees of freedom (3-DoF) spatial, fully parallel manipulators (PMs), whose limbs are structurally identical, are obtained systematically. To do this, the methodology followed makes use of the concepts of the displacement group theory of rigid body motion. This theory works with so-called ‘motion generators’. That is, every limb is a kinematic chain that produces a certain type of displacement in the mobile platform or end-effector. The laws of group algebra will determine the actual motion pattern of the end-effector. The structural synthesis is a combinatorial process of different kinematic chains’ topologies employed in order to get all of the 3-DoF motion pattern possibilities in the end-effector of the fully parallel manipulator.

  6. A new decomposition method for parallel processing multi-level optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Min Soo; Choi, Dong Hoon

    2002-01-01

    In practical designs, most multidisciplinary problems involve large and complicated design systems. Since multidisciplinary problems comprise hundreds of analyses and thousands of variables, the grouping of the analyses and their order within each group affect the speed of the total design cycle. Therefore, it is very important to reorder and regroup the original design processes in order to minimize the total computational cost, by decomposing large multidisciplinary problems into several MultiDisciplinary Analysis SubSystems (MDASS) and by processing them in parallel. In this study, a new decomposition method is proposed for parallel processing of multidisciplinary design optimization, such as Collaborative Optimization (CO) and the Individual Discipline Feasible (IDF) method. Numerical results for two example problems are presented to show the feasibility of the proposed method.
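
    One illustrative way to regroup analyses for parallel processing (a layered topological sort of the analysis dependency graph; this is a generic sketch, not the paper's own decomposition algorithm) is:

```python
from collections import defaultdict

def parallel_levels(deps):
    """Group analyses into levels that can run concurrently.

    deps[u] lists the analyses that need u's output.  Analyses in the
    same level have no dependencies on one another, so each level is a
    candidate MDASS-style group to process in parallel.
    """
    indeg = defaultdict(int)
    nodes = set(deps)
    for u, vs in deps.items():
        for v in vs:
            indeg[v] += 1
            nodes.add(v)
    level = [n for n in nodes if indeg[n] == 0]
    out = []
    while level:
        out.append(sorted(level))
        nxt = []
        for u in level:
            for v in deps.get(u, ()):
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        level = nxt
    return out
```

    For example, if analyses A and B both feed C, and C feeds D, the levels come out as [['A', 'B'], ['C'], ['D']]: A and B can run in parallel, then C, then D.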

  7. Parallel transmission techniques in magnetic resonance imaging: experimental realization, applications and perspectives

    International Nuclear Information System (INIS)

    Ullmann, P.

    2007-06-01

    The primary objective of this work was the first experimental realization of parallel RF transmission for accelerating spatially selective excitation in magnetic resonance imaging. Furthermore, basic aspects regarding the performance of this technique were investigated, potential risks regarding the specific absorption rate (SAR) were considered and feasibility studies under application-oriented conditions as first steps towards a practical utilisation of this technique were undertaken. At first, based on the RF electronics platform of the Bruker Avance MRI systems, the technical foundations were laid to perform simultaneous transmission of individual RF waveforms on different RF channels. Another essential requirement for the realization of Parallel Excitation (PEX) was the design and construction of suitable RF transmit arrays with elements driven by separate transmit channels. In order to image the PEX results two imaging methods were implemented based on a spin-echo and a gradient-echo sequence, in which a parallel spatially selective pulse was included as an excitation pulse. In the course of this work PEX experiments were successfully performed on three different MRI systems, a 4.7 T and a 9.4 T animal system and a 3 T human scanner, using 5 different RF coil setups in total. In the last part of this work investigations regarding possible applications of Parallel Excitation were performed. A first study comprised experiments of slice-selective B1 inhomogeneity correction by using 3D-selective Parallel Excitation. The investigations were performed in a phantom as well as in a rat fixed in paraformaldehyde solution. In conjunction with these experiments a novel method of calculating RF pulses for spatially selective excitation based on a so-called Direct Calibration approach was developed, which is particularly suitable for this type of experiments. In the context of these experiments it was demonstrated how to combine the advantages of parallel transmission

  8. Vlasov modelling of parallel transport in a tokamak scrape-off layer

    International Nuclear Information System (INIS)

    Manfredi, G; Hirstoaga, S; Devaux, S

    2011-01-01

    A one-dimensional Vlasov-Poisson model is used to describe the parallel transport in a tokamak scrape-off layer. Thanks to a recently developed 'asymptotic-preserving' numerical scheme, it is possible to lift numerical constraints on the time step and grid spacing, which are no longer limited by, respectively, the electron plasma period and Debye length. The Vlasov approach provides a good velocity-space resolution even in regions of low density. The model is applied to the study of parallel transport during edge-localized modes, with particular emphasis on the particles and energy fluxes on the divertor plates. The numerical results are compared with analytical estimates based on a free-streaming model, with good general agreement. An interesting feature is the observation of an early electron energy flux, due to suprathermal electrons escaping the ions' attraction. In contrast, the long-time evolution is essentially quasi-neutral and dominated by the ion dynamics.

  9. Vlasov modelling of parallel transport in a tokamak scrape-off layer

    Energy Technology Data Exchange (ETDEWEB)

    Manfredi, G [Institut de Physique et Chimie des Materiaux, CNRS and Universite de Strasbourg, BP 43, F-67034 Strasbourg (France); Hirstoaga, S [INRIA Nancy Grand-Est and Institut de Recherche en Mathematiques Avancees, 7 rue Rene Descartes, F-67084 Strasbourg (France); Devaux, S, E-mail: Giovanni.Manfredi@ipcms.u-strasbg.f, E-mail: hirstoaga@math.unistra.f, E-mail: Stephane.Devaux@ccfe.ac.u [JET-EFDA, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom)

    2011-01-15

    A one-dimensional Vlasov-Poisson model is used to describe the parallel transport in a tokamak scrape-off layer. Thanks to a recently developed 'asymptotic-preserving' numerical scheme, it is possible to lift numerical constraints on the time step and grid spacing, which are no longer limited by, respectively, the electron plasma period and Debye length. The Vlasov approach provides a good velocity-space resolution even in regions of low density. The model is applied to the study of parallel transport during edge-localized modes, with particular emphasis on the particles and energy fluxes on the divertor plates. The numerical results are compared with analytical estimates based on a free-streaming model, with good general agreement. An interesting feature is the observation of an early electron energy flux, due to suprathermal electrons escaping the ions' attraction. In contrast, the long-time evolution is essentially quasi-neutral and dominated by the ion dynamics.

  10. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Charles J.; Faraj, Daniel A.; Inglett, Todd A.; Ratterman, Joseph D.

    2018-01-30

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
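
    The link-selection step described above can be sketched for a binary-tree network with heap-style node numbering (children of node n are 2n and 2n+1). The numbering scheme and function name are assumptions for illustration; the patent does not specify this addressing:

```python
def next_hop(src, dst):
    """Pick the adjacent node to which `src` forwards a packet bound
    for `dst` in a binary-tree combining network.

    If dst lies in src's subtree, step down to the child on dst's path;
    otherwise forward upward toward the common ancestor.
    """
    if src == dst:
        return src
    # collect the ancestors of dst up to the root (node 1)
    anc = set()
    node = dst
    while node >= 1:
        anc.add(node)
        node //= 2
    if src in anc:
        child = dst
        while child // 2 != src:
            child //= 2
        return child
    return src // 2
```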

  11. A portable, parallel, object-oriented Monte Carlo neutron transport code in C++

    International Nuclear Information System (INIS)

    Lee, S.R.; Cummings, J.C.; Nolen, S.D.

    1997-01-01

    We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed

  12. Long-time atomistic simulations with the Parallel Replica Dynamics method

    Science.gov (United States)

    Perez, Danny

    Molecular Dynamics (MD) -- the numerical integration of atomistic equations of motion -- is a workhorse of computational materials science. Indeed, MD can in principle be used to obtain any thermodynamic or kinetic quantity, without introducing any approximation or assumptions beyond the adequacy of the interaction potential. It is therefore an extremely powerful and flexible tool to study materials with atomistic spatio-temporal resolution. These enviable qualities however come at a steep computational price, hence limiting the system sizes and simulation times that can be achieved in practice. While the size limitation can be efficiently addressed with massively parallel implementations of MD based on spatial decomposition strategies, allowing for the simulation of trillions of atoms, the same approach usually cannot extend the timescales much beyond microseconds. In this article, we discuss an alternative parallel-in-time approach, the Parallel Replica Dynamics (ParRep) method, that aims at addressing the timescale limitation of MD for systems that evolve through rare state-to-state transitions. We review the formal underpinnings of the method and demonstrate that it can provide arbitrarily accurate results for any definition of the states. When an adequate definition of the states is available, ParRep can simulate trajectories with a parallel speedup approaching the number of replicas used. We demonstrate the usefulness of ParRep by presenting different examples of materials simulations where access to long timescales was essential to access the physical regime of interest and discuss practical considerations that must be addressed to carry out these simulations. Work supported by the United States Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division.
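
    The core accounting trick of ParRep can be shown with a toy memoryless escape process (everything here is an illustrative stand-in: the discrete per-step escape probability replaces real MD, and dephasing/correlation stages are omitted):

```python
import random

def parrep_escape_time(rate, n_replicas, dt=1e-3, rng=random):
    """Toy Parallel Replica Dynamics for one metastable state.

    n_replicas independent copies of the trajectory each escape with
    probability rate*dt per step.  The *summed* simulated time over all
    replicas until the first escape is returned: for a memoryless
    (rare-event) process this has the same distribution as a single
    replica's escape time, while the wall-clock cost per escape shrinks
    by roughly n_replicas.
    """
    t_accum = 0.0
    while True:
        for _ in range(n_replicas):  # these run concurrently in real ParRep
            t_accum += dt
            if rng.random() < rate * dt:
                return t_accum
```

    Averaged over many events, the accumulated escape time recovers the single-trajectory mean 1/rate, which is the sense in which ParRep gives arbitrarily accurate state-to-state dynamics with a speedup approaching the replica count.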

  13. Parallel implementation of the PHOENIX generalized stellar atmosphere program. II. Wavelength parallelization

    International Nuclear Information System (INIS)

    Baron, E.; Hauschildt, Peter H.

    1998-01-01

    We describe an important addition to the parallel implementation of our generalized nonlocal thermodynamic equilibrium (NLTE) stellar atmosphere and radiative transfer computer program PHOENIX. In a previous paper in this series we described data and task parallel algorithms we have developed for radiative transfer, spectral line opacity, and NLTE opacity and rate calculations. These algorithms divided the work spatially or by spectral lines, that is, distributing the radial zones, individual spectral lines, or characteristic rays among different processors and employ, in addition, task parallelism for logically independent functions (such as atomic and molecular line opacities). For finite, monotonic velocity fields, the radiative transfer equation is an initial value problem in wavelength, and hence each wavelength point depends upon the previous one. However, for sophisticated NLTE models of both static and moving atmospheres needed to accurately describe, e.g., novae and supernovae, the number of wavelength points is very large (200,000 - 300,000) and hence parallelization over wavelength can lead both to considerable speedup in calculation time and the ability to make use of the aggregate memory available on massively parallel supercomputers. Here, we describe an implementation of a pipelined design for the wavelength parallelization of PHOENIX, where the necessary data from the processor working on a previous wavelength point is sent to the processor working on the succeeding wavelength point as soon as it is known. Our implementation uses a MIMD design based on a relatively small number of standard message passing interface (MPI) library calls and is fully portable between serial and parallel computers. copyright 1998 The American Astronomical Society
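
    The pipelined design can be sketched with worker threads chained by queues standing in for MPI ranks and messages (a minimal illustration of the wavelength pipeline, not PHOENIX's actual code): stage p starts working on wavelength point i as soon as stage p-1 has finished it, so all stages are busy at once even though each point depends on its predecessor.

```python
import queue
import threading

def pipeline(stages, items):
    """Run `items` through a chain of single-threaded stages.

    Each stage forwards its result to the next stage's queue as soon as
    it is known (the analogue of the MPI send in the paper's design);
    a None sentinel flushes the pipeline.  Output order is preserved
    because every stage is a FIFO.
    """
    qs = [queue.Queue() for _ in range(len(stages) + 1)]

    def run(f, qin, qout):
        while True:
            x = qin.get()
            if x is None:
                qout.put(None)
                return
            qout.put(f(x))

    for f, qin, qout in zip(stages, qs, qs[1:]):
        threading.Thread(target=run, args=(f, qin, qout), daemon=True).start()
    for it in items:
        qs[0].put(it)
    qs[0].put(None)
    out = []
    while True:
        r = qs[-1].get()
        if r is None:
            return out
        out.append(r)
```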

  14. Development of structural schemes of parallel structure manipulators using screw calculus

    Science.gov (United States)

    Rashoyan, G. V.; Shalyukhin, K. A.; Gaponenko, EV

    2018-03-01

    The paper considers an approach to the structural analysis and synthesis of parallel structure robots based on the mathematical apparatus of screw groups and the concept of reciprocity of screws. Results are presented for the synthesis of parallel structure robots with different numbers of degrees of freedom, corresponding to different groups of screws. Power screws are applied for this purpose, based on the principle of static-kinematic analogy; the power screws are analogous to the unit vectors (orts) of the axes of the non-driven kinematic pairs of the corresponding connecting chain. Accordingly, the kinematic screws of the output chain of the robot, which are reciprocal to the power screws of the kinematic sub-chains, are determined simultaneously. The solution of certain synthesis problems is illustrated with practical applications. Closed groups of screws can be of eight types. The three-membered groups of screws are of greatest significance, as are the four-membered screw groups [1] and the six-membered screw groups. Three-membered screw groups correspond to translational guiding mechanisms, spherical mechanisms, and planar mechanisms. The four-membered group corresponds to the motion of the SCARA robot. The six-membered group includes all possible motions. From the works of A. P. Kotelnikov and F. M. Dimentberg, it is known that closed fifth-order screw groups do not exist. The article presents examples of the mechanisms corresponding to the given groups.

  15. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
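
    The seeding step being parallelized can be sketched as follows (the function name is illustrative, and NumPy's vectorized arithmetic stands in for the CUDA/Thrust, OpenMP and Cray XMT back-ends the record describes):

```python
import numpy as np

def kmeans_pp_seeds(X, k, rng):
    """k-means++ seed selection (Arthur & Vassilvitskii, 2007).

    After a uniform first pick, each subsequent seed is drawn with
    probability proportional to its squared distance from the nearest
    already-chosen seed (the D^2 weighting).  The distance update over
    all n points is the data-parallel kernel that a GPU or multicore
    implementation distributes.
    """
    n = X.shape[0]
    seeds = [X[rng.integers(n)]]
    d2 = ((X - seeds[0]) ** 2).sum(axis=1)  # squared distance to nearest seed
    for _ in range(k - 1):
        seeds.append(X[rng.choice(n, p=d2 / d2.sum())])
        d2 = np.minimum(d2, ((X - seeds[-1]) ** 2).sum(axis=1))
    return np.array(seeds)
```

    With well-separated clusters, the D^2 weighting makes it very likely that each cluster contributes exactly one seed, which is what improves on uniformly random initialization.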

  16. Dynamic stability calculations for power grids employing a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, K

    1982-06-01

    The aim of dynamic contingency calculations in power systems is to estimate the effects of assumed disturbances, such as loss of generation. Owing to the large dimensions of the problem, these simulations require considerable computing time and cost, with the effect that they are at present used only at the planning stage and not for routine checks in power control stations. In view of the homogeneity of the problem, in which a multitude of identical generator models with different parameters are to be integrated simultaneously, the use of a parallel computer looks very attractive. The results of this study, employing a prototype parallel computer (SMS 201), are presented. It consists of up to 128 identical microcomputers bus-connected to a control computer. Each of the modules is programmed to simulate a node of the power grid. Generators with their associated controls are represented by models of 13 states each. Passive nodes are complemented by 'phantom' generators, so that the whole power grid is homogeneous, thus removing the need for load-flow iterations. Programming of the microcomputers is essentially performed in FORTRAN.

  17. Cognitive synergy in groups and group-to-individual transfer of decision-making competencies

    NARCIS (Netherlands)

    Curseu, P.L.; Meslec, M.N.; Pluut, Helen; Lucas, G.J.M.

    2015-01-01

    In a field study (148 participants organized in 38 groups) we tested the effect of group synergy and one's position in relation to the collaborative zone of proximal development (CZPD) on the change of individual decision-making competencies. We used two parallel sets of decision tasks reported in

  18. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition-time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts is shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed, and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
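
    The SENSE idea reviewed above can be sketched, for the Cartesian case, as a per-pixel least-squares unfold (function name and array layout are assumptions for illustration, not the review's notation): with R-fold undersampling along y, each aliased pixel is a sensitivity-weighted sum of R true pixels, and because the coils see those pixels differently, a small independent linear solve per pixel separates them.

```python
import numpy as np

def sense_unfold(aliased, sens, R):
    """Minimal Cartesian SENSE unfolding sketch.

    aliased: (ncoils, ny // R, nx) aliased coil images
    sens:    (ncoils, ny, nx) coil sensitivity maps

    For every aliased pixel, the R overlapping true pixels are recovered
    by least squares on the coil sensitivities; all (ny//R)*nx systems
    are independent, which makes the reconstruction parallel-friendly.
    """
    ncoils, ny, nx = sens.shape
    step = ny // R
    img = np.zeros((ny, nx), dtype=complex)
    for y in range(step):
        for x in range(nx):
            S = sens[:, y::step, x]  # (ncoils, R) sensitivities of folded pixels
            rho, *_ = np.linalg.lstsq(S, aliased[:, y, x], rcond=None)
            img[y::step, x] = rho
    return img
```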

  19. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  20. A Clinical Pilot Study Comparing Sweet Bee Venom parallel treatment with only Acupuncture Treatment in patient diagnosed with lumbar spine sprain

    Directory of Open Access Journals (Sweden)

    Shin Yong-jeen

    2011-06-01

    Objectives: This study was carried out to compare Sweet Bee Venom (hereafter Sweet BV) acupuncture parallel treatment with acupuncture-only treatment in patients diagnosed with lumbar spine sprain, and to find a better treatment. Methods: The subjects were patients diagnosed with lumbar spine sprain and hospitalized at Suncheon oriental medical hospital, who were randomly divided into a Sweet BV parallel treatment group and an acupuncture-only group; other treatment conditions were kept the same. A VAS (Visual Analogue Scale) was then used to compare the difference in treatment period between the two groups from VAS 10 to VAS 0, from VAS 10 to VAS 5, and from VAS 5 to VAS 0. Result & Conclusion: The Sweet BV parallel treatment group and the acupuncture-only treatment group were compared with regard to their respective treatment periods. The treatment period from VAS 10 to VAS 5 was significantly reduced in the Sweet BV parallel treatment group compared with the acupuncture-only group, but the treatment period from VAS 5 to VAS 0 did not show a significant difference. Therefore, Sweet BV parallel treatment appears effective in shortening the treatment period and controlling early pain compared with acupuncture-only treatment.

  1. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.

  2. Influence of Paralleling Dies and Paralleling Half-Bridges on Transient Current Distribution in Multichip Power Modules

    DEFF Research Database (Denmark)

    Li, Helong; Zhou, Wei; Wang, Xiongfei

    2018-01-01

    This paper addresses the transient current distribution in the multichip half-bridge power modules, where two types of paralleling connections with different current commutation mechanisms are considered: paralleling dies and paralleling half-bridges. It reveals that with paralleling dies, both t...

  3. Linear parallel processing machines I

    Energy Technology Data Exchange (ETDEWEB)

    Von Kunze, M

    1984-01-01

    As is well known, non-context-free grammars for generating formal languages possess an intrinsic computational power that presents serious difficulties for efficient parsing algorithms as well as for the development of an algebraic theory of context-sensitive languages. In this paper a framework is given for investigating the computational power of formal grammars, in order to start a thorough analysis of grammars consisting of derivation rules of the form aB → A_1 ... A_n b_1 ... b_m. These grammars may be thought of as automata operating by parallel processing, if one considers the variables as operators acting on the terminals while reading them right-to-left. This kind of automaton and its two-dimensional programming language prove to be useful by allowing a concise linear-time algorithm for integer multiplication. Linear parallel processing machines (LP-machines), which are, in their general form, equivalent to Turing machines, include finite automata and pushdown automata (with states encoded) as special cases. Bounded LP-machines yield deterministic accepting automata for nondeterministic context-free languages, and they define an interesting class of context-sensitive languages. A characterization of this class in terms of generating grammars is established using derivation trees with crossings as a helpful tool. From the algebraic point of view, deterministic LP-machines are effectively represented semigroups with distinguished subsets. Concerning the dualism between generating and accepting devices of formal languages within the algebraic setting, the concept of accepting automata turns out to reduce essentially to embeddability in an effectively represented extension monoid, even in the classical cases.

  4. Fast electrostatic force calculation on parallel computer clusters

    International Nuclear Information System (INIS)

    Kia, Amirali; Kim, Daejoong; Darve, Eric

    2008-01-01

    The fast multipole method (FMM) and smooth particle mesh Ewald (SPME) are well-known fast algorithms for evaluating long-range electrostatic interactions in molecular dynamics and other fields. FMM is a multi-scale method which reduces the computational cost by approximating the potential due to a group of particles at a large distance using a few multipole functions. This algorithm scales like O(N) for N particles. The SPME algorithm is an O(N ln N) method based on an interpolation of the Fourier-space part of the Ewald sum and evaluation of the resulting convolutions using the fast Fourier transform (FFT). These algorithms suffer from relatively poor efficiency on large parallel machines, especially for mid-size problems of around hundreds of thousands of atoms. A variation of the FMM, called PWA, based on plane-wave expansions is presented in this paper. A new parallelization strategy for PWA, which takes advantage of the specific form of this expansion, is described. Its parallel efficiency is compared with SPME through detailed time measurements on two different computer clusters.

  5. A double blind parallel group placebo controlled comparison of sedative and mnesic effects of etifoxine and lorazepam in healthy subjects [corrected].

    Science.gov (United States)

    Micallef, J; Soubrouillard, C; Guet, F; Le Guern, M E; Alquier, C; Bruguerolle, B; Blin, O

    2001-06-01

    This paper describes the psychomotor and mnesic effects of single oral doses of etifoxine (50 and 100 mg) and lorazepam (2 mg) in healthy subjects. Forty-eight healthy subjects were included in this randomized double blind, placebo controlled parallel group study [corrected]. The effects of drugs were assessed by using a battery of subjective and objective tests that explored mood and vigilance (Visual Analog Scale), attention (Barrage test), psychomotor performance (Choice Reaction Time) and memory (digit span, immediate and delayed free recall of a word list). Whereas vigilance, psychomotor performance and free recall were significantly impaired by lorazepam, neither dosage of etifoxine (50 and 100 mg) produced such effects. These results suggest that 50 and 100 mg single dose of etifoxine do not induce amnesia and sedation as compared to lorazepam.

  6. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  7. Tapping generalized essentialism to predict outgroup prejudices.

    Science.gov (United States)

    Hodson, Gordon; Skorska, Malvina N

    2015-06-01

    Psychological essentialism, the perception that groups possess inherent properties binding them and differentiating them from others, is theoretically relevant to predicting prejudice. Recent developments isolate two key dimensions: essentialistic entitativity (EE; groups as unitary, whole, entity-like) and essentialistic naturalness (EN; groups as fixed and immutable). We introduce a novel question: does tapping the covariance between EE and EN, rather than pitting them against each other, boost prejudice prediction? In Study 1 (re-analysis of Roets & Van Hiel, 2011b, Samples 1-3, in Belgium) and Study 2 (new Canadian data) their common/shared variance, modelled as generalized essentialism, doubles the predictive power relative to regression-based approaches with regard to racism (but not anti-gay or -schizophrenic prejudices). Theoretical implications are discussed. © 2014 The British Psychological Society.

  8. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.
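The core idea of the approach described above (recognize a known computational pattern in sequential code, then substitute a parallelizable equivalent) can be illustrated with a toy sketch. This is not the paper's actual knowledge-based system; the function names and the single recognized pattern are invented for illustration:

```python
import ast

def is_sum_reduction(src):
    """Return True if src is a loop of the form
    `for i in range(...): acc += expr`, i.e. a simple sum-reduction
    pattern. Real pattern matchers recognize many such idioms."""
    tree = ast.parse(src)
    if len(tree.body) != 1 or not isinstance(tree.body[0], ast.For):
        return False
    loop = tree.body[0]
    if len(loop.body) != 1 or not isinstance(loop.body[0], ast.AugAssign):
        return False
    return isinstance(loop.body[0].op, ast.Add)

def chunked_sum(xs, nchunks=4):
    """A recognized reduction can be replaced by a divide-and-conquer
    sum: addition is associative, so each chunk is an independent task
    that could run on its own processor."""
    n = len(xs)
    bounds = [(k * n // nchunks, (k + 1) * n // nchunks)
              for k in range(nchunks)]
    partials = [sum(xs[lo:hi]) for lo, hi in bounds]  # independent work
    return sum(partials)
```

The point of pattern recognition, as the abstract notes, is that this replacement is an algorithm substitution, not merely a parallelization of the original loop structure.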

  9. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-29

    Data communications in a parallel active messaging interface (PAMI) of a parallel computer. The parallel computer includes a plurality of compute nodes that execute a parallel application. The PAMI is composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task. The compute nodes and the endpoints are coupled for data communications through the PAMI and through data communications resources. An origin endpoint of the PAMI receives a data communications instruction characterized by an instruction type and specifying a transmission of transfer data from the origin endpoint to a target endpoint; the transfer data is then transmitted, in accordance with the instruction type, from the origin endpoint to the target endpoint.

  10. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  11. Effectiveness of a mobile cooperation intervention during the clinical practicum of nursing students: a parallel group randomized controlled trial protocol.

    Science.gov (United States)

    Strandell-Laine, Camilla; Saarikoski, Mikko; Löyttyniemi, Eliisa; Salminen, Leena; Suomi, Reima; Leino-Kilpi, Helena

    2017-06-01

    The aim of this study was to describe a study protocol for a study evaluating the effectiveness of a mobile cooperation intervention to improve students' competence level, self-efficacy in clinical performance and satisfaction with the clinical learning environment. Nursing student-nurse teacher cooperation during the clinical practicum has a vital role in promoting the learning of students. Despite an increasing interest in using mobile technologies to improve the clinical practicum of students, there is limited robust evidence regarding their effectiveness. A multicentre, parallel group, randomized, controlled, pragmatic, superiority trial. Second-year pre-registration nursing students who are beginning a clinical practicum will be recruited from one university of applied sciences. Eligible students will be randomly allocated to either a control group (engaging in standard cooperation) or an intervention group (engaging in mobile cooperation) for the 5-week clinical practicum. The complex mobile cooperation intervention comprises mobile application-assisted nursing student-nurse teacher cooperation and training in the functions of the mobile application. The primary outcome is competence. The secondary outcomes include self-efficacy in clinical performance and satisfaction with the clinical learning environment. Moreover, a process evaluation will be undertaken. The ethical approval for this study was obtained in December 2014 and the study received funding in 2015. The results of this study will provide robust evidence on mobile cooperation during the clinical practicum, a research topic that has not been consistently studied to date. © 2016 John Wiley & Sons Ltd.

  12. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. They also consider the important question of what use parallelism might be in the field of data processing. (orig.)

  13. Use of bibloc and monobloc oral appliances in obstructive sleep apnoea: a multicentre, randomized, blinded, parallel-group equivalence trial.

    Science.gov (United States)

    Isacsson, Göran; Nohlert, Eva; Fransson, Anette M C; Bornefalk-Hermansson, Anna; Wiman Eriksson, Eva; Ortlieb, Eva; Trepp, Livia; Avdelius, Anna; Sturebrand, Magnus; Fodor, Clara; List, Thomas; Schumann, Mohamad; Tegelberg, Åke

    2018-05-16

    The clinical benefit of bibloc over monobloc appliances in treating obstructive sleep apnoea (OSA) has not been evaluated in randomized trials. We hypothesized that the two types of appliances are equally effective in treating OSA. To compare the efficacy of monobloc versus bibloc appliances in a short-term perspective. In this multicentre, randomized, blinded, controlled, parallel-group equivalence trial, patients with OSA were randomly assigned to use either a bibloc or a monobloc appliance. One-night respiratory polygraphy without respiratory support was performed at baseline, and participants were re-examined with the appliance in place at short-term follow-up. The primary outcome was the change in the apnoea-hypopnea index (AHI). An independent person prepared a randomization list and sealed envelopes. The evaluating dentist and the biomedical analysts who evaluated the polygraphy were blinded to the choice of therapy. Of 302 patients, 146 were randomly assigned to use the bibloc and 156 the monobloc device; 123 and 139 patients, respectively, were analysed as per protocol. The mean changes in AHI were -13.8 (95% confidence interval -16.1 to -11.5) in the bibloc group and -12.5 (-14.8 to -10.3) in the monobloc group. The difference of -1.3 (-4.5 to 1.9) was significant within the equivalence interval (P = 0.011; the greater of the two P values) and was confirmed by the intention-to-treat analysis (P = 0.001). The adverse events were of mild character and were experienced by similar percentages of patients in both groups (39 and 40 per cent for the bibloc and monobloc group, respectively). The study shows short-term results, with a median time from commencing treatment to the evaluation visit of 56 days; long-term data on efficacy and harm are needed to be fully conclusive. In a short-term perspective, both appliances were equivalent in terms of their positive effects for treating OSA and caused adverse events of similar magnitude. Registered with Clinical

  14. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
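List ranking, the fundamental problem highlighted above, is classically solved by pointer jumping. A minimal sequential simulation of the parallel algorithm (a generic illustration, not the PEM-specific variant from the paper) might look like:

```python
import math

def list_rank(succ):
    """Rank each node of a linked list given successor pointers:
    succ[i] is the next node, or i itself at the tail. Pointer jumping
    doubles the distance covered each round, so O(log n) synchronous
    rounds suffice; in a parallel model every node updates at once."""
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    nxt = list(succ)
    for _ in range(max(1, math.ceil(math.log2(max(n, 2))))):
        # Build new arrays from the old ones to model a synchronous step.
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank  # rank[i] = distance from node i to the tail
```

For the chain 0 → 1 → 2 → 3 (tail 3), `list_rank([1, 2, 3, 3])` returns `[3, 2, 1, 0]`.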

  15. Rasagiline as an adjunct to levodopa in patients with Parkinson's disease and motor fluctuations (LARGO, Lasting effect in Adjunct therapy with Rasagiline Given Once daily, study): a randomised, double-blind, parallel-group trial.

    OpenAIRE

    Rascol, O.; Brooks, D.J.; Melamed, E.; Oertel, W.; Poewe, W.; Stocchi, F.; Tolosa, E.; LARGO study group

    2005-01-01

    Lancet. 2005 Mar 12-18;365(9463):947-54. Rasagiline as an adjunct to levodopa in patients with Parkinson's disease and motor fluctuations (LARGO, Lasting effect in Adjunct therapy with Rasagiline Given Once daily, study): a randomised, double-blind, parallel-group trial. Rascol O, Brooks DJ, Melamed E, Oertel W, Poewe W, Stocchi F, Tolosa E; LARGO study group. Clinical Investigation Centre, Department of Clinical Pharmacology, University Hospital, Toulouse, France. ...

  16. A Randomized Single Blind Parallel Group Study Comparing Monoherbal Formulation Containing Holarrhena antidysenterica Extract with Mesalamine in Chronic Ulcerative Colitis Patients

    Directory of Open Access Journals (Sweden)

    Sarika Johari

    2016-01-01

    Full Text Available Background: Incidences of side effects and relapses are very common in chronic ulcerative colitis patients after termination of treatment. Aims and Objectives: This study aims to compare treatment with a monoherbal formulation of Holarrhena antidysenterica against Mesalamine in chronic ulcerative colitis patients, with special emphasis on side effects and relapse. Settings and Design: Patients were enrolled from an Ayurveda Hospital and a private hospital, Gujarat. The study had a randomized, parallel-group, single-blind design. Materials and Methods: The protocol was approved by the Institutional Human Research Ethics Committee of Anand Pharmacy College on 23rd Jan 2013. Three groups (n = 10 each) were treated with the drug Mesalamine (Group I), the monoherbal tablet (Group II) or a combination of both (Group III), respectively. Baseline characteristics, factors affecting quality of life, chronicity of disease, signs and symptoms, body weight and laboratory investigations were recorded. Side effects and complications developed, if any, were recorded during and after the study. Statistical Analysis Used: Results were expressed as mean ± SEM. Data were statistically evaluated using the t-test, Wilcoxon test, Mann-Whitney U test, Kruskal-Wallis test and ANOVA, wherever applicable, using GraphPad Prism 6. Results: All groups responded positively to the treatments. All patients were positive for occult blood in stool, which reversed significantly after treatment, along with a rise in hemoglobin. Patients treated with the herbal tablets alone showed maximal reduction in abdominal pain, diarrhea, bowel frequency and stool consistency scores compared with Mesalamine-treated patients. Treatment with the herbal tablet, alone and in combination with Mesalamine, significantly reduced stool infection. Patients treated with the herbal drug alone and in combination did not report any side effects, relapse or complications, while 50% of patients treated with Mesalamine exhibited the relapse with

  17. Parallel two-phase-flow-induced vibrations in fuel pin model

    International Nuclear Information System (INIS)

    Hara, Fumio; Yamashita, Tadashi

    1978-01-01

    This paper reports the experimental results of vibrations of a fuel pin model (here meaning the essential form of a fuel pin from the standpoint of vibration) in a parallel air-and-water two-phase flow. The essential part of the experimental apparatus consisted of a flat elastic strip made of stainless steel, both ends of which were firmly supported in a circular channel conveying the two-phase fluid. Vibrational strain of the fuel pin model, pressure fluctuation of the two-phase flow and two-phase-flow void signals were measured. Statistical measures such as power spectral density, variance and correlation function were calculated. The authors obtained (1) the relation between variance of vibrational strain and two-phase-flow velocity, (2) the relation between variance of vibrational strain and two-phase-flow pressure fluctuation, (3) frequency characteristics of variance of vibrational strain against the dominant frequency of the two-phase-flow pressure fluctuation, and (4) frequency characteristics of variance of vibrational strain against the dominant frequency of two-phase-flow void signals. The authors conclude that two kinds of excitation mechanisms exist in vibrations of a fuel pin model inserted in a parallel air-and-water two-phase flow: (1) parametric excitation, which occurs when the fundamental natural frequency of the fuel pin model is related to the dominant travelling frequency of water slugs in the two-phase flow by the ratio 1/2, 1/1, 3/2 and so on; and (2) vibrational resonance, which occurs when the fundamental frequency coincides with the dominant frequency of the two-phase-flow pressure fluctuation. (auth.)

  18. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Parallel channel interactions are examined. Results are presented from experimental investigations of nonstationary flow regimes in three parallel vertical channels, including an analysis of the phenomena and of the mechanisms of parallel channel interaction under adiabatic conditions for single-phase fluid and two-phase mixture flow. (author)

  19. Daily consumption of fermented soymilk helps to improve facial wrinkles in healthy postmenopausal women in a randomized, parallel-group, open-label trial

    Directory of Open Access Journals (Sweden)

    Mitsuyoshi Kano

    2018-02-01

    Full Text Available Background: Soymilk fermented by lactobacilli and/or bifidobacteria is attracting attention due to the excellent bioavailability of its isoflavones. We investigated the effects of fermented soymilk containing high amounts of isoflavone aglycones on facial wrinkles and urinary isoflavones in postmenopausal women in a randomized, parallel-group, open-label trial. Healthy Japanese women were randomly divided into active (n = 44, mean age 56.3 ± 0.5) or control (n = 44, mean age 56.1 ± 0.5) groups, who consumed or did not consume a bottle of soymilk fermented by Bifidobacterium breve strain Yakult and Lactobacillus mali for 8 weeks. Maximum depth of wrinkles around the crow’s feet area and other wrinkle parameters were evaluated as primary and secondary endpoints, respectively, at weeks 0, 4, and 8 during the consumption period. Urinary isoflavone levels were determined by liquid chromatography-mass spectrometry. Results: The active group demonstrated significant improvements in the maximum depth (p = 0.015) and average depth (p = 0.04) of wrinkles, and significantly elevated urinary isoflavones (daidzein, genistein, and glycitein; each p < 0.001) compared with the control during the consumption period. No serious adverse effects were recorded. Conclusion: These findings suggest that fermented soymilk taken daily may improve facial wrinkles and elevate urinary isoflavones in healthy postmenopausal women.

  20. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries, and to compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  1. Randomized, parallel-group, double-blind, controlled study to evaluate the efficacy and safety of carbohydrate-derived fulvic acid in topical treatment of eczema

    Directory of Open Access Journals (Sweden)

    Gandy JJ

    2011-09-01

    Full Text Available Justin J Gandy, Jacques R Snyman, Constance EJ van Rensburg, Department of Pharmacology, Faculty of Health Sciences, University of Pretoria, Pretoria, South Africa. Background: The purpose of this study was to evaluate the efficacy and safety of carbohydrate-derived fulvic acid (CHD-FA) in the treatment of eczema in patients two years and older. Methods: In this single-center, double-blind, placebo-controlled, parallel-group comparative study, 36 volunteers with predetermined eczema were randomly assigned to receive either the study drug or placebo twice daily for four weeks. Results: All safety parameters remained within normal limits, with no significant differences in either group. Significant differences were observed for both severity and erythema in the placebo- and CHD-FA-treated groups, and a significant difference was observed for scaling in the placebo-treated group. With regard to the investigator assessment of global response to treatment, a significant improvement was observed in the CHD-FA group when compared with the placebo group. A statistically significant decrease in visual analog scale score was observed in both groups when comparing the baseline with the final results. Conclusion: CHD-FA was well tolerated, with no difference in reported side effects other than a short-lived burning sensation on application. CHD-FA significantly improved some aspects of eczema. Investigator assessment of global response to treatment with CHD-FA was significantly better than that with emollient therapy alone. The results of this small exploratory study suggest that CHD-FA warrants further investigation in the treatment of eczema. Keywords: fulvic acid, eczema, anti-inflammatory, efficacy, safety

  2. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    ...adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...

  3. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers, both with shared memory and with distributed memory, are discussed. By analyzing the inherent mathematical and physical structure of the photon transport model in light of the architecture of parallel computers, applying a divide-and-conquer strategy, adjusting the algorithm structure of the program, dissolving data dependences, and identifying parallelizable components to create large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various high-performance parallel computers, such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup was obtained.
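Photon transport parallelizes naturally because particle histories are statistically independent, which is the "large grain parallel subtasks" idea in miniature. The sketch below uses a deliberately simplified slab-attenuation model and a thread pool; it is an illustration of the decomposition, not the codes described in the report:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def transmitted(n_photons, mu, thickness, seed):
    """Count photons crossing a slab of given thickness when free paths
    are exponentially distributed with attenuation coefficient mu
    (a minimal one-dimensional transport model)."""
    rng = random.Random(seed)  # independent random stream per subtask
    return sum(1 for _ in range(n_photons)
               if rng.expovariate(mu) > thickness)

def parallel_transmission(total, mu, thickness, workers=4):
    """Split the histories into independent subtasks, run them
    concurrently, and combine the tallies."""
    per = total // workers
    with ThreadPoolExecutor(max_workers=workers) as ex:
        counts = list(ex.map(lambda s: transmitted(per, mu, thickness, s),
                             range(workers)))
    return sum(counts) / (per * workers)

# Estimate for mu = 1, thickness = 1; the analytic transmission
# probability is exp(-1) ≈ 0.368, so the estimate should land nearby.
estimate = parallel_transmission(40000, 1.0, 1.0)
```

Because the subtasks share nothing but their tallies, the same structure maps directly onto distributed-memory machines with one message per partial count.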

  4. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size, so parallelization is needed to speed up a calculation that usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on the GPU (graphics processing unit).
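The basic shape of partitioned sparse matrix-vector multiplication can be conveyed with a plain row-block decomposition. Note the hedge: genuine hypergraph partitioning (e.g. with tools such as PaToH or hMETIS) chooses the row groups to minimize communication volume between parts, which this illustrative CPU sketch does not attempt:

```python
def spmv_partitioned(rows, x, nparts=3):
    """Compute y = A @ x where A is stored row-wise as sparse data:
    rows[i] is a list of (column, value) pairs. Row blocks touch
    disjoint entries of y, so each block is an independent task that
    could be assigned to its own processor or GPU thread block."""
    n = len(rows)
    bounds = [(k * n // nparts, (k + 1) * n // nparts)
              for k in range(nparts)]
    y = [0.0] * n
    for lo, hi in bounds:              # each block is independent work
        for i in range(lo, hi):
            y[i] = sum(v * x[j] for j, v in rows[i])
    return y
```

A partitioner's job is to pick the blocks so that the x entries each part reads are mostly local; the arithmetic itself is unchanged.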

  5. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes the work of an object oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and the work should be possible to be split between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO parallel algorithm for the Travelling Salesman Problem (TSP and an Image Processing (IP parallel algorithm for the Symmetrical Neighborhood Filter (SNF. The implementations of these applications by means of the parallel framework prove to have good performances: approximatively linear speedup and low communication cost.
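The master-slave, cycle-based structure described above can be sketched as follows. This is a hypothetical skeleton: a thread pool stands in for the framework's message-passing communication, and the class and method names are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

class MasterSlave:
    """Minimal master-slave skeleton: each cycle, the master splits the
    work among the processing units, the slaves process their shares,
    and the master merges the partial results. A real system would use
    message passing (e.g. MPI send/recv) instead of a thread pool."""

    def __init__(self, nslaves=4):
        self.pool = ThreadPoolExecutor(max_workers=nslaves)
        self.nslaves = nslaves

    def run_cycle(self, work, slave_fn, merge_fn):
        n = len(work)
        shares = [work[k * n // self.nslaves:(k + 1) * n // self.nslaves]
                  for k in range(self.nslaves)]        # master splits
        partials = list(self.pool.map(slave_fn, shares))  # slaves work
        return merge_fn(partials)                      # master merges

# One cycle of a summation task; an iterative algorithm (e.g. an ACO
# round) would call run_cycle repeatedly with updated state.
ms = MasterSlave()
total = ms.run_cycle(list(range(1, 101)), sum, sum)
```

The near-linear speedup reported in the article depends on the shares being large relative to the per-cycle communication, which this skeleton makes explicit: communication happens only at the split and merge points.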

  6. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    Full Text Available There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with equational programming language (EPL. Our approach is based on a program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  7. Vectorization, parallelization and porting of nuclear codes (porting). Progress report fiscal 1998

    International Nuclear Information System (INIS)

    Nemoto, Toshiyuki; Kawai, Wataru; Ishizuki, Shigeru; Kawasaki, Nobuo; Kume, Etsuo; Adachi, Masaaki; Ogasawara, Shinobu

    2000-03-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system and the Paragon system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. These results are reported in 3 parts, i.e., the vectorization and parallelization on vector processors part, the parallelization on scalar processors part and the porting part. In this report, we describe the porting. In this porting part, the porting of the Monte Carlo N-Particle Transport code MCNP4B2 and the Reactor Safety Analysis code RELAP5 to the AP3000 is described. In the vectorization and parallelization on vector processors part, the vectorization of the General Tokamak Circuit Simulation Program code GTCSP, and the vectorization and parallelization of the Molecular Dynamics NTV Simulation code MSP2, the Eddy Current Analysis code EDDYCAL, the Thermal Analysis Code for Test of Passive Cooling System by HENDEL T2 code THANPACST2 and the MHD Equilibrium code SELENEJ on the VPP500 are described. In the parallelization on scalar processors part, the parallelization of the Monte Carlo N-Particle Transport code MCNP4B2, the Plasma Hydrodynamics code using the Cubic Interpolated Propagation Method PHCIP and the Vectorized Monte Carlo code (continuous energy model/multi-group model) MVP/GMVP on the Paragon is described. (author)

  8. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  9. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.
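The "smart load-balancing" the abstract mentions is often approximated for irregular workloads with the longest-processing-time-first (LPT) heuristic: assign each task, largest first, to the currently least-loaded processor. A small sketch of that general technique (an illustration, not the paper's specific methods):

```python
import heapq

def lpt_schedule(task_costs, nprocs):
    """Greedy static load balancing for irregular task sizes: sort
    tasks by decreasing cost and always give the next task to the
    least-loaded processor, tracked with a min-heap."""
    heap = [(0.0, p) for p in range(nprocs)]   # (current load, proc id)
    heapq.heapify(heap)
    assign = [[] for _ in range(nprocs)]
    for cost in sorted(task_costs, reverse=True):
        load, p = heapq.heappop(heap)          # least-loaded processor
        assign[p].append(cost)
        heapq.heappush(heap, (load + cost, p))
    return assign
```

For example, tasks with costs [7, 5, 4, 3, 2, 1] on two processors end up as two perfectly balanced loads of 11. Dynamic and graph-partitioning-based balancers refine the same idea when costs are not known in advance.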

  10. A model for dealing with parallel processes in supervision

    Directory of Open Access Journals (Sweden)

    Lilja Cajvert

    2011-03-01

    Supervision in social work is essential for successful outcomes when working with clients. In social work, unconscious difficulties may arise and similar difficulties may occur in supervision as parallel processes. In this article, the development of a practice-based model of supervision to deal with parallel processes in supervision is described. The model has six phases. In the first phase, the focus is on the supervisor’s inner world, his/her own reflections and observations. In the second phase, the supervision situation is “frozen”, and the supervisees are invited to join the supervisor in taking a meta-perspective on the current situation of supervision. The focus in the third phase is on the inner world of all the group members as well as the visualization and identification of reflections and feelings that arose during the supervision process. Phase four focuses on the supervisee who presented a case, and in phase five the focus shifts to the common understanding and theorization of the supervision process as well as the definition and identification of possible parallel processes. In the final phase, the supervisee, with the assistance of the supervisor and other members of the group, develops a solution and determines how to proceed with the client in treatment. This article uses phenomenological concepts to provide a theoretical framework for the supervision model. Phenomenological reduction is an important approach to examine and to externalize and visualize the inner worlds of the supervisor and supervisees.

  11. The Glasgow Parallel Reduction Machine: Programming Shared-memory Many-core Systems using Parallel Task Composition

    Directory of Open Access Journals (Sweden)

    Ashkan Tousimojarad

    2013-12-01

    Full Text Available We present the Glasgow Parallel Reduction Machine (GPRM, a novel, flexible framework for parallel task-composition based many-core programming. We allow the programmer to structure programs into task code, written as C++ classes, and communication code, written in a restricted subset of C++ with functional semantics and parallel evaluation. In this paper we discuss the GPRM, the virtual machine framework that enables the parallel task composition approach. We focus the discussion on GPIR, the functional language used as the intermediate representation of the bytecode running on the GPRM. Using examples in this language we show the flexibility and power of our task composition framework. We demonstrate the potential using an implementation of a merge sort algorithm on a 64-core Tilera processor, as well as on a conventional Intel quad-core processor and an AMD 48-core processor system. We also compare our framework with OpenMP tasks in a parallel pointer chasing algorithm running on the Tilera processor. Our results show that the GPRM programs outperform the corresponding OpenMP codes on all test platforms, and can greatly facilitate writing of parallel programs, in particular non-data parallel algorithms such as reductions.
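Task composition for merge sort, the paper's demonstration example, can be sketched with an ordinary thread pool standing in for the GPRM runtime. The splitting depth and pool size here are illustrative assumptions, and the code is plain Python rather than GPRM's C++ task classes:

```python
from concurrent.futures import ThreadPoolExecutor

def merge(a, b):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def parallel_merge_sort(xs, pool, depth=2):
    """Compose the sort from independent sub-sort tasks down to `depth`
    levels, submit the left half to the pool, sort the right half in
    the current thread, then merge the results."""
    if depth == 0 or len(xs) < 2:
        return sorted(xs)            # leaf task: sort sequentially
    mid = len(xs) // 2
    left = pool.submit(parallel_merge_sort, xs[:mid], pool, depth - 1)
    right = parallel_merge_sort(xs[mid:], pool, depth - 1)
    return merge(left.result(), right)

with ThreadPoolExecutor(max_workers=4) as pool:
    result = parallel_merge_sort([5, 3, 8, 1, 9, 2, 7], pool)
```

One design point this exposes: recursive task submission into a bounded pool can deadlock if every worker blocks on a child, which is why the depth is capped well below the pool size here; a work-stealing runtime like the GPRM avoids the issue differently.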

  12. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...
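The space blow-up described above is easy to reproduce outside NESL. The following Python sketch (a loose analogy, not NESL semantics or the thesis's runtime) contrasts a fully materializing strategy, which builds all n^3 scalar products before reducing them, with a streaming strategy that consumes each product as it is produced:

```python
def matmul_materializing(A, B):
    # Build every scalar product up front: Theta(n^3) extra space,
    # analogous to fully flattened data-parallel execution.
    n = len(A)
    products = [(i, j, A[i][k] * B[k][j])
                for i in range(n) for j in range(n) for k in range(n)]
    C = [[0] * n for _ in range(n)]
    for i, j, p in products:
        C[i][j] += p
    return C

def matmul_streaming(A, B):
    # A generator yields the same products lazily; only O(1) extra
    # space beyond the n^2 output is live at any moment.
    n = len(A)
    C = [[0] * n for _ in range(n)]
    stream = ((i, j, A[i][k] * B[k][j])
              for i in range(n) for j in range(n) for k in range(n))
    for i, j, p in stream:
        C[i][j] += p
    return C
```

Both functions compute the same result; only the lifetime of the n^3 intermediate values differs, which is exactly the distinction the streaming extensions exploit.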

  13. Effectiveness of Ginger Essential Oil on Postoperative Nausea and Vomiting in Abdominal Surgery Patients.

    Science.gov (United States)

    Lee, Yu Ri; Shin, Hye Sook

    2017-03-01

    The purpose of this study was to examine the effectiveness of aromatherapy with ginger essential oil on nausea and vomiting in abdominal surgery patients. This was a quasi-experimental study with a nonequivalent control group and repeated measures. The experimental group (n = 30) received ginger essential oil inhalation. The placebo control group (n = 30) received normal saline inhalation. The level of postoperative nausea and vomiting was measured using a Korean version of the Index of Nausea, Vomiting, and Retching (INVR) at baseline and at 6, 12, and 24 h after aromatherapy administration. The data were collected from July 23 to August 22, 2012. Nausea and vomiting scores were significantly lower in the experimental group with ginger essential oil inhalation than those in the placebo control group with normal saline. In the experimental group, the nausea and vomiting scores decreased considerably in the first 6 h after inhaled aromatherapy with ginger essential oil. Findings indicate that ginger essential oil inhalation has implications for alleviating postoperative nausea and vomiting in abdominal surgery patients.

  14. The effect of the essential oils of lavender and rosemary on the human short-term memory

    Directory of Open Access Journals (Sweden)

    O.V. Filiptsova

    2018-03-01

    Full Text Available The effects of essential oils on human short-term image and numerical memory are described. The study involved 79 secondary school students (34 boys and 45 girls) aged 13 to 17 years, residents of the Ukrainian metropolis. Participants were divided into three groups: the control group, the “Lavender” group, in which lavender essential oil was sprayed, and the “Rosemary” group, in which rosemary essential oil was sprayed. Statistically significant differences in short-term memory productivity between the groups were found. The essential oils of rosemary and lavender significantly increased image memory compared to the control. Inhalation of the rosemary essential oil improved the memorization of numbers, whereas inhalation of the lavender essential oil weakened this process.

  15. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  16. Magnetohydrodynamics: Parallel computation of the dynamics of thermonuclear and astrophysical plasmas. 1. Annual report of massively parallel computing pilot project 93MPR05

    International Nuclear Information System (INIS)

    1994-08-01

    This is the first annual report of the MPP pilot project 93MPR05. In this pilot project four research groups with different, complementary backgrounds collaborate with the aim to develop new algorithms and codes to simulate the magnetohydrodynamics of thermonuclear and astrophysical plasmas on massively parallel machines. The expected speed-up is required to simulate the dynamics of the hot plasmas of interest which are characterized by very large magnetic Reynolds numbers and, hence, require high spatial and temporal resolutions (for details see section 1). The four research groups that collaborated to produce the results reported here are: The MHD group of Prof. Dr. J.P. Goedbloed at the FOM-Institute for Plasma Physics 'Rijnhuizen' in Nieuwegein, the group of Prof. Dr. H. van der Vorst at the Mathematics Institute of Utrecht University, the group of Prof. Dr. A.G. Hearn at the Astronomical Institute of Utrecht University, and the group of Dr. Ir. H.J.J. te Riele at the CWI in Amsterdam. The full project team met frequently during this first project year to discuss progress reports, current problems, etc. (see section 2). The main results of the first project year are: - Proof of the scalability of typical linear and nonlinear MHD codes - development and testing of a parallel version of the Arnoldi algorithm - development and testing of alternative methods for solving large non-Hermitian eigenvalue problems - porting of the 3D nonlinear semi-implicit time evolution code HERA to an MPP system. The steps that were scheduled to reach these intended results are given in section 3. (orig./WL)

  17. Magnetohydrodynamics: Parallel computation of the dynamics of thermonuclear and astrophysical plasmas. 1. Annual report of massively parallel computing pilot project 93MPR05

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-08-01

    This is the first annual report of the MPP pilot project 93MPR05. In this pilot project four research groups with different, complementary backgrounds collaborate with the aim to develop new algorithms and codes to simulate the magnetohydrodynamics of thermonuclear and astrophysical plasmas on massively parallel machines. The expected speed-up is required to simulate the dynamics of the hot plasmas of interest which are characterized by very large magnetic Reynolds numbers and, hence, require high spatial and temporal resolutions (for details see section 1). The four research groups that collaborated to produce the results reported here are: The MHD group of Prof. Dr. J.P. Goedbloed at the FOM-Institute for Plasma Physics `Rijnhuizen` in Nieuwegein, the group of Prof. Dr. H. van der Vorst at the Mathematics Institute of Utrecht University, the group of Prof. Dr. A.G. Hearn at the Astronomical Institute of Utrecht University, and the group of Dr. Ir. H.J.J. te Riele at the CWI in Amsterdam. The full project team met frequently during this first project year to discuss progress reports, current problems, etc. (see section 2). The main results of the first project year are: - Proof of the scalability of typical linear and nonlinear MHD codes - development and testing of a parallel version of the Arnoldi algorithm - development and testing of alternative methods for solving large non-Hermitian eigenvalue problems - porting of the 3D nonlinear semi-implicit time evolution code HERA to an MPP system. The steps that were scheduled to reach these intended results are given in section 3. (orig./WL).

  18. Parallel transport of long mean-free-path plasma along open magnetic field lines: Parallel heat flux

    International Nuclear Information System (INIS)

    Guo Zehua; Tang Xianzhu

    2012-01-01

    In a long mean-free-path plasma where temperature anisotropy can be sustained, the parallel heat flux has two components with one associated with the parallel thermal energy and the other the perpendicular thermal energy. Due to the large deviation of the distribution function from local Maxwellian in an open field line plasma with low collisionality, the conventional perturbative calculation of the parallel heat flux closure in its local or non-local form is no longer applicable. Here, a non-perturbative calculation is presented for a collisionless plasma in a two-dimensional flux expander bounded by absorbing walls. Specifically, closures of previously unfamiliar form are obtained for ions and electrons, which relate two distinct components of the species parallel heat flux to the lower order fluid moments such as density, parallel flow, parallel and perpendicular temperatures, and the field quantities such as the magnetic field strength and the electrostatic potential. The plasma source and boundary condition at the absorbing wall enter explicitly in the closure calculation. Although the closure calculation does not take into account wave-particle interactions, the results based on passing orbits from steady-state collisionless drift-kinetic equation show remarkable agreement with fully kinetic-Maxwell simulations. As an example of the physical implications of the theory, the parallel heat flux closures are found to predict a surprising observation in the kinetic-Maxwell simulation of the 2D magnetic flux expander problem, where the parallel heat flux of the parallel thermal energy flows from low to high parallel temperature region.

  19. Analgesic Potential of Essential Oils

    Directory of Open Access Journals (Sweden)

    José Ferreira Sarmento-Neto

    2015-12-01

    Full Text Available Pain is an unpleasant sensation associated with a wide range of injuries and diseases, and affects approximately 20% of adults in the world. The discovery of new and more effective drugs that can relieve pain is an important research goal in both the pharmaceutical industry and academia. This review describes studies involving antinociceptive activity of essential oils from 31 plant species. Botanical aspects of aromatic plants, mechanisms of action in pain models and chemical composition profiles of the essential oils are discussed. The data obtained in these studies demonstrate the analgesic potential of this group of natural products for therapeutic purposes.

  20. Parameter estimation of fractional-order chaotic systems by using quantum parallel particle swarm optimization algorithm.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Full Text Available Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.
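The quantum evolution equation of QPPSO is specific to the paper, but the swarm mechanics underneath it can be illustrated with a classical PSO sketch in Python (a minimal illustration with textbook coefficients and a made-up toy problem, not the authors' algorithm), applied to a simple parameter-estimation task:

```python
import random

random.seed(0)  # for reproducibility of this sketch

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Classical PSO: each particle keeps a position, a velocity and a
    # personal best; the swarm shares one global best.
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * len(bounds) for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d, (lo, hi) in enumerate(bounds):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy estimation problem: recover the parameter a = 1.8 of the model
# y = a * x from noiseless observations (a stand-in for the
# fractional-order system parameters estimated in the paper).
true_a = 1.8
data = [(x, true_a * x) for x in range(1, 11)]
def fit_error(p):
    return sum((p[0] * x - y) ** 2 for x, y in data)

est, _ = pso(fit_error, bounds=[(0.0, 3.0)])
```

Parameter estimation is cast, exactly as in the abstract, as minimization of a fit error over the parameter space; QPPSO replaces the real-valued velocity update with quantum rotation angles.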

  1. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
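The rendez-vous described above — gathering the full fission bank at the end of every cycle before the next cycle can start — can be sketched as follows (a toy Python model with invented "physics" and a thread pool standing in for MPI ranks, not the paper's code):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def track_batch(source_sites, seed):
    # Stand-in for particle tracking: each source neutron yields a random
    # number of fission offspring (invented physics, mean ~1 per neutron).
    rng = random.Random(seed)
    offspring = []
    for site in source_sites:
        for _ in range(rng.choice([0, 1, 1, 2])):
            offspring.append(site + rng.uniform(-0.1, 0.1))
    return offspring

def run_cycles(n_cycles=5, n_source=400, n_workers=4):
    rng = random.Random(42)
    source = [rng.uniform(0.0, 1.0) for _ in range(n_source)]
    k_estimates = []
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for cycle in range(n_cycles):
            batches = [source[w::n_workers] for w in range(n_workers)]
            seeds = [cycle * n_workers + w for w in range(n_workers)]
            # Rendez-vous point: every worker must finish its batch before
            # the full fission bank can be assembled for the next cycle.
            results = list(pool.map(track_batch, batches, seeds))
            bank = [site for r in results for site in r]
            # Cycle-wise multiplication estimate, then population control:
            # resample the bank back to a fixed source size.
            k_estimates.append(len(bank) / len(source))
            source = [bank[rng.randrange(len(bank))] for _ in range(n_source)]
    return k_estimates
```

The `list(pool.map(...))` call is the synchronization barrier: the slowest worker in each cycle sets the pace for everyone, which is one reason the speedup saturates as processor counts grow.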

  2. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  3. Evaluation of pulsing magnetic field effects on paresthesia in multiple sclerosis patients, a randomized, double-blind, parallel-group clinical trial.

    Science.gov (United States)

    Afshari, Daryoush; Moradian, Nasrin; Khalili, Majid; Razazian, Nazanin; Bostani, Arash; Hoseini, Jamal; Moradian, Mohamad; Ghiasian, Masoud

    2016-10-01

    Evidence is mounting that magnet therapy could alleviate the symptoms of multiple sclerosis (MS). This study was performed to test the effects of pulsing magnetic fields on paresthesia in MS patients. It was conducted as a randomized, double-blind, parallel-group clinical trial from April 2012 to October 2013. The subjects were selected among patients referred to the MS clinic of Imam Reza Hospital, affiliated with Kermanshah University of Medical Sciences, Iran. Sixty-three patients with MS were included in the study and randomly divided into two groups: 35 patients were exposed to a pulsing magnetic field (4 mT intensity, 15 Hz sinusoidal wave) for 20 min per session, 2 times per week, over a period of 2 months (16 sessions), and 28 patients were exposed to a magnetically inactive field (placebo) on the same schedule. The severity of paresthesia was measured by the numerical rating scale (NRS) at 30 and 60 days. The primary end point was the NRS change between baseline and 60 days; the secondary outcome was the NRS change between baseline and 30 days. Patients exposed to the magnetic field showed significant paresthesia improvement compared with the placebo group. According to our results, pulsed magnetic therapy could alleviate paresthesia in MS patients, but trials with more patients and longer duration are needed to describe long-term effects. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Cosmic Shear With ACS Pure Parallels

    Science.gov (United States)

    Rhodes, Jason

    2002-07-01

    Small distortions in the shapes of background galaxies by foreground mass provide a powerful method of directly measuring the amount and distribution of dark matter. Several groups have recently detected this weak lensing by large-scale structure, also called cosmic shear. The high resolution and sensitivity of HST/ACS provide a unique opportunity to measure cosmic shear accurately on small scales. Using 260 parallel orbits in the Sloan F775W filter, we will measure for the first time the cosmic shear variance on small scales, with signal-to-noise (s/n) of about 20, and the mass density Omega_m with s/n = 4. These measurements will be made at small angular scales where non-linear effects dominate the power spectrum, providing a test of the gravitational instability paradigm for structure formation. Measurements on these scales are not possible from the ground, because of the systematic effects induced by PSF smearing from seeing. Having many independent lines of sight reduces the uncertainty due to cosmic variance, making parallel observations ideal.

  5. LUCKY-TD code for solving the time-dependent transport equation with the use of parallel computations

    Energy Technology Data Exchange (ETDEWEB)

    Moryakov, A. V., E-mail: sailor@orc.ru [National Research Centre Kurchatov Institute (Russian Federation)

    2016-12-15

    An algorithm for solving the time-dependent transport equation in the P_mS_n group approximation with the use of parallel computations is presented. The algorithm is implemented in the LUCKY-TD code for supercomputers employing the MPI standard for the data exchange between parallel processes.

  6. Fast and Green Microwave-Assisted Conversion of Essential Oil Allylbenzenes into the Corresponding Aldehydes via Alkene Isomerization and Subsequent Potassium Permanganate Promoted Oxidative Alkene Group Cleavage

    DEFF Research Database (Denmark)

    Luu, Thi Xuan Thi; Lam, Trinh To; Le, Thach Ngoc

    2009-01-01

    Essential oil allylbenzenes have been converted quickly and efficiently into the corresponding benzaldehydes in good yields by a two-step "green" reaction pathway, based on a solventless alkene group isomerization by KF/Al2O3 to form the corresponding 1-arylpropene and a subsequent solventless...

  7. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors
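The sensitivity of speedup to the communication-to-computation ratio noted above can be captured in a back-of-the-envelope cost model (a generic sketch with an assumed cost function, not the authors' measurements):

```python
def speedup(p, t_comp=1.0, comm_ratio=0.01):
    # Toy cost model: perfectly parallel compute time t_comp / p, plus a
    # synchronization/communication term that grows with processor count.
    # comm_ratio is an assumed per-processor communication cost fraction.
    t_parallel = t_comp / p + comm_ratio * t_comp * p
    return t_comp / t_parallel

# The model predicts a speedup peak followed by degradation once
# communication dominates: with comm_ratio = 0.01 the optimum is p = 10.
best_p = max(range(1, 65), key=speedup)
```

Global communication makes the second term grow with p, which is why the abstract stresses designing hypercube algorithms that avoid it.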

  8. Parallel S_n iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (S_n) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional S_n transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial S_n algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial.

  9. Vectorization, parallelization and porting of nuclear codes. Vectorization and parallelization. Progress report fiscal 1999

    Energy Technology Data Exchange (ETDEWEB)

    Adachi, Masaaki; Ogasawara, Shinobu; Kume, Etsuo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishizuki, Shigeru; Nemoto, Toshiyuki; Kawasaki, Nobuo; Kawai, Wataru [Fujitsu Ltd., Tokyo (Japan); Yatake, Yo-ichi [Hitachi Ltd., Tokyo (Japan)

    2001-02-01

    Several computer codes in the nuclear field have been vectorized, parallelized and transported on the FUJITSU VPP500 system, the AP3000 system, the SX-4 system and the Paragon system at Center for Promotion of Computational Science and Engineering in Japan Atomic Energy Research Institute. We dealt with 18 codes in fiscal 1999. These results are reported in 3 parts, i.e., the vectorization and the parallelization part on vector processors, the parallelization part on scalar processors and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this vectorization and parallelization on vector processors part, the vectorization of Relativistic Molecular Orbital Calculation code RSCAT, a microscopic transport code for high energy nuclear collisions code JAM, three-dimensional non-steady thermal-fluid analysis code STREAM, Relativistic Density Functional Theory code RDFT and High Speed Three-Dimensional Nodal Diffusion code MOSRA-Light on the VPP500 system and the SX-4 system are described. (author)

  10. Parallel control method for a bilateral master-slave manipulator

    International Nuclear Information System (INIS)

    Miyazaki, Tomohiro; Hagihara, Shiro

    1989-01-01

    In this paper, a new control method for a bilateral master-slave manipulator is proposed. The proposed method yields stable and fast response of the control system, which is essential for precise position control and sensitive force reflection. In the conventional position-force control method, the control loops of the master and the slave arms are connected in series to construct a bilateral control loop. The total phase lag through the bilateral control loop therefore becomes twice that of a single-arm control loop. Such phase lag makes the control system unstable and degrades control performance. To improve the stability and the control performance, we propose a 'parallel control method.' In the proposed method, the control loops of the master and the slave arms are connected in parallel, so that the total phase lag is reduced to that of a single arm. The stability condition of the proposed method is studied, and it is proved that its stability can be guaranteed independently of the rigidity of the reaction surface and the position/force ratio between the master and the slave arms, whereas the stability of the conventional method depends on both. (author)

  11. A comparison of two treatments for childhood apraxia of speech: methods and treatment protocol for a parallel group randomised control trial

    Directory of Open Access Journals (Sweden)

    Murray Elizabeth

    2012-08-01

    Full Text Available Abstract Background Childhood Apraxia of Speech is an impairment of speech motor planning that manifests as difficulty producing the sounds (articulation) and melody (prosody) of speech. These difficulties may persist through life and are detrimental to academic, social, and vocational development. A number of published single-subject and case-series studies of speech treatments are available. There are currently no randomised control trials or other well-designed group trials available to guide clinical practice. Methods/Design A parallel group, fixed size randomised control trial will be conducted in Sydney, Australia to determine the efficacy of two treatments for Childhood Apraxia of Speech: (1) Rapid Syllable Transition Treatment and (2) the Nuffield Dyspraxia Programme – Third edition. Eligible children will be English speaking, aged 4–12 years, with a diagnosis of suspected CAS, normal or adjusted hearing and vision, and no comprehension difficulties or other developmental diagnoses. At least 20 children will be randomised to receive one of the two treatments in parallel. Treatments will be delivered by trained and supervised speech pathology clinicians using operationalised manuals. Treatment will be administered in 1-hour sessions, 4 times per week for 3 weeks. The primary outcomes are speech sound and prosodic accuracy on a customised 292-item probe and the Diagnostic Evaluation of Articulation and Phonology inconsistency subtest administered prior to treatment and 1 week, 1 month and 4 months post-treatment. All post assessments will be completed by blinded assessors. Our hypotheses are: (1) treatment effects at 1 week post will be similar for both treatments, (2) maintenance of treatment effects at 1 and 4 months post will be greater for Rapid Syllable Transition Treatment than Nuffield Dyspraxia Programme treatment, and (3) generalisation of treatment effects to untrained related speech behaviours will be greater for Rapid

  12. Implementation and performance of parallelized elegant

    International Nuclear Information System (INIS)

    Wang, Y.; Borland, M.

    2008-01-01

    The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

  13. Parallelizing the spectral transform method: A comparison of alternative parallel algorithms

    International Nuclear Information System (INIS)

    Foster, I.; Worley, P.H.

    1993-01-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on the sphere and is widely used in global climate modeling. In this paper, we outline different approaches to parallelizing the method and describe experiments that we are conducting to evaluate the efficiency of these approaches on parallel computers. The experiments are conducted using a testbed code that solves the nonlinear shallow water equations on a sphere, but are designed to permit evaluation in the context of a global model. They allow us to evaluate the relative merits of the approaches as a function of problem size and number of processors. The results of this study are guiding ongoing work on PCCM2, a parallel implementation of the Community Climate Model developed at the National Center for Atmospheric Research

  14. A Set of Annotation Interfaces for Alignment of Parallel Corpora

    Directory of Open Access Journals (Sweden)

    Singh Anil Kumar

    2014-09-01

    Full Text Available Annotation interfaces for parallel corpora which fit in well with other tools can be very useful. We describe a set of annotation interfaces which fulfill this criterion. This set includes a sentence alignment interface, two different word or word group alignment interfaces and an initial version of a parallel syntactic annotation alignment interface. These tools can be used for manual alignment, or they can be used to correct automatic alignments. Manual alignment can be performed in combination with certain kinds of linguistic annotation. Most of these interfaces use a representation called the Shakti Standard Format that has been found to be very robust and has been used for large and successful projects. It ties together the different interfaces, so that the data created by them is portable across all tools which support this representation. The existence of a query language for data stored in this representation makes it possible to build tools that allow easy search and modification of annotated parallel data.

  15. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  16. Parallel processing for fluid dynamics applications

    International Nuclear Information System (INIS)

    Johnson, G.M.

    1989-01-01

    The impact of parallel processing on computational science and, in particular, on computational fluid dynamics is growing rapidly. In this paper, particular emphasis is given to developments which have occurred within the past two years. Parallel processing is defined and the reasons for its importance in high-performance computing are reviewed. Parallel computer architectures are classified according to the number and power of their processing units, their memory, and the nature of their connection scheme. Architectures which show promise for fluid dynamics applications are emphasized. Fluid dynamics problems are examined for parallelism inherent at the physical level. CFD algorithms and their mappings onto parallel architectures are discussed. Several example are presented to document the performance of fluid dynamics applications on present-generation parallel processing devices

  17. Optimal task mapping in safety-critical real-time parallel systems; Placement optimal de taches pour les systemes paralleles temps-reel critiques

    Energy Technology Data Exchange (ETDEWEB)

    Aussagues, Ch

    1998-12-11

    This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance command and control systems, as found in the nuclear domain or, more generally, in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution lies mainly in the following three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new operator, the synchronized product of task-graph state machines; and the validation of the approach by its implementation and evaluation. The work particularly addresses the main problem of optimal task mapping onto a parallel architecture, such that the temporal constraints are globally guaranteed, i.e. the timeliness property holds. The results also incorporate optimality criteria for the sizing and correct dimensioning of a parallel system, for instance in the number of processing elements; these criteria are connected with operational constraints of the application domain. Our approach is based on the off-line analysis of the feasibility of the deadline-driven dynamic scheduling used to schedule tasks within one processor. From the synchronized product, a system of linear constraints is automatically generated, which allows the maximum load of a group of tasks to be calculated and their timeliness constraints to be verified. The communications, the verification of their timeliness and their incorporation into the mapping problem are the second main contribution of this thesis. Finally, the global solving technique, dealing with both task and communication aspects, has been implemented and evaluated in the framework of the OASIS project at the LETI research center at CEA/Saclay. (author) 96 refs.
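    The off-line feasibility analysis mentioned above rests on a schedulability test for deadline-driven (EDF) scheduling inside one processor. As a hedged illustration only, the simplest such test, the Liu and Layland utilization bound for periodic tasks whose deadlines equal their periods, can be sketched as follows (function names and the task model are illustrative, not taken from the thesis):

```python
# Task = (worst-case execution time C, period T), both positive numbers.

def utilization(tasks):
    """Total processor utilization U = sum(C_i / T_i)."""
    return sum(c / t for c, t in tasks)

def edf_feasible(tasks):
    """EDF schedules a periodic, implicit-deadline task set on one
    processor if and only if its utilization does not exceed 1."""
    return utilization(tasks) <= 1.0

def feasible_mapping(mapping):
    """A task-to-processor mapping (one task group per processing
    element) is accepted only if every group is feasible on its own."""
    return all(edf_feasible(group) for group in mapping)
```

Under this sketch, a candidate mapping is accepted only if every group of tasks assigned to one processing element passes the test; the thesis's actual analysis, built on the synchronized product and the generated linear constraints, is considerably richer.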

  18. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  19. LPIC++. A parallel one-dimensional relativistic electromagnetic particle-in-cell code for simulating laser-plasma-interaction

    International Nuclear Information System (INIS)

    Lichters, R.; Pfund, R.E.W.; Meyer-ter-Vehn, J.

    1997-08-01

    The code LPIC++ presented here is based on a one-dimensional, electromagnetic, relativistic PIC code that was originally developed by one of the authors during a PhD thesis at the Max-Planck-Institut fuer Quantenoptik for kinetic simulations of high harmonic generation from overdense plasma surfaces. The code essentially uses the algorithms of Birdsall and Langdon and of Villasenor and Buneman. It is written in C++ in order to be easily extendable and has been parallelized so that its power grows linearly with the size of the accessible hardware, e.g. massively parallel machines like the Cray T3E. The parallel LPIC++ version uses PVM for communication between processors; PVM is public domain software and can be downloaded from the world wide web. A particular strength of LPIC++ lies in its clear program and data structure, which uses chained lists for the organization of grid cells and enables dynamic adjustment of spatial domain sizes in a very convenient way, and therefore easy balancing of processor loads. Particles belonging to one cell are also linked in a chained list and are immediately accessible from that cell. In addition to this convenient type of data organization in a PIC code, the code shows excellent performance in both its single-processor and parallel versions. (orig.)
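    The chained-list organization described above can be illustrated with a minimal sketch: cells form a linked chain, so a processor's spatial domain can grow or shrink simply by relinking cells, and each cell heads a chain of its own particles. Class and attribute names here are illustrative and are not those of LPIC++:

```python
class Particle:
    __slots__ = ("x", "next")

    def __init__(self, x):
        self.x = x          # particle position
        self.next = None    # next particle in the same cell's chain

class Cell:
    def __init__(self):
        self.particles = None  # head of this cell's particle chain
        self.next = None       # next cell in the domain's chain

    def push(self, p):
        """Link a particle at the head of this cell's chain (O(1))."""
        p.next = self.particles
        self.particles = p

    def count(self):
        """Walk the particle chain; particles are reachable directly
        from their cell, as in the abstract's description."""
        n, p = 0, self.particles
        while p is not None:
            n, p = n + 1, p.next
        return n
```

Load balancing then amounts to moving boundary cells (and their attached particle chains) from one processor's cell chain to a neighbour's.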

  20. Effect of differences in gas-dynamic behaviour on the separation performance of ultracentrifuges connected in parallel

    International Nuclear Information System (INIS)

    Portoghese, C.C.P.; Buchmann, J.H.

    1996-01-01

    This paper is concerned with the degradation of separation factors that occurs when groups of ultracentrifuges having different gas-dynamic behaviour are connected in parallel arrangements. Differences in gas-dynamic behaviour were expressed in terms of different tails pressures for the same operational conditions, namely feed flow rate, product pressure and cut. A mathematical model describing the ratio of the tails flow rates as a function of the tails pressure ratio and the feed flow rate was developed using experimental data collected from a pair of different ultracentrifuges connected in parallel. The optimization of the model parameters was performed using Marquardt's algorithm. The model was then used to simulate the degradation of separation factors in several parallel arrangements containing more than two centrifuges. The obtained results were compared with experimental data collected from different groups of ultracentrifuges, and the calculated results were in good agreement with the experimental data. This mathematical model, whose parameters were determined in a two-centrifuge parallel arrangement, is useful for simulating the effect of quantified gas-dynamic differences on the separation factors of groups containing any number of different ultracentrifuges and, consequently, for analyzing cascade losses due to this kind of occurrence. (author)

  1. Overview of the Force Scientific Parallel Language

    Directory of Open Access Journals (Sweden)

    Gita Alaghband

    1994-01-01

    Full Text Available The Force parallel programming language, designed for large-scale shared-memory multiprocessors, is presented. The language provides a number of parallel constructs as extensions to ordinary Fortran and is implemented as a two-level macro preprocessor to support portability across shared-memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that have been ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples illustrating parallel programming approaches using the Force are also presented.

  2. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  3. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements high-performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.

  4. Design considerations for parallel graphics libraries

    Science.gov (United States)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  5. Parallelizing AT with MatlabMPI

    International Nuclear Information System (INIS)

    2011-01-01

    The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems dealing with computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations, rather than programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to upgrade its performance to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, which were developed by MIT Lincoln Laboratory, to set up a message-passing environment that could be called within MATLAB, which set up the necessary pre-requisites for multithread processing capabilities. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we were able to demonstrate highly efficient speed increments per processor in AT's beam-tracking functions. Extrapolating from these predictions, we expect to reduce week-long computation runtimes to less than 15 minutes. This is a huge performance improvement and has enormous implications for the future computing power of the accelerator physics group at SSRL. However, one of the downfalls of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while
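    The two quoted figures are mutually consistent: on a quad-core CPU, a speedup of about 3.8x corresponds to a per-processor efficiency of about 95%, since efficiency is simply speedup divided by the number of processors. A minimal sketch of this bookkeeping (the timing values in the example are illustrative, not measurements from the paper):

```python
def speedup(t_serial, t_parallel):
    """Ratio of single-processor runtime to parallel runtime."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Speedup normalized by processor count; 1.0 is ideal scaling."""
    return speedup(t_serial, t_parallel) / n_procs
```

For example, a job taking 100 s serially and about 26.3 s on four cores has a speedup of about 3.8 and a per-processor efficiency of about 0.95.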

  6. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities...... for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler’s ability to generate loop-parallel code. We use this compilation system to modify two sequential...... benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...

  7. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  8. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer
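    The parallelization strategy described, dividing the plane-wave components of each state among the processors, amounts to a block distribution of coefficient indices. A hedged sketch of such a partition follows (the function name is illustrative; the actual FLAPW decomposition also involves the distribution of eigenstates and BLAS-based linear algebra):

```python
def block_partition(n_coeffs, n_procs, rank):
    """Return the (start, stop) half-open slice of plane-wave
    coefficients owned by `rank` under an even block distribution,
    with any remainder spread over the first ranks."""
    base, rem = divmod(n_coeffs, n_procs)
    start = rank * base + min(rank, rem)
    stop = start + base + (1 if rank < rem else 0)
    return start, stop
```

Each processor then applies its linear-algebra kernels to its own contiguous slice; the slices tile the coefficient range exactly, with no gaps or overlaps.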

  9. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  10. Real-time objects development: Study and proposal for a parallel scheduling architecture

    International Nuclear Information System (INIS)

    Rioux, Laurent

    1997-01-01

    This thesis contributes to the programming and execution control of real-time object-oriented applications. Real-time objects are very attractive for programming real-time applications, because the model combines concurrency with encapsulation, modularity and reusability, while taking the real-time constraints of the application into account. One essential quality of this approach is that parallelism and real-time constraints can be specified directly at the model level of the application. An annotation system for C++ has been defined to describe the real-time specifications in the model (or in the source code) of the application; it supplies the execution support with the information it needs for control. In this multitasking approach, control is distributed and encapsulated inside each real-time object. Three complementary levels of control have been defined: the state level (defining the capability of an object to treat an operation), the concurrency level (ensuring the coherence of the object attributes) and scheduling control (allocating processor resources to the object while taking real-time constraints into account). The proposed control architecture, named OROS, manages access to the attributes of each object individually, so it can parallelize treatments that do not access the same data. This architecture provides dynamic control of an application that can benefit from the parallelism of new machines, both for the execution itself and for the control. It uses only the simplest primitives of industrial real-time operating systems, which ensures its feasibility and portability. (author) [fr

  11. Demonstration of essentiality of entanglement in a Deutsch-like quantum algorithm

    Science.gov (United States)

    Huang, He-Liang; Goswami, Ashutosh K.; Bao, Wan-Su; Panigrahi, Prasanta K.

    2018-06-01

    Quantum algorithms can be used to efficiently solve certain classically intractable problems by exploiting quantum parallelism. However, the effectiveness of quantum entanglement in quantum computing remains a question of debate. This study presents a new quantum algorithm that shows entanglement could provide advantages over both classical algorithms and quantum algorithms without entanglement. Experiments are implemented to demonstrate the proposed algorithm using superconducting qubits. Results show the viability of the algorithm and suggest that entanglement is essential in obtaining quantum speedup for certain problems in quantum computing. The study provides reliable and clear guidance for developing useful quantum algorithms.

  12. Evaluation of anxiolytic and sedative effect of essential oil and hydroalcoholic extract of Ocimum basilicum L. and chemical composition of its essential oil.

    Science.gov (United States)

    Rabbani, Mohammed; Sajjadi, Seyed Ebrahim; Vaezi, Arefeh

    2015-01-01

    Ocimum basilicum belongs to the Lamiaceae family and has been used for the treatment of a wide range of diseases in traditional Iranian folk medicine. Due to the growing need for anti-anxiety medications, and because of the similarity between O. basilicum and Salvia officinalis, which has anti-anxiety effects, we decided to investigate the anxiolytic and sedative activity of the hydroalcoholic extract and essential oil of O. basilicum in mice using an elevated plus maze and a locomotor activity meter. The chemical composition of the plant's essential oil was also determined. The essential oil and hydroalcoholic extract were administered intraperitoneally to male Syrian mice at various doses (100, 150 and 200 mg/kg of the hydroalcoholic extract and 200 mg/kg of the essential oil) 30 min before starting the experiment. The yield of the hydroalcoholic extract was 18.6% w/w and that of the essential oil 0.34% v/w. The major components of the essential oil were methyl chavicol (42.8%), geranial (13.0%), neral (12.2%) and β-caryophyllene (7.2%). The hydroalcoholic extract (HE) at 150 and 200 mg/kg and the essential oil (EO) at 200 mg/kg significantly increased the time spent in the open arms in comparison to the control group; this effect was not significant at 100 mg/kg of the extract. None of the dosages had a significant effect on the number of entries into the open arms. Moreover, both the hydroalcoholic extract and the essential oil decreased the locomotion of mice in comparison to the control group. This study shows the anxiolytic and sedative effect of the hydroalcoholic extract and essential oil of O. basilicum. The anti-anxiety and sedative effect of the essential oil was higher than that of the hydroalcoholic extract at the same doses. These effects could be due to the phenolic components of O. basilicum.

  13. Parallelization characteristics of a three-dimensional whole-core code DeCART

    International Nuclear Information System (INIS)

    Cho, J. Y.; Joo, H.K.; Kim, H. Y.; Lee, J. C.; Jang, M. H.

    2003-01-01

    Neutron transport calculation for a three-dimensional whole core requires not only a huge amount of computing time but also huge memory. Therefore, whole-core codes such as DeCART need both parallel computation and distributed memory capabilities. This paper implements such parallel capabilities, based on MPI grouping and memory distribution, in the DeCART code, and then evaluates the performance by solving the C5G7 three-dimensional benchmark and a simplified three-dimensional SMART core problem. In the C5G7 problem with 24 CPUs, a maximum speedup of 22 is obtained on an IBM Regatta machine and 21 on a LINUX cluster for the MOC kernel, which indicates good parallel performance of the DeCART code. The simplified SMART problem, which needs about 11 GBytes of memory on a single processor, requires only about 940 MBytes per processor in the parallel run, which means that the DeCART code can now solve large core problems on affordable LINUX clusters

  14. Pharmacodynamic effects of steady-state fingolimod on antibody response in healthy volunteers: a 4-week, randomized, placebo-controlled, parallel-group, multiple-dose study.

    Science.gov (United States)

    Boulton, Craig; Meiser, Karin; David, Olivier J; Schmouder, Robert

    2012-12-01

    Fingolimod, a first-in-class oral sphingosine 1-phosphate receptor (S1PR) modulator, is approved in many countries for relapsing-remitting multiple sclerosis, at a once-daily 0.5-mg dose. A reduction in peripheral lymphocyte count is an expected consequence of the fingolimod mechanism of S1PR modulation. The authors investigated whether this pharmacodynamic effect impacts humoral and cellular immunogenicity. In this double-blind, parallel-group, 4-week study, 72 healthy volunteers were randomized to steady-state fingolimod 0.5 mg or 1.25 mg, or to placebo. The authors compared T-cell dependent and independent responses to the neoantigens, keyhole limpet hemocyanin (KLH), and pneumococcal polysaccharides vaccine (PPV-23), respectively, and additionally recall antigen response (tetanus toxoid [TT]) and delayed-type hypersensitivity (DTH) to KLH, TT, and Candida albicans. Fingolimod caused mild to moderate decreases in anti-KLH and anti-PPV-23 IgG and IgM levels versus placebo. Responder rates were identical between placebo and 0.5-mg groups for anti-KLH IgG (both > 90%) and comparable for anti-PPV-23 IgG (55% and 41%, respectively). Fingolimod did not affect anti-TT immunogenicity, and DTH response did not differ between placebo and fingolimod 0.5-mg groups. Expectedly, lymphocyte count reduced substantially in the fingolimod groups versus placebo but reversed by study end. Fingolimod was well tolerated, and the observed safety profile was consistent with previous reports.

  15. A Noise Trimming and Positional Significance of Transposon Insertion System to Identify Essential Genes in Yersinia pestis

    Science.gov (United States)

    Yang, Zheng Rong; Bullifent, Helen L.; Moore, Karen; Paszkiewicz, Konrad; Saint, Richard J.; Southern, Stephanie J.; Champion, Olivia L.; Senior, Nicola J.; Sarkar-Tyson, Mitali; Oyston, Petra C. F.; Atkins, Timothy P.; Titball, Richard W.

    2017-02-01

    Massively parallel sequencing technology coupled with saturation mutagenesis has provided new and global insights into gene functions and roles. At a simplistic level, the frequency of mutations within genes can indicate the degree of essentiality. However, this approach neglects to take account of the positional significance of mutations - the function of a gene is less likely to be disrupted by a mutation close to the distal ends. Therefore, a systematic bioinformatics approach to improve the reliability of essential gene identification is desirable. We report here a parametric model which introduces a novel mutation feature together with a noise trimming approach to predict the biological significance of Tn5 mutations. We show improved performance of essential gene prediction in the bacterium Yersinia pestis, the causative agent of plague. This method would have broad applicability to other organisms and to the identification of genes which are essential for competitiveness or survival under a broad range of stresses.
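    The positional-significance idea can be caricatured in a few lines: insertions near the distal ends of a gene are discarded before computing an insertion density, so genes whose central region tolerates no insertions stand out as essential candidates. The trim fraction and scoring below are illustrative only, not the paper's parametric model:

```python
def trimmed_insertion_index(gene_start, gene_end, insertion_sites,
                            trim_frac=0.1):
    """Density of Tn5 insertion sites within the central portion of a
    gene, discarding hits within `trim_frac` of either end, where a
    disruption is less likely to abolish function. A low index over a
    well-saturated library suggests essentiality. Illustrative only."""
    length = gene_end - gene_start
    lo = gene_start + trim_frac * length
    hi = gene_end - trim_frac * length
    central = [s for s in insertion_sites if lo <= s <= hi]
    return len(central) / (hi - lo)
```

For a hypothetical 100-bp gene with insertions at positions 5, 50 and 95, only the hit at 50 survives the 10% trim, giving an index of 1/80.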

  16. Parallelization of 2-D lattice Boltzmann codes

    International Nuclear Information System (INIS)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. A high parallel efficiency of 95.1% is obtained by the vector parallel calculation on 16 processors with a 1152x1152 grid, and of 88.6% by the scalar parallel calculation on 100 processors with an 800x800 grid. Performance models are developed to analyze the performance of the LB codes. Our performance models show that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability while keeping the memory used per processing element at its maximum available size. Our performance model predicts that the execution time of the vector parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method in general has a drawback in interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author)
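    The communication trade-off between the two decompositions is easy to quantify: with a 1-D strip decomposition each processor exchanges two full rows of the grid per step, while a 2-D block decomposition exchanges four edges whose length shrinks as the processor count grows. A hedged sketch of this halo-cell count (assuming a square grid, a square processor count for the 2-D case, and nearest-neighbour exchange only):

```python
import math

def halo_cells_1d(n, p):
    """Cells exchanged per processor and per step for a 1-D strip
    decomposition of an n x n grid: two neighbour rows of n cells each
    (independent of p for interior strips)."""
    return 2 * n

def halo_cells_2d(n, p):
    """Cells exchanged per processor and per step for a 2-D block
    decomposition: four edges of n / sqrt(p) cells each; assumes p is
    a perfect square."""
    side = n // int(math.isqrt(p))
    return 4 * side
```

For an 800x800 grid on 100 processors, the 1-D strips exchange 1600 cells per processor per step against 320 for 2-D blocks, which illustrates the communication drawback of the 1-D method that the vectorization gain nevertheless outweighs here.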

  17. Parallelization of 2-D lattice Boltzmann codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. A high parallel efficiency of 95.1% is obtained by the vector parallel calculation on 16 processors with a 1152x1152 grid, and of 88.6% by the scalar parallel calculation on 100 processors with an 800x800 grid. Performance models are developed to analyze the performance of the LB codes. Our performance models show that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability while keeping the memory used per processing element at its maximum available size. Our performance model predicts that the execution time of the vector parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method in general has a drawback in interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author).

  18. Explorations of the implementation of a parallel IDW interpolation algorithm in a Linux cluster-based parallel GIS

    Science.gov (United States)

    Huang, Fang; Liu, Dingsheng; Tan, Xicheng; Wang, Jian; Chen, Yunping; He, Binbin

    2011-04-01

    To design and implement an open-source parallel GIS (OP-GIS) based on a Linux cluster, the parallel inverse distance weighting (IDW) interpolation algorithm has been chosen as an example to explore the working model and the principle of algorithm parallel pattern (APP), one of the parallelization patterns for OP-GIS. Based on an analysis of the serial IDW interpolation algorithm of GRASS GIS, this paper has proposed and designed a specific parallel IDW interpolation algorithm, incorporating both single process, multiple data (SPMD) and master/slave (M/S) programming modes. The main steps of the parallel IDW interpolation algorithm are: (1) the master node packages the related information, and then broadcasts it to the slave nodes; (2) each node calculates its assigned data extent along one row using the serial algorithm; (3) the master node gathers the data from all nodes; and (4) iterations continue until all rows have been processed, after which the results are outputted. According to the experiments performed in the course of this work, the parallel IDW interpolation algorithm can attain an efficiency greater than 0.93 compared with similar algorithms, which indicates that the parallel algorithm can greatly reduce processing time and maximize speed and performance.
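    The per-row kernel of step (2) is ordinary serial IDW, and the master's bookkeeping in steps (1)-(4) reduces to handing out row indices. A minimal sketch under simplifying assumptions (all sample points visible to every node, no search radius; names are illustrative, not from the GRASS-based implementation):

```python
def idw(x, y, samples, power=2.0):
    """Serial inverse distance weighting at point (x, y) from a list of
    (sx, sy, value) samples; this is the kernel each slave applies to
    every grid point of its assigned row."""
    num = den = 0.0
    for sx, sy, v in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return v  # interpolation point coincides with a sample
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

def assign_rows(n_rows, n_slaves):
    """Master-side bookkeeping: deal out row indices round-robin, one
    row per slave per iteration, mirroring steps (2)-(4)."""
    return {row: row % n_slaves for row in range(n_rows)}
```

In the real M/S implementation the master broadcasts the packaged sample data once up front, and the gather of step (3) collects each slave's finished rows before the next iteration begins.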

  19. A massively parallel algorithm for the collision probability calculations in the Apollo-II code using the PVM library

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1995-01-01

    The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time; for a typical 2D assembly calculation, about 90% of the computing time is consumed in the collision probability evaluations. Consequently, RZ or 3D calculations become prohibitive. In this paper we present a simple but efficient parallel algorithm based on the message passing host/node programming model. Parallelization was applied to the energy group treatment. Such an approach permits parallelization of the existing code, requiring only limited modifications. Sequential/parallel computer portability is preserved, which is a necessary condition for an industrial code. Sequential performances are also preserved. The algorithm is implemented on a CRAY 90 coupled to a 128-processor T3D computer, a 16-processor IBM SP1 and a network of workstations, using the Public Domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best ones with the IBM SP1. Because of the heterogeneity of the workstation network, we did not ask for high performance from this architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) with a massively parallel computer, using several hundreds of processors. (author). 5 refs., 6 figs., 2 tabs

  20. A massively parallel algorithm for the collision probability calculations in the Apollo-II code using the PVM library

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1995-01-01

    The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time; for a typical 2D assembly calculation, about 90% of the computing time is consumed in the collision probability evaluations. Consequently, RZ or 3D calculations become prohibitive. In this paper the author presents a simple but efficient parallel algorithm based on the message passing host/node programming model. Parallelization was applied to the energy group treatment. Such an approach permits parallelization of the existing code, requiring only limited modifications. Sequential/parallel computer portability is preserved, which is a necessary condition for an industrial code. Sequential performance is also preserved. The algorithm is implemented on a CRAY 90 coupled to a 128-processor T3D computer, a 16-processor IBM SP1 and a network of workstations, using the public domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best ones with the IBM SP1. Because of the heterogeneity of the workstation network, the author did not expect high performance from this architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) on a massively parallel computer using several hundred processors
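The energy-group parallelization described here amounts to the host dealing energy groups out to the nodes, each of which evaluates its collision probabilities independently. A minimal, hypothetical sketch of the host-side partitioning (our illustration, not code from Apollo-II):

```python
def partition_groups(n_groups, n_nodes):
    """Host side: split the energy groups into contiguous, near-equal chunks,
    one per node; each node then evaluates the collision probability matrices
    for its own groups independently and returns them to the host."""
    base, extra = divmod(n_groups, n_nodes)
    chunks, start = [], 0
    for node in range(n_nodes):
        size = base + (1 if node < extra else 0)
        chunks.append(range(start, start + size))
        start += size
    return chunks
```

Because the groups are independent at this stage, the decomposition requires no changes to the per-group serial kernel, which is why the existing code needed only limited modifications.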

  1. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  2. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.

  3. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics: Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also novel concepts and approaches, such as type synthesis based on evolution, performance evaluation and optimization based on screw theory, a singularity model taking into account motion and force transmissibility, and others. This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  4. Hydraulic Profiling of a Parallel Channel Type Reactor Core

    International Nuclear Information System (INIS)

    Seo, Kyong-Won; Hwang, Dae-Hyun; Lee, Chung-Chan

    2006-01-01

    An advanced reactor core consisting of closed multiple parallel channels was optimized to maximize the thermal margin of the core. The closed multiple parallel channel configuration has different characteristics from the open channels of conventional PWRs. The channels, usually assemblies, are hydraulically isolated from each other and there is no cross flow between channels. The distribution of inlet flow rate between channels is a very important design parameter in the core because the distribution of inlet flow is directly proportional to the margin for a certain hydraulic parameter. The thermal hydraulic parameter may be the boiling margin, maximum fuel temperature, or critical heat flux. The inlet flow distribution of the core was optimized for the boiling margins by grouping the inlet orifices into several hydraulic regions. The procedure is called hydraulic profiling

  5. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  6. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and, relative to scalar calculations, achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90

  7. Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian

    2017-02-01

    The THOR neutral particle transport code enables simulation of complex geometries for various problems from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V requiring computational efficiency. This has motivated various improvements including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code’s efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL’s Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former’s accuracy is bounded by the variability of communication on Falcon while the latter has an error on the order of 1%.
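As a hedged illustration of what such a parallel performance model can look like, the sketch below combines a per-cell compute term with a latency/bandwidth communication term and picks the node count that minimizes the predicted runtime. All function names and parameter values are invented for illustration; this is not THOR's actual model.

```python
def predicted_runtime(p, t_cell, n_cells, alpha, beta, msg_bytes):
    """Two-part model: compute work scales as 1/p; communication is modelled
    with a latency (alpha) plus bandwidth (beta) term that, in this toy,
    grows with the number of participating nodes."""
    compute = t_cell * n_cells / p
    comm = p * (alpha + beta * msg_bytes)
    return compute + comm

def best_node_count(t_cell, n_cells, alpha, beta, msg_bytes, max_p=1024):
    """Pick the node count that minimizes the predicted runtime."""
    return min(range(1, max_p + 1),
               key=lambda p: predicted_runtime(p, t_cell, n_cells,
                                               alpha, beta, msg_bytes))
```

Such a model makes the trade-off explicit: beyond the optimal node count, the communication term dominates and adding processors slows the run down, which is exactly the kind of bottleneck the PPM is meant to expose.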

  8. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  9. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof for the fact that a planar body can only have polynomial parallel volume if it is convex. Extensions to Minkowski spaces and random sets are also discussed.

  10. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof for the fact that a planar body can only have polynomial parallel volume if it is convex. Extensions to Minkowski spaces and random sets are also discussed.
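In standard notation (ours, not necessarily the paper's), the parallel volume of a planar body K at distance r, and the convergence result under discussion, read:

```latex
V_K(r) \;=\; \lambda_2\bigl(K \oplus r B^2\bigr)
       \;=\; \lambda_2\bigl(\{x \in \mathbb{R}^2 : \operatorname{dist}(x, K) \le r\}\bigr),
\qquad
\lim_{r \to \infty}\bigl(V_{\operatorname{conv}(K)}(r) - V_K(r)\bigr) \;=\; 0,
```

where B^2 is the closed unit disc and \lambda_2 denotes two-dimensional Lebesgue measure.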

  11. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  12. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  13. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  14. Parallel Monitors for Self-adaptive Sessions

    Directory of Open Access Journals (Sweden)

    Mario Coppo

    2016-06-01

    The paper presents a data-driven model of self-adaptivity for multiparty sessions. System choreography is prescribed by a global type. Participants are incarnated by processes associated with monitors, which control their behaviour. Each participant can access and modify a set of global data, which are able to trigger adaptations in the presence of critical changes of values. The use of the parallel composition for building global types, monitors and processes enables a significant degree of flexibility: an adaptation step can dynamically reconfigure a set of participants only, without altering the remaining participants, even if the two groups communicate.

  15. Globalized Newton-Krylov-Schwarz Algorithms and Software for Parallel Implicit CFD

    Science.gov (United States)

    Gropp, W. D.; Keyes, D. E.; McInnes, L. C.; Tidriri, M. D.

    1998-01-01

    Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, "routine" parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz (Psi-NKS) algorithmic framework is presented as an answer. We show that, for the classical problem of three-dimensional transonic Euler flow about an M6 wing, Psi-NKS can simultaneously deliver: globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton's method; reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and high per-processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of Psi-NKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. We therefore distill several recommendations from our experience and from our reading of the literature on various algorithmic components of Psi-NKS, and we describe a freely available, MPI-based portable parallel software implementation of the solver employed here.

  16. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    At present, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly ease parallel programming with high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a break-through in parallel programming technique.

  17. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with that of the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
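For reference, the Davidon-Fletcher-Powell update of the inverse-Hessian approximation, whose outer products and matrix-vector products are the natural candidates for the parallel components discussed above, can be written as follows. This is a generic textbook formulation, not the paper's transputer implementation:

```python
import numpy as np

def dfp_update(H, s, y):
    """One Davidon-Fletcher-Powell update of the inverse-Hessian estimate H,
    with s = x_{k+1} - x_k and y = grad f(x_{k+1}) - grad f(x_k). The two
    rank-one outer products and the H @ y product are the operations that
    parallelize most readily."""
    s = np.asarray(s, dtype=float).reshape(-1, 1)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    Hy = H @ y
    return (H
            + (s @ s.T) / (s.T @ y).item()
            - (Hy @ Hy.T) / (y.T @ Hy).item())
```

The update preserves symmetry and satisfies the secant condition H_{k+1} y = s, which is a convenient correctness check.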

  18. Programming massively parallel processors a hands-on approach

    CERN Document Server

    Kirk, David B

    2010-01-01

    Programming Massively Parallel Processors discusses basic concepts about parallel programming and GPU architecture. "Massively parallel" refers to the use of a large number of processors to perform a set of computations in a coordinated parallel way. The book details various techniques for constructing parallel programs. It also discusses the development process, performance level, floating-point format, parallel patterns, and dynamic parallelism. The book serves as a teaching guide where parallel programming is the main topic of the course. It builds on the basics of C programming for CUDA, a parallel programming environment that is supported on NVIDIA GPUs. Composed of 12 chapters, the book begins with basic information about the GPU as a parallel computer source. It also explains the main concepts of CUDA, data parallelism, and the importance of memory access efficiency using CUDA. The target audience of the book is graduate and undergraduate students from all science and engineering disciplines who ...

  19. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  20. The effect of the essential oils of lavender and rosemary on the human short-term memory

    OpenAIRE

    O.V. Filiptsova; L.V. Gazzavi-Rogozina; I.A. Timoshyna; O.I. Naboka; Ye.V. Dyomina; A.V. Ochkur

    2018-01-01

    The research results of the effect of essential oils on the human short-term image and numerical memory have been described. The study involved 79 secondary school students (34 boys and 45 girls) aged 13 to 17 years, residents of the Ukrainian metropolis. Participants were divided into three groups: the control group, “Lavender” group, in which the lavender essential oil was sprayed, and “Rosemary” group, in which the rosemary essential oil was sprayed. The statistically significant differenc...

  1. Antimicrobial activity of essential oils and carvacrol, and synergy of carvacrol and erythromycin, against clinical, erythromycin-resistant Group A Streptococci.

    Directory of Open Access Journals (Sweden)

    Gloria eMagi

    2015-03-01

    In the present study, we have evaluated the in vitro antibacterial activity of essential oils from Origanum vulgare, Thymus vulgaris, Lavandula angustifolia, Mentha piperita, and Melaleuca alternifolia against 32 erythromycin-resistant [MIC ≥1 µg/mL; inducible, constitutive, and efflux-mediated resistance phenotypes; erm(TR), erm(B), and mef(A) genes] and cell-invasive Group A streptococci (GAS) isolated from children with pharyngotonsillitis in Italy. Over the past decades erythromycin resistance in GAS has emerged in several countries; strains combining erythromycin resistance and cell invasiveness may escape β-lactams because of intracellular location and macrolides because of resistance, resulting in difficulty of eradication and recurrent pharyngitis. Thyme and origanum essential oils demonstrated the highest antimicrobial activity, with MICs ranging from 256 to 512 µg/mL. The phenolic monoterpene carvacrol [2-methyl-5-(1-methylethyl)phenol] is a major component of the essential oils of Origanum and Thymus plants. MICs of carvacrol ranged from 64 to 256 µg/mL. In the live/dead assay several dead cells were detected as early as 1 h after incubation with carvacrol at the MIC. In single-step resistance selection studies no resistant mutants were obtained. A synergistic action of carvacrol and erythromycin was detected by the checkerboard assay and calculation of the FIC index. A 2- to 2048-fold reduction of the erythromycin MIC was documented in checkerboard assays. Synergy (FIC index ≤0.5) was found in 21/32 strains and was highly significant (p <0.01) in strains where resistance is expressed only in the presence of erythromycin. Synergy was confirmed in 17/23 strains using 24-h time-kill curves in the presence of carvacrol and erythromycin. Our findings demonstrate that carvacrol acts either alone or in combination with erythromycin against erythromycin-resistant GAS and could potentially serve as a novel therapeutic tool.
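The FIC index computed from a checkerboard assay can be stated compactly. The cut-offs below follow one common convention (synergy at FIC ≤0.5, antagonism above 4, as used in the abstract); exact breakpoints vary among authors, so treat the interpretation bands as an assumption:

```python
def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    """Fractional Inhibitory Concentration index from a checkerboard assay:
    FIC = MIC_A(in combination)/MIC_A(alone) + MIC_B(in combination)/MIC_B(alone)."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fic):
    """One common convention; breakpoints differ between publications."""
    if fic <= 0.5:
        return "synergy"
    if fic <= 4.0:
        return "no interaction"
    return "antagonism"
```

For example, a hypothetical strain with a carvacrol MIC of 128 µg/mL alone and 32 µg/mL in combination, and an erythromycin MIC of 8 µg/mL alone and 1 µg/mL in combination, gives FIC = 32/128 + 1/8 = 0.375, i.e. synergy under this convention.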

  2. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R; Ratterman, Joseph D; Smith, Brian E

    2014-11-11

    Endpoint-based parallel data processing with non-blocking collective instructions in a PAMI of a parallel computer is disclosed. The PAMI is composed of data communications endpoints, each including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task. The compute nodes are coupled for data communications through the PAMI. The parallel application establishes a data communications geometry specifying a set of endpoints that are used in collective operations of the PAMI by associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry; registering in each endpoint in the geometry a dispatch callback function for a collective operation; and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.

  3. Resistance to awareness of the supervisor's transferences with special reference to the parallel process.

    Science.gov (United States)

    Stimmel, B

    1995-06-01

    Supervision is an essential part of psychoanalytic education. Although not taken for granted, it is not studied with the same critical eye as is the analytic process. This paper examines the supervision specifically with a focus on the supervisor's transference towards the supervisee. The point is made, in the context of clinical examples, that one of the ways these transference reactions may be rationalised is within the setting of the parallel process so often encountered in supervision. Parallel process, a very familiar term, is used frequently and easily when discussing supervision. It may be used also as a resistance to awareness of transference phenomena within the supervisor in relation to the supervisee, particularly because of its clinical presentation. It is an enactment between supervisor and supervisee, thus ripe with possibilities for disguise, displacement and gratification. While transference reactions of the supervisee are often discussed, those of the supervisor are notably missing in our literature.

  4. H5Part A Portable High Performance Parallel Data Interface for Particle Simulations

    CERN Document Server

    Adelmann, Andreas; Shalf, John M; Siegerist, Cristina

    2005-01-01

    Large parallel particle simulations in six-dimensional phase space generate vast amounts of data. It is also desirable to share data and data analysis tools such as ParViT (Particle Visualization Toolkit) among other groups who are working on particle-based accelerator simulations. We define a very simple file schema built on top of HDF5 (Hierarchical Data Format version 5) as well as an API that simplifies the reading/writing of the data to the HDF5 file format. HDF5 offers a self-describing, machine-independent binary file format that supports scalable parallel I/O performance for MPI codes on a variety of supercomputing systems and works equally well on laptop computers. The API is available for C, C++, and Fortran codes. The file format will enable disparate research groups with very different simulation implementations to share data transparently and share data analysis tools. For instance, the common file format will enable groups that depend on completely different simulation implementations to share c...

  5. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    The object of research is the tools to support the development of parallel programs in C/C++. Methods and software which automate the process of designing parallel applications are proposed.

  6. An Introduction to Parallel Computation R

    Indian Academy of Sciences (India)

    How are they programmed? This article provides an introduction. A parallel computer is a network of processors built for ... and have been used to solve problems much faster than a single ... in parallel computer design is to select an organization which ..... The most ambitious approach to parallel computing is to develop.

  7. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte and exabyte scale. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding, we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine in reasonable time and fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the classroom. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.

  8. Distributed Cooperative Current-Sharing Control of Parallel Chargers Using Feedback Linearization

    Directory of Open Access Journals (Sweden)

    Jiangang Liu

    2014-01-01

    We propose a distributed current-sharing scheme to address the output current imbalance problem for the parallel chargers in the energy storage type light rail vehicle system. By treating the parallel chargers as a group of agents sharing output information through a communication network, the current-sharing control problem is recast as a multi-agent consensus tracking problem. To facilitate the design, input-output feedback linearization is first applied to transform the nonidentical nonlinear charging system model into a first-order integrator. Then, a general saturation function is introduced to design the cooperative current-sharing control law, which guarantees the boundedness of the proposed control. The cooperative stability of the closed-loop system under fixed and dynamic communication topologies is rigorously proved with the aid of a Lyapunov function and the LaSalle invariance principle. Simulation using a multicharging test system further illustrates that the output currents of parallel chargers are balanced using the proposed control.
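A toy discrete-time version of the scheme conveys the idea: after feedback linearization each charger behaves as an integrator, and a bounded (saturated) consensus law built from the shared output currents drives them together. Everything here (the gain, the all-to-all topology, the function names) is an illustrative assumption, not the paper's controller:

```python
def sat(u, limit):
    """Saturation function keeping each control input bounded."""
    return max(-limit, min(limit, u))

def simulate_current_sharing(i0, steps=200, gain=0.2, limit=1.0):
    """Discrete-time toy: each charger integrates a saturated consensus input
    computed from the other chargers' shared output currents (here a fully
    connected communication topology)."""
    currents = list(i0)
    n = len(currents)
    for _ in range(steps):
        # consensus error for charger i: average of (i_j - i_i) over all agents
        errors = [sum(cj - ci for cj in currents) / n for ci in currents]
        currents = [ci + gain * sat(e, limit) for ci, e in zip(currents, errors)]
    return currents
```

Starting from unbalanced currents, the outputs converge to a common value while every control input stays within the saturation bound, which is the boundedness property the paper proves.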

  9. Cooperative parallel adaptive neighbourhood search for the disjunctively constrained knapsack problem

    Science.gov (United States)

    Quan, Zhe; Wu, Lei

    2017-09-01

    This article investigates the use of parallel computing for solving the disjunctively constrained knapsack problem. The proposed parallel computing model can be viewed as a cooperative algorithm based on a multi-neighbourhood search. The cooperation system is composed of a team manager and a crowd of team members. The team members aim at applying their own search strategies to explore the solution space. The team manager collects the solutions from the members and shares the best one with them. The performance of the proposed method is evaluated on a group of benchmark data sets. The results obtained are compared to those reached by the best methods from the literature. The results show that the proposed method is able to provide the best solutions in most cases. In order to highlight the robustness of the proposed parallel computing model, a new set of large-scale instances is introduced. Encouraging results have been obtained.
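
    The manager/member scheme can be sketched on a toy disjunctively constrained knapsack instance (members shown sequentially rather than in parallel; the instance, neighbourhood sizes and round count are invented for illustration):

```python
import random

def feasible(sol, weights, cap, conflicts):
    """Feasible if the packing fits the capacity and no conflicting pair is packed."""
    if sum(w for i, w in enumerate(weights) if sol[i]) > cap:
        return False
    return not any(sol[i] and sol[j] for i, j in conflicts)

def value(sol, values):
    return sum(v for i, v in enumerate(values) if sol[i])

def member_step(sol, weights, values, cap, conflicts, rng, flips):
    """One team member; its own neighbourhood is 'flip `flips` random items'."""
    cand = list(sol)
    for i in rng.sample(range(len(sol)), flips):
        cand[i] = 1 - cand[i]
    return cand if feasible(cand, weights, cap, conflicts) else sol

def cooperative_search(weights, values, cap, conflicts, rounds=300, seed=0):
    rng = random.Random(seed)
    best = [0] * len(weights)        # start from the empty knapsack
    for _ in range(rounds):
        # each member explores its neighbourhood of the shared best solution
        found = [member_step(best, weights, values, cap, conflicts, rng, flips)
                 for flips in (1, 2, 3)]
        # the team manager collects the solutions and shares the best one
        for cand in found:
            if value(cand, values) > value(best, values):
                best = cand
    return best

weights, values = [2, 3, 4, 5], [3, 4, 5, 8]
conflicts = [(0, 3)]                 # items 0 and 3 may not be packed together
best = cooperative_search(weights, values, cap=9, conflicts=conflicts)
print(value(best, values))           # 13: items 2 and 3
```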

  10. Professional Parallel Programming with C# Master Parallel Extensions with NET 4

    CERN Document Server

    Hillar, Gastón

    2010-01-01

    Expert guidance for those programming today's multicore PCs. As PC processors grow from one or two to as many as eight cores, there is an urgent need for programmers to master concurrent programming. This book dives deep into the latest technologies available to programmers for creating professional parallel applications using C#, .NET 4, and Visual Studio 2010. The book covers task-based programming, coordination data structures, PLINQ, thread pools, the asynchronous programming model, and more. It also teaches other parallel programming techniques, such as SIMD and vectorization.

  11. Oxidative stability of chicken thigh meat after treatment of fennel and savory essential oils

    Directory of Open Access Journals (Sweden)

    Adriana Pavelková

    2016-05-01

    Full Text Available In the present work, the effect of fennel and savory essential oils on the oxidative stability of chicken thigh muscles during chilled storage was investigated. The experiment used chickens of the hybrid combination Cobb 500 after a 42-day fattening period. The fresh chicken thighs with skin from the left half-carcass were divided into five groups (n = 5): C - control, air-packaged group; A1 - vacuum-packaged experimental group; A2 - vacuum-packaged experimental group with 1.50% w/w EDTA solution; A3 - vacuum-packaged experimental group with fennel (Foeniculum vulgare) essential oil at a concentration of 0.2% v/w; and A4 - vacuum-packaged experimental group with savory (Satureja hortensis) essential oil at a concentration of 0.2% v/w. The essential oils were applied to the surface of the chicken thighs. The thighs were packaged using a vacuum packaging machine and stored refrigerated at 4 ±0.5 °C. The thiobarbituric acid (TBA) value, expressed as the amount of malondialdehyde (MDA) in 1 kg of sample, was measured on the 1st, 4th, 8th, 12th and 16th day of storage. The treatment of chicken thighs with fennel and savory essential oils showed statistically significant differences between all experimental groups and the control group: after 16 days of chilled storage, the average MDA value in thigh muscle was higher in the control group (0.359 mg.kg-1) than in the experimental groups A1 (0.129 mg.kg-1), A2 (0.091 mg.kg-1), A3 (0.084 mg.kg-1) and A4 (0.089 mg.kg-1). The results show that treating chicken thighs with fennel and savory essential oils reduced oxidative processes in thigh muscles during chilled storage, and that the use of essential oils is one option for increasing the shelf life of fresh chicken meat.

  12. Parallelization for first principles electronic state calculation program

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Oguchi, Tamio.

    1997-03-01

    In this report we study the parallelization of a first-principles electronic state calculation program. The target machines are the NEC SX-4 for shared-memory parallelization and the FUJITSU VPP300 for distributed-memory parallelization. The features of each parallel machine are surveyed, and parallelization methods suitable for each are proposed. It is shown that a 1.60-times speedup is achieved with 2-CPU parallelization on the SX-4 and a 4.97-times speedup is achieved with 12-PE parallelization on the VPP300. (author)

  13. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    The work in the field of parallel processing has developed as research activity using several numerical Monte Carlo simulations related to basic or applied current problems of nuclear and particle physics. For the applications utilizing the GEANT code, development and improvement work was done on the parts simulating low-energy physical phenomena such as radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking at low energies down to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other work in this field concerns simulations for nuclear medicine applications such as the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness), and methods for dosimetric calculations. These calculations are particularly suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Further work in the same field concerns the simulation of electron channelling in crystals and of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for monitoring natural and artificial radioactivity in the environment

  14. Neoclassical parallel flow calculation in the presence of external parallel momentum sources in Heliotron J

    Energy Technology Data Exchange (ETDEWEB)

    Nishioka, K.; Nakamura, Y. [Graduate School of Energy Science, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan); Nishimura, S. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); Lee, H. Y. [Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of); Kobayashi, S.; Mizuuchi, T.; Nagasaki, K.; Okada, H.; Minami, T.; Kado, S.; Yamamoto, S.; Ohshima, S.; Konoshima, S.; Sano, F. [Institute of Advanced Energy, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan)

    2016-03-15

    A moment approach to calculating neoclassical transport in non-axisymmetric torus plasmas composed of multiple ion species is extended to include the external parallel momentum sources due to unbalanced tangential neutral beam injections (NBIs). The momentum sources included in the parallel momentum balance are calculated from the collision operators of background particles with fast ions. The method is applied to clarify the physical mechanism of the neoclassical parallel ion flows and the multi-ion-species effect on them in Heliotron J NBI plasmas. It is found that the parallel ion flow is determined by the balance between the parallel viscosity and the external momentum source in the region where the external source is much larger than the thermodynamic-force-driven source in collisional plasmas. This is because the friction between C{sup 6+} and D{sup +} prevents a large difference between the C{sup 6+} and D{sup +} flow velocities in such plasmas. The C{sup 6+} flow velocities, measured by the charge exchange recombination spectroscopy system, are numerically evaluated with this method. It is shown that the experimentally measured C{sup 6+} impurity flow velocities do not clearly contradict the neoclassical estimates, and that the dependence of the parallel flow velocities on the magnetic field ripples is consistent between the two.

  15. Structural Properties of G,T-Parallel Duplexes

    Directory of Open Access Journals (Sweden)

    Anna Aviñó

    2010-01-01

    Full Text Available The structure of G,T-parallel-stranded DNA duplexes carrying similar amounts of adenine and guanine residues is studied by means of molecular dynamics (MD) simulations and UV and CD spectroscopy. In addition, the impact of substituting adenine with 8-aminoadenine and guanine with 8-aminoguanine is analyzed. The presence of 8-aminoadenine and 8-aminoguanine stabilizes the parallel duplex structure. Binding of these oligonucleotides to their target polypyrimidine sequences to form the corresponding G,T-parallel triplex was not observed. Instead, when unmodified parallel-stranded duplexes were mixed with their polypyrimidine target, an interstrand Watson-Crick duplex was formed. As predicted by theoretical calculations, parallel-stranded duplexes carrying 8-aminopurines did not bind to their target. The preference for the parallel duplex over the Watson-Crick antiparallel duplex is attributed to the strong stabilization of the parallel duplex produced by the 8-aminopurines. Theoretical studies show that the isomorphism of the triads is crucial for the stability of the parallel triplex.

  16. High-speed parallel solution of the neutron diffusion equation with the hierarchical domain decomposition boundary element method incorporating parallel communications

    International Nuclear Information System (INIS)

    Tsuji, Masashi; Chiba, Gou

    2000-01-01

    A hierarchical domain decomposition boundary element method (HDD-BEM) for solving the multiregion neutron diffusion equation (NDE) has been fully parallelized, both for numerical computations and for data communications, to accomplish high parallel efficiency on distributed-memory message-passing parallel computers. Data exchanges between node processors that are repeated during the iteration processes of HDD-BEM are implemented without any intervention of the host processor, which was used to supervise parallel processing in the conventional parallelized HDD-BEM (P-HDD-BEM). Thus, parallel processing can be executed with only cooperative operations between node processors. The communication overhead was the dominant time-consuming part in the conventional P-HDD-BEM, and the parallelization efficiency decreased steeply as the number of processors increased. With parallel data communication, the efficiency is affected only by the number of boundary elements assigned to the decomposed subregions, and the communication overhead can be drastically reduced. This feature is particularly advantageous in the analysis of three-dimensional problems, where a large number of processors are required. The proposed P-HDD-BEM offers a promising solution to the problem of deteriorating parallel efficiency and opens a new path to parallel computation of NDEs on distributed-memory message-passing parallel computers. (author)

  17. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  18. Parallel computing of physical maps--a comparative study in SIMD and MIMD parallelism.

    Science.gov (United States)

    Bhandarkar, S M; Chirravuri, S; Arnold, J

    1996-01-01

    Ordering clones from a genomic library into physical maps of whole chromosomes presents a central computational problem in genetics. Chromosome reconstruction via clone ordering is usually isomorphic to the NP-complete Optimal Linear Arrangement problem. Parallel SIMD and MIMD algorithms for simulated annealing based on Markov chain distribution are proposed and applied to the problem of chromosome reconstruction via clone ordering. Perturbation methods and problem-specific annealing heuristics are proposed and described. The SIMD algorithms are implemented on a 2048 processor MasPar MP-2 system which is an SIMD 2-D toroidal mesh architecture whereas the MIMD algorithms are implemented on an 8 processor Intel iPSC/860 which is an MIMD hypercube architecture. A comparative analysis of the various SIMD and MIMD algorithms is presented in which the convergence, speedup, and scalability characteristics of the various algorithms are analyzed and discussed. On a fine-grained, massively parallel SIMD architecture with a low synchronization overhead such as the MasPar MP-2, a parallel simulated annealing algorithm based on multiple periodically interacting searches performs the best. For a coarse-grained MIMD architecture with high synchronization overhead such as the Intel iPSC/860, a parallel simulated annealing algorithm based on multiple independent searches yields the best results. In either case, distribution of clonal data across multiple processors is shown to exacerbate the tendency of the parallel simulated annealing algorithm to get trapped in a local optimum.
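
    The "multiple independent searches" strategy that performed best on the MIMD machine can be sketched on a toy linear arrangement instance (runs shown sequentially rather than on separate processors; the graph, seeds and annealing schedule are illustrative):

```python
import math
import random

def cost(perm, edges):
    """Linear arrangement cost: total distance between linked vertices."""
    pos = {v: i for i, v in enumerate(perm)}
    return sum(abs(pos[u] - pos[v]) for u, v in edges)

def anneal(edges, n, seed, t0=2.0, cooling=0.995, steps=2000):
    """One simulated-annealing search; returns the best cost it found."""
    rng = random.Random(seed)
    cur = list(range(n))
    rng.shuffle(cur)
    cur_cost = cost(cur, edges)
    best_cost, t = cur_cost, t0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        cur[i], cur[j] = cur[j], cur[i]          # perturbation: swap two vertices
        new_cost = cost(cur, edges)
        if new_cost <= cur_cost or rng.random() < math.exp((cur_cost - new_cost) / t):
            cur_cost = new_cost
            best_cost = min(best_cost, cur_cost)
        else:
            cur[i], cur[j] = cur[j], cur[i]      # reject: undo the swap
        t *= cooling
    return best_cost

# MIMD-style strategy: several independent searches, keep the overall best.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # a 5-cycle toy "map"
best_overall = min(anneal(edges, 5, seed) for seed in range(4))
print(best_overall)  # 8, the optimal linear arrangement cost of a 5-cycle
```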

  19. On synchronous parallel computations with independent probabilistic choice

    International Nuclear Information System (INIS)

    Reif, J.H.

    1984-01-01

    This paper introduces probabilistic choice to synchronous parallel machine models, in particular parallel RAMs. The power of probabilistic choice in parallel computations is illustrated by parallelizing some known probabilistic sequential algorithms. The authors characterize the computational complexity of time-, space-, and processor-bounded probabilistic parallel RAMs in terms of the computational complexity of probabilistic sequential RAMs. They show that parallelism uniformly speeds up time-bounded probabilistic sequential RAM computations by nearly a quadratic factor. They also show that probabilistic choice can be eliminated from parallel computations by introducing nonuniformity

  20. The effects of lavender essential oil aromatherapy on anxiety and depression in haemodialysis patients

    Directory of Open Access Journals (Sweden)

    Masoumeh Bagheri-Nesami

    2017-05-01

    Full Text Available This study examined the effects of lavender essential oil aromatherapy on anxiety and depression in haemodialysis patients. This randomised clinical trial was conducted on 72 haemodialysis patients divided into control and experimental groups. The control group received only routine care. The experimental group received aromatherapy with 3 drops of 5% lavender essential oil for 10 minutes every time they underwent haemodialysis, for a period of one month. Anxiety and depression were measured in both groups at baseline and at the end of the second and fourth weeks, during the first hour of a dialysis session. The rANOVA showed no significant difference between the two groups in the severity of anxiety before the intervention or at the end of the second and fourth weeks (p = 0.783). However, the rANOVA revealed a significant difference in the severity of depression between the two groups (p = 0.005). Current research suggests that different concentrations of lavender essential oil may be needed to relieve anxiety as opposed to depression. In sum, future studies should investigate different concentrations of lavender essential oil at different times during haemodialysis sessions to establish specific doses for haemodialysis patients suffering from anxiety and depression.

  1. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
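
    The reciprocal rule behind such tables is easy to check with exact arithmetic. A small sketch (the example values are illustrative, not necessarily entries from the article's tables):

```python
from fractions import Fraction

def parallel(*resistors):
    """Total resistance of parallel resistors: 1/R = 1/R1 + 1/R2 + ..."""
    return 1 / sum(Fraction(1, r) for r in resistors)

# Combinations whose parallel total is a whole number, the kind of
# entry such classroom tables collect.
print(parallel(20, 30))        # 12
print(parallel(10, 15, 30))    # 5
```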

  2. Parallelization methods study of thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Gaudart, Catherine

    2000-01-01

    The variety of parallelization methods and machines leaves programmers with a wide range of choices. In this study we suggest, in an industrial context, some solutions drawn from experience with different parallelization methods. The study covers several scientific codes which simulate a large variety of thermal-hydraulics phenomena. A bibliography on parallelization methods and a first analysis of the codes showed the difficulty of applying our process to the whole set of applications under study. It was therefore necessary to identify and extract a representative part of these applications and parallelization methods; the linear solver part of the codes emerged as the natural choice, since several parallelization methods had already been applied to it. From these developments one can estimate the work required for a programmer new to parallelization to parallelize an application, and the impact of the development constraints. The parallelization methods tested are the numerical library PETSc, the parallelizer PAF, the language HPF, the formalism PEI, and the communication libraries MPI and PVM. In order to test several methods on different applications while minimizing modifications to the codes, a tool called SPS (Server of Parallel Solvers) was developed. We describe the constraints on code optimization in an industrial context, present the solutions provided by the SPS tool, show the development of the linear solver part with the tested parallelization methods, and finally compare the results against the imposed criteria. (author) [fr

  3. Parallel factor analysis PARAFAC of process affected water

    Energy Technology Data Exchange (ETDEWEB)

    Ewanchuk, A.M.; Ulrich, A.C.; Sego, D. [Alberta Univ., Edmonton, AB (Canada). Dept. of Civil and Environmental Engineering; Alostaz, M. [Thurber Engineering Ltd., Calgary, AB (Canada)

    2010-07-01

    A parallel factor analysis (PARAFAC) of oil sands process-affected water was presented. Naphthenic acids (NA) are traditionally described as monobasic carboxylic acids. Research has indicated that oil sands NA do not fit the classical definitions of NA. Oil sands organic acids have toxic and corrosive properties. When analyzed by fluorescence spectroscopy, oil sands process-affected water displays a characteristic peak at 290 nm excitation and approximately 346 nm emission. In this study, PARAFAC was used to decompose process-affected water multi-way data into components representing analytes, chemical compounds, and groups of compounds. Water samples from various oil sands operations were analyzed in order to obtain excitation-emission matrices (EEMs). The EEMs were then arranged into a large matrix in order of decreasing process-affected water content for PARAFAC. The data were resolved into 5 components. A comparison with commercially prepared NA samples suggested that oil sands NA are fundamentally different. Further research is needed to determine what each of the 5 components represents. tabs., figs.
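
    A minimal alternating-least-squares PARAFAC for a three-way array can be sketched in NumPy (a generic textbook formulation for illustration, not the software used in the study; sizes and rank are invented):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product of (I, R) and (J, R) -> (I*J, R)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def parafac(X, rank, iters=500, seed=0):
    """Minimal alternating-least-squares PARAFAC for a 3-way array X."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X0 = X.reshape(I, -1)                     # mode-1 unfolding
    X1 = np.moveaxis(X, 1, 0).reshape(J, -1)  # mode-2 unfolding
    X2 = np.moveaxis(X, 2, 0).reshape(K, -1)  # mode-3 unfolding
    for _ in range(iters):
        # each factor is the least-squares fit given the other two
        A = X0 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X1 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X2 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Sanity check on a synthetic rank-2 three-way array.
rng = np.random.default_rng(1)
At, Bt, Ct = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
X = np.einsum('ir,jr,kr->ijk', At, Bt, Ct)
A, B, C = parafac(X, rank=2)
err = np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X)
print(err < 0.05)  # True: the two recovered components reconstruct the array
```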

  4. Simulation Exploration through Immersive Parallel Planes

    Energy Technology Data Exchange (ETDEWEB)

    Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Smith, Steve [Los Alamos Visualization Associates

    2017-05-25

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
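
    The mapping from one observation to its parallel-planes polyline, and the brushing test, can be sketched as follows (the observation and brush region are invented; the real system renders the planes in a virtual environment):

```python
def to_parallel_planes(observation):
    """Map consecutive pairs of dimensions to one point per plane; the
    observation's polyline visits these points across the plane series."""
    return list(zip(observation[::2], observation[1::2]))

def brushed(points, plane, region):
    """Select an observation if its point on `plane` lies inside `region`
    given as (xmin, xmax, ymin, ymax), mimicking the brushing interaction."""
    x, y = points[plane]
    xmin, xmax, ymin, ymax = region
    return xmin <= x <= xmax and ymin <= y <= ymax

obs = (0.2, 0.8, 0.5, 0.1, 0.9, 0.4)   # a 6-dimensional observation
pts = to_parallel_planes(obs)
print(pts)                              # [(0.2, 0.8), (0.5, 0.1), (0.9, 0.4)]
print(brushed(pts, plane=1, region=(0.4, 0.6, 0.0, 0.2)))  # True
```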

  5. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    Full Text Available As a comparatively new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, low self-weight/load ratio, good dynamic behavior and easy control; hence its range of application keeps expanding. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limitations on link lengths has been introduced. This paper analyses the position workspace and orientation workspace of a six-degree-of-freedom parallel robot. The results show that changing the lengths of the branches of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but changes its position.
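
    The numerical search idea, a point belongs to the position workspace if the inverse kinematics yields admissible link lengths, can be sketched for a planar mechanism with extensible legs (the geometry, limits and grid are invented; the paper's mechanism has six degrees of freedom):

```python
import math

def reachable(p, anchors, lmin, lmax):
    """In the workspace if every leg length from its base anchor to the
    platform point p stays within the link-length limits (inverse kinematics)."""
    return all(lmin <= math.dist(p, a) <= lmax for a in anchors)

def workspace_area(anchors, lmin, lmax, step=0.05, extent=3.0):
    """Grid-scan estimate of the position workspace area."""
    n = int(2 * extent / step)
    count = 0
    for i in range(n):
        for j in range(n):
            p = (-extent + i * step, -extent + j * step)
            if reachable(p, anchors, lmin, lmax):
                count += 1
    return count * step * step

anchors = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.8)]
small = workspace_area(anchors, 0.5, 1.5)
big = workspace_area(anchors, 0.5, 2.0)
# Lengthening the branches enlarges the workspace, as the abstract concludes.
print(small < big)  # True
```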

  6. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.
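
    The distributed-storage idea, each process owning only its local cells plus a ghost layer, can be sketched for a 1-D mesh (a deliberate simplification; deal.II distributes fully unstructured meshes in two and three dimensions):

```python
def partition(num_cells, num_ranks):
    """Distribute mesh cells as evenly as possible; each rank stores only
    its local cells plus one layer of ghost cells at partition boundaries."""
    base, extra = divmod(num_cells, num_ranks)
    parts, start = [], 0
    for r in range(num_ranks):
        size = base + (1 if r < extra else 0)
        # ghost cells: the immediate neighbours just outside the local range
        ghosts = [c for c in (start - 1, start + size) if 0 <= c < num_cells]
        parts.append((list(range(start, start + size)), ghosts))
        start += size
    return parts

parts = partition(10, 3)
for rank, (local, ghosts) in enumerate(parts):
    print(rank, local, ghosts)
```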

  8. Collectively loading an application in a parallel computer

    Science.gov (United States)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Miller, Samuel J.; Mundy, Michael B.

    2016-01-05

    Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
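
    The claimed sequence of steps can be modeled in a few lines (a control-flow sketch only; the node names, path and payload are invented, and the real system broadcasts over a parallel machine's collective network rather than through Python dictionaries):

```python
def collective_load(nodes, job_nodes, file_system, app_path):
    """Model of the claimed steps for collectively loading an application."""
    subset = [n for n in nodes if n in job_nodes]   # 1. identify subset for the job
    leader = subset[0]                              # 2. select a job leader node
    application = file_system[app_path]             # 3. leader reads the app once
    return leader, {node: application for node in subset}  # 4. broadcast to subset

nodes = ["n0", "n1", "n2", "n3"]
fs = {"/apps/job.elf": b"\x7fELF..."}               # hypothetical path and payload
leader, loaded = collective_load(nodes, ["n1", "n2", "n3"], fs, "/apps/job.elf")
print(leader, sorted(loaded))  # n1 ['n1', 'n2', 'n3']
```

The point of the scheme is that only the leader touches storage; every other node in the subset receives its copy over the network.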

  9. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

    Full Text Available We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  10. Intranasal Midazolam versus Rectal Diazepam for the Management of Canine Status Epilepticus: A Multicenter Randomized Parallel-Group Clinical Trial.

    Science.gov (United States)

    Charalambous, M; Bhatti, S F M; Van Ham, L; Platt, S; Jeffery, N D; Tipold, A; Siedenburg, J; Volk, H A; Hasegawa, D; Gallucci, A; Gandini, G; Musteata, M; Ives, E; Vanhaesebrouck, A E

    2017-07-01

    Intranasal administration of benzodiazepines has shown superiority over rectal administration for terminating emergency epileptic seizures in human trials. No such clinical trials have been performed in dogs. To evaluate the clinical efficacy of intranasal midazolam (IN-MDZ), via a mucosal atomization device, as a first-line management option for canine status epilepticus, and to compare it with rectal administration of diazepam (R-DZP) for controlling status epilepticus before intravenous access is available. Client-owned dogs with idiopathic or structural epilepsy manifesting status epilepticus within a hospital environment were used. Dogs were randomly allocated to treatment with IN-MDZ (n = 20) or R-DZP (n = 15). Randomized parallel-group clinical trial. Seizure cessation time and adverse effects were recorded. For each dog, treatment was considered successful if the seizure ceased within 5 minutes and did not recur within 10 minutes after administration. The 95% confidence interval was used to detect the true population of dogs that were successfully treated. Fisher's 2-tailed exact test was used to compare the 2 groups, and the results were considered statistically significant if P < .05. IN-MDZ and R-DZP successfully terminated status epilepticus in 70% (14/20) and 20% (3/15) of cases, respectively (P = .0059). All dogs showed sedation and ataxia. IN-MDZ is a quick, safe and effective first-line medication for controlling status epilepticus in dogs and appears superior to R-DZP. IN-MDZ might be a valuable treatment option when intravenous access is not available and for treatment of status epilepticus in dogs at home. Copyright © 2017 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
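
    The reported group comparison can be reproduced with a stdlib-only two-sided Fisher exact test (a generic implementation of the test named in the abstract, not the authors' software):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables as likely or less likely than
    the observed one, at fixed margins."""
    r1, c1 = a + b, a + c
    n = a + b + c + d
    def p(x):   # hypergeometric probability of x successes in row 1
        return comb(c1, x) * comb(n - c1, r1 - x) / comb(n, r1)
    p_obs = p(a)
    lo, hi = max(0, r1 - (n - c1)), min(r1, c1)
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)

# 14/20 successes with IN-MDZ vs 3/15 with R-DZP, as reported above.
print(round(fisher_exact_two_sided(14, 6, 3, 12), 4))  # 0.0059
```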

  11. Parallel-In-Time For Moving Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Southworth, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High-performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  12. Integrated Task And Data Parallel Programming: Language Design

    Science.gov (United States)

    Grimshaw, Andrew S.; West, Emily A.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object-oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or as managers of parallel objects of either type, thus allowing the development of single- and multi-paradigm parallel applications. 1995 Research Accomplishments: In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda: Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program.
Additional 1995 Activities During the fall I collaborated

  13. Performance of the Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  14. Topological K-Kolmogorov groups

    International Nuclear Information System (INIS)

    Abd El-Sattar, A. Dabbour.

    1987-07-01

    The idea of the K-groups was used to define K-Kolmogorov homology and cohomology (over pairs of coefficient groups) which are descriptions of certain modifications of the Kolmogorov groups. The present work is devoted to the study of the topological properties of the K-Kolmogorov groups which lie at the root of the group duality based essentially upon Pontrjagin's concept of group multiplication. 14 refs

  15. Unified Singularity Modeling and Reconfiguration of 3rTPS Metamorphic Parallel Mechanisms with Parallel Constraint Screws

    Directory of Open Access Journals (Sweden)

    Yufeng Zhuang

    2015-01-01

    Full Text Available This paper presents a unified singularity modeling and reconfiguration analysis of variable topologies of a class of metamorphic parallel mechanisms with parallel constraint screws. The new parallel mechanisms consist of three reconfigurable rTPS limbs that have two working phases stemming from the reconfigurable Hooke (rT) joint. While one phase has full mobility, the other supplies a constraint force to the platform. Based on these, the platform constraint screw systems show that the new metamorphic parallel mechanisms have four topologies by altering the limb phases, with mobility changing among 1R2T (one rotation with two translations), 2R2T, and 3R2T, and mobility 6. Geometric conditions of the mechanism design are investigated, with some special topologies illustrated considering the limb arrangement. Following this and the actuation scheme analysis, a unified Jacobian matrix is formed using screw theory to include the change between geometric constraints and actuation constraints in the topology reconfiguration. Various singular configurations are identified by analyzing screw dependency in the Jacobian matrix. The work in this paper provides a basis for singularity-free workspace analysis and optimal design of the class of metamorphic parallel mechanisms with parallel constraint screws, which shows simple geometric constraints with potentially simple kinematics and dynamics properties.

  16. Essentiality, conservation, evolutionary pressure and codon bias in bacterial genomes.

    Science.gov (United States)

    Dilucca, Maddalena; Cimini, Giulio; Giansanti, Andrea

    2018-07-15

    Essential genes constitute the core of genes which cannot be mutated too much nor lost along the evolutionary history of a species. Natural selection is expected to be stricter on essential genes and on conserved (highly shared) genes, than on genes that are either nonessential or peculiar to a single or a few species. In order to further assess this expectation, we study here how essentiality of a gene is connected with its degree of conservation among several unrelated bacterial species, each one characterised by its own codon usage bias. Confirming previous results on E. coli, we show the existence of a universal exponential relation between gene essentiality and conservation in bacteria. Moreover, we show that, within each bacterial genome, there are at least two groups of functionally distinct genes, characterised by different levels of conservation and codon bias: i) a core of essential genes, mainly related to cellular information processing; ii) a set of less conserved nonessential genes with prevalent functions related to metabolism. In particular, the genes in the first group are more retained among species, are subject to a stronger purifying conservative selection and display a more limited repertoire of synonymous codons. The core of essential genes is close to the minimal bacterial genome, which is in the focus of recent studies in synthetic biology, though we confirm that orthologs of genes that are essential in one species are not necessarily essential in other species. We also list a set of highly shared genes which, reasonably, could constitute a reservoir of targets for new anti-microbial drugs. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Effecting a broadcast with an allreduce operation on a parallel computer

    Science.gov (United States)

    Almasi, Gheorghe; Archer, Charles J.; Ratterman, Joseph D.; Smith, Brian E.

    2010-11-02

    A parallel computer comprises a plurality of compute nodes organized into at least one operational group for collective parallel operations. Each compute node is assigned a unique rank and is coupled for data communications through a global combining network. One compute node is assigned to be a logical root. A send buffer and a receive buffer are configured. Each element of a contribution of the logical root in the send buffer is contributed. One or more zeros corresponding to a size of the element are injected. An allreduce operation with a bitwise OR using the element and the injected zeros is performed. And the result for the allreduce operation is determined and stored in each receive buffer.
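The mechanism described above can be illustrated with a small sketch in plain Python (an illustration of the idea only, not the patent's actual implementation): the root rank contributes its element, every other rank injects zeros, and a bitwise-OR allreduce therefore leaves every rank holding the root's value.

```python
# Broadcast via allreduce with bitwise OR: non-root ranks contribute
# zeros, so OR-reducing across all ranks reproduces the root's value
# everywhere. Function names here are illustrative.
from functools import reduce

def allreduce_or(contributions):
    """Simulate a combining network: bitwise-OR all contributions,
    then hand the single result back to every rank."""
    result = reduce(lambda a, b: a | b, contributions)
    return [result] * len(contributions)

def broadcast_via_allreduce(values, root):
    n = len(values)
    # Non-root ranks inject zeros the size of the element.
    contributions = [values[rank] if rank == root else 0 for rank in range(n)]
    return allreduce_or(contributions)

receive_buffers = broadcast_via_allreduce([7, 13, 42, 99], root=2)
print(receive_buffers)  # every rank now holds the root's value: [42, 42, 42, 42]
```

On a real machine the OR happens in the global combining network hardware, which is why this trick turns a broadcast into a single collective pass.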

  18. Desensitization to a whole egg by rush oral immunotherapy improves the quality of life of guardians: A multicenter, randomized, parallel-group, delayed-start design study.

    Science.gov (United States)

    Itoh-Nagato, Naoka; Inoue, Yuzaburo; Nagao, Mizuho; Fujisawa, Takao; Shimojo, Naoki; Iwata, Tsutomu

    2018-04-01

    Patients with food allergies and their families have a significantly reduced health-related quality of life (QOL). We performed a multicenter, randomized, parallel-group, delayed-start design study to clarify the efficacy and safety of rush oral immunotherapy (rOIT) and its impact on the participants' daily life and their guardians (UMIN000003943). Forty-five participants were randomly divided into an early-start group and a late-start group. The early-start group received rOIT for 3 months, while the late-start group continued the egg elimination diet (control). In the next stage, both groups received OIT until all participants had finished 12 months of maintenance OIT. The ratio of the participants in whom an increase of the TD was achieved in the first stage was significantly higher in the early-start group (87.0%), than in the late-start group (22.7%). The QOL of the guardians in the early-start group significantly improved after the first stage (65.2%), in comparison to the late-start group (31.8%). During 12 months of rOIT, the serum ovomucoid-specific IgE levels, the percentage of CD203c + basophils upon stimulation with egg white, and the wheal size to egg white were decreased, while the serum ovomucoid-specific IgG4 levels were increased. However, approximately 80% of the participants in the early-start group showed an allergic reaction during the first stage of the study, whereas none of the patients in the late-start group experienced an allergic reaction. rOIT induced desensitization to egg and thus improved the QOL of guardians; however, the participants experienced frequent allergic reactions due to the treatment. Copyright © 2017 Japanese Society of Allergology. Production and hosting by Elsevier B.V. All rights reserved.

  19. Fast ℓ1-SPIRiT Compressed Sensing Parallel Imaging MRI: Scalable Parallel Implementation and Clinically Feasible Runtime

    Science.gov (United States)

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-01-01

    We present ℓ1-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the Wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative Self-Consistent Parallel Imaging (SPIRiT). Like many iterative MRI reconstructions, ℓ1-SPIRiT’s image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing ℓ1-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of ℓ1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT Spoiled Gradient Echo (SPGR) sequence with up to 8× acceleration via poisson-disc undersampling in the two phase-encoded directions. PMID:22345529
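Iterative soft-thresholding is the numerical kernel of this reconstruction; the NumPy sketch below shows the soft-thresholding operator and its joint (cross-channel) variant that enforces joint sparsity. Function names and toy data are illustrative assumptions, not taken from the ℓ1-SPIRiT code.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding: shrink each coefficient's magnitude by lam,
    zeroing anything smaller than lam (works for complex inputs too)."""
    mag = np.abs(x)
    scale = np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)
    return x * scale

def joint_soft_threshold(coeffs, lam):
    """Joint (cross-channel) sparsity: threshold on the root-sum-of-squares
    magnitude across channels (axis 0), so all channels keep or drop a
    coefficient together."""
    rss = np.sqrt((np.abs(coeffs) ** 2).sum(axis=0, keepdims=True))
    scale = np.maximum(rss - lam, 0.0) / np.maximum(rss, 1e-12)
    return coeffs * scale

x = np.array([3.0, -0.5, 0.0, 2.0])
print(soft_threshold(x, 1.0))  # magnitudes shrink by 1: [2, -0, 0, 1]

coeffs = np.array([[3.0, 0.1],   # channel 1
                   [4.0, 0.1]])  # channel 2
print(joint_soft_threshold(coeffs, 1.0))  # 2nd coefficient dropped in both channels
```

In an ISTA-style loop this operator is applied to wavelet coefficients after each data-consistency step; the per-coefficient work is embarrassingly parallel, which is what makes the GPU and multi-core implementations discussed in the paper effective.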

  20. Parallel and non-parallel laminar mixed convection flow in an inclined tube: The effect of the boundary conditions

    International Nuclear Information System (INIS)

    Barletta, A.

    2008-01-01

    The necessary condition for the onset of parallel flow in the fully developed region of an inclined duct is applied to the case of a circular tube. Parallel flow in inclined ducts is an uncommon regime, since in most cases buoyancy tends to produce the onset of secondary flow. The present study shows how proper thermal boundary conditions may preserve the parallel flow regime. Mixed convection flow is studied for a special non-axisymmetric thermal boundary condition that, with a proper choice of a switch parameter, may be compatible with parallel flow. More precisely, a circumferentially variable heat flux distribution is prescribed on the tube wall, expressed as a sinusoidal function of the azimuthal coordinate θ with period 2π. A π/2 rotation in the position of the maximum heat flux, achieved by setting the switch parameter, may or may not allow the existence of parallel flow. Two cases are considered, corresponding to parallel and non-parallel flow. In the first case, the governing balance equations allow a simple analytical solution. On the contrary, in the second case, the local balance equations are solved numerically by employing a finite element method

  1. Parallel programming with Easy Java Simulations

    Science.gov (United States)

    Esquembre, F.; Christian, W.; Belloni, M.

    2018-01-01

    Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.

  2. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chip. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  3. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2009-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chips. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  4. Parallel programming of saccades during natural scene viewing: evidence from eye movement positions.

    Science.gov (United States)

    Wu, Esther X W; Gilani, Syed Omer; van Boxtel, Jeroen J A; Amihai, Ido; Chua, Fook Kee; Yen, Shih-Cheng

    2013-10-24

    Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) reflected parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.

  5. Parallel auto-correlative statistics with VTK.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
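For readers unfamiliar with the statistic itself, a lag-k sample auto-correlation can be sketched in a few lines of NumPy. This is a plain illustration of what such an engine computes, not the VTK engine's actual API.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Biased sample auto-correlation r_k = C_k / C_0 for lags 0..max_lag,
    where C_k is the lag-k sample autocovariance (1/n normalization)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    c0 = np.dot(x, x) / n
    return np.array([np.dot(x[:n - k], x[k:]) / n / c0
                     for k in range(max_lag + 1)])

t = np.arange(200)
r = autocorrelation(np.sin(2 * np.pi * t / 20), max_lag=20)
# r[0] is exactly 1; r[20] is high (~0.9 with this biased estimator)
# because the signal repeats every 20 samples; r[10] is strongly negative
# because samples half a period apart are anti-correlated.
print(r[0], r[10], r[20])
```

The per-lag dot products are independent, which is what makes the statistic straightforward to parallelize across data blocks or lags.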

  6. Conformal pure radiation with parallel rays

    International Nuclear Information System (INIS)

    Leistner, Thomas; Nurowski, Paweł

    2012-01-01

    We define pure radiation metrics with parallel rays to be n-dimensional pseudo-Riemannian metrics that admit a parallel null line bundle K and whose Ricci tensor vanishes on vectors that are orthogonal to K. We give necessary conditions in terms of the Weyl, Cotton and Bach tensors for a pseudo-Riemannian metric to be conformal to a pure radiation metric with parallel rays. Then, we derive conditions in terms of the tractor calculus that are equivalent to the existence of a pure radiation metric with parallel rays in a conformal class. We also give analogous results for n-dimensional pseudo-Riemannian pp-waves. (paper)

  7. Parallel diffusion length on thermal neutrons in rod type lattices

    International Nuclear Information System (INIS)

    Ahmed, T.; Siddiqui, S.A.M.M.; Khan, A.M.

    1981-11-01

    Calculations of the diffusion lengths of thermal neutrons in lead-water and aluminum-water lattices in the direction parallel to the rods are performed using the one-group diffusion equation together with the Shevelev transport correction. The formalism is then applied to two practical cases, the Kawasaki (Hitachi) and Douglas Point (CANDU) reactor lattices. Our results are in good agreement with the observed values. (author)
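For context, the standard one-group relations behind such a calculation are the following (textbook definitions, not taken from the report; the Shevelev transport correction modifies the diffusion coefficient):

```latex
L = \sqrt{\frac{D}{\Sigma_a}}, \qquad
D = \frac{1}{3\Sigma_{tr}}, \qquad
\Sigma_{tr} = \Sigma_t - \bar{\mu}\,\Sigma_s ,
```

where $\Sigma_a$, $\Sigma_t$ and $\Sigma_s$ are the macroscopic absorption, total and scattering cross sections and $\bar{\mu}$ is the mean cosine of the scattering angle. In a rod-type lattice the medium is anisotropic, so the diffusion length parallel to the rods generally differs from the perpendicular one.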

  8. Effects of an additional small group discussion to cognitive achievement and retention in basic principles of bioethics teaching methods

    Directory of Open Access Journals (Sweden)

    Dedi Afandi

    2009-03-01

    Full Text Available Aim: The place of ethics in undergraduate medical curricula is essential, but the methods of teaching medical ethics have not shown substantial changes. "Basic principles of bioethics" is the best knowledge base to develop students' reasoning analysis in medical ethics. In this study, we investigate the effects of adding a small group discussion to the conventional lecture method for basic principles of bioethics on cognitive achievement and retention. This study was a randomized controlled trial with a parallel design. Cognitive scores of the basic principles of bioethics were measured using the basic principles of bioethics (Kaidah Dasar Bioetika, KDB) test. Both groups attended conventional lectures; the intervention group then received an additional small group discussion. Results: Conventional lectures with or without small group discussion significantly increased cognitive achievement of basic principles of bioethics (P=0.001 and P=0.000, respectively), and there were significant differences in cognitive achievement and retention between the two groups (P=0.000 and P=0.000, respectively). Conclusion: The additional small group discussion method improved cognitive achievement and retention of basic principles of bioethics. (Med J Indones 2009; 18: 48-52) Keywords: lecture, specification checklist, multiple choice questions

  9. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  10. A task parallel implementation of fast multipole methods

    KAUST Repository

    Taura, Kenjiro

    2012-11-01

    This paper describes a task parallel implementation of ExaFMM, an open source implementation of fast multipole methods (FMM), using a lightweight task parallel library, MassiveThreads. Although there have been many attempts at parallelizing FMM, experience has almost exclusively been limited to formulations based on flat homogeneous parallel loops. FMM in fact contains operations that cannot be readily expressed in such conventional but restrictive models. We show that task parallelism, or parallel recursion in particular, allows us to parallelize all operations of FMM naturally and scalably. Moreover, it allows us to parallelize a "mutual interaction" for force/potential evaluation, which is roughly twice as efficient as a more conventional, unidirectional force/potential evaluation. The net result is an open source FMM that is clearly among the fastest single node implementations, including those on GPUs; with a million particles on a 32-core Sandy Bridge 2.20GHz node, it completes a single time step including tree construction and force/potential evaluation in 65 milliseconds. The study clearly showcases both programmability and performance benefits of flexible parallel constructs over more monolithic parallel loops. © 2012 IEEE.
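The parallel-recursion pattern the paper advocates can be sketched with Python's standard concurrent.futures (a hedged illustration of the pattern only; MassiveThreads is a C/C++ work-stealing library and its API looks nothing like this). The sketch also makes the key scheduling issue visible: a parent that blocks in result() holds a pool thread, so a plain thread pool needs enough workers for every spawned task, whereas a work-stealing runtime schedules blocked parents away.

```python
from concurrent.futures import ThreadPoolExecutor

def tree_sum(values, lo, hi, pool, cutoff=2000):
    """Recursively sum values[lo:hi]; the left half is spawned as a task
    while the current thread descends into the right half."""
    if hi - lo <= cutoff:                     # serial base case
        return sum(values[lo:hi])
    mid = (lo + hi) // 2
    left = pool.submit(tree_sum, values, lo, mid, pool, cutoff)  # child task
    right = tree_sum(values, mid, hi, pool, cutoff)              # continue here
    return left.result() + right              # parent blocks here

values = list(range(20_000))
# 16 workers >= the ~15 spawned tasks, so no task starves while its
# parent blocks (a work-stealing scheduler would not need this margin).
with ThreadPoolExecutor(max_workers=16) as pool:
    total = tree_sum(values, 0, len(values), pool)
print(total)  # 199990000 == sum(range(20000))
```

In FMM the same spawn-left/descend-right shape applies to tree construction and to the recursive dual-tree traversal, which is why parallel recursion covers operations that flat parallel loops cannot express.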

  11. A new parallel algorithm and its simulation on hypercube simulator for low pass digital image filtering using systolic array

    International Nuclear Information System (INIS)

    Al-Hallaq, A.; Amin, S.

    1998-01-01

    This paper introduces a new parallel algorithm and its simulation on a hypercube simulator for low pass digital image filtering using a systolic array. This new algorithm is faster than the old one (Amin, 1988). This is due to the fact that the old algorithm carries out the addition operations in a sequential mode. In our new design these addition operations are divided into two groups, which can be performed in parallel: one group is performed on one half of the systolic array and the other on the second half, that is, by folding. This parallelism reduces the time required for the whole process to almost a quarter of the time of the old algorithm. (authors). 18 refs., 3 figs
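The folding idea — splitting the filter's additions into two groups that are accumulated concurrently on the two halves of the array and combined in a final step — can be mimicked in a few lines of Python (an analogy only; the paper's design is a hardware systolic array, not software threads):

```python
from concurrent.futures import ThreadPoolExecutor

def folded_sum(weights, pixels):
    """Weighted sum of a filter window, with the two halves of the
    products accumulated in parallel (mimicking the folded halves of
    the systolic array) and combined in one final addition."""
    terms = [w * p for w, p in zip(weights, pixels)]
    half = len(terms) // 2
    with ThreadPoolExecutor(max_workers=2) as pool:
        first = pool.submit(sum, terms[:half])    # one half of the array
        second = pool.submit(sum, terms[half:])   # the other half
        return first.result() + second.result()   # final combining step

# 3x3 box (low-pass) window flattened to 9 taps, unit weights
window = [10, 20, 30, 40, 50, 60, 70, 80, 90]
acc = folded_sum([1] * 9, window)
print(acc, acc / 9)  # 450 and the low-pass mean 50.0
```

The point of the fold is that the critical path of the accumulation is roughly halved, since the two partial sums no longer wait on each other.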

  12. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

    Second derivative parallel block backward differentiation type formulas for Stiff ODEs. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...

  13. [Effects of Frankincense and Myrrh essential oil on transdermal absorption in vitro of Chuanxiong and penetration mechanism of skin blood flow].

    Science.gov (United States)

    Zhu, Xiao-Fang; Luo, Jing; Guan, Yong-Mei; Yu, Ya-Ting; Jin, Chen; Zhu, Wei-Feng; Liu, Hong-Ning

    2017-02-01

    The aim of this paper was to explore the effects of Frankincense and Myrrh essential oil on the transdermal absorption in vitro of Chuanxiong, and to investigate the possible penetration mechanism of the essential oils from the perspective of skin blood perfusion changes. Transdermal tests were performed in vitro with excised mice skin by improved Franz diffusion cells. The cumulative penetration amounts of ferulic acid in Chuanxiong were determined by HPLC to investigate the effects of Frankincense and Myrrh essential oil on the transdermal permeation properties of Chuanxiong. Simultaneously, the skin blood flows were determined by laser Doppler flowmetry. The results showed that the cumulative penetration amount of ferulic acid in Chuanxiong was (8.13±0.76) μg•cm⁻² in 24 h, and was (48.91±4.87), (57.80±2.86), (63.34±4.56), (54.17±4.40), and (62.52±7.79) μg•cm⁻², respectively, in the Azone group, Frankincense essential oil group, Myrrh essential oil group, frankincense and myrrh singly extracted essential oil mixture group, and frankincense and myrrh mixed extraction essential oil group. The enhancement ratios of the essential oil groups were 7.68, 8.26, 7.26, and 8.28, slightly greater than the 6.55 of the Azone group. In addition, as compared with the conditions before treatment, there were significant differences and an obvious increasing trend in the blood flow of rats in the Frankincense essential oil group, Myrrh essential oil group, frankincense and myrrh singly extracted essential oil mixture group, and frankincense and myrrh mixed extraction essential oil group when dosed at 10, 20, 30, and 10 min respectively, indicating that the skin blood flows were increased to a certain extent under the effects of Frankincense and Myrrh essential oil. Thus, Frankincense and Myrrh essential oil had a certain effect on promoting the permeability of Chuanxiong both before and after drug combination, and may promote the elimination of drugs from the epidermis to dermal capillaries through increase of

  14. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both achieved coding and decoding time and effectiveness of parallelization.

  15. Essentially Optimal Universally Composable Oblivious Transfer

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Nielsen, Jesper Buus; Orlandi, Claudio

    2009-01-01

    Oblivious transfer is one of the most important cryptographic primitives, both for theoretical and practical reasons, and several protocols were proposed during the years. We provide the first oblivious transfer protocol which is simultaneously optimal on the following list of parameters: Security... Communication complexity: it communicates O(1) group elements to transfer one out of two group elements. The Big-O notation hides 32, meaning that the communication is probably not optimal, but is essentially optimal in that the overhead is at least constant. Our construction is based on pairings, and we assume...

  16. Radiation dosimetry with plane-parallel ionization chambers: An international (IAEA) code of practice

    International Nuclear Information System (INIS)

    Andreo, P.

    1996-01-01

    Research on plane-parallel ionization chambers since the IAEA Code of Practice (TRS-277) was published in 1987 has expanded our knowledge on perturbation and other correction factors in ionization chamber dosimetry, and also constructional details of these chambers have been shown to be important. Different national organizations have published, or are in the process of publishing, recommendations on detailed procedures for the calibration and use of plane-parallel ionization chambers. An international working group was formed under the auspices of the IAEA, first to assess the status and validity of IAEA TRS-277, and second to develop an international Code of Practice for the calibration and use of plane-parallel ionization chambers in high-energy electron and photon beams. The purpose of this work is to describe the forthcoming Code of Practice. (author). 39 refs, 3 figs, 2 tabs

  17. Radiation dosimetry with plane-parallel ionization chambers: An international (IAEA) code of practice

    Energy Technology Data Exchange (ETDEWEB)

    Andreo, P [Lunds Hospital, Lund (Sweden). Radiophysics Dept.; Almond, P R [J.G. Brown Cancer Center, Univ. of Lousville, Lousville, KY (United States). Dept. of Radiation Oncology; Mattsson, O [Sahlgrenska Hospital, Gothenburg (Sweden). Dept. of Radiation Physics; Nahum, A E [Royal Marsden Hospital, Sutton (United Kingdom). Joint Dept. of Physics; Roos, M [Physikalisch-Technische Bundesanstalt, Braunschweig (Germany)

    1996-08-01

    Research on plane-parallel ionization chambers since the IAEA Code of Practice (TRS-277) was published in 1987 has expanded our knowledge on perturbation and other correction factors in ionization chamber dosimetry, and also constructional details of these chambers have been shown to be important. Different national organizations have published, or are in the process of publishing, recommendations on detailed procedures for the calibration and use of plane-parallel ionization chambers. An international working group was formed under the auspices of the IAEA, first to assess the status and validity of IAEA TRS-277, and second to develop an international Code of Practice for the calibration and use of plane-parallel ionization chambers in high-energy electron and photon beams. The purpose of this work is to describe the forthcoming Code of Practice. (author). 39 refs, 3 figs, 2 tabs.

  18. Differences Between Distributed and Parallel Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brightwell, R.; Maccabe, A.B.; Rissen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  19. Parallel processing from applications to systems

    CERN Document Server

    Moldovan, Dan I

    1993-01-01

    This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well balanced across diverse topics in this concise presentation. For exceptional cla

  20. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.
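
    The effect of load imbalance on parallel efficiency can be made concrete with a small sketch (the per-processor loads below are hypothetical, not taken from the survey): the wall-clock time of a parallel step is set by the most heavily loaded processor, so efficiency is the ratio of the mean load to the maximum load.

    ```python
    def parallel_efficiency(loads):
        """Efficiency of a parallel step whose per-processor work is `loads`."""
        total = sum(loads)
        wall_time = max(loads)          # the slowest processor gates the step
        p = len(loads)
        ideal_time = total / p          # perfectly balanced time
        return ideal_time / wall_time

    balanced   = [100, 100, 100, 100]
    imbalanced = [160, 80, 80, 80]      # same total work, one overloaded node

    print(parallel_efficiency(balanced))    # 1.0
    print(parallel_efficiency(imbalanced))  # 0.625
    ```

    The same total work yields only 62.5% efficiency once a single processor carries 40% of the load, which is why mapping quality matters as much as raw processor count.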

  1. Parallel computing by Monte Carlo codes MVP/GMVP

    International Nuclear Information System (INIS)

    Nagaya, Yasunobu; Nakagawa, Masayuki; Mori, Takamasa

    2001-01-01

    General-purpose Monte Carlo codes MVP/GMVP are well vectorized and thus enable high-speed Monte Carlo calculations. To achieve further speedups, we parallelized the codes on different types of parallel computing platforms or by using the standard parallelization library MPI. The platforms used for the benchmark calculations were a distributed-memory vector-parallel computer (Fujitsu VPP500), a distributed-memory massively parallel computer (Intel Paragon) and distributed-memory scalar-parallel computers (Hitachi SR2201, IBM SP2). As is generally the case, linear speedup could be obtained for large-scale problems, but parallelization efficiency decreased as the batch size per processing element (PE) became smaller. It was also found that the statistical uncertainty for assembly powers was less than 0.1% in a PWR full-core calculation with more than 10 million histories, which took about 1.5 hours with massively parallel computing. (author)
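
    The batch-per-PE structure described above can be illustrated with a generic sketch (not MVP/GMVP code): each worker processes an independent batch of histories with its own random stream, and the partial tallies are reduced at the end. Shrinking the batch per worker raises the relative cost of that final reduction, which is the efficiency loss the abstract reports.

    ```python
    # Minimal batch-parallel Monte Carlo (pi estimation stands in for transport).
    import random
    from multiprocessing import Pool

    def run_batch(args):
        seed, n_histories = args
        rng = random.Random(seed)           # independent stream per worker
        hits = 0
        for _ in range(n_histories):
            x, y = rng.random(), rng.random()
            if x * x + y * y < 1.0:
                hits += 1
        return hits

    if __name__ == "__main__":
        batches = [(seed, 50_000) for seed in range(8)]   # 8 batches of histories
        with Pool(4) as pool:
            hits = sum(pool.map(run_batch, batches))      # reduce partial tallies
        total = sum(n for _, n in batches)
        print(4.0 * hits / total)                         # ≈ 3.14
    ```

    The per-worker seeds here are a simplification; production codes use parallel random-number generators with guaranteed stream independence.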

  2. Automorphisms of free groups with boundaries

    DEFF Research Database (Denmark)

    A. Jensen, Craig; Wahl, Nathalie

    2004-01-01

    The automorphisms of free groups with boundaries form a family of groups A_{n,k} closely related to mapping class groups, with the standard automorphisms of free groups as A_{n,0} and (essentially) the symmetric automorphisms of free groups as A_{0,k}. We construct a contractible space L_{n,k} on which A_{n,k} acts with finite stabilizers and finite quotient space and deduce a range for the virtual cohomological dimension of A_{n,k}. We also give a presentation of the groups and calculate their first homology group.

  3. The parallel processing of EGS4 code on distributed memory scalar parallel computer:Intel Paragon XP/S15-256

    Energy Technology Data Exchange (ETDEWEB)

    Takemiya, Hiroshi; Ohta, Hirofumi; Honma, Ichirou

    1996-03-01

    The parallelization of the Electro-Magnetic Cascade Monte Carlo Simulation Code EGS4 on the distributed-memory scalar-parallel computer Intel Paragon XP/S15-256 is described. EGS4 has the feature that the calculation time per incident particle varies widely because of the dynamic generation of secondary particles and the different behavior of each particle. Granularity for parallel processing, the parallel programming model and the algorithm for parallel random number generation are discussed, and two methods, which allocate particles dynamically or statically, are used to realize high-speed parallel processing of this code. Among the four problems chosen for performance evaluation, speedup factors of nearly 100 were attained for three problems with 128 processors. It was found that when both the calculation time per incident particle and its dispersion are large, the dynamic particle allocation method, which can average the load across processors, is preferable; when they are small, the static particle allocation method, which reduces communication overhead, is preferable. Moreover, it is pointed out that double-precision variables must be used in the EGS4 code to obtain accurate results. Finally, the workflow of program parallelization is analyzed, and tools for program parallelization are discussed in light of the experience gained from the EGS4 parallelization. (author).
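
    The trade-off between the two allocation methods can be sketched with a toy scheduling model (the per-particle costs are invented): static allocation fixes a round-robin assignment in advance, while dynamic allocation hands each particle to the first idle processor, averaging out dispersed costs at the price of extra coordination.

    ```python
    import heapq

    def static_makespan(costs, p):
        """Round-robin assignment fixed before execution."""
        loads = [0.0] * p
        for i, c in enumerate(costs):
            loads[i % p] += c                  # fixed assignment, no communication
        return max(loads)

    def dynamic_makespan(costs, p):
        """Each particle goes to the first processor to fall idle."""
        ready = [0.0] * p                      # time each processor becomes idle
        heapq.heapify(ready)
        for c in costs:
            t = heapq.heappop(ready)           # first idle processor takes the work
            heapq.heappush(ready, t + c)
        return max(ready)

    costs = [9, 1, 1, 1, 9, 1, 1, 1]           # highly dispersed particle costs
    print(static_makespan(costs, 4))           # 18.0: both expensive particles collide
    print(dynamic_makespan(costs, 4))          # 10.0: the load is averaged out
    ```

    With uniform, cheap particles the two strategies finish at the same time, and static allocation wins by avoiding the per-particle coordination, matching the paper's conclusion.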

  4. The anti-dermatophyte activity of Zataria multiflora essential oils.

    Science.gov (United States)

    Mahboubi, M; HeidaryTabar, R; Mahdizadeh, E

    2017-06-01

    Dermatophytes are a group of pathogenic fungi and the major cause of dermatophytosis in humans and animals. Fighting dermatophytes with natural essential oils is an important topic in recent research. In this investigation, we evaluated the anti-dermatophyte activities of three samples of Z. multiflora essential oil against dermatophytes, along with analysis of the chemical compositions of the essential oils and their anti-elastase activities on elastase production in dermatophytes. Carvacrol (1.5-34.4%), thymol (25.8-41.2%), carvacrol methyl ether (1.9-28.3%) and p-cymene (2.3-8.3%) were the main components of Z. multiflora essential oils. Z. multiflora essential oils (100 ppm) inhibited the mycelial growth of dermatophytes (6±1.7-47.0±1.4%) and had minimal inhibitory concentration (MIC) and minimal fungicidal concentration (MFC) values of 0.03-0.25 μl/ml against dermatophytes. The essential oils inhibited elastase produced by dermatophytes as well as pure porcine elastase. Z. multiflora essential oils could be used as a natural anti-dermatophyte agent, pending further preclinical and clinical studies. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  5. Towards a streaming model for nested data parallelism

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2013-01-01

    The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability, obtained through a uniform, parallelism-flattening ... -processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work...
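
    The flattening strategy underlying NESL's cost model can be illustrated in miniature (a generic sketch, not NESL's actual implementation): a nested data-parallel computation over a ragged nested sequence is converted to one flat sequence plus segment descriptors, so the flat pass parallelizes uniformly regardless of nesting depth.

    ```python
    def segmented_sums(nested):
        """Sum each inner sequence via a flattened representation."""
        flat = [x for seg in nested for x in seg]      # flatten the data
        lengths = [len(seg) for seg in nested]         # segment descriptor
        sums, i = [], 0
        for n in lengths:
            sums.append(sum(flat[i:i + n]))            # one uniform flat pass
            i += n
        return sums

    print(segmented_sums([[1, 2, 3], [], [4, 5]]))     # [6, 0, 9]
    ```

    Note that the flat array must be fully materialized here, which is exactly the space cost that the streaming model discussed in the abstract aims to avoid.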

  6. Massively Parallel Computing: A Sandia Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

    The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant breakthroughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large-scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  7. Parallel Algorithms for the Exascale Era

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory

    2016-10-19

    New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
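
    The reproducibility problem with global sums comes from the non-associativity of floating-point addition: reducing the same values in a different order (as a parallel reduction with a different number of ranks would) can change the rounded result. A correctly rounded reduction such as Python's math.fsum removes the order dependence (a minimal illustration, not the algorithm developed at LANL).

    ```python
    import math

    # Floating-point addition is not associative: the same values summed in a
    # different order give a different rounded total.
    data = [1e16, 1.0, -1e16, 1.0]
    print(sum(data))                 # 1.0  (the first 1.0 is absorbed by 1e16)
    print(sum(reversed(data)))       # 0.0  (both 1.0s are absorbed)
    print(math.fsum(data))           # 2.0  (correctly rounded, order-independent)
    ```

    A reduction whose answer is independent of operand order is what makes a parallel run reproducible across different processor counts and scheduling orders.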

  8. Recent progress in 3D EM/EM-PIC simulation with ARGUS and parallel ARGUS

    International Nuclear Information System (INIS)

    Mankofsky, A.; Petillo, J.; Krueger, W.; Mondelli, A.; McNamara, B.; Philp, R.

    1994-01-01

    ARGUS is an integrated, 3-D, volumetric simulation model for systems involving electric and magnetic fields and charged particles, including materials embedded in the simulation region. The code offers the capability to carry out time-domain and frequency-domain electromagnetic simulations of complex physical systems. ARGUS offers a boolean solid-model structure input capability that can include essentially arbitrary structures in the computational domain, and a modular architecture that allows multiple physics packages to access the same data structure and to share common code utilities. Physics modules are in place to compute electrostatic and electromagnetic fields, the normal modes of RF structures, and self-consistent particle-in-cell (PIC) simulation in either a time-dependent mode or a steady-state mode. The PIC modules include multiple particle species, the Lorentz equations of motion, and algorithms for the creation of particles by emission from material surfaces, injection onto the grid, and ionization. In this paper, we present an updated overview of ARGUS, with particular emphasis on recent algorithmic and computational advances. These include a completely rewritten frequency-domain solver that efficiently treats lossy materials and periodic structures, a parallel version of ARGUS with support for both shared-memory parallel-vector (i.e. CRAY) machines and distributed-memory massively parallel MIMD systems, and numerous new applications of the code.

  9. Improvement of defecation in healthy individuals with infrequent bowel movements through the ingestion of dried Mozuku powder: a randomized, double-blind, parallel-group study

    Directory of Open Access Journals (Sweden)

    Masaki Matayoshi

    2017-09-01

    Background: Okinawa mozuku (Cladosiphon okamuranus) is a type of edible seaweed of the family Chordariaceae that typically contains the polysaccharide fucoidan as a functional ingredient. In Okinawa, raw mozuku is eaten vinegared or as tempura (deep-fried in batter). Polysaccharides such as fucoidan are generally known to regulate intestinal function, which is why we used Okinawa mozuku to investigate this intestinal regulatory effect. Methods: The study was designed as a randomized, double-blind, parallel-group study. Dried Okinawa mozuku powder at a dose of 2.4 g/day (1.0 g/day of fucoidan) and a placebo not containing any dried Okinawa mozuku powder were each made into capsules and given for eight weeks to healthy men and women with infrequent weekly bowel movements (2–4 movements a week). We then investigated changes in defecation, blood tests, and adverse events. Results: In the group that ingested the capsules containing dried Okinawa mozuku powder, the number of days with a bowel movement significantly increased compared with the placebo group after four weeks of ingestion (p < 0.05). Furthermore, after eight weeks of ingestion, the same increasing trend was seen compared with the placebo group (p = 0.0964). The volume of stool also increased significantly in the dried Okinawa mozuku powder group after eight weeks compared with the placebo group. In terms of blood tests and adverse events, no adverse events occurred that were attributable to the test food. Conclusions: Ingestion of Okinawa mozuku was found to have a regulatory effect on intestinal function by promoting defecation in healthy individuals with a tendency toward constipation. This demonstrates that Okinawa mozuku is a functional food capable of making defecation smoother and increasing the volume of stool.

  10. A Parallel Approach to Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding times and the effectiveness of parallelization.

  11. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general-purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed-memory highly parallel computer, and the parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique that maps each history to a processor dynamically and maps the control process to a fixed processor was applied. The efficiency of the parallelized code reaches 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)

  12. Performance Analysis of Parallel Mathematical Subroutine library PARCEL

    International Nuclear Information System (INIS)

    Yamada, Susumu; Shimizu, Futoshi; Kobayashi, Kenichi; Kaburaki, Hideo; Kishida, Norio

    2000-01-01

    The parallel mathematical subroutine library PARCEL (Parallel Computing Elements) has been developed by the Japan Atomic Energy Research Institute to allow easy use of typical parallelized mathematical codes in application programs on distributed-memory parallel computers. PARCEL includes routines for linear equations, eigenvalue problems, pseudo-random number generation, and fast Fourier transforms. The performance results for the linear-equation routines exhibit good parallelization efficiency on vector-parallel as well as scalar-parallel computers. A comparison of these results with the PETSc (Portable, Extensible Toolkit for Scientific Computation) library is also reported. (author)

  13. Applications of the parallel computing system using network

    International Nuclear Information System (INIS)

    Ido, Shunji; Hasebe, Hiroki

    1994-01-01

    Parallel programming is applied to multiple processors connected by Ethernet. Data exchanges between tasks located on each processing element are realized in two ways. One uses sockets, a standard library on recent UNIX operating systems. The other uses Parallel Virtual Machine (PVM), free software developed at ORNL that allows many workstations connected to a network to be used as a parallel computer. This paper discusses the availability of parallel computing using networks of UNIX workstations, and compares them with specialized parallel systems (Transputer and iPSC/860) on a Monte Carlo simulation, which generally exhibits a high parallelization ratio. (author)
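
    The socket-based exchange can be sketched as follows (generic Python, not the authors' code): two tasks connected by a socket pair, one sending a message and the other replying, standing in for processes communicating between networked workstations over TCP.

    ```python
    import socket
    import threading

    def worker(conn):
        msg = conn.recv(1024)               # receive a task description
        conn.sendall(msg.upper())           # return the "computed" result

    a, b = socket.socketpair()              # stands in for a TCP connection
    t = threading.Thread(target=worker, args=(b,))
    t.start()
    a.sendall(b"partial result from task 1")
    print(a.recv(1024))                     # b'PARTIAL RESULT FROM TASK 1'
    t.join()
    a.close(); b.close()
    ```

    PVM (and later MPI) wraps this same send/receive pattern in higher-level primitives such as typed message buffers and collective operations, which is why it is the more convenient choice once more than two tasks are involved.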

  14. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. ... about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in the case of nucleobases Y and Z, a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one of the phosphate groups in the Watson-Crick strand. Also, it was shown that the nucleobase Y made good stacking and binding with the other nucleobases in the TFO and Watson-Crick duplex, respectively. In contrast, the nucleobase Z with the LNA moiety was forced to twist out of the plane of the Watson-Crick base pair, which ...

  15. In vitro activity of Origanum vulgare essential oil against Candida species

    Directory of Open Access Journals (Sweden)

    Marlete Brum Cleff

    2010-03-01

    The aim of this study was to evaluate the in vitro activity of the essential oil extracted from Origanum vulgare against sixteen isolates of Candida species. Standard strains tested comprised C. albicans (ATCC 44858, 4053, 18804 and 3691), C. parapsilosis (ATCC 22019), C. krusei (ATCC 34135), C. lusitaniae (ATCC 34449) and C. dubliniensis (ATCC MY646). Six Candida albicans isolates from the vaginal mucous membrane of female dogs, one isolate from the cutaneous tegument of a dog and one isolate from a capuchin monkey were tested in parallel. A broth microdilution technique (CLSI) was used, and the inoculum concentration was adjusted to 5 × 10⁶ CFU mL⁻¹. The essential oil was obtained by hydrodistillation in a Clevenger apparatus and analyzed by gas chromatography. Susceptibility was expressed as Minimal Inhibitory Concentration (MIC) and Minimal Fungicidal Concentration (MFC). All isolates tested in vitro were sensitive to O. vulgare essential oil. The chromatographic analysis revealed that the main compounds present in the essential oil were 4-terpineol (47.95%), carvacrol (9.42%), thymol (8.42%) and α-terpineol (7.57%). C. albicans isolates obtained from animal mucous membranes exhibited MIC and MFC values of 2.72 µL mL⁻¹ and 5 µL mL⁻¹, respectively. MIC and MFC values for C. albicans standard strains were 2.97 µL mL⁻¹ and 3.54 µL mL⁻¹, respectively. The MIC and MFC for non-albicans species were 2.10 µL mL⁻¹ and 2.97 µL mL⁻¹, respectively. The antifungal activity of O. vulgare essential oil against Candida spp. observed in vitro suggests its administration may represent an alternative treatment for candidiasis.

  16. Towards global interoperability for supporting biodiversity research on Essential Biodiversity Variables (EBVs)

    NARCIS (Netherlands)

    Kissling, W.D.; Hardisty, A.; García, E.A.; Santamaria, M.; De Leo, F.; Pesole, G.; Freyhof, J.; Manset, D.; Wissel, S.; Konijn, J.; Los, W.

    2015-01-01

    Essential biodiversity variables (EBVs) have been proposed by the Group on Earth Observations Biodiversity Observation Network (GEO BON) to identify a minimum set of essential measurements that are required for studying, monitoring and reporting biodiversity and ecosystem change. Despite the initial...

  17. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost-effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short-circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests

  18. Bayer image parallel decoding based on GPU

    Science.gov (United States)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectrical tracking systems, Bayer images are decoded by a traditional CPU-based method. However, this is too slow when images become large, for example 2K×2K×16 bit. To accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) supporting the CUDA architecture. The decoding procedure can be divided into three parts: a serial part, a task-parallel part, and a data-parallel part comprising inverse quantization, the inverse discrete wavelet transform (IDWT) and image post-processing. To reduce execution time, the task-parallel part is optimized with OpenMP techniques, while the data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared-memory access optimization, coalesced memory access optimization and texture-memory optimization. In particular, the IDWT can be significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16 bit Bayer image, the data-parallel part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed; experimental results show that it achieves a 3 to 5 times speedup compared with the serial CPU method.
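
    The 1D rewrite that enables the IDWT speedup exploits separability: every row can be transformed independently, so rows become the unit of data parallelism. The sketch below (plain Python, with a one-level inverse Haar step standing in for the paper's IDWT and with processes standing in for CUDA threads) shows the decomposition:

    ```python
    from multiprocessing import Pool

    def inverse_haar_row(row):
        """Reconstruct one row from its (averages, differences) halves."""
        half = len(row) // 2
        avg, diff = row[:half], row[half:]
        out = []
        for a, d in zip(avg, diff):
            out.extend((a + d, a - d))      # invert the averaging/differencing
        return out

    image = [[5.0, 1.0, 2.0, 0.5], [3.0, 2.0, 1.0, 1.0]]   # tiny 2-row "image"
    if __name__ == "__main__":
        with Pool(2) as pool:
            rows = pool.map(inverse_haar_row, image)        # rows in parallel
        print(rows)     # [[7.0, 3.0, 1.5, 0.5], [4.0, 2.0, 3.0, 1.0]]
    ```

    A full separable 2D inverse transform would apply the same row kernel a second time along the columns, which is precisely what makes the 1D formulation GPU-friendly.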

  19. Refinement of Parallel and Reactive Programs

    OpenAIRE

    Back, R. J. R.

    1992-01-01

    We show how to apply the refinement calculus to stepwise refinement of parallel and reactive programs. We use action systems as our basic program model. Action systems are sequential programs which can be implemented in a parallel fashion. Hence refinement calculus methods, originally developed for sequential programs, carry over to the derivation of parallel programs. Refinement of reactive programs is handled by data refinement techniques originally developed for the sequential refinement c...

  20. Essential oils as natural food antimicrobial agents: a review.

    Science.gov (United States)

    Vergis, Jess; Gokulakrishnan, P; Agarwal, R K; Kumar, Ashok

    2015-01-01

    Food-borne illnesses pose a real scourge in the present scenario, as consumption of packaged food has increased to a great extent. Pathogens entering packaged foods may survive longer, which needs to be kept in check. Antimicrobial agents, either alone or in combination, are added to the food or packaging materials for this purpose. Exploiting their antimicrobial properties, essential oils are considered a "natural" remedy to this problem, beyond their flavoring role, as an alternative to synthetic agents. Essential oils are well known for their antibacterial, antiviral, antimycotic, antiparasitic, and antioxidant properties, due to the presence of phenolic functional groups. Gram-positive organisms are found to be more susceptible to the action of essential oils. Essential oils improve the shelf-life of packaged products, control microbial growth, and address consumer concerns regarding the use of chemical preservatives. This review is intended to provide an overview of essential oils and their role as natural antimicrobial agents in the food industry.

  1. Efficacy and tolerability of topical sertaconazole versus topical terbinafine in localized dermatophytosis: A randomized, observer-blind, parallel group study.

    Science.gov (United States)

    Chatterjee, Dattatreyo; Ghosh, Sudip Kumar; Sen, Sukanta; Sarkar, Saswati; Hazra, Avijit; De, Radharaman

    2016-01-01

    Epidermal dermatophyte infections most commonly manifest as tinea corporis or tinea cruris. Topical azole antifungals are commonly used in their treatment, but the literature suggests that most require twice-daily application and provide lower cure rates than the allylamine antifungal terbinafine. We conducted a head-to-head comparison of the effectiveness of the once-daily topical azole sertaconazole with terbinafine in these infections. We conducted a randomized, observer-blind, parallel-group study (Clinical Trial Registry India [CTRI]/2014/09/005029) with adult patients of either sex presenting with localized lesions. The clinical diagnosis was confirmed by potassium hydroxide smear microscopy of skin scrapings. After baseline assessment of erythema, scaling, and pruritus, patients applied either of the two study drugs once daily for 2 weeks. If clinical cure was not seen at 2 weeks but improvement was noted, application was continued for a further 2 weeks. Patients deemed to be clinical failures at 2 weeks were switched to oral antifungals. Overall, 88 patients on sertaconazole and 91 on terbinafine were analyzed. At 2 weeks, the clinical cure rates were comparable at 77.27% (95% confidence interval [CI] 68.52%-86.03%) for sertaconazole and 73.63% (95% CI 64.57%-82.68%) for terbinafine (P = 0.606). Fourteen patients in either group improved and, on further treatment, showed complete healing by another 2 weeks. The final cure rate at 4 weeks was also comparable at 93.18% (95% CI 88.75%-97.62%) and 89.01% (95% CI 82.59%-95.44%), respectively (P = 0.914). At 2 weeks, 6 (6.82%) sertaconazole and 10 (10.99%) terbinafine recipients were considered clinical failures. Tolerability of both preparations was excellent. Despite the limitations of an observer-blind study without microbiological support, the results suggest that once-daily topical sertaconazole is as effective as terbinafine in localized tinea infections.
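
    The confidence intervals quoted above follow from the normal (Wald) approximation for a proportion; taking the sertaconazole arm's 77.27% cure rate as 68 of 88 patients (a count inferred here from the reported percentage, not stated in the abstract) reproduces the 68.52%-86.03% interval.

    ```python
    import math

    def wald_ci(successes, n, z=1.96):
        """Wald 95% confidence interval for a proportion."""
        p = successes / n
        se = math.sqrt(p * (1 - p) / n)    # standard error of the estimate
        return p - z * se, p + z * se

    lo, hi = wald_ci(68, 88)               # 68/88 = 77.27% clinical cure
    print(f"{100*lo:.2f}%-{100*hi:.2f}%")  # 68.52%-86.03%
    ```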

  2. Design of a real-time wind turbine simulator using a custom parallel architecture

    Science.gov (United States)

    Hoffman, John A.; Gluck, R.; Sridhar, S.

    1995-01-01

    The design of a new parallel-processing digital simulator is described. The new simulator has been developed specifically for analysis of wind energy systems in real time. The new processor has been named the Wind Energy System Time-domain simulator, version 3 (WEST-3). Like previous WEST versions, WEST-3 performs many computations in parallel. The modules in WEST-3 are pure digital processors, however. These digital processors can be programmed individually and operated in concert to achieve real-time simulation of wind turbine systems. Because of this programmability, WEST-3 is much more flexible and general than its two predecessors. The design features of WEST-3 are described to show how the system produces high-speed solutions of nonlinear time-domain equations. WEST-3 has two very fast Computational Units (CUs) that use minicomputer technology plus special architectural features that make them many times faster than a microcomputer. These CUs are needed to perform the complex computations associated with the wind turbine rotor system in real time. The parallel architecture of the CU allows several tasks to be done in each cycle, including an I/O operation and a combined multiply, add, and store. The WEST-3 simulator can be expanded at any time for additional computational power. This is possible because the CUs are interfaced to each other and to other portions of the simulation using special serial buses. These buses can be 'patched' together in essentially any configuration (in a manner very similar to the programming methods used in analog computation) to balance the input/output requirements. CUs can be added in any number to share a given computational load. This flexible bus feature is very different from many other parallel processors, which usually have a throughput limit because of rigid bus architecture.

  3. Portable parallel programming in a Fortran environment

    International Nuclear Information System (INIS)

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network-based MIMD processors. The parallelism is implemented using standard UNIX (tm) tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at from 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a ''nearly realistic'' lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs
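
    The monitor concept mentioned above can be sketched in a few lines (a generic illustration, not the PARMACs macros themselves): shared state guarded by a lock, with a condition variable on which a collector blocks until all workers have deposited their results.

    ```python
    import threading

    class Monitor:
        """Shared tally whose updates are serialized by a lock."""
        def __init__(self, n_workers):
            self.lock = threading.Lock()
            self.done = threading.Condition(self.lock)
            self.remaining = n_workers
            self.total = 0

        def deposit(self, value):
            with self.lock:                  # mutual exclusion on shared state
                self.total += value
                self.remaining -= 1
                if self.remaining == 0:
                    self.done.notify_all()   # last worker wakes the collector

        def collect(self):
            with self.lock:
                while self.remaining:        # guarded wait against spurious wakeups
                    self.done.wait()
                return self.total

    m = Monitor(4)
    for i in range(4):
        threading.Thread(target=m.deposit, args=(i + 1,)).start()
    print(m.collect())                       # 10
    ```

    Message passing, the other concept the abstract names, replaces the shared state with explicit sends and receives, but the coordination pattern (workers contribute, one process gathers) is the same.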

  4. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  5. A Tutorial on Parallel and Concurrent Programming in Haskell

    Science.gov (United States)

    Peyton Jones, Simon; Singh, Satnam

    This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs which allows programmers to use rich data types in data parallel programs which are automatically transformed into flat data parallel versions for efficient execution on multi-core processors.

  6. Post-discharge management following hip fracture - get you back to B4: A parallel group, randomized controlled trial study protocol

    Directory of Open Access Journals (Sweden)

    Brown Roy A

    2011-06-01

    Abstract Background: Fall-related hip fractures result in significant personal and societal consequences; importantly, up to half of older adults with hip fracture never regain their previous level of mobility. Strategies of follow-up care for older adults after fracture have improved investigation for osteoporosis, but managing bone health alone is not enough. Prevention of fractures requires management of both bone health and falls risk factors (including the contributing roles of cognition, balance and continence) to improve outcomes. Methods/Design: This is a parallel-group, pragmatic randomized controlled trial to test the effectiveness of a post-fracture clinic, compared with usual care, on mobility for older adults following their hospitalization for hip fracture. Participants randomized to the intervention will attend a fracture follow-up clinic where a geriatrician and physiotherapist will assess and manage their mobility and other health issues. Depending on needs identified at the clinical assessment, participants may receive individualized and group-based outpatient physiotherapy and a home exercise program. Our primary objective is to assess the effectiveness of a novel post-discharge fracture management strategy on the mobility of older adults after hip fracture. We will enrol 130 older adults (65 years and over) who have sustained a hip fracture in the previous three months, were admitted to hospital from home and are expected to be discharged home. We will exclude older adults who, prior to the fracture, were unable to walk 10 meters, or were diagnosed with dementia and/or significant comorbidities that would preclude their participation in the clinical service. Eligible participants will be randomly assigned to the Intervention or Usual Care groups by remote allocation. Treatment allocation will be concealed; investigators, the measurement team and primary data analysts will be blinded to group allocation. Our primary outcome is mobility...

  7. Optimal task mapping in safety-critical real-time parallel systems

    International Nuclear Information System (INIS)

    Aussagues, Ch.

    1998-01-01

    This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance systems for command and control that can be found in the nuclear domain, and more generally in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution lies mainly in the following three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new operator, the synchronized product of state-machine task graphs; and the validation of the approach by its implementation and evaluation. The work addresses in particular the main problem of optimal task mapping on a parallel architecture, such that the temporal constraints are globally guaranteed, i.e. the timeliness property holds. The results also incorporate optimality criteria for the sizing and correct dimensioning of a parallel system, for instance in the number of processing elements. These criteria are connected with operational constraints of the application domain. Our approach is based on the off-line analysis of the feasibility of the deadline-driven dynamic scheduling that is used to schedule tasks inside one processor. This leads us to define the synchronized product, from which a system of linear constraints is automatically generated, allowing the maximum load of a group of tasks to be calculated and their timeliness constraints to be verified. The communications, their timeliness verification and their incorporation into the mapping problem are the second main contribution of this thesis. Finally, the global solving technique dealing with both task and communication aspects has been implemented and evaluated in the framework of the OASIS project in the LETI research center at CEA/Saclay. (author)
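The off-line feasibility analysis of deadline-driven (EDF) scheduling inside one processor can be illustrated with the classic utilization bound; a minimal sketch with hypothetical task parameters (the thesis's actual system of linear constraints is richer than this single-processor test):

```python
def edf_feasible(tasks):
    """Liu-Layland test for preemptive EDF on one processor.

    tasks: (worst-case execution time, period) pairs, with deadline == period.
    The task set is schedulable iff total utilization does not exceed 1.
    """
    return sum(c / t for c, t in tasks) <= 1.0

# Hypothetical task sets
print(edf_feasible([(1, 4), (2, 6), (1, 8)]))  # utilization ~0.71 -> True
print(edf_feasible([(3, 4), (2, 6)]))          # utilization ~1.08 -> False
```

The same idea generalizes to the thesis's setting by generating one such load constraint per group of tasks and checking them all simultaneously.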

  8. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  9. Current distribution characteristics of superconducting parallel circuits

    International Nuclear Information System (INIS)

    Mori, K.; Suzuki, Y.; Hara, N.; Kitamura, M.; Tominaka, T.

    1994-01-01

    In order to increase the current carrying capacity of the current path of a superconducting magnet system, parallel circuits such as insulated multi-strand cables or parallel persistent current switches (PCS) are used. In superconducting parallel circuits of an insulated multi-strand cable or a parallel PCS, the current distribution during the current sweep, the persistent mode, and the quench process was investigated. Two methods were used to measure the current distribution: (1) each strand was surrounded with a pure iron core with an air gap, in which a Hall probe was located; the accuracy of this method was degraded by the magnetic hysteresis of the iron. (2) A Rogowski coil without iron was used for the current measurement of each path in a 4-parallel PCS. As a result, it was shown that the current distribution characteristics of a parallel PCS are very similar to those of an insulated multi-strand cable during the quench process

  10. The essential oil of rosemary and its effect on the human image and numerical short-term memory

    Directory of Open Access Journals (Sweden)

    O.V. Filiptsova

    2017-06-01

    Full Text Available The research results of the effect of essential oil of rosemary on human short-term image and numerical memory have been described. The study involved 53 secondary school students (24 boys and 29 girls) aged 13–15 years, residents of a Ukrainian metropolis. Participants were divided into the control group and the “Rosemary” group, in which rosemary essential oil was sprayed. Statistically significant differences in the productivity of short-term memory between the participants of these two groups have been found, while sex differences within uniform groups were absent. Therefore, the essential oil of rosemary significantly increased image memory compared to the control. Inhalation of the rosemary essential oil increased the memorization of numbers as well.

  11. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  12. 6th International Parallel Tools Workshop

    CERN Document Server

    Brinkmann, Steffen; Gracia, José; Resch, Michael; Nagel, Wolfgang

    2013-01-01

    The latest advances in High Performance Computing hardware have significantly raised the level of available compute performance. At the same time, the growing hardware capabilities of modern supercomputing architectures have increased the complexity of parallel application development. Despite numerous efforts to improve and simplify parallel programming, a lot of manual debugging and tuning work is still required. This process is supported by special software tools that facilitate debugging, performance analysis, and optimization, and thus make a major contribution to the development of robust and efficient parallel software. This book introduces a selection of the tools presented and discussed at the 6th International Parallel Tools Workshop, held in Stuttgart, Germany, 25-26 September 2012.

  13. A randomised, single-blind, single-dose, three-arm, parallel-group study in healthy subjects to demonstrate pharmacokinetic equivalence of ABP 501 and adalimumab.

    Science.gov (United States)

    Kaur, Primal; Chow, Vincent; Zhang, Nan; Moxness, Michael; Kaliyaperumal, Arunan; Markus, Richard

    2017-03-01

    To demonstrate pharmacokinetic (PK) similarity of the biosimilar candidate ABP 501 relative to the adalimumab reference product from the USA and European Union (EU), and to evaluate the safety, tolerability and immunogenicity of ABP 501. Randomised, single-blind, single-dose, three-arm, parallel-group study; healthy subjects were randomised to receive ABP 501 (n=67), adalimumab (USA) (n=69) or adalimumab (EU) (n=67) 40 mg subcutaneously. Primary end points were the area under the serum concentration-time curve from time 0 extrapolated to infinity (AUCinf) and the maximum observed concentration (Cmax). Secondary end points included safety and immunogenicity. AUCinf and Cmax were similar across the three groups. The geometric mean ratio (GMR) of AUCinf was 1.11 between ABP 501 and adalimumab (USA), and 1.04 between ABP 501 and adalimumab (EU). The GMR of Cmax was 1.04 between ABP 501 and adalimumab (USA) and 0.96 between ABP 501 and adalimumab (EU). The 90% CIs for the GMRs of AUCinf and Cmax were within the prespecified standard PK equivalence criteria of 0.80 to 1.25. Treatment-related adverse events were mild to moderate and were reported for 35.8%, 24.6% and 41.8% of subjects in the ABP 501, adalimumab (USA) and adalimumab (EU) groups; the incidence of antidrug antibodies (ADAbs) was similar among the study groups. Results of this study demonstrated PK similarity of ABP 501 with adalimumab (USA) and adalimumab (EU) after a single 40-mg subcutaneous injection. No new safety signals with ABP 501 were identified. The safety and tolerability of ABP 501 were similar to the reference products, and similar ADAb rates were observed across the three groups. EudraCT number 2012-000785-37.
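The equivalence criterion used in such studies can be illustrated numerically: compute the GMR and an approximate 90% CI on the log scale, then check the interval against the 0.80–1.25 bounds. A sketch with made-up AUC values and a simple two-sample normal approximation (the study itself would use an ANOVA-based t interval):

```python
import math
import statistics

def gmr_90ci(test_vals, ref_vals):
    """Geometric mean ratio and ~90% CI from log-transformed PK values.

    Uses a two-sample normal approximation (z = 1.645) on the log scale.
    """
    lt = [math.log(v) for v in test_vals]
    lr = [math.log(v) for v in ref_vals]
    diff = statistics.mean(lt) - statistics.mean(lr)
    se = math.sqrt(statistics.variance(lt) / len(lt) +
                   statistics.variance(lr) / len(lr))
    z = 1.645
    return tuple(math.exp(x) for x in (diff, diff - z * se, diff + z * se))

# Hypothetical AUCinf values for test and reference arms
gmr, lo, hi = gmr_90ci([2900, 3100, 3000, 3050], [2800, 2950, 2900, 3000])
equivalent = 0.80 <= lo and hi <= 1.25   # prespecified equivalence bounds
print(round(gmr, 3), equivalent)
```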

  14. Group theory for chemists fundamental theory and applications

    CERN Document Server

    Molloy, K C

    2010-01-01

    The basics of group theory and its applications to themes such as the analysis of vibrational spectra and molecular orbital theory are essential knowledge for the undergraduate student of inorganic chemistry. The second edition of Group Theory for Chemists uses diagrams and problem-solving to help students test and improve their understanding, including a new section on the application of group theory to electronic spectroscopy.Part one covers the essentials of symmetry and group theory, including symmetry, point groups and representations. Part two deals with the application of group theory t

  15. Angular parallelization of a curvilinear Sn transport theory method

    International Nuclear Information System (INIS)

    Haghighat, A.

    1991-01-01

    In this paper a parallel algorithm for angular domain decomposition (or parallelization) of an r-dependent spherical S n transport theory method is derived. The parallel formulation is incorporated into TWOTRAN-II using the IBM Parallel Fortran compiler and implemented on an IBM 3090/400 (with four processors). The behavior of the parallel algorithm for different physical problems is studied, and it is concluded that the parallel algorithm behaves differently in the presence of a fission source as opposed to the absence of a fission source; this is attributed to the relative contributions of the source and the angular redistribution terms in the S n algorithm. Further, the parallel performance of the algorithm is measured for various problem sizes and different combinations of angular subdomains or processors. Poor parallel efficiencies between ∼35% and 50% are achieved in situations where the relative difference of parallel to serial iterations is ∼50%. High parallel efficiencies between ∼60% and 90% are obtained in situations where the relative difference of parallel to serial iterations is <35%
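The efficiency figures quoted here follow the standard definition, speedup divided by processor count; a minimal sketch with illustrative timings (not the study's data):

```python
def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency = (serial time / parallel time) / processor count."""
    return (t_serial / t_parallel) / n_procs

# Illustrative timings on 4 processors
print(parallel_efficiency(100.0, 35.0, 4))  # ~0.71, in the "poor" band
print(parallel_efficiency(100.0, 28.0, 4))  # ~0.89, in the "high" band
```

Extra solver iterations in the parallel run inflate t_parallel directly, which is why the relative difference of parallel to serial iterations tracks the efficiency so closely.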

  16. Combining Compile-Time and Run-Time Parallelization

    Directory of Open Access Journals (Sweden)

    Sungdo Moon

    1999-01-01

    Full Text Available This paper demonstrates that significant improvements to automatic parallelization technology require that existing systems be extended in two ways: (1) they must combine high‐quality compile‐time analysis with low‐cost run‐time testing; and (2) they must take control flow into account during analysis. We support this claim with the results of an experiment that measures the safety of parallelization at run time for loops left unparallelized by the Stanford SUIF compiler’s automatic parallelization system. We present results of measurements on programs from two benchmark suites – SPECFP95 and NAS sample benchmarks – which identify inherently parallel loops in these programs that are missed by the compiler. We characterize remaining parallelization opportunities, and find that most of the loops require run‐time testing, analysis of control flow, or some combination of the two. We present a new compile‐time analysis technique that can be used to parallelize most of these remaining loops. This technique is designed to not only improve the results of compile‐time parallelization, but also to produce low‐cost, directed run‐time tests that allow the system to defer binding of parallelization until run‐time when safety cannot be proven statically. We call this approach predicated array data‐flow analysis. We augment array data‐flow analysis, which the compiler uses to identify independent and privatizable arrays, by associating predicates with array data‐flow values. Predicated array data‐flow analysis allows the compiler to derive “optimistic” data‐flow values guarded by predicates; these predicates can be used to derive a run‐time test guaranteeing the safety of parallelization.
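A run-time safety test of this flavor can be as simple as checking, just before the loop executes, that no two iterations touch the same array element; a toy sketch of such a test (not SUIF's actual predicated analysis):

```python
def iterations_independent(index_sets):
    """True if no array element is touched by two different iterations,
    i.e. the loop has no cross-iteration dependences and may run in parallel."""
    seen = set()
    for touched in index_sets:
        if seen & touched:   # some element already used by an earlier iteration
            return False
        seen |= touched
    return True

# per-iteration subscripts of a hypothetical indirect loop a[idx[i]] += ...
print(iterations_independent([{0}, {1}, {2}]))   # disjoint -> parallel-safe
print(iterations_independent([{0}, {1}, {0}]))   # element 0 reused -> serial
```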

  17. PSHED: a simplified approach to developing parallel programs

    International Nuclear Information System (INIS)

    Mahajan, S.M.; Ramesh, K.; Rajesh, K.; Somani, A.; Goel, M.

    1992-01-01

    This paper presents a simplified approach in the form of a tree-structured computational model for parallel application programs. An attempt is made to provide a standard user interface to execute programs on the BARC Parallel Processing System (BPPS), a scalable distributed memory multiprocessor. The interface package, called PSHED, provides a basic framework for representing and executing parallel programs on different parallel architectures. The PSHED package incorporates concepts from a broad range of previous research in programming environments and parallel computation. (author). 6 refs

  18. Parallel evolutionary computation in bioinformatics applications.

    Science.gov (United States)

    Pinho, Jorge; Sobral, João Luis; Rocha, Miguel

    2013-05-01

    A large number of optimization problems within the field of Bioinformatics require methods able to handle their inherent complexity (e.g. NP-hard problems) and also demand increased computational effort. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java-based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of their efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of the parallelism-related modules allows the user to easily configure the environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization.

  19. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel import is an urgent question today. Legalization of parallel import in Russia is expedient; this conclusion is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of this decision and to apply remedies to minimize them.

  20. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead

  1. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead
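The overhead models themselves are not reproduced in the abstract; a generic Amdahl-style model with an explicit per-processor overhead term conveys the idea (all fractions hypothetical, not the code's measured values):

```python
def modeled_speedup(n_procs, serial_frac, overhead_per_proc=0.0):
    """Amdahl's law extended with a linear parallel-overhead term.

    serial_frac: fraction of the work that cannot be parallelized.
    overhead_per_proc: overhead added per processor, as a fraction of
    the serial runtime (a stand-in for a measured overhead model).
    """
    return 1.0 / (serial_frac
                  + (1.0 - serial_frac) / n_procs
                  + overhead_per_proc * n_procs)

print(round(modeled_speedup(8, 0.05), 2))         # -> 5.93 with no overhead
print(round(modeled_speedup(8, 0.05, 0.005), 2))  # overhead cuts the speedup
```

Fitting the overhead term to measurements, as done for TORT, lets the model predict where adding processors stops paying off.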

  2. Cosmic Shear With ACS Pure Parallels. Targeted Portion.

    Science.gov (United States)

    Rhodes, Jason

    2002-07-01

    Small distortions in the shapes of background galaxies by foreground mass provide a powerful method of directly measuring the amount and distribution of dark matter. Several groups have recently detected this weak lensing by large-scale structure, also called cosmic shear. The high resolution and sensitivity of HST/ACS provide a unique opportunity to measure cosmic shear accurately on small scales. Using 260 parallel orbits in Sloan i {F775W}, we will measure for the first time: the cosmic shear variance on scales Omega_m^0.5, with signal-to-noise {s/n} 20, and the mass density Omega_m with s/n=4. These measurements will be made at small angular scales where non-linear effects dominate the power spectrum, providing a test of the gravitational instability paradigm for structure formation. Measurements on these scales are not possible from the ground, because of the systematic effects induced by PSF smearing from seeing. Having many independent lines of sight reduces the uncertainty due to cosmic variance, making parallel observations ideal.

  3. EFFECT OF THYME ESSENTIAL OIL ADDITION ON PHYSICAL AND MICROBIOLOGICAL QUALITY OF TABLE EGGS

    Directory of Open Access Journals (Sweden)

    Henrieta Arpášová

    2013-02-01

    Full Text Available Essential oils are intensely fragrant, oily liquid substances contained in different parts of the plant. Their function is based on an organoleptic effect and stimulation of the organism to produce digestive juices; the result is higher digestibility and absorption of nutrients. Besides antibacterial properties, essential oils or their components have been shown to exhibit antiviral, antimycotic, antitoxigenic, antiparasitic, and insecticidal properties. In this experiment the effects of supplementing the diet of laying hens with thyme essential oil on physical and microbiological egg parameters were studied. Hens of the laying hybrid Hy-Line Brown (n=30) were randomly divided into 3 groups (n=10) and fed for 23 weeks on diets supplemented with thyme essential oil. In the first experimental group the feed mixture was supplemented with thyme essential oil at a dose of 0.5 g/kg, and in the second at a dose of 1 g/kg. The results suggest that all qualitative parameters of the egg's internal content (yolk weight (g), yolk index, percentage portion of egg yolk (%), yolk colour (°HLR), albumen weight (g), percentage portion of albumen (%), Haugh units (HU), albumen index) were not significantly influenced by the thyme essential oil addition (P>0.05). The numbers of coliforms, enterococci, fungi and yeasts decreased with increasing dose of oil. The number of lactobacilli was zero in all groups.

  4. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction, termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  5. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with on the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.
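The scheme of independent walkers that pool histograms for shared weight updates can be sketched in miniature. The toy below uses a tiny 1D Ising ring and runs the "parallel" walkers serially; every parameter is illustrative, and the paper's GPU implementation is far more elaborate:

```python
import math
import random

def energy(spins):
    """Energy of a 1D Ising ring (J = 1, no external field)."""
    n = len(spins)
    return -sum(spins[i] * spins[(i + 1) % n] for i in range(n))

def walker(weights, n_spins, n_steps, rng):
    """One independent multicanonical walker; returns its energy histogram."""
    spins = [rng.choice((-1, 1)) for _ in range(n_spins)]
    e = energy(spins)
    hist = {}
    for _ in range(n_steps):
        i = rng.randrange(n_spins)
        spins[i] = -spins[i]
        e_new = energy(spins)
        # accept with probability min(1, exp(W(E_new) - W(E)))
        if rng.random() < math.exp(min(0.0, weights.get(e_new, 0.0)
                                            - weights.get(e, 0.0))):
            e = e_new
        else:
            spins[i] = -spins[i]  # reject: undo the flip
        hist[e] = hist.get(e, 0) + 1
    return hist

rng = random.Random(1)
weights = {}
for _ in range(5):  # rounds of sampling followed by a weight update
    merged = {}
    # four independent walkers; serial here, but each call is embarrassingly
    # parallel (one thread or rank per walker in the massively parallel setting)
    for _ in range(4):
        for e, c in walker(weights, 8, 2000, rng).items():
            merged[e] = merged.get(e, 0) + c
    for e, c in merged.items():  # W(E) -= ln H(E) flattens the histogram
        weights[e] = weights.get(e, 0.0) - math.log(c)
print(sorted(merged))  # energy levels visited in the final round
```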

  6. [Essential oil from Artemisia lavandulaefolia induces apoptosis and necrosis of HeLa cells].

    Science.gov (United States)

    Zhang, Lu-min; Lv, Xue-wei; Shao, Lin-xiang; Ma, Yan-fang; Cheng, Wen-zhao; Gao, Hai-tao

    2013-12-01

    To investigate the effects of Artemisia lavandulaefolia essential oil on apoptosis and necrosis of HeLa cells. Cell viability was assayed using the MTT method. Morphological and structural alterations in HeLa cells were observed by microscopy. Furthermore, cell apoptosis was measured by DNA ladder and flow cytometry. DNA damage was measured by comet assay, and protein expression was examined by Western blot analysis. The MTT assay showed that essential oil from Artemisia lavandulaefolia could inhibit the proliferation of HeLa cells in a dose-dependent manner. After treatment with essential oil of Artemisia lavandulaefolia for 24 h, HeLa cells in the 100 and 200 microg/mL experimental groups exhibited the typical morphological changes of apoptosis, such as cell shrinkage and condensed nuclear chromatin. However, the cells in the 400 microg/mL group showed necrotic morphological changes including cytomembrane rupture and cytoplasm spillover. In addition, a DNA ladder could be demonstrated by DNA electrophoresis in each experimental group. An apoptosis peak was also evident in flow cytometry in each experimental group. After treating the HeLa cells with essential oil of Artemisia lavandulaefolia for 6 h, a comet tail was detected by comet assay. Moreover, Western blot analysis showed that caspase-3 was activated and PARP was inactivated by cleavage. Essential oil from Artemisia lavandulaefolia can inhibit the proliferation of HeLa cells in vitro. A low concentration of the essential oil can induce apoptosis, whereas a high concentration results in necrosis of HeLa cells. The mechanism may be related to the caspase-3-mediated PARP apoptotic signal pathway.

  7. New adaptive differencing strategy in the PENTRAN 3-d parallel Sn code

    International Nuclear Information System (INIS)

    Sjoden, G.E.; Haghighat, A.

    1996-01-01

    It is known that three-dimensional (3-D) discrete ordinates (S n ) transport problems require an immense amount of storage and computational effort to solve. For this reason, parallel codes that offer a capability to completely decompose the angular, energy, and spatial domains among a distributed network of processors are required. One such code recently developed is PENTRAN, which iteratively solves 3-D multi-group, anisotropic S n problems on distributed-memory platforms, such as the IBM-SP2. Because large problems typically contain several different material zones with various properties, available differencing schemes should automatically adapt to the transport physics in each material zone. To minimize the memory and message-passing overhead required for massively parallel S n applications, available differencing schemes in an adaptive strategy should also offer reasonable accuracy and positivity, yet require only the zeroth spatial moment of the transport equation; differencing schemes based on higher spatial moments, in spite of their greater accuracy, require at least twice the amount of storage and communication cost for implementation in a massively parallel transport code. This paper discusses a new adaptive differencing strategy that uses increasingly accurate schemes with low parallel memory and communication overhead. This strategy, implemented in PENTRAN, includes a new scheme, exponential directional averaged (EDA) differencing

  8. Parallel thermal radiation transport in two dimensions

    International Nuclear Information System (INIS)

    Smedley-Stevenson, R.P.; Ball, S.R.

    2003-01-01

    This paper describes the distributed memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state of the art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 Tera-Flop MPP (massive parallel processors) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  9. Parallel thermal radiation transport in two dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Smedley-Stevenson, R.P.; Ball, S.R. [AWE Aldermaston (United Kingdom)

    2003-07-01

    This paper describes the distributed memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state of the art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 Tera-Flop MPP (massive parallel processors) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  10. Development of a parallelization strategy for the VARIANT code

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Khalil, H.S.; Palmiotti, G.; Tatsumi, M.

    1996-01-01

    The VARIANT code solves the multigroup steady-state neutron diffusion and transport equation in three-dimensional Cartesian and hexagonal geometries using the variational nodal method. VARIANT consists of four major parts that must be executed sequentially: input handling, calculation of response matrices, solution algorithm (i.e. inner-outer iteration), and output of results. The objective of the parallelization effort was to reduce the overall computing time by distributing the work of the two computationally intensive (sequential) tasks, the coupling coefficient calculation and the iterative solver, equally among a group of processors. This report describes the code's calculations and gives performance results on one of the benchmark problems used to test the code. The performance analysis in the IBM SPx system shows good efficiency for well-load-balanced programs. Even for relatively small problem sizes, respectable efficiencies are seen for the SPx. An extension to achieve a higher degree of parallelism will be addressed in future work. 7 refs., 1 tab
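The report attributes good efficiency to well-load-balanced programs; the kind of even block partition used to spread the response-matrix and solver work among processors can be sketched as follows (illustrative only, not VARIANT's actual distribution code):

```python
def partition(n_items, n_procs):
    """Split n_items contiguous work units (e.g. nodal response-matrix
    calculations) as evenly as possible among n_procs workers."""
    base, extra = divmod(n_items, n_procs)
    ranges, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)  # first `extra` workers get +1
        ranges.append((start, start + size))
        start += size
    return ranges

# e.g. 10 response-matrix calculations over 4 processors
print(partition(10, 4))  # -> [(0, 3), (3, 6), (6, 8), (8, 10)]
```

Each worker's chunk differs in size by at most one item, which is what keeps the load balanced and the measured efficiency high even for small problems.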

  11. Oxidative stability of chicken thigh meat after treatment with Abies alba essential oil

    Directory of Open Access Journals (Sweden)

    Adriana Pavelková

    2015-12-01

    Full Text Available In the present work, the effect of Abies alba essential oil in two different concentrations on the oxidative stability of chicken thigh muscles during chilled storage was investigated. In the experiment, chickens of the hybrid combination Cobb 500 were slaughtered after a 42-day fattening period. All the broiler chickens were fed the same feed mixtures and kept under the same conditions. The feed mixtures were produced without any antibiotic preparations or coccidiostatics. After slaughter, fresh chicken thighs with skin were obtained by dissection from the left half-carcass and divided into five groups (n = 5): C - control air-packaged group; A1 - vacuum-packaged experimental group; A2 - vacuum-packaged experimental group with ethylenediaminetetraacetic acid (EDTA) solution 1.50% w/w; A3 - vacuum-packaged experimental group with Abies alba oil 0.10% v/w; and A4 - vacuum-packaged experimental group with Abies alba oil 0.20% v/w. The Abies alba essential oil was applied to the chicken thighs, and immediately after dipping each sample was packaged using a vacuum packaging machine and stored refrigerated at 4 ±0.5 °C. Thiobarbituric acid (TBA) values, expressed as the amount of malondialdehyde (MDA) in 1 kg of sample, were measured on the 1st, 4th, 8th, 12th and 16th day of storage after slaughter. The treatment of chicken thighs with Abies alba essential oil showed statistically significant differences between all test groups and the control group: the higher average MDA value in thigh muscle of broiler chickens was measured in samples of the control group (0.4380 mg.kg-1) compared to experimental groups A1 (0.124 mg.kg-1), A2 (0.086 mg.kg-1), A3 (0.082 mg.kg-1) and A4 (0.077 mg.kg-1) after 16 days of chilled storage. The experiment results show that the treatment of chicken thighs with Abies alba essential oil positively influenced the reduction of oxidative processes in thigh

  12. Parallel processing for artificial intelligence 1

    CERN Document Server

    Kanal, LN; Kumar, V; Suttner, CB

    1994-01-01

    Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, data filtering and data mining. The publication is divided into six sections. The first addresses parallel computing for processing and understanding images. The second discus

  13. Comparison of parallel viscosity with neoclassical theory

    International Nuclear Information System (INIS)

    Ida, K.; Nakajima, N.

    1996-04-01

    Toroidal rotation profiles are measured with charge exchange spectroscopy for plasma heated with tangential NBI in the CHS heliotron/torsatron device in order to estimate the parallel viscosity. The parallel viscosity derived from the toroidal rotation velocity shows good agreement with the neoclassical parallel viscosity plus the perpendicular viscosity (μ⊥ = 2 m²/s). (author)

  14. Effects of Assist-As-Needed Upper Extremity Robotic Therapy after Incomplete Spinal Cord Injury: A Parallel-Group Controlled Trial

    Directory of Open Access Journals (Sweden)

    John Michael Frullo

    2017-06-01

    Full Text Available BackgroundRobotic rehabilitation of the upper limb following neurological injury has been supported through several large clinical studies for individuals with chronic stroke. The application of robotic rehabilitation to the treatment of other neurological injuries is less developed, despite indications that strategies successful for restoration of motor capability following stroke may benefit individuals with incomplete spinal cord injury (SCI as well. Although recent studies suggest that robot-aided rehabilitation might be beneficial after incomplete SCI, it is still unclear what type of robot-aided intervention contributes to motor recovery.MethodsWe developed a novel assist-as-needed (AAN robotic controller to adjust challenge and robotic assistance continuously during rehabilitation therapy delivered via an upper extremity exoskeleton, the MAHI Exo-II, to train independent elbow and wrist joint movements. We further enrolled seventeen patients with incomplete spinal cord injury (AIS C and D levels in a parallel-group balanced controlled trial to test the efficacy of the AAN controller, compared to a subject-triggered (ST controller that does not adjust assistance or challenge levels continuously during therapy. The conducted study is a stage two, development-of-concept pilot study.ResultsWe validated the AAN controller in its capability of modulating assistance and challenge during therapy via analysis of longitudinal robotic metrics. For the selected primary outcome measure, the pre–post difference in ARAT score, no statistically significant change was measured in either group of subjects. 
Ancillary analysis of secondary outcome measures obtained via robotic testing indicates gradual improvement in movement quality during the therapy program in both groups, with the AAN controller affording greater increases in movement quality over the ST controller.ConclusionThe present study demonstrates feasibility of subject-adaptive robotic therapy

  15. Molecular symmetry: Why permutation-inversion (PI) groups don't render the point groups obsolete

    Science.gov (United States)

    Groner, Peter

    2018-01-01

    The analysis of spectra of molecules with internal large-amplitude motions (LAMs) requires molecular symmetry (MS) groups that are larger than and significantly different from the more familiar point groups. MS groups are described often by the permutation-inversion (PI) group method. It is shown that point groups still can and should play a significant role together with the PI groups for a class of molecules with internal rotors. In molecules of this class, several simple internal rotors are attached to a rigid molecular frame. The PI groups for this class are semidirect products of the form H ⋊ F, where the invariant subgroup H is a direct product of cyclic groups and F is a point group. This result is used to derive meaningful labels for MS groups, and to derive correlation tables between MS groups and point groups. MS groups of this class have many parallels to space groups of crystalline solids.

  16. Adapting algorithms to massively parallel hardware

    CERN Document Server

    Sioulas, Panagiotis

    2016-01-01

    In recent years, the trend in computing has shifted from delivering processors with faster clock speeds to increasing the number of cores per processor. This marks a paradigm shift towards parallel programming, in which applications are programmed to exploit the power provided by multi-cores, usually with gains in time-to-solution and memory footprint. Specifically, this trend has sparked an interest in massively parallel systems that can provide a large number of processors, and possibly computing nodes, as in GPUs and MPPAs (Massively Parallel Processor Arrays). In this project, the focus was on two distinct computing problems: k-d tree searches and track seeding cellular automata. The goal was to adapt the algorithms to parallel systems and evaluate their performance in different cases.
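Of the two case studies named here, k-d tree search is naturally data-parallel across independent queries. A minimal sketch of that idea in Python (a 2-d tree with queries fanned out over a thread pool; the project itself targets GPUs and MPPAs, and all code here is illustrative rather than the project's own):

```python
from concurrent.futures import ThreadPoolExecutor

def dist2(a, b):
    """Squared Euclidean distance in 2-d."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def build_kdtree(points, depth=0):
    """Recursively build a 2-d k-d tree as nested tuples (point, left, right)."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def nearest(tree, query, depth=0, best=None):
    """Classic k-d tree nearest-neighbour descent with branch pruning."""
    if tree is None:
        return best
    point, left, right = tree
    if best is None or dist2(point, query) < dist2(best, query):
        best = point
    axis = depth % 2
    near, far = (left, right) if query[axis] < point[axis] else (right, left)
    best = nearest(near, query, depth + 1, best)
    # Only search the far side if the splitting plane is closer than the current best.
    if (query[axis] - point[axis]) ** 2 < dist2(best, query):
        best = nearest(far, query, depth + 1, best)
    return best

def parallel_nearest(tree, queries, workers=4):
    """Each query is independent, so a batch maps cleanly onto parallel workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda q: nearest(tree, q), queries))
```

The tree is built once and shared read-only by all workers, which is the property that makes this search pattern attractive for massively parallel hardware.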

  17. Implementing Shared Memory Parallelism in MCBEND

    Directory of Open Access Journals (Sweden)

    Bird Adam

    2017-01-01

    Full Text Available MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.
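The shared-memory pattern this record describes (many workers of one calculation accumulating tallies in a single address space, rather than each process holding its own copy of the model) can be sketched in Python with threads. This is a generic illustration, not MCBEND's OpenMP code; the Monte Carlo target, estimating pi, is invented for the example:

```python
import random
import threading

def mc_pi(total_samples: int, n_threads: int = 4, seed: int = 0) -> float:
    """Toy shared-memory Monte Carlo: all threads tally into one shared counter."""
    hits = [0]                      # shared state visible to every thread
    lock = threading.Lock()

    def worker(samples: int, rng: random.Random) -> None:
        local = 0                   # thread-private tally, merged once at the end
        for _ in range(samples):
            x, y = rng.random(), rng.random()
            if x * x + y * y <= 1.0:
                local += 1
        with lock:                  # critical section, like an OpenMP reduction
            hits[0] += local

    chunk = total_samples // n_threads
    threads = [threading.Thread(target=worker, args=(chunk, random.Random(seed + i)))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return 4.0 * hits[0] / (chunk * n_threads)
```

Unlike OpenMP threads, CPython threads serialize CPU-bound work under the interpreter lock, so this shows the memory-sharing structure rather than the speedup; the motivation in MCBEND is exactly that threads share one copy of the model geometry instead of replicating it per process.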

  18. On possibility of application of the parallel-mixed type coolant flow scheme to NPP steam generators linked with superheaters

    International Nuclear Information System (INIS)

    Malkis, V.A.; Lokshin, V.A.

    1983-01-01

    The optimum distribution of the coolant straight-through flow between the superheater, evaporator and economizer is determined, and the parallel-mixed type flow scheme is compared with other schemes. The calculations are performed for the 250 MW(e) steam generator of the WWER-1000 reactor unit, whose inlet and outlet primary coolant temperatures are 324 and 290 deg C, respectively, while the feed water and saturation temperatures are 220 and 278.5 deg C, respectively. The rated superheating temperature is 300 deg C. The different schemes have been compared by the average temperature head of the steam generator, both for equal and for essentially different heat transfer coefficients in the individual steam-generator sections. The calculations have shown that the use of the parallel-mixed type flow scheme makes it possible to increase the temperature head of the steam generator considerably. With a constant heat transfer coefficient in all steam generator sections, the highest temperature head is reached at relative flow rates in the superheater, economizer and evaporator equal to 6, 8 and 86%, respectively. The superheated-steam generator temperature head in this case exceeds the temperature head of the WWER-1000 reactor unit wet-steam generator by 12%. If the heat transfer coefficient in the superheater is reduced by a factor of three, the choice of the optimum primary coolant distribution makes it possible to maintain the steam generator temperature head at the level of the WWER-1000 reactor unit wet-steam generator. The use of the parallel-mixed type flow scheme makes it possible to design a steam generator of slightly superheated steam for the parameters of the WWER-1000 unit

  19. Parallel Task Processing on a Multicore Platform in a PC-based Control System for Parallel Kinematics

    Directory of Open Access Journals (Sweden)

    Harald Michalik

    2009-02-01

    Full Text Available Multicore platforms have one physical processor chip with multiple cores interconnected via a chip-level bus. Because they deliver greater computing power through concurrency and offer greater system density, multicore platforms are well suited to address the performance bottleneck encountered in PC-based control systems for parallel kinematic robots with heavy CPU load. Heavy-load control tasks are generated by new control approaches that include features like singularity prediction, structure control algorithms, vision data integration and similar tasks. In this paper we introduce the parallel task scheduling extension of a communication architecture specially tailored for the development of PC-based control of parallel kinematics. The scheduling is specially designed for processing on a multicore platform. It breaks down the serial task processing of the robot control cycle and extends it with parallel task processing paths in order to enhance the overall control performance.
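The cycle decomposition described above, in which a serial control cycle is broken into independent task paths that run concurrently and are joined before the command is issued, can be sketched generically. The three task functions below are invented placeholders, not the paper's algorithms:

```python
from concurrent.futures import ThreadPoolExecutor

def singularity_check(state):
    return min(state) > 0.1           # placeholder for singularity prediction

def vision_update(state):
    return [x * 0.5 for x in state]   # placeholder for vision data integration

def control_law(state):
    return sum(state) / len(state)    # placeholder for the structure-control law

def control_cycle(state, pool):
    """One control cycle: fan out independent task paths, join, emit a command."""
    futures = [pool.submit(task, state)
               for task in (singularity_check, vision_update, control_law)]
    safe, vision, command = (f.result() for f in futures)   # join point
    return command if safe else 0.0   # vision result would feed the next cycle

with ThreadPoolExecutor(max_workers=3) as pool:
    out = control_cycle([0.2, 0.4, 0.6], pool)
```

The point of the design is that the cycle time is bounded by the slowest task path rather than the sum of all paths, which is what the paper's multicore scheduling exploits.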

  20. Parallel fabrication of macroporous scaffolds.

    Science.gov (United States)

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel at higher throughput, and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  1. Practical enhancement factor model based on GM for multiple parallel reactions: Piperazine (PZ) CO2 capture

    DEFF Research Database (Denmark)

    Gaspar, Jozsef; Fosbøl, Philip Loldrup

    2017-01-01

    Reactive absorption is a key process for gas separation and purification and it is the main technology for CO2 capture. Thus, reliable and simple mathematical models for mass transfer rate calculation are essential. Models which apply to parallel interacting and non-interacting reactions, for all..., desorption and pinch conditions. In this work, we apply the GM model to multiple parallel reactions. We deduce the model for piperazine (PZ) CO2 capture and we validate it against wetted-wall column measurements using 2, 5 and 8 molal PZ for temperatures between 40 °C and 100 °C and CO2 loadings between 0.23 and 0.41 mol CO2/2 mol PZ. We show that overall second order kinetics describes well the reaction between CO2 and PZ accounting for the carbamate and bicarbamate reactions. Here we prove the GM model for piperazine and MEA but we expect that this practical approach is applicable for various amines...

  2. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-05-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. 6 figs

  3. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-01-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. (orig.)

  4. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    Science.gov (United States)

    Nash, Thomas

    1989-12-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC system, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described.

  5. A phase 2a randomized, parallel group, dose-ranging study of molindone in children with attention-deficit/hyperactivity disorder and persistent, serious conduct problems.

    Science.gov (United States)

    Stocks, Jennifer Dugan; Taneja, Baldeo K; Baroldi, Paolo; Findling, Robert L

    2012-04-01

    To evaluate safety and tolerability of four doses of immediate-release molindone hydrochloride in children with attention-deficit/hyperactivity disorder (ADHD) and serious conduct problems. This open-label, parallel-group, dose-ranging, multicenter trial randomized children, aged 6-12 years, with ADHD and persistent, serious conduct problems to receive oral molindone thrice daily for 9-12 weeks in four treatment groups: Group 1-10 mg (5 mg if weight conduct problems. Secondary outcome measures included change in Nisonger Child Behavior Rating Form-Typical Intelligence Quotient (NCBRF-TIQ) Conduct Problem subscale scores, change in Clinical Global Impressions-Severity (CGI-S) and -Improvement (CGI-I) subscale scores from baseline to end point, and Swanson, Nolan, and Pelham rating scale-revised (SNAP-IV) ADHD-related subscale scores. The study randomized 78 children; 55 completed the study. Treatment with molindone was generally well tolerated, with no clinically meaningful changes in laboratory or physical examination findings. The most common treatment-related adverse events (AEs) included somnolence (n=9), weight increase (n=8), akathisia (n=4), sedation (n=4), and abdominal pain (n=4). Mean weight increased by 0.54 kg, and mean body mass index by 0.24 kg/m2. The incidence of AEs and treatment-related AEs increased with increasing dose. NCBRF-TIQ subscale scores improved in all four treatment groups, with 34%, 34%, 32%, and 55% decreases from baseline in groups 1, 2, 3, and 4, respectively. CGI-S and SNAP-IV scores improved over time in all treatment groups, and CGI-I scores improved to the greatest degree in group 4. Molindone at doses of 5-20 mg/day (children weighing <30 kg) and 20-40 mg (≥ 30 kg) was well tolerated, and preliminary efficacy results suggest that molindone produces dose-related behavioral improvements over 9-12 weeks.
Additional double-blind, placebo-controlled trials are needed to further investigate molindone in this pediatric population.

  6. Researching the Parallel Process in Supervision and Psychotherapy

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard

    Reflects upon how to do process research in supervision and in the parallel process. A single case study is presented illustrating how a study on parallel process can be carried out.

  7. Can Attention be Divided Between Perceptual Groups?

    Science.gov (United States)

    McCann, Robert S.; Foyle, David C.; Johnston, James C.; Hart, Sandra G. (Technical Monitor)

    1994-01-01

    Previous work using Head-Up Displays (HUDs) suggests that the visual system parses the HUD and the outside world into distinct perceptual groups, with attention deployed sequentially to first one group and then the other. New experiments show that both groups can be processed in parallel in a divided attention search task, even though subjects have just processed a stimulus in one perceptual group or the other. Implications for models of visual attention will be discussed.

  8. Xyce Parallel Electronic Simulator Users' Guide Version 6.6.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Aadithya, Karthik Venkatraman [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mei, Ting [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Russo, Thomas V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schiek, Richard [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sholander, Peter E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thornquist, Heidi K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verley, Jason [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-11-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to develop new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message passing parallel implementation -- which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The information herein is subject to change without notice. Copyright © 2002-2016 Sandia Corporation. All rights reserved. Acknowledgements The BSIM Group at the University of California, Berkeley developed the BSIM3, BSIM4, BSIM6, BSIM-CMG and BSIM-SOI models. The BSIM3 is Copyright © 1999, Regents of the University of California. The BSIM4 is Copyright © 2006, Regents of the University of California. The BSIM6 is Copyright © 2015, Regents of the University of California. The BSIM-CMG is Copyright ©

  9. Physics of the Lorentz Group

    Science.gov (United States)

    Başkal, Sibel

    2015-11-01

    This book explains the Lorentz mathematical group in a language familiar to physicists. While the three-dimensional rotation group is one of the standard mathematical tools in physics, the Lorentz group of the four-dimensional Minkowski space is still very strange to most present-day physicists. It plays an essential role in understanding particles moving at close to light speed and is becoming the essential language for quantum optics, classical optics, and information science. The book is based on papers and books published by the authors on the representations of the Lorentz group based on harmonic oscillators and their applications to high-energy physics and to Wigner functions applicable to quantum optics. It also covers the two-by-two representations of the Lorentz group applicable to ray optics, including cavity, multilayer and lens optics, as well as representations of the Lorentz group applicable to Stokes parameters and the Poincaré sphere on polarization optics.

  10. Development of parallel/serial program analyzing tool

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Nagao, Saichi; Takigawa, Yoshio; Kumakura, Toshimasa

    1999-03-01

    Japan Atomic Energy Research Institute has been developing 'KMtool', a parallel/serial program analyzing tool, in order to promote the parallelization of science and engineering computation programs. KMtool analyzes the performance of programs written in FORTRAN77 with MPI, and it reduces the effort required for parallelization. This paper describes the development purpose, design, utilization and evaluation of KMtool. (author)

  11. Simulation Exploration through Immersive Parallel Planes: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny; Smith, Steve

    2016-03-01

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  12. Parallel family trees for transfer matrices in the Potts model

    Science.gov (United States)

    Navarro, Cristobal A.; Canfora, Fabrizio; Hitschfeld, Nancy; Navarro, Gonzalo

    2015-02-01

    The computational cost of transfer matrix methods for the Potts model is related to the question: in how many ways can two layers of a lattice be connected? Answering the question leads to the generation of a combinatorial set of lattice configurations. This set defines the configuration space of the problem, and the smaller it is, the faster the transfer matrix can be computed. The configuration space of generic (q, v) transfer matrix methods for strips is in the order of the Catalan numbers, which grows asymptotically as O(4^m) where m is the width of the strip. Other transfer matrix methods with a smaller configuration space indeed exist but they make assumptions on the temperature, number of spin states, or restrict the structure of the lattice. In this paper we propose a parallel algorithm that uses a sub-Catalan configuration space of O(3^m) to build the generic (q, v) transfer matrix in a compressed form. The improvement is achieved by grouping the original set of Catalan configurations into a forest of family trees, in such a way that the solution to the problem is now computed by solving the root node of each family. As a result, the algorithm becomes exponentially faster than the Catalan approach while still highly parallel. The resulting matrix is stored in a compressed form using O(3^m × 4^m) of space, making numerical evaluation and decompression faster than evaluating the matrix in its O(4^m × 4^m) uncompressed form. Experimental results for different sizes of strip lattices show that the parallel family trees (PFT) strategy indeed runs exponentially faster than the Catalan Parallel Method (CPM), especially when dealing with dense transfer matrices. In terms of parallel performance, we report strong-scaling speedups of up to 5.7× when running on an 8-core shared memory machine and 28× for a 32-core cluster. The best balance of speedup and efficiency for the multi-core machine was achieved when using p = 4 processors, while for the cluster
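The growth rates quoted here are easy to check numerically: the Catalan numbers satisfy C_{m+1}/C_m = 2(2m+1)/(m+2), which tends to 4, and that is where the O(4^m) size of the generic configuration space comes from. A quick sketch (illustrative only, not the paper's transfer-matrix code):

```python
from math import comb

def catalan(m: int) -> int:
    """Closed form C_m = (2m choose m) / (m + 1); exact integer division."""
    return comb(2 * m, m) // (m + 1)

# Successive ratios approach 4, matching the O(4^m) asymptotics; a 3^m-sized
# family-tree space is therefore an exponentially smaller configuration space.
ratios = [catalan(m + 1) / catalan(m) for m in (10, 50, 200)]
```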

  13. Cognitive synergy in groups and group-to-individual transfer of decision-making competencies

    Science.gov (United States)

    Curşeu, Petru L.; Meslec, Nicoleta; Pluut, Helen; Lucas, Gerardus J. M.

    2015-01-01

    In a field study (148 participants organized in 38 groups) we tested the effect of group synergy and one's position in relation to the collaborative zone of proximal development (CZPD) on the change of individual decision-making competencies. We used two parallel sets of decision tasks reported in previous research to test rationality and we evaluated individual decision-making competencies in the pre-group and post-group conditions as well as group rationality (as an emergent group level phenomenon). We used multilevel modeling to analyze the data and the results showed that members of synergetic groups had a higher cognitive gain as compared to members of non-synergetic groups, while highly rational members (members above the CZPD) had lower cognitive gains compared to less rational group members (members situated below the CZPD). These insights extend the literature on group-to-individual transfer of learning and have important practical implications as they show that group dynamics influence the development of individual decision-making competencies. PMID:26441750

  14. Cognitive synergy in groups and group-to-individual transfer of decision-making competencies.

    Science.gov (United States)

    Curşeu, Petru L; Meslec, Nicoleta; Pluut, Helen; Lucas, Gerardus J M

    2015-01-01

    In a field study (148 participants organized in 38 groups) we tested the effect of group synergy and one's position in relation to the collaborative zone of proximal development (CZPD) on the change of individual decision-making competencies. We used two parallel sets of decision tasks reported in previous research to test rationality and we evaluated individual decision-making competencies in the pre-group and post-group conditions as well as group rationality (as an emergent group level phenomenon). We used multilevel modeling to analyze the data and the results showed that members of synergetic groups had a higher cognitive gain as compared to members of non-synergetic groups, while highly rational members (members above the CZPD) had lower cognitive gains compared to less rational group members (members situated below the CZPD). These insights extend the literature on group-to-individual transfer of learning and have important practical implications as they show that group dynamics influence the development of individual decision-making competencies.

  15. Symmetries in eleven dimensional supergravity compactified on a parallelized seven sphere

    CERN Document Server

    Englert, F; Spindel, P

    1983-01-01

    We analyse, in eleven-dimensional supergravity compactified on S7, the spontaneous symmetry breaking induced by a spontaneous parallelization of the sphere. The eight supersymmetries are broken at a common scale and the SO(8) gauge group is reduced to Spin(7). Such a large residual symmetry has a simple geometrical significance revealed through the use of octonions; this is explained in elementary terms.

  16. Parallelization of Subchannel Analysis Code MATRA

    International Nuclear Information System (INIS)

    Kim, Seongjin; Hwang, Daehyun; Kwon, Hyouk

    2014-01-01

    A stand-alone calculation with the MATRA code takes an acceptable computing time for thermal margin calculations, while a considerably longer time is needed to solve whole-core pin-by-pin problems. In addition, improving the computation speed of the MATRA code is strongly required to satisfy the overall performance of multi-physics coupling calculations. Therefore, a parallel approach to improve and optimize the computing capability of the MATRA code is proposed and verified in this study. The parallel algorithm is embodied in the MATRA code using the MPI communication method, and modification of the previous code structure was minimized. The improvement is confirmed by comparing the results between the single- and multiple-processor algorithms. The speedup and efficiency are also evaluated for an increasing number of processors. The parallel algorithm was implemented in the subchannel code MATRA using MPI. The performance of the parallel algorithm was verified by comparing the results with those from MATRA with a single processor. It was also found that the performance of the MATRA code was greatly improved by implementing the parallel algorithm for the 1/8-core and whole-core problems
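The speedup and efficiency metrics used in such an evaluation have standard definitions: speedup S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p. A minimal sketch (the wall-clock timings below are invented for illustration, not MATRA measurements):

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """S(p) = T(1) / T(p)."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, n_procs: int) -> float:
    """E(p) = S(p) / p; 1.0 means perfect scaling."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical wall-clock times in seconds for 1, 4 and 16 processors.
timings = {1: 1200.0, 4: 360.0, 16: 150.0}
results = {p: (speedup(timings[1], t), efficiency(timings[1], t, p))
           for p, t in timings.items()}
```

Efficiency typically falls as processors are added (here from 1.0 to 0.83 to 0.5 in the invented numbers), which is why both metrics are reported together.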

  17. INFLUENCE OF PLANT ESSENTIAL OILS ON SELECTED PARAMETERS OF THE PERFORMANCE OF LAYING HENS

    Directory of Open Access Journals (Sweden)

    Henrieta ARPÁŠOVÁ

    2010-10-01

    Full Text Available The experiment was designed to investigate the effects of feed supplementation with essential oils on egg weight and body mass of laying hens. Hens of the laying breed Isa Brown were randomly divided on the day of hatching into 3 groups (n=26) and fed for 45 weeks on diets which differed in the kind of essential oil supplemented. Hens were fed from day 1 with the standard feed mixture, offered ad libitum. In the control group, hens received the feed mixture without additions; in the first experimental group the feed mixture was supplemented with 0.25 ml/kg of thyme essential oil, and in the second the hens received hyssop essential oil at the same dose of 0.25 ml/kg. The housing system satisfied the enriched cage requirements specified by Directive 1999/74 EC. The useful area provided for one laying hen was 943.2 cm2. The cage equipment consisted of roosts, a place for dust bathing (synthetic grass), a nest and a claw-shortening device. The results showed that the average body weight over the rearing period was, in the order of the groups: 736.15±523.49, 747.20±541.6 and 721.95±522.57 (g±SD). Differences between groups were not significant (P>0.05). The average body weight during the laying period was 1763.85±171.46, 1786.08±192.09 and 1729.73±129.12 g for the control, thyme oil and hyssop oil supplementation, respectively. During the laying period there were significant differences in body weight between the control group and the experimental group with hyssop essential oil supplementation (P<0.05) and between the two experimental groups (P<0.01). No significant differences (P>0.05) were found between the control group and the experimental groups in egg weight (58.36±4.91, 58.82±4.95 and 58.26±5.33 g, respectively).

  18. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.
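
    As a taste of the pattern such a tutorial covers, here is a minimal pool-based map with the standard library's `concurrent.futures`; `work` is an arbitrary placeholder task:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def work(x):
        # Placeholder for an independent unit of work (e.g. an I/O-bound
        # call; for CPU-bound work a process pool avoids the GIL).
        return x * x

    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(work, range(10)))  # order preserved by map
    ```

    Swapping `ThreadPoolExecutor` for `ProcessPoolExecutor` changes the execution model without changing the calling code, which is the kind of simple parallelization the book targets.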

  19. Immediate interruption of sedation compared with usual sedation care in critically ill postoperative patients (SOS-Ventilation): a randomised, parallel-group clinical trial.

    Science.gov (United States)

    Chanques, Gerald; Conseil, Matthieu; Roger, Claire; Constantin, Jean-Michel; Prades, Albert; Carr, Julie; Muller, Laurent; Jung, Boris; Belafia, Fouad; Cissé, Moussa; Delay, Jean-Marc; de Jong, Audrey; Lefrant, Jean-Yves; Futier, Emmanuel; Mercier, Grégoire; Molinari, Nicolas; Jaber, Samir

    2017-10-01

    Avoidance of excessive sedation and subsequent prolonged mechanical ventilation in intensive care units (ICUs) is recommended, but no data are available for critically ill postoperative patients. We hypothesised that in such patients stopping sedation immediately after admission to the ICU could reduce unnecessary sedation and improve patient outcomes. We did a randomised, parallel-group, clinical trial at three ICUs in France. Stratified randomisation with minimisation (1:1 via a restricted web platform) was used to assign eligible patients (aged ≥18 years, admitted to an ICU after abdominal surgery, and expected to require at least 12 h of mechanical ventilation because of a critical illness defined by a Sequential Organ Failure Assessment score >1 for any organ, but without severe acute respiratory distress syndrome or brain injury) to usual sedation care provided according to recommended practices (control group) or to immediate interruption of sedation (intervention group). The primary outcome was the time to successful extubation (defined as the time from randomisation to the time of extubation [or tracheotomy mask] for at least 48 h). All patients who underwent randomisation (except for those who were excluded after randomisation) were included in the intention-to-treat analysis. This study is registered with ClinicalTrials.gov, number NCT01486121. Between Dec 2, 2011, and Feb 27, 2014, 137 patients were randomly assigned to the control (n=68) or intervention groups (n=69). In the intention-to-treat analysis, time to successful extubation was significantly lower in the intervention group than in the control group (median 8 h [IQR 4-36] vs 50 h [29-93], group difference -33·6 h [95% CI -44·9 to -22·4]; p<0·0001). The adjusted hazard ratio was 5·2 (95% CI 3·1-8·8, p<0·0001). Immediate interruption of sedation in critically ill postoperative patients with organ dysfunction who were admitted to the ICU after abdominal surgery improved outcomes compared with usual sedation care.

  20. Interaction between rancidity and organoleptic parameters of anchovy marinade (Engraulis encrasicolus L. 1758) include essential oils.

    Science.gov (United States)

    Turan, Hülya; Kocatepe, Demet; Keskin, İrfan; Altan, Can Okan; Köstekli, Bayram; Candan, Canan; Ceylan, Asuman

    2017-09-01

    This study was carried out to evaluate the lipid oxidation and sensory attributes of anchovy marinated in a solution of 10% NaCl + 4% alcohol vinegar + 0.2% citric acid with 0.1% of different essential oils. Group A (control): only sunflower seed oil; Group B: sunflower seed oil + 0.1% rosemary oil; Group C: sunflower seed oil + 0.1% coriander oil; Group D: sunflower seed oil + 0.1% laurel oil; and Group E: sunflower seed oil + 0.1% garlic oil. During storage, lipid oxidation, as indicated by 2-thiobarbituric acid reactive substances (TBARS) values, was significantly higher in the control group than in the groups containing essential oils. The results showed that the essential oils have a retarding effect on lipid oxidation. This effect was highest for laurel oil during the initial 3 months; it was similar for laurel and rosemary oil in the fourth month, and for all the essential-oil groups in the sixth month. L* (brightness) values were similar for all groups in the first 4 months, but in the last 2 months the group with laurel oil was found to be better. Yellowness (b*) was similar in all groups during the initial 3 months, whereas lower values were detected afterwards in the groups with laurel and rosemary oils. The study concluded that marination of anchovy with 0.1% laurel oil can retard lipid oxidation and improve the sensory attributes of the product during refrigerated storage.

  1. Shared memory parallelism for 3D cartesian discrete ordinates solver

    International Nuclear Information System (INIS)

    Moustafa, S.; Dutka-Malen, I.; Plagne, L.; Poncot, A.; Ramet, P.

    2013-01-01

    This paper describes the design and the performance of DOMINO, a 3D Cartesian SN solver that implements two nested levels of parallelism (multi-core + SIMD - Single Instruction on Multiple Data) on shared memory computation nodes. DOMINO is written in C++, a multi-paradigm programming language that enables the use of powerful and generic parallel programming tools such as Intel TBB and Eigen. These two libraries allow us to combine multi-thread parallelism with vector operations in an efficient and yet portable way. As a result, DOMINO can exploit the full power of modern multi-core processors and is able to tackle very large simulations that usually require large HPC clusters, using a single computing node. For example, DOMINO solves a 3D full core PWR eigenvalue problem involving 26 energy groups, 288 angular directions (S16), 46×10⁶ spatial cells and 1×10¹² DoFs within 11 hours on a single 32-core SMP node. This represents a sustained performance of 235 GFlops and 40.74% of the SMP node peak performance for the DOMINO sweep implementation. The very high Flops/Watt ratio of DOMINO makes it a very interesting building block for a future many-nodes nuclear simulation tool. (authors)
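
    The two nested levels can be mimicked in miniature: below, NumPy's vectorized arithmetic stands in for the SIMD level and a thread pool for the multi-core/TBB level. This is an illustration of the nesting only, not the DOMINO sweep:

    ```python
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def sweep_block(block):
        # Inner level: one vectorized operation per spatial block
        # (NumPy plays the role of the SIMD lanes / Eigen kernels).
        return float(np.sum(0.5 * block))

    def sweep(cells, n_threads=4):
        # Outer level: one worker per block (the multi-core / TBB level).
        blocks = np.array_split(cells, n_threads)
        with ThreadPoolExecutor(n_threads) as pool:
            return sum(pool.map(sweep_block, blocks))

    total = sweep(np.ones(1024))
    assert abs(total - 512.0) < 1e-9
    ```

    The point of the nesting is that each thread's work is itself vector-friendly, so both levels of hardware parallelism are used at once.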

  2. Towards measurement of the Casimir force between parallel plates separated at sub-mircon distance

    NARCIS (Netherlands)

    Syed Nawazuddin, M.B.; Lammerink, Theodorus S.J.; Wiegerink, Remco J.; Berenschot, Johan W.; de Boer, Meint J.; Elwenspoek, Michael Curt

    2011-01-01

    Ever since its prediction, experimental investigation of the Casimir force has been of great scientific interest. Many research groups have successfully attempted quantifying the force with different device geometries; however measurement of the Casimir force between parallel plates with sub-micron

  3. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter could be considered as a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)
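
    The statistics behind such a filter can be illustrated numerically: averaging groups of N jittered readings reduces the spread by roughly sqrt(N), and stages in series compound the reduction. The sketch below uses synthetic Gaussian jitter, not the circuit itself:

    ```python
    import random
    import statistics

    def parallel_stage(n, values):
        # One averaging stage of width n: each output is the mean of n inputs.
        return [sum(values[i:i + n]) / n for i in range(0, len(values), n)]

    random.seed(0)
    raw = [random.gauss(100.0, 1.0) for _ in range(4096)]  # jittered readings

    once = parallel_stage(4, raw)    # single stage: spread down ~2x
    twice = parallel_stage(4, once)  # two stages in series: ~4x
    assert statistics.stdev(twice) < statistics.stdev(once) < statistics.stdev(raw)
    ```

    Cascading stages is what the abstract calls the parallel-serial filter: each stage multiplies the jitter reduction at the cost of output rate.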

  4. Massively parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Krasheninnikov, S.I.; Craddock, G.G.; Djordjevic, V.

    1996-01-01

    The Fokker-Planck code ALLA, recently developed for workstations, simulates the temporal evolution of 1V, 2V and 1D2V collisional edge plasmas. In this work we present the results of code parallelization on the CRI T3D massively parallel platform (ALLAp version). Simultaneously we benchmark the 1D2V parallel version against an analytic self-similar solution of the collisional kinetic equation. This test is not trivial as it demands a very strong spatial temperature and density variation within the simulation domain. (orig.)

  5. Parallel Algorithms for Groebner-Basis Reduction

    Science.gov (United States)

    1987-09-25

    Parallel Algorithms for Groebner-Basis Reduction. Technical Report, Productivity Engineering in the UNIX Environment.

  6. A possibility of parallel and anti-parallel diffraction measurements on ...

    Indian Academy of Sciences (India)

    resolution property of the other one, the anti-parallel position, is very poor. ... in a wide angular region using a BPC monochromator at the MF condition by showing ... and N Nimura, Proceedings of the 7th World Conference on Neutron Radiography, ...

  7. The 2003 essential. AREVA; L'essentiel 2003. AREVA

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    This document presents the essential activities of the Areva Group, a world nuclear industry leader. The group proposes technological solutions to produce nuclear energy and to transport electric power. It develops connection systems for the telecommunications, computer and automotive industries. Key data on program management, sustainable development activities and the different divisions are provided. (A.L.B.)

  8. Research in Parallel Algorithms and Software for Computational Aerosciences

    Science.gov (United States)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
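
    The load-balancing idea (a host keeps the per-node cell counts even while distributing grid partitions) can be sketched with a greedy assignment; this illustrates the principle only, not SPLITFLOW's actual algorithm:

    ```python
    import heapq

    def balance(partitions, n_nodes):
        # Greedy host/node assignment: give the next-largest partition
        # (by cell count) to the currently least-loaded node.
        heap = [(0, node, []) for node in range(n_nodes)]
        heapq.heapify(heap)
        for cells in sorted(partitions, reverse=True):
            load, node, owned = heapq.heappop(heap)
            owned.append(cells)
            heapq.heappush(heap, (load + cells, node, owned))
        return sorted(heap)

    # Hypothetical cell counts for six grid partitions, spread over 3 nodes.
    nodes = balance([900, 400, 350, 300, 200, 150], n_nodes=3)
    loads = [load for load, _, _ in nodes]
    print(loads)
    ```

    In a dynamic setting the same assignment step reruns whenever grid adaptation changes the partition sizes.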

  9. Intervention for children with word-finding difficulties: a parallel group randomised control trial.

    Science.gov (United States)

    Best, Wendy; Hughes, Lucy Mari; Masterson, Jackie; Thomas, Michael; Fedor, Anna; Roncoli, Silvia; Fern-Pollak, Liory; Shepherd, Donna-Lynn; Howard, David; Shobbrook, Kate; Kapikian, Anna

    2017-07-31

    The study investigated the outcome of a word-web intervention for children diagnosed with word-finding difficulties (WFDs). Twenty children aged 6-8 years with WFDs, confirmed by a discrepancy between comprehension and production on the Test of Word Finding-2, were randomly assigned to intervention (n = 11) and waiting control (n = 9) groups. The intervention group had six sessions of intervention which used word-webs and targeted children's meta-cognitive awareness and word retrieval. On the treated experimental set (n = 25 items) the intervention group gained on average four times as many items as the waiting control group (d = 2.30). There were also gains on personally chosen items for the intervention group. There was little change on untreated items for either group. The study is the first randomised control trial to demonstrate an effect of word-finding therapy with children with language difficulties in mainstream school. The improvement in word-finding for treated items was obtained following a clinically realistic intervention in terms of approach, intensity and duration.

  10. A language for data-parallel and task parallel programming dedicated to multi-SIMD computers. Contributions to hydrodynamic simulation with lattice gases

    International Nuclear Information System (INIS)

    Pic, Marc Michel

    1995-01-01

    Parallel programming covers task parallelism and data parallelism, and many problems need both. Multi-SIMD computers allow a hierarchical approach to these forms of parallelism. The T++ language, based on C++, is dedicated to exploiting Multi-SIMD computers through a programming paradigm that extends array programming to task management. Our language introduces arrays of independent tasks executed separately (MIMD) on subsets of processors with identical behaviour (SIMD), in order to express the hierarchical inclusion of data parallelism in task parallelism. To manipulate tasks and data symmetrically, we propose meta-operations that behave identically on task arrays and on data arrays. We explain how to implement this language on our parallel computer SYMPHONIE in order to profit from the locally shared memory, the hardware virtualization, and the multiple communication networks. We also analyse a typical application of such an architecture. Finite-element schemes for fluid mechanics need powerful parallel computers and require substantial floating-point capability. Lattice gases are an alternative to such simulations. Boolean lattice gases are simple, stable and modular and need no floating-point computation, but they include numerical noise. Boltzmann lattice gases offer high computational precision but need floating-point arithmetic and are only locally stable. We propose a new scheme, called multi-bit, which keeps the advantages of each Boolean model to which it is applied, with high numerical precision and reduced noise. Experiments on viscosity, physical behaviour, noise reduction and spurious invariants are shown, and implementation techniques for parallel Multi-SIMD computers are detailed. (author) [fr

  11. A task parallel implementation of fast multipole methods

    KAUST Repository

    Taura, Kenjiro; Nakashima, Jun; Yokota, Rio; Maruyama, Naoya

    2012-01-01

    This paper describes a task parallel implementation of ExaFMM, an open source implementation of fast multipole methods (FMM), using a lightweight task parallel library MassiveThreads. Although there have been many attempts on parallelizing FMM

  12. Nitrobenzene anti-parallel dimer formation in non-polar solvents

    Directory of Open Access Journals (Sweden)

    Toshiyuki Shikata

    2014-06-01

    We investigated the dielectric and depolarized Rayleigh scattering behaviors of nitrobenzene (NO2-Bz), a mono-substituted benzene with a planar molecular frame bearing a large electric dipole moment of 4.0 D, in non-polar solvents such as tetrachloromethane and benzene, at up to 3 THz for the dielectric measurements and 8 THz for the scattering experiments at 20 °C. The dielectric relaxation strength of the system was substantially smaller than proportionality to the concentration in a concentrated regime and showed a Kirkwood correlation factor markedly lower than unity, gK ∼ 0.65. This observation revealed that NO2-Bz has a tendency to form dimers, (NO2-Bz)2, in anti-parallel configurations of the dipole moment with increasing concentration in the two solvents. Both the dielectric and scattering data exhibited fast and slow Debye-type relaxation modes with characteristic time constants of ∼7 and ∼50 ps in a concentrated regime (∼15 and ∼30 ps in a dilute regime), respectively. The fast mode was simply attributed to the rotational motion of the (monomeric) NO2-Bz. However, the magnitude of the slow mode was proportional to the square of the concentration in the dilute regime; thus, the mode was assigned to the anti-parallel dimer, (NO2-Bz)2, dissociation process, and the slow relaxation time was attributed to the anti-parallel dimer lifetime. The concentration dependencies of both the dielectric and scattering data show that the NO2-Bz molecular processes are controlled through a chemical equilibrium between monomers and anti-parallel dimers, 2NO2-Bz ↔ (NO2-Bz)2, due to a strong dipole-dipole interaction between nitro groups.

  13. Adaptive dynamics of competition for nutritionally complementary resources: character convergence, displacement, and parallelism.

    Science.gov (United States)

    Vasseur, David A; Fox, Jeremy W

    2011-10-01

    Consumers acquire essential nutrients by ingesting the tissues of resource species. When these tissues contain essential nutrients in a suboptimal ratio, consumers may benefit from ingesting a mixture of nutritionally complementary resource species. We investigate the joint ecological and evolutionary consequences of competition for complementary resources, using an adaptive dynamics model of two consumers and two resources that differ in their relative content of two essential nutrients. In the absence of competition, a nutritionally balanced diet rarely maximizes fitness because of the dynamic feedbacks between uptake rate and resource density, whereas in sympatry, nutritionally balanced diets maximize fitness because competing consumers with different nutritional requirements tend to equalize the relative abundances of the two resources. Adaptation from allopatric to sympatric fitness optima can generate character convergence, divergence, and parallel shifts, depending not on the degree of diet overlap but on the match between resource nutrient content and consumer nutrient requirements. Contrary to previous verbal arguments that suggest that character convergence leads to neutral stability, coadaptation of competing consumers always leads to stable coexistence. Furthermore, we show that incorporating costs of consuming or excreting excess nonlimiting nutrients selects for nutritionally balanced diets and so promotes character convergence. This article demonstrates that resource-use overlap has little bearing on coexistence when resources are nutritionally complementary, and it highlights the importance of using mathematical models to infer the stability of ecoevolutionary dynamics.

  14. The effect of an essential oil combination derived from selected ...

    African Journals Online (AJOL)

    One thousand two hundred and fifty sexed day-old broiler chicks obtained from a commercial hatchery were divided randomly into five treatment groups (negative control, antibiotic and essential oil combination (EOC) at three levels) of 250 birds each. Each treatment group was further sub-divided into five replicates of 50 ...

  15. Optimisation of a parallel ocean general circulation model

    OpenAIRE

    M. I. Beare; D. P. Stevens

    1997-01-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by...

  16. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (parallelization). Progress report fiscal 1996

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Hideo; Kawai, Wataru; Nemoto, Toshiyuki [Fujitsu Ltd., Tokyo (Japan); and others

    1997-12-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system at the Center for Promotion of Computational Science and Engineering in the Japan Atomic Energy Research Institute. These results are reported in 3 parts, i.e., the vectorization part, the parallelization part and the porting part. In this report, we describe the parallelization. In the parallelization part, the parallelization of the 2-dimensional relativistic electromagnetic particle code EM2D, the cylindrical direct numerical simulation code CYLDNS and DGR, a molecular dynamics code for simulating radiation damage in diamond crystals, is described. In the vectorization part, the vectorization of the two- and three-dimensional discrete ordinates simulation code DORT-TORT, the gas dynamics analysis code FLOWGR and the relativistic Boltzmann-Uehling-Uhlenbeck simulation code RBUU is described. In the porting part, the porting of the reactor safety analysis codes RELAP5/MOD3.2 and RELAP5/MOD3.2.1.2, the nuclear data processing system NJOY and the 2-D multigroup discrete ordinates transport code TWOTRAN-II is described, together with a survey for the porting of the command-driven interactive data analysis plotting program IPLOT. (author)

  17. Stratified steady and unsteady two-phase flows between two parallel plates

    International Nuclear Information System (INIS)

    Sim, Woo Gun

    2006-01-01

    To understand the fluid dynamic forces acting on a structure subjected to two-phase flow, it is essential to obtain detailed information about the characteristics of the flow. Stratified steady and unsteady two-phase flows between two parallel plates have been studied to investigate the general characteristics of the flow related to flow-induced vibration. Based on the spectral collocation method, a numerical approach has been developed for the unsteady two-phase flow. The method is validated by comparing the numerical result to the analytical one given for a simple harmonic two-phase flow. The flow parameters for the steady two-phase flow, such as the void fraction and the two-phase frictional multiplier, are evaluated. The dynamic characteristics of the unsteady two-phase flow, including the effect of void fraction on the complex unsteady pressure, are illustrated.
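
    A spectral collocation method of this kind rests on a differentiation matrix built at the collocation points. Below is the standard Chebyshev construction (Trefethen's formula), shown as a generic illustration rather than the paper's own two-phase solver:

    ```python
    import numpy as np

    def cheb_diff_matrix(n):
        # Chebyshev collocation differentiation matrix on [-1, 1]:
        # D @ f(x) approximates f'(x) at the Chebyshev points with
        # spectral accuracy for smooth f.
        x = np.cos(np.pi * np.arange(n + 1) / n)
        c = np.ones(n + 1)
        c[0] = c[-1] = 2.0
        c *= (-1.0) ** np.arange(n + 1)
        X = np.tile(x, (n + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
        D -= np.diag(D.sum(axis=1))  # diagonal: negative row sums
        return D, x

    D, x = cheb_diff_matrix(16)
    # Differentiate a smooth test function; compare with the exact derivative.
    err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
    assert err < 1e-10
    ```

    Replacing spatial derivatives with this matrix turns the flow equations into an ordinary system in time, which is what makes the unsteady calculation tractable.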

  18. [Effects of acupuncture on circadian rhythm of blood pressure in patients with essential hypertension].

    Science.gov (United States)

    Lei, Yun; Jin, Jiu; Ban, Haipeng; Du, Yuzheng

    2017-11-12

    To observe the effects of acupuncture combined with medication on the circadian rhythm of blood pressure in patients with essential hypertension. Sixty-four patients with essential hypertension were randomly divided into an observation group and a control group, 32 cases in each group. All the patients maintained their original treatment (antihypertensive medication); the patients in the observation group were additionally treated with the acupuncture method of "Huoxue Sanfeng, Shugan Jianpi", once a day, five times per week, for 6 weeks (30 treatments in total). The circadian rhythm of blood pressure and related dynamic parameters were observed before and after treatment in the two groups. The differences in daytime average systolic blood pressure (dASBP), daytime average diastolic blood pressure (dADBP), nighttime average systolic blood pressure (nASBP) and circadian rhythm of systolic blood pressure before and after treatment were significant in the observation group (all P<0.05), whereas the differences in the circadian rhythm of blood pressure and related dynamic parameters before and after treatment were not significant in the control group (all P>0.05). The nASBP and circadian rhythm of systolic blood pressure in the observation group were significantly different from those in the control group (all P<0.05), and the circadian rhythm of blood pressure in the observation group was higher than that in the control group (P<0.05). Acupuncture combined with medication can improve the circadian rhythm of blood pressure and related dynamic parameters in patients with essential hypertension.

  19. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    International Nuclear Information System (INIS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-01-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines

  20. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    Science.gov (United States)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
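
    The contrast drawn in this abstract can be seen on a toy 1D Poisson system: Gauss-Seidel updates unknowns in sequence, while the Chebyshev smoother is built entirely from matrix-vector products over an assumed eigenvalue interval. This is a textbook sketch, not the authors' multigrid code, and the interval [0.1, 4.0] is an illustrative choice:

    ```python
    import numpy as np

    def gauss_seidel_sweep(A, b, x):
        # One forward Gauss-Seidel sweep: inherently sequential in i.
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x

    def chebyshev_smooth(A, b, x, sweeps, lmin, lmax):
        # Chebyshev polynomial smoother for eigenvalues of A in [lmin, lmax];
        # only matvecs, so it parallelizes without compromising convergence.
        theta, delta = 0.5 * (lmax + lmin), 0.5 * (lmax - lmin)
        sigma = theta / delta
        rho = 1.0 / sigma
        r = b - A @ x
        d = r / theta
        for _ in range(sweeps):
            x = x + d
            r = r - A @ d
            rho_new = 1.0 / (2.0 * sigma - rho)
            d = rho_new * rho * d + (2.0 * rho_new / delta) * r
            rho = rho_new
        return x

    n = 32
    A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson matrix
    b = np.ones(n)

    x_gs = gauss_seidel_sweep(A, b, np.zeros(n))
    x_ch = chebyshev_smooth(A, b, np.zeros(n), sweeps=3, lmin=0.1, lmax=4.0)
    print(np.linalg.norm(b - A @ x_gs), np.linalg.norm(b - A @ x_ch))
    ```

    Both reduce the residual; the point of the paper is that the Chebyshev variant does so with the same matvec kernel in serial and parallel, whereas parallel Gauss-Seidel must be reordered or colored.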

  1. Parallel symbolic state-space exploration is difficult, but what is the alternative?

    Directory of Open Access Journals (Sweden)

    Gianfranco Ciardo

    2009-12-01

    State-space exploration is an essential step in many modeling and analysis problems. Its goal is to find the states reachable from the initial state of a discrete-state model. The state space can be used to answer important questions, e.g., "Is there a dead state?" and "Can N become negative?", or as a starting point for sophisticated investigations expressed in temporal logic. Unfortunately, the state space is often so large that ordinary explicit data structures and sequential algorithms cannot cope, prompting the exploration of (1) parallel approaches using multiple processors, from simple workstation networks to shared-memory supercomputers, to satisfy large memory and runtime requirements, and (2) symbolic approaches using decision diagrams to encode the large structured sets and relations manipulated during state-space generation. Both approaches have merits and limitations. Parallel explicit state-space generation is challenging, but almost linear speedup can be achieved; however, the analysis is ultimately limited by the memory and processors available. Symbolic methods are a heuristic that can efficiently encode many, but not all, functions over a structured and exponentially large domain; here the pitfalls are subtler: their performance varies widely depending on the class of decision diagram chosen, the state variable order, and obscure algorithmic parameters. As symbolic approaches are often much more efficient than explicit ones for many practical models, we argue for the need to parallelize symbolic state-space generation algorithms, so that we can realize the advantages of both approaches. This is a challenging endeavor, as the most efficient symbolic algorithm, Saturation, is inherently sequential. We conclude by discussing challenges, efforts, and promising directions toward this goal.
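
    Explicit state-space generation of the kind discussed reduces to graph search over the reachable states; a minimal sketch with a toy bounded counter (the model and the questions it answers are illustrative):

    ```python
    from collections import deque

    def reachable(initial, successors):
        # Explicit breadth-first state-space generation: the frontier is
        # the natural unit of work in parallel explicit exploration.
        seen = {initial}
        frontier = deque([initial])
        while frontier:
            s = frontier.popleft()
            for t in successors(s):
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
        return seen

    # Toy model: a counter 0..4 that can step +1 or -1 within bounds.
    def succ(n):
        return [m for m in (n - 1, n + 1) if 0 <= m <= 4]

    states = reachable(0, succ)
    assert states == {0, 1, 2, 3, 4}
    assert all(n >= 0 for n in states)  # "Can N become negative?" -- no
    ```

    Symbolic methods replace the `seen` set with a decision diagram so that exponentially large reachable sets can be stored compactly; the paper's argument is about parallelizing that symbolic variant.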

  2. Scalable parallel prefix solvers for discrete ordinates transport

    International Nuclear Information System (INIS)

    Pautz, S.; Pandya, T.; Adams, M.

    2009-01-01

    The well-known 'sweep' algorithm for inverting the streaming-plus-collision term in first-order deterministic radiation transport calculations has some desirable numerical properties. However, it suffers from parallel scaling issues caused by a lack of concurrency. The maximum degree of concurrency, and thus the maximum parallelism, grows more slowly than the problem size for sweeps-based solvers. We investigate a new class of parallel algorithms that involves recasting the streaming-plus-collision problem in prefix form and solving via cyclic reduction. This method, although computationally more expensive at low levels of parallelism than the sweep algorithm, offers better theoretical scalability properties. Previous work has demonstrated this approach for one-dimensional calculations; we show how to extend it to multidimensional calculations. Notably, for multiple dimensions it appears that this approach is limited to long-characteristics discretizations; other discretizations cannot be cast in prefix form. We implement two variants of the algorithm within the radlib/SCEPTRE transport code library at Sandia National Laboratories and show results on two different massively parallel systems. Both the 'forward' and 'symmetric' solvers behave similarly, scaling well to larger degrees of parallelism than sweeps-based solvers. We do observe some issues at the highest levels of parallelism (relative to the system size) and discuss possible causes. We conclude that this approach shows good potential for future parallel systems, but the parallel scalability will depend heavily on the architecture of the communication networks of these systems. (authors)
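
    The "prefix form" recasting can be illustrated on a one-dimensional model: a sweep is the chained application of affine maps psi -> a*psi + b, and because affine-map composition is associative, the same answer comes from a prefix scan (a sketch under these assumptions, not the radlib/SCEPTRE solver):

    ```python
    def combine(f, g):
        # Compose two affine maps x -> a*x + b. Composition is associative,
        # which is what lets cyclic reduction / parallel scan replace the
        # strictly sequential sweep.
        a1, b1 = f
        a2, b2 = g
        return (a2 * a1, a2 * b1 + b2)

    def prefix_scan(maps):
        # Inclusive scan (sequential here; on P processors the same
        # combine runs as a log2(P)-depth tree).
        out, acc = [], (1.0, 0.0)  # (1, 0) is the identity map
        for m in maps:
            acc = combine(acc, m)
            out.append(acc)
        return out

    # One-cell transport-like recurrence: psi[i+1] = a[i]*psi[i] + b[i]
    a = [0.9, 0.8, 0.7, 0.6]
    b = [1.0, 1.0, 1.0, 1.0]
    psi0 = 2.0

    sweep = []
    psi = psi0
    for ai, bi in zip(a, b):
        psi = ai * psi + bi
        sweep.append(psi)

    scan = [ai * psi0 + bi for ai, bi in prefix_scan(list(zip(a, b)))]
    assert all(abs(s - t) < 1e-12 for s, t in zip(sweep, scan))
    ```

    The scan does more arithmetic than the sweep, which matches the abstract's observation that the method only pays off at high processor counts.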

  3. Domain decomposition methods and parallel computing

    International Nuclear Information System (INIS)

    Meurant, G.

    1991-01-01

    In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from discretization of scientific computing problems described by systems of partial differential equations. We show how to get a discrete finite dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems found in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
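
    The conjugate gradient iteration the paper builds on is short enough to sketch; this is a dense NumPy toy on a 1D Laplacian, with the understanding that a domain-decomposed version distributes the matvec and the dot products across processors:

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
        # Textbook CG for a symmetric positive definite A. Each iteration
        # is one matvec plus a few dot products, which is why the method
        # maps well onto parallel machines.
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    n = 50
    A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian, SPD
    b = np.ones(n)
    x = conjugate_gradient(A, b)
    assert np.linalg.norm(A @ x - b) < 1e-8
    ```

    Domain decomposition enters as a preconditioner and as the data layout: each subdomain owns a block of `x` and exchanges only interface values during the matvec.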

  4. Vectorization and parallelization of Monte-Carlo programs for calculation of radiation transport

    International Nuclear Information System (INIS)

    Seidel, R.

    1995-01-01

    The versatile MCNP-3B Monte-Carlo code written in FORTRAN77, for simulation of the radiation transport of neutral particles, has been subjected to vectorization and parallelization of essential parts, without touching its versatility. Vectorization is not dependent on a specific computer. Several sample tasks have been selected in order to test the vectorized MCNP-3B code in comparison to the scalar MCNP-3B code. The samples are a representative example of the 3-D calculations to be performed for simulation of radiation transport in neutron and reactor physics. (1) 4π neutron detector. (2) High-energy calorimeter. (3) PROTEUS benchmark (conversion rates and neutron multiplication factors for the HCLWR (High Conversion Light Water Reactor)). (orig./HP) [de

  5. Xyce parallel electronic simulator : users' guide.

    Energy Technology Data Exchange (ETDEWEB)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible range of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is

  6. Reactive wavepacket dynamics for four atom systems on scalable parallel computers

    International Nuclear Information System (INIS)

    Goldfield, E.M.

    1994-01-01

    While time-dependent quantum mechanics has been successfully applied to many three atom systems, it was nevertheless a computational challenge to use wavepacket methods to study four atom systems, systems with several heavy atoms, and systems with deep potential wells. S.K. Gray and the author are studying the reaction of OH + CO ↔ (HOCO) ↔ H + CO2, a difficult reaction by all the above criteria. Memory considerations alone made it impossible to use a single IBM RS/6000 workstation to study a four degree-of-freedom model of this system. They have developed a scalable parallel wavepacket code for the IBM SP1 and have run it on the SP1 at Argonne and at the Cornell Theory Center. The wavepacket, defined on a four dimensional grid, is spread out among the processors. Two-dimensional FFTs are used to compute the kinetic energy operator acting on the wavepacket. Accomplishing this task, which is the computationally intensive part of the calculation, requires a global transpose of the data. This transpose is the only serious communication between processors. Since the problem is essentially data-parallel, communication is regular and load-balancing is excellent. But as the problem is moderately fine-grained and messages are long, the ratio of communication to computation is somewhat high and they typically get about 55% of ideal speed-up
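    The transpose-based FFT step described above can be illustrated in serial form: each pass applies 1-D FFTs along the locally owned axis, with a global transpose in between (an all-to-all exchange on the SP1; a plain numpy transpose stands in for it here):

```python
import numpy as np

# Serial sketch of the transpose-based 2-D FFT used for the kinetic
# energy step: 1-D FFTs along the locally held axis, a global
# transpose (an all-to-all exchange in the parallel code; plain .T
# here), then 1-D FFTs along the other axis.

rng = np.random.default_rng(0)
grid = rng.random((8, 8)) + 1j * rng.random((8, 8))

step1 = np.fft.fft(grid, axis=1)      # each rank FFTs its own rows
step2 = step1.T                       # global transpose: ranks exchange data
result = np.fft.fft(step2, axis=1).T  # FFT the former columns

# the two-pass result matches a direct 2-D FFT
assert np.allclose(result, np.fft.fft2(grid))
```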

  7. An Efficient Parallel Multi-Scale Segmentation Method for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Haiyan Gu

    2018-04-01

    Full Text Available Remote sensing (RS) image segmentation is an essential step in geographic object-based image analysis (GEOBIA) to ultimately derive “meaningful objects”. While many segmentation methods exist, most of them are not efficient for large data sets. Thus, the goal of this research is to develop an efficient parallel multi-scale segmentation method for RS imagery by combining graph theory and the fractal net evolution approach (FNEA). Specifically, a minimum spanning tree (MST) algorithm in graph theory is proposed to be combined with a minimum heterogeneity rule (MHR) algorithm that is used in FNEA. The MST algorithm is used for the initial segmentation while the MHR algorithm is used for object merging. An efficient implementation of the segmentation strategy is presented using data partition and the “reverse searching-forward processing” chain based on message passing interface (MPI) parallel technology. Segmentation results of the proposed method using images from multiple sensors (airborne, SPECIM AISA EAGLE II, WorldView-2, RADARSAT-2) and different selected landscapes (residential/industrial, residential/agriculture) covering four test sites indicated its efficiency in both accuracy and speed. We conclude that the proposed method is applicable and efficient for the segmentation of a variety of RS imagery (airborne optical, satellite optical, SAR, hyperspectral), while the accuracy is comparable with that of the FNEA method.
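    The MST-based initial segmentation can be sketched as a Kruskal-style region merge over a pixel graph with a union-find structure. The names and the simple intensity-difference threshold below are illustrative; the paper's MHR merging criterion is more elaborate:

```python
# Kruskal-style sketch of an MST-based initial segmentation: pixels
# are graph nodes, 4-neighbour edges are weighted by dissimilarity,
# and regions are merged greedily with a union-find. The plain
# intensity-difference threshold is illustrative only.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def segment(values, width, threshold):
    """values: flat list of pixel intensities on a width-wide grid."""
    n = len(values)
    edges = []
    for i in range(n):
        if (i + 1) % width != 0:              # right neighbour
            edges.append((abs(values[i] - values[i + 1]), i, i + 1))
        if i + width < n:                     # down neighbour
            edges.append((abs(values[i] - values[i + width]), i, i + width))
    uf = UnionFind(n)
    for w, i, j in sorted(edges):             # process in Kruskal order
        if w <= threshold:
            uf.union(i, j)
    return [uf.find(i) for i in range(n)]

labels = segment([0, 0, 9, 9,
                  0, 0, 9, 9], width=4, threshold=1)
assert labels[0] == labels[5] and labels[2] == labels[7]
assert labels[0] != labels[2]
```

    The parallel scheme in the paper partitions the image across MPI ranks and resolves labels across partition boundaries; the serial core shown here is the per-partition step.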

  8. [Effects of benazepril and valsartan on erythropoietin levels in patients with essential hypertension].

    Science.gov (United States)

    Guo, Lin-lin; Li, Min; Wang, Ai-hong

    2011-10-01

    To compare the effects of valsartan and benazepril on erythropoietin (EPO) levels in essential hypertensive patients with normal renal function. Sixty essential hypertensive patients were randomly divided into a valsartan group (n=30, valsartan 80 mg/day) and a benazepril group (n=30, benazepril 10 mg/day). Plasma EPO and hemoglobin (Hb) levels were measured at the start of and at 4 and 8 weeks during the treatments. EPO and Hb levels were all in the normal range in the two groups. Valsartan decreased EPO levels from 14.179±3.214 U/L (baseline) to 12.138±2.926 U/L (P<0.05). Benazepril treatment did not result in any obvious changes in EPO or Hb levels (P>0.05). Valsartan may lower EPO and Hb levels in patients with essential hypertension, while benazepril does not have such effects. The safety of valsartan in anemic hypertensive patients should be further investigated.

  9. Parallelization and automatic data distribution for nuclear reactor simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  10. Parallelization and automatic data distribution for nuclear reactor simulations

    International Nuclear Information System (INIS)

    Liebrock, L.M.

    1997-01-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed

  11. Twelve-week, multicenter, placebo-controlled, randomized, double-blind, parallel-group, comparative phase II/III study of benzoyl peroxide gel in patients with acne vulgaris: A secondary publication.

    Science.gov (United States)

    Kawashima, Makoto; Sato, Shinichi; Furukawa, Fukumi; Matsunaga, Kayoko; Akamatsu, Hirohiko; Igarashi, Atsuyuki; Tsunemi, Yuichiro; Hayashi, Nobukazu; Yamamoto, Yuki; Nagare, Toshitaka; Katsuramaki, Tsuneo

    2017-07-01

    A placebo-controlled, randomized, double-blind, parallel-group, comparative, multicenter study was conducted to investigate the efficacy and safety of benzoyl peroxide (BPO) gel, administered once daily for 12 weeks to Japanese patients with acne vulgaris. Efficacy was evaluated by counting all inflammatory and non-inflammatory lesions. Safety was evaluated based on adverse events, local skin tolerability scores and laboratory test values. All 609 subjects were randomly assigned to receive the study products (2.5% and 5% BPO and placebo), and 607 subjects were included in the full analysis set, 544 in the per protocol set and 609 in the safety analyses. The median rates of reduction from baseline to the last evaluation of the inflammatory lesion counts, the primary end-point, in the 2.5% and 5% BPO groups were 72.7% and 75.0%, respectively, and were significantly higher than that in the placebo group (41.7%). No deaths or other serious adverse events were observed. The incidences of adverse events in the 2.5% and 5% BPO groups were 56.4% and 58.8%, respectively; a higher incidence than in the placebo group, but there was no obvious difference between the 2.5% and 5% BPO groups. All adverse events were mild or moderate in severity. Most adverse events did not lead to study product discontinuation. The results suggested that both 2.5% and 5% BPO are useful for the treatment of acne vulgaris. © 2017 The Authors. The Journal of Dermatology published by John Wiley & Sons Australia, Ltd.

  12. The convergence of parallel Boltzmann machines

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.; Eckmiller, R.; Hartmann, G.; Hauske, G.

    1990-01-01

    We discuss the main results obtained in a study of a mathematical model of synchronously parallel Boltzmann machines. We present supporting evidence for the conjecture that a synchronously parallel Boltzmann machine maximizes a consensus function that consists of a weighted sum of the regular

  13. Astronomy essentials

    CERN Document Server

    Brass, Charles O

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Astronomy includes the historical perspective of astronomy, sky basics and the celestial coordinate systems, a model and the origin of the solar system, the sun, the planets, Kepler'

  14. Implementations of BLAST for parallel computers.

    Science.gov (United States)

    Jülich, A

    1995-02-01

    The BLAST sequence comparison programs have been ported to a variety of parallel computers: the shared memory machine Cray Y-MP 8/864 and the distributed memory architectures Intel iPSC/860 and nCUBE. Additionally, the programs were ported to run on workstation clusters. We explain the parallelization techniques and consider the pros and cons of these methods. The BLAST programs are very well suited for parallelization for a moderate number of processors. We illustrate our results using the program blastp as an example. As input data for blastp, a 799 residue protein query sequence and the protein database PIR were used.
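    The database-partitioned parallelism used in such ports can be sketched with a process pool: each worker scores one chunk of the database against the query and the per-chunk best hits are merged. The `score` function below is a hypothetical stand-in (longest common substring), not the BLAST alignment statistics:

```python
from multiprocessing import Pool

# Toy sketch of database-partitioned parallelism: split the sequence
# database into chunks, score each chunk in a separate worker, then
# merge the per-chunk best hits. `score` is a hypothetical stand-in,
# not the real BLAST scoring.

def score(query, subject):
    """Length of the longest common substring (illustrative only)."""
    best = 0
    for i in range(len(subject)):
        for j in range(len(query)):
            k = 0
            while (i + k < len(subject) and j + k < len(query)
                   and subject[i + k] == query[j + k]):
                k += 1
            best = max(best, k)
    return best

def best_hit(args):
    """Return the highest-scoring sequence in one database chunk."""
    query, chunk = args
    return max(chunk, key=lambda s: score(query, s))

if __name__ == "__main__":
    query = "ACGTACGT"
    database = ["TTTT", "ACGTAC", "GGGG", "TACG", "CCCC", "ACG"]
    chunks = [database[:3], database[3:]]     # one chunk per worker
    with Pool(2) as pool:
        hits = pool.map(best_hit, [(query, c) for c in chunks])
    print(max(hits, key=lambda s: score(query, s)))  # prints ACGTAC
```

    This mirrors why BLAST parallelizes well for a moderate number of processors: chunks are independent, and only the small per-chunk results need merging.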

  15. Synchronization Of Parallel Discrete Event Simulations

    Science.gov (United States)

    Steinman, Jeffrey S.

    1992-01-01

    Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.

  16. Distributed parallel messaging for multiprocessor systems

    Science.gov (United States)

    Chen, Dong; Heidelberger, Philip; Salapura, Valentina; Senger, Robert M; Steinmacher-Burrow, Burhard; Sugawara, Yutaka

    2013-06-04

    A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into and multiple reception from a network, in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network.

  17. Modelling and simulation of multiple single - phase induction motor in parallel connection

    Directory of Open Access Journals (Sweden)

    Sujitjorn, S.

    2006-11-01

    Full Text Available A mathematical model for parallel connected n-multiple single-phase induction motors in generalized state-space form is proposed in this paper. The motor group draws electric power from one inverter. The model is developed by the dq-frame theory and was tested against four loading scenarios in which satisfactory results were obtained.
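    A generalized state-space model of the kind derived in the paper can be simulated with a simple explicit integrator. The sketch below uses an illustrative stable linear system dx/dt = Ax + Bu and forward Euler, not the actual dq-frame induction motor equations:

```python
import numpy as np

# Forward-Euler sketch of simulating a generalized state-space model
# dx/dt = A x + B u. A, B, u below are an illustrative stable system,
# not the dq-frame motor-group equations from the paper.

def simulate(A, B, u, x0, dt, steps):
    x = x0.copy()
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * (A @ x + B @ u)   # explicit Euler step
        traj.append(x.copy())
    return np.array(traj)

A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
B = np.eye(2)
u = np.array([1.0, 1.0])               # constant input (one inverter)
traj = simulate(A, B, u, np.zeros(2), dt=0.01, steps=1000)

# steady state solves A x + B u = 0, i.e. x = [1.0, 0.5]
assert np.allclose(traj[-1], [1.0, 0.5], atol=1e-2)
```

    Stacking several such motor models into one block-diagonal A with a shared input u is the structural idea behind the paper's n-motor generalized state-space form.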

  18. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin

  19. Seamless-merging-oriented parallel inverse lithography technology

    International Nuclear Information System (INIS)

    Yang Yiwei; Shi Zheng; Shen Shanhu

    2009-01-01

    Inverse lithography technology (ILT), a promising resolution enhancement technology (RET) used in next generations of IC manufacture, has the capability to push lithography to its limit. However, the existing methods of ILT are either time-consuming due to the large layout in a single process, or not accurate enough due to simple block merging in the parallel process. The seamless-merging-oriented parallel ILT method proposed in this paper is fast because of the parallel process; most importantly, convergence enhancement penalty terms (CEPT) introduced in the parallel ILT optimization process take the environment into consideration as well as environmental change through target updating. This method increases the similarity of the overlapped area between guard-bands and work units, makes the merging process approach seamless and hence reduces hot-spots. The experimental results show that seamless-merging-oriented parallel ILT not only accelerates the optimization process, but also significantly improves the quality of ILT.

  20. Automatic Management of Parallel and Distributed System Resources

    Science.gov (United States)

    Yan, Jerry; Ngai, Tin Fook; Lundstrom, Stephen F.

    1990-01-01

    Viewgraphs on automatic management of parallel and distributed system resources are presented. Topics covered include: parallel applications; intelligent management of multiprocessing systems; performance evaluation of parallel architecture; dynamic concurrent programs; compiler-directed system approach; lattice gaseous cellular automata; and sparse matrix Cholesky factorization.

  1. Emollient bath additives for the treatment of childhood eczema (BATHE): multicentre pragmatic parallel group randomised controlled trial of clinical and cost effectiveness.

    Science.gov (United States)

    Santer, Miriam; Ridd, Matthew J; Francis, Nick A; Stuart, Beth; Rumsby, Kate; Chorozoglou, Maria; Becque, Taeko; Roberts, Amanda; Liddiard, Lyn; Nollett, Claire; Hooper, Julie; Prude, Martina; Wood, Wendy; Thomas, Kim S; Thomas-Jones, Emma; Williams, Hywel C; Little, Paul

    2018-05-03

    To determine the clinical effectiveness and cost effectiveness of including emollient bath additives in the management of eczema in children. Pragmatic randomised open label superiority trial with two parallel groups. 96 general practices in Wales and western and southern England. 483 children aged 1 to 11 years, fulfilling UK diagnostic criteria for atopic dermatitis. Children with very mild eczema and children who bathed less than once weekly were excluded. Participants in the intervention group were prescribed emollient bath additives by their usual clinical team to be used regularly for 12 months. The control group were asked to use no bath additives for 12 months. Both groups continued with standard eczema management, including leave-on emollients, and caregivers were given standardised advice on how to wash participants. The primary outcome was eczema control measured by the patient oriented eczema measure (POEM, scores 0-7 mild, 8-16 moderate, 17-28 severe) weekly for 16 weeks. Secondary outcomes were eczema severity over one year (monthly POEM score from baseline to 52 weeks), number of eczema exacerbations resulting in primary healthcare consultation, disease specific quality of life (dermatitis family impact), generic quality of life (child health utility-9D), utilisation of resources, and type and quantity of topical corticosteroid or topical calcineurin inhibitors prescribed. 483 children were randomised and one child was withdrawn, leaving 482 children in the trial: 51% were girls (244/482), 84% were of white ethnicity (447/470), and the mean age was 5 years. 96% (461/482) of participants completed at least one post-baseline POEM, so were included in the analysis, and 77% (370/482) completed questionnaires for more than 80% of the time points for the primary outcome (12/16 weekly questionnaires to 16 weeks). The mean baseline POEM score was 9.5 (SD 5.7) in the bath additives group and 10.1 (SD 5.8) in the no bath additives group. The mean POEM score

  2. Synchronization Techniques in Parallel Discrete Event Simulation

    OpenAIRE

    Lindén, Jonatan

    2018-01-01

    Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...

  3. Parallel sparse direct solver for integrated circuit simulation

    CERN Document Server

    Chen, Xiaoming; Yang, Huazhong

    2017-01-01

    This book describes algorithmic methods and parallelization techniques to design a parallel sparse direct solver which is specifically targeted at integrated circuit simulation problems. The authors describe a complete flow and detailed parallel algorithms of the sparse direct solver. They also show how to improve the performance by simple but effective numerical techniques. The sparse direct solver techniques described can be applied to any SPICE-like integrated circuit simulator and have been proven to be high-performance in actual circuit simulation. Readers will benefit from the state-of-the-art parallel integrated circuit simulation techniques described in this book, especially the latest parallel sparse matrix solution techniques. · Introduces complicated algorithms of sparse linear solvers, using concise principles and simple examples, without complex theory or lengthy derivations; · Describes a parallel sparse direct solver that can be adopted to accelerate any SPICE-like integrated circuit simulato...

  4. Chemical composition of Rosmarinus officinalis and Lavandula stoechas essential oils and their insecticidal effects on Orgyia trigotephras (Lepidoptera: Lymantriidae

    Directory of Open Access Journals (Sweden)

    Ben Slimane Badreddine

    2015-01-01

    Full Text Available Objective: To evaluate the toxic activities of essential oils obtained from Rosmarinus officinalis and Lavandula stoechas against the fourth larval instars of Orgyia trigotephras. Methods: A total of 1 200 larvae were divided into three groups: I, II, and III. Group I was used to investigate the effect of the extracted essential oils via gastric action; Bacillus thuringiensis was used as reference and ethanol as control. Group II was used to test contact action and Group III fumigant action; for both Groups II and III, Decis was used as reference and ethanol as control. During the three experiments, the effect of the essential oils on the larvae was assessed. Results: The chemical composition of the essential oils from the two medicinal plants was determined, and their insecticidal effects on the fourth larval instars of Orgyia trigotephras were assessed. Both oils showed insecticidal activity; the Rosmarinus officinalis essential oil was less efficient than that of Lavandula stoechas. Conclusions: The relationship between chemical composition and biological activity is confirmed by the present findings. The potential use of these essential oils as bioinsecticides can therefore be considered as an alternative to synthetic products.

  5. Streaming nested data parallelism on multicores

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2016-01-01

    The paradigm of nested data parallelism (NDP) allows a variety of semi-regular computation tasks to be mapped onto SIMD-style hardware, including GPUs and vector units. However, some care is needed to keep down space consumption in situations where the available parallelism may vastly exceed...
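    The core NDP idea of flattening a nested (ragged) computation into one flat data-parallel operation plus segment bookkeeping can be sketched as follows. This is a serial cartoon of the transformation; real implementations keep segment descriptors on the device:

```python
# Sketch of NDP flattening: a nested map over a ragged structure
# becomes (1) a flatten, (2) one flat data-parallel map, and (3) an
# unflatten driven by the recorded segment lengths.

def segmented_map(f, nested):
    segments = [len(row) for row in nested]       # segment descriptor
    flat = [x for row in nested for x in row]     # flatten
    flat_out = [f(x) for x in flat]               # one flat map (SIMD-friendly)
    out, i = [], 0
    for n in segments:                            # unflatten
        out.append(flat_out[i:i + n])
        i += n
    return out

assert segmented_map(lambda x: x * x, [[1, 2], [], [3]]) == [[1, 4], [], [9]]
```

    The space issue the abstract refers to arises because flattening materializes all the available parallelism at once; streaming variants bound how much of the flat intermediate exists at any time.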

  6. Parallel Boltzmann machines : a mathematical model

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.

    1991-01-01

    A mathematical model is presented for the description of parallel Boltzmann machines. The framework is based on the theory of Markov chains and combines a number of previously known results into one generic model. It is argued that parallel Boltzmann machines maximize a function consisting of a

  7. 17 CFR 12.24 - Parallel proceedings.

    Science.gov (United States)

    2010-04-01

    ...) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration proceeding... the receivership includes the resolution of claims made by customers; or (3) A petition filed under... any of the foregoing with knowledge of a parallel proceeding shall promptly notify the Commission, by...

  8. Parallel computing solution of Boltzmann neutron transport equation

    International Nuclear Information System (INIS)

    Ansah-Narh, T.

    2010-01-01

    The focus of the research was on developing parallel computing algorithm for solving eigenvalues of the Boltzmann Neutron Transport Equation (BNTE) in a slab geometry using multi-grid approach. In response to the problem of slow execution of serial computing when solving large problems, such as BNTE, the study was focused on the design of parallel computing systems which was an evolution of serial computing that used multiple processing elements simultaneously to solve complex physical and mathematical problems. Finite element method (FEM) was used for the spatial discretization scheme, while angular discretization was accomplished by expanding the angular dependence in terms of Legendre polynomials. The eigenvalues representing the multiplication factors in the BNTE were determined by the power method. MATLAB Compiler Version 4.1 (R2009a) was used to compile the MATLAB codes of BNTE. The implemented parallel algorithms were enabled with matlabpool, a Parallel Computing Toolbox function. The option UseParallel was set to 'always' and the default value of the option was 'never'. When those conditions held, the solvers computed estimated gradients in parallel. The parallel computing system was used to handle all the bottlenecks in the matrix generated from the finite element scheme and each domain of the power method generated. The parallel algorithm was implemented on a Symmetric Multi Processor (SMP) cluster machine, which had Intel 32 bit quad-core x 86 processors. Convergence rates and timings for the algorithm on the SMP cluster machine were obtained. Numerical experiments indicated the designed parallel algorithm could reach perfect speedup and had good stability and scalability. (au)
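    The power method used above to extract the multiplication factor can be sketched in a few lines: repeatedly apply the operator and normalize until the dominant eigenvalue converges. The 2x2 matrix is a toy stand-in for the FEM-discretized BNTE operator:

```python
import numpy as np

# Power method sketch: apply the operator, normalize, and watch the
# norm growth converge to the dominant eigenvalue (valid here because
# the dominant eigenvalue is positive). The matrix is illustrative.

def power_method(A, tol=1e-12, max_iter=10000):
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)
        x = y / lam_new                # normalize the iterate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, x

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # eigenvalues 3 and 1
lam, v = power_method(A)
assert abs(lam - 3.0) < 1e-8
```

    In the transport setting, the expensive step is the operator application (the transport solve), which is exactly what the paper parallelizes.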

  9. Parallel electric fields from ionospheric winds

    International Nuclear Information System (INIS)

    Nakada, M.P.

    1987-01-01

    The possible production of electric fields parallel to the magnetic field by dynamo winds in the E region is examined, using a jet stream wind model. Current return paths through the F region above the stream are examined as well as return paths through the conjugate ionosphere. The Wulf geometry with horizontal winds moving in opposite directions one above the other is also examined. Parallel electric fields are found to depend strongly on the width of current sheets at the edges of the jet stream. If these are narrow enough, appreciable parallel electric fields are produced. These appear to be sufficient to heat the electrons which reduces the conductivity and produces further increases in parallel electric fields and temperatures. Calculations indicate that high enough temperatures for optical emission can be produced in less than 0.3 s. Some properties of auroras that might be produced by dynamo winds are examined; one property is a time delay in brightening at higher and lower altitudes

  10. Data parallel sorting for particle simulation

    Science.gov (United States)

    Dagum, Leonardo

    1992-01-01

    Sorting on a parallel architecture is a communications intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O (N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
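    The O(N) integer sort a serial particle code relies on is a counting sort keyed by cell index; an exclusive prefix sum over the per-cell counts gives each particle its output slot. A minimal sketch (the paper's contribution, reducing the parallel sort to a merge, is not shown here):

```python
# O(N) counting sort keyed by cell index, the serial baseline the
# abstract refers to. Stable: particles in the same cell keep their
# relative order.

def counting_sort_by_cell(particles, n_cells):
    """particles: list of (cell_index, payload) pairs."""
    counts = [0] * n_cells
    for cell, _ in particles:
        counts[cell] += 1
    offsets, total = [], 0
    for c in counts:                  # exclusive prefix sum:
        offsets.append(total)         # first output slot of each cell
        total += c
    out = [None] * len(particles)
    for cell, payload in particles:
        out[offsets[cell]] = (cell, payload)
        offsets[cell] += 1
    return out

parts = [(2, "a"), (0, "b"), (1, "c"), (0, "d"), (2, "e")]
assert counting_sort_by_cell(parts, 3) == [
    (0, "b"), (0, "d"), (1, "c"), (2, "a"), (2, "e")]
```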

  11. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction

  12. Parallel algorithms for online trackfinding at PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Bianchi, Ludovico; Ritman, James; Stockmanns, Tobias [IKP, Forschungszentrum Juelich GmbH (Germany); Herten, Andreas [JSC, Forschungszentrum Juelich GmbH (Germany); Collaboration: PANDA-Collaboration

    2016-07-01

    The PANDA experiment, one of the four scientific pillars of the FAIR facility currently in construction in Darmstadt, is a next-generation particle detector that will study collisions of antiprotons with beam momenta of 1.5-15 GeV/c on a fixed proton target. Because of the broad physics scope and the similar signature of signal and background events, PANDA's strategy for data acquisition is to continuously record data from the whole detector and use this global information to perform online event reconstruction and filtering. A real-time rejection factor of up to 1000 must be achieved to match the incoming data rate for offline storage, making all components of the data processing system computationally very challenging. Online particle track identification and reconstruction is an essential step, since track information is used as input in all following phases. Online tracking algorithms must ensure a delicate balance between high tracking efficiency and quality, and minimal computational footprint. For this reason, a massively parallel solution exploiting multiple Graphic Processing Units (GPUs) is under investigation. The talk presents the core concepts of the algorithms being developed for primary trackfinding, along with details of their implementation on GPUs.

  13. Ground state of the parallel double quantum dot system.

    Science.gov (United States)

    Zitko, Rok; Mravlje, Jernej; Haule, Kristjan

    2012-02-10

    We resolve the controversy regarding the ground state of the parallel double quantum dot system near half filling. The numerical renormalization group predicts an underscreened Kondo state with residual spin-1/2 magnetic moment, ln2 residual impurity entropy, and unitary conductance, while the Bethe ansatz solution predicts a fully screened impurity, regular Fermi-liquid ground state, and zero conductance. We calculate the impurity entropy of the system as a function of the temperature using the hybridization-expansion continuous-time quantum Monte Carlo technique, which is a numerically exact stochastic method, and find excellent agreement with the numerical renormalization group results. We show that the origin of the unconventional behavior in this model is the odd-symmetry "dark state" on the dots.

  14. Parallel 3-D method of characteristics in MPACT

    International Nuclear Information System (INIS)

    Kochunas, B.; Downar, T. J.; Liu, Z.

    2013-01-01

    A new parallel 3-D MOC kernel has been developed and implemented in MPACT which makes use of the modular ray tracing technique to reduce computational requirements and to facilitate parallel decomposition. The parallel model makes use of both distributed and shared memory parallelism which are implemented with the MPI and OpenMP standards, respectively. The kernel is capable of parallel decomposition of problems in space, angle, and by characteristic rays up to O(10^4) processors. Initial verification of the parallel 3-D MOC kernel was performed using the Takeda 3-D transport benchmark problems. The eigenvalues computed by MPACT are within the statistical uncertainty of the benchmark reference and agree well with the averages of other participants. The MPACT k-eff differs from the benchmark results for rodded and unrodded cases by 11 and -40 pcm, respectively. The calculations were performed for various numbers of processors and parallel decompositions up to 15625 processors, all producing the same result at convergence. The parallel efficiency of the worst case was 60%, while very good efficiency (>95%) was observed for cases using 500 processors. The overall run time for the 500 processor case was 231 seconds and 19 seconds for the case with 15625 processors. Ongoing work is focused on developing theoretical performance models and the implementation of acceleration techniques to minimize the number of iterations to converge. (authors)
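    The strong-scaling figures quoted above can be checked with simple arithmetic: going from 500 to 15625 processors gives a relative speedup of 231/19 ≈ 12.2 against an ideal factor of 31.25, i.e. roughly 39% efficiency relative to the 500-processor run. A quick sketch (plain arithmetic on the quoted timings, nothing MPACT-specific):

```python
def relative_efficiency(p1, t1, p2, t2):
    """Parallel efficiency of run 2 relative to run 1:
    E = measured_speedup / ideal_speedup = (t1 / t2) / (p2 / p1)."""
    return (t1 / t2) / (p2 / p1)

# run times quoted in the abstract
e = relative_efficiency(500, 231.0, 15625, 19.0)
print(f"{e:.0%}")   # -> 39%
```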

  15. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.
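    The point-relaxation smoothers that make the MSG algorithm attractive on massively parallel machines are naturally data-parallel: a Jacobi sweep updates every grid point from the previous iterate only, so all points can be updated simultaneously with no ordering dependence. A minimal 1-D sketch of such a smoother (illustrative only; the paper applies point smoothers on each semicoarsened grid):

```python
import numpy as np

def jacobi_sweep(u, f, h):
    """One Jacobi relaxation sweep for -u'' = f on a uniform 1-D grid.

    Every interior point is computed from the *old* iterate, so the
    update is data-parallel: no point depends on a neighbour's new value.
    """
    new = u.copy()
    new[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return new

n = 65
h = 1.0 / (n - 1)
f = np.ones(n)                 # right-hand side of -u'' = 1, u(0) = u(1) = 0
u = np.zeros(n)
for _ in range(100):
    u = jacobi_sweep(u, f, h)
```

    A Gauss-Seidel line smoother, by contrast, carries a sequential dependence along each line, which is the awkward parallelism the abstract refers to.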

  16. Current Trends in Numerical Simulation for Parallel Engineering Environments New Directions and Work-in-Progress

    International Nuclear Information System (INIS)

    Trinitis, C; Schulz, M

    2006-01-01

    In today's world, the use of parallel programming and architectures is essential for simulating practical problems in engineering and related disciplines. Remarkable progress in CPU architecture, system scalability, and interconnect technology continues to provide new opportunities, as well as new challenges for both system architects and software developers. These trends are paralleled by progress in parallel algorithms, simulation techniques, and software integration from multiple disciplines. ParSim brings together researchers from both application disciplines and computer science and aims at fostering closer cooperation between these fields. Since its successful introduction in 2002, ParSim has established itself as an integral part of the EuroPVM/MPI conference series. In contrast to traditional conferences, emphasis is put on the presentation of up-to-date results with a short turn-around time. This offers a unique opportunity to present new aspects in this dynamic field and discuss them with a wide, interdisciplinary audience. The EuroPVM/MPI conference series, as one of the prime events in parallel computation, serves as an ideal surrounding for ParSim. This combination enables the participants to present and discuss their work within the scope of both the session and the host conference. This year, eleven papers from authors in nine countries were submitted to ParSim, and we selected five of them. They cover a wide range of different application fields including gas flow simulations, thermo-mechanical processes in nuclear waste storage, and cosmological simulations. At the same time, the selected contributions also address the computer science side of their codes and discuss different parallelization strategies, programming models and languages, as well as the use of nonblocking collective operations in MPI. We are confident that this provides an attractive program and that ParSim will be an informal setting for lively discussions and for fostering new

  17. The effect of a corticosteroid cream and a barrier-strengthening moisturizer in hand eczema. A double-blind, randomized, prospective, parallel group clinical trial.

    Science.gov (United States)

    Lodén, M; Wirén, K; Smerud, K T; Meland, N; Hønnås, H; Mørk, G; Lützow-Holm, C; Funk, J; Meding, B

    2012-05-01

    Hand eczema is a common and persistent disease with a relapsing course. Clinical data suggest that once daily treatment with corticosteroids is just as effective as twice daily treatment. The aim of this study was to compare once and twice daily applications of a strong corticosteroid cream in addition to maintenance therapy with a moisturizer in patients with a recent relapse of hand eczema. The study was a parallel, double-blind, randomized, clinical trial on 44 patients. Twice daily application of a strong corticosteroid cream (betamethasone valerate 0.1%) was compared with once daily application, where a urea-containing moisturizer was substituted for the corticosteroid cream in the morning. The investigator scored the presence of eczema and the patients judged the health-related quality of life (HRQoL) using the Dermatology Life Quality Index (DLQI), which measures how much the patient's skin problem has affected his/her life over the past week. The patients also judged the severity of their eczema daily on a visual analogue scale. Both groups improved in terms of eczema and DLQI. However, the clinical scoring demonstrated that once daily application of corticosteroid was superior to twice daily application in diminishing eczema, especially in the group of patients with lower eczema scores at inclusion. Twice daily use of corticosteroids was not superior to once daily use in treating eczema. On the contrary, the clinical assessment showed a larger benefit from once daily treatment compared with twice daily, especially in the group of patients with a moderate eczema at inclusion. © 2011 The Authors. Journal of the European Academy of Dermatology and Venereology © 2011 European Academy of Dermatology and Venereology.

  18. A qualitative single case study of parallel processes

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard

    2007-01-01

    Parallel process in psychotherapy and supervision is a phenomenon manifest in relationships and interactions that originates in one setting and is reflected in another. This article presents an explorative single case study of parallel processes based on qualitative analyses of two successive, randomly chosen psychotherapy sessions with a schizophrenic patient and the supervision session given in between. The author's analysis is verified by an independent examiner's analysis. Parallel processes are identified and described. Reflections on the dynamics of parallel processes and supervisory...

  19. Investigation of Mediational Processes Using Parallel Process Latent Growth Curve Modeling

    Science.gov (United States)

    Cheong, JeeWon; MacKinnon, David P.; Khoo, Siek Toon

    2010-01-01

    This study investigated a method to evaluate mediational processes using latent growth curve modeling. The mediator and the outcome measured across multiple time points were viewed as 2 separate parallel processes. The mediational process was defined as the independent variable influencing the growth of the mediator, which, in turn, affected the growth of the outcome. To illustrate modeling procedures, empirical data from a longitudinal drug prevention program, Adolescents Training and Learning to Avoid Steroids, were used. The program effects on the growth of the mediator and the growth of the outcome were examined first in a 2-group structural equation model. The mediational process was then modeled and tested in a parallel process latent growth curve model by relating the prevention program condition, the growth rate factor of the mediator, and the growth rate factor of the outcome. PMID:20157639
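    The mediated effect in such a model is typically quantified as the product of path a (program condition → growth rate of the mediator) and path b (mediator growth rate → outcome growth rate). A toy simulation of this product-of-coefficients logic (synthetic data, not the steroid-prevention study; all variable names are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
program = rng.integers(0, 2, n).astype(float)        # treatment condition (0/1)
med_slope = 0.5 * program + rng.normal(0, 1, n)      # growth rate of the mediator
out_slope = 0.8 * med_slope + rng.normal(0, 1, n)    # growth rate of the outcome

# path a: program -> mediator growth; path b: mediator growth -> outcome growth
a = np.polyfit(program, med_slope, 1)[0]
b = np.polyfit(med_slope, out_slope, 1)[0]
print(a * b)   # close to the true mediated effect 0.5 * 0.8 = 0.4
```

    A full parallel process latent growth curve model estimates these slopes as latent factors from the repeated measures rather than from observed growth rates, but the mediated effect is still the a*b product.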

  20. Application of the DMRG in two dimensions: a parallel tempering algorithm

    Science.gov (United States)

    Hu, Shijie; Zhao, Jize; Zhang, Xuefeng; Eggert, Sebastian

    The Density Matrix Renormalization Group (DMRG) is known to be a powerful algorithm for treating one-dimensional systems. When the DMRG is applied in two dimensions, however, the convergence becomes much less reliable, and "metastable states" typically appear which are unfortunately quite robust even when a very high number of DMRG states is kept. To overcome this problem we have now successfully developed a parallel tempering DMRG algorithm. Similar to parallel tempering in quantum Monte Carlo, this algorithm allows the systematic switching of DMRG states between different model parameters, which is very efficient for solving convergence problems. Using this method we have determined the phase diagram of the XXZ model on the anisotropic triangular lattice, which can be realized by hardcore bosons in optical lattices. This work was supported by SFB Transregio 49 of the Deutsche Forschungsgemeinschaft (DFG) and the Allianz für Hochleistungsrechnen Rheinland-Pfalz (AHRP).
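    For reference, the exchange step in Monte Carlo parallel tempering, which the DMRG variant adapts by swapping states between model parameters rather than temperatures, uses the standard Metropolis acceptance rule. A minimal sketch (generic replica exchange, not the DMRG-specific switching scheme):

```python
import math
import random

def swap_accepted(beta_i, e_i, beta_j, e_j, rng=random.random):
    """Metropolis acceptance for exchanging two replicas at inverse
    temperatures beta_i, beta_j with current energies e_i, e_j.

    The swap is accepted with probability
    min(1, exp((beta_i - beta_j) * (e_i - e_j))), which preserves
    detailed balance for the joint distribution of all replicas.
    """
    delta = (beta_i - beta_j) * (e_i - e_j)
    return delta >= 0 or rng() < math.exp(delta)

# a swap that lowers the "cold" replica's energy is always accepted
print(swap_accepted(1.0, -1.0, 0.5, -5.0))   # -> True
```

    In the tempering DMRG, the role of the temperature ladder is played by a ladder of model parameters, so a state trapped in a metastable configuration at one parameter can escape via a neighbouring replica where convergence is easy.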