WorldWideScience

Sample records for ucla parallel pic

  1. Status and future plans for open source QuickPIC

    Science.gov (United States)

    An, Weiming; Decyk, Viktor; Mori, Warren

    2017-10-01

    QuickPIC is a three-dimensional (3D) quasi-static particle-in-cell (PIC) code developed on the UPIC framework. It can be used for efficiently modeling plasma-based accelerator (PBA) problems. With the quasi-static approximation, QuickPIC can use different time scales for calculating the beam (or laser) evolution and the plasma response, and the 3D plasma wakefield can be simulated using a two-dimensional (2D) PIC code where the time variable is ξ = ct - z and z is the beam propagation direction. QuickPIC can be thousands of times faster than a conventional PIC code when simulating a PBA. It uses a hybrid MPI/OpenMP parallel algorithm and can run on anything from a laptop to the largest supercomputers. The open-source QuickPIC is an object-oriented program with high-level classes written in Fortran 2003. It can be found at https://github.com/UCLA-Plasma-Simulation-Group/QuickPIC-OpenSource.git
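
    The two-time-scale structure described in the abstract can be sketched in toy form: an outer loop advances the beam slowly while an inner 2D loop marches the plasma response in the co-moving variable ξ = ct - z. Everything below (the field arrays, the 0.1/0.01 coupling factors) is a hypothetical placeholder, not QuickPIC's actual numerics.

```python
import numpy as np

def quasi_static_step(beam_density, n_xi):
    """One beam time step: march a 2D plasma slab through the beam in
    the co-moving variable xi = c*t - z (toy placeholder physics)."""
    wake = np.zeros_like(beam_density)           # wakefield on the (xi, x) grid
    plasma_response = np.zeros(beam_density.shape[1])
    for i in range(n_xi):                        # fast scale: 2D slab in xi, not t
        plasma_response += 0.1 * beam_density[i]  # toy plasma update
        wake[i] = plasma_response
    return wake

n_xi, n_x = 8, 4
beam = np.ones((n_xi, n_x))                      # toy beam density
for step in range(3):                            # slow scale: beam evolution
    wake = quasi_static_step(beam, n_xi)
    beam += 0.01 * wake                          # toy beam response to the wake
```

    The speedup comes from the inner loop never needing the small time step of the outer one: the plasma is a 2D problem swept once per beam step.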

  2. Parallel pic plasma simulation through particle decomposition techniques

    International Nuclear Information System (INIS)

    Briguglio, S.; Vlad, G.; Di Martino, B.; Naples, Univ. 'Federico II'

    1998-02-01

    Particle-in-cell (PIC) codes are among the major candidates to yield a satisfactory description of the details of kinetic effects, such as the resonant wave-particle interaction, relevant in determining the transport mechanism in magnetically confined plasmas. A significant improvement in the simulation performance of such codes can be expected from parallelization, e.g., by distributing the particle population among several parallel processors. Parallelization of a hybrid magnetohydrodynamic-gyrokinetic code has been accomplished within the High Performance Fortran (HPF) framework, and tested on the IBM SP2 parallel system, using a 'particle decomposition' technique. The adopted technique requires a moderate effort in porting the code to parallel form and results in intrinsic load balancing and modest inter-processor communication. The performance tests confirm the hypothesis of the high effectiveness of the strategy if targeted towards moderately parallel architectures. Optimal use of resources is also discussed with reference to a specific physics problem.
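
    The 'particle decomposition' strategy (particles split across processors, the field grid replicated on each) can be mimicked serially: each chunk below stands in for one processor, nearest-grid-point deposition stands in for the real interpolation scheme, and the final sum over grid copies plays the role of the single inter-processor reduction (e.g. an MPI allreduce).

```python
import numpy as np

n_grid, n_particles, n_procs = 16, 1000, 4
rng = np.random.default_rng(0)
positions = rng.uniform(0, n_grid, n_particles)

def deposit(pos, n_grid):
    """Nearest-grid-point charge deposition (toy stand-in for the real scheme)."""
    rho = np.zeros(n_grid)
    np.add.at(rho, np.floor(pos).astype(int) % n_grid, 1.0)
    return rho

# Particle decomposition: each "processor" owns a slice of the particle
# array and deposits onto its own private copy of the replicated grid.
chunks = np.array_split(positions, n_procs)
local_rhos = [deposit(chunk, n_grid) for chunk in chunks]

# Summing the private copies gives the global density; in a real code this
# is one allreduce, the only inter-processor communication of the deposit.
rho_global = np.sum(local_rhos, axis=0)
```

    Load balancing is intrinsic because each processor holds the same number of particles regardless of where they sit in space.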

  3. A portable approach for PIC on emerging architectures

    Science.gov (United States)

    Decyk, Viktor

    2016-03-01

    A portable approach for designing Particle-in-Cell (PIC) algorithms on emerging exascale computers is based on the recognition that three distinct programming paradigms are needed: low-level vector (SIMD) processing, middle-level shared-memory parallel programming, and high-level distributed-memory programming. In addition, there is a memory hierarchy associated with each level. Such algorithms can be initially developed using vectorizing compilers, OpenMP, and MPI. This is the approach recommended by Intel for the Phi processor. These algorithms can then be translated and possibly specialized to other programming models and languages, as needed. For example, the vector processing and shared-memory programming might be done with CUDA instead of vectorizing compilers and OpenMP, but generally the algorithm itself is not greatly changed. The UCLA PICKSC web site at http://www.idre.ucla.edu/ contains example open-source skeleton codes (mini-apps) illustrating each of these three programming models, individually and in combination. Fortran 2003 now supports abstract data types, and design patterns can be used to support a variety of implementations within the same code base. Fortran 2003 also supports interoperability with C, so that implementations in C languages are also easy to use. Finally, main codes can be translated into dynamic environments such as Python, while still taking advantage of high-performing compiled languages. Parallel languages are still evolving, with interesting developments in Co-Array Fortran, UPC, and OpenACC, among others, and these can also be supported within the same software architecture. Work supported by NSF and DOE grants.
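
    The lowest of the three levels, vector (SIMD) processing, can be illustrated by writing the same particle push twice: once as a scalar loop (the form a vectorizing compiler must transform) and once as whole-array operations. The leapfrog update used here is a generic sketch, not taken from the PICKSC skeleton codes.

```python
import numpy as np

def push_scalar(x, v, E, dt, qm):
    """Scalar particle loop: the form a vectorizing compiler must transform."""
    for i in range(len(x)):
        v[i] += qm * E[i] * dt
        x[i] += v[i] * dt
    return x, v

def push_vector(x, v, E, dt, qm):
    """The same leapfrog update written as whole-array (SIMD-style) operations."""
    v = v + qm * E * dt
    x = x + v * dt
    return x, v

rng = np.random.default_rng(1)
x0, v0, E = rng.random(64), rng.random(64), rng.random(64)
xs, vs = push_scalar(x0.copy(), v0.copy(), E, 0.1, -1.0)
xv, vv = push_vector(x0.copy(), v0.copy(), E, 0.1, -1.0)
```

    The two higher levels (OpenMP threads over particle blocks, MPI over spatial domains) wrap around this kernel without changing it, which is the point of the layered approach.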

  4. High-Fidelity RF Gun Simulations with the Parallel 3D Finite Element Particle-In-Cell Code Pic3P

    Energy Technology Data Exchange (ETDEWEB)

    Candel, A; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Schussman, G.; Ko, K.; /SLAC

    2009-06-19

    SLAC's Advanced Computations Department (ACD) has developed the first parallel Finite Element 3D Particle-In-Cell (PIC) code, Pic3P, for simulations of RF guns and other space-charge-dominated beam-cavity interactions. Pic3P solves the complete set of Maxwell-Lorentz equations and thus includes space charge, retardation and wakefield effects from first principles. Pic3P uses higher-order Finite Element methods on unstructured conformal meshes. A novel scheme for causal adaptive refinement and dynamic load balancing enables unprecedented simulation accuracy, aiding the design and operation of the next generation of accelerator facilities. An application to the Linac Coherent Light Source (LCLS) RF gun is presented.

  5. Recent progress in 3D EM/EM-PIC simulation with ARGUS and parallel ARGUS

    International Nuclear Information System (INIS)

    Mankofsky, A.; Petillo, J.; Krueger, W.; Mondelli, A.; McNamara, B.; Philp, R.

    1994-01-01

    ARGUS is an integrated, 3-D, volumetric simulation model for systems involving electric and magnetic fields and charged particles, including materials embedded in the simulation region. The code offers the capability to carry out time-domain and frequency-domain electromagnetic simulations of complex physical systems. ARGUS offers a Boolean solid-model structure input capability that can include essentially arbitrary structures in the computational domain, and a modular architecture that allows multiple physics packages to access the same data structure and to share common code utilities. Physics modules are in place to compute electrostatic and electromagnetic fields, the normal modes of RF structures, and self-consistent particle-in-cell (PIC) simulation in either a time-dependent mode or a steady-state mode. The PIC modules include multiple particle species, the Lorentz equations of motion, and algorithms for the creation of particles by emission from material surfaces, injection onto the grid, and ionization. In this paper, we present an updated overview of ARGUS, with particular emphasis given to recent algorithmic and computational advances. These include a completely rewritten frequency-domain solver which efficiently treats lossy materials and periodic structures, a parallel version of ARGUS with support for both shared-memory parallel vector (e.g., CRAY) machines and distributed-memory massively parallel MIMD systems, and numerous new applications of the code.

  6. A parallel code named NEPTUNE for 3D fully electromagnetic and pic simulations

    International Nuclear Information System (INIS)

    Dong Ye; Yang Wenyuan; Chen Jun; Zhao Qiang; Xia Fang; Ma Yan; Xiao Li; Sun Huifang; Chen Hong; Zhou Haijing; Mao Zeyao; Dong Zhiwei

    2010-01-01

    A parallel code named NEPTUNE for 3D fully electromagnetic and particle-in-cell (PIC) simulations is introduced, which can run on Linux systems with hundreds to thousands of CPUs. NEPTUNE is suitable for simulating entire 3D HPM devices; many HPM devices have been simulated and designed using it. In the NEPTUNE code, the electromagnetic fields are updated using the finite-difference time-domain (FDTD) method to solve Maxwell's equations, and the particles are advanced using the Buneman-Boris method to solve the relativistic Newton-Lorentz equation. Electromagnetic fields and particles are coupled using the linear-weighting interpolation PIC method, and the electric field components are corrected using the Boris method of solving the Poisson equation in order to ensure charge conservation. The NEPTUNE code can construct many complicated geometric structures, such as arbitrary axially symmetric structures, plane transforming structures, slow-wave structures, coupling holes, foils, and so on. The boundary conditions used in the NEPTUNE code are introduced in brief, including the perfectly electric conductor boundary, the external wave boundary, and the particle boundary. Finally, some typical HPM devices are simulated and tested using the NEPTUNE code, including MILO, RBWO, VCO, and RKA. The simulation results show correct and credible physical images, and the parallel efficiencies are also given. (authors)
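
    The Buneman-Boris advance mentioned in the abstract is a standard algorithm; a nonrelativistic sketch of it (NEPTUNE uses the relativistic form) is shown below, with arbitrary example field values.

```python
import numpy as np

def boris_push(v, E, B, dt, qm):
    """One nonrelativistic Boris step: half electric kick, magnetic
    rotation, half electric kick; qm is the charge-to-mass ratio."""
    v_minus = v + 0.5 * qm * dt * E
    t = 0.5 * qm * dt * B                        # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    return v_plus + 0.5 * qm * dt * E

# In a pure magnetic field the Boris rotation conserves kinetic energy
# exactly, which is why it is the standard PIC particle pusher.
v = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
for _ in range(100):
    v = boris_push(v, np.zeros(3), B, 0.1, 1.0)
```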

  7. Massive parallel 3D PIC simulation of negative ion extraction

    Science.gov (United States)

    Revel, Adrien; Mochalskyy, Serhiy; Montellano, Ivar Mauricio; Wünderlich, Dirk; Fantz, Ursel; Minea, Tiberiu

    2017-09-01

    The 3D PIC-MCC code ONIX is dedicated to modeling Negative hydrogen/deuterium Ion (NI) extraction and the co-extraction of electrons from radio-frequency driven, low-pressure plasma sources. It provides valuable insight into the complex phenomena involved in the extraction process. In previous calculations, a mesh size larger than the Debye length was used, implying numerical electron heating. Important steps have been achieved in terms of computational performance and parallelization efficiency, allowing successful massively parallel calculations (4096 cores), imperative to resolve the Debye length. In addition, the numerical algorithms have been improved in terms of grid treatment, i.e., the electric field near the complex geometry boundaries (plasma grid) is calculated more accurately. The revised model preserves the full 3D treatment, but can take advantage of a highly refined mesh. ONIX was used to investigate the role of the mesh size, the re-injection scheme for lost particles (extracted or wall-absorbed), and the electron thermalization process on the calculated extracted current and plasma characteristics. It is demonstrated that all numerical schemes give the same NI current distribution for extracted ions. Concerning the electrons, the pair-injection technique is found to be well adapted to simulate the sheath in front of the plasma grid.
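
    The Debye-length constraint that drove the massive refinement can be checked in a few lines. The density, temperature, and cell size below are arbitrary illustrative values, not parameters from ONIX.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
QE = 1.602176634e-19      # elementary charge, C

def debye_length(n_e, T_e_eV):
    """Electron Debye length (m) for density n_e (m^-3), temperature T_e (eV)."""
    return math.sqrt(EPS0 * T_e_eV * QE / (n_e * QE**2))

# A cell larger than lambda_D under-resolves the sheath and produces the
# numerical electron heating the abstract describes.
lam = debye_length(1e17, 2.0)     # example low-pressure-source parameters
cell = 5e-5                       # hypothetical mesh size, m
resolved = cell <= lam
```

    For these example parameters λ_D ≈ 33 μm, so a 50 μm cell would already violate the constraint; halving the cell size in 3D multiplies the grid by eight, which is why resolving λ_D forces massively parallel runs.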

  8. PACS module image communication at UCLA

    International Nuclear Information System (INIS)

    Stewart, B.K.; Taira, R.K.; Cho, P.S.; Mankovich, N.J.

    1987-01-01

    The advent of the ACR-NEMA digital imaging and communication standard for PACS implementation between imaging, storage and display devices may simplify the networking problems inherent to PACS in the future. However, since the ACR-NEMA interface has not been implemented in manufactured products, the components of a PACS at the present time use various network interface designs, requiring substantial effort in the area of hardware and software integration. Many communication systems are used for the PACS implementation in Pediatric Radiology at UCLA, including baseband and broadband, as well as various parallel-line interface protocols, e.g. GP-IB. A VAX 11/750 minicomputer serves as the host computer for the UCLA Pediatric Radiology PACS system. Communication between the many peripherals takes place through the host computer, which acts as the central node. Several communication links have been established, primarily: host computer to other local computers, image processors, various peripherals (digitizers, storage media, etc.) and, of course, to the 512, 1024 and 2048 viewing stations.

  9. Fluctuations and transport in fusion plasmas. Final report

    International Nuclear Information System (INIS)

    Gould, R.W.; Liewer, P.C.

    1995-01-01

    The energy confinement in tokamaks is thought to be limited by transport caused by plasma turbulence. Three-dimensional plasma particle-in-cell (PIC) codes are used to model the turbulent transport in tokamaks in an attempt to understand this phenomenon so that tokamaks can be made more efficient. Presently, hundreds of hours of Cray time are used to model these experiments, and much bigger and longer runs are desired; modeling a large tokamak with realistic parameters is beyond the capability of existing sequential supercomputers. Parallel supercomputers might be a cost-effective tool for performing such large-scale 3D tokamak simulations. The goal of the work was to develop algorithms for performing PIC codes on coarse-grained message-passing parallel computers and to evaluate the performance of such parallel computers on PIC codes. This algorithm would be used in a large-scale PIC production code such as the UCLA 3D gyrokinetic code.

  10. PIC Detector for Piano Chords

    Directory of Open Access Journals (Sweden)

    Barbancho AnaM

    2010-01-01

    In this paper, a piano chord detector based on parallel interference cancellation (PIC) is presented. The proposed system makes use of the novel idea of modeling a segment of music as a third-generation mobile communications signal, specifically, as a CDMA (Code Division Multiple Access) signal. The proposed model considers each piano note as a CDMA user in which the spreading code is replaced by a representative note pattern. The lack of orthogonality between the note patterns makes it necessary to design a specific thresholding matrix to decide whether or not the PIC outputs correspond to the actual notes composing the chord. An additional stage that performs an octave test and a fifth test has been included, which improves the error rate in the detection of these intervals, which are especially difficult to detect. The proposed system attains very good results in both the detection of the notes that compose a chord and the estimation of the polyphony number.
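
    A single stage of parallel interference cancellation of the kind described can be sketched with synthetic note patterns standing in for the CDMA-like spreading codes; the random patterns, the 0.5 threshold, and the single-stage structure are illustrative choices, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_notes = 256, 12
patterns = rng.standard_normal((n_notes, n_samples))   # one "code" per note
patterns /= np.linalg.norm(patterns, axis=1, keepdims=True)

active = [0, 4, 7]                       # a major chord, as note indices
signal = patterns[active].sum(axis=0)    # the observed "CDMA" mixture

# Stage 1: matched-filter estimate of every note's amplitude.
est = patterns @ signal

# Stage 2 (the PIC step): for each note, subtract the estimated
# contribution of all *other* notes in parallel, then re-correlate.
refined = np.empty(n_notes)
for k in range(n_notes):
    others = est @ patterns - est[k] * patterns[k]
    refined[k] = patterns[k] @ (signal - others)

detected = sorted(np.flatnonzero(refined > 0.5))       # illustrative threshold
```

    Because the patterns are not orthogonal, the stage-1 estimates leak energy between notes; the cancellation step removes most of that leakage, which is what makes the subsequent thresholding reliable.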

  11. PIC 16 F84

    International Nuclear Information System (INIS)

    Jung, Gi Cheol; Min, Han Sik

    2001-11-01

    This book covers: an introduction to microprocessors; basics of microcomputer practice; an introduction to one-chip microcomputers; basic PIC commands; the instruction simulator and in-circuit emulator, including what a PIC simulator is and how to use MPLAB; building a PIC ROM writer and instructions for La's PIC Micro Programmer; PIC programming, learning commands through examples and controlling hardware with the C language; and practical PIC application projects, a line-tracer automobile and an ultrasonic radar, with circuits, source programs, and monitor programs.

  12. PIC 16 F84

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Gi Cheol; Min, Han Sik

    2001-11-15

    This book covers: an introduction to microprocessors; basics of microcomputer practice; an introduction to one-chip microcomputers; basic PIC commands; the instruction simulator and in-circuit emulator, including what a PIC simulator is and how to use MPLAB; building a PIC ROM writer and instructions for La's PIC Micro Programmer; PIC programming, learning commands through examples and controlling hardware with the C language; and practical PIC application projects, a line-tracer automobile and an ultrasonic radar, with circuits, source programs, and monitor programs.

  13. Selective Adaptive Parallel Interference Cancellation for Multicarrier DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Ahmed El-Sayed El-Mahdy

    2013-07-01

    In this paper, a Selective Adaptive Parallel Interference Cancellation (SA-PIC) technique is presented for the Multicarrier Direct Sequence Code Division Multiple Access (MC DS-CDMA) scheme. The motivation for using SA-PIC is that it gives high performance while reducing the computational complexity required to perform interference cancellation. An upper-bound expression for the bit error rate (BER) of SA-PIC under Rayleigh fading channel conditions is derived. Moreover, the implementation complexities of SA-PIC and Adaptive Parallel Interference Cancellation (APIC) are discussed and compared. The performance of SA-PIC is investigated analytically and validated via computer simulations.

  14. Adaptive DSP Algorithms for UMTS: Blind Adaptive MMSE and PIC Multiuser Detection

    NARCIS (Netherlands)

    Potman, J.

    2003-01-01

    A study of the application of blind adaptive Minimum Mean Square Error (MMSE) and Parallel Interference Cancellation (PIC) multiuser detection techniques to Wideband Code Division Multiple Access (WCDMA), the physical layer of Universal Mobile Telecommunication System (UMTS), has been performed as

  15. GaAs Photonic Integrated Circuit (PIC) development for high performance communications

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, C.T.

    1998-03-01

    Sandia has established a foundational technology in photonic integrated circuits (PICs) based on the (Al,Ga,In)As material system for optical communication, radar control and testing, and network switching applications at the important 1.3 μm/1.55 μm wavelengths. We investigated the optical, electrooptical, and microwave performance characteristics of the fundamental building-block PIC elements designed to be as simple and process-tolerant as possible, with particular emphasis placed on reducing optical insertion loss. Relatively conventional device array and circuit designs were built using these PIC elements: (1) to establish a baseline performance standard; (2) to assess the impact of epitaxial growth accuracy and uniformity, and of fabrication uniformity and yield; (3) to validate our theoretical and numerical models; and (4) to resolve the optical and microwave packaging issues associated with building fully packaged prototypes. Novel and more complex PIC designs and fabrication processes, viewed as higher payoff but higher risk, were explored in a parallel effort with the intention of meshing those advances into our baseline higher-yield capability as they mature. The application focus targeted the design and fabrication of packaged solitary modulators meeting the requirements of future wideband and high-speed analog and digital data links. Successfully prototyped devices are expected to feed into more complex PICs solving specific problems in high-performance communications, such as optical beamforming networks for phased array antennas.

  16. Programming 16-Bit PIC Microcontrollers in C Learning to Fly the PIC 24

    CERN Document Server

    Di Jasio, Lucio

    2011-01-01

    New in the second edition: * MPLAB X support and MPLAB C for the PIC24F v3 and later libraries * I2C™ interface * 100% assembly free solutions * Improved video, PAL/NTSC * Improved audio, RIFF files decoding * PIC24F GA1, GA2, GB1 and GB2 support   Most readers will associate Microchip's name with the ubiquitous 8-bit PIC microcontrollers but it is the new 16-bit PIC24F family that is truly stealing the scene. Orders of magnitude increases of performance, memory size and the rich peripheral set make programming these devices in C a must. This new guide by Microchip insid

  17. Partial PIC-MRC Receiver Design for Single Carrier Block Transmission System over Multipath Fading Channels

    Directory of Open Access Journals (Sweden)

    Juinn-Horng Deng

    2012-01-01

    Single carrier block transmission (SCBT) has become one of the most popular modulation systems due to its low peak-to-average power ratio (PAPR), and it is gradually being considered for uplink wireless communication systems. In this paper, a low-complexity partial parallel interference cancellation (PIC) with maximum ratio combining (MRC) technique is proposed for the receiver to combat the intersymbol interference (ISI) problem over multipath fading channels. With the aid of the MRC scheme, the proposed partial PIC technique can effectively perform interference cancellation and acquire the benefit of time diversity gain. Finally, the proposed system can be extended to multiple-antenna systems to provide excellent performance. Simulation results reveal that the proposed low-complexity partial PIC-MRC SIMO system can provide robust performance and outperform the conventional PIC and the iterative frequency-domain decision feedback equalizer (FD-DFE) systems in multipath fading channel environments.

  18. A 3D gyrokinetic particle-in-cell simulation of fusion plasma microturbulence on parallel computers

    Science.gov (United States)

    Williams, T. J.

    1992-12-01

    One of the grand challenge problems now supported by HPCC is the Numerical Tokamak Project. A goal of this project is the study of low-frequency micro-instabilities in tokamak plasmas, which are believed to cause energy loss via turbulent thermal transport across the magnetic field lines. An important tool in this study is gyrokinetic particle-in-cell (PIC) simulation. Gyrokinetic, as opposed to fully-kinetic, methods are particularly well suited to the task because they are optimized to study the frequency and wavelength domain of the micro-instabilities. Furthermore, many researchers now employ low-noise delta(f) methods to greatly reduce statistical noise by modelling only the perturbation of the gyrokinetic distribution function from a fixed background, not the entire distribution function. In spite of the increased efficiency of these improved algorithms over conventional PIC algorithms, gyrokinetic PIC simulations of tokamak micro-turbulence are still highly demanding of computer power, even for fully-vectorized codes on vector supercomputers. For this reason, we have worked for several years to redevelop these codes on massively parallel computers. We have developed 3D gyrokinetic PIC simulation codes for SIMD and MIMD parallel processors, using control-parallel, data-parallel, and domain-decomposition message-passing (DDMP) programming paradigms. This poster summarizes our earlier work on codes for the Connection Machine and BBN TC2000 and our development of a generic DDMP code for distributed-memory parallel machines. We discuss the memory-access issues which are of key importance in writing parallel PIC codes, with special emphasis on issues peculiar to gyrokinetic PIC. We outline the domain decompositions in our new DDMP code and discuss the interplay of different domain decompositions suited for the particle-pushing and field-solution components of the PIC algorithm.

  19. Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.

    Science.gov (United States)

    Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos

    2013-11-04

    In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise-limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance and the channel capacity of these systems, and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving two orders of magnitude of BER improvement and a capacity improvement of 0.1 nats over conventional chip-level OCDMA systems at a coding rate of 1/10.

  20. Numerical experiments on unstructured PIC stability.

    Energy Technology Data Exchange (ETDEWEB)

    Day, David Minot

    2011-04-01

    Particle-In-Cell (PIC) is a method for plasma simulation. Particles are pushed with Verlet time integration. Fields are modeled using finite differences on a tensor-product mesh (cells). The unstructured PIC methods studied here instead use finite element discretizations on unstructured (simplicial) meshes. PIC is constrained by stability limits (upper bounds) on mesh and time step sizes. Numerical evidence (2D) and analysis will be presented showing that similar bounds constrain unstructured PIC.
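
    The time-step side of these stability limits is the classic leapfrog bound ωΔt < 2 (with ω the plasma frequency in a PIC code); a one-particle numerical experiment, not taken from the report, shows the bound directly.

```python
def leapfrog_peak(omega_dt, n_steps=200):
    """Integrate x'' = -x with leapfrog (omega = 1) and return the largest
    |x| seen: bounded when omega*dt < 2, exponentially growing beyond."""
    dt = omega_dt                    # omega = 1, so dt equals omega*dt
    x, v = 1.0, 0.0
    v -= 0.5 * dt * x                # half kick to stagger the velocity
    peak = abs(x)
    for _ in range(n_steps):
        x += dt * v                  # drift
        v -= dt * x                  # kick
        peak = max(peak, abs(x))
    return peak

stable = leapfrog_peak(1.9)          # just inside the limit: bounded
unstable = leapfrog_peak(2.1)        # just outside: blows up
```

    The mesh-size bound (cells comparable to the Debye length, to avoid finite-grid instability) is the analogous spatial constraint the report examines for unstructured meshes.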

  1. Two-dimensional PIC-MCC simulation of ion extraction

    International Nuclear Information System (INIS)

    Xiong Jiagui; Wang Dewu

    2000-01-01

    To explore simpler and more efficient ion extraction methods for use in atomic vapor laser isotope separation (AVLIS), a two-dimensional (2D) PIC-MCC simulation code is used to simulate and compare several methods: the parallel electrode method, the II type electrode method, the improved M type electrode method, and the radio frequency (RF) resonance method. The simulations show that the RF resonance method without a magnetic field is the best of these, followed by the improved M type electrode method. The simulation result for the II type electrode method is quite different from that calculated by the 2D electron equilibrium model. The RF resonance method gives quite different results with and without a magnetic field: strong resonance occurs in the simulation without a magnetic field, whereas no significant resonance occurs under a weak magnetic field. This is quite different from the strong resonance phenomena occurring in the 1D PIC simulation with a weak magnetic field. As for practical applications, the RF resonance method without a magnetic field has pros and cons compared with the M type electrode method.

  2. (Nearly) portable PIC code for parallel computers

    International Nuclear Information System (INIS)

    Decyk, V.K.

    1993-01-01

    As part of the Numerical Tokamak Project, the author has developed a (nearly) portable, one-dimensional version of the GCPIC algorithm for particle-in-cell codes on parallel computers. This algorithm uses a spatial domain decomposition for the fields, and passes particles from one domain to another as the particles move spatially. With only minor changes, the code has been run in parallel on the Intel Delta, the Cray C-90, the IBM ES/9000 and a cluster of workstations. After a line-by-line translation into CM Fortran, the code was also run on the CM-200. Impressive speeds have been achieved, both on the Intel Delta and the Cray C-90, around 30 nanoseconds per particle per time step. In addition, the author was able to isolate the data management modules, so that the physics modules were not changed much from their sequential versions, and the data management modules can be used as "black boxes."
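
    The particle-passing step of the GCPIC domain decomposition can be mimicked serially: each bucket below stands in for one processor's spatial slab, and appending to another bucket stands in for the message-passing hand-off. The slab count, particle count, and uniform push are all arbitrary illustrative choices.

```python
import numpy as np

L, n_domains = 1.0, 4
edges = np.linspace(0.0, L, n_domains + 1)
rng = np.random.default_rng(7)

# Each "processor" initially owns the particles inside its spatial slab.
parts = rng.uniform(0, L, 200)
domains = [parts[(parts >= edges[d]) & (parts < edges[d + 1])].tolist()
           for d in range(n_domains)]

def push_and_exchange(domains, dx):
    """Move every particle, then hand off any that left its slab: the
    serial stand-in for the GCPIC inter-processor particle exchange."""
    moved = [[] for _ in domains]
    width = L / len(domains)
    for plist in domains:
        for x in plist:
            x = (x + dx) % L                       # periodic push
            dest = min(int(x / width), len(domains) - 1)
            moved[dest].append(x)                  # "send" to the new owner
    return moved

domains = push_and_exchange(domains, 0.3)
```

    Keeping this exchange logic in separate data management modules is what lets the physics modules stay close to their sequential form, as the abstract notes.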

  3. SD card projects using the PIC microcontroller

    CERN Document Server

    Ibrahim, Dogan

    2010-01-01

    PIC microcontrollers are a favorite in industry and with hobbyists. These microcontrollers are versatile, simple, and low cost, making them perfect for many different applications. The 8-bit PIC is widely used in consumer electronic goods, office automation, and personal projects. Dogan Ibrahim, author of several PIC books, has now written a book using the PIC18 family of microcontrollers to create projects with SD cards. This book is ideal for those practicing engineers, advanced students, and PIC enthusiasts who want to incorporate SD cards into their devices. SD cards are che

  4. Performance of DS-CDMA systems with optimal hard-decision parallel interference cancellation

    NARCIS (Netherlands)

    Hofstad, van der R.W.; Klok, M.J.

    2003-01-01

    We study a multiuser detection system for code-division multiple access (CDMA). We show that applying multistage hard-decision parallel interference cancellation (HD-PIC) significantly improves performance compared to the matched filter system. In (multistage) HD-PIC, estimates of the interfering

  5. PIC microcomputer guide for beginner

    International Nuclear Information System (INIS)

    Shin, Chulho

    2001-03-01

    This book comprises four parts. The first part deals with one-chip computers, voltage and current, resistance, electronic components, logic elements, TTL and CMOS, memory and I/O, and MDS. The second part is about the PIC16C84, describing its memory structure, registers, and PIC16C84 commands. The third part deals with an LED control program, a jet car LED, a quiz buzzer program, an LED spectrum, a digital dice, two digital dice, and a time bomb. The last part introduces the PIC16C71 and a temperature controller.

  6. Integrated Work Management: PIC, Course 31884

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Lewis Edward [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-08

    The person-in-charge (PIC) plays a key role in the integrated work management (IWM) process at Los Alamos National Laboratory (LANL, or the Laboratory) because the PIC is assigned responsibility and authority by the responsible line manager (RLM) for the overall validation, coordination, release, execution, and closeout of a work activity in accordance with IWM. This course, Integrated Work Management: PIC (Course 31884), describes the PIC’s IWM roles and responsibilities. This course also discusses IWM requirements that the PIC must meet. For a general overview of the IWM process, see self-study Course 31881, Integrated Work Management: Overview. For instruction on the preparer’s role, see self-study Course 31883, Integrated Work Management: Preparer.

  7. Parallel Finite Element Particle-In-Cell Code for Simulations of Space-charge Dominated Beam-Cavity Interactions

    International Nuclear Information System (INIS)

    Candel, A.; Kabel, A.; Ko, K.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.

    2007-01-01

    Over the past years, SLAC's Advanced Computations Department (ACD) has developed the parallel finite element (FE) particle-in-cell code Pic3P (Pic2P) for simulations of beam-cavity interactions dominated by space-charge effects. As opposed to standard space-charge-dominated beam transport codes, which are based on the electrostatic approximation, Pic3P (Pic2P) includes space-charge, retardation and boundary effects as it self-consistently solves the complete set of Maxwell-Lorentz equations using higher-order FE methods on conformal meshes. Use of efficient, large-scale parallel processing allows for the modeling of photoinjectors with unprecedented accuracy, aiding the design and operation of the next generation of accelerator facilities. Applications to the Linac Coherent Light Source (LCLS) RF gun are presented.

  8. Large Scale Earth's Bow Shock with Northern IMF as Simulated by PIC Code in Parallel with MHD Model

    Science.gov (United States)

    Baraka, Suleiman

    2016-06-01

    In this paper, we propose a 3D kinetic model (particle-in-cell, PIC) for the description of the large-scale Earth's bow shock. The proposed version is stable and does not require huge or extensive computer resources. Because PIC simulations work with scaled plasma and field parameters, we also propose to validate our code by comparing its results with the available MHD simulations under the same scaled solar wind (SW) and interplanetary magnetic field (IMF) conditions. We report new results from the two models. In both codes the Earth's bow shock position is found to be ≈14.8 R_E along the Sun-Earth line, and ≈29 R_E on the dusk side. These findings are consistent with past in situ observations. Both simulations reproduce the theoretical jump conditions at the shock. However, the PIC code density and temperature distributions are inflated and slightly shifted sunward when compared to the MHD results. Kinetic electron motions and reflected ions upstream may cause this sunward shift. Species distributions in the foreshock region are depicted within the transition of the shock (measured ≈2 c/ω_pi for Θ_Bn = 90° and M_MS = 4.7) and in the downstream. The size of the foot jump in the magnetic field at the shock is measured to be 1.7 c/ω_pi. In the foreshock region, the thermal velocity is found to be 213 km s⁻¹ at 15 R_E, and 63 km s⁻¹ at 12 R_E (magnetosheath region). Despite the large cell size of the current version of the PIC code, it is powerful enough to retain the macrostructure of planetary magnetospheres in a very short time, so it can be used for pedagogical test purposes. It is also complementary with MHD for deepening our understanding of the large-scale magnetosphere.

  9. SMILEI: A collaborative, open-source, multi-purpose PIC code for the next generation of super-computers

    Science.gov (United States)

    Grech, Mickael; Derouillat, J.; Beck, A.; Chiaramello, M.; Grassi, A.; Niel, F.; Perez, F.; Vinci, T.; Fle, M.; Aunai, N.; Dargent, J.; Plotnikov, I.; Bouchard, G.; Savoini, P.; Riconda, C.

    2016-10-01

    Over the last decades, Particle-In-Cell (PIC) codes have been central tools for plasma simulations. Today, new trends in High-Performance Computing (HPC) are emerging, dramatically changing HPC-relevant software design and leaving some - if not most - legacy codes far below the level of performance expected on the new and future massively parallel supercomputers. SMILEI is a new open-source PIC code co-developed by plasma physicists and HPC specialists, and applied to a wide range of physics studies: from laser-plasma interaction to astrophysical plasmas. It benefits from an innovative parallelization strategy that relies on a super-domain decomposition allowing for enhanced cache use and efficient dynamic load balancing. Beyond these HPC-related developments, SMILEI also benefits from additional physics modules that handle binary collisions, field and collisional ionization, and radiation back-reaction. This poster presents the SMILEI project and its HPC capabilities, and illustrates some of the physics problems tackled with SMILEI.

  10. Towards the optimization of a gyrokinetic Particle-In-Cell (PIC) code on large-scale hybrid architectures

    International Nuclear Information System (INIS)

    Ohana, N; Lanti, E; Tran, T M; Brunner, S; Hariri, F; Villard, L; Jocksch, A; Gheller, C

    2016-01-01

    With the aim of enabling state-of-the-art gyrokinetic PIC codes to benefit from the performance of recent multithreaded devices, we developed an application from a platform called the “PIC-engine” [1, 2, 3] embedding simplified basic features of the PIC method. The application solves the gyrokinetic equations in a sheared plasma slab using B-spline finite elements up to fourth order to represent the self-consistent electrostatic field. Preliminary studies of the so-called Particle-In-Fourier (PIF) approach, which uses Fourier modes as basis functions in the periodic dimensions of the system instead of the real-space grid, show that this method can be faster than PIC for simulations with a small number of Fourier modes. Similarly to the PIC-engine, multiple levels of parallelism have been implemented using MPI+OpenMP [2] and MPI+OpenACC [1], the latter exploiting the computational power of GPUs without requiring complete code rewriting. It is shown that sorting particles [3] can lead to performance improvement by increasing data locality and vectorizing grid memory access. Weak scalability tests have been successfully run on the GPU-equipped Cray XC30 Piz Daint (at CSCS) up to 4,096 nodes. The reduced time-to-solution will enable more realistic and thus more computationally intensive simulations of turbulent transport in magnetic fusion devices. (paper)
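    The particle-sorting optimization mentioned above can be illustrated with a minimal sketch. This is not the PIC-engine's actual implementation (which targets MPI+OpenMP/OpenACC); the function name and the nearest-grid-point deposition below are illustrative assumptions:

```python
import numpy as np

def sort_particles_by_cell(x, dx, nx):
    """Reorder particles so those in the same grid cell are contiguous
    in memory, improving cache locality during charge deposition.
    (Hypothetical helper, not taken from the PIC-engine source.)"""
    cells = (x / dx).astype(int).clip(0, nx - 1)
    order = np.argsort(cells, kind="stable")
    return x[order], cells[order]

# Nearest-grid-point charge deposition then touches grid memory
# sequentially, which is what enables vectorized grid access:
rng = np.random.default_rng(1)
nx, dx = 64, 1.0
x = rng.uniform(0.0, nx * dx, size=100_000)
x_sorted, cells = sort_particles_by_cell(x, dx, nx)
rho = np.bincount(cells, minlength=nx).astype(float)  # charge per cell
```

Because sorted particles in the same cell are adjacent in memory, the deposition loop writes to consecutive grid locations rather than scattering across the array, which is the data-locality gain the abstract reports.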

  11. Experimental and theoretical high energy physics research. [UCLA

    Energy Technology Data Exchange (ETDEWEB)

    Buchanan, Charles D.; Cline, David B.; Byers, N.; Ferrara, S.; Peccei, R.; Hauser, Jay; Muller, Thomas; Atac, Muzaffer; Slater, William; Cousins, Robert; Arisaka, Katsushi

    1992-01-01

    Progress in the various components of the UCLA High-Energy Physics Research program is summarized, including some representative figures and lists of resulting presentations and published papers. Principal efforts were directed at the following: (I) UCLA hadronization model, PEP4/9 e⁺e⁻ analysis, p̄ decay; (II) ICARUS and astroparticle physics (physics goals; technical progress on electronics, data acquisition, and detector performance; the long-baseline neutrino beam from CERN to the Gran Sasso and ICARUS; the future ICARUS program; and a WIMP experiment with xenon), B physics with hadron beams and colliders, high-energy collider physics, and the φ factory project; (III) theoretical high-energy physics; (IV) H dibaryon search, search for K_L^0 → π^0γγ and π^0νν̄, and detector design and construction for the FNAL-KTeV project; (V) UCLA participation in the CDF experiment at Fermilab; and (VI) VLPC/scintillating fiber R&D.

  12. GAP--a PIC-type fluid code

    International Nuclear Information System (INIS)

    Marder, B.M.

    1975-01-01

    GAP, a PIC-type fluid code for computing compressible flows, is described and demonstrated. While retaining some features of PIC, it is felt that the GAP approach is conceptually and operationally simpler. 9 figures

  13. Reliability and validity of the Danish version of the UCLA Loneliness Scale

    DEFF Research Database (Denmark)

    Lasgaard, Mathias

    2007-01-01

    The objective of this study was to examine the psychometric properties of a Danish version of the UCLA Loneliness Scale (UCLA). The 20-item scale was completed, along with other measures, by a national youth probability sample of 379 8th-grade students aged 13-17. The scale showed high internal consistency, and correlations between the UCLA and measures of emotional loneliness, social loneliness, self-esteem, depression, extraversion, and neuroticism supported the convergent and discriminant validity of the scale. Exploratory factor analysis supported a unidimensional structure of the measure. The results, highly comparable to the original version of the scale, indicate that the Danish version of the UCLA is a reliable and valid measure of loneliness.

  14. [PICS: pharmaceutical inspection cooperation scheme].

    Science.gov (United States)

    Morénas, J

    2009-01-01

    The pharmaceutical inspection cooperation scheme (PICS) is a structure comprising 34 participating authorities located worldwide (October 2008). It was created in 1995 on the basis of the pharmaceutical inspection convention (PIC), established by the European Free Trade Association (EFTA) in 1970. The scheme has several goals: to be an internationally recognised body in the field of good manufacturing practices (GMP) and to train inspectors (through an annual seminar and expert circles devoted notably to active pharmaceutical ingredients [API], quality risk management, and computerized systems, useful for writing inspection aide-memoires). PICS also promotes high standards for GMP inspectorates (through regular cross-audits) and provides a forum for exchanges on technical matters between inspectors, and between inspectors and the pharmaceutical industry.

  15. PIC Activation through Functional Interplay between Mediator and TFIIH.

    Science.gov (United States)

    Malik, Sohail; Molina, Henrik; Xue, Zhu

    2017-01-06

    The multiprotein Mediator coactivator complex functions in large part by controlling the formation and function of the promoter-bound preinitiation complex (PIC), which consists of RNA polymerase II and general transcription factors. However, precisely how Mediator impacts the PIC, especially post-recruitment, has remained unclear. Here, we have studied Mediator effects on basal transcription in an in vitro transcription system reconstituted from purified components. Our results reveal a close functional interplay between Mediator and TFIIH in the early stages of PIC development. We find that under conditions when TFIIH is not normally required for transcription, Mediator actually represses transcription. TFIIH, whose recruitment to the PIC is known to be facilitated by the Mediator, then acts to relieve Mediator-induced repression to generate an active form of the PIC. Gel mobility shift analyses of PICs and characterization of TFIIH preparations carrying mutant XPB translocase subunit further indicate that this relief of repression is achieved through expending energy via ATP hydrolysis, suggesting that it is coupled to TFIIH's established promoter melting activity. Our interpretation of these results is that Mediator functions as an assembly factor that facilitates PIC maturation through its various stages. Whereas the overall effect of the Mediator is to stimulate basal transcription, its initial engagement with the PIC generates a transcriptionally inert PIC intermediate, which necessitates energy expenditure to complete the process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Designing Embedded Systems with PIC Microcontrollers Principles and Applications

    CERN Document Server

    Wilmshurst, Tim

    2009-01-01

    PIC microcontrollers are used worldwide in commercial and industrial devices. The 8-bit PIC on which this book focuses is a versatile workhorse that completes many designs. An engineer working with applications that include a microcontroller will no doubt come across the PIC sooner rather than later, so a working knowledge of this 8-bit technology is a must. This book takes the novice from an introduction to embedded systems through to advanced development techniques for utilizing and optimizing the PIC family of microcontrollers in your device. To truly understand the PIC, assembly and

  17. Concurrent particle-in-cell plasma simulation on a multi-transputer parallel computer

    International Nuclear Information System (INIS)

    Khare, A.N.; Jethra, A.; Patel, Kartik

    1992-01-01

    This report describes the parallelization of a Particle-in-Cell (PIC) plasma simulation code on a multi-transputer parallel computer. The algorithm used in the parallelization of the PIC method is described. The decomposition schemes related to the distribution of the particles among the processors are discussed. The implementation of the algorithm on a transputer network connected as a torus is presented. The solutions of the problems related to global communication of data are presented in the form of a set of generalized communication functions. The performance of the program as a function of data size and the number of transputers shows that the implementation is scalable and represents an effective way of achieving high performance at acceptable cost. (author). 11 refs., 4 figs., 2 tabs., appendices
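    The particle-redistribution step that such a spatial decomposition requires can be sketched as follows. This is a hypothetical 1D illustration in Python, not the transputer code itself (which used channel communication on a torus); the function name is invented for the example:

```python
import numpy as np

def split_for_exchange(x, lo, hi):
    """Partition one processor's particles into those staying in its
    spatial slab [lo, hi) and those to ship to the left/right
    neighbour. (The actual send/receive would be done by the
    generalized communication functions on the transputer network.)"""
    left = x[x < lo]            # crossed the lower boundary
    right = x[x >= hi]          # crossed the upper boundary
    stay = x[(x >= lo) & (x < hi)]
    return stay, left, right

# Example: slab [1.0, 4.0) keeps three particles, ships one each way.
x = np.array([0.5, 1.2, 3.7, 4.1, 2.9])
stay, left, right = split_for_exchange(x, 1.0, 4.0)
```

After the split, each processor exchanges the `left`/`right` buffers with its torus neighbours and appends the received particles to its own array.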

  18. Dynamic load balancing in a concurrent plasma PIC code on the JPL/Caltech Mark III hypercube

    International Nuclear Information System (INIS)

    Liewer, P.C.; Leaver, E.W.; Decyk, V.K.; Dawson, J.M.

    1990-01-01

    Dynamic load balancing has been implemented in a concurrent one-dimensional electromagnetic plasma particle-in-cell (PIC) simulation code using a method which adds very little overhead to the parallel code. In PIC codes, the orbits of many interacting plasma electrons and ions are followed as an initial value problem as the particles move in electromagnetic fields calculated self-consistently from the particle motions. The code was implemented using the GCPIC algorithm in which the particles are divided among processors by partitioning the spatial domain of the simulation. The problem is load-balanced by partitioning the spatial domain so that each partition has approximately the same number of particles. During the simulation, the partitions are dynamically recreated as the spatial distribution of the particles changes in order to maintain processor load balance
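    The equal-particle partitioning behind GCPIC-style load balancing can be sketched in a few lines. This is a hypothetical 1D illustration assuming a cell-resolution histogram of particle positions; the names and the cell granularity are not taken from the original code:

```python
import numpy as np

def balanced_boundaries(x, nproc, nx, dx):
    """Choose slab boundaries (in units of grid cells) so each of the
    nproc spatial partitions holds roughly the same number of
    particles. Re-running this as particles move keeps the load
    balanced."""
    counts = np.bincount((x / dx).astype(int).clip(0, nx - 1), minlength=nx)
    cum = np.cumsum(counts)
    targets = cum[-1] * np.arange(1, nproc) / nproc
    cuts = np.searchsorted(cum, targets) + 1  # first cell of next slab
    return np.concatenate(([0], cuts, [nx]))

# A strongly non-uniform density still yields balanced partitions:
rng = np.random.default_rng(2)
x = rng.normal(loc=32.0, scale=5.0, size=50_000).clip(0.0, 63.999)
bounds = balanced_boundaries(x, nproc=4, nx=64, dx=1.0)
per_proc = [int(np.sum((x >= b0) & (x < b1)))
            for b0, b1 in zip(bounds[:-1], bounds[1:])]
```

Because boundaries fall on cell edges, the balance is only as fine as one cell's worth of particles, which is usually sufficient in practice.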

  19. Storage of Maize in Purdue Improved Crop Storage (PICS) Bags.

    Science.gov (United States)

    Williams, Scott B; Murdock, Larry L; Baributsa, Dieudonne

    2017-01-01

    Interest in using hermetic technologies as a pest management solution for stored grain has risen in recent years. One hermetic approach, Purdue Improved Crop Storage (PICS) bags, has proven successful in controlling the postharvest pests of cowpea. This success encouraged farmers to use PICS bags for storing other crops, including maize. To assess whether maize can be safely stored in PICS bags without loss of quality, we carried out laboratory studies of maize grain infested with Sitophilus zeamais (Motschulsky) and stored in PICS triple bags or in woven polypropylene bags. Over an eight-month observation period, temperatures in the bags correlated with ambient temperature for all treatments. Relative humidity inside PICS bags remained constant over this period despite the large changes that occurred in the surrounding environment, whereas relative humidity in the woven bags followed ambient humidity closely. PICS bags containing S. zeamais-infested grain saw a significant decline in oxygen compared to the other treatments. Grain moisture content declined in woven bags but remained high in PICS bags. Seed germination was not significantly affected over the first six months in any treatment, but declined after eight months of storage when infested grain was held in woven bags. Relative damage was low across treatments and not significantly different between them. Overall, maize showed no signs of deterioration in PICS bags versus woven bags, and PICS bags were superior to woven bags on specific metrics of grain quality.

  20. PICs in the injector complex - what are we talking about?

    International Nuclear Information System (INIS)

    Hanke, K

    2014-01-01

    This presentation will identify PIC activities for the LHC injector chain and point out borderline cases with respect to pure consolidation and upgrade. The most important PIC items will be listed for each LIU project (PSB, PS, SPS) and categorized by (a) the risk if not performed and (b) the implications of doing them. This will in particular address the consequences for performance, schedule, reliability, commissioning time, operational complexity, etc. The additional cost of PICs with regard to pure consolidation will be estimated, and possible timelines for the implementation of the PICs will be discussed. In this context, it will be evaluated whether the PICs can be implemented over several machine stops.

  1. Abstract Interpretation of PIC programs through Logic Programming

    DEFF Research Database (Denmark)

    Henriksen, Kim Steen; Gallagher, John Patrick

    2006-01-01

    A small PIC microcontroller is used as a case study. An emulator for this microcontroller is written in Prolog, and standard program transformations and analysis techniques are used to specialise this emulator with respect to a given PIC program. The specialised emulator can then be further analysed to gain insight into the given program for the PIC microcontroller. The method describes a general framework for applying abstractions, illustrated here by linear constraints and convex hull analysis, to logic programs.

  2. Metal Detector By Using PIC Microcontroller Interfacing With PC

    Directory of Open Access Journals (Sweden)

    Yin Min Theint

    2015-06-01

    Full Text Available Abstract This system proposes a metal detector using a PIC microcontroller interfaced with a PC. The system uses a PIC microcontroller as the main controller to decide whether the detected metal is ferrous or non-ferrous. Among the various types of metal sensors and metal-detecting technologies, a concentric-type induction coil sensor and VLF (very low frequency) detection technology are used in this system. The system consists of two configurations: hardware and software. The hardware components include induction coil sensors, which sense the frequency change caused by metal; a PIC microcontroller; a personal computer (PC); a buzzer; a light-emitting diode (LED); and a webcam. The software configuration includes a program controller interface. The mikroC programming language is used to implement the control system, which is based on the PIC16F887 microcontroller. The system is mainly intended for mining and for high-security places such as airports, plazas, shopping malls, and governmental buildings.

  3. Laser wakefields at UCLA and LLNL

    International Nuclear Information System (INIS)

    Mori, W.B.; Clayton, C.E.; Joshi, C.; Dawson, J.M.; Decker, C.B.; Marsh, K.; Katsouleas, T.; Darrow, C.B.; Wilks, S.C.

    1991-01-01

    The authors report on recent progress at UCLA and LLNL on the nonlinear laser wakefield scheme. They find advantages to operating in the limit where the laser pulse is narrow enough to expel all the plasma electrons from the focal region. A description of the experimental program for the new short pulse 10 TW laser facility at LLNL is also presented

  4. REFORMA/UCLA Mentor Program: A Mentoring Manual.

    Science.gov (United States)

    Tauler, Sandra

    Although mentoring dates back to Greek mythology, the concept continues to thrive in today's society. Mentoring is a strategy that successful people have known about for centuries. The REFORMA/UCLA Mentor Program has made use of this strategy since its inception in November 1985 at the Graduate School of Library and Information Science at the…

  5. PIC simulations of the trapped electron filamentation instability in finite-width electron plasma waves

    Science.gov (United States)

    Winjum, B. J.; Banks, J. W.; Berger, R. L.; Cohen, B. I.; Chapman, T.; Hittinger, J. A. F.; Rozmus, W.; Strozzi, D. J.; Brunner, S.

    2012-10-01

    We present results on the kinetic filamentation of finite-width nonlinear electron plasma waves (EPW). Using 2D simulations with the PIC code BEPS, we excite a traveling EPW with a Gaussian transverse profile and a wavenumber k_0 λ_De = 1/3. The transverse wavenumber spectrum broadens during transverse EPW localization for small-width (but sufficiently large amplitude) waves, while the spectrum narrows to a dominant k as the initial EPW width increases to the plane-wave limit. For large EPW widths, filaments can grow and destroy the wave coherence before transverse localization destroys the wave; the filaments in turn evolve individually as self-focusing EPWs. Additionally, a transverse electric field develops that affects trapped electrons, and a beam-like distribution of untrapped electrons develops between filaments and on the sides of a localizing EPW. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the Laboratory Research and Development Program at LLNL under project tracking code 12-ERD-061. Supported also under Grants DE-FG52-09NA29552 and NSF-Phy-0904039. Simulations were performed on UCLA's Hoffman2 and NERSC's Hopper.

  6. TreePics: visualizing trees with pictures

    Directory of Open Access Journals (Sweden)

    Nicolas Puillandre

    2017-09-01

    Full Text Available While many programs are available to edit phylogenetic trees, associating pictures with branch tips in an efficient and automatic way is not an available option. Here, we present TreePics, a standalone software that uses a web browser to visualize phylogenetic trees in Newick format and that associates pictures (typically, pictures of the voucher specimens) with the tip of each branch. Pictures are visualized as thumbnails and can be enlarged by a mouse rollover. Further, several pictures can be selected and displayed in a separate window for visual comparison. TreePics works either online or in a full standalone version, where it can display trees with several thousand pictures (depending on the memory available). We argue that TreePics can be particularly useful in a preliminary stage of research, for example to quickly detect conflicts between a DNA-based phylogenetic tree and morphological variation, which may be due to contamination that needs to be removed prior to final analyses, or to the presence of species complexes.

  7. MHD PbLi experiments in MaPLE loop at UCLA

    International Nuclear Information System (INIS)

    Courtessole, C.; Smolentsev, S.; Sketchley, T.; Abdou, M.

    2016-01-01

    Highlights: • The paper overviews the MaPLE facility at UCLA: one of only a few PbLi MHD loops in the world. • We present the progress achieved in development and testing of high-temperature PbLi flow diagnostics. • The most important MHD experiments carried out since the first loop operation in 2011 are summarized. - Abstract: Experiments on magnetohydrodynamic (MHD) flows are critical to understanding complex flow phenomena in ducts of liquid metal blankets, in particular those that utilize eutectic alloy lead–lithium as breeder/coolant, such as self-cooled, dual-coolant and helium-cooled lead–lithium blanket concepts. The primary goal of MHD experiments at UCLA using the liquid metal flow facility called MaPLE (Magnetohydrodynamic PbLi Experiment) is to address important MHD effects, heat transfer and flow materials interactions in blanket-relevant conditions. The paper overviews the one-of-a-kind MaPLE loop at UCLA and presents recent experimental activities, including the development and testing of high-temperature PbLi flow diagnostics and experiments that have been performed since the first loop operation in 2011. We also discuss MaPLE upgrades, which need to be done to substantially expand the experimental capabilities towards a new class of MHD flow phenomena that includes buoyancy effects.

  8. MHD PbLi experiments in MaPLE loop at UCLA

    Energy Technology Data Exchange (ETDEWEB)

    Courtessole, C., E-mail: cyril@fusion.ucla.edu; Smolentsev, S.; Sketchley, T.; Abdou, M.

    2016-11-01

    Highlights: • The paper overviews the MaPLE facility at UCLA: one of only a few PbLi MHD loops in the world. • We present the progress achieved in development and testing of high-temperature PbLi flow diagnostics. • The most important MHD experiments carried out since the first loop operation in 2011 are summarized. - Abstract: Experiments on magnetohydrodynamic (MHD) flows are critical to understanding complex flow phenomena in ducts of liquid metal blankets, in particular those that utilize eutectic alloy lead–lithium as breeder/coolant, such as self-cooled, dual-coolant and helium-cooled lead–lithium blanket concepts. The primary goal of MHD experiments at UCLA using the liquid metal flow facility called MaPLE (Magnetohydrodynamic PbLi Experiment) is to address important MHD effects, heat transfer and flow materials interactions in blanket-relevant conditions. The paper overviews the one-of-a-kind MaPLE loop at UCLA and presents recent experimental activities, including the development and testing of high-temperature PbLi flow diagnostics and experiments that have been performed since the first loop operation in 2011. We also discuss MaPLE upgrades, which need to be done to substantially expand the experimental capabilities towards a new class of MHD flow phenomena that includes buoyancy effects.

  9. UCLA accelerator research ampersand development. Progress report

    International Nuclear Information System (INIS)

    1997-01-01

    This report discusses work on advanced accelerators and beam dynamics at ANL, BNL, SLAC, UCLA and Pulse Sciences Incorporated. Discussed in this report are the following concepts: Wakefield acceleration studies; plasma lens research; high gradient rf cavities and beam dynamics studies at the Brookhaven accelerator test facility; rf pulse compression development; and buncher systems for high gradient accelerator and relativistic klystron applications

  10. Evaluation of stability of interface between CCM (Co-Cr-Mo) UCLA abutment and external hex implant.

    Science.gov (United States)

    Yoon, Ki-Joon; Park, Young-Bum; Choi, Hyunmin; Cho, Youngsung; Lee, Jae-Hoon; Lee, Keun-Woo

    2016-12-01

    The purpose of this study was to evaluate the stability of the interface between a Co-Cr-Mo (CCM) UCLA abutment and an external hex implant. Sixteen external hex implant fixtures were assigned to two groups (CCM and Gold groups) and were embedded in molds using clear acrylic resin. Screw-retained prostheses were constructed using CCM UCLA abutments and Gold UCLA abutments. The external implant fixtures and screw-retained prostheses were connected using abutment screws. After the abutments were tightened to 30 Ncm torque, 5 kg thermocyclic functional loading was applied by a chewing simulator, with a target of 1.0 × 10⁶ cycles. After cyclic loading, removal torque values were recorded using a driving torque tester, and the interface between implant fixture and abutment was evaluated by scanning electron microscopy (SEM). The means and standard deviations (SD) of the CCM and Gold groups were compared with an independent t-test at the significance level of 0.05. Fractures of crowns, abutments, abutment screws, and fixtures, and loosening of abutment screws, were not observed after thermocyclic loading. There were no statistically significant differences in the recorded removal torque values between the CCM and Gold groups (P > .05). SEM analysis revealed remarkable wear patterns at the abutment interface only for Gold UCLA abutments; these patterns were not observed for the other specimens. Within the limits of this study, the CCM UCLA abutment shows no statistically significant difference from the Gold UCLA abutment in the stability of its interface with an external hex implant.

  11. Plasma Physics Calculations on a Parallel Macintosh Cluster

    Science.gov (United States)

    Decyk, Viktor; Dauger, Dean; Kokelaar, Pieter

    2000-03-01

    We have constructed a parallel cluster consisting of 16 Apple Macintosh G3 computers running the MacOS, and achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. For large problems where message packets are large and relatively few in number, performance of 50-150 MFlops/node is possible, depending on the problem. This is fast enough that 3D calculations can be routinely done. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. Full details are available on our web site: http://exodus.physics.ucla.edu/appleseed/.

  12. PICS bags safely store unshelled and shelled groundnuts in Niger.

    Science.gov (United States)

    Baributsa, D; Baoua, I B; Bakoye, O N; Amadou, L; Murdock, L L

    2017-05-01

    We conducted an experiment in Niger to evaluate the performance of hermetic triple-layer (Purdue Improved Crop Storage, PICS) bags for the preservation of shelled and unshelled groundnut, Arachis hypogaea L. Naturally infested groundnut was stored in PICS bags and woven bags for 6.7 months. After storage, the average oxygen level in the PICS bags fell from 21% to 18% (v/v) for unshelled groundnut and from 21% to 15% (v/v) for shelled groundnut. The pests identified in the stored groundnuts were Tribolium castaneum (Herbst), Corcyra cephalonica (Stainton), and Cryptolestes ferrugineus (Stephens). After 6.7 months of storage in the woven bag, there was a large increase in the pest population, accompanied by a weight loss of 8.2% for unshelled groundnuts and 28.7% for shelled groundnuts. In PICS bags, by contrast, for both shelled and unshelled groundnuts the density of insect pests did not increase, there was no weight loss, and the germination rate was the same as that recorded at the beginning of the experiment. Storing shelled groundnuts in PICS bags is the most cost-effective option, as it increases the quantity of grain stored.

  13. Massively parallel computation of PARASOL code on the Origin 3800 system

    International Nuclear Information System (INIS)

    Hosokawa, Masanari; Takizuka, Tomonori

    2001-10-01

    The divertor particle simulation code named PARASOL simulates open-field plasmas between divertor walls self-consistently by using an electrostatic PIC method and a binary collision Monte Carlo model. PARASOL, parallelized with MPI-1.1 for scalar parallel computers, ran on an Intel Paragon XP/S system. An SGI Origin 3800 system was newly installed (May 2001), and the parallel programming was improved at this switchover. As a result of the high-performance new hardware and this improvement, PARASOL is sped up by about 60 times with the same number of processors. (author)
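    The binary-collision Monte Carlo step mentioned above relies on randomly pairing particles within each cell at every time step. The following is a simplified sketch of the pairing stage only (Takizuka-Abe-style pairing; the scattering-angle sampling and PARASOL's actual treatment of odd particle counts are omitted, and the function name is invented for the example):

```python
import numpy as np

def pair_particles_in_cell(indices, rng):
    """Randomly pair the particles belonging to one cell for binary
    Coulomb collisions. With an odd count, the last particle is
    paired with an already-used one (a simplification of the
    standard triplet handling)."""
    idx = rng.permutation(indices)
    if len(idx) % 2 == 1:
        idx = np.append(idx, idx[0])
    return idx.reshape(-1, 2)

# Example: 7 particles in a cell yield 4 collision pairs.
rng = np.random.default_rng(3)
pairs = pair_particles_in_cell(np.arange(7), rng)
```

Each pair would then undergo a random small-angle scattering consistent with the local density and temperature, which is where the Monte Carlo sampling enters.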

  14. CSE-UCLA Evaluation in a Study of the Mathematics Learning Process

    Directory of Open Access Journals (Sweden)

    Siska Andriani

    2015-12-01

    Full Text Available The process standard is one of the national standards governing the planning, implementation, assessment, and supervision of the learning process. In practice, implementation of the process standard has not yet been clearly observed. The aim of this study was to obtain a description of the implementation of the process standard in the mathematics learning process using CSE-UCLA analysis at SMP Negeri Satu Atap Lerep. This is a qualitative study with an evaluative approach. The main data source was the mathematics teacher. Data were collected through interviews, observation, and documentation. Data trustworthiness was established through credibility tests (triangulation and adequacy of reference materials), a transferability test, and a dependability test. The results show that the mathematics learning process at SMP Negeri Satu Atap Lerep already follows the process standard. Implementation of the process standard analyzed with CSE-UCLA shows that it is carried out through the stages of planning, development, implementation, results, and impact; the resulting impact on learning, however, is not optimal. In addition, many factors, both supporting and inhibiting, influence the implementation of the process standard in mathematics learning at SMP Negeri Satu Atap Lerep.

  15. Investigating plasma-rotation methods for the Space-Plasma Physics Campaign at UCLA's BAPSF.

    Science.gov (United States)

    Finnegan, S. M.; Koepke, M. E.; Reynolds, E. W.

    2006-10-01

    In D'Angelo et al., JGR 79, 4747 (1974), rigid-body ExB plasma flow was inferred from parabolic floating-potential profiles produced by a spiral ionizing surface. Here, taking a different approach, we report effects on barium-ion azimuthal-flow profiles using either a non-emissive or emissive spiral end-electrode in the WVU Q-machine. Neither electrode produced a radially-parabolic space-potential profile. The emissive spiral, however, generated controllable, radially-parabolic structure in the floating potential, consistent with a second population of electrons having a radially-parabolic parallel-energy profile. Laser-induced-fluorescence measurements of spatially resolved, azimuthal-velocity distribution functions show that, for a given flow profile, the diamagnetic drift of hot (>>0.2eV) ions overwhelms the ExB-drift contribution. Our experiments constitute a first attempt at producing controllable, rigid-body, ExB plasma flow for future experiments on the LArge-Plasma-Device (LAPD), as part of the Space-Plasma Physics Campaign (at UCLA's BAPSF).

  16. UCLA Translational Biomarker Development Program (UTBD)

    Energy Technology Data Exchange (ETDEWEB)

    Czernin, Johannes [Univ. of California, Los Angeles, CA (United States)

    2014-09-01

    The proposed UTBD program integrates the sciences of diagnostic nuclear medicine and (radio)chemistry with tumor biology and drug development. UTBD aims to translate new PET biomarkers for personalized medicine and to provide examples for the use of PET to determine pharmacokinetic (PK) and pharmacodynamic (PD) drug properties. The program builds on an existing partnership between the Ahmanson Translational Imaging Division (ATID) and the Crump Institute of Molecular Imaging (CIMI), the UCLA Department of Chemistry and the Division of Surgical Oncology. ATID provides the nuclear medicine training program, clinical and preclinical PET/CT scanners, biochemistry and biology labs for probe and drug development, radiochemistry labs, and two cyclotrons. CIMI provides DOE and NIH-funded training programs for radio-synthesis (START) and molecular imaging (SOMI). Other participating entities at UCLA are the Department of Chemistry and Biochemistry and the Division of Surgical Oncology. The first UTBD project focuses on deoxycytidine kinase, a rate-limiting enzyme in nucleotide metabolism, which is expressed in many cancers. Deoxycytidine kinase (dCK) positive tumors can be targeted uniquely by two distinct therapies: 1) nucleoside analog prodrugs such as gemcitabine (GEM) are activated by dCK to cytotoxic antimetabolites; 2) recently developed small molecule dCK inhibitors kill tumor cells by starving them of nucleotides required for DNA replication and repair. Since dCK-specific PET probes are now available, PET imaging of tumor dCK activity could improve the use of two different classes of drugs in a wide variety of cancers.

  17. Electron acceleration in the Solar corona - 3D PiC code simulations of guide field reconnection

    Science.gov (United States)

    Alejandro Munoz Sepulveda, Patricio

    2017-04-01

    The efficient electron acceleration in the solar corona, detected by means of hard X-ray emission, is still not well understood. Magnetic reconnection through current sheets is one of the proposed production mechanisms of non-thermal electrons in solar flares. Previous work in this direction was based mostly on test-particle calculations or 2D fully kinetic PiC simulations. We have now studied the consequences of self-generated current-aligned instabilities on the electron acceleration mechanisms in 3D magnetic reconnection. To this end, we carried out 3D Particle-in-Cell (PiC) code numerical simulations of force-free reconnecting current sheets, appropriate for the description of solar coronal plasmas. We find efficient electron energization, evidenced by the formation of a non-thermal power-law tail with a hard spectral index smaller than -2 in the electron energy distribution function. We discuss and compare the influence of the parallel electric field versus the curvature and gradient drifts, in the guiding-center approximation, on the overall acceleration, and their dependence on different plasma parameters.

  18. 3-D electromagnetic plasma particle simulations on the Intel Delta parallel computer

    International Nuclear Information System (INIS)

    Wang, J.; Liewer, P.C.

    1994-01-01

    A three-dimensional electromagnetic PIC code has been developed on the 512-node Intel Touchstone Delta MIMD parallel computer. This code is based on the General Concurrent PIC algorithm, which uses a domain decomposition to divide the computation among the processors. The 3D simulation domain can be partitioned into 1-, 2-, or 3-dimensional sub-domains. Particles must be exchanged between processors as they move among the sub-domains. The Intel Delta allows one to use this code for very-large-scale simulations (i.e., over 10^8 particles and 10^6 grid cells). The parallel efficiency of this code is measured, and the overall code performance on the Delta is compared with that on Cray supercomputers. It is shown that the code runs with a high parallel efficiency of ≥95% for large problem sizes. The particle push time achieved is 115 nsec/particle/time step for 162 million particles on 512 nodes. Compared with the performance on a single-processor Cray C90, this represents a factor of 58 speedup. The code uses a finite-difference leap-frog method for the field solve, which is significantly more efficient than fast Fourier transforms on parallel computers. The performance of this code on the 128-node Cray T3D will also be discussed
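
    The particle-exchange step described above can be sketched serially: particles are binned by the sub-domain that owns their position, mimicking the send/receive pattern of a 1-D decomposition. The layout and function name below are illustrative assumptions, not the actual General Concurrent PIC implementation.

```python
import numpy as np

def exchange_particles(x, nprocs, L):
    """Assign particles (positions x in [0, L)) to equal 1-D subdomains
    and return, per processor, the indices it owns after an exchange step.
    A serial stand-in for the message-passing particle trade (assumed layout)."""
    dx = L / nprocs                              # subdomain width
    owner = np.minimum((x // dx).astype(int), nprocs - 1)
    return [np.where(owner == p)[0] for p in range(nprocs)]

x = np.array([0.1, 0.9, 1.4, 2.7, 3.3])          # particle positions
parts = exchange_particles(x, 4, 4.0)
print([p.tolist() for p in parts])               # [[0, 1], [2], [3], [4]]
```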

  19. Searching for Short GRBs in Soft Gamma Rays with INTEGRAL/PICsIT

    DEFF Research Database (Denmark)

    Rodi, James; Bazzano, Angela; Ubertini, Pietro

    spectral information about these sources at soft gamma-ray energies. We have begun a study of PICsIT data for faint SGRBs similar to the one associated with the binary neutron star (BNS) merger GW170817, and are also preparing for future GW triggers by developing a real-time burst analysis for PICsIT. Searching the PICsIT data for significant excesses during ~30 min-long pointings containing times of SGRBs, we have been able to differentiate between SGRBs and spurious events. Also, this work allows us to assess what fraction of reported SGRBs have been detected by PICsIT, which can be used to provide...

  20. 46 CFR 13.301 - Original application for “Tankerman-PIC (Barge)” endorsement.

    Science.gov (United States)

    2010-10-01

    46 CFR 13.301 (Shipping; Coast Guard, Department of Homeland Security; Merchant Marine): Original application for “Tankerman-PIC (Barge)” endorsement. Each applicant for a “Tankerman-PIC...

  1. Performance of PICS bags under extreme conditions in the Sahel zone of Niger.

    Science.gov (United States)

    Baoua, Ibrahim B; Bakoye, Ousmane; Amadou, Laouali; Murdock, Larry L; Baributsa, Dieudonne

    2018-03-01

    Experiments in Niger assessed whether extreme environmental conditions, including sunlight exposure, affect the performance of triple-layer PICS bags in protecting cowpea grain against bruchids. Sets of PICS bags, and woven polypropylene bags as controls, each containing 50 kg of naturally infested cowpea grain, were held in the laboratory or outside with sun exposure for four and one-half months. PICS bags held either inside or outside exhibited no significant increase in insect damage and no loss in weight after 4.5 months of storage compared to the initial values. By contrast, woven bags stored inside or outside, side by side with the PICS bags, showed several-fold increases in insects present in or on the grain and significant losses in grain weight. Grain stored in PICS bags inside showed no reduction in germination versus the initial value, but there was a small but significant drop in germination of grain in PICS bags held outside (7.6%). Germination rates dropped substantially more in grain stored in woven bags inside (16.1%) and still more in woven bags stored outside (60%). PICS bags held inside and outside retained their ability to maintain reduced internal levels of oxygen and elevated levels of carbon dioxide. Exposure to extreme environmental conditions degraded the outer polypropylene layer of the PICS triple-layer bag; even so, the internal polyethylene layers degraded more slowly. The effects of exposure to sunlight, and of temperature and humidity variation within the sealed bags, are described.

  2. A parallel implementation of particle tracking with space charge effects on an INTEL iPSC/860

    International Nuclear Information System (INIS)

    Chang, L.; Bourianoff, G.; Cole, B.; Machida, S.

    1993-05-01

    Particle-tracking simulation is one of the scientific applications that is well-suited to parallel computations. At the Superconducting Super Collider, it has been theoretically and empirically demonstrated that particle tracking on a designed lattice can achieve very high parallel efficiency on a MIMD Intel iPSC/860 machine. The key to such success is the realization that the particles can be tracked independently without considering their interaction. The perfectly parallel nature of particle tracking is broken if the interaction effects between particles are included. The space charge introduces an electromagnetic force that will affect the motion of tracked particles in 3-D space. For accurate modeling of the beam dynamics with space charge effects, one needs to solve three-dimensional Maxwell field equations, usually by a particle-in-cell (PIC) algorithm. This will require each particle to communicate with its neighbor grids to compute the momentum changes at each time step. It is expected that the 3-D PIC method will degrade parallel efficiency of particle-tracking implementation on any parallel computer. In this paper, we describe an efficient scheme for implementing particle tracking with space charge effects on an INTEL iPSC/860 machine. Experimental results show that a parallel efficiency of 75% can be obtained
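
    The grid coupling that breaks the "perfectly parallel" nature of tracking is the PIC deposit step, in which each particle communicates its charge to neighbouring grid points. A minimal 1-D cloud-in-cell deposition, a standard PIC kernel shown here for illustration (not the SSC code itself), looks like:

```python
import numpy as np

def deposit_cic(x, q, ngrid, dx):
    """Cloud-in-cell charge deposition on a periodic 1-D grid: each
    particle's charge q is split linearly between the two grid points
    bracketing its position x."""
    rho = np.zeros(ngrid)
    xi = x / dx                               # position in grid units
    i0 = np.floor(xi).astype(int) % ngrid     # left-hand node
    w1 = xi - np.floor(xi)                    # weight to right-hand node
    np.add.at(rho, i0, q * (1.0 - w1))        # unbuffered: handles repeats
    np.add.at(rho, (i0 + 1) % ngrid, q * w1)
    return rho / dx

rho = deposit_cic(np.array([0.5, 2.25]), 1.0, 4, 1.0)
print(rho.tolist())   # [0.5, 0.5, 0.75, 0.25]; total charge is conserved
```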

  3. The Particle-in-Cell and Kinetic Simulation Software Center

    Science.gov (United States)

    Mori, W. B.; Decyk, V. K.; Tableman, A.; Fonseca, R. A.; Tsung, F. S.; Hu, Q.; Winjum, B. J.; An, W.; Dalichaouch, T. N.; Davidson, A.; Hildebrand, L.; Joglekar, A.; May, J.; Miller, K.; Touati, M.; Xu, X. L.

    2017-10-01

    The UCLA Particle-in-Cell and Kinetic Simulation Software Center (PICKSC) aims to support an international community of PIC and plasma kinetic software developers, users, and educators; to increase the use of this software for accelerating the rate of scientific discovery; and to be a repository of knowledge and history for PIC. We discuss progress towards making available and documenting illustrative open-source software programs and distinct production programs; developing and comparing different PIC algorithms; coordinating the development of resources for the educational use of kinetic software; and the outcomes of our first sponsored OSIRIS users workshop. We also welcome input and discussion from anyone interested in using or developing kinetic software, in obtaining access to our codes, in collaborating, in sharing their own software, or in commenting on how PICKSC can better serve the DPP community. Supported by NSF under Grant ACI-1339893 and by the UCLA Institute for Digital Research and Education.

  4. SAPS simulation with GITM/UCLA-RCM coupled model

    Science.gov (United States)

    Lu, Y.; Deng, Y.; Guo, J.; Zhang, D.; Wang, C. P.; Sheng, C.

    2017-12-01

    Ion velocity in the subauroral region observed by satellites during storm time often shows a significant westward component. This high-speed westward stream is distinct from the convection pattern; such events are called subauroral polarization streams (SAPS). During the 17 March 2013 storm, the DMSP F18 satellite observed several SAPS cases when crossing the subauroral region. In this study, the Global Ionosphere Thermosphere Model (GITM) has been coupled to the UCLA-RCM model to simulate the impact of SAPS during the March 2013 event on the ionosphere/thermosphere. The particle precipitation and electric field from RCM have been used to drive GITM, and the conductance calculated from GITM is fed back to RCM to make the coupling self-consistent. GITM simulations with different SAPS specifications will be compared. The neutral wind from the simulation will be compared with GOCE satellite observations. The comparison between runs with and without SAPS will separate the effect of SAPS from other effects and illustrate the impact on TIDs/TADs propagating in both poleward and equatorward directions.

  5. Designing embedded systems with 32-bit PIC microcontrollers and MikroC

    CERN Document Server

    Ibrahim, Dogan

    2013-01-01

    The new generation of 32-bit PIC microcontrollers can be used to solve the increasingly complex embedded system design challenges faced by engineers today. This book teaches the basics of 32-bit C programming, including an introduction to the PIC 32-bit C compiler. It includes a full description of the architecture of 32-bit PICs and their applications, along with coverage of the relevant development and debugging tools. Through a series of fully realized example projects, Dogan Ibrahim demonstrates how engineers can harness the power of this new technology to optimize their embedded designs.

  6. Searching for Short GRBs in Soft Gamma Rays with INTEGRAL/PICsIT

    Science.gov (United States)

    Rodi, James; Bazzano, Angela; Ubertini, Pietro; Natalucci, Lorenzo; Savchenko, V.; Kuulkers, E.; Ferrigno, Carlo; Bozzo, Enrico; Brandt, Soren; Chenevez, Jerome; Courvoisier, T. J.-L.; Diehl, R.; Domingo, A.; Hanlon, L.; Jourdain, E.; von Kienlin, A.; Laurent, P.; Lebrun, F.; Lutovinov, A.; Martin-Carrillo, A.; Mereghetti, S.; Roques, J.-P.; Sunyaev, R.

    2018-01-01

    With gravitational wave (GW) detections by the LIGO/Virgo collaboration over the past several years, there is heightened interest in gamma-ray bursts (GRBs), especially “short” GRBs (T90 < 2 s). PICsIT serves as a soft gamma-ray, all-sky monitor for impulsive events, such as SGRBs. Because SGRBs typically have hard spectra with peak energies of a few hundred keV, PICsIT with its ~3000 cm² collecting area is able to provide spectral information about these sources at soft gamma-ray energies. We have begun a study of PICsIT data for faint SGRBs similar to the one associated with the binary neutron star (BNS) merger GW170817, and are also preparing for future GW triggers by developing a real-time burst analysis for PICsIT. Searching the PICsIT data for significant excesses during ~30 min-long pointings containing times of SGRBs, we have been able to differentiate between SGRBs and spurious events. Also, this work allows us to assess what fraction of reported SGRBs have been detected by PICsIT, which can be used to provide an estimate of the number of GW BNS events seen by PICsIT during the next LIGO/Virgo observing run starting in Fall 2018.

  7. Polarization-dependent Imaging Contrast (PIC) mapping reveals nanocrystal orientation patterns in carbonate biominerals

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, Pupa U.P.A., E-mail: pupa@physics.wisc.edu [University of Wisconsin-Madison, Departments of Physics and Chemistry, Madison, WI 53706 (United States)

    2012-10-15

    Highlights: • Nanocrystal orientation shown by Polarization-dependent Imaging Contrast (PIC) maps. • PIC-mapping of carbonate biominerals reveals their ultrastructure at the nanoscale. • The formation mechanisms of biominerals are discovered by PIC-mapping using PEEM. -- Abstract: Carbonate biominerals are one of the most interesting systems a physicist can study. They play a major role in the CO2 cycle; they master templation, self-assembly, nanofabrication, phase transitions, space filling, and crystal nucleation and growth mechanisms. A new imaging modality introduced in the last 5 years enables direct observation of the orientation of carbonate single crystals at the nano- and micro-scale: Polarization-dependent Imaging Contrast (PIC) mapping, which is based on X-ray linear dichroism and uses PhotoElectron Emission spectroMicroscopy (PEEM). Here we present PIC-mapping results from biominerals, including the nacre and prismatic layers of mollusk shells, and sea urchin teeth. We describe various PIC-mapping approaches, and show that these lead to fundamental discoveries about the formation mechanisms of biominerals.

  8. Digital Fractional Order Controllers Realized by PIC Microprocessor: Experimental Results

    OpenAIRE

    Petras, I.; Grega, S.; Dorcak, L.

    2003-01-01

    This paper deals with fractional-order controllers and their possible hardware realization based on a PIC microprocessor, with the numerical algorithm coded in PIC Basic. The mathematical description of digital fractional-order controllers and their approximation in the discrete domain are presented. An example realization of a particular case of the digital fractional-order PID controller is shown and described.

  9. Metal Detector By Using PIC Microcontroller Interfacing With PC

    OpenAIRE

    Yin Min Theint; Myo Maung Maung; Hla Myo Tun

    2015-01-01

    This system proposes a metal detector using a PIC microcontroller interfacing with a PC. The system uses the PIC microcontroller as the main controller to determine whether the detected metal is ferrous or non-ferrous. Among the various types of metal sensors and metal-detecting technologies, a concentric-type induction coil sensor and VLF (very low frequency) metal-detecting technology are used in this system. The system consists of two configurations: a hardware configuration and a software configuration.

  10. Acceleration of PIC simulation with GPU

    International Nuclear Information System (INIS)

    Suzuki, Junya; Shimazu, Hironori; Fukazawa, Keiichiro; Den, Mitsue

    2011-01-01

    Particle-in-cell (PIC) is a simulation technique for plasma physics. The large number of particles in high-resolution plasma simulation increases the volume of computation required, making it vital to increase computation speed. In this study, we attempt to accelerate computation on graphics processing units (GPUs) using KEMPO, a PIC simulation code package. We perform two benchmark tests, with small and large grid sizes. In these tests, we run the KEMPO1 code using a CPU only, both a CPU and a GPU, and a GPU only. The results showed that performance using only a GPU was twice that of using a CPU alone. However, execution time using both a CPU and a GPU was comparable to that of the CPU-only tests, because of the significant bottleneck in communication between the CPU and GPU. (author)

  11. Charge-conserving FEM-PIC schemes on general grids

    International Nuclear Information System (INIS)

    Campos Pinto, M.; Jund, S.; Salmon, S.; Sonnendruecker, E.

    2014-01-01

    Particle-In-Cell (PIC) solvers are a major tool for the understanding of the complex behavior of a plasma or a particle beam in many situations. An important issue for electromagnetic PIC solvers, where the fields are computed using Maxwell's equations, is the problem of discrete charge conservation. In this article, we aim at proposing a general mathematical formulation for charge-conserving finite-element Maxwell solvers coupled with particle schemes. In particular, we identify the finite-element continuity equations that must be satisfied by the discrete current sources for several classes of time-domain Vlasov-Maxwell simulations to preserve the Gauss law at each time step, and propose a generic algorithm for computing such consistent sources. Since our results cover a wide range of schemes (namely curl-conforming finite element methods of arbitrary degree, general meshes in two or three dimensions, several classes of time discretization schemes, particles with arbitrary shape factors and piecewise polynomial trajectories of arbitrary degree), we believe that they provide a useful roadmap in the design of high-order charge-conserving FEM-PIC numerical schemes. (authors)
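
    The discrete continuity requirement that the authors formalize for finite elements can be illustrated with a simple finite-difference analogue: if the current is deposited as the charge flux across cell faces, then ρ_new - ρ_old + Δt·(div J) vanishes identically and the discrete Gauss law is preserved at each step. The 1-D sketch below is illustrative only, not the paper's FEM scheme.

```python
import numpy as np

def ngp_rho(x, ngrid):
    """Nearest-grid-point charge density (unit charge, dx = 1)."""
    rho = np.zeros(ngrid)
    np.add.at(rho, np.floor(x).astype(int) % ngrid, 1.0)
    return rho

def flux_current(x_old, x_new, ngrid):
    """Charge-conserving current: the charge crossing each cell face
    i+1/2 during the step (dt = 1; assumes moves of at most one cell;
    positions may run past the boundary before periodic wrapping)."""
    j = np.zeros(ngrid)                        # j[i] lives on face i+1/2
    for xo, xn in zip(x_old, x_new):
        io, i_n = int(np.floor(xo)), int(np.floor(xn))
        if i_n == io + 1:
            j[io % ngrid] += 1.0               # crossed face to the right
        elif i_n == io - 1:
            j[i_n % ngrid] -= 1.0              # crossed face to the left
    return j

x_old = np.array([0.7, 2.2, 3.9])
x_new = x_old + np.array([0.6, -0.4, 0.3])     # displacements, some crossing faces
rho0 = ngp_rho(x_old, 4)
rho1 = ngp_rho(x_new % 4, 4)
j = flux_current(x_old, x_new, 4)
div_j = j - np.roll(j, 1)                      # (div J)_i = j_{i+1/2} - j_{i-1/2}
print(np.allclose(rho1 - rho0 + div_j, 0.0))   # True: discrete continuity holds
```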

  12. In flight calibrations of Ibis/PICsIT

    International Nuclear Information System (INIS)

    Malaguti, G.; Di Cocco, G.; Foschini, L.; Stephen, J.B.; Bazzano, A.; Ubertini, P.; Bird, A.J.; Laurent, P.; Segreto, A.

    2003-01-01

    PICsIT (Pixellated Imaging Caesium Iodide Telescope) is the high-energy detector of the IBIS telescope on board the INTEGRAL satellite. It consists of 4096 independent detection units, ~0.7 cm² in cross-section, operating in the energy range between 175 keV and 10 MeV. The intrinsically low signal-to-noise ratio in the gamma-ray astronomy domain implies very long observations, lasting 10^5-10^6 s. Moreover, the image-formation principle on which PICsIT works is that of coded imaging, in which the entire detection plane contributes to each decoded sky pixel. For these two main reasons, the monitoring, and possible correction, of the spatial and temporal non-uniformity of pixel performances, especially in terms of gain and energy resolution, is of paramount importance. The IBIS on-board 22Na calibration source allows the calibration of each pixel to an accuracy of <0.5% by integrating the data from a few revolutions at constant temperature. The two calibration lines, at 511 and 1275 keV, also allow the measurement and monitoring of the PICsIT energy resolution, which proves to be very stable at ~19% and ~9% (FWHM) respectively, consistent with analytical predictions checked against pre-launch tests. (authors)
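
    The quoted resolutions are fractional FWHM values at the calibration-line energies. A one-line helper makes the convention explicit; the 97 keV line width below is an illustrative number implied by ~19% at 511 keV, not a reported measurement.

```python
def fwhm_resolution(fwhm_kev, line_kev):
    """Fractional energy resolution (FWHM / line energy)."""
    return fwhm_kev / line_kev

# A ~19% resolution at the 511 keV line corresponds to a width near 97 keV
print(round(fwhm_resolution(97.0, 511.0), 2))  # 0.19
```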

  13. Particle Acceleration in Pulsar Wind Nebulae: PIC Modelling

    Science.gov (United States)

    Sironi, Lorenzo; Cerutti, Benoît

    We discuss the role of PIC simulations in unveiling the origin of the emitting particles in PWNe. After describing the basics of the PIC technique, we summarize its implications for the quiescent and the flaring emission of the Crab Nebula, as a prototype of PWNe. A consensus seems to be emerging that, in addition to the standard scenario of particle acceleration via the Fermi process at the termination shock of the pulsar wind, magnetic reconnection in the wind, at the termination shock and in the Nebula plays a major role in powering the multi-wavelength signatures of PWNe.

  14. Electromagnetic direct implicit PIC simulation

    International Nuclear Information System (INIS)

    Langdon, A.B.

    1983-01-01

    Interesting modelling of intense electron flow has been done with implicit particle-in-cell simulation codes. In this report, the direct implicit PIC simulation approach is applied to simulations that include full electromagnetic fields. The resulting algorithm offers advantages relative to moment implicit electromagnetic algorithms and may help in our quest for robust and simpler implicit codes

  15. Programming Microchip's RISC PIC microcontrollers in assembler

    Directory of Open Access Journals (Sweden)

    Tito Flórez C.

    1999-01-01

    Programming the PICs in assembler becomes relatively simple when the instruction set is reduced to a small number of instructions (14 for the PIC16C84). The operation of these instructions is explained through simple examples, and the operation of a complete program is explained with an example program. The procedure for burning (programming) the PIC is explained as well.

  16. Saltwell PIC Skid Programmable Logic Controller (PLC) Software Configuration Management Plan

    International Nuclear Information System (INIS)

    KOCH, M.R.

    1999-01-01

    This document provides the procedures and guidelines necessary for computer software configuration management activities during the operation and maintenance phases of the Saltwell PIC Skids as required by LMH-PRO-309/Rev. 0, Computer Software Quality Assurance, Section 2.6, Software Configuration Management. The software configuration management plan (SCMP) integrates technical and administrative controls to establish and maintain technical consistency among requirements, physical configuration, and documentation for the Saltwell PIC Skid Programmable Logic Controller (PLC) software during the Hanford application, operations and maintenance. This SCMP establishes the Saltwell PIC Skid PLC Software Baseline, status changes to that baseline, and ensures that software meets design and operational requirements and is tested in accordance with their design basis

  17. A Performance-Prediction Model for PIC Applications on Clusters of Symmetric MultiProcessors: Validation with Hierarchical HPF+OpenMP Implementation

    Directory of Open Access Journals (Sweden)

    Sergio Briguglio

    2003-01-01

    A performance-prediction model is presented, which describes different hierarchical workload decomposition strategies for particle-in-cell (PIC) codes on clusters of symmetric multiprocessors. The devised workload decomposition is hierarchically structured: a higher-level decomposition among the computational nodes, and a lower-level one among the processors of each computational node. Several decomposition strategies are evaluated by means of the prediction model, with respect to memory occupancy, parallelization efficiency, and the required programming effort. These strategies have been implemented by integrating the high-level languages High Performance Fortran (at the inter-node stage) and OpenMP (at the intra-node stage). The details of these implementations are presented, and the experimental values of parallelization efficiency are compared with the predicted results.
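
    A much-simplified version of such a performance-prediction model treats the per-step time as a compute term that scales as 1/p plus a fixed communication term. The functional form below is an assumed toy model for illustration, not the paper's detailed model.

```python
def parallel_efficiency(t_serial, n_procs, t_comm):
    """Predicted efficiency E = T1 / (p * Tp) with Tp = T1/p + Tcomm.
    An assumed toy cost model: compute scales with 1/p, communication
    cost per step is constant."""
    t_parallel = t_serial / n_procs + t_comm
    return t_serial / (n_procs * t_parallel)

# Efficiency degrades as the fixed communication term stops being amortized
for p in (1, 4, 16, 64):
    print(p, round(parallel_efficiency(100.0, p, 0.1), 3))
```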

  18. LPIC++. A parallel one-dimensional relativistic electromagnetic particle-in-cell code for simulating laser-plasma-interaction

    International Nuclear Information System (INIS)

    Lichters, R.; Pfund, R.E.W.; Meyer-ter-Vehn, J.

    1997-08-01

    The code LPIC++ presented here is based on a one-dimensional, electromagnetic, relativistic PIC code that was originally developed by one of the authors during a PhD thesis at the Max-Planck-Institut fuer Quantenoptik for kinetic simulations of high harmonic generation from overdense plasma surfaces. The code essentially uses the algorithms of Birdsall and Langdon and of Villasenor and Buneman. It is written in C++ in order to be easily extendable, and has been parallelized to be able to grow in power linearly with the size of accessible hardware, e.g. massively parallel machines like the Cray T3E. The parallel LPIC++ version uses PVM for communication between processors; PVM is public domain software and can be downloaded from the World Wide Web. A particular strength of LPIC++ lies in its clear program and data structure, which uses chained lists for the organization of grid cells and enables dynamic adjustment of spatial domain sizes in a very convenient way, and therefore easy balancing of processor loads. Particles belonging to one cell are also linked in a chained list and are immediately accessible from that cell. In addition to this convenient type of data organization in a PIC code, the code shows excellent performance in both its single-processor and parallel versions. (orig.)
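
    The chained-list organization described here (cells owning linked lists of particles) can be sketched compactly; the class names below are illustrative, not LPIC++'s actual C++ types.

```python
class Particle:
    """Particle carrying an intrusive next-pointer, as in a chained list."""
    __slots__ = ("x", "next")
    def __init__(self, x):
        self.x = x
        self.next = None

class Cell:
    """Grid cell owning the head of a singly linked particle list."""
    def __init__(self):
        self.head = None
    def push(self, p):
        p.next, self.head = self.head, p      # prepend in O(1)
    def particles(self):
        p = self.head
        while p is not None:                  # walk the chain
            yield p
            p = p.next

cells = [Cell() for _ in range(4)]            # a 4-cell 1-D grid, dx = 1
for x in (0.2, 1.5, 1.9, 3.3):
    cells[int(x)].push(Particle(x))           # particles immediately reachable per cell
print([sum(1 for _ in c.particles()) for c in cells])  # [1, 2, 0, 1]
```

    Rebalancing a domain boundary then amounts to splicing list heads between neighbouring cells, which is what makes load balancing cheap in this layout.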

  19. Introduction to RISC microcontrollers: Microchip PICs

    Directory of Open Access Journals (Sweden)

    Tito Flórez C.

    1998-05-01

    Microcontrollers have been of great help in many fields, one of the best known being control. Getting started with microcontrollers normally demands an enormous amount of time, owing, among other things, to how easy it is to get lost in the sea of information contained in their manuals. Given the great similarity among PICs in architecture, instruction set, and programming, the PIC 16C84 is taken as a good prototype microcontroller, and the most important information is presented (with corresponding examples) for getting properly oriented in handling these devices.

  20. The UCLA Young Autism Project: A Reply to Gresham and MacMillan.

    Science.gov (United States)

    Smith, Tristram; Lovaas, O. Ivar

    1997-01-01

    Responds to "Autistic Recovery? An Analysis and Critique of the Empirical Evidence on the Early Intervention Project" (Gresham and MacMillan), which criticizes research showing the effectiveness of the UCLA Young Autism Project program for children with autism. The article's misunderstandings are discussed and the program is explained. (CR)

  1. Numerical Schemes for Charged Particle Movement in PIC Simulations

    International Nuclear Information System (INIS)

    Kulhanek, P.

    2001-01-01

    A PIC model of plasma fibers has been under development in the Department of Physics of the Czech Technical University for several years. The program code was written in Fortran 95, free-form (without compulsory columns). The Fortran compiler and linker were from Compaq Visual Fortran 6.1A, embedded in the Microsoft Developer Studio GUI. A fully three-dimensional code with periodic boundary conditions was developed. Electromagnetic fields are localized on a grid, and particles move freely through this grid. One of the sub-problems of the PIC model is the numerical particle solver, which is discussed in this paper. (author)
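
    A common choice for the numerical particle solver in PIC codes is the Boris scheme: a half electric kick, a magnetic rotation, and a second half kick. The sketch below is a generic illustration of that standard mover, since this abstract does not specify which integrator the code uses.

```python
import numpy as np

def boris_push(v, E, B, q_m, dt):
    """One Boris velocity update (non-relativistic form).
    v, E, B are 3-vectors; q_m = q/m. The magnetic step is a pure
    rotation, so it conserves |v| exactly when E = 0."""
    v_minus = v + 0.5 * q_m * dt * E           # first half electric kick
    t = 0.5 * q_m * dt * B                     # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)    # completed rotation
    return v_plus + 0.5 * q_m * dt * E         # second half electric kick

# Pure magnetic field: the speed must be unchanged by the rotation
v0 = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
v1 = boris_push(v0, np.zeros(3), B, 1.0, 0.1)
print(np.isclose(np.linalg.norm(v1), 1.0))     # True
```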

  2. UCLA Particle Physics Research Group annual progress report

    International Nuclear Information System (INIS)

    Nefkens, B.M.K.

    1981-08-01

    The objectives, basic research programs, recent results and continuing activities of the UCLA Particle Physics Research Group are presented. The objectives of the research are to discover, to formulate, and to elucidate the physics laws that govern the elementary constituents of matter and to determine basic properties of particles. A synopsis of research carried out last year is given. The main body of this report is the account of the techniques used in our investigations, the results obtained, and the plans for continuing and new research

  3. UCLA1 aptamer inhibition of human immunodeficiency virus type 1 subtype C primary isolates in macrophages and selection of resistance

    CSIR Research Space (South Africa)

    Mufhandu, Hazel T

    2016-09-01

    … isolates in monocyte-derived macrophages (MDMs). Of 4 macrophage-tropic isolates tested, 3 were inhibited by UCLA1 in the low nanomolar range (IC80 <29 nM). One isolate that showed reduced susceptibility (<50 nM) to UCLA1 contained mutations in the a5 helix...

  4. Occupational Analysis: Hospital Radiologic Technologist. The UCLA Allied Health Professions Project.

    Science.gov (United States)

    Reeder, Glenn D.; And Others

    In an effort to meet the growing demand for skilled radiologic technologists and other supportive personnel educated through the associate degree level, a national survey was conducted as part of the UCLA Allied Health Professions Project to determine the tasks performed by personnel in the field and lay the groundwork for development of…

  5. Validity evidence of the Brazilian version of the UCLA Loneliness Scale

    Directory of Open Access Journals (Sweden)

    Sabrina Martins Barroso

    2016-03-01

    ABSTRACT Objective: This study investigated the validity evidence of the UCLA Loneliness Scale for use with the Brazilian population. Methods: The following phases were carried out: (1) authorization from the author and the Ethics Committee; (2) translation and back-translation; (3) semantic adaptation; (4) validation. Data were analyzed using descriptive analysis, exploratory factor analysis, Cronbach's alpha, Kappa, Bartlett's sphericity test, the Kaiser-Meyer-Olkin test, and Pearson correlation. For the adaptation, the scale was submitted to specialists and to a focus group with 8 participants for semantic adaptation, and to a pilot study with 126 participants for cross-cultural adaptation. The validation involved 818 people, aged 20 to 87 years, who answered two versions of the UCLA, the Patient Health Questionnaire, the Perceived Social Support Scale, and a questionnaire devised by the authors. Results: The scale showed two factors, which explained 56% of the variance, with an alpha of 0.94. Conclusions: The UCLA-BR Loneliness Scale showed evidence of construct and discriminant validity, as well as good reliability, and can be used to assess loneliness in the Brazilian population.

  6. UCLA's outreach program of science education in the Los Angeles schools.

    Science.gov (United States)

    Palacio-Cayetano, J; Kanowith-Klein, S; Stevens, R

    1999-04-01

    The UCLA School of Medicine's Interactive Multi-media Exercises (IMMEX) Project began its outreach into pre-college education in the Los Angeles area in 1993. The project provides a model in which software and technology are effectively intertwined with teaching, learning, and assessment (of both students' and teachers' performances) in the classroom. The project has evolved into a special collaboration between the medical school and Los Angeles teachers. UCLA faculty and staff work with science teachers and administrators from elementary, middle, and high schools. The program benefits ethnically and racially diverse groups of students in schools ranging from the inner city to the suburbs. The project's primary goal is to use technology to increase students' achievement and interest in science, including medicine, and thus move more students into the medical school pipeline. Evaluations from outside project evaluators (West Ed) as well as from teachers and IMMEX staff show that the project has already had a significant effect on teachers' professional development, classroom practice, and students' achievement in the Los Angeles area.

  7. Effects of increased vertebral number on carcass weight in PIC pigs.

    Science.gov (United States)

    Huang, Jieping; Zhang, Mingming; Ye, Runqing; Ma, Yun; Lei, Chuzhao

    2017-12-01

    Variation in vertebral number is associated with carcass traits in pigs. However, results from different populations do not agree well, especially for carcass weight. Therefore, the effects of increased vertebral number on carcass weight were investigated by analyzing the relationship between two multi-vertebra causal loci (NR6A1 g.748 C > T and VRTN g.20311_20312ins291) and carcass weight in PIC pigs. Results from the association study between vertebral number and carcass weight showed that increased thoracic number had negative effects on carcass weight, but the results were not statistically significant. Further, the VRTN Ins/Ins genotype added, on average, more than one thoracic vertebra relative to the Wt/Wt genotype in this PIC population. Meanwhile, there was a significant negative effect of VRTN Ins on carcass weight (P < 0.05), indicating a negative effect of increased vertebral number on carcass weight in PIC pigs. © 2017 Japanese Society of Animal Science.

  8. A comparative study of gold UCLA-type and CAD/CAM titanium implant abutments

    Science.gov (United States)

    Park, Ji-Man; Lee, Jai-Bong; Heo, Seong-Joo

    2014-01-01

PURPOSE The aim of this study was to evaluate the interface accuracy of computer-assisted designed and manufactured (CAD/CAM) titanium abutments with the implant fixture compared to gold-cast UCLA abutments. MATERIALS AND METHODS An external connection implant system (Mark III, n=10) and an internal connection implant system (Replace Select, n=10) were used; 5 of each group were connected to milled titanium abutments and the rest were connected to gold-cast UCLA abutments. The implant fixture and abutment were tightened to a torque of 35 Ncm using a digital torque gauge, and initial detorque values were measured 10 minutes after tightening. To mimic mastication, cyclic loading was applied at 14 Hz for one million cycles, with the load ranging from 0 N to 100 N. After the cyclic loading, detorque values were measured again. The fixture-abutment gaps were measured under a microscope and recorded with an accuracy of ±0.1 µm at 50 points. RESULTS Initial detorque values of the milled abutments were significantly higher than those of the cast abutments (P<.05). After cyclic loading, detorque values of the cast abutments increased, but those of the milled abutments decreased (P<.05); there was no significant difference in fixture-abutment gap between the milled abutment group and the cast abutment group after cyclic loading. CONCLUSION In conclusion, CAD/CAM milled titanium abutments can be fabricated with sufficient accuracy to permit a screw joint stability between abutment and fixture comparable to that of the traditional gold-cast UCLA abutment. PMID:24605206

  9. A curvilinear, fully implicit, conservative electromagnetic PIC algorithm in multiple dimensions

    Science.gov (United States)

    Chacón, L.; Chen, G.

    2016-07-01

We extend a recently proposed fully implicit PIC algorithm for the Vlasov-Darwin model in multiple dimensions (Chen and Chacón (2015) [1]) to curvilinear geometry. As in the Cartesian case, the approach is based on a potential formulation (ϕ, A), and overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. Conservation theorems for local charge and global energy are derived in curvilinear representation, and then enforced discretely by a careful choice of the discretization of field and particle equations. Additionally, the algorithm conserves canonical momentum in any ignorable direction, and preserves the Coulomb gauge ∇ · A = 0 exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. We demonstrate the accuracy and efficiency properties of the algorithm with numerical experiments in mapped meshes in 1D-3V and 2D-3V.

  10. Second-order particle-in-cell (PIC) computational method in the one-dimensional variable Eulerian mesh system

    International Nuclear Information System (INIS)

    Pyun, J.J.

    1981-01-01

As part of an effort to incorporate the variable Eulerian mesh into the second-order PIC computational method, a truncation error analysis was performed to calculate the second-order error terms for the variable Eulerian mesh system. The results show that the maximum mesh-size increment/decrement is limited to α(Δr_i)², where Δr_i is the non-dimensional mesh size of the i-th cell and α is a constant of order one. The numerical solutions of Burgers' equation by the second-order PIC method in the variable Eulerian mesh system were compared with its exact solution. It was found that the second-order accuracy of the PIC method was maintained under the above condition. Additional problems were analyzed using the second-order PIC method in both variable and uniform Eulerian mesh systems. The results indicate that the second-order PIC method in the variable Eulerian mesh system can provide substantial computational time savings with no loss in accuracy
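The mesh-grading condition above can be illustrated with a small sketch (Python; the helper names and the geometric grading are purely illustrative, not from the paper): a mesh whose cell sizes grow by a fixed fraction satisfies |Δr_{i+1} − Δr_i| ≤ α(Δr_i)² exactly when the growth fraction does not exceed αΔr_i.

```python
def variable_mesh(n, dr0, growth):
    """Cell sizes that grow geometrically: dr_{i+1} = dr_i * (1 + growth)."""
    sizes = [dr0]
    for _ in range(n - 1):
        sizes.append(sizes[-1] * (1.0 + growth))
    return sizes

def satisfies_increment_bound(sizes, alpha=1.0):
    """Check the paper's condition |dr_{i+1} - dr_i| <= alpha * dr_i**2."""
    return all(abs(b - a) <= alpha * a * a for a, b in zip(sizes, sizes[1:]))
```

For a geometric grading the condition reduces to growth ≤ αΔr_i, so, for example, a 5% grading is admissible once the non-dimensional cell size reaches 0.05.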

  11. PIC Simulations of Hypersonic Plasma Instabilities

    Science.gov (United States)

    Niehoff, D.; Ashour-Abdalla, M.; Niemann, C.; Decyk, V.; Schriver, D.; Clark, E.

    2013-12-01

The plasma sheaths formed around hypersonic aircraft (Mach number, M > 10) are relatively unexplored and of interest today, both to further the development of new technologies and to solve long-standing engineering problems. Both laboratory experiments and analytical/numerical modeling are required to advance the understanding of these systems, and it is advantageous to perform these tasks in tandem. Some work has already been done to study these plasmas through experiments that create a rapidly expanding plasma by ablating a target with a laser. In combination with a preformed magnetic field, this configuration leads to a magnetic "bubble" formed behind the front as particles travel at about Mach 30 away from the target. Furthermore, the experiment was able to show the generation of fast electrons, which could be due to instabilities on electron scales. To explore this, future experiments will have more accurate diagnostics capable of observing time and length scales below typical ion scales, but simulations are a useful tool for exploring these plasma conditions theoretically. Particle-in-cell (PIC) simulations are necessary when phenomena are expected at these scales, and they also have the advantage of being fully kinetic, with no fluid approximations. However, if the scales of the problem are not significantly below the ion scales, then the initialization of the PIC simulation must be very carefully engineered to avoid unnecessary computation and to select the minimum window in which structures of interest can be studied. One method of doing this is to seed the simulation with either experimental or ion-scale simulation results. Previous experiments suggest that a useful configuration for studying hypersonic plasmas is a ring of particles rapidly expanding transverse to an external magnetic field, which has been simulated on the ion scale with an ion-hybrid code. This suggests that the PIC simulation should have an equivalent configuration.

  12. An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System

    Science.gov (United States)

Helmy, Tarek; Fatai, Anifowose; Sallam, El-Sayed

PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency-intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have associated drawbacks. An efficient alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed to be efficient and fair on average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the lowest AWT and ATT, on average, among the non-preemptive scheduling algorithms implemented in this paper.
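The AWT/ATT comparison can be sketched as follows (Python; an illustrative non-preemptive model with a hypothetical job set, not PicOS's actual scheduler): with all jobs arriving at t = 0, a job's waiting time is its start time and its turn-around time is its completion time.

```python
import random

def schedule_metrics(bursts, order):
    """AWT and ATT for a non-preemptive schedule (all jobs arrive at t = 0)."""
    t, waits, turnarounds = 0, [], []
    for job in order:
        waits.append(t)            # waiting time = start time
        t += bursts[job]
        turnarounds.append(t)      # turn-around time = completion time
    n = len(bursts)
    return sum(waits) / n, sum(turnarounds) / n

def randomized_order(n, rng):
    """Randomized selection policy: run the jobs in a uniformly random order."""
    order = list(range(n))
    rng.shuffle(order)
    return order

bursts = [24, 3, 3]                                  # hypothetical CPU bursts
awt_fcfs, att_fcfs = schedule_metrics(bursts, [0, 1, 2])   # FCFS order
rng = random.Random(42)
runs = [schedule_metrics(bursts, randomized_order(len(bursts), rng))
        for _ in range(1000)]
awt_rand = sum(r[0] for r in runs) / len(runs)       # averaged over 1000 runs
```

For this job set the FCFS order yields an AWT of 17, while the randomized policy averages about 10, illustrating how random selection avoids consistently penalizing short jobs stuck behind a long one.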

  13. Analysis of material removed from UCLA tokamaks Microtor and Macrotor

    International Nuclear Information System (INIS)

    Baer, D.R.; Thomas, M.T.; Taylor, R.J.

    1979-02-01

    This paper reports a first effort to examine the surface of the UCLA tokamaks, Microtor and Macrotor, by analyzing samples that have been exposed to plasma discharge and cleaning for long periods. The samples were sent to the Surface Science Section at the Pacific Northwest Laboratory (PNL). There, Auger electron spectrometry and sputter profile techniques were used to examine the samples, which had been handled in atmospheric conditions after being removed from the tokamak

  14. Spectral domain, common path OCT in a handheld PIC based system

    Science.gov (United States)

    Leinse, Arne; Wevers, Lennart; Marchenko, Denys; Dekker, Ronald; Heideman, René G.; Ruis, Roosje M.; Faber, Dirk J.; van Leeuwen, Ton G.; Kim, Keun Bae; Kim, Kyungmin

    2018-02-01

Optical Coherence Tomography (OCT) has made it into the clinic in the last decade with systems based on bulk optical components. The next disruptive step will be the introduction of handheld OCT systems, and Photonic Integrated Circuit (PIC) technology is the key enabler for this further miniaturization. PIC technology allows signal processing on a stable platform, and the implementation of a common-path interferometer in that same platform creates a robust, fully integrated OCT system with a flexible fiber probe. In this work the first PIC-based handheld and integrated common-path spectral-domain OCT system is described and demonstrated. The spectrometer in the system is based on an Arrayed Waveguide Grating (AWG) and fully integrated with the CCD and a fiber probe into a system operating at 850 nm. The AWG on the PIC creates a 512-channel spectrometer with a resolution of 0.22 nm, enabling high-speed analysis of the full A-scan. The silicon nitride based proprietary waveguide technology (TriPleX™) enables low-loss complex photonic structures from the visible (405 nm) to the IR (2350 nm) range, making it a unique candidate for OCT applications. Broadband AWG operation from the visible to 1700 nm has been shown in the platform, and Photonic Design Kits (PDKs) are available enabling custom designs in a system-level design environment. This allows a low-threshold entry for designing new (OCT) systems for a broad wavelength range.

  15. Design And Construction Of Digital Multi-Meter Using PIC Microcontroller

    Directory of Open Access Journals (Sweden)

    Khawn Nue

    2015-07-01

Full Text Available Abstract This thesis describes the design and construction of a digital multi-meter using a PIC microcontroller. A typical multi-meter may include features such as the ability to measure AC/DC voltage, DC current, resistance, temperature, diodes, frequency, and connectivity. This design uses the PIC microcontroller, voltage rectifiers, a voltage-divider potentiometer, an LCD, and other instruments to complete the measurements. We applied what we have learned of microprocessors and adjusted the program to calculate and show the measurements on the LCD, with the modes selected by keypad. The software has been developed using MPLAB and PROTEUS. In this system the analogue input is taken directly to the analogue input pin of the microcontroller without any other processing, so the input range is 0 V to 5 V and the maximum source impedance is 2.5 kΩ (for testing, use a 1 kΩ potentiometer). To improve the circuit, an op-amp is added in front to present a greater impedance to the circuit under test; the output impedance of the op-amp will be low, which is a requirement of the PIC analogue input.
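The direct-to-pin measurement described above amounts to a simple linear ADC conversion. A hedged sketch (Python; the function name and range check are illustrative, though the 10-bit resolution and 0-5 V span match the abstract):

```python
def adc_to_volts(reading, vref=5.0, bits=10):
    """Map a raw ADC count to volts for a converter referenced to 0..vref."""
    full_scale = (1 << bits) - 1        # 1023 counts for a 10-bit PIC ADC
    if not 0 <= reading <= full_scale:
        raise ValueError("reading out of range")
    return reading * vref / full_scale
```

A full-scale count of 1023 maps to the 5.0 V limit, and mid-scale (512) to just over 2.5 V; the firmware would then format this value for the LCD.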

  16. Development of in-situ visualization tool for PIC simulation

    International Nuclear Information System (INIS)

    Ohno, Nobuaki; Ohtani, Hiroaki

    2014-01-01

As supercomputer capabilities improve, the sizes of simulations and their output data also become larger and larger. Visualization is usually carried out on a researcher's PC with interactive visualization software after the computer simulation has been performed. However, the data are becoming too large for this approach. A promising answer is in-situ visualization, in which the simulation code is coupled with the visualization code and visualization is performed alongside the simulation on the same supercomputer. We developed an in-situ visualization tool for particle-in-cell (PIC) simulation, provided as a Fortran module. We coupled it with a PIC simulation code, tested the coupled code on the Plasma Simulator supercomputer, and verified that it works. (author)

  17. Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture

    Science.gov (United States)

    Fonseca, Ricardo

    2017-10-01

Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser-plasma interaction. Being computationally intensive, these codes require large-scale HPC systems and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts in deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed performance evaluation in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.

  18. PIC microcontroller-based RF wireless ECG monitoring system.

    Science.gov (United States)

    Oweis, R J; Barhoum, A

    2007-01-01

This paper presents a radio-telemetry system that provides the possibility of ECG signal transmission from a patient detection circuit via an RF data link. A PC then receives the signal through a National Instruments data acquisition card (NIDAQ). The PC is equipped with software allowing the received ECG signals to be saved, analysed, and sent by email to another part of the world. The proposed telemetry system consists of a patient unit and a PC unit. The amplified and filtered ECG signal is sampled 360 times per second, and the A/D conversion is performed by a PIC16F877 microcontroller. The major contribution of the final proposed system is that it detects, processes, and sends patients' ECG data over a wireless RF link to a maximum distance of 200 m. Transmitted ECG data with different numbers of samples were received, decoded by means of another PIC microcontroller, and displayed using a MATLAB program. The designed software is presented in a graphical user interface utility.

  19. Initial draft of CSE-UCLA evaluation model based on weighted product in order to optimize digital library services in computer college in Bali

    Science.gov (United States)

    Divayana, D. G. H.; Adiarta, A.; Abadi, I. B. G. S.

    2018-01-01

The aim of this research was to create an initial design of the CSE-UCLA evaluation model modified with the Weighted Product method for evaluating digital library services at computer colleges in Bali. The method used in this research was developmental research, following the Borg and Gall design. The result obtained from the research conducted earlier this month was a rough sketch of the Weighted Product based CSE-UCLA evaluation model; the design was able to provide a general overview of the stages of the model used to optimize digital library services at the computer colleges in Bali.
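The Weighted Product method used to modify the model is a standard multi-criteria ranking technique: each alternative receives a score S_i = Π_j x_ij^{w_j}, with negative exponents for cost-type criteria. A minimal sketch (Python; the criteria, weights, and library names below are hypothetical, not from the study):

```python
def weighted_product(alternatives, weights, cost_criteria=frozenset()):
    """Rank alternatives by S_i = prod_j x_ij ** (+/- w_j) (Weighted Product).
    Criterion indices listed in cost_criteria count against an alternative."""
    total = sum(weights)
    w = [x / total for x in weights]          # normalize weights to sum to 1
    scores = {}
    for name, values in alternatives.items():
        s = 1.0
        for j, v in enumerate(values):
            s *= v ** (-w[j] if j in cost_criteria else w[j])
        scores[name] = s
    return sorted(scores, key=scores.get, reverse=True)

# hypothetical criteria: content quality, usability, cost (lower is better)
ranking = weighted_product(
    {"Library A": [9, 8, 2], "Library B": [6, 6, 6]},
    weights=[0.4, 0.4, 0.2],
    cost_criteria={2},
)
```

In an evaluation context, the alternatives would be the digital library services (or evaluation components) being compared, and the weights would come from the CSE-UCLA stages.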

  20. Implementation of a 3D plasma particle-in-cell code on a MIMD parallel computer

    International Nuclear Information System (INIS)

    Liewer, P.C.; Lyster, P.; Wang, J.

    1993-01-01

    A three-dimensional plasma particle-in-cell (PIC) code has been implemented on the Intel Delta MIMD parallel supercomputer using the General Concurrent PIC algorithm. The GCPIC algorithm uses a domain decomposition to divide the computation among the processors: A processor is assigned a subdomain and all the particles in it. Particles must be exchanged between processors as they move. Results are presented comparing the efficiency for 1-, 2- and 3-dimensional partitions of the three dimensional domain. This algorithm has been found to be very efficient even when a large fraction (e.g. 30%) of the particles must be exchanged at every time step. On the 512-node Intel Delta, up to 125 million particles have been pushed with an electrostatic push time of under 500 nsec/particle/time step
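The GCPIC particle-exchange idea can be sketched serially (Python; a uniform 1-D slab decomposition with periodic boundaries and illustrative names — the actual code performs this exchange via message passing between Delta nodes):

```python
def partition_particles(positions, nproc, length):
    """Assign each particle to the owner of its position in a uniform
    1-D slab decomposition of [0, length)."""
    width = length / nproc
    domains = [[] for _ in range(nproc)]
    for x in positions:
        domains[min(int(x // width), nproc - 1)].append(x)
    return domains

def push_and_exchange(domains, velocity, dt, nproc, length):
    """Move every particle, then hand particles that crossed a subdomain
    boundary to their new owner (periodic in x)."""
    moved = [(x + velocity * dt) % length for d in domains for x in d]
    return partition_particles(moved, nproc, length)
```

In the parallel code the re-partitioning step becomes explicit sends and receives between neighboring processors; here, for illustration, the bookkeeping happens in one address space, but particle count is conserved just as it must be across nodes.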

  1. Design and Simulation of a PIC16F877A and LM35 Based ...

    African Journals Online (AJOL)

This paper describes the design and simulation of a virtual temperature monitoring system using Proteus (Labcenter Electronics). The device makes use of the PIC16F877A, LM35, 2x16 LCD, and other discrete components. The LM35 serves as the temperature sensor, whose output is fed into the PIC16F877A for further ...
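Converting an LM35 reading into a temperature is a two-step linear mapping: ADC counts to volts, then the LM35's 10 mV/°C scale factor to degrees. A hedged sketch (Python; assumes the PIC16F877A's 10-bit ADC and a 5 V reference, which is a common but not stated configuration):

```python
def lm35_temperature_c(adc_reading, vref=5.0, bits=10):
    """Temperature in degrees Celsius from a raw ADC reading of an LM35
    output. The LM35 produces 10 mV per degree Celsius."""
    volts = adc_reading * vref / ((1 << bits) - 1)   # counts -> volts
    return volts / 0.010                              # volts -> deg C
```

With a 5 V reference each count is about 4.9 mV, i.e. roughly 0.5 °C of resolution, which is why LM35 designs often use a smaller ADC reference to improve precision.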

  2. Appropriateness of the food-pics image database for experimental eating and appetite research with adolescents.

    Science.gov (United States)

    Jensen, Chad D; Duraccio, Kara M; Barnett, Kimberly A; Stevens, Kimberly S

    2016-12-01

Research examining the effects of visual food cues on appetite-related brain processes and eating behavior has proliferated. Recently, investigators have developed food image databases for use across experimental studies examining appetite and eating behavior. The food-pics image database represents a standardized, freely available image library originally validated in a large sample comprised primarily of adults. The suitability of the images for use with adolescents has not been investigated. The aim of the present study was to evaluate the appropriateness of the food-pics image library for appetite and eating research with adolescents. Three hundred and seven adolescents (ages 12-17) provided ratings of recognizability, palatability, and desire to eat for images from the food-pics database. Moreover, participants rated the caloric content (high vs. low) and healthiness (healthy vs. unhealthy) of each image. Adolescents rated approximately 75% of the food images as recognizable. Approximately 65% of recognizable images were correctly categorized as high vs. low calorie, and 63% were correctly classified as healthy vs. unhealthy, in 80% or more of image ratings. These results suggest that a smaller subset of the food-pics image database is appropriate for use with adolescents. With some modifications to the included images, the food-pics image database appears to be appropriate for use in experimental appetite- and eating-related research conducted with adolescents. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. EXPERIMENTAL INVESTIGATION OF PIC FORMATION IN CFC-12 INCINERATION

    Science.gov (United States)

The report gives results of experiments to determine the effect of flame zone temperature on gas-phase flame formation and destruction of products of incomplete combustion (PICs) during dichlorodifluoromethane (CFC-12) incineration. The effect of water injection into the flame ...

  4. Characterizing a New Candidate Benchmark Brown Dwarf Companion in the β Pic Moving Group

    Science.gov (United States)

    Phillips, Caprice; Bowler, Brendan; Liu, Michael C.; Mace, Gregory N.; Sokal, Kimberly R.

    2018-01-01

    Benchmark brown dwarfs are objects that have at least two measured fundamental quantities such as luminosity and age, and therefore can be used to test substellar atmospheric and evolutionary models. Nearby, young, loose associations such as the β Pic moving group represent some of the best regions in which to identify intermediate-age benchmark brown dwarfs due to their well-constrained ages and metallicities. We present a spectroscopic study of a new companion at the hydrogen-burning limit orbiting a low-mass star at a separation of 9″ (650 AU) in the 23 Myr old β Pic moving group. The medium-resolution near-infrared spectrum of this companion from IRTF/SpeX shows clear signs of low surface gravity and yields an index-based spectral type of M6±1 with a VL-G gravity on the Allers & Liu classification system. Currently, there are four known brown dwarf and giant planet companions in the β Pic moving group: HR 7329 B, PZ Tel B, β Pic b, and 51 Eri b. Depending on its exact age and accretion history, this new object may represent the third brown dwarf companion and fifth substellar companion in this association.

  5. Improved Iterative Parallel Interference Cancellation Receiver for Future Wireless DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Andrea Bernacchioni

    2005-04-01

Full Text Available We present a new turbo multiuser detector for turbo-coded direct sequence code division multiple access (DS-CDMA) systems. The proposed detector is based on the utilization of a parallel interference cancellation (PIC) stage and a bank of turbo decoders. The PIC is broken up in order to perform interference cancellation after each constituent decoder of the turbo decoding scheme. Moreover, in the paper we propose a new enhanced algorithm that provides a more accurate estimation of the signal-to-noise-plus-interference ratio used in the tentative decision device and in the MAP decoding algorithm. The performance of the proposed receiver is evaluated by means of computer simulations for medium to very high system loads, in AWGN and multipath fading channels, and compared to recently proposed interference cancellation-based iterative MUDs, taking into account the number of iterations and the complexity involved. We show that the proposed receiver outperforms the others, especially for highly loaded systems.
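The core PIC idea, subtracting each user's estimate of the other users' signals before re-deciding, can be sketched for a toy synchronous two-user system (Python; the spreading codes and the single hard-decision stage are illustrative and much simpler than the paper's turbo-decoder-embedded scheme):

```python
def correlate(r, code):
    return sum(ri * ci for ri, ci in zip(r, code)) / len(code)

def sign(x):
    return 1 if x >= 0 else -1

def pic_detect(received, codes, stages=1):
    """Matched-filter decisions followed by parallel interference
    cancellation: each stage subtracts every other user's reconstructed
    signal from the received chips before re-deciding."""
    bits = [sign(correlate(received, c)) for c in codes]   # matched filter
    for _ in range(stages):
        bits = [sign(correlate(
            [r - sum(bits[j] * codes[j][i]
                     for j in range(len(codes)) if j != k)
             for i, r in enumerate(received)], codes[k]))
            for k in range(len(codes))]
    return bits

# two users with non-orthogonal length-4 spreading codes (illustrative)
codes = [[1, 1, 1, 1], [1, 1, -1, 1]]
tx = [1, -1]
received = [tx[0] * a + tx[1] * b for a, b in zip(*codes)]
```

In the paper's receiver the tentative decisions come from soft turbo-decoder outputs rather than hard matched-filter decisions, which is what makes the SNIR estimate critical.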

  6. On the elimination of numerical Cerenkov radiation in PIC simulations

    International Nuclear Information System (INIS)

    Greenwood, Andrew D.; Cartwright, Keith L.; Luginsland, John W.; Baca, Ernest A.

    2004-01-01

    Particle-in-cell (PIC) simulations are a useful tool in modeling plasma in physical devices. The Yee finite difference time domain (FDTD) method is commonly used in PIC simulations to model the electromagnetic fields. However, in the Yee FDTD method, poorly resolved waves at frequencies near the cut off frequency of the grid travel slower than the physical speed of light. These slowly traveling, poorly resolved waves are not a problem in many simulations because the physics of interest are at much lower frequencies. However, when high energy particles are present, the particles may travel faster than the numerical speed of their own radiation, leading to non-physical, numerical Cerenkov radiation. Due to non-linear interaction between the particles and the fields, the numerical Cerenkov radiation couples into the frequency band of physical interest and corrupts the PIC simulation. There are two methods of mitigating the effects of the numerical Cerenkov radiation. The computational stencil used to approximate the curl operator can be altered to improve the high frequency physics, or a filtering scheme can be introduced to attenuate the waves that cause the numerical Cerenkov radiation. Altering the computational stencil is more physically accurate but is difficult to implement while maintaining charge conservation in the code. Thus, filtering is more commonly used. Two previously published filters by Godfrey and Friedman are analyzed and compared to ideally desired filter properties
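As a concrete illustration of the filtering approach (a generic smoother, not the specific Godfrey or Friedman filters analyzed in the paper), a one-pass 1-2-1 binomial filter attenuates exactly the poorly resolved short-wavelength modes: its gain is cos²(kΔx/2), which is 1 for smooth fields and 0 at the grid Nyquist wavelength.

```python
def binomial_filter(field):
    """One pass of a 1-2-1 binomial smoother with periodic boundaries.
    Gain is cos^2(k*dx/2): smooth fields pass, the Nyquist mode is zeroed."""
    n = len(field)
    return [0.25 * field[i - 1] + 0.5 * field[i] + 0.25 * field[(i + 1) % n]
            for i in range(n)]
```

Applied to the field arrays each step, such a filter damps the slow, poorly resolved waves that a fast particle could outrun, at the cost of some attenuation of physical modes near the grid cutoff.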

  7. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  10. Boltzmann electron PIC simulation of the E-sail effect

    Directory of Open Access Journals (Sweden)

    P. Janhunen

    2015-12-01

Full Text Available The solar wind electric sail (E-sail) is a planned in-space propulsion device that uses the natural solar wind momentum flux for spacecraft propulsion with the help of long, charged, centrifugally stretched tethers. The problem of accurately predicting the E-sail thrust is still somewhat open, however, due to a possible electron population trapped by the tether. Here we develop a new type of particle-in-cell (PIC) simulation for predicting E-sail thrust. In the new simulation, electrons are modelled as a fluid, hence resembling hybrid simulation, but in contrast to normal hybrid simulation, the Poisson equation is used as in normal PIC to calculate the self-consistent electrostatic field. For electron-repulsive parts of the potential, the Boltzmann relation is used. For electron-attractive parts of the potential we employ a power law which contains a parameter that can be used to control the number of trapped electrons. We perform a set of runs varying the parameter and select the one with the smallest number of trapped electrons which still behaves in a physically meaningful way, in the sense of producing not more than one solar wind ion deflection shock upstream of the tether. By this prescription we obtain thrust-per-tether-length values that are in line with earlier estimates, although somewhat smaller. We conclude that the Boltzmann PIC simulation is a new tool for simulating the E-sail thrust. This tool enables us to calculate solutions rapidly and allows us to easily study different scenarios for trapped electrons.
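The Boltzmann-electron closure described above can be sketched in one dimension (Python; the normalized units, uniform ion background, and fixed wall potentials are illustrative assumptions, not the paper's E-sail geometry): the Poisson equation φ'' = n_e − n_i with n_e = exp(φ) is solved by plain Gauss-Seidel sweeps.

```python
import math

def boltzmann_poisson_1d(n=33, length=10.0, phi_wall=-2.0, sweeps=10000):
    """Solve phi'' = exp(phi) - 1 on [0, length] with phi = phi_wall at both
    walls: normalized Poisson equation with Boltzmann electrons
    (n_e = exp(phi)) and a uniform ion background (n_i = 1)."""
    dx = length / (n - 1)
    phi = [phi_wall] + [0.0] * (n - 2) + [phi_wall]
    for _ in range(sweeps):                   # plain Gauss-Seidel sweeps
        for i in range(1, n - 1):
            rhs = math.exp(phi[i]) - 1.0      # n_e - n_i at the current phi
            phi[i] = 0.5 * (phi[i - 1] + phi[i + 1] - dx * dx * rhs)
    return phi
```

Near the negatively biased walls the electron density exp(φ) is depleted, and the potential relaxes toward quasi-neutrality (φ ≈ 0) in the interior; the paper's simulation couples the same closure to kinetic solar wind ions in the electron-repulsive regions.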

  11. UCLA Particle Physics Research Group annual progress report

    International Nuclear Information System (INIS)

    Nefkens, B.M.K.

    1983-11-01

    The objectives, basic research programs, recent results, and continuing activities of the UCLA Particle Physics Research Group are presented. The objectives of the research are to discover, to formulate, and to elucidate the physics laws that govern the elementary constituents of matter and to determine basic properties of particles. The research carried out by the Group last year may be divided into three separate programs: (1) baryon spectroscopy, (2) investigations of charge symmetry and isospin invariance, and (3) tests of time reversal invariance. The main body of this report is the account of the techniques used in our investigations, the results obtained, and the plans for continuing and new research. An update of the group bibliography is given at the end

  12. Design and implementation of the standards-based personal intelligent self-management system (PICS).

    Science.gov (United States)

    von Bargen, Tobias; Gietzelt, Matthias; Britten, Matthias; Song, Bianying; Wolf, Klaus-Hendrik; Kohlmann, Martin; Marschollek, Michael; Haux, Reinhold

    2013-01-01

Against the background of demographic change and a diminishing care workforce, there is a growing need for personalized decision support. The aim of this paper is to describe the design and implementation of the standards-based personal intelligent care systems (PICS). PICS makes consistent use of internationally accepted standards, such as the Health Level 7 (HL7) Arden syntax for the representation of the decision logic and the HL7 Clinical Document Architecture for information representation, and is based on an open-source service-oriented architecture framework and a business process management system. Its functionality is exemplified for the application scenario of a patient suffering from congestive heart failure. Several vital signs sensors provide data for the decision support system, and a number of flexible communication channels are available for interaction with the patient or caregiver. PICS is a standards-based, open, and flexible system enabling personalized decision support. Further development will include the implementation of components on small computers and sensor nodes.

  13. Preparation of Water-soluble Polyion Complex (PIC Micelles Covered with Amphoteric Random Copolymer Shells with Pendant Sulfonate and Quaternary Amino Groups

    Directory of Open Access Journals (Sweden)

    Rina Nakahata

    2018-02-01

Full Text Available An amphoteric random copolymer (P(SA91)) composed of anionic sodium 2-acrylamido-2-methylpropanesulfonate (AMPS, S) and cationic 3-acrylamidopropyl trimethylammonium chloride (APTAC, A) was prepared via reversible addition-fragmentation chain transfer (RAFT) radical polymerization. The subscripts in the abbreviations indicate the degree of polymerization (DP). Furthermore, AMPS and APTAC were polymerized using a P(SA91) macro-chain transfer agent to prepare an anionic diblock copolymer (P(SA91)S67) and a cationic diblock copolymer (P(SA91)A88), respectively. The DP was estimated from quantitative 13C NMR measurements. A stoichiometrically charge-neutralized mixture of the aqueous P(SA91)S67 and P(SA91)A88 formed water-soluble polyion complex (PIC) micelles comprising PIC cores and amphoteric random copolymer shells. The PIC micelles were in a dynamic equilibrium state between PIC micelles and charge-neutralized small aggregates composed of a P(SA91)S67/P(SA91)A88 pair. Interactions between the PIC micelles and fetal bovine serum (FBS) in phosphate buffered saline (PBS) were evaluated from changes in the hydrodynamic radius (Rh) and light scattering intensity (LSI). Increases in Rh and LSI were not observed for the mixture of PIC micelles and FBS in PBS for one day. This observation suggests that there is no interaction between the PIC micelles and proteins, because the PIC micelle surfaces were covered with amphoteric random copolymer shells. However, with increasing time, the diblock copolymer chains that dissociated from the PIC micelles interacted with proteins.

  14. Expression of recombinant myostatin propeptide pPIC9K-Msp plasmid in Pichia pastoris.

    Science.gov (United States)

    Du, W; Xia, J; Zhang, Y; Liu, M J; Li, H B; Yan, X M; Zhang, J S; Li, N; Zhou, Z Y; Xie, W Z

    2015-12-28

    Myostatin propeptide can inhibit the biological activity of myostatin protein and promote muscle growth. To express myostatin propeptide in vitro with higher biological activity, we performed codon optimization on the sheep myostatin propeptide gene sequence, based on the codon usage bias of Pichia pastoris, and mutated aspartic acid-76 to alanine, exploiting the enhanced biological activity of this myostatin propeptide mutant. The modified myostatin propeptide gene was cloned into the pPIC9K plasmid to form the recombinant plasmid pPIC9K-Msp. Recombinant plasmid pPIC9K-Msp was transformed into Pichia pastoris GS115 by electrotransformation. Transformed cells were screened, and methanol was used to induce expression. SDS-PAGE and western blotting verified the successful expression of biologically active myostatin propeptide in Pichia pastoris, providing the basis for characterization of this protein.

  15. Evaluation of the Parent-Implemented Communication Strategies (PiCS) Project Using the Multiattribute Utility (MAU) Approach

    Science.gov (United States)

    Stoner, Julia B.; Meadan, Hedda; Angell, Maureen E.; Daczewitz, Marcus

    2012-01-01

    We conducted a multiattribute utility (MAU) evaluation to assess the Parent-Implemented Communication Strategies (PiCS) project which was funded by the Institute of Education Sciences (IES). In the PiCS project parents of young children with developmental disabilities are trained and coached in their homes on naturalistic and visual teaching…

  16. Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerator boards

    Science.gov (United States)

    Fonseca, Ricardo

    2014-10-01

    The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest towards Exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as Intel Many Integrated Core Architecture (MIC) boards, offer peak theoretical performances of >1 TFlop/s for general purpose calculations on a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We will focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.

  17. 77 FR 25739 - Notice of Inventory Completion: Fowler Museum at UCLA, Los Angeles, CA

    Science.gov (United States)

    2012-05-01

    ... objects are 1 awl, 1 bone tool, 2 obsidian biface fragments, 9 bags of obsidian debitage, 4 stone metate fragments, 4 bags of animal bone, 1 obsidian hydration sample, and 5 bags of organic flotation residue. The... artifacts and obsidian hydration dating. The Fowler Museum at UCLA has determined the human remains and...

  18. Evaluating CoLiDeS + Pic: The Role of Relevance of Pictures in User Navigation Behaviour

    Science.gov (United States)

    Karanam, Saraschandra; van Oostendorp, Herre; Indurkhya, Bipin

    2012-01-01

    CoLiDeS + Pic is a cognitive model of web-navigation that incorporates semantic information from pictures into CoLiDeS. In our earlier research, we have demonstrated that by incorporating semantic information from pictures, CoLiDeS + Pic can predict the hyperlinks on the shortest path more frequently, and also with greater information scent,…

  19. HPC parallel programming model for gyrokinetic MHD simulation

    International Nuclear Information System (INIS)

    Naitou, Hiroshi; Yamada, Yusuke; Tokuda, Shinji; Ishii, Yasutomo; Yagi, Masatoshi

    2011-01-01

    The 3-dimensional gyrokinetic PIC (particle-in-cell) code for MHD simulation, Gpic-MHD, was installed on SR16000 (“Plasma Simulator”), which is a scalar cluster system consisting of 8,192 logical cores. The Gpic-MHD code advances particle and field quantities in time. In order to distribute calculations over a large number of logical cores, the total simulation domain in cylindrical geometry was broken up into N_DD-r × N_DD-z (number of radial decompositions times number of axial decompositions) small domains containing approximately the same number of particles. The axial direction was decomposed uniformly, while the radial direction was decomposed non-uniformly. N_RP replicas (copies) of each decomposed domain were used (“particle decomposition”). A hybrid parallelization model of multi-threading and multi-processing was employed: threads were parallelized by auto-parallelization, and N_DD-r × N_DD-z × N_RP processes were parallelized by MPI (message-passing interface). The parallelization performance of Gpic-MHD was investigated for a medium-size system of N_r × N_θ × N_z = 1025 × 128 × 128 mesh points with 4.096 or 8.192 billion particles. The highest speed for a fixed number of logical cores was obtained with two threads, the maximum value of N_DD-z, and the optimum combination of N_DD-r and N_RP. The observed optimum speeds demonstrated good scaling up to 8,192 logical cores. (author)
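    The decomposition described above can be sketched in a few lines. The numbers below are illustrative (chosen so the counts match the 8,192 logical cores quoted in the record) and are not taken from the Gpic-MHD source.

```python
# Sketch of the process layout described above: the cylindrical domain is
# split into N_DD-r x N_DD-z spatial subdomains, and each subdomain is
# replicated N_RP times ("particle decomposition"). All function names and
# example numbers are illustrative, not from the Gpic-MHD code itself.

def process_layout(n_dd_r, n_dd_z, n_rp, threads_per_process):
    """Return (mpi_processes, logical_cores) for a given decomposition."""
    mpi_processes = n_dd_r * n_dd_z * n_rp
    return mpi_processes, mpi_processes * threads_per_process

def particles_per_process(total_particles, n_dd_r, n_dd_z, n_rp):
    """Each of the N_RP replicas of a subdomain holds an equal share of
    that subdomain's particles."""
    subdomains = n_dd_r * n_dd_z
    return total_particles // (subdomains * n_rp)

# Example: 8 radial x 64 axial subdomains, 8 replicas, 2 threads each
procs, cores = process_layout(8, 64, 8, 2)
share = particles_per_process(8_192_000_000, 8, 64, 8)
print(procs, cores, share)  # 4096 processes, 8192 logical cores
```

The point of the replicas is that the per-process particle count (and hence the load) stays balanced even when the field grid of a subdomain is small.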

  20. Activation of AMP-Activated Protein Kinase α and Extracellular Signal-Regulated Kinase Mediates CB-PIC-Induced Apoptosis in Hypoxic SW620 Colorectal Cancer Cells

    Directory of Open Access Journals (Sweden)

    Sung-Yun Cho

    2013-01-01

    Here, the antitumor mechanism of the cinnamaldehyde derivative CB-PIC was elucidated in human SW620 colon cancer cells. CB-PIC exerted significant cytotoxicity, increased sub-G1 accumulation, and cleaved PARP with apoptotic features, while it enhanced the phosphorylation of AMPKα and ACC and activated ERK in hypoxic SW620 cells. Furthermore, CB-PIC suppressed the expression of HIF1α, Akt, and mTOR and activated AMPK phosphorylation in hypoxic SW620 cells. Conversely, silencing of AMPKα blocked PARP cleavage and ERK activation induced by CB-PIC, while the ERK inhibitor PD 98059 attenuated the phosphorylation of AMPKα in hypoxic SW620 cells, implying cross-talk between ERK and AMPKα. Furthermore, cotreatment with CB-PIC and metformin enhanced the inhibition of HIF1α and Akt/mTOR and the activation of AMPKα and pACC in hypoxic SW620 cells. In addition, CB-PIC suppressed the growth of SW620 cells inoculated in BALB/c athymic nude mice, and immunohistochemistry revealed that CB-PIC treatment attenuated the expression of Ki-67, CD34, and CAIX and increased the expression of pAMPKα in the CB-PIC-treated group. Interestingly, CB-PIC showed better antitumor activity in SW620 colon cancer cells under hypoxia than under normoxia, suggesting possible application against chemoresistant disease. Overall, our findings suggest that activation of AMPKα and ERK mediates CB-PIC-induced apoptosis in hypoxic SW620 colon cancer cells.

  1. Fusion PIC code performance analysis on the Cori KNL system

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, Tuomas S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Friesen, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Raman, Karthic [INTEL Corp. (United States)

    2017-05-25

    We study the attainable performance of particle-in-cell codes on the Cori KNL system by analyzing a miniature particle push application based on the fusion PIC code XGC1. We start from the most basic building blocks of a PIC code and build up the complexity to identify the kernels that cost the most in performance and focus optimization efforts there. Particle push kernels operate at high arithmetic intensity (AI) and are not likely to be memory bandwidth or even cache bandwidth bound on KNL. Therefore, we see only minor benefits from the high-bandwidth memory available on KNL, and achieving good vectorization is shown to be the most beneficial optimization path, with a theoretical yield of up to 8x speedup on KNL. In practice we are able to obtain up to a 4x gain from vectorization, due to limitations set by the data layout and memory latency.
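    The claim that a high-AI push kernel gains little from high-bandwidth memory can be illustrated with a minimal roofline estimate. The peak and bandwidth figures below are rough KNL-class numbers assumed purely for illustration, not measurements from the report.

```python
# Minimal roofline sketch: attainable performance is the lesser of the
# machine's peak flop rate and (arithmetic intensity x memory bandwidth).
# PEAK and the bandwidth values are illustrative KNL-class numbers.

def attainable_gflops(ai_flops_per_byte, peak_gflops, bw_gbytes_per_s):
    return min(peak_gflops, ai_flops_per_byte * bw_gbytes_per_s)

PEAK = 3000.0      # ~3 TF/s double precision (illustrative)
MCDRAM_BW = 400.0  # ~400 GB/s high-bandwidth memory (illustrative)
DDR_BW = 90.0      # ~90 GB/s DDR (illustrative)

# A high-AI particle push (say 40 flops/byte) is compute-bound on either
# memory type, so the high-bandwidth memory buys no extra headroom:
print(attainable_gflops(40.0, PEAK, MCDRAM_BW))  # 3000.0
print(attainable_gflops(40.0, PEAK, DDR_BW))     # 3000.0

# A low-AI streaming kernel (1 flop/byte) would be bandwidth-bound:
print(attainable_gflops(1.0, PEAK, DDR_BW))      # 90.0
```

In the compute-bound regime the remaining lever is vector width, which is why the report identifies vectorization as the most beneficial optimization path.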

  2. 2D arc-PIC code description: methods and documentation

    CERN Document Server

    Timko, Helga

    2011-01-01

    Vacuum discharges are one of the main limiting factors for future linear collider designs such as that of the Compact LInear Collider. To optimize machine efficiency, maintaining the highest feasible accelerating gradient below a certain breakdown rate is desirable; understanding breakdowns can therefore help us to achieve this goal. As a part of ongoing theoretical research on vacuum discharges at the Helsinki Institute of Physics, the build-up of plasma can be investigated through the particle-in-cell method. For this purpose, we have developed the 2D Arc-PIC code introduced here. We present an exhaustive description of the 2D Arc-PIC code in two parts. In the first part, we introduce the particle-in-cell method in general and detail the techniques used in the code. In the second part, we provide a documentation and derivation of the key equations occurring in the code. The code is original work of the author, written in 2010, and is therefore under the copyright of the author. The development of the code h...

  3. PICsIT a position sensitive detector for space applications

    CERN Document Server

    Labanti, C; Ferriani, S; Ferro, G; Malaguti, G; Mauri, A; Rossi, E; Schiavone, F; Stephen, J B; Traci, A; Visparelli, D

    2002-01-01

    The Pixellated Imaging CsI Telescope (PICsIT) is the high-energy detector plane of the Imager on Board the INTEGRAL Satellite (IBIS), one of the main instruments on board the International Gamma-Ray Astrophysics Laboratory (INTEGRAL) satellite that will be launched in the year 2001. It consists of 4096 individual CsI(Tl) detector elements and operates in the energy range from 120 to 10,000 keV. PICsIT is made up of 8 identical modules, each housing 512 scintillating crystals coupled to PIN photodiodes (PD). Each crystal, 30 mm long and with a cross-section of 8.55 × 8.55 mm², is wrapped with a white diffusing coating and then inserted into an aluminium crate. In order to have a compact design, two electronic boards, mounted directly below the crystal/PD assembly, host both the Analogue and Digital Front-End Electronics (FEE). The behaviour of the read-out FEE has a direct impact on the performance of the whole detector in terms of lower energy threshold, energy resolution and event time tagging. Due to the great numb...

  4. Dynamic Load Balancing for PIC code using Eulerian/Lagrangian partitioning

    OpenAIRE

    Sauget, Marc; Latu, Guillaume

    2017-01-01

    This document presents an analysis of different load-balance strategies for a plasma physics code that models high-energy particle beams with the PIC method. A comparison of different load-balancing algorithms, both static and dynamic, is given, and Lagrangian and Eulerian partitioning techniques are investigated.

  5. A study on radiation-resistance of PIC (polymer-impregnated concrete) for container of conditioning and disposal of low and intermediate level radioactive wastes

    International Nuclear Information System (INIS)

    Ishizaki, Kanjiro; Sudoh, Giichi; Araki, Kunio; Kasahara, Yuko.

    1983-01-01

    The radiation-resistance of PIC was evaluated by irradiating test pieces with gamma-rays. All the test pieces had the JIS mortar size of 4 x 4 x 16 cm. JIS mortar and concrete were used as specimens; the maximum aggregate size of the concrete was 10 mm. The specimens, impregnated with MMA (methylmethacrylate) monomer or with a solution of 10% PSt (polystyrene) in MMA monomer (MMA.PSt), were polymerized by irradiating for 5 hr at a dose rate of 1 MR (1 x 10^6 roentgen)/hr. PIC specimens were exposed to up to 1000 MR of 60Co gamma-rays in air and under water, simulating shallow land disposal and deep sea dumping conditions, respectively. The lowering of strength of the PIC exposed to gamma-rays under water was larger than that of the PIC in air, and an improving effect of the added PSt on the radiation-resistance was observed. The MMA.PSt-PIC irradiated to 50 MR under water, which retained a residual compressive strength of 85%, was resistant to gamma-rays. When this residual strength was regarded as the limit of radiation-resistance in air, the limits of MMA-PIC and MMA.PSt-PIC were approximately 25 MR and 150 MR, respectively. The lowering of strength was mainly due to the deterioration of the MMA polymer in the PIC. The total exposure dose for a PIC container was estimated by assuming conditions for the packaged radioactive wastes, dose rate, container and so on; over 100 years it amounts to roughly 1.25 MR. It is therefore estimated that PIC containers for conditioning and disposal of low and intermediate level radioactive wastes have sufficient resistance to the radiation arising from the wastes. (author)

  6. The Pic19 NBS-LRR gene family members are closely linked to Scmv1, but not involved in maize resistance to sugarcane mosaic virus

    DEFF Research Database (Denmark)

    Jiang, Lu; Ingvardsen, Christina Rønn; Lübberstedt, Thomas

    2008-01-01

    the isolation and characterization of the Pic19R gene family members from the inbred line FAP1360A, which shows complete resistance to SCMV. Two primer pairs were designed based on the conserved regions among the known Pic19 paralogs and used for rapid amplification of cDNA ends of FAP1360A. Six full-length c...... of the Pic19R family indicated that the Pic19R-1 paralog is identical to the known Rxo1 gene conferring resistance to rice bacterial streak disease and none of the other Pic19R paralogs seems to be involved in resistance to SCMV...

  7. Leveraging lean principles in creating a comprehensive quality program: The UCLA health readmission reduction initiative.

    Science.gov (United States)

    Afsar-Manesh, Nasim; Lonowski, Sarah; Namavar, Aram A

    2017-12-01

    UCLA Health embarked on an effort to transform care by integrating lean methodology into a key clinical project, the Readmission Reduction Initiative (RRI). The first step focused on assembling a leadership team to articulate system-wide priorities for quality improvement. The lean principle of creating a culture of change and accountability was established by: 1) engaging stakeholders, 2) managing the process with performance accountability, and 3) delivering patient-centered care. The RRI utilized three major lean principles: 1) A3, 2) root cause analysis, and 3) value stream mapping. The baseline readmission rate at UCLA from 9/2010-12/2011 had a mean of 12.1%. After the start of the RRI program, for the period of 1/2012-6/2013, the readmission rate decreased to 11.3% (p<0.05). To impact readmissions, solutions must evolve from smaller service- and location-based interventions into strategies with a broader approach. As elucidated, a systematic clinical approach grounded in lean methodologies is a viable solution to this complex problem. Copyright © 2017 Elsevier Inc. All rights reserved.
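    As a worked example of how such a rate change is tested for significance, a two-proportion z-test can be sketched. The cohort sizes below are hypothetical, since the abstract reports only the rates (12.1% and 11.3%), not the number of discharges.

```python
import math

# How a p-value for a drop from 12.1% to 11.3% could be checked with a
# two-proportion z-test. The cohort sizes (20,000 each) are hypothetical;
# only the two rates come from the abstract.

def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_prop_z(x1=2420, n1=20000, x2=2260, n2=20000)  # 12.1% vs 11.3%
print(round(z, 2), p < 0.05)
```

With these assumed cohort sizes the 0.8-point drop is significant at the 5% level; with much smaller cohorts the same rates would not be.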

  8. PIC simulations of conical magnetically insulated transmission line with LTD generator: Transition from self-limited to load-limited flow

    Science.gov (United States)

    Liu, Laqun; Wang, Huihui; Guo, Fan; Zou, Wenkang; Liu, Dagang

    2017-04-01

    Based on the 3-dimensional particle-in-cell (PIC) code CHIPIC3D, with a new circuit boundary algorithm we developed, a conical magnetically insulated transmission line (MITL) driven by a 1.0-MV linear transformer driver (LTD) is explored numerically. The switch jitter times of the LTD are critical system parameters that are difficult to measure experimentally. In this paper, these values are obtained by comparing the PIC results with experimental data for a large diode-gap MITL. By decreasing the diode gap, we find that all PIC results agree well with experimental data as long as the MITL operates in self-limited flow, no matter how large the diode gap is. However, when the diode gap decreases to a threshold, the self-limited flow transitions to a load-limited flow. In this situation, the PIC results no longer agree with the experimental data, due to anode plasma expansion in the diode load. This disagreement is used to estimate the plasma expansion speed.

  9. 2D PIC simulations for an EN discharge with magnetized electrons and unmagnetized ions

    Science.gov (United States)

    Lieberman, Michael A.; Kawamura, Emi; Lichtenberg, Allan J.

    2009-10-01

    We conducted 2D particle-in-cell (PIC) simulations for an electronegative (EN) discharge with magnetized electrons and unmagnetized ions, and compared the results to a previously developed 1D (radial) analytical model of an EN plasma with strongly magnetized electrons and weakly magnetized ions [1]. In both cases, there is a static uniform applied magnetic field in the axial direction. The 1D radial model mimics the wall losses of the particles in the axial direction by introducing a bulk loss frequency term νL. A special (desired) solution was found in which only positive and negative ions but no electrons escaped radially. The 2D PIC results show good agreement with the 1D model over a range of parameters and indicate that the analytical form of νL employed in [1] is reasonably accurate. However, for the PIC simulations, there is always a finite flux of electrons to the radial wall which is about 10 to 30% of the negative ion flux.[4pt] [1] G. Leray, P. Chabert, A.J. Lichtenberg and M.A. Lieberman, J. Phys. D, accepted for publication 2009.

  10. PIC simulation of the electron-ion collision effects on suprathermal electrons

    International Nuclear Information System (INIS)

    Wu Yanqing; Han Shensheng

    2000-01-01

    The generation and transport of suprathermal electrons are important to both the traditional ICF scheme and the 'Fast Ignition' scheme. The authors discuss the effects of electron-ion collisions on the generation and transport of suprathermal electrons produced by parametric instability. Including a weak electron-ion collision term in the PIC simulation enhances collisional absorption, increases the hot-electron temperature, and reduces the maximum electrostatic field amplitude at wave breaking. The energy and distribution of the suprathermal electrons are therefore changed: they are distributed more closely around the phase velocity of the electrostatic wave than in the case without the electron-ion collision term. The electron-ion collisions also enhance the self-consistent field and impede suprathermal electron transport; these factors further reduce the suprathermal electron energy. In addition, the authors discuss the effect of the initial conditions on the PIC simulation to ensure that the results are correct.

  11. Doubling-resolution analog-to-digital conversion based on PIC18F45K80

    Directory of Open Access Journals (Sweden)

    Yueyang Yuan

    2014-08-01

    To convert analog signals to digital form with higher precision, a method for improving analog-to-digital converter (ADC) resolution is proposed and described. Based on the PIC18F45K80 microcontroller, which has internal ADC modules, a circuit is designed that doubles the ADC resolution. The mathematical formula for calculating this resolution is derived from the circuit diagram. The corresponding software and printed circuit board assembly were also prepared. In the experiment, a 13-bit ADC was achieved based on the 12-bit ADC module built into the PIC18F45K80.
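    The record describes a circuit-based doubling of ADC resolution. For comparison, a widely used software alternative, oversampling and decimation, also gains one bit per factor-of-4 oversampling; the sketch below illustrates that general idea and is not the circuit method of the paper.

```python
# Oversampling sketch: summing 4 raw 12-bit samples (a 14-bit sum) and
# shifting right by 1 yields a 13-bit result, provided the input carries
# at least ~1 LSB of noise or dither. This illustrates the generic
# resolution-doubling idea in software; the paper achieves the extra bit
# with an external circuit, not with this technique.

def oversample_13bit(samples_12bit):
    """Combine 4 consecutive 12-bit samples into one 13-bit value."""
    assert len(samples_12bit) == 4
    return sum(samples_12bit) >> 1  # 14-bit sum >> 1 -> 13-bit result

# An input sitting between two 12-bit codes dithers between them, and the
# averaged result resolves the half-LSB step:
print(oversample_13bit([2048, 2048, 2048, 2048]))  # 4096
print(oversample_13bit([2048, 2048, 2049, 2049]))  # 4097
```

The trade-off is a 4x lower effective sample rate, which is why hardware approaches like the one in the paper remain attractive when throughput matters.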

  12. Experimental And Theoretical High Energy Physics Research At UCLA

    Energy Technology Data Exchange (ETDEWEB)

    Cousins, Robert D. [University of California Los Angeles

    2013-07-22

    This is the final report of the UCLA High Energy Physics DOE Grant No. DE-FG02-91ER40662. This report covers the last grant project period, namely the three years beginning January 15, 2010, plus extensions through April 30, 2013. The report describes the broad range of our experimental research spanning direct dark matter detection searches using both liquid xenon (XENON) and liquid argon (DARKSIDE); present (ICARUS) and R&D for future (LBNE) neutrino physics; ultra-high-energy neutrino and cosmic ray detection (ANITA); and the highest-energy accelerator-based physics with the CMS experiment and CERN’s Large Hadron Collider. For our theory group, the report describes frontier activities including particle astrophysics and cosmology; neutrino physics; LHC interaction cross section calculations now feasible due to breakthroughs in theoretical techniques; and advances in the formal theory of supergravity.

  13. EXPERIMENTAL INVESTIGATION OF PIC FORMATION DURING THE INCINERATION OF RECOVERED CFC-11

    Science.gov (United States)

    The report gives results of an investigation of the formation of products of incomplete combustion (PICs) during incineration of "recovered" trichlorofluoromethane (CFC-11). Tests involved burning the recovered CFC-11 in a propane gas flame. Combustion gas samples were taken and an...

  14. Progress on the Development of the hPIC Particle-in-Cell Code

    Science.gov (United States)

    Dart, Cameron; Hayes, Alyssa; Khaziev, Rinat; Marcinko, Stephen; Curreli, Davide; Laboratory of Computational Plasma Physics Team

    2017-10-01

    Advancements were made in the development of the kinetic-kinetic electrostatic Particle-in-Cell code, hPIC, designed for large-scale simulation of the Plasma-Material Interface. hPIC achieved a weak scaling efficiency of 87% using the Algebraic Multigrid Solver BoomerAMG from the PETSc library on more than 64,000 cores of the Blue Waters supercomputer at the University of Illinois at Urbana-Champaign. The code successfully simulates two-stream instability and a volume of plasma over several square centimeters of surface extending out to the presheath in kinetic-kinetic mode. Results from a parametric study of the plasma sheath in strongly magnetized conditions will be presented, as well as a detailed analysis of the plasma sheath structure at grazing magnetic angles. The distribution function and its moments will be reported for plasma species in the simulation domain and at the material surface for plasma sheath simulations.

  15. To build an environmental quality building. Evaluation: the HQE secondary school of Pic Saint Loup realized by the region; Construire un batiment respectueux de l'environnement. Retour d'experience: le Lycee HQE du Pic Saint Loup realise par la Region

    Energy Technology Data Exchange (ETDEWEB)

    Denicourt, Ch.

    2004-07-01

    This document presents the work carried out on the Pic Saint Loup secondary school, concerning the programme management of an environmental quality (HQE) building. Its eight chapters detail the realization of the HQE building, the planning of an HQE building project, the Pic Saint Loup project itself, the start of the operation, the implementation of the planning, the evaluation of the project's feasibility, the drafting of the programme, and the evaluation of time and cost. (A.L.B.)

  16. 3D PiC code investigations of Auroral Kilometric Radiation mechanisms

    International Nuclear Information System (INIS)

    Gillespie, K M; McConville, S L; Speirs, D C; Ronald, K; Phelps, A D R; Bingham, R; Cross, A W; Robertson, C W; Whyte, C G; He, W; Vorgul, I; Cairns, R A; Kellett, B J

    2014-01-01

    Efficient (∼1%) electron cyclotron radio emissions are known to originate in the X mode from regions of locally depleted plasma in the Earth's polar magnetosphere. These emissions are commonly referred to as Auroral Kilometric Radiation (AKR). AKR occurs naturally in these polar regions, where electrons are accelerated by electric fields into the increasing magnetic field of the planetary dipole. Here conservation of the magnetic moment converts axial to rotational momentum, forming a horseshoe distribution in velocity phase space. This distribution is unstable to cyclotron emission, with radiation emitted in the X-mode. Initial studies were conducted in the form of 2D PiC code simulations [1] and a scaled laboratory experiment that was constructed to reproduce the mechanism of AKR. As studies progressed, 3D PiC code simulations were conducted to enable complete investigation of the complex interaction dimensions. A maximum efficiency of 1.25% is predicted from these simulations, in the same mode and frequency as measured in the experiment. This is also consistent with geophysical observations and the predictions of theory.

  17. The LHC Tier1 at PIC: Experience from first LHC run

    International Nuclear Information System (INIS)

    Flix, J.; Perez-Calero Yzquierdo, A.; Accion, E.; Acin, V.; Acosta, C.; Bernabeu, G.; Bria, A.; Casals, J.; Caubet, M.; Cruz, R.; Delfino, M.; Espinal, X.; Lanciotti, E.; Lopez, F.; Martinez, F.; Mendez, V.; Merino, G.; Pacheco, A.; Planas, E.; Porto, M. C.; Rodriguez, B.; Sedov, A.

    2013-01-01

    This paper summarizes the operational experience of the Tier1 computer center at Port d'Informacio Cientifica (PIC) supporting the commissioning and first run (Run1) of the Large Hadron Collider (LHC). The evolution of the experiment computing models resulting from the higher amounts of data expected after the restart of the LHC is also described. (authors)

  18. Recent reflectometry results from the UCLA plasma diagnostics group

    International Nuclear Information System (INIS)

    Gilmore, M.; Doyle, E.J.; Kubota, S.; Nguyen, X.V.; Peebles, W.A.; Rhodes, T.L.; Zeng, L.

    2001-01-01

    The UCLA Plasma Diagnostics Group has an active ongoing reflectometry program. The program is threefold, including 1) profile and 2) fluctuation measurements on fusion devices (DIII-D, NSTX, and others), and 3) basic reflectometry studies in linear and laboratory plasmas that seek to develop new measurement capabilities and increase the physics understanding of reflectometry. Recent results on the DIII-D tokamak include progress toward the implementation of FM reflectometry as a standard density profile diagnostic, and correlation length measurements in QDB discharges that indicate a very different scaling than normally observed in L-mode plasmas. The first reflectometry measurements in a spherical torus (ST) have also been obtained, on NSTX. Profiles in NSTX show good agreement with those from Thomson scattering. Finally, in a linear device, a local magnetic field strength measurement based on O-X correlation reflectometry has been demonstrated at the proof-of-principle level, and correlation lengths measured by reflectometry are in good agreement with probes. (author)

  19. Rise time of proton cut-off energy in 2D and 3D PIC simulations

    Science.gov (United States)

    Babaei, J.; Gizzi, L. A.; Londrillo, P.; Mirzanejad, S.; Rovelli, T.; Sinigardi, S.; Turchetti, G.

    2017-04-01

    The Target Normal Sheath Acceleration regime for proton acceleration by laser pulses is experimentally consolidated and fairly well understood. However, uncertainties remain in the analysis of particle-in-cell simulation results. The energy spectrum is exponential with a cut-off, but the maximum energy depends on the simulation time, following different laws in two-dimensional (2D) and three-dimensional (3D) PIC simulations, so that the determination of an asymptotic value has some arbitrariness. We propose two empirical laws for the rise time of the cut-off energy in 2D and 3D PIC simulations, suggested by a model in which the proton acceleration is due to a surface charge distribution on the target rear side. The kinetic energy of the protons that we obtain follows two distinct laws, which appear to be nicely satisfied by PIC simulations for a model target given by a uniform foil plus a hydrogen-rich contaminant layer. The laws depend on two parameters: the scaling time, at which the energy starts to rise, and the asymptotic cut-off energy. The values of the cut-off energy obtained by fitting 2D and 3D simulations for the same target and laser pulse configuration are comparable. This suggests that parametric scans can be performed with 2D simulations, since 3D ones are computationally very expensive, reserving for them only the role of a correspondence check. In this paper, the simulations are carried out with the PIC code ALaDyn by changing the target thickness L and the incidence angle α, with a fixed a0 = 3. A monotonic dependence, on L for normal incidence and on α for fixed L, is found, as in the experimental results for high temporal contrast pulses.
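    The two empirical laws themselves are not reproduced in the record. As an illustration of how their two parameters (the scaling time and the asymptotic cut-off energy) could be extracted from simulation output, the sketch below fits a hypothetical saturating law to synthetic data; the functional form and all numbers are assumptions, not the paper's laws.

```python
import math

# Hypothetical saturating rise law for the cut-off energy: E(t) rises
# after a scaling time t0 toward an asymptote e_inf. This form and the
# synthetic data are illustrative only; the paper's actual 2D/3D laws
# are not reproduced here.

def model(t, e_inf, t0):
    return 0.0 if t < t0 else e_inf * (1.0 - math.exp(-(t - t0) / t0))

def fit(times, energies, e_grid, t0_grid):
    """Brute-force least-squares fit over a small parameter grid."""
    best = None
    for e_inf in e_grid:
        for t0 in t0_grid:
            sse = sum((model(t, e_inf, t0) - e) ** 2
                      for t, e in zip(times, energies))
            if best is None or sse < best[0]:
                best = (sse, e_inf, t0)
    return best[1], best[2]

# Synthetic "simulation output" generated from the model itself:
ts = [i * 10.0 for i in range(1, 21)]      # times, illustrative units
es = [model(t, 8.0, 30.0) for t in ts]     # energies, illustrative units
e_fit, t0_fit = fit(ts, es, [7.0, 8.0, 9.0], [20.0, 30.0, 40.0])
print(e_fit, t0_fit)  # recovers 8.0 and 30.0
```

In practice one would fit the paper's 2D and 3D laws separately and compare the asymptotic cut-off energies, as the authors do.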

  20. Construction and initial operation of MHD PbLi facility at UCLA

    International Nuclear Information System (INIS)

    Kunugi, T.; Yokomine, T.; Ueki, Y.; Smolentsev, S.; Li, F.-C.; Sketchley, T.; Abdou, M.A.; Yuki, K.

    2014-01-01

    We review current accomplishments in Task 1-3, 'Flow Control and Thermofluid Modeling', of the Japan-US 'TITAN' collaboration program. Our task focuses on experimental activities as well as computer modeling of magnetohydrodynamic flows and of heat and mass transfer in electrically conducting fluids under conditions relevant to fusion blankets. Since our task started, major efforts have been devoted to designing, constructing and testing a new magnetohydrodynamic lead-lithium (PbLi) loop at UCLA, to accumulating PbLi handling technology, and to developing a high-temperature ultrasonic Doppler velocimetry system and a differential-pressure measurement system for PbLi flows. In the present paper, the loop construction, the electromagnetic pump performance test, and our on-going experiments with the constructed loop are described. (author)

  1. PIC simulation of a thermal anisotropy-driven Weibel instability in a circular rarefaction wave

    International Nuclear Information System (INIS)

    Dieckmann, M E; Sarri, G; Kourakis, I; Borghesi, M; Murphy, G C; O'C Drury, L; Bret, A; Romagnani, L; Ynnerman, A

    2012-01-01

    The expansion of an initially unmagnetized planar rarefaction wave has recently been shown to trigger a thermal anisotropy-driven Weibel instability (TAWI), which can generate magnetic fields from noise levels. It is examined here whether the TAWI can also grow in a curved rarefaction wave. The expansion of an initially unmagnetized circular plasma cloud, which consists of protons and hot electrons, into a vacuum is modelled for this purpose with a two-dimensional particle-in-cell (PIC) simulation. It is shown that the momentum transfer from the electrons to the radially accelerating protons can indeed trigger a TAWI. Radial current channels form and the aperiodic growth of a magnetowave is observed, which has a magnetic field that is oriented orthogonal to the simulation plane. The induced electric field implies that the electron density gradient is no longer parallel to the electric field. Evidence is presented here that this electric field modification triggers a second magnetic instability, which results in a rotational low-frequency magnetowave. The relevance of the TAWI is discussed for the growth of small-scale magnetic fields in astrophysical environments, which are needed to explain the electromagnetic emissions by astrophysical jets. It is outlined how this instability could be examined experimentally. (paper)

  2. PIC simulation of a thermal anisotropy-driven Weibel instability in a circular rarefaction wave

    Science.gov (United States)

    Dieckmann, M. E.; Sarri, G.; Murphy, G. C.; Bret, A.; Romagnani, L.; Kourakis, I.; Borghesi, M.; Ynnerman, A.; O'C Drury, L.

    2012-02-01

    The expansion of an initially unmagnetized planar rarefaction wave has recently been shown to trigger a thermal anisotropy-driven Weibel instability (TAWI), which can generate magnetic fields from noise levels. It is examined here whether the TAWI can also grow in a curved rarefaction wave. The expansion of an initially unmagnetized circular plasma cloud, which consists of protons and hot electrons, into a vacuum is modelled for this purpose with a two-dimensional particle-in-cell (PIC) simulation. It is shown that the momentum transfer from the electrons to the radially accelerating protons can indeed trigger a TAWI. Radial current channels form and the aperiodic growth of a magnetowave is observed, which has a magnetic field that is oriented orthogonal to the simulation plane. The induced electric field implies that the electron density gradient is no longer parallel to the electric field. Evidence is presented here that this electric field modification triggers a second magnetic instability, which results in a rotational low-frequency magnetowave. The relevance of the TAWI is discussed for the growth of small-scale magnetic fields in astrophysical environments, which are needed to explain the electromagnetic emissions by astrophysical jets. It is outlined how this instability could be examined experimentally.

  3. A maximum power point tracker for photovoltaic system using a PIC microcontroller; Controlador de potencia maxima para sistemas fotovoltaicos (SFVs) utilizando un microcontrolador PIC

    Energy Technology Data Exchange (ETDEWEB)

    Guzman, Eusebio; Mendoza, Victor X; Carrillo, Jose J . A; Galarza, Cristian [Universidad Autonoma Metropolitana, Mexico, D.F. (Mexico)

    2000-07-01

    A maximum power point tracker (MPPT) for photovoltaic systems is presented. The equipment can output up to 600 W and its control signals are generated by a PIC microcontroller. The control principle is based on current and voltage sampling at the output terminals of the photovoltaic generator. By comparing the power of two consecutive samples, it is possible to determine how far from the optimal point the system is operating. Output voltage control is used to force the system to work within the optimal area of operation. The microcontroller program sequence, the DC/DC converter structure and the most relevant results are shown.
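    The sampling-and-comparison scheme described in the abstract is essentially the classic perturb-and-observe (P&O) algorithm. As an illustration only (the paper's PIC implementation is not reproduced here, and the PV power curve below is made up, with its maximum near 17 V), a minimal sketch in Python:

    ```python
    def perturb_and_observe(v, p, v_prev, p_prev, step=0.1):
        """One P&O iteration: compare the power of two consecutive samples
        and perturb the operating voltage toward the maximum power point."""
        if p > p_prev:
            # Power increased: keep perturbing in the same direction.
            direction = 1.0 if v > v_prev else -1.0
        else:
            # Power decreased: reverse the perturbation direction.
            direction = -1.0 if v > v_prev else 1.0
        return v + direction * step

    def pv_power(v):
        """Toy photovoltaic power curve with a maximum at v = 17 V."""
        return max(0.0, -(v - 17.0) ** 2 + 600.0)

    # Start below the maximum power point and let the tracker converge.
    v_prev, v = 10.0, 10.5
    p_prev = pv_power(v_prev)
    for _ in range(200):
        p = pv_power(v)
        v_next = perturb_and_observe(v, p, v_prev, p_prev, step=0.1)
        v_prev, p_prev, v = v, p, v_next
    ```

    Once near the maximum, the operating point oscillates within one perturbation step of the optimum, which is the characteristic (and well-known) residual ripple of P&O trackers.
    
    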

  4. PicPrint: Embedding pictures in additive manufacturing

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Eiríksson, Eyþór Rúnar; Lyngby, Rasmus Ahrenkiel

    2017-01-01

    Here we present PicPrint, a method and tool for producing an additively manufactured lithophane, enabling the transfer and embedding of 2D information into additively manufactured 3D objects. The method takes an input image and converts it to a …, after which the mesh is ready for either direct printing on an additive manufacturing system, or transfer to other geometries via Boolean mesh operations. …

  5. Implementation of multi-layer feed forward neural network on PIC16F877 microcontroller

    International Nuclear Information System (INIS)

    Nur Aira Abd Rahman

    2005-01-01

    An Artificial Neural Network (ANN) is an electronic model based on the neural structure of the brain. Like the human brain, an ANN consists of interconnected simple processing units, or neurons, that process inputs to generate output signals. ANN operation is divided into two categories: training mode and service mode. This project aims to implement an ANN on a PIC microcontroller to enable on-chip, stand-alone training and service modes. The inputs can come from sensors or switches, while the outputs can be used to control valves, motors, light sources and much more. As part of the project's development, this paper reports the current status and results of the implemented ANN. The hardware portion of this project incorporates a Microchip PIC16F877A microcontroller along with a uM-FPU math co-processor. The uM-FPU is a 32-bit floating-point co-processor used to execute the complex calculations required by the sigmoid activation function of the neurons. The ANN algorithm is converted into a software program written in assembly language. The implemented ANN structure has three layers, including one hidden layer, and five neurons, two of which are hidden neurons. To prove its operability and functionality, the network is trained to solve three common logic gate operations: AND, OR and XOR. This paper concludes that the ANN has been successfully implemented on the PIC16F877A and uM-FPU math co-processor hardware, and that it works correctly in both training and service modes. (Author)
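    To make the logic-gate benchmark concrete, here is a hypothetical sketch of the network topology the abstract describes, written in Python rather than PIC assembly, and with hand-picked weights instead of trained ones: a 2-2-1 sigmoid network that reproduces XOR.

    ```python
    import math

    def sigmoid(x):
        """Logistic activation, the costly function the paper offloads to the uM-FPU."""
        return 1.0 / (1.0 + math.exp(-x))

    def forward(x1, x2):
        # Hidden neuron 1 acts like OR, hidden neuron 2 like NAND
        # (illustrative hand-picked weights, not the paper's trained values).
        h1 = sigmoid(20 * x1 + 20 * x2 - 10)
        h2 = sigmoid(-20 * x1 - 20 * x2 + 30)
        # The output neuron ANDs the two hidden outputs, yielding XOR overall.
        return sigmoid(20 * h1 + 20 * h2 - 30)

    xor_table = {(a, b): round(forward(a, b)) for a in (0, 1) for b in (0, 1)}
    # → {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    ```

    The steep weights saturate the sigmoids, so rounding the output recovers the exact truth table; a trained network would reach a similar configuration with smoother values.
    
    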

  6. Plasma simulation and fusion calculation

    International Nuclear Information System (INIS)

    Buzbee, B.L.

    1983-01-01

    Particle-in-cell (PIC) models are widely used in fusion studies associated with energy research. They are also used in certain fluid dynamical studies. Parallel computation is relevant to them because (1) PIC models are not amenable to extensive vectorization - about 50% of the total computation can be vectorized in the average model; (2) the volume of data processed by PIC models typically necessitates the use of secondary storage, with an attendant requirement for high-speed I/O; and (3) PIC models exist today whose implementation requires a computer 10 to 100 times faster than the Cray-1. This paper discusses parallel formulations of PIC models for master/slave architectures and ring architectures. Because interprocessor communication can be a decisive factor in the overall efficiency of a parallel system, we show how to divide these models into large granules that can be executed in parallel with relatively little need for communication. We also report measurements of speedup obtained from experiments on the UNIVAC 1100/84 and the Denelcor HEP
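    The "large granule" idea can be sketched with a toy one-dimensional charge deposition step (the nearest-grid-point weighting and granule count here are assumptions for illustration, not details from the paper): each granule of particles deposits onto its own private copy of the grid with no communication, and the private grids are then combined in a single reduction.

    ```python
    def deposit(particles, ngrid):
        """Nearest-grid-point charge deposition of unit-charge particles
        (positions in [0, 1)) onto a grid of ngrid cells."""
        rho = [0.0] * ngrid
        for x in particles:
            rho[int(x * ngrid) % ngrid] += 1.0
        return rho

    def parallel_deposit(particles, ngrid, ngranules):
        # Split the particle population into a few large granules; each
        # granule deposits onto a private grid copy, independently of the
        # others (this is where the parallel work would go) ...
        chunk = (len(particles) + ngranules - 1) // ngranules
        partials = [deposit(particles[i:i + chunk], ngrid)
                    for i in range(0, len(particles), chunk)]
        # ... then the private grids are summed in one reduction step,
        # the only point requiring interprocessor communication.
        return [sum(col) for col in zip(*partials)]

    particles = [i / 1000.0 for i in range(1000)]
    rho = parallel_deposit(particles, 16, 4)
    ```

    Because each granule touches only its own grid copy, the scheme avoids fine-grained synchronization on shared grid cells, at the cost of one grid copy per granule.
    
    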

  7. Potencial alelopático de Tropaeolum majus L. na germinação e crescimento inicial de plântulas de picão-preto Allelophaty potential of Tropaeolum majus L on picão-preto seeds germination and initial seedling growth

    Directory of Open Access Journals (Sweden)

    Anelise Samara Nazari Formagio

    2012-01-01

    Full Text Available This study evaluated the allelopathic potential of methanolic extracts of leaves, flowers and roots of capuchinha (Tropaeolum majus L.) on seed germination and initial seedling growth of picão-preto. The methanolic extract with the strongest inhibitory potential was partitioned into hexane, chloroform, ethyl acetate and hydromethanolic fractions, which were subsequently characterized by infrared (IR) absorption spectroscopy. The allelopathic effect was evaluated on picão-preto seeds, which were distributed on germitest paper moistened with 2 mL of the extracts and kept in a B.O.D. germination chamber at 25°C under constant white light; seeds placed directly in water constituted the control treatment. Seed quality was assessed by germination and vigor tests (first count and primary root and hypocotyl length of the seedlings) in a completely randomized design. The allelopathic potential of capuchinha leaves on picão-preto seed germination, hypocotyl length and root length was greater than that of the other plant parts. These effects may be associated with the presence of polar chemical groups, since the inhibitory effect on germination and initial seedling growth of picão-preto increased with solvent polarity.

  8. A parallel 3D particle-in-cell code with dynamic load balancing

    International Nuclear Information System (INIS)

    Wolfheimer, Felix; Gjonaj, Erion; Weiland, Thomas

    2006-01-01

    A parallel 3D electrostatic Particle-In-Cell (PIC) code including an algorithm for modelling Space Charge Limited (SCL) emission [E. Gjonaj, T. Weiland, 3D-modeling of space-charge-limited electron emission. A charge conserving algorithm, Proceedings of the 11th Biennial IEEE Conference on Electromagnetic Field Computation, 2004] is presented. A domain decomposition technique based on orthogonal recursive bisection is used to parallelize the computation on a distributed memory environment of clustered workstations. For problems with a highly nonuniform and time dependent distribution of particles, e.g., bunch dynamics, a dynamic load balancing between the processes is needed to preserve the parallel performance. The algorithm for the detection of a load imbalance and the redistribution of the tasks among the processes is based on a weight function criterion, where the weight of a cell measures the computational load associated with it. The algorithm is studied with two examples. In the first example, multiple electron bunches as occurring in the S-DALINAC [A. Richter, Operational experience at the S-DALINAC, Proceedings of the Fifth European Particle Accelerator Conference, 1996] accelerator are simulated in the absence of space charge fields. In the second example, the SCL emission and electron trajectories in an electron gun are simulated
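    The weight-function criterion can be illustrated with a one-dimensional toy version of recursive bisection (the paper's decomposition is 3D and orthogonal, and its weight function measures per-cell computational load; the weights and the power-of-two part count below are assumptions for illustration):

    ```python
    def bisect_weights(weights, nparts):
        """Recursively split a 1D row of cell weights into nparts contiguous
        slabs of roughly equal total weight (nparts a power of two)."""
        if nparts == 1:
            return [weights]
        target = sum(weights) / 2.0
        acc, cut = 0.0, 0
        # Advance the cut index until half of the total weight is reached.
        for i, w in enumerate(weights):
            if acc + w > target:
                break
            acc += w
            cut = i + 1
        return (bisect_weights(weights[:cut], nparts // 2) +
                bisect_weights(weights[cut:], nparts // 2))

    # Nonuniform load: a particle bunch concentrated in two cells.
    load = [1, 1, 1, 1, 10, 10, 1, 1]
    parts = bisect_weights(load, 4)
    ```

    With a uniform spatial split, one process would own both heavy cells; the weight-based cuts instead isolate the bunch, which is the imbalance the paper's dynamic repartitioning detects and corrects as the bunch moves.
    
    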

  9. A parallel 3D particle-in-cell code with dynamic load balancing

    Energy Technology Data Exchange (ETDEWEB)

    Wolfheimer, Felix [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany)]. E-mail: wolfheimer@temf.de; Gjonaj, Erion [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany); Weiland, Thomas [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany)

    2006-03-01

    A parallel 3D electrostatic Particle-In-Cell (PIC) code including an algorithm for modelling Space Charge Limited (SCL) emission [E. Gjonaj, T. Weiland, 3D-modeling of space-charge-limited electron emission. A charge conserving algorithm, Proceedings of the 11th Biennial IEEE Conference on Electromagnetic Field Computation, 2004] is presented. A domain decomposition technique based on orthogonal recursive bisection is used to parallelize the computation on a distributed memory environment of clustered workstations. For problems with a highly nonuniform and time dependent distribution of particles, e.g., bunch dynamics, a dynamic load balancing between the processes is needed to preserve the parallel performance. The algorithm for the detection of a load imbalance and the redistribution of the tasks among the processes is based on a weight function criterion, where the weight of a cell measures the computational load associated with it. The algorithm is studied with two examples. In the first example, multiple electron bunches as occurring in the S-DALINAC [A. Richter, Operational experience at the S-DALINAC, Proceedings of the Fifth European Particle Accelerator Conference, 1996] accelerator are simulated in the absence of space charge fields. In the second example, the SCL emission and electron trajectories in an electron gun are simulated.

  10. Strengthening the fission reactor nuclear science and engineering program at UCLA. Final technical report

    International Nuclear Information System (INIS)

    Okrent, D.

    1997-01-01

    This is the final report on DOE Award No. DE-FG03-92ER75838 A000, a three-year matching grant program with Pacific Gas and Electric Company (PG and E) to support strengthening of the fission reactor nuclear science and engineering program at UCLA. The program began on September 30, 1992. It has enabled UCLA to use its strong existing background to train students in technological problems which are simultaneously of interest to the industry and of specific interest to PG and E. The program included undergraduate scholarships, graduate traineeships and distinguished lecturers. Four topics were selected for research in the first year, with the benefit of active collaboration with personnel from PG and E. These topics remained the same during the second year of the program. During the third year, two topics ended with the departure of the students involved (reflux cooling in a PWR during a shutdown, and erosion/corrosion of carbon steel piping). Two new topics (long-term risk and fuel relocation within the reactor vessel) were added; hence, the topics during the third year were the following: reflux condensation and the effect of non-condensable gases; erosion/corrosion of carbon steel piping; use of artificial intelligence in severe accident diagnosis for PWRs (diagnosis of plant status during a PWR station blackout scenario); the influence on risk of organization and management quality; considerations of long-term risk from the disposal of hazardous wastes; and a probabilistic treatment of fuel motion and fuel relocation within the reactor vessel during a severe core damage accident

  11. Nonlinear PIC simulation in a Penning trap

    International Nuclear Information System (INIS)

    Lapenta, G.; Delzanno, G.L.; Finn, J. M.

    2002-01-01

    We study the nonlinear dynamics of a Penning trap plasma, including the effect of the finite length and end curvature of the plasma column. A new cylindrical PIC code, called KANDINSKY, has been implemented by using a new interpolation scheme. The principal idea is to calculate the volume of each cell from a particle volume, in the same manner as it is done for the cell charge. With this new method, the density is conserved along streamlines and artificial sources of compressibility are avoided. The code has been validated with a reference Eulerian fluid code. We compare the dynamics of three different models: a model with compression effects, the standard Euler model and a geophysical fluid dynamics model. The results of our investigation prove that Penning traps can really be used to simulate geophysical fluids

  12. Photocathode driven linac at UCLA for FEL and plasma wakefield acceleration experiments

    International Nuclear Information System (INIS)

    Hartman, S.; Aghamir, F.; Barletta, W.; Cline, D.; Dodd, J.; Katsouleas, T.; Kolonko, J.; Park, S.; Pellegrini, C.; Rosenzweig, J.; Smolin, J.; Terrien, J.; Davis, J.; Hairapetian, G.; Joshi, C.; Luhmann, N. Jr.; McDermott, D.

    1991-01-01

    The UCLA compact 20-MeV/c electron linear accelerator is designed to produce a single electron bunch with a peak current of 200 A, an rms energy spread of 0.2% or less, and a short 1.2 picosecond rms pulse duration. The linac is also designed to minimize emittance growth down the beamline so as to obtain emittances of the order of 8πmm-mrad in the experimental region. The linac will feed two beamlines, the first will run straight into the undulator for FEL experiments while the second will be used for diagnostics, longitudinal bunch compression, and other electron beam experiments. Here the authors describe the considerations put into the design of the accelerating structures and the transport to the experimental areas

  13. Analysis of the beam halo in negative ion sources by using 3D3V PIC code

    Energy Technology Data Exchange (ETDEWEB)

    Miyamoto, K., E-mail: kmiyamot@naruto-u.ac.jp [Naruto University of Education, 748 Nakashima, Takashima, Naruto-cho, Naruto-shi, Tokushima 772-8502 (Japan); Nishioka, S.; Goto, I.; Hatayama, A. [Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama 223-8522 (Japan); Hanada, M.; Kojima, A.; Hiratsuka, J. [Japan Atomic Energy Agency, 801-1 Mukouyama, Naka 319-0913 (Japan)

    2016-02-15

    The physical mechanism of the formation of the negative ion beam halo and the heat loads of the multi-stage acceleration grids are investigated with the 3D PIC (particle in cell) simulation. The following physical mechanism of the beam halo formation is verified: The beam core and the halo consist of the negative ions extracted from the center and the periphery of the meniscus, respectively. This difference of negative ion extraction location results in a geometrical aberration. Furthermore, it is shown that the heat loads on the first acceleration grid and the second acceleration grid are quantitatively improved compared with those for the 2D PIC simulation result.

  14. The UCLA/SLAC Ultra-High Gradient Cerenkov Wakefield Accelerator Experiment

    CERN Document Server

    Thompson, Matthew C; Hogan, Mark; Ischebeck, Rasmus; Muggli, Patric; Rosenzweig, James E; Scott, A; Siemann, Robert; Travish, Gil; Walz, Dieter; Yoder, Rodney

    2005-01-01

    An experiment is planned to study the performance of dielectric Cerenkov wakefield accelerating structures at extremely high gradients in the GV/m range. This new UCLA/SLAC collaboration will take advantage of the unique SLAC FFTB electron beam and its demonstrated ultra-short pulse lengths and high currents (e.g., σz = 20 μm at Q = 3 nC). The electron beam will be focused down and sent through varying lengths of fused silica capillary tubing with two different sizes: ID = 200 μm / OD = 325 μm and ID = 100 μm / OD = 325 μm. The pulse length of the electron beam will be varied in order to alter the accelerating gradient and probe the breakdown threshold of the dielectric structures. In addition to breakdown studies, we plan to collect and measure coherent Cerenkov radiation emitted from the capillary tube to gain information about the strength of the accelerating fields. Status and progress on the experiment are reported.

  15. Program Package for 3d PIC Model of Plasma Fiber

    Science.gov (United States)

    Kulhánek, Petr; Břeň, David

    2007-08-01

    A fully three-dimensional Particle-in-Cell model of the plasma fiber has been developed. The code is written in Fortran 95, implemented in CVF (Compaq Visual Fortran) under the Microsoft Visual Studio user interface. Five particle solvers and two field solvers are included in the model. The solvers have relativistic and non-relativistic variants. The model can deal with both periodic and non-periodic boundary conditions. The mechanism of surface turbulence generation in the plasma fiber was successfully simulated with the PIC program package.

  16. Characterization of a trinuclear ruthenium species in catalytic water oxidation by Ru(bda)(pic)2 in neutral media.

    Science.gov (United States)

    Zhang, Biaobiao; Li, Fei; Zhang, Rong; Ma, Chengbing; Chen, Lin; Sun, Licheng

    2016-06-30

    A Ru(III)-O-Ru(IV)-O-Ru(III) type trinuclear species was crystallographically characterized in water oxidation by Ru(bda)(pic)2 (H2bda = 2,2'-bipyridine-6,6'-dicarboxylic acid; pic = 4-picoline) under neutral conditions. The formation of a ruthenium trimer due to the reaction of Ru(IV)=O with Ru(II)-OH2 was fully confirmed by chemical, electrochemical and photochemical methods. Since the oxidation of the trimer was proposed to lead to catalyst decomposition, the photocatalytic water oxidation activity was rationally improved by suppressing the formation of the trimer.

  17. Controlling the numerical Cerenkov instability in PIC simulations using a customized finite difference Maxwell solver and a local FFT based current correction

    International Nuclear Information System (INIS)

    Li, Fei; Yu, Peicheng; Xu, Xinlu; Fiuza, Frederico; Decyk, Viktor K.

    2017-01-01

    In this study we present a customized finite-difference-time-domain (FDTD) Maxwell solver for the particle-in-cell (PIC) algorithm. The solver is customized to effectively eliminate the numerical Cerenkov instability (NCI), which arises when a plasma (neutral or non-neutral) drifts relativistically on a grid in the PIC algorithm. We control the EM dispersion curve in the direction of the plasma drift of an FDTD Maxwell solver by using a customized higher-order finite-difference operator for the spatial derivative along the direction of the drift (the 1̂ direction). We show that this eliminates the main NCI modes with moderate |k_1|, while keeping the additional main NCI modes, at higher |k_1|, well outside the range of physical interest. These main NCI modes can be easily filtered out along with the first spatial aliasing NCI modes, which are also at the edge of the fundamental Brillouin zone. The customized solver has the possible advantage of improved parallel scalability because it can be easily partitioned along 1̂, which typically has many more cells than the other directions for the problems of interest. We show that FFTs can be performed locally on the current of each partition to filter out the main and first spatial aliasing NCI modes, and to correct the current so that it satisfies the continuity equation for the customized spatial derivative. This ensures that Gauss' law is satisfied. Lastly, we present simulation examples of one relativistically drifting plasma, of two colliding relativistically drifting plasmas, and of nonlinear laser wakefield acceleration (LWFA) in a Lorentz-boosted frame, in which no evidence of the NCI is observed when using this customized Maxwell solver together with its NCI elimination scheme.
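    The local spectral filtering step can be illustrated with a toy one-dimensional version (a direct DFT stands in for the FFT, and the simple mode-number cutoff kmax is an assumption for illustration, not the paper's NCI mask, which targets specific unstable modes):

    ```python
    import cmath

    def filter_high_k(j, kmax):
        """Zero Fourier modes with mode number above kmax in a periodic 1D
        current array, as a stand-in for the per-partition FFT filter."""
        n = len(j)
        # Forward DFT of the current.
        J = [sum(j[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
             for k in range(n)]
        # Remove modes outside the retained band (index k and its alias n - k).
        J = [0.0 if min(k, n - k) > kmax else J[k] for k in range(n)]
        # Inverse DFT back to a real-space current.
        return [sum(J[k] * cmath.exp(2j * cmath.pi * k * m / n)
                    for k in range(n)).real / n for m in range(n)]
    ```

    A current component inside the retained band passes through unchanged, while one at a higher mode number is removed entirely; in the paper, the same local-transform idea additionally adjusts the surviving modes so the current obeys the continuity equation of the customized derivative.
    
    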

  18. Coronagraphy at Pic du Midi: Present state and future projects

    Science.gov (United States)

    Koechlin, L.

    2012-12-01

    The Pic du Midi coronagraph (CLIMSO) is a group of four instruments operating in parallel, taking images of the whole solar photosphere and low corona. It provides series of 2048×2048-pixel images taken nominally at 1-minute intervals, all year long, weather permitting. A team of about 60 persons, in groups of 2 or 3 each week, operates the instruments. Their work is programmed in collaboration with the Institut de Recherche en Astrophysique et Planétologie (IRAP) of Observatoire Midi-Pyrénées (OMP), and with the Programme National Soleil-Terre (PNST). The four instruments of CLIMSO (L1, C1, L2 and C2) collect images of the Sun as follows: 1) L1: photosphere in H-α (656.28 nm); 2) L2: photosphere in Ca-II (393.37 nm); 3) C1: prominences in H-α; 4) C2: prominences in He-I (1083.0 nm). The data are stored as FITS-format images and MPEG films. They are publicly available in databases such as BASS 2000 Meudon (http://bass2000.obspm.fr/home.php?lang=en) and BASS 2000 Tarbes (http://bass2000.bagn.obs-mip.fr/base/sun/index.php). Several solar studies are carried out in relation with these data. In addition to the raw FITS images, new images will soon be sent to the databases: they will be calibrated in solar surface emittance, expressed in W/m^2/nm/steradian. Series of MPEG films for each day are presented in superposed color layers, so as to better visualize the multispectral information. New instrumental developments are planned for the next years and already financed. They will use spectropolarimetry to measure the magnetic field and radial velocities in the photosphere and corona. The data will cover the entire solar disc and have a sampling rate of one map per minute.

  19. Development and Testing of UCLA's Electron Losses and Fields Investigation (ELFIN) Instrument Payload

    Science.gov (United States)

    Wilkins, C.; Bingley, L.; Angelopoulos, V.; Caron, R.; Cruce, P. R.; Chung, M.; Rowe, K.; Runov, A.; Liu, J.; Tsai, E.

    2017-12-01

    UCLA's Electron Losses and Fields Investigation (ELFIN) is a 3U+ CubeSat mission designed to study relativistic particle precipitation in Earth's polar regions from Low Earth Orbit. Upon its 2018 launch, ELFIN will aim to address an important open question in space physics: are Electromagnetic Ion-Cyclotron (EMIC) waves the dominant source of pitch-angle scattering of high-energy radiation belt charged particles into Earth's atmosphere during storms and substorms? Previous studies have indicated that these scattering events occur frequently during storms and substorms, and ELFIN will be the first mission to study this process in situ. Paramount to ELFIN's success is its instrument suite, consisting of an Energetic Particle Detector (EPD) and a Fluxgate Magnetometer (FGM). The EPD comprises two collimated solid-state detector stacks which will measure the incident flux of energetic electrons from 50 keV to 4 MeV and ions from 50 keV to 300 keV. The FGM is a 3-axis magnetic field sensor which will capture the local magnetic field and its variations at frequencies up to 5 Hz. The ELFIN spacecraft spins perpendicular to the geomagnetic field to provide 16 pitch-angle particle data sectors per revolution. Together these factors provide the capability to address the nature of radiation belt particle precipitation by pitch-angle scattering during storms and substorms. ELFIN's instrument development has progressed into the late Engineering Model (EM) phase and will soon enter Flight Model (FM) development. The instrument suite is currently being tested and calibrated at UCLA using a variety of methods, including the use of radioactive sources and applied magnetics to simulate orbit conditions during spin sectoring. We present the methods and test results from instrument calibration and performance validation.

  20. The Plant Information Center (PIC): A Web-Based Learning Center for Botanical Study.

    Science.gov (United States)

    Greenberg, J.; Daniel, E.; Massey, J.; White, P.

    The Plant Information Center (PIC) is a project funded under the Institute of Museum and Library Studies that aims to provide global access to both primary and secondary botanical resources via the World Wide Web. Central to the project is the development and employment of a series of applications that facilitate resource discovery, interactive…

  1. Peptide Inhibitor of Complement C1 (PIC1) Rapidly Inhibits Complement Activation after Intravascular Injection in Rats.

    Directory of Open Access Journals (Sweden)

    Julia A Sharp

    Full Text Available The complement system has been increasingly recognized to play a pivotal role in a variety of inflammatory and autoimmune diseases. Consequently, therapeutic modulators of the classical, lectin and alternative pathways of the complement system are currently in pre-clinical and clinical development. Our laboratory has identified a peptide that specifically inhibits the classical and lectin pathways of complement and is referred to as Peptide Inhibitor of Complement C1 (PIC1). In this study, we determined that the lead PIC1 variant demonstrates a salt-dependent binding to C1q, the initiator molecule of the classical pathway. Additionally, this peptide bound to the lectin pathway initiator molecule MBL as well as the ficolins H, M and L, suggesting a common mechanism of PIC1 inhibitory activity occurs via binding to the collagen-like tails of these collectin molecules. We further analyzed the effect of arginine and glutamic acid residue substitution on the complement inhibitory activity of our lead derivative in a hemolytic assay and found that the original sequence demonstrated superior inhibitory activity. To improve upon the solubility of the lead derivative, a pegylated, water soluble variant was developed, structurally characterized and demonstrated to inhibit complement activation in mouse plasma, as well as rat, non-human primate and human serum in vitro. After intravenous injection in rats, the pegylated derivative inhibited complement activation in the blood by 90% after 30 seconds, demonstrating extremely rapid function. Additionally, no adverse toxicological effects were observed in limited testing. Together these results show that PIC1 rapidly inhibits classical complement activation in vitro and in vivo and is functional for a variety of animal species, suggesting its utility in animal models of classical complement-mediated diseases.

  2. Material analyses of foam-based SiC FCI after dynamic testing in PbLi in MaPLE loop at UCLA

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Maria, E-mail: maria.gonzalez@ciemat.es [LNF-CIEMAT, Avda Complutense, 40, 28040 Madrid (Spain); Rapisarda, David; Ibarra, Angel [LNF-CIEMAT, Avda Complutense, 40, 28040 Madrid (Spain); Courtessole, Cyril; Smolentsev, Sergey; Abdou, Mohamed [Fusion Science and Technology Center, UCLA (United States)

    2016-11-01

    Highlights: • Samples from a foam-based SiC FCI were analyzed by looking at their SEM microstructure and elemental composition. • After finishing the dynamic experiments in flowing hot PbLi, liquid metal ingress was confirmed, due to infiltration through local defects in the protective inner CVD layer. • No direct evidence of corrosion/erosion was observed; these defects could be related to the manufacturing process. - Abstract: Foam-based SiC flow channel inserts (FCIs) developed and manufactured by Ultramet, USA are currently under testing in flowing hot lead-lithium (PbLi) alloy in the MaPLE loop at UCLA to address chemical/physical compatibility and to assess the MHD pressure drop reduction. UCLA has finished the first experimental series, where a single uninterrupted long-term (∼6500 h) test was performed on a 30-cm FCI segment in a magnetic field up to 1.8 T at a temperature of 300 °C and maximum flow velocities of ∼15 cm/s. After finishing the experiments, the FCI sample was extracted from the host stainless steel duct and cut into slices. A few of them have been analyzed at CIEMAT as part of the joint collaborative effort on the development of the DCLL blanket concept in the EU and the US. The initial inspection of the slices using optical microscopic analysis at UCLA showed significant PbLi ingress into the bulk FCI material that resulted in degradation of the insulating properties of the FCI. Current material analyses at CIEMAT are based on advanced techniques, including characterization of FCI samples by FESEM to study PbLi ingress, imaging of cross sections, composition analysis by EDX and crack inspection. These analyses suggest that the ingress was caused by local defects in the protective inner CVD layer that might have been originally present in the FCI or occurred during testing.

  3. Global general pediatric surgery partnership: The UCLA-Mozambique experience.

    Science.gov (United States)

    Amado, Vanda; Martins, Deborah B; Karan, Abraar; Johnson, Brittni; Shekherdimian, Shant; Miller, Lee T; Taela, Atanasio; DeUgarte, Daniel A

    2017-09-01

    There has been increasing recognition of the disparities in surgical care throughout the world. Increasingly, efforts are being made to improve local infrastructure and training of surgeons in low-income settings. The purpose of this study was to review the first 5-years of a global academic pediatric general surgery partnership between UCLA and the Eduardo Mondlane University in Maputo, Mozambique. A mixed-methods approach was utilized to perform an ongoing needs assessment. A retrospective review of admission and operative logbooks was performed. Partnership activities were summarized. The needs assessment identified several challenges including limited operative time, personnel, equipment, and resources. Review of logbooks identified a high frequency of burn admissions and colorectal procedures. Partnership activities focused on providing educational resources, on-site proctoring, training opportunities, and research collaboration. This study highlights the spectrum of disease and operative case volume of a referral center for general pediatric surgery in sub-Saharan Africa, and it provides a context for academic partnership activities to facilitate training and improve the quality of pediatric general surgical care in limited-resource settings. Level IV. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Final Report DOE Grant No. DE-FG03-01ER54617 Computer Modeling of Microturbulence and Macrostability Properties of Magnetically Confined Plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Jean-Noel Leboeuf

    2004-03-04

    OAK-B135 We have made significant progress during the past grant period in several key areas of the UCLA and national Fusion Theory Program. This impressive body of work includes both fundamental and applied contributions to MHD and turbulence in DIII-D and Electric Tokamak plasmas, and also to Z-pinches, particularly with respect to the effect of flows on these phenomena. We have successfully carried out interpretive and predictive global gyrokinetic particle-in-cell calculations of DIII-D discharges. We have cemented our participation in the gyrokinetic PIC effort of the SciDAC Plasma Microturbulence Project through working membership in the Summit Gyrokinetic PIC Team. We have continued to teach advanced courses at UCLA pertaining to computational plasma physics and to foster interaction with students and junior researchers. We have in fact graduated 2 Ph. D. students during the past grant period. The research carried out during that time has resulted in many publications in the premier plasma physics and fusion energy sciences journals and in several invited oral communications at major conferences such as Sherwood, Transport Task Force (TTF), the annual meetings of the Division of Plasma Physics of the American Physical Society, of the European Physical Society, and the 2002 IAEA Fusion Energy Conference, FEC 2002. Many of these have been authored and co-authored with experimentalists at DIII-D.

  5. Final Report DOE Grant No. DE-FG03-01ER54617 Computer Modeling of Microturbulence and Macrostability Properties of Magnetically Confined Plasmas

    International Nuclear Information System (INIS)

    Jean-Noel Leboeuf

    2004-01-01

    OAK-B135 We have made significant progress during the past grant period in several key areas of the UCLA and national Fusion Theory Program. This impressive body of work includes both fundamental and applied contributions to MHD and turbulence in DIII-D and Electric Tokamak plasmas, and also to Z-pinches, particularly with respect to the effect of flows on these phenomena. We have successfully carried out interpretive and predictive global gyrokinetic particle-in-cell calculations of DIII-D discharges. We have cemented our participation in the gyrokinetic PIC effort of the SciDAC Plasma Microturbulence Project through working membership in the Summit Gyrokinetic PIC Team. We have continued to teach advanced courses at UCLA pertaining to computational plasma physics and to foster interaction with students and junior researchers. We have in fact graduated 2 Ph. D. students during the past grant period. The research carried out during that time has resulted in many publications in the premier plasma physics and fusion energy sciences journals and in several invited oral communications at major conferences such as Sherwood, Transport Task Force (TTF), the annual meetings of the Division of Plasma Physics of the American Physical Society, of the European Physical Society, and the 2002 IAEA Fusion Energy Conference, FEC 2002. Many of these have been authored and co-authored with experimentalists at DIII-D

  6. A general concurrent algorithm for plasma particle-in-cell simulation codes

    International Nuclear Information System (INIS)

    Liewer, P.C.; Decyk, V.K.

    1989-01-01

We have developed a new algorithm for implementing plasma particle-in-cell (PIC) simulation codes on concurrent processors with distributed memory. This algorithm, named the general concurrent PIC algorithm (GCPIC), has been used to implement an electrostatic PIC code on the 33-node JPL Mark III Hypercube parallel computer. To decompose a PIC code using the GCPIC algorithm, the physical domain of the particle simulation is divided into sub-domains, equal in number to the number of processors, such that all sub-domains have roughly equal numbers of particles. For problems with non-uniform particle densities, these sub-domains will be of unequal physical size. Each processor is assigned a sub-domain and is responsible for updating the particles in its sub-domain. This algorithm has led to a very efficient parallel implementation of a well-benchmarked 1-dimensional PIC code. The dominant portion of the code, updating the particle positions and velocities, is nearly 100% efficient when the number of particles is increased linearly with the number of hypercube processors used so that the number of particles per processor is constant. For example, the increase in time spent updating particles in going from a problem with 11,264 particles run on 1 processor to 360,448 particles on 32 processors was only 3% (parallel efficiency of 97%). Although implemented on a hypercube concurrent computer, this algorithm should also be efficient for PIC codes on other parallel architectures and for large PIC codes on sequential computers where part of the data must reside on external disks. copyright 1989 Academic Press, Inc
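The sub-domain construction described above (equal particle counts, hence unequal physical sizes for non-uniform densities) can be sketched in a few lines. This is an illustrative 1D reconstruction of the load-balancing idea, not code from the GCPIC implementation; all names are invented:

```python
import numpy as np

def balanced_subdomains(x, n_proc):
    """Split a 1D domain into n_proc sub-domains whose boundaries are
    chosen so that each holds roughly equal numbers of particles.
    Quantiles of the particle positions give load-balanced cuts; for a
    non-uniform density the resulting slices have unequal widths."""
    return np.quantile(x, np.linspace(0.0, 1.0, n_proc + 1))

# Non-uniform density: particles clustered near x = 0.
rng = np.random.default_rng(0)
x = np.abs(rng.normal(0.0, 1.0, 100_000))

edges = balanced_subdomains(x, 4)
counts, _ = np.histogram(x, bins=edges)
# Each of the 4 sub-domains holds ~25,000 particles even though
# their physical widths differ greatly.
```

With the edges in hand, each processor owns the particles that fall in its slice; as the density evolves, re-partitioning restores the balance.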

  7. Velocity control in three-phase induction motors using PIC; Controle de velocidade de motor de inducao trifasico usando PIC

    Energy Technology Data Exchange (ETDEWEB)

    Marcelino, M.A.; Silva, G.B.S.; Grandinetti, F.J. [Universidade Estadual Paulista (UNESP), Guaratingueta, SP (Brazil). Fac. de Engenharia; Universidade de Taubate (UNITAU), SP (Brazil)], Emails: abud@feg.unesp.br, gabonini@yahoo.com.br, grandinetti@unitau.br

    2009-07-01

This paper presents a technique for speed control of a three-phase induction motor using pulse width modulation (PWM), operating in open loop while keeping the voltage-to-frequency ratio constant. The technique is adapted from a thesis entitled 'Control of the three-phase induction motor, using discrete PWM generation, optimized and synchronized', which presents studies aimed at applications in home appliances, where mechanical parts are eliminated and replaced by low-cost electronic control, yielding a significant reduction in power consumption. Initially the experiment was done with the Intel 80C31 microcontroller. In this paper, the PWM modulation is implemented using a PIC microcontroller, and the speed control remains lightweight, based on lookup tables, synchronized with transitions, and with reduced generation of harmonics in the supply network. The results were confirmed using the same table-construction process, while taking advantage of the programming of a RISC device.
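The table-based approach described above can be sketched as follows: the duty-cycle table is computed offline, and the microcontroller only indexes it at run time instead of evaluating sin(). The table size, timer resolution, and function names are illustrative assumptions, not taken from the paper:

```python
import math

def pwm_sine_table(n_entries, pwm_period_counts):
    """Precompute a duty-cycle lookup table for one period of a sine
    reference. Each entry maps the reference value (-1..+1) onto the
    timer compare range (0..pwm_period_counts)."""
    table = []
    for k in range(n_entries):
        ref = math.sin(2 * math.pi * k / n_entries)      # -1 .. +1
        table.append(round((ref + 1.0) / 2.0 * pwm_period_counts))
    return table

# 96 entries, 8-bit timer: duty cycles swing 0..255 around 128.
table = pwm_sine_table(96, 255)
# Three-phase operation indexes the same table with offsets of
# n_entries/3 (here 32 entries = 120 degrees) for the other phases.
```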

  8. Specific features of spin-variable properties of [Fe(acen)pic2]BPh4 · nH2O

    Science.gov (United States)

    Ivanova, T. A.; Ovchinnikov, I. V.; Gil'mutdinov, I. F.; Mingalieva, L. V.; Turanova, O. A.; Ivanova, G. I.

    2016-02-01

    The [Fe(acen)pic2]BPh4 · nH2O compound has been synthesized and studied in the temperature interval of 5-300 K by the methods of EPR and magnetic susceptibility. The existence of ferromagnetic interactions between Fe(III) complexes in this compound has been revealed, in contrast to unhydrated [Fe(acen)pic2]BPh4. The reduction in the integrated intensity of the magnetic resonance signal as the temperature decreases below 80 K has been explained by the transition of high-spin ions to the low-spin state. It has been shown that the phase transition temperature in the presence of intermolecular (ferromagnetic) interactions is lower than that in the case of noninteracting centers.

  9. An analysis of appropriate delivery of postoperative radiation therapy for endometrial cancer using the RAND/UCLA Appropriateness Method: Executive summary

    Directory of Open Access Journals (Sweden)

    Ellen Jones, MD, PhD

    2016-01-01

    Conclusions: This analysis based on the RAND/UCLA Method shows significant agreement with the 2014 endometrial Guideline. Areas of divergence, often in scenarios with low-level evidence, included use of external beam RT plus vaginal brachytherapy in stages II and III and external beam RT alone in early-stage patients. Furthermore, the analysis explores other important questions regarding management of this disease site.

  10. Rancang Bangun Inverter SVM Berbasis Mikrokontroler PIC 18F4431 Untuk Sistem VSD

    OpenAIRE

    Tarmizi; Muyassar

    2013-01-01

A motor speed control system is known as a Variable Speed Drive (VSD) system. An induction-motor VSD uses an inverter to regulate the motor's supply frequency. To obtain a near-sinusoidal motor supply, the inverter must be switched using an appropriate method. In this study, the three-phase inverter is switched using the SVM (Space Vector Modulation) method, controlled by a PIC18F4431 microcontroller. Before the experiments were carried out, this SVM inverter was…

  11. Educating European Corporate Communication Professionals for Senior Management Positions: A Collaboration between UCLA's Anderson School of Management and the University of Lugano

    Science.gov (United States)

    Forman, Janis

    2005-01-01

    UCLA's program in strategic management for European corporate communication professionals provides participants with a concentrated, yet selective, immersion in those management disciplines taught at U.S. business schools, topics that are essential to their work as senior advisors to CEOs and as leaders in the field. The choice of topics…

  12. Vision screening of abused and neglected children by the UCLA Mobile Eye Clinic.

    Science.gov (United States)

    Yoo, R; Logani, S; Mahat, M; Wheeler, N C; Lee, D A

    1999-07-01

    The purpose of our study was to present descriptive findings of ocular abnormalities in vision screening examinations of abused and neglected children. We compared the prevalence and the nature of eye diseases and refractive error between abused and neglected boys staying at the Hathaway Home, a residential facility for abused children, and boys from neighboring Boys and Girls clubs. The children in the study received vision screening examinations through the UCLA Mobile Eye Clinic following a standard format. Clinical data were analyzed by chi-square test. The children with a history of abuse demonstrated significantly higher prevalence of myopia, astigmatism, and external eye disorders. Our study suggests that children with a history of abuse may be at higher risk for visual impairment. These visual impairments may be the long-term sequelae of child abuse.

  13. Room Thermostat with Servo Controlled by PIC Microcontroller

    Directory of Open Access Journals (Sweden)

    Jan Skapa

    2013-01-01

Full Text Available This paper describes the design of a room thermostat based on a Microchip PIC microcontroller. The thermostat is intended for a two-pipe heating system. The microprocessor controls a thermostatic valve via an electric actuator with a mechanical gear unit. The thermostat bases its operation on measurements of the air temperature in the room and on a calorimetric measurement of the heat delivered to the radiator. These features make it particularly suitable for underfloor heating regulation. The thermostat is designed to work in a network: communication with the heating system's central control unit proceeds over an RS485 bus with a proprietary communication protocol. If a communication failure occurs, the thermostat is able to work autonomously. The system uses its own real-time clock circuit and a memory holding heating programs that can cover the whole heating season. The controller implements discrete position-form PSD control.
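The position-form PSD law (the discrete counterpart of PID: Proportional, Sum, Difference) mentioned above can be sketched as follows; the gains, sampling period, and function names are illustrative and not taken from the thermostat's firmware:

```python
def psd_controller(errors, kp, ki, kd, ts):
    """Position-form discrete PSD control: the actuator command is the
    proportional term plus the running Sum of errors (replacing the
    integral) plus the backward Difference (replacing the derivative).
    ts is the sampling period in seconds."""
    u = []
    s = 0.0
    prev = errors[0]
    for e in errors:
        s += e * ts                      # rectangular integration
        d = (e - prev) / ts              # backward difference
        u.append(kp * e + ki * s + kd * d)
        prev = e
    return u

# A constant 1 K setpoint error: the sum term steadily opens the valve.
cmds = psd_controller([1.0] * 5, kp=2.0, ki=0.5, kd=0.1, ts=60.0)
```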

  14. Charge conserving current deposition scheme for PIC simulations in modified spherical coordinates

    Science.gov (United States)

    Cruz, F.; Grismayer, T.; Fonseca, R. A.; Silva, L. O.

    2017-10-01

Global models of pulsar magnetospheres have been actively pursued in recent years. Both macroscopic and microscopic (PIC) descriptions have been used, showing that collective processes of e⁻e⁺ plasmas dominate the global structure of pulsar magnetospheres. Since these systems are best described in spherical coordinates, the algorithms used in Cartesian simulations must be generalized. A problem of particular interest is that of charge conservation in PIC simulations. The complex geometry and irregular grids used to improve the efficiency of these algorithms represent major challenges in the design of a charge conserving scheme. Here we present a new first-order current deposition scheme for a 2D axisymmetric, log-spaced radial grid that rigorously conserves charge. We benchmark this scheme in different scenarios by integrating it with a spherical Yee scheme and Boris/Vay pushers. The results show that charge is conserved to machine precision, making it unnecessary to correct the electric field to guarantee charge conservation. This scheme will be particularly important for future studies aiming to bridge the microscopic physical processes of e⁻e⁺ plasma generation due to QED cascades, its self-consistent acceleration and radiative losses to the global dynamics of pulsar magnetospheres. Work supported by the European Research Council (InPairs ERC-2015-AdG 695088), FCT (Portugal) Grant PD/BD/114307/2016, and the Calouste Gulbenkian Foundation through the 2016 Scientific Research Stimulus Program.
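The scheme above is 2D axisymmetric on a log-spaced radial grid; as a much simpler illustration of what "charge conserved to machine precision" means, the following 1D Cartesian sketch (all names and values invented, not from the paper) computes face currents directly from the discrete continuity equation, so no Poisson correction of the electric field is needed:

```python
import numpy as np

def continuity_current(rho_old, rho_new, dx, dt):
    """Face currents J_{i+1/2} obtained by integrating the discrete
    continuity equation (rho_new - rho_old)/dt + dJ/dx = 0 from the
    left boundary (where J = 0). Any deposition scheme producing
    exactly these face currents conserves charge by construction."""
    dJ = -(rho_new - rho_old) * dx / dt
    return np.concatenate(([0.0], np.cumsum(dJ)))

rho_old = np.array([0.0, 1.0, 0.0, 0.0])
rho_new = np.array([0.0, 0.2, 0.8, 0.0])   # charge moved one cell right
J = continuity_current(rho_old, rho_new, dx=1.0, dt=1.0)

# Discrete continuity residual (here with dx = dt = 1) is zero to
# machine precision in every cell.
residual = (rho_new - rho_old) + (J[1:] - J[:-1])
```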

  15. Food-pics: an image database for experimental research on eating and appetite.

    Science.gov (United States)

    Blechert, Jens; Meule, Adrian; Busch, Niko A; Ohla, Kathrin

    2014-01-01

Our current environment is characterized by the omnipresence of food cues. The sight and smell of real foods, but also graphical depictions of appetizing foods, can guide our eating behavior, for example, by eliciting food craving and influencing food choice. The relevance of visual food cues on human information processing has been demonstrated by a growing body of studies employing food images across the disciplines of psychology, medicine, and neuroscience. However, currently used food image sets vary considerably across laboratories and image characteristics (contrast, brightness, etc.) and food composition (calories, macronutrients, etc.) are often unspecified. These factors might have contributed to some of the inconsistencies of this research. To remedy this, we developed food-pics, a picture database comprising 568 food images and 315 non-food images along with detailed meta-data. A total of N = 1988 individuals with large variance in age and weight from German speaking countries and North America provided normative ratings of valence, arousal, palatability, desire to eat, recognizability and visual complexity. Furthermore, data on macronutrients (g), energy density (kcal), and physical image characteristics (color composition, contrast, brightness, size, complexity) are provided. The food-pics image database is freely available under the creative commons license with the hope that the set will facilitate standardization and comparability across studies and advance experimental research on the determinants of eating behavior.

  16. Food-pics: an image database for experimental research on eating and appetite

    Directory of Open Access Journals (Sweden)

    Jens eBlechert

    2014-06-01

Full Text Available Our current environment is characterized by the omnipresence of food cues. The sight and smell of real foods, but also graphical depictions of appetizing foods, can guide our eating behavior, for example, by eliciting food craving and influencing food choice. The relevance of visual food cues on human information processing has been demonstrated by a growing body of studies employing food images across the disciplines of psychology, medicine, and neuroscience. However, currently used food image sets vary considerably across laboratories and image characteristics (contrast, brightness, etc.) and food composition (calories, macronutrients, etc.) are often unspecified. These factors might have contributed to some of the inconsistencies of this research. To remedy this, we developed food-pics, a picture database comprising 568 food images and 315 non-food images along with detailed meta-data. A total of N = 1988 individuals with large variance in age and weight from German speaking countries and North America provided normative ratings of valence, arousal, palatability, desire to eat, recognizability and visual complexity. Furthermore, data on macronutrients (g), energy density (kcal), and physical image characteristics (color composition, contrast, brightness, size, complexity) are provided. The food-pics image database is freely available under the creative commons license with the hope that the set will facilitate standardization and comparability across studies and advance experimental research on the determinants of eating behavior.

  17. Neutralization of several adult and paediatric HIV-1 subtype C isolates using a shortened synthetic derivative of gp120 binding aptamer called UCLA1.

    CSIR Research Space (South Africa)

    Mufhandu, Hazel T

    2009-07-01

Full Text Available This paper presents a chemically synthesised derivative of the B40 parental aptamer, called UCLA1 (Cohen et al., 2008), which was used for neutralization of endemic subtype C clinical isolates of HIV-1 from adult and paediatric patients and subtype B lab...

  18. Cephaloleia sp. Cerca a Vagelineata Pic*, una Plaga de la Palma Africana

    Directory of Open Access Journals (Sweden)

    Urueta Sandino Eduardo

    1972-08-01

Full Text Available Cephalolia sp. and Cephaloleila sp. have been used as synonyms of the genus Cephaloleia (Lepesme, 1947). Both the larval and adult stages are known to attack the foliage of the African oil palm (Elaeis guineensis Jacq.), often causing drying of the leaflets or their invasion by fungi. In Colombia, the Cephaloleia species close to vagelineata Pic occurs in the Urabá zone and possibly in the Department of Santander.

  19. Operational Test Report (OTR) for U-105 Pumping and Instrumentation and Control (PIC) Skid

    International Nuclear Information System (INIS)

    KOCH, M.R.

    2000-01-01

Attached is the completed Operation Test Procedure (OTP-200-004, Rev. A-18). The OTP includes a printout of the Programmable Logic Controller (PLC) Ladder Diagram. The Ladder Diagram was designed for installation in the PLC used to monitor and control pumping activity for Tank Farm 241-U-105. The completed OTP and OTR are referenced in the IS PIC Skid Configuration Drawing (H-2-829998)

  20. Operational Test Report (OTR) for U-105 Pumping and Instrumentation and Control (PIC) Skid

    Energy Technology Data Exchange (ETDEWEB)

    KOCH, M.R.

    2000-02-28

Attached is the completed Operation Test Procedure (OTP-200-004, Rev. A-18). The OTP includes a printout of the Programmable Logic Controller (PLC) Ladder Diagram. The Ladder Diagram was designed for installation in the PLC used to monitor and control pumping activity for Tank Farm 241-U-105. The completed OTP and OTR are referenced in the IS PIC Skid Configuration Drawing (H-2-829998).

  1. Operational Test Report (OTR) for U-103 Pumping and Instrumentation and Control (PIC) Skid

    Energy Technology Data Exchange (ETDEWEB)

    KOCH, M.R.

    2000-02-28

Attached is the completed Operation Test Procedure (OTP-200-004, Rev. A-16). The OTP includes a printout of the Programmable Logic Controller (PLC) Ladder Diagram. The Ladder Diagram was designed for installation in the PLC used to monitor and control pumping activity for Tank Farm 241-U-103. The completed OTP and OTR are referenced in the 25 PIC Skid Configuration Drawing (H-2-829998).

  2. PIC simulation of electron acceleration in an underdense plasma

    Directory of Open Access Journals (Sweden)

    S Darvish Molla

    2011-06-01

Full Text Available One of the interesting laser-plasma phenomena, when the laser power is high and ultra-intense, is the generation of large-amplitude plasma waves (wakefields) and electron acceleration. An intense electromagnetic laser pulse can create plasma oscillations through the action of the nonlinear ponderomotive force. Electrons trapped in the wake can be accelerated to high energies. Of the wide variety of methods for generating a regular electric field in plasmas with strong laser radiation, the most attractive one at present is the Laser Wakefield Accelerator (LWFA) scheme. In this method, a strong Langmuir wave is excited in the plasma; electrons trapped in such a wave can acquire relativistic energies. In this paper the PIC simulation of wakefield generation and electron acceleration in an underdense plasma with a short ultra-intense laser pulse is discussed. A 2D electromagnetic PIC code, written in Fortran 90, has been developed, and the propagation of different electromagnetic waves in vacuum and plasma is shown. Next, the accuracy of the 2D electromagnetic code is verified, the code is made relativistic, and the generation of wakefields and electron acceleration in an underdense plasma is simulated. It is shown that when a symmetric electromagnetic pulse passes through the plasma, the longitudinal field generated in the plasma behind the pulse is weaker than that due to an asymmetric electromagnetic pulse, and thus the electrons acquire less energy. For an asymmetric pulse whose front part has a shorter rise time than its back part, a stronger wakefield is generated behind the pulse, and consequently the electrons acquire more energy. In the inverse case, when the rise time of the front part of the pulse is longer than that of the back part, a weaker wakefield is generated, and this leads to the fact that the electrons
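The nonlinear ponderomotive force driving these plasma oscillations has the standard non-relativistic form (with e and m_e the electron charge and mass, ω the laser frequency, and E the field envelope):

```latex
\mathbf{F}_p = -\frac{e^{2}}{4\,m_e\,\omega^{2}}\,\nabla \left|E\right|^{2}
```

The force pushes electrons away from regions of high intensity, which is why the pulse leaves a density depression (the wake) behind it.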

  3. UCLA intermediate energy nuclear physics and relativistic heavy ion physics. Annual report, February 1, 1983-January 31, 1984

    International Nuclear Information System (INIS)

    1984-01-01

In this contract year the UCLA Intermediate Energy Group has continued to pursue a general set of problems in intermediate energy physics using new research tools and theoretical insights. Our program to study N-N scattering and proton-light nucleus scattering has been enhanced by a new polarized target facility (both hydrogen and deuterium) at the High Resolution Spectrometer (HRS) of the Los Alamos Meson Physics Facility (LAMPF). This facility has been constructed by our group in collaboration with physicists from KEK, LAMPF and the University of Minnesota, and the first set of experiments studying polarized beam-polarized target scattering at the HRS was completed this summer and early fall. The HRS mode of operation has led to some unique design features which are described. At the Bevalac, a new beam line spectrometer will be constructed for us during this year and next to significantly enhance our capability to study subthreshold K⁺, K⁻ and antiproton production in relativistic heavy ion collisions and to search for fractionally charged particles. During this period a proposal is being prepared for a very large acceptance spectrometer and its associated beam line which will be used to detect dilepton pairs produced in relativistic heavy ion collisions. In concert with these experimental projects, theoretical advances in the understanding of new data from the HRS, particularly spin transfer data, have been made by the UCLA group and are described

  4. RESEÑA DE LAS I JORNADAS DE INVESTIGACIÓN DE INGENIERÍA CIVIL Y URBANISMO UCLA 2015

    OpenAIRE

    J. C. Rincón

    2016-01-01

This essay outlines the proceedings of the I Civil Engineering and Urbanism Research Conference UCLA 2015, which took place on 15 and 16 March 2016 at the facilities of the Deanship of Civil Engineering of the Universidad Centroccidental Lisandro Alvarado. Papers were presented on research related to civil engineering, specifically in the areas of structures, hydraulic and sanitary engineering, construction engineering...

  5. Operational Test Report (OTR) for U-102 Pumping and Instrumentation and Control (PIC) Skid

    Energy Technology Data Exchange (ETDEWEB)

    KOCH, M.R.

    2000-02-28

Attached is the completed Operation Test Procedure (OTP-200-004, Rev. A-19 and Rev. A-20). The OTP includes a printout of the Programmable Logic Controller (PLC) Ladder Diagram. The Ladder Diagram was designed for installation in the PLC used to monitor and control pumping activity for Tank Farm 241-U-102. The completed OTP and OTR are referenced in the IS PIC Skid Configuration Drawing (H-2-829998).

  6. QUICKSILVER - A general tool for electromagnetic PIC simulation

    International Nuclear Information System (INIS)

    Seidel, David B.; Coats, Rebecca S.; Johnson, William A.; Kiefer, Mark L.; Mix, L. Paul; Pasik, Michael F.; Pointon, Timothy D.; Quintenz, Jeffrey P.; Riley, Douglas J.; Turner, C. David

    1997-01-01

    The dramatic increase in computational capability that has occurred over the last ten years has allowed fully electromagnetic simulations of large, complex, three-dimensional systems to move progressively from impractical, to expensive, and recently, to routine and widespread. This is particularly true for systems that require the motion of free charge to be self-consistently treated. The QUICKSILVER electromagnetic Particle-In-Cell (EM-PIC) code has been developed at Sandia National Laboratories to provide a general tool to simulate a wide variety of such systems. This tool has found widespread use for many diverse applications, including high-current electron and ion diodes, magnetically insulated power transmission systems, high-power microwave oscillators, high-frequency digital and analog integrated circuit packages, microwave integrated circuit components, antenna systems, radar cross-section applications, and electromagnetic interaction with biological material. This paper will give a brief overview of QUICKSILVER and provide some thoughts on its future development

  7. A gridding method for object-oriented PIC codes

    International Nuclear Information System (INIS)

    Gisler, G.; Peter, W.; Nash, H.; Acquah, J.; Lin, C.; Rine, D.

    1993-01-01

A simple, rule-based gridding method for object-oriented PIC codes is described which is not only capable of dealing with complicated structures such as multiply-connected regions, but is also computationally faster than classical gridding techniques. Using these smart grids, vacant cells (e.g., cells enclosed by conductors) will never have to be stored or calculated, thus avoiding the usual situation of having to zero electromagnetic fields within conductors after valuable cpu time has been spent calculating the fields within these cells in the first place. This object-oriented gridding technique encapsulates the characteristics of actual physical objects (particles, fields, grids, etc.) in C++ classes and supports software reuse of these entities through C++ class inheritance relations. It has been implemented in the form of a simple two-dimensional plasma particle-in-cell code, and forms the initial effort of an AFOSR research project to develop a flexible software simulation environment for particle-in-cell algorithms based on object-oriented technology
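The "smart grid" idea of never storing vacant cells can be illustrated with a minimal sparse-grid sketch (in Python rather than the paper's C++, with an invented conductor mask and class name):

```python
class SparseGrid:
    """Field storage that never allocates 'vacant' cells (e.g. cells
    inside conductors): fields there are neither stored nor updated,
    so they never need to be zeroed after the fact."""

    def __init__(self, nx, ny, is_vacant):
        # Only non-vacant cells ever exist in the map.
        self.cells = {(i, j): 0.0
                      for i in range(nx) for j in range(ny)
                      if not is_vacant(i, j)}

    def add(self, i, j, value):
        # Deposits into vacant cells are silently discarded.
        if (i, j) in self.cells:
            self.cells[(i, j)] += value

    def __len__(self):
        return len(self.cells)

# 8x8 grid with a 4x4 conductor block occupying its centre.
conductor = lambda i, j: 2 <= i < 6 and 2 <= j < 6
grid = SparseGrid(8, 8, conductor)
# 64 - 16 = 48 cells are stored; the 16 conductor cells never exist.
```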

  8. Emittance studies of the BNL/SLAC/UCLA 1.6 cell photocathode rf gun

    International Nuclear Information System (INIS)

    Palmer, D.T.; Miller, R.H.; Wang, X.J.

    1997-01-01

The symmetrized 1.6 cell S-band photocathode gun developed by the BNL/SLAC/UCLA collaboration is in operation at the Brookhaven Accelerator Test Facility (ATF). A novel emittance compensation solenoid magnet has also been designed, built, and put into operation at the ATF. These two subsystems form an emittance-compensated photoinjector used for beam dynamics, advanced acceleration and free electron laser experiments at the ATF. The highest acceleration field achieved on the copper cathode is 150 MV/m, and the gun's normal operating field is 130 MV/m. The maximum rf pulse length is 3 μs. The transverse emittance of the photoelectron beam was measured for various injection parameters. The 1 nC emittance results are presented along with electron bunch length measurements, which indicate that above 400 pC, space-charge bunch lengthening occurs. The thermal emittance, ε₀, of the copper cathode has been measured

  9. LGBT and Information Studies: The Library and Archive OUTreach Symposium at UCLA; and In the Footsteps of Barbara Gittings: An Appreciation

    OpenAIRE

    Keilty, Patrick

    2007-01-01

    On November 17, 2006 the InterActions editorial team attended the Library and Archives OUTreach symposium at UCLA. This galvanizing event brought together academics, practitioners, and activists from the information studies field to discuss the importance of increasing visibility around lesbian, gay, bisexual, and transgendered (LGBT) issues as they pertain to libraries and information seeking. Given the tremendous energy generated by these proceedings, we asked Patrick Keilty, a doctoral st...

  10. A COMPENSATOR APPLICATION USING SYNCHRONOUS MOTOR WITH A PI CONTROLLER BASED ON PIC

    OpenAIRE

    Ramazan BAYINDIR; Alper GÖRGÜN

    2009-01-01

In this paper, PI control of a synchronous motor has been realized using a PIC 18F452 microcontroller, and the motor has been operated as an ohmic, inductive and capacitive load with different excitation currents. Instead of evaluating the integral term of the PI controller, which is difficult to carry over to a digital system, the sum of all error values over a defined time period is multiplied by the sampling period. The reference values of the PI algorithm are determined with the Ziegler-Nichols method. These ...
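The described replacement of the integral by a running error sum multiplied by the sampling period, with Ziegler-Nichols tuning, might look like the following sketch (classic Z-N PI table; the plant constants Ku and Pu and all names are invented, not taken from the paper):

```python
def ziegler_nichols_pi(ku, pu):
    """Classic Ziegler-Nichols PI tuning from the ultimate gain Ku and
    ultimate oscillation period Pu, both found experimentally by
    driving the loop to sustained oscillation."""
    kp = 0.45 * ku
    ti = pu / 1.2
    return kp, kp / ti          # (Kp, Ki = Kp / Ti)

def pi_step(error, error_sum, kp, ki, ts):
    """One PI update where the integral is replaced by the accumulated
    error sum multiplied by the sampling period ts."""
    error_sum += error
    u = kp * error + ki * error_sum * ts
    return u, error_sum

kp, ki = ziegler_nichols_pi(ku=8.0, pu=0.5)
u, s = pi_step(error=1.0, error_sum=0.0, kp=kp, ki=ki, ts=0.01)
```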

  11. UCLA's Molecular Screening Shared Resource: enhancing small molecule discovery with functional genomics and new technology.

    Science.gov (United States)

    Damoiseaux, Robert

    2014-05-01

    The Molecular Screening Shared Resource (MSSR) offers a comprehensive range of leading-edge high throughput screening (HTS) services including drug discovery, chemical and functional genomics, and novel methods for nano and environmental toxicology. The MSSR is an open access environment with investigators from UCLA as well as from the entire globe. Industrial clients are equally welcome as are non-profit entities. The MSSR is a fee-for-service entity and does not retain intellectual property. In conjunction with the Center for Environmental Implications of Nanotechnology, the MSSR is unique in its dedicated and ongoing efforts towards high throughput toxicity testing of nanomaterials. In addition, the MSSR engages in technology development eliminating bottlenecks from the HTS workflow and enabling novel assays and readouts currently not available.

  12. Development and validation of Australian aphasia rehabilitation best practice statements using the RAND/UCLA appropriateness method

    Science.gov (United States)

    Power, Emma; Thomas, Emma; Worrall, Linda; Rose, Miranda; Togher, Leanne; Nickels, Lyndsey; Hersh, Deborah; Godecke, Erin; O'Halloran, Robyn; Lamont, Sue; O'Connor, Claire; Clarke, Kim

    2015-01-01

    Objectives To develop and validate a national set of best practice statements for use in post-stroke aphasia rehabilitation. Design Literature review and statement validation using the RAND/UCLA Appropriateness Method (RAM). Participants A national Community of Practice of over 250 speech pathologists, researchers, consumers and policymakers developed a framework consisting of eight areas of care in aphasia rehabilitation. This framework provided the structure for the development of a care pathway containing aphasia rehabilitation best practice statements. Nine speech pathologists with expertise in aphasia rehabilitation participated in two rounds of RAND/UCLA appropriateness ratings of the statements. Panellists consisted of researchers, service managers, clinicians and policymakers. Main outcome measures Statements that achieved a high level of agreement and an overall median score of 7–9 on a nine-point scale were rated as ‘appropriate’. Results 74 best practice statements were extracted from the literature and rated across eight areas of care (eg, receiving the right referrals, providing intervention). At the end of Round 1, 71 of the 74 statements were rated as appropriate, no statements were rated as inappropriate, and three statements were rated as uncertain. All 74 statements were then rated again in the face-to-face second round. 16 statements were added through splitting existing items or adding new statements. Seven statements were deleted leaving 83 statements. Agreement was reached for 82 of the final 83 statements. Conclusions This national set of 82 best practice statements across eight care areas for the rehabilitation of people with aphasia is the first to be validated by an expert panel. These statements form a crucial component of the Australian Aphasia Rehabilitation Pathway (AARP) (http://www.aphasiapathway.com.au) and provide the basis for more consistent implementation of evidence-based practice in stroke rehabilitation. PMID:26137883

  13. SPECT3D - A multi-dimensional collisional-radiative code for generating diagnostic signatures based on hydrodynamics and PIC simulation output

    Science.gov (United States)

    MacFarlane, J. J.; Golovkin, I. E.; Wang, P.; Woodruff, P. R.; Pereyra, N. A.

    2007-05-01

    SPECT3D is a multi-dimensional collisional-radiative code used to post-process the output from radiation-hydrodynamics (RH) and particle-in-cell (PIC) codes to generate diagnostic signatures (e.g. images, spectra) that can be compared directly with experimental measurements. This ability to post-process simulation code output plays a pivotal role in assessing the reliability of RH and PIC simulation codes and their physics models. SPECT3D has the capability to operate on plasmas in 1D, 2D, and 3D geometries. It computes a variety of diagnostic signatures that can be compared with experimental measurements, including: time-resolved and time-integrated spectra, space-resolved spectra and streaked spectra; filtered and monochromatic images; and X-ray diode signals. Simulated images and spectra can include the effects of backlighters, as well as the effects of instrumental broadening and time-gating. SPECT3D also includes a drilldown capability that shows where frequency-dependent radiation is emitted and absorbed as it propagates through the plasma towards the detector, thereby providing insights on where the radiation seen by a detector originates within the plasma. SPECT3D has the capability to model a variety of complex atomic and radiative processes that affect the radiation seen by imaging and spectral detectors in high energy density physics (HEDP) experiments. LTE (local thermodynamic equilibrium) or non-LTE atomic level populations can be computed for plasmas. Photoabsorption rates can be computed using either escape probability models or, for selected 1D and 2D geometries, multi-angle radiative transfer models. The effects of non-thermal (i.e. non-Maxwellian) electron distributions can also be included. To study the influence of energetic particles on spectra and images recorded in intense short-pulse laser experiments, the effects of both relativistic electrons and energetic proton beams can be simulated. SPECT3D is a user-friendly software package that runs
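The "drilldown" idea above, attributing the detected radiation to the cells where it was emitted, net of absorption in front of them, can be illustrated with the formal solution of the 1D transfer equation. This is a generic sketch of the concept, not SPECT3D's actual algorithm; the source function values and optical depths are illustrative:

```python
import numpy as np

def ray_contributions(S, dtau):
    """Per-cell contributions to the intensity reaching a detector.

    Cells are ordered from the detector inward: cell i emits
    S[i] * (1 - exp(-dtau[i])) and is attenuated by exp(-tau_front),
    the optical depth of the cells between it and the detector."""
    S = np.asarray(S, dtype=float)
    dtau = np.asarray(dtau, dtype=float)
    tau_front = np.concatenate(([0.0], np.cumsum(dtau)[:-1]))
    contrib = S * (1.0 - np.exp(-dtau)) * np.exp(-tau_front)
    return contrib, contrib.sum()

# A thin cell in front of an optically thick one: most of the detected
# intensity originates in the thick cell, slightly attenuated in front.
contrib, I = ray_contributions(S=[0.2, 1.0], dtau=[0.1, 30.0])
```

The per-cell `contrib` array is exactly the kind of breakdown a drilldown view presents: it shows which regions of the plasma the detector actually "sees" at a given frequency.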

  14. Microscopic evaluation of implant platform adaptation with UCLA-type abutments: in vitro study

    Directory of Open Access Journals (Sweden)

    Vinícius Anéas RODRIGUES

    Full Text Available Abstract Introduction The fit between abutment and implant is crucial in determining the longevity of implant-supported prostheses and the maintenance of peri-implant bone. Objective To evaluate the vertical misfit between different abutments in order to provide information to assist abutment selection. Material and method UCLA components (N=40) with an anti-rotational system were divided as follows: components machined in titanium (n=10) and plastic components cast in titanium (n=10), nickel-chromium-titanium-molybdenum (n=10) and nickel-chromium (n=10) alloys. All components were submitted to stereomicroscope analysis and were randomly selected for characterization by SEM. Result Data were analyzed using means and standard deviations and subjected to one-way ANOVA, which showed the groups to be statistically different (p<0.05), followed by Tukey's test. Conclusion The selection of material influences the value of vertical misfit. The group machined in Ti showed the lowest vertical misfit, while the group cast in NiCr showed the highest.

  15. Electro pneumatic trainer embedded with programmable integrated circuit (PIC) microcontroller and graphical user interface platform for aviation industries training purposes

    Science.gov (United States)

    Burhan, I.; Azman, A. A.; Othman, R.

    2016-10-01

    An electro pneumatic trainer embedded with a programmable integrated circuit (PIC) microcontroller and a Visual Basic (VB) platform was fabricated as a supporting tool for the existing teaching and learning process, with the aim of enhancing students' knowledge and hands-on skills, especially with electro pneumatic devices. The existing learning process for electro pneumatic courses conducted in the classroom does not emphasize simulation or complex practical aspects. VB is used as the platform for the graphical user interface (GUI), while the PIC serves as the interface circuit between the GUI and the electro pneumatic hardware. The trainer's PIC-VB interface has been designed and improved to involve multiple types of electro pneumatic apparatus, such as a linear drive, air motor, semi-rotary motor, double acting cylinder and single acting cylinder. The newly fabricated trainer's microcontroller interface can be programmed and re-programmed for numerous combinations of tasks. Based on a survey of 175 student participants, 97% of the respondents agreed that the newly fabricated trainer is user friendly, safe and attractive, and 96.8% strongly agreed that it improved both knowledge development and hands-on skill in their learning process. Furthermore, the Lab Practical Evaluation record indicates that the respondents improved their academic performance (hands-on skills) by an average of 23.5%.

  16. Study of effect of grain size on dust charging in an RF plasma using three-dimensional PIC-MCC simulations

    International Nuclear Information System (INIS)

    Ikkurthi, V. R.; Melzer, A.; Matyash, K.; Schneider, R.

    2008-01-01

    A three-dimensional Particle-Particle Particle-Mesh (P³M) code is applied to study the charging process of micrometer-size dust grains confined in a capacitive RF discharge. In our model, plasma particles (electrons and ions) are treated kinetically (particle-in-cell with Monte Carlo collisions, PIC-MCC). In order to accurately resolve the motion of plasma particles close to the dust grain, the PIC technique is supplemented with molecular dynamics (MD), employing an analytic electrostatic potential for the interaction with the dust grain. This allows the dust grain charging due to absorption of plasma electrons and ions to be resolved self-consistently. The charging of dust grains confined above the lower electrode in a capacitive RF discharge, and its dependence on the size and position of the dust, is investigated. The results have been compared with laboratory measurements
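The simulations above resolve grain charging self-consistently; for orientation, a rough analytic estimate of the equilibrium charge on a micron-sized grain follows from orbital-motion-limited (OML) theory by balancing electron and ion collection currents. This is a textbook estimate, not the paper's method, and the plasma parameters below (Te = 3 eV, Ti = 0.03 eV, argon, 1 µm grain) are illustrative assumptions:

```python
import math

def oml_floating_potential(Te_eV, Ti_eV, ion_mass_amu):
    """Solve the OML current balance I_e(phi) = I_i(phi) for the grain
    surface potential (quasineutral plasma, phi < 0), by bisection on
    the normalized potential x = e*phi / kTe."""
    tau = Te_eV / Ti_eV
    mu = ion_mass_amu * 1836.15                    # ion-to-electron mass ratio
    # balance: exp(x) = sqrt(Ti*me / (Te*mi)) * (1 - tau*x); f is monotone in x
    f = lambda x: math.exp(x) - math.sqrt(1.0 / (tau * mu)) * (1.0 - tau * x)
    lo, hi = -10.0, 0.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi) * Te_eV                 # surface potential in volts

phi = oml_floating_potential(Te_eV=3.0, Ti_eV=0.03, ion_mass_amu=40.0)
a = 1.0e-6                                         # grain radius (m), assumed
eps0, e = 8.854e-12, 1.602e-19
Z = 4.0 * math.pi * eps0 * a * abs(phi) / e        # elementary charges on grain
```

For these assumed parameters the grain floats a few Te below the plasma potential and collects of order several thousand electron charges, the scale against which the self-consistent PIC-MCC/MD result can be compared.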

  17. Sistema Inteligente de Supervisión de Alarmas Basado en Microcontroladores PIC, SISAP

    Directory of Open Access Journals (Sweden)

    Ioslán Sánchez Martínez

    2010-09-01

    Full Text Available This article describes the current stage of development of the SISAP prototype (Intelligent Alarm Supervision System based on PIC microcontrollers), developed from a proposal by the Sancti Spíritus Territorial Directorate of ETECSA with the aim of extending the capabilities of the systems installed for supervising technological alarms at unattended sites in the territory. The SISAP device is at development version 0.5, in an "unfinished" state. At this point it can handle up to 40 events, which may be on/off events or voltage levels, and transmit them over a telephone interface using a DTMF tone protocol. Keywords: alarms, PIC microcontroller, voltages, on/off events, DTMF tones.

  18. Multi-dimensional PIC-simulations of parametric instabilities for shock-ignition conditions

    Directory of Open Access Journals (Sweden)

    Riconda C.

    2013-11-01

    Full Text Available Laser-plasma interaction is investigated under conditions relevant to the shock-ignition (SI) scheme of inertial confinement fusion, using two-dimensional particle-in-cell (PIC) simulations of an intense laser beam propagating in a hot, large-scale, non-uniform plasma. The temporal evolution and interdependence of Raman (SRS) and Brillouin (SBS) side- and backscattering as well as two-plasmon decay (TPD) are studied. TPD develops together with SRS, creating a broad spectrum of plasma waves near the quarter-critical density. These waves are rapidly saturated by plasma cavitation within a few picoseconds. The hot-electron spectrum created by SRS and TPD is relatively soft, limited to energies below one hundred keV.

  19. Modulador-Demodulador ASK con codificación Manchester implementado en un microcontrolador PIC

    OpenAIRE

    Tarifa Amaya, Ariel; Del Risco Sánchez, Arnaldo; Cruz Hurtado, Juan Carlos

    2012-01-01

    We present the design of a digital ASK modulator-demodulator with Manchester coding implemented in the firmware of a PIC 18F4455 microcontroller, using the low-frequency (LF) standard, which operates at 125 kHz. This modulator-demodulator is used in the implementation of an active RFID tag. On request from a reader device, it transmits the temperature reading from a sensor together with its identifier. The reader device controls communication with the tag. According...

  20. Analysis of instability growth and collisionless relaxation in thermionic converters using 1-D PIC simulations

    International Nuclear Information System (INIS)

    Kreh, B.B.

    1994-12-01

    This work investigates the role that the beam-plasma instability may play in a thermionic converter. The traditional assumption of collisionally dominated relaxation is questioned, and the beam-plasma instability is proposed as a possible dominant relaxation mechanism. Theory is developed to describe the beam-plasma instability in the cold-plasma approximation, and the theory is tested with two common particle-in-cell (PIC) simulation codes. The theory is first confirmed using ES1, an unbounded plasma PIC simulation employing periodic boundary conditions. The theoretically predicted growth rates are on the order of the plasma frequencies, and ES1 simulations verify these predictions to within about 1%. For typical conditions encountered in thermionic converters, the resulting growth period is on the order of 7 × 10⁻¹¹ seconds. The bounded plasma simulation PDP1 was used to evaluate the influence of finite geometry and the electrode boundaries. For this bounded plasma, a two-stream interaction was supported, resulting in nearly complete thermalization in approximately 5 × 10⁻¹⁰ seconds. Since the electron-electron collision rate of 10⁹ Hz and the electron-atom collision rate of 10⁷ Hz are significantly slower than the growth rates of these instabilities, the instabilities appear to be an important relaxation mechanism
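The order of magnitude quoted above can be reproduced from the standard cold-plasma weak-beam result, γ_max ≈ (√3/2)(n_b/2n_p)^(1/3) ω_pe. The plasma density and beam fraction below are assumed, converter-like numbers for illustration, not values taken from the thesis:

```python
import math

def beam_plasma_growth_time(n_p_m3, beam_fraction):
    """e-folding time of the cold beam-plasma instability for a weak beam:
    gamma_max = (sqrt(3)/2) * (n_b / (2 n_p))**(1/3) * omega_pe."""
    e, eps0, m_e = 1.602e-19, 8.854e-12, 9.109e-31
    omega_pe = math.sqrt(n_p_m3 * e**2 / (eps0 * m_e))   # electron plasma freq.
    gamma = (math.sqrt(3.0) / 2.0) * (beam_fraction / 2.0) ** (1.0 / 3.0) * omega_pe
    return 1.0 / gamma

# assumed illustrative parameters: n_p = 1e18 m^-3, beam density 1% of plasma
t_growth = beam_plasma_growth_time(1.0e18, 0.01)
```

For these assumptions the e-folding time comes out of order 10⁻¹⁰ s, on the same scale as the ~7 × 10⁻¹¹ s growth period quoted, and orders of magnitude faster than the 10⁹ Hz collisional rate.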

  1. Magnetic Field-Vector Measurements in Quiescent Prominences via the Hanle Effect: Analysis of Prominences Observed at Pic-Du-Midi and at Sacramento Peak

    Science.gov (United States)

    Bommier, V.; Leroy, J. L.; Sahal-Brechot, S.

    1985-01-01

    The Hanle effect method for magnetic field vector diagnostics has now provided results on the magnetic field strength and direction in quiescent prominences, from linear polarization measurements in the He I D3 line performed at the Pic-du-Midi and at Sacramento Peak. However, there is an inescapable ambiguity in the field vector determination: each polarization measurement provides two field vector solutions, symmetric with respect to the line of sight. A statistical analysis capable of resolving this ambiguity was applied to the large sample of prominences observed at the Pic-du-Midi (Leroy, et al., 1984); the same method of analysis applied to the prominences observed at Sacramento Peak (Athay, et al., 1983) provides results in agreement on the most probable magnetic structure of prominences; these results are detailed. The statistical results were confirmed for favorable individual cases: for 15 prominences observed at Pic-du-Midi, the two field vectors point to the same side of the prominence, and the alpha angles are large enough with respect to the measurement and interpretation inaccuracies that the field polarity is derived without any ambiguity.

  2. Transpiration of helium and carbon monoxide through a multihundred watt, PICS filter

    International Nuclear Information System (INIS)

    Schaeffer, D.R.

    1976-01-01

    The transpiration of CO through the Multihundred Watt (MHW) filter can be described by Fick's first law or as a first-order, reversible reaction. From Fick's first law, a ''diffusion'' coefficient of 7.8 × 10⁻⁴ cm·L/sec (L is the average path length through the filter) was determined. For the first-order reversible reaction, a rate constant of 0.0058 hr⁻¹ was obtained for both the forward and reverse reactions (they were assumed to be equal). This corresponds to a half-life of 120 hr. It was also concluded that the rate constants, and thus the transpiration rates, determined for the test are smaller than those expected in the IHS. The effect of increasing the number of filters, changing the volumes, and increasing the temperature changes the rate constant of transpiration into the PICS to roughly 0.074 hr⁻¹ (t₁/₂ = 9.4 hr) and out of the PICS to 0.84 hr⁻¹ (t₁/₂ = 0.8 hr). Of the two suggested mechanisms for the generation of CO inside the IHS, the cyclic process requires a much larger rate of transpiration than the process requiring oxygen exchange of CO given off by the graphite. The data indicate that the cyclic process can provide the CO generation rates observed in the IHS gas taps if there is no time delay from any other kinetic process involved in the formation of CO or CO₂. Since the cyclic process (which requires the fastest rate of transpiration) appears possible, this study does not indicate which reaction is occurring but concludes that both are possible
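The half-lives quoted above follow directly from the first-order rate constants via t₁/₂ = ln 2 / k; a quick check of all three values:

```python
import math

def half_life_hr(k_per_hr):
    """Half-life of a first-order process with rate constant k: t_1/2 = ln2 / k."""
    return math.log(2.0) / k_per_hr

for k in (0.0058, 0.074, 0.84):
    print(f"k = {k} hr^-1  ->  t_1/2 = {half_life_hr(k):.1f} hr")
```

The results (about 119.5, 9.4, and 0.8 hr) match the 120, 9.4, and 0.8 hr half-lives stated in the abstract.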

  3. Full PIC simulations of solar radio emission

    Science.gov (United States)

    Sgattoni, A.; Henri, P.; Briand, C.; Amiranoff, F.; Riconda, C.

    2017-12-01

    Solar radio emissions are electromagnetic (EM) waves emitted in the solar wind plasma as a consequence of electron beams accelerated during solar flares or interplanetary shocks such as ICMEs. To describe their origin, a multi-stage model was proposed in the 1960s that considers a succession of non-linear three-wave interaction processes. A good understanding of the process would allow one to infer the kinetic energy transferred from the electron beam to the EM waves, so that the radio waves recorded by spacecraft can be used as a diagnostic of the electron beam. While the electrostatic problem has been extensively studied, full electromagnetic simulations were attempted only recently. Our large-scale 2D-3V electromagnetic PIC simulations allow us to identify the generation of both the electrostatic and EM waves produced by the succession of plasma instabilities. We tested several configurations, varying the electron beam density and velocity while considering a background plasma of uniform density. For all the tested configurations, approximately 10⁻⁵ of the electron-beam kinetic energy is transferred into EM waves, emitted nearly isotropically in all directions. With this work we aim to design laboratory astrophysics experiments to reproduce the electromagnetic emission process and test its efficiency.

  4. Construction and initial operation of MHD PbLi facility at UCLA

    Energy Technology Data Exchange (ETDEWEB)

    Smolentsev, S., E-mail: sergey@fusion.ucla.edu; Li, F.-C.; Morley, N.; Ueki, Y.; Abdou, M.; Sketchley, T.

    2013-06-15

    Highlights: • New MHD PbLi loop has been constructed and tested at UCLA. • Pressure diagnostics system has been developed and successfully tested. • Ultrasound Doppler velocimeter is tested as velocity diagnostics. • Experiments on pressure drop reduction have been performed. • Experiments on MHD flow in a duct with SiC flow channel insert are underway. -- Abstract: A magnetohydrodynamic flow facility MaPLE (Magnetohydrodynamic PbLi Experiment) that uses the molten eutectic alloy lead–lithium (PbLi) as its working fluid has been constructed and tested at the University of California, Los Angeles. The loop operating parameters are: maximum magnetic field 1.8 T, PbLi temperature up to 350 °C, maximum PbLi flow rate with/without a magnetic field 15/50 l/min, maximum pressure head 0.15 MPa. The paper describes the loop and its major components, basic operating procedures, experience with handling PbLi, initial loop testing, flow diagnostics, and current and near-future experiments. The test results obtained for the loop and its components demonstrate that the new facility is fully functioning and ready for experimental studies of magnetohydrodynamic, heat and mass transfer phenomena in PbLi flows, and can also be used for mock-up testing under conditions relevant to fusion applications.

  5. An FPGA-based DS-CDMA multiuser demodulator employing adaptive multistage parallel interference cancellation

    Science.gov (United States)

    Li, Xinhua; Song, Zhenyu; Zhan, Yongjie; Wu, Qiongzhi

    2009-12-01

    Since system capacity is severely limited by multiple access interference (MAI), reducing the MAI is necessary in the multiuser direct-sequence code division multiple access (DS-CDMA) system used in the telecommunication terminals' data-transfer link system. In this paper, after reviewing various multiuser detection schemes, we adopt an adaptive multistage parallel interference cancellation structure in the demodulator, based on the least mean square (LMS) algorithm, to eliminate the MAI. Neither a training sequence nor a pilot signal is needed in the proposed scheme, and its implementation complexity can be greatly reduced by an approximate LMS algorithm. The algorithm and its FPGA implementation are then derived. Simulation results show that the proposed adaptive PIC can outperform some existing interference cancellation methods in AWGN channels. The hardware setup of the multiuser demodulator is described, and experimental results demonstrate large performance gains over the conventional single-user demodulator.
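The flavor of an LMS-adapted PIC stage can be sketched for a toy synchronous two-user case: matched-filter decisions feed a signal reconstruction whose per-user amplitude weights are adapted blindly by LMS (no training sequence), and the strong user's reconstructed signal is subtracted before the weak user is re-detected. This is a single-stage, noiseless simplification of the paper's multistage scheme; the spreading codes, amplitudes and step size are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
c1 = np.array([1, -1, 1, -1, 1, -1, 1, -1], dtype=float)  # user 1 spreading code
c2 = np.array([1, 1, -1, 1, 1, -1, 1, -1], dtype=float)   # cross-correlation 2 -> MAI
A1, A2 = 1.0, 5.0              # near-far scenario: user 2 much stronger
M, mu = 400, 0.01              # number of symbols, LMS step size
b1 = rng.choice([-1.0, 1.0], M)
b2 = rng.choice([-1.0, 1.0], M)

w = np.zeros(2)                # blindly adapted per-user amplitude estimates
det_conv = np.empty(M)         # conventional matched-filter decisions, user 1
det_pic = np.empty(M)          # decisions after one PIC stage
for m in range(M):
    r = A1 * b1[m] * c1 + A2 * b2[m] * c2        # received chips (noiseless)
    d1, d2 = np.sign(r @ c1), np.sign(r @ c2)    # stage-1 tentative decisions
    det_conv[m] = d1
    for n in range(len(c1)):                     # LMS: fit reconstruction to r
        e = r[n] - (w[0] * d1 * c1[n] + w[1] * d2 * c2[n])
        w[0] += mu * e * d1 * c1[n]
        w[1] += mu * e * d2 * c2[n]
    # stage 2: cancel the reconstructed strong user, re-detect user 1
    det_pic[m] = np.sign((r - w[1] * d2 * c2) @ c1)

half = M // 2                  # evaluate only after the weights have converged
ber_conv = np.mean(det_conv[half:] != b1[half:])
ber_pic = np.mean(det_pic[half:] != b1[half:])
```

In this near-far setup the conventional detector loses the weak user almost every time the two bits disagree, while the adapted cancellation stage recovers it; the LMS weight for user 2 converges toward its true amplitude.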

  6. Modulador-Demodulador ASK con codificación Manchester implementado en un microcontrolador PIC

    Directory of Open Access Journals (Sweden)

    Ariel Tarifa Amaya

    2012-12-01

    Full Text Available We present the design of a digital ASK modulator-demodulator with Manchester coding implemented in the firmware of a PIC 18F4455 microcontroller, using the low-frequency (LF) standard, which operates at 125 kHz. This modulator-demodulator is used in the implementation of an active RFID tag. On request from a reader device, it transmits the temperature reading from a sensor together with its identifier. The reader device controls communication with the tag. According to the specialized literature, no similar system has been reported.

  7. Development and validation of Australian aphasia rehabilitation best practice statements using the RAND/UCLA appropriateness method.

    Science.gov (United States)

    Power, Emma; Thomas, Emma; Worrall, Linda; Rose, Miranda; Togher, Leanne; Nickels, Lyndsey; Hersh, Deborah; Godecke, Erin; O'Halloran, Robyn; Lamont, Sue; O'Connor, Claire; Clarke, Kim

    2015-07-02

    To develop and validate a national set of best practice statements for use in post-stroke aphasia rehabilitation. Literature review and statement validation using the RAND/UCLA Appropriateness Method (RAM). A national Community of Practice of over 250 speech pathologists, researchers, consumers and policymakers developed a framework consisting of eight areas of care in aphasia rehabilitation. This framework provided the structure for the development of a care pathway containing aphasia rehabilitation best practice statements. Nine speech pathologists with expertise in aphasia rehabilitation participated in two rounds of RAND/UCLA appropriateness ratings of the statements. Panellists consisted of researchers, service managers, clinicians and policymakers. Statements that achieved a high level of agreement and an overall median score of 7-9 on a nine-point scale were rated as 'appropriate'. 74 best practice statements were extracted from the literature and rated across eight areas of care (eg, receiving the right referrals, providing intervention). At the end of Round 1, 71 of the 74 statements were rated as appropriate, no statements were rated as inappropriate, and three statements were rated as uncertain. All 74 statements were then rated again in the face-to-face second round. 16 statements were added through splitting existing items or adding new statements. Seven statements were deleted leaving 83 statements. Agreement was reached for 82 of the final 83 statements. This national set of 82 best practice statements across eight care areas for the rehabilitation of people with aphasia is the first to be validated by an expert panel. These statements form a crucial component of the Australian Aphasia Rehabilitation Pathway (AARP) (http://www.aphasiapathway.com.au) and provide the basis for more consistent implementation of evidence-based practice in stroke rehabilitation. Published by the BMJ Publishing Group Limited. For permission to use (where not already

  8. Software Engineering Support of the Third Round of Scientific Grand Challenge Investigations: An Earth Modeling System Software Framework Strawman Design that Integrates Cactus and UCLA/UCB Distributed Data Broker

    Science.gov (United States)

    Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn

    2002-01-01

    One of the most significant challenges in large-scale climate modeling, as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden on the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document proposes a strawman framework design for the climate community based on the integration of Cactus, from the relativistic physics community, and the UCLA/UCB Distributed Data Broker (DDB) from the climate community. This design is the result of an extensive survey of climate models and frameworks in the climate community, as well as frameworks from many other scientific communities. The design addresses fundamental development and runtime needs using Cactus, a framework with interfaces for FORTRAN and C-based languages, and high-performance model communication needs using DDB. This document also specifically explores object-oriented design issues in the context of climate modeling, as well as climate modeling issues in terms of object-oriented design.

  9. An interface board for developing control loops in power electronics based on microcontrollers and DSPs Cores -Arduino /ChipKit /dsPIC /DSP /TI Piccolo

    DEFF Research Database (Denmark)

    Pittini, Riccardo; Zhang, Zhe; Andersen, Michael A. E.

    2013-01-01

    and development environment. Moreover, the interface board can operate with open hardware Arduino-like boards such as the ChipKit Uno32. The paper also describes how to enhance the performance of a ChipKit Uno32 with a dsPIC obtaining a more suitable solution for power electronics. The basic blocks and interfaces...... of the boards are presented in detail as well as the board main specifications. The board operation has been tested with three core platforms: TI Piccolo controlSTICK, a Microchip dsPIC and a ChipKit Uno32 (Arduino-like platform). The board was used for generating test signals for characterizing 1200 V Si...

  10. UCLA1, a synthetic derivative of a gp120 RNA aptamer, inhibits entry of human immunodeficiency virus type 1 subtype C

    CSIR Research Space (South Africa)

    Mufhandu, Hazel T

    2012-05-01

    Full Text Available such as South Africa (47), where this study was conducted, we assessed the sensitivity of a large panel of subtype C isolates derived from adult and pediatric patients at different stages of HIV-1 infection against UCLA1. We examined its neutralization..., 34). These were derived from the CAPRISA 002 acute infection study cohort (18), subtype C reference panel (31), pediatric and AIDS patients' isolates (9, 17), and a subtype C consensus sequence clone (ConC) (26). The subtype C pseudoviruses were...

  11. Memory-efficient optimization of Gyrokinetic particle-to-grid interpolation for multicore processors

    Energy Technology Data Exchange (ETDEWEB)

    Madduri, Kamesh [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ethier, Stephane [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shalf, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Strohmaier, Erich [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yelick, Katherine [Univ. of California, Berkeley, CA (United States)

    2009-01-01

    We present multicore parallelization strategies for the particle-to-grid interpolation step in the Gyrokinetic Toroidal Code (GTC), a 3D particle-in-cell (PIC) application to study turbulent transport in magnetic-confinement fusion devices. Particle-grid interpolation is a known performance bottleneck in several PIC applications. In GTC, this step involves particles depositing charges to a 3D toroidal mesh, and multiple particles may contribute to the charge at a grid point. We design new parallel algorithms for the GTC charge deposition kernel, and analyze their performance on three leading multicore platforms. We implement thirteen different variants for this kernel and identify the best-performing ones given typical PIC parameters such as the grid size, number of particles per cell, and the GTC-specific particle Larmor radius variation. We find that our best strategies can be 2x faster than the reference optimized MPI implementation, and our analysis provides insight into desirable architectural features for high-performance PIC simulation codes.
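The scatter pattern that makes particle-to-grid deposition a bottleneck, many particles contributing to the same grid point, can be seen in a minimal 1D cloud-in-cell deposit. This is generic PIC, not GTC's gyro-averaged toroidal kernel; here `np.add.at` plays the role of the accumulate step that a threaded version must protect with atomics or per-thread grid copies:

```python
import numpy as np

def deposit_cic_1d(x, q, nx, dx):
    """1D cloud-in-cell charge deposition on a periodic grid.

    Each particle splits its charge linearly between its two nearest
    grid points.  Many particles may hit the same point, which is why
    a naive fancy-indexed `rho[i] += ...` (and, in threaded code, an
    unsynchronized update) would silently drop contributions."""
    rho = np.zeros(nx)
    s = np.asarray(x, dtype=float) / dx
    i = np.floor(s).astype(int)
    w = s - i                              # fractional distance past left point
    np.add.at(rho, i % nx, q * (1.0 - w))  # unbuffered scatter-add
    np.add.at(rho, (i + 1) % nx, q * w)
    return rho / dx                        # charge density

# two particles share a cell edge; total charge must be conserved
rho = deposit_cic_1d(x=[0.5, 0.5, 2.0], q=np.array([1.0, 1.0, 2.0]), nx=4, dx=1.0)
```

The multicore variants studied in the paper differ in how they resolve exactly this write conflict (replicated grids, partitioning, or synchronized updates) rather than in the deposition arithmetic itself.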

  12. Optimizing fusion PIC code performance at scale on Cori Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, T. S.; Deslippe, J.

    2017-07-23

    In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single node performance due to enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, near half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.
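The roofline methodology mentioned above bounds attainable performance by min(peak FLOP rate, arithmetic intensity × memory bandwidth). The sketch below uses rough, assumed KNL-like figures (~3 TFLOP/s double-precision peak, ~450 GB/s MCDRAM bandwidth), not measured Cori numbers:

```python
def roofline_gflops(ai_flops_per_byte, peak_gflops, bw_gbytes):
    """Attainable GFLOP/s under the basic roofline model."""
    return min(peak_gflops, ai_flops_per_byte * bw_gbytes)

PEAK, BW = 3000.0, 450.0       # assumed KNL-like peak (GFLOP/s) and BW (GB/s)
ridge = PEAK / BW              # intensity above which a kernel is compute-bound
low = roofline_gflops(0.25, PEAK, BW)   # a low-intensity scatter/gather kernel
```

A kernel at 0.25 FLOP/byte sits far left of the ridge point (~6.7 FLOP/byte here), so it is bandwidth-bound; vectorization and memory-layout work of the kind described above moves kernels up toward the appropriate roof.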

  13. Simulations of the BNL/SLAC/UCLA 1.6 cell emittance compensated photocathode RF gun low energy beam line

    International Nuclear Information System (INIS)

    Palmer, D.T.; Miller, R.H.; Winick, H.

    1995-01-01

    A dedicated low energy (2 to 10 MeV) experimental beam line is now under construction at Brookhaven National Laboratory's Accelerator Test Facility (BNL/ATF) for photocathode RF gun testing and photoemission experiments. The design of the experimental line, using the 1.6 cell photocathode RF gun developed by the BNL/SLAC/UCLA RF gun collaboration, is presented. Detailed beam dynamics simulations were performed for the 1.6 cell RF gun injector using a solenoidal emittance compensation technique. An experimental program for testing the 1.6 cell RF gun is presented. This program includes measurements of beam loading caused by dark current, higher-order-mode field measurements, and integrated and slice emittance measurements using a pepper-pot and an RF kicker cavity

  14. Introducción a los microcontroladores RISC en Lenguaje C. PIC's de Microchips

    Directory of Open Access Journals (Sweden)

    Tito Flórez C.

    2000-01-01

    Full Text Available As microcontroller programs become more complex, working in assembly language becomes more laborious and difficult to manage, and interrupt handling is often a headache. A very good alternative for solving these problems is to program the microcontrollers in the C language. In this way the programs become very simple, and interrupt handling likewise becomes straightforward. The most important elements and instructions needed to develop a wide range of programs for PICs are presented.

  15. Design and development of low cost thermoluminescence measurement system using PIC16F877 microcontroller

    International Nuclear Information System (INIS)

    Neelamegam, P; Rajendran, A

    2006-01-01

    A real time microcontroller-based thermoluminescence system has been developed to measure light intensity and temperature and to control linear heating. This instrument permits investigations of thermoluminescent materials, such as alkali halides, phosphors and related compounds, which have important applications in materials science and in dosimetry. A low-cost dedicated PIC16F877 microcontroller board was employed for the hardware. Details of its interface and of the software used to measure thermoluminescence and send data to a PC are explained in this paper

  16. Design and development of low cost thermoluminescence measurement system using PIC16F877 microcontroller

    Energy Technology Data Exchange (ETDEWEB)

    Neelamegam, P [Department of Electronics and Instrumentation Engineering, Shunmuga Arts, Science, Technology and Research Academy (SASTRA), Deemed University, Thanjavur-613 402, Tamil Nadu (India); Rajendran, A [PG and Research Department of Applied Physics, Nehru Memorial College (Autonomous), Puthanampatti-621 007, Tiruchirappalli, Tamil Nadu (India)

    2006-05-15

    A real time microcontroller-based thermoluminescence system has been developed to measure light intensity and temperature and to control linear heating. This instrument permits investigations of thermoluminescent materials, such as alkali halides, phosphors and related compounds, which have important applications in materials science and in dosimetry. A low-cost dedicated PIC16F877 microcontroller board was employed for the hardware. Details of its interface and of the software used to measure thermoluminescence and send data to a PC are explained in this paper.

  17. Response of plasma facing components in Tokamaks due to intense energy deposition using Particle-In-Cell (PIC) methods

    Science.gov (United States)

    Genco, Filippo

    Damage to plasma-facing components (PFC) due to various plasma instabilities is still a major concern for the successful development of fusion energy and represents a significant research obstacle in the community. It is of great importance to fully understand the behavior and lifetime expectancy of PFCs under both low-energy cycles during normal events and highly energetic events such as disruptions, edge-localized modes (ELMs), vertical displacement events (VDEs), and runaway electrons (RE). The consequences of these highly energetic dumps, with energy fluxes ranging from 10 MJ/m² up to 200 MJ/m² applied in very short periods (0.1 to 5 ms), can be catastrophic for both safety and economic reasons. These phenomena can cause (a) a large temperature increase in the target material; (b) consequent melting, evaporation and erosion losses due to the extremely high heat fluxes; (c) possible structural damage and permanent degradation of the entire bulk material, with probable burnout of the coolant tubes; and (d) plasma contamination and transport of target material into the chamber, far from where it originated. The modeling of off-normal events such as disruptions and ELMs requires the simultaneous solution of three main problems in time: (a) heat transfer in the plasma-facing component; (b) interaction of the vapor produced at the surface with the incoming plasma particles; (c) transport of the radiation produced in the vapor-plasma cloud. In addition, the moving-boundary problem has to be considered and solved at the material surface. For a carbon divertor target the moving boundaries are two, since under the given conditions carbon does not melt: the plasma front and the receding eroded material surface. Current solution methods for this problem use finite differences and a moving coordinate system based on the Crank-Nicolson method and the alternating-direction implicit (ADI) method. Currently particle-in-cell (PIC) methods are widely used for solving

  18. PIC Simulations of Velocity-space Instabilities in a Decreasing Magnetic Field: Viscosity and Thermal Conduction

    Science.gov (United States)

    Riquelme, Mario; Quataert, Eliot; Verscharen, Daniel

    2018-02-01

    We use particle-in-cell (PIC) simulations of a collisionless, electron-ion plasma with a decreasing background magnetic field, B, to study the effect of velocity-space instabilities on the viscous heating and thermal conduction of the plasma. If |B| decreases, the adiabatic invariance of the magnetic moment gives rise to pressure anisotropies with p∥,j > p⊥,j (p∥,j and p⊥,j represent the pressure of species j (electron or ion) parallel and perpendicular to B). Linear theory indicates that, for sufficiently large anisotropies, different velocity-space instabilities can be triggered. These instabilities in principle have the ability to pitch-angle scatter the particles, limiting the growth of the anisotropies. Our simulations focus on the nonlinear, saturated regime of the instabilities. This is done through the permanent decrease of |B| by an imposed plasma shear. We show that, in the regime 2 ≲ β_j ≲ 20 (β_j ≡ 8πp_j/|B|^2), the saturated ion and electron pressure anisotropies are controlled by the combined effect of the oblique ion firehose and the fast magnetosonic/whistler instabilities. These instabilities grow preferentially on the scale of the ion Larmor radius, and make Δp_e/p∥,e ≈ Δp_i/p∥,i (where Δp_j = p⊥,j − p∥,j). We also quantify the thermal conduction of the plasma by directly calculating the mean free path of electrons, λ_e, along the mean magnetic field, finding that λ_e depends strongly on whether |B| decreases or increases. Our results can be applied in studies of low-collisionality plasmas such as the solar wind, the intracluster medium, and some accretion disks around black holes.
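
    The firehose side of the threshold behavior described above can be illustrated with the textbook bi-Maxwellian criterion, anisotropy < −2/β∥; this is a standard linear-theory estimate, not the saturation fit obtained in the simulations.

```python
import math

# Textbook parallel-firehose threshold for a bi-Maxwellian species: the
# anisotropy p_perp/p_par - 1 drives the instability when it drops below
# -2/beta_par.  Illustrative check only, not the simulations' exact
# saturation criterion.
def beta_par(p_par, B):
    return 8.0 * math.pi * p_par / (B * B)   # Gaussian units, as in the abstract

def firehose_unstable(p_perp, p_par, B):
    return (p_perp / p_par - 1.0) < -2.0 / beta_par(p_par, B)

B = 1.0
p_par = 10.0 * B * B / (8.0 * math.pi)       # chosen so beta_par = 10
weak = firehose_unstable(0.9 * p_par, p_par, B)    # anisotropy -0.1 > -0.2: stable
strong = firehose_unstable(0.7 * p_par, p_par, B)  # anisotropy -0.3 < -0.2: unstable
```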

  19. A PIC-MCC code RFdinity1d for simulation of discharge initiation by ICRF antenna

    Science.gov (United States)

    Tripský, M.; Wauters, T.; Lyssoivan, A.; Bobkov, V.; Schneider, P. A.; Stepanov, I.; Douai, D.; Van Eester, D.; Noterdaeme, J.-M.; Van Schoor, M.; ASDEX Upgrade Team; EUROfusion MST1 Team

    2017-12-01

    Discharges produced and sustained by ion cyclotron range of frequencies (ICRF) waves in the absence of plasma current will be used on ITER for (ion cyclotron) wall conditioning (ICWC, T_e = 3-5 eV, n_e ∼ 10^18 m^-3). In this paper, we present the 1D particle-in-cell Monte Carlo collision (PIC-MCC) code RFdinity1d for studying the breakdown phase of ICRF discharges, and its dependence on the RF discharge parameters: (i) antenna input power P_i, (ii) RF frequency f, (iii) shape of the electric field, and (iv) neutral gas pressure p_H2. The code traces the motion of both electrons and ions in a narrow bundle of magnetic field lines close to the antenna straps. The charged particles are accelerated in the direction parallel to the magnetic field B_T by two electric fields: (i) the vacuum RF field of the ICRF antenna, E_z^RF, and (ii) the electrostatic field E_z^P determined by the solution of Poisson's equation. The electron density in the simulations grows exponentially, n_e ∼ exp(ν_ion t). The ionization rate varies with increasing electron density as different mechanisms become important. The charged particles are affected solely by the antenna RF field E_z^RF at low electron density (n_e < 10^11 m^-3, |E_z^RF| ≫ |E_z^P|). At higher densities, when the electrostatic field E_z^P is comparable to the antenna RF field E_z^RF, the ionization frequency reaches its maximum. Plasma oscillations propagating toroidally away from the antenna are observed. The simulated energy distributions of ions and electrons at n_e ∼ 10^15 m^-3 correspond to a power-law Kappa energy distribution. This energy distribution was also observed in NPA measurements at ASDEX Upgrade in ICWC experiments.
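
    The exponential density rise n_e ∼ exp(ν_ion t) implies a simple estimate of the breakdown time; the rate and densities below are made-up illustrative values, not RFdinity1d outputs.

```python
import math

# Illustrative avalanche estimate for the breakdown phase: with a constant
# ionization rate nu_ion, the density grows as n_e(t) = n_e0 * exp(nu_ion*t),
# so reaching a target density takes t = ln(n_target/n_e0) / nu_ion.
def time_to_density(n_e0, n_target, nu_ion):
    return math.log(n_target / n_e0) / nu_ion

# Hypothetical seed density, target density, and ionization rate (1/s)
t_breakdown = time_to_density(n_e0=1e5, n_target=1e15, nu_ion=1e5)  # seconds
```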

  20. PIC simulations of magnetic field production by cosmic rays drifting upstream of SNR shocks

    International Nuclear Information System (INIS)

    Pohl, M.

    2008-01-01

    Turbulent magnetic-field amplification appears to operate near the forward shocks of young shell-type SNR. I review the observational constraints on the spatial distribution and amplitude of amplified magnetic field in this environment. I also present new PIC simulations of magnetic-field growth due to streaming cosmic rays. While the nature of the initial linear instability is largely determined by the choice of simulation parameters, the saturation always involves changing the bulk motion of cosmic rays and background plasma, which limits the field growth to amplitudes of a few times that of the homogeneous magnetic field. (author)

  1. UCLA Particle and Nuclear Physics Research Group, 1993 progress report

    International Nuclear Information System (INIS)

    Nefkens, B.M.K.; Clajus, M.; Price, J.W.; Tippens, W.B.; White, D.B.

    1993-09-01

    The research programs of the UCLA Particle and Nuclear Physics Research Group, the research objectives, results of experiments, the continuing activities and new initiatives are presented. The primary goal of the research is to test the symmetries and invariances of particle/nuclear physics, with special emphasis on investigating charge symmetry, isospin invariance, charge conjugation, and CP. Another important part of our work is baryon spectroscopy, which is the determination of the properties (mass, width, decay modes, etc.) of particles and resonances. We also measure some basic properties of light nuclei, for example the hadronic radii of ³H and ³He. Special attention is given to the eta meson, its production using photons, electrons, π±, and protons, and its rare and not-so-rare decays. In Section 1, the physics motivation of our research is outlined. Section 2 provides a summary of the research projects. The status of each program is given in Section 3. We discuss the various experimental techniques used and the results obtained, and we outline the plans for the continuing and the new research. Details are presented of new research that is made possible by the use of the Crystal Ball Detector, a highly segmented NaI calorimeter and spectrometer with nearly 4π acceptance (it was built and used at SLAC and is to be moved to BNL). The appendix contains an update of the bibliography, conference participation, and group memos; it also indicates our share in the organization of conferences, and gives a listing of the colloquia and seminars presented by us.

  2. Vertical Distributions of Coccolithophores, PIC, POC, Biogenic Silica, and Chlorophyll a Throughout the Global Ocean.

    Science.gov (United States)

    Balch, William M; Bowler, Bruce C; Drapeau, David T; Lubelczyk, Laura C; Lyczkowski, Emily

    2018-01-01

    Coccolithophores are a critical component of global biogeochemistry, export fluxes, and seawater optical properties. We derive globally significant relationships to estimate integrated coccolithophore and coccolith concentrations, as well as integrated concentrations of particulate inorganic carbon (PIC), from their respective surface concentrations. We also examine surface versus integral relationships for other biogeochemical variables contributed by all phytoplankton (e.g., chlorophyll a and particulate organic carbon) or diatoms (biogenic silica). Integrals are calculated using both 100 m integrals and euphotic zone integrals (depth of 1% surface photosynthetically available radiation). Surface concentrations are parameterized either in volumetric units (e.g., m^-3) or as values integrated over the top optical depth. Various relationships between surface concentrations and integrated values demonstrate that when surface concentrations are above a specific threshold, the vertical distribution of the property is biased to the surface layer, and when surface concentrations are below that threshold, the vertical distributions of the properties are biased to subsurface maxima. Results also show a highly predictable decrease in explained variance as vertical distributions become more vertically heterogeneous. These relationships have fundamental utility for extrapolating surface ocean color remote sensing measurements to 100 m depth or to the base of the euphotic zone, well beyond the depths of detection for passive ocean color remote sensors. The greatest integrated concentrations of PIC, coccoliths, and coccolithophores are found when there is moderate stratification at the base of the euphotic zone.

  3. PIC Simulations in Low Energy Part of PIP-II Proton Linac

    Energy Technology Data Exchange (ETDEWEB)

    Romanov, Gennady

    2014-07-01

    The front end of the PIP-II linac is composed of a 30 keV ion source, a low energy beam transport line (LEBT), a 2.1 MeV radio frequency quadrupole (RFQ), and a medium energy beam transport line (MEBT). This configuration is currently being assembled at Fermilab to support a complete systems test. The front end represents the primary technical risk in PIP-II, so this step will validate the concept and demonstrate that the hardware can meet the specified requirements. The superconducting (SC) accelerating cavities immediately after the MEBT require a high-quality, well-defined beam from the RFQ to avoid excessive particle losses. In this paper we present recent progress in beam dynamics studies, using the CST PIC simulation code, investigating the partial-neutralization effect in the LEBT, halo and tail formation in the RFQ, and total emittance growth and beam losses along the low energy part of the linac.

  4. Relativistic electron diffraction at the UCLA Pegasus photoinjector laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Musumeci, P. [UCLA Department of Physics and Astronomy, 475 Portola Plaza, Los Angeles, CA 90095-1547 (United States)], E-mail: musumeci@physics.ucla.edu; Moody, J.T.; Scoby, C.M. [UCLA Department of Physics and Astronomy, 475 Portola Plaza, Los Angeles, CA 90095-1547 (United States)

    2008-10-15

    Electron diffraction holds the promise of yielding real-time resolution of atomic motion in an easily accessible environment, such as a university laboratory, at a fraction of the cost of fourth-generation X-ray sources. Currently the limit in time resolution for conventional electron diffraction is set by how short an electron pulse can be made. A very promising solution for maintaining the highest possible beam intensity without excessive pulse broadening from space-charge effects is to increase the electron energy to the MeV level, where relativistic effects significantly reduce the space-charge forces. Rf photoinjectors can in principle deliver up to 10^7-10^8 electrons packed in bunches of ~100 fs length, allowing an unprecedented time resolution and enabling the study of irreversible phenomena by single-shot diffraction patterns. The use of rf photoinjectors as sources for ultrafast electron diffraction has recently been at the center of various theoretical and experimental studies. The UCLA Pegasus laboratory, commissioned in early 2007 as an advanced photoinjector facility, is the only operating system in the country that has recently demonstrated electron diffraction using a relativistic beam from an rf photoinjector. Thanks to a state-of-the-art ultrashort photoinjector driver laser system, the beam has been measured to be sub-100 fs long, at least a factor of 5 shorter than what was measured in previous relativistic electron diffraction setups. Moreover, diffraction patterns from various metal targets (titanium and aluminum) have been obtained using the Pegasus beam. One of the main laboratory goals in the near future is to fully develop the rf photoinjector-based ultrafast electron diffraction technique, with particular attention to the optimization of the working point of the photoinjector in a low-charge ultrashort-pulse regime and to the development of suitable beam diagnostics.
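
    The benefit of MeV energies can be made concrete with the standard 1/γ² scaling of the net transverse space-charge force (the magnetic self-force cancels most of the electric self-force at high γ); the comparison energies below are illustrative.

```python
# Standard estimate behind the MeV-beam argument: the net transverse
# space-charge force in a beam scales as 1/gamma^2.  Energies illustrative.
def lorentz_gamma(T_mev, rest_mev=0.511):
    # Lorentz factor from kinetic energy: gamma = 1 + T / (m_e c^2)
    return 1.0 + T_mev / rest_mev

def space_charge_factor(T_mev):
    return 1.0 / lorentz_gamma(T_mev) ** 2

# 3 MeV rf-photoinjector beam vs a 30 keV beam typical of conventional setups
suppression = space_charge_factor(0.03) / space_charge_factor(3.0)
```

    The ratio comes out around forty: space-charge forces in a 3 MeV bunch are roughly forty times weaker than in a 30 keV bunch of the same density.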

  5. Relativistic electron diffraction at the UCLA Pegasus photoinjector laboratory

    International Nuclear Information System (INIS)

    Musumeci, P.; Moody, J.T.; Scoby, C.M.

    2008-01-01

    Electron diffraction holds the promise of yielding real-time resolution of atomic motion in an easily accessible environment, such as a university laboratory, at a fraction of the cost of fourth-generation X-ray sources. Currently the limit in time resolution for conventional electron diffraction is set by how short an electron pulse can be made. A very promising solution for maintaining the highest possible beam intensity without excessive pulse broadening from space-charge effects is to increase the electron energy to the MeV level, where relativistic effects significantly reduce the space-charge forces. Rf photoinjectors can in principle deliver up to 10^7-10^8 electrons packed in bunches of ~100 fs length, allowing an unprecedented time resolution and enabling the study of irreversible phenomena by single-shot diffraction patterns. The use of rf photoinjectors as sources for ultrafast electron diffraction has recently been at the center of various theoretical and experimental studies. The UCLA Pegasus laboratory, commissioned in early 2007 as an advanced photoinjector facility, is the only operating system in the country that has recently demonstrated electron diffraction using a relativistic beam from an rf photoinjector. Thanks to a state-of-the-art ultrashort photoinjector driver laser system, the beam has been measured to be sub-100 fs long, at least a factor of 5 shorter than what was measured in previous relativistic electron diffraction setups. Moreover, diffraction patterns from various metal targets (titanium and aluminum) have been obtained using the Pegasus beam. One of the main laboratory goals in the near future is to fully develop the rf photoinjector-based ultrafast electron diffraction technique, with particular attention to the optimization of the working point of the photoinjector in a low-charge ultrashort-pulse regime and to the development of suitable beam diagnostics.

  6. Relativistic electron diffraction at the UCLA Pegasus photoinjector laboratory.

    Science.gov (United States)

    Musumeci, P; Moody, J T; Scoby, C M

    2008-10-01

    Electron diffraction holds the promise of yielding real-time resolution of atomic motion in an easily accessible environment, such as a university laboratory, at a fraction of the cost of fourth-generation X-ray sources. Currently the limit in time resolution for conventional electron diffraction is set by how short an electron pulse can be made. A very promising solution for maintaining the highest possible beam intensity without excessive pulse broadening from space-charge effects is to increase the electron energy to the MeV level, where relativistic effects significantly reduce the space-charge forces. Rf photoinjectors can in principle deliver up to 10^7-10^8 electrons packed in bunches of approximately 100 fs length, allowing an unprecedented time resolution and enabling the study of irreversible phenomena by single-shot diffraction patterns. The use of rf photoinjectors as sources for ultrafast electron diffraction has recently been at the center of various theoretical and experimental studies. The UCLA Pegasus laboratory, commissioned in early 2007 as an advanced photoinjector facility, is the only operating system in the country that has recently demonstrated electron diffraction using a relativistic beam from an rf photoinjector. Thanks to a state-of-the-art ultrashort photoinjector driver laser system, the beam has been measured to be sub-100 fs long, at least a factor of 5 shorter than what was measured in previous relativistic electron diffraction setups. Moreover, diffraction patterns from various metal targets (titanium and aluminum) have been obtained using the Pegasus beam. One of the main laboratory goals in the near future is to fully develop the rf photoinjector-based ultrafast electron diffraction technique, with particular attention to the optimization of the working point of the photoinjector in a low-charge ultrashort-pulse regime and to the development of suitable beam diagnostics.

  7. Progress of laser-plasma interaction simulations with the particle-in-cell code

    International Nuclear Information System (INIS)

    Sakagami, Hitoshi; Kishimoto, Yasuaki; Sentoku, Yasuhiko; Taguchi, Toshihiro

    2005-01-01

    As the laser-plasma interaction is a non-equilibrium, non-linear and relativistic phenomenon, we must introduce a microscopic method, namely the relativistic electromagnetic PIC (Particle-In-Cell) simulation code. The PIC code requires a huge number of particles to validate simulation results, and its task is very computation-intensive. Thus simulation research with PIC codes has progressed along with advances in computer technology. Recently, parallel computers with tremendous computational power have become available, and we can now perform three-dimensional PIC simulations of the laser-plasma interaction to investigate laser fusion. Some simulation results are shown with figures. We discuss a recent trend toward large-scale PIC simulations that enable direct comparison between experimental facts and computational results. We also discuss discharge/lightning simulations by the extended PIC code, which include various atomic and relaxation processes. (author)
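
    The basic PIC cycle referred to above (deposit charge on a grid, solve the field equation, interpolate the field back and push the particles) can be sketched in 1D electrostatic form; this is a textbook illustration in normalized units, not code from the work described.

```python
import numpy as np

# A minimal 1D electrostatic PIC cycle in normalized units (omega_pe = 1,
# electron charge -1, mass 1, uniform neutralizing ion background).
ng, n_p, L, dt = 64, 10000, 2 * np.pi, 0.1
dx = L / ng
rng = np.random.default_rng(0)
x = rng.uniform(0.0, L, n_p)
v = 0.01 * np.sin(2 * np.pi * x / L)      # small velocity perturbation

def electric_field_at_particles(x):
    # Cloud-in-cell deposition of electron density onto the grid
    g = x / dx
    i = np.floor(g).astype(int) % ng
    w = g - np.floor(g)
    ne = np.zeros(ng)
    np.add.at(ne, i, 1.0 - w)
    np.add.at(ne, (i + 1) % ng, w)
    ne *= ng / n_p                         # normalize so <ne> = 1
    # Gauss's law dE/dx = 1 - ne, solved spectrally (k = 0 mode dropped)
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    d_k = np.fft.fft(ne - 1.0)
    E_k = np.zeros_like(d_k)
    E_k[1:] = 1j * d_k[1:] / k[1:]
    E = np.fft.ifft(E_k).real
    # Gather the field back to the particles with the same weights
    return (1.0 - w) * E[i] + w * E[(i + 1) % ng]

for _ in range(50):                        # leapfrog: accelerate, then drift
    v -= electric_field_at_particles(x) * dt   # a = qE/m = -E for electrons
    x = (x + v * dt) % L
```

    The perturbation simply oscillates at the plasma frequency; a relativistic electromagnetic code replaces the Poisson solve with a full Maxwell field advance and the push with the Boris rotation.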

  8. Development of PIC-based digital survey meter

    International Nuclear Information System (INIS)

    Nor Arymaswati Abdullah; Nur Aira Abdul Rahman; Mohd Ashhar Khalid; Taiman Kadni; Glam Hadzir Patai Mohamad; Abd Aziz Mhd Ramli; Chong Foh Yong

    2006-01-01

    The need for radiation monitoring and for monitoring of radioactive contamination in the workplace is very important, especially where x-ray machines, linear accelerators, electron beam machines and radioactive sources are present. Appropriate use of radiation detectors is essential in order to maintain a radiation- and contamination-free workplace. This paper reports on the development of a prototype PIC-based digital survey meter. The prototype is a hand-held instrument for general-purpose radiation monitoring and surface contamination measurement. Generally, the device is able to detect some or all of the three major types of ionizing radiation, namely alpha, beta and gamma. It uses a Geiger-Muller tube as the radiation detector, which converts gamma radiation quanta into electric pulses that are further processed by the electronics. The development involved the design of the controller, counter and high-voltage circuits. All these circuits are assembled and enclosed in a plastic casing, together with a GM detector and an LCD display, to form a prototype survey meter. The number of pulses counted by the survey meter varies due to the random nature of radioactivity. By averaging the reading over a time period, a more accurate and stable reading is achieved. To test the accuracy and linearity of the design, the prototype was calibrated using standard procedures at the Secondary Standard Dosimetry Laboratory (SSDL) in MINT. (Author)
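
    The averaging step relies on Poisson counting statistics: for N counts in time t, the rate estimate N/t has relative uncertainty 1/√N, so a longer averaging window gives a steadier reading. A minimal sketch with illustrative numbers:

```python
import math

# Poisson counting statistics behind the averaging step: the rate estimate
# is counts/seconds, and its relative uncertainty is 1/sqrt(counts).
def rate_with_uncertainty(counts, seconds):
    rate = counts / seconds
    rel_err = 1.0 / math.sqrt(counts) if counts > 0 else float("inf")
    return rate, rel_err

r1, e1 = rate_with_uncertainty(100, 10.0)     # short window: 10 cps, 10% error
r2, e2 = rate_with_uncertainty(1000, 100.0)   # 10x longer window, same rate
```

    Same rate, but the longer window shrinks the relative error from 10% to about 3%, which is exactly why the firmware averages before updating the display.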

  9. Resolution of the Vlasov-Maxwell system by PIC discontinuous Galerkin method on GPU with OpenCL

    Directory of Open Access Journals (Sweden)

    Crestetto Anaïs

    2013-01-01

    Full Text Available We present an implementation of a Vlasov-Maxwell solver for multicore processors. The Vlasov equation describes the evolution of charged particles in an electromagnetic field, solution of the Maxwell equations. The Vlasov equation is solved by a Particle-In-Cell (PIC) method, while the Maxwell system is computed by a Discontinuous Galerkin method. We use the OpenCL framework, which allows our code to run on multicore processors or recent Graphics Processing Units (GPUs). We present several numerical applications to two-dimensional test cases.

  10. The Effect of a Guide Field on the Structures of Magnetic Islands: 2D PIC Simulations

    Science.gov (United States)

    Huang, C.; Lu, Q.; Lu, S.; Wang, P.; Wang, S.

    2014-12-01

    Magnetic islands play an important role in magnetic reconnection. Using a series of 2D PIC simulations, we investigate the magnetic structures of a magnetic island formed during multiple X-line reconnection, considering the effects of a guide field in symmetric and asymmetric current sheets. In a symmetric current sheet, the current in the out-of-plane direction forms a tripolar structure inside a magnetic island during anti-parallel reconnection, which results in a quadrupolar structure of the out-of-plane magnetic field. With increasing guide field, the symmetry of both the current system and the out-of-plane magnetic field inside the magnetic island is distorted. When the guide field is sufficiently strong, the current forms a ring along the magnetic field lines inside the magnetic island. At the same time, the current carried by the energetic electrons accelerated in the vicinity of the X lines forms another ring at the edge of the magnetic island. Such a dual-ring current system enhances the out-of-plane magnetic field inside the magnetic island, with a dip at the island's center. In an asymmetric current sheet with no guide field, electrons flow toward the X lines along the separatrices from the higher-density side and are then directed away from the X lines along the separatrices on the lower-density side. The resulting current enhances the out-of-plane magnetic field at one end of the magnetic island and attenuates it at the other end. With increasing guide field, the structures of both the current system and the out-of-plane magnetic field are distorted.

  11. The fitness of copings constructed over UCLA abutments and the implant, constructed by different techniques: casting and casting with laser welding Adaptação de copings de titânio ao implante, construídos sobre pilares UCLA por duas técnicas: fundição e fundição com soldagem de borda a laser

    Directory of Open Access Journals (Sweden)

    Elza Maria Valadares da Costa

    2004-12-01

    Full Text Available An alternative for replacing a missing tooth is the osseointegrated implant, the passive fit between the prosthetic structure and the implant being a significant factor for success. A comparative study was therefore performed between two methods of fabricating a single implant-supported prosthesis. A screw-type implant with a diameter of 3.75 mm and a length of 10.0 mm (3i Implant Innovations, Brasil) was positioned in the middle of a resin block, and over it 15 machined anti-rotational UCLA abutments (137CNB, Conexão Sistemas de Próteses, Brasil) were screwed with a torque of 20 N.cm without any laboratory procedure (control group, CTRLG). From a silicone model, 15 UCLA-type castable abutments (56CNB, Conexão Sistemas de Próteses, Brasil) were screwed on (20 N.cm), received a standard waxing (plain buccal surface) and were cast in titanium (casting group, CG); another 15 UCLA-type abutments machined in titanium (137CNB, Conexão Sistemas de Próteses, Brasil) received the same standard waxing. These last copings were cast in titanium separately from each other and were laser-welded to their respective abutments at the border (laser-welding group, LWG). Border adaptation was observed at the implant/abutment interface, under a measuring microscope, on the y axis, at 4 buccal, lingual, mesial and distal reference points previously marked on the block. Arithmetic means were obtained and an exploratory data analysis was performed to determine the most appropriate statistical test. Descriptive statistics (µm) for Control (mean ± standard deviation: 13.50 ± 21.80; median 0.00), Casting (36.20 ± 12.60; 37.00) and Laser (10.50 ± 12.90; 3.00) were submitted to Kruskal-Wallis ANOVA, alpha = 5%. The test showed that the median distortion values differ statistically (KW = 17.40; df = 2; p = 0.001).

  12. Realistic PIC modelling of laser-plasma interaction: a direct implicit method with adjustable damping and high order weight functions

    International Nuclear Information System (INIS)

    Drouin, M.

    2009-11-01

    This research thesis proposes a new formulation of the relativistic direct implicit method, based on the weak formulation of the wave equation, which is solved by means of a Newton algorithm. The first part of this thesis deals with the properties of explicit particle-in-cell (PIC) methods: properties and limitations of an explicit PIC code, linear analysis of a numerical plasma, the numerical heating phenomenon, the interest of higher-order interpolation functions, and presentation of two applications in high-density relativistic laser-plasma interaction. The second and main part of this report deals with adapting the direct implicit method to laser-plasma interaction: presentation of the state of the art, formulation of the direct implicit method, and resolution of the wave equation. The third part concerns various numerical and physical validations of the ELIXIRS code: laser wave propagation in vacuum, demonstration of the adjustable damping that is a characteristic of the proposed algorithm, influence of the space-time discretization on energy conservation, expansion of a thermal plasma into vacuum, two cases of beam-plasma instability in the relativistic regime, and a case of overcritical laser-plasma interaction.
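
    Two standard constraints on explicit PIC codes, which motivate implicit methods like the one above, can be estimated directly: the time step must resolve the plasma frequency (roughly ω_pe·Δt < 2 for leapfrog stability) and the grid must resolve the Debye length to avoid numerical self-heating. The plasma parameters below are illustrative, not taken from the thesis.

```python
import math

# SI estimates of the explicit-PIC stability/heating constraints.
EPS0, QE, ME = 8.854e-12, 1.602e-19, 9.109e-31

def omega_pe(n_e):
    # Electron plasma frequency, rad/s
    return math.sqrt(n_e * QE**2 / (EPS0 * ME))

def debye_length(n_e, T_e_eV):
    # Electron Debye length, m (T_e in eV, so k_B*T_e = T_e_eV * QE joules)
    return math.sqrt(EPS0 * T_e_eV * QE / (n_e * QE**2))

n_e, T_e = 1e27, 1000.0          # overdense laser plasma, illustrative values
dt_max = 2.0 / omega_pe(n_e)     # leapfrog stability bound on the time step
dx_max = debye_length(n_e, T_e)  # roughly the cell size needed to avoid heating
```

    For these parameters the explicit time step is of order a femtosecond and the cell of order nanometers, which is why implicit schemes with adjustable damping are attractive for high-density targets.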

  13. Controle de um pré-regulador com alto fator de potência utilizando microcontrolador PIC (Control of a high-power-factor pre-regulator using a PIC microcontroller) /

    OpenAIRE

    Grosse, Alexandre de Souza

    1999-01-01

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico. A study of digital control in power electronics using the special-purpose PIC17C756 microcontroller in a pre-regulator for active power-factor correction. The main focus is the control of the current loop of the boost converter used. The work begins with a characterization of the microcontroller and its peripherals and proceeds through the design of the BOOST converter. It presents the techniques of...
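
    The boost (step-up) converter at the heart of such a pre-regulator obeys, in ideal continuous conduction mode, Vout = Vin/(1 − D), where D is the duty cycle; a minimal sketch with illustrative voltages:

```python
# Ideal continuous-conduction-mode relation for a boost converter:
# Vout = Vin / (1 - D), hence D = 1 - Vin/Vout.  Voltages are illustrative,
# not taken from the dissertation.
def boost_duty_cycle(v_in, v_out):
    if not 0.0 < v_in < v_out:
        raise ValueError("boost requires 0 < Vin < Vout")
    return 1.0 - v_in / v_out

duty = boost_duty_cycle(v_in=127.0, v_out=400.0)   # rectified mains to DC bus
```

    In an actual PFC pre-regulator the current loop modulates D over each mains half-cycle so that the input current tracks the rectified voltage waveform.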

  14. The PICS Climate Insights 101 Courses: A Visual Approach to Learning About Climate Science, Mitigation and Adaptation

    Science.gov (United States)

    Pedersen, T. F.; Zwiers, F. W.; Breen, C.; Murdock, T. Q.

    2014-12-01

    The Pacific Institute for Climate Solutions (PICS) has made available online three free, peer-reviewed, animated short courses in a series entitled "Climate Insights 101" that respectively address basic climate science, carbon-emissions mitigation approaches and opportunities, and adaptation. The courses are suitable for students of all ages and use professionally narrated animations designed to hold a viewer's attention. Multiple issues are covered, including complex topics such as the construction of general circulation models, carbon pricing schemes in various countries, and adaptation approaches in the face of extreme weather events. Clips will be shown in the presentation. The first course (Climate Science Basics) has now been seen by over two hundred thousand individuals in over 80 countries, despite being offered in English only. Each course takes about two hours to work through; recognizing that this duration might pose an attention barrier to some students, PICS selected a number of short clips from the climate-science course and posted them as independent snippets on YouTube. A companion series of YouTube videos entitled "Clear The Air" was created to confront the major global-warming denial myths. A major challenge remains, however: despite numerous efforts to promote the availability of the free courses and the shorter YouTube pieces, they have yet to become widely known. Strategies to overcome that constraint will be discussed.

  15. Preliminary conceptual design for a 510 MeV electron/positron injector for a UCLA φ factory

    International Nuclear Information System (INIS)

    Dahlbacka, G.; Hartline, R.; Barletta, W.; Pellegrini, C.

    1991-01-01

    UCLA is proposing a compact superconducting high-luminosity (10^32-10^33 cm^-2 s^-1) e+e- collider as a φ factory. To achieve the required e+e- currents, full-energy injection from a linac, with intermediate storage in a Positron Accumulator Ring (PAR), is used. The elements of the linac are outlined with cost and future flexibility in mind. The preliminary conceptual design starts with a high-current gun similar in design to those developed at SLAC and at ANL (for the APS). Four 4-section linac modules follow, each driven by a 60 MW klystron with a 1 μs macropulse and an average current of 8.6 A. The first 4-section module is used to create positrons in a tungsten target at 186 MeV. The three remaining modules are used to accelerate the e+e- beams to 558 MeV (no-load limit) for injection into the PAR.

  16. A COMPENSATOR APPLICATION USING SYNCHRONOUS MOTOR WITH A PI CONTROLLER BASED ON PIC

    Directory of Open Access Journals (Sweden)

    Ramazan BAYINDIR

    2009-01-01

    Full Text Available In this paper, PI control of a synchronous motor has been realized using a PIC18F452 microcontroller, and the motor has been operated under ohmic, inductive and capacitive loads with different excitation currents. Instead of evaluating the integral term of the PI controller, which is difficult to convert to a digital system, the error values accumulated over a defined time period are summed and multiplied by the sampling period. The reference parameters of the PI algorithm are determined with the Ziegler-Nichols method. These parameters are computed in the microcontroller and updated according to the algorithm. In addition, the design provides visualization for the users: the current, voltage and power factor of the synchronous motor can be observed instantly on the LCD.
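
    The discrete PI law described above (the integral term replaced by the running error sum multiplied by the sampling period) can be sketched as follows; the gains, plant model and setpoint are illustrative, not the paper's Ziegler-Nichols values.

```python
# Discrete PI controller: u = Kp*e + Ki*Ts*sum(e), i.e. the integral is
# approximated by the running error sum times the sampling period Ts.
class DiscretePI:
    def __init__(self, kp, ki, ts):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.error_sum = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.error_sum += error                      # accumulate error
        return self.kp * error + self.ki * self.ts * self.error_sum

# Drive a toy first-order plant toward a setpoint of 1.0 (hypothetical model)
pi = DiscretePI(kp=0.5, ki=2.0, ts=0.01)
y = 0.0
for _ in range(500):
    u = pi.update(1.0, y)
    y += 0.05 * (u - y)     # simple lag plant
```

    The integral term removes the steady-state error that a pure proportional controller would leave; on a real PIC the same loop runs inside a timer interrupt at the sampling period.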

  17. Addition compounds between lanthanide (III) and yttrium (III) and methanesulfonates (MS) and 3-picoline-N-oxide (3-pic NO)

    International Nuclear Information System (INIS)

    Zinner, L.B.

    1984-01-01

    The preparation and characterization of addition compounds between lanthanide methanesulfonates and 3-picoline-N-oxide, of general formula Ln(MS)3·2(3-pic NO), Ln being La, Yb and Y, were carried out. The techniques employed for characterization were: elemental analysis, X-ray diffraction, infrared absorption spectroscopy, electrolytic conductance in methanol, melting ranges, and the emission spectrum of the Eu(III) compound. (Author) [pt

  18. ATS-6 - UCLA fluxgate magnetometer

    Science.gov (United States)

    Mcpherron, R. L.; Coleman, P. J., Jr.; Snare, R. C.

    1975-01-01

    A summary of the design of the University of California at Los Angeles' fluxgate magnetometer is presented. Instrument noise in the bandwidth 0.001 to 1.0 Hz is of order 85 mgamma. The DC field of the spacecraft transverse to the earth-pointing axis is 1.0 ± 21 gamma in the X direction and -2.4 ± 1.3 gamma in the Y direction. The spacecraft field parallel to this axis is less than 5 gamma. The small spacecraft field has made possible studies of the macroscopic field not previously possible at synchronous orbit. At the 96 W longitude of Applications Technology Satellite-6 (ATS-6), the earth's field is typically inclined 30 deg to the dipole axis at local noon. Most perturbations of the field are due to substorms. These consist of a rotation in the meridian to a more radial field followed by a subsequent rotation back. The rotation back is normally accompanied by transient variations in the azimuthal field. The exact timing of these perturbations is a function of satellite location and the details of substorm development.

  19. An exploratory, randomized, parallel-group, open-label, relative bioavailability study with an additional two-period crossover food-effect study exploring the pharmacokinetics of two novel formulations of pexmetinib (ARRY-614)

    Directory of Open Access Journals (Sweden)

    Wollenberg LA

    2015-09-01

    Full Text Available Lance A Wollenberg,1 Donald T Corson,2,3 Courtney A Nugent,1 Farran L Peterson,1 Ann M Ptaszynski,1 Alisha Arrigo,2,3 Coralee G Mannila,2,3 Kevin S Litwiler,1 Stacie J Bell1,4 1Array BioPharma, Boulder, 2Array BioPharma, Longmont, CO, 3Avista Pharma Solutions, Longmont, CO, 4Mallinckrodt Pharmaceuticals, Ellicott City, MD, USA Background: Pexmetinib (ARRY-614) is a dual inhibitor of p38 mitogen-activated protein kinase and Tie2 signaling pathways implicated in the pathogenesis of myelodysplastic syndromes. Previous clinical experience in a Phase I dose-escalation study of myelodysplastic syndrome patients using pexmetinib administered as neat powder-in-capsule (PIC) exhibited high variability in pharmacokinetics and excessive pill burden, prompting an effort to improve the formulation of pexmetinib. Methods: A relative bioavailability assessment encompassed three parallel treatment cohorts of unique subjects comparing the two new formulations (12 subjects per cohort), a liquid oral suspension (LOS) and a liquid-filled capsule (LFC), and the current clinical PIC formulation (six subjects), in a fasted state. The food-effect assessment was conducted as a crossover of the LOS and LFC formulations administered under fed and fasted conditions. Subjects were divided into two groups of equal size to evaluate potential period effects on the food-effect assessment. Results: The geometric mean values of the total plasma exposures based upon area under the curve to the last quantifiable sample (AUClast) of pexmetinib were approximately four- and twofold higher after administration of the LFC and LOS formulations, respectively, than after the PIC formulation, when the formulations were administered in the fasted state. When the LFC formulation was administered in the fed state, pexmetinib AUClast decreased by <5% compared with the fasted state. After administration of the LOS formulation in the fed state, pexmetinib AUClast was 34% greater than observed in the fasted state.

  20. The Benefits of Adding SETI to the University Curriculum and What We Have Learned from a SETI Course Recently Offered at UCLA

    Science.gov (United States)

    Lesyna, Larry; Margot, Jean-Luc; Greenberg, Adam; Shinde, Akshay; Alladi, Yashaswi; Prasad MN, Srinivas; Bowman, Oliver; Fisher, Callum; Gyalay, Szilard; McKibbin, William; Miles, Brittany E.; Nguyen, Donald; Power, Conor; Ramani, Namrata; Raviprasad, Rashmi; Santana, Jesse

    2017-01-01

    We advocate for the inclusion of a full-term course entirely devoted to SETI in the university curriculum. SETI usually warrants only a few lectures in a traditional astronomy or astrobiology course. SETI’s rich interdisciplinary character serves astronomy students by introducing them to scientific and technological concepts that will aid them in their dissertation research or later in their careers. SETI is also an exciting topic that draws students from other disciplines and teaches them astronomical concepts that they might otherwise never encounter in their university studies. We have composed syllabi that illustrate the breadth and depth that SETI courses provide for advanced undergraduate or graduate students. The syllabi can also be used as a guide for an effective SETI course taught at a descriptive level. After a pilot course in 2015, UCLA formally offered a course titled "EPSS C179/279 - Search for Extraterrestrial Intelligence: Theory and Applications" in Spring 2016. The course was designed for advanced undergraduate students and graduate students in the science, technology, engineering, and mathematics fields. In 2016, 9 undergraduate students and 5 graduate students took the course. Students designed an observing sequence for the Arecibo and Green Bank telescopes, observed known planetary systems remotely, wrote a sophisticated and modular data processing pipeline, analyzed the data, and presented the results. In the process, they learned radio astronomy fundamentals, software development, signal processing, and statistics. The instructor believes that the students were eager to learn because of the engrossing nature of SETI. The students rated the course highly, in part because of the observing experience and the teamwork approach. The next offering will be in Spring 2017. See lxltech.com and seti.ucla.edu

  1. Automatic Color Sorting Machine Using TCS230 Color Sensor And PIC Microcontroller

    Directory of Open Access Journals (Sweden)

    Kunhimohammed C K

    2015-12-01

    Full Text Available Sorting of products is a very difficult industrial process. Continuous manual sorting creates consistency issues. This paper describes a working prototype designed for automatic sorting of objects based on color. A TCS230 sensor was used to detect the color of the product, and the PIC16F628A microcontroller was used to control the overall process. The identification of the color is based on frequency analysis of the output of the TCS230 sensor. Two conveyor belts were used, each controlled by a separate DC motor. The first belt is for placing the product to be analyzed by the color sensor, and the second belt is for moving the container, having separated compartments, in order to separate the products. The experimental results suggest that the prototype can fulfill the needs for higher production and precise quality in the field of automation.
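The frequency-based identification described in this abstract can be illustrated with a small sketch: the TCS230 reports light intensity as a square-wave frequency per selected colour filter, so the dominant colour can be taken as the filter with the highest reading. The function name and the sample frequencies below are hypothetical, not taken from the paper's firmware.

```python
# Hypothetical sketch of the frequency-based colour identification
# described above. On the real hardware the PIC16F628A measures the
# TCS230 output frequency once per colour filter; here the measured
# frequencies are plain numbers supplied by the caller.

def classify_color(freq_red: float, freq_green: float, freq_blue: float) -> str:
    """Return the dominant colour: with the TCS230, a higher output
    frequency means more light passed the selected colour filter."""
    readings = {"red": freq_red, "green": freq_green, "blue": freq_blue}
    return max(readings, key=readings.get)

# A red object reflects mostly red light, so the red-filter reading wins:
label = classify_color(freq_red=12000, freq_green=4000, freq_blue=3500)
```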

  2. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  3. Enhanced quasi-static particle-in-cell simulation of electron cloud instabilities in circular accelerators

    Science.gov (United States)

    Feng, Bing

    Electron cloud instabilities have been observed in many circular accelerators around the world and have raised concerns for future accelerators and possible upgrades. In this thesis, the electron cloud instabilities are studied with the quasi-static particle-in-cell (PIC) code QuickPIC. Modeling in three dimensions the long-timescale propagation of a beam through electron clouds in circular accelerators requires faster and more efficient simulation codes. Thousands of processors are easily available for parallel computations. However, it is not straightforward to increase the effective speed of the simulation by running the same problem size on an increasing number of processors, because there is a limit to domain size in the decomposition of the two-dimensional part of the code. A pipelining algorithm applied on the fully parallelized particle-in-cell code QuickPIC is implemented to overcome this limit. The pipelining algorithm uses multiple groups of processors and optimizes the job allocation on the processors in parallel computing. With this novel algorithm, it is possible to use on the order of 10² processors, and to expand the scale and the speed of the simulation with QuickPIC by a similar factor. In addition to the efficiency improvement with the pipelining algorithm, the fidelity of QuickPIC is enhanced by adding two physics models: the beam space charge effect and the dispersion effect. Simulation of two specific circular machines is performed with the enhanced QuickPIC. First, the proposed upgrade to the Fermilab Main Injector is studied with an eye toward guiding the design of the upgrade and code validation. Moderate emittance growth is observed when the bunch population is increased fivefold. But the simulation also shows that increasing the beam energy from 8 GeV to 20 GeV or above can effectively limit the emittance growth. Then the enhanced QuickPIC is used to simulate the electron cloud effect on the electron beam in the Cornell Energy Recovery Linac

  4. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10⁶ cores and sustained performance over ~2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  5. Effects of Temperature and Residence Time on the Emissions of PIC and Fine Particles during Fixed Bed Combustion of Conifer Stemwood Pellets

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Christoffer; Lindmark, Fredrik; Oehman, Marcus; Nordin, Anders [Umeaa Univ. (Sweden). Energy Technology and Thermal Process Chemistry; Pettersson, Esbjoern [Energy Technology Centre, Piteaa (Sweden); Westerholm, Roger [Stockholm Univ., Arrhenius Laboratory (Sweden). Dept. of Analytical Chemistry

    2006-07-15

    The use of wood fuel pellets has proved to be well suited for the small-scale market, enabling controlled and efficient combustion with low emission of products of incomplete combustion (PIC). Still, a potential for further emission reduction exists, and a thorough understanding of the influence of combustion conditions on the emission characteristics of air pollutants like PAH and particulate matter (PM) is important. The objective was to determine the effects of temperature and residence time on the emission performance and characteristics, with focus on hydrocarbons and PM, during combustion of conifer stemwood pellets in a laboratory fixed bed reactor (<5 kW). Temperature and residence time after the bed section were varied according to statistical experimental designs (650-970 °C and 0.5-3.5 s) with the emission responses: CO, organic gaseous carbon, NO, 20 VOC compounds, 43 PAH compounds, PM_tot, fine particle mass/count median diameter (MMD and CMD), and number concentration. Temperature was negatively correlated with the emissions of all studied PIC, with limited effects of residence time. The PM_tot emissions of 15-20 mg/MJ were in all cases dominated by fine (<1 µm) particles of K, Na, S, Cl, C, O and Zn. Increased residence time resulted in increased fine particle sizes (i.e., MMD and CMD) and decreased number concentrations. The importance of a high temperature (>850 °C) in the bed zone, with intensive, air-rich and well-mixed isothermal conditions for 0.5-1.0 s in the post-combustion zone, was illustrated for wood pellet combustion, with almost total depletion of all studied PIC. The results emphasize the need for further verification studies and technology development work.

  6. Final Report for grant ER54958, 'Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas'

    International Nuclear Information System (INIS)

    Decyk, Viktor K.

    2011-01-01

    The UCLA contribution to this collaborative proposal is in two general areas. One area is to enhance the performance of GTC to perform optimally on the DOE leadership class computers, the other part is to contribute to the overall object-based design for GTC as it evolves. Most of the effort during this grant period has been in the former area. High performance computing is undergoing a revolution of greatly increasing parallelism, which is expected to lead to an exaflop computer by the end of the decade. However, the path there is uncertain. A number of hardware architectures have been proposed as well as new parallel languages. It is a disruptive time. In addition to this grant, this work is also funded by a gift from Northrop Grumman and UCLA's Institute for Digital Research and Education. In spite of the variety of proposed hardware, we feel that there is a hardware abstraction that describes the most likely features of the next generation high performance computers. This abstraction consists of a hierarchy of computational layers. At the lowest layer, we have a SIMD (vector) processor whose computational elements work together in lockstep and have fast synchronization and shared memory. At the next higher layer, we have a collection of such processors which communicate through a slower shared memory. At the highest layer, we have a cluster of such collections, which communicate via message-passing. The bottom two layers of this abstraction have been implemented in a language called OpenCL. Although different hardware implements different features of this abstraction, such as the SIMD or vector length, such an abstraction allows one to design parameterizable algorithms that can adapt to different hardware. We believe the hardware which most closely resembles a future exaflop computer is a collection of Graphical Processing Units (GPUs), and we have focused our attention on developing Particle-in-Cell (PIC) algorithms for this hardware initially. PIC codes are

  7. Wavelet-based blind identification of the UCLA Factor building using ambient and earthquake responses

    International Nuclear Information System (INIS)

    Hazra, B; Narasimhan, S

    2010-01-01

    Blind source separation using second-order blind identification (SOBI) has been successfully applied to the problem of output-only identification, popularly known as ambient system identification. In this paper, the basic principles of SOBI for the static mixtures case are extended using the stationary wavelet transform (SWT) in order to improve the separability of sources, thereby improving the quality of identification. Whereas SOBI operates on the covariance matrices constructed directly from measurements, the method presented in this paper, known as the wavelet-based modified cross-correlation method, operates on multiple covariance matrices constructed from the correlation of the responses. The SWT is selected because of its time-invariance property, which means that the transform of a time-shifted signal can be obtained as a shifted version of the transform of the original signal. This important property is exploited in the construction of several time-lagged covariance matrices. The issue of non-stationary sources is addressed through the formation of several time-shifted, windowed covariance matrices. Modal identification results are presented for the UCLA Factor building using ambient vibration data and for recorded responses from the Parkfield earthquake, and compared with published results for this building. Additionally, the effect of sensor density on the identification results is also investigated

  8. The MICHELLE 2D/3D ES PIC Code Advances and Applications

    CERN Document Server

    Petillo, John; De Ford, John F; Dionne, Norman J; Eppley, Kenneth; Held, Ben; Levush, Baruch; Nelson, Eric M; Panagos, Dimitrios; Zhai, Xiaoling

    2005-01-01

    MICHELLE is a new 2D/3D steady-state and time-domain particle-in-cell (PIC) code that employs electrostatic and now magnetostatic finite-element field solvers. The code has been used to design and analyze a wide variety of devices that includes multistage depressed collectors, gridded guns, multibeam guns, annular-beam guns, sheet-beam guns, beam-transport sections, and ion thrusters. Latest additions to the MICHELLE/Voyager tool are as follows: 1) a prototype 3D self magnetic field solver using the curl-curl finite-element formulation for the magnetic vector potential, employing edge basis functions and accumulating current with MICHELLE's new unstructured grid particle tracker, 2) the electrostatic field solver now accommodates dielectric media, 3) periodic boundary conditions are now functional on all grids, not just structured grids, 4) the addition of a global optimization module to the user interface where both electrical parameters (such as electrode voltages) can be optimized, and 5) adaptive mesh ref...

  9. Science and Engineering of the Environment of Los Angeles: A GK-12 Experiment at Developing Science Communications Skills in UCLA's Graduate Program

    Science.gov (United States)

    Moldwin, M. B.; Hogue, T. S.; Nonacs, P.; Shope, R. E.; Daniel, J.

    2008-12-01

    Many science and research skills are taught by osmosis in graduate programs with the expectation that students will develop good communication skills (speaking, writing, and networking) by observing others, attending meetings, and self-reflection. A new National Science Foundation Graduate Teaching Fellows in K-12 Education (GK-12; http://ehrweb.aaas.org/gk12new/) program at UCLA (SEE-LA; http://measure.igpp.ucla.edu/GK12-SEE-LA/overview.html) attempts to make the development of good communication skills an explicit part of the graduate program of science and engineering students. SEE-LA places the graduate fellows in two pairs of middle and high schools within Los Angeles to act as scientists-in-residence. They are partnered with two master science teachers and spend two days per week in the classroom. They are not student teachers, or teacher aides, but scientists who contribute their content expertise, excitement and experience with research, and new ideas for classroom activities and lessons that incorporate inquiry science. During the one-year fellowship, the graduate students also attend a year-long Preparing Future Faculty seminar that discusses many skills needed as they begin their academic or research careers. Students are also required to include a brief (two-page) summary of their research that their middle or high school students would be able to understand as part of their published thesis. Having students actively thinking about and communicating their science to a pre-college audience provides important science communication training and helps contribute to science education. University and local pre-college school partnerships provide an excellent opportunity to support the development of graduate student communication skills while also contributing significantly to the dissemination of sound science to K-12 teachers and students.

  10. The plasma-wall transition layers in the presence of collisions with a magnetic field parallel to the wall

    Science.gov (United States)

    Moritz, J.; Faudot, E.; Devaux, S.; Heuraux, S.

    2018-01-01

    The plasma-wall transition is studied by means of a particle-in-cell (PIC) simulation in the configuration of a magnetic field (B) parallel to the wall, with collisions between charged particles and neutral atoms taken into account. The investigated system consists of a plasma bounded by two absorbing walls separated by 200 electron Debye lengths (λd). The strength of the magnetic field is chosen such that the ratio λd/rl, with rl being the electron Larmor radius, is smaller or larger than unity. Collisions are modelled with a simple operator that randomly reorients the ion or electron velocity, keeping constant the total kinetic energy of both the neutral atom (target) and the incident charged particle. The PIC simulations show that the plasma-wall transition consists of a quasi-neutral region (pre-sheath), from the center of the plasma towards the walls, where the electric potential and electric field profiles are well described by an ambipolar diffusion model, and a second region in the vicinity of the walls, called the sheath, where the quasi-neutrality breaks down. In this peculiar geometry of B, and for a certain range of the mean free path, the sheath is found to be composed of two charged layers: a positive one close to the walls, and a negative one towards the plasma, before the neutral pre-sheath. Depending on the amplitude of B, the spatial variation of the electric potential can be non-monotonic and present a maximum within the sheath region. More generally, the sheath extent, as well as the potential drops within the sheath and the pre-sheath, are studied with respect to B, the mean free path, and the ion and electron temperatures.

  11. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  12. Study of negative hydrogen ion beam optics using the 3D3V PIC model

    International Nuclear Information System (INIS)

    Miyamoto, K.; Nishioka, S.; Goto, I.; Hatayama, A.; Hanada, M.; Kojima, A.

    2015-01-01

    The mechanism of negative ion extraction under realistic conditions with a complex magnetic field is studied using a 3D PIC simulation code. The extraction region of the negative ion source for the negative-ion-based neutral beam injection system in fusion reactors is modelled. It is shown that the E x B drift of electrons is caused by the magnetic filter and the electron suppression magnetic field, resulting in an asymmetry of the plasma meniscus. Furthermore, it is indicated that the asymmetry of the plasma meniscus results in an asymmetry of the negative ion beam profile, including the beam halo. It is also demonstrated theoretically that the E x B drift is not significantly weakened by elastic collisions of the electrons with neutral particles

  13. Study of negative hydrogen ion beam optics using the 3D3V PIC model

    Energy Technology Data Exchange (ETDEWEB)

    Miyamoto, K., E-mail: kmiyamot@naruto-u.ac.jp [Naruto University of Education, 748 Nakashima, Takashima, Naruto-cho, Naruto-shi, Tokushima, 772-8502 (Japan); Nishioka, S.; Goto, I.; Hatayama, A. [Faculty of Science and Technology, Keio University, 3-14-1, Hiyoshi, Kohoku-ku, Yokohama, 223-8522 (Japan); Hanada, M.; Kojima, A. [Japan Atomic Energy Agency, 801-1,Mukoyama, Naka, 319-0913 (Japan)

    2015-04-08

    The mechanism of negative ion extraction under realistic conditions with a complex magnetic field is studied using a 3D PIC simulation code. The extraction region of the negative ion source for the negative-ion-based neutral beam injection system in fusion reactors is modelled. It is shown that the E x B drift of electrons is caused by the magnetic filter and the electron suppression magnetic field, resulting in an asymmetry of the plasma meniscus. Furthermore, it is indicated that the asymmetry of the plasma meniscus results in an asymmetry of the negative ion beam profile, including the beam halo. It is also demonstrated theoretically that the E x B drift is not significantly weakened by elastic collisions of the electrons with neutral particles.

  14. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  15. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  16. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computers in which the whole sequence to be sorted can fit in the

  17. Implementación de un balastro electrónico con microcontrolador PIC para lámparas de sodio de alta presión; Implementation of electronic ballast with PIC microcontroller for high pressure sodiumlamps

    Directory of Open Access Journals (Sweden)

    Armando M. - Gutiérrez Menéndez

    2013-10-01

    Full Text Available This paper presents a prototype electronic ballast that guarantees successful high-frequency operation of a 70 W high-pressure sodium lamp, as it operates free of acoustic resonance (AR). The acoustic resonance phenomenon is analyzed, with attention to its origin and theoretical prediction. The frequency-modulation technique used to avoid this phenomenon is described; it is implemented on Microchip's 8-bit PIC16F877 microcontroller and is activated according to variations in the lamp's electrical parameters, namely voltage and current. The stages that make up the prototype are described, together with simulations of the main elements that compose the ballast. The practical results achieved by the prototype are presented, divided by stage so that the correct operation of each one can be analyzed.

  18. Low-temperature plasma simulations with the LSP PIC code

    Science.gov (United States)

    Carlsson, Johan; Khrabrov, Alex; Kaganovich, Igor; Keating, David; Selezneva, Svetlana; Sommerer, Timothy

    2014-10-01

    The LSP (Large-Scale Plasma) PIC-MCC code has been used to simulate several low-temperature plasma configurations, including a gas switch for high-power AC/DC conversion, a glow discharge and a Hall thruster. Simulation results will be presented with an emphasis on code comparison and validation against experiment. High-voltage, direct-current (HVDC) power transmission is becoming more common as it can reduce construction costs and power losses. Solid-state power-electronics devices are presently used, but it has been proposed that gas switches could become a compact, less costly, alternative. A gas-switch conversion device would be based on a glow discharge, with a magnetically insulated cold cathode. Its operation is similar to that of a sputtering magnetron, but with much higher pressure (0.1 to 0.3 Torr) in order to achieve high current density. We have performed 1D (axial) and 2D (axial/radial) simulations of such a gas switch using LSP. The 1D results were compared with results from the EDIPIC code. To test and compare the collision models used by the LSP and EDIPIC codes in more detail, a validation exercise was performed for the cathode fall of a glow discharge. We will also present some 2D (radial/azimuthal) LSP simulations of a Hall thruster. The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Number DE-AR0000298.

  19. Photonic Integrated Circuit (PIC) Device Structures: Background, Fabrication Ecosystem, Relevance to Space Systems Applications, and Discussion of Related Radiation Effects

    Science.gov (United States)

    Alt, Shannon

    2016-01-01

    Electronic integrated circuits are considered one of the most significant technological advances of the 20th century, with demonstrated impact in their ability to incorporate successively higher numbers of transistors and construct electronic devices on a single CMOS chip. Photonic integrated circuits (PICs) exist as the optical analog to integrated circuits; however, in place of transistors, PICs consist of numerous scaled optical components, including such "building-block" structures as waveguides, MMIs, lasers, and optical ring resonators. The ability to construct electronic and photonic components on a single microsystems platform offers transformative potential for the development of technologies in fields including communications, biomedical device development, autonomous navigation, and chemical and atmospheric sensing. Developing on-chip systems that provide new avenues for integration and replacement of bulk optical and electro-optic components also reduces size, weight, power and cost (SWaP-C) limitations, which are important in the selection of instrumentation for specific flight projects. The number of applications currently emerging for complex photonics systems-particularly in data communications-warrants additional investigations when considering reliability for space systems development. This Body of Knowledge document seeks to provide an overview of existing integrated photonics architectures; the current state of design, development, and fabrication ecosystems in the United States and Europe; and potential space applications, with emphasis given to associated radiation effects and reliability.

  20. Hybrid-PIC Computer Simulation of the Plasma and Erosion Processes in Hall Thrusters

    Science.gov (United States)

    Hofer, Richard R.; Katz, Ira; Mikellides, Ioannis G.; Gamero-Castano, Manuel

    2010-01-01

    HPHall software simulates and tracks the time-dependent evolution of the plasma and erosion processes in the discharge chamber and near-field plume of Hall thrusters. HPHall is an axisymmetric solver that employs a hybrid fluid/particle-in-cell (Hybrid-PIC) numerical approach. HPHall, originally developed by MIT in 1998, was upgraded to HPHall-2 by the Polytechnic University of Madrid in 2006. The Jet Propulsion Laboratory has continued the development of HPHall-2 through upgrades to the physical models employed in the code, and the addition of entirely new ones. Primary among these are the inclusion of a three-region electron mobility model that more accurately depicts the cross-field electron transport, and the development of an erosion sub-model that allows for the tracking of the erosion of the discharge chamber wall. The code is being developed to provide NASA science missions with a predictive tool of Hall thruster performance and lifetime that can be used to validate Hall thrusters for missions.

  1. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
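    The SENSE-type unfolding described above reduces, for each aliased pixel, to a small least-squares problem built from the coil sensitivities. A toy sketch with two coils and acceleration factor R = 2; the sensitivity values are invented for illustration:

```python
import numpy as np

def sense_unfold(aliased, sens):
    """Unfold one aliased pixel with a SENSE-type reconstruction.

    aliased : (ncoils,) aliased pixel values, one per receiver coil
    sens    : (ncoils, R) coil sensitivities at the R image locations
              superimposed by R-fold undersampling
    Returns the R separated pixel values via least squares.
    """
    rho, *_ = np.linalg.lstsq(sens, aliased, rcond=None)
    return rho

# Two coils observe two superimposed pixels with distinct sensitivities.
sens = np.array([[1.0, 0.2],
                 [0.3, 1.0]])
true_pixels = np.array([3.0, 5.0])
aliased = sens @ true_pixels        # what R = 2 undersampling produces
recovered = sense_unfold(aliased, sens)
```

    The unfolding is well conditioned only when the coils have sufficiently different sensitivities at the superimposed locations, which is why coil-array geometry limits the usable acceleration factor.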

  2. IMPLEMENTATION OF PID ON PIC24F SERIES MICROCONTROLLER FOR SPEED CONTROL OF A DC MOTOR USING MPLAB AND PROTEUS

    OpenAIRE

    Sohaib Aslam; Sundas Hannan; Umar Sajjad; Waheed Zafar

    2016-01-01

    Speed control of a DC motor is critical in most industrial systems where accuracy and protection are of the essence. This paper presents simulations of a Proportional-Integral-Derivative (PID) controller on a 16-bit PIC 24F series microcontroller for speed control of a DC motor in the presence of load torque. The PID gains have been tuned by the Linear Quadratic Regulator (LQR) technique; the controller is then implemented on the microcontroller using MPLAB and finally simulated for speed control of D...
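    As a companion to the abstract, here is a minimal discrete PID loop driving a toy first-order motor model. The gains and the plant are illustrative stand-ins, not the LQR-tuned values or motor parameters from the paper:

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt               # integral term
        deriv = (err - self.prev_err) / self.dt      # derivative term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy first-order motor: d(omega)/dt = -omega + u  (normalized units).
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
omega, setpoint = 0.0, 100.0
for _ in range(5000):                                # 50 s of simulated time
    u = pid.update(setpoint, omega)
    omega += 0.01 * (-omega + u)
```

    The integral term drives the steady-state error to zero even under a constant load torque, which is the property the paper's load-torque experiments exercise.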

  3. Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition

    Energy Technology Data Exchange (ETDEWEB)

    Witherspoon, F. Douglas [HyperV Technologies Corp.; Welch, Dale R. [Voss Scientific, LLC; Thompson, John R. [FAR-TECH, Inc.; MacFarlane, Joeseph J. [Prism Computational Sciences Inc.; Phillips, Michael W. [Advanced Energy Systems, Inc.; Bruner, Nicki [Voss Scientific, LLC; Mostrom, Chris [Voss Scientific, LLC; Thoma, Carsten [Voss Scientific, LLC; Clark, R. E. [Voss Scientific, LLC; Bogatu, Nick [FAR-TECH, Inc.; Kim, Jin-Soo [FAR-TECH, Inc.; Galkin, Sergei [FAR-TECH, Inc.; Golovkin, Igor E. [Prism Computational Sciences, Inc.; Woodruff, P. R. [Prism Computational Sciences, Inc.; Wu, Linchun [HyperV Technologies Corp.; Messer, Sarah J. [HyperV Technologies Corp.

    2014-05-20

    Radiation processes play an important role in the study of both fast ignition and other inertial confinement schemes, such as plasma jet driven magneto-inertial fusion, both in their effect on energy balance and in generating diagnostic signals. In the latter case, warm and hot dense matter may be produced by the convergence of a plasma shell formed by the merging of an assembly of high Mach number plasma jets. This innovative approach has the potential advantage of creating matter of high energy density in voluminous amounts compared with high-power lasers or particle beams. An important application of this technology is as a plasma liner for the flux compression of magnetized plasma to create ultra-high magnetic fields and burning plasmas. HyperV Technologies Corp. has been developing plasma jet accelerator technology in both coaxial and linear railgun geometries to produce plasma jets of sufficient mass, density, and velocity to create such imploding plasma liners. An enabling tool for the development of this technology is the ability to model the plasma dynamics, not only in the accelerators themselves, but also in the resulting magnetized target plasma and within the merging/interacting plasma jets during transport to the target. Welch pioneered numerical modeling of such plasmas (including for fast ignition) using the LSP simulation code. LSP is an electromagnetic, parallelized plasma simulation code under development since 1995. It has a number of innovative features making it uniquely suitable for modeling high energy density plasmas, including a hybrid fluid model for electrons that allows electrons in dense plasmas to be modeled with a kinetic or fluid treatment as appropriate. In addition to in-house use at Voss Scientific, several groups carrying out research in fast ignition (LLNL, SNL, UCSD, AWE (UK), and Imperial College (UK)) also use LSP. A collaborative team consisting of HyperV Technologies Corp., Voss Scientific LLC, FAR-TECH, Inc., Prism

  4. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  5. Photo-induced reorganization of molecular packing of amphi-PIC J-aggregates (single J-aggregate spectroscopy)

    International Nuclear Information System (INIS)

    Malyukin, Yu.V.; Sorokin, A.V.; Yefimova, S.L.; Lebedenko, A.N.

    2005-01-01

    Confocal luminescence microscopy has been used to excite and collect luminescence from single amphi-PIC J-aggregates. Two types of J-aggregates have been revealed in the luminescence images: bead-like J-aggregates, whose diameter is less than 1 μm, and rod-like ones, whose length is about 3 μm and whose diameter is less than 1 μm. It has been found that single rod-like and bead-like J-aggregates exhibit different luminescence bands with different decay parameters. Under off-resonance blue-tail excitation, the J-aggregate exciton luminescence disappeared within a certain time and a new band appeared, which cannot be attributed to monomer emission. The luminescence image shows that the J-aggregate is not destroyed. However, storing the J-aggregate in darkness does not recover its exciton luminescence.

  6. A SPECT reconstruction method for extending parallel to non-parallel geometries

    International Nuclear Information System (INIS)

    Wen Junhai; Liang Zhengrong

    2010-01-01

    Due to its simplicity, parallel-beam geometry is usually assumed for the development of image reconstruction algorithms. The established reconstruction methodologies are then extended to fan-beam, cone-beam and other non-parallel geometries for practical application. This situation occurs for quantitative SPECT (single photon emission computed tomography) imaging in inverting the attenuated Radon transform. Novikov reported an explicit parallel-beam formula for the inversion of the attenuated Radon transform in 2000. Thereafter, a formula for fan-beam geometry was reported by Bukhgeim and Kazantsev (2002 Preprint N. 99 Sobolev Institute of Mathematics). At the same time, we presented a formula for varying focal-length fan-beam geometry. Sometimes, the reconstruction formula is so implicit that we cannot obtain the explicit reconstruction formula in the non-parallel geometries. In this work, we propose a unified reconstruction framework for extending parallel-beam geometry to any non-parallel geometry using ray-driven techniques. Studies by computer simulations demonstrated the accuracy of the presented unified reconstruction framework for extending parallel-beam to non-parallel geometries in inverting the attenuated Radon transform.

  7. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  8. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated-data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods, such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination, are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.

  9. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance, but not as good as a fully vectorized bucket sort on the Cray YMP.

  10. Design and implementation of a remote monitoring and control card over the Internet, using PIC18F97J60 microcontroller technology

    OpenAIRE

    Montenegro Viera, Efren; Sandoya Tinoco, Eduardo; Ponguillo Intriago, Ronald Alberto

    2009-01-01

    The work presented in this article was developed to demonstrate the application of the technological resources of devices such as the PIC 18F97J60 microcontroller in home automation. There are different ways of implementing home automation, which is why we decided to adopt a new technology, PICs with embedded systems, to develop an application that reduces the cost and size of what can currently be found on the market. With the help of...

  11. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with those efforts. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  12. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  13. Two-way coupling of magnetohydrodynamic simulations with embedded particle-in-cell simulations

    Science.gov (United States)

    Makwana, K. D.; Keppens, R.; Lapenta, G.

    2017-12-01

    We describe a method for coupling an embedded domain in a magnetohydrodynamic (MHD) simulation with a particle-in-cell (PIC) method. In this two-way coupling we follow the work of Daldorff et al. (2014) [19] in which the PIC domain receives its initial and boundary conditions from MHD variables (MHD to PIC coupling) while the MHD simulation is updated based on the PIC variables (PIC to MHD coupling). This method can be useful for simulating large plasma systems, where kinetic effects captured by particle-in-cell simulations are localized but affect global dynamics. We describe the numerical implementation of this coupling, its time-stepping algorithm, and its parallelization strategy, emphasizing the novel aspects of it. We test the stability and energy/momentum conservation of this method by simulating a steady-state plasma. We test the dynamics of this coupling by propagating plasma waves through the embedded PIC domain. Coupling with MHD shows satisfactory results for the fast magnetosonic wave, but significant distortion for the circularly polarized Alfvén wave. Coupling with Hall-MHD shows excellent coupling for the whistler wave. We also apply this methodology to simulate a Geospace Environmental Modeling (GEM) challenge type of reconnection with the diffusion region simulated by PIC coupled to larger scales with MHD and Hall-MHD. In both these cases we see the expected signatures of kinetic reconnection in the PIC domain, implying that this method can be used for reconnection studies.

  14. The priest Juan Fernández de Sotomayor y Picón and the catechisms of Independence

    OpenAIRE

    Ocampo López, Javier

    2010-01-01

    This book presents the revolutionary environment of the late 18th century and the first half of the 19th century through the thought and action of the Cartagena-born priest Juan Fernández de Sotomayor y Picón, parish priest of Mompós and rector of the Colegio Mayor de Nuestra Señora del Rosario, who lived through the years that gave birth to the Republic of Colombia. He served as a revolutionary priest and as a politician of Mompós and of Cartagena de Indias, before the Congress of the United Provinces, the National Congress...

  15. Electromagnetic particle-in-cell (PIC) method for modeling the formation of metal surface structures induced by femtosecond laser radiation

    Energy Technology Data Exchange (ETDEWEB)

    Djouder, M. [Laboratoire de Physique et Chimie Quantique, Université Mouloud Mammeri de Tizi-ouzou, BP 17 RP, 15000 Tizi-Ouzou (Algeria); Lamrous, O., E-mail: omarlamrous@mail.ummto.dz [Laboratoire de Physique et Chimie Quantique, Université Mouloud Mammeri de Tizi-ouzou, BP 17 RP, 15000 Tizi-Ouzou (Algeria); Mitiche, M.D. [Laboratoire de Physique et Chimie Quantique, Université Mouloud Mammeri de Tizi-ouzou, BP 17 RP, 15000 Tizi-Ouzou (Algeria); Itina, T.E. [Laboratoire Hubert Curien, UMR CNRS 5516/Université Jean Monnet, 18 rue de Professeur Benoît Lauras, 42000 Saint-Etienne (France); Zemirli, M. [Laboratoire de Physique et Chimie Quantique, Université Mouloud Mammeri de Tizi-ouzou, BP 17 RP, 15000 Tizi-Ouzou (Algeria)

    2013-09-01

    The particle-in-cell (PIC) method coupled to the finite-difference time-domain (FDTD) method is used to model the formation of laser-induced periodic surface structures (LIPSS) at the early stage of femtosecond laser irradiation of a smooth metal surface. The theoretical results were analyzed and compared with experimental data taken from the literature. It was shown that the optical properties of the target are not homogeneous, and that the ejection of electrons is such that ripples in the electron density were obtained. The Coulomb explosion mechanism was proposed to explain the ripple formation under the considered conditions.

  16. Electromagnetic particle-in-cell (PIC) method for modeling the formation of metal surface structures induced by femtosecond laser radiation

    International Nuclear Information System (INIS)

    Djouder, M.; Lamrous, O.; Mitiche, M.D.; Itina, T.E.; Zemirli, M.

    2013-01-01

    The particle-in-cell (PIC) method coupled to the finite-difference time-domain (FDTD) method is used to model the formation of laser-induced periodic surface structures (LIPSS) at the early stage of femtosecond laser irradiation of a smooth metal surface. The theoretical results were analyzed and compared with experimental data taken from the literature. It was shown that the optical properties of the target are not homogeneous, and that the ejection of electrons is such that ripples in the electron density were obtained. The Coulomb explosion mechanism was proposed to explain the ripple formation under the considered conditions.

  17. Implementation of an electronic ballast with a PIC microcontroller for high-pressure sodium lamps

    Directory of Open Access Journals (Sweden)

    Armando Manuel Gutiérrez Menéndez

    2013-09-01

    Full Text Available This paper presents a prototype electronic ballast that guarantees successful high-frequency operation of a 70 W high-pressure sodium lamp, since it operates free of acoustic resonance (AR). The acoustic resonance phenomenon is analyzed, examining its origin and its theoretical prediction. The frequency-modulation technique used to avoid this phenomenon is described; it is implemented on Microchip's 8-bit PIC16F877 microcontroller and is activated depending on the variation of the lamp's electrical parameters, namely voltage and current. The stages that make up the prototype are presented, together with simulations of the main elements that compose the ballast. The practical results achieved by the prototype are reported, divided by stage so that the correct operation of each one can be analyzed.

  18. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  19. Digital Survey Meter based on PIC16F628 Microcontroller

    International Nuclear Information System (INIS)

    Al-Mohamad, A.; Shliwitt, J.

    2010-01-01

    A digital survey meter based on a PIC16F628 microcontroller was designed around a simple Geiger-Müller counter, the ZP1320 made by Centronic in the UK, as the detector. The sensitivity of this tube is about 9 counts/s at 10 μGy/h. It is sensitive to gamma radiation and to beta particles above 0.25 MeV, and has a sensitive length of 28 mm. Count rate versus dose rate is quite linear up to about 10^4 counts/s. Indication is given by a speaker which emits one click for each count. In addition to the acoustic alarm, the meter works in one of three measurement modes selected by a three-position switch: 1- measurement of dose rate (in μGy/h) and count rate (in counts/s), for high count rates; 2- measurement of dose rate (in μGy/h) and count rate (in counts/min), for low count rates; 3- accumulated counting, with a continuously updated display of the number of counts and the counting time, refreshed every 2 s. The results are shown on an alphanumeric LCD display, and the circuit gives many hours of operation from a single 9 V PP3 battery. The design of the circuit combines accuracy, simplicity and low power consumption. We built two models of this design: the first with only an internal detector, and the second equipped with an external detector. (author)
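    The quoted sensitivity (about 9 counts/s at 10 μGy/h, roughly linear up to 10^4 counts/s) implies a simple scaling from count rate to dose rate; a dose-rate mode like the meter's could plausibly compute something of this form. A sketch under that linearity assumption; the function name and interface are ours, not the firmware's:

```python
def dose_rate_ugy_per_h(counts, seconds, cps_at_10ugyh=9.0):
    """Dose rate in uGy/h from a raw count accumulated over a timing
    window, assuming the ZP1320's quoted linear sensitivity of about
    9 counts/s at 10 uGy/h."""
    cps = counts / seconds
    return cps * 10.0 / cps_at_10ugyh
```

    For example, 18 clicks in a 2 s window give 9 counts/s, i.e. the tube's reference point of 10 μGy/h; at very high rates a real instrument would also correct for the tube's dead time, which this sketch ignores.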

  20. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  1. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  2. Programmable Waveform Generator Using the DDS AD9851 Based on a PIC 18F4550 Microcontroller

    Directory of Open Access Journals (Sweden)

    Hidayat Nur Isnianto

    2015-05-01

    Full Text Available Direct digital synthesis (DDS) is a method for generating an analog waveform digitally, by producing a digital signal that varies with time and converting it into analog form with a digital-to-analog converter (DAC). The AD9851 IC is an analog waveform generator that implements the DDS method, whose output frequency can be changed according to the user's needs. A PIC 18F4550 microcontroller generates the digital control signals; this microcontroller was chosen because it provides a full-speed USB 2.0 interface to a computer that requires no special drivers. The output frequency can be set from a keypad or programmed from a computer. Test results show that the generator produces frequencies from 1000 Hz up to 30 MHz, as a sine wave with an amplitude of 430 mV and a square wave with an amplitude of 4.125 V. Keywords: DDS, AD9851, PIC 18F4550, USB, waveform generator
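    Programming a DDS chip like the AD9851 comes down to the standard 32-bit phase-accumulator relation f_out = FTW * f_sys / 2^32. A sketch of the tuning-word arithmetic; the 180 MHz system clock assumes a 30 MHz reference with the AD9851's 6x multiplier enabled, a common configuration that the abstract does not confirm:

```python
def ad9851_tuning_word(f_out_hz, f_sys_hz=180_000_000):
    """Frequency tuning word for a 32-bit DDS phase accumulator:
    f_out = FTW * f_sys / 2**32, so FTW = round(f_out * 2**32 / f_sys)."""
    return round(f_out_hz * 2**32 / f_sys_hz)

def dds_output_freq(ftw, f_sys_hz=180_000_000):
    """Actual output frequency produced by a given tuning word."""
    return ftw * f_sys_hz / 2**32
```

    The frequency resolution is f_sys / 2^32, about 0.042 Hz at 180 MHz, so any requested frequency in the 1 kHz to 30 MHz range is reproduced to well under a tenth of a hertz.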

  3. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  4. Study and Development of an acquisition chain of gamma radiation based on PIC16F877

    International Nuclear Information System (INIS)

    Blidi, Hamza

    2011-01-01

    The project consists of designing and building electronic cards for the acquisition of gamma radiation, with the aim of extracting its energy and spectral characteristics. A scintillation detector provides an electrical signal of characteristic shape, which is transformed into a Gaussian signal by an amplifier card. An analog card named Stretcher then processes this signal to produce a set of digital signals describing the morphological and energy aspects of the signal (peak detection, zero-level detection, ...). These are exploited and processed by a control card built around a PIC16F877. The processing is performed by code written in C implementing the finite state machine (FSM) of the Wilkinson converter, in order to obtain the final conversion result over a wide energy/frequency range (nuclear spectrometry).

  5. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.

  6. MEASURED DIAMETERS OF TWO F STARS IN THE β PIC MOVING GROUP

    International Nuclear Information System (INIS)

    Simon, M.; Schaefer, G. H.

    2011-01-01

    We report angular diameters of HIP 560 and HIP 21547, two F spectral-type pre-main-sequence members of the β Pic Moving Group. We used the east-west 314 m long baseline of the CHARA Array. The measured limb-darkened angular diameters of HIP 560 and HIP 21547 are 0.492 ± 0.032 and 0.518 ± 0.009 mas, respectively. The corresponding stellar radii are 2.1 and 1.6 R☉ for HIP 560 and HIP 21547, respectively. These values indicate that the stars are truly young. Analyses using the evolutionary tracks calculated by Siess, Dufour, and Forestini and the tracks of the Yonsei-Yale group yield consistent results. Analyzing the measurements on an angular diameter versus color diagram, we find that the ages of the two stars are indistinguishable; their average value is 13 ± 2 Myr. The masses of HIP 560 and HIP 21547 are 1.65 ± 0.02 and 1.75 ± 0.05 M☉, respectively. However, analysis of the stellar parameters on a Hertzsprung-Russell diagram yields ages at least 5 Myr older. Both stars are rapid rotators. The discrepancy between the two types of analyses has a natural explanation in gravitational darkening. Stellar oblateness, however, does not affect our measurements of angular diameters.
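    The conversion from a measured limb-darkened angular diameter to a linear radius is just R = (θ/2)·d. A quick check in Python, taking θ = 0.518 mas and an assumed distance of about 30 pc (the abstract does not quote the distances), which lands near the reported 1.6 R☉:

```python
import math

PC_M = 3.0857e16      # one parsec in metres
R_SUN_M = 6.957e8     # nominal solar radius in metres
MAS_RAD = math.pi / (180 * 3600 * 1000)  # one milliarcsecond in radians

def radius_rsun(theta_mas, dist_pc):
    """Linear stellar radius in solar radii from a limb-darkened angular
    diameter theta (mas) and a distance (pc): R = (theta/2) * d."""
    return theta_mas * MAS_RAD * dist_pc * PC_M / 2 / R_SUN_M

r = radius_rsun(0.518, 30.0)   # roughly 1.7 solar radii at 30 pc
```

    The linear dependence on distance is why the radius uncertainty is usually dominated by the parallax rather than by the interferometric angular diameter.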

  7. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  8. Two-step or two-phase cluster analysis with SPSS

    Directory of Open Access Journals (Sweden)

    Maria-José Rubio-Hurtado

    2017-01-01

    Full Text Available The two-step cluster analysis procedure, also called biphasic, is an exploration tool designed to discover the natural groupings in a data set. It allows the generation of information criteria, cluster frequencies, and descriptive statistics per cluster, as well as bar charts, pie charts, and variable-importance plots. The two-step cluster analysis method has features that are unique with respect to traditional clustering methods, such as: an automatic procedure for determining the optimal number of clusters, the possibility of creating cluster models with both categorical and continuous variables, and the ability to work with very large data files.

  9. Low cost digital wind speed meter with wind direction using PIC16F877A

    Energy Technology Data Exchange (ETDEWEB)

    Sujod, M.Z.; Ismail, M.M. [Malaysia Pahang Univ., Pahang (Malaysia). Faculty of Electrical and Electronics Engineering

    2008-07-01

    Weather measurement tools are necessary for determining current conditions and for forecasting. Wind is one of the weather elements that can be measured using an anemometer, a device for measuring the velocity or the pressure of the wind and one of the instruments used in weather stations. This paper described a circuit design for a wind speed and direction meter, together with suitable programming to measure and display wind speed and direction. A microcontroller (PIC16F877A) was employed as the central processing unit of the digital wind speed and direction meter. The paper presented and discussed the hardware and software implementation as well as the calibration and results. The paper also discussed cost estimation and future recommendations. It was concluded that the hardware and software components were carefully selected with development cost in mind, and that the resulting cost was much lower than market prices. 4 refs., 8 figs.
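
The core speed measurement in such a meter is a pulse-frequency-to-velocity conversion: the microcontroller counts rotor pulses over a sampling window and scales by a calibration factor. A hedged sketch of the arithmetic (the calibration constants here are hypothetical, not values from the paper):

```python
def wind_speed(pulse_count, window_s, pulses_per_rev=2, metres_per_rev=0.7):
    """Convert pulses counted over window_s seconds into wind speed (m/s).
    pulses_per_rev and metres_per_rev are hypothetical calibration
    constants for the cup rotor."""
    revs_per_s = pulse_count / pulses_per_rev / window_s
    return revs_per_s * metres_per_rev
```

For example, 40 pulses in a 2 s window with these constants corresponds to 10 revolutions per second, i.e. 7 m/s.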

  10. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
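
The template comparison can be sketched in a few lines: split the checkpoint into fixed-size blocks, compare each block's checksum against the template's, and keep only the blocks that differ. This is a toy illustration of the rsync-style idea, not the patented implementation:

```python
import hashlib

def delta_blocks(checkpoint: bytes, template: bytes, block_size=4096):
    """Return (index, data) for each block of `checkpoint` whose checksum
    differs from the corresponding block of `template`; only these
    blocks would need to be transmitted and stored."""
    def chunks(data):
        return [data[i:i + block_size] for i in range(0, len(data), block_size)]

    template_sums = [hashlib.sha1(c).digest() for c in chunks(template)]
    deltas = []
    for i, c in enumerate(chunks(checkpoint)):
        if i >= len(template_sums) or hashlib.sha1(c).digest() != template_sums[i]:
            deltas.append((i, c))
    return deltas
```

When a node's checkpoint differs from the template in one block, only that block is returned, which is the source of the claimed reduction in transmitted and stored data.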

  11. A framework for improving access and customer service times in health care: application and analysis at the UCLA Medical Center.

    Science.gov (United States)

    Duda, Catherine; Rajaram, Kumar; Barz, Christiane; Rosenthal, J Thomas

    2013-01-01

    There has been an increasing emphasis on health care efficiency and costs and on improving quality in health care settings such as hospitals or clinics. However, there has not been sufficient work on methods of improving access and customer service times in health care settings. The study develops a framework for improving access and customer service time for health care settings. In the framework, the operational concept of the bottleneck is synthesized with queuing theory to improve access and reduce customer service times without reduction in clinical quality. The framework is applied at the Ronald Reagan UCLA Medical Center to determine the drivers for access and customer service times and then provides guidelines on how to improve these drivers. Validation using simulation techniques shows significant potential for reducing customer service times and increasing access at this institution. Finally, the study provides several practice implications that could be used to improve access and customer service times without reduction in clinical quality across a range of health care settings from large hospitals to small community clinics.
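
The synthesis of bottleneck analysis with queuing theory can be illustrated with the textbook M/M/c result for mean wait in queue (standard Erlang C; this is a generic illustration, not the authors' specific model):

```python
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Mean wait in queue (Wq) for an M/M/c station: lam = arrival rate,
    mu = service rate per server, c = number of servers (requires
    lam < c*mu for stability)."""
    a = lam / mu                                  # offered load in Erlangs
    rho = a / c                                   # server utilization
    tail = a**c / (factorial(c) * (1.0 - rho))
    p_wait = tail / (sum(a**k / factorial(k) for k in range(c)) + tail)
    return p_wait / (c * mu - lam)                # Erlang C / spare capacity
```

At a bottleneck station with one server, this reduces to the familiar M/M/1 wait ρ/(μ − λ); adding a server at the bottleneck shrinks the wait sharply, which is the lever the framework exploits.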

  12. Three-dimensional kinetic simulations of whistler turbulence in solar wind on parallel supercomputers

    Science.gov (United States)

    Chang, Ouliang

    The objective of this dissertation is to study the physics of whistler turbulence evolution and its role in energy transport and dissipation in the solar wind plasmas through computational and theoretical investigations. This dissertation presents the first fully three-dimensional (3D) particle-in-cell (PIC) simulations of whistler turbulence forward cascade in a homogeneous, collisionless plasma with a uniform background magnetic field B₀, and the first 3D PIC simulation of whistler turbulence with both forward and inverse cascades. Such computationally demanding research is made possible through the use of massively parallel, high performance electromagnetic PIC simulations on state-of-the-art supercomputers. Simulations are carried out to study characteristic properties of whistler turbulence under variable solar wind fluctuation amplitude (εe) and electron beta (βe), relative contributions to energy dissipation and electron heating in whistler turbulence from the quasilinear scenario and the intermittency scenario, and whistler turbulence preferential cascading direction and wavevector anisotropy. The 3D simulations of whistler turbulence exhibit a forward cascade of fluctuations into a broadband, anisotropic, turbulent spectrum at shorter wavelengths with wavevectors preferentially quasi-perpendicular to B₀. The overall electron heating yields T∥ > T⊥ for all εe and βe values, indicating the primary linear wave-particle interaction is Landau damping. But linear wave-particle interactions play a minor role in shaping the wavevector spectrum, whereas nonlinear wave-wave interactions are overall stronger and faster processes, and ultimately determine the wavevector anisotropy. Simulated magnetic energy spectra as a function of wavenumber show a spectral break to steeper slopes, which scales as k⊥λe ≃ 1 independent of βe values, where λe is the electron inertial length, qualitatively similar to solar wind observations. Specific

  13. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race
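
The barrier concept the book covers can be shown with Python threads (the book itself uses Fortran-based constructs; this is an analogous sketch): no thread reads the shared partial results until every thread has written its own.

```python
import threading

def parallel_sum_squares(n_threads=4):
    """Each thread writes its partial result, then waits at a barrier;
    only after every thread has arrived does any thread read the
    shared list, so all reads see all writes."""
    partials = [0] * n_threads
    totals = [0] * n_threads
    barrier = threading.Barrier(n_threads)

    def worker(i):
        partials[i] = i * i          # phase 1: produce a partial value
        barrier.wait()               # rendezvous: all partials now written
        totals[i] = sum(partials)    # phase 2: safe to read everything

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return totals
```

Without the barrier, a fast thread could sum the list before slower threads had stored their partials, a classic race of the kind the book's later chapters discuss.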

  14. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  15. Development of a three-dimensional groundwater flow model for Western Melton Valley: Application of P-FEM on a DOE waste site

    International Nuclear Information System (INIS)

    West, O.R.; Toran, L.E.

    1994-04-01

    Modeling the movement of hazardous waste in groundwater was identified by the US Department of Energy (DOE) as one of the grand challenges in scientific computation. In recognition of this need, DOE has provided support for a group of scientists from several national laboratories and universities to conduct research and development in groundwater flow and contaminant transport modeling. This group is part of a larger consortium of researchers, collectively referred to as the Partnership in Computational Science (PICS), that has been charged with the task of applying high-performance computational tools and techniques to grand challenge areas identified by DOE. One of the goals of the PICS Groundwater Group is to develop a new three-dimensional groundwater flow and transport code that is optimized for massively parallel computers. An existing groundwater flow code, 3DFEMWATER, was parallelized in order to serve as a benchmark for these new models. The application of P-FEM, the parallelized version of 3DFEMWATER, to a real field site is the subject of this report
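
A minimal stand-in for the kind of model involved: 3DFEMWATER/P-FEM is a 3D finite element code, but the governing steady-state flow equation can be illustrated with a small 2D finite-difference relaxation for a homogeneous aquifer with fixed-head left/right boundaries and no-flow top/bottom (a toy sketch, not the project's code):

```python
import numpy as np

def steady_head(h_left, h_right, n=20, iters=5000):
    """Steady-state hydraulic head in a homogeneous 2D confined aquifer:
    fixed heads on the left/right boundaries, no-flow top/bottom,
    solved by Jacobi relaxation of the Laplace equation."""
    h = np.full((n, n), (h_left + h_right) / 2.0)
    h[:, 0], h[:, -1] = h_left, h_right            # fixed-head boundaries
    for _ in range(iters):
        h[0, :] = h[1, :]                          # no-flow: zero gradient
        h[-1, :] = h[-2, :]
        h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                                + h[1:-1, :-2] + h[1:-1, 2:])
    return h
```

The converged head varies linearly between the fixed boundaries, as expected for this homogeneous case; the Jacobi sweep is also the kind of update that parallelizes naturally by domain decomposition.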

  16. Development of a three-dimensional groundwater flow model for Western Melton Valley: Application of P-FEM on a DOE waste site

    Energy Technology Data Exchange (ETDEWEB)

    West, O.R.; Toran, L.E.

    1994-04-01

    Modeling the movement of hazardous waste in groundwater was identified by the US Department of Energy (DOE) as one of the grand challenges in scientific computation. In recognition of this need, DOE has provided support for a group of scientists from several national laboratories and universities to conduct research and development in groundwater flow and contaminant transport modeling. This group is part of a larger consortium of researchers, collectively referred to as the Partnership in Computational Science (PICS), that has been charged with the task of applying high-performance computational tools and techniques to grand challenge areas identified by DOE. One of the goals of the PICS Groundwater Group is to develop a new three-dimensional groundwater flow and transport code that is optimized for massively parallel computers. An existing groundwater flow code, 3DFEMWATER, was parallelized in order to serve as a benchmark for these new models. The application of P-FEM, the parallelized version of 3DFEMWATER, to a real field site is the subject of this report.

  17. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  18. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuations...

  19. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
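
The block decomposition the paper compares can be illustrated serially: a base sieve produces the primes up to √N, and each block of the interval is then sieved independently, which is exactly the unit of work a processor would own in the parallel version (a sketch, not the hypercube implementation):

```python
import math

def base_sieve(limit):
    """Plain Sieve of Eratosthenes, used for the base primes up to sqrt(N)."""
    mark = bytearray([1]) * (limit + 1)
    mark[0:2] = b'\x00\x00'
    for p in range(2, math.isqrt(limit) + 1):
        if mark[p]:
            mark[p * p::p] = bytearray(len(mark[p * p::p]))
    return [i for i, m in enumerate(mark) if m]

def sieve_block(lo, hi, base_primes):
    """Sieve one block [lo, hi); blocks are independent of one another."""
    mark = bytearray([1]) * (hi - lo)
    for p in base_primes:
        start = max(p * p, (lo + p - 1) // p * p)   # first multiple >= lo
        for q in range(start, hi, p):
            mark[q - lo] = 0
    return [lo + i for i, m in enumerate(mark) if m and lo + i > 1]

def parallel_style_sieve(n, nblocks=4):
    """Combine per-block results; since blocks share no state, they could
    be sieved concurrently on separate processors."""
    base = base_sieve(math.isqrt(n))
    size = (n + nblocks) // nblocks
    out = []
    for b in range(nblocks):
        lo, hi = b * size, min((b + 1) * size, n + 1)
        out.extend(sieve_block(lo, hi, base))
    return out
```

Only the small base-prime list must be shared among processors, which is why block (and scattered) decompositions of the sieve scale well on ensemble machines.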

  20. Simultaneous voltammetric determination of dopamine and ascorbic acid using multivariate calibration methodology performed on a carbon paste electrode modified by a mer-[RuCl3(dppb)(4-pic)] complex

    International Nuclear Information System (INIS)

    Santos, Poliana M.; Sandrino, Bianca; Moreira, Tiago F.; Wohnrath, Karen; Nagata, Noemi; Pessoa, Christiana A.

    2007-01-01

    The preparation and electrochemical characterization of a carbon paste electrode (CPE) modified with mer-[RuCl₃(dppb)(4-pic)] (dppb = Ph₂P(CH₂)₄PPh₂, 4-pic = CH₃C₅H₄N), referred to as Rupic, were investigated. The CPE/Rupic system displayed only one pair of redox peaks, with a midpoint potential at 0.28 V vs. Ag/AgCl, ascribed to Ru(III)/Ru(II) charge transfer. This modified electrode presented the property of electrocatalysing the oxidation of dopamine (DA) and ascorbic acid (AA) at 0.35 V and 0.30 V vs. Ag/AgCl, respectively. Because the oxidation of both AA and DA occurred at practically the same potential, distinguishing between them by cyclic voltammetry was difficult. This limitation was overcome using Partial Least Squares Regression (PLSR), which allowed us, with the optimised models, to determine four synthetic samples with prediction errors (RMSEP) of 5.55 × 10⁻⁵ mol L⁻¹ and 7.48 × 10⁻⁶ mol L⁻¹ for DA and AA, respectively. (author)

  1. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  2. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-11-12

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer composed of compute nodes that execute a parallel application, each compute node including application processors that execute the parallel application and at least one management processor dedicated to gathering information regarding data communications. The PAMI is composed of data communications endpoints, each endpoint composed of a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources. Embodiments function by gathering call site statistics describing data communications resulting from execution of data communications instructions and identifying in dependence upon the call site statistics a data communications algorithm for use in executing a data communications instruction at a call site in the parallel application.

  3. Cloud feedback studies with a physics grid

    Energy Technology Data Exchange (ETDEWEB)

    Dipankar, Anurag [Max Planck Institute for Meteorology Hamburg; Stevens, Bjorn [Max Planck Institute for Meteorology Hamburg

    2013-02-07

    During this project the investigators implemented a fully parallel version of the dual-grid approach in the main-frame code ICON, implemented a fully conservative first-order interpolation scheme for horizontal remapping, integrated the UCLA-LES micro-scale model into ICON to run in parallel in selected columns, and performed cloud feedback studies in an aqua-planet setup to evaluate the classical parameterization on a small domain. The micro-scale model may be run in parallel with the classical parameterization, or it may be run on a "physics grid" independent of the dynamics grid.

  4. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of... in the optimal O(sortP(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sortP(N) is the parallel I/O complexity of sorting N elements using P processors.

  5. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves it from the parallel computer on which it was created to another parallel computer. ("Parallel computer" here also includes heterogeneous collections of networked computers.) Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  6. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
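
One of the named patterns, the prefix scan, can be sketched directly. Each sweep below is a fully data-parallel element-wise operation, and ⌈log₂ n⌉ sweeps suffice (Hillis-Steele style; a generic illustration, not taken from the presentation):

```python
def inclusive_scan(xs):
    """Hillis-Steele inclusive prefix sum: ceil(log2(n)) sweeps, each of
    which is a fully data-parallel element-wise operation."""
    out = list(xs)
    d = 1
    while d < len(out):
        # every element i >= d adds in the value from distance d to its left
        out = [out[i] + (out[i - d] if i >= d else 0) for i in range(len(out))]
        d *= 2
    return out
```

On a parallel machine each list comprehension becomes one vector step across all processors, which is why scans appear so often as building blocks.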

  7. Development of an operational neutron spectrometry system dedicated to the characterization of the natural atmospheric radiative environment, implemented at the Pic du Midi

    International Nuclear Information System (INIS)

    Cheminet, Adrien

    2013-01-01

    This PhD thesis was carried out as a joint effort between two French organizations, the French Institute for Radiological Protection and Nuclear Safety (IRSN/LMDN, Cadarache) and the French Aerospace Lab (ONERA/DESP, Toulouse). The aim was to develop an operational neutron spectrometer extended to high energies in order to measure the dynamics of the spectral variations of the natural radiative environment at the summit of the Pic du Midi Observatory in the French Pyrenees. First, the fluence responses of each detector were calculated using Monte Carlo simulations. They were then validated through experimental campaigns up to high energies (≥20 MeV) in reference neutron fields. The systematic uncertainties were deduced from detailed studies of the mathematical reconstruction of the spectra (the unfolding procedure). The system was then tested under rock at the LSBB of Rustrel before being installed at +500 m and +1000 m above sea level for the first environmental campaigns. Finally, the spectrometer has been operating for two years since its deployment at the summit of the Pic du Midi (+2885 m). The continuous data were analysed with an innovative method. Seasonal and spectral variations were observed, and Forbush decreases were recorded after strong solar flares. These data were further analysed using Monte Carlo simulations and exploited in several practical applications, such as personnel dosimetry and the reliability of submicron electronic components. (author)

  8. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  9. The 3D Pelvic Inclination Correction System (PICS): A universally applicable coordinate system for isovolumetric imaging measurements, tested in women with pelvic organ prolapse (POP).

    Science.gov (United States)

    Reiner, Caecilia S; Williamson, Tom; Winklehner, Thomas; Lisse, Sean; Fink, Daniel; DeLancey, John O L; Betschart, Cornelia

    2017-07-01

    In pelvic organ prolapse (POP), the organs are pushed downward along the lines of gravity, so measurements along this longitudinal body axis are desirable. We propose a universally applicable 3D coordinate system that corrects for changes in pelvic inclination and that allows the localization of any point in the pelvis, at rest or under dynamic conditions, on magnetic resonance images (MRI) of pelvic floor disorders in a scanner- and software-independent manner. The proposed 3D coordinate system, called the 3D Pelvic Inclination Correction System (PICS), is constructed from four bony landmark points, with the origin set at the inferior pubic point and three additional points at the sacrum (sacrococcygeal joint) and both ischial spines, all clearly visible on MRI. The feasibility and applicability of the moving frame were evaluated using MRI datasets from five women with pelvic organ prolapse, three undergoing static MRI and two undergoing dynamic MRI of the pelvic floor in a supine position. The construction of the coordinate system was performed utilizing the selected landmarks, with an initial implementation completed in MATLAB. In all cases the selected landmarks were clearly visible, and the construction of the 3D PICS and measurement of pelvic organ positions were performed without difficulty. The resulting distance from the organ position to the horizontal PICS plane was compared to a traditional measure based on standard measurements in 2D slices. The two approaches demonstrated good agreement in each of the cases. The developed approach makes quantitative assessment of pelvic organ position in a physiologically relevant 3D coordinate system possible, independent of pelvic movement relative to the scanner. It allows the accurate study of the physiologic range of organ location along the body axis ("up or down") as well as defects of the pelvic sidewall or birth-related pelvic floor injuries outside the midsagittal plane, not possible before in a 2D
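
The construction can be sketched as building an orthonormal frame from the four landmarks and expressing an organ point in it. The axis conventions below are a plausible guess for illustration, not necessarily the paper's exact definition:

```python
import numpy as np

def pics_coords(point, pubis, sacrum, spine_left, spine_right):
    """Express `point` in a landmark-based pelvic frame: origin at the
    inferior pubic point, x toward the sacrococcygeal joint, z normal
    to the plane spanned by x and the inter-spine (lateral) direction.
    Axis conventions here are illustrative assumptions."""
    x = sacrum - pubis
    x = x / np.linalg.norm(x)
    lateral = spine_right - spine_left
    z = np.cross(x, lateral)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)                 # completes the right-handed frame
    rel = point - pubis
    return np.array([rel @ x, rel @ y, rel @ z])
```

Because the frame is rebuilt from the landmarks in every image, the organ coordinates are unaffected by how the pelvis is tilted relative to the scanner, which is the point of the inclination correction.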

  10. Introducing a distributed unstructured mesh into gyrokinetic particle-in-cell code, XGC

    Science.gov (United States)

    Yoon, Eisung; Shephard, Mark; Seol, E. Seegyoung; Kalyanaraman, Kaushik

    2017-10-01

    XGC has shown good scalability for large leadership supercomputers. The current production version uses a copy of the entire unstructured finite element mesh on every MPI rank. Although an obvious scalability issue if the mesh sizes are to be dramatically increased, the current approach is also not optimal with respect to data locality of particles and mesh information. To address these issues we have initiated the development of a distributed mesh PIC method. This approach directly addresses the base scalability issue with respect to mesh size and, through the use of a mesh entity centric view of the particle mesh relationship, provides opportunities to address data locality needs of many core and GPU supported heterogeneous systems. The parallel mesh PIC capabilities are being built on the Parallel Unstructured Mesh Infrastructure (PUMI). The presentation will first overview the form of mesh distribution used and indicate the structures and functions used to support the mesh, the particles and their interaction. Attention will then focus on the node-level optimizations being carried out to ensure performant operation of all PIC operations on the distributed mesh. Partnership for Edge Physics Simulation (EPSI) Grant No. DE-SC0008449 and Center for Extended Magnetohydrodynamic Modeling (CEMM) Grant No. DE-SC0006618.

  11. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
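
The parallel peak detection idea can be sketched with a vectorized (element-wise, GPU-style) comparison over the whole trace; this is a generic illustration, not the toolbox's EC-PC or compact-operation code:

```python
import numpy as np

def detect_peaks(trace, threshold):
    """Vectorized peak detection: a sample is a peak if it exceeds the
    threshold and both of its neighbours. All comparisons are
    element-wise over the whole trace, as they would be on a GPU."""
    mid = trace[1:-1]
    is_peak = (mid > threshold) & (mid > trace[:-2]) & (mid > trace[2:])
    return np.flatnonzero(is_peak) + 1   # +1 restores the trimmed offset
```

Every sample is tested simultaneously rather than in a serial loop, which is the structure that maps onto thousands of GPU threads across channels.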

  12. A possibility of parallel and anti-parallel diffraction measurements on ...

    Indian Academy of Sciences (India)

    However, a bent perfect crystal (BPC) monochromator at the monochromatic focusing condition can provide quite flat and equal resolution at both the parallel and anti-parallel positions, so both sides can be used for the diffraction experiment. From the data of the FWHM and the / measured ...

  13. Authenticity and utopia: the distribution of the sensible in Pedro Henríquez Ureña, Alfonso Reyes, and Mariano Picón Salas

    Directory of Open Access Journals (Sweden)

    Ioannis Antzus Ramos

    2017-07-01

    Full Text Available In this article, I study the cultural and aesthetic thought of Pedro Henríquez Ureña, Alfonso Reyes, and Mariano Picón Salas. In order to do this, I take into consideration the essays that the three writers published in the 1920s and 1930s, and I underline the similarities among their ideas. The analysis of their cultural and aesthetic thought shows that the three intellectuals proposed a similar order of the world. The pro-Western utopia that they longed for is marked by the notions of consensus and authenticity and has political implications.

  14. Study of turbulence of Lower Hybrid Drift Instability origin with the Multi Level Multi Domain semi-implicit adaptive PIC method

    Science.gov (United States)

    Innocenti, Maria Elena; Beck, Arnaud; Markidis, Stefano; Lapenta, Giovanni

    2015-04-01

    We study turbulence generated by the Lower Hybrid Drift Instability (LHDI [1]) in the terrestrial magnetosphere. The problem is of interest not only per se, but also for the implications it can have for the so-called turbulent reconnection. The LHDI evolution is simulated with the PIC Multi Level Multi Domain code Parsek2D-MLMD [2,3], which simulates different parts of the domain with different spatial and temporal resolutions. This makes it possible to satisfy, at low computing cost, the two necessary requirements for LHDI turbulence simulations: 1) a large domain, to capture the long-wavelength branch of the LHDI and of the secondary kink instability, and 2) high resolution, to cover the high-wavenumber part of the power spectrum and to capture the wavenumber at which the turbulent cascade ends. The turbulent cascade proceeds seamlessly from the coarse (low resolution) to the refined (high resolution) grid, the only one resolved enough to capture its end, which is studied here and related to wave-particle interaction processes. We also comment upon the role of smoothing (a common technique used in PIC simulations to reduce particle noise, [4]) in simulations of turbulence and on how its effects on power spectra may be easily mistaken, in the absence of accurate convergence studies, for the end of the inertial range. [1] P. Gary, Theory of space plasma microinstabilities, Cambridge Atmospheric and Space Science Series, 2005. [2] M. E. Innocenti, G. Lapenta, S. Markidis, A. Beck, A. Vapirev, Journal of Computational Physics 238 (2013) 115 - 140. [3] M. E. Innocenti, A. Beck, T. Ponweiser, S. Markidis, G. Lapenta, Computer Physics Communications (accepted) (2014). [4] C. K. Birdsall, A. B. Langdon, Plasma physics via computer simulation, Taylor and Francis, 2004.

  15. Kinetic Alfven waves and electron physics. II. Oblique slow shocks

    International Nuclear Information System (INIS)

    Yin, L.; Winske, D.; Daughton, W.

    2007-01-01

    One-dimensional (1D) particle-in-cell (PIC; kinetic ions and electrons) and hybrid (kinetic ions; adiabatic and massless fluid electrons) simulations of highly oblique slow shocks (θ Bn =84 deg. and β=0.1) [Yin et al., J. Geophys. Res., 110, A09217 (2005)] have shown that the dissipation from the ions is too weak to form a shock and that kinetic electron physics is required. The PIC simulations also showed that the downstream electron temperature becomes anisotropic (T e,parallel > T e,perpendicular), as observed in slow shocks in space. The electron anisotropy results, in part, from the electron acceleration/heating by parallel electric fields of obliquely propagating kinetic Alfven waves (KAWs) excited by ion-ion streaming, which cannot be modeled accurately in hybrid simulations. In the shock ramp, spiky structures occur in density and electron parallel temperature, where the ion parallel temperature decreases due to the reduction of the ion backstreaming speed. In this paper, KAW and electron physics in oblique slow shocks are further examined under lower electron beta conditions. It is found that as the electron beta is reduced, the resonant interaction between electrons and the wave parallel electric fields shifts to the tail of the electron velocity distribution, providing more efficient parallel heating. As a consequence, for β e =0.02, the electron physics is shown to influence the formation of a θ Bn =75 deg. shock. Electron effects are further enhanced at a more oblique shock angle (θ Bn =84 deg.) when both the growth rate and the range of unstable modes on the KAW branch increase. Small-scale electron and ion phase-space vortices in the shock ramp formed by electron-KAW interactions and the reduction of the ion backstreaming speed, respectively, are observed in the simulations and confirmed in homogeneous geometries in one and two spatial dimensions in the accompanying paper [Yin et al., Phys. Plasmas 14, 062104 (2007)]. Results from this study

  16. Parallel implementation of the PHOENIX generalized stellar atmosphere program. II. Wavelength parallelization

    International Nuclear Information System (INIS)

    Baron, E.; Hauschildt, Peter H.

    1998-01-01

    We describe an important addition to the parallel implementation of our generalized nonlocal thermodynamic equilibrium (NLTE) stellar atmosphere and radiative transfer computer program PHOENIX. In a previous paper in this series we described data and task parallel algorithms we have developed for radiative transfer, spectral line opacity, and NLTE opacity and rate calculations. These algorithms divided the work spatially or by spectral lines, that is, distributing the radial zones, individual spectral lines, or characteristic rays among different processors and employ, in addition, task parallelism for logically independent functions (such as atomic and molecular line opacities). For finite, monotonic velocity fields, the radiative transfer equation is an initial value problem in wavelength, and hence each wavelength point depends upon the previous one. However, for sophisticated NLTE models of both static and moving atmospheres needed to accurately describe, e.g., novae and supernovae, the number of wavelength points is very large (200,000 - 300,000) and hence parallelization over wavelength can lead both to considerable speedup in calculation time and the ability to make use of the aggregate memory available on massively parallel supercomputers. Here, we describe an implementation of a pipelined design for the wavelength parallelization of PHOENIX, where the necessary data from the processor working on a previous wavelength point is sent to the processor working on the succeeding wavelength point as soon as it is known. Our implementation uses a MIMD design based on a relatively small number of standard message passing interface (MPI) library calls and is fully portable between serial and parallel computers. copyright 1998 The American Astronomical Society

  17. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
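    The seed-selection rule this record parallelizes is well defined: the first seed is chosen uniformly, and each subsequent seed is drawn with probability proportional to the squared distance to its nearest already-chosen seed. A serial reference sketch is below (plain Python, illustrative names, not the record's C++/CUDA/OpenMP/XMT code); the distance-refresh loop over all points is the embarrassingly parallel step that the three hardware backends distribute.

```python
import random

def _d2(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans_pp_seeds(points, k, seed=0):
    """k-means++ seeding (Arthur & Vassilvitskii, 2007), serial reference."""
    rng = random.Random(seed)
    seeds = [rng.choice(points)]                  # first seed: uniform
    d2 = [_d2(p, seeds[0]) for p in points]       # distance to nearest seed
    while len(seeds) < k:
        # Weighted draw: pick index i with probability d2[i] / sum(d2).
        r = rng.random() * sum(d2)
        acc, i = 0.0, 0
        for i, w in enumerate(d2):
            acc += w
            if acc >= r:
                break
        seeds.append(points[i])
        # Parallelizable step: refresh each point's nearest-seed distance.
        d2 = [min(d2[j], _d2(p, points[i])) for j, p in enumerate(points)]
    return seeds
```

    On a GPU the refresh is one thread per point; the weighted draw itself reduces to a prefix sum over d2 followed by a single binary search.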

  18. Validation of the UCLA Child Post traumatic stress disorder-reaction index in Zambia

    Directory of Open Access Journals (Sweden)

    Cohen Judith A

    2011-09-01

    Full Text Available Abstract Background Sexual violence against children is a major global health and human rights problem. In order to address this issue there needs to be a better understanding of the issue and the consequences. One major challenge in accomplishing this goal has been a lack of validated child mental health assessments in low-resource countries where the prevalence of sexual violence is high. This paper presents results from a validation study of a trauma-focused mental health assessment tool - the UCLA Post-traumatic Stress Disorder - Reaction Index (PTSD-RI in Zambia. Methods The PTSD-RI was adapted through the addition of locally relevant items and validated using local responses to three cross-cultural criterion validity questions. Reliability of the symptoms scale was assessed using Cronbach alpha analyses. Discriminant validity was assessed comparing mean scale scores of cases and non-cases. Concurrent validity was assessed comparing mean scale scores to a traumatic experience index. Sensitivity and specificity analyses were run using receiver operating curves. Results Analysis of data from 352 youth attending a clinic specializing in sexual abuse showed that this adapted PTSD-RI demonstrated good reliability, with Cronbach alpha scores greater than .90 on all the evaluated scales. The symptom scales were able to statistically significantly discriminate between locally identified cases and non-cases, and higher symptom scale scores were associated with increased numbers of trauma exposures which is an indication of concurrent validity. Sensitivity and specificity analyses resulted in an adequate area under the curve, indicating that this tool was appropriate for case definition. Conclusions This study has shown that validating mental health assessment tools in a low-resource country is feasible, and that by taking the time to adapt a measure to the local context, a useful and valid Zambian version of the PTSD-RI was developed to detect

  19. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. We show how to recognize potential failure modes and their associated artefacts. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)

  20. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    Full Text Available To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  1. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.

  2. Influence of Paralleling Dies and Paralleling Half-Bridges on Transient Current Distribution in Multichip Power Modules

    DEFF Research Database (Denmark)

    Li, Helong; Zhou, Wei; Wang, Xiongfei

    2018-01-01

    This paper addresses the transient current distribution in the multichip half-bridge power modules, where two types of paralleling connections with different current commutation mechanisms are considered: paralleling dies and paralleling half-bridges. It reveals that with paralleling dies, both t...

  3. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing for a high-level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  4. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  5. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-29

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by an instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.
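    The patent abstract above packs its data model into one sentence: an endpoint is a (client, context, task) triple, and a transfer is executed according to the instruction's type. The schematic sketch below only mirrors that vocabulary to make the structure visible; it is NOT IBM's actual PAMI API, and the two instruction types shown are illustrative stand-ins.

```python
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    client: str           # the PAMI client the endpoint belongs to
    context: int          # context: a slice of communication resources
    task: int             # task: rank of the thread of execution on its node
    inbox: list = field(default_factory=list)

def transmit(origin: Endpoint, target: Endpoint, instruction_type: str, data):
    """Move transfer data from origin to target per the instruction type."""
    if instruction_type == "eager":         # small payload: ship immediately
        target.inbox.append((origin.task, data))
    elif instruction_type == "rendezvous":  # large payload: handshake first
        target.inbox.append((origin.task, "request-to-send"))
        target.inbox.append((origin.task, data))
    else:
        raise ValueError(f"unknown instruction type: {instruction_type!r}")
```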

  6. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  7. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  8. Comparison of different Maxwell solvers coupled to a PIC resolution method of Maxwell-Vlasov equations; Evaluation de differents solveurs Maxwell pour la resolution de Maxwell-Vlasov par une methode PIC

    Energy Technology Data Exchange (ETDEWEB)

    Fochesato, Ch. [CEA Bruyeres-le-Chatel, Dept. de Conception et Simulation des Armes, Service Simulation des Amorces, Lab. Logiciels de Simulation, 91 (France); Bouche, D. [CEA Bruyeres-le-Chatel, Dept. de Physique Theorique et Appliquee, Lab. de Recherche Conventionne, Centre de Mathematiques et Leurs Applications, 91 (France)

    2007-07-01

    The numerical solution of Maxwell equations is a challenging task. Moreover, the range of applications is very wide: microwave devices, diffraction, to cite a few. As a result, a number of methods have been proposed since the sixties. However, among all these methods, none has proved to be free of drawbacks. The finite difference scheme proposed by Yee in 1966 is well suited for Maxwell equations. However, it only works on cubical meshes. As a result, the boundaries of complex objects are not properly handled by the scheme. When classical nodal finite elements are used, spurious modes appear, which spoil the results of simulations. Edge elements overcome this problem, at the price of rather complex implementation, and computationally intensive simulations. Finite volume methods, either generalizing the Yee scheme to a wider class of meshes, or applying to Maxwell equations methods initially used in the field of hyperbolic systems of conservation laws, are also used. Lastly, 'Discontinuous Galerkin' methods, which generalize finite volume methods to arbitrary order of accuracy, have recently been applied to Maxwell equations. In this report, we more specifically focus on the coupling of a Maxwell solver to a PIC (Particle-in-cell) method. We analyze advantages and drawbacks of the most widely used methods: accuracy, robustness, sensitivity to numerical artefacts, efficiency, user judgment. (authors)
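    The Yee scheme named in this abstract is compact enough to sketch in one dimension: the electric and magnetic fields live on staggered grid points and leapfrog in time. The sketch below is a minimal, assumption-laden illustration (normalized units, Courant number 1, reflecting boundaries rather than absorbing layers, hypothetical function name), not the report's solvers.

```python
import math

def fdtd_1d(n=200, steps=200):
    """Minimal 1D Yee/FDTD sketch for Maxwell's curl equations in vacuum.

    Ez and Hy sit on interleaved (staggered) grid points and are updated in
    a leapfrog fashion; each pass over i is independent, hence parallel.
    """
    ez = [0.0] * n
    hy = [0.0] * n
    for t in range(steps):
        for i in range(n - 1):            # H update from the curl of E
            hy[i] += ez[i + 1] - ez[i]
        for i in range(1, n):             # E update from the curl of H
            ez[i] += hy[i] - hy[i - 1]
        # Soft Gaussian source injected at the center of the grid.
        ez[n // 2] += math.exp(-((t - 30.0) ** 2) / 100.0)
    return ez
```

    Coupling such a solver to a PIC method, as the report discusses, amounts to adding a current term gathered from the particles into the E update and interpolating the fields back to the particle positions.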

  9. Vectorization, parallelization and porting of nuclear codes (vectorization and parallelization). Progress report fiscal 1998

    International Nuclear Information System (INIS)

    Ishizuki, Shigeru; Kawai, Wataru; Nemoto, Toshiyuki; Ogasawara, Shinobu; Kume, Etsuo; Adachi, Masaaki; Kawasaki, Nobuo; Yatake, Yo-ichi

    2000-03-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system and the Paragon system at the Center for Promotion of Computational Science and Engineering in the Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. These results are reported in 3 parts, i.e., the vectorization and parallelization on vector processors part, the parallelization on scalar processors part and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this vectorization and parallelization on vector processors part, the vectorization of General Tokamak Circuit Simulation Program code GTCSP, the vectorization and parallelization of Molecular Dynamics NTV (n-particle, Temperature and Velocity) Simulation code MSP2, Eddy Current Analysis code EDDYCAL, Thermal Analysis Code for Test of Passive Cooling System by HENDEL T2 code THANPACST2 and MHD Equilibrium code SELENEJ on the VPP500 are described. In the parallelization on scalar processors part, the parallelization of Monte Carlo N-Particle Transport code MCNP4B2, Plasma Hydrodynamics code using Cubic Interpolated Propagation Method PHCIP and Vectorized Monte Carlo code (continuous energy model / multi-group model) MVP/GMVP on the Paragon are described. In the porting part, the porting of Monte Carlo N-Particle Transport code MCNP4B2 and Reactor Safety Analysis code RELAP5 on the AP3000 are described. (author)

  10. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts....
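    The list-ranking problem this record builds on has a classic parallel formulation, pointer jumping: each round, every node adds its successor's rank to its own and shortcuts its pointer past that successor, so the list halves in "depth" every round. The sketch below is a plain-Python illustration of that synchronous structure (not the paper's PEM algorithm, which replaces pointer jumping with I/O-efficient independent-set contraction):

```python
def list_rank(succ):
    """List ranking by pointer jumping (O(log n) synchronous rounds).

    succ[i] is the successor of node i; succ[i] == i marks the list tail.
    Returns rank[i], the number of links from i to the tail. Every update
    in a round reads only the previous round's arrays, so all n updates
    of a round are independent and can run in parallel.
    """
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    nxt = list(succ)
    while any(nxt[i] != nxt[nxt[i]] for i in range(n)):
        # Synchronous round: new arrays are built from the old ones only.
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank
```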

  11. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Interactions between parallel channels are examined. For experimental studies of nonstationary flow regimes in three parallel vertical channels, results of the phenomenological analysis and of the parallel-channel interaction mechanisms are presented, for adiabatic conditions with single-phase fluid and two-phase mixture flow. (author)

  12. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  13. Applications of the ARGUS code in accelerator physics

    International Nuclear Information System (INIS)

    Petillo, J.J.; Mankofsky, A.; Krueger, W.A.; Kostas, C.; Mondelli, A.A.; Drobot, A.T.

    1993-01-01

    ARGUS is a three-dimensional, electromagnetic, particle-in-cell (PIC) simulation code that is being distributed to U.S. accelerator laboratories in a collaboration between SAIC and the Los Alamos Accelerator Code Group. It uses a modular architecture that allows multiple physics modules to share common utilities for grid and structure input, memory management, disk I/O, and diagnostics. Physics modules are in place for electrostatic and electromagnetic field solutions, frequency-domain (eigenvalue) solutions, time-dependent PIC, and steady-state PIC simulations. All of the modules are implemented with a domain-decomposition architecture that allows large problems to be broken up into pieces that fit in core and that facilitates the adaptation of ARGUS for parallel processing. ARGUS operates on either Cray or workstation platforms, and a MOTIF-based user interface is available for X-windows terminals. Applications of ARGUS in accelerator physics and design are described in this paper

  14. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    ...adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...

  15. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers, both with shared memory and with distributed memory, are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport in light of the architecture of parallel computers, using a divide-and-conquer strategy, adjusting the algorithm structure of the program, breaking data dependences, finding parallelizable components and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various high-performance parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup has been obtained

  16. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of the matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up a calculation process that usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and implemented on the GPU (graphics processing unit).
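    The reason partitioning parallelizes matrix-vector multiplication is that the output rows of different partitions are disjoint, so partitions can be computed concurrently; the partitioner's job is to balance the nonzeros per partition while minimizing the x-entries shared between partitions. The CSR sketch below shows only that structural point (names and the row-wise split are illustrative, not the paper's hypergraph method or CUDA kernels):

```python
def spmv_rows(indptr, indices, data, x, rows):
    """Partial SpMV: y[i] for each row i of one partition of a CSR matrix."""
    return {i: sum(data[k] * x[indices[k]]
                   for k in range(indptr[i], indptr[i + 1]))
            for i in rows}

def spmv(indptr, indices, data, x, partitions):
    """Assemble y = A @ x from independently computed row partitions."""
    y = [0.0] * (len(indptr) - 1)
    for rows in partitions:           # each iteration could run on a worker
        for i, v in spmv_rows(indptr, indices, data, x, rows).items():
            y[i] = v
    return y
```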

  17. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  18. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and it should be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
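    The execution model described above (algorithms that run in cycles, work split among processing units, master-slave organization) can be sketched as a skeleton. In the sketch below a thread pool stands in for the framework's message-passing slaves, and all names are illustrative, not the framework's API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_cycles(state, split, work, merge, n_workers=4, cycles=3):
    """Master-slave skeleton: every cycle the master splits the state among
    processing units, the slaves process their shares independently, and
    the master merges the results back into the next cycle's state."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(cycles):
            shares = split(state, n_workers)
            results = list(pool.map(work, shares))  # slaves run concurrently
            state = merge(results)
    return state
```

    An ACO instance of this skeleton would split the ant population among slaves each cycle and merge their pheromone updates; the SNF filter would split the image into regions.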

  19. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    Full Text Available There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with equational programming language (EPL. Our approach is based on a program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  20. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  1. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.
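
Smart load balancing of the kind mentioned above can be as simple as a greedy longest-processing-time heuristic; a minimal sketch (illustrative only, far simpler than the partitioners actually used in the paper):

```python
import heapq

def lpt_partition(costs, n_procs):
    """Longest-processing-time-first load balancing: assign each task
    (heaviest first) to the currently least-loaded processor.
    Returns (load, processor id, assigned task costs) per processor."""
    heap = [(0.0, p, []) for p in range(n_procs)]
    heapq.heapify(heap)
    for c in sorted(costs, reverse=True):
        load, p, tasks = heapq.heappop(heap)   # least-loaded processor
        heapq.heappush(heap, (load + c, p, tasks + [c]))
    return sorted(heap)

# Irregular task costs balanced across 2 processors: both end up with load 12.
parts = lpt_partition([7, 5, 4, 3, 3, 2], 2)
```

For truly dynamic irregular problems this static heuristic would be re-run (or replaced by diffusion-style rebalancing) as costs evolve.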

  2. The Glasgow Parallel Reduction Machine: Programming Shared-memory Many-core Systems using Parallel Task Composition

    Directory of Open Access Journals (Sweden)

    Ashkan Tousimojarad

    2013-12-01

    Full Text Available We present the Glasgow Parallel Reduction Machine (GPRM), a novel, flexible framework for parallel task-composition based many-core programming. We allow the programmer to structure programs into task code, written as C++ classes, and communication code, written in a restricted subset of C++ with functional semantics and parallel evaluation. In this paper we discuss the GPRM, the virtual machine framework that enables the parallel task composition approach. We focus the discussion on GPIR, the functional language used as the intermediate representation of the bytecode running on the GPRM. Using examples in this language we show the flexibility and power of our task composition framework. We demonstrate the potential using an implementation of a merge sort algorithm on a 64-core Tilera processor, as well as on a conventional Intel quad-core processor and an AMD 48-core processor system. We also compare our framework with OpenMP tasks in a parallel pointer chasing algorithm running on the Tilera processor. Our results show that the GPRM programs outperform the corresponding OpenMP codes on all test platforms, and can greatly facilitate the writing of parallel programs, in particular non-data-parallel algorithms such as reductions.
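
The task-composition idea behind the merge sort experiment can be sketched as follows (a hedged illustration using a Python thread pool in place of GPRM's task scheduler; the structure, not the performance, is the point):

```python
from concurrent.futures import ThreadPoolExecutor

def merge(a, b):
    """Sequential two-way merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def parallel_merge_sort(xs, pool, depth=2):
    """Task-composition merge sort: the two halves are independent tasks
    that the runtime may evaluate in parallel; 'depth' bounds how many
    tasks are spawned so the pool is never oversubscribed."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    if depth > 0:
        left = pool.submit(parallel_merge_sort, xs[:mid], pool, depth - 1)
        right = parallel_merge_sort(xs[mid:], pool, depth - 1)
        return merge(left.result(), right)
    return merge(parallel_merge_sort(xs[:mid], pool, 0),
                 parallel_merge_sort(xs[mid:], pool, 0))

with ThreadPoolExecutor(max_workers=4) as pool:
    result = parallel_merge_sort([5, 3, 8, 1, 9, 2, 7], pool)
```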

  3. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...

  4. Morphophysiology of corn plants in competition with picão-preto and trapoeraba submitted to clearings

    Directory of Open Access Journals (Sweden)

    J.P. Lemos

    2012-09-01

    Full Text Available The efficiency of using clearings to control picão-preto (Bidens pilosa) and trapoeraba (Commelina benghalensis) was evaluated based on the morphological and physiological characteristics of corn. The experiment was conducted under controlled greenhouse conditions in the 2009/2010 growing season. The physiological characteristics were obtained in a split-plot design, with four evaluations performed throughout the corn cycle: 1st - before the first clearing (V3); 2nd - after the first clearing (V6); 3rd - after the second clearing (V9); and 4th - corn plants at the flowering stage, by means of an infrared gas analyzer. Two clearings reduced the interference of the weeds B. pilosa and C. benghalensis on the morphological characteristics of corn. The clearings did not influence the physiological aspects of the corn plants in competition with the weeds. C. benghalensis caused greater interference in the physiological characteristics of corn, reducing photosynthesis and transpiration. When not cleared, B. pilosa showed a greater capacity to interfere with corn morphology.

  5. Development of a new dynamic turbulent model, applications to two-dimensional and plane parallel flows

    International Nuclear Information System (INIS)

    Laval, Jean Philippe

    1999-01-01

    We developed a turbulence model based on an asymptotic development of the Navier-Stokes equations under the hypothesis of non-local interactions at small scales. This model provides expressions for the turbulent Reynolds sub-grid stresses via estimates of the sub-grid velocities, rather than velocity correlations as is usually done. The model involves the coupling of two dynamical equations: one for the resolved scales of motion, which depends upon the Reynolds stresses generated by the sub-grid motions, and one for the sub-grid scales of motion, which can be used to compute the sub-grid Reynolds stresses. The non-locality of interactions at sub-grid scales makes it possible to model their evolution with a linear inhomogeneous equation, where the forcing occurs via the energy cascade from resolved to sub-grid scales. This model was solved using a decomposition of the sub-grid scales on Gabor modes and implemented numerically in 2D with periodic boundary conditions. A particle method (PIC) was used to compute the sub-grid scales. The results were compared with results of direct simulations for several typical flows. The model was also applied to plane parallel flows. An analytical study of the equations allows a description of mean velocity profiles in agreement with experimental results and with theoretical results based on the symmetries of the Navier-Stokes equation. Possible applications and improvements of the model are discussed in the conclusion. (author) [fr

  6. 3D PIC-MCC simulations of discharge inception around a sharp anode in nitrogen/oxygen mixtures

    Science.gov (United States)

    Teunissen, Jannis; Ebert, Ute

    2016-08-01

    We investigate how photoionization, electron avalanches and space charge affect the inception of nanosecond pulsed discharges. Simulations are performed with a 3D PIC-MCC (particle-in-cell, Monte Carlo collision) model with adaptive mesh refinement for the field solver. This model, whose source code is available online, is described in the first part of the paper. Then we present simulation results in a needle-to-plane geometry, using different nitrogen/oxygen mixtures at atmospheric pressure. In these mixtures non-local photoionization is important for the discharge growth. The typical length scale for this process depends on the oxygen concentration. With 0.2% oxygen the discharges grow quite irregularly, due to the limited supply of free electrons around them. With 2% or more oxygen the development is much smoother. An almost spherical ionized region can form around the electrode tip, which increases in size with the electrode voltage. Eventually this inception cloud destabilizes into streamer channels. In our simulations, discharge velocities are almost independent of the oxygen concentration. We discuss the physical mechanisms behind these phenomena and compare our simulations with experimental observations.
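
PIC-MCC codes typically implement the Monte Carlo collision step with the null-collision method; a schematic sketch under assumed, purely illustrative collision rates (not the authors' code, and not fitted cross sections):

```python
import math
import random

def null_collision_step(n_particles, nu_max, nu_real, dt, rng):
    """Null-collision MCC sketch: each particle 'collides' with probability
    P = 1 - exp(-nu_max * dt); of those, a fraction nu_real/nu_max undergo
    a real collision and the rest are null collisions (nothing happens).
    This keeps the per-particle cost independent of the local real rate."""
    p_any = 1.0 - math.exp(-nu_max * dt)
    real = null = 0
    for _ in range(n_particles):
        if rng.random() < p_any:
            if rng.random() < nu_real / nu_max:
                real += 1      # a real code would scatter/ionize here
            else:
                null += 1      # null collision: particle left unchanged
    return real, null

rng = random.Random(1)
real, null = null_collision_step(100_000, nu_max=2.0e9, nu_real=0.5e9,
                                 dt=1.0e-10, rng=rng)
```

In a full PIC-MCC cycle this step sits between the field solve and the particle push, with the collision type chosen from tabulated energy-dependent cross sections rather than a single constant rate.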

  7. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  8. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  9. Parallel transport of long mean-free-path plasma along open magnetic field lines: Parallel heat flux

    International Nuclear Information System (INIS)

    Guo Zehua; Tang Xianzhu

    2012-01-01

    In a long mean-free-path plasma where temperature anisotropy can be sustained, the parallel heat flux has two components with one associated with the parallel thermal energy and the other the perpendicular thermal energy. Due to the large deviation of the distribution function from local Maxwellian in an open field line plasma with low collisionality, the conventional perturbative calculation of the parallel heat flux closure in its local or non-local form is no longer applicable. Here, a non-perturbative calculation is presented for a collisionless plasma in a two-dimensional flux expander bounded by absorbing walls. Specifically, closures of previously unfamiliar form are obtained for ions and electrons, which relate two distinct components of the species parallel heat flux to the lower order fluid moments such as density, parallel flow, parallel and perpendicular temperatures, and the field quantities such as the magnetic field strength and the electrostatic potential. The plasma source and boundary condition at the absorbing wall enter explicitly in the closure calculation. Although the closure calculation does not take into account wave-particle interactions, the results based on passing orbits from steady-state collisionless drift-kinetic equation show remarkable agreement with fully kinetic-Maxwell simulations. As an example of the physical implications of the theory, the parallel heat flux closures are found to predict a surprising observation in the kinetic-Maxwell simulation of the 2D magnetic flux expander problem, where the parallel heat flux of the parallel thermal energy flows from low to high parallel temperature region.
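
The two components of the parallel heat flux discussed above are the standard parallel and perpendicular thermal-energy moments of the distribution function; in conventional notation (our sketch, not the paper's exact expressions):

```latex
q_{\parallel} = q_{\parallel\parallel} + q_{\parallel\perp}, \qquad
q_{\parallel\parallel} = \frac{m}{2}\int (v_\parallel - u_\parallel)^3\, f \,\mathrm{d}^3v, \qquad
q_{\parallel\perp} = \frac{m}{2}\int (v_\parallel - u_\parallel)\, v_\perp^2\, f \,\mathrm{d}^3v
```

Here $u_\parallel$ is the parallel flow velocity and $f$ the species distribution function; the closure problem is to express both moments in terms of the lower-order fluid quantities listed in the abstract.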

  10. Simultaneous voltammetric determination of dopamine and ascorbic acid using multivariate calibration methodology performed on a carbon paste electrode modified by a mer-[RuCl{sub 3}(dppb)(4-pic)] complex

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Poliana M.; Sandrino, Bianca; Moreira, Tiago F.; Wohnrath, Karen; Nagata, Noemi; Pessoa, Christiana A. [Universidade Estadual de Ponta Grossa, PR (Brazil). Dept. de Quimica]. E-mail: capessoa@uepg.br

    2007-07-01

    The preparation and electrochemical characterization of a carbon paste electrode (CPE) modified with mer-[RuCl{sub 3}(dppb)(4-pic)] (dppb=Ph{sub 2}P(CH{sub 2}){sub 4}PPh{sub 2}, 4-pic=CH{sub 3}C{sub 5}H{sub 4}N), referred to as Rupic, were investigated. The CPE/Rupic system displayed only one pair of redox peaks, with a midpoint potential at 0.28 V vs. Ag/AgCl, which were ascribed to Ru{sup III}/Ru{sup II} charge transfer. This modified electrode presented the property of electrocatalysing the oxidation of dopamine (DA) and ascorbic acid (AA) at 0.35 V and 0.30 V vs. Ag/AgCl, respectively. Because the oxidation of both AA and DA practically occurred at the same potential, distinguishing between them was difficult with cyclic voltammetry. This limitation was overcome using Partial Least Squares Regression (PLSR), which allowed us, with the optimised models, to determine four synthetic samples with prediction errors (RMSEP) of 5.55 x 10{sup -5} mol L{sup -1} and 7.48 x 10{sup -6} mol L{sup -1} for DA and AA, respectively. (author)
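
Multivariate calibration resolves the overlapping DA and AA signals by regressing mixture responses onto component behavior; a minimal classical-least-squares analogue of the idea (synthetic pure-component profiles, deliberately much simpler than the PLSR actually used in the paper):

```python
def cls_concentrations(mixture, pure_a, pure_b):
    """Classical least-squares unmixing for two analytes with overlapping
    responses: model mixture ~ c_a*pure_a + c_b*pure_b and solve the
    2x2 normal equations for the concentrations (c_a, c_b)."""
    saa = sum(a * a for a in pure_a)
    sbb = sum(b * b for b in pure_b)
    sab = sum(a * b for a, b in zip(pure_a, pure_b))
    sya = sum(y * a for y, a in zip(mixture, pure_a))
    syb = sum(y * b for y, b in zip(mixture, pure_b))
    det = saa * sbb - sab * sab
    return (sya * sbb - syb * sab) / det, (syb * saa - sya * sab) / det

pure_a = [0.0, 1.0, 2.0, 1.0, 0.0]   # unit-concentration response of analyte A
pure_b = [0.0, 0.5, 2.0, 2.0, 0.5]   # overlapping response of analyte B
mix = [2 * a + 3 * b for a, b in zip(pure_a, pure_b)]
c_a, c_b = cls_concentrations(mix, pure_a, pure_b)
```

PLSR goes further by building latent variables that are robust to noise and interferents, but the underlying goal, recovering concentrations from overlapping signals, is the same.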

  11. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
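
The per-cycle rendez-vous can be made concrete with a toy parallel Monte Carlo loop in which every worker must report its tally before the next cycle can start (threads and a pi estimate stand in for MPI ranks and fission-source statistics; the structure of the barrier, not the physics, is the point):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_cycles(n_cycles, n_workers, histories_per_worker, seed=0):
    """Each cycle, workers run their histories independently, then ALL
    must synchronize so the master can form the cycle estimate before
    the next cycle begins -- the rendez-vous discussed above."""
    estimates = []
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for cycle in range(n_cycles):
            def worker(wid, cycle=cycle):
                rng = random.Random(seed * 1_000_003 + cycle * 101 + wid)
                return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                           for _ in range(histories_per_worker))
            # pool.map blocks until every worker has reported: the barrier.
            hits = sum(pool.map(worker, range(n_workers)))
            estimates.append(4.0 * hits / (n_workers * histories_per_worker))
    return estimates

pi_per_cycle = run_cycles(n_cycles=3, n_workers=4, histories_per_worker=20_000)
```

With many processors, the fixed cost of this barrier (plus the broadcast of the collected source back to all ranks) grows relative to the shrinking per-rank work, which is exactly the speedup limitation the paper analyzes.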

  12. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  13. Particle-in-cell Simulations with Kinetic Electrons

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2004-01-01

    A new scheme, based on an exact separation between adiabatic and nonadiabatic electron responses, for particle-in-cell (PIC) simulations of drift-type modes is presented. The (linear and nonlinear) elliptic equations for the scalar fields are solved using a multi-grid solver. The new scheme yields linear growth rates in excellent agreement with theory and it is shown to conserve energy well into the nonlinear regime. It is also demonstrated that simulations with few electrons are reliable and accurate, suggesting that large-scale, PIC simulations with electron dynamics in toroidal geometry (e.g., tokamaks and stellarators plasmas) are within reach of present-day massively parallel supercomputers
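
For readers unfamiliar with the basic cycle underlying such simulations, a minimal serial 1-D electrostatic PIC step can be sketched as below (pedagogical only; it contains none of the adiabatic/nonadiabatic splitting, multi-grid solver, or toroidal geometry of the paper):

```python
def pic_step(x, v, qm, dt, ng, L):
    """One step of a minimal 1-D electrostatic PIC cycle on a periodic
    domain: deposit charge -> solve for E -> gather -> push particles."""
    dx = L / ng
    # 1) Cloud-in-cell (linear-weighting) charge deposition onto the grid.
    rho = [0.0] * ng
    for xp in x:
        s = xp / dx
        i = int(s) % ng
        f = s - int(s)
        rho[i] += (1.0 - f) / dx
        rho[(i + 1) % ng] += f / dx
    mean_rho = sum(rho) / ng
    rho = [r - mean_rho for r in rho]      # neutralizing background charge
    # 2) Integrate dE/dx = rho (trapezoid rule), remove the mean field.
    E = [0.0] * ng
    for i in range(1, ng):
        E[i] = E[i - 1] + 0.5 * (rho[i - 1] + rho[i]) * dx
    mean_E = sum(E) / ng
    E = [e - mean_E for e in E]
    # 3) Gather the field at particle positions and push (kick + drift).
    x_new, v_new = [], []
    for xp, vp in zip(x, v):
        s = xp / dx
        i = int(s) % ng
        f = s - int(s)
        Ep = (1.0 - f) * E[i] + f * E[(i + 1) % ng]
        vp = vp + qm * Ep * dt
        x_new.append((xp + vp * dt) % L)
        v_new.append(vp)
    return x_new, v_new

# A uniform beam loaded on the grid nodes feels no net field and just drifts.
x, v = pic_step([i / 8 for i in range(8)], [0.1] * 8,
                qm=-1.0, dt=0.1, ng=8, L=1.0)
```

Parallel PIC codes distribute the particle loop (and/or the grid) over processors; the deposit and gather steps are then where interprocessor communication enters.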

  14. An exploratory study of three-dimensional MP-PIC-based simulation of bubbling fluidized beds with and without baffles

    DEFF Research Database (Denmark)

    Yang, Shuai; Wu, Hao; Lin, Weigang

    2018-01-01

    In this study, the flow characteristics of Geldart A particles in a bubbling fluidized bed with and without perforated plates were simulated by the multiphase particle-in-cell (MP-PIC)-based Eulerian-Lagrangian method. A modified structure-based drag model was developed based on our previous work....... Other drag models including the Parker and Wen-Yu-Ergun drag models were also employed to investigate the effects of drag models on the simulation results. Although the modified structure-based drag model better predicts the gas-solid flow dynamics of a baffle-free bubbling fluidized bed in comparison...... with the experimental data, none of these drag models predict the gas-solid flow in a baffled bubbling fluidized bed sufficiently well because of the treatment of baffles in the Barracuda software. To improve the simulation accuracy, future versions of Barracuda should address the challenges of incorporating the bed...

  15. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors
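
The sensitivity to the communication-to-computation ratio can be illustrated with a simple analytic speedup model (all parameters here are illustrative, not measured iPSC figures):

```python
def speedup(p, t_comp, t_comm_per_msg, n_msgs_global):
    """Fixed-size speedup model: parallel time = computation shared over p
    processors plus a communication term that does NOT shrink with p.
    Global communication (n_msgs_global growing with p) is the killer."""
    t_parallel = t_comp / p + t_comm_per_msg * n_msgs_global
    return t_comp / t_parallel

# Cheap communication: near-linear speedup.  Costly global communication:
# speedup saturates and then degrades as more processors are added.
cheap = [speedup(p, 1000.0, 0.1, p) for p in (4, 16, 32)]
costly = [speedup(p, 1000.0, 10.0, p) for p in (4, 16, 32)]
```

This is the quantitative face of the advice above: on the hypercube, algorithms must be structured so that the communication term grows much more slowly than the number of processors.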

  16. Journal of Astrophysics and Astronomy

    Indian Academy of Sciences (India)

    65

    Northern IMF as simulated by PIC code in parallel with MHD model - Journal of Astrophysics ... The global structure of the collisionless bow shock was investigated by ... international research community, access to modern space science simulations. ...

  17. Parallel S/sub n/ iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (S/sub n/) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional S/sub n/ transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial S/sub n/ algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial

  18. Vectorization, parallelization and porting of nuclear codes. Vectorization and parallelization. Progress report fiscal 1999

    Energy Technology Data Exchange (ETDEWEB)

    Adachi, Masaaki; Ogasawara, Shinobu; Kume, Etsuo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishizuki, Shigeru; Nemoto, Toshiyuki; Kawasaki, Nobuo; Kawai, Wataru [Fujitsu Ltd., Tokyo (Japan); Yatake, Yo-ichi [Hitachi Ltd., Tokyo (Japan)

    2001-02-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported on the FUJITSU VPP500 system, the AP3000 system, the SX-4 system and the Paragon system at the Center for Promotion of Computational Science and Engineering in Japan Atomic Energy Research Institute. We dealt with 18 codes in fiscal 1999. These results are reported in 3 parts, i.e., the vectorization and parallelization part on vector processors, the parallelization part on scalar processors and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this part, the vectorization of the Relativistic Molecular Orbital Calculation code RSCAT, a microscopic transport code for high energy nuclear collisions JAM, the three-dimensional non-steady thermal-fluid analysis code STREAM, the Relativistic Density Functional Theory code RDFT and the High Speed Three-Dimensional Nodal Diffusion code MOSRA-Light on the VPP500 system and the SX-4 system is described. (author)

  19. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  20. Taxonomic characterization, distribution and first European records of Apalus cinctus (Pic, 1896) (Coleoptera, Meloidae)

    Directory of Open Access Journals (Sweden)

    Ruiz, J. L.

    2013-12-01

    Full Text Available In this study we clarify the taxonomic status and geographic distribution of Apalus cinctus (Pic, 1896), a Mediterranean species included in the group of Apalus bimaculatus (Linnaeus, 1760). Apalus cinctus was known only from a few North African localities mentioned in the original description, and was considered of uncertain taxonomic status. The review of detailed photographs of the type specimen and the study of recently captured specimens allow us to discuss its taxonomic position and to define its diagnostic characters, validating its specific status. The capture or observation of specimens assignable to Apalus cinctus in continental Spain (León, Zamora and Huesca) considerably extends the geographic range of the species, including it within the European fauna. We question the presence of Apalus bimaculatus in the Iberian Peninsula and North Africa, suggesting that it is possibly replaced there by A. cinctus.

  1. Implementation and performance of parallelized elegant

    International Nuclear Information System (INIS)

    Wang, Y.; Borland, M.

    2008-01-01

    The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

  2. Parallelizing the spectral transform method: A comparison of alternative parallel algorithms

    International Nuclear Information System (INIS)

    Foster, I.; Worley, P.H.

    1993-01-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on the sphere and is widely used in global climate modeling. In this paper, we outline different approaches to parallelizing the method and describe experiments that we are conducting to evaluate the efficiency of these approaches on parallel computers. The experiments are conducted using a testbed code that solves the nonlinear shallow water equations on a sphere, but are designed to permit evaluation in the context of a global model. They allow us to evaluate the relative merits of the approaches as a function of problem size and number of processors. The results of this study are guiding ongoing work on PCCM2, a parallel implementation of the Community Climate Model developed at the National Center for Atmospheric Research

  3. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)

  4. Parallel processing for fluid dynamics applications

    International Nuclear Information System (INIS)

    Johnson, G.M.

    1989-01-01

    The impact of parallel processing on computational science and, in particular, on computational fluid dynamics is growing rapidly. In this paper, particular emphasis is given to developments which have occurred within the past two years. Parallel processing is defined and the reasons for its importance in high-performance computing are reviewed. Parallel computer architectures are classified according to the number and power of their processing units, their memory, and the nature of their connection scheme. Architectures which show promise for fluid dynamics applications are emphasized. Fluid dynamics problems are examined for parallelism inherent at the physical level. CFD algorithms and their mappings onto parallel architectures are discussed. Several examples are presented to document the performance of fluid dynamics applications on present-generation parallel processing devices

  5. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  6. Overview of the Force Scientific Parallel Language

    Directory of Open Access Journals (Sweden)

    Gita Alaghband

    1994-01-01

    Full Text Available The Force parallel programming language designed for large-scale shared-memory multiprocessors is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that have been ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples to illustrate some parallel programming approaches using the Force are also presented.

  7. Generation of Nonlinear Electric Field Bursts in the Outer Radiation Belt through Electrons Trapping by Oblique Whistler Waves

    Science.gov (United States)

    Agapitov, Oleksiy; Drake, James; Mozer, Forrest

    2016-04-01

    Huge numbers of different nonlinear structures (double layers, electron holes, non-linear whistlers, etc.), referred to as Time Domain Structures (TDS), have been observed by the electric field experiment on board the Van Allen Probes. A large part of the observed non-linear structures are associated with whistler waves, and some of them can be directly driven by whistlers. The parameters favorable for the generation of TDS were studied experimentally as well as with 2-D particle-in-cell (PIC) simulations for a system with an inhomogeneous magnetic field. It is shown that an outward propagating front of whistlers and hot electrons amplifies oblique whistlers, which collapse into regions of intense parallel electric field with properties consistent with recent observations of TDS from the Van Allen Probes satellites. Oblique whistlers seed the parallel electric fields that are driven by the beams. The resulting parallel electric fields trap and heat the precipitating electrons. These electrons drive spikes of intense parallel electric field with characteristics similar to the TDSs seen in the VAP data. The decoupling of the whistler wave and the nonlinear electrostatic component is shown in PIC simulations in the inhomogeneous magnetic field system. These effects are observed by the Van Allen Probes in the radiation belts. The precipitating hot electrons propagate away from the source region in intense bunches rather than as a smooth flux.

  8. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  9. Plasma density enhancement in atmospheric-pressure dielectric-barrier discharges by high-voltage nanosecond pulse in the pulse-on period: a PIC simulation

    International Nuclear Information System (INIS)

    Sang Chaofeng; Sun Jizhong; Wang Dezhen

    2010-01-01

    A particle-in-cell (PIC) plus Monte Carlo collision simulation is employed to investigate how a sustainable atmospheric-pressure single dielectric-barrier discharge responds to a high-voltage nanosecond pulse (HVNP) further applied to the metal electrode. The results show that the HVNP can significantly increase the plasma density in the pulse-on period. The ion-induced secondary electrons can give rise to avalanche ionization in the positive sheath, which widens the discharge region and enhances the plasma density drastically. However, the plasma density stops increasing once the applied pulse lasts beyond a certain time; therefore, lengthening the pulse duration alone cannot further improve the discharge efficiency. Physical reasons for these phenomena are then discussed.
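The PIC part of such a method follows the standard deposit-solve-push cycle (the Monte Carlo collision step is a separate add-on). As an illustrative sketch only, not the authors' code, a minimal periodic 1-D electrostatic PIC step might look like the following; all names, units, and normalizations are assumptions:

```python
import numpy as np

def pic_step(x, v, q_over_m, grid_n, dx, dt, L):
    """One particle-in-cell cycle: deposit charge, solve for E, push particles.
    Periodic 1-D electrostatic plasma with linear (cloud-in-cell) weighting."""
    # 1. Deposit particle charge onto the grid with linear weighting.
    rho = np.zeros(grid_n)
    gi = (x / dx).astype(int) % grid_n          # left grid index of each particle
    frac = x / dx - (x / dx).astype(int)        # fractional distance to the left node
    np.add.at(rho, gi, 1.0 - frac)
    np.add.at(rho, (gi + 1) % grid_n, frac)
    rho -= rho.mean()                           # neutralizing background charge

    # 2. Solve Poisson's equation (d2phi/dx2 = -rho, normalized units) via FFT:
    #    -k^2 phi_k = -rho_k, then E = -dphi/dx, i.e. E_k = -i k phi_k.
    k = 2 * np.pi * np.fft.fftfreq(grid_n, d=dx)
    k[0] = 1.0                                  # avoid division by zero (mean mode is zeroed)
    phi_k = np.fft.fft(rho) / k**2
    phi_k[0] = 0.0
    E = np.real(np.fft.ifft(-1j * k * phi_k))

    # 3. Gather E at particle positions and advance (leapfrog-style push).
    E_p = E[gi] * (1.0 - frac) + E[(gi + 1) % grid_n] * frac
    v = v + q_over_m * E_p * dt
    x = (x + v * dt) % L
    return x, v
```

A dielectric-barrier-discharge code would add bounded (non-periodic) field solves, secondary-electron emission at the electrodes, and the Monte Carlo collision step on top of this skeleton.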

  10. Plasma density enhancement in atmospheric-pressure dielectric-barrier discharges by high-voltage nanosecond pulse in the pulse-on period: a PIC simulation

    Science.gov (United States)

    Sang, Chaofeng; Sun, Jizhong; Wang, Dezhen

    2010-02-01

    A particle-in-cell (PIC) plus Monte Carlo collision simulation is employed to investigate how a sustainable atmospheric-pressure single dielectric-barrier discharge responds to a high-voltage nanosecond pulse (HVNP) further applied to the metal electrode. The results show that the HVNP can significantly increase the plasma density in the pulse-on period. The ion-induced secondary electrons can give rise to avalanche ionization in the positive sheath, which widens the discharge region and enhances the plasma density drastically. However, the plasma density stops increasing once the applied pulse lasts beyond a certain time; therefore, lengthening the pulse duration alone cannot further improve the discharge efficiency. Physical reasons for these phenomena are then discussed.

  11. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements High Performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.

  12. Design considerations for parallel graphics libraries

    Science.gov (United States)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  13. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities...... for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler’s ability to generate loop-parallel code. We use this compilation system to modify two sequential...... benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...
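The kind of refactoring such a feedback system suggests can be illustrated in miniature: an accumulation loop with a loop-carried dependence is split into independent per-chunk computations plus a final reduction, the shape a parallelizing compiler (or any worker pool) can exploit. This is a hypothetical Python sketch of the pattern, not code from the benchmarks in the paper:

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sum_sq(chunk):
    """Work on one chunk: no dependence on any other chunk's result."""
    return sum(x * x for x in chunk)

def parallel_sum_sq(data, workers=4):
    """A sequential accumulation refactored into independent chunk sums plus a
    final reduction -- the loop shape automatic parallelizers look for.
    (In CPython a process pool or a compiled OpenMP loop would be needed for
    real speedup; the thread pool here only shows the structure.)"""
    n = len(data)
    step = max(1, (n + workers - 1) // workers)
    chunks = [data[i:i + step] for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum_sq, chunks))
```

The same decomposition, expressed in the source of a C or Fortran loop nest, is what lets a compiler emit loop-parallel code.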

  14. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  15. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  16. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  17. Studies of Particle Wake Potentials in Plasmas

    Science.gov (United States)

    Ellis, Ian; Graziani, Frank; Glosli, James; Strozzi, David; Surh, Michael; Richards, David; Decyk, Viktor; Mori, Warren

    2011-10-01

    Fast Ignition studies require a detailed understanding of electron scattering, stopping, and energy deposition in plasmas with variable values for the number of particles within a Debye sphere. Presently there is disagreement in the literature concerning the proper description of these processes. Developing and validating proper descriptions requires studying the processes using first-principle electrostatic simulations and possibly including magnetic fields. We are using the particle-particle particle-mesh (PPPM) code ddcMD and the particle-in-cell (PIC) code BEPS to perform these simulations. As a starting point in our study, we examine the wake of a particle passing through a plasma in 3D electrostatic simulations performed with ddcMD and with BEPS using various cell sizes. In this poster, we compare the wakes we observe in these simulations with each other and predictions from Vlasov theory. Prepared by LLNL under Contract DE-AC52-07NA27344 and by UCLA under Grant DE-FG52-09NA29552.

  18. Parallelization of 2-D lattice Boltzmann codes

    International Nuclear Information System (INIS)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector-parallel computer Fujitsu VPP500 and the scalar-parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar-parallel LB code, a 1-D domain decomposition method is used for the vector-parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiencies of 95.1% for the vector-parallel calculation on 16 processors with a 1152x1152 grid and 88.6% for the scalar-parallel calculation on 100 processors with an 800x800 grid are obtained. Performance models are developed to analyze the performance of the LB codes. These models show that the execution speed of the vector-parallel code is about one hundred times faster than that of the scalar-parallel code with the same number of processors, up to 100 processors. We also analyze the scalability while keeping the memory used per processor element at its maximum. Our performance model predicts that the execution time of the vector-parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method in general has a drawback in interprocessor communication, the vector-parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author)
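The communication cost that distinguishes a 1-D from a 2-D decomposition comes from the ghost-row (halo) exchange each strip performs per time step. Below is a serially emulated sketch of that exchange for a periodic grid, with hypothetical names and NumPy copies standing in for message passing:

```python
import numpy as np

def decompose_rows(grid, nranks):
    """1-D domain decomposition of a periodic 2-D grid: each 'rank' gets a
    strip of rows padded with one ghost row above and below (filled later
    by the halo exchange)."""
    strips = np.array_split(grid, nranks, axis=0)
    return [np.pad(s, ((1, 1), (0, 0))) for s in strips]

def halo_exchange(strips):
    """Fill each strip's ghost rows from its neighbors' boundary rows --
    the communication a 1-D decomposition performs every time step."""
    n = len(strips)
    for r in range(n):
        strips[r][0, :] = strips[(r - 1) % n][-2, :]   # top ghost <- upper neighbor's last interior row
        strips[r][-1, :] = strips[(r + 1) % n][1, :]   # bottom ghost <- lower neighbor's first interior row
```

After the exchange, each rank can apply the streaming/collision stencil to its interior rows independently; a 2-D decomposition would exchange ghost columns as well, trading more messages for smaller ones.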

  19. Parallelization of 2-D lattice Boltzmann codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector-parallel computer Fujitsu VPP500 and the scalar-parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar-parallel LB code, a 1-D domain decomposition method is used for the vector-parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiencies of 95.1% for the vector-parallel calculation on 16 processors with a 1152x1152 grid and 88.6% for the scalar-parallel calculation on 100 processors with an 800x800 grid are obtained. Performance models are developed to analyze the performance of the LB codes. These models show that the execution speed of the vector-parallel code is about one hundred times faster than that of the scalar-parallel code with the same number of processors, up to 100 processors. We also analyze the scalability while keeping the memory used per processor element at its maximum. Our performance model predicts that the execution time of the vector-parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method in general has a drawback in interprocessor communication, the vector-parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author).

  20. Explorations of the implementation of a parallel IDW interpolation algorithm in a Linux cluster-based parallel GIS

    Science.gov (United States)

    Huang, Fang; Liu, Dingsheng; Tan, Xicheng; Wang, Jian; Chen, Yunping; He, Binbin

    2011-04-01

    To design and implement an open-source parallel GIS (OP-GIS) based on a Linux cluster, the parallel inverse distance weighting (IDW) interpolation algorithm has been chosen as an example to explore the working model and the principle of algorithm parallel pattern (APP), one of the parallelization patterns for OP-GIS. Based on an analysis of the serial IDW interpolation algorithm of GRASS GIS, this paper has proposed and designed a specific parallel IDW interpolation algorithm, incorporating both single process, multiple data (SPMD) and master/slave (M/S) programming modes. The main steps of the parallel IDW interpolation algorithm are: (1) the master node packages the related information, and then broadcasts it to the slave nodes; (2) each node calculates its assigned data extent along one row using the serial algorithm; (3) the master node gathers the data from all nodes; and (4) iterations continue until all rows have been processed, after which the results are outputted. According to the experiments performed in the course of this work, the parallel IDW interpolation algorithm can attain an efficiency greater than 0.93 compared with similar algorithms, which indicates that the parallel algorithm can greatly reduce processing time and maximize speed and performance.
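Steps (1)-(4) above amount to a map over output rows followed by a gather. The following is a hedged sketch of the row-wise work unit in plain NumPy, with hypothetical names; the list comprehension in the gather is where a process pool's map would go in the master/slave version:

```python
import numpy as np

def idw_row(args):
    """Interpolate one output row -- the unit of work assigned to a slave node."""
    y, xs, pts, vals, power = args
    row = np.empty(len(xs))
    for j, x in enumerate(xs):
        d2 = (pts[:, 0] - x) ** 2 + (pts[:, 1] - y) ** 2
        if d2.min() == 0:                      # exact hit on a sample point
            row[j] = vals[d2.argmin()]
        else:
            w = d2 ** (-power / 2.0)           # inverse-distance weights, 1/d^power
            row[j] = (w * vals).sum() / w.sum()
    return row

def idw_grid(pts, vals, xs, ys, power=2):
    """Master side: package one task per row, compute, then gather the rows.
    Replacing the comprehension with pool.map(idw_row, tasks) gives the
    row-distributed parallel version described above."""
    tasks = [(y, xs, pts, vals, power) for y in ys]
    return np.vstack([idw_row(t) for t in tasks])
```

Because each output row depends only on the (broadcast) sample points, the rows are embarrassingly parallel, which is why the reported efficiency can stay above 0.93.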

  1. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
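Reproducibility across parallel runs, one of the issues listed, is commonly handled by giving each node its own deterministic random stream so the tally does not depend on which processor runs which chunk. A small illustration with hypothetical names, using NumPy's seed-sequence spawning and a pi estimate standing in for a transport tally:

```python
import numpy as np

def estimate_pi_chunk(seed, n):
    """One node's share of the tally, with its own reproducible RNG stream."""
    rng = np.random.default_rng(seed)
    xy = rng.random((n, 2))
    return int(np.count_nonzero((xy ** 2).sum(axis=1) <= 1.0))

def parallel_pi(n_per_node, nodes, root_seed=12345):
    """Spawn one independent stream per node; the summed result is identical
    no matter how chunks are scheduled across processors."""
    seeds = np.random.SeedSequence(root_seed).spawn(nodes)
    hits = sum(estimate_pi_chunk(s, n_per_node) for s in seeds)
    return 4.0 * hits / (n_per_node * nodes)
```

A production Monte Carlo transport code would track particles rather than darts, but the stream-per-node pattern for reproducibility is the same.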

  2. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.
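The division of labor described above -- an outer Newton loop whose inner linear solves touch the Jacobian only through matrix-vector products -- can be sketched in a few lines. This is a toy illustration under assumed names, not the project's solver: the tiny system is assembled densely from finite-difference J*v products, where a real NKS code would instead run Schwarz-preconditioned GMRES on those same products:

```python
import numpy as np

def jvp(F, x, v, eps=1e-7):
    """Matrix-free Jacobian-vector product via finite differences --
    the only access to the Jacobian a Krylov method needs."""
    return (F(x + eps * v) - F(x)) / eps

def newton_krylov_sketch(F, x0, newton_iters=20, tol=1e-10):
    """Outer Newton iteration on the true residual; the inner solve uses
    only J*v products (here gathered into a small dense system for clarity)."""
    x = x0.astype(float)
    n = len(x)
    for _ in range(newton_iters):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.column_stack([jvp(F, x, e) for e in np.eye(n)])
        x = x - np.linalg.solve(J, r)
    return x
```

The parallelism enters through the matrix-vector products and the residual evaluations, both of which decompose over subdomains with neighbor-only communication.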

  3. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics: Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. The book covers the systematic classification of parallel mechanisms (PMs) and provides a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, the parallel kinematics machine, has become an emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also novel concepts and approaches, e.g. type synthesis based on evolution, performance evaluation and optimization based on screw theory, a singularity model taking into account motion and force transmissibility, and others. This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  4. The Value of PIC Cystography in Detecting De Novo and Residual Vesicoureteral Reflux after Dextranomer/Hyaluronic Acid Copolymer Injection

    Directory of Open Access Journals (Sweden)

    B. W. Palmer

    2011-01-01

    Full Text Available The endoscopic injection of Dx/HA in the management of vesicoureteral reflux (VUR) has become an accepted alternative to open surgery. In the current study we evaluated the value of cystography to detect de novo contralateral VUR in unilateral cases of VUR at the time of Dx/HA injection, and correlated the findings of immediate post-Dx/HA injection cystography performed under the same anesthesia with the 2-month postoperative VCUG to evaluate its ability to predict successful surgical outcomes. The study aimed to evaluate whether an intraoperatively performed cystogram could replace postoperative studies, but a negative intraoperative cystogram correlates with the postoperative study in only 80% of cases. Considering the 75–80% success rate of Dx/HA implantation, the addition of intraoperative cystograms cannot replace postoperative studies. In patients treated for unilateral VUR, PIC cystography can detect occult VUR and prevent postoperative contralateral new-onset VUR.

  5. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    textabstractIn the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  6. Final Report UCLA-Thermochemical Storage with Anhydrous Ammonia

    Energy Technology Data Exchange (ETDEWEB)

    Lavine, Adrienne [Univ. of California, Los Angeles, CA (United States)

    2018-02-05

    investigation. UCLA has filed a patent that protects the new ideas developed during this project. Discussions are ongoing with potential investors with the aim of partnering for further work. As well as immediate improvements and extra work with the existing experimental system, a key goal is to extend it to a small solar-driven project at an early opportunity.

  7. “Natural Paradise in the Waters”: The Level of Consumer Satisfaction with Boat Tours to Picãozinho

    Directory of Open Access Journals (Sweden)

    Pyetro Pergentino de Farias

    2017-11-01

    Full Text Available Boat tours to Picãozinho began in the 1980s as weekend and holiday leisure outings for fishermen's families. After the destination was discovered as a leisure option, the tours spread, and the destination came to be marketed to the local community and to tourists, as it still is today. This article aimed to identify the level of consumer satisfaction with the boat tours to Picãozinho. To this end, a quantitative study was carried out using the survey method, with a questionnaire administered to a total of 131 respondents. Among the study's main findings, the principal motivation consumers have for taking the boat tour is the experience of getting to see the coral reefs. In addition, most respondents reported a high level of satisfaction with the quality of the tour.

  8. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode; relative to scalar calculations, it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90.

  9. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.
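Of the three paradigms named, the level-synchronous one is the easiest to show in miniature: the whole frontier is processed, an (implicit) barrier is reached, and the next level begins. A sequential Python sketch of that control flow with hypothetical names (stapl itself is a C++ library, so this only illustrates the paradigm):

```python
def level_sync_bfs(adj, source):
    """Level-synchronous traversal: process the entire frontier, then advance.
    adj maps each vertex to its list of out-neighbors."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:                 # in stapl, frontier work is distributed
            for v in adj.get(u, []):
                if v not in level:
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier           # implicit barrier between levels
    return level
```

The asynchronous paradigm drops the per-level barrier at the cost of possible redundant work; the coarse-grained one batches many vertices per communication step.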

  10. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....

  11. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....
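In the notation usually used for this quantity (our notation, not necessarily the paper's), the parallel volume of a planar body K at distance r is the area of its r-neighborhood, and the stated result reads:

```latex
V_K(r) \;=\; \lambda_2\!\left(K \oplus r B^2\right),
\qquad
\lim_{r\to\infty}\Bigl(V_{\operatorname{conv} K}(r) - V_K(r)\Bigr) \;=\; 0,
```

where \(B^2\) is the closed unit disc, \(\oplus\) denotes Minkowski addition, and \(\lambda_2\) is the area (Lebesgue) measure on the plane.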

  12. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  13. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  14. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
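One standard way to obtain a symmetric system of the kind the abstract alludes to (shown here as a plausible reconstruction, not necessarily the dissertation's exact transform) uses the reciprocity relation between form factors. Starting from the radiosity system:

```latex
B_i \;=\; E_i + \rho_i \sum_j F_{ij} B_j ,
\qquad
A_i F_{ij} \;=\; A_j F_{ji} \quad\text{(reciprocity)},
```

scaling the \(i\)-th equation by \(A_i/\rho_i\) yields a coefficient matrix with entries \(\delta_{ij}\,A_i/\rho_i - A_i F_{ij}\), whose off-diagonal part is symmetric by reciprocity, opening the door to symmetric solution techniques.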

  15. The UCLA Multimodal Connectivity Database: A web-based platform for brain connectivity matrix sharing and analysis

    Directory of Open Access Journals (Sweden)

    Jesse A. Brown

    2012-11-01

    Full Text Available Brain connectomics research has rapidly expanded using functional MRI (fMRI) and diffusion-weighted MRI (dwMRI). A common product of these varied analyses is a connectivity matrix (CM). A CM stores the connection strength between any two regions (nodes) in a brain network. This format is useful for several reasons: (1) it is highly distilled, with minimal data size and complexity, (2) graph theory can be applied to characterize the network’s topology, and (3) it retains sufficient information to capture individual differences such as age, gender, intelligence quotient, or disease state. Here we introduce the UCLA Multimodal Connectivity Database (http://umcd.humanconnectomeproject.org), an openly available website for brain network analysis and data sharing. The site is a repository for researchers to publicly share CMs derived from their data. The site also allows users to select any CM shared by another user, compute graph theoretical metrics on the site, visualize a report of results, or download the raw CM. To date, users have contributed over 2000 individual CMs, spanning different imaging modalities (fMRI, dwMRI) and disorders (Alzheimer’s, autism, Attention Deficit Hyperactivity Disorder). To demonstrate the site’s functionality, whole brain functional and structural connectivity matrices are derived from 60 subjects’ (ages 26-45) resting state fMRI (rs-fMRI) and dwMRI data and uploaded to the site. The site is utilized to derive global and regional graph theory measures for the rs-fMRI and dwMRI networks. Global and nodal graph theoretical measures between functional and structural networks exhibit low correspondence. This example demonstrates how this tool can enhance the comparability of brain networks from different imaging modalities and studies. The existence of this connectivity-based repository should foster broader data sharing and enable larger-scale meta-analyses comparing networks across imaging modality, age group, and disease state.
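The graph-theoretic measures computed from an uploaded CM reduce, at their simplest, to operations on an adjacency matrix. A minimal sketch of one nodal and one global measure (the zero-threshold convention here is our assumption for illustration; the site's actual metric definitions may differ):

```python
def node_degrees(cm, threshold=0.0):
    """Nodal measure: number of suprathreshold connections per region."""
    n = len(cm)
    return [sum(1 for j in range(n) if j != i and cm[i][j] > threshold)
            for i in range(n)]

def density(cm, threshold=0.0):
    """Global measure: fraction of possible edges that are present."""
    n = len(cm)
    edges = sum(1 for i in range(n) for j in range(i + 1, n)
                if cm[i][j] > threshold)
    return edges / (n * (n - 1) / 2)

# A toy 3-region connectivity matrix (symmetric, zero diagonal).
cm = [[0.0, 0.8, 0.0],
      [0.8, 0.0, 0.3],
      [0.0, 0.3, 0.0]]
```

Real analyses would add weighted variants, path-length and clustering measures on top of the same matrix representation.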

  16. Comparison of different Maxwell solvers coupled to a PIC resolution method of Maxwell-Vlasov equations

    International Nuclear Information System (INIS)

    Fochesato, Ch.; Bouche, D.

    2007-01-01

    The numerical solution of Maxwell equations is a challenging task. Moreover, the range of applications is very wide: microwave devices and diffraction, to name a few. As a result, a number of methods have been proposed since the sixties. However, none of these methods has proved to be free of drawbacks. The finite difference scheme proposed by Yee in 1966 is well suited for Maxwell equations, but it only works on cubical meshes. As a result, the boundaries of complex objects are not properly handled by the scheme. When classical nodal finite elements are used, spurious modes appear, which spoil the results of simulations. Edge elements overcome this problem, at the price of a rather complex implementation and computationally intensive simulations. Finite volume methods, either generalizing the Yee scheme to a wider class of meshes, or applying to Maxwell equations methods initially used in the field of hyperbolic systems of conservation laws, are also used. Lastly, 'Discontinuous Galerkin' methods, which generalize finite volume methods to arbitrary order of accuracy, have recently been applied to Maxwell equations. In this report, we more specifically focus on the coupling of a Maxwell solver to a PIC (particle-in-cell) method. We analyze the advantages and drawbacks of the most widely used methods: accuracy, robustness, sensitivity to numerical artefacts, efficiency, and user judgment. (authors)
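The Yee scheme discussed above interleaves electric and magnetic field updates on a staggered grid. A drastically simplified 1D sketch in normalized units (grid size, source shape, and Courant number are arbitrary choices for illustration):

```python
import math

def yee_1d(steps, n=200, src=50, courant=0.5):
    """Leapfrog E/H updates on a staggered 1D grid with a soft Gaussian source."""
    ez = [0.0] * n
    hy = [0.0] * n
    for t in range(steps):
        for k in range(n - 1):          # H update, staggered half a cell
            hy[k] += courant * (ez[k + 1] - ez[k])
        for k in range(1, n):           # E update, half a time step later
            ez[k] += courant * (hy[k] - hy[k - 1])
        ez[src] += math.exp(-((t - 30.0) ** 2) / 100.0)
    return ez
```

In a PIC coupling, the current gathered from the particles each step would enter the E update as an additional source term, which is where the accuracy and robustness trade-offs the report analyzes come into play.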

  17. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    At present, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers extended with communication statements. Programmers suffer from having to write parallel programs with explicit communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model, greatly easing parallel programming while achieving high communication efficiency. The core functionality of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its strong performance suggests that SVOPP may be a breakthrough in parallel programming technique.

  18. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, indicating under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.

  19. Programming massively parallel processors a hands-on approach

    CERN Document Server

    Kirk, David B

    2010-01-01

    Programming Massively Parallel Processors discusses basic concepts about parallel programming and GPU architecture. "Massively parallel" refers to the use of a large number of processors to perform a set of computations in a coordinated parallel way. The book details various techniques for constructing parallel programs. It also discusses the development process, performance level, floating-point format, parallel patterns, and dynamic parallelism. The book serves as a teaching guide where parallel programming is the main topic of the course. It builds on the basics of C programming for CUDA, a parallel programming environment that is supported on NVIDIA GPUs. Composed of 12 chapters, the book begins with basic information about the GPU as a parallel computer source. It also explains the main concepts of CUDA, data parallelism, and the importance of memory access efficiency using CUDA. The target audience of the book is graduate and undergraduate students from all science and engineering disciplines who ...

  20. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  1. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  2. An alternative model to distribute VO software to WLCG sites based on CernVM-FS: a prototype at PIC Tier1

    International Nuclear Information System (INIS)

    Lanciotti, E; Merino, G; Blomer, J; Bria, A

    2011-01-01

    In a distributed computing model such as WLCG, experiment-specific application software has to be efficiently distributed to every site of the Grid. Application software is currently installed in a shared area of the site, visible to all Worker Nodes (WNs) through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failures. An alternative model for software distribution based on the CERN Virtual Machine File System (CernVM-FS) has been tried at PIC, the Spanish Tier1 site of WLCG. The test bed used and the results are presented in this paper.

  3. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R; Ratterman, Joseph D; Smith, Brian E

    2014-11-11

    Endpoint-based parallel data processing with non-blocking collective instructions in a PAMI of a parallel computer is disclosed. The PAMI is composed of data communications endpoints, each including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task. The compute nodes are coupled for data communications through the PAMI. The parallel application establishes a data communications geometry specifying a set of endpoints that are used in collective operations of the PAMI by associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry; registering in each endpoint in the geometry a dispatch callback function for a collective operation; and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.

  4. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of research is tooling to support the development of parallel programs in C/C++. Methods and software that automate the process of designing parallel applications are proposed.

  5. An Introduction to Parallel Computation R

    Indian Academy of Sciences (India)

    How are they programmed? This article provides an introduction. A parallel computer is a network of processors built for ... and have been used to solve problems much faster than a single ... in parallel computer design is to select an organization which ..... The most ambitious approach to parallel computing is to develop.

  6. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte and exabyte scale. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine in reasonable time and fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the classroom. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.

  7. Professional Parallel Programming with C# Master Parallel Extensions with NET 4

    CERN Document Server

    Hillar, Gastón

    2010-01-01

    Expert guidance for those programming today's dual-core processor PCs. As PC processors explode from one or two to now eight processors, there is an urgent need for programmers to master concurrent programming. This book dives deep into the latest technologies available to programmers for creating professional parallel applications using C#, .NET 4, and Visual Studio 2010. The book covers task-based programming, coordination data structures, PLINQ, thread pools, the asynchronous programming model, and more. It also teaches other parallel programming techniques, such as SIMD and vectorization.

  8. Parallelization for first principles electronic state calculation program

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Oguchi, Tamio.

    1997-03-01

    In this report we study the parallelization of a first-principles electronic state calculation program. The target machines are the NEC SX-4 for shared-memory parallelization and the FUJITSU VPP300 for distributed-memory parallelization. The features of each parallel machine are surveyed, and the parallelization methods suitable for each are proposed. It is shown that a 1.60-fold acceleration is achieved with 2-CPU parallelization on the SX-4 and a 4.97-fold acceleration is achieved with 12-PE parallelization on the VPP300. (author)
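The reported figures (1.60x on 2 CPUs, 4.97x on 12 PEs) can be read through Amdahl's law, S(p) = 1 / (f + (1 - f)/p), where f is the serial fraction of the work. A small sketch inverting that relation (reading the figures through Amdahl's law is our framing, not a claim of the report):

```python
def amdahl_speedup(f, p):
    """Amdahl's law: speedup on p processing units with serial fraction f."""
    return 1.0 / (f + (1.0 - f) / p)

def serial_fraction(speedup, p):
    """Serial fraction implied by a measured speedup on p units."""
    return (p / speedup - 1.0) / (p - 1.0)
```

With the report's numbers, `serial_fraction(1.60, 2)` gives 0.25 while `serial_fraction(4.97, 12)` gives about 0.13, i.e. the distributed-memory version exposes a larger parallel fraction of the work.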

  9. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    The work in the field of parallel processing has developed through research activities using several numerical Monte Carlo simulations related to basic or applied current problems of nuclear and particle physics. For the applications utilizing the GEANT code, development and improvement work was done on the parts simulating low-energy physical phenomena such as radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program of neutron tracking in the range of low energies up to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other works in this field refer to simulations for nuclear medicine applications such as the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness), and methods for dosimetric calculations. In particular, these calculations are suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other works in the same field refer to the simulation of electron channelling in crystals and of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment.

  10. Neoclassical parallel flow calculation in the presence of external parallel momentum sources in Heliotron J

    Energy Technology Data Exchange (ETDEWEB)

    Nishioka, K.; Nakamura, Y. [Graduate School of Energy Science, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan); Nishimura, S. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); Lee, H. Y. [Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of); Kobayashi, S.; Mizuuchi, T.; Nagasaki, K.; Okada, H.; Minami, T.; Kado, S.; Yamamoto, S.; Ohshima, S.; Konoshima, S.; Sano, F. [Institute of Advanced Energy, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan)

    2016-03-15

    A moment approach to calculate neoclassical transport in non-axisymmetric torus plasmas composed of multiple ion species is extended to include the external parallel momentum sources due to unbalanced tangential neutral beam injections (NBIs). The momentum sources that are included in the parallel momentum balance are calculated from the collision operators of background particles with fast ions. This method is applied to clarify the physical mechanism of the neoclassical parallel ion flows and the multi-ion-species effect on them in Heliotron J NBI plasmas. It is found that the parallel ion flow can be determined by the balance between the parallel viscosity and the external momentum source in the region where the external source is much larger than the thermodynamic-force-driven source in collisional plasmas. This is because the friction between C{sup 6+} and D{sup +} prevents a large difference between the C{sup 6+} and D{sup +} flow velocities in such plasmas. The C{sup 6+} flow velocities, which are measured by the charge exchange recombination spectroscopy system, are numerically evaluated with this method. It is shown that the experimentally measured C{sup 6+} impurity flow velocities do not clearly contradict the neoclassical estimates, and the dependence of the parallel flow velocities on the magnetic field ripples is consistent in both results.

  11. Structural Properties of G,T-Parallel Duplexes

    Directory of Open Access Journals (Sweden)

    Anna Aviñó

    2010-01-01

    Full Text Available The structure of G,T-parallel-stranded duplexes of DNA carrying similar amounts of adenine and guanine residues is studied by means of molecular dynamics (MD) simulations and UV and CD spectroscopies. In addition, the impact of the substitution of adenine by 8-aminoadenine and guanine by 8-aminoguanine is analyzed. The presence of 8-aminoadenine and 8-aminoguanine stabilizes the parallel duplex structure. Binding of these oligonucleotides to their target polypyrimidine sequences to form the corresponding G,T-parallel triplex was not observed. Instead, when unmodified parallel-stranded duplexes were mixed with their polypyrimidine target, an interstrand Watson-Crick duplex was formed. As predicted by theoretical calculations, parallel-stranded duplexes carrying 8-aminopurines did not bind to their target. The preference for the parallel duplex over the Watson-Crick antiparallel duplex is attributed to the strong stabilization of the parallel duplex produced by the 8-aminopurines. Theoretical studies show that the isomorphism of the triads is crucial for the stability of the parallel triplex.

  12. High-speed parallel solution of the neutron diffusion equation with the hierarchical domain decomposition boundary element method incorporating parallel communications

    International Nuclear Information System (INIS)

    Tsuji, Masashi; Chiba, Gou

    2000-01-01

    A hierarchical domain decomposition boundary element method (HDD-BEM) for solving the multiregion neutron diffusion equation (NDE) has been fully parallelized, both for numerical computations and for data communications, to accomplish a high parallel efficiency on distributed memory message passing parallel computers. Data exchanges between node processors that are repeated during iteration processes of HDD-BEM are implemented without any intervention of the host processor that was used to supervise parallel processing in the conventional parallelized HDD-BEM (P-HDD-BEM). Thus, the parallel processing can be executed with only cooperative operations of node processors. The communication overhead was the dominant time-consuming part in the conventional P-HDD-BEM, and the parallelization efficiency decreased steeply with the increase of the number of processors. With parallel data communication, the efficiency is affected only by the number of boundary elements assigned to decomposed subregions, and the communication overhead can be drastically reduced. This feature can be particularly advantageous in the analysis of three-dimensional problems where a large number of processors are required. The proposed P-HDD-BEM offers a promising solution to the deterioration problem of parallel efficiency and opens a new path to parallel computations of NDEs on distributed memory message passing parallel computers. (author)

  13. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  14. Parallel computing of physical maps--a comparative study in SIMD and MIMD parallelism.

    Science.gov (United States)

    Bhandarkar, S M; Chirravuri, S; Arnold, J

    1996-01-01

    Ordering clones from a genomic library into physical maps of whole chromosomes presents a central computational problem in genetics. Chromosome reconstruction via clone ordering is usually isomorphic to the NP-complete Optimal Linear Arrangement problem. Parallel SIMD and MIMD algorithms for simulated annealing based on Markov chain distribution are proposed and applied to the problem of chromosome reconstruction via clone ordering. Perturbation methods and problem-specific annealing heuristics are proposed and described. The SIMD algorithms are implemented on a 2048 processor MasPar MP-2 system which is an SIMD 2-D toroidal mesh architecture whereas the MIMD algorithms are implemented on an 8 processor Intel iPSC/860 which is an MIMD hypercube architecture. A comparative analysis of the various SIMD and MIMD algorithms is presented in which the convergence, speedup, and scalability characteristics of the various algorithms are analyzed and discussed. On a fine-grained, massively parallel SIMD architecture with a low synchronization overhead such as the MasPar MP-2, a parallel simulated annealing algorithm based on multiple periodically interacting searches performs the best. For a coarse-grained MIMD architecture with high synchronization overhead such as the Intel iPSC/860, a parallel simulated annealing algorithm based on multiple independent searches yields the best results. In either case, distribution of clonal data across multiple processors is shown to exacerbate the tendency of the parallel simulated annealing algorithm to get trapped in a local optimum.
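The MIMD strategy that performed best on the iPSC/860 — multiple independent annealing searches with no synchronization, keeping the best result at the end — can be sketched on a toy cost function (the annealing schedule and toy problem here are illustrative; the paper's clone-ordering cost and perturbation heuristics are more involved):

```python
import math
import random

def anneal(cost, state, neighbor, steps=2000, t0=1.0, seed=0):
    """One annealing search: accept worsening moves with Boltzmann probability."""
    rng = random.Random(seed)
    best = cur = state
    for i in range(steps):
        t = t0 * (1.0 - i / steps) + 1e-9          # linear cooling schedule
        cand = neighbor(cur, rng)
        d = cost(cand) - cost(cur)
        if d <= 0 or rng.random() < math.exp(-d / t):
            cur = cand
            if cost(cur) < cost(best):
                best = cur
    return best

def independent_searches(cost, init, neighbor, runs=8):
    """MIMD-style: run searches independently, synchronize only at the end."""
    results = [anneal(cost, init(s), neighbor, seed=s) for s in range(runs)]
    return min(results, key=cost)
```

Because the runs never communicate, this variant pays no synchronization cost, which is exactly why it suits a high-overhead MIMD machine; the periodically interacting variant trades communication for better convergence on low-overhead SIMD hardware.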

  15. On synchronous parallel computations with independent probabilistic choice

    International Nuclear Information System (INIS)

    Reif, J.H.

    1984-01-01

    This paper introduces probabilistic choice to synchronous parallel machine models, in particular parallel RAMs. The power of probabilistic choice in parallel computations is illustrated by parallelizing some known probabilistic sequential algorithms. The authors characterize the computational complexity of time-, space-, and processor-bounded probabilistic parallel RAMs in terms of the computational complexity of probabilistic sequential RAMs. They show that parallelism uniformly speeds up time-bounded probabilistic sequential RAM computations by nearly a quadratic factor. They also show that probabilistic choice can be eliminated from parallel computations by introducing nonuniformity.

  16. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with Graphical Processing Units, have broadly enabled parallelism. Compilers are being updated to address the resulting synchronization and threading challenges. Appropriate program and algorithm classification will greatly benefit software engineers seeking opportunities for effective parallelization. In the present work we investigated current species for the classification of algorithms; related work on classification is discussed along with a comparison of the issues that challenge classification. A set of algorithms was chosen that matches the classification structure with different issues and performs a given task. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant data which is not captured by the original species of algorithms. We implemented these new ideas in the tool, enabling automatic characterization of program code.

  17. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
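The whole-number cases such tables list follow from the reciprocal rule 1/R_total = Σ 1/R_i. A quick sketch that reproduces two classic integer combinations:

```python
def parallel_resistance(resistors):
    """Total resistance of resistors connected in parallel: 1/R = sum(1/Ri)."""
    return 1.0 / sum(1.0 / r for r in resistors)

# Classic whole-number combinations from such tables:
#   6 ohm || 3 ohm -> 2 ohm
#   4 ohm || 4 ohm -> 2 ohm
```

Equal resistors in parallel are the simplest case: n copies of R give R/n, which is why tables of integer results lean heavily on equal pairs and simple ratios.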

  18. Parallelization methods study of thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Gaudart, Catherine

    2000-01-01

    The variety of parallelization methods and machines leads to a wide range of choices for programmers. In this study we suggest, in an industrial context, some solutions drawn from the experience acquired with different parallelization methods. The study covers several scientific codes which simulate a large variety of thermal-hydraulics phenomena. A bibliography on parallelization methods and a first analysis of the codes showed the difficulty of applying our process to the whole set of applications under study. It was therefore necessary to identify and extract a representative part of these applications and parallelization methods. The linear solver part of the codes emerged as the natural candidate. On this particular part, several parallelization methods were used. From these developments one can estimate the work necessary for a novice programmer to parallelize his application, and the impact of the development constraints. The parallelization methods tested are the numerical library PETSc, the parallelizer PAF, the language HPF, the formalism PEI, and the communication libraries MPI and PVM. In order to test several methods on different applications while minimizing modifications to the codes, a tool called SPS (Server of Parallel Solvers) has been developed. We describe the constraints on code optimization in an industrial context, present the solutions provided by the SPS tool, show the development of the linear solver part with the tested parallelization methods, and lastly compare the results against the imposed criteria. (author) [fr

  19. Simulation Exploration through Immersive Parallel Planes

    Energy Technology Data Exchange (ETDEWEB)

    Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Smith, Steve [Los Alamos Visualization Associates

    2017-05-25

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
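In data terms, the brushing and time-filtering interactions described above reduce to range predicates over the observations. A minimal sketch (the observation layout — a dict with a time key `t` — is our assumption for illustration, not the system's actual data model):

```python
def brush(observations, dims, rect, t_range=None):
    """Keep observations whose values on the paired dimensions fall inside the
    brushed rectangle, optionally restricted to a time window."""
    dx, dy = dims                      # the dimension pair mapped to one plane
    (x0, x1), (y0, y1) = rect          # the brushed region on that plane
    selected = []
    for obs in observations:
        if t_range is not None and not (t_range[0] <= obs["t"] <= t_range[1]):
            continue                   # 'slider' filter on the time coordinate
        if x0 <= obs[dx] <= x1 and y0 <= obs[dy] <= y1:
            selected.append(obs)
    return selected
```

The selected set can then drive either highlighting or, as in the system described, the launch of new simulations sampled from the corresponding region of parameter space.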

  20. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    Full Text Available As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, small self-weight/load ratio, good dynamic behavior and easy control; hence its range of application has been extended. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limits of link length is introduced. This paper analyses the position workspace and orientation workspace of a parallel robot with six degrees of freedom. The results show that changing the length of the branches of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but changes its position.

  1. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  2. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  3. Collectively loading an application in a parallel computer

    Science.gov (United States)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Miller, Samuel J.; Mundy, Michael B.

    2016-01-05

    Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
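The leader/broadcast pattern in the abstract can be sketched as follows. This is an illustration of the pattern only, not the patented implementation; the function names, the minimum-rank leader-election rule, and the in-memory "application store" are all assumptions:

```python
def collective_load(job_nodes, application_store, app_name):
    """Sketch of collective loading: elect one node of the job's subset as
    leader, let only the leader fetch the application image from storage,
    then broadcast the image to every node in the subset."""
    leader = min(job_nodes)                    # deterministic leader election
    image = application_store[app_name]        # single read, by the leader only
    return leader, {node: image for node in job_nodes}  # broadcast to subset

store = {"hello_app": b"\x7fELF..."}           # placeholder executable image
leader, loaded = collective_load({2, 3, 5}, store, "hello_app")
```

The point of the pattern is that storage is read once rather than once per compute node, which matters when thousands of nodes start the same job.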

  4. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

    Full Text Available We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  5. Parallel-In-Time For Moving Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Southworth, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  6. Integrated Task And Data Parallel Programming: Language Design

    Science.gov (United States)

    Grimshaw, Andrew S.; West, Emily A.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments: In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda: Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program.
Additional 1995 Activities: During the fall I collaborated

  7. Performance of the Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  8. Unified Singularity Modeling and Reconfiguration of 3rTPS Metamorphic Parallel Mechanisms with Parallel Constraint Screws

    Directory of Open Access Journals (Sweden)

    Yufeng Zhuang

    2015-01-01

    Full Text Available This paper presents a unified singularity modeling and reconfiguration analysis of variable topologies of a class of metamorphic parallel mechanisms with parallel constraint screws. The new parallel mechanisms consist of three reconfigurable rTPS limbs that have two working phases stemming from the reconfigurable Hooke (rT joint. While one phase has full mobility, the other supplies a constraint force to the platform. Based on these, the platform constraint screw systems show that the new metamorphic parallel mechanisms have four topologies by altering the limb phases with mobility change among 1R2T (one rotation with two translations, 2R2T, and 3R2T and mobility 6. Geometric conditions of the mechanism design are investigated with some special topologies illustrated considering the limb arrangement. Following this and the actuation scheme analysis, a unified Jacobian matrix is formed using screw theory to include the change between geometric constraints and actuation constraints in the topology reconfiguration. Various singular configurations are identified by analyzing screw dependency in the Jacobian matrix. The work in this paper provides a basis for singularity-free workspace analysis and optimal design of the class of metamorphic parallel mechanisms with parallel constraint screws, which shows simple geometric constraints with potentially simple kinematics and dynamics properties.

  9. IMPLEMENTATION OF PID ON PIC24F SERIES MICROCONTROLLER FOR SPEED CONTROL OF A DC MOTOR USING MPLAB AND PROTEUS

    Directory of Open Access Journals (Sweden)

    Sohaib Aslam

    2016-09-01

    Full Text Available Speed control of a DC motor is very critical in most industrial systems where accuracy and protection are of the essence. This paper presents simulations of a Proportional Integral Derivative (PID) controller on a 16-bit PIC 24F series microcontroller for speed control of a DC motor in the presence of load torque. The PID gains have been tuned by the Linear Quadratic Regulator (LQR) technique; the controller is then implemented on the microcontroller using MPLAB and finally simulated for speed control of the DC motor in the Proteus Virtual System Modeling (VSM) software. Proteus has a built-in feature to add load torque to a DC motor, so simulation results are presented for three cases: the speed of the DC motor is controlled without load torque, with 25% load torque, and with 50% load torque. In all three cases the PID effectively controls the speed of the DC motor with minimum steady-state error.
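The control loop described above can be sketched with a discrete PID controller driving a first-order motor model under a constant load-torque disturbance. This is an illustrative sketch only: the gains are not the paper's LQR-tuned values, and the normalized motor model and time constant are assumptions:

```python
class PID:
    """Discrete PID controller (textbook form; gains are illustrative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Assumed first-order motor speed model: tau * dw/dt = u - w - load
# (all quantities normalized); load = 0.25 mimics the 25% load-torque case.
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
speed, load, tau = 0.0, 0.25, 0.1
for _ in range(5000):
    u = pid.step(1.0, speed)          # track a unit speed setpoint
    speed += pid.dt / tau * (u - speed - load)
```

The integral term is what removes the steady-state error caused by the load torque: a pure proportional controller would settle below the setpoint.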

  10. Fast ℓ1-SPIRiT Compressed Sensing Parallel Imaging MRI: Scalable Parallel Implementation and Clinically Feasible Runtime

    Science.gov (United States)

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-01-01

    We present ℓ1-SPIRiT, a simple algorithm for autocalibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the Wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative Self-Consistent Parallel Imaging (SPIRiT). Like many iterative MRI reconstructions, ℓ1-SPIRiT’s image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing ℓ1-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of ℓ1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT Spoiled Gradient Echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions. PMID:22345529
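The per-coefficient step behind "iterative soft-thresholding with cross-channel joint sparsity" is the joint (group) soft-thresholding operator. The sketch below shows that operator on one real-valued coefficient vector across channels; it is an assumption-laden illustration, not the ℓ1-SPIRiT code, which operates on complex multi-channel wavelet arrays:

```python
import math

def joint_soft_threshold(coeffs, lam):
    """Joint soft-thresholding: shrink the cross-channel magnitude of a
    single wavelet coefficient by lam, zeroing it entirely when the
    magnitude falls below lam. Shrinking the shared magnitude (rather
    than each channel independently) is what promotes *joint* sparsity."""
    norm = math.sqrt(sum(c * c for c in coeffs))
    if norm <= lam:
        return [0.0] * len(coeffs)
    scale = (norm - lam) / norm
    return [scale * c for c in coeffs]

kept = joint_soft_threshold([3.0, 4.0], 1.0)    # magnitude 5 shrunk to 4
zeroed = joint_soft_threshold([0.3, 0.4], 1.0)  # magnitude 0.5 zeroed
```

In a full iteration this operator is applied to every wavelet coefficient between data-consistency (SPIRiT) steps.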

  11. Parallel and non-parallel laminar mixed convection flow in an inclined tube: The effect of the boundary conditions

    International Nuclear Information System (INIS)

    Barletta, A.

    2008-01-01

    The necessary condition for the onset of parallel flow in the fully developed region of an inclined duct is applied to the case of a circular tube. Parallel flow in inclined ducts is an uncommon regime, since in most cases buoyancy tends to produce the onset of secondary flow. The present study shows how proper thermal boundary conditions may preserve the parallel flow regime. Mixed convection flow is studied for a special non-axisymmetric thermal boundary condition that, with a proper choice of a switch parameter, may be compatible with parallel flow. More precisely, a circumferentially variable heat flux distribution is prescribed on the tube wall, expressed as a sinusoidal function of the azimuthal coordinate θ with period 2π. A π/2 rotation in the position of the maximum heat flux, achieved by setting the switch parameter, may or may not allow the existence of parallel flow. Two cases are considered, corresponding to parallel and non-parallel flow. In the first case, the governing balance equations allow a simple analytical solution. On the contrary, in the second case, the local balance equations are solved numerically by employing a finite element method

  12. Parallel programming with Easy Java Simulations

    Science.gov (United States)

    Esquembre, F.; Christian, W.; Belloni, M.

    2018-01-01

    Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.

  13. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chip. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  14. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2009-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chips. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  15. Parallel auto-correlative statistics with VTK.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10] which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by the means of C++ code snippets and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the autocorrelative statistics engine.
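The quantity the auto-correlative engine computes can be illustrated with a serial sketch of the sample autocorrelation function. This is not the VTK C++ implementation; it is a minimal, assumption-laden reference for what the engine's moments aggregate to (the parallel engine combines the same sums across distributed data pieces):

```python
def autocorrelation(x, max_lag):
    """Sample autocorrelation of a series at lags 0..max_lag,
    normalized by the lag-0 variance so that r[0] == 1."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return [
        sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / (n * var)
        for k in range(max_lag + 1)
    ]

r = autocorrelation([1.0, 2.0, 3.0, 4.0, 5.0], 2)
```

A parallel version only needs each processor's partial sums of products and counts, which is why the engine scales: the reduction is a handful of scalars per lag, not the data itself.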

  16. Conformal pure radiation with parallel rays

    International Nuclear Information System (INIS)

    Leistner, Thomas; Nurowski, Paweł

    2012-01-01

    We define pure radiation metrics with parallel rays to be n-dimensional pseudo-Riemannian metrics that admit a parallel null line bundle K and whose Ricci tensor vanishes on vectors that are orthogonal to K. We give necessary conditions in terms of the Weyl, Cotton and Bach tensors for a pseudo-Riemannian metric to be conformal to a pure radiation metric with parallel rays. Then, we derive conditions in terms of the tractor calculus that are equivalent to the existence of a pure radiation metric with parallel rays in a conformal class. We also give analogous results for n-dimensional pseudo-Riemannian pp-waves. (paper)

  17. Numerical modeling of the Linac4 negative ion source extraction region by 3D PIC-MCC code ONIX

    CERN Document Server

    Mochalskyy, S; Minea, T; Lifschitz, AF; Schmitzer, C; Midttun, O; Steyaert, D

    2013-01-01

    At CERN, a high performance negative ion (NI) source is required for the 160 MeV H- linear accelerator Linac4. The source is planned to produce 80 mA of H- with an emittance of 0.25 mm mrad N-RMS, which is technically and scientifically very challenging. The optimization of the NI source requires a deep understanding of the underlying physics concerning the production and extraction of the negative ions. The extraction mechanism from the negative ion source is complex, involving a magnetic filter in order to cool down the electron temperature. The ONIX (Orsay Negative Ion eXtraction) code is used to address this problem. ONIX is a self-consistent 3D electrostatic code using the Particle-in-Cell Monte Carlo Collisions (PIC-MCC) approach. It was written to handle the complex boundary conditions between plasma, source walls, and beam formation at the extraction hole. Both the positive extraction potential (25 kV) and the magnetic field map are taken from the experimental set-up, in construction at CERN. This contrib...

  18. Control device for automatic orientation of a solar panel based on a microcontroller (PIC16f628a)

    Science.gov (United States)

    Rezoug, M. R.; Krama, A.

    2016-07-01

    This work proposes a control device for an autonomous single-axis solar tracker. It consists of two main parts: the control part, based on the PIC16f628a, which has the role of controlling, measuring and plotting responses; and a mechanical device, which has the role of making the solar panel follow the day-night movement of the sun throughout the year. Both parts are designed to improve the energy generation of the photovoltaic panels. In this paper, we explain the main operating principles of our system and provide experimental results which demonstrate the good performance and efficiency of this system. This innovation is different from what has been proposed in previous studies. The important points of this system are maximum output energy and minimum energy consumption by the solar tracker; its cost is relatively low, with simplicity of implementation. The average power increase produced by using the tracking system for a particular day is over 30% compared with the static panel.

  19. Apar-T: code, validation, and physical interpretation of particle-in-cell results

    Science.gov (United States)

    Melzani, Mickaël; Winisdoerffer, Christophe; Walder, Rolf; Folini, Doris; Favre, Jean M.; Krastanov, Stefan; Messmer, Peter

    2013-10-01

    We present the parallel particle-in-cell (PIC) code Apar-T and, more importantly, address the fundamental question of the relations between the PIC model, the Vlasov-Maxwell theory, and real plasmas. First, we present four validation tests: spectra from simulations of thermal plasmas, linear growth rates of the relativistic tearing instability and of the filamentation instability, and the nonlinear filamentation merging phase. For the filamentation instability we show that the effective growth rates measured on the total energy can differ by more than 50% from the linear cold predictions and from the fastest modes of the simulation. We link these discrepancies to the superparticle number per cell and to the level of field fluctuations. Second, we detail a new method for initial loading of Maxwell-Jüttner particle distributions with relativistic bulk velocity and relativistic temperature, and explain why the traditional method with individual particle boosting fails. The formulation of the relativistic Harris equilibrium is generalized to arbitrary temperature and mass ratios. Both are required for the tearing instability setup. Third, we turn to the key point of this paper and scrutinize the question of what description of (weakly coupled) physical plasmas is obtained by PIC models. These models rely on two building blocks: coarse-graining, i.e., grouping of the order of p ~ 10^10 real particles into a single computer superparticle, and field storage on a grid with its subsequent finite superparticle size. We introduce the notion of coarse-graining dependent quantities, i.e., quantities depending on p. They derive from the PIC plasma parameter Λ_PIC, which we show to behave as Λ_PIC ∝ 1/p. We explore two important implications. One is that PIC collision- and fluctuation-induced thermalization times are expected to scale with the number of superparticles per grid cell, and thus to be a factor p ~ 10^10 smaller than in real plasmas, a fact that we confirm with
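The scaling Λ_PIC ∝ 1/p follows directly from the definition of the plasma parameter with the superparticle density substituted for the real density: coarse-graining leaves the Debye length unchanged but divides the number of particles per Debye sphere by p. A minimal numeric sketch, with illustrative (not paper-specific) values and units assumed consistent:

```python
def pic_plasma_parameter(n_real, debye_length, p):
    """Lambda_PIC = n_sp * lambda_D**3 with superparticle density
    n_sp = n_real / p: grouping p real particles into one superparticle
    divides the plasma parameter by p while lambda_D stays fixed."""
    return (n_real / p) * debye_length ** 3

lam_real = pic_plasma_parameter(1e20, 1e-4, 1.0)   # real plasma (p = 1)
lam_pic = pic_plasma_parameter(1e20, 1e-4, 1e10)   # PIC, p = 1e10
```

Since collision and fluctuation effects scale inversely with the plasma parameter, this factor-of-p reduction is exactly why PIC thermalization times come out ~p times shorter than in the real plasma.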

  20. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  1. A task parallel implementation of fast multipole methods

    KAUST Repository

    Taura, Kenjiro

    2012-11-01

    This paper describes a task parallel implementation of ExaFMM, an open source implementation of fast multipole methods (FMM), using a lightweight task parallel library MassiveThreads. Although there have been many attempts on parallelizing FMM, experiences have almost exclusively been limited to formulation based on flat homogeneous parallel loops. FMM in fact contains operations that cannot be readily expressed in such conventional but restrictive models. We show that task parallelism, or parallel recursions in particular, allows us to parallelize all operations of FMM naturally and scalably. Moreover it allows us to parallelize a "mutual interaction" for force/potential evaluation, which is roughly twice as efficient as a more conventional, unidirectional force/potential evaluation. The net result is an open source FMM that is clearly among the fastest single node implementations, including those on GPUs; with a million particles on a 32-core 2.20 GHz Sandy Bridge node, it completes a single time step including tree construction and force/potential evaluation in 65 milliseconds. The study clearly showcases both programmability and performance benefits of flexible parallel constructs over more monolithic parallel loops. © 2012 IEEE.

  2. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

    Second derivative parallel block backward differentiation type formulas for Stiff ODEs. ... Log in or Register to get access to full text downloads. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...

  3. Electron paramagnetic resonance spectral study of [Mn(acs){sub 2}(2–pic){sub 2}(H{sub 2}O){sub 2}] single crystals

    Energy Technology Data Exchange (ETDEWEB)

    Kocakoç, Mehpeyker, E-mail: mkocakoc@cu.edu.tr [Çukurova University (Turkey); Tapramaz, Recep, E-mail: recept@omu.edu.tr [Ondokuz Mayıs University (Turkey)

    2016-03-25

    Acesulfame potassium salt is a synthetic, non-caloric sweetener. It is also chemically important for its capability of acting as a ligand in coordination compounds, because it can bind through the nitrogen and oxygen atoms of the carbonyl and sulfonyl groups and the ring oxygen. Some acesulfame-containing transition metal ion complexes with mixed ligands exhibit solvatochromic and thermochromic properties, which make them physically important. In this work, single crystals of a Mn{sup +2} mixed-ligand complex, [Mn(acs){sub 2}(2-pic){sub 2}(H{sub 2}O){sub 2}], were studied with electron paramagnetic resonance (EPR) spectroscopy and the EPR parameters were determined. Zero field splitting parameters indicated that the complex is highly symmetric. Variable temperature studies showed no detectable change in the spectra.

  4. Parallelization of quantum molecular dynamics simulation code

    International Nuclear Information System (INIS)

    Kato, Kaori; Kunugi, Tomoaki; Shibahara, Masahiko; Kotake, Susumu

    1998-02-01

    A quantum molecular dynamics simulation code has been developed for the analysis of the thermalization of photon energies in molecules or materials at the Kansai Research Establishment. The simulation code is parallelized for both a scalar massively parallel computer (Intel Paragon XP/S75) and a vector parallel computer (Fujitsu VPP300/12). Scalable speed-up has been obtained on both parallel computers by distributing groups of particles among the processing units. By distributing the work not only by particle group but also over the fine-grained calculations performed for each particle, high parallelization performance is achieved on the Intel Paragon XP/S75. (author)

  5. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding time and the effectiveness of parallelization.

  6. Differences Between Distributed and Parallel Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brightwell, R.; Maccabe, A.B.; Rissen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  7. Parallel processing from applications to systems

    CERN Document Server

    Moldovan, Dan I

    1993-01-01

    This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well balanced across diverse topics in this concise presentation. For exceptional cla

  8. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the partial efficiency of the algorithm is studied.
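
The components the survey analyzes can be made concrete with a minimal sketch (all helper names are hypothetical, not from the paper): one two-grid cycle for the 1D Poisson equation -u'' = f, combining weighted-Jacobi smoothing, restriction of the residual, an approximate coarse-grid correction, and interpolation back to the fine grid.

```python
def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f with fixed boundary values."""
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u = new
    return u

def residual(u, f, h):
    """r = f + u'' at interior points (zero on the boundary)."""
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def two_grid(u, f, h):
    """One V-cycle on two levels; assumes an odd number of grid points."""
    u = jacobi(u, f, h, 3)                                 # pre-smooth
    r = residual(u, f, h)
    rc = [r[2 * i] for i in range((len(u) + 1) // 2)]      # restrict by injection
    ec = jacobi([0.0] * len(rc), rc, 2 * h, 60)            # approximate coarse solve
    for i in range(1, len(u) - 1):                         # prolong the correction
        u[i] += ec[i // 2] if i % 2 == 0 else 0.5 * (ec[i // 2] + ec[i // 2 + 1])
    return jacobi(u, f, h, 3)                              # post-smooth
```

In a parallel setting of the kind the survey evaluates, the fine-grid smoothing and residual loops are the data-parallel parts, while the shrinking coarse grids are where mapping and load-imbalance costs appear.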

  9. Parallel computing by Monte Carlo codes MVP/GMVP

    International Nuclear Information System (INIS)

    Nagaya, Yasunobu; Nakagawa, Masayuki; Mori, Takamasa

    2001-01-01

    General-purpose Monte Carlo codes MVP/GMVP are well vectorized and thus enable high-speed Monte Carlo calculations. To achieve further speedups, we parallelized the codes on several types of parallel computing platforms, or by using the standard parallelization library MPI. The platforms used for benchmark calculations were the distributed-memory vector-parallel computer Fujitsu VPP500, the distributed-memory massively parallel computer Intel Paragon, and the distributed-memory scalar-parallel computers Hitachi SR2201 and IBM SP2. As is generally observed, nearly linear speedup could be obtained for large-scale problems, but parallelization efficiency decreased as the batch size per processing element (PE) became smaller. It was also found that the statistical uncertainty in assembly powers was less than 0.1% for a PWR full-core calculation with more than 10 million histories, which took about 1.5 hours with massively parallel computing. (author)
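
The batch-per-PE decomposition described here can be sketched in miniature (a hedged illustration, not the MVP/GMVP transport kernel): each worker runs its own batch of histories with an independent random stream, and the master sums the tallies. Threads stand in for the message-passing processing elements; a production code would use MPI ranks.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_batch(seed, histories):
    """One PE's batch of histories with an independent random stream."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(histories):
        # Stand-in "history": rejection sampling for pi.
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1.0:
            hits += 1
    return hits

def parallel_estimate(n_workers, histories_per_worker):
    """Distribute batches across workers and combine the tallies."""
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        tallies = list(ex.map(run_batch, range(n_workers),
                              [histories_per_worker] * n_workers))
    return 4.0 * sum(tallies) / (n_workers * histories_per_worker)
```

The efficiency observation in the abstract maps directly onto this sketch: shrinking `histories_per_worker` while holding the total fixed increases the per-batch overhead relative to useful work.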

  10. The parallel processing of EGS4 code on distributed memory scalar parallel computer:Intel Paragon XP/S15-256

    Energy Technology Data Exchange (ETDEWEB)

    Takemiya, Hiroshi; Ohta, Hirofumi; Honma, Ichirou

    1996-03-01

    The parallelization of the electromagnetic cascade Monte Carlo simulation code EGS4 on the distributed-memory scalar-parallel computer Intel Paragon XP/S15-256 is described. A feature of EGS4 is that the calculation time per incident particle varies widely, because secondary particles are generated dynamically and each particle behaves differently. Granularity for parallel processing, the parallel programming model, and the algorithm for parallel random number generation are discussed, and two methods, which allocate particles either dynamically or statically, are used to realize high-speed parallel processing of this code. Among the four problems chosen for performance evaluation, speedup factors of nearly 100 were attained for three of them with 128 processors. It was found that when both the calculation time per incident particle and its dispersion are large, the dynamic particle allocation method, which averages the load across processors, is preferable; when they are small, the static particle allocation method, which reduces communication overhead, is preferable. Moreover, it is pointed out that double-precision variables are necessary in the EGS4 code to obtain accurate results. Finally, the workflow of program parallelization is analyzed, and tools for program parallelization are discussed in light of the experience gained from parallelizing EGS4. (author).
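
The trade-off between the two allocation methods can be illustrated with a small sketch (helper names hypothetical; threads stand in for the Paragon's processors): static allocation pre-splits the particle list evenly, while dynamic allocation lets idle workers pull the next particle from a shared queue, which balances load when per-particle cost varies widely.

```python
import queue
import threading

def simulate(particle):
    # Stand-in for tracking one shower; cost varies with the input.
    return sum(i * i for i in range(particle))

def static_allocation(particles, n_workers):
    """Pre-split the particle list evenly; low overhead, load may be unbalanced."""
    chunks = [particles[k::n_workers] for k in range(n_workers)]
    results = [None] * n_workers
    def work(k):
        results[k] = [simulate(p) for p in chunks[k]]
    threads = [threading.Thread(target=work, args=(k,)) for k in range(n_workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return sorted(r for chunk in results for r in chunk)

def dynamic_allocation(particles, n_workers):
    """Idle workers pull the next particle from a shared queue (load balancing)."""
    q = queue.Queue()
    for p in particles:
        q.put(p)
    out, lock = [], threading.Lock()
    def work():
        while True:
            try:
                p = q.get_nowait()
            except queue.Empty:
                return
            r = simulate(p)
            with lock:
                out.append(r)
    threads = [threading.Thread(target=work) for _ in range(n_workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return sorted(out)
```

Both produce identical tallies; they differ only in how the per-particle cost variance the abstract highlights is absorbed.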

  11. Towards a streaming model for nested data parallelism

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2013-01-01

    The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability, obtained through a uniform, parallelism-flattening […] -processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work…
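
The flattening idea underlying NESL-style cost semantics can be sketched as follows (a simplified illustration, not NESL's actual representation): a nested sequence is stored as one flat data vector plus a segment descriptor of lengths, so an irregular nested sum becomes a regular segmented reduction over flat arrays.

```python
def flatten(nested):
    """Represent a nested sequence as (flat data, segment lengths)."""
    data = [x for seg in nested for x in seg]
    lengths = [len(seg) for seg in nested]
    return data, lengths

def segmented_sum(data, lengths):
    """Sum each segment of the flat vector; a regular, parallelizable scan."""
    sums, pos = [], 0
    for n in lengths:
        sums.append(sum(data[pos:pos + n]))
        pos += n
    return sums
```

The space concern the abstract raises is visible even here: the flat representation materializes every element at once, which is what a streaming execution model avoids.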

  12. Massively Parallel Computing: A Sandia Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

    The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant breakthroughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large-scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software, and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  13. Parallel Algorithms for the Exascale Era

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory

    2016-10-19

    New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.

  14. Tripartite polyionic complex (PIC) micelles as non-viral vectors for mesenchymal stem cell siRNA transfection.

    Science.gov (United States)

    Raisin, Sophie; Morille, Marie; Bony, Claire; Noël, Danièle; Devoisselle, Jean-Marie; Belamie, Emmanuel

    2017-08-22

    In the context of regenerative medicine, the use of RNA interference mechanisms has already proven its efficiency in targeting specific gene expression with the aim of enhancing, accelerating or, more generally, directing stem cell differentiation. However, achieving good transfection levels requires the use of a gene vector. For in vivo applications, synthetic vectors are an interesting option for avoiding the issues associated with viral vectors (safety, production costs, etc.). Herein, we report on the design of tripartite polyionic complex micelles as original non-viral polymeric vectors suited to mesenchymal stem cell transfection with siRNA. Three micelle formulations were designed to exhibit pH-triggered disassembly in an acidic pH range comparable to that of endosomes. One formulation was selected as the most promising, having the highest siRNA loading capacity while clearly maintaining pH-triggered disassembly properties. A thorough investigation of the internalization pathway of micelles into cells was performed with tagged siRNA before demonstrating efficient inhibition of Runx2 expression in primary bone-marrow-derived stem cells. This work establishes PIC micelles as promising synthetic vectors that allow efficient MSC transfection and control over cell behavior, with a view toward their clinical use.

  15. A Parallel Approach to Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression, and presents experimental results comparing sequential and parallel algorithms in terms of both the achieved coding and decoding times and the effectiveness of the parallelization.

  16. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed memory highly parallel computer. Parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique to map a history to each processor dynamically and to map control process to a certain processor was applied. The efficiency of parallelized code is up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer to the conventional computers in the field of shielding analysis by Monte Carlo method. (orig.)

  17. Performance Analysis of Parallel Mathematical Subroutine library PARCEL

    International Nuclear Information System (INIS)

    Yamada, Susumu; Shimizu, Futoshi; Kobayashi, Kenichi; Kaburaki, Hideo; Kishida, Norio

    2000-01-01

    The parallel mathematical subroutine library PARCEL (Parallel Computing Elements) has been developed by the Japan Atomic Energy Research Institute to make typical parallelized mathematical routines easy to use in application programs on distributed-memory parallel computers. PARCEL includes routines for linear equations, eigenvalue problems, pseudo-random number generation, and fast Fourier transforms. Performance results for the linear-equation routines exhibit good parallel efficiency on vector-parallel as well as scalar-parallel computers. A comparison of these efficiency results with the PETSc (Portable, Extensible Toolkit for Scientific Computation) library is also reported. (author)

  18. Applications of the parallel computing system using network

    International Nuclear Information System (INIS)

    Ido, Shunji; Hasebe, Hiroki

    1994-01-01

    Parallel programming is applied to multiple processors connected via Ethernet. Data exchange between tasks located on the different processing elements is realized in two ways. One is sockets, a standard library on recent UNIX operating systems. The other is Parallel Virtual Machine (PVM), free network-connection software developed at ORNL that allows many workstations on a network to be used as a parallel computer. This paper discusses the viability of parallel computing on networked UNIX workstations, and compares it with specialized parallel systems (Transputer and iPSC/860), for a Monte Carlo simulation, which generally shows a high parallelization ratio. (author)
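
The socket-based exchange the paper compares with PVM can be sketched minimally (a hedged illustration; a socketpair plus a thread stands in for two processes on networked workstations): a worker receives a work unit over a socket, computes a partial result, and sends the tally back to the master.

```python
import socket
import struct
import threading

def recv_exact(conn, n):
    """Receive exactly n bytes from a socket."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise EOFError("connection closed")
        buf += chunk
    return buf

def worker(conn):
    lo, hi = struct.unpack("!ii", recv_exact(conn, 8))  # receive a work unit
    partial = sum(range(lo, hi))                        # stand-in computation
    conn.sendall(struct.pack("!q", partial))            # return the tally
    conn.close()

def master(n, n_workers=2):
    """Split range(n) across workers and combine their partial sums."""
    total, step = 0, n // n_workers
    for k in range(n_workers):
        a, b = socket.socketpair()
        t = threading.Thread(target=worker, args=(b,))
        t.start()
        lo = k * step
        hi = n if k == n_workers - 1 else lo + step
        a.sendall(struct.pack("!ii", lo, hi))
        total += struct.unpack("!q", recv_exact(a, 8))[0]
        t.join()
        a.close()
    return total
```

A real networked version would replace `socketpair` with listening TCP sockets on separate hosts; the framing and gather pattern stay the same.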

  19. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost-effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will have to drive many parallel flashlamp circuits, which presents a problem of equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short-circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests

  20. Bayer image parallel decoding based on GPU

    Science.gov (United States)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectrical tracking systems, Bayer images are traditionally decoded on the CPU. However, this is too slow when the images become large, for example 2K×2K×16 bit. To accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) that support the CUDA architecture. The decoding procedure can be divided into three parts: a serial part, a task-parallel part, and a data-parallel part comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce execution time, the task-parallel part is optimized with OpenMP techniques, while the data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared-memory access optimization, coalesced memory access optimization, and texture memory optimization. In particular, the IDWT can be significantly sped up by rewriting the two-dimensional (2D) serial IDWT as a one-dimensional (1D) parallel IDWT. In experiments with a 1K×1K×16 bit Bayer image, the data-parallel part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speedup compared with the serial CPU method.
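
The data parallelism behind the 1D IDWT rewrite can be illustrated with a single-level Haar analysis/synthesis pair (a simplified sketch, not the paper's actual wavelet filter): each output pair of the synthesis step depends on only one (average, detail) pair, which is exactly the independence a CUDA kernel exploits by assigning one pair per thread.

```python
def haar_forward(x):
    """Single-level Haar analysis: averages and details (even-length input)."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return avg, det

def haar_inverse(avg, det):
    """Single-level Haar synthesis (the IDWT step)."""
    x = []
    for a, d in zip(avg, det):   # each iteration is independent: one GPU thread each
        x.extend([a + d, a - d])
    return x
```

In a CUDA version, the loop body of `haar_inverse` becomes the kernel and the loop index becomes the thread index; the serial 2D formulation hides this independence, which is why the 1D rewrite pays off.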

  1. Refinement of Parallel and Reactive Programs

    OpenAIRE

    Back, R. J. R.

    1992-01-01

    We show how to apply the refinement calculus to stepwise refinement of parallel and reactive programs. We use action systems as our basic program model. Action systems are sequential programs which can be implemented in a parallel fashion. Hence refinement calculus methods, originally developed for sequential programs, carry over to the derivation of parallel programs. Refinement of reactive programs is handled by data refinement techniques originally developed for the sequential refinement c...

  2. Portable parallel programming in a Fortran environment

    International Nuclear Information System (INIS)

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACS macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network-based MIMD processors. The parallelism is implemented using standard UNIX tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore Multimax, the Sequent Balance, the Intel iPSC/2 Hypercube, and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at from 1 to 10 MIPS of processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a "nearly realistic" lead, iron, and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs

  3. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  4. A Tutorial on Parallel and Concurrent Programming in Haskell

    Science.gov (United States)

    Peyton Jones, Simon; Singh, Satnam

    This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs which allows programmers to use rich data types in data parallel programs which are automatically transformed into flat data parallel versions for efficient execution on multi-core processors.
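
The semi-explicit style described above, in which independent subcomputations are annotated and the runtime decides the actual scheduling, can be loosely mimicked with futures. The sketch below is a Python analogy, not the tutorial's Haskell `par`/`pseq` API: submitting a future marks a branch as evaluable in parallel, and demanding both results joins them.

```python
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    """Deliberately naive recursion, used as an independent workload."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def parallel_pair(n, m):
    # Submit one branch as a future ("spark" it), evaluate the other
    # concurrently, then demand both results. The executor, like the
    # Haskell runtime, decides whether evaluation actually overlaps.
    with ThreadPoolExecutor(max_workers=2) as ex:
        a = ex.submit(fib, n)
        b = ex.submit(fib, m)
        return a.result() + b.result()
```

As with `par`, the annotation changes only the evaluation strategy, never the value: `parallel_pair(n, m)` always equals `fib(n) + fib(m)`.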

  5. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  6. Evaluation of Several Insecticides for the Control of Cephaloleia sp. near vagelineata Pic, a Pest of African Oil Palm

    Directory of Open Access Journals (Sweden)

    Urueta Sandino Eduardo

    1974-04-01

    Several tests were carried out to determine the effectiveness of carbofuran at 1.0, 1.5 and 2.0 kg A.I./ha; carbaryl at 1.5 and 2.0 kg A.I./ha; lindane at 1.0 and 1.5 kg A.I./ha; phosphamidon at 0.6 lt A.I./ha; fenthion at 0.5 lt A.I./ha; dicrotophos at 0.5 lt A.I./ha; and diazinon at 0.5 lt A.I./ha on larvae and adults of Cephaloleia sp. near vagelineata Pic, a chrysomelid that attacks young oil palm (Elaeis guineensis) leaves in Colombia. All of these insecticides effectively controlled Cephaloleia sp. larvae in the shoots for periods of more than 30 days. Carbofuran 2.0 kg A.I./ha, carbaryl 2.0 kg A.I./ha and lindane 1.5 kg A.I./ha gave the best control of Cephaloleia sp. adults, protecting young oil palm leaves for up to 15 days. Dicrotophos 0.5 lt A.I./ha, fenthion 0.5 lt A.I./ha, phosphamidon 0.6 lt A.I./ha and diazinon 0.5 lt A.I./ha apparently were not effective in controlling adults of Cephaloleia sp. None of the insecticides tested was phytotoxic to the oil palm.

  7. Current distribution characteristics of superconducting parallel circuits

    International Nuclear Information System (INIS)

    Mori, K.; Suzuki, Y.; Hara, N.; Kitamura, M.; Tominaka, T.

    1994-01-01

    To increase the current-carrying capacity of the current path in a superconducting magnet system, parallel circuits such as insulated multi-strand cables or parallel persistent current switches (PCS) are used. In superconducting parallel circuits of an insulated multi-strand cable or a parallel PCS, the current distribution during the current sweep, the persistent mode, and the quench process was investigated. Two methods were used to measure the current distribution. (1) Each strand was surrounded by a pure iron core with an air gap, in which a Hall probe was located; the accuracy of this method was degraded by the magnetic hysteresis of the iron. (2) A Rogowski coil without iron was used to measure the current in each path of a 4-parallel PCS. As a result, it was shown that the current distribution characteristics of a parallel PCS during the quench process are very similar to those of an insulated multi-strand cable

  8. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  9. 6th International Parallel Tools Workshop

    CERN Document Server

    Brinkmann, Steffen; Gracia, José; Resch, Michael; Nagel, Wolfgang

    2013-01-01

    The latest advances in the High Performance Computing hardware have significantly raised the level of available compute performance. At the same time, the growing hardware capabilities of modern supercomputing architectures have caused an increasing complexity of the parallel application development. Despite numerous efforts to improve and simplify parallel programming, there is still a lot of manual debugging and  tuning work required. This process  is supported by special software tools, facilitating debugging, performance analysis, and optimization and thus  making a major contribution to the development of  robust and efficient parallel software. This book introduces a selection of the tools, which were presented and discussed at the 6th International Parallel Tools Workshop, held in Stuttgart, Germany, 25-26 September 2012.

  10. How does sagittal imbalance affect the appropriateness of surgical indications and selection of procedure in the treatment of degenerative scoliosis? Findings from the RAND/UCLA Appropriate Use Criteria study.

    Science.gov (United States)

    Daubs, Michael D; Brara, Harsimran S; Raaen, Laura B; Chen, Peggy Guey-Chi; Anderson, Ashaunta T; Asch, Steven M; Nuckols, Teryl K

    2018-05-01

    Degenerative lumbar scoliosis (DLS) is often associated with sagittal imbalance, which may affect patients' health outcomes before and after surgery. The appropriateness of surgery and preferred operative approaches has not been examined in detail for patients with DLS and sagittal imbalance. The goals of this article were to describe what is currently known about the relationship between sagittal imbalance and health outcomes among patients with DLS and to determine how indications for surgery in patients with DLS differ when sagittal imbalance is present. This study included a literature review and an expert panel using the RAND/University of California at Los Angeles (UCLA) Appropriateness Method. To develop appropriate use criteria for DLS, researchers at the RAND Corporation recently employed the RAND/UCLA Appropriateness Method, which involves a systematic review of the literature and a multidisciplinary expert panel process. Experts reviewed a synopsis of published literature and rated the appropriateness of five common operative approaches for 260 different clinical scenarios. In the present work, we updated the literature review and compared panelists' ratings in scenarios where imbalance was present versus absent. This work was funded by the Collaborative Spine Research Foundation, a group of surgical specialty societies and device manufacturers. On the basis of 13 eligible studies that examined sagittal imbalance and outcomes in patients with DLS, imbalance was associated with worse functional status in the absence of surgery and worse symptoms and complications postoperatively. Panelists' ratings demonstrated a consistent pattern across the diverse clinical scenarios. In general, when imbalance was present, surgery was more likely to be appropriate or necessary, including in some situations where surgery would otherwise be inappropriate. For patients with moderate to severe symptoms and imbalance, a deformity correction procedure was usually appropriate

  11. Angular parallelization of a curvilinear Sn transport theory method

    International Nuclear Information System (INIS)

    Haghighat, A.

    1991-01-01

    In this paper a parallel algorithm for angular domain decomposition (or parallelization) of an r-dependent spherical Sn transport theory method is derived. The parallel formulation is incorporated into TWOTRAN-II using the IBM Parallel Fortran compiler and implemented on an IBM 3090/400 (with four processors). The behavior of the parallel algorithm for different physical problems is studied, and it is concluded that the parallel algorithm behaves differently in the presence of a fission source as opposed to its absence; this is attributed to the relative contributions of the source and angular redistribution terms in the Sn algorithm. Further, the parallel performance of the algorithm is measured for various problem sizes and different combinations of angular subdomains or processors. Poor parallel efficiencies between ∼35% and 50% are obtained in situations where the relative difference between parallel and serial iterations is ∼50%. High parallel efficiencies between ∼60% and 90% are obtained in situations where the relative difference between parallel and serial iterations is <35%

  12. Combining Compile-Time and Run-Time Parallelization

    Directory of Open Access Journals (Sweden)

    Sungdo Moon

    1999-01-01

    This paper demonstrates that significant improvements to automatic parallelization technology require that existing systems be extended in two ways: (1) they must combine high-quality compile-time analysis with low-cost run-time testing; and (2) they must take control flow into account during analysis. We support this claim with the results of an experiment that measures the safety of parallelization at run time for loops left unparallelized by the Stanford SUIF compiler's automatic parallelization system. We present results of measurements on programs from two benchmark suites, SPECFP95 and the NAS sample benchmarks, which identify inherently parallel loops in these programs that are missed by the compiler. We characterize the remaining parallelization opportunities and find that most of the loops require run-time testing, analysis of control flow, or some combination of the two. We present a new compile-time analysis technique that can be used to parallelize most of these remaining loops. This technique is designed not only to improve the results of compile-time parallelization, but also to produce low-cost, directed run-time tests that allow the system to defer binding of parallelization until run time when safety cannot be proven statically. We call this approach predicated array data-flow analysis. We augment array data-flow analysis, which the compiler uses to identify independent and privatizable arrays, by associating predicates with array data-flow values. Predicated array data-flow analysis allows the compiler to derive "optimistic" data-flow values guarded by predicates; these predicates can be used to derive a run-time test guaranteeing the safety of parallelization.
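
The compile-time/run-time split advocated here can be sketched as follows (helper names hypothetical, not the SUIF implementation): when independence of the writes `a[idx[i]]` in a loop cannot be proven statically, the compiler can emit a cheap run-time test on the subscript array and bind the parallel version only when the test passes.

```python
def writes_are_independent(idx):
    """Run-time test: the loop's iterations write disjoint locations."""
    return len(set(idx)) == len(idx)

def guarded_parallel_update(a, idx, values):
    """Update a[idx[i]] = values[i], choosing a schedule at run time."""
    if writes_are_independent(idx):
        # Safe: iterations are independent, so any execution order
        # (including a parallel one) yields the same result.
        for i, j in enumerate(idx):
            a[j] = values[i]
        return "parallel"
    # Dependence detected: fall back to the original serial loop,
    # preserving the sequential write order.
    for i, j in enumerate(idx):
        a[j] = values[i]
    return "serial"
```

The "predicated" part of the paper's approach corresponds to attaching a condition like `writes_are_independent(idx)` to an optimistic data-flow fact, rather than discarding the fact outright.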

  13. Parallelization characteristics of the DeCART code

    International Nuclear Information System (INIS)

    Cho, J. Y.; Joo, H. G.; Kim, H. Y.; Lee, C. C.; Chang, M. H.; Zee, S. Q.

    2003-12-01

    This report describes the parallelization characteristics of the DeCART code and examines its parallel performance. Parallel computing algorithms are implemented in DeCART to reduce the tremendous computational burden and memory requirements involved in three-dimensional whole-core transport calculations. In the parallelization of the DeCART code, axial domain decomposition is first realized using MPI (Message Passing Interface), and then azimuthal-angle domain decomposition using either MPI or OpenMP. When MPI is used for both the axial and the angle domain decomposition, the concept of MPI grouping is employed for convenient communication within each communication world. For the parallel computation, nearly all computing modules except the thermal-hydraulic module are parallelized. These parallelized modules include the MOC ray tracing, CMFD, NEM, region-wise cross-section preparation, and cell homogenization modules. For distributed allocation, most MOC and CMFD/NEM variables are allocated only for the assigned planes, which reduces the required memory by the ratio of the number of assigned planes to the total number of planes. The parallel performance of the DeCART code is evaluated by solving two problems, a rodded variation of the C5G7 MOX three-dimensional benchmark problem and a simplified three-dimensional SMART PWR core problem. In terms of parallel performance, the DeCART code shows good speedups of about 40.1 and 22.4 in the ray-tracing module and about 37.3 and 20.2 in total computing time when using 48 CPUs on the IBM Regatta and 24 CPUs on the Linux cluster, respectively. In the comparison between MPI and OpenMP, OpenMP shows somewhat better performance than MPI. It is therefore concluded that the first priority in the parallel computation of the DeCART code is the axial domain decomposition using MPI, then the angular domain using OpenMP, and finally the angular

  14. PSHED: a simplified approach to developing parallel programs

    International Nuclear Information System (INIS)

    Mahajan, S.M.; Ramesh, K.; Rajesh, K.; Somani, A.; Goel, M.

    1992-01-01

    This paper presents a simplified approach in the form of a tree-structured computational model for parallel application programs. An attempt is made to provide a standard user interface for executing programs on the BARC Parallel Processing System (BPPS), a scalable distributed memory multiprocessor. The interface package, called PSHED, provides a basic framework for representing and executing parallel programs on different parallel architectures. The PSHED package incorporates concepts from a broad range of previous research in programming environments and parallel computations. (author). 6 refs

  15. Parallel evolutionary computation in bioinformatics applications.

    Science.gov (United States)

    Pinho, Jorge; Sobral, João Luis; Rocha, Miguel

    2013-05-01

    A large number of optimization problems within the field of Bioinformatics require methods able to handle their inherent complexity (e.g. NP-hard problems) and also demand increased computational effort. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java-based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of their efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of the parallelism-related modules allows the user to easily configure the environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
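
    The idea of keeping the algorithm independent of its parallel execution environment can be sketched as follows (an illustrative toy, not the ParJECoLi API): the fitness evaluation of an evolutionary algorithm is delegated to an interchangeable executor, so the same loop runs serially or in parallel depending on configuration.

```python
# Illustrative toy, not the ParJECoLi API: a genetic algorithm whose
# fitness evaluations are dispatched through an executor, so switching
# the executor (threads, processes, ...) leaves the algorithm untouched.
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(individual):
    # Toy objective: maximize the number of 1-bits (OneMax).
    return sum(individual)

def evolve(pop_size=20, genes=32, generations=30, workers=4, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genes)]
           for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            scores = list(pool.map(fitness, pop))  # parallel evaluation
            ranked = [ind for _, ind in sorted(
                zip(scores, pop), key=lambda t: -t[0])]
            parents = ranked[:pop_size // 2]       # elitist selection
            children = []
            for p in parents:
                child = p[:]
                child[rng.randrange(genes)] ^= 1   # point mutation
                children.append(child)
            pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

    Swapping `ThreadPoolExecutor` for a process- or cluster-backed executor with the same `map` interface would change the execution environment without touching the evolutionary loop, which is the kind of transparency the abstract describes.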

  16. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel import is an urgent question today. Legalization of parallel import in Russia is expedient; this conclusion is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of this decision and to apply remedies to minimize them.

  17. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The predictions of the parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead

  18. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The predictions of the parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead
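
    The overhead-model methodology in the two records above can be illustrated with a generic model (the functional form and coefficients are assumptions, not the actual TORT model): wall time is the serial work divided among processors plus a fixed startup cost and a per-processor communication cost, which makes the speedup peak and then degrade.

```python
# Generic parallel-overhead model (assumed form and coefficients, not
# the actual TORT model): compute time shrinks as 1/P while overhead
# grows with P, so speedup peaks at a finite processor count.

def wall_time(serial_time, procs, t_startup=0.5, t_comm=0.05):
    """Modeled wall time: divided work plus fixed and per-CPU overhead."""
    return serial_time / procs + t_startup + t_comm * procs

def speedup(serial_time, procs, **kw):
    return serial_time / wall_time(serial_time, procs, **kw)

# The speedup is maximized where the 1/P work term balances the linear
# overhead term; past that point adding processors slows the run down.
best_procs = max(range(1, 129), key=lambda p: speedup(100.0, p))
```

    Fitting such a model to measured timings identifies the dominant overhead term, which is exactly the step that motivated the new algorithm in the abstracts above.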

  19. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and an acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  20. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10⁴ parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.
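
    The scheme described above can be sketched in miniature (a toy model, not the paper's GPU implementation): independent walkers sample with the current multicanonical weights, their histograms are merged, and the merged histogram drives a joint weight update.

```python
# Toy model of the parallel multicanonical idea, not the paper's GPU
# code: the walkers are independent and could run on separate threads
# or GPU blocks; here they run in a loop for clarity.
import math
import random

N_STATES = 16  # energy levels of a toy system

def run_walker(weights, steps, rng):
    """One walker: Metropolis sampling with multicanonical weights."""
    hist = [0] * N_STATES
    e = rng.randrange(N_STATES)
    for _ in range(steps):
        e_new = max(0, min(N_STATES - 1, e + rng.choice((-1, 1))))
        # Accept with probability min(1, exp(w_new - w_old)).
        if math.log(rng.random() + 1e-300) < weights[e_new] - weights[e]:
            e = e_new
        hist[e] += 1
    return hist

def parallel_muca(n_walkers=8, iterations=10, steps=2000, seed=42):
    weights = [0.0] * N_STATES
    for it in range(iterations):
        # Each walker gets its own RNG stream (independent simulations).
        hists = [run_walker(weights, steps,
                            random.Random(seed + 1000 * it + k))
                 for k in range(n_walkers)]
        # Merge the walkers' histograms (the communicated update).
        merged = [sum(h) for h in zip(*hists)]
        # Standard multicanonical update on the merged histogram.
        weights = [w - math.log(h + 1) for w, h in zip(weights, merged)]
    return weights
```

    Because each walker touches only the read-only weight array and its own histogram, the inner loop parallelizes trivially, which is what makes the method a good fit for many-thread GPU architectures.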

  1. Parallel thermal radiation transport in two dimensions

    International Nuclear Information System (INIS)

    Smedley-Stevenson, R.P.; Ball, S.R.

    2003-01-01

    This paper describes the distributed memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state-of-the-art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 Tera-Flop MPP (massively parallel processor) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  2. Parallel thermal radiation transport in two dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Smedley-Stevenson, R.P.; Ball, S.R. [AWE Aldermaston (United Kingdom)

    2003-07-01

    This paper describes the distributed memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state-of-the-art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 Tera-Flop MPP (massively parallel processor) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  3. Parallel processing for artificial intelligence 1

    CERN Document Server

    Kanal, LN; Kumar, V; Suttner, CB

    1994-01-01

    Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, data filtering and data mining. The publication is divided into six sections. The first addresses parallel computing for processing and understanding images. The second discusses knowledge representation in semantic networks.

  4. Comparison of parallel viscosity with neoclassical theory

    International Nuclear Information System (INIS)

    Ida, K.; Nakajima, N.

    1996-04-01

    Toroidal rotation profiles are measured with charge exchange spectroscopy for plasmas heated with tangential NBI in the CHS heliotron/torsatron device in order to estimate the parallel viscosity. The parallel viscosity derived from the toroidal rotation velocity shows good agreement with the neoclassical parallel viscosity plus the perpendicular viscosity (μ⊥ = 2 m²/s). (author)

  5. Adapting algorithms to massively parallel hardware

    CERN Document Server

    Sioulas, Panagiotis

    2016-01-01

    In the recent years, the trend in computing has shifted from delivering processors with faster clock speeds to increasing the number of cores per processor. This marks a paradigm shift towards parallel programming in which applications are programmed to exploit the power provided by multi-cores. Usually there is gain in terms of the time-to-solution and the memory footprint. Specifically, this trend has sparked an interest towards massively parallel systems that can provide a large number of processors, and possibly computing nodes, as in the GPUs and MPPAs (Massively Parallel Processor Arrays). In this project, the focus was on two distinct computing problems: k-d tree searches and track seeding cellular automata. The goal was to adapt the algorithms to parallel systems and evaluate their performance in different cases.

  6. Implementing Shared Memory Parallelism in MCBEND

    Directory of Open Access Journals (Sweden)

    Bird Adam

    2017-01-01

    Full Text Available MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.
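
    The motivation for shared-memory parallelism can be sketched as follows (a Python analogue of the OpenMP pattern, not MCBEND code; note that CPython threads do not provide true CPU parallelism because of the GIL, so this only illustrates the data-sharing structure): all threads read one copy of the model data and accumulate into a common tally under a lock, instead of each process holding its own copy of the model.

```python
# Python analogue of the shared-memory pattern, not MCBEND code. The
# threads share one copy of the model data (the point of OpenMP here),
# though CPython's GIL means they add no CPU parallelism in this sketch.
import random
import threading

GEOMETRY = list(range(100_000))  # stands in for a large shared model

def worker(histories, tally, lock, seed):
    rng = random.Random(seed)
    local = 0.0
    for _ in range(histories):
        # Every thread reads the same GEOMETRY; no per-process copies.
        local += GEOMETRY[rng.randrange(len(GEOMETRY))] * 1e-5
    with lock:  # shared accumulator, like an OpenMP reduction
        tally[0] += local

def run(n_threads=4, histories=10_000):
    tally = [0.0]
    lock = threading.Lock()
    threads = [threading.Thread(target=worker,
                                args=(histories, tally, lock, seed))
               for seed in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return tally[0]
```

    Accumulating into a thread-local `local` and merging under the lock only once per thread is the same trade-off an OpenMP `reduction` clause makes: it keeps the shared-state contention to a minimum.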

  7. Parallel Task Processing on a Multicore Platform in a PC-based Control System for Parallel Kinematics

    Directory of Open Access Journals (Sweden)

    Harald Michalik

    2009-02-01

    Full Text Available Multicore platforms are platforms that have one physical processor chip with multiple cores interconnected via a chip-level bus. Because they deliver greater computing power through concurrency and offer greater system density, multicore platforms provide the best qualifications to address the performance bottleneck encountered in PC-based control systems for parallel kinematic robots with heavy CPU load. Heavy-load control tasks are generated by new control approaches that include features like singularity prediction, structure control algorithms, vision data integration and similar tasks. In this paper we introduce the parallel task scheduling extension of a communication architecture specially tailored for the development of PC-based control of parallel kinematics. The scheduling is specially designed for processing on a multicore platform. It breaks down the serial task processing of the robot control cycle and extends it with parallel task processing paths in order to enhance the overall control performance.

  8. AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics

    Science.gov (United States)

    Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.

    2017-05-01

    We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs on the disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving hybrid model equations are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing well-tested hybrid model of plasma that runs in parallel using multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for computation of Faraday’s Equation, resulting in an explicit-implicit scheme for the hybrid model equation. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.

  9. Parallel fabrication of macroporous scaffolds.

    Science.gov (United States)

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  10. Fabrication of a Customized Ball Abutment to Correct a Nonparallel Implant Abutment for a Mandibular Implant-Supported Removable Partial Prosthesis: A Case Report

    Directory of Open Access Journals (Sweden)

    Hossein Dasht

    2017-12-01

    Full Text Available Introduction: While using an implant-supported removable partial prosthesis, the implant abutments should be parallel to one another along the path of insertion. If the implants and their attachments are placed vertically on a similar occlusal plane, not only is the retention improved, but the prosthesis will also be maintained for a longer period. Case Report: A 65-year-old male patient was referred to the School of Dentistry in Mashhad, Iran with complaints of discomfort with the removable partial denture for his mandible. Due to the lack of parallelism in the supporting implants, a prefabricated ball abutment could not be used. As a result, a customized ball abutment was fabricated in order to correct the non-parallelism of the implants. Conclusion: Using UCLA abutments could be a cost-efficient approach for the correction of misaligned implant abutments in implant-supported overdentures.

  11. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-05-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. 6 figs

  12. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-01-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. (orig.)

  13. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    Science.gov (United States)

    Nash, Thomas

    1989-12-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described.

  14. Electron Heating and Acceleration in a Reconnecting Magnetotail

    Science.gov (United States)

    El-Alaoui, M.; Zhou, M.; Lapenta, G.; Berchem, J.; Richard, R. L.; Schriver, D.; Walker, R. J.

    2017-12-01

    Electron heating and acceleration in the magnetotail have been investigated intensively. A major site for this process is the reconnection region. However, where and how the electrons are accelerated in a realistic three-dimensional X-line geometry is not fully understood. In this study, we employed a three-dimensional implicit particle-in-cell (iPIC3D) simulation and large-scale kinetic (LSK) simulation to address these problems. We modeled a magnetotail reconnection event observed by THEMIS in an iPIC3D simulation with initial and boundary conditions given by a global magnetohydrodynamic (MHD) simulation of Earth's magnetosphere. The iPIC3D simulation system includes the region of fast outflow emanating from the reconnection site that drives dipolarization fronts. We found that current sheet electrons exhibit elongated (cigar-shaped) velocity distributions with a higher parallel temperature. Using LSK we then followed millions of test electrons using the electromagnetic fields from iPIC3D. We found that magnetotail reconnection can generate power law spectra around the near-Earth X-line. A significant number of electrons with energies higher than 50 keV are produced. We identified several acceleration mechanisms at different locations that were responsible for energizing these electrons: non-adiabatic cross-tail drift, betatron and Fermi acceleration. Relative contributions to the energy gain of these high energy electrons from the different mechanisms will be discussed.

  15. Researching the Parallel Process in Supervision and Psychotherapy

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard

    Reflects upon how to do process research in supervision and in the parallel process. A single case study is presented illustrating how a study on parallel process can be carried out.

  16. Development of parallel/serial program analyzing tool

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Nagao, Saichi; Takigawa, Yoshio; Kumakura, Toshimasa

    1999-03-01

    Japan Atomic Energy Research Institute has been developing 'KMtool', a parallel/serial program analyzing tool, in order to promote the parallelization of science and engineering computation programs. KMtool analyzes the performance of programs written in FORTRAN77 and MPI, and it reduces the effort required for parallelization. This paper describes the development purpose, design, utilization and evaluation of KMtool. (author)

  17. Simulation Exploration through Immersive Parallel Planes: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny; Smith, Steve

    2016-03-01

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest: a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
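
    The classic parallel-coordinates mapping that the authors generalize, and the brushing operation they describe, can be sketched as follows (illustrative code, not the system described in the paper):

```python
# Illustrative code, not the paper's system: the classic mapping from a
# multivariate observation to a parallel-coordinates polyline, plus a
# simple one-dimensional "brush" selection.

def to_polyline(obs, mins, maxs, axis_spacing=1.0):
    """Return [(x, y), ...] vertices, one per dimension/axis."""
    pts = []
    for i, v in enumerate(obs):
        span = (maxs[i] - mins[i]) or 1.0   # guard constant dimensions
        y = (v - mins[i]) / span            # normalize onto the axis
        pts.append((i * axis_spacing, y))
    return pts

def brush(data, dim, lo, hi):
    """Select observations whose value in dimension `dim` is in [lo, hi]."""
    return [obs for obs in data if lo <= obs[dim] <= hi]

data = [(1.0, 10.0, 0.3), (2.0, 30.0, 0.9), (3.0, 20.0, 0.6)]
mins = [min(col) for col in zip(*data)]
maxs = [max(col) for col in zip(*data)]
lines = [to_polyline(obs, mins, maxs) for obs in data]
```

    The parallel-planes generalization replaces the per-dimension axes with per-pair rectangles, but the observation-to-polyline idea and the brush-based selection carry over unchanged.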

  18. Parallel programming practical aspects, models and current limitations

    CERN Document Server

    Tarkov, Mikhail S

    2014-01-01

    Parallel programming is designed for the use of parallel computer systems to solve time-consuming problems that cannot be solved on a sequential computer in a reasonable time. These problems can be divided into two classes: 1. Processing large data arrays (including processing images and signals in real time); 2. Simulation of complex physical processes and chemical reactions. For each of these classes, prospective methods are designed for solving problems. For data processing, one of the most promising technologies is the use of artificial neural networks. The particle-in-cell method and cellular automata are very useful for simulation. Problems of the scalability of parallel algorithms and the transfer of existing parallel programs to future parallel computers are very acute now. An important task is to optimize the use of the equipment (including the CPU cache) of parallel computers. Along with parallelizing information processing, it is essential to ensure the processing reliability by the relevant organization ...

  19. Potenciação alelopática de extratos vegetais na germinação e no crescimento inicial de picão-preto e alface [Allelopathy of plant extracts on germination and initial growth of beggartick (Bidens pilosa L.) and lettuce (Lactuca sativa L.)]

    Directory of Open Access Journals (Sweden)

    Magda Cristiani Ferreira

    2007-08-01

    Full Text Available Beggartick (Bidens pilosa L.) is a very aggressive weed present in almost all of Brazil. The main control method is chemical, but it has a high environmental impact, a risk of human intoxication and the possibility of causing phytotoxicity to crops. The objective of this work was to evaluate the allelopathic effect of ethanolic extracts of Eucalyptus citriodora Hook. and Pinus elliottii L. on the germination and initial growth of beggartick and lettuce (Lactuca sativa L.). Four concentrations of each extract were tested (0.25, 0.50, 1.0 and 2.0%) in addition to the control (0.0%, distilled water with Tween 20 at 0.08%). The experimental design was completely randomized, with four replicates, under laboratory conditions. The P. elliottii extract caused no allelopathic effect on beggartick or lettuce. The E. citriodora extract significantly reduced the germination speed index (GSI) of beggartick at all concentrations tested when compared with the control (0.0%), whereas for lettuce the GSI was significantly affected only at the 2.0% concentration. For root length, no significant difference between treatments was observed for either of the two extracts, for lettuce or for beggartick.

  20. Parallelization of Subchannel Analysis Code MATRA

    International Nuclear Information System (INIS)

    Kim, Seongjin; Hwang, Daehyun; Kwon, Hyouk

    2014-01-01

    A stand-alone calculation with the MATRA code takes an acceptable computing time for thermal margin calculations, while a considerably longer time is needed to solve whole-core pin-by-pin problems. In addition, improving the computation speed of the MATRA code is strongly required to satisfy the overall performance of multi-physics coupling calculations. Therefore, a parallel approach to improve and optimize the computability of the MATRA code is proposed and verified in this study. The parallel algorithm is embodied in the MATRA code using the MPI communication method, and modification of the previous code structure was minimized. The improvement is confirmed by comparing the results between the single- and multiple-processor algorithms. The speedup and efficiency are also evaluated with increasing numbers of processors. The parallel algorithm was implemented in the subchannel code MATRA using MPI. The performance of the parallel algorithm was verified by comparing the results with those from MATRA with a single processor. It is also noted that the performance of the MATRA code was greatly improved by implementing the parallel algorithm for the 1/8-core and whole-core problems.
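
    The speedup and efficiency bookkeeping used in evaluations like this one can be sketched as follows (the timings are hypothetical, not measured MATRA results): speedup is S(p) = T(1)/T(p) and efficiency is E(p) = S(p)/p.

```python
# Speedup/efficiency bookkeeping; the timings below are hypothetical,
# not measured MATRA results.

def metrics(timings):
    """timings: {procs: wall_seconds}, must include a 1-processor run."""
    t1 = timings[1]
    return {p: {"speedup": t1 / t, "efficiency": t1 / (t * p)}
            for p, t in sorted(timings.items())}

timings = {1: 1200.0, 4: 330.0, 16: 95.0, 64: 31.0}
report = metrics(timings)
# Efficiency below 1.0 reflects parallel overhead; a near-constant
# efficiency across processor counts indicates good scalability.
```

    Reporting both quantities, as the abstract does, separates the raw gain (speedup) from how well the added processors are being used (efficiency).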