WorldWideScience

Sample records for ucla parallel pic

  1. (Nearly) portable PIC code for parallel computers

    International Nuclear Information System (INIS)

    Decyk, V.K.

    1993-01-01

    As part of the Numerical Tokamak Project, the author has developed a (nearly) portable, one-dimensional version of the GCPIC algorithm for particle-in-cell codes on parallel computers. This algorithm uses a spatial domain decomposition for the fields, and passes particles from one domain to another as the particles move spatially. With only minor changes, the code has been run in parallel on the Intel Delta, the Cray C-90, the IBM ES/9000 and a cluster of workstations. After a line-by-line translation into CM Fortran, the code was also run on the CM-200. Impressive speeds have been achieved on both the Intel Delta and the Cray C-90: around 30 nanoseconds per particle per time step. In addition, the author was able to isolate the data management modules, so that the physics modules were changed little from their sequential versions, and the data management modules can be used as "black boxes."
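The domain hand-off described above can be sketched in a few lines. This is a hypothetical single-process illustration (the periodic 1D layout and all names are assumptions); a real GCPIC code would perform the hand-off with message passing between nodes.

```python
# Hypothetical sketch of the GCPIC hand-off: space is cut into equal
# domains, and after each push any particle that has left its domain is
# passed to the domain that now owns it. A real parallel code would do
# this with MPI sends/receives; this version runs in one process.

def exchange_particles(domains, dx_domain):
    """Hand particles that left their domain to the correct owner (periodic)."""
    ndom = len(domains)
    outgoing = [[] for _ in range(ndom)]
    for i, parts in enumerate(domains):
        keep = []
        for x in parts:
            owner = int(x // dx_domain) % ndom
            (keep if owner == i else outgoing[owner]).append(x)
        domains[i] = keep
    for i in range(ndom):
        domains[i].extend(outgoing[i])
    return domains

# Four unit-width domains on [0, 4); the particle at x = 1.2 has drifted
# out of domain 0 and is handed to domain 1:
domains = exchange_particles([[0.5, 1.2], [1.5], [2.5], [3.5]], dx_domain=1.0)
```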

  2. Parallel pic plasma simulation through particle decomposition techniques

    International Nuclear Information System (INIS)

    Briguglio, S.; Vlad, G.; Di Martino, B.; Naples, Univ. 'Federico II'

    1998-02-01

    Particle-in-cell (PIC) codes are among the major candidates to yield a satisfactory description of the detail of kinetic effects, such as the resonant wave-particle interaction, relevant in determining the transport mechanism in magnetically confined plasmas. A significant improvement in the simulation performance of such codes can be expected from parallelization, e.g., by distributing the particle population among several parallel processors. Parallelization of a hybrid magnetohydrodynamic-gyrokinetic code has been accomplished within the High Performance Fortran (HPF) framework, and tested on the IBM SP2 parallel system, using a 'particle decomposition' technique. The adopted technique requires a moderate effort in porting the code to parallel form and results in intrinsic load balancing and modest interprocessor communication. The performance tests confirm the hypothesis of the high effectiveness of the strategy if targeted towards moderately parallel architectures. Optimal use of resources is also discussed with reference to a specific physics problem.
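A minimal sketch of the particle-decomposition idea, with four workers simulated in one process; the nearest-grid-point deposit and all names are illustrative assumptions, not the paper's code. In the real HPF/MPI version each processor holds its own particle subset plus a full grid copy, and the global sum is a reduction.

```python
import numpy as np

# 'Particle decomposition': every worker owns a subset of the particles
# but a full copy of the grid; a global sum (an MPI allreduce in a real
# code) recovers the total charge density. Load balance is intrinsic
# because particle counts per worker are fixed.

def deposit(positions, ngrid, length):
    """Nearest-grid-point charge deposition onto a full local grid copy."""
    rho = np.zeros(ngrid)
    cells = (positions / length * ngrid).astype(int) % ngrid
    np.add.at(rho, cells, 1.0)   # accumulate, handling repeated cells
    return rho

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1000)

chunks = np.array_split(x, 4)                          # 4 "workers"
rho_total = sum(deposit(c, 32, 1.0) for c in chunks)   # global sum
```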

  3. Massive parallel 3D PIC simulation of negative ion extraction

    Science.gov (United States)

    Revel, Adrien; Mochalskyy, Serhiy; Montellano, Ivar Mauricio; Wünderlich, Dirk; Fantz, Ursel; Minea, Tiberiu

    2017-09-01

    The 3D PIC-MCC code ONIX is dedicated to modeling negative hydrogen/deuterium ion (NI) extraction and the co-extraction of electrons from radio-frequency driven, low-pressure plasma sources. It provides valuable insight into the complex phenomena involved in the extraction process. In previous calculations, a mesh size larger than the Debye length was used, implying numerical electron heating. Important steps have been achieved in terms of computational performance and parallelization efficiency, allowing successful massively parallel calculations (4096 cores), imperative for resolving the Debye length. In addition, the numerical algorithms have been improved in terms of grid treatment, i.e., the electric field near the complex geometry boundaries (plasma grid) is calculated more accurately. The revised model preserves the full 3D treatment, but can take advantage of a highly refined mesh. ONIX was used to investigate the role of the mesh size, the re-injection scheme for lost particles (extracted or wall-absorbed), and the electron thermalization process on the calculated extracted current and plasma characteristics. It is demonstrated that all numerical schemes give the same NI current distribution for extracted ions. Concerning the electrons, the pair-injection technique is found to be well adapted to simulating the sheath in front of the plasma grid.
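The Debye-length constraint behind the numerical-heating remark above can be made concrete with a small helper (SI units; the sample temperature, density, and cell size below are assumed values, not numbers from the paper).

```python
import math

# Explicit PIC cells larger than the Debye length cause numerical (grid)
# heating, which is why resolving lambda_D matters in the text above.

def debye_length(Te_eV, ne_m3):
    """Electron Debye length: sqrt(eps0 * kTe / (ne * e^2)), Te given in eV."""
    eps0 = 8.8541878128e-12   # F/m
    e = 1.602176634e-19       # C; also converts eV -> J
    return math.sqrt(eps0 * (Te_eV * e) / (ne_m3 * e * e))

lam = debye_length(Te_eV=2.0, ne_m3=1e17)   # roughly 33 micrometres
dx = 2e-5                                   # proposed cell size, metres
resolved = dx <= lam                        # mesh resolves lambda_D
```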

  4. Recent progress in 3D EM/EM-PIC simulation with ARGUS and parallel ARGUS

    International Nuclear Information System (INIS)

    Mankofsky, A.; Petillo, J.; Krueger, W.; Mondelli, A.; McNamara, B.; Philp, R.

    1994-01-01

    ARGUS is an integrated, 3-D, volumetric simulation model for systems involving electric and magnetic fields and charged particles, including materials embedded in the simulation region. The code offers the capability to carry out time-domain and frequency-domain electromagnetic simulations of complex physical systems. ARGUS offers a Boolean solid-model structure-input capability that can include essentially arbitrary structures in the computational domain, and a modular architecture that allows multiple physics packages to access the same data structure and to share common code utilities. Physics modules are in place to compute electrostatic and electromagnetic fields, the normal modes of RF structures, and self-consistent particle-in-cell (PIC) simulation in either a time-dependent or a steady-state mode. The PIC modules include multiple particle species, the Lorentz equations of motion, and algorithms for the creation of particles by emission from material surfaces, injection onto the grid, and ionization. In this paper, we present an updated overview of ARGUS, with particular emphasis on recent algorithmic and computational advances. These include a completely rewritten frequency-domain solver which efficiently treats lossy materials and periodic structures, a parallel version of ARGUS with support for both shared-memory parallel vector (i.e., CRAY) machines and distributed-memory massively parallel MIMD systems, and numerous new applications of the code.

  5. A parallel code named NEPTUNE for 3D fully electromagnetic and pic simulations

    International Nuclear Information System (INIS)

    Dong Ye; Yang Wenyuan; Chen Jun; Zhao Qiang; Xia Fang; Ma Yan; Xiao Li; Sun Huifang; Chen Hong; Zhou Haijing; Mao Zeyao; Dong Zhiwei

    2010-01-01

    A parallel code named NEPTUNE for 3D fully electromagnetic and particle-in-cell (PIC) simulations is introduced, which can run on Linux systems with hundreds to thousands of CPUs. NEPTUNE is suitable for simulating entire 3D HPM devices; many HPM devices have been simulated and designed with it. In NEPTUNE, the electromagnetic fields are updated using the finite-difference time-domain (FDTD) method to solve Maxwell's equations, and the particles are advanced using the Buneman-Boris method to solve the relativistic Newton-Lorentz equation. Electromagnetic fields and particles are coupled through the linear-weighting interpolation PIC method, and the electric field components are corrected using the Boris method of solving Poisson's equation in order to ensure charge conservation. NEPTUNE can construct many complicated geometric structures, such as arbitrary axially symmetric structures, plane-transforming structures, slow-wave structures, coupling holes, foils, and so on. The boundary conditions used in NEPTUNE are introduced in brief, including the perfect electric conductor boundary, the external wave boundary, and the particle boundary. Finally, some typical HPM devices are simulated and tested using NEPTUNE, including MILO, RBWO, VCO, and RKA. The simulation results show correct and credible physical behavior, and the parallel efficiencies are also given. (authors)
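The Boris particle advance named above is a standard PIC scheme. A non-relativistic sketch of one Boris step follows (the Buneman-Boris form used by the code adds a Lorentz factor; the simplified version below shows the half-kick / rotate / half-kick structure, with illustrative names).

```python
import numpy as np

# One non-relativistic Boris step: half electric kick, exact-magnitude
# magnetic rotation, second half electric kick.

def boris_push(v, E, B, qm, dt):
    v_minus = v + 0.5 * qm * dt * E          # first half electric kick
    t = 0.5 * qm * dt * B                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # rotation preserves |v|
    return v_plus + 0.5 * qm * dt * E        # second half electric kick

# With E = 0 the push is a pure rotation, so the speed is conserved:
v = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    v = boris_push(v, np.zeros(3), np.array([0.0, 0.0, 1.0]), qm=1.0, dt=0.1)
```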

  6. Status and future plans for open source QuickPIC

    Science.gov (United States)

    An, Weiming; Decyk, Viktor; Mori, Warren

    2017-10-01

    QuickPIC is a three-dimensional (3D) quasi-static particle-in-cell (PIC) code developed on the UPIC framework. It can be used for efficiently modeling plasma-based accelerator (PBA) problems. With the quasi-static approximation, QuickPIC can use different time scales for calculating the beam (or laser) evolution and the plasma response, and a 3D plasma wakefield can be simulated using a two-dimensional (2D) PIC code in which the time variable is ξ = ct - z, where z is the beam propagation direction. QuickPIC can be a thousand times faster than a conventional PIC code when simulating a PBA. It uses an MPI/OpenMP hybrid parallel algorithm, and can run on anything from a laptop to the largest supercomputers. The open-source QuickPIC is an object-oriented program with high-level classes written in Fortran 2003. It can be found at https://github.com/UCLA-Plasma-Simulation-Group/QuickPIC-OpenSource.git
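The co-moving variable mentioned above can be illustrated directly: a beam slice travelling at (essentially) c sits at a fixed ξ, which is why the 3D wake can be advanced slice-by-slice by a 2D solver. A minimal numeric check (illustrative only):

```python
# xi = c*t - z: a luminal marker z = c*t stays at xi = 0 for every t,
# so the quasi-static code can treat xi as a frozen "slice" coordinate
# while the beam evolves on a much slower time scale.
c = 299792458.0  # m/s

def xi(t, z):
    return c * t - z

vals = [xi(t, c * t) for t in (0.0, 1e-12, 2e-12, 1e-9)]
```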

  7. High-Fidelity RF Gun Simulations with the Parallel 3D Finite Element Particle-In-Cell Code Pic3P

    Energy Technology Data Exchange (ETDEWEB)

    Candel, A; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Schussman, G.; Ko, K.; /SLAC

    2009-06-19

    SLAC's Advanced Computations Department (ACD) has developed the first parallel finite-element 3D particle-in-cell (PIC) code, Pic3P, for simulations of RF guns and other space-charge-dominated beam-cavity interactions. Pic3P solves the complete set of Maxwell-Lorentz equations and thus includes space charge, retardation and wakefield effects from first principles. Pic3P uses higher-order finite-element methods on unstructured conformal meshes. A novel scheme for causal adaptive refinement and dynamic load balancing enables unprecedented simulation accuracy, aiding the design and operation of the next generation of accelerator facilities. Application to the Linac Coherent Light Source (LCLS) RF gun is presented.

  8. Large Scale Earth's Bow Shock with Northern IMF as Simulated by PIC Code in Parallel with MHD Model

    Science.gov (United States)

    Baraka, Suleiman

    2016-06-01

    In this paper, we propose a 3D kinetic model (particle-in-cell, PIC) for the description of the large-scale Earth's bow shock. The proposed version is stable and does not require huge or extensive computer resources. Because PIC simulations work with scaled plasma and field parameters, we also propose to validate our code by comparing its results with the available MHD simulations under the same scaled solar wind (SW) and interplanetary magnetic field (IMF) conditions. We report new results from the two models. In both codes the Earth's bow shock position is found to be ≈14.8 R_E along the Sun-Earth line, and ≈29 R_E on the dusk side. These findings are consistent with past in situ observations. Both simulations reproduce the theoretical jump conditions at the shock. However, the PIC code density and temperature distributions are inflated and slightly shifted sunward when compared to the MHD results. Kinetic electron motions and reflected ions upstream may cause this sunward shift. Species distributions in the foreshock region are depicted within the transition of the shock (measured ≈2 c/ω_pi for Θ_Bn = 90° and M_MS = 4.7) and in the downstream. The size of the foot jump in the magnetic field at the shock is measured to be 1.7 c/ω_pi. In the foreshock region, the thermal velocity is found to be 213 km/s at 15 R_E, and 63 km/s at 12 R_E (magnetosheath region). Despite the large cell size of the current version of the PIC code, it can capture the macrostructure of planetary magnetospheres in a very short time, so it can be used for pedagogical test purposes. It is also likely complementary to MHD in deepening our understanding of the large-scale magnetosphere.
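The shock widths above are quoted in units of c/ω_pi, the ion inertial length. A small helper to evaluate it in SI units; the solar-wind density used below is an assumed typical value, not a number taken from the paper.

```python
import math

# Ion inertial length c/omega_pi, the natural length scale for the
# shock-transition measurements quoted above.

def ion_inertial_length(ni_m3, Z=1, A=1):
    """c / omega_pi with omega_pi = sqrt(ni * (Z*e)^2 / (eps0 * A * m_p))."""
    c = 299792458.0
    e = 1.602176634e-19
    eps0 = 8.8541878128e-12
    m_p = 1.67262192369e-27
    omega_pi = math.sqrt(ni_m3 * (Z * e) ** 2 / (eps0 * A * m_p))
    return c / omega_pi

d_i = ion_inertial_length(5e6)   # ~5 protons/cm^3 -> roughly 100 km
```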

  9. A portable approach for PIC on emerging architectures

    Science.gov (United States)

    Decyk, Viktor

    2016-03-01

    A portable approach for designing Particle-in-Cell (PIC) algorithms on emerging exascale computers is based on the recognition that 3 distinct programming paradigms are needed. They are: low-level vector (SIMD) processing, middle-level shared-memory parallel programming, and high-level distributed-memory programming. In addition, there is a memory hierarchy associated with each level. Such algorithms can be initially developed using vectorizing compilers, OpenMP, and MPI. This is the approach recommended by Intel for the Phi processor. These algorithms can then be translated and possibly specialized to other programming models and languages, as needed. For example, the vector processing and shared-memory programming might be done with CUDA instead of vectorizing compilers and OpenMP, but generally the algorithm itself is not greatly changed. The UCLA PICKSC web site at http://www.idre.ucla.edu/ contains example open source skeleton codes (mini-apps) illustrating each of these three programming models, individually and in combination. Fortran 2003 now supports abstract data types, and design patterns can be used to support a variety of implementations within the same code base. Fortran 2003 also supports interoperability with C, so that implementations in C languages are also easy to use. Finally, main codes can be translated into dynamic environments such as Python, while still taking advantage of high-performing compiled languages. Parallel languages are still evolving, with interesting developments in Co-array Fortran, UPC, and OpenACC, among others, and these can also be supported within the same software architecture. Work supported by NSF and DOE Grants.
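The lowest ("vector"/SIMD) level of the three-level scheme described above can be sketched with whole-array operations, using numpy here as a stand-in for what a vectorizing compiler exploits; the shared-memory and MPI levels would partition these arrays across threads and nodes. All names are illustrative assumptions.

```python
import numpy as np

# Particle push written as whole-array ("vector") operations rather than
# a scalar per-particle loop; this is the form SIMD units and
# vectorizing compilers can exploit.

def push_vectorized(x, v, E_at_x, qm, dt):
    v = v + qm * dt * E_at_x   # one vector update for all particles
    x = x + dt * v             # leapfrog-style position update
    return x, v

x = np.zeros(4)
v = np.ones(4)
x, v = push_vectorized(x, v, E_at_x=np.zeros(4), qm=1.0, dt=0.5)
```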

  10. PIC 16 F84

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Gi Cheol; Min, Han Sik

    2001-11-15

    The contents of this book are: an introduction to microprocessors; basics of microcomputer practice; an introduction to one-chip microcomputers; basic PIC commands; the instruction simulator and in-circuit emulator, covering what a PIC simulator is and MPLAB directions; making a PIC ROM writer, with instructions for La's PIC Micro Programmer; PIC programming, learning commands through examples and controlling hardware with the C language; and practical PIC application tasks, a line-tracer automobile and an ultrasonic radar, with circuits, source programs, and monitor programs.

  11. PIC 16 F84

    International Nuclear Information System (INIS)

    Jung, Gi Cheol; Min, Han Sik

    2001-11-01

    The contents of this book are: an introduction to microprocessors; basics of microcomputer practice; an introduction to one-chip microcomputers; basic PIC commands; the instruction simulator and in-circuit emulator, covering what a PIC simulator is and MPLAB directions; making a PIC ROM writer, with instructions for La's PIC Micro Programmer; PIC programming, learning commands through examples and controlling hardware with the C language; and practical PIC application tasks, a line-tracer automobile and an ultrasonic radar, with circuits, source programs, and monitor programs.

  12. PIC Detector for Piano Chords

    Directory of Open Access Journals (Sweden)

    Barbancho AnaM

    2010-01-01

    In this paper, a piano chord detector based on parallel interference cancellation (PIC) is presented. The proposed system makes use of the novel idea of modeling a segment of music as a third-generation mobile communications signal, specifically as a CDMA (Code Division Multiple Access) signal. The proposed model considers each piano note as a CDMA user in which the spreading code is replaced by a representative note pattern. The lack of orthogonality between the note patterns makes it necessary to design a specific thresholding matrix to decide whether the PIC outputs correspond to the actual notes composing the chord. An additional stage performing an octave test and a fifth test has been included; it improves the error rate in the detection of these intervals, which are especially difficult to detect. The proposed system attains very good results in both the detection of the notes that compose a chord and the estimation of the polyphony number.
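A toy sketch of the detection idea, assuming made-up three-note "patterns" over four frequency bins (the paper's real note patterns and thresholding matrix are not reproduced): matched filters give first amplitude estimates, and each PIC stage subtracts every other note's reconstructed contribution before re-estimating.

```python
import numpy as np

# Each "note pattern" plays the role of a CDMA spreading code; the
# patterns below are illustrative unit vectors, not real piano spectra.
patterns = np.array([[1.0, 1.0, 0.0, 0.0],
                     [0.0, 1.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0, 1.0]])
patterns = patterns / np.linalg.norm(patterns, axis=1, keepdims=True)

signal = 2.0 * patterns[0] + 1.0 * patterns[2]   # chord of notes 0 and 2

a = patterns @ signal                 # matched-filter first estimates
for _ in range(20):                   # PIC stages: cancel the other notes
    a = np.array([p @ (signal - (patterns.T @ a - ai * p))
                  for p, ai in zip(patterns, a)])
detected = a > 0.5                    # crude threshold stage
```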

  13. PACS module image communication at UCLA

    International Nuclear Information System (INIS)

    Stewart, B.K.; Taira, R.K.; Cho, P.S.; Mankovich, N.J.

    1987-01-01

    The advent of the ACR-NEMA digital and communication standard for PACS implementation between imaging, storage and display devices may simplify the networking problems inherent to PACS in the future. However, since the ACR-NEMA interface has not been implemented in manufactured products, the components of a PACS at the present time use various network interface designs, requiring substantial effort in the area of hardware and software integration. Many communication systems are used for the PACS implementation in Pediatric Radiology at UCLA, including baseband and broadband, as well as various parallel-line interface protocols, e.g. GP-IB. A VAX 11/750 minicomputer serves as the host computer for the UCLA Pediatric Radiology PACS system. Communication between the many peripherals takes place through the host computer, which acts as the central node. Several communication links have been established, primarily: host computer to other local computers, image processors, various peripherals (digitizers, storage media, etc.) and, of course, to the 512, 1024 and 2048 viewing stations.

  14. PIC microcomputer guide for beginner

    International Nuclear Information System (INIS)

    Shin, Chulho

    2001-03-01

    This book comprises four parts. The first part deals with one-chip computers; voltage, current, and resistance; electronic components; logic elements; TTL and CMOS; memory and I/O; and MDS. The second part is about the PIC16C84, describing its memory structure, registers, and the PIC16C84 command set. The third part deals with an LED control program, a jet-car LED, a quiz buzzer program, an LED spectrum, a digital dice, two digital dices, and a time bomb. The last part introduces the PIC16C71 and a temperature controller.

  15. UCLA accelerator research and development. Progress report

    International Nuclear Information System (INIS)

    1997-01-01

    This report discusses work on advanced accelerators and beam dynamics at ANL, BNL, SLAC, UCLA and Pulse Sciences Incorporated. Discussed in this report are the following concepts: wakefield acceleration studies; plasma lens research; high-gradient RF cavities and beam dynamics studies at the Brookhaven Accelerator Test Facility; RF pulse compression development; and buncher systems for high-gradient accelerator and relativistic klystron applications.

  16. Laser wakefields at UCLA and LLNL

    International Nuclear Information System (INIS)

    Mori, W.B.; Clayton, C.E.; Joshi, C.; Dawson, J.M.; Decker, C.B.; Marsh, K.; Katsouleas, T.; Darrow, C.B.; Wilks, S.C.

    1991-01-01

    The authors report on recent progress at UCLA and LLNL on the nonlinear laser wakefield scheme. They find advantages to operating in the limit where the laser pulse is narrow enough to expel all the plasma electrons from the focal region. A description of the experimental program for the new short-pulse 10 TW laser facility at LLNL is also presented.

  17. [PICS: pharmaceutical inspection cooperation scheme].

    Science.gov (United States)

    Morénas, J

    2009-01-01

    The pharmaceutical inspection cooperation scheme (PICS) is a structure comprising 34 participating authorities worldwide (as of October 2008). It was created in 1995 on the basis of the pharmaceutical inspection convention (PIC) established by the European Free Trade Association (EFTA) in 1970. The scheme has several goals: to be an internationally recognised body in the field of good manufacturing practices (GMP) and to train inspectors (through an annual seminar and expert circles devoted notably to active pharmaceutical ingredients [API], quality risk management, and computerized systems, all useful for writing inspection aide-memoires). PICS also promotes high standards for GMP inspectorates (through regular crossed audits) and provides a forum for exchanges on technical matters between inspectors, and between inspectors and the pharmaceutical industry.

  18. ATS-6 - UCLA fluxgate magnetometer

    Science.gov (United States)

    Mcpherron, R. L.; Coleman, P. J., Jr.; Snare, R. C.

    1975-01-01

    A summary of the design of the University of California at Los Angeles fluxgate magnetometer is presented. Instrument noise in the bandwidth 0.001 to 1.0 Hz is of order 85 milligamma. The DC field of the spacecraft transverse to the earth-pointing axis is 1.0 ± 21 gamma in the X direction and -2.4 ± 1.3 gamma in the Y direction. The spacecraft field parallel to this axis is less than 5 gamma. The small spacecraft field has made possible studies of the macroscopic field not previously possible at synchronous orbit. At the 96° W longitude of Applications Technology Satellite-6 (ATS-6), the earth's field is typically inclined 30° to the dipole axis at local noon. Most perturbations of the field are due to substorms. These consist of a rotation in the meridian to a more radial field followed by a subsequent rotation back. The rotation back is normally accompanied by transient variations in the azimuthal field. The exact timing of these perturbations is a function of satellite location and the details of substorm development.

  19. Electromagnetic direct implicit PIC simulation

    International Nuclear Information System (INIS)

    Langdon, A.B.

    1983-01-01

    Interesting modelling of intense electron flow has been done with implicit particle-in-cell simulation codes. In this report, the direct implicit PIC simulation approach is applied to simulations that include full electromagnetic fields. The resulting algorithm offers advantages relative to moment implicit electromagnetic algorithms and may help in our quest for robust and simpler implicit codes.

  20. Numerical experiments on unstructured PIC stability.

    Energy Technology Data Exchange (ETDEWEB)

    Day, David Minot

    2011-04-01

    Particle-In-Cell (PIC) is a method for plasma simulation. Particles are pushed with Verlet time integration. Fields are modeled using finite differences on a tensor-product mesh (cells). The unstructured PIC methods studied here instead use finite-element discretizations on unstructured (simplicial) meshes. PIC is constrained by stability limits (upper bounds) on mesh and time step sizes. Numerical evidence (2D) and analysis will be presented showing that similar bounds constrain unstructured PIC.
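For classic explicit PIC, the stability limits mentioned above are usually stated as ω_pe·Δt < 2 (resolving plasma oscillations) and, for electromagnetic codes on a grid, the CFL condition c·Δt < Δx. A hedged sketch of a combined check (these are necessary conditions, not a full stability analysis; the sample numbers are assumptions):

```python
# Two classic explicit-PIC stability limits: the plasma-frequency limit
# omega_pe*dt < 2 and, for EM codes on a 1D grid, the CFL limit c*dt < dx.

def pic_stable(dt, dx, omega_pe, c=299792458.0):
    return (omega_pe * dt < 2.0) and (c * dt < dx)

ok = pic_stable(dt=1e-12, dx=1e-3, omega_pe=5.6e11)    # both limits met
bad = pic_stable(dt=1e-11, dx=1e-3, omega_pe=5.6e11)   # omega_pe*dt = 5.6
```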

  1. SD card projects using the PIC microcontroller

    CERN Document Server

    Ibrahim, Dogan

    2010-01-01

    PIC microcontrollers are a favorite in industry and with hobbyists. These microcontrollers are versatile, simple, and low cost, making them perfect for many different applications. The 8-bit PIC is widely used in consumer electronic goods, office automation, and personal projects. Dogan Ibrahim, the author of several PIC books, has now written a book using the PIC18 family of microcontrollers to create projects with SD cards. This book is ideal for those practicing engineers, advanced students, and PIC enthusiasts who want to incorporate SD cards into their devices. SD cards are che…

  2. UCLA Translational Biomarker Development Program (UTBD)

    Energy Technology Data Exchange (ETDEWEB)

    Czernin, Johannes [Univ. of California, Los Angeles, CA (United States)

    2014-09-01

    The proposed UTBD program integrates the sciences of diagnostic nuclear medicine and (radio)chemistry with tumor biology and drug development. UTBD aims to translate new PET biomarkers for personalized medicine and to provide examples for the use of PET to determine pharmacokinetic (PK) and pharmacodynamic (PD) drug properties. The program builds on an existing partnership between the Ahmanson Translational Imaging Division (ATID) and the Crump Institute of Molecular Imaging (CIMI), the UCLA Department of Chemistry and the Division of Surgical Oncology. ATID provides the nuclear medicine training program, clinical and preclinical PET/CT scanners, biochemistry and biology labs for probe and drug development, radiochemistry labs, and two cyclotrons. CIMI provides DOE and NIH-funded training programs for radio-synthesis (START) and molecular imaging (SOMI). Other participating entities at UCLA are the Department of Chemistry and Biochemistry and the Division of Surgical Oncology. The first UTBD project focuses on deoxycytidine kinase, a rate-limiting enzyme in nucleotide metabolism, which is expressed in many cancers. Deoxycytidine kinase (dCK) positive tumors can be targeted uniquely by two distinct therapies: 1) nucleoside analog prodrugs such as gemcitabine (GEM) are activated by dCK to cytotoxic antimetabolites; 2) recently developed small molecule dCK inhibitors kill tumor cells by starving them of nucleotides required for DNA replication and repair. Since dCK-specific PET probes are now available, PET imaging of tumor dCK activity could improve the use of two different classes of drugs in a wide variety of cancers.

  3. Partial PIC-MRC Receiver Design for Single Carrier Block Transmission System over Multipath Fading Channels

    Directory of Open Access Journals (Sweden)

    Juinn-Horng Deng

    2012-01-01

    The single carrier block transmission (SCBT) system has become one of the most popular modulation systems due to its low peak-to-average power ratio (PAPR), and it is increasingly considered for uplink wireless communication systems. In this paper, a low-complexity partial parallel interference cancellation (PIC) scheme with maximum ratio combining (MRC) is proposed for the receiver, to combat the intersymbol interference (ISI) problem over multipath fading channels. With the aid of the MRC scheme, the proposed partial PIC technique can effectively perform interference cancellation and acquire the benefit of time diversity gain. Finally, the proposed system can be extended to multiple-antenna systems to provide excellent performance. Simulation results reveal that the proposed low-complexity partial PIC-MRC SIMO system can provide robust performance and outperform the conventional PIC and the iterative frequency domain decision feedback equalizer (FD-DFE) systems over multipath fading channel environments.
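The maximum-ratio-combining half of the receiver can be sketched in a few lines: each diversity branch is weighted by the conjugate of its (assumed known) channel gain, so the branch SNRs add. The channel gains and noise-free setup below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# MRC combining sketch: conjugate-weight each branch, then normalize by
# the total channel power.
h = np.array([0.8 + 0.3j, 0.2 - 0.5j])   # assumed known branch gains
s = 1.0 + 0.0j                           # transmitted symbol
r = h * s                                # received branches (noise-free)

s_hat = np.vdot(h, r) / np.sum(np.abs(h) ** 2)   # combine + normalize
```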

  4. Integrated Work Management: PIC, Course 31884

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Lewis Edward [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-08

    The person-in-charge (PIC) plays a key role in the integrated work management (IWM) process at Los Alamos National Laboratory (LANL, or the Laboratory) because the PIC is assigned responsibility and authority by the responsible line manager (RLM) for the overall validation, coordination, release, execution, and closeout of a work activity in accordance with IWM. This course, Integrated Work Management: PIC (Course 31884), describes the PIC’s IWM roles and responsibilities. This course also discusses IWM requirements that the PIC must meet. For a general overview of the IWM process, see self-study Course 31881, Integrated Work Management: Overview. For instruction on the preparer’s role, see self-study Course 31883, Integrated Work Management: Preparer.

  5. Programming 16-Bit PIC Microcontrollers in C Learning to Fly the PIC 24

    CERN Document Server

    Di Jasio, Lucio

    2011-01-01

    New in the second edition: * MPLAB X support and MPLAB C for the PIC24F v3 and later libraries * I2C™ interface * 100% assembly-free solutions * Improved video, PAL/NTSC * Improved audio, RIFF file decoding * PIC24F GA1, GA2, GB1 and GB2 support   Most readers will associate Microchip's name with the ubiquitous 8-bit PIC microcontrollers, but it is the new 16-bit PIC24F family that is truly stealing the scene. Orders-of-magnitude increases in performance and memory size, and the rich peripheral set, make programming these devices in C a must. This new guide by Microchip insid…

  6. Adaptive DSP Algorithms for UMTS: Blind Adaptive MMSE and PIC Multiuser Detection

    NARCIS (Netherlands)

    Potman, J.

    2003-01-01

    A study of the application of blind adaptive Minimum Mean Square Error (MMSE) and Parallel Interference Cancellation (PIC) multiuser detection techniques to Wideband Code Division Multiple Access (WCDMA), the physical layer of the Universal Mobile Telecommunication System (UMTS), has been performed.

  7. GaAs Photonic Integrated Circuit (PIC) development for high performance communications

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, C.T.

    1998-03-01

    Sandia has established a foundational technology in photonic integrated circuits (PICs) based on the (Al,Ga,In)As material system for optical communication, radar control and testing, and network switching applications at the important 1.3 μm/1.55 μm wavelengths. We investigated the optical, electrooptical, and microwave performance characteristics of the fundamental building-block PIC elements designed to be as simple and process-tolerant as possible, with particular emphasis placed on reducing optical insertion loss. Relatively conventional device array and circuit designs were built using these PIC elements: (1) to establish a baseline performance standard; (2) to assess the impact of epitaxial growth accuracy and uniformity, and of fabrication uniformity and yield; (3) to validate our theoretical and numerical models; and (4) to resolve the optical and microwave packaging issues associated with building fully packaged prototypes. Novel and more complex PIC designs and fabrication processes, viewed as higher payoff but higher risk, were explored in a parallel effort with the intention of meshing those advances into our baseline higher-yield capability as they mature. The application focus targeted the design and fabrication of packaged solitary modulators meeting the requirements of future wideband and high-speed analog and digital data links. Successfully prototyped devices are expected to feed into more complex PICs solving specific problems in high-performance communications, such as optical beamforming networks for phased array antennas.

  8. Selective Adaptive Parallel Interference Cancellation for Multicarrier DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Ahmed El-Sayed El-Mahdy

    2013-07-01

    In this paper, a Selective Adaptive Parallel Interference Cancellation (SA-PIC) technique is presented for the Multicarrier Direct Sequence Code Division Multiple Access (MC DS-CDMA) scheme. The motivation for using SA-PIC is that it gives high performance and, at the same time, reduces the computational complexity required to perform interference cancellation. An upper-bound expression for the bit error rate (BER) of SA-PIC under Rayleigh fading channel conditions is derived. Moreover, the implementation complexities of SA-PIC and Adaptive Parallel Interference Cancellation (APIC) are discussed and compared. The performance of SA-PIC is investigated analytically and validated via computer simulations.

  9. Two-dimensional PIC-MCC simulation of ion extraction

    International Nuclear Information System (INIS)

    Xiong Jiagui; Wang Dewu

    2000-01-01

    To explore simpler and more efficient ion extraction methods for atomic vapor laser isotope separation (AVLIS), a two-dimensional (2D) PIC-MCC simulation code is used to simulate and compare several methods: the parallel electrode method, the II-type electrode method, the improved M-type electrode method, and the radio frequency (RF) resonance method. The simulations show that the RF resonance method without a magnetic field is the best of these, followed by the improved M-type electrode method. The simulated result for the II-type electrode method is quite different from that calculated with the 2D electron equilibrium model. The RF resonance method gives quite different results with and without a magnetic field: strong resonance occurs in the simulation without a magnetic field, whereas no significant resonance occurs under a weak magnetic field. This is quite different from the strong resonance phenomena occurring in 1D PIC simulations with a weak magnetic field. For practical applications, the RF resonance method without a magnetic field has pros and cons compared with the M-type electrode method.

  10. REFORMA/UCLA Mentor Program: A Mentoring Manual.

    Science.gov (United States)

    Tauler, Sandra

    Although mentoring dates back to Greek mythology, the concept continues to thrive in today's society. Mentoring is a strategy that successful people have known about for centuries. The REFORMA/UCLA Mentor Program has made use of this strategy since its inception in November 1985 at the Graduate School of Library and Information Science at the…

  11. GAP--a PIC-type fluid code

    International Nuclear Information System (INIS)

    Marder, B.M.

    1975-01-01

    GAP, a PIC-type fluid code for computing compressible flows, is described and demonstrated. While it retains some features of PIC, the GAP approach is felt to be conceptually and operationally simpler. 9 figures

  12. PICs in the injector complex - what are we talking about?

    International Nuclear Information System (INIS)

    Hanke, K

    2014-01-01

    This presentation will identify PIC activities for the LHC injector chain and point out borderline cases between PICs, pure consolidation, and upgrades. The most important PIC items will be listed for each LIU project (PSB, PS, SPS) and categorized by a) the risk if they are not performed and b) the implications of doing them. This will in particular address the consequences for performance, schedule, reliability, commissioning time, operational complexity, etc. The additional cost of PICs with respect to pure consolidation will be estimated, and possible timelines for the implementation of the PICs will be discussed. In this context, it will be evaluated whether the PICs can be implemented over several machine stops.

  13. Designing Embedded Systems with PIC Microcontrollers Principles and Applications

    CERN Document Server

    Wilmshurst, Tim

    2009-01-01

    PIC microcontrollers are used worldwide in commercial and industrial devices. The 8-bit PIC on which this book focuses is a versatile workhorse that completes many designs. An engineer working with applications that include a microcontroller will no doubt come across the PIC sooner rather than later, and a working knowledge of this 8-bit technology is a must. This book takes the novice from an introduction to embedded systems through to advanced development techniques for utilizing and optimizing the PIC family of microcontrollers in your device. To truly understand the PIC, assembly and

  14. Acceleration of PIC simulation with GPU

    International Nuclear Information System (INIS)

    Suzuki, Junya; Shimazu, Hironori; Fukazawa, Keiichiro; Den, Mitsue

    2011-01-01

    Particle-in-cell (PIC) is a simulation technique for plasma physics. The large number of particles in high-resolution plasma simulation increases the volume of computation required, making it vital to increase computation speed. In this study, we attempt to accelerate computation on graphics processing units (GPUs) using KEMPO, a PIC simulation code package. We perform two benchmark tests, with small and large grid sizes. In these tests, we run the KEMPO1 code using a CPU only, both a CPU and a GPU, and a GPU only. The results showed that performance using only a GPU was twice that of using a CPU alone. Meanwhile, execution time using both a CPU and a GPU was comparable to that of the CPU-only tests, because of the significant bottleneck in communication between the CPU and GPU. (author)
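
    For readers unfamiliar with the loop being accelerated, the PIC cycle (deposit charge, solve the field, gather, push particles) can be sketched in one dimension. This is a normalized-units illustration with simplifying choices (cloud-in-cell weighting, Euler push, periodic box), not KEMPO code.

    ```python
    # Minimal 1D electrostatic PIC time step: deposit -> solve -> gather -> push.
    # Normalized units; parameters and numerics are illustrative assumptions.

    def pic_step(x, v, q, dt, nx, dx):
        """Advance particle lists x, v (charge q each) one step on an
        nx-cell periodic grid of spacing dx."""
        L = nx * dx
        # 1) Deposit charge onto the grid with linear (cloud-in-cell) weights.
        rho = [0.0] * nx
        for xi in x:
            s = xi / dx
            cell = int(s) % nx
            frac = s - int(s)
            rho[cell] += q * (1.0 - frac) / dx
            rho[(cell + 1) % nx] += q * frac / dx
        # 2) Field solve: integrate dE/dx = rho, remove the mean
        #    (no net field in a periodic box).
        E = [0.0] * nx
        for i in range(1, nx):
            E[i] = E[i - 1] + rho[i - 1] * dx
        mean_E = sum(E) / nx
        E = [Ei - mean_E for Ei in E]
        # 3+4) Gather E at each particle, then push (Euler for clarity;
        #      production codes use leapfrog/Boris integrators).
        x_new, v_new = [], []
        for xi, vi in zip(x, v):
            s = xi / dx
            cell = int(s) % nx
            frac = s - int(s)
            Ep = (1.0 - frac) * E[cell] + frac * E[(cell + 1) % nx]
            vi = vi + q * Ep * dt
            x_new.append((xi + vi * dt) % L)
            v_new.append(vi)
        return x_new, v_new
    ```

    The deposit and gather loops are the memory-scattered steps whose data locality (e.g. via particle sorting) dominates GPU performance.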

  15. TreePics: visualizing trees with pictures

    Directory of Open Access Journals (Sweden)

    Nicolas Puillandre

    2017-09-01

    Full Text Available While many programs are available to edit phylogenetic trees, associating pictures with branch tips in an efficient and automatic way is not an available option. Here, we present TreePics, a standalone software that uses a web browser to visualize phylogenetic trees in Newick format and that associates pictures (typically, pictures of the voucher specimens to the tip of each branch. Pictures are visualized as thumbnails and can be enlarged by a mouse rollover. Further, several pictures can be selected and displayed in a separate window for visual comparison. TreePics works either online or in a full standalone version, where it can display trees with several thousands of pictures (depending on the memory available. We argue that TreePics can be particularly useful in a preliminary stage of research, such as to quickly detect conflicts between a DNA-based phylogenetic tree and morphological variation, that may be due to contamination that needs to be removed prior to final analyses, or the presence of species complexes.
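
    The tip-to-picture association step can be illustrated with a small sketch. The simplified Newick tip extraction and the match-pictures-by-filename-stem rule are assumptions made for the example, not TreePics' actual behavior.

    ```python
    # Sketch: extract tip labels from a Newick string and pair each with a
    # picture file sharing its name stem. Illustrative only, not TreePics code.
    import re

    def newick_tips(newick):
        """Tip (leaf) labels: names that follow '(' or ',' and are then
        closed by ':', ',' or ')'. Internal-node labels after ')' are skipped."""
        return re.findall(r'[(,]\s*([^():,;]+?)\s*[:,)]', newick)

    def associate_pictures(newick, picture_filenames):
        """Map each tip label to a picture whose filename stem matches it
        (None when no picture is found)."""
        stems = {name.rsplit('.', 1)[0]: name for name in picture_filenames}
        return {tip: stems.get(tip) for tip in newick_tips(newick)}

    print(associate_pictures("((A:1,B:2):0.5,C:3);", ["A.jpg", "B.jpg"]))
    # {'A': 'A.jpg', 'B': 'B.jpg', 'C': None}
    ```

    The unmatched tip (`C`) is exactly the kind of tree/data conflict the abstract suggests catching at a preliminary stage.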

  16. Analysis of material removed from UCLA tokamaks Microtor and Macrotor

    International Nuclear Information System (INIS)

    Baer, D.R.; Thomas, M.T.; Taylor, R.J.

    1979-02-01

    This paper reports a first effort to examine the surface of the UCLA tokamaks, Microtor and Macrotor, by analyzing samples that have been exposed to plasma discharge and cleaning for long periods. The samples were sent to the Surface Science Section at the Pacific Northwest Laboratory (PNL). There, Auger electron spectrometry and sputter profile techniques were used to examine the samples, which had been handled in atmospheric conditions after being removed from the tokamak

  17. Experimental and theoretical high energy physics research. [UCLA

    Energy Technology Data Exchange (ETDEWEB)

    Buchanan, Charles D.; Cline, David B.; Byers, N.; Ferrara, S.; Peccei, R.; Hauser, Jay; Muller, Thomas; Atac, Muzaffer; Slater, William; Cousins, Robert; Arisaka, Katsushi

    1992-01-01

    Progress in the various components of the UCLA High-Energy Physics Research program is summarized, including some representative figures and lists of resulting presentations and published papers. Principal efforts were directed at the following: (I) UCLA hadronization model, PEP4/9 e{sup +}e{sup {minus}} analysis, {bar P} decay; (II) ICARUS and astroparticle physics (physics goals, technical progress on electronics, data acquisition, and detector performance, long baseline neutrino beam from CERN to the Gran Sasso and ICARUS, future ICARUS program, and WIMP experiment with xenon), B physics with hadron beams and colliders, high-energy collider physics, and the {phi} factory project; (III) theoretical high-energy physics; (IV) H dibaryon search, search for K{sub L}{sup 0} {yields} {pi}{sup 0}{gamma}{gamma} and {pi}{sup 0}{nu}{bar {nu}}, and detector design and construction for the FNAL-KTeV project; (V) UCLA participation in the experiment CDF at Fermilab; and (VI) VLPC/scintillating fiber R D.

  18. Storage of Maize in Purdue Improved Crop Storage (PICS) Bags.

    Science.gov (United States)

    Williams, Scott B; Murdock, Larry L; Baributsa, Dieudonne

    2017-01-01

    Interest in using hermetic technologies as a pest management solution for stored grain has risen in recent years. One hermetic approach, Purdue Improved Crop Storage (PICS) bags, has proven successful in controlling the postharvest pests of cowpea. This success encouraged farmers to use PICS bags for storing other crops, including maize. To assess whether maize can be safely stored in PICS bags without loss of quality, we carried out laboratory studies of maize grain infested with Sitophilus zeamais (Motshulsky) and stored in PICS triple bags or in woven polypropylene bags. Over an eight-month observation period, temperatures in the bags correlated with ambient temperature for all treatments. Relative humidity inside PICS bags remained constant over this period despite the large changes that occurred in the surrounding environment. Relative humidity in the woven bags followed ambient humidity closely. PICS bags containing S. zeamais-infested grain saw a significant decline in oxygen compared to the other treatments. Grain moisture content declined in woven bags but remained high in PICS bags. Seed germination was not significantly affected over the first six months in all treatments, but declined after eight months of storage when infested grain was held in woven bags. Relative damage was low across treatments and not significantly different between treatments. Overall, maize showed no signs of deterioration in PICS bags versus the woven bags, and PICS bags were superior to woven bags in terms of specific metrics of grain quality.

  19. PIC Activation through Functional Interplay between Mediator and TFIIH.

    Science.gov (United States)

    Malik, Sohail; Molina, Henrik; Xue, Zhu

    2017-01-06

    The multiprotein Mediator coactivator complex functions in large part by controlling the formation and function of the promoter-bound preinitiation complex (PIC), which consists of RNA polymerase II and general transcription factors. However, precisely how Mediator impacts the PIC, especially post-recruitment, has remained unclear. Here, we have studied Mediator effects on basal transcription in an in vitro transcription system reconstituted from purified components. Our results reveal a close functional interplay between Mediator and TFIIH in the early stages of PIC development. We find that under conditions when TFIIH is not normally required for transcription, Mediator actually represses transcription. TFIIH, whose recruitment to the PIC is known to be facilitated by the Mediator, then acts to relieve Mediator-induced repression to generate an active form of the PIC. Gel mobility shift analyses of PICs and characterization of TFIIH preparations carrying mutant XPB translocase subunit further indicate that this relief of repression is achieved through expending energy via ATP hydrolysis, suggesting that it is coupled to TFIIH's established promoter melting activity. Our interpretation of these results is that Mediator functions as an assembly factor that facilitates PIC maturation through its various stages. Whereas the overall effect of the Mediator is to stimulate basal transcription, its initial engagement with the PIC generates a transcriptionally inert PIC intermediate, which necessitates energy expenditure to complete the process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. UCLA Particle Physics Research Group annual progress report

    International Nuclear Information System (INIS)

    Nefkens, B.M.K.

    1981-08-01

    The objectives, basic research programs, recent results and continuing activities of the UCLA Particle Physics Research Group are presented. The objectives of the research are to discover, to formulate, and to elucidate the physics laws that govern the elementary constituents of matter and to determine basic properties of particles. A synopsis of research carried out last year is given. The main body of this report is the account of the techniques used in our investigations, the results obtained, and the plans for continuing and new research

  1. Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.

    Science.gov (United States)

    Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos

    2013-11-04

    In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise-limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance and the channel capacity of these systems, and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving two orders of magnitude of BER improvement and a capacity improvement of 0.1 nats over conventional chip-level OCDMA systems at a coding rate of 1/10.

  2. Towards the optimization of a gyrokinetic Particle-In-Cell (PIC) code on large-scale hybrid architectures

    International Nuclear Information System (INIS)

    Ohana, N; Lanti, E; Tran, T M; Brunner, S; Hariri, F; Villard, L; Jocksch, A; Gheller, C

    2016-01-01

    With the aim of enabling state-of-the-art gyrokinetic PIC codes to benefit from the performance of recent multithreaded devices, we developed an application from a platform called the “PIC-engine” [1, 2, 3] embedding simplified basic features of the PIC method. The application solves the gyrokinetic equations in a sheared plasma slab using B-spline finite elements up to fourth order to represent the self-consistent electrostatic field. Preliminary studies of the so-called Particle-In-Fourier (PIF) approach, which uses Fourier modes as basis functions in the periodic dimensions of the system instead of the real-space grid, show that this method can be faster than PIC for simulations with a small number of Fourier modes. Similarly to the PIC-engine, multiple levels of parallelism have been implemented using MPI+OpenMP [2] and MPI+OpenACC [1], the latter exploiting the computational power of GPUs without requiring complete code rewriting. It is shown that sorting particles [3] can lead to performance improvement by increasing data locality and vectorizing grid memory access. Weak scalability tests have been successfully run on the GPU-equipped Cray XC30 Piz Daint (at CSCS) up to 4,096 nodes. The reduced time-to-solution will enable more realistic and thus more computationally intensive simulations of turbulent transport in magnetic fusion devices. (paper)

  3. SAPS simulation with GITM/UCLA-RCM coupled model

    Science.gov (United States)

    Lu, Y.; Deng, Y.; Guo, J.; Zhang, D.; Wang, C. P.; Sheng, C.

    2017-12-01

    Ion velocity in the sub-auroral region observed by satellites during storm time often shows a significant westward component. This high-speed westward stream is distinct from the convection pattern; such events are called Sub-Auroral Polarization Streams (SAPS). During the March 17th, 2013 storm, the DMSP F18 satellite observed several SAPS cases when crossing the sub-auroral region. In this study, the Global Ionosphere Thermosphere Model (GITM) has been coupled to the UCLA-RCM model to simulate the impact of SAPS during the March 2013 event on the ionosphere/thermosphere. The particle precipitation and electric field from RCM have been used to drive GITM. The conductance calculated from GITM is fed back to RCM to make the coupling self-consistent. Comparisons of GITM simulations with different SAPS specifications will be conducted. The neutral wind from the simulation will be compared with GOCE satellite data. The comparison between runs with and without SAPS will separate the effect of SAPS from other effects and illustrate its impact on TIDs/TADs propagating both poleward and equatorward.

  4. EVALUASI CSE-UCLA PADA STUDI PROSES PEMBELAJARAN MATEMATIKA

    Directory of Open Access Journals (Sweden)

    Siska Andriani

    2015-12-01

    Full Text Available The process standard is one of the national standards governing the planning, implementation, assessment, and supervision of the learning process. Its implementation in the field has not yet been clearly observed. The aim of this study was to obtain a description of the implementation of the process standard in the mathematics learning process using CSE-UCLA analysis at SMP Negeri Satu Atap Lerep. This is a qualitative study with an evaluative approach. The main data source was the mathematics teacher. Data were collected through interviews, observation, and documentation. Data validity was established through credibility testing (triangulation and adequacy of reference materials), transferability testing, and dependability testing. The results show that the mathematics learning process at SMP Negeri Satu Atap Lerep already follows the process standard. Implementation of the process standard under CSE-UCLA analysis shows that it proceeds through the stages of planning, development, implementation, outcomes, and impact. The resulting impact on learning is not yet optimal. In addition, many factors influence the implementation of the process standard in mathematics learning at SMP Negeri Satu Atap Lerep, both supporting and inhibiting factors.

  5. Digital Fractional Order Controllers Realized by PIC Microprocessor: Experimental Results

    OpenAIRE

    Petras, I.; Grega, S.; Dorcak, L.

    2003-01-01

    This paper deals with fractional-order controllers and their possible hardware realization based on a PIC microprocessor and a numerical algorithm coded in PIC Basic. The mathematical description of digital fractional-order controllers and their approximation in the discrete domain are presented. An example of the realization of a particular case of a digital fractional-order PID controller is shown and described.
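
    A common discrete-domain approximation for such controllers is the Grünwald-Letnikov definition of the fractional derivative, whose binomial weights can be generated recursively. The sketch below is a generic illustration of that standard approximation, not the paper's PIC Basic algorithm.

    ```python
    # Grunwald-Letnikov approximation of the fractional derivative D^alpha,
    # as used in discrete realizations of fractional-order PI^lambda D^mu
    # controllers. Generic illustration; parameters are assumptions.

    def gl_weights(alpha, n):
        """Weights c_k = (-1)^k * C(alpha, k), via the recursion
        c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
        c = [1.0]
        for k in range(1, n + 1):
            c.append(c[-1] * (1.0 - (alpha + 1.0) / k))
        return c

    def gl_derivative(samples, alpha, h):
        """Estimate D^alpha at the newest sample (samples[-1]) from a finite
        memory of past samples spaced h apart:
        D^alpha f(t) ~ h^(-alpha) * sum_k c_k * f(t - k*h)."""
        c = gl_weights(alpha, len(samples) - 1)
        return sum(ck * samples[-1 - k] for k, ck in enumerate(c)) / h ** alpha

    # Sanity check: for alpha = 1 the weights collapse to a backward
    # difference, so D^1 of f(t) = t is exactly 1.
    print(gl_derivative([0.0, 0.1, 0.2, 0.3], alpha=1.0, h=0.1))   # 1.0
    ```

    On a small microcontroller the finite memory length (the "short-memory principle") is what keeps the per-sample cost bounded.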

  6. PIC Simulations of Hypersonic Plasma Instabilities

    Science.gov (United States)

    Niehoff, D.; Ashour-Abdalla, M.; Niemann, C.; Decyk, V.; Schriver, D.; Clark, E.

    2013-12-01

    The plasma sheaths formed around hypersonic aircraft (Mach number, M > 10) are relatively unexplored and of interest today to both further the development of new technologies and solve long-standing engineering problems. Both laboratory experiments and analytical/numerical modeling are required to advance the understanding of these systems; it is advantageous to perform these tasks in tandem. There has already been some work done to study these plasmas by experiments that create a rapidly expanding plasma through ablation of a target with a laser. In combination with a preformed magnetic field, this configuration leads to a magnetic "bubble" formed behind the front as particles travel at about Mach 30 away from the target. Furthermore, the experiment was able to show the generation of fast electrons which could be due to instabilities on electron scales. To explore this, future experiments will have more accurate diagnostics capable of observing time- and length-scales below typical ion scales, but simulations are a useful tool to explore these plasma conditions theoretically. Particle in Cell (PIC) simulations are necessary when phenomena are expected to be observed at these scales, and also have the advantage of being fully kinetic with no fluid approximations. However, if the scales of the problem are not significantly below the ion scales, then the initialization of the PIC simulation must be very carefully engineered to avoid unnecessary computation and to select the minimum window where structures of interest can be studied. One method of doing this is to seed the simulation with either experiment or ion-scale simulation results. Previous experiments suggest that a useful configuration for studying hypersonic plasma configurations is a ring of particles rapidly expanding transverse to an external magnetic field, which has been simulated on the ion scale with an ion-hybrid code. This suggests that the PIC simulation should have an equivalent configuration

  7. Nonlinear PIC simulation in a Penning trap

    International Nuclear Information System (INIS)

    Lapenta, G.; Delzanno, G.L.; Finn, J. M.

    2002-01-01

    We study the nonlinear dynamics of a Penning trap plasma, including the effect of the finite length and end curvature of the plasma column. A new cylindrical PIC code, called KANDINSKY, has been implemented by using a new interpolation scheme. The principal idea is to calculate the volume of each cell from a particle volume, in the same manner as it is done for the cell charge. With this new method, the density is conserved along streamlines and artificial sources of compressibility are avoided. The code has been validated with a reference Eulerian fluid code. We compare the dynamics of three different models: a model with compression effects, the standard Euler model and a geophysical fluid dynamics model. The results of our investigation prove that Penning traps can really be used to simulate geophysical fluids

  8. Metal Detector By Using PIC Microcontroller Interfacing With PC

    Directory of Open Access Journals (Sweden)

    Yin Min Theint

    2015-06-01

    Full Text Available This system proposes a metal detector using a PIC microcontroller interfacing with a PC. The system uses the PIC microcontroller as the main controller to decide whether the detected metal is ferrous or non-ferrous. Among the various types of metal sensors and metal-detecting technologies, a concentric-type induction coil sensor and VLF (very low frequency) metal-detecting technology are used in this system. The system consists of two configurations: a hardware configuration and a software configuration. The hardware components include the induction coil sensor, which senses the frequency changes caused by metal, a PIC microcontroller, a personal computer (PC), a buzzer, a light-emitting diode (LED), and a webcam. The software configuration includes a program controller interface. The PIC MikroC programming language is used to implement the control system, which is based on the PIC16F887 microcontroller. This system is mainly used in mining and in high-security places such as airports, plazas, shopping malls, and governmental buildings.
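
    The ferrous/non-ferrous decision such a controller makes can be caricatured as a classification of the VLF oscillator's frequency shift. The sign convention (ferrous metal raises coil inductance and lowers the frequency; eddy currents in non-ferrous metal raise it) and the threshold are simplifying assumptions for illustration, not taken from the paper.

    ```python
    # Toy frequency-shift classifier for a VLF metal detector.
    # Sign convention and tolerance are illustrative assumptions.

    def classify_metal(baseline_hz, measured_hz, tolerance_hz=5.0):
        """Classify a detection from the oscillator frequency shift:
        within tolerance -> no metal; a drop -> ferrous (inductance up);
        a rise -> non-ferrous (eddy-current effect)."""
        shift = measured_hz - baseline_hz
        if abs(shift) <= tolerance_hz:
            return "no metal"
        return "ferrous" if shift < 0 else "non-ferrous"

    print(classify_metal(100000.0, 99900.0))   # ferrous
    ```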

  9. PIC simulations of the trapped electron filamentation instability in finite-width electron plasma waves

    Science.gov (United States)

    Winjum, B. J.; Banks, J. W.; Berger, R. L.; Cohen, B. I.; Chapman, T.; Hittinger, J. A. F.; Rozmus, W.; Strozzi, D. J.; Brunner, S.

    2012-10-01

    We present results on the kinetic filamentation of finite-width nonlinear electron plasma waves (EPW). Using 2D simulations with the PIC code BEPS, we excite a traveling EPW with a Gaussian transverse profile and a wavenumber k0λDe= 1/3. The transverse wavenumber spectrum broadens during transverse EPW localization for small width (but sufficiently large amplitude) waves, while the spectrum narrows to a dominant k as the initial EPW width increases to the plane-wave limit. For large EPW widths, filaments can grow and destroy the wave coherence before transverse localization destroys the wave; the filaments in turn evolve individually as self-focusing EPWs. Additionally, a transverse electric field develops that affects trapped electrons, and a beam-like distribution of untrapped electrons develops between filaments and on the sides of a localizing EPW. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the Laboratory Research and Development Program at LLNL under project tracking code 12-ERD-061. Supported also under Grants DE-FG52-09NA29552 and NSF-Phy-0904039. Simulations were performed on UCLA's Hoffman2 and NERSC's Hopper.

  10. UCLA Particle Physics Research Group annual progress report

    International Nuclear Information System (INIS)

    Nefkens, B.M.K.

    1983-11-01

    The objectives, basic research programs, recent results, and continuing activities of the UCLA Particle Physics Research Group are presented. The objectives of the research are to discover, to formulate, and to elucidate the physics laws that govern the elementary constituents of matter and to determine basic properties of particles. The research carried out by the Group last year may be divided into three separate programs: (1) baryon spectroscopy, (2) investigations of charge symmetry and isospin invariance, and (3) tests of time reversal invariance. The main body of this report is the account of the techniques used in our investigations, the results obtained, and the plans for continuing and new research. An update of the group bibliography is given at the end

  11. Recent reflectometry results from the UCLA plasma diagnostics group

    International Nuclear Information System (INIS)

    Gilmore, M.; Doyle, E.J.; Kubota, S.; Nguyen, X.V.; Peebles, W.A.; Rhodes, T.L.; Zeng, L.

    2001-01-01

    The UCLA Plasma Diagnostics Group has an active ongoing reflectometry program. The program is threefold, including 1) profile and 2) fluctuation measurements on fusion devices (DIII-D, NSTX, and others), and 3) basic reflectometry studies in linear and laboratory plasmas that seek to develop new measurement capabilities and increase the physics understanding of reflectometry. Recent results on the DIII-D tokamak include progress toward the implementation of FM reflectometry as a standard density profile diagnostic, and correlation length measurements in QDB discharges that indicate a very different scaling than normally observed in L-mode plasmas. The first reflectometry measurements in a spherical torus (ST) have also been obtained on NSTX. Profiles in NSTX show good agreement with those of Thomson scattering. Finally, in a linear device, a local magnetic field strength measurement based on O-X correlation reflectometry has been demonstrated to proof of principle level, and correlation lengths measured by reflectometry are in good agreement with probes. (author)

  12. Experimental And Theoretical High Energy Physics Research At UCLA

    Energy Technology Data Exchange (ETDEWEB)

    Cousins, Robert D. [University of California Los Angeles

    2013-07-22

    This is the final report of the UCLA High Energy Physics DOE Grant No. DE-FG02- 91ER40662. This report covers the last grant project period, namely the three years beginning January 15, 2010, plus extensions through April 30, 2013. The report describes the broad range of our experimental research spanning direct dark matter detection searches using both liquid xenon (XENON) and liquid argon (DARKSIDE); present (ICARUS) and R&D for future (LBNE) neutrino physics; ultra-high-energy neutrino and cosmic ray detection (ANITA); and the highest-energy accelerator-based physics with the CMS experiment and CERN’s Large Hadron Collider. For our theory group, the report describes frontier activities including particle astrophysics and cosmology; neutrino physics; LHC interaction cross section calculations now feasible due to breakthroughs in theoretical techniques; and advances in the formal theory of supergravity.

  13. Abstract Interpretation of PIC programs through Logic Programming

    DEFF Research Database (Denmark)

    Henriksen, Kim Steen; Gallagher, John Patrick

    2006-01-01

    A small PIC microcontroller is used as a case study. An emulator for this microcontroller is written in Prolog, and standard programming transformations and analysis techniques are used to specialise this emulator with respect to a given PIC program. The specialised emulator can now be further analysed to gain insight into the given program for the PIC microcontroller. The method describes a general framework for applying abstractions, illustrated here by linear constraints and convex hull analysis, to logic programs; using these techniques on the specialised emulator, the abstractions are applied to the logic-based model of the machine.

  14. Full PIC simulations of solar radio emission

    Science.gov (United States)

    Sgattoni, A.; Henri, P.; Briand, C.; Amiranoff, F.; Riconda, C.

    2017-12-01

    Solar radio emissions are electromagnetic (EM) waves emitted in the solar wind plasma as a consequence of electron beams accelerated during solar flares or interplanetary shocks such as ICMEs. To describe their origin, a multi-stage model was proposed in the 1960s which considers a succession of non-linear three-wave interaction processes. A good understanding of the process would allow one to infer the kinetic energy transferred from the electron beam to the EM waves, so that the radio waves recorded by spacecraft can be used as a diagnostic for the electron beam. Although the electrostatic problem has been extensively studied, full electromagnetic simulations were attempted only recently. Our large-scale 2D-3V electromagnetic PIC simulations allow us to identify the generation of both the electrostatic and EM waves originated by the succession of plasma instabilities. We tested several configurations, varying the electron beam density and velocity, with a background plasma of uniform density. For all the tested configurations, approximately 10^-5 of the electron-beam kinetic energy is transferred into EM waves, emitted nearly isotropically in all directions. With this work we aim to design laboratory-astrophysics experiments to reproduce the electromagnetic emission process and test its efficiency.

  15. Investigating plasma-rotation methods for the Space-Plasma Physics Campaign at UCLA's BAPSF.

    Science.gov (United States)

    Finnegan, S. M.; Koepke, M. E.; Reynolds, E. W.

    2006-10-01

    In D'Angelo et al., JGR 79, 4747 (1974), rigid-body ExB plasma flow was inferred from parabolic floating-potential profiles produced by a spiral ionizing surface. Here, taking a different approach, we report effects on barium-ion azimuthal-flow profiles using either a non-emissive or emissive spiral end-electrode in the WVU Q-machine. Neither electrode produced a radially-parabolic space-potential profile. The emissive spiral, however, generated controllable, radially-parabolic structure in the floating potential, consistent with a second population of electrons having a radially-parabolic parallel-energy profile. Laser-induced-fluorescence measurements of spatially resolved, azimuthal-velocity distribution functions show that, for a given flow profile, the diamagnetic drift of hot (>>0.2eV) ions overwhelms the ExB-drift contribution. Our experiments constitute a first attempt at producing controllable, rigid-body, ExB plasma flow for future experiments on the LArge-Plasma-Device (LAPD), as part of the Space-Plasma Physics Campaign (at UCLA's BAPSF).

  16. Metal Detector By Using PIC Microcontroller Interfacing With PC

    OpenAIRE

    Yin Min Theint; Myo Maung Maung; Hla Myo Tun

    2015-01-01

    Abstract This system proposes metal detector by using PIC microcontroller interfacing with PC. The system uses PIC microcontroller as the main controller whether the detected metal is ferrous metal or non-ferrous metal. Among various types of metal sensors and various types of metal detecting technologies concentric type induction coil sensor and VLF very low frequency metal detecting technology are used in this system. This system consists of two configurations Hardware configuration and Sof...

  17. Programando en assembler a los microcontroladores RISC. PIC de microchips

    Directory of Open Access Journals (Sweden)

    Tito Flórez C.

    1999-01-01

    Full Text Available Programming the PICs in assembler becomes relatively simple when the instruction set is reduced to a few instructions (14 for the PIC16C84). The operation of these instructions is explained through simple examples, and the operation of a program as a whole is explained with an example program. Likewise, the way the PIC must be programmed (burned) is explained.

  18. Performance of DS-CDMA systems with optimal hard-decision parallel interference cancellation

    NARCIS (Netherlands)

    Hofstad, van der R.W.; Klok, M.J.

    2003-01-01

    We study a multiuser detection system for code-division multiple access (CDMA). We show that applying multistage hard-decision parallel interference cancellation (HD-PIC) significantly improves performance compared to the matched filter system. In (multistage) HD-PIC, estimates of the interfering
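
    The multistage HD-PIC idea can be sketched as follows: stage 0 is the plain matched filter, and each later stage re-detects every user after subtracting interference rebuilt from the previous stage's hard decisions. The two-user codes and noise-free signal below are illustrative assumptions, not the paper's setup.

    ```python
    # Toy multistage hard-decision PIC (HD-PIC) for synchronous CDMA.
    # Codes and noiseless channel are illustrative assumptions.

    def mf(sig, code):
        """Matched-filter correlation of a chip sequence with a code."""
        return sum(si * ci for si, ci in zip(sig, code))

    def hd_pic(r, codes, stages):
        """Run `stages` HD-PIC iterations after the matched-filter stage;
        returns the final hard bit decisions (+1/-1) for all users."""
        b = [1 if mf(r, c) >= 0 else -1 for c in codes]      # stage 0
        for _ in range(stages):
            b_next = []
            for k, ck in enumerate(codes):
                # Rebuild all other users' contributions from the previous
                # stage's hard decisions and subtract them.
                interference = [sum(b[j] * codes[j][i]
                                    for j in range(len(codes)) if j != k)
                                for i in range(len(r))]
                cleaned = [ri - ii for ri, ii in zip(r, interference)]
                b_next.append(1 if mf(cleaned, ck) >= 0 else -1)
            b = b_next
        return b

    codes = [[+1, +1, +1, -1], [+1, -1, +1, -1]]             # non-orthogonal
    received = [0, 2, 0, 0]                                  # bits [+1, -1]
    print(hd_pic(received, codes, stages=2))                 # [1, -1]
    ```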

  19. Coronagraphy at Pic du Midi: Present state and future projects

    Science.gov (United States)

    Koechlin, L.

    2012-12-01

    The Pic du Midi coronagraph (CLIMSO) is a group of four instruments in parallel, taking images of the whole solar photosphere and low corona. It provides series of 2048*2048-pixel images taken nominally at 1-minute intervals, all year long, weather permitting. A team of about 60 people, in groups of 2 or 3 each week, operates the instruments. Their work is programmed in collaboration with the Institut de Recherches en Astrophysique et Planétologie (IRAP) of the Observatoire Midi Pyrénées (OMP), and with the Programme National Soleil Terre (PNST). The four instruments of CLIMSO (L1, C1, L2 and C2) collect images of the Sun as follows: 1) L1: photosphere in H-α (656.28 nm); 2) L2: photosphere in Ca-II (393.37 nm); 3) C1: prominences in H-α; 4) C2: prominences in He-I (1083.0 nm). The data are stored as FITS-format images and MPEG films. They are publicly available in databases such as BASS2000 Meudon ({http://bass2000.obspm.fr/home.php?lang=en}) and BASS2000 Tarbes ({http://bass2000.bagn.obs-mip.fr/base/sun/index.php}). Several solar studies are carried out in relation to these data. In addition to the raw FITS images, new images will soon be sent to the databases: they will be calibrated in solar surface emittance, expressed in W/m^2/nm/steradian. Series of MPEG films for each day are presented in superposed color layers, so as to better visualize the multispectral information. New instrumental developments are planned and already funded for the coming years. They will use spectropolarimetry to measure the magnetic field and radial velocities in the photosphere and corona. The data will cover the entire solar disc and have a sample rate of one map per minute.

  20. Global general pediatric surgery partnership: The UCLA-Mozambique experience.

    Science.gov (United States)

    Amado, Vanda; Martins, Deborah B; Karan, Abraar; Johnson, Brittni; Shekherdimian, Shant; Miller, Lee T; Taela, Atanasio; DeUgarte, Daniel A

    2017-09-01

    There has been increasing recognition of the disparities in surgical care throughout the world. Increasingly, efforts are being made to improve local infrastructure and training of surgeons in low-income settings. The purpose of this study was to review the first 5 years of a global academic pediatric general surgery partnership between UCLA and the Eduardo Mondlane University in Maputo, Mozambique. A mixed-methods approach was utilized to perform an ongoing needs assessment. A retrospective review of admission and operative logbooks was performed. Partnership activities were summarized. The needs assessment identified several challenges, including limited operative time, personnel, equipment, and resources. Review of logbooks identified a high frequency of burn admissions and colorectal procedures. Partnership activities focused on providing educational resources, on-site proctoring, training opportunities, and research collaboration. This study highlights the spectrum of disease and operative case volume of a referral center for general pediatric surgery in sub-Saharan Africa, and it provides a context for academic partnership activities to facilitate training and improve the quality of pediatric general surgical care in limited-resource settings. Level IV. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Reliability and validity of the Danish version of the UCLA Loneliness Scale

    DEFF Research Database (Denmark)

    Lasgaard, Mathias

    2007-01-01

    The objective of this study was to examine the psychometric properties of a Danish version of the UCLA Loneliness Scale (UCLA). The 20-item scale was completed along with other measures in a national youth probability sample of 379 8th-grade students aged 13-17. The scale showed high internal consistency, and correlations between the UCLA and measures of emotional loneliness, social loneliness, self-esteem, depression, extraversion, and neuroticism supported the convergent and discriminant validity of the scale. Exploratory factor analysis supported a unidimensional structure of the measure. The results, highly comparable to the original version of the scale, indicate that the Danish version of the UCLA is a reliable and valid measure of loneliness.

  2. Relativistic electron diffraction at the UCLA Pegasus photoinjector laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Musumeci, P. [UCLA Department of Physics and Astronomy, 475 Portola Plaza, Los Angeles, CA 90095-1547 (United States)], E-mail: musumeci@physics.ucla.edu; Moody, J.T.; Scoby, C.M. [UCLA Department of Physics and Astronomy, 475 Portola Plaza, Los Angeles, CA 90095-1547 (United States)

    2008-10-15

    Electron diffraction holds the promise of yielding real-time resolution of atomic motion in an easily accessible environment, such as a university laboratory, at a fraction of the cost of fourth-generation X-ray sources. Currently the limit in time resolution for conventional electron diffraction is set by how short an electron pulse can be made. A very promising way to maintain the highest possible beam intensity without excessive pulse broadening from space-charge effects is to increase the electron energy to the MeV level, where relativistic effects significantly reduce the space-charge forces. Rf photoinjectors can in principle deliver up to 10^7-10^8 electrons packed in bunches of ~100 fs length, allowing an unprecedented time resolution and enabling the study of irreversible phenomena by single-shot diffraction patterns. The use of rf photoinjectors as sources for ultrafast electron diffraction has recently been at the center of various theoretical and experimental studies. The UCLA Pegasus laboratory, commissioned in early 2007 as an advanced photoinjector facility, is the only operating facility in the country that has recently demonstrated electron diffraction using a relativistic beam from an rf photoinjector. Thanks to a state-of-the-art ultrashort photoinjector driver laser system, the beam has been measured to be sub-100-fs long, at least a factor of 5 better than what was measured in previous relativistic electron diffraction setups. Moreover, diffraction patterns from various metal targets (titanium and aluminum) have been obtained using the Pegasus beam. One of the main laboratory goals in the near future is to fully develop the rf photoinjector-based ultrafast electron diffraction technique, with particular attention to optimizing the working point of the photoinjector in a low-charge ultrashort-pulse regime and to developing suitable beam diagnostics.

  5. UCLA Particle and Nuclear Physics Research Group, 1993 progress report

    International Nuclear Information System (INIS)

    Nefkens, B.M.K.; Clajus, M.; Price, J.W.; Tippens, W.B.; White, D.B.

    1993-09-01

    The research programs of the UCLA Particle and Nuclear Physics Research Group, the research objectives, results of experiments, the continuing activities and new initiatives are presented. The primary goal of the research is to test the symmetries and invariances of particle/nuclear physics with special emphasis on investigating charge symmetry, isospin invariance, charge conjugation, and CP. Another important part of our work is baryon spectroscopy, which is the determination of the properties (mass, width, decay modes, etc.) of particles and resonances. We also measure some basic properties of light nuclei, for example the hadronic radii of ³H and ³He. Special attention is given to the eta meson, its production using photons, electrons, π±, and protons, and its rare and not-so-rare decays. In Section 1, the physics motivation of our research is outlined. Section 2 provides a summary of the research projects. The status of each program is given in Section 3. We discuss the various experimental techniques used, the results obtained, and we outline the plans for the continuing and the new research. Details are presented of new research that is made possible by the use of the Crystal Ball Detector, a highly segmented NaI calorimeter and spectrometer with nearly 4π acceptance (it was built and used at SLAC and is to be moved to BNL). The appendix contains an update of the bibliography, conference participation, and group memos; it also indicates our share in the organization of conferences, and gives a listing of the colloquia and seminars presented by us.

  6. PICS bags safely store unshelled and shelled groundnuts in Niger.

    Science.gov (United States)

    Baributsa, D; Baoua, I B; Bakoye, O N; Amadou, L; Murdock, L L

    2017-05-01

    We conducted an experiment in Niger to evaluate the performance of hermetic triple-layer (Purdue Improved Crop Storage, PICS) bags for the preservation of shelled and unshelled groundnut, Arachis hypogaea L. Naturally infested groundnut was stored in PICS bags and woven bags for 6.7 months. After storage, the average oxygen level in the PICS bags fell from 21% to 18% (v/v) for unshelled groundnut and from 21% to 15% (v/v) for shelled groundnut. Pests identified in the stored groundnuts were Tribolium castaneum (Herbst), Corcyra cephalonica (Stainton) and Cryptolestes ferrugineus (Stephens). After 6.7 months of storage in the woven bags, there was a large increase in the pest population, accompanied by a weight loss of 8.2% for unshelled groundnuts and 28.7% for shelled groundnuts. In the PICS bags, by contrast, for both shelled and unshelled groundnuts the density of insect pests did not increase, there was no weight loss, and the germination rate was the same as that recorded at the beginning of the experiment. Storing shelled groundnuts in PICS bags is the most cost-effective approach, as it increases the quantity of grain stored.

  7. Dynamic load balancing in a concurrent plasma PIC code on the JPL/Caltech Mark III hypercube

    International Nuclear Information System (INIS)

    Liewer, P.C.; Leaver, E.W.; Decyk, V.K.; Dawson, J.M.

    1990-01-01

    Dynamic load balancing has been implemented in a concurrent one-dimensional electromagnetic plasma particle-in-cell (PIC) simulation code using a method which adds very little overhead to the parallel code. In PIC codes, the orbits of many interacting plasma electrons and ions are followed as an initial value problem as the particles move in electromagnetic fields calculated self-consistently from the particle motions. The code was implemented using the GCPIC algorithm in which the particles are divided among processors by partitioning the spatial domain of the simulation. The problem is load-balanced by partitioning the spatial domain so that each partition has approximately the same number of particles. During the simulation, the partitions are dynamically recreated as the spatial distribution of the particles changes in order to maintain processor load balance
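    The partitioning rule described above (sub-domain boundaries chosen so each processor holds roughly the same number of particles) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; `balanced_partitions` is a hypothetical helper.

```python
import numpy as np

def balanced_partitions(positions, n_procs, length):
    """Choose 1D sub-domain boundaries so each processor holds roughly
    the same number of particles (GCPIC-style rebalance sketch)."""
    # Histogram of particle positions on a fine auxiliary grid.
    counts, edges = np.histogram(positions, bins=1000, range=(0.0, length))
    cum = np.cumsum(counts)
    target = cum[-1] / n_procs
    # Boundary k is placed where the cumulative count reaches k*target.
    bounds = [0.0]
    for k in range(1, n_procs):
        idx = np.searchsorted(cum, k * target)
        bounds.append(edges[idx + 1])
    bounds.append(length)
    return bounds

# Usage: a bunched distribution gets narrower partitions where
# particles cluster, so each processor's load stays balanced.
rng = np.random.default_rng(0)
x = rng.normal(5.0, 0.5, 100_000).clip(0.0, 10.0)
print(balanced_partitions(x, 4, 10.0))
```

In a dynamic setting, the same computation would be rerun periodically as the spatial distribution of particles evolves.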

  8. Particle Acceleration in Pulsar Wind Nebulae: PIC Modelling

    Science.gov (United States)

    Sironi, Lorenzo; Cerutti, Benoît

    We discuss the role of PIC simulations in unveiling the origin of the emitting particles in PWNe. After describing the basics of the PIC technique, we summarize its implications for the quiescent and the flaring emission of the Crab Nebula, as a prototype of PWNe. A consensus seems to be emerging that, in addition to the standard scenario of particle acceleration via the Fermi process at the termination shock of the pulsar wind, magnetic reconnection in the wind, at the termination shock and in the Nebula plays a major role in powering the multi-wavelength signatures of PWNe.

  9. Numerical Schemes for Charged Particle Movement in PIC Simulations

    International Nuclear Information System (INIS)

    Kulhanek, P.

    2001-01-01

    A PIC model of plasma fibers has been under development in the Department of Physics of the Czech Technical University for several years. The program code is written in FORTRAN 95, free-form (without compulsory columns). The compiler and linker used were from Compaq Visual Fortran 6.1A, embedded in the Microsoft Development Studio GUI. A fully three-dimensional code with periodic boundary conditions was developed. Electromagnetic fields are localized on a grid, and particles move freely through this grid. One of the sub-problems of the PIC model is the numerical particle solver, which is discussed in this paper. (author)
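    The abstract does not reproduce the solver itself; a common choice for such a PIC particle solver is the Boris scheme. Below is a minimal non-relativistic sketch in Python, an illustrative textbook reconstruction rather than the FORTRAN 95 code described above.

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One step of the standard non-relativistic Boris particle solver:
    half electric kick, magnetic rotation, half kick, position drift."""
    v_minus = v + 0.5 * q_m * dt * E          # first half acceleration
    t = 0.5 * q_m * dt * B                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)   # norm-preserving rotation
    v_new = v_plus + 0.5 * q_m * dt * E       # second half acceleration
    return x + dt * v_new, v_new

# Example: one step in a uniform magnetic field along z.
x1, v1 = boris_push(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                    np.zeros(3), np.array([0.0, 0.0, 1.0]), q_m=1.0, dt=0.1)
```

In a pure magnetic field the Boris rotation preserves the particle speed exactly, which is why it is a standard reference point for PIC particle solvers.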

  10. EXPERIMENTAL INVESTIGATION OF PIC FORMATION IN CFC-12 INCINERATION

    Science.gov (United States)

    The report gives results of experiments to determine the effect of flame-zone temperature on gas-phase flame formation and destruction of products of incomplete combustion (PICs) during dichlorodifluoromethane (CFC-12) incineration. The effect of water injection into the flame ...

  11. 46 CFR 13.301 - Original application for “Tankerman-PIC (Barge)” endorsement.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Original application for “Tankerman-PIC (Barge)” endorsement. 13.301 Section 13.301 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY MERCHANT MARINE....301 Original application for “Tankerman-PIC (Barge)” endorsement. Each applicant for a “Tankerman-PIC...

  12. Final Report UCLA-Thermochemical Storage with Anhydrous Ammonia

    Energy Technology Data Exchange (ETDEWEB)

    Lavine, Adrienne [Univ. of California, Los Angeles, CA (United States)

    2018-02-05

    investigation. UCLA has filed a patent that protects the new ideas developed during this project. Discussions are ongoing with potential investors with the aim of partnering for further work. As well as immediate improvements and extra work with the existing experimental system, a key goal is to extend it to a small solar-driven project at an early opportunity.

  13. UCLA1 aptamer inhibition of human immunodeficiency virus type 1 subtype C primary isolates in macrophages and selection of resistance

    CSIR Research Space (South Africa)

    Mufhandu, Hazel T

    2016-09-01

    Full Text Available isolates in monocyte-derived macrophages (MDMs). Of 4 macrophage-tropic isolates tested, 3 were inhibited by UCLA1 in the low nanomolar range (IC80 <29 nM). One isolate that showed reduced susceptibility (<50 nM) to UCLA1 contained mutations in the a5 helix...

  14. Fusion PIC code performance analysis on the Cori KNL system

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, Tuomas S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Friesen, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Raman, Karthic [INTEL Corp. (United States)

    2017-05-25

    We study the attainable performance of particle-in-cell codes on the Cori KNL system by analyzing a miniature particle-push application based on the fusion PIC code XGC1. We start from the most basic building blocks of a PIC code and build up the complexity to identify the kernels that cost the most in performance and focus optimization efforts there. Particle-push kernels operate at high arithmetic intensity (AI) and are not likely to be memory-bandwidth or even cache-bandwidth bound on KNL. Therefore, we see only minor benefits from the high-bandwidth memory available on KNL, and achieving good vectorization is shown to be the most beneficial optimization path, with a theoretical yield of up to 8x speedup on KNL. In practice we are able to obtain up to a 4x gain from vectorization due to limitations set by the data layout and memory latency.
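    The data-layout limitation mentioned above is the classic array-of-structures versus structure-of-arrays issue: an SoA particle store keeps each coordinate contiguous, so the push loop runs at unit stride and can vectorize. A toy sketch follows; the drift kernel and names are illustrative, not XGC1 code.

```python
import numpy as np

n, dt = 10_000, 0.01
rng = np.random.default_rng(1)

# AoS layout: one record per particle (x,y,z,vx,vy,vz per row), so a
# loop over one field walks memory with a stride of 6 values.
aos = rng.standard_normal((n, 6))

# SoA layout: one contiguous array per field, unit-stride access.
x, vx = aos[:, 0].copy(), aos[:, 3].copy()

# Reference result computed AoS-style, before updating the SoA copy.
expected = aos[:, 0] + dt * aos[:, 3]

# Vectorized SoA update: the whole push is unit-stride arithmetic,
# exactly the pattern compilers (and NumPy) vectorize well.
x += dt * vx
```

The two layouts compute the same result; the difference is purely in the memory-access pattern, which is what determines how much of the theoretical vector speedup is realized.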

  15. Development of in-situ visualization tool for PIC simulation

    International Nuclear Information System (INIS)

    Ohno, Nobuaki; Ohtani, Hiroaki

    2014-01-01

    As the capability of supercomputers improves, the sizes of simulations and their output data become larger and larger. Visualization is usually carried out on a researcher's PC with interactive visualization software after the simulation has completed. However, the data are now becoming too large for this approach. A promising answer is in-situ visualization, in which the simulation code is coupled with the visualization code and visualization is performed alongside the simulation on the same supercomputer. We developed an in-situ visualization tool for particle-in-cell (PIC) simulation, provided as a Fortran module. We coupled it with a PIC simulation code, tested the coupled code on the Plasma Simulator supercomputer, and verified that it works. (author)

  16. Introduction to RISC microcontrollers (Microchip PICs)

    Directory of Open Access Journals (Sweden)

    Tito Flórez C.

    1998-05-01

    Full Text Available Microcontrollers have been of great help in many fields, one of the best known being control. Getting started with microcontrollers normally demands an enormous amount of time, owing, among other things, to how easy it is to get lost in the sea of information contained in their manuals. Given the great similarity among PICs in architecture, instruction set, and programming, the PIC 16C84 is taken as a good prototype microcontroller, and the most important information is given (with corresponding examples) in order to get properly oriented in the handling of these devices.

  17. 77 FR 25739 - Notice of Inventory Completion: Fowler Museum at UCLA, Los Angeles, CA

    Science.gov (United States)

    2012-05-01

    ... objects are 1 awl, 1 bone tool, 2 obsidian biface fragments, 9 bags of obsidian debitage, 4 stone metate fragments, 4 bags of animal bone, 1 obsidian hydration sample, and 5 bags of organic flotation residue. The... artifacts and obsidian hydration dating. The Fowler Museum at UCLA has determined the human remains and...

  18. The UCLA Young Autism Project: A Reply to Gresham and MacMillan.

    Science.gov (United States)

    Smith, Tristram; Lovaas, O. Ivar

    1997-01-01

    Responds to "Autistic Recovery? An Analysis and Critique of the Empirical Evidence on the Early Intervention Project" (Gresham and MacMillan), which criticizes research showing the effectiveness of the UCLA Young Autism Project program for children with autism. The article's misunderstandings are discussed and the program is explained. (CR)

  19. Occupational Analysis: Hospital Radiologic Technologist. The UCLA Allied Health Professions Project.

    Science.gov (United States)

    Reeder, Glenn D.; And Others

    In an effort to meet the growing demand for skilled radiologic technologists and other supportive personnel educated through the associate degree level, a national survey was conducted as part of the UCLA Allied Health Professions Project to determine the tasks performed by personnel in the field and lay the groundwork for development of…

  20. In flight calibrations of Ibis/PICsIT

    International Nuclear Information System (INIS)

    Malaguti, G.; Di Cocco, G.; Foschini, L.; Stephen, J.B.; Bazzano, A.; Ubertini, P.; Bird, A.J.; Laurent, P.; Segreto, A.

    2003-01-01

    PICsIT (Pixellated Imaging Caesium Iodide Telescope) is the high-energy detector of the IBIS telescope on board the INTEGRAL satellite. It consists of 4096 independent detection units, ∼0.7 cm² in cross-section, operating in the energy range between 175 keV and 10 MeV. The intrinsically low signal-to-noise ratio in the gamma-ray astronomy domain implies very long observations, lasting 10^5-10^6 s. Moreover, the image formation principle on which PICsIT works is that of coded imaging, in which the entire detection plane contributes to each decoded sky pixel. For these two main reasons, the monitoring, and possible correction, of the spatial and temporal non-uniformity of pixel performance, especially in terms of gain and energy resolution, is of paramount importance. The IBIS on-board ²²Na calibration source allows the calibration of each pixel to an accuracy of <0.5% by integrating the data from a few revolutions at constant temperature. The two calibration lines, at 511 and 1275 keV, also allow the measurement and monitoring of the PICsIT energy resolution, which proves to be very stable at ∼19% and ∼9% (FWHM) respectively, and consistent with the values expected from analytical predictions checked against pre-launch tests. (authors)

  1. Charge-conserving FEM-PIC schemes on general grids

    International Nuclear Information System (INIS)

    Campos Pinto, M.; Jund, S.; Salmon, S.; Sonnendruecker, E.

    2014-01-01

    Particle-In-Cell (PIC) solvers are a major tool for the understanding of the complex behavior of a plasma or a particle beam in many situations. An important issue for electromagnetic PIC solvers, where the fields are computed using Maxwell's equations, is the problem of discrete charge conservation. In this article, we aim at proposing a general mathematical formulation for charge-conserving finite-element Maxwell solvers coupled with particle schemes. In particular, we identify the finite-element continuity equations that must be satisfied by the discrete current sources for several classes of time-domain Vlasov-Maxwell simulations to preserve the Gauss law at each time step, and propose a generic algorithm for computing such consistent sources. Since our results cover a wide range of schemes (namely curl-conforming finite element methods of arbitrary degree, general meshes in two or three dimensions, several classes of time discretization schemes, particles with arbitrary shape factors and piecewise polynomial trajectories of arbitrary degree), we believe that they provide a useful roadmap in the design of high-order charge-conserving FEM-PIC numerical schemes. (authors)
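    The continuity requirement can be made concrete in a 1D finite-difference toy: if the face current is defined as a prefix sum of the change in deposited charge, the discrete continuity equation (and hence the Gauss law) holds exactly at every step. This is a cloud-in-cell sketch, not the paper's curl-conforming finite-element formulation.

```python
import numpy as np

def deposit_cic(x, q, nx, dx):
    """Cloud-in-cell (linear) charge deposition on a periodic grid."""
    rho = np.zeros(nx)
    i = int(x // dx)
    w = x / dx - i
    rho[i % nx] += q * (1.0 - w)
    rho[(i + 1) % nx] += q * w
    return rho

nx, dx, dt, q = 16, 1.0, 0.1, 1.0
rho_old = deposit_cic(4.3, q, nx, dx)   # particle before the push
rho_new = deposit_cic(4.9, q, nx, dx)   # particle after the push

# Consistent face current: prefix sum of the deposited-charge change.
J = -np.cumsum(rho_new - rho_old) * dx / dt

# Discrete continuity: (rho^{n+1}-rho^n)/dt + (J_i - J_{i-1})/dx = 0,
# using a periodic backward difference of J. The residual vanishes.
residual = (rho_new - rho_old) / dt + (J - np.roll(J, 1)) / dx
```

Because the current is constructed directly from the change in deposited charge, no separate Poisson correction step is needed; this is the property the paper's finite-element continuity equations generalize.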

  2. On the elimination of numerical Cerenkov radiation in PIC simulations

    International Nuclear Information System (INIS)

    Greenwood, Andrew D.; Cartwright, Keith L.; Luginsland, John W.; Baca, Ernest A.

    2004-01-01

    Particle-in-cell (PIC) simulations are a useful tool in modeling plasma in physical devices. The Yee finite difference time domain (FDTD) method is commonly used in PIC simulations to model the electromagnetic fields. However, in the Yee FDTD method, poorly resolved waves at frequencies near the cut off frequency of the grid travel slower than the physical speed of light. These slowly traveling, poorly resolved waves are not a problem in many simulations because the physics of interest are at much lower frequencies. However, when high energy particles are present, the particles may travel faster than the numerical speed of their own radiation, leading to non-physical, numerical Cerenkov radiation. Due to non-linear interaction between the particles and the fields, the numerical Cerenkov radiation couples into the frequency band of physical interest and corrupts the PIC simulation. There are two methods of mitigating the effects of the numerical Cerenkov radiation. The computational stencil used to approximate the curl operator can be altered to improve the high frequency physics, or a filtering scheme can be introduced to attenuate the waves that cause the numerical Cerenkov radiation. Altering the computational stencil is more physically accurate but is difficult to implement while maintaining charge conservation in the code. Thus, filtering is more commonly used. Two previously published filters by Godfrey and Friedman are analyzed and compared to ideally desired filter properties
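    The filtering approach can be illustrated with the simplest member of this family: a binomial (1-2-1) smoothing pass, which leaves well-resolved waves nearly untouched but removes the grid-Nyquist mode where the worst-resolved, slow waves live. The Godfrey and Friedman filters analyzed in the paper are more sophisticated and tunable; this is only a minimal sketch.

```python
import numpy as np

def binomial_filter(j):
    """One pass of (1/4, 1/2, 1/4) smoothing with periodic ends."""
    return 0.25 * np.roll(j, 1) + 0.5 * j + 0.25 * np.roll(j, -1)

m = np.arange(64)
smooth = np.sin(2.0 * np.pi * 2 * m / 64)   # well-resolved long wavelength
nyquist = np.cos(np.pi * m)                 # alternating grid-Nyquist mode

# The filter gain is 0.5 + 0.5*cos(k*dx): about 0.99 for the long
# wavelength above, exactly zero at the Nyquist wavenumber.
filtered = binomial_filter(nyquist)         # vanishes to rounding error
```

In a PIC code such a filter would be applied to the deposited current each step; repeated passes sharpen the cutoff at the cost of extra attenuation of intermediate wavelengths.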

  3. SMILEI: A collaborative, open-source, multi-purpose PIC code for the next generation of super-computers

    Science.gov (United States)

    Grech, Mickael; Derouillat, J.; Beck, A.; Chiaramello, M.; Grassi, A.; Niel, F.; Perez, F.; Vinci, T.; Fle, M.; Aunai, N.; Dargent, J.; Plotnikov, I.; Bouchard, G.; Savoini, P.; Riconda, C.

    2016-10-01

    Over the last decades, Particle-In-Cell (PIC) codes have been central tools for plasma simulations. Today, new trends in High-Performance Computing (HPC) are emerging, dramatically changing HPC-relevant software design and putting some, if not most, legacy codes far beyond the level of performance expected on the new and future massively parallel supercomputers. SMILEI is a new open-source PIC code co-developed by plasma physicists and HPC specialists, and applied to a wide range of physics studies: from laser-plasma interaction to astrophysical plasmas. It benefits from an innovative parallelization strategy that relies on a super-domain decomposition allowing for enhanced cache use and efficient dynamic load balancing. Beyond these HPC-related developments, SMILEI also benefits from additional physics modules for dealing with binary collisions, field and collisional ionization, and radiation back-reaction. This poster presents the SMILEI project, its HPC capabilities, and some of the physics problems tackled with SMILEI.

  4. A 3D gyrokinetic particle-in-cell simulation of fusion plasma microturbulence on parallel computers

    Science.gov (United States)

    Williams, T. J.

    1992-12-01

    One of the grand challenge problems now supported by HPCC is the Numerical Tokamak Project. A goal of this project is the study of low-frequency micro-instabilities in tokamak plasmas, which are believed to cause energy loss via turbulent thermal transport across the magnetic field lines. An important tool in this study is gyrokinetic particle-in-cell (PIC) simulation. Gyrokinetic, as opposed to fully kinetic, methods are particularly well suited to the task because they are optimized to study the frequency and wavelength domain of the micro-instabilities. Furthermore, many researchers now employ low-noise δf methods to greatly reduce statistical noise by modelling only the perturbation of the gyrokinetic distribution function from a fixed background, not the entire distribution function. In spite of the increased efficiency of these improved algorithms over conventional PIC algorithms, gyrokinetic PIC simulations of tokamak micro-turbulence are still highly demanding of computer power, even for fully vectorized codes on vector supercomputers. For this reason, we have worked for several years to redevelop these codes on massively parallel computers. We have developed 3D gyrokinetic PIC simulation codes for SIMD and MIMD parallel processors, using control-parallel, data-parallel, and domain-decomposition message-passing (DDMP) programming paradigms. This poster summarizes our earlier work on codes for the Connection Machine and BBN TC2000 and our development of a generic DDMP code for distributed-memory parallel machines. We discuss the memory-access issues which are of key importance in writing parallel PIC codes, with special emphasis on issues peculiar to gyrokinetic PIC. We outline the domain decompositions in our new DDMP code and discuss the interplay of different domain decompositions suited for the particle-pushing and field-solution components of the PIC algorithm.
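    The particle-management side of a DDMP decomposition can be sketched with ranks mocked as entries of a Python list. Real codes exchange particles with MPI messages between neighboring sub-domains; the function below is illustrative bookkeeping only, not code from the work described above.

```python
# Sketch of the particle-exchange step of a domain-decomposition
# message-passing (DDMP) PIC code, 1D slab decomposition.

def exchange_particles(domains, bounds):
    """Move each particle that left its 1D slab to the owning rank.
    domains[r] is rank r's particle list; bounds has n_ranks+1 edges."""
    n = len(domains)
    outgoing = [[] for _ in range(n)]
    for rank in range(n):
        keep = []
        for x in domains[rank]:
            if bounds[rank] <= x < bounds[rank + 1]:
                keep.append(x)
            else:
                # "Send" the particle to the rank whose slab contains it.
                dest = next(r for r in range(n)
                            if bounds[r] <= x < bounds[r + 1])
                outgoing[dest].append(x)
        domains[rank] = keep
    for rank in range(n):          # "receive" phase
        domains[rank].extend(outgoing[rank])
    return domains

bounds = [0.0, 1.0, 2.0, 3.0]
domains = [[0.5, 1.5], [2.5], [0.1, 2.9]]
out = exchange_particles(domains, bounds)
```

After the exchange every particle again lives on the rank owning its sub-domain, so the field solve and push can proceed with purely local data plus guard-cell communication.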

  5. Boltzmann electron PIC simulation of the E-sail effect

    Directory of Open Access Journals (Sweden)

    P. Janhunen

    2015-12-01

    Full Text Available The solar wind electric sail (E-sail) is a planned in-space propulsion device that uses the natural solar wind momentum flux for spacecraft propulsion with the help of long, charged, centrifugally stretched tethers. The problem of accurately predicting the E-sail thrust is still somewhat open, however, due to a possible electron population trapped by the tether. Here we develop a new type of particle-in-cell (PIC) simulation for predicting E-sail thrust. In the new simulation, electrons are modelled as a fluid, hence resembling a hybrid simulation, but in contrast to normal hybrid simulation, the Poisson equation is used, as in normal PIC, to calculate the self-consistent electrostatic field. For electron-repulsive parts of the potential, the Boltzmann relation is used. For electron-attractive parts of the potential we employ a power law which contains a parameter that can be used to control the number of trapped electrons. We perform a set of runs varying the parameter and select the one with the smallest number of trapped electrons which still behaves in a physically meaningful way, in the sense of producing not more than one solar wind ion deflection shock upstream of the tether. By this prescription we obtain thrust-per-tether-length values that are in line with earlier estimates, although somewhat smaller. We conclude that the Boltzmann PIC simulation is a new tool for simulating the E-sail thrust. This tool enables us to calculate solutions rapidly and allows us to easily study different scenarios for trapped electrons.
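    The electron-repulsive branch described above can be illustrated with a toy 1D nonlinear Poisson solve using the Boltzmann relation for the electrons, in normalized units (potential in T_e/e, lengths in Debye lengths) with a fixed ion background instead of PIC ions. The paper's power-law treatment of electron-attractive regions is not reproduced; this is a sketch only.

```python
import numpy as np

def solve_boltzmann_poisson(n_i, dx, iters=200):
    """Damped Gauss-Seidel solve of phi'' = exp(phi) - n_i with
    phi = 0 at both ends (Boltzmann electrons: n_e = exp(phi))."""
    phi = np.zeros(len(n_i))
    for _ in range(iters):
        rhs = np.exp(phi) - n_i
        new = phi.copy()
        for i in range(1, len(n_i) - 1):       # sweep interior points
            new[i] = 0.5 * (new[i - 1] + phi[i + 1] - dx**2 * rhs[i])
        phi = 0.5 * phi + 0.5 * new            # damp the nonlinearity
    return phi

# A uniform, quasi-neutral ion background gives phi = 0 everywhere;
# an ion-density bump gives a positive, Debye-screened potential hump.
phi = solve_boltzmann_poisson(np.ones(51), dx=0.2)
```

In the full simulation the ion density on the right-hand side would come from PIC ion deposition each step, with the same nonlinear Poisson solve closing the loop.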

  6. Program Package for 3d PIC Model of Plasma Fiber

    Science.gov (United States)

    Kulhánek, Petr; Břeň, David

    2007-08-01

    A fully three-dimensional Particle-in-Cell model of the plasma fiber has been developed. The code is written in FORTRAN 95, implemented in CVF (Compaq Visual Fortran) under the Microsoft Visual Studio user interface. Five particle solvers and two field solvers are included in the model. The solvers have relativistic and non-relativistic variants. The model can deal with both periodic and non-periodic boundary conditions. The mechanism of surface turbulence generation in the plasma fiber was successfully simulated with the PIC program package.

  7. A comparative study of gold UCLA-type and CAD/CAM titanium implant abutments

    Science.gov (United States)

    Park, Ji-Man; Lee, Jai-Bong; Heo, Seong-Joo

    2014-01-01

    PURPOSE The aim of this study was to evaluate the interface accuracy of computer-assisted designed and manufactured (CAD/CAM) titanium abutments and implant fixtures compared to gold-cast UCLA abutments. MATERIALS AND METHODS An external connection implant system (Mark III, n=10) and an internal connection implant system (Replace Select, n=10) were used; 5 of each group were connected to milled titanium abutments and the rest were connected to gold-cast UCLA abutments. The implant fixture and abutment were tightened to a torque of 35 Ncm using a digital torque gauge, and initial detorque values were measured 10 minutes after tightening. To mimic mastication, cyclic loading was applied at 14 Hz for one million cycles, with the stress amplitude ranging from 0 N to 100 N. After the cyclic loading, detorque values were measured again. The fixture-abutment gaps were measured under a microscope and recorded with an accuracy of ±0.1 µm at 50 points. RESULTS Initial detorque values of the milled abutments were significantly higher than those of the cast abutments (P<.05). After cyclic loading, detorque values of the cast abutments increased, but those of the milled abutments decreased (P<.05); no significant difference was found between the milled abutment group and the cast abutment group after cyclic loading. CONCLUSION In conclusion, CAD/CAM milled titanium abutments can be fabricated with sufficient accuracy to permit screw joint stability between abutment and fixture comparable to that of the traditional gold-cast UCLA abutment. PMID:24605206

  8. MHD PbLi experiments in MaPLE loop at UCLA

    International Nuclear Information System (INIS)

    Courtessole, C.; Smolentsev, S.; Sketchley, T.; Abdou, M.

    2016-01-01

    Highlights: • The paper overviews the MaPLE facility at UCLA: one-of-a-few PbLi MHD loop in the world. • We present the progress achieved in development and testing of high-temperature PbLi flow diagnostics. • The most important MHD experiments carried out since the first loop operation in 2011 are summarized. - Abstract: Experiments on magnetohydrodynamic (MHD) flows are critical to understanding complex flow phenomena in ducts of liquid metal blankets, in particular those that utilize eutectic alloy lead–lithium as breeder/coolant, such as self-cooled, dual-coolant and helium-cooled lead–lithium blanket concepts. The primary goal of MHD experiments at UCLA using the liquid metal flow facility called MaPLE (Magnetohydrodynamic PbLi Experiment) is to address important MHD effects, heat transfer and flow materials interactions in blanket-relevant conditions. The paper overviews the one-of-a-kind MaPLE loop at UCLA and presents recent experimental activities, including the development and testing of high-temperature PbLi flow diagnostics and experiments that have been performed since the first loop operation in 2011. We also discuss MaPLE upgrades, which need to be done to substantially expand the experimental capabilities towards a new class of MHD flow phenomena that includes buoyancy effects.

  9. MHD PbLi experiments in MaPLE loop at UCLA

    Energy Technology Data Exchange (ETDEWEB)

    Courtessole, C., E-mail: cyril@fusion.ucla.edu; Smolentsev, S.; Sketchley, T.; Abdou, M.

    2016-11-01

    Highlights: • The paper overviews the MaPLE facility at UCLA: one-of-a-few PbLi MHD loop in the world. • We present the progress achieved in development and testing of high-temperature PbLi flow diagnostics. • The most important MHD experiments carried out since the first loop operation in 2011 are summarized. - Abstract: Experiments on magnetohydrodynamic (MHD) flows are critical to understanding complex flow phenomena in ducts of liquid metal blankets, in particular those that utilize eutectic alloy lead–lithium as breeder/coolant, such as self-cooled, dual-coolant and helium-cooled lead–lithium blanket concepts. The primary goal of MHD experiments at UCLA using the liquid metal flow facility called MaPLE (Magnetohydrodynamic PbLi Experiment) is to address important MHD effects, heat transfer and flow materials interactions in blanket-relevant conditions. The paper overviews the one-of-a-kind MaPLE loop at UCLA and presents recent experimental activities, including the development and testing of high-temperature PbLi flow diagnostics and experiments that have been performed since the first loop operation in 2011. We also discuss MaPLE upgrades, which need to be done to substantially expand the experimental capabilities towards a new class of MHD flow phenomena that includes buoyancy effects.

  10. Searching for Short GRBs in Soft Gamma Rays with INTEGRAL/PICsIT

    DEFF Research Database (Denmark)

    Rodi, James; Bazzano, Angela; Ubertini, Pietro

    …spectral information about these sources at soft gamma-ray energies. We have begun a study of PICsIT data for faint SGRBs similar to the one associated with the binary neutron star (BNS) merger GW170817, and are also preparing for future GW triggers by developing a real-time burst analysis for PICsIT. Searching the PICsIT data for significant excesses during ~30 min-long pointings containing times of SGRBs, we have been able to differentiate between SGRBs and spurious events. Also, this work allows us to assess what fraction of reported SGRBs have been detected by PICsIT, which can be used to provide…

  11. PICsIT a position sensitive detector for space applications

    CERN Document Server

    Labanti, C; Ferriani, S; Ferro, G; Malaguti, G; Mauri, A; Rossi, E; Schiavone, F; Stephen, J B; Traci, A; Visparelli, D

    2002-01-01

    Pixellated Imaging CsI Telescope (PICsIT) is the high energy detector plane of Imager on Board INTEGRAL Satellite (IBIS), one of the main instruments on board the International Gamma-Ray Astrophysics Laboratory (INTEGRAL) satellite that will be launched in the year 2001. It consists of 4096 CsI(Tl) individual detector elements and operates in the energy range from 120 to 10,000 keV. PICsIT is made up of 8 identical modules, each housing 512 scintillating crystals coupled to PIN photodiodes (PD). Each crystal, 30 mm long and with a cross-section of 8.55 × 8.55 mm², is wrapped with a white diffusing coating and then inserted into an aluminium crate. In order to have a compact design, two electronic boards, mounted directly below the crystal/PD assembly, host both the Analogue and Digital Front-End Electronics (FEE). The behaviour of the read-out FEE has a direct impact on the performance of the whole detector in terms of lower energy threshold, energy resolution and event time tagging. Due to the great numb...

  12. 2D arc-PIC code description: methods and documentation

    CERN Document Server

    Timko, Helga

    2011-01-01

    Vacuum discharges are one of the main limiting factors for future linear collider designs such as that of the Compact LInear Collider. To optimize machine efficiency, maintaining the highest feasible accelerating gradient below a certain breakdown rate is desirable; understanding breakdowns can therefore help us to achieve this goal. As a part of ongoing theoretical research on vacuum discharges at the Helsinki Institute of Physics, the build-up of plasma can be investigated through the particle-in-cell method. For this purpose, we have developed the 2D Arc-PIC code introduced here. We present an exhaustive description of the 2D Arc-PIC code in two parts. In the first part, we introduce the particle-in-cell method in general and detail the techniques used in the code. In the second part, we provide a documentation and derivation of the key equations occurring in the code. The code is original work of the author, written in 2010, and is therefore under the copyright of the author. The development of the code h...

  13. PIC microcontroller-based RF wireless ECG monitoring system.

    Science.gov (United States)

    Oweis, R J; Barhoum, A

    2007-01-01

    This paper presents a radio-telemetry system that provides the possibility of ECG signal transmission from a patient detection circuit via an RF data link. A PC then receives the signal through a National Instruments data acquisition card (NIDAQ). The PC is equipped with software allowing the received ECG signals to be saved, analysed, and sent by email to another part of the world. The proposed telemetry system consists of a patient unit and a PC unit. The amplified and filtered ECG signal is sampled 360 times per second, and the A/D conversion is performed by a PIC16F877 microcontroller. The major contribution of the final proposed system is that it detects, processes and sends patients' ECG data over a wireless RF link to a maximum distance of 200 m. Transmitted ECG data with different numbers of samples were received, decoded by means of another PIC microcontroller, and displayed using a MATLAB program. The designed software is presented in a graphical user interface utility.

  14. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.
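
    The equivalent-circuit idea above can be illustrated with a toy model (not the actual 2-D PIC implementation): the RF structure is reduced to a parallel RLC circuit whose gap voltage is driven by the beam-induced current. All names and parameter values below are invented for the illustration:

    ```python
    import numpy as np

    def cavity_voltage(i_beam, dt, f0, r_over_q, q_factor):
        """Gap voltage of a parallel RLC equivalent circuit driven by a
        beam-induced current i(t):  C dV/dt = i - V/R - I_L,  L dI_L/dt = V.
        Semi-implicit Euler keeps the oscillator stable for omega0*dt << 1."""
        omega0 = 2.0*np.pi*f0
        C = 1.0/(omega0*r_over_q)        # since R/Q = sqrt(L/C) = 1/(omega0*C)
        L = r_over_q/omega0
        R = r_over_q*q_factor
        V = I_L = 0.0
        out = np.empty(len(i_beam))
        for n, i in enumerate(i_beam):
            V += dt*(i - V/R - I_L)/C    # update V with the old inductor current
            I_L += dt*V/L                # then I_L with the new voltage
            out[n] = V
        return out

    # Drive on resonance vs. one octave above it (illustrative numbers):
    t = np.arange(0.0, 50.0, 1e-3)
    v_on = cavity_voltage(0.01*np.sin(2*np.pi*1.0*t), 1e-3, 1.0, 100.0, 100.0)
    v_off = cavity_voltage(0.01*np.sin(2*np.pi*2.0*t), 1e-3, 1.0, 100.0, 100.0)
    ```

    The on-resonance voltage rings up toward R·I₀ over the cavity filling time 2Q/ω₀, while the off-resonance drive couples only weakly, which is the behaviour the port boundary condition reproduces inside a PIC run.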

  16. Modelling RF sources using 2-D PIC codes

    International Nuclear Information System (INIS)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  17. Evaluation of stability of interface between CCM (Co-Cr-Mo) UCLA abutment and external hex implant.

    Science.gov (United States)

    Yoon, Ki-Joon; Park, Young-Bum; Choi, Hyunmin; Cho, Youngsung; Lee, Jae-Hoon; Lee, Keun-Woo

    2016-12-01

    The purpose of this study is to evaluate the stability of the interface between the Co-Cr-Mo (CCM) UCLA abutment and an external hex implant. Sixteen external hex implant fixtures were assigned to two groups (CCM and Gold) and were embedded in molds using clear acrylic resin. Screw-retained prostheses were constructed using CCM UCLA abutments and Gold UCLA abutments. The external implant fixtures and screw-retained prostheses were connected using abutment screws. After the abutments were tightened to 30 Ncm torque, 5 kg thermocyclic functional loading was applied by a chewing simulator, with a target of 1.0 × 10⁶ cycles. After cyclic loading, removal torque values were recorded using a driving torque tester, and the interface between implant fixture and abutment was evaluated by scanning electron microscope (SEM). The means and standard deviations (SD) of the CCM and Gold groups were analyzed with an independent t-test at the significance level of 0.05. Fractures of crowns, abutments, abutment screws, and fixtures and loosening of abutment screws were not observed after thermocyclic loading. There were no statistically significant differences in the recorded removal torque values between the CCM and Gold groups (P>.05). SEM analysis revealed remarkable wear patterns at the abutment interface only for Gold UCLA abutments; those patterns were not observed for other specimens. Within the limits of this study, the CCM UCLA abutment showed no statistically significant difference from the Gold UCLA abutment in the stability of the interface with the external hex implant.
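
    The group comparison described above is a standard independent (two-sample) t-test. A self-contained sketch with made-up removal-torque numbers (the real data are in the paper) might look like this:

    ```python
    import math

    def independent_t(a, b):
        """Two-sample Student's t statistic with pooled variance, as used to
        compare removal torque between two abutment groups. Returns (t, df)."""
        na, nb = len(a), len(b)
        ma, mb = sum(a)/na, sum(b)/nb
        va = sum((x - ma)**2 for x in a)/(na - 1)
        vb = sum((x - mb)**2 for x in b)/(nb - 1)
        sp2 = ((na - 1)*va + (nb - 1)*vb)/(na + nb - 2)   # pooled variance
        t = (ma - mb)/math.sqrt(sp2*(1.0/na + 1.0/nb))
        return t, na + nb - 2

    # Hypothetical removal-torque values (Ncm), n=8 per group as in the study:
    ccm  = [26.1, 25.4, 27.0, 24.8, 26.5, 25.9, 26.2, 25.1]
    gold = [25.7, 26.3, 24.9, 26.8, 25.2, 26.0, 25.5, 26.4]
    t, df = independent_t(ccm, gold)
    ```

    With |t| below the two-tailed 5% critical value (2.145 for df = 14), the null hypothesis of equal means is not rejected, which is the form of the paper's P>.05 conclusion.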

  18. Design and Simulation of a PIC16F877A and LM35 Based ...

    African Journals Online (AJOL)

    This paper describes the design and simulation of a virtual temperature monitoring system using Proteus (Labcenter Electronics). The device makes use of the PIC16F877A, LM35, 2x16 LCD and other discrete components. The LM35 serves as the temperature sensor, whose output is fed into the PIC16F877A for further ...

  19. Performance of PICS bags under extreme conditions in the sahel zone of Niger.

    Science.gov (United States)

    Baoua, Ibrahim B; Bakoye, Ousmane; Amadou, Laouali; Murdock, Larry L; Baributsa, Dieudonne

    2018-03-01

    Experiments in Niger assessed whether extreme environmental conditions, including sunlight exposure, affect the performance of triple-layer PICS bags in protecting cowpea grain against bruchids. Sets of PICS bags and, as controls, woven polypropylene bags, each containing 50 kg of naturally infested cowpea grain, were held in the laboratory or outside with sun exposure for 4.5 months. PICS bags held either inside or outside exhibited no significant increase in insect damage and no loss in weight after 4.5 months of storage compared to the initial values. By contrast, woven bags stored inside or outside, side by side with the PICS bags, showed several-fold increases in insects present in or on the grain and significant losses in grain weight. Grain stored inside in PICS bags showed no reduction in germination versus the initial value, but there was a small yet significant drop in germination of grain in PICS bags held outside (7.6%). Germination rates dropped substantially more in grain stored in woven bags inside (16.1%) and still more in woven bags stored outside (60%). PICS bags held inside and outside retained their ability to maintain reduced oxygen levels and elevated carbon dioxide levels inside the bag. Exposure to extreme environmental conditions degraded the outer polypropylene layer of the PICS triple-layer bag; even so, the internal polyethylene layers degraded more slowly. The effects of exposure to sunlight and of temperature and humidity variation within the sealed bags are described.

  20. PicPrint: Embedding pictures in additive manufacturing

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Eiríksson, Eyþór Rúnar; Lyngby, Rasmus Ahrenkiel

    2017-01-01

    Here we present PicPrint, a method and tool for producing an additively manufactured lithophane, enabling the transfer and embedding of 2D information into additively manufactured 3D objects. The method takes an input image and converts it to a …, after which the mesh is ready for either direct print on an additive manufacturing system, or transfer to other geometries via Boolean mesh operations. …

  1. QUICKSILVER - A general tool for electromagnetic PIC simulation

    International Nuclear Information System (INIS)

    Seidel, David B.; Coats, Rebecca S.; Johnson, William A.; Kiefer, Mark L.; Mix, L. Paul; Pasik, Michael F.; Pointon, Timothy D.; Quintenz, Jeffrey P.; Riley, Douglas J.; Turner, C. David

    1997-01-01

    The dramatic increase in computational capability that has occurred over the last ten years has allowed fully electromagnetic simulations of large, complex, three-dimensional systems to move progressively from impractical, to expensive, and recently, to routine and widespread. This is particularly true for systems that require the motion of free charge to be self-consistently treated. The QUICKSILVER electromagnetic Particle-In-Cell (EM-PIC) code has been developed at Sandia National Laboratories to provide a general tool to simulate a wide variety of such systems. This tool has found widespread use for many diverse applications, including high-current electron and ion diodes, magnetically insulated power transmission systems, high-power microwave oscillators, high-frequency digital and analog integrated circuit packages, microwave integrated circuit components, antenna systems, radar cross-section applications, and electromagnetic interaction with biological material. This paper will give a brief overview of QUICKSILVER and provide some thoughts on its future development

  2. Room Thermostat with Servo Controlled by PIC Microcontroller

    Directory of Open Access Journals (Sweden)

    Jan Skapa

    2013-01-01

    Full Text Available This paper describes the design of a room thermostat with a Microchip PIC microcontroller. The thermostat is designed for a two-pipe heating system. The microprocessor controls a thermostatic valve via an electric actuator with a mechanical gear unit. The room thermostat bases its operation on measurements of the air temperature in the room and on calorimetric measurement of the heat supplied to the radiator. These features make it suitable mainly for underfloor heating regulation. The thermostat is designed to work in a network; communication with the heating system's central control unit proceeds via an RS485 bus with a proprietary communication protocol. If a communication failure occurs, the thermostat is able to work on its own. The system uses its own real-time clock circuit and a memory with heating programs, which can cover the whole heating season. The device uses positional discrete PSD control.
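
    The positional discrete PSD (sampled-data PID) control mentioned at the end of the abstract computes the actuator command from the current error, a running error sum, and an error difference. A hypothetical sketch (controller structure is the textbook form; names, tuning values, and the toy room model are illustrative, not from the paper):

    ```python
    def psd_controller(K, Ti, Td, Ts):
        """Positional discrete PSD control law with sampling period Ts:
        u_k = K*(e_k + (Ts/Ti)*sum(e_0..e_k) + (Td/Ts)*(e_k - e_{k-1})).
        Returns a stateful step function mapping error -> actuator command."""
        state = {"acc": 0.0, "prev": 0.0}
        def step(e):
            state["acc"] += e
            u = K*(e + (Ts/Ti)*state["acc"] + (Td/Ts)*(e - state["prev"]))
            state["prev"] = e
            return u
        return step

    # Drive a crude first-order room model toward a 21 degC setpoint;
    # the valve command is clamped to its physical 0..1 range:
    ctrl = psd_controller(K=2.0, Ti=600.0, Td=30.0, Ts=10.0)
    T = 15.0
    for _ in range(500):
        u = max(0.0, min(1.0, ctrl(21.0 - T)))
        T += 10.0*0.002*((15.0 + 35.0*u) - T)   # more flow raises the asymptote
    ```

    The integral (summation) term is what removes the steady-state offset that a purely proportional thermostat would leave.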

  3. Electron acceleration in the Solar corona - 3D PiC code simulations of guide field reconnection

    Science.gov (United States)

    Alejandro Munoz Sepulveda, Patricio

    2017-04-01

    The efficient electron acceleration in the solar corona, detected by means of hard X-ray emission, is still not well understood. Magnetic reconnection through current sheets is one of the proposed production mechanisms of non-thermal electrons in solar flares. Previous works in this direction were based mostly on test-particle calculations or 2D fully kinetic PiC simulations. We have now studied the consequences of self-generated current-aligned instabilities on the electron acceleration mechanisms in 3D magnetic reconnection. To this end, we carried out 3D Particle-in-Cell (PiC) numerical simulations of force-free reconnecting current sheets, appropriate for the description of solar coronal plasmas. We find efficient electron energization, evidenced by the formation of a non-thermal power-law tail with a hard spectral index smaller than -2 in the electron energy distribution function. We discuss and compare the influence of the parallel electric field versus the curvature and gradient drifts, in the guiding-center approximation, on the overall acceleration, and their dependence on different plasma parameters.
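
    A power-law tail such as the one reported above is usually quantified by fitting a spectral index to the simulated electron energy distribution; a common approach is a maximum-likelihood fit above some cutoff energy. A generic sketch on synthetic energies (not the paper's data; all names are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_power_law(alpha, e_min, n, rng):
        """Draw energies from f(E) ~ E**(-alpha) for E >= e_min
        (inverse-transform sampling)."""
        return e_min*(1.0 - rng.random(n))**(-1.0/(alpha - 1.0))

    def fit_spectral_index(E, e_min):
        """Maximum-likelihood estimate of alpha for a power-law tail:
        alpha_hat = 1 + n / sum(ln(E_i/e_min))."""
        E = E[E >= e_min]
        return 1.0 + len(E)/np.sum(np.log(E/e_min))

    energies = sample_power_law(alpha=2.5, e_min=1.0, n=20_000, rng=rng)
    alpha_hat = fit_spectral_index(energies, e_min=1.0)
    ```

    The MLE avoids the binning bias of fitting a straight line to a log-log histogram, which matters when comparing spectral indices across runs with different particle counts.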

  4. Parallel Finite Element Particle-In-Cell Code for Simulations of Space-charge Dominated Beam-Cavity Interactions

    International Nuclear Information System (INIS)

    Candel, A.; Kabel, A.; Ko, K.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.

    2007-01-01

    Over the past years, SLAC's Advanced Computations Department (ACD) has developed the parallel finite element (FE) particle-in-cell code Pic3P (Pic2P) for simulations of beam-cavity interactions dominated by space-charge effects. As opposed to standard space-charge dominated beam transport codes, which are based on the electrostatic approximation, Pic3P (Pic2P) includes space-charge, retardation and boundary effects as it self-consistently solves the complete set of Maxwell-Lorentz equations using higher-order FE methods on conformal meshes. Use of efficient, large-scale parallel processing allows for the modeling of photoinjectors with unprecedented accuracy, aiding the design and operation of the next-generation of accelerator facilities. Applications to the Linac Coherent Light Source (LCLS) RF gun are presented

  5. PIC simulation of electron acceleration in an underdense plasma

    Directory of Open Access Journals (Sweden)

    S Darvish Molla

    2011-06-01

    Full Text Available One of the interesting laser-plasma phenomena, when the laser power is high and ultra-intense, is the generation of large-amplitude plasma waves (wakefields) and electron acceleration. An intense electromagnetic laser pulse can create plasma oscillations through the action of the nonlinear ponderomotive force; electrons trapped in the wake can be accelerated to high energies, more than 1 TW. Of the wide variety of methods for generating a regular electric field in plasmas with strong laser radiation, the most attractive one at the present time is the Laser Wake Field Accelerator (LWFA) scheme. In this method, a strong Langmuir wave is excited in the plasma; electrons trapped in such a wave can acquire relativistic energies. In this paper, the PIC simulation of wakefield generation and electron acceleration in an underdense plasma with a short, ultra-intense laser pulse is discussed. A 2D electromagnetic PIC code, written in FORTRAN 90, is developed, and the propagation of different electromagnetic waves in vacuum and plasma is shown. Next, the accuracy of the implementation of the 2D electromagnetic code is verified, the code is made relativistic, and wakefield generation and electron acceleration in an underdense plasma are simulated. It is shown that when a symmetric electromagnetic pulse passes through the plasma, the longitudinal field generated in the plasma behind the pulse is weaker than the one due to an asymmetric electromagnetic pulse, and thus the electrons acquire less energy. For the asymmetric pulse, when the front part of the pulse has a shorter rise time than the back part, a stronger wakefield is generated behind the pulse, and consequently the electrons acquire more energy. In the inverse case, when the rise time of the back part of the pulse is larger than that of the front part, a weaker wakefield is generated, and the electrons …
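
    The abstract describes a 2D electromagnetic PIC code; the core PIC cycle it relies on (deposit charge, solve the field, gather, push) can be illustrated with a much smaller 1D electrostatic sketch that supports a Langmuir oscillation. This is a generic textbook-style illustration, not the authors' FORTRAN 90 code:

    ```python
    import numpy as np

    def run_pic(ng=64, npart=10_000, L=2*np.pi, dt=0.05, steps=200, amp=0.01):
        """Minimal 1D electrostatic PIC in normalized units (omega_pe = 1):
        cloud-in-cell deposit, FFT Poisson solve on a periodic grid, linear
        field gather, leapfrog push. Immobile ions provide a uniform
        neutralizing background. Returns the field-energy history."""
        dx = L/ng
        x = np.linspace(0.0, L, npart, endpoint=False)
        x = (x + amp*np.sin(x)) % L       # seed a k = 1 perturbation (L = 2*pi)
        v = np.zeros(npart)
        weight = L/npart                  # macro-particle weight so <n_e> = 1
        k = 2*np.pi*np.fft.rfftfreq(ng, d=dx)
        energy = []
        for _ in range(steps):
            # 1) deposit electron density with cloud-in-cell weighting
            g = x/dx
            j = np.floor(g).astype(int) % ng
            f = g - np.floor(g)
            ne = np.zeros(ng)
            np.add.at(ne, j, (1.0 - f)*weight/dx)
            np.add.at(ne, (j + 1) % ng, f*weight/dx)
            rho = 1.0 - ne                # ion background minus electrons
            # 2) solve div E = rho in Fourier space (DC mode removed)
            rho_hat = np.fft.rfft(rho)
            E_hat = np.zeros_like(rho_hat)
            E_hat[1:] = rho_hat[1:]/(1j*k[1:])
            E = np.fft.irfft(E_hat, n=ng)
            # 3) gather field at particles, 4) leapfrog push (q/m = -1)
            Ep = E[j]*(1.0 - f) + E[(j + 1) % ng]*f
            v -= Ep*dt
            x = (x + v*dt) % L
            energy.append(0.5*dx*np.sum(E**2))
        return np.array(energy)

    h = run_pic()
    ```

    The seeded perturbation oscillates at the plasma frequency, so the field energy sloshes back and forth into kinetic energy; a wakefield code adds the laser driver, relativistic push, and the full Maxwell solve on top of this same cycle.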

  6. Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerators boards

    Science.gov (United States)

    Fonseca, Ricardo

    2014-10-01

    The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest towards Exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as the Intel Many Integrated Core Architecture (MIC), offer peak theoretical performances of >1 TFlop/s for general-purpose calculations in a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We will focus on the parallelization and vectorization strategies followed, and present a detailed performance evaluation of code performance in comparison with the CPU code.
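
    The vectorization strategy discussed above amounts to rewriting per-particle loops as whole-array operations so the compiler can map them onto wide SIMD lanes. A toy Python/NumPy analogy of that transformation (the actual OSIRIS kernels are SIMD-optimized compiled code, not Python):

    ```python
    import numpy as np

    def push_scalar(x, v, E, q_over_m, dt):
        """Reference per-particle loop (the shape of a naive scalar port)."""
        for i in range(len(x)):
            v[i] += q_over_m*E[i]*dt
            x[i] += v[i]*dt
        return x, v

    def push_vector(x, v, E, q_over_m, dt):
        """Identical update written over whole arrays; this is the kind of
        rewrite that lets 512-bit vector units process 8 doubles per
        instruction instead of one."""
        v += q_over_m*E*dt
        x += v*dt
        return x, v

    rng = np.random.default_rng(1)
    x, v, E = rng.random(10_000), rng.random(10_000), rng.random(10_000)
    xs, vs = push_scalar(x.copy(), v.copy(), E, -1.0, 0.05)
    xv, vv = push_vector(x.copy(), v.copy(), E, -1.0, 0.05)
    # Both produce the same trajectories; only the execution model differs.
    ```

    In a real PIC code the hard part is keeping the deposit/gather steps vectorizable too, since their memory accesses are indirect.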

  7. Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture

    Science.gov (United States)

    Fonseca, Ricardo

    2017-10-01

    Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser plasma interaction. Being computationally intensive, these codes require large scale HPC systems, and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts on deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems, and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed performance evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.

  8. PIC simulation of a thermal anisotropy-driven Weibel instability in a circular rarefaction wave

    International Nuclear Information System (INIS)

    Dieckmann, M E; Sarri, G; Kourakis, I; Borghesi, M; Murphy, G C; O'C Drury, L; Bret, A; Romagnani, L; Ynnerman, A

    2012-01-01

    The expansion of an initially unmagnetized planar rarefaction wave has recently been shown to trigger a thermal anisotropy-driven Weibel instability (TAWI), which can generate magnetic fields from noise levels. It is examined here whether the TAWI can also grow in a curved rarefaction wave. The expansion of an initially unmagnetized circular plasma cloud, which consists of protons and hot electrons, into a vacuum is modelled for this purpose with a two-dimensional particle-in-cell (PIC) simulation. It is shown that the momentum transfer from the electrons to the radially accelerating protons can indeed trigger a TAWI. Radial current channels form and the aperiodic growth of a magnetowave is observed, which has a magnetic field that is oriented orthogonal to the simulation plane. The induced electric field implies that the electron density gradient is no longer parallel to the electric field. Evidence is presented here that this electric field modification triggers a second magnetic instability, which results in a rotational low-frequency magnetowave. The relevance of the TAWI is discussed for the growth of small-scale magnetic fields in astrophysical environments, which are needed to explain the electromagnetic emissions by astrophysical jets. It is outlined how this instability could be examined experimentally. (paper)

  9. PIC simulation of a thermal anisotropy-driven Weibel instability in a circular rarefaction wave

    Science.gov (United States)

    Dieckmann, M. E.; Sarri, G.; Murphy, G. C.; Bret, A.; Romagnani, L.; Kourakis, I.; Borghesi, M.; Ynnerman, A.; O'C Drury, L.

    2012-02-01

    The expansion of an initially unmagnetized planar rarefaction wave has recently been shown to trigger a thermal anisotropy-driven Weibel instability (TAWI), which can generate magnetic fields from noise levels. It is examined here whether the TAWI can also grow in a curved rarefaction wave. The expansion of an initially unmagnetized circular plasma cloud, which consists of protons and hot electrons, into a vacuum is modelled for this purpose with a two-dimensional particle-in-cell (PIC) simulation. It is shown that the momentum transfer from the electrons to the radially accelerating protons can indeed trigger a TAWI. Radial current channels form and the aperiodic growth of a magnetowave is observed, which has a magnetic field that is oriented orthogonal to the simulation plane. The induced electric field implies that the electron density gradient is no longer parallel to the electric field. Evidence is presented here that this electric field modification triggers a second magnetic instability, which results in a rotational low-frequency magnetowave. The relevance of the TAWI is discussed for the growth of small-scale magnetic fields in astrophysical environments, which are needed to explain the electromagnetic emissions by astrophysical jets. It is outlined how this instability could be examined experimentally.

  10. Validity evidence of the Brazilian UCLA Loneliness Scale

    Directory of Open Access Journals (Sweden)

    Sabrina Martins Barroso

    2016-03-01

    Full Text Available ABSTRACT Objective This study investigated validity evidence of the UCLA Loneliness Scale for use with the Brazilian population. Methods The following phases were carried out: (1) authorization by the original author and the Ethics Committee; (2) translation and back-translation; (3) semantic adaptation; (4) validation. Data were analyzed using descriptive analysis, exploratory factor analysis, Cronbach's alpha, Kappa, Bartlett's test of sphericity, the Kaiser-Meyer-Olkin test, and Pearson correlation. For the adaptation, the scale was submitted to specialists, to a focus group with 8 participants for semantic adaptation, and to a pilot study with 126 participants for cross-cultural adaptation. A total of 818 people, aged between 20 and 87 years, took part in the validation, answering two versions of the UCLA, the Patient Health Questionnaire, the Perceived Social Support Scale, and a questionnaire developed by the authors. Results The scale showed two factors, which explained 56% of the variance, with an alpha of 0.94. Conclusions The UCLA-BR Loneliness Scale showed evidence of construct and discriminant validity, as well as good reliability, and can be used to assess loneliness in the Brazilian population.
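
    The internal-consistency figure reported above (alpha of 0.94) is Cronbach's alpha, computed from the respondent-by-item score matrix. A generic sketch on synthetic Likert-type responses (not the study's data; the simulated score generator is invented for the example):

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_items) score matrix:
        alpha = k/(k-1) * (1 - sum(item variances)/variance(total score))."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k/(k - 1)*(1.0 - item_var/total_var)

    # Hypothetical 4-point Likert responses (rows = respondents, cols = items),
    # generated from a shared latent trait plus item noise:
    rng = np.random.default_rng(1)
    latent = rng.normal(size=(200, 1))
    scores = np.clip(np.round(2.5 + latent + 0.8*rng.normal(size=(200, 10))), 1, 4)
    alpha = cronbach_alpha(scores)
    ```

    Higher inter-item correlation drives alpha toward 1; values above roughly 0.9, as in the study, indicate strong internal consistency.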

  11. Velocity control in three-phase induction motors using PIC; Controle de velocidade de motor de inducao trifasico usando PIC

    Energy Technology Data Exchange (ETDEWEB)

    Marcelino, M.A.; Silva, G.B.S.; Grandinetti, F.J. [Universidade Estadual Paulista (UNESP), Guaratingueta, SP (Brazil). Fac. de Engenharia; Universidade de Taubate (UNITAU), SP (Brazil)], Emails: abud@feg.unesp.br, gabonini@yahoo.com.br, grandinetti@unitau.br

    2009-07-01

    This paper presents a technique for speed control of a three-phase induction motor using pulse-width modulation (PWM), in open loop, while keeping the voltage constant at constant frequency. The technique is adapted from a thesis entitled 'Control of the three-phase induction motor, using discrete PWM generation, optimized and synchronized', which presents studies aimed at application in home appliances, where mechanical parts are eliminated and replaced by low-cost electronic control, yielding a significant reduction in power consumption. The experiment was initially carried out with an Intel 80C31 microcontroller. In this paper, the PWM modulation is implemented on a PIC microcontroller, and the speed control keeps a low profile: it is based on lookup tables, synchronized with the transitions, and injects reduced harmonics into the supply network. The results were confirmed using the same table-building process, while taking advantage of the programming features of a RISC device.
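
    The table-based PWM generation described above can be sketched by precomputing the duty-cycle lookup table that firmware would store in program memory. The table length, 8-bit duty resolution, and scaling factor below are illustrative assumptions, not the thesis's actual tables.

```python
import math

def pwm_duty_table(n_entries: int = 64, volts_per_hertz: float = 1.0) -> list[int]:
    """Duty-cycle lookup table (0-255) for one sine period, scaled by a
    constant volts-per-hertz factor -- a hypothetical stand-in for the
    precomputed, synchronized tables described in the paper."""
    table = []
    for i in range(n_entries):
        s = math.sin(2 * math.pi * i / n_entries)
        duty = int(round((0.5 + 0.5 * volts_per_hertz * s) * 255))
        table.append(max(0, min(255, duty)))   # clamp to the 8-bit PWM range
    return table

table = pwm_duty_table()
```

    At run time the microcontroller would simply step through this table at a rate set by the commanded motor frequency, which is why the approach suits low-cost devices.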

  12. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  13. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed. Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  14. Leveraging lean principles in creating a comprehensive quality program: The UCLA health readmission reduction initiative.

    Science.gov (United States)

    Afsar-Manesh, Nasim; Lonowski, Sarah; Namavar, Aram A

    2017-12-01

    UCLA Health embarked on transforming care by integrating lean methodology into a key clinical project, the Readmission Reduction Initiative (RRI). The first step focused on assembling a leadership team to articulate system-wide priorities for quality improvement. The lean principle of creating a culture of change and accountability was established by: 1) engaging stakeholders, 2) managing the process with performance accountability, and 3) delivering patient-centered care. The RRI utilized three major lean tools: 1) A3, 2) root cause analyses, and 3) value stream mapping. The baseline readmission rate at UCLA from 9/2010 to 12/2011 averaged 12.1%. After the start of the RRI program, for the period 1/2012 to 6/2013, the readmission rate decreased to 11.3% (p<0.05). To impact readmissions, solutions must evolve from smaller service- and location-based interventions into strategies with a broader approach. As elucidated, a systematic clinical approach grounded in lean methodologies is a viable solution to this complex problem. Copyright © 2017 Elsevier Inc. All rights reserved.
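
    A significance claim like the 12.1% → 11.3% drop (p<0.05) can be checked with a two-proportion z-test. The abstract reports only the rates, so the cohort sizes below are hypothetical placeholders.

```python
from math import erf, sqrt

def two_proportion_z(p1: float, n1: int, p2: float, n2: int):
    """Two-sided z-test for the difference between two proportions,
    using the pooled standard error."""
    x1, x2 = p1 * n1, p2 * n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_value

# Hypothetical admission counts; the abstract does not report denominators.
z, p = two_proportion_z(0.121, 20000, 0.113, 25000)
```

    With cohorts of this size, an 0.8-percentage-point drop is comfortably significant; with much smaller cohorts it would not be, which is why the denominators matter.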

  15. UCLA's outreach program of science education in the Los Angeles schools.

    Science.gov (United States)

    Palacio-Cayetano, J; Kanowith-Klein, S; Stevens, R

    1999-04-01

    The UCLA School of Medicine's Interactive Multi-media Exercises (IMMEX) Project began its outreach into pre-college education in the Los Angeles area in 1993. The project provides a model in which software and technology are effectively intertwined with teaching, learning, and assessment (of both students' and teachers' performances) in the classroom. The project has evolved into a special collaboration between the medical school and Los Angeles teachers. UCLA faculty and staff work with science teachers and administrators from elementary, middle, and high schools. The program benefits ethnically and racially diverse groups of students in schools ranging from the inner city to the suburbs. The project's primary goal is to use technology to increase students' achievement and interest in science, including medicine, and thus move more students into the medical school pipeline. Evaluations from outside project evaluators (West Ed) as well as from teachers and IMMEX staff show that the project has already had a significant effect on teachers' professional development, classroom practice, and students' achievement in the Los Angeles area.

  16. Designing embedded systems with 32-bit PIC microcontrollers and MikroC

    CERN Document Server

    Ibrahim, Dogan

    2013-01-01

    The new generation of 32-bit PIC microcontrollers can be used to solve the increasingly complex embedded system design challenges faced by engineers today. This book teaches the basics of 32-bit C programming, including an introduction to the PIC 32-bit C compiler. It includes a full description of the architecture of 32-bit PICs and their applications, along with coverage of the relevant development and debugging tools. Through a series of fully realized example projects, Dogan Ibrahim demonstrates how engineers can harness the power of this new technology to optimize their embedded design

  17. A gridding method for object-oriented PIC codes

    International Nuclear Information System (INIS)

    Gisler, G.; Peter, W.; Nash, H.; Acquah, J.; Lin, C.; Rine, D.

    1993-01-01

    A simple, rule-based gridding method for object-oriented PIC codes is described which is not only capable of dealing with complicated structures such as multiply-connected regions, but is also computationally faster than classical gridding techniques. Using these smart grids, vacant cells (e.g., cells enclosed by conductors) never have to be stored or calculated, thus avoiding the usual situation of having to zero electromagnetic fields within conductors after valuable CPU time has been spent calculating the fields within those cells in the first place. This object-oriented gridding technique encapsulates the characteristics of actual physical objects (particles, fields, grids, etc.) in C++ classes and supports software reuse of these entities through C++ class inheritance relations. It has been implemented in the form of a simple two-dimensional plasma particle-in-cell code, and forms the initial effort of an AFOSR research project to develop a flexible software simulation environment for particle-in-cell algorithms based on object-oriented technology.
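
    The "smart grid" idea — never storing or updating cells inside conductors — can be sketched with a sparse container. The paper's implementation is in C++; this Python toy, with invented names, only illustrates the vacant-cell rule.

```python
class SmartGrid:
    """Sparse field grid that stores values only for cells outside
    conductors; writes to vacant (conductor-enclosed) cells are no-ops,
    so those cells never need to be zeroed after a field solve."""

    def __init__(self, nx: int, ny: int, is_vacant):
        # Only non-vacant cells are ever allocated.
        self.active = {(i, j): 0.0
                       for i in range(nx) for j in range(ny)
                       if not is_vacant(i, j)}

    def __setitem__(self, cell, value):
        if cell in self.active:          # silently skip conductor cells
            self.active[cell] = value

    def __getitem__(self, cell):
        return self.active.get(cell, 0.0)  # fields vanish inside conductors

# Example: a 4x4 grid with a 2x2 conductor block in one corner.
grid = SmartGrid(4, 4, is_vacant=lambda i, j: i < 2 and j < 2)
grid[0, 0] = 5.0   # inside the conductor: ignored
grid[3, 3] = 1.5   # ordinary cell: stored
```

    A field solver iterating only over `grid.active` visits 12 cells instead of 16, which is the source of the claimed speedup on multiply-connected geometries.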

  18. Development of PIC-based digital survey meter

    International Nuclear Information System (INIS)

    Nor Arymaswati Abdullah; Nur Aira Abdul Rahman; Mohd Ashhar Khalid; Taiman Kadni; Glam Hadzir Patai Mohamad; Abd Aziz Mhd Ramli; Chong Foh Yong

    2006-01-01

    The need for radiation monitoring and for monitoring of radioactive contamination in the workplace is very important, especially where x-ray machines, linear accelerators, electron beam machines and radioactive sources are present. The appropriate use of a radiation detector is essential to maintaining a radiation- and contamination-free workplace. This paper reports on the development of a prototype PIC-based digital survey meter. The prototype is a hand-held instrument for general-purpose radiation monitoring and surface contamination measurement. Generally, the device is able to detect some or all of the three major types of ionizing radiation, namely alpha, beta and gamma. It uses a Geiger-Muller tube as the radiation detector, which converts gamma radiation quanta to electric pulses that are further processed by the electronics. The development involved the design of the controller, counter and high-voltage circuits. All of these circuits are assembled and enclosed in a plastic casing, together with a GM detector and an LCD display, to form a prototype survey meter. The number of pulses detected by the survey meter varies due to the random nature of radioactivity; by averaging the reading over a time period, a more accurate and stable reading is achieved. To test the accuracy and linearity of the design, the prototype was calibrated using the standard procedure at the Secondary Standard Dosimetry Laboratory (SSDL) at MINT. (Author)

  19. Digital Survey Meter based on PIC16F628 Microcontroller

    International Nuclear Information System (INIS)

    Al-Mohamad, A.; Shliwitt, J.

    2010-01-01

    A digital survey meter based on a PIC16F628 microcontroller was designed around a simple Geiger-Muller counter, the ZP1320 made by Centronic in the UK, used as the detector. The sensitivity of this tube is about 9 counts/s at 10 μGy/h. It is sensitive to gamma radiation and to beta particles above 0.25 MeV, and has a sensitive length of 28 mm. Count rate versus dose rate is quite linear up to about 10^4 counts/s. Indication is given by a speaker that emits one click for each count. In addition to the acoustic alarm, the meter works in one of three measurement modes selected with a three-position switch: (1) measurement of dose rate (in μGy/h) and count rate (in counts/s), for high count rates; (2) measurement of dose rate (in μGy/h) and count rate (in counts/min), for low count rates; (3) accumulated counting, with a continuous display of the number of counts and the counting time, updated every 2 s. The results are shown on an alphanumeric LCD display, and the circuit gives many hours of operation from a single 9 V PP3 battery. The design of the circuit combines accuracy, simplicity and low power consumption. We built two models of this design: the first with only an internal detector, and the second equipped with an external detector. (author)
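
    The count-to-dose conversion implied by the quoted tube sensitivity (about 9 counts/s at 10 μGy/h, linear below roughly 10^4 counts/s) can be sketched as follows. The function name and averaging window are illustrative, not taken from the firmware.

```python
def dose_rate_uGy_per_h(counts: int, seconds: float,
                        cps_per_10uGy_h: float = 9.0) -> float:
    """Convert an averaged GM count total to dose rate, using the ZP1320
    sensitivity figure quoted in the abstract (~9 counts/s at 10 uGy/h)
    and assuming operation in the linear regime below ~1e4 counts/s."""
    cps = counts / seconds          # averaging smooths Poisson fluctuations
    return cps / cps_per_10uGy_h * 10.0

# 180 counts accumulated over a 10 s window -> 18 counts/s -> 20 uGy/h
rate = dose_rate_uGy_per_h(counts=180, seconds=10)
```

    Lengthening the averaging window trades display responsiveness for statistical stability, which is the same trade-off the abstract describes for its CPS (high-rate) versus CPM (low-rate) modes.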

  20. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  1. Low-temperature plasma simulations with the LSP PIC code

    Science.gov (United States)

    Carlsson, Johan; Khrabrov, Alex; Kaganovich, Igor; Keating, David; Selezneva, Svetlana; Sommerer, Timothy

    2014-10-01

    The LSP (Large-Scale Plasma) PIC-MCC code has been used to simulate several low-temperature plasma configurations, including a gas switch for high-power AC/DC conversion, a glow discharge and a Hall thruster. Simulation results will be presented with an emphasis on code comparison and validation against experiment. High-voltage, direct-current (HVDC) power transmission is becoming more common as it can reduce construction costs and power losses. Solid-state power-electronics devices are presently used, but it has been proposed that gas switches could become a compact, less costly, alternative. A gas-switch conversion device would be based on a glow discharge, with a magnetically insulated cold cathode. Its operation is similar to that of a sputtering magnetron, but with much higher pressure (0.1 to 0.3 Torr) in order to achieve high current density. We have performed 1D (axial) and 2D (axial/radial) simulations of such a gas switch using LSP. The 1D results were compared with results from the EDIPIC code. To test and compare the collision models used by the LSP and EDIPIC codes in more detail, a validation exercise was performed for the cathode fall of a glow discharge. We will also present some 2D (radial/azimuthal) LSP simulations of a Hall thruster. The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Number DE-AR0000298.

  2. Searching for Short GRBs in Soft Gamma Rays with INTEGRAL/PICsIT

    Science.gov (United States)

    Rodi, James; Bazzano, Angela; Ubertini, Pietro; Natalucci, Lorenzo; Savchenko, V.; Kuulkers, E.; Ferrigno, Carlo; Bozzo, Enrico; Brandt, Soren; Chenevez, Jerome; Courvoisier, T. J.-L.; Diehl, R.; Domingo, A.; Hanlon, L.; Jourdain, E.; von Kienlin, A.; Laurent, P.; Lebrun, F.; Lutovinov, A.; Martin-Carrillo, A.; Mereghetti, S.; Roques, J.-P.; Sunyaev, R.

    2018-01-01

    With gravitational wave (GW) detections by the LIGO/Virgo collaboration over the past several years, there is heightened interest in gamma-ray bursts (GRBs), especially “short” GRBs (SGRBs; T90 ≲ 2 s). PICsIT, the high-energy detector layer of the IBIS telescope on INTEGRAL, serves as a soft gamma-ray, all-sky monitor for impulsive events such as SGRBs. Because SGRBs typically have hard spectra with peak energies of a few hundred keV, PICsIT with its ~3000 cm2 collecting area is able to provide spectral information about these sources at soft gamma-ray energies. We have begun a study of PICsIT data for faint SGRBs similar to the one associated with the binary neutron star (BNS) merger GW 170817, and are also preparing for future GW triggers by developing a real-time burst analysis for PICsIT. By searching the PICsIT data for significant excesses during ~30-min-long pointings containing the times of SGRBs, we have been able to differentiate between SGRBs and spurious events. This work also allows us to assess what fraction of reported SGRBs have been detected by PICsIT, which can be used to estimate the number of GW BNS events that PICsIT will see during the next LIGO/Virgo observing run starting in Fall 2018.

  3. Polarization-dependent Imaging Contrast (PIC) mapping reveals nanocrystal orientation patterns in carbonate biominerals

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, Pupa U.P.A., E-mail: pupa@physics.wisc.edu [University of Wisconsin-Madison, Departments of Physics and Chemistry, Madison, WI 53706 (United States)

    2012-10-15

    Highlights: • Nanocrystal orientation is shown by Polarization-dependent Imaging Contrast (PIC) maps. • PIC mapping of carbonate biominerals reveals their ultrastructure at the nanoscale. • The formation mechanisms of biominerals are discovered by PIC mapping using PEEM. -- Abstract: Carbonate biominerals are one of the most interesting systems a physicist can study. They play a major role in the CO2 cycle, and they master templation, self-assembly, nanofabrication, phase transitions, space filling, crystal nucleation and growth mechanisms. A new imaging modality introduced in the last 5 years enables direct observation of the orientation of carbonate single crystals at the nano- and micro-scale. This is Polarization-dependent Imaging Contrast (PIC) mapping, which is based on X-ray linear dichroism and uses PhotoElectron Emission spectroMicroscopy (PEEM). Here we present PIC-mapping results from biominerals, including the nacre and prismatic layers of mollusk shells, and sea urchin teeth. We describe various PIC-mapping approaches, and show that these lead to fundamental discoveries on the formation mechanisms of biominerals.

  4. Construction and initial operation of MHD PbLi facility at UCLA

    International Nuclear Information System (INIS)

    Kunugi, T.; Yokomine, T.; Ueki, Y.; Smolentsev, S.; Li, F.-C.; Sketchley, T.; Abdou, M.A.; Yuki, K.

    2014-01-01

    We review current accomplishments in Task 1-3 'Flow Control and Thermofluid Modeling' of the Japan-US 'TITAN' collaboration program. Our task focuses on experimental activities and also computer modeling of magnetohydrodynamic flows and heat and mass transfer of electrically conducting fluids under conditions relevant to fusion blankets. Since our task started, major efforts were taken to design, construct and test a new magnetohydrodynamic lead-lithium (PbLi) loop at UCLA, to accumulate the PbLi handling technology, and to develop a high-temperature ultrasonic Doppler velocimetry and a differential-pressure measurement system for PbLi flows. In the present paper, the loop construction, the electromagnetic pump performance test, our on-going experiments with the constructed loop are described. (author)

  5. Emittance studies of the BNL/SLAC/UCLA 1.6 cell photocathode rf gun

    International Nuclear Information System (INIS)

    Palmer, D.T.; Miller, R.H.; Wang, X.J.

    1997-01-01

    The symmetrized 1.6-cell S-band photocathode gun developed by the BNL/SLAC/UCLA collaboration is in operation at the Brookhaven Accelerator Test Facility (ATF). A novel emittance-compensation solenoid magnet has also been designed, built, and put into operation at the ATF. These two subsystems form an emittance-compensated photoinjector used for beam dynamics, advanced acceleration and free-electron laser experiments at the ATF. The highest acceleration field achieved on the copper cathode is 150 MV/m, and the gun's normal operating field is 130 MV/m. The maximum rf pulse length is 3 μs. The transverse emittance of the photoelectron beam was measured for various injection parameters. The 1 nC emittance results are presented, along with electron bunch length measurements indicating that above 400 pC, space-charge bunch lengthening occurs. The thermal emittance, ε_o, of the copper cathode has been measured.

  6. UCLA's Molecular Screening Shared Resource: enhancing small molecule discovery with functional genomics and new technology.

    Science.gov (United States)

    Damoiseaux, Robert

    2014-05-01

    The Molecular Screening Shared Resource (MSSR) offers a comprehensive range of leading-edge high throughput screening (HTS) services including drug discovery, chemical and functional genomics, and novel methods for nano and environmental toxicology. The MSSR is an open access environment with investigators from UCLA as well as from the entire globe. Industrial clients are equally welcome as are non-profit entities. The MSSR is a fee-for-service entity and does not retain intellectual property. In conjunction with the Center for Environmental Implications of Nanotechnology, the MSSR is unique in its dedicated and ongoing efforts towards high throughput toxicity testing of nanomaterials. In addition, the MSSR engages in technology development eliminating bottlenecks from the HTS workflow and enabling novel assays and readouts currently not available.

  7. Vision screening of abused and neglected children by the UCLA Mobile Eye Clinic.

    Science.gov (United States)

    Yoo, R; Logani, S; Mahat, M; Wheeler, N C; Lee, D A

    1999-07-01

    The purpose of our study was to present descriptive findings of ocular abnormalities in vision screening examinations of abused and neglected children. We compared the prevalence and the nature of eye diseases and refractive error between abused and neglected boys staying at the Hathaway Home, a residential facility for abused children, and boys from neighboring Boys and Girls clubs. The children in the study received vision screening examinations through the UCLA Mobile Eye Clinic following a standard format. Clinical data were analyzed by chi-square test. The children with a history of abuse demonstrated significantly higher prevalence of myopia, astigmatism, and external eye disorders. Our study suggests that children with a history of abuse may be at higher risk for visual impairment. These visual impairments may be the long-term sequelae of child abuse.
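
    The chi-square comparison described above can be sketched for a single 2×2 table (condition present/absent in each group). The counts below are hypothetical — the abstract reports no raw numbers.

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    via the shortcut formula n*(ad - bc)^2 / (row and column products)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: myopia present/absent, abused vs comparison group.
chi2 = chi_square_2x2(30, 70, 15, 85)
```

    Values above the df=1 critical value of 3.84 correspond to p < 0.05, i.e. a significantly higher prevalence in one group, as the study reports for myopia, astigmatism, and external eye disorders.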

  8. Photocathode driven linac at UCLA for FEL and plasma wakefield acceleration experiments

    International Nuclear Information System (INIS)

    Hartman, S.; Aghamir, F.; Barletta, W.; Cline, D.; Dodd, J.; Katsouleas, T.; Kolonko, J.; Park, S.; Pellegrini, C.; Rosenzweig, J.; Smolin, J.; Terrien, J.; Davis, J.; Hairapetian, G.; Joshi, C.; Luhmann, N. Jr.; McDermott, D.

    1991-01-01

    The UCLA compact 20-MeV/c electron linear accelerator is designed to produce a single electron bunch with a peak current of 200 A, an rms energy spread of 0.2% or less, and a short 1.2-picosecond rms pulse duration. The linac is also designed to minimize emittance growth down the beamline, so as to obtain emittances of the order of 8π mm-mrad in the experimental region. The linac will feed two beamlines: the first will run straight into the undulator for FEL experiments, while the second will be used for diagnostics, longitudinal bunch compression, and other electron beam experiments. Here the authors describe the considerations put into the design of the accelerating structures and the transport to the experimental areas.

  9. The Effect of a Guide Field on the Structures of Magnetic Islands: 2D PIC Simulations

    Science.gov (United States)

    Huang, C.; Lu, Q.; Lu, S.; Wang, P.; Wang, S.

    2014-12-01

    Magnetic islands play an important role in magnetic reconnection. Using a series of 2D PIC simulations, we investigate the magnetic structures of a magnetic island formed during multiple X-line magnetic reconnection, considering the effects of the guide field in symmetric and asymmetric current sheets. In a symmetric current sheet, the out-of-plane current forms a tripolar structure inside a magnetic island during anti-parallel reconnection, which results in a quadrupole structure of the out-of-plane magnetic field. With the increase of the guide field, the symmetry of both the current system and the out-of-plane magnetic field inside the magnetic island is distorted. When the guide field is sufficiently strong, the current forms a ring along the magnetic field lines inside the magnetic island. At the same time, the current carried by the energetic electrons accelerated in the vicinity of the X lines forms another ring at the edge of the magnetic island. Such a dual-ring current system enhances the out-of-plane magnetic field inside the magnetic island, with a dip in the center of the island. In an asymmetric current sheet with no guide field, electrons flow toward the X lines along the separatrices from the side with the higher density, and are then directed away from the X lines along the separatrices on the side with the lower density. The resulting current enhances the out-of-plane magnetic field at one end of the magnetic island and attenuates it at the other end. With the increase of the guide field, the structures of both the current system and the out-of-plane magnetic field are distorted.

  10. PIC Simulations of Velocity-space Instabilities in a Decreasing Magnetic Field: Viscosity and Thermal Conduction

    Science.gov (United States)

    Riquelme, Mario; Quataert, Eliot; Verscharen, Daniel

    2018-02-01

    We use particle-in-cell (PIC) simulations of a collisionless, electron–ion plasma with a decreasing background magnetic field, B, to study the effect of velocity-space instabilities on the viscous heating and thermal conduction of the plasma. If |B| decreases, the adiabatic invariance of the magnetic moment gives rise to pressure anisotropies with p∥,j > p⊥,j (p∥,j and p⊥,j represent the pressure of species j (electron or ion) parallel and perpendicular to B). Linear theory indicates that, for sufficiently large anisotropies, different velocity-space instabilities can be triggered. These instabilities in principle have the ability to pitch-angle scatter the particles, limiting the growth of the anisotropies. Our simulations focus on the nonlinear, saturated regime of the instabilities. This is done through the permanent decrease of |B| by an imposed plasma shear. We show that, in the regime 2 ≲ β_j ≲ 20 (β_j ≡ 8π p_j/|B|^2), the saturated ion and electron pressure anisotropies are controlled by the combined effect of the oblique ion firehose and the fast magnetosonic/whistler instabilities. These instabilities grow preferentially on the scale of the ion Larmor radius, and make Δp_e/p∥,e ≈ Δp_i/p∥,i (where Δp_j = p⊥,j − p∥,j). We also quantify the thermal conduction of the plasma by directly calculating the mean free path of electrons, λ_e, along the mean magnetic field, finding that λ_e depends strongly on whether |B| decreases or increases. Our results can be applied in studies of low-collisionality plasmas such as the solar wind, the intracluster medium, and some accretion disks around black holes.
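
    The two quantities the abstract works with — the plasma beta β_j = 8π p_j/|B|^2 (Gaussian units) and the pressure anisotropy Δp_j/p∥,j — are simple enough to compute directly. The numerical values below are illustrative only.

```python
from math import pi

def plasma_beta(p_cgs: float, b_gauss: float) -> float:
    """beta_j = 8*pi*p_j / |B|^2 in Gaussian units, as in the abstract."""
    return 8 * pi * p_cgs / b_gauss**2

def pressure_anisotropy(p_perp: float, p_par: float) -> float:
    """Delta p_j / p_par,j = (p_perp - p_par) / p_par; negative values
    (p_par > p_perp) are the firehose-unstable side."""
    return (p_perp - p_par) / p_par

beta = plasma_beta(p_cgs=1.0e-10, b_gauss=1.0e-5)     # illustrative values
aniso = pressure_anisotropy(p_perp=0.9, p_par=1.0)
```

    A decreasing |B| drives the anisotropy negative, which is why the firehose-type instabilities dominate the saturated state described above.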

  11. A PIC-MCC code RFdinity1d for simulation of discharge initiation by ICRF antenna

    Science.gov (United States)

    Tripský, M.; Wauters, T.; Lyssoivan, A.; Bobkov, V.; Schneider, P. A.; Stepanov, I.; Douai, D.; Van Eester, D.; Noterdaeme, J.-M.; Van Schoor, M.; ASDEX Upgrade Team; EUROfusion MST1 Team

    2017-12-01

    Discharges produced and sustained by ion cyclotron range of frequencies (ICRF) waves in the absence of plasma current will be used on ITER for (ion cyclotron) wall conditioning (ICWC; T_e = 3–5 eV, n_e ≲ 10^18 m^-3). In this paper, we present the 1D particle-in-cell Monte Carlo collision (PIC-MCC) code RFdinity1d for the study of the breakdown phase of ICRF discharges and its dependence on the RF discharge parameters: (i) antenna input power P_i, (ii) RF frequency f, (iii) shape of the electric field, and (iv) the neutral gas pressure p_H2. The code traces the motion of both electrons and ions in a narrow bundle of magnetic field lines close to the antenna straps. The charged particles are accelerated in the direction parallel to the magnetic field B_T by two electric fields: (i) the vacuum RF field of the ICRF antenna, E_z^RF, and (ii) the electrostatic field E_z^P determined by the solution of Poisson's equation. The electron density in the simulations grows exponentially, n_e ∝ exp(ν_ion t). The ionization rate varies with increasing electron density as different mechanisms become important. At low electron density (n_e < 10^11 m^-3, |E_z^RF| ≫ |E_z^P|), the charged particles are affected solely by the antenna RF field E_z^RF. At higher densities, when the electrostatic field E_z^P is comparable to the antenna RF field E_z^RF, the ionization frequency reaches its maximum. Plasma oscillations propagating toroidally away from the antenna are observed. The simulated energy distributions of ions and electrons at n_e ∼ 10^15 m^-3 follow a power-law Kappa energy distribution. This energy distribution was also observed in NPA measurements at ASDEX Upgrade during ICWC experiments.
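
    A toy version of the parallel electron push that such a code performs — integrating motion along the field line under a prescribed RF field — is sketched below. The field amplitude, frequency, and time step are arbitrary illustrative values; the real RFdinity1d additionally solves Poisson's equation for the electrostatic field and applies Monte Carlo collisions.

```python
from math import cos, pi

# Explicit push of one electron along the field line under a uniform
# RF field E_z = E0 * cos(2*pi*f*t). With a prescribed (z-independent)
# field, the velocity stays bounded: it oscillates at the RF frequency.
QM = -1.758820e11        # electron charge-to-mass ratio, C/kg
E0 = 100.0               # RF field amplitude, V/m (illustrative)
f = 30.0e6               # RF frequency, Hz (illustrative)
dt, steps = 1.0e-10, 2000

z, v = 0.0, 0.0
for n in range(steps):
    t = n * dt
    v += QM * E0 * cos(2 * pi * f * t) * dt   # velocity update
    z += v * dt                               # position update
```

    In the low-density limit quoted above (|E_z^RF| ≫ |E_z^P|) this single-field picture is the whole dynamics; at higher density the space-charge field must be added each step from the particle positions.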

  12. Development and Testing of UCLA's Electron Losses and Fields Investigation (ELFIN) Instrument Payload

    Science.gov (United States)

    Wilkins, C.; Bingley, L.; Angelopoulos, V.; Caron, R.; Cruce, P. R.; Chung, M.; Rowe, K.; Runov, A.; Liu, J.; Tsai, E.

    2017-12-01

    UCLA's Electron Losses and Fields Investigation (ELFIN) is a 3U+ CubeSat mission designed to study relativistic particle precipitation in Earth's polar regions from Low Earth Orbit. Upon its 2018 launch, ELFIN will aim to address an important open question in Space Physics: Are Electromagnetic Ion-Cyclotron (EMIC) waves the dominant source of pitch-angle scattering of high-energy radiation belt charged particles into Earth's atmosphere during storms and substorms? Previous studies have indicated these scattering events occur frequently during storms and substorms, and ELFIN will be the first mission to study this process in situ. Paramount to ELFIN's success is its instrument suite, consisting of an Energetic Particle Detector (EPD) and a Fluxgate Magnetometer (FGM). The EPD is comprised of two collimated solid-state detector stacks which will measure the incident flux of energetic electrons from 50 keV to 4 MeV and ions from 50 keV to 300 keV. The FGM is a 3-axis magnetic field sensor which will capture the local magnetic field and its variations at frequencies up to 5 Hz. The ELFIN spacecraft spins perpendicular to the geomagnetic field to provide 16 pitch-angle particle data sectors per revolution. Together these factors provide the capability to address the nature of radiation belt particle precipitation by pitch-angle scattering during storms and substorms. ELFIN's instrument development has progressed into the late Engineering Model (EM) phase and will soon enter Flight Model (FM) development. The instrument suite is currently being tested and calibrated at UCLA using a variety of methods, including the use of radioactive sources and applied magnetics to simulate orbit conditions during spin sectoring. We present the methods and test results from instrument calibration and performance validation.

  13. Strengthening the fission reactor nuclear science and engineering program at UCLA. Final technical report

    International Nuclear Information System (INIS)

    Okrent, D.

    1997-01-01

    This is the final report on DOE Award No. DE-FG03-92ER75838 A000, a three-year matching grant program with Pacific Gas and Electric Company (PG and E) to support strengthening of the fission reactor nuclear science and engineering program at UCLA. The program began on September 30, 1992. The program has enabled UCLA to use its strong existing background to train students in technological problems which are simultaneously of interest to the industry and of specific interest to PG and E. The program included undergraduate scholarships, graduate traineeships and distinguished lecturers. Four topics were selected for research the first year, with the benefit of active collaboration with personnel from PG and E. These topics remained the same during the second year of the program. During the third year, two topics ended with the departure of the students involved (reflux cooling in a PWR during a shutdown, and erosion/corrosion of carbon steel piping). Two new topics (long-term risk and fuel relocation within the reactor vessel) were added; hence, the topics during the third-year award were the following: reflux condensation and the effect of non-condensable gases; erosion/corrosion of carbon steel piping; use of artificial intelligence in severe accident diagnosis for PWRs (diagnosis of plant status during a PWR station blackout scenario); the influence of organization and management quality on risk; considerations of long-term risk from the disposal of hazardous wastes; and a probabilistic treatment of fuel motion and fuel relocation within the reactor vessel during a severe core damage accident.

  14. A Performance-Prediction Model for PIC Applications on Clusters of Symmetric MultiProcessors: Validation with Hierarchical HPF+OpenMP Implementation

    Directory of Open Access Journals (Sweden)

    Sergio Briguglio

    2003-01-01

    Full Text Available A performance-prediction model is presented, which describes different hierarchical workload decomposition strategies for particle-in-cell (PIC) codes on Clusters of Symmetric MultiProcessors. The devised workload decomposition is hierarchically structured: a higher-level decomposition among the computational nodes, and a lower-level one among the processors of each computational node. Several decomposition strategies are evaluated by means of the prediction model with respect to memory occupancy, parallelization efficiency and the required programming effort. These strategies have been implemented by integrating the high-level languages High Performance Fortran (at the inter-node stage) and OpenMP (at the intra-node stage). The details of these implementations are presented, and the experimental values of parallelization efficiency are compared with the predicted results.
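
    The core of any such performance-prediction model is an efficiency estimate of the form E = T_serial / (p · T_parallel). The one-term overhead model below is a much simpler stand-in for the paper's hierarchical model, with invented parameter names.

```python
def predicted_efficiency(t_comp: float, t_comm_per_proc: float,
                         n_procs: int) -> float:
    """Toy performance prediction: the parallel time is the ideally
    divided compute time plus a per-process communication overhead,
    and efficiency is T_serial / (p * T_parallel)."""
    t_parallel = t_comp / n_procs + t_comm_per_proc
    return t_comp / (n_procs * t_parallel)

# 100 s of serial work, 0.5 s of communication per process, 8 processes.
eff = predicted_efficiency(t_comp=100.0, t_comm_per_proc=0.5, n_procs=8)
```

    Comparing such predictions against measured efficiencies for each candidate decomposition is exactly how the paper ranks its strategies before committing to an implementation.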

  15. Appropriateness of the food-pics image database for experimental eating and appetite research with adolescents.

    Science.gov (United States)

    Jensen, Chad D; Duraccio, Kara M; Barnett, Kimberly A; Stevens, Kimberly S

    2016-12-01

    Research examining effects of visual food cues on appetite-related brain processes and eating behavior has proliferated. Recently investigators have developed food image databases for use across experimental studies examining appetite and eating behavior. The food-pics image database represents a standardized, freely available image library originally validated in a large sample primarily comprised of adults. The suitability of the images for use with adolescents has not been investigated. The aim of the present study was to evaluate the appropriateness of the food-pics image library for appetite and eating research with adolescents. Three hundred and seven adolescents (ages 12-17) provided ratings of recognizability, palatability, and desire to eat, for images from the food-pics database. Moreover, participants rated the caloric content (high vs. low) and healthiness (healthy vs. unhealthy) of each image. Adolescents rated approximately 75% of the food images as recognizable. Approximately 65% of recognizable images were correctly categorized as high vs. low calorie and 63% were correctly classified as healthy vs. unhealthy in 80% or more of image ratings. These results suggest that a smaller subset of the food-pics image database is appropriate for use with adolescents. With some modifications to included images, the food-pics image database appears to be appropriate for use in experimental appetite and eating-related research conducted with adolescents. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Plasma Physics Calculations on a Parallel Macintosh Cluster

    Science.gov (United States)

    Decyk, Viktor; Dauger, Dean; Kokelaar, Pieter

    2000-03-01

    We have constructed a parallel cluster consisting of 16 Apple Macintosh G3 computers running the MacOS, and achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. For large problems where message packets are large and relatively few in number, performance of 50-150 MFlops/node is possible, depending on the problem. This is fast enough that 3D calculations can be routinely done. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. Full details are available on our web site: http://exodus.physics.ucla.edu/appleseed/.

  17. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    The work in the field of parallel processing has developed as research activities using several numerical Monte Carlo simulations related to basic or applied current problems of nuclear and particle physics. For the applications utilizing the GEANT code, development or improvement work was done on parts simulating low energy physical phenomena like radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program of neutron tracking in the range of low energies up to the thermal region has been developed. It is coupled to the GEANT code and permits in a single pass the simulation of a hybrid reactor core receiving a proton burst. Other works in this field refer to simulations for nuclear medicine applications like, for instance, development of biological probes, evaluation and characterization of gamma cameras (collimators, crystal thickness) as well as methods for dosimetric calculations. Particularly, these calculations are suited for a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other works mentioned in the same field refer to simulation of the electron channelling in crystals and simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment

  18. Initial draft of CSE-UCLA evaluation model based on weighted product in order to optimize digital library services in computer college in Bali

    Science.gov (United States)

    Divayana, D. G. H.; Adiarta, A.; Abadi, I. B. G. S.

    2018-01-01

    The aim of this research was to create an initial design of the CSE-UCLA evaluation model, modified with the Weighted Product method, for evaluating digital library services at Computer Colleges in Bali. The method used in this research was the developmental research method, following the Borg and Gall model design. The result obtained from the research conducted earlier this month was a rough sketch of the Weighted Product based CSE-UCLA evaluation model; the design was able to provide a general overview of the stages of the Weighted Product based CSE-UCLA evaluation model used to optimize digital library services at the Computer Colleges in Bali.
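
    The Weighted Product method that the model borrows can be illustrated in a few lines: each alternative receives the product of its criterion ratings raised to the normalized criterion weights. The services, ratings and weights below are invented for illustration:

```python
# Weighted Product scoring: score_i = prod_j value_ij ** normalized_weight_j.

def weighted_product_scores(alternatives, weights):
    """alternatives: list of criterion-rating vectors; weights: raw weights."""
    total = sum(weights)
    norm = [w / total for w in weights]
    scores = []
    for ratings in alternatives:
        s = 1.0
        for value, w in zip(ratings, norm):
            s *= value ** w
        scores.append(s)
    return scores

# Two hypothetical digital-library services rated on three criteria.
scores = weighted_product_scores(
    [[4, 3, 5],   # service A
     [3, 4, 4]],  # service B
    weights=[5, 3, 4],
)
best = max(range(len(scores)), key=scores.__getitem__)  # index of best service
```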

  19. Saltwell PIC Skid Programmable Logic Controller (PLC) Software Configuration Management Plan

    International Nuclear Information System (INIS)

    KOCH, M.R.

    1999-01-01

    This document provides the procedures and guidelines necessary for computer software configuration management activities during the operation and maintenance phases of the Saltwell PIC Skids, as required by LMH-PRO-309/Rev. 0, Computer Software Quality Assurance, Section 2.6, Software Configuration Management. The software configuration management plan (SCMP) integrates technical and administrative controls to establish and maintain technical consistency among requirements, physical configuration, and documentation for the Saltwell PIC Skid Programmable Logic Controller (PLC) software during the Hanford application, operations and maintenance. This SCMP establishes the Saltwell PIC Skid PLC software baseline, controls status changes to that baseline, and ensures that the software meets design and operational requirements and is tested in accordance with its design basis

  20. Expression of recombinant myostatin propeptide pPIC9K-Msp plasmid in Pichia pastoris.

    Science.gov (United States)

    Du, W; Xia, J; Zhang, Y; Liu, M J; Li, H B; Yan, X M; Zhang, J S; Li, N; Zhou, Z Y; Xie, W Z

    2015-12-28

    Myostatin propeptide can inhibit the biological activity of myostatin protein and promote muscle growth. To express myostatin propeptide in vitro with a higher biological activity, we performed codon optimization on the sheep myostatin propeptide gene sequence, and mutated aspartic acid-76 to alanine based on the codon usage bias of Pichia pastoris and the enhanced biological activity of myostatin propeptide mutant. Modified myostatin propeptide gene was cloned into the pPIC9K plasmid to form the recombinant plasmid pPIC9K-Msp. Recombinant plasmid pPIC9K-Msp was transformed into Pichia pastoris GS115 by electrotransformation. Transformed cells were screened, and methanol was used to induce expression. SDS-PAGE and western blotting were used to verify the successful expression of myostatin propeptide with biological activity in Pichia pastoris, providing the basis for characterization of this protein.

  1. Effects of increased vertebral number on carcass weight in PIC pigs.

    Science.gov (United States)

    Huang, Jieping; Zhang, Mingming; Ye, Runqing; Ma, Yun; Lei, Chuzhao

    2017-12-01

    Variation of the vertebral number is associated with carcass traits in pigs. However, results from different populations do not match well with each other, especially for carcass weight. Therefore, the effects of increased vertebral number on carcass weight were investigated by analyzing the relationship between two multi-vertebra causal loci (NR6A1 g.748 C > T and VRTN g.20311_20312ins291) and carcass weight in PIC pigs. Results from the association study between vertebral number and carcass weight showed that increased thoracic number had negative effects on carcass weight, but the results were not statistically significant. Further, the VRTN Ins/Ins genotype increased more than one thoracic vertebra compared with the Wt/Wt genotype on average in this PIC population. Meanwhile, there was a significant negative effect of VRTN Ins on carcass weight in PIC pigs. © 2017 Japanese Society of Animal Science.

  2. Wavelet-based blind identification of the UCLA Factor building using ambient and earthquake responses

    International Nuclear Information System (INIS)

    Hazra, B; Narasimhan, S

    2010-01-01

    Blind source separation using second-order blind identification (SOBI) has been successfully applied to the problem of output-only identification, popularly known as ambient system identification. In this paper, the basic principles of SOBI for the static mixtures case is extended using the stationary wavelet transform (SWT) in order to improve the separability of sources, thereby improving the quality of identification. Whereas SOBI operates on the covariance matrices constructed directly from measurements, the method presented in this paper, known as the wavelet-based modified cross-correlation method, operates on multiple covariance matrices constructed from the correlation of the responses. The SWT is selected because of its time-invariance property, which means that the transform of a time-shifted signal can be obtained as a shifted version of the transform of the original signal. This important property is exploited in the construction of several time-lagged covariance matrices. The issue of non-stationary sources is addressed through the formation of several time-shifted, windowed covariance matrices. Modal identification results are presented for the UCLA Factor building using ambient vibration data and for recorded responses from the Parkfield earthquake, and compared with published results for this building. Additionally, the effect of sensor density on the identification results is also investigated
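
    The core ingredient of this family of methods, several time-lagged covariance matrices of the multichannel response, can be sketched in plain NumPy (without the stationary wavelet pre-processing stage the paper adds):

```python
import numpy as np

def lagged_covariances(x, lags):
    """x: (channels, samples) response array; returns one matrix per lag."""
    x = x - x.mean(axis=1, keepdims=True)   # remove channel means
    n = x.shape[1]
    return [x[:, : n - tau] @ x[:, tau:].T / (n - tau) for tau in lags]

# White-noise stand-in for measured responses: 3 channels, 2048 samples.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 2048))
covs = lagged_covariances(x, lags=[0, 1, 2, 5])
```

    For identification, SOBI-type algorithms jointly diagonalize such a set of matrices; for white noise the lag-0 matrix is close to the identity and the lagged ones are near zero.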

  3. Microscopic evaluation of implant platform adaptation with UCLA-type abutments: in vitro study

    Directory of Open Access Journals (Sweden)

    Vinícius Anéas RODRIGUES

    Full Text Available Abstract Introduction The fit between abutment and implant is crucial to determine the longevity of implant-supported prostheses and the maintenance of peri-implant bones. Objective To evaluate the vertical misfit between different abutments in order to provide information to assist abutment selection. Material and method UCLA components (N=40) with anti-rotational system were divided as follows: components machined in titanium (n=10) and plastic components cast proportionally in titanium (n=10), nickel-chromium-titanium-molybdenum (n=10) and nickel-chromium (n=10) alloys. All components were submitted to stereomicroscope analysis and were randomly selected for characterization by SEM. Result Data were analyzed using mean and standard deviation and subjected to one-way ANOVA, where the groups proved to be statistically different (p<0.05), followed by Tukey's test. Conclusion The selection of material influences the value of vertical misfit. The group machined in Ti showed the lowest value while the group cast in Ni-Cr showed the highest value of vertical misfit.

  4. Construction and initial operation of MHD PbLi facility at UCLA

    Energy Technology Data Exchange (ETDEWEB)

    Smolentsev, S., E-mail: sergey@fusion.ucla.edu; Li, F.-C.; Morley, N.; Ueki, Y.; Abdou, M.; Sketchley, T.

    2013-06-15

    Highlights: • New MHD PbLi loop has been constructed and tested at UCLA. • Pressure diagnostics system has been developed and successfully tested. • Ultrasound Doppler velocimeter is tested as velocity diagnostics. • Experiments on pressure drop reduction have been performed. • Experiments on MHD flow in a duct with SiC flow channel insert are underway. -- Abstract: A magnetohydrodynamic flow facility MaPLE (Magnetohydrodynamic PbLi Experiment) that utilizes molten eutectic alloy lead–lithium (PbLi) as working fluid has been constructed and tested at University of California, Los Angeles. The loop operation parameters are: maximum magnetic field 1.8 T, PbLi temperature up to 350 °C, maximum PbLi flow rate with/without a magnetic field 15/50 l/min, maximum pressure head 0.15 MPa. The paper describes the loop itself and its major components, basic operation procedures, experience of handling PbLi, initial loop testing, flow diagnostics and current and near-future experiments. The obtained test results of the loop and its components have demonstrated that the new facility is fully functioning and ready for experimental studies of magnetohydrodynamic, heat and mass transfer phenomena in PbLi flows and also can be used in mock up testing in conditions relevant to fusion applications.

  5. The UCLA/SLAC Ultra-High Gradient Cerenkov Wakefield Accelerator Experiment

    CERN Document Server

    Thompson, Matthew C; Hogan, Mark; Ischebeck, Rasmus; Muggli, Patric; Rosenzweig, James E; Scott, A; Siemann, Robert; Travish, Gil; Walz, Dieter; Yoder, Rodney

    2005-01-01

    An experiment is planned to study the performance of dielectric Cerenkov wakefield accelerating structures at extremely high gradients in the GV/m range. This new UCLA/SLAC collaboration will take advantage of the unique SLAC FFTB electron beam and its demonstrated ultra-short pulse lengths and high currents (e.g., σz = 20 μm at Q = 3 nC). The electron beam will be focused down and sent through varying lengths of fused silica capillary tubing with two different sizes: ID = 200 μm / OD = 325 μm and ID = 100 μm / OD = 325 μm. The pulse length of the electron beam will be varied in order to alter the accelerating gradient and probe the breakdown threshold of the dielectric structures. In addition to breakdown studies, we plan to collect and measure coherent Cerenkov radiation emitted from the capillary tube to gain information about the strength of the accelerating fields. Status and progress on the experiment are reported.

  6. Doubling-resolution analog-to-digital conversion based on PIC18F45K80

    Directory of Open Access Journals (Sweden)

    Yueyang Yuan

    2014-08-01

    Full Text Available Aiming at the analog signal being converted into the digital with a higher precision, a method to improve the analog-to-digital converter (ADC resolution is proposed and described. Based on the microcomputer PIC18F45K80 in which the internal ADC modules are embedded, a circuit is designed for doubling the resolution of ADC. According to the circuit diagram, the mathematical formula for calculating this resolution is derived. The corresponding software and print circuit board assembly is also prepared. With the experiment, a 13 bit ADC is achieved based on the 12 bit ADC module predesigned in the PIC18F45K80.
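
    The abstract does not reproduce the circuit or its formula, so the following is only a generic sketch of one standard way to gain an extra bit: convert the input twice, the second time offset by half an LSB, and sum the two 12-bit codes. The reference voltage is an assumption:

```python
FULL_SCALE = 5.0               # assumed ADC reference voltage (V)
BITS = 12
LSB = FULL_SCALE / (1 << BITS)

def adc12(v):
    """Ideal 12-bit quantizer (truncating, clamped to the code range)."""
    return max(0, min((1 << BITS) - 1, int(v / LSB)))

def adc13(v):
    """13-bit code from two 12-bit conversions taken half an LSB apart."""
    return adc12(v) + adc12(v + LSB / 2)

# Two voltages half a 12-bit LSB apart now map to distinct codes.
a = adc13(100 * LSB)
b = adc13(100 * LSB + LSB / 2)
```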

  7. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  8. Concurrent particle-in-cell plasma simulation on a multi-transputer parallel computer

    International Nuclear Information System (INIS)

    Khare, A.N.; Jethra, A.; Patel, Kartik

    1992-01-01

    This report describes the parallelization of a Particle-in-Cell (PIC) plasma simulation code on a multi-transputer parallel computer. The algorithm used in the parallelization of the PIC method is described. The decomposition schemes related to the distribution of the particles among the processors are discussed. The implementation of the algorithm on a transputer network connected as a torus is presented. The solutions of the problems related to global communication of data are presented in the form of a set of generalized communication functions. The performance of the program as a function of data size and the number of transputers show that the implementation is scalable and represents an effective way of achieving high performance at acceptable cost. (author). 11 refs., 4 figs., 2 tabs., appendices
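
    The particle hand-off at the heart of such a decomposition can be sketched without any transputer hardware: after a push, each particle is reassigned to the processor owning the spatial slab it now occupies (positions below are arbitrary):

```python
def migrate(domains, x_min, x_max):
    """domains: per-processor lists of particle positions on a periodic
    [x_min, x_max) system split into equal-width slabs."""
    n = len(domains)
    width = (x_max - x_min) / n
    new = [[] for _ in range(n)]
    for parts in domains:
        for x in parts:
            x = (x - x_min) % (x_max - x_min) + x_min   # periodic wrap
            new[min(int((x - x_min) / width), n - 1)].append(x)
    return new

# Three slabs of width 1.0 on [0, 3); the particle at 3.2 wraps to 0.2.
domains = [[0.1, 0.9], [1.4], [2.6, 3.2]]
moved = migrate(domains, 0.0, 3.0)
```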

  9. Evaluation of the Parent-Implemented Communication Strategies (PiCS) Project Using the Multiattribute Utility (MAU) Approach

    Science.gov (United States)

    Stoner, Julia B.; Meadan, Hedda; Angell, Maureen E.; Daczewitz, Marcus

    2012-01-01

    We conducted a multiattribute utility (MAU) evaluation to assess the Parent-Implemented Communication Strategies (PiCS) project which was funded by the Institute of Education Sciences (IES). In the PiCS project parents of young children with developmental disabilities are trained and coached in their homes on naturalistic and visual teaching…

  10. Evaluating CoLiDeS + Pic: The Role of Relevance of Pictures in User Navigation Behaviour

    Science.gov (United States)

    Karanam, Saraschandra; van Oostendorp, Herre; Indurkhya, Bipin

    2012-01-01

    CoLiDeS + Pic is a cognitive model of web-navigation that incorporates semantic information from pictures into CoLiDeS. In our earlier research, we have demonstrated that by incorporating semantic information from pictures, CoLiDeS + Pic can predict the hyperlinks on the shortest path more frequently, and also with greater information scent,…

  11. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  12. Spectral domain, common path OCT in a handheld PIC based system

    Science.gov (United States)

    Leinse, Arne; Wevers, Lennart; Marchenko, Denys; Dekker, Ronald; Heideman, René G.; Ruis, Roosje M.; Faber, Dirk J.; van Leeuwen, Ton G.; Kim, Keun Bae; Kim, Kyungmin

    2018-02-01

    Optical Coherence Tomography (OCT) has made it into the clinic in the last decade with systems based on bulk optical components. The next disruptive step will be the introduction of handheld OCT systems. Photonic Integrated Circuit (PIC) technology is the key enabler for this further miniaturization. PIC technology allows signal processing on a stable platform, and the implementation of a common path interferometer in that same platform creates a robust fully integrated OCT system with a flexible fiber probe. In this work the first PIC based handheld and integrated common path based spectral domain OCT system is described and demonstrated. The spectrometer in the system is based on an Arrayed Waveguide Grating (AWG) and fully integrated with the CCD and a fiber probe into a system operating at 850 nm. The AWG on the PIC creates a 512 channel spectrometer with a resolution of 0.22 nm enabling a high speed analysis of the full A-scan. The silicon nitride based proprietary waveguide technology (TriPleX™) enables low loss complex photonic structures from the visible (405 nm) to IR (2350 nm) range, making it a unique candidate for OCT applications. Broadband AWG operation from visible to 1700 nm has been shown in the platform and Photonic Design Kits (PDK) are available enabling custom made designs in a system level design environment. This allows a low threshold entry for designing new (OCT) designs for a broad wavelength range.
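
    In spectral-domain OCT the depth profile (A-scan) is recovered by Fourier-transforming the spectrometer output. A toy NumPy illustration with an idealized 512-channel spectrum (matching the AWG channel count above; everything else is invented):

```python
import numpy as np

channels = 512
k = np.linspace(0.0, 1.0, channels, endpoint=False)  # normalized wavenumber
depth_bin = 40                                       # reflector depth (bins)
# A single reflector at that depth produces a cosine fringe across k.
spectrum = 1.0 + 0.5 * np.cos(2 * np.pi * depth_bin * k)
a_scan = np.abs(np.fft.rfft(spectrum - spectrum.mean()))  # remove DC term
peak = int(np.argmax(a_scan))                        # recovered depth bin
```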

  13. The Plant Information Center (PIC): A Web-Based Learning Center for Botanical Study.

    Science.gov (United States)

    Greenberg, J.; Daniel, E.; Massey, J.; White, P.

    The Plant Information Center (PIC) is a project funded under the Institute of Museum and Library Studies that aims to provide global access to both primary and secondary botanical resources via the World Wide Web. Central to the project is the development and employment of a series of applications that facilitate resource discovery, interactive…

  14. The LHC Tier1 at PIC: Experience from first LHC run

    International Nuclear Information System (INIS)

    Flix, J.; Perez-Calero Yzquierdo, A.; Accion, E.; Acin, V.; Acosta, C.; Bernabeu, G.; Bria, A.; Casals, J.; Caubet, M.; Cruz, R.; Delfino, M.; Espinal, X.; Lanciotti, E.; Lopez, F.; Martinez, F.; Mendez, V.; Merino, G.; Pacheco, A.; Planas, E.; Porto, M. C.; Rodriguez, B.; Sedov, A.

    2013-01-01

    This paper summarizes the operational experience of the Tier1 computer center at Port d'Informacio Cientifica (PIC) supporting the commissioning and first run (Run1) of the Large Hadron Collider (LHC). The evolution of the experiment computing models resulting from the higher amounts of data expected after the restart of the LHC is also described. (authors)

  15. An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System

    Science.gov (United States)

    Helmy*, Tarek; Fatai, Anifowose; Sallam, El-Sayed

    PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency in intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have their associated drawbacks. An efficient alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed for efficiency and fairness on average, and recommended for implementation in PicOS. Simulations were carried out and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the lowest AWT and ATT on average over the other non-preemptive scheduling algorithms implemented in this paper.
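
    The AWT/ATT comparison can be sketched for a non-preemptive scheduler that picks the next ready process at random (burst times below are invented; all jobs arrive at time zero):

```python
import random

def schedule_metrics(bursts, order):
    """Return (average waiting time, average turn-around time)."""
    t, waits, turnarounds = 0, [], []
    for i in order:
        waits.append(t)          # job i waits until the CPU frees up
        t += bursts[i]
        turnarounds.append(t)    # completion time = turn-around (arrival 0)
    return sum(waits) / len(waits), sum(turnarounds) / len(turnarounds)

bursts = [8, 4, 9, 5]
order = list(range(len(bursts)))
random.Random(42).shuffle(order)             # randomized selection policy
awt_random, att_random = schedule_metrics(bursts, order)
```

    Whatever the order, ATT exceeds AWT by exactly the mean burst time, so the two metrics rank policies consistently for a fixed job set.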

  16. Characterizing a New Candidate Benchmark Brown Dwarf Companion in the β Pic Moving Group

    Science.gov (United States)

    Phillips, Caprice; Bowler, Brendan; Liu, Michael C.; Mace, Gregory N.; Sokal, Kimberly R.

    2018-01-01

    Benchmark brown dwarfs are objects that have at least two measured fundamental quantities such as luminosity and age, and therefore can be used to test substellar atmospheric and evolutionary models. Nearby, young, loose associations such as the β Pic moving group represent some of the best regions in which to identify intermediate-age benchmark brown dwarfs due to their well-constrained ages and metallicities. We present a spectroscopic study of a new companion at the hydrogen-burning limit orbiting a low-mass star at a separation of 9″ (650 AU) in the 23 Myr old β Pic moving group. The medium-resolution near-infrared spectrum of this companion from IRTF/SpeX shows clear signs of low surface gravity and yields an index-based spectral type of M6±1 with a VL-G gravity on the Allers & Liu classification system. Currently, there are four known brown dwarf and giant planet companions in the β Pic moving group: HR 7329 B, PZ Tel B, β Pic b, and 51 Eri b. Depending on its exact age and accretion history, this new object may represent the third brown dwarf companion and fifth substellar companion in this association.

  17. EXPERIMENTAL INVESTIGATION OF PIC FORMATION DURING THE INCINERATION OF RECOVERED CFC-11

    Science.gov (United States)

    The report gives results of an investigation of the formation of products of incomplete combustion (PICs) during "recovered" trichlorofluoromethane (CFC-11) incineration. Tests involved burning the recovered CFC-11 in a propane gas flame. Combustion gas samples were taken and an...

  18. Dynamic Load Balancing for PIC code using Eulerian/Lagrangian partitioning

    OpenAIRE

    Sauget, Marc; Latu, Guillaume

    2017-01-01

    This document presents an analysis of different load balance strategies for a plasma physics code that models high energy particle beams with the PIC method. A comparison of different load balancing algorithms is given: static and dynamic ones. Lagrangian and Eulerian partitioning techniques have been investigated.
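
    The two partitioning families can be contrasted in a few lines: an Eulerian split assigns equal spatial slabs regardless of where particles sit, while a Lagrangian split assigns (nearly) equal particle counts. The clustered-beam positions below are invented:

```python
def eulerian_counts(positions, n_domains, length):
    """Particles per domain for an equal-width spatial (Eulerian) split."""
    counts = [0] * n_domains
    for x in positions:
        counts[min(int(x / length * n_domains), n_domains - 1)] += 1
    return counts

def lagrangian_counts(positions, n_domains):
    """Particles per domain for an equal-count (Lagrangian) split."""
    base, rem = divmod(len(positions), n_domains)
    return [base + (i < rem) for i in range(n_domains)]

# A beam clustered in the first quarter of a box of length 4.0.
positions = [0.01 * i for i in range(80)] + [0.9, 2.5, 3.7]
e = eulerian_counts(positions, 4, 4.0)
l = lagrangian_counts(positions, 4)
```

    The Eulerian split leaves most processors nearly idle here, which is why dynamic or Lagrangian strategies pay off for strongly non-uniform beams.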

  19. Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition

    Energy Technology Data Exchange (ETDEWEB)

    Witherspoon, F. Douglas [HyperV Technologies Corp.; Welch, Dale R. [Voss Scientific, LLC; Thompson, John R. [FAR-TECH, Inc.; MacFarlane, Joeseph J. [Prism Computational Sciences Inc.; Phillips, Michael W. [Advanced Energy Systems, Inc.; Bruner, Nicki [Voss Scientific, LLC; Mostrom, Chris [Voss Scientific, LLC; Thoma, Carsten [Voss Scientific, LLC; Clark, R. E. [Voss Scientific, LLC; Bogatu, Nick [FAR-TECH, Inc.; Kim, Jin-Soo [FAR-TECH, Inc.; Galkin, Sergei [FAR-TECH, Inc.; Golovkin, Igor E. [Prism Computational Sciences, Inc.; Woodruff, P. R. [Prism Computational Sciences, Inc.; Wu, Linchun [HyperV Technologies Corp.; Messer, Sarah J. [HyperV Technologies Corp.

    2014-05-20

    Radiation processes play an important role in the study of both fast ignition and other inertial confinement schemes, such as plasma jet driven magneto-inertial fusion, both in their effect on energy balance, and in generating diagnostic signals. In the latter case, warm and hot dense matter may be produced by the convergence of a plasma shell formed by the merging of an assembly of high Mach number plasma jets. This innovative approach has the potential advantage of creating matter of high energy densities in voluminous amounts compared with high power lasers or particle beams. An important application of this technology is as a plasma liner for the flux compression of magnetized plasma to create ultra-high magnetic fields and burning plasmas. HyperV Technologies Corp. has been developing plasma jet accelerator technology in both coaxial and linear railgun geometries to produce plasma jets of sufficient mass, density, and velocity to create such imploding plasma liners. An enabling tool for the development of this technology is the ability to model the plasma dynamics, not only in the accelerators themselves, but also in the resulting magnetized target plasma and within the merging/interacting plasma jets during transport to the target. Welch pioneered numerical modeling of such plasmas (including for fast ignition) using the LSP simulation code. LSP is an electromagnetic, parallelized plasma simulation code under development since 1995. It has a number of innovative features making it uniquely suitable for modeling high energy density plasmas, including a hybrid fluid model for electrons that allows electrons in dense plasmas to be modeled with a kinetic or fluid treatment as appropriate. In addition to in-house use at Voss Scientific, several groups carrying out research in Fast Ignition (LLNL, SNL, UCSD, AWE (UK), and Imperial College (UK)) also use LSP. A collaborative team consisting of HyperV Technologies Corp., Voss Scientific LLC, FAR-TECH, Inc., Prism

  20. Validation of the UCLA Child Post traumatic stress disorder-reaction index in Zambia

    Directory of Open Access Journals (Sweden)

    Cohen Judith A

    2011-09-01

    Full Text Available Abstract Background Sexual violence against children is a major global health and human rights problem. In order to address this issue there needs to be a better understanding of the issue and the consequences. One major challenge in accomplishing this goal has been a lack of validated child mental health assessments in low-resource countries where the prevalence of sexual violence is high. This paper presents results from a validation study of a trauma-focused mental health assessment tool - the UCLA Post-traumatic Stress Disorder - Reaction Index (PTSD-RI) - in Zambia. Methods The PTSD-RI was adapted through the addition of locally relevant items and validated using local responses to three cross-cultural criterion validity questions. Reliability of the symptom scales was assessed using Cronbach alpha analyses. Discriminant validity was assessed by comparing mean scale scores of cases and non-cases. Concurrent validity was assessed by comparing mean scale scores to a traumatic experience index. Sensitivity and specificity analyses were run using receiver operating characteristic curves. Results Analysis of data from 352 youth attending a clinic specializing in sexual abuse showed that this adapted PTSD-RI demonstrated good reliability, with Cronbach alpha scores greater than .90 on all the evaluated scales. The symptom scales were able to statistically significantly discriminate between locally identified cases and non-cases, and higher symptom scale scores were associated with increased numbers of trauma exposures, which is an indication of concurrent validity. Sensitivity and specificity analyses resulted in an adequate area under the curve, indicating that this tool was appropriate for case definition. Conclusions This study has shown that validating mental health assessment tools in a low-resource country is feasible, and that by taking the time to adapt a measure to the local context, a useful and valid Zambian version of the PTSD-RI was developed to detect
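
    The Cronbach alpha statistic used here has a compact closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A sketch with fabricated item responses:

```python
def cronbach_alpha(items):
    """items: list of per-item response lists of equal length."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Four fabricated, highly consistent items answered by five respondents.
items = [[1, 2, 3, 4, 5],
         [1, 2, 3, 4, 5],
         [2, 2, 3, 4, 4],
         [1, 3, 3, 3, 5]]
alpha = cronbach_alpha(items)   # close to 1 for consistent items
```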

  1. Massively parallel computation of PARASOL code on the Origin 3800 system

    International Nuclear Information System (INIS)

    Hosokawa, Masanari; Takizuka, Tomonori

    2001-10-01

    The divertor particle simulation code named PARASOL simulates open-field plasmas between divertor walls self-consistently by using an electrostatic PIC method and a binary collision Monte Carlo model. PARASOL, parallelized with MPI-1.1 for scalar parallel computers, ran on the Intel Paragon XP/S system. A new SGI Origin 3800 system was installed in May 2001, and the parallel programming was improved at this switchover. As a result of the high-performance new hardware and this improvement, PARASOL is sped up by about 60 times with the same number of processors. (author)
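
    The binary-collision Monte Carlo ingredient of a PARASOL-like model pairs particles within a cell at random each time step; the sketch below shows only that pairing step (the scattering physics is omitted):

```python
import random

def collision_pairs(indices, rng):
    """Randomly pair particle indices within one cell; odd one left out."""
    idx = list(indices)
    rng.shuffle(idx)
    return [(idx[i], idx[i + 1]) for i in range(0, len(idx) - 1, 2)]

# Nine particles in a cell -> four disjoint collision pairs.
pairs = collision_pairs(range(9), random.Random(7))
```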

  2. A review of the I Jornadas de Investigación de Ingeniería Civil y Urbanismo UCLA 2015

    OpenAIRE

    J. C. Rincón

    2016-01-01

    Through this essay, the author outlines the I Jornadas de Investigación de Ingeniería Civil y Urbanismo UCLA 2015, held on 15 and 16 March 2016 at the Civil Engineering deanery of the Universidad Centroccidental Lisandro Alvarado. Papers were presented on research work related to civil engineering, specifically in the areas of structures, hydraulic and sanitary engineering, construction engineering...

  3. A maximum power point tracker for photovoltaic system using a PIC microcontroller; Controlador de potencia maxima para sistemas fotovoltaicos (SFVs) utilizando un microcontrolador PIC

    Energy Technology Data Exchange (ETDEWEB)

    Guzman, Eusebio; Mendoza, Victor X; Carrillo, Jose J . A; Galarza, Cristian [Universidad Autonoma Metropolitana, Mexico, D.F. (Mexico)

    2000-07-01

A maximum power point tracker (MPPT) for photovoltaic systems is presented. The equipment can output up to 600 W, and its control signals are generated by a PIC microcontroller. The control principle is based on current and voltage sampling at the output terminals of the photovoltaic generator. By comparing the power of two consecutive samples, it is possible to know how far from the optimal point the system is working. Output voltage control is used to force the system to work within the optimal area of operation. The microcontroller program sequence, the DC/DC converter structure and the most relevant results are shown.
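The control principle described above, comparing the power of two consecutive samples and perturbing the operating voltage toward the optimum, is the classic perturb-and-observe scheme. A minimal sketch follows; the step size and toy PV curve are illustrative assumptions, not the paper's 600 W hardware:

```python
def po_mppt_step(v_prev, p_prev, v_now, p_now, v_step=0.1):
    """One perturb-and-observe step: return the next voltage setpoint.

    If power increased since the last sample, keep perturbing in the
    same direction; if it decreased, reverse direction.
    """
    direction = 1 if v_now >= v_prev else -1
    if p_now < p_prev:
        direction = -direction
    return v_now + direction * v_step

# Toy PV power curve with its maximum power point at 17 V (illustrative).
def pv_power(v):
    return max(0.0, 600.0 - 2.0 * (v - 17.0) ** 2)

v_prev, v_now = 10.0, 10.1
for _ in range(200):
    v_next = po_mppt_step(v_prev, pv_power(v_prev), v_now, pv_power(v_now))
    v_prev, v_now = v_now, v_next

print(round(v_now, 1))  # oscillates near the 17 V maximum power point
```

Once near the optimum the setpoint oscillates within one step of the maximum power point, which is the characteristic behaviour of perturb-and-observe trackers.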

  4. The Pic19 NBS-LRR gene family members are closely linked to Scmv1, but not involved in maize resistance to sugarcane mosaic virus

    DEFF Research Database (Denmark)

    Jiang, Lu; Ingvardsen, Christina Rønn; Lübberstedt, Thomas

    2008-01-01

    the isolation and characterization of the Pic19R gene family members from the inbred line FAP1360A, which shows complete resistance to SCMV. Two primer pairs were designed based on the conserved regions among the known Pic19 paralogs and used for rapid amplification of cDNA ends of FAP1360A. Six full-length c...... of the Pic19R family indicated that the Pic19R-1 paralog is identical to the known Rxo1 gene conferring resistance to rice bacterial streak disease and none of the other Pic19R paralogs seems to be involved in resistance to SCMV...

  5. PIC simulation of the electron-ion collision effects on suprathermal electrons

    International Nuclear Information System (INIS)

    Wu Yanqing; Han Shensheng

    2000-01-01

The generation and transport of suprathermal electrons are important to both the traditional ICF scheme and the 'fast ignition' scheme. The authors discuss the effects of electron-ion collisions on the generation and transport of suprathermal electrons produced by parametric instability. Including a weak electron-ion collision term in the PIC simulation results in enhanced collisional absorption, an increased hot-electron temperature, and a reduced maximum electrostatic field amplitude at wave breaking. The energy and distribution of the suprathermal electrons are therefore changed: they are distributed closer to the phase velocity of the electrostatic wave than in the case without the electron-ion collision term. The electron-ion collisions also enhance the self-consistent field and impede suprathermal electron transport, further reducing the suprathermal electron energy. In addition, the authors discuss the effect of initial conditions on the PIC simulation to ensure that the results are correct

  6. Progress on the Development of the hPIC Particle-in-Cell Code

    Science.gov (United States)

    Dart, Cameron; Hayes, Alyssa; Khaziev, Rinat; Marcinko, Stephen; Curreli, Davide; Laboratory of Computational Plasma Physics Team

    2017-10-01

Advancements were made in the development of the kinetic-kinetic electrostatic Particle-in-Cell code, hPIC, designed for large-scale simulation of the Plasma-Material Interface. hPIC achieved a weak scaling efficiency of 87% using the Algebraic Multigrid Solver BoomerAMG from the PETSc library on more than 64,000 cores of the Blue Waters supercomputer at the University of Illinois at Urbana-Champaign. The code successfully simulates two-stream instability and a volume of plasma over several square centimeters of surface extending out to the presheath in kinetic-kinetic mode. Results from a parametric study of the plasma sheath in strongly magnetized conditions will be presented, as well as a detailed analysis of the plasma sheath structure at grazing magnetic angles. The distribution function and its moments will be reported for plasma species in the simulation domain and at the material surface for plasma sheath simulations.
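Weak scaling efficiency, the figure quoted above, compares the runtime at N cores against a small-core reference while the work per core is held fixed. A minimal sketch with hypothetical timings (not measurements from hPIC):

```python
def weak_scaling_efficiency(t_ref, t_n):
    """Weak-scaling efficiency: reference runtime divided by runtime at
    N cores, with the problem size per core held fixed (1.0 = perfect)."""
    return t_ref / t_n

# Hypothetical timings in seconds, chosen to illustrate an 87% efficiency
# like the hPIC result; these are not the paper's measured values.
print(round(weak_scaling_efficiency(100.0, 114.94), 2))  # -> 0.87
```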

  7. Operational Test Report (OTR) for U-105 Pumping and Instrumentation and Control (PIC) Skid

    International Nuclear Information System (INIS)

    KOCH, M.R.

    2000-01-01

Attached is the completed Operation Test Procedure (OTP-200-004, Rev. A-18). The OTP includes a printout of the Programmable Logic Controller (PLC) Ladder Diagram, which was designed for installation in the PLC used to monitor and control pumping activity for Tank Farm 241-U-105. The completed OTP and OTR are referenced in the IS PIC Skid Configuration Drawing (H-2-829998)

  8. Rise time of proton cut-off energy in 2D and 3D PIC simulations

    Science.gov (United States)

    Babaei, J.; Gizzi, L. A.; Londrillo, P.; Mirzanejad, S.; Rovelli, T.; Sinigardi, S.; Turchetti, G.

    2017-04-01

    The Target Normal Sheath Acceleration regime for proton acceleration by laser pulses is experimentally consolidated and fairly well understood. However, uncertainties remain in the analysis of particle-in-cell simulation results. The energy spectrum is exponential with a cut-off, but the maximum energy depends on the simulation time, following different laws in two and three dimensional (2D, 3D) PIC simulations so that the determination of an asymptotic value has some arbitrariness. We propose two empirical laws for the rise time of the cut-off energy in 2D and 3D PIC simulations, suggested by a model in which the proton acceleration is due to a surface charge distribution on the target rear side. The kinetic energy of the protons that we obtain follows two distinct laws, which appear to be nicely satisfied by PIC simulations, for a model target given by a uniform foil plus a contaminant layer that is hydrogen-rich. The laws depend on two parameters: the scaling time, at which the energy starts to rise, and the asymptotic cut-off energy. The values of the cut-off energy, obtained by fitting 2D and 3D simulations for the same target and laser pulse configuration, are comparable. This suggests that parametric scans can be performed with 2D simulations since 3D ones are computationally very expensive, delegating their role only to a correspondence check. In this paper, the simulations are carried out with the PIC code ALaDyn by changing the target thickness L and the incidence angle α, with a fixed a0 = 3. A monotonic dependence, on L for normal incidence and on α for fixed L, is found, as in the experimental results for high temporal contrast pulses.
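The paper's empirical laws each have two fit parameters: a scaling time and an asymptotic cut-off energy. As an illustration of that two-parameter fitting procedure (the paper's exact functional forms are not reproduced here; a generic saturating law E(t) = E∞·t/(t+τ) is assumed), a coarse least-squares fit on synthetic data:

```python
def saturating_law(t, e_inf, tau):
    """Generic two-parameter rise law E(t) = E_inf * t / (t + tau).
    A stand-in for the paper's empirical 2D/3D laws (assumption)."""
    return e_inf * t / (t + tau)

# Synthetic "simulation" cut-off energies generated from known parameters.
true_e_inf, true_tau = 8.0, 50.0
times = [10.0 * k for k in range(1, 21)]
data = [saturating_law(t, true_e_inf, true_tau) for t in times]

# Coarse grid-search least squares over (E_inf, tau).
best = None
for e_inf in [6.0 + 0.1 * i for i in range(41)]:       # 6.0 .. 10.0 MeV
    for tau in [30.0 + 1.0 * j for j in range(41)]:    # 30 .. 70 fs
        sse = sum((saturating_law(t, e_inf, tau) - d) ** 2
                  for t, d in zip(times, data))
        if best is None or sse < best[0]:
            best = (sse, e_inf, tau)

_, fit_e_inf, fit_tau = best
print(fit_e_inf, fit_tau)  # recovers 8.0 and 50.0 on this noiseless data
```

With real simulation output one would fit the 2D and 3D laws separately and compare the two asymptotic cut-off energies, which is the consistency check the paper proposes.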

  9. Rancang Bangun Inverter SVM Berbasis Mikrokontroler PIC 18F4431 Untuk Sistem VSD

    OpenAIRE

    Tarmizi; Muyassar

    2013-01-01

A motor speed control system is known as a Variable Speed Drive (VSD) system. An induction-motor VSD uses an inverter to regulate the motor supply frequency. To obtain a motor supply frequency close to sinusoidal, the inverter must be switched with a suitable method. In this work, the three-phase inverter is switched using the Space Vector Modulation (SVM) method, controlled by a PIC18F4431 microcontroller. Before the experiments were carried out, this SVM inverter was...

  10. Operational Test Report (OTR) for U-105 Pumping and Instrumentation and Control (PIC) Skid

    Energy Technology Data Exchange (ETDEWEB)

    KOCH, M.R.

    2000-02-28

Attached is the completed Operation Test Procedure (OTP-200-004, Rev. A-18). The OTP includes a printout of the Programmable Logic Controller (PLC) Ladder Diagram, which was designed for installation in the PLC used to monitor and control pumping activity for Tank Farm 241-U-105. The completed OTP and OTR are referenced in the IS PIC Skid Configuration Drawing (H-2-829998).

  11. Operational Test Report (OTR) for U-102 Pumping and Instrumentation and Control (PIC) Skid

    Energy Technology Data Exchange (ETDEWEB)

    KOCH, M.R.

    2000-02-28

Attached is the completed Operation Test Procedure (OTP-200-004, Rev. A-19 and Rev. A-20). The OTP includes a printout of the Programmable Logic Controller (PLC) Ladder Diagram, which was designed for installation in the PLC used to monitor and control pumping activity for Tank Farm 241-U-102. The completed OTP and OTR are referenced in the IS PIC Skid Configuration Drawing (H-2-829998).

  12. Operational Test Report (OTR) for U-103 Pumping and Instrumentation and Control (PIC) Skid

    Energy Technology Data Exchange (ETDEWEB)

    KOCH, M.R.

    2000-02-28

Attached is the completed Operation Test Procedure (OTP-200-004, Rev. A-16). The OTP includes a printout of the Programmable Logic Controller (PLC) Ladder Diagram, which was designed for installation in the PLC used to monitor and control pumping activity for Tank Farm 241-U-103. The completed OTP and OTR are referenced in the IS PIC Skid Configuration Drawing (H-2-829998).

  13. A COMPENSATOR APPLICATION USING SYNCHRONOUS MOTOR WITH A PI CONTROLLER BASED ON PIC

    OpenAIRE

    Ramazan BAYINDIR; Alper GÖRGÜN

    2009-01-01

In this paper, PI control of a synchronous motor has been realized using a PIC 18F452 microcontroller, with the motor operated as an ohmic, inductive and capacitive load at different excitation currents. Instead of solving the integral operation of PI control, which is difficult to convert to a digital system, the sum of all error values over a defined time period is multiplied by the sampling period. Reference values of the PI algorithm are determined with the Ziegler-Nichols method. These ...
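The integral workaround described above (multiply the running sum of errors by the sampling period) can be sketched as follows; the gains, sampling period and toy plant are illustrative assumptions, not the paper's values:

```python
class DiscretePI:
    """PI controller with the integral approximated by a running error sum
    multiplied by the sampling period, as described in the abstract."""

    def __init__(self, kp, ki, ts):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.error_sum = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.error_sum += error
        # u = Kp*e + Ki*Ts*sum(e): the summation replaces the integral.
        return self.kp * error + self.ki * self.ts * self.error_sum

# Toy first-order plant driven to a setpoint of 1.0 (illustrative).
pi = DiscretePI(kp=0.8, ki=2.0, ts=0.05)
y = 0.0
for _ in range(300):
    u = pi.update(1.0, y)
    y += 0.05 * (u - y)   # simple first-order plant response

print(round(y, 3))  # converges close to the 1.0 setpoint
```

The integral term is what removes the steady-state error: at equilibrium the proportional term vanishes but the accumulated sum keeps supplying the control effort.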

  14. Design and implementation of the standards-based personal intelligent self-management system (PICS).

    Science.gov (United States)

    von Bargen, Tobias; Gietzelt, Matthias; Britten, Matthias; Song, Bianying; Wolf, Klaus-Hendrik; Kohlmann, Martin; Marschollek, Michael; Haux, Reinhold

    2013-01-01

Against the background of demographic change and a diminishing care workforce, there is a growing need for personalized decision support. The aim of this paper is to describe the design and implementation of the standards-based personal intelligent care system (PICS). PICS makes consistent use of internationally accepted standards, such as the Health Level 7 (HL7) Arden syntax for the representation of the decision logic and the HL7 Clinical Document Architecture for information representation, and is based on an open-source service-oriented architecture framework and a business process management system. Its functionality is exemplified for the application scenario of a patient suffering from congestive heart failure. Several vital-signs sensors provide data for the decision support system, and a number of flexible communication channels are available for interaction with the patient or caregiver. PICS is a standards-based, open and flexible system enabling personalized decision support. Further development will include the implementation of components on small computers and sensor nodes.

  15. Design And Construction Of Digital Multi-Meter Using PIC Microcontroller

    Directory of Open Access Journals (Sweden)

    Khawn Nue

    2015-07-01

This thesis describes the design and construction of a digital multimeter using a PIC microcontroller. A typical multimeter may include features such as the ability to measure AC/DC voltage, DC current, resistance, temperature, diodes, frequency and connectivity. This design uses the PIC microcontroller, voltage rectifiers, a voltage-divider potentiometer, an LCD and other components to complete the measurements. The program calculates and shows the measurements on the LCD, with the modes selected via a keypad. The software was developed using MPLAB and simulated in PROTEUS. The analogue input is taken directly to the analogue input pin of the microcontroller without any other processing, so the input range is 0 V to 5 V and the maximum source impedance is 2.5 kΩ (a 1 kΩ potentiometer is used for testing). To improve the circuit, an op-amp is added in front to present greater impedance to the circuit under test; the output impedance of the op-amp is low, which is a requirement of the PIC analogue input.
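Since the analogue input goes straight to the PIC's ADC pin, each measurement is just an ADC count scaled back to volts. A sketch for a 10-bit converter with the 0-5 V range described above; the resistor-divider range extension is a hypothetical illustration, not part of the thesis:

```python
def adc_to_volts(count, vref=5.0, bits=10):
    """Map a raw ADC count to volts for an n-bit converter spanning 0..vref."""
    return count * vref / (2 ** bits - 1)

def divided_input_volts(count, r_top=9000.0, r_bottom=1000.0):
    """Undo a hypothetical 10:1 resistor divider used to extend the range
    beyond 0-5 V (divider values are illustrative, not from the thesis)."""
    v_pin = adc_to_volts(count)
    return v_pin * (r_top + r_bottom) / r_bottom

print(adc_to_volts(1023))         # full scale -> 5.0 V at the pin
print(divided_input_volts(1023))  # full scale -> 50.0 V before the divider
```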

  16. A curvilinear, fully implicit, conservative electromagnetic PIC algorithm in multiple dimensions

    Science.gov (United States)

    Chacón, L.; Chen, G.

    2016-07-01

We extend a recently proposed fully implicit PIC algorithm for the Vlasov-Darwin model in multiple dimensions (Chen and Chacón (2015) [1]) to curvilinear geometry. As in the Cartesian case, the approach is based on a potential formulation (ϕ, A), and overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. Conservation theorems for local charge and global energy are derived in curvilinear representation, and then enforced discretely by a careful choice of the discretization of field and particle equations. Additionally, the algorithm conserves canonical-momentum in any ignorable direction, and preserves the Coulomb gauge ∇ · A = 0 exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. We demonstrate the accuracy and efficiency properties of the algorithm with numerical experiments in mapped meshes in 1D-3V and 2D-3V.

  17. 2D PIC simulations for an EN discharge with magnetized electrons and unmagnetized ions

    Science.gov (United States)

    Lieberman, Michael A.; Kawamura, Emi; Lichtenberg, Allan J.

    2009-10-01

We conducted 2D particle-in-cell (PIC) simulations for an electronegative (EN) discharge with magnetized electrons and unmagnetized ions, and compared the results to a previously developed 1D (radial) analytical model of an EN plasma with strongly magnetized electrons and weakly magnetized ions [1]. In both cases, there is a static uniform applied magnetic field in the axial direction. The 1D radial model mimics the wall losses of the particles in the axial direction by introducing a bulk loss frequency term νL. A special (desired) solution was found in which only positive and negative ions but no electrons escaped radially. The 2D PIC results show good agreement with the 1D model over a range of parameters and indicate that the analytical form of νL employed in [1] is reasonably accurate. However, for the PIC simulations, there is always a finite flux of electrons to the radial wall which is about 10 to 30% of the negative ion flux. [1] G. Leray, P. Chabert, A.J. Lichtenberg and M.A. Lieberman, J. Phys. D, accepted for publication 2009.
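The bulk loss frequency νL mimics axial wall losses by removing particles at a fixed rate. A minimal Monte Carlo sketch of such a loss term, removing each particle with probability 1 − exp(−νL·Δt) per time step (all numbers are illustrative, not values from the paper):

```python
import math
import random

def apply_bulk_loss(n_particles, nu_l, dt, rng):
    """Remove each particle with probability 1 - exp(-nu_l * dt) per step,
    mimicking axial wall losses with bulk loss frequency nu_l."""
    p_loss = 1.0 - math.exp(-nu_l * dt)
    return sum(1 for _ in range(n_particles) if rng.random() >= p_loss)

rng = random.Random(1)                 # fixed seed for a repeatable run
n, nu_l, dt = 100000, 1.0e5, 1.0e-7    # illustrative plasma parameters
for _ in range(100):
    n = apply_bulk_loss(n, nu_l, dt, rng)

# The survivor count should track the analytic decay N0 * exp(-nu_l * t).
expected = 100000 * math.exp(-nu_l * dt * 100)
print(n, round(expected))
```

In a real PIC loop the removed particles would be reinjected according to the source model, so that νL sets the loss rate rather than a net decay.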

  18. Implementation of multi-layer feed forward neural network on PIC16F877 microcontroller

    International Nuclear Information System (INIS)

    Nur Aira Abd Rahman

    2005-01-01

An Artificial Neural Network (ANN) is an electronic model based on the neural structure of the brain. Like the human brain, an ANN consists of interconnected simple processing units, or neurons, that process inputs to generate output signals. ANN operation is divided into two modes: training mode and service mode. This project aims to implement an ANN on a PIC microcontroller that enables on-chip, stand-alone training and service modes. The inputs can come from sensors or switches, while the outputs can be used to control valves, motors, light sources and more. As partial development of the project, this paper reports the current status and results of the implemented ANN. The hardware fraction of this project incorporates a Microchip PIC16F877A microcontroller along with a uM-FPU math co-processor, a 32-bit floating-point co-processor utilized to execute the complex calculations required by the sigmoid activation function of each neuron. The ANN algorithm is converted to a software program written in assembly language. The implemented ANN is a three-layer structure with one hidden layer and five neurons, two of them hidden. To prove its operability and functionality, the network is trained to solve three common logic gate operations: AND, OR, and XOR. This paper concludes that the ANN has been successfully implemented on the PIC16F877A and uM-FPU math co-processor hardware and works correctly in both training and service modes. (Author)
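In service mode the network above is just a forward pass through sigmoid neurons. A sketch of such a pass on a 2-2-1 topology wired for XOR, with hand-picked weights for illustration rather than the project's trained values:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x1, x2):
    """Forward pass of a 2-2-1 sigmoid network realizing XOR.
    Weights are hand-picked for illustration, not trained values."""
    h1 = sigmoid(20.0 * x1 + 20.0 * x2 - 10.0)    # ~ OR of the inputs
    h2 = sigmoid(20.0 * x1 + 20.0 * x2 - 30.0)    # ~ AND of the inputs
    return sigmoid(20.0 * h1 - 20.0 * h2 - 10.0)  # OR and not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(forward(a, b)))  # prints the XOR truth table
```

On the PIC, each sigmoid evaluation is what gets delegated to the uM-FPU co-processor, since the 8-bit core has no floating-point hardware.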

  19. UCLA intermediate energy nuclear physics and relativistic heavy ion physics. Annual report, February 1, 1983-January 31, 1984

    International Nuclear Information System (INIS)

    1984-01-01

In this contract year the UCLA Intermediate Energy Group has continued to pursue a general set of problems in intermediate energy physics using new research tools and theoretical insights. Our program to study N-N scattering and proton-light nucleus scattering has been enhanced by a new polarized target facility (both hydrogen and deuterium) at the High Resolution Spectrometer (HRS) of the Los Alamos Meson Physics Facility (LAMPF). This facility has been constructed by our group in collaboration with physicists from KEK, LAMPF and the University of Minnesota; and the first set of experiments studying polarized beam-polarized target scattering at the HRS were completed this summer and early fall. The HRS mode of operation has led to some unique design features which are described. At the Bevalac, a new beam line spectrometer will be constructed for us during this year and next to significantly enhance our capability to study subthreshold K+, K- and antiproton production in relativistic heavy ion collisions and to search for fractionally charged particles. During this period a proposal is being prepared for a very large acceptance spectrometer and its associated beam line which will be used to detect dilepton pairs produced in relativistic heavy ion collisions. In concert with these experimental projects, theoretical advances in the understanding of new data from the HRS, particularly spin transfer data, have been made by the UCLA group and are described

  20. Controlling the numerical Cerenkov instability in PIC simulations using a customized finite difference Maxwell solver and a local FFT based current correction

    International Nuclear Information System (INIS)

    Li, Fei; Yu, Peicheng; Xu, Xinlu; Fiuza, Frederico; Decyk, Viktor K.

    2017-01-01

In this study we present a customized finite-difference-time-domain (FDTD) Maxwell solver for the particle-in-cell (PIC) algorithm. The solver is customized to effectively eliminate the numerical Cerenkov instability (NCI) which arises when a plasma (neutral or non-neutral) relativistically drifts on a grid when using the PIC algorithm. We control the EM dispersion curve in the direction of the plasma drift of a FDTD Maxwell solver by using a customized higher order finite difference operator for the spatial derivative along the direction of the drift (the 1̂ direction). We show that this eliminates the main NCI modes with moderate |k_1|, while keeping the additional main NCI modes well outside the range of physical interest at higher |k_1|. These main NCI modes can be easily filtered out along with the first spatial aliasing NCI modes, which are also at the edge of the fundamental Brillouin zone. The customized solver has the possible advantage of improved parallel scalability because it can be easily partitioned along 1̂, which typically has many more cells than the other directions for the problems of interest. We show that FFTs can be performed locally on the current on each partition to filter out the main and first spatial aliasing NCI modes, and to correct the current so that it satisfies the continuity equation for the customized spatial derivative. This ensures that Gauss' Law is satisfied. Lastly, we present simulation examples of one relativistically drifting plasma, of two colliding relativistically drifting plasmas, and of nonlinear laser wakefield acceleration (LWFA) in a Lorentz boosted frame, in which no evidence of the NCI is observed when using this customized Maxwell solver together with its NCI elimination scheme.
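The local filtering step amounts to transforming the current along the drift direction, zeroing the offending bands, and transforming back. A toy 1D version with a hand-rolled DFT; the cutoff and signal here are arbitrary illustrations, not the paper's actual NCI bands:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * j / n)
                for k in range(n)).real / n
            for j in range(n)]

def filter_high_k(current, keep_modes):
    """Zero all Fourier modes with |k| > keep_modes, as a stand-in for
    removing NCI bands along the drift direction of one partition."""
    X = dft(current)
    n = len(X)
    for k in range(n):
        k_signed = k if k <= n // 2 else k - n
        if abs(k_signed) > keep_modes:
            X[k] = 0.0
    return idft(X)

n = 32
# Low-k "physical" current plus a high-k spurious component.
signal = [math.sin(2 * math.pi * 2 * j / n) for j in range(n)]
noise = [0.5 * math.sin(2 * math.pi * 13 * j / n) for j in range(n)]
current = [s + e for s, e in zip(signal, noise)]

filtered = filter_high_k(current, keep_modes=4)
err = max(abs(f - s) for f, s in zip(filtered, signal))
print(err < 1e-9)  # the k=13 component is removed, the k=2 signal survives
```

Because each partition owns a contiguous slab along 1̂, this transform-filter-inverse step can run locally without the global communication a full-domain FFT would require, which is the scalability point the abstract makes.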

  1. Bio pics

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2011-01-01

Martin Zandvliet's Dirch (2011) prompts a sketch of the biographical film as a genre and some comments on its various expressive possibilities. The main examples are Walk the Line (2005) and I'm Not There (2007), about Johnny Cash and Bob Dylan respectively.

  2. Simulations of the BNL/SLAC/UCLA 1.6 cell emittance compensated photocathode RF gun low energy beam line

    International Nuclear Information System (INIS)

    Palmer, D.T.; Miller, R.H.; Winick, H.

    1995-01-01

    A dedicated low energy (2 to 10 MeV) experimental beam line is now under construction at Brookhaven National Laboratories Accelerator Test Facility (BNL/ATF) for photocathode RF gun testing and photoemission experiments. The design of the experimental line, using the 1.6 cell photocathode RF gun developed by the BNL/SLAC/UCLA RF gun collaboration is presented. Detailed beam dynamics simulations were performed for the 1.6 cell RF gun injector using a solenoidal emittance compensation technique. An experimental program for testing the 1.6 cell RF gun is presented. This program includes beam loading caused by dark current, higher order mode field measurements, integrated and slice emittance measurements using a pepper-pot and RF kicker cavity

  3. Preliminary conceptual design for a 510 MeV electron/positron injector for a UCLA φ factory

    International Nuclear Information System (INIS)

    Dahlbacka, G.; Hartline, R.; Barletta, W.; Pellegrini, C.

    1991-01-01

UCLA is proposing a compact superconducting high-luminosity (10^32-10^33 cm^-2 s^-1) e+e- collider for a φ factory. To achieve the required e+e- currents, full-energy injection from a linac, with intermediate storage in a Positron Accumulator Ring (PAR), is used. The elements of the linac are outlined with cost and future flexibility in mind. The preliminary conceptual design starts with a high-current gun similar in design to those developed at SLAC and at ANL (for the APS). Four 4-section linac modules follow, each driven by a 60 MW klystron with a 1 μs macropulse and an average current of 8.6 A. The first 4-section module is used to create positrons in a tungsten target at 186 MeV. The three remaining modules are used to accelerate the e+e- beam to 558 MeV (no-load limit) for injection into the PAR

  4. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  5. Development and validation of Australian aphasia rehabilitation best practice statements using the RAND/UCLA appropriateness method.

    Science.gov (United States)

    Power, Emma; Thomas, Emma; Worrall, Linda; Rose, Miranda; Togher, Leanne; Nickels, Lyndsey; Hersh, Deborah; Godecke, Erin; O'Halloran, Robyn; Lamont, Sue; O'Connor, Claire; Clarke, Kim

    2015-07-02

To develop and validate a national set of best practice statements for use in post-stroke aphasia rehabilitation. Literature review and statement validation using the RAND/UCLA Appropriateness Method (RAM). A national Community of Practice of over 250 speech pathologists, researchers, consumers and policymakers developed a framework consisting of eight areas of care in aphasia rehabilitation. This framework provided the structure for the development of a care pathway containing aphasia rehabilitation best practice statements. Nine speech pathologists with expertise in aphasia rehabilitation participated in two rounds of RAND/UCLA appropriateness ratings of the statements. Panellists consisted of researchers, service managers, clinicians and policymakers. Statements that achieved a high level of agreement and an overall median score of 7-9 on a nine-point scale were rated as 'appropriate'. 74 best practice statements were extracted from the literature and rated across eight areas of care (eg, receiving the right referrals, providing intervention). At the end of Round 1, 71 of the 74 statements were rated as appropriate, no statements were rated as inappropriate, and three statements were rated as uncertain. All 74 statements were then rated again in the face-to-face second round. 16 statements were added through splitting existing items or adding new statements. Seven statements were deleted leaving 83 statements. Agreement was reached for 82 of the final 83 statements. This national set of 82 best practice statements across eight care areas for the rehabilitation of people with aphasia is the first to be validated by an expert panel. These statements form a crucial component of the Australian Aphasia Rehabilitation Pathway (AARP) (http://www.aphasiapathway.com.au) and provide the basis for more consistent implementation of evidence-based practice in stroke rehabilitation.

  6. Development and validation of Australian aphasia rehabilitation best practice statements using the RAND/UCLA appropriateness method

    Science.gov (United States)

    Power, Emma; Thomas, Emma; Worrall, Linda; Rose, Miranda; Togher, Leanne; Nickels, Lyndsey; Hersh, Deborah; Godecke, Erin; O'Halloran, Robyn; Lamont, Sue; O'Connor, Claire; Clarke, Kim

    2015-01-01

    Objectives To develop and validate a national set of best practice statements for use in post-stroke aphasia rehabilitation. Design Literature review and statement validation using the RAND/UCLA Appropriateness Method (RAM). Participants A national Community of Practice of over 250 speech pathologists, researchers, consumers and policymakers developed a framework consisting of eight areas of care in aphasia rehabilitation. This framework provided the structure for the development of a care pathway containing aphasia rehabilitation best practice statements. Nine speech pathologists with expertise in aphasia rehabilitation participated in two rounds of RAND/UCLA appropriateness ratings of the statements. Panellists consisted of researchers, service managers, clinicians and policymakers. Main outcome measures Statements that achieved a high level of agreement and an overall median score of 7–9 on a nine-point scale were rated as ‘appropriate’. Results 74 best practice statements were extracted from the literature and rated across eight areas of care (eg, receiving the right referrals, providing intervention). At the end of Round 1, 71 of the 74 statements were rated as appropriate, no statements were rated as inappropriate, and three statements were rated as uncertain. All 74 statements were then rated again in the face-to-face second round. 16 statements were added through splitting existing items or adding new statements. Seven statements were deleted leaving 83 statements. Agreement was reached for 82 of the final 83 statements. Conclusions This national set of 82 best practice statements across eight care areas for the rehabilitation of people with aphasia is the first to be validated by an expert panel. These statements form a crucial component of the Australian Aphasia Rehabilitation Pathway (AARP) (http://www.aphasiapathway.com.au) and provide the basis for more consistent implementation of evidence-based practice in stroke rehabilitation. PMID:26137883

  7. Modulador-Demodulador ASK con codificación Manchester implementado en un microcontrolador PIC

    OpenAIRE

    Tarifa Amaya, Ariel; Del Risco Sánchez, Arnaldo; Cruz Hurtado, Juan Carlos

    2012-01-01

The design of a digital ASK modulator-demodulator with Manchester coding, implemented in the firmware of a PIC 18F4455 microcontroller, is presented, using the low-frequency (LF) standard operating at 125 kHz. This modulator-demodulator is used in the implementation of an active RFID tag. On request from a reader device, it transmits the value of a temperature sensor and its identifier. The reader device controls the communication with the tag. According to...

  8. Cephaloleia sp. Cerca a Vagelineata Pic*, una Plaga de la Palma Africana

    Directory of Open Access Journals (Sweden)

    Urueta Sandino Eduardo

    1972-08-01

Cephalolia sp. and Cephaloleila sp. have been used as synonyms of the genus Cephaloleia (Lepesme, 1947). The larval and adult stages are known to attack the foliage of the African oil palm (Elaeis guineensis Jacq.), often causing drying of the leaflets or their invasion by fungi. In Colombia, the Cephaloleia close to vagelineata Pic occurs in the Urabá zone and possibly in the Department of Santander.

  9. PIC simulations of magnetic field production by cosmic rays drifting upstream of SNR shocks

    International Nuclear Information System (INIS)

    Pohl, M.

    2008-01-01

    Turbulent magnetic-field amplification appears to operate near the forward shocks of young shell-type SNR. I review the observational constraints on the spatial distribution and amplitude of amplified magnetic field in this environment. I also present new PIC simulations of magnetic-field growth due to streaming cosmic rays. While the nature of the initial linear instability is largely determined by the choice of simulation parameters, the saturation always involves changing the bulk motion of cosmic rays and background plasma, which limits the field growth to amplitudes of a few times that of the homogeneous magnetic field. (author)

  10. Introducción a los microcontroladores RISC en Lenguaje C. PIC's de Microchips

    Directory of Open Access Journals (Sweden)

    Tito Flórez C.

    2000-01-01

As microcontroller programs become more complex, working in assembly language becomes more laborious and harder to manage, and interrupt handling is often a headache. A very good alternative for solving these problems is to program the devices in C. Programs then become much simpler, and interrupt handling likewise becomes straightforward. The most important elements and instructions are presented to enable the development of any number of programs for PICs.

  11. Modulador-Demodulador ASK con codificación Manchester implementado en un microcontrolador PIC

    Directory of Open Access Journals (Sweden)

    Ariel Tarifa Amaya

    2012-12-01

    Full Text Available The design of a digital ASK modulator-demodulator with Manchester encoding, implemented in the firmware of a PIC 18F4455 microcontroller, is presented. It uses the low-frequency (LF) standard, which operates at 125 kHz. The modulator-demodulator is used in the implementation of an active RFID tag, which transmits the temperature value from a sensor and its identifier at the request of a reader device. The reader device controls communication with the tag. According to the specialized literature, no similar system has been reported.

  12. Design and development of low cost thermoluminescence measurement system using PIC16F877 microcontroller

    International Nuclear Information System (INIS)

    Neelamegam, P; Rajendran, A

    2006-01-01

    A real-time microcontroller-based thermoluminescence system has been developed to measure light intensity and temperature and to control linear heating. This instrument permits investigations of thermoluminescent materials, such as alkali halides, phosphors and related compounds, which have important applications in materials science and in dosimetry. A low-cost dedicated PIC16F877-based microcontroller board was employed for the hardware. The details of its interface and of the software used to measure thermoluminescence and send data to a PC are explained in this paper.

  13. Design and development of low cost thermoluminescence measurement system using PIC16F877 microcontroller

    Energy Technology Data Exchange (ETDEWEB)

    Neelamegam, P [Department of Electronics and Instrumentation Engineering, Shunmuga Arts, Science, Technology and Research Academy (SASTRA), Deemed University, Thanjavur-613 402, Tamil Nadu (India); Rajendran, A [PG and Research Department of Applied Physics, Nehru Memorial College (Autonomous), Puthanampatti-621 007, Tiruchirappalli, Tamil Nadu (India)

    2006-05-15

    A real-time microcontroller-based thermoluminescence system has been developed to measure light intensity and temperature and to control linear heating. This instrument permits investigations of thermoluminescent materials, such as alkali halides, phosphors and related compounds, which have important applications in materials science and in dosimetry. A low-cost dedicated PIC16F877-based microcontroller board was employed for the hardware. The details of its interface and of the software used to measure thermoluminescence and send data to a PC are explained in this paper.

  14. Educating European Corporate Communication Professionals for Senior Management Positions: A Collaboration between UCLA's Anderson School of Management and the University of Lugano

    Science.gov (United States)

    Forman, Janis

    2005-01-01

    UCLA's program in strategic management for European corporate communication professionals provides participants with a concentrated, yet selective, immersion in those management disciplines taught at U.S. business schools, topics that are essential to their work as senior advisors to CEOs and as leaders in the field. The choice of topics…

  15. Neutralization of several adult and paediatric HIV-1 subtype C isolates using a shortened synthetic derivative of gp120 binding aptamer called UCLA1.

    CSIR Research Space (South Africa)

    Mufhandu, Hazel T

    2009-07-01

    Full Text Available This paper presents a chemically synthesised derivative of the B40 parental aptamer, called UCLA1 (Cohen et al., 2008), which was used for neutralization of endemic subtype C clinical isolates of HIV-1 from adult and paediatric patients and subtype B lab...

  16. Second-order particle-in-cell (PIC) computational method in the one-dimensional variable Eulerian mesh system

    International Nuclear Information System (INIS)

    Pyun, J.J.

    1981-01-01

    As part of an effort to incorporate the variable Eulerian mesh into the second-order PIC computational method, a truncation error analysis was performed to calculate the second-order error terms for the variable Eulerian mesh system. The results show that the maximum mesh-size increment/decrement is limited to α(Δr_i)^2, where Δr_i is the non-dimensional mesh size of the ith cell, and α is a constant of order one. The numerical solutions of Burgers' equation by the second-order PIC method in the variable Eulerian mesh system were compared with its exact solution. It was found that the second-order accuracy of the PIC method was maintained under the above condition. Additional problems were analyzed using the second-order PIC method in both variable and uniform Eulerian mesh systems. The results indicate that the second-order PIC method in the variable Eulerian mesh system can provide substantial computational time savings with no loss in accuracy

  17. Analysis of instability growth and collisionless relaxation in thermionic converters using 1-D PIC simulations

    International Nuclear Information System (INIS)

    Kreh, B.B.

    1994-12-01

    This work investigates the role that the beam-plasma instability may play in a thermionic converter. The traditional assumption of collisionally dominated relaxation is questioned, and the beam-plasma instability is proposed as a possible dominant relaxation mechanism. Theory is developed to describe the beam-plasma instability in the cold-plasma approximation, and the theory is tested with two common particle-in-cell (PIC) simulation codes. The theory is first confirmed using an unbounded-plasma PIC simulation employing periodic boundary conditions, ES1. The theoretically predicted growth rates are on the order of the plasma frequencies, and ES1 simulations verify these predictions to within the order of 1%. For typical conditions encountered in thermionic converters, the resulting growth period is on the order of 7 x 10^-11 seconds. The bounded-plasma simulation PDP1 was used to evaluate the influence of finite geometry and the electrode boundaries. For this bounded plasma, a two-stream interaction was supported, resulting in nearly complete thermalization in approximately 5 x 10^-10 seconds. Since the electron-electron collision rate of 10^9 Hz and the electron-atom collision rate of 10^7 Hz are significantly slower than the rate of development of these instabilities, the instabilities appear to be an important relaxation mechanism
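
The cold-plasma theory referred to in this abstract can be probed numerically. The sketch below is illustrative only (a 1% beam-to-plasma density ratio in units where the plasma frequency and beam velocity are one, not the thesis's converter parameters): it finds the fastest-growing root of the beam-plasma dispersion relation 1 = wp^2/w^2 + wb^2/(w - k*vb)^2, whose growth rate is indeed of the order of the plasma frequency.

```python
import numpy as np

def growth_rate(k, wp=1.0, wb=0.1, vb=1.0):
    """Largest Im(w) among the roots of the cold beam-plasma dispersion
    relation 1 = wp^2/w^2 + wb^2/(w - k*vb)^2, cleared into a quartic."""
    a = k * vb
    coeffs = [1.0, -2.0 * a, a**2 - wp**2 - wb**2, 2.0 * a * wp**2, -wp**2 * a**2]
    return max(np.roots(coeffs).imag)

# Scan wavenumbers around the resonance k ~ wp/vb and keep the fastest mode.
ks = np.linspace(0.5, 1.5, 501)
gmax = max(growth_rate(k) for k in ks)
print(f"max growth rate = {gmax:.3f} (units of wp)")
```

For a weak beam the expected maximum is (sqrt(3)/2)(n_b/2n_0)^(1/3) wp, about 0.15 wp for the density ratio assumed here, i.e. growth periods comparable to the plasma period, as the abstract describes.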

  18. Charge conserving current deposition scheme for PIC simulations in modified spherical coordinates

    Science.gov (United States)

    Cruz, F.; Grismayer, T.; Fonseca, R. A.; Silva, L. O.

    2017-10-01

    Global models of pulsar magnetospheres have been actively pursued in recent years. Both macro and microscopic (PIC) descriptions have been used, showing that collective processes of e-e+ plasmas dominate the global structure of pulsar magnetospheres. Since these systems are best described in spherical coordinates, the algorithms used in Cartesian simulations must be generalized. A problem of particular interest is that of charge conservation in PIC simulations. The complex geometry and irregular grids used to improve the efficiency of these algorithms represent major challenges in the design of a charge conserving scheme. Here we present a new first-order current deposition scheme for a 2D axisymmetric, log-spaced radial grid, that rigorously conserves charge. We benchmark this scheme in different scenarios, by integrating it with a spherical Yee scheme and Boris/Vay pushers. The results show that charge is conserved to machine precision, making it unnecessary to correct the electric field to guarantee charge conservation. This scheme will be particularly important for future studies aiming to bridge the microscopic physical processes of e-e+ plasma generation due to QED cascades, its self-consistent acceleration and radiative losses to the global dynamics of pulsar magnetospheres. Work supported by the European Research Council (InPairs ERC-2015-AdG 695088), FCT (Portugal) Grant PD/BD/114307/2016, and the Calouste Gulbenkian Foundation through the 2016 Scientific Research Stimulus Program.
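
The charge-conservation property claimed in this abstract can be illustrated in a much simpler setting. The sketch below is not the paper's scheme (which targets a 2D axisymmetric, log-spaced radial grid) but a 1D Cartesian, uniform-grid analogue of the same idea: with first-order (CIC) deposition, building the face currents from the prefix sum of the density change makes the discrete continuity equation hold to machine precision, so no electric-field correction is needed.

```python
import numpy as np

def cic_density(x, q, nx, dx):
    """First-order (CIC) charge deposition on a periodic 1D grid."""
    rho = np.zeros(nx)
    xi = x / dx
    i = int(np.floor(xi)) % nx
    frac = xi - np.floor(xi)
    rho[i] += q * (1.0 - frac) / dx
    rho[(i + 1) % nx] += q * frac / dx
    return rho

def deposit_current(x0, x1, q, nx, dx, dt):
    """Charge-conserving current: J on cell faces from the prefix sum of
    the density change, so that d(rho)/dt + dJ/dx = 0 exactly."""
    drho = cic_density(x1, q, nx, dx) - cic_density(x0, q, nx, dx)
    # J_{i+1/2} = J_{i-1/2} - dx/dt * drho_i  (gauge choice J_{-1/2} = 0)
    return -np.cumsum(drho) * dx / dt

nx, dx, dt, q = 16, 1.0, 0.1, 1.0
x0 = 5.3
x1 = x0 + 0.7 * dt                      # particle moves less than one cell
J = deposit_current(x0, x1, q, nx, dx, dt)
rho0, rho1 = cic_density(x0, q, nx, dx), cic_density(x1, q, nx, dx)
divJ = (J - np.roll(J, 1)) / dx         # discrete divergence at cell centers
residual = np.abs((rho1 - rho0) / dt + divJ).max()
print(f"continuity residual = {residual:.2e}")
```

The residual is at the level of floating-point round-off, the 1D counterpart of the "conserved to machine precision" statement above.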

  19. Food-pics: an image database for experimental research on eating and appetite

    Directory of Open Access Journals (Sweden)

    Jens eBlechert

    2014-06-01

    Full Text Available Our current environment is characterized by the omnipresence of food cues. The sight and smell of real foods, but also graphical depictions of appetizing foods, can guide our eating behavior, for example, by eliciting food craving and influencing food choice. The relevance of visual food cues to human information processing has been demonstrated by a growing body of studies employing food images across the disciplines of psychology, medicine, and neuroscience. However, currently used food image sets vary considerably across laboratories, and image characteristics (contrast, brightness, etc.) and food composition (calories, macronutrients, etc.) are often unspecified. These factors might have contributed to some of the inconsistencies of this research. To remedy this, we developed food-pics, a picture database comprising 568 food images and 315 non-food images along with detailed meta-data. A total of N = 1988 individuals with large variance in age and weight from German-speaking countries and North America provided normative ratings of valence, arousal, palatability, desire to eat, recognizability and visual complexity. Furthermore, data on macronutrients (g), energy density (kcal), and physical image characteristics (color composition, contrast, brightness, size, complexity) are provided. The food-pics image database is freely available under the creative commons license with the hope that the set will facilitate standardization and comparability across studies and advance experimental research on the determinants of eating behavior.

  20. Food-pics: an image database for experimental research on eating and appetite.

    Science.gov (United States)

    Blechert, Jens; Meule, Adrian; Busch, Niko A; Ohla, Kathrin

    2014-01-01

    Our current environment is characterized by the omnipresence of food cues. The sight and smell of real foods, but also graphical depictions of appetizing foods, can guide our eating behavior, for example, by eliciting food craving and influencing food choice. The relevance of visual food cues to human information processing has been demonstrated by a growing body of studies employing food images across the disciplines of psychology, medicine, and neuroscience. However, currently used food image sets vary considerably across laboratories, and image characteristics (contrast, brightness, etc.) and food composition (calories, macronutrients, etc.) are often unspecified. These factors might have contributed to some of the inconsistencies of this research. To remedy this, we developed food-pics, a picture database comprising 568 food images and 315 non-food images along with detailed meta-data. A total of N = 1988 individuals with large variance in age and weight from German-speaking countries and North America provided normative ratings of valence, arousal, palatability, desire to eat, recognizability and visual complexity. Furthermore, data on macronutrients (g), energy density (kcal), and physical image characteristics (color composition, contrast, brightness, size, complexity) are provided. The food-pics image database is freely available under the creative commons license with the hope that the set will facilitate standardization and comparability across studies and advance experimental research on the determinants of eating behavior.

  1. 3D PiC code investigations of Auroral Kilometric Radiation mechanisms

    International Nuclear Information System (INIS)

    Gillespie, K M; McConville, S L; Speirs, D C; Ronald, K; Phelps, A D R; Bingham, R; Cross, A W; Robertson, C W; Whyte, C G; He, W; Vorgul, I; Cairns, R A; Kellett, B J

    2014-01-01

    Efficient (∼1%) electron cyclotron radio emissions are known to originate in the X mode from regions of locally depleted plasma in the Earth's polar magnetosphere. These emissions are commonly referred to as the Auroral Kilometric Radiation (AKR). AKR occurs naturally in these polar regions where electrons are accelerated by electric fields into the increasing planetary magnetic dipole. Here conservation of the magnetic moment converts axial to rotational momentum, forming a horseshoe distribution in velocity phase space. This distribution is unstable to cyclotron emission, with radiation emitted in the X-mode. Initial studies were conducted in the form of 2D PiC code simulations [1] and a scaled laboratory experiment that was constructed to reproduce the mechanism of AKR. As studies progressed, 3D PiC code simulations were conducted to enable complete investigation of the complex interaction dimensions. A maximum efficiency of 1.25% is predicted from these simulations in the same mode and frequency as measured in the experiment. This is also consistent with geophysical observations and the predictions of theory.

  2. Vertical Distributions of Coccolithophores, PIC, POC, Biogenic Silica, and Chlorophyll a Throughout the Global Ocean.

    Science.gov (United States)

    Balch, William M; Bowler, Bruce C; Drapeau, David T; Lubelczyk, Laura C; Lyczkowski, Emily

    2018-01-01

    Coccolithophores are a critical component of global biogeochemistry, export fluxes, and seawater optical properties. We derive globally significant relationships to estimate integrated coccolithophore and coccolith concentrations as well as integrated concentrations of particulate inorganic carbon (PIC) from their respective surface concentration. We also examine surface versus integral relationships for other biogeochemical variables contributed by all phytoplankton (e.g., chlorophyll a and particulate organic carbon) or diatoms (biogenic silica). Integrals are calculated using both 100 m integrals and euphotic zone integrals (depth of 1% surface photosynthetically available radiation). Surface concentrations are parameterized in either volumetric units (e.g., m^-3) or values integrated over the top optical depth. Various relationships between surface concentrations and integrated values demonstrate that when surface concentrations are above a specific threshold, the vertical distribution of the property is biased to the surface layer, and when surface concentrations are below a specific threshold, the vertical distributions of the properties are biased to subsurface maxima. Results also show a highly predictable decrease in explained variance as vertical distributions become more vertically heterogeneous. These relationships have fundamental utility for extrapolating surface ocean color remote sensing measurements to 100 m depth or to the base of the euphotic zone, well beyond the depths of detection for passive ocean color remote sensors. Greatest integrated concentrations of PIC, coccoliths, and coccolithophores are found when there is moderate stratification at the base of the euphotic zone.
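
Surface-to-integral relationships of the kind derived in this study are commonly expressed as power laws fitted in log-log space. The sketch below fits one on synthetic data; the coefficients (60, 0.8) and the noise level are illustrative stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: a power-law relation between a surface PIC
# concentration and its 0-100 m depth integral, with lognormal scatter.
surface = rng.lognormal(mean=0.0, sigma=1.0, size=200)          # e.g. mg m^-3
integral = 60.0 * surface**0.8 * rng.lognormal(0.0, 0.1, 200)   # e.g. mg m^-2

# Fit log10(integral) = log10(a) + b * log10(surface), the usual form
# for extrapolating surface ocean-colour retrievals to integrated stocks.
b, log_a = np.polyfit(np.log10(surface), np.log10(integral), 1)
print(f"integral ~ {10**log_a:.1f} * surface^{b:.2f}")
```

The fit recovers the generating exponent and coefficient closely; with real data the residual scatter is what drives the decrease in explained variance noted above.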

  3. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  4. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as those for linear arrays, mesh-connected computers, and cube-connected computers. These algorithms can also be applied to shared-memory SIMD (single instruction stream, multiple data stream) computers, in which the whole sequence to be sorted can fit in the

  5. Software Engineering Support of the Third Round of Scientific Grand Challenge Investigations: An Earth Modeling System Software Framework Strawman Design that Integrates Cactus and UCLA/UCB Distributed Data Broker

    Science.gov (United States)

    Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn

    2002-01-01

    One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document proposes a strawman framework design for the climate community based on the integration of Cactus, from the relativistic physics community, and the UCLA/UCB Distributed Data Broker (DDB) from the climate community. This design is the result of an extensive survey of climate models and frameworks in the climate community as well as frameworks from many other scientific communities. The design addresses fundamental development and runtime needs using Cactus, a framework with interfaces for FORTRAN and C-based languages, and high-performance model communication needs using DDB. This document also specifically explores object-oriented design issues in the context of climate modeling as well as climate modeling issues in terms of object-oriented design.

  6. Implementation of a 3D plasma particle-in-cell code on a MIMD parallel computer

    International Nuclear Information System (INIS)

    Liewer, P.C.; Lyster, P.; Wang, J.

    1993-01-01

    A three-dimensional plasma particle-in-cell (PIC) code has been implemented on the Intel Delta MIMD parallel supercomputer using the General Concurrent PIC algorithm. The GCPIC algorithm uses a domain decomposition to divide the computation among the processors: A processor is assigned a subdomain and all the particles in it. Particles must be exchanged between processors as they move. Results are presented comparing the efficiency for 1-, 2- and 3-dimensional partitions of the three dimensional domain. This algorithm has been found to be very efficient even when a large fraction (e.g. 30%) of the particles must be exchanged at every time step. On the 512-node Intel Delta, up to 125 million particles have been pushed with an electrostatic push time of under 500 nsec/particle/time step
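
The GCPIC domain decomposition described above can be caricatured in a few lines: each processor owns a slab of the domain and all particles inside it, and after each push any particle that crossed a slab boundary is handed to its new owner. The serial sketch below (toy positions, a pure drift push, list appends instead of real message passing) shows only the bookkeeping, not the field solve or MPI communication of the actual code.

```python
# Periodic domain [0, L) split into equal slabs, one per "processor".
L, nproc = 1.0, 4
slab = L / nproc

def owner(x):
    """Index of the subdomain that owns position x."""
    return min(int(x / slab), nproc - 1)

# Per-processor lists of particle positions (toy data).
parts = [[0.05, 0.24], [0.26, 0.49], [0.51, 0.74], [0.76, 0.99]]

def push_and_exchange(parts, dx):
    """Drift every particle by dx, then reassign each one to the
    subdomain that now contains it (the 'exchange' step of GCPIC)."""
    moved = [[] for _ in range(nproc)]
    for plist in parts:
        for x in plist:
            xnew = (x + dx) % L          # periodic drift "push"
            moved[owner(xnew)].append(xnew)
    return moved

parts = push_and_exchange(parts, dx=0.03)
print([len(p) for p in parts])           # -> [2, 2, 2, 2]
```

Here one particle leaves and one enters each slab, so the counts stay balanced; in the paper's runs up to 30% of particles may move per step while the algorithm remains efficient.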

  7. Characterization of a trinuclear ruthenium species in catalytic water oxidation by Ru(bda)(pic)2 in neutral media.

    Science.gov (United States)

    Zhang, Biaobiao; Li, Fei; Zhang, Rong; Ma, Chengbing; Chen, Lin; Sun, Licheng

    2016-06-30

    A Ru(III)-O-Ru(IV)-O-Ru(III) type trinuclear species was crystallographically characterized in water oxidation by Ru(bda)(pic)2 (H2bda = 2,2'-bipyridine-6,6'-dicarboxylic acid; pic = 4-picoline) under neutral conditions. The formation of a ruthenium trimer due to the reaction of Ru(IV)=O with Ru(II)-OH2 was fully confirmed by chemical, electrochemical and photochemical methods. Since the oxidation of the trimer was proposed to lead to catalyst decomposition, the photocatalytic water oxidation activity was rationally improved by the suppression of the formation of the trimer.

  8. Analysis of the beam halo in negative ion sources by using 3D3V PIC code

    Energy Technology Data Exchange (ETDEWEB)

    Miyamoto, K., E-mail: kmiyamot@naruto-u.ac.jp [Naruto University of Education, 748 Nakashima, Takashima, Naruto-cho, Naruto-shi, Tokushima 772-8502 (Japan); Nishioka, S.; Goto, I.; Hatayama, A. [Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama 223-8522 (Japan); Hanada, M.; Kojima, A.; Hiratsuka, J. [Japan Atomic Energy Agency, 801-1 Mukouyama, Naka 319-0913 (Japan)

    2016-02-15

    The physical mechanism of the formation of the negative ion beam halo and the heat loads of the multi-stage acceleration grids are investigated with the 3D PIC (particle-in-cell) simulation. The following physical mechanism of the beam halo formation is verified: the beam core and the halo consist of the negative ions extracted from the center and the periphery of the meniscus, respectively. This difference in negative ion extraction location results in a geometrical aberration. Furthermore, it is shown that the predicted heat loads on the first and second acceleration grids are quantitatively improved compared with those from the 2D PIC simulation.

  9. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  10. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully appliedto large-scale scientific computations. This book demonstrates how avariety of applications in physics, biology, mathematics and other scienceswere implemented on real parallel computers to produce new scientificresults. It investigates issues of fine-grained parallelism relevant forfuture supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configuredifferent massively parallel machines, design and implement basic systemsoftware, and develop

  11. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.
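
Of the three decompositions reviewed, the spatial decomposition exploits the fact that short-range interactions only couple nearby regions. The sketch below (1D, serial, illustrative sizes; a real code would use 3D cells and message passing between the processors owning neighbouring regions) bins atoms into cells no narrower than the cutoff and searches only neighbouring cells, which is the serial core of that strategy.

```python
import numpy as np

# Bin atoms into cells at least as wide as the cutoff rc, so each cell
# (conceptually, each processor's region) only needs itself and its
# immediate neighbours to find every interacting pair.
rng = np.random.default_rng(1)
L, rc, n = 10.0, 1.0, 200
x = rng.uniform(0.0, L, size=n)

ncell = int(L / rc)
cells = [[] for _ in range(ncell)]
for i, xi in enumerate(x):
    cells[min(int(xi / rc), ncell - 1)].append(i)

def min_image(r):
    """Periodic minimum-image distance."""
    return min(r, L - r)

pairs = 0
for c in range(ncell):
    for d in (c, (c + 1) % ncell):       # own cell and right neighbour
        for i in cells[c]:
            for j in cells[d]:
                if (d != c or j > i) and min_image(abs(x[i] - x[j])) < rc:
                    pairs += 1
print(f"pairs within cutoff: {pairs}")
```

Each qualifying pair is found exactly once, and the cost scales with the local density rather than with all n^2 pairs, which is what makes the spatial decomposition attractive on parallel machines.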

  12. PIC Simulations in Low Energy Part of PIP-II Proton Linac

    Energy Technology Data Exchange (ETDEWEB)

    Romanov, Gennady

    2014-07-01

    The front end of the PIP-II linac is composed of a 30 keV ion source, a low energy beam transport line (LEBT), a 2.1 MeV radio frequency quadrupole (RFQ), and a medium energy beam transport line (MEBT). This configuration is currently being assembled at Fermilab to support a complete systems test. The front end represents the primary technical risk in PIP-II, so this step will validate the concept and demonstrate that the hardware can meet the specified requirements. The SC accelerating cavities right after the MEBT require a high-quality, well-defined beam after the RFQ to avoid excessive particle losses. In this paper we present recent progress in beam dynamics studies, using the CST PIC simulation code, investigating the partial neutralization effect in the LEBT, halo and tail formation in the RFQ, and total emittance growth and beam losses along the low-energy part of the linac.

  13. Study and Development of an acquisition chain of gamma radiation based on PIC16F877

    International Nuclear Information System (INIS)

    Blidi, Hamza

    2011-01-01

    The project consists in designing and building electronic cards for the acquisition of gamma radiation, with the aim of extracting energy and spectral characteristics from it. A scintillation detector provides an electrical signal with a characteristic shape, which is transformed into a Gaussian signal with the support of an amplifier card. Subsequently, an analogue card named Stretcher processes this signal in order to produce a set of digital signals describing its morphological and energy aspects (peak detection, zero-level detection...). These are then exploited and processed by a control card built around a PIC16F877. The processing is performed by code written in the C language, implementing the finite state machine (FSM) of the Wilkinson converter, in order to obtain the final result of the conversion over a wide energy/frequency range (nuclear spectrometry).
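
The Wilkinson conversion that the firmware implements as a finite state machine can be summarized as: hold the pulse peak on a capacitor, discharge it at a constant rate, and count clock ticks until the voltage crosses zero, so the count is proportional to the pulse height and hence to the deposited energy. A toy model of that principle (integer millivolt units and discharge rate are illustrative; this is not the thesis code):

```python
# RUNDOWN: constant-current discharge of the held peak, one step per
# clock tick; the tick count at zero crossing is the conversion result.
def wilkinson_convert(peak_mv, discharge_mv_per_tick=10):
    v, ticks = peak_mv, 0
    while v > 0:
        v -= discharge_mv_per_tick
        ticks += 1
    return ticks

print(wilkinson_convert(2000), wilkinson_convert(4000))   # 200 400
```

The linearity (twice the peak gives twice the count) is what lets the counter value be histogrammed directly as an energy spectrum.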

  14. Automatic Color Sorting Machine Using TCS230 Color Sensor And PIC Microcontroller

    Directory of Open Access Journals (Sweden)

    Kunhimohammed C K

    2015-12-01

    Full Text Available Sorting of products is a very difficult industrial process, and continuous manual sorting creates consistency issues. This paper describes a working prototype designed for automatic sorting of objects based on color. A TCS230 sensor was used to detect the color of the product, and a PIC16F628A microcontroller was used to control the overall process. The identification of the color is based on frequency analysis of the output of the TCS230 sensor. Two conveyor belts were used, each controlled by a separate DC motor. The first belt is for placing the product to be analyzed by the color sensor, and the second belt is for moving the container, which has separate compartments, in order to separate the products. The experimental results indicate that the prototype can fulfill the needs for higher production and precise quality in the field of automation.
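
The TCS230 emits a square wave whose frequency grows with the light intensity seen through the currently selected red, green, or blue photodiode filter, so a minimal frequency-analysis classifier just picks the filter that produced the highest frequency. A sketch of that idea (the kHz readings are hypothetical, and a real firmware would also calibrate against white and black references):

```python
# Classify an object's colour from the TCS230 output frequency measured
# with each of the three colour filters selected in turn.
def classify(freq_r, freq_g, freq_b):
    readings = {"red": freq_r, "green": freq_g, "blue": freq_b}
    return max(readings, key=readings.get)

print(classify(42.0, 12.5, 9.8))   # a red object reflects mostly red light
```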

  15. Multi-dimensional PIC-simulations of parametric instabilities for shock-ignition conditions

    Directory of Open Access Journals (Sweden)

    Riconda C.

    2013-11-01

    Full Text Available Laser-plasma interaction is investigated for conditions relevant to the shock-ignition (SI) scheme of inertial confinement fusion, using two-dimensional particle-in-cell (PIC) simulations of an intense laser beam propagating in a hot, large-scale, non-uniform plasma. The temporal evolution and interdependence of Raman (SRS) and Brillouin (SBS) side/backscattering as well as Two-Plasmon Decay (TPD) are studied. TPD develops in concomitance with SRS, creating a broad spectrum of plasma waves near the quarter-critical density. They are rapidly saturated due to plasma cavitation within a few picoseconds. The hot-electron spectrum created by SRS and TPD is relatively soft, limited to energies below one hundred keV.

  16. Study of negative hydrogen ion beam optics using the 3D3V PIC model

    International Nuclear Information System (INIS)

    Miyamoto, K.; Nishioka, S.; Goto, I.; Hatayama, A.; Hanada, M.; Kojima, A.

    2015-01-01

    The mechanism of negative ion extraction under realistic conditions with a complex magnetic field is studied by using the 3D PIC simulation code. The extraction region of the negative ion source for the negative-ion-based neutral beam injection system in fusion reactors is modelled. It is shown that the E x B drift of electrons is caused by the magnetic filter and the electron suppression magnetic field, and results in an asymmetry of the plasma meniscus. Furthermore, it is indicated that the asymmetry of the plasma meniscus results in an asymmetry of the negative ion beam profile, including the beam halo. It is demonstrated theoretically that the E x B drift is not significantly weakened by elastic collisions of the electrons with neutral particles

  17. Study of negative hydrogen ion beam optics using the 3D3V PIC model

    Energy Technology Data Exchange (ETDEWEB)

    Miyamoto, K., E-mail: kmiyamot@naruto-u.ac.jp [Naruto University of Education, 748 Nakashima, Takashima, Naruto-cho, Naruto-shi, Tokushima, 772-8502 (Japan); Nishioka, S.; Goto, I.; Hatayama, A. [Faculty of Science and Technology, Keio University, 3-14-1, Hiyoshi, Kohoku-ku, Yokohama, 223-8522 (Japan); Hanada, M.; Kojima, A. [Japan Atomic Energy Agency, 801-1,Mukoyama, Naka, 319-0913 (Japan)

    2015-04-08

    The mechanism of negative ion extraction under realistic conditions with a complex magnetic field is studied by using the 3D PIC simulation code. The extraction region of the negative ion source for the negative-ion-based neutral beam injection system in fusion reactors is modelled. It is shown that the E x B drift of electrons is caused by the magnetic filter and the electron suppression magnetic field, and results in an asymmetry of the plasma meniscus. Furthermore, it is indicated that the asymmetry of the plasma meniscus results in an asymmetry of the negative ion beam profile, including the beam halo. It is demonstrated theoretically that the E x B drift is not significantly weakened by elastic collisions of the electrons with neutral particles.

  18. Optimizing fusion PIC code performance at scale on Cori Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, T. S.; Deslippe, J.

    2017-07-23

    In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single node performance due to enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, near half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.
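
The roofline methodology used to guide this optimization bounds attainable performance by min(peak compute, arithmetic intensity x memory bandwidth). A sketch with rough, illustrative Knights Landing figures (about 3 TFLOP/s double-precision peak and 450 GB/s MCDRAM bandwidth; these numbers are assumptions, not values from the paper):

```python
# Roofline bound: a kernel with arithmetic intensity ai (FLOP/byte) can
# attain at most min(peak, ai * bandwidth) on a given machine.
def roofline(ai_flop_per_byte, peak_gflops=3000.0, bw_gbs=450.0):
    return min(peak_gflops, ai_flop_per_byte * bw_gbs)

for ai in (0.5, 2.0, 10.0):
    print(f"AI = {ai:>4} FLOP/byte -> {roofline(ai):6.0f} GFLOP/s")
```

Low-intensity kernels sit on the bandwidth slope, which is why the memory-layout optimizations mentioned above matter; beyond the ridge point (here about 6.7 FLOP/byte) the compute peak takes over and vectorization becomes the limiter.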

  19. A COMPENSATOR APPLICATION USING SYNCHRONOUS MOTOR WITH A PI CONTROLLER BASED ON PIC

    Directory of Open Access Journals (Sweden)

    Ramazan BAYINDIR

    2009-01-01

    Full Text Available In this paper, PI control of a synchronous motor has been realized using a PIC 18F452 microcontroller, and the motor has been operated in ohmic, inductive and capacitive modes with different excitation currents. Instead of evaluating the integral term of the PI controller, which is difficult to implement in a digital system, the sum of all error values over a defined time period is multiplied by the sampling period. The reference parameters of the PI algorithm are determined with the Ziegler-Nichols method. These parameters are computed in the microcontroller and updated according to the algorithm. In addition, this work is designed to provide visualization for the users: the current, voltage and power factor of the synchronous motor can be observed instantly on an LCD.
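
The discretization described above replaces the integral term with a running sum of errors scaled by the sampling period. A minimal sketch of that law (the gains and sampling period are illustrative, not the paper's Ziegler-Nichols values):

```python
# Discrete PI controller: u[n] = Kp*e[n] + Ki*Ts*sum(e[0..n]),
# i.e. the integral approximated by the accumulated error times Ts.
def make_pi(kp, ki, ts):
    acc = 0.0
    def step(error):
        nonlocal acc
        acc += error                        # running sum of all errors
        return kp * error + ki * acc * ts   # proportional + summed integral
    return step

pi = make_pi(kp=2.0, ki=0.5, ts=0.01)
outputs = [pi(1.0) for _ in range(3)]       # constant error of 1.0
print(outputs)                              # integral part grows each step
```

Accumulating the sum once per sample avoids any numerical integration on the microcontroller; only a multiply-accumulate per step is needed.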

  20. Low cost digital wind speed meter with wind direction using PIC16F877A

    Energy Technology Data Exchange (ETDEWEB)

    Sujod, M.Z.; Ismail, M.M. [Malaysia Pahang Univ., Pahang (Malaysia). Faculty of Electrical and Electronics Engineering

    2008-07-01

    Weather measurement tools are necessary to determine actual weather conditions and for forecasting. Wind is one of the weather elements that can be measured using an anemometer, a device for measuring the velocity or the pressure of the wind and one of the instruments used in weather stations. This paper described a circuit design for a wind speed and direction meter and the programming created to measure and display wind speed and direction. A microcontroller (PIC16F877A) was employed as the central processing unit for the digital wind speed and direction meter. The paper presented and discussed the hardware and software implementation as well as the calibration and results. The paper also discussed cost estimation and future recommendations. It was concluded that the hardware and software were carefully selected after considering the development cost, which was much lower than market prices. 4 refs., 8 figs.

  1. Transpiration of helium and carbon monoxide through a multihundred watt, PICS filter

    International Nuclear Information System (INIS)

    Schaeffer, D.R.

    1976-01-01

    The transpiration of CO through the Multihundred Watt (MHW) filter can be described by Fick's first law or as a first-order, reversible reaction. From Fick's first law, a ''diffusion'' coefficient of 7.8 x 10^-4 cm·L/sec (L is the average path length through the filter) was determined. For the first-order reversible reaction, a rate constant of 0.0058 hr^-1 was obtained for both the forward and reverse reactions (they were assumed to be equal). This corresponds to a half-life of 120 hr. It was also concluded that the rate constants, and thus the transpiration rates, determined for the test are smaller than those expected in the IHS. The combined effect of increasing the number of filters, changing the volumes, and increasing the temperature changes the rate constant of the transpiration into the PICS to roughly 0.074 hr^-1 (t_1/2 = 9.4 hr) and out of the PICS to 0.84 hr^-1 (t_1/2 = 0.8 hr). Of the two suggested mechanisms for the generation of CO inside the IHS, the cyclic process requires a much larger rate of transpiration than the process requiring oxygen exchange of CO given off by the graphite. The data indicate that the cyclic process can provide the CO generation rates observed in the IHS gas taps if there is no time delay from any other kinetic process involved in the formation of CO or CO2. Since the cyclic process (which requires the fastest rate of transpiration) appears possible, this study does not identify which reaction is occurring but concludes that both are possible.
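The half-lives quoted in the abstract follow directly from the first-order relation t_1/2 = ln(2)/k, which can be checked against the stated rate constants:

```python
import math

def half_life(k_per_hr):
    """Half-life of a first-order process: t_1/2 = ln(2) / k."""
    return math.log(2) / k_per_hr

# Rate constants from the abstract:
half_life(0.0058)   # ~120 hr, as quoted
half_life(0.074)    # ~9.4 hr (transpiration into the PICS)
half_life(0.84)     # ~0.8 hr (transpiration out of the PICS)
```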

  2. Sistema Inteligente de Supervisión de Alarmas Basado en Microcontroladores PIC, SISAP

    Directory of Open Access Journals (Sweden)

    Ioslán Sánchez Martínez

    2010-09-01

    This article describes the current stage of development of the SISAP prototype (Intelligent Alarm Supervision System based on PIC microcontrollers), developed from a proposal by the Sancti Spíritus Territorial Directorate of ETECSA with the aim of extending the capabilities of the systems installed for supervising technological alarms at unattended sites in the territory. The SISAP device is at development version 0.5, in an "unfinished" state. At this stage it can handle up to 40 events, which may be on/off signals or voltage levels, and transmit them over a telephone interface using a DTMF tone protocol. Keywords: Alarms, PIC microcontroller, Voltages, On/off events, DTMF tones.

  3. A framework for improving access and customer service times in health care: application and analysis at the UCLA Medical Center.

    Science.gov (United States)

    Duda, Catherine; Rajaram, Kumar; Barz, Christiane; Rosenthal, J Thomas

    2013-01-01

    There has been an increasing emphasis on health care efficiency and costs and on improving quality in health care settings such as hospitals or clinics. However, there has not been sufficient work on methods of improving access and customer service times in health care settings. The study develops a framework for improving access and customer service time for health care settings. In the framework, the operational concept of the bottleneck is synthesized with queuing theory to improve access and reduce customer service times without reduction in clinical quality. The framework is applied at the Ronald Reagan UCLA Medical Center to determine the drivers for access and customer service times and then provides guidelines on how to improve these drivers. Validation using simulation techniques shows significant potential for reducing customer service times and increasing access at this institution. Finally, the study provides several practice implications that could be used to improve access and customer service times without reduction in clinical quality across a range of health care settings from large hospitals to small community clinics.

  4. 3-D electromagnetic plasma particle simulations on the Intel Delta parallel computer

    International Nuclear Information System (INIS)

    Wang, J.; Liewer, P.C.

    1994-01-01

    A three-dimensional electromagnetic PIC code has been developed on the 512-node Intel Touchstone Delta MIMD parallel computer. This code is based on the General Concurrent PIC algorithm, which uses a domain decomposition to divide the computation among the processors. The 3D simulation domain can be partitioned into 1-, 2-, or 3-dimensional sub-domains. Particles must be exchanged between processors as they move among the subdomains. The Intel Delta allows one to use this code for very-large-scale simulations (i.e., over 10^8 particles and 10^6 grid cells). The parallel efficiency of this code is measured, and the overall code performance on the Delta is compared with that on Cray supercomputers. It is shown that the code runs with a high parallel efficiency of ≥ 95% for large problems. The particle push time achieved is 115 nsec/particle/time step for 162 million particles on 512 nodes. Compared with the performance on a single-processor Cray C90, this represents a factor of 58 speedup. The code uses a finite-difference leapfrog method for the field solve, which is significantly more efficient than fast Fourier transforms on parallel computers. The performance of this code on the 128-node Cray T3D will also be discussed.
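The particle-exchange step of a domain-decomposed PIC code can be sketched in one dimension: after the push, each processor sorts its particles into those it keeps and those it must send to its left or right neighbour. This is an illustrative sketch only, not the General Concurrent PIC implementation itself, and the function name is hypothetical.

```python
def partition_particles(xs, x_lo, x_hi):
    """Split particle positions into those staying in the local slab
    [x_lo, x_hi) and those to be sent to the left/right neighbour."""
    keep, send_left, send_right = [], [], []
    for x in xs:
        if x < x_lo:
            send_left.append(x)
        elif x >= x_hi:
            send_right.append(x)
        else:
            keep.append(x)
    return keep, send_left, send_right

# A rank owning [0, 1) after one push step:
keep, left, right = partition_particles([0.1, 0.5, 1.2, -0.3], 0.0, 1.0)
# keep=[0.1, 0.5]; 1.2 goes right, -0.3 goes left
```

In the real code the send buffers would be handed to the message-passing layer and merged into the neighbours' particle arrays.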

  5. LGBT and Information Studies: The Library and Archive OUTreach Symposium at UCLA; and In the Footsteps of Barbara Gittings: An Appreciation

    OpenAIRE

    Keilty, Patrick

    2007-01-01

    On November 17, 2006 the InterActions editorial team attended the Library and Archives OUTreach symposium at UCLA. This galvanizing event brought together academics, practitioners, and activists from the information studies field to discuss the importance of increasing visibility around lesbian, gay, bisexual, and transgendered (LGBT) issues as they pertain to libraries and information seeking. Given the tremendous energy generated by these proceedings, we asked Patrick Keilty, a doctoral st...

  6. An analysis of appropriate delivery of postoperative radiation therapy for endometrial cancer using the RAND/UCLA Appropriateness Method: Executive summary

    Directory of Open Access Journals (Sweden)

    Ellen Jones, MD, PhD

    2016-01-01

    Conclusions: This analysis based on the RAND/UCLA Method shows significant agreement with the 2014 endometrial Guideline. Areas of divergence, often in scenarios with low-level evidence, included use of external beam RT plus vaginal brachytherapy in stages II and III and external beam RT alone in early-stage patients. Furthermore, the analysis explores other important questions regarding management of this disease site.

  7. A study on radiation-resistance of PIC (polymer-impregnated concrete) for container of conditioning and disposal of low and intermediate level radioactive wastes

    International Nuclear Information System (INIS)

    Ishizaki, Kanjiro; Sudoh, Giichi; Araki, Kunio; Kasahara, Yuko.

    1983-01-01

    The radiation resistance of PIC was evaluated by irradiating test pieces with gamma rays. All the test pieces had the JIS mortar size of 4 x 4 x 16 cm. JIS mortar and concrete were used as specimens. The maximum aggregate size of the concrete was 10 mm. The specimens, impregnated with MMA (methyl methacrylate) monomer and with a solution of 10% PSt (polystyrene) in MMA monomer (MMA·PSt), were polymerized by irradiating for 5 hr at a dose rate of 1 MR (1 x 10^6 Roentgen)/hr. PIC specimens were exposed to up to 1000 MR of 60Co gamma rays in air and under water, which simulate shallow land disposal and deep sea dumping conditions, respectively. The lowering of strength of the PIC exposed to gamma rays under water was larger than that of the PIC in air. An improving effect of the added PSt on the radiation resistance was observed. The 50 MR-irradiated MMA·PSt-PIC under water, which had a residual compressive strength of 85%, was resistant to gamma rays. When this residual strength is regarded as the limit of radiation resistance in air, the limits of MMA-PIC and MMA·PSt-PIC are approximately 25 MR and 150 MR, respectively. The lowering of strength was mainly due to the deterioration of the MMA polymer in the PIC. The total exposure dose for a PIC container was estimated by assuming conditions for the packaged radioactive wastes, dose rate, container, and so on. The total exposure dose on a PIC container over 100 years comes to roughly 1.25 MR. Therefore, it is estimated that PIC containers for the conditioning and disposal of low and intermediate level radioactive wastes have sufficient resistance to the radiation arising from the wastes. (author)

  8. Activation of AMP-Activated Protein Kinase α and Extracellular Signal-Regulated Kinase Mediates CB-PIC-Induced Apoptosis in Hypoxic SW620 Colorectal Cancer Cells

    Directory of Open Access Journals (Sweden)

    Sung-Yun Cho

    2013-01-01

    Here, the antitumor mechanism of the cinnamaldehyde derivative CB-PIC was elucidated in human SW620 colon cancer cells. CB-PIC significantly exerted cytotoxicity, increased sub-G1 accumulation, and cleaved PARP with apoptotic features, while it enhanced the phosphorylation of AMPK alpha and ACC and activated ERK in hypoxic SW620 cells. Furthermore, CB-PIC suppressed the expression of HIF1 alpha, Akt, and mTOR and activated AMPK phosphorylation in hypoxic SW620 cells. Conversely, silencing of AMPKα blocked the PARP cleavage and ERK activation induced by CB-PIC, while the ERK inhibitor PD 98059 attenuated the phosphorylation of AMPKα in hypoxic SW620 cells, implying cross-talk between ERK and AMPKα. Furthermore, cotreatment with CB-PIC and metformin enhanced the inhibition of HIF1α and Akt/mTOR and the activation of AMPKα and pACC in hypoxic SW620 cells. In addition, CB-PIC suppressed the growth of SW620 cells inoculated in BALB/c athymic nude mice, and immunohistochemistry revealed that CB-PIC treatment attenuated the expression of Ki-67, CD34, and CAIX and increased the expression of pAMPKα in the CB-PIC-treated group. Interestingly, CB-PIC showed better antitumor activity in SW620 colon cancer cells under hypoxia than under normoxia, suggesting that it may be applicable to chemoresistance. Overall, our findings suggest that activation of AMPKα and ERK mediates CB-PIC-induced apoptosis in hypoxic SW620 colon cancer cells.

  9. A parallel implementation of particle tracking with space charge effects on an INTEL iPSC/860

    International Nuclear Information System (INIS)

    Chang, L.; Bourianoff, G.; Cole, B.; Machida, S.

    1993-05-01

    Particle-tracking simulation is one of the scientific applications that is well suited to parallel computation. At the Superconducting Super Collider, it has been theoretically and empirically demonstrated that particle tracking on a designed lattice can achieve very high parallel efficiency on a MIMD Intel iPSC/860 machine. The key to such success is the realization that the particles can be tracked independently without considering their interaction. The perfectly parallel nature of particle tracking is broken if the interaction effects between particles are included. The space charge introduces an electromagnetic force that affects the motion of the tracked particles in 3-D space. For accurate modeling of the beam dynamics with space charge effects, one needs to solve the three-dimensional Maxwell field equations, usually by a particle-in-cell (PIC) algorithm. This requires each particle to communicate with its neighboring grid points to compute the momentum changes at each time step. It is expected that the 3-D PIC method will degrade the parallel efficiency of a particle-tracking implementation on any parallel computer. In this paper, we describe an efficient scheme for implementing particle tracking with space charge effects on an Intel iPSC/860 machine. Experimental results show that a parallel efficiency of 75% can be obtained.

  10. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well-established parallel programming models OpenMP and MPI both target lower-level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher-level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable number of standardization proposals and technical specifications being developed. Those efforts, however, have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  11. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  12. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of ... in the optimal O(psort(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and psort(N) is the parallel I/O complexity of sorting N elements using P processors.

  13. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
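The undersampling-aliasing relationship described above can be demonstrated in one dimension with synthetic data: keeping every other k-space sample and reconstructing at half the field of view superposes image points half a FOV apart. This is a toy illustration of the aliasing mechanism, not an MRI reconstruction.

```python
import numpy as np

# R = 2 undersampling in k-space folds the "image": with numpy's
# unnormalized FFT conventions, the half-size reconstruction equals the
# sum of the two halves of the fully sampled signal.
rng = np.random.default_rng(0)
img = rng.standard_normal(8)          # 1D stand-in for an image row
kspace = np.fft.fft(img)
aliased = np.fft.ifft(kspace[::2])    # discard odd k-space lines

assert np.allclose(aliased, img[:4] + img[4:])  # folded copies superpose
```

SENSE-type methods unfold exactly this superposition using the distinct coil sensitivities at the overlapped locations.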

  14. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
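One of the patterns named above, the prefix scan, can be sketched with the classic log-step (Hillis-Steele) formulation: each pass combines elements a power-of-two stride apart, and all the work within a pass is independent, which is what a parallel machine would execute concurrently. This sequential sketch is illustrative only.

```python
def hillis_steele_scan(xs):
    """Inclusive prefix sum via log2(n) passes; within each pass every
    element update is independent (the parallelizable step)."""
    a = list(xs)
    step = 1
    while step < len(a):
        a = [a[i] + (a[i - step] if i >= step else 0) for i in range(len(a))]
        step *= 2
    return a

hillis_steele_scan([3, 1, 4, 1, 5])  # -> [3, 4, 8, 9, 14]
```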

  15. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also include heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  16. SPECT3D - A multi-dimensional collisional-radiative code for generating diagnostic signatures based on hydrodynamics and PIC simulation output

    Science.gov (United States)

    MacFarlane, J. J.; Golovkin, I. E.; Wang, P.; Woodruff, P. R.; Pereyra, N. A.

    2007-05-01

    SPECT3D is a multi-dimensional collisional-radiative code used to post-process the output from radiation-hydrodynamics (RH) and particle-in-cell (PIC) codes to generate diagnostic signatures (e.g. images, spectra) that can be compared directly with experimental measurements. This ability to post-process simulation code output plays a pivotal role in assessing the reliability of RH and PIC simulation codes and their physics models. SPECT3D has the capability to operate on plasmas in 1D, 2D, and 3D geometries. It computes a variety of diagnostic signatures that can be compared with experimental measurements, including: time-resolved and time-integrated spectra, space-resolved spectra and streaked spectra; filtered and monochromatic images; and X-ray diode signals. Simulated images and spectra can include the effects of backlighters, as well as the effects of instrumental broadening and time-gating. SPECT3D also includes a drilldown capability that shows where frequency-dependent radiation is emitted and absorbed as it propagates through the plasma towards the detector, thereby providing insights on where the radiation seen by a detector originates within the plasma. SPECT3D has the capability to model a variety of complex atomic and radiative processes that affect the radiation seen by imaging and spectral detectors in high energy density physics (HEDP) experiments. LTE (local thermodynamic equilibrium) or non-LTE atomic level populations can be computed for plasmas. Photoabsorption rates can be computed using either escape probability models or, for selected 1D and 2D geometries, multi-angle radiative transfer models. The effects of non-thermal (i.e. non-Maxwellian) electron distributions can also be included. To study the influence of energetic particles on spectra and images recorded in intense short-pulse laser experiments, the effects of both relativistic electrons and energetic proton beams can be simulated. 
SPECT3D is a user-friendly software package that runs

  17. The UCLA Multimodal Connectivity Database: A web-based platform for brain connectivity matrix sharing and analysis

    Directory of Open Access Journals (Sweden)

    Jesse A. Brown

    2012-11-01

    Brain connectomics research has rapidly expanded using functional MRI (fMRI) and diffusion-weighted MRI (dwMRI). A common product of these varied analyses is a connectivity matrix (CM). A CM stores the connection strength between any two regions (nodes) in a brain network. This format is useful for several reasons: (1) it is highly distilled, with minimal data size and complexity; (2) graph theory can be applied to characterize the network's topology; and (3) it retains sufficient information to capture individual differences such as age, gender, intelligence quotient, or disease state. Here we introduce the UCLA Multimodal Connectivity Database (http://umcd.humanconnectomeproject.org), an openly available website for brain network analysis and data sharing. The site is a repository for researchers to publicly share CMs derived from their data. The site also allows users to select any CM shared by another user, compute graph theoretical metrics on the site, visualize a report of results, or download the raw CM. To date, users have contributed over 2000 individual CMs, spanning different imaging modalities (fMRI, dwMRI) and disorders (Alzheimer's, autism, Attention Deficit Hyperactivity Disorder). To demonstrate the site's functionality, whole brain functional and structural connectivity matrices were derived from 60 subjects' (ages 26-45) resting state fMRI (rs-fMRI) and dwMRI data and uploaded to the site. The site was utilized to derive graph theory global and regional measures for the rs-fMRI and dwMRI networks. Global and nodal graph theoretical measures between functional and structural networks exhibit low correspondence. This example demonstrates how this tool can enhance the comparability of brain networks from different imaging modalities and studies. The existence of this connectivity-based repository should foster broader data sharing and enable larger-scale meta-analyses comparing networks across imaging modality, age group, and disease state.
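Graph-theoretic measures of the kind the site computes can be illustrated directly on a connectivity matrix. The toy 4-node binary CM below is hypothetical, not data from the database.

```python
import numpy as np

# Hypothetical 4-node undirected binary connectivity matrix (CM):
cm = np.array([[0, 1, 1, 0],
               [1, 0, 1, 0],
               [1, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)

degree = cm.sum(axis=0)               # nodal measure: degree of each region
n = cm.shape[0]
density = cm.sum() / (n * (n - 1))    # global measure: fraction of edges
# degree = [2, 2, 3, 1]; density = 8/12 ≈ 0.667
```

Real analyses on weighted fMRI/dwMRI matrices use the same idea with strength (weighted degree) and further metrics such as clustering and path length.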

  18. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  19. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and relative to scalar calculations it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively on the CRAY C-90

  20. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are the Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, the Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  1. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C^3P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C^3P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C^3P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  2. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
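The block-decomposed sieve can be sketched sequentially: each processor would receive one sub-range [lo, hi) of the integers and mark composites in it using a shared set of base primes up to sqrt(N). The decomposition and names below are illustrative, not the paper's hypercube implementation.

```python
def sieve_block(lo, hi, base_primes):
    """Mark composites in [lo, hi) using the base primes — the work one
    processor performs under a block decomposition of the integer range."""
    is_prime = [True] * (hi - lo)
    for p in base_primes:
        # first multiple of p in the block, but never below p*p
        start = max(p * p, ((lo + p - 1) // p) * p)
        for m in range(start, hi, p):
            is_prime[m - lo] = False
    return [lo + i for i, v in enumerate(is_prime) if v and lo + i > 1]

# Sieve [0, 100) in four blocks; base primes cover sqrt(100).
base = [2, 3, 5, 7]
primes = sum((sieve_block(lo, lo + 25, base) for lo in range(0, 100, 25)), [])
```

In the parallel version the blocks run concurrently and the per-block results are concatenated, which is why fixed-size-per-processor scaling is natural for this algorithm.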

  3. Preparation of Water-soluble Polyion Complex (PIC Micelles Covered with Amphoteric Random Copolymer Shells with Pendant Sulfonate and Quaternary Amino Groups

    Directory of Open Access Journals (Sweden)

    Rina Nakahata

    2018-02-01

    An amphoteric random copolymer (P(SA91)) composed of anionic sodium 2-acrylamido-2-methylpropanesulfonate (AMPS, S) and cationic 3-acrylamidopropyl trimethylammonium chloride (APTAC, A) was prepared via reversible addition-fragmentation chain transfer (RAFT) radical polymerization. The subscripts in the abbreviations indicate the degree of polymerization (DP). Furthermore, AMPS and APTAC were polymerized using a P(SA91) macro-chain transfer agent to prepare an anionic diblock copolymer (P(SA91)S67) and a cationic diblock copolymer (P(SA91)A88), respectively. The DP was estimated from quantitative 13C NMR measurements. A stoichiometrically charge-neutralized mixture of aqueous P(SA91)S67 and P(SA91)A88 formed water-soluble polyion complex (PIC) micelles comprising PIC cores and amphoteric random copolymer shells. The PIC micelles were in a dynamic equilibrium state between PIC micelles and charge-neutralized small aggregates composed of a P(SA91)S67/P(SA91)A88 pair. Interactions between PIC micelles and fetal bovine serum (FBS) in phosphate buffered saline (PBS) were evaluated from changes in the hydrodynamic radius (Rh) and light scattering intensity (LSI). Increases in Rh and LSI were not observed for the mixture of PIC micelles and FBS in PBS for one day. This observation suggests that there is no interaction between the PIC micelles and proteins, because the PIC micelle surfaces were covered with amphoteric random copolymer shells. However, with increasing time, the diblock copolymer chains that dissociated from the PIC micelles interacted with proteins.

  4. MEASURED DIAMETERS OF TWO F STARS IN THE β PIC MOVING GROUP

    International Nuclear Information System (INIS)

    Simon, M.; Schaefer, G. H.

    2011-01-01

    We report angular diameters of HIP 560 and HIP 21547, two F spectral-type pre-main-sequence members of the β Pic Moving Group. We used the east-west 314 m long baseline of the CHARA Array. The measured limb-darkened angular diameters of HIP 560 and HIP 21547 are 0.492 ± 0.032 and 0.518 ± 0.009 mas, respectively. The corresponding stellar radii are 2.1 and 1.6 R☉ for HIP 560 and HIP 21547, respectively. These values indicate that the stars are truly young. Analyses using the evolutionary tracks calculated by Siess, Dufour, and Forestini and the tracks of the Yonsei-Yale group yield consistent results. Analyzing the measurements on an angular diameter versus color diagram, we find that the ages of the two stars are indistinguishable; their average value is 13 ± 2 Myr. The masses of HIP 560 and HIP 21547 are 1.65 ± 0.02 and 1.75 ± 0.05 M☉, respectively. However, analysis of the stellar parameters on a Hertzsprung-Russell diagram yields ages at least 5 Myr older. Both stars are rapid rotators. The discrepancy between the two types of analyses has a natural explanation in gravitational darkening. Stellar oblateness, however, does not affect our measurements of angular diameters.
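The conversion from a measured angular diameter to a linear radius is R = θ·d/2. The abstract does not quote the distance, so the ~39 pc used below is an assumed, Hipparcos-like value for HIP 560 chosen purely to illustrate that the quoted 0.492 mas is consistent with a ~2.1 R☉ radius.

```python
import math

MAS_TO_RAD = math.pi / (180 * 3600 * 1000)   # milliarcseconds -> radians
PC_TO_M = 3.0857e16                          # parsec -> metres
R_SUN = 6.957e8                              # solar radius in metres

def radius_rsun(theta_mas, d_pc):
    """Linear radius (in R_sun) from angular diameter theta and distance d."""
    return theta_mas * MAS_TO_RAD * d_pc * PC_TO_M / 2 / R_SUN

# 0.492 mas at an ASSUMED distance of ~39.4 pc gives roughly 2.1 R_sun.
r = radius_rsun(0.492, 39.4)
```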

  5. Hybrid-PIC Computer Simulation of the Plasma and Erosion Processes in Hall Thrusters

    Science.gov (United States)

    Hofer, Richard R.; Katz, Ira; Mikellides, Ioannis G.; Gamero-Castano, Manuel

    2010-01-01

    HPHall software simulates and tracks the time-dependent evolution of the plasma and erosion processes in the discharge chamber and near-field plume of Hall thrusters. HPHall is an axisymmetric solver that employs a hybrid fluid/particle-in-cell (Hybrid-PIC) numerical approach. HPHall, originally developed by MIT in 1998, was upgraded to HPHall-2 by the Polytechnic University of Madrid in 2006. The Jet Propulsion Laboratory has continued the development of HPHall-2 through upgrades to the physical models employed in the code, and the addition of entirely new ones. Primary among these are the inclusion of a three-region electron mobility model that more accurately depicts the cross-field electron transport, and the development of an erosion sub-model that allows for the tracking of the erosion of the discharge chamber wall. The code is being developed to provide NASA science missions with a predictive tool of Hall thruster performance and lifetime that can be used to validate Hall thrusters for missions.

  6. The MICHELLE 2D/3D ES PIC Code Advances and Applications

    CERN Document Server

    Petillo, John; De Ford, John F; Dionne, Norman J; Eppley, Kenneth; Held, Ben; Levush, Baruch; Nelson, Eric M; Panagos, Dimitrios; Zhai, Xiaoling

    2005-01-01

    MICHELLE is a new 2D/3D steady-state and time-domain particle-in-cell (PIC) code* that employs electrostatic and now magnetostatic finite-element field solvers. The code has been used to design and analyze a wide variety of devices that include multistage depressed collectors, gridded guns, multibeam guns, annular-beam guns, sheet-beam guns, beam-transport sections, and ion thrusters. The latest additions to the MICHELLE/Voyager tool are as follows: 1) a prototype 3D self-magnetic-field solver using the curl-curl finite-element formulation for the magnetic vector potential, employing edge basis functions and accumulating current with MICHELLE's new unstructured-grid particle tracker; 2) the electrostatic field solver now accommodates dielectric media; 3) periodic boundary conditions are now functional on all grids, not just structured grids; 4) the addition of a global optimization module to the user interface, where electrical parameters (such as electrode voltages) can be optimized; and 5) adaptive mesh ref...

  7. Comparison of different Maxwell solvers coupled to a PIC resolution method of Maxwell-Vlasov equations

    International Nuclear Information System (INIS)

    Fochesato, Ch.; Bouche, D.

    2007-01-01

    The numerical solution of the Maxwell equations is a challenging task, and the range of applications is very wide: microwave devices and diffraction, to cite just a few. As a result, a number of methods have been proposed since the sixties; however, none has proved to be free of drawbacks. The finite difference scheme proposed by Yee in 1966 is well suited to the Maxwell equations, but it only works on cubical meshes, so the boundaries of complex objects are not properly handled by the scheme. When classical nodal finite elements are used, spurious modes appear, which spoil the results of simulations. Edge elements overcome this problem, at the price of a rather complex implementation and computationally intensive simulations. Finite volume methods, either generalizing the Yee scheme to a wider class of meshes, or applying to the Maxwell equations methods initially used in the field of hyperbolic systems of conservation laws, are also used. Lastly, 'discontinuous Galerkin' methods, which generalize finite volume methods to arbitrary order of accuracy, have recently been applied to the Maxwell equations. In this report, we focus more specifically on the coupling of a Maxwell solver to a PIC (particle-in-cell) method. We analyze the advantages and drawbacks of the most widely used methods: accuracy, robustness, sensitivity to numerical artefacts, efficiency, and user judgment. (authors)
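
    The Yee scheme discussed above can be pictured in a few lines: electric and magnetic field components live on staggered grid points and are updated in leapfrog fashion. The following is a minimal, illustrative 1D sketch in normalized units; the grid size, source position, pulse shape and Courant number are arbitrary choices, not taken from the report.

```python
import numpy as np

def fdtd_1d(steps=200, n=400, src=100):
    """Minimal 1D Yee-style FDTD update (normalized units).

    Illustrative sketch only: a real Maxwell-PIC solver also deposits
    particle currents and applies proper boundary conditions."""
    ez = np.zeros(n)       # E_z sampled on integer grid points
    hy = np.zeros(n - 1)   # H_y staggered half a cell between them
    c = 0.5                # Courant number (stability requires c <= 1 in 1D)
    for t in range(steps):
        hy += c * np.diff(ez)        # H update from the spatial difference of E
        ez[1:-1] += c * np.diff(hy)  # E update from the spatial difference of H
        ez[src] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source
    return ez
```

    The staggering of `ez` and `hy` by half a cell, together with the leapfrog time stepping, is exactly the feature that ties the scheme to uniform (cubical, in 3D) meshes.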

  8. UCLA, elementary school

    Directory of Open Access Journals (Sweden)

    Neutra, Richard J.

    1962-03-01

    Full Text Available The Elementary Training School of the University of California, Los Angeles, is devoted to education, research and the training of teachers of young children. It has been built on a marvellous site of lush vegetation, facing fairly rugged terrain, a circumstance that lends added charm to the complex. The buildings were designed with great skill and adapted to the landscape on the basis of a dominant horizontality, using simple materials (brick, iron and wood), with generous communication with nature through large sliding glass walls that widen the classrooms and extend them into the garden, in accordance with the new teaching standards and practices.

  9. IMPLEMENTATION OF PID ON PIC24F SERIES MICROCONTROLLER FOR SPEED CONTROL OF A DC MOTOR USING MPLAB AND PROTEUS

    OpenAIRE

    Sohaib Aslam; Sundas Hannan; Umar Sajjad; Waheed Zafar

    2016-01-01

    Speed control of a DC motor is critical in most industrial systems where accuracy and protection are of the essence. This paper presents simulations of a Proportional-Integral-Derivative (PID) controller on a 16-bit PIC 24F series microcontroller for speed control of a DC motor in the presence of load torque. The PID gains have been tuned by the Linear Quadratic Regulator (LQR) technique, implemented on the microcontroller using MPLAB, and finally simulated for speed control of D...
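
    The discrete PID law behind such an implementation is compact: u = Kp·e + Ki·∫e dt + Kd·de/dt, evaluated once per control period. A minimal sketch follows; the gains and the toy first-order plant are illustrative placeholders, not the LQR-tuned values from the paper.

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One discrete PID update. `state` carries (integral, previous_error)
    between calls; returns the control output and the updated state."""
    integral, prev_error = state
    integral += error * dt                     # rectangular integration
    derivative = (error - prev_error) / dt     # backward difference
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Drive a crude first-order "motor" model toward a 100 rad/s setpoint
# (hypothetical plant and gains, for illustration only).
speed, state = 0.0, (0.0, 0.0)
for _ in range(500):
    u, state = pid_step(100.0 - speed, state, kp=2.0, ki=5.0, kd=0.01, dt=0.01)
    speed += (u - speed) * 0.01    # toy plant: first-order lag
```

    On a PIC24F the same update would run inside a timer interrupt, with the error computed from an encoder or tachometer reading.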

  10. Addition compounds between lanthanide(III) and yttrium(III) methanesulfonates (MS) and 3-picoline-N-oxide (3-pic NO)

    International Nuclear Information System (INIS)

    Zinner, L.B.

    1984-01-01

    The preparation and characterization of addition compounds between lanthanide methanesulfonates and 3-picoline-N-oxide, of general formula Ln(MS)₃·2(3-pic NO), Ln being La–Yb and Y, were carried out. The techniques employed for characterization were: elemental analysis, X-ray diffraction, infrared absorption spectroscopy, electrolytic conductance in methanol, melting ranges and the emission spectrum of the Eu(III) compound. (Author) [pt

  11. Peptide Inhibitor of Complement C1 (PIC1) Rapidly Inhibits Complement Activation after Intravascular Injection in Rats.

    Directory of Open Access Journals (Sweden)

    Julia A Sharp

    Full Text Available The complement system has been increasingly recognized to play a pivotal role in a variety of inflammatory and autoimmune diseases. Consequently, therapeutic modulators of the classical, lectin and alternative pathways of the complement system are currently in pre-clinical and clinical development. Our laboratory has identified a peptide that specifically inhibits the classical and lectin pathways of complement and is referred to as Peptide Inhibitor of Complement C1 (PIC1. In this study, we determined that the lead PIC1 variant demonstrates a salt-dependent binding to C1q, the initiator molecule of the classical pathway. Additionally, this peptide bound to the lectin pathway initiator molecule MBL as well as the ficolins H, M and L, suggesting a common mechanism of PIC1 inhibitory activity occurs via binding to the collagen-like tails of these collectin molecules. We further analyzed the effect of arginine and glutamic acid residue substitution on the complement inhibitory activity of our lead derivative in a hemolytic assay and found that the original sequence demonstrated superior inhibitory activity. To improve upon the solubility of the lead derivative, a pegylated, water soluble variant was developed, structurally characterized and demonstrated to inhibit complement activation in mouse plasma, as well as rat, non-human primate and human serum in vitro. After intravenous injection in rats, the pegylated derivative inhibited complement activation in the blood by 90% after 30 seconds, demonstrating extremely rapid function. Additionally, no adverse toxicological effects were observed in limited testing. Together these results show that PIC1 rapidly inhibits classical complement activation in vitro and in vivo and is functional for a variety of animal species, suggesting its utility in animal models of classical complement-mediated diseases.

  12. Electro pneumatic trainer embedded with programmable integrated circuit (PIC) microcontroller and graphical user interface platform for aviation industries training purposes

    Science.gov (United States)

    Burhan, I.; Azman, A. A.; Othman, R.

    2016-10-01

    An electro pneumatic trainer embedded with a programmable integrated circuit (PIC) microcontroller and a Visual Basic (VB) platform was fabricated as a supporting tool for the existing teaching and learning process, with the objective of enhancing students' knowledge and hands-on skills, especially with electro pneumatic devices. The existing learning process for electro pneumatic courses conducted in the classroom does not emphasize simulation or the more complex practical aspects. VB is used as the platform for the graphical user interface (GUI), while the PIC serves as the interface circuit between the GUI and the electro pneumatic hardware. The trainer interfacing the PIC and VB has been designed and improved to involve multiple types of electro pneumatic apparatus, such as a linear drive, an air motor, a semi-rotary motor, a double-acting cylinder and a single-acting cylinder. The newly fabricated trainer's microcontroller interface can be programmed and re-programmed for numerous combinations of tasks. Based on a survey of 175 student participants, 97% of the respondents agreed that the newly fabricated trainer is user friendly, safe and attractive, and 96.8% strongly agreed that it improved both their knowledge development and their hands-on skills. Furthermore, the Lab Practical Evaluation record indicated that the respondents improved their academic performance (hands-on skills) by an average of 23.5%.

  13. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently, almost all algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)
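
    As a concrete instance of question (i), a serial left-to-right sum adapts naturally to parallel hardware as a tree reduction: the same associative operation is applied pairwise, so each level's combines are independent and could run on separate processors. An illustrative sketch (not code from the lectures):

```python
def tree_reduce(xs, op):
    """Pairwise (tree) reduction. For an associative op this matches the
    serial fold, but needs only about log2(n) dependent levels; all pair
    combines within one level are independent, i.e. parallelizable."""
    level = list(xs)
    if not level:
        raise ValueError("empty input")
    while len(level) > 1:
        nxt = [op(level[i], level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # odd element is carried up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

    The restructuring is trivial for sums but, as the lectures note, adapting algorithms with genuine sequential dependencies (e.g. iterative solvers) is where the convergence questions of point (ii) arise.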

  14. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  15. Material analyses of foam-based SiC FCI after dynamic testing in PbLi in MaPLE loop at UCLA

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Maria, E-mail: maria.gonzalez@ciemat.es [LNF-CIEMAT, Avda Complutense, 40, 28040 Madrid (Spain); Rapisarda, David; Ibarra, Angel [LNF-CIEMAT, Avda Complutense, 40, 28040 Madrid (Spain); Courtessole, Cyril; Smolentsev, Sergey; Abdou, Mohamed [Fusion Science and Technology Center, UCLA (United States)

    2016-11-01

    Highlights: • Samples from a foam-based SiC FCI were analyzed by examining their SEM microstructure and elemental composition. • After the dynamic experiments in flowing hot PbLi, liquid metal ingress was confirmed, due to infiltration through local defects in the protective inner CVD layer. • No direct evidence of corrosion/erosion was observed; these defects could be related to the manufacturing process. - Abstract: Foam-based SiC flow channel inserts (FCIs) developed and manufactured by Ultramet, USA are currently under testing in flowing hot lead-lithium (PbLi) alloy in the MaPLE loop at UCLA, to address chemical/physical compatibility and to assess the MHD pressure drop reduction. UCLA has finished the first experimental series, in which a single uninterrupted long-term (∼6500 h) test was performed on a 30-cm FCI segment in a magnetic field up to 1.8 T at a temperature of 300 °C and maximum flow velocities of ∼15 cm/s. After finishing the experiments, the FCI sample was extracted from the host stainless steel duct and cut into slices. A few of them have been analyzed at CIEMAT as part of the joint collaborative effort on the development of the DCLL blanket concept in the EU and the US. The initial inspection of the slices using optical microscopy at UCLA showed significant PbLi ingress into the bulk FCI material, which resulted in degradation of the insulating properties of the FCI. Current material analyses at CIEMAT are based on advanced techniques, including characterization of FCI samples by FESEM to study PbLi ingress, imaging of cross sections, composition analysis by EDX and crack inspection. These analyses suggest that the ingress was caused by local defects in the protective inner CVD layer that might have been originally present in the FCI or have occurred during testing.

  16. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts is also shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed, and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
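
    At its core, Cartesian SENSE reconstruction solves a small least-squares system for each pixel of the aliased, reduced-FOV image: the coil measurements are modeled as sensitivity-weighted sums of the R superimposed true pixels. A simplified sketch (the array shapes are assumptions for illustration, and the noise-covariance weighting of the full method is omitted):

```python
import numpy as np

def sense_unfold(aliased, sens):
    """Unfold one pixel of an R-fold aliased SENSE acquisition.

    aliased: (n_coils,) complex measurements at this pixel in each coil image.
    sens:    (n_coils, R) coil sensitivities at the R superimposed locations.
    Returns the R unaliased pixel values via least squares."""
    x, *_ = np.linalg.lstsq(sens, aliased, rcond=None)
    return x
```

    The conditioning of `sens` at each pixel is what the g-factor quantifies: nearly parallel sensitivity columns amplify noise in the unfolded result.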

  17. UCLA1, a synthetic derivative of a gp120 RNA aptamer, inhibits entry of human immunodeficiency virus type 1 subtype C

    CSIR Research Space (South Africa)

    Mufhandu, Hazel T

    2012-05-01

    Full Text Available ...such as South Africa (47), where this study was conducted, we assessed the sensitivity of a large panel of subtype C isolates derived from adult and pediatric patients at different stages of HIV-1 infection against UCLA1. We examined its neutralization..., 34). These were derived from the CAPRISA 002 acute infection study cohort (18), the subtype C reference panel (31), pediatric and AIDS patients' isolates (9, 17), and a subtype C consensus sequence clone (ConC) (26). The subtype C pseudoviruses were...

  18. Fluctuations and transport in fusion plasmas. Final report

    International Nuclear Information System (INIS)

    Gould, R.W.; Liewer, P.C.

    1995-01-01

    The energy confinement in tokamaks is thought to be limited by transport caused by plasma turbulence. Three-dimensional plasma particle-in-cell (PIC) codes are used to model the turbulent transport in tokamaks in an attempt to understand this phenomenon, so that tokamaks can be made more efficient. Presently, hundreds of hours of Cray time are used to model these experiments, and much bigger and longer runs are desired; modeling a large tokamak with realistic parameters is beyond the capability of existing sequential supercomputers. Parallel supercomputers might be a cost-effective tool for performing such large-scale 3D tokamak simulations. The goal of the work was to develop algorithms for running PIC codes on coarse-grained message-passing parallel computers and to evaluate the performance of such parallel computers on PIC codes. These algorithms would be used in a large-scale PIC production code such as the UCLA 3D gyrokinetic code.
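
    The key data-management step of such domain-decomposed PIC algorithms is passing particles whose push carried them out of their spatial domain to the processor that owns the neighbouring domain. A serial 1D illustration of that bookkeeping follows; in a real message-passing code the `moves` buffers would be exchanged with MPI rather than appended to Python lists.

```python
def exchange_particles(domains, bounds):
    """Move particles that left their spatial domain to the owning domain.

    domains: list of per-domain particle position lists (1D sketch).
    bounds:  list of (lo, hi) half-open intervals, one per domain.
    Assumes every position stays inside the union of the intervals."""
    moves = [[] for _ in domains]          # per-destination "send buffers"
    for i, parts in enumerate(domains):
        lo, hi = bounds[i]
        keep = []
        for x in parts:
            if lo <= x < hi:
                keep.append(x)
            else:                          # find the domain that now owns x
                j = next(k for k, (l, h) in enumerate(bounds) if l <= x < h)
                moves[j].append(x)
        domains[i] = keep
    for j, incoming in enumerate(moves):   # the "receive" step
        domains[j].extend(incoming)
    return domains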

  19. HPC parallel programming model for gyrokinetic MHD simulation

    International Nuclear Information System (INIS)

    Naitou, Hiroshi; Yamada, Yusuke; Tokuda, Shinji; Ishii, Yasutomo; Yagi, Masatoshi

    2011-01-01

    The 3-dimensional gyrokinetic PIC (particle-in-cell) code for MHD simulation, Gpic-MHD, was installed on SR16000 (“Plasma Simulator”), which is a scalar cluster system consisting of 8,192 logical cores. The Gpic-MHD code advances particle and field quantities in time. In order to distribute calculations over a large number of logical cores, the total simulation domain in cylindrical geometry was broken up into N_DD-r × N_DD-z (number of radial decompositions times number of axial decompositions) small domains including approximately the same number of particles. The axial direction was uniformly decomposed, while the radial direction was non-uniformly decomposed. N_RP replicas (copies) of each decomposed domain were used (“particle decomposition”). A hybrid parallelization model of multi-threads and multi-processes was employed: threads were parallelized by auto-parallelization, and N_DD-r × N_DD-z × N_RP processes were parallelized by MPI (message-passing interface). The parallelization performance of Gpic-MHD was investigated for a medium-size system of N_r × N_θ × N_z = 1025 × 128 × 128 mesh points with 4.196 or 8.192 billion particles. The highest speed for a fixed number of logical cores was obtained for two threads, the maximum number of N_DD-z, and the optimum combination of N_DD-r and N_RP. The observed optimum speeds demonstrated good scaling up to 8,192 logical cores. (author)


  20. Programmable multi-waveform generator using a DDS AD9851 based on a PIC 18F4550 microcontroller

    Directory of Open Access Journals (Sweden)

    Hidayat Nur Isnianto

    2015-05-01

    Full Text Available Direct digital synthesis (DDS) is a method of generating an analog waveform digitally: a time-varying digital signal is generated and then converted to analog form by a digital-to-analog converter (DAC). The AD9851 IC is an analog waveform generator that implements the DDS method, whose output frequency can be changed according to the user's needs. A PIC 18F4550 microcontroller generates the digital control signal; this microcontroller was chosen because it offers full-speed USB 2.0 for interfacing with a computer without requiring special drivers. The output frequency can be set from a keypad or programmed from a computer. Test results for this multi-waveform generator show an output frequency range from 1000 Hz to 30 MHz, as a sine wave with an amplitude of 430 mV and a square wave with an amplitude of 4.125 V. Keywords: DDS, AD9851, PIC 18F4550, USB, waveform generator
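
    The DDS principle used by parts like the AD9851 is a phase accumulator: every clock tick adds a frequency tuning word to an N-bit accumulator, and the accumulator's top bits index a sine lookup table feeding the DAC. An illustrative sketch follows; the 32-bit accumulator width matches the AD9851, while the direct sin() call stands in for the lookup table and DAC.

```python
import math

def dds_samples(freq_hz, clock_hz, n, acc_bits=32):
    """Generate n sine samples by phase accumulation.

    tuning_word = f_out * 2^N / f_clock, so the output frequency is set
    purely digitally, with resolution f_clock / 2^N."""
    tuning_word = round(freq_hz * (1 << acc_bits) / clock_hz)
    acc, out = 0, []
    for _ in range(n):
        acc = (acc + tuning_word) & ((1 << acc_bits) - 1)  # N-bit wraparound
        out.append(math.sin(2 * math.pi * acc / (1 << acc_bits)))
    return out
```

    For example, a tuning word of 2³² / 8 produces one full sine period every eight clock ticks.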

  1. Global fully kinetic models of planetary magnetospheres with iPic3D

    Science.gov (United States)

    Gonzalez, D.; Sanna, L.; Amaya, J.; Zitz, A.; Lembege, B.; Markidis, S.; Schriver, D.; Walker, R. J.; Berchem, J.; Peng, I. B.; Travnicek, P. M.; Lapenta, G.

    2016-12-01

    We report on the latest developments of our approach to modeling planetary magnetospheres, mini magnetospheres and the Earth's magnetosphere with the fully kinetic, electromagnetic particle-in-cell code iPic3D. The code treats electrons and multiple species of ions as fully kinetic particles. We review: 1) why a fully kinetic model, and in particular why kinetic electrons, are needed for capturing some of the most important aspects of the physics processes of planetary magnetospheres; 2) why the energy conserving implicit method (ECIM) in its newest implementation [1] is the right approach to reach this goal: we consider the different electron scales and study how the new ECIM can be tuned to resolve only the electron scales of interest while averaging over the unresolved scales, preserving their contribution to the evolution; 3) how, with modern computing, planetary magnetospheres, mini magnetospheres and eventually the Earth's magnetosphere can be modeled with fully kinetic electrons. The path from petascale to exascale for iPic3D is outlined based on the DEEP-ER project [2], using dynamic allocation of different processor architectures (Xeon and Xeon Phi) and innovative I/O technologies. Specifically, results from models of Mercury are presented and compared with MESSENGER observations and with previous hybrid (fluid electrons and kinetic ions) simulations. The plasma convection around the planets includes the development of hydrodynamic instabilities at the flanks, the presence of collisionless shocks, the magnetosheath, the magnetopause, reconnection zones, the formation of the plasma sheet and the magnetotail, and the variation of ion/electron plasma flows when crossing these frontiers. Given the fully kinetic nature of our approach, we focus on detailed particle dynamics and distributions at locations that can be used for comparison with satellite data. [1] Lapenta, G. (2016). Exactly Energy Conserving Implicit Moment Particle in Cell Formulation. arXiv preprint ar

  2. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  3. LPIC++. A parallel one-dimensional relativistic electromagnetic particle-in-cell code for simulating laser-plasma-interaction

    International Nuclear Information System (INIS)

    Lichters, R.; Pfund, R.E.W.; Meyer-ter-Vehn, J.

    1997-08-01

    The code LPIC++ presented here is based on a one-dimensional, electromagnetic, relativistic PIC code that was originally developed by one of the authors during a PhD thesis at the Max-Planck-Institut fuer Quantenoptik for kinetic simulations of high harmonic generation from overdense plasma surfaces. The code essentially uses the algorithm of Birdsall and Langdon and of Villasenor and Bunemann. It is written in C++ in order to be easily extendable, and has been parallelized so that its power grows linearly with the size of the accessible hardware, e.g. massively parallel machines like the Cray T3E. The parallel LPIC++ version uses PVM for communication between processors; PVM is public-domain software that can be downloaded from the world wide web. A particular strength of LPIC++ lies in its clear program and data structure, which uses chained lists for the organization of grid cells, enabling dynamic adjustment of spatial domain sizes in a very convenient way and therefore easy balancing of processor loads. Particles belonging to one cell are also linked in a chained list and are immediately accessible from that cell. In addition to this convenient type of data organization in a PIC code, the code shows excellent performance in both its single-processor and parallel versions. (orig.)
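
    The chained-list organization described above can be pictured with a small sketch: cells form a linked chain, and each cell heads a linked chain of its own particles. This is a Python stand-in for the C++ structures of the code; the class and function names are hypothetical.

```python
class Cell:
    """Grid cell in a chained list, heading a chained list of particles."""
    def __init__(self, index):
        self.index = index
        self.next = None    # next cell in the chain
        self.first = None   # head of this cell's particle chain

class Particle:
    def __init__(self, x):
        self.x = x
        self.next = None

def push_particle(cell, particle):
    """Prepend a particle to its cell's chain in O(1); all particles of a
    cell remain immediately reachable from that cell."""
    particle.next = cell.first
    cell.first = particle

def build_grid(n):
    """Chain n cells together; growing or shrinking the chain at either end
    is cheap, which is what makes dynamic domain resizing convenient."""
    head = Cell(0)
    cur = head
    for i in range(1, n):
        cur.next = Cell(i)
        cur = cur.next
    return head
```

    Moving a particle between cells is then two pointer updates, and rebalancing a processor's load amounts to handing a run of cells (with their attached particle chains) to a neighbour.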

  4. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10⁴ parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
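
    The parallel scheme described, independent walkers sampling with shared weights and pooling their histograms for the weight update, can be sketched on a toy discrete system. This is illustrative only: the simple W ← W/H update and the toy model stand in for the log-domain, error-weighted estimators of production codes.

```python
import random

def muca_update(weights, n_walkers, sweeps, seed=1):
    """One parallel-multicanonical weight iteration on a toy system with
    states 0..len(weights)-1 and multicanonical weight weights[s]. Each
    walker is statistically independent (hence parallelizable); their
    histograms are pooled before the weight update."""
    nstates = len(weights)
    rng = random.Random(seed)
    hist = [0] * nstates
    for _ in range(n_walkers):             # each walker could be its own thread
        s = rng.randrange(nstates)
        for _ in range(sweeps):
            t = (s + rng.choice((-1, 1))) % nstates
            if rng.random() < min(1.0, weights[t] / weights[s]):
                s = t                      # Metropolis accept with weights
            hist[s] += 1
    new_weights = [w / max(h, 1) for w, h in zip(weights, hist)]
    return new_weights, hist
```

    Iterating this update flattens the pooled histogram, which is the multicanonical goal; the GPU version simply runs the walker loop as thousands of concurrent threads.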

  5. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  6. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you implement these techniques in the real world. If you are an experienced Python programmer willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.

  7. MultiPic: A standardized set of 750 drawings with norms for six European languages

    NARCIS (Netherlands)

    Duñabeitia, J.A.; Crepaldi, D.; Meyer, A.S.; New, B.; Pliatsikas, C.; Smolka, E.; Brysbaert, M.

    2018-01-01

    Numerous studies in psychology, cognitive neuroscience and psycholinguistics have used pictures of objects as stimulus materials. Currently, authors engaged in cross-linguistic work or wishing to run parallel studies at multiple sites where different languages are spoken must rely on rather small

  8. Continuation of the Application of Parallel PIC Simulations to Laser and Electron Transport Through Plasmas Under Conditions Relevant to ICF and SBSS

    International Nuclear Information System (INIS)

    Warren B Mori

    2007-01-01

    In 2006/2007 we continued to study several issues related to underdense laser-plasma interactions. We have been studying the onset and saturation of Raman backscatter for NIF conditions, nonlinear plasma oscillations, and the two-plasmon decay instability

  9. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  10. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  11. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were
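
    The naive building block that the Driscoll-Healy algorithm accelerates is the evaluation of Legendre polynomials via the three-term Bonnet recurrence, (n+1) P_{n+1}(x) = (2n+1) x P_n(x) − n P_{n−1}(x). A sketch (illustrative; the fast algorithm replaces the resulting O(N²) transform with roughly O(N log² N) work):

```python
def legendre_values(lmax, x):
    """Evaluate P_0(x) .. P_lmax(x) by the Bonnet recurrence."""
    p = [1.0, x]                 # P_0 = 1, P_1 = x
    for n in range(1, lmax):
        p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
    return p[:lmax + 1]
```

    A discrete Legendre transform then weights sampled function values by these polynomial values at the quadrature nodes; it is that nested loop over degrees and nodes that both the fast algorithm and the parallelization attack.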

  12. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  13. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  14. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  15. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVIDIA's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
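
    The D²-weighted seeding that k-means++ adds to plain k-means is compact enough to sketch. The serial NumPy version below is an illustration only, not the released C++ code; the distance computation it vectorizes is the step the GPU/OpenMP/XMT ports parallelize:

```python
import numpy as np

def kmeans_pp_seeds(points, k, rng=None):
    """Select k initial centers with k-means++ D^2 weighting.

    The first seed is uniform random; each later seed is drawn with
    probability proportional to its squared distance to the nearest
    already-chosen seed.
    """
    rng = np.random.default_rng(rng)
    n = len(points)
    centers = [points[rng.integers(n)]]
    for _ in range(k - 1):
        # Squared distance of every point to its nearest chosen center;
        # this is the embarrassingly parallel part of the algorithm.
        d2 = np.min(
            ((points[:, None, :] - np.asarray(centers)[None, :, :]) ** 2).sum(-1),
            axis=1,
        )
        centers.append(points[rng.choice(n, p=d2 / d2.sum())])
    return np.asarray(centers)
```

    With two well-separated clusters, the D² weighting places the second seed in the cluster the first seed missed, which is exactly why k-means++ converges faster than random seeding.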

  16. Parallel plate detectors

    International Nuclear Information System (INIS)

    Gardes, D.; Volkov, P.

    1981-01-01

    Two parallel plate avalanche counters (PPAC), a 5×3 cm² counter (timing only) and a 15×5 cm² counter (timing and position), are considered. The theory of operation and the timing resolution are given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters [fr

  17. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  18. Controle de um pré-regulador com alto fator de potência utilizando microcontrolador PIC /

    OpenAIRE

    Grosse, Alexandre de Souza

    1999-01-01

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico. A study of digital control in power electronics using a special PIC17C756 microcontroller in a pre-regulator for active power-factor correction. The main focus is the control of the current loop of the boost converter used. The work starts from a characterization of the microcontroller and its peripherals and proceeds through the design of the BOOST converter. It presents the techniques of...

  19. Electromagnetic particle-in-cell (PIC) method for modeling the formation of metal surface structures induced by femtosecond laser radiation

    Energy Technology Data Exchange (ETDEWEB)

    Djouder, M. [Laboratoire de Physique et Chimie Quantique, Université Mouloud Mammeri de Tizi-ouzou, BP 17 RP, 15000 Tizi-Ouzou (Algeria); Lamrous, O., E-mail: omarlamrous@mail.ummto.dz [Laboratoire de Physique et Chimie Quantique, Université Mouloud Mammeri de Tizi-ouzou, BP 17 RP, 15000 Tizi-Ouzou (Algeria); Mitiche, M.D. [Laboratoire de Physique et Chimie Quantique, Université Mouloud Mammeri de Tizi-ouzou, BP 17 RP, 15000 Tizi-Ouzou (Algeria); Itina, T.E. [Laboratoire Hubert Curien, UMR CNRS 5516/Université Jean Monnet, 18 rue de Professeur Benoît Lauras, 42000 Saint-Etienne (France); Zemirli, M. [Laboratoire de Physique et Chimie Quantique, Université Mouloud Mammeri de Tizi-ouzou, BP 17 RP, 15000 Tizi-Ouzou (Algeria)

    2013-09-01

    The particle in cell (PIC) method coupled to the finite-difference time-domain (FDTD) method is used to model the formation of laser-induced periodic surface structures (LIPSS) at the early stage of femtosecond laser irradiation of a smooth metal surface. The theoretical results were analyzed and compared with experimental data taken from the literature. It was shown that the optical properties of the target are not homogeneous and that the ejection of electrons is such that ripples in the electron density were obtained. The Coulomb explosion mechanism was proposed to explain the ripple formation under the considered conditions.

  20. Electromagnetic particle-in-cell (PIC) method for modeling the formation of metal surface structures induced by femtosecond laser radiation

    International Nuclear Information System (INIS)

    Djouder, M.; Lamrous, O.; Mitiche, M.D.; Itina, T.E.; Zemirli, M.

    2013-01-01

    The particle in cell (PIC) method coupled to the finite-difference time-domain (FDTD) method is used to model the formation of laser-induced periodic surface structures (LIPSS) at the early stage of femtosecond laser irradiation of a smooth metal surface. The theoretical results were analyzed and compared with experimental data taken from the literature. It was shown that the optical properties of the target are not homogeneous and that the ejection of electrons is such that ripples in the electron density were obtained. The Coulomb explosion mechanism was proposed to explain the ripple formation under the considered conditions.

  1. Resolution of the Vlasov-Maxwell system by PIC discontinuous Galerkin method on GPU with OpenCL

    Directory of Open Access Journals (Sweden)

    Crestetto Anaïs

    2013-01-01

    Full Text Available We present an implementation of a Vlasov-Maxwell solver for multicore processors. The Vlasov equation describes the evolution of charged particles in an electromagnetic field, solution of the Maxwell equations. The Vlasov equation is solved by a Particle-In-Cell method (PIC, while the Maxwell system is computed by a Discontinuous Galerkin method. We use the OpenCL framework, which allows our code to run on multicore processors or recent Graphic Processing Units (GPU. We present several numerical applications to two-dimensional test cases.
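
    The PIC cycle referred to here (deposit charge, solve the field, gather it back, push the particles) can be sketched serially. The 1D electrostatic version below, with periodic boundaries and an FFT Poisson solver, is a hedged NumPy illustration of the generic method, not the authors' Discontinuous Galerkin/OpenCL Vlasov-Maxwell solver; the normalizations are assumptions:

```python
import numpy as np

def pic_step(x, v, q_over_m, ng, L, dt):
    """One explicit electrostatic PIC cycle in 1D on a periodic domain."""
    dx = L / ng
    # --- charge deposition with linear (cloud-in-cell) weights ---
    g = x / dx
    i = np.floor(g).astype(int) % ng
    f = g - np.floor(g)
    rho = np.zeros(ng)
    np.add.at(rho, i, 1 - f)
    np.add.at(rho, (i + 1) % ng, f)
    rho = rho / dx - len(x) / L              # neutralizing ion background
    # --- field solve: d^2(phi)/dx^2 = -rho, via FFT ---
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2       # k = 0 mode fixed by neutrality
    E = np.real(np.fft.ifft(-1j * k * phi_k))
    # --- gather field to particles and leapfrog push ---
    Ep = (1 - f) * E[i] + f * E[(i + 1) % ng]
    v = v + q_over_m * Ep * dt
    x = (x + v * dt) % L
    return x, v
```

    A quick sanity check: a uniform, cold particle load produces zero net charge density, hence zero field, and the particles stay put.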

  2. AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics

    Science.gov (United States)

    Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.

    2017-05-01

    We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs on the disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving hybrid model equations are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing well-tested hybrid model of plasma that runs in parallel using multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for computation of Faraday’s Equation, resulting in an explicit-implicit scheme for the hybrid model equation. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.

  3. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
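
    The two-phase method described in the abstract can be sketched in a few lines. The version below is an illustrative reading of the claim, not the patented implementation: objects are (xmin, xmax) intervals, the grid is n equal slabs, and a thread pool stands in for the n processors:

```python
from concurrent.futures import ThreadPoolExecutor

def populate_grid(objects, n, lo=0.0, hi=1.0):
    """Two-phase parallel grid population.

    Phase 1: each worker takes a distinct set of objects and determines
    which of the n grid portions each object overlaps.
    Phase 2: each worker owns one grid portion and collects the objects
    previously determined to be bounded by it.
    """
    w = (hi - lo) / n
    chunks = [objects[i::n] for i in range(n)]          # distinct object sets

    def classify(chunk):
        out = []
        for (a, b) in chunk:
            first = max(0, min(n - 1, int((a - lo) / w)))
            last = max(0, min(n - 1, int((b - lo) / w)))
            out.append(((a, b), range(first, last + 1)))
        return out

    with ThreadPoolExecutor(max_workers=n) as ex:
        tagged = [t for part in ex.map(classify, chunks) for t in part]

        def gather(p):
            return [obj for obj, portions in tagged if p in portions]

        return list(ex.map(gather, range(n)))
```

    Note that an object spanning a portion boundary is recorded in every portion it touches, matching the "at least partially bounded" language of the claim.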

  4. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  5. More parallel please

    DEFF Research Database (Denmark)

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    Abstract [en] More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: first, you may read the extremely brief version of the in total 11 recommendations for best practice; second, you may acquaint yourself with the extended version of the recommendations; and finally, you may study the reasoning behind each of them. At the end of the text, we give...

  6. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Parallel structures for moving mechanical systems are solid, fast, and accurate. Among parallel systems, Stewart platforms are to be noticed as the oldest: fast, solid and precise. The work outlines a few main elements of Stewart platforms, beginning with the geometry of the platform and its kinematic elements, and then presenting a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile part is then recorded by a rotation matrix method. If a structural motor element consists of two moving elements which translate relative to each other, it is more convenient, for the drive train and especially for the dynamics, to represent the motor element as a single moving component. We thus have seven moving parts (the six motor elements, or feet, plus the mobile platform as the seventh) and one fixed part.

  7. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  8. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  9. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  10. To build an environmental quality building. Evaluation: the HQE secondary school of Pic Saint Loup realized by the region; Construire un batiment respectueux de l'environnement. Retour d'experience: le Lycee HQE du Pic Saint Loup realise par la Region

    Energy Technology Data Exchange (ETDEWEB)

    Denicourt, Ch.

    2004-07-01

    This document presents the programme management carried out for the Pic Saint Loup secondary school, an environmental quality (HQE) building. The 8 chapters detail the realization of the HQE building, the project planning of an HQE building, the Pic Saint Loup project, the start of operation, the implementation of the planning, the evaluation of the project feasibility, the drafting of the programme, and the evaluation of time and cost. (A.L.B.)

  11. Realistic PIC modelling of laser-plasma interaction: a direct implicit method with adjustable damping and high order weight functions

    International Nuclear Information System (INIS)

    Drouin, M.

    2009-11-01

    This research thesis proposes a new formulation of the relativistic direct implicit method, based on the weak formulation of the wave equation, which is solved by means of a Newton algorithm. The first part of this thesis deals with the properties of explicit particle-in-cell (PIC) methods: properties and limitations of an explicit PIC code, linear analysis of a numerical plasma, the numerical heating phenomenon, the interest of a higher-order interpolation function, and the presentation of two applications in high-density relativistic laser-plasma interaction. The second and main part of this report deals with adapting the direct implicit method to laser-plasma interaction: presentation of the state of the art, formulation of the direct implicit method, and resolution of the wave equation. The third part concerns various numerical and physical validations of the ELIXIRS code: laser wave propagation in vacuum, demonstration of the adjustable damping that is a characteristic of the proposed algorithm, influence of the space-time discretization on energy conservation, expansion of a thermal plasma into vacuum, two cases of beam-plasma instability in the relativistic regime, and a case of overcritical laser-plasma interaction

  12. Photonic Integrated Circuit (PIC) Device Structures: Background, Fabrication Ecosystem, Relevance to Space Systems Applications, and Discussion of Related Radiation Effects

    Science.gov (United States)

    Alt, Shannon

    2016-01-01

    Electronic integrated circuits are considered one of the most significant technological advances of the 20th century, with demonstrated impact in their ability to incorporate successively higher numbers of transistors and construct electronic devices onto a single CMOS chip. Photonic integrated circuits (PICs) exist as the optical analog to integrated circuits; however, in place of transistors, PICs consist of numerous scaled optical components, including such "building-block" structures as waveguides, MMIs, lasers, and optical ring resonators. The ability to construct electronic and photonic components on a single microsystems platform offers transformative potential for the development of technologies in fields including communications, biomedical device development, autonomous navigation, and chemical and atmospheric sensing. Developing on-chip systems that provide new avenues for integration and replacement of bulk optical and electro-optic components also reduces size, weight, power and cost (SWaP-C) limitations, which are important in the selection of instrumentation for specific flight projects. The number of applications currently emerging for complex photonics systems-particularly in data communications-warrants additional investigations when considering reliability for space systems development. This Body of Knowledge document seeks to provide an overview of existing integrated photonics architectures; the current state of design, development, and fabrication ecosystems in the United States and Europe; and potential space applications, with emphasis given to associated radiation effects and reliability.

  13. The PICS Climate Insights 101 Courses: A Visual Approach to Learning About Climate Science, Mitigation and Adaptation

    Science.gov (United States)

    Pedersen, T. F.; Zwiers, F. W.; Breen, C.; Murdock, T. Q.

    2014-12-01

    The Pacific Institute for Climate Solutions (PICS) has now made available online three free, peer-reviewed, unique animated short courses in a series entitled "Climate Insights 101" that respectively address basic climate science, carbon-emissions mitigation approaches and opportunities, and adaptation. The courses are suitable for students of all ages, and use professionally narrated animations designed to hold a viewer's attention. Multiple issues are covered, including complex concerns like the construction of general circulation models, carbon pricing schemes in various countries, and adaptation approaches in the face of extreme weather events. Clips will be shown in the presentation. The first course (Climate Science Basics) has now been seen by over two hundred thousand individuals in over 80 countries, despite being offered in English only. Each course takes about two hours to work through, and in recognizing that that duration might pose an attention barrier to some students, PICS selected a number of short clips from the climate-science course and posted them as independent snippets on YouTube. A companion series of YouTube videos entitled, "Clear The Air", was created to confront the major global-warming denier myths. But a major challenge remains: despite numerous efforts to promote the availability of the free courses and the shorter YouTube pieces, they have yet to become widely known. Strategies to overcome that constraint will be discussed.

  14. Response of plasma facing components in Tokamaks due to intense energy deposition using Particle-In-Cell (PIC) methods

    Science.gov (United States)

    Genco, Filippo

    Damage to plasma-facing components (PFC) due to various plasma instabilities is still a major concern for the successful development of fusion energy and represents a significant research obstacle in the community. It is of great importance to fully understand the behavior and lifetime expectancy of PFC under both low-energy cycles during normal events and highly energetic events such as disruptions, Edge-Localized Modes (ELM), Vertical Displacement Events (VDE), and run-away electrons (RE). The consequences of these highly energetic dumps, with energy fluxes ranging from 10 MJ/m2 up to 200 MJ/m2 applied in very short periods (0.1 to 5 ms), can be catastrophic for both safety and economic reasons. These phenomena can cause a) a large temperature increase in the target material; b) consequent melting, evaporation and erosion losses due to the extremely high heat fluxes; c) possible structural damage and permanent degradation of the entire bulk material, with probable burnout of the coolant tubes; d) plasma contamination, i.e. transport of target material into the chamber far from where it was originally removed. The modeling of off-normal events such as disruptions and ELMs requires the simultaneous solution over time of three main problems: a) the heat transfer in the plasma-facing component; b) the interaction of the vapor produced at the surface with the incoming plasma particles; c) the transport of the radiation produced in the vapor-plasma cloud. In addition, a moving-boundaries problem has to be considered and solved at the material surface. Considering a carbon divertor as the target, there are two moving boundaries, since for the given conditions carbon does not melt: the plasma front and the moving eroded material surface. The current solution methods for this problem use finite differences and a moving coordinate system based on the Crank-Nicolson method and the Alternating Direction Implicit (ADI) method. Currently Particle-In-Cell (PIC) methods are widely used for solving

  15. The Benefits of Adding SETI to the University Curriculum and What We Have Learned from a SETI Course Recently Offered at UCLA

    Science.gov (United States)

    Lesyna, Larry; Margot, Jean-Luc; Greenberg, Adam; Shinde, Akshay; Alladi, Yashaswi; Prasad MN, Srinivas; Bowman, Oliver; Fisher, Callum; Gyalay, Szilard; McKibbin, William; Miles, Brittany E.; Nguyen, Donald; Power, Conor; Ramani, Namrata; Raviprasad, Rashmi; Santana, Jesse

    2017-01-01

    We advocate for the inclusion of a full-term course entirely devoted to SETI in the university curriculum. SETI usually warrants only a few lectures in a traditional astronomy or astrobiology course. SETI’s rich interdisciplinary character serves astronomy students by introducing them to scientific and technological concepts that will aid them in their dissertation research or later in their careers. SETI is also an exciting topic that draws students from other disciplines and teaches them astronomical concepts that they might otherwise never encounter in their university studies. We have composed syllabi that illustrate the breadth and depth that SETI courses provide for advanced undergraduate or graduate students. The syllabi can also be used as a guide for an effective SETI course taught at a descriptive level.After a pilot course in 2015, UCLA formally offered a course titled "EPSS C179/279 - Search for Extraterrestrial Intelligence: Theory and Applications" in Spring 2016. The course was designed for advanced undergraduate students and graduate students in the science, technical, engineering, and mathematical fields. In 2016, 9 undergraduate students and 5 graduate students took the course. Students designed an observing sequence for the Arecibo and Green Bank telescopes, observed known planetary systems remotely, wrote a sophisticated and modular data processing pipeline, analyzed the data, and presented the results. In the process, they learned radio astronomy fundamentals, software development, signal processing, and statistics. The instructor believes that the students were eager to learn because of the engrossing nature of SETI. The students rated the course highly, in part because of the observing experience and the teamwork approach. The next offering will be in Spring 2017.See lxltech.com and seti.ucla.edu

  16. Science and Engineering of the Environment of Los Angeles: A GK-12 Experiment at Developing Science Communications Skills in UCLA's Graduate Program

    Science.gov (United States)

    Moldwin, M. B.; Hogue, T. S.; Nonacs, P.; Shope, R. E.; Daniel, J.

    2008-12-01

    Many science and research skills are taught by osmosis in graduate programs with the expectation that students will develop good communication skills (speaking, writing, and networking) by observing others, attending meetings, and self reflection. A new National Science Foundation Graduate Teaching Fellows in K-12 Education (GK-12; http://ehrweb.aaas.org/gk12new/) program at UCLA (SEE-LA; http://measure.igpp.ucla.edu/GK12-SEE-LA/overview.html ) attempts to make the development of good communication skills an explicit part of the graduate program of science and engineering students. SEE-LA places the graduate fellows in two pairs of middle and high schools within Los Angeles to act as scientists-in-residence. They are partnered with two master science teachers and spend two days per week in the classroom. They are not student teachers, or teacher aides, but scientists who contribute their content expertise, excitement and experience with research, and new ideas for classroom activities and lessons that incorporate inquiry science. During the one-year fellowship, the graduate students also attend a year-long Preparing Future Faculty seminar that discusses many skills needed as they begin their academic or research careers. Students are also required to include a brief (two-page) summary of their research that their middle or high school students would be able to understand as part of their published thesis. Having students actively thinking about and communicating their science to a pre-college audience provides important science communication training and helps contribute to science education. University and local pre-college school partnerships provide an excellent opportunity to support the development of graduate student communication skills while also contributing significantly to the dissemination of sound science to K-12 teachers and students.

  17. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
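
    The tables the article describes follow directly from the reciprocal rule for parallel resistances, 1/R = 1/R1 + 1/R2. The short Python sketch below (an illustration, not taken from the article) enumerates resistor pairs whose parallel total comes out to a whole number:

```python
from fractions import Fraction

def parallel_r(*rs):
    """Equivalent resistance of resistors in parallel: 1/R = sum(1/Ri)."""
    return 1 / sum(Fraction(1, r) for r in rs)

def whole_number_pairs(limit):
    """All pairs (R1, R2), R1 <= R2 <= limit, whose parallel total is whole."""
    return [(r1, r2)
            for r1 in range(1, limit + 1)
            for r2 in range(r1, limit + 1)
            if parallel_r(r1, r2).denominator == 1]
```

    Using exact fractions avoids the floating-point round-off that would otherwise make the "is it a whole number?" test unreliable; for example, 3 Ω and 6 Ω in parallel give exactly 2 Ω.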

  18. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of research is tooling to support the development of parallel programs in C/C++. Methods and software that automate the process of designing parallel applications are proposed.

  19. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest ... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
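
    List ranking itself can be illustrated with textbook pointer jumping (Wyllie's method); the sketch below is that classic formulation, not the cache-efficient PEM variant the paper develops. Each of the O(log n) rounds is data-parallel across all nodes, which is what makes the problem a natural fit for multiprocessor models:

```python
def list_rank(succ):
    """Rank linked-list nodes by pointer jumping.

    succ[i] is the successor of node i, with succ[i] == i at the tail.
    Returns rank[i] = number of links from node i to the tail. Each
    round doubles the distance every pointer spans, so O(log n) rounds
    suffice; within a round every node can be processed in parallel.
    """
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    succ = list(succ)
    while any(succ[i] != succ[succ[i]] for i in range(n)):
        # "In parallel" for all i: accumulate the successor's rank,
        # then jump over the successor.
        rank = [rank[i] + rank[succ[i]] for i in range(n)]
        succ = [succ[succ[i]] for i in range(n)]
    return rank
```

    For the chain 0 → 1 → 2 → 3 (tail), the ranks come out as [3, 2, 1, 0]; the PEM algorithms refine this idea to also bound the number of parallel block transfers per round.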

  20. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Interactions between parallel channels are examined. For experimental studies of non-stationary flow regimes in three parallel vertical channels, results of the analysis of the phenomena and of the mechanisms of parallel channel interaction are shown for adiabatic conditions with single-phase fluid and two-phase mixture flow. (author)

  1. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  2. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  3. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  4. Fast parallel event reconstruction

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, to 0.1 ms/track running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...

  5. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  6. IMPLEMENTATION OF PID ON PIC24F SERIES MICROCONTROLLER FOR SPEED CONTROL OF A DC MOTOR USING MPLAB AND PROTEUS

    Directory of Open Access Journals (Sweden)

    Sohaib Aslam

    2016-09-01

    Full Text Available Speed control of a DC motor is critical in most industrial systems where accuracy and protection are of the essence. This paper presents simulations of a Proportional-Integral-Derivative (PID) controller on a 16-bit PIC 24F series microcontroller for speed control of a DC motor in the presence of load torque. The PID gains have been tuned by the Linear Quadratic Regulator (LQR) technique; the controller is then implemented on the microcontroller using MPLAB and finally simulated for speed control of the DC motor in the Proteus Virtual System Modeling (VSM) software. Proteus has a built-in feature to add load torque to a DC motor, so simulation results are presented for three cases: the speed of the DC motor controlled without load torque, with 25% load torque, and with 50% load torque. In all three cases the PID effectively controls the speed of the DC motor with minimum steady-state error.
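
    The control loop described above can be sketched in a few lines. The plant constants and gains below are illustrative assumptions, not the LQR-tuned values or the PIC24F implementation from the paper:

```python
def simulate_pid(kp, ki, kd, setpoint=100.0, load=0.0, dt=0.01, steps=2000):
    """Discrete PID speed loop on a toy first-order DC-motor model:
    J * d(speed)/dt = u - b*speed - load. J and b are hypothetical."""
    J, b = 0.05, 0.1                      # assumed inertia and friction
    speed, integral, prev_err = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint - speed
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        prev_err = err
        speed += dt * (u - b * speed - load) / J    # explicit Euler plant step
    return speed
```

    The integral term removes the steady-state error that a constant load torque would otherwise cause, matching the qualitative behavior reported for the 25% and 50% load cases.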

  7. El Cura Juan Fernández de Sotomayor y Picón y los catecismos de la Independencia

    OpenAIRE

    Ocampo López, Javier

    2010-01-01

    This book presents the revolutionary environment of the late 18th century and the first half of the 19th century through the thought and action of the Cartagena-born priest Juan Fernández de Sotomayor y Picón, parish priest of Mompós and rector of the Colegio Mayor de Nuestra Señora del Rosario, who lived through the years that gave birth to the Republic of Colombia. He served as a revolutionary priest, and as a politician for Mompós and for Cartagena de Indias, before the Congress of the United Provinces of the National Congress...

  8. An exploratory study of three-dimensional MP-PIC-based simulation of bubbling fluidized beds with and without baffles

    DEFF Research Database (Denmark)

    Yang, Shuai; Wu, Hao; Lin, Weigang

    2018-01-01

    In this study, the flow characteristics of Geldart A particles in a bubbling fluidized bed with and without perforated plates were simulated by the multiphase particle-in-cell (MP-PIC)-based Eulerian-Lagrangian method. A modified structure-based drag model was developed based on our previous work....... Other drag models including the Parker and Wen-Yu-Ergun drag models were also employed to investigate the effects of drag models on the simulation results. Although the modified structure-based drag model better predicts the gas-solid flow dynamics of a baffle-free bubbling fluidized bed in comparison...... with the experimental data, none of these drag models predict the gas-solid flow in a baffled bubbling fluidized bed sufficiently well because of the treatment of baffles in the Barracuda software. To improve the simulation accuracy, future versions of Barracuda should address the challenges of incorporating the bed...

  9. The Value of PIC Cystography in Detecting De Novo and Residual Vesicoureteral Reflux after Dextranomer/Hyaluronic Acid Copolymer Injection

    Directory of Open Access Journals (Sweden)

    B. W. Palmer

    2011-01-01

    Full Text Available The endoscopic injection of Dx/HA in the management of vesicoureteral reflux (VUR) has become an accepted alternative to open surgery. In the current study we evaluated the value of cystography to detect de novo contralateral VUR in unilateral cases of VUR at the time of Dx/HA injection, and correlated the findings of immediate post-injection cystography performed under the same anesthesia with the 2-month postoperative VCUG to evaluate its ability to predict successful surgical outcomes. The current study aimed to evaluate whether an intraoperatively performed cystogram could replace postoperative studies, but a negative intraoperative cystogram correlates with the postoperative study in only 80% of cases. Considering the 75–80% success rate of Dx/HA implantation, the addition of intraoperative cystograms cannot replace postoperative studies. In patients treated for unilateral VUR, PIC cystography can detect occult VUR and prevent postoperative contralateral new onset of VUR.

  10. Photo-induced reorganization of molecular packing of amphi-PIC J-aggregates (single J-aggregate spectroscopy)

    International Nuclear Information System (INIS)

    Malyukin, Yu.V.; Sorokin, A.V.; Yefimova, S.L.; Lebedenko, A.N.

    2005-01-01

    Confocal luminescence microscopy has been used to excite and collect luminescence from single amphi-PIC J-aggregates. Two types of J-aggregates have been revealed in the luminescence image: bead-like J-aggregates, whose diameter is less than 1 μm, and rod-like ones, whose length is about 3 μm and diameter less than 1 μm. It has been found that single rod-like and bead-like J-aggregates exhibit different luminescence bands with different decay parameters. At off-resonance blue-tail excitation, the J-aggregate exciton luminescence disappeared within a certain time period and a new band appeared, which cannot be attributed to the monomer emission. The luminescence image shows that the J-aggregate is not destroyed. However, J-aggregate storage in darkness does not recover its exciton luminescence

  11. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
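
    The serial-versus-parallel distinction can be made concrete with a toy Jones-matrix sketch (our own illustration; the matrices are standard polarizer examples, not from the paper): cascaded elements compose by matrix products, while the parallel architecture combines independently modulated components by a matrix sum. Two crossed polarizers in series block everything, but their summed, parallel combination does not:

```python
def matmul2(a, b):
    """2x2 matrix product: serial cascade of optical elements."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd2(a, b, wa=1.0, wb=1.0):
    """Weighted 2x2 matrix sum: parallel combination of components."""
    return [[wa * a[i][j] + wb * b[i][j] for j in range(2)] for i in range(2)]

H = [[1, 0], [0, 0]]   # horizontal polarizer (Jones matrix)
V = [[0, 0], [0, 1]]   # vertical polarizer

blocked = matmul2(V, H)   # series: crossed polarizers pass nothing
passed = matadd2(H, V)    # parallel: summed paths act as the identity
```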

  12. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Improved Iterative Parallel Interference Cancellation Receiver for Future Wireless DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Andrea Bernacchioni

    2005-04-01

    Full Text Available We present a new turbo multiuser detector for turbo-coded direct sequence code division multiple access (DS-CDMA systems. The proposed detector is based on the utilization of a parallel interference cancellation (PIC and a bank of turbo decoders. The PIC is broken up in order to perform interference cancellation after each constituent decoder of the turbo decoding scheme. Moreover, in the paper we propose a new enhanced algorithm that provides a more accurate estimation of the signal-to-noise-plus-interference-ratio used in the tentative decision device and in the MAP decoding algorithm. The performance of the proposed receiver is evaluated by means of computer simulations for medium to very high system loads, in AWGN and multipath fading channel, and compared to recently proposed interference cancellation-based iterative MUD, by taking into account the number of iterations and the complexity involved. We will see that the proposed receiver outperforms the others especially for highly loaded systems.
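
    The core cancellation step of a PIC detector can be sketched for a synchronous CDMA toy model. This is a simplified hard-decision PIC (our own illustration), not the turbo-coded receiver or the enhanced SNIR-estimation algorithm proposed in the paper:

```python
def correlate(sig, code):
    """Normalized matched-filter output for one spreading code."""
    return sum(s * c for s, c in zip(sig, code)) / len(code)

def pic_detect(received, codes, amplitudes, iterations=3):
    """Hard-decision parallel interference cancellation for synchronous CDMA.
    Each iteration re-decides every user's bit after subtracting the
    reconstructed signals of all other users from the received chips."""
    n_users = len(codes)
    bits = [1 if correlate(received, codes[k]) >= 0 else -1
            for k in range(n_users)]          # initial matched-filter decisions
    for _ in range(iterations):
        new_bits = []
        for k in range(n_users):
            cleaned = list(received)
            for j in range(n_users):
                if j != k:                     # subtract the other users
                    for i in range(len(cleaned)):
                        cleaned[i] -= amplitudes[j] * bits[j] * codes[j][i]
            new_bits.append(1 if correlate(cleaned, codes[k]) >= 0 else -1)
        bits = new_bits
    return bits
```

    In a near-far scenario (a weak user swamped by a strong one) the plain matched filter decides the weak user's bit incorrectly, while the cancellation iterations recover the transmitted bits.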

  14. PIC simulations of conical magnetically insulated transmission line with LTD generator: Transition from self-limited to load-limited flow

    Science.gov (United States)

    Liu, Laqun; Wang, Huihui; Guo, Fan; Zou, Wenkang; Liu, Dagang

    2017-04-01

    Based on the 3-dimensional Particle-In-Cell (PIC) code CHIPIC3D, with a new circuit boundary algorithm we developed, a conical magnetically insulated transmission line (MITL) with a 1.0-MV linear transformer driver (LTD) is explored numerically. The switch jitter times of the LTD are critical parameters for the system and are difficult to measure experimentally. In this paper, these values are obtained by comparing the PIC results with experimental data for a large-diode-gap MITL. By decreasing the diode gap, we find that all PIC results agree well with experimental data as long as the MITL operates in self-limited flow, no matter how large the diode gap is. However, when the diode gap decreases to a threshold, the self-limited flow transitions to load-limited flow. In this situation, PIC results no longer agree with experimental data, due to anode plasma expansion in the diode load. This disagreement is used to estimate the plasma expansion speed.

  15. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unified framework in which the problems of parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far do not match the effort invested. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming paradigms, parallel execution and collaborative systems.

  16. DOD-SBIR Structured Multi-Resolution PIC Code for Electromagnetic Plasma Simulations, Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Vay, J L; Grote, D P; Friedman, A

    2010-04-22

    A novel electromagnetic solver with mesh refinement capability was implemented in Warp. The solver allows for calculations in 2-1/2 and 3 dimensions, and includes the standard Yee stencil as well as the Cole-Karkkainen stencil for lower numerical dispersion along the principal axes. Warp's implementation of the Cole-Karkkainen stencil includes an extension to perfectly matched layers (PML) for absorption of waves, and preserves the conservation property of charge-conserving current deposition schemes such as the Villasenor-Buneman and Esirkepov methods. Warp's mesh refinement framework (originally developed for electrostatic calculations) was augmented to allow for electromagnetic capability, following the methodology presented in [1], extended to an arbitrary number of refinement levels. Other developments include a generalized particle injection method, internal conductors using a stair-cased approximation, and subcycling of particle pushing. The solver runs in parallel using MPI message passing, with a choice at runtime of 1D, 2D and 3D domain decomposition, and is shown to scale linearly on a test problem up to 32,768 CPUs. The novel solver was tested on the modeling of filamentation instability, fast ignition, ion-beam-induced plasma wakes, and laser plasma acceleration.
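
    The standard Yee stencil named above can be illustrated by a minimal 1-D field update loop (our own sketch in normalized units; it shows only the leapfrog staggering, not Warp's Cole-Karkkainen stencil, PML boundaries, or mesh refinement):

```python
import math

def fdtd_1d(steps=200, n=200):
    """Minimal 1-D FDTD loop with the Yee staggering: E and H live on
    interleaved half-cells and leapfrog in time (normalized units,
    Courant number 0.5, simple reflecting boundaries)."""
    courant = 0.5
    ez = [0.0] * n
    hy = [0.0] * n
    for t in range(steps):
        for i in range(n - 1):                 # update H from the curl of E
            hy[i] += courant * (ez[i + 1] - ez[i])
        ez[n // 2] += math.exp(-((t - 30.0) / 10.0) ** 2)   # soft Gaussian source
        for i in range(1, n):                  # update E from the curl of H
            ez[i] += courant * (hy[i] - hy[i - 1])
    return ez
```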

  17. DOD-SBIR Structured Multi-Resolution PIC Code for Electromagnetic Plasma Simulations, Final Report

    International Nuclear Information System (INIS)

    Vay, J.L.; Grote, D.P.; Friedman, A.

    2010-01-01

    A novel electromagnetic solver with mesh refinement capability was implemented in Warp. The solver allows for calculations in 2-1/2 and 3 dimensions, and includes the standard Yee stencil as well as the Cole-Karkkainen stencil for lower numerical dispersion along the principal axes. Warp's implementation of the Cole-Karkkainen stencil includes an extension to perfectly matched layers (PML) for absorption of waves, and preserves the conservation property of charge-conserving current deposition schemes such as the Villasenor-Buneman and Esirkepov methods. Warp's mesh refinement framework (originally developed for electrostatic calculations) was augmented to allow for electromagnetic capability, following the methodology presented in (1), extended to an arbitrary number of refinement levels. Other developments include a generalized particle injection method, internal conductors using a stair-cased approximation, and subcycling of particle pushing. The solver runs in parallel using MPI message passing, with a choice at runtime of 1D, 2D and 3D domain decomposition, and is shown to scale linearly on a test problem up to 32,768 CPUs. The novel solver was tested on the modeling of filamentation instability, fast ignition, ion-beam-induced plasma wakes, and laser plasma acceleration.

  18. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles, and it must be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
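
    The master-slave, message-passing organization described above can be sketched with thread-safe queues standing in for messages (our own minimal illustration, not the framework's actual API):

```python
import queue
import threading

def master_slave(tasks, work_fn, n_workers=4):
    """Master-slave skeleton: the master enqueues work items, and each
    slave loops (take item, process, report result) until it receives
    a poison pill (None)."""
    todo, done = queue.Queue(), queue.Queue()

    def slave():
        while True:
            item = todo.get()
            if item is None:                  # poison pill: shut down
                break
            done.put((item, work_fn(item)))

    workers = [threading.Thread(target=slave) for _ in range(n_workers)]
    for w in workers:
        w.start()
    for t in tasks:                           # master distributes the work
        todo.put(t)
    for _ in workers:
        todo.put(None)                        # one pill per slave
    for w in workers:
        w.join()
    results = {}
    while not done.empty():                   # master collects the results
        item, value = done.get()
        results[item] = value
    return results
```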

  19. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
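
    The reproducibility issue mentioned above is commonly addressed by giving every worker its own seeded random stream, so the answer does not depend on scheduling. A minimal sketch (our own illustration, with workers simulated sequentially):

```python
import random

def mc_pi(n_samples, n_workers=4, base_seed=12345):
    """Monte Carlo estimate of pi split across independent random streams.
    Seeding one generator per (logical) worker keeps the result
    reproducible regardless of how the workers are scheduled."""
    per_worker = n_samples // n_workers
    hits = 0
    for w in range(n_workers):                # each loop could be a process
        rng = random.Random(base_seed + w)    # private, reproducible stream
        for _ in range(per_worker):
            x, y = rng.random(), rng.random()
            if x * x + y * y < 1.0:           # dart landed inside quarter circle
                hits += 1
    return 4.0 * hits / (per_worker * n_workers)
```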

  20. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti...... about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in the case of nucleobases Y and Z, a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one...... of the phosphate groups in the Watson-Crick strand. Also, it was shown that the nucleobase Y made good stacking and binding with the other nucleobases in the TFO and Watson-Crick duplex, respectively. In contrast, the nucleobase Z with the LNA moiety was forced to twist out of the plane of the Watson-Crick base pair, which......

  1. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.

  2. A Parallel Particle Swarm Optimizer

    National Research Council Canada - National Science Library

    Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D

    2003-01-01

    .... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...

  3. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  4. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    ...... adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral......

  5. Implementación de un balastro electrónico con microcontrolador PIC para lámparas de sodio de alta presión; Implementation of electronic ballast with PIC microcontroller for high pressure sodium lamps

    Directory of Open Access Journals (Sweden)

    Armando Manuel Gutiérrez Menéndez

    2013-09-01

    Full Text Available This paper presents a prototype electronic ballast that guarantees successful high-frequency operation of a 70 W high-pressure sodium lamp, since it operates free of acoustic resonance (AR). The acoustic resonance phenomenon is analyzed, examining its origin and its theoretical prediction. The frequency-modulation technique used to avoid this phenomenon, implemented on Microchip's 8-bit PIC16F877 microcontroller, is described; it is activated depending on the variation of the electrical parameters of the lamp, namely voltage and current. The stages that make up the prototype are shown, together with simulations of the main elements that compose the ballast. The practical results achieved by the prototype are presented, divided by stages to analyze the correct operation of each one.

  6. Implementación de un balastro electrónico con microcontrolador PIC para lámparas de sodio de alta presión; Implementation of electronic ballast with PIC microcontroller for high pressure sodium lamps

    Directory of Open Access Journals (Sweden)

    Armando M. - Gutiérrez Menéndez

    2013-10-01

    Full Text Available This paper presents a prototype electronic ballast that guarantees successful high-frequency operation of a 70 W high-pressure sodium lamp, since it operates free of acoustic resonance (AR). The acoustic resonance phenomenon is analyzed, examining its origin and its theoretical prediction. The frequency-modulation technique used to avoid this phenomenon, implemented on Microchip's 8-bit PIC16F877 microcontroller, is described; it is activated depending on the variation of the electrical parameters of the lamp, namely voltage and current. The stages that make up the prototype are shown, together with simulations of the main elements that compose the ballast. The practical results achieved by the prototype are presented, divided by stages to analyze the correct operation of each one.

  7. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    The problem of parallel imports is a pressing question today. Legalizing parallel imports in Russia is expedient; this conclusion is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of such a decision and to apply remedies to minimize them.

  8. The fitness of copings constructed over UCLA abutments and the implant, constructed by different techniques: casting and casting with laser welding; Adaptação de copings de titânio ao implante, construídos sobre pilares UCLA por duas técnicas: fundição e fundição com soldagem de borda a laser

    Directory of Open Access Journals (Sweden)

    Elza Maria Valadares da Costa

    2004-12-01

    Osseointegrated implants are an alternative for replacing a missing tooth, and a passive fit between the prosthetic structure and the implant is a significant factor in the success of the treatment. A comparative study was therefore carried out between two methods of fabricating a single implant-supported prosthesis. A screw-type implant, 3.75 mm in diameter and 10.0 mm long (3i Implant Innovations, Brazil), was positioned in the middle of a resin block, and 15 machined anti-rotational UCLA abutments (137CNB, Conexão Sistemas de Próteses, Brazil) were screwed onto it with a torque of 20 N.cm without any laboratory procedure (control group, CTRL G). From a silicone model, 15 castable UCLA-type abutments (56CNB, Conexão Sistemas de Próteses, Brazil) were screwed on (20 N.cm), received a standard wax-up (flat buccal surface) and were cast in titanium (casting group, CG). Another 15 machined titanium UCLA-type abutments (137CNB, Conexão Sistemas de Próteses, Brazil) received the same standard wax-up; these copings were cast in titanium separately and then laser-welded to the respective abutments at their border (laser-welding group, LWG). The marginal fit was measured at the implant/abutment interface, under a measuring microscope, along the y axis, at four buccal, lingual, mesial and distal reference points previously marked on the block. The arithmetic means were obtained and an exploratory data analysis was performed to determine the most appropriate statistical test. Descriptive statistics (µm; mean ± standard deviation; median) were: control 13.50 ± 21.80, median 0.00; casting 36.20 ± 12.60, median 37.00; laser 10.50 ± 12.90, median 3.00. The data were submitted to Kruskal-Wallis ANOVA (alpha = 5%), which showed that the median distortion values differ statistically (kw = 17.40; df = 2; p = 0.001).

  9. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  10. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  11. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  12. Comparison of different Maxwell solvers coupled to a PIC resolution method of Maxwell-Vlasov equations; Evaluation de differents solveurs Maxwell pour la resolution de Maxwell-Vlasov par une methode PIC

    Energy Technology Data Exchange (ETDEWEB)

    Fochesato, Ch. [CEA Bruyeres-le-Chatel, Dept. de Conception et Simulation des Armes, Service Simulation des Amorces, Lab. Logiciels de Simulation, 91 (France); Bouche, D. [CEA Bruyeres-le-Chatel, Dept. de Physique Theorique et Appliquee, Lab. de Recherche Conventionne, Centre de Mathematiques et Leurs Applications, 91 (France)

    2007-07-01

    The numerical solution of Maxwell's equations is a challenging task, and the range of applications is very wide: microwave devices and diffraction, to cite a few. As a result, a number of methods have been proposed since the sixties; however, none has proved to be free of drawbacks. The finite difference scheme proposed by Yee in 1966 is well suited to Maxwell's equations, but it only works on cubical meshes; as a result, the boundaries of complex objects are not properly handled by the scheme. When classical nodal finite elements are used, spurious modes appear, which spoil the results of simulations. Edge elements overcome this problem, at the price of a rather complex implementation and computationally intensive simulations. Finite volume methods are also used, either generalizing the Yee scheme to a wider class of meshes, or applying to Maxwell's equations methods initially developed for hyperbolic systems of conservation laws. Lastly, 'Discontinuous Galerkin' methods, which generalize finite volume methods to arbitrary order of accuracy, have recently been applied to Maxwell's equations. In this report, we focus more specifically on the coupling of a Maxwell solver to a PIC (particle-in-cell) method. We analyze the advantages and drawbacks of the most widely used methods: accuracy, robustness, sensitivity to numerical artefacts, efficiency, and user judgment. (authors)
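    The Yee scheme mentioned in this abstract can be illustrated in one dimension (a toy sketch in normalized units, not taken from the report): the E and H fields live on staggered half-grids and are advanced in a leapfrog fashion.

    ```python
    import numpy as np

    def yee_1d(steps=200, nz=200):
        """1D Yee/FDTD leapfrog: E and H on staggered grids, Courant number 0.5."""
        ez = np.zeros(nz)        # E at integer grid points
        hy = np.zeros(nz - 1)    # H at half-grid points between them
        for n in range(steps):
            # H update uses the spatial difference (curl) of E
            hy += 0.5 * (ez[1:] - ez[:-1])
            # E update uses the spatial difference of H; boundaries held at zero
            ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])
            # hypothetical soft source: a Gaussian pulse injected mid-grid
            ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)
        return ez

    field = yee_1d()
    ```

    The factor 0.5 is the Courant number; choosing it at or below 1 in 1D keeps the staggered update stable.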

  13. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
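    The per-cycle rendez-vous described here can be sketched with a small Python model (a hypothetical toy, not the author's MPI program): workers simulate batches of histories concurrently, and the gather step at the end of the cycle blocks until every worker has finished, because the full fission source is needed before the next cycle can start.

    ```python
    import random
    from concurrent.futures import ThreadPoolExecutor

    def run_histories(seed, n_histories):
        """One worker's share of neutron histories; returns toy fission-site
        positions in a unit slab (deterministic given the seed)."""
        rng = random.Random(seed)
        return [rng.uniform(0.0, 1.0) for _ in range(n_histories)]

    def run_cycle(n_workers, histories_per_worker, cycle):
        """Run one criticality cycle. Collecting all results is the
        rendez-vous point: no worker can proceed to the next cycle until
        the full fission source distribution has been gathered."""
        with ThreadPoolExecutor(max_workers=n_workers) as ex:
            futures = [ex.submit(run_histories,
                                 cycle * n_workers + w, histories_per_worker)
                       for w in range(n_workers)]
            chunks = [f.result() for f in futures]  # blocking gather
        # Full fission source for the next cycle's population control.
        return [site for chunk in chunks for site in chunk]
    ```

    As the number of workers grows, the fixed cost of this gather step grows relative to the shrinking per-worker compute time, which is the speedup limitation the paper analyzes.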

  14. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  15. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
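    The barrel-sort idea can be sketched serially (a hypothetical simplification of the algorithm, without the message-passing layer): each "processor" owns one contiguous key range, or barrel; keys are exchanged in a single distribution phase, and each barrel is then sorted locally.

    ```python
    def barrel_sort(keys, n_procs=4):
        """Toy barrel-sort for non-empty lists of integer keys."""
        lo, hi = min(keys), max(keys)
        # Barrel width chosen so every key maps to an index < n_procs.
        width = (hi - lo) // n_procs + 1
        barrels = [[] for _ in range(n_procs)]
        for k in keys:                       # the all-to-all exchange phase
            barrels[(k - lo) // width].append(k)
        out = []
        for b in barrels:                    # local sort on each "processor"
            out.extend(sorted(b))
        return out
    ```

    Because the barrels cover disjoint, increasing key ranges, concatenating the locally sorted barrels yields the globally sorted sequence; on a real machine the local sorts run concurrently and only the exchange phase needs communication.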

  16. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
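    The rsync-like comparison at the heart of this scheme can be sketched as follows (an illustrative simplification, not the patented implementation): each fixed-size block of a node's checkpoint is checksummed against the stored template, and only the blocks that differ are compressed and kept for transmission.

    ```python
    import hashlib
    import zlib

    def block_checksums(data, block=64):
        """Checksum each fixed-size block of the template checkpoint."""
        return [hashlib.md5(data[i:i + block]).hexdigest()
                for i in range(0, len(data), block)]

    def delta_checkpoint(template, current, block=64):
        """Return (block_index, compressed_block) pairs for every block of
        the current checkpoint that differs from the template."""
        tmpl = block_checksums(template, block)
        delta = []
        for i in range(0, len(current), block):
            chunk = current[i:i + block]
            idx = i // block
            if idx >= len(tmpl) or hashlib.md5(chunk).hexdigest() != tmpl[idx]:
                # Non-lossy compression further shrinks the transmitted data.
                delta.append((idx, zlib.compress(chunk)))
        return delta
    ```

    If successive checkpoints change only a small fraction of memory, the delta is far smaller than a full checkpoint, which is the saving the abstract describes.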

  17. Tripartite polyionic complex (PIC) micelles as non-viral vectors for mesenchymal stem cell siRNA transfection.

    Science.gov (United States)

    Raisin, Sophie; Morille, Marie; Bony, Claire; Noël, Danièle; Devoisselle, Jean-Marie; Belamie, Emmanuel

    2017-08-22

    In the context of regenerative medicine, the use of RNA interference mechanisms has already proven its efficiency in targeting specific gene expression with the aim of enhancing, accelerating or, more generally, directing stem cell differentiation. However, achievement of good transfection levels requires the use of a gene vector. For in vivo applications, synthetic vectors are an interesting option to avoid possible issues associated with viral vectors (safety, production costs, etc.). Herein, we report on the design of tripartite polyionic complex micelles as original non-viral polymeric vectors suited for mesenchymal stem cell transfection with siRNA. Three micelle formulations were designed to exhibit pH-triggered disassembly in an acidic pH range comparable to that of endosomes. One formulation was selected as the most promising with the highest siRNA loading capacity while clearly maintaining pH-triggered disassembly properties. A thorough investigation of the internalization pathway of micelles into cells with tagged siRNA was made before showing an efficient inhibition of Runx2 expression in primary bone marrow-derived stem cells. This work evidenced PIC micelles as promising synthetic vectors that allow efficient MSC transfection and control over their behavior, from the perspective of their clinical use.

  18. Two-step (two-phase) cluster analysis with SPSS

    Directory of Open Access Journals (Sweden)

    Maria-José Rubio-Hurtado

    2017-01-01

    The two-step (two-phase) cluster analysis procedure is an exploratory tool designed to discover the natural groupings of a data set. It can generate information criteria, cluster frequencies and descriptive statistics per cluster, as well as bar charts, pie charts and variable-importance plots. The two-step cluster analysis method has unique features compared with traditional clustering methods, namely: an automatic procedure for determining the optimal number of clusters, the ability to build cluster models with both categorical and continuous variables, and the ability to work with very large data files.

  19. Rcupcake: an R package for querying and analyzing biomedical data through the BD2K PIC-SURE RESTful API.

    Science.gov (United States)

    Gutiérrez-Sacristán, Alba; Guedj, Romain; Korodi, Gabor; Stedman, Jason; Furlong, Laura I; Patel, Chirag J; Kohane, Isaac S; Avillach, Paul

    2018-04-15

    In the era of big data and precision medicine, the number of databases containing clinical, environmental, self-reported and biochemical variables is increasing exponentially. Enabling the experts to focus on their research questions rather than on computational data management, access and analysis is one of the most significant challenges nowadays. We present Rcupcake, an R package that contains a variety of functions for leveraging different databases through the BD2K PIC-SURE RESTful API and facilitating its query, analysis and interpretation. The package offers a variety of analysis and visualization tools, including the study of the phenotype co-occurrence and prevalence, according to multiple layers of data, such as phenome, exposome or genome. The package is implemented in R and is available under Mozilla v2 license from GitHub (https://github.com/hms-dbmi/Rcupcake). Two reproducible case studies are also available (https://github.com/hms-dbmi/Rcupcake-case-studies/blob/master/SSCcaseStudy_v01.ipynb, https://github.com/hms-dbmi/Rcupcake-case-studies/blob/master/NHANEScaseStudy_v01.ipynb). paul_avillach@hms.harvard.edu. Supplementary data are available at Bioinformatics online.

  20. Mitigation of environmental impacts: a study of the companies that compose the Camaçari Industrial Center (PIC

    Directory of Open Access Journals (Sweden)

    Sonia Maria da Silva Gomes

    2016-09-01

    The purpose of this research was to map the environmental-impact mitigation actions disclosed in the sustainability reports and financial statements of companies that compose the Camaçari Industrial Center (PIC) from 2007 to 2013. Data from the Industrial Development Committee of Camaçari were used to survey the companies; the final sample consisted of 14 companies. Content analysis was used to identify the information contained in these reports, based on the model proposed by Nossa (2002) for measuring environmental impacts. The results showed that the subcategory most mentioned in the sustainability reports was Wastefulness, found in 430 instances, followed by Recycling (157), CO2 (129), Contamination and Land Restoration (122), and Conservation of Natural Resources (108). The Wastefulness subcategory was also the most present in the financial statements, with 77 instances, followed by Contamination and Land Restoration (49) and Recycling (29). There was also a growing trend of disclosure of environmental liabilities. The evidence indicates that the companies are concerned primarily with the treatment and disposal of their solid, liquid and gaseous waste. The results are restricted to the period and sample investigated. Further research is suggested to broaden the sample and investigate the relationship between the disclosure of environmental mitigation actions and the financial performance of companies; additional studies could investigate which factors influence the adoption and dissemination of these actions, in the perception of managers of Brazilian companies.

  1. 3D PIC-MCC simulations of discharge inception around a sharp anode in nitrogen/oxygen mixtures

    Science.gov (United States)

    Teunissen, Jannis; Ebert, Ute

    2016-08-01

    We investigate how photoionization, electron avalanches and space charge affect the inception of nanosecond pulsed discharges. Simulations are performed with a 3D PIC-MCC (particle-in-cell, Monte Carlo collision) model with adaptive mesh refinement for the field solver. This model, whose source code is available online, is described in the first part of the paper. Then we present simulation results in a needle-to-plane geometry, using different nitrogen/oxygen mixtures at atmospheric pressure. In these mixtures non-local photoionization is important for the discharge growth. The typical length scale for this process depends on the oxygen concentration. With 0.2% oxygen the discharges grow quite irregularly, due to the limited supply of free electrons around them. With 2% or more oxygen the development is much smoother. An almost spherical ionized region can form around the electrode tip, which increases in size with the electrode voltage. Eventually this inception cloud destabilizes into streamer channels. In our simulations, discharge velocities are almost independent of the oxygen concentration. We discuss the physical mechanisms behind these phenomena and compare our simulations with experimental observations.

  2. Numerical modeling of the Linac4 negative ion source extraction region by 3D PIC-MCC code ONIX

    CERN Document Server

    Mochalskyy, S; Minea, T; Lifschitz, AF; Schmitzer, C; Midttun, O; Steyaert, D

    2013-01-01

    At CERN, a high-performance negative ion (NI) source is required for the 160 MeV H- linear accelerator Linac4. The source is planned to produce 80 mA of H- with an emittance of 0.25 mm mrad N-RMS, which is technically and scientifically very challenging. Optimizing the NI source requires a deep understanding of the underlying physics of the production and extraction of the negative ions. The extraction mechanism from the negative ion source is complex, involving a magnetic filter in order to cool down the electron temperature. The ONIX (Orsay Negative Ion eXtraction) code is used to address this problem. ONIX is a self-consistent 3D electrostatic code using a Particle-in-Cell Monte Carlo Collisions (PIC-MCC) approach. It was written to handle the complex boundary conditions between the plasma, the source walls, and beam formation at the extraction hole. Both the positive extraction potential (25 kV) and the magnetic field map are taken from the experimental set-up under construction at CERN. This contrib...

  3. PIC simulation of the vacuum power flow for a 5 terawatt, 5 MV, 1 MA pulsed power system

    Science.gov (United States)

    Liu, Laqun; Zou, Wenkang; Liu, Dagang; Guo, Fan; Wang, Huihui; Chen, Lin

    2018-03-01

    In this paper, a 5 terawatt, 5 MV, 1 MA pulsed power system based on vacuum magnetic insulation is simulated with the particle-in-cell (PIC) method. The system consists of 50 100-kV linear transformer driver (LTD) cavities in series, using magnetically insulated induction voltage adder (MIVA) technology for pulsed power addition and transmission. The pulsed power formation and the vacuum power flow are simulated for operation in self-limited flow and in load-limited flow. When the pulsed power system is not connected to the load, the downstream magnetically insulated transmission line (MITL) works in self-limited flow; the maximum output current is 1.14 MA, the voltage amplitude is 4.63 MV, and the electron current accounts for 67.5% of the total current at the current peak. When the load impedance is 3.0 Ω, the downstream MITL works in load-limited flow; the maximum output current and voltage amplitude are 1.28 MA and 3.96 MV, and the electron current accounts for 11.7% of the total current at the current peak. In addition, triggering the switches in synchronism with the passage of the pulsed power flow effectively reduces the rise time of the pulse current.

  4. Control device for automatic orientation of a solar panel based on a microcontroller (PIC16f628a)

    Science.gov (United States)

    Rezoug, M. R.; Krama, A.

    2016-07-01

    This work proposes a control device for an autonomous single-axis solar tracker. It consists of two main parts: the control part, based on the PIC16F628A, which controls the system, takes measurements and plots responses; and a mechanical part, which makes the solar panel follow the day-night motion of the sun throughout the year. Both parts are designed to improve the energy generation of the photovoltaic panels. In this paper, we explain the main operating principles of our system and provide experimental results that demonstrate its good performance and efficiency. This design differs from what has been proposed in previous studies: its key points are maximum output energy and minimum energy consumption by the tracker, a relatively low cost, and simplicity of implementation. The average power increase produced by using the tracking system over a particular day is more than 30% compared with the static panel.

  5. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  6. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests

  7. Study of effect of grain size on dust charging in an RF plasma using three-dimensional PIC-MCC simulations

    International Nuclear Information System (INIS)

    Ikkurthi, V. R.; Melzer, A.; Matyash, K.; Schneider, R.

    2008-01-01

    A 3-dimensional Particle-Particle Particle-Mesh (P³M) code is applied to study the charging process of micrometer-size dust grains confined in a capacitive RF discharge. In our model, particles (electrons and ions) are treated kinetically (Particle-in-Cell with Monte Carlo Collisions, PIC-MCC). In order to accurately resolve the plasma particles' motion close to the dust grain, the PIC technique is supplemented with Molecular Dynamics (MD), employing an analytic electrostatic potential for the interaction with the dust grain. This allows the dust grain charging due to absorption of plasma electrons and ions to be resolved self-consistently. The charging of dust grains confined above the lower electrode in a capacitive RF discharge and its dependence on the size and position of the dust is investigated. The results have been compared with laboratory measurements

  8. Specific features of spin-variable properties of [Fe(acen)pic2]BPh4 · nH2O

    Science.gov (United States)

    Ivanova, T. A.; Ovchinnikov, I. V.; Gil'mutdinov, I. F.; Mingalieva, L. V.; Turanova, O. A.; Ivanova, G. I.

    2016-02-01

    The [Fe(acen)pic2]BPh4 · nH2O compound has been synthesized and studied in the temperature interval of 5-300 K by the methods of EPR and magnetic susceptibility. The existence of ferromagnetic interactions between Fe(III) complexes in this compound has been revealed, in contrast to unhydrated [Fe(acen)pic2]BPh4. The reduction in the integrated intensity of the magnetic resonance signal as the temperature decreases below 80 K has been explained by the transition of high-spin ions to the low-spin state. It has been shown that the phase transition temperature in the presence of intermolecular (ferromagnetic) interactions is lower than that in the case of noninteracting centers.

  9. An interface board for developing control loops in power electronics based on microcontrollers and DSPs Cores -Arduino /ChipKit /dsPIC /DSP /TI Piccolo

    DEFF Research Database (Denmark)

    Pittini, Riccardo; Zhang, Zhe; Andersen, Michael A. E.

    2013-01-01

    and development environment. Moreover, the interface board can operate with open hardware Arduino-like boards such as the ChipKit Uno32. The paper also describes how to enhance the performance of a ChipKit Uno32 with a dsPIC obtaining a more suitable solution for power electronics. The basic blocks and interfaces...... of the boards are presented in detail as well as the board main specifications. The board operation has been tested with three core platforms: TI Piccolo controlSTICK, a Microchip dsPIC and a ChipKit Uno32 (Arduino-like platform). The board was used for generating test signals for characterizing 1200 V Si...

  10. Activated learning; providing structure in global health education at the David Geffen School of Medicine at the University of California, Los Angeles (UCLA)- a pilot study.

    Science.gov (United States)

    Jordan, Jaime; Hoffman, Risa; Arora, Gitanjli; Coates, Wendy

    2016-02-16

    Global health rotations are increasingly popular amongst medical students. The training abroad is highly variable and there is a recognized need for global health curriculum development. We sought to create and evaluate a curriculum, applicable to any global health rotation, that requires students to take an active role in their education and promotes engagement. Prospective, observational, mixed method study of 4th year medical students enrolled in global health courses at UCLA in 2011-12. Course directors identified 4 topics common to all rotations (traditional medicine, health systems, limited resources, pathology) and developed activities for students to complete abroad: observation, interview and reflection on resources, pathology, medical practices; and compare/contrast their experience with the US healthcare system. Students posted responses on a discussion board moderated by US faculty. After the rotation, students completed an anonymous internet-based evaluative survey. Responses were tabulated. Qualitative data from discussion board postings and free response survey items were analyzed using the framework method. 14 (100 %) students completed the Activated Learning assignment. 12 submitted the post rotation survey (85.7 %). Activated Learning enhanced GH education for 67 % and facilitated engagement in the local medical culture for 67 %. Qualitative analysis of discussion board posting demonstrated multiple areas of knowledge gain and analysis of free response survey items revealed 5 major themes supporting Activated Learning: guided learning, stimulation of discussion, shared interactions, cultural understanding, and knowledge of global healthcare systems. Increased interactivity emerged as the major theme for future improvement. The results of this study suggest that an Activated Learning program may enhance education, standardize curricular objectives across multiple sites and promote engagement in local medical culture, pathology and delivery

  11. A parallel 3D particle-in-cell code with dynamic load balancing

    International Nuclear Information System (INIS)

    Wolfheimer, Felix; Gjonaj, Erion; Weiland, Thomas

    2006-01-01

    A parallel 3D electrostatic Particle-In-Cell (PIC) code including an algorithm for modelling Space Charge Limited (SCL) emission [E. Gjonaj, T. Weiland, 3D-modeling of space-charge-limited electron emission. A charge conserving algorithm, Proceedings of the 11th Biennial IEEE Conference on Electromagnetic Field Computation, 2004] is presented. A domain decomposition technique based on orthogonal recursive bisection is used to parallelize the computation on a distributed memory environment of clustered workstations. For problems with a highly nonuniform and time dependent distribution of particles, e.g., bunch dynamics, a dynamic load balancing between the processes is needed to preserve the parallel performance. The algorithm for the detection of a load imbalance and the redistribution of the tasks among the processes is based on a weight function criterion, where the weight of a cell measures the computational load associated with it. The algorithm is studied with two examples. In the first example, multiple electron bunches as occurring in the S-DALINAC [A. Richter, Operational experience at the S-DALINAC, Proceedings of the Fifth European Particle Accelerator Conference, 1996] accelerator are simulated in the absence of space charge fields. In the second example, the SCL emission and electron trajectories in an electron gun are simulated
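    The orthogonal recursive bisection with a weight-function criterion described in this abstract can be illustrated by a single split step (a hypothetical sketch, not the paper's code): cells are ordered along one coordinate axis and cut where the accumulated weight reaches half of the total, so that both halves carry a similar computational load. Recursing on each half, alternating axes, yields the full domain decomposition.

    ```python
    def orb_split(cells, axis=0):
        """One step of orthogonal recursive bisection.

        `cells` is a list of (coords, weight) pairs, where the weight
        measures the computational load of a cell (e.g. its particle
        count).  Returns the two halves of a load-balanced cut."""
        cells = sorted(cells, key=lambda c: c[0][axis])
        total = sum(w for _, w in cells)
        acc, cut = 0.0, len(cells)
        for i, (_, w) in enumerate(cells):
            acc += w
            if acc >= total / 2:     # cut where half the load is reached
                cut = i + 1
                break
        return cells[:cut], cells[cut:]
    ```

    With a time-dependent particle distribution, such as a moving bunch, re-running the split whenever the weights drift apart is what realizes the dynamic load balancing the paper proposes.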

  12. A parallel 3D particle-in-cell code with dynamic load balancing

    Energy Technology Data Exchange (ETDEWEB)

    Wolfheimer, Felix [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany)]. E-mail: wolfheimer@temf.de; Gjonaj, Erion [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany); Weiland, Thomas [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany)

    2006-03-01

    A parallel 3D electrostatic Particle-In-Cell (PIC) code including an algorithm for modelling Space Charge Limited (SCL) emission [E. Gjonaj, T. Weiland, 3D-modeling of space-charge-limited electron emission. A charge conserving algorithm, Proceedings of the 11th Biennial IEEE Conference on Electromagnetic Field Computation, 2004] is presented. A domain decomposition technique based on orthogonal recursive bisection is used to parallelize the computation on a distributed memory environment of clustered workstations. For problems with a highly nonuniform and time dependent distribution of particles, e.g., bunch dynamics, a dynamic load balancing between the processes is needed to preserve the parallel performance. The algorithm for the detection of a load imbalance and the redistribution of the tasks among the processes is based on a weight function criterion, where the weight of a cell measures the computational load associated with it. The algorithm is studied with two examples. In the first example, multiple electron bunches as occurring in the S-DALINAC [A. Richter, Operational experience at the S-DALINAC, Proceedings of the Fifth European Particle Accelerator Conference, 1996] accelerator are simulated in the absence of space charge fields. In the second example, the SCL emission and electron trajectories in an electron gun are simulated.

  13. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

Full Text Available As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, low self-weight/load ratio, good dynamic behavior and easy control; hence its range of application continues to expand. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limits on link lengths has been introduced. This paper analyses the position workspace and orientation workspace of a six-degree-of-freedom parallel robot. The results show that changing the lengths of the branches of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but does change its position.
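The boundary-searching idea — test candidate poses with the inverse kinematics and keep those whose link lengths stay within limits — can be sketched for a Stewart-type platform. The anchor geometry and stroke limits below are invented for illustration, not taken from the paper.

```python
import math

# Hypothetical geometry: base joints on a unit circle, platform joints on a
# circle of radius 0.5, with link length limits LMIN..LMAX.
BASE = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6), 0.0)
        for k in range(6)]
PLAT = [(0.5 * math.cos(2 * math.pi * k / 6), 0.5 * math.sin(2 * math.pi * k / 6), 0.0)
        for k in range(6)]
LMIN, LMAX = 1.0, 1.6

def leg_lengths(p):
    """Inverse kinematics at fixed (identity) orientation: one length per link."""
    return [math.dist((p[0] + b[0], p[1] + b[1], p[2] + b[2]), a)
            for a, b in zip(BASE, PLAT)]

def reachable(p):
    """A position is in the workspace iff every link length is feasible."""
    return all(LMIN <= l <= LMAX for l in leg_lengths(p))

# Crude boundary search along the vertical axis through the centre:
zs = [i / 100 for i in range(200)]
inside = [z for z in zs if reachable((0.0, 0.0, z))]
lower, upper = min(inside), max(inside)
```

Varying LMIN and LMAX changes the extent of the reachable interval, in line with the paper's observation that the branch lengths are the main lever on workspace size.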

  14. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
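The combination rules the straw models make tangible can also be checked numerically (a quick sketch; the resistance values are arbitrary): series resistances add, while in parallel the conductances add.

```python
def series(*rs):
    """Equivalent resistance of resistors in series: R = R1 + R2 + ..."""
    return sum(rs)

def parallel(*rs):
    """Equivalent resistance of resistors in parallel: 1/R = 1/R1 + 1/R2 + ..."""
    return 1.0 / sum(1.0 / r for r in rs)
```

Two identical straws side by side pass air twice as easily as one, so parallel(6.0, 6.0) gives 3.0, while end to end they resist twice as much: series(6.0, 6.0) gives 12.0.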

  15. Parallel encoders for pixel detectors

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1991-01-01

A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example of the construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs

  16. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  17. Event monitoring of parallel computations

    Directory of Open Access Journals (Sweden)

    Gruzlikov Alexander M.

    2015-06-01

    Full Text Available The paper considers the monitoring of parallel computations for detection of abnormal events. It is assumed that computations are organized according to an event model, and monitoring is based on specific test sequences

  18. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  19. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  20. Magnetic Field-Vector Measurements in Quiescent Prominences via the Hanle Effect: Analysis of Prominences Observed at Pic-Du-Midi and at Sacramento Peak

    Science.gov (United States)

    Bommier, V.; Leroy, J. L.; Sahal-Brechot, S.

    1985-01-01

The Hanle effect method for magnetic field vector diagnostics has now provided results on the magnetic field strength and direction in quiescent prominences, from linear polarization measurements in the He I D3 line performed at the Pic-du-Midi and at Sacramento Peak. However, there is an inescapable ambiguity in the field vector determination: each polarization measurement provides two field vector solutions symmetrical with respect to the line-of-sight. A statistical analysis capable of resolving this ambiguity was applied to the large sample of prominences observed at the Pic-du-Midi (Leroy et al., 1984); the same method of analysis applied to the prominences observed at Sacramento Peak (Athay et al., 1983) provides results in agreement on the most probable magnetic structure of prominences; these results are detailed. The statistical results were confirmed on favorable individual cases: for 15 prominences observed at Pic-du-Midi, the two field vectors point to the same side of the prominence, and the alpha angles are large enough with respect to the measurement and interpretation inaccuracies that the field polarity is derived without any ambiguity.

  1. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  2. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  3. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors
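The sensitivity to the communication-to-computation ratio can be made concrete with a toy timing model (an illustrative sketch, not the authors' measurements), comparing nearest-neighbor exchange against a tree-based global reduction:

```python
import math

def speedup(p, t_comp, t_comm):
    """Toy model: perfectly divided work plus a per-step communication cost."""
    return t_comp / (t_comp / p + t_comm)

def neighbor_cost(p, alpha):
    """Nearest-neighbor exchange: cost independent of the processor count p."""
    return 2 * alpha

def global_cost(p, alpha):
    """Tree-based global reduction: cost grows like log2(p)."""
    return alpha * math.log2(p)
```

For 32 processors with unit work per processor (t_comp = 32) and a per-message cost alpha = 0.5, neighbor-only exchange yields a speedup of 16, while the global reduction yields only about 9 — illustrating why algorithms for the iPSC had to avoid global communication.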

  4. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

5. Evaluation of Several Insecticides for the Control of Cephaloleia sp. near vagelineata Pic, a Pest of African Oil Palm

    Directory of Open Access Journals (Sweden)

    Urueta Sandino Eduardo

    1974-04-01

Full Text Available Several tests were carried out to determine the effectiveness of carbofuran at 1.0, 1.5 and 2.0 kg a.i./ha; carbaryl at 1.5 and 2.0 kg a.i./ha; lindane at 1.0 and 1.5 kg a.i./ha; diazinon at 0.5 l a.i./ha; dicrotophos at 0.5 l a.i./ha; phosphamidon at 0.6 l a.i./ha; and fenthion at 0.5 l a.i./ha against larvae and adults of Cephaloleia sp. near vagelineata Pic, a chrysomelid pest of young oil palm (Elaeis guineensis) leaves in Colombia. All of the insecticides controlled Cephaloleia sp. larvae in the spears well, for periods of more than 30 days. Carbofuran at 2.0 kg a.i./ha, carbaryl at 2.0 kg a.i./ha and lindane at 1.5 kg a.i./ha were the most effective against Cephaloleia sp. adults, protecting the youngest leaves for up to 15 days. Dicrotophos at 0.5 l a.i./ha, diazinon at 0.5 l a.i./ha, fenthion at 0.5 l a.i./ha and phosphamidon at 0.6 l a.i./ha were apparently not effective against the adult forms of Cephaloleia sp. None of the insecticides was phytotoxic to the oil palm.

6. Cleavage specificity analysis of six type II transmembrane serine proteases (TTSPs) using PICS with proteome-derived peptide libraries.

    Directory of Open Access Journals (Sweden)

    Olivier Barré

Full Text Available Type II transmembrane serine proteases (TTSPs) are a family of cell membrane tethered serine proteases with unclear roles, as their cleavage site specificities and substrate degradomes have not been fully elucidated. Indeed, just 52 cleavage sites are annotated in MEROPS, the database of proteases, their substrates and inhibitors. To profile the active site specificities of the TTSPs, we applied Proteomic Identification of protease Cleavage Sites (PICS). Human proteome-derived database-searchable peptide libraries were assayed with six human TTSPs (matriptase, matriptase-2, matriptase-3, HAT, DESC and hepsin) to simultaneously determine sequence preferences on the N-terminal non-prime (P) and C-terminal prime (P') sides of the scissile bond. Prime-side cleavage products were isolated following biotinylation and identified by tandem mass spectrometry. The corresponding non-prime side sequences were derived from human proteome databases using bioinformatics. Sequencing of 2,405 individual cleaved peptides allowed for the development of the family consensus protease cleavage site specificity, revealing a strong specificity for arginine in the P1 position and, surprisingly, a lysine in the P1' position. TTSP cleavage between R↓K was confirmed using synthetic peptides. By parsing through known substrates and known structures of TTSP catalytic domains, and by modeling the remainder, structural explanations for this strong specificity were derived. Degradomics analysis of 2,405 cleavage sites revealed a similar and characteristic TTSP family specificity at the P1 and P1' positions for arginine and lysine in unfolded peptides. The prime side is important for cleavage specificity, thus making these proteases unusual within the tryptic-enzyme class that generally has overriding non-prime side specificity.
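The consensus-building step — tallying which residues dominate each position around the scissile bond — can be sketched as follows. The four cleavage sites below are invented placeholders, not peptides from the study; each pair is (non-prime side, prime side) of one cleaved sequence, written N- to C-terminal.

```python
from collections import Counter

sites = [("SAGR", "KLTV"), ("TPGR", "KMDE"), ("LVQR", "SAGE"), ("AAGK", "KLLE")]

def position_profile(cleavages):
    """Tally residues at P1 (C-terminus of the non-prime side) and P1'
    (N-terminus of the prime side) across all observed cleavage sites."""
    p1 = Counter(non[-1] for non, pri in cleavages)
    p1_prime = Counter(pri[0] for non, pri in cleavages)
    return p1, p1_prime

p1, p1_prime = position_profile(sites)
```

Run over real PICS output for the 2,405 sites, the same tally would surface the reported arginine preference at P1 and lysine at P1'.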

  7. ASIC or PIC? Implantable stimulators based on semi-custom CMOS technology or low-power microcontroller architecture.

    Science.gov (United States)

    Salmons, S; Gunning, G T; Taylor, I; Grainger, S R; Hitchings, D J; Blackhurst, J; Jarvis, J C

    2001-01-01

    To gain a better understanding of the effects of chronic stimulation on mammalian muscles we needed to generate patterns of greater variety and complexity than simple constant-frequency or burst patterns. We describe here two approaches to the design of implantable neuromuscular stimulators that can satisfy these requirements. Devices of both types were developed and used in long-term experiments. The first device was based on a semi-custom Application Specific Integrated Circuit (ASIC). This approach has the advantage that the circuit can be completely tested at every stage of development and production, assuring a high degree of reliability. It has the drawback of inflexibility: the patterns are produced by state machines implemented in silicon, so each new set of patterns requires a fresh production run, which is costly and time-consuming. The second device was based on a commercial microcontroller (Microchip PIC16C84). The functionality of this type of circuit is specified in software rather than in silicon hardware, allowing a single device to be programmed for different functions. With the use of features designed to improve fault-tolerance we found this approach to be as reliable as that based on ASICs. The encapsulated devices can easily be accommodated subcutaneously on the flank of a rabbit and a recent version is small enough to implant into the peritoneal cavity of rats. The current devices are programmed with a predetermined set of 12 patterns before assembly; the desired pattern is selected after implantation with an electronic flash gun. The operating current drain is less than 40 microA.

  8. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.

  9. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

Second derivative parallel block backward differentiation type formulas for Stiff ODEs. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...

  10. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding times and the effectiveness of parallelization.

  11. UCLA accelerator research and development

    International Nuclear Information System (INIS)

    Cline, D.B.

    1992-01-01

This progress report covers work supported by the above DOE grant over the period November 1, 1991 to July 31, 1992. The work is a program of experimental and theoretical studies in advanced particle accelerator research and development for high energy physics applications. The program features research at particle beam facilities in the United States and includes research on novel high power sources, novel focussing systems (e.g. plasma lens), beam monitors, novel high-brightness, high-current gun systems, and novel flavor factories, in particular the φ Factory

  12. Parallel fabrication of macroporous scaffolds.

    Science.gov (United States)

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  13. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  14. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
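For reference, the sequential Davidon-Fletcher-Powell iteration that the paper decomposes can be sketched as follows — a minimal two-variable version with a backtracking line search; the test function is invented for illustration. The expensive pieces (gradient evaluations, the matrix-vector products in the update) are the natural candidates for parallel execution.

```python
def f(p):
    """Hypothetical test function: a simple convex quadratic with minimum at (1, -2)."""
    x, y = p
    return (x - 1.0) ** 2 + 10.0 * (y + 2.0) ** 2

def grad(p):
    x, y = p
    return [2.0 * (x - 1.0), 20.0 * (y + 2.0)]

def dfp(p, iters=100):
    """DFP quasi-Newton: maintain an inverse-Hessian estimate H and update it
    with H += s s^T / (s^T y) - (H y)(H y)^T / (y^T H y)."""
    H = [[1.0, 0.0], [0.0, 1.0]]
    g = grad(p)
    for _ in range(iters):
        if max(abs(gi) for gi in g) < 1e-8:
            break
        d = [-(H[i][0] * g[0] + H[i][1] * g[1]) for i in range(2)]
        # Backtracking (Armijo) line search along the quasi-Newton direction.
        t, gd = 1.0, g[0] * d[0] + g[1] * d[1]
        while f([p[0] + t * d[0], p[1] + t * d[1]]) > f(p) + 1e-4 * t * gd:
            t *= 0.5
        s = [t * d[0], t * d[1]]
        p = [p[0] + s[0], p[1] + s[1]]
        g_new = grad(p)
        y = [g_new[0] - g[0], g_new[1] - g[1]]
        sy = s[0] * y[0] + s[1] * y[1]
        Hy = [H[i][0] * y[0] + H[i][1] * y[1] for i in range(2)]
        yHy = y[0] * Hy[0] + y[1] * Hy[1]
        for i in range(2):
            for j in range(2):
                H[i][j] += s[i] * s[j] / sy - Hy[i] * Hy[j] / yHy
        g = g_new
    return p
```

On a transputer network, the function/gradient evaluations inside the line search and the rank-two update of H are the components that can proceed concurrently.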

  15. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  16. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  17. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
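The flavor of such mapping problems can be illustrated with the standard probe-based solution to one tractable case — partitioning m modules with given weights into contiguous blocks for the n processors of a linear array so that the most heavily loaded processor is as light as possible. This is a textbook sketch, not the paper's O(nm log m) algorithm.

```python
def feasible(weights, n, cap):
    """Greedy check: can the modules be packed into at most n contiguous
    blocks, each with total weight at most cap?"""
    blocks, cur = 1, 0
    for w in weights:
        if w > cap:
            return False
        if cur + w <= cap:
            cur += w
        else:
            blocks += 1
            cur = w
    return blocks <= n

def min_bottleneck(weights, n):
    """Binary search on the bottleneck load; O(m log(sum of weights))."""
    lo, hi = max(weights), sum(weights)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(weights, n, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For module weights [2, 3, 4, 5, 6] on two processors the optimum is the split [2, 3, 4] | [5, 6], whose bottleneck load is 11.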

  18. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  19. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed and the report shows how these classes can be used...
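
    The kernel in question can be sketched in a few lines (a serial Python illustration of a CSR matrix-vector product, not the report's C++/MPI code; the row loop is the part that MPI ranks or OpenMP threads would divide among themselves):

```python
# Minimal sketch of a CSR (compressed sparse row) matrix-vector product.
# Each row's dot product is independent of the others, which is what makes
# the outer loop trivially parallel across threads or MPI ranks.
def csr_matvec(values, col_idx, row_ptr, x):
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):  # rows can be split across threads/ranks
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y
```

    For the 2x3 matrix [[1, 0, 2], [0, 3, 0]] the CSR arrays are values=[1, 2, 3], col_idx=[0, 2, 1], row_ptr=[0, 2, 3].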

  20. [Falsified medicines in parallel trade].

    Science.gov (United States)

    Muckenfuß, Heide

    2017-11-01

    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. Europe-wide regulations, e.g. for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  1. The parallel adult education system

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne

    2015-01-01

    for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...

  2. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  3. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x(2) - y(2) shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.
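
    The k-space spreading that phase scrambling relies on can be illustrated numerically (a 1D numpy toy, not the paper's reconstruction; the coefficient `alpha` is an arbitrary choice for the demo):

```python
import numpy as np

# A quadratic phase applied in image space spreads the object's energy
# across k-space ("phase scrambling"); this is what allows a windowed chirp
# convolution to recover a low-resolution, alias-free image per RF channel.
N = 64
x = np.arange(N) - N // 2
obj = np.ones(N)                          # simple 1D "object"
alpha = np.pi / N                         # demo value, not from the paper
scrambled = obj * np.exp(1j * alpha * x**2)

k_plain = np.fft.fft(obj)                 # energy concentrated in one sample
k_scr = np.fft.fft(scrambled)             # energy spread across k-space
print(np.max(np.abs(k_plain)), np.max(np.abs(k_scr)))
```

    Without the quadratic phase the spectrum of the constant object is a single spike; with it, the peak magnitude drops sharply as the energy spreads over many k-space samples.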

  5. Parallel plate transmission line transformer

    NARCIS (Netherlands)

    Voeten, S.J.; Brussaard, G.J.H.; Pemen, A.J.M.

    2011-01-01

    A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the output.
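
    The idealized transformation ratios implied by this topology can be sketched as follows (assuming lossless lines and a pulse shorter than the line length; the function name and numbers are ours, not the paper's):

```python
# Ideal transmission-line-transformer relations: n lines driven in parallel
# at the input and stacked in series at the output multiply the voltage by n,
# present z_line/n at the input, and n*z_line at the output (idealized,
# lossless, pulse shorter than the lines).
def tlt_output(v_in, z_line, n):
    v_out = n * v_in       # series stack adds the line voltages
    z_in = z_line / n      # n lines in parallel at the input
    z_out = n * z_line     # n lines in series at the output
    return v_out, z_in, z_out

# e.g. a 10 kV pulse into four 50-ohm lines
print(tlt_output(10e3, 50.0, 4))  # → (40000.0, 12.5, 200.0)
```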

  6. Matpar: Parallel Extensions for MATLAB

    Science.gov (United States)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  7. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray

  8. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  9. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  10. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  11. Effects of Temperature and Residence Time on the Emissions of PIC and Fine Particles during Fixed Bed Combustion of Conifer Stemwood Pellets

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Christoffer; Lindmark, Fredrik; Oehman, Marcus; Nordin, Anders [Umeaa Univ. (Sweden). Energy Technology and Thermal Process Chemistry; Pettersson, Esbjoern [Energy Technology Centre, Piteaa (Sweden); Westerholm, Roger [Stockholm Univ., Arrhenius Laboratory (Sweden). Dept. of Analytical Chemistry

    2006-07-15

    The use of wood fuel pellets has proved to be well suited for the small-scale market, enabling controlled and efficient combustion with low emission of products of incomplete combustion (PIC). Still, a potential for further emission reduction exists, and a thorough understanding of the influence of combustion conditions on the emission characteristics of air pollutants like PAH and particulate matter (PM) is important. The objective was to determine the effects of temperature and residence time on the emission performance and characteristics, with focus on hydrocarbons and PM, during combustion of conifer stemwood pellets in a laboratory fixed bed reactor (<5 kW). Temperature and residence time after the bed section were varied according to statistical experimental designs (650-970 °C and 0.5-3.5 s) with the emission responses: CO, organic gaseous carbon, NO, 20 VOC compounds, 43 PAH compounds, PM_tot, fine particle mass/count median diameter (MMD and CMD) and number concentration. Temperature was negatively correlated with the emissions of all studied PIC, with limited effects of residence time. The PM_tot emissions of 15-20 mg/MJ were in all cases dominated by fine (<1 µm) particles of K, Na, S, Cl, C, O and Zn. Increased residence time resulted in increased fine particle sizes (i.e. MMD and CMD) and decreased number concentrations. The importance of a high temperature (>850 °C) in the bed zone, with intensive, air-rich and well-mixed isothermal conditions for 0.5-1.0 s in the post-combustion zone, was illustrated for wood pellet combustion, with almost total depletion of all studied PIC. The results emphasize the need for further verification studies and technology development work.

  12. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers both with shared memory and with distributed memory are discussed. By analyzing the inherent law of the mathematical and physical model of photon transport according to the structural features of parallel computers, using the strategy of 'divide and conquer', adjusting the algorithm structure of the program, dissolving the data relationships, finding parallelizable ingredients and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various HP parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP) and very good parallel speedup has been obtained
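
    The 'divide and conquer' strategy exploits the independence of photon histories: they can be split into chunks, simulated separately (one chunk per processor), and the tallies merged. A toy sketch (the slab-attenuation model here is invented for illustration, not the paper's transport model):

```python
import random

# Toy chunked photon Monte Carlo: count photons whose exponential free path
# exceeds the slab depth. Each chunk is independent, so the calls below are
# trivially parallelizable across processors; only the tallies are merged.
def simulate_chunk(n_photons, mu, depth, seed):
    rng = random.Random(seed)          # per-chunk stream keeps runs reproducible
    transmitted = 0
    for _ in range(n_photons):
        if rng.expovariate(mu) > depth:
            transmitted += 1
    return transmitted

def transmission(n_total, mu, depth, n_chunks):
    per = n_total // n_chunks
    tallies = [simulate_chunk(per, mu, depth, seed) for seed in range(n_chunks)]
    return sum(tallies) / (per * n_chunks)

print(transmission(40000, 1.0, 1.0, 4))  # ~ exp(-1) ≈ 0.37
```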

  13. Potencial alelopático de Tropaeolum majus L. na germinação e crescimento inicial de plântulas de picão-preto Allelophaty potential of Tropaeolum majus L on picão-preto seeds germination and initial seedling growth

    Directory of Open Access Journals (Sweden)

    Anelise Samara Nazari Formagio

    2012-01-01

    Full Text Available This study evaluated the allelopathic potential of methanolic extracts of leaves, flowers and roots of capuchinha (Tropaeolum majus L. on seed germination and initial seedling growth of picão-preto. The methanolic extract with the strongest inhibitory potential was partitioned, yielding hexane, chloroform, ethyl acetate and hydromethanolic fractions, which were subsequently characterized by their absorption spectra in the infrared (IR region. The allelopathic effect was assessed on picão-preto seeds, which were placed on germitest paper moistened with 2 mL of the extracts and kept in a B.O.D.-type germination chamber at 25°C under constant white light; seeds moistened directly with water constituted the control treatment. Seed quality was evaluated by germination and vigor tests (first count and primary root and hypocotyl length of the seedlings, in a completely randomized design. The allelopathic potential of capuchinha leaves was greater than that of the other plant parts with respect to seed germination and to hypocotyl and root length of picão-preto seedlings. These effects may be associated with the presence of polar chemical groups, since the inhibitory effect on germination and initial seedling growth of picão-preto increased with the polarity of the solvents.

  14. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism into processors. Multicore frameworks, along with graphics processing units, have broadened the scope for parallelism. Several compilers have been updated to address the resulting synchronization and threading challenges. Appropriate program and algorithm classification can greatly help software engineers identify opportunities for effective parallelization. In the present work we investigated current species-based classifications of algorithms; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms was chosen whose structures match different issues while performing a given task. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant information that is not captured by the original algorithmic species. We implemented these extensions in the tool, enabling automatic characterization of program code.

  15. Structural synthesis of parallel robots

    CERN Document Server

    Gogu, Grigore

    This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators.  This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1.  Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

  16. GPU Parallel Bundle Block Adjustment

    Directory of Open Access Journals (Sweden)

    ZHENG Maoteng

    2017-09-01

    Full Text Available To deal with massive data in photogrammetry, we introduce the GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton method are also applied to decrease the iteration times while solving the normal equation. A brand new workflow of bundle adjustment is developed to utilize GPU parallel computing technology. Our method can avoid the storage and inversion of the big normal matrix, and compute the normal matrix in real time. The proposed method can not only largely decrease the memory requirement of normal matrix, but also largely improve the efficiency of bundle adjustment. It also achieves the same accuracy as the conventional method. Preliminary experiment results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be done in only 1.5 minutes while achieving sub-pixel accuracy.
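
    The preconditioned conjugate gradient step can be sketched as follows (a generic Jacobi-preconditioned CG in numpy; the paper does not specify this preconditioner, and a small explicit matrix stands in for the matrix-free normal equations computed on the GPU):

```python
import numpy as np

# Generic Jacobi-preconditioned conjugate gradient for an SPD system A x = b.
# In bundle adjustment, A would be the normal matrix applied matrix-free,
# avoiding its storage and inversion, as the paper describes.
def pcg(A, b, tol=1e-10, max_iter=200):
    x = np.zeros_like(b)
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:   # converged
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```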

  17. A tandem parallel plate analyzer

    International Nuclear Information System (INIS)

    Hamada, Y.; Fujisawa, A.; Iguchi, H.; Nishizawa, A.; Kawasumi, Y.

    1996-11-01

    By a new modification of a parallel plate analyzer, the second-order focus is obtained at an arbitrary injection angle. This kind of analyzer with a small injection angle will have the advantage of a small operating voltage, compared to the Proca and Green analyzer, where the injection angle is 30 degrees. Thus, the newly proposed analyzer will be very useful for the precise energy measurement of high-energy particles in the MeV range. (author)

  18. High-speed parallel counter

    International Nuclear Information System (INIS)

    Gus'kov, B.N.; Kalinnikov, V.A.; Krastev, V.R.; Maksimov, A.N.; Nikityuk, N.M.

    1985-01-01

    This paper describes a high-speed parallel counter that contains 31 inputs and 15 outputs and is implemented by integrated circuits of series 500. The counter is designed for fast sampling of events according to the number of particles that pass simultaneously through the hodoscopic plane of the detector. The minimum delay of the output signals relative to the input is 43 nsec. The duration of the output signals can be varied from 75 to 120 nsec

  19. An anthropologist in parallel structure

    Directory of Open Access Journals (Sweden)

    Noelle Molé Liston

    2016-08-01

    Full Text Available The essay examines the parallels between Molé Liston’s studies on labor and precarity in Italy and the United States’ anthropology job market. Probing the way economic shift reshaped the field of anthropology of Europe in the late 2000s, the piece explores how the neoliberalization of the American academy increased the value in studying the hardships and daily lives of non-western populations in Europe.

  20. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman

    2010-01-01

    Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,

  1. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we treat the problem as a non-linear inverse problem. To avoid cost-intensive derivatives we use Landweber-Kaczmarz iteration and, in order to improve the overall results, impose some additional sparsity constraints.
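
    The derivative-free core of this approach can be sketched as a plain Landweber iteration (omitting the Kaczmarz sweep over the individual coil operators and the sparsity constraints; a toy real-valued operator stands in for the MRI forward model):

```python
import numpy as np

# Plain Landweber iteration x_{k+1} = x_k + w * A^H (y - A x_k) for a linear
# forward operator A. Convergence requires a step size w below 2 / ||A||^2;
# the Kaczmarz variant cycles such updates over the per-coil operators.
def landweber(A, y, n_iter=500):
    w = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(n_iter):
        x = x + w * A.conj().T @ (y - A @ x)
    return x
```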

  2. Wakefield calculations on parallel computers

    International Nuclear Information System (INIS)

    Schoessow, P.

    1990-01-01

    The use of parallelism in the solution of wakefield problems is illustrated for two different computer architectures (SIMD and MIMD). Results are given for finite difference codes which have been implemented on a Connection Machine and an Alliant FX/8 and which are used to compute wakefields in dielectric loaded structures. Benchmarks on code performance are presented for both cases. 4 refs., 3 figs., 2 tabs

  3. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  4. Parallel processing of genomics data

    Science.gov (United States)

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

    The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, and analyzing this flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face these issues, efficient, possibly parallel, bioinformatics software is needed to preprocess and analyze the data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data that copes with high-dimensional data and achieves good response times. The proposed system can find statistically significant biological markers that discriminate between classes of patients who respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.

  5. Development of an operational neutron spectrometry system dedicated to the characterization of the natural atmospheric radiative environment, implemented at the Pic du Midi

    International Nuclear Information System (INIS)

    Cheminet, Adrien

    2013-01-01

    This PhD thesis was carried out as a joint effort between two French organizations, the French Institute for Radiological Protection and Nuclear Safety (IRSN/LMDN, Cadarache) and the French Aerospace Lab (ONERA/DESP, Toulouse). The aim was to develop an operational neutron spectrometer, extended to high energies, in order to measure the dynamics of the spectral variations of the natural radiative environment at the summit of the Pic du Midi Observatory in the French Pyrenees. First, the fluence response of each detector was calculated with Monte Carlo simulations. The responses were then validated in experimental campaigns up to high energies (≥20 MeV) in reference neutron fields. The systematic uncertainties were deduced after detailed studies of the mathematical reconstruction of the spectra (the unfolding procedure). The system was then tested under rock at the LSBB in Rustrel before being installed at +500 m and +1000 m above sea level for the first environmental campaigns. Finally, the spectrometer has been operating for two years since its deployment at the summit of the Pic du Midi (+2885 m). The continuous data were analysed with an innovative method. Seasonal and spectral variations were observed, and Forbush decreases were recorded after strong solar flares. These data were further analysed with Monte Carlo simulations and applied practically in personnel dosimetry and in assessing the reliability of submicron electronic components. (author)

  6. Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP

    International Nuclear Information System (INIS)

    Vay, J.-L.; Furman, M.A.; Azevedo, A.W.; Cohen, R.H.; Friedman, A.; Grote, D.P.; Stoltz, P.H.

    2004-01-01

    We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE

  7. Overview of the Force Scientific Parallel Language

    Directory of Open Access Journals (Sweden)

    Gita Alaghband

    1994-01-01

    Full Text Available The Force parallel programming language designed for large-scale shared-memory multiprocessors is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force has resulted in structured parallel programs that are ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples to illustrate some parallel programming approaches using the Force are also presented.

  8. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler’s ability to generate loop-parallel code. We use this compilation system to modify two sequential benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...

  9. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics- Type, Kinematics, and Optimal Design presents the results of 15 year's research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, which is also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity model taking into account motion and force transmissibility, and others.   This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  10. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; NA NA NA Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

  11. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  12. Parallel computation of rotating flows

    DEFF Research Database (Denmark)

    Lundin, Lars Kristian; Barker, Vincent A.; Sørensen, Jens Nørkær

    1999-01-01

    This paper deals with the simulation of 3‐D rotating flows based on the velocity‐vorticity formulation of the Navier‐Stokes equations in cylindrical coordinates. The governing equations are discretized by a finite difference method. The solution is advanced to a new time level by a two‐step process...... is that of solving a singular, large, sparse, over‐determined linear system of equations, and the iterative method CGLS is applied for this purpose. We discuss some of the mathematical and numerical aspects of this procedure and report on the performance of our software on a wide range of parallel computers....
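The core numerical kernel described here, an iterative CGLS solve of an over-determined least-squares system, can be sketched as follows (a minimal dense-matrix sketch; the paper's solver operates on large, sparse, singular systems):

```python
import numpy as np

def cgls(A, b, iters=50, tol=1e-10):
    # CGLS: conjugate gradients on the normal equations A^T A x = A^T b,
    # applied without ever forming A^T A explicitly.
    x = np.zeros(A.shape[1])
    r = b - A @ x                 # residual in the data space
    s = A.T @ r                   # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break                 # normal-equation residual small enough
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

For a consistent over-determined system, `cgls(A, b)` recovers the least-squares solution in at most `A.shape[1]` iterations in exact arithmetic.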

  13. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....

  14. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....
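In standard convex-geometry notation (my paraphrase of the abstract, not formulas taken from the paper), the quantities involved are:

```latex
% The parallel volume of a planar body K at distance r is the area of
% its r-neighbourhood,
\[
  V_r(K) \;=\; \lambda\bigl(\{\, x \in \mathbb{R}^2 : \operatorname{dist}(x, K) \le r \,\}\bigr),
\]
% and the stated result is that, for a planar body K,
\[
  \lim_{r \to \infty} \bigl( V_r(\operatorname{conv} K) - V_r(K) \bigr) \;=\; 0 .
\]
```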

  15. Design and implementation of a card for remote monitoring and control over the Internet, using PIC18F97J60 microcontroller technology

    OpenAIRE

    Montenegro Viera, Efren; Sandoya Tinoco, Eduardo; Ponguillo Intriago, Ronald Alberto

    2009-01-01

    The work presented in this article was developed to demonstrate the application of the technological resources of devices such as the PIC 18F97J60 microcontroller in home automation. There are different ways to implement home automation, which is why we decided to opt for a new technology, namely PICs with embedded systems, to develop an application that reduces the cost and size of what can currently be found on the market. With the help of di...

  16. A Parallel Approach to Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Lubomir Dedera

    2004-01-01

    Full Text Available The paper deals with a parallel approach to the coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding time and the effectiveness of parallelization.
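The parallelism exploited in fractal coding comes from the independence of the range-block searches; a minimal sketch (hypothetical helper names, 1-D blocks for brevity, not the paper's algorithm) might look like:

```python
from concurrent.futures import ThreadPoolExecutor

def best_match(range_block, domain_blocks):
    # pick the index of the domain block with minimal squared error
    # against this range block (transform search omitted for brevity)
    def err(d):
        return sum((r - x) ** 2 for r, x in zip(range_block, d))
    return min(range(len(domain_blocks)), key=lambda i: err(domain_blocks[i]))

def encode_parallel(range_blocks, domain_blocks, workers=4):
    # each range block is coded independently of the others,
    # so the search parallelises trivially across workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda rb: best_match(rb, domain_blocks),
                             range_blocks))
```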

  17. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  18. An Introduction to Parallel Computation R

    Indian Academy of Sciences (India)

    How are they programmed? This article provides an introduction. A parallel computer is a network of processors built for ... and have been used to solve problems much faster than a single ... in parallel computer design is to select an organization which ..... The most ambitious approach to parallel computing is to develop.

  19. Comparison of parallel viscosity with neoclassical theory

    International Nuclear Information System (INIS)

    Ida, K.; Nakajima, N.

    1996-04-01

    Toroidal rotation profiles are measured with charge exchange spectroscopy for the plasma heated with tangential NBI in the CHS heliotron/torsatron device to estimate the parallel viscosity. The parallel viscosity derived from the toroidal rotation velocity shows good agreement with the neoclassical parallel viscosity plus the perpendicular viscosity (μ⊥ = 2 m²/s). (author)

  20. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n²), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...

  1. Xyce parallel electronic simulator design.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, it is to be expected on long term projects for there to be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

  2. The 3D Pelvic Inclination Correction System (PICS): A universally applicable coordinate system for isovolumetric imaging measurements, tested in women with pelvic organ prolapse (POP).

    Science.gov (United States)

    Reiner, Caecilia S; Williamson, Tom; Winklehner, Thomas; Lisse, Sean; Fink, Daniel; DeLancey, John O L; Betschart, Cornelia

    2017-07-01

    In pelvic organ prolapse (POP), the organs are pushed downward along the lines of gravity, so measurements along this longitudinal body axis are desirable. We propose a universally applicable 3D coordinate system that corrects for changes in pelvic inclination and that allows the localization of any point in the pelvis at rest or under dynamic conditions on magnetic resonance images (MRI) of pelvic floor disorders in a scanner- and software-independent manner. The proposed 3D coordinate system, called the 3D Pelvic Inclination Correction System (PICS), is constructed using four bony landmark points, with the origin set at the inferior pubic point and three additional points at the sacrum (sacrococcygeal joint) and both ischial spines, which are clearly visible on MRI images. The feasibility and applicability of the moving frame was evaluated using MRI datasets from five women with pelvic organ prolapse, three undergoing static MRI and two undergoing dynamic MRI of the pelvic floor in a supine position. The construction of the coordinate system was performed using the selected landmarks, with an initial implementation completed in MATLAB. In all cases the selected landmarks were clearly visible, and the construction of the 3D PICS and measurement of pelvic organ positions were performed without difficulty. The resulting distance from the organ position to the horizontal PICS plane was compared to a traditional measure based on standard measurements in 2D slices. The two approaches demonstrated good agreement in each of the cases. The developed approach makes quantitative assessment of pelvic organ position in a physiologically relevant 3D coordinate system possible, independent of pelvic movement relative to the scanner. It allows the accurate study of the physiologic range of organ location along the body axis ("up or down") as well as defects of the pelvic sidewall or birth-related pelvic floor injuries outside the midsagittal plane, not possible before in a 2D
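A frame built from four such landmarks can be sketched as below (illustrative only: the axis conventions here are assumptions, not the PICS definition from the paper):

```python
import numpy as np

def pelvic_frame(pubic, sacrococcygeal, spine_left, spine_right):
    # Hypothetical orthonormal frame from four landmark points:
    # origin at the inferior pubic point, one axis along
    # pubis -> sacrococcygeal joint, the lateral axis obtained from the
    # inter-spine direction by Gram-Schmidt orthogonalisation.
    origin = np.asarray(pubic, float)
    y = np.asarray(sacrococcygeal, float) - origin
    y /= np.linalg.norm(y)
    lateral = np.asarray(spine_right, float) - np.asarray(spine_left, float)
    x = lateral - (lateral @ y) * y       # remove the component along y
    x /= np.linalg.norm(x)
    z = np.cross(x, y)                    # completes a right-handed basis
    return origin, np.stack([x, y, z])

def to_pics(point, origin, basis):
    # coordinates of `point` expressed in the landmark-defined frame;
    # invariant under rigid motion of the pelvis relative to the scanner
    return basis @ (np.asarray(point, float) - origin)
```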

  3. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements high-performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.

  4. Parallelization of quantum molecular dynamics simulation code

    International Nuclear Information System (INIS)

    Kato, Kaori; Kunugi, Tomoaki; Shibahara, Masahiko; Kotake, Susumu

    1998-02-01

    A quantum molecular dynamics simulation code has been developed for the analysis of the thermalization of photon energies in molecules or materials at the Kansai Research Establishment. The simulation code is parallelized for both a scalar massively parallel computer (Intel Paragon XP/S75) and a vector parallel computer (Fujitsu VPP300/12). Scalable speed-up has been obtained on both parallel computers by distributing particle groups among the processor units. By distributing work to processor units not only by particle group but also by the fine-grained calculations for individual particles, high parallelization performance is achieved on the Intel Paragon XP/S75. (author)
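Particle-group decomposition of this kind can be sketched as follows (a hypothetical example, reducing the kinetic energy over particle groups, with threads standing in for processor units):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def kinetic_energy_parallel(masses, velocities, nproc=4):
    # Particle decomposition: each worker owns one particle group and
    # reduces over it independently; the partial sums are then combined.
    groups = np.array_split(np.arange(len(masses)), nproc)

    def worker(idx):
        # 0.5 * m * |v|^2 summed over this group's particles only
        return 0.5 * np.sum(masses[idx] * np.sum(velocities[idx] ** 2, axis=1))

    with ThreadPoolExecutor(max_workers=nproc) as pool:
        return sum(pool.map(worker, groups))
```

Because a quantity like this needs no inter-particle communication, the decomposition scales with the number of processor units.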

  5. Implementation and performance of parallelized elegant

    International Nuclear Information System (INIS)

    Wang, Y.; Borland, M.

    2008-01-01

    The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

  6. Parallelization of 2-D lattice Boltzmann codes

    International Nuclear Information System (INIS)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. A high parallel efficiency of 95.1% is obtained by the vector parallel calculation on 16 processors with a 1152x1152 grid, and 88.6% by the scalar parallel calculation on 100 processors with an 800x800 grid. Performance models are developed to analyze the performance of the LB codes. Our performance models show that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability while keeping the available memory size of one processor element at its maximum. Our performance model predicts that the execution time of the vector parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method in general has a drawback in interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author)
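A back-of-the-envelope version of such a performance model counts the halo cells each process must exchange under the two decompositions (illustrative only; `halo_cells_1d`/`halo_cells_2d` are assumed names and this is not the paper's model):

```python
import math

def halo_cells_1d(n, p):
    # 1-D strip decomposition of an n x n grid over p processes:
    # each interior strip exchanges two full rows of n cells each,
    # independent of p -- the communication drawback noted above.
    return 2 * n

def halo_cells_2d(n, p):
    # 2-D block decomposition (p assumed a perfect square):
    # each interior block exchanges four edges of n/sqrt(p) cells.
    q = int(math.isqrt(p))
    return 4 * (n // q)
```

Per-process communication volume shrinks with p only in the 2-D case, which is why the 2-D method suits the scalar machine while the 1-D method trades communication for long vectorizable loops.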

  7. Parallelization of 2-D lattice Boltzmann codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. A high parallel efficiency of 95.1% is obtained by the vector parallel calculation on 16 processors with a 1152x1152 grid, and 88.6% by the scalar parallel calculation on 100 processors with an 800x800 grid. Performance models are developed to analyze the performance of the LB codes. Our performance models show that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability while keeping the available memory size of one processor element at its maximum. Our performance model predicts that the execution time of the vector parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method in general has a drawback in interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author)

  8. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  9. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    Full Text Available To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  10. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...
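The space saving from streaming can be illustrated with naive matrix multiplication (a Python sketch, not NESL or Accelerate code): yielding one result row at a time keeps only O(n) output values live, rather than materializing all n^3 independent scalar multiplications as a fully flattened data-parallel program would.

```python
def matmul_streaming(a, b):
    # Stream the n x n product row by row: each yielded row needs only
    # O(n) live storage, while the n^3 scalar multiplications are
    # consumed by the running sums instead of being materialized.
    n = len(a)
    for i in range(n):
        yield [sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
```

A consumer that reduces over rows (e.g. a norm or trace) never holds more than one row of the result.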

  11. An alternative model to distribute VO software to WLCG sites based on CernVM-FS: a prototype at PIC Tier1

    International Nuclear Information System (INIS)

    Lanciotti, E; Merino, G; Blomer, J; Bria, A

    2011-01-01

    In a distributed computing model such as WLCG, experiment-specific application software has to be efficiently distributed to any site of the Grid. Application software is currently installed in a shared area of the site, visible to all Worker Nodes (WNs) of the site through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failures. An alternative model for software distribution based on the CERN Virtual Machine File System (CernVM-FS) has been tried at PIC, the Spanish Tier1 site of WLCG. The test bed used and the results are presented in this paper.

  12. Electron paramagnetic resonance spectral study of [Mn(acs)₂(2-pic)₂(H₂O)₂] single crystals

    Energy Technology Data Exchange (ETDEWEB)

    Kocakoç, Mehpeyker, E-mail: mkocakoc@cu.edu.tr [Çukurova University (Turkey); Tapramaz, Recep, E-mail: recept@omu.edu.tr [Ondokuz Mayıs University (Turkey)

    2016-03-25

    Acesulfame potassium salt is a synthetic, non-caloric sweetener. It is also chemically important for its capability of acting as a ligand in coordination compounds, because it can bind through the nitrogen and oxygen atoms of its carbonyl and sulfonyl groups and the ring oxygen. Some acesulfame-containing transition metal ion complexes with mixed ligands exhibit solvatochromic and thermochromic properties, which make them physically important. In this work, single crystals of a Mn²⁺ ion complex with mixed ligands, [Mn(acs)₂(2-pic)₂(H₂O)₂], were studied with electron paramagnetic resonance (EPR) spectroscopy. EPR parameters were determined. Zero field splitting parameters indicated that the complex is highly symmetric. Variable temperature studies showed no detectable change in the spectra.

  13. Effect of polarized radiative transfer on the Hanle magnetic field determination in prominences: Analysis of hydrogen H alpha line observations at Pic-du-Midi

    Science.gov (United States)

    Bommier, V.; Deglinnocenti, E. L.; Leroy, J. L.; Sahal-Brechot, S.

    1985-01-01

    The linear polarization of the hydrogen H alpha line of prominences has been computed, taking into account the effect of a magnetic field (Hanle effect), of radiative transfer in the prominence, and of the depolarization due to collisions with the surrounding electrons and protons. The corresponding formalisms are developed in a forthcoming series of papers. In this paper, the main features of the computation method are summarized. The results of the computation have been used for interpretation in terms of magnetic field vector measurements from H alpha polarimetric observations of prominences performed with the Pic-du-Midi coronagraph-polarimeter. Simultaneous observations in one optically thin line (He I D(3)) and one optically thick line (H alpha) provide an opportunity to resolve the ambiguity in the field vector determination.

  14. Plasma density enhancement in atmospheric-pressure dielectric-barrier discharges by high-voltage nanosecond pulse in the pulse-on period: a PIC simulation

    International Nuclear Information System (INIS)

    Sang Chaofeng; Sun Jizhong; Wang Dezhen

    2010-01-01

    A particle-in-cell (PIC) plus Monte Carlo collision simulation is employed to investigate how a sustainable atmospheric-pressure single dielectric-barrier discharge responds to a high-voltage nanosecond pulse (HVNP) further applied to the metal electrode. The results show that the HVNP can significantly increase the plasma density in the pulse-on period. The ion-induced secondary electrons can give rise to avalanche ionization in the positive sheath, which widens the discharge region and enhances the plasma density drastically. However, the plasma density stops increasing once the applied pulse lasts beyond a certain time; therefore, lengthening the pulse duration alone cannot improve the discharge efficiency further. Physical reasons for these phenomena are then discussed.

  15. Plasma density enhancement in atmospheric-pressure dielectric-barrier discharges by high-voltage nanosecond pulse in the pulse-on period: a PIC simulation

    Science.gov (United States)

    Sang, Chaofeng; Sun, Jizhong; Wang, Dezhen

    2010-02-01

    A particle-in-cell (PIC) plus Monte Carlo collision simulation is employed to investigate how a sustainable atmospheric-pressure single dielectric-barrier discharge responds to a high-voltage nanosecond pulse (HVNP) further applied to the metal electrode. The results show that the HVNP can significantly increase the plasma density in the pulse-on period. The ion-induced secondary electrons can give rise to avalanche ionization in the positive sheath, which widens the discharge region and enhances the plasma density drastically. However, the plasma density stops increasing once the applied pulse lasts beyond a certain time; therefore, lengthening the pulse duration alone cannot improve the discharge efficiency further. Physical reasons for these phenomena are then discussed.

  16. Massively parallel diffuse optical tomography

    Energy Technology Data Exchange (ETDEWEB)

    Sandusky, John V.; Pitts, Todd A.

    2017-09-05

    Diffuse optical tomography systems and methods are described herein. In a general embodiment, the diffuse optical tomography system comprises a plurality of sensor heads, the plurality of sensor heads comprising respective optical emitter systems and respective sensor systems. A sensor head in the plurality of sensor heads is caused to act as an illuminator, such that its optical emitter system transmits a transillumination beam towards a portion of a sample. Other sensor heads in the plurality of sensor heads act as observers, detecting portions of the transillumination beam that radiate from the sample in the fields of view of the respective sensor systems of the other sensor heads. Thus, sensor heads in the plurality of sensor heads generate sensor data in parallel.

  17. Embodied and Distributed Parallel DJing.

    Science.gov (United States)

    Cappelen, Birgitta; Andersson, Anders-Petter

    2016-01-01

    Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids etc., or to groups of persons with special needs performing in traditional ways. The latter might be people with disabilities being musicians playing traditional instruments, or actors playing theatre. In this paper we focus on the innovative potential of including people with special needs when creating new cultural activities. In our project RHYME our goal was to create health promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contribution, we ended up creating both a new genre of musical instruments and a new art form. We call the new art form Embodied and Distributed Parallel DJing, and the new genre of instruments Empowering Multi-Sensorial Things.

  18. Device for balancing parallel strings

    Science.gov (United States)

    Mashikian, Matthew S.

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

  19. Linear parallel processing machines I

    Energy Technology Data Exchange (ETDEWEB)

    Von Kunze, M

    1984-01-01

    As is well known, non-context-free grammars for generating formal languages have a certain intrinsic computational power that presents serious difficulties for efficient parsing algorithms as well as for the development of an algebraic theory of context-sensitive languages. In this paper a framework is given for the investigation of the computational power of formal grammars, in order to start a thorough analysis of grammars consisting of derivation rules of the form aB → A₁…Aₙb₁…bₘ. These grammars may be thought of as automata by means of parallel processing, if one considers the variables as operators acting on the terminals while reading them right-to-left. This kind of automaton and its 2-dimensional programming language prove to be useful by allowing a concise linear-time algorithm for integer multiplication. Linear parallel processing machines (LP-machines), which are, in their general form, equivalent to Turing machines, include finite automata and pushdown automata (with states encoded) as special cases. Bounded LP-machines yield deterministic accepting automata for nondeterministic context-free languages, and they define an interesting class of context-sensitive languages. A characterization of this class in terms of generating grammars is established by using derivation trees with crossings as a helpful tool. From the algebraic point of view, deterministic LP-machines are effectively represented semigroups with distinguished subsets. Concerning the dualism between generating and accepting devices of formal languages within the algebraic setting, the concept of accepting automata turns out to reduce essentially to embeddability in an effectively represented extension monoid, even in the classical cases.

  20. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  1. Study of turbulence of Lower Hybrid Drift Instability origin with the Multi Level Multi Domain semi-implicit adaptive PIC method

    Science.gov (United States)

    Innocenti, Maria Elena; Beck, Arnaud; Markidis, Stefano; Lapenta, Giovanni

    2015-04-01

    We study turbulence generated by the Lower Hybrid Drift Instability (LHDI [1]) in the terrestrial magnetosphere. The problem is not only of interest per se, but also for the implications it can have for so-called turbulent reconnection. The LHDI evolution is simulated with the PIC Multi Level Multi Domain code Parsek2D-MLMD [2,3], which simulates different parts of the domain with different spatial and temporal resolutions. This makes it possible to satisfy, at a low computing cost, the two requirements for LHDI turbulence simulations: 1) a large domain, to capture the long-wavelength branch of the LHDI and of the secondary kink instability, and 2) high resolution, to cover the high-wavenumber part of the power spectrum and to capture the wavenumber at which the turbulent cascade ends. The turbulent cascade proceeds seamlessly from the coarse (low resolution) to the refined (high resolution) grid, the only grid resolved finely enough to capture its end, which is studied here and related to wave-particle interaction processes. We also comment on the role of smoothing (a technique commonly used in PIC simulations to reduce particle noise [4]) in simulations of turbulence, and on how its effects on power spectra may easily be mistaken, in the absence of accurate convergence studies, for the end of the inertial range. [1] P. Gary, Theory of Space Plasma Microinstabilities, Cambridge Atmospheric and Space Science Series, 2005. [2] M. E. Innocenti, G. Lapenta, S. Markidis, A. Beck, A. Vapirev, Journal of Computational Physics 238 (2013) 115-140. [3] M. E. Innocenti, A. Beck, T. Ponweiser, S. Markidis, G. Lapenta, Computer Physics Communications (accepted) (2014). [4] C. K. Birdsall, A. B. Langdon, Plasma Physics via Computer Simulation, Taylor and Francis, 2004.

  2. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    Full Text Available There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high cost of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable, architecture-independent software for scientific computation based on our experience with the equational programming language (EPL). Our approach is based on program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  3. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  4. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over one order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance of over ~2 PFlops are demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)
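    The dynamic load-balancing idea mentioned in the abstract can be sketched independently of OSIRIS: in a 1D decomposition, domain boundaries are placed so that each process receives a roughly equal share of the particles. The function below is a hypothetical illustration of that splitting step (not OSIRIS code), using a prefix sum over per-cell particle counts; the names and the greedy boundary choice are assumptions.

    ```python
    import bisect

    def balance_boundaries(cell_counts, nranks):
        """Pick cell indices that split a 1D domain into nranks pieces
        with roughly equal particle counts (illustrative sketch)."""
        prefix = [0]
        for c in cell_counts:
            prefix.append(prefix[-1] + c)  # particles in cells [0, i)
        total = prefix[-1]
        # boundary r goes where the cumulative count reaches r/nranks of the total
        return [bisect.bisect_left(prefix, total * r / nranks)
                for r in range(1, nranks)]
    ```

    In a real PIC code the counts change every step, so boundaries would be recomputed (or nudged) periodically and particles exchanged between neighbouring ranks.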

  5. An FPGA-based DS-CDMA multiuser demodulator employing adaptive multistage parallel interference cancellation

    Science.gov (United States)

    Li, Xinhua; Song, Zhenyu; Zhan, Yongjie; Wu, Qiongzhi

    2009-12-01

    Since system capacity is severely limited by multiple access interference (MAI), reducing the MAI is necessary in the multiuser direct-sequence code division multiple access (DS-CDMA) system used in the telecommunication terminals' data-transfer link. In this paper, after reviewing various multiuser detection schemes, we adopt an adaptive multistage parallel interference cancellation structure in the demodulator, based on the least mean square (LMS) algorithm, to eliminate the MAI. Neither a training sequence nor a pilot signal is needed in the proposed scheme, and its implementation complexity can be greatly reduced by an approximate LMS algorithm. The algorithm and its FPGA implementation are then derived. Simulation results show that the proposed adaptive PIC can outperform some of the existing interference cancellation methods in AWGN channels. The hardware setup of the multiuser demodulator is described, and experimental results based on it demonstrate large performance gains over the conventional single-user demodulator.
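    The LMS adaptation at the heart of such a canceller fits in a few lines. The sketch below is a generic single-stage LMS interference canceller, not the paper's FPGA design: it adapts a small filter so that a reference of the interference is estimated and subtracted from the received signal. The names and parameters (`mu`, `taps`) are illustrative assumptions.

    ```python
    def lms_cancel(received, reference, mu=0.05, taps=4):
        """Adaptive interference cancellation with the LMS update (sketch).

        received  -- samples containing the interference to remove
        reference -- a correlated reference of that interference
        Returns the residual after cancellation at each step.
        """
        w = [0.0] * taps      # adaptive filter weights
        buf = [0.0] * taps    # delay line of reference samples
        residual = []
        for d, x in zip(received, reference):
            buf = [x] + buf[:-1]
            y = sum(wi * bi for wi, bi in zip(w, buf))        # interference estimate
            e = d - y                                          # cancelled output
            w = [wi + mu * e * bi for wi, bi in zip(w, buf)]   # LMS weight update
            residual.append(e)
        return residual
    ```

    In a multistage PIC receiver this block would be replicated per user and per stage, each stage refining the previous stage's interference estimates.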

  6. Development of a new dynamic turbulent model, applications to two-dimensional and plane parallel flows

    International Nuclear Information System (INIS)

    Laval, Jean Philippe

    1999-01-01

    We developed a turbulent model based on asymptotic development of the Navier-Stokes equations within the hypothesis of non-local interactions at small scales. This model provides expressions of the turbulent Reynolds sub-grid stresses via estimates of the sub-grid velocities rather than velocities correlations as is usually done. The model involves the coupling of two dynamical equations: one for the resolved scales of motions, which depends upon the Reynolds stresses generated by the sub-grid motions, and one for the sub-grid scales of motions, which can be used to compute the sub-grid Reynolds stresses. The non-locality of interaction at sub-grid scales allows to model their evolution with a linear inhomogeneous equation where the forcing occurs via the energy cascade from resolved to sub-grid scales. This model was solved using a decomposition of sub-grid scales on Gabor's modes and implemented numerically in 2D with periodic boundary conditions. A particles method (PIC) was used to compute the sub-grid scales. The results were compared with results of direct simulations for several typical flows. The model was also applied to plane parallel flows. An analytical study of the equations allows a description of mean velocity profiles in agreement with experimental results and theoretical results based on the symmetries of the Navier-Stokes equation. Possible applications and improvements of the model are discussed in the conclusion. (author) [fr

  7. The plasma-wall transition layers in the presence of collisions with a magnetic field parallel to the wall

    Science.gov (United States)

    Moritz, J.; Faudot, E.; Devaux, S.; Heuraux, S.

    2018-01-01

    The plasma-wall transition is studied by means of a particle-in-cell (PIC) simulation in the configuration of a magnetic field (B) parallel to the wall, with collisions between charged particles and neutral atoms taken into account. The investigated system consists of a plasma bounded by two absorbing walls separated by 200 electron Debye lengths (λd). The strength of the magnetic field is chosen such that the ratio λd/rl, with rl the electron Larmor radius, is either smaller or larger than unity. Collisions are modelled with a simple operator that randomly reorients the ion or electron velocity, keeping constant the total kinetic energy of the neutral atom (target) and the incident charged particle. The PIC simulations show that the plasma-wall transition consists of a quasi-neutral region (pre-sheath), from the center of the plasma towards the walls, where the electric potential and electric field profiles are well described by an ambipolar diffusion model, and a second region in the vicinity of the walls, called the sheath, where quasi-neutrality breaks down. In this peculiar geometry of B, and for a certain range of the mean free path, the sheath is found to be composed of two charged layers: a positive one close to the walls, and a negative one towards the plasma, before the neutral pre-sheath. Depending on the amplitude of B, the spatial variation of the electric potential can be non-monotonic and present a maximum within the sheath region. More generally, the sheath extent, as well as the potential drops within the sheath and the pre-sheath, is studied with respect to B, the mean free path, and the ion and electron temperatures.
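    A collision operator of the kind described, one that redirects a velocity while conserving kinetic energy, can be sketched as follows. This is a simplified, hypothetical version that conserves the speed of the incident particle only (the paper's operator shares energy with the neutral target); the isotropic direction sampling (uniform in cos θ and azimuth) is the standard recipe.

    ```python
    import math
    import random

    def reorient(v, rng=random):
        """Return v rotated to a uniformly random direction with the same
        speed, so the particle's kinetic energy is unchanged (sketch)."""
        speed = math.sqrt(sum(c * c for c in v))
        cos_t = rng.uniform(-1.0, 1.0)          # uniform in cos(theta) -> isotropic
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        return (speed * sin_t * math.cos(phi),
                speed * sin_t * math.sin(phi),
                speed * cos_t)
    ```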

  8. Parallel processing for fluid dynamics applications

    International Nuclear Information System (INIS)

    Johnson, G.M.

    1989-01-01

    The impact of parallel processing on computational science and, in particular, on computational fluid dynamics is growing rapidly. In this paper, particular emphasis is given to developments which have occurred within the past two years. Parallel processing is defined and the reasons for its importance in high-performance computing are reviewed. Parallel computer architectures are classified according to the number and power of their processing units, their memory, and the nature of their connection scheme. Architectures which show promise for fluid dynamics applications are emphasized. Fluid dynamics problems are examined for parallelism inherent at the physical level. CFD algorithms and their mappings onto parallel architectures are discussed. Several examples are presented to document the performance of fluid dynamics applications on present-generation parallel processing devices

  9. Design considerations for parallel graphics libraries

    Science.gov (United States)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  10. Synchronization Techniques in Parallel Discrete Event Simulation

    OpenAIRE

    Lindén, Jonatan

    2018-01-01

    Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...

  11. Parallel processing from applications to systems

    CERN Document Server

    Moldovan, Dan I

    1993-01-01

    This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well balanced across diverse topics in this concise presentation. For exceptional cla

  12. Parallel processing for artificial intelligence 1

    CERN Document Server

    Kanal, LN; Kumar, V; Suttner, CB

    1994-01-01

    Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, data filtering and data mining. The publication is divided into six sections. The first addresses parallel computing for processing and understanding images. The second discus

  13. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.

  14. Refinement of Parallel and Reactive Programs

    OpenAIRE

    Back, R. J. R.

    1992-01-01

    We show how to apply the refinement calculus to stepwise refinement of parallel and reactive programs. We use action systems as our basic program model. Action systems are sequential programs which can be implemented in a parallel fashion. Hence refinement calculus methods, originally developed for sequential programs, carry over to the derivation of parallel programs. Refinement of reactive programs is handled by data refinement techniques originally developed for the sequential refinement c...

  15. Parallel Prediction of Stock Volatility

    Directory of Open Access Journals (Sweden)

    Priscilla Jenq

    2017-10-01

    Full Text Available Volatility is a measurement of the risk of financial products. A stock will hit new highs and lows over time, and if these highs and lows fluctuate wildly, it is considered a highly volatile stock. Such a stock is considered riskier than a stock whose volatility is low. Although highly volatile stocks are riskier, the returns they generate for investors can be quite high. Of course, with a riskier stock also comes the chance of losing money and yielding negative returns. In this project, we use historical stock data to help us forecast volatility. Since the financial industry usually uses the S&P 500 as the indicator of the market, we use the S&P 500 as a benchmark to compute risk. We also use artificial neural networks as a tool to predict volatilities for a specific time frame that is set when we configure the neural network. There have been reports that neural networks with different numbers of layers and different numbers of hidden nodes may generate varying results; in fact, we may be able to find the best configuration of a neural network to compute volatilities. We implement this system using the parallel approach. The system can be used as a tool for investors for asset allocation and hedging.
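    As a baseline for what such a predictor targets, historical volatility itself is a simple computation: the annualized standard deviation of log returns over a rolling window. A minimal sketch, assuming the common 252-trading-day annualization convention (the window length and convention are assumptions, not taken from the paper):

    ```python
    import math
    import statistics

    def rolling_volatility(prices, window):
        """Annualized volatility of log returns over a rolling window (sketch)."""
        rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
        return [statistics.stdev(rets[i - window:i]) * math.sqrt(252)
                for i in range(window, len(rets) + 1)]
    ```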

  16. Vectoring of parallel synthetic jets

    Science.gov (United States)

    Berk, Tim; Ganapathisubramani, Bharathram; Gomit, Guillaume

    2015-11-01

    A pair of parallel synthetic jets can be vectored by applying a phase difference between the two driving signals. The resulting jet can be merged or bifurcated and either vectored towards the actuator leading in phase or the actuator lagging in phase. In the present study, the influence of phase difference and Strouhal number on the vectoring behaviour is examined experimentally. Phase-locked vorticity fields, measured using Particle Image Velocimetry (PIV), are used to track vortex pairs. The physical mechanisms that explain the diversity in vectoring behaviour are observed based on the vortex trajectories. For a fixed phase difference, the vectoring behaviour is shown to be primarily influenced by pinch-off time of vortex rings generated by the synthetic jets. Beyond a certain formation number, the pinch-off timescale becomes invariant. In this region, the vectoring behaviour is determined by the distance between subsequent vortex rings. We acknowledge the financial support from the European Research Council (ERC grant agreement no. 277472).

  17. A Soft Parallel Kinematic Mechanism.

    Science.gov (United States)

    White, Edward L; Case, Jennifer C; Kramer-Bottiglio, Rebecca

    2018-02-01

    In this article, we describe a novel holonomic soft robotic structure based on a parallel kinematic mechanism. The design is based on the Stewart platform, which uses six sensors and actuators to achieve full six-degree-of-freedom motion. Our design is much less complex than a traditional platform, since it replaces the 12 spherical and universal joints found in a traditional Stewart platform with a single highly deformable elastomer body and flexible actuators. This reduces the total number of parts in the system and simplifies the assembly process. Actuation is achieved through coiled shape memory alloy actuators. State observation and feedback are accomplished through the use of capacitive elastomer strain gauges. The main structural element is an elastomer joint that provides antagonistic force. We report the response of the actuators and sensors individually, then report the response of the complete assembly. We show that the completed robotic system is able to achieve full position control, and we discuss the limitations associated with using responsive material actuators. We believe that the control demonstrated on a single body in this work could be extended to chains of such bodies to create complex soft robots.

  18. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

    Full Text Available We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  19. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O EcosystemParallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem.The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  20. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2001-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  1. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2002-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  2. Parallel auto-correlative statistics with VTK.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes the existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
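    The quantity such an engine computes is the lag-k sample autocorrelation. The VTK engine itself is not reproduced here; the following is a plain serial sketch of the statistic (a parallel engine would instead reduce the partial sums across processes before the final division):

    ```python
    def autocorrelation(xs, lag):
        """Sample autocorrelation of xs at the given lag (serial sketch)."""
        n = len(xs)
        mean = sum(xs) / n
        var = sum((x - mean) ** 2 for x in xs) / n
        cov = sum((xs[i] - mean) * (xs[i + lag] - mean)
                  for i in range(n - lag)) / n
        return cov / var
    ```

    For an alternating series, for example, the lag-1 value is close to -1 and the lag-2 value close to +1, which is the expected signature of period-2 structure.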

  3. Conformal pure radiation with parallel rays

    International Nuclear Information System (INIS)

    Leistner, Thomas; Nurowski, Paweł

    2012-01-01

    We define pure radiation metrics with parallel rays to be n-dimensional pseudo-Riemannian metrics that admit a parallel null line bundle K and whose Ricci tensor vanishes on vectors that are orthogonal to K. We give necessary conditions in terms of the Weyl, Cotton and Bach tensors for a pseudo-Riemannian metric to be conformal to a pure radiation metric with parallel rays. Then, we derive conditions in terms of the tractor calculus that are equivalent to the existence of a pure radiation metric with parallel rays in a conformal class. We also give analogous results for n-dimensional pseudo-Riemannian pp-waves. (paper)

  4. Compiling Scientific Programs for Scalable Parallel Systems

    National Research Council Canada - National Science Library

    Kennedy, Ken

    2001-01-01

    ...). The research performed in this project included new techniques for recognizing implicit parallelism in sequential programs, a powerful and precise set-based framework for analysis and transformation...

  5. Parallel thermal radiation transport in two dimensions

    International Nuclear Information System (INIS)

    Smedley-Stevenson, R.P.; Ball, S.R.

    2003-01-01

    This paper describes the distributed memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state of the art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 Tera-Flop MPP (massive parallel processors) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  6. Parallel Algorithms for the Exascale Era

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory

    2016-10-19

    New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
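    Reproducibility of global sums matters because floating-point addition is not associative: summing partial results in a different order (as a different process count will do) changes the rounding. One standard building block, not specific to the LANL work described above, is compensated summation, which carries the rounding error of each addition along explicitly; a sketch of Neumaier's variant:

    ```python
    def compensated_sum(values):
        """Neumaier's compensated summation: tracks the rounding error of
        each addition in c and folds it back in at the end (sketch)."""
        total = 0.0
        c = 0.0
        for v in values:
            t = total + v
            if abs(total) >= abs(v):
                c += (total - t) + v    # low-order bits of v were lost
            else:
                c += (v - t) + total    # low-order bits of total were lost
            total = t
        return total + c
    ```

    Making a parallel sum bit-reproducible additionally requires a fixed reduction order or a fixed-point accumulator, but compensation alone already removes most of the order sensitivity.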

  7. Parallel thermal radiation transport in two dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Smedley-Stevenson, R.P.; Ball, S.R. [AWE Aldermaston (United Kingdom)

    2003-07-01

    This paper describes the distributed memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state of the art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 Tera-Flop MPP (massive parallel processors) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  8. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  9. Parallel Computing for Brain Simulation.

    Science.gov (United States)

    Pastur-Romay, L A; Porto-Pazos, A B; Cedron, F; Pazos, A

    2017-01-01

    The human brain is the most complex system in the known universe and therefore one of the greatest mysteries. It provides human beings with extraordinary abilities. However, until now it has not been understood how and why most of these abilities are produced. For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have made it possible to create the first simulation with a number of neurons similar to that of a human brain. This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital models, analog models and hybrid models. This review includes the current applications of these works, as well as future trends. It is focused on various works that look for advanced progress in neuroscience and still others which seek new discoveries in computer science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing.

  10. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
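    The parallel-E idea, computing E-step sufficient statistics on disjoint chunks of the data and merging them in the M step, can be sketched on a toy model. The code below fits the means of a two-component Gaussian mixture (unit variances, equal weights); it is a hypothetical illustration of the data decomposition, not the ETS implementation, and all names are assumptions.

    ```python
    import math
    from concurrent.futures import ThreadPoolExecutor

    def _chunk_stats(chunk, means):
        """E step on one chunk: per-component [sum of resp., sum of resp.*x]."""
        s = [[0.0, 0.0] for _ in means]
        for x in chunk:
            dens = [math.exp(-0.5 * (x - m) ** 2) for m in means]
            z = sum(dens)
            for k, d in enumerate(dens):
                s[k][0] += d / z
                s[k][1] += (d / z) * x
        return s

    def em_fit_means(data, means, iters=20, workers=2):
        chunks = [data[i::workers] for i in range(workers)]
        for _ in range(iters):
            # parallel E step: each worker handles one chunk independently
            with ThreadPoolExecutor(max_workers=workers) as pool:
                parts = list(pool.map(lambda c: _chunk_stats(c, means), chunks))
            # M step: merge the per-chunk statistics, then update the means
            means = [sum(p[k][1] for p in parts) / sum(p[k][0] for p in parts)
                     for k in range(len(means))]
        return means
    ```

    The key point is that the per-chunk statistics are additive, so the merge in the M step is a simple reduction; the same structure applies to the latent variable models of the report.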

  11. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  12. Parallel Boltzmann machines : a mathematical model

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.

    1991-01-01

    A mathematical model is presented for the description of parallel Boltzmann machines. The framework is based on the theory of Markov chains and combines a number of previously known results into one generic model. It is argued that parallel Boltzmann machines maximize a function consisting of a

  13. The convergence of parallel Boltzmann machines

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.; Eckmiller, R.; Hartmann, G.; Hauske, G.

    1990-01-01

    We discuss the main results obtained in a study of a mathematical model of synchronously parallel Boltzmann machines. We present supporting evidence for the conjecture that a synchronously parallel Boltzmann machine maximizes a consensus function that consists of a weighted sum of the regular

  14. Customizable Memory Schemes for Data Parallel Architectures

    NARCIS (Netherlands)

    Gou, C.

    2011-01-01

    Memory system efficiency is crucial for any processor to achieve high performance, especially in the case of data parallel machines. Processing capabilities of parallel lanes will be wasted, when data requests are not accomplished in a sustainable and timely manner. Irregular vector memory accesses

  15. Parallel Narrative Structure in Paul Harding's "Tinkers"

    Science.gov (United States)

    Çirakli, Mustafa Zeki

    2014-01-01

    The present paper explores the implications of parallel narrative structure in Paul Harding's "Tinkers" (2009). Besides primarily recounting the two sets of parallel narratives, "Tinkers" also comprises seemingly unrelated fragments such as excerpts from clock repair manuals and diaries. The main stories, however, told…

  16. Streaming nested data parallelism on multicores

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2016-01-01

    The paradigm of nested data parallelism (NDP) allows a variety of semi-regular computation tasks to be mapped onto SIMD-style hardware, including GPUs and vector units. However, some care is needed to keep down space consumption in situations where the available parallelism may vastly exceed...

  17. Bayer image parallel decoding based on GPU

    Science.gov (United States)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectrical tracking systems, Bayer images are traditionally decompressed on the CPU, which becomes too slow as images grow large, for example to 2K×2K×16 bit. To accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) supporting the CUDA architecture. The decoding procedure can be divided into three parts: a serial part, a task-parallel part, and a data-parallel part comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce execution time, the task-parallel part is optimized with OpenMP techniques, while the data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared-memory access optimization, coalesced memory access, and texture memory optimization. In particular, the IDWT is significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16 bit Bayer image, the data-parallel part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed; experimental results show that it achieves a 3 to 5 times speedup compared with the serial CPU method.
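The key observation, that a separable 2D IDWT decomposes into independent 1D transforms, can be illustrated with a toy one-level Haar transform in Python. This is a hedged sketch only: the paper targets CUDA and does not specify the Haar wavelet, and none of the names below come from it.

```python
import math
from concurrent.futures import ThreadPoolExecutor

SQRT2 = math.sqrt(2.0)

def haar_forward_1d(row):
    """One-level 1D Haar DWT of an even-length row -> (approx, detail)."""
    approx = [(row[2*i] + row[2*i+1]) / SQRT2 for i in range(len(row) // 2)]
    detail = [(row[2*i] - row[2*i+1]) / SQRT2 for i in range(len(row) // 2)]
    return approx, detail

def haar_inverse_1d(coeffs):
    """Invert one level: each (approx, detail) pair restores two samples."""
    approx, detail = coeffs
    row = []
    for a, d in zip(approx, detail):
        row.append((a + d) / SQRT2)
        row.append((a - d) / SQRT2)
    return row

def idwt_rows_parallel(coeff_rows, n_workers=4):
    """Each row is reconstructed independently -> trivially data-parallel."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(haar_inverse_1d, coeff_rows))

image = [[10, 12, 14, 16], [20, 18, 16, 14]]
coeffs = [haar_forward_1d(r) for r in image]
restored = idwt_rows_parallel(coeffs)
print([[round(v, 6) for v in r] for r in restored])
# → [[10.0, 12.0, 14.0, 16.0], [20.0, 18.0, 16.0, 14.0]]
```

Because `haar_inverse_1d` touches only its own row, the rows map cleanly onto parallel workers here, or onto GPU thread blocks in the paper's CUDA setting.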

  18. Parallelization of TMVA Machine Learning Algorithms

    CERN Document Server

    Hajili, Mammad

    2017-01-01

    This report reflects my work on parallelization of TMVA machine learning algorithms integrated into the ROOT Data Analysis Framework during a summer internship at CERN. The report consists of four important parts: the data set used in training and validation, the algorithms to which multiprocessing was applied, the parallelization techniques, and the resulting changes in execution time as the number of workers varies.

  19. 17 CFR 12.24 - Parallel proceedings.

    Science.gov (United States)

    2010-04-01

    ...) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration proceeding... the receivership includes the resolution of claims made by customers; or (3) A petition filed under... any of the foregoing with knowledge of a parallel proceeding shall promptly notify the Commission, by...

  20. Parallel Sₙ iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (Sₙ) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional Sₙ transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial Sₙ algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial
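The ordered-versus-chaotic distinction can be illustrated on a toy linear fixed-point problem. This is a sketch under simplifying assumptions, far simpler than the Sₙ transport iteration: an ordered sweep uses only the previous iterate, while a chaotic sweep reads whatever values happen to be current.

```python
def ordered_sweep(x, A, b):
    """Ordered (Jacobi-style) iteration: every update reads the previous iterate."""
    old = list(x)
    for i in range(len(x)):
        x[i] = b[i] + sum(A[i][j] * old[j] for j in range(len(x)))
    return x

def chaotic_sweep(x, A, b):
    """Chaotic iteration: updates read the freshest available values (in-place)."""
    for i in range(len(x)):
        x[i] = b[i] + sum(A[i][j] * x[j] for j in range(len(x)))
    return x

def solve(sweep, A, b, tol=1e-10, max_iters=1000):
    """Iterate x <- b + A x until successive iterates agree to tol."""
    x = [0.0] * len(b)
    for it in range(1, max_iters + 1):
        prev = list(x)
        x = sweep(x, A, b)
        if max(abs(u - v) for u, v in zip(x, prev)) < tol:
            return x, it
    return x, max_iters

A = [[0.0, 0.3], [0.3, 0.0]]   # a contraction: spectral radius 0.3 < 1
b = [1.0, 1.0]                  # fixed point is x = y = 1/0.7
x_ord, it_ord = solve(ordered_sweep, A, b)
x_cha, it_cha = solve(chaotic_sweep, A, b)
print(it_cha <= it_ord)  # in-place sweeps typically converge in fewer iterations
```

On parallel hardware the chaotic version lets processors update without waiting for a global synchronization point, which is why its faster convergence noted in the abstract is attractive.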

  1. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.
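One of the load-balancing ingredients mentioned above can be made concrete with the classic Longest-Processing-Time-first heuristic. This is an illustrative sketch of the general idea, not the partitioners or orderings used in the paper; the task costs are invented.

```python
import heapq

def lpt_schedule(task_costs, n_workers):
    """Longest-Processing-Time-first: assign each task, heaviest first,
    to the currently least-loaded worker (a simple static balancing heuristic)."""
    loads = [(0.0, w, []) for w in range(n_workers)]  # (load, worker id, tasks)
    heapq.heapify(loads)
    for cost in sorted(task_costs, reverse=True):
        load, w, tasks = heapq.heappop(loads)          # least-loaded worker
        tasks.append(cost)
        heapq.heappush(loads, (load + cost, w, tasks))
    return loads

# Irregular task sizes, e.g. per-partition work in an unstructured mesh.
tasks = [27, 3, 14, 9, 21, 5, 8, 16]
assignment = lpt_schedule(tasks, 3)
makespan = max(load for load, _, _ in assignment)
print(makespan)  # → 35 (total work is 103, so the ideal bound is ~34.3)
```

For irregular applications the costs are usually estimated per partition, and dynamic variants rebalance as estimates prove wrong; the greedy structure stays the same.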

  2. Parallel fuzzy connected image segmentation on GPU

    OpenAIRE

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm impleme...

  3. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.

  4. Parallel Algorithms for Groebner-Basis Reduction

    Science.gov (United States)

    1987-09-25

    Technical report, Productivity Engineering in the UNIX Environment: Parallel Algorithms for Groebner-Basis Reduction (only the report documentation page is available; no abstract survives in this record).

  5. Parallel knock-out schemes in networks

    NARCIS (Netherlands)

    Broersma, H.J.; Fomin, F.V.; Woeginger, G.J.

    2004-01-01

    We consider parallel knock-out schemes, a procedure on graphs introduced by Lampert and Slater in 1997 in which each vertex eliminates exactly one of its neighbors in each round. We are considering cases in which after a finite number of rounds, where the minimum number is called the parallel

  6. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges, scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte and exabyte scale. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine in reasonable time and with reasonable fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the classroom. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.
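A first building block of such a simulator is a model of how a file is striped across servers and how the most-loaded server bounds a parallel read. The toy Python model below is a hedged sketch under simple assumptions (round-robin placement, equal per-server bandwidth, no caching or contention), not the simulator described above.

```python
def stripe_units(file_size, stripe_unit):
    """Number of stripe units needed for the file (the last one may be partial)."""
    return (file_size + stripe_unit - 1) // stripe_unit

def placement(file_size, stripe_unit, n_servers):
    """Round-robin placement: bytes each server stores for one file."""
    n_units = stripe_units(file_size, stripe_unit)
    last = file_size - stripe_unit * (n_units - 1)   # size of the final unit
    per_server = [0] * n_servers
    for u in range(n_units):
        size = last if u == n_units - 1 else stripe_unit
        per_server[u % n_servers] += size
    return per_server

def read_time(file_size, stripe_unit, n_servers, bw_per_server):
    """A full-file parallel read completes when the most-loaded server finishes."""
    return max(placement(file_size, stripe_unit, n_servers)) / bw_per_server

# Example: a 1 MiB file in 64 KiB stripe units over 4 servers at 100 MB/s each.
t = read_time(1 << 20, 64 << 10, 4, 100e6)
print(round(t * 1e3, 3), "ms")  # → 2.621 ms
```

Even this crude model exposes the design trade-off a real simulator explores in depth: stripe unit size and server count shift the balance between per-server load and per-request overhead.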

  7. Broadcasting a message in a parallel computer

    Science.gov (United States)

    Berg, Jeremy E [Rochester, MN; Faraj, Ahmad A [Rochester, MN

    2011-08-02

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
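For a 2D mesh, one concrete Hamiltonian path is the boustrophedon ("snake") ordering. The sketch below is illustrative Python, not the patented implementation; it builds such a path and forwards a root's message hop by hop along it.

```python
def hamiltonian_path(rows, cols):
    """Snake order: a simple Hamiltonian path through a rows x cols mesh,
    so consecutive path entries are always mesh neighbors."""
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cs)
    return path

def broadcast(message, rows, cols):
    """Forward the root's message one point-to-point hop at a time along the
    path; every node receives it after rows*cols - 1 hops."""
    path = hamiltonian_path(rows, cols)
    inbox = {path[0]: message}        # the logical root originates the message
    for src, dst in zip(path, path[1:]):
        inbox[dst] = inbox[src]       # one hop per mesh link
    return inbox

inbox = broadcast("hello", 3, 4)
print(len(inbox), all(v == "hello" for v in inbox.values()))  # → 12 True
```

A pipeline of such hops keeps every link on the path busy, which is the point of broadcasting along a Hamiltonian path on a network built for point-to-point traffic.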

  8. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  9. Differences Between Distributed and Parallel Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brightwell, R.; Maccabe, A.B.; Rissen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  10. Parallel-In-Time For Moving Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Southworth, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High-performance, highly parallel software has become a standard. However, until recent years, parallelism focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  11. Parallel programming with Easy Java Simulations

    Science.gov (United States)

    Esquembre, F.; Christian, W.; Belloni, M.

    2018-01-01

    Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.

  12. How does sagittal imbalance affect the appropriateness of surgical indications and selection of procedure in the treatment of degenerative scoliosis? Findings from the RAND/UCLA Appropriate Use Criteria study.

    Science.gov (United States)

    Daubs, Michael D; Brara, Harsimran S; Raaen, Laura B; Chen, Peggy Guey-Chi; Anderson, Ashaunta T; Asch, Steven M; Nuckols, Teryl K

    2018-05-01

    Degenerative lumbar scoliosis (DLS) is often associated with sagittal imbalance, which may affect patients' health outcomes before and after surgery. The appropriateness of surgery and preferred operative approaches has not been examined in detail for patients with DLS and sagittal imbalance. The goals of this article were to describe what is currently known about the relationship between sagittal imbalance and health outcomes among patients with DLS and to determine how indications for surgery in patients with DLS differ when sagittal imbalance is present. This study included a literature review and an expert panel using the RAND/University of California at Los Angeles (UCLA) Appropriateness Method. To develop appropriate use criteria for DLS, researchers at the RAND Corporation recently employed the RAND/UCLA Appropriateness Method, which involves a systematic review of the literature and multidisciplinary expert panel process. Experts reviewed a synopsis of published literature and rated the appropriateness of five common operative approaches for 260 different clinical scenarios. In the present work, we updated the literature review and compared panelists' ratings in scenarios where imbalance was present versus absent. This work was funded by the Collaborative Spine Research Foundation, a group of surgical specialty societies and device manufacturers. On the basis of 13 eligible studies that examined sagittal imbalance and outcomes in patients with DLS, imbalance was associated with worse functional status in the absence of surgery and worse symptoms and complications postoperatively. Panelists' ratings demonstrated a consistent pattern across the diverse clinical scenarios. In general, when imbalance was present, surgery was more likely to be appropriate or necessary, including in some situations where surgery would otherwise be inappropriate. 
For patients with moderate to severe symptoms and imbalance, a deformity correction procedure was usually appropriate.

  13. Taxonomic characterization, distribution and first European records of Apalus cinctus (Pic, 1896) (Coleoptera, Meloidae)

    Directory of Open Access Journals (Sweden)

    Ruiz, J. L.

    2013-12-01

    In this study we clarify the taxonomic status and geographic distribution of Apalus cinctus (Pic, 1896), a Mediterranean species included in the group of Apalus bimaculatus (Linnaeus, 1760). Apalus cinctus was known only from a few North African localities mentioned in the original description, and was considered of uncertain taxonomic status. The review of detailed photographs of the type specimen and the study of recently captured specimens allow us to discuss its taxonomic position and to define its diagnostic characters, validating its specific status. The capture or observation of specimens assignable to Apalus cinctus in continental Spain (León, Zamora and Huesca) extends the geographic range of the species considerably, including it within the European fauna. We question the presence of Apalus bimaculatus in the Iberian Peninsula and North Africa, suggesting that it is possibly replaced there by A. cinctus.

  14. Three-dimensional kinetic simulations of whistler turbulence in solar wind on parallel supercomputers

    Science.gov (United States)

    Chang, Ouliang

    The objective of this dissertation is to study the physics of whistler turbulence evolution and its role in energy transport and dissipation in solar wind plasmas through computational and theoretical investigations. This dissertation presents the first fully three-dimensional (3D) particle-in-cell (PIC) simulations of whistler turbulence forward cascade in a homogeneous, collisionless plasma with a uniform background magnetic field B_o, and the first 3D PIC simulation of whistler turbulence with both forward and inverse cascades. Such computationally demanding research is made possible through the use of massively parallel, high-performance electromagnetic PIC simulations on state-of-the-art supercomputers. Simulations are carried out to study characteristic properties of whistler turbulence under variable solar wind fluctuation amplitude (ε_e) and electron beta (β_e), the relative contributions to energy dissipation and electron heating in whistler turbulence from the quasilinear scenario and the intermittency scenario, and whistler turbulence's preferential cascading direction and wavevector anisotropy. The 3D simulations of whistler turbulence exhibit a forward cascade of fluctuations into a broadband, anisotropic, turbulent spectrum at shorter wavelengths with wavevectors preferentially quasi-perpendicular to B_o. The overall electron heating yields T_∥ > T_⊥ for all ε_e and β_e values, indicating the primary linear wave-particle interaction is Landau damping. But linear wave-particle interactions play a minor role in shaping the wavevector spectrum, whereas nonlinear wave-wave interactions are overall stronger and faster processes, and ultimately determine the wavevector anisotropy. Simulated magnetic energy spectra as a function of wavenumber show a spectral break to steeper slopes, which scales as k_⊥ λ_e ≃ 1 independent of β_e values, where λ_e is the electron inertial length, qualitatively similar to solar wind observations. Specific

  15. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  16. Portable parallel programming in a Fortran environment

    International Nuclear Information System (INIS)

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network based MIMD processors. The parallelism is implemented using standard UNIX (tm) tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at from 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a ''nearly realistic'' lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs

  17. Performance of the Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  18. The New World Gibbobruchus Pic (Coleoptera, Chrysomelidae, Bruchinae): description of a new species and phylogenetic insights into the evolution of host associations and biogeography.

    Science.gov (United States)

    Manfio, Daiara; Jorge, Isaac R; Morse, Geoffrey E; Ribeiro-Costa, Cibele S

    2016-04-18

    The seed beetle Gibbobruchus tridentatus Manfio, Jorge & Ribeiro-Costa sp. nov. is described from the Amazon basin in Brazil (Acre) and Ecuador (Napo), and is included in an updated key to the species of Gibbobruchus Pic. This new species and the recently described G. bergamini Manfio & Ribeiro-Costa are incorporated into a phylogenetic reanalysis of the genus and into a comparative analysis of host plant use and biogeography. Species groups previously proposed were supported, and the evolutionary history of host plant use shows Gibbobruchus conserved at the tribe level, Cercideae (Caesalpinioideae), with biogeographic expansion coordinated with shifts among host genera. Both species, Gibbobruchus tridentatus Manfio, Jorge & Ribeiro-Costa sp. nov. and G. bergamini, were placed within the scurra group (G. tridentatus (G. scurra (G. cavillator + G. bolivianus + G. bergamini))), supported by one synapomorphy. Additionally, we update geographic distributions and host plant records. Two plants, Bauhinia argentinensis Burkart and B. tarapotensis Benth., are recorded for the first time as hosts for the genus and for the subfamily.

  19. ERO and PIC simulations of gross and net erosion of tungsten in the outer strike-point region of ASDEX Upgrade

    Directory of Open Access Journals (Sweden)

    A. Hakola

    2017-08-01

    We have modelled net and gross erosion of W in low-density L-mode plasmas in the low-field-side strike-point region of ASDEX Upgrade by ERO and particle-in-cell (PIC) simulations. The observed net-erosion peak at the strike point was mainly due to the light impurities present in the plasma, while the noticeable net-deposition regions surrounding the erosion maximum could be attributed to the strong E×B drift and the magnetic field bringing eroded particles from a distance of several meters towards the private flux region. Our results also imply that the role of cross-field diffusion is very small in the studied plasmas. The simulations indicate a net/gross erosion ratio of 0.2–0.6, which is in line with the literature data and with what was determined spectroscopically. The deviations from the estimates extracted from post-exposure ion-beam-analysis data (∼0.6–0.7) are most likely due to the measured re-deposition patterns showing the outcomes of multiple erosion-deposition cycles.

  20. The kpx, a program analyzer for parallelization

    International Nuclear Information System (INIS)

    Matsuyama, Yuji; Orii, Shigeo; Ota, Toshiro; Kume, Etsuo; Aikawa, Hiroshi.

    1997-03-01

    The kpx is a program analyzer, developed as a common technological basis for promoting parallel processing. The kpx consists of three tools. The first is ktool, which shows how much execution time is spent in program segments. The second is ptool, which shows parallelization overhead on the Paragon system. The last is xtool, which shows parallelization overhead on the VPP system. The kpx, designed to work for any FORTRAN code on any UNIX computer, is confirmed to work well after testing on Paragon, SP2, SR2201, VPP500, VPP300, Monte-4, SX-4 and T90. (author)