WorldWideScience

Sample records for em particle code

  1. Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerators boards

    Science.gov (United States)

    Fonseca, Ricardo

    2014-10-01

    The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest towards Exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number 1 system in the world. These cards, based on the Intel Many Integrated Core (MIC) architecture, offer peak theoretical performances of >1 TFlop/s for general purpose calculations on a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We will focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.

  2. The Accurate Particle Tracer Code

    OpenAIRE

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...

  3. The accurate particle tracer code

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of energetic runaway beam on the same time.

  4. PHANTOM: Smoothed particle hydrodynamics and magnetohydrodynamics code

    Science.gov (United States)

    Price, Daniel J.; Wurster, James; Nixon, Chris; Tricco, Terrence S.; Toupin, Stéven; Pettitt, Alex; Chan, Conrad; Laibe, Guillaume; Glover, Simon; Dobbs, Clare; Nealon, Rebecca; Liptai, David; Worpel, Hauke; Bonnerot, Clément; Dipierro, Giovanni; Ragusa, Enrico; Federrath, Christoph; Iaconi, Roberto; Reichardt, Thomas; Forgan, Duncan; Hutchison, Mark; Constantino, Thomas; Ayliffe, Ben; Mentiplay, Daniel; Hirsh, Kieran; Lodato, Giuseppe

    2017-09-01

    Phantom is a smoothed particle hydrodynamics and magnetohydrodynamics code focused on stellar, galactic, planetary, and high-energy astrophysics. It is modular, and handles sink particles, self-gravity, two-fluid and one-fluid dust, ISM chemistry and cooling, physical viscosity, non-ideal MHD, and more. Its modular structure makes it easy to add new physics to the code.

  5. An implicit Smooth Particle Hydrodynamic code

    Energy Technology Data Exchange (ETDEWEB)

    Knapp, Charles E. [Univ. of New Mexico, Albuquerque, NM (United States)

    2000-05-01

    An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian and meshless, and use particles to model fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix, and sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code is presented. The results of a number of test cases are then discussed, including a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. For the single jet of gas it has been demonstrated that the implicit code can run a problem in much shorter time than the explicit code. The problem was, however, very unphysical; nevertheless it demonstrates the potential of the implicit code, a first step toward a useful implicit SPH code.
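A minimal sketch of the two numerical ingredients the abstract names: a Newton-Raphson iteration whose Jacobian is built column-by-column from numerical derivatives. This is not the SPHINX implementation; the tiny direct 2x2 solve stands in for the Krylov solver a real code would apply to large sparse systems, and the example system is invented for illustration.

```python
def newton_fd(F, x, h=1e-7, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 for a small 2-variable system with a finite-difference Jacobian."""
    n = len(x)
    for _ in range(max_iter):
        fx = F(x)
        if max(abs(v) for v in fx) < tol:
            break
        # Numerical Jacobian: perturb one component at a time.
        J = [[0.0] * n for _ in range(n)]
        for j in range(n):
            xp = list(x)
            xp[j] += h
            fp = F(xp)
            for i in range(n):
                J[i][j] = (fp[i] - fx[i]) / h
        # Direct 2x2 solve of J * dx = -F(x); a large code would use a
        # Krylov method (e.g. GMRES) here instead of Cramer's rule.
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dx0 = (-fx[0] * J[1][1] + fx[1] * J[0][1]) / det
        dx1 = (-fx[1] * J[0][0] + fx[0] * J[1][0]) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x

# Example: intersect the circle x^2 + y^2 = 2 with the line x = y.
root = newton_fd(lambda v: [v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]], [1.5, 0.5])
```

The key point mirrored from the abstract is that no analytic Jacobian is needed: each column costs one extra residual evaluation.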

  6. Computer codes in particle transport physics

    International Nuclear Information System (INIS)

    Pesic, M.

    2004-01-01

    Simulation of the transport and interaction of various particles in complex media over a wide energy range (from 1 MeV up to 1 TeV) is a very complicated problem that requires a valid model of the real process in nature and an appropriate solving tool - a computer code and data library. A brief overview is given of computer codes based on Monte Carlo techniques for simulating the transport and interaction of hadrons and ions over a wide energy range in three-dimensional (3D) geometry. First, attention is paid to the approach to solving the problem - a process in nature - by selecting an appropriate 3D model and corresponding tools - computer codes and cross-section data libraries. The process of collecting and evaluating data from experimental measurements, and the theoretical work of establishing reliable libraries of evaluated cross-section data, is a long, difficult and not straightforward activity. For this reason, world reference data centers and specialized ones are acknowledged, together with the currently available state-of-the-art evaluated nuclear data libraries, such as ENDF/B-VI, JEF, JENDL, CENDL, BROND, etc. Codes for experimental and theoretical data evaluation (e.g., SAMMY and GNASH), together with codes for data processing (e.g., NJOY, PREPRO and GRUCON), are briefly described. Examples of data evaluation and data processing to generate computer-usable data libraries are shown. Among the numerous computer codes developed for particle transport physics, only the most general ones are described: MCNPX, FLUKA and SHIELD. A short overview is given of the basic applications of these codes, the physical models implemented with their limitations, and the energy ranges of particles and types of interactions covered. General information about the codes also covers programming language, operating system, calculation speed and code availability. An example is also given of the increased computation speed obtained by running the MCNPX code on an MPI cluster compared to the sequential option.

  7. FLUKA: A Multi-Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Ferrari, A.; Sala, P.R.; /CERN /INFN, Milan; Fasso, A.; /SLAC; Ranft, J.; /Siegen U.

    2005-12-14

    This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner's guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.

  8. Papa, a Particle Tracing Code in Pascal

    NARCIS (Netherlands)

    Haselhoff, E.H.; Haselhoff, Eltjo H.; Ernst, G.J.

    1990-01-01

    During the design of a 10 μm high-gain FEL oscillator (TEUFEL Project) we developed a new particle-tracing code to perform simulations of thermionic- and photo-cathode electron injectors/accelerators. The program allows predictions of current, energy and beam emittance in a user-specified linac.

  9. Particle tracing code for multispecies gas

    International Nuclear Information System (INIS)

    Eaton, R.R.; Fox, R.L.; Vandevender, W.H.

    1979-06-01

    Details are presented for the development of a computer code designed to calculate the flow of a multispecies gas mixture using particle tracing techniques. The current technique eliminates the need for a full simulation by utilizing local time averaged velocity distribution functions to obtain the dynamic properties for probable collision partners. The development of this concept reduces statistical scatter experienced in conventional Monte Carlo simulations. The technique is applicable to flow problems involving gas mixtures with disparate masses and trace constituents in the Knudsen number, Kn, range from 1.0 to less than 0.01. The resulting code has previously been used to analyze several aerodynamic isotope enrichment devices

  10. IFR code for secondary particle dynamics

    International Nuclear Information System (INIS)

    Teague, M.R.; Yu, S.S.

    1985-01-01

    A numerical simulation has been constructed to obtain a detailed, quantitative estimate of the electromagnetic fields and currents existing in the Advanced Test Accelerator under conditions of laser guiding. The code treats the secondary electrons by particle simulation and the beam dynamics by a time-dependent envelope model. The simulation gives a fully relativistic description of secondary electrons moving in self-consistent electromagnetic fields. The calculations are made using coordinates t, x, y, z for the electrons and t, ct-z, r for the axisymmetric electromagnetic fields and currents. Code results, showing in particular current enhancement effects, will be given

  11. Parallelization Issues and Particle-In-Cell Codes.

    Science.gov (United States)

    Elster, Anne Cathrine

    1994-01-01

    "Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses, with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load balance with respect to particles. However, our results demonstrate that it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies become significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid points within the same cache line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache.
A consideration of the input data's effect on
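A toy illustration of the idea behind keeping particle arrays partially sorted with a pair of pointers (the names and the 1D setup are illustrative, not taken from the thesis): after a push, particles that crossed a subdomain boundary are swapped back into place in a single two-pointer pass, so the array stays partitioned without a full sort.

```python
def repartition(parts, boundary):
    """In-place two-pointer partition: left block < boundary, right block >= boundary."""
    lo, hi = 0, len(parts) - 1
    while lo <= hi:
        if parts[lo] < boundary:           # already in the left block
            lo += 1
        elif parts[hi] >= boundary:        # already in the right block
            hi -= 1
        else:                              # both misplaced: one swap fixes two
            parts[lo], parts[hi] = parts[hi], parts[lo]
    return lo                              # index where the right block starts

# Particle x-positions after a push; 0.5 is the subdomain boundary.
positions = [0.1, 0.9, 0.4, 0.6, 0.2, 0.8]
split = repartition(positions, 0.5)
```

The pass is O(n) with at most one swap per misplaced pair, which is the attraction over re-sorting the whole array every step.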

  12. Computer-assisted Particle-in-Cell code development

    International Nuclear Information System (INIS)

    Kawata, S.; Boonmee, C.; Teramoto, T.; Drska, L.; Limpouch, J.; Liska, R.; Sinor, M.

    1997-12-01

    This report presents a new approach to electromagnetic Particle-in-Cell (PIC) code development by computer: in general, PIC codes have a common structure, consisting of a particle pusher, a field solver, charge and current density collection, and field interpolation. Because of this common structure, the main part of a PIC code can be generated mechanically on a computer. In this report we use the FIDE and GENTRAN packages of the REDUCE computer algebra system for the discretization of the field equations and the particle equation, and for automatic generation of Fortran code. The proposed approach is successfully applied to the development of a 1.5-dimensional PIC code. Using the generated PIC code, the Weibel instability in a plasma is simulated. The obtained growth rate agrees well with the theoretical value. (author)
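The four common PIC stages named in the abstract can be sketched in a schematic 1D electrostatic step: charge deposition, field solve, field interpolation, and the particle push. Grid size, normalization, and the deliberately trivial field solve are illustrative choices, not taken from the generated code.

```python
NG, L, DT, QM = 8, 1.0, 0.05, -1.0       # grid points, box length, step, q/m
dx = L / NG

def deposit(xs, q=1.0):
    """Linear (cloud-in-cell) charge deposition onto a periodic grid."""
    rho = [0.0] * NG
    for x in xs:
        s = x / dx
        i = int(s) % NG
        f = s - int(s)
        rho[i] += q * (1.0 - f) / dx
        rho[(i + 1) % NG] += q * f / dx
    return rho

def solve_field(rho):
    """Toy periodic field solve: E from integrating (rho - mean rho)."""
    avg = sum(rho) / NG
    E, acc = [0.0] * NG, 0.0
    for i in range(NG):
        acc += (rho[i] - avg) * dx
        E[i] = acc
    return E

def interp(E, x):
    """Linear interpolation of the grid field back to a particle position."""
    s = x / dx
    i = int(s) % NG
    f = s - int(s)
    return E[i] * (1.0 - f) + E[(i + 1) % NG] * f

def push(xs, vs, E):
    """Simple (non-leapfrog) update of velocities and periodic positions."""
    for k in range(len(xs)):
        vs[k] += QM * interp(E, xs[k]) * DT
        xs[k] = (xs[k] + vs[k] * DT) % L

xs, vs = [0.3, 0.7], [0.0, 0.0]
rho = deposit(xs)
push(xs, vs, solve_field(rho))
```

Because the structure is this regular, a symbolic system can discretize the field and particle equations and emit each of these loops mechanically, which is the point of the report.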

  13. PENTACLE: Parallelized particle-particle particle-tree code for planet formation

    Science.gov (United States)

    Iwasawa, Masaki; Oshino, Shoichi; Fujii, Michiko S.; Hori, Yasunori

    2017-10-01

    We have newly developed a parallelized particle-particle particle-tree code for planet formation, PENTACLE, which is a parallelized hybrid N-body integrator executed on a CPU-based (super)computer. PENTACLE uses a fourth-order Hermite algorithm to calculate gravitational interactions between particles within a cut-off radius and a Barnes-Hut tree method for gravity from particles beyond it. It also implements an open-source library designed for full automatic parallelization of particle simulations, FDPS (Framework for Developing Particle Simulator), to parallelize a Barnes-Hut tree algorithm for a memory-distributed supercomputer. These allow us to handle 1-10 million particles in a high-resolution N-body simulation on CPU clusters for collisional dynamics, including physical collisions in a planetesimal disc. In this paper, we show the performance and the accuracy of PENTACLE in terms of R̃_cut and the time-step Δt. It turns out that the accuracy of a hybrid N-body simulation is controlled through Δt/R̃_cut, and Δt/R̃_cut ≈ 0.1 is necessary to accurately simulate the accretion process of a planet for ≥10^6 yr. For all those interested in large-scale particle simulations, PENTACLE, customized for planet formation, will be freely available from https://github.com/PENTACLE-Team/PENTACLE under the MIT licence.
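A 1D toy version of the force splitting described above: exact pairwise gravity inside a cut-off radius, and a crude monopole (centre-of-mass) approximation for everything beyond it. All numbers and names are illustrative; the real code uses a fourth-order Hermite integrator and a full Barnes-Hut tree rather than a single monopole.

```python
G = 1.0

def accel_split(i, pos, mass, r_cut):
    """Acceleration on particle i: direct sum for near neighbours, monopole for far."""
    xi = pos[i]
    ax = 0.0
    far_m, far_mx = 0.0, 0.0
    for j, (xj, mj) in enumerate(zip(pos, mass)):
        if j == i:
            continue
        r = abs(xj - xi)
        if r <= r_cut:                     # near: exact pairwise force
            ax += G * mj * (xj - xi) / r**3
        else:                              # far: accumulate into a monopole
            far_m += mj
            far_mx += mj * xj
    if far_m > 0.0:
        xc = far_mx / far_m                # centre of mass of the far particles
        rc = abs(xc - xi)
        ax += G * far_m * (xc - xi) / rc**3
    return ax

pos = [0.0, 0.1, 5.0, 5.2]                 # a close pair plus a distant pair
mass = [1.0, 1.0, 1.0, 1.0]
a_hybrid = accel_split(0, pos, mass, r_cut=1.0)
a_direct = accel_split(0, pos, mass, r_cut=100.0)   # cut-off beyond all pairs
```

The near pair dominates the force, so the far-field approximation error is tiny; this is why the accuracy is governed by how the time-step resolves motion inside the cut-off.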

  14. Optimization of Particle-in-Cell Codes on RISC Processors

    Science.gov (United States)

    Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.

    1996-01-01

    General strategies are developed to optimize particle-in-cell codes written in Fortran for the RISC processors commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.
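One common form of the data reorganization mentioned above is moving from an array-of-structures particle layout to structure-of-arrays, so that a loop over one field walks contiguous memory. The sketch below only illustrates the layout change in Python terms (field names are invented); the cache benefit itself appears in compiled Fortran or C, not here.

```python
# Array of structures: one record per particle; a loop over "x" strides
# across whole records, touching cache lines it mostly does not use.
aos = [{"x": 0.1, "v": 1.0}, {"x": 0.5, "v": -2.0}, {"x": 0.9, "v": 0.5}]

# Structure of arrays: one contiguous array per field.
soa = {"x": [p["x"] for p in aos], "v": [p["v"] for p in aos]}

def push_soa(soa, dt):
    """Position update touching only the two contiguous field arrays."""
    x, v = soa["x"], soa["v"]
    for i in range(len(x)):
        x[i] += v[i] * dt

push_soa(soa, 0.1)
```

In Fortran this corresponds to replacing an array of derived types with separate `x(:)` and `v(:)` arrays; the arithmetic is unchanged, only the memory traffic per loop iteration shrinks.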

  15. Particle In Cell Codes on Highly Parallel Architectures

    Science.gov (United States)

    Tableman, Adam

    2014-10-01

    We describe strategies and examples of Particle-In-Cell codes running on Nvidia GPU and Intel Phi architectures. This includes basic implementations in skeleton codes and full-scale development versions (encompassing 1D, 2D, and 3D codes) in OSIRIS. Both the similarities and differences between Intel's and Nvidia's hardware will be examined. Work supported by grants NSF ACI 1339893, DOE DE SC 000849, DOE DE SC 0008316, DOE DE NA 0001833, and DOE DE FC02 04ER 54780.

  16. Survey of particle codes in the Magnetic Fusion Energy Program

    International Nuclear Information System (INIS)

    1977-12-01

    In the spring of 1976, the Fusion Plasma Theory Branch of the Division of Magnetic Fusion Energy conducted a survey of all the physics computer codes being supported at that time. The purpose of that survey was to allow DMFE to prepare a description of the codes for distribution to the plasma physics community. This document is the first of several planned and covers those types of codes which treat the plasma as a group of particles

  17. PHITS-a particle and heavy ion transport code system

    International Nuclear Information System (INIS)

    Niita, Koji; Sato, Tatsuhiko; Iwase, Hiroshi; Nose, Hiroyuki; Nakashima, Hiroshi; Sihver, Lembit

    2006-01-01

    The paper presents a summary of the recent development of the multi-purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS. In particular, we discuss in detail the development of two new models, JAM and JQMD, for high energy particle interactions, incorporated in PHITS, and show comparisons between model calculations and experiments for the validation of these models. The paper presents three applications of the code: spallation neutron sources, heavy ion therapy, and space radiation. The results and examples shown indicate that PHITS is capable of carrying out radiation transport analysis of almost all particles, including heavy ions, within a wide energy range.

  18. Recent advances in neutral particle transport methods and codes

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1996-01-01

    An overview of ORNL's three-dimensional neutral particle transport code, TORT, is presented. Special features of the code that make it invaluable for large applications are summarized for the prospective user. Advanced capabilities currently under development and installation in the production release of TORT are discussed; they include: multitasking on Cray platforms running the UNICOS operating system; the Adjacent-cell Preconditioning acceleration scheme; and graphics codes for displaying computed quantities such as the flux. Further developments of TORT and its companion codes, to enhance its present capabilities as well as expand its range of applications, are discussed. Speculation on the next generation of neutral particle transport codes at ORNL, especially regarding unstructured grids and high-order spatial approximations, is also offered.

  19. Development of particle and heavy ion transport code system

    International Nuclear Information System (INIS)

    Niita, Koji

    2004-01-01

    The Particle and Heavy Ion Transport code System (PHITS) is a three-dimensional general-purpose Monte Carlo simulation code for describing the transport and reactions of particles and heavy ions in materials. It was developed on the basis of NMTC/JAM for the design and safety assessment of J-PARC. What PHITS is, the physical processes and models it implements, and the development history of the code are described. As examples of applications, the evaluation of neutron optics, cancer treatment with heavy particle beams, and cosmic radiation are presented. The JAM and JQMD models are used as the physical models. Neutron motion in a sextupole magnetic field and a gravitational field, PHITS simulations of the trace of a 12C beam and the secondary neutron tracks in a small model of the cancer treatment device at HIMAC, and the neutron flux in the Space Shuttle are explained. (S.Y.)

  20. Optimization of the particle pusher in a diode simulation code

    International Nuclear Information System (INIS)

    Theimer, M.M.; Quintenz, J.P.

    1979-09-01

    The particle pusher in Sandia's particle-in-cell diode simulation code has been rewritten to reduce the required run time of a typical simulation. The resulting new version of the code has been found to run up to three times as fast as the original with comparable accuracy. The cost of this optimization was an increase in storage requirements of about 15%. The new version has also been written to run efficiently on a CRAY-1 computing system. Steps taken to affect this reduced run time are described. Various test cases are detailed

  1. High energy particle transport code NMTC/JAM

    International Nuclear Information System (INIS)

    Niita, K.; Takada, H.; Meigo, S.; Ikeda, Y.

    2001-01-01

    We have developed a high energy particle transport code, NMTC/JAM, which is an upgraded version of NMTC/JAERI97. The available energy range of NMTC/JAM is, in principle, extended to 200 GeV for nucleons and mesons by including the high energy nuclear reaction code JAM for the intra-nuclear cascade part. We compare calculations by the NMTC/JAM code with experimental data for thin and thick targets for proton-induced reactions up to several tens of GeV. The results of the NMTC/JAM code show excellent agreement with the experimental data. From this code validation, it is concluded that NMTC/JAM is reliable for neutronics optimization studies of high-intensity spallation neutron facilities. (author)

  2. FLUKA A multi-particle transport code (program version 2005)

    CERN Document Server

    Ferrari, A; Fassò, A; Ranft, Johannes

    2005-01-01

    This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner’s guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.

  3. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A

    DEFF Research Database (Denmark)

    Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael

    2015-01-01

    The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An...

  4. BOA, Beam Optics Analyzer A Particle-In-Cell Code

    International Nuclear Information System (INIS)

    Bui, Thuc

    2007-01-01

    The program was tasked with implementing time-dependent analysis of charged particles in an existing finite element code with adaptive meshing, called Beam Optics Analyzer (BOA). BOA was initially funded by a DOE Phase II program to use the finite element method with adaptive meshing to track particles in unstructured meshes. It uses modern programming techniques and state-of-the-art data structures, so that new methods, features and capabilities are easily added and maintained. This Phase II program was funded to implement plasma simulations in BOA, extend its capabilities to model thermal electrons, secondary emission and self magnetic fields, and implement more comprehensive post-processing and a feature-rich GUI. The program was successful in implementing thermal electrons, secondary emission, and self magnetic field calculations. The BOA GUI was also upgraded significantly, and CCR is receiving interest from the microwave tube and semiconductor equipment industry in the code. Implementation of PIC analysis was partially successful. Computational resource requirements for modeling more than 2000 particles begin to exceed the capability of most readily available computers, whereas modern plasma analysis typically requires modeling approximately 2 million particles or more. The problem is that tracking many particles in an unstructured mesh that is adapting becomes inefficient; in particular, memory requirements become excessive. This probably makes particle tracking in unstructured meshes infeasible with commonly available computer resources. Consequently, Calabazas Creek Research, Inc. is exploring hybrid codes in which the electromagnetic fields are solved on the unstructured, adaptive mesh while particles are tracked on a fixed mesh. Efficient interpolation routines should be able to transfer information between the nodes of the two meshes. If successfully developed, this could provide high accuracy and reasonable computational efficiency.
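The hybrid-mesh idea in the abstract hinges on transferring field values between two differently spaced node sets. A 1D sketch of that transfer by linear interpolation follows; the node layouts and function names are made up for illustration, and the real problem is 3D and unstructured.

```python
def transfer(src_nodes, src_vals, dst_nodes):
    """Linearly interpolate nodal values from one sorted 1D mesh onto another."""
    out = []
    for x in dst_nodes:
        # Find the source interval containing x (both meshes assumed sorted,
        # dst_nodes assumed inside the span of src_nodes).
        j = 0
        while j + 2 < len(src_nodes) and src_nodes[j + 1] < x:
            j += 1
        x0, x1 = src_nodes[j], src_nodes[j + 1]
        w = (x - x0) / (x1 - x0)
        out.append(src_vals[j] * (1.0 - w) + src_vals[j + 1] * w)
    return out

adaptive = [0.0, 0.2, 0.5, 1.0]            # non-uniform "field" mesh
field = [x * x for x in adaptive]          # nodal values of f(x) = x^2
fixed = [0.0, 0.25, 0.5, 0.75, 1.0]        # uniform "particle" mesh
vals = transfer(adaptive, field, fixed)
```

Coincident nodes are reproduced exactly, which is the minimum a field-to-particle-mesh transfer must guarantee; a production routine would also need conservative weighting in 3D.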

  5. Parallelization of a Monte Carlo particle transport simulation code

    Science.gov (United States)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators were also integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes with 2 dual-core AMD Opteron processors each, and a 200 dual-processor HP cluster. For large problem sizes, limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a low response time.
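The parallelization pattern described above can be sketched as follows: histories are split across workers, each drawing from its own independently seeded stream (standing in here for SPRNG/DCMT), and the partial tallies are summed at the end. The workers run sequentially in this sketch; with MPI, each loop body would be a rank and the sum a reduction. The dart-throwing "transport" problem is a stand-in, not MC4's physics.

```python
import random

def run_worker(rank, histories):
    """Simulate one worker's share of histories with its own PRNG stream."""
    rng = random.Random(1000 + rank)       # per-rank seed -> independent stream
    hits = 0
    for _ in range(histories):
        x, y = rng.random(), rng.random()  # toy history: dart in the unit square
        if x * x + y * y < 1.0:
            hits += 1
    return hits

TOTAL, RANKS = 40000, 4
per_rank = TOTAL // RANKS
tally = sum(run_worker(r, per_rank) for r in range(RANKS))   # "MPI_Reduce"
pi_estimate = 4.0 * tally / TOTAL
```

Because each stream is seeded independently, the combined result is reproducible regardless of how many workers share the load, which is the property the parallel RNG libraries provide with stronger statistical guarantees.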

  6. DART: a simulation code for charged particle beams

    International Nuclear Information System (INIS)

    White, R.C.; Barr, W.L.; Moir, R.W.

    1988-01-01

    This paper presents a recently modified version of the 2-D DART code, designed to simulate the behavior of a beam of charged particles whose paths are affected by electric and magnetic fields. This code was originally used to design laboratory-scale and full-scale beam direct converters. Since then, its utility has been expanded to allow more general applications. The simulation technique includes space charge, secondary electron effects, and neutral gas ionization. Calculations of electrode placement and energy conversion efficiency are described. Basic operating procedures are given, including sample input files and output. 7 refs., 18 figs

  7. Particle and heavy ion transport code system; PHITS

    International Nuclear Information System (INIS)

    Niita, Koji

    2004-01-01

    Intermediate and high energy nuclear data are strongly required in the design study of many facilities such as accelerator-driven systems and intense pulsed spallation neutron sources, and also in medical and space technology. There are, however, few evaluated nuclear data for intermediate and high energy nuclear reactions. Therefore, we have to use models or systematics for the cross sections, which are essential ingredients of a high energy particle and heavy ion transport code, to estimate neutron yield, heat deposition and many other quantities of the transport phenomena in materials. We have developed a general purpose particle and heavy ion transport Monte Carlo code system, PHITS (Particle and Heavy Ion Transport code System), based on the NMTC/JAM code, through the collaboration of Tohoku University, JAERI and RIST. PHITS has three important ingredients which enable us to calculate (1) high energy nuclear reactions up to 200 GeV, (2) heavy ion collisions and their transport in material, and (3) low energy neutron transport based on evaluated nuclear data. In PHITS, the cross sections of high energy nuclear reactions are obtained with the JAM model. JAM (Jet AA Microscopic Transport Model) is a hadronic cascade model, which explicitly treats all established hadronic states, including resonances, with all hadron-hadron cross sections parametrized based on the resonance model and string model by fitting the available experimental data. PHITS describes the transport of heavy ions and their collisions by making use of the JQMD and SPAR codes. JQMD (JAERI Quantum Molecular Dynamics) is a simulation code for nucleus-nucleus collisions based on molecular dynamics. The SPAR code is widely used to calculate the stopping powers and ranges of charged particles and heavy ions. PHITS also incorporates part of the MCNP4C code, by which the transport of low energy neutrons, photons and electrons based on evaluated nuclear data can be described.
Furthermore, the high energy nuclear

  8. DART: A simulation code for charged particle beams

    International Nuclear Information System (INIS)

    White, R.C.; Barr, W.L.; Moir, R.W.

    1989-01-01

    This paper presents a recently modified version of the 2-D code, DART, which can simulate the behavior of a beam of charged particles whose trajectories are determined by electric and magnetic fields. This code was originally used to design laboratory-scale and full-scale beam direct converters. Since then, its utility has been expanded to allow more general applications. The simulation includes space charge, secondary electrons, and the ionization of neutral gas. A beam can contain up to nine superimposed beamlets of different energies and species. The calculation of energy conversion efficiency and the method of specifying the electrode geometry are described. Basic procedures for using the code are given, and sample input and output files are shown. 7 refs., 18 figs

  9. Canonical momenta and numerical instabilities in particle codes

    International Nuclear Information System (INIS)

    Godfrey, B.B.

    1975-01-01

    A set of warm plasma dispersion relations appropriate to a large class of electromagnetic plasma simulation codes is derived. The numerical Cherenkov instability is shown by analytic and numerical analysis of these dispersion relations to be the most significant nonphysical effect involving transverse electromagnetic waves. The instability arises due to a spurious phase shift between resonant particles and light waves, caused by a basic incompatibility between the Lagrangian treatment of particle positions and the Eulerian treatment of particle velocities characteristic of most PIC-CIC algorithms. It is demonstrated that, through the use of canonical momentum, this mismatch is alleviated sufficiently to completely eliminate the Cherenkov instability. Collateral effects on simulation accuracy and on other numerical instabilities appear to be minor
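A sketch of the property the canonical-momentum formulation exploits: for a particle in a static vector potential A_y(x), the canonical momentum P_y = p_y + qA_y(x) is an exact invariant, so a scheme that advances P_y, instead of accumulating p_y from the force each step, removes one source of numerical phase error. The field shape, step sizes, and the deliberately crude x-update are all illustrative; this is not the dispersion analysis of the paper.

```python
import math

q, m, dt = 1.0, 1.0, 0.01
A_y = lambda x: math.sin(x)               # static vector potential A_y(x)
B_z = lambda x: math.cos(x)               # B_z = dA_y/dx

x, px = 0.0, 1.0                          # initial position and p_x
py0 = 0.3                                 # initial kinetic p_y
P_y = py0 + q * A_y(x)                    # canonical momentum: set once, never updated

for _ in range(1000):
    py = P_y - q * A_y(x)                 # recover kinetic p_y exactly each step
    # Advance x and p_x with a simple explicit step (F_x = q * v_y * B_z);
    # a real code would use a more careful pusher here.
    px += q * (py / m) * B_z(x) * dt
    x += (px / m) * dt

# However far x wanders, p_y + q*A_y(x) equals P_y by construction.
py_final = P_y - q * A_y(x)
residual = (py_final + q * A_y(x)) - P_y
```

The point is that the invariant is enforced to round-off regardless of the step size, whereas a force-based p_y update would drift through exactly the particle/field phase mismatch the paper analyses.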

  10. The OpenMC Monte Carlo particle transport code

    International Nuclear Information System (INIS)

    Romano, Paul K.; Forget, Benoit

    2013-01-01

    Highlights: ► An open source Monte Carlo particle transport code, OpenMC, has been developed. ► Solid geometry and continuous-energy physics allow high-fidelity simulations. ► Development has focused on high performance and modern I/O techniques. ► OpenMC is capable of scaling up to hundreds of thousands of processors. ► Results on a variety of benchmark problems agree with MCNP5. -- Abstract: A new Monte Carlo code called OpenMC is currently under development at the Massachusetts Institute of Technology as a tool for simulation on high-performance computing platforms. Given that many legacy codes do not scale well on existing and future parallel computer architectures, OpenMC has been developed from scratch with a focus on high performance scalable algorithms as well as modern software design practices. The present work describes the methods used in the OpenMC code and demonstrates the performance and accuracy of the code on a variety of problems.

  11. Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture

    Science.gov (United States)

    Fonseca, Ricardo

    2017-10-01

Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser plasma interaction. Being computationally intensive, these codes require large scale HPC systems, and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts on deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems, and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed performance evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.

  12. High energy particle transport code NMTC/JAM

    International Nuclear Information System (INIS)

    Niita, Koji; Meigo, Shin-ichiro; Takada, Hiroshi; Ikeda, Yujiro

    2001-03-01

We have developed a high energy particle transport code NMTC/JAM, which is an upgraded version of NMTC/JAERI97. The applicable energy range of NMTC/JAM is extended in principle up to 200 GeV for nucleons and mesons by introducing the high energy nuclear reaction code JAM for the intra-nuclear cascade part. For the evaporation and fission process, we have also implemented a new model, GEM, by which the light nucleus production from the excited residual nucleus can be described. To match the extended energy range, we have upgraded the nucleon-nucleus non-elastic, elastic and differential elastic cross section data by employing new systematics. In addition, particle transport in a magnetic field has been implemented for beam transport calculations. In this upgrade, some new tally functions have been added and the input data format has been made considerably more user friendly. With these new calculation functions and utilities, NMTC/JAM enables us to carry out reliable neutronics studies of large scale target systems with complex geometry more accurately and easily than before. This report serves as a user manual for the code. (author)

  13. New electromagnetic particle simulation code for the analysis of spacecraft-plasma interactions

    International Nuclear Information System (INIS)

    Miyake, Yohei; Usui, Hideyuki

    2009-01-01

    A novel particle simulation code, the electromagnetic spacecraft environment simulator (EMSES), has been developed for the self-consistent analysis of spacecraft-plasma interactions on the full electromagnetic (EM) basis. EMSES includes several boundary treatments carefully coded for both longitudinal and transverse electric fields to satisfy perfect conductive surface conditions. For the longitudinal component, the following are considered: (1) the surface charge accumulation caused by impinging or emitted particles and (2) the surface charge redistribution, such that the surface becomes an equipotential. For item (1), a special treatment has been adopted for the current density calculated around the spacecraft surface, so that the charge accumulation occurs exactly on the surface. As a result, (1) is realized automatically in the updates of the charge density and the electric field through the current density. Item (2) is achieved by applying the capacity matrix method. Meanwhile, the transverse electric field is simply set to zero for components defined inside and tangential to the spacecraft surfaces. This paper also presents the validation of EMSES by performing test simulations for spacecraft charging and peculiar EM wave modes in a plasma sheath.

  14. Los Alamos neutral particle transport codes: New and enhanced capabilities

    International Nuclear Information System (INIS)

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Clark, B.A.; Koch, K.R.; Marr, D.R.

    1992-01-01

We present new developments in Los Alamos discrete-ordinates transport codes and introduce THREEDANT, the latest in the series of Los Alamos discrete ordinates transport codes. THREEDANT solves the multigroup, neutral-particle transport equation in X-Y-Z and R-Θ-Z geometries. THREEDANT uses computationally efficient algorithms: Diffusion Synthetic Acceleration (DSA) is used to accelerate the convergence of transport iterations, and the DSA solution is itself accelerated using the multigrid technique. THREEDANT runs on a wide range of computers, from scientific workstations to CRAY supercomputers. The algorithms are highly vectorized on CRAY computers. Recently, the THREEDANT transport algorithm was implemented on the massively parallel CM-2 computer, with performance comparable to a single-processor CRAY Y-MP. We present the results of THREEDANT analysis of test problems.

  15. Particle-in-Cell Codes for plasma-based particle acceleration

    CERN Document Server

    Pukhov, Alexander

    2016-01-01

Basic principles of particle-in-cell (PIC) codes with the main application to plasma-based acceleration are discussed. The ab initio fully electromagnetic relativistic PIC codes provide the most reliable description of plasmas. Their properties are considered in detail. Representing the most fundamental model, full PIC codes are computationally expensive. Plasma-based acceleration is a multi-scale problem with very disparate scales: the smallest scale is the laser or plasma wavelength (from one to a hundred microns) and the largest is the acceleration distance (from a few centimeters to meters or even kilometers). The Lorentz-boost technique reduces the scale disparity at the cost of complicating the simulations and causing unphysical numerical instabilities in the code. Another possibility is the quasi-static approximation, in which the disparate scales are separated analytically.
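
The basic PIC cycle (deposit charge, solve for the field, gather it back to the particles, push) can be sketched in a few lines of normalized-units Python; this is a generic 1D electrostatic illustration, not any production code.

```python
import numpy as np

# Minimal 1D electrostatic PIC sketch, normalized units, periodic domain,
# immobile neutralizing ion background. Illustrates the basic
# deposit / field-solve / gather / push cycle only.

ng, npart = 64, 4096
L = 2 * np.pi
dx = L / ng
qp = -L / npart                       # electron macro-charge (mean density -1)
rng = np.random.default_rng(0)
x = rng.uniform(0, L, npart)
v = 0.1 * np.sin(x)                   # small initial velocity perturbation

def deposit(x):
    """Cloud-in-cell charge deposition plus uniform ion background."""
    rho = np.zeros(ng)
    g = x / dx
    i = np.floor(g).astype(int) % ng
    w = g - np.floor(g)
    np.add.at(rho, i, qp * (1 - w) / dx)
    np.add.at(rho, (i + 1) % ng, qp * w / dx)
    return rho + 1.0

def solve_E(rho):
    """Solve dE/dx = rho spectrally on the periodic grid."""
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rhok = np.fft.fft(rho)
    Ek = np.zeros_like(rhok)
    Ek[1:] = rhok[1:] / (1j * k[1:])
    return np.fft.ifft(Ek).real

dt = 0.1
for _ in range(200):
    E = solve_E(deposit(x))
    g = x / dx
    i = np.floor(g).astype(int) % ng
    w = g - np.floor(g)
    Ep = E[i] * (1 - w) + E[(i + 1) % ng] * w   # gather E at the particles
    v += -1.0 * Ep * dt                          # q/m = -1 for electrons
    x = (x + v * dt) % L
```

Full EM-PIC codes replace the spectral Poisson solve with the Maxwell curl equations and a current deposit, but the cycle structure is the same.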

  16. Particle tracking code of simulating global RF feedback

    International Nuclear Information System (INIS)

    Mestha, L.K.

    1991-09-01

It is well known in the ''control community'' that a good feedback controller design is deeply rooted in the physics of the system. For example, when accelerating the beam we must keep several parameters under control so that the beam travels within the confined space. Important parameters include the frequency and phase of the rf signal, the dipole field, and the cavity voltage. Because errors in these parameters will progressively mislead the beam from its projected path in the tube, feedback loops are used to correct the behavior. Since the feedback loop feeds energy to the system, it changes the overall behavior of the system and may drive it to instability. Various types of controllers are used to stabilize the feedback loop. Integrating the beam physics with the feedback controllers allows us to analyze the beam behavior carefully. This will not only guarantee optimal performance but will also significantly enhance the beam control engineer's ability to deal effectively with the interaction of various feedback loops. Motivated by this theme, we developed a simple one-particle tracking code to simulate particle behavior with feedback controllers. To achieve our fundamental objective, we can ask some key questions: What are the input and output parameters? How can they be applied to the practical machine? How can one interface the rf system dynamics, such as the transfer characteristics of the rf cavities and the phasing between the cavities? Answers to these questions can be found by considering the simple case of a single cavity with one particle, tracking it turn-by-turn with appropriate initial conditions, and then introducing constraints on crucial parameters. The critical parameters are rf frequency, phase, and amplitude once the dipole field has been given. These are arranged in the tracking code so that we can interface the feedback system controlling them.

  17. EM modeling for GPIR using 3D FDTD modeling codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.D.

    1994-10-01

An analysis of the one-, two-, and three-dimensional electrical characteristics of structural cement and concrete is presented. This work connects experimental efforts in characterizing cement and concrete in the frequency and time domains with the Finite Difference Time Domain (FDTD) modeling efforts for these substances. These efforts include electromagnetic (EM) modeling of simple lossless homogeneous materials with aggregate and targets, and the modeling of dispersive and lossy materials with aggregate and complex target geometries for Ground Penetrating Imaging Radar (GPIR). Two- and three-dimensional FDTD codes (developed at LLNL) were used for the modeling efforts. The purpose of the experimental and modeling efforts is to gain knowledge about the electrical properties of concrete typically used in the construction industry for bridges and other load-bearing structures. The goal is to optimize the performance of a high-sample-rate impulse radar and data acquisition system and to design an antenna system to match the characteristics of this material. Results show agreement to within 2 dB between the amplitudes of the experimental and modeled data, while the frequency peaks correlate to within 10%, the differences being due to the unknown exact nature of the aggregate placement.
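
The FDTD updates underlying such modeling are a leapfrog exchange between E and H on a staggered (Yee) grid. A minimal 1D sketch in normalized units (c dt = dx), unrelated to the LLNL codes themselves:

```python
import numpy as np

# 1D FDTD (Yee) leapfrog sketch, normalized so that c*dt = dx; PEC walls
# at both ends and a soft Gaussian source. Lossless and vacuum-only, purely
# to illustrate the curl-update structure of FDTD codes.

nx, nt = 400, 300
Ez = np.zeros(nx)       # E nodes
Hy = np.zeros(nx - 1)   # H nodes, staggered half a cell
for n in range(nt):
    Hy += Ez[1:] - Ez[:-1]                           # advance H from curl E
    Ez[1:-1] += Hy[1:] - Hy[:-1]                     # advance E from curl H
    Ez[nx // 4] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source
```

Lossy and dispersive media, as studied in this work, enter through extra conductivity and polarization terms in the E update; the leapfrog skeleton is unchanged.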

  18. Parallel treatment of simulation particles in particle-in-cell codes on SUPRENUM

    International Nuclear Information System (INIS)

    Seldner, D.

    1990-02-01

    This report contains the program documentation and description of the program package 2D-PLAS, which has been developed at the Nuclear Research Center Karlsruhe in the Institute for Data Processing in Technology (IDT) under the auspices of the BMFT. 2D-PLAS is a parallel program version of the treatment of the simulation particles of the two-dimensional stationary particle-in-cell code BFCPIC which has been developed at the Nuclear Research Center Karlsruhe. This parallel version has been designed for the parallel computer SUPRENUM. (orig.) [de

  19. Development of general-purpose particle and heavy ion transport monte carlo code

    International Nuclear Information System (INIS)

    Iwase, Hiroshi; Nakamura, Takashi; Niita, Koji

    2002-01-01

    The high-energy particle transport code NMTC/JAM, which has been developed at JAERI, was improved for the high-energy heavy ion transport calculation by incorporating the JQMD code, the SPAR code and the Shen formula. The new NMTC/JAM named PHITS (Particle and Heavy-Ion Transport code System) is the first general-purpose heavy ion transport Monte Carlo code over the incident energies from several MeV/nucleon to several GeV/nucleon. (author)

  20. New features of the Mercury Monte Carlo particle transport code

    International Nuclear Information System (INIS)

    Procassini, Richard; Brantley, Patrick; Dawson, Shawn

    2010-01-01

Several new capabilities have been added to the Mercury Monte Carlo transport code over the past four years. The most important algorithmic enhancement is a general, extensible infrastructure to support source, tally and variance reduction actions. For each action, the user defines a phase space, as well as any number of responses that are applied to a specified event. Tallies are accumulated into a correlated, multi-dimensional, Cartesian-product result phase space. Our approach employs a common user interface to specify the data sets and distributions that define the phase, response and result for each action. Modifications to the particle trackers include the use of facet halos (instead of extrapolative fuzz) for robust tracking, and material interface reconstruction for use in shape overlaid meshes. Support for expected-value criticality eigenvalue calculations has also been implemented. Computer science enhancements include an in-line Python interface for user customization of problem setup and output. (author)

  1. Computer codes used in particle accelerator design: First edition

    International Nuclear Information System (INIS)

    1987-01-01

This paper contains a listing of more than 150 programs that have been used in the design and analysis of accelerators. Given in each citation are the person to contact, the classification of the computer code, publications describing the code, the computer and language the code runs on, and a short description of the code. Codes are indexed by subject, person to contact, and code acronym.

  2. Code-B-1 for stress/strain calculation for TRISO fuel particle (Contract research)

    International Nuclear Information System (INIS)

    Aihara, Jun; Ueta, Shohei; Shibata, Taiju; Sawa, Kazuhiro

    2011-12-01

    We have developed Code-B-1 for the prediction of the failure probabilities of the coated fuel particles for the high temperature gas-cooled reactors (HTGRs) under operation by modification of an existing code. A finite element method (FEM) is employed for the stress calculation part and Code-B-1 can treat the plastic deformation of the coating layer of the coated fuel particles which the existing code cannot treat. (author)

  3. SSCTRK: A particle tracking code for the SSC

    International Nuclear Information System (INIS)

    Ritson, D.

    1990-07-01

While many indirect methods are available to evaluate dynamic aperture, there appears at this time to be no reliable substitute for tracking particles through realistic machine lattices for a number of turns determined by the storage times. Machine lattices are generated by ''Monte Carlo'' techniques from the expected rms fabrication and survey errors. Any given generated machine can potentially be a lucky or unlucky fluctuation from the average. Therefore simulation, to serve as a predictor of future performance, must be done for an ensemble of generated machines. Further, several amplitudes and momenta are necessary to predict machine performance. Thus, making Monte Carlo type simulations for the SSC requires very considerable computer resources. Hitherto, it has been assumed that this was not feasible, and alternative indirect methods have been proposed or tried to answer the problem. We reexamined the feasibility of using direct computation. Previous codes have represented lattices by a succession of thin elements separated by bend-drifts. With ''kick-drift'' configurations, tracking time is linear in the multipole order included, and the code is symplectic. Modern vector processors simultaneously handle a large number of cases in parallel. Combining the efficiencies of kick-drift tracking with vector processing, in fact, makes realistic Monte Carlo simulation entirely feasible. SSCTRK uses the above features. It is structured to have a very friendly interface, a very wide latitude of choice for cases to be run in parallel, and, by using pure FORTRAN 77, to run interchangeably on a wide variety of computers. We describe in this paper the program structure, operational checks, and the results achieved.
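
The kick-drift scheme the abstract describes can be sketched directly: a thin multipole kick changes only the momenta, a drift changes only the positions, and each half-map (hence their composition) is symplectic. The multipole strengths and lattice below are invented for illustration and are not SSC parameters.

```python
# Kick-drift tracking sketch (illustrative strengths, not SSC lattice data):
# thin multipole kicks alternate with field-free drifts. Cost per kick is
# linear in the multipole order, and each half-map is symplectic.

def kick(x, px, y, py, b):
    """Thin multipole kick with Δpx, Δpy taken from -sum_n b[n]*(x + i*y)^n."""
    z = complex(x, y)
    dp = -sum(bn * z**n for n, bn in enumerate(b))
    return px + dp.real, py - dp.imag

def drift(x, px, y, py, L):
    """Field-free drift of length L (small-angle approximation)."""
    return x + L * px, y + L * py

x, px, y, py = 1e-3, 0.0, 0.5e-3, 0.0
for turn in range(1000):
    sgn = 1.0 if turn % 2 == 0 else -1.0          # alternating (FODO-like) quads
    px, py = kick(x, px, y, py, [0.0, sgn * 0.1, 0.05])
    x, y = drift(x, px, y, py, 0.5)
```

Vectorizing such a loop over many particles and many generated machines at once is exactly the combination the abstract credits with making ensemble tracking feasible.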

  4. A novel neutron energy spectrum unfolding code using particle swarm optimization

    International Nuclear Information System (INIS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-01-01

A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse height distribution and a response matrix. The Particle Swarm Optimization (PSO) algorithm imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with those of the standard spectra and the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code has previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate than them. The results of the SDPSO code have been demonstrated to match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO code has been shown to be nearly two times faster than the TGASU code. - Highlights: • Introducing a novel method for neutron spectrum unfolding. • Implementation of a particle swarm optimization code for neutron unfolding. • Comparing results of the PSO code with those of the recently published TGASU code. • Results of the PSO code match those of the TGASU code. • Greater convergence rate of the implemented PSO code than the TGASU code.
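
The underlying optimization can be illustrated with a generic PSO loop that minimizes ||R phi - m||^2 for a small made-up response matrix; this is a sketch of the method class only, not the SDPSO implementation.

```python
import numpy as np

# Generic particle-swarm sketch of spectrum unfolding: find a non-negative
# spectrum phi minimizing ||R phi - m||^2. R, m and all PSO parameters here
# are invented for illustration.

rng = np.random.default_rng(1)
R = rng.uniform(0.0, 1.0, (8, 4))            # response matrix: 8 channels, 4 bins
phi_true = np.array([1.0, 2.0, 0.5, 1.5])
m = R @ phi_true                              # noiseless "measurement"

def cost(phi):
    return np.sum((R @ np.abs(phi) - m) ** 2) # abs() folds in non-negativity

n, dim = 40, 4
pos = rng.uniform(0, 3, (n, dim))             # swarm positions
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_c = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_c)].copy()

for _ in range(300):
    r1, r2 = rng.uniform(size=(2, n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    c = np.array([cost(p) for p in pos])
    upd = c < pbest_c
    pbest[upd], pbest_c[upd] = pos[upd], c[upd]
    gbest = pbest[np.argmin(pbest_c)].copy()
```

The recovered spectrum is np.abs(gbest); the inertia/cognitive/social weights (0.7, 1.5, 1.5) are standard textbook choices, not tuned values from the paper.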

  5. ORBXYZ: a 3D single-particle orbit code for following charged-particle trajectories in equilibrium magnetic fields

    International Nuclear Information System (INIS)

    Anderson, D.V.; Cohen, R.H.; Ferguson, J.R.; Johnston, B.M.; Sharp, C.B.; Willmann, P.A.

    1981-01-01

The single particle orbit code, TIBRO, has been modified extensively to improve the interpolation methods used and to allow the use of vector potential fields in the simulation of charged particle orbits on a 3D domain. A 3D cubic B-spline algorithm is used to generate the spline coefficients used in the interpolation. Smooth and accurate field representations are obtained. When vector potential fields are used, the 3D cubic spline interpolation formula analytically generates the magnetic field used to push the particles. This field has ∇·B = 0 to computer roundoff. When the magnetic induction is interpolated directly, the interpolation allows ∇·B ≠ 0, which can lead to significant nonphysical results. Presently the code assumes quadrupole symmetry, but this is not an essential feature of the code and could easily be removed for other applications. Many details pertaining to this code are given on microfiche accompanying this report.
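
Why interpolating the vector potential preserves ∇·B = 0 can be seen with any smooth interpolant: defining B = ∇×A makes the divergence a difference of identical mixed partials. The sketch below uses a random bicubic polynomial patch in 2D as a stand-in for a cubic B-spline (illustrative only, not the code's interpolation routine).

```python
import numpy as np

# Illustration of the div-B argument: with A_z(x, y) any smooth interpolant
# (here a random bicubic polynomial standing in for a B-spline patch),
# Bx = dA/dy and By = -dA/dx give div B = d2A/dxdy - d2A/dydx == 0.

rng = np.random.default_rng(2)
C = rng.normal(size=(4, 4))     # bicubic coefficients: A_z = sum C[i,j] x^i y^j

def A(x, y):
    return sum(C[i, j] * x**i * y**j for i in range(4) for j in range(4))

def B(x, y):
    """Analytic curl of the interpolant: (Bx, By) = (dA/dy, -dA/dx)."""
    Bx = sum(C[i, j] * j * x**i * y**(j - 1) for i in range(4) for j in range(1, 4))
    By = -sum(C[i, j] * i * x**(i - 1) * y**j for i in range(1, 4) for j in range(4))
    return Bx, By

def divB(x, y):
    """dBx/dx + dBy/dy: identical mixed partials cancel term by term."""
    dBx_dx = sum(C[i, j] * i * j * x**(i - 1) * y**(j - 1)
                 for i in range(1, 4) for j in range(1, 4))
    dBy_dy = -sum(C[i, j] * i * j * x**(i - 1) * y**(j - 1)
                  for i in range(1, 4) for j in range(1, 4))
    return dBx_dx + dBy_dy
```

Interpolating B component-wise instead breaks this term-by-term cancellation, which is the nonphysical ∇·B ≠ 0 the abstract warns about.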

  6. Massively parallel unsupervised single-particle cryo-EM data clustering via statistical manifold learning.

    Science.gov (United States)

    Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi; Mao, Youdong

    2017-01-01

Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.

  7. Massively parallel unsupervised single-particle cryo-EM data clustering via statistical manifold learning.

    Directory of Open Access Journals (Sweden)

    Jiayi Wu

Full Text Available Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.

  8. The failure mechanisms of HTR coated particle fuel and computer code

    International Nuclear Information System (INIS)

    Yang Lin; Liu Bing; Shao Youlin; Liang Tongxiang; Tang Chunhe

    2010-01-01

The basic constituent unit of the fuel element in an HTR is the ceramic coated particle fuel, and the performance of the coated particle fuel determines the safety of the HTR. In addition to traditional irradiation experiments, establishing computer codes is of great significance to this research. This paper introduces the structure and failure mechanisms of TRISO-coated particle fuel, as well as the basic assumptions, principles and characteristics of the main existing overseas codes. It also proposes directions for future research by comparing the advantages and disadvantages of several computer codes. (authors)

  9. A general concurrent algorithm for plasma particle-in-cell simulation codes

    International Nuclear Information System (INIS)

    Liewer, P.C.; Decyk, V.K.

    1989-01-01

We have developed a new algorithm for implementing plasma particle-in-cell (PIC) simulation codes on concurrent processors with distributed memory. This algorithm, named the general concurrent PIC algorithm (GCPIC), has been used to implement an electrostatic PIC code on the 33-node JPL Mark III Hypercube parallel computer. To decompose a PIC code using the GCPIC algorithm, the physical domain of the particle simulation is divided into sub-domains, equal in number to the number of processors, such that all sub-domains have roughly equal numbers of particles. For problems with non-uniform particle densities, these sub-domains will be of unequal physical size. Each processor is assigned a sub-domain and is responsible for updating the particles in its sub-domain. This algorithm has led to a very efficient parallel implementation of a well-benchmarked 1-dimensional PIC code. The dominant portion of the code, updating the particle positions and velocities, is nearly 100% efficient when the number of particles is increased linearly with the number of hypercube processors used so that the number of particles per processor is constant. For example, the increase in time spent updating particles in going from a problem with 11,264 particles run on 1 processor to 360,448 particles on 32 processors was only 3% (a parallel efficiency of 97%). Although implemented on a hypercube concurrent computer, this algorithm should also be efficient for PIC codes on other parallel architectures and for large PIC codes on sequential computers where part of the data must reside on external disks. copyright 1989 Academic Press, Inc
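
The load-balanced decomposition described here, with sub-domain boundaries chosen so each processor holds roughly the same number of particles even for non-uniform densities, can be sketched with quantiles in 1D (an illustration of the idea, not the GCPIC implementation).

```python
import numpy as np

# Sketch of GCPIC-style load balancing: pick sub-domain boundaries so each
# of nproc processors gets ~equal particle counts, which yields unequal
# physical sizes when the density is non-uniform. Density model is made up.

def decompose(x, nproc, L):
    """Return nproc+1 boundaries in [0, L] giving ~equal particle counts."""
    qs = np.linspace(0.0, 1.0, nproc + 1)
    b = np.quantile(x, qs)          # boundaries at particle-count quantiles
    b[0], b[-1] = 0.0, L            # pin outer edges to the full domain
    return b

rng = np.random.default_rng(3)
L = 1.0
x = rng.beta(2, 5, 100000) * L      # strongly non-uniform particle density
bounds = decompose(x, 8, L)
counts = np.histogram(x, bins=bounds)[0]
```

The resulting sub-domains are narrow where the plasma is dense and wide where it is tenuous, which is exactly why per-processor particle counts stay balanced.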

  10. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  11. Development of 2D particle-in-cell code to simulate high current, low ...

    Indian Academy of Sciences (India)

Abstract. A code for 2D space-charge dominated beam dynamics study in beam transport lines is developed. The code is used for particle-in-cell (PIC) simulation of a z-uniform beam in a channel containing solenoids and drift space. It can also simulate a transport line where quadrupoles are used for focusing the beam.

  12. The Live Coding of Slub - Art Oriented Programming as Media Critique

    DEFF Research Database (Denmark)

    Andersen, Christian Ulrik

    2007-01-01

    Computer art is often associated with computer-generated expressions (digital audio/images in music, video, stage design, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). This paper wi...

  13. DANTSYS: A diffusion accelerated neutral particle transport code system

    International Nuclear Information System (INIS)

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O'Dell, R.D.; Walters, W.F.

    1995-06-01

The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral triangle spatial mesh. The THREEDANT Solver Module solves the time-independent, three-dimensional transport equation for XYZ and RZΘ symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond differencing method with set-to-zero fixup, with changes to accommodate the generalized spatial meshing.
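
The diamond-differencing scheme named above can be illustrated with a single discrete-ordinates sweep in a 1D slab for one angle mu > 0; the relation used is the generic textbook one, with made-up parameters, not DANTSYS source.

```python
import numpy as np

# One discrete-ordinates sweep with diamond differencing in a 1D slab,
# single angle mu > 0, purely absorbing medium with unit incoming flux.
# Illustrative parameters only.

def sweep(mu, sigma_t, Q, dx, psi_in, n):
    """March cell by cell; the diamond relation sets psi_mid = (in + out)/2."""
    psi_mid = np.zeros(n)
    for i in range(n):
        a = mu / dx
        psi_out = ((a - 0.5 * sigma_t) * psi_in + Q[i]) / (a + 0.5 * sigma_t)
        psi_mid[i] = 0.5 * (psi_in + psi_out)   # diamond relation
        psi_in = psi_out
    return psi_mid

n, dx, sigma_t, mu = 50, 0.1, 1.0, 0.8
psi = sweep(mu, sigma_t, np.zeros(n), dx, psi_in=1.0, n=n)
```

With sigma_t*dx/mu small the sweep reproduces the exact attenuation exp(-sigma_t*x/mu) to second order; the set-to-zero fixup mentioned in the abstract guards the optically thick cells where psi_out would otherwise go negative.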

  14. DANTSYS: A diffusion accelerated neutral particle transport code system

    Energy Technology Data Exchange (ETDEWEB)

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O`Dell, R.D.; Walters, W.F.

    1995-06-01

The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral triangle spatial mesh. The THREEDANT Solver Module solves the time-independent, three-dimensional transport equation for XYZ and RZ{Theta} symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond differencing method with set-to-zero fixup, with changes to accommodate the generalized spatial meshing.

  15. GRADSPH: A parallel smoothed particle hydrodynamics code for self-gravitating astrophysical fluid dynamics

    NARCIS (Netherlands)

    Vanaverbeke, S.; Keppens, R.; Poedts, S.; Boffin, H.

    2009-01-01

    We describe the algorithms implemented in the first version of GRADSPH, a parallel, tree-based, smoothed particle hydrodynamics code for simulating self-gravitating astrophysical systems written in FORTRAN 90. The paper presents details on the implementation of the Smoothed Particle Hydrodynamics (SPH)

  16. Numerical code to determine the particle trapping region in the LISA machine

    International Nuclear Information System (INIS)

    Azevedo, M.T. de; Raposo, C.C. de; Tomimura, A.

    1984-01-01

    A numerical code is constructed to determine the trapping region in machines like LISA. The variable magnetic field is two-dimensional and is coupled to the Runge-Kutta integration through Chebyshev polynomials. Various particle orbits, including particle interactions, were analysed. Besides this, a strong electric field is introduced to see the possible effects happening inside the plasma. (Author) [pt
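
The kind of orbit integration described above can be sketched with a classical fourth-order Runge-Kutta push for a charge in a planar magnetic field. This is an illustrative stand-in, not the LISA code: the uniform field, charge-to-mass ratio, and step count are assumptions, and a constant field replaces the Chebyshev-interpolated one of the original:

```python
import numpy as np

def lorentz_rhs(state, qm, Bz):
    """d/dt of (x, y, vx, vy) for a charge in a uniform field B = Bz ez."""
    x, y, vx, vy = state
    # F = q v x B gives ax = qm*vy*Bz, ay = -qm*vx*Bz in the plane
    return np.array([vx, vy, qm * vy * Bz, -qm * vx * Bz])

def rk4_orbit(state, qm, Bz, dt, nsteps):
    """Classical 4th-order Runge-Kutta push; returns the trajectory."""
    traj = [state.copy()]
    for _ in range(nsteps):
        k1 = lorentz_rhs(state, qm, Bz)
        k2 = lorentz_rhs(state + 0.5 * dt * k1, qm, Bz)
        k3 = lorentz_rhs(state + 0.5 * dt * k2, qm, Bz)
        k4 = lorentz_rhs(state + dt * k3, qm, Bz)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(state.copy())
    return np.array(traj)

# One full gyro-orbit: with qm*Bz = 1 the cyclotron period is 2*pi,
# so after 100 steps of 2*pi/100 the particle should return to its start.
traj = rk4_orbit(np.array([1.0, 0.0, 0.0, 1.0]), qm=1.0, Bz=1.0,
                 dt=2.0 * np.pi / 100, nsteps=100)
```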

  17. Implementing particle-in-cell plasma simulation code on the BBN TC2000

    International Nuclear Information System (INIS)

    Sturtevant, J.E.; Maccabe, A.B.

    1990-01-01

    The BBN TC2000 is a multiple instruction, multiple data (MIMD) machine that combines a physically distributed memory with a logically shared memory programming environment using the unique Butterfly switch. Particle-In-Cell (PIC) plasma simulations model the interaction of charged particles with electric and magnetic fields. This paper describes the implementation of both a 1-D electrostatic and a 2 1/2-D electromagnetic PIC (particle-in-cell) plasma simulation code on a BBN TC2000. Performance is compared to implementations of the same code on the shared memory Sequent Balance and distributed memory Intel iPSC hypercube

  18. photon-plasma: A modern high-order particle-in-cell code

    International Nuclear Information System (INIS)

    Haugbølle, Troels; Frederiksen, Jacob Trier; Nordlund, Åke

    2013-01-01

    We present the photon-plasma code, a modern high-order charge-conserving particle-in-cell code for simulating relativistic plasmas. The code uses a high-order implicit field solver and a novel high-order charge-conserving interpolation scheme for particle-to-cell interpolation and charge deposition. It includes powerful diagnostics tools with on-the-fly particle tracking, synthetic spectra integration, 2D volume slicing, and a new method to correctly account for radiative cooling in the simulations. A robust technique for imposing (time-dependent) particle and field fluxes on the boundaries is also presented. Using a hybrid OpenMP and MPI approach, the code scales efficiently from 8 to more than 250,000 cores with almost linear weak scaling on a range of architectures. The code is tested with the classical benchmarks: particle heating, cold-beam instability, and two-stream instability. We also present particle-in-cell simulations of the Kelvin-Helmholtz instability, and new results on radiative collisionless shocks
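
Charge deposition, which the photon-plasma code performs with a high-order charge-conserving scheme, reduces in its simplest first-order (cloud-in-cell) form to weighting each particle's charge to its two nearest grid nodes. The sketch below is that textbook CIC version, not the code's actual high-order interpolation; the grid size and particle values are invented:

```python
import numpy as np

def deposit_cic(positions, charges, n_cells, dx):
    """First-order cloud-in-cell charge deposition on a periodic 1-D grid."""
    rho = np.zeros(n_cells)
    for x, q in zip(positions, charges):
        s = x / dx
        i = int(np.floor(s))
        w = s - i                          # fractional distance to the left node
        rho[i % n_cells] += q * (1.0 - w) / dx
        rho[(i + 1) % n_cells] += q * w / dx
    return rho

# A unit charge at x = 2.25 is shared 75/25 between nodes 2 and 3.
rho = deposit_cic([2.25], [1.0], n_cells=8, dx=1.0)
```

The linear weights guarantee that the total deposited charge equals the particle charge, which is the starting point for any charge-conserving scheme.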

  19. Neutron secondary-particle production cross sections and their incorporation into Monte-Carlo transport codes

    International Nuclear Information System (INIS)

    Brenner, D.J.; Prael, R.E.; Little, R.C.

    1987-01-01

    Realistic simulations of the passage of fast neutrons through tissue require a large quantity of cross-section data. What are needed are differential (in particle type, energy and angle) cross sections. A computer code is described which produces such spectra for neutrons above ∼14 MeV incident on light nuclei such as carbon and oxygen. Comparisons have been made with experimental measurements of double-differential secondary charged-particle production on carbon and oxygen at energies from 27 to 60 MeV; they indicate that the model is adequate in this energy range. In order to utilize fully the results of these calculations, they should be incorporated into a neutron transport code. This requires defining a generalized format for describing charged-particle production, putting the calculated results into this format, interfacing the neutron transport code with these data, and transporting the charged particles. The design and development of such a program is described. 13 refs., 3 figs

  20. Development of a relativistic Particle In Cell code PARTDYN for linear accelerator beam transport

    Energy Technology Data Exchange (ETDEWEB)

    Phadte, D., E-mail: deepraj@rrcat.gov.in [LPD, Raja Ramanna Centre for Advanced Technology, Indore 452013 (India); Patidar, C.B.; Pal, M.K. [MAASD, Raja Ramanna Centre for Advanced Technology, Indore (India)

    2017-04-11

    A relativistic Particle In Cell (PIC) code PARTDYN is developed for the beam dynamics simulation of z-continuous and bunched beams. The code is implemented in MATLAB using its MEX functionality, which allows both ease of development and performance comparable to a compiled language like C. The beam dynamics calculations carried out by the code are compared with analytical results and with other well-developed codes like PARMELA and BEAMPATH. The effect of a finite number of simulation particles on the emittance growth of intense beams has been studied. Corrections to the RF cavity field expressions were incorporated in the code so that the fields could be calculated correctly. The deviations of the beam dynamics results between PARTDYN and BEAMPATH for a cavity driven in zero-mode have been discussed. Beam dynamics studies of the Low Energy Beam Transport (LEBT) using PARTDYN have been presented.

  1. Adaptation of multidimensional group particle tracking and particle wall-boundary condition model to the FDNS code

    Science.gov (United States)

    Chen, Y. S.; Farmer, R. C.

    1992-01-01

    A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, an Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reactions and of particle size changes due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.

  2. Microfluidic CODES: a scalable multiplexed electronic sensor for orthogonal detection of particles in microfluidic channels.

    Science.gov (United States)

    Liu, Ruxiu; Wang, Ningquan; Kamili, Farhan; Sarioglu, A Fatih

    2016-04-21

    Numerous biophysical and biochemical assays rely on spatial manipulation of particles/cells as they are processed on lab-on-a-chip devices. Analysis of spatially distributed particles on these devices typically requires microscopy, negating the cost and size advantages of microfluidic assays. In this paper, we introduce a scalable electronic sensor technology, called microfluidic CODES, that utilizes resistive pulse sensing to orthogonally detect particles in multiple microfluidic channels from a single electrical output. Combining techniques from telecommunications and microfluidics, we route three coplanar electrodes on a glass substrate to create multiple Coulter counters producing distinct orthogonal digital codes when they detect particles. We specifically design a digital code set using the mathematical principles of Code Division Multiple Access (CDMA) telecommunication networks and can decode signals from different microfluidic channels with >90% accuracy through computation even if these signals overlap. As a proof of principle, we use this technology to detect human ovarian cancer cells in four different microfluidic channels fabricated using soft lithography. Microfluidic CODES offers a simple, all-electronic interface that is well suited to create integrated, low-cost lab-on-a-chip devices for cell- or particle-based assays in resource-limited settings.
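
The CDMA idea behind Microfluidic CODES can be illustrated with orthogonal Walsh codes: even when the signals from two channels overlap, correlating their sum against each channel's code recovers which channels fired. This is a toy sketch of the principle only, not the paper's code set or decoder (the real device decodes analog resistive pulses; the four-channel Walsh matrix here is an assumption):

```python
import numpy as np

def walsh(n):
    """n x n Walsh-Hadamard matrix (n a power of two); rows are orthogonal codes."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh(4)                 # one digital code per microfluidic channel
# Two particles detected simultaneously in channels 1 and 3: their signals overlap.
signal = codes[1] + codes[3]
# Correlate the superposed signal against every code; orthogonality separates them.
corr = codes @ signal / 4.0
detected = [ch for ch, c in enumerate(corr) if c > 0.5]
```

Because the rows of a Walsh-Hadamard matrix are mutually orthogonal, the correlation is 1 for the active channels and 0 for the rest.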

  3. Auxiliary plasma heating and fueling models for use in particle simulation codes

    International Nuclear Information System (INIS)

    Procassini, R.J.; Cohen, B.I.

    1989-01-01

    Computational models of a radiofrequency (RF) heating system and neutral-beam injector are presented. These physics packages, when incorporated into a particle simulation code allow one to simulate the auxiliary heating and fueling of fusion plasmas. The RF-heating package is based upon a quasilinear diffusion equation which describes the slow evolution of the heated particle distribution. The neutral-beam injector package models the charge exchange and impact ionization processes which transfer energy and particles from the beam to the background plasma. Particle simulations of an RF-heated and a neutral-beam-heated simple-mirror plasma are presented. 8 refs., 5 figs

  4. THREEDANT: A code to perform three-dimensional, neutral particle transport calculations

    International Nuclear Information System (INIS)

    Alcouffe, R.E.

    1994-01-01

    The THREEDANT code solves the three-dimensional neutral particle transport equation in its first-order, multigroup, discrete ordinates form. The code allows an unlimited number of groups (depending upon the cross-section set), angular quadrature up to S-100, and unlimited Pn order, again depending upon the cross-section set. The code has three options for spatial differencing: diamond with set-to-zero fixup, adaptive weighted diamond, and linear nodal. The geometry options are XYZ and RZΘ, with a special XYZ option based upon a volume fraction method. This allows objects or bodies of any shape to be modelled as input, which gives the code as much geometric description flexibility as the Monte Carlo code MCNP. The transport equation is solved by source iteration accelerated by the DSA method. Both inner and outer iterations are so accelerated. Some results are presented which demonstrate the effectiveness of these techniques. The code is available on several types of computing platforms

  5. StarSmasher: Smoothed Particle Hydrodynamics code for smashing stars and planets

    Science.gov (United States)

    Gaburov, Evghenii; Lombardi, James C., Jr.; Portegies Zwart, Simon; Rasio, F. A.

    2018-05-01

    Smoothed Particle Hydrodynamics (SPH) is a Lagrangian particle method that approximates a continuous fluid as discrete nodes, each carrying various parameters such as mass, position, velocity, pressure, and temperature. In an SPH simulation the resolution scales with the particle density; StarSmasher is able to handle both equal-mass and equal number-density particle models. StarSmasher solves for hydro forces by calculating the pressure for each particle as a function of the particle's properties - density, internal energy, and internal properties (e.g. temperature and mean molecular weight). The code implements variational equations of motion and libraries to calculate the gravitational forces between particles using direct summation on NVIDIA graphics cards. Using a direct summation instead of a tree-based algorithm for gravity increases the accuracy of the gravity calculations at the cost of speed. The code uses a cubic spline for the smoothing kernel and an artificial viscosity prescription coupled with a Balsara Switch to prevent unphysical interparticle penetration. The code also implements an artificial relaxation force to the equations of motion to add a drag term to the calculated accelerations during relaxation integrations. Initially called StarCrash, StarSmasher was developed originally by Rasio.
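
The cubic spline mentioned above is, in most SPH codes, the standard M4 kernel with compact support of radius 2h. A scalar sketch, assuming the common 3-D normalization (StarSmasher's exact implementation may differ):

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard M4 cubic-spline SPH smoothing kernel in 3-D, support radius 2h."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)           # 3-D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0                             # compact support: zero beyond 2h

# The kernel peaks at r = 0 and integrates to unity over all space.
w0 = cubic_spline_kernel(0.0, 1.0)
```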

  6. Progress of laser-plasma interaction simulations with the particle-in-cell code

    International Nuclear Information System (INIS)

    Sakagami, Hitoshi; Kishimoto, Yasuaki; Sentoku, Yasuhiko; Taguchi, Toshihiro

    2005-01-01

    As the laser-plasma interaction is a non-equilibrium, non-linear and relativistic phenomenon, we must introduce a microscopic method, namely, the relativistic electromagnetic PIC (Particle-In-Cell) simulation code. The PIC code requires a huge number of particles to validate simulation results, and its task is very computation-intensive. Thus simulation research with the PIC code has been progressing along with advances in computer technology. Recently, parallel computers with tremendous computational power have become available, and thus we can perform three-dimensional PIC simulations for the laser-plasma interaction to investigate laser fusion. Some simulation results are shown with figures. We discuss a recent trend of large-scale PIC simulations that enable direct comparison between experimental facts and computational results. We also discuss discharge/lightning simulations performed with the extended PIC code, which includes various atomic and relaxation processes. (author)

  7. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    Science.gov (United States)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP parallelized C++ and OpenCL and includes octree based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.
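
Direct-summation gravity, as contrasted with a tree algorithm, is conceptually just the O(N²) pairwise sum, usually with Plummer softening. A serial sketch of the idea (the softening length and units are assumptions; Neptune's OpenCL/GPU version is organized very differently):

```python
import numpy as np

def gravity_direct(pos, mass, G=1.0, eps=1.0e-3):
    """O(N^2) direct-summation gravitational acceleration with Plummer softening."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                        # vectors to every other particle
        r2 = np.sum(d * d, axis=1) + eps**2     # softened squared distances
        r2[i] = np.inf                          # exclude the self-force
        acc[i] = G * np.sum((mass / r2**1.5)[:, None] * d, axis=0)
    return acc

# Two unit masses one unit apart attract each other equally and oppositely.
acc = gravity_direct(np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]),
                     np.array([1.0, 1.0]))
```

Unlike a tree, this sum makes no multipole approximation, which is the accuracy-for-speed trade the abstract describes.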

  8. Design of sampling tools for Monte Carlo particle transport code JMCT

    International Nuclear Information System (INIS)

    Shangguan Danhua; Li Gang; Zhang Baoyin; Deng Li

    2012-01-01

    A class of sampling tools for the general Monte Carlo particle transport code JMCT is designed. Two ways are provided to sample from distributions. One is the use of specialized sampling methods for specific distributions; the other is the use of general sampling methods for arbitrary discrete distributions and one-dimensional continuous distributions on a finite interval. Some open source codes are included in the general sampling method for the maximum convenience of users. Test results show that these tools sample correctly from the distributions commonly used in particle transport while remaining convenient to use. (authors)
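
A general sampler for an arbitrary discrete distribution, one of the two tool categories described, is typically a cumulative table plus binary search (inverse-transform sampling). The sketch below illustrates that approach; it is not JMCT code, and the weights and seed are invented:

```python
import bisect
import random

def make_discrete_sampler(weights, seed=12345):
    """Build an inverse-transform sampler for an arbitrary discrete distribution."""
    rng = random.Random(seed)
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w
        cdf.append(acc / total)
    # Binary search maps a uniform deviate onto the bin that owns it.
    return lambda: bisect.bisect_left(cdf, rng.random())

sample = make_discrete_sampler([0.1, 0.2, 0.7])
counts = [0, 0, 0]
for _ in range(100_000):
    counts[sample()] += 1
```

With 100,000 draws the observed frequencies should match the weights to within sampling noise.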

  9. Introduction to the Latest Version of the Test-Particle Monte Carlo Code Molflow+

    CERN Document Server

    Ady, M

    2014-01-01

    The Test-Particle Monte Carlo code Molflow+ is getting more and more attention from the scientific community needing detailed 3D calculations of vacuum in the molecular flow regime, mainly, but not limited to, in the particle accelerator field. Substantial changes, bug fixes, geometry-editing and modelling features, and computational speed improvements have been made to the code in the last couple of years. This paper outlines some of these new features and shows examples of applications to the design and analysis of vacuum systems at CERN and elsewhere.

  10. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    Energy Technology Data Exchange (ETDEWEB)

    PLIMPTON,STEVEN J.; SEIDEL,DAVID B.; PASIK,MICHAEL F.; COATS,REBECCA S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.

  12. Implementation of a 3D plasma particle-in-cell code on a MIMD parallel computer

    International Nuclear Information System (INIS)

    Liewer, P.C.; Lyster, P.; Wang, J.

    1993-01-01

    A three-dimensional plasma particle-in-cell (PIC) code has been implemented on the Intel Delta MIMD parallel supercomputer using the General Concurrent PIC algorithm. The GCPIC algorithm uses a domain decomposition to divide the computation among the processors: A processor is assigned a subdomain and all the particles in it. Particles must be exchanged between processors as they move. Results are presented comparing the efficiency for 1-, 2- and 3-dimensional partitions of the three dimensional domain. This algorithm has been found to be very efficient even when a large fraction (e.g. 30%) of the particles must be exchanged at every time step. On the 512-node Intel Delta, up to 125 million particles have been pushed with an electrostatic push time of under 500 nsec/particle/time step
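
The particle-exchange step of such a domain decomposition can be sketched serially: after a push, each particle whose position has left its processor's subdomain is handed to the new owner. This toy version uses lists in a single process in place of real message passing, and the subdomain bounds are invented:

```python
import bisect

def exchange_particles(subdomains, bounds):
    """Move particles to the subdomain that owns their current position.

    subdomains : list of particle position lists, one per processor
    bounds     : ascending subdomain boundaries, len(subdomains) + 1 values
    Returns the redistributed lists and the number of particles exchanged.
    """
    moved = [[] for _ in subdomains]
    n_exchanged = 0
    for rank, particles in enumerate(subdomains):
        for x in particles:
            owner = bisect.bisect_right(bounds, x) - 1
            owner = min(max(owner, 0), len(subdomains) - 1)
            if owner != rank:
                n_exchanged += 1      # in MPI this would be a send/receive
            moved[owner].append(x)
    return moved, n_exchanged

# Three subdomains on [0, 3): particles at 1.4, 0.9, and 2.5 have crossed over.
domains, n = exchange_particles([[0.2, 1.4], [0.9, 2.5], [2.1]],
                                bounds=[0.0, 1.0, 2.0, 3.0])
```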

  13. Hot particle dose calculations using the computer code VARSKIN Mod 2

    International Nuclear Information System (INIS)

    Durham, J.S.

    1991-01-01

    The only calculational model recognised by the Nuclear Regulatory Commission (NRC) for hot particle dosimetry is VARSKIN Mod 1. Because the code was designed to calculate skin dose from distributed skin contamination and not hot particles, it is assumed that the particle has no thickness and, therefore, that no self-absorption occurs within the source material. For low energy beta particles such as those emitted from ⁶⁰Co, a significant amount of self-shielding occurs in hot particles and VARSKIN Mod 1 overestimates the skin dose. In addition, the presence of protective clothing, which will reduce the calculated skin dose for both high and low energy beta emitters, is not modelled in VARSKIN Mod 1. Finally, there is no provision in VARSKIN Mod 1 to calculate the gamma contribution to skin dose from radionuclides that emit both beta and gamma radiation. The computer code VARSKIN Mod 1 has been modified to model three-dimensional sources, insertion of layers of protective clothing between the source and skin, and gamma dose from appropriate radionuclides. The new code, VARSKIN Mod 2, is described, and the sensitivity of the calculated dose to source geometry, diameter, thickness, density, and protective clothing thickness is discussed. Finally, doses calculated using VARSKIN Mod 2 are compared to doses measured from hot particles found in nuclear power plants. (author)

  14. Object-Oriented Parallel Particle-in-Cell Code for Beam Dynamics Simulation in Linear Accelerators

    International Nuclear Information System (INIS)

    Qiang, J.; Ryne, R.D.; Habib, S.; Decyk, V.

    1999-01-01

    In this paper, we present an object-oriented three-dimensional parallel particle-in-cell code for beam dynamics simulation in linear accelerators. A two-dimensional parallel domain decomposition approach is employed within a message passing programming paradigm, along with dynamic load balancing. Implementing an object-oriented software design provides the code with better maintainability, reusability, and extensibility compared with conventional structure-based codes. This also helps to encapsulate the details of communications syntax. Performance tests on SGI/Cray T3E-900 and SGI Origin 2000 machines show good scalability of the object-oriented code. Some important features of this code also include employing symplectic integration with linear maps of external focusing elements and using z as the independent variable, typical in accelerators. A successful application was done to simulate beam transport through three superconducting sections in the APT linac design

  15. DeepPicker: A deep learning approach for fully automated particle picking in cryo-EM.

    Science.gov (United States)

    Wang, Feng; Gong, Huichao; Liu, Gaochao; Li, Meijing; Yan, Chuangye; Xia, Tian; Li, Xueming; Zeng, Jianyang

    2016-09-01

    Particle picking is a time-consuming step in single-particle analysis and often requires significant interventions from users, which has become a bottleneck for future automated electron cryo-microscopy (cryo-EM). Here we report a deep learning framework, called DeepPicker, to address this problem and fill the current gaps toward a fully automated cryo-EM pipeline. DeepPicker employs a novel cross-molecule training strategy to capture common features of particles from previously-analyzed micrographs, and thus does not require any human intervention during particle picking. Tests on the recently-published cryo-EM data of three complexes have demonstrated that our deep learning based scheme can successfully accomplish the human-level particle picking process and identify a sufficient number of particles that are comparable to those picked manually by human experts. These results indicate that DeepPicker can provide a practically useful tool to significantly reduce the time and manual effort spent in single-particle analysis and thus greatly facilitate high-resolution cryo-EM structure determination. DeepPicker is released as an open-source program, which can be downloaded from https://github.com/nejyeah/DeepPicker-python. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. PRIAM: A self consistent finite element code for particle simulation in electromagnetic fields

    International Nuclear Information System (INIS)

    Le Meur, G.; Touze, F.

    1990-06-01

    A 2 1/2-dimensional, relativistic particle simulation code is described. A short review of the mixed finite element method used is given. The treatment of the driving terms (charge and current densities) and of the initial and boundary conditions is described. Graphical results are shown

  17. Modification V to the computer code, STRETCH, for predicting coated-particle behavior

    International Nuclear Information System (INIS)

    Valentine, K.H.

    1975-04-01

    Several modifications have been made to the stress analysis code, STRETCH, in an attempt to improve agreement between the calculated and observed behavior of pyrocarbon-coated fuel particles during irradiation in a reactor environment. Specific areas of the code that have been modified are the neutron-induced densification model and the neutron-induced creep calculation. Also, the capability for modeling surface temperature variations has been added. HFIR Target experiments HT-12 through HT-15 have been simulated with the modified code, and the neutron-fluence vs particle-failure predictions compare favorably with the experimental results. Listings of the modified FORTRAN IV main source program and additional FORTRAN IV functions are provided along with instructions for supplying the additional input data. (U.S.)

  18. SoAx: A generic C++ Structure of Arrays for handling particles in HPC codes

    Science.gov (United States)

    Homann, Holger; Laenen, Francois

    2018-03-01

    The numerical study of physical problems often requires integrating the dynamics of a large number of particles evolving according to a given set of equations. Particles are characterized by the information they carry, such as an identity, a position, and other properties. There are, generally speaking, two different possibilities for handling particles in high performance computing (HPC) codes. The concept of an Array of Structures (AoS) is in the spirit of the object-oriented programming (OOP) paradigm in that the particle information is implemented as a structure. Here, an object (realization of the structure) represents one particle and a set of many particles is stored in an array. In contrast, using the concept of a Structure of Arrays (SoA), a single structure holds several arrays, each representing one property (such as the identity) of the whole set of particles. The AoS approach is often implemented in HPC codes due to its handiness and flexibility. For a class of problems, however, it is known that the performance of SoA is much better than that of AoS. We confirm this observation for our particle problem. Using a benchmark we show that on modern Intel Xeon processors the SoA implementation is typically several times faster than the AoS one. On Intel's MIC co-processors the performance gap even attains a factor of ten. The same is true for GPU computing, using both computational and multi-purpose GPUs. Combining performance and handiness, we present the library SoAx, which has optimal performance (on CPUs, MICs, and GPUs) while providing the same handiness as AoS. For this, SoAx uses modern C++ design techniques such as template metaprogramming, which allows code to be generated automatically for user-defined heterogeneous data structures.
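
The layout difference the paper benchmarks can be seen even from NumPy, where a structured array is an AoS (fields interleaved record by record) while separate per-field arrays form an SoA (each field contiguous). A small sketch with an invented three-field particle record:

```python
import numpy as np

n = 1000
# Array of Structures: one 24-byte record per particle; fields are interleaved.
aos = np.zeros(n, dtype=[("id", np.int64), ("x", np.float64), ("y", np.float64)])
# Structure of Arrays: one contiguous array per field.
soa = {"id": np.zeros(n, np.int64), "x": np.zeros(n), "y": np.zeros(n)}

# Reading a single field of the AoS strides over the full 24-byte records...
aos_stride = aos["x"].strides[0]     # 24 bytes between consecutive x values
# ...while the SoA field is unit-stride, which vectorizes and caches well.
soa_stride = soa["x"].strides[0]     # 8 bytes between consecutive x values
```

The strided AoS access is what costs memory bandwidth in field-at-a-time loops; SoA keeps each field dense, which is the effect the paper measures.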

  19. Randomly dispersed particle fuel model in the PSG Monte Carlo neutron transport code

    International Nuclear Information System (INIS)

    Leppaenen, J.

    2007-01-01

    High-temperature gas-cooled reactor fuels are composed of thousands of microscopic fuel particles, randomly dispersed in a graphite matrix. The modelling of such geometry is complicated, especially using continuous-energy Monte Carlo codes, which are unable to apply any deterministic corrections in the calculation. This paper presents the geometry routine developed for modelling randomly dispersed particle fuels using the PSG Monte Carlo reactor physics code. The model is based on the delta-tracking method, and it takes into account the spatial self-shielding effects and the random dispersion of the fuel particles. The calculation routine is validated by comparing the results to reference MCNP4C calculations using uranium and plutonium based fuels. (authors)
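
Delta-tracking (Woodcock tracking), on which the PSG routine is based, samples flight distances from a single majorant cross section and rejects "virtual" collisions, so particles never need to be stopped at the boundaries of the randomly dispersed particles. A one-dimensional sketch (not PSG code; the cross sections and sample size are invented):

```python
import math
import random

def delta_track(sigma_of_x, sigma_maj, x0, rng):
    """Woodcock delta-tracking: fly to the next *real* collision site.

    Flight distances are sampled from the majorant cross section sigma_maj;
    a tentative collision at x is accepted with probability sigma(x)/sigma_maj,
    so no surface tracking is needed inside a heterogeneous medium.
    """
    x = x0
    while True:
        x += -math.log(1.0 - rng.random()) / sigma_maj   # majorant flight
        if rng.random() < sigma_of_x(x) / sigma_maj:
            return x                                     # real collision

rng = random.Random(7)
# Sanity check in a uniform medium: the mean free path should be 1/sigma
# even though distances were sampled from a larger majorant.
sigma = 2.0
paths = [delta_track(lambda x: sigma, sigma_maj=5.0, x0=0.0, rng=rng)
         for _ in range(50_000)]
mean_path = sum(paths) / len(paths)
```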

  20. Parallel processing of Monte Carlo code MCNP for particle transport problem

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Kawasaki, Takuji

    1996-06-01

    It is possible to vectorize or parallelize Monte Carlo (MC) codes for photon and neutron transport problems, making use of the independence of the calculation for each particle. The applicability of existing MC codes to parallel processing is discussed. As for parallel computers, we have used both a vector-parallel processor and a scalar-parallel processor in the performance evaluation. We have made (i) vector-parallel processing of the MCNP code on the Monte Carlo machine Monte-4 with four vector processors, and (ii) parallel processing on the Paragon XP/S with 256 processors. In this report we describe the methodology and results for parallel processing on two types of parallel or distributed-memory computers. In addition, we mention the evaluation of parallel programming environments for the parallel computers used in the present work, as a part of the work developing the STA (Seamless Thinking Aid) Basic Software. (author)

  1. R-Matrix Codes for Charged-particle Induced Reactions in the Resolved Resonance Region

    Energy Technology Data Exchange (ETDEWEB)

    Leeb, Helmut [Technical Univ. of Wien, Vienna (Austria); Dimitriou, Paraskevi [Intl Atomic Energy Agency (IAEA), Vienna (Austria); Thompson, Ian J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-01-01

    A Consultant’s Meeting was held at the IAEA Headquarters, from 5 to 7 December 2016, to discuss the status of R-matrix codes currently used in calculations of charged-particle induced reaction cross sections at low energies. The meeting was a follow-up to the R-matrix Codes meeting held in December 2015, and served the purpose of monitoring progress in: the development of a translation code to enable exchange of input/output parameters between the various codes in different formats, fitting procedures and treatment of uncertainties, the evaluation methodology, and finally dissemination. The details of the presentations and technical discussions, as well as additional actions that were proposed to achieve all the goals of the meeting are summarized in this report.

  2. Damped time advance methods for particles and EM fields

    International Nuclear Information System (INIS)

    Friedman, A.; Ambrosiano, J.J.; Boyd, J.K.; Brandon, S.T.; Nielsen, D.E. Jr.; Rambo, P.W.

    1990-01-01

    Recent developments in the application of damped time advance methods to plasma simulations include the synthesis of implicit and explicit ''adjustably damped'' second-order accurate methods for particle motion and electromagnetic field propagation. This paper discusses these methods
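
The flavor of an adjustably damped implicit time advance can be conveyed with a one-parameter push for a harmonic oscillator: the blend parameter alpha recovers the energy-conserving implicit midpoint rule at alpha = 0.5 and introduces numerical damping for alpha > 0.5. This is a generic illustration, not the authors' scheme; the frequency, step size, and parameter values are invented:

```python
import numpy as np

def damped_push(x, v, omega, dt, alpha, nsteps):
    """Implicit one-parameter push for dv/dt = -omega^2 x, dx/dt = v.

    The implicit blend evaluates the force at (1-alpha)*old + alpha*new states;
    the 2x2 linear system is solved in closed form each step.
    """
    for _ in range(nsteps):
        w2 = (dt * omega) ** 2
        v_new = (v * (1.0 - w2 * alpha * (1.0 - alpha)) - dt * omega**2 * x) \
                / (1.0 + w2 * alpha**2)
        x = x + dt * ((1.0 - alpha) * v + alpha * v_new)
        v = v_new
    return x, v

def energy(x, v, omega=1.0):
    return omega**2 * x**2 + v**2

# alpha = 0.5: energy-conserving; alpha = 0.8: numerically damped.
x_c, v_c = damped_push(1.0, 0.0, omega=1.0, dt=0.1, alpha=0.5, nsteps=200)
x_d, v_d = damped_push(1.0, 0.0, omega=1.0, dt=0.1, alpha=0.8, nsteps=200)
```

Damping of this kind is useful for suppressing unresolved high-frequency modes while leaving well-resolved motion nearly untouched.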

  3. Load balancing in highly parallel processing of Monte Carlo code for particle transport

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Takemiya, Hiroshi; Kawasaki, Takuji

    1998-01-01

    In parallel processing of Monte Carlo (MC) codes for neutron, photon and electron transport problems, particle histories are assigned to processors by exploiting the independence of the calculation for each particle. Although the main part of an MC code is easily parallelized in this way, optimizing the code for load balance is necessary, and in practice difficult, if a high speedup ratio is to be attained in highly parallel processing. In fact, on the test bed used for the performance evaluation, the speedup ratio with 128 processors remained at only about one hundred. Through parallel processing of the MCNP code, which is widely used in the nuclear field, it is shown that static load balancing cannot easily deliver high performance, especially for neutron transport problems, and that a load balancing method which dynamically adjusts the number of particles assigned to each processor, minimizing the sum of the computational and communication costs, overcomes this difficulty, reducing the execution time by nearly fifteen percent. (author)
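
    The dynamic load balancing idea, reassigning particle counts according to what each processor actually achieved in the previous cycle, can be illustrated with a simple proportional scheme (a hypothetical sketch, not the authors' cost-minimizing algorithm):

```python
def rebalance(total_particles, rates):
    """Allocate particle histories in proportion to each processor's
    measured throughput (histories/second from the previous cycle)."""
    total_rate = sum(rates)
    alloc = [int(total_particles * r / total_rate) for r in rates]
    alloc[0] += total_particles - sum(alloc)   # hand rounding remainder to one node
    return alloc

rates = [100.0, 100.0, 50.0, 150.0]            # one slow and one fast processor
alloc = rebalance(10_000, rates)
finish = [n / r for n, r in zip(alloc, rates)] # predicted cycle times, now ~equal
```

    A processor stuck in a computation-heavy region reports a low rate and receives fewer particles next cycle, so all processors finish at about the same time instead of waiting on the slowest one.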

  4. Development of CAD-Based Geometry Processing Module for a Monte Carlo Particle Transport Analysis Code

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Kwark, Min Su; Shim, Hyung Jin

    2012-01-01

    The Monte Carlo (MC) particle transport analysis of a complex system such as a research reactor, accelerator, or fusion facility may require accurate modeling of complicated geometry. Modeling such geometry manually, by defining the geometrical objects through the text interface of an MC code, is tedious, lengthy and error-prone. This problem can be overcome by taking advantage of the modeling capability of computer-aided design (CAD) systems. There have been two kinds of approaches to developing MC code systems that utilize CAD data: external format conversion and CAD-kernel-embedded MC simulation. The first approach includes several interfacing programs, such as McCAD, MCAM and GEOMIT, which were developed to automatically convert CAD data into MCNP geometry input data. This approach makes the most of existing MC codes without any modification, but it implies latent data inconsistency due to the difference between the geometry modeling systems. In the second approach, an MC code uses the CAD data for direct particle tracking, or for conversion to an internal data structure of constructive solid geometry (CSG) and/or boundary representation (B-rep) modeling, with the help of a CAD kernel. MCNP-BRL and OiNC have demonstrated their capabilities for CAD-based MC simulation. Recently we have developed a CAD-based geometry processing module for MC particle simulation using the OpenCASCADE (OCC) library. In the developed module, CAD data can be used for particle tracking through primitive CAD surfaces (hereafter, CAD-based tracking) or for internal conversion to a CSG data structure. In this paper, the performances of the text-based model, CAD-based tracking, and internal CSG conversion are compared using an in-house MC code, McSIM, equipped with the developed CAD-based geometry processing module

  5. The local skin dose conversion coefficients of electrons, protons and alpha particles calculated using the Geant4 code.

    Science.gov (United States)

    Zhang, Bintuan; Dang, Bingrong; Wang, Zhuanzi; Wei, Wei; Li, Wenjian

    2013-10-01

    The skin tissue-equivalent slab described in International Commission on Radiological Protection (ICRP) Publication 116 for calculating localised skin dose conversion coefficients (LSDCCs) was implemented in the Monte Carlo transport code Geant4. The Geant4 code was then utilised to compute LSDCCs for a circular parallel beam of monoenergetic electrons, protons and alpha particles. The resulting LSDCCs for electrons and alpha particles are found to be in good agreement with the ICRP 116 results obtained with the MCNPX code. The present work thus validates the LSDCC values for both electrons and alpha particles using the Geant4 code.

  6. Solution of charged particle transport equation by Monte-Carlo method in the BRANDZ code system

    International Nuclear Information System (INIS)

    Artamonov, S.N.; Androsenko, P.A.; Androsenko, A.A.

    1992-01-01

    Consideration is given to the use of the Monte Carlo method for the solution of the charged-particle transport equation, and to its implementation in the BRANDZ code system, under the conditions of real 3D geometry and with all available data on radiation-matter interaction in multicomponent and multilayer targets. For the implantation problem, comparisons of BRANDZ results with experiments and with calculations by other codes in complex systems are presented. Results of the direct simulation of nuclear pumping of laser-active media by a proton beam are also included. 4 refs.; 7 figs

  7. Two- and three-dimensional magnetoinductive particle codes with guiding center electron motion

    International Nuclear Information System (INIS)

    Geary, J.L.; Tajima, T.; Leboeuf, J.N.; Zaidman, E.G.; Han, J.H.

    1986-07-01

    A magnetoinductive (Darwin) particle simulation model developed for examining low-frequency plasma behavior with large time steps is presented. Electron motion perpendicular to the magnetic field is treated as massless, keeping only the guiding center motion; electron motion parallel to the magnetic field retains full inertial effects, as does the ion motion. This model has been implemented in two and three dimensions. Computational tests of the equilibrium properties of the code are compared with linear theory and the fluctuation-dissipation theorem. The code has been applied to the problems of Alfven wave resonance heating and twist-kink modes
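
    The guiding-center approximation used for the perpendicular electron motion replaces gyration by drifts, the dominant one being the E x B drift. A minimal illustration (not from the paper):

```python
import numpy as np

def exb_drift(E, B):
    """Guiding-centre E x B drift velocity, v = (E x B) / |B|^2."""
    B = np.asarray(B, dtype=float)
    return np.cross(E, B) / np.dot(B, B)

v = exb_drift([1.0, 0.0, 0.0], [0.0, 0.0, 2.0])
# (1,0,0) x (0,0,2) = (0,-2,0); dividing by |B|^2 = 4 gives (0, -0.5, 0)
```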

  8. Syrlic: a Lagrangian code to handle industrial problems involving particles and droplets

    International Nuclear Information System (INIS)

    Peniguel, C.

    1997-01-01

    Numerous industrial applications require solving for droplet or solid-particle trajectories and their effects on the flow (fuel injection in combustion engines, agricultural spraying, spray drying, spray cooling, spray painting, particle separators, dispersion of pollutants, etc.). SYRLIC is being developed to handle the dispersed phase, while the continuous phase is handled by classical Eulerian codes such as N3S-EF, N3S-NATUR and ESTET. The trajectory of each droplet is calculated on unstructured or structured grids, according to the Eulerian code with which SYRLIC is coupled. The forces applied to each particle are recalculated along each path. The Lagrangian approach treats the convection and the source terms exactly; it is particularly well adapted to problems involving a wide range of particle characteristics (diameter, mass, etc.). In the near future, wall interaction, heat transfer, evaporation and more complex physics will be included, and turbulent effects will be accounted for by a Langevin equation. The illustration shows the trajectories followed by water droplets (diameters from 1 mm to 4 mm) in a cooling tower. The droplets fall under gravity but are deflected towards the center of the tower by a lateral wind; clearly, particles are affected differently according to their diameter. The Eulerian flow field used to compute the forces was generated by N3S-AERO on an unstructured mesh

  9. Particle-in-Cell Code BEAMPATH for Beam Dynamics Simulations in Linear Accelerators and Beamlines

    International Nuclear Information System (INIS)

    Batygin, Y.

    2004-01-01

    A code library, BEAMPATH, for 2-dimensional and 3-dimensional space-charge-dominated beam dynamics studies in linear particle accelerators and beam transport lines has been developed. The program is used for particle-in-cell simulation of axially symmetric, quadrupole-symmetric and z-uniform beams in a channel containing RF gaps, radio-frequency quadrupoles, multipole lenses, solenoids and bending magnets. The programming method includes hierarchical program design using program-independent modules, and a flexible combination of modules to provide the most effective version of the structure for every specific case of simulation. Numerical techniques as well as the results of beam dynamics studies are presented
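
    One core ingredient of any particle-in-cell code such as BEAMPATH is the deposition of particle charge onto the grid; a common choice is linear "cloud-in-cell" weighting, sketched here in 1-D (an illustrative sketch, not BEAMPATH source):

```python
import numpy as np

def deposit_cic(x, q, n_cells, dx):
    """Linear (cloud-in-cell) charge deposition on a periodic 1-D grid."""
    rho = np.zeros(n_cells)
    s = x / dx
    i = np.floor(s).astype(int) % n_cells
    w = s - np.floor(s)                        # weight given to the right-hand node
    np.add.at(rho, i, q * (1.0 - w))
    np.add.at(rho, (i + 1) % n_cells, q * w)
    return rho / dx                            # charge density

x = np.array([0.25, 1.5, 3.9])                 # particle positions
rho = deposit_cic(x, q=1.0, n_cells=4, dx=1.0)
# linear weighting conserves charge exactly: sum(rho)*dx equals the total charge
```

    The same linear weights are used in reverse to gather the self-consistent field from the grid back to each particle position.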

  10. Particle-in-Cell Code BEAMPATH for Beam Dynamics Simulations in Linear Accelerators and Beamlines

    Energy Technology Data Exchange (ETDEWEB)

    Batygin, Y.

    2004-10-28

    A code library, BEAMPATH, for 2-dimensional and 3-dimensional space-charge-dominated beam dynamics studies in linear particle accelerators and beam transport lines has been developed. The program is used for particle-in-cell simulation of axially symmetric, quadrupole-symmetric and z-uniform beams in a channel containing RF gaps, radio-frequency quadrupoles, multipole lenses, solenoids and bending magnets. The programming method includes hierarchical program design using program-independent modules, and a flexible combination of modules to provide the most effective version of the structure for every specific case of simulation. Numerical techniques as well as the results of beam dynamics studies are presented.

  11. PEREGRINE: An all-particle Monte Carlo code for radiation therapy

    International Nuclear Information System (INIS)

    Hartmann Siantar, C.L.; Chandler, W.P.; Rathkopf, J.A.; Svatos, M.M.; White, R.M.

    1994-09-01

    The goal of radiation therapy is to deliver a lethal dose to the tumor while minimizing the dose to normal tissues. To carry out this task, it is critical to calculate correctly the distribution of the dose delivered. Monte Carlo transport methods have the potential to provide more accurate predictions of dose distributions than currently used methods. PEREGRINE is a new Monte Carlo transport code developed at Lawrence Livermore National Laboratory for the specific purpose of modeling the effects of radiation therapy. PEREGRINE transports neutrons, photons, electrons, positrons, and heavy charged particles, including protons, deuterons, tritons, helium-3, and alpha particles. This paper describes the PEREGRINE transport code and some preliminary results for clinically relevant materials and radiation sources

  12. CASINO, a code for simulation of charged particles in an axisymmetric Tokamak

    International Nuclear Information System (INIS)

    Dillner, Oe.

    1992-01-01

    The present report comprises documentation of CASINO, a simulation code developed for the study of high-energy charged particles in an axisymmetric Tokamak. The background to the need for such a numerical tool is presented. The description of the numerical model used for the orbit integration covers the method using constants of motion, the Lao-Hirshman geometry for the flux surfaces, and a method for reducing the necessary number of particles. A brief outline of the calculational sequence is given as a flow chart. The essential routines and functions, as well as the common blocks, are briefly described, and the input and output routines are shown. Finally, the documentation is completed by a short discussion of possible extensions of the code and a test case. (au)

  13. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization.

    Science.gov (United States)

    Yang, Xinyu; Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is the task of identifying the most likely author of a given sample among a set of known candidate authors. It can be applied not only to discover the original author of plain text, such as novels, blogs, emails and posts, but also to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to the resolution of authorship disputes and software plagiarism detection. This paper proposes a new method to identify the programmer of Java source code samples with higher accuracy. To this end, it introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. The method begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. These metrics are then input to a neural network for supervised learning, the weights of which are produced by the hybrid PSO-BP algorithm. The effectiveness of the proposed method is evaluated on a collected dataset of 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy, and a comparison with previous work on authorship attribution of Java source code illustrates that the proposed method outperforms the others overall, with an acceptable overhead.
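
    The PSO half of the hybrid can be sketched independently of the network: a swarm of candidate weight vectors moves under inertia plus attraction towards each particle's personal best and the global best. The sketch below is illustrative only (the paper's PSO-BP hybrid and 19-dimensional feature set are not reproduced) and minimizes a toy objective standing in for the network's training error:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO: inertia w, cognitive weight c1, social weight c2."""
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # candidate weight vectors
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, f(gbest)

target = np.array([0.5, -0.3, 0.1])                  # toy "optimal weights"
best, best_f = pso_minimize(lambda z: float(np.sum((z - target) ** 2)), dim=3)
```

    In the hybrid scheme the swarm's best position would seed (or be refined by) gradient-based back-propagation, combining PSO's global search with BP's fast local convergence.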

  14. The CNCSN: one, two- and three-dimensional coupled neutral and charged particle discrete ordinates code package

    International Nuclear Information System (INIS)

    Voloschenko, A.M.; Gukov, S.V.; Kryuchkov, V.P.; Dubinin, A.A.; Sumaneev, O.V.

    2005-01-01

    The CNCSN package is composed of the following codes: -) KATRIN-2.0: a three-dimensional neutral and charged particle transport code; -) KASKAD-S-2.5: a two-dimensional neutral and charged particle transport code; -) ROZ-6.6: a one-dimensional neutral and charged particle transport code; -) ARVES-2.5: a preprocessor for the working macroscopic cross-section format FMAC-M for transport calculations; -) MIXERM: a utility code for preparing mixtures on the basis of multigroup cross-section libraries in ANISN format; -) CEPXS-BFP: a version of the Sandia National Lab. multigroup coupled electron-photon cross-section generating code CEPXS, adapted for solving charged-particle transport in the Boltzmann-Fokker-Planck formulation with the discrete ordinates method; -) SADCO-2.4: the Institute for High-Energy Physics modular system for generating coupled nuclear data libraries for high-energy particle transport calculations by the multigroup method; -) KATRIF: the post-processor for the KATRIN code; -) KASF: the post-processor for the KASKAD-S code; and -) ROZ6F: the post-processor for the ROZ-6 code. The coding language is Fortran-90

  15. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    Science.gov (United States)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we explore the feasibility of porting a particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first such prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  16. The three-dimensional, discrete ordinates neutral particle transport code TORT: An overview

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1996-01-01

    The centerpiece of the Discrete Ordinates Oak Ridge System (DOORS), the three-dimensional neutral particle transport code TORT, is reviewed. Its most prominent features pertaining to large applications, such as adjustable problem parameters, memory management, and coarse-mesh methods, are described. Advanced, state-of-the-art capabilities, including acceleration and multiprocessing, are summarized. Future enhancements of the existing graphics and visualization tools are briefly presented

  17. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI - PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON

    International Nuclear Information System (INIS)

    BEEBE - WANG, J.; LUCCIO, A.U.; D IMPERIO, N.; MACHIDA, S.

    2002-01-01

    Space charge in high-intensity beams is an important issue in accelerator physics. Due to the complexity of the problem, the most effective way of investigating its effects is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated into various 2D and 3D multi-particle tracking codes, and it has become necessary to benchmark these methods against each other and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed, and the simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed

  18. The use of electromagnetic particle-in-cell codes in accelerator applications

    International Nuclear Information System (INIS)

    Eppley, K.

    1988-12-01

    The techniques developed for the numerical simulation of plasmas have numerous applications relevant to accelerators. The operation of many accelerator components involves transients, interactions between beams and rf fields, and internal plasma oscillations. These effects produce non-linear behavior which can be represented accurately by particle-in-cell (PIC) simulations. We give a very brief overview of the algorithms used in PIC codes and examine the range of parameters over which they are useful. We discuss the factors which determine whether a two- or three-dimensional simulation is most appropriate. PIC codes have been applied to a wide variety of problems spanning many of the systems in a linear accelerator. We present a number of practical examples of the application of these codes to areas such as guns, bunchers, rf sources, beam transport, emittance growth and final focus. 8 refs., 8 figs., 2 tabs

  19. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI - PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON.

    Energy Technology Data Exchange (ETDEWEB)

    BEEBE - WANG,J.; LUCCIO,A.U.; D IMPERIO,N.; MACHIDA,S.

    2002-06-03

    Space charge in high-intensity beams is an important issue in accelerator physics. Due to the complexity of the problem, the most effective way of investigating its effects is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated into various 2D and 3D multi-particle tracking codes, and it has become necessary to benchmark these methods against each other and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed, and the simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.

  20. ALFITeX. A new code for the deconvolution of complex alpha-particle spectra

    International Nuclear Information System (INIS)

    Caro Marroyo, B.; Martin Sanchez, A.; Jurado Vargas, M.

    2013-01-01

    A new code for the deconvolution of complex alpha-particle spectra has been developed. The ALFITeX code is written in Visual Basic for Microsoft Office Excel 2010 spreadsheets, and incorporates several features aimed at making it a fast, robust and useful tool with a user-friendly interface. The deconvolution procedure is based on the Levenberg-Marquardt algorithm, the curve fitted to the experimental data being the mathematical function formed by the convolution of a Gaussian with two left-handed exponentials in the low-energy-tail region. The code also includes the capability of fitting a possible constant background contribution. The application of the singular value decomposition method for matrix inversion permits the fitting of any kind of alpha-particle spectrum, even those presenting singularities or an ill-conditioned curvature matrix. ALFITeX has been checked by applying it to the deconvolution and the calculation of the alpha-particle emission probabilities of ²³⁹Pu, ²⁴¹Am and ²³⁵U. (author)
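
    The peak shape described, a Gaussian convolved with an exponential tail on the low-energy side, has a closed form (the exponentially modified Gaussian), which makes Levenberg-Marquardt fitting straightforward. The sketch below uses SciPy's LM implementation and a single tail rather than ALFITeX's two, so it illustrates the fitting idea only, not the code itself:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def alpha_peak(x, area, mu, sigma, tau):
    """Gaussian (mu, sigma) convolved with a single exponential tail of
    length tau on the low-energy side; integrates to `area`."""
    z = (x - mu) / (np.sqrt(2.0) * sigma) + sigma / (np.sqrt(2.0) * tau)
    return (area / (2.0 * tau)
            * np.exp((x - mu) / tau + sigma**2 / (2.0 * tau**2)) * erfc(z))

x = np.linspace(5050.0, 5200.0, 300)           # synthetic energy axis, keV
true = (1000.0, 5150.0, 3.0, 8.0)              # area, centroid, sigma, tail length
y = alpha_peak(x, *true)
popt, _ = curve_fit(alpha_peak, x, y, p0=(800.0, 5145.0, 2.0, 5.0), method="lm")
# on this noiseless synthetic peak the fit recovers the true parameters
```

    A real spectrum would add a second tail term, a constant background parameter, and counting-statistics weights; the LM machinery stays the same.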

  1. PHITS: Particle and heavy ion transport code system, version 2.23

    International Nuclear Information System (INIS)

    Niita, Koji; Matsuda, Norihiro; Iwamoto, Yosuke; Sato, Tatsuhiko; Nakashima, Hiroshi; Sakamoto, Yukio; Iwase, Hiroshi; Sihver, Lembit

    2010-10-01

    A Particle and Heavy Ion Transport code System, PHITS, has been developed through the collaboration of JAEA (Japan Atomic Energy Agency), RIST (Research Organization for Information Science and Technology) and KEK (High Energy Accelerator Research Organization). PHITS can deal with the transport of all particles (nucleons, nuclei, mesons, photons, and electrons) over wide energy ranges, using several nuclear reaction models and nuclear data libraries. The geometrical configuration of the simulation can be set with GG (General Geometry) or CG (Combinatorial Geometry). Various quantities such as heat deposition, track length and production yields can be deduced from the simulation, using implemented estimator functions called 'tallies'. The code also has a function to draw 2D and 3D figures of the calculated results, as well as of the setup geometries, using the code ANGEL. Because of these features, PHITS has been widely used for various purposes such as the design of accelerator shielding, radiation therapy and space exploration. Recently, an event generator was introduced for the particle transport part in the low-energy region, and PHITS was completely rewritten to incorporate this event generator for neutron-induced reactions in the energy region below 20 MeV. Furthermore, several new tallies were incorporated for the estimation of relative biological effects. This document provides a manual of the new PHITS. (author)

  2. VINE-A NUMERICAL CODE FOR SIMULATING ASTROPHYSICAL SYSTEMS USING PARTICLES. II. IMPLEMENTATION AND PERFORMANCE CHARACTERISTICS

    International Nuclear Information System (INIS)

    Nelson, Andrew F.; Wetzstein, M.; Naab, T.

    2009-01-01

    We continue our presentation of VINE. In this paper, we begin with a description of the relevant architectural properties of the serial and shared-memory parallel computers on which VINE is intended to run, and describe their influence on the design of the code itself. We continue with a detailed description of a number of optimizations made to the layout of the particle data in memory, and to our implementation of a binary tree used to access that data for gravitational force calculations and for searches for smoothed particle hydrodynamics (SPH) neighbor particles. We describe the modifications necessary to obtain forces efficiently from special-purpose 'GRAPE' hardware, the interfaces required to allow transparent substitution of those forces for the ones obtained from the tree, and the modifications necessary to use tree and GRAPE together as a fused GRAPE/tree combination. We conclude with an extensive series of performance tests, which demonstrate that the code can be run efficiently and without modification in serial on small workstations, or in parallel using OpenMP compiler directives on large-scale shared-memory parallel machines. We analyze the effects of the code optimizations and estimate that they improve overall performance by more than an order of magnitude over that obtained by many other tree codes. Scaled parallel performance of the gravity and SPH calculations, which together are the most costly components of most simulations, is nearly linear up to at least 120 processors on moderately sized test problems using the Origin 3000 architecture, and up to the maximum machine sizes available to us on several other architectures. At similar accuracy, performance of VINE in GRAPE-tree mode is approximately a factor of 2 slower than in host-only mode. Further optimization of the GRAPE/host communications could improve the speed by as much as a factor of 3, but has not yet been implemented in VINE

  3. Development and Verification of Smoothed Particle Hydrodynamics Code for Analysis of Tsunami near NPP

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Young Beom; Kim, Eung Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2014-10-15

    The analysis becomes more complicated when the shape and phase of the ground below the seawater are considered, so different approaches are required to analyze the behavior of a tsunami precisely. This paper introduces ongoing code-development activities at SNU based on an unconventional mesh-free fluid analysis method called Smoothed Particle Hydrodynamics (SPH), together with verification work using practice simulations. The newly developed Lagrangian mesh-free SPH code so far covers the equations of motion and the heat conduction equation, and verification of each model is complete. In addition, parallel computation using GPUs is now possible, and a GUI is provided; by changing the input geometry or input values, users can run simulations for various conditions and geometries. The SPH method has large advantages and potential for modeling free surfaces, highly deformable geometries and multi-phase problems that traditional grid-based codes have difficulty analyzing. Therefore, by incorporating more complex physical models such as turbulent flow, phase change, two-phase flow, and even solid mechanics, the range of application of the current SPH code is expected to be much extended, including to molten fuel behavior in severe accidents.
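
    The basic SPH operation underlying such a code is a kernel-weighted sum over neighbouring particles; density, for example, follows directly from particle masses and a smoothing kernel. A 1-D sketch with the standard cubic spline kernel (illustrative only, not the SNU code):

```python
import numpy as np

def w_cubic_1d(r, h):
    """Cubic spline (M4) smoothing kernel in 1-D, with support radius 2h."""
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return (2.0 / (3.0 * h)) * w               # 1-D normalisation factor

def density(x, m, h):
    """SPH density estimate: rho_i = sum_j m * W(x_i - x_j, h)."""
    dx = x[:, None] - x[None, :]               # all pairwise separations
    return (m * w_cubic_1d(dx, h)).sum(axis=1)

x = np.arange(50) * 0.1                        # uniformly spaced particle line
rho = density(x, m=0.1, h=0.12)
# away from the ends, rho should sit close to m/spacing = 1.0
```

    Pressure gradients, viscosity and heat conduction are built from the same kind of kernel sums, which is why the method handles free surfaces and large deformations without a mesh.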

  4. Alignment of cryo-EM movies of individual particles by optimization of image translations.

    Science.gov (United States)

    Rubinstein, John L; Brubaker, Marcus A

    2015-11-01

    Direct detector device (DDD) cameras have revolutionized single particle electron cryomicroscopy (cryo-EM). In addition to an improved camera detective quantum efficiency, acquisition of DDD movies allows for correction of movement of the specimen, due to both instabilities in the microscope specimen stage and electron beam-induced movement. Unlike specimen stage drift, beam-induced movement is not always homogeneous within an image. Local correlation in the trajectories of nearby particles suggests that beam-induced motion is due to deformation of the ice layer. Algorithms have already been described that can correct movement for large regions of frames and for >1 MDa protein particles. Another algorithm allows individual images to be aligned without frame averaging or linear trajectories. The algorithm maximizes the overall correlation of the shifted frames with the sum of the shifted frames. The optimum in this single objective function is found efficiently by making use of analytically calculated derivatives of the function. To smooth estimates of particle trajectories, rapid changes in particle positions between frames are penalized in the objective function and weighted averaging of nearby trajectories ensures local correlation in trajectories. This individual particle motion correction, in combination with weighting of Fourier components to account for increasing radiation damage in later frames, can be used to improve 3-D maps from single particle cryo-EM. Copyright © 2015 Elsevier Inc. All rights reserved.
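
    The core quantity in such an objective function, the correlation of a shifted frame with a reference built from the frames, can be evaluated efficiently with FFTs. The sketch below recovers integer frame translations this way (a simplified illustration; the published algorithm optimizes all shifts jointly and adds trajectory-smoothness penalties):

```python
import numpy as np

rng = np.random.default_rng(1)

def shift_between(ref, frame):
    """Integer (dy, dx) such that frame == np.roll(ref, (dy, dx), axis=(0, 1)),
    found at the peak of the FFT-based cross-correlation."""
    c = np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    wrap = lambda d, n: int(d) - n if d > n // 2 else int(d)   # map to signed shift
    return wrap(dy, c.shape[0]), wrap(dx, c.shape[1])

base = rng.standard_normal((64, 64))           # stand-in for a particle image
frame = np.roll(base, (3, -5), axis=(0, 1))    # a "beam-induced" translation
dy, dx = shift_between(base, frame)
# the correlation peak sits exactly at the applied shift: (dy, dx) == (3, -5)
```

    In practice the frames are noisy and the shifts fractional, which is why the paper optimizes a single smooth objective with analytic derivatives rather than locating discrete correlation peaks frame by frame.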

  5. Modelling of a general purpose irradiation chamber using a Monte Carlo particle transport code

    International Nuclear Information System (INIS)

    Dhiyauddin Ahmad Fauzi; Sheik, F.O.A.; Nurul Fadzlin Hasbullah

    2013-01-01

    Full-text: The aim of this research is to simulate the effective use of a general-purpose irradiation chamber for containing pure neutron particles obtained from a research reactor. The secondary neutron and gamma doses discharged through the chamber layers are used as the basis for estimating safe dimensions for the chamber. The chamber, made up of layers of lead (Pb) shielding, polyethylene (PE) moderator and commercial-grade aluminium (Al) cladding, is proposed for exposing samples to pure neutron particles in a nuclear reactor environment. The estimation was accomplished through simulation with the general Monte Carlo N-Particle transport code, using the Los Alamos MCNPX software. Simulations were performed on a model of the chamber subjected to high neutron flux radiation and its gamma radiation product. The neutron source model is based on the neutron source of the PUSPATI TRIGA MARK II research reactor, which has a maximum flux of 1 x 10¹² neutrons/cm²s. The expected outcomes of this research are zero gamma dose in the core of the chamber and a neutron dose rate of less than 10 μSv/day discharged from the chamber system. (author)

  6. Mathematical model and computer code for coated particles performance at normal operating conditions

    International Nuclear Information System (INIS)

    Golubev, I.; Kadarmetov, I.; Makarov, V.

    2002-01-01

    Computer modeling of the thermo-mechanical behavior of coated particles during operation, at both normal and off-normal conditions, plays a very significant role, particularly at the development stage of new reactors. In Russia, extensive experience has been accumulated in the fabrication and reactor testing of coated particles (CPs) and fuel elements with UO₂ kernels. However, this experience cannot be used in full for the development of the new reactor installation GT-MHR, because of the very deep burn-up of its plutonium-oxide-based fuel (up to 70% fima). Mathematical modeling of CP thermo-mechanical behavior and failure prediction therefore becomes particularly important. The authors clearly understand that the serviceability of fuel at high burn-ups is defined not only by thermo-mechanics, but also by structural changes in the coating materials, the thermodynamics of chemical processes, the 'amoeba effect', CO formation, etc. This report presents the first steps in the development of an integrated code for the numerical modeling of coated particle behavior, and some calculated results concerning the influence of various design parameters on the endurance of fuel coated particles under GT-MHR normal operating conditions. A failure model is developed to predict the failure fraction of TRISO-coated particles. In this model it is assumed that CP failure depends not only on the probability of SiC-layer fracture but also on damage to the PyC layers. The coated particle is considered as a single structure. (author)

  7. A 3d particle simulation code for heavy ion fusion accelerator studies

    International Nuclear Information System (INIS)

    Friedman, A.; Bangerter, R.O.; Callahan, D.A.; Grote, D.P.; Langdon, A.B.; Haber, I.

    1990-01-01

    We describe WARP, a new particle-in-cell code being developed and optimized for ion beam studies in true geometry. We seek to model transport around bends, axial compression with strong focusing, multiple-beamlet interaction, and other inherently 3d processes that affect emittance growth. The constraints imposed by memory and running time are severe. Thus, we employ only two 3d field arrays (ρ and φ), and difference φ directly at each particle to get E, rather than interpolating E from three meshes; use of a single 3d array is feasible. A new method for PIC simulation of bent beams follows the beam particles in a family of rotated laboratory frames, thus ''straightening'' the bends. We are also incorporating an envelope calculation, an (r, z) model, and a 1d (axial) model within WARP. The BASIS development and run-time system is used, providing a powerful interactive environment in which the user has access to all variables in the code database. 10 refs., 3 figs

  8. SimTrack: A compact C++ code for particle orbit and spin tracking in accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Yun

    2015-11-21

    SimTrack is a compact C++ code for 6-d symplectic element-by-element particle tracking in accelerators, originally designed for head-on beam–beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides 6-d symplectic orbit tracking with 4th-order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam–beam interaction. Since its inception in 2009, SimTrack has been used intensively for dynamic aperture calculations with beam–beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this paper, I present the code architecture, physics models, and selected examples of its applications to RHIC and to eRHIC, a future electron-ion collider design.
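Fourth-order symplectic integration of the kind mentioned above is commonly built by Yoshida composition of three second-order drift-kick substeps. The sketch below is a generic illustration for a separable Hamiltonian H = p²/2 + V(x), not SimTrack's actual integrator:

```python
import numpy as np

# Yoshida coefficients composing three leapfrog substeps into a
# 4th-order symplectic map
W1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
W0 = -(2.0 ** (1.0 / 3.0)) * W1
C = [W1 / 2, (W0 + W1) / 2, (W0 + W1) / 2, W1 / 2]  # drift fractions
D = [W1, W0, W1]                                     # kick fractions

def step4(x, p, h, force):
    """One 4th-order symplectic step: alternate drifts and kicks."""
    for c, d in zip(C, D + [0.0]):
        x = x + c * h * p            # drift
        if d:
            p = p + d * h * force(x) # kick
    return x, p

def ho_force(x):
    return -x  # harmonic oscillator, exact period 2*pi

# one full period should return (x, p) to its start with O(h^4) error
x, p = 1.0, 0.0
h = 2 * np.pi / 200
for _ in range(200):
    x, p = step4(x, p, h, ho_force)
```

Because the map is symplectic, the energy error stays bounded over long tracking runs instead of drifting, which is why element-by-element trackers favor such integrators.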

  9. Beam Dynamics in an Electron Lens with the Warp Particle-in-cell Code

    CERN Document Server

    Stancari, Giulio; Redaelli, Stefano

    2014-01-01

    Electron lenses are a mature technique for beam manipulation in colliders and storage rings. In an electron lens, a pulsed, magnetically confined electron beam with a given current-density profile interacts with the circulating beam to obtain the desired effect. Electron lenses were used in the Fermilab Tevatron collider for beam-beam compensation, for abort-gap clearing, and for halo scraping. They will be used in RHIC at BNL for head-on beam-beam compensation, and their application to the Large Hadron Collider for halo control is under development. At Fermilab, electron lenses will be implemented as lattice elements for nonlinear integrable optics. The design of electron lenses requires tools to calculate the kicks and wakefields experienced by the circulating beam. We use the Warp particle-in-cell code to study generation, transport, and evolution of the electron beam. For the first time, a fully 3-dimensional code is used for this purpose.

  10. Progress on the Development of the hPIC Particle-in-Cell Code

    Science.gov (United States)

    Dart, Cameron; Hayes, Alyssa; Khaziev, Rinat; Marcinko, Stephen; Curreli, Davide; Laboratory of Computational Plasma Physics Team

    2017-10-01

    Advancements were made in the development of the kinetic-kinetic electrostatic Particle-in-Cell code, hPIC, designed for large-scale simulation of the Plasma-Material Interface. hPIC achieved a weak scaling efficiency of 87% using the Algebraic Multigrid Solver BoomerAMG from the PETSc library on more than 64,000 cores of the Blue Waters supercomputer at the University of Illinois at Urbana-Champaign. The code successfully simulates the two-stream instability and, in kinetic-kinetic mode, a plasma volume over several square centimeters of surface extending out to the presheath. Results from a parametric study of the plasma sheath in strongly magnetized conditions will be presented, as well as a detailed analysis of the plasma sheath structure at grazing magnetic angles. The distribution function and its moments will be reported for plasma species in the simulation domain and at the material surface for plasma sheath simulations.
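Weak scaling efficiency, the figure of merit quoted above, compares runtime at a reference core count against runtime at N cores with the problem size grown proportionally (ideal runtime is flat). The timings below are made-up illustrative numbers, not Blue Waters measurements:

```python
def weak_scaling_efficiency(t_ref, t_n):
    """Weak scaling: work per core is fixed as cores grow, so the
    ideal runtime is constant; efficiency = t(reference) / t(N)."""
    return t_ref / t_n

# e.g. 100 s on the reference run vs. 115 s at the largest core count
eff = weak_scaling_efficiency(100.0, 115.0)  # ~0.87, i.e. ~87%
```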

  11. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    OpenAIRE

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergio; Cela, José M.; Castejón, Francisco

    2015-01-01

    In this work, we explore the feasibility of porting a Particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first such prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages. The research leading to these results has received funding from the European Community's Seventh...

  12. TIERCE: A code system for particles and radiation transport in thick targets

    Energy Technology Data Exchange (ETDEWEB)

    Bersillon, O.; Bauge, E.; Borne, F.; Clergeau, J.F.; Collin, M.; Cotten, D.; Delaroche, J.P.; Duarte, H.; Flament, J.L.; Girod, M.; Gosselin, G.; Granier, T.; Hilaire, S.; Morel, P.; Perrier, R.; Romain, P.; Roux, L. [CEA, Bruyeres-le-Chatel (France). Service de Physique Nucleaire

    1997-09-01

    Over the last few years, a major effort at Bruyeres-le-Chatel has been the development of the TIERCE code system for the transport of particles and radiation in complex geometries. Comparison of calculated results with experimental data, either microscopic (double differential spectra, residual nuclide yields...) or macroscopic (energy deposition, neutron leakage...), shows the need to improve the nuclear reaction models used. We present some new developments concerning data required for the evaporation model in the framework of a microscopic approach. 22 refs., 6 figs.

  13. Particle-in-cell/accelerator code for space-charge dominated beam simulation

    Energy Technology Data Exchange (ETDEWEB)

    2012-05-08

    Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL and the University of Maryland. It was originally designed and optimized for heavy ion fusion accelerator physics studies, but has received use in a broader range of applications, including, for example, laser wakefield accelerators, e-cloud studies in high energy accelerators, particle traps, and other areas. At present it incorporates 3-D, axisymmetric (r,z), planar (x-z) and transverse slice (x,y) descriptions, with both electrostatic and electromagnetic fields, and a beam envelope model. The code is built atop the Python interpreter language.

  14. Cold-Leg Small Break LOCA Analysis of APR1400 Plant Using a SPACE/sEM Code

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Sang Gyu; Lee, Suk Ho; Yu, Keuk Jong; Kim, Han Gon; Lee, Jae Yong [Central Research Institute, KHNP, Ltd., Daejeon (Korea, Republic of)

    2013-10-15

    The Small Break Loss-of-Coolant Accident (SBLOCA) evaluation methodology (EM) for APR1400, called sEM, is being developed using the SPACE code. SPACE/sEM is intended to establish a conservative evaluation methodology in accordance with Appendix K of 10 CFR 50. The major required and acceptable features of the evaluation models are as follows. - Fission product decay : 1.2 times the ANS97 decay curve - Critical flow model : Henry-Fauske and Moody two-phase critical flow models - Metal-water reaction model : Baker-Just equation - Critical Heat Flux (CHF) : B and W, Barnett and modified Barnett correlations - Post-CHF : Groeneveld 5.7 film boiling correlation A test matrix is established to validate the SPACE/sEM code against the major SBLOCA phenomena, e.g. core level swelling and boiling, core heat transfer, critical flow, loop seal clearance, and their integrated effects. The separate effect tests (SETs) and integral effect tests (IETs) are successfully performed, and the results show that the SPACE/sEM code is conservative compared with the experimental data. Finally, plant calculations of SBLOCA for APR1400 are conducted as described below. - Break location sensitivity : DVI line, hot-leg, cold-leg, pump suction leg. - Break size spectrum : 0.4 ft²∼0.02 ft² (DVI), 0.5 ft²∼0.02 ft² (hot-leg, cold-leg, pump suction leg) This paper deals with the break size spectrum analysis of cold-leg break accidents. Based on the calculation results, the emergency core cooling system (ECCS) performance of APR1400 and the typical SBLOCA phenomena can be evaluated. The cold-leg SBLOCA analysis for APR1400 is performed using the SPACE/sEM code under harsh environmental conditions. The SPACE/sEM code shows the typical SBLOCA behaviors, which are reasonably predicted. Although the SPACE/sEM code uses conservative models and correlations based on Appendix K of 10 CFR 50, the PCT does not exceed the acceptance limit (1477 K). It is concluded that the ECCS of APR1400 has sufficient performance in a cold-leg SBLOCA.

  15. Cold-Leg Small Break LOCA Analysis of APR1400 Plant Using a SPACE/sEM Code

    International Nuclear Information System (INIS)

    Lim, Sang Gyu; Lee, Suk Ho; Yu, Keuk Jong; Kim, Han Gon; Lee, Jae Yong

    2013-01-01

    The Small Break Loss-of-Coolant Accident (SBLOCA) evaluation methodology (EM) for APR1400, called sEM, is being developed using the SPACE code. SPACE/sEM is intended to establish a conservative evaluation methodology in accordance with Appendix K of 10 CFR 50. The major required and acceptable features of the evaluation models are as follows. - Fission product decay : 1.2 times the ANS97 decay curve - Critical flow model : Henry-Fauske and Moody two-phase critical flow models - Metal-water reaction model : Baker-Just equation - Critical Heat Flux (CHF) : B and W, Barnett and modified Barnett correlations - Post-CHF : Groeneveld 5.7 film boiling correlation A test matrix is established to validate the SPACE/sEM code against the major SBLOCA phenomena, e.g. core level swelling and boiling, core heat transfer, critical flow, loop seal clearance, and their integrated effects. The separate effect tests (SETs) and integral effect tests (IETs) are successfully performed, and the results show that the SPACE/sEM code is conservative compared with the experimental data. Finally, plant calculations of SBLOCA for APR1400 are conducted as described below. - Break location sensitivity : DVI line, hot-leg, cold-leg, pump suction leg. - Break size spectrum : 0.4 ft²∼0.02 ft² (DVI), 0.5 ft²∼0.02 ft² (hot-leg, cold-leg, pump suction leg) This paper deals with the break size spectrum analysis of cold-leg break accidents. Based on the calculation results, the emergency core cooling system (ECCS) performance of APR1400 and the typical SBLOCA phenomena can be evaluated. The cold-leg SBLOCA analysis for APR1400 is performed using the SPACE/sEM code under harsh environmental conditions. The SPACE/sEM code shows the typical SBLOCA behaviors, which are reasonably predicted. Although the SPACE/sEM code uses conservative models and correlations based on Appendix K of 10 CFR 50, the PCT does not exceed the acceptance limit (1477 K). It is concluded that the ECCS of APR1400 has sufficient performance in a cold-leg SBLOCA

  16. Ordered particles versus ordered pointers in the hybrid ordered plasma simulation (HOPS) code

    International Nuclear Information System (INIS)

    Anderson, D.V.; Shumaker, D.E.

    1993-01-01

    From a computational standpoint, particle simulation calculations for plasmas have not adapted well to the transitions from scalar to vector processing nor from serial to parallel environments. They have suffered from inordinate and excessive accessing of computer memory and have been hobbled by the relatively inefficient gather-scatter constructs resulting from indirect indexing. Lastly, the many-to-one mapping characteristic of the deposition phase has made it difficult to perform this step in parallel. The authors' code sorts and reorders the particles into spatial order. This allows them to greatly reduce memory references, to run in directly indexed vector mode, and to employ domain decomposition to achieve parallelization. The field model solves pre-Maxwell equations by iterative implicit methods. The OSOP (Ordered Storage Ordered Processing) version of HOPS keeps the particle tables ordered by rebuilding them after each particle pushing phase. Alternatively, the RSOP (Random Storage Ordered Processing) version keeps an ordered table of pointers, rebuilding it instead. Although OSOP is somewhat faster than RSOP in tests on vector-parallel machines, it is not clear that this advantage will carry over to massively parallel computers
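The spatial reordering described above (the OSOP strategy) can be sketched as follows: sort the particle arrays by cell index so the deposition phase sweeps memory contiguously instead of scattering through it via indirect indexing. A minimal 1D sketch with invented names, not HOPS code:

```python
import numpy as np

def sort_particles_by_cell(x, dx):
    """Reorder particle arrays into spatial (cell) order, so later
    grid operations access memory contiguously (OSOP-style)."""
    cell = np.floor(x / dx).astype(int)
    order = np.argsort(cell, kind="stable")  # stable keeps in-cell order
    return x[order], cell[order]

x = np.array([0.95, 0.11, 0.52, 0.07])
xs, cs = sort_particles_by_cell(x, 0.25)
# cs is now monotonically non-decreasing: [0, 0, 2, 3]
```

The cost is the sort itself after each push phase, which is exactly the OSOP-versus-RSOP trade-off the abstract compares.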

  17. Particle and heavy ion transport code system, PHITS, version 2.52

    International Nuclear Information System (INIS)

    Sato, Tatsuhiko; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Noda, Shusaku; Ogawa, Tatsuhiko; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Niita, Koji; Iwase, Hiroshi; Chiba, Satoshi; Furuta, Takuya; Sihver, Lembit

    2013-01-01

    An upgraded version of the Particle and Heavy Ion Transport code System, PHITS2.52, was developed and released to the public. The new version has been greatly improved from the previously released version, PHITS2.24, in terms of not only the code itself but also the contents of its package, such as the attached data libraries. In the new version, a higher accuracy of simulation was achieved by implementing several latest nuclear reaction models. The reliability of the simulation was improved by modifying both the algorithms for the electron-, positron-, and photon-transport simulations and the procedure for calculating the statistical uncertainties of the tally results. Estimation of the time evolution of radioactivity became feasible by incorporating the activation calculation program DCHAIN-SP into the new package. The efficiency of the simulation was also improved as a result of the implementation of shared-memory parallelization and the optimization of several time-consuming algorithms. Furthermore, a number of new user-support tools and functions that help users to intuitively and effectively perform PHITS simulations were developed and incorporated. Due to these improvements, PHITS is now a more powerful tool for particle transport simulation applicable to various research and development fields, such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. (author)

  18. Algorithm for Wave-Particle Resonances in Fluid Codes - Final Report

    CERN Document Server

    Mattor, N

    2000-01-01

    We review the work performed under LDRD ER grant 98-ERD-099. The goal of this work is to write a subroutine for a fluid turbulence code that allows it to incorporate wave-particle resonances (WPR). WPR historically have required a kinetic code, with extra dimensions needed to evolve the phase space distribution function, f(x, v, t). The main results accomplished under this grant have been: (1) Derivation of a nonlinear closure term for 1D electrostatic collisionless fluid; (2) Writing of a 1D electrostatic fluid code, ''es1f,'' with a subroutine to calculate the aforementioned closure term; (3) derivation of several methods to calculate the closure term, including Eulerian, Euler-local, fully local, linearized, and linearized zero-phase-velocity, and implementation of these in es1f; (4) Successful modeling of the Landau damping of an arbitrary Langmuir wave; (5) Successful description of a kinetic two-stream instability up to the point of the first bounce; and (6) a spin-off project which uses a mathematical ...

  19. Algorithm for Wave-Particle Resonances in Fluid Codes - Final Report

    International Nuclear Information System (INIS)

    Mattor, N.

    2000-01-01

    We review the work performed under LDRD ER grant 98-ERD-099. The goal of this work is to write a subroutine for a fluid turbulence code that allows it to incorporate wave-particle resonances (WPR). WPR historically have required a kinetic code, with extra dimensions needed to evolve the phase space distribution function, f(x, v, t). The main results accomplished under this grant have been: (1) Derivation of a nonlinear closure term for 1D electrostatic collisionless fluid; (2) Writing of a 1D electrostatic fluid code, ''es1f,'' with a subroutine to calculate the aforementioned closure term; (3) derivation of several methods to calculate the closure term, including Eulerian, Euler-local, fully local, linearized, and linearized zero-phase-velocity, and implementation of these in es1f; (4) Successful modeling of the Landau damping of an arbitrary Langmuir wave; (5) Successful description of a kinetic two-stream instability up to the point of the first bounce; and (6) a spin-off project which uses a mathematical technique developed for the closure, known as the Phase Velocity Transform (PVT) to decompose turbulent fluctuations

  20. Magnetic reconnection simulation using the 2.5D em [electromagnetic] direct implicit code AVANTI

    International Nuclear Information System (INIS)

    Hewett, D.W.; Francis, G.E.; Max, C.E.

    1988-01-01

    Collisionless reconnection of magnetic field lines depends upon electron inertia effects and details of the electron and ion distribution functions, thus requiring a kinetic description of both. Though traditional explicit PIC techniques provide this description in principle, they are severely limited in parameter range by time step constraints. In this work, this parameter regime has been expanded by using the recently constructed 2.5D electromagnetic code AVANTI. The code runs stably with arbitrarily large Δt and is quite robust with respect to the large fluctuations that occur with small numbers of particles per cell. We have found several qualitatively new features. The reconnection process occurs in distinct stages: early spontaneous reconnection fed by the free energy of an initial anisotropy in the electron component; coalescence of the resulting small-scale filaments of electron current, accompanied by electron jetting; and oscillatory flow of electrons through the magnetic X-point, superposed on continuing nonlinear growth of ion-mediated reconnection. The time evolution of each stage is strongly dependent on the mass ratio M_i/m_e. 12 refs., 6 figs

  1. Beam-induced motion correction for sub-megadalton cryo-EM particles.

    Science.gov (United States)

    Scheres, Sjors Hw

    2014-08-13

    In electron cryo-microscopy (cryo-EM), the electron beam that is used for imaging also causes the sample to move. This motion blurs the images and limits the resolution attainable by single-particle analysis. In a previous Research article (Bai et al., 2013) we showed that correcting for this motion by processing movies from fast direct-electron detectors allowed structure determination to near-atomic resolution from 35,000 ribosome particles. In this Research advance article, we show that an improved movie processing algorithm is applicable to a much wider range of specimens. The new algorithm estimates straight movement tracks by considering multiple particles that are close to each other in the field of view, and models the fall-off of high-resolution information content by radiation damage in a dose-dependent manner. Application of the new algorithm to four data sets illustrates its potential for significantly improving cryo-EM structures, even for particles that are smaller than 200 kDa. Copyright © 2014, Scheres.
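The dose-dependent modeling of radiation damage mentioned above amounts to down-weighting high-resolution signal as accumulated electron dose grows. The toy sketch below illustrates only the idea; the exponential form, the critical-dose constant, and the function name are assumptions for illustration, not the article's actual filter:

```python
import numpy as np

def dose_weight(spatial_freq, dose, critical_dose=10.0):
    """Toy dose-dependent weight: signal at high spatial frequency
    (fine detail) decays faster with accumulated dose, so late movie
    frames contribute less at high resolution. Illustrative only."""
    return np.exp(-dose * spatial_freq / (2.0 * critical_dose))

# an early (low-dose) frame keeps more high-resolution weight than a
# late (high-dose) one, while low-resolution content is untouched
w_early = dose_weight(spatial_freq=1.0, dose=2.0)
w_late = dose_weight(spatial_freq=1.0, dose=20.0)
```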

  2. PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

    International Nuclear Information System (INIS)

    Iandola, F.N.; O'Brien, M.J.; Procassini, R.J.

    2010-01-01

    Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

  3. A parallel 3D particle-in-cell code with dynamic load balancing

    International Nuclear Information System (INIS)

    Wolfheimer, Felix; Gjonaj, Erion; Weiland, Thomas

    2006-01-01

    A parallel 3D electrostatic Particle-In-Cell (PIC) code including an algorithm for modelling Space Charge Limited (SCL) emission [E. Gjonaj, T. Weiland, 3D-modeling of space-charge-limited electron emission. A charge conserving algorithm, Proceedings of the 11th Biennial IEEE Conference on Electromagnetic Field Computation, 2004] is presented. A domain decomposition technique based on orthogonal recursive bisection is used to parallelize the computation on a distributed memory environment of clustered workstations. For problems with a highly nonuniform and time dependent distribution of particles, e.g., bunch dynamics, a dynamic load balancing between the processes is needed to preserve the parallel performance. The algorithm for the detection of a load imbalance and the redistribution of the tasks among the processes is based on a weight function criterion, where the weight of a cell measures the computational load associated with it. The algorithm is studied with two examples. In the first example, multiple electron bunches as occurring in the S-DALINAC [A. Richter, Operational experience at the S-DALINAC, Proceedings of the Fifth European Particle Accelerator Conference, 1996] accelerator are simulated in the absence of space charge fields. In the second example, the SCL emission and electron trajectories in an electron gun are simulated
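The load-balancing scheme described above, orthogonal recursive bisection driven by a per-cell weight function, can be sketched in a toy 2D form. All names are invented and the recursion stops at four subdomains; this is not the paper's implementation:

```python
def orb_split(cells, depth=0):
    """Orthogonal recursive bisection on weighted cells.
    Each cell is (x, y, weight); the cut position balances the summed
    weight on either side, alternating the axis with depth."""
    if len(cells) <= 1 or depth == 2:  # stop at 2^2 = 4 subdomains
        return [cells]
    axis = depth % 2
    cells = sorted(cells, key=lambda c: c[axis])
    total = sum(c[2] for c in cells)
    acc, cut = 0.0, 0
    for i, c in enumerate(cells):
        acc += c[2]
        if acc >= total / 2:       # first index reaching half the load
            cut = i + 1
            break
    return (orb_split(cells[:cut], depth + 1)
            + orb_split(cells[cut:], depth + 1))

# a uniform 4x4 grid of unit-weight cells splits into 4 equal domains
cells = [(x, y, 1.0) for x in range(4) for y in range(4)]
groups = orb_split(cells)
```

With a nonuniform weight field (e.g. cells holding a bunch), the cuts move so each process still receives roughly equal computational load, which is the imbalance-detection-and-redistribution idea the abstract describes.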

  4. A parallel 3D particle-in-cell code with dynamic load balancing

    Energy Technology Data Exchange (ETDEWEB)

    Wolfheimer, Felix [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany)]. E-mail: wolfheimer@temf.de; Gjonaj, Erion [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany); Weiland, Thomas [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany)

    2006-03-01

    A parallel 3D electrostatic Particle-In-Cell (PIC) code including an algorithm for modelling Space Charge Limited (SCL) emission [E. Gjonaj, T. Weiland, 3D-modeling of space-charge-limited electron emission. A charge conserving algorithm, Proceedings of the 11th Biennial IEEE Conference on Electromagnetic Field Computation, 2004] is presented. A domain decomposition technique based on orthogonal recursive bisection is used to parallelize the computation on a distributed memory environment of clustered workstations. For problems with a highly nonuniform and time dependent distribution of particles, e.g., bunch dynamics, a dynamic load balancing between the processes is needed to preserve the parallel performance. The algorithm for the detection of a load imbalance and the redistribution of the tasks among the processes is based on a weight function criterion, where the weight of a cell measures the computational load associated with it. The algorithm is studied with two examples. In the first example, multiple electron bunches as occurring in the S-DALINAC [A. Richter, Operational experience at the S-DALINAC, Proceedings of the Fifth European Particle Accelerator Conference, 1996] accelerator are simulated in the absence of space charge fields. In the second example, the SCL emission and electron trajectories in an electron gun are simulated.

  5. Tripoli-3: monte Carlo transport code for neutral particles - version 3.5 - users manual

    International Nuclear Information System (INIS)

    Vergnaud, Th.; Nimal, J.C.; Chiron, M.

    2001-01-01

    The TRIPOLI-3 code applies the Monte Carlo method to neutron, gamma-ray, and coupled neutron/gamma-ray transport calculations in three-dimensional geometries, in either steady-state or time-dependent conditions. It can be used to study problems with high flux attenuation between the source zone and the result zone (studies of shielding configurations or source-driven sub-critical systems, with fission taken into account), as well as problems with low flux attenuation (neutronics calculations -- in a fuel lattice cell, for example -- where fission is taken into account, usually with calculation of the effective multiplication factor, fine-structure studies, numerical experiments to investigate method approximations, etc.). TRIPOLI-3 has been operational since 1995 and is the version of the TRIPOLI code that follows on from TRIPOLI-2; it can be used on SUN, RISC600 and HP workstations and on PCs under the Linux or Windows/NT operating systems. The code uses nuclear data libraries generated with the THEMIS/NJOY system. The current libraries were derived from ENDF/B6 and JEF2. There is also a response function library based on a number of evaluations, notably the dosimetry libraries IRDF/85 and IRDF/90 and evaluations from JEF2. The treatment of particle transport is the same in version 3.5 as in version 3.4 of the TRIPOLI code, but version 3.5 is more convenient for preparing the input data and reading the output. A French version of the user's manual exists. (authors)
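The high-attenuation regime mentioned above is exactly where plain (analog) Monte Carlo struggles: the fraction of histories reaching the result zone decays exponentially, so deep-penetration shielding codes need variance-reduction techniques. A toy analog estimate of uncollided transmission through a purely absorbing slab (an illustration, not TRIPOLI code) makes the point:

```python
import math
import random

def slab_transmission(sigma_t, thickness, n=200_000, seed=1):
    """Analog Monte Carlo estimate of uncollided transmission through
    an absorbing slab: sample exponential path lengths and count the
    histories whose first collision lies beyond the slab."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if -math.log(rng.random()) / sigma_t > thickness)
    return hits / n

# analytic value is exp(-sigma_t * thickness) = exp(-3) ~ 0.0498;
# at 10 mean free paths only ~1 history in 22,000 would survive
t = slab_transmission(sigma_t=1.0, thickness=3.0)
```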

  6. Apar-T: code, validation, and physical interpretation of particle-in-cell results

    Science.gov (United States)

    Melzani, Mickaël; Winisdoerffer, Christophe; Walder, Rolf; Folini, Doris; Favre, Jean M.; Krastanov, Stefan; Messmer, Peter

    2013-10-01

    We present the parallel particle-in-cell (PIC) code Apar-T and, more importantly, address the fundamental question of the relations between the PIC model, the Vlasov-Maxwell theory, and real plasmas. First, we present four validation tests: spectra from simulations of thermal plasmas, linear growth rates of the relativistic tearing instability and of the filamentation instability, and nonlinear filamentation merging phase. For the filamentation instability we show that the effective growth rates measured on the total energy can differ by more than 50% from the linear cold predictions and from the fastest modes of the simulation. We link these discrepancies to the superparticle number per cell and to the level of field fluctuations. Second, we detail a new method for initial loading of Maxwell-Jüttner particle distributions with relativistic bulk velocity and relativistic temperature, and explain why the traditional method with individual particle boosting fails. The formulation of the relativistic Harris equilibrium is generalized to arbitrary temperature and mass ratios. Both are required for the tearing instability setup. Third, we turn to the key point of this paper and scrutinize the question of what description of (weakly coupled) physical plasmas is obtained by PIC models. These models rely on two building blocks: coarse-graining, i.e., grouping of the order of p ~ 10^10 real particles into a single computer superparticle, and field storage on a grid with its subsequent finite superparticle size. We introduce the notion of coarse-graining dependent quantities, i.e., quantities depending on p. They derive from the PIC plasma parameter Λ_PIC, which we show to behave as Λ_PIC ∝ 1/p. We explore two important implications. One is that PIC collision- and fluctuation-induced thermalization times are expected to scale with the number of superparticles per grid cell, and thus to be a factor p ~ 10^10 smaller than in real plasmas, a fact that we confirm with

  7. Particle-in-cell plasma simulation codes on the connection machine

    International Nuclear Information System (INIS)

    Walker, D.W.

    1991-01-01

    Methods for implementing three-dimensional, electromagnetic, relativistic PIC plasma simulation codes on the Connection Machine (CM-2) are discussed. The gather and scatter phases of the PIC algorithm involve indirect indexing of data, which results in a large amount of communication on the CM-2. Different data decompositions are described that seek to reduce the amount of communication while maintaining good load balance. These methods require the particles to be spatially sorted at the start of each time step, which introduces another form of overhead. The different methods are implemented in CM Fortran on the CM-2 and compared. It was found that the general router is slow in performing the communication in the gather and scatter steps, which precludes an efficient CM Fortran implementation. An alternative method that uses PARIS calls and the NEWS communication network to pipeline data along the axes of the VP set is suggested as a more efficient algorithm
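The scatter phase discussed above is awkward precisely because of its many-to-one mapping: many particles deposit onto the same grid node, so a naive vectorized `rho[idx] += w` silently drops contributions. A minimal 1D deposition sketch (invented names, linear weighting, periodic grid) using an unbuffered indexed add:

```python
import numpy as np

def deposit_charge(x, q, dx, n_grid):
    """Scatter (charge deposition) with linear weighting. np.add.at
    performs an unbuffered indirect add, so repeated indices from the
    many-to-one particle-to-node mapping accumulate correctly."""
    rho = np.zeros(n_grid)
    cell = np.floor(x / dx).astype(int) % n_grid
    frac = x / dx - np.floor(x / dx)
    np.add.at(rho, cell, q * (1 - frac))
    np.add.at(rho, (cell + 1) % n_grid, q * frac)
    return rho

# two particles land in the same cell; total charge is conserved
rho = deposit_charge(np.array([0.25, 0.25, 0.9]), 1.0, 0.5, 2)
```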

  8. Update on comparison of the particle production using Mars simulation code

    CERN Document Server

    Prior, G; Kirk, H G; Souchlas, N; Ding, X

    2011-01-01

    In the International Design Study for the Neutrino Factory (IDS-NF), a 5-15 GeV (kinetic energy) proton beam impinges on a Hg jet target in order to produce pions that will decay into muons. The muons are captured and formed into a beam, then passed to the downstream acceleration system. The target sits in a solenoid field tapering from 20 T down to below 2 T over several meters, permitting optimized capture of the pions that will produce useful muons for the machine. The target and pion capture systems have been simulated using MARS. This paper presents an updated comparison of the particle production using the MARS code versions m1507 and m1510 on different machines located at the European Organization for Nuclear Research (CERN) and Brookhaven National Laboratory (BNL).

  9. SHARP: A Spatially Higher-order, Relativistic Particle-in-cell Code

    Energy Technology Data Exchange (ETDEWEB)

    Shalaby, Mohamad; Broderick, Avery E. [Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1 (Canada); Chang, Philip [Department of Physics, University of Wisconsin-Milwaukee, 1900 E. Kenwood Boulevard, Milwaukee, WI 53211 (United States); Pfrommer, Christoph [Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, D-14482 Potsdam (Germany); Lamberts, Astrid [Theoretical Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States); Puchwein, Ewald, E-mail: mshalaby@live.ca [Institute of Astronomy and Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge, CB3 0HA (United Kingdom)

    2017-05-20

    Numerical heating in particle-in-cell (PIC) codes currently precludes the accurate simulation of cold, relativistic plasma over long periods, severely limiting their applications in astrophysical environments. We present a spatially higher-order accurate relativistic PIC algorithm in one spatial dimension, which conserves charge and momentum exactly. We utilize the smoothness implied by the usage of higher-order interpolation functions to achieve a spatially higher-order accurate algorithm (up to the fifth order). We validate our algorithm against several test problems—thermal stability of stationary plasma, stability of linear plasma waves, and two-stream instability in the relativistic and non-relativistic regimes. Comparing our simulations to exact solutions of the dispersion relations, we demonstrate that SHARP can quantitatively reproduce important kinetic features of the linear regime. Our simulations have a superior ability to control energy non-conservation and avoid numerical heating in comparison to common second-order schemes. We provide a natural definition for convergence of a general PIC algorithm: the complement of physical modes captured by the simulation, i.e., those that lie above the Poisson noise, must grow commensurately with the resolution. This implies that it is necessary to simultaneously increase the number of particles per cell and decrease the cell size. We demonstrate that traditional ways for testing for convergence fail, leading to plateauing of the energy error. This new PIC code enables us to faithfully study the long-term evolution of plasma problems that require absolute control of the energy and momentum conservation.
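The higher-order interpolation functions underlying such schemes are typically B-splines of increasing degree (degree 0 = NGP top-hat, 1 = CIC triangle, 2 = TSC, ...). A generic sketch, not SHARP's implementation, evaluates centered B-splines via the standard recurrence and checks that the particle-to-grid weights of any order sum to one (the partition of unity that underlies charge conservation):

```python
def bspline(x, degree):
    """Centered cardinal B-spline B_degree(x), built by the standard
    recurrence from the degree-0 top-hat on [-1/2, 1/2)."""
    if degree == 0:
        return 1.0 if -0.5 <= x < 0.5 else 0.0
    return ((x + (degree + 1) / 2) * bspline(x + 0.5, degree - 1)
            + ((degree + 1) / 2 - x) * bspline(x - 0.5, degree - 1)) / degree

def weights(frac, degree):
    """Grid-node weights for a particle at offset `frac` from node 0;
    higher degrees spread the particle over more, smoother weights."""
    return [bspline(frac - k, degree) for k in range(-2, 4)]
```

Smoother (higher-degree) shapes reduce the grid-scale noise that drives numerical heating, at the cost of touching more nodes per particle.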

  10. 'ACTIV' - a package of codes for charged particle and neutron activation analysis

    International Nuclear Information System (INIS)

    Cincu, Em.; Alexandreanu, B.; Manu, V.; Moisa, V.

    1997-01-01

    The 'ACTIV' Program is an advanced software package dedicated to applications of thermal neutron and charged particle activation (NAA and CPA) induced reactions. The program is designed to run on IBM PC-compatible personal computers (models XT/AT, 286 or more advanced) operating under DOS version 5.0 or later, with a minimum of 5 MB of hard disk space. The package consists of six software modules and a nuclear data base comprising physical, nuclear reaction and decay data for thermal neutron, proton, deuteron and α-particle induced reactions on 15 selected metallic elements; the nuclear reaction data cover the energy range 5-100 MeV. In the first version, ACTIV 1.0, the set of input data concerns: the sample type, irradiation and measurement conditions, the γ-ray spectrum identification code, the selected detection efficiency calibration curve, the selected radionuclides, the selected standardization method for elemental analysis, and the form of the results. At present, the 'ACTIV' package comprises six software modules for processing the experimental data, which compute the following quantities: radionuclide activities, activation yield data (in the case of CPA) and elemental concentrations by relative and absolute standardization methods. Recently, software for processing complex γ-ray spectra was acquired and installed on our PC 486 (8 MB RAM, 100 MHz). The next steps in developing the 'ACTIV' program envisage improving the existing computing codes, completing the data libraries, incorporating new software for the direct use of the 'Quantum TM MCA' data, and developing modules dedicated to uncertainty computation and optimization of the activation experiments.
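
    In the relative standardization method mentioned above, the elemental concentration follows from the decay-corrected specific count rates of the sample and a co-irradiated standard. A minimal sketch with hypothetical numbers (simplified: identical irradiation and counting conditions assumed, decay during counting neglected; not ACTIV's actual implementation):

```python
import math

def decay_corrected_rate(counts, t_live, t_decay, half_life):
    """Count rate corrected back to the end of irradiation
    (decay during the counting interval itself is neglected)."""
    lam = math.log(2) / half_life
    return (counts / t_live) * math.exp(lam * t_decay)

def concentration_relative(rate_sample, m_sample, rate_std, m_std, c_std):
    """Relative standardization: the unknown concentration follows from
    the specific count rates of sample and co-irradiated standard."""
    return c_std * (rate_sample / m_sample) / (rate_std / m_std)

# Hypothetical numbers: a 2 g sample and a 1 g standard containing 10 ug/g,
# counted 600 s live time after a 3600 s decay; half-life 9000 s.
r_sam = decay_corrected_rate(counts=8000, t_live=600, t_decay=3600, half_life=9000)
r_std = decay_corrected_rate(counts=5000, t_live=600, t_decay=3600, half_life=9000)
c = concentration_relative(r_sam, 2.0, r_std, 1.0, c_std=10.0)
print(f"{c:.2f} ug/g")   # -> 8.00 ug/g
```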

  11. DCHAIN-SP 2001: High energy particle induced radioactivity calculation code

    Energy Technology Data Exchange (ETDEWEB)

    Kai, Tetsuya; Maekawa, Fujio; Kasugai, Yoshimi; Takada, Hiroshi; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kosako, Kazuaki [Sumitomo Atomic Energy Industries, Ltd., Tokyo (Japan)

    2001-03-01

    To support safety design calculations for induced radioactivities in the JAERI/KEK high-intensity proton accelerator project facilities, DCHAIN-SP, which calculates high energy particle induced radioactivity, has been updated to DCHAIN-SP 2001. The following three items were improved: (1) Fission yield data are included so that the code can be applied to experimental facility design for nuclear transmutation of long-lived radioactive waste, where fissionable materials are treated. (2) Activation cross section data below 20 MeV are revised. In particular, attention is paid to cross section data of materials closely related to the facilities, i.e., mercury, lead and bismuth, and to tritium production cross sections, which are important for the safety of the facilities. (3) The user interface for input/output data has been refined so that calculations can be performed more efficiently than in the previous version. Information needed for use of the code is attached in the Appendices: the DCHAIN-SP 2001 manual, the procedures for installation and execution of DCHAIN-SP, and sample problems. (author)
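
    The core task of a code like DCHAIN-SP, following the buildup and decay of nuclide chains, is classically described by the Bateman equations. A minimal two-member illustration (a generic sketch, not DCHAIN-SP's actual solver, which handles long chains with fission yields and production terms):

```python
import math

def bateman_two_step(N1_0, lam1, lam2, t):
    """Bateman solution for a two-member chain 1 -> 2 -> (stable):
    nuclei of species 2 at time t, starting from N1_0 of species 1
    and none of species 2."""
    return N1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

lam1 = math.log(2) / 100.0   # parent half-life 100 s (hypothetical)
lam2 = math.log(2) / 10.0    # daughter half-life 10 s (hypothetical)
N2 = bateman_two_step(1e6, lam1, lam2, t=50.0)

# Cross-check against a crude explicit-Euler integration of the chain ODEs
n1, n2, dt = 1e6, 0.0, 0.001
for _ in range(50000):       # integrate to t = 50 s
    n1, n2 = n1 - lam1 * n1 * dt, n2 + (lam1 * n1 - lam2 * n2) * dt
assert abs(n2 - N2) / N2 < 1e-2
```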

  12. Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian

    2017-02-01

    The THOR neutral particle transport code enables simulation of complex geometries for various problems, from reactor simulations to nuclear non-proliferation. It is undergoing thorough verification and validation (V&V), which requires computational efficiency. This has motivated various improvements, including angular parallelization, outer iteration acceleration, and development of peripheral tools. To guide future improvements to the code’s efficiency, a better characterization of its parallel performance is needed. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL’s Falcon HPC system, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this work produced a communication model and a parallel-portion model. The former's accuracy is bounded by the variability of communication on Falcon, while the latter has an error on the order of 1%.
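
    The structure of such a model can be sketched as a per-cell/angle/group compute term plus a latency/bandwidth communication term. All names and numbers below are hypothetical stand-ins, not THOR's fitted model (in particular, the assumption that communication cost grows linearly with process count is a deliberately crude placeholder):

```python
def runtime_model(P, n_cells, n_angles, n_groups, t_unit, alpha, beta, msg_bytes):
    """Two-part performance model: a parallel portion built from a
    per-cell/angle/group unit cost divided across P processes, plus an
    alpha-beta (latency + bandwidth) communication term assumed to grow
    linearly with P.  All parameter values are hypothetical."""
    compute = n_cells * n_angles * n_groups * t_unit / P
    comm = P * (alpha + beta * msg_bytes)
    return compute + comm

# The model exposes the compute/communication trade-off: past some process
# count the growing communication term outweighs the shrinking compute share.
best = min(range(1, 1025),
           key=lambda P: runtime_model(P, 1e6, 48, 16, 1e-8, 5e-5, 1e-9, 4e5))
```

Fitting the unit cost and the alpha/beta coefficients to measured timings is what turns such a skeleton into a predictive model.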

  13. Novel methods in the Particle-In-Cell accelerator Code-Framework Warp

    Energy Technology Data Exchange (ETDEWEB)

    Vay, J-L [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Grote, D. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Cohen, R. H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Friedman, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2012-12-26

    The Particle-In-Cell (PIC) Code-Framework Warp is being developed by the Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) to guide the development of accelerators that can deliver beams suitable for high-energy density experiments and implosion of inertial fusion capsules. It is also applied in various areas outside the Heavy Ion Fusion program to the study and design of existing and next-generation high-energy accelerators, including, for example, the study of electron cloud effects and laser wakefield acceleration. This study presents an overview of Warp's capabilities, summarizing recent original numerical methods that were developed by the HIFS-VNL (including PIC with adaptive mesh refinement, a large-timestep 'drift-Lorentz' mover for arbitrarily magnetized species, a relativistic Lorentz-invariant leapfrog particle pusher, simulations in Lorentz-boosted frames, an electromagnetic solver with tunable numerical dispersion, and efficient stride-based digital filtering), with special emphasis on the description of the mesh refinement capability. In addition, selected examples of the applications of these methods to the abovementioned fields are given.
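
    Warp's Lorentz-invariant pusher is a specific variant; the widely used relativistic Boris pusher sketched below illustrates the general leapfrog momentum update this family of methods builds on (a baseline sketch, not Warp's actual algorithm):

```python
import numpy as np

def boris_push(u, E, B, q_m, dt):
    """One step of the standard relativistic Boris pusher.
    u = gamma*v is the momentum per unit mass; q_m = q/m.
    Half electric kick, magnetic rotation, half electric kick."""
    c = 299792458.0
    u_minus = u + 0.5 * q_m * dt * E
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / c**2)
    t = 0.5 * q_m * dt * B / gamma
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_prime = u_minus + np.cross(u_minus, t)
    u_plus = u_minus + np.cross(u_prime, s)
    return u_plus + 0.5 * q_m * dt * E

# Pure magnetic field: the update is an exact rotation, so |u| is conserved,
# the property that suppresses numerical heating of the gyromotion.
u0 = np.array([1e7, 0.0, 0.0])
u = u0.copy()
for _ in range(1000):
    u = boris_push(u, E=np.zeros(3), B=np.array([0.0, 0.0, 1e-2]),
                   q_m=-1.758820e11, dt=1e-12)   # electron q/m, hypothetical dt
print(np.linalg.norm(u) / np.linalg.norm(u0))    # stays ~1
```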

  14. Synthetic radiation diagnostics in PIConGPU. Integrating spectral detectors into particle-in-cell codes

    Energy Technology Data Exchange (ETDEWEB)

    Pausch, Richard; Burau, Heiko; Huebl, Axel; Steiniger, Klaus [Helmholtz-Zentrum Dresden-Rossendorf (Germany); Technische Universitaet Dresden (Germany); Debus, Alexander; Widera, Rene; Bussmann, Michael [Helmholtz-Zentrum Dresden-Rossendorf (Germany)

    2016-07-01

    We present the in-situ far field radiation diagnostics in the particle-in-cell code PIConGPU. It was developed to close the gap between simulated plasma dynamics and the radiation observed in laser plasma experiments. Its predictive capabilities, both qualitative and quantitative, have been tested against analytical models. Now, we apply this synthetic spectral diagnostic to investigate plasma dynamics in laser wakefield acceleration, laser foil irradiation and plasma instabilities. Our method is based on the far field approximation of the Lienard-Wiechert potential and allows predicting both coherent and incoherent radiation spectrally from infrared to X-rays. Its capability to resolve the radiation polarization and to determine the temporal and spatial origin of the radiation enables us to correlate specific spectral signatures with characteristic dynamics in the plasma. Furthermore, its direct integration into the highly-scalable GPU framework of PIConGPU allows computing radiation spectra for thousands of frequencies, hundreds of detector positions and billions of particles efficiently. In this talk we demonstrate these capabilities on recent simulations of laser wakefield acceleration (LWFA) and high harmonics generation during target normal sheath acceleration (TNSA).
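
    The far-field method can be illustrated directly from the Lienard-Wiechert far-field integral: sample the trajectory, evaluate the amplitude at the retarded phase, and integrate for each frequency and observation direction. A minimal single-particle, single-observer sketch in natural units (c = 1), not the PIConGPU implementation:

```python
import numpy as np

def farfield_spectrum(t, r, beta, beta_dot, n, omegas):
    """Far-field Lienard-Wiechert spectrum (up to constant prefactors):
    |integral of n x ((n - beta) x beta_dot) / (1 - n.beta)^2
     * exp(i w (t - n.r/c)) dt|^2, by direct quadrature on a sampled
    trajectory r(t), beta(t), beta_dot(t) for observation direction n."""
    phase_t = t - r @ n                     # retarded-time phase (c = 1)
    denom = (1.0 - beta @ n) ** 2
    amp = np.cross(n, np.cross(n - beta, beta_dot)) / denom[:, None]
    dt = t[1] - t[0]
    spec = []
    for w in omegas:
        integ = (amp * np.exp(1j * w * phase_t)[:, None]).sum(axis=0) * dt
        spec.append((np.abs(integ) ** 2).sum())
    return np.array(spec)

# Weakly relativistic oscillating charge: the spectrum peaks at the
# drive frequency (hypothetical parameters).
t = np.linspace(0, 200 * np.pi, 20000)
x0, w0, b0 = 0.01, 1.0, 0.01
r = np.stack([x0 * np.sin(w0 * t), 0 * t, 0 * t], axis=1)
beta = np.stack([b0 * np.cos(w0 * t), 0 * t, 0 * t], axis=1)
beta_dot = np.stack([-b0 * w0 * np.sin(w0 * t), 0 * t, 0 * t], axis=1)
n = np.array([0.0, 0.0, 1.0])               # observe perpendicular to motion
S = farfield_spectrum(t, r, beta, beta_dot, n, omegas=np.array([0.5, 1.0, 1.5]))
assert S[1] == S.max()
```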

  15. The CNCSN-2: One, two-and three-dimensional coupled neutral and charged particle discrete ordinates code system

    International Nuclear Information System (INIS)

    Voloschenko, A. M.; Gukov, S. V.; Russkov, A. A.; Gurevich, M. I.; Shkarovsky, D. A.; Kryuchkov, V. P.; Sumaneev, O. V.; Dubinin, A. A.

    2009-01-01

    The KATRIN, KASKAD-S and ROZ-6 codes solve the multigroup transport equation for neutrons, photons and charged particles in three, two and one dimensions, respectively. BOT3P-5 and ConDat can be used as preprocessors. ARVES-2.5, a cross-section preprocessor (a package of utilities for operating with the cross-section file in FMAC-M format), is included. The auxiliary codes MIXERM, CEPXS-BFP, SADCO-2.4 and CNCSN-2 are also used.

  16. Verification and Validation of Monte Carlo n-Particle Code 6 (MCNP6) with Neutron Protection Factor Measurements of an Iron Box

    Science.gov (United States)

    2014-03-27

    Vehicle Code System (VCS), the Monte Carlo Adjoint SHielding (MASH) code, and the Monte Carlo N-Particle (MCNP) code. Of the three, the oldest and still most widely utilized radiation transport code is MCNP. First created at Los Alamos National Laboratory (LANL) in 1957, the code simulated neutral... particle types, and previous versions of MCNP were repeatedly validated using both simple and complex geometries [12, 13]. Much greater discussion and

  17. Radiation protection studies for medical particle accelerators using FLUKA Monte Carlo code

    International Nuclear Information System (INIS)

    Infantino, Angelo; Mostacci, Domiziano; Cicoria, Gianfranco; Lucconi, Giulia; Pancaldi, Davide; Vichi, Sara; Zagni, Federico; Marengo, Mario

    2017-01-01

    Radiation protection (RP) in the use of medical cyclotrons involves many aspects both in the routine use and for the decommissioning of a site. Guidelines for site planning and installation, as well as for RP assessment, are given in international documents; however, the latter typically offer analytic methods of calculation of shielding and materials activation, in approximate or idealised geometry set-ups. The availability of Monte Carlo (MC) codes with accurate up-to-date libraries for transport and interaction of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of modern computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to RP at the same time. In this work, the well-known FLUKA MC code was used to simulate different aspects of RP in the use of biomedical accelerators, particularly for the production of medical radioisotopes. In the context of the Young Professionals Award, held at the IRPA 14 conference, only a part of the complete work is presented. In particular, the simulation of the GE PETtrace cyclotron (16.5 MeV) installed at S. Orsola-Malpighi University Hospital evaluated the effective dose distribution around the equipment; the effective number of neutrons produced per incident proton and their spectral distribution; the activation of the structure of the cyclotron and the vault walls; the activation of the ambient air, in particular the production of ⁴¹Ar. The simulations were validated, in terms of physical and transport parameters to be used at the energy range of interest, through an extensive measurement campaign of the neutron environmental dose equivalent using a rem-counter and TLD dosemeters. The validated model was then used in the design and the licensing request of a new Positron Emission Tomography facility. (authors)

  18. PIConGPU - How to build one of the fastest GPU particle-in-cell codes in the world

    Energy Technology Data Exchange (ETDEWEB)

    Burau, Heiko; Debus, Alexander; Helm, Anton; Huebl, Axel; Kluge, Thomas; Widera, Rene; Bussmann, Michael; Schramm, Ulrich; Cowan, Thomas [HZDR, Dresden (Germany); Juckeland, Guido; Nagel, Wolfgang [TU Dresden (Germany); ZIH, Dresden (Germany); Schmitt, Felix [NVIDIA (United States)

    2013-07-01

    We present the algorithmic building blocks of PIConGPU, one of the fastest implementations of the particle-in-cell algorithm on GPU clusters. PIConGPU is a highly-scalable, 3D3V electromagnetic PIC code that is used in laser plasma and astrophysical plasma simulations.

  19. Random geometry capability in RMC code for explicit analysis of polytype particle/pebble and applications to HTR-10 benchmark

    International Nuclear Information System (INIS)

    Liu, Shichang; Li, Zeguang; Wang, Kan; Cheng, Quan; She, Ding

    2018-01-01

    Highlights: •A new random geometry was developed in RMC for mixed and polytype particle/pebble. •This capability was applied to the full core calculations of HTR-10 benchmark. •Reactivity, temperature coefficient and control rod worth of HTR-10 were compared. •This method can explicitly model different packing fraction of different pebbles. •Monte Carlo code with this method can simulate polytype particle/pebble type reactor. -- Abstract: With the increasing demands of high fidelity neutronics analysis and the development of computer technology, the Monte Carlo method is becoming more and more attractive for the accurate simulation of pebble bed High Temperature gas-cooled Reactors (HTRs), owing to its advantages of flexible geometry modeling and the use of continuous-energy nuclear cross sections. For the double-heterogeneous geometry of a pebble bed, traditional Monte Carlo codes can treat it by explicit geometry description. However, packing methods such as Random Sequential Addition (RSA) can only produce a sphere packing up to 38% volume packing fraction, while the Discrete Element Method (DEM) is troublesome and time consuming. Moreover, it is difficult and inconvenient for traditional Monte Carlo codes to simulate mixed and polytype particles or pebbles. A new random geometry method was developed in the Monte Carlo code RMC to simulate particle transport in polytype particle/pebble double-heterogeneous geometry systems. This method was verified by several test cases and applied to the full core calculations of the HTR-10 benchmark. The reactivity, temperature coefficient and control rod worth of HTR-10 were compared for the full core and the initial core in helium and air atmospheres, respectively, and the results agree well with the benchmark results and experimental results. This work provides an efficient tool for the innovative design of pebble bed, prism HTRs and molten salt reactors with polytype particles or pebbles using the Monte Carlo method.
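
    The RSA limitation mentioned above is easy to reproduce: inserting spheres at uniformly random non-overlapping positions saturates well below the packing of a real pebble bed (~61%). A small illustrative sketch with hypothetical dimensions (not RMC's geometry engine):

```python
import math, random

def rsa_pack(box, radius, attempts, seed=42):
    """Random Sequential Addition: try uniformly random centers, keep a
    sphere only if it overlaps none already placed.  In 3D this jams at
    roughly 38% volume packing fraction, which is why RSA alone cannot
    reproduce a real pebble bed."""
    rng = random.Random(seed)
    centers = []
    min_d2 = (2.0 * radius) ** 2
    for _ in range(attempts):
        p = tuple(rng.uniform(radius, L - radius) for L in box)
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= min_d2
               for q in centers):
            centers.append(p)
    return centers

box, r = (10.0, 10.0, 10.0), 1.0
centers = rsa_pack(box, r, attempts=3000)
fraction = len(centers) * (4.0 / 3.0) * math.pi * r ** 3 / (10.0 ** 3)
# fraction lands well below the ~0.38 RSA jamming limit
```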

  20. Monte Carlo method implemented in a finite element code with application to dynamic vacuum in particle accelerators

    CERN Document Server

    Garion, C

    2009-01-01

    Modern particle accelerators require UHV conditions during their operation. In the accelerating cavities, breakdowns can occur, releasing a large amount of gas into the vacuum chamber. To determine the pressure profile along the cavity as a function of time, the time-dependent behaviour of the gas has to be simulated. To do that, it is useful to apply an accurate three-dimensional method, such as Test Particle Monte Carlo. In this paper, a time-dependent Test Particle Monte Carlo method is used. It has been implemented in a Finite Element code, CASTEM. The principle is to track a sample of molecules over time. The complex geometry of the cavities can be created either in the FE code or in a CAD software package (CATIA in our case). The interface between the two programs to export the geometry from CATIA to CASTEM is given. The algorithm of particle tracking for collisionless flow in the FE code is shown. Thermal outgassing, pumping surfaces and electron and/or ion stimulated desorption can all be generated as well as differ...

  1. EMHP: an accurate automated hole masking algorithm for single-particle cryo-EM image processing.

    Science.gov (United States)

    Berndsen, Zachary; Bowman, Charles; Jang, Haerin; Ward, Andrew B

    2017-12-01

    The Electron Microscopy Hole Punch (EMHP) is a streamlined suite of tools for quick assessment, sorting and hole masking of electron micrographs. With recent advances in single-particle electron cryo-microscopy (cryo-EM) data processing allowing for the rapid determination of protein structures using a smaller computational footprint, we saw the need for a fast and simple tool for data pre-processing that could run independently of existing high-performance computing (HPC) infrastructures. EMHP provides a data preprocessing platform in a small package that requires minimal python dependencies to function. https://www.bitbucket.org/chazbot/emhp Apache 2.0 License. bowman@scripps.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  2. Solutions to HYDROCOIN [Hydrologic Code Intercomparison] Level 1 problems using STOKES and PARTICLE (Cases 1,2,4,7)

    International Nuclear Information System (INIS)

    Gureghian, A.B.; Andrews, A.; Steidl, S.B.; Brandstetter, A.

    1987-10-01

    HYDROCOIN (Hydrologic Code Intercomparison) Level 1 benchmark problems are solved using the finite element ground-water flow code STOKES and the pathline generating code PARTICLE developed for the Office of Crystalline Repository Development (OCRD). The objective of the Level 1 benchmark problems is to verify the numerical accuracy of ground-water flow codes by intercomparison of their results with analytical solutions and other numerical computer codes. Seven test cases were proposed for Level 1 to the Swedish Nuclear Power Inspectorate, the managing participant of HYDROCOIN. Cases 1, 2, 4, and 7 were selected by OCRD because of their appropriateness to the nature of crystalline repository hydrologic performance. The background relevance, conceptual model, and assumptions of each case are presented. The governing equations, boundary conditions, input parameters, and the solution schemes applied to each case are discussed. The results are shown in graphic and tabular form with concluding remarks. The results demonstrate the two-dimensional verification of STOKES and PARTICLE. 5 refs., 61 figs., 30 tabs
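
    A pathline generator like PARTICLE integrates particle positions through the velocity field produced by the flow code. A generic sketch (not the OCRD codes themselves), using fourth-order Runge-Kutta on a hypothetical uniform seepage velocity field:

```python
import numpy as np

def trace_pathline(velocity, x0, dt, n_steps, porosity=0.3):
    """Trace a ground-water pathline by integrating dx/dt = v(x)/n_e
    (seepage velocity) with 4th-order Runge-Kutta.  velocity(x) stands
    in for the Darcy flux interpolated from a finite element solution;
    all parameter values are hypothetical."""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    f = lambda p: velocity(p) / porosity
    for _ in range(n_steps):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(x.copy())
    return np.array(path)

# Uniform flow tilted 45 degrees: the pathline is a straight diagonal.
v = lambda p: np.array([1e-6, 1e-6])          # Darcy flux, m/s
path = trace_pathline(v, x0=[0.0, 0.0], dt=3600.0, n_steps=24)
assert np.allclose(path[-1], [0.288, 0.288])  # 24 h at 1e-6/0.3 m/s
```

In a real verification exercise the analytic pathline plays the role our constant-field check plays here.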

  3. GraDeR: Membrane Protein Complex Preparation for Single-Particle Cryo-EM.

    Science.gov (United States)

    Hauer, Florian; Gerle, Christoph; Fischer, Niels; Oshima, Atsunori; Shinzawa-Itoh, Kyoko; Shimada, Satoru; Yokoyama, Ken; Fujiyoshi, Yoshinori; Stark, Holger

    2015-09-01

    We developed a method, named GraDeR, which substantially improves the preparation of membrane protein complexes for structure determination by single-particle cryo-electron microscopy (cryo-EM). In GraDeR, glycerol gradient centrifugation is used for the mild removal of free detergent monomers and micelles from lauryl maltose-neopentyl glycol detergent stabilized membrane complexes, resulting in monodisperse and stable complexes to which standard processes for water-soluble complexes can be applied. We demonstrate the applicability of the method on three different membrane complexes, including the mammalian FoF1 ATP synthase. For this highly dynamic and fragile rotary motor, we show that GraDeR allows visualizing the asymmetry of the F1 domain, which matches the ground state structure of the isolated domain. Therefore, the present cryo-EM structure of FoF1 ATP synthase provides direct structural evidence for Boyer's binding change mechanism in the context of the intact enzyme. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Enhancements to the Combinatorial Geometry Particle Tracker in the Mercury Monte Carlo Transport Code: Embedded Meshes and Domain Decomposition

    International Nuclear Information System (INIS)

    Greenman, G.M.; O'Brien, M.J.; Procassini, R.J.; Joy, K.I.

    2009-01-01

    Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculations of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement relates to the addition of parallelism within the CG tracker via spatial domain decomposition. This permits calculations of problems with a large degree of geometric complexity, which are not possible through particle parallelism alone. In this method, the cells are decomposed across processors and a particle is communicated to an adjacent processor when it tracks to an interprocessor boundary. Applications that demonstrate the efficacy of these new methods are presented

  5. LPIC++. A parallel one-dimensional relativistic electromagnetic particle-in-cell code for simulating laser-plasma-interaction

    International Nuclear Information System (INIS)

    Lichters, R.; Pfund, R.E.W.; Meyer-ter-Vehn, J.

    1997-08-01

    The code LPIC++ presented here is based on a one-dimensional, electromagnetic, relativistic PIC code that was originally developed by one of the authors during a PhD thesis at the Max-Planck-Institut fuer Quantenoptik for kinetic simulations of high harmonic generation from overdense plasma surfaces. The code essentially uses the algorithms of Birdsall and Langdon and of Villasenor and Buneman. It is written in C++ in order to be easily extendable and has been parallelized to be able to grow in power linearly with the size of accessible hardware, e.g. massively parallel machines like the Cray T3E. The parallel LPIC++ version uses PVM for communication between processors. PVM is public-domain software and can be downloaded from the World Wide Web. A particular strength of LPIC++ lies in its clear program and data structure, which uses chained lists for the organization of grid cells, enables dynamic adjustment of spatial domain sizes in a very convenient way, and therefore allows easy balancing of processor loads. Particles belonging to one cell are also linked in a chained list and are immediately accessible from that cell. In addition to this convenient type of data organization in a PIC code, the code shows excellent performance in both its single processor and parallel versions. (orig.)
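
    The chained-list organization described above can be sketched as an intrusive singly linked list per cell, which makes moving a particle between cells, and hence rebalancing domains, a cheap pointer operation. A Python sketch of the idea (LPIC++ itself uses C++ structures):

```python
class Particle:
    """Particle carrying an intrusive 'next' link for the per-cell chain."""
    __slots__ = ("x", "u", "next")
    def __init__(self, x, u=0.0):
        self.x, self.u, self.next = x, u, None

class Cell:
    """Grid cell owning a chained list of its particles (a sketch of the
    data organization described above, not LPIC++'s actual classes)."""
    def __init__(self, x_lo, x_hi):
        self.x_lo, self.x_hi, self.head = x_lo, x_hi, None

    def push(self, p):                       # O(1) insertion at the head
        p.next, self.head = self.head, p

    def pop_leavers(self):
        """Unlink and return particles that moved out of this cell; the
        caller relinks each into its new cell -- an O(1) transfer per
        particle, which also makes shifting whole cells between
        processors for load balancing cheap."""
        leavers, keep, p = [], None, self.head
        while p is not None:
            nxt = p.next
            if self.x_lo <= p.x < self.x_hi:
                p.next, keep = keep, p
            else:
                leavers.append(p)
            p = nxt
        self.head = keep
        return leavers

def count(cell):
    n, p = 0, cell.head
    while p is not None:
        n, p = n + 1, p.next
    return n

cells = [Cell(0.0, 1.0), Cell(1.0, 2.0)]
for x in (0.2, 0.6, 1.4):                    # the particle at 1.4 belongs in cell 1
    cells[0].push(Particle(x))
for p in cells[0].pop_leavers():
    cells[1].push(p)
assert (count(cells[0]), count(cells[1])) == (2, 1)
```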

  6. Study on EM-parameters and EM-wave absorption properties of materials with bio-flaky particles added

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Wenqiang, E-mail: zwqcau@gmail.com [College of Engineering, China Agricultural University, Beijing 100083 (China); Bionic and Micro/Nano/Bio Manufacturing Technology Research Center, Beihang University, Beijing 100191 (China); Zhang, Deyuan; Xu, Yonggang [Bionic and Micro/Nano/Bio Manufacturing Technology Research Center, Beihang University, Beijing 100191 (China); McNaughton, Ryan [Department of Biomedical Engineering, Boston University, Boston 02215 (United States)

    2016-01-01

    Bio-flaky particles, fabricated through deposition of carbonyl iron on the surface of disk-shaped diatomite, demonstrated beneficial effects on electromagnetic parameters. This paper details the improvements to the electromagnetic parameters and absorbing properties of traditional absorbing material generated by the addition of bio-flaky particles. The composites' electromagnetic parameters were measured using the transmission method. Test results confirmed that, with bio-flaky particles added, the composites' permittivity increased due to the high permeability of the bio-flaky particles. Secondly, the permeability of the composites increased as a result of the increased volume content of iron particles. Composites with bio-flaky particles added exhibited excellent absorption properties at 0.5 mm thickness, with a maximum reflection loss of approximately −5.1 dB at 14.4 GHz. - Highlights: • Light-weight absorbing composites were fabricated with bio-flaky particles added. • SEM results show bio-flaky particles could help the arrangement of FCIPs. • Composites' RL could be improved with bio-flaky particles added. • The RL peak moves to lower frequencies with bio-flaky particles added.
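
    Reflection loss figures like the −5.1 dB quoted above are conventionally computed from the measured permittivity and permeability with the metal-backed single-layer transmission-line formula. A sketch with hypothetical material parameters (not the paper's measured values):

```python
import cmath, math

def reflection_loss_db(eps_r, mu_r, f_hz, d_m):
    """Reflection loss of a metal-backed single-layer absorber from
    transmission-line theory:
        Z_in = sqrt(mu_r/eps_r) * tanh(j * 2*pi*f*d/c * sqrt(mu_r*eps_r))
        RL   = 20 * log10(|(Z_in - 1) / (Z_in + 1)|)   [dB, negative]"""
    c = 299792458.0
    z_in = cmath.sqrt(mu_r / eps_r) * cmath.tanh(
        1j * 2 * math.pi * f_hz * d_m / c * cmath.sqrt(mu_r * eps_r))
    return 20 * math.log10(abs((z_in - 1) / (z_in + 1)))

# Hypothetical complex parameters for a 0.5 mm layer at 14.4 GHz
rl = reflection_loss_db(eps_r=20 - 3j, mu_r=1.5 - 0.8j, f_hz=14.4e9, d_m=0.5e-3)
```

Sweeping `f_hz` (or `d_m`) with this formula reproduces the thickness- and frequency-dependent RL curves discussed in the abstract.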

  7. Study on EM-parameters and EM-wave absorption properties of materials with bio-flaky particles added

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Zhang, Deyuan; Xu, Yonggang; McNaughton, Ryan

    2016-01-01

    Bio-flaky particles, fabricated through deposition of carbonyl iron on the surface of disk-shaped diatomite, demonstrated beneficial effects on electromagnetic parameters. This paper details the improvements to the electromagnetic parameters and absorbing properties of traditional absorbing material generated by the addition of bio-flaky particles. The composites' electromagnetic parameters were measured using the transmission method. Test results confirmed that, with bio-flaky particles added, the composites' permittivity increased due to the high permeability of the bio-flaky particles. Secondly, the permeability of the composites increased as a result of the increased volume content of iron particles. Composites with bio-flaky particles added exhibited excellent absorption properties at 0.5 mm thickness, with a maximum reflection loss of approximately −5.1 dB at 14.4 GHz. - Highlights: • Light-weight absorbing composites were fabricated with bio-flaky particles added. • SEM results show bio-flaky particles could help the arrangement of FCIPs. • Composites' RL could be improved with bio-flaky particles added. • The RL peak moves to lower frequencies with bio-flaky particles added.

  8. Biological dose estimation for charged-particle therapy using an improved PHITS code coupled with a microdosimetric kinetic model

    International Nuclear Information System (INIS)

    Sato, Tatsuhiko; Watanabe, Ritsuko; Kase, Yuki; Niita, Koji; Sihver, Lembit

    2009-01-01

    High-energy heavy ions (HZE particles) have become widely used for radiotherapy of tumors owing to their high biological effectiveness. In the treatment planning of such charged-particle therapy, it is necessary to estimate not only the physical but also the biological dose, which is the product of the physical dose and the relative biological effectiveness (RBE). In the Heavy-ion Medical Accelerator in Chiba (HIMAC), the biological dose is estimated by a method proposed by Kanai et al., which is based on the linear-quadratic (LQ) model with its parameters α and β determined by the dose distribution in terms of the unrestricted linear energy transfer (LET). Thus, RBE is simply expressed as a function of LET in their model. However, the RBE of HZE particles cannot be uniquely determined from their LET because of their large cross sections for high-energy δ-ray production. Hence, development of a biological dose estimation model that can explicitly consider the track structure of δ-rays around the trajectory of HZE particles is urgently needed. Microdosimetric quantities such as the lineal energy y are better indices for representing the RBE of HZE particles than LET, since they can express the decrease of ionization densities around their trajectories due to the production of δ-rays. The conceptual difference between LET and y is illustrated in Figure 1. However, the use of microdosimetric quantities in computational dosimetry was severely limited because of the difficulty in calculating their probability densities (PDs) in macroscopic matter. We therefore improved the 3-dimensional particle transport simulation code PHITS, providing it with the capability of estimating the microdosimetric PDs in a macroscopic framework by incorporating a mathematical function that can instantaneously calculate the PDs around the trajectory of HZE particles with precision equivalent to a microscopic track-structure simulation. A new method for estimating biological dose from charged-particle
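
    In LQ-based schemes such as the Kanai et al. method referred to above, the biological dose is the physical dose times an RBE derived from the α and β parameters of ion and reference radiation. A minimal sketch with hypothetical parameter values (not the HIMAC clinical parameters):

```python
import math

def dose_for_survival(alpha, beta, survival):
    """Invert the LQ model alpha*D + beta*D^2 = -ln(S) for the dose D."""
    lnS = -math.log(survival)
    return (-alpha + math.sqrt(alpha ** 2 + 4 * beta * lnS)) / (2 * beta)

def rbe(alpha_ion, beta_ion, alpha_ref, beta_ref, survival=0.1):
    """RBE = reference dose / ion dose giving the same survival level."""
    return (dose_for_survival(alpha_ref, beta_ref, survival)
            / dose_for_survival(alpha_ion, beta_ion, survival))

# Hypothetical LQ parameters (Gy^-1, Gy^-2) for an ion beam vs. X-rays
r = rbe(alpha_ion=0.8, beta_ion=0.05, alpha_ref=0.2, beta_ref=0.05)
bio_dose = 2.0 * r            # biological dose for a 2 Gy physical dose
print(round(r, 2), round(bio_dose, 2))   # -> 2.04 4.08
```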

  9. The OpenMOC method of characteristics neutral particle transport code

    International Nuclear Information System (INIS)

    Boyd, William; Shaner, Samuel; Li, Lulu; Forget, Benoit; Smith, Kord

    2014-01-01

    Highlights: • An open source method of characteristics neutron transport code has been developed. • OpenMOC shows nearly perfect scaling on CPUs and 30× speedup on GPUs. • Nonlinear acceleration techniques demonstrate a 40× reduction in source iterations. • OpenMOC uses modern software design principles within a C++ and Python framework. • Validation with respect to the C5G7 and LRA benchmarks is presented. - Abstract: The method of characteristics (MOC) is a numerical integration technique for partial differential equations, and has seen widespread use for reactor physics lattice calculations. The exponential growth in computing power has finally brought the possibility for high-fidelity full core MOC calculations within reach. The OpenMOC code is being developed at the Massachusetts Institute of Technology to investigate algorithmic acceleration techniques and parallel algorithms for MOC. OpenMOC is a free, open source code written using modern software languages such as C/C++ and CUDA with an emphasis on extensible design principles for code developers and an easy to use Python interface for code users. The present work describes the OpenMOC code and illustrates its ability to model large problems accurately and efficiently
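
    The elementary MOC operation is a transport sweep along one characteristic track, attenuating the angular flux across each flat-source segment. A minimal sketch of that step (a generic illustration, not OpenMOC's implementation):

```python
import math

def sweep_track(psi_in, segments):
    """Transport sweep along one characteristic track.  Across each
    flat-source region the MOC step solution is
        psi_out = psi_in * exp(-sigma*s) + (q/sigma) * (1 - exp(-sigma*s)),
    where segments is a list of (length s, total cross section sigma,
    isotropic source q) tuples (hypothetical values below)."""
    psi = psi_in
    for s, sigma, q in segments:
        att = math.exp(-sigma * s)
        psi = psi * att + q / sigma * (1.0 - att)
    return psi

# A long track through a uniform region relaxes to the asymptotic flux q/sigma.
psi = sweep_track(0.0, [(1.0, 0.5, 2.0)] * 40)
assert abs(psi - 2.0 / 0.5) < 1e-6
```

A full MOC solver repeats this sweep over many azimuthal angles and track spacings, tallying segment contributions to the scalar flux and iterating on the source.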

  10. Application of a Java-based, univel geometry, neutral particle Monte Carlo code to the searchlight problem

    International Nuclear Information System (INIS)

    Charles A. Wemple; Joshua J. Cogliati

    2005-01-01

    A univel geometry, neutral particle Monte Carlo transport code, written entirely in the Java programming language, is under development for medical radiotherapy applications. The code uses ENDF-VI based continuous energy cross section data in a flexible XML format. Full neutron-photon coupling, including detailed photon production and photonuclear reactions, is included. Charged particle equilibrium is assumed within the patient model so that detailed transport of electrons produced by photon interactions may be neglected. External beam and internal distributed source descriptions for mixed neutron-photon sources are allowed. Flux and dose tallies are performed on a univel basis. A four-tap, shift-register-sequence random number generator is used. Initial verification and validation testing of the basic neutron transport routines is underway. The searchlight problem was chosen as a suitable first application because of the simplicity of the physical model. Results show excellent agreement with analytic solutions. Computation times for similar numbers of histories are comparable to other neutron MC codes written in C and FORTRAN

  11. Computer codes for particle accelerator design and analysis: A compendium. Second edition

    International Nuclear Information System (INIS)

    Deaven, H.S.; Chan, K.C.D.

    1990-05-01

    The design of the next generation of high-energy accelerators will probably be done as an international collaborative effort, and it would make sense to establish, either formally or informally, an international center for accelerator codes with branches for maintenance, distribution, and consultation at strategically located accelerator centers around the world. This arrangement could have at least three beneficial effects. It would cut down duplication of effort, provide long-term support for the best codes, and provide a stimulating atmosphere for the evolution of new codes. It does not take much foresight to see that the natural evolution of accelerator design codes is toward the development of so-called Expert Systems, systems capable of taking design specifications of future accelerators and producing specifications for optimized magnetic transport and acceleration components, making a layout, and giving a fairly impartial cost estimate. Such an expert program would use present-day programs such as TRANSPORT, POISSON, and SUPERFISH as tools in the optimization process. Such a program would also serve to codify the experience of two generations of accelerator designers before it is lost as these designers reach retirement age. This document describes 203 codes that originate from 10 countries and are currently in use. The authors feel that this compendium will contribute to the dialogue supporting the international collaborative effort that is taking place in the field of accelerator physics today.

  12. Recent Improvements to the IMPACT-T Parallel Particle Tracking Code

    International Nuclear Information System (INIS)

    Qiang, J.; Pogorelov, I.V.; Ryne, R.

    2006-01-01

    The IMPACT-T code is a parallel three-dimensional quasi-static beam dynamics code for modeling high-brightness beams in photoinjectors and RF linacs. Developed under the US DOE Scientific Discovery through Advanced Computing (SciDAC) program, it includes several key features: a self-consistent calculation of 3D space-charge forces using a shifted and integrated Green function method, multiple energy bins for beams with large energy spread, and models for treating RF standing-wave and traveling-wave structures. In this paper, we report on recent improvements to the IMPACT-T code, including the modeling of traveling-wave structures, short-range transverse and longitudinal wakefields, and longitudinal coherent synchrotron radiation through bending magnets.

  13. Convergence acceleration in the Monte-Carlo particle transport code TRIPOLI-4 in criticality

    International Nuclear Information System (INIS)

    Dehaye, Benjamin

    2014-01-01

    Fields such as criticality studies need to compute certain quantities of interest in neutron physics. Two kinds of codes may be used: deterministic and stochastic. Stochastic codes do not require approximations and are thus more exact; however, they may take a long time to converge with sufficient precision. The work carried out during this thesis aims to build an efficient acceleration strategy in the TRIPOLI-4 code by implementing the zero-variance game. This method requires the adjoint flux. The originality of this work is to compute the adjoint flux directly from a Monte-Carlo simulation, without resorting to external codes, thanks to the fission matrix method. This adjoint flux is then used as an importance map to bias the simulation. (author) [fr
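
    The fission matrix idea described above can be illustrated with a toy sketch (illustrative only, not TRIPOLI-4 code; the 3-region matrix is invented): the forward fission-source shape is the dominant eigenvector of the fission matrix F, and the adjoint flux used as an importance map is the dominant eigenvector of its transpose, obtainable by the same power iteration.

```python
import numpy as np

def dominant_eigvec(mat, n_iter=500, tol=1e-12):
    """Power iteration for the dominant eigenpair of a non-negative matrix."""
    v = np.ones(mat.shape[0]) / mat.shape[0]
    lam = 0.0
    for _ in range(n_iter):
        w = mat @ v
        lam_new = np.linalg.norm(w, 1)  # 1-norm normalization (entries >= 0)
        v = w / lam_new
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, v

# Hypothetical 3-region fission matrix: F[i, j] = expected fission neutrons
# born in region i per fission neutron started in region j.
F = np.array([[1.10, 0.30, 0.05],
              [0.25, 0.90, 0.25],
              [0.05, 0.30, 1.10]])

k_eff, source = dominant_eigvec(F)        # forward: fission-source shape
k_adj, importance = dominant_eigvec(F.T)  # adjoint: importance map for biasing
```

    Both iterations converge to the same eigenvalue (k-effective), since a matrix and its transpose share their spectrum.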

  14. New scope covered by PHITS. Particle and heavy ion transport code system

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Niita, Koji; Iwase, Hiroshi; Sato, Tatsuhiko

    2006-01-01

    PHITS is a general-purpose transport calculation code for high-energy particles from hadrons to heavy ions, built by embedding the JQMD code in NMTC-JAM. An outline of PHITS and many application examples are presented. PHITS has been used for the shielding calculations of J-PARC, GSI, RIA and BigRIPS, and good results have been reported. The evaluation of the radiation exposure of astronauts and aircrew, proton and heavy-ion therapy, and the estimation of soft-error rates in semiconductor devices are explained as application examples. The relation between the event generator mode and the Monte Carlo method, as well as future prospects, are described. (S.Y.)

  15. Modeling an emittance-dominated elliptical sheet beam with a 2½-dimensional particle-in-cell code

    International Nuclear Information System (INIS)

    Carlsten, Bruce E.

    2005-01-01

    Modeling a 3-dimensional (3-D) elliptical beam with a 2½-D particle-in-cell (PIC) code requires a reduction in the beam parameters. The 2½-D PIC code can only model the center slice of the sheet beam, but that can still provide useful information about the beam transport and distribution evolution, even if the beam is emittance dominated. The reduction of beam parameters and the resulting interpretation of the simulation is straightforward, but not trivial. In this paper, we describe the beam parameter reduction and emittance issues related to the initial beam distribution. As a numerical example, we use the case of a sheet beam designed for use in a planar traveling-wave amplifier for high-power RF generation from 95 to 300 GHz [Carlsten et al., IEEE Trans. Plasma Sci. 33 (2005) 85]. These numerical techniques also apply to modeling high-energy elliptical bunches in RF accelerators.

  16. Parallel Finite Element Particle-In-Cell Code for Simulations of Space-charge Dominated Beam-Cavity Interactions

    International Nuclear Information System (INIS)

    Candel, A.; Kabel, A.; Ko, K.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.

    2007-01-01

    Over the past years, SLAC's Advanced Computations Department (ACD) has developed the parallel finite element (FE) particle-in-cell code Pic3P (Pic2P) for simulations of beam-cavity interactions dominated by space-charge effects. As opposed to standard space-charge dominated beam transport codes, which are based on the electrostatic approximation, Pic3P (Pic2P) includes space-charge, retardation and boundary effects as it self-consistently solves the complete set of Maxwell-Lorentz equations using higher-order FE methods on conformal meshes. Use of efficient, large-scale parallel processing allows for the modeling of photoinjectors with unprecedented accuracy, aiding the design and operation of the next generation of accelerator facilities. Applications to the Linac Coherent Light Source (LCLS) RF gun are presented.

  17. Simulation of thermal-neutron-induced single-event upset using particle and heavy-ion transport code system

    International Nuclear Information System (INIS)

    Arita, Yutaka; Kihara, Yuji; Mitsuhasi, Junichi; Niita, Koji; Takai, Mikio; Ogawa, Izumi; Kishimoto, Tadafumi; Yoshihara, Tsutomu

    2007-01-01

    The simulation of a thermal-neutron-induced single-event upset (SEU) was performed on a 0.4-μm-design-rule 4 Mbit static random access memory (SRAM) using the Particle and Heavy-Ion Transport code System (PHITS). The SEU rates obtained by the simulation were in very good agreement with experimental results. PHITS is a useful tool for simulating SEUs in semiconductor devices. To further improve the accuracy of the simulation, additional methods for tallying the energy deposition are required in PHITS. (author)

  18. Controlled dense coding for continuous variables using three-particle entangled states

    CERN Document Server

    Jing Zhang; Kun Chi Peng; 10.1103/PhysRevA.66.032318

    2002-01-01

    A simple scheme to realize quantum controlled dense coding with bright tripartite entangled light generated from nondegenerate optical parametric amplifiers is proposed in this paper. The quantum channel between Alice and Bob is controlled by Claire. As a local oscillator and balanced homodyne detector are not needed, the proposed protocol is easy to realize experimentally. (15 refs)

  19. A magnetostatic particle code and its application to studies of anomalous current penetration of a plasma

    International Nuclear Information System (INIS)

    Lin, A.T.; Pritchett, P.L.; Dawson, J.M.

    1976-01-01

    A large number of important plasma problems involve self-consistent magnetic fields. For disturbances which propagate slowly compared to the velocity of light, the magnetostatic approximation (Darwin model) suffices. Based on the Darwin model, a particle model has been developed to investigate such problems. (GG) [de

  20. On the integration of equations of motion for particle-in-cell codes

    International Nuclear Information System (INIS)

    Fuchs, V.; Gunn, J.P.

    2006-01-01

    An area-preserving implementation of the 2nd-order Runge-Kutta integration method for equations of motion is presented. For forces independent of velocity the scheme possesses the same numerical simplicity and stability as the leapfrog method, and, unlike leapfrog, it is not implicit for forces which do depend on velocity. It can therefore easily be applied where the leapfrog method in general cannot. We discuss the stability of the new scheme and test its performance in calculations of particle motion in three cases of interest: first, in the ubiquitous and numerically demanding example of nonlinear interaction of particles with a propagating plane wave; second, in the case of particle motion in a static magnetic field; and third, in a nonlinear dissipative case leading to a limit cycle. We compare computed orbits with exact orbits and with results from the leapfrog and other low-order integration schemes. Of special interest is the role of intrinsic stochasticity introduced by time differencing, which can destroy orbits of an otherwise exactly integrable system and therefore constitutes a restriction on the applicability of an integration scheme in such a context [A. Friedman, S.P. Auerbach, J. Comput. Phys. 93 (1991) 171]. In particular, we show that for a plane wave the new scheme proposed herein can be reduced to a symmetric standard map. This leads to the nonlinear stability condition Δt·ω_B ≤ 1, where Δt is the time step and ω_B the particle bounce frequency.
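
    The importance of area preservation emphasized above can be illustrated with a minimal generic sketch (a harmonic oscillator with standard leapfrog and a plain midpoint RK2, not the authors' modified scheme): the area-preserving integrator keeps the energy bounded, whereas the non-area-preserving RK2 gains energy secularly, by a factor 1 + (ωΔt)⁴/4 per step.

```python
import numpy as np

def leapfrog(x, v, omega, dt, steps):
    """Kick-drift-kick leapfrog for x'' = -omega^2 x (area-preserving)."""
    for _ in range(steps):
        v += -0.5 * dt * omega**2 * x
        x += dt * v
        v += -0.5 * dt * omega**2 * x
    return x, v

def rk2(x, v, omega, dt, steps):
    """Midpoint RK2 (not area-preserving): energy drifts secularly."""
    for _ in range(steps):
        xm = x + 0.5 * dt * v                 # midpoint state
        vm = v + 0.5 * dt * (-omega**2 * x)
        x, v = x + dt * vm, v + dt * (-omega**2 * xm)
    return x, v

omega, dt, steps = 1.0, 0.1, 10000
x1, v1 = leapfrog(1.0, 0.0, omega, dt, steps)
x2, v2 = rk2(1.0, 0.0, omega, dt, steps)
e_lf = 0.5 * (v1**2 + omega**2 * x1**2)  # stays near the initial energy 0.5
e_rk = 0.5 * (v2**2 + omega**2 * x2**2)  # grows by roughly exp(steps*(w*dt)^4/4)
```

    With ωΔt = 0.1 over 10⁴ steps, the leapfrog energy stays within a fraction of a percent of 0.5, while the RK2 energy has grown by tens of percent.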

  1. Misconception regarding conventional coupling of fields and particles in XFEL codes

    Energy Technology Data Exchange (ETDEWEB)

    Geloni, Gianluca [European XFEL GmbH, Hamburg (Germany); Kocharyan, Vitali; Saldin, Evgeni [DESY Hamburg (Germany)

    2016-01-15

    Maxwell theory is usually treated in the laboratory frame under the standard time order, that is, the usual light-signal clock synchronization. In contrast, particle tracking in the laboratory frame usually treats time as an independent variable. As a result, we argue here that the evolution of electron beams is usually treated according to the absolute time convention, i.e. using a different time order defined by a non-standard clock synchronization procedure. This essential point has never received attention in the accelerator community. There are two possible ways of coupling fields and particles in this situation. The first, Lorentz's prerelativistic way, consists in a 'translation' of Maxwell's electrodynamics to the absolute time world-picture. The second, Einstein's way, consists in a 'translation' of particle tracking results to the electromagnetic world-picture, obeying the standard time order. Conventional particle tracking shows that the electron beam direction changes after a transverse kick, while the orientation of the microbunching phase front stays unvaried. Here we show that in the ultrarelativistic asymptotic limit v → c, the orientation of the planes of simultaneity, i.e. the orientation of the microbunching fronts, is always perpendicular to the electron beam velocity when the evolution of the modulated electron beam is treated under Einstein's time order. This effect allows for the production of coherent undulator radiation from a modulated electron beam in the kicked direction without suppression. We hold a recent FEL study at the LCLS as direct experimental evidence that the microbunching wavefront indeed readjusts its direction after the electron beam is kicked by a large angle, limited only by the beamline aperture. In a previous paper we quantitatively described this result by invoking the aberration of light effect, which corresponds to Lorentz's way of coupling fields and particles. The purpose of

  2. Misconception regarding conventional coupling of fields and particles in XFEL codes

    International Nuclear Information System (INIS)

    Geloni, Gianluca; Kocharyan, Vitali; Saldin, Evgeni

    2016-01-01

    Maxwell theory is usually treated in the laboratory frame under the standard time order, that is, the usual light-signal clock synchronization. In contrast, particle tracking in the laboratory frame usually treats time as an independent variable. As a result, we argue here that the evolution of electron beams is usually treated according to the absolute time convention, i.e. using a different time order defined by a non-standard clock synchronization procedure. This essential point has never received attention in the accelerator community. There are two possible ways of coupling fields and particles in this situation. The first, Lorentz's prerelativistic way, consists in a 'translation' of Maxwell's electrodynamics to the absolute time world-picture. The second, Einstein's way, consists in a 'translation' of particle tracking results to the electromagnetic world-picture, obeying the standard time order. Conventional particle tracking shows that the electron beam direction changes after a transverse kick, while the orientation of the microbunching phase front stays unvaried. Here we show that in the ultrarelativistic asymptotic limit v → c, the orientation of the planes of simultaneity, i.e. the orientation of the microbunching fronts, is always perpendicular to the electron beam velocity when the evolution of the modulated electron beam is treated under Einstein's time order. This effect allows for the production of coherent undulator radiation from a modulated electron beam in the kicked direction without suppression. We hold a recent FEL study at the LCLS as direct experimental evidence that the microbunching wavefront indeed readjusts its direction after the electron beam is kicked by a large angle, limited only by the beamline aperture. In a previous paper we quantitatively described this result by invoking the aberration of light effect, which corresponds to Lorentz's way of coupling fields and particles. The purpose of

  3. Tripoli-4, a three-dimensional poly-kinetic particle transport Monte-Carlo code

    International Nuclear Information System (INIS)

    Both, J.P.; Lee, Y.K.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B.; Soldevila, M.

    2003-01-01

    In this update of the Monte-Carlo transport code Tripoli-4, we list and describe its current main features. The code computes coupled neutron-photon propagation as well as the electron-photon cascade shower. While providing the user with common biasing techniques, it also implements an automatic weighting scheme. Tripoli-4 enables the user to compute the following physical quantities: a flux, a multiplication factor, a current, a reaction rate, a dose equivalent rate, as well as energy deposition and recoil energies. For each physical quantity of interest, the Monte-Carlo simulation offers different types of estimators. Tripoli-4 has support for execution in parallel mode. Special features and applications are also presented.

  4. Tripoli-4, a three-dimensional poly-kinetic particle transport Monte-Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Both, J P; Lee, Y K; Mazzolo, A; Peneliau, Y; Petit, O; Roesslinger, B; Soldevila, M [CEA Saclay, Dir. de l' Energie Nucleaire (DEN/DM2S/SERMA/LEPP), 91 - Gif sur Yvette (France)

    2003-07-01

    In this update of the Monte-Carlo transport code Tripoli-4, we list and describe its current main features. The code computes coupled neutron-photon propagation as well as the electron-photon cascade shower. While providing the user with common biasing techniques, it also implements an automatic weighting scheme. Tripoli-4 enables the user to compute the following physical quantities: a flux, a multiplication factor, a current, a reaction rate, a dose equivalent rate, as well as energy deposition and recoil energies. For each physical quantity of interest, the Monte-Carlo simulation offers different types of estimators. Tripoli-4 has support for execution in parallel mode. Special features and applications are also presented.

  5. The Karlsruhe code MODINA for model independent analysis of elastic scattering of spinless particles

    International Nuclear Information System (INIS)

    Gils, H.J.

    1983-12-01

    The Karlsruhe code MODINA (KfK 3063, published November 1980) has been extended, in particular with respect to new approximations in the folding models and to the calculation of errors in the Fourier-Bessel potentials. The corresponding subroutines, which replace previous ones, are compiled in this first supplement. The listings of the fit-routine package FITEX, missing in the first publication of MODINA, are also included now. (orig.) [de

  6. Towards the optimization of a gyrokinetic Particle-In-Cell (PIC) code on large-scale hybrid architectures

    International Nuclear Information System (INIS)

    Ohana, N; Lanti, E; Tran, T M; Brunner, S; Hariri, F; Villard, L; Jocksch, A; Gheller, C

    2016-01-01

    With the aim of enabling state-of-the-art gyrokinetic PIC codes to benefit from the performance of recent multithreaded devices, we developed an application based on a platform called the “PIC-engine” [1, 2, 3], which embeds simplified basic features of the PIC method. The application solves the gyrokinetic equations in a sheared plasma slab using B-spline finite elements up to fourth order to represent the self-consistent electrostatic field. Preliminary studies of the so-called Particle-In-Fourier (PIF) approach, which uses Fourier modes as basis functions in the periodic dimensions of the system instead of the real-space grid, show that this method can be faster than PIC for simulations with a small number of Fourier modes. Similarly to the PIC-engine, multiple levels of parallelism have been implemented using MPI+OpenMP [2] and MPI+OpenACC [1], the latter exploiting the computational power of GPUs without requiring complete code rewriting. It is shown that sorting particles [3] can lead to performance improvement by increasing data locality and vectorizing grid memory access. Weak scalability tests have been successfully run on the GPU-equipped Cray XC30 Piz Daint (at CSCS) up to 4,096 nodes. The reduced time-to-solution will enable more realistic and thus more computationally intensive simulations of turbulent transport in magnetic fusion devices. (paper)
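
    The particle-sorting optimization mentioned above can be sketched in a few lines (illustrative NumPy, not the PIC-engine code): reordering particles by cell index makes the particle data of each cell contiguous in memory, which improves cache locality and allows grid memory accesses to be vectorized.

```python
import numpy as np

def sort_particles_by_cell(pos, ncells, length):
    """Reorder particles so that those in the same grid cell are contiguous.

    pos: 1-D positions in [0, length). Returns (sorted_pos, cell_start),
    where particles of cell c occupy sorted_pos[cell_start[c]:cell_start[c+1]]
    (a CSR-style layout).
    """
    cell = np.floor(pos / length * ncells).astype(np.int64)
    order = np.argsort(cell, kind="stable")  # an O(N) counting sort also works
    counts = np.bincount(cell, minlength=ncells)
    cell_start = np.concatenate(([0], np.cumsum(counts)))
    return pos[order], cell_start

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=1000)
sorted_pos, start = sort_particles_by_cell(pos, ncells=16, length=1.0)
# Charge deposition can now sweep cells in order, touching each cell's
# grid data once while streaming through a contiguous particle slice.
```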

  7. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for PKA energy spectra and heating number under neutron irradiation

    International Nuclear Information System (INIS)

    Iwamoto, Y.; Ogawa, T.

    2016-01-01

    The modelling of damage in materials irradiated by neutrons is needed to understand the mechanism of radiation damage in fission and fusion reactor facilities. Molecular dynamics simulations of damage cascades with full atomic interactions require information about the energy distribution of the primary knock-on atoms (PKAs). The most common way to calculate PKA energy spectra under low-energy neutron irradiation is to use the nuclear data processing code NJOY2012, which calculates group-to-group recoil cross section matrices from nuclear data libraries in the ENDF format, i.e. from the energy and angular recoil distributions for many reactions. After the NJOY2012 step, SPKA6C is employed to produce PKA energy spectra by combining the recoil cross section matrices with an incident neutron energy spectrum. However, an intercomparison of different processes and nuclear data libraries has not been carried out yet. In particular, the higher energy (~5 MeV) of the incident neutrons, compared to fission, opens many reaction channels, which produces a complex distribution of PKAs in energy and type. Recently, we have developed the event generator mode (EGM) in the Particle and Heavy Ion Transport code System PHITS for neutron-induced reactions in the energy region below 20 MeV. The main feature of EGM is to produce PKAs while keeping energy and momentum conservation in each reaction. It is used for event-by-event analysis in application fields such as soft-error analysis in semiconductors, microdosimetry in the human body, and estimation of displacements per atom (DPA) values in metals. The purpose of this work is to identify the differences in PKA spectra and heating numbers related to kerma between the calculation methods PHITS-EGM and NJOY2012+SPKA6C with the different libraries TENDL-2015, ENDF/B-VII.1 and JENDL-4.0 for fusion-relevant materials.

  8. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for recoil cross section spectra under neutron irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Iwamoto, Yosuke, E-mail: iwamoto.yosuke@jaea.go.jp; Ogawa, Tatsuhiko

    2017-04-01

    Because primary knock-on atoms (PKAs) create point defects and clusters in materials that are irradiated with neutrons, it is important to validate the calculations of recoil cross section spectra that are used to estimate radiation damage in materials. Here, the recoil cross section spectra of fission- and fusion-relevant materials were calculated using the Event Generator Mode (EGM) of the Particle and Heavy Ion Transport code System (PHITS) and also using the data processing code NJOY2012 with the nuclear data libraries TENDL-2015, ENDF/B-VII.1, and JEFF-3.2. The heating number, which is the integral of the recoil cross section spectra, was also calculated using PHITS-EGM and compared with data extracted from the ACE files of TENDL-2015, ENDF/B-VII.1, and JENDL-4.0. In general, only a small difference was found between the PKA spectra of PHITS + TENDL-2015 and NJOY + TENDL-2015. From analyzing the recoil cross section spectra extracted from the nuclear data libraries using NJOY2012, we found that the recoil cross section spectra were incorrect for ⁷²Ge, ⁷⁵As, ⁸⁹Y, and ¹⁰⁹Ag in the ENDF/B-VII.1 library, and for ⁹⁰Zr and ⁵⁵Mn in the JEFF-3.2 library. From analyzing the heating number, we found that the data extracted from the ACE file of TENDL-2015 were problematic for all nuclides in the neutron capture region because of incorrect data regarding the emitted gamma energy. However, PHITS + TENDL-2015 can calculate PKA spectra and heating numbers correctly.
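
    As a simplified illustration of the heating number defined above as an integral of the recoil cross-section spectrum (not the PHITS or NJOY implementation), a binned PKA spectrum can be reduced to a total cross section and a mean recoil energy; the flat toy spectrum is an assumption chosen so the answer is known exactly.

```python
import numpy as np

def spectrum_integrals(T_edges, dsigma_dT):
    """Integrate a binned recoil (PKA) spectrum.

    T_edges: recoil-energy bin edges (eV); dsigma_dT: differential recoil
    cross section per bin (barn/eV). Returns (sigma_total, mean_recoil_energy).
    """
    dT = np.diff(T_edges)
    T_mid = 0.5 * (T_edges[:-1] + T_edges[1:])
    sigma = np.sum(dsigma_dT * dT)                 # integral of the spectrum
    energy_integral = np.sum(T_mid * dsigma_dT * dT)
    return sigma, energy_integral / sigma

# Toy flat spectrum between 0 and 100 eV: the mean recoil energy is 50 eV.
edges = np.linspace(0.0, 100.0, 101)
sigma, T_mean = spectrum_integrals(edges, np.full(100, 0.02))
```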

  9. Agile deployment and code coverage testing metrics of the boot software on-board Solar Orbiter's Energetic Particle Detector

    Science.gov (United States)

    Parra, Pablo; da Silva, Antonio; Polo, Óscar R.; Sánchez, Sebastián

    2018-02-01

    In this day and age, successful embedded critical software needs agile and continuous development and testing procedures. This paper presents the overall testing and code coverage metrics obtained during the unit testing procedure carried out to verify the correctness of the boot software that will run in the Instrument Control Unit (ICU) of the Energetic Particle Detector (EPD) on-board Solar Orbiter. The ICU boot software is a critical part of the project, so its verification should be addressed at an early development stage: any test case missed in this process may affect the quality of the overall on-board software. According to the European Cooperation for Space Standardization (ECSS) standards, testing this kind of critical software must cover 100% of the source code statements and decision paths. This leads to the complete testing of the fault tolerance and recovery mechanisms that have to resolve every possible memory corruption or communication error brought about by the space environment. The introduced procedure enables fault injection from the beginning of the development process and makes it possible to fulfill the exacting code coverage demands on the boot software.

  10. Plasma simulation by macroscale, electromagnetic particle code and its application to current-drive by relativistic electron beam injection

    International Nuclear Information System (INIS)

    Tanaka, M.; Sato, T.

    1985-01-01

    A new implicit macroscale electromagnetic particle simulation code (MARC), which allows a large scale length and time step in multiple dimensions, is described. Finite-mass electrons and ions are used with a relativistic version of the equation of motion. The electromagnetic fields are solved using the complete set of Maxwell equations. For time integration of the field equations, a decentered (backward) finite-differencing scheme is employed together with a predictor-corrector method for low noise and super-stability. It is shown both analytically and numerically that the present scheme efficiently suppresses high-frequency electrostatic and electromagnetic waves in a plasma, while accurately reproducing low-frequency waves such as ion acoustic waves, Alfven waves and fast magnetosonic waves. The scheme has been coded in three dimensions for application to a new tokamak current-drive method based on relativistic electron beam injection. Some remarks on proper application of the macroscale code are presented in this paper.
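
    Why backward differencing suppresses high-frequency waves while preserving low-frequency ones can be seen from a generic amplification-factor sketch (plain backward Euler on a single oscillating mode, not the MARC scheme itself):

```python
import numpy as np

def backward_euler_amplification(omega, dt):
    """Per-step amplitude factor of backward Euler applied to y' = i*omega*y.

    |G| = 1 / sqrt(1 + (omega*dt)^2): modes with omega*dt >> 1 (unresolved,
    high-frequency waves) are strongly damped every step, while modes with
    omega*dt << 1 (resolved, low-frequency waves) are left nearly untouched.
    """
    return 1.0 / np.sqrt(1.0 + (omega * dt) ** 2)

dt = 1.0
g_low = backward_euler_amplification(0.01, dt)    # slow wave: barely damped
g_high = backward_euler_amplification(100.0, dt)  # fast wave: ~99% removed/step
```

    This is the behavior a macroscale code wants: the time step can exceed the periods of the fast waves without those waves going unstable, because they are damped away instead.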

  11. Vectorization of a particle code used in the simulation of rarefied hypersonic flow

    Science.gov (United States)

    Baganoff, D.

    1990-01-01

    A limitation of the direct simulation Monte Carlo (DSMC) method is that it does not allow efficient use of the vector architectures that predominate in current supercomputers. Consequently, the problems that can be handled are limited to those of one- and two-dimensional flows. This work focuses on a reformulation of the DSMC method with the objective of designing a procedure that is optimized for the vector architectures found on machines such as the Cray-2. In addition, it focuses on finding a better balance between algorithmic complexity and the total number of particles employed in a simulation, so that the overall performance of a particle simulation scheme can be greatly improved. Simulations of the flow about a 3D blunt body are performed with 10⁷ particles and 4 × 10⁵ mesh cells. Good statistics are obtained with time averaging over 800 time steps using 4.5 h of Cray-2 single-processor CPU time.
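
    A minimal sketch of the loop-free, whole-array style that vector machines reward (illustrative NumPy, not Baganoff's reformulation) is a DSMC-type free-flight move with specular wall reflection:

```python
import numpy as np

def move_particles(x, v, dt, xmax):
    """Vectorized free-flight move with specular walls at 0 and xmax.

    Whole arrays are updated at once with boolean masks instead of a
    per-particle loop; assumes at most one wall crossing per step.
    """
    x = x + v * dt
    low = x < 0.0
    x[low] = -x[low]                # reflect position off the left wall
    v[low] = -v[low]                # and reverse the velocity
    high = x > xmax
    x[high] = 2.0 * xmax - x[high]  # same for the right wall
    v[high] = -v[high]
    return x, v

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 100000)
v = rng.normal(0.0, 1.0, 100000)
x, v = move_particles(x, v, 1e-2, 1.0)
```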

  12. Introducing a distributed unstructured mesh into gyrokinetic particle-in-cell code, XGC

    Science.gov (United States)

    Yoon, Eisung; Shephard, Mark; Seol, E. Seegyoung; Kalyanaraman, Kaushik

    2017-10-01

    XGC has shown good scalability on large leadership supercomputers. The current production version uses a copy of the entire unstructured finite element mesh on every MPI rank. Although this is an obvious scalability issue if mesh sizes are to be dramatically increased, the current approach is also not optimal with respect to the data locality of particles and mesh information. To address these issues we have initiated the development of a distributed-mesh PIC method. This approach directly addresses the base scalability issue with respect to mesh size and, through the use of a mesh-entity-centric view of the particle-mesh relationship, provides opportunities to address the data locality needs of many-core and GPU-supported heterogeneous systems. The parallel mesh PIC capabilities are being built on the Parallel Unstructured Mesh Infrastructure (PUMI). The presentation will first overview the form of mesh distribution used and indicate the structures and functions used to support the mesh, the particles, and their interaction. Attention will then focus on the node-level optimizations being carried out to ensure performant operation of all PIC operations on the distributed mesh. Partnership for Edge Physics Simulation (EPSI) Grant No. DE-SC0008449 and Center for Extended Magnetohydrodynamic Modeling (CEMM) Grant No. DE-SC0006618.

  13. Design of tallying function for general purpose Monte Carlo particle transport code JMCT

    International Nuclear Information System (INIS)

    Shangguan Danhua; Li Gang; Deng Li; Zhang Baoyin

    2013-01-01

    A new postponed-accumulation algorithm is proposed. Based on the JCOGIN (J combinatorial geometry Monte Carlo transport infrastructure) framework and the postponed-accumulation algorithm, the tallying function of the general-purpose Monte Carlo neutron-photon transport code JMCT was improved markedly. JMCT achieves a 28% higher tallying efficiency than MCNP 4C for a simple geometry model, and is faster than MCNP 4C by two orders of magnitude for a complicated repeated-structure model. These tallying capabilities lay a firm foundation for reactor analysis and multi-step burnup calculations with JMCT. (authors)

  14. MC21 v.6.0 - A continuous-energy Monte Carlo particle transport code with integrated reactor feedback capabilities

    International Nuclear Information System (INIS)

    Grieshemer, D.P.; Gill, D.F.; Nease, B.R.; Carpenter, D.C.; Joo, H.; Millman, D.L.; Sutton, T.M.; Stedry, M.H.; Dobreff, P.S.; Trumbull, T.H.; Caro, E.

    2013-01-01

    MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed-source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10⁻⁵ eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information. Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each
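
    The phase-filter idea described above can be sketched generically (hypothetical helper, not the MC21 API): a tally accumulates track lengths only for particles that pass user-defined region and energy filters.

```python
def track_length_tally(tracks, region_filter, energy_lo, energy_hi):
    """Accumulate track lengths only for particles passing the phase filters.

    tracks: iterable of (region, energy_MeV, track_length_cm) tuples;
    region_filter: predicate selecting the spatial region of interest.
    """
    total = 0.0
    for region, energy, length in tracks:
        if region_filter(region) and energy_lo <= energy < energy_hi:
            total += length
    return total

# Hypothetical particle tracks: only in-range particles in "core" may score.
tracks = [("core", 1.0, 2.0), ("shield", 1.0, 3.0), ("core", 50.0, 1.0)]
core_score = track_length_tally(tracks, lambda r: r == "core", 0.0, 10.0)
```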

  15. Implementing displacement damage calculations for electrons and gamma rays in the Particle and Heavy-Ion Transport code System

    Science.gov (United States)

    Iwamoto, Yosuke

    2018-03-01

    In this study, the Monte Carlo displacement damage calculation method in the Particle and Heavy-Ion Transport code System (PHITS) was improved to calculate displacements per atom (DPA) values due to irradiation by electrons (or positrons) and gamma rays. For the damage due to electrons and gamma rays, PHITS simulates electromagnetic cascades using the Electron Gamma Shower version 5 (EGS5) algorithm and calculates DPA values using the recoil energies and the McKinley-Feshbach cross section. A comparison of DPA values calculated by PHITS and by the Monte Carlo assisted Classical Method (MCCM) reveals that they are in good agreement for gamma-ray irradiations of silicon and iron at energies below 10 MeV. Above 10 MeV, PHITS can calculate DPA values not only for electrons but also for charged particles produced by photonuclear reactions. In DPA depth distributions under electron and gamma-ray irradiations, build-up effects can be observed near the target's surface. For irradiation of 90-cm-thick carbon by protons with energies of more than 30 GeV, the ratio of the secondary-electron DPA values to the total DPA values is more than 10% and increases with incident energy. In summary, PHITS can calculate DPA values for all particles and materials over a wide energy range: between 1 keV and 1 TeV for electrons, gamma rays, and charged particles, and between 10⁻⁵ eV and 1 TeV for neutrons.
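
    A common convention for turning a recoil (damage) energy into a displacement count is the Norgett-Robinson-Torrens (NRT) model, N_d = 0.8·T_dam/(2·E_d); the sketch below uses it purely for illustration and is not necessarily PHITS's exact implementation (the 40 eV threshold is a typical value for iron, assumed here).

```python
def nrt_displacements(damage_energy_ev, threshold_ev=40.0):
    """NRT estimate of the number of displaced atoms produced by one PKA.

    damage_energy_ev: energy available to elastic collisions (T_dam);
    threshold_ev: displacement threshold energy E_d (assumed 40 eV).
    """
    if damage_energy_ev < threshold_ev:
        return 0.0                      # too little energy: no displacement
    if damage_energy_ev < 2.0 * threshold_ev / 0.8:
        return 1.0                      # single Frenkel-pair regime
    return 0.8 * damage_energy_ev / (2.0 * threshold_ev)
```

    A DPA value then follows by summing such displacement counts over all recoils and dividing by the number of atoms in the tallied volume.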

  16. The dynamics of low-β plasma clouds as simulated by a three-dimensional, electromagnetic particle code

    International Nuclear Information System (INIS)

    Neubert, T.; Miller, R.H.; Buneman, O.; Nishikawa, K.I.

    1992-01-01

    The dynamics of low-β plasma clouds moving perpendicular to an ambient magnetic field in vacuum and in a background plasma is simulated by means of a three-dimensional, electromagnetic, and relativistic particle simulation code. The simulations show the formation of space-charge sheaths at the sides of the cloud, with the associated polarization electric field which facilitates the cross-field propagation, as well as the sheaths at the front and rear ends of the cloud caused by the larger ion Larmor radius, which allows ions to move ahead of and lag behind the electrons as they gyrate. Results on the cloud dynamics and electromagnetic radiation include the following: (1) In a background plasma, electron and ion sheaths expand along the magnetic field at the same rate, whereas in vacuum the electron sheath expands much faster than the ion sheath. (2) Sheath electrons are accelerated up to relativistic energies. This result indicates that artificial plasma clouds released in the ionosphere or magnetosphere may generate optical emissions (aurora) as energetic sheath electrons scatter in the upper atmosphere. (3) The expansion of the electron sheaths is analogous to the ejection of high-intensity electron beams from spacecraft. (4) Second-order and higher-order sheaths are formed which extend out into the ambient plasma. (5) Formation of the sheaths and the polarization field reduces the forward momentum of the cloud. (6) The coherent component of the particle gyromotion is damped in time as the particles establish a forward-directed drift velocity. (7) The coherent particle gyrations generate electromagnetic radiation.

  17. On the numerical dispersion of electromagnetic particle-in-cell code: Finite grid instability

    International Nuclear Information System (INIS)

    Meyers, M.D.; Huang, C.-K.; Zeng, Y.; Yi, S.A.; Albright, B.J.

    2015-01-01

    The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the Electromagnetic PIC model. We rigorously derive the faithful 3-D numerical dispersion relation of the PIC model, for a simple, direct current deposition scheme, which does not conserve electric charge exactly. We then specialize to the Yee FDTD scheme. In particular, we clarify the presence of alias modes in an eigenmode analysis of the PIC model, which combines both discrete and continuous variables. The manner in which the PIC model updates and samples the fields and distribution function, together with the temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme, is explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1-D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction, which is then verified by simulation. We demonstrate that our analysis is readily extendable to charge conserving models.

  18. On the numerical dispersion of electromagnetic particle-in-cell code: Finite grid instability

    Science.gov (United States)

    Meyers, M. D.; Huang, C.-K.; Zeng, Y.; Yi, S. A.; Albright, B. J.

    2015-09-01

    The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the Electromagnetic PIC model. We rigorously derive the faithful 3-D numerical dispersion relation of the PIC model, for a simple, direct current deposition scheme, which does not conserve electric charge exactly. We then specialize to the Yee FDTD scheme. In particular, we clarify the presence of alias modes in an eigenmode analysis of the PIC model, which combines both discrete and continuous variables. The manner in which the PIC model updates and samples the fields and distribution function, together with the temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme, is explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1-D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction, which is then verified by simulation. We demonstrate that our analysis is readily extendable to charge conserving models.
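    The alias modes central to this analysis arise because a grid of spacing Δx cannot distinguish a wavenumber k from k + 2πn/Δx for any integer n. A small illustrative sketch of that aliasing (not derived from the paper's dispersion relation; all names and values are assumptions):

```python
import numpy as np

def grid_aliases(k: float, dx: float, n_max: int = 2):
    """Wavenumbers indistinguishable from k when sampled on a grid of
    spacing dx: k_n = k + 2*pi*n/dx for n = -n_max..n_max (sketch)."""
    return [k + 2.0 * np.pi * n / dx for n in range(-n_max, n_max + 1)]

# A mode and its alias agree at every grid point x_j = j*dx:
dx = 0.1
k = 3.0
k_alias = grid_aliases(k, dx)[3]          # the n = +1 alias
x = dx * np.arange(8)
assert np.allclose(np.sin(k * x), np.sin(k_alias * x))
```

In a PIC code, particles sample fields only at (interpolated) grid locations, so these aliased branches enter the dispersion relation and can beat against physical modes.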

  19. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    Science.gov (United States)

    Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya

    2011-02-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ∼ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  20. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    International Nuclear Information System (INIS)

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-01-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ∼ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  1. A massively parallel method of characteristic neutral particle transport code for GPUs

    International Nuclear Information System (INIS)

    Boyd, W. R.; Smith, K.; Forget, B.

    2013-01-01

    Over the past 20 years, parallel computing has enabled computers to grow ever larger and more powerful while scientific applications have advanced in sophistication and resolution. This trend is being challenged, however, as the power consumption for conventional parallel computing architectures has risen to unsustainable levels and memory limitations have come to dominate compute performance. Heterogeneous computing platforms, such as Graphics Processing Units (GPUs), are an increasingly popular paradigm for solving these issues. This paper explores the applicability of GPUs for deterministic neutron transport. A 2D method of characteristics (MOC) code - OpenMOC - has been developed with solvers for both shared memory multi-core platforms as well as GPUs. The multi-threading and memory locality methodologies for the GPU solver are presented. Performance results for the 2D C5G7 benchmark demonstrate a 25-35× speedup for MOC on the GPU. The lessons learned from this case study will provide the basis for further exploration of MOC on GPUs as well as design decisions for hardware vendors exploring technologies for the next generation of machines for scientific computing. (authors)
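    The core operation of an MOC sweep is attenuating the angular flux along each track segment through a flat-source region. A minimal sketch of that standard update (illustrative only, not OpenMOC's actual implementation):

```python
import math

def moc_segment(psi_in: float, sigma_t: float, q: float, s: float) -> float:
    """Angular flux at the exit of one characteristic segment of length s
    through a flat-source region with total cross section sigma_t and
    isotropic source q (standard step-characteristics update)."""
    att = math.exp(-sigma_t * s)
    return psi_in * att + (q / sigma_t) * (1.0 - att)

# Repeated long segments relax the flux toward the ratio q / sigma_t:
psi = 0.0
for _ in range(50):
    psi = moc_segment(psi, sigma_t=1.0, q=2.0, s=1.0)
```

On a GPU, thousands of such independent segment updates (one per track and angle) are what map naturally onto the many-thread execution model.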

  2. A fully-implicit Particle-In-Cell Monte Carlo Collision code for the simulation of inductively coupled plasmas

    Science.gov (United States)

    Mattei, S.; Nishida, K.; Onai, M.; Lettry, J.; Tran, M. Q.; Hatayama, A.

    2017-12-01

    We present a fully-implicit electromagnetic Particle-In-Cell Monte Carlo collision code, called NINJA, written for the simulation of inductively coupled plasmas. NINJA employs a kinetic enslaved Jacobian-Free Newton Krylov method to solve self-consistently the interaction between the electromagnetic field generated by the radio-frequency coil and the plasma response. The simulated plasma includes a kinetic description of charged and neutral species as well as the collision processes between them. The algorithm allows simulations with cell sizes much larger than the Debye length and time steps in excess of the Courant-Friedrichs-Lewy condition whilst preserving the conservation of the total energy. The code is applied to the simulation of the plasma discharge of the Linac4 H- ion source at CERN. Simulation results of plasma density, temperature and EEDF are discussed and compared with optical emission spectroscopy measurements. A systematic study of the energy conservation as a function of the numerical parameters is presented.

  3. Development of high performance particle in cell code for the exascale age

    Science.gov (United States)

    Lapenta, Giovanni; Amaya, Jorge; Gonzalez, Diego; Deep-Est H2020 Consortium Collaboration

    2017-10-01

    Magnetized plasmas are most effectively described by magneto-hydrodynamics, MHD, a fluid theory based on describing some fields defined in space: electromagnetic fields, density, velocity and temperature of the plasma. However, microphysics processes need kinetic theory, where statistical distributions of particles are governed by the Boltzmann equation. While fluid models are based on the ordinary space and time, kinetic models require a six dimensional space, called phase space, besides time. The two methods are not separated but rather interact to determine the system evolution. Arriving at a single self-consistent model is the goal of our research. We present a new approach developed with the goal of extending the reach of kinetic models to the fluid scales. Kinetic models are a higher order description and all fluid effects are included in them. However, the cost in terms of computing power is much higher and it has been so far prohibitively expensive to treat space weather events fully kinetically. We have now designed a new method capable of reducing that cost by several orders of magnitude making it possible for kinetic models to study macroscopic systems. H2020 Deep-EST consortium (European Commission).

  4. MCNP-DSP, Monte Carlo Neutron-Particle Transport Code with Digital Signal Processing

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: MCNP-DSP is recommended only for experienced MCNP users working with subcritical measurements. It is a modification of the Los Alamos National Laboratory's Monte Carlo code MCNP4a that is used to simulate a variety of subcritical measurements. The DSP version was developed to simulate frequency analysis measurements, correlation (Rossi-α) measurements, pulsed neutron measurements, Feynman variance measurements, and multiplicity measurements. CCC-700/MCNP4C is recommended for general purpose calculations. 2 - Methods: MCNP-DSP performs calculations very similarly to MCNP and uses the same generalized geometry capabilities of MCNP. MCNP-DSP can only be used with the continuous-energy cross-section data. A variety of source and detector options are available. However, unlike standard MCNP, the source and detector options are limited to those described in the manual because these options are specified in the MCNP-DSP extra data file. MCNP-DSP is used to obtain the time-dependent response of detectors that are modeled in the simulation geometry. The detectors represent actual detectors used in measurements. These time-dependent detector responses are used to compute a variety of quantities such as frequency analysis signatures, correlation signatures, multiplicity signatures, etc., between detectors or sources and detectors. Energy ranges are 0-60 MeV for neutrons (data generally only available up to 20 MeV) and 1 keV - 1 GeV for photons and electrons. 3 - Restrictions on the complexity of the problem: None noted
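    Among the listed signatures, the Feynman variance statistic has a particularly compact definition: the variance-to-mean ratio of counts per time gate, minus one, which vanishes for an uncorrelated (Poisson) source and rises above zero when fission chains correlate the counts. A small illustrative sketch of the statistic itself (not MCNP-DSP code):

```python
def feynman_y(counts):
    """Feynman variance-to-mean statistic for a list of counts per gate.
    Y = var/mean - 1; Y = 0 for a Poisson (uncorrelated) source and
    Y > 0 in the presence of correlated (e.g. fission-chain) counts."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n  # population variance
    return var / mean - 1.0
```

In a measurement (or its simulation), Y is evaluated as a function of gate width, and the shape of that curve carries the multiplication information.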

  5. Implementation of Japanese male and female tomographic phantoms to multi-particle Monte Carlo code for ionizing radiation dosimetry

    International Nuclear Information System (INIS)

    Lee, Choonsik; Nagaoka, Tomoaki; Lee, Jai-Ki

    2006-01-01

    Japanese male and female tomographic phantoms, which were originally developed for radio-frequency electromagnetic-field dosimetry, were implemented into a multi-particle Monte Carlo transport code to evaluate realistic dose distributions in the human body exposed to a radiation field. The Japanese tomographic phantoms, developed from whole-body magnetic resonance images of average Japanese adult males and females, were processed as follows for implementation into the general-purpose multi-particle Monte Carlo code MCNPX2.5. The original array sizes of the Japanese male and female phantoms, 320 x 160 x 866 voxels and 320 x 160 x 804 voxels, respectively, were reduced to 320 x 160 x 433 voxels and 320 x 160 x 402 voxels owing to the memory limitations of MCNPX2.5. The 3D voxel array of each phantom was processed using the built-in repeated structure algorithm, where the human anatomy is described by a repeated lattice of tiny cubes containing the material composition and organ index number. The original phantom data were converted into an ASCII file that can be ported directly into the lattice card of the MCNPX2.5 input deck by using an in-house code. A total of 30 material compositions obtained from International Commission on Radiation Units and Measurement (ICRU) report 46 were assigned to the 54 and 55 organs and tissues in the male and female phantoms, respectively, and imported into the material card of MCNPX2.5 along with the corresponding cross-section data. Illustrative calculations of the absorbed doses for 26 internal organs and of the effective dose were performed for idealized broad parallel photon and neutron beams in anterior-posterior irradiation geometry, which is typical for workers at nuclear power plants. The results were compared with data from other Japanese and Caucasian tomographic phantoms and from International Commission on Radiological Protection (ICRP) report 74. The further investigation of the difference in organ dose and effective dose among tomographic

  6. Simulation of Alfvén eigenmode bursts using a hybrid code for nonlinear magnetohydrodynamics and energetic particles

    Science.gov (United States)

    Todo, Y.; Berk, H. L.; Breizman, B. N.

    2012-03-01

    A hybrid simulation code for nonlinear magnetohydrodynamics (MHD) and energetic-particle dynamics has been extended to simulate recurrent bursts of Alfvén eigenmodes by implementing the energetic-particle source, collisions and losses. The Alfvén eigenmode bursts with synchronization of multiple modes and beam ion losses at each burst are successfully simulated with nonlinear MHD effects for the physics condition similar to a reduced simulation for a TFTR experiment (Wong et al 1991 Phys. Rev. Lett. 66 1874, Todo et al 2003 Phys. Plasmas 10 2888). It is demonstrated with a comparison between nonlinear MHD and linear MHD simulation results that the nonlinear MHD effects significantly reduce both the saturation amplitude of the Alfvén eigenmodes and the beam ion losses. Two types of time evolution are found depending on the MHD dissipation coefficients, namely viscosity, resistivity and diffusivity. The Alfvén eigenmode bursts take place for higher dissipation coefficients with roughly 10% drop in stored beam energy and the maximum amplitude of the dominant magnetic fluctuation harmonic δBm/n/B ~ 5 × 10-3 at the mode peak location inside the plasma. Quadratic dependence of beam ion loss rate on magnetic fluctuation amplitude is found for the bursting evolution in the nonlinear MHD simulation. For lower dissipation coefficients, the amplitude of the Alfvén eigenmodes is at steady levels δBm/n/B ~ 2 × 10-3 and the beam ion losses take place continuously. The beam ion pressure profiles are similar among the different dissipation coefficients, and the stored beam energy is higher for higher dissipation coefficients.

  7. TRANGE: computer code to calculate the energy beam degradation in target stack; TRANGE: programa para calcular a degradacao de energia de particulas carregadas em alvos

    Energy Technology Data Exchange (ETDEWEB)

    Bellido, Luis F.

    1995-07-01

    A computer code to calculate the projectile energy degradation along a target stack was developed for an IBM or compatible personal microcomputer. A comparison of protons and deuterons bombarding uranium and aluminium targets was made. The results showed that the data obtained with TRANGE were in agreement with other computer codes such as TRIM and EDP, and with the Williamson and Janni range and stopping-power tables. TRANGE can be used for any charged-particle ion, for energies between 1 and 100 MeV, in metal foils and solid compound targets. (author). 8 refs., 2 tabs.
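    A calculation of this kind amounts to integrating dE/dx = -S(E) foil by foil through the stack. A toy sketch with a hypothetical power-law stopping power (the real code relies on tabulated stopping powers such as Williamson's and Janni's; the function name and 50/E model here are assumptions):

```python
def degrade_energy(e_mev, thickness_cm, stopping_power, steps=1000):
    """Projectile energy after crossing one foil, integrating
    dE/dx = -S(E) with simple Euler steps. stopping_power(E) returns
    MeV/cm and is a user-supplied model (hypothetical here)."""
    dx = thickness_cm / steps
    e = e_mev
    for _ in range(steps):
        if e <= 0.0:
            return 0.0          # projectile stopped inside the foil
        e -= stopping_power(e) * dx
    return max(e, 0.0)

# Toy stopping power roughly mimicking the 1/E Bethe trend (assumed):
sp = lambda e: 50.0 / e
e_out = degrade_energy(30.0, 2.0, sp)   # energy after a 2 cm layer
```

For a stack, the exit energy of one foil becomes the entrance energy of the next, which is how degraders and target stacks are chained.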

  8. Porting the 3D Gyrokinetic Particle-in-cell Code GTC to the CRAY/NEC SX-6 Vector Architecture: Perspectives and Challenges

    International Nuclear Information System (INIS)

    Ethier, S.; Lin, Z.

    2003-01-01

    Several years of optimization for super-scalar architectures have made it more difficult to port the current version of the 3D particle-in-cell code GTC to the CRAY/NEC SX-6 vector architecture. This paper explains the initial work that has been done to port this code to the SX-6 computer and to optimize the most time-consuming parts. Early performance results are shown and compared with the same tests run on the IBM SP Power 3 and Power 4 machines

  9. ACT-XN: Revised version of an activation calculation code for fusion reactor analysis. Supplement of the function for the sequential reaction activation by charged particles

    International Nuclear Information System (INIS)

    Yamauchi, Michinori; Sato, Satoshi; Nishitani, Takeo; Konno, Chikara; Hori, Jun-ichi; Kawasaki, Hiromitsu

    2007-09-01

    The ACT-XN is a revised version of the ACT4 code, which was developed in the Japan Atomic Energy Research Institute (JAERI) to calculate the transmutation, induced activity, decay heat, delayed gamma-ray source etc. for fusion devices. The ACT4 code cannot deal with the sequential reactions of charged particles generated by primary neutron reactions. In the design of present experimental reactors, the activation due to sequential reactions may not be of great concern, as it is usually buried under the activity from primary neutron reactions. However, low-activation material is one of the important factors for constructing high-power fusion reactors in the future, and unexpected activation may be produced through sequential reactions. Therefore, in the present work, the ACT4 code was newly supplemented with calculation functions for the sequential reactions and renamed ACT-XN. The ACT-XN code is equipped with functions to calculate effective cross sections for sequential reactions and enter them in the transmutation matrix. The FISPACT data were adopted for the (x,n) reaction cross sections, charged-particle emission spectra and stopping powers. The nuclear reaction chain data library was revised to cope with the (x,n) reactions. The charged particles are specified as p, d, t, ³He (h) and α. The code was applied to the analysis of the FNS experiment for LiF and to a Demo-reactor design with FLiBe, and it was confirmed that it reproduces the experimental values within 15-30% discrepancies. In addition, it was noted that the dose rate due to sequential reactions cannot always be neglected after a certain cooling period for some of the low-activation materials. (author)

  10. Distribution Pattern of Fe, Sr, Zr and Ca Elements as Particle Size Function in the Code River Sediments from Upstream to Downstream

    International Nuclear Information System (INIS)

    Sri Murniasih; Muzakky

    2007-01-01

    The concentrations of the elements Fe, Sr, Zr and Ca in granular sediment from upstream to downstream of the Code river have been analysed. The aim of this research was to determine the influence of particle size on the concentrations of Fe, Sr, Zr and Ca in the Code river sediments from upstream to downstream, and their distribution patterns. The instrument used was X-ray fluorescence with a Si(Li) detector. The analysis results show that Fe and Sr are found mostly in the 150-90 μm particle-size fraction, while Zr and Ca are found mostly in the < 90 μm fraction. The concentrations of Fe, Sr, Zr and Ca in the Code river sediments tend to increase from upstream to downstream, following its conductivity. The concentrations of Fe, Sr, Zr and Ca are 1.49 ± 0.03% - 5.93 ± 0.02%; 118.20 ± 10.73 ppm - 468.21 ± 20.36 ppm; 19.81 ± 0.86 ppm - 76.36 ± 3.02 ppm; and 3.22 ± 0.25% - 11.40 ± 0.31%, respectively. (author)

  11. Improvement of neutron collimator design for thermal neutron radiography using Monte Carlo N-particle transport code version 5

    International Nuclear Information System (INIS)

    Thiagu Supramaniam

    2007-01-01

    The aim of this research was to propose a new neutron collimator design for a thermal neutron radiography facility using the tangential beam port of the PUSPATI TRIGA Mark II reactor, Malaysia Institute of Nuclear Technology Research (MINT). The best geometry and materials for the neutron collimator were chosen in order to obtain a uniform beam with maximum thermal neutron flux, high L/D ratio, high neutron-to-gamma ratio and low beam divergence with high resolution. Monte Carlo N-Particle Transport Code version 5 (MCNP5) was used to optimize six neutron collimator components: the beam port medium, neutron scatterer, neutron moderator, gamma filter, aperture and collimator wall. The reactor and tangential beam port were set up in MCNP5 according to their actual sizes. A homogeneous reactor core was assumed, and the population control method of variance reduction was applied by using cell importances. The comparison between the experimental and simulated thermal neutron flux measurements of the bare tangential beam port shows that both curves had similar patterns, which supports the reliability of MCNP5 for obtaining optimal neutron collimator parameters. The simulated results for the beam port medium show that vacuum was the best medium to transport neutrons, followed by helium gas and air. The optimized aperture component was boral with 3 cm thickness. The optimal aperture centre hole diameter was 2 cm, which produces an L/D ratio of 88. Simulation also shows that a graphite neutron scatterer improves the thermal neutron flux while reducing the fast neutron flux. A neutron moderator was used to moderate fast and epithermal neutrons in the beam port. Paraffin wax 90 cm thick was found to be the best neutron moderator material, producing the highest thermal neutron flux at the image plane. A cylindrical high-density polyethylene neutron collimator produces the highest thermal neutron flux at the image plane rather than divergent
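    The collimator figures quoted above follow from simple geometry: L/D is the aperture-to-image distance divided by the aperture diameter, and geometric blur scales inversely with it. A small sketch of these standard radiography relations (not taken from the study itself; function names are assumptions):

```python
def l_over_d(length_cm: float, aperture_cm: float) -> float:
    """Collimation ratio: aperture-to-image distance over aperture diameter."""
    return length_cm / aperture_cm

def geometric_unsharpness(ld_ratio: float, object_to_image_cm: float) -> float:
    """Geometric blur Ug = l / (L/D): grows with the object-to-image gap."""
    return object_to_image_cm / ld_ratio

# The abstract's 2 cm aperture and L/D = 88 imply a 176 cm flight path:
assert l_over_d(176.0, 2.0) == 88.0
```

This is why a higher L/D ratio gives sharper radiographs at the cost of thermal neutron flux at the image plane.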

  12. Component tree analysis of cystovirus φ6 nucleocapsid Cryo-EM single particle reconstructions.

    Directory of Open Access Journals (Sweden)

    Lucas M Oliveira

    The 3-dimensional structure of the nucleocapsid (NC) of bacteriophage φ6 is described utilizing component tree analysis, a topological and geometric image descriptor. The component trees are derived from density maps of cryo-electron microscopy single particle reconstructions. The analysis determines the position and occupancy of structure elements responsible for RNA packaging and transcription. Occupancy of the hexameric nucleotide triphosphorylase (P4) and RNA polymerase (P2) is found to be essentially complete in the NC. The P8 protein lattice likely fixes P4 and P2 in place during maturation. We propose that the viral procapsid (PC) is a dynamic structural intermediate where P4 and P2 can attach and detach until held in place in mature NCs. During packaging, the PC expands to accommodate the RNA, and P2 translates from its original site near the inner 3-fold axis (20 sites) to the inner 5-fold axis (12 sites), with excess P2 positioned inside the central region of the NC.
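    A component tree organizes, across all density thresholds, the connected components of the region above each threshold, with components merging as the threshold drops. A toy 1-D sketch of that underlying idea (illustrative only, far simpler than the 3-D max-tree analysis used for cryo-EM maps):

```python
import math

def count_components_1d(values, thresh):
    """Number of connected components of {i : values[i] >= thresh} --
    the node count a component tree records at one threshold level."""
    above = [v >= thresh for v in values]
    return sum(1 for i, a in enumerate(above)
               if a and (i == 0 or not above[i - 1]))

# Two Gaussian "blobs": separate components at a high threshold,
# merged into a single component (their parent node) at a low one.
density = [math.exp(-(i - 20) ** 2 / 20.0) + math.exp(-(i - 44) ** 2 / 20.0)
           for i in range(64)]
counts = [count_components_1d(density, t) for t in (0.001, 0.5)]
```

In the 3-D case, the tree of such merges (plus component volumes) is what lets one localize and score the occupancy of individual protein densities.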

  13. Nucleotide sequence of the Escherichia coli pyrE gene and of the DNA in front of the protein-coding region

    DEFF Research Database (Denmark)

    Poulsen, Peter; Jensen, Kaj Frank; Valentin-Hansen, Poul

    1983-01-01

    Orotate phosphoribosyltransferase (EC 2.4.2.10) was purified to electrophoretic homogeneity from a strain of Escherichia coli containing the pyrE gene cloned on a multicopy plasmid. The relative molecular masses (Mr) of the native enzyme and its subunit were estimated by means of gel filtration... The DNA in front of the protein-coding region contains a leader segment. This leader contains a structure with features characteristic of a (translated?) rho-independent transcriptional terminator, which is preceded by a cluster of uridylate residues. This indicates that the frequency of pyrE transcription is regulated...

  14. Test Particle Simulations of Electron Injection by the Bursty Bulk Flows (BBFs) using High Resolution Lyon-Fedder-Mobarry (LFM) Code

    Science.gov (United States)

    Eshetu, W. W.; Lyon, J.; Wiltberger, M. J.; Hudson, M. K.

    2017-12-01

    Test particle simulations of electron injection by the bursty bulk flows (BBFs) have been done using a test particle tracer code [1] and the output fields of the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) code [2]. The MHD code was run with high resolution (oct resolution) and with specified solar wind conditions so as to reproduce the observed qualitative picture of the BBFs [3]. Test particles were injected so that they interact with earthward-propagating BBFs. The result of the simulation shows that electrons are pushed ahead of the BBFs and accelerated into the inner magnetosphere. Once electrons are in the inner magnetosphere, they are further energized by drift resonance with the azimuthal electric field. In addition, pitch-angle scattering of electrons, resulting in violation of the conservation of the first adiabatic invariant, has been observed. The violation of the first adiabatic invariant occurs as electrons cross a weak magnetic field region with a strong gradient of the field perturbed by the BBFs. References: [1] Kress, B. T., Hudson, M. K., Looper, M. D., Albert, J., Lyon, J. G., and Goodrich, C. C. (2007), Global MHD test particle simulations of >10 MeV radiation belt electrons during storm sudden commencement, J. Geophys. Res., 112, A09215, doi:10.1029/2006JA012218. [2] Lyon, J. G., Fedder, J. A., and Mobarry, C. M. (2004), The Lyon-Fedder-Mobarry (LFM) global MHD magnetospheric simulation code, J. Atm. and Solar-Terrestrial Phys., 66, Issue 15-16, 1333-1350, doi:10.1016/j.jastp. [3] Wiltberger, M., Merkin, V. G., Lyon, J. G., and Ohtani, S. (2015), High-resolution global magnetohydrodynamic simulation of bursty bulk flows, J. Geophys. Res. Space Physics, 120, 4555-4566, doi:10.1002/2015JA021080.
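    Test particle tracer codes of this kind typically advance each particle through the MHD fields with the Boris algorithm, which splits the Lorentz force into a half electric kick, a magnetic rotation, and a second half kick, and conserves gyration energy exactly when E = 0. A minimal non-relativistic sketch (illustrative only; the cited tracer's actual, relativistic scheme is not reproduced here):

```python
import numpy as np

def boris_push(v, e_field, b_field, qm, dt):
    """One Boris velocity update (non-relativistic sketch):
    half electric kick, magnetic rotation, half electric kick.
    qm is the charge-to-mass ratio q/m."""
    v_minus = v + qm * e_field * (dt / 2.0)
    t = qm * b_field * (dt / 2.0)            # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # pure rotation of v_minus
    return v_plus + qm * e_field * (dt / 2.0)

# With E = 0, the rotation preserves the speed (kinetic energy) exactly:
v = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 1.0])
for _ in range(100):
    v = boris_push(v, np.zeros(3), b, qm=1.0, dt=0.1)
```

This exact energy conservation in a static magnetic field is what makes the scheme trustworthy for diagnosing genuine energization by the BBF electric fields.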

  15. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines

    Directory of Open Access Journals (Sweden)

    Jingjing Xu

    2015-08-01

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. According to the characteristics that the space, time and frequency resources of underground tunnel are open, it is proposed to constitute wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem that multiple access interference (MAI) arises when multiple source sensors transmit monitoring information simultaneously, a kind of multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission and D-PSO algorithm with particle swarm optimization.

  16. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines.

    Science.gov (United States)

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-08-27

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. According to the characteristics that the space, time and frequency resources of underground tunnel are open, it is proposed to constitute wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem that multiple access interference (MAI) arises when multiple source sensors transmit monitoring information simultaneously, a kind of multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission and D-PSO algorithm with particle swarm optimization.
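    The PSO at the heart of such a detector follows the standard velocity/position update, pulling each particle toward its personal best and the swarm's global best. A textbook global-best PSO sketch minimizing a sphere function (illustrative only, not the paper's D-PSO, which searches over candidate transmitted symbol vectors; all parameter values here are assumptions):

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=1):
    """Minimal global-best PSO:
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                       # assumed coefficients
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pval[i]:                        # update personal best
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:                       # update global best
                    gbest, gval = xs[i][:], val
    return gbest, gval

best, val = pso_minimize(lambda x: sum(xi * xi for xi in x), dim=3)
```

In a multi-sensor detector the continuous positions would be mapped to discrete symbol hypotheses and f would be the multiuser likelihood metric, which is where D-PSO departs from this plain form.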

  17. Direct imaging electron microscopy (EM) methods in modern structural biology: overview and comparison with X-ray crystallography and single-particle cryo-EM reconstruction in the studies of large macromolecules.

    Science.gov (United States)

    Miyaguchi, Katsuyuki

    2014-10-01

    Determining the structure of macromolecules is important for understanding their function. The fine structure of large macromolecules is currently studied primarily by X-ray crystallography and single-particle cryo-electron microscopy (EM) reconstruction. Before the development of these techniques, macromolecular structure was often examined by negative-staining, rotary-shadowing and freeze-etching EM, which are categorised here as 'direct imaging EM methods'. In this review, the results are summarised by each of the above techniques and compared with respect to four macromolecules: the ryanodine receptor, cadherin, rhodopsin and the ribosome-translocon complex (RTC). The results of structural analysis of the ryanodine receptor and cadherin are consistent between each technique. The results obtained for rhodopsin vary to some extent within each technique and between the different techniques. Finally, the results for RTC are inconsistent between direct imaging EM and other analytical techniques, especially with respect to the space within RTC, the reasons for which are discussed. Then, the role of direct imaging EM methods in modern structural biology is discussed. Direct imaging methods should support and verify the results obtained by other analytical methods capable of solving three-dimensional molecular architecture, and they should still be used as a primary tool for studying macromolecule structure in vivo. © 2014 Société Française des Microscopies and Société de Biologie Cellulaire de France. Published by John Wiley & Sons Ltd.

  18. User manual for version 4.3 of the Tripoli-4 Monte-Carlo method particle transport computer code

    International Nuclear Information System (INIS)

    Both, J.P.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B.

    2003-01-01

    This manual relates to version 4.3 of the TRIPOLI-4 code. TRIPOLI-4 is a computer code simulating the transport of neutrons, photons, electrons and positrons. It can be used for radiation shielding calculations (long-distance propagation with flux attenuation in non-multiplying media) and neutronics calculations (fissile media, on a criticality or sub-criticality basis). It can compute k_eff (for criticality), fluxes, currents, reaction rates and multi-group cross-sections. TRIPOLI-4 is a three-dimensional code that uses the Monte-Carlo method. It allows both a point-wise description of cross-sections in energy and multi-group homogenized cross-sections, and features two modes of geometrical representation: surface-based and combinatorial. The code uses cross-section libraries in ENDF/B format (such as JEF2-2, ENDF/B-VI and JENDL) for the point-wise description, and cross-sections in APOTRIM format (from the APOLLO2 code) or a format specific to TRIPOLI-4 for the multi-group description. (authors)

  19. Tripoli-3: monte Carlo transport code for neutral particles - version 3.5 - users manual; Tripoli-3: code de transport des particules neutres par la methode de monte carlo - version 3.5 - manuel d'utilisation

    Energy Technology Data Exchange (ETDEWEB)

    Vergnaud, Th.; Nimal, J.C.; Chiron, M.

    2001-07-01

    The TRIPOLI-3 code applies the Monte Carlo method to neutron, gamma-ray and coupled neutron and gamma-ray transport calculations in three-dimensional geometries, either in steady-state conditions or with a time dependence. It can be used to study problems where there is a high flux attenuation between the source zone and the result zone (studies of shielding configurations or source-driven sub-critical systems, with fission being taken into account), as well as problems where there is a low flux attenuation (neutronics calculations -- in a fuel lattice cell, for example -- where fission is taken into account, usually with the calculation of the effective multiplication factor, fine-structure studies, numerical experiments to investigate method approximations, etc.). TRIPOLI-3 has been operational since 1995 and is the version of the TRIPOLI code that follows on from TRIPOLI-2; it can be used on SUN, RISC600 and HP workstations and on PCs running the Linux or Windows/NT operating systems. The code uses nuclear data libraries generated with the THEMIS/NJOY system. The current libraries were derived from ENDF/B6 and JEF2. There is also a response function library based on a number of evaluations, notably the dosimetry libraries IRDF/85 and IRDF/90, as well as evaluations from JEF2. The treatment of particle transport is the same in version 3.5 as in version 3.4 of the TRIPOLI code, but version 3.5 is more convenient for preparing the input data and for reading the output. A French version of the user's manual is also available. (authors)

  1. Coding in Muscle Disease.

    Science.gov (United States)

    Jones, Lyell K; Ney, John P

    2016-12-01

    Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.

  2. Nuclear GUI: a Graphical User Interface for 3D discrete ordinates neutral particle transport codes in the DOORS and BOT3P packages

    International Nuclear Information System (INIS)

    Saintagne, P.W.; Azmy, Y.Y.

    2005-01-01

    A GUI (Graphical User Interface) provides a graphical, interactive and intuitive link between the user and the software. It translates the user's actions into information, e.g. input data, that is interpretable by the software. In order to develop an efficient GUI, it is important to master the target computational code. An initial version of a complete GUI for the DOORS and BOT3P packages for solving neutral particle transport problems in three-dimensional geometry has been completed. This GUI is made of four components. The first component, GipGui, handles cross-sections by mixing microscopic cross-sections from different libraries. The second component, TORT-GUI, provides the user a simple way to create or modify input files for the TORT code, a general-purpose neutral-particle transport code able to solve large problems with complex configurations. The third component, GGTM-GUI, prepares the data describing the problem configuration, such as the geometrical data, material assignments or key flux positions. The fourth component, DTM3-GUI, helps the user visualize TORT results by providing data for a graphics post-processor.

  3. Comparison of a 3D multi‐group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri

    Science.gov (United States)

    Wareing, Todd A.; Failla, Gregory; Horton, John L.; Eifel, Patricia J.; Mourtada, Firas

    2009-01-01

    A patient dose distribution was calculated by a 3D multi‐group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs‐137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi‐group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for the calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within ±3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by no more than ±1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs‐137 CT‐based patient geometry. Our data showed that a three‐group cross‐section set is adequate for Cs‐137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations. PACS number: 87.53.Jw

  4. A code to compute the action-angle transformation for a particle in an arbitrary potential well

    International Nuclear Information System (INIS)

    Berg, J.S.; Warnock, R.L.

    1995-01-01

    For a Vlasov treatment of longitudinal stability under an arbitrary wake field, with the solution of the Haissinski equation as the unperturbed distribution, it is important to have the action-angle transformation for the distorted potential well in a convenient form. The authors have written a code that gives the transformation q,p → J, φ, with q(J,φ) as a Fourier series in φ, the Fourier coefficients and the Hamiltonian H(J) being spline functions of J in C² (having continuous second derivatives).
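
    Concretely, the action for one degree of freedom is J = (1/2π)∮ p dq = (1/π)∫ sqrt(2(E − V(q))) dq between the turning points. A minimal numerical sketch (the quadrature and the harmonic test well are illustrative, not the authors' spline-based code):

```python
import math

def action(E, V, qmin, qmax, n=2000):
    """J(E) = (1/pi) * integral of sqrt(2(E - V(q))) dq across the well."""
    # midpoint rule avoids evaluating exactly at the turning points, where p -> 0
    h = (qmax - qmin) / n
    s = 0.0
    for k in range(n):
        q = qmin + (k + 0.5) * h
        s += math.sqrt(max(0.0, 2.0 * (E - V(q)))) * h
    return s / math.pi

# harmonic well V(q) = q^2/2 (omega = 1): the exact result is J = E
E = 1.0
a = math.sqrt(2.0 * E)                      # turning points at +/- a
J = action(E, lambda q: 0.5 * q * q, -a, a)
```

    For the harmonic well with ω = 1 the midpoint rule reproduces the exact answer J = E to better than 10⁻³ at this resolution.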

  5. Study of the radioactive particle tracking technique using gamma-ray attenuation and MCNP-X code to evaluate industrial agitators

    Energy Technology Data Exchange (ETDEWEB)

    Dam, Roos Sophia de F.; Salgado, César M., E-mail: rsophia.dam@gmail.com, E-mail: otero@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    Agitators or mixers are widely used in the chemical, food, pharmaceutical and cosmetic industries. During the fabrication process, the equipment may fail and compromise the stirring or mixing procedure. Besides that, it is also important to determine the right point of homogeneity of the mixture. Thus, it is very important to have a diagnostic tool for these industrial units to assure the quality of the product and to keep the market competitiveness. The radioactive particle tracking (RPT) technique is widely used in the nuclear field. In this paper, a method based on the principles of the RPT technique is presented. Counts obtained by an array of detectors properly positioned around the unit are correlated to predict the instantaneous positions occupied by the radioactive particle by means of an appropriate mathematical search location algorithm. The detection geometry employs eight NaI(Tl) scintillator detectors and a Cs-137 (662 keV) source with isotropic emission of gamma-rays. The modeling of the detection system is performed using the Monte Carlo method, by means of the MCNP-X code. In this work a methodology is presented to predict the position of a radioactive particle to evaluate the performance of agitators in industrial units by means of an Artificial Neural Network (ANN). (author)
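
    The position-reconstruction idea can be illustrated with a toy inverse-square response model; the brute-force grid search below merely stands in for the trained ANN, and all geometry values are made-up assumptions:

```python
import math

# hypothetical detector positions on a circle of radius 20 cm around the vessel
detectors = [(20 * math.cos(t), 20 * math.sin(t))
             for t in [2 * math.pi * k / 8 for k in range(8)]]

def counts(src, A=1e5):
    # simple inverse-square response model (no attenuation/solid-angle detail)
    return [A / ((dx - src[0]) ** 2 + (dy - src[1]) ** 2) for dx, dy in detectors]

def locate(meas, step=0.5):
    # search the interior for the position whose predicted counts best
    # match the measurement (stand-in for the ANN in the abstract)
    best, best_err = None, float("inf")
    grid = [i * step for i in range(-30, 31)]
    for x in grid:
        for y in grid:
            if x * x + y * y >= 19 ** 2:   # stay inside the detector ring
                continue
            err = sum((m - c) ** 2 for m, c in zip(meas, counts((x, y))))
            if err < best_err:
                best, best_err = (x, y), err
    return best

true_src = (5.0, -3.0)
est = locate(counts(true_src))   # recovers the source position on this grid
```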

  6. User's manual for ONEDANT: a code package for one-dimensional, diffusion-accelerated, neutral-particle transport

    International Nuclear Information System (INIS)

    O'Dell, R.D.; Brinkley, F.W. Jr.; Marr, D.R.

    1982-02-01

    ONEDANT is designed for the CDC-7600, but the program has been implemented and run on the IBM-370/190 and CRAY-I computers. ONEDANT solves the one-dimensional multigroup transport equation in plane, cylindrical, spherical, and two-angle plane geometries. Both regular and adjoint, inhomogeneous and homogeneous (k_eff and eigenvalue search) problems subject to vacuum, reflective, periodic, white, albedo, or inhomogeneous boundary flux conditions are solved. General anisotropic scattering is allowed and anisotropic inhomogeneous sources are permitted. ONEDANT numerically solves the one-dimensional, multigroup form of the neutral-particle, steady-state form of the Boltzmann transport equation. The discrete-ordinates approximation is used for treating the angular variation of the particle distribution and the diamond-difference scheme is used for phase space discretization. Negative fluxes are eliminated by a local set-to-zero-and-correct algorithm. A standard inner (within-group) iteration, outer (energy-group-dependent source) iteration technique is used. Both inner and outer iterations are accelerated using the diffusion synthetic acceleration method.
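
    At toy scale, the inner-iteration machinery described above (discrete ordinates, diamond differencing, source iteration) looks as follows for one energy group in a slab; this is a sketch of the method, not of ONEDANT itself:

```python
import math

def sn_slab(sigma_t=1.0, sigma_s=0.5, q=1.0, width=4.0, nx=80, tol=1e-8):
    """One-group S2 discrete ordinates in a slab: diamond difference,
    vacuum boundaries, flat isotropic source q, source iteration."""
    mu = 1.0 / math.sqrt(3.0)       # S2 ordinates +/- mu, weight 1.0 each
    h = width / nx
    phi = [0.0] * nx                # scalar flux
    a = mu / h
    for _ in range(1000):
        S = [0.5 * (sigma_s * f + q) for f in phi]   # isotropic source term
        new = [0.0] * nx
        for sgn in (+1, -1):        # sweep left-to-right, then right-to-left
            psi_in = 0.0            # vacuum boundary: no incoming flux
            cells = range(nx) if sgn > 0 else range(nx - 1, -1, -1)
            for i in cells:
                # diamond-difference cell balance solved for the outgoing flux
                psi_out = (S[i] + (a - 0.5 * sigma_t) * psi_in) / (a + 0.5 * sigma_t)
                new[i] += 0.5 * (psi_in + psi_out)   # cell-average, weight 1.0
                psi_in = psi_out
        if max(abs(n - f) for n, f in zip(new, phi)) < tol:
            return new
        phi = new
    return phi

flux = sn_slab()
```

    With σs/σt = 0.5 the source iteration converges geometrically, and the computed scalar flux is positive, symmetric, and bounded by the infinite-medium value q/(σt − σs) = 2.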

  7. Simulations of particle and heat fluxes in an ELMy H-mode discharge on EAST using BOUT++ code

    Science.gov (United States)

    Wu, Y. B.; Xia, T. Y.; Zhong, F. C.; Zheng, Z.; Liu, J. B.; the EAST team

    2018-05-01

    In order to study the distribution and evolution of the transient particle and heat fluxes during edge-localized mode (ELM) bursts on the Experimental Advanced Superconducting Tokamak (EAST), the BOUT++ six-field two-fluid model is used to simulate the pedestal collapse. The profiles from the EAST H-mode discharge #56129 are used as the initial conditions. Linear analysis shows that the resistive ballooning mode and drift-Alfven wave are the two dominant instabilities for the equilibrium, and play important roles in driving ELMs. The evolution of the density profile and the growth of the heat flux at the divertor targets during the burst of ELMs are reproduced. The time evolution of the poloidal structures of T e is well simulated, and the dominant mode in each stage of the ELM crash process is identified. The studies show that during the nonlinear phase the dominant mode number is 5, and it changes to 0 when the nonlinear phase saturates after the ELM crash. The time evolution of the radial electron heat flux, ion heat flux, and particle density flux at the outer midplane (OMP) are obtained, and the corresponding transport coefficients D r, χ ir, and χ er reach maxima around 0.3 ∼ 0.5 m² s⁻¹ at ΨN = 0.9. The heat fluxes at the outer target plates are several times larger than those at the inner target plates, which is consistent with the experimental observations. The simulated profiles of ion saturation current density (j s) at the lower outboard (LO) divertor target are compared to those measured by Langmuir probes. The profiles near the strike point are similar, and the peak values of j s from the simulation are very close to the measurements.

  8. High-Fidelity RF Gun Simulations with the Parallel 3D Finite Element Particle-In-Cell Code Pic3P

    Energy Technology Data Exchange (ETDEWEB)

    Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Schussman, G.; Ko, K. (SLAC)

    2009-06-19

    SLAC's Advanced Computations Department (ACD) has developed the first parallel Finite Element 3D Particle-In-Cell (PIC) code, Pic3P, for simulations of RF guns and other space-charge dominated beam-cavity interactions. Pic3P solves the complete set of Maxwell-Lorentz equations and thus includes space charge, retardation and wakefield effects from first principles. Pic3P uses higher-order Finite Element methods on unstructured conformal meshes. A novel scheme for causal adaptive refinement and dynamic load balancing enables unprecedented simulation accuracy, aiding the design and operation of the next generation of accelerator facilities. Application to the Linac Coherent Light Source (LCLS) RF gun is presented.

  9. DOUBLE code simulations of emissivities of fast neutrals for different plasma observation view-lines of neutral particle analyzers on the COMPASS tokamak

    Science.gov (United States)

    Mitosinkova, K.; Tomes, M.; Stockel, J.; Varju, J.; Stano, M.

    2018-03-01

    Neutral particle analyzers (NPA) measure line-integrated energy spectra of fast neutral atoms escaping the tokamak plasma, which are a product of charge-exchange (CX) collisions of plasma ions with background neutrals. They can observe variations in the ion temperature T_i of non-thermal fast ions created by additional plasma heating. However, the plasma column which a fast atom has to pass through must be sufficiently short in comparison with the fast atom's mean free path. The COMPASS tokamak is currently equipped with one NPA installed at a tangential mid-plane port. This orientation is optimal for observing non-thermal fast ions. However, in this configuration the signal at energies useful for T_i derivation is lost in noise owing to the overly long trajectories of the fast atoms. Thus, a second NPA is planned for the purpose of measuring T_i. We analyzed different possible view-lines (perpendicular mid-plane, tangential mid-plane, and top view) for the second NPA using the DOUBLE Monte-Carlo code and compared the results with the performance of the present NPA with tangential orientation. The DOUBLE code provides fast-atom emissivity functions along the NPA view-line. The position of the median of these emissivity functions is related to the location from where the measured signal originates. Further, we compared the difference between the real central T_i used as a DOUBLE code input and the T_i^CX derived from the exponential decay of the simulated energy spectra. The advantages and disadvantages of each NPA location are discussed.
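
    Deriving T_i^CX from the exponential decay of a spectrum reduces, in the simplest hedged form (the energy-dependent CX cross-section and attenuation factors that a real NPA analysis divides out are ignored here), to a log-linear least-squares fit:

```python
import math

def fit_temperature(energies, counts):
    """Fit ln(counts) = a - E/T by linear least squares; return T = -1/slope."""
    n = len(energies)
    xs, ys = energies, [math.log(c) for c in counts]
    xb = sum(xs) / n
    yb = sum(ys) / n
    slope = (sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
             / sum((x - xb) ** 2 for x in xs))
    return -1.0 / slope

T_true = 0.8                         # keV, assumed for the synthetic spectrum
E = [0.5 + 0.1 * k for k in range(20)]
N = [1e6 * math.exp(-e / T_true) for e in E]
T_fit = fit_temperature(E, N)        # recovers T_true on noiseless data
```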

  10. Production of secondary particles and nuclei in cosmic rays collisions with the interstellar gas using the FLUKA code

    CERN Document Server

    Mazziotta, M N; Ferrari, A; Gaggero, D; Loparco, F; Sala, P R

    2016-01-01

    The measured fluxes of secondary particles produced by the interactions of Cosmic Rays (CRs) with the astronomical environment play a crucial role in understanding the physics of CR transport. In this work we present a comprehensive calculation of the secondary hadron, lepton, gamma-ray and neutrino yields produced by the inelastic interactions between several species of stable or long-lived cosmic-ray projectiles (p, D, T, 3He, 4He, 6Li, 7Li, 9Be, 10Be, 10B, 11B, 12C, 13C, 14C, 14N, 15N, 16O, 17O, 18O, 20Ne, 24Mg and 28Si) and different target gas nuclei (p, 4He, 12C, 14N, 16O, 20Ne, 24Mg, 28Si and 40Ar). The yields are calculated using FLUKA, a simulation package designed to compute the energy distributions of secondary products with high accuracy in a wide energy range. The present results provide, for the first time, a complete and self-consistent set of all the relevant inclusive cross sections regarding the whole spectrum of secondary products in nuclear collisions. We cover, for the projectiles, a ki...

  11. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code

    International Nuclear Information System (INIS)

    White, Morgan C.

    2000-01-01

    The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class "u" A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V and V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent.
Second, the ability to

  12. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    White, Morgan C. [Univ. of Florida, Gainesville, FL (United States)

    2000-07-01

    The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class "u" A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second

  13. Streamlined Darwin methods for particle beam injectors

    International Nuclear Information System (INIS)

    Boyd, J.K.

    1987-01-01

    Physics issues that involve inductive effects, such as beam fluctuations, electromagnetic (EM) instability, or interactions with a cavity, require a time-dependent simulation. The most elaborate time-dependent codes self-consistently solve Maxwell's equations and the force equation for a large number of macroparticles. Although these full EM particle-in-cell (PIC) codes have been used to study a broad range of phenomena, including beam injectors, they have several drawbacks. In an explicit solution of Maxwell's equations, the time step is restricted by a Courant condition. A second disadvantage is the production of anomalously large numerical fluctuations, caused by representing many real particles by a single computational macroparticle. Lastly, approximate models of internal boundaries can create nonphysical radiation in a full EM simulation. In this work, many of the problems of a fully electromagnetic simulation are avoided by using the Darwin field model. The Darwin field model is the magnetoinductive limit of Maxwell's equations, and it retains the first-order relativistic correction to the particle Lagrangian. It includes the part of the displacement current necessary to satisfy the charge-continuity equation. This feature is important for simulation of nonneutral beams. Because the Darwin model does not include the solenoidal vector component of the displacement current, it cannot be used to study high-frequency phenomena or effects caused by rapid current changes. However, because wave motion is not followed, the Courant condition of a fully electromagnetic code can be exceeded. In addition, inductive effects are modeled without creating nonphysical radiation.
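
    The Courant restriction mentioned above is quantitative: for an explicit solve of Maxwell's equations on a uniform grid, the usual bound is c·Δt·sqrt(Σᵢ 1/Δxᵢ²) ≤ 1 (the standard FDTD-style limit, shown here as an illustration of the constraint the Darwin model is allowed to exceed):

```python
C = 299_792_458.0  # speed of light, m/s

def courant_dt(dx, dy=None, dz=None, c=C):
    """Largest stable explicit time step: c*dt*sqrt(sum(1/di^2)) <= 1."""
    inv2 = sum(1.0 / d ** 2 for d in (dx, dy, dz) if d is not None)
    return 1.0 / (c * inv2 ** 0.5)

dt_max = courant_dt(1e-3, 1e-3, 1e-3)   # 1 mm cubic cells, ~1.9e-12 s
```

    The 3D limit is tighter than the 1D limit Δx/c by a factor sqrt(3), which is why explicit EM-PIC runs with fine meshes become so expensive.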

  14. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Peiyuan [Univ. of Colorado, Boulder, CO (United States); Brown, Timothy [Univ. of Colorado, Boulder, CO (United States); Fullmer, William D. [Univ. of Colorado, Boulder, CO (United States); Hauser, Thomas [Univ. of Colorado, Boulder, CO (United States); Hrenya, Christine [Univ. of Colorado, Boulder, CO (United States); Grout, Ray [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sitaraman, Hariswaran [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-29

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10³ cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
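
    Weak-scaling efficiency in such an analysis is conventionally the ratio of the smallest-run time to the N-core time at fixed work per core; the timings below are invented for illustration and are not MFiX measurements:

```python
def weak_scaling_efficiency(cores, times):
    """Weak scaling: problem size grows with core count, so the ideal time
    is flat; efficiency is measured relative to the smallest run."""
    t0 = times[0]
    return {n: t0 / t for n, t in zip(cores, times)}

# hypothetical seconds-per-step timings at fixed work per core
eff = weak_scaling_efficiency([1, 8, 64, 512, 1024],
                              [10.0, 10.4, 11.2, 13.1, 16.5])
```

    A perfectly weak-scaling code keeps eff at 1.0; the drop toward ~0.6 at 1024 cores in this made-up data would mirror the kind of degradation profiling is meant to localize.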

  15. Numerical Modeling and Investigation of Fluid-Driven Fracture Propagation in Reservoirs Based on a Modified Fluid-Mechanically Coupled Model in Two-Dimensional Particle Flow Code

    Directory of Open Access Journals (Sweden)

    Jian Zhou

    2016-09-01

    Hydraulic fracturing is a useful tool for enhancing rock mass permeability for shale gas development, enhanced geothermal systems, and geological carbon sequestration by the high-pressure injection of a fracturing fluid into tight reservoir rocks. Although significant advances have been made in hydraulic fracturing theory, experiments, and numerical modeling, knowledge is still limited when it comes to complex geological conditions. Mechanisms of fluid injection-induced fracture initiation and propagation should be better understood to take full advantage of hydraulic fracturing. This paper presents the development and application of discrete particle modeling based on two-dimensional particle flow code (PFC2D). Firstly, it is shown that the modeled value of the breakdown pressure for the hydraulic fracturing process is approximately equal to analytically calculated values under varied in situ stress conditions. Furthermore, a series of simulations for hydraulic fracturing in competent rock was performed to examine the influence of the in situ stress ratio, fluid injection rate, and fluid viscosity on the borehole pressure history, the geometry of hydraulic fractures, and the pore-pressure field, respectively. It was found that the hydraulic fractures in an isotropic medium always propagate parallel to the orientation of the maximum principal stress. When a high fluid injection rate is used, higher breakdown pressure is needed for fracture propagation and complex geometries of fractures can develop. When a low viscosity fluid is used, fluid can more easily penetrate from the borehole into the surrounding rock, which causes a reduction of the effective stress and leads to a lower breakdown pressure. Moreover, the geometry of the fractures is not particularly sensitive to the fluid viscosity in the approximately isotropic model.
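
    The analytical benchmark for breakdown pressure in such comparisons is commonly the classical Hubbert-Willis relation for an impermeable borehole wall; whether the paper uses exactly this form is an assumption here:

```python
def breakdown_pressure(sig_h, sig_H, tensile, pore=0.0):
    """Classical Hubbert-Willis estimate (compressions positive):
    P_b = 3*sigma_h - sigma_H + T_0 - p_0, with sigma_h the minimum and
    sigma_H the maximum horizontal in situ stress, T_0 the tensile strength,
    and p_0 the pore pressure."""
    return 3.0 * sig_h - sig_H + tensile - pore

# illustrative values in MPa (not taken from the paper)
pb = breakdown_pressure(sig_h=10.0, sig_H=15.0, tensile=6.0)
```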

  16. Improving the Calibration of Image Sensors Based on IOFBs, Using Differential Gray-Code Space Encoding

    Directory of Open Access Journals (Sweden)

    Carlos Luna Vázquez

    2012-07-01

    This paper presents a fast calibration method to determine the transfer function for spatial correspondences in image transmission devices with Incoherent Optical Fiber Bundles (IOFBs), by performing a scan of the input using differential patterns generated from a Gray code (Differential Gray-Code Space Encoding, DGSE). The results demonstrate that this technique provides a noticeable reduction in processing time and better quality of the reconstructed image compared to other, previously employed techniques, such as point or fringe scanning, or even other known space encoding techniques.
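
    The reflected-binary (Gray) sequence behind such space encoding, and its inverse, take only a few lines; projector and IOFB specifics are omitted, so this shows only the code itself:

```python
def gray(n):
    """Binary-reflected Gray code of n: adjacent values differ in one bit."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Invert the Gray code by folding the bits back down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# 4-bit pattern sequence, e.g. one bit-plane per projected frame
patterns = [format(gray(i), "04b") for i in range(16)]
```

    The single-bit-change property is what makes the code robust for spatial encoding: a decoding error at a pattern boundary displaces the recovered position by at most one step.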

  17. Test particles dynamics in the JOREK 3D non-linear MHD code and application to electron transport in a disruption simulation

    Science.gov (United States)

    Sommariva, C.; Nardon, E.; Beyer, P.; Hoelzl, M.; Huijsmans, G. T. A.; van Vugt, D.; Contributors, JET

    2018-01-01

    In order to contribute to the understanding of runaway electron generation mechanisms during tokamak disruptions, a test particle tracker is introduced in the JOREK 3D non-linear MHD code, able to compute both full and guiding center relativistic orbits. Tests of the module show good conservation of the invariants of motion and consistency between full orbit and guiding center solutions. A first application is presented in which test electron confinement properties are investigated in a massive gas injection-triggered disruption simulation in JET-like geometry. It is found that electron populations initialised before the thermal quench (TQ) are typically not fully deconfined in spite of the global stochasticity of the magnetic field during the TQ. The fraction of 'survivors' decreases from a few tens of percent down to a few tenths of a percent as the electron energy varies from 1 keV to 10 MeV. The underlying mechanism for electron 'survival' is the prompt reformation of closed magnetic surfaces at the plasma core and, to a smaller extent, the subsequent reappearance of a magnetic surface at the edge. It is also found that electrons are less deconfined at 10 MeV than at 1 MeV, which appears consistent with a phase averaging effect due to orbit shifts at high energy.
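
    JOREK's tracker implementation is not reproduced here, but a standard relativistic Boris push, a common choice for full-orbit integration because its magnetic rotation exactly preserves |u| (and hence the Lorentz factor) when E = 0, can be sketched as:

```python
import math

def boris_push(u, E, B, q_m, dt, c=1.0):
    """One relativistic Boris step for u = gamma*v (normalized units).
    Half electric kick, magnetic rotation, half electric kick."""
    uh = [u[i] + q_m * E[i] * dt / 2 for i in range(3)]
    g = math.sqrt(1.0 + sum(x * x for x in uh) / c ** 2)
    t = [q_m * dt / (2.0 * g) * b for b in B]
    s = [2.0 * x / (1.0 + sum(y * y for y in t)) for x in t]
    # u' = uh + uh x t, then u+ = uh + u' x s (pure rotation of uh)
    up = [uh[0] + uh[1] * t[2] - uh[2] * t[1],
          uh[1] + uh[2] * t[0] - uh[0] * t[2],
          uh[2] + uh[0] * t[1] - uh[1] * t[0]]
    un = [uh[0] + up[1] * s[2] - up[2] * s[1],
          uh[1] + up[2] * s[0] - up[0] * s[2],
          uh[2] + up[0] * s[1] - up[1] * s[0]]
    return [un[i] + q_m * E[i] * dt / 2 for i in range(3)]

# gyration in a uniform magnetic field: |u| must stay constant
u = [1.0, 0.0, 0.0]
for _ in range(100):
    u = boris_push(u, [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], q_m=1.0, dt=0.1)
u_norm = math.sqrt(sum(x * x for x in u))
```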

  18. SU-E-T-590: Optimizing Magnetic Field Strengths with Matlab for An Ion-Optic System in Particle Therapy Consisting of Two Quadrupole Magnets for Subsequent Simulations with the Monte-Carlo Code FLUKA

    International Nuclear Information System (INIS)

    Baumann, K; Weber, U; Simeonov, Y; Zink, K

    2015-01-01

    Purpose: The aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility, consisting of the beam tube, two quadrupole magnets and a beam monitor system, was calculated with Matlab using matrices that solve the equation of motion of a charged particle in a magnetic field and in a field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte-Carlo code FLUKA, and the transport of 80 MeV/u C12 ions through this ion-optic system was calculated using a user routine to implement magnetic fields. The fluence along the beam axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized with Matlab and transferred to the Monte-Carlo code FLUKA; the implementation via a user routine was successful. Analysis of the fluence pattern along the beam axis reproduced the characteristic focusing and defocusing effects of the quadrupole magnets. Furthermore, the beam spot at the iso-center was circular and significantly thinner than an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte-Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system.
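The matrix formalism described above can be sketched in a few lines. The transfer matrices below, for a drift and for the focusing and defocusing planes of a thick quadrupole in linear (small-angle) optics, are the building blocks such an optimizer composes and tunes; this is a hypothetical illustration rather than the authors' Matlab tool, and the example magnet strengths are arbitrary:

```python
import numpy as np

def drift(L):
    """Transfer matrix of a field-free drift of length L (m)."""
    return np.array([[1.0, L],
                     [0.0, 1.0]])

def quad_focusing(k, L):
    """Focusing-plane matrix of a quadrupole of strength k (m^-2), length L."""
    w = np.sqrt(k) * L
    return np.array([[np.cos(w), np.sin(w) / np.sqrt(k)],
                     [-np.sqrt(k) * np.sin(w), np.cos(w)]])

def quad_defocusing(k, L):
    """Defocusing-plane matrix of the same quadrupole."""
    w = np.sqrt(k) * L
    return np.array([[np.cosh(w), np.sinh(w) / np.sqrt(k)],
                     [np.sqrt(k) * np.sinh(w), np.cosh(w)]])

# One transverse plane of a doublet: quad, drift, quad, drift to the iso-center.
M = drift(1.0) @ quad_defocusing(2.0, 0.3) @ drift(0.5) @ quad_focusing(2.0, 0.3)
ray_out = M @ np.array([1e-3, 0.0])   # ray entering with a 1 mm offset, zero angle
```

An optimizer would vary the two strengths k until the beam matrix propagated through M gives equal, small spot sizes in both planes at the iso-center.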

  19. Energetic particle physics with applications in fusion and space plasmas

    International Nuclear Information System (INIS)

    Cheng, C.Z.

    1997-01-01

    Energetic particle physics is the study of the effects of energetic particles on collective electromagnetic (EM) instabilities and of energetic particle transport in plasmas. Anomalously large energetic particle transport is often caused by low-frequency MHD instabilities, which are driven by these energetic particles in the presence of a much denser background of thermal particles. The theory of collective energetic particle phenomena studies complex wave-particle interactions in which particle kinetic physics, involving small spatial and fast temporal scales, can strongly affect the MHD structure and long-time behavior of plasmas. The difficulty of modeling kinetic-MHD multiscale coupling processes stems from the disparate scales, which are traditionally analyzed separately: the macroscale MHD phenomena are studied using the fluid MHD framework, while microscale kinetic phenomena are best described by complicated kinetic theories. The authors have developed a kinetic-MHD model that properly incorporates major particle kinetic effects into the MHD fluid description. For tokamak plasmas a nonvariational kinetic-MHD stability code, the NOVA-K code, has been successfully developed and applied to study problems such as the excitation of fishbone and Toroidal Alfven Eigenmodes (TAE) and the sawtooth stabilization by energetic ions in tokamaks. In space plasmas the authors have employed the kinetic-MHD model to study the energetic particle effects on the ballooning-mirror instability, which explains the multi-satellite observations of the stability and field-aligned structure of compressional Pc 5 waves in the magnetospheric ring current plasma.

  20. Code Cactus; Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flow rate in parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure-drop or flow-rate conditions, constant or varying with time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement FLID, a one-channel, two-dimensional code. (authors)

  1. 3D Mapping of the SPRY2 domain of ryanodine receptor 1 by single-particle cryo-EM.

    Directory of Open Access Journals (Sweden)

    Alex Perálvarez-Marín

    Full Text Available The type 1 skeletal muscle ryanodine receptor (RyR1) is principally responsible for Ca2+ release from the sarcoplasmic reticulum and for the subsequent muscle contraction. The RyR1 contains three SPRY domains. SPRY domains are generally known to mediate protein-protein interactions; however, the location of the three SPRY domains in the 3D structure of the RyR1 is not known. Combining immunolabeling and single-particle cryo-electron microscopy, we have mapped the SPRY2 domain (S1085-V1208) in the 3D structure of RyR1 using three different antibodies against the SPRY2 domain. Two obstacles for the image processing procedure, the limited amount of data and the signal dilution introduced by the multiple orientations of the antibody bound to the tetrameric RyR1, were overcome by modifying the 3D reconstruction scheme. This approach enabled us to ascertain that the three antibodies bind to the same region, to obtain a 3D reconstruction of RyR1 with the antibody bound, and to map SPRY2 to the periphery of the cytoplasmic domain of RyR1. We report here the first 3D localization of a SPRY2 domain in any known RyR isoform.

  2. A Calculation Method of PKA, KERMA and DPA from Evaluated Nuclear Data with an Effective Single-particle Emission Approximation (ESPEA) and Introduction of Event Generator Mode in PHITS Code

    International Nuclear Information System (INIS)

    Fukahori, Tokio; Iwamoto, Yosuke

    2012-01-01

    A displacement calculation method based on evaluated nuclear data files has been developed using the effective single-particle emission approximation (ESPEA). The ESPEA can be used effectively below about 50 MeV, because of the increasing multiplicity of emitted particles at higher energies. These results are also reported in Ref. 24. The displacement calculation method in PHITS has been developed. In the high-energy region (≥ 20 MeV) for proton and neutron beams, the DPA created by secondary particles increases due to nuclear reactions. For heavy-ion beams, the DPA created by the primaries dominates the total DPA due to the large Coulomb scattering cross sections. PHITS results agree with FLUKA results within a factor of 1.7. In the high-energy region above 10 MeV/nucleon, comparisons among codes and measurements of the displacement damage cross section are necessary. (author)
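For reference, the number of stable displacements per primary knock-on atom is commonly estimated with the standard NRT formula, N_d = 0.8 T_dam / (2 E_d) above the threshold region. The sketch below implements that textbook model, not the PHITS routine itself; the 40 eV threshold is an assumed typical value (e.g. for iron):

```python
def nrt_displacements(t_dam_ev, e_d_ev=40.0):
    """NRT estimate of stable displacements from damage energy T_dam (eV).

    Below E_d no displacement occurs; between E_d and 2*E_d/0.8 exactly one
    Frenkel pair is produced; above that, N_d = 0.8 * T_dam / (2 * E_d).
    """
    if t_dam_ev < e_d_ev:
        return 0.0
    if t_dam_ev < 2.0 * e_d_ev / 0.8:
        return 1.0
    return 0.8 * t_dam_ev / (2.0 * e_d_ev)
```

DPA then follows by folding N_d over the PKA damage-energy spectrum and the particle fluence.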

  3. Evaluation of neutral detergent fiber contents in forages, concentrates and cattle feces ground at different particle sizes and using bags made from different textiles Avaliação dos teores de fibra em detergente neutro em forragens, concentrados e fezes bovinas moídas em diferentes tamanhos e em sacos de diferentes tecidos

    Directory of Open Access Journals (Sweden)

    Tiago Neves Pereira Valente

    2011-05-01

    Full Text Available Two experiments were carried out to evaluate the effects of particle size on the contents of neutral detergent fiber (NDF) determined using F57 nylon bags (Ankom®) and non-woven textile (NWT, 100 g/m²) bags. In the first experiment, to check the NDF contents obtained with each of the textiles, quantitative filter paper (purified cellulose) was used as the analytical standard. The material was ground through a 1- or 2-mm screen and put in the bags at a ratio of 20 mg of dry matter/cm² of surface. The analyses were performed in a fiber analyser (Ankom220) using a heat-stable α-amylase. No effects of particle size were found. In the second experiment, samples were used of forages (signal grass hay, sugar cane, coastcross hay, corn straw, corn silage, and elephant grass at 50 and 250 days of growth after cutting), concentrate feeds (sorghum grain, gluten meal, citrus pulp, cottonseed meal, soybean meal, wheat bran, corn grain, whole soybean, and soybean hulls), and feces from cattle fed diets containing 15 or 50% concentrate. The samples were prepared, put in bags, and analyzed as described for the previous experiment. In both experiments, the lowest NDF contents were found with the nylon bags, indicating loss of particles through the bag porosity. It is suggested that samples be ground through a 1-mm screen, which provides efficient extraction of cell contents by the action of the neutral detergent and a greater specific surface for the action of the heat-stable α-amylase enzyme. The use of particles ground at 2 mm overestimates NDF contents.

  4. Comparison of european computer codes relative to the aerosol behavior in PWR containment buildings during severe core damage accidents. (Modelling of steam condensation on the particles)

    International Nuclear Information System (INIS)

    Bunz, H.; Dunbar, L.H.; Fermandjian, J.; Lhiaubet, G.

    1987-11-01

    An aerosol code comparison exercise was performed within the framework of the Commission of the European Communities (Division of Safety of Nuclear Installations). This exercise, focused on the process of steam condensation onto the aerosols occurring in PWR containment buildings during severe core damage accidents, has made it possible to understand the discrepancies between the results obtained. These discrepancies are due, in particular, to whether or not the curvature effect is modelled in the codes.

  5. Sedimentação em leite UHT integral, semidesnatado e desnatado durante armazenamento Particle sedimentation in semi-skimmed, skimmed and whole UHT milk during storage

    Directory of Open Access Journals (Sweden)

    Cintia Neuwald Vesconsi

    2012-04-01

    Full Text Available One of the biggest problems of UHT milk is the sedimentation of protein particles that occurs during the storage period, which faces rejection by consumers. The aim of this study was to evaluate particle sedimentation in semi-skimmed, skimmed and whole UHT milk stored at 20°C and 30°C (±1°C) for 120 days. Physico-chemical and microbiological analyses (mesophilic, psychrotrophic and lactic acid bacteria) were carried out on the pasteurized milk that yielded the UHT milk. In the semi-skimmed, skimmed and whole UHT milk, acidity, pH, boiling, sensory analysis, integrity of packaging and sedimentation tests were carried out shortly after packing and at the 30th, 60th, 90th and 120th days of storage. The pasteurized whole, semi-skimmed and skimmed milk showed results within the standards required by the industry for mesophilic bacteria (log10 4.37 to log10 4.08 CFU mL-1), psychrotrophic bacteria (log10 3.06 to log10 2.77 CFU mL-1) and lactic acid bacteria (log10 3.10 to log10 2.42 CFU mL-1), which differed significantly.

  6. Fast-solving thermally thick model of biomass particles embedded in a CFD code for the simulation of fixed-bed burners

    International Nuclear Information System (INIS)

    Gómez, M.A.; Porteiro, J.; Patiño, D.; Míguez, J.L.

    2015-01-01

    Highlights: • A thermally thick treatment is used to simulate the thermal conversion of solid biomass fuel. • A dynamic subgrid scale is used to model the advance of reactive fronts inside the particle. • Efficient solution algorithms are applied to calculate the temperatures and volumes of the internal layers. • Several tests were simulated and compared with experimental data. - Abstract: A thermally thick treatment of fuel particles during the thermal conversion of solid biomass is required to account for the internal gradients of temperature and composition and the overlapping of the biomass combustion stages. Due to the mixture of scales implied, the balance between model resolution and computational efficiency is an important limitation in the simulation of beds with large numbers of particles. In this study, a subgrid-scale model is applied to consider the intraparticle gradients and the interactions with other particles and the gas phase using an Euler–Euler CFD framework. Numerical heat transfer and mass conservation equations are formulated on a subparticle scale to obtain a system of linear equations that can be used to resolve the temperature and position of the reacting front inside the characteristic particle of each cell. To simulate the entire system, this modelling is combined with other submodels of the gas phase, the bed reaction and the interactions. The performance of the new model is tested against published experimental results for the particle and the bed. Similar temperatures are obtained in the particle-alone tests. Although the mass consumption rates tend to be underpredicted during the drying stage, they are subsequently compensated for. In addition, an experimental batch-loaded pellet burner was simulated and tested with different air mass fluxes, in which the experimental ignition rates and temperatures are used to compare the thermally thick model with the thermally thin model previously developed by the authors.

  7. Beam-dynamics codes used at DARHT

    Energy Technology Data Exchange (ETDEWEB)

    Ekdahl, Jr., Carl August [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-01

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the Tricomp Trak orbit-tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; for coasting-beam transport to the target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.

  8. An approach based on genetic algorithms with coding in real for the solution of a DC OPF to hydrothermal systems; Uma abordagem baseada em algoritmos geneticos com codificacao em real para a solucao de um FPO DC para sistemas hidrotermicos

    Energy Technology Data Exchange (ETDEWEB)

    Barbosa, Diego R.; Silva, Alessandro L. da; Luciano, Edson Jose Rezende; Nepomuceno, Leonardo [Universidade Estadual Paulista (UNESP), Bauru, SP (Brazil). Dept. de Engenharia Eletrica], Emails: diego_eng.eletricista@hotmail.com, alessandrolopessilva@uol.com.br, edson.joserl@uol.com.br, leo@feb.unesp.br

    2009-07-01

    Problems of DC Optimal Power Flow (OPF) have been solved by various conventional optimization methods. When the modeling of the DC OPF involves discontinuous or non-differentiable functions, the use of solution methods based on conventional optimization is often not possible because of the difficulty in calculating the gradient vectors at points of discontinuity/non-differentiability of these functions. This paper proposes a method for solving the DC OPF based on Genetic Algorithms (GA) with real coding. The proposed GA has specific genetic operators to improve the quality and viability of the solution. The results are analyzed for an IEEE test system, and its solutions are compared, when possible, with those obtained by a primal-dual interior-point method with logarithmic barrier. The results highlight the robustness of the method and the feasibility of obtaining the solution for real systems.

  9. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
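Unique decipherability of a finite code, the property the coding-partition framework generalizes, can be decided with the classical Sardinas-Patterson procedure. The sketch below is my own compact illustration of that standard test, not the authors' algorithm for canonical partitions:

```python
def dangling(A, B):
    """Nonempty suffixes w such that a + w = b for some a in A, b in B."""
    return {b[len(a):] for a in A for b in B
            if b.startswith(a) and len(b) > len(a)}

def is_uniquely_decipherable(code):
    """Sardinas-Patterson test: True iff no message has two distinct parsings.

    Iterate the dangling-suffix sets; the code fails UD exactly when some
    dangling suffix is itself a codeword.
    """
    C = set(code)
    seen = set()
    S = dangling(C, C)
    while S:
        if S & C:
            return False
        seen |= S
        S = (dangling(C, S) | dangling(S, C)) - seen  # finitely many suffixes
    return True
```

For example, {"a", "ab", "b"} fails because "ab" parses both as one codeword and as "a" + "b", while any prefix code passes immediately (its first dangling set is empty).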

  10. How Varroa Parasitism Affects the Immunological and Nutritional Status of the Honey Bee, Apis mellifera

    Directory of Open Access Journals (Sweden)

    Katherine A. Aronstein

    2012-06-01

    Full Text Available We investigated the effect of the parasitic mite Varroa destructor on the immunological and nutritional condition of honey bees, Apis mellifera, from the perspective of the individual bee and the colony. Pupae, newly-emerged adults and foraging adults were sampled from honey bee colonies at one site in S. Texas, USA. Varroa-infested bees displayed elevated titer of Deformed Wing Virus (DWV), suggestive of depressed capacity to limit viral replication. Expression of genes coding three anti-microbial peptides (defensin1, abaecin, hymenoptaecin) was either not significantly different between Varroa-infested and uninfested bees or was significantly elevated in Varroa-infested bees, varying with sampling date and bee developmental age. The effect of Varroa on nutritional indices of the bees was complex, with protein, triglyceride, glycogen and sugar levels strongly influenced by life-stage of the bee and individual colony. Protein content was depressed and free amino acid content elevated in Varroa-infested pupae, suggesting that protein synthesis, and consequently growth, may be limited in these insects. No simple relationship between the values of nutritional and immune-related indices was observed, and colony-scale effects were indicated by the reduced weight of pupae in colonies with high Varroa abundance, irrespective of whether the individual pupa bore Varroa.

  11. Language Recognition via Sparse Coding

    Science.gov (United States)

    2016-09-08

    explanation is that sparse coding can achieve a near-optimal approximation of a much more complicated nonlinear relationship through local and piecewise linear ... training examples, where x(i) ∈ R^N is the ith example in the batch. Optionally, X can be normalized and whitened before sparse coding for better results ... The normalized input vectors are then ZCA-whitened [20]. Empirically, we choose ZCA-whitening over PCA-whitening, and there is no dimensionality reduction
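ZCA whitening, as referenced, decorrelates the features while keeping the whitened vectors as close as possible to the originals (unlike PCA whitening, it rotates back into the input axes, and the dimensionality is unchanged). A minimal sketch, assuming a small regularizer eps of my own choosing:

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten the rows of X (n_samples x n_features).

    Zero-mean the data, then apply W = U diag(1/sqrt(S + eps)) U^T,
    where U S U^T is the eigendecomposition of the sample covariance.
    """
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    U, S, _ = np.linalg.svd(cov)          # cov is symmetric: SVD = eigendecomposition
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return Xc @ W
```

After this transform the empirical covariance of the output is (up to the eps regularization) the identity matrix.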

  12. A Monte Carlo simulation code for calculating damage and particle transport in solids: The case for electron-bombarded solids for electron energies up to 900 MeV

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Qiang [College of Nuclear Science and Technology, Harbin Engineering University, Harbin 150001 (China); Shao, Lin, E-mail: lshao@tamu.edu [Department of Nuclear Engineering, Texas A&M University, College Station, TX 77843 (United States)

    2017-03-15

    Current popular Monte Carlo simulation codes for simulating electron bombardment in solids focus primarily on electron trajectories rather than on electron-induced displacements. Here we report a Monte Carlo simulation code, DEEPER (damage creation and particle transport in matter), developed for calculating 3-D distributions of displacements produced by electrons of incident energies up to 900 MeV. Electron elastic scattering is calculated using full Mott cross sections for high accuracy, and primary knock-on atom (PKA)-induced damage cascades are modeled using the ZBL potential. We compare and show large differences between the 3-D distributions of displacements and of electrons in electron-irradiated Fe. The distributions of total displacements are similar to those of PKAs at low electron energies, but they are substantially different for higher-energy electrons due to the shifting of the PKA energy spectra towards higher energies. The study is important for evaluating electron-induced radiation damage, for applications using high-flux electron beams to intentionally introduce defects, and for those using an electron analysis beam for microstructural characterization of nuclear materials.

  13. Simulation of halo particles with Simpsons

    International Nuclear Information System (INIS)

    Machida, Shinji

    2003-01-01

    Recent code improvements and some simulation results of halo particles with Simpsons will be presented. We tried to identify resonance behavior of halo particles by looking at the tune evolution of individual macro particles.

  14. Simulation of halo particles with Simpsons

    Science.gov (United States)

    Machida, Shinji

    2003-12-01

    Recent code improvements and some simulation results of halo particles with Simpsons will be presented. We tried to identify resonance behavior of halo particles by looking at tune evolution of individual macro particle.

  15. Experimental analysis of minimum shear stress to drag particles in a horizontal bed; Analise experimental da tensao de cisalhamento minima para arraste de particulas em um leito horizontal

    Energy Technology Data Exchange (ETDEWEB)

    Dornelas, Breno Almeida; Soares, Edson Jose [Universidade Federal do Espirito Santo. Departamento de Engenharia Mecanica (Brazil)], e-mails: bad@ucl.br, edson@ct.ufes.br; Quirino Filho, Joao Pedro; Loureiro, Bruno Venturini [Faculdade do Centro Leste (UCL). Laboratorio de Fluidos e Fenomenos de Transporte (Brazil)], e-mails: joaoquirino@ucl.br, brunovl@ucl.br

    2009-12-15

    Efficient hole cleaning is still a challenge in well bore drilling to produce oil and gas. The critical point is horizontal drilling, which inherently tends to form a bed of sedimented particles at the bottom of the well during drilling. The erosion of the cuttings bed depends mainly on the shear stress promoted by the drilling fluid flow. The shear stress required to cause drag in the cuttings bed is investigated as a function of the fluid and particle properties, using an experimental assembly composed of a system for fluid circulation, a particle box, a pump system and measuring equipment. The observation area is a box below the flow line in an acrylic duct, used with calibrated sand particles. The test starts with the pumps at a low frequency, which is increased in steps. At each frequency level, images of carried particles are captured and the established flow rate is recorded. The images are analyzed to determine when particle dragging is no longer random and sporadic but becomes permanent. The shear stress is identified through the PKN correlation (by Prandtl, von Karman, and Nikuradse) at the minimum flow rate necessary to cause drag. Results were obtained for flows of water and of a water-glycerin solution. (author)
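The PKN correlation referred to is the classical smooth-pipe turbulent friction law, 1/sqrt(f) = 4 log10(Re sqrt(f)) - 0.4 in the Fanning convention, from which the wall shear stress follows as tau_w = f rho v^2 / 2. A sketch of the evaluation by fixed-point iteration (my own illustration; the authors' exact procedure and friction-factor convention are not stated in the abstract):

```python
import math

def fanning_friction_pkn(re):
    """Fanning friction factor from the PKN (Prandtl-von Karman-Nikuradse)
    smooth-pipe correlation 1/sqrt(f) = 4*log10(Re*sqrt(f)) - 0.4,
    solved by fixed-point iteration."""
    f = 0.005  # reasonable turbulent starting guess
    for _ in range(100):
        f = (4.0 * math.log10(re * math.sqrt(f)) - 0.4) ** -2
    return f

def wall_shear_stress(rho, v, re):
    """Wall shear stress tau_w = f * rho * v^2 / 2 (Fanning convention),
    for fluid density rho (kg/m^3) and mean velocity v (m/s)."""
    return 0.5 * fanning_friction_pkn(re) * rho * v * v
```

Comparing tau_w at the measured minimum flow rate against the value at which particles start to move gives the critical shear stress reported in such experiments.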

  16. User manual for version 4.3 of the Tripoli-4 Monte-Carlo method particle transport computer code; Notice d'utilisation du code Tripoli-4, version 4.3: code de transport de particules par la methode de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Both, J.P.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B

    2003-07-01

    This manual relates to version 4.3 of the TRIPOLI-4 code. TRIPOLI-4 is a computer code simulating the transport of neutrons, photons, electrons and positrons. It can be used for radiation shielding calculations (long-distance propagation with flux attenuation in non-multiplying media) and neutronics calculations (fissile media, on a criticality or sub-criticality basis). This makes it possible to calculate k{sub eff} (for criticality), fluxes, currents, reaction rates and multi-group cross-sections. TRIPOLI-4 is a three-dimensional code that uses the Monte-Carlo method. It allows both a point-wise description of cross-sections in energy and multi-group homogenized cross-sections, and it features two modes of geometrical representation: surface-based and combinatorial. The code uses cross-section libraries in ENDF/B format (such as JEF2-2, ENDF/B-VI and JENDL) for the point-wise description, and cross-sections in APOTRIM format (from the APOLLO2 code) or in a format specific to TRIPOLI-4 for the multi-group description. (authors)

  17. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  18. Tristan code and its application

    Science.gov (United States)

    Nishikawa, K.-I.

    Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications including simulations of the global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code to the study of the global solar wind-magnetosphere interaction, including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http:/www.physics.rutger.edu/˜kenichi. For beginners the code (isssrc2.f) with simpler boundary conditions is suitable for starting to run simulations. The future of global particle simulations for a global geospace general circulation model (GGCM) with predictive capability (for the Space Weather Program) is discussed.
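At the heart of any electromagnetic particle code of this kind is the particle-grid coupling. A minimal cloud-in-cell (CIC) charge deposition on a periodic 1-D grid, a generic sketch of the technique rather than TRISTAN's 3-D routines, looks like this:

```python
import numpy as np

def deposit_cic(positions, q, nx, dx):
    """Cloud-in-cell charge deposition on a periodic 1-D grid.

    Each particle of charge q shares its charge between the two nearest
    grid points, linearly weighted by distance; rho is charge density.
    """
    rho = np.zeros(nx)
    for x in positions:
        s = x / dx
        i = int(np.floor(s)) % nx
        frac = s - np.floor(s)              # fractional distance to cell i
        rho[i] += q * (1.0 - frac) / dx
        rho[(i + 1) % nx] += q * frac / dx  # periodic wrap-around
    return rho
```

The linear weighting guarantees exact charge conservation on the grid, the property that makes the coupled field solve self-consistent.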

  19. Escoabilidade de leitos de partículas inertes com polpa de frutas tropicais: efeitos na secagem em leito de jorro Flowability of inert particle beds with fruit pulp: effects on the drying in spouted bed

    Directory of Open Access Journals (Sweden)

    Maria de F. D. de Medeiros

    2001-12-01

    Full Text Available Neste trabalho, foram caracterizados seis tipos de material inerte, utilizados na secagem de polpa de frutas em leito de jorro. Determinou-se o ângulo de repouso das partículas, com e sem adição de água e de polpa de diversas frutas tropicais. Correlacionou-se a escoabilidade com as propriedades das partículas e com a composição química das polpas. Analisou-se a influência do ângulo de repouso sobre o desempenho do secador, no que se refere à produção. Os resultados mostraram que, em geral, as polpas com elevadas concentrações de gordura e sólidos insolúveis e baixos teores de açúcares redutores, facilitam a escoabilidade. Uma análise dos resultados obtidos na secagem de polpa de frutas tropicais, utilizando-se partículas de poliestireno de baixa densidade, como material inerte, mostrou que, embora a escoabilidade permita a obtenção de menores vazões de jorro mínimo, em relação ao desempenho do secador, pode não favorecer uma produção maior de pó.In this work six types of inert particles were characterized and analyzed for drying tropical fruit pulps. The repose angle was determined with and without the addition of water and pulp of various tropical fruits. The bed flowability was related to the particle properties and chemical composition of pulps. The influence of the repose angle on the drying performance was analyzed. It was also verified that the composition of pulps influenced the bed flowability. The global analysis showed that the pulps with high lipids and insoluble solids content and low reducing sugar content improved the bed flowability. The results obtained with the drying of the fruit pulps using low-density polystyrene granules as inert particles showed that high flowabilities lead to lower minimum spout flow rates, but do not necessarily lead to the highest powder production.

  20. Particle-tracking code (track3d) for convective solute transport modelling in the geosphere: Description and user's manual; Programme de reperage de particules (track3d) pour la modelisation du transport par convection des solutes dans la geosphere: description et manuel de l'utilisateur

    Energy Technology Data Exchange (ETDEWEB)

    Nakka, B W; Chan, T

    1994-12-01

    A deterministic particle-tracking code (TRACK3D) has been developed to compute convective flow paths of conservative (nonreactive) contaminants through porous geological media. TRACK3D requires the groundwater velocity distribution, which, in our applications, results from flow simulations using AECL's MOTIF code. The MOTIF finite-element code solves the transient and steady-state coupled equations of groundwater flow, solute transport and heat transport in fractured/porous media. With few modifications, TRACK3D can be used to analyse the velocity distributions calculated by other finite-element or finite-difference flow codes. This report describes the assumptions, limitations, organization, operation and applications of the TRACK3D code, and provides a comprehensive user's manual.
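Convective particle tracking of this kind amounts to integrating dx/dt = v(x) along the computed velocity field. A minimal RK4 tracer over an analytic field, an illustration of the idea rather than TRACK3D's element-by-element algorithm (all names are my own):

```python
import numpy as np

def track_particle(v_func, x0, dt, n_steps):
    """Trace a convective path x'(t) = v(x) with classical RK4 steps."""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        k1 = v_func(x)
        k2 = v_func(x + 0.5 * dt * k1)
        k3 = v_func(x + 0.5 * dt * k2)
        k4 = v_func(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(x.copy())
    return np.array(path)
```

A quick sanity check is a solid-body rotation field v = (-y, x): the traced path must stay on the starting circle, the analogue of a streamline test against a flow solver's output.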

  1. Compósitos Bioativos Obtidos a Partir da Inserção de Vidro Bioativo em Matriz de Poli(Metacrilato de Metila) Bioactive Composites Obtained from Bioactive Glass Particles into Poly(Methyl Methacrylate)

    Directory of Open Access Journals (Sweden)

    Paulo E. Silva Junior

    2001-09-01

    Full Text Available Several bioceramics are able to bond to living tissues (bioactivity), but their mechanical properties differ greatly from those of natural tissues, which restricts the use of these materials in a wider range of biomedical applications. Polymer-matrix composites reinforced with a bioactive phase can combine the bioactive behavior characteristic of some bioceramics with mechanical properties close to those of human tissues. The present work aims to synthesize and characterize polymer-matrix composites reinforced with bioactive glass particles. The composites were produced by bulk polymerization of methyl methacrylate in the presence of bioactive glass particles (a calcium-phosphorus-sodium silicate glass). Glass particles were added to the monomer at several concentrations to allow the mechanical properties and the bioactivity of the composites to be varied. The bioactivity of the materials was evaluated through in vitro tests performed at 37 °C in a simulated body fluid solution for periods from 0 hours to 30 days. The composites submitted to the in vitro tests were then characterized by infrared spectroscopy. The synthesis procedure proved effective in producing composites with different volume fractions of particles distributed homogeneously through the material. The in vitro tests revealed the deposition of a carbonated hydroxyapatite (HCA) layer on the surface of the materials, confirming the bioactivity of the composites. It was also observed that the deposition kinetics of the HCA layer can be controlled by the volume fraction of the bioactive phase.

  2. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    Full Text Available As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  3. Transport of iron particles generated during milling operations in multilateral wells

    Energy Technology Data Exchange (ETDEWEB)

    Martins, Andre Leibsohn; Rezende, Carla Leonor Teixeira; Leal, Rafael Amorim Ferreira; Lourenco, Fabio Gustavo Fernandes [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas]. E-mail: aleibsohn@cenpes.petrobras.com.br; rezenc@hotmail.com; ramorim@cenpes.petrobras.com.br; fabiolou@urbi.com.br

    2000-07-01

    This paper presents a series of numerical simulations aiming at the definition of the requirements (flow rate and fluid properties) to remove iron particles both in the inclined sections and in the riser annulus. Additionally, experimental work was developed in a pilot-scale flow loop in order to compare the behavior of water- and synthetic oil-based fluids in milling operations. (author)

  4. Factors affecting the energy resolution in alpha particle spectrometry with silicon diodes

    Energy Technology Data Exchange (ETDEWEB)

    Camargo, Fabio de. E-mail: f.camargo@bol.com.br

    2005-07-01

    In this work, studies of the response of a multi-guard-ring silicon diode for the detection and spectrometry of alpha particles are presented. This ion-implanted diode (Al/p⁺/n/n⁺/Al) was processed from a 300 µm thick n-type substrate with a resistivity of 3 kΩ·cm and has an active area of 4 mm². To use this diode as a detector, the bias voltage was applied on the n⁺ side, the first guard ring was grounded and the electrical signals were read out from the p⁺ side. These signals were sent directly to a tailor-made preamplifier, based on the A250 hybrid circuit (Amptek), followed by conventional nuclear electronics. The results obtained with this system for the direct detection of alpha particles from ²⁴¹Am showed excellent response stability with a high detection efficiency (≈100%). The performance of this diode for alpha particle spectrometry was studied, prioritizing the influence of the bias voltage, the electronic noise, the temperature and the source-diode distance on the energy resolution. The results showed that the major contribution to the deterioration of this parameter is the diode dead-layer thickness (1 µm). However, even at room temperature, the energy resolution (FWHM = 18.8 keV) measured for the 5485.6 keV alpha particles (²⁴¹Am) is comparable to that obtained with the ordinary silicon barrier detectors frequently used for alpha particle spectrometry. (author)
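
When the individual broadening mechanisms discussed above (electronic noise, dead-layer straggling, temperature effects) are independent and roughly Gaussian, their contributions add in quadrature. A small sketch (Python); the breakdown numbers are hypothetical, chosen only to show how an overall FWHM near the reported 18.8 keV could decompose:

```python
import math

def total_fwhm(*contributions_kev: float) -> float:
    """Independent Gaussian broadening contributions add in quadrature:
    FWHM_total = sqrt(sum of squared individual FWHMs)."""
    return math.sqrt(sum(c * c for c in contributions_kev))

# Hypothetical contributions (keV) to an alpha-line width:
noise, dead_layer, straggling = 8.0, 15.0, 7.5
fwhm = total_fwhm(noise, dead_layer, straggling)  # ~18.6 keV
```

The quadrature rule explains why the largest single term (here the dead-layer contribution) dominates the measured resolution.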

  5. Particle size distribution and total suspended solids monitoring in injection water samples for optimization of the water injection filtration system

    Energy Technology Data Exchange (ETDEWEB)

    Ramalhao, Adriano Gorga; Seno, Carlos Eduardo; Ribeiro, Alice [3M do Brasil, Sumare, SP (Brazil)

    2008-07-01

    The amount of particulate material in sea water varies widely for a great number of reasons. The best-known contaminant is organic material derived from seaweed or fish spawning, which causes seasonally sensitive variations in the quality of the water treated and injected for enhanced oil recovery. This paper presents the results of one year of monitoring of water sampled at 30 meters depth in the Roncador field, which is located 125 km from the coast in a water depth of 1290 meters. Seasonal variation with peaks in summer and winter was observed. The monitoring was done through particle counting and distribution analysis and total suspended solids measurements. It was noted that even at the peaks, with the largest amount of particles and the greatest quantity of suspended solids, the particles remained concentrated in the range below 25 µm. For that reason the life of the final filter elements may vary, and pre-filters are often ineffective and sometimes even bypassed due to frequent clogging, thereby failing to protect the final filter. (author)

  6. SU-E-T-753: Three-Dimensional Dose Distributions of Incident Proton Particle in the Polymer Gel Dosimeter and the Radiochromic Gel Dosimeter: A Simulation Study with MCNP Code

    International Nuclear Information System (INIS)

    Park, M; Kim, G; Ji, Y; Kim, K; Park, S; Jung, H

    2015-01-01

    Purpose: The purpose of this study is to estimate the three-dimensional dose distributions in the polymer and the radiochromic gel dosimeter, and to identify the detectability of both gel dosimeters by comparison with a water phantom under proton irradiation. Methods: The normoxic polymer gel and the LCV micelle radiochromic gel were used in this study. The densities of the polymer and the radiochromic gel dosimeter were 1.024 and 1.005 g/cm³, respectively. The dose distributions of protons in the polymer and radiochromic gel were simulated using a Monte Carlo radiation transport code (MCNPX, Los Alamos National Laboratory). The phantom irradiated by proton particles was a hexahedron with dimensions of 12.4 × 12.4 × 15.0 cm³. Proton beams with energies of 50, 80, and 140 MeV were directed at the top surface of the phantom. The cross-sectional proton dose distribution in both gel dosimeters was estimated against the water phantom and evaluated by the gamma evaluation method. In addition, the absorbed dose (Gy) was calculated to evaluate the proton detectability. Results: The evaluation shows that the dose distributions in both gel dosimeters at the intermediate section and the Bragg-peak region are similar to those of the water phantom. At the entrance section, however, inconsistencies in the dose distribution appear compared with water. The relative absorbed doses in the radiochromic and polymer gel dosimeter differed from water by 0.47% and 2.26%, respectively, showing that the radiochromic gel dosimeter matched the water phantom better in the absorbed dose evaluation. Conclusion: The polymer and the radiochromic gel dosimeter show similar characteristics in dose distributions for proton beams at the intermediate section and the Bragg-peak region. Moreover, the calculated absorbed dose in both gel dosimeters shows a similar tendency compared with that in the water phantom.
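
The gamma evaluation method referenced above combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1D sketch (Python) with global 3%/3 mm criteria on a toy depth-dose curve; this illustrates the method only and is not the analysis used in the study:

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, x, dose_tol=0.03, dist_tol_mm=3.0):
    """Global 1D gamma index: for each reference point, the minimum
    combined dose-difference / distance metric over all evaluated points.
    A point passes when gamma <= 1."""
    d_max = ref_dose.max()
    gam = np.empty_like(ref_dose)
    for i, (xi, di) in enumerate(zip(x, ref_dose)):
        dd = (eval_dose - di) / (dose_tol * d_max)   # dose term
        dx = (x - xi) / dist_tol_mm                  # distance term
        gam[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gam

x = np.arange(0.0, 50.0, 0.1)              # depth grid, mm
ref = np.exp(-((x - 30.0) / 8.0) ** 2)     # toy Bragg-like dose peak
ev = np.exp(-((x - 30.5) / 8.0) ** 2)      # same curve shifted by 0.5 mm
pass_rate = float((gamma_1d(ref, ev, x) <= 1.0).mean())
```

A 0.5 mm shift is well inside the 3 mm distance tolerance, so every point passes; production implementations extend the same metric to 2D/3D grids with interpolation.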

  7. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (vectorization). Progress report fiscal 1996

    Energy Technology Data Exchange (ETDEWEB)

    Nemoto, Toshiyuki; Kawai, Wataru [Fujitsu Ltd., Tokyo (Japan)]; Kawasaki, Nobuo [and others]

    1997-12-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. The results are reported in 3 parts, i.e., the vectorization part, the parallelization part and the porting part. In this report, we describe the vectorization: the vectorization of the two- and three-dimensional discrete ordinates simulation code DORT-TORT, the gas dynamics analysis code FLOWGR and the relativistic Boltzmann-Uehling-Uhlenbeck simulation code RBUU is described. In the parallelization part, the parallelization of the 2-dimensional relativistic electromagnetic particle code EM2D, the cylindrical direct numerical simulation code CYLDNS and the molecular dynamics code DGR for simulating radiation damage in diamond crystals is described. In the porting part, the porting of the reactor safety analysis codes RELAP5/MOD3.2 and RELAP5/MOD3.2.1.2, the nuclear data processing system NJOY and the 2-D multigroup discrete ordinates transport code TWOTRAN-II is described, along with a survey on the porting of the command-driven interactive data analysis plotting program IPLOT. (author)

  8. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (parallelization). Progress report fiscal 1996

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Hideo; Kawai, Wataru; Nemoto, Toshiyuki [Fujitsu Ltd., Tokyo (Japan); and others

    1997-12-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. The results are reported in 3 parts, i.e., the vectorization part, the parallelization part and the porting part. In this report, we describe the parallelization: the parallelization of the 2-dimensional relativistic electromagnetic particle code EM2D, the cylindrical direct numerical simulation code CYLDNS and the molecular dynamics code DGR for simulating radiation damage in diamond crystals is described. In the vectorization part, the vectorization of the two- and three-dimensional discrete ordinates simulation code DORT-TORT, the gas dynamics analysis code FLOWGR and the relativistic Boltzmann-Uehling-Uhlenbeck simulation code RBUU is described. In the porting part, the porting of the reactor safety analysis codes RELAP5/MOD3.2 and RELAP5/MOD3.2.1.2, the nuclear data processing system NJOY and the 2-D multigroup discrete ordinates transport code TWOTRAN-II is described, along with a survey on the porting of the command-driven interactive data analysis plotting program IPLOT. (author)

  9. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (porting). Progress report fiscal 1996

    Energy Technology Data Exchange (ETDEWEB)

    Nemoto, Toshiyuki [Fujitsu Ltd., Tokyo (Japan)]; Kawasaki, Nobuo; Tanabe, Hidenobu [and others]

    1998-01-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. The results are reported in 3 parts, i.e., the vectorization part, the parallelization part and the porting part. In this report, we describe the porting: the porting of the reactor safety analysis codes RELAP5/MOD3.2 and RELAP5/MOD3.2.1.2, the nuclear data processing system NJOY and the 2-D multigroup discrete ordinates transport code TWOTRAN-II is described, along with a survey on the porting of the command-driven interactive data analysis plotting program IPLOT. In the parallelization part, the parallelization of the 2-dimensional relativistic electromagnetic particle code EM2D, the cylindrical direct numerical simulation code CYLDNS and the molecular dynamics code DGR for simulating radiation damage in diamond crystals is described. In the vectorization part, the vectorization of the two- and three-dimensional discrete ordinates simulation code DORT-TORT, the gas dynamics analysis code FLOWGR and the relativistic Boltzmann-Uehling-Uhlenbeck simulation code RBUU is described. (author)

  10. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
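
The simplest waveform coders compress sample amplitudes before uniform quantization. The sketch below (Python) uses the continuous μ-law companding curve with μ = 255, as in G.711-style telephony; note that the real G.711 standard uses a segmented 8-bit approximation of this curve, so this is an illustration of the principle, not the standard's exact encoder:

```python
import math

MU = 255.0  # mu-law constant used in North American / Japanese telephony

def mu_law_encode(x: float) -> float:
    """Compress a sample in [-1, 1] with the continuous mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_decode(y: float) -> float:
    """Invert the companding curve (expand back to linear amplitude)."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def codec(x: float, bits: int = 8) -> float:
    """Quantize uniformly in the companded domain, then expand."""
    levels = 2 ** (bits - 1)
    q = round(mu_law_encode(x) * levels) / levels
    return mu_law_decode(q)
```

Companding spends quantization levels where speech energy concentrates (small amplitudes), which is why 8-bit μ-law sounds far better than 8-bit linear PCM.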

  11. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic component codes, viewed as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  12. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which comprises mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer, when the reactor operates in a stationary state. To demonstrate that the model is also applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code are able to adequately predict the behavior of the reactor. (Author)
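
The neutron-kinetics component of such reactor models is often introduced via point kinetics. The sketch below (Python) integrates one-delayed-group point kinetics with explicit Euler; the parameter values (β, λ, Λ) are generic textbook numbers, not Aztheca's, and the model is far simpler than the code's actual coupled equations:

```python
def point_kinetics(rho, t_end, dt=1e-5,
                   beta=0.0065, lam=0.08, gen_time=1e-4):
    """One-delayed-group point kinetics, explicit Euler integration.
    n: relative neutron density, c: delayed-neutron precursor density.
    rho: reactivity (dimensionless), constant over the transient."""
    n = 1.0
    c = beta * n / (lam * gen_time)        # start at precursor equilibrium
    t = 0.0
    while t < t_end:
        dn = ((rho - beta) / gen_time * n + lam * c) * dt
        dc = (beta / gen_time * n - lam * c) * dt
        n, c, t = n + dn, c + dc, t + dt
    return n

# A +0.1 beta reactivity step: prompt jump to ~1/(1 - 0.1) of initial
# power, then a slow rise on the delayed-neutron time scale.
p1 = point_kinetics(rho=0.1 * 0.0065, t_end=1.0)
```

Full plant simulators couple equations like these to thermal-hydraulic feedback, which is where codes such as Aztheca and RELAP-5 do their real work.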

  13. Cadmium and copper adsorption on bentonite: effects of pH and particle size

    Directory of Open Access Journals (Sweden)

    Lúcia Helena Garófalo Chaves

    2011-06-01

    Full Text Available Reactions of heavy metals with clay minerals are important in determining metal fates in the environment, and the adsorption of these metals by bentonite has therefore been extensively investigated. The objectives of this work were to assess the ability of bentonite clay to adsorb cadmium and copper and to study the effects of pH and particle size on the adsorption of these metals. Adsorption isotherms were obtained from batch adsorption experiments with increasing cadmium and copper concentrations (5-200 mg L-1). To determine the effects of pH and particle size on adsorption, the experiments were conducted at pH 4, 5 and 6 using bentonite with particle sizes below 0.5 mm and between 0.5 and 2.0 mm. The amounts of Cd and Cu adsorbed by the bentonite were determined from the difference between the initial and final concentrations of the elements in the equilibrium solution. The bentonite adsorbed more Cu than Cd; the adsorption of both metals increased with increasing pH regardless of particle size, and no effect of particle size on adsorption was observed. The experimental data were well fitted by the Langmuir model. The maximum adsorption capacity decreased and the binding energy increased with increasing pH.
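
The Langmuir fit mentioned above is straightforward to reproduce. A minimal sketch (Python) of the isotherm q = q_max·K·C/(1 + K·C) and its common linearization C/q = C/q_max + 1/(K·q_max); the concentrations and parameter values below are hypothetical, not the paper's data:

```python
import numpy as np

def langmuir(C, q_max, K):
    """Langmuir isotherm: adsorbed amount q as a function of
    equilibrium concentration C."""
    return q_max * K * C / (1.0 + K * C)

def fit_langmuir(C, q):
    """Linearized fit: C/q is linear in C, with slope 1/q_max
    and intercept 1/(K * q_max)."""
    slope, intercept = np.polyfit(C, C / q, 1)
    q_max = 1.0 / slope
    K = slope / intercept
    return q_max, K

# Hypothetical equilibrium data (mg/L, mg/g), noise-free for illustration:
C = np.array([5.0, 25.0, 50.0, 100.0, 200.0])
q = langmuir(C, q_max=12.0, K=0.05)
q_max_hat, K_hat = fit_langmuir(C, q)
```

On real data, q_max tracks the adsorption capacity and K the binding energy, the two quantities the record reports as pH-dependent.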

  14. Vocable Code

    DEFF Research Database (Denmark)

    Soon, Winnie; Cox, Geoff

    2018-01-01

    A computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other, a mix of computer programming syntax and human language. In this sense queer code can be understood as both an object and subject of study that intervenes in the world's 'becoming' and in how material bodies are produced via human and nonhuman practices. Through mixing natural and computer language, this article presents a script in six parts from a performative lecture for two persons...

  15. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault and their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a user's manual describing simulation procedures, input data preparation, output and example test cases.

  16. The Aster code

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D division of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (material behaviour, large deformations, specific loads, unloading and loss-of-load-proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  17. Solid-gas heat transfer coefficient in a fixed bed of particles

    Energy Technology Data Exchange (ETDEWEB)

    Fernandes Filho, Francisco

    1991-03-01

    This work presents a study of heat transfer between gas and solid phases in fixed beds in the absence of mass transfer and chemical reactions. Mathematical models presented in the literature were analyzed with respect to the assumptions made about axial dispersion in the fluid phase and interparticle thermal conductivity. Heat transfer coefficients and their dependence on flow conditions and on particle and packed-bed characteristics were determined experimentally through the solution of these mathematical models. The pressure drop behaviour of the packed beds used in the heat transfer study is also included. (author) 32 refs., 12 figs.
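
Gas-solid heat transfer coefficients in fixed beds are commonly back-calculated from a Nusselt-number correlation. As a hedged illustration (Python), the sketch below uses the well-known Wakao-Kaguei correlation Nu = 2 + 1.1 Re^0.6 Pr^(1/3); this is a standard literature correlation, not necessarily the model analyzed in this thesis, and the default fluid properties are approximate values for air near room temperature:

```python
def gas_solid_h(u_mps, d_p_m, rho=1.2, mu=1.8e-5, cp=1005.0, k=0.026):
    """Gas-solid heat transfer coefficient (W/m^2.K) in a fixed bed,
    via the Wakao-Kaguei correlation Nu = 2 + 1.1 Re^0.6 Pr^(1/3).
    u_mps: superficial gas velocity; d_p_m: particle diameter."""
    Re = rho * u_mps * d_p_m / mu          # particle Reynolds number
    Pr = cp * mu / k                        # Prandtl number
    Nu = 2.0 + 1.1 * Re ** 0.6 * Pr ** (1.0 / 3.0)
    return Nu * k / d_p_m

# Air at ~1 m/s through a bed of 5 mm particles:
h = gas_solid_h(u_mps=1.0, d_p_m=0.005)
```

The dependence of h on velocity and particle size shown here is exactly the kind of relationship the experiments in the record were designed to quantify.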

  18. Code Modernization of VPIC

    Science.gov (United States)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite breakthroughs in the areas of mini-app development, performance portability, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by Los Alamos National Security, LLC, Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  19. Particle Swarm Optimization applied to a combinatorial problem, aiming at the solution of the fuel recharge problem in a nuclear reactor

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Anderson Alvarenga de Moura; Schirru, Roberto [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: ameneses@con.ufrj.br; schirru@lmp.ufrj.br

    2005-07-01

    This work focuses on the use of the Artificial Intelligence technique Particle Swarm Optimization (PSO) to optimize the fuel recharge of a nuclear reactor. This is a combinatorial problem, in which the search for the best feasible solution is done by minimizing a specific objective function. At this first stage, however, it is possible to compare the fuel recharge problem with the Traveling Salesman Problem (TSP), since both are combinatorial, with one advantage: the evaluation of the TSP objective function is much simpler. Thus, the proposed methods have been applied to two TSPs: Oliver 30 and Rykel 48. In 1995, KENNEDY and EBERHART presented the PSO technique to optimize non-linear continuous functions. Recently some PSO models for discrete search spaces have been developed for combinatorial optimization, although all of them have formulations different from the ones presented here. In this paper, we use the PSO theory associated with the Random Keys (RK) model, used in some optimizations with Genetic Algorithms. The Particle Swarm Optimization with Random Keys (PSORK) results from this association, which combines PSO and RK. The adaptations and changes in the PSO aim to allow its use in the nuclear fuel recharge problem. This work shows PSORK being applied to the proposed combinatorial problem and the results obtained. (author)
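
The Random Keys idea at the heart of PSORK maps each particle's continuous position vector to a permutation by sorting. A minimal sketch (Python) of the decoding step plus a TSP tour-length evaluation; the key values and distance matrix are made up for illustration, and this is not the authors' PSORK implementation:

```python
import numpy as np

def decode_keys(keys):
    """Random Keys decoding: the permutation that sorts the keys.
    Continuous PSO updates on `keys` thus move the particle through
    permutation space without any discrete operators."""
    return np.argsort(keys)

def tour_length(perm, dist):
    """Closed-tour length for a TSP distance matrix."""
    return sum(dist[perm[i], perm[(i + 1) % len(perm)]]
               for i in range(len(perm)))

# A particle's continuous position, decoded to a city ordering:
keys = np.array([0.71, 0.12, 0.95, 0.33])
perm = decode_keys(keys)                 # visits cities 1, 3, 0, 2
dist = np.array([[0, 2, 9, 4],
                 [2, 0, 6, 3],
                 [9, 6, 0, 5],
                 [4, 3, 5, 0]])
cost = tour_length(perm, dist)           # objective value for this particle
```

The same decode-then-evaluate loop serves as the objective function whether the permutation represents TSP cities or fuel-assembly positions in a reload pattern.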

  20. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was initiated in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association Coding Pirates. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS) at the Institut for Skole og Læring, Professionshøjskolen Metropol, and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design, design thinking and design pedagogy at Forskningslab: It og Læringsdesign (ILD-LAB), Institut for kommunikation og psykologi, Aalborg University in Copenhagen. We followed, evaluated and documented the Coding Class project in the period from November 2016 to May 2017...

  1. Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  2. ANIMAL code

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and presents its physical model. Temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm, and the functions of the algorithm's FORTRAN subroutines and variables are outlined.

  3. Network Coding

    Indian Academy of Sciences (India)

    Network Coding. K V Rashmi, Nihar B Shah, P Vijay Kumar. General Article, Resonance – Journal of Science Education, Volume 15, Issue 7, July 2010, pp 604-621. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621

  4. MCNP code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids.
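
At the heart of Monte Carlo transport codes of this family is the sampling of free-flight distances from the exponential attenuation law. A textbook sketch (Python) for a purely absorbing slab; this illustrates the general principle only and is in no way MCNP's implementation:

```python
import math
import random

def sample_path_length(sigma_t, rng):
    """Free-flight distance with pdf p(s) = sigma_t * exp(-sigma_t * s),
    sampled by inverting the CDF with a uniform deviate."""
    return -math.log(1.0 - rng.random()) / sigma_t

def transmission(sigma_t, slab_thickness, n_histories, seed=1):
    """Fraction of particles crossing a purely absorbing slab:
    a particle is transmitted if its first flight exceeds the thickness."""
    rng = random.Random(seed)
    crossed = sum(sample_path_length(sigma_t, rng) > slab_thickness
                  for _ in range(n_histories))
    return crossed / n_histories

# One mean free path of absorber; the analytic answer is exp(-1) ~ 0.368.
t = transmission(sigma_t=1.0, slab_thickness=1.0, n_histories=100_000)
```

Real transport codes layer scattering physics, geometry tracking, tallies, and variance reduction on top of this elementary sampling step.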

  5. Expander Codes

    Indian Academy of Sciences (India)

    Expander Codes – The Sipser–Spielman Construction. Priti Shankar. General Article, Resonance – Journal of Science Education, Volume 10, Issue 1. Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.

  6. Determination of the threshold shear stress to drag particles in a cuttings bed

    Energy Technology Data Exchange (ETDEWEB)

    Loureiro, Bruno Venturini; Siqueira, Renato do Nascimento [Faculdade do Centro Leste (UCL), Serra, ES (Brazil). Lab. de Fenomenos de Transporte], e-mail: brunovl@ucl.br, e-mail: renatons@ucl.br

    2006-07-01

    Drilling horizontal wells for oil and gas production requires an efficient hole-cleaning process, because particles removed during drilling settle on the lower part of the annular space between the drilling column and the walls of the well. Erosion of this bed is an important physical phenomenon for the petroleum and gas industry, since it can improve the opening of the wells. This work aims to estimate the threshold shear stress necessary to start the erosion process in a sediment bed. An experimental apparatus was built from simplifications of the problem in order to measure the flow rate and identify the beginning of the process. The experiment consists of a rectangular duct with aspect ratio ({lambda} = h/b) of 1/3 and non-dimensional length (L{sup *} = L/h) of 75. The sediment bed to be eroded was placed at 60< x{sup *}<66. Using the flow rate and the boundary conditions, the problem was discretized to permit a computational solution with the finite volume method and hence determine the shear stress. This work used particles of up to 3.0 mm and modeled the flow considering a bed with equivalent roughness. (author)

  7. The enterovirus 71 A-particle forms a gateway to allow genome release: a cryoEM study of picornavirus uncoating.

    Directory of Open Access Journals (Sweden)

    Kristin L Shingler

    2013-03-01

    Since its discovery in 1969, enterovirus 71 (EV71) has emerged as a serious worldwide health threat. This human pathogen of the picornavirus family causes hand, foot, and mouth disease, and also has the capacity to invade the central nervous system to cause severe disease and death. Upon binding to a host receptor on the cell surface, the virus begins a two-step uncoating process, first forming an expanded, altered "A-particle", which is primed for genome release. In a second step after endocytosis, an unknown trigger leads to RNA expulsion, generating an intact, empty capsid. Cryo-electron microscopy reconstructions of these two capsid states provide insight into the mechanics of genome release. The EV71 A-particle capsid interacts with the genome near the icosahedral two-fold axis of symmetry, which opens to the external environment via a channel ∼10 Å in diameter that is lined with patches of negatively charged residues. After the EV71 genome has been released, the two-fold channel shrinks, though the overall capsid dimensions are conserved. These structural characteristics identify the two-fold channel as the site where a gateway forms and regulates the process of genome release.

  8. Particle Pollution

    Science.gov (United States)

    ... Your Health Particle Pollution Public Health Issues Particle Pollution Recommend on Facebook Tweet Share Compartir Particle pollution — ... see them in the air. Where does particle pollution come from? Particle pollution can come from two ...

  9. The particle swarm optimization algorithm applied to nuclear systems surveillance test planning; Otimizacao aplicada ao planejamento de politicas de testes em sistemas nucleares por enxame de particulas

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, Newton Norat

    2006-12-15

    This work presents a new approach to solve availability maximization problems in electromechanical systems under periodic scheduled preventive tests. The approach uses a new optimization tool, Particle Swarm Optimization (PSO), developed by Kennedy and Eberhart (2001), integrated with a probabilistic safety analysis model. Two maintenance optimization problems are solved by the proposed technique: the first is a hypothetical electromechanical configuration and the second is a real case from a nuclear power plant (emergency diesel generators). For both problems, PSO is compared to a genetic algorithm (GA). In the experiments performed, PSO was able to obtain results comparable to or even slightly better than those obtained by GA. Moreover, the PSO algorithm is simpler and its convergence is faster, indicating that PSO is a good alternative for solving this kind of problem. (author)
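    The global-best PSO update used in this kind of study can be sketched as follows. This is a minimal illustration only, not the thesis code: the real application couples PSO to a probabilistic safety analysis model, whereas this toy minimizes a simple test function, and all parameter values here are assumptions.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, seed=1):
    """Minimize f over [-5, 5]^dim with a basic global-best PSO."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients (assumed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]          # personal best positions
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]  # global best position and value
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity update: inertia + cognitive pull + social pull
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval

# Usage: minimize the sphere function; the optimum is at the origin.
best, val = pso_minimize(lambda x: sum(xi * xi for xi in x), dim=3)
```

    In an availability-maximization setting, `f` would instead evaluate the (negated) system availability returned by the safety analysis model for a candidate test schedule.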

  10. Physics of codes

    International Nuclear Information System (INIS)

    Cooper, R.K.; Jones, M.E.

    1989-01-01

    The title given this paper is a bit presumptuous, since one can hardly expect to cover the physics incorporated into all the codes already written and currently being written. The authors focus on those codes which have been found to be particularly useful in the analysis and design of linacs. At that the authors will be a bit parochial and discuss primarily those codes used for the design of radio-frequency (rf) linacs, although the discussions of TRANSPORT and MARYLIE have little to do with the time structures of the beams being analyzed. The plan of this paper is first to describe rather simply the concepts of emittance and brightness, then to describe rather briefly each of the codes TRANSPORT, PARMTEQ, TBCI, MARYLIE, and ISIS, indicating what physics is and is not included in each of them. It is expected that the vast majority of what is covered will apply equally well to protons and electrons (and other particles). This material is intended to be tutorial in nature and can in no way be expected to be exhaustive. 31 references, 4 figures

  11. The LIONS code (version 1.0)

    International Nuclear Information System (INIS)

    Bertrand, P.

    1993-01-01

    The new LIONS code (Lancement d'IONS, or Ion Launching), a dynamical code implemented in the SPIRaL project for the CIME cyclotron studies, is presented. The software suite comprises a 3D magnetostatic code, 2D and 3D electrostatic codes for the generation of realistic field maps, and several dynamical codes for studying the behaviour of the reference particle from the cyclotron center up to ejection and for launching particle packets complying with given correlations. Its interactions with the other codes are described. The LIONS code, written in Fortran 90, is already used in studying the CIME cyclotron from the center to ejection. It is designed to be used, with minor modifications, in other contexts, such as the simulation of mass spectrometer facilities

  12. Panda code

    International Nuclear Information System (INIS)

    Altomare, S.; Minton, G.

    1975-02-01

    PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)

  13. CANAL code

    International Nuclear Information System (INIS)

    Gara, P.; Martin, E.

    1983-01-01

    The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils [fr

  14. Implementation of collisions on GPU architecture in the Vorpal code

    Science.gov (United States)

    Leddy, Jarrod; Averkin, Sergey; Cowan, Ben; Sides, Scott; Werner, Greg; Cary, John

    2017-10-01

    The Vorpal code contains a variety of collision operators allowing for the simulation of plasmas containing multiple charge species interacting with neutrals, background gas, and EM fields. These existing algorithms have been improved and reimplemented to take advantage of the massive parallelization allowed by GPU architecture. The use of GPUs is most effective when algorithms are single-instruction multiple-data, so particle collisions are an ideal candidate for this parallelization technique due to their nature as a series of independent processes with the same underlying operation. This refactoring required data memory reorganization and careful consideration of device/host data allocation to minimize memory access and data communication per operation. Successful implementation has resulted in an order of magnitude increase in simulation speed for a test-case involving multiple binary collisions using the null collision method. Work supported by DARPA under contract W31P4Q-16-C-0009.
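    The null collision method mentioned above pads the true, energy-dependent collision frequency ν(E) up to a constant majorant ν_max, so candidate collision times can be drawn from a single exponential distribution and each candidate is accepted as a real collision with probability ν(E)/ν_max (rejections are "null" collisions, i.e. no-ops). A schematic sketch, not Vorpal code; the rate function and all numbers are invented for the example:

```python
import math
import random

def sample_collision(nu_of_e, nu_max, energy, rng):
    """One step of the null-collision method: draw a candidate collision
    time from the constant majorant rate nu_max, then accept the collision
    with probability nu(E)/nu_max (rejection = 'null' collision, a no-op)."""
    dt = -math.log(1.0 - rng.random()) / nu_max   # exponential waiting time
    is_real = rng.random() < nu_of_e(energy) / nu_max
    return dt, is_real

# Usage: a made-up linear collision frequency; at E = 2.0 about 40% of the
# candidate collisions are real, the rest are null.
rng = random.Random(42)
nu = lambda e: 0.2 * e          # hypothetical collision frequency
real = sum(sample_collision(nu, 1.0, 2.0, rng)[1] for _ in range(100_000))
```

    Because every candidate is an independent draw with the same underlying operation, this loop maps naturally onto the single-instruction multiple-data GPU parallelization the record describes.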

  15. Comparison of spectra for validation of Penelope code for the energy range used in mammography; Comparacao de espectros para validacao do codigo PENELOPE para faixa de energia usada em mamografia

    Energy Technology Data Exchange (ETDEWEB)

    Albuquerque, M.A.G.; Ferreira, N.M.P.D., E-mail: malbuqueque@hotmail.co [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil); Pires, E.; Ganizeu, M.D.; Almeida, C.E. de, E-mail: marianogd@uol.com.b [Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil); Prizio, R.; Peixoto, J.G., E-mail: guilherm@ird.gov.b [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    The spectra simulated with the PENELOPE code were compared with spectra obtained experimentally with a silicon PIN photodiode detector and with spectra calculated by the IPEN code. The comparison showed an agreement of 93.3%, making the code an option for the study of X-ray spectroscopy in the voltage range used in mammography

  16. LFSC - Linac Feedback Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Valentin; /Fermilab

    2008-05-01

    The computer program LFSC (Linac Feedback Simulation Code) is a numerical tool for simulating beam-based feedback in high performance linacs. The code LFSC is based on the earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulation of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data etc. The Matlab environment provides a flexible system for graphical output.

  17. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  18. Ruminal variables in steers fed with Tifton 85 (Cynodon Spp hay with different particle sizes / Variáveis ruminais em novilhos alimentados com feno de Tifton 85 com diferentes tamanhos de partículas

    Directory of Open Access Journals (Sweden)

    Patrícia Guimarães Pimentel

    2009-07-01

    The ammonia nitrogen (N-NH3) concentration and the pH were determined with the objective of evaluating the effect of different particle sizes (5, 7, 10 mm and whole) of Tifton 85 hay in the diet of Holstein steers with an average live weight of 300 kg and age of 20 months. A completely randomized design, with four replicates, in a split-plot arrangement was used. The plots were the experimental treatments (5, 7, 10 mm and whole hay) and the subplots were the times of collection (0, 2, 4, 6, 8 h). The use of diets including hay with a particle size of 5 mm or whole hay did not affect (P>0.05) the ruminal pH; average values were 6.14 and 6.61, respectively. A linear reduction in the ruminal pH was verified in the steers fed on diets constituted by 10 mm particles. For the diets including Tifton hay with a particle size of 7 mm, a quadratic effect was observed, with the minimum pH (5.39) observed 8 h after the feed was furnished. Concentrations of N-NH3 were affected by collection time in a quadratic way. Maximum concentrations of N-NH3 (15.55, 15.83, 18.32 and 12.0 mg/100 mL) were observed at 4.28, 3.58, 2.99 and 2.80 h after feeding, for the diets including Tifton 85 hay with 5, 7, 10 mm and whole particle sizes, respectively. It was concluded that all diets allowed normal nycterohemeral patterns of fermentation.

  19. FRESCO: fusion reactor simulation code for tokamaks

    International Nuclear Information System (INIS)

    Mantsinen, M.J.

    1995-03-01

    To study the dynamics of tokamak fusion reactors, a zero-dimensional particle and power balance code, FRESCO (Fusion Reactor Simulation Code), has been developed at the Department of Technical Physics of Helsinki University of Technology. The FRESCO code is based on zero-dimensional particle and power balance equations averaged over prescribed plasma profiles. The report describes the data structure of the FRESCO code, including the COMMON statements, program input, and program output. The general structure of the code is described, including the subprograms and functions. The physical model used and examples of the code performance are also included in the report. (121 tabs.) (author)
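    The flavor of a zero-dimensional balance code can be shown with a single-equation caricature. FRESCO's actual model couples several profile-averaged particle and power balance equations; the equation, symbols and numbers below are illustrative assumptions only.

```python
def evolve_density(n0, source, tau, dt, steps):
    """Forward-Euler integration of the 0-D particle balance
    dn/dt = S - n/tau, whose steady state is n = S * tau."""
    n = n0
    for _ in range(steps):
        n += dt * (source - n / tau)
    return n

# Usage: with an assumed source S = 1e19 m^-3 s^-1 and confinement time
# tau = 0.5 s, the density relaxes toward S * tau = 5e18 m^-3 after a
# few confinement times.
n_final = evolve_density(n0=0.0, source=1e19, tau=0.5, dt=1e-3, steps=20_000)
```

    A real reactor simulation would advance several such coupled equations (fuel ions, alpha particles, impurities, energy) per time step, with the source and loss terms evaluated from the prescribed profiles.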

  20. The intercomparison of aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.; Fermandjian, J.; Gauvain, J.

    1988-01-01

    The behavior of aerosols in a reactor containment vessel following a severe accident could be an important determinant of the accident source term to the environment. Various processes result in the deposition of the aerosol onto surfaces within the containment, from where they are much less likely to be released. Some of these processes are very sensitive to particle size, so it is important to model the aerosol growth processes: agglomeration and condensation. A number of computer codes have been written to model growth and deposition processes. They have been tested against each other in a series of code comparison exercises. These exercises have investigated sensitivities to physical and numerical assumptions and have also proved a useful means of quality control for the codes. Various exercises in which code predictions are compared with experimental results are now under way

  1. VOA: a 2-d plasma physics code

    International Nuclear Information System (INIS)

    Eltgroth, P.G.

    1975-12-01

    A 2-dimensional relativistic plasma physics code was written and tested. The non-thermal components of the particle distribution functions are represented by expansion into moments in momentum space. These moments are computed directly from numerical equations. Currently three species are included - electrons, ions and "beam electrons". The computer code runs on either the 7600 or STAR machines at LLL. Both the physics and the operation of the code are discussed

  2. Degradabilidade ruminal in situ de vagens de faveira (Parkia platycephala Benth. em diferentes tamanhos de partículas In situ ruminal degradability of faveira (Parkia platycephala Benth. pods in different particle sizes

    Directory of Open Access Journals (Sweden)

    A.A. Alves

    2007-08-01

    Soluble (a) and potentially degradable (b) fractions and the degradation rate of the b fraction (c) of the dry matter (DM) and crude protein (CP) of Parkia platycephala pods, ground to particle sizes of 2 and 5 mm, were estimated by the in situ nylon bag method in sheep. The incubation times were 3, 6, 12, 24, 48, 72 and 96 hours, and the effective degradability (ED) was determined considering passage rates of 2, 5 and 8%/h. The a fraction for DM and CP was 69.6 and 49.9%, respectively, revealing elevated DM solubility. The b fraction for DM and CP was 24.7 and 43.9%, denoting reduced in situ DM degradation, with stabilization of DM degradation at 72 h and of CP degradation at 48 h of incubation. The in situ degradability of the constituents of P. platycephala pods, in particular of CP, was not depressed by their tannin content.

  3. The ZPIC educational code suite

    Science.gov (United States)

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

    Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging on our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to be run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.

  5. Translation of ARAC computer codes

    International Nuclear Information System (INIS)

    Takahashi, Kunio; Chino, Masamichi; Honma, Toshimitsu; Ishikawa, Hirohiko; Kai, Michiaki; Imai, Kazuhiko; Asai, Kiyoshi

    1982-05-01

    In 1981 we translated the well-known MATHEW and ADPIC computer codes and their auxiliary codes from the CDC 7600 version to the FACOM M-200. The codes form part of the Atmospheric Release Advisory Capability (ARAC) system of Lawrence Livermore National Laboratory (LLNL). MATHEW is a code for three-dimensional wind field analysis. Using observed data, it calculates the mass-consistent wind field of grid cells by a variational method. ADPIC is a code for three-dimensional concentration prediction of gases and particulates released to the atmosphere. It calculates concentrations in grid cells by the particle-in-cell method. They are written in LLLTRAN, i.e., the LLNL Fortran language, and are implemented on the CDC 7600 computers of LLNL. In this report, i) the computational methods of the MATHEW/ADPIC and their auxiliary codes, ii) comparisons of the calculated results with our JAERI particle-in-cell and Gaussian plume models, and iii) the translation procedures from the CDC version to the FACOM M-200 are described. With the permission of LLNL G-Division, this report is published to keep track of the translation procedures and to serve our JAERI researchers with comparisons and references for their work. (author)

  6. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consist of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself among the upper- and lower-level codes of the selected one that were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by this same program, and incorporation of this program into another data processing program was possible. This program has the merits of simple operation, accurate and detailed coding, and easy adjustment for another program. Therefore, this program can be used for automation of routine work in the department of radiology

  7. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  8. Parallelization of quantum molecular dynamics simulation code

    International Nuclear Information System (INIS)

    Kato, Kaori; Kunugi, Tomoaki; Shibahara, Masahiko; Kotake, Susumu

    1998-02-01

    A quantum molecular dynamics simulation code has been developed for the analysis of the thermalization of photon energies in molecules or materials at the Kansai Research Establishment. The simulation code is parallelized for both a scalar massively parallel computer (Intel Paragon XP/S75) and a vector parallel computer (Fujitsu VPP300/12). Scalable speed-up was obtained on both parallel computers by distributing particle groups among the processor units. By distributing work to processor units not only by particle group but also by the fine-grained per-particle calculations, high parallelization performance was achieved on the Intel Paragon XP/S75. (author)

  9. Dynamic Shannon Coding

    OpenAIRE

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
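    For reference, the static Shannon code that the dynamic algorithm builds on assigns symbol i a codeword of length ⌈-log2 p_i⌉, taken from the binary expansion of the cumulative probability of the symbols preceding it. The sketch below is the classic static construction, not the paper's dynamic algorithm:

```python
import math

def shannon_code(probs):
    """Static Shannon code: with symbols sorted by decreasing probability,
    symbol i gets the first ceil(-log2 p_i) bits of the binary expansion
    of the cumulative probability of the symbols before it."""
    items = sorted(probs.items(), key=lambda kv: -kv[1])
    code, cum = {}, 0.0
    for sym, p in items:
        length = max(1, math.ceil(-math.log2(p)))
        # binary expansion of `cum` to `length` bits
        bits, frac = [], cum
        for _ in range(length):
            frac *= 2
            bit = int(frac)
            bits.append(str(bit))
            frac -= bit
        code[sym] = "".join(bits)
        cum += p
    return code

# Usage: for dyadic probabilities the lengths match the entropy exactly.
code = shannon_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
```

    The construction is prefix-free by design, which is the property the dynamic variant must maintain as the probability estimates change.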

  10. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field:
    * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding
    * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes
    * Distance properties of convolutional codes
    * Includes a downloadable solutions manual
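    As a concrete instance of the basic principles the book covers, the canonical rate-1/2, constraint-length-3 encoder with generator polynomials (7, 5) in octal can be sketched as follows (an illustrative example, not taken from the book):

```python
def conv_encode(bits, gens=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder, constraint length 3, with the
    textbook generator polynomials 7 and 5 (octal)."""
    state = 0  # two memory bits of the shift register
    out = []
    for b in bits:
        reg = (b << 2) | state                       # current input + memory
        for g in gens:
            out.append(bin(reg & g).count("1") & 1)  # parity of the tapped bits
        state = reg >> 1                             # shift the register
    return out

# Usage: encode the input sequence 1 0 1 1 from the all-zero state.
encoded = conv_encode([1, 0, 1, 1])
```

    Each input bit produces two output bits, and the decoders the book discusses (Viterbi, BCJR, BEAST, list, sequential) all operate on the trellis defined by this state machine.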

  11. Codes Over Hyperfields

    Directory of Open Access Journals (Sweden)

    Atamewoue Surdive

    2017-12-01

    In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for code theory than codes over classical finite fields.

  12. (Nearly) portable PIC code for parallel computers

    International Nuclear Information System (INIS)

    Decyk, V.K.

    1993-01-01

    As part of the Numerical Tokamak Project, the author has developed a (nearly) portable, one dimensional version of the GCPIC algorithm for particle-in-cell codes on parallel computers. This algorithm uses a spatial domain decomposition for the fields, and passes particles from one domain to another as the particles move spatially. With only minor changes, the code has been run in parallel on the Intel Delta, the Cray C-90, the IBM ES/9000 and a cluster of workstations. After a line by line translation into cmfortran, the code was also run on the CM-200. Impressive speeds have been achieved, both on the Intel Delta and the Cray C-90, around 30 nanoseconds per particle per time step. In addition, the author was able to isolate the data management modules, so that the physics modules were not changed much from their sequential version, and the data management modules can be used as "black boxes."
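    The particle-passing step of such a spatial domain decomposition can be sketched in a few lines. This is an illustrative serial stand-in: on a real parallel machine the list moves below would be messages between processors, and the function name and two-domain setup are invented for the example.

```python
def exchange_particles(domains, bounds):
    """Move particles that crossed a domain boundary to the owning domain.

    `domains` is a list of particle-position lists, one per domain;
    `bounds` gives the [lo, hi) spatial interval owned by each domain.
    """
    outgoing = [[] for _ in domains]
    for d, parts in enumerate(domains):
        lo, hi = bounds[d]
        kept = []
        for x in parts:
            if lo <= x < hi:
                kept.append(x)
            else:
                # find the domain that now owns x (linear search for clarity)
                owner = next(i for i, (l, h) in enumerate(bounds) if l <= x < h)
                outgoing[owner].append(x)
        domains[d] = kept
    for d, incoming in enumerate(outgoing):
        domains[d].extend(incoming)   # stands in for MPI receives
    return domains

# Usage: two domains over [0, 10); one particle in each has drifted across
# the boundary at x = 5 and must change owners.
domains = [[1.0, 6.0], [4.0, 9.0]]
bounds = [(0.0, 5.0), (5.0, 10.0)]
exchange_particles(domains, bounds)
```

    Isolating this exchange in a data management module, as the record describes, is what lets the physics modules stay close to their sequential form.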

  13. GOC: General Orbit Code

    International Nuclear Information System (INIS)

    Maddox, L.B.; McNeilly, G.S.

    1979-08-01

    GOC (General Orbit Code) is a versatile program which will perform a variety of calculations relevant to isochronous cyclotron design studies. In addition to the usual calculations of interest (e.g., equilibrium and accelerated orbits, focusing frequencies, field isochronization, etc.), GOC has a number of options to calculate injections with a charge change. GOC provides both printed and plotted output, and will follow groups of particles to allow determination of finite-beam properties. An interactive PDP-10 program called GIP, which prepares input data for GOC, is available. GIP is a very easy and convenient way to prepare complicated input data for GOC. Enclosed with this report are several microfiche containing source listings of GOC and other related routines and the printed output from a multiple-option GOC run

  14. ESP-TIMOC code manual

    International Nuclear Information System (INIS)

    Jaarsma, R.; Perlado, J.M.; Rief, H.

    1978-01-01

    ESP-TIMOC is an 'Event Scanning Program' to analyse the events (collision or boundary crossing parameters) of Monte Carlo particle transport problems. It is a modular program and belongs to the TIMOC code system. ESP-TIMOC is primarily designed to calculate time-dependent response functions such as energy-dependent fluxes and currents at interfaces. An eventual extension to other quantities is simple and straightforward

  15. REX-ISOLDE RFQ Beam Dynamics Studies using CST EM Studio

    CERN Document Server

    Fraser, M A

    2014-01-01

    The original CNC milling files used to machine the electrodes of the REX-ISOLDE RFQ were acquired in late 2012 and electrostatic simulations were carried out using CST EM Studio in order to attain a 3D field map of the electric fields in the region around the beam axis. The objective was to construct a beam dynamics simulation tool that frees us from the constraints of the PARMTEQM code, which was used to design the RFQ, and that will afford us more flexibility in the studies needed for pre-bunching into the RFQ with an external multi-harmonic buncher. This note details the geometry of the electrodes and their simulation in CST EM Studio, the implementation of particle tracking in the computed field map using TRACK and benchmarking studies with PARMTEQM v3.09.

  16. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to the coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
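    The vector coding operation described above, where a node multiplies each incoming length-L packet by an L x L binary matrix and sums the results over GF(2), can be sketched as follows (an illustrative toy over GF(2), not the paper's algorithm for choosing the matrices):

```python
def gf2_matvec(matrix, vec):
    """Multiply an L x L binary matrix by a length-L binary vector over GF(2)."""
    return [sum(m & v for m, v in zip(row, vec)) % 2 for row in matrix]

def combine(packets, matrices):
    """A node's output packet: sum_k M_k * p_k over GF(2) for inputs p_k."""
    out = [0] * len(packets[0])
    for pkt, mat in zip(packets, matrices):
        out = [a ^ b for a, b in zip(out, gf2_matvec(mat, pkt))]
    return out

# Usage: with identity coding matrices, the node output reduces to the
# bitwise XOR of the incoming packets (here L = 4).
identity = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
mixed = combine([[1, 0, 1, 0], [1, 1, 0, 0]], [identity, identity])
```

    Choosing non-trivial, invertible matrices M_k so that every sink can decode is exactly the selection problem the algorithms in the paper address.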

  17. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. We find and classify all 2D homological stabilizer codes. We find optimal codes among the homological stabilizer codes.

  18. R-Matrix Codes for Charged-Particle Induced Reactions in the Resolved Resonance Region (3), Summary Report of an IAEA Consultants' Meeting IAEA Headquarters, Vienna, Austria, 28-30 June 2017

    Energy Technology Data Exchange (ETDEWEB)

    Leeb, Helmut [Technische Univ. Wien, Vienna (Austria); Dimitriou, Paraskevi [Intl Atomic Energy Agency (IAEA), Vienna (Austria); Thompson, Ian [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-10-23

    A Consultants Meeting was held at the IAEA Headquarters, from 28 to 30 June 2017, to discuss the results of a test exercise that had been defined and assigned to all participants of the previous meeting held in December 2016. Five codes were used in this exercise: AMUR, AZURE2, RAC, SFRESCO and SAMMY. The results obtained from these codes were compared and further actions were proposed. Participants’ presentations and technical discussions, as well as proposed additional actions have been summarized in this report.

  19. Research on Primary Shielding Calculation Source Generation Codes

    Science.gov (United States)

    Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun

    2017-09-01

    Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDF) for the source particle sample code of the J Monte Carlo Transport (JMCT) code, and a source particle sample code is developed to sample source particle directions, types, coordinates, energy and weights from the CDFs. A source generation code is developed to transform three dimensional (3D) power distributions in xyz geometry to source distributions in r θ z geometry for the J Discrete Ordinate Transport (JSNT) code. Validations on the PSC models of the Qinshan No.1 nuclear power plant (NPP) and the CAP1400 and CAP1700 reactors are performed. Numerical results show that the theoretical model and the codes are both correct.
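
The CDF-based sampling step described above can be sketched generically: tabulate a cumulative distribution once, then invert it with a binary search per source particle. The bin values and probabilities below are invented for illustration; this is not JMCT code.

```python
import bisect
import random

energies = [0.5, 1.0, 2.0, 4.0]   # source energy bins, MeV (illustrative)
probs    = [0.1, 0.4, 0.3, 0.2]   # source probabilities (sum to 1)

# Build the cumulative distribution function once.
cdf, acc = [], 0.0
for p in probs:
    acc += p
    cdf.append(acc)

def sample_energy(rng):
    """Invert the CDF: pick the first bin whose cumulative value reaches u."""
    u = rng.random()
    return energies[min(bisect.bisect_left(cdf, u), len(energies) - 1)]

rng = random.Random(42)
counts = {e: 0 for e in energies}
for _ in range(100_000):
    counts[sample_energy(rng)] += 1
print(counts)  # sampled frequencies approach the source probabilities
```

The same inverse-CDF pattern extends to sampling directions, coordinates, and particle types; only the tabulated distribution changes.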

  20. Current status of high energy nucleon-meson transport code

    Energy Technology Data Exchange (ETDEWEB)

    Takada, Hiroshi; Sasa, Toshinobu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    The current status of the accelerator design code NMTC/JAERI, an outline of its physical model, and an evaluation of the accuracy of the code are reported. To evaluate the nuclear performance of an accelerator-driven intense spallation neutron source, the nuclear reactions between high-energy protons and the target nuclide, and the behavior of the various produced particles, must be known. The nuclear design of the spallation neutron system used a calculation code system that couples the high-energy nucleon·meson transport code with a neutron·photon transport code. NMTC/JAERI describes the particle evaporation process, taking into account the competition between the intranuclear cascade and fission processes. Particle transport calculations are carried out for protons, neutrons, π-mesons, and μ-mesons. To verify and improve the accuracy of the high-energy nucleon-meson transport code, data on spallation and spallation-neutron fragments from integral experiments were collected. (S.Y.)

  1. Diagnostic Coding for Epilepsy.

    Science.gov (United States)

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  2. Coding of Neuroinfectious Diseases.

    Science.gov (United States)

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  3. Particle bed reactor modeling

    Science.gov (United States)

    Sapyta, Joe; Reid, Hank; Walton, Lew

    The topics are presented in viewgraph form and include the following: particle bed reactor (PBR) core cross section; PBR bleed cycle; fuel and moderator flow paths; PBR modeling requirements; characteristics of PBR and nuclear thermal propulsion (NTP) modeling; challenges for PBR and NTP modeling; thermal hydraulic computer codes; capabilities for PBR/reactor application; thermal/hydraulic codes; limitations; physical correlations; comparison of predicted friction factor and experimental data; frit pressure drop testing; cold frit mask factor; decay heat flow rate; startup transient simulation; and philosophy of systems modeling.

  4. Inner Radiation Belt Representation of the Energetic Electron Environment: Model and Data Synthesis Using the Salammbo Radiation Belt Transport Code and Los Alamos Geosynchronous and GPS Energetic Particle Data

    Science.gov (United States)

    Friedel, R. H. W.; Bourdarie, S.; Fennell, J.; Kanekal, S.; Cayton, T. E.

    2004-01-01

    The highly energetic electron environment in the inner magnetosphere (GEO inward) has received a lot of research attention in recent years, as the dynamics of relativistic electron acceleration and transport are not yet fully understood. These electrons can cause deep dielectric charging in any space hardware in the MEO to GEO region. We use a novel approach to obtain a global representation of the inner magnetospheric energetic electron environment, which can reproduce the absolute environment (flux) for any spacecraft orbit in that region to within a factor of 2 for the energy range of 100 keV to 5 MeV electrons, for any level of magnetospheric activity. We combine the extensive set of inner magnetospheric energetic electron observations available at Los Alamos with the physics-based Salammbo transport code, using the data assimilation technique of "nudging". This in effect inputs in-situ data into the code and allows the diffusion mechanisms in the code to interpolate the data into regions and times of no data availability. We present here details of the methods used, both in the data assimilation process and in the necessary inter-calibration of the input data. We also present sample runs of the model/data code and compare the results to test spacecraft data not used in the data assimilation process.
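
The "nudging" idea can be illustrated with a zero-dimensional toy model: the state evolves under stand-in physics and is relaxed toward observations wherever data exist, while the model fills the gaps in between. All constants below are arbitrary; this is not the Salammbo code.

```python
def step(state, obs=None, tau=5.0, decay=0.02, dt=1.0):
    """One time step: stand-in model physics plus a nudging term where data exist."""
    tendency = -decay * state            # toy 'physics': exponential loss
    if obs is not None:
        tendency += (obs - state) / tau  # relax toward the observation
    return state + dt * tendency

state = 1.0
for t in range(200):
    obs = 2.0 if t % 10 == 0 else None   # sparse in-situ data
    state = step(state, obs)
print(round(state, 3))  # pulled toward the observed value between loss phases
```

In the real application the "physics" is the Salammbo diffusion operator and the observations are the geosynchronous and GPS fluxes; the relaxation time tau sets how strongly data override the model.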

  5. Automated data collection in single particle electron microscopy

    Science.gov (United States)

    Tan, Yong Zi; Cheng, Anchi; Potter, Clinton S.; Carragher, Bridget

    2016-01-01

    Automated data collection is an integral part of modern workflows in single particle electron microscopy (EM) research. This review surveys the software packages available for automated single particle EM data collection. The degree of automation at each stage of data collection is evaluated, and the capabilities of the software packages are described. Finally, future trends in automation are discussed. PMID:26671944

  6. Interaction of free charged particles with a chirped electromagnetic pulse

    NARCIS (Netherlands)

    Khachatryan, A.G.; van Goor, F.A.; Boller, Klaus J.

    2004-01-01

    We study the effect of chirp on electromagnetic (EM) pulse interaction with a charged particle. Both the one-dimensional (1D) and 3D cases are considered. It is found that, in contrast to the case of a nonchirped pulse, the charged particle energy can be changed after the interaction with a 1D EM

  7. Rare particles

    International Nuclear Information System (INIS)

    Kutschera, W.

    1984-01-01

    The use of Accelerator Mass Spectrometry (AMS) to search for hypothetical particles and known particles of rare processes is discussed. The hypothetical particles considered include fractionally charged particles, anomalously heavy isotopes, and superheavy elements. The known particles produced in rare processes discussed include doubly-charged negative ions, counting neutrino-produced atoms in detectors for solar neutrino detection, and the spontaneous emission of 14 C from 223 Ra. 35 references

  8. The Complete Mitochondrial Genome of the Pink Stem Borer, Sesamia inferens, in Comparison with Four Other Noctuid Moths

    Directory of Open Access Journals (Sweden)

    Yu-Zhou Du

    2012-08-01

    The complete 15,413-bp mitochondrial genome (mitogenome) of Sesamia inferens (Walker) (Lepidoptera: Noctuidae) was sequenced and compared with those of four other noctuid moths. All of the mitogenomes analyzed displayed similar characteristics with respect to gene content, genome organization, nucleotide composition, and codon usage. Twelve of the thirteen protein-coding genes (PCGs) utilized the standard ATN initiation codon, but the cox1 gene used CGA; the cox1, cox2, and nad4 genes had the truncated termination codon T in the S. inferens mitogenome. All of the tRNA genes had typical cloverleaf secondary structures except for trnS1(AGN), in which the dihydrouridine (DHU) arm did not form a stable stem-loop structure. The secondary structures of both the rrnL and rrnS genes inferred from the S. inferens mitogenome closely resembled those of other noctuid moths. In the A+T-rich region, the conserved motif "ATAGA" followed by a long T-stretch was observed in all noctuid moths, but other specific tandem-repeat elements were more variable. Additionally, the S. inferens mitogenome contained a potential stem-loop structure, a duplicated 17-bp repeat element, a decuplicated segment, and a microsatellite "(AT)7", without a poly-A element upstream of the trnM gene in the A+T-rich region. Finally, phylogenetic relationships were reconstructed based on the amino acid sequences of the 13 mitochondrial PCGs, which support the traditional morphologically based view of relationships within the Noctuidae.

  9. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  10. Entropy Coding in HEVC

    OpenAIRE

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...
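
The benefit of the context-adaptive modeling that underlies CABAC can be shown with a counts-based binary model and the ideal arithmetic-coding cost of -log2(p) bits per symbol. This is a deliberate simplification, not the actual H.264/HEVC probability state machine.

```python
import math

def ideal_bits(bits, adaptive=True):
    """Ideal arithmetic-coding cost of a bit sequence under a binary model."""
    ones, total, cost = 1, 2, 0.0            # Laplace-smoothed counts
    for b in bits:
        p1 = ones / total if adaptive else 0.5
        cost += -math.log2(p1 if b else 1.0 - p1)
        ones += b                            # adapt the model after coding
        total += 1
    return cost

skewed = [1] * 90 + [0] * 10                 # strongly biased source
print(round(ideal_bits(skewed, adaptive=False), 1))  # fixed p = 0.5 model
print(round(ideal_bits(skewed, adaptive=True), 1))   # adaptive model wins
```

The static model spends exactly one bit per symbol; the adaptive model learns the bias and approaches the source entropy, which is the same effect CABAC's context models exploit on real syntax elements.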

  11. Generalized concatenated quantum codes

    International Nuclear Information System (INIS)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematical way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.
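
The quantum Hamming bound mentioned above, for nondegenerate binary codes correcting t errors, is 2^k · Σ_{j≤t} 3^j C(n,j) ≤ 2^n. A quick numeric check, using the [[5,1,3]] code, which meets the bound with equality:

```python
from math import comb

def quantum_hamming_ok(n, k, d):
    """Nondegenerate quantum Hamming bound for an [[n, k, d]] code."""
    t = (d - 1) // 2                     # number of correctable errors
    return 2**k * sum(3**j * comb(n, j) for j in range(t + 1)) <= 2**n

print(quantum_hamming_ok(5, 1, 3))   # True: 2 * 16 = 32 = 2^5, met exactly
print(quantum_hamming_ok(4, 1, 3))   # False: n = 4 is too short
```

The factor 3^j counts the X, Y, Z error types per qubit; "asymptotically meeting" the bound, as the abstract states, means the left side approaches the right as the block length grows.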

  12. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback into LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
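
The LT-code machinery being redesigned here can be sketched in a few lines: each encoded symbol is the XOR of a random subset of source packets drawn from a degree distribution, and the decoder "peels" degree-1 symbols. The distribution and sizes below are toy choices, not the ones proposed in the paper.

```python
import random

def lt_encode(source, n_enc, rng):
    """Each encoded symbol: (index set, XOR of those source packets)."""
    enc = []
    for _ in range(n_enc):
        deg = rng.choice([1, 2, 2, 3])               # toy degree distribution
        idxs = set(rng.sample(range(len(source)), deg))
        val = 0
        for i in idxs:
            val ^= source[i]
        enc.append((idxs, val))
    return enc

def lt_decode(enc, k):
    """Peeling decoder: repeatedly resolve degree-1 symbols."""
    decoded = [None] * k
    syms = [[set(idxs), val] for idxs, val in enc]
    changed = True
    while changed:
        changed = False
        for sym in syms:
            for i in [i for i in sym[0] if decoded[i] is not None]:
                sym[0].discard(i)                    # subtract known packets
                sym[1] ^= decoded[i]
            if len(sym[0]) == 1:
                (i,) = sym[0]
                if decoded[i] is None:
                    decoded[i] = sym[1]
                    changed = True
    return decoded

source = [0b1010, 0b0111, 0b0011, 0b1001]
rng = random.Random(1)
print(lt_decode(lt_encode(source, 12, rng), 4))  # may decode fully or partially
```

Feedback, as the paper exploits it, lets the encoder reshape the degree distribution mid-stream once the receiver reports which packets are already resolved.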

  13. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  14. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill

  15. Evaluation of HER2 Gene Amplification in Breast Cancer Using Nuclei Microarray in situ Hybridization

    Directory of Open Access Journals (Sweden)

    Xuefeng Zhang

    2012-05-01

    Fluorescence in situ hybridization (FISH) assay is considered the "gold standard" in evaluating HER2/neu (HER2) gene status. However, FISH detection is costly and time consuming. Thus, we established a nuclei microarray with extracted intact nuclei from paraffin-embedded breast cancer tissues for FISH detection. The nuclei microarray FISH (NMFISH) technology serves as a useful platform for analyzing the HER2 gene/chromosome 17 centromere ratio. We examined HER2 gene status in 152 cases of invasive ductal carcinomas of the breast that were resected surgically, with FISH and NMFISH. HER2 gene amplification status was classified according to the guidelines of the American Society of Clinical Oncology and College of American Pathologists (ASCO/CAP). Comparison of the cut-off values for the HER2/chromosome 17 centromere copy number ratio obtained by NMFISH and FISH showed that there was almost perfect agreement between the two methods (κ coefficient 0.920). The results of the two methods were almost consistent for the evaluation of HER2 gene counts. The present study proved that NMFISH is comparable with FISH for evaluating HER2 gene status. The use of nuclei microarray technology is highly efficient, time and reagent conserving, and inexpensive.
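
The agreement statistic reported above (κ coefficient 0.920) is Cohen's kappa. A sketch of the computation on an invented 2 x 2 table of amplified / not-amplified calls — the counts below are illustrative, not the paper's data:

```python
def cohens_kappa(table):
    """table[i][j]: cases rated category i by method 1 and j by method 2."""
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(len(table))) / n        # observed
    pe = sum((sum(table[i]) / n) * (sum(r[i] for r in table) / n)
             for i in range(len(table)))                        # by chance
    return (po - pe) / (1 - pe)

#             NMFISH: amplified  not amplified
table = [[30, 2],        # FISH: amplified
         [1, 119]]       # FISH: not amplified
print(round(cohens_kappa(table), 3))
```

Kappa discounts the agreement expected by chance from the marginal rates, which is why a value above 0.8 is conventionally read as "almost perfect" agreement, as the abstract does.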

  16. Monte Carlo codes use in neutron therapy

    International Nuclear Information System (INIS)

    Paquis, P.; Mokhtari, F.; Karamanoukian, D.; Pignol, J.P.; Cuendet, P.; Iborra, N.

    1998-01-01

    Monte Carlo calculation codes allow one to study accurately all the parameters relevant to radiation effects, such as the dose deposition or the type of microscopic interactions, through particle-by-particle transport simulation. These features are very useful for neutron irradiations, from device development up to dosimetry. This paper illustrates some applications of these codes in Neutron Capture Therapy and in Neutron Capture Enhancement of fast neutron irradiations. (authors)

  17. The Los Alamos accelerator code group

    Energy Technology Data Exchange (ETDEWEB)

    Krawczyk, F.L.; Billen, J.H.; Ryne, R.D.; Takeda, Harunori; Young, L.M.

    1995-05-01

    The Los Alamos Accelerator Code Group (LAACG) is a national resource for members of the accelerator community who use and/or develop software for the design and analysis of particle accelerators, beam transport systems, light sources, storage rings, and components of these systems. Below the authors describe the LAACG's activities in high performance computing, maintenance and enhancement of POISSON/SUPERFISH and related codes and the dissemination of information on the INTERNET.

  19. A New Natural Lactone from Dimocarpus longan Lour. Seeds

    Directory of Open Access Journals (Sweden)

    Zhongjun Li

    2012-08-01

    A new natural product named longanlactone was isolated from Dimocarpus longan Lour. seeds. Its structure was determined as 3-(2-acetyl-1H-pyrrol-1-yl)-5-(prop-2-yn-1-yl)dihydrofuran-2(3H)-one by spectroscopic methods and HRESIMS.

  20. Particle detection

    International Nuclear Information System (INIS)

    Charpak, G.

    2000-01-01

    In this article G. Charpak presents the principles on which particle detection is based. Particle accelerators are becoming more and more powerful and require new detectors able to track the right particle in a huge flux of particles. The gigantic size of detectors in high energy physics is often due to the necessity of obtaining a long enough trajectory in a magnetic field, in order to deduce from its curvature an accurate measurement of the momenta involved in the reaction. (A.C.)

  1. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.
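
For readers unfamiliar with LDPC decoding, a toy sketch: a small sparse parity-check matrix H and the classic bit-flipping decoder, which repeatedly flips the bit involved in the most failed checks. The 4 x 8 matrix is invented and is far smaller than any code the workgroup considered.

```python
H = [
    [1, 1, 0, 1, 0, 0, 1, 0],
    [0, 1, 1, 0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 0, 1],
]

def syndrome(word):
    """One parity result per row of H; all zeros means a valid codeword."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

def bit_flip_decode(word, max_iters=20):
    word = list(word)
    for _ in range(max_iters):
        s = syndrome(word)
        if not any(s):
            return word                      # all parity checks satisfied
        # count how many failed checks each bit participates in
        votes = [sum(s[i] for i in range(len(H)) if H[i][j])
                 for j in range(len(word))]
        word[votes.index(max(votes))] ^= 1   # flip the worst offender
    return word

received = [0, 0, 1, 0, 0, 0, 0, 0]          # all-zero codeword, one bit error
print(bit_flip_decode(received))             # recovers the all-zero codeword
```

Production decoders use soft-decision belief propagation rather than hard bit flipping, but the sparse-H structure that makes the message passing cheap is the same.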

  2. The Fireball integrated code package

    Energy Technology Data Exchange (ETDEWEB)

    Dobranich, D.; Powers, D.A.; Harper, F.T.

    1997-07-01

    Many deep-space satellites contain a plutonium heat source. An explosion, during launch, of a rocket carrying such a satellite offers the potential for the release of some of the plutonium. The fireball following such an explosion exposes any released plutonium to a high-temperature chemically-reactive environment. Vaporization, condensation, and agglomeration processes can alter the distribution of plutonium-bearing particles. The Fireball code package simulates the integrated response of the physical and chemical processes occurring in a fireball and the effect these processes have on the plutonium-bearing particle distribution. This integrated treatment of multiple phenomena represents a significant improvement in the state of the art for fireball simulations. Preliminary simulations of launch-second scenarios indicate: (1) most plutonium vaporization occurs within the first second of the fireball; (2) large non-aerosol-sized particles contribute very little to plutonium vapor production; (3) vaporization and both homogeneous and heterogeneous condensation occur simultaneously; (4) homogeneous condensation transports plutonium down to the smallest-particle sizes; (5) heterogeneous condensation precludes homogeneous condensation if sufficient condensation sites are available; and (6) agglomeration produces larger-sized particles but slows rapidly as the fireball grows.

  3. Locally orderless registration code

    DEFF Research Database (Denmark)

    2012-01-01

    This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks installed and is provided for 64 bit on Mac, Linux and Windows.

  4. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Among the earliest discovered codes that approach the Shannon limit of the channel were the low density parity check (LDPC) codes. The term low density arises from the property of the parity check matrix defining the code. We will now define this matrix and the role that it plays in decoding.

  5. Manually operated coded switch

    International Nuclear Information System (INIS)

    Barnette, J.H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried, and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.

  6. Strange particles

    International Nuclear Information System (INIS)

    Chinowsky, W.

    1989-01-01

    Work done in the mid 1950s at Brookhaven National Laboratory on strange particles is described. Experiments were done on the Cosmotron. The author describes his own and others' work on neutral kaons, lambda and theta particles and points out the theoretical gap between predictions and experimental findings. By the end of the decade, the theory of strange particles was better understood. (UK)

  7. Study of the spheronization process of glass particles by the gravitational falling process for internal selective radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Barros Filho, E.C.; Martinelli, J.R., E-mail: eraldo.filho@usp.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), SP (Brazil); Sene, F.F. [Centro Tecnologico da Marinha em Sao Paulo (CTMS), Sao Paulo, SP (Brazil)

    2011-07-01

    Internal selective radiotherapy is an alternative to treat hepatocellular carcinoma. Glass microspheres containing a β⁻-emitting radionuclide are introduced into the liver, where they lodge preferentially in the region containing the cancer cells. The microspheres are trapped in the arterioles that feed the tumors, and the β⁻ particles annihilate the cancer cells. The glass particles must be spherical to avoid unnecessary bleeding, and the particle size must be restricted to a range that optimizes the blocking effect and avoids migration to other parts of the human body. The particle size distribution of the microspheres is not easily predicted, since variation of the aspect ratio and the presence of agglomerates can influence the resulting distribution. In the present work, the spheronization process for obtaining microspheres of suitable diameter and shape for radiotherapy treatment from irregularly shaped glass particles is studied. (author)

  8. Effects of feeding different protein levels of supplements to finishing cattle in pasture during the dry to rainy transition season on voluntary intake and passage of particles

    Directory of Open Access Journals (Sweden)

    Edenio Detmann

    2005-08-01

    The objective of this trial was to evaluate the voluntary intake and ruminal passage of particles in supplemented finishing cattle during the dry to rainy transition season. Five Holstein x Zebu steers averaging 304 kg of live weight and 24 months of age, kept in five paddocks (0.34 ha each) of Brachiaria decumbens, were used in this trial. The supplements fed (4 kg/animal/day) contained ground corn, whole soybean, urea, ammonium sulfate, and minerals, and were formulated to yield, on an as-fed basis, 12, 16, 20, and 24% crude protein (CP). The experiment was conducted in four 21-day experimental periods and analyzed as a 4 x 4 Latin square. A fifth, unsupplemented animal (control) was used for descriptive comparison. The forage selected by the animals averaged 109.9 g/kg of CP and 564.4 g/kg of neutral detergent fiber (NDF) on a dry matter (DM) basis. The CP level of the supplements did not affect the intakes of DM, organic matter, or NDF. Supplementation reduced pasture intake and increased total DM intake relative to the control, with a mean substitution coefficient of 0.41. The CP level of the supplements did not affect the ruminal passage rate of particles, whose mean value (0.034 h-1) was higher than that observed in the control (0.029 h-1).

  9. Ripple enhanced transport of suprathermal alpha particles

    International Nuclear Information System (INIS)

    Tani, K.; Takizuka, T.; Azumi, M.

    1986-01-01

    The ripple enhanced transport of suprathermal alpha particles has been studied with a newly developed Monte-Carlo code in which the motion of the banana orbit in a toroidal field ripple is described by a mapping method. The existence of ripple-resonance diffusion has been confirmed numerically. We have developed another new code in which the radial displacement of the banana orbit is given by diffusion coefficients obtained from the mapping code or from the orbit-following Monte-Carlo code. The ripple loss of α particles during slowing down has been estimated by the mapping-model code as well as by the diffusion-model code. Comparison of the results with those from the orbit-following Monte-Carlo code shows that all of them agree very well. (author)
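
The diffusion-model idea can be illustrated by treating ripple-induced radial kicks of the banana orbit as a random walk and recovering the diffusion coefficient from the mean squared displacement, ⟨Δx²⟩ = 2Dt. Step size, counts, and unit time per bounce are arbitrary illustrative choices, not values from the paper.

```python
import random

def estimate_D(step, n_steps, n_particles, rng):
    """Monte-Carlo estimate of D from <dx^2> = 2 D t (unit time per step)."""
    total_sq = 0.0
    for _ in range(n_particles):
        x = 0.0
        for _ in range(n_steps):
            x += rng.choice((-step, step))   # one radial kick per bounce
        total_sq += x * x
    msd = total_sq / n_particles             # mean squared displacement
    return msd / (2 * n_steps)

rng = random.Random(7)
D = estimate_D(step=0.01, n_steps=1000, n_particles=2000, rng=rng)
print(D)  # theory for this walk: step**2 / 2 = 5e-5
```

This is the link the abstract relies on: once the mapping code supplies per-bounce displacement statistics, a cheap diffusion model can reproduce the orbit-following results.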

  10. Reference Gene Selection in the Desert Plant Eremosparton songoricum

    Directory of Open Access Journals (Sweden)

    Dao-Yuan Zhang

    2012-06-01

    Eremosparton songoricum (Litv.) Vass. (E. songoricum) is a rare and extremely drought-tolerant desert plant that holds promise as a model organism for the identification of genes associated with water deficit stress. Here, we cloned and evaluated the expression of eight candidate reference genes using quantitative real-time reverse transcriptase polymerase chain reactions. The expression of these candidate reference genes was analyzed in a diverse set of 20 samples including various E. songoricum plant tissues exposed to multiple environmental stresses. GeNorm analysis indicated that expression stability varied between the reference genes in the different experimental conditions, but the two most stable reference genes were sufficient for normalization in most conditions. EsEF and Esα-TUB were sufficient for various stress conditions, EsEF and EsACT were suitable for samples of differing germination stages, and EsGAPDH and EsUBQ were most stable across multiple adult tissue samples. The Es18S gene was unsuitable as a reference gene in our analysis. In addition, the expression level of the drought-stress related transcription factor EsDREB2 verified the utility of E. songoricum reference genes and indicated that no single gene was adequate for normalization on its own. This is the first systematic report on the selection of reference genes in E. songoricum, and these data will facilitate future work on gene expression in this species.

  11. QR Codes 101

    Science.gov (United States)

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  12. Controlling Energy Radiations of Electromagnetic Waves via Frequency Coding Metamaterials.

    Science.gov (United States)

    Wu, Haotian; Liu, Shuo; Wan, Xiang; Zhang, Lei; Wang, Dan; Li, Lianlin; Cui, Tie Jun

    2017-09-01

    Metamaterials are artificial structures composed of subwavelength unit cells to control electromagnetic (EM) waves. The spatial coding representation of a metamaterial has the ability to describe the material in a digital way. Spatial coding metamaterials are typically constructed from unit cells that have similar shapes with fixed functionality. Here, the concept of frequency coding metamaterial is proposed, which achieves different controls of EM energy radiations with a fixed spatial coding pattern when the frequency changes. In this case, not only the phase responses of the unit cells but also their phase sensitivities must be considered. Due to different frequency sensitivities of unit cells, two units with the same phase response at the initial frequency may have different phase responses at a higher frequency. To describe the frequency coding property of a unit cell, a digitalized frequency sensitivity is proposed, in which the units are encoded with the digits "0" and "1" to represent low and high phase sensitivities, respectively. By this means, two degrees of freedom, spatial coding and frequency coding, are obtained to control the EM energy radiations by a new class of frequency-spatial coding metamaterials. The above concepts and physical phenomena are confirmed by numerical simulations and experiments.
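
How a fixed coding pattern redirects radiated energy can be sketched with a 1-D array factor: elements coded "0" and "1" radiate with phase 0 and π respectively, and the far field is the phased sum over elements. The geometry and half-wavelength spacing below are illustrative assumptions, not the paper's design.

```python
import cmath
import math

def array_factor(code, theta, spacing=0.5):
    """|sum of element phasors| at angle theta; spacing in wavelengths."""
    k = 2 * math.pi                               # wavenumber, wavelength = 1
    return abs(sum(
        cmath.exp(1j * (math.pi * bit + k * spacing * n * math.sin(theta)))
        for n, bit in enumerate(code)))

uniform = [0] * 16              # all "0": a single broadside beam
striped = [0, 0, 1, 1] * 4      # "0011" stripes: energy pushed off axis
print(round(array_factor(uniform, 0.0), 2))   # strong peak at theta = 0
print(round(array_factor(striped, 0.0), 2))   # cancels at broadside
```

A frequency coding metamaterial adds the second degree of freedom the abstract describes: because element phases drift with frequency at different rates, the effective bit pattern, and hence this radiation pattern, changes with frequency even though the physical layout is fixed.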

  13. Potential of the MCNP computer code

    International Nuclear Information System (INIS)

    Kyncl, J.

    1995-01-01

    The MCNP code is designed for the numerical solution of neutron, photon, and electron transport problems by the Monte Carlo method. The code is based on linear transport theory for the behavior of the differential particle flux, and it directly uses data from a pointwise cross-section library as input. Experience gained in applying the code is outlined: the calculation of the effective parameters of fuel assemblies and of the entire reactor core, the determination of the effective parameters of the elementary fuel cell, and the numerical solution of neutron diffusion and/or transport problems for the fuel assembly. The agreement between the calculated and observed data gives evidence that the MCNP code can be used to advantage for calculations involving WWER-type fuel assemblies. (J.B.). 4 figs., 6 refs

  14. A deep convolutional neural network approach to single-particle recognition in cryo-electron microscopy.

    Science.gov (United States)

    Zhu, Yanan; Ouyang, Qi; Mao, Youdong

    2017-07-21

    Single-particle cryo-electron microscopy (cryo-EM) has become a mainstream tool for the structural determination of biological macromolecular complexes. However, high-resolution cryo-EM reconstruction often requires hundreds of thousands of single-particle images. Particle extraction from experimental micrographs thus can be laborious and presents a major practical bottleneck in cryo-EM structural determination. Existing computational methods for particle picking often use low-resolution templates for particle matching, making them susceptible to reference-dependent bias. It is critical to develop a highly efficient template-free method for the automatic recognition of particle images from cryo-EM micrographs. We developed a deep learning-based algorithmic framework, DeepEM, for single-particle recognition from noisy cryo-EM micrographs, enabling automated particle picking, selection and verification in an integrated fashion. The kernel of DeepEM is built upon a convolutional neural network (CNN) composed of eight layers, which can be recursively trained to be highly "knowledgeable". Our approach exhibits improved performance and accuracy when tested on the standard KLH dataset. Application of DeepEM to several challenging experimental cryo-EM datasets demonstrated its ability to avoid the selection of unwanted particles and non-particles even when true particles contain fewer features. The DeepEM methodology, derived from a deep CNN, allows automated particle extraction from raw cryo-EM micrographs in the absence of a template. It demonstrates improved performance, objectivity and accuracy. Application of this novel method is expected to remove the manual labor involved in single-particle verification, significantly improving the efficiency of cryo-EM data processing.
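
    The convolutional layer at the heart of such a network reduces to a few lines. The sketch below (plain Python, invented toy data) shows the valid-mode cross-correlation plus ReLU pair that CNN-based pickers stack into deeper models; it is a generic illustration, not the DeepEM architecture itself.

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the basic CNN building block."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = 0.0
            for a in range(kh):
                for b in range(kw):
                    s += image[i + a][j + b] * kernel[a][b]
            row.append(s)
        out.append(row)
    return out

def relu(feature_map):
    """Element-wise rectified linear activation."""
    return [[max(0.0, v) for v in row] for row in feature_map]

# A bright 2x2 blob (a toy "particle") in a 4x4 micrograph patch:
patch = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
blob_kernel = [[1, 1], [1, 1]]        # responds maximally to 2x2 bright regions
response = relu(conv2d(patch, blob_kernel))
```

    The response map peaks where the kernel matches the image content; training adjusts the kernels so that such peaks mark true particles.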

  15. Recent Improvements in the SHIELD-HIT Code

    DEFF Research Database (Denmark)

    Hansen, David Christoffer; Lühr, Armin Christian; Herrmann, Rochus

    2012-01-01

    Purpose: The SHIELD-HIT Monte Carlo particle transport code has previously been used to study a wide range of problems for heavy-ion treatment and has been benchmarked extensively against other Monte Carlo codes and experimental data. Here, an improved version of SHIELD-HIT is developed...

  16. Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP

    International Nuclear Information System (INIS)

    Vay, J.-L.; Furman, M.A.; Azevedo, A.W.; Cohen, R.H.; Friedman, A.; Grote, D.P.; Stoltz, P.H.

    2004-01-01

    We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE
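
    The coupling pattern described above — two codes living in one process, driven from Python and sharing the same particle arrays — can be sketched as follows. The class and method names are invented stand-ins for the roles the record assigns to POSINST and WARP, not the real interfaces of either code.

```python
# Two "codes" in one process share the same particle arrays via a Python driver.

class ElectronSource:                      # POSINST-like role: supplies electrons
    def emit(self, x, v, n=4):
        for i in range(n):
            x.append(0.1 * i)              # seed positions (arbitrary units)
            v.append(0.0)

class FieldSolver:                         # WARP-like role: fields + particle push
    def push(self, x, v, e_field=-1.0, dt=0.5):
        for i in range(len(x)):
            v[i] += e_field * dt           # kick from the (toy) field
            x[i] += v[i] * dt              # drift

x, v = [], []                              # shared arrays: both codes see the same lists
source, solver = ElectronSource(), FieldSolver()
source.emit(x, v)                          # one code fills the arrays...
solver.push(x, v)                          # ...the other advances the same memory
```

    Because both objects operate on the identical list objects, no data is copied between the codes — the essence of sharing "key arrays" through one interpreter.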

  17. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...
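
    The Reed-Solomon re-interpretation mentioned above can be made concrete in its "evaluation" form over a prime field: a k-symbol message is read as a polynomial of degree less than k, and the codeword is its values at n distinct field points. Two distinct such polynomials agree on at most k - 1 points, so the code has minimum distance n - k + 1. The field size and messages below are illustrative choices.

```python
P = 257  # prime field size (illustrative)

def rs_encode(message, n):
    """Evaluate the message polynomial at the points 1..n modulo P."""
    assert len(message) <= n <= P - 1
    def poly_eval(coeffs, x):
        acc = 0
        for c in reversed(coeffs):         # Horner's rule
            acc = (acc * x + c) % P
        return acc
    return [poly_eval(message, x) for x in range(1, n + 1)]

msg_a = [5, 1, 2]            # k = 3 symbols: the polynomial 5 + x + 2x^2
msg_b = [6, 1, 2]            # differs from msg_a in one symbol
cw_a = rs_encode(msg_a, 7)   # n = 7 codeword symbols
cw_b = rs_encode(msg_b, 7)
differ = sum(1 for s, t in zip(cw_a, cw_b) if s != t)
# distinct codewords differ in at least n - k + 1 = 5 positions
```

    The algebraic-geometry codes of Tsfasman, Vladut, and Zink generalize exactly this construction, replacing evaluation points on a line with points on an algebraic curve.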

  18. Particle therapy

    Energy Technology Data Exchange (ETDEWEB)

    Raju, M.R.

    1993-09-01

    Particle therapy has a long history. The experimentation with particles for their therapeutic application got started soon after they were produced in the laboratory. Physicists played a major role in proposing the potential applications in radiotherapy as well as in the development of particle therapy. A brief review of the current status of particle radiotherapy with some historical perspective is presented and specific contributions made by physicists will be pointed out wherever appropriate. The rationale of using particles in cancer treatment is to reduce the treatment volume to the target volume by using precise dose distributions in three dimensions by using particles such as protons and to improve the differential effects on tumors compared to normal tissues by using high-LET radiations such as neutrons. Pions and heavy ions combine the above two characteristics.

  19. Particle therapy

    International Nuclear Information System (INIS)

    Raju, M.R.

    1993-01-01

    Particle therapy has a long history. The experimentation with particles for their therapeutic application got started soon after they were produced in the laboratory. Physicists played a major role in proposing the potential applications in radiotherapy as well as in the development of particle therapy. A brief review of the current status of particle radiotherapy with some historical perspective is presented and specific contributions made by physicists will be pointed out wherever appropriate. The rationale of using particles in cancer treatment is to reduce the treatment volume to the target volume by using precise dose distributions in three dimensions by using particles such as protons and to improve the differential effects on tumors compared to normal tissues by using high-LET radiations such as neutrons. Pions and heavy ions combine the above two characteristics

  20. Particle cosmology

    CERN Multimedia

    CERN. Geneva

    2007-01-01

    The understanding of the Universe at the largest and smallest scales traditionally has been the subject of cosmology and particle physics, respectively. Studying the evolution of the Universe connects today's large scales with the tiny scales in the very early Universe and provides the link between the physics of particles and of the cosmos. This series of five lectures aims at a modern and critical presentation of the basic ideas, methods, models and observations in today's particle cosmology.

  1. Steam condensation modelling in aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.

    1986-01-01

    The principal subject of this study is the modelling of the condensation of steam into and evaporation of water from aerosol particles. These processes introduce a new type of term into the equation for the development of the aerosol particle size distribution. This new term faces the code developer with three major problems: the physical modelling of the condensation/evaporation process, the discretisation of the new term, and the separate accounting for the masses of the water and of the other components. This study has considered four codes which model the condensation of steam into and its evaporation from aerosol particles: AEROSYM-M (UK), AEROSOLS/B1 (France), NAUA (Federal Republic of Germany) and CONTAIN (USA). The modelling in the codes has been addressed under three headings: the physical modelling of condensation, the mathematics of the discretisation of the equations, and the methods for modelling the separate behaviour of different chemical components of the aerosol. The codes are least advanced in the area of solute effect modelling; at present only AEROSOLS/B1 includes the effect. The effect is greater for more concentrated solutions, so the less condensation a code predicts, the larger the error (an underestimate of the total airborne mass) introduced by omitting it. Data are needed on the water vapour pressure above concentrated solutions of the substances of interest (especially CsOH and CsI) if the extent to which aerosols retain water under superheated conditions is to be modelled. 15 refs
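
    A minimal sketch of what the new condensation term does in a sectional (binned) model, keeping the water mass of each bin separate from the dry mass as the text requires. The mass^(2/3) growth law, the rate constant and all numbers are illustrative assumptions, not the model of any of the four codes.

```python
def condense(dry, water, rate=0.1, dt=1.0):
    """One explicit Euler step of water condensation onto each size section."""
    new_water = []
    for d, w in zip(dry, water):
        area = (d + w) ** (2.0 / 3.0)   # surface area ~ (total particle mass)^(2/3)
        new_water.append(w + rate * area * dt)
    return dry, new_water               # the dry component is untouched

dry = [1.0, 8.0, 27.0]                  # three size sections (arbitrary mass units)
water = [0.0, 0.0, 0.0]
dry2, water2 = condense(dry, water)
# larger sections present more surface, so they take up water faster
```

    Separate dry/water bookkeeping is what lets a code later evaporate the water again without losing track of the non-volatile components.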

  2. Particle physics

    International Nuclear Information System (INIS)

    Kamal, Anwar

    2014-01-01

    Provides step-by-step derivations. Contains numerous tables and diagrams. Supports learning and teaching with numerous worked examples, questions and problems with answers. Sketches also the historical development of the subject. This textbook teaches particle physics very didactically. It supports learning and teaching with numerous worked examples, questions and problems with answers. Numerous tables and diagrams lead to a better understanding of the explanations. The content of the book covers all important topics of particle physics: Elementary particles are classified from the point of view of the four fundamental interactions. The nomenclature used in particle physics is explained. The discoveries and properties of known elementary particles and resonances are given. The particles considered are positrons, muon, pions, anti-protons, strange particles, neutrino and hadrons. The conservation laws governing the interactions of elementary particles are given. The concepts of parity, spin, charge conjugation, time reversal and gauge invariance are explained. The quark theory is introduced to explain the hadron structure and strong interactions. The solar neutrino problem is considered. Weak interactions are classified into various types, and the selection rules are stated. Non-conservation of parity and the universality of the weak interactions are discussed. Neutral and charged currents, discovery of W and Z bosons and the early universe form important topics of the electroweak interactions. The principles of high energy accelerators including colliders are elaborately explained. Additionally, in the book detectors used in nuclear and particle physics are described. This book is on the upper undergraduate level.

  3. Magnetic particles

    Science.gov (United States)

    Chang, Manchium (Inventor); Colvin, Michael S. (Inventor)

    1989-01-01

    Magnetic polymer particles are formed by swelling porous polymer particles and impregnating them with an aqueous solution of a precursor magnetic metal salt, such as an equimolar mixture of ferrous chloride and ferric chloride. On addition of a basic reagent such as dilute sodium hydroxide, the metal salts are converted to crystals of magnetite which are uniformly contained throughout the pores of the polymer particle. The magnetite content can be increased, and neutral buoyancy achieved, by repetition of the impregnation and neutralization steps to adjust the magnetite content to a desired level.

  4. Synthesis, Crystal Structure and Luminescent Property of Cd(II) Complex with N-Benzenesulphonyl-L-leucine

    Directory of Open Access Journals (Sweden)

    Xishi Tai

    2012-09-01

    Full Text Available A new trinuclear Cd(II) complex [Cd3(L)6(2,2-bipyridine)3] [L = N-phenylsulfonyl-L-leucinato] has been synthesized and characterized by elemental analysis, IR and X-ray single crystal diffraction analysis. The results show that the complex belongs to the orthorhombic system, space group P212121, with a = 16.877(3) Å, b = 22.875(5) Å, c = 29.495(6) Å, α = β = γ = 90°, V = 11387(4) Å3, Z = 4, Dc = 1.416 g·cm−3, μ = 0.737 mm−1, F(000) = 4992, and final R1 = 0.0390, ωR2 = 0.0989. The complex comprises two seven-coordinated Cd(II) atoms, with a distorted N2O5 pentagonal bipyramidal coordination environment, and one six-coordinated Cd(II) atom, with a distorted N2O4 octahedral coordination environment. The molecules form a one-dimensional chain structure through the interaction of bridging carboxylato groups, hydrogen bonds and π-π interactions of the 2,2-bipyridine ligands. The luminescent properties of the Cd(II) complex and of N-benzenesulphonyl-L-leucine in the solid state and in CH3OH solution have also been investigated.

  5. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork, Pupufu (Lin, 2009), to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  6. Coding for optical channels

    CERN Document Server

    Djordjevic, Ivan; Vasic, Bane

    2010-01-01

    This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.

  7. SEVERO code - user's manual

    International Nuclear Information System (INIS)

    Sacramento, A.M. do.

    1989-01-01

    This user's manual contains all the necessary information concerning the use of SEVERO code. This computer code is related to the statistics of extremes = extreme winds, extreme precipitation and flooding hazard risk analysis. (A.C.A.S.)

  8. Development of a computational system based in the code GEANT4 for dosimetric evaluation in radiotherapy; Desenvolvimento de um sistema computacional baseado no Codigo GEANT4 para avaliacoes dosimetricas em radioterapia

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Alex Cristovao Holanda de

    2016-10-01

    The incidence of cancer has grown in Brazil, as around the world, following the change in the age profile of the population. One of the most important and commonly used techniques in cancer treatment is radiotherapy; around 60% of new cancer cases use radiation in at least one phase of treatment. The equipment most used for radiotherapy is the linear accelerator (Linac), which produces electron or X-ray beams in the energy range from 5 to 30 MeV. The most appropriate way to irradiate a patient is determined during treatment planning, and the treatment planning system (TPS) is currently the main and most important tool in the radiotherapy planning process. The main objective of this work is to develop a computational system based on the MC code Geant4 for dose evaluations in photon beam radiotherapy. In addition to treatment planning, these dose evaluations can be performed for research and for quality control of equipment and TPSs. The computational system, called Quimera, consists of a graphical user interface (qGUI) and three MC applications (qLinacs, qMATphantoms and qNCTphantoms). The qGUI serves as the interface for the MC applications, creating or editing the input files, running simulations and analyzing the results. qLinacs is used for the modeling and generation of Linac beams (phase space). qMATphantoms and qNCTphantoms are used for dose calculations in virtual models of physical phantoms and in computed tomography (CT) images, respectively. From manufacturer's data, models of a Varian Linac photon beam and a Varian multileaf collimator (MLC) were built in qLinacs. The Linac and MLC models were validated against experimental data, and qMATphantoms and qNCTphantoms were validated using IAEA phase spaces. In this first version, Quimera can be used for research, radiotherapy planning of simple treatments and quality control in photon beam radiotherapy. The MC applications work independently of the qGUI, and the qGUI can be used for

  9. Synthesizing Certified Code

    OpenAIRE

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to th...

  10. FERRET data analysis code

    International Nuclear Information System (INIS)

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples
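
    The simplest instance of the least-squares combination FERRET generalizes is merging two independent measurements of one quantity by inverse-variance weighting, with a quantitative uncertainty on the result. The values below are invented; the real code also handles correlations between measurements, which this sketch omits.

```python
def combine(values, sigmas):
    """Inverse-variance weighted combination of independent measurements."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    sigma = wsum ** -0.5          # quantitative uncertainty of the combined value
    return mean, sigma

mean, sigma = combine([10.0, 14.0], [1.0, 1.0])
# equal uncertainties -> plain average; combined uncertainty shrinks by sqrt(2)
```

    With correlated data the scalar weights become an inverse covariance matrix, which is where the "proper treatment of uncertainties and correlations" in the abstract comes in.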

  11. Stylize Aesthetic QR Code

    OpenAIRE

    Xu, Mingliang; Su, Hao; Li, Yafei; Li, Xi; Liao, Jing; Niu, Jianwei; Lv, Pei; Zhou, Bing

    2018-01-01

    With the continued proliferation of smart mobile devices, Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the appearance of QR codes, existing works have developed a series of techniques to make the QR code more visual-pleasant. However, these works still leave much to be desired, such as visual diversity, aesthetic quality, flexibility, universal property, and robustness. To address these issues, in this paper, we pro...

  12. Enhancing QR Code Security

    OpenAIRE

    Zhang, Linfan; Zheng, Shuang

    2015-01-01

    Quick Response code opens possibility to convey data in a unique way yet insufficient prevention and protection might lead into QR code being exploited on behalf of attackers. This thesis starts by presenting a general introduction of background and stating two problems regarding QR code security, which followed by a comprehensive research on both QR code itself and related issues. From the research a solution taking advantages of cloud and cryptography together with an implementation come af...

  13. Particle displacement tracking for PIV

    Science.gov (United States)

    Wernet, Mark P.

    1990-01-01

    A new Particle Imaging Velocimetry (PIV) data acquisition and analysis system, which is an order of magnitude faster than any previously proposed system, has been constructed and tested. The new Particle Displacement Tracking (PDT) system is an all-electronic technique employing a video camera and a large-memory-buffer frame-grabber board. Using a simple encoding scheme, a time sequence of single-exposure images is time-coded into a single image and then processed to track particle displacements and determine velocity vectors. Application of the PDT technique to a counter-rotating vortex flow produced over 1100 velocity vectors in 110 seconds when processed on an 80386 PC.
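
    The tracking step can be sketched as a nearest-neighbour pairing between particle centroids decoded from two exposures; each pairing yields one displacement (and, divided by the frame interval, one velocity vector). This is a simplified stand-in for the PDT processing, with invented coordinates.

```python
def track(frame0, frame1):
    """Pair each particle in frame0 with its nearest neighbour in frame1."""
    vectors = []
    for x0, y0 in frame0:
        # nearest centroid in the later exposure (squared-distance comparison)
        x1, y1 = min(frame1, key=lambda p: (p[0] - x0) ** 2 + (p[1] - y0) ** 2)
        vectors.append((x1 - x0, y1 - y0))
    return vectors

frame0 = [(0.0, 0.0), (10.0, 0.0)]           # centroids in the first exposure
frame1 = [(1.0, 0.5), (11.0, 0.5)]           # same particles, one frame later
vectors = track(frame0, frame1)
```

    Nearest-neighbour pairing is reliable when displacements are small compared with particle spacing, which is what the time-coding of several exposures into one image is designed to ensure.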

  14. Opening up codings?

    DEFF Research Database (Denmark)

    Steensig, Jakob; Heinemann, Trine

    2015-01-01

    doing formal coding and when doing more “traditional” conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded...

  15. Gauge color codes

    DEFF Research Database (Denmark)

    Bombin Palomo, Hector

    2015-01-01

    Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...

  16. Refactoring test code

    NARCIS (Netherlands)

    A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok

    2001-01-01

    textabstractTwo key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from

  17. Development of code PRETOR for stellarator simulation

    International Nuclear Information System (INIS)

    Dies, J.; Fontanet, J.; Fontdecaba, J.M.; Castejon, F.; Alejandre, C.

    1998-01-01

    The Department de Fisica i Enginyeria Nuclear (DFEN) of the UPC has experience in the development of the transport code PRETOR. This code has been validated against shots from DIII-D, JET and TFTR, and it has also been used in the simulation of ITER operational scenarios such as fast burn termination. Recently, the association EURATOM-CIEMAT has started the operation of the TJ-II stellarator. Because of the need to validate the results given by other transport codes applied to stellarators, and because all of them make some approximations, such as averaging magnitudes over each magnetic surface, it was considered appropriate to adapt the PRETOR code to devices without axial symmetry, like stellarators; this adaptation is well suited to the specific needs of the study of TJ-II. Several modifications are required in PRETOR; the main ones concern the models of magnetic equilibrium, geometry, and transport of energy and particles. To handle the complex magnetic equilibrium geometry, the powerful numerical code VMEC has been used. This code gives the magnetic surface shape as a Fourier series in terms of the harmonics (m,n). Most of the geometric magnitudes are also obtained from the VMEC results file. The energy and particle transport models will be replaced by phenomenological models better adapted to stellarator simulation. Using the proposed models, it is intended to reproduce experimental data available from present stellarators, paying special attention to the TJ-II of the association EURATOM-CIEMAT. (Author)
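
    The Fourier representation mentioned above gives each magnetic surface as a series in the poloidal angle θ and toroidal angle φ, in the standard stellarator-symmetric form R = Σ R_mn cos(mθ − nφ), Z = Σ Z_mn sin(mθ − nφ). The sketch below evaluates one surface point from a tiny, made-up set of harmonics (a circular surface), not TJ-II data.

```python
import math

def surface_point(r_mn, z_mn, theta, phi):
    """(R, Z) position on a flux surface from its (m, n) Fourier harmonics."""
    R = sum(c * math.cos(m * theta - n * phi) for (m, n), c in r_mn.items())
    Z = sum(c * math.sin(m * theta - n * phi) for (m, n), c in z_mn.items())
    return R, Z

# Illustrative harmonics: major radius 1.5 m, circular cross-section of 0.2 m
r_mn = {(0, 0): 1.5, (1, 0): 0.2}
z_mn = {(1, 0): 0.2}

R, Z = surface_point(r_mn, z_mn, 0.0, 0.0)   # outboard midplane point
# R = 1.7, Z = 0.0 for this circular surface
```

    Geometric magnitudes such as surface areas and volumes follow by integrating such points over θ and φ, which is the role the record assigns to the VMEC results file.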

  18. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  19. Recent developments in the Los Alamos radiation transport code system

    International Nuclear Information System (INIS)

    Forster, R.A.; Parsons, K.

    1997-01-01

    A brief progress report on updates to the Los Alamos Radiation Transport Code System (LARTCS) for solving criticality and fixed-source problems is provided. LARTCS integrates the Diffusion Accelerated Neutral Transport (DANT) discrete ordinates codes with the Monte Carlo N-Particle (MCNP) code. The LARTCS code is being developed with a graphical user interface for problem setup and analysis. Progress in the DANT system for criticality applications includes a two-dimensional module which can be linked to a mesh-generation code, and a faster iteration scheme. Updates to MCNP Version 4A allow statistical checks of calculated Monte Carlo results

  20. Monte Carlo codes use in neutron therapy; Application de codes Monte Carlo en neutrontherapie

    Energy Technology Data Exchange (ETDEWEB)

    Paquis, P.; Mokhtari, F.; Karamanoukian, D. [Hopital Pasteur, 06 - Nice (France); Pignol, J.P. [Hopital du Hasenrain, 68 - Mulhouse (France); Cuendet, P. [CEA Centre d' Etudes de Saclay, 91 - Gif-sur-Yvette (France). Direction des Reacteurs Nucleaires; Fares, G.; Hachem, A. [Faculte des Sciences, 06 - Nice (France); Iborra, N. [Centre Antoine-Lacassagne, 06 - Nice (France)

    1998-04-01

    Monte Carlo calculation codes make it possible to study accurately all the parameters relevant to radiation effects, such as the dose deposition or the type of microscopic interactions, through particle-by-particle transport simulation. These features are very useful for neutron irradiations, from device development up to dosimetry. This paper illustrates some applications of these codes in Neutron Capture Therapy and in Neutron Capture Enhancement of fast neutron irradiations. (authors)
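
    The "one by one particle transport" idea reduces, in its simplest case, to sampling exponential free paths through a purely absorbing slab: the transmitted fraction should approach exp(−Σt·d). The cross-section and thickness values are arbitrary, and real neutron codes of course add scattering, energy dependence and full 3-D geometry.

```python
import math
import random

def transmitted_fraction(sigma_t, thickness, n_histories, seed=1):
    """Fraction of histories whose sampled free path exceeds the slab thickness."""
    rng = random.Random(seed)
    passed = 0
    for _ in range(n_histories):
        # exponential free path; 1 - random() avoids log(0)
        path = -math.log(1.0 - rng.random()) / sigma_t
        if path > thickness:
            passed += 1
    return passed / n_histories

frac = transmitted_fraction(sigma_t=1.0, thickness=1.0, n_histories=20000)
# should be close to exp(-1) ~ 0.368, within statistical noise
```

    Scoring one history at a time is what lets such codes tally not just the mean dose but the type and location of each microscopic interaction.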

  1. Particle accelerator

    International Nuclear Information System (INIS)

    Ress, R.I.

    1976-01-01

    Charged particles are entrained in a predetermined direction, independent of their polarity, in a circular orbit by a magnetic field rotating at high speed about an axis in a closed cylindrical or toroidal vessel. The field may be generated by a cylindrical laser structure, whose beam is polygonally reflected from the walls of an excited cavity centered on the axis, or by high-frequency energization of a set of electromagnets perpendicular to the axis. In the latter case, a separate magnetostatic axial field limits the orbital radius of the particles. These rotating and stationary magnetic fields may be generated centrally or by individual magnets peripherally spaced along its circular orbit. Chemical or nuclear reactions can be induced by collisions between the orbiting particles and an injected reactant, or by diverting high-speed particles from one doughnut into the path of counterrotating particles in an adjoining doughnut

  2. Relating quantum discord with the quantum dense coding capacity

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xin; Qiu, Liang, E-mail: lqiu@cumt.edu.cn; Li, Song; Zhang, Chi [China University of Mining and Technology, School of Sciences (China); Ye, Bin [China University of Mining and Technology, School of Information and Electrical Engineering (China)

    2015-01-15

    We establish the relations between quantum discord and the quantum dense coding capacity in (n + 1)-particle quantum states. A necessary condition for the vanishing discord monogamy score is given. We also find that the loss of quantum dense coding capacity due to decoherence is bounded below by the sum of quantum discord. When these results are restricted to three-particle quantum states, some complementarity relations are obtained.

  3. Relating quantum discord with the quantum dense coding capacity

    International Nuclear Information System (INIS)

    Wang, Xin; Qiu, Liang; Li, Song; Zhang, Chi; Ye, Bin

    2015-01-01

    We establish the relations between quantum discord and the quantum dense coding capacity in (n + 1)-particle quantum states. A necessary condition for the vanishing discord monogamy score is given. We also find that the loss of quantum dense coding capacity due to decoherence is bounded below by the sum of quantum discord. When these results are restricted to three-particle quantum states, some complementarity relations are obtained

  4. The network code

    International Nuclear Information System (INIS)

    1997-01-01

    The Network Code defines the rights and responsibilities of all users of the natural gas transportation system in the liberalised gas industry in the United Kingdom. This report describes the operation of the Code, what it means, how it works and its implications for the various participants in the industry. The topics covered are: development of the competitive gas market in the UK; key points in the Code; gas transportation charging; impact of the Code on producers upstream; impact on shippers; gas storage; supply point administration; impact of the Code on end users; the future. (20 tables; 33 figures) (UK)

  5. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.
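
    The abstract does not specify USEEM's adaptive coder, but its static textbook ancestor, Huffman coding, shows the underlying "noiseless coding of text" idea: frequent symbols get short codewords, so recognized characters compress far below their bitmap size. The symbol frequencies below are invented.

```python
import heapq

def huffman_code(freqs):
    """Build a prefix-free code from a symbol -> frequency mapping."""
    # (frequency, tiebreak, partial codebook) triples on a min-heap
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)    # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

code = huffman_code({"e": 45, "t": 20, "a": 15, "q": 5})
# the most frequent symbol receives the shortest codeword
```

    An adaptive coder achieves the same effect without transmitting the frequency table, by updating its model as both ends see the same symbol stream.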

  6. Computer simulation of a 6 MV photon beam in different heterogeneous media utilizing the PENELOPE code; Simulação computacional de um feixe de fótons de 6 MV em diferentes meios heterogêneos utilizando o código PENELOPE

    Directory of Open Access Journals (Sweden)

    Camila Salata

    2009-08-01

    Full Text Available OBJECTIVE: To utilize the PENELOPE code and develop geometries containing heterogeneities in order to simulate the behavior of a photon beam under these conditions. MATERIALS AND METHODS: The behavior of ionizing radiation was simulated for the homogeneous case (water only) and for heterogeneous cases with different materials. Cubic geometries were adopted for the phantoms and parallelepiped-shaped geometries for the heterogeneities, with the following compositions: bone and lung tissue simulators, following recommendations of the International Commission on Radiological Protection, as well as titanium, aluminum and silver. The input parameters were defined as: source energy and particle type, 6 MV photons; source-surface distance, 100 cm; and radiation field, 10 x 10 cm². RESULTS: Percentage depth-dose curves were obtained for all cases. In materials with high electron density, such as silver, the absorbed dose was higher than the dose absorbed in the homogeneous phantom, while in the lung tissue simulator the dose was lower. CONCLUSION: The results demonstrate the importance of taking heterogeneities into account in the algorithms of the treatment planning systems used to calculate dose distributions in patients, thus avoiding under- or overdosage of the tissues close to the heterogeneities.
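
    The heterogeneity effect on the primary beam can be caricatured with exponential attenuation through stacked slabs: less attenuation in a low-density lung-like layer, more in a dense one. The μ values below are rough illustrative numbers, and a Monte Carlo code such as PENELOPE models far more (scatter, electron transport, build-up) than this.

```python
import math

def primary_fluence(depth, layers):
    """Relative primary-photon fluence after `depth` cm of stacked slabs.

    layers: list of (thickness_cm, mu_per_cm) pairs in beam order.
    """
    remaining, attenuation = depth, 0.0
    for thickness, mu in layers:
        t = min(remaining, thickness)
        attenuation += mu * t
        remaining -= t
        if remaining <= 0.0:
            break
    return math.exp(-attenuation)

# water / lung-like / water stack versus water alone (illustrative mu values)
water_lung_water = [(5.0, 0.049), (10.0, 0.012), (100.0, 0.049)]
f_het = primary_fluence(15.0, water_lung_water)
f_hom = primary_fluence(15.0, [(200.0, 0.049)])   # homogeneous water phantom
# the beam emerging from the lung-like layer is less attenuated
```

    A planning algorithm that ignores the density change would use the homogeneous curve and misestimate the dose beyond the heterogeneity, which is the clinical point of the study.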

  7. Charged particles in external electromagnetic fields

    International Nuclear Information System (INIS)

    Giovannini, N.P.D.

    1976-01-01

    The present study contains a general theoretical group analysis of the problem of a charged massive particle moving in an (arbitrary) classical external electromagnetic field. This analysis is essentially based on the space-time symmetry properties of e.m. fields and e.m. field equations, as well as the fact that the considered equations of motion depend on the field via a potential

  8. NAGRADATA. Code key. Geology

    International Nuclear Information System (INIS)

    Mueller, W.H.; Schneider, B.; Staeuble, J.

    1984-01-01

This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data are retrieved, the translation of stored coded information into plain language is done automatically by computer. Three keys list the complete set of currently defined codes for the NAGRADATA system: codes with appropriate definitions arranged (1) by subject matter (thematically), (2) codes listed alphabetically, and (3) definitions listed alphabetically. Additional explanation is provided for the proper application of the codes and for the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA uses codes instead of plain language for data storage, which offers the following advantages: speed of data processing (mainly data retrieval), economy of storage requirements, and standardization of terminology. The thesaurus-like nature of this key to codes makes it impossible either to establish a final form or to cover the entire spectrum of requirements. This first issue of codes for NAGRADATA therefore represents the current state of a living system; future editions will be issued in a loose-leaf ring binder that can be updated by an organized updating service. (author)

  9. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms

  10. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated with the WIMSD code, one of the most popular tools for reactor calculations. Most of the approaches discussed here can be easily adapted to any other lattice code. The description of the code assumes a basic knowledge of the reactor lattice, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  11. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
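
The write-input / run / read-output handshake described above can be sketched generically. This is not the DLLExternalCode API; all hooks and the demo "external code" below are hypothetical stand-ins:

```python
import os
import subprocess
import sys
import tempfile

def run_external(inputs, make_input, run_cmd, parse_output):
    """Generic wrapper mimicking the GoldSim <-> external-code
    handshake: write an input file, invoke the code, read results.
    All four arguments are caller-supplied (hypothetical) hooks."""
    workdir = tempfile.mkdtemp()
    in_path = os.path.join(workdir, "model.in")
    out_path = os.path.join(workdir, "model.out")
    with open(in_path, "w") as f:
        f.write(make_input(inputs))          # 1. create the input file
    subprocess.run(run_cmd(in_path, out_path), check=True)  # 2. run code
    with open(out_path) as f:
        return parse_output(f.read())        # 3. read outputs back

# Demo "external code": a Python one-liner that doubles each input.
outputs = run_external(
    inputs=[1.0, 2.5],
    make_input=lambda xs: "\n".join(map(str, xs)),
    run_cmd=lambda i, o: [sys.executable, "-c",
        "import sys;xs=[float(l) for l in open(sys.argv[1])];"
        "open(sys.argv[2],'w').write('\\n'.join(str(2*x) for x in xs))",
        i, o],
    parse_output=lambda s: [float(l) for l in s.splitlines()],
)
```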

  12. Peripheral Codes in ASTRA for the TJ-II

    International Nuclear Information System (INIS)

    Lopez-Bruna, D.; Reynolds, J. M.; Cappa, A.; Martinell, J.; Garcia, J.; Gutierrez-Tapia, C.

    2010-01-01

    The study of data from the TJ-II device is often done with transport calculations based on the ASTRA transport system. However, complicated independent codes are used to obtain fundamental ingredients in these calculations, such as the particle and/or energy sources. These codes are accessible from ASTRA through the procedures explained in this report. (Author) 37 refs.

  13. Study of neutral particle behavior and particle confinement in JT-60U

    International Nuclear Information System (INIS)

    Takenaga, Hidenobu; Shimizu, Katsuhiro; Asakura, Nobuyuki; Shimada, Michiya; Kikuchi, Mitsuru; Tsuji-Iio, Shunji; Uchino, Kiichiro; Muraoka, Katsunori.

    1995-07-01

In order to understand the particle confinement properties of JT-60U, the particle confinement time was estimated through analyses of the neutral particle behavior. First, the neutral particle transport simulation code DEGAS, which uses a Monte Carlo technique, was combined with a simple divertor code for calculating the edge plasma parameters and was extended to handle the experimental conditions of JT-60U. Then, the charged particle source in the main plasma due to the ionization of neutral particles was evaluated from analyses of neutral particle penetration into the main plasma, based on results of the simulation code and on measurements of Dα emission intensities. Finally, the particle confinement time was estimated from an analysis of the particle balance. The analyses were performed systematically for L-mode and H-mode plasmas of JT-60U, and a database of particle confinement times was obtained. The dependence of the particle confinement time on the plasma parameters and the relationship between the particle confinement and energy confinement properties were examined. (author)
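
The particle-balance step described above reduces to dN/dt = S − N/τp, so the confinement time follows once the ionization source and the inventory's time derivative are known. A minimal illustration (all numbers hypothetical):

```python
def confinement_time(n_particles, source_rate, dn_dt):
    """Particle balance dN/dt = S - N/tau_p  =>  tau_p = N/(S - dN/dt).
    N in particles, S and dN/dt in particles per second."""
    return n_particles / (source_rate - dn_dt)

# Steady state (dN/dt = 0): tau_p = N / S
tau = confinement_time(1.0e20, 5.0e20, 0.0)
```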

  14. Linac particle tracing simulations

    International Nuclear Information System (INIS)

    Lysenko, W.P.

    1979-01-01

A particle tracing code was developed to study space-charge effects in proton or heavy-ion linear accelerators. The purpose is to study space-charge phenomena as directly as possible without the complications of many accelerator details. Thus, the accelerator is represented simply by harmonic oscillator or impulse restoring forces. Variable parameters as well as mismatched phase-space distributions were studied. This study represents the initial search for those features of the accelerator or of the phase-space distribution that lead to emittance growth
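
The simplified lattice described above, a particle in a linear (harmonic) restoring force, can be traced with a standard leapfrog push. This is a generic sketch, not the code from the report:

```python
import math

def trace(x0, v0, omega, dt, steps):
    """Leapfrog integration of x'' = -omega^2 * x.
    Returns (x, v) after `steps` time steps of size dt."""
    x = x0
    v = v0 - 0.5 * dt * omega**2 * x0   # stagger velocity by dt/2
    for _ in range(steps):
        x += dt * v
        v -= dt * omega**2 * x
    return x, v + 0.5 * dt * omega**2 * x  # re-sync velocity to x's time

# One full oscillation period: the particle should return to its start.
x, v = trace(1.0, 0.0, 1.0, 2 * math.pi / 1000, 1000)
```

Leapfrog is the conventional choice here because it is symplectic: amplitude (and hence emittance, in the multi-particle case) is not artificially damped or grown by the integrator.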

  15. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric quantum codes are obtained from toric codes by the A. R. Calderbank, P. W. Shor and A. M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  16. Particle detectors

    CERN Document Server

    Hilke, Hans Jürgen

    1992-01-01

    We shall discuss the principles of the main techniques applied to particle detection (including front-end electronics), the construction and performance of some of the devices presently in operation and a few ideas on future developments.

  17. Spot: a new Monte Carlo solver for fast alpha particles

    International Nuclear Information System (INIS)

    Schneider, M.; Eriksson, L.G.; Basiuk, V.; Imbeaux, F.

    2004-01-01

    The predictive transport code CRONOS has been augmented by an orbit following Monte Carlo code, SPOT (Simulation of Particle Orbits in a Tokamak). The SPOT code simulates the dynamics of nonthermal particles, and takes into account effects of finite orbit width and collisional transport of fast ions. Recent developments indicate that it might be difficult to avoid, at least transiently, current holes in a reactor. They occur already on existing tokamaks during advanced tokamak scenarios. The SPOT code has been used to study the alpha particle behaviour in the presence of current holes for both JET and ITER relevant parameters. (authors)
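
The collisional transport of fast ions that SPOT models can be caricatured by a drag term plus small random kicks. This toy ensemble average is in no way SPOT's guiding-centre orbit scheme; the drag rate, kick size and all numbers are assumed for illustration:

```python
import math
import random

def slow_down(v0, nu, dt, steps, rng):
    """Toy fast-ion slowing-down: deterministic drag dv = -nu*v*dt
    plus a small Gaussian kick mimicking collisional scatter."""
    v = v0
    for _ in range(steps):
        v += -nu * v * dt + rng.gauss(0.0, 0.001 * v0 * math.sqrt(dt))
    return v

rng = random.Random(1)
vs = [slow_down(1.0, 2.0, 0.01, 100, rng) for _ in range(200)]
mean_v = sum(vs) / len(vs)   # ensemble mean ~ v0 * exp(-nu*t) = exp(-2)
```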

  18. Stochastic geometry in PRIZMA code

    International Nuclear Information System (INIS)

    Malyshkin, G. N.; Kashaeva, E. A.; Mukhamadiev, R. F.

    2007-01-01

The paper describes a method used to simulate radiation transport through random media - randomly placed grains in a matrix material. The method models the medium sequentially, from one grain crossed by the particle trajectory to the next. As in the Limited Chord Length Sampling (LCLS) method, particles in grains are tracked in the actual grain geometry, but unlike LCLS, the medium is modeled using only Matrix Chord Length Sampling (MCLS) from the exponential distribution, and it is not necessary to know the grain chord length distribution. This allowed us to extend the method to media with randomly oriented, arbitrarily shaped convex grains. Other extensions include multicomponent media - grains of several sorts - and polydisperse media - grains of different sizes. Sort and size distributions of crossed grains were obtained, and an algorithm was developed for sampling grain orientations and positions. Special consideration was given to modeling the medium at the boundary of the stochastic region. The method was implemented in the universal 3D Monte Carlo code PRIZMA. The paper provides calculated results for a model problem in which we determine the volume fractions of modeled components crossed by particle trajectories. It also demonstrates the use of biased sampling techniques implemented in PRIZMA for solving a deep-penetration problem in model random media. Calculations are described for the spectral response of a capacitor dose detector whose anode was modeled with account taken of its stochastic structure. (authors)
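
The core of such a walk, exponential matrix gaps alternating with grain chords, can be sketched for the simplest case of equal spheres (this is a generic MCLS-style illustration, not the PRIZMA implementation; the isotropic sphere-chord law l = 2R·sqrt(u), with mean chord 4R/3, is the standard assumption used here):

```python
import math
import random

def grain_path_fraction(lam_matrix, radius, total_len, rng):
    """Walk a trajectory through a random two-phase medium:
    matrix gaps ~ Exp(mean=lam_matrix), then a chord through a
    spherical grain sampled as l = 2R*sqrt(u). Returns the
    fraction of path length spent inside grains."""
    s, in_grain = 0.0, 0.0
    while s < total_len:
        s += rng.expovariate(1.0 / lam_matrix)       # matrix chord
        chord = 2.0 * radius * math.sqrt(rng.random())  # grain chord
        s += chord
        in_grain += chord
    return in_grain / s

rng = random.Random(0)
frac = grain_path_fraction(lam_matrix=3.0, radius=0.5, total_len=1e5,
                           rng=rng)
# expected ~ mean_chord/(lam + mean_chord) = (2/3)/(3 + 2/3) ~ 0.18
```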

  19. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

An optimal linear coding solution for the index coding problem is established. Instead of a network coding approach focused on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be optimal from the linear perspective and can easily be utilized for any number of messages. The importance of this work lies mostly in the use of the presented coding in the groupcast index coding ...

  20. Auroral particles

    International Nuclear Information System (INIS)

    Evans, D.S.

    1987-01-01

The problems concerning the aurora posed prior to the war have now either been solved in principle or restated in a more fundamental form. The pre-war hypothesis concerning the nature of the auroral particles and their energies was fully confirmed, except that helium and oxygen ions were identified as participating in the auroral particle precipitation in addition to protons. The nature of the near-Earth energization processes affecting auroral particles was clarified. Charged particle trajectories in various electric field geometries were modeled. The physical problem has now moved from determining the nature and geometry of the electric fields that accelerate charged particles near the Earth to accounting for the existence of these electric fields as a natural consequence of the solar wind's interaction with the Earth. Ultimately, the reward of continuing the work in auroral and magnetospheric particle dynamics will be a deeper understanding of the subtleties of classical electricity and magnetism as applied to situations not blessed with well-defined and invariant geometries

  1. Electron microscopy of atmospheric particles

    Science.gov (United States)

    Huang, Po-Fu

    Electron microscopy coupled with energy dispersive spectrometry (EM/EDS) is a powerful tool for single particle analysis. However, the accuracy with which atmospheric particle compositions can be quantitatively determined by EDS is often hampered by substrate-particle interactions, volatilization losses in the low pressure microscope chamber, electron beam irradiation and use of inaccurate quantitation factors. A pseudo-analytical solution was derived to calculate the temperature rise due to the dissipation of the electron energy on a particle-substrate system. Evaporative mass loss for a spherical cap-shaped sulfuric acid particle resting on a thin film supported by a TEM grid during electron beam impingement has been studied. Measured volatilization rates were found to be in very good agreement with theoretical predictions. The method proposed can also be used to estimate the vapor pressure of a species by measuring the decay of X-ray intensities. Several types of substrates were studied. We found that silver-coated silicon monoxide substrates give carbon detection limits comparable to commercially available substrates. An advantage of these substrates is that the high thermal conductivity of the silver reduces heating due to electron beam impingement. In addition, exposure of sulfuric acid samples to ammonia overnight substantially reduces sulfur loss in the electron beam. Use of size-dependent k-factors determined from particles of known compositions shows promise for improving the accuracy of atmospheric particle compositions measured by EM/EDS. Knowledge accumulated during the course of this thesis has been used to analyze atmospheric particles (Minneapolis, MN) selected by the TDMA and collected by an aerodynamic focusing impactor. 'Less' hygroscopic particles, which do not grow to any measurable extent when humidified to ~90% relative humidity, included chain agglomerates, spheres, flakes, and irregular shapes. 
Carbon was the predominant element detected in

  2. The Aesthetics of Coding

    DEFF Research Database (Denmark)

    Andersen, Christian Ulrik

    2007-01-01

Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). The presentation relates this artistic fascination with code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation from the computer’s materiality. Cramer is thus the voice of a new ‘code avant-garde’. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: “art-oriented programming needs to acknowledge the conditions of its own making – its poesis.” By analysing the Live Coding performances of Slub (where they program computer music live), the presentation...

  3. Majorana fermion codes

    International Nuclear Information System (INIS)

    Bravyi, Sergey; Terhal, Barbara M; Leemhuis, Bernhard

    2010-01-01

    We initiate the study of Majorana fermion codes (MFCs). These codes can be viewed as extensions of Kitaev's one-dimensional (1D) model of unpaired Majorana fermions in quantum wires to higher spatial dimensions and interacting fermions. The purpose of MFCs is to protect quantum information against low-weight fermionic errors, that is, operators acting on sufficiently small subsets of fermionic modes. We examine to what extent MFCs can surpass qubit stabilizer codes in terms of their stability properties. A general construction of 2D MFCs is proposed that combines topological protection based on a macroscopic code distance with protection based on fermionic parity conservation. Finally, we use MFCs to show how to transform any qubit stabilizer code to a weakly self-dual CSS code.

  4. Theory of epigenetic coding.

    Science.gov (United States)

    Elder, D

    1984-06-07

The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code: the combinatorial to coding the identity of units, the non-combinatorial to coding the production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.

  5. DISP1 code

    International Nuclear Information System (INIS)

    Vokac, P.

    1999-12-01

    DISP1 code is a simple tool for assessment of the dispersion of the fission product cloud escaping from a nuclear power plant after an accident. The code makes it possible to tentatively check the feasibility of calculations by more complex PSA3 codes and/or codes for real-time dispersion calculations. The number of input parameters is reasonably low and the user interface is simple enough to allow a rapid processing of sensitivity analyses. All input data entered through the user interface are stored in the text format. Implementation of dispersion model corrections taken from the ARCON96 code enables the DISP1 code to be employed for assessment of the radiation hazard within the NPP area, in the control room for instance. (P.A.)
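
A simple dispersion assessment of this kind typically rests on a Gaussian plume model. The sketch below is a generic ground-level centreline formula with reflection from the ground, not DISP1's actual model or the ARCON96 corrections; the linear sigma coefficients are assumed placeholder values rather than real stability-class curves:

```python
import math

def plume_concentration(Q, u, x, H, a_y=0.08, a_z=0.06):
    """Ground-level centreline Gaussian-plume concentration
    C = Q/(pi*u*sy*sz) * exp(-H^2/(2*sz^2)) with ground reflection.
    Q: release rate, u: wind speed, x: downwind distance,
    H: effective release height. sy = a_y*x, sz = a_z*x are crude
    stand-ins for stability-dependent dispersion curves."""
    sy, sz = a_y * x, a_z * x
    return (Q / (math.pi * u * sy * sz)) * math.exp(-H**2 / (2 * sz**2))

c1 = plume_concentration(Q=1.0, u=5.0, x=1000.0, H=50.0)
c2 = plume_concentration(Q=1.0, u=5.0, x=5000.0, H=50.0)
```

Beyond the distance of maximum ground-level concentration, dilution wins and the concentration falls with downwind distance, as the two sample points show.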

  6. Elementary particles and particle interactions

    International Nuclear Information System (INIS)

    Bethge, K.; Schroeder, U.E.

    1986-01-01

This book is a textbook for an introductory course on elementary particle physics. After a general introduction, the symmetry principles governing the interactions of elementary particles are discussed. Then the phenomenology of the electroweak and strong interactions is described, together with short introductions to the Weinberg-Salam theory and to quantum chromodynamics, respectively. Finally, a short outlook is given on grand unification, with special regard to SU(5), and on cosmology in the framework of the current understanding of the fundamental principles of nature. The appendix contains a table of particle properties and physical constants. (HSI)

  7. Phonological coding during reading.

    Science.gov (United States)

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  8. The aeroelastic code FLEXLAST

    Energy Technology Data Exchange (ETDEWEB)

    Visser, B. [Stork Product Eng., Amsterdam (Netherlands)

    1996-09-01

    To support the discussion on aeroelastic codes, a description of the code FLEXLAST was given and experiences within benchmarks and measurement programmes were summarized. The code FLEXLAST has been developed since 1982 at Stork Product Engineering (SPE). Since 1992 FLEXLAST has been used by Dutch industries for wind turbine and rotor design. Based on the comparison with measurements, it can be concluded that the main shortcomings of wind turbine modelling lie in the field of aerodynamics, wind field and wake modelling. (au)

  9. Influência do tempo de incubação e do tamanho de partículas sobre os teores de compostos indigestíveis em alimentos e fezes bovinas obtidos por procedimentos in situ Influence of incubation time and particles size on indigestible compounds contents in cattle feeds and feces obtained by in situ procedures

    Directory of Open Access Journals (Sweden)

    André Oliveira Casali

    2008-02-01

Full Text Available The objective of this study was to evaluate the influence of in situ incubation time and particle size on estimates of the indigestible fractions of dry matter (iDM), neutral detergent fiber (iNDF) and acid detergent fiber (iADF) in cattle feeds and feces. Samples of ground corn, soybean hulls, wheat bran, soybean meal, cottonseed meal, corn silage, elephant grass, sugarcane, signal grass hay, corn straw, and feces from cattle fed high- or low-concentrate diets were evaluated. The samples were ground through 1, 2 or 3 mm screens and placed (20 mg DM/cm² of surface) in 4 × 5 cm bags of non-woven textile (100 g/m²). The materials were divided into three groups, and the samples of each group were incubated in the rumen of three crossbred (Holstein × Zebu) heifers. The incubation procedure was repeated three times, and in each period the groups were incubated in different animals. Incubation times of 0, 12, 24, 48, 72, 96, 120, 144, 168, 192, 216, 240 and 312 hours were used. The iDM, iNDF and iADF contents were evaluated sequentially, and the degradation profiles were interpreted with a nonlinear logistic model. Particle size had no effect on the iNDF and iADF estimates. Particle size affected the DM degradation rate of corn silage and ground corn, the NDF degradation rate of sugarcane, corn silage and corn straw, and the ADF degradation rate of sugarcane. For these feeds, particle size was positively associated with the time required to estimate the indigestible fraction. Incubation times of 240 hours for DM and NDF and of 264 hours for ADF are recommended to obtain accurate estimates of the indigestible fractions. The use of 2-mm particles is recommended because

  10. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  11. QR codes for dummies

    CERN Document Server

    Waters, Joe

    2012-01-01

    Find out how to effectively create, use, and track QR codes QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown

  12. Tokamak Systems Code

    International Nuclear Information System (INIS)

    Reid, R.L.; Barrett, R.J.; Brown, T.G.

    1985-03-01

The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling of each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code, as long as the input to or output from the module remains unchanged

  13. Efficient Coding of Information: Huffman Coding -RE ...

    Indian Academy of Sciences (India)

    to a stream of equally-likely symbols so as to recover the original stream in the event of errors. The for- ... The source-coding problem is one of finding a mapping from U to a ... probability that the random variable X takes the value x written as ...
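
The source-coding problem the record refers to, assigning shorter codewords to more probable symbols, is solved optimally (among symbol codes) by Huffman's algorithm. A compact sketch using a heap of partial code tables:

```python
import heapq

def huffman_codes(freqs):
    """Build prefix-free Huffman codes from {symbol: weight}.
    Repeatedly merge the two lightest subtrees, prefixing their
    codewords with 0 and 1 respectively."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    count = len(heap)           # tie-breaker so dicts are never compared
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        for s in c1:
            c1[s] = "0" + c1[s]
        for s in c2:
            c2[s] = "1" + c2[s]
        c1.update(c2)
        heapq.heappush(heap, [w1 + w2, count, c1])
        count += 1
    return heap[0][2]

# Dyadic probabilities: codeword lengths match -log2(p) exactly,
# so the average length (1.75 bits) equals the source entropy.
codes = huffman_codes({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
```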

  14. NR-code: Nonlinear reconstruction code

    Science.gov (United States)

    Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming

    2018-04-01

    NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.

  15. Multiple sample, radioactive particle counting apparatus

    International Nuclear Information System (INIS)

    Reddy, R.R.V.; Kelso, D.M.

    1978-01-01

An apparatus is described for determining the respective radioactive particle count being emitted from each of a set of radioactive particle containing samples. It includes means for modulating the information on the radioactive particles being emitted from the samples, coded detecting means for sequentially detecting different coded combinations of the radioactive particles emitted from more than one but fewer than all of the samples, and means for processing the modulated information to derive the count for each sample. It comprises a single light-emitting crystal adjacent to a number of samples; an encoder belt, sequentially movable between the crystal and the samples, bearing a coded array of apertures that produces correspondingly modulated light pulses from the crystal; and a photomultiplier tube that converts the modulated light pulses into decodable electrical signals from which the respective sample counts are derived

  16. Physics options in the plasma code VOA

    International Nuclear Information System (INIS)

    Eltgroth, P.G.

    1976-06-01

A two-dimensional relativistic plasma physics code has been modified to accommodate general electromagnetic boundary conditions and various approximations of the basic physics. The code can treat internal conductors and insulators, imposed electromagnetic fields, the effects of external circuitry, and non-equilibrium starting conditions. Particle dynamics options include a full microscopic treatment, fully relaxed electrons, a low-frequency electron approximation, and a combination of approximations for specified zones. Electromagnetic options include the full wave treatment, an electrostatic approximation, and two varieties of magnetohydrodynamic approximations in specified zones

  17. Neonatal Phosphate Nutrition Alters in Vivo and in Vitro Satellite Cell Activity in Pigs

    Directory of Open Access Journals (Sweden)

    Chad H. Stahl

    2012-05-01

Full Text Available Satellite cell activity is necessary for postnatal skeletal muscle growth. Severe phosphate (PO4) deficiency can alter satellite cell activity, however the role of neonatal PO4 nutrition in satellite cell biology remains obscure. Twenty-one piglets (1 day of age, 1.8 ± 0.2 kg BW) were pair-fed liquid diets that were either PO4 adequate (0.9% total P), supra-adequate (1.2% total P) or deficient (0.7% total P) in PO4 content for 12 days. Body weight was recorded daily and blood samples were collected every 6 days. At day 12, pigs were orally dosed with BrdU and 12 h later, satellite cells were isolated. Satellite cells were also cultured in vitro for 7 days to determine if PO4 nutrition alters their ability to proceed through their myogenic lineage. Dietary PO4 deficiency resulted in reduced (P < 0.05) sera PO4 and parathyroid hormone (PTH) concentrations, while supra-adequate dietary PO4 improved (P < 0.05) feed conversion efficiency compared with the PO4 adequate group. In vivo satellite cell proliferation was reduced (P < 0.05) among the PO4 deficient pigs, and these cells had altered in vitro expression of markers of myogenic progression. Further work to better understand early nutritional programming of satellite cells and the potential benefits of emphasizing early PO4 nutrition for future lean growth potential is warranted.

  18. About the Code of Practice of the European Mathematical Society

    DEFF Research Database (Denmark)

    Jensen, Arne

    2013-01-01

The Executive Committee of the European Mathematical Society created an Ethics Committee in the Spring of 2010. The first task of the Committee was to prepare a Code of Practice. This task was completed in the Spring of 2012, and the Code went into effect on 1 November 2012. Arne Jensen, author of this article, is Chair of the EMS Ethics Committee.

  19. Vectorization, parallelization and porting of nuclear codes. 2001

    International Nuclear Information System (INIS)

    Akiyama, Mitsunaga; Katakura, Fumishige; Kume, Etsuo; Nemoto, Toshiyuki; Tsuruoka, Takuya; Adachi, Masaaki

    2003-07-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the supercomputer system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. We dealt with 10 codes in fiscal 2001. This report describes the parallelization of the Neutron Radiography for 3-Dimensional CT code NR3DCT, the vectorization of the unsteady-state heat conduction code THERMO3D, the porting of an initial program of MHD simulation, the tuning of the Heat And Mass Balance Analysis Code HAMBAC, the porting and parallelization of the Monte Carlo N-Particle transport code MCNP4C3, the porting and parallelization of the Monte Carlo N-Particle transport code system MCNPX2.1.5, the porting of the induced activity calculation code CINAC-V4, the use of the VisLink library in the multidimensional two-fluid model code ACD3D, and the porting of an experiment data processing code from the GS8500 to the SR8000. (author)

  20. The TESS [Tandem Experiment Simulation Studies] computer code user's manual

    International Nuclear Information System (INIS)

    Procassini, R.J.

    1990-01-01

    TESS (Tandem Experiment Simulation Studies) is a one-dimensional, bounded particle-in-cell (PIC) simulation code designed to investigate the confinement and transport of plasma in a magnetic mirror device, including tandem mirror configurations. Mirror plasmas may be modeled in a system which includes an applied magnetic field and/or a self-consistent or applied electrostatic potential. The PIC code TESS is similar to the PIC code DIPSI (Direct Implicit Plasma Surface Interactions) which is designed to study plasma transport to and interaction with a solid surface. The codes TESS and DIPSI are direct descendants of the PIC code ES1 that was created by A. B. Langdon. This document provides the user with a brief description of the methods used in the code and a tutorial on the use of the code. 10 refs., 2 tabs

  1. Particle detectors

    CERN Document Server

    Hilke, Hans Jürgen; Joram, Christian; CERN. Geneva

    1991-01-01

    Lecture 5: Detector characteristics: ALEPH Experiment cut through the devices and events - Discuss the principles of the main techniques applied to particle detection (including front-end electronics), the construction and performance of some of the devices presently in operation and a few ideas on future performance. Lecture 4-pt. b: Following the Scintillators. Lecture 4-pt. a: Scintillators - Used for: - Timing (TOF, Trigger) - Energy Measurement (Calorimeters) - Tracking (Fibres). Basic scintillation processes - Inorganic Scintillators - Organic Scintillators - Discuss the principles of the main techniques applied to particle detection (including front-end electronics), the construction and performance of some of the devices presently in operation and a few ideas on future development. Session 3 - pt. b: Following Calorimeters. Lecture 3-pt. a: Calorimeters - determine energy E by total absorption of charged or neutral particles - fraction of E is transformed into measurable quantities - try to achieve sig...

  2. Stable particles

    International Nuclear Information System (INIS)

    Samios, N.P.

    1993-01-01

    I have been asked to review the subject of stable particles, essentially the particles that eventually comprised the meson and baryon octets, with a few more additions, with an emphasis on the contributions made by experiments utilizing the bubble chamber technique. In this activity, much work had been done by the photographic emulsion technique and by cloud chambers exposed to cosmic rays as well as accelerator-based beams. In fact, many if not most of the stable particles were found by these latter two techniques; however, the forte of the bubble chamber (coupled with the newer and more powerful accelerators) was to verify, and reinforce with large statistics, the existence of these states, to find some of the more difficult ones, mainly neutrals, and further to elucidate their properties, i.e., spin, parity, lifetimes, decay parameters, etc

  3. Synthesizing Certified Code

    Science.gov (United States)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  4. Code of Ethics

    Science.gov (United States)

    Division for Early Childhood, Council for Exceptional Children, 2009

    2009-01-01

    The Code of Ethics of the Division for Early Childhood (DEC) of the Council for Exceptional Children is a public statement of principles and practice guidelines supported by the mission of DEC. The foundation of this Code is based on sound ethical reasoning related to professional practice with young children with disabilities and their families…

  5. Interleaved Product LDPC Codes

    OpenAIRE

    Baldi, Marco; Cancellieri, Giovanni; Chiaraluce, Franco

    2011-01-01

    Product LDPC codes take advantage of LDPC decoding algorithms and the high minimum distance of product codes. We propose to add suitable interleavers to improve the waterfall performance of LDPC decoding. Interleaving also reduces the number of low-weight codewords, which gives a further advantage in the error floor region.
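
    The interleavers proposed in the abstract sit between the component encoders; the simplest concrete instance is a block (row/column) interleaver, sketched below as a generic illustration (the paper's actual interleaver designs are not specified here).

```python
# Block (row/column) interleaver: write symbols row-wise into a rows x cols
# array, read them out column-wise. De-interleaving swaps the roles.

def interleave(seq, rows, cols):
    assert len(seq) == rows * cols
    return [seq[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(seq, rows, cols):
    return interleave(seq, cols, rows)

x = list(range(6))
y = interleave(x, 2, 3)
print(y)                      # [0, 3, 1, 4, 2, 5]
print(deinterleave(y, 2, 3))  # [0, 1, 2, 3, 4, 5]
```

    Writing row-wise and reading column-wise separates symbols that were adjacent in the input, which helps break up clustered low-weight patterns.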

  6. Insurance billing and coding.

    Science.gov (United States)

    Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H

    2008-07-01

    The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, common dental terminology (CDT) codes are most commonly used by dentists to submit claims, whereas current procedural terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.
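
    As the abstract explains, a CPT (or CDT) code carries the procedure and an ICD-9-CM code carries its rationale, and a complete claim line needs both. A minimal sketch of that pairing; the code values below are invented placeholders, not real billing codes:

```python
# Sketch of pairing procedure and diagnosis codes on one claim line.
# "D0000" and "000.0" are invented placeholders, not real CDT/ICD-9-CM codes.

def make_claim_line(procedure_code, diagnosis_codes, charge):
    # A service without a diagnosis lacks its medical-necessity rationale.
    if not diagnosis_codes:
        raise ValueError("at least one diagnosis (ICD-9-CM) code is required")
    return {"procedure": procedure_code,
            "diagnoses": list(diagnosis_codes),
            "charge": charge}

line = make_claim_line("D0000", ["000.0"], 125.00)
print(line)
```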

  7. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.
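
    The snippet mentions a Reed-Solomon code carrying 223 bytes of data; assuming the classic RS(255, 223) configuration over 8-bit symbols (the one used in deep-space telemetry), the basic parameters work out as follows:

```python
# Parameters of the RS(255, 223) Reed-Solomon code over GF(2**8):
# 255-byte codewords carry 223 data bytes, and the 32 parity bytes
# correct up to 16 byte errors per codeword.

n = 2**8 - 1        # codeword length in symbols (bytes)
k = 223             # data bytes per codeword
parity = n - k      # parity bytes
t = parity // 2     # correctable symbol errors
rate = k / n        # code rate

print(n, parity, t, round(rate, 3))  # 255 32 16 0.875
```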

  8. Scrum Code Camps

    DEFF Research Database (Denmark)

    Pries-Heje, Lene; Pries-Heje, Jan; Dalgaard, Bente

    2013-01-01

    is required. In this paper we present the design of such a new approach, the Scrum Code Camp, which can be used to assess agile team capability in a transparent and consistent way. A design science research approach is used to analyze properties of two instances of the Scrum Code Camp where seven agile teams...

  9. RFQ simulation code

    International Nuclear Information System (INIS)

    Lysenko, W.P.

    1984-04-01

    We have developed the RFQLIB simulation system to provide a means to systematically generate the new versions of radio-frequency quadrupole (RFQ) linac simulation codes that are required by the constantly changing needs of a research environment. This integrated system simplifies keeping track of the various versions of the simulation code and makes it practical to maintain complete and up-to-date documentation. In this scheme, there is a certain standard version of the simulation code that forms a library upon which new versions are built. To generate a new version of the simulation code, the routines to be modified or added are appended to a standard command file, which contains the commands to compile the new routines and link them to the routines in the library. The library itself is rarely changed. Whenever the library is modified, however, this modification is seen by all versions of the simulation code, which actually exist as different versions of the command file. All code is written according to the rules of structured programming. Modularity is enforced by not using COMMON statements, simplifying the relation of the data flow to a hierarchy diagram. Simulation results are similar to those of the PARMTEQ code, as expected, because of the similar physical model. Different capabilities, such as those for generating beams matched in detail to the structure, are available in the new code for help in testing new ideas in designing RFQ linacs

  10. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  11. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2013-03-26

    ... Energy Conservation Code. International Existing Building Code. International Fire Code. International... Code. International Property Maintenance Code. International Residential Code. International Swimming Pool and Spa Code. International Wildland-Urban Interface Code. International Zoning Code. ICC Standards...

  12. Constituents from <em>Vigna vexillata</em> and Their Anti-Inflammatory Activity

    Directory of Open Access Journals (Sweden)

    Guo-Feng Chen

    2012-08-01

    Full Text Available The seeds of the <em>Vigna</em> genus are important food resources and there have already been many reports regarding their bioactivities. In our preliminary bioassay, the chloroform layer of methanol extracts of <em>V. vexillata</em> demonstrated significant anti-inflammatory bioactivity. Therefore, the present research aims to purify and identify the anti-inflammatory principles of <em>V. vexillata</em>. One new sterol (1) and two new isoflavones (2, 3) were reported from natural sources for the first time and their chemical structures were determined by spectroscopic and mass spectrometric analyses. In addition, 37 known compounds were identified by comparison of their physical and spectroscopic data with those reported in the literature. Among the isolates, daidzein (23), abscisic acid (25), and quercetin (40) displayed the most significant inhibition of superoxide anion generation and elastase release.

  13. Particle physics

    CERN Document Server

    Martin, Brian R

    2017-01-01

    An accessible and carefully structured introduction to Particle Physics, including important coverage of the Higgs Boson and recent progress in neutrino physics. Fourth edition of this successful title in the Manchester Physics series. Includes information on recent key discoveries including : An account of the discovery of exotic hadrons, beyond the simple quark model; Expanded treatments of neutrino physics and CP violation in B-decays; An updated account of ‘physics beyond the standard model’, including the interaction of particle physics with cosmology; Additional problems in all chapters, with solutions to selected problems available on the book’s website; Advanced material appears in optional starred sections.

  14. Validation of thermalhydraulic codes

    International Nuclear Information System (INIS)

    Wilkie, D.

    1992-01-01

    Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry, where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to the incorrect estimation of the consequences of accidents and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. ''agreement is within 10%''. A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)
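
    A quantitative ranking of the kind this abstract advocates can be as simple as scoring each code by the RMS of its relative deviations from measurement. A sketch with invented data (the metric choice is an assumption for illustration, not the author's method):

```python
import math

# Rank candidate codes by RMS relative deviation from measured data.
# All numbers below are invented for the sketch.

def rms_relative_error(pred, meas):
    terms = [((p - m) / m) ** 2 for p, m in zip(pred, meas)]
    return math.sqrt(sum(terms) / len(terms))

measured = [1.00, 2.10, 3.05]
codes = {"codeA": [1.02, 2.00, 3.20],
         "codeB": [1.10, 2.40, 2.70]}

# Lower score = better agreement = higher rank.
ranking = sorted(codes, key=lambda c: rms_relative_error(codes[c], measured))
print(ranking)
```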

  15. Fracture flow code

    International Nuclear Information System (INIS)

    Dershowitz, W; Herbert, A.; Long, J.

    1989-03-01

    The hydrology of the SCV site will be modelled utilizing discrete fracture flow models. These models are complex and cannot be fully certified by comparison to analytical solutions. The best approach for verification of these codes is therefore cross-verification between different codes. This is complicated by the variation in assumptions and solution techniques utilized in different codes. Cross-verification procedures are defined which allow comparison of the codes developed by Harwell Laboratory, Lawrence Berkeley Laboratory, and Golder Associates Inc. Six cross-verification datasets are defined for deterministic and stochastic verification of geometric and flow features of the codes. Additional datasets for verification of transport features will be documented in a future report. (13 figs., 7 tabs., 10 refs.) (authors)

  16. High-fidelity plasma codes for burn physics

    Energy Technology Data Exchange (ETDEWEB)

    Cooley, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Graziani, Frank [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Marinak, Marty [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Murillo, Michael [Michigan State Univ., East Lansing, MI (United States)

    2016-10-19

    Accurate predictions of equation of state (EOS) and of ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged-particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both the experimental data and the theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential impact they could have in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  17. Monte Carlo simulation code modernization

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The continual development of sophisticated transport simulation algorithms allows increasingly accurate description of the effect of the passage of particles through matter. This modelling capability finds applications in a large spectrum of fields from medicine to astrophysics, and of course HEP. These new capabilities however come at the cost of a greater computational intensity of the new models, which has the effect of increasing the demands on computing resources. This is particularly true for HEP, where the demand for more simulation is driven by the need for both more accuracy and more precision, i.e. better models and more events. Usually HEP has relied on the "Moore's law" evolution, but for almost ten years the increase in clock speed has withered and computing capacity comes in the form of hardware architectures of many-core or accelerated processors. To harness these opportunities we need to adapt our code to concurrent programming models taking advantage of both SIMD and SIMT architectures. Th...

  18. Mathematical models and illustrative results for the RINGBEARER II monopole/dipole beam-propagation code

    International Nuclear Information System (INIS)

    Chambers, F.W.; Masamitsu, J.A.; Lee, E.P.

    1982-01-01

    RINGBEARER II is a linearized monopole/dipole particle simulation code for studying intense relativistic electron beam propagation in gas. In this report the mathematical models utilized for beam particle dynamics and pinch field computation are delineated. Difficulties encountered in code operations and some remedies are discussed. Sample output is presented detailing the diagnostics and the methods of display and analysis utilized

  19. Positron emission zone plate holography for particle tracking

    Energy Technology Data Exchange (ETDEWEB)

    Gundogdu, O. [University of Birmingham, School of Physics and Astronomy, Birmingham B15 2TT (United Kingdom)]. E-mail: o.gundogdu@surrey.ac.uk

    2006-01-15

    Positron Emission Particle Tracking (PEPT) is a powerful non-invasive technique that has been used extensively for tracking a single particle. In this paper, we present a study of the zone plate holography method as a means to track multiple particles, mainly two particles. The main aim is to use as small a number of events as possible, in order to make it possible to track particles in fast-moving industrial systems. A zone plate with 100% focal efficiency is simulated and applied to the Positron Emission Tomography (PET) data for multiple particle tracking. A simple trajectory code was employed to explore the effects of the nature of the experimental trajectories. A computer holographic reconstruction code that simulates optical reconstruction was developed. The different aspects of the particle location, particle activity ratios for enabling tagging of particles, and zone plate and hologram locations are investigated. The effect of the shot noise is investigated and the limitations of the zone plate holography are reported.
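
    For readers unfamiliar with zone plates: the zones of a classical (Fresnel) zone plate have equal area, so the outer radius of the n-th zone grows as the square root of n. A small sketch with an arbitrary innermost radius:

```python
import math

# Equal-area zones: outer radius of zone n is r_n = r1 * sqrt(n), so every
# zone has area pi * r1**2. The value of r1 here is an arbitrary example.

def zone_radius(n, r1):
    return r1 * math.sqrt(n)

r1 = 1.0  # hypothetical innermost-zone radius (arbitrary units)
radii = [zone_radius(n, r1) for n in range(1, 6)]
print([round(r, 3) for r in radii])  # [1.0, 1.414, 1.732, 2.0, 2.236]
```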

  20. Positron emission zone plate holography for particle tracking

    International Nuclear Information System (INIS)

    Gundogdu, O.

    2006-01-01

    Positron Emission Particle Tracking (PEPT) is a powerful non-invasive technique that has been used extensively for tracking a single particle. In this paper, we present a study of the zone plate holography method as a means to track multiple particles, mainly two particles. The main aim is to use as small a number of events as possible, in order to make it possible to track particles in fast-moving industrial systems. A zone plate with 100% focal efficiency is simulated and applied to the Positron Emission Tomography (PET) data for multiple particle tracking. A simple trajectory code was employed to explore the effects of the nature of the experimental trajectories. A computer holographic reconstruction code that simulates optical reconstruction was developed. The different aspects of the particle location, particle activity ratios for enabling tagging of particles, and zone plate and hologram locations are investigated. The effect of the shot noise is investigated and the limitations of the zone plate holography are reported

  1. Huffman coding in advanced audio coding standard

    Science.gov (United States)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and a working implementation. Much attention has been paid to optimising the demand for hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible in this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
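
    As a generic illustration of the noiseless coding stage (this builds a Huffman code from symbol frequencies; the actual AAC standard uses fixed, pre-defined codebooks rather than per-stream trees):

```python
import heapq
from collections import Counter

# Build a Huffman code from symbol frequencies. Each heap entry is
# [weight, tiebreaker, {symbol: partial code}]; merging two subtrees
# prefixes their codes with 0 and 1 respectively.

def huffman_codes(data):
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    return heap[0][2]

codes = huffman_codes("abracadabra")
bits = "".join(codes[s] for s in "abracadabra")
print(codes, len(bits))  # 23 bits vs. 88 for plain 8-bit bytes
```

    Frequent symbols get short codes, so the encoded stream is shorter than a fixed-length encoding, which is exactly the goal stated in the abstract.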

  2. Alterações nos atributos químicos de um Latossolo distroférrico decorrentes da granulometria e doses de calcário em sistemas plantio direto e convencional (Changes of chemical soil characteristics due to doses and particle sizes of limestone in no-tillage and conventional systems)

    Directory of Open Access Journals (Sweden)

    J. C. A. Mello

    2003-06-01

    Full Text Available There is currently growing interest in surface liming, without prior incorporation, when establishing the no-tillage system. The objective of this work was therefore to determine the effects of limestone particle size and dose, in the no-tillage system during its implantation phase and in the conventional tillage system, on soil pH and on H + Al, Ca2+ and Mg2+ contents. The experiment was carried out in the 1998/99 growing season at FCA/UNESP-Botucatu, São Paulo, Brazil, on a Red Distroferric Latosol (Oxisol). A randomized block design with split-split plots and four replications was used. The plots represented the tillage systems (no-tillage and conventional); the subplots, the limestone particle sizes [coarse (PRNT = 56%) and fine (PRNT = 90%)]; and the sub-subplots, the doses of 2, 4 and 6 t ha-1 (coarse limestone) and 1.2, 2.4 and 3.6 t ha-1 (fine limestone). The soil was sampled at depths of 0-5, 5-10, 10-20 and 20-40 cm, at 1, 3 and 12 months after lime application. The analysis of variance detected no triple interaction among the factors. Surface application of limestone in the no-tillage system, regardless of particle size and dose, positively altered the soil chemical attributes (0-5 and 5-10 cm) 12 months after liming. The lime continued to react intensely, regardless of the tillage system, even after three months. Application of higher doses of coarser limestone suggested a prolonged residual effect.

  3. Elementary particles

    International Nuclear Information System (INIS)

    Prasad, R.

    1984-01-01

    Two previous monographs report on investigations into the extent to which a unified field theory can satisfactorily describe physical reality. The first, Unified Field Theory, showed that the paths within a non-Riemannian space are governed by eigenvalue equations. The second, Fundamental Constants, shows that the field tensors satisfy sets of differential equations with solutions which represent the evolution of the fields along the paths of the space. The results from the first two monographs are used in this one to make progress on the theory of elementary particles. The five chapters are as follows: quantum mechanics, gravitation and electromagnetism as aspects of the unified theory; the fields inside the particle; the quadratic and linear theories; the calculation of the eigenvalues; and elementary particles as stable configurations of interacting fields. It is shown that it is possible to construct an internal-structure theory for elementary particles. The theory lies within the framework of Einstein's programme: to identify physical reality with a specified geometrical structure. (U.K.)

  4. Pinpointing particles

    International Nuclear Information System (INIS)

    Miller, David J.

    1987-01-01

    The Conference on Position-Sensitive Detectors held at London's University College from 7-11 September highlighted the importance and the growing applications of these precision devices in many branches of science, underlining once again the high spinoff potential for techniques developed inside particle physics

  5. Particle tracking

    International Nuclear Information System (INIS)

    Mais, H.; Ripken, G.; Wrulich, A.; Schmidt, F.

    1986-02-01

    After a brief description of typical applications of particle tracking in storage rings and after a short discussion of some limitations and problems related with tracking we summarize some concepts and methods developed in the qualitative theory of dynamical systems. We show how these concepts can be applied to the proton ring HERA. (orig.)

  6. Pinpointing particles

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David J.

    1987-10-15

    The Conference on Position-Sensitive Detectors held at London's University College from 7-11 September highlighted the importance and the growing applications of these precision devices in many branches of science, underlining once again the high spinoff potential for techniques developed inside particle physics.

  7. Particle Physics

    CERN Multimedia

    2005-01-01

    While biomedicine and geoscience use grids to bring together many different sub-disciplines, particle physicists use grid computing to increase computing power and storage resources, and to access and analyze vast amounts of data collected from detectors at the world's most powerful accelerators (1 page)

  8. Geometry and dynamics of particle emission from strongly deformed nuclei

    International Nuclear Information System (INIS)

    Aleshin, V.P.

    1995-01-01

    By using our semiclassical approach to particle evaporation from deformed nuclei, we analyze the heuristic models of particle emission from deformed nuclei which are used in the codes GANES, ALICE, and EVAP. The calculations revealed that the heuristic models are reasonable for particle energy spectra but fail, at large deformations, to describe the angular distributions

  9. Particle based 3D modeling of positive streamer inception

    NARCIS (Netherlands)

    H.J. Teunissen (Jannis)

    2012-01-01

    In this report we present a particle-based 3D model for the study of streamer inception near positive electrodes in air. The particle code is of the PIC-MCC type and an electrode is included using the charge simulation method. An algorithm for the adaptive creation of super-particles is

  10. Momentos em freios e em embraiagens (Torques in brakes and clutches)

    OpenAIRE

    Mimoso, Rui Miguel Pereira

    2011-01-01

    Dissertation for obtaining the Master's degree in the Integrated Master in Mechanical Engineering. This dissertation brings together the calculation models used to determine the torques in brakes and clutches. Both dry-friction and viscous-friction brakes and clutches are considered. For viscous-friction brakes, cases are considered in which the characteristics of the fluids are not induced, and others in which modifications to those same characteristics are induced. ...

  11. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.
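
    The STRN's two-part structure (report code plus sequential number) can be illustrated with a toy parser. The separator rule below, treating the last hyphen-delimited group as the sequential number, and the sample identifier are assumptions made for the sketch, not rules taken from Z39.23:

```python
# Toy split of a report number into report code + sequential number.
# The convention (last hyphen-delimited group = sequential number) and the
# sample value "ABC-85-123" are invented for illustration.

def split_strn(strn):
    report_code, _, sequential = strn.rpartition("-")
    return report_code, sequential

print(split_strn("ABC-85-123"))  # ('ABC-85', '123')
```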

  12. Report number codes

    International Nuclear Information System (INIS)

    Nelson, R.N.

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name

  13. Poisson/Superfish codes for personal computers

    International Nuclear Information System (INIS)

    Humphries, S.

    1992-01-01

    The Poisson/Superfish codes calculate static E or B fields in two-dimensions and electromagnetic fields in resonant structures. New versions for 386/486 PCs and Macintosh computers have capabilities that exceed the mainframe versions. Notable improvements are interactive graphical post-processors, improved field calculation routines, and a new program for charged particle orbit tracking. (author). 4 refs., 1 tab., figs

  14. Monte Carlo Particle Lists: MCPL

    DEFF Research Database (Denmark)

    Kittelmann, Thomas; Klinkby, Esben Bryndt; Bergbäck Knudsen, Erik

    2017-01-01

    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages. Program summary: Program Title: MCPL. Program Files doi: http://dx.doi.org/10.17632/cby92vsv5g.1 Licensing provisions: CC0 for core MCPL, see LICENSE file for details. Programming language: C and C++ External routines/libraries: Geant4, MCNP, McStas, McXtrace Nature of problem: Saving...
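
    In the same spirit as MCPL, though emphatically not the real MCPL file layout, a particle list can be stored as fixed-size packed binary records. A toy sketch with an assumed record of PDG code, position and kinetic energy:

```python
import struct

# Toy fixed-size particle record (NOT the real MCPL layout): little-endian
# PDG code (int32) plus x, y, z, kinetic energy (four float64s).

REC = struct.Struct("<i4d")

def write_particles(path, particles):
    with open(path, "wb") as f:
        for p in particles:
            f.write(REC.pack(*p))

def read_particles(path):
    with open(path, "rb") as f:
        data = f.read()
    return [REC.unpack_from(data, off) for off in range(0, len(data), REC.size)]

# 2112 is the PDG code for a neutron; the state values are arbitrary.
write_particles("toy_particles.dat", [(2112, 0.0, 0.0, 1.0, 0.025)])
print(read_particles("toy_particles.dat"))
```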

  15. Cryptography cracking codes

    CERN Document Server

    2014-01-01

    While cracking a code might seem like something few of us would encounter in our daily lives, it is actually far more prevalent than we may realize. Anyone who has had personal information taken because of a hacked email account can understand the need for cryptography and the importance of encryption: essentially, the need to code information to keep it safe. This detailed volume examines the logic and science behind various ciphers, their real world uses, how codes can be broken, and the use of technology in this oft-overlooked field.

  16. Coded Splitting Tree Protocols

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

    This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each...... instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early...
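
    The underlying tree-splitting idea (without the coding and SIC extensions the paper adds on top of it) can be sketched as recursive coin-flip collision resolution:

```python
import random

# Plain binary tree splitting -- the classical contention-resolution
# protocol the paper builds on (the coded/SIC extension is not reproduced
# here). Colliding users flip a fair coin to join the left or right
# subgroup, recursively, until every slot holds at most one user.
def resolve(users, rng, slots):
    slots.append(list(users))            # one channel slot per (sub)group
    if len(users) <= 1:                  # idle or singleton: slot resolved
        return
    left = [u for u in users if rng.random() < 0.5]
    right = [u for u in users if u not in left]
    resolve(left, rng, slots)
    resolve(right, rng, slots)

rng = random.Random(7)
slots = []
resolve(list(range(5)), rng, slots)      # 5 users collide in the first slot
singles = [s[0] for s in slots if len(s) == 1]
```

    In the coded variant described in the abstract, several such trees are started, terminated early, and jointly decoded as a graph code rather than each being resolved to completion.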

  17. Transport theory and codes

    International Nuclear Information System (INIS)

    Clancy, B.E.

    1986-01-01

    This chapter begins with the neutron transport equation, covering one-dimensional plane-geometry problems, one-dimensional spherical-geometry problems, and numerical solutions. The section on the ANISN code and its look-alikes covers the problems that can be solved; eigenvalue problems; the outer iteration loop; the inner iteration loop; and finite-difference solution procedures. The input and output data for ANISN are also discussed. Two-dimensional problems, as treated by the DOT code, are then presented. Finally, an overview of Monte Carlo methods and codes is given

  18. Gravity inversion code

    International Nuclear Information System (INIS)

    Burkhard, N.R.

    1979-01-01

    The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
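
    The stabilized linear inversion step can be illustrated with a Tikhonov-regularized least-squares solve. The forward matrix, "true" model and noise level below are hypothetical stand-ins, not the actual SEARCH/TREND/INVERT/AVERAGE programs:

```python
import numpy as np

# Stabilized linear inversion in the spirit of the report: recover a model
# vector m from noisy data d = G m + n via the damped normal equations
#     m_est = (G^T G + alpha I)^{-1} G^T d.
rng = np.random.default_rng(0)
G = rng.normal(size=(30, 10))            # 30 gravity readings, 10 unknowns
m_true = np.linspace(1.0, 2.0, 10)       # "true" density-contrast model
d = G @ m_true + 0.01 * rng.normal(size=30)

alpha = 1e-3                             # stabilization (damping) parameter
m_est = np.linalg.solve(G.T @ G + alpha * np.eye(10), G.T @ d)
```

    The damping term alpha*I is what "stabilized" refers to: it trades a small bias for robustness when G is ill-conditioned, as gravity sensitivity matrices typically are.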

  19. Models and applications of the UEDGE code

    International Nuclear Information System (INIS)

    Rensink, M.E.; Knoll, D.A.; Porter, G.D.; Rognlien, T.D.; Smith, G.R.; Wising, F.

    1996-09-01

    The transport of particles and energy from the core of a tokamak to nearby material surfaces is an important problem for understanding present experiments and for designing reactor-grade devices. A number of fluid transport codes have been developed to model the plasma in the edge and scrape-off layer (SOL) regions. This report will focus on recent model improvements and illustrative results from the UEDGE code. Some geometric and mesh considerations are introduced, followed by a general description of the plasma and neutral fluid models. A few comments on computational issues are given and then two important applications are illustrated concerning benchmarking and the ITER radiative divertor. Finally, we report on some recent work to improve the models in UEDGE by coupling to a Monte Carlo neutrals code and by utilizing an adaptive grid

  20. Validation of comprehensive space radiation transport code

    International Nuclear Information System (INIS)

    Shinn, J.L.; Simonsen, L.C.; Cucinotta, F.A.

    1998-01-01

    The HZETRN code has been developed over the past decade to evaluate the local radiation fields within sensitive materials on spacecraft in the space environment. Most of the more important nuclear and atomic processes are now modeled, and evaluation within a complex spacecraft geometry with differing material components, including transition effects across boundaries of dissimilar materials, is included. The atomic/nuclear database and transport procedures have received limited validation in laboratory testing with high energy ion beams. The codes have been applied in design of the SAGE-III instrument, resulting in material changes to control injurious neutron production, in the study of Space Shuttle single-event upsets, and in validation with space measurements (particle telescopes, tissue equivalent proportional counters, CR-39) on Shuttle and Mir. The present paper reviews the code development and presents recent results in laboratory and space flight validation

  1. Present state of the SOURCES computer code

    International Nuclear Information System (INIS)

    Shores, Erik F.

    2002-01-01

    In various stages of development for over two decades, the SOURCES computer code continues to calculate neutron production rates and spectra from four types of problems: homogeneous media, two-region interfaces, three-region interfaces, and a monoenergetic alpha particle beam incident on a slab of target material. Graduate work at the University of Missouri - Rolla, in addition to user feedback from a tutorial course, provided the impetus for a variety of code improvements. Recently upgraded to version 4B, initial modifications to SOURCES focused on updates to the 'tape5' decay data library. Shortly thereafter, efforts focused on development of a graphical user interface for the code. This paper documents the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) and describes additional library modifications in more detail. Minor improvements and planned enhancements are discussed.

  2. Code ATOM for calculation of atomic characteristics

    International Nuclear Information System (INIS)

    Vainshtein, L.A.

    1990-01-01

    In applying atomic physics to problems of plasma diagnostics, it is necessary to determine some atomic characteristics, including energies and transition probabilities, for very many atoms and ions. Development of general codes for calculation of many types of atomic characteristics has been based on general but comparatively simple approximate methods. The program ATOM represents an attempt at effective use of such a general code. This report gives a brief description of the methods used, and the capabilities and limitations of the code are discussed. Characteristics of the following processes can be calculated by ATOM: radiative transitions between discrete levels, radiative ionization and recombination, collisional excitation and ionization by electron impact, collisional excitation and ionization by point heavy particles (Born approximation only), dielectronic recombination, and autoionization. ATOM employs the Born (for z=1) or Coulomb-Born (for z>1) approximation. In both cases exchange and normalization can be included. (N.K.)

  3. New features in the design code TLIE

    International Nuclear Information System (INIS)

    van Zeijts, J.

    1993-01-01

    We present features recently installed in the arbitrary-order accelerator design code TLIE. The code uses the MAD input language, and implements programmable extensions modeled after the C language that make it a powerful tool in a wide range of applications: from basic beamline design to high-precision, high-order design and even control-room applications. The basic quantities important in accelerator design are easily accessible from inside the control language. Entities like parameters in elements (strength, current), transfer maps (either in Taylor series or in Lie algebraic form), lines, and beams (either as sets of particles or as distributions) are among the types of variables available. These variables can be set, used as arguments in subroutines, or just typed out. The code is easily extensible with new datatypes

  4. Successful vectorization - reactor physics Monte Carlo code

    International Nuclear Information System (INIS)

    Martin, W.R.

    1989-01-01

    Most particle transport Monte Carlo codes in use today are based on the ''history-based'' algorithm, wherein one particle history at a time is simulated. Unfortunately, the ''history-based'' approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers currently available, such as the Cray X/MP or IBM 3090/600 vector supercomputers. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes that are in use today. This paper describes the basic vectorized algorithm along with descriptions of several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approaches will be discussed and the present status of known vectorization efforts will be summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)
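
    The contrast between the two algorithms can be sketched on a toy problem: sampling the distance to first collision in a purely absorbing medium (a hypothetical one-event example, far simpler than the codes discussed above). The event-based form advances the whole batch of histories in a single vector operation:

```python
import numpy as np

# Toy contrast between the two algorithms: sample the distance to first
# collision in a purely absorbing medium with macroscopic cross section
# sigma, so the mean free path is exactly 1/sigma = 0.5.
sigma = 2.0

# History-based: one particle history at a time (scalar, hard to vectorize).
def history_based(k, seed=42):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(k):                   # one history per loop pass
        total += -np.log(1.0 - rng.random()) / sigma
    return total / k

# Event-based: every particle in the batch undergoes the same event
# (free-flight sampling) in one vectorized operation.
rng = np.random.default_rng(42)
flights = -np.log(1.0 - rng.random(200_000)) / sigma
mean_path = flights.mean()
```

    Real vectorized codes generalize this by keeping an "event stack" of particles awaiting the same next event (collision, boundary crossing, etc.) and processing each stack with vector operations.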

  5. Charged-particle calculations using Boltzmann transport methods

    International Nuclear Information System (INIS)

    Hoffman, T.J.; Dodds, H.L. Jr.; Robinson, M.T.; Holmes, D.K.

    1981-01-01

    Several aspects of radiation damage effects in fusion reactor neutron and ion irradiation environments are amenable to treatment by transport theory methods. In this paper, multigroup transport techniques are developed for the calculation of charged particle range distributions, reflection coefficients, and sputtering yields. The Boltzmann transport approach can be implemented, with minor changes, in standard neutral particle computer codes. With the multigroup discrete ordinates code, ANISN, determination of ion and target atom distributions as functions of position, energy, and direction can be obtained without the stochastic error associated with atomistic computer codes such as MARLOWE and TRIM. With the multigroup Monte Carlo code, MORSE, charged particle effects can be obtained for problems associated with very complex geometries. Results are presented for several charged particle problems. Good agreement is obtained between quantities calculated with the multigroup approach and those obtained experimentally or by atomistic computer codes

  6. MUXS: a code to generate multigroup cross sections for sputtering calculations

    International Nuclear Information System (INIS)

    Hoffman, T.J.; Robinson, M.T.; Dodds, H.L. Jr.

    1982-10-01

    This report documents MUXS, a computer code to generate multigroup cross sections for charged particle transport problems. Cross sections generated by MUXS can be used in many multigroup transport codes, with minor modifications to these codes, to calculate sputtering yields, reflection coefficients, penetration distances, etc

  7. Fulcrum Network Codes

    DEFF Research Database (Denmark)

    2015-01-01

    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity...... in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase...... the number of dimensions seen by the network using a linear mapping. Receivers can tradeoff computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof....

  8. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter

    2018-01-01

    coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements

  9. SASSYS LMFBR systems code

    International Nuclear Information System (INIS)

    Dunn, F.E.; Prohammer, F.G.; Weber, D.P.

    1983-01-01

    The SASSYS LMFBR systems analysis code is being developed mainly to analyze the behavior of the shut-down heat-removal system and the consequences of failures in the system, although it is also capable of analyzing a wide range of transients, from mild operational transients through more severe transients leading to sodium boiling in the core and possible melting of clad and fuel. The code includes a detailed SAS4A multi-channel core treatment plus a general thermal-hydraulic treatment of the primary and intermediate heat-transport loops and the steam generators. The code can handle any LMFBR design, loop or pool, with an arbitrary arrangement of components. The code is fast running: usually faster than real time

  10. OCA Code Enforcement

    Data.gov (United States)

    Montgomery County of Maryland — The Office of the County Attorney (OCA) processes Code Violation Citations issued by County agencies. The citations can be viewed by issued department, issued date...

  11. The fast code

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code which is capable of determining structural loads on a flexible, teetering, horizontal axis wind turbine is described and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  12. Code Disentanglement: Initial Plan

    Energy Technology Data Exchange (ETDEWEB)

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-27

    The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.
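
    The levelization criterion amounts to requiring an acyclic "uses" graph. A minimal sketch with hypothetical package names (not the actual EAP packages):

```python
# A package set is levelized iff the "uses" graph is acyclic; each
# package's level is one more than the highest level among the packages
# it uses. Package names here are hypothetical.
def levelize(deps):
    levels = {}
    def level(pkg, seen=()):
        if pkg in seen:
            raise ValueError("cycle through " + pkg)   # not levelizable
        if pkg not in levels:
            levels[pkg] = 1 + max(
                (level(d, seen + (pkg,)) for d in deps.get(pkg, ())),
                default=0)
        return levels[pkg]
    for pkg in deps:
        level(pkg)
    return levels

deps = {"app": ["mesh", "io"], "mesh": ["util"], "io": ["util"], "util": []}
levels = levelize(deps)                  # util=1, mesh=2, io=2, app=3
```

    Packages at the same level have no dependency on one another, so they can be built, tested and developed in parallel, which is the point of the disentanglement.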

  13. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs

  14. VT ZIP Code Areas

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) A ZIP Code Tabulation Area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit...

  15. Bandwidth efficient coding

    CERN Document Server

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  16. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    2001-01-01

    The description of reactor lattice codes is carried out using the example of the WIMSD-5B code. The WIMS code, in its various versions, is the most widely recognised lattice code. It is used in all parts of the world for calculations of research and power reactors. The version WIMSD-5B is distributed free of charge by the NEA Data Bank. The description of its main features given in the present lecture follows the aspects defined previously for lattice calculations in the lecture on Reactor Lattice Transport Calculations. The spatial models are described, and the approach to the energy treatment is given. Finally, the specific algorithm applied in fuel depletion calculations is outlined. (author)

  17. Active particles

    CERN Document Server

    Degond, Pierre; Tadmor, Eitan

    2017-01-01

    This volume collects ten surveys on the modeling, simulation, and applications of active particles using methods ranging from mathematical kinetic theory to nonequilibrium statistical mechanics. The contributing authors are leading experts working in this challenging field, and each of their chapters provides a review of the most recent results in their areas and looks ahead to future research directions. The approaches to studying active matter are presented here from many different perspectives, such as individual-based models, evolutionary games, Brownian motion, and continuum theories, as well as various combinations of these. Applications covered include biological network formation and network theory; opinion formation and social systems; control theory of sparse systems; theory and applications of mean field games; population learning; dynamics of flocking systems; vehicular traffic flow; and stochastic particles and mean field approximation. Mathematicians and other members of the scientific commu...

  18. Critical Care Coding for Neurologists.

    Science.gov (United States)

    Nuwer, Marc R; Vespa, Paul M

    2015-10-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  19. Lattice Index Coding

    OpenAIRE

    Natarajan, Lakshmi; Hong, Yi; Viterbo, Emanuele

    2014-01-01

    The index coding problem involves a sender with K messages to be transmitted across a broadcast channel, and a set of receivers each of which demands a subset of the K messages while having prior knowledge of a different subset as side information. We consider the specific case of noisy index coding where the broadcast channel is Gaussian and every receiver demands all the messages from the source. Instances of this communication problem arise in wireless relay networks, sensor networks, and ...

  20. Towards advanced code simulators

    International Nuclear Information System (INIS)

    Scriven, A.H.

    1990-01-01

    The Central Electricity Generating Board (CEGB) uses advanced thermohydraulic codes extensively to support PWR safety analyses. A system has been developed to allow fully interactive execution of any code with graphical simulation of the operator desk and mimic display. The system operates in a virtual machine environment, with the thermohydraulic code executing in one virtual machine, communicating via interrupts with any number of other virtual machines each running other programs and graphics drivers. The driver code itself does not have to be modified from its normal batch form. Shortly following the release of RELAP5 MOD1 in IBM compatible form in 1983, this code was used as the driver for this system. When RELAP5 MOD2 became available, it was adopted with no changes needed in the basic system. Overall the system has been used for some 5 years for the analysis of LOBI tests, full scale plant studies and for simple what-if studies. For gaining rapid understanding of system dependencies it has proved invaluable. The graphical mimic system, being independent of the driver code, has also been used with other codes to study core rewetting, to replay results obtained from batch jobs on a CRAY2 computer system and to display suitably processed experimental results from the LOBI facility to aid interpretation. For the above work real-time execution was not necessary. Current work now centers on implementing the RELAP 5 code on a true parallel architecture machine. Marconi Simulation have been contracted to investigate the feasibility of using upwards of 100 processors, each capable of a peak of 30 MIPS to run a highly detailed RELAP5 model in real time, complete with specially written 3D core neutronics and balance of plant models. This paper describes the experience of using RELAP5 as an analyzer/simulator, and outlines the proposed methods and problems associated with parallel execution of RELAP5

  1. Cracking the Gender Codes

    DEFF Research Database (Denmark)

    Rennison, Betina Wolfgang

    2016-01-01

    extensive work to raise the proportion of women. This has helped slightly, but women remain underrepresented at the corporate top. Why is this so? What can be done to solve it? This article presents five different types of answers relating to five discursive codes: nature, talent, business, exclusion...... in leadership management, we must become more aware and take advantage of this complexity. We must crack the codes in order to crack the curve....

  2. Dermatoses in chronic renal patients on dialysis therapy

    Directory of Open Access Journals (Sweden)

    Luis Alberto Batista Peres

    2014-03-01

    Full Text Available Objective: Skin and mucosal disorders are common in patients on long-term hemodialysis. Dialysis prolongs life expectancy, allowing time for these abnormalities to manifest. The objectives of this study were to evaluate the prevalence of dermatological problems in patients with chronic kidney disease (CKD) on hemodialysis. Methods: One hundred and forty-five patients with chronic kidney disease on hemodialysis were studied. All patients were fully examined for skin, hair, mucosal and nail changes by a single examiner, and laboratory data were collected. The data were stored in a Microsoft Excel database and analyzed by descriptive statistics. Continuous variables were compared with Student's t test and categorical variables with the chi-square test or Fisher's exact test, as appropriate. Results: The study included 145 patients with a mean age of 53.6 ± 14.7 years, predominantly male (64.1%) and Caucasian (90.0%). The mean time on dialysis was 43.3 ± 42.3 months. The main underlying diseases were arterial hypertension in 33.8%, diabetes mellitus in 29.6% and chronic glomerulonephritis in 13.1%. The main dermatological manifestations observed were xerosis in 109 (75.2%), ecchymosis in 87 (60.0%), pruritus in 78 (53.8%) and lentigo in 33 (22.8%) patients. Conclusion: Our study showed the presence of more than one dermatosis per patient. Skin changes are frequent in dialysis patients. Further studies are needed for better characterization and management of these dermatoses.

  3. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  4. KENO-V code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The KENO-V code is the current release of the Oak Ridge multigroup Monte Carlo criticality code development. The original KENO, with 16-group Hansen-Roach cross sections and P1 scattering, was one of the first multigroup Monte Carlo codes, and it and its successors have always been a much-used research tool for criticality studies. KENO-V is able to accept large neutron cross section libraries (a 218-group set is distributed with the code) and has a general PN scattering capability. A supergroup feature allows execution of large problems on small computers, but at the expense of increased calculation time and system input/output operations. This supergroup feature is activated automatically by the code in a manner which utilizes as much computer memory as is available. The primary purpose of KENO-V is to calculate the system k_eff, from small bare critical assemblies to large reflected arrays of differing fissile and moderator elements. In this respect KENO-V neither has nor requires the many options and sophisticated biasing techniques of general Monte Carlo codes
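
    A toy analogue of what a criticality code computes (one energy group, infinite medium, no geometry; a sketch of the k_eff idea, not KENO-V's actual tracking) estimates the multiplication factor by counting fission births per source neutron:

```python
import random

# One-group, infinite-medium toy criticality estimate: every source
# neutron is absorbed, and with probability p_fission the absorption is a
# fission emitting nu neutrons, so analytically k_inf = nu * p_fission.
def estimate_kinf(nu, p_fission, generations=50, batch=2000, seed=1):
    rng = random.Random(seed)
    k_per_gen = []
    for _ in range(generations):
        # Each generation restarts from a fixed batch size, mimicking the
        # population renormalization used in Monte Carlo criticality codes.
        births = sum(nu for _ in range(batch) if rng.random() < p_fission)
        k_per_gen.append(births / batch)     # next-generation / source ratio
    return sum(k_per_gen) / len(k_per_gen)

k = estimate_kinf(nu=2.43, p_fission=0.4)    # analytic k_inf = 0.972
```

    A real code replaces the two probabilities with multigroup cross sections and tracks particles through geometry, but the generation-over-generation ratio is the same estimator.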

  5. Code, standard and specifications

    International Nuclear Information System (INIS)

    Abdul Nassir Ibrahim; Azali Muhammad; Ab. Razak Hamzah; Abd. Aziz Mohamed; Mohamad Pauzi Ismail

    2008-01-01

    Radiography, like other techniques, requires standards. These standards are widely used, and the methods for applying them are well established. Accordingly, radiographic testing may only be performed in accordance with the documented regulations. These regulations and guidelines are documented in codes, standards and specifications. In Malaysia, a level-one or basic radiographer may carry out radiography work based on instructions given by a level-two or level-three radiographer. Those instructions are produced from the guidelines in the relevant documents, and the level-two radiographer must follow the specifications in the standard when writing them. This makes clear that radiography is a type of work in which everything must follow the rules. For codes, radiography follows those of the American Society of Mechanical Engineers (ASME); at present the only such code in Malaysia is the rule published by the Atomic Energy Licensing Board (AELB), known as the Practical Code for Radiation Protection in Industrial Radiography. With this code in place, all radiography must automatically follow the applicable rule or standard.

  6. Hot particles

    International Nuclear Information System (INIS)

    Merwin, S.E.; Moeller, M.P.

    1989-01-01

    Nuclear Regulatory Commission (NRC) licensees are required to assess the dose to skin from a hot particle contamination event at a skin depth of 7 mg/cm² over an area of 1 cm² and compare the value to the current dose limit for the skin. Although the resulting number is interesting from a comparative standpoint and can be used to predict local skin reactions, comparison of the number to existing limits based on uniform exposures is inappropriate. Most incidents that can be classified as overexposures based on this interpretation of dose actually have no effect on the health of the worker. As a result, resources are expended to reduce the likelihood that an overexposure event will occur when they could be directed toward eliminating the cause of the problem or enhancing existing programs such as contamination control. Furthermore, from a risk standpoint, this practice is not ALARA because some workers receive whole body doses in order to minimize the occurrence of hot particle skin contaminations. In this paper the authors suggest an alternative approach to controlling hot particle exposures

  7. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    OpenAIRE

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In High Efficiency Video Coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree also brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...

  8. Data exchange between zero dimensional code and physics platform in the CFETR integrated system code

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Guoliang [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China); Shi, Nan [Institute of Plasma Physics, Chinese Academy of Sciences, No. 350 Shushanhu Road, Hefei (China); Zhou, Yifu; Mao, Shifeng [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China); Jian, Xiang [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, School of Electrical and Electronics Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Chen, Jiale [Institute of Plasma Physics, Chinese Academy of Sciences, No. 350 Shushanhu Road, Hefei (China); Liu, Li; Chan, Vincent [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China); Ye, Minyou, E-mail: yemy@ustc.edu.cn [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China)

    2016-11-01

    Highlights: • The workflow of the zero dimensional code and the multi-dimension physics platform of the CFETR integrated system code is introduced. • The iteration process among the codes in the physics platform is described. • The data transfer between the zero dimensional code and the physics platform, including data iteration and validation, and justification for performance parameters. - Abstract: The China Fusion Engineering Test Reactor (CFETR) integrated system code contains three parts: a zero dimensional code, a physics platform and an engineering platform. We use the zero dimensional code to identify a set of preliminary physics and engineering parameters for CFETR, which is used as input to initiate multi-dimension studies using the physics and engineering platforms for design, verification and validation. Effective data exchange between the zero dimensional code and the physics platform is critical for the optimization of the CFETR design. For example, in evaluating the impact of impurity radiation on core performance, an open field line code is used to calculate the impurity transport from the first-wall boundary to the pedestal. The impurity particles in the pedestal are used as boundary conditions in a transport code for calculating impurity transport in the core plasma and the impact of core radiation on core performance. Comparison of the results from the multi-dimensional study to those from the zero dimensional code is used to further refine the controlled radiation model. The data transfer between the zero dimensional code and the physics platform, including data iteration and validation, and justification for performance parameters will be presented in this paper.

  9. Radiation in Particle Simulations

    International Nuclear Information System (INIS)

    More, R.; Graziani, F.; Glosli, J.; Surh, M.

    2010-01-01

    Hot dense radiative (HDR) plasmas common to Inertial Confinement Fusion (ICF) and stellar interiors have high temperature (a few hundred eV to tens of keV), high density (tens to hundreds of g/cc) and high pressure (hundreds of megabars to thousands of gigabars). Typically, such plasmas undergo collisional, radiative, atomic and possibly thermonuclear processes. In order to describe HDR plasmas, computational physicists in ICF and astrophysics use atomic-scale microphysical models implemented in various simulation codes. Experimental validation of the models used to describe HDR plasmas is difficult to perform. Direct Numerical Simulation (DNS) of the many-body interactions of plasmas is a promising approach to model validation, but previous work either relies on the collisionless approximation or ignores radiation. We present four methods that attempt a new numerical simulation technique to address a currently unsolved problem: the extension of molecular dynamics to collisional plasmas including emission and absorption of radiation. The first method applies the Lienard-Wiechert solution of Maxwell's equations for a classical particle whose motion is assumed to be known. The second method expands the electromagnetic field in normal modes (plane waves in a box with periodic boundary conditions) and solves the equation for wave amplitudes coupled to the particle motion. The third method is a hybrid molecular dynamics/Monte Carlo (MD/MC) method which calculates radiation emitted or absorbed by electron-ion pairs during close collisions. The fourth method is a generalization of the third method to include small clusters of particles emitting radiation during close encounters: one electron simultaneously hitting two ions, two electrons simultaneously hitting one ion, etc. This approach is inspired by the virial expansion method of equilibrium statistical mechanics.
Using a combination of these methods we believe it is possible to do atomic-scale particle simulations of

  10. Recent Progress on the Marylie/Impact Beam Dynamics Code

    International Nuclear Information System (INIS)

    Ryne, R.D.; Qiang, J.; Bethel, E.W.; Pogorelov, I.; Shalf, J.; Siegerist, C.; Venturini, M.; Dragt, A.J.; Adelmann, A.; Abell, D.; Amundson, J.; Spentzouris, P.; Neri, F.; Walstrom, P.; Mottershead, C.T.; Samulyak, R.

    2006-01-01

    MARYLIE/IMPACT (ML/I) is a hybrid code that combines the beam optics capabilities of MARYLIE with the parallel Particle-In-Cell capabilities of IMPACT. In addition to combining the capabilities of these codes, ML/I has a number of powerful features, including a choice of Poisson solvers, a fifth-order rf cavity model, multiple reference particles for rf cavities, a library of soft-edge magnet models, representation of magnet systems in terms of coil stacks with possibly overlapping fields, and wakefield effects. The code allows for map production, map analysis, particle tracking, and 3D envelope tracking, all within a single, coherent user environment. ML/I has a front end that can read both MARYLIE input and MAD lattice descriptions. The code can model beams with or without acceleration, and with or without space charge. Developed under a US DOE Scientific Discovery through Advanced Computing (SciDAC) project, ML/I is well suited to large-scale modeling, simulations having been performed with up to 100M macroparticles. The code inherits the powerful fitting and optimizing capabilities of MARYLIE augmented for the new features of ML/I. The combination of soft-edge magnet models, high-order capability, space charge effects, and fitting/optimization capabilities, make ML/I a powerful code for a wide range of beam optics design problems. This paper provides a description of the code and its unique capabilities

  11. COSY INFINITY, a new arbitrary order optics code

    International Nuclear Information System (INIS)

    Berz, M.

    1990-01-01

    The new arbitrary order particle optics and beam dynamics code COSY INFINITY is presented. The code is based on differential algebraic (DA) methods. COSY INFINITY has a fully structured object oriented language environment. This provides a simple interface for the casual or novice user. At the same time, it offers the advanced user a very flexible and powerful tool for the utilization of DA. The power and generality of the environment is perhaps best demonstrated by the fact that the physics routines of COSY INFINITY are written in its own input language. The approach also facilitates the implementation of new features because special code generated by a user can be readily adapted into the source code. Besides being very compact in size, the code is also very fast, thanks to efficiently programmed elementary DA operations. For simple low order problems, which can be handled by conventional codes, the speed of COSY INFINITY is comparable and in certain cases even higher

  12. Evolução quaternária, distribuição de partículas nos solos e ambientes de sedimentação em manguezais do estado de São Paulo Quaternary evolution, particle distribution in soils and sedimentary environment in mangroves in São Paulo State, Brazil

    Directory of Open Access Journals (Sweden)

    Valdomiro Severino de Souza-Júnior

    2007-08-01

    Full Text Available The particle-size distribution in soils or sediments of coastal plains helps in understanding sedimentation processes in estuaries, serving as an important attribute for studies of palaeo-environmental reconstruction, geochemical cycles and environmental pollution, such as heavy-metal contamination and oil spills, which, owing to human action, are relatively common in these environments. With the objective of characterising the sedimentation environments according to grain size and to the Quaternary evolution along the coast of the State of São Paulo, soils of 14 mangroves were studied. Particle-size analyses were carried out on the 0-20 and 60-80 cm layers, determining the clay, silt and total sand fractions and five sand sub-fractions. 14C dating was performed by liquid scintillation counting and by accelerator mass spectrometry on the humin fraction of the organic matter, and thermoluminescence dating on quartz grains, for samples from different layers of the sampled mangroves. The grain-size results were treated according to the statistical parameters of Folk & Ward. The mangrove soils of the State of São Paulo are of Holocene age, ranging between 410 and 3,700 years BP, down to a depth of 80 cm. In some cases this Holocene substrate overlies a Pleistocene sandy layer, as identified at SG1 (65-77 cm = 11,000 years and 90-95 cm = 24,700 years), PM (72-79 cm = 60,000 years) and RF, whose 40-50 cm layer yielded an age of 12,200 years. The mangroves have soils of different textures, varying from sandy to clayey. Sandy soils were identified on Ilha do Cardoso, on the Guaratuba River plain and along the northern coast, whose mangroves were established on reworked sediments of ancient sandy ridges and located on the banks of the rivers that

  13. Development of a numerical simulator employing the Smoothed Particle Hydrodynamics method for the solution of incompressible flows. Parallel computational implementation (CUDA)

    OpenAIRE

    Marciana Lima Góes

    2012-01-01

    In this work, a numerical simulator based on the mesh-free Smoothed Particle Hydrodynamics (SPH) method was developed for the solution of incompressible Newtonian fluid flows. Unlike most existing versions of this method, the numerical code uses an iterative technique to determine the pressure field. This procedure employs the differential form of an equation of state for a compressible fluid and the continuity equation to ...

  14. Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP

    Energy Technology Data Exchange (ETDEWEB)

    Vay, J.-L.; Furman, M.A.; Azevedo, A.W.; Cohen, R.H.; Friedman, A.; Grote, D.P.; Stoltz, P.H.

    2004-04-19

    We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE.

  15. Vectorization, parallelization and porting of nuclear codes (vectorization and parallelization). Progress report fiscal 1998

    International Nuclear Information System (INIS)

    Ishizuki, Shigeru; Kawai, Wataru; Nemoto, Toshiyuki; Ogasawara, Shinobu; Kume, Etsuo; Adachi, Masaaki; Kawasaki, Nobuo; Yatake, Yo-ichi

    2000-03-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system and the Paragon system at the Center for Promotion of Computational Science and Engineering in the Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. These results are reported in 3 parts, i.e., the vectorization and parallelization on vector processors part, the parallelization on scalar processors part and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this vectorization and parallelization on vector processors part, the vectorization of General Tokamak Circuit Simulation Program code GTCSP, the vectorization and parallelization of Molecular Dynamics NTV (n-particle, Temperature and Velocity) Simulation code MSP2, Eddy Current Analysis code EDDYCAL, Thermal Analysis Code for Test of Passive Cooling System by HENDEL T2 code THANPACST2 and MHD Equilibrium code SELENEJ on the VPP500 are described. In the parallelization on scalar processors part, the parallelization of Monte Carlo N-Particle Transport code MCNP4B2, Plasma Hydrodynamics code using Cubic Interpolated Propagation Method PHCIP and Vectorized Monte Carlo code (continuous energy model / multi-group model) MVP/GMVP on the Paragon are described. In the porting part, the porting of Monte Carlo N-Particle Transport code MCNP4B2 and Reactor Safety Analysis code RELAP5 on the AP3000 are described. (author)

  16. Interaction of free charged particles with a chirped electromagnetic pulse

    International Nuclear Information System (INIS)

    Khachatryan, A.G.; Goor, F.A. van; Boller, K.-J.

    2004-01-01

    We study the effect of chirp on electromagnetic (EM) pulse interaction with a charged particle. Both the one-dimensional (1D) and 3D cases are considered. It is found that, in contrast to the case of a nonchirped pulse, the charged particle energy can be changed after the interaction with a 1D EM chirped pulse. Different types of chirp and pulse envelopes are considered. In the case of small chirp, an analytical expression is found for arbitrary temporal profiles of the chirp and the pulse envelope. In the 3D case, the interaction with a chirped pulse results in a polarization-dependent scattering of charged particles

  17. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    Directory of Open Access Journals (Sweden)

    F.N. HASOON

    2006-12-01

    Full Text Available A new code structure for spectral amplitude coding optical code division multiple access systems based on double weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is a variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. Much better performance can be provided by using the EDW code compared with existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes. Both theoretical analysis and simulation show that the EDW code performs much better than the Hadamard and MFH codes.
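The in-phase cross-correlation property this abstract refers to can be checked numerically. The sketch below uses hypothetical weight-2 DW-style codewords (invented for illustration, not taken from the paper):

```python
def cross_correlation(x, y):
    """In-phase cross-correlation of two spectral-amplitude codewords:
    the number of chip positions where both codes carry a '1'."""
    return sum(a & b for a, b in zip(x, y))

# hypothetical weight-2 codewords for two users (illustrative only)
u1 = [0, 1, 1]
u2 = [1, 1, 0]
print(cross_correlation(u1, u1))  # -> 2 (the code weight)
print(cross_correlation(u1, u2))  # -> 1 (overlap with the other user)
```

A cross-correlation of at most one between distinct users is what lets the decoder cancel multiple-access interference by subtraction.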

  18. Nuclear code abstracts (1975 edition)

    International Nuclear Information System (INIS)

    Akanuma, Makoto; Hirakawa, Takashi

    1976-02-01

    Nuclear Code Abstracts is compiled in the Nuclear Code Committee to exchange information of the nuclear code developments among members of the committee. Enlarging the collection, the present one includes nuclear code abstracts obtained in 1975 through liaison officers of the organizations in Japan participating in the Nuclear Energy Agency's Computer Program Library at Ispra, Italy. The classification of nuclear codes and the format of code abstracts are the same as those in the library. (auth.)

  19. Some new ternary linear codes

    Directory of Open Access Journals (Sweden)

    Rumen Daskalov

    2017-07-01

    Full Text Available Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
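For small dimensions, the minimum Hamming distance of a ternary linear code can be verified by brute force over all nonzero messages. A sketch (the generator matrix below is the well-known $[4,2,3]_3$ tetracode, used here as an example; it is not one of the paper's new codes):

```python
import itertools

def min_distance(G, q=3):
    """Brute-force minimum Hamming distance of the linear code generated
    by the rows of G over GF(q); feasible only for small dimension k."""
    k, n = len(G), len(G[0])
    best = n
    for m in itertools.product(range(q), repeat=k):
        if not any(m):
            continue  # skip the zero message / zero codeword
        cw = [sum(mi * gij for mi, gij in zip(m, col)) % q for col in zip(*G)]
        best = min(best, sum(1 for c in cw if c != 0))
    return best

G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]   # generator of the [4,2,3] ternary tetracode
print(min_distance(G))  # -> 3
```

For a linear code the minimum distance equals the minimum nonzero codeword weight, which is what the loop computes.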

  20. Enhanced stopping of macro-particles in particle-in-cell simulations

    International Nuclear Information System (INIS)

    May, J.; Tonge, J.; Ellis, I.; Mori, W. B.; Fiuza, F.; Fonseca, R. A.; Silva, L. O.; Ren, C.

    2014-01-01

    We derive an equation for energy transfer from relativistic charged particles to a cold background plasma appropriate for finite-size particles that are used in particle-in-cell simulation codes. Expressions for one-, two-, and three-dimensional particles are presented, with special attention given to the two-dimensional case. This energy transfer is due to the electric field of the wake set up in the background plasma by the relativistic particle. The enhanced stopping is dependent on q²/m, where q is the charge and m is the mass of the relativistic particle, and therefore simulation macro-particles with large charge but identical q/m will stop more rapidly. The stopping power also depends on the effective particle shape of the macro-particle. These conclusions are verified in particle-in-cell simulations. We present 2D simulations of test particles, relaxation of high-energy tails, and integrated fast ignition simulations showing that the enhanced drag on macro-particles may adversely affect the results of these simulations in a wide range of high-energy density plasma scenarios. We also describe a particle splitting algorithm which can potentially overcome this problem and show its effect in controlling the stopping of macro-particles
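The q²/m scaling and the splitting idea can be illustrated with a small numerical sketch. The values below are arbitrary, and this is only the scaling argument, not the paper's algorithm:

```python
def split_macroparticle(q, m, n):
    """Split one macro-particle into n pieces with charge q/n and mass m/n,
    preserving q/m while reducing each piece's q^2/m by a factor of n."""
    return [(q / n, m / n) for _ in range(n)]

q, m, n = 8.0, 2.0, 4
pieces = split_macroparticle(q, m, n)

# q/m (and hence the collisionless dynamics) is preserved for each piece
assert all(abs(qi / mi - q / m) < 1e-12 for qi, mi in pieces)

# the wake-field deceleration scales as q^2/m, so each piece stops n times
# more slowly than the original macro-particle
qi, mi = pieces[0]
print(qi**2 / mi, q**2 / m)  # -> 8.0 32.0
```

Because each piece carries q/n and m/n, its q²/m factor is q²/(n·m): the unphysical enhanced drag shrinks as the macro-particle is split more finely.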

  1. The nuclear reaction model code MEDICUS

    International Nuclear Information System (INIS)

    Ibishia, A.I.

    2008-01-01

    The new computer code MEDICUS has been used to calculate cross sections of nuclear reactions. The code, implemented in the MATLAB 6.5, Mathematica 5, and Fortran 95 programming languages, can be run in graphical and command-line mode. A Graphical User Interface (GUI) has been built that allows the user to perform calculations and to plot results just by mouse clicking. The MS Windows XP and Red Hat Linux platforms are supported. MEDICUS is a modern nuclear reaction code that can compute charged-particle-, photon-, and neutron-induced reactions in the energy range from thresholds to about 200 MeV. The calculation of the cross sections of nuclear reactions is done in the framework of the Exact Many-Body Nuclear Cluster Model (EMBNCM), Direct Nuclear Reactions, Pre-equilibrium Reactions, Optical Model, DWBA, and Exciton Model with Cluster Emission. The code can also be used for the calculation of the nuclear cluster structure of nuclei. We have calculated nuclear cluster models for some nuclei such as 177Lu, 90Y, and 27Al. It has been found that the nucleus 27Al can be represented through two different nuclear cluster models: 25Mg + d and 24Na + 3He. Cross sections as a function of energy for the reaction 27Al(3He,x)22Na, established as a production method of 22Na, are calculated by the code MEDICUS. Theoretical calculations of cross sections are in good agreement with experimental results. Reaction mechanisms are taken into account. (author)

  2. ACE - Manufacturer Identification Code (MID)

    Data.gov (United States)

    Department of Homeland Security — The ACE Manufacturer Identification Code (MID) application is used to track and control identifications codes for manufacturers. A manufacturer is identified on an...

  3. Algebraic and stochastic coding theory

    CERN Document Server

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.

  4. Optical coding theory with Prime

    CERN Document Server

    Kwong, Wing C

    2013-01-01

    Although several books cover the coding theory of wireless communications and the hardware technologies and coding techniques of optical CDMA, no book has been specifically dedicated to optical coding theory-until now. Written by renowned authorities in the field, Optical Coding Theory with Prime gathers together in one volume the fundamentals and developments of optical coding theory, with a focus on families of prime codes, supplemented with several families of non-prime codes. The book also explores potential applications to coding-based optical systems and networks. Learn How to Construct

  5. Requirements and specifications for a particle database

    International Nuclear Information System (INIS)

    2015-01-01

    One of the tasks of WPEC Subgroup 38 (SG38) is to design a database structure for storing the particle information needed for nuclear reaction databases and transport codes. Since the same particle may appear many times in a reaction database (produced by many different reactions on different targets), one of the long-term goals for SG38 is to move towards a central database of particle information to reduce redundancy and ensure consistency among evaluations. The database structure must be general enough to describe all relevant particles and their properties, including mass, charge, spin and parity, half-life, decay properties, and so on. Furthermore, it must be broad enough to handle not only excited nuclear states but also excited atomic states that can de-excite through atomic relaxation. Databases built with this hierarchy will serve as central repositories for particle information that can be linked to from codes and other databases. It is hoped that the final product is general enough for use in other projects besides SG38. While this is called a 'particle database', the definition of a particle (as described in Section 2) is very broad. The database must describe nucleons, nuclei, excited nuclear states (and possibly atomic states) in addition to fundamental particles like photons, electrons, muons, etc. Under this definition the list of possible particles becomes quite large. To help organize them the database will need a way of grouping related particles (e.g., all the isotopes of an element, or all the excited levels of an isotope) together into particle 'groups'. The database will also need a way to classify particles that belong to the same 'family' (such as 'leptons', 'baryons', etc.). Each family of particles may have special requirements as to what properties are required. One important function of the particle database will be to provide an easy way for codes and external databases to look up any particle stored inside. 
In order to make access as
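The grouping hierarchy sketched in this abstract might look like the following. All class and field names here are illustrative assumptions, not the SG38 schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Particle:
    name: str
    mass_amu: float
    charge: int
    spin: Optional[float] = None
    half_life_s: Optional[float] = None   # None means stable

@dataclass
class ParticleGroup:
    """A 'group' of related particles, e.g. all isotopes of an element."""
    name: str
    members: List[Particle] = field(default_factory=list)

hydrogen = ParticleGroup("H isotopes")
hydrogen.members += [
    Particle("H1", 1.00783, 1, spin=0.5),
    Particle("H2", 2.01410, 1, spin=1.0),
    Particle("H3", 3.01605, 1, spin=0.5, half_life_s=3.9e8),
]

# a central lookup that codes and external databases could link against
lookup = {p.name: p for p in hydrogen.members}
print(lookup["H3"].half_life_s is not None)  # -> True
```

A real schema would add decay modes, excited levels and family classifications, but the group/member structure is the part the abstract emphasises.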

  6. New particles

    Energy Technology Data Exchange (ETDEWEB)

    Khare, A.

    1980-07-01

    The current state of the art in the discovery of new elementary particles is reviewed. At present, quarks and mesons are accepted as the basic constituents of matter. The charmonium model (c anti-c system) and the 'open charm' are discussed. Explanations are offered for the recent discovery of the heavy lepton tau. Quark states such as beauty and taste are also dealt with at length. The properties of the t anti-t bound system are speculated upon. It is concluded that the understanding of the c anti-c and b anti-b families is facilitated by the assumption of the quarkonium model. Implications at the astrophysical level are indicated.

  7. Particle Mechanics

    CERN Document Server

    Collinson, Chris

    1995-01-01

    * Assumes no prior knowledge * Adopts a modelling approach * Numerous tutorial problems, worked examples and exercises included * Elementary topics augmented by planetary motion and rotating frames. This text provides an invaluable introduction to mechanics, confining attention to the motion of a particle. It begins with a full discussion of the foundations of the subject within the context of mathematical modelling before covering more advanced topics including the theory of planetary orbits and the use of rotating frames of reference. Truly introductory, the style adopted is perfect for those u

  8. The Aster code

    International Nuclear Information System (INIS)

    Delbecq, J.M.

    1999-01-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  9. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
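The syndrome-based encoding described above can be illustrated with a toy example. The sketch below uses the (7,4) Hamming code and exhaustive coset decoding in place of the paper's LDPC syndromes and sum-product decoder, so it shows only the principle:

```python
import itertools

# Parity-check matrix of the (7,4) Hamming code. The encoder sends only
# the 3-bit syndrome of x; the decoder recovers x from the syndrome plus
# correlated side information y (here: x and y differ in at most one bit).
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(x):
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in H)

def decode(s, y):
    """Exhaustive coset decoding: the member of the coset with syndrome s
    closest in Hamming distance to the side information y."""
    best = None
    for cand in itertools.product((0, 1), repeat=7):
        if syndrome(cand) == s:
            d = sum(a != b for a, b in zip(cand, y))
            if best is None or d < best[0]:
                best = (d, list(cand))
    return best[1]

x = [1, 0, 1, 1, 0, 0, 1]
y = [1, 0, 1, 1, 0, 1, 1]            # side information: one bit flipped
print(decode(syndrome(x), y) == x)   # -> True
```

Only 3 bits cross the channel instead of 7; the correlation between x and y supplies the rest, which is the Slepian-Wolf idea in miniature.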

  10. Contribution to the study by a Monte-Carlo method of mono-kinetic particles propagation in a cavity simulated by rectangular cubes. CODE Cupidon 2 (version 29); Contribution a l'etude par une methode de Monte-Carlo de la propagation de particules monocinetiques dans une enceinte schematisee par un assemblage de parallelepipedes rectangles. Programme Cupidon 2 (version 29)

    Energy Technology Data Exchange (ETDEWEB)

    Duco-Sivagnanam, J

    1967-07-01

    The Cupidon 2 code calculates the mono-kinetic neutron flux in an assembly of cubic cavities joined by rectangular holes. This report is a partial description of the Cupidon 2 code which explains the calculation procedure: data entry, code limits, etc. (A.L.B.)

  11. Progress in 3D Space-charge Calculations in the GPT Code

    NARCIS (Netherlands)

    Pöplau, G.; Rienen, van U.; Loos, de M.J.; Geer, van der S.B.

    2004-01-01

    The mesh-based 3D space-charge routine in the GPT (General Particle Tracer, Pulsar Physics) code scales linearly with the number of particles in terms of CPU time and allows a million particles to be tracked on a normal PC. The crucial ingredient of the routine is a non-equidistant multi-grid

  12. Coding considerations for standalone molecular dynamics simulations of atomistic structures

    Science.gov (United States)

    Ocaya, R. O.; Terblans, J. J.

    2017-10-01

    The laws of Newtonian mechanics allow ab-initio molecular dynamics to model and simulate particle trajectories in material science by defining a differentiable potential function. This paper discusses some considerations for the coding of ab-initio programs for simulation on a standalone computer and illustrates the approach by C language codes in the context of embedded metallic atoms in the face-centred cubic structure. The algorithms use velocity-time integration to determine particle parameter evolution for up to several thousands of particles in a thermodynamical ensemble. Such functions are reusable and can be placed in a redistributable header library file. While there are both commercial and free packages available, their heuristic nature prevents dissection. In addition, developing own codes has the obvious advantage of teaching techniques applicable to new problems.
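The velocity-time integration this abstract describes is commonly a velocity-Verlet loop. Below is a minimal 1D sketch for a single Lennard-Jones pair in reduced units (written in Python rather than the paper's C, with illustrative parameters):

```python
def lj_potential(r):
    """Lennard-Jones pair potential in reduced units (eps = sigma = 1)."""
    return 4.0 * (r**-12 - r**-6)

def lj_force(r):
    """F = -dV/dr for the Lennard-Jones pair potential."""
    return 24.0 * (2.0 * r**-13 - r**-7)

def velocity_verlet(r, v, dt, steps, m=1.0):
    """Advance separation r and velocity v through `steps` steps of size dt."""
    f = lj_force(r)
    for _ in range(steps):
        r += v * dt + 0.5 * (f / m) * dt * dt   # position update
        f_new = lj_force(r)
        v += 0.5 * (f + f_new) / m * dt         # velocity update (averaged force)
        f = f_new
    return r, v

r0, v0 = 1.2, 0.0
r1, v1 = velocity_verlet(r0, v0, dt=1e-3, steps=2000)
e0 = 0.5 * v0**2 + lj_potential(r0)
e1 = 0.5 * v1**2 + lj_potential(r1)
print(abs(e1 - e0) < 1e-3)  # -> True: symplectic, energy nearly conserved
```

A production code for embedded-atom FCC metals replaces `lj_force` with the many-body embedding functional and loops the same update over thousands of particles, which is why the abstract suggests keeping such routines in a reusable header library.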

  13. Fusion PIC code performance analysis on the Cori KNL system

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, Tuomas S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Friesen, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Raman, Karthic [INTEL Corp. (United States)]

    2017-05-25

    We study the attainable performance of Particle-In-Cell codes on the Cori KNL system by analyzing a miniature particle push application based on the fusion PIC code XGC1. We start from the most basic building blocks of a PIC code and build up the complexity to identify the kernels that cost the most in performance and focus optimization efforts there. Particle push kernels operate at high arithmetic intensity (AI) and are not likely to be memory bandwidth or even cache bandwidth bound on KNL. Therefore, we see only minor benefits from the high bandwidth memory available on KNL, and achieving good vectorization is shown to be the most beneficial optimization path with a theoretical yield of up to 8x speedup on KNL. In practice we are able to obtain up to a 4x gain from vectorization due to limitations set by the data layout and memory latency.

  14. Speech coding code- excited linear prediction

    CERN Document Server

    Bäckström, Tom

    2017-01-01

    This book provides a scientific understanding of the most central techniques used in speech coding, both for advanced students and for professionals with a background in speech, audio and/or digital signal processing. It provides a clear connection between the whys, hows and whats, thus enabling a clear view of the necessity, purpose and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: What do we want to achieve, and especially why is this goal important? Resource/Information: What information is available, and how can it be useful? Resource/Platform: What kinds of platforms are we working with, and what are their capabilities and restrictions? This includes computational, memory and acoustic properties, and the transmission capacity of the devices used. The book goes on to address Solutions: Which solutions have been proposed, and how can they be used to reach the stated goals, and ...

  15. Physical characterisation of particles and rheological characterisation of a heterogeneous system used in low-pressure injection moulding; Caracterizacao fisica de particulas e reologica de um sistema heterogeneo utilizado em moldagem de pos por injecao a baixa pressao

    Energy Technology Data Exchange (ETDEWEB)

    Zampieron, Joao Vicente

    2002-07-01

    The powder injection moulding process is a recent technology which offers as advantages the high-volume production of metal parts of complex geometry at low cost, with no need for secondary machining operations. The main focus of this thesis was the injection of a coarse-powder feedstock. The process begins with the preparation of the feedstock, that is, the combination of metal powders with organic binders; the subsequent steps are injection into moulds, debinding, sintering and, if necessary, cleaning. Characterisation of the powders is indispensable for the formulation of the feedstock; this is little discussed in the open literature and is a source of controversy among authors. First, a series of characterisations of AISI 316L stainless steel powder (below 25 μm) was carried out. The next step was to characterise the rheological behaviour of the feedstock using different rheological apparatus, so as to find the equipment most appropriate to the low-pressure powder injection moulding process. The feedstock has to present a favourable rheological behaviour, namely low viscosity. The results of the physical characterisation were correlated among themselves and with the rheological characterisation, with the purpose of finding agreement among their values. Finally, the possibility of injecting a feedstock of coarse, water- and gas-atomised stainless steel powders was studied; its main advantage is a reduction in process costs. According to the literature, only powders with sizes below 25 μm can be injected. Hence, starting from the physical characterisation of the particles and the rheological characterisation of the feedstock, an appropriate formulation was found for the coarse powders, characterised by particles below 45 μm. In this case it was necessary to alter the feedstock composition drastically, using large amounts of wax, which led to unstable rheological conditions.
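Feedstock viscosity of the kind discussed above is commonly summarized with the Ostwald-de Waele (power-law) model, eta(gamma_dot) = K * gamma_dot**(n - 1), where a flow index n < 1 means the shear-thinning behaviour desirable for moulding. The sketch below is a generic textbook illustration with hypothetical parameters, not the thesis's measured data.

```python
def apparent_viscosity(shear_rate, K, n):
    """Power-law apparent viscosity (Pa.s).

    K: consistency index (Pa.s^n); n: flow behaviour index
    (n < 1 -> shear thinning, n = 1 -> Newtonian).
    """
    return K * shear_rate ** (n - 1)

# Hypothetical wax-based feedstock parameters:
K, n = 20.0, 0.6
eta_low = apparent_viscosity(10.0, K, n)      # viscosity at low shear rate
eta_high = apparent_viscosity(1000.0, K, n)   # viscosity at high shear rate
# For n < 1, viscosity drops as shear rate rises, aiding mould filling.
```

The instability mentioned in the abstract would show up in such a fit as parameters K and n that drift between repeated measurements of the same feedstock.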

  16. Electro-magnetic cascade calculation using EGS4 code

    International Nuclear Information System (INIS)

    Namito, Yoshihito; Hirayama, Hideo

    2001-01-01

    The outline of the general-purpose electron-photon transport code EGS4 (Electron-Gamma-Shower Version 4) is described. In section 1, the history of electron-photon Monte Carlo transport codes leading to EGS4 is described. In section 2, the features of EGS4, the physical processes treated, the preparation of cross sections and the language are explained. The upper energy limit of EGS4 is a few thousand GeV; the lower energy limits are 1 keV for photons and 10 keV for electrons, respectively. In section 3, the particle transport method of the EGS4 code is discussed. The key points are the condensed history method, the continuous slowing down approximation and the multiple scattering approximation. The ordering of the particle transport calculation is also mentioned, and the switches that control the scoring routine AUSGAB are listed. In section 4, the output from the code is described. In section 5, several benchmark calculations are described. (author)
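The condensed history idea named above can be sketched schematically: instead of simulating every individual collision, the electron advances in macroscopic steps, each deducting a continuous-slowing-down (CSDA) energy loss and sampling one aggregate multiple-scattering deflection. The snippet below is an illustration of that idea only, with a crude Highland-style Gaussian scattering width and hypothetical medium parameters; EGS4's actual algorithms are far more detailed.

```python
import math, random

def condensed_history_steps(E0_MeV, step_cm, stopping_power, X0_cm,
                            E_cut=0.01, seed=1):
    """Track one electron; return [(energy_MeV, deflection_rad), ...]."""
    rng = random.Random(seed)
    E, history = E0_MeV, []
    while E > E_cut:
        E -= stopping_power * step_cm              # CSDA energy loss per step
        if E <= E_cut:
            break
        # Highland-style Gaussian multiple-scattering width (radians);
        # crude: ignores the logarithmic term and takes beta*p ~ E.
        theta0 = (13.6e-3 / E) * math.sqrt(step_cm / X0_cm)
        history.append((E, rng.gauss(0.0, theta0)))
    return history

# Hypothetical medium: stopping power 2 MeV/cm, radiation length 10 cm.
hist = condensed_history_steps(E0_MeV=10.0, step_cm=0.1,
                               stopping_power=2.0, X0_cm=10.0)
```

Note how the sampled deflections grow as the energy falls, which is the qualitative behaviour the multiple scattering approximation must capture.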

  17. Aerosol particle losses in sampling systems

    International Nuclear Information System (INIS)

    Fan, B.J.; Wong, F.S.; Ortiz, C.A.; Anand, N.K.; McFarland, A.R.

    1993-01-01

    When aerosols are sampled from stacks and ducts, it is usually necessary to transport them from the point of sampling to a location of collection or analysis. Losses of aerosol particles can occur in the inlet region of the probe, in straight horizontal and vertical tubes, and in elbows. For probes in laminar flow, the Saffman lift force can cause substantial losses of particles in a short inlet region. An empirical model has been developed to predict probe inlet losses, which are often on the order of 40% for 10 μm AED particles. A user-friendly PC code, DEPOSITION, has been set up to model losses in transport systems. Experiments have been conducted to compare actual aerosol particle losses in transport systems with those predicted by the DEPOSITION code.
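One of the loss mechanisms such transport models combine can be sketched with standard textbook formulas (this is an illustration of ours, not the DEPOSITION code): Stokes settling velocity plus the simple penetration expression for gravitational deposition in a horizontal tube under turbulent flow, P = exp(-d L v_ts / Q). The sampling-line parameters below are hypothetical.

```python
import math

def settling_velocity(d_p, rho_p=1000.0, mu=1.81e-5, g=9.81, Cc=1.0):
    """Stokes terminal settling velocity (m/s) for particle diameter d_p (m),
    particle density rho_p (kg/m^3), air viscosity mu (Pa.s), slip factor Cc."""
    return rho_p * d_p ** 2 * g * Cc / (18.0 * mu)

def penetration_horizontal_turbulent(d_tube, L, Q, v_ts):
    """Fraction of particles surviving gravitational settling in a horizontal
    tube of diameter d_tube (m) and length L (m) at volumetric flow Q (m^3/s)."""
    return math.exp(-d_tube * L * v_ts / Q)

# Hypothetical line: 10 um unit-density particles, 1 cm tube, 2 m run, 10 L/min.
v_ts = settling_velocity(10e-6)
P = penetration_horizontal_turbulent(0.01, 2.0, 10.0 / 1000 / 60, v_ts)
loss = 1.0 - P   # fraction lost to the tube wall
```

A complete model such as DEPOSITION would combine terms like this with inlet, diffusion, and elbow (inertial) loss mechanisms for each section of the line.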

  18. Spatially coded backscatter radiography

    International Nuclear Information System (INIS)

    Thangavelu, S.; Hussein, E.M.A.

    2007-01-01

    Conventional radiography requires access to two opposite sides of an object, which makes it unsuitable for the inspection of extended and/or thick structures (airframes, bridges, floors, etc.). Backscatter imaging can overcome this problem, but the indications obtained are difficult to interpret. This paper applies the coded aperture technique to gamma-ray backscatter radiography in order to enhance the detectability of flaws. This spatial coding method involves positioning a mask of open and closed elements to selectively permit or block the passage of radiation. The resulting coded-aperture indications are then mathematically decoded to detect the presence of anomalies. Indications obtained from Monte Carlo calculations were used in this work to simulate radiation scattering measurements; these simulated measurements were used to investigate the applicability of the technique to flaw detection by backscatter radiography.
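The encode/decode cycle described above can be shown in one dimension (a generic illustration of ours, not the paper's method): a quadratic-residue mask of prime length encodes a source by circular convolution, and correlating the detector pattern with the balanced decoding array G = 2A - 1 produces a sharp peak at the true source position.

```python
p = 11
residues = {(i * i) % p for i in range(1, p)}      # quadratic residues mod 11
A = [1 if i in residues else 0 for i in range(p)]  # open (1) / closed (0) mask
G = [2 * a - 1 for a in A]                         # balanced decoding array

def encode(source):
    """Detector counts: circular convolution of the source with the mask."""
    return [sum(source[j] * A[(i - j) % p] for j in range(p)) for i in range(p)]

def decode(detector):
    """Correlate the detector pattern with G to reconstruct the source."""
    return [sum(detector[(i + k) % p] * G[k] for k in range(p)) for i in range(p)]

source = [0] * p
source[4] = 100                # point source at position 4
image = decode(encode(source))
peak = image.index(max(image)) # -> 4: the source position is recovered
```

Because the quadratic residues form a difference set, the off-peak correlation is flat, which is what makes the mathematical decoding of coded-aperture indications well conditioned.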

  19. Aztheca Code; Codigo Aztheca

    Energy Technology Data Exchange (ETDEWEB)

    Quezada G, S.; Espinosa P, G. [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico); Centeno P, J.; Sanchez M, H., E-mail: sequga@gmail.com [UNAM, Facultad de Ingenieria, Ciudad Universitaria, Circuito Exterior s/n, 04510 Ciudad de Mexico (Mexico)

    2017-09-15

    This paper presents the Aztheca code, which is made up of mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, the recirculation system, dynamic pressure and level, and the control system. The Aztheca code is validated against plant data, as well as against the manufacturer's predictions, for the reactor operating in steady state. To demonstrate that the model is also applicable during transients, an event that occurred in a nuclear power plant with a BWR is selected, and the plant data are compared with the results obtained with RELAP-5 and with the Aztheca model. The results show that both RELAP-5 and the Aztheca code are able to adequately predict the behavior of the reactor. (Author)
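The neutron-kinetics building block of such system codes is often a point-kinetics model. Below is a schematic one-delayed-group version with explicit Euler integration and textbook-style parameters; it is illustrative only, and Aztheca's actual kinetics and thermo-hydraulic models are more elaborate.

```python
def point_kinetics(rho, beta=0.0065, Lambda=1e-4, lam=0.08,
                   dt=1e-4, t_end=1.0):
    """Integrate n(t), C(t) for a constant reactivity step rho.

    beta: delayed-neutron fraction; Lambda: prompt generation time (s);
    lam: precursor decay constant (1/s). Returns n(t_end), normalized.
    """
    n = 1.0                          # normalized neutron population
    C = beta * n / (Lambda * lam)    # equilibrium precursor concentration
    t = 0.0
    while t < t_end:
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n += dn * dt
        C += dC * dt
        t += dt
    return n

# A small positive step (rho << beta) gives a prompt jump followed by a
# slow rise on the stable reactor period.
n_final = point_kinetics(rho=0.001)
```

In a full plant model this kinetics block would be coupled to the fuel heat-transfer and core thermo-hydraulic models through reactivity feedback, which is the coupling the validation against RELAP-5 exercises.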

  20. The Coding Question.

    Science.gov (United States)

    Gallistel, C R

    2017-07-01

    Recent electrophysiological results imply that the duration of the stimulus onset asynchrony in eyeblink conditioning is encoded by a mechanism intrinsic to the cerebellar Purkinje cell. This raises the general question: how is quantitative information (durations, distances, rates, probabilities, amounts, etc.) transmitted by spike trains and encoded into engrams? The usual assumption is that information is transmitted by firing rates. However, rate codes are energetically inefficient and computationally awkward. A combinatorial code is more plausible. If the engram consists of altered synaptic conductances (the usual assumption), then we must ask how numbers may be written to synapses. It is much easier to formulate a coding hypothesis if the engram is realized by a cell-intrinsic molecular mechanism. Copyright © 2017 Elsevier Ltd. All rights reserved.
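The efficiency contrast between rate and combinatorial codes can be made concrete with a toy counting argument (ours, not from the paper). Suppose each of N units can be read out at k distinguishable activity levels: a pooled rate code distinguishes only the summed level, while a combinatorial code treats each unit's level as significant.

```python
def rate_code_capacity(N, k):
    """Pooled rate code: only the sum of levels matters, so the number of
    distinguishable messages grows linearly: N*(k-1) + 1."""
    return N * (k - 1) + 1

def combinatorial_capacity(N, k):
    """Combinatorial code: each unit's level is significant, so capacity
    grows exponentially: k**N distinguishable messages."""
    return k ** N

N, k = 10, 4
linear = rate_code_capacity(N, k)          # 31 messages
exponential = combinatorial_capacity(N, k) # 1048576 messages
```

Even for ten units with four levels each, the combinatorial scheme carries about five orders of magnitude more distinguishable messages, which is the sense in which the abstract calls rate codes inefficient.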