WorldWideScience

Sample records for hybrid particle code

  1. Ordered particles versus ordered pointers in the hybrid ordered plasma simulation (HOPS) code

    International Nuclear Information System (INIS)

    Anderson, D.V.; Shumaker, D.E.

    1993-01-01

    From a computational standpoint, particle simulation calculations for plasmas have not adapted well to the transitions from scalar to vector processing nor from serial to parallel environments. They have suffered from inordinate and excessive accessing of computer memory and have been hobbled by relatively inefficient gather-scatter constructs resulting from the use of indirect indexing. Lastly, the many-to-one mapping characteristic of the deposition phase has made it difficult to perform this step in parallel. The authors' code sorts and reorders the particles in spatial order. This allows them to greatly reduce the memory references, to run in directly indexed vector mode, and to employ domain decomposition to achieve parallelization. The field model solves pre-Maxwell equations by iteratively implicit methods. The OSOP (Ordered Storage Ordered Processing) version of HOPS keeps the particle tables ordered by rebuilding them after each particle-pushing phase. Alternatively, the RSOP (Random Storage Ordered Processing) version keeps a table of pointers ordered by rebuilding it. Although OSOP is somewhat faster than RSOP in tests on vector-parallel machines, it is not clear that this advantage will carry over to massively parallel computers.
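
    The contrast between the OSOP and RSOP strategies can be made concrete with a small sketch. The following NumPy fragment (an illustration written for this summary, not code from HOPS, with invented array names and sizes) shows reordering the particle tables themselves versus keeping only an ordered pointer table over randomly stored particles.

```python
import numpy as np

# Toy particle data on a 1-D grid of `ncell` cells (illustrative sizes).
rng = np.random.default_rng(0)
ncell, npart = 64, 100_000
x = rng.uniform(0.0, ncell, npart)        # particle positions
v = rng.normal(size=npart)                # particle velocities
cell = x.astype(np.int64)                 # cell index of each particle

order = np.argsort(cell, kind="stable")   # spatial ordering of the particles

# OSOP-like: physically rebuild the particle tables in cell order after the
# push, so the deposition sweep walks memory contiguously, cell by cell.
x_o, v_o, cell_o = x[order], v[order], cell[order]
counts_osop = np.bincount(cell_o, minlength=ncell)   # deposition of unit charges

# RSOP-like: leave the particle storage random and keep only an ordered
# table of pointers (indices); every access then goes through indirection.
pointers = order
counts_rsop = np.bincount(cell[pointers], minlength=ncell)
```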

  2. Simulation of Alfvén eigenmode bursts using a hybrid code for nonlinear magnetohydrodynamics and energetic particles

    Science.gov (United States)

    Todo, Y.; Berk, H. L.; Breizman, B. N.

    2012-03-01

    A hybrid simulation code for nonlinear magnetohydrodynamics (MHD) and energetic-particle dynamics has been extended to simulate recurrent bursts of Alfvén eigenmodes by implementing the energetic-particle source, collisions and losses. The Alfvén eigenmode bursts with synchronization of multiple modes and beam ion losses at each burst are successfully simulated with nonlinear MHD effects for physics conditions similar to a reduced simulation for a TFTR experiment (Wong et al 1991 Phys. Rev. Lett. 66 1874, Todo et al 2003 Phys. Plasmas 10 2888). It is demonstrated with a comparison between nonlinear MHD and linear MHD simulation results that the nonlinear MHD effects significantly reduce both the saturation amplitude of the Alfvén eigenmodes and the beam ion losses. Two types of time evolution are found depending on the MHD dissipation coefficients, namely viscosity, resistivity and diffusivity. The Alfvén eigenmode bursts take place for higher dissipation coefficients, with a roughly 10% drop in stored beam energy and a maximum amplitude of the dominant magnetic fluctuation harmonic δB_{m/n}/B ~ 5 × 10^{-3} at the mode peak location inside the plasma. A quadratic dependence of the beam ion loss rate on the magnetic fluctuation amplitude is found for the bursting evolution in the nonlinear MHD simulation. For lower dissipation coefficients, the amplitude of the Alfvén eigenmodes stays at steady levels δB_{m/n}/B ~ 2 × 10^{-3} and the beam ion losses take place continuously. The beam ion pressure profiles are similar among the different dissipation coefficients, and the stored beam energy is higher for higher dissipation coefficients.

  3. Towards the optimization of a gyrokinetic Particle-In-Cell (PIC) code on large-scale hybrid architectures

    International Nuclear Information System (INIS)

    Ohana, N; Lanti, E; Tran, T M; Brunner, S; Hariri, F; Villard, L; Jocksch, A; Gheller, C

    2016-01-01

    With the aim of enabling state-of-the-art gyrokinetic PIC codes to benefit from the performance of recent multithreaded devices, we developed an application from a platform called the “PIC-engine” [1, 2, 3] embedding simplified basic features of the PIC method. The application solves the gyrokinetic equations in a sheared plasma slab using B-spline finite elements up to fourth order to represent the self-consistent electrostatic field. Preliminary studies of the so-called Particle-In-Fourier (PIF) approach, which uses Fourier modes as basis functions in the periodic dimensions of the system instead of the real-space grid, show that this method can be faster than PIC for simulations with a small number of Fourier modes. Similarly to the PIC-engine, multiple levels of parallelism have been implemented using MPI+OpenMP [2] and MPI+OpenACC [1], the latter exploiting the computational power of GPUs without requiring complete code rewriting. It is shown that sorting particles [3] can lead to performance improvement by increasing data locality and vectorizing grid memory access. Weak scalability tests have been successfully run on the GPU-equipped Cray XC30 Piz Daint (at CSCS) up to 4,096 nodes. The reduced time-to-solution will enable more realistic and thus more computationally intensive simulations of turbulent transport in magnetic fusion devices. (paper)
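
    A minimal sketch of the Particle-In-Fourier idea mentioned above, assuming a 1-D periodic domain and Gaussian-like units: charge is deposited directly onto a small set of Fourier modes instead of a real-space grid. This is an illustration of the general approach, not the PIC-engine implementation.

```python
import numpy as np

L = 2 * np.pi                          # periodic domain length (assumed)
nmodes = 8                             # number of retained Fourier modes
rng = np.random.default_rng(1)
xp = rng.uniform(0.0, L, 50_000)       # particle positions
qp = np.full_like(xp, 1.0 / xp.size)   # particle weights

# Particle-In-Fourier deposition: rho_k = sum_p q_p * exp(-i k x_p),
# evaluated only for the retained modes k_n = 2*pi*n/L, n = 0..nmodes-1.
k = 2 * np.pi * np.arange(nmodes) / L
rho_k = (qp[None, :] * np.exp(-1j * np.outer(k, xp))).sum(axis=1)

# Electrostatic field from Poisson's equation, mode by mode (assumed units):
# i k E_k = rho_k  =>  E_k = -i rho_k / k for k != 0.
E_k = np.zeros_like(rho_k)
E_k[1:] = -1j * rho_k[1:] / k[1:]
```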

  4. Hybrid particles and associated methods

    Science.gov (United States)

    Fox, Robert V; Rodriguez, Rene; Pak, Joshua J; Sun, Chivin

    2015-02-10

    Hybrid particles are disclosed that comprise a coating surrounding a chalcopyrite material, the coating comprising a metal, a semiconductive material, or a polymer; a core comprising a chalcopyrite material and a shell comprising a functionalized chalcopyrite material, the shell enveloping the core; or a reaction product of a chalcopyrite material and at least one of a reagent, heat, and radiation. Methods of forming the hybrid particles are also disclosed.

  5. The Accurate Particle Tracer Code

    OpenAIRE

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...

  6. The accurate particle tracer code

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of the Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and, at the same time, improve the confinement of the energetic runaway beam.
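
    APT is built around geometric, structure-preserving integrators. As a hedged, generic example of that family (not code taken from APT), the sketch below implements the standard volume-preserving Boris push for a charged particle in prescribed E and B fields; the field values, time step, and units are assumed for illustration.

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """One Boris step: half electric kick, magnetic rotation, half kick, drift."""
    v_minus = v + 0.5 * q_over_m * dt * E
    t = 0.5 * q_over_m * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_over_m * dt * E
    x_new = x + dt * v_new
    return x_new, v_new

# Example: gyration in a uniform magnetic field along z (assumed values).
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q_over_m=1.0, dt=0.05)
```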

  7. 1-D hybrid code for FRM dynamics

    International Nuclear Information System (INIS)

    Stark, R.A.; Miley, G.H.

    1985-01-01

    A 1-D radial hybrid code has been written to study the start-up of the FRM via neutral-beam injection. This code, named FROST (Field Reversed One-dimensional STart-up), models the plasma as azimuthally symmetric with no axial dependence. A multi-group method in energy and canonical angular momentum describes the large-orbit ions from the beam. This method is designed to be more efficient than those employing particle tracking, since the characteristic timescale of the simulation is the ion slowing-down time, rather than the much shorter cyclotron period. A time-differentiated Grad-Shafranov equation couples the ion current to massless fluid equations describing electrons and low-energy ions. Flux coordinates are used in this fluid model, in preference to an Eulerian framework, so that coupling of plasma at the two different radii of a closed flux surface may be treated with ease. Since a fluid treatment for electrons is invalid near a field null, a separate model for the electron current has been included for this region, a unique feature. Results of a simulation of injection into a 2XIIB-like plasma are discussed. Electron currents are found to retard, but not prevent, reversal of the magnetic field at the plasma center.

  8. Analysis of Non-binary Hybrid LDPC Codes

    OpenAIRE

    Sassatelli, Lucile; Declercq, David

    2008-01-01

    In this paper, we analyse asymptotically a new class of LDPC codes called Non-binary Hybrid LDPC codes, which has been recently introduced. We use density evolution techniques to derive a stability condition for hybrid LDPC codes, and prove their threshold behavior. We study this stability condition to conclude on asymptotic advantages of hybrid LDPC codes compared to their non-hybrid counterparts.
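
    The density-evolution machinery referred to above tracks full message densities for hybrid, non-binary ensembles. As a minimal illustration of the underlying recursion, the sketch below (an assumption-laden simplification, not the analysis of the paper) runs scalar density evolution for a binary LDPC ensemble on the binary erasure channel, where the densities reduce to a single erasure probability.

```python
def density_evolution_bec(eps, lam, rho, iters=200):
    """Scalar density evolution x_{l+1} = eps * lam(1 - rho(1 - x_l)) on the BEC.

    lam and rho are edge-perspective degree distributions given as coefficient
    lists: lam[i] is the fraction of edges attached to degree-(i+1) nodes.
    """
    def poly(coeffs, x):
        return sum(c * x**i for i, c in enumerate(coeffs))

    x = eps
    for _ in range(iters):
        x = eps * poly(lam, 1.0 - poly(rho, 1.0 - x))
    return x  # ~0 means successful decoding at erasure probability eps

# Regular (3,6) ensemble: lambda(x) = x^2, rho(x) = x^5 (edge perspective).
lam = [0.0, 0.0, 1.0]
rho = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0]
print(density_evolution_bec(0.40, lam, rho))  # below threshold (~0.429): tends to 0
print(density_evolution_bec(0.45, lam, rho))  # above threshold: stalls at a positive value
```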

  9. PENTACLE: Parallelized particle-particle particle-tree code for planet formation

    Science.gov (United States)

    Iwasawa, Masaki; Oshino, Shoichi; Fujii, Michiko S.; Hori, Yasunori

    2017-10-01

    We have newly developed a parallelized particle-particle particle-tree code for planet formation, PENTACLE, which is a parallelized hybrid N-body integrator executed on a CPU-based (super)computer. PENTACLE uses a fourth-order Hermite algorithm to calculate gravitational interactions between particles within a cut-off radius and a Barnes-Hut tree method for gravity from particles beyond. It also implements an open-source library designed for full automatic parallelization of particle simulations, FDPS (Framework for Developing Particle Simulator), to parallelize a Barnes-Hut tree algorithm for a memory-distributed supercomputer. These allow us to handle 1-10 million particles in a high-resolution N-body simulation on CPU clusters for collisional dynamics, including physical collisions in a planetesimal disc. In this paper, we show the performance and the accuracy of PENTACLE in terms of the cut-off radius R̃_cut and the time-step Δt. It turns out that the accuracy of a hybrid N-body simulation is controlled through Δt/R̃_cut, and Δt/R̃_cut ~ 0.1 is necessary to accurately simulate the accretion process of a planet for ≥10^6 yr. For all those interested in large-scale particle simulations, PENTACLE, customized for planet formation, will be freely available from https://github.com/PENTACLE-Team/PENTACLE under the MIT licence.
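
    The particle-particle part of the P3T split can be sketched in a few lines: a direct gravitational sum restricted to neighbours inside the cut-off radius, with the far field left to a Barnes-Hut tree that is omitted here. This is a hedged illustration with assumed units, softening, and cut-off, not the Hermite integrator used in PENTACLE.

```python
import numpy as np

def near_field_accel(pos, mass, r_cut, eps=1e-3, G=1.0):
    """Direct-sum acceleration from neighbours closer than r_cut (the PP part)."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        dr = pos - pos[i]                      # vectors from particle i to all others
        r2 = (dr**2).sum(axis=1) + eps**2      # softened squared distances
        near = r2 < r_cut**2
        near[i] = False                        # exclude self-interaction
        acc[i] = G * (mass[near, None] * dr[near] / r2[near, None]**1.5).sum(axis=0)
    return acc

rng = np.random.default_rng(2)
pos = rng.uniform(-1.0, 1.0, size=(1000, 3))   # illustrative planetesimal positions
mass = np.full(1000, 1.0 / 1000)
a_near = near_field_accel(pos, mass, r_cut=0.1)
```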

  10. PHANTOM: Smoothed particle hydrodynamics and magnetohydrodynamics code

    Science.gov (United States)

    Price, Daniel J.; Wurster, James; Nixon, Chris; Tricco, Terrence S.; Toupin, Stéven; Pettitt, Alex; Chan, Conrad; Laibe, Guillaume; Glover, Simon; Dobbs, Clare; Nealon, Rebecca; Liptai, David; Worpel, Hauke; Bonnerot, Clément; Dipierro, Giovanni; Ragusa, Enrico; Federrath, Christoph; Iaconi, Roberto; Reichardt, Thomas; Forgan, Duncan; Hutchison, Mark; Constantino, Thomas; Ayliffe, Ben; Mentiplay, Daniel; Hirsh, Kieran; Lodato, Giuseppe

    2017-09-01

    Phantom is a smoothed particle hydrodynamics and magnetohydrodynamics code focused on stellar, galactic, planetary, and high energy astrophysics. It is modular, and handles sink particles, self-gravity, two fluid and one fluid dust, ISM chemistry and cooling, physical viscosity, non-ideal MHD, and more. Its modular structure makes it easy to add new physics to the code.

  11. An implicit Smooth Particle Hydrodynamic code

    Energy Technology Data Exchange (ETDEWEB)

    Knapp, Charles E. [Univ. of New Mexico, Albuquerque, NM (United States)

    2000-05-01

    An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless, and use particles to model the fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix. It uses sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. Then, the results of a number of test cases are discussed, which include a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. In the single-jet-of-gas case it has been demonstrated that the implicit code can do a problem in much shorter time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.
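
    The Newton-Krylov machinery described above is available off the shelf; as a hedged stand-in for the implicit SPH update (not the SPHINX implementation), the sketch below uses SciPy's newton_krylov to take one backward-Euler step of a toy stiff diffusion problem, where the residual plays the role of the implicit system.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy 1-D diffusion u_t = D u_xx on a periodic grid (assumed problem and sizes).
n, D, dt = 200, 1.0, 1e-3
dx = 1.0 / n
u_old = np.exp(-((np.linspace(0.0, 1.0, n) - 0.5) ** 2) / 0.01)

def laplacian(u):
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

def backward_euler_residual(u_new):
    # Implicit step: u_new - u_old - dt * D * L(u_new) = 0, solved with a
    # Jacobian-free Newton-Krylov iteration (no explicit Jacobian assembly).
    return u_new - u_old - dt * D * laplacian(u_new)

u_new = newton_krylov(backward_euler_residual, u_old, f_tol=1e-10)
```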

  12. Computer codes in particle transport physics

    International Nuclear Information System (INIS)

    Pesic, M.

    2004-01-01

    Simulation of the transport and interaction of various particles in complex media over a wide energy range (from 1 MeV up to 1 TeV) is a very complicated problem that requires a valid model of the real process in nature and an appropriate solving tool - a computer code and data library. A brief overview of computer codes based on Monte Carlo techniques for simulation of the transport and interaction of hadrons and ions over a wide energy range in three-dimensional (3D) geometry is given. Firstly, short attention is paid to underline the approach to the solution of the problem - a process in nature - by selection of an appropriate 3D model and the corresponding tools - computer codes and cross-section data libraries. The process of data collection and evaluation from experimental measurements and theoretical approaches to establishing reliable libraries of evaluated cross-section data is a long, difficult and not straightforward activity. For this reason, world reference data centers and specialized ones are acknowledged, together with the currently available, state-of-the-art evaluated nuclear data libraries, such as ENDF/B-VI, JEF, JENDL, CENDL, BROND, etc. Codes for experimental and theoretical data evaluation (e.g., SAMMY and GNASH) together with codes for data processing (e.g., NJOY, PREPRO and GRUCON) are briefly described. Examples of data evaluation and data processing to generate computer-usable data libraries are shown. Among the numerous and various computer codes developed for particle transport physics, only the most general ones are described: MCNPX, FLUKA and SHIELD. A short overview of the basic applications of these codes, the physical models implemented with their limitations, and the energy ranges of particles and types of interactions covered, is given. General information about the codes also covers programming language, operating system, calculation speed and code availability. An example of the increase in computation speed obtained by running the MCNPX code on an MPI cluster, compared to the sequential option of the code, is also given.

  13. FLUKA: A Multi-Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Ferrari, A.; Sala, P.R.; /CERN /INFN, Milan; Fasso, A.; /SLAC; Ranft, J.; /Siegen U.

    2005-12-14

    This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner's guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.

  14. Papa, a Particle Tracing Code in Pascal

    NARCIS (Netherlands)

    Haselhoff, E.H.; Haselhoff, Eltjo H.; Ernst, G.J.

    1990-01-01

    During the design of a 10 μm high-gain FEL oscillator (TEUFEL Project) we developed a new particle-tracing code to perform simulations of thermionic- and photo-cathode electron injectors/accelerators. The program allows predictions of current, energy and beam emittance in a user-specified linac.

  15. Particle tracing code for multispecies gas

    International Nuclear Information System (INIS)

    Eaton, R.R.; Fox, R.L.; Vandevender, W.H.

    1979-06-01

    Details are presented for the development of a computer code designed to calculate the flow of a multispecies gas mixture using particle tracing techniques. The current technique eliminates the need for a full simulation by utilizing local time averaged velocity distribution functions to obtain the dynamic properties for probable collision partners. The development of this concept reduces statistical scatter experienced in conventional Monte Carlo simulations. The technique is applicable to flow problems involving gas mixtures with disparate masses and trace constituents in the Knudsen number, Kn, range from 1.0 to less than 0.01. The resulting code has previously been used to analyze several aerodynamic isotope enrichment devices

  16. IFR code for secondary particle dynamics

    International Nuclear Information System (INIS)

    Teague, M.R.; Yu, S.S.

    1985-01-01

    A numerical simulation has been constructed to obtain a detailed, quantitative estimate of the electromagnetic fields and currents existing in the Advanced Test Accelerator under conditions of laser guiding. The code treats the secondary electrons by particle simulation and the beam dynamics by a time-dependent envelope model. The simulation gives a fully relativistic description of secondary electrons moving in self-consistent electromagnetic fields. The calculations are made using coordinates t, x, y, z for the electrons and t, ct-z, r for the axisymmetric electromagnetic fields and currents. Code results, showing in particular current enhancement effects, will be given

  17. BOA, Beam Optics Analyzer A Particle-In-Cell Code

    International Nuclear Information System (INIS)

    Bui, Thuc

    2007-01-01

    The program was tasked with implementing time-dependent analysis of charged particles in an existing finite element code with adaptive meshing, called Beam Optics Analyzer (BOA). BOA was initially funded by a DOE Phase II program to use the finite element method with adaptive meshing to track particles in unstructured meshes. It uses modern programming techniques and state-of-the-art data structures, so that new methods, features and capabilities are easily added and maintained. This Phase II program was funded to implement plasma simulations in BOA and extend its capabilities to model thermal electrons, secondary emissions, self magnetic field and implement more comprehensive post-processing and a feature-rich GUI. The program was successful in implementing thermal electrons, secondary emissions, and self magnetic field calculations. The BOA GUI was also upgraded significantly, and CCR is receiving interest from the microwave tube and semiconductor equipment industry for the code. Implementation of PIC analysis was partially successful. Computational resource requirements for modeling more than 2000 particles begin to exceed the capability of most readily available computers. Modern plasma analysis typically requires modeling of approximately 2 million particles or more. The problem is that tracking many particles in an unstructured mesh that is adapting becomes inefficient. In particular, memory requirements become excessive. This probably makes particle tracking in unstructured meshes currently infeasible with commonly available computer resources. Consequently, Calabazas Creek Research, Inc. is exploring hybrid codes where the electromagnetic fields are solved on the unstructured, adaptive mesh while particles are tracked on a fixed mesh. Efficient interpolation routines should be able to transfer information between nodes of the two meshes. If successfully developed, this could provide high accuracy and reasonable computational efficiency.
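
    The hybrid strategy outlined at the end of the abstract hinges on transferring field data between the unstructured field mesh and the fixed particle mesh. The snippet below is a hedged illustration of such a transfer using SciPy's scattered-data interpolation; the node layout and field are invented for the example, and this is not the BOA code.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)

# Field values stored at the (scattered) nodes of an unstructured mesh.
nodes = rng.uniform(0.0, 1.0, size=(5000, 2))
phi_nodes = np.sin(2 * np.pi * nodes[:, 0]) * np.cos(2 * np.pi * nodes[:, 1])

# Fixed Cartesian mesh on which the particles would be tracked.
gx, gy = np.meshgrid(np.linspace(0.0, 1.0, 64), np.linspace(0.0, 1.0, 64))

# Transfer: interpolate the unstructured nodal values onto the fixed mesh.
phi_grid = griddata(nodes, phi_nodes, (gx, gy), method="linear", fill_value=0.0)
```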

  18. Parallelization Issues and Particle-In-Cell Codes.

    Science.gov (United States)

    Elster, Anne Cathrine

    1994-01-01

    "Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field, show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies, becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid-points within the same cache -line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache. A consideration of the input data's effect on

  19. Scalable Simulation of Electromagnetic Hybrid Codes

    International Nuclear Information System (INIS)

    Perumalla, Kalyan S.; Fujimoto, Richard; Karimabadi, Dr. Homa

    2006-01-01

    New discrete-event formulations of physics simulation models are emerging that can outperform models based on traditional time-stepped techniques. Detailed simulation of the Earth's magnetosphere, for example, requires execution of sub-models that are at widely differing timescales. In contrast to time-stepped simulation which requires tightly coupled updates to entire system state at regular time intervals, the new discrete event simulation (DES) approaches help evolve the states of sub-models on relatively independent timescales. However, parallel execution of DES-based models raises challenges with respect to their scalability and performance. One of the key challenges is to improve the computation granularity to offset synchronization and communication overheads within and across processors. Our previous work was limited in scalability and runtime performance due to the parallelization challenges. Here we report on optimizations we performed on DES-based plasma simulation models to improve parallel performance. The net result is the capability to simulate hybrid particle-in-cell (PIC) models with over 2 billion ion particles using 512 processors on supercomputing platforms

  20. Recent progress of hybrid simulation for energetic particles and MHD

    International Nuclear Information System (INIS)

    Todo, Y.

    2013-01-01

    Several hybrid simulation models have been constructed to study the evolution of Alfven eigenmodes destabilized by energetic particles. Recent hybrid simulation results of energetic particle driven instabilities are presented in this paper. (J.P.N.)

  1. Absorption of lower hybrid waves by alpha particles in ITER

    International Nuclear Information System (INIS)

    Imbeaux, F.; Peysson, Y.; Eriksson, L.G.

    2003-01-01

    Absorption of lower hybrid (LH) waves by alpha particles may reduce significantly the current drive efficiency of the waves in a reactor or burning plasma experiment. This absorption is quantified for ITER using the ray-tracing+2D relativistic Fokker-Planck code Delphine. The absorption is calculated as a function of the superthermal alpha particle density, which is constant in these simulations, for two candidate frequencies for the LH system of ITER. Negligible absorption by alpha particles at 3.7 GHz requires n(alpha,supra) = 7.5 × 10^16 m^-3, while no significant impact on the driven current is found at 5 GHz, even if n(alpha,supra) = 1.5 × 10^18 m^-3. (authors)

  2. Hybrid 3D Fractal Coding with Neighbourhood Vector Quantisation

    Directory of Open Access Journals (Sweden)

    Zhen Yao

    2004-12-01

    A hybrid 3D compression scheme which combines fractal coding with neighbourhood vector quantisation for video and volume data is reported. While fractal coding exploits the redundancy present in different scales, neighbourhood vector quantisation, as a generalisation of translational motion compensation, is a useful method for removing both intra- and inter-frame coherences. The hybrid coder outperforms most of the fractal coders published to date while the algorithm complexity is kept relatively low.

  3. Non-binary Hybrid LDPC Codes: Structure, Decoding and Optimization

    OpenAIRE

    Sassatelli, Lucile; Declercq, David

    2007-01-01

    In this paper, we propose to study and optimize a very general class of LDPC codes whose variable nodes belong to finite sets with different orders. We named this class of codes Hybrid LDPC codes. Although efficient optimization techniques exist for binary LDPC codes and more recently for non-binary LDPC codes, they both exhibit drawbacks due to different reasons. Our goal is to capitalize on the advantages of both families by building codes with binary (or small finite set order) and non-bin...

  4. A new hybrid code (CHIEF) implementing the inertial electron fluid equation without approximation

    Science.gov (United States)

    Muñoz, P. A.; Jain, N.; Kilian, P.; Büchner, J.

    2018-03-01

    We present a new hybrid algorithm implemented in the code CHIEF (Code Hybrid with Inertial Electron Fluid) for simulations of electron-ion plasmas. The algorithm treats the ions kinetically, modeled by the Particle-in-Cell (PiC) method, and electrons as an inertial fluid, modeled by electron fluid equations without any of the approximations used in most of the other hybrid codes with an inertial electron fluid. This kind of code is appropriate to model a large variety of quasineutral plasma phenomena where the electron inertia and/or ion kinetic effects are relevant. We present here the governing equations of the model, how these are discretized and implemented numerically, as well as six test problems to validate our numerical approach. Our chosen test problems, where the electron inertia and ion kinetic effects play the essential role, are: 0) Excitation of parallel eigenmodes to check numerical convergence and stability, 1) parallel (to a background magnetic field) propagating electromagnetic waves, 2) perpendicular propagating electrostatic waves (ion Bernstein modes), 3) ion beam right-hand instability (resonant and non-resonant), 4) ion Landau damping, 5) ion firehose instability, and 6) 2D oblique ion firehose instability. Our results reproduce successfully the predictions of linear and non-linear theory for all these problems, validating our code. All properties of this hybrid code make it ideal to study multi-scale phenomena between electron and ion scales such as collisionless shocks, magnetic reconnection and kinetic plasma turbulence in the dissipation range above the electron scales.

  5. Use of a hybrid code for global-scale plasma simulation

    International Nuclear Information System (INIS)

    Swift, D.W.

    1996-01-01

    This paper presents a demonstration of the use of a hybrid code to model the Earth's magnetosphere on a global scale. The typical hybrid code calculates the interaction of fully kinetic ions and a massless electron fluid with the magnetic field. This code also includes a fluid ion component to approximate the cold ionospheric plasma that spatially overlaps with the discrete particle component. Other innovative features of the code include a numerically generated curvilinear coordinate system and subcycling of the magnetic field update to the particle push. These innovations allow the code to accommodate disparate time and distance scales. The demonstration is a simulation of the noon meridian plane of the magnetosphere. The code exhibits the formation of fast and slow-mode shocks and tearing reconnection at the magnetopause. New results include particle acceleration in the cusp and nearly field aligned currents linking the cusp and polar ionosphere. The paper also describes a density depletion instability and measures to avoid it. 27 refs., 4 figs

  6. Computer-assisted Particle-in-Cell code development

    International Nuclear Information System (INIS)

    Kawata, S.; Boonmee, C.; Teramoto, T.; Drska, L.; Limpouch, J.; Liska, R.; Sinor, M.

    1997-12-01

    This report presents a new approach for an electromagnetic Particle-in-Cell (PIC) code development by a computer: in general PIC codes have a common structure, and consist of a particle pusher, a field solver, charge and current density collections, and a field interpolation. Because of the common feature, the main part of the PIC code can be mechanically developed on a computer. In this report we use the packages FIDE and GENTRAN of the REDUCE computer algebra system for discretizations of field equations and a particle equation, and for an automatic generation of Fortran codes. The approach proposed is successfully applied to the development of 1.5-dimensional PIC code. By using the generated PIC code the Weibel instability in a plasma is simulated. The obtained growth rate agrees well with the theoretical value. (author)
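
    The REDUCE/GENTRAN workflow described above (discretize symbolically, then emit Fortran) can be mimicked with a modern computer algebra system. The sketch below is a hedged analogue using SymPy to generate Fortran statements for a simple leapfrog-style particle push; the symbols and update form are assumptions for illustration, not the 1.5-dimensional PIC code of the paper.

```python
import sympy as sp

# Symbols for a one-dimensional electrostatic particle push (assumed form).
x, v, E, q, m, dt = sp.symbols('x v E q m dt')

# Discretized equations of motion (leapfrog-style update).
v_new = v + (q * E / m) * dt
x_new = x + v_new * dt

# Emit Fortran source for the update, playing the role GENTRAN plays for REDUCE.
print(sp.fcode(v_new, assign_to='v_new', source_format='free'))
print(sp.fcode(x_new, assign_to='x_new', source_format='free'))
```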

  7. photon-plasma: A modern high-order particle-in-cell code

    International Nuclear Information System (INIS)

    Haugbølle, Troels; Frederiksen, Jacob Trier; Nordlund, Åke

    2013-01-01

    We present the photon-plasma code, a modern high-order charge-conserving particle-in-cell code for simulating relativistic plasmas. The code uses a high-order implicit field solver and a novel high-order charge-conserving interpolation scheme for particle-to-cell interpolation and charge deposition. It includes powerful diagnostics tools with on-the-fly particle tracking, synthetic spectra integration, 2D volume slicing, and a new method to correctly account for radiative cooling in the simulations. A robust technique for imposing (time-dependent) particle and field fluxes on the boundaries is also presented. Using a hybrid OpenMP and MPI approach, the code scales efficiently from 8 to more than 250,000 cores with almost linear weak scaling on a range of architectures. The code is tested with the classical benchmarks of particle heating, the cold beam instability, and the two-stream instability. We also present particle-in-cell simulations of the Kelvin-Helmholtz instability, and new results on radiative collisionless shocks.

  8. Global Hybrid Simulations of Energetic Particle-driven Modes in Toroidal Plasmas

    International Nuclear Information System (INIS)

    Fu, G.Y.; Breslau, J.; Fredrickson, E.; Park, W.; Strauss, H.R.

    2004-01-01

    Global hybrid simulations of energetic particle-driven MHD modes have been carried out for tokamaks and spherical tokamaks using the hybrid code M3D. The numerical results for the National Spherical Torus Experiment (NSTX) show that Toroidal Alfven Eigenmodes are excited by beam ions with their frequencies consistent with the experimental observations. Nonlinear simulations indicate that the n=2 mode frequency chirps down as the mode moves out radially. For ITER, it is shown that the alpha-particle effects are strongly stabilizing for the internal kink mode when the central safety factor q(0) is sufficiently close to unity. However, the elongation of the ITER plasma shape reduces the stabilization significantly.

  9. Hybrid microscopic depletion model in nodal code DYN3D

    International Nuclear Information System (INIS)

    Bilodid, Y.; Kotlyar, D.; Shwageraus, E.; Fridman, E.; Kliem, S.

    2016-01-01

    Highlights: • A new hybrid method of accounting for spectral history effects is proposed. • Local concentrations of over 1000 nuclides are calculated using micro-depletion. • The new method is implemented in the nodal code DYN3D and verified. - Abstract: The paper presents a general hybrid method that combines the micro-depletion technique with correction of micro- and macro-diffusion parameters to account for spectral history effects. The fuel in a core is subjected to time- and space-dependent operational conditions (e.g. coolant density), which cannot be predicted in advance. However, lattice codes assume some average conditions to generate cross sections (XS) for nodal diffusion codes such as DYN3D. Deviation of the local operational history from the average conditions leads to accumulation of errors in the XS, which is referred to as spectral history effects. Various methods to account for spectral history effects, such as the spectral index, burnup-averaged operational parameters and micro-depletion, were implemented in some nodal codes. Recently, an alternative method, which characterizes the fuel depletion state by burnup and 239Pu concentration (denoted as Pu-correction), was proposed, implemented in the nodal code DYN3D and verified for a wide range of history effects. The method is computationally efficient; however, it has applicability limitations. The current study seeks to improve the accuracy and applicability range of the Pu-correction method. The proposed hybrid method combines the micro-depletion method with an XS characterization technique similar to the Pu-correction method. The method was implemented in DYN3D and verified on multiple test cases. The results obtained with DYN3D were compared to those obtained with the Monte Carlo code Serpent, which was also used to generate the XS. The observed differences are within the statistical uncertainties.
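
    Micro-depletion means solving the Bateman equations for nuclide concentrations in each node. As a hedged, generic illustration (not the DYN3D implementation, and with placeholder rates and concentrations), the sketch below advances a three-nuclide transmutation chain over one burnup step using a matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Toy transmutation chain A -> B -> C with placeholder rates (1/s).
lam_ab, lam_bc = 1e-5, 3e-6
A = np.array([[-lam_ab,     0.0,  0.0],
              [ lam_ab, -lam_bc,  0.0],
              [    0.0,  lam_bc,  0.0]])

n0 = np.array([1.0e24, 0.0, 0.0])   # initial concentrations (assumed units)
dt = 30 * 24 * 3600.0               # one 30-day depletion step

# Bateman solution for constant rates over the step: n(t) = expm(A*dt) @ n(0).
n_end = expm(A * dt) @ n0
```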

  10. 1-D hybrid code for FRM start-up

    International Nuclear Information System (INIS)

    Stark, R.A.; Miley, G.H.

    1982-01-01

    A 1-D hybrid code has been developed to study the start-up of the FRM via neutral-beam injection. The code uses a multi-group numerical model originally developed by J. Willenberg to describe fusion product dynamics in a solenoidal plasma. Earlier we described such a model for use in determining self-consistent ion currents and magnetic fields in FRM start-up. However, consideration of electron dynamics during start-up indicates that the electron current will oppose the injected ion current and may even foil the attempt to achieve reversal. For this reason, we have combined the multi-group ion model with a fluid treatment of electron dynamics to form the hybrid code FROST (Field Reversed One-dimensional STart-up). The details of this merger, along with sample results of the operation of FROST, are given.

  11. Optimization of Particle-in-Cell Codes on RISC Processors

    Science.gov (United States)

    Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.

    1996-01-01

    General strategies are developed to optimize particle-in-cell codes written in Fortran for RISC processors, which are commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.
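
    One concrete instance of the data reorganization advocated here is switching from an array-of-structures particle layout to a structure-of-arrays layout, so each field streams through the cache with unit stride. The NumPy sketch below is a hedged illustration of that idea with invented sizes, not the Fortran code analysed in the paper.

```python
import numpy as np

npart = 1_000_000
rng = np.random.default_rng(4)

# Array-of-structures: x, y, vx, vy interleaved per particle (strided access).
aos = np.zeros(npart, dtype=[('x', 'f8'), ('y', 'f8'), ('vx', 'f8'), ('vy', 'f8')])
aos['vx'] = rng.normal(size=npart)

# Structure-of-arrays: each field is one contiguous array (unit-stride access).
x = np.zeros(npart)
vx = rng.normal(size=npart)

# The same position update touches strided memory in the AoS layout ...
aos['x'] += 0.01 * aos['vx']
# ... but streams through contiguous memory in the SoA layout.
x += 0.01 * vx
```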

  12. Particle In Cell Codes on Highly Parallel Architectures

    Science.gov (United States)

    Tableman, Adam

    2014-10-01

    We describe strategies and examples of Particle-In-Cell Codes running on Nvidia GPU and Intel Phi architectures. This includes basic implementations in skeletons codes and full-scale development versions (encompassing 1D, 2D, and 3D codes) in Osiris. Both the similarities and differences between Intel's and Nvidia's hardware will be examined. Work supported by grants NSF ACI 1339893, DOE DE SC 000849, DOE DE SC 0008316, DOE DE NA 0001833, and DOE DE FC02 04ER 54780.

  13. Survey of particle codes in the Magnetic Fusion Energy Program

    International Nuclear Information System (INIS)

    1977-12-01

    In the spring of 1976, the Fusion Plasma Theory Branch of the Division of Magnetic Fusion Energy conducted a survey of all the physics computer codes being supported at that time. The purpose of that survey was to allow DMFE to prepare a description of the codes for distribution to the plasma physics community. This document is the first of several planned and covers those types of codes which treat the plasma as a group of particles

  14. PHITS-a particle and heavy ion transport code system

    International Nuclear Information System (INIS)

    Niita, Koji; Sato, Tatsuhiko; Iwase, Hiroshi; Nose, Hiroyuki; Nakashima, Hiroshi; Sihver, Lembit

    2006-01-01

    The paper presents a summary of the recent development of the multi-purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS. In particular, we discuss in detail the development of two new models, JAM and JQMD, for high energy particle interactions, incorporated in PHITS, and show comparisons between model calculations and experiments for the validations of these models. The paper presents three applications of the code including spallation neutron source, heavy ion therapy and space radiation. The results and examples shown indicate PHITS has great ability of carrying out the radiation transport analysis of almost all particles including heavy ions within a wide energy range

  15. Hybrid coded aperture and Compton imaging using an active mask

    International Nuclear Information System (INIS)

    Schultz, L.J.; Wallace, M.S.; Galassi, M.C.; Hoover, A.S.; Mocko, M.; Palmer, D.M.; Tornga, S.R.; Kippen, R.M.; Hynes, M.V.; Toolin, M.J.; Harris, B.; McElroy, J.E.; Wakeford, D.; Lanza, R.C.; Horn, B.K.P.; Wehe, D.K.

    2009-01-01

    The trimodal imager (TMI) images gamma-ray sources from a mobile platform using both coded aperture (CA) and Compton imaging (CI) modalities. In this paper we will discuss development and performance of image reconstruction algorithms for the TMI. In order to develop algorithms in parallel with detector hardware we are using a GEANT4 [J. Allison, K. Amako, J. Apostolakis, H. Araujo, P.A. Dubois, M. Asai, G. Barrand, R. Capra, S. Chauvie, R. Chytracek, G. Cirrone, G. Cooperman, G. Cosmo, G. Cuttone, G. Daquino, et al., IEEE Trans. Nucl. Sci. NS-53 (1) (2006) 270] based simulation package to produce realistic data sets for code development. The simulation code incorporates detailed detector modeling, contributions from natural background radiation, and validation of simulation results against measured data. Maximum likelihood algorithms for both imaging methods are discussed, as well as a hybrid imaging algorithm wherein CA and CI information is fused to generate a higher fidelity reconstruction.

  16. Recent advances in neutral particle transport methods and codes

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1996-01-01

    An overview of ORNL's three-dimensional neutral particle transport code, TORT, is presented. Special features of the code that make it invaluable for large applications are summarized for the prospective user. Advanced capabilities currently under development and installation in the production release of TORT are discussed; they include: multitasking on Cray platforms running the UNICOS operating system; the Adjacent-cell Preconditioning acceleration scheme; and graphics codes for displaying computed quantities such as the flux. Further developments of TORT and its companion codes to enhance its present capabilities, as well as to expand its range of applications, are discussed. Speculation on the next generation of neutral particle transport codes at ORNL, especially regarding unstructured grids and high-order spatial approximations, is also offered.

  17. Hybrid Video Coding Based on Bidimensional Matching Pursuit

    Directory of Open Access Journals (Sweden)

    Lorenzo Granai

    2004-12-01

    Hybrid video coding combines together two stages: first, motion estimation and compensation predict each frame from the neighboring frames, then the prediction error is coded, reducing the correlation in the spatial domain. In this work, we focus on the latter stage, presenting a scheme that profits from some of the features introduced by the standard H.264/AVC for motion estimation and replaces the transform in the spatial domain. The prediction error is so coded using the matching pursuit algorithm which decomposes the signal over an appositely designed bidimensional, anisotropic, redundant dictionary. Comparisons are made among the proposed technique, H.264, and a DCT-based coding scheme. Moreover, we introduce fast techniques for atom selection, which exploit the spatial localization of the atoms. An adaptive coding scheme aimed at optimizing the resource allocation is also presented, together with a rate-distortion study for the matching pursuit algorithm. Results show that the proposed scheme outperforms the standard DCT, especially at very low bit rates.
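
    The matching pursuit decomposition at the heart of this scheme is easy to sketch. The fragment below (a hedged illustration over a generic random unit-norm dictionary, not the anisotropic dictionary of the paper) greedily selects the atom most correlated with the residual and subtracts its contribution.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy MP: pick the atom most correlated with the residual, subtract it.

    `dictionary` has unit-norm atoms as columns; returns (indices, coefficients,
    final residual).
    """
    residual = signal.copy()
    indices, coeffs = [], []
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        best = int(np.argmax(np.abs(correlations)))
        c = correlations[best]
        residual -= c * dictionary[:, best]
        indices.append(best)
        coeffs.append(c)
    return indices, coeffs, residual

rng = np.random.default_rng(5)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
sig = D[:, 7] * 3.0 + D[:, 42] * 1.5      # sparse synthetic signal
idx, c, r = matching_pursuit(sig, D, n_atoms=4)
```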

  18. Development of particle and heavy ion transport code system

    International Nuclear Information System (INIS)

    Niita, Koji

    2004-01-01

    The Particle and Heavy Ion Transport code System (PHITS) is a three-dimensional, general-purpose Monte Carlo simulation code for describing the transport and reactions of particles and heavy ions in materials. It was developed on the basis of NMTC/JAM for the design and safety assessment of J-PARC. What PHITS is, its physical processes and physical models, and the development history of the code are described. As examples of application, the evaluation of neutron optics, cancer treatment with heavy-particle beams, and cosmic radiation are presented. The JAM and JQMD models are used as the physical models. Neutron motion in a sextupole magnetic field and a gravitational field, PHITS simulations of the tracks of a 12C beam and of the secondary neutrons in a small model of the cancer treatment device at HIMAC, and the neutron flux in the Space Shuttle are explained. (S.Y.)

  19. Optimization of the particle pusher in a diode simulation code

    International Nuclear Information System (INIS)

    Theimer, M.M.; Quintenz, J.P.

    1979-09-01

    The particle pusher in Sandia's particle-in-cell diode simulation code has been rewritten to reduce the required run time of a typical simulation. The resulting new version of the code has been found to run up to three times as fast as the original with comparable accuracy. The cost of this optimization was an increase in storage requirements of about 15%. The new version has also been written to run efficiently on a CRAY-1 computing system. Steps taken to effect this reduced run time are described. Various test cases are detailed.

  20. High energy particle transport code NMTC/JAM

    International Nuclear Information System (INIS)

    Niita, K.; Takada, H.; Meigo, S.; Ikeda, Y.

    2001-01-01

    We have developed a high energy particle transport code, NMTC/JAM, which is an upgraded version of NMTC/JAERI97. The available energy range of NMTC/JAM is, in principle, extended to 200 GeV for nucleons and mesons by including the high energy nuclear reaction code JAM for the intra-nuclear cascade part. We compare calculations by the NMTC/JAM code with experimental data from thin and thick targets for proton-induced reactions up to several tens of GeV. The results of the NMTC/JAM code show excellent agreement with the experimental data. From this code validation, it is concluded that NMTC/JAM is reliable for neutronics optimization studies of high-intensity spallation neutron utilization facilities. (author)

  1. PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM

    OpenAIRE

    Bahubali K. Shiragapur; Uday Wali

    2016-01-01

    In this article, the research work investigated is based on error correction coding techniques used to reduce the undesirable Peak-to-Average Power Ratio (PAPR). The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4) and a Hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques; the simulation results show that the Hybrid technique reduces PAPR significantly as compared to Conve...
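
    The quantity being reduced is straightforward to measure. The hedged sketch below simply computes the PAPR of a random OFDM symbol (QPSK subcarriers modulated by an IFFT, with illustrative parameters); this is the uncoded baseline against which coding and hybrid schemes such as those above are compared, not the proposed technique itself.

```python
import numpy as np

rng = np.random.default_rng(6)
n_subcarriers = 256

# Random QPSK symbols on each subcarrier.
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# OFDM modulation: IFFT of the frequency-domain symbols.
x = np.fft.ifft(qpsk) * np.sqrt(n_subcarriers)

# Peak-to-Average Power Ratio in dB.
papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
print(f"PAPR = {papr_db:.2f} dB")
```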

  2. FLUKA A multi-particle transport code (program version 2005)

    CERN Document Server

    Ferrari, A; Fassò, A; Ranft, Johannes

    2005-01-01

    This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner’s guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.

  3. Hybrid electrokinetics for separation, mixing, and concentration of colloidal particles

    International Nuclear Information System (INIS)

    Sin, Mandy L Y; Shimabukuro, Yusuke; Wong, Pak Kin

    2009-01-01

    The advent of nanotechnology has facilitated the preparation of colloidal particles with adjustable sizes and the control of their size-dependent properties. Physical manipulation, such as separation, mixing, and concentration, of these colloidal particles represents an essential step for fully utilizing their potential in a wide spectrum of nanotechnology applications. In this study, we investigate hybrid electrokinetics, the combination of dielectrophoresis and electrohydrodynamics, for active manipulation of colloidal particles ranging from nanometers to micrometers in size. A concentric electrode configuration, which is optimized for generating electrohydrodynamic flow, has been designed to elucidate the effectiveness of hybrid electrokinetics and define the operating regimes for different microfluidic operations. The results indicate that the relative importance of electrohydrodynamics increases with decreasing particle size as predicted by a scaling analysis and that electrohydrodynamics is pivotal for manipulating nanoscale particles. Using the concentric electrodes, we demonstrate separation, mixing, and concentration of colloidal particles by adjusting the relative strengths of different electrokinetic phenomena. The effectiveness of hybrid electrokinetics indicates its potential to serve as a generic technique for active manipulation of colloidal particles in various nanotechnology applications.

  4. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A

    DEFF Research Database (Denmark)

    Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael

    2015-01-01

    The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An...

  5. Parallelization of a Monte Carlo particle transport simulation code

    Science.gov (United States)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and improve statistics as more particle tracks can be simulated in low response time.
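
    Reproducible parallel Monte Carlo requires statistically independent random-number streams per worker, which is the role played by libraries such as SPRNG and DCMT. The hedged sketch below shows the same idea with NumPy's SeedSequence spawning and a process pool estimating pi; it is a generic illustration, not the MC4 code.

```python
import numpy as np
from multiprocessing import Pool

def estimate_pi(seed_seq, n=1_000_000):
    """Monte Carlo estimate of pi using an independent per-worker RNG stream."""
    rng = np.random.default_rng(seed_seq)
    pts = rng.uniform(-1.0, 1.0, size=(n, 2))
    return 4.0 * np.mean((pts ** 2).sum(axis=1) < 1.0)

if __name__ == "__main__":
    # Spawn statistically independent child streams from one master seed.
    children = np.random.SeedSequence(12345).spawn(8)
    with Pool(8) as pool:
        estimates = pool.map(estimate_pi, children)
    print(np.mean(estimates))
```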

  6. DART: a simulation code for charged particle beams

    International Nuclear Information System (INIS)

    White, R.C.; Barr, W.L.; Moir, R.W.

    1988-01-01

    This paper presents a recently modified version of the 2-D DART code designed to simulate the behavior of a beam of charged particles whose paths are affected by electric and magnetic fields. This code was originally used to design laboratory-scale and full-scale beam direct converters. Since then, its utility has been expanded to allow more general applications. The simulation technique includes space charge, secondary electron effects, and neutral gas ionization. Calculations of electrode placement and energy conversion efficiency are described. Basic operation procedures are given, including sample input files and output. 7 refs., 18 figs

  7. Particle and heavy ion transport code system; PHITS

    International Nuclear Information System (INIS)

    Niita, Koji

    2004-01-01

    Intermediate and high energy nuclear data are strongly required in design studies of many facilities such as accelerator-driven systems and intense pulsed spallation neutron sources, and also in medical and space technology. There are, however, few evaluated nuclear data for intermediate- and high-energy nuclear reactions. Therefore, we have to use models or systematics for the cross sections, which are essential ingredients of a high energy particle and heavy ion transport code, to estimate neutron yield, heat deposition and many other quantities of the transport phenomena in materials. We have developed a general-purpose particle and heavy ion transport Monte Carlo code system, PHITS (Particle and Heavy Ion Transport code System), based on the NMTC/JAM code through the collaboration of Tohoku University, JAERI and RIST. PHITS has three important ingredients which enable us to calculate (1) high energy nuclear reactions up to 200 GeV, (2) heavy ion collisions and their transport in material, and (3) low energy neutron transport based on evaluated nuclear data. In PHITS, the cross sections of high energy nuclear reactions are obtained by the JAM model. JAM (Jet AA Microscopic Transport Model) is a hadronic cascade model, which explicitly treats all established hadronic states including resonances, with all hadron-hadron cross sections parametrized based on the resonance model and string model by fitting the available experimental data. PHITS can describe the transport of heavy ions and their collisions by making use of the JQMD and SPAR codes. JQMD (JAERI Quantum Molecular Dynamics) is a simulation code for nucleus-nucleus collisions based on molecular dynamics. The SPAR code is widely used to calculate the stopping powers and ranges for charged particles and heavy ions. PHITS has included some parts of the MCNP4C code, by which the transport of low energy neutrons, photons and electrons based on evaluated nuclear data can be described. Furthermore, the high energy nuclear

  8. Comparison of TITAN hybrid deterministic transport code and MCNP5 for simulation of SPECT

    International Nuclear Information System (INIS)

    Royston, K.; Haghighat, A.; Yi, C.

    2010-01-01

    Traditionally, Single Photon Emission Computed Tomography (SPECT) simulations use Monte Carlo methods. The hybrid deterministic transport code TITAN has recently been applied to the simulation of a SPECT myocardial perfusion study. The TITAN SPECT simulation uses the discrete ordinates formulation in the phantom region and a simplified ray-tracing formulation outside of the phantom. A SPECT model has been created in the Monte Carlo N-Particle (MCNP5) code for comparison. In MCNP5 the collimator is directly modeled, but TITAN instead simulates the effect of collimator blur using a circular ordinate splitting technique. Projection images created using the TITAN code are compared to results using MCNP5 for three collimator acceptance angles. Normalized projection images for 2.97 deg, 1.42 deg and 0.98 deg collimator acceptance angles had maximum relative differences of 21.3%, 11.9% and 8.3%, respectively. Visually the images are in good agreement. Profiles through the projection images were plotted, showing that the TITAN results follow the shape of the MCNP5 results with some differences in magnitude. A timing comparison on 16 processors found that the TITAN code completed the calculation 382 to 2787 times faster than MCNP5. Both codes exhibit good parallel performance. (author)

  9. DART: A simulation code for charged particle beams

    International Nuclear Information System (INIS)

    White, R.C.; Barr, W.L.; Moir, R.W.

    1989-01-01

    This paper presents a recently modified version of the 2-D code, DART, which can simulate the behavior of a beam of charged particles whose trajectories are determined by electric and magnetic fields. This code was originally used to design laboratory-scale and full-scale beam direct converters. Since then, its utility has been expanded to allow more general applications. The simulation includes space charge, secondary electrons, and the ionization of neutral gas. A beam can contain up to nine superimposed beamlets of different energy and species. The calculation of energy conversion efficiency and the method of specifying the electrode geometry are described. Basic procedures for using the code are given, and sample input and output fields are shown. 7 refs., 18 figs

  10. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization.

    Science.gov (United States)

    Yang, Xinyu; Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is the task of identifying the most likely author of a given sample among a set of candidate known authors. It can not only be applied to discover the original author of plain text, such as novels, blogs, emails, posts etc., but can also be used to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to solving authorship disputes or software plagiarism detection. This paper aims to propose a new method to identify the programmer of Java source code samples with a higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics, structure and syntax metrics, totalling 19 dimensions. Then these metrics are input to the neural network for supervised learning, the weights of which are output by the PSO and BP hybrid algorithm. The effectiveness of the proposed method is evaluated on a collected dataset with 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy. A comparison with previous work on authorship attribution of source code for the Java language illustrates that the proposed method outperforms the others overall, also with an acceptable overhead.
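
    The hybrid scheme uses PSO to supply the network weights that back propagation then refines. The sketch below is a hedged, minimal particle swarm optimizing the weights of a tiny one-hidden-layer network on synthetic data; the network size, swarm parameters, and data are assumptions, and neither the BP refinement step nor the 19 code metrics of the paper are reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 5))              # synthetic stand-in for feature metrics
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # synthetic binary labels

def loss(w):
    """Mean-squared error of a 5-4-1 network whose weights are packed in w."""
    W1, b1 = w[:20].reshape(5, 4), w[20:24]
    W2, b2 = w[24:28], w[28]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return np.mean((out - y) ** 2)

# Minimal particle swarm: global-best topology, constant inertia/accelerations.
n_particles, dim, iters = 30, 29, 200
pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("training MSE:", loss(gbest))
```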

  11. Canonical momenta and numerical instabilities in particle codes

    International Nuclear Information System (INIS)

    Godfrey, B.B.

    1975-01-01

    A set of warm plasma dispersion relations appropriate to a large class of electromagnetic plasma simulation codes is derived. The numerical Cherenkov instability is shown by analytic and numerical analysis of these dispersion relations to be the most significant nonphysical effect involving transverse electromagnetic waves. The instability arises due to a spurious phase shift between resonant particles and light waves, caused by a basic incompatibility between the Lagrangian treatment of particle positions and the Eulerian treatment of particle velocities characteristic of most PIC-CIC algorithms. It is demonstrated that, through the use of canonical momentum, this mismatch is alleviated sufficiently to completely eliminate the Cherenkov instability. Collateral effects on simulation accuracy and on other numerical instabilities appear to be minor.

  12. A model for particle acceleration in lower hybrid collapse

    International Nuclear Information System (INIS)

    Retterer, J.M.

    1997-01-01

    A model for particle acceleration during the nonlinear collapse of lower hybrid waves is described. Using the Musher-Sturman wave equation to describe the effects of nonlinear processes and a velocity diffusion equation for the particle velocity distribution, the model self-consistently describes the exchange of energy between the fields and the particles in the local plasma. Two-dimensional solutions are presented for the modulational instability of a plane wave and the collapse of a cylindrical wave packet. These calculations were motivated by sounding rocket observations in the vicinity of auroral arcs in the Earth's ionosphere, which have revealed the existence of large-amplitude lower-hybrid wave packets associated with ions accelerated to energies of 100 eV. The scaling of the sizes of these wave packets is consistent with the theory of lower-hybrid collapse, and the observed lower-hybrid field amplitudes are adequate to accelerate the ionospheric ions to the observed energies

  13. The OpenMC Monte Carlo particle transport code

    International Nuclear Information System (INIS)

    Romano, Paul K.; Forget, Benoit

    2013-01-01

    Highlights: ► An open source Monte Carlo particle transport code, OpenMC, has been developed. ► Solid geometry and continuous-energy physics allow high-fidelity simulations. ► Development has focused on high performance and modern I/O techniques. ► OpenMC is capable of scaling up to hundreds of thousands of processors. ► Results on a variety of benchmark problems agree with MCNP5. -- Abstract: A new Monte Carlo code called OpenMC is currently under development at the Massachusetts Institute of Technology as a tool for simulation on high-performance computing platforms. Given that many legacy codes do not scale well on existing and future parallel computer architectures, OpenMC has been developed from scratch with a focus on high performance scalable algorithms as well as modern software design practices. The present work describes the methods used in the OpenMC code and demonstrates the performance and accuracy of the code on a variety of problems.

  14. Optical Code-Division Multiple-Access and Wavelength Division Multiplexing: Hybrid Scheme Review

    OpenAIRE

    P. Susthitha Menon; Sahbudin Shaari; Isaac A.M. Ashour; Hesham A. Bakarman

    2012-01-01

    Problem statement: Hybrid Optical Code-Division Multiple-Access (OCDMA) and Wavelength-Division Multiplexing (WDM) have flourished as successful schemes for expanding the transmission capacity as well as enhancing the security of OCDMA. However, a comprehensive review of this hybrid system is currently lacking. Approach: The purpose of this paper is to review the literature on OCDMA-WDM overlay systems, including our hybrid approach of one-dimensional coding of SAC OCDMA with WDM si...

  15. High energy particle transport code NMTC/JAM

    International Nuclear Information System (INIS)

    Niita, Koji; Meigo, Shin-ichiro; Takada, Hiroshi; Ikeda, Yujiro

    2001-03-01

    We have developed a high energy particle transport code NMTC/JAM, which is an upgraded version of NMTC/JAERI97. The applicable energy range of NMTC/JAM is extended in principle up to 200 GeV for nucleons and mesons by introducing the high energy nuclear reaction code JAM for the intra-nuclear cascade part. For the evaporation and fission process, we have also implemented a new model, GEM, by which the light nucleus production from the excited residual nucleus can be described. In accordance with the extended energy range, we have upgraded the nucleon-nucleus non-elastic, elastic and differential elastic cross section data by employing new systematics. In addition, particle transport in a magnetic field has been implemented for beam transport calculations. In this upgrade, some new tally functions are added and the input data format has been made much more user friendly. With these new calculation functions and utilities, NMTC/JAM enables reliable neutronics studies of large-scale target systems with complex geometry to be carried out more accurately and easily than before. This report serves as a user manual of the code. (author)

  16. Los Alamos neutral particle transport codes: New and enhanced capabilities

    International Nuclear Information System (INIS)

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Clark, B.A.; Koch, K.R.; Marr, D.R.

    1992-01-01

    We present new developments in Los Alamos discrete-ordinates transport codes and introduce THREEDANT, the latest in the series of Los Alamos discrete ordinates transport codes. THREEDANT solves the multigroup, neutral-particle transport equation in X-Y-Z and R-Θ-Z geometries. THREEDANT uses computationally efficient algorithms: Diffusion Synthetic Acceleration (DSA) is used to accelerate the convergence of transport iterations, and the DSA solution is itself accelerated using the multigrid technique. THREEDANT runs on a wide range of computers, from scientific workstations to CRAY supercomputers. The algorithms are highly vectorized on CRAY computers. Recently, the THREEDANT transport algorithm was implemented on the massively parallel CM-2 computer, with performance that is comparable to a single-processor CRAY-YMP. We present the results of THREEDANT analysis of test problems

  17. Quasi-linear absorption of lower hybrid waves by fusion generated alpha particles

    International Nuclear Information System (INIS)

    Barbato, E.; Santini, F.

    1991-01-01

    Lower hybrid waves are expected to be used in a steady state reactor to produce current and to control the current profile and the stability of internal modes. In the ignition phase, however, the presence of energetic alpha particles may prevent wave-electron interaction, thus reducing the current drive efficiency. This is due to the very high birth energy of the alpha particles, which may absorb much of the lower hybrid wave power. This unfavourable effect is absent at high frequencies (∼ 8 GHz for typical reactor parameters). Nevertheless, because of the technical difficulties involved in using such high frequencies, it is very important to investigate whether power absorption by alpha particles would be negligible also at relatively low frequencies. Such a study has been carried out on the basis of the quasi-linear theory of wave-alpha particle interaction, since the distortion of the alpha distribution function may enhance the radiofrequency absorption above the linear level. New effects have been found, such as local alpha concentration and acceleration. The model for alpha particles is coupled with a 1-D deposition code for lower hybrid waves to calculate the competition in the power absorption between alphas and electrons as the waves propagate into the plasma core for typical reactor (ITER) parameters. It is shown that at a frequency as low as 5 GHz, power absorption by alpha particles is negligible for conventional plasma conditions and realistic alpha particle concentrations. In more 'pessimistic' and severe conditions, negligible absorption occurs at 6 GHz. (author). 19 refs, 11 figs, 2 tabs

  18. Particle-in-Cell Codes for plasma-based particle acceleration

    CERN Document Server

    Pukhov, Alexander

    2016-01-01

    Basic principles of particle-in-cell (PIC) codes, with the main application to plasma-based acceleration, are discussed. The ab initio full electromagnetic relativistic PIC codes provide the most reliable description of plasmas. Their properties are considered in detail. Representing the most fundamental model, the full PIC codes are computationally expensive. Plasma-based acceleration is a multi-scale problem with very disparate scales: the smallest scale is the laser or plasma wavelength (from one to a hundred microns) and the largest scale is the acceleration distance (from a few centimeters to meters or even kilometers). The Lorentz-boost technique allows the scale disparity to be reduced, at the cost of complicating the simulations and causing unphysical numerical instabilities in the code. Another possibility is to use the quasi-static approximation, where the disparate scales are separated analytically.
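
    As a companion to the abstract above, the following is a minimal sketch of the standard PIC cycle (deposit charge, solve the field, gather and push) for a 1D electrostatic, periodic plasma in normalized units. It illustrates the generic algorithm only; the grid size, nearest-grid-point weighting and time step are arbitrary choices, and a full electromagnetic relativistic code is far more involved.

```python
import numpy as np

# Minimal 1D electrostatic periodic PIC sketch (normalized units: epsilon0 = 1,
# electron q/m = -1, uniform ion background of density 1).  Illustrative only;
# a production code would use higher-order weighting and a leap-frog push.
ng, npart, L, dt, steps = 64, 20000, 2 * np.pi, 0.05, 400
dx = L / ng
rng = np.random.default_rng(1)
x = rng.uniform(0.0, L, npart)
v = 0.05 * np.cos(x)                             # seed a long-wavelength perturbation

k = 2.0 * np.pi * np.fft.rfftfreq(ng, d=dx)      # angular wavenumbers for the field solve
for step in range(steps):
    cells = (x / dx).astype(int) % ng            # nearest-grid-point deposit
    ne = np.bincount(cells, minlength=ng) * ng / npart
    rho = 1.0 - ne                               # ion background minus electrons
    rho_k = np.fft.rfft(rho)
    E_k = np.zeros_like(rho_k)
    E_k[1:] = rho_k[1:] / (1j * k[1:])           # Gauss's law: dE/dx = rho
    E = np.fft.irfft(E_k, n=ng)
    v += -E[cells] * dt                          # gather and push (q/m = -1)
    x = (x + v * dt) % L
    if step % 100 == 0:
        print(f"step {step:4d}  field energy {0.5 * np.sum(E**2) * dx:.3e}")
```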

  19. A NEW HYBRID N-BODY-COAGULATION CODE FOR THE FORMATION OF GAS GIANT PLANETS

    International Nuclear Information System (INIS)

    Bromley, Benjamin C.; Kenyon, Scott J.

    2011-01-01

    We describe an updated version of our hybrid N-body-coagulation code for planet formation. In addition to the features of our 2006-2008 code, our treatment now includes algorithms for the one-dimensional evolution of the viscous disk, the accretion of small particles in planetary atmospheres, gas accretion onto massive cores, and the response of N-bodies to the gravitational potential of the gaseous disk and the swarm of planetesimals. To validate the N-body portion of the algorithm, we use a battery of tests in planetary dynamics. As a first application of the complete code, we consider the evolution of Pluto-mass planetesimals in a swarm of 0.1-1 cm pebbles. In a typical evolution time of 1-3 Myr, our calculations transform 0.01-0.1 M_sun disks of gas and dust into planetary systems containing super-Earths, Saturns, and Jupiters. Low-mass planets form more often than massive planets; disks with smaller α form more massive planets than disks with larger α. For Jupiter-mass planets, masses of solid cores are 10-100 M_Earth.

  20. Hybrid Modeling Method for a DEP Based Particle Manipulation

    Directory of Open Access Journals (Sweden)

    Mohamad Sawan

    2013-01-01

    In this paper, a new modeling approach for Dielectrophoresis (DEP) based particle manipulation is presented. The proposed method fills missing links in finite element modeling between the multiphysics simulation and the biological behavior. This technique is among the first steps towards a more complex platform covering several types of manipulation, such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electrical field in the micro-channel to the particle motion. ANSYS is used to simulate the electrical propagation while MATLAB interprets the results to calculate cell displacement and sends the new information to ANSYS for another iteration. The beta version of the proposed technique takes into account particle shape, weight and electrical properties. The first results obtained are consistent with experimental results.

  1. Hybrid finite element and Brownian dynamics method for charged particles

    Energy Technology Data Exchange (ETDEWEB)

    Huber, Gary A., E-mail: ghuber@ucsd.edu; Miao, Yinglong [Howard Hughes Medical Institute, University of California San Diego, La Jolla, California 92093-0365 (United States); Zhou, Shenggao [Department of Mathematics and Mathematical Center for Interdiscipline Research, Soochow University, 1 Shizi Street, Suzhou, 215006 Jiangsu (China); Li, Bo [Department of Mathematics and Quantitative Biology Graduate Program, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093-0112 (United States); McCammon, J. Andrew [Howard Hughes Medical Institute, University of California San Diego, La Jolla, California 92093 (United States); Department of Chemistry and Biochemistry, University of California San Diego, La Jolla, California 92093-0365 (United States); Department of Pharmacology, University of California San Diego, La Jolla, California 92093-0636 (United States)

    2016-04-28

    Diffusion is often the rate-determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. A previous study introduced a new hybrid diffusion method that couples the strengths of each of these two methods, but was limited by the lack of interactions among the particles; the force on each particle had to be from an external field. This study further develops the method to allow charged particles. The method is derived for a general multidimensional system and is presented using a basic test case for a one-dimensional linear system with one charged species and a radially symmetric system with three charged species.

  2. Particle tracking code of simulating global RF feedback

    International Nuclear Information System (INIS)

    Mestha, L.K.

    1991-09-01

    It is well known in the 'control community' that a good feedback controller design is deeply rooted in the physics of the system. For example, when accelerating the beam we must keep several parameters under control so that the beam travels within the confined space. Important parameters include the frequency and phase of the rf signal, the dipole field, and the cavity voltage. Because errors in these parameters will progressively mislead the beam from its projected path in the tube, feedback loops are used to correct the behavior. Since the feedback loop feeds energy to the system, it changes the overall behavior of the system and may drive it to instability. Various types of controllers are used to stabilize the feedback loop. Integrating the beam physics with the feedback controllers allows us to carefully analyze the beam behavior. This will not only guarantee optimal performance but will also significantly enhance the ability of the beam control engineer to deal effectively with the interaction of various feedback loops. Motivated by this theme, we developed a simple one-particle tracking code to simulate particle behavior with feedback controllers. In order to achieve our fundamental objective, we can ask some key questions: What are the input and output parameters? How can they be applied to the practical machine? How can one interface the rf system dynamics, such as the transfer characteristics of the rf cavities and the phasing between the cavities? Answers to these questions can be found by considering a simple case of a single cavity with one particle, tracking it turn-by-turn with appropriate initial conditions, then introducing constraints on crucial parameters. Critical parameters are rf frequency, phase, and amplitude once the dipole field has been given. These are arranged in the tracking code so that we can interface the feedback system controlling them.
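
    To make the turn-by-turn idea concrete, here is a toy sketch of one particle tracked through a single rf cavity with an integral feedback loop trimming a cavity phase error. The simplified map, gains and initial conditions are hypothetical illustrations of the tracking-plus-controller structure described above, not the code itself.

```python
import math

# Toy turn-by-turn tracking of one particle through a single rf cavity, with an
# integral feedback loop trimming a static cavity phase error.  The simplified
# synchrotron map and all numbers below are hypothetical.
def track(turns=4000, a=0.02, b=0.02, phi_err=0.2, gain=0.002):
    phi, delta = 0.0, 0.0          # phase and relative energy offsets
    corr = 0.0                     # controller's accumulated phase correction
    for n in range(turns):
        delta += -b * math.sin(phi + phi_err - corr)  # rf kick (synchronous phase 0)
        phi += a * delta                              # phase slip over one turn
        corr -= gain * phi                            # integral feedback on measured phase
        if n % 1000 == 0:
            print(f"turn {n:5d}  phi={phi:+.4f}  delta={delta:+.4e}  corr={corr:+.4f}")
    return corr

# the accumulated correction should settle near the injected error of 0.2 rad
print("final correction:", round(track(), 3))
```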

  3. Parallel deposition, sorting, and reordering methods in the Hybrid Ordered Plasma Simulation (HOPS) code

    International Nuclear Information System (INIS)

    Anderson, D.V.; Shumaker, D.E.

    1993-01-01

    From a computational standpoint, particle simulation calculations for plasmas have not adapted well to the transitions from scalar to vector processing nor from serial to parallel environments. They have suffered from inordinate and excessive accessing of computer memory and have been hobbled by relatively inefficient gather-scatter constructs resulting from the use of indirect indexing. Lastly, the many-to-one mapping characteristic of the deposition phase has made it difficult to perform this in parallel. The authors' code sorts and reorders the particles in a spatial order. This allows them to greatly reduce the memory references, to run in directly indexed vector mode, and to employ domain decomposition to achieve parallelization. In this hybrid simulation the electrons are modeled as a fluid and the field equations solved are obtained from the electron momentum equation together with the pre-Maxwell equations (displacement current neglected). Either zero or finite electron mass can be used in the electron model. The resulting field equations are solved with an iteratively explicit procedure which is thus trivial to parallelize. Likewise, the field interpolations and the particle pushing are simple to parallelize. The deposition, sorting, and reordering phases are less simple and it is for these that the authors present detailed algorithms. They have now successfully tested the parallel version of HOPS in serial mode and it is now being readied for parallel execution on the Cray C-90. They will then port HOPS to a massively parallel computer, in the next year
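
    The sort-and-reorder idea above can be illustrated with a few lines of array code: once particles are ordered by cell index, the deposition becomes a segmented sum over contiguous slices and the field gather a unit-stride indexed read. The sketch below is a generic illustration with numpy, assuming nearest-grid-point weighting; it is not the HOPS data layout or its parallel implementation.

```python
import numpy as np

# Sketch of the "ordered particles" idea: sort particles by cell index so that
# deposition and field gather touch memory contiguously.
rng = np.random.default_rng(0)
ng, npart = 128, 200_000
x = rng.uniform(0.0, 1.0, npart)
w = rng.uniform(0.5, 1.5, npart)            # particle weights

cells = np.minimum((x * ng).astype(np.int64), ng - 1)
order = np.argsort(cells, kind="stable")    # reorder pass after the push phase
x, w, cells = x[order], w[order], cells[order]

# Deposit: with sorted particles each cell's contribution is a contiguous slice,
# so the many-to-one scatter becomes a segmented sum.  (Without sorting,
# np.bincount(cells, weights=w) gives the same rho but with scattered accesses.)
counts = np.bincount(cells, minlength=ng)
starts = np.concatenate(([0], np.cumsum(counts)[:-1]))
rho = np.zeros(ng)
nonempty = counts > 0
rho[nonempty] = np.add.reduceat(w, starts[nonempty])

# Gather: field interpolation also becomes a direct, unit-stride indexed read.
E = np.sin(2 * np.pi * np.arange(ng) / ng)  # stand-in field
E_at_particles = E[cells]
print(rho[:4], E_at_particles[:4])
```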

  4. Parallel treatment of simulation particles in particle-in-cell codes on SUPRENUM

    International Nuclear Information System (INIS)

    Seldner, D.

    1990-02-01

    This report contains the program documentation and description of the program package 2D-PLAS, which has been developed at the Nuclear Research Center Karlsruhe in the Institute for Data Processing in Technology (IDT) under the auspices of the BMFT. 2D-PLAS is a parallel program version of the treatment of the simulation particles of the two-dimensional stationary particle-in-cell code BFCPIC, which has been developed at the Nuclear Research Center Karlsruhe. This parallel version has been designed for the parallel computer SUPRENUM. (orig.)

  5. Collaborative Multi-Layer Network Coding in Hybrid Cellular Cognitive Radio Networks

    KAUST Repository

    Moubayed, Abdallah J.; Sorour, Sameh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2015-01-01

    In this paper, as an extension to [1], we propose a prioritized multi-layer network coding scheme for collaborative packet recovery in hybrid (interweave and underlay) cellular cognitive radio networks. This scheme allows the uncoordinated

  6. Collaborative Multi-Layer Network Coding For Hybrid Cellular Cognitive Radio Networks

    KAUST Repository

    Moubayed, Abdallah J.

    2014-01-01

    In this thesis, as an extension to [1], we propose a prioritized multi-layer network coding scheme for collaborative packet recovery in hybrid (interweave and underlay) cellular cognitive radio networks. This scheme allows the uncoordinated

  7. Development of general-purpose particle and heavy ion transport monte carlo code

    International Nuclear Information System (INIS)

    Iwase, Hiroshi; Nakamura, Takashi; Niita, Koji

    2002-01-01

    The high-energy particle transport code NMTC/JAM, which has been developed at JAERI, was improved for the high-energy heavy ion transport calculation by incorporating the JQMD code, the SPAR code and the Shen formula. The new NMTC/JAM named PHITS (Particle and Heavy-Ion Transport code System) is the first general-purpose heavy ion transport Monte Carlo code over the incident energies from several MeV/nucleon to several GeV/nucleon. (author)

  8. New features of the mercury Monte Carlo particle transport code

    International Nuclear Information System (INIS)

    Procassini, Richard; Brantley, Patrick; Dawson, Shawn

    2010-01-01

    Several new capabilities have been added to the Mercury Monte Carlo transport code over the past four years. The most important algorithmic enhancement is a general, extensible infrastructure to support source, tally and variance reduction actions. For each action, the user defines a phase space, as well as any number of responses that are applied to a specified event. Tallies are accumulated into a correlated, multi-dimensional, Cartesian-product result phase space. Our approach employs a common user interface to specify the data sets and distributions that define the phase, response and result for each action. Modifications to the particle trackers include the use of facet halos (instead of extrapolative fuzz) for robust tracking, and material interface reconstruction for use in shape-overlaid meshes. Support for expected-value criticality eigenvalue calculations has also been implemented. Computer science enhancements include an in-line Python interface for user customization of problem setup and output. (author)

  9. Computer codes used in particle accelerator design: First edition

    International Nuclear Information System (INIS)

    1987-01-01

    This paper contains a listing of more than 150 programs that have been used in the design and analysis of accelerators. Each citation gives the person to contact, the classification of the computer code, publications describing the code, the computer and language it runs on, and a short description of the code. Codes are indexed by subject, person to contact, and code acronym

  10. Low complexity source and channel coding for mm-wave hybrid fiber-wireless links

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Vegas Olmos, Juan José; Pang, Xiaodan

    2014-01-01

    We report on the performance of channel and source coding applied for an experimentally realized hybrid fiber-wireless W-band link. Error control coding performance is presented for a wireless propagation distance of 3 m and 20 km fiber transmission. We report on peak signal-to-noise ratio perfor...

  11. Enhancements to the Combinatorial Geometry Particle Tracker in the Mercury Monte Carlo Transport Code: Embedded Meshes and Domain Decomposition

    International Nuclear Information System (INIS)

    Greenman, G.M.; O'Brien, M.J.; Procassini, R.J.; Joy, K.I.

    2009-01-01

    Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculations of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement relates to the addition of parallelism within the CG tracker via spatial domain decomposition. This permits calculations of problems with a large degree of geometric complexity, which are not possible through particle parallelism alone. In this method, the cells are decomposed across processors and a particle is communicated to an adjacent processor when it tracks to an interprocessor boundary. Applications that demonstrate the efficacy of these new methods are presented

  12. Study on MPI/OpenMP hybrid parallelism for Monte Carlo neutron transport code

    International Nuclear Information System (INIS)

    Liang Jingang; Xu Qi; Wang Kan; Liu Shiwen

    2013-01-01

    Parallel programming with a mixed mode of message-passing and shared memory has several advantages when used in a Monte Carlo neutron transport code, such as fitting the hardware of distributed shared-memory clusters, economizing the memory demand of Monte Carlo transport, and improving parallel performance. MPI/OpenMP hybrid parallelism was implemented based on a one-dimensional Monte Carlo neutron transport code. Some critical factors affecting the parallel performance were analyzed and solutions were proposed for several problems such as memory access contention, lock contention and false sharing. After optimization the code was tested. It is shown that the hybrid parallel code reaches performance comparable to a pure MPI parallel program while saving a large amount of memory at the same time. Hybrid parallelism is therefore efficient for achieving large-scale parallelization of Monte Carlo neutron transport. (authors)
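
    The structure of such a mixed-mode decomposition can be sketched as follows: MPI ranks own independent batches of histories (distributed memory), while threads inside a rank accumulate into one shared tally array (shared memory). The sketch uses mpi4py and a Python thread pool purely to illustrate the decomposition; the toy "transport" is just an exponential path-length sample, real thread-level speedup in Python would need GIL-releasing compiled kernels, and the code discussed above is a conventional MPI+OpenMP implementation, not this.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nbins, histories_per_rank, nthreads = 50, 200_000, 4
local_tally = np.zeros(nbins)

def run_batch(seed, nhist):
    """One thread's batch: sample toy 'collision sites' and histogram them."""
    rng = np.random.default_rng(seed)
    sites = rng.exponential(scale=0.2, size=nhist)        # toy free paths
    hist, _ = np.histogram(np.clip(sites, 0.0, 0.999), bins=nbins, range=(0.0, 1.0))
    return hist

# shared-memory level: threads within one rank contribute to the same tally
with ThreadPoolExecutor(max_workers=nthreads) as pool:
    futures = [pool.submit(run_batch, 1000 * rank + t, histories_per_rank // nthreads)
               for t in range(nthreads)]
    for f in futures:
        local_tally += f.result()

# distributed-memory level: combine per-rank tallies with an MPI reduction
global_tally = np.zeros_like(local_tally) if rank == 0 else None
comm.Reduce(local_tally, global_tally, op=MPI.SUM, root=0)
if rank == 0:
    print("total histories tallied:", int(global_tally.sum()))
```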

  13. Computer code MLCOSP for multiple-correlation and spectrum analysis with a hybrid computer

    International Nuclear Information System (INIS)

    Oguma, Ritsuo; Fujii, Yoshio; Usui, Hozumi; Watanabe, Koichi

    1975-10-01

    Usage of the computer code MLCOSP (Multiple Correlation and Spectrum), developed for a hybrid computer installed at JAERI, is described. Functions of the hybrid computer and its terminal devices are utilized ingeniously in the code to reduce the complexity of the data handling which occurs in the analysis of multivariable experimental data and to perform the analysis in perspective. Features of the code are as follows: experimental data can be fed to the digital computer through the analog part of the hybrid computer by connecting it with a data recorder; the computed results are displayed in figures, and hardcopies are taken when necessary; series messages from the code are shown on the terminal, so man-machine communication is possible; and, further, data can be put in through a keyboard, so case studies based on the results of the analysis are possible. (auth.)

  14. Code-B-1 for stress/strain calculation for TRISO fuel particle (Contract research)

    International Nuclear Information System (INIS)

    Aihara, Jun; Ueta, Shohei; Shibata, Taiju; Sawa, Kazuhiro

    2011-12-01

    We have developed Code-B-1, by modifying an existing code, to predict the failure probabilities of coated fuel particles in high temperature gas-cooled reactors (HTGRs) under operation. A finite element method (FEM) is employed for the stress calculation part, and Code-B-1 can treat the plastic deformation of the coating layers of the coated fuel particles, which the existing code cannot. (author)

  15. SSCTRK: A particle tracking code for the SSC

    International Nuclear Information System (INIS)

    Ritson, D.

    1990-07-01

    While many indirect methods are available to evaluate dynamic aperture, there appears at this time to be no reliable substitute for tracking particles through realistic machine lattices for a number of turns determined by the storage times. Machine lattices are generated by 'Monte Carlo' techniques from the expected rms fabrication and survey errors. Any given generated machine can potentially be a lucky or unlucky fluctuation from the average. Therefore, simulation to serve as a predictor of future performance must be done for an ensemble of generated machines. Further, several amplitudes and momenta are necessary to predict machine performance. Thus, Monte Carlo type simulations for the SSC require very considerable computer resources. Hitherto, it has been assumed that this was not feasible, and alternative indirect methods have been proposed or tried to answer the problem. We reexamined the feasibility of using direct computation. Previous codes have represented lattices by a succession of thin elements separated by bend-drifts. With 'kick-drift' configurations, tracking time is linear in the multipole order included, and the code is symplectic. Modern vector processors simultaneously handle a large number of cases in parallel. Combining the efficiencies of kick-drift tracking with vector processing, in fact, makes realistic Monte Carlo simulation entirely feasible. SSCTRK uses the above features. It is structured to have a very friendly interface and a very wide latitude of choice for cases to be run in parallel, and, by using pure FORTRAN 77, to run interchangeably on a wide variety of computers. We describe in this paper the program structure, operational checks, and the results achieved

  16. A hybrid charged-particle guide for studying (n, charged particle) reactions

    International Nuclear Information System (INIS)

    Haight, R.C.; White, R.M.; Zinkle, S.J.

    1983-01-01

    Charged-particle transport systems consisting of magnetic quadrupole lenses have been employed in recent years in the study of (n, charged particle) reactions. A new transport system, based on both magnetic lenses and electrostatic fields, has been completed at the laboratory. The magnetic focusing of the charged-particle guide is provided by six magnetic quadrupole lenses arranged in a CDCCDC sequence (in the vertical plane). The electrostatic field is produced by a wire at high voltage which stretches the length of the guide and lies on the magnetic axis. The magnetic lenses are used for charged particles above 5 MeV; the electrostatic guide is used for lower energies. This hybrid system possesses the excellent focusing and background rejection properties of other magnetic systems. For low-energy charged particles, the electrostatic transport avoids the narrow band-passes in charged-particle energy which are a problem with purely magnetic transport systems. This system is installed at the LLNL Cyclograaff facility for the study of (n, charged particle) reactions at neutron energies up to 35 MeV. (Auth.)

  17. Hybrid Particle Swarm Optimization for Hybrid Flowshop Scheduling Problem with Maintenance Activities

    Science.gov (United States)

    Li, Jun-qing; Pan, Quan-ke; Mao, Kun

    2014-01-01

    A hybrid algorithm which combines particle swarm optimization (PSO) and iterated local search (ILS) is proposed for solving the hybrid flowshop scheduling (HFS) problem with preventive maintenance (PM) activities. In the proposed algorithm, different crossover operators and mutation operators are investigated. In addition, an efficient multiple insert mutation operator is developed for enhancing the searching ability of the algorithm. Furthermore, an ILS-based local search procedure is embedded in the algorithm to improve its exploitation ability. The experimental parameters of the canonical PSO are tuned in detail. The proposed algorithm is tested on variations of the 77 Carlier and Néron benchmark problems. Detailed comparisons with existing efficient algorithms, including hGA, ILS, PSO, and IG, verify the efficiency and effectiveness of the proposed algorithm. PMID:24883414

  18. Hybrid Particle Swarm Optimization for Hybrid Flowshop Scheduling Problem with Maintenance Activities

    Directory of Open Access Journals (Sweden)

    Jun-qing Li

    2014-01-01

    A hybrid algorithm which combines particle swarm optimization (PSO) and iterated local search (ILS) is proposed for solving the hybrid flowshop scheduling (HFS) problem with preventive maintenance (PM) activities. In the proposed algorithm, different crossover operators and mutation operators are investigated. In addition, an efficient multiple insert mutation operator is developed for enhancing the searching ability of the algorithm. Furthermore, an ILS-based local search procedure is embedded in the algorithm to improve its exploitation ability. The experimental parameters of the canonical PSO are tuned in detail. The proposed algorithm is tested on variations of the 77 Carlier and Néron benchmark problems. Detailed comparisons with existing efficient algorithms, including hGA, ILS, PSO, and IG, verify the efficiency and effectiveness of the proposed algorithm.

  19. Hybrid particle swarm optimization for hybrid flowshop scheduling problem with maintenance activities.

    Science.gov (United States)

    Li, Jun-qing; Pan, Quan-ke; Mao, Kun

    2014-01-01

    A hybrid algorithm which combines particle swarm optimization (PSO) and iterated local search (ILS) is proposed for solving the hybrid flowshop scheduling (HFS) problem with preventive maintenance (PM) activities. In the proposed algorithm, different crossover operators and mutation operators are investigated. In addition, an efficient multiple insert mutation operator is developed for enhancing the searching ability of the algorithm. Furthermore, an ILS-based local search procedure is embedded in the algorithm to improve its exploitation ability. The experimental parameters of the canonical PSO are tuned in detail. The proposed algorithm is tested on variations of the 77 Carlier and Néron benchmark problems. Detailed comparisons with existing efficient algorithms, including hGA, ILS, PSO, and IG, verify the efficiency and effectiveness of the proposed algorithm.

  20. Hybrid Composite Material and Solid Particle Erosion Studies

    Science.gov (United States)

    Chellaganesh, D.; Khan, M. Adam; Ashif, A. Mohamed; Ragul Selvan, T.; Nachiappan, S.; Winowlin Jappes, J. T.

    2018-04-01

    Composites are among the predominant materials for the most challenging engineering components. Such components are found in automobile structures, aircraft structures, wind turbine blades, and so on, and all of them are subjected to mechanical loading. Recent research on composite materials covers machinability, wear, tear and corrosion studies. One of the major issues in recent research is solid-particle air-jet erosion. This paper studies a hybrid composite material with and without filler. The fibres are a hemp-Kevlar combination (60:40 wt.%) used as reinforcement in an epoxy matrix. Natural palm and coconut shell are used as filler materials in the form of crushed powder. The process parameters involved are air jet velocity, volume of erodent and angle of impingement. Experiments were performed in eight different combinations following a 2^k (k = 3) factorial design. Surface morphology was studied using an electron microscope. Mass change with respect to time is used to calculate the wear rate and the influence of the process parameters. During solid particle erosion the hard particles embed in the soft matrix material. The filler material reduced the wear compared to the plain natural composite material.

  1. A novel neutron energy spectrum unfolding code using particle swarm optimization

    International Nuclear Information System (INIS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-01-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with standard spectra and with those of the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code has previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate than those codes. The results of the SDPSO code match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO has been shown to be nearly two times faster than the TGASU code. - Highlights: • Introducing a novel method for neutron spectrum unfolding. • Implementation of a particle swarm optimization code for neutron unfolding. • Comparing results of the PSO code with those of the recently published TGASU code. • Results of the PSO code match those of the TGASU code. • Greater convergence rate of the implemented PSO code than the TGASU code.
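
    A minimal version of the underlying optimization problem can be sketched as follows: a particle swarm searches for a nonnegative spectrum phi minimizing ||R·phi − m||², where R is the response matrix and m the measured pulse-height vector. The response matrix, the "measurement" and all swarm parameters below are synthetic placeholders, not the SDPSO code or its data.

```python
import numpy as np

# Toy particle-swarm unfolding: find a nonnegative spectrum phi that minimizes
# ||R @ phi - m||^2 for a synthetic response matrix and measurement.
rng = np.random.default_rng(3)
nch, ne = 16, 8                                   # pulse-height channels, energy bins
R = rng.uniform(0.0, 1.0, (nch, ne))              # synthetic response matrix
phi_true = rng.uniform(0.0, 5.0, ne)
m = R @ phi_true                                  # noiseless synthetic measurement

def cost(phi):
    return np.sum((R @ np.maximum(phi, 0.0) - m) ** 2)

nswarm, iters, w, c1, c2 = 40, 400, 0.72, 1.5, 1.5
pos = rng.uniform(0.0, 6.0, (nswarm, ne))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("residual:", cost(gbest),
      " max spectrum error:", np.max(np.abs(np.maximum(gbest, 0.0) - phi_true)))
```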

  2. Preparation and characterization of emulsifier-free polyphenylsilsesquioxane-poly (styrene–butyl acrylate) hybrid particles

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Ruiqin; Qiu, Teng, E-mail: qiuteng@mail.buct.edu.cn; Han, Feng; He, Lifan; Li, Xiaoyu, E-mail: lixy@mail.buct.edu.cn

    2013-10-01

    The core–shell polyphenylsilsesquioxane-poly (styrene–butyl acrylate) hybrid latex particles with polyphenylsilsesquioxane as core and poly (styrene–butyl acrylate) as shell were successfully synthesized by seeded emulsion polymerization using polyphenylsilsesquioxane (PPSQ) latex particles as seeds. X-ray diffraction (XRD) indicated that the polyphenylsilsesquioxane (PPSQ) had a ladder structure and that PPSQ had been incorporated into the hybrid latex particles. Transmission electron microscopy (TEM) confirmed that the resultant hybrid latex particles had the core–shell structure. TEM and dynamic light scattering (DLS) analysis indicated that the polyphenylsilsesquioxane latex particles and the obtained core–shell hybrid latex particles were uniform and possessed narrow size distributions. X-ray photoelectron spectroscopy (XPS) analysis also indicated that the PPSQ core particles were enwrapped by the polymer shell. In addition, compared with the pure poly (styrene–butyl acrylate) latex film, the polyphenylsilsesquioxane-poly (styrene–butyl acrylate) hybrid latex film exhibited lower water uptake, higher pencil hardness and better thermal stability.

  3. PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM

    Directory of Open Access Journals (Sweden)

    Bahubali K. Shiragapur

    2016-03-01

    In this article, error correction coding techniques are investigated as a means of reducing the undesirable Peak-to-Average Power Ratio (PAPR). The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4) and a hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques. The simulation results show that the hybrid technique reduces PAPR significantly compared to the conventional and modified selective mapping techniques. The simulation results are validated through statistical properties: the autocorrelation value of the proposed technique is maximal, indicating the reduction in PAPR. Symbol preference based on Hamming distance is the key idea for reducing PAPR. The simulation results are discussed in detail in this article.
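
    To make the quantity concrete, the sketch below computes the PAPR of an oversampled OFDM symbol and applies a simple selective-mapping-style search over phase-scrambled candidates, keeping the one with the lowest PAPR. It illustrates only the scrambling half of such a hybrid scheme; the Golay, Reed-Muller and Hamming constructions of the article are not reproduced, and the QPSK mapping, subcarrier count and candidate count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)
N, oversample, ncand = 64, 4, 8

def papr_db(freq_symbols):
    """Oversampled IFFT, then peak-to-average power ratio in dB."""
    padded = np.zeros(N * oversample, dtype=complex)
    padded[:N] = freq_symbols
    t = np.fft.ifft(padded)
    p = np.abs(t) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

# one OFDM symbol of random QPSK subcarriers
bits = rng.integers(0, 2, 2 * N)
qpsk = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)
print("original PAPR: %.2f dB" % papr_db(qpsk))

# selective-mapping-style search: random +/-1 phase scrambles, keep the best
candidates = [qpsk * np.exp(1j * np.pi * rng.integers(0, 2, N)) for _ in range(ncand)]
best = min(candidates, key=papr_db)
print("best of %d scrambled candidates: %.2f dB" % (ncand, papr_db(best)))
```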

  4. Noise suppression system of OCDMA with spectral/spatial 2D hybrid code

    Science.gov (United States)

    Matem, Rima; Aljunid, S. A.; Junita, M. N.; Rashidi, C. B. M.; Shihab Aqrab, Israa

    2017-11-01

    In this paper, we propose a novel 2D spectral/spatial hybrid code based on the 1D ZCC and 1D MD codes, both of which present a zero cross-correlation property, and analyze the influence of optical noise such as Phase Induced Intensity Noise (PIIN), shot noise and thermal noise. This new code is shown to effectively mitigate PIIN and suppress MAI. Using the 2D ZCC/MD code, the performance of the system can be improved and more simultaneous users supported compared with the 2D FCC/MDW and 2D DPDC codes.

  5. Noise suppression system of OCDMA with spectral/spatial 2D hybrid code

    Directory of Open Access Journals (Sweden)

    Matem Rima

    2017-01-01

    In this paper, we propose a novel 2D spectral/spatial hybrid code based on the 1D ZCC and 1D MD codes, both of which present a zero cross-correlation property, and analyze the influence of optical noise such as Phase Induced Intensity Noise (PIIN), shot noise and thermal noise. This new code is shown to effectively mitigate PIIN and suppress MAI. Using the 2D ZCC/MD code, the performance of the system can be improved and more simultaneous users supported compared with the 2D FCC/MDW and 2D DPDC codes.

  6. Hybrid real-code ant colony optimisation for constrained mechanical design

    Science.gov (United States)

    Pholdee, Nantiwat; Bureerat, Sujin

    2016-01-01

    This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
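
    The hybridization pattern described above, a population-based global step refined by a simplex downhill local search on the incumbent best, can be sketched as follows. The Gaussian sampling around archive members is only a crude stand-in for ACOR's archive-based sampling, constraint handling is omitted, and the test function and parameters are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

rng = np.random.default_rng(5)
dim, archive_size, iters = 5, 20, 60
archive = rng.uniform(-2.0, 2.0, (archive_size, dim))
scores = np.array([rosenbrock(x) for x in archive])

for _ in range(iters):
    # "global" step: sample new candidates around randomly chosen archive members
    parents = archive[rng.integers(0, archive_size, archive_size)]
    children = parents + rng.normal(0.0, 0.3, parents.shape)
    child_scores = np.array([rosenbrock(x) for x in children])
    # keep the best archive_size points overall
    allx = np.vstack([archive, children])
    alls = np.concatenate([scores, child_scores])
    keep = np.argsort(alls)[:archive_size]
    archive, scores = allx[keep], alls[keep]
    # local step: simplex downhill (Nelder-Mead) polish of the incumbent best
    res = minimize(rosenbrock, archive[0], method="Nelder-Mead",
                   options={"maxiter": 50, "xatol": 1e-6, "fatol": 1e-9})
    if res.fun < scores[0]:
        archive[0], scores[0] = res.x, res.fun

print("best value found:", scores[0])
```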

  7. Operation management of daily economic dispatch using novel hybrid particle swarm optimization and gravitational search algorithm with hybrid mutation strategy

    Science.gov (United States)

    Wang, Yan; Huang, Song; Ji, Zhicheng

    2017-07-01

    This paper presents a hybrid particle swarm optimization and gravitational search algorithm based on a hybrid mutation strategy (HGSAPSO-M) to optimize economic dispatch (ED) including distributed generations (DGs), considering market-based energy pricing. A daily ED model was formulated and a hybrid mutation strategy was adopted in HGSAPSO-M. The hybrid mutation strategy includes two mutation operators: chaotic mutation and Gaussian mutation. The proposed algorithm was tested on the IEEE 33-bus system, and the results show that the approach is effective for this problem.

  8. A HYDROCHEMICAL HYBRID CODE FOR ASTROPHYSICAL PROBLEMS. I. CODE VERIFICATION AND BENCHMARKS FOR A PHOTON-DOMINATED REGION (PDR)

    International Nuclear Information System (INIS)

    Motoyama, Kazutaka; Morata, Oscar; Hasegawa, Tatsuhiko; Shang, Hsien; Krasnopolsky, Ruben

    2015-01-01

    A two-dimensional hydrochemical hybrid code, KM2, is constructed to deal with astrophysical problems that would require coupled hydrodynamical and chemical evolution. The code assumes axisymmetry in a cylindrical coordinate system and consists of two modules: a hydrodynamics module and a chemistry module. The hydrodynamics module solves hydrodynamics using a Godunov-type finite volume scheme and treats included chemical species as passively advected scalars. The chemistry module implicitly solves nonequilibrium chemistry and change of energy due to thermal processes with transfer of external ultraviolet radiation. Self-shielding effects on photodissociation of CO and H2 are included. In this introductory paper, the adopted numerical method is presented, along with code verifications using the hydrodynamics module and a benchmark on the chemistry module with reactions specific to a photon-dominated region (PDR). Finally, as an example of the expected capability, the hydrochemical evolution of a PDR is presented based on the PDR benchmark
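
    The two-module split described above can be illustrated with a 1D operator-splitting toy: an explicit upwind advection step treats a species fraction as a passively advected scalar, and an implicit (backward Euler) step handles a stiff destruction/formation balance. The rates, resolution and single-reaction "network" below are assumptions for illustration only, not KM2's scheme or chemistry.

```python
import numpy as np

# Toy operator-split coupling of a "hydro" step (upwind advection of a passive
# species fraction) and an implicit "chemistry" step (backward Euler for a
# single destruction/formation balance).
nx, L, u, dt, steps = 200, 1.0, 1.0, 0.002, 300
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
f = np.where(x < 0.3, 1.0, 0.0)        # molecular fraction, advected passively

k_photo = 2.0                          # photodissociation rate (1/time), assumed
k_form = 0.5                           # re-formation rate toward f = 1, assumed

for _ in range(steps):
    # hydro module: first-order upwind advection (u > 0), periodic boundary
    f = f - u * dt / dx * (f - np.roll(f, 1))
    # chemistry module: df/dt = k_form*(1 - f) - k_photo*f, backward Euler
    f = (f + dt * k_form) / (1.0 + dt * (k_form + k_photo))
    f = np.clip(f, 0.0, 1.0)

print("equilibrium fraction ~", k_form / (k_form + k_photo),
      " min/max after run:", f.min(), f.max())
```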

  9. An electrostatic particle-in-cell model for a lower hybrid grill

    International Nuclear Information System (INIS)

    Rantamaeki, K.

    1998-01-01

    In recent lower hybrid (LH) current drive experiments, generation of hot spots and impurities in the grill region have been observed on Tore Supra and Tokamak de Varennes (TdeV). A possible explanation is the parasitic absorption of the LH power in front of the grill. In parasitic absorption, the short-wavelength part of the lower hybrid spectrum can resonantly interact with the cold edge electrons. In this work, the absorption of the LH waves and the generation of fast electrons near the waveguide mouth is investigated with a new tool in this context: particle-in-cell (PIC) simulations. The advantage of this new method is that the electric field is calculated self-consistently. The PIC simulations also provide the key parameters for the hot spot problem: the absorbed power, the radial deposition profiles and the absorption length. A grill model has been added to the 2d3v PIC code XPDP2. Two sets of simulations were made. The first simulations used a phenomenological grill model. Strong absorption in the edge plasma was obtained. About 5% of the coupled power was absorbed within 1.7 mm in the case with fairly large amount of power in the modes with large parallel refractive index. Consequently, a rapid generation of fast electrons took place in the same region. In order to model experiments with realistic wave spectra, the PIC code was coupled to the slow wave antenna coupling code SWAN. The absorption within 1.7 mm in front of the grill was found to be between 2 and 5%. In the short time of a few wave periods, part of the initially thermal electrons (T e = 100 eV) were accelerated to velocities corresponding to a few keV. (orig.)

  10. An electrostatic particle-in-cell model for a lower hybrid grill

    Energy Technology Data Exchange (ETDEWEB)

    Rantamaeki, K

    1998-07-01

    In recent lower hybrid (LH) current drive experiments, generation of hot spots and impurities in the grill region have been observed on Tore Supra and Tokamak de Varennes (TdeV). A possible explanation is the parasitic absorption of the LH power in front of the grill. In parasitic absorption, the short-wavelength part of the lower hybrid spectrum can resonantly interact with the cold edge electrons. In this work, the absorption of the LH waves and the generation of fast electrons near the waveguide mouth is investigated with a new tool in this context: particle-in-cell (PIC) simulations. The advantage of this new method is that the electric field is calculated self-consistently. The PIC simulations also provide the key parameters for the hot spot problem: the absorbed power, the radial deposition profiles and the absorption length. A grill model has been added to the 2d3v PIC code XPDP2. Two sets of simulations were made. The first simulations used a phenomenological grill model. Strong absorption in the edge plasma was obtained. About 5% of the coupled power was absorbed within 1.7 mm in the case with fairly large amount of power in the modes with large parallel refractive index. Consequently, a rapid generation of fast electrons took place in the same region. In order to model experiments with realistic wave spectra, the PIC code was coupled to the slow wave antenna coupling code SWAN. The absorption within 1.7 mm in front of the grill was found to be between 2 and 5%. In the short time of a few wave periods, part of the initially thermal electrons (T_e = 100 eV) were accelerated to velocities corresponding to a few keV. (orig.)

  11. ORBXYZ: a 3D single-particle orbit code for following charged-particle trajectories in equilibrium magnetic fields

    International Nuclear Information System (INIS)

    Anderson, D.V.; Cohen, R.H.; Ferguson, J.R.; Johnston, B.M.; Sharp, C.B.; Willmann, P.A.

    1981-01-01

    The single particle orbit code, TIBRO, has been modified extensively to improve the interpolation methods used and to allow use of vector potential fields in the simulation of charged particle orbits on a 3D domain. A 3D cubic B-spline algorithm is used to generate the spline coefficients used in the interpolation. Smooth and accurate field representations are obtained. When vector potential fields are used, the 3D cubic spline interpolation formula analytically generates the magnetic field used to push the particles. This field satisfies ∇·B = 0 to computer roundoff. When the magnetic induction is interpolated directly, the interpolation allows ∇·B ≠ 0, which can lead to significant nonphysical results. Presently the code assumes quadrupole symmetry, but this is not an essential feature of the code and could easily be removed for other applications. Many details pertaining to this code are given on microfiche accompanying this report
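
    The divergence-free property mentioned above comes from differentiating a single interpolant of the potential rather than interpolating the field components separately. The 2D sketch below illustrates the principle with a bicubic spline of A_z, from which B = (∂A_z/∂y, −∂A_z/∂x); the mixed partial derivatives of one spline cancel analytically, so a finite-difference check of ∇·B stays at the truncation level. The sample potential and grid are arbitrary, and the actual code works in 3D with B-splines.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Spline A_z once, then take B = (dA/dy, -dA/dx) from the same spline.
x = np.linspace(0.0, 1.0, 33)
y = np.linspace(0.0, 1.0, 33)
X, Y = np.meshgrid(x, y, indexing="ij")
Az = np.sin(2 * np.pi * X) * np.cos(3 * np.pi * Y)   # sample vector potential

spl = RectBivariateSpline(x, y, Az, kx=3, ky=3)      # bicubic spline of A_z

rng = np.random.default_rng(0)
xp = rng.uniform(0.05, 0.95, 1000)                   # "particle" positions
yp = rng.uniform(0.05, 0.95, 1000)

def Bx(xq, yq):
    return spl.ev(xq, yq, dx=0, dy=1)                #  dA/dy

def By(xq, yq):
    return -spl.ev(xq, yq, dx=1, dy=0)               # -dA/dx

# central-difference divergence of the spline-derived field: tiny, because the
# two contributions are the same mixed derivative with opposite signs
h = 1e-5
divB = ((Bx(xp + h, yp) - Bx(xp - h, yp)) + (By(xp, yp + h) - By(xp, yp - h))) / (2 * h)
print("max |div B| at the particles:", np.abs(divB).max())
print("sample field:", Bx(xp[:1], yp[:1]), By(xp[:1], yp[:1]))
```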

  12. Hybrid code simulation on mode conversion in the second harmonic ICRF heating

    International Nuclear Information System (INIS)

    Sakai, K.; Takeuchi, S.; Matsumoto, M.; Sugihara, R.

    1985-01-01

    ICRF second harmonic heating of a single-species plasma is studied by using a 1-1/2 dimensional quasi-neutral hybrid code. Mode conversion, transmission and reflection of the magnetosonic waves are confirmed, both for the high- and low-field-side excitations. The ion heating by waves propagating perpendicularly to the static magnetic field is also observed

  13. A hybrid gyrokinetic ion and isothermal electron fluid code for astrophysical plasma

    Science.gov (United States)

    Kawazura, Y.; Barnes, M.

    2018-05-01

    This paper describes a new code for simulating astrophysical plasmas that solves a hybrid model composed of gyrokinetic ions (GKI) and an isothermal electron fluid (ITEF) Schekochihin et al. (2009) [9]. This model captures ion kinetic effects that are important near the ion gyro-radius scale, while electron kinetic effects are ordered out by an electron-ion mass ratio expansion. The code is developed by incorporating the ITEF approximation into AstroGK, an Eulerian δf gyrokinetics code specialized to a slab geometry Numata et al. (2010) [41]. The new code treats the linear terms in the ITEF equations implicitly while the nonlinear terms are treated explicitly. We show linear and nonlinear benchmark tests to prove the validity and applicability of the simulation code. Since the fast electron timescale is eliminated by the mass ratio expansion, the Courant-Friedrichs-Lewy condition is much less restrictive than in full gyrokinetic codes; the present hybrid code runs ~2√(m_i/m_e) ≈ 100 times faster than AstroGK with a single ion species and kinetic electrons, where m_i/m_e is the ion-electron mass ratio. The improvement in computational time makes it feasible to execute ion-scale gyrokinetic simulations with a high velocity space resolution and to run multiple simulations to determine the dependence of turbulent dynamics on parameters such as electron-ion temperature ratio and plasma beta.

  14. The failure mechanisms of HTR coated particle fuel and computer code

    International Nuclear Information System (INIS)

    Yang Lin; Liu Bing; Shao Youlin; Liang Tongxiang; Tang Chunhe

    2010-01-01

    The basic constituent unit of the fuel element in an HTR is the ceramic coated fuel particle, and the performance of the coated particle fuel determines the safety of the HTR. In addition to traditional radiation experiments, establishing computer codes is of great significance for this research. This paper mainly introduces the structure and the failure mechanisms of TRISO-coated particle fuel, as well as the basic assumptions, principles and characteristics of the main existing overseas codes. By comparing the advantages and disadvantages of these computer codes, the paper also proposes directions for future research. (authors)
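
    As a rough illustration of what such failure-probability codes evaluate, the sketch below combines the classic thin-shell pressure-vessel stress in the SiC layer with a two-parameter Weibull failure probability. This closed-form screening estimate is not the FEM model of the codes discussed above, and every number (gas pressure, layer geometry, Weibull parameters) is a hypothetical placeholder.

```python
import math

def sic_hoop_stress_mpa(p_internal_mpa, r_inner_um, thickness_um):
    """Thin spherical shell: sigma = p * r / (2 t)."""
    return p_internal_mpa * r_inner_um / (2.0 * thickness_um)

def weibull_failure_probability(sigma_mpa, sigma0_mpa=770.0, m=6.0):
    """Two-parameter Weibull: Pf = 1 - exp(-(sigma/sigma0)^m) for sigma > 0.

    sigma0 and m are assumed material parameters, not measured values.
    """
    if sigma_mpa <= 0.0:
        return 0.0
    return 1.0 - math.exp(-((sigma_mpa / sigma0_mpa) ** m))

p_gas = 25.0        # internal pressure (MPa) from fission gases + CO, assumed
r_sic = 425.0       # inner radius of the SiC layer (micrometres), assumed
t_sic = 35.0        # SiC thickness (micrometres), assumed

sigma = sic_hoop_stress_mpa(p_gas, r_sic, t_sic)
print(f"SiC tangential stress ~ {sigma:.0f} MPa, "
      f"Pf ~ {weibull_failure_probability(sigma):.2e}")
```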

  15. A general concurrent algorithm for plasma particle-in-cell simulation codes

    International Nuclear Information System (INIS)

    Liewer, P.C.; Decyk, V.K.

    1989-01-01

    We have developed a new algorithm for implementing plasma particle-in-cell (PIC) simulation codes on concurrent processors with distributed memory. This algorithm, named the general concurrent PIC algorithm (GCPIC), has been used to implement an electrostatic PIC code on the 33-node JPL Mark III Hypercube parallel computer. To decompose a PIC code using the GCPIC algorithm, the physical domain of the particle simulation is divided into sub-domains, equal in number to the number of processors, such that all sub-domains have roughly equal numbers of particles. For problems with non-uniform particle densities, these sub-domains will be of unequal physical size. Each processor is assigned a sub-domain and is responsible for updating the particles in its sub-domain. This algorithm has led to a very efficient parallel implementation of a well-benchmarked 1-dimensional PIC code. The dominant portion of the code, updating the particle positions and velocities, is nearly 100% efficient when the number of particles is increased linearly with the number of hypercube processors used so that the number of particles per processor is constant. For example, the increase in time spent updating particles in going from a problem with 11,264 particles run on 1 processor to 360,448 particles on 32 processors was only 3% (parallel efficiency of 97%). Although implemented on a hypercube concurrent computer, this algorithm should also be efficient for PIC codes on other parallel architectures and for large PIC codes on sequential computers where part of the data must reside on external disks. copyright 1989 Academic Press, Inc
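
    The load-balancing step described above, choosing sub-domain boundaries so that each processor owns roughly the same number of particles even when the density is non-uniform, can be sketched serially with position quantiles. The density profile and processor count below are arbitrary, and the real algorithm additionally exchanges particles that cross sub-domain boundaries during the push.

```python
import numpy as np

rng = np.random.default_rng(4)
nproc, npart = 8, 100_000
x = rng.beta(2.0, 5.0, npart)                 # non-uniform particle density on [0, 1]

# interior boundaries at the 1/nproc, 2/nproc, ... quantiles of the positions,
# so every sub-domain receives roughly npart/nproc particles
edges = np.quantile(x, np.linspace(0.0, 1.0, nproc + 1))
edges[0], edges[-1] = 0.0, 1.0

owner = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, nproc - 1)
counts = np.bincount(owner, minlength=nproc)
widths = np.diff(edges)
for p in range(nproc):
    print(f"proc {p}: {counts[p]:6d} particles, sub-domain width {widths[p]:.3f}")
```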

  16. Sensitivity analysis of the titan hybrid deterministic transport code for SPECT simulation

    International Nuclear Information System (INIS)

    Royston, Katherine K.; Haghighat, Alireza

    2011-01-01

    Single photon emission computed tomography (SPECT) has been traditionally simulated using Monte Carlo methods. The TITAN code is a hybrid deterministic transport code that has recently been applied to the simulation of a SPECT myocardial perfusion study. For modeling SPECT, the TITAN code uses a discrete ordinates method in the phantom region and a combined simplified ray-tracing algorithm with a fictitious angular quadrature technique to simulate the collimator and generate projection images. In this paper, we compare the results of an experiment with a physical phantom with predictions from the MCNP5 and TITAN codes. While the results of the two codes are in good agreement, they differ from the experimental data by ∼ 21%. In order to understand these large differences, we conduct a sensitivity study by examining the effect of different parameters including heart size, collimator position, collimator simulation parameter, and number of energy groups. (author)

  17. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  18. Simulation of lower hybrid current drive in enhanced reversed shear plasmas in the tokamak fusion test reactor using the lower hybrid simulation code

    International Nuclear Information System (INIS)

    Kaita, R.; Bernabei, S.; Budny, R.

    1996-01-01

    The Enhanced Reversed Shear (ERS) mode has already shown great potential for improving the performance of the Tokamak Fusion Test Reactor (TFTR) and other devices. Sustaining the ERS, however, remains an outstanding problem. Lower hybrid (LH) current drive is a possible method for modifying the current profile and controlling its time evolution. To predict its effectiveness in TFTR, the Lower Hybrid Simulation Code (LSC) model is used in the TRANSP code and the Tokamak Simulation Code (TSC). Among the results from the simulations are the following. (1) Single-pass absorption is expected in TFTR ERS plasmas. The simulations show that the LH current follows isotherms of the electron temperature. The ability to control the location of the minimum in the q profile (q_min) has been demonstrated by varying the phase velocity of the launched LH waves and observing the change in the damping location. (2) LH current drive can be used to sustain the q_min location. The tendency of q_min to drift inward, as the inductive current diffuses during the formation phase of the reversed shear discharge, is prevented by the LH current driven at a fixed radial location. If this results in an expanded plasma volume with improved confinement as high power neutral beam injection is applied, the high bootstrap currents induced during this phase can then maintain the larger q_min radius. (3) There should be no LH wave damping on energetic beam particles. The values of the perpendicular index of refraction in the calculations never exceed about 20, while ions at TFTR injection energies are resonant with waves having values closer to 100. Other issues being addressed in the study include the LH current drive efficiency in the presence of high bootstrap currents, and the effect of fast electron diffusion on LH current localization

  19. Uncertainty characterization of particle depth measurement using digital in-line holography and the hybrid method.

    Science.gov (United States)

    Gao, Jian; Guildenbecher, Daniel R; Reu, Phillip L; Chen, Jun

    2013-11-04

    In the detection of particles using digital in-line holography, measurement accuracy is substantially influenced by the hologram processing method. In particular, a number of methods have been proposed to determine the out-of-plane particle depth (z location). However, due to the lack of consistent uncertainty characterization, it has been unclear which method is best suited to a given measurement problem. In this work, depth determination accuracies of seven particle detection methods, including a recently proposed hybrid method, are systematically investigated in terms of relative depth measurement errors and uncertainties. Both synthetic and experimental holograms of particle fields are considered at conditions relevant to particle sizing and tracking. While all methods display a range of particle conditions where they are most accurate, in general the hybrid method is shown to be the most robust with depth uncertainty less than twice the particle diameter over a wide range of particle field conditions.

  20. Development of 2D particle-in-cell code to simulate high current, low ...

    Indian Academy of Sciences (India)

    A code for 2D space-charge dominated beam dynamics studies in beam transport lines is developed. The code is used for particle-in-cell (PIC) simulation of a z-uniform beam in a channel containing solenoids and drift space. It can also simulate a transport line where quadrupoles are used for focusing the beam.

  1. Particle number emissions of gasoline hybrid electric vehicles; Partikelanzahl-Emission bei Hybridfahrzeugen mit Ottomotor

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Scott [Horiba Instruments Inc., Ann Arbor, MI (United States)

    2012-04-15

    Hybrid Electric Vehicles (HEV) are commonly reputed to be environmentally friendly. Different studies show that this assumption raises some questions in terms of particle number emissions. Against the background that upcoming emission standards will not only limit particle matter emissions but also particle number emissions for gasoline engines, the exhaust behaviour of downsized gasoline engines used in HEV should be investigated more extensively. A Horiba study compares the particle number emissions of a gasoline vehicle to those of a gasoline powered HEV. (orig.)

  2. Exploring the tensile strain energy absorption of hybrid modified epoxies containing soft particles

    International Nuclear Information System (INIS)

    Abadyan, M.; Bagheri, R.; Kouchakzadeh, M.A.; Hosseini Kordkheili, S.A.

    2011-01-01

    Research highlights: → Two epoxy systems have been modified by combination of fine and coarse modifiers. → While both hybrid systems reveal synergistic KIC, no synergism is observed in tensile test. → It is found that coarse particles induce stress concentration in hybrid samples. → Stress concentration leads to fracture of samples at lower energy absorption levels. -- Abstract: In this paper, tensile strain energy absorption of two different hybrid modified epoxies has been systematically investigated. In one system, epoxy has been modified by amine-terminated butadiene acrylonitrile (ATBN) and hollow glass spheres as fine and coarse modifiers, respectively. The other hybrid epoxy has been modified by the combination of ATBN and recycled tire particles. The results of fracture toughness measurement of blends revealed synergistic toughening for both hybrid systems in some formulations. However, no evidence of synergism is observed in tensile test of hybrid samples. Scanning electron microscope (SEM), transmission optical microscope (TOM) and finite element (FEM) simulation were utilized to study deformation mechanisms of hybrid systems in tensile test. It is found that coarse particles induce stress concentration in hybrid samples. This produces non-uniform strain localized regions which lead to fracture of hybrid samples at lower tensile loading and energy absorption levels.

  3. Optimal energy management of a hybrid electric powertrain system using improved particle swarm optimization

    International Nuclear Information System (INIS)

    Chen, Syuan-Yi; Hung, Yi-Hsuan; Wu, Chien-Hsun; Huang, Siang-Ting

    2015-01-01

    Highlights: • Online sub-optimal energy management using IPSO. • A second-order HEV model with 5 major segments was built. • IPSO with equivalent-fuel fitness function using 5 particles. • Engine, rule-based control, PSO, IPSO and ECMS are compared. • Max. 31+% fuel economy and 56+% energy consumption improved. - Abstract: This study developed an online suboptimal energy management system by using improved particle swarm optimization (IPSO) for engine/motor hybrid electric vehicles. The vehicle was modeled on the basis of second-order dynamics, and featured five major segments: an electric motor, a spark ignition engine, a lithium battery, transmission and vehicle dynamics, and a driver model. To manage the power distribution of the dual power sources, the IPSO was equipped with three inputs (rotational speed, battery state-of-charge, and demanded torque) and one output (power split ratio). Five steps were developed for IPSO: (1) initialization; (2) determination of the fitness function; (3) selection and memorization; (4) modification of position and velocity; and (5) a stopping rule. Equivalent fuel consumption by the engine and motor was used as the fitness function with five particles, and the IPSO-based vehicle control unit was completed and integrated with the vehicle simulator. To quantify the energy improvement of IPSO, a four-mode rule-based control (system ready, motor only, engine only, and hybrid modes) was designed according to the engine efficiency and rotational speed. A three-loop Equivalent Consumption Minimization Strategy (ECMS) was coded as the best case. The simulation results revealed that IPSO searches for the optimal solution more efficiently than conventional PSO does. In two standard driving cycles, ECE and FTP, the improvements in the equivalent fuel consumption and energy consumption compared to baseline were (24.25%, 45.27%) and (31.85%, 56.41%), respectively, for the IPSO. The CO2 emission for all five cases (pure engine, rule-based, PSO
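
    The five IPSO steps listed in the abstract map onto a compact particle-swarm loop. The sketch below is a generic PSO skeleton with five particles searching for the power-split ratio at a single operating point; the fitness function equiv_fuel and all coefficients are illustrative placeholders rather than the authors' equivalent-fuel model, and the "improved" features of IPSO would be layered on top of this basic loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def equiv_fuel(split, speed, soc, torque):
    """Placeholder equivalent-fuel fitness: engine share plus a penalty
    for depleting the battery (purely illustrative)."""
    engine_power = split * torque * speed
    battery_penalty = (1.0 - split) * torque * speed * max(0.0, 0.6 - soc)
    return engine_power * 2.0e-4 + battery_penalty * 5.0e-4

def pso_split(speed, soc, torque, n_particles=5, iters=30, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(0.0, 1.0, n_particles)       # candidate power-split ratios
    v = np.zeros(n_particles)
    pbest = x.copy()
    pbest_f = np.array([equiv_fuel(xi, speed, soc, torque) for xi in x])
    gbest = pbest[pbest_f.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # classic velocity/position update (step 4), clipped to the feasible range
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 1.0)
        f = np.array([equiv_fuel(xi, speed, soc, torque) for xi in x])
        better = f < pbest_f                     # selection and memorization (step 3)
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()]
    return gbest

print(pso_split(speed=200.0, soc=0.55, torque=80.0))  # split ratio for one operating point
```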

  4. DANTSYS: A diffusion accelerated neutral particle transport code system

    International Nuclear Information System (INIS)

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O'Dell, R.D.; Walters, W.F.

    1995-06-01

    The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are Solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral triangle spatial mesh. The THREEDANT Solver Module solves the time independent, three-dimensional transport equation for XYZ and RZΘ symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond differencing method with set-to-zero fixup with changes to accommodate the generalized spatial meshing
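
    The standard diamond-differencing scheme mentioned for the ONEDANT and TWODANT solvers can be illustrated in its simplest setting. The sketch below performs one slab-geometry sweep for a single direction and energy group; it is the textbook diamond-difference relation, not DANTSYS code, and the cross sections, source and mesh are arbitrary assumptions.

```python
import numpy as np

def dd_sweep(mu, sigma_t, q, dx, psi_in=0.0):
    """One diamond-difference transport sweep in slab geometry for a single
    direction cosine mu > 0 and one energy group.

    sigma_t : array of total cross sections per cell (1/cm)
    q       : array of cell source terms
    dx      : cell width (cm)
    psi_in  : incoming angular flux at the left boundary
    Returns cell-centred angular fluxes.
    """
    n = len(sigma_t)
    psi_c = np.zeros(n)
    for i in range(n):
        # mu (psi_out - psi_in)/dx + sigma_t psi_c = q, with psi_c = (psi_in + psi_out)/2
        psi_c[i] = (q[i] + 2.0 * mu / dx * psi_in) / (2.0 * mu / dx + sigma_t[i])
        psi_out = 2.0 * psi_c[i] - psi_in      # diamond relation (no set-to-zero fixup shown)
        psi_in = psi_out
    return psi_c

print(dd_sweep(mu=0.5773, sigma_t=np.full(50, 1.0), q=np.full(50, 0.5), dx=0.2))
```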

  5. DANTSYS: A diffusion accelerated neutral particle transport code system

    Energy Technology Data Exchange (ETDEWEB)

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O'Dell, R.D.; Walters, W.F.

    1995-06-01

    The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are Solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral triangle spatial mesh. The THREEDANT Solver Module solves the time independent, three-dimensional transport equation for XYZ and RZΘ symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond differencing method with set-to-zero fixup with changes to accommodate the generalized spatial meshing.

  6. GRADSPH: A parallel smoothed particle hydrodynamics code for self-gravitating astrophysical fluid dynamics

    NARCIS (Netherlands)

    Vanaverbeke, S.; Keppens, R.; Poedts, S.; Boffin, H.

    2009-01-01

    We describe the algorithms implemented in the first version of GRADSPH, a parallel, tree-based, smoothed particle hydrodynamics code for simulating self-gravitating astrophysical systems written in FORTRAN 90. The paper presents details on the implementation of the Smoothed Particle Hydro (SPH)

  7. Numerical code to determine the particle trapping region in the LISA machine

    International Nuclear Information System (INIS)

    Azevedo, M.T. de; Raposo, C.C. de; Tomimura, A.

    1984-01-01

    A numerical code is constructed to determine the trapping region in a machine like LISA. The variable magnetic field is two-dimensional and is coupled to the Runge-Kutta integration through the Tchebichev polynomial. Various particle orbits including particle interactions were analysed. Besides this, a strong electric field is introduced to see the possible effects happening inside the plasma. (Author) [pt

  8. Implementing particle-in-cell plasma simulation code on the BBN TC2000

    International Nuclear Information System (INIS)

    Sturtevant, J.E.; Maccabe, A.B.

    1990-01-01

    The BBN TC2000 is a multiple instruction, multiple data (MIMD) machine that combines a physically distributed memory with a logically shared memory programming environment using the unique Butterfly switch. Particle-In-Cell (PIC) plasma simulations model the interaction of charged particles with electric and magnetic fields. This paper describes the implementation of both a 1-D electrostatic and a 2 1/2-D electromagnetic PIC (particle-in-cell) plasma simulation code on a BBN TC2000. Performance is compared to implementations of the same code on the shared memory Sequent Balance and distributed memory Intel iPSC hypercube

  9. Characterizations of Polystyrene-Based Hybrid Particles Containing Hydrophobic Mg(OH)2 Powder and Composites Fabricated by Employing Resultant Hybrid Particles

    Directory of Open Access Journals (Sweden)

    Shuichi Kimura

    2007-01-01

    unchanged, even when the ST-1 powder content increased from 10 to 50 phr. Furthermore, a composite fabricated by employing the hybrid particles achieved homogenous distribution of ST-1 powder and showed a higher oxygen index than that of a composite fabricated by directly mixing of PS pellets and ST-1 powder.

  10. Design and Analysis of Self-Healing Tree-Based Hybrid Spectral Amplitude Coding OCDMA System

    Directory of Open Access Journals (Sweden)

    Waqas A. Imtiaz

    2017-01-01

    This paper presents an efficient tree-based hybrid spectral amplitude coding optical code division multiple access (SAC-OCDMA) system that is able to provide high capacity transmission along with fault detection and restoration throughout the passive optical network (PON). Enhanced multidiagonal (EMD) code is adapted to elevate system’s performance, which negates multiple access interference and associated phase induced intensity noise through efficient two-matrix structure. Moreover, system connection availability is enhanced through an efficient protection architecture with tree and star-ring topology at the feeder and distribution level, respectively. The proposed hybrid architecture aims to provide seamless transmission of information at minimum cost. Mathematical model based on Gaussian approximation is developed to analyze performance of the proposed setup, followed by simulation analysis for validation. It is observed that the proposed system supports 64 subscribers, operating at the data rates of 2.5 Gbps and above. Moreover, survivability and cost analysis in comparison with existing schemes show that the proposed tree-based hybrid SAC-OCDMA system provides the required redundancy at minimum cost of infrastructure and operation.

  11. Neutron secondary-particle production cross sections and their incorporation into Monte-Carlo transport codes

    International Nuclear Information System (INIS)

    Brenner, D.J.; Prael, R.E.; Little, R.C.

    1987-01-01

    Realistic simulations of the passage of fast neutrons through tissue require a large quantity of cross-sectional data. What are needed are differential (in particle type, energy and angle) cross sections. A computer code is described which produces such spectra for neutrons above ∼14 MeV incident on light nuclei such as carbon and oxygen. Comparisons have been made with experimental measurements of double-differential secondary charged-particle production on carbon and oxygen at energies from 27 to 60 MeV; they indicate that the model is adequate in this energy range. In order to utilize fully the results of these calculations, they should be incorporated into a neutron transport code. This requires defining a generalized format for describing charged-particle production, putting the calculated results in this format, interfacing the neutron transport code with these data, and charged-particle transport. The design and development of such a program is described. 13 refs., 3 figs

  12. Development of a relativistic Particle In Cell code PARTDYN for linear accelerator beam transport

    Energy Technology Data Exchange (ETDEWEB)

    Phadte, D., E-mail: deepraj@rrcat.gov.in [LPD, Raja Ramanna Centre for Advanced Technology, Indore 452013 (India); Patidar, C.B.; Pal, M.K. [MAASD, Raja Ramanna Centre for Advanced Technology, Indore (India)

    2017-04-11

    A relativistic Particle In Cell (PIC) code PARTDYN is developed for the beam dynamics simulation of z-continuous and bunched beams. The code is implemented in MATLAB using its MEX functionality, which allows both ease of development as well as higher performance similar to a compiled language like C. The beam dynamics calculations carried out by the code are compared with analytical results and with other well developed codes like PARMELA and BEAMPATH. The effect of a finite number of simulation particles on the emittance growth of intense beams has been studied. Corrections to the RF cavity field expressions were incorporated in the code so that the fields could be calculated correctly. The deviations of the beam dynamics results between PARTDYN and BEAMPATH for a cavity driven in zero-mode have been discussed. The beam dynamics studies of the Low Energy Beam Transport (LEBT) using PARTDYN have been presented.

  13. Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition

    Energy Technology Data Exchange (ETDEWEB)

    Witherspoon, F. Douglas [HyperV Technologies Corp.; Welch, Dale R. [Voss Scientific, LLC; Thompson, John R. [FAR-TECH, Inc.; MacFarlane, Joeseph J. [Prism Computational Sciences Inc.; Phillips, Michael W. [Advanced Energy Systems, Inc.; Bruner, Nicki [Voss Scientific, LLC; Mostrom, Chris [Voss Scientific, LLC; Thoma, Carsten [Voss Scientific, LLC; Clark, R. E. [Voss Scientific, LLC; Bogatu, Nick [FAR-TECH, Inc.; Kim, Jin-Soo [FAR-TECH, Inc.; Galkin, Sergei [FAR-TECH, Inc.; Golovkin, Igor E. [Prism Computational Sciences, Inc.; Woodruff, P. R. [Prism Computational Sciences, Inc.; Wu, Linchun [HyperV Technologies Corp.; Messer, Sarah J. [HyperV Technologies Corp.

    2014-05-20

    Radiation processes play an important role in the study of both fast ignition and other inertial confinement schemes, such as plasma jet driven magneto-inertial fusion, both in their effect on energy balance, and in generating diagnostic signals. In the latter case, warm and hot dense matter may be produced by the convergence of a plasma shell formed by the merging of an assembly of high Mach number plasma jets. This innovative approach has the potential advantage of creating matter of high energy densities in voluminous amount compared with high power lasers or particle beams. An important application of this technology is as a plasma liner for the flux compression of magnetized plasma to create ultra-high magnetic fields and burning plasmas. HyperV Technologies Corp. has been developing plasma jet accelerator technology in both coaxial and linear railgun geometries to produce plasma jets of sufficient mass, density, and velocity to create such imploding plasma liners. An enabling tool for the development of this technology is the ability to model the plasma dynamics, not only in the accelerators themselves, but also in the resulting magnetized target plasma and within the merging/interacting plasma jets during transport to the target. Welch pioneered numerical modeling of such plasmas (including for fast ignition) using the LSP simulation code. Lsp is an electromagnetic, parallelized, plasma simulation code under development since 1995. It has a number of innovative features making it uniquely suitable for modeling high energy density plasmas including a hybrid fluid model for electrons that allows electrons in dense plasmas to be modeled with a kinetic or fluid treatment as appropriate. In addition to in-house use at Voss Scientific, several groups carrying out research in Fast Ignition (LLNL, SNL, UCSD, AWE (UK), and Imperial College (UK)) also use LSP. A collaborative team consisting of HyperV Technologies Corp., Voss Scientific LLC, FAR-TECH, Inc., Prism

  14. Synthesis and hyperthermia property of hydroxyapatite-ferrite hybrid particles by ultrasonic spray pyrolysis

    International Nuclear Information System (INIS)

    Inukai, Akihiro; Sakamoto, Naonori; Aono, Hiromichi; Sakurai, Osamu; Shinozaki, Kazuo; Suzuki, Hisao; Wakiya, Naoki

    2011-01-01

    Biocompatible hybrid particles composed of hydroxyapatite (Ca10(PO4)6(OH)2, HAp) and ferrite (γ-Fe2O3 and Fe3O4) were synthesized using a two-step procedure. First, the ferrite particles were synthesized by co-precipitation. Second, the suspension, which was composed of ferrite particles by a co-precipitation method, Ca(NO3)2, and H3PO4 aqueous solution with surfactant, was nebulized into mist ultrasonically. Then the mist was pyrolyzed at 1000 °C to synthesize HAp-ferrite hybrid particles. The molar ratio of Fe ion and HAp was (Fe2+ and Fe3+)/HAp = 6. The synthesized hybrid particle was round and dimpled, and the average diameter of a secondary particle was 740 nm. The cross section of the synthesized hybrid particles revealed two phases: HAp and ferrite. The ferrite was coated with HAp. The synthesized hybrid particles show a saturation magnetization of 11.8 emu/g. The net saturation magnetization of the ferrite component was calculated as 32.5 emu/g. The temperature increase in the AC-magnetic field (370 kHz, 1.77 kA/m) was 9 °C with 3.4 g (the ferrite component was 1.0 g). These results show that synthesized hybrid particles are biocompatible and might be useful for magnetic transport and hyperthermia studies. - Research Highlights: → Biocompatible hybrid particles composed of hydroxyapatite (Ca10(PO4)6(OH)2, HAp) and ferrite (γ-Fe2O3 and Fe3O4) were synthesized using a two-step synthesis, which is comprised of co-precipitation and ultrasonic spray pyrolysis. → Cross sectional TEM observation and X-ray diffraction revealed that synthesized hybrid particles showed two phases (HAp and ferrite), and the ferrite was coated with HAp. → The saturation magnetization of ferrite in the HAp-ferrite hybrid was 32.49 emu/g. → The increased temperature in the AC-magnetic field (370 kHz, 1.77 kA/m) was 9 °C with 3.4 g (the ferrite component was 1.0 g).

  15. Adaptation of multidimensional group particle tracking and particle wall-boundary condition model to the FDNS code

    Science.gov (United States)

    Chen, Y. S.; Farmer, R. C.

    1992-01-01

    A particulate two-phase flow CFD model was developed based on the FDNS code which is a pressure based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and the wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, a Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.

  16. Nuclear Characteristics of SPNDs and Preliminary Calculation of Hybrid Fixed Incore Detector with Monte Carlo Code

    International Nuclear Information System (INIS)

    Koo, Bon Seung; Lee, Kyung Hoon; Song, Jae Seung; Park, Sang Yoon

    2013-01-01

    In this paper, the basic nuclear characteristics of major emitter materials were surveyed. In addition, preliminary calculations of a Cobalt-Vanadium fixed incore detector were performed using the Monte Carlo code. Calculational results were cross-checked by KARMA. KARMA is a two-dimensional multigroup transport theory code developed by KAERI and approved by the Korean regulatory agency to be employed as a nuclear design tool for a Korean commercial pressurized water reactor. The nuclear characteristics of the major emitter materials were surveyed, and preliminary calculations of the hybrid fixed incore detector were performed with the MCNP code. The eigenvalue and pin-by-pin fission power distributions were calculated and showed good agreement with the KARMA calculation results. As future work, gamma power distributions as well as several types of cross sections (XS) for the emitter, insulator, and collector regions for a Co-V ICI assembly will be evaluated and compared

  17. Hybrid luminescent/magnetic nanostructured porous silicon particles for biomedical applications

    Science.gov (United States)

    Muñoz-Noval, Álvaro; Sánchez-Vaquero, Vanessa; Torres-Costa, Vicente; Gallach, Darío; Ferro-Llanos, Vicente; Javier Serrano, José; Manso-Silván, Miguel; García-Ruiz, Josefa Predestinación; Del Pozo, Francisco; Martín-Palma, Raúl J.

    2011-02-01

    This work describes a novel process for the fabrication of hybrid nanostructured particles showing intense tunable photoluminescence and a simultaneous ferromagnetic behavior. The fabrication process involves the synthesis of nanostructured porous silicon (NPSi) by chemical anodization of crystalline silicon and subsequent in pore growth of Co nanoparticles by electrochemically-assisted infiltration. Final particles are obtained by subsequent sonication of the Co-infiltrated NPSi layers and conjugation with poly(ethylene glycol) aiming at enhancing their hydrophilic character. These particles respond to magnetic fields, emit light in the visible when excited in the UV range, and internalize into human mesenchymal stem cells with no apoptosis induction. Furthermore, cytotoxicity in in-vitro systems confirms their biocompatibility and the viability of the cells after incorporation of the particles. The hybrid nanostructured particles might represent powerful research tools as cellular trackers or in cellular therapy since they allow combining two or more properties into a single particle.

  18. Microfluidic CODES: a scalable multiplexed electronic sensor for orthogonal detection of particles in microfluidic channels.

    Science.gov (United States)

    Liu, Ruxiu; Wang, Ningquan; Kamili, Farhan; Sarioglu, A Fatih

    2016-04-21

    Numerous biophysical and biochemical assays rely on spatial manipulation of particles/cells as they are processed on lab-on-a-chip devices. Analysis of spatially distributed particles on these devices typically requires microscopy negating the cost and size advantages of microfluidic assays. In this paper, we introduce a scalable electronic sensor technology, called microfluidic CODES, that utilizes resistive pulse sensing to orthogonally detect particles in multiple microfluidic channels from a single electrical output. Combining the techniques from telecommunications and microfluidics, we route three coplanar electrodes on a glass substrate to create multiple Coulter counters producing distinct orthogonal digital codes when they detect particles. We specifically design a digital code set using the mathematical principles of Code Division Multiple Access (CDMA) telecommunication networks and can decode signals from different microfluidic channels with >90% accuracy through computation even if these signals overlap. As a proof of principle, we use this technology to detect human ovarian cancer cells in four different microfluidic channels fabricated using soft lithography. Microfluidic CODES offers a simple, all-electronic interface that is well suited to create integrated, low-cost lab-on-a-chip devices for cell- or particle-based assays in resource-limited settings.
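
    The decoding idea borrowed from CDMA can be sketched in a few lines: each channel is assigned an orthogonal code template, and the single composite output is correlated against every template to decide which channel produced a detected particle. The example below uses Walsh-Hadamard codes and idealized +/-1 pulse shapes purely for illustration; the paper designs its own code set and works with real resistive-pulse waveforms.

```python
import numpy as np

def hadamard(n):
    """Walsh-Hadamard matrix of size n (n a power of two); its rows are
    mutually orthogonal +/-1 code templates."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

codes = hadamard(4)[1:]              # 3 channels, dropping the all-ones row
rng = np.random.default_rng(1)

# A particle passing through channel 2 imprints that channel's code on the
# single electrical output; a weaker particle in channel 0 overlaps with it.
signal = 1.0 * codes[2] + 0.6 * codes[0] + 0.1 * rng.standard_normal(4)

# Decode by correlating the composite signal with every template.
scores = codes @ signal
print(scores, "-> strongest channel:", int(np.argmax(np.abs(scores))))
```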

  19. Auxiliary plasma heating and fueling models for use in particle simulation codes

    International Nuclear Information System (INIS)

    Procassini, R.J.; Cohen, B.I.

    1989-01-01

    Computational models of a radiofrequency (RF) heating system and neutral-beam injector are presented. These physics packages, when incorporated into a particle simulation code allow one to simulate the auxiliary heating and fueling of fusion plasmas. The RF-heating package is based upon a quasilinear diffusion equation which describes the slow evolution of the heated particle distribution. The neutral-beam injector package models the charge exchange and impact ionization processes which transfer energy and particles from the beam to the background plasma. Particle simulations of an RF-heated and a neutral-beam-heated simple-mirror plasma are presented. 8 refs., 5 figs

  20. THREEDANT: A code to perform three-dimensional, neutral particle transport calculations

    International Nuclear Information System (INIS)

    Alcouffe, R.E.

    1994-01-01

    The THREEDANT code solves the three-dimensional neutral particle transport equation in its first order, multigroup, discrete ordinates form. The code allows an unlimited number of groups (depending upon the cross section set), angular quadrature up to S-100, and unlimited Pn order, again depending upon the cross section set. The code has three options for spatial differencing: diamond with set-to-zero fixup, adaptive weighted diamond, and linear modal. The geometry options are XYZ and RZΘ with a special XYZ option based upon a volume fraction method. This allows objects or bodies of any shape to be modelled as input, which gives the code as much geometric description flexibility as the Monte Carlo code MCNP. The transport equation is solved by source iteration accelerated by the DSA method. Both inner and outer iterations are so accelerated. Some results are presented which demonstrate the effectiveness of these techniques. The code is available on several types of computing platforms
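
    The source-iteration strategy mentioned above alternates transport sweeps with an update of the scattering source until the scalar flux converges; DSA then accelerates that loop. The sketch below shows the unaccelerated iteration for one group in slab geometry with a two-direction Gauss quadrature. It is a schematic of the method, not THREEDANT itself, and all data are arbitrary assumptions.

```python
import numpy as np

def sweep(mu, sigma_t, source, dx):
    """Diamond-difference sweep for one direction with vacuum boundaries."""
    n = len(sigma_t)
    psi = np.zeros(n)
    psi_in = 0.0
    cells = range(n) if mu > 0 else range(n - 1, -1, -1)
    for i in cells:
        psi[i] = (source[i] + 2.0 * abs(mu) / dx * psi_in) / (2.0 * abs(mu) / dx + sigma_t[i])
        psi_in = 2.0 * psi[i] - psi_in
    return psi

def source_iteration(sigma_t, sigma_s, q_ext, dx, tol=1e-8):
    """Unaccelerated source iteration with a two-direction (S2) quadrature."""
    mus, wts = (-0.5773503, 0.5773503), (1.0, 1.0)   # Gauss points; weights sum to 2
    phi = np.zeros_like(q_ext)
    while True:
        src = 0.5 * (sigma_s * phi + q_ext)          # isotropic angular source density
        phi_new = sum(w * sweep(mu, sigma_t, src, dx) for mu, w in zip(mus, wts))
        if np.abs(phi_new - phi).max() < tol:
            return phi_new
        phi = phi_new

phi = source_iteration(np.full(40, 1.0), np.full(40, 0.5), np.full(40, 1.0), dx=0.25)
print(phi.max())
```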

  1. StarSmasher: Smoothed Particle Hydrodynamics code for smashing stars and planets

    Science.gov (United States)

    Gaburov, Evghenii; Lombardi, James C., Jr.; Portegies Zwart, Simon; Rasio, F. A.

    2018-05-01

    Smoothed Particle Hydrodynamics (SPH) is a Lagrangian particle method that approximates a continuous fluid as discrete nodes, each carrying various parameters such as mass, position, velocity, pressure, and temperature. In an SPH simulation the resolution scales with the particle density; StarSmasher is able to handle both equal-mass and equal number-density particle models. StarSmasher solves for hydro forces by calculating the pressure for each particle as a function of the particle's properties - density, internal energy, and internal properties (e.g. temperature and mean molecular weight). The code implements variational equations of motion and libraries to calculate the gravitational forces between particles using direct summation on NVIDIA graphics cards. Using a direct summation instead of a tree-based algorithm for gravity increases the accuracy of the gravity calculations at the cost of speed. The code uses a cubic spline for the smoothing kernel and an artificial viscosity prescription coupled with a Balsara Switch to prevent unphysical interparticle penetration. The code also implements an artificial relaxation force to the equations of motion to add a drag term to the calculated accelerations during relaxation integrations. Initially called StarCrash, StarSmasher was developed originally by Rasio.
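
    Two ingredients named in the description, the cubic spline smoothing kernel and direct-summation gravity, are compact enough to sketch. The code below is a generic illustration of those two pieces, not StarSmasher's implementation; the softening length, units and particle set are arbitrary assumptions.

```python
import numpy as np

def cubic_spline_w(r, h):
    """Standard 3D cubic spline SPH kernel W(r, h)."""
    q = r / h
    sigma = 1.0 / (np.pi * h ** 3)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def direct_gravity(pos, mass, eps=1.0e-2, G=1.0):
    """O(N^2) direct-summation gravitational acceleration with Plummer softening."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        dx = pos - pos[i]                          # vectors from particle i to all others
        r2 = (dx ** 2).sum(axis=1) + eps ** 2
        r2[i] = np.inf                             # exclude the self-force
        acc[i] = G * (mass[:, None] * dx / r2[:, None] ** 1.5).sum(axis=0)
    return acc

rng = np.random.default_rng(2)
pos = rng.standard_normal((100, 3))
mass = np.full(100, 1.0 / 100)
print(cubic_spline_w(0.5, 1.0), direct_gravity(pos, mass)[0])
```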

  2. Progress of laser-plasma interaction simulations with the particle-in-cell code

    International Nuclear Information System (INIS)

    Sakagami, Hitoshi; Kishimoto, Yasuaki; Sentoku, Yasuhiko; Taguchi, Toshihiro

    2005-01-01

    As the laser-plasma interaction is a non-equilibrium, non-linear and relativistic phenomenon, we must introduce a microscopic method, namely, the relativistic electromagnetic PIC (Particle-In-Cell) simulation code. The PIC code requires a huge number of particles to validate simulation results, and its task is very computation-intensive. Thus simulation research with the PIC code has been progressing along with advances in computer technology. Recently, parallel computers with tremendous computational power have become available, and thus we can perform three-dimensional PIC simulations of the laser-plasma interaction to investigate laser fusion. Some simulation results are shown with figures. We discuss a recent trend of large-scale PIC simulations that enable direct comparison between experimental facts and computational results. We also present discharge/lightning simulations performed with the extended PIC code, which includes various atomic and relaxation processes. (author)

  3. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    Science.gov (United States)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP parallelized C++ and OpenCL and includes octree based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.

  4. Computational analysis of electrical conduction in hybrid nanomaterials with embedded non-penetrating conductive particles

    Science.gov (United States)

    Cai, Jizhe; Naraghi, Mohammad

    2016-08-01

    In this work, a comprehensive multi-resolution two-dimensional (2D) resistor network model is proposed to analyze the electrical conductivity of hybrid nanomaterials made of insulating matrix with conductive particles such as CNT reinforced nanocomposites and thick film resistors. Unlike existing approaches, our model takes into account the impenetrability of the particles and their random placement within the matrix. Moreover, our model presents a detailed description of intra-particle conductivity via finite element analysis, which to the authors’ best knowledge has not been addressed before. The inter-particle conductivity is assumed to be primarily due to electron tunneling. The model is then used to predict the electrical conductivity of electrospun carbon nanofibers as a function of microstructural parameters such as turbostratic domain alignment and aspect ratio. To simulate the microstructure of single CNF, randomly positioned nucleation sites were seeded and grown as turbostratic particles with anisotropic growth rates. Particle growth was in steps and growth of each particle in each direction was stopped upon contact with other particles. The study points to the significant contribution of both intra-particle and inter-particle conductivity to the overall conductivity of hybrid composites. Influence of particle alignment and anisotropic growth rate ratio on electrical conductivity is also discussed. The results show that partial alignment in contrast to complete alignment can result in maximum electrical conductivity of whole CNF. High degrees of alignment can adversely affect conductivity by lowering the probability of the formation of a conductive path. The results demonstrate approaches to enhance electrical conductivity of hybrid materials through controlling their microstructure which is applicable not only to carbon nanofibers, but also many other types of hybrid composites such as thick film resistors.
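
    The resistor-network idea behind this kind of model can be reduced to a small sketch: particle centres become nodes, each sufficiently close pair is joined by a tunneling-type conductance that decays exponentially with separation, and the network conductance between two electrodes follows from a Kirchhoff (graph-Laplacian) solve. Everything below (2D points, cutoff, decay length) is an illustrative assumption, not the paper's multi-resolution model with intra-particle finite elements.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
pts = rng.random((n, 2))                     # particle centres in a unit square

# Tunneling-type conductance between pairs closer than a cutoff distance.
cutoff, decay = 0.25, 0.05
G = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = np.linalg.norm(pts[i] - pts[j])
        if d < cutoff:
            G[i, j] = G[j, i] = np.exp(-d / decay)

L = np.diag(G.sum(axis=1)) - G               # graph Laplacian (Kirchhoff matrix)

# Electrodes: leftmost node held at 1 V, rightmost node at 0 V.
src, snk = int(pts[:, 0].argmin()), int(pts[:, 0].argmax())
fixed = {src: 1.0, snk: 0.0}
free = [k for k in range(n) if k not in fixed]

# Solve L_ff v_f = -L_fc v_c for the free node voltages; lstsq keeps the solve
# well behaved even if some small cluster is disconnected from the electrodes.
Lff = L[np.ix_(free, free)]
rhs = -L[np.ix_(free, list(fixed))] @ np.array(list(fixed.values()))
v = np.zeros(n)
v[list(fixed)] = list(fixed.values())
v[free] = np.linalg.lstsq(Lff, rhs, rcond=None)[0]

current = (G[src] * (v[src] - v)).sum()      # current injected at the source electrode
print("effective conductance:", current)     # equals current / (1 V)
```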

  5. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    Science.gov (United States)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission however is optimal at all CSNRs, if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. Analog part uses linear encoding to transmit the quantization error which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
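
    The digital/analog split can be shown with a toy bandwidth-expansion-2 encoder: each source sample is scalar-quantized (the digital channel use) and the residual quantization error is sent by linear analog modulation in a second channel use, with a power split between the two parts. This is only a schematic of the HDA idea under assumed parameters (step size, power split); it is not the paper's optimized predictive/transform system, and the channel coding of the digital part is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

def hda_encode(x, step=0.5, power=1.0, alpha=0.6):
    """Toy HDA encoder for one Gaussian sample and two channel uses.

    Digital part: uniform scalar quantizer index (would be channel-coded in practice).
    Analog part : quantization error, scaled to spend the remaining power (1 - alpha).
    """
    q_index = np.round(x / step)
    error = x - q_index * step
    gain = np.sqrt((1.0 - alpha) * power) / np.sqrt(step ** 2 / 12.0)  # error variance ~ step^2/12
    return q_index, gain * error, gain

def hda_decode(q_index, analog_rx, gain, step=0.5):
    """Reconstruct: quantizer point plus the (noisy) analog refinement."""
    return q_index * step + analog_rx / gain

x = rng.standard_normal()
idx, analog_tx, g = hda_encode(x)
analog_rx = analog_tx + 0.05 * rng.standard_normal()   # analog channel noise (digital part assumed decoded)
print(x, hda_decode(idx, analog_rx, g))
```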

  6. Design of sampling tools for Monte Carlo particle transport code JMCT

    International Nuclear Information System (INIS)

    Shangguan Danhua; Li Gang; Zhang Baoyin; Deng Li

    2012-01-01

    A class of sampling tools for the general Monte Carlo particle transport code JMCT is designed. Two ways are provided to sample from distributions. One is the utilization of special sampling methods for specific distributions; the other is the utilization of general sampling methods for arbitrary discrete distributions and one-dimensional continuous distributions on a finite interval. Some open source codes are included in the general sampling method for the maximum convenience of users. The results show that distributions commonly encountered in particle transport can be sampled correctly with these tools, while assuring the user's convenience. (authors)
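
    Both general-purpose samplers described (arbitrary discrete distributions, and 1D continuous distributions tabulated on a finite interval) reduce to inverse-CDF sampling. The sketch below is a generic illustration of those two routines, not JMCT's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_discrete(probs, n):
    """Inverse-CDF sampling from an arbitrary discrete distribution."""
    cdf = np.cumsum(probs) / np.sum(probs)
    return np.searchsorted(cdf, rng.random(n))

def sample_continuous(x, pdf, n):
    """Inverse-CDF sampling from a tabulated 1D pdf on [x[0], x[-1]],
    using trapezoidal integration and linear interpolation of the inverse CDF."""
    cdf = np.concatenate(([0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(x))))
    cdf /= cdf[-1]
    return np.interp(rng.random(n), cdf, x)

print(np.bincount(sample_discrete([0.2, 0.5, 0.3], 10000)) / 10000.0)
xs = np.linspace(0.0, np.pi, 200)
print(sample_continuous(xs, np.sin(xs), 10000).mean())   # should be close to pi/2
```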

  7. Introduction to the Latest Version of the Test-Particle Monte Carlo Code Molflow+

    CERN Document Server

    Ady, M

    2014-01-01

    The Test-Particle Monte Carlo code Molflow+ is getting more and more attention from the scientific community needing detailed 3D calculations of vacuum in the molecular flow regime, mainly, but not only, in the particle accelerator field. Substantial changes, bug fixes, geometry-editing and modelling features, and computational speed improvements have been made to the code in the last couple of years. This paper will outline some of these new features, and show examples of applications to the design and analysis of vacuum systems at CERN and elsewhere.

  8. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    Energy Technology Data Exchange (ETDEWEB)

    PLIMPTON,STEVEN J.; SEIDEL,DAVID B.; PASIK,MICHAEL F.; COATS,REBECCA S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.

  9. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    International Nuclear Information System (INIS)

    Plimpton, Steven J.; Seidel, David B.; Pasik, Michael F.; Coats, Rebecca S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER

  10. Implementation of a 3D plasma particle-in-cell code on a MIMD parallel computer

    International Nuclear Information System (INIS)

    Liewer, P.C.; Lyster, P.; Wang, J.

    1993-01-01

    A three-dimensional plasma particle-in-cell (PIC) code has been implemented on the Intel Delta MIMD parallel supercomputer using the General Concurrent PIC algorithm. The GCPIC algorithm uses a domain decomposition to divide the computation among the processors: A processor is assigned a subdomain and all the particles in it. Particles must be exchanged between processors as they move. Results are presented comparing the efficiency for 1-, 2- and 3-dimensional partitions of the three dimensional domain. This algorithm has been found to be very efficient even when a large fraction (e.g. 30%) of the particles must be exchanged at every time step. On the 512-node Intel Delta, up to 125 million particles have been pushed with an electrostatic push time of under 500 nsec/particle/time step
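
    The particle-exchange step of this kind of domain decomposition can be sketched without MPI: after each push, a processor identifies the particles whose new positions have left its subdomain and hands them to the neighbouring subdomains. The function below assumes a simple 1D slab decomposition and serves only as a serial stand-in for the actual message passing.

```python
import numpy as np

def exchange_particles(local_x, rank, nranks, length=1.0):
    """Split a rank's particle positions into (kept, send_left, send_right) after a
    push, assuming a 1D slab decomposition of [0, length) into nranks equal parts."""
    lo = rank * length / nranks
    hi = (rank + 1) * length / nranks
    keep = local_x[(local_x >= lo) & (local_x < hi)]
    send_left = local_x[local_x < lo]
    send_right = local_x[local_x >= hi]
    return keep, send_left, send_right

# Serial stand-in for the MPI exchange: rank 1 of 4 after a push.
rng = np.random.default_rng(6)
x = 0.25 + 0.1 * rng.standard_normal(1000)          # particles owned by rank 1, now spread out
keep, left, right = exchange_particles(x, rank=1, nranks=4)
print(len(keep), len(left), len(right))             # left/right would be sent to ranks 0 and 2
```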

  11. Hot particle dose calculations using the computer code VARSKIN Mod 2

    International Nuclear Information System (INIS)

    Durham, J.S.

    1991-01-01

    The only calculational model recognised by the Nuclear Regulatory Commission (NRC) for hot particle dosimetry is VARSKIN Mod 1. Because the code was designed to calculate skin dose from distributed skin contamination and not hot particles, it is assumed that the particle has no thickness and, therefore, that no self-absorption occurs within the source material. For low energy beta particles such as those emitted from 60Co, a significant amount of self-shielding occurs in hot particles and VARSKIN Mod 1 overestimates the skin dose. In addition, the presence of protective clothing, which will reduce the calculated skin dose for both high and low energy beta emitters, is not modelled in VARSKIN Mod 1. Finally, there is no provision in VARSKIN Mod 1 to calculate the gamma contribution to skin dose from radionuclides that emit both beta and gamma radiation. The computer code VARSKIN Mod 1 has been modified to model three-dimensional sources, insertion of layers of protective clothing between the source and skin, and gamma dose from appropriate radionuclides. The new code, VARSKIN Mod 2, is described and the sensitivity of the calculated dose to source geometry, diameter, thickness, density, and protective clothing thickness are discussed. Finally, doses calculated using VARSKIN Mod 2 are compared to doses measured from hot particles found in nuclear power plants. (author)

  12. Numerical analysis of splashing fluid using hybrid method of mesh-based and particle-based modelings

    International Nuclear Information System (INIS)

    Tanaka, Nobuatsu; Ogawara, Takuya; Kaneda, Takeshi; Maseguchi, Ryo

    2009-01-01

    In order to simulate splashing and scattering fluid behaviors, we developed a hybrid method combining a mesh-based model for the large-scale continuum fluid with a particle-based model for small-scale discrete fluid particles. As for the solver of the continuum fluid, we adopt the CIVA RefIned Multiphase SimulatiON (CRIMSON) code to evaluate two-phase flow behaviors based on recent computational fluid dynamics (CFD) techniques. The phase field model has been introduced into CRIMSON in order to solve the problem of losing phase interface sharpness in long-term calculations. As for the solver of the discrete fluid droplets, we applied the idea of the Smoothed Particle Hydrodynamics (SPH) method. The continuum fluid and the discrete fluid droplets interact with each other through a drag interaction force. We verified our method by applying it to the popular benchmark problem of the collapse of a water column, especially focusing on the splashing and scattering fluid behaviors after the column collided against the wall. We confirmed that the gross splashing and scattering behaviors were well reproduced by the introduction of the particle model, while the detailed behaviors of the particles were slightly different from the experimental results. (author)
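
    The drag interaction force is the only coupling term between the grid fluid and the SPH droplets mentioned above. The sketch below shows a minimal explicit version of that coupling for one time step on a 1D grid: the fluid velocity of the host cell drives a linear drag force on each droplet, and the opposite momentum is deposited back onto the cell. The relaxation time, masses and grid are purely illustrative assumptions, not the CRIMSON/SPH implementation.

```python
import numpy as np

def drag_couple(u_grid, x_p, v_p, m_p, rho_cell, dx, dt, tau=1.0e-3):
    """One explicit step of two-way drag coupling on a 1D grid (illustrative only).

    u_grid   : fluid velocity per cell
    x_p, v_p : droplet positions and velocities
    m_p      : droplet masses
    tau      : droplet relaxation time in the linear drag law
    """
    u_new = u_grid.copy()
    v_new = v_p.copy()
    cells = np.clip((x_p / dx).astype(int), 0, len(u_grid) - 1)
    for p, c in enumerate(cells):
        f_drag = m_p[p] * (u_grid[c] - v_p[p]) / tau      # drag force on the droplet
        v_new[p] += dt * f_drag / m_p[p]
        cell_mass = rho_cell[c] * dx                       # 1D "volume" of the host cell
        u_new[c] -= dt * f_drag / cell_mass                # equal and opposite on the fluid
    return u_new, v_new

u = np.full(10, 2.0)
xp = np.array([0.15, 0.55]); vp = np.zeros(2); mp = np.full(2, 1.0e-6)
print(drag_couple(u, xp, vp, mp, rho_cell=np.full(10, 1000.0), dx=0.1, dt=1.0e-4))
```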

  13. Object-Oriented Parallel Particle-in-Cell Code for Beam Dynamics Simulation in Linear Accelerators

    International Nuclear Information System (INIS)

    Qiang, J.; Ryne, R.D.; Habib, S.; Decky, V.

    1999-01-01

    In this paper, we present an object-oriented three-dimensional parallel particle-in-cell code for beam dynamics simulation in linear accelerators. A two-dimensional parallel domain decomposition approach is employed within a message passing programming paradigm along with a dynamic load balancing. Implementing object-oriented software design provides the code with better maintainability, reusability, and extensibility compared with conventional structure based code. This also helps to encapsulate the details of communications syntax. Performance tests on SGI/Cray T3E-900 and SGI Origin 2000 machines show good scalability of the object-oriented code. Some important features of this code also include employing symplectic integration with linear maps of external focusing elements and using z as the independent variable, typical in accelerators. A successful application was done to simulate beam transport through three superconducting sections in the APT linac design

  14. Computational modeling of electrically conductive networks formed by graphene nanoplatelet-carbon nanotube hybrid particles

    KAUST Repository

    Mora Cordova, Angel

    2018-01-30

    One strategy to ensure that nanofiller networks in a polymer composite percolate at low volume fractions is to promote segregation. In a segregated structure, the concentration of nanofillers is kept low in some regions of the sample. In turn, the concentration in the remaining regions is much higher than the average concentration of the sample. This selective placement of the nanofillers ensures percolation at low average concentration. One original strategy to promote segregation is by tuning the shape of the nanofillers. We use a computational approach to study the conductive networks formed by hybrid particles obtained by growing carbon nanotubes (CNTs) on graphene nanoplatelets (GNPs). The objective of this study is (1) to show that the higher electrical conductivity of these composites is due to the hybrid particles forming a segregated structure and (2) to understand which parameters defining the hybrid particles determine the efficiency of the segregation. We construct a microstructure to observe the conducting paths and determine whether a segregated structure has indeed been formed inside the composite. A measure of efficiency is presented based on the fraction of nanofillers that contribute to the conductive network. Then, the efficiency of the hybrid-particle networks is compared to those of three other networks of carbon-based nanofillers in which no hybrid particles are used: only CNTs, only GNPs, and a mix of CNTs and GNPs. Finally, some parameters of the hybrid particle are studied: the CNT density on the GNPs, and the CNT and GNP geometries. We also present recommendations for the further improvement of a composite's conductivity based on these parameters.

  15. Computational modeling of electrically conductive networks formed by graphene nanoplatelet-carbon nanotube hybrid particles

    Science.gov (United States)

    Mora, A.; Han, F.; Lubineau, G.

    2018-04-01

    One strategy to ensure that nanofiller networks in a polymer composite percolate at low volume fractions is to promote segregation. In a segregated structure, the concentration of nanofillers is kept low in some regions of the sample. In turn, the concentration in the remaining regions is much higher than the average concentration of the sample. This selective placement of the nanofillers ensures percolation at low average concentration. One original strategy to promote segregation is by tuning the shape of the nanofillers. We use a computational approach to study the conductive networks formed by hybrid particles obtained by growing carbon nanotubes (CNTs) on graphene nanoplatelets (GNPs). The objective of this study is (1) to show that the higher electrical conductivity of these composites is due to the hybrid particles forming a segregated structure and (2) to understand which parameters defining the hybrid particles determine the efficiency of the segregation. We construct a microstructure to observe the conducting paths and determine whether a segregated structure has indeed been formed inside the composite. A measure of efficiency is presented based on the fraction of nanofillers that contribute to the conductive network. Then, the efficiency of the hybrid-particle networks is compared to those of three other networks of carbon-based nanofillers in which no hybrid particles are used: only CNTs, only GNPs, and a mix of CNTs and GNPs. Finally, some parameters of the hybrid particle are studied: the CNT density on the GNPs, and the CNT and GNP geometries. We also present recommendations for the further improvement of a composite’s conductivity based on these parameters.

  16. Computational modeling of electrically conductive networks formed by graphene nanoplatelet-carbon nanotube hybrid particles

    KAUST Repository

    Mora Cordova, Angel; Han, Fei; Lubineau, Gilles

    2018-01-01

    One strategy to ensure that nanofiller networks in a polymer composite percolate at low volume fractions is to promote segregation. In a segregated structure, the concentration of nanofillers is kept low in some regions of the sample. In turn, the concentration in remaining regions is much higher than the average concentration of the sample. This selective placement of the nanofillers ensures percolation at low average concentration. One original strategy to promote segregation is by tuning the shape of the nanofillers. We use a computational approach to study the conductive networks formed by hybrid particles obtained by growing carbon nanotubes (CNTs) on graphene nanoplatelets (GNPs). The objective of this study is (1) to show that the higher electrical conductivity of these composites is due to the hybrid particles forming a segregated structure and (2) to understand which parameters defining the hybrid particles determine the efficiency of the segregation. We construct a microstructure to observe the conducting paths and determine whether a segregated structure has indeed been formed inside the composite. A measure of efficiency is presented based on the fraction of nanofillers that contribute to the conductive network. Then, the efficiency of the hybrid-particle networks is compared to those of three other networks of carbon-based nanofillers in which no hybrid particles are used: only CNTs, only GNPs, and a mix of CNTs and GNPs. Finally, some parameters of the hybrid particle are studied: the CNT density on the GNPs, and the CNT and GNP geometries. We also present recommendations for the further improvement of a composite's conductivity based on these parameters.

  17. Influence of Code Size Variation on the Performance of 2D Hybrid ZCC/MD in OCDMA System

    Directory of Open Access Journals (Sweden)

    Matem Rima.

    2018-01-01

    Several two-dimensional OCDMA codes have been developed in order to overcome many problems in optical networks, enhancing cardinality, suppressing Multiple Access Interference (MAI), and mitigating Phase Induced Intensity Noise (PIIN). This paper proposes a new 2D hybrid ZCC/MD code combining 1D ZCC spectral encoding, where M is its code length, with 1D MD spatial spreading, where N is its code length. According to the numerical results, the spatial spreading code length N offers good cardinality, so it is the main factor in enhancing the performance of the system compared to the spectral code length M.

  18. Particle transport analysis in lower hybrid current drive discharges of JT-60U

    International Nuclear Information System (INIS)

    Nagashima, K.; Ide, S.; Naito, O.

    1996-01-01

    Particle transport is modified in lower hybrid current drive discharges of JT-60U. The density profile becomes broad during the lower hybrid wave injection and the profile change depends on the injected wave spectrum. Particle transport coefficients (diffusion coefficient and profile peaking factor) were evaluated using gas-puff modulation experiments. The diffusion coefficient in the current drive discharges is about three times larger than in the ohmic discharges. The profile peaking factor decreases in the current drive discharges and the evaluated values are consistent with the measured density profiles. (author)

  19. Hybrid Algorithm of Particle Swarm Optimization and Grey Wolf Optimizer for Improving Convergence Performance

    Directory of Open Access Journals (Sweden)

    Narinder Singh

    2017-01-01

    A new hybrid nature-inspired algorithm called HPSOGWO is presented, combining Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO). The main idea is to combine the exploitation ability of Particle Swarm Optimization with the exploration ability of the Grey Wolf Optimizer, producing the strengths of both variants. Some unimodal, multimodal, and fixed-dimension multimodal test functions are used to check the solution quality and performance of the HPSOGWO variant. The numerical and statistical results show that the hybrid variant significantly outperforms the PSO and GWO variants in terms of solution quality, solution stability, convergence speed, and ability to find the global optimum.
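
    One plausible way to realize the combination described above (a sketch of the general idea, not necessarily the exact HPSOGWO update equations) is to let the three grey-wolf leaders take the place of the single global best in the PSO velocity update, so that exploration is guided by several leaders while the personal-best memory of PSO is retained.

```python
import numpy as np

rng = np.random.default_rng(7)

def sphere(x):                      # simple unimodal test function
    return float(np.sum(x ** 2))

def hybrid_pso_gwo(f, dim=5, n=20, iters=200, w=0.5, c1=0.5, c2=1.5):
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(xi) for xi in x])
    for _ in range(iters):
        order = np.argsort(pbest_f)
        alpha, beta, delta = pbest[order[:3]]            # three GWO-style leaders
        leader = (alpha + beta + delta) / 3.0
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # PSO velocity update, but pulled toward the averaged wolf leaders
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (leader - x)
        x = x + v
        fx = np.array([f(xi) for xi in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
    return pbest[pbest_f.argmin()], pbest_f.min()

best_x, best_f = hybrid_pso_gwo(sphere)
print(best_f)     # should approach 0 on the sphere function
```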

  20. PRIAM: A self consistent finite element code for particle simulation in electromagnetic fields

    International Nuclear Information System (INIS)

    Le Meur, G.; Touze, F.

    1990-06-01

    A 2 1/2 dimensional, relativistic particle simulation code is described. A short review of the mixed finite element method used is given. The treatment of the driving terms (charge and current densities) and of the initial and boundary conditions is presented. Graphical results are shown.

  1. Modification V to the computer code, STRETCH, for predicting coated-particle behavior

    International Nuclear Information System (INIS)

    Valentine, K.H.

    1975-04-01

    Several modifications have been made to the stress analysis code, STRETCH, in an attempt to improve agreement between the calculated and observed behavior of pyrocarbon-coated fuel particles during irradiation in a reactor environment. Specific areas of the code that have been modified are the neutron-induced densification model and the neutron-induced creep calculation. Also, the capability for modeling surface temperature variations has been added. HFIR Target experiments HT-12 through HT-15 have been simulated with the modified code, and the neutron-fluence vs particle-failure predictions compare favorably with the experimental results. Listings of the modified FORTRAN IV main source program and additional FORTRAN IV functions are provided along with instructions for supplying the additional input data. (U.S.)

  2. SoAx: A generic C++ Structure of Arrays for handling particles in HPC codes

    Science.gov (United States)

    Homann, Holger; Laenen, Francois

    2018-03-01

    The numerical study of physical problems often requires integrating the dynamics of a large number of particles evolving according to a given set of equations. Particles are characterized by the information they carry, such as an identity, a position and other properties. There are, generally speaking, two different possibilities for handling particles in high performance computing (HPC) codes. The concept of an Array of Structures (AoS) is in the spirit of the object-oriented programming (OOP) paradigm in that the particle information is implemented as a structure. Here, an object (realization of the structure) represents one particle and a set of many particles is stored in an array. In contrast, using the concept of a Structure of Arrays (SoA), a single structure holds several arrays, each representing one property (such as the identity) of the whole set of particles. The AoS approach is often implemented in HPC codes due to its handiness and flexibility. For a class of problems, however, it is known that the performance of SoA is much better than that of AoS. We confirm this observation for our particle problem. Using a benchmark we show that on modern Intel Xeon processors the SoA implementation is typically several times faster than the AoS one. On Intel's MIC co-processors the performance gap even attains a factor of ten. The same is true for GPU computing, using both computational and multi-purpose GPUs. Combining performance and handiness, we present the library SoAx that has optimal performance (on CPUs, MICs, and GPUs) while providing the same handiness as AoS. For this, SoAx uses modern C++ design techniques such as template metaprogramming, which allows code to be generated automatically for user-defined heterogeneous data structures.
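
    The layout difference can be made concrete with a small sketch; since SoAx itself is a C++ library, the Python/NumPy snippet below only illustrates the AoS and SoA memory layouts with hypothetical particle properties.

```python
# Illustration of the AoS vs SoA layouts discussed above. The SoA layout keeps
# each property contiguous in memory, which is what enables vectorized access.
import numpy as np

n = 1_000_000

# Array of Structures: one record per particle (interleaved fields in memory).
aos = np.zeros(n, dtype=[("id", np.int64), ("x", np.float64),
                         ("y", np.float64), ("z", np.float64)])

# Structure of Arrays: one contiguous array per property.
soa = {"id": np.zeros(n, np.int64),
       "x": np.zeros(n), "y": np.zeros(n), "z": np.zeros(n)}

# Updating one property touches only that array in the SoA layout ...
soa["x"] += 0.1 * soa["y"]
# ... whereas in the AoS layout the same update strides over whole records.
aos["x"] += 0.1 * aos["y"]
```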

  3. Center for Gyrokinetic/MHD Hybrid Simulation of Energetic Particle Physics in Toroidal Plasmas (CSEPP). Final report

    International Nuclear Information System (INIS)

    Chen, Yang

    2012-01-01

    At Colorado University-Boulder the primary task is to extend our gyrokinetic Particle-in-Cell simulation of tokamak micro-turbulence and transport to the area of energetic particle physics. We have implemented a gyrokinetic ion/massless fluid electron hybrid model in the global δf-PIC code GEM, and benchmarked the code with analytic results on the thermal ion radiative damping rate of Toroidal Alfven Eigenmodes (TAE) and with mode frequency and spatial structure from eigenmode analysis. We also performed nonlinear simulations of both a single-n mode (n is the toroidal mode number) and multiple-n modes, and in the case of single-n, benchmarked the code on the saturation amplitude vs. particle collision rate against analytical theory. Most simulations use the δf method for both ion species, but we have explored the full-f method for energetic particles in cases where the burst amplitude of the excited instabilities is so large as to cause significant re-distribution or loss of the energetic particles. We used the hybrid model to study the stability of high-n TAEs in ITER. Our simulations show that the most unstable modes in ITER lie in the range of 10 for α(0) = 0.7% for the fully shaped ITER equilibrium. We also carried out nonlinear simulations of the most unstable n = 15 mode and found that the saturation amplitude for the nominal ITER discharge is too low to cause large redistribution or loss of alpha particles. To include kinetic electron effects in the hybrid model we have studied a kinetic electron closure scheme for the fluid electron model. The most important element of the closure scheme is a complete Ohm's law for the parallel electric field E||, derived by combining the quasi-neutrality condition, Ampere's equation and the v|| moment of the gyrokinetic equations. A discretization method for the closure scheme is studied in detail for a three-dimensional shear-less slab plasma. It is found that for long-wavelength shear Alfven waves the kinetic closure scheme

  4. A hybrid experiment to search for beauty particles

    International Nuclear Information System (INIS)

    Aoki, S.; Chiba, K.; Hoshino, K.; Kaway, T.; Kobayashi, M.; Kodama, K.; Miyanishi, M.; Nakamura, M.; Nakamura, Y.; Niu, K.; Niwa, K.; Ohashi, M.; Sasaki, H.; Tajima, H.; Tomita, Y.; Yamakawa, O.; Yanagisawa, Y.; Baroni, G.; Cecchetti, A.M.; Dell'Uomo, S.; De Vincenzi, M.; Di Liberto, S.; Frenkel, A.; Manfredini, A.; Marini, G.; Martellotti, G.; Mazzoni, M.A.; Meddi, F.; Nigro, A.; Penso, G.; Pistilli, P.; Sciubla, A.; Sgarbi, C.; Barth, M.; Bertrand, D.; Bertrand-Coremans, G.; Roosen, R.; Bartley, J.H.; Davis, D.H.; Duff, B.G.; Esten, M.J.; Heymann, F.F.; Imrie, D.C.; Lush, G.J.; Tovee, D.N.; Breslin, A.C.; Donnelly, W.; Montwill, A.; Coupland, M.; Trent, P.; Hazama, M.; Isokane, Y.; Tsuneoka, Y.; Kazuno, M.; Minakawa, F.; Shibuya, H.; Watanabe, S.

    1989-01-01

    We give here a detailed description of experiment WA75, which was performed at CERN to search for beauty particles. Events containing at least one muon with a high momentum transverse to the beam direction were selected; then the primary interactions and decay vertices, located in stacks of nuclear research emulsions, were examined and analysed. The various parts of the apparatus are described and the off-line analysis and search in emulsion are discussed. An estimate is made of the sensitivity of the experiment to beauty- and charmed-particle production. (orig.)

  5. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    Science.gov (United States)

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-08-12

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants, no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
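
    The Lagrange-interpolation component of threshold sharing can be illustrated classically (Shamir-style over a prime field), as sketched below; the quantum OAM pump, the m-bonacci encoding and the Huffman-Fibonacci-tree coding of the actual scheme are not represented, and the prime modulus is an arbitrary choice.

```python
# Sketch of the classical Lagrange-interpolation step of threshold secret
# sharing: split a secret into n shares so that any k of them reconstruct it.
import random

P = 2**61 - 1                      # a large prime modulus (illustrative choice)

def make_shares(secret, k, n):
    """Evaluate a random degree-(k-1) polynomial with f(0) = secret at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
print(reconstruct(shares[:3]))      # any 3 shares -> 123456789
```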

  6. Randomly dispersed particle fuel model in the PSG Monte Carlo neutron transport code

    International Nuclear Information System (INIS)

    Leppaenen, J.

    2007-01-01

    High-temperature gas-cooled reactor fuels are composed of thousands of microscopic fuel particles, randomly dispersed in a graphite matrix. The modelling of such geometry is complicated, especially using continuous-energy Monte Carlo codes, which are unable to apply any deterministic corrections in the calculation. This paper presents the geometry routine developed for modelling randomly dispersed particle fuels using the PSG Monte Carlo reactor physics code. The model is based on the delta-tracking method, and it takes into account the spatial self-shielding effects and the random dispersion of the fuel particles. The calculation routine is validated by comparing the results to reference MCNP4C calculations using uranium and plutonium based fuels. (authors)
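
    The delta-tracking idea underlying the routine can be sketched in one dimension: flight distances are sampled from a constant majorant cross section and collisions are accepted with probability sigma_t(x)/sigma_maj, so the randomly placed kernels never need to be surface-tracked. The cross-section values and kernel positions below are hypothetical.

```python
# Sketch of delta-tracking (Woodcock tracking) in 1-D. The cross-section profile
# is a hypothetical stand-in for fuel kernels dispersed in a graphite matrix.
import math
import random

KERNEL_CENTERS = (1.0, 2.3, 3.7)           # positions of fuel kernels [cm]

def sigma_t(x):
    """Total cross section [1/cm]: high inside kernels, low in the matrix."""
    return 5.0 if any(abs(x - c) < 0.05 for c in KERNEL_CENTERS) else 0.3

SIGMA_MAJ = 5.0                            # majorant: >= sigma_t(x) everywhere

def next_real_collision(x0):
    x = x0
    while True:
        x -= math.log(random.random()) / SIGMA_MAJ   # tentative flight distance
        if random.random() < sigma_t(x) / SIGMA_MAJ: # accept: real, not virtual
            return x

print(next_real_collision(0.0))
```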

  7. Color-coded Live Imaging of Heterokaryon Formation and Nuclear Fusion of Hybridizing Cancer Cells.

    Science.gov (United States)

    Suetsugu, Atsushi; Matsumoto, Takuro; Hasegawa, Kosuke; Nakamura, Miki; Kunisada, Takahiro; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Bouvet, Michael; Hoffman, Robert M

    2016-08-01

    Fusion of cancer cells has been studied for over half a century. However, the steps involved after initial fusion between cells, such as heterokaryon formation and nuclear fusion, have been difficult to observe in real time. In order to be able to visualize these steps, we have established cancer-cell sublines from the human HT-1080 fibrosarcoma, one expressing green fluorescent protein (GFP) linked to histone H2B in the nucleus and a red fluorescent protein (RFP) in the cytoplasm, and the other expressing RFP (mCherry) linked to histone H2B in the nucleus and GFP in the cytoplasm. The two reciprocal color-coded sublines of HT-1080 cells were fused using the Sendai virus. The fused cells were cultured on plastic and observed using an Olympus FV1000 confocal microscope. Multi-nucleate (heterokaryotic) cancer cells, in addition to hybrid cancer cells with single- or multiple-fused nuclei, including fused mitotic nuclei, were observed among the fused cells. Heterokaryons with red, green, orange and yellow nuclei were observed by confocal imaging, even in single hybrid cells. The orange and yellow nuclei indicate nuclear fusion; red and green nuclei remained unfused. Cell fusion with heterokaryon formation and subsequent nuclear fusion resulting in hybridization may be an important natural phenomenon between cancer cells that may make them more malignant. The ability to image the complex processes following cell fusion using reciprocal color-coded cancer cells will allow greater understanding of the genetic basis of malignancy. Copyright© 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  8. Preparation of polyethersulfone-organophilic montmorillonite hybrid particles for the removal of bisphenol A

    International Nuclear Information System (INIS)

    Cao Fuming; Bai Pengli; Li Haocheng; Ma Yunli; Deng Xiaopei; Zhao Changsheng

    2009-01-01

    Polyethersulfone (PES)-organophilic montmorillonite (OMMT) hybrid particles, with various proportions of OMMT, were prepared by using a liquid-liquid phase separation technique, and were then used for the removal of bisphenol A (BPA) from aqueous solution. The adsorbed BPA amounts increased significantly when the OMMT was embedded into the particles. The structure of the particles was characterized by using scanning electron microscopy (SEM), and thermogravimetric analysis (TGA) showed that these particles hardly release small molecules below 250 deg. C. The experimental data of BPA adsorption were adequately fitted with Langmuir equations. Three simplified kinetic models, namely the pseudo-first-order (Lagergren equation), the pseudo-second-order, and the intraparticle diffusion model, were used to describe the adsorption process. Kinetic studies showed that the adsorbed BPA amount reached an equilibrium value after 300 min, and the experimental data could be expressed by the intraparticle mass transfer diffusion model. Furthermore, the adsorbed BPA could be effectively removed by ethanol, which indicated that the hybrid particles could be reused. These results showed that the PES-OMMT hybrid particles have the potential to be used in environmental applications
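
    As an illustration of the kinetic analysis mentioned above, the sketch below fits the pseudo-second-order model q(t) = qe^2 k t / (1 + qe k t) to uptake data. The data points and initial guesses are hypothetical and do not reproduce the paper's BPA measurements.

```python
# Sketch: fitting the pseudo-second-order kinetic model to adsorption-uptake
# data. The data points below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k):
    return qe**2 * k * t / (1.0 + qe * k * t)

t_min = np.array([10, 30, 60, 120, 180, 300, 420])          # time [min]
q_obs = np.array([4.1, 8.3, 11.9, 15.2, 16.5, 17.6, 17.8])  # uptake [mg/g], illustrative

(qe_fit, k_fit), _ = curve_fit(pseudo_second_order, t_min, q_obs, p0=(18.0, 0.01))
print(f"qe = {qe_fit:.2f} mg/g, k = {k_fit:.4f} g/(mg*min)")
```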

  9. An Unscented Kalman-Particle Hybrid Filter for Space Object Tracking

    Science.gov (United States)

    Raihan A. V, Dilshad; Chakravorty, Suman

    2018-03-01

    Optimal and consistent estimation of the state of space objects is pivotal to surveillance and tracking applications. However, probabilistic estimation of space objects is made difficult by the non-Gaussianity and nonlinearity associated with orbital mechanics. In this paper, we present an unscented Kalman-particle hybrid filtering framework for recursive Bayesian estimation of space objects. The hybrid filtering scheme is designed to provide accurate and consistent estimates when measurements are sparse without incurring a large computational cost. It employs an unscented Kalman filter (UKF) for estimation when measurements are available. When the target is outside the field of view (FOV) of the sensor, it updates the state probability density function (PDF) via a sequential Monte Carlo method. The hybrid filter addresses the problem of particle depletion through a suitably designed filter transition scheme. To assess the performance of the hybrid filtering approach, we consider two test cases of space objects that are assumed to undergo full three-dimensional orbital motion under the effects of J2 and atmospheric drag perturbations. It is demonstrated that the hybrid filters can furnish fast, accurate and consistent estimates outperforming standard UKF and particle filter (PF) implementations.
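
    The switching logic described above can be sketched as follows: particles carry the state PDF while the target is outside the FOV, and when a measurement arrives the cloud is collapsed to a Gaussian, updated, and re-sampled. The dynamics, measurement model, the ukf_update placeholder and the Gaussian transition used here are simplifying assumptions, not the paper's orbital models or its transition scheme.

```python
# Sketch of the hybrid filter's switching logic with a 1-D toy state.
import numpy as np

rng = np.random.default_rng(1)

def propagate(x):                       # placeholder dynamics (here: linear drift)
    return x + 0.1

def ukf_update(mean, cov, z):           # placeholder for a full UKF measurement update
    gain = cov / (cov + 0.5)            # scalar Kalman gain with R = 0.5
    return mean + gain * (z - mean), (1 - gain) * cov

particles = rng.normal(0.0, 1.0, 500)   # initial ensemble

for step, measurement in enumerate([None, None, 2.1, None, 2.6]):
    particles = propagate(particles) + rng.normal(0.0, 0.05, particles.size)
    if measurement is not None:                         # target inside FOV: UKF step
        mean, cov = particles.mean(), particles.var()
        mean, cov = ukf_update(mean, cov, measurement)
        particles = rng.normal(mean, np.sqrt(cov), particles.size)  # back to particles
    print(step, particles.mean())
```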

  10. Parallel processing of Monte Carlo code MCNP for particle transport problem

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Kawasaki, Takuji

    1996-06-01

    It is possible to vectorize or parallelize Monte Carlo (MC) codes for photon and neutron transport problems, making use of the independence of the calculation for each particle. The applicability of existing MC codes to parallel processing is discussed. As for parallel computers, we have used both a vector-parallel processor and a scalar-parallel processor in the performance evaluation. We have performed (i) vector-parallel processing of the MCNP code on the Monte Carlo machine Monte-4 with four vector processors, and (ii) parallel processing on the Paragon XP/S with 256 processors. In this report we describe the methodology and results for parallel processing on two types of parallel or distributed-memory computers. In addition, we mention the evaluation of parallel programming environments for the parallel computers used in the present work, as a part of the work on developing the STA (Seamless Thinking Aid) Basic Software. (author)

  11. R-Matrix Codes for Charged-particle Induced Reactionsin the Resolved Resonance Region

    Energy Technology Data Exchange (ETDEWEB)

    Leeb, Helmut [Technical Univ. of Wien, Vienna (Austria); Dimitriou, Paraskevi [Intl Atomic Energy Agency (IAEA), Vienna (Austria); Thompson, Ian J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-01-01

    A Consultant’s Meeting was held at the IAEA Headquarters, from 5 to 7 December 2016, to discuss the status of R-matrix codes currently used in calculations of charged-particle induced reaction cross sections at low energies. The meeting was a follow-up to the R-matrix Codes meeting held in December 2015, and served the purpose of monitoring progress in: the development of a translation code to enable exchange of input/output parameters between the various codes in different formats, fitting procedures and treatment of uncertainties, the evaluation methodology, and finally dissemination. The details of the presentations and technical discussions, as well as additional actions that were proposed to achieve all the goals of the meeting are summarized in this report.

  12. Strength Pareto particle swarm optimization and hybrid EA-PSO for multi-objective optimization.

    Science.gov (United States)

    Elhossini, Ahmed; Areibi, Shawki; Dony, Robert

    2010-01-01

    This paper proposes an efficient particle swarm optimization (PSO) technique that can handle multi-objective optimization problems. It is based on the strength Pareto approach originally used in evolutionary algorithms (EA). The proposed modified particle swarm algorithm is used to build three hybrid EA-PSO algorithms to solve different multi-objective optimization problems. This algorithm and its hybrid forms are tested using seven benchmarks from the literature and the results are compared to the strength Pareto evolutionary algorithm (SPEA2) and a competitive multi-objective PSO using several metrics. The proposed algorithm shows a slower convergence, compared to the other algorithms, but requires less CPU time. Combining PSO and evolutionary algorithms leads to superior hybrid algorithms that outperform SPEA2, the competitive multi-objective PSO (MO-PSO), and the proposed strength Pareto PSO based on different metrics.

  13. Studying the Mechanism of Hybrid Nanoparticle Photoresists: Effect of Particle Size on Photopatterning

    KAUST Repository

    Li, Li

    2015-07-28

    © 2015 American Chemical Society. Hf-based hybrid photoresist materials with three different organic ligands were prepared by a sol-gel-based method, and their patterning mechanism was investigated in detail. All hybrid nanoparticle resists are patternable using UV exposure. Their particle sizes show a dramatic increase from the initial 3-4 nm to submicron size after exposure, with no apparent inorganic content or thermal property change detected. XPS results showed that the mass percentage of the carboxylic group in the structure of nanoparticles decreased with increasing exposure duration. The particle coarsening sensitivities of those hybrid nanoparticles are consistent with their EUV performance. The current work provides an understanding for the development mechanism and future guidance for the design and processing of high performance resist materials for large-scale microelectronics device fabrication.

  14. Load balancing in highly parallel processing of Monte Carlo code for particle transport

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Takemiya, Hiroshi; Kawasaki, Takuji

    1998-01-01

    In parallel processing of Monte Carlo (MC) codes for neutron, photon and electron transport problems, particle histories are assigned to processors, making use of the independence of the calculation for each particle. Although the main part of an MC code can easily be parallelized by this method, it is necessary, and practically difficult, to optimize the code with respect to load balancing in order to attain a high speedup ratio in highly parallel processing. In fact, the speedup ratio in the case of 128 processors remains at nearly one hundred when using the test bed for the performance evaluation. Through the parallel processing of the MCNP code, which is widely used in the nuclear field, it is shown that it is difficult to attain high performance by static load balancing, especially in neutron transport problems, and that a load balancing method which dynamically changes the number of assigned particles, minimizing the sum of the computational and communication costs, overcomes the difficulty, resulting in a reduction of nearly fifteen percent in execution time. (author)
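
    A much-simplified version of dynamic load balancing is sketched below: the particle counts for the next batch are re-assigned in proportion to each processor's measured throughput in the previous batch. This stands in for, but is not, the paper's scheme of minimizing the sum of computational and communication costs; the timing numbers are hypothetical.

```python
# Sketch: re-assigning particle histories to processors in proportion to the
# throughput measured in the previous batch.
import numpy as np

def rebalance(total_particles, prev_counts, prev_times):
    throughput = np.asarray(prev_counts) / np.asarray(prev_times)  # particles per second
    share = throughput / throughput.sum()
    counts = np.floor(share * total_particles).astype(int)
    counts[0] += total_particles - counts.sum()        # give the remainder to rank 0
    return counts

prev_counts = [25000, 25000, 25000, 25000]             # equal static assignment
prev_times = [12.0, 9.5, 15.2, 10.1]                   # measured seconds per processor
print(rebalance(100_000, prev_counts, prev_times))
```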

  15. Development of CAD-Based Geometry Processing Module for a Monte Carlo Particle Transport Analysis Code

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Kwark, Min Su; Shim, Hyung Jin

    2012-01-01

    Monte Carlo (MC) particle transport analysis for a complex system such as a research reactor, an accelerator, or a fusion facility may require accurate modeling of its complicated geometry. Manual modeling by using the text interface of an MC code to define the geometrical objects is tedious, lengthy and error-prone. This problem can be overcome by taking advantage of the modeling capability of a computer aided design (CAD) system. There have been two kinds of approaches to developing MC code systems that utilize CAD data: external format conversion and CAD-kernel-embedded MC simulation. The first approach includes several interfacing programs such as McCAD, MCAM and GEOMIT, which were developed to automatically convert CAD data into MCNP geometry input data. This approach makes the most of the existing MC codes without any modifications, but implies latent data inconsistency due to the difference of the geometry modeling systems. In the second approach, an MC code utilizes the CAD data for direct particle tracking or for conversion to an internal data structure of constructive solid geometry (CSG) and/or boundary representation (B-rep) modeling with the help of a CAD kernel. MCNP-BRL and OiNC have demonstrated their capabilities for CAD-based MC simulations. Recently we have developed a CAD-based geometry processing module for MC particle simulation by using the OpenCASCADE (OCC) library. In the developed module, CAD data can be used for particle tracking through primitive CAD surfaces (hereafter the CAD-based tracking) or for internal conversion to the CSG data structure. In this paper, the performances of the text-based model, the CAD-based tracking, and the internal CSG conversion are compared by using an in-house MC code, McSIM, equipped with the developed CAD-based geometry processing module.

  16. Studies of Planet Formation using a Hybrid N-body + Planetesimal Code

    Science.gov (United States)

    Kenyon, Scott J.; Bromley, Benjamin C.; Salamon, Michael (Technical Monitor)

    2005-01-01

    The goal of our proposal was to use a hybrid multi-annulus planetesimal/n-body code to examine the planetesimal theory, one of the two main theories of planet formation. We developed this code to follow the evolution of numerous 1 m to 1 km planetesimals as they collide, merge, and grow into full-fledged planets. Our goal was to apply the code to several well-posed, topical problems in planet formation and to derive observational consequences of the models. We planned to construct detailed models to address two fundamental issues: 1) icy planets - models for icy planet formation will demonstrate how the physical properties of debris disks, including the Kuiper Belt in our solar system, depend on initial conditions and input physics; and 2) terrestrial planets - calculations following the evolution of 1-10 km planetesimals into Earth-mass planets and rings of dust will provide a better understanding of how terrestrial planets form and interact with their environment. During the past year, we made progress on each issue. Papers published in 2004 are summarized. Summaries of work to be completed during the first half of 2005 and work planned for the second half of 2005 are included.

  17. Hybrid composites of monodisperse pi-conjugated rodlike organic compounds and semiconductor quantum particles

    DEFF Research Database (Denmark)

    Hensel, V.; Godt, A.; Popovitz-Biro, R.

    2002-01-01

    Composite materials of quantum particles (Q-particles) arranged in layers within crystalline powders of pi-conjugated, rodlike dicarboxylic acids are reported. The synthesis of the composites, either as three-dimensional crystals or as thin films at the air-water interface, comprises a two...... analysis of the solids and grazing incidence X-ray diffraction analysis of the films on water. 2) Topotactic solid/gas reaction of these salts with H2S to convert the metal ions into Q-particles of CdS or PbS embedded in the organic matrix that consists of the acids 6(H) and 8(H). These hybrid materials...

  18. The interaction of energetic alpha-particles with intense lower hybrid waves

    International Nuclear Information System (INIS)

    Fisch, N.J.; Rax, J.M.

    1992-06-01

    Lower hybrid waves are a demonstrated, continuous means of driving toroidal current in a tokamak. When these waves propagate in a tokamak fusion reactor, in which there are energetic α-particles, there are conditions under which the α-particles do not appreciably damp, and may even amplify, the wave, thereby enhancing the current-drive effect. Waves traveling in one poloidal direction, in addition to being directed in one toroidal direction, are shown to be the most efficient drivers of current in the presence of the energetic α-particles

  19. Current generation by alpha particles interacting with lower hybrid waves in TOKAMAKS

    International Nuclear Information System (INIS)

    Belikov, V.S.; Kolesnichenko, Ya.I.; Lisak, M.; Anderson, D.

    1990-01-01

    The problem of the influence of fusion-generated alpha particles on lower-hybrid-wave current drive is examined. The analysis is based on a new equation for the LH-wave-fast ion interaction, which is derived by taking into consideration the non-zero value of the longitudinal wave number. The steady-state velocity distribution function for high-energy alpha particles is found. The alpha current driven by LH-waves as well as the RF power absorbed by the alpha particles are calculated. (authors)

  20. The local skin dose conversion coefficients of electrons, protons and alpha particles calculated using the Geant4 code.

    Science.gov (United States)

    Zhang, Bintuan; Dang, Bingrong; Wang, Zhuanzi; Wei, Wei; Li, Wenjian

    2013-10-01

    The skin tissue-equivalent slab reported in the International Commission on Radiological Protection (ICRP) Publication 116 to calculate the localised skin dose conversion coefficients (LSDCCs) was adopted into the Monte Carlo transport code Geant4. The Geant4 code was then utilised for the computation of LSDCCs due to a circular parallel beam of monoenergetic electrons, protons and alpha particles. The results for electrons and alpha particles are found to be in good agreement with the ICRP 116 results obtained with the MCNPX code. The present work thus validates the LSDCC values for both electrons and alpha particles using the Geant4 code.

  1. Solution of charged particle transport equation by Monte-Carlo method in the BRANDZ code system

    International Nuclear Information System (INIS)

    Artamonov, S.N.; Androsenko, P.A.; Androsenko, A.A.

    1992-01-01

    Consideration is given to the use of the Monte-Carlo method for the solution of the charged particle transport equation and its implementation in the BRANDZ code system under the conditions of real 3D geometry, using all the data available on radiation-to-matter interaction in multicomponent and multilayer targets. For the implantation problem, comparisons of BRANDZ results with experiments and with calculations by other codes for complex systems are presented. The results of the simulation of direct nuclear pumping of laser-active media by a proton beam are also included. 4 refs.; 7 figs

  2. Two- and three-dimensional magnetoinductive particle codes with guiding center electron motion

    International Nuclear Information System (INIS)

    Geary, J.L.; Tajima, T.; Leboeuf, J.N.; Zaidman, E.G.; Han, J.H.

    1986-07-01

    A magnetoinductive (Darwin) particle simulation model developed for examining low frequency plasma behavior with large time steps is presented. Electron motion perpendicular to the magnetic field is treated as massless keeping only the guiding center motion. Electron motion parallel to the magnetic field retains full inertial effects as does the ion motion. This model has been implemented in two and three dimensions. Computational tests of the equilibrium properties of the code are compared with linear theory and the fluctuation dissipation theorem. This code has been applied to the problems of Alfven wave resonance heating and twist-kink modes

  3. Syrlic: a Lagrangian code to handle industrial problems involving particles and droplets

    International Nuclear Information System (INIS)

    Peniguel, C.

    1997-01-01

    Numerous industrial applications require solving droplet or solid-particle trajectories and their effects on the flow (fuel injection in combustion engines, agricultural spraying, spray drying, spray cooling, spray painting, particle separators, dispersion of pollutants, etc.). SYRLIC is being developed to handle the dispersed phase, while the continuous phase is tackled by classical Eulerian codes like N3S-EF, N3S-NATUR and ESTET. The trajectory of each droplet is calculated on unstructured or structured grids, according to the Eulerian code with which SYRLIC is coupled. The forces applied to each particle are recalculated along each path. The Lagrangian approach treats the convection and the source terms exactly. It is particularly adapted to problems involving a wide range of particle characteristics (diameter, mass, etc.). In the near future, wall interaction, heat transfer, evaporation and more complex physics will be included. Turbulent effects will be accounted for by a Langevin equation. The illustration shows the trajectories followed by water droplets (diameters from 1 mm to 4 mm) in a cooling tower. The droplets fall under gravity but are deflected towards the center of the tower by a lateral wind. It is clear that particles are affected differently according to their diameter. The Eulerian flow field used to compute the forces has been generated by N3S-AERO on an unstructured mesh.
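
    The core operation described above, integrating each droplet trajectory from forces recomputed along its path, can be sketched for a single droplet subject to gravity and a Stokes-type drag toward a prescribed gas velocity. The gas velocity field, droplet properties, time step and the simulate helper below are illustrative assumptions, not SYRLIC's models or the cooling-tower flow field from the example.

```python
# Sketch of a single-droplet trajectory: gravity plus Stokes-type drag toward a
# prescribed carrier-gas velocity, integrated with explicit Euler steps.
import numpy as np

def gas_velocity(x):
    return np.array([1.0, 0.0])            # uniform lateral wind [m/s] (placeholder)

def simulate(d=1e-3, rho_p=1000.0, mu=1.8e-5, dt=1e-3, steps=2000):
    tau = rho_p * d**2 / (18.0 * mu)        # Stokes relaxation time of the droplet
    x = np.array([0.0, 10.0])               # start 10 m up
    v = np.array([0.0, 0.0])
    g = np.array([0.0, -9.81])
    for _ in range(steps):
        v = v + dt * ((gas_velocity(x) - v) / tau + g)
        x = x + dt * v
        if x[1] <= 0.0:                      # droplet reached the basin
            break
    return x

print(simulate(d=1e-3), simulate(d=4e-3))    # larger droplets are deflected less
```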

  4. Hybrid metal organic scintillator materials system and particle detector

    Science.gov (United States)

    Bauer, Christina A.; Allendorf, Mark D.; Doty, F. Patrick; Simmons, Blake A.

    2011-07-26

    We describe the preparation and characterization of two zinc hybrid luminescent structures based on the flexible and emissive linker molecule, trans-(4-R,4'-R') stilbene, where R and R' are mono- or poly-coordinating groups, which retain their luminescence within these solid materials. For example, reaction of trans-4,4'-stilbenedicarboxylic acid and zinc nitrate in the solvent dimethylformamide (DMF) yielded a dense 2-D network featuring zinc in both octahedral and tetrahedral coordination environments connected by trans-stilbene links. Similar reaction in diethylformamide (DEF) at higher temperatures resulted in a porous, 3-D framework structure consisting of two interpenetrating cubic lattices, each featuring basic zinc carboxylate vertices joined by trans-stilbene, analogous to the isoreticular MOF (IRMOF) series. We demonstrate that the optical properties of both embodiments correlate directly with the local ligand environments observed in the crystal structures. We further demonstrate that these materials produce high luminescent response to proton radiation and high radiation tolerance relative to prior scintillators. These features can be used to create sophisticated scintillating detection sensors.

  5. Particle-in-Cell Code BEAMPATH for Beam Dynamics Simulations in Linear Accelerators and Beamlines

    International Nuclear Information System (INIS)

    Batygin, Y.

    2004-01-01

    A code library BEAMPATH for 2 - dimensional and 3 - dimensional space charge dominated beam dynamics study in linear particle accelerators and beam transport lines is developed. The program is used for particle-in-cell simulation of axial-symmetric, quadrupole-symmetric and z-uniform beams in a channel containing RF gaps, radio-frequency quadrupoles, multipole lenses, solenoids and bending magnets. The programming method includes hierarchical program design using program-independent modules and a flexible combination of modules to provide the most effective version of the structure for every specific case of simulation. Numerical techniques as well as the results of beam dynamics studies are presented

  6. Particle-in-Cell Code BEAMPATH for Beam Dynamics Simulations in Linear Accelerators and Beamlines

    Energy Technology Data Exchange (ETDEWEB)

    Batygin, Y.

    2004-10-28

    A code library BEAMPATH for 2 - dimensional and 3 - dimensional space charge dominated beam dynamics study in linear particle accelerators and beam transport lines is developed. The program is used for particle-in-cell simulation of axial-symmetric, quadrupole-symmetric and z-uniform beams in a channel containing RF gaps, radio-frequency quadrupoles, multipole lenses, solenoids and bending magnets. The programming method includes hierarchical program design using program-independent modules and a flexible combination of modules to provide the most effective version of the structure for every specific case of simulation. Numerical techniques as well as the results of beam dynamics studies are presented.

  7. Parallel Computing Characteristics of CUPID code under MPI and Hybrid environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Ryong; Yoon, Han Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeon, Byoung Jin; Choi, Hyoung Gwon [Seoul National Univ. of Science and Technology, Seoul (Korea, Republic of)

    2014-05-15

    In this paper, the characteristics of a parallel algorithm are presented for solving an elliptic-type equation of CUPID via a domain decomposition method using MPI, and the parallel performance is estimated in terms of scalability, which shows the speedup ratio. In addition, the time-consuming pattern of the major subroutines is studied. Two different grid systems are taken into account: 40,000 meshes for the coarse system and 320,000 meshes for the fine system. Since the matrix of the CUPID code differs according to whether the flow is single-phase or two-phase, the effect of the matrix shape is evaluated. The effect of the preconditioner for the matrix solver is also investigated. Finally, the hybrid (OpenMP+MPI) parallel algorithm is introduced and discussed in detail for the pressure solver. The component-scale thermal-hydraulics code CUPID has been developed for two-phase flow analysis; it adopts a three-dimensional, transient, three-field model and has been parallelized to fulfill a recent demand for long-transient and highly resolved multi-phase flow behavior. In this study, the parallel performance of the CUPID code was investigated in terms of scalability. The CUPID code was parallelized with a domain decomposition method. The MPI library was adopted to communicate the information between neighboring domains. For managing the sparse matrix effectively, the CSR storage format is used. To take into account the characteristics of the pressure matrix, which turns out to be asymmetric for two-phase flow, both single-phase and two-phase calculations were run. In addition, the effect of the matrix size and preconditioning was also investigated. The fine-mesh calculation shows better scalability than the coarse-mesh one, because the coarse system has too few cells for the computational domain to be decomposed extensively. The fine mesh can present good scalability when the geometry is divided with the ratio between computation and communication time taken into account. For a given mesh, single-phase flow
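
    Two of the building blocks mentioned above, CSR storage and a preconditioned Krylov solve, can be sketched on a single node as follows. The matrix is a toy 2-D Laplacian rather than CUPID's pressure matrix, BiCGSTAB and ILU are stand-ins for whichever solver and preconditioner CUPID actually uses, and the MPI/OpenMP domain-decomposition layer is not shown.

```python
# Single-node sketch: a sparse matrix kept in CSR format and solved with a
# preconditioned Krylov method (BiCGSTAB handles asymmetric matrices as well).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100                                          # 100 x 100 grid
lap1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.csr_matrix(sp.kronsum(lap1d, lap1d))      # 2-D Laplacian stored in CSR
b = np.ones(A.shape[0])

ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)       # incomplete-LU preconditioner
M = spla.LinearOperator(A.shape, ilu.solve)

x, info = spla.bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(A @ x - b))
```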

  8. PEREGRINE: An all-particle Monte Carlo code for radiation therapy

    International Nuclear Information System (INIS)

    Hartmann Siantar, C.L.; Chandler, W.P.; Rathkopf, J.A.; Svatos, M.M.; White, R.M.

    1994-09-01

    The goal of radiation therapy is to deliver a lethal dose to the tumor while minimizing the dose to normal tissues. To carry out this task, it is critical to calculate correctly the distribution of dose delivered. Monte Carlo transport methods have the potential to provide more accurate prediction of dose distributions than currently-used methods. PEREGRINE is a new Monte Carlo transport code developed at Lawrence Livermore National Laboratory for the specific purpose of modeling the effects of radiation therapy. PEREGRINE transports neutrons, photons, electrons, positrons, and heavy charged-particles, including protons, deuterons, tritons, helium-3, and alpha particles. This paper describes the PEREGRINE transport code and some preliminary results for clinically relevant materials and radiation sources

  9. CASINO, a code for simulation of charged particles in an axisymmetric Tokamak

    International Nuclear Information System (INIS)

    Dillner, Oe.

    1992-01-01

    The present report comprises documentation of CASINO, a simulation code developed as a means for the study of high energy charged particles in an axisymmetric Tokamak. The background of the need for such a numerical tool is presented. In the description of the numerical model used for the orbit integration, the method using constants of motion, the Lao-Hirsman geometry for the flux surfaces and a method for reducing the necessary number of particles are elucidated. A brief outline of the calculational sequence is given as a flow chart. The essential routines and functions as well as the common blocks are briefly described. The input and output routines are shown. Finally the documentation is completed by a short discussion of possible extensions of the code and a test case. (au)

  10. The CNCSN: one, two- and three-dimensional coupled neutral and charged particle discrete ordinates code package

    International Nuclear Information System (INIS)

    Voloschenko, A.M.; Gukov, S.V.; Kryuchkov, V.P.; Dubinin, A.A.; Sumaneev, O.V.

    2005-01-01

    The CNCSN package is composed of the following codes:
    - KATRIN-2.0: a three-dimensional neutral and charged particle transport code;
    - KASKAD-S-2.5: a two-dimensional neutral and charged particle transport code;
    - ROZ-6.6: a one-dimensional neutral and charged particle transport code;
    - ARVES-2.5: a preprocessor for the working macroscopic cross-section format FMAC-M for transport calculations;
    - MIXERM: a utility code for preparing mixtures on the basis of multigroup cross-section libraries in ANISN format;
    - CEPXS-BFP: a version of the Sandia National Lab. multigroup coupled electron-photon cross-section generating code CEPXS, adapted for solving charged-particle transport in the Boltzmann-Fokker-Planck formulation with the discrete ordinates method;
    - SADCO-2.4: the Institute for High-Energy Physics modular system for generating coupled nuclear data libraries to provide high-energy particle transport calculations by the multigroup method;
    - KATRIF: the post-processor for the KATRIN code;
    - KASF: the post-processor for the KASKAD-S code;
    - ROZ6F: the post-processor for the ROZ-6 code.
    The coding language is Fortran-90.

  11. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    Science.gov (United States)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we will explore the feasibility of porting a Particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  12. The three-dimensional, discrete ordinates neutral particle transport code TORT: An overview

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1996-01-01

    The centerpiece of the Discrete Ordinates Oak Ridge System (DOORS), the three-dimensional neutral particle transport code TORT is reviewed. Its most prominent features pertaining to large applications, such as adjustable problem parameters, memory management, and coarse mesh methods, are described. Advanced, state-of-the-art capabilities including acceleration and multiprocessing are summarized here. Future enhancement of existing graphics and visualization tools is briefly presented

  13. Current-drive by lower hybrid waves in the presence of energetic alpha-particles

    Energy Technology Data Exchange (ETDEWEB)

    Fisch, N.J.; Rax, J.M.

    1991-10-01

    Many experiments have now proved the effectiveness of lower hybrid waves for driving toroidal current in tokamaks. The use of these waves, however, to provide all the current in a reactor is thought to be uncertain because the waves may not penetrate the center of the more energetic reactor plasma, and, if they did, the wave power may be absorbed by alpha particles rather than by electrons. This paper explores the conditions under which lower-hybrid waves might actually drive all the current. 26 refs.

  14. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI - PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON

    International Nuclear Information System (INIS)

    BEEBE - WANG, J.; LUCCIO, A.U.; D IMPERIO, N.; MACHIDA, S.

    2002-01-01

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effect is by computer simulations. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle-tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed

  15. The use of electromagnetic particle-in-cell codes in accelerator applications

    International Nuclear Information System (INIS)

    Eppley, K.

    1988-12-01

    The techniques developed for the numerical simulation of plasmas have numerous applications relevant to accelerators. The operation of many accelerator components involves transients, interactions between beams and rf fields, and internal plasma oscillations. These effects produce non-linear behavior which can be represented accurately by particle-in-cell (PIC) simulations. We will give a very brief overview of the algorithms used in PIC codes. We will examine the range of parameters over which they are useful. We will discuss the factors which determine whether a two- or three-dimensional simulation is most appropriate. PIC codes have been applied to a wide variety of diverse problems, spanning many of the systems in a linear accelerator. We will present a number of practical examples of the application of these codes to areas such as guns, bunchers, rf sources, beam transport, emittance growth and final focus. 8 refs., 8 figs., 2 tabs
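
    As a concrete illustration of the PIC cycle mentioned above (charge deposition, field solve, gather, push), the sketch below runs a minimal 1-D electrostatic PIC loop in normalized units. The grid size, particle number and loading are arbitrary choices for the example, not parameters from any of the codes discussed.

```python
# Minimal 1-D electrostatic PIC sketch: deposit charge on a grid, solve Poisson's
# equation with an FFT, interpolate the field back to the particles, and push them.
import numpy as np

rng = np.random.default_rng(2)
ng, n_p, L, dt = 64, 10000, 2 * np.pi, 0.1
dx = L / ng

x = rng.uniform(0, L, n_p)
v = rng.normal(0, 1, n_p)
q_per_particle = -L / n_p             # neutralizing ion background of density +1

for step in range(50):
    # charge deposition (nearest-grid-point weighting)
    idx = (x / dx).astype(int) % ng
    rho = np.bincount(idx, minlength=ng) * q_per_particle / dx + 1.0
    # Poisson solve in Fourier space: phi_hat = rho_hat / k^2, E = -dphi/dx
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    k[0] = 1.0                        # avoid division by zero; mean mode set to zero below
    phi_hat = np.fft.fft(rho) / k**2
    phi_hat[0] = 0.0
    E = np.real(np.fft.ifft(-1j * k * phi_hat))
    # gather and push (electron charge-to-mass ratio set to -1)
    v += dt * (-1.0) * E[idx]
    x = (x + dt * v) % L

print("field energy:", 0.5 * np.sum(E**2) * dx)
```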

  16. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI - PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON.

    Energy Technology Data Exchange (ETDEWEB)

    BEEBE - WANG,J.; LUCCIO,A.U.; D IMPERIO,N.; MACHIDA,S.

    2002-06-03

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effect is by computer simulations. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle-tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.

  17. ALFITeX. A new code for the deconvolution of complex alpha-particle spectra

    International Nuclear Information System (INIS)

    Caro Marroyo, B.; Martin Sanchez, A.; Jurado Vargas, M.

    2013-01-01

    A new code for the deconvolution of complex alpha-particle spectra has been developed. The ALFITeX code is written in Visual Basic for Microsoft Office Excel 2010 spreadsheets, incorporating several features aimed at making it a fast, robust and useful tool with a user-friendly interface. The deconvolution procedure is based on the Levenberg-Marquardt algorithm, with the curve fitting the experimental data being the mathematical function formed by the convolution of a Gaussian with two left-handed exponentials in the low-energy-tail region. The code also includes the capability of fitting a possible constant background contribution. The application of the singular value decomposition method for matrix inversion permits the fit of any kind of alpha-particle spectra, even those presenting singularities or an ill-conditioned curvature matrix. ALFITeX has been checked with its application to the deconvolution and the calculation of the alpha-particle emission probabilities of 239Pu, 241Am and 235U. (author)

  18. PHITS: Particle and heavy ion transport code system, version 2.23

    International Nuclear Information System (INIS)

    Niita, Koji; Matsuda, Norihiro; Iwamoto, Yosuke; Sato, Tatsuhiko; Nakashima, Hiroshi; Sakamoto, Yukio; Iwase, Hiroshi; Sihver, Lembit

    2010-10-01

    A Particle and Heavy-Ion Transport code System, PHITS, has been developed under the collaboration of JAEA (Japan Atomic Energy Agency), RIST (Research Organization for Information Science and Technology) and KEK (High Energy Accelerator Research Organization). PHITS can deal with the transport of all particles (nucleons, nuclei, mesons, photons, and electrons) over wide energy ranges, using several nuclear reaction models and nuclear data libraries. The geometrical configuration of the simulation can be set with GG (General Geometry) or CG (Combinatorial Geometry). Various quantities such as heat deposition, track length and production yields can be deduced from the simulation, using implemented estimator functions called 'tally'. The code also has a function to draw 2D and 3D figures of the calculated results as well as the setup geometries, using the code ANGEL. Because of these features, PHITS has been widely used for various purposes such as the design of accelerator shielding, radiation therapy and space exploration. Recently, PHITS introduced an event generator for the particle transport part in the low-energy region. Thus, PHITS was completely rewritten for the introduction of the event generator for neutron-induced reactions in the energy region below 20 MeV. Furthermore, several new tallies were incorporated for the estimation of relative biological effects. This document provides a manual of the new PHITS. (author)

  19. (Bio)hybrid materials based on optically active particles

    Science.gov (United States)

    Reitzig, Manuela; Härtling, Thomas; Opitz, Jörg

    2014-03-01

    In this contribution we provide an overview of current investigations on optically active particles (nanodiamonds, upconversion phosphors) for biohybrid and sensing applications. Due to their outstanding properties, nanodiamonds gain attention in various application fields such as microelectronics, optical monitoring, medicine, and biotechnology. Beyond the typical diamond properties such as high thermal conductivity and extreme hardness, the carbon surface and its various functional groups enable diverse chemical and biological surface functionalization. At Fraunhofer IKTS-MD we develop the customization of material surfaces via integration of chemically modified nanodiamonds at variable surfaces, e.g. bone implants and pipelines. For the first purpose, nanodiamonds are covalently modified at their surface with amino or phosphate functionalities that are known to increase adhesion to bone or titanium alloys. The second type of surface is approached via mechanical implementation into coatings. Besides nanodiamonds, we also investigate the properties of upconversion phosphors. In our contribution we show how upconversion phosphors are used to verify sterilization processes via a change of optical properties due to sterilizing electron beam exposure.

  20. Coronal mass ejection hits mercury: A.I.K.E.F. hybrid-code results compared to MESSENGER data

    Science.gov (United States)

    Exner, W.; Heyner, D.; Liuzzo, L.; Motschmann, U.; Shiota, D.; Kusano, K.; Shibayama, T.

    2018-04-01

    Mercury is the closest orbiting planet around the Sun and is therefore embedded in an intense and highly varying solar wind. In-situ data from the MESSENGER spacecraft on the plasma environment near Mercury indicate that a coronal mass ejection (CME) passed the planet on 23 November 2011 over the span of the 12 h MESSENGER orbit. Slavin et al. (2014) derived the upstream parameters of the solar wind at the time of that orbit, and were able to explain the observed MESSENGER data in the cusp and magnetopause segments of MESSENGER's trajectory. These upstream parameters will be used for our first simulation run. We use the hybrid code A.I.K.E.F., which treats ions as individual particles and electrons as a mass-less fluid, to conduct hybrid simulations of Mercury's magnetospheric response to the impact of the CME on ion gyro time scales. Results from the simulation are in agreement with magnetic field measurements from the inner day-side magnetosphere and the bow-shock region. However, on the planet's nightside, Mercury's plasma environment seemed to be governed by different solar wind conditions; we conclude that Mercury's interaction with the CME is not sufficiently described by only one set of upstream parameters. Therefore, to simulate the magnetospheric response while MESSENGER was located in the tail region, we use parameters obtained from the MHD solar wind simulation code SUSANOO (Shiota et al. (2014)) for our second simulation run. The SUSANOO parameters achieve good agreement with the data concerning the plasma tail crossing and the night-side approach to Mercury. However, the polar and closest approaches are hardly described by either set of upstream parameters; in particular, neither upstream dataset is able to reproduce the MESSENGER crossing of Mercury's magnetospheric cusp. We conclude that the respective CME was too variable on the timescale of the MESSENGER orbit to be described by only two sets of upstream conditions. Our results suggest locally strong

  1. VINE-A NUMERICAL CODE FOR SIMULATING ASTROPHYSICAL SYSTEMS USING PARTICLES. II. IMPLEMENTATION AND PERFORMANCE CHARACTERISTICS

    International Nuclear Information System (INIS)

    Nelson, Andrew F.; Wetzstein, M.; Naab, T.

    2009-01-01

    We continue our presentation of VINE. In this paper, we begin with a description of relevant architectural properties of the serial and shared memory parallel computers on which VINE is intended to run, and describe their influences on the design of the code itself. We continue with a detailed description of a number of optimizations made to the layout of the particle data in memory and to our implementation of a binary tree used to access that data for use in gravitational force calculations and searches for smoothed particle hydrodynamics (SPH) neighbor particles. We describe the modifications to the code necessary to obtain forces efficiently from special purpose 'GRAPE' hardware, the interfaces required to allow transparent substitution of those forces in the code instead of those obtained from the tree, and the modifications necessary to use both tree and GRAPE together as a fused GRAPE/tree combination. We conclude with an extensive series of performance tests, which demonstrate that the code can be run efficiently and without modification in serial on small workstations or in parallel using the OpenMP compiler directives on large-scale, shared memory parallel machines. We analyze the effects of the code optimizations and estimate that they improve its overall performance by more than an order of magnitude over that obtained by many other tree codes. Scaled parallel performance of the gravity and SPH calculations, together the most costly components of most simulations, is nearly linear up to at least 120 processors on moderate sized test problems using the Origin 3000 architecture, and to the maximum machine sizes available to us on several other architectures. At similar accuracy, performance of VINE, used in GRAPE-tree mode, is approximately a factor 2 slower than that of VINE, used in host-only mode. Further optimizations of the GRAPE/host communications could improve the speed by as much as a factor of 3, but have not yet been implemented in VINE

  2. Development and Verification of Smoothed Particle Hydrodynamics Code for Analysis of Tsunami near NPP

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Young Beom; Kim, Eung Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2014-10-15

    It becomes more complicated when considering the shape and phase of the ground below the seawater. Therefore, some different attempts are required to precisely analyze the behavior of a tsunami. This paper introduces on-going code development activities at SNU based on an unconventional mesh-free fluid analysis method called Smoothed Particle Hydrodynamics (SPH) and its verification work with some practice simulations. This paper summarizes the on-going development and verification activities on the Lagrangian mesh-free SPH code in SNU. The newly developed code covers the equations of motion and the heat conduction equation so far, and verification of each model is completed. In addition, parallel computation using a GPU is now possible, and a GUI is also prepared. By changing the input geometry or input values, users can run simulations for various conditions and geometries. An SPH method has large advantages and potential in modeling free surfaces, highly deformable geometries and multi-phase problems that traditional grid-based codes have difficulty analyzing. Therefore, by incorporating more complex physical models such as turbulent flow, phase change, two-phase flow, and even solid mechanics, the application of the current SPH code is expected to be much more extensive, including molten fuel behavior in severe accidents.
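
    As a minimal illustration of the mesh-free idea, the sketch below evaluates the basic SPH density summation with a standard 2-D cubic-spline kernel over a small particle lattice. The particle layout, mass and smoothing length are assumptions for the example and are unrelated to the SNU code.

```python
# Sketch of the core SPH operation: density at each particle as a kernel-weighted
# sum over its neighbours, using the standard 2-D cubic-spline kernel.
import numpy as np

def cubic_spline_w(r, h):
    """Standard 2-D cubic-spline kernel with normalization 10 / (7 pi h^2)."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    w = np.where(q < 1.0, 1 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2 - q)**3, 0.0))
    return sigma * w

def sph_density(pos, mass, h):
    diff = pos[:, None, :] - pos[None, :, :]        # pairwise separations (O(N^2) sketch)
    r = np.linalg.norm(diff, axis=-1)
    return (mass * cubic_spline_w(r, h)).sum(axis=1)

# uniform lattice of fluid particles
side = 20
xs, ys = np.meshgrid(np.arange(side), np.arange(side))
pos = np.column_stack([xs.ravel(), ys.ravel()]).astype(float) * 0.1   # spacing 0.1 m
mass = 10.0                                                           # kg per particle
rho = sph_density(pos, mass, h=0.12)
print("mean density:", rho.mean())   # below 1000 kg/m^3 since edge particles lack neighbours
```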

  3. Microwave imaging for conducting scatterers by hybrid particle swarm optimization with simulated annealing

    International Nuclear Information System (INIS)

    Mhamdi, B.; Grayaa, K.; Aguili, T.

    2011-01-01

    In this paper, a microwave imaging technique for reconstructing the shape of two-dimensional perfectly conducting scatterers by means of a stochastic optimization approach is investigated. Based on the boundary condition and the scattered field measured under transverse magnetic illumination, a set of nonlinear integral equations is obtained and the imaging problem is reformulated into an optimization problem. A hybrid approximation algorithm, called PSO-SA, is developed in this work to solve the inverse scattering problem. In the hybrid algorithm, particle swarm optimization (PSO) combines global and local search to find the optimal assignment in reasonable time, while simulated annealing (SA) uses a certain acceptance probability to avoid being trapped in a local optimum. The hybrid approach elegantly combines the exploration ability of PSO with the exploitation ability of SA. Reconstruction results are compared with the exact shapes of some conducting cylinders, and good agreement with the original shapes is observed.

  4. HyDEn: A Hybrid Steganocryptographic Approach for Data Encryption Using Randomized Error-Correcting DNA Codes

    Directory of Open Access Journals (Sweden)

    Dan Tulpan

    2013-01-01

    Full Text Available This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and in the cyclic permutations applied to the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach.

  5. Extension of a hybrid particle-continuum method for a mixture of chemical species

    Science.gov (United States)

    Verhoff, Ashley M.; Boyd, Iain D.

    2012-11-01

    Due to the physical accuracy and numerical efficiency achieved by analyzing transitional, hypersonic flow fields with hybrid particle-continuum methods, this paper describes a Modular Particle-Continuum (MPC) method and its extension to include multiple chemical species. Considerations that are specific to a hybrid approach for simulating gas mixtures are addressed, including a discussion of the Chapman-Enskog velocity distribution function (VDF) for near-equilibrium flows, and consistent viscosity models for the individual CFD and DSMC modules of the MPC method. Representative results for a hypersonic blunt-body flow are then presented, where the flow field properties, surface properties, and computational performance are compared for simulations employing full CFD, full DSMC, and the MPC method.

  6. On the performance of accelerated particle swarm optimization for charging plug-in hybrid electric vehicles

    Directory of Open Access Journals (Sweden)

    Imran Rahman

    2016-03-01

    Full Text Available Transportation electrification has undergone major changes over the last decade. The success of the smart grid with renewable energy integration depends solely upon the large-scale penetration of plug-in hybrid electric vehicles (PHEVs) for a sustainable and carbon-free transportation sector. One of the key performance indicators in a hybrid electric vehicle is the State-of-Charge (SoC), which needs to be optimized for the improvement of the charging infrastructure using stochastic computational methods. In this paper, a recently introduced accelerated particle swarm optimization (APSO) technique was applied and compared with standard particle swarm optimization (PSO), considering charging time and battery capacity. Simulation results obtained for maximizing the highly nonlinear objective function indicate that APSO achieves some improvements in terms of best fitness and computation time.
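
    The record compares APSO with standard PSO but does not reproduce the update rule. The hedged Python sketch below shows the commonly cited accelerated-PSO update, which drops the velocity and personal-best terms in favor of the global best plus a decaying random perturbation; the toy objective stands in for the SoC-based objective, which is not modeled here.

      import numpy as np

      rng = np.random.default_rng(1)

      def objective(x):
          # placeholder cost; the paper maximizes a nonlinear SoC-based objective
          return np.sum((x - 0.5)**2)

      def apso(n=30, dim=5, iters=300, alpha0=0.5, gamma=0.97, beta=0.5):
          x = rng.uniform(0.0, 1.0, (n, dim))
          g = min(x, key=objective).copy()          # global best
          alpha = alpha0
          for _ in range(iters):
              # accelerated PSO update: pull toward the global best + random exploration
              x = (1.0 - beta) * x + beta * g + alpha * rng.standard_normal(x.shape)
              for xi in x:
                  if objective(xi) < objective(g):
                      g = xi.copy()
              alpha *= gamma                        # shrink the exploration step
          return g, objective(g)

      best, best_f = apso()
      print(best_f)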

  7. Particle pinch with fully noninductive lower hybrid current drive in Tore Supra.

    Science.gov (United States)

    Hoang, G T; Bourdelle, C; Pégourié, B; Schunke, B; Artaud, J F; Bucalossi, J; Clairet, F; Fenzi-Bonizec, C; Garbet, X; Gil, C; Guirlet, R; Imbeaux, F; Lasalle, J; Loarer, T; Lowry, C; Travère, J M; Tsitrone, E

    2003-04-18

    Recently, plasmas exceeding 4 min have been obtained with lower hybrid current drive (LHCD) in Tore Supra. These LHCD plasmas extend for over 80 times the resistive current diffusion time with zero loop voltage. Under such unique conditions the neoclassical particle pinch driven by the toroidal electric field vanishes. Nevertheless, the density profile remains peaked for more than 4 min. For the first time, the existence of an inward particle pinch in steady-state plasma without toroidal electric field, much larger than the value predicted by the collisional neoclassical theory, is experimentally demonstrated.

  8. Hybrid three-dimensional variation and particle filtering for nonlinear systems

    International Nuclear Information System (INIS)

    Leng Hong-Ze; Song Jun-Qiang

    2013-01-01

    This work addresses the problem of estimating the states of nonlinear dynamic systems with sparse observations. We present a hybrid three-dimensional variation (3DVar) and particle filtering (PF) method, which combines the advantages of 3DVar and particle-based filters. By minimizing the cost function, this approach produces a better proposal distribution of the state. Afterwards, the stochastic resampling step in standard PF can be avoided through a deterministic scheme. The simulation results show that the performance of the new method is superior to that of the traditional ensemble Kalman filter (EnKF) and the standard PF, especially in highly nonlinear systems.

  9. Short-Term Wind Power Forecasting Using the Enhanced Particle Swarm Optimization Based Hybrid Method

    OpenAIRE

    Wen-Yeau Chang

    2013-01-01

    High penetration of wind power in the electricity system provides many challenges to power system operators, mainly due to the unpredictability and variability of wind power generation. Although wind energy may not be dispatched, an accurate forecasting method of wind speed and power generation can help power system operators reduce the risk of an unreliable electricity supply. This paper proposes an enhanced particle swarm optimization (EPSO) based hybrid forecasting method for short-term wi...

  10. Analysis on applicable error-correcting code strength of storage class memory and NAND flash in hybrid storage

    Science.gov (United States)

    Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken

    2018-04-01

    A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high-performance storage. Error correction is inevitable for SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase the BER. However, error-correcting codes (ECC) degrade storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strength of SCM and NAND flash is evaluated independently by fixing the ECC strength of one memory in the hybrid storage. As a result, a weak BCH ECC with few correctable bits is recommended for hybrid storage with large SCM capacity because the SCM is accessed frequently. In contrast, a strong, long-latency LDPC ECC can be applied to the NAND flash in hybrid storage with large SCM capacity because the large-capacity SCM improves the storage performance.

  11. Modelling of a general purpose irradiation chamber using a Monte Carlo particle transport code

    International Nuclear Information System (INIS)

    Dhiyauddin Ahmad Fauzi; Sheik, F.O.A.; Nurul Fadzlin Hasbullah

    2013-01-01

    Full-text: The aim of this research is to stimulate the effective use of a general purpose irradiation chamber to contain pure neutron particles obtained from a research reactor. The secondary neutron and gamma dose discharged from the chamber layers will be used as a platform to estimate the safe dimensions of the chamber. The chamber, made up of layers of lead (Pb) shielding, polyethylene (PE) moderator and commercial grade aluminium (Al) cladding, is proposed for interacting samples with pure neutron particles in a nuclear reactor environment. The estimation was accomplished through simulation based on the general Monte Carlo N-Particle transport code using the Los Alamos MCNPX software. Simulations were performed on the model of the chamber subjected to high neutron flux radiation and its gamma radiation product. The neutron source model is based on the neutron source of the PUSPATI TRIGA MARK II research reactor, which has a maximum flux of 1 × 10¹² neutron/cm²·s. The expected outcomes of this research are a zero gamma dose in the core of the chamber and a neutron dose rate of less than 10 μSv/day discharged from the chamber system. (author)

  12. Mathematical model and computer code for coated particles performance at normal operating conditions

    International Nuclear Information System (INIS)

    Golubev, I.; Kadarmetov, I.; Makarov, V.

    2002-01-01

    Computer modeling of the thermo-mechanical behavior of coated particles during operation, at both normal and off-normal conditions, has a very significant role, particularly at the stage of development of new reactors. In Russia, considerable experience has been accumulated on the fabrication and reactor testing of coated particles (CP) and fuel elements with UO₂ kernels. However, this experience cannot be used in full for the development of the new GT-MHR reactor installation, owing to the very deep burn-up of the fuel based on plutonium oxide (up to 70% FIMA). Therefore, the mathematical modeling of CP thermal-mechanical behavior and failure prediction becomes particularly important. The authors clearly understand that the serviceability of fuel at high burn-up is determined not only by thermo-mechanics, but also by structural changes in the coating materials, the thermodynamics of chemical processes, the 'amoeba effect', CO formation, etc. The report presents the first steps in the development of an integrated code for numerical modeling of coated particle behavior, together with some calculated results concerning the influence of various design parameters on the endurance of fuel coated particles under GT-MHR normal operating conditions. A failure model is developed to predict the failed fraction of TRISO-coated particles. In this model it is assumed that the failure of a CP depends not only on the probability of SiC-layer fracture but also on damage to the PyC layers. The coated particle is considered as an integrated design. (author)

  13. A 3d particle simulation code for heavy ion fusion accelerator studies

    International Nuclear Information System (INIS)

    Friedman, A.; Bangerter, R.O.; Callahan, D.A.; Grote, D.P.; Langdon, A.B.; Haber, I.

    1990-01-01

    We describe WARP, a new particle-in-cell code being developed and optimized for ion beam studies in true geometry. We seek to model transport around bends, axial compression with strong focusing, multiple beamlet interaction, and other inherently 3d processes that affect emittance growth. Constraints imposed by memory and running time are severe. Thus, we employ only two 3d field arrays (ρ and φ), and difference φ directly on each particle to get E, rather than interpolating E from three meshes; use of a single 3d array is feasible. A new method for PIC simulation of bent beams follows the beam particles in a family of rotated laboratory frames, thus ''straightening'' the bends. We are also incorporating an envelope calculation, an (r, z) model, and 1d (axial) model within WARP. The BASIS development and run-time system is used, providing a powerful interactive environment in which the user has access to all variables in the code database. 10 refs., 3 figs
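
    The memory-saving device mentioned above, differencing the potential directly on each particle instead of storing three electric-field meshes, can be illustrated with a hedged one-dimensional Python sketch. Only the potential array is kept, and the field seen by a particle is obtained on the fly from the potential values of its cell; this 1D linear-shape version is a simplification of what a 3D code such as WARP actually does.

      import numpy as np

      def gather_efield_from_phi(xp, phi, dx):
          """Electric field at particle positions xp, obtained by differencing the
          potential phi of the enclosing cell on the fly (1D, linear shape functions).
          No E mesh is stored - only phi, mirroring the memory-saving idea above."""
          i = np.floor(xp / dx).astype(int)          # index of the cell's left node
          i = np.clip(i, 0, len(phi) - 2)
          # with linear (CIC) shape functions, -d(phi)/dx is constant inside the cell
          return -(phi[i + 1] - phi[i]) / dx

      # toy check: phi = -x gives a uniform field E = +1 everywhere
      dx = 0.1
      phi = -np.arange(0.0, 1.0 + dx, dx)
      particles = np.array([0.03, 0.47, 0.91])
      print(gather_efield_from_phi(particles, phi, dx))   # -> [1. 1. 1.]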

  14. Neutron transport-burnup code MCORGS and its application in fusion fission hybrid blanket conceptual research

    Science.gov (United States)

    Shi, Xue-Ming; Peng, Xian-Jue

    2016-09-01

    Fusion science and technology has made progress in the last decades. However, commercialization of fusion reactors still faces challenges relating to higher fusion energy gain, irradiation-resistant material, and tritium self-sufficiency. Fusion Fission Hybrid Reactors (FFHR) can be introduced to accelerate the early application of fusion energy. Traditionally, FFHRs have been classified as either breeders or transmuters. Both need partition of plutonium from spent fuel, which will pose nuclear proliferation risks. A conceptual design of a Fusion Fission Hybrid Reactor for Energy (FFHR-E), which can make full use of natural uranium with lower nuclear proliferation risk, is presented. The fusion core parameters are similar to those of the International Thermonuclear Experimental Reactor. An alloy of natural uranium and zirconium is adopted in the fission blanket, which is cooled by light water. In order to model blanket burnup problems, a linkage code MCORGS, which couples MCNP4B and ORIGEN-S, is developed and validated through several typical benchmarks. The average blanket energy Multiplication and Tritium Breeding Ratio can be maintained at 10 and 1.15 respectively over tens of years of continuous irradiation. If simple reprocessing without separation of plutonium from uranium is adopted every few years, FFHR-E can achieve better neutronic performance. MCORGS has also been used to analyze the ultra-deep burnup model of Laser Inertial Confinement Fusion Fission Energy (LIFE) from LLNL, and a new blanket design that uses Pb instead of Be as the neutron multiplier is proposed. In addition, MCORGS has been used to simulate the fluid transmuter model of the In-Zinerater from Sandia. A brief comparison of LIFE, In-Zinerater, and FFHR-E will be given.

  15. A Hybrid Chaos-Particle Swarm Optimization Algorithm for the Vehicle Routing Problem with Time Window

    Directory of Open Access Journals (Sweden)

    Qi Hu

    2013-04-01

    Full Text Available State-of-the-art heuristic algorithms for solving the vehicle routing problem with time windows (VRPTW) usually show slow convergence during the early iterations and easily fall into locally optimal solutions. Focusing on these problems, this paper analyzes the particle encoding and decoding strategy of the particle swarm optimization algorithm, the construction of the vehicle routes and the detection of locally optimal solutions. Based on these, a hybrid chaos-particle swarm optimization algorithm (HPSO) is proposed to solve the VRPTW. The chaos algorithm is employed to re-initialize the particle swarm. An efficient insertion heuristic algorithm is also proposed to build valid vehicle routes in the particle decoding process. A premature-convergence judgment mechanism is formulated and combined with the chaos algorithm and Gaussian mutation into HPSO for when the particle swarm falls into local convergence. Extensive experiments are carried out to test the parameter settings of the insertion heuristic algorithm and to verify that they correspond to the real distribution of the data in the concrete problem. It is also shown that HPSO achieves better performance than other state-of-the-art algorithms in solving the VRPTW.
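
    The chaos component of HPSO is described only in words above. As a hedged illustration, the Python sketch below shows how a logistic-map sequence is often used to re-initialize a particle swarm once premature convergence is detected; the map, diversity threshold and bounds are illustrative assumptions rather than the paper's exact settings.

      import numpy as np

      def logistic_map_sequence(n, x0=0.7, mu=4.0):
          """Chaotic sequence in (0, 1) from the logistic map x <- mu*x*(1-x)."""
          seq = np.empty(n)
          x = x0
          for k in range(n):
              x = mu * x * (1.0 - x)
              seq[k] = x
          return seq

      def chaotic_reinit(swarm, lo, hi, keep_best_idx):
          """Re-initialize all particles except the current best using chaotic numbers,
          a common remedy once the swarm is judged to have converged prematurely."""
          n, dim = swarm.shape
          chaos = logistic_map_sequence(n * dim).reshape(n, dim)
          new = lo + chaos * (hi - lo)                # map chaos values into the search box
          new[keep_best_idx] = swarm[keep_best_idx]   # preserve the best particle
          return new

      def diversity(swarm):
          """Mean distance to the swarm centroid - a simple premature-convergence indicator."""
          return np.mean(np.linalg.norm(swarm - swarm.mean(axis=0), axis=1))

      swarm = np.full((10, 3), 0.5) + 1e-3 * np.random.default_rng(2).standard_normal((10, 3))
      if diversity(swarm) < 0.05:                     # illustrative threshold
          swarm = chaotic_reinit(swarm, lo=0.0, hi=1.0, keep_best_idx=0)
      print(diversity(swarm))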

  16. Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerators boards

    Science.gov (United States)

    Fonseca, Ricardo

    2014-10-01

    The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest towards Exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and more recently the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as Intel Many Integrated Core Architecture (MIC), offer peak theoretical performances of >1 TFlop/s for general purpose calculations in a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We will focus on the parallelization and vectorization strategies followed, and present a detailed performance evaluation of code performance in comparison with the CPU code.

  17. SimTrack: A compact c++ code for particle orbit and spin tracking in accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Yun

    2015-11-21

    SimTrack is a compact c++ code of 6-d symplectic element-by-element particle tracking in accelerators originally designed for head-on beam–beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides a 6-d symplectic orbit tracking with the 4th order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam–beam interaction. Since its inception in 2009, SimTrack has been intensively used for dynamic aperture calculations with beam–beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this paper, I will present the code architecture, physics models, and some selected examples of its applications to RHIC and a future electron-ion collider design eRHIC.

  18. Beam Dynamics in an Electron Lens with the Warp Particle-in-cell Code

    CERN Document Server

    Stancari, Giulio; Redaelli, Stefano

    2014-01-01

    Electron lenses are a mature technique for beam manipulation in colliders and storage rings. In an electron lens, a pulsed, magnetically confined electron beam with a given current-density profile interacts with the circulating beam to obtain the desired effect. Electron lenses were used in the Fermilab Tevatron collider for beam-beam compensation, for abort-gap clearing, and for halo scraping. They will be used in RHIC at BNL for head-on beam-beam compensation, and their application to the Large Hadron Collider for halo control is under development. At Fermilab, electron lenses will be implemented as lattice elements for nonlinear integrable optics. The design of electron lenses requires tools to calculate the kicks and wakefields experienced by the circulating beam. We use the Warp particle-in-cell code to study generation, transport, and evolution of the electron beam. For the first time, a fully 3-dimensional code is used for this purpose.

  19. Progress on the Development of the hPIC Particle-in-Cell Code

    Science.gov (United States)

    Dart, Cameron; Hayes, Alyssa; Khaziev, Rinat; Marcinko, Stephen; Curreli, Davide; Laboratory of Computational Plasma Physics Team

    2017-10-01

    Advancements were made in the development of the kinetic-kinetic electrostatic Particle-in-Cell code, hPIC, designed for large-scale simulation of the Plasma-Material Interface. hPIC achieved a weak scaling efficiency of 87% using the Algebraic Multigrid Solver BoomerAMG from the PETSc library on more than 64,000 cores of the Blue Waters supercomputer at the University of Illinois at Urbana-Champaign. The code successfully simulates two-stream instability and a volume of plasma over several square centimeters of surface extending out to the presheath in kinetic-kinetic mode. Results from a parametric study of the plasma sheath in strongly magnetized conditions will be presented, as well as a detailed analysis of the plasma sheath structure at grazing magnetic angles. The distribution function and its moments will be reported for plasma species in the simulation domain and at the material surface for plasma sheath simulations.

  20. Hybrid information privacy system: integration of chaotic neural network and RSA coding

    Science.gov (United States)

    Hsu, Ming-Kai; Willey, Jeff; Lee, Ting N.; Szu, Harold H.

    2005-03-01

    Electronic mail is used worldwide, and much of it is easily compromised by hackers. In this paper, we propose a free, fast and convenient hybrid privacy system to protect email communication. The privacy system is implemented by combining the RSA algorithm with a specific chaotic neural network encryption process. The receiver can decrypt a received email as long as it can reproduce the specified chaotic neural network series, the so-called spatial-temporal keys. The chaotic typing and the initial seed value of the chaotic neural network series, encrypted by the RSA algorithm, reproduce the spatial-temporal keys. The encrypted chaotic typing and initial seed value are hidden in a watermark mixed nonlinearly with the message media and wrapped with convolutional error-correction codes for wireless 3rd-generation cellular phones. The message media can be an arbitrary image. Pattern noise has to be considered during transmission, since it could affect or change the spatial-temporal keys. Since any change or modification of the chaotic typing or the initial seed value of the chaotic neural network series is not acceptable, the RSA codec system must be robust and fault-tolerant over the wireless channel. The robust and fault-tolerant properties of chaotic neural networks (CNN) were proved by a field theory of associative memory by Szu in 1997. The 1-D chaos-generating nodes based on the logistic map with arbitrary negative slope a = p/q, generating the N-shaped sigmoid, were first given by Szu in 1992. In this paper, we simulated the robustness and fault-tolerance properties of CNN under additive noise and pattern noise. We also implement a private version of RSA coding and the chaos encryption process on messages.
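
    The chaotic key stream is described only qualitatively in the record. The hedged Python sketch below shows the basic ingredient: a logistic-map orbit seeded by shared secret values is quantized into a byte keystream and XORed with the message. The quantization and the XOR cipher are illustrative simplifications of the chaotic-neural-network scheme, and the RSA wrapping of the seed is not shown.

      import numpy as np

      def chaotic_keystream(length, seed=0.61803, mu=3.99):
          """Byte keystream from a logistic-map orbit; 'seed' and 'mu' play the role
          of the shared spatial-temporal key material (illustrative quantization)."""
          x = seed
          out = bytearray()
          for _ in range(length):
              x = mu * x * (1.0 - x)
              out.append(int(x * 256) % 256)
          return bytes(out)

      def xor_cipher(data: bytes, key: bytes) -> bytes:
          return bytes(d ^ k for d, k in zip(data, key))

      message = b"hybrid chaos + RSA demo"
      key = chaotic_keystream(len(message))
      cipher = xor_cipher(message, key)
      # the receiver regenerates the same keystream from the shared seed and decrypts
      assert xor_cipher(cipher, chaotic_keystream(len(message))) == message
      print(cipher.hex())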

  1. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    OpenAIRE

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergio; Cela, José M.; Castejón, Francisco

    2015-01-01

    In this work, we will explore the feasibility of porting a Particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a system-on-chip Samsung Exynos 5 with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages. The research leading to these results has received funding from the European Community's Seventh...

  2. TIERCE: A code system for particles and radiation transport in thick targets

    Energy Technology Data Exchange (ETDEWEB)

    Bersillon, O.; Bauge, E.; Borne, F.; Clergeau, J.F.; Collin, M.; Cotten, D.; Delaroche, J.P.; Duarte, H.; Flament, J.L.; Girod, M.; Gosselin, G.; Granier, T.; Hilaire, S.; Morel, P.; Perrier, R.; Romain, P.; Roux, L. [CEA, Bruyeres-le-Chatel (France). Service de Physique Nucleaire

    1997-09-01

    Over the last few years, a great effort at Bruyeres-le-Chatel has been the development of the TIERCE code system for the transport of particles and radiations in complex geometry. The comparison of calculated results with experimental data, either microscopic (double differential spectra, residual nuclide yield...) or macroscopic (energy deposition, neutron leakage...), shows the need to improve the nuclear reaction models used. We present some new developments concerning data required for the evaporation model in the framework of a microscopic approach. 22 refs., 6 figs.

  3. Particle-in-cell/accelerator code for space-charge dominated beam simulation

    Energy Technology Data Exchange (ETDEWEB)

    2012-05-08

    Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space-charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL and the University of Maryland. It was originally designed and optimized for heavy ion fusion accelerator physics studies, but has received use in a broader range of applications, including for example laser wakefield accelerators, e-cloud studies in high energy accelerators, particle traps and other areas. At present it incorporates 3-D, axisymmetric (r,z), planar (x-z) and transverse slice (x,y) descriptions, with both electrostatic and electro-magnetic fields, and a beam envelope model. The code is built atop the Python interpreter language.

  4. Tribological Properties of Aluminum Alloy treated by Fine Particle Peening/DLC Hybrid Surface Modification

    Directory of Open Access Journals (Sweden)

    Nanbu H.

    2010-06-01

    Full Text Available In order to improve the adhesiveness of the DLC coating, Fine Particle Peening (FPP) treatment was employed as a pre-treatment for the DLC coating process. FPP treatment was performed using SiC shot particles, and then the AA6061-T6 aluminum alloy was DLC-coated. A SiC-rich layer was formed at the surface of the aluminum alloy by the FPP treatment because small chips of the shot particles were embedded into the substrate surface. Reciprocating sliding tests were conducted to measure the friction coefficients. While the DLC-coated specimen without FPP treatment showed a sudden increase in friction coefficient at an early stage of the wear cycles, the FPP/DLC hybrid treated specimen maintained a low friction coefficient throughout the test period. Further investigation revealed that the tribological properties of the substrate after DLC coating were improved with an increase in the amount of Si at the surface.

  5. A hybrid self-adaptive Particle Swarm Optimization–Genetic Algorithm–Radial Basis Function model for annual electricity demand prediction

    International Nuclear Information System (INIS)

    Yu, Shiwei; Wang, Ke; Wei, Yi-Ming

    2015-01-01

    Highlights: • A hybrid self-adaptive PSO–GA-RBF model is proposed for electricity demand prediction. • Each mixed-coding particle is composed of two coding parts, binary and real. • Five independent variables have been selected to predict future electricity consumption in Wuhan. • The proposed model has a simpler structure or higher estimating precision than other ANN models. • No matter what the scenario, the electricity consumption of Wuhan will grow rapidly. - Abstract: The present study proposes a hybrid Particle Swarm Optimization and Genetic Algorithm optimized Radial Basis Function (PSO–GA-RBF) neural network for the prediction of annual electricity demand. In the model, each mixed-coding particle (or chromosome) is composed of two coding parts, binary and real, which optimize the structure of the RBF by GA operations and the parameters of the basis functions and weights by a PSO–GA implementation. Five independent variables have been selected to predict future electricity consumption in Wuhan using the optimized networks. The results show that (1) the proposed PSO–GA-RBF model has a simpler network structure (fewer hidden neurons) or higher estimation precision than the other selected ANN models; and (2) no matter what the scenario, the electricity consumption of Wuhan will grow rapidly at average annual growth rates of about 9.7–11.5%. By 2020, the electricity demand in the planning scenario, the highest among the scenarios, will be 95.85 billion kW h. The lowest demand is estimated for the business-as-usual scenario, and will be 88.45 billion kW h.
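
    The 'mixed-coding particle' idea above can be made concrete with a hedged Python sketch: each particle carries a binary part that switches candidate RBF hidden neurons on or off and a real part that holds the corresponding centers, widths and output weights. The names and shapes below are illustrative and are not taken from the paper.

      import numpy as np
      from dataclasses import dataclass

      @dataclass
      class MixedParticle:
          """One mixed-coding particle for RBF structure + parameter optimization."""
          active: np.ndarray    # binary part: which candidate hidden neurons are used
          centers: np.ndarray   # real part: candidate RBF centers, shape (max_neurons, n_inputs)
          widths: np.ndarray    # real part: candidate RBF widths, shape (max_neurons,)
          weights: np.ndarray   # real part: output weights, shape (max_neurons,)

          def predict(self, X):
              """Evaluate the RBF network encoded by this particle on inputs X."""
              idx = np.flatnonzero(self.active)
              d2 = ((X[:, None, :] - self.centers[idx][None, :, :]) ** 2).sum(-1)
              phi = np.exp(-d2 / (2.0 * self.widths[idx] ** 2))
              return phi @ self.weights[idx]

      rng = np.random.default_rng(3)
      max_neurons, n_inputs = 8, 5
      p = MixedParticle(
          active=rng.integers(0, 2, max_neurons),
          centers=rng.standard_normal((max_neurons, n_inputs)),
          widths=rng.uniform(0.5, 2.0, max_neurons),
          weights=rng.standard_normal(max_neurons),
      )
      X = rng.standard_normal((4, n_inputs))
      print(p.predict(X))    # 4 predicted values (e.g., electricity demand)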

  6. Collaborative Multi-Layer Network Coding For Hybrid Cellular Cognitive Radio Networks

    KAUST Repository

    Moubayed, Abdallah J.

    2014-05-01

    In this thesis, as an extension to [1], we propose a prioritized multi-layer network coding scheme for collaborative packet recovery in hybrid (interweave and underlay) cellular cognitive radio networks. This scheme allows uncoordinated collaboration between the collocated primary and cognitive radio base-stations in order to minimize their own as well as each other's packet recovery overheads, thus improving their throughput. The proposed scheme ensures that each network's performance is not degraded by its help to the other network. Moreover, it guarantees that the primary network's interference threshold is not violated in the same and adjacent cells. Yet, the scheme allows the reduction of the recovery overhead in the collocated primary and cognitive radio networks. The reduction in the cognitive radio network is further amplified due to the perfect detection of spectrum holes, which allows the cognitive radio base station to transmit at higher power without fear of violating the interference threshold of the primary network. For the secondary network, simulation results show reductions of 20% and 34% in the packet recovery overhead, compared to the non-collaborative scheme, for low and high probabilities of primary packet arrivals, respectively. For the primary network, this reduction was found to be 12%. Furthermore, with the use of fractional cooperation, the average recovery overhead is further reduced by around 5% for the primary network and around 10% for the secondary network when a high fractional cooperation probability is used.

  7. High performance 3D neutron transport on peta scale and hybrid architectures within APOLLO3 code

    International Nuclear Information System (INIS)

    Jamelot, E.; Dubois, J.; Lautard, J-J.; Calvin, C.; Baudron, A-M.

    2011-01-01

    APOLLO3 code is a common project of CEA, AREVA and EDF for the development of a new generation system for core physics analysis. We present here the parallelization of two deterministic transport solvers of APOLLO3: MINOS, a simplified 3D transport solver on structured Cartesian and hexagonal grids, and MINARET, a transport solver based on triangular meshes in 2D and prismatic ones in 3D. We used two different techniques to accelerate MINOS: a domain decomposition method, combined with an accelerated algorithm using GPU. The domain decomposition is based on the Schwarz iterative algorithm, with Robin boundary conditions to exchange information. The Robin parameters influence the convergence, and we detail how we optimized the choice of these parameters. The MINARET parallelization is based on the calculation of angular directions using explicit message passing. Fine-grain parallelization is also available for each angular direction using shared-memory multithreaded acceleration. Many performance results are presented on massively parallel architectures using more than 10³ cores and on hybrid architectures using some tens of GPUs. This work contributes to the HPC development in reactor physics at the CEA Nuclear Energy Division. (author)

  8. Collaborative Multi-Layer Network Coding in Hybrid Cellular Cognitive Radio Networks

    KAUST Repository

    Moubayed, Abdallah J.

    2015-05-01

    In this paper, as an extension to [1], we propose a prioritized multi-layer network coding scheme for collaborative packet recovery in hybrid (interweave and underlay) cellular cognitive radio networks. This scheme allows uncoordinated collaboration between the collocated primary and cognitive radio base-stations in order to minimize their own as well as each other's packet recovery overheads, thus improving their throughput. The proposed scheme ensures that each network's performance is not degraded by its help to the other network. Moreover, it guarantees that the primary network's interference threshold is not violated in the same and adjacent cells. Yet, the scheme allows the reduction of the recovery overhead in the collocated primary and cognitive radio networks. The reduction in the cognitive radio network is further amplified due to the perfect detection of spectrum holes, which allows the cognitive radio base station to transmit at higher power without fear of violating the interference threshold of the primary network. For the secondary network, simulation results show reductions of 20% and 34% in the packet recovery overhead, compared to the non-collaborative scheme, for low and high probabilities of primary packet arrivals, respectively. For the primary network, this reduction was found to be 12%. © 2015 IEEE.

  9. A novel hybrid particle swarm optimization for economic dispatch with valve-point loading effects

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher, E-mail: niknam@sutech.ac.i [Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz, P.O. 71555-313 (Iran, Islamic Republic of); Mojarrad, Hasan Doagou, E-mail: hasan_doagou@yahoo.co [Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz, P.O. 71555-313 (Iran, Islamic Republic of); Meymand, Hamed Zeinoddini, E-mail: h.zeinaddini@gmail.co [Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz, P.O. 71555-313 (Iran, Islamic Republic of)

    2011-04-15

    Economic dispatch (ED) is one of the important problems in the operation and management of electric power systems and is formulated as an optimization problem. Modern heuristic stochastic optimization techniques appear to be efficient in solving the ED problem without any restriction because of their ability to seek the global optimal solution. One such modern heuristic algorithm is particle swarm optimization (PSO), in which particles move toward the best positions found so far in order to locate the global minimum. Differential evolution (DE) is a robust stochastic method for solving non-linear and non-convex optimization problems, but its fast convergence can degrade its performance and reduce its search capability, leading to a higher probability of obtaining a local optimum. In order to overcome this drawback, a hybrid method is presented to solve the ED problem with the valve-point loading effect by integrating variable DE with fuzzy adaptive PSO, called FAPSO-VDE. DE is the main optimizer, and PSO is used to maintain the population diversity and prevent the DE run from being misled toward local optima for every improvement in the solution. The parameters of the proposed hybrid algorithm, such as the inertia weight and the mutation and crossover factors, are adaptively adjusted. The feasibility and effectiveness of the proposed hybrid algorithm are demonstrated for two case studies and the results are compared with those of other methods. It is shown that FAPSO-VDE has high solution quality, superior convergence characteristics and shorter computation time.
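
    The valve-point loading effect mentioned above is what makes the ED cost function non-convex. The widely used cost model with a rectified-sine valve-point term is sketched below in Python so that the objective being optimized is explicit; the unit coefficients shown are illustrative and are not the paper's case-study data.

      import numpy as np

      def ed_cost(P, a, b, c, e, f, Pmin):
          """Fuel cost with valve-point loading:
             F(P) = sum_i a_i P_i^2 + b_i P_i + c_i + |e_i * sin(f_i * (Pmin_i - P_i))|"""
          P = np.asarray(P, dtype=float)
          return np.sum(a * P**2 + b * P + c + np.abs(e * np.sin(f * (Pmin - P))))

      # illustrative 3-unit data (not the paper's case-study coefficients)
      a = np.array([0.0016, 0.0021, 0.0050])
      b = np.array([7.92, 7.85, 7.97])
      c = np.array([561.0, 310.0, 78.0])
      e = np.array([300.0, 200.0, 150.0])
      f = np.array([0.0315, 0.042, 0.063])
      Pmin = np.array([100.0, 50.0, 50.0])

      print(ed_cost([300.0, 150.0, 100.0], a, b, c, e, f, Pmin))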

  10. Variable weight Khazani-Syed code using hybrid fixed-dynamic technique for optical code division multiple access system

    Science.gov (United States)

    Anas, Siti Barirah Ahmad; Seyedzadeh, Saleh; Mokhtar, Makhfudzah; Sahbudin, Ratna Kalos Zakiah

    2016-10-01

    Future Internet consists of a wide spectrum of applications with different bit rates and quality of service (QoS) requirements. Prioritizing the services is essential to ensure that the delivery of information is at its best. Existing technologies have demonstrated how service differentiation techniques can be implemented in optical networks using data link and network layer operations. However, a physical layer approach can further improve system performance at a prescribed received signal quality by applying control at the bit level. This paper proposes a coding algorithm to support optical domain service differentiation using spectral amplitude coding techniques within an optical code division multiple access (OCDMA) scenario. A particular user or service has a varying weight applied to obtain the desired signal quality. The properties of the new code are compared with other OCDMA codes proposed for service differentiation. In addition, a mathematical model is developed for performance evaluation of the proposed code using two different detection techniques, namely direct decoding and complementary subtraction.

  11. Global Optimization Based on the Hybridization of Harmony Search and Particle Swarm Optimization Methods

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    Full Text Available We consider a class of stochastic global optimization search algorithms which in various publications are called behavioural, intellectual, metaheuristic, nature-inspired, swarm, multi-agent, population, etc. We use the last term. Experience in using population algorithms to solve global optimization challenges shows that the application of one such algorithm may not always be effective. Therefore, great attention is now paid to the hybridization of population algorithms for global optimization. Hybrid algorithms combine different algorithms, or identical algorithms with different values of their free parameters, so that the efficiency of one algorithm can compensate for the weakness of another. The purposes of this work are the development of a hybrid global optimization algorithm based on the known harmony search (HS) and particle swarm optimization (PSO) algorithms, its software implementation, and the study of its efficiency on a number of known benchmark problems and on a problem of dimensional optimization of a truss structure. We state the global optimization problem, review the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm, called PSO-HS, present the results of computational experiments with the developed algorithm and software, and formulate the main results of the work and prospects for its development.
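
    Since the record only names the two base algorithms, a hedged Python sketch of the harmony-search improvisation step, the ingredient that HS contributes to the PSO-HS hybrid, is given below; the HMCR/PAR values and the benchmark objective are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(4)

      def improvise(memory, lo, hi, hmcr=0.9, par=0.3, bw=0.05):
          """One harmony-search improvisation: build a new solution component-wise,
          either from harmony memory (with optional pitch adjustment) or at random."""
          dim = memory.shape[1]
          new = np.empty(dim)
          for j in range(dim):
              if rng.random() < hmcr:                       # memory consideration
                  new[j] = memory[rng.integers(len(memory)), j]
                  if rng.random() < par:                    # pitch adjustment
                      new[j] += bw * (2.0 * rng.random() - 1.0)
              else:                                         # random selection
                  new[j] = rng.uniform(lo, hi)
          return np.clip(new, lo, hi)

      def objective(x):
          return np.sum(x**2)          # stand-in benchmark function

      memory = rng.uniform(-1.0, 1.0, (10, 3))              # harmony memory
      for _ in range(500):
          cand = improvise(memory, lo=-1.0, hi=1.0)
          worst = np.argmax([objective(h) for h in memory])
          if objective(cand) < objective(memory[worst]):
              memory[worst] = cand                          # replace the worst harmony
      print(min(objective(h) for h in memory))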

  12. Numerical schemes for the hybrid modeling approach of gas-particle turbulent flows

    International Nuclear Information System (INIS)

    Dorogan, K.

    2012-01-01

    Hybrid Moments/PDF methods have been shown to be well suited to the description of poly-dispersed turbulent two-phase flows in non-equilibrium, which are encountered in some industrial situations involving chemical reactions, combustion or sprays. They make it possible to obtain a sufficiently fine physical description of the poly-dispersity, non-linear source terms and convection phenomena. However, their approximations are affected by statistical noise, which in several situations may be a source of bias. An alternative hybrid Moments-Moments/PDF approach examined in this work consists in coupling the Moments and PDF descriptions within the description of the dispersed phase itself. This hybrid method could reduce the statistical error and remove the bias. However, such a coupling is not straightforward in practice and requires the development of accurate and stable numerical schemes. The approaches introduced in this work rely on the combined use of up-winding and relaxation-type techniques. They yield stable unsteady approximations for a system of partial differential equations containing non-smooth external data provided by the PDF part of the model. A comparison of the results obtained using the present method with those of the 'classical' hybrid approach is presented in terms of the numerical errors for the case of a co-current gas-particle wall jet. (author)

  13. New electromagnetic particle simulation code for the analysis of spacecraft-plasma interactions

    International Nuclear Information System (INIS)

    Miyake, Yohei; Usui, Hideyuki

    2009-01-01

    A novel particle simulation code, the electromagnetic spacecraft environment simulator (EMSES), has been developed for the self-consistent analysis of spacecraft-plasma interactions on the full electromagnetic (EM) basis. EMSES includes several boundary treatments carefully coded for both longitudinal and transverse electric fields to satisfy perfectly conducting surface conditions. For the longitudinal component, the following are considered: (1) the surface charge accumulation caused by impinging or emitted particles and (2) the surface charge redistribution, such that the surface becomes an equipotential. For item (1), a special treatment has been adopted for the current density calculated around the spacecraft surface, so that the charge accumulation occurs exactly on the surface. As a result, (1) is realized automatically in the updates of the charge density and the electric field through the current density. Item (2) is achieved by applying the capacity matrix method. Meanwhile, the transverse electric field is simply set to zero for components defined inside and tangential to the spacecraft surfaces. This paper also presents the validation of EMSES by performing test simulations for spacecraft charging and peculiar EM wave modes in a plasma sheath.

  14. Particle and heavy ion transport code system, PHITS, version 2.52

    International Nuclear Information System (INIS)

    Sato, Tatsuhiko; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Noda, Shusaku; Ogawa, Tatsuhiko; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Niita, Koji; Iwase, Hiroshi; Chiba, Satoshi; Furuta, Takuya; Sihver, Lembit

    2013-01-01

    An upgraded version of the Particle and Heavy Ion Transport code System, PHITS2.52, was developed and released to the public. The new version has been greatly improved from the previously released version, PHITS2.24, in terms of not only the code itself but also the contents of its package, such as the attached data libraries. In the new version, a higher accuracy of simulation was achieved by implementing several latest nuclear reaction models. The reliability of the simulation was improved by modifying both the algorithms for the electron-, positron-, and photon-transport simulations and the procedure for calculating the statistical uncertainties of the tally results. Estimation of the time evolution of radioactivity became feasible by incorporating the activation calculation program DCHAIN-SP into the new package. The efficiency of the simulation was also improved as a result of the implementation of shared-memory parallelization and the optimization of several time-consuming algorithms. Furthermore, a number of new user-support tools and functions that help users to intuitively and effectively perform PHITS simulations were developed and incorporated. Due to these improvements, PHITS is now a more powerful tool for particle transport simulation applicable to various research and development fields, such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. (author)

  15. Algorithm for Wave-Particle Resonances in Fluid Codes - Final Report

    CERN Document Server

    Mattor, N

    2000-01-01

    We review the work performed under LDRD ER grant 98-ERD-099. The goal of this work is to write a subroutine for a fluid turbulence code that allows it to incorporate wave-particle resonances (WPR). WPR historically have required a kinetic code, with extra dimensions needed to evolve the phase space distribution function, f(x, v, t). The main results accomplished under this grant have been: (1) Derivation of a nonlinear closure term for 1D electrostatic collisionless fluid; (2) Writing of a 1D electrostatic fluid code, ''es1f,'' with a subroutine to calculate the aforementioned closure term; (3) derivation of several methods to calculate the closure term, including Eulerian, Euler-local, fully local, linearized, and linearized zero-phase-velocity, and implementation of these in es1f; (4) Successful modeling of the Landau damping of an arbitrary Langmuir wave; (5) Successful description of a kinetic two-stream instability up to the point of the first bounce; and (6) a spin-off project which uses a mathematical ...

  16. Algorithm for Wave-Particle Resonances in Fluid Codes - Final Report

    International Nuclear Information System (INIS)

    Mattor, N.

    2000-01-01

    We review the work performed under LDRD ER grant 98-ERD-099. The goal of this work is to write a subroutine for a fluid turbulence code that allows it to incorporate wave-particle resonances (WPR). WPR historically have required a kinetic code, with extra dimensions needed to evolve the phase space distribution function, f(x, v, t). The main results accomplished under this grant have been: (1) Derivation of a nonlinear closure term for 1D electrostatic collisionless fluid; (2) Writing of a 1D electrostatic fluid code, ''es1f,'' with a subroutine to calculate the aforementioned closure term; (3) derivation of several methods to calculate the closure term, including Eulerian, Euler-local, fully local, linearized, and linearized zero-phase-velocity, and implementation of these in es1f; (4) Successful modeling of the Landau damping of an arbitrary Langmuir wave; (5) Successful description of a kinetic two-stream instability up to the point of the first bounce; and (6) a spin-off project which uses a mathematical technique developed for the closure, known as the Phase Velocity Transform (PVT) to decompose turbulent fluctuations

  17. Modeling of MeV alpha particle energy transfer to lower hybrid waves

    International Nuclear Information System (INIS)

    Schivell, J.; Monticello, D.A.; Fisch, N.; Rax, J.M.

    1993-10-01

    The interaction between a lower hybrid wave and a fusion alpha particle displaces the alpha particle simultaneously in space and energy. This results in coupled diffusion. Diffusion of alphas down the density gradient could lead to their transferring energy to the wave. This could, in turn, put energy into current drive. An initial analytic study was done by Fisch and Rax. Here the authors calculate numerical solutions for the alpha energy transfer and study a range of conditions that are favorable for wave amplification from alpha energy. They find that it is possible for fusion alpha particles to transfer a large fraction of their energy to the lower hybrid wave. The numerical calculation shows that the net energy transfer is not sensitive to the value of the diffusion coefficient over a wide range of practical values. An extension of this idea, the use of a lossy boundary to enhance the energy transfer, is investigated. This technique is shown to offer a large potential benefit

  18. PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

    International Nuclear Information System (INIS)

    Iandola, F.N.; O'Brien, M.J.; Procassini, R.J.

    2010-01-01

    Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

  19. A parallel 3D particle-in-cell code with dynamic load balancing

    International Nuclear Information System (INIS)

    Wolfheimer, Felix; Gjonaj, Erion; Weiland, Thomas

    2006-01-01

    A parallel 3D electrostatic Particle-In-Cell (PIC) code including an algorithm for modelling Space Charge Limited (SCL) emission [E. Gjonaj, T. Weiland, 3D-modeling of space-charge-limited electron emission. A charge conserving algorithm, Proceedings of the 11th Biennial IEEE Conference on Electromagnetic Field Computation, 2004] is presented. A domain decomposition technique based on orthogonal recursive bisection is used to parallelize the computation on a distributed memory environment of clustered workstations. For problems with a highly nonuniform and time dependent distribution of particles, e.g., bunch dynamics, a dynamic load balancing between the processes is needed to preserve the parallel performance. The algorithm for the detection of a load imbalance and the redistribution of the tasks among the processes is based on a weight function criterion, where the weight of a cell measures the computational load associated with it. The algorithm is studied with two examples. In the first example, multiple electron bunches as occurring in the S-DALINAC [A. Richter, Operational experience at the S-DALINAC, Proceedings of the Fifth European Particle Accelerator Conference, 1996] accelerator are simulated in the absence of space charge fields. In the second example, the SCL emission and electron trajectories in an electron gun are simulated

  20. A parallel 3D particle-in-cell code with dynamic load balancing

    Energy Technology Data Exchange (ETDEWEB)

    Wolfheimer, Felix [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany)]. E-mail: wolfheimer@temf.de; Gjonaj, Erion [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany); Weiland, Thomas [Technische Universitaet Darmstadt, Institut fuer Theorie Elektromagnetischer Felder, Schlossgartenstr.8, 64283 Darmstadt (Germany)

    2006-03-01

    A parallel 3D electrostatic Particle-In-Cell (PIC) code including an algorithm for modelling Space Charge Limited (SCL) emission [E. Gjonaj, T. Weiland, 3D-modeling of space-charge-limited electron emission. A charge conserving algorithm, Proceedings of the 11th Biennial IEEE Conference on Electromagnetic Field Computation, 2004] is presented. A domain decomposition technique based on orthogonal recursive bisection is used to parallelize the computation on a distributed memory environment of clustered workstations. For problems with a highly nonuniform and time dependent distribution of particles, e.g., bunch dynamics, a dynamic load balancing between the processes is needed to preserve the parallel performance. The algorithm for the detection of a load imbalance and the redistribution of the tasks among the processes is based on a weight function criterion, where the weight of a cell measures the computational load associated with it. The algorithm is studied with two examples. In the first example, multiple electron bunches as occurring in the S-DALINAC [A. Richter, Operational experience at the S-DALINAC, Proceedings of the Fifth European Particle Accelerator Conference, 1996] accelerator are simulated in the absence of space charge fields. In the second example, the SCL emission and electron trajectories in an electron gun are simulated.

  1. A Hybrid Multiobjective Discrete Particle Swarm Optimization Algorithm for a SLA-Aware Service Composition Problem

    Directory of Open Access Journals (Sweden)

    Hao Yin

    2014-01-01

    Full Text Available For the SLA-aware service composition (SSC) problem, an optimization model is built and a hybrid multiobjective discrete particle swarm optimization algorithm (HMDPSO) is proposed in this paper. According to the characteristics of this problem, a particle updating strategy is designed by introducing a crossover operator. In order to restrain premature convergence of the particle swarm and increase its global search capacity, a swarm diversity indicator is introduced and a particle mutation strategy is proposed to increase the swarm diversity. To accelerate the process of obtaining feasible particle positions, a local search strategy based on constraint domination is proposed and incorporated into the proposed algorithm. Finally, some parameters in the HMDPSO algorithm are analyzed and set to suitable values, and then HMDPSO and HMDPSO+, the variant incorporating the local search strategy, are compared with recently proposed related algorithms on cases of different scales. The results show that HMDPSO+ can solve the SSC problem more effectively.

  2. Tripoli-3: monte Carlo transport code for neutral particles - version 3.5 - users manual

    International Nuclear Information System (INIS)

    Vergnaud, Th.; Nimal, J.C.; Chiron, M.

    2001-01-01

    The TRIPOLI-3 code applies the Monte Carlo method to neutron, gamma-ray and coupled neutron and gamma-ray transport calculations in three-dimensional geometries, either in steady-state conditions or with a time dependence. It can be used to study problems where there is a high flux attenuation between the source zone and the result zone (studies of shielding configurations or source-driven sub-critical systems, with fission taken into account), as well as problems where there is a low flux attenuation (neutronic calculations -- in a fuel lattice cell, for example -- where fission is taken into account, usually with the calculation of the effective multiplication factor, fine-structure studies, numerical experiments to investigate method approximations, etc.). TRIPOLI-3 has been operational since 1995 and is the version of the TRIPOLI code that follows on from TRIPOLI-2; it can be used on SUN, RISC600 and HP workstations and on PC under the Linux or Windows/NT operating systems. The code uses nuclear data libraries generated with the THEMIS/NJOY system. The current libraries were derived from ENDF/B6 and JEF2. There is also a response-function library based on a number of evaluations, notably the dosimetry libraries IRDF/85 and IRDF/90 as well as evaluations from JEF2. The treatment of particle transport is the same in version 3.5 as in version 3.4 of the TRIPOLI code, but version 3.5 is more convenient for preparing the input data and reading the output. A French version of the user's manual also exists. (authors)

  3. Hybrid PAPR reduction scheme with Huffman coding and DFT-spread technique for direct-detection optical OFDM systems

    Science.gov (United States)

    Peng, Miao; Chen, Ming; Zhou, Hui; Wan, Qiuzhen; Jiang, LeYong; Yang, Lin; Zheng, Zhiwei; Chen, Lin

    2018-01-01

    High peak-to-average power ratio (PAPR) of the transmitted signal is a major drawback in optical orthogonal frequency division multiplexing (OOFDM) systems. In this paper, we propose and experimentally demonstrate a novel hybrid scheme, combining Huffman coding and discrete Fourier transform spreading (DFT-spread), in order to reduce the high PAPR in a 16-QAM short-reach intensity-modulated and direct-detection OOFDM (IMDD-OOFDM) system. The experimental results demonstrate that the hybrid scheme can reduce the PAPR by about 1.5, 2, 3 and 6 dB, and achieve 1.5, 1, 2.5 and 3 dB receiver sensitivity improvements compared to clipping, DFT-spread, Huffman coding and original OFDM signals, respectively, at an error vector magnitude (EVM) of -10 dB after transmission over 20 km of standard single-mode fiber (SSMF). Furthermore, the throughput gain can be of the order of 30% by using the hybrid scheme compared with the case without Huffman coding.
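
    The PAPR benefit of DFT-spreading can be made concrete with a hedged numerical sketch: the same 16-QAM symbols are mapped onto the subcarriers either directly (plain OFDM) or after an M-point DFT precoder (DFT-spread), and the average PAPR of the two time-domain signals is compared. Subcarrier counts and the mapping are illustrative, and the Huffman-coding stage of the paper's hybrid scheme is not modeled.

      import numpy as np

      rng = np.random.default_rng(5)
      levels = np.array([-3.0, -1.0, 1.0, 3.0])

      def papr_db(x):
          p = np.abs(x) ** 2
          return 10.0 * np.log10(p.max() / p.mean())

      def qam16(n):
          return rng.choice(levels, n) + 1j * rng.choice(levels, n)

      M, N, blocks = 64, 256, 200      # M data symbols mapped onto N subcarriers
      plain, spread = [], []
      for _ in range(blocks):
          d = qam16(M)
          g1 = np.zeros(N, complex); g1[:M] = d                           # plain OFDM mapping
          g2 = np.zeros(N, complex); g2[:M] = np.fft.fft(d) / np.sqrt(M)  # DFT-spread precoding
          plain.append(papr_db(np.fft.ifft(g1)))
          spread.append(papr_db(np.fft.ifft(g2)))

      print(f"mean PAPR, plain OFDM:  {np.mean(plain):.1f} dB")
      print(f"mean PAPR, DFT-spread:  {np.mean(spread):.1f} dB")   # noticeably lower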

  4. Hybrid Optimization Algorithm of Particle Swarm Optimization and Cuckoo Search for Preventive Maintenance Period Optimization

    Directory of Open Access Journals (Sweden)

    Jianwen Guo

    2016-01-01

    Full Text Available All equipment must be maintained during its lifetime to ensure normal operation. Maintenance plays a critical role in the success of manufacturing enterprises. This paper proposes a preventive maintenance period optimization model (PMPOM) to find an optimal preventive maintenance period. By making use of the advantages of particle swarm optimization (PSO) and the cuckoo search (CS) algorithm, a hybrid optimization algorithm combining PSO and CS is proposed to solve the PMPOM problem. Tests on benchmark functions show that the proposed algorithm exhibits better performance than particle swarm optimization and cuckoo search alone. Experimental results show that the proposed algorithm has the advantages of strong optimization ability and fast convergence speed for solving the PMPOM problem.
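
    The cuckoo-search half of the hybrid relies on Lévy-flight steps. A hedged Python sketch of the standard Mantegna-style Lévy step, here used to perturb candidate maintenance periods, is shown below; the objective and parameter values are placeholders rather than the PMPOM model.

      import numpy as np
      from math import gamma, sin, pi

      rng = np.random.default_rng(6)

      def levy_step(dim, beta=1.5):
          """Mantegna's algorithm for a Levy-distributed random step (used by cuckoo search)."""
          sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0.0, sigma_u, dim)
          v = rng.normal(0.0, 1.0, dim)
          return u / np.abs(v) ** (1 / beta)

      def objective(T):
          # placeholder cost vs. maintenance period (the real PMPOM model is not reproduced)
          return (T - 42.0) ** 2 + 10.0 * np.sin(T)

      # cuckoo-style update of a population of candidate maintenance periods
      nests = rng.uniform(10.0, 100.0, 15)
      best = nests[np.argmin([objective(t) for t in nests])]
      for _ in range(300):
          cand = np.clip(nests + 0.01 * levy_step(nests.size) * (nests - best), 10.0, 100.0)
          improved = np.array([objective(c) for c in cand]) < np.array([objective(t) for t in nests])
          nests = np.where(improved, cand, nests)
          best = nests[np.argmin([objective(t) for t in nests])]
      print(best, objective(best))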

  5. Synthesis and spectroscopic properties of silica-dye-semiconductor nanocrystal hybrid particles.

    Science.gov (United States)

    Ren, Ting; Erker, Wolfgang; Basché, Thomas; Schärtl, Wolfgang

    2010-12-07

    We prepared silica-dye-nanocrystal hybrid particles and studied the energy transfer from semiconductor nanocrystals (= donor) to organic dye molecules (= acceptor). Multishell CdSe/CdS/ZnS semiconductor nanocrystals were adsorbed onto monodisperse Stöber silica particles with an outer silica shell of thickness 2-23 nm containing organic dye molecules (Texas Red). The thickness of this dye layer has a strong effect on the energy transfer efficiency, which is explained by the increase in the number of dye molecules homogeneously distributed within the silica shell, in combination with an enhanced surface adsorption of nanocrystals with increasing dye amount. Our conclusions were underlined by comparison of the experimental results with numerically calculated FRET efficiencies and by control experiments confirming attractive interaction between the nanocrystals and Texas Red freely dissolved in solution.

  6. Delay-area trade-off for MPRM circuits based on hybrid discrete particle swarm optimization

    International Nuclear Information System (INIS)

    Jiang Zhidi; Wang Zhenhai; Wang Pengjun

    2013-01-01

    Polarity optimization for mixed polarity Reed-Muller (MPRM) circuits is a combinatorial problem. Based on a study of discrete particle swarm optimization (DPSO) and mixed polarity, the corresponding relation between particles and mixed polarities is established, and a delay-area trade-off method for large-scale MPRM circuits is proposed. Firstly, the mutation operation and elitist strategy of genetic algorithms are incorporated into DPSO to develop a hybrid DPSO (HDPSO). Then the best polarity for the delay-area trade-off is searched for large-scale MPRM circuits by combining the HDPSO with a delay estimation model. Finally, the proposed algorithm is verified on MCNC benchmarks. Experimental results show that HDPSO achieves better convergence than DPSO in terms of search capability for large-scale MPRM circuits. (semiconductor integrated circuits)

  7. Aminopropyl-Silica Hybrid Particles as Supports for Humic Acids Immobilization

    Directory of Open Access Journals (Sweden)

    Mónika Sándor

    2016-01-01

    Full Text Available A series of aminopropyl-functionalized silica nanoparticles were prepared through a basic two-step sol-gel process in water. Prior to being aminopropyl-functionalized, silica particles with an average diameter of 549 nm were prepared from tetraethyl orthosilicate (TEOS) using a Stöber method. In a second step, aminopropyl-silica particles were prepared by silanization with 3-aminopropyltriethoxysilane (APTES), added drop by drop to the sol-gel mixture. The synthesized amino-functionalized silica particles are intended to be used as supports for the immobilization of humic acids (HA) through electrostatic bonds. Furthermore, by grafting, alongside APTES, unhydrolysable mono-, di- or trifunctional alkylsilanes (methyltriethoxysilane (MeTES), trimethylethoxysilane (Me3ES), diethoxydimethylsilane (Me2DES) and 1,2-bis(triethoxysilyl)ethane (BETES)) onto the silica particle surface, the free amino groups were intended to be spaced out in order to facilitate their interaction with the large HA molecules. Two sorts of HA were used for evaluating the immobilization capacity of the novel aminosilane supports. The results proved the efficient functionalization of the silica nanoparticles with amino groups and showed that the immobilization of the two tested types of humic acid substances was well achieved for all the TEOS/APTES = 20/1 (molar ratio) silica hybrids, whether or not the amino functions were spaced by alkyl groups. It was shown that the density of aminopropyl functions is low enough at this low APTES fraction and does not require further spacing by alkyl groups. Moreover, all the hybrids having negative zeta potential values exhibited low interaction with HA molecules.

  8. Apar-T: code, validation, and physical interpretation of particle-in-cell results

    Science.gov (United States)

    Melzani, Mickaël; Winisdoerffer, Christophe; Walder, Rolf; Folini, Doris; Favre, Jean M.; Krastanov, Stefan; Messmer, Peter

    2013-10-01

    We present the parallel particle-in-cell (PIC) code Apar-T and, more importantly, address the fundamental question of the relations between the PIC model, the Vlasov-Maxwell theory, and real plasmas. First, we present four validation tests: spectra from simulations of thermal plasmas, linear growth rates of the relativistic tearing instability and of the filamentation instability, and nonlinear filamentation merging phase. For the filamentation instability we show that the effective growth rates measured on the total energy can differ by more than 50% from the linear cold predictions and from the fastest modes of the simulation. We link these discrepancies to the superparticle number per cell and to the level of field fluctuations. Second, we detail a new method for initial loading of Maxwell-Jüttner particle distributions with relativistic bulk velocity and relativistic temperature, and explain why the traditional method with individual particle boosting fails. The formulation of the relativistic Harris equilibrium is generalized to arbitrary temperature and mass ratios. Both are required for the tearing instability setup. Third, we turn to the key point of this paper and scrutinize the question of what description of (weakly coupled) physical plasmas is obtained by PIC models. These models rely on two building blocks: coarse-graining, i.e., grouping of the order of p ~ 10^10 real particles into a single computer superparticle, and field storage on a grid with its subsequent finite superparticle size. We introduce the notion of coarse-graining dependent quantities, i.e., quantities depending on p. They derive from the PIC plasma parameter Λ_PIC, which we show to behave as Λ_PIC ∝ 1/p. We explore two important implications. One is that PIC collision- and fluctuation-induced thermalization times are expected to scale with the number of superparticles per grid cell, and thus to be a factor p ~ 10^10 smaller than in real plasmas, a fact that we confirm with
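    The coarse-graining scaling quoted above can be made concrete with a short back-of-the-envelope calculation; the density, temperature and grouping factor below are assumed values chosen only to illustrate Lambda_PIC ~ Lambda_real / p.

    ```python
    # Coarse-graining scaling of the PIC plasma parameter (assumed plasma values).
    import numpy as np

    n_e  = 1.0e20        # electron density [m^-3], assumed
    T_eV = 1.0e3         # temperature [eV], assumed
    p    = 1.0e10        # real particles per superparticle (order quoted above)

    eps0, e, kB = 8.854e-12, 1.602e-19, 1.381e-23
    T = T_eV * e / kB                                     # temperature in K
    lambda_D = np.sqrt(eps0 * kB * T / (n_e * e**2))      # Debye length [m]
    Lambda_real = n_e * lambda_D**3                       # particles per Debye cube
    Lambda_pic = Lambda_real / p                          # superparticles per Debye cube

    print(f"Debye length          : {lambda_D:.3e} m")
    print(f"Lambda (real plasma)  : {Lambda_real:.3e}")
    print(f"Lambda (PIC, p = 1e10): {Lambda_pic:.3e}")
    print("=> collision/fluctuation time scales are ~p times shorter in the PIC model")
    ```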

  9. Particle-in-cell plasma simulation codes on the connection machine

    International Nuclear Information System (INIS)

    Walker, D.W.

    1991-01-01

    Methods for implementing three-dimensional, electromagnetic, relativistic PIC plasma simulation codes on the Connection Machine (CM-2) are discussed. The gather and scatter phases of the PIC algorithm involve indirect indexing of data, which results in a large amount of communication on the CM-2. Different data decompositions are described that seek to reduce the amount of communication while maintaining good load balance. These methods require the particles to be spatially sorted at the start of each time step, which introduced another form of overhead. The different methods are implemented in CM Fortran on the CM-2 and compared. It was found that the general router is slow in performing the communication in the gather and scatter steps, which precludes an efficient CM Fortran implementation. An alternative method that uses PARIS calls and the NEWS communication network to pipeline data along the axes of the VP set is suggested as a more efficient algorithm
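    A minimal sketch of the underlying idea, not of the CM-2 implementation itself, is given below: sorting particles by cell before the scatter (deposition) step turns indirect, many-to-one memory accesses into ordered segment sums. The grid size and particle count are assumptions.

    ```python
    # Charge deposition with and without spatial sorting of the particles.
    import numpy as np

    rng = np.random.default_rng(2)
    n_cells, n_part = 64, 10_000
    x = rng.uniform(0.0, 1.0, n_part)            # particle positions in [0, 1)
    w = np.full(n_part, 1.0 / n_part)            # particle weights (charge)

    cell = np.minimum((x * n_cells).astype(int), n_cells - 1)

    # Unsorted scatter: indirect indexing, many-to-one writes
    rho_unsorted = np.zeros(n_cells)
    np.add.at(rho_unsorted, cell, w)

    # Sorted scatter: reorder particles by cell, then deposit contiguous segments
    order = np.argsort(cell, kind="stable")
    cell_s, w_s = cell[order], w[order]
    counts = np.bincount(cell_s, minlength=n_cells)
    starts = np.concatenate([[0], np.cumsum(counts)[:-1]])
    rho_sorted = np.add.reduceat(np.concatenate([w_s, [0.0]]), starts)
    rho_sorted[counts == 0] = 0.0                # cells with no particles

    assert np.allclose(rho_unsorted, rho_sorted)
    print("max cell charge:", rho_sorted.max())
    ```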

  10. Update on comparison of the particle production using Mars simulation code

    CERN Document Server

    Prior, G; Kirk, H G; Souchlas, N; Ding, X

    2011-01-01

    In the International Design Study for the Neutrino Factory (IDS-NF), a 5-15 GeV (kinetic energy) proton beam impinges on a Hg jet target in order to produce pions that will decay into muons. The muons are captured and transformed into a beam, then passed to the downstream acceleration system. The target sits in a solenoid field tapering from 20 T down to below 2 T over several meters, permitting an optimized capture of the pions that will produce useful muons for the machine. The target and pion capture systems have been simulated using MARS. This paper presents an updated comparison of the particle production using the MARS code versions m1507 and m1510 on different machines located at the European Organization for Nuclear Research (CERN) and Brookhaven National Laboratory (BNL).

  11. Advanced burnup calculation code system in a subcritical state with continuous-energy Monte Carlo code for fusion-fission hybrid reactor

    International Nuclear Information System (INIS)

    Matsunaka, Masayuki; Ohta, Masayuki; Miyamaru, Hiroyuki; Murata, Isao

    2009-01-01

    The fusion-fission (FF) hybrid reactor is a promising energy source that is thought to act as a bridge between the existing fission reactor and the genuine fusion reactor in the future. The burnup calculation system that aims at precise burnup calculations of a subcritical system was developed for the detailed design of the FF hybrid reactor, and the system consists of MCNP, ORIGEN, and postprocess codes. In the present study, the calculation system was substantially modified to improve the calculation accuracy and at the same time the calculation speed as well. The reaction rate estimation can be carried out accurately with the present system that uses track-length (TL) data in the continuous-energy treatment. As for the speed-up of the reaction rate calculation, a new TL data bunching scheme was developed so that only necessary TL data are used as long as the accuracy of the point-wise nuclear data is conserved. With the present system, an example analysis result for our proposed FF hybrid reactor is described, showing that the computation time could really be saved with the same accuracy as before. (author)
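    For orientation, the sketch below shows the generic continuous-energy track-length estimate of a reaction rate that such a system reconstructs from recorded track data; the cross-section function and track sample are invented placeholders, not MCNP output.

    ```python
    # Generic track-length (TL) estimator of a reaction rate:
    # R = (1/V) * sum_i w_i * l_i * Sigma(E_i), with placeholder inputs.
    import numpy as np

    rng = np.random.default_rng(3)

    def sigma_macroscopic(energy_mev):
        """Placeholder 1/v-like macroscopic cross section [1/cm] (an assumption)."""
        return 0.05 / np.sqrt(np.maximum(energy_mev, 1e-9))

    n_tracks = 100_000
    volume_cm3 = 10.0
    weight = np.ones(n_tracks)                       # statistical weights
    length = rng.exponential(1.0, n_tracks)          # track lengths in the cell [cm]
    energy = rng.uniform(0.01, 2.0, n_tracks)        # track energies [MeV]

    reaction_rate = np.sum(weight * length * sigma_macroscopic(energy)) / volume_cm3
    print(f"estimated reaction rate per source particle: {reaction_rate:.4e} /cm^3")
    ```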

  12. Daily River Flow Forecasting with Hybrid Support Vector Machine – Particle Swarm Optimization

    Science.gov (United States)

    Zaini, N.; Malek, M. A.; Yusoff, M.; Mardi, N. H.; Norhisham, S.

    2018-04-01

    The application of artificial intelligence techniques to river flow forecasting can further improve the management of water resources and flood prevention. This study concerns the development of a support vector machine (SVM) based model and its hybridization with particle swarm optimization (PSO) to forecast short-term daily river flow at the Upper Bertam Catchment located in Cameron Highlands, Malaysia. Ten years of historical rainfall, antecedent river flow data and various meteorological parameters, covering 2003 to 2012, are used in this study. Four SVM-based models are proposed, namely SVM1, SVM2, SVM-PSO1 and SVM-PSO2, to forecast river flow 1 to 7 days ahead. SVM1 and SVM-PSO1 are the models with historical rainfall and antecedent river flow as input, while SVM2 and SVM-PSO2 are the models with historical rainfall, antecedent river flow data and additional meteorological parameters as input. The performance of the proposed models is measured in terms of RMSE and R². It is found that SVM2 outperformed SVM1 and SVM-PSO2 outperformed SVM-PSO1, which means that the additional meteorological parameters used as input significantly affect model performance. The hybrid models SVM-PSO1 and SVM-PSO2 yield higher performance compared to SVM1 and SVM2. It is found that hybrid models are more effective in forecasting river flow 1 to 7 days ahead in the study area.
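    A minimal sketch of the SVM-PSO coupling is given below: a small swarm searches the SVR hyper-parameters (C, gamma) against validation RMSE. The synthetic data set and swarm settings are placeholders, not the catchment records used in the study.

    ```python
    # PSO tuning of SVR hyper-parameters against validation RMSE (toy data).
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(4)
    X = rng.uniform(0, 1, (300, 3))                  # stand-ins for rainfall, antecedent flow, ...
    y = 2 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.1 * rng.normal(size=300)
    X_tr, y_tr, X_va, y_va = X[:200], y[:200], X[200:], y[200:]

    def rmse_of(params):
        c, gamma = params
        model = SVR(C=c, gamma=gamma).fit(X_tr, y_tr)
        return np.sqrt(mean_squared_error(y_va, model.predict(X_va)))

    # Tiny PSO over log10(C) in [-1, 3] and log10(gamma) in [-3, 1]
    n_part, iters = 8, 15
    pos = rng.uniform([-1, -3], [3, 1], (n_part, 2))
    vel = np.zeros_like(pos)
    pbest, pval = pos.copy(), np.array([rmse_of(10.0 ** p) for p in pos])
    gbest = pbest[pval.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_part, 2))
        vel = 0.6 * vel + 1.7 * r1 * (pbest - pos) + 1.7 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, [-1, -3], [3, 1])
        val = np.array([rmse_of(10.0 ** p) for p in pos])
        improved = val < pval
        pbest[improved], pval[improved] = pos[improved], val[improved]
        gbest = pbest[pval.argmin()].copy()

    print("best (C, gamma):", 10.0 ** gbest, " validation RMSE:", pval.min())
    ```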

  13. Short-Term Wind Power Forecasting Using the Enhanced Particle Swarm Optimization Based Hybrid Method

    Directory of Open Access Journals (Sweden)

    Wen-Yeau Chang

    2013-09-01

    Full Text Available High penetration of wind power in the electricity system provides many challenges to power system operators, mainly due to the unpredictability and variability of wind power generation. Although wind energy may not be dispatched, an accurate forecasting method of wind speed and power generation can help power system operators reduce the risk of an unreliable electricity supply. This paper proposes an enhanced particle swarm optimization (EPSO) based hybrid forecasting method for short-term wind power forecasting. The hybrid forecasting method combines the persistence method, the back propagation neural network, and the radial basis function (RBF) neural network. The EPSO algorithm is employed to optimize the weight coefficients in the hybrid forecasting method. To demonstrate the effectiveness of the proposed method, the method is tested on the practical information of wind power generation of a wind energy conversion system (WECS) installed on the Taichung coast of Taiwan. Comparisons of forecasting performance are made with the individual forecasting methods. Good agreements between the realistic values and forecasting values are obtained; the test results show the proposed forecasting method is accurate and reliable.

  14. SHARP: A Spatially Higher-order, Relativistic Particle-in-cell Code

    Energy Technology Data Exchange (ETDEWEB)

    Shalaby, Mohamad; Broderick, Avery E. [Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1 (Canada); Chang, Philip [Department of Physics, University of Wisconsin-Milwaukee, 1900 E. Kenwood Boulevard, Milwaukee, WI 53211 (United States); Pfrommer, Christoph [Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, D-14482 Potsdam (Germany); Lamberts, Astrid [Theoretical Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States); Puchwein, Ewald, E-mail: mshalaby@live.ca [Institute of Astronomy and Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge, CB3 0HA (United Kingdom)

    2017-05-20

    Numerical heating in particle-in-cell (PIC) codes currently precludes the accurate simulation of cold, relativistic plasma over long periods, severely limiting their applications in astrophysical environments. We present a spatially higher-order accurate relativistic PIC algorithm in one spatial dimension, which conserves charge and momentum exactly. We utilize the smoothness implied by the usage of higher-order interpolation functions to achieve a spatially higher-order accurate algorithm (up to the fifth order). We validate our algorithm against several test problems—thermal stability of stationary plasma, stability of linear plasma waves, and two-stream instability in the relativistic and non-relativistic regimes. Comparing our simulations to exact solutions of the dispersion relations, we demonstrate that SHARP can quantitatively reproduce important kinetic features of the linear regime. Our simulations have a superior ability to control energy non-conservation and avoid numerical heating in comparison to common second-order schemes. We provide a natural definition for convergence of a general PIC algorithm: the complement of physical modes captured by the simulation, i.e., those that lie above the Poisson noise, must grow commensurately with the resolution. This implies that it is necessary to simultaneously increase the number of particles per cell and decrease the cell size. We demonstrate that traditional ways for testing for convergence fail, leading to plateauing of the energy error. This new PIC code enables us to faithfully study the long-term evolution of plasma problems that require absolute control of the energy and momentum conservation.
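    The spatially higher-order interpolation that such schemes rely on is conventionally built from centered B-spline particle shapes of increasing degree; the sketch below evaluates these standard shape functions (the specific functions used in SHARP may differ in detail) and checks that the deposition weights always sum to one.

    ```python
    # Centered cardinal B-spline particle shapes of increasing degree
    # (grid spacing taken as 1); weights form a partition of unity.
    import numpy as np

    def bspline(x, degree):
        """Centered cardinal B-spline of the given degree, evaluated at x."""
        x = np.asarray(x, dtype=float)
        if degree == 0:
            return np.where(np.abs(x) < 0.5, 1.0, 0.0)
        return (((degree + 1) / 2 + x) * bspline(x + 0.5, degree - 1)
                + ((degree + 1) / 2 - x) * bspline(x - 0.5, degree - 1)) / degree

    x_particle = 0.3                   # particle position in units of the cell size
    grid = np.arange(-3, 5)            # nearby grid points

    for degree in (1, 3, 5):           # linear, cubic, quintic particle shapes
        w = bspline(grid - x_particle, degree)
        print(f"degree {degree}: weights sum to {w.sum():.6f} ->", np.round(w, 4))
    ```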

  15. 'ACTIV' - a package of codes for charged particle and neutron activation analysis

    International Nuclear Information System (INIS)

    Cincu, Em.; Alexandreanu, B.; Manu, V.; Moisa, V.

    1997-01-01

    The 'ACTIV' Program is an advanced software package dedicated to applications of thermal neutron and charged particle activation (NAA and CPA) induced reactions. The program is designed to run on IBM PC-compatible personal computers (models XT/AT, 286 or more advanced) operating under DOS version 5.0 or later, with a minimum of 5 MB of hard disk space. The package consists of 6 software modules and a Nuclear Data Base comprising physical, nuclear reaction and decay data for thermal neutron, proton, deuteron and α-particle induced reactions on 15 selected metallic elements; the nuclear reaction data correspond to the energy range 5-100 MeV. In the first version - ACTIV 1.0 - the set of input data concerns: the sample type, irradiation and measurement conditions, the γ-ray spectrum identification code, the selected detection efficiency calibration curve, selected radionuclides, the selected standardization method for elemental analysis, and the version of the results. At present, the 'ACTIV' package comprises 6 software modules for processing the experimental data, which ensure computation of the following quantities: radionuclide activities, activation yield data (in the case of CPA) and elemental concentrations by relative and absolute standardization methods. Recently, software designed for processing complex γ-ray spectra was acquired and installed on our PC 486 (8 MB RAM, 100 MHz). The next step in developing the 'ACTIV' program envisages improving the existing computing codes, completing the data libraries, incorporating new software for the direct use of the 'Quantum TM MCA' data, and developing modules dedicated to uncertainty computation and optimization of the activation experiments

  16. CROWDED HYBRID PANEL MANUFACTURED WITH PEANUT HULLS REINFORCED WITH ITAÚBA WOOD PARTICLES

    Directory of Open Access Journals (Sweden)

    Guilherme Barbirato

    2014-09-01

    Full Text Available http://dx.doi.org/10.5902/1980509815726 In this paper, the potential use of peanut hulls and wood particles of the itaúba (Mezilaurus itauba) species was studied in order to add value to these materials through the manufacture of hybrid particleboards and to compare their physical and mechanical performance as well as durability. Two adhesives were used: a bi-component polyurethane resin based on castor (mamona) oil and urea-formaldehyde. Product quality was evaluated against the requirements of the NBR 14.810:2006 and APA PRP-108 standards, through physico-mechanical, microstructural and durability tests. The results indicate that the incorporation of wood particles increases the physical-mechanical properties of particleboard manufactured with peanut hulls, that the castor-oil-based polyurethane resin was effective as a particle adhesive binder, and that the durability assay indicated the material should be used under conditions of low exposure to moisture.

  17. Hybrid Bacterial Foraging and Particle Swarm Optimization for detecting Bundle Branch Block.

    Science.gov (United States)

    Kora, Padmavathi; Kalva, Sri Ramakrishna

    2015-01-01

    Abnormal cardiac beat identification is a key process in the detection of heart diseases. Our present study describes a procedure for the detection of left and right bundle branch block (LBBB and RBBB) Electrocardiogram (ECG) patterns. The electrical impulses that control the cardiac beat face difficulty in moving inside the heart. This problem is termed bundle branch block (BBB). BBB makes it harder for the heart to pump blood effectively through the heart circulatory system. ECG feature extraction is a key process in detecting heart ailments. Our present study comes up with a hybrid method combining two heuristic optimization methods: Bacterial Foraging Optimization (BFO) and Particle Swarm Optimization (PSO) for the feature selection of ECG signals. One of the major controlling forces of the BFO algorithm is the chemotactic movement of a bacterium that models a test solution. The chemotaxis process of the BFO depends on random search directions, which may lead to a delay in achieving the global optimum solution. The hybrid technique, Bacterial Foraging-Particle Swarm Optimization (BFPSO), incorporates concepts from BFO and PSO and creates individuals in a new generation. This BFPSO method performs local search through the chemotactic movement of BFO, while the global search over the entire search domain is accomplished by a PSO operator. The BFPSO feature values are given as input to the Levenberg-Marquardt Neural Network classifier.

  18. Hybrid unscented particle filter based state-of-charge determination for lead-acid batteries

    International Nuclear Information System (INIS)

    Shen, Yanqing

    2014-01-01

    Accurate prediction of cell SOC (state of charge) is important for the safety and functional capabilities of battery energy storage application systems. This paper presents a hybrid UPF (unscented particle filter) based combined SOC determination model for batteries. To simulate the entire dynamic electrical characteristics of batteries, a novel combined state space model, which takes current as a control input and SOC plus two constructed parameters as state variables, is advanced to represent cell behavior. In addition, an improved UPF method is used to evaluate cell SOC. Taking lead-acid batteries as an example, we apply the established model in tests. Results show that the evolved combined state space cell model simulates battery dynamics robustly with high accuracy, and the prediction based on the improved UPF method converges to the real SOC very quickly within an error of ±2%. - Highlights: • This paper introduces a hybrid UPF based SOC determination model for batteries. • The evolved model takes SOC and two constructed parameters as state variables. • The combined state space cell model simulates battery dynamics robustly. • An NLMS-based method is employed to reduce the search space and speed up convergence. • The novel model converges to the real SOC robustly and quickly with fewer particles
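    For orientation only, the sketch below runs a plain bootstrap particle filter on a coulomb-counting SOC model with a linear open-circuit-voltage curve; the paper's combined state-space model and unscented proposal are not reproduced, and all cell parameters are assumed placeholders.

    ```python
    # Bootstrap particle filter for SOC tracking on a toy battery model.
    import numpy as np

    rng = np.random.default_rng(5)
    n_p, steps, dt = 500, 200, 1.0          # particles, time steps, seconds
    capacity_As = 20.0 * 3600               # assumed 20 Ah cell
    r_int = 0.01                            # assumed internal resistance [ohm]

    def ocv(soc):                           # assumed linear open-circuit voltage
        return 11.8 + 1.0 * soc

    true_soc, current = 0.9, 5.0            # constant 5 A discharge
    particles = rng.normal(0.8, 0.05, n_p)  # deliberately biased initial guess
    weights = np.full(n_p, 1.0 / n_p)

    for _ in range(steps):
        true_soc -= current * dt / capacity_As
        v_meas = ocv(true_soc) - current * r_int + rng.normal(0, 0.01)

        # Predict: propagate particles through the (noisy) state model
        particles += -current * dt / capacity_As + rng.normal(0, 1e-4, n_p)
        # Update: weight by the voltage-measurement likelihood
        v_pred = ocv(particles) - current * r_int
        weights *= np.exp(-0.5 * ((v_meas - v_pred) / 0.01) ** 2)
        weights /= weights.sum()
        # Resample when the effective sample size collapses
        if 1.0 / np.sum(weights ** 2) < n_p / 2:
            idx = rng.choice(n_p, size=n_p, p=weights)
            particles, weights = particles[idx], np.full(n_p, 1.0 / n_p)

    print(f"true SOC {true_soc:.3f}  estimated SOC {np.dot(weights, particles):.3f}")
    ```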

  19. DCHAIN-SP 2001: High energy particle induced radioactivity calculation code

    Energy Technology Data Exchange (ETDEWEB)

    Kai, Tetsuya; Maekawa, Fujio; Kasugai, Yoshimi; Takada, Hiroshi; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kosako, Kazuaki [Sumitomo Atomic Energy Industries, Ltd., Tokyo (Japan)

    2001-03-01

    To contribute to safety design calculations for induced radioactivity in the JAERI/KEK high-intensity proton accelerator project facilities, DCHAIN-SP, which calculates high-energy particle induced radioactivity, has been updated to DCHAIN-SP 2001. The following three items were improved: (1) Fission yield data are included to apply the code to experimental facility design for nuclear transmutation of long-lived radioactive waste where fissionable materials are treated. (2) Activation cross section data below 20 MeV are revised. In particular, attention is paid to cross section data of materials closely related to the facilities, i.e., mercury, lead and bismuth, and to tritium production cross sections, which are important for the safety of the facilities. (3) The user interface for input/output data has been refined to perform calculations more efficiently than in the previous version. Information needed for use of the code is attached in the Appendices: the DCHAIN-SP 2001 manual, the procedures for installation and execution of DCHAIN-SP, and sample problems. (author)

  20. Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian

    2017-02-01

    The THOR neutral particle transport code enables simulation of complex geometries for various problems from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V requiring computational efficiency. This has motivated various improvements including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code’s efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL’s Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former’s accuracy is bounded by the variability of communication on Falcon while the latter has an error on the order of 1%.

  1. Novel methods in the Particle-In-Cell accelerator Code-Framework Warp

    Energy Technology Data Exchange (ETDEWEB)

    Vay, J-L [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Grote, D. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Cohen, R. H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Friedman, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2012-12-26

    The Particle-In-Cell (PIC) Code-Framework Warp is being developed by the Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) to guide the development of accelerators that can deliver beams suitable for high-energy density experiments and implosion of inertial fusion capsules. It is also applied in various areas outside the Heavy Ion Fusion program to the study and design of existing and next-generation high-energy accelerators, including the study of electron cloud effects and laser wakefield acceleration for example. This study presents an overview of Warp's capabilities, summarizing recent original numerical methods that were developed by the HIFS-VNL (including PIC with adaptive mesh refinement, a large-timestep 'drift-Lorentz' mover for arbitrarily magnetized species, a relativistic Lorentz invariant leapfrog particle pusher, simulations in Lorentz-boosted frames, an electromagnetic solver with tunable numerical dispersion and efficient stride-based digital filtering), with special emphasis on the description of the mesh refinement capability. In addition, selected examples of the applications of the methods to the abovementioned fields are given.
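    For orientation, the sketch below shows a generic relativistic Boris push for a single particle in uniform fields; Warp's Lorentz-invariant leapfrog pusher and drift-Lorentz mover differ in their details, so this is only the baseline scheme such movers build on.

    ```python
    # Generic relativistic Boris pusher for one particle in uniform E and B.
    import numpy as np

    q, m, c = -1.602e-19, 9.109e-31, 2.998e8      # electron, SI units
    dt = 1e-12
    E = np.array([0.0, 1.0e5, 0.0])               # V/m, assumed uniform fields
    B = np.array([0.0, 0.0, 1.0e-2])              # T

    x = np.zeros(3)
    u = np.array([1.0e7, 0.0, 0.0])               # u = gamma * v

    for _ in range(1000):
        # Half electric kick
        u_minus = u + (q * dt / (2 * m)) * E
        gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / c**2)
        # Magnetic rotation
        t = (q * dt / (2 * m * gamma)) * B
        s = 2 * t / (1.0 + np.dot(t, t))
        u_prime = u_minus + np.cross(u_minus, t)
        u_plus = u_minus + np.cross(u_prime, s)
        # Second half electric kick, then position drift
        u = u_plus + (q * dt / (2 * m)) * E
        gamma = np.sqrt(1.0 + np.dot(u, u) / c**2)
        x = x + (u / gamma) * dt

    print("final position [m]:", x, " |u|/c:", np.linalg.norm(u) / c)
    ```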

  2. Synthetic radiation diagnostics in PIConGPU. Integrating spectral detectors into particle-in-cell codes

    Energy Technology Data Exchange (ETDEWEB)

    Pausch, Richard; Burau, Heiko; Huebl, Axel; Steiniger, Klaus [Helmholtz-Zentrum Dresden-Rossendorf (Germany); Technische Universitaet Dresden (Germany); Debus, Alexander; Widera, Rene; Bussmann, Michael [Helmholtz-Zentrum Dresden-Rossendorf (Germany)

    2016-07-01

    We present the in-situ far field radiation diagnostics in the particle-in-cell code PIConGPU. It was developed to close the gap between simulated plasma dynamics and radiation observed in laser plasma experiments. Its predictive capabilities, both qualitative and quantitative, have been tested against analytical models. Now, we apply this synthetic spectral diagnostic to investigate plasma dynamics in laser wakefield acceleration, laser foil irradiation and plasma instabilities. Our method is based on the far field approximation of the Lienard-Wiechert potential and allows predicting both coherent and incoherent radiation spectrally from infrared to X-rays. Its capability to resolve the radiation polarization and to determine the temporal and spatial origin of the radiation enables us to correlate specific spectral signatures with characteristic dynamics in the plasma. Furthermore, its direct integration into the highly-scalable GPU framework of PIConGPU allows computing radiation spectra for thousands of frequencies, hundreds of detector positions and billions of particles efficiently. In this talk we will demonstrate these capabilities on recent simulations of laser wakefield acceleration (LWFA) and high harmonics generation during target normal sheath acceleration (TNSA).
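    A minimal sketch of the kind of far-field Lienard-Wiechert spectrum such a diagnostic evaluates is given below, for a single electron on a prescribed circular orbit; the trajectory, units and frequency grid are toy assumptions rather than PIConGPU data.

    ```python
    # Far-field radiation spectrum from a prescribed trajectory via the
    # Lienard-Wiechert far-field term, integrated over retarded time.
    import numpy as np

    c = 1.0                                        # normalized units
    omega_c, beta0 = 1.0, 0.5                      # gyro frequency, speed/c
    t = np.linspace(0, 40 * np.pi, 20000)
    dt = t[1] - t[0]

    # Circular trajectory and its derivatives
    r = (beta0 * c / omega_c) * np.stack([np.cos(omega_c * t), np.sin(omega_c * t), 0 * t], 1)
    beta = beta0 * np.stack([-np.sin(omega_c * t), np.cos(omega_c * t), 0 * t], 1)
    beta_dot = -beta0 * omega_c * np.stack([np.cos(omega_c * t), np.sin(omega_c * t), 0 * t], 1)

    n = np.array([1.0, 0.0, 0.0])                  # observation direction
    one_minus_nb = 1.0 - beta @ n

    # Far-field amplitude term  n x ((n - beta) x beta_dot) / (1 - n.beta)^2
    num = np.cross(n, np.cross(n - beta, beta_dot))
    amp = num / one_minus_nb[:, None] ** 2
    retard = t - (r @ n) / c                       # retarded-time phase argument

    omegas = np.linspace(0.2, 6.0, 300)
    spectrum = np.empty_like(omegas)
    for k, w in enumerate(omegas):
        phase = np.exp(1j * w * retard)[:, None]
        integral = np.sum(amp * phase, axis=0) * dt
        spectrum[k] = np.sum(np.abs(integral) ** 2)   # ~ d^2 I / (dw dOmega)

    peak = omegas[spectrum.argmax()]
    print(f"spectral peak near omega = {peak:.2f} (harmonics of omega_c expected)")
    ```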

  3. The CNCSN-2: One, two-and three-dimensional coupled neutral and charged particle discrete ordinates code system

    International Nuclear Information System (INIS)

    Voloschenko, A. M.; Gukov, S. V.; Russkov, A. A.; Gurevich, M. I.; Shkarovsky, D. A.; Kryuchkov, V. P.; Sumaneev, O. V.; Dubinin, A. A.

    2009-01-01

    The ROZ-6, KASKAD-S and KATRIN codes solve the multigroup transport equation for neutrons, photons and charged particles in one, two and three dimensions, respectively. BOT3P-5 and ConDat can be used as preprocessors. ARVES-2.5, a cross-section preprocessor (a package of utilities for operating on cross section files in FMAC-M format), is included. The auxiliary codes MIXERM, CEPXS-BFP, SADCO-2.4 and CNCSN-2 are also used

  4. Hybrid particle swarm optimization algorithm and its application in nuclear engineering

    International Nuclear Information System (INIS)

    Liu, C.Y.; Yan, C.Q.; Wang, J.J.

    2014-01-01

    Highlights: • We propose a hybrid particle swarm optimization algorithm (HPSO). • A modified Nelder–Mead simplex search method is applied in HPSO. • The algorithm has high search precision and rapid calculation speed. • HPSO can be used in nuclear engineering optimization design problems. - Abstract: A hybrid particle swarm optimization algorithm with a feasibility-based rule for solving constrained optimization problems has been developed in this research. First, the zone of the global optimum is located through the particle swarm optimization process, and then a refined search for the global optimum is performed with the modified Nelder–Mead simplex algorithm. Simulations based on two well-studied benchmark problems demonstrate that the proposed algorithm is an efficient alternative for solving constrained optimization problems. The vertical electrical heating pressurizer is one of the key components of the reactor coolant system. A steady-state mathematical model of the pressurizer has been established. The optimization of the pressurizer weight has been carried out with the HPSO algorithm. The results show the pressurizer weight can be reduced by 16.92%. The thermal efficiencies of conventional PWR nuclear power plants are about 31–35% so far, much lower than those of fossil-fueled plants, although both are based on a steam cycle. A thermal equilibrium mathematical model of the nuclear power plant secondary loop has been established. An optimization case study has been conducted to improve the efficiency of the nuclear power plant with the proposed algorithm. The results show the thermal efficiency is improved by 0.5%
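    The two-level structure described above can be sketched as a coarse PSO pass followed by a local simplex refinement; the example below uses the Rosenbrock function and SciPy's standard Nelder-Mead routine, not the pressurizer weight model or the modified simplex of the paper.

    ```python
    # Coarse PSO followed by Nelder-Mead refinement on the Rosenbrock function.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)

    def rosenbrock(p):
        x, y = p
        return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

    # Stage 1: short, coarse PSO to locate the promising region
    n_part, iters = 15, 40
    pos = rng.uniform(-2, 2, (n_part, 2))
    vel = np.zeros_like(pos)
    pbest, pval = pos.copy(), np.apply_along_axis(rosenbrock, 1, pos)
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_part, 2))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -5.0, 5.0)
        val = np.apply_along_axis(rosenbrock, 1, pos)
        improved = val < pval
        pbest[improved], pval[improved] = pos[improved], val[improved]
        gbest = pbest[pval.argmin()].copy()

    # Stage 2: local refinement with a simplex search
    result = minimize(rosenbrock, gbest, method="Nelder-Mead")
    print("PSO estimate :", gbest, rosenbrock(gbest))
    print("after simplex:", result.x, result.fun)
    ```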

  5. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Wagner, John C.; Peplow, Douglas E.; Mosher, Scott W.; Evans, Thomas M.

    2010-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.

  7. Review of hybrid (deterministic/Monte Carlo) radiation transport methods, codes, and applications at Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Wagner, J.C.; Peplow, D.E.; Mosher, S.W.; Evans, T.M.

    2010-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications. (author)

  8. Verification and Validation of Monte Carlo n-Particle Code 6 (MCNP6) with Neutron Protection Factor Measurements of an Iron Box

    Science.gov (United States)

    2014-03-27

    Vehicle Code System (VCS), the Monte Carlo Adjoint SHielding (MASH), and the Monte Carlo N-Particle (MCNP) code. Of the three, the oldest and still most ... widely utilized radiation transport code is MCNP. First created at Los Alamos National Laboratory (LANL) in 1957, the code simulated neutral ... particle types, and previous versions of MCNP were repeatedly validated using both simple and complex geometries [12, 13]. Much greater discussion and

  9. Radiation protection studies for medical particle accelerators using FLUKA Monte Carlo code

    International Nuclear Information System (INIS)

    Infantino, Angelo; Mostacci, Domiziano; Cicoria, Gianfranco; Lucconi, Giulia; Pancaldi, Davide; Vichi, Sara; Zagni, Federico; Marengo, Mario

    2017-01-01

    Radiation protection (RP) in the use of medical cyclotrons involves many aspects both in the routine use and for the decommissioning of a site. Guidelines for site planning and installation, as well as for RP assessment, are given in international documents; however, the latter typically offer analytic methods of calculation of shielding and materials activation, in approximate or idealised geometry set-ups. The availability of Monte Carlo (MC) codes with accurate up-to-date libraries for transport and interaction of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of modern computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to RP at the same time. In this work, the well-known FLUKA MC code was used to simulate different aspects of RP in the use of biomedical accelerators, particularly for the production of medical radioisotopes. In the context of the Young Professionals Award, held at the IRPA 14 conference, only a part of the complete work is presented. In particular, the simulation of the GE PETtrace cyclotron (16.5 MeV) installed at the S. Orsola-Malpighi University Hospital was used to evaluate: the effective dose distribution around the equipment; the effective number of neutrons produced per incident proton and their spectral distribution; the activation of the structure of the cyclotron and the vault walls; and the activation of the ambient air, in particular the production of 41Ar. The simulations were validated, in terms of the physical and transport parameters to be used in the energy range of interest, through an extensive measurement campaign of the neutron environmental dose equivalent using a rem-counter and TLD dosemeters. The validated model was then used in the design and the licensing request of a new Positron Emission Tomography facility. (authors)

  10. TSV last for hybrid pixel detectors: Application to particle physics and imaging experiments

    CERN Document Server

    Henry, D; Berthelot, A; Cuchet, R; Chantre, C; Campbell, M

    Hybrid pixel detectors are now widely used in particle physics experiments and at synchrotron light sources. They have also stimulated growing interest in other fields and, in particular, in medical imaging. Through the continuous pursuit of miniaturization in CMOS it has been possible to increase the functionality per pixel while maintaining or even shrinking pixel dimensions. The main constraint on the more extensive use of the technology in all fields is the cost of module building and the difficulty of covering large areas seamlessly [1]. On the other hand, in the field of electronic component integration, a new approach called 3D Integration has been developed in recent years. This concept, based on using the vertical axis for component integration, makes it possible to improve the global performance of complex systems. Thanks to this technology, the cost and the form factor of components could be decreased and the performance of the global system could be enhanced. In the field of radiation imaging detectors the a...

  11. An Entropy-Based Adaptive Hybrid Particle Swarm Optimization for Disassembly Line Balancing Problems

    Directory of Open Access Journals (Sweden)

    Shanli Xiao

    2017-11-01

    Full Text Available In order to improve product disassembly efficiency, the disassembly line balancing problem (DLBP) is transformed into a problem of searching for the optimum path in a directed, weighted graph by constructing the disassembly hierarchy information graph (DHIG). Then, combining the characteristics of the disassembly sequence, an entropy-based adaptive hybrid particle swarm optimization algorithm (AHPSO) is presented. In this algorithm, entropy is introduced to measure the changing tendency of population diversity, and dimension learning, crossover and mutation operators are used to increase the probability of producing feasible disassembly solutions (FDS). The performance of the proposed methodology is tested on the primary problem instances available in the literature, and the results are compared with those of other evolutionary algorithms. The results show that the proposed algorithm is effective for solving the complex DLBP.
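    The entropy-based diversity measurement at the core of the adaptive scheme can be sketched as the Shannon entropy of per-dimension occupancy histograms of the swarm; the bin count and decision threshold below are assumptions.

    ```python
    # Shannon-entropy diversity measure for a particle swarm.
    import numpy as np

    def swarm_entropy(positions, bounds=(0.0, 1.0), n_bins=10):
        """Mean Shannon entropy (nats) of per-dimension position histograms."""
        ent = 0.0
        for d in range(positions.shape[1]):
            counts, _ = np.histogram(positions[:, d], bins=n_bins, range=bounds)
            p = counts / max(counts.sum(), 1)
            p = p[p > 0]
            ent += -np.sum(p * np.log(p))
        return ent / positions.shape[1]

    rng = np.random.default_rng(7)
    spread_out = rng.uniform(0, 1, (50, 5))               # diverse swarm
    collapsed = 0.5 + 0.01 * rng.normal(size=(50, 5))     # converged swarm

    for name, swarm in [("diverse", spread_out), ("converged", collapsed)]:
        h = swarm_entropy(swarm)
        action = "keep exploiting" if h > 1.0 else "raise mutation/crossover rate"
        print(f"{name:10s} swarm entropy = {h:.2f} -> {action}")
    ```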

  12. Generation expansion planning in Pool market: A hybrid modified game theory and particle swarm optimization

    International Nuclear Information System (INIS)

    Moghddas-Tafreshi, S.M.; Shayanfar, H.A.; Saliminia Lahiji, A.; Rabiee, A.; Aghaei, J.

    2011-01-01

    Unlike under the traditional policy, the Generation Expansion Planning (GEP) problem in a competitive framework is complicated. Under the new policy, each GENeration COmpany (GENCO) decides to invest in such a way that it obtains as much profit as possible. This paper presents a new hybrid algorithm to determine GEP in a Pool market. The proposed algorithm is divided into two programming levels: master and slave. At the master level, a modified game theory (MGT) approach is proposed to evaluate the contrast of the GENCOs by the Independent System Operator (ISO). At the slave level, a particle swarm optimization (PSO) method is used to find the best investment decision for each GENCO. The validity of the proposed method is examined in a case study including three GENCOs with multiple types of power plants. The results show that the presented method is both satisfactory and consistent with expectations.

  13. Numerical Simulation of Transitional, Hypersonic Flows using a Hybrid Particle-Continuum Method

    Science.gov (United States)

    Verhoff, Ashley Marie

    Analysis of hypersonic flows requires consideration of multiscale phenomena due to the range of flight regimes encountered, from rarefied conditions in the upper atmosphere to fully continuum flow at low altitudes. At transitional Knudsen numbers there are likely to be localized regions of strong thermodynamic nonequilibrium effects that invalidate the continuum assumptions of the Navier-Stokes equations. Accurate simulation of these regions, which include shock waves, boundary and shear layers, and low-density wakes, requires a kinetic theory-based approach where no prior assumptions are made regarding the molecular distribution function. Because of the nature of these types of flows, there is much to be gained in terms of both numerical efficiency and physical accuracy by developing hybrid particle-continuum simulation approaches. The focus of the present research effort is the continued development of the Modular Particle-Continuum (MPC) method, where the Navier-Stokes equations are solved numerically using computational fluid dynamics (CFD) techniques in regions of the flow field where continuum assumptions are valid, and the direct simulation Monte Carlo (DSMC) method is used where strong thermodynamic nonequilibrium effects are present. Numerical solutions of transitional, hypersonic flows are thus obtained with increased physical accuracy relative to CFD alone, and improved numerical efficiency is achieved in comparison to DSMC alone because this more computationally expensive method is restricted to those regions of the flow field where it is necessary to maintain physical accuracy. In this dissertation, a comprehensive assessment of the physical accuracy of the MPC method is performed, leading to the implementation of a non-vacuum supersonic outflow boundary condition in particle domains, and more consistent initialization of DSMC simulator particles along hybrid interfaces. The relative errors between MPC and full DSMC results are greatly reduced as a

  14. PIConGPU - How to build one of the fastest GPU particle-in-cell codes in the world

    Energy Technology Data Exchange (ETDEWEB)

    Burau, Heiko; Debus, Alexander; Helm, Anton; Huebl, Axel; Kluge, Thomas; Widera, Rene; Bussmann, Michael; Schramm, Ulrich; Cowan, Thomas [HZDR, Dresden (Germany); Juckeland, Guido; Nagel, Wolfgang [TU Dresden (Germany); ZIH, Dresden (Germany); Schmitt, Felix [NVIDIA (United States)

    2013-07-01

    We present the algorithmic building blocks of PIConGPU, one of the fastest implementations of the particle-in-cell algorithm on GPU clusters. PIConGPU is a highly-scalable, 3D3V electromagnetic PIC code that is used in laser plasma and astrophysical plasma simulations.

  15. Hybrid K-means and Particle Swarm Optimization for Clustering Credit Customers

    Directory of Open Access Journals (Sweden)

    Yusuf Priyo Anggodo

    2017-05-01

    Credit is the biggest source of revenue for a bank. However, banks have to be selective in deciding which clients can receive credit. The issue is complex because granting credit to the wrong customers can cause losses, and a large number of parameters must be weighed when assessing a customer's credit. Clustering is one way to address this issue. K-means is a simple and popular clustering method, but plain K-means cannot guarantee an optimal solution, so it needs to be improved. Particle swarm optimization (PSO) is a suitable optimization method for this purpose: it supports the clustering process by optimizing the centre point of each cluster. Two refinements are applied to PSO: a time-variant inertia makes the inertia weight w dynamic at each iteration, and velocity clamping controls the particle speed so particles reach better positions. In addition, to overcome premature convergence, PSO is hybridized with random injection. The results of this research provide optimal solutions for clustering credit customers. The tests show that hybrid PSO K-means gives better results than K-means and PSO K-means, with silhouette values of 0.57343, 0.792045 and 1 for K-means, PSO K-means and hybrid PSO K-means, respectively. Keywords: Credit, Clustering, PSO, K-means, Random Injection
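    One simple way to combine the two methods, sketched below, is to let each particle encode a complete set of cluster centroids, use the K-means quantization error as the fitness, and re-inject stagnant particles at random positions; the data set, cluster count and swarm settings are toy assumptions rather than the credit data.

    ```python
    # PSO over cluster centroids with random injection of stagnant particles.
    import numpy as np

    rng = np.random.default_rng(8)
    data = np.vstack([rng.normal(m, 0.3, (60, 2)) for m in ([0, 0], [3, 3], [0, 4])])
    k, n_part, iters = 3, 12, 60

    def quant_error(centroids):
        d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        return d.min(axis=1).mean()

    lo, hi = data.min(0), data.max(0)
    pos = rng.uniform(lo, hi, (n_part, k, 2))
    vel = np.zeros_like(pos)
    pbest, pval = pos.copy(), np.array([quant_error(p) for p in pos])
    gbest = pbest[pval.argmin()].copy()
    stall = np.zeros(n_part, dtype=int)

    for _ in range(iters):
        r1, r2 = rng.random((2, n_part, k, 2))
        vel = 0.72 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([quant_error(p) for p in pos])
        improved = val < pval
        pbest[improved], pval[improved] = pos[improved], val[improved]
        stall = np.where(improved, 0, stall + 1)
        # Random injection: restart particles that have stagnated for too long
        stuck = stall > 8
        pos[stuck] = rng.uniform(lo, hi, (stuck.sum(), k, 2))
        stall[stuck] = 0
        gbest = pbest[pval.argmin()].copy()

    print("best quantization error:", pval.min())
    print("centroids:\n", np.round(gbest, 2))
    ```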

  16. Random geometry capability in RMC code for explicit analysis of polytype particle/pebble and applications to HTR-10 benchmark

    International Nuclear Information System (INIS)

    Liu, Shichang; Li, Zeguang; Wang, Kan; Cheng, Quan; She, Ding

    2018-01-01

    Highlights: •A new random geometry capability was developed in RMC for mixed and polytype particles/pebbles. •This capability was applied to full core calculations of the HTR-10 benchmark. •Reactivity, temperature coefficient and control rod worth of HTR-10 were compared. •This method can explicitly model different packing fractions of different pebbles. •A Monte Carlo code with this method can simulate polytype particle/pebble type reactors. -- Abstract: With the increasing demands of high-fidelity neutronics analysis and the development of computer technology, the Monte Carlo method is becoming more and more attractive for accurate simulation of pebble bed High Temperature gas-cooled Reactors (HTR), owing to its advantages of flexible geometry modeling and the use of continuous-energy nuclear cross sections. Traditional Monte Carlo codes can treat the double-heterogeneous geometry of a pebble bed by explicit geometry description. However, packing methods such as Random Sequential Addition (RSA) can only produce a sphere packing up to a 38% volume packing fraction, while the Discrete Element Method (DEM) is troublesome and also time consuming. Moreover, it is difficult and inconvenient for traditional Monte Carlo codes to simulate mixed and polytype particles or pebbles. A new random geometry method was developed in the Monte Carlo code RMC to simulate particle transport in polytype particle/pebble double-heterogeneous geometry systems. This method was verified by several test cases, and applied to full core calculations of the HTR-10 benchmark. The reactivity, temperature coefficient and control rod worth of HTR-10 were compared for the full core and initial core in helium and air atmospheres respectively, and the results agree well with the benchmark and experimental results. This work provides an efficient tool for the innovative design of pebble bed and prism HTRs and molten salt reactors with polytype particles or pebbles using the Monte Carlo method.

  17. An integrated PCR colony hybridization approach to screen cDNA libraries for full-length coding sequences.

    Science.gov (United States)

    Pollier, Jacob; González-Guzmán, Miguel; Ardiles-Diaz, Wilson; Geelen, Danny; Goossens, Alain

    2011-01-01

    cDNA-Amplified Fragment Length Polymorphism (cDNA-AFLP) is a commonly used technique for genome-wide expression analysis that does not require prior sequence knowledge. Typically, quantitative expression data and sequence information are obtained for a large number of differentially expressed gene tags. However, most of the gene tags do not correspond to full-length (FL) coding sequences, which is a prerequisite for subsequent functional analysis. A medium-throughput screening strategy, based on integration of polymerase chain reaction (PCR) and colony hybridization, was developed that allows in parallel screening of a cDNA library for FL clones corresponding to incomplete cDNAs. The method was applied to screen for the FL open reading frames of a selection of 163 cDNA-AFLP tags from three different medicinal plants, leading to the identification of 109 (67%) FL clones. Furthermore, the protocol allows for the use of multiple probes in a single hybridization event, thus significantly increasing the throughput when screening for rare transcripts. The presented strategy offers an efficient method for the conversion of incomplete expressed sequence tags (ESTs), such as cDNA-AFLP tags, to FL-coding sequences.

  18. Benchmark of coupling codes (ALOHA, TOPLHA and GRILL3D) with ITER-relevant Lower Hybrid antenna

    International Nuclear Information System (INIS)

    Milanesio, D.; Hillairet, J.; Panaccione, L.; Maggiora, R.; Artaud, J.F.; Bae, Y.S.; Barbera, A.M.A.; Belo, J.; Berger-By, G.; Bernard, J.M.; Cara, Ph.; Cardinali, A.; Castaldo, C.; Ceccuzzi, S.; Cesario, R.; Decker, J.; Delpech, L.; Ekedahl, A.; Garcia, J.; Garibaldi, P.

    2011-01-01

    In order to assist the design of the future ITER Lower Hybrid launcher, the coupling codes ALOHA, from CEA/IRFM, TOPLHA, from Politecnico di Torino, and GRILL3D, developed by Dr. Mikhail Irzak (A.F. Ioffe Physico-Technical Institute, St. Petersburg, Russia) and operated by ENEA Frascati, have been compared for the updated (six modules with four active waveguides per module) Passive-Active Multi-junction (PAM) Lower Hybrid antenna. Both ALOHA and GRILL3D formulate the problem in terms of rectangular waveguide modes, while TOPLHA is based on a boundary-value problem with the adoption of a triangular cell mesh to represent the relevant waveguide surfaces. Several plasma profiles, with varying edge density and density increase, have been adopted to provide a complete description of the simulated launcher in terms of the reflection coefficient, computed at the beginning of each LH module, and of the power spectra. Good agreement is observed among the codes for all the simulated profiles.

  19. A New Hybrid Nelder-Mead Particle Swarm Optimization for Coordination Optimization of Directional Overcurrent Relays

    Directory of Open Access Journals (Sweden)

    An Liu

    2012-01-01

    Full Text Available Coordination optimization of directional overcurrent relays (DOCRs) is an important part of an efficient distribution system. This optimization problem involves obtaining the time dial setting (TDS) and pickup current (Ip) values of each DOCR. The optimal results should have the shortest primary relay operating time for all fault lines. Recently, the particle swarm optimization (PSO) algorithm has been considered an effective tool for linear/nonlinear optimization problems with application in the protection and coordination of power systems. With a limited runtime period, the conventional PSO considers the optimal solution as the final solution, and an early convergence of PSO results in decreased overall performance and an increase in the risk of mistaking local optima for global optima. Therefore, this study proposes a new hybrid Nelder-Mead simplex search method and particle swarm optimization (proposed NM-PSO) algorithm to solve the DOCR coordination optimization problem. PSO is the main optimizer, and the Nelder-Mead simplex search method is used to improve the efficiency of PSO due to its potential for rapid convergence. To validate the proposal, this study compared the performance of the proposed algorithm with that of PSO and original NM-PSO. The findings demonstrate the outstanding performance of the proposed NM-PSO in terms of computation speed, rate of convergence, and feasibility.

  20. OPTIMIZED PARTICLE SWARM OPTIMIZATION BASED DEADLINE CONSTRAINED TASK SCHEDULING IN HYBRID CLOUD

    Directory of Open Access Journals (Sweden)

    Dhananjay Kumar

    2016-01-01

    Full Text Available Cloud computing is a dominant way of sharing computing resources that can be configured and provisioned easily. Task scheduling in a hybrid cloud is a challenge, since it is hard to deliver the best QoS (Quality of Service) when demand is high. In this paper, a new resource allocation algorithm is proposed to find the best external cloud provider when the intermediate provider's resources are not enough to satisfy the customer's demand. The proposed algorithm, called Optimized Particle Swarm Optimization (OPSO), combines two metaheuristic algorithms, namely Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO). These metaheuristics search the solution space to find the best resource from the pool of resources and to obtain maximum profit even when the number of tasks submitted for execution is very high. The optimization allocates job requests to internal and external cloud providers to obtain maximum profit, and it improves system performance by increasing CPU utilization and handling multiple requests at the same time. The simulation results show that OPSO yields 0.1%-5% more profit for the intermediate cloud provider than standard PSO and ACO algorithms, and it also increases CPU utilization by 0.1%.

  1. Hybrid extended particle filter (HEPF) for integrated inertial navigation and global positioning systems

    International Nuclear Information System (INIS)

    Aggarwal, Priyanka; Syed, Zainab; El-Sheimy, Naser

    2009-01-01

    Navigation includes the integration of methodologies and systems for estimating time-varying position, velocity and attitude of moving objects. Navigation incorporating the integrated inertial navigation system (INS) and global positioning system (GPS) generally requires extensive evaluations of nonlinear equations involving double integration. Currently, integrated navigation systems are commonly implemented using the extended Kalman filter (EKF). The EKF assumes a linearized process, measurement models and Gaussian noise distributions. These assumptions are unrealistic for highly nonlinear systems like land vehicle navigation and may cause filter divergence. A particle filter (PF) is developed to enhance integrated INS/GPS system performance as it can easily deal with nonlinearity and non-Gaussian noises. In this paper, a hybrid extended particle filter (HEPF) is developed as an alternative to the well-known EKF to achieve better navigation data accuracy for low-cost microelectromechanical system sensors. The results show that the HEPF performs better than the EKF during GPS outages, especially when simulated outages are located in periods with high vehicle dynamics

  2. New hybrid genetic particle swarm optimization algorithm to design multi-zone binary filter.

    Science.gov (United States)

    Lin, Jie; Zhao, Hongyang; Ma, Yuan; Tan, Jiubin; Jin, Peng

    2016-05-16

    Binary phase filters have been used to achieve an optical needle with a small lateral size, but designing such a filter remains a scientific challenge. In this paper, a hybrid genetic particle swarm optimization (HGPSO) algorithm is proposed to design the binary phase filter. The HGPSO algorithm includes self-adaptive parameters and recombination and mutation operations that originate from the genetic algorithm. In benchmark tests, the HGPSO algorithm achieved global optimization and fast convergence. With an easy-to-perform optimization procedure, the number of iterations of HGPSO is decreased to about a quarter of that of the original particle swarm optimization process. A multi-zone binary phase filter is designed using the HGPSO. A long depth of focus and high resolution are achieved simultaneously, with a depth of focus and focal spot transverse size of 6.05λ and 0.41λ, respectively. Therefore, the proposed HGPSO can be applied to the optimization of filters with multiple parameters.

  3. Hybrid particle swarm optimization with Cauchy distribution for solving reentrant flexible flow shop with blocking constraint

    Directory of Open Access Journals (Sweden)

    Chatnugrob Sangsawang

    2016-06-01

    Full Text Available This paper addresses the problem of a two-stage flexible flow shop with reentrant and blocking constraints in hard disk drive manufacturing. The problem can be formulated as a deterministic FFS|stage=2, rcrc, block|Cmax problem. In this study, an adaptive Hybrid Particle Swarm Optimization with Cauchy distribution (HPSO) was developed to solve it, the objective being to find job sequences that minimize the makespan. To assess its performance, computational experiments were performed on a number of test problems and the results are reported. The experiments show that the proposed algorithm gives better solutions than classical Particle Swarm Optimization (PSO) for all test problems. Additionally, the relative improvement (RI) of the makespan solutions obtained by the proposed algorithm over those of current practice is computed in order to measure solution quality. The RI results show that the HPSO algorithm improves the makespan solution by 14.78% on average.
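
    The distinguishing ingredient here is a heavy-tailed Cauchy jump used to kick particles out of local optima; a minimal sketch of such a perturbation is shown below (the flow-shop decoding, reentrant routing and blocking constraints are not modelled).

        # Cauchy perturbation of a particle position: most moves are small, but
        # the heavy tail occasionally produces a very large escape jump.
        import numpy as np

        rng = np.random.default_rng(3)

        def cauchy_perturb(position, scale=0.5):
            """Add a Cauchy-distributed jump to every coordinate."""
            return position + scale * rng.standard_cauchy(position.shape)

        x = rng.random(8)
        print(cauchy_perturb(x))

    For sequencing problems such as this flow shop, a continuous particle position is commonly decoded into a job order by sorting its components (a random-key decoding), although the abstract does not state which decoding the authors use.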

  4. High rate particle tracking and ultra-fast timing with a thin hybrid silicon pixel detector

    Science.gov (United States)

    Fiorini, M.; Aglieri Rinella, G.; Carassiti, V.; Ceccucci, A.; Cortina Gil, E.; Cotta Ramusino, A.; Dellacasa, G.; Garbolino, S.; Jarron, P.; Kaplon, J.; Kluge, A.; Marchetto, F.; Mapelli, A.; Martin, E.; Mazza, G.; Morel, M.; Noy, M.; Nuessle, G.; Perktold, L.; Petagna, P.; Petrucci, F.; Poltorak, K.; Riedler, P.; Rivetti, A.; Statera, M.; Velghe, B.

    2013-08-01

    The Gigatracker (GTK) is a hybrid silicon pixel detector designed for the NA62 experiment at CERN. The beam spectrometer, made of three GTK stations, has to sustain a high and non-uniform particle rate (∼ 1 GHz in total) and measure the momentum and angles of each beam track with a combined time resolution of 150 ps. In order to reduce multiple scattering and hadronic interactions of beam particles, the material budget of a single GTK station has been fixed to 0.5% X0. The expected fluence for 100 days of running is 2 × 10^14 1 MeV neq/cm^2, comparable to the one foreseen in the inner trackers of LHC detectors during 10 years of operation. To comply with these requirements, an efficient and very low-mass (< 0.15% X0) cooling system is being constructed, using a novel microchannel cooling silicon plate. Two complementary read-out architectures have been produced as small-scale prototypes: one is based on a Time-over-Threshold circuit followed by a TDC shared by a group of pixels, while the other makes use of a constant-fraction discriminator followed by an on-pixel TDC. The read-out ASICs are produced in 130 nm IBM CMOS technology and will be thinned down to 100 μm or less. An overview of the Gigatracker detector system will be presented. Experimental results from laboratory and beam tests of prototype bump-bonded assemblies will be described as well. These results show a time resolution of about 170 ps for single hits from minimum ionizing particles, using 200 μm thick silicon sensors.

  5. Thermo-mechanical characterization of siliconized E-glass fiber/hematite particles reinforced epoxy resin hybrid composite

    Energy Technology Data Exchange (ETDEWEB)

    Arun Prakash, V.R., E-mail: vinprakash101@gmail.com; Rajadurai, A., E-mail: rajadurai@annauniv.edu.in

    2016-10-30

    Highlights: • Particle dimensions were reduced using a ball-milling process. • The importance of surface modification was explored. • Surface modification was performed to improve the adhesion of fiber/particles with epoxy. • Mechanical properties were increased by adding modified fiber and particles. • Thermal properties were increased. - Abstract: In the present work a hybrid polymer (epoxy) matrix composite has been strengthened with surface-modified E-glass fiber and iron(III) oxide particles of varying size. Particle sizes of 200 nm and <100 nm were prepared by high-energy ball milling and sol-gel methods, respectively. To enhance the dispersion of the particles and improve the adhesion of fibers and fillers with the epoxy matrix, a surface modification process was applied to both fiber and filler using an amino-functional silane, 3-aminopropyltrimethoxysilane (APTMS). The crystalline structure and functional groups of the siliconized iron(III) oxide particles were characterized by XRD and FTIR spectroscopy. A fixed quantity of surface-treated E-glass fiber (15 vol%) was laid into the matrix along with 0.5 and 1.0 vol% of iron(III) oxide particles to fabricate hybrid composites. The composites were cured by an aliphatic hardener, triethylenetetramine (TETA). The effectiveness of adding surface-modified particles and fibers to the resin matrix was revealed by mechanical testing, including tensile, flexural and impact testing, inter-laminar shear strength and hardness. The thermal behavior of the composites was evaluated by TGA, DSC and thermal conductivity (Lee's disc) measurements. Scanning electron microscopy was employed to determine the shape and size of the iron(III) oxide particles and the adhesion quality of the fiber with the epoxy matrix. Good dispersion of the fillers in the matrix was achieved with the surface modifier APTMS. The tensile, flexural, impact and inter-laminar shear strength of the composites was improved by reinforcing with surface-modified fiber and filler. The thermal stability of the epoxy resin was improved

  6. Investigations of the response of hybrid particle detectors for the Space Environmental Viewing and Analysis Network (SEVAN)

    Directory of Open Access Journals (Sweden)

    A. Chilingarian

    2008-02-01

    Full Text Available A network of particle detectors located at middle to low latitudes known as SEVAN (Space Environmental Viewing and Analysis Network) is being created in the framework of the International Heliophysical Year (IHY-2007). It aims to improve the fundamental research of the particle acceleration in the vicinity of the Sun and space environment conditions. The new type of particle detectors will simultaneously measure the changing fluxes of most species of secondary cosmic rays, thus turning into a powerful integrated device used for exploration of solar modulation effects. Ground-based detectors measure time series of secondary particles born in cascades originating in the atmosphere by nuclear interactions of protons and nuclei accelerated in the galaxy. During violent solar explosions, sometimes additional secondary particles are added to this "background" flux. The studies of the changing time series of secondary particles shed light on the high-energy particle acceleration mechanisms. The time series of intensities of high energy particles can also provide highly cost-effective information on the key characteristics of interplanetary disturbances. The recent results of the detection of the solar extreme events (2003–2005) by the monitors of the Aragats Space-Environmental Center (ASEC) illustrate the wide possibilities provided by new particle detectors measuring neutron, electron and muon fluxes with inherent correlations. We present the results of the simulation studies revealing the characteristics of the SEVAN networks' basic measuring module. We illustrate the possibilities of the hybrid particle detector to measure neutral and charged fluxes of secondary CR, to estimate the efficiency and purity of detection; corresponding median energies of the primary proton flux, the ability to distinguish between neutron and proton initiated GLEs and some other important properties of hybrid particle detectors.

  7. Monte Carlo method implemented in a finite element code with application to dynamic vacuum in particle accelerators

    CERN Document Server

    Garion, C

    2009-01-01

    Modern particle accelerators require UHV conditions during their operation. In the accelerating cavities, breakdowns can occur, releasing a large amount of gas into the vacuum chamber. To determine the pressure profile along the cavity as a function of time, the time-dependent behaviour of the gas has to be simulated. To do that, it is useful to apply an accurate three-dimensional method, such as Test Particle Monte Carlo. In this paper, a time-dependent Test Particle Monte Carlo method is used. It has been implemented in a finite element code, CASTEM. The principle is to track a sample of molecules over time. The complex geometry of the cavities can be created either in the FE code or in CAD software (CATIA in our case). The interface between the two programs used to export the geometry from CATIA to CASTEM is described. The algorithm for particle tracking in collisionless flow in the FE code is shown. Thermal outgassing, pumping surfaces and electron- and/or ion-stimulated desorption can all be generated as well as differ...
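
    A collisionless test-particle tracker of this kind reduces to straight-line flights between surface hits with diffuse re-emission at the walls. The toy two-dimensional duct below illustrates the principle; the geometry, sampling laws and numbers are illustrative and are not taken from the CASTEM implementation.

        # Toy collisionless test-particle Monte Carlo in a 2-D duct: particles fly
        # in straight lines, reflect diffusely from the walls, and are counted
        # when they reach the exit plane.
        import numpy as np

        rng = np.random.default_rng(4)
        L, H, n_particles = 10.0, 1.0, 20000        # duct length, height, sample size

        def cosine_angle():
            """Angle from the surface normal, sampled from the 2-D cosine (Lambert) law."""
            return np.arcsin(2.0 * rng.random() - 1.0)

        transmitted = 0
        for _ in range(n_particles):
            x, y = 0.0, rng.uniform(0.0, H)
            a = cosine_angle()                       # injected through the inlet plane
            dx, dy = np.cos(a), np.sin(a)
            while True:
                t_exit = (L - x) / dx if dx > 0 else np.inf
                t_back = -x / dx if dx < 0 else np.inf
                t_wall = (H - y) / dy if dy > 0 else (-y / dy if dy < 0 else np.inf)
                t = min(t_exit, t_back, t_wall)
                x, y = x + t * dx, y + t * dy
                if t == t_exit:                      # reached the far end of the duct
                    transmitted += 1
                    break
                if t == t_back:                      # escaped back through the inlet
                    break
                a = cosine_angle()                   # diffuse re-emission from the wall
                dy = np.cos(a) if y <= 1e-12 else -np.cos(a)
                dx = np.sin(a)

        print("transmission probability:", transmitted / n_particles)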

  8. Solutions to HYDROCOIN [Hydrologic Code Intercomparison] Level 1 problems using STOKES and PARTICLE (Cases 1,2,4,7)

    International Nuclear Information System (INIS)

    Gureghian, A.B.; Andrews, A.; Steidl, S.B.; Brandstetter, A.

    1987-10-01

    HYDROCOIN (Hydrologic Code Intercomparison) Level 1 benchmark problems are solved using the finite element ground-water flow code STOKES and the pathline generating code PARTICLE developed for the Office of Crystalline Repository Development (OCRD). The objective of the Level 1 benchmark problems is to verify the numerical accuracy of ground-water flow codes by intercomparison of their results with analytical solutions and other numerical computer codes. Seven test cases were proposed for Level 1 to the Swedish Nuclear Power Inspectorate, the managing participant of HYDROCOIN. Cases 1, 2, 4, and 7 were selected by OCRD because of their appropriateness to the nature of crystalline repository hydrologic performance. The background relevance, conceptual model, and assumptions of each case are presented. The governing equations, boundary conditions, input parameters, and the solution schemes applied to each case are discussed. The results are shown in graphic and tabular form with concluding remarks. The results demonstrate the two-dimensional verification of STOKES and PARTICLE. 5 refs., 61 figs., 30 tabs

  9. Hybrid Micro-Depletion method in the DYN3D code

    Energy Technology Data Exchange (ETDEWEB)

    Bilodid, Yurii [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Div. Reactor Safety

    2016-07-01

    A new method for accounting spectral history effects was developed and implemented in the reactor dynamics code DYN3D. Detailed nuclide content is calculated for each region of the reactor core and used to correct fuel properties. The new method demonstrates excellent results in test cases.

  10. Studying the Mechanism of Hybrid Nanoparticle Photoresists: Effect of Particle Size on Photopatterning

    KAUST Repository

    Li, Li; Chakrabarty, Souvik; Spyrou, Konstantinos; Ober, Christopher K.; Giannelis, Emmanuel P.

    2015-01-01

    © 2015 American Chemical Society. Hf-based hybrid photoresist materials with three different organic ligands were prepared by a sol-gel-based method, and their patterning mechanism was investigated in detail. All hybrid nanoparticle resists

  11. Fabrication of PLA/CaCO3 hybrid micro-particles as carriers for water-soluble bioactive molecules.

    Science.gov (United States)

    Kudryavtseva, Valeriya L; Zhao, Li; Tverdokhlebov, Sergei I; Sukhorukov, Gleb B

    2017-09-01

    We propose the use of polylactic acid/calcium carbonate (PLA/CaCO3) hybrid micro-particles for achieving improved encapsulation of water-soluble substances. Biodegradable porous CaCO3 microparticles can be loaded with a wide range of bioactive substances. The formation of a hydrophobic polymer shell on the surface of these loaded microparticles then results in encapsulation, sealing the internal cargo and preventing its release in aqueous media. In this study, to encapsulate proteins, we explore the solid-in-oil-in-water emulsion method for fabricating core/shell PLA/CaCO3 systems. We used CaCO3 particles as a protective core for encapsulated bovine serum albumin, which served as a model protein system. We prepared a PLA coating using dichloromethane as an organic solvent and polyvinyl alcohol as a surfactant for emulsification; in addition, we varied experimental parameters such as surfactant concentration and polymer-to-CaCO3 ratio to determine their effect on particle-size distribution, encapsulation efficiency and capsule permeability. The results show that the particle size decreased and the size distribution narrowed as the surfactant concentration increased in the external aqueous phase. In addition, when the CaCO3/PLA mass ratio dropped below 0.8, the hybrid micro-particles were more likely to resist treatment by ethylenediaminetetraacetic acid and thus retained their bioactive cargos within the polymer-coated micro-particles. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Free-Standing and Self-Crosslinkable Hybrid Films by Core–Shell Particle Design and Processing

    Directory of Open Access Journals (Sweden)

    Steffen Vowinkel

    2017-11-01

    Full Text Available The utilization and preparation of functional hybrid films for optical sensing applications and membranes is of utmost importance. In this work, we report the convenient and scalable preparation of self-crosslinking particle-based films derived by directed self-assembly of alkoxysilane-based cross-linkers as part of a core-shell particle architecture. The synthesis of well-designed monodisperse core-shell particles by emulsion polymerization is the basic prerequisite for subsequent particle processing via the melt-shear organization technique. In more detail, the core particles consist of polystyrene (PS) or poly(methyl methacrylate) (PMMA), while the comparably soft particle shell consists of poly(ethyl acrylate) (PEA) and different alkoxysilane-based poly(methacrylates). For hybrid film formation and convenient self-cross-linking, different alkyl groups at the siloxane moieties were investigated in detail by solid-state Magic-Angle Spinning Nuclear Magnetic Resonance (MAS) NMR spectroscopy, revealing different crosslinking capabilities, which strongly influence the properties of the core or shell particle films with respect to transparency and iridescent reflection colors. Furthermore, solid-state NMR spectroscopy and investigation of the thermal properties by differential scanning calorimetry (DSC) measurements allow for insights into the cross-linking capabilities prior to and after synthesis, as well as after the thermally and pressure-induced processing steps. Subsequently, free-standing and self-crosslinked particle-based films featuring excellent particle order are obtained by application of the melt-shear organization technique, as shown by microscopy (TEM, SEM).

  13. A hybrid WDM/OCDMA ring with a dynamic add/drop function based on Fourier code for local area networks.

    Science.gov (United States)

    Choi, Yong-Kyu; Hosoya, Kenta; Lee, Chung Ghiu; Hanawa, Masanori; Park, Chang-Soo

    2011-03-28

    We propose and experimentally demonstrate a hybrid WDM/OCDMA ring with a dynamic add/drop function based on Fourier code for local area networks. The dynamic function is implemented by mechanically tuning the Fourier encoder/decoder for optical code division multiple access (OCDMA) encoding/decoding. Wavelength division multiplexing (WDM) is utilized for node assignment, and the 4-chip Fourier code recovers the matched signal from the codes. As an optical source well adapted to WDM channels and short optical pulse generation, reflective semiconductor optical amplifiers (RSOAs) are used with a fiber Bragg grating (FBG) and are gain-switched. As a demonstration, we experimentally investigated a two-node hybrid WDM/OCDMA ring with a 4-chip Fourier encoder/decoder fabricated by cascading four FBGs, achieving a bit error rate (BER) of <10^-9 over a node span of 10.64 km at 1.25 Gb/s.

  14. Gene selection using hybrid binary black hole algorithm and modified binary particle swarm optimization.

    Science.gov (United States)

    Pashaei, Elnaz; Pashaei, Elham; Aydin, Nizamettin

    2018-04-14

    In cancer classification, gene selection is an important data preprocessing technique, but it is a difficult task due to the large search space. Accordingly, the objective of this study is to develop a hybrid meta-heuristic Binary Black Hole Algorithm (BBHA) and Binary Particle Swarm Optimization (BPSO) (4-2) model that emphasizes gene selection. In this model, the BBHA is embedded in the BPSO (4-2) algorithm to make the BPSO (4-2) more effective and to facilitate the exploration and exploitation of the BPSO (4-2) algorithm to further improve the performance. This model has been associated with Random Forest Recursive Feature Elimination (RF-RFE) pre-filtering technique. The classifiers which are evaluated in the proposed framework are Sparse Partial Least Squares Discriminant Analysis (SPLSDA); k-nearest neighbor and Naive Bayes. The performance of the proposed method was evaluated on two benchmark and three clinical microarrays. The experimental results and statistical analysis confirm the better performance of the BPSO (4-2)-BBHA compared with the BBHA, the BPSO (4-2) and several state-of-the-art methods in terms of avoiding local minima, convergence rate, accuracy and number of selected genes. The results also show that the BPSO (4-2)-BBHA model can successfully identify known biologically and statistically significant genes from the clinical datasets. Copyright © 2018 Elsevier Inc. All rights reserved.
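
    The binary-PSO building block referred to above can be sketched with the generic sigmoid transfer rule, in which real-valued velocities drive the probability of selecting each gene; this is the textbook BPSO move, not the specific BPSO (4-2)/BBHA scheme of the paper.

        # Core move of a binary PSO: velocities are real-valued and each bit
        # (gene selected / not selected) is resampled through a sigmoid of its
        # velocity.
        import numpy as np

        rng = np.random.default_rng(5)

        def bpso_step(bits, velocity, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
            r1, r2 = rng.random(bits.shape), rng.random(bits.shape)
            velocity = w * velocity + c1 * r1 * (pbest - bits) + c2 * r2 * (gbest - bits)
            prob = 1.0 / (1.0 + np.exp(-velocity))      # sigmoid transfer function
            bits = (rng.random(bits.shape) < prob).astype(int)
            return bits, velocity

        bits = rng.integers(0, 2, 20)                    # 20 candidate genes
        vel = np.zeros(20)
        bits, vel = bpso_step(bits, vel, pbest=bits.copy(), gbest=bits.copy())
        print(bits)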

  15. PS-FW: A Hybrid Algorithm Based on Particle Swarm and Fireworks for Global Optimization

    Science.gov (United States)

    Chen, Shuangqing; Wei, Lixin; Guan, Bing

    2018-01-01

    Particle swarm optimization (PSO) and fireworks algorithm (FWA) are two recently developed optimization methods which have been applied in various areas due to their simplicity and efficiency. However, when being applied to high-dimensional optimization problems, PSO algorithm may be trapped in the local optima owing to the lack of powerful global exploration capability, and fireworks algorithm is difficult to converge in some cases because of its relatively low local exploitation efficiency for noncore fireworks. In this paper, a hybrid algorithm called PS-FW is presented, in which the modified operators of FWA are embedded into the solving process of PSO. In the iteration process, the abandonment and supplement mechanism is adopted to balance the exploration and exploitation ability of PS-FW, and the modified explosion operator and the novel mutation operator are proposed to speed up the global convergence and to avoid prematurity. To verify the performance of the proposed PS-FW algorithm, 22 high-dimensional benchmark functions have been employed, and it is compared with PSO, FWA, stdPSO, CPSO, CLPSO, FIPS, Frankenstein, and ALWPSO algorithms. Results show that the PS-FW algorithm is an efficient, robust, and fast converging optimization method for solving global optimization problems. PMID:29675036
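
    The fireworks ingredient can be illustrated with the generic FWA explosion operator, in which better solutions produce more sparks within a smaller amplitude; the modified explosion and mutation operators proposed in the paper are not reproduced here.

        # Generic fireworks explosion operator (minimization convention): good
        # fireworks get many sparks in a small radius, poor ones few sparks in a
        # large radius.
        import numpy as np

        rng = np.random.default_rng(6)

        def explode(fireworks, fitness, total_sparks=50, max_amp=2.0, eps=1e-12):
            worst, best = fitness.max(), fitness.min()
            n_sparks = np.maximum(1, np.round(
                total_sparks * (worst - fitness + eps) / np.sum(worst - fitness + eps)).astype(int))
            amps = max_amp * (fitness - best + eps) / np.sum(fitness - best + eps)
            sparks = []
            for fw, n, a in zip(fireworks, n_sparks, amps):
                for _ in range(n):
                    mask = rng.random(fw.shape) < 0.5    # displace a random subset of dims
                    sparks.append(fw + mask * rng.uniform(-a, a, fw.shape))
            return np.array(sparks)

        fireworks = rng.uniform(-5, 5, (5, 10))
        fitness = np.sum(fireworks**2, axis=1)
        print(explode(fireworks, fitness).shape)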

  16. Hybrid fs/ps CARS for Sooting and Particle-laden Flames

    Energy Technology Data Exchange (ETDEWEB)

    Hoffmeister, Kathryn N. Gabet; Guildenbecher, Daniel Robert; Kearney, Sean P.

    2015-12-01

    We report the application of ultrafast rotational coherent anti-Stokes Raman scattering (CARS) for temperature and relative oxygen concentration measurements in the plume emanating from a burning aluminized ammonium perchlorate propellant strand. Combustion of these metal-based propellants is a particularly hostile environment for laser-based diagnostics, with intense background luminosity, scattering and beam obstruction from hot metal particles that can be as large as several hundred microns in diameter. CARS spectra that were previously obtained using nanosecond pulsed lasers in an aluminum-particle-seeded flame are examined and are determined to be severely impacted by nonresonant background, presumably as a result of the plasma formed by particulate-enhanced laser-induced breakdown. Introduction of fs/ps laser pulses enables CARS detection at reduced pulse energies, decreasing the likelihood of breakdown, while simultaneously providing time-gated elimination of any nonresonant background interference. Temperature probability densities and temperature/oxygen correlations were constructed from ensembles of several thousand single-laser-shot measurements from the fs/ps rotational CARS measurement volume positioned within 3 mm or less of the burning propellant surface. Preliminary results in canonical flames are presented using a hybrid fs/ps vibrational CARS system to demonstrate our progress towards acquiring vibrational CARS measurements for more accurate temperatures in the very high temperature propellant burns.

  17. Refractive Index Tuning of Hybrid Materials for Highly Transmissive Luminescent Lanthanide Particle-Polymer Composites.

    Science.gov (United States)

    Kim, Paul; Li, Cheng; Riman, Richard E; Watkins, James

    2018-03-14

    High-refractive-index ZrO2 nanoparticles were used to tailor the refractive index of a polymer matrix to match that of luminescent lanthanide-ion-doped (La0.92Yb0.075Er0.005F3) light-emitting particles, thereby reducing scattering losses to yield highly transparent emissive composites. Photopolymerization of blends of an amine-modified poly(ether acrylate) oligomer and tailored quantities of ZrO2 nanoparticles yielded optically transparent composites with tailored refractive indices between 1.49 and 1.69. By matching the refractive index of the matrix to that of La0.92Yb0.075Er0.005F3, composites with high transmittance (>85%) and low haze from the visible to the infrared region were obtained, and bright 1530 nm optical emission was achieved at La0.92Yb0.075Er0.005F3 solids loadings ranging from 5 to 30 vol%. These optical results suggest that a hybrid matrix approach is a versatile strategy for the fabrication of functional luminescent optical composites of high transparency.

  18. Hybrid fs/ps CARS for Sooting and Particle-laden Flames [PowerPoint

    Energy Technology Data Exchange (ETDEWEB)

    Hoffmeister, Kathryn N. Gabet; Guildenbecher, Daniel Robert; Kearney, Sean P.

    2016-01-01

    We report the application of ultrafast rotational coherent anti-Stokes Raman scattering (CARS) for temperature and relative oxygen concentration measurements in the plume emanating from a burning aluminized ammonium perchlorate propellant strand. Combustion of these metal-based propellants is a particularly hostile environment for laser-based diagnostics, with intense background luminosity, scattering and beam obstruction from hot metal particles that can be as large as several hundred microns in diameter. CARS spectra that were previously obtained using nanosecond pulsed lasers in an aluminum-particle-seeded flame are examined and are determined to be severely impacted by nonresonant background, presumably as a result of the plasma formed by particulate-enhanced laser-induced breakdown. Introduction of fs/ps laser pulses enables CARS detection at reduced pulse energies, decreasing the likelihood of breakdown, while simultaneously providing time-gated elimination of any nonresonant background interference. Temperature probability densities and temperature/oxygen correlations were constructed from ensembles of several thousand single-laser-shot measurements from the fs/ps rotational CARS measurement volume positioned within 3 mm or less of the burning propellant surface. Preliminary results in canonical flames are presented using a hybrid fs/ps vibrational CARS system to demonstrate our progress towards acquiring vibrational CARS measurements for more accurate temperatures in the very high temperature propellant burns.

  19. A NEW HYBRID YIN-YANG-PAIR-PARTICLE SWARM OPTIMIZATION ALGORITHM FOR UNCAPACITATED WAREHOUSE LOCATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    A. A. Heidari

    2017-09-01

    Full Text Available Yin-Yang-pair optimization (YYPO) is one of the latest metaheuristic algorithms (MA), proposed in 2015, and is inspired by the philosophy of balance between conflicting concepts. The particle swarm optimizer (PSO) is one of the first population-based MAs, inspired by the social behavior of birds. Unlike PSO, YYPO is not a nature-inspired optimizer; it has low complexity, starts with only two initial positions, and can produce more points according to the dimension of the target problem. Owing to the unique advantages of these methodologies, and to mitigate the premature convergence and local optima (LO) stagnation problems of PSO, a continuous hybrid strategy based on the behaviors of PSO and YYPO is proposed in this work to obtain near-optimal solutions of uncapacitated warehouse location (UWL) problems. This efficient hierarchical PSO-based optimizer (PSOYPO) can improve the effectiveness of PSO on spatial optimization tasks such as the family of UWL problems. The performance of the proposed PSOYPO is verified on several UWL benchmark cases that have been used in previous works to evaluate the efficacy of different MAs. The PSOYPO is then compared to the standard PSO, a genetic algorithm (GA), harmony search (HS), modified HS (OBCHS), and evolutionary simulated annealing (ESA). The experimental results demonstrate that the PSOYPO shows better or competitive efficacy compared to PSO and the other MAs.

  20. A Hybrid Scheme Based on Pipelining and Multitasking in Mobile Application Processors for Advanced Video Coding

    Directory of Open Access Journals (Sweden)

    Muhammad Asif

    2015-01-01

    Full Text Available One of the key requirements for mobile devices is to provide high-performance computing at low power consumption. The processors used in these devices provide specific hardware resources to handle computationally intensive video processing and interactive graphical applications. Moreover, processors designed for low-power applications may introduce limitations on the availability and usage of resources, which presents additional challenges to system designers. Owing to the specific design of the JZ47x series of mobile application processors, a hybrid software-hardware implementation scheme for the H.264/AVC encoder is proposed in this work. The proposed scheme distributes the encoding tasks among hardware and software modules. A series of optimization techniques is developed to speed up memory access and data transfer among memories. Moreover, an efficient data-reuse design is proposed for the deblocking filter processing unit to reduce memory accesses. Furthermore, fine-grained macroblock (MB) level parallelism is effectively exploited and a pipelined approach is proposed for efficient utilization of the hardware processing cores. Finally, based on the parallelism in the proposed design, encoding tasks are distributed between two processing cores. Experiments show that the hybrid encoder is 12 times faster than a highly optimized sequential encoder thanks to the proposed techniques.
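
    The pipelining idea described above can be sketched, in a language-neutral way, as two overlapping stages connected by a small buffer; the stage functions below are placeholders and do not correspond to the JZ47x hardware modules or the actual encoder code.

        # Toy two-stage pipeline: one worker "predicts" macroblock rows while the
        # other "entropy-codes" the previous row, so the stages overlap in time.
        import queue
        import threading

        N_ROWS = 8
        handoff = queue.Queue(maxsize=2)          # small buffer between the two stages

        def stage1_predict():
            for row in range(N_ROWS):
                data = f"residual(row {row})"     # placeholder for motion estimation/transform
                handoff.put(data)
            handoff.put(None)                     # sentinel: no more rows

        def stage2_entropy_code():
            while True:
                data = handoff.get()
                if data is None:
                    break
                print("coded", data)              # placeholder for entropy-coding work

        t1 = threading.Thread(target=stage1_predict)
        t2 = threading.Thread(target=stage2_entropy_code)
        t1.start(); t2.start()
        t1.join(); t2.join()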

  1. Neutron flux investigation on certain alternative fluids in a hybrid system by using MCNPX Monte Carlo transport code

    Energy Technology Data Exchange (ETDEWEB)

    Guenay, Mehtap [Inoenue Univ., Malatya (Turkey). Physics Dept.

    2014-04-15

    In this study, the molten salt-heavy metal mixtures 93-85 % Li{sub 20}Sn{sub 80} + 5 % SFG-PuO{sub 2} and 2-10 % UO{sub 2}, 93-85 % Li{sub 20}Sn{sub 80} + 5 % SFG-PuO{sub 2} and 2-10 % NpO{sub 2}, 93-85 % Li{sub 20}Sn{sub 80} + 5 % SFG-PuO{sub 2} and 2-10 % UCO were used as fluids. The fluids were used in the liquid first wall, blanket and shield zones of the designed hybrid reactor system. Four centimeter thick 9Cr2WVTa ferritic steel was used as the structural material. In this study, the effect of mixture components on the neutron flux was investigated in a designed fusion-fission hybrid reactor system. The neutron flux was investigated according to the mixture components, radial flux distribution and energy spectrum in the designed system. Three-dimensional analyses were performed using the most recent MCNPX-2.7.0 Monte Carlo radiation transport code and the ENDF/B-VII.0 nuclear data library. (orig.)

  2. A hybrid artificial bee colony algorithm and pattern search method for inversion of particle size distribution from spectral extinction data

    Science.gov (United States)

    Wang, Li; Li, Feng; Xing, Jian

    2017-10-01

    In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied for recovery of the particle size distribution (PSD) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's ? function, which overcomes the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated using the ABC algorithm alone. In addition, the performance of the proposed algorithm is further tested with actual extinction measurements on real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, the proposed algorithm produces more accurate and robust inversion results while taking nearly the same CPU time as the ABC algorithm alone. The ability of the ABC and PS hybridization strategy to reach a better balance between estimation accuracy and computational effort increases its potential as an inversion technique for reliable and efficient measurement of PSDs.
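
    The pattern-search half of the hybrid can be sketched as a simple compass search that polls the objective along coordinate directions and contracts the step when no poll improves; the spectral-extinction objective itself is not modelled and the parameters below are illustrative.

        # Compass-style pattern search used to refine a starting point supplied
        # by a global optimizer (here, a placeholder quadratic objective).
        import numpy as np

        def pattern_search(objective, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
            x = np.asarray(x0, dtype=float)
            fx = objective(x)
            for _ in range(max_iter):
                improved = False
                for i in range(x.size):
                    for sign in (+1.0, -1.0):
                        trial = x.copy()
                        trial[i] += sign * step
                        f_trial = objective(trial)
                        if f_trial < fx:
                            x, fx, improved = trial, f_trial, True
                if not improved:
                    step *= shrink                # poll failed: contract the mesh
                    if step < tol:
                        break
            return x, fx

        sol, val = pattern_search(lambda p: np.sum((np.asarray(p) - 1.2) ** 2), x0=[0.0, 0.0, 0.0])
        print(sol, val)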

  3. Hybrid mesh finite volume CFD code for studying heat transfer in a forward-facing step

    Energy Technology Data Exchange (ETDEWEB)

    Jayakumar, J S; Kumar, Inder [Bhabha Atomic Research Center, Mumbai (India); Eswaran, V, E-mail: jsjayan@gmail.com, E-mail: inderk@barc.gov.in, E-mail: eswar@iitk.ac.in [Indian Institute of Technology, Kanpur (India)

    2010-12-15

    Computational fluid dynamics (CFD) methods employ two types of grid: structured and unstructured. Developing the solver and data structures of a finite-volume solver is easier for structured grids than for unstructured ones, but real-life problems are too complicated to be fitted flexibly by structured grids. Therefore, unstructured grids are widely used for solving real-life problems. However, using only one type of unstructured element consumes a lot of computational time because the number of elements cannot be controlled. Hence, a hybrid grid that contains mixed elements, such as hexahedral elements along with tetrahedral and pyramidal elements, gives the user control over the number of elements in the domain, so that only the region that requires a finer grid is meshed finer, not the entire domain. This work aims to develop such a finite-volume hybrid grid solver capable of handling turbulent flows and conjugate heat transfer. It has been extended to solving flows involving separation and subsequent reattachment occurring due to sudden expansion or contraction. A significant effect of mixing high- and low-enthalpy fluid occurs in the reattached regions of these devices. This makes the study of the backward-facing and forward-facing step with heat transfer an important field of research. The problem of the forward-facing step with conjugate heat transfer was taken up and solved for turbulent flow using a two-equation k-ω model. The variation in the flow profile and heat transfer behavior has been studied as Re and the solid-to-fluid thermal conductivity ratio are varied. The results for the variation in local Nusselt number, interface temperature and skin friction factor are presented.

  4. Hybrid mesh finite volume CFD code for studying heat transfer in a forward-facing step

    Science.gov (United States)

    Jayakumar, J. S.; Kumar, Inder; Eswaran, V.

    2010-12-01

    Computational fluid dynamics (CFD) methods employ two types of grid: structured and unstructured. Developing the solver and data structures of a finite-volume solver is easier for structured grids than for unstructured ones, but real-life problems are too complicated to be fitted flexibly by structured grids. Therefore, unstructured grids are widely used for solving real-life problems. However, using only one type of unstructured element consumes a lot of computational time because the number of elements cannot be controlled. Hence, a hybrid grid that contains mixed elements, such as hexahedral elements along with tetrahedral and pyramidal elements, gives the user control over the number of elements in the domain, so that only the region that requires a finer grid is meshed finer, not the entire domain. This work aims to develop such a finite-volume hybrid grid solver capable of handling turbulent flows and conjugate heat transfer. It has been extended to solving flows involving separation and subsequent reattachment occurring due to sudden expansion or contraction. A significant effect of mixing high- and low-enthalpy fluid occurs in the reattached regions of these devices. This makes the study of the backward-facing and forward-facing step with heat transfer an important field of research. The problem of the forward-facing step with conjugate heat transfer was taken up and solved for turbulent flow using a two-equation k-ω model. The variation in the flow profile and heat transfer behavior has been studied as Re and the solid-to-fluid thermal conductivity ratio are varied. The results for the variation in local Nusselt number, interface temperature and skin friction factor are presented.

  5. Hybrid mesh finite volume CFD code for studying heat transfer in a forward-facing step

    International Nuclear Information System (INIS)

    Jayakumar, J S; Kumar, Inder; Eswaran, V

    2010-01-01

    Computational fluid dynamics (CFD) methods employ two types of grid: structured and unstructured. Developing the solver and data structures of a finite-volume solver is easier for structured grids than for unstructured ones, but real-life problems are too complicated to be fitted flexibly by structured grids. Therefore, unstructured grids are widely used for solving real-life problems. However, using only one type of unstructured element consumes a lot of computational time because the number of elements cannot be controlled. Hence, a hybrid grid that contains mixed elements, such as hexahedral elements along with tetrahedral and pyramidal elements, gives the user control over the number of elements in the domain, so that only the region that requires a finer grid is meshed finer, not the entire domain. This work aims to develop such a finite-volume hybrid grid solver capable of handling turbulent flows and conjugate heat transfer. It has been extended to solving flows involving separation and subsequent reattachment occurring due to sudden expansion or contraction. A significant effect of mixing high- and low-enthalpy fluid occurs in the reattached regions of these devices. This makes the study of the backward-facing and forward-facing step with heat transfer an important field of research. The problem of the forward-facing step with conjugate heat transfer was taken up and solved for turbulent flow using a two-equation k-ω model. The variation in the flow profile and heat transfer behavior has been studied as Re and the solid-to-fluid thermal conductivity ratio are varied. The results for the variation in local Nusselt number, interface temperature and skin friction factor are presented.

  6. LPIC++. A parallel one-dimensional relativistic electromagnetic particle-in-cell code for simulating laser-plasma-interaction

    International Nuclear Information System (INIS)

    Lichters, R.; Pfund, R.E.W.; Meyer-ter-Vehn, J.

    1997-08-01

    The code LPIC++ presented here is based on a one-dimensional, electromagnetic, relativistic PIC code that was originally developed by one of the authors during a PhD thesis at the Max-Planck-Institut fuer Quantenoptik for kinetic simulations of high harmonic generation from overdense plasma surfaces. The code essentially uses the algorithms of Birdsall and Langdon and of Villasenor and Buneman. It is written in C++ in order to be easily extendable and has been parallelized so that its power grows linearly with the size of the accessible hardware, e.g. massively parallel machines like the Cray T3E. The parallel LPIC++ version uses PVM for communication between processors; PVM is public-domain software and can be downloaded from the World Wide Web. A particular strength of LPIC++ lies in its clear program and data structure, which uses chained lists for the organization of grid cells and enables dynamic adjustment of spatial domain sizes in a very convenient way, and therefore easy balancing of processor loads. Particles belonging to one cell are also linked in a chained list and are immediately accessible from this cell. In addition to this convenient type of data organization for a PIC code, the code shows excellent performance in both its single-processor and parallel versions. (orig.)
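
    The chained-list organization described above can be sketched schematically: cells form a linked chain and each cell owns a linked list of its particles, so a boundary crossing is just a relink. The Python stand-in below uses invented field names and is not the C++ data structure of LPIC++.

        # Schematic linked-list layout: a chain of cells, each holding the head
        # of a chain of its own particles.
        class Particle:
            def __init__(self, x, px):
                self.x, self.px = x, px
                self.next = None                  # next particle in the same cell

        class Cell:
            def __init__(self, index):
                self.index = index
                self.prev = self.next = None      # neighbouring cells in the chain
                self.first_particle = None        # head of this cell's particle list

            def push(self, particle):
                particle.next = self.first_particle
                self.first_particle = particle

        def build_grid(n_cells):
            cells = [Cell(i) for i in range(n_cells)]
            for left, right in zip(cells, cells[1:]):
                left.next, right.prev = right, left
            return cells

        # Moving a particle that crossed a cell boundary is just a relink:
        grid = build_grid(4)
        p = Particle(x=0.9, px=0.1)
        grid[0].push(p)
        grid[0].first_particle = p.next           # unlink from the old cell (p was the head)
        grid[1].push(p)                           # link into the neighbouring cell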

  7. 3D integration technology for hybrid pixel detectors designed for particle physics and imaging experiments

    International Nuclear Information System (INIS)

    Henry, D.; Berthelot, A.; Cuchet, R.; Chantre, C.; Campbell, M.; Tick, T.

    2012-01-01

    Hybrid pixel detectors are now widely used in particle physics experiments and are becoming established at synchrotron light sources. They have also stimulated growing interest in other fields and, in particular, in medical imaging. Through the continuous pursuit of miniaturization in CMOS it has been possible to increase the functionality per pixel while maintaining or even shrinking pixel dimensions. The main constraint on the more extensive use of the technology in all fields is the cost of module building and the difficulty of covering large areas seamlessly. On the other hand, in the field of electronic component integration, a new approach called 3D integration has been developed in recent years. This concept, based on using the vertical axis for component integration, allows the global performance of complex systems to be improved. Thanks to this technology, the cost and the form factor of components can be decreased and the performance of the global system can be enhanced. In the field of radiation imaging detectors the advantages of 3D integration come from reduced inter-chip dead area, even on large surfaces, and from improved detector construction yield resulting from the use of single-chip, 4-side buttable tiles. For many years, numerous R&D centres and companies have put a lot of effort into developing 3D integration technologies, and today some mature technologies are ready for prototyping and production. The core technology of 3D integration is the TSV (Through Silicon Via), and for many years LETI has developed such technologies for various types of applications. In this paper we present how one of the TSV approaches developed by LETI, called TSV last, has been applied to a readout wafer containing readout chips intended for a hybrid pixel detector assembly. In the first part of this paper, the 3D design adapted to the read-out chip will be described. Then the complete process flow will be explained and, finally, the test strategy adopted and

  8. Fluid-structure-interaction analyses of reactor vessel using improved hybrid Lagrangian Eulerian code ALICE-II

    Energy Technology Data Exchange (ETDEWEB)

    Wang, C.Y.

    1993-06-01

    This paper describes fluid-structure-interaction and structure response analyses of a reactor vessel subjected to loadings associated with postulated accidents, using the hybrid Lagrangian-Eulerian code ALICE-II. This code has been improved recently to accommodate many features associated with innovative designs of reactor vessels. Calculational capabilities have been developed to treat water in the reactor cavity outside the vessel, internal shield structures and internal thin shells. The objective of the present analyses is to study the cover response and potential for missile generation in response to a fuel-coolant interaction in the core region. Three calculations were performed using the cover weight as a parameter. To study the effect of the cavity water, vessel response calculations for both wet- and dry-cavity designs are compared. Results indicate that for all cases studied and for the design parameters assumed, the calculated cover displacements are all smaller than the bolts' ultimate displacement and no missile generation of the closure head is predicted. Also, solutions reveal that the cavity water of the wet-cavity design plays an important role of restraining the downward displacement of the bottom head. Based on these studies, the analyses predict that the structure integrity is maintained throughout the postulated accident for the wet-cavity design.

  9. Fluid-structure-interaction analyses of reactor vessel using improved hybrid Lagrangian Eulerian code ALICE-II

    Energy Technology Data Exchange (ETDEWEB)

    Wang, C.Y.

    1993-01-01

    This paper describes fluid-structure-interaction and structure response analyses of a reactor vessel subjected to loadings associated with postulated accidents, using the hybrid Lagrangian-Eulerian code ALICE-II. This code has been improved recently to accommodate many features associated with innovative designs of reactor vessels. Calculational capabilities have been developed to treat water in the reactor cavity outside the vessel, internal shield structures and internal thin shells. The objective of the present analyses is to study the cover response and potential for missile generation in response to a fuel-coolant interaction in the core region. Three calculations were performed using the cover weight as a parameter. To study the effect of the cavity water, vessel response calculations for both wet- and dry-cavity designs are compared. Results indicate that for all cases studied and for the design parameters assumed, the calculated cover displacements are all smaller than the bolts' ultimate displacement and no missile generation of the closure head is predicted. Also, solutions reveal that the cavity water of the wet-cavity design plays an important role of restraining the downward displacement of the bottom head. Based on these studies, the analyses predict that the structure integrity is maintained throughout the postulated accident for the wet-cavity design.

  10. The effect on radiation damage of structural material in a hybrid system by using a Monte Carlo radiation transport code

    International Nuclear Information System (INIS)

    Günay, Mehtap; Şarer, Başar; Kasap, Hızır

    2014-01-01

    Highlights: • The effects of some fluids on gas production rates in the structural material were investigated. • The MCNPX-2.7.0 Monte Carlo code was used for three-dimensional calculations. • It was found that the biggest contribution to the gas production rates comes from the Fe isotopes of the structural material. • The desirable values for 5% SFG-PuO2 with respect to radiation damage were specified. - Abstract: In this study, the molten salt-heavy metal mixtures 99–95% Li20Sn80-1-5% SFG-Pu, 99–95% Li20Sn80-1-5% SFG-PuF4 and 99–95% Li20Sn80-1-5% SFG-PuO2 were used as fluids. The fluids were used in the liquid first-wall, blanket and shield zones of the designed hybrid reactor system. 9Cr2WVTa ferritic steel with a thickness of 4 cm was used as the structural material. The parameters of radiation damage are the proton, deuterium, tritium, He-3 and He-4 gas production rates. The effects of the selected fluids on the radiation damage, in terms of individual as well as total isotopes in the structural material, were investigated for 30 full power years (FPYs). Three-dimensional analyses were performed using the most recent version of the MCNPX-2.7.0 Monte Carlo radiation transport code and the ENDF/B-VII.0 nuclear data library

  11. Hybrid parallelization of the XTOR-2F code for the simulation of two-fluid MHD instabilities in tokamaks

    Science.gov (United States)

    Marx, Alain; Lütjens, Hinrich

    2017-03-01

    A hybrid MPI/OpenMP parallel version of the XTOR-2F code [Lütjens and Luciani, J. Comput. Phys. 229 (2010) 8130] solving the two-fluid MHD equations in full tokamak geometry by means of an iterative Newton-Krylov matrix-free method has been developed. The present work shows that the code has been parallelized significantly despite the numerical profile of the problem solved by XTOR-2F, i.e. a discretization with pseudo-spectral representations in all angular directions, the stiffness of the two-fluid stability problem in tokamaks, and the use of a direct LU decomposition to invert the physical pre-conditioner at every Krylov iteration of the solver. The execution time of the parallelized version is an order of magnitude smaller than that of the sequential one for low-resolution cases, with an increasing speedup as the discretization mesh is refined. Moreover, it makes it possible to perform simulations at higher resolutions, which were previously out of reach because of memory limitations.

  12. What do Codes of Conduct do? Hybrid Constitutionalization and Militarization in Military Markets

    DEFF Research Database (Denmark)

    Leander, Anna

    2012-01-01

    that makes it possible to hold firms accountable and a militarization of politics. It does so by showing that the codes create first-, second- and third-order rules but also processes of misrecognition through distraction, distinction and diffusion that empower military professionals. It draws on a study...... performative but are so in multiple ways. It can provide no easy way to dissolve the specific dilemma this multiple jurisgenerativity poses in the context of military markets specifically. But logically flowing from the argument is a suggestion that encouraging and empowering a broader, non...

  13. Dry powder inhaler formulation of lipid-polymer hybrid nanoparticles via electrostatically-driven nanoparticle assembly onto microscale carrier particles.

    Science.gov (United States)

    Yang, Yue; Cheow, Wean Sin; Hadinoto, Kunn

    2012-09-15

    Lipid-polymer hybrid nanoparticles have emerged as promising nanoscale carriers of therapeutics as they combine the attractive characteristics of liposomes and polymers. Herein we develop dry powder inhaler (DPI) formulation of hybrid nanoparticles composed of poly(lactic-co-glycolic acid) and soybean lecithin as the polymer and lipid constituents, respectively. The hybrid nanoparticles are transformed into inhalable microscale nanocomposite structures by a novel technique based on electrostatically-driven adsorption of nanoparticles onto polysaccharide carrier particles, which eliminates the drawbacks of conventional techniques based on controlled drying (e.g. nanoparticle-specific formulation, low yield). First, we engineer polysaccharide carrier particles made up of chitosan cross-linked with tripolyphosphate and dextran sulphate to exhibit the desired aerosolization characteristics and physical robustness. Second, we investigate the effects of nanoparticle to carrier mass ratio and salt inclusion on the adsorption efficiency, in terms of the nanoparticle loading and yield, from which the optimal formulation is determined. Desorption of the nanoparticles from the carrier particles in phosphate buffer saline is also examined. Lastly, we characterize aerosolization efficiency of the nanocomposite product in vitro, where the emitted dose and respirable fraction are found to be comparable to the values of conventional DPI formulations. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Biological dose estimation for charged-particle therapy using an improved PHITS code coupled with a microdosimetric kinetic model

    International Nuclear Information System (INIS)

    Sato, Tatsuhiko; Watanabe, Ritsuko; Kase, Yuki; Niita, Koji; Sihver, Lembit

    2009-01-01

    High-energy heavy ions (HZE particles) have become widely used for radiotherapy of tumors owing to their high biological effectiveness. In the treatment planning of such charged-particle therapy, it is necessary to estimate not only the physical but also the biological dose, which is the product of the physical dose and the relative biological effectiveness (RBE). In the Heavy-ion Medical Accelerator in Chiba (HIMAC), the biological dose is estimated by a method proposed by Kanai et al., which is based on the linear-quadratic (LQ) model with its parameters α and β determined by the dose distribution in terms of the unrestricted linear energy transfer (LET). Thus, RBE is simply expressed as a function of LET in their model. However, the RBE of HZE particles cannot be uniquely determined from their LET because of their large cross sections for high-energy δ-ray production. Hence, development of a biological dose estimation model that can explicitly consider the track structure of δ-rays around the trajectory of HZE particles is urgently needed. Microdosimetric quantities such as lineal energy y are better indices for representing the RBE of HZE particles in comparison to LET, since they can express the decrease of ionization densities around their trajectories due to the production of δ-rays. The conceptual difference between LET and y is illustrated in Figure 1. However, the use of microdosimetric quantities in computational dosimetry was severely limited because of the difficulty in calculating their probability densities (PDs) in macroscopic matter. We therefore improved the 3-dimensional particle transport simulation code PHITS, providing it with the capability of estimating the microdosimetric PDs in a macroscopic framework by incorporating a mathematical function that can instantaneously calculate the PDs around the trajectory of HZE particles with precision equivalent to a microscopic track-structure simulation. A new method for estimating biological dose from charged-particle
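
    For reference, the quantities discussed above are related through the standard linear-quadratic survival model and the RBE-weighted (biological) dose; the microdosimetric-kinetic expressions actually implemented in PHITS are more involved and are not reproduced here.

        S(D) = \exp\!\left(-\alpha D - \beta D^{2}\right), \qquad
        \mathrm{RBE} = \left.\frac{D_{\text{photon}}}{D_{\text{ion}}}\right|_{\text{iso-effect}}, \qquad
        D_{\text{biological}} = \mathrm{RBE} \times D_{\text{physical}}.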

  15. The OpenMOC method of characteristics neutral particle transport code

    International Nuclear Information System (INIS)

    Boyd, William; Shaner, Samuel; Li, Lulu; Forget, Benoit; Smith, Kord

    2014-01-01

    Highlights: • An open source method of characteristics neutron transport code has been developed. • OpenMOC shows nearly perfect scaling on CPUs and 30× speedup on GPUs. • Nonlinear acceleration techniques demonstrate a 40× reduction in source iterations. • OpenMOC uses modern software design principles within a C++ and Python framework. • Validation with respect to the C5G7 and LRA benchmarks is presented. - Abstract: The method of characteristics (MOC) is a numerical integration technique for partial differential equations, and has seen widespread use for reactor physics lattice calculations. The exponential growth in computing power has finally brought the possibility for high-fidelity full core MOC calculations within reach. The OpenMOC code is being developed at the Massachusetts Institute of Technology to investigate algorithmic acceleration techniques and parallel algorithms for MOC. OpenMOC is a free, open source code written using modern software languages such as C/C++ and CUDA with an emphasis on extensible design principles for code developers and an easy to use Python interface for code users. The present work describes the OpenMOC code and illustrates its ability to model large problems accurately and efficiently
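
    The elementary MOC operation is the attenuation of the angular flux along one track segment through a flat-source region; the generic relation (not OpenMOC source code) is sketched below.

        # Elementary MOC update: attenuate the angular flux psi over one track
        # segment of length s in a flat-source region with total cross section
        # sigma_t and angular source q.
        import math

        def moc_segment(psi_in, sigma_t, q, s):
            att = math.exp(-sigma_t * s)
            return psi_in * att + (q / sigma_t) * (1.0 - att)

        print(moc_segment(psi_in=1.0, sigma_t=0.5, q=0.2, s=2.0))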

  16. Application of a Java-based, univel geometry, neutral particle Monte Carlo code to the searchlight problem

    International Nuclear Information System (INIS)

    Charles A. Wemple; Joshua J. Cogliati

    2005-01-01

    A univel geometry, neutral particle Monte Carlo transport code, written entirely in the Java programming language, is under development for medical radiotherapy applications. The code uses ENDF-VI based continuous energy cross section data in a flexible XML format. Full neutron-photon coupling, including detailed photon production and photonuclear reactions, is included. Charged particle equilibrium is assumed within the patient model so that detailed transport of electrons produced by photon interactions may be neglected. External beam and internal distributed source descriptions for mixed neutron-photon sources are allowed. Flux and dose tallies are performed on a univel basis. A four-tap, shift-register-sequence random number generator is used. Initial verification and validation testing of the basic neutron transport routines is underway. The searchlight problem was chosen as a suitable first application because of the simplicity of the physical model. Results show excellent agreement with analytic solutions. Computation times for similar numbers of histories are comparable to other neutron MC codes written in C and FORTRAN

  17. Scattering and absorption of light by ice particles: Solution by a new physical-geometric optics hybrid method

    International Nuclear Information System (INIS)

    Bi Lei; Yang Ping; Kattawar, George W.; Hu Yongxiang; Baum, Bryan A.

    2011-01-01

    A new physical-geometric optics hybrid (PGOH) method is developed to compute the scattering and absorption properties of ice particles. This method is suitable for studying the optical properties of ice particles with arbitrary orientations, complex refractive indices (i.e., particles with significant absorption), and size parameters (proportional to the ratio of particle size to incident wavelength) larger than ∼20, and includes consideration of the edge effects necessary for accurate determination of the extinction and absorption efficiencies. Light beams with polygon-shaped cross sections propagate within a particle and are traced by using a beam-splitting technique. The electric field associated with a beam is calculated using a beam-tracing process in which the amplitude and phase variations over the wavefront of the localized wave associated with the beam are considered analytically. The geometric-optics near field for each ray is obtained, and the single-scattering properties of particles are calculated from electromagnetic integral equations. The present method does not assume additional physical simplifications and approximations, except for geometric optics principles, and may be regarded as a 'benchmark' within the framework of the geometric optics approach. The computational time is on the order of seconds for a single-orientation simulation and is essentially independent of the size parameter. The single-scattering properties of oriented hexagonal ice particles (ice plates and hexagons) are presented. The numerical results are compared with those computed from the discrete-dipole-approximation (DDA) method.

  18. Computer codes for particle accelerator design and analysis: A compendium. Second edition

    International Nuclear Information System (INIS)

    Deaven, H.S.; Chan, K.C.D.

    1990-05-01

    The design of the next generation of high-energy accelerators will probably be done as an international collaborative effort, and it would make sense to establish, either formally or informally, an international center for accelerator codes with branches for maintenance, distribution, and consultation at strategically located accelerator centers around the world. This arrangement could have at least three beneficial effects. It would cut down duplication of effort, provide long-term support for the best codes, and provide a stimulating atmosphere for the evolution of new codes. It does not take much foresight to see that the natural evolution of accelerator design codes is toward the development of so-called Expert Systems, systems capable of taking design specifications of future accelerators and producing specifications for optimized magnetic transport and acceleration components, making a layout, and giving a fairly impartial cost estimate. Such an expert program would use present-day programs such as TRANSPORT, POISSON, and SUPERFISH as tools in the optimization process. Such a program would also serve to codify the experience of two generations of accelerator designers before it is lost as these designers reach retirement age. This document describes 203 codes that originate from 10 countries and are currently in use. The authors feel that this compendium will contribute to the dialogue supporting the international collaborative effort that is taking place in the field of accelerator physics today

  19. Recent Improvements to the IMPACT-T Parallel Particle Tracking Code

    International Nuclear Information System (INIS)

    Qiang, J.; Pogorelov, I.V.; Ryne, R.

    2006-01-01

    The IMPACT-T code is a parallel three-dimensional quasi-static beam dynamics code for modeling high-brightness beams in photoinjectors and RF linacs. Developed under the US DOE Scientific Discovery through Advanced Computing (SciDAC) program, it includes several key features: a self-consistent calculation of 3D space-charge forces using a shifted and integrated Green function method, multiple energy bins for beams with large energy spread, and models for treating RF standing-wave and traveling-wave structures. In this paper, we report on recent improvements to the IMPACT-T code, including the modeling of traveling-wave structures, short-range transverse and longitudinal wakefields, and longitudinal coherent synchrotron radiation through bending magnets.

  20. Convergence acceleration in the Monte-Carlo particle transport code TRIPOLI-4 in criticality

    International Nuclear Information System (INIS)

    Dehaye, Benjamin

    2014-01-01

    Fields such as criticality studies need to compute certain quantities of interest in neutron physics. Two kinds of codes may be used: deterministic ones and stochastic ones. The stochastic codes do not require approximations and are thus more exact. However, they may require a lot of time to converge with sufficient precision. The work carried out during this thesis aims to build an efficient acceleration strategy in the TRIPOLI-4 code. We wish to implement the zero-variance game. To do so, the method requires the adjoint flux. The originality of this work is to compute the adjoint flux directly from the Monte-Carlo simulation itself, without using external codes, thanks to the fission matrix method. This adjoint flux is then used as an importance map to bias the simulation. (author)
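
    As an orientation to the fission-matrix approach mentioned above, the sketch below estimates a forward fission source and an adjoint (importance) distribution from a tallied fission matrix; the matrix values, the power-iteration solver, and the function names are illustrative assumptions, not TRIPOLI-4 internals.

    ```python
    import numpy as np

    def dominant_eig(A, iters=500):
        """Power iteration for the dominant eigenpair of a non-negative matrix."""
        x = np.ones(A.shape[0])
        lam = 0.0
        for _ in range(iters):
            y = A @ x
            lam = np.linalg.norm(y)
            x = y / lam
        return lam, x

    def source_and_importance(F):
        """F[i, j] ~ expected next-generation fission neutrons born in region i per
        fission neutron born in region j, as tallied during the Monte Carlo run.
        The dominant right eigenvector of F approximates the forward fission
        source; the dominant left eigenvector (right eigenvector of F^T) plays
        the role of the adjoint flux used as an importance map for biasing."""
        k_eff, src = dominant_eig(F)
        _, imp = dominant_eig(F.T)
        return k_eff, src / src.sum(), imp / imp.sum()

    # Toy 3-region fission matrix (purely illustrative numbers).
    F = np.array([[0.60, 0.20, 0.05],
                  [0.20, 0.55, 0.20],
                  [0.05, 0.20, 0.60]])
    print(source_and_importance(F))
    ```

    The normalized left eigenvector obtained this way is what would serve as the importance map when biasing the next simulation.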

  1. New scope covered by PHITS. Particle and heavy ion transport code system

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Niita, Koji; Iwase, Hiroshi; Sato, Tatsuhiko

    2006-01-01

    PHITS is a general-purpose high-energy transport code covering hadrons to heavy ions, built by embedding the JQMD code in NMTC-JAM. An outline of PHITS and many application examples are given. PHITS has been used for the shielding calculations of J-PARC, GSI, RIA and Big-RIPS, and good results were reported. The evaluation of exposure doses for astronauts and aircrew, proton and heavy-ion therapy, and the estimation of soft-error rates in semiconductor devices are explained as application examples. The relation between the event generator and the Monte Carlo method, and future prospects, are described. (S.Y.)

  2. Modeling an emittance-dominated elliptical sheet beam with a 2½-dimensional particle-in-cell code

    International Nuclear Information System (INIS)

    Carlsten, Bruce E.

    2005-01-01

    Modeling a 3-dimensional (3-D) elliptical beam with a 2½-D particle-in-cell (PIC) code requires a reduction in the beam parameters. The 2½-D PIC code can only model the center slice of the sheet beam, but that can still provide useful information about the beam transport and distribution evolution, even if the beam is emittance dominated. The reduction of beam parameters and the resulting interpretation of the simulation are straightforward, but not trivial. In this paper, we describe the beam parameter reduction and the emittance issues related to the initial beam distribution. As a numerical example, we use the case of a sheet beam designed for use with a planar traveling-wave amplifier for high-power RF generation from 95 to 300 GHz [Carlsten et al., IEEE Trans. Plasma Sci. 33 (2005) 85]. These numerical techniques also apply to modeling high-energy elliptical bunches in RF accelerators

  3. Parallel Finite Element Particle-In-Cell Code for Simulations of Space-charge Dominated Beam-Cavity Interactions

    International Nuclear Information System (INIS)

    Candel, A.; Kabel, A.; Ko, K.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.

    2007-01-01

    Over the past years, SLAC's Advanced Computations Department (ACD) has developed the parallel finite element (FE) particle-in-cell code Pic3P (Pic2P) for simulations of beam-cavity interactions dominated by space-charge effects. As opposed to standard space-charge dominated beam transport codes, which are based on the electrostatic approximation, Pic3P (Pic2P) includes space-charge, retardation and boundary effects as it self-consistently solves the complete set of Maxwell-Lorentz equations using higher-order FE methods on conformal meshes. Use of efficient, large-scale parallel processing allows for the modeling of photoinjectors with unprecedented accuracy, aiding the design and operation of the next generation of accelerator facilities. Applications to the Linac Coherent Light Source (LCLS) RF gun are presented

  4. Hybridization-based reconstruction of small non-coding RNA transcripts from deep sequencing data.

    Science.gov (United States)

    Ragan, Chikako; Mowry, Bryan J; Bauer, Denis C

    2012-09-01

    Recent advances in RNA sequencing technology (RNA-Seq) enable comprehensive profiling of RNAs by producing millions of short sequence reads from size-fractionated RNA libraries. Although conventional tools for detecting and distinguishing non-coding RNAs (ncRNAs) from reference-genome data can be applied to sequence data, ncRNA detection can be improved by harnessing the full information content provided by this new technology. Here we present NorahDesk, the first unbiased and universally applicable method for small ncRNA detection from RNA-Seq data. NorahDesk utilizes the coverage distribution of small RNA sequence data as well as thermodynamic assessments of secondary structure to reliably predict and annotate ncRNA classes. Using publicly available mouse sequence data from brain, skeletal muscle, testis and ovary, we evaluated our method with an emphasis on its performance for microRNAs (miRNAs) and piwi-interacting RNAs (piRNAs). We compared our method with Dario and mirDeep2 and found that NorahDesk produces longer transcripts with higher read coverage. This feature makes it the first method particularly suitable for the prediction of both known and novel piRNAs.

  5. Simulation of thermal-neutron-induced single-event upset using particle and heavy-ion transport code system

    International Nuclear Information System (INIS)

    Arita, Yutaka; Kihara, Yuji; Mitsuhasi, Junichi; Niita, Koji; Takai, Mikio; Ogawa, Izumi; Kishimoto, Tadafumi; Yoshihara, Tsutomu

    2007-01-01

    The simulation of a thermal-neutron-induced single-event upset (SEU) was performed on a 0.4-μm-design-rule 4 Mbit static random access memory (SRAM) using the Particle and Heavy Ion Transport code System (PHITS). The SEU rates obtained by the simulation were in very good agreement with the results of experiments. PHITS is a useful tool for simulating SEUs in semiconductor devices. To further improve the accuracy of the simulation, additional methods for tallying the energy deposition are required in PHITS. (author)

  6. A hybrid parallel architecture for electrostatic interactions in the simulation of dissipative particle dynamics

    Science.gov (United States)

    Yang, Sheng-Chun; Lu, Zhong-Yuan; Qian, Hu-Jun; Wang, Yong-Lei; Han, Jie-Ping

    2017-11-01

    In this work, we upgraded the electrostatic interaction method of CU-ENUF (Yang et al., 2016), which first applied CUNFFT (nonequispaced Fourier transforms based on CUDA) to the reciprocal-space electrostatic computation, so that the electrostatic interaction is computed entirely on the GPU. The upgraded edition of CU-ENUF runs in a hybrid parallel way: the computation is first distributed across multiple computer nodes and then offloaded to the GPU installed in each node. With this parallel strategy, the size of the simulation system is no longer restricted by the throughput of a single CPU or GPU. The most critical technical problem is how to parallelize the CUNFFT within this strategy, which is solved through a careful analysis of its basic principles and several algorithmic refinements. Furthermore, the upgraded method is capable of computing electrostatic interactions for both atomistic molecular dynamics (MD) and dissipative particle dynamics (DPD). Finally, benchmarks conducted for validation and performance indicate that the upgraded method not only achieves good precision with suitable parameters, but also provides an efficient way to compute electrostatic interactions for very large simulation systems. Program files doi: http://dx.doi.org/10.17632/zncf24fhpv.1. Licensing provisions: GNU General Public License 3 (GPL). Programming language: C, C++, and CUDA C. Supplementary material: The program is designed for effective electrostatic interactions of large-scale simulation systems and runs on computers equipped with NVIDIA GPUs. It has been tested on (a) a single computer node with an Intel(R) Core(TM) i7-3770 @ 3.40 GHz (CPU) and a GTX 980 Ti (GPU), and (b) MPI parallel computer nodes with the same configurations. Nature of problem: For molecular dynamics simulation, the electrostatic interaction is the most time-consuming computation because of its long-range nature and slow convergence in simulation space

  7. Controlled dense coding for continuous variables using three-particle entangled states

    CERN Document Server

    Jing Zhang; Kun Chi Peng; 10.1103/PhysRevA.66.032318

    2002-01-01

    A simple scheme to realize quantum controlled dense coding with bright tripartite entangled light generated from nondegenerate optical parametric amplifiers is proposed in this paper. The quantum channel between Alice and Bob is controlled by Claire. As a local oscillator and balanced homodyne detector are not needed, the proposed protocol is easy to realize experimentally. (15 refs)

  8. Gold nano particle decorated graphene core first generation PAMAM dendrimer for label free electrochemical DNA hybridization sensing.

    Science.gov (United States)

    Jayakumar, K; Rajesh, R; Dharuman, V; Venkatasan, R; Hahn, J H; Pandian, S Karutha

    2012-01-15

    A novel first-generation (G1) poly(amidoamine) dendrimer (PAMAM) with a graphene core (GG1PAMAM) was synthesized for the first time. A single layer of GG1PAMAM was immobilized covalently on a mercaptopropionic acid (MPA) monolayer on an Au transducer. This allows more cost-effective and easier deposition of single-layer graphene on the Au transducer surface than the advanced vacuum techniques used in the literature. Au nanoparticles (17.5 nm) were then decorated onto the GG1PAMAM and used for electrochemical DNA hybridization sensing. The sensor selectively and sensitively discriminates the complementary double-stranded DNA (dsDNA, hybridized), non-complementary DNA (ssDNA, un-hybridized) and single-nucleotide polymorphism (SNP) surfaces. Interactions of the MPA, GG1PAMAM and the Au nanoparticles were characterized by ultraviolet (UV), Fourier transform infrared (FTIR) and Raman spectroscopy (RS), thermogravimetric analysis (TGA), scanning electron microscopy (SEM), atomic force microscopy (AFM), cyclic voltammetry (CV), impedance spectroscopy (IS) and differential pulse voltammetry (DPV) techniques. The sensor showed a linear range from 1×10^-6 to 1×10^-12 M with a lowest detection limit of 1 pM, which is 1000 times lower than that of G1PAMAM without the graphene core.

  9. Synthesis of MnO nano-particle@fluorine doped carbon and its application in hybrid supercapacitor

    Energy Technology Data Exchange (ETDEWEB)

    Qu, Deyu; Feng, Xiaoke [Department of Chemistry, School of Chemistry, Chemical Engineering and Life Science, Wuhan University of Technology, Wuhan 430070, Hubei (China); Wei, Xi [School of Materials Science and Engineering, State Key Laboratory of Advanced Technology for Materials Synthesis and Processing, Wuhan University of Technology, Wuhan 430070, Hubei (China); Guo, Liping [Department of Chemistry, School of Chemistry, Chemical Engineering and Life Science, Wuhan University of Technology, Wuhan 430070, Hubei (China); Cai, Haopeng, E-mail: cai_haopeng@whut.edu.cn [School of Materials Science and Engineering, State Key Laboratory of Advanced Technology for Materials Synthesis and Processing, Wuhan University of Technology, Wuhan 430070, Hubei (China); Tang, Haolin [School of Materials Science and Engineering, State Key Laboratory of Advanced Technology for Materials Synthesis and Processing, Wuhan University of Technology, Wuhan 430070, Hubei (China); Xie, Zhizhong, E-mail: zhizhong_xie@163.com [Department of Chemistry, School of Chemistry, Chemical Engineering and Life Science, Wuhan University of Technology, Wuhan 430070, Hubei (China)

    2017-08-15

    Highlights: • A fluorine-doped carbon encapsulated MnO nanoparticle material was fabricated through a self-assembly method. • Nafion ionomers were used as the fluorine and carbon precursor. • A lithium-ion supercapacitor was assembled using MnO@FC and porous carbon. • A stable energy density as well as superior cycling stability were demonstrated in this hybrid system. - Abstract: A fluorine-doped carbon material encapsulating MnO nanoparticles was synthesized through a self-assembly method. MnO nanocrystals covered with a thin layer of graphite were obtained. This hybrid MnO/carbon material was employed as the negative electrode in a new lithium-ion hybrid supercapacitor, while electrochemical double-layer porous carbon served as the positive electrode. The electrochemical performance of this hybrid device was investigated; it exhibited a relatively high capacity of up to 40 mAh g^-1 at an applied current of 200 mA g^-1, good rate performance, and superior cycling stability.

  10. A magnetostatic particle code and its application to studies of anomalous current penetration of a plasma

    International Nuclear Information System (INIS)

    Lin, A.T.; Pritchett, P.L.; Dawson, J.M.

    1976-01-01

    A large number of important plasma problems involve self-consistent magnetic fields. For disturbances which propagate slowly compared to the velocity of light, the magnetostatic approximation (Darwin model) suffices. Based on the Darwin model, a particle model has been developed to investigate such problems. (GG)

  11. On the integration of equations of motion for particle-in-cell codes

    International Nuclear Information System (INIS)

    Fuchs, V.; Gunn, J.P.

    2006-01-01

    An area-preserving implementation of the 2nd order Runge-Kutta integration method for equations of motion is presented. For forces independent of velocity the scheme possesses the same numerical simplicity and stability as the leapfrog method, and is not implicit for forces which do depend on velocity. It can therefore be easily applied where the leapfrog method in general cannot. We discuss the stability of the new scheme and test its performance in calculations of particle motion in three cases of interest. First, in the ubiquitous and numerically demanding example of nonlinear interaction of particles with a propagating plane wave, second, in the case of particle motion in a static magnetic field and, third, in a nonlinear dissipative case leading to a limit cycle. We compare computed orbits with exact orbits and with results from the leapfrog and other low-order integration schemes. Of special interest is the role of intrinsic stochasticity introduced by time differencing, which can destroy orbits of an otherwise exactly integrable system and therefore constitutes a restriction on the applicability of an integration scheme in such a context [A. Friedman, S.P. Auerbach, J. Comput. Phys. 93 (1991) 171]. In particular, we show that for a plane wave the new scheme proposed herein can be reduced to a symmetric standard map. This leads to the nonlinear stability condition Δt·ω_B ≤ 1, where Δt is the time step and ω_B the particle bounce frequency
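
    For orientation, the sketch below sets up the plane-wave test problem mentioned above and advances a particle with the leapfrog baseline the new scheme is compared against; the paper's area-preserving RK2 update itself is not reproduced, and all parameter values are arbitrary assumptions.

    ```python
    import numpy as np

    # Test problem: a particle in a propagating electrostatic plane wave,
    # E(x, t) = E0*sin(k*x - w*t). For a particle trapped in the wave, the
    # bounce frequency is omega_B = sqrt(q*E0*k/m), so the stability
    # condition quoted above reads dt*omega_B <= 1.
    E0, k, w, q_over_m = 0.2, 1.0, 1.0, 1.0
    dt, nsteps = 0.05, 4000

    def accel(x, t):
        return q_over_m * E0 * np.sin(k * x - w * t)

    x, v = 0.0, 1.2                       # initial position and velocity
    v -= 0.5 * dt * accel(x, 0.0)         # stagger velocity back to t = -dt/2
    for n in range(nsteps):
        t = n * dt
        v += dt * accel(x, t)             # kick: v(t-dt/2) -> v(t+dt/2)
        x += dt * v                       # drift: x(t) -> x(t+dt)
    print(x, v)
    ```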

  12. Misconception regarding conventional coupling of fields and particles in XFEL codes

    Energy Technology Data Exchange (ETDEWEB)

    Geloni, Gianluca [Europeam XFEL GmbH, Hamburg (Germany); Kocharyan, Vitali; Saldin, Evgeni [DESY Hamburg (Germany)

    2016-01-15

    Maxwell theory is usually treated in the laboratory frame under the standard time order, that is the usual light-signal clock synchronization. In contrast, particle tracking in the laboratory frame usually treats time as an independent variable. As a result, here we argue that the evolution of electron beams is usually treated according to the absolute time convention, i.e. using a different time order defined by a non-standard clock synchronization procedure. This essential point has never received attention in the accelerator community. There are two possible ways of coupling fields and particles in this situation. The first, Lorentz's prerelativistic way, consists in a 'translation' of Maxwell's electrodynamics to the absolute time world-picture. The second, Einstein's way, consists in a 'translation' of particle tracking results to the electromagnetic world-picture, obeying the standard time order. Conventional particle tracking shows that the electron beam direction changes after a transverse kick, while the orientation of the microbunching phase front stays unvaried. Here we show that in the ultrarelativistic asymptotic limit v → c, the orientation of the planes of simultaneity, i.e. the orientation of the microbunching fronts, is always perpendicular to the electron beam velocity when the evolution of the modulated electron beam is treated under Einstein's time order. This effect allows for the production of coherent undulator radiation from a modulated electron beam in the kicked direction without suppression. We regard a recent FEL study at the LCLS as direct experimental evidence that the microbunching wavefront indeed readjusts its direction after the electron beam is kicked by a large angle, limited only by the beamline aperture. In a previous paper we quantitatively described this result by invoking the aberration of light effect, which corresponds to Lorentz's way of coupling fields and particles. The purpose of

  13. Misconception regarding conventional coupling of fields and particles in XFEL codes

    International Nuclear Information System (INIS)

    Geloni, Gianluca; Kocharyan, Vitali; Saldin, Evgeni

    2016-01-01

    Maxwell theory is usually treated in the laboratory frame under the standard time order, that is the usual light-signal clock synchronization. In contrast, particle tracking in the laboratory frame usually treats time as an independent variable. As a result, here we argue that the evolution of electron beams is usually treated according to the absolute time convention, i.e. using a different time order defined by a non-standard clock synchronization procedure. This essential point has never received attention in the accelerator community. There are two possible ways of coupling fields and particles in this situation. The first, Lorentz's prerelativistic way, consists in a 'translation' of Maxwell's electrodynamics to the absolute time world-picture. The second, Einstein's way, consists in a 'translation' of particle tracking results to the electromagnetic world-picture, obeying the standard time order. Conventional particle tracking shows that the electron beam direction changes after a transverse kick, while the orientation of the microbunching phase front stays unvaried. Here we show that in the ultrarelativistic asymptotic limit v → c, the orientation of the planes of simultaneity, i.e. the orientation of the microbunching fronts, is always perpendicular to the electron beam velocity when the evolution of the modulated electron beam is treated under Einstein's time order. This effect allows for the production of coherent undulator radiation from a modulated electron beam in the kicked direction without suppression. We regard a recent FEL study at the LCLS as direct experimental evidence that the microbunching wavefront indeed readjusts its direction after the electron beam is kicked by a large angle, limited only by the beamline aperture. In a previous paper we quantitatively described this result by invoking the aberration of light effect, which corresponds to Lorentz's way of coupling fields and particles. The purpose of

  14. Tripoli-4, a three-dimensional poly-kinetic particle transport Monte-Carlo code

    International Nuclear Information System (INIS)

    Both, J.P.; Lee, Y.K.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B.; Soldevila, M.

    2003-01-01

    In this update of the Monte Carlo transport code Tripoli-4, we list and describe its current main features. The code computes coupled neutron-photon propagation as well as the electron-photon cascade shower. While providing the user with common biasing techniques, it also implements an automatic weighting scheme. Tripoli-4 enables the user to compute the following physical quantities: a flux, a multiplication factor, a current, a reaction rate, and a dose equivalent rate, as well as energy deposition and recoil energies. For each physical quantity of interest, the Monte Carlo simulation offers different types of estimators. Tripoli-4 has support for execution in parallel mode. Special features and applications are also presented.
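
    As a generic illustration of what "different types of estimators" for the same quantity means in a Monte Carlo transport code (this is not Tripoli-4 internals; the slab, cross section, and history count below are arbitrary assumptions), the sketch compares a track-length and a collision estimator of the scalar flux in a purely absorbing slab:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Track-length vs. collision estimator of the scalar flux in a purely
    # absorbing slab of thickness L hit by a normally incident unit source.
    sigma_t, L, n_hist = 0.5, 2.0, 100_000
    track, collide = 0.0, 0.0
    for _ in range(n_hist):
        d = rng.exponential(1.0 / sigma_t)   # distance to the (only) collision
        track += min(d, L)                   # track-length estimator: path length in cell
        if d < L:                            # collision estimator: score 1/sigma_t at collisions
            collide += 1.0 / sigma_t
    volume = L * 1.0                         # unit cross-sectional area
    print(track / (n_hist * volume), collide / (n_hist * volume))
    # Both estimates converge to the same cell-averaged flux per source particle,
    # (1 - exp(-sigma_t*L)) / (sigma_t*L) ~ 0.632 here, but with different variances.
    ```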

  15. Tripoli-4, a three-dimensional poly-kinetic particle transport Monte-Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Both, J P; Lee, Y K; Mazzolo, A; Peneliau, Y; Petit, O; Roesslinger, B; Soldevila, M [CEA Saclay, Dir. de l' Energie Nucleaire (DEN/DM2S/SERMA/LEPP), 91 - Gif sur Yvette (France)

    2003-07-01

    In this update of the Monte Carlo transport code Tripoli-4, we list and describe its current main features. The code computes coupled neutron-photon propagation as well as the electron-photon cascade shower. While providing the user with common biasing techniques, it also implements an automatic weighting scheme. Tripoli-4 enables the user to compute the following physical quantities: a flux, a multiplication factor, a current, a reaction rate, and a dose equivalent rate, as well as energy deposition and recoil energies. For each physical quantity of interest, the Monte Carlo simulation offers different types of estimators. Tripoli-4 has support for execution in parallel mode. Special features and applications are also presented.

  16. The Karlsruhe code MODINA for model independent analysis of elastic scattering of spinless particles

    International Nuclear Information System (INIS)

    Gils, H.J.

    1983-12-01

    The Karlsruhe code MODINA (KfK 3063, published November 1980) has been extended, in particular with respect to new approximations in the folding models and to the calculation of errors in the Fourier-Bessel potentials. The corresponding subroutines replacing previous ones are compiled in this first supplement. The listings of the fit-routine package FITEX, missing in the first publication of MODINA, are also included now. (orig.)

  17. On the performance of hybrid-ARQ with incremental redundancy and with code combining over relay channels

    KAUST Repository

    Chelli, Ali

    2013-08-01

    In this paper, we consider a relay network consisting of a source, a relay, and a destination. The source transmits a message to the destination using hybrid automatic repeat request (HARQ). The relay overhears the transmitted messages over the different HARQ rounds and tries to decode the data packet. In case of successful decoding at the relay, both the relay and the source cooperate to transmit the message to the destination. The channel realizations are independent for different HARQ rounds. We assume that the transmitter has no channel state information (CSI). Under such conditions, power and rate adaptation are not possible. To overcome this problem, HARQ allows the implicit adaptation of the transmission rate to the channel conditions by the use of feedback. There are two major HARQ techniques, namely HARQ with incremental redundancy (IR) and HARQ with code combining (CC). We investigate the performance of HARQ-IR and HARQ-CC over a relay channel from an information theoretic perspective. Analytical expressions are derived for the information outage probability, the average number of transmissions, and the average transmission rate. We illustrate through our investigation the benefit of relaying. We also compare the performance of HARQ-IR and HARQ-CC and show that HARQ-IR outperforms HARQ-CC. © 2013 IEEE.
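
    To make the IR-versus-CC comparison concrete, the sketch below estimates outage probabilities for the two schemes on a simplified point-to-point Rayleigh-fading link after a fixed number of rounds; the relay cooperation analyzed in the paper is deliberately omitted, and the rate, average SNR, and round count are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Outage comparison of HARQ-IR and HARQ-CC after M rounds on a
    # point-to-point Rayleigh link with i.i.d. per-round channel realizations.
    R, snr_db, M, trials = 2.0, 5.0, 4, 200_000
    snr = 10.0 ** (snr_db / 10.0)
    gam = snr * rng.exponential(1.0, size=(trials, M))    # per-round channel SNRs

    info_ir = np.cumsum(np.log2(1.0 + gam), axis=1)       # IR: mutual information accumulates
    info_cc = np.log2(1.0 + np.cumsum(gam, axis=1))       # CC: SNR accumulates

    p_out_ir = np.mean(info_ir[:, -1] < R)                # outage after M rounds
    p_out_cc = np.mean(info_cc[:, -1] < R)
    print(p_out_ir, p_out_cc)
    ```

    Because log2(1+a) + log2(1+b) ≥ log2(1+a+b) for non-negative SNRs, the IR outage estimate always lies at or below the CC one, consistent with the paper's conclusion that HARQ-IR outperforms HARQ-CC.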

  18. Medium-energy electrons and heavy ions in Jupiter's magnetosphere - Effects of lower hybrid wave-particle interactions

    Science.gov (United States)

    Barbosa, D. D.

    1986-01-01

    A theory of medium-energy (about keV) electrons and heavy ions in Jupiter's magnetosphere is presented. Lower hybrid waves are generated by the combined effects of a ring instability of neutral-wind pickup ions and the modified two-stream instability associated with transport of cool Iogenic plasma. The quasi-linear energy diffusion coefficient for lower hybrid wave-particle interactions is evaluated, and several solutions to the diffusion equation are given. Calculations based on measured wave properties show that the noise substantially modifies the particle distribution functions. The effects are to accelerate superthermal ions and electrons to keV energies and to thermalize the pickup ions on time scales comparable to the particle residence time. The S(2+)/S(+) ratio at medium energies is a measure of the relative contribution from Iogenic thermal plasma and neutral-wind ions, and this important quantity should be determined from future measurements. The theory also predicts a preferential acceleration of heavy ions, with an acceleration time that scales inversely with the square root of the ion mass. Electrons accelerated by the process contribute to further reionization of the neutral wind by electron impact, thus providing a possible confirmation of Alfven's critical velocity effect in the Jovian magnetosphere.

  19. Particle swarm optimization of driving torque demand decision based on fuel economy for plug-in hybrid electric vehicle

    International Nuclear Information System (INIS)

    Shen, Peihong; Zhao, Zhiguo; Zhan, Xiaowen; Li, Jingwei

    2017-01-01

    In this paper, an energy management strategy based on logic thresholds is proposed for a plug-in hybrid electric vehicle. The plug-in hybrid electric vehicle powertrain model is established in MATLAB/Simulink based on experimental tests of the power components and is validated by comparison with a verified simulation model built in AVL Cruise. The influence of the driving torque demand decision on the fuel economy of the plug-in hybrid electric vehicle is studied by simulation. An optimization method for the driving torque demand decision, which refers to the relationship between the accelerator pedal opening and the driving torque demand, is formulated from the perspective of fuel economy. A particle swarm optimization with dynamically changing inertia weight is used to optimize the decision parameters. The simulation results show that the optimized driving torque demand decision can improve the PHEV fuel economy by 15.8% and 14.5% in the New European Driving Cycle and the Worldwide Harmonized Light Vehicles Test cycle, respectively, using the same rule-based energy management strategy. The proposed optimization method provides a theoretical guide for calibrating the parameters of the driving torque demand decision to improve the fuel economy of a real plug-in hybrid electric vehicle. - Highlights: • The influence of the driving torque demand decision on the fuel economy is studied. • The optimization method for the driving torque demand decision is formulated. • An improved particle swarm optimization is utilized to optimize the parameters. • Fuel economy is improved by using the optimized driving torque demand decision.
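
    A minimal sketch of a particle swarm optimizer with a linearly decreasing (dynamically changing) inertia weight, the kind of routine described above; the objective function, bounds, and all coefficient values are placeholders for the paper's fuel-consumption evaluation of the powertrain model over a driving cycle.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def pso(cost, lo, hi, n_particles=30, iters=200, c1=2.0, c2=2.0,
            w_max=0.9, w_min=0.4):
        """PSO with a linearly decreasing inertia weight; `cost` stands in for a
        fuel-consumption evaluation over a driving cycle (any scalar objective works)."""
        dim = lo.size
        x = rng.uniform(lo, hi, size=(n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = np.array([cost(p) for p in x])
        g = pbest[pbest_f.argmin()].copy()
        for it in range(iters):
            w = w_max - (w_max - w_min) * it / (iters - 1)   # inertia schedule
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([cost(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[pbest_f.argmin()].copy()
        return g, pbest_f.min()

    # Toy objective standing in for fuel consumption as a function of four
    # parameters of a pedal-opening-to-torque-demand map (purely illustrative).
    best_params, best_cost = pso(lambda p: float(np.sum((p - 0.3) ** 2)),
                                 np.zeros(4), np.ones(4))
    print(best_params, best_cost)
    ```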

  20. Effects of metallic Ti particles on the aging behavior and the influenced mechanical properties of squeeze-cast (SiCp+Ti)/7075Al hybrid composites

    International Nuclear Information System (INIS)

    Liu, Yixiong; Chen, Weiping; Yang, Chao; Zhu, Dezhi; Li, Yuanyuan

    2015-01-01

    The effects of metallic Ti particles on the aging behavior of squeeze-cast (SiCp+Ti)/7075Al hybrid composites and on the mechanical properties of the aging-treated composites were investigated. Results showed that the precipitation hardening of the hybrid composites during aging was delayed due to the segregation of solute Mg atoms in the vicinity of the Ti particles, even though the activation energy of the η′ precipitates in the hybrid composites was reduced compared with the Ti particle-free composites. The segregation of the solute Mg atoms was facilitated by the high-diffusivity paths formed by dislocations generated in the matrix by the thermal misfit between the SiC particles and the matrix. The smaller activation energy for the hybrid composite may be attributed to a significant reduction in the nucleation rate of the dislocation-nucleated η′ precipitates compared with the Ti particle-free composite. After aging under the optimum conditions, the tensile strength of both composites was improved because of the precipitation hardening of the matrix alloy. In contrast with the reduced ductility of the traditional Ti particle-free composites after aging treatment, the ductility of the Ti particle-containing composites was improved as a result of the strengthened interfaces between the Ti particles and the matrix alloy

  1. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for PKA energy spectra and heating number under neutron irradiation

    International Nuclear Information System (INIS)

    Iwamoto, Y.; Ogawa, T.

    2016-01-01

    The modelling of damage in materials irradiated by neutrons is needed for understanding the mechanism of radiation damage in fission and fusion reactor facilities. Molecular dynamics simulations of damage cascades with full atomic interactions require information about the energy distribution of the primary knock-on atoms (PKAs). The most common way to calculate PKA energy spectra under low-energy neutron irradiation is to use the nuclear data processing code NJOY2012. It calculates group-to-group recoil cross section matrices from nuclear data libraries in ENDF format, which contain energy and angular recoil distributions for many reactions. After the NJOY2012 processing, SPKA6C is employed to produce PKA energy spectra by combining the recoil cross section matrices with an incident neutron energy spectrum. However, an intercomparison of different processing routes and nuclear data libraries has not been performed yet. In particular, the higher energy of the incident neutrons compared to fission (~5 MeV) leads to many reaction channels, which produces a complex distribution of PKAs in energy and type. Recently, we have developed the event generator mode (EGM) in the Particle and Heavy Ion Transport code System PHITS for neutron-induced reactions in the energy region below 20 MeV. The main feature of EGM is to produce PKAs while keeping energy and momentum conservation in a reaction. It is used for event-by-event analysis in application fields such as soft-error analysis in semiconductors, microdosimetry in the human body, and the estimation of displacement-per-atom (DPA) values in metals. The purpose of this work is to quantify the differences in PKA spectra and in the heating number related to kerma between the PHITS-EGM and NJOY2012+SPKA6C calculation routes with the libraries TENDL-2015, ENDF/B-VII.1 and JENDL-4.0 for fusion-relevant materials

  2. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for recoil cross section spectra under neutron irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Iwamoto, Yosuke, E-mail: iwamoto.yosuke@jaea.go.jp; Ogawa, Tatsuhiko

    2017-04-01

    Because primary knock-on atoms (PKAs) create point defects and clusters in materials that are irradiated with neutrons, it is important to validate the calculations of recoil cross section spectra that are used to estimate radiation damage in materials. Here, the recoil cross section spectra of fission- and fusion-relevant materials were calculated using the Event Generator Mode (EGM) of the Particle and Heavy Ion Transport code System (PHITS) and also using the data processing code NJOY2012 with the nuclear data libraries TENDL-2015, ENDF/B-VII.1, and JEFF-3.2. The heating number, which is the integral of the recoil cross section spectra, was also calculated using PHITS-EGM and compared with data extracted from the ACE files of TENDL-2015, ENDF/B-VII.1, and JENDL-4.0. In general, only a small difference was found between the PKA spectra of PHITS + TENDL-2015 and NJOY + TENDL-2015. From analyzing the recoil cross section spectra extracted from the nuclear data libraries using NJOY2012, we found that the recoil cross section spectra were incorrect for 72Ge, 75As, 89Y, and 109Ag in the ENDF/B-VII.1 library, and for 90Zr and 55Mn in the JEFF-3.2 library. From analyzing the heating number, we found that the data extracted from the ACE file of TENDL-2015 for all nuclides were problematic in the neutron capture region because of incorrect data regarding the emitted gamma energy. However, PHITS + TENDL-2015 can calculate PKA spectra and heating numbers correctly.

  3. Excitation of hybridized Dirac plasmon polaritons and transition radiation in multi-layer graphene traversed by a fast charged particle

    Science.gov (United States)

    Akbari, Kamran; Mišković, Zoran L.; Segui, Silvina; Gervasoni, Juana L.; Arista, Néstor R.

    2018-06-01

    We analyze the energy loss channels for a fast charged particle traversing a multi-layer graphene (MLG) structure with N layers under normal incidence. Focusing on a terahertz (THz) range of frequencies, and assuming equally doped graphene layers with a large enough separation d between them to neglect interlayer electron hopping, we use the Drude model for two-dimensional conductivity of each layer to describe hybridization of graphene’s Dirac plasmon polaritons (DPPs). Performing a layer decomposition of ohmic energy losses, which include excitation of hybridized DPPs (HDPPs), we have found for N = 3 that the middle HDPP eigenfrequency is not excited in the middle layer due to symmetry constraint, whereas the excitation of the lowest HDPP eigenfrequency produces a Fano resonance in the graphene layer that is first traversed by the charged particle. While the angular distribution of transition radiation emitted in the far field region also shows asymmetry with respect to the traversal order by the incident charged particle at supra-THz frequencies, the integrated radiative energy loss is surprisingly independent of both d and N for N ≤ 5, which is explained by a dominant role of the outer graphene layers in transition radiation. We have further found that the integrated ohmic energy loss in optically thin MLG scales as ∝1/N at sub-THz frequencies, which is explained by exposing the role of dissipative processes in graphene at low frequencies. Finally, prominent peaks are observed at supra-THz frequencies in the integrated ohmic energy loss for MLG structures that are not optically thin. The magnitude of those peaks is found to scale with N for N ≥ 2, while their shape and position replicate the peak in a double-layer graphene (N = 2), which is explained by arguing that plasmon hybridization in such MLG structures is dominated by electromagnetic interaction between the nearest-neighbor graphene layers.
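
    For orientation, the intraband Drude form commonly used for the sheet conductivity of doped graphene in the THz range is written out below; the parametrization in terms of the Fermi level E_F and relaxation time τ is a standard assumption and is not taken verbatim from the paper.

    ```latex
    % Intraband (Drude) sheet conductivity of a doped graphene layer,
    % a standard THz-range approximation (E_F: Fermi level, \tau: relaxation time):
    \sigma(\omega) \simeq \frac{e^{2} E_F}{\pi \hbar^{2}}\,\frac{i}{\omega + i/\tau}
    ```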

  4. Hybrid micro-/nano-particle image velocimetry for 3D3C multi-scale velocity field measurement in microfluidics

    International Nuclear Information System (INIS)

    Min, Young Uk; Kim, Kyung Chun

    2011-01-01

    The conventional two-dimensional (2D) micro-particle image velocimetry (micro-PIV) technique has an inherent bias error, due to the depth of focus along the optical axis, when measuring the velocity field near the wall of a microfluidic device. However, far-field measurement of velocity vectors yields good accuracy for micro-scale flows. Nano-PIV using the evanescent wave of total internal reflection fluorescence microscopy can measure near-field velocity vectors within a distance of around 200 nm from the solid surface. A micro-/nano-hybrid PIV system is proposed to measure both near- and far-field velocity vectors simultaneously in microfluidics. A near-field particle image can be obtained by total internal reflection fluorescence microscopy using nanoparticles, while the far-field velocity vectors are measured by three-hole defocusing micro-particle tracking velocimetry (micro-PTV) using micro-particles. In order to distinguish near- and far-field particle images, lasers of different wavelengths are adopted and tested in a straight microchannel for acquiring the three-dimensional three-component velocity field. We found that the new technique gives superior accuracy for the velocity profile near the wall compared to that of conventional nano-PIV. This method has been successfully applied to precisely measure wall shear stress in 2D microscale Poiseuille flows

  5. Agile deployment and code coverage testing metrics of the boot software on-board Solar Orbiter's Energetic Particle Detector

    Science.gov (United States)

    Parra, Pablo; da Silva, Antonio; Polo, Óscar R.; Sánchez, Sebastián

    2018-02-01

    In this day and age, successful embedded critical software needs agile and continuous development and testing procedures. This paper presents the overall testing and code coverage metrics obtained during the unit testing procedure carried out to verify the correctness of the boot software that will run in the Instrument Control Unit (ICU) of the Energetic Particle Detector (EPD) on-board Solar Orbiter. The ICU boot software is a critical part of the project, so its verification should be addressed at an early development stage, since any test case missed in this process may affect the quality of the overall on-board software. According to the European Cooperation for Space Standardization (ECSS) standards adopted by ESA, testing this kind of critical software must cover 100% of the source code statements and decision paths. This leads to the complete testing of the fault tolerance and recovery mechanisms that have to resolve every possible memory corruption or communication error brought about by the space environment. The introduced procedure enables fault injection from the beginning of the development process and makes it possible to fulfill the demanding code coverage requirements on the boot software.
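
    As a generic illustration of coverage-driven fault injection (this is not the EPD flight code, which the abstract does not show; all names, values, and the toy checksum below are invented for the example), the sketch uses a mocked memory reader to drive execution down an error-recovery branch that nominal data would never reach:

    ```python
    import unittest
    from unittest import mock

    def checksum8(value):
        """Toy 8-bit XOR checksum over a 32-bit word (illustrative only)."""
        s = 0
        for _ in range(4):
            s ^= value & 0xFF
            value >>= 8
        return s

    def checked_load(read_word, addr, expected_sum):
        """Load a configuration word and reject it if the checksum mismatches.
        The corruption branch is the decision path that coverage must reach."""
        word = read_word(addr)
        if checksum8(word) != expected_sum:   # error path: corrupted memory
            return None                       # caller falls back to a safe default
        return word

    class BootFaultInjection(unittest.TestCase):
        def test_corrupted_word_is_rejected(self):
            # Fault injection: the mocked reader flips one bit of the stored word,
            # driving execution down the otherwise unreachable corruption branch.
            reader = mock.Mock(return_value=0xDEADBEEF ^ 0x1)
            self.assertIsNone(checked_load(reader, 0x1000, checksum8(0xDEADBEEF)))

        def test_clean_word_is_accepted(self):
            reader = mock.Mock(return_value=0xDEADBEEF)
            self.assertEqual(checked_load(reader, 0x1000, checksum8(0xDEADBEEF)),
                             0xDEADBEEF)

    if __name__ == "__main__":
        unittest.main()
    ```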

  6. Plasma simulation by macroscale, electromagnetic particle code and its application to current-drive by relativistic electron beam injection

    International Nuclear Information System (INIS)

    Tanaka, M.; Sato, T.

    1985-01-01

    A new implicit macroscale electromagnetic particle simulation code (MARC), which allows large scale lengths and time steps in multiple dimensions, is described. Finite-mass electrons and ions are used with a relativistic version of the equation of motion. The electromagnetic fields are solved using a complete set of Maxwell equations. For time integration of the field equations, a decentered (backward) finite differencing scheme is employed together with a predictor-corrector method for low noise and super-stability. It is shown both analytically and numerically that the present scheme efficiently suppresses high-frequency electrostatic and electromagnetic waves in a plasma, and that it accurately reproduces low-frequency waves such as ion acoustic waves, Alfven waves and fast magnetosonic waves. The present numerical scheme has currently been coded in three dimensions for application to a new tokamak current-drive method by means of relativistic electron beam injection. Some remarks on proper macroscale code application are presented in this paper
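
    A small sketch of why a decentered (backward) time difference suppresses high-frequency modes while leaving low-frequency waves nearly intact; the single-mode model and the pure backward-Euler step are simplifications for illustration, not the MARC predictor-corrector scheme itself.

    ```python
    import numpy as np

    # For a single mode obeying dU/dt = i*omega*U, backward Euler gives
    # U_{n+1} = U_n / (1 - i*omega*dt), so the amplitude shrinks by
    # 1/sqrt(1 + (omega*dt)**2) per step: strong damping for omega*dt >> 1,
    # negligible for omega*dt << 1.
    dt = 1.0
    for omega in (0.01, 0.1, 1.0, 10.0, 100.0):
        gain = 1.0 / np.sqrt(1.0 + (omega * dt) ** 2)
        print(f"omega*dt = {omega*dt:7.2f}   amplitude factor per step = {gain:.4f}")
    ```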

  7. Vectorization of a particle code used in the simulation of rarefied hypersonic flow

    Science.gov (United States)

    Baganoff, D.

    1990-01-01

    A limitation of the direct simulation Monte Carlo (DSMC) method is that it does not allow efficient use of vector architectures that predominate in current supercomputers. Consequently, the problems that can be handled are limited to those of one- and two-dimensional flows. This work focuses on a reformulation of the DSMC method with the objective of designing a procedure that is optimized to the vector architectures found on machines such as the Cray-2. In addition, it focuses on finding a better balance between algorithmic complexity and the total number of particles employed in a simulation so that the overall performance of a particle simulation scheme can be greatly improved. Simulations of the flow about a 3D blunt body are performed with 10^7 particles and 4×10^5 mesh cells. Good statistics are obtained with time averaging over 800 time steps using 4.5 h of Cray-2 single-processor CPU time.

  8. Introducing a distributed unstructured mesh into gyrokinetic particle-in-cell code, XGC

    Science.gov (United States)

    Yoon, Eisung; Shephard, Mark; Seol, E. Seegyoung; Kalyanaraman, Kaushik

    2017-10-01

    XGC has shown good scalability for large leadership supercomputers. The current production version uses a copy of the entire unstructured finite element mesh on every MPI rank. Although an obvious scalability issue if the mesh sizes are to be dramatically increased, the current approach is also not optimal with respect to data locality of particles and mesh information. To address these issues we have initiated the development of a distributed mesh PIC method. This approach directly addresses the base scalability issue with respect to mesh size and, through the use of a mesh entity centric view of the particle mesh relationship, provides opportunities to address data locality needs of many core and GPU supported heterogeneous systems. The parallel mesh PIC capabilities are being built on the Parallel Unstructured Mesh Infrastructure (PUMI). The presentation will first overview the form of mesh distribution used and indicate the structures and functions used to support the mesh, the particles and their interaction. Attention will then focus on the node-level optimizations being carried out to ensure performant operation of all PIC operations on the distributed mesh. Partnership for Edge Physics Simulation (EPSI) Grant No. DE-SC0008449 and Center for Extended Magnetohydrodynamic Modeling (CEMM) Grant No. DE-SC0006618.

  9. Design of tallying function for general purpose Monte Carlo particle transport code JMCT

    International Nuclear Information System (INIS)

    Shangguan Danhua; Li Gang; Deng Li; Zhang Baoyin

    2013-01-01

    A new postponed-accumulation algorithm was proposed. Based on the JCOGIN (J combinatorial geometry Monte Carlo transport infrastructure) framework and the postponed-accumulation algorithm, the tallying function of the general-purpose Monte Carlo neutron-photon transport code JMCT was improved markedly. JMCT achieves 28% higher tallying efficiency than MCNP 4C for a simple geometry model, and is faster than MCNP 4C by two orders of magnitude for a complicated repeated-structure model. The tallying capability of JMCT lays a firm foundation for reactor analysis and multi-step burnup calculations. (authors)

  10. Hybrid MPI-OpenMP Parallelism in the ONETEP Linear-Scaling Electronic Structure Code: Application to the Delamination of Cellulose Nanofibrils.

    Science.gov (United States)

    Wilkinson, Karl A; Hine, Nicholas D M; Skylaris, Chris-Kriton

    2014-11-11

    We present a hybrid MPI-OpenMP implementation of Linear-Scaling Density Functional Theory within the ONETEP code. We illustrate its performance on a range of high-performance computing (HPC) platforms comprising shared-memory nodes with fast interconnect. Our work has focused on applying OpenMP parallelism to the routines which dominate the computational load, attempting where possible to parallelize different loops from those already parallelized within MPI. This includes 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. While the underlying numerical methods are unchanged, these developments represent significant changes to the algorithms used within ONETEP to distribute the workload across CPU cores. The new hybrid code exhibits much-improved strong scaling relative to the MPI-only code and permits calculations with a much higher ratio of cores to atoms. These developments result in a significantly shorter time to solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmark calculations on an amyloid fibril trimer containing 41,907 atoms. We use the code to study the mechanism of delamination of cellulose nanofibrils undergoing sonication, a process which is controlled by a large number of interactions that collectively determine the structural properties of the fibrils. Many energy evaluations were needed for these simulations, and as these systems comprise up to 21,276 atoms this would not have been feasible without the developments described here.

  11. Wear Characteristics of Hybrid Composites Based on Za27 Alloy Reinforced With Silicon Carbide and Graphite Particles

    Directory of Open Access Journals (Sweden)

    S. Mitrović

    2014-06-01

    The paper presents the wear characteristics of a hybrid composite based on the zinc-aluminium ZA27 alloy, reinforced with silicon carbide and graphite particles. The tested sample contains 5 vol.% SiC and 3 vol.% Gr particles. A compocasting technique has been used to prepare the samples. The experiments were performed on a “block-on-disc” tribometer under dry sliding conditions. The wear volumes of the alloy and the composite were determined by varying the normal loads and sliding speeds. The paper describes the procedure for preparing the composite samples and the microstructure of the composite material and the base ZA27 alloy. The worn surface of the composite material was examined using scanning electron microscopy (SEM) and energy dispersive spectrometry (EDS). Conclusions were drawn based on the observed influence of the sliding speed, normal load and sliding distance on the tribological behaviour of the composite.

  12. MC21 v.6.0 - A continuous-energy Monte Carlo particle transport code with integrated reactor feedback capabilities

    International Nuclear Information System (INIS)

    Grieshemer, D.P.; Gill, D.F.; Nease, B.R.; Carpenter, D.C.; Joo, H.; Millman, D.L.; Sutton, T.M.; Stedry, M.H.; Dobreff, P.S.; Trumbull, T.H.; Caro, E.

    2013-01-01

    MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10^-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 are provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information. Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each tally.

  13. Radial diffusion of toroidally trapped particles induced by lower hybrid and fast waves

    International Nuclear Information System (INIS)

    Krlin, L.

    1992-10-01

    The interaction of an RF field with toroidally trapped particles (bananas) can cause their intrinsic stochastic diffusion both in configuration and in velocity space. In RF heating and/or current-drive regimes, the RF field can interact both with plasma particles and with thermonuclear alpha particles. The aim of this contribution is to give some analytical estimates of the induced radial diffusion of alphas and of ions. (author)

  14. Intelligent sizing of a series hybrid electric power-train system based on Chaos-enhanced accelerated particle swarm optimization

    International Nuclear Information System (INIS)

    Zhou, Quan; Zhang, Wei; Cash, Scott; Olatunbosun, Oluremi; Xu, Hongming; Lu, Guoxiang

    2017-01-01

    Highlights: • A novel algorithm for hybrid electric powertrain intelligent sizing is introduced and applied. • The proposed CAPSO algorithm is capable of finding the real optimal result with much higher reputation. • Logistic mapping is the most effective strategy for building CAPSO. • The CAPSO gave more reliable results and increased the efficiency by 1.71%. - Abstract: This paper first proposes a novel HEV sizing method using the Chaos-enhanced Accelerated Particle Swarm Optimization (CAPSO) algorithm and then demonstrates it by sizing a series hybrid electric powertrain, with investigations of chaotic mapping strategies to achieve global optimization. In this paper, the intelligent sizing of a series hybrid electric powertrain is formulated as an integer multi-objective optimization problem by modelling the powertrain system. The intelligent sizing mechanism based on APSO is then introduced, and four of the most effective chaotic mapping strategies are investigated to upgrade the standard APSO into CAPSO algorithms for intelligent sizing. The evaluation of the intelligent sizing systems based on the standard APSO and the CAPSOs is then performed. The Monte Carlo analysis and reputation evaluation indicate that CAPSO outperforms the standard APSO in finding the real optimal sizing result with much higher reputation, and that CAPSO with the logistic mapping strategy is the most effective algorithm for HEV powertrain component intelligent sizing. In addition, this paper also performs a sensitivity analysis and a Pareto analysis to help engineers customize the intelligent sizing system.
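
    To illustrate the logistic mapping strategy singled out above, the sketch below generates a chaotic sequence that can stand in for the uniform pseudo-random coefficients of a swarm update; the exact APSO update rule of the paper is not reproduced, and the seed value and schematic usage shown are assumptions.

    ```python
    import numpy as np

    def logistic_sequence(n, x0=0.7, mu=4.0):
        """Chaotic values in (0, 1) from the logistic map x_{k+1} = mu*x_k*(1-x_k).
        With mu = 4 the map is fully chaotic; x0 should avoid the special points
        0, 0.25, 0.5, 0.75 and 1, which fall onto fixed points."""
        seq = np.empty(n)
        x = x0
        for k in range(n):
            x = mu * x * (1.0 - x)
            seq[k] = x
        return seq

    # In a chaos-enhanced swarm update, these values would replace the uniform
    # pseudo-random coefficients, e.g. (schematically, not the paper's exact rule):
    #   r = logistic_sequence(iters)
    #   v = v + alpha * r[it] * (g - x)
    print(logistic_sequence(5))
    ```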

  15. Multi-objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization.

    Directory of Open Access Journals (Sweden)

    Maryam Mousavi

    Flexible manufacturing system (FMS) enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software.

  16. Multi-objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization.

    Science.gov (United States)

    Mousavi, Maryam; Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah

    2017-01-01

    Flexible manufacturing system (FMS) enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software.

  17. A novel hybrid approach based on Particle Swarm Optimization and Ant Colony Algorithm to forecast energy demand of Turkey

    International Nuclear Information System (INIS)

    Kıran, Mustafa Servet; Özceylan, Eren; Gündüz, Mesut; Paksoy, Turan

    2012-01-01

    Highlights: ► PSO and ACO algorithms are hybridized for forecasting the energy demand of Turkey. ► Linear and quadratic forms are developed to capture the fluctuations of the indicators. ► GDP, population, export and import have significant impacts on energy demand. ► The quadratic form provides a better fit than the linear form. ► The proposed approach gives lower estimation errors than ACO and PSO separately. - Abstract: This paper proposes a new hybrid method (HAP) for estimating the energy demand of Turkey using Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO). The proposed energy demand model (HAPE) is the first model that integrates these two meta-heuristic techniques. PSO, developed for solving continuous optimization problems, is a population-based stochastic technique, while ACO, which simulates the behavior of real ants between nest and food source, is generally used for discrete optimization. The hybrid method based on PSO and ACO is developed to estimate energy demand using gross domestic product (GDP), population, import and export. HAPE is developed in two forms, linear (HAPEL) and quadratic (HAPEQ). The future energy demand is estimated under different scenarios. In order to show the accuracy of the algorithm, a comparison is made with ACO and PSO developed for the same problem. According to the obtained results, the relative estimation errors of the HAPE model are the lowest, and the quadratic form (HAPEQ) provides better-fitting solutions owing to the fluctuations of the socio-economic indicators.
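
    For concreteness, generic linear and quadratic demand forms of the kind described above are written out below; the coefficient symbols w are illustrative, and the exact functional forms used in HAPEL and HAPEQ are not reproduced from the paper.

    ```latex
    % Generic linear and quadratic demand forms in the four indicators
    % X = (GDP, population, import, export); the weights w are what the
    % hybrid PSO-ACO search would tune against observed demand data.
    \begin{align}
    E_{\text{lin}}(X)  &= w_0 + \sum_{i=1}^{4} w_i X_i \\
    E_{\text{quad}}(X) &= w_0 + \sum_{i=1}^{4} w_i X_i
                          + \sum_{i=1}^{4} \sum_{j=i}^{4} w_{ij}\, X_i X_j
    \end{align}
    ```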

  18. Implementing displacement damage calculations for electrons and gamma rays in the Particle and Heavy-Ion Transport code System

    Science.gov (United States)

    Iwamoto, Yosuke

    2018-03-01

    In this study, the Monte Carlo displacement damage calculation method in the Particle and Heavy-Ion Transport code System (PHITS) was improved to calculate displacements per atom (DPA) values due to irradiation by electrons (or positrons) and gamma rays. For the damage due to electrons and gamma rays, PHITS simulates electromagnetic cascades using the Electron Gamma Shower version 5 (EGS5) algorithm and calculates DPA values using the recoil energies and the McKinley-Feshbach cross section. A comparison of DPA values calculated by PHITS and the Monte Carlo assisted Classical Method (MCCM) reveals that they were in good agreement for gamma-ray irradiations of silicon and iron at energies that were less than 10 MeV. Above 10 MeV, PHITS can calculate DPA values not only for electrons but also for charged particles produced by photonuclear reactions. In DPA depth distributions under electron and gamma-ray irradiations, build-up effects can be observed near the target's surface. For irradiation of 90-cm-thick carbon by protons with energies of more than 30 GeV, the ratio of the secondary electron DPA values to the total DPA values is more than 10% and increases with an increase in incident energy. In summary, PHITS can calculate DPA values for all particles and materials over a wide energy range between 1 keV and 1 TeV for electrons, gamma rays, and charged particles and between 10-5 eV and 1 TeV for neutrons.
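
    For context, DPA in transport calculations of this kind is commonly accumulated from an NRT-type displacement estimate applied to each recoil. The abstract does not spell out the relation, so the standard form below is given only as a reference point, not as the exact model implemented in PHITS:

        \nu_{NRT}(T_{dam}) = \frac{0.8\,T_{dam}}{2E_d}, \qquad \mathrm{DPA} = \frac{1}{N_{atoms}} \sum_{recoils} \nu_{NRT}(T_{dam})

    where T_dam is the damage energy of a recoil and E_d is the threshold displacement energy of the material.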

  19. The dynamics of low-β plasma clouds as simulated by a three-dimensional, electromagnetic particle code

    International Nuclear Information System (INIS)

    Neubert, T.; Miller, R.H.; Buneman, O.; Nishikawa, K.I.

    1992-01-01

    The dynamics of low-β plasma clouds moving perpendicular to an ambient magnetic field in vacuum and in a background plasma is simulated by means of a three-dimensional, electromagnetic, and relativistic particle simulation code. The simulations show the formation of the space charge sheaths at the sides of the cloud with the associated polarization electric field which facilitate the cross-field propagation, as well as the sheaths at the front and rear end of the cloud caused by the larger ion Larmor radius, which allows ions to move ahead and lag behind the electrons as they gyrate. Results on the cloud dynamics and electromagnetic radiation include the following: (1) In a background plasma, electron and ion sheaths expand along the magnetic field at the same rate, whereas in vacuum the electron sheath expands much faster than the ion sheath. (2) Sheath electrons are accelerated up to relativistic energies. This result indicates that artificial plasma clouds released in the ionosphere or magnetosphere may generate optical emissions (aurora) as energetic sheath electrons scatter in the upper atmosphere. (3) The expansion of the electron sheaths is analogous to the ejection of high-intensity electron beams from spacecraft. (4) Second-order and higher-order sheaths are formed which extend out into the ambient plasma. (5) Formation of the sheaths and the polarization field reduces the forward momentum of the cloud. (6) The coherent component of the particle gyromotion is damped in time as the particles establish a forward directed drift velocity. (7) The coherent particle gyrations generate electromagnetic radiation

  20. Fabrication of polyaniline coated iron oxide hybrid particles and their dual stimuli-response under electric and magnetic fields

    Directory of Open Access Journals (Sweden)

    B. Sim

    2015-08-01

    Full Text Available Polyaniline (PANI-coated iron oxide (Fe3O4 sphere particles were fabricated and applied to a dual stimuliresponsive material under electric and magnetic fields, respectively. Sphere Fe3O4 particles were synthesized by a solvothermal process and protonated after acidification. The aniline monomer tended to surround the surface of the Fe3O4 core due to the electrostatic and hydrogen bond interactions. A core-shell structured product was finally formed by the oxidation polymerization of PANI on the surface of Fe3O4. The formation of Fe3O4@PANI particles was examined by scanning electron microscope and transmission electron microscope. The bond between Fe3O4 and PANI was confirmed by Fourier transform-infrared spectroscope and magnetic properties were analyzed by vibration sample magnetometer. A hybrid of a conducting and magnetic particle-based suspension displayed dual stimuli-response under electric and magnetic fields. The suspension exhibited typical electrorheological and magnetorheological behaviors of the shear stress, shear viscosity and dynamic yield stress, as determined using a rotational rheometer. Sedimentation stability was also compared between Fe3O4 and Fe3O4@PANI suspension.

  1. On the numerical dispersion of electromagnetic particle-in-cell code: Finite grid instability

    International Nuclear Information System (INIS)

    Meyers, M.D.; Huang, C.-K.; Zeng, Y.; Yi, S.A.; Albright, B.J.

    2015-01-01

    The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the Electromagnetic PIC model. We rigorously derive the faithful 3-D numerical dispersion relation of the PIC model, for a simple, direct current deposition scheme, which does not conserve electric charge exactly. We then specialize to the Yee FDTD scheme. In particular, we clarify the presence of alias modes in an eigenmode analysis of the PIC model, which combines both discrete and continuous variables. The manner in which the PIC model updates and samples the fields and distribution function, together with the temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme, is explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1-D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction, which is then verified by simulation. We demonstrate that our analysis is readily extendable to charge conserving models

  2. On the numerical dispersion of electromagnetic particle-in-cell code: Finite grid instability

    Science.gov (United States)

    Meyers, M. D.; Huang, C.-K.; Zeng, Y.; Yi, S. A.; Albright, B. J.

    2015-09-01

    The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the Electromagnetic PIC model. We rigorously derive the faithful 3-D numerical dispersion relation of the PIC model, for a simple, direct current deposition scheme, which does not conserve electric charge exactly. We then specialize to the Yee FDTD scheme. In particular, we clarify the presence of alias modes in an eigenmode analysis of the PIC model, which combines both discrete and continuous variables. The manner in which the PIC model updates and samples the fields and distribution function, together with the temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme, is explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1-D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction, which is then verified by simulation. We demonstrate that our analysis is readily extendable to charge conserving models.

  3. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    Science.gov (United States)

    Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya

    2011-02-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu [Phys. Rev. D 78, 123524 (2008), 10.1103/PhysRevD.78.123524] and Schmidt [Phys. Rev. D 79, 083518 (2009), 10.1103/PhysRevD.79.083518], and extend the resolution up to k ∼ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  4. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    International Nuclear Information System (INIS)

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-01-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al.[Phys. Rev. D 78, 123524 (2008)] and Schmidt et al.[Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k∼20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  5. A massively parallel method of characteristic neutral particle transport code for GPUs

    International Nuclear Information System (INIS)

    Boyd, W. R.; Smith, K.; Forget, B.

    2013-01-01

    Over the past 20 years, parallel computing has enabled computers to grow ever larger and more powerful while scientific applications have advanced in sophistication and resolution. This trend is being challenged, however, as the power consumption for conventional parallel computing architectures has risen to unsustainable levels and memory limitations have come to dominate compute performance. Heterogeneous computing platforms, such as Graphics Processing Units (GPUs), are an increasingly popular paradigm for solving these issues. This paper explores the applicability of GPUs for deterministic neutron transport. A 2D method of characteristics (MOC) code - OpenMOC - has been developed with solvers for both shared memory multi-core platforms as well as GPUs. The multi-threading and memory locality methodologies for the GPU solver are presented. Performance results for the 2D C5G7 benchmark demonstrate 25-35× speedup for MOC on the GPU. The lessons learned from this case study will provide the basis for further exploration of MOC on GPUs as well as design decisions for hardware vendors exploring technologies for the next generation of machines for scientific computing. (authors)
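
    For reference, the kernel that a GPU MOC solver evaluates many millions of times per transport sweep is the flat-source attenuation of the angular flux along one characteristic segment; the generic form below is standard MOC and is shown only as an illustration, not as code taken from OpenMOC:

        \psi_{out} = \psi_{in}\, e^{-\Sigma_t \ell} + \frac{Q}{\Sigma_t}\left(1 - e^{-\Sigma_t \ell}\right)

    where ℓ is the segment length within a flat-source region, Σ_t is the region's total cross section and Q its source.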

  6. A fully-implicit Particle-In-Cell Monte Carlo Collision code for the simulation of inductively coupled plasmas

    Science.gov (United States)

    Mattei, S.; Nishida, K.; Onai, M.; Lettry, J.; Tran, M. Q.; Hatayama, A.

    2017-12-01

    We present a fully-implicit electromagnetic Particle-In-Cell Monte Carlo collision code, called NINJA, written for the simulation of inductively coupled plasmas. NINJA employs a kinetic enslaved Jacobian-Free Newton Krylov method to solve self-consistently the interaction between the electromagnetic field generated by the radio-frequency coil and the plasma response. The simulated plasma includes a kinetic description of charged and neutral species as well as the collision processes between them. The algorithm allows simulations with cell sizes much larger than the Debye length and time steps in excess of the Courant-Friedrichs-Lewy condition whilst preserving the conservation of the total energy. The code is applied to the simulation of the plasma discharge of the Linac4 H- ion source at CERN. Simulation results of plasma density, temperature and EEDF are discussed and compared with optical emission spectroscopy measurements. A systematic study of the energy conservation as a function of the numerical parameters is presented.
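
    The Jacobian-free Newton-Krylov approach mentioned here rests on the fact that the Krylov solver only needs Jacobian-vector products, which can be approximated by a finite difference of the nonlinear residual F so that the Jacobian is never assembled. A generic statement of that approximation (not specific to NINJA) is

        J(x)\,v \approx \frac{F(x + \varepsilon v) - F(x)}{\varepsilon},

    with the perturbation ε chosen small relative to the scale of x.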

  7. Development of high performance particle in cell code for the exascale age

    Science.gov (United States)

    Lapenta, Giovanni; Amaya, Jorge; Gonzalez, Diego; Deep-Est H2020 Consortium Collaboration

    2017-10-01

    Magnetized plasmas are most effectively described by magneto-hydrodynamics, MHD, a fluid theory based on describing some fields defined in space: electromagnetic fields, density, velocity and temperature of the plasma. However, microphysics processes need kinetic theory, where statistical distributions of particles are governed by the Boltzmann equation. While fluid models are based on the ordinary space and time, kinetic models require a six dimensional space, called phase space, besides time. The two methods are not separated but rather interact to determine the system evolution. Arriving at a single self-consistent model is the goal of our research. We present a new approach developed with the goal of extending the reach of kinetic models to the fluid scales. Kinetic models are a higher order description and all fluid effects are included in them. However, the cost in terms of computing power is much higher and it has been so far prohibitively expensive to treat space weather events fully kinetically. We have now designed a new method capable of reducing that cost by several orders of magnitude making it possible for kinetic models to study macroscopic systems. H2020 Deep-EST consortium (European Commission).

  8. Evolutionary Hybrid Particle Swarm Optimization Algorithm for Solving NP-Hard No-Wait Flow Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Laxmi A. Bewoor

    2017-10-01

    Full Text Available The no-wait flow shop is a flow shop in which the scheduling of jobs is continuous and simultaneous through all machines without waiting for any consecutive machines. The scheduling of a no-wait flow shop requires finding an appropriate sequence of jobs for scheduling, which in turn reduces total processing time. The classical brute force method for finding the probabilities of scheduling for improving the utilization of resources may become trapped in local optima, and this problem can hence be observed as a typical NP-hard combinatorial optimization problem that requires finding a near-optimal solution with heuristic and metaheuristic techniques. This paper proposes an effective hybrid Particle Swarm Optimization (PSO) metaheuristic algorithm for solving no-wait flow shop scheduling problems with the objective of minimizing the total flow time of jobs. The Proposed Hybrid Particle Swarm Optimization (PHPSO) algorithm presents a solution by the random key representation rule for converting the continuous position information values of particles to a discrete job permutation. The proposed algorithm initializes the population efficiently with the Nawaz-Enscore-Ham (NEH) heuristic technique and uses an evolutionary search guided by the mechanism of PSO, as well as simulated annealing based on a local neighborhood search, to avoid getting stuck in local optima and to provide an appropriate balance of global exploration and local exploitation. Extensive computational experiments are carried out based on Taillard’s benchmark suite. Computational results and comparisons with existing metaheuristics show that the PHPSO algorithm outperforms the existing methods in terms of quality search and robustness for the problem considered. The improvement in solution quality is confirmed by statistical tests of significance.
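
    The random key representation rule mentioned above maps each particle's continuous position vector to a job permutation, typically by ranking its components: the job whose component is smallest is sequenced first, and so on. A minimal sketch of that decoding step (names are illustrative, not from the paper):

        import numpy as np

        def decode_random_key(position):
            # smallest position component is sequenced first, and so on;
            # e.g. position = [0.7, 0.1, 0.5] -> job sequence [1, 2, 0]
            return list(np.argsort(position))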

  9. MCNP-DSP, Monte Carlo Neutron-Particle Transport Code with Digital Signal Processing

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: MCNP-DSP is recommended only for experienced MCNP users working with subcritical measurements. It is a modification of the Los Alamos National Laboratory's Monte Carlo code MCNP4a that is used to simulate a variety of subcritical measurements. The DSP version was developed to simulate frequency analysis measurements, correlation (Rossi-α) measurements, pulsed neutron measurements, Feynman variance measurements, and multiplicity measurements. CCC-700/MCNP4C is recommended for general purpose calculations. 2 - Methods: MCNP-DSP performs calculations very similarly to MCNP and uses the same generalized geometry capabilities of MCNP. MCNP-DSP can only be used with the continuous-energy cross-section data. A variety of source and detector options are available. However, unlike standard MCNP, the source and detector options are limited to those described in the manual because these options are specified in the MCNP-DSP extra data file. MCNP-DSP is used to obtain the time-dependent response of detectors that are modeled in the simulation geometry. The detectors represent actual detectors used in measurements. These time-dependent detector responses are used to compute a variety of quantities such as frequency analysis signatures, correlation signatures, multiplicity signatures, etc., between detectors or sources and detectors. Energy ranges are 0-60 MeV for neutrons (data generally only available up to 20 MeV) and 1 keV - 1 GeV for photons and electrons. 3 - Restrictions on the complexity of the problem: None noted

  10. Implementation of Japanese male and female tomographic phantoms to multi-particle Monte Carlo code for ionizing radiation dosimetry

    International Nuclear Information System (INIS)

    Lee, Choonsik; Nagaoka, Tomoaki; Lee, Jai-Ki

    2006-01-01

    Japanese male and female tomographic phantoms, which have been developed for radio-frequency electromagnetic-field dosimetry, were implemented into a multi-particle Monte Carlo transport code to evaluate realistic dose distributions in the human body exposed to radiation fields. The Japanese tomographic phantoms, which were developed from whole-body magnetic resonance images of average adult Japanese males and females, were processed as follows to be implemented into the general-purpose multi-particle Monte Carlo code MCNPX2.5. The original array sizes of the Japanese male and female phantoms, 320 x 160 x 866 voxels and 320 x 160 x 804 voxels, respectively, were reduced to 320 x 160 x 433 voxels and 320 x 160 x 402 voxels due to the limitation of memory use in MCNPX2.5. The 3D voxel arrays of the phantoms were processed by using the built-in repeated structure algorithm, where the human anatomy is described by a repeated lattice of tiny cubes containing the information of material composition and organ index number. The original phantom data were converted into an ASCII file, which can be directly ported into the lattice card of the MCNPX2.5 input deck by using an in-house code. A total of 30 material compositions obtained from International Commission on Radiation Units and Measurements (ICRU) report 46 were assigned to 54 and 55 organs and tissues in the male and female phantoms, respectively, and imported into the material card of MCNPX2.5 along with the corresponding cross section data. Illustrative calculations of absorbed doses for 26 internal organs and of effective dose were performed for idealized broad parallel photon and neutron beams in anterior-posterior irradiation geometry, which is typical for workers at nuclear power plants. The results were compared with data from other Japanese and Caucasian tomographic phantoms and from International Commission on Radiological Protection (ICRP) report 74. Further investigation of the differences in organ dose and effective dose among tomographic phantoms is needed.

  11. Hybrid method based on embedded coupled simulation of vortex particles in grid based solution

    Science.gov (United States)

    Kornev, Nikolai

    2017-09-01

    The paper presents a novel hybrid approach developed to improve the resolution of concentrated vortices in computational fluid mechanics. The method is based on a combination of a grid-based method and the grid-free computational vortex method (CVM). The large-scale flow structures are simulated on the grid, whereas the concentrated structures are modeled using CVM. Due to this combination the advantages of both methods are strengthened whereas the disadvantages are diminished. The procedure for separating small concentrated vortices from the large-scale ones is based on the LES filtering idea. The flow dynamics is governed by two coupled transport equations taking the two-way interaction between large and fine structures into account. The fine structures are mapped back to the grid if their size grows due to diffusion. Algorithmic aspects of the hybrid method are discussed. Advantages of the new approach are illustrated on some simple two-dimensional canonical flows containing concentrated vortices.

  12. Membrane flux dynamics in the submerged ultrafiltration hybrid treatment process during particle and natural organic matter removal

    Institute of Scientific and Technical Information of China (English)

    Wei Zhang; Xiaojian Zhang; Yonghong Li; Jun Wang; Chao Chen

    2011-01-01

    Particles and natural organic matter (NOM) are two major concerns in surface water, which greatly influence the membrane filtration process. The objective of this article is to investigate the effect of particles, NOM and their interaction on the submerged ultrafiltration (UF) membrane flux under conditions of solo UF and of coagulation and PAC adsorption as pretreatment of UF. Particles, NOM and their mixture were spiked into tap water to simulate raw water. An exponential relationship, Jp/Jp0 = a·exp{-k[t-(n-1)T]}, was developed to quantify the normalized membrane flux dynamics during the filtration period and fitted the results well. In this equation, the coefficient a is determined by the value of Jp/Jp0 at the beginning of a filtration cycle, reflecting the flux recovery after backwashing, that is, the irreversible fouling. The coefficient k reflects the trend of the flux dynamics. The integrated total permeability (ΣJp) in one filtration period could be used as a quantified indicator for comparison of different hybrid membrane processes or different scenarios. According to the results, there was an additive effect on membrane flux by NOM and particles during the solo UF process. This additive fouling could be alleviated by coagulation pretreatment, since particles helped the formation of flocs with the coagulant, which further delayed the decrease of membrane flux and benefited flux recovery by backwashing. The addition of PAC also increased membrane flux by adsorbing NOM and improved flux recovery through backwashing.

  13. Mesoscopic dispersion of colloidal agglomerate in a complex fluid modelled by a hybrid fluid-particle model.

    Science.gov (United States)

    Dzwinel, Witold; Yuen, David A

    2002-03-15

    The dispersion of the agglomerating fluid process involving colloids has been investigated at the mesoscale level by a discrete particle approach--the hybrid fluid-particle model (FPM). Dynamical processes occurring in the granulation of colloidal agglomerate in solvents are severely influenced by coupling between the dispersed microstructures and the global flow. On the mesoscale this coupling is further exacerbated by thermal fluctuations, particle-particle interactions between colloidal beds, and hydrodynamic interactions between colloidal beds and the solvent. Using the method of FPM, we have tackled the problem of dispersion of a colloidal slab being accelerated in a long box filled with a fluid. Our results show that the average size of the agglomerated fragments decreases with increasing shearing rate γ, according to the power law A·γ^k, where k is around 2. For larger values of γ, the mean size of the agglomerate S(avg) increases slowly with γ from the collisions between the aggregates and the longitudinal stretching induced by the flow. The proportionality constant A increases exponentially with the scaling factor of the attractive forces acting between the colloidal particles. The value of A shows a rather weak dependence on the solvent viscosity. But A increases proportionally with the scaling factor of the colloid-solvent dissipative interactions. A similar type of dependence can be found for the mixing induced by Rayleigh-Taylor instabilities involving the colloidal agglomerate and the solvent. Three types of fragmentation structures can be identified, which are called rupture, erosion, and shatter. They generate very complex structures with multiresolution character. The aggregation of colloidal beds is formed by the collisions between aggregates, which are influenced by the flow or by the cohesive forces for small dispersion energies. These results may be applied to enhance our understanding concerning the nonlinear complex

  14. In-flight particle measurement of glass raw materials in hybrid heating of twelve-phase AC arc with oxygen burner

    International Nuclear Information System (INIS)

    Liu, Y; Tanaka, M; Ikeba, T; Choi, S; Watanabe, T

    2012-01-01

    The high temperature provided by a 12-phase AC arc plasma is beneficial for finishing the vitrification reaction in milliseconds. Another heating method, called “hybrid plasma”, which combines the multi-phase AC arc with an oxygen burner, is expected to improve glass quality and increase productivity with minimum energy consumption. In this study, recent work on the development of in-flight particle measurement in the hybrid plasma system is presented. Two-colour pyrometry offers considerable advantages for measuring particle temperatures in flight. A high-speed camera equipped with a band-pass filter system was applied to measure the in-flight temperatures of glass particles. The intensity recorded by the camera was calibrated using a tungsten halogen lamp. This technique also allows evaluating the fluctuation of the average particle temperature within milliseconds in the plasma region.

  15. Hybrid Particle Swarm Optimization based Day-Ahead Self-Scheduling for Thermal Generator in Competitive Electricity Market

    DEFF Research Database (Denmark)

    Pindoriya, Naran M.; Singh, S.N.; Østergaard, Jacob

    2009-01-01

    This paper presents a hybrid particle swarm optimization algorithm (HPSO) to solve the day-ahead self-scheduling problem for a thermal power producer in a competitive electricity market. The objective functions considered to model the self-scheduling problem are 1) to maximize the profit from selling energy in the day-ahead energy market subject to operational constraints and 2) at the same time, to minimize the risk due to uncertainty in the price forecast. Therefore, it is a conflicting bi-objective optimization problem which has both binary and continuous optimization variables, formulated as constrained mixed-integer nonlinear programming. To demonstrate the effectiveness of the proposed method for self-scheduling in a day-ahead energy market, the locational marginal price (LMP) forecast uncertainty in the PJM electricity market is considered. An adaptive wavelet neural network (AWNN) is used to forecast the day-ahead LMP.

  16. Evident anomalous inward particle pinch in full non-inductive plasmas driven by lower hybrid waves on Tore Supra

    International Nuclear Information System (INIS)

    Hoang, G.T.; Bourdelle, C.; Pegourie, B.; Artaud, J.F.; Bucalossi, J.; Clairet, F.; Fenzi-Bonizec, C.; Garbet, X.; Gil, C.; Guirlet, R.; Imbeaux, F.; Lasalle, J.; Loarer, T.; Lowry, C.; Schunke, B.; Travere, J.M.; Tsitrone, E.

    2003-01-01

    These slides present some characteristics of the peaked density profile observed in Tore Supra. It appears that the density profile remains peaked for more than 3 minutes in fully LHCD (lower hybrid current drive) discharges. The absence of a toroidal electric field, and the fact that the Ware pinch has vanished across the entire plasma, show that the toroidal electric field and the Ware pinch are not the cause of the peaked profile. It is shown that the peaked profile is linked to transport properties and can only be explained by a particle pinch velocity 2 orders of magnitude above the neoclassical pinch. It is also shown that the radial profile is in agreement with Isichenko's formula. (A.C.)

  17. Silicon PIN diode hybrid arrays for charged particle detection: Building blocks for vertex detectors at the SSC

    International Nuclear Information System (INIS)

    Kramer, G.; Gaalema, S.; Shapiro, S.L.; Dunwoodie, W.M.; Arens, J.F.; Jernigan, J.G.

    1989-05-01

    Two-dimensional arrays of solid state detectors have long been used in visible and infrared systems. Hybrid arrays with separately optimized detector and readout substrates have been extensively developed for infrared sensors. The characteristics and use of these infrared readout chips with silicon PIN diode arrays produced by MICRON SEMICONDUCTOR for detecting high-energy particles are reported. Some of these arrays have been produced in formats as large as 512 × 512 pixels; others have been radiation hardened to total dose levels beyond 1 Mrad. Data generation rates of 380 megasamples/second have been achieved. Analog and digital signal transmission and processing techniques have also been developed to accept and reduce these high data rates. 9 refs., 15 figs., 2 tabs

  18. Experimental investigation of the dynamics of a hybrid morphing wing: time resolved particle image velocimetry and force measures

    Science.gov (United States)

    Jodin, Gurvan; Scheller, Johannes; Rouchon, Jean-François; Braza, Marianna; MIT Collaboration; IMFT Collaboration; LAPLACE Collaboration

    2016-11-01

    A quantitative characterization of the effects obtained by high frequency-low amplitude trailing edge actuation is performed. Particle image velocimetry, as well as pressure and aerodynamic force measurements, are carried out on an airfoil model. This hybrid morphing wing model is equipped with both trailing edge piezoelectric-actuators and camber control shape memory alloy actuators. It will be shown that this actuation allows for an effective manipulation of the wake turbulent structures. Frequency domain analysis and proper orthogonal decomposition show that proper actuating reduces the energy dissipation by favoring more coherent vortical structures. This modification in the airflow dynamics eventually allows for a tapering of the wake thickness compared to the baseline configuration. Hence, drag reductions relative to the non-actuated trailing edge configuration are observed. Massachusetts Institute of Technology.

  19. Particle simulation on the propagation and plasma heating of the lower hybrid wave in the nonuniform system

    International Nuclear Information System (INIS)

    Abe, Hirotada; Kajitani, Hiroyuki; Itatani, Ryohei.

    1977-07-01

    A particle simulation model which treats the wave excitation and propagation in a nonuniform density driven by an external source is developed and applied to the study of lower hybrid heating in a fusion device. As the linear theory predicts, the cold lower hybrid wave is observed to increase its perpendicular wave number as it propagates to the higher density region and to damp away near the turning point. When the wave amplitude is large, or when the wave energy is about half of the initial kinetic energy at the plasma surface, the following features are observed for the increase of the ion and electron kinetic energies. Ion perpendicular energy distributions are observed to be approximated by two Maxwellian distributions or to have high-energy tail components whose parallel velocities satisfy the resonance condition v_∥ = (ω - lΩ_i)/k_∥, where ω and k_∥ are the frequency and the parallel wave number of the external source, l is an integer, and Ω_i is the ion cyclotron frequency. A strong increase of the parallel kinetic energy of the electrons is observed near the plasma surface. These are mainly due to the trapped electrons and the collisional heating. (auth.)

  20. Identification of transformer fault based on dissolved gas analysis using hybrid support vector machine-modified evolutionary particle swarm optimisation

    Science.gov (United States)

    2018-01-01

    Early detection of power transformer fault is important because it can reduce the maintenance cost of the transformer and it can ensure continuous electricity supply in power systems. Dissolved Gas Analysis (DGA) technique is commonly used to identify oil-filled power transformer fault type but utilisation of artificial intelligence method with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM and previous reported works. Data reduction was also applied using stepwise regression prior to the training process of SVM to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest correct identification percentage of faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions to identify the transformer fault type based on DGA data on site. PMID:29370230

  1. Identification of transformer fault based on dissolved gas analysis using hybrid support vector machine-modified evolutionary particle swarm optimisation.

    Directory of Open Access Journals (Sweden)

    Hazlee Azil Illias

    Full Text Available Early detection of power transformer fault is important because it can reduce the maintenance cost of the transformer and it can ensure continuous electricity supply in power systems. Dissolved Gas Analysis (DGA) technique is commonly used to identify oil-filled power transformer fault type but utilisation of artificial intelligence method with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM and previous reported works. Data reduction was also applied using stepwise regression prior to the training process of SVM to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest correct identification percentage of faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions to identify the transformer fault type based on DGA data on site.

  2. Power control based on particle swarm optimization of grid-connected inverter for hybrid renewable energy system

    International Nuclear Information System (INIS)

    García-Triviño, Pablo; Gil-Mena, Antonio José; Llorens-Iborra, Francisco; García-Vázquez, Carlos Andrés; Fernández-Ramírez, Luis M.; Jurado, Francisco

    2015-01-01

    Highlights: • Three PSO-based PI controllers for a grid-connected inverter were presented. • Two online PSO-based PI controllers were compared with an offline PSO-tuned PI. • The HRES and the inverter were evaluated under power changes and grid voltage sags. • Online ITAE-based PSO reduced ITAE (current THD) by 15.24% (5.32%) versus offline one. - Abstract: This paper is focused on the study of particle swarm optimization (PSO)-based PI controllers for the power control of a grid-connected inverter supplied from a hybrid renewable energy system. It is composed of two renewable energy sources (wind turbine and photovoltaic – PV – solar panels) and two energy storage systems (battery and hydrogen system, integrated by fuel cell and electrolyzer). Three PSO-based PI controllers are implemented: (1) conventional PI controller with offline tuning by PSO algorithm based on the integral time absolute error (ITAE) index; (2) PI controllers with online self-tuning by PSO algorithm based on the error; and (3) PI controllers with online self-tuning by PSO algorithm based on the ITAE index. To evaluate and compare the three controllers, the hybrid renewable energy system and the grid-connected inverter are simulated under changes in the active and reactive power values, as well as under a grid voltage sag. The results show that the online PSO-based PI controllers that optimize the ITAE index achieve the best response.
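
    The ITAE criterion used for both the offline tuning and the online self-tuning weights each error by the time at which it occurs, so long-lasting deviations are penalized most heavily; in discrete simulation it is simply

        \mathrm{ITAE} = \int_0^{T} t\,|e(t)|\,dt \approx \sum_k t_k\,|e_k|\,\Delta t,

    where e is the control error and Δt the simulation time step.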

  3. Identification of transformer fault based on dissolved gas analysis using hybrid support vector machine-modified evolutionary particle swarm optimisation.

    Science.gov (United States)

    Illias, Hazlee Azil; Zhao Liang, Wee

    2018-01-01

    Early detection of power transformer fault is important because it can reduce the maintenance cost of the transformer and it can ensure continuous electricity supply in power systems. Dissolved Gas Analysis (DGA) technique is commonly used to identify oil-filled power transformer fault type but utilisation of artificial intelligence method with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM and previous reported works. Data reduction was also applied using stepwise regression prior to the training process of SVM to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest correct identification percentage of faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions to identify the transformer fault type based on DGA data on site.

  4. Extension of hybrid micro-depletion model for decay heat calculation in the DYN3D code

    International Nuclear Information System (INIS)

    Bilodid, Yurii; Fridman, Emil; Shwageraus, E.

    2017-01-01

    This work extends the hybrid micro-depletion methodology, recently implemented in DYN3D, to the decay heat calculation by accounting explicitly for the heat contribution from the decay of each nuclide in the fuel.

  5. Extension of hybrid micro-depletion model for decay heat calculation in the DYN3D code

    Energy Technology Data Exchange (ETDEWEB)

    Bilodid, Yurii; Fridman, Emil [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Reactor Safety; Kotlyar, D. [Georgia Institute of Technology, Atlanta, GA (United States); Shwageraus, E. [Cambridge Univ. (United Kingdom)

    2017-06-01

    This work extends the hybrid micro-depletion methodology, recently implemented in DYN3D, to the decay heat calculation by accounting explicitly for the heat contribution from the decay of each nuclide in the fuel.

  6. Modified hybrid subcarrier/amplitude/ phase/polarization LDPC-coded modulation for 400 Gb/s optical transmission and beyond.

    Science.gov (United States)

    Batshon, Hussam G; Djordjevic, Ivan; Xu, Lei; Wang, Ting

    2010-06-21

    In this paper, we present a modified coded hybrid subcarrier/amplitude/phase/polarization (H-SAPP) modulation scheme as a technique capable of achieving beyond 400 Gb/s single-channel transmission over optical channels. The modified H-SAPP scheme profits from the available resources in addition to geometry to increase the bandwidth efficiency of the transmission system, and so increases the aggregate rate of the system. In this report we present the modified H-SAPP scheme and focus on an example that carries 11 bits/symbol and can achieve 440 Gb/s transmission using components of 50 Giga Symbols/s (GS/s).
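
    As a consistency check on the quoted figures (the code rate below is an inference, not stated in the abstract): 50 GS/s at 11 bits per symbol gives a raw line rate of 550 Gb/s, so the stated 440 Gb/s corresponds to an aggregate code rate of roughly 0.8:

        50\ \mathrm{GS/s} \times 11\ \mathrm{bits/symbol} = 550\ \mathrm{Gb/s}, \qquad 550\ \mathrm{Gb/s} \times 0.8 = 440\ \mathrm{Gb/s}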

  7. Krylov solvers preconditioned with the low-order red-black algorithm for the PN hybrid FEM for the instant code

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yaqi; Rabiti, Cristian; Palmiotti, Giuseppe, E-mail: yaqi.wang@inl.gov, E-mail: cristian.rabiti@inl.gov, E-mail: giuseppe.palmiotti@inl.gov [Idaho National Laboratory, Idaho Falls, ID (United States)

    2011-07-01

    This paper proposes a new set of Krylov solvers, CG and GMRes, as an alternative to the Red-Black (RB) algorithm for solving the steady-state one-speed neutron transport equation discretized with PN in angle and hybrid FEM (Finite Element Method) in space. A preconditioner based on the low-order RB iteration is designed to improve their convergence. These Krylov solvers can greatly reduce the cost of pre-assembling the response matrices. Numerical results with the INSTANT code are presented in order to show that they can be a good supplement for solving the PN-HFEM system. (author)

  8. Krylov solvers preconditioned with the low-order red-black algorithm for the PN hybrid FEM for the instant code

    International Nuclear Information System (INIS)

    Wang, Yaqi; Rabiti, Cristian; Palmiotti, Giuseppe

    2011-01-01

    This paper proposes a new set of Krylov solvers, CG and GMRes, as an alternative to the Red-Black (RB) algorithm for solving the steady-state one-speed neutron transport equation discretized with PN in angle and hybrid FEM (Finite Element Method) in space. A preconditioner based on the low-order RB iteration is designed to improve their convergence. These Krylov solvers can greatly reduce the cost of pre-assembling the response matrices. Numerical results with the INSTANT code are presented in order to show that they can be a good supplement for solving the PN-HFEM system. (author)

  9. Determination of the ruminant origin of bone particles using fluorescence in situ hybridization (FISH).

    Science.gov (United States)

    Lecrenier, M C; Ledoux, Q; Berben, G; Fumière, O; Saegerman, C; Baeten, V; Veys, P

    2014-07-17

    Molecular biology techniques such as PCR constitute powerful tools for the determination of the taxonomic origin of bones. DNA degradation and contamination by exogenous DNA, however, jeopardise bone identification. Despite the vast array of techniques used to decontaminate bone fragments, the isolation and determination of bone DNA content are still problematic. Within the framework of the eradication of transmissible spongiform encephalopathies (including BSE, commonly known as "mad cow disease"), a fluorescence in situ hybridization (FISH) protocol was developed. Results from the described study showed that this method can be applied directly to bones without a demineralisation step and that it allows the identification of bovine and ruminant bones even after severe processing. The results also showed that the method is independent of exogenous contamination and that it is therefore entirely appropriate for this application.

  10. Horseradish peroxidase-nanoclay hybrid particles of high functional and colloidal stability.

    Science.gov (United States)

    Pavlovic, Marko; Rouster, Paul; Somosi, Zoltan; Szilagyi, Istvan

    2018-08-15

    Highly stable dispersions of enzyme-clay nanohybrids of excellent horseradish peroxidase activity were developed. Layered double hydroxide nanoclay was synthesized and functionalized with heparin polyelectrolyte to immobilize the horseradish peroxidase enzyme. The formation of a saturated heparin layer on the platelets led to charge inversion of the positively charged bare nanoclay and to highly stable aqueous dispersions. Great affinity of the enzyme to the surface modified platelets resulted in strong horseradish peroxidase adsorption through electrostatic and hydrophobic interactions as well as hydrogen bonding network and prevented enzyme leakage from the obtained material. The enzyme kept its functional integrity upon immobilization and showed excellent activity in decomposition of hydrogen peroxide and oxidation of an aromatic compound in the test reactions. In addition, remarkable long term functional stability of the enzyme-nanoclay hybrid was observed making the developed colloidal system a promising antioxidant candidate in biomedical treatments and industrial processes. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. Consistent Continuum Particle Modeling of Hypersonic Flows and Development of HybridSimulation Capability

    Science.gov (United States)

    2017-07-01

    Regions of the flow (cells and all particles within them) can be interrogated by direct access to the HDF5 data file format. This avoids the process of loading the entire grid and solution into memory before post-processing. Rather, a precise region of the flow can be interrogated directly from the HDF5 solution file.

  12. Effect of wear parameters on dry sliding behavior of Fly Ash/SiC particles reinforced AA 2024 hybrid composites

    Science.gov (United States)

    Bhaskar Kurapati, Vijaya; Kommineni, Ravindra

    2017-09-01

    In the present work AA 2024 alloy reinforced with mixtures of SiC and fly ash (FA) particles of 70 µm (5, 10 and 15 wt.%) is fabricated using the stir casting method. Both reinforcements are added in equal weight proportions. The wear test specimens are prepared from both the alloy and composite castings with dimensions of Ø 4 mm and 30 mm length by the wire-cut EDM process. The dry sliding wear properties of the prepared composites at room temperature are estimated by pin-on-disc wear testing equipment. The wear characteristics of the composites are studied by conducting the dry sliding wear test over loads of 0.5 kgf, 1.0 kgf and 1.5 kgf, a track diameter of 60 mm and sliding times of 15 min, 30 min and 45 min. The experimental results show that the wear decreases with an increase in the weight percentage of FA and SiC particles in the matrix. Additionally, wear increases with an increase in load and sliding time. Further, it is found that the wear resistance of the AA2024 hybrid composites is higher than that of the AA2024 matrix.

  13. Discrete particle swarm optimization to solve multi-objective limited-wait hybrid flow shop scheduling problem

    Science.gov (United States)

    Santosa, B.; Siswanto, N.; Fiqihesa

    2018-04-01

    This paper proposes a discrete Particle Swarm Optimization (PSO) to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition when several machines are arranged in series and each job must be processed at each machine in the same sequence. The objective functions are minimizing the completion time (makespan), the total tardiness time, and the total machine idle time. Flow shop scheduling models keep evolving to describe real production systems more accurately. Since flow shop scheduling is an NP-hard problem, the most suitable solution methods are metaheuristics. One such metaheuristic is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems. Since flow shop scheduling is a discrete optimization problem, we need to modify PSO to fit the problem. The modification is done by using a probability transition matrix mechanism. To handle the multi-objective problem, we use Pareto optimality (MPSO). The results of MPSO are better than those of PSO because the MPSO solution set has a higher probability of finding the optimal solution. In addition, the MPSO solution set is closer to the optimal solution.
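
    For the multi-objective part, the Pareto-optimal selection described above keeps only schedules that are not dominated in all three objectives (makespan, total tardiness, total machine idle time). A minimal sketch of the dominance check such a selection relies on (function names are illustrative, not from the paper):

        def dominates(a, b):
            # a, b: tuples of objective values to minimize, e.g. (makespan, tardiness, idle_time)
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_front(solutions):
            # keep only candidates not dominated by any other candidate
            return [s for s in solutions
                    if not any(dominates(o, s) for o in solutions if o is not s)]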

  14. Hybrid model for simulation of plasma jet injection in tokamak

    Science.gov (United States)

    Galkin, Sergei A.; Bogatu, I. N.

    2016-10-01

    The hybrid kinetic model of plasma treats the ions as kinetic particles and the electrons as a charge-neutralizing massless fluid. The model is essentially applicable when most of the energy is concentrated in the ions rather than in the electrons, i.e. it is well suited for the high-density hyper-velocity C60 plasma jet. The hybrid model separates the slower ion time scale from the faster electron time scale, which can then be disregarded. That is why hybrid codes consistently outperform traditional PIC codes in computational efficiency, while still resolving kinetic ion effects. We discuss a 2D hybrid model and code with an exact-energy-conservation numerical algorithm and present some results of its application to the simulation of C60 plasma jet penetration through a tokamak-like magnetic barrier. We also examine the 3D model/code extension and its possible applications to tokamak and ionospheric plasmas. The work is supported in part by US DOE DE-SC0015776 Grant.
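
    In hybrid codes of this kind, the massless, charge-neutralizing electron fluid enters through a generalized Ohm's law that gives the electric field algebraically from the ion bulk velocity, the magnetic field and the electron pressure, so no electron push is needed. A standard form of that closure is shown below for reference; the exact terms retained in this particular code are not stated in the abstract:

        \mathbf{E} = -\,\mathbf{u}_i \times \mathbf{B} + \frac{\mathbf{J} \times \mathbf{B}}{e\,n} - \frac{\nabla p_e}{e\,n}, \qquad \mathbf{J} = \frac{\nabla \times \mathbf{B}}{\mu_0}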

  15. Effect of hybrid carbon nanotubes-bimetallic composite particles on the performance of polymer solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sun-Young [Department of Material Processing, Korea Institute of Materials Science, Changwon 641-831 (Korea); Division of Applied Chemical Engineering, Department of Polymer Engineering, Pukyong National University, Busan 608-739 (Korea); Kim, Whi-Dong; Kim, Soo H. [Department of Nanosystem and Nanoprocess Engineering, Pusan National University, 30 Jangjeon-dong, Geumjeong-gu, Busan 609-735 (Korea); Kim, Do-Geun; Kim, Jong-Kuk; Jeong, Yong-Soo; Kang, Jae-Wook [Department of Material Processing, Korea Institute of Materials Science, Changwon 641-831 (Korea); Kim, Joo Hyun [Division of Applied Chemical Engineering, Department of Polymer Engineering, Pukyong National University, Busan 608-739 (Korea); Lee, Jae Keun [School of Mechanical Engineering, Pusan National University, 30 Jangjeon-dong, Geumjeong-gu, Busan 609-735 (Korea)

    2010-05-15

    Hybrid carbon nanotubes-bimetallic composite nanoparticles with sea urchin-like structures (SU-CNTs) were introduced to bulk heterojunction polymer-fullerene solar cells to improve their performance. The SU-CNTs were composed of multi-walled CNTs, which were grown radially over the entire surface of the bimetallic nanoparticles composed of Ni and Al. SU-CNTs with a precisely controlled length of ∼200±40 nm were dispersed homogenously in a polymer active layer. Compared with a pristine device (i.e., without SU-CNTs), the SU-CNTs-doped organic photovoltaic (OPV) cells showed an improved short-circuit current density and power conversion efficiency from 7.5 to 9.5 mA/cm² and 2.1±0.1% to 2.2±0.2% (max. 2.5%), respectively. The specially designed SU-CNTs have strong potential as an effective exciton dissociation medium in the polymer active layer to enhance the performance of organic solar cells. (author)

  16. Wave-Particle Interactions Associated with Nongyrotropic Distribution Functions: A Hybrid Simulation Study

    Science.gov (United States)

    Convery, P. D.; Schriver, D.; Ashour-Abdalla, M.; Richard, R. L.

    2002-01-01

    Nongyrotropic plasma distribution functions can be formed in regions of space where guiding center motion breaks down as a result of strongly curved and weak ambient magnetic fields. Such are the conditions near the current sheet in the Earth's middle and distant magnetotail, where observations of nongyrotropic ion distributions have been made. Here a systematic parameter study of nongyrotropic proton distributions using electromagnetic hybrid simulations is made. We model the observed nongyrotropic distributions by removing a number of arc length segments from a cold ring distribution and find significant differences with the results of simulations that initially have a gyrotropic ring distribution. Model nongyrotropic distributions with initially small perpendicular thermalization produce growing fluctuations that diffuse the ions into a stable Maxwellian-like distribution within a few proton gyroperiods. The growing waves produced by nongyrotropic distributions are similar to the electromagnetic proton cyclotron waves produced by a gyrotropic proton ring distribution in that they propagate parallel to the background magnetic field and occur at frequencies on the order of the proton gyrofrequency. The maximum energy of the fluctuating magnetic field increases as the initial proton distribution is made more nongyrotropic, that is, more highly bunched in perpendicular velocity space. This increase can be as much as twice the energy produced in the gyrotropic case.

  17. Porting the 3D Gyrokinetic Particle-in-cell Code GTC to the CRAY/NEC SX-6 Vector Architecture: Perspectives and Challenges

    International Nuclear Information System (INIS)

    Ethier, S.; Lin, Z.

    2003-01-01

    Several years of optimization on the super-scalar architecture have made it more difficult to port the current version of the 3D particle-in-cell code GTC to the CRAY/NEC SX-6 vector architecture. This paper explains the initial work that has been done to port this code to the SX-6 computer and to optimize the most time-consuming parts. Early performance results are shown and compared to the same test done on the IBM SP Power 3 and Power 4 machines.

  18. A hybrid particle swarm optimization-SVM classification for automatic cardiac auscultation

    Directory of Open Access Journals (Sweden)

    Prasertsak Charoen

    2017-04-01

    Full Text Available Cardiac auscultation is a method for a doctor to listen to heart sounds, using a stethoscope, to examine the condition of the heart. Automatic cardiac auscultation with machine learning is a promising technique to classify heart conditions without the need for doctors or expertise. In this paper, we develop a classification model based on a support vector machine (SVM) and particle swarm optimization (PSO) for an automatic cardiac auscultation system. The model consists of two parts: a heart sound signal processing part and a proposed PSO for weighted SVM (WSVM) classifier part. In this method, the PSO takes into account the degree of importance of each feature extracted from wavelet packet (WP) decomposition. Then, by using principal component analysis (PCA), the features can be selected. The PSO technique is used to assign diverse weights to different features for the WSVM classifier. Experimental results show that both continuous and binary PSO-WSVM models achieve better classification accuracy on the heart sound samples, by reducing system false negatives (FNs), compared to traditional SVM and genetic algorithm (GA)-based SVM.
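
    The feature-weighting idea described above amounts to letting each PSO particle carry one weight per feature, scaling the features by those weights before SVM training, and scoring the particle by classification performance. A minimal sketch of that evaluation step, assuming scikit-learn is available (the kernel choice and the use of cross-validated accuracy as the score are illustrative assumptions, not details from the paper):

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def particle_fitness(weights, X, y):
            # scale each feature by the weight carried by this PSO particle
            Xw = X * np.asarray(weights)
            # fitness = mean cross-validated accuracy of the weighted SVM
            return cross_val_score(SVC(kernel="rbf"), Xw, y, cv=5).mean()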

  19. Hybrid Artificial Bee Colony Algorithm and Particle Swarm Search for Global Optimization

    Directory of Open Access Journals (Sweden)

    Wang Chun-Feng

    2014-01-01

    Full Text Available Artificial bee colony (ABC) algorithm is one of the most recent swarm intelligence based algorithms, which has been shown to be competitive with other population-based algorithms. However, there is still an insufficiency in ABC regarding its solution search equation, which is good at exploration but poor at exploitation. To overcome this problem, we propose a novel artificial bee colony algorithm based on a particle swarm search mechanism. In this algorithm, to improve the convergence speed, the initial population is first generated by using good point set theory rather than random selection. Secondly, in order to enhance the exploitation ability, the employed bees, onlookers, and scouts utilize the mechanism of PSO to search for new candidate solutions. Finally, for further improving the searching ability, the chaotic search operator is adopted in the best solution of the current iteration. Our algorithm is tested on some well-known benchmark functions and compared with other algorithms. Results show that our algorithm has good performance.

  20. Hybrid micro-particles as a magnetically-guidable decontaminant for cesium-eluted ash slurry

    Science.gov (United States)

    Namiki, Yoshihisa; Ueyama, Toshihiko; Yoshida, Takayuki; Watanabe, Ryoei; Koido, Shigeo; Namiki, Tamami

    2014-09-01

    Decontamination of the radioactive cesium that is widely dispersed owing to a nuclear power station accident and concentrated in fly ash requires an effective elimination system. Radioactive fly ash contains large amounts of water-soluble cesium that can cause severe secondary contamination and represents a serious health risk, yet its complete removal is complicated and difficult. Here it is shown that a new fine-powder formulation can be magnetically guided to eliminate cesium after being mixed with the ash slurry. This formulation, termed MagCE, consists of a ferromagnetic porous structure and alkaline- and salt-resistant nickel ferrocyanide. It has potent cesium-adsorption and magnetic-separation properties. Because of its resistance to physical and chemical attack from ash particles, as well as to the high pH and salt concentration of the ash slurry, MagCE simplifies the decontamination process and prevents the hazardous water-soluble cesium from remaining in the treated ash.

  1. PSOVina: The hybrid particle swarm optimization algorithm for protein-ligand docking.

    Science.gov (United States)

    Ng, Marcus C K; Fong, Simon; Siu, Shirley W I

    2015-06-01

    Protein-ligand docking is an essential step in the modern drug discovery process. The challenge here is to accurately predict and efficiently optimize the position and orientation of ligands in the binding pocket of a target protein. In this paper, we present a new method called PSOVina, which combines the particle swarm optimization (PSO) algorithm with the efficient Broyden-Fletcher-Goldfarb-Shanno (BFGS) local search method adopted in AutoDock Vina to tackle the conformational search problem in docking. Using a diverse data set of 201 protein-ligand complexes from the PDBbind database and a full set of ligands and decoys for four representative targets from the directory of useful decoys (DUD) virtual screening data set, we assessed the docking performance of PSOVina in comparison to the original Vina program. Our results showed that PSOVina achieves a remarkable execution time reduction of 51-60% without compromising the prediction accuracies in the docking and virtual screening experiments. This improvement in time efficiency makes PSOVina a better choice of docking tool in large-scale protein-ligand docking applications. Our work lays the foundation for the future development of swarm-based algorithms in molecular docking programs. PSOVina is freely available to non-commercial users at http://cbbio.cis.umac.mo.
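
    The memetic structure described above, i.e. a PSO global search whose best solution is periodically polished by a quasi-Newton local search, can be sketched as follows. scipy's L-BFGS-B stands in for Vina's BFGS step and a toy Rastrigin function replaces the docking scoring function; this is not the PSOVina code itself.

```python
# Minimal sketch of the memetic idea behind PSOVina: a PSO global search whose
# best particle is periodically refined with a quasi-Newton local search.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    return 10.0 * len(x) + float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))

rng = np.random.default_rng(0)
dim, n_particles = 6, 25
pos = rng.uniform(-5.12, 5.12, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([rastrigin(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for it in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -5.12, 5.12)
    vals = np.array([rastrigin(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()
    if it % 10 == 0:      # occasional quasi-Newton polish of the global best
        res = minimize(rastrigin, gbest, method="L-BFGS-B",
                       bounds=[(-5.12, 5.12)] * dim)
        if res.fun < rastrigin(gbest):
            gbest = res.x

print("best score found:", rastrigin(gbest))
```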

  2. Molecular dynamics simulations in hybrid particle-continuum schemes: Pitfalls and caveats

    Science.gov (United States)

    Stalter, S.; Yelash, L.; Emamy, N.; Statt, A.; Hanke, M.; Lukáčová-Medvid'ová, M.; Virnau, P.

    2018-03-01

    Heterogeneous multiscale methods (HMM) combine molecular accuracy of particle-based simulations with the computational efficiency of continuum descriptions to model flow in soft matter liquids. In these schemes, molecular simulations typically pose a computational bottleneck, which we investigate in detail in this study. We find that it is preferable to simulate many small systems as opposed to a few large systems, and that a choice of a simple isokinetic thermostat is typically sufficient while thermostats such as Lowe-Andersen allow for simulations at elevated viscosity. We discuss suitable choices for time steps and finite-size effects which arise in the limit of very small simulation boxes. We also argue that if colloidal systems are considered as opposed to atomistic systems, the gap between microscopic and macroscopic simulations regarding time and length scales is significantly smaller. We propose a novel reduced-order technique for the coupling to the macroscopic solver, which allows us to approximate a non-linear stress-strain relation efficiently and thus further reduce computational effort of microscopic simulations.

  3. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    Science.gov (United States)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile, truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5x5x2 in. each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24x2.5x3 in., called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton-scattered events and coded aperture events. In this thesis, the coded aperture, Compton and hybrid imaging algorithms that were developed will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as Global Positioning System (GPS) and Inertial Navigation System (INS) data, must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with the detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented as well as

  4. Investigation of Fluctuation-Induced Electron Transport in Hall Thrusters with a 2D Hybrid Code in the Azimuthal and Axial Coordinates

    Science.gov (United States)

    Fernandez, Eduardo; Borelli, Noah; Cappelli, Mark; Gascon, Nicolas

    2003-10-01

    Most current Hall thruster simulation efforts employ either 1D (axial), or 2D (axial and radial) codes. These descriptions crucially depend on the use of an ad-hoc perpendicular electron mobility. Several models for the mobility are typically invoked: classical, Bohm, empirically based, wall-induced, as well as combinations of the above. Experimentally, it is observed that fluctuations and electron transport depend on axial distance and operating parameters. Theoretically, linear stability analyses have predicted a number of unstable modes; yet the nonlinear character of the fluctuations and/or their contribution to electron transport remains poorly understood. Motivated by these observations, a 2D code in the azimuthal and axial coordinates has been written. In particular, the simulation self-consistently calculates the azimuthal disturbances resulting in fluctuating drifts, which in turn (if properly correlated with plasma density disturbances) result in fluctuation-driven electron transport. The characterization of the turbulence at various operating parameters and across the channel length is also the object of this study. A description of the hybrid code used in the simulation as well as the initial results will be presented.
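
    For context, the fluctuation-driven electron transport mentioned above is usually estimated from the correlated part of the density and azimuthal electric field fluctuations; schematically (this is a standard quasilinear expression, not necessarily the exact diagnostic used in the code),

    $$ \Gamma_{e,z} \;=\; \frac{\langle \delta n_e\,\delta E_\theta \rangle}{B_r}, $$

    where the brackets denote an average over the azimuthal direction and time, B_r is the (mainly radial) magnetic field and z the axial direction.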

  5. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis

    Science.gov (United States)

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used in processing the subseries data in MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, daily crude oil prices from the West Texas Intermediate (WTI) market have been used as the case study. The time series prediction performance of the WMLR model is compared with that of the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666
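
    A minimal sketch of the wavelet-plus-MLR idea follows: decompose the series into scale subseries, then fit a linear regression on their lagged values. The synthetic series, db4 wavelet, lag and train/test split are assumptions; the PCA step and the PSO tuning of the regression are omitted.

```python
# Illustrative sketch of wavelet decomposition followed by a multiple linear
# regression on the scale subseries (not the authors' WMLR implementation).
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(600)
price = 50 + 0.02 * t + 5 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 1, t.size)

# Multilevel DWT, then rebuild one subseries per scale by zeroing the others.
wavelet, level = "db4", 3
coeffs = pywt.wavedec(price, wavelet, level=level)
subseries = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    subseries.append(pywt.waverec(kept, wavelet)[: price.size])
S = np.column_stack(subseries)            # columns: A3, D3, D2, D1

# One-step-ahead MLR: predict price[k] from the subseries values at k-1.
X = np.column_stack([S[:-1], np.ones(len(S) - 1)])
y = price[1:]
split = 500
beta, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ beta
rmse = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
print("out-of-sample RMSE:", rmse)
```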

  6. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ani Shabri

    2014-01-01

    Full Text Available Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used in processing the subseries data in MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, daily crude oil prices from the West Texas Intermediate (WTI) market have been used as the case study. The time series prediction performance of the WMLR model is compared with that of the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.

  7. Crude oil price forecasting based on hybridizing wavelet multiple linear regression model, particle swarm optimization techniques, and principal component analysis.

    Science.gov (United States)

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used in processing the subseries data in MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, daily crude oil prices from the West Texas Intermediate (WTI) market have been used as the case study. The time series prediction performance of the WMLR model is compared with that of the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.

  8. ACT-XN: Revised version of an activation calculation code for fusion reactor analysis. Supplement of the function for the sequential reaction activation by charged particles

    International Nuclear Information System (INIS)

    Yamauchi, Michinori; Sato, Satoshi; Nishitani, Takeo; Konno, Chikara; Hori, Jun-ichi; Kawasaki, Hiromitsu

    2007-09-01

    The ACT-XN is a revised version of the ACT4 code, which was developed in the Japan Atomic Energy Research Institute (JAERI) to calculate the transmutation, induced activity, decay heat, delayed gamma-ray sources, etc., for fusion devices. The ACT4 code cannot deal with the sequential reactions of charged particles generated by primary neutron reactions. In the design of present experimental reactors, the activation due to sequential reactions may not be of great concern as it is usually buried under the activity from primary neutron reactions. However, low-activation materials are one of the important factors for constructing high-power fusion reactors in the future, and unexpected activation may be produced through sequential reactions. Therefore, in the present work, the ACT4 code was supplemented with calculation functions for the sequential reactions and renamed ACT-XN. The ACT-XN code is equipped with functions to calculate effective cross sections for sequential reactions and to enter them into the transmutation matrix. The FISPACT data were adopted for the (x,n) reaction cross sections, charged-particle emission spectra and stopping powers. The nuclear reaction chain data library was revised to cope with the (x,n) reactions. The charged particles are specified as p, d, t, 3He (h) and α. The code was applied to the analysis of an FNS experiment for LiF and a demo-reactor design with FLiBe, and it was confirmed that it reproduces the experimental values within 15-30% discrepancies. In addition, it was noted that the dose rate due to sequential reactions cannot always be neglected after a certain cooling period for some of the low-activation materials. (author)
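
    For orientation, the effective cross section for such sequential reactions is usually built, schematically, from the primary charged-particle emission spectrum and the probability that the emitted particle x induces a secondary (x,n)-type reaction while slowing down; the exact form used in ACT-XN/FISPACT may differ in detail:

    $$ \sigma^{\mathrm{seq}}(E_n) \;=\; \sum_{x} \int_0^{E_x^{\max}} \frac{d\sigma_{n,x}}{dE_x}(E_n,E_x)\, P_x(E_x)\, dE_x, \qquad P_x(E_x) \;=\; N \int_0^{E_x} \frac{\sigma_{x,\mathrm{sec}}(E)}{S_x(E)}\, dE, $$

    where N is the atom density of the nuclide undergoing the secondary reaction, σ_x,sec its cross section for the charged particle x, and S_x(E) the stopping power of x in the material.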

  9. Learning Concepts, Language, and Literacy in Hybrid Linguistic Codes: The Multilingual Maze of Urban Grade 1 Classrooms in South Africa

    Science.gov (United States)

    Henning, Elizabeth

    2012-01-01

    From the field of developmental psycholinguistics and from conceptual development theory there is evidence that excessive linguistic "code-switching" in early school education may pose some hazards for the learning of young multilingual children. In this article the author addresses the issue, invoking post-Piagetian and neo-Vygotskian…

  10. A hybrid path-oriented code assignment CDMA-based MAC protocol for underwater acoustic sensor networks.

    Science.gov (United States)

    Chen, Huifang; Fan, Guangyu; Xie, Lei; Cui, Jun-Hong

    2013-11-04

    Due to the characteristics of the underwater acoustic channel, media access control (MAC) protocols designed for underwater acoustic sensor networks (UWASNs) are quite different from those for terrestrial wireless sensor networks. Moreover, in a sink-oriented network with event information generation in a sensor field and message forwarding to the sink hop-by-hop, the sensors near the sink have to transmit more packets than those far from the sink, and then a funneling effect occurs, which leads to packet congestion, collisions and losses, especially in UWASNs with long propagation delays. An improved CDMA-based MAC protocol, named path-oriented code assignment (POCA) CDMA MAC (POCA-CDMA-MAC), is proposed for UWASNs in this paper. In the proposed MAC protocol, both the round-robin method and CDMA technology are adopted to make the sink receive packets from multiple paths simultaneously. Since the number of paths for information gathering is much less than the number of nodes, the length of the spreading code used in the POCA-CDMA-MAC protocol is much shorter than that used in CDMA-based protocols with transmitter-oriented code assignment (TOCA) or receiver-oriented code assignment (ROCA). Simulation results show that the proposed POCA-CDMA-MAC protocol achieves a higher network throughput and a lower end-to-end delay compared to other CDMA-based MAC protocols.
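
    The core allocation idea, spreading codes assigned per routing path toward the sink rather than per node, can be sketched as follows. Walsh-Hadamard codes and the toy topology are illustrative assumptions, not the protocol's actual code set.

```python
# Sketch of path-oriented code assignment: spreading codes are allocated per
# routing path toward the sink (round-robin), so the code-set size scales with
# the number of paths rather than the number of nodes.
import numpy as np

def walsh_codes(n_codes):
    """Smallest Walsh-Hadamard matrix (power of two) holding n_codes rows."""
    size = 1
    while size < n_codes:
        size *= 2
    H = np.array([[1]])
    while H.shape[0] < size:
        H = np.block([[H, H], [H, -H]])
    return H                      # each row is one spreading code

# Paths toward the sink (lists of node ids); nodes inherit their path's code.
paths = [[1, 2, 3], [4, 5], [6, 7, 8, 9], [10, 11]]
codes = walsh_codes(len(paths))
assignment = {}
for p_idx, path in enumerate(paths):          # round-robin over paths
    code = codes[p_idx % codes.shape[0]]
    for node in path:
        assignment[node] = code

print("spreading-code length:", codes.shape[1])   # 4 here, not 11 (node count)
print("code of node 7:", assignment[7])
```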

  11. A Hybrid Path-Oriented Code Assignment CDMA-Based MAC Protocol for Underwater Acoustic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Huifang Chen

    2013-11-01

    Full Text Available Due to the characteristics of the underwater acoustic channel, media access control (MAC) protocols designed for underwater acoustic sensor networks (UWASNs) are quite different from those for terrestrial wireless sensor networks. Moreover, in a sink-oriented network with event information generation in a sensor field and message forwarding to the sink hop-by-hop, the sensors near the sink have to transmit more packets than those far from the sink, and then a funneling effect occurs, which leads to packet congestion, collisions and losses, especially in UWASNs with long propagation delays. An improved CDMA-based MAC protocol, named path-oriented code assignment (POCA) CDMA MAC (POCA-CDMA-MAC), is proposed for UWASNs in this paper. In the proposed MAC protocol, both the round-robin method and CDMA technology are adopted to make the sink receive packets from multiple paths simultaneously. Since the number of paths for information gathering is much less than the number of nodes, the length of the spreading code used in the POCA-CDMA-MAC protocol is much shorter than that used in CDMA-based protocols with transmitter-oriented code assignment (TOCA) or receiver-oriented code assignment (ROCA). Simulation results show that the proposed POCA-CDMA-MAC protocol achieves a higher network throughput and a lower end-to-end delay compared to other CDMA-based MAC protocols.

  12. Distribution Pattern of Fe, Sr, Zr and Ca Elements as Particle Size Function in the Code River Sediments from Upstream to Downstream

    International Nuclear Information System (INIS)

    Sri Murniasih; Muzakky

    2007-01-01

    The concentrations of the Fe, Sr, Zr and Ca elements in granular sediment from the upstream to the downstream reaches of the Code river have been analyzed. The aim of this research is to determine the influence of particle size on the concentrations of Fe, Sr, Zr and Ca in the Code river sediments from upstream to downstream and their distribution patterns. The instrument used was an X-ray fluorescence spectrometer with a Si(Li) detector. The analysis results show that Fe and Sr are found mostly in the 150-90 μm particle-size fraction, while Zr and Ca are found mostly in the < 90 μm fraction. The distribution patterns of Fe, Sr, Zr and Ca in the Code river sediments tend to increase from upstream to downstream, following the river's conductivity. The concentrations of Fe, Sr, Zr and Ca are 1.49 ± 0.03 % - 5.93 ± 0.02 %; 118.20 ± 10.73 ppm - 468.21 ± 20.36 ppm; 19.81 ± 0.86 ppm - 76.36 ± 3.02 ppm and 3.22 ± 0.25 % - 11.40 ± 0.31 %, respectively. (author)

  13. Improvement of neutron collimator design for thermal neutron radiography using Monte Carlo N-particle transport code version 5

    International Nuclear Information System (INIS)

    Thiagu Supramaniam

    2007-01-01

    The aim of this research was to propose a new neutron collimator design for a thermal neutron radiography facility using the tangential beam port of the PUSPATI TRIGA Mark II reactor, Malaysia Institute of Nuclear Technology Research (MINT). The best geometry and materials for the neutron collimator were chosen in order to obtain a uniform beam with maximum thermal neutron flux, a high L/D ratio, a high neutron-to-gamma ratio and low beam divergence with high resolution. Monte Carlo N-Particle Transport Code version 5 (MCNP5) was used to optimize six neutron collimator components: the beam port medium, neutron scatterer, neutron moderator, gamma filter, aperture and collimator wall. The reactor and tangential beam port setup in MCNP5 was modeled according to its actual dimensions. A homogeneous reactor core was assumed and the population control method of variance reduction was applied by using cell importance. The comparison between experimental and simulated thermal neutron flux measurements for the bare tangential beam port shows that both curves have a similar pattern, which supports the reliability of MCNP5 for obtaining optimal neutron collimator parameters. The simulated results for the beam medium show that vacuum is the best medium to transport neutrons, followed by helium gas and air. The optimized aperture component was boral with 3 cm thickness. The optimal aperture center-hole diameter was 2 cm, which produces an L/D ratio of 88. Simulation also shows that a graphite neutron scatterer improves the thermal neutron flux while reducing the fast neutron flux. A neutron moderator was used to moderate fast and epithermal neutrons in the beam port. Paraffin wax 90 cm thick was found to be the best neutron moderator material, producing the highest thermal neutron flux at the image plane. A cylindrical high-density polyethylene neutron collimator produces the highest thermal neutron flux at the image plane rather than divergent
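
    For reference, the quoted aperture diameter and L/D ratio can be related through standard thermal neutron radiography estimates (textbook relations, not taken from the abstract; the 176 cm length is inferred from the quoted values):

    $$ \frac{L}{D} = \frac{L}{2\ \mathrm{cm}} = 88 \;\Rightarrow\; L \approx 176\ \mathrm{cm}, \qquad \phi_{\mathrm{image}} \;\simeq\; \frac{\phi_{\mathrm{aperture}}}{16\,(L/D)^2}, \qquad U_g \;=\; \frac{\ell_f}{L/D}, $$

    where φ denotes the thermal neutron flux, ℓ_f the object-to-detector distance and U_g the geometric unsharpness.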

  14. An ultra-small NiFe2O4 hollow particle/graphene hybrid: fabrication and electromagnetic wave absorption property.

    Science.gov (United States)

    Yan, Feng; Guo, Dong; Zhang, Shen; Li, Chunyan; Zhu, Chunling; Zhang, Xitian; Chen, Yujin

    2018-02-08

    Herein, ultra-small NiFe2O4 hollow particles, with a diameter and wall thickness of only 6 and 1.8 nm, respectively, were anchored on a graphene surface based on the nanoscale Kirkendall effect. The hybrid exhibits an excellent electromagnetic wave absorption property, comparable or superior to that of most reported absorbers. Our strategy may open a way to grow ultra-small hollow particles on graphene for applications in many fields such as electromagnetic wave absorption and energy storage and conversion.

  15. Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Pham, Tien Thang; Beltrán, Marta

    2011-01-01

    The paper addresses the problem of distribution of high-definition video over fiber-wireless networks. The physical-layer architecture with a low-complexity envelope detection solution is investigated. We present both experimental studies and simulation of high-quality, high-definition compressed video transmission over a 60 GHz fiber-wireless link. Using advanced video coding we satisfy low-complexity and low-delay constraints, while preserving superb video quality over a significantly extended wireless distance. © 2011 Optical Society of America.

  16. Test Particle Simulations of Electron Injection by the Bursty Bulk Flows (BBFs) using the High Resolution Lyon-Fedder-Mobarry (LFM) Code

    Science.gov (United States)

    Eshetu, W. W.; Lyon, J.; Wiltberger, M. J.; Hudson, M. K.

    2017-12-01

    Test particle simulations of electron injection by bursty bulk flows (BBFs) have been done using a test particle tracer code [1] and the output fields of the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) code [2]. The MHD code was run at high resolution (oct resolution) and with specified solar wind conditions so as to reproduce the observed qualitative picture of the BBFs [3]. Test particles were injected so that they interact with earthward-propagating BBFs. The result of the simulation shows that electrons are pushed ahead of the BBFs and accelerated into the inner magnetosphere. Once electrons are in the inner magnetosphere they are further energized by drift resonance with the azimuthal electric field. In addition, pitch angle scattering of electrons, resulting in violation of the conservation of the first adiabatic invariant, has been observed. The violation of the first adiabatic invariant occurs as electrons cross a weak-magnetic-field region with a strong gradient of the field perturbed by the BBFs. References: 1. Kress, B. T., Hudson, M. K., Looper, M. D., Albert, J., Lyon, J. G., and Goodrich, C. C. (2007), Global MHD test particle simulations of > 10 MeV radiation belt electrons during storm sudden commencement, J. Geophys. Res., 112, A09215, doi:10.1029/2006JA012218. 2. Lyon, J. G., Fedder, J. A., and Mobarry, C. M. (2004), The Lyon-Fedder-Mobarry (LFM) Global MHD Magnetospheric Simulation Code, J. Atm. and Solar-Terrestrial Phys., 66, Issue 15-16, 1333-1350, doi:10.1016/j.jastp. 3. Wiltberger, M., Merkin, V., Lyon, J. G., and Ohtani, S. (2015), High-resolution global magnetohydrodynamic simulation of bursty bulk flows, J. Geophys. Res. Space Physics, 120, 4555-4566, doi:10.1002/2015JA021080.
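
    The basic ingredient of such a test-particle study is an integrator for the Lorentz force in prescribed fields. A minimal sketch follows (second-order Runge-Kutta midpoint step, with simple analytic stand-ins for the E and B fields instead of interpolated LFM output):

```python
# Minimal sketch of a test-particle tracer: a second-order Runge-Kutta
# (midpoint) step integrates the Lorentz force in prescribed E and B fields.
import numpy as np

QM = -1.759e11                    # electron charge-to-mass ratio q/m [C/kg]

def E_field(x, t):                # placeholder azimuthal-like electric field [V/m]
    return np.array([0.0, 1.0e-3, 0.0])

def B_field(x, t):                # placeholder uniform northward field [T]
    return np.array([0.0, 0.0, 2.0e-8])

def deriv(state, t):
    x, v = state[:3], state[3:]
    a = QM * (E_field(x, t) + np.cross(v, B_field(x, t)))
    return np.concatenate([v, a])

def rk2_step(state, t, dt):
    k1 = deriv(state, t)
    k2 = deriv(state + 0.5 * dt * k1, t + 0.5 * dt)
    return state + dt * k2

state = np.concatenate([np.zeros(3), [1.0e5, 0.0, 0.0]])   # x [m], v [m/s]
dt = 2.0e-5                       # about 1/90 of the gyroperiod for these fields
for n in range(50000):
    state = rk2_step(state, n * dt, dt)
print("final position [m]:", state[:3])
```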

  17. Numerical stability of 2nd order Runge-Kutta integration alghorithms for use in particle-in-cell codes

    Czech Academy of Sciences Publication Activity Database

    Fuchs, Vladimír; Gunn, J. P.

    2004-01-01

    Roč. 54, suppl.C (2004), C100-C110 ISSN 0011-4626. [Symposium on Plasma Physics and Technology /21./. Praha, 14.06.2004-17.06.2004] Institutional research plan: CEZ:AV0Z2043910 Keywords : simulation, tokamak edge plasma, lower hybrid antenna Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 0.292, year: 2004

  18. A hybridized discontinuous Galerkin framework for high-order particle-mesh operator splitting of the incompressible Navier-Stokes equations

    Science.gov (United States)

    Maljaars, Jakob M.; Labeur, Robert Jan; Möller, Matthias

    2018-04-01

    A generic particle-mesh method using a hybridized discontinuous Galerkin (HDG) framework is presented and validated for the solution of the incompressible Navier-Stokes equations. Building upon particle-in-cell concepts, the method is formulated in terms of an operator splitting technique in which Lagrangian particles are used to discretize an advection operator, and an Eulerian mesh-based HDG method is employed for the constitutive modeling to account for the inter-particle interactions. Key to the method is the variational framework provided by the HDG method. This allows the projections between the Lagrangian particle space and the Eulerian finite element space to be formulated efficiently in terms of local (i.e., cellwise) ℓ2-projections. Furthermore, exploiting the HDG framework for solving the constitutive equations results in velocity fields which excellently approach the incompressibility constraint in a local sense. By advecting the particles through these velocity fields, the particle distribution remains uniform over time, obviating the need for additional quality control. The presented methodology allows for a straightforward extension to arbitrary-order spatial accuracy on general meshes. A range of numerical examples shows that optimal convergence rates are obtained in space and, given the particular time stepping strategy, second-order accuracy is obtained in time. The model capabilities are further demonstrated by presenting results for the flow over a backward-facing step and for the flow around a cylinder.
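
    A basic, unconstrained form of the cellwise particle-mesh ℓ2-projection mentioned above can be written, on each cell K, as the local least-squares problem (the paper's actual variational formulation may include additional terms; this is only a schematic):

    $$ \min_{u_h \in V_h(K)} \sum_{p \in K} \big(u_h(\mathbf{x}_p) - u_p\big)^2 \;\;\Longleftrightarrow\;\; \sum_{p \in K} \phi_i(\mathbf{x}_p)\,\phi_j(\mathbf{x}_p)\,\hat u_j \;=\; \sum_{p \in K} \phi_i(\mathbf{x}_p)\, u_p, $$

    where φ_i are the local basis functions of V_h(K), û_j the cell degrees of freedom and u_p the particle values; each cell yields a small independent linear system.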

  19. NE213/BC501A scintillator−lightguide assembly response to 241Am−Be neutrons: An MCNPX−PHOTRACK hybrid code simulation

    International Nuclear Information System (INIS)

    Tajik, M.; Ghal-Eh, N.; Etaati, G.R.; Afarideh, H.

    2014-01-01

    The response of an NE213 (or its BICRON equivalent, BC501A) scintillator attached to different sizes of polished/painted lightguides when exposed to 241Am–Be neutrons has been simulated. This kind of simulation requires both particle and light transport: the transport of neutrons and neutron-induced charged particles such as alphas, protons, carbon nuclei and so on has been undertaken using MCNPX, whilst the scintillation light transport has been performed with the PHOTRACK code. The comparison between simulated and experimental response functions of NE213 attached to different sizes of polished/painted lightguides, and also the influence of the lightguide length and covering on the detection efficiency and the uniformity of the scintillator–lightguide assembly response, have been studied. - Highlights: • The response of an NE213 scintillator with/without lightguides to Am–Be neutrons has been simulated. • The MCNPX–PHOTRACK code has been used for the simulation studies in order to model radio-optical properties. • The measured and simulated spectra for an NE213 scintillator exposed to an Am–Be source show good agreement.

  20. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines

    Directory of Open Access Journals (Sweden)

    Jingjing Xu

    2015-08-01

    Full Text Available In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Given that the space, time and frequency resources of the underground tunnel are open, it is proposed to constitute wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem that multiple access interference (MAI) arises when multiple source sensors transmit monitoring information simultaneously, a kind of multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission and the D-PSO algorithm.

  1. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines.

    Science.gov (United States)

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-08-27

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. According to the characteristics that the space, time and frequency resources of underground tunnel are open, it is proposed to constitute wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem that multiple access interference (MAI) arises when multiple source sensors transmit monitoring information simultaneously, a kind of multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission and D-PSO algorithm with particle swarm optimization.

  2. User manual for version 4.3 of the Tripoli-4 Monte-Carlo method particle transport computer code

    International Nuclear Information System (INIS)

    Both, J.P.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B.

    2003-01-01

    This manual relates to version 4.3 of the TRIPOLI-4 code. TRIPOLI-4 is a computer code simulating the transport of neutrons, photons, electrons and positrons. It can be used for radiation shielding calculations (long-distance propagation with flux attenuation in non-multiplying media) and neutronics calculations (fissile media, on a criticality or sub-criticality basis). This makes it possible to calculate keff (for criticality), fluxes, currents, reaction rates and multi-group cross-sections. TRIPOLI-4 is a three-dimensional code that uses the Monte Carlo method. It allows for a point-wise description of cross-sections in energy as well as multi-group homogenized cross-sections, and features two modes of geometrical representation: surface-based and combinatorial. The code uses cross-section libraries in ENDF/B format (such as JEF2-2, ENDF/B-VI and JENDL) for the point-wise description, cross-sections in APOTRIM format (from the APOLLO2 code), or a format specific to TRIPOLI-4 for the multi-group description. (authors)

  3. Computer calculation of neutron cross sections with Hauser-Feshbach code STAPRE incorporating the hybrid pre-compound emission model

    International Nuclear Information System (INIS)

    Ivascu, M.

    1983-10-01

    Computer codes incorporating advanced nuclear models (optical, statistical and pre-equilibrium decay nuclear reaction models) were used to calculate neutron cross sections needed for fusion reactor technology. The elastic and inelastic scattering, (n,2n), (n,p), (n,n'p), (n,d) and (n,γ) cross sections for the stable molybdenum isotopes 92,94,95,96,97,98,100Mo and incident neutron energies from about 100 keV (or the threshold) to 20 MeV were calculated using a consistent set of input parameters. The hydrogen production cross section, which determines the radiation damage in structural materials of fusion reactors, can be simply deduced from the presented results. More elaborate microscopic models of nuclear level density are required for high-accuracy calculations.

  4. Decision support tool for Virtual Power Players: Hybrid Particle Swarm Optimization applied to Day-ahead Vehicle-To-Grid Scheduling

    DEFF Research Database (Denmark)

    Soares, João; Valle, Zita; Morais, Hugo

    2013-01-01

    This paper presents a decision support tool methodology to help virtual power players (VPPs) in the Smart Grid (SG) context to solve the day-ahead energy resource scheduling considering the intensive use of Distributed Generation (DG) and Vehicle-To-Grid (V2G). The main focus is the application of a new hybrid method combining a particle swarm approach and a deterministic technique based on mixed-integer linear programming (MILP) to solve the day-ahead scheduling minimizing total operation costs from the aggregator point of view. A realistic mathematical formulation, considering the electric network constraints and V2G charging and discharging efficiencies, is presented. Full AC power flow calculation is included in the hybrid method to allow taking into account the network constraints. A case study with a 33-bus distribution network and 1800 V2G resources is used to illustrate the performance...

  5. Optimal design of permanent magnet flux switching generator for wind applications via artificial neural network and multi-objective particle swarm optimization hybrid approach

    International Nuclear Information System (INIS)

    Meo, Santolo; Zohoori, Alireza; Vahedi, Abolfazl

    2016-01-01

    Highlights: • A new optimal design of a flux switching permanent magnet generator is developed. • A prototype is employed to validate the numerical data used for optimization. • A novel hybrid multi-objective particle swarm optimization approach is proposed. • Optimization targets are weight, cost, voltage and its total harmonic distortion. • The preference of the hybrid approach is proved in comparison with other optimization methods. - Abstract: In this paper a new hybrid approach, obtained by combining a multi-objective particle swarm optimization and artificial neural networks, is proposed for the design optimization of direct-drive permanent magnet flux switching generators for low-power wind applications. The targets of the proposed multi-objective optimization are to reduce the cost and weight of the machine while maximizing the amplitude of the induced voltage as well as minimizing its total harmonic distortion. The permanent magnet width, the stator and rotor tooth widths, the rotor teeth number and the stator pole number of the machine define the search space for the optimization problem. Four supervised artificial neural networks are designed for modeling the complex relationships of the weight, the cost, the amplitude and the total harmonic distortion of the output voltage with respect to the quantities of the search space. Finite element analysis is adopted to generate the training dataset for the artificial neural networks. The finite element analysis based model is verified by experimental results with a 1.5 kW permanent magnet flux switching generator prototype suitable for renewable energy applications, having 6/19 stator poles/rotor teeth. Finally the effectiveness of the proposed hybrid procedure is compared with the results given by conventional multi-objective optimization algorithms. The obtained results show the soundness of the proposed multi-objective optimization technique and its feasibility to be adopted as a suitable methodology for the optimal design of permanent

  6. Tripoli-3: monte Carlo transport code for neutral particles - version 3.5 - users manual; Tripoli-3: code de transport des particules neutres par la methode de monte carlo - version 3.5 - manuel d'utilisation

    Energy Technology Data Exchange (ETDEWEB)

    Vergnaud, Th.; Nimal, J.C.; Chiron, M

    2001-07-01

    The TRIPOLI-3 code applies the Monte Carlo method to neutron, gamma-ray and coupled neutron and gamma-ray transport calculations in three-dimensional geometries, either in steady-state conditions or with a time dependence. It can be used to study problems where there is a high flux attenuation between the source zone and the result zone (studies of shielding configurations or source-driven sub-critical systems, with fission being taken into account), as well as problems where there is a low flux attenuation (neutronics calculations -- in a fuel lattice cell, for example -- where fission is taken into account, usually with the calculation of the effective multiplication factor, fine structure studies, numerical experiments to investigate method approximations, etc). TRIPOLI-3 has been operational since 1995 and is the version of the TRIPOLI code that follows on from TRIPOLI-2; it can be used on SUN, RISC6000 and HP workstations and on PCs using the Linux or Windows/NT operating systems. The code uses nuclear data libraries generated using the THEMIS/NJOY system. The current libraries were derived from ENDF/B6 and JEF2. There is also a response function library based on a number of evaluations, notably the dosimetry libraries IRDF/85 and IRDF/90, and also evaluations from JEF2. The treatment of particle transport is the same in version 3.5 as in version 3.4 of the TRIPOLI code, but version 3.5 is more convenient for preparing the input data and for reading the output. A French version of the user's manual exists. (authors)

  7. Tripoli-3: monte Carlo transport code for neutral particles - version 3.5 - users manual; Tripoli-3: code de transport des particules neutres par la methode de monte carlo - version 3.5 - manuel d'utilisation

    Energy Technology Data Exchange (ETDEWEB)

    Vergnaud, Th; Nimal, J C; Chiron, M

    2001-07-01

    The TRIPOLI-3 code applies the Monte Carlo method to neutron, gamma-ray and coupled neutron and gamma-ray transport calculations in three-dimensional geometries, either in steady-state conditions or with a time dependence. It can be used to study problems where there is a high flux attenuation between the source zone and the result zone (studies of shielding configurations or source-driven sub-critical systems, with fission being taken into account), as well as problems where there is a low flux attenuation (neutronics calculations -- in a fuel lattice cell, for example -- where fission is taken into account, usually with the calculation of the effective multiplication factor, fine structure studies, numerical experiments to investigate method approximations, etc). TRIPOLI-3 has been operational since 1995 and is the version of the TRIPOLI code that follows on from TRIPOLI-2; it can be used on SUN, RISC6000 and HP workstations and on PCs using the Linux or Windows/NT operating systems. The code uses nuclear data libraries generated using the THEMIS/NJOY system. The current libraries were derived from ENDF/B6 and JEF2. There is also a response function library based on a number of evaluations, notably the dosimetry libraries IRDF/85 and IRDF/90, and also evaluations from JEF2. The treatment of particle transport is the same in version 3.5 as in version 3.4 of the TRIPOLI code, but version 3.5 is more convenient for preparing the input data and for reading the output. A French version of the user's manual exists. (authors)

  8. Simulations for the transmutation of nuclear wastes with hybrid reactors

    International Nuclear Information System (INIS)

    Vuillier, St.

    1998-06-01

    A Monte Carlo simulation system devoted to spallation has been built in the framework of the hybrid systems proposed for nuclear waste incineration. This system, GSPARTE, describes the evolution of the reactions. It incorporates and improves nuclear models and the transport of low- and high-energy particles in the GEANT code environment, adapted to the geometry of hybrid reactors. Many applications and design charts useful for waste transmutation have been produced with this system: thick-target neutron production, source definition, and material damage. (A.L.B.)

  9. An Aerial Robot for Rice Farm Quality Inspection With Type-2 Fuzzy Neural Networks Tuned by Particle Swarm Optimization-Sliding Mode Control Hybrid Algorithm

    DEFF Research Database (Denmark)

    Camci, Efe; Kripalan, Devesh Raju; Ma, Linlu

    2017-01-01

    In this work, an autonomous quality inspection over rice farms is proposed by employing quadcopters. Real-time control of these vehicles, however, is still challenging as they exhibit highly nonlinear behavior, especially for agile maneuvers. What is more, these vehicles have to operate under uncertain working conditions. To deal with these challenges, type-2 fuzzy neural networks (T2-FNNs) are employed, and a particle swarm optimization-sliding mode control (PSO-SMC) theory-based hybrid algorithm is proposed for the training of the T2-FNNs. In particular, a continuous version of PSO is adopted for the identification of the antecedent part of the T2-FNNs, while SMC-based update rules are utilized for online learning...

  10. A hybrid particle swarm optimization and genetic algorithm for closed-loop supply chain network design in large-scale networks

    DEFF Research Database (Denmark)

    Soleimani, Hamed; Kannan, Govindan

    2015-01-01

    Today, the growing interest in closed-loop supply chains shown by both practitioners and academia is easy to track. There are many factors that transform closed-loop supply chain issues into a unique and vital subject in supply chain management, such as environmental legislation. In this work, a hybrid particle swarm optimization and genetic algorithm for closed-loop supply chain network design is proposed, and a complete validation process is undertaken using CPLEX and MATLAB software. In small instances, the global optimum points obtained with CPLEX are compared to those of the proposed hybrid algorithm, a genetic algorithm, and particle swarm optimization. Then, in small-, mid-, and large-size instances, performances...

  11. On the Performance Analysis of Hybrid ARQ With Incremental Redundancy and With Code Combining Over Free-Space Optical Channels With Pointing Errors

    KAUST Repository

    Zedini, Emna; Chelli, Ali; Alouini, Mohamed-Slim

    2014-01-01

    In this paper, we investigate the performance of hybrid automatic repeat request (HARQ) with incremental redundancy (IR) and with code combining (CC) from an information-theoretic perspective over a point-to-point free-space optical (FSO) system. First, we introduce new closed-form expressions for the probability density function, the cumulative distribution function, the moment generating function, and the moments of an FSO link modeled by the Gamma fading channel subject to pointing errors and using intensity modulation with direct detection technique at the receiver. Based on these formulas, we derive exact results for the average bit error rate and the capacity in terms of Meijer's G functions. Moreover, we present asymptotic expressions by utilizing the Meijer's G function expansion and using the moments method, too, for the ergodic capacity approximations. Then, we provide novel analytical expressions for the outage probability, the average number of transmissions, and the average transmission rate for HARQ with IR, assuming a maximum number of rounds for the HARQ protocol. Besides, we offer asymptotic expressions for these results in terms of simple elementary functions. Additionally, we compare the performance of HARQ with IR and HARQ with CC. Our analysis demonstrates that HARQ with IR outperforms HARQ with CC.

  12. On the Performance Analysis of Hybrid ARQ With Incremental Redundancy and With Code Combining Over Free-Space Optical Channels With Pointing Errors

    KAUST Repository

    Zedini, Emna

    2014-07-16

    In this paper, we investigate the performance of hybrid automatic repeat request (HARQ) with incremental redundancy (IR) and with code combining (CC) from an information-theoretic perspective over a point-to-point free-space optical (FSO) system. First, we introduce new closed-form expressions for the probability density function, the cumulative distribution function, the moment generating function, and the moments of an FSO link modeled by the Gamma fading channel subject to pointing errors and using intensity modulation with direct detection technique at the receiver. Based on these formulas, we derive exact results for the average bit error rate and the capacity in terms of Meijer's G functions. Moreover, we present asymptotic expressions by utilizing the Meijer's G function expansion and using the moments method, too, for the ergodic capacity approximations. Then, we provide novel analytical expressions for the outage probability, the average number of transmissions, and the average transmission rate for HARQ with IR, assuming a maximum number of rounds for the HARQ protocol. Besides, we offer asymptotic expressions for these results in terms of simple elementary functions. Additionally, we compare the performance of HARQ with IR and HARQ with CC. Our analysis demonstrates that HARQ with IR outperforms HARQ with CC.

  13. Parallel pic plasma simulation through particle decomposition techniques

    International Nuclear Information System (INIS)

    Briguglio, S.; Vlad, G.; Di Martino, B.; Naples, Univ. 'Federico II'

    1998-02-01

    Particle-in-cell (PIC) codes are among the major candidates to yield a satisfactory description of the detailed kinetic effects, such as the resonant wave-particle interaction, relevant in determining the transport mechanisms in magnetically confined plasmas. A significant improvement of the simulation performance of such codes can be expected from parallelization, e.g., by distributing the particle population among several parallel processors. Parallelization of a hybrid magnetohydrodynamic-gyrokinetic code has been accomplished within the High Performance Fortran (HPF) framework, and tested on the IBM SP2 parallel system, using a 'particle decomposition' technique. The adopted technique requires a moderate effort in porting the code into parallel form and results in intrinsic load balancing and modest inter-processor communication. The performance tests confirm the high effectiveness of the strategy when targeted towards moderately parallel architectures. Optimal use of resources is also discussed with reference to a specific physics problem.
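
    The particle-decomposition strategy described above can be sketched in a few lines: every process owns an equal share of the particles and a full private copy of the grid, deposits locally, and a single collective reduction produces the global charge density. mpi4py and the 1-D nearest-grid-point deposition below are illustrative assumptions, not the HPF implementation of the paper.

```python
# Sketch of particle decomposition: each MPI rank owns an equal share of the
# particles and a private grid copy; one Allreduce per step sums the grids,
# so inter-processor communication is a single reduction per deposition.
# Run with e.g.:  mpirun -n 4 python particle_decomposition.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_grid, n_total = 128, 1_000_000
n_local = n_total // size                    # intrinsic load balance
rng = np.random.default_rng(rank)            # each rank owns its own particles
x = rng.uniform(0.0, 1.0, n_local)           # positions in [0, 1)

# Local deposition (nearest grid point) onto a private grid copy.
local_rho = np.zeros(n_grid)
cells = np.minimum((x * n_grid).astype(int), n_grid - 1)
np.add.at(local_rho, cells, 1.0)

# One collective reduction replaces all per-particle communication.
rho = np.empty_like(local_rho)
comm.Allreduce(local_rho, rho, op=MPI.SUM)

if rank == 0:
    print("total deposited charge:", rho.sum())   # equals n_local * size
```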

  14. Nuclear GUI: a Graphical User Interface for 3D discrete ordinates neutral particle transport codes in the doors and BOT3P packages

    International Nuclear Information System (INIS)

    Saintagne, P.W.; Azmy, Y.Y.

    2005-01-01

    A GUI (Graphical User Interface) provides a graphical, interactive and intuitive link between the user and the software. It translates the user's actions into information, e.g., input data, that is interpretable by the software. In order to develop an efficient GUI, it is important to master the target computational code. An initial version of a complete GUI for the DOORS and BOT3P packages for solving neutral particle transport problems in 3-dimensional geometry has been completed. This GUI is made of four components. The first component, GipGui, handles cross-sections by mixing microscopic cross-sections from different libraries. The second component, TORT-GUI, provides the user a simple way to create or modify input files for the TORT code, a general-purpose neutral-particle transport code able to solve large problems with complex configurations. The third component, GGTM-GUI, prepares the data describing the problem configuration, such as the geometry, material assignments or key flux positions. The fourth component, DTM3-GUI, helps the user to visualize TORT results by providing data for a graphics post-processor.

  15. Comparison of a 3D multi‐group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri

    Science.gov (United States)

    Wareing, Todd A.; Failla, Gregory; Horton, John L.; Eifel, Patricia J.; Mourtada, Firas

    2009-01-01

    A patient dose distribution was calculated by a 3D multi-group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX, version 2.5.c, was used to calculate the dose distribution. A 3D multi-group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between the Attila and MCNPX dose calculations at the ICRU 38 points was within ±3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by no more than ±1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations. PACS number: 87.53.Jw

  16. Nonlinear hybrid simulation of toroidicity-induced alfven eigenmode

    International Nuclear Information System (INIS)

    Fu, G.Y.; Park, W.

    1994-11-01

    Gyrokinetic/magnetohydrodynamic hybrid simulations have been carried out using the MH3D-K code to study the nonlinear saturation of the toroidicity-induced Alfven eigenmode (TAE) driven by energetic particles in a tokamak plasma. It is shown that wave-particle trapping is the nonlinear saturation mechanism for the parameters considered. The corresponding flattening of the hot-particle density profile is observed. The saturation amplitude is proportional to the square of the linear growth rate. In addition to TAE modes, a new n = 1, m = 0 global Alfven eigenmode is shown to be excited by the energetic particles.
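
    The quoted quadratic scaling is the usual signature of saturation by wave-particle trapping: the trapping (bounce) frequency of the resonant particles grows with the square root of the mode amplitude, and the mode stops growing when it becomes comparable to the linear growth rate (a standard estimate, stated here only for orientation):

    $$ \omega_b \;\propto\; \sqrt{\delta B/B}, \qquad \omega_b \simeq \gamma_L \;\Longrightarrow\; \left.\frac{\delta B}{B}\right|_{\mathrm{sat}} \;\propto\; \gamma_L^{2}. $$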

  17. A code to compute the action-angle transformation for a particle in an abritrary potential well

    International Nuclear Information System (INIS)

    Berg, J.S.; Warnock, R.L.

    1995-01-01

    For a Vlasov treatment of longitudinal stability under an arbitrary wake field, with the solution of the Haissinski equation as the unperturbed distribution, it is important to have the action-angle transformation for the distorted potential well in a convenient form. The authors have written a code that gives the transformation q,p → J,φ, with q(J,φ) expressed as a Fourier series in φ, the Fourier coefficients and the Hamiltonian H(J) being spline functions of J in C² (having continuous second derivatives).
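
    For reference, the quantities involved are the standard action-angle variables (notation assumed here, not taken from the report):

    $$ J \;=\; \frac{1}{2\pi}\oint p\,dq, \qquad q(J,\phi) \;=\; \sum_{n} q_n(J)\, e^{i n \phi}, \qquad \dot\phi \;=\; \frac{\partial H}{\partial J} \;\equiv\; \omega(J), $$

    with the Fourier coefficients q_n(J) and the Hamiltonian H(J) stored by the code as C² splines in J.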

  18. Study of the radioactive particle tracking technique using gamma-ray attenuation and MCNP-X code to evaluate industrial agitators

    Energy Technology Data Exchange (ETDEWEB)

    Dam, Roos Sophia de F.; Salgado, César M., E-mail: rsophia.dam@gmail.com, E-mail: otero@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    Agitators or mixers are widely used in the chemical, food, pharmaceutical and cosmetic industries. During the fabrication process, the equipment may fail and compromise the appropriate stirring or mixing procedure. Besides that, it is also important to determine the right point of homogeneity of the mixture. Thus, it is very important to have a diagnostic tool for these industrial units to assure the quality of the product and to maintain market competitiveness. The radioactive particle tracking (RPT) technique is widely used in the nuclear field. In this paper, a method based on the principles of the RPT technique is presented. Counts obtained by an array of detectors properly positioned around the unit are correlated to predict the instantaneous positions occupied by the radioactive particle by means of an appropriate mathematical search location algorithm. The detection geometry developed employs eight NaI(Tl) scintillator detectors and a Cs-137 (662 keV) source with isotropic emission of gamma-rays. The modeling of the detection system is performed using the Monte Carlo method, by means of the MCNP-X code. In this work a methodology is presented to predict the position of a radioactive particle, in order to evaluate the performance of agitators in industrial units, by means of an Artificial Neural Network (ANN). (author)
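
    The inversion step, mapping the eight detector count rates to a particle position with an ANN, can be sketched with a toy model. The counts below come from a crude inverse-square response with noise rather than from MCNP-X, and the detector layout, network size and units are assumptions.

```python
# Toy sketch of the ANN inversion step: a small MLP maps eight detector count
# rates to the particle position (x, y, z).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
detectors = rng.uniform(-1.0, 1.0, (8, 3))          # assumed detector positions [m]

def counts(pos):
    d2 = np.sum((detectors - pos) ** 2, axis=1)
    return 1.0e4 / (1.0 + d2) * rng.normal(1.0, 0.02, 8)   # noisy 1/r^2-like response

positions = rng.uniform(-0.5, 0.5, (5000, 3))       # particle positions in the vessel
X = np.array([counts(p) for p in positions])
X = X / X.sum(axis=1, keepdims=True)                # use the relative count pattern
X_tr, X_te, y_tr, y_te = train_test_split(X, positions, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
err = np.linalg.norm(model.predict(X_te) - y_te, axis=1)
print("mean localization error [m]:", err.mean())
```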

  19. Effects of temperature and particles concentration on the dynamic viscosity of MgO-MWCNT/ethylene glycol hybrid nanofluid: Experimental study

    Science.gov (United States)

    Soltani, Omid; Akbari, Mohammad

    2016-10-01

    In this paper, the effects of temperature and particle concentration on the dynamic viscosity of MgO-MWCNT/ethylene glycol hybrid nanofluid are examined. The experiments were carried out in the solid volume fraction range of 0 to 1.0% at temperatures ranging from 30 °C to 60 °C. The results showed that the hybrid nanofluid behaves as a Newtonian fluid for all solid volume fractions and temperatures considered. The measurements also indicated that the dynamic viscosity increases with increasing solid volume fraction and decreases with increasing temperature. The relative viscosity revealed that when the solid volume fraction increases from 0.1% to 1%, the dynamic viscosity increases by up to 168%. Finally, using the experimental data, a new correlation has been suggested to predict the dynamic viscosity of MgO-MWCNT/ethylene glycol hybrid nanofluids. The comparisons between the correlation outputs and the experimental results showed that the suggested correlation has an acceptable accuracy.

  20. Hybrid support vector regression and autoregressive integrated moving average models improved by particle swarm optimization for property crime rates forecasting with economic indicators.

    Science.gov (United States)

    Alwee, Razana; Shamsuddin, Siti Mariyam Hj; Sallehuddin, Roselina

    2013-01-01

    Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crime data, it is common that the data consist of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rate forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust when applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models.

  1. Hybrid Support Vector Regression and Autoregressive Integrated Moving Average Models Improved by Particle Swarm Optimization for Property Crime Rates Forecasting with Economic Indicators

    Directory of Open Access Journals (Sweden)

    Razana Alwee

    2013-01-01

    Full Text Available Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, real crime data commonly consist of both linear and nonlinear components, and a single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) for crime rate forecasting. SVR is very robust with small training data and high-dimensional problems, while ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, and ARIMA is not robust when applied to small data sets. Therefore, to overcome these problems, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasts than the individual models.

  2. Micromechanical analysis of a hybrid composite—effect of boron carbide particles on the elastic properties of basalt fiber reinforced polymer composite

    Science.gov (United States)

    Krishna Golla, Sai; Prasanthi, P.

    2016-11-01

    Fiber reinforced polymer (FRP) composites are important structural materials, and their diversified applications have made them a focus of interdisciplinary research. Improvements in the mechanical properties of this class of materials are, however, still being investigated for different applications. Reinforcing a composite with inorganic particles improves its structural properties because of their high stiffness. The present work is focused on predicting the mechanical properties of hybrid composites in which continuous fibers are embedded in a polypropylene matrix mixed with micro boron carbide particles. The effect of adding 30 wt. % boron carbide (B4C) particles on the longitudinal and transverse properties of the basalt fiber reinforced polymer composite is examined at various fiber volume fractions by finite element analysis (FEA). The experimental approach is the most direct way to determine composite properties, but it is expensive and time-consuming; the finite element method (FEM) and analytical methods are therefore viable alternatives. The FEM results were obtained by adopting a micromechanics approach: assuming a uniform distribution of reinforcement and considering one unit cell of the whole array, the properties of the composite material are determined. The elastic properties predicted by FEA are compared with the analytical results. The results suggest that B4C particles are a good reinforcement for enhancing the transverse properties of basalt fiber reinforced polypropylene.
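
    As a companion to the analytical micromechanics mentioned in the record, the sketch below computes rule-of-mixtures and Halpin-Tsai estimates of the longitudinal and transverse moduli, treating the B4C-filled polypropylene as a modified matrix; all property values and the particle volume fraction are assumed for illustration and are not taken from the paper.

```python
# Minimal sketch of the analytical micromechanics baseline (rule of mixtures for E1,
# Halpin-Tsai for E2). All property values below are assumed for illustration only.
E_f = 89.0      # basalt fiber modulus, GPa (assumed)
E_pp = 1.5      # neat polypropylene modulus, GPa (assumed)
E_b4c = 450.0   # boron carbide modulus, GPa (assumed)
vp = 0.15       # particle volume fraction in the matrix (stand-in for 30 wt. %)

def halpin_tsai(E_m, E_r, v_r, xi=2.0):
    """Halpin-Tsai estimate of a two-phase modulus (matrix E_m, reinforcement E_r)."""
    eta = (E_r / E_m - 1.0) / (E_r / E_m + xi)
    return E_m * (1.0 + xi * eta * v_r) / (1.0 - eta * v_r)

E_m = halpin_tsai(E_pp, E_b4c, vp)          # modified (particle-filled) matrix modulus

for vf in (0.3, 0.4, 0.5, 0.6):
    E1 = vf * E_f + (1.0 - vf) * E_m        # longitudinal: rule of mixtures
    E2 = halpin_tsai(E_m, E_f, vf)          # transverse: Halpin-Tsai
    print(f"Vf={vf:.1f}:  E1={E1:6.2f} GPa   E2={E2:5.2f} GPa")
```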

  3. Type I Collagen and Strontium-Containing Mesoporous Glass Particles as Hybrid Material for 3D Printing of Bone-Like Materials.

    Science.gov (United States)

    Montalbano, Giorgia; Fiorilli, Sonia; Caneschi, Andrea; Vitale-Brovarone, Chiara

    2018-04-28

    Bone tissue engineering offers a promising alternative for treating a large number of bone injuries, with a special focus on pathological conditions such as osteoporosis. In this scenario, bone tissue regeneration may be promoted using bioactive and biomimetic materials able to direct the cell response, while the desired scaffold architecture can be tailored by means of 3D printing technologies. In this context, our study aimed to develop a hybrid bioactive material suitable for 3D printing of scaffolds mimicking the natural composition and structure of healthy bone. Type I collagen and strontium-containing mesoporous bioactive glasses were combined to obtain suspensions able to undergo a sol-gel transition under physiological conditions. Field emission scanning electron microscopy (FESEM) analyses confirmed the formation of fibrous nanostructures homogeneously embedding the inorganic particles, whereas bioactivity studies demonstrated extensive calcium phosphate deposition. The high water content promoted strontium ion release from the embedded glass particles, potentially enhancing the osteogenic behaviour of the composite. Furthermore, the printability of the suspension was assessed by means of rheological studies and preliminary extrusion tests, showing shear thinning and fast material recovery upon deposition. In conclusion, the reported results suggest that promising hybrid systems suitable for 3D printing of bioactive scaffolds for bone tissue engineering have been developed.

  4. Final Report (1994 to 1996) Diagnostic of the Spatial and Velocity Distribution of Alpha Particles in Tokamak Fusion Reactor using Beat-wave Generated Lower Hybrid Wave

    International Nuclear Information System (INIS)

    Hwang, D.Q.; Horton, R.D.; Evans, R.W.

    1999-01-01

    The alpha particles in a fusion reactor play a key role in sustaining the fusion reaction: it is the heating provided by the alpha particles that allows a fusion reactor to operate in the ignition regime. It is therefore essential to understand the behavior of the alpha population in both real space and velocity space in order to design an optimal confinement device for fusion applications. Moreover, the alphas represent a strong source of free energy that may drive plasma instabilities; theoretical studies have identified the Toroidal Alfven Eigenmode (TAE) as an instability that can be excited by the alpha population in a toroidal device. Since the alphas have an energy of 3.5 MeV, a good confinement device will retain them in the interior of the plasma, so an alpha measurement system needs to probe the interior of a high-density plasma. Due to the conducting nature of a plasma, waves with frequencies below the plasma frequency cannot penetrate into the interior of the plasma where the alphas reside. This project uses a wave that can interact with the perpendicular motion of the alphas to probe their characteristics. However, this wave (the lower hybrid wave) is below the plasma frequency and cannot be launched directly from the plasma edge. The project was therefore designed to non-linearly excite the lower hybrid wave in the interior of a magnetized plasma and to measure its interaction with a fast ion population.

  5. Ni-polymer nanogel hybrid particles: A new strategy for hydrogen production from the hydrolysis of dimethylamine-borane and sodium borohydride

    International Nuclear Information System (INIS)

    Cai, Haokun; Liu, Liping; Chen, Qiang; Lu, Ping; Dong, Jian

    2016-01-01

    Efficient non-precious metal catalysts are crucial for hydrogen production from borohydride compounds in aqueous media via hydrogen atoms in water. A method for preparing magnetic polymer nanoparticles is developed in this study based on the chemical deposition of nickel onto hydrophilic polymer nanogels. High-resolution transmission electron microscopic and XPS analyses show that Ni exists mainly in the form of NiO in the nanogels. Excellent catalytic activities of the nanoparticles are demonstrated for hydrogen generation from the hydrolysis of dimethylamine-borane and sodium borohydride, for which the initial TOF (turn-over frequencies) are 376 and 1919 h⁻¹, respectively. Kinetic studies also reveal an Arrhenius activation energy of 50.96 kJ mol⁻¹ for the hydrolysis of dimethylamine-borane and 47.82 kJ mol⁻¹ for the hydrolysis of sodium borohydride, which are lower than those catalyzed by Ru metal. Excellent reusability and the use of water for hydrogen production from dimethylamine-borane provide the additional benefit of using a hybrid catalyst. The principle illustrated in the present study offers a new strategy to explore polymer-transition metal hybrid particles for hydrogen energy technology. - Highlights: • Electroless Ni plating on polymer nanogels generated recyclable catalysts. • The Ni particles proved efficient for H2 production from borohydride compounds. • The catalysts have lower activation energies than Ru for the hydrolysis. • Borohydride hydrolysis is more beneficial than dehydrogenation in organic solvent.

  6. Enhanced performance of P(VDF-HFP)-based composite polymer electrolytes doped with organic-inorganic hybrid particles PMMA-ZrO2 for lithium ion batteries

    Science.gov (United States)

    Xiao, Wei; Wang, Zhiyan; Zhang, Yan; Fang, Rui; Yuan, Zun; Miao, Chang; Yan, Xuemin; Jiang, Yu

    2018-04-01

    To improve the ionic conductivity and enhance the mechanical strength of the gel polymer electrolyte, poly(vinylidene fluoride-hexafluoropropylene) (P(VDF-HFP))-based composite polymer electrolyte (CPE) membranes doped with the organic-inorganic hybrid particles poly(methyl methacrylate)-ZrO2 (PMMA-ZrO2) are prepared by a phase inversion method, in which PMMA is successfully grafted onto the surface of homemade nano-ZrO2 particles via in situ polymerization, as confirmed by FT-IR. XRD and DSC patterns show that adding PMMA-ZrO2 particles to P(VDF-HFP) significantly decreases the crystallinity of the CPE membrane. The CPE membrane doped with 5 wt % PMMA-ZrO2 particles not only presents a homogeneous surface with abundant interconnected micro-pores, but also maintains its initial shape after thermal exposure at 160 °C for 1 h; its room-temperature ionic conductivity and lithium ion transference number reach 3.59 × 10⁻³ S cm⁻¹ and 0.41, respectively. The fitting results of the EIS plots indicate that the doped PMMA-ZrO2 particles significantly lower the interface resistance and increase the lithium ion diffusion rate. The Li/CPE-sPZ/LiCoO2 and Li/CPE-sPZ/Graphite coin cells deliver excellent rate and cycling performance. These results suggest that the P(VDF-HFP)-based CPE doped with 5 wt % PMMA-ZrO2 particles is a promising candidate polymer electrolyte for lithium ion batteries.

  7. Polyelectrolyte mediated nano hybrid particle as a nano-sensor with outstandingly amplified specificity and sensitivity for enzyme free estimation of cholesterol.

    Science.gov (United States)

    Chebl, Mazhar; Moussa, Zeinab; Peurla, Markus; Patra, Digambara

    2017-07-01

    As a proof of concept, it is established here that curcumin-integrated chitosan oligosaccharide lactate (COL) self-assembles on a silica nanoparticle surface to form nano hybrid particles (NHPs). These NHPs have sizes in the range of 25-35 nm, with a silica nanoparticle core and a curcumin-COL outer layer 4-8 nm thick. The fluorescence intensity of these NHPs is found to be quenched and the emission maximum is ~50 nm red shifted compared to free curcumin, implying an inner filter effect and/or homo-FRET between curcumin molecules present on the surface of an individual nano hybrid particle. Although the fluorescence of free curcumin is remarkably quenched by Hg2+/Cu2+ ions due to chelation through the keto-enol form, the fluorescence of the NHPs is unaffected by Hg2+/Cu2+ ions, which boosts analytical selectivity. The fluorescence intensity is markedly enhanced in the presence of cholesterol but is not influenced by ascorbic acid, uric acid, glucose, albumin, lipid and other potential interfering substances that either obstruct enzymatic reactions or affect the fluorescence of free curcumin. Thus, NHPs substantially improve analytical specificity, selectivity and sensitivity in cholesterol estimation compared to free curcumin. The interaction between cholesterol and the NHPs is found to be a combination of ground-state electrostatic interaction through the free hydroxyl group of cholesterol, hydrophobic interaction between the NHPs and cholesterol, and excited-state interaction. The proposed cholesterol biosensor offers a wide linear dynamic range, 0.002-10 mmol L⁻¹ (the upper limit being set by the solubility of cholesterol), suitable for biomedical application and better than values reported for enzymatic methods. In addition, the NHPs are found to be photo-stable, potentially making them suitable for simple, quick and cost-effective cholesterol estimation and opening an alternative approach other than enzymatic reaction using nano hybrid structure to

  8. User's manual for ONEDANT: a code package for one-dimensional, diffusion-accelerated, neutral-particle transport

    International Nuclear Information System (INIS)

    O'Dell, R.D.; Brinkley, F.W. Jr.; Marr, D.R.

    1982-02-01

    ONEDANT is designed for the CDC-7600, but the program has been implemented and run on the IBM-370/190 and CRAY-I computers. ONEDANT solves the one-dimensional multigroup transport equation in plane, cylindrical, spherical, and two-angle plane geometries. Both regular and adjoint, inhomogeneous and homogeneous (k_eff and eigenvalue search) problems subject to vacuum, reflective, periodic, white, albedo, or inhomogeneous boundary flux conditions are solved. General anisotropic scattering is allowed and anisotropic inhomogeneous sources are permitted. ONEDANT numerically solves the one-dimensional, multigroup form of the neutral-particle, steady-state form of the Boltzmann transport equation. The discrete-ordinates approximation is used for treating the angular variation of the particle distribution and the diamond-difference scheme is used for phase space discretization. Negative fluxes are eliminated by a local set-to-zero-and-correct algorithm. A standard inner (within-group) iteration, outer (energy-group-dependent source) iteration technique is used. Both inner and outer iterations are accelerated using the diffusion synthetic acceleration method
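
    To illustrate the discrete-ordinates machinery the record names, the sketch below performs an unaccelerated inner (within-group) source iteration with diamond differencing for a single energy group in slab geometry; the cross sections, slab width, quadrature order and flat source are assumptions rather than ONEDANT input, and the diffusion synthetic acceleration is omitted.

```python
# Minimal sketch of the inner (within-group) source iteration with diamond
# differencing, one energy group, slab geometry, S8 quadrature (all assumed).
import numpy as np

nx, width = 100, 10.0                  # mesh cells, slab width (cm)
dx = width / nx
sigma_t, sigma_s, q = 1.0, 0.5, 1.0    # total, isotropic scattering, flat source (assumed)

mu, w = np.polynomial.legendre.leggauss(8)   # S8 Gauss-Legendre quadrature
phi = np.zeros(nx)

for it in range(200):
    source = 0.5 * (sigma_s * phi + q)       # isotropic within-group angular source
    phi_new = np.zeros(nx)
    for m in range(len(mu)):
        psi_edge = 0.0                        # vacuum boundary on the incoming side
        cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
        for i in cells:
            # diamond-difference cell-average angular flux
            psi_avg = (source[i] + 2.0 * abs(mu[m]) / dx * psi_edge) / \
                      (sigma_t + 2.0 * abs(mu[m]) / dx)
            psi_edge = 2.0 * psi_avg - psi_edge
            phi_new[i] += w[m] * psi_avg
    if np.max(np.abs(phi_new - phi)) < 1e-6:
        break
    phi = phi_new

print(f"converged in {it} inner iterations, midplane scalar flux = {phi[nx//2]:.4f}")
```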

  9. Simulations of particle and heat fluxes in an ELMy H-mode discharge on EAST using BOUT++ code

    Science.gov (United States)

    Wu, Y. B.; Xia, T. Y.; Zhong, F. C.; Zheng, Z.; Liu, J. B.; EAST Team

    2018-05-01

    In order to study the distribution and evolution of the transient particle and heat fluxes during edge-localized mode (ELM) bursts on the Experimental Advanced Superconducting Tokamak (EAST), the BOUT++ six-field two-fluid model is used to simulate the pedestal collapse. The profiles from the EAST H-mode discharge #56129 are used as the initial conditions. Linear analysis shows that the resistive ballooning mode and the drift-Alfven wave are the two dominant instabilities for this equilibrium and play important roles in driving ELMs. The evolution of the density profile and the growth of the heat flux at the divertor targets during the ELM bursts are reproduced. The time evolution of the poloidal structures of Te is well simulated, and the dominant mode in each stage of the ELM crash process is identified. The studies show that during the nonlinear phase the dominant mode number is 5, and it changes to 0 when the nonlinear phase saturates after the ELM crash. The time evolution of the radial electron heat flux, ion heat flux, and particle density flux at the outer midplane (OMP) is obtained, and the corresponding transport coefficients Dr, χir, and χer reach maxima of around 0.3-0.5 m² s⁻¹ at ΨN = 0.9. The heat fluxes at the outer target plates are several times larger than those at the inner target plates, which is consistent with the experimental observations. The simulated profiles of the ion saturation current density (js) at the lower outboard (LO) divertor target are compared to those measured experimentally by Langmuir probes. The profiles near the strike point are similar, and the peak values of js from the simulation are very close to the measurements.

  10. High-Fidelity RF Gun Simulations with the Parallel 3D Finite Element Particle-In-Cell Code Pic3P

    Energy Technology Data Exchange (ETDEWEB)

    Candel, A; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Schussman, G.; Ko, K.; /SLAC

    2009-06-19

    SLAC's Advanced Computations Department (ACD) has developed the first parallel Finite Element 3D Particle-In-Cell (PIC) code, Pic3P, for simulations of RF guns and other space-charge dominated beam-cavity interactions. Pic3P solves the complete set of Maxwell-Lorentz equations and thus includes space charge, retardation and wakefield effects from first principles. Pic3P uses higher-order Finite Element methods on unstructured conformal meshes. A novel scheme for causal adaptive refinement and dynamic load balancing enables unprecedented simulation accuracy, aiding the design and operation of the next generation of accelerator facilities. Application to the Linac Coherent Light Source (LCLS) RF gun is presented.

  11. An integrated high-performance beam optics-nuclear processes framework with hybrid transfer map-Monte Carlo particle transport and optimization

    Energy Technology Data Exchange (ETDEWEB)

    Bandura, L., E-mail: bandura@msu.ed [Argonne National Laboratory, Argonne, IL 60439 (United States); Erdelyi, B. [Argonne National Laboratory, Argonne, IL 60439 (United States); Northern Illinois University, DeKalb, IL 60115 (United States); Nolen, J. [Argonne National Laboratory, Argonne, IL 60439 (United States)

    2010-12-01

    An integrated beam optics-nuclear processes framework is essential for accurate simulation of fragment separator beam dynamics. The code COSY INFINITY provides powerful differential algebraic methods for modeling and beam dynamics simulations in the absence of beam-material interactions. However, these interactions are key for accurately simulating the dynamics of heavy ion fragmentation and fission. We have developed an extended version of the code that includes these interactions, and a set of new tools that allow efficient and accurate particle transport: by transfer map in vacuum and by Monte Carlo methods in materials. The new framework is presented, along with several examples from a preliminary layout of a fragment separator for a facility for rare isotope beams.
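
    The sketch below only illustrates the hybrid transport idea named in the record: a linear transfer map advances the particle ensemble through vacuum elements, while a Monte Carlo scattering kick is applied inside material. The matrices, beam parameters and Gaussian scattering angle are illustrative assumptions and are not COSY INFINITY's differential-algebraic maps.

```python
# Minimal sketch of hybrid transport: transfer map in vacuum, Monte Carlo in material.
import numpy as np

rng = np.random.default_rng(2)

def drift(L):
    """4x4 transfer matrix of a drift of length L acting on (x, x', y, y')."""
    M = np.eye(4)
    M[0, 1] = M[2, 3] = L
    return M

def thin_quad(f):
    """Thin-lens quadrupole: focusing in x (focal length f), defocusing in y."""
    M = np.eye(4)
    M[1, 0], M[3, 2] = -1.0 / f, 1.0 / f
    return M

def material_scatter(X, theta0=2e-3):
    """Monte Carlo step in material: Gaussian multiple-scattering angle kick."""
    X[:, 1] += rng.normal(0.0, theta0, len(X))
    X[:, 3] += rng.normal(0.0, theta0, len(X))
    return X

# initial ensemble: 1 mm, 1 mrad rms beam (assumed)
X = rng.normal(0.0, [1e-3, 1e-3, 1e-3, 1e-3], size=(10000, 4))

X = X @ drift(1.0).T                      # vacuum: map-based transport
X = material_scatter(X)                   # material: stochastic transport
X = X @ thin_quad(0.8).T @ drift(1.0).T   # quad then drift, again by map

print("rms x, y after the line (mm):", 1e3 * X[:, 0].std(), 1e3 * X[:, 2].std())
```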

  12. An integrated high-performance beam optics-nuclear processes framework with hybrid transfer map-Monte Carlo particle transport and optimization

    International Nuclear Information System (INIS)

    Bandura, L.; Erdelyi, B.; Nolen, J.

    2010-01-01

    An integrated beam optics-nuclear processes framework is essential for accurate simulation of fragment separator beam dynamics. The code COSY INFINITY provides powerful differential algebraic methods for modeling and beam dynamics simulations in the absence of beam-material interactions. However, these interactions are key for accurately simulating the dynamics of heavy ion fragmentation and fission. We have developed an extended version of the code that includes these interactions, and a set of new tools that allow efficient and accurate particle transport: by transfer map in vacuum and by Monte Carlo methods in materials. The new framework is presented, along with several examples from a preliminary layout of a fragment separator for a facility for rare isotope beams.

  13. DOUBLE code simulations of emissivities of fast neutrals for different plasma observation view-lines of neutral particle analyzers on the COMPASS tokamak

    Science.gov (United States)

    Mitosinkova, K.; Tomes, M.; Stockel, J.; Varju, J.; Stano, M.

    2018-03-01

    Neutral particle analyzers (NPA) measure line-integrated energy spectra of fast neutral atoms escaping the tokamak plasma, which are a product of charge-exchange (CX) collisions of plasma ions with background neutrals. They can observe variations in the ion temperature Ti as well as the non-thermal fast ions created by additional plasma heating. However, the plasma column which a fast atom has to pass through must be sufficiently short in comparison with the fast atom's mean free path. The tokamak COMPASS is currently equipped with one NPA installed at a tangential mid-plane port. This orientation is optimal for observing non-thermal fast ions; however, in this configuration the signal at energies useful for Ti derivation is lost in noise because the fast atoms' trajectories are too long. Thus, a second NPA is planned to be connected for the purpose of measuring Ti. We analyzed different possible view-lines (perpendicular mid-plane, tangential mid-plane, and top view) for the second NPA using the DOUBLE Monte-Carlo code and compared the results with the performance of the present NPA with tangential orientation. The DOUBLE code provides fast-atom emissivity functions along the NPA view-line. The position of the median of these emissivity functions is related to the location from which the measured signal originates. Further, we compared the difference between the real central Ti used as a DOUBLE code input and the TiCX derived from the exponential decay of the simulated energy spectra. The advantages and disadvantages of each NPA location are discussed.

  14. Production of secondary particles and nuclei in cosmic rays collisions with the interstellar gas using the FLUKA code

    CERN Document Server

    Mazziotta, M N; Ferrari, A; Gaggero, D; Loparco, F; Sala, P R

    2016-01-01

    The measured fluxes of secondary particles produced by the interactions of Cosmic Rays (CRs) with the astronomical environment play a crucial role in understanding the physics of CR transport. In this work we present a comprehensive calculation of the secondary hadron, lepton, gamma-ray and neutrino yields produced by the inelastic interactions between several species of stable or long-lived cosmic rays projectiles (p, D, T, 3He, 4He, 6Li, 7Li, 9Be, 10Be, 10B, 11B, 12C, 13C, 14C, 14N, 15N, 16O, 17O, 18O, 20Ne, 24Mg and 28Si) and different target gas nuclei (p, 4He, 12C, 14N, 16O, 20Ne, 24Mg, 28Si and 40Ar). The yields are calculated using FLUKA, a simulation package designed to compute the energy distributions of secondary products with large accuracy in a wide energy range. The present results provide, for the first time, a complete and self-consistent set of all the relevant inclusive cross sections regarding the whole spectrum of secondary products in nuclear collisions. We cover, for the projectiles, a ki...

  15. Phase and electrical properties of PZT thin films embedded with CuO nano-particles by a hybrid sol-gel route

    Science.gov (United States)

    Sreesattabud, Tharathip; Gibbons, Brady J.; Watcharapasorn, Anucha; Jiansirisomboon, Sukanda

    2013-07-01

    Pb(Zr0.52Ti0.48)O3 (PZT) thin films embedded with CuO nano-particles were successfully prepared by a hybrid sol-gel process. In this process, CuO (0, 0.1, 0.2, 0.3, 0.4, 0.5 and 1 wt. %) nanopowder was suspended in an organometallic solution of PZT and then coated on a platinised silicon substrate using a spin-coating technique. The influence of the CuO nano-particle dispersion on the phase of the PZT thin films was investigated. XRD results showed a perovskite phase in all films; at CuO concentrations of 0.4-1 wt. %, a second phase was observed. The addition of CuO nano-particles affected the orientation of the PZT thin films and was also found to reduce their ferroelectric properties. However, at 0.2 wt. % CuO concentration, the film exhibited good ferroelectric properties similar to those of pure PZT films. In addition, the fatigue retention properties of the PZT/CuO system were examined: the films showed 14% fatigue after 10⁸ bipolar switching pulse cycles, while the fatigue in pure PZT thin films was found to be 17% after the same number of cycles.

  16. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code

    International Nuclear Information System (INIS)

    White, Morgan C.

    2000-01-01

    The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for the characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for the simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class "u" A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V and V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to

  17. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    White, Morgan C. [Univ. of Florida, Gainesville, FL (United States)

    2000-07-01

    The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for the characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for the simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class "u" A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second

  18. Total Particle Number Emissions from Modern Diesel, Natural Gas, and Hybrid Heavy-Duty Vehicles During On-Road Operation.

    Science.gov (United States)

    Wang, Tianyang; Quiros, David C; Thiruvengadam, Arvind; Pradhan, Saroj; Hu, Shaohua; Huai, Tao; Lee, Eon S; Zhu, Yifang

    2017-06-20

    Particle emissions from heavy-duty vehicles (HDVs) have significant environmental and public health impacts. This study measured total particle number emission factors (PNEFs) from six newly certified HDVs powered by diesel and compressed natural gas totaling over 6800 miles of on-road operation in California. Distance-, fuel- and work-based PNEFs were calculated for each vehicle. Distance-based PNEFs of vehicles equipped with original equipment manufacturer (OEM) diesel particulate filters (DPFs) in this study have decreased by 355-3200 times compared to a previous retrofit DPF dynamometer study. Fuel-based PNEFs were consistent with previous studies measuring plume exhaust in the ambient air. Meanwhile, on-road PNEF shows route and technology dependence. For vehicles with OEM DPFs and Selective Catalytic Reduction Systems, PNEFs under highway driving (i.e., 3.34 × 10¹² to 2.29 × 10¹³ particles/mile) were larger than those measured on urban and drayage routes (i.e., 5.06 × 10¹¹ to 1.31 × 10¹³ particles/mile). This is likely because a significant amount of nucleation mode volatile particles were formed when the DPF outlet temperature reached a critical value, usually over 310 °C, which was commonly achieved when vehicle speed sustained over 45 mph. A model year 2013 diesel HDV produced approximately 10 times higher PNEFs during DPF active regeneration events than nonactive regeneration.

  19. Novel preparation of controlled porosity particle/fibre loaded scaffolds using a hybrid micro-fluidic and electrohydrodynamic technique.

    Science.gov (United States)

    Parhizkar, Maryam; Sofokleous, Panagiotis; Stride, Eleanor; Edirisinghe, Mohan

    2014-11-27

    The purpose of this research was to produce multi-dimensional scaffolds containing biocompatible particles and fibres. To achieve this, two techniques were combined: T-Junction microfluidics and electrohydrodynamic (EHD) processing. The former was used to form layers of monodispersed bovine serum albumin (BSA) bubbles, which upon drying formed porous scaffolds. By altering the T-Junction processing parameters, bubbles with different diameters were produced and hence the scaffold porosity could be controlled. EHD processing was used to spray or spin poly(lactic-co-glycolic acid) (PLGA), polymethylsilsesquioxane (PMSQ) and collagen particles/fibres onto the scaffolds during their production and after drying. As a result, multifunctional BSA scaffolds with controlled porosity containing PLGA, PMSQ and collagen particles/fibres were obtained. Product morphology was studied by optical and scanning electron microscopy. These products have potential applications in many advanced biomedical, pharmaceutical and cosmetic fields, e.g. bone regeneration, drug delivery, cosmetic cream lathers, facial scrubbing creams etc.

  20. ML-Space: Hybrid Spatial Gillespie and Particle Simulation of Multi-Level Rule-Based Models in Cell Biology.

    Science.gov (United States)

    Bittig, Arne T; Uhrmacher, Adelinde M

    2017-01-01

    Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations via the spatial Stochastic Simulation Algorithm to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically and hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact descriptions of models and allows the spatial resolution of models to be adapted easily.
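
    For reference, the sketch below shows the non-spatial Gillespie direct method on a single reversible dimerisation reaction, which is the stochastic-simulation building block the record alludes to; the rate constants and initial copy numbers are assumptions, and ML-Space's compartments, spatial lattice and particle tracking are not reproduced.

```python
# Minimal sketch of the Gillespie direct method (non-spatial) for A + A <-> B;
# rate constants and initial copy numbers are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(3)
k_on, k_off = 0.002, 0.1
A, B, t, t_end = 200, 0, 0.0, 50.0

while t < t_end:
    a1 = k_on * A * (A - 1) / 2       # propensity of dimerisation A + A -> B
    a2 = k_off * B                    # propensity of dissociation B -> A + A
    a0 = a1 + a2
    if a0 == 0.0:
        break
    t += rng.exponential(1.0 / a0)    # time to next reaction
    if rng.random() < a1 / a0:        # choose which reaction fires
        A, B = A - 2, B + 1
    else:
        A, B = A + 2, B - 1

print(f"t = {t:.1f}: A = {A}, B = {B}")
```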

  1. Hybrid Recurrent Laguerre-Orthogonal-Polynomial NN Control System Applied in V-Belt Continuously Variable Transmission System Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Chih-Hong Lin

    2015-01-01

    Full Text Available Because the V-belt continuously variable transmission (CVT) system driven by a permanent magnet synchronous motor (PMSM) has many unknown nonlinear and time-varying characteristics, designing a linear controller with good control performance is a time-consuming procedure. In order to overcome the difficulties of linear controller design, a hybrid recurrent Laguerre-orthogonal-polynomial neural network (NN) control system, which has online learning ability to respond to the system's nonlinear and time-varying behaviors, is proposed to control the PMSM servo-driven V-belt CVT system under lumped nonlinear load disturbances. The hybrid recurrent Laguerre-orthogonal-polynomial NN control system consists of an inspector control, a recurrent Laguerre-orthogonal-polynomial NN control with an adaptive law, and a recouped control with an estimated law. Moreover, the adaptive law of the online parameters in the recurrent Laguerre-orthogonal-polynomial NN is derived using the Lyapunov stability theorem. Furthermore, the optimal learning rate of the parameters is determined by means of modified particle swarm optimization (PSO) to achieve fast convergence. Finally, to show the effectiveness of the proposed control scheme, comparative studies are demonstrated by experimental results.

  2. Recent Progress on the Marylie/Impact Beam Dynamics Code

    International Nuclear Information System (INIS)

    Ryne, R.D.; Qiang, J.; Bethel, E.W.; Pogorelov, I.; Shalf, J.; Siegerist, C.; Venturini, M.; Dragt, A.J.; Adelmann, A.; Abell, D.; Amundson, J.; Spentzouris, P.; Neri, F.; Walstrom, P.; Mottershead, C.T.; Samulyak, R.

    2006-01-01

    MARYLIE/IMPACT (ML/I) is a hybrid code that combines the beam optics capabilities of MARYLIE with the parallel Particle-In-Cell capabilities of IMPACT. In addition to combining the capabilities of these codes, ML/I has a number of powerful features, including a choice of Poisson solvers, a fifth-order rf cavity model, multiple reference particles for rf cavities, a library of soft-edge magnet models, representation of magnet systems in terms of coil stacks with possibly overlapping fields, and wakefield effects. The code allows for map production, map analysis, particle tracking, and 3D envelope tracking, all within a single, coherent user environment. ML/I has a front end that can read both MARYLIE input and MAD lattice descriptions. The code can model beams with or without acceleration, and with or without space charge. Developed under a US DOE Scientific Discovery through Advanced Computing (SciDAC) project, ML/I is well suited to large-scale modeling, simulations having been performed with up to 100M macroparticles. The code inherits the powerful fitting and optimizing capabilities of MARYLIE augmented for the new features of ML/I. The combination of soft-edge magnet models, high-order capability, space charge effects, and fitting/optimization capabilities, make ML/I a powerful code for a wide range of beam optics design problems. This paper provides a description of the code and its unique capabilities

  3. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Peiyuan [Univ. of Colorado, Boulder, CO (United States); Brown, Timothy [Univ. of Colorado, Boulder, CO (United States); Fullmer, William D. [Univ. of Colorado, Boulder, CO (United States); Hauser, Thomas [Univ. of Colorado, Boulder, CO (United States); Hrenya, Christine [Univ. of Colorado, Boulder, CO (United States); Grout, Ray [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sitaraman, Hariswaran [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-29

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10³ cores. Profiling of the benchmark problems indicates that most of the computational time is spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments of the code as well as a preliminary indicator of where to best focus performance optimizations.

  4. Computer codes incorporating pre-equilibrium decay

    International Nuclear Information System (INIS)

    Prince, A.

    1980-01-01

    After establishing the need to describe the high-energy particle spectrum that is evident in the experimental data, the various models used in its interpretation are presented. These include: a) the Cascade Model; b) the Fermi-Gas Relaxation Model; c) the Exciton Model; d) the Hybrid and Geometry-Dependent Model. The description of the codes and the preparation of input data for STAPRE were presented (Dr. Strohmaier). A simulated output was produced for a given input, and comparison with experimental data substantiated the rather sophisticated treatment. (author)

  5. Hybrid petacomputing meets cosmology: The Roadrunner Universe project

    International Nuclear Information System (INIS)

    Habib, Salman; Pope, Adrian; Lukic, Zarija; Daniel, David; Fasel, Patricia; Desai, Nehal; Heitmann, Katrin; Hsu, Chung-Hsing; Ankeny, Lee; Mark, Graham; Bhattacharya, Suman; Ahrens, James

    2009-01-01

    The target of the Roadrunner Universe project at Los Alamos National Laboratory is a set of very large cosmological N-body simulation runs on the hybrid supercomputer Roadrunner, the world's first petaflop platform. Roadrunner's architecture presents opportunities and difficulties characteristic of next-generation supercomputing. We describe a new code designed to optimize performance and scalability by explicitly matching the underlying algorithms to the machine architecture, and by using the physics of the problem as an essential aid in this process. While applications will differ in specific exploits, we believe that such a design process will become increasingly important in the future. The Roadrunner Universe project code, MC³ (Mesh-based Cosmology Code on the Cell), uses grid and direct particle methods to balance the capabilities of Roadrunner's conventional (Opteron) and accelerator (Cell BE) layers. Mirrored particle caches and spectral techniques are used to overcome communication bandwidth limitations and possible difficulties with complicated particle-grid interaction templates.

  6. Optimal reactive power and voltage control in distribution networks with distributed generators by fuzzy adaptive hybrid particle swarm optimisation method

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Su, Chi

    2015-01-01

    A new and efficient methodology for optimal reactive power and voltage control of distribution networks with distributed generators, based on fuzzy adaptive hybrid PSO (FAHPSO), is proposed. The objective is to minimize the comprehensive cost, consisting of power loss and the operation cost of transformers (...). The algorithm is implemented in the VC++ 6.0 program language, and the corresponding numerical experiments are carried out on a modified version of the IEEE 33-node distribution system with two newly installed distributed generators and eight newly installed capacitor banks. The numerical results prove that the proposed method can search a more promising control schedule of all transformers, all capacitors and all distributed generators with less time consumption, compared with the other artificial intelligence methods considered.

  7. Diagnostic of the spatial and velocity distribution of alpha particles in tokamak fusion reactor using beat-wave generated lower hybrid wave. Progress report, 1994-1995

    International Nuclear Information System (INIS)

    Hwang, D.Q.; Horton, R.D.; Evans, R.

    1995-01-01

    The alpha particle population from fusion reactions in a DT tokamak reactor can have dramatic effects on the pressure profiles, energetic particle confinement, and the overall stability of the plasma, thus leading to important design considerations for a fusion reactor based on the tokamak concept. In order to fully understand the effects of the alpha population, a non-invasive diagnostic technique suitable for use in a reacting plasma environment needs to be developed to map out both the spatial and velocity distributions of the alphas. The proposed experimental goals for the eventual demonstration of LH wave interaction with a fast ion population are given in the reduced 3 year plan in table 1. At the present time the authors are approaching the 8th month of the first year of this project. Up to now, their main effort has been concentrated on the operation of the two beat wave sources in burst mode. The second priority in the experimental project is the probe diagnostics and the computer-aided data acquisition system. The progress made so far is reported, and they are ready to perform the beat-wave generated lower hybrid wave experiment. Some theoretical calculations have been reported at APS meetings. More refined theoretical models are being constructed in collaboration with Drs. J. Rogers and E. Valeo at PPPL

  8. Numerical Modeling and Investigation of Fluid-Driven Fracture Propagation in Reservoirs Based on a Modified Fluid-Mechanically Coupled Model in Two-Dimensional Particle Flow Code

    Directory of Open Access Journals (Sweden)

    Jian Zhou

    2016-09-01

    Full Text Available Hydraulic fracturing is a useful tool for enhancing rock mass permeability for shale gas development, enhanced geothermal systems, and geological carbon sequestration by the high-pressure injection of a fracturing fluid into tight reservoir rocks. Although significant advances have been made in hydraulic fracturing theory, experiments, and numerical modeling, knowledge of the effects of complex geological conditions is still limited. The mechanisms of fluid injection-induced fracture initiation and propagation should be better understood to take full advantage of hydraulic fracturing. This paper presents the development and application of discrete particle modeling based on the two-dimensional particle flow code (PFC2D). Firstly, it is shown that the modeled value of the breakdown pressure for the hydraulic fracturing process is approximately equal to analytically calculated values under varied in situ stress conditions. Furthermore, a series of simulations of hydraulic fracturing in competent rock was performed to examine the influence of the in situ stress ratio, fluid injection rate, and fluid viscosity on the borehole pressure history, the geometry of the hydraulic fractures, and the pore-pressure field, respectively. It was found that the hydraulic fractures in an isotropic medium always propagate parallel to the orientation of the maximum principal stress. When a high fluid injection rate is used, a higher breakdown pressure is needed for fracture propagation and complex fracture geometries can develop. When a low-viscosity fluid is used, fluid can more easily penetrate from the borehole into the surrounding rock, which causes a reduction of the effective stress and leads to a lower breakdown pressure. Moreover, the geometry of the fractures is not particularly sensitive to the fluid viscosity in the approximately isotropic model.

  9. Test particles dynamics in the JOREK 3D non-linear MHD code and application to electron transport in a disruption simulation

    Science.gov (United States)

    Sommariva, C.; Nardon, E.; Beyer, P.; Hoelzl, M.; Huijsmans, G. T. A.; van Vugt, D.; Contributors, JET

    2018-01-01

    In order to contribute to the understanding of runaway electron generation mechanisms during tokamak disruptions, a test particle tracker is introduced in the JOREK 3D non-linear MHD code, able to compute both full-orbit and guiding-center relativistic orbits. Tests of the module show good conservation of the invariants of motion and consistency between full-orbit and guiding-center solutions. A first application is presented in which test electron confinement properties are investigated in a massive gas injection-triggered disruption simulation in JET-like geometry. It is found that electron populations initialised before the thermal quench (TQ) are typically not fully deconfined in spite of the global stochasticity of the magnetic field during the TQ. The fraction of 'survivors' decreases from a few tens of percent down to a few tenths of a percent as the electron energy varies from 1 keV to 10 MeV. The underlying mechanism for electron 'survival' is the prompt reformation of closed magnetic surfaces at the plasma core and, to a smaller extent, the subsequent reappearance of a magnetic surface at the edge. It is also found that electrons are less deconfined at 10 MeV than at 1 MeV, which appears consistent with a phase-averaging effect due to orbit shifts at high energy.
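
    As a pointer to what a full-orbit test-particle push involves, the sketch below advances one electron with the standard (non-relativistic) Boris scheme in a prescribed uniform magnetic field; the field strength, time step and initial velocity are assumptions, and none of JOREK's MHD fields, relativistic treatment or guiding-center machinery is reproduced.

```python
# Minimal sketch of a full-orbit test-particle push (non-relativistic Boris scheme)
# in an assumed uniform 3 T field; not the JOREK field data or module.
import numpy as np

q_m = -1.758820e11              # electron charge-to-mass ratio (C/kg)
B = np.array([0.0, 0.0, 3.0])   # assumed uniform field along z (T)
E = np.zeros(3)                 # no electric field in this sketch
dt = 1e-12                      # time step (s), a fraction of the gyro-period

x = np.zeros(3)
v = np.array([1.0e7, 0.0, 1.0e6])   # initial velocity (m/s), assumed
v2_0 = np.dot(v, v)

for _ in range(2000):
    # Boris push: half electric kick, magnetic rotation, half electric kick
    v_minus = v + 0.5 * q_m * E * dt
    t_vec = 0.5 * q_m * B * dt
    s_vec = 2.0 * t_vec / (1.0 + np.dot(t_vec, t_vec))
    v_prime = v_minus + np.cross(v_minus, t_vec)
    v_plus = v_minus + np.cross(v_prime, s_vec)
    v = v_plus + 0.5 * q_m * E * dt
    x = x + v * dt

# the Boris rotation conserves kinetic energy exactly when E = 0
print("relative change in |v|^2 (should be ~0):", np.dot(v, v) / v2_0 - 1.0)
```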

  10. Implicit and explicit schemes for mass consistency preservation in hybrid particle/finite-volume algorithms for turbulent reactive flows

    International Nuclear Information System (INIS)

    Popov, Pavel P.; Pope, Stephen B.

    2014-01-01

    This work addresses the issue of particle mass consistency in Large Eddy Simulation/Probability Density Function (LES/PDF) methods for turbulent reactive flows. Numerical schemes for the implicit and explicit enforcement of particle mass consistency (PMC) are introduced, and their performance is examined in a representative LES/PDF application, namely the Sandia–Sydney Bluff-Body flame HM1. A new combination of interpolation schemes for velocity and scalar fields is found to better satisfy PMC than multilinear and fourth-order Lagrangian interpolation. A second-order accurate time-stepping scheme for stochastic differential equations (SDE) is found to improve PMC relative to Euler time stepping, which is the first time that a second-order scheme is found to be beneficial, when compared to a first-order scheme, in an LES/PDF application. An explicit corrective velocity scheme for PMC enforcement is introduced, and its parameters optimized to enforce a specified PMC criterion with minimal corrective velocity magnitudes

  11. Synthesis of raspberry-like monodisperse magnetic hollow hybrid nanospheres by coating polystyrene template with Fe3O4@SiO2 particles.

    Science.gov (United States)

    Wang, Chunlei; Yan, Juntao; Cui, Xuejun; Wang, Hongyan

    2011-02-01

    In this paper, we present a novel method for the preparation of raspberry-like monodisperse magnetic hollow hybrid nanospheres with γ-Fe2O3@SiO2 particles as the outer shell. PS@Fe3O4@SiO2 composite nanoparticles were first prepared, exploiting the electrostatic interaction between negatively charged silica and positively charged polystyrene; raspberry-like magnetic hollow hybrid nanospheres with large cavities were then obtained by calcination, during which the magnetite (Fe3O4) was transformed into maghemite (γ-Fe2O3). Transmission electron microscopy (TEM) demonstrated that the obtained magnetic hollow silica nanospheres, with a perfect spherical profile, were well monodispersed and uniform with a mean size of 253 nm. Fourier transform infrared (FTIR) spectrometry, energy dispersive spectroscopy (EDS) and X-ray diffraction (XRD) provided sufficient evidence for the presence of Fe3O4 in the silica shell. Moreover, the magnetization curve showed that the magnetic hollow silica nanospheres are superparamagnetic, with a saturation magnetization of about 7.84 emu/g. In addition, nitrogen adsorption-desorption measurements showed that the pore size, BET surface area and pore volume of the magnetic hollow silica nanospheres were 3.5-5.5 nm, 307 m² g⁻¹ and 1.33 cm³ g⁻¹, respectively. The magnetic hollow nanospheres therefore have a promising future in controlled and targeted drug delivery applications. Copyright © 2010 Elsevier Inc. All rights reserved.

  12. A hybrid approach for quantizing complicated motion of a charged particle in time-varying magnetic field

    International Nuclear Information System (INIS)

    Menouar, Salah; Choi, Jeong Ryeol

    2015-01-01

    The quantum characteristics of a charged particle subjected to a singular oscillator potential under an external magnetic field are investigated via the SU(1,1) Lie algebraic approach together with the invariant operator and unitary transformation methods. The system treated here is somewhat complicated, since we consider not only the time variation of the effective mass of the system but also an arbitrary time dependence of the external magnetic field. The system is therefore a time-dependent Hamiltonian system, which requires more delicate treatment. The complete wave functions are obtained without relying on perturbation and/or approximation methods, and the global phases of the system are identified. To promote the understanding of our development, we apply it to a particular case, assuming that the effective mass varies slowly with time under a time-dependent magnetic field

  13. Inverse Modeling of Soil Hydraulic Parameters Based on a Hybrid of Vector-Evaluated Genetic Algorithm and Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Yi-Bo Li

    2018-01-01

    Full Text Available The accurate estimation of the soil hydraulic parameters (θs, α, n, and Ks) of the van Genuchten–Mualem model has attracted considerable attention. In this study, we proposed a new two-step inversion method, which first estimates the hydraulic parameter θs using an objective function based on the final water content, and subsequently estimates the soil hydraulic parameters α, n, and Ks using a vector-evaluated genetic algorithm and particle swarm optimization (VEGA-PSO) method with objective functions based on cumulative infiltration and infiltration rate. The parameters were inversely estimated for four types of soils (sand, loam, silt, and clay) in an in silico experiment simulating tension disc infiltration at three initial water content levels. The results indicated that the method is excellent and robust. Because the objective function has multiple local minima in a tiny range near the true values, inverse estimation of the hydraulic parameters is difficult; however, the estimated soil water retention curves and hydraulic conductivity curves were nearly identical to the true curves. In addition, the proposed method was able to estimate the hydraulic parameters accurately despite substantial measurement errors in initial water content, final water content, and cumulative infiltration, proving that the method is feasible and practical for field application.

  14. The modification of hybrid method of ant colony optimization, particle swarm optimization and 3-OPT algorithm in traveling salesman problem

    Science.gov (United States)

    Hertono, G. F.; Ubadah; Handari, B. D.

    2018-03-01

    The traveling salesman problem (TSP) is a well-known problem of finding, for a given set of vertices, the shortest tour that visits every vertex exactly once, except the first vertex, to which the tour returns. This paper discusses three modifications of a method for solving the TSP that combines Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO) and the 3-Opt algorithm. ACO is used to find solutions of the TSP, with PSO employed to find the best values of the parameters α and β used in ACO. The 3-Opt algorithm is then used to reduce the total tour length of the feasible solutions obtained by ACO. In the first modification, 3-Opt is applied to the feasible solutions obtained at each iteration; in the second modification, 3-Opt is applied to the entire set of solutions obtained at every iteration; and in the third modification, 3-Opt is applied to the distinct solutions obtained at each iteration. Results are tested using 6 benchmark problems taken from TSPLIB by calculating the relative error with respect to the best known solution as well as the running time. Among these modifications, only the second and third give satisfactory results, although the second requires more execution time than the third.
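
    The core construction step of the approach is the ACO tour building and pheromone update, sketched below; the random city set, the values of α and β and the other settings are assumptions, and the PSO tuning of α and β and the 3-Opt refinement described above are omitted for brevity.

```python
# Minimal sketch of ant colony optimization for the TSP (tour construction plus
# pheromone update); all settings are assumed, and PSO tuning / 3-Opt are omitted.
import numpy as np

rng = np.random.default_rng(4)
n = 20
cities = rng.random((n, 2))
dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=2) + np.eye(n)

alpha, beta, rho, n_ants, n_iter = 1.0, 3.0, 0.5, 20, 100
tau = np.ones((n, n))                       # pheromone matrix
best_len, best_tour = np.inf, None

def tour_length(tour):
    return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

for _ in range(n_iter):
    tours = []
    for _ant in range(n_ants):
        tour = [int(rng.integers(n))]
        unvisited = set(range(n)) - {tour[0]}
        while unvisited:
            i = tour[-1]
            cand = np.array(sorted(unvisited))
            w = tau[i, cand] ** alpha * (1.0 / dist[i, cand]) ** beta
            j = int(rng.choice(cand, p=w / w.sum()))   # probabilistic next-city choice
            tour.append(j)
            unvisited.remove(j)
        tours.append(tour)
    tau *= (1.0 - rho)                                 # pheromone evaporation
    for tour in tours:
        L = tour_length(tour)
        if L < best_len:
            best_len, best_tour = L, tour
        for i in range(n):
            a, b = tour[i], tour[(i + 1) % n]
            tau[a, b] += 1.0 / L                       # deposit proportional to tour quality
            tau[b, a] += 1.0 / L

print("best tour length found:", round(best_len, 3))
```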

  15. Bidirectional quantum teleportation of unknown photons using path-polarization intra-particle hybrid entanglement and controlled-unitary gates via cross-Kerr nonlinearity

    Science.gov (United States)

    Heo, Jino; Hong, Chang-Ho; Lim, Jong-In; Yang, Hyung-Jin

    2015-05-01

    We propose an arbitrary controlled-unitary (CU) gate and a bidirectional quantum teleportation (BQTP) scheme. The proposed CU gate utilizes photonic qubits (photons) with cross-Kerr nonlinearities (XKNLs), X-homodyne detectors, and linear optical elements, and consists of the consecutive operation of a controlled-path (C-path) gate and a gathering-path (G-path) gate. It is almost deterministic and feasible with current technology when a strong coherent state and weak XKNLs are employed. Based on the CU gate, we present a BQTP scheme that simultaneously teleports two unknown photons between distant users by transmitting only one photon in a path-polarization intra-particle hybrid entangled state. Consequently, it is possible to experimentally implement BQTP with a certain success probability using the proposed CU gate. Project supported by the Ministry of Science, ICT&Future Planning, Korea, under the C-ITRC (Convergence Information Technology Research Center) Support program (NIPA-2013-H0301-13-3007) supervised by the National IT Industry Promotion Agency.

  16. Effect of Nano-TiC Dispersed Particles and Electro-Codeposition Parameters on Morphology and Structure of Hybrid Ni/TiC Nanocomposite Layers.

    Science.gov (United States)

    Benea, Lidia; Celis, Jean-Pierre

    2016-04-06

    This research work describes the effect of titanium carbide (TiC) nanoparticles dispersed in a nickel plating bath on the Ni/TiC nanostructured composite layers obtained by electro-codeposition. The surface morphology of the Ni/TiC nanostructured composite layers was characterized by scanning electron microscopy (SEM). The composition of the coatings and the incorporation percentage of TiC nanoparticles in the Ni matrix were studied and estimated by using energy dispersive X-ray analysis (EDX). X-ray diffraction (XRD) was applied in order to investigate the phase structure as well as the corresponding relative texture coefficients of the composite layers. The results show that the concentration of nano-TiC particles added to the nickel electrolyte affects the percentage of TiC incorporated into the Ni/TiC nanostructured layers, as well as the corresponding morphology, relative texture coefficients and thickness, all of which tend to increase with increasing nano-TiC concentration. As the amount of TiC nanoparticles in the electrolyte increases, their incorporation into the nickel matrix also increases. The hybrid Ni/nano-TiC composite layers obtained exhibit a higher roughness and higher hardness; these layers are therefore promising superhydrophobic surfaces for special applications and could be more resistant to wear than pure Ni layers.

  17. Effect of Nano-TiC Dispersed Particles and Electro-Codeposition Parameters on Morphology and Structure of Hybrid Ni/TiC Nanocomposite Layers

    Directory of Open Access Journals (Sweden)

    Lidia Benea

    2016-04-01

    Full Text Available This research work describes the effect of titanium carbide (TiC) nanoparticles dispersed into a nickel plating bath on Ni/TiC nanostructured composite layers obtained by electro-codeposition. The surface morphology of the Ni/TiC nanostructured composite layers was characterized by scanning electron microscopy (SEM). The composition of the coatings and the incorporation percentage of TiC nanoparticles into the Ni matrix were studied and estimated using energy dispersive X-ray analysis (EDX). X-ray diffraction (XRD) was applied to investigate the phase structure as well as the corresponding relative texture coefficients of the composite layers. The results show that the concentration of nano-TiC particles added to the nickel electrolyte affects the inclusion percentage of TiC in the Ni/TiC nanostructured layers, as well as the corresponding morphology, relative texture coefficients and thickness, which all increase with increasing nano-TiC concentration. By increasing the amount of TiC nanoparticles in the electrolyte, their incorporation into the nickel matrix also increases. The hybrid Ni/nano-TiC composite layers obtained revealed a higher roughness and higher hardness; therefore, these layers are promising superhydrophobic surfaces for special applications and could be more resistant to wear than pure Ni layers.

  18. Nanoscale Organic Hybrid Electrolytes

    KAUST Repository

    Nugent, Jennifer L.

    2010-08-20

    Nanoscale organic hybrid electrolytes are composed of organic-inorganic hybrid nanostructures, each with a metal oxide or metallic nanoparticle core densely grafted with an ion-conducting polyethylene glycol corona - doped with lithium salt. These materials form novel solvent-free hybrid electrolytes that are particle-rich, soft glasses at room temperature; yet manifest high ionic conductivity and good electrochemical stability above 5V. © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Nanoscale Organic Hybrid Electrolytes

    KAUST Repository

    Nugent, Jennifer L.; Moganty, Surya S.; Archer, Lynden A.

    2010-01-01

    Nanoscale organic hybrid electrolytes are composed of organic-inorganic hybrid nanostructures, each with a metal oxide or metallic nanoparticle core densely grafted with an ion-conducting polyethylene glycol corona - doped with lithium salt. These materials form novel solvent-free hybrid electrolytes that are particle-rich, soft glasses at room temperature; yet manifest high ionic conductivity and good electrochemical stability above 5V. © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. SU-E-T-590: Optimizing Magnetic Field Strengths with Matlab for An Ion-Optic System in Particle Therapy Consisting of Two Quadrupole Magnets for Subsequent Simulations with the Monte-Carlo Code FLUKA

    International Nuclear Information System (INIS)

    Baumann, K; Weber, U; Simeonov, Y; Zink, K

    2015-01-01

    Purpose: The aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility consisting of the beam tube, two quadrupole magnets and a beam monitor system was calculated with the help of Matlab, using matrices that solve the equation of motion of a charged particle in a magnetic field and in a field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte-Carlo code FLUKA, and the transport of 80 MeV/u C12 ions through this ion-optic system was calculated using a user routine to implement the magnetic fields. The fluence along the beam axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized using Matlab and transferred to the Monte-Carlo code FLUKA. The implementation via a user routine was successful. Analysis of the fluence pattern along the beam axis reproduced the characteristic focusing and de-focusing effects of the quadrupole magnets. Furthermore, the beam spot at the iso-center was circular and significantly thinner compared to an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte-Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system.
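
    The matrix formalism mentioned in the Methods section can be sketched as follows: drift spaces and thick quadrupoles are represented by 2x2 transfer matrices, the 2x2 beam (sigma) matrix is propagated through them, and the two quadrupole strengths are scanned for a small, round spot. This is an illustrative reconstruction in Python, not the authors' Matlab tool; all lengths, strengths and beam parameters (k1, k2, L_drift, L_quad, L_iso, sigma0) are assumed values.

    import numpy as np
    from itertools import product

    def drift(L):
        return np.array([[1.0, L], [0.0, 1.0]])

    def quad(k, L):
        """Thick quadrupole matrix: focusing for k > 0, defocusing for k < 0."""
        if k > 0:
            s = np.sqrt(k)
            return np.array([[np.cos(s * L), np.sin(s * L) / s],
                             [-s * np.sin(s * L), np.cos(s * L)]])
        if k < 0:
            s = np.sqrt(-k)
            return np.array([[np.cosh(s * L), np.sinh(s * L) / s],
                             [s * np.sinh(s * L), np.cosh(s * L)]])
        return drift(L)

    def spot_sizes(k1, k2, sigma0, L_drift=1.0, L_quad=0.3, L_iso=2.0):
        """RMS beam sizes [sigma_x, sigma_y] at the assumed iso-centre for strengths k1, k2."""
        sizes = []
        for sx in (+1, -1):   # x-plane and y-plane see opposite-sign gradients
            M = drift(L_iso) @ quad(sx * k2, L_quad) @ drift(L_drift) @ quad(-sx * k1, L_quad) @ drift(L_drift)
            sigma = M @ sigma0 @ M.T          # propagate the 2x2 beam (sigma) matrix
            sizes.append(np.sqrt(sigma[0, 0]))
        return sizes

    # crude scan standing in for the optimisation described in the abstract
    sigma0 = np.array([[1e-6, 0.0], [0.0, 1e-6]])     # 1 mm / 1 mrad uncorrelated beam (illustrative)

    def roundness_cost(k):
        sx, sy = spot_sizes(k[0], k[1], sigma0)
        return sx**2 + sy**2 + (sx - sy)**2           # small and round

    best = min(product(np.linspace(0.5, 10.0, 40), repeat=2), key=roundness_cost)
    print("k1, k2 =", best, "spot sizes =", spot_sizes(best[0], best[1], sigma0))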

  1. Modelling plastic scintillator response to gamma rays using light transport incorporated FLUKA code

    Energy Technology Data Exchange (ETDEWEB)

    Ranjbar Kohan, M. [Physics Department, Tafresh University, Tafresh (Iran, Islamic Republic of); Etaati, G.R. [Department of Nuclear Engineering and Physics, Amir Kabir University of Technology, Tehran (Iran, Islamic Republic of); Ghal-Eh, N., E-mail: ghal-eh@du.ac.ir [School of Physics, Damghan University, Damghan (Iran, Islamic Republic of); Safari, M.J. [Department of Energy Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of); Afarideh, H. [Department of Nuclear Engineering and Physics, Amir Kabir University of Technology, Tehran (Iran, Islamic Republic of); Asadi, E. [Department of Physics, Payam-e-Noor University, Tehran (Iran, Islamic Republic of)

    2012-05-15

    The response function of NE102 plastic scintillator to gamma rays has been simulated using a joint FLUKA+PHOTRACK Monte Carlo code. The multi-purpose particle transport code, FLUKA, has been responsible for gamma transport whilst the light transport code, PHOTRACK, has simulated the transport of scintillation photons through scintillator and lightguide. The simulation results of plastic scintillator with/without light guides of different surface coverings have been successfully verified with experiments. - Highlights: • A multi-purpose code (FLUKA) and a light transport code (PHOTRACK) have been linked. • The hybrid code has been used to generate the response function of an NE102 scintillator. • The simulated response functions exhibit a good agreement with experimental data.

  2. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flow rate in parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure drops or flow rates that may or may not vary in time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code; its complement, FLID, is a one-channel, two-dimensional code. (authors)

  3. Influence of the Sr and Mg Alloying Additions on the Bonding Between Matrix and Reinforcing Particles in the AlSi7Mg/SiC-Cg Hybrid Composite

    Directory of Open Access Journals (Sweden)

    Dolata A. J.

    2016-06-01

    Full Text Available The aim of the work was to select an appropriate phase composition for a composite intended for permanent-mould casting of air compressor pistons. Hybrid composites based on an AlSi7Mg matrix alloy reinforced with a mixture of silicon carbide (SiC) and glassy carbon (Cg) particles were fabricated by the stir casting method. It is shown that proper selection of the chemical composition of the matrix alloy and its modification by magnesium and strontium additions makes it possible to obtain both advantageous casting properties of the composite suspensions and good bonding between the particle reinforcements and the matrix.

  4. A Calculation Method of PKA, KERMA and DPA from Evaluated Nuclear Data with an Effective Single-particle Emission Approximation (ESPEA) and Introduction of Event Generator Mode in PHITS Code

    International Nuclear Information System (INIS)

    Fukahori, Tokio; Iwamoto, Yosuke

    2012-01-01

    A method for calculating displacements from evaluated nuclear data files has been developed using the effective single-particle emission approximation (ESPEA). The ESPEA can be used effectively below about 50 MeV, where the multiplicity of emitted particles is small. These results are also reported in Ref. 24. A displacement calculation method has also been developed in PHITS. In the high-energy region (≥ 20 MeV) for proton and neutron beams, the DPA created by secondary particles increases due to nuclear reactions. For heavy-ion beams, the DPA created by the primaries dominates the total DPA due to the large Coulomb scattering cross sections. PHITS results agree with FLUKA results within a factor of 1.7. In the high-energy region above 10 MeV/nucleon, comparisons among codes and measurements of displacement damage cross sections are necessary. (author)
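
    For context, a displacement (DPA) estimate of this kind usually rests on the standard NRT model, sketched below; this is the textbook formulation, not the PHITS or ESPEA implementation, and the 40 eV displacement threshold is only an example value.

    def nrt_displacements(damage_energy_ev, e_d_ev=40.0):
        """Number of stable displacements for a PKA with the given damage energy (NRT model)."""
        if damage_energy_ev < e_d_ev:
            return 0.0
        if damage_energy_ev < 2.0 * e_d_ev / 0.8:
            return 1.0
        return 0.8 * damage_energy_ev / (2.0 * e_d_ev)

    def displacement_cross_section_barn(pka_spectrum, e_d_ev=40.0):
        """pka_spectrum: iterable of (damage energy in eV, partial cross section in barn) pairs."""
        return sum(sigma * nrt_displacements(t, e_d_ev) for t, sigma in pka_spectrum)

    # DPA for a given exposure: dpa = sigma_dpa [cm^2] x fluence [cm^-2]
    def dpa(sigma_dpa_barn, fluence_per_cm2):
        return sigma_dpa_barn * 1.0e-24 * fluence_per_cm2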

  5. Poloidal inhomogeneity of the particle fluctuation induced fluxes near of the LCFS at lower hybrid heating and improved confinement transition at the FT-2 tokamak

    International Nuclear Information System (INIS)

    Lashkul, S.I.; Altukhov, A.B.; Gurchenko, A.D.; Gusakov, E.Z.; Dyachenko, V.V.; Esipov, L.A.; Kantor, M.Y.; Kouprienko, D.V.; Stepanov, A.Y.; Sharpeonok, A.P.; Shatalin, S.V.; Vekshina, E.O.

    2004-01-01

    This paper presents our observations and conclusions on the development of transport processes at the plasma periphery of the small tokamak FT-2 during additional lower hybrid heating (LHH), when an external transport barrier (ETB) followed by an internal transport barrier (ITB) is observed. The variations of the fluctuation-induced fluxes near the periphery are measured by three movable multi-electrode Langmuir probes (L-probes) located in the same poloidal cross-section of the chamber. The observed L-H transition and ETB formation after LHH, and the associated rise of the negative radial electric field E_r, result mainly from the electron temperature (T_e) decreasing just inside the last closed flux surface (LCFS) to a greater extent than in the scrape-off layer (SOL). This effect is driven by the decrease of the input power and by the decrease of the radial correlation coefficient (for r = 74-77 mm), and of the radial fluctuation-induced particle flux Γ(t), resulting from the ITB formation mechanism during LHH. The T_e variation in the SOL after the LH heating pulse is smaller. The observed non-monotonic radial profile of T_e near the LCFS, with a positive δT_e/δr, is evidently maintained by the large longitudinal conductivity and by poloidal fluxes from the hotter limiter-shadow regions, owing to the poloidal inhomogeneity of T_e(SOL) and n_e(SOL). The negative E_r induced after the RF pulse quickly gives rise to quasi-steady-state drift fluxes Γ_0(t) with a reversed-direction structure, like 'zonal flows', which may inhibit transport across the flow. A large rise of grad(n_e) near the LCFS with the L-H transition is observed for a long time after the end of the LH pulse, about 10-15 ms.

  6. Effect of lower hybrid waves on turbulence and transport of particles and energy in the FTU tokamak scrape-off layer plasma

    Energy Technology Data Exchange (ETDEWEB)

    Ridolfini, V Pericoli [ENEA-CR Frascati, Via Enrico Fermi 45-00044 Frascati, Roma (Italy)

    2011-11-15

    All the main features of the scrape-off layer turbulence, magnitude, frequency spectrum and perpendicular wave vector, ξ_t, are strongly affected by the injection of lower hybrid (LH) power into the FTU tokamak. The governing parameters are the local last closed magnetic surface values of density, n_e,LCMS, and temperature, T_e,LCMS. n_e,LCMS determines the perpendicular wave vector of the LH waves, which is a key parameter for the multiple scattering processes, and together with T_e,LCMS the collisionality that exerts a stabilizing effect on the fluctuations. This effect, still to be examined in the light of theoretical models, leads to an asymptotic value for the fluctuation relative amplitude in the ohmic phase close to 25%, and ~10% in the LH phase, or even less, since the saturation level is not yet attained. The LH waves also can strongly raise ξ_t, about 3 times, and double the root mean square frequency. The transfer of momentum and energy in the mutual scattering of LH and turbulence 'waves' drives these changes. An increase also of the cross-correlation between temperature and electric potential fluctuations should occur in order to explain the magnitude of the fluctuation amplitude drop and the large increment of the temperature e-folding decay, by more than a factor of 2.5. Particle transport, however, does not appear to be affected to a large extent-the density e-folding decay length is almost unchanged but the power flow typical length rises by about a factor of 1.5, which is a relevant figure in view of the problem of mitigating the power loads on divertor targets in future reactors. These changes are confined mainly within the flux tube connected with the LH waves launching antenna, but start to spread significantly out of it at high plasma densities.

  7. Effect of lower hybrid waves on turbulence and transport of particles and energy in the FTU tokamak scrape-off layer plasma

    International Nuclear Information System (INIS)

    Ridolfini, V Pericoli

    2011-01-01

    All the main features of the scrape-off layer turbulence, magnitude, frequency spectrum and perpendicular wave vector, ξ t , are strongly affected by the injection of lower hybrid (LH) power into the FTU tokamak. The governing parameters are the local last closed magnetic surface values of density, n e,LCMS , and temperature, T e,LCMS . n e,LCMS determines the perpendicular wave vector of the LH waves, which is a key parameter for the multiple scattering processes, and together with T e,LCMS the collisionality that exerts a stabilizing effect on the fluctuations. This effect, still to be examined in the light of theoretical models, leads to an asymptotic value for the fluctuation relative amplitude in the ohmic phase close to 25%, and ∼10% in the LH phase, or even less, since the saturation level is not yet attained. The LH waves also can strongly raise ξ t , about 3 times, and double the root mean square frequency. The transfer of momentum and energy in the mutual scattering of LH and turbulence 'waves' drives these changes. An increase also of the cross-correlation between temperature and electric potential fluctuations should occur in order to explain the magnitude of the fluctuation amplitude drop and the large increment of the temperature e-folding decay, by more than a factor of 2.5. Particle transport, however, does not appear to be affected to a large extent-the density e-folding decay length is almost unchanged but the power flow typical length rises by about a factor of 1.5, which is a relevant figure in view of the problem of mitigating the power loads on divertor targets in future reactors. These changes are confined mainly within the flux tube connected with the LH waves launching antenna, but start to spread significantly out of it at high plasma densities.

  8. Validation of a new 39 neutron group self-shielded library based on the nucleonics analysis of the Lotus fusion-fission hybrid test facility performed with the Monte Carlo code

    International Nuclear Information System (INIS)

    Pelloni, S.; Cheng, E.T.

    1985-02-01

    The Swiss LOTUS fusion-fission hybrid test facility was used to investigate the influence of the self-shielding of resonance cross sections on the tritium breeding and on the thorium ratios. Nucleonic analyses were performed using the discrete-ordinates transport codes ANISN and ONEDANT, the surface-flux code SURCU, and version 3 of the MCNP code for the Li2CO3 and Li2O blanket designs with lead, thorium and beryllium multipliers. Except for the MCNP calculation, which is based on the ENDF/B-V files, all nuclear data are generated from the ENDF/B-IV basic library. For the deterministic methods three NJOY group libraries were considered. The first, a 39 neutron group self-shielded library, was generated at EIR. The second uses the same group structure as the first and consists of infinitely diluted cross sections. Finally, the third library was processed at LANL and consists of coupled 30+12 neutron and gamma groups; these cross sections are not self-shielded. The Monte Carlo analysis is based on a continuous and on a discrete 262-group library from the ENDF/B-V evaluation. The results agree within 3% between the unshielded libraries and between the different transport codes and theories. The self-shielding of resonance cross sections results in a decrease of the thorium capture rate and in an increase of the tritium breeding of about 6%. The remaining computed ratios are not affected by the self-shielding of cross sections. (Auth.)

  9. High-resolution multi-code implementation of unsteady Navier-Stokes flow solver based on paralleled overset adaptive mesh refinement and high-order low-dissipation hybrid schemes

    Science.gov (United States)

    Li, Gaohua; Fu, Xiang; Wang, Fuxin

    2017-10-01

    The low-dissipation, high-order accurate hybrid upwinding/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and the flow-feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collection detection theory and an implicit hole-cutting algorithm, achieves an automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed multiple codes. The results for flow over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of a high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.
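
    The hybrid upwind/central idea can be illustrated in one dimension: a fifth-order WENO reconstruction supplies the shock-capturing face values, a sixth-order central interpolation supplies the low-dissipation ones, and a simple smoothness-ratio switch chooses between them. The sketch below is for linear advection with positive wave speed only; the sensor and its threshold are illustrative and are not the scheme of the paper.

    import numpy as np

    def weno5_face(f):
        """Left-biased WENO5 value at face i+1/2 and a smoothness ratio (periodic array f)."""
        fm2, fm1, f0, fp1, fp2 = (np.roll(f, 2), np.roll(f, 1), f, np.roll(f, -1), np.roll(f, -2))
        p0 = (2*fm2 - 7*fm1 + 11*f0) / 6.0                               # candidate stencils
        p1 = (-fm1 + 5*f0 + 2*fp1) / 6.0
        p2 = (2*f0 + 5*fp1 - fp2) / 6.0
        b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2  # Jiang-Shu indicators
        b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
        b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
        eps = 1e-6
        a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
        face = (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)
        bmin = np.minimum(np.minimum(b0, b1), b2)
        bmax = np.maximum(np.maximum(b0, b1), b2)
        return face, bmax / (bmin + 1e-40)

    def central6_face(f):
        """Sixth-order central interpolation at face i+1/2 (periodic array f)."""
        fm2, fm1, f0, fp1, fp2, fp3 = (np.roll(f, 2), np.roll(f, 1), f,
                                       np.roll(f, -1), np.roll(f, -2), np.roll(f, -3))
        return (37*(f0 + fp1) - 8*(fm1 + fp2) + (fm2 + fp3)) / 60.0

    def hybrid_rhs(u, a, dx, ratio_threshold=5.0):
        """du/dt for u_t + a u_x = 0 (a > 0): central flux where smooth, WENO flux elsewhere."""
        f_weno, ratio = weno5_face(u)
        f_face = np.where(ratio < ratio_threshold, central6_face(u), f_weno)
        return -a * (f_face - np.roll(f_face, 1)) / dx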

  10. Comparison of european computer codes relative to the aerosol behavior in PWR containment buildings during severe core damage accidents. (Modelling of steam condensation on the particles)

    International Nuclear Information System (INIS)

    Bunz, H.; Dunbar, L.H.; Fermandjian, J.; Lhiaubet, G.

    1987-11-01

    An aerosol code comparison exercise was performed within the framework of the Commission of European Communities (Division of Safety of Nuclear Installations). This exercise, focused on the process of steam condensation onto the aerosols occurring in PWR containment buildings during severe core damage accidents, has made it possible to understand the discrepancies between the results obtained. These discrepancies are due, in particular, to whether or not the curvature effect is modelled in the codes.

  11. Global Particle-in-Cell Simulations of Mercury's Magnetosphere

    Science.gov (United States)

    Schriver, D.; Travnicek, P. M.; Lapenta, G.; Amaya, J.; Gonzalez, D.; Richard, R. L.; Berchem, J.; Hellinger, P.

    2017-12-01

    Spacecraft observations of Mercury's magnetosphere have shown that kinetic ion and electron particle effects play a major role in the transport, acceleration, and loss of plasma within the magnetospheric system. Kinetic processes include reconnection, the breakdown of particle adiabaticity and wave-particle interactions. Because of the vast range in spatial scales involved in magnetospheric dynamics, from local electron Debye length scales (meters) to solar wind/planetary magnetic scale lengths (tens to hundreds of planetary radii), fully self-consistent kinetic simulations of a global planetary magnetosphere remain challenging. Most global simulations of Earth's and other planets' magnetospheres are carried out using MHD, enhanced MHD (e.g., Hall MHD), hybrid, or a combination of MHD and particle-in-cell (PIC) simulations. Here, 3D kinetic self-consistent hybrid (ion particle, electron fluid) and full PIC (ion and electron particle) simulations of the solar wind interaction with Mercury's magnetosphere are carried out. Using the implicit PIC and hybrid simulations, Mercury's relatively small, but highly kinetic, magnetosphere will be examined to determine how the self-consistent inclusion of electrons affects magnetic reconnection, particle transport and acceleration of plasma at Mercury. The spatial and energy profiles of precipitating magnetospheric ions and electrons onto Mercury's surface, which can strongly affect the regolith in terms of space weathering and particle outflow, will also be examined with the PIC and hybrid codes. MESSENGER spacecraft observations are used both to initiate and to validate the global kinetic simulations to achieve a deeper understanding of the role kinetic physics plays in magnetospheric dynamics.
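
    As a minimal building block of the kinetic (hybrid and full-PIC) treatment described above, the sketch below shows the standard Boris rotation used to advance charged macro-particles in electric and magnetic fields; field gathering, current deposition and the implicit field solve are omitted, and the field values are only illustrative of Mercury-like conditions.

    import numpy as np

    def boris_push(v, E, B, q, m, dt):
        """Advance velocity v by one step dt in fields E, B (SI units, non-relativistic)."""
        qmdt2 = 0.5 * q * dt / m
        v_minus = v + qmdt2 * E                  # first half electric kick
        t = qmdt2 * B                            # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)  # magnetic rotation
        return v_plus + qmdt2 * E                # second half electric kick

    # usage: a proton gyrating in a uniform field, stepped well below the gyro-period
    q, m = 1.602e-19, 1.673e-27
    v = np.array([4.0e5, 0.0, 0.0])
    E = np.zeros(3)
    B = np.array([0.0, 0.0, 20e-9])              # ~20 nT, an illustrative magnetospheric value
    dt = 0.05 * 2 * np.pi * m / (q * np.linalg.norm(B))
    for _ in range(1000):
        v = boris_push(v, E, B, q, m, dt)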

  12. Solid Rocket Motor Design Using Hybrid Optimization

    Directory of Open Access Journals (Sweden)

    Kevin Albarado

    2012-01-01

    Full Text Available A particle swarm/pattern search hybrid optimizer was used to drive a solid rocket motor modeling code to an optimal solution. The solid motor code models tapered motor geometries using analytical burn-back methods by slicing the grain into thin sections along the axial direction. Grains with circular perforated stars, wagon wheels, and dog bones can be considered, and multiple tapered sections can be constructed. The hybrid approach to optimization is capable of exploring large areas of the solution space through particle swarming, but is also able to climb “hills” of optimality through gradient-based pattern searching. A preliminary method for designing tapered internal geometry as well as tapered outer mold-line geometry is presented. A total of four optimization cases were performed. The first two case studies examine designing motors to match a given regressive-progressive-regressive burn profile. The third case study considers designing a neutrally burning right circular perforated grain (utilizing inner and external geometry tapering). The final case study considers designing a linearly regressive burning profile for right circular perforated (tapered) grains.
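
    A rough sketch of a particle swarm/pattern search hybrid of the kind described: the swarm explores the design space globally and a compass pattern search polishes the best point found. The objective function below is a placeholder for the solid-motor performance model, and all coefficients are illustrative, not the authors' settings.

    import numpy as np

    def pso_pattern_search(f, bounds, n_particles=30, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        lo = np.array([b[0] for b in bounds], dtype=float)
        hi = np.array([b[1] for b in bounds], dtype=float)
        x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([f(xi) for xi in x])
        gbest = pbest[np.argmin(pbest_f)].copy()
        for _ in range(iters):                                    # global swarm phase
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            fx = np.array([f(xi) for xi in x])
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            gbest = pbest[np.argmin(pbest_f)].copy()
        step, fbest = 0.1 * (hi - lo), f(gbest)                   # local pattern-search polish
        while np.max(step) > 1e-6:
            improved = False
            for i in range(len(bounds)):
                for sgn in (+1.0, -1.0):
                    trial = np.clip(gbest + sgn * step * np.eye(len(bounds))[i], lo, hi)
                    ftrial = f(trial)
                    if ftrial < fbest:
                        gbest, fbest, improved = trial, ftrial, True
            if not improved:
                step = step * 0.5                                 # shrink the stencil and retry
        return gbest, fbest

    # usage with a stand-in objective, e.g. squared error against a target burn profile
    best_x, best_f = pso_pattern_search(lambda p: float(np.sum((p - 0.3) ** 2)), [(0.0, 1.0)] * 4)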

  13. An improved hybrid of particle swarm optimization and the gravitational search algorithm to produce a kinetic parameter estimation of aspartate biochemical pathways.

    Science.gov (United States)

    Ismail, Ahmad Muhaimin; Mohamad, Mohd Saberi; Abdul Majid, Hairudin; Abas, Khairul Hamimah; Deris, Safaai; Zaki, Nazar; Mohd Hashim, Siti Zaiton; Ibrahim, Zuwairie; Remli, Muhammad Akmal

    2017-12-01

    Mathematical modelling is fundamental to understand the dynamic behavior and regulation of the biochemical metabolisms and pathways that are found in biological systems. Pathways are used to describe complex processes that involve many parameters. It is important to have an accurate and complete set of parameters that describe the characteristics of a given model. However, measuring these parameters is typically difficult and even impossible in some cases. Furthermore, the experimental data are often incomplete and also suffer from experimental noise. These shortcomings make it challenging to identify the best-fit parameters that can represent the actual biological processes involved in biological systems. Computational approaches are required to estimate these parameters. The estimation is converted into multimodal optimization problems that require a global optimization algorithm that can avoid local solutions. These local solutions can lead to a bad fit when calibrating with a model. Although the model itself can potentially match a set of experimental data, a high-performance estimation algorithm is required to improve the quality of the solutions. This paper describes an improved hybrid of particle swarm optimization and the gravitational search algorithm (IPSOGSA) to improve the efficiency of a global optimum (the best set of kinetic parameter values) search. The findings suggest that the proposed algorithm is capable of narrowing down the search space by exploiting the feasible solution areas. Hence, the proposed algorithm is able to achieve a near-optimal set of parameters at a fast convergence speed. The proposed algorithm was tested and evaluated based on two aspartate pathways that were obtained from the BioModels Database. The results show that the proposed algorithm outperformed other standard optimization algorithms in terms of accuracy and near-optimal kinetic parameter estimation. Nevertheless, the proposed algorithm is only expected to work well in
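
    A rough sketch of the common PSO-GSA hybridisation that IPSOGSA builds on (not the authors' exact algorithm): gravitational accelerations computed from fitness-derived masses provide the exploratory term, and the swarm's global best provides the exploitative term, in a single velocity update. The objective below is a placeholder for the misfit between simulated and measured pathway data, and all constants are illustrative.

    import numpy as np

    def psogsa(f, bounds, n=30, iters=200, c1=0.5, c2=1.5, G0=1.0, seed=0):
        rng = np.random.default_rng(seed)
        lo = np.array([b[0] for b in bounds], dtype=float)
        hi = np.array([b[1] for b in bounds], dtype=float)
        x = rng.uniform(lo, hi, size=(n, len(bounds)))
        v = np.zeros_like(x)
        gbest, gbest_f = None, np.inf
        for it in range(iters):
            fit = np.array([f(xi) for xi in x])
            if fit.min() < gbest_f:
                gbest_f, gbest = fit.min(), x[np.argmin(fit)].copy()
            worst, best = fit.max(), fit.min()
            m = (worst - fit + 1e-12) / (worst - best + 1e-12)   # better agents are heavier
            M = m / m.sum()
            G = G0 * np.exp(-20.0 * it / iters)                  # decaying gravitational constant
            acc = np.zeros_like(x)
            for i in range(n):
                diff = x - x[i]
                dist = np.linalg.norm(diff, axis=1) + 1e-12
                acc[i] = np.sum((G * M / dist)[:, None] * diff * rng.random((n, 1)), axis=0)
            w = 0.9 - 0.5 * it / iters                           # inertia weight
            v = w * v + c1 * rng.random(x.shape) * acc + c2 * rng.random(x.shape) * (gbest - x)
            x = np.clip(x + v, lo, hi)
        return gbest, gbest_f

    # usage: recover four hypothetical kinetic parameters by least squares
    best_k, best_err = psogsa(lambda k: float(np.sum((k - np.array([0.1, 2.0, 0.5, 1.3]))**2)),
                              [(0.0, 5.0)] * 4)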

  14. Radiation in Particle Simulations

    International Nuclear Information System (INIS)

    More, R.; Graziani, F.; Glosli, J.; Surh, M.

    2010-01-01

    Hot dense radiative (HDR) plasmas common to Inertial Confinement Fusion (ICF) and stellar interiors have high temperature (a few hundred eV to tens of keV), high density (tens to hundreds of g/cc) and high pressure (hundreds of megabars to thousands of gigabars). Typically, such plasmas undergo collisional, radiative, atomic and possibly thermonuclear processes. In order to describe HDR plasmas, computational physicists in ICF and astrophysics use atomic-scale microphysical models implemented in various simulation codes. Experimental validation of the models used to describe HDR plasmas is difficult to perform. Direct Numerical Simulation (DNS) of the many-body interactions of plasmas is a promising approach to model validation, but previous work either relies on the collisionless approximation or ignores radiation. We present four methods that attempt a new numerical simulation technique to address a currently unsolved problem: the extension of molecular dynamics to collisional plasmas including emission and absorption of radiation. The first method applies the Lienard-Wiechert solution of Maxwell's equations for a classical particle whose motion is assumed to be known. The second method expands the electromagnetic field in normal modes (plane waves in a box with periodic boundary conditions) and solves the equation for wave amplitudes coupled to the particle motion. The third method is a hybrid molecular dynamics/Monte Carlo (MD/MC) method which calculates radiation emitted or absorbed by electron-ion pairs during close collisions. The fourth method is a generalization of the third method to include small clusters of particles emitting radiation during close encounters: one electron simultaneously hitting two ions, two electrons simultaneously hitting one ion, etc. This approach is inspired by the virial expansion method of equilibrium statistical mechanics. Using a combination of these methods we believe it is possible to do atomic-scale particle simulations of
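
    In the spirit of the first method (known particle motion), the non-relativistic dipole limit of the Lienard-Wiechert fields reduces to the Larmor formula, which gives the power radiated by each charge from its instantaneous acceleration. The sketch below is that textbook estimate, not the simulation technique of the paper, and the numbers in the usage line are illustrative.

    import numpy as np

    EPS0, C = 8.8541878128e-12, 2.99792458e8

    def larmor_power(q, acc):
        """Radiated power (W) of a point charge q (C) with acceleration vector acc (m/s^2)."""
        return q**2 * np.dot(acc, acc) / (6.0 * np.pi * EPS0 * C**3)

    def radiated_energy_per_step(charges, accelerations, dt):
        """Sum the Larmor losses of all particles over one molecular-dynamics time step."""
        return sum(larmor_power(q, a) for q, a in zip(charges, accelerations)) * dt

    # usage: an electron decelerating at ~1e23 m/s^2 in a close collision, over a 1 as step
    e = 1.602176634e-19
    print(radiated_energy_per_step([-e], [np.array([1e23, 0.0, 0.0])], 1e-18))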

  15. Video tracking and post-mortem analysis of dust particles from all tungsten ASDEX Upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Endstrasser, N., E-mail: Nikolaus.Endstrasser@ipp.mpg.de [Max-Planck-Insitut fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, D-85748 Garching (Germany); Brochard, F. [Institut Jean Lamour, Nancy-Universite, Bvd. des Aiguillettes, F-54506 Vandoeuvre (France); Rohde, V., E-mail: Volker.Rohde@ipp.mpg.de [Max-Planck-Insitut fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, D-85748 Garching (Germany); Balden, M. [Max-Planck-Insitut fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, D-85748 Garching (Germany); Lunt, T.; Bardin, S.; Briancon, J.-L. [Institut Jean Lamour, Nancy-Universite, Bvd. des Aiguillettes, F-54506 Vandoeuvre (France); Neu, R. [Max-Planck-Insitut fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, D-85748 Garching (Germany)

    2011-08-01

    2D dust particle trajectories are extracted from fast framing camera videos of ASDEX Upgrade (AUG) by a new time- and resource-efficient code and classified into stationary hot spots, single-frame events and real dust particle fly-bys. Using hybrid global and local intensity thresholding and linear trajectory extrapolation, individual particles could be tracked for up to 80 ms. Even under challenging conditions such as high particle density and strong vacuum vessel illumination, all particles detected for more than 50 frames are tracked correctly. During the 2009 campaign, dust was trapped on five silicon wafer dust collectors strategically positioned within the vacuum vessel of the full-tungsten AUG. Characterisation of the outer morphology and determination of the elemental composition of 5 × 10⁴ particles were performed via automated SEM-EDX analysis. A dust classification scheme based on these parameters was defined with the goal of linking the particles to their most probable production sites.
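
    The hybrid thresholding and linear extrapolation steps can be sketched as follows: blobs brighter than both a global and a local threshold are detected in each frame, and tracks are extended by the nearest detection to a position predicted from the track's last two points. This is an illustrative reconstruction, not the AUG tracking code; thresholds, filter size and gate radius are assumed values.

    import numpy as np
    from scipy import ndimage

    def detect(frame, global_thr=0.5, local_size=15, local_margin=0.1):
        """Return blob centroids (row, col) brighter than both a global and a local threshold."""
        local_bg = ndimage.uniform_filter(frame, size=local_size)
        mask = (frame > global_thr) & (frame > local_bg + local_margin)
        labels, n = ndimage.label(mask)
        if n == 0:
            return np.empty((0, 2))
        return np.array(ndimage.center_of_mass(frame, labels, list(range(1, n + 1))))

    def link(tracks, detections, gate=10.0):
        """Extend each track with the nearest detection to its linearly extrapolated position."""
        for tr in tracks:
            pred = 2 * np.array(tr[-1]) - np.array(tr[-2]) if len(tr) > 1 else np.array(tr[-1])
            if len(detections) == 0:
                continue
            d = np.linalg.norm(detections - pred, axis=1)
            j = int(np.argmin(d))
            if d[j] < gate:
                tr.append(tuple(detections[j]))
                detections = np.delete(detections, j, axis=0)
        tracks.extend([[tuple(p)] for p in detections])   # unmatched detections start new tracks
        return tracks

    # usage over a video: frames is an iterable of 2-D intensity arrays (assumed normalised 0..1)
    # tracks = []
    # for frame in frames:
    #     tracks = link(tracks, detect(frame))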

  16. Fast-solving thermally thick model of biomass particles embedded in a CFD code for the simulation of fixed-bed burners

    International Nuclear Information System (INIS)

    Gómez, M.A.; Porteiro, J.; Patiño, D.; Míguez, J.L.

    2015-01-01

    Highlights: • A thermally thick treatment is used to simulate the thermal conversion of solid biomass fuel. • A dynamic subgrid scale is used to model the advance of reactive fronts inside the particle. • Efficient solution algorithms are applied to calculate the temperatures and volume of the internal layers. • Several tests were simulated and compared with experimental data. - Abstract: The thermally thick treatment of fuel particles during the thermal conversion of solid biomass is required to consider the internal gradients of temperature and composition and the overlapping of the existing biomass combustion stages. Due to the mixture of scales involved, the balance between model resolution and computational efficiency is an important limitation in the simulation of beds with large numbers of particles. In this study, a subgrid-scale model is applied to consider the intraparticle gradients, the interactions with other particles and the gas phase using a Euler–Euler CFD framework. Numerical heat transfer and mass conservation equations are formulated on a subparticle scale to obtain a system of linear equations that can be used to resolve the temperature and position of the reacting front inside the characteristic particle of each cell. To simulate the entire system, this modelling is combined with other submodels of the gas phase, the bed reaction and the interactions. The performance of the new model is tested using published experimental results for the particle and the bed. Similar temperatures are obtained in the particle-alone tests. Although the mass consumption rates tend to be underpredicted during the drying stage, they are subsequently compensated. In addition, an experimental batch-loaded pellet burner was simulated and tested with different air mass fluxes, in which the experimental ignition rates and temperatures are employed to compare the thermally thick model with the thermally thin model that was previously developed by the authors.

  17. Fast particle effects on the internal kink, fishbone and Alfven modes

    International Nuclear Information System (INIS)

    Gorelenkov, N.N.; Bernabei, S.; Cheng, C.Z.; Fu, G.Y.; Hill, K.; Kaye, S.; Kramer, G.J.; Nazikian, R.; Park, W.; Kusama, Y.; Shinokhara, K.; Ozeki, T.

    2001-01-01

    The issues of linear stability of low frequency perturbative and nonperturbative modes in advanced tokamak regimes are addressed based on recent developments in theory, computational methods, and progress in experiments. Perturbative codes NOVA and ORBIT are used to calculate the effects of TAEs on fast particle population in spherical tokamak NSTX. Nonperturbative analysis of chirping frequency modes in experiments on TFTR and JT-60U is presented using the kinetic code HINST, which identified such modes as a separate branch of Alfven modes - resonance TAE (R-TAE). Internal kink mode stability in the presence of fast particles is studied using the NOVA code and hybrid kinetic-MHD nonlinear code M3D. (author)

  18. Fast Particle Effects on the Internal Kink, Fishbone and Alfven Modes

    International Nuclear Information System (INIS)

    Gorelenkov, N.N.; Bernabei, S.; Cheng, C.Z.; Fu, G.Y.; Hill, K.; Kaye, S.; Kramer, G.J.; Kusama, Y.; Shinohara, K.; Nazikian, R.; Ozeki, T.; Park, W.

    2000-01-01

    The issues of linear stability of low frequency perturbative and nonperturbative modes in advanced tokamak regimes are addressed based on recent developments in theory, computational methods, and progress in experiments. Perturbative codes NOVA and ORBIT are used to calculate the effects of TAEs on fast particle population in spherical tokamak NSTX. Nonperturbative analysis of chirping frequency modes in experiments on TFTR and JT-60U is presented using the kinetic code HINST, which identified such modes as a separate branch of Alfven modes - resonance TAE (R-TAE). Internal kink mode stability in the presence of fast particles is studied using the NOVA code and hybrid kinetic-MHD nonlinear code M3D

  19. Beam-dynamics codes used at DARHT

    Energy Technology Data Exchange (ETDEWEB)

    Ekdahl, Jr., Carl August [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-01

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the Tricomp Trak orbit tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; and for coasting-beam transport to target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.
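
    For context, a static envelope code of this kind integrates the axisymmetric KV envelope equation R'' = -k(z) R + K/R + eps^2/R^3, with k(z) the focusing function, K the generalized perveance and eps the unnormalised emittance. The sketch below integrates it with a fixed-step RK4 scheme using placeholder numbers, not DARHT parameters.

    import numpy as np

    def envelope_rhs(z, y, k_of_z, K, eps):
        R, Rp = y
        return np.array([Rp, -k_of_z(z) * R + K / R + eps**2 / R**3])

    def integrate_envelope(R0, Rp0, z_end, dz, k_of_z, K, eps):
        """Fixed-step RK4 integration of the beam envelope R(z) from z = 0 to z_end."""
        z, y, out = 0.0, np.array([R0, Rp0]), []
        while z < z_end:
            k1 = envelope_rhs(z, y, k_of_z, K, eps)
            k2 = envelope_rhs(z + dz / 2, y + dz / 2 * k1, k_of_z, K, eps)
            k3 = envelope_rhs(z + dz / 2, y + dz / 2 * k2, k_of_z, K, eps)
            k4 = envelope_rhs(z + dz, y + dz * k3, k_of_z, K, eps)
            y = y + dz / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            z += dz
            out.append((z, y[0]))
        return out

    # usage: a uniform focusing channel with illustrative perveance and emittance
    profile = integrate_envelope(R0=0.01, Rp0=0.0, z_end=5.0, dz=1e-3,
                                 k_of_z=lambda z: 4.0, K=1e-3, eps=1e-4)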

  20. A hybrid method using the widely-used WIEN2k and VASP codes to calculate the complete set of XAS/EELS edges in a hundred-atoms system.

    Science.gov (United States)

    Donval, Gaël; Moreau, Philippe; Danet, Julien; Larbi, Séverine Jouanneau-Si; Bayle-Guillemaud, Pascale; Boucher, Florent

    2017-01-04

    Most of the recent developments in EELS modelling have been focused on getting better agreement with measurements. Less work, however, has been dedicated to bringing EELS calculations to larger structures that can more realistically describe actual systems. The purpose of this paper is to present a hybrid approach well adapted to calculating the whole set of localised EELS core-loss edges (at the XAS level of theory) on larger systems using only standard tools, namely the WIEN2k and VASP codes. We illustrate the usefulness of this method by applying it to a set of amorphous silicon structures in order to explain the flattening of the silicon L2,3 EELS edge peak at the onset. We show that the peak flattening is actually caused by the collective contribution of each of the atoms to the average spectrum, as opposed to a flattening occurring on each individual spectrum. This method allowed us to reduce the execution time by a factor of 3 compared to a usual, carefully optimised WIEN2k calculation. It provided even greater speed-ups on more complex systems (interfaces, ∼300 atoms) that will be presented in a future paper. This method is suited to calculating all the localized edges of all the atoms of a structure in a single calculation for light atoms, as long as the core-hole effects can be neglected.

  1. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, makes it possible to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies this hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
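
    The unique-decipherability property that coding partitions relax can be tested with the classical Sardinas-Patterson procedure, sketched below; the two example codes at the end are illustrative.

    def is_uniquely_decipherable(code):
        """Sardinas-Patterson test for the UD property of a finite code (set of strings)."""
        code = set(code)

        def dangling(A, B):
            # nonempty w with a*w = b for some a in A, b in B
            return {b[len(a):] for a in A for b in B if a != b and b.startswith(a)}

        s, seen = dangling(code, code), set()
        while s:
            if s & code:                 # a codeword appears as a dangling suffix: ambiguous
                return False
            if frozenset(s) in seen:     # the suffix sets cycle: no ambiguity can arise
                return True
            seen.add(frozenset(s))
            s = dangling(code, s) | dangling(s, code)
        return True

    print(is_uniquely_decipherable({"0", "01", "11"}))   # True  (UD)
    print(is_uniquely_decipherable({"0", "01", "10"}))   # False (e.g. "010" has two parsings)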

  2. Hybridized Tetraquarks

    CERN Document Server

    Esposito, A.; Polosa, A.D.

    2016-01-01

    We propose a new interpretation of the neutral and charged X, Z exotic hadron resonances. Hybridized-tetraquarks are neither purely compact tetraquark states nor bound or loosely bound molecules. The latter would require a negative or zero binding energy whose counterpart in h-tetraquarks is a positive quantity. The formation mechanism of this new class of hadrons is inspired by that of Feshbach metastable states in atomic physics. The recent claim of an exotic resonance in the Bs pi+- channel by the D0 collaboration and the negative result presented subsequently by the LHCb collaboration are understood in this scheme, together with a considerable portion of available data on X, Z particles. Considerations on a state with the same quantum numbers as the X(5568) are also made.

  3. Program Hybrid/GDH. Revision

    International Nuclear Information System (INIS)

    Blann, M.; Bisplinghoff, J.

    1975-10-01

    This code is the most recent in a series of codes for doing a-priori pre-equilibrium decay calculations. It has been written to permit the user to exercise many options at time of execution. It will, for example, permit calculation with either Hybrid model or the geometry dependent Hybrid model (GDH). Intranuclear transition rates can be calculated using either a nucleon-nucleon scattering approach (improved over earlier results) or based on the imaginary optical potential. Transition rates based on exciton lifetimes can be selected (as suggested in the Hybrid model formulation) or an average lifetime for each n-exciton configuration may be selected

  4. Hybrid2 - The hybrid power system simulation model

    Energy Technology Data Exchange (ETDEWEB)

    Baring-Gould, E.I.; Green, H.J.; Dijk, V.A.P. van [National Renewable Energy Lab., Golden, CO (United States); Manwell, J.F. [Univ. of Massachusetts, Amherst, MA (United States)

    1996-12-31

    There is a large-scale need and desire for energy in remote communities, especially in the developing world; however, the lack of a user-friendly, flexible performance prediction model for hybrid power systems incorporating renewables has hindered the analysis of hybrids as alternatives to conventional solutions. A user-friendly model was needed with the versatility to simulate the many system locations, widely varying hardware configurations, and differing control options for potential hybrid power systems. To meet these ends, researchers from the National Renewable Energy Laboratory (NREL) and the University of Massachusetts (UMass) developed the Hybrid2 software. This paper provides an overview of the capabilities, features, and functionality of the Hybrid2 code, and discusses its validation and future plans. Model availability and technical support provided to Hybrid2 users are also discussed. 12 refs., 3 figs., 4 tabs.

  5. A Monte Carlo simulation code for calculating damage and particle transport in solids: The case for electron-bombarded solids for electron energies up to 900 MeV

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Qiang [College of Nuclear Science and Technology, Harbin Engineering University, Harbin 150001 (China); Shao, Lin, E-mail: lshao@tamu.edu [Department of Nuclear Engineering, Texas A& M University, College Station, TX 77843 (United States)

    2017-03-15

    Current popular Monte Carlo simulation codes for simulating electron bombardment in solids focus primarily on electron trajectories, instead of electron-induced displacements. Here we report a Monte Carlo simulation code, DEEPER (damage creation and particle transport in matter), developed for calculating 3-D distributions of displacements produced by electrons of incident energies up to 900 MeV. Electron elastic scattering is calculated by using full-Mott cross sections for high accuracy, and primary knock-on atom (PKA)-induced damage cascades are modeled using the ZBL potential. We compare and show large differences in the 3-D distributions of displacements and electrons in electron-irradiated Fe. The distributions of total displacements are similar to those of PKAs at low electron energies, but they are substantially different for higher-energy electrons due to the shifting of the PKA energy spectra towards higher energies. The study is important for evaluating electron-induced radiation damage in applications that use high-flux electron beams to intentionally introduce defects and that use an electron analysis beam for microstructural characterization of nuclear materials.

  6. Plasma-catalyst hybrid reactor with CeO2/γ-Al2O3 for benzene decomposition with synergetic effect and nano particle by-product reduction.

    Science.gov (United States)

    Mao, Lingai; Chen, Zhizong; Wu, Xinyue; Tang, Xiujuan; Yao, Shuiliang; Zhang, Xuming; Jiang, Boqiong; Han, Jingyi; Wu, Zuliang; Lu, Hao; Nozaki, Tomohiro

    2018-04-05

    A dielectric barrier discharge (DBD) catalyst hybrid reactor with CeO2/γ-Al2O3 catalyst balls was investigated for benzene decomposition at atmospheric pressure and 30 °C. At an energy density of 37-40 J/L, benzene decomposition was as high as 92.5% when using the hybrid reactor with 5.0 wt% CeO2/γ-Al2O3, while it was 10%-20% when using a normal DBD reactor without a catalyst. Benzene decomposition using the hybrid reactor was almost the same as that using an O3 catalyst reactor with the same CeO2/γ-Al2O3 catalyst, indicating that O3 plays a key role in the benzene decomposition. Fourier transform infrared spectroscopy analysis showed that O3 adsorption on CeO2/γ-Al2O3 promotes the production of adsorbed O2− and O22−, which contribute to benzene decomposition over heterogeneous catalysts. Nanoparticle by-products (phenol and 1,4-benzoquinone) from benzene decomposition can be significantly reduced using the CeO2/γ-Al2O3 catalyst. H2O inhibits benzene decomposition; however, it improves CO2 selectivity. The deactivated CeO2/γ-Al2O3 catalyst can be regenerated by performing discharges at 100 °C and 192-204 J/L. A decomposition mechanism of benzene over the CeO2/γ-Al2O3 catalyst is proposed. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Simulation of halo particles with Simpsons

    International Nuclear Information System (INIS)

    Machida, Shinji

    2003-01-01

    Recent code improvements and some simulation results of halo particles with Simpsons will be presented. We tried to identify resonance behavior of halo particles by looking at tune evolution of individual macro particle

  8. Simulation of halo particles with Simpsons

    Science.gov (United States)

    Machida, Shinji

    2003-12-01

    Recent code improvements and some simulation results of halo particles with Simpsons will be presented. We tried to identify resonance behavior of halo particles by looking at tune evolution of individual macro particle.
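
    The tune evolution mentioned above is typically estimated from turn-by-turn particle positions; the sketch below takes the peak of a windowed FFT over successive chunks of turns, using synthetic data in place of Simpsons tracking output.

    import numpy as np

    def tune(x_turns):
        """Fractional betatron tune from turn-by-turn positions (location of the FFT peak)."""
        x = np.asarray(x_turns, dtype=float)
        x = (x - x.mean()) * np.hanning(x.size)
        spec = np.abs(np.fft.rfft(x))
        return (np.argmax(spec[1:]) + 1) / x.size

    # usage: a particle oscillating at tune 0.31 with noise, analysed in chunks of 512 turns
    turns = np.arange(4096)
    x = np.cos(2 * np.pi * 0.31 * turns) + 0.05 * np.random.default_rng(0).standard_normal(turns.size)
    print([round(tune(x[i:i + 512]), 3) for i in range(0, turns.size, 512)])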

  9. Adaptive RD Optimized Hybrid Sound Coding

    NARCIS (Netherlands)

    Schijndel, N.H. van; Bensa, J.; Christensen, M.G.; Colomes, C.; Edler, B.; Heusdens, R.; Jensen, J.; Jensen, S.H.; Kleijn, W.B.; Kot, V.; Kövesi, B.; Lindblom, J.; Massaloux, D.; Niamut, O.A.; Nordén, F.; Plasberg, J.H.; Vafin, R.; Virette, D.; Wübbolt, O.

    2008-01-01

    Traditionally, sound codecs have been developed with a particular application in mind, their performance being optimized for specific types of input signals, such as speech or audio (music), and application constraints, such as low bit rate, high quality, or low delay. There is, however, an

  10. Spatial resolution enhancement residual coding using hybrid ...

    Indian Academy of Sciences (India)

    ble value of the parameters (ρ1,ρ2,θ) in their respective ranges. For each block ... Dirac was developed by BBC and aimed at a royalty-free, open technology (Dirac video codec [online]. ..... In: Acoustics, Speech, and Signal Processing, 1995.

  11. Particle simulations of nonlinear whistler and Alfven wave instabilities - Amplitude modulation, decay, soliton and inverse cascading

    International Nuclear Information System (INIS)

    Omura, Yoshiharu; Matsumoto, Hiroshi.

    1989-01-01

    Past theoretical and numerical studies of the nonlinear evolution of electromagnetic cyclotron waves are reviewed. Such waves are commonly observed in space plasmas, for example Alfven waves in the solar wind or VLF whistler mode waves in the magnetosphere. The use of an electromagnetic full-particle code to study an electron cyclotron wave and of an electromagnetic hybrid code to study an ion cyclotron wave is demonstrated. Recent achievements in the simulation of the nonlinear evolution of electromagnetic cyclotron waves are discussed. The inverse cascading processes of finite-amplitude whistler and Alfven waves are interpreted in terms of elementary physical processes. 65 refs

  12. User manual for version 4.3 of the Tripoli-4 Monte-Carlo method particle transport computer code

    Energy Technology Data Exchange (ETDEWEB)

    Both, J.P.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B

    2003-07-01

    This manual relates to version 4.3 of the TRIPOLI-4 code. TRIPOLI-4 is a computer code simulating the transport of neutrons, photons, electrons and positrons. It can be used for radiation shielding calculations (long-distance propagation with flux attenuation in non-multiplying media) and neutronic calculations (fissile medium, criticality or sub-criticality basis). This makes it possible to calculate k_eff (for criticality), flux, currents, reaction rates and multi-group cross-sections. TRIPOLI-4 is a three-dimensional code that uses the Monte-Carlo method. It allows for point-wise description in terms of energy of cross-sections and multi-group homogenized cross-sections and features two modes of geometrical representation: surface and combinatorial. The code uses cross-section libraries in ENDF/B format (such as JEF2-2, ENDF/B-VI and JENDL) for point-wise description cross-sections in APOTRIM format (from the APOLLO2 code) or a format specific to TRIPOLI-4 for multi-group description. (authors)

  13. Lower hybrid current drive in shaped tokamaks

    International Nuclear Information System (INIS)

    Kesner, J.

    1993-01-01

    A time dependent lower hybrid current drive tokamak simulation code has been developed. This code combines the BALDUR tokamak simulation code and the Bonoli/Englade lower hybrid current drive code and permits the study of the interaction of lower hybrid current drive with neutral beam heating in shaped cross-section plasmas. The code is time dependent and includes the beam driven and bootstrap currents in addition to the current driven by the lower hybrid system. Examples of simulations are shown for the PBX-M experiment which include the effect of cross section shaping on current drive, ballooning mode stabilization by current profile control and sawtooth stabilization. A critical question in current drive calculations is the radial transport of the energetic electrons. The authors have developed a response function technique to calculate radial transport in the presence of an electric field. The consequences of the combined influences of radial diffusion and electric field acceleration are discussed

  14. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  15. Tristan code and its application

    Science.gov (United States)

    Nishikawa, K.-I.

    Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications, including simulations of the global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of the global solar wind-magnetosphere interaction, including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http:/www.physics.rutger.edu/˜kenichi. For beginners, the code (isssrc2.f) with simpler boundary conditions is a suitable starting point for running simulations. The future of global particle simulations for a global geospace general circulation (GGCM) model with predictive capability (for the Space Weather Program) is discussed.

  16. A prototype experiment to study charmed particle production and decay using a Holographic High Resolution Hydrogen Chamber (HOLEBC) and the European Hybrid Spectrometer

    CERN Multimedia

    2002-01-01

    The high resolution hydrogen bubble chamber LEBC has already been used in experiments at the SPS to detect particles with lifetime ≥ 5 × 10⁻¹³ s (NA13 & NA16). For this experiment, a new version of LEBC, called HOLEBC, has been constructed. This chamber and the NA26 version of the spectrometer have been used with classical optics in the NA27 experiment. A significant improvement in resolution was achieved (≈ 20 μm compared with ≈ 40 μm in LEBC) and hence a good sensitivity to all (known) charmed particle decays. The development of holographic recording techniques with HOLEBC is in progress. The prototype NA26 experiment is designed to evaluate the feasibility of the high sensitivity, high resolution holographic hydrogen bubble chamber technique and to evaluate various possible charm-selective triggers using the information from the spectrometer.

  17. Fourier optics along a hybrid optical fiber for Bessel-like beam generation and its applications in multiple-particle trapping.

    Science.gov (United States)

    Kim, Jongki; Jeong, Yoonseob; Lee, Sejin; Ha, Woosung; Shin, Jeon-Soo; Oh, Kyunghwan

    2012-02-15

    Highly efficient Bessel-like beam generation was achieved based on a new all-fiber method that implements Fourier transformation of a micro annular aperture along a concatenated composite optical fiber. The beam showed unique characteristics of tilted washboard optical potential in the transverse plane and sustained a nondiffracting length over 400 μm along the axial direction. Optical trapping of multiple dielectric particles and living Jurkat cells were successfully demonstrated along the axial direction of the beam in the water.

  18. Particle-tracking code (track3d) for convective solute transport modelling in the geosphere: Description and user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Nakka, B W; Chan, T

    1994-12-01

    A deterministic particle-tracking code (TRACK3D) has been developed to compute convective flow paths of conservative (nonreactive) contaminants through porous geological media. TRACK3D requires the groundwater velocity distribution, which, in our applications, results from flow simulations using AECL's MOTIF code. The MOTIF finite-element code solves the transient and steady-state coupled equations of groundwater flow, solute transport and heat transport in fractured/porous media. With few modifications, TRACK3D can be used to analyse the velocity distributions calculated by other finite-element or finite-difference flow codes. This report describes the assumptions, limitations, organization, operation and applications of the TRACK3D code, and provides a comprehensive user's manual.
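
    The core operation of such a particle-tracking code is the integration of pathlines through a given velocity field; the sketch below advances a particle with a fixed-step RK4 scheme through an analytic placeholder field, not MOTIF output.

    import numpy as np

    def track(x0, velocity, dt=1.0, n_steps=1000):
        """Integrate dx/dt = v(x) from x0; returns the sequence of particle positions."""
        x, path = np.asarray(x0, dtype=float), [np.asarray(x0, dtype=float)]
        for _ in range(n_steps):
            k1 = velocity(x)
            k2 = velocity(x + 0.5 * dt * k1)
            k3 = velocity(x + 0.5 * dt * k2)
            k4 = velocity(x + dt * k3)
            x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            path.append(x.copy())
        return np.array(path)

    # usage: a gently rotating, slowly settling flow field (illustrative only)
    v = lambda x: np.array([-0.01 * x[1], 0.01 * x[0], -1e-4 * x[2]])
    pathline = track([100.0, 0.0, 50.0], v, dt=10.0, n_steps=500)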

  19. Study on fission blanket fuel cycling of a fusion-fission hybrid energy generation system

    International Nuclear Information System (INIS)

    Zhou, Z.; Yang, Y.; Xu, H.

    2011-01-01

    This paper presents a preliminary study on neutron physics characteristics of a light water cooled fission blanket for a new type subcritical fusion-fission hybrid reactor aiming at electric power generation with low technical limits of fission fuel. The major objective is to study the fission fuel cycling performance in the blanket, which may possess significant impacts on the feasibility of the new concept of fusion-fission hybrid reactor with a high energy gain (M) and tritium breeding ratio (TBR). The COUPLE2 code developed by the Institute of Nuclear and New Energy Technology of Tsinghua University is employed to simulate the neutronic behaviour in the blanket. COUPLE2 combines the particle transport code MCNPX with the fuel depletion code ORIGEN2. The code calculation results show that soft neutron spectrum can yield M > 20 while maintaining TBR >1.15 and the conversion ratio of fissile materials CR > 1 in a reasonably long refuelling cycle (>five years). The preliminary results also indicate that it is rather promising to design a high-performance light water cooled fission blanket of fusion-fission hybrid reactor for electric power generation by directly loading natural or depleted uranium if an ITER-scale tokamak fusion neutron source is achievable.

  20. Numerical modeling of lower hybrid heating and current drive

    International Nuclear Information System (INIS)

    Valeo, E.J.; Eder, D.C.

    1986-03-01

    The generation of currents in toroidal plasma by application of waves in the lower hybrid frequency range involves the interplay of several physical phenomena which include: wave propagation in toroidal geometry, absorption via wave-particle resonances, the quasilinear generation of strongly nonequilibrium electron and ion distribution functions, and the self-consistent evolution of the current density in such a nonequilibrium plasma. We describe a code, LHMOD, which we have developed to treat these aspects of current drive and heating in tokamaks. We present results obtained by applying the code to a computation of current ramp-up and to an investigation of the possible importance of minority hydrogen absorption in a deuterium plasma as the "density limit" to current drive is approached.