WorldWideScience

Sample records for hybrid particle code

  1. A particle-based hybrid code for planet formation

    CERN Document Server

    Morishima, Ryuji

    2015-01-01

    We introduce a new particle-based hybrid code for planetary accretion. The code uses an $N$-body routine for interactions with planetary embryos while it can handle a large number of planetesimals using a super-particle approximation, in which a large number of small planetesimals are represented by a small number of tracers. Tracer-tracer interactions are handled by a statistical routine which uses the phase-averaged stirring and collision rates. We compare hybrid simulations with analytic predictions and pure $N$-body simulations for various problems in detail and find good agreement in all cases. The computational load of the statistical routine is comparable to or less than that of the $N$-body routine. The present code includes an option for hit-and-run bouncing but not fragmentation, which remains for future work.

  2. Hybrid codes: Methods and applications

    Energy Technology Data Exchange (ETDEWEB)

    Winske, D. (Los Alamos National Lab., NM (USA)); Omidi, N. (California Univ., San Diego, La Jolla, CA (USA))

    1991-01-01

    In this chapter we discuss "hybrid" algorithms used in the study of low-frequency electromagnetic phenomena, where one or more ion species are treated kinetically via standard PIC methods used in particle codes and the electrons are treated as a single charge-neutralizing, massless fluid. Other types of hybrid models are possible, as discussed in Winske and Quest, but hybrid codes with particle ions and massless fluid electrons have become the most common for simulating space plasma physics phenomena in the last decade, as we discuss in this paper.
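
    As a concrete illustration of the hybrid model described above, the sketch below computes the electric field from the massless-electron momentum equation (a generalized Ohm's law) on a 1-D periodic grid with numpy. The field names, the isothermal electron closure and all parameters are illustrative assumptions, not taken from any particular production code.

```python
import numpy as np

# Minimal 1-D sketch of the generalized Ohm's law used in hybrid codes:
# kinetic ions supply the bulk velocity u_i and density n, electrons are a
# massless, charge-neutralizing fluid, so the electric field follows from
#   E = -u_i x B + (J x B)/(e n) - grad(p_e)/(e n),   J = curl(B)/mu_0.
# All names (Te, dx, ...) are illustrative assumptions.

MU0 = 4e-7 * np.pi      # vacuum permeability [SI]
QE = 1.602e-19          # elementary charge [C]
KB = 1.381e-23          # Boltzmann constant [J/K]

def electric_field(n, u_i, B, dx, Te=1e5):
    """n: (nx,) ion density; u_i, B: (nx, 3) fields varying along x only."""
    # current density from Ampere's law with displacement current neglected:
    # in 1-D (d/dy = d/dz = 0), Jx = 0, Jy = -dBz/dx / mu0, Jz = dBy/dx / mu0
    J = np.zeros_like(B)
    J[:, 1] = -np.gradient(B[:, 2], dx) / MU0
    J[:, 2] = np.gradient(B[:, 1], dx) / MU0

    # isothermal electron pressure gradient (only the x component survives)
    grad_pe = np.zeros_like(B)
    grad_pe[:, 0] = np.gradient(n * KB * Te, dx)

    ne = n[:, None]  # quasineutrality: n_e = n_i
    return -np.cross(u_i, B) + np.cross(J, B) / (QE * ne) - grad_pe / (QE * ne)
```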

  3. Linear benchmarks between the hybrid codes HYMAGYC and HMGC to study energetic particle driven Alfvénic modes

    Science.gov (United States)

    Fogaccia, G.; Vlad, G.; Briguglio, S.

    2016-11-01

    Resonant interaction between energetic particles (EPs), produced by fusion reactions and/or additional heating systems, and shear Alfvén modes can destabilize global Alfvénic modes, enhancing the EP transport. In order to investigate EP transport in present and next-generation fusion devices, numerical simulations are recognized as a very important tool. Among the various numerical models, the hybrid MHD gyrokinetic one has been shown to be a valid compromise between a sufficiently accurate wave-particle interaction description and affordable computational resource requirements. This paper presents a linear benchmark between the hybrid codes HYMAGYC and HMGC. The HYMAGYC code solves the full, linear MHD equations in general curvilinear geometry for the bulk plasma and describes the EP population by the nonlinear gyrokinetic Vlasov equation. On the other hand, HMGC solves the nonlinear, reduced $O(\varepsilon_0^3)$, pressureless MHD equations ($\varepsilon_0$ being the inverse aspect ratio) for the bulk plasma and the drift-kinetic Vlasov equation for the EPs. The results of the HYMAGYC and HMGC codes have been compared both in the MHD limit and in a wide range of the EP parameter space for two test cases (one of which is the so-called TAE $n = 6$ ITPA Energetic Particle Group test case), both characterized by $\varepsilon_0 \ll 1$. In the first test case (test case A), good qualitative agreement is found w.r.t. real frequencies, growth rates and spatial structures of the most unstable modes, with some quantitative differences for the growth rates. For the so-called ITPA test case (test case B), at the nominal energetic particle density value, the disagreement between the two codes is, on the contrary, also qualitative, as a different mode is found as the most unstable one.

  4. Hybrid Noncoherent Network Coding

    CERN Document Server

    Skachek, Vitaly; Nedic, Angelia

    2011-01-01

    We describe a novel extension of subspace codes for noncoherent networks, suitable for use when the network is viewed as a communication system that introduces both dimension and symbol errors. We show that when symbol erasures occur in a significant number of different basis vectors transmitted through the network and when the min-cut of the network is much smaller than the length of the transmitted codewords, the new family of codes outperforms their subspace code counterparts. For the proposed coding scheme, termed hybrid network coding, we derive two upper bounds on the size of the codes. These bounds represent a variation of the Singleton and of the sphere-packing bound. We show that a simple concatenated scheme that represents a combination of subspace codes and Reed-Solomon codes is asymptotically optimal with respect to the Singleton bound. Finally, we describe two efficient decoding algorithms for concatenated subspace codes that in certain cases have smaller complexity than subspace decoder...

  5. Towards the optimization of a gyrokinetic Particle-In-Cell (PIC) code on large-scale hybrid architectures

    Science.gov (United States)

    Ohana, N.; Jocksch, A.; Lanti, E.; Tran, T. M.; Brunner, S.; Gheller, C.; Hariri, F.; Villard, L.

    2016-11-01

    With the aim of enabling state-of-the-art gyrokinetic PIC codes to benefit from the performance of recent multithreaded devices, we developed an application from a platform called the “PIC-engine” [1, 2, 3] embedding simplified basic features of the PIC method. The application solves the gyrokinetic equations in a sheared plasma slab using B-spline finite elements up to fourth order to represent the self-consistent electrostatic field. Preliminary studies of the so-called Particle-In-Fourier (PIF) approach, which uses Fourier modes as basis functions in the periodic dimensions of the system instead of the real-space grid, show that this method can be faster than PIC for simulations with a small number of Fourier modes. Similarly to the PIC-engine, multiple levels of parallelism have been implemented using MPI+OpenMP [2] and MPI+OpenACC [1], the latter exploiting the computational power of GPUs without requiring complete code rewriting. It is shown that sorting particles [3] can lead to performance improvement by increasing data locality and vectorizing grid memory access. Weak scalability tests have been successfully run on the GPU-equipped Cray XC30 Piz Daint (at CSCS) up to 4,096 nodes. The reduced time-to-solution will enable more realistic and thus more computationally intensive simulations of turbulent transport in magnetic fusion devices.
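
    The following sketch illustrates the Particle-In-Fourier idea mentioned above for a 1-D periodic electrostatic problem: charge is deposited directly onto a few Fourier mode amplitudes and the field is evaluated back at the particles mode by mode. It is a minimal illustration only; the function names and normalizations are assumptions, not the PIC-engine's actual interfaces.

```python
import numpy as np

# Particle-In-Fourier (PIF) sketch: no real-space grid, only a small set of
# Fourier modes of the periodic dimension. Illustrative assumptions only.

def deposit_fourier(x, w, L, n_modes):
    """x: particle positions in [0, L); w: particle charges.
    Returns complex density amplitudes rho_k for k = 2*pi*m/L, m = 0..n_modes-1."""
    k = 2.0 * np.pi * np.arange(n_modes) / L
    # rho_k = (1/L) * sum_p w_p * exp(-i k x_p)
    return (np.exp(-1j * np.outer(k, x)) * w).sum(axis=1) / L

def evaluate_field(rho_k, x, L, eps0=1.0):
    """Electrostatic field at the particle positions, solving Poisson's
    equation mode by mode: i k E_k = rho_k / eps0  =>  E_k = -i rho_k / (eps0 k)."""
    k = 2.0 * np.pi * np.arange(rho_k.size) / L
    E = np.zeros_like(x)
    for m in range(1, rho_k.size):            # skip k = 0 (neutralizing background)
        E_k = -1j * rho_k[m] / (eps0 * k[m])
        E += 2.0 * np.real(E_k * np.exp(1j * k[m] * x))   # add complex conjugate mode
    return E
```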

  6. The Accurate Particle Tracer Code

    CERN Document Server

    Wang, Yulei; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module. The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods...

  7. Turbo Codes with Hybrid Interleaving Mode

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In the investigation of turbo codes, either random interleavers or structured interleavers are used. By combining the two kinds of interleavers in one turbo encoder, a hybrid interleaving mode is proposed in this paper. Computer simulations show that the performance of turbo codes with the hybrid interleaving mode is better than that with the typical interleaving mode.

  8. Hybrid particles and associated methods

    Science.gov (United States)

    Fox, Robert V; Rodriguez, Rene; Pak, Joshua J; Sun, Chivin

    2015-02-10

    Hybrid particles are disclosed that comprise: a coating surrounding a chalcopyrite material, the coating comprising a metal, a semiconductive material, or a polymer; a core comprising a chalcopyrite material and a shell comprising a functionalized chalcopyrite material, the shell enveloping the core; or a reaction product of a chalcopyrite material and at least one of a reagent, heat, and radiation. Methods of forming the hybrid particles are also disclosed.

  9. NDSPMHD Smoothed Particle Magnetohydrodynamics Code

    Science.gov (United States)

    Price, Daniel J.

    2011-01-01

    This paper presents an overview and introduction to Smoothed Particle Hydrodynamics and Magnetohydrodynamics in theory and in practice. Firstly, we give a basic grounding in the fundamentals of SPH, showing how the equations of motion and energy can be self-consistently derived from the density estimate. We then show how to interpret these equations using the basic SPH interpolation formulae and highlight the subtle difference in approach between SPH and other particle methods. In doing so, we also critique several 'urban myths' regarding SPH, in particular the idea that one can simply increase the 'neighbour number' more slowly than the total number of particles in order to obtain convergence. We also discuss the origin of numerical instabilities such as the pairing and tensile instabilities. Finally, we give practical advice on how to resolve three of the main issues with SPMHD: removing the tensile instability, formulating dissipative terms for MHD shocks and enforcing the divergence constraint on the particles, and we give the current status of developments in this area. Accompanying the paper is the first public release of the NDSPMHD SPH code, a 1, 2 and 3 dimensional code designed as a testbed for SPH/SPMHD algorithms that can be used to test many of the ideas and used to run all of the numerical examples contained in the paper.
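
    For readers unfamiliar with the density estimate referred to above, the following minimal numpy sketch evaluates the standard SPH density summation with a cubic spline kernel using a brute-force neighbour search. It is illustrative only and is not code from NDSPMHD.

```python
import numpy as np

# SPH density estimate: rho_a = sum_b m_b W(|r_a - r_b|, h), here with the
# standard M4 cubic spline kernel in 3-D and a fixed smoothing length.

def cubic_spline_W(r, h):
    """Standard cubic spline kernel in three dimensions."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
        np.where(q < 2.0, 0.25*(2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(pos, mass, h):
    """pos: (N, 3) particle positions; mass: (N,); h: smoothing length."""
    rho = np.zeros(len(pos))
    for a in range(len(pos)):
        r = np.linalg.norm(pos - pos[a], axis=1)   # brute-force distances
        rho[a] = np.sum(mass * cubic_spline_W(r, h))
    return rho
```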

  10. Analysis of Non-binary Hybrid LDPC Codes

    CERN Document Server

    Sassatelli, Lucile

    2008-01-01

    In this paper, we analyse asymptotically a new class of LDPC codes called Non-binary Hybrid LDPC codes, which has been recently introduced. We use density evolution techniques to derive a stability condition for hybrid LDPC codes, and prove their threshold behavior. We study this stability condition to conclude on asymptotic advantages of hybrid LDPC codes compared to their non-hybrid counterparts.

  11. Implementation of a hybrid particle code with a PIC description in r–z and a gridless description in ϕ into OSIRIS

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, A., E-mail: davidsoa@physics.ucla.edu [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095 (United States); Tableman, A., E-mail: Tableman@physics.ucla.edu [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095 (United States); An, W., E-mail: anweiming@ucla.edu [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095 (United States); Tsung, F.S., E-mail: tsung@physics.ucla.edu [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095 (United States); Lu, W., E-mail: luwei@ucla.edu [Department of Electrical Engineering, University of California, Los Angeles, CA 90095 (United States); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Vieira, J., E-mail: jorge.vieira@ist.utl.pt [GoLP/Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, Lisbon (Portugal); Fonseca, R.A., E-mail: ricardo.fonseca@iscte.pt [GoLP/Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, Lisbon (Portugal); Departamento Ciências e Tecnologias da Informação, ISCTE – Instituto Universitário de Lisboa, 1649-026 Lisboa (Portugal); Silva, L.O., E-mail: luis.silva@ist.utl.pt [GoLP/Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, Lisbon (Portugal); Mori, W.B., E-mail: mori@physics.ucla.edu [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095 (United States); Department of Electrical Engineering, University of California, Los Angeles, CA 90095 (United States)

    2015-01-15

    For many plasma physics problems, three-dimensional and kinetic effects are very important. However, such simulations are very computationally intensive. Fortunately, there is a class of problems for which there is nearly azimuthal symmetry and the dominant three-dimensional physics is captured by the inclusion of only a few azimuthal harmonics. Recently, it was proposed [1] to model one such problem, laser wakefield acceleration, by expanding the fields and currents in azimuthal harmonics and truncating the expansion. The complex amplitudes of the fundamental and first harmonic for the fields were solved on an r–z grid and a procedure for calculating the complex current amplitudes for each particle based on its motion in Cartesian geometry was presented using a Marder's correction to maintain the validity of Gauss's law. In this paper, we describe an implementation of this algorithm into OSIRIS using a rigorous charge-conserving current deposition method to maintain the validity of Gauss's law. We show that this algorithm is a hybrid method which uses a particle-in-cell description in r–z and a gridless description in ϕ. We include the ability to keep an arbitrary number of harmonics and higher order particle shapes. Examples for laser wakefield acceleration, plasma wakefield acceleration, and beam loading are also presented and directions for future work are discussed.
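
    The sketch below illustrates the quasi-3D bookkeeping described above: particles carry full Cartesian coordinates, but their charge is accumulated as complex azimuthal-harmonic amplitudes on an r-z grid. Nearest-grid-point deposition and the normalization are deliberate simplifications; the actual OSIRIS implementation uses higher-order particle shapes and a charge-conserving current deposit.

```python
import numpy as np

# Quasi-3D deposition sketch: particles move in full 3-D Cartesian space, but
# sources are stored as complex azimuthal mode amplitudes on an r-z grid.
# Nearest-grid-point weighting and normalization are simplifications.

def deposit_azimuthal_modes(xyz, q, n_modes, nr, nz, dr, dz):
    """xyz: (N, 3) Cartesian positions; q: (N,) charges.
    Returns rho[m, ir, iz], the complex amplitude of azimuthal mode m."""
    x, y, z = xyz.T
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)
    ir = np.clip(np.rint(r / dr).astype(int), 0, nr - 1)
    iz = np.clip(np.rint(z / dz).astype(int), 0, nz - 1)

    rho = np.zeros((n_modes, nr, nz), dtype=complex)
    for m in range(n_modes):
        # mode-m weight of each particle: q * exp(-i m phi)
        np.add.at(rho[m], (ir, iz), q * np.exp(-1j * m * phi))
    return rho

# The real-space density is recovered from the truncated expansion
#   rho(r, phi, z) ~ Re( rho_0 + 2 * sum_{m>=1} rho_m * exp(i m phi) ),
# up to the grid-cell volume normalization omitted here.
```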

  12. An implicit Smooth Particle Hydrodynamic code

    Energy Technology Data Exchange (ETDEWEB)

    Knapp, Charles E. [Univ. of New Mexico, Albuquerque, NM (United States)

    2000-05-01

    An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless, and use particles to model the fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix. It uses sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. Then, the results of a number of test cases are discussed, which include a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. In the single jet of gas case it has been demonstrated that the implicit code can do a problem in a much shorter time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.
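
    A minimal sketch of the implicit time-stepping strategy described above is given below, using SciPy's Jacobian-free Newton-Krylov solver on a backward-Euler residual. The toy right-hand side stands in for the SPH equations; none of the names correspond to SPHINX internals.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Backward-Euler step solved with a Jacobian-free Newton-Krylov method:
# Jacobian-vector products are built from finite differences, so no explicit
# Jacobian matrix is ever stored. `rhs` is a toy stand-in for the SPH
# right-hand side (accelerations, energy rates).

def rhs(y):
    """Toy right-hand side: a stiff linear relaxation toward 1, illustrative only."""
    return -50.0 * (y - 1.0)

def implicit_step(y_old, dt):
    # Find y_new such that  y_new - y_old - dt * rhs(y_new) = 0.
    residual = lambda y_new: y_new - y_old - dt * rhs(y_new)
    return newton_krylov(residual, y_old, f_tol=1e-10)

y = np.zeros(100)                 # initial state
for _ in range(10):
    y = implicit_step(y, dt=0.1)  # stable even for dt >> explicit limit
```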

  13. FLUKA: A Multi-Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Ferrari, A.; Sala, P.R.; /CERN /INFN, Milan; Fasso, A.; /SLAC; Ranft, J.; /Siegen U.

    2005-12-14

    This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner's guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.

  14. Non-binary Hybrid LDPC Codes: Structure, Decoding and Optimization

    CERN Document Server

    Sassatelli, Lucile

    2007-01-01

    In this paper, we propose to study and optimize a very general class of LDPC codes whose variable nodes belong to finite sets with different orders. We named this class of codes Hybrid LDPC codes. Although efficient optimization techniques exist for binary LDPC codes and more recently for non-binary LDPC codes, they both exhibit drawbacks due to different reasons. Our goal is to capitalize on the advantages of both families by building codes with binary (or small finite set order) and non-binary parts in their factor graph representation. The class of Hybrid LDPC codes is obviously larger than existing types of codes, which gives more degrees of freedom to find good codes where the existing codes show their limits. We give two examples where hybrid LDPC codes show their interest.

  15. Extended Lorentz code of a superluminal particle

    CERN Document Server

    Ter-Kazarian, G

    2012-01-01

    While the OPERA experimental scrutiny is ongoing in the community, in the present article we construct a toy model of an extended Lorentz code (ELC) of uniform motion, which will be a well established, consistent and unique theoretical framework to explain the apparent violations of the standard Lorentz code (SLC), the possible manifestations of which arise in a similar way in all particle sectors. We argue that in the ELC framework the propagation of the superluminal particle, which implies a modified dispersion relation, could be consistent with causality. Furthermore, in this framework, we give a justification of the forbiddance of Vavilov-Cherenkov (VC) radiation or analog processes in vacuum. To be consistent with the SN1987A and OPERA data, we identify the neutrinos from SN1987A and the light as so-called 1-th type particles carrying the individual Lorentz motion code with the velocity of light $c_{1}\equiv c$ in vacuum as the maximum attainable velocity for all 1-th type particles. Ther...

  16. BOA, Beam Optics Analyzer A Particle-In-Cell Code

    Energy Technology Data Exchange (ETDEWEB)

    Thuc Bui

    2007-12-06

    The program was tasked with implementing time-dependent analysis of charged particles in an existing finite element code with adaptive meshing, called Beam Optics Analyzer (BOA). BOA was initially funded by a DOE Phase II program to use the finite element method with adaptive meshing to track particles in unstructured meshes. It uses modern programming techniques and state-of-the-art data structures, so that new methods, features and capabilities are easily added and maintained. This Phase II program was funded to implement plasma simulations in BOA and extend its capabilities to model thermal electrons, secondary emissions, and self magnetic fields, and to implement more comprehensive post-processing and a feature-rich GUI. The program was successful in implementing thermal electrons, secondary emissions, and self magnetic field calculations. The BOA GUI was also upgraded significantly, and CCR is receiving interest from the microwave tube and semiconductor equipment industry for the code. Implementation of PIC analysis was partially successful. Computational resource requirements for modeling more than 2000 particles begin to exceed the capability of most readily available computers. Modern plasma analysis typically requires modeling of approximately 2 million particles or more. The problem is that tracking many particles in an unstructured mesh that is adapting becomes inefficient. In particular, memory requirements become excessive. This probably makes particle tracking in unstructured meshes currently infeasible with commonly available computer resources. Consequently, Calabazas Creek Research, Inc. is exploring hybrid codes where the electromagnetic fields are solved on the unstructured, adaptive mesh while particles are tracked on a fixed mesh. Efficient interpolation routines should be able to transfer information between nodes of the two meshes. If successfully developed, this could provide high accuracy and reasonable computational efficiency.

  17. Implementation of a hybrid particle code with a PIC description in r-z and a gridless description in $\\phi$ into OSIRIS

    CERN Document Server

    Davidson, A; An, W; Tsung, F S; Lu, W; Vieira, J; Fonseca, R A; Silva, L O; Mori, W B

    2014-01-01

    For many plasma physics problems, three-dimensional and kinetic effects are very important. However, such simulations are very computationally intensive. Fortunately, there is a class of problems for which there is nearly azimuthal symmetry and the dominant three-dimensional physics is captured by the inclusion of only a few azimuthal harmonics. Recently, it was proposed [A. Lifschitz et al., J. Comp. Phys. 228 (5) (2009) 1803-1814] to model one such problem, laser wakefield acceleration, by expanding the fields and currents in azimuthal harmonics and truncating the expansion after only the first harmonic. The complex amplitudes of the fundamental and first harmonic for the fields were solved on an r-z grid and a procedure for calculating the complex current amplitudes for each particle based on its motion in Cartesian geometry was presented using a Marder's correction to maintain the validity of Gauss's law. In this paper, we describe an implementation of this algorithm into OSIRIS using a rigorous charge co...

  18. GOTPM: A Parallel Hybrid Particle-Mesh Treecode

    CERN Document Server

    Dubinski, J; Park, C; Humble, R J; Dubinski, John; Kim, Juhan; Park, Changbom; Humble, Robin

    2004-01-01

    We describe a parallel, cosmological N-body code based on a hybrid scheme using the particle-mesh (PM) and Barnes-Hut (BH) oct-tree algorithm. We call the algorithm GOTPM for Grid-of-Oct-Trees-Particle-Mesh. The code is parallelized using the Message Passing Interface (MPI) library and is optimized to run on Beowulf clusters as well as symmetric multi-processors. The gravitational potential is determined on a mesh using a standard PM method with particle forces determined through interpolation. The softened PM force is corrected for short range interactions using a grid of localized BH trees throughout the entire simulation volume in a completely analogous way to P$^3$M methods. This method makes no assumptions about the local density for short range force corrections and so is consistent with the results of the P$^3$M method in the limit that the treecode opening angle parameter, $\\theta \\to 0$. (abridged)
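
    The particle-mesh half of such a scheme can be sketched in a few lines of numpy: deposit the particle masses on a periodic grid, solve Poisson's equation with an FFT, and transform back. Nearest-grid-point deposition is used for brevity and the short-range tree correction is omitted; nothing here is taken from GOTPM itself.

```python
import numpy as np

# Particle-mesh (PM) gravity sketch: NGP mass assignment on a periodic grid,
# FFT solve of Poisson's equation, inverse FFT back to real space. The
# short-range tree correction described in the abstract is not shown.

def pm_potential(pos, mass, ngrid, boxsize, G=1.0):
    """pos: (N, 3) positions in [0, boxsize); returns the potential grid."""
    cell = boxsize / ngrid
    idx = np.floor(pos / cell).astype(int) % ngrid
    rho = np.zeros((ngrid,) * 3)
    np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), mass / cell**3)

    # FFT solve of  nabla^2 phi = 4 pi G rho  =>  phi_k = -4 pi G rho_k / k^2
    kfreq = 2.0 * np.pi * np.fft.fftfreq(ngrid, d=cell)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                        # avoid division by zero
    phi_k = -4.0 * np.pi * G * np.fft.fftn(rho) / k2
    phi_k[0, 0, 0] = 0.0                     # drop the mean (k = 0) mode
    return np.real(np.fft.ifftn(phi_k))
```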

  19. Computer code for intraply hybrid composite design

    Science.gov (United States)

    Chamis, C. C.; Sinclair, J. H.

    1981-01-01

    A computer program has been developed and is described herein for intraply hybrid composite design (INHYD). The program includes several composite micromechanics theories, intraply hybrid composite theories and a hygrothermomechanical theory. These theories provide INHYD with considerable flexibility and capability which the user can exercise through several available options. Key features and capabilities of INHYD are illustrated through selected samples.

  20. Comparisons of time explicit hybrid kinetic-fluid code Architect for Plasma Wakefield Acceleration with a full PIC code

    Science.gov (United States)

    Massimo, F.; Atzeni, S.; Marocchino, A.

    2016-12-01

    Architect, a time-explicit hybrid code designed to perform quick simulations for electron-driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle-in-Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically as in a PIC code and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper both the underlying algorithms and a comparison with a fully three-dimensional particle-in-cell code are reported. The comparison highlights the good agreement between the two models up to the weakly non-linear regimes. In highly non-linear regimes the two models only disagree in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.

  1. Hybrid Coding of Image Sequences by Using Wavelet Transform

    Directory of Open Access Journals (Sweden)

    M. Surin

    2000-04-01

    In this paper, a new method of hybrid coding of image sequences using the wavelet transform is proposed. The basic MPEG scheme with the DCT has been modified by replacing the DCT with the wavelet transform. In the proposed method, motion estimation and compensation are used to calculate motion vectors, and the difference frame between the current frame and the compensated frame is coded using the wavelet transform. Some experimental results of image sequence coding with the new method are presented.
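
    The sketch below illustrates the coding step described above under simplifying assumptions: the motion-compensated residual frame is transformed with a single 2-D Haar wavelet level and uniformly quantized. The wavelet choice, quantizer and function names are illustrative, not those of the paper.

```python
import numpy as np

# Wavelet coding of a motion-compensated residual: one 2-D Haar level plus
# uniform quantization. Assumes the frame dimensions are even.

def haar2d(block):
    """One 2-D Haar decomposition level -> (LL, LH, HL, HH) subbands."""
    a = (block[0::2] + block[1::2]) / 2.0       # rows: average
    d = (block[0::2] - block[1::2]) / 2.0       # rows: detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def encode_residual(current, predicted, q_step=8.0):
    """Quantized wavelet coefficients of the motion-compensated residual."""
    residual = current.astype(float) - predicted.astype(float)
    subbands = haar2d(residual)
    return [np.round(s / q_step).astype(int) for s in subbands]
```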

  2. Hybrid Simulations of Particle Acceleration at Shocks

    CERN Document Server

    Caprioli, Damiano

    2014-01-01

    We present the results of large hybrid (kinetic ions - fluid electrons) simulations of particle acceleration at non-relativistic collisionless shocks. Ion acceleration efficiency and magnetic field amplification are investigated in detail as a function of shock inclination and strength, and compared with predictions of diffusive shock acceleration theory, for shocks with Mach number up to 100. Moreover, we discuss the relative importance of resonant and Bell's instability in the shock precursor, and show that diffusion in the self-generated turbulence can be effectively parametrized as Bohm diffusion in the amplified magnetic field.

  3. Longitudinal development of extensive air showers: hybrid code SENECA and full Monte Carlo

    CERN Document Server

    Ortiz, J A; De Souza, V; Ortiz, Jeferson A.; Tanco, Gustavo Medina

    2004-01-01

    New experiments, exploring the ultra-high energy tail of the cosmic ray spectrum with unprecedented detail, are exerting severe pressure on extensive air shower modeling. Detailed fast codes are needed in order to extract and understand the richness of information now available. Some hybrid simulation codes have been proposed recently to this effect (e.g., the combination of the traditional Monte Carlo scheme and a system of cascade equations, or pre-simulated air showers). In this context, we explore the potential of SENECA, an efficient hybrid tridimensional simulation code, as a valid practical alternative to full Monte Carlo simulations of extensive air showers generated by ultra-high energy cosmic rays. We extensively compare the hybrid method with the traditional, but time consuming, full Monte Carlo code CORSIKA, which is the de facto standard in the field. The hybrid scheme of the SENECA code is based on the simulation of each particle with the traditional Monte Carlo method at two steps of the shower devel...

  4. Particle Merging Algorithm for PIC Codes

    CERN Document Server

    Vranic, Marija; Martins, Joana L; Fonseca, Ricardo A; Silva, Luis O

    2014-01-01

    Particle-in-cell merging algorithms aim to resample dynamically the six-dimensional phase space occupied by particles without substantially distorting the physical description of the system. Whereas various approaches have been proposed in previous works, none of them seemed to be able to fully conserve charge, momentum, energy and their associated distributions. We describe here an alternative algorithm based on the coalescence of N massive or massless particles, considered to be close enough in phase space, into two new macro-particles. The local conservation of charge, momentum and energy is ensured by the resolution of a system of scalar equations. Various simulation comparisons have been carried out with and without the merging algorithm, from classical plasma physics problems to extreme scenarios where quantum electrodynamics is taken into account, showing, in addition to the conservation of local quantities, the good reproducibility of the particle distributions. In case where the number of particles o...
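
    As a hedged illustration of the merging idea, the sketch below coalesces N equal-mass, non-relativistic macro-particles into two new macro-particles whose total weight, momentum and kinetic energy match the originals exactly. The published algorithm handles relativistic (and massless) particles and chooses the momentum split more carefully; this is only the simplest conservative variant.

```python
import numpy as np

# Merge N close-by macro-particles into two, conserving total weight,
# momentum and (non-relativistic) kinetic energy exactly.

def merge_to_two(w, p, mass=1.0):
    """w: (N,) weights; p: (N, 3) momenta of the particles to be merged."""
    W = w.sum()                                          # total weight
    P = (w[:, None] * p).sum(axis=0)                     # total momentum
    E = (w * (p**2).sum(axis=1)).sum() / (2.0 * mass)    # total kinetic energy

    c = P / W                         # mean momentum per unit weight
    # spread needed so that W*(|c|^2 + d^2)/(2m) equals the original energy
    d2 = max(2.0 * mass * E / W - c @ c, 0.0)

    # any unit direction conserves both sums; align it with the spread of the
    # original momenta so the split is not completely arbitrary
    e = p[0] - c
    norm = np.linalg.norm(e)
    e = e / norm if norm > 1e-12 else np.array([1.0, 0.0, 0.0])

    p1 = c + np.sqrt(d2) * e
    p2 = c - np.sqrt(d2) * e
    return (W / 2.0, p1), (W / 2.0, p2)
```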

  5. A New Efficient Hybrid Coding For Progressive Transmission Of Images

    Science.gov (United States)

    Akansu, Ali N.; Haddad, Richard A.

    1988-10-01

    The hybrid coding technique developed here combines two concepts: progressive interactive image transmission coupled with transform differential coding. There are two notable features in this approach. First, a local average of an mxm (typically 5 x 5) pixel array is formed, quantized and transmitted to the receiver for a preliminary display. This initial pass provides a crude but recognizable image before any further processing or encoding. Upon request from the receiver, the technique then switches to an iterative transform differential encoding scheme. Each iteration progressively provides more image detail at the receiver as requested. Secondly, this hybrid coding technique uses a computationally efficient, real, orthogonal transform, called the Modified Hermite Transform (MHT) [1], to encode the difference image. This MHT is then compared with the Discrete Cosine Transform (DCT) [2] for the same hybrid algorithm. For the standard images tested, we found that the progressive differential coding method performs comparably to the well-known direct transform coding methods. The DCT was used as the standard in this traditional approach. This hybrid technique was within 5% in peak-to-peak SNR for the "LENA" image. Comparisons between the MHT and DCT as the transform vehicle for the hybrid technique were also conducted. For a transform block size N=8, the DCT requires 50% more multiplications than the MHT. The price paid for this efficiency is modest. For the example tested ("LENA"), the DCT performance gain was 4.2 dB while the MHT was 3.8 dB.
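
    The first pass described above reduces the image to quantized block averages for a crude preview. A minimal numpy sketch of that step (and of the receiver-side upsampling) follows; the block size and quantizer step are illustrative assumptions.

```python
import numpy as np

# First pass of a progressive scheme: send quantized m x m block averages so
# the receiver can display a coarse preview before any refinement passes.

def first_pass(image, m=5, q_step=4):
    """Return quantized m x m block averages of a grayscale image."""
    h, w = image.shape
    h, w = h - h % m, w - w % m              # crop to a multiple of m
    blocks = image[:h, :w].reshape(h // m, m, w // m, m)
    averages = blocks.mean(axis=(1, 3))
    return np.round(averages / q_step).astype(np.int16)

def preview(coarse, m=5, q_step=4):
    """Receiver-side preview: upsample the dequantized block averages."""
    return np.kron(coarse.astype(float) * q_step, np.ones((m, m)))
```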

  6. Multiview coding mode decision with hybrid optimal stopping model.

    Science.gov (United States)

    Zhao, Tiesong; Kwong, Sam; Wang, Hanli; Wang, Zhou; Pan, Zhaoqing; Kuo, C-C Jay

    2013-04-01

    In a generic decision process, optimal stopping theory aims to achieve a good tradeoff between decision performance and time consumed, with the advantages of theoretical decision-making and predictable decision performance. In this paper, optimal stopping theory is employed to develop an effective hybrid model for the mode decision problem, which aims to theoretically achieve a good tradeoff between the two interrelated measurements in mode decision, as computational complexity reduction and rate-distortion degradation. The proposed hybrid model is implemented and examined with a multiview encoder. To support the model and further promote coding performance, the multiview coding mode characteristics, including predicted mode probability and estimated coding time, are jointly investigated with inter-view correlations. Exhaustive experimental results with a wide range of video resolutions reveal the efficiency and robustness of our method, with high decision accuracy, negligible computational overhead, and almost intact rate-distortion performance compared to the original encoder.

  7. Longitudinal development of extensive air showers: Hybrid code SENECA and full Monte Carlo

    Science.gov (United States)

    Ortiz, Jeferson A.; Medina-Tanco, Gustavo; de Souza, Vitor

    2005-06-01

    New experiments, exploring the ultra-high energy tail of the cosmic ray spectrum with unprecedented detail, are exerting severe pressure on extensive air shower modelling. Detailed fast codes are needed in order to extract and understand the richness of information now available. Some hybrid simulation codes have been proposed recently to this effect (e.g., the combination of the traditional Monte Carlo scheme and a system of cascade equations, or pre-simulated air showers). In this context, we explore the potential of SENECA, an efficient hybrid tri-dimensional simulation code, as a valid practical alternative to full Monte Carlo simulations of extensive air showers generated by ultra-high energy cosmic rays. We extensively compare the hybrid method with the traditional, but time consuming, full Monte Carlo code CORSIKA, which is the de facto standard in the field. The hybrid scheme of the SENECA code is based on the simulation of each particle with the traditional Monte Carlo method at two steps of the shower development: the first step predicts the large fluctuations in the very first particle interactions at high energies while the second step provides a well detailed lateral distribution simulation of the final stages of the air shower. Both Monte Carlo simulation steps are connected by a cascade equation system which reproduces correctly the hadronic and electromagnetic longitudinal profile. We study the influence of this approach on the main longitudinal characteristics of proton, iron nucleus and gamma induced air showers and compare with the predictions of the well-known CORSIKA code using the QGSJET hadronic interaction model.

  8. The Particle Accelerator Simulation Code PyORBIT

    Energy Technology Data Exchange (ETDEWEB)

    Gorlov, Timofey V [ORNL; Holmes, Jeffrey A [ORNL; Cousineau, Sarah M [ORNL; Shishlo, Andrei P [ORNL

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at the Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower level code implemented in the C++ language. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.

  9. Multiuser Cooperation with Hybrid Network Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    G. Wang

    2014-04-01

    In this paper a hybrid Network Coding Cooperation (hybrid-NCC) system is proposed to achieve both reliable transmission and high throughput in wireless networks. To balance transmission reliability with throughput, the users are divided into cooperative sub-networks based on geographical information, and the cooperation is implemented in each sub-network. After receiving signals from its cooperative partners, each user encodes them by exploiting hybrid network coding and then forwards the recoded symbols via Link-Adaptive Regenerative (LAR) relaying. First, the Diversity-Multiplexing Tradeoff (DMT) is analyzed to demonstrate that the proposed system is bandwidth-efficient. Second, the Symbol Error Probability (SEP) is also derived, which shows that the proposed system achieves higher reliability compared to the traditional Complex Field Network Coding Cooperation (CFNCC). Moreover, because dedicated relays are not required, the proposed system can both reduce costs and enhance the flexibility of the implementation. Finally, the analytical results are supported and validated by numerical simulations.

  10. Silicone-containing aqueous polymer dispersions with hybrid particle structure.

    Science.gov (United States)

    Kozakiewicz, Janusz; Ofat, Izabela; Trzaskowska, Joanna

    2015-09-01

    In this paper the synthesis, characterization and application of silicone-containing aqueous polymer dispersions (APD) with hybrid particle structure are reviewed based on available literature data. Advantages of synthesis of dispersions with hybrid particle structure over blending of individual dispersions are pointed out. Three main processes leading to silicone-containing hybrid APD are identified and described in detail: (1) emulsion polymerization of organic unsaturated monomers in aqueous dispersions of silicone polymers or copolymers, (2) emulsion copolymerization of unsaturated organic monomers with alkoxysilanes or polysiloxanes with unsaturated functionality and (3) emulsion polymerization of alkoxysilanes (in particular with unsaturated functionality) and/or cyclic siloxanes in organic polymer dispersions. The effect of various factors on the properties of such hybrid APD and films as well as on hybrid particles composition and morphology is presented. It is shown that core-shell morphology where silicones constitute either the core or the shell is predominant in hybrid particles. Main applications of silicone-containing hybrid APD and related hybrid particles are reviewed including (1) coatings which show specific surface properties such as enhanced water repellency or antisoiling or antigraffiti properties due to migration of silicone to the surface, and (2) impact modifiers for thermoplastics and thermosets. Other processes in which silicone-containing particles with hybrid structure can be obtained (miniemulsion polymerization, polymerization in non-aqueous media, hybridization of organic polymer and polysiloxane, emulsion polymerization of silicone monomers in silicone polymer dispersions and physical methods) are also discussed. Prospects for further developments in the area of silicone-containing hybrid APD and related hybrid particles are presented.

  11. The hybrid opacity code SCO-RCG: recent developments

    CERN Document Server

    Pain, Jean-Christophe; Porcherot, Quentin; Blenski, Thomas

    2013-01-01

    Absorption and emission spectra of multicharged-ion plasmas contain a huge number of electron configurations and electric-dipolar lines, which can be handled by global methods. However, some transition arrays consist only of a small bunch of lines. For that reason, we developed the hybrid opacity code SCO-RCG combining the (statistical) super-transition-array method and the (detailed) fine-structure calculation (requiring the diagonalization of the Hamiltonian matrix) of atomic structure. In order to decide whether a detailed treatment of lines is necessary and to determine the validity of statistical methods, the code involves criteria taking into account coalescence of lines and porosity (localized absence of lines) in transition arrays. Data required for the calculation of detailed transition arrays (Slater, spin-orbit and dipolar integrals) are provided by the super-configuration code SCO, which takes into account plasma screening effects on wavefunctions. Then, level energies and lines are calculated by ...

  12. Analysis of extensive air showers with the hybrid code SENECA

    CERN Document Server

    Ortiz, J A; Medina-Tanco, G; Ortiz, Jeferson A.; Souza, Vitor de; Medina-Tanco, Gustavo

    2005-01-01

    The ultrahigh energy tail of the cosmic ray spectrum has been explored with unprecedented detail. For this reason, new experiments are exerting severe pressure on extensive air shower modeling. Detailed fast codes are needed in order to extract and understand the richness of information now available. In this sense we explore the potential of SENECA, an efficient hybrid tridimensional simulation code, as a valid practical alternative to full Monte Carlo simulations of extensive air showers generated by ultrahigh energy cosmic rays. We discuss the influence of this approach on the main longitudinal characteristics of proton, iron nucleus and gamma induced air showers for different hadronic interaction models. We also show comparisons of our predictions with those of the CORSIKA code.

  13. Analysis of extensive air showers with the hybrid code SENECA

    Science.gov (United States)

    Ortiz, Jeferson A.; de Souza, Vitor; Medina-Tanco, Gustavo

    The ultrahigh energy tail of the cosmic ray spectrum has been explored with unprecedented detail. For this reason, new experiments are exerting severe pressure on extensive air shower modeling. Detailed fast codes are needed in order to extract and understand the richness of information now available. In this sense we explore the potential of SENECA, an efficient hybrid tridimensional simulation code, as a valid practical alternative to full Monte Carlo simulations of extensive air showers generated by ultrahigh energy cosmic rays. We discuss the influence of this approach on the main longitudinal characteristics of proton, iron nucleus and gamma induced air showers for different hadronic interaction models. We also show comparisons of our predictions with those of the CORSIKA code.

  14. Hybrid coded aperture and Compton imaging using an active mask

    Energy Technology Data Exchange (ETDEWEB)

    Schultz, L.J. [Los Alamos National Laboratory, Los Alamos, NM (United States)], E-mail: schultz@lanl.gov; Wallace, M.S.; Galassi, M.C.; Hoover, A.S.; Mocko, M.; Palmer, D.M.; Tornga, S.R.; Kippen, R.M. [Los Alamos National Laboratory, Los Alamos, NM (United States); Hynes, M.V.; Toolin, M.J.; Harris, B.; McElroy, J.E. [Raytheon Integrated Defense Systems, Tewksbury, MA (United States); Wakeford, D. [Bubble Technology Industries, Chalk River, Ontario (Canada); Lanza, R.C.; Horn, B.K.P. [Massachusetts Institute of Technology, Cambridge, MA (United States); Wehe, D.K. [University of Michigan, Ann Arbor, MI (United States)

    2009-09-11

    The trimodal imager (TMI) images gamma-ray sources from a mobile platform using both coded aperture (CA) and Compton imaging (CI) modalities. In this paper we will discuss development and performance of image reconstruction algorithms for the TMI. In order to develop algorithms in parallel with detector hardware we are using a GEANT4 [J. Allison, K. Amako, J. Apostolakis, H. Araujo, P.A. Dubois, M. Asai, G. Barrand, R. Capra, S. Chauvie, R. Chytracek, G. Cirrone, G. Cooperman, G. Cosmo, G. Cuttone, G. Daquino, et al., IEEE Trans. Nucl. Sci. NS-53 (1) (2006) 270] based simulation package to produce realistic data sets for code development. The simulation code incorporates detailed detector modeling, contributions from natural background radiation, and validation of simulation results against measured data. Maximum likelihood algorithms for both imaging methods are discussed, as well as a hybrid imaging algorithm wherein CA and CI information is fused to generate a higher fidelity reconstruction.

  15. SPAMCART: a code for smoothed particle Monte Carlo radiative transfer

    Science.gov (United States)

    Lomax, O.; Whitworth, A. P.

    2016-10-01

    We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e. it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped on to a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Secondly, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.
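
    A toy sketch of the Lucy-style Monte Carlo step underlying the code is given below: luminosity packets are followed through a density field, the next interaction point is drawn from tau = -ln(xi), and the traversed path lengths are accumulated as the mean-intensity estimator. SPAMCART performs this directly on SPH particle kernels in 3-D; the uniform 1-D grid, the isotropy model and all names here are simplifications for brevity.

```python
import numpy as np

# Toy Lucy-style packet propagation on a 1-D grid with strictly positive
# density: sample an optical depth, walk cell by cell, accumulate path
# lengths (the estimator from which the dust temperature would be derived).

rng = np.random.default_rng(42)

def propagate_packets(n_packets, rho, kappa, dx):
    """rho: (ncell,) densities (> 0); returns the per-cell path-length tally."""
    ncell = rho.size
    path = np.zeros(ncell)
    for _ in range(n_packets):
        x, mu = 0.0, 1.0                       # packet enters at the left edge
        tau_target = -np.log(rng.random())     # sampled optical depth to event
        tau = 0.0
        while 0.0 <= x < ncell * dx:
            i = min(int(x / dx), ncell - 1)
            # distance to the next cell edge along the current direction
            edge = (i + 1) * dx - x if mu > 0 else x - i * dx
            edge = max(edge, 1e-12)
            # distance to the sampled interaction point within this cell
            to_event = (tau_target - tau) / (kappa * rho[i])
            step = min(edge, to_event)
            path[i] += step                    # path-length estimator
            x += mu * step
            tau += kappa * rho[i] * step
            if step == to_event:               # absorption / re-emission event
                mu = rng.choice([-1.0, 1.0])   # isotropic in 1-D
                tau_target = -np.log(rng.random())
                tau = 0.0
    return path
```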

  16. SPAMCART: a code for smoothed particle Monte Carlo radiative transfer

    CERN Document Server

    Lomax, O

    2016-01-01

    We present a code for generating synthetic SEDs and intensity maps from Smoothed Particle Hydrodynamics simulation snapshots. The code is based on the Lucy (1999) Monte Carlo Radiative Transfer method, i.e. it follows discrete luminosity packets, emitted from external and/or embedded sources, as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The density is not mapped onto a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Second, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.

  17. Analysis of SMA Hybrid Composite Structures using Commercial Codes

    Science.gov (United States)

    Turner, Travis L.; Patel, Hemant D.

    2004-01-01

    A thermomechanical model for shape memory alloy (SMA) actuators and SMA hybrid composite (SMAHC) structures has been recently implemented in the commercial finite element codes MSC.Nastran and ABAQUS. The model may be easily implemented in any code that has the capability for analysis of laminated composite structures with temperature dependent material properties. The model is also relatively easy to use and requires input of only fundamental engineering properties. A brief description of the model is presented, followed by discussion of implementation and usage in the commercial codes. Results are presented from static and dynamic analysis of SMAHC beams of two types; a beam clamped at each end and a cantilevered beam. Nonlinear static (post-buckling) and random response analyses are demonstrated for the first specimen. Static deflection (shape) control is demonstrated for the cantilevered beam. Approaches for modeling SMAHC material systems with embedded SMA in ribbon and small round wire product forms are demonstrated and compared. The results from the commercial codes are compared to those from a research code as validation of the commercial implementations; excellent correlation is achieved in all cases.

  18. A Hybrid Method with Deviational Particles for Spatial Inhomogeneous Plasma

    CERN Document Server

    Yan, Bokai

    2015-01-01

    In this work we propose a Hybrid method with Deviational Particles (HDP) for a plasma modeled by the inhomogeneous Vlasov-Poisson-Landau system. We split the distribution into a Maxwellian part evolved by a grid based fluid solver and a deviation part simulated by numerical particles. These particles, named deviational particles, could be both positive and negative. We combine the Monte Carlo method proposed in \\cite{YC15}, a Particle in Cell method and a Macro-Micro decomposition method \\cite{BLM08} to design an efficient hybrid method. Furthermore, coarse particles are employed to accelerate the simulation. A particle resampling technique on both deviational particles and coarse particles is also investigated and improved. The efficiency is significantly improved compared to a PIC-MCC method, especially near the fluid regime.

  19. Spectral Shape of Check-Hybrid GLDPC Codes

    CERN Document Server

    Paolini, Enrico; Chiani, Marco; Fossorier, Marc P C

    2010-01-01

    This paper analyzes the asymptotic exponent of both the weight spectrum and the stopping set size spectrum for a class of generalized low-density parity-check (GLDPC) codes. Specifically, all variable nodes (VNs) are assumed to have the same degree (regular VN set), while the check node (CN) set is assumed to be composed of a mixture of different linear block codes (hybrid CN set). A simple expression for the exponent (which is also referred to as the growth rate or the spectral shape) is developed. This expression is consistent with previous results, including the case where the normalized weight or stopping set size tends to zero. Furthermore, it is shown how certain symmetry properties of the local weight distribution at the CNs induce a symmetry in the overall weight spectral shape function.

  20. General Relativistic Smoothed Particle Hydrodynamics code developments: A progress report

    Science.gov (United States)

    Faber, Joshua; Silberman, Zachary; Rizzo, Monica

    2017-01-01

    We report on our progress in developing a new general relativistic Smoothed Particle Hydrodynamics (SPH) code, which will be appropriate for studying the properties of accretion disks around black holes as well as compact object binary mergers and their ejecta. We will discuss in turn the relativistic formalisms being used to handle the evolution, our techniques for dealing with conservative and primitive variables, as well as those used to ensure proper conservation of various physical quantities. Code tests and performance metrics will be discussed, as will the prospects for including smoothed particle hydrodynamics codes within other numerical relativity codebases, particularly the publicly available Einstein Toolkit. We acknowledge support from NSF award ACI-1550436 and an internal RIT D-RIG grant.

  1. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A

    DEFF Research Database (Denmark)

    Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael

    2015-01-01

    The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data...

  2. 3-D Parallel, Object-Oriented, Hybrid, PIC Code for Ion Ring Studies

    Science.gov (United States)

    Omelchenko, Y. A.

    1997-08-01

    The 3-D hybrid Particle-in-Cell (PIC) code FLAME has been developed to study low-frequency, large-orbit plasmas in realistic cylindrical configurations. FLAME assumes plasma quasineutrality and solves the Maxwell equations with the displacement current neglected. The electron component is modeled as a massless fluid and all ion components are represented by discrete macro-particles. The poloidal discretization is done by a finite-difference staggered grid method. An FFT is applied in the azimuthal direction. A substantial reduction of CPU time is achieved by enabling separate time advances of background and beam particle species in the time-averaged fields. The FLAME structure follows the guidelines of object-oriented programming. Its C++ class hierarchy comprises the Utility, Geometry, Particle, Grid and Distributed base class packages. The latter encapsulates the implementation of concurrent grid and particle algorithms. The particle and grid data interprocessor communications are unified and designed to be independent of both the underlying message-passing library and the actual poloidal domain decomposition technique (FFTs are local). Load balancing concerns are addressed by using adaptive domain partitions to account for nonuniform spatial distributions of particle objects. The results of 2-D and 3-D FLAME simulations in support of the FIREX program at Cornell are presented.

  3. Parallelization of a Monte Carlo particle transport simulation code

    Science.gov (United States)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language for improving code portability. Several pseudo-random number generators have been also integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem size, which is limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for studying of higher particle energies with the use of more accurate physical models, and improve statistics as more particles tracks can be simulated in low response time.
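
    The parallelization pattern described above can be sketched with mpi4py as follows: each rank runs an independent batch of histories with its own reproducible random-number stream, and the tallies are combined by a reduction. The toy "history" only stands in for the MC4 physics, and the seeding scheme shown is an illustrative stand-in for the SPRNG/DCMT libraries used in the paper.

```python
import numpy as np
from mpi4py import MPI

# Embarrassingly parallel Monte Carlo: independent histories per rank,
# independent RNG streams, one reduction at the end.

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_total = 1_000_000
n_local = n_total // size + (rank < n_total % size)   # spread the remainder

# independent, reproducible stream per rank (stand-in for SPRNG/DCMT)
rng = np.random.default_rng([rank, 12345])

local_sum = 0.0
for _ in range(n_local):
    local_sum += rng.exponential(scale=1.0)   # one toy particle "history"

total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("mean path length:", total / n_total)
```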

  4. Toroidal Electromagnetic Particle-in-Cell Code with Gyro-kinetic Electron and Fully-kinetic ion

    Science.gov (United States)

    Lin, Jingbo; Zhang, Wenlu; Liu, Pengfei; Li, Ding

    2016-10-01

    A kinetic simulation model has been developed using gyro-kinetic electrons and fully-kinetic ions, removing the fast gyro-motion of electrons using Lie-transform perturbation theory. A particle-in-cell kinetic code has been developed based on this model in general magnetic flux coordinate systems, which is particularly suitable for simulations of toroidally confined plasma. Single-particle motion and the field solver are verified separately. Integrated electrostatic benchmarks, for example the lower-hybrid wave (LHW) and ion Bernstein wave (IBW), show good agreement with theoretical results. A preliminary electromagnetic benchmark of the fast wave in the lower-hybrid frequency range is also presented. This code can be a first-principles tool to investigate high-frequency nonlinear phenomena, such as parametric decay instability, during lower-hybrid current drive (LHCD) and ion cyclotron radio frequency heating (ICRF), with complex geometry effects included. Supported by the National Special Research Program of China for ITER and the National Natural Science Foundation of China.

  5. CPIC: A Parallel Particle-In-Cell Code for Studying Spacecraft Charging

    Science.gov (United States)

    Meierbachtol, Collin; Delzanno, Gian Luca; Moulton, David; Vernon, Louis

    2015-11-01

    CPIC is a three-dimensional electrostatic particle-in-cell code designed for use with curvilinear meshes. One of its primary objectives is to aid in studying spacecraft charging in the magnetosphere. CPIC maintains near-optimal computational performance and scaling thanks to a mapped logical mesh field solver, and a hybrid physical-logical space particle mover (avoiding the need to track particles). CPIC is written for parallel execution, utilizing a combination of both OpenMP threading and MPI distributed memory. New capabilities are being actively developed and added to CPIC, including the ability to handle multi-block curvilinear mesh structures. Verification results comparing CPIC to analytic test problems will be provided. Particular emphasis will be placed on the charging and shielding of a sphere-in-plasma system. Simulated charging results of representative spacecraft geometries will also be presented. Finally, its performance capabilities will be demonstrated through parallel scaling data.

  6. 2D Implosion Simulations with a Kinetic Particle Code

    CERN Document Server

    Sagert, Irina; Strother, Terrance T

    2016-01-01

    We perform two-dimensional (2D) implosion simulations using a Monte Carlo kinetic particle code. The paper is motivated by the importance of non-equilibrium effects in inertial confinement fusion (ICF) capsule implosions. These cannot be fully captured by hydrodynamic simulations, while kinetic methods, such as the one presented in this study, are able to describe continuum and rarefied regimes within one approach. In the past, our code has been verified via traditional shock wave and fluid instability simulations. In the present work, we focus on setups that are closer to applications in ICF. We perform simple 2D disk implosion simulations using one particle species. The obtained results are compared to simulations using the hydrodynamics code RAGE. In a first study, the implosions are powered by energy deposition in the outer layers of the disk. We test the impact of the particle mean free path and find that while the width of the implosion shock broadens, its location as a function of time remains very similar. ...

  7. Efficient and Scalable Algorithms for Smoothed Particle Hydrodynamics on Hybrid Shared/Distributed-Memory Architectures

    CERN Document Server

    Gonnet, Pedro

    2014-01-01

    This paper describes a new fast and implicitly parallel approach to neighbour-finding in multi-resolution Smoothed Particle Hydrodynamics (SPH) simulations. This new approach is based on hierarchical cell decompositions and sorted interactions, within a task-based formulation. It is shown to be faster than traditional tree-based codes, and to scale better than domain decomposition-based approaches on hybrid shared/distributed-memory parallel architectures, e.g. clusters of multi-cores, achieving a $40\\times$ speedup over the Gadget-2 simulation code.
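
    The hierarchical, task-based neighbour-finding described above builds on the basic cell-list idea: bin particles into cells of side equal to the search radius and test only the 27 surrounding cells. The sketch below shows only that baseline idea in Python (the function name and particle data are placeholders), not the paper's sorted-interaction or task-based scheme.

```python
import numpy as np
from collections import defaultdict

def neighbours_within_h(pos, h):
    """Basic cell-linked-list neighbour search: bin particles into cells of
    side h, then only test the 27 surrounding cells of each particle."""
    cells = defaultdict(list)
    keys = np.floor(pos / h).astype(int)
    for i, k in enumerate(map(tuple, keys)):
        cells[k].append(i)

    pairs = []
    for i, (kx, ky, kz) in enumerate(map(tuple, keys)):
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in cells.get((kx + dx, ky + dy, kz + dz), ()):
                        if j > i and np.sum((pos[i] - pos[j]) ** 2) < h * h:
                            pairs.append((i, j))
    return pairs

pos = np.random.rand(1000, 3)      # hypothetical particle positions in a unit box
print(len(neighbours_within_h(pos, h=0.1)))
```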

  8. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A

    DEFF Research Database (Denmark)

    Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael

    2015-01-01

    The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An...... conclude that more experimental cross section data are needed in the lower energy range in order to resolve this contradiction, ideally combined with more rigorous models for annihilation on compounds....

  9. Path Weight Complementary Convolutional Code for Type-II Bit-Interleaved Coded Modulation Hybrid ARQ System

    Institute of Scientific and Technical Information of China (English)

    CHENG Yuxin; ZHANG Lei; YI Na; XIANG Haige

    2007-01-01

    Bit-interleaved coded modulation (BICM) is suitable for bandwidth-efficient communication systems. Hybrid automatic repeat request (HARQ) can provide additional reliability for high-speed wireless data transmission. A new path weight complementary convolutional (PWCC) code used in the type-II BICM-HARQ system is proposed. The PWCC code is composed of the original code and the complementary code. The paths in the trellis with large Hamming weight of the complementary code are designed to compensate for the paths in the trellis with small Hamming weight of the original code. Hence, both the original code and the complementary code can achieve the performance of the good-code criterion for the corresponding code rate. The throughput efficiency of the BICM-HARQ system with the PWCC code is higher than that of the repeat-code system and slightly higher than that of the puncture-code system at low signal-to-noise ratio (SNR) values, and it is much higher than that of the puncture-code system and the same as that of the repeat-code system at high SNR values. These results are confirmed by simulation.

  10. Properties of hybrid resin composite systems containing prepolymerized filler particles.

    Science.gov (United States)

    Blackham, Jason T; Vandewalle, Kraig S; Lien, Wen

    2009-01-01

    This study compared the properties of newer hybrid resin composites with prepolymerized-filler particles to traditional hybrids and a microfill composite. The following properties were examined per composite: diametral tensile strength, flexural strength/modulus, Knoop microhardness and polymerization shrinkage. Physical properties were determined for each composite group (n = 8), showing significant differences between groups for each property. The traditional hybrid composites (Z250, Esthet-X) had higher strength, the composites containing prepolymerized fillers (Gradia Direct Posterior, Premise) performed more moderately, and the microfill composite (Durafill VS) had lower strength. Premise and Durafill VS had the lowest polymerization shrinkage.

  11. High energy particle transport code NMTC/JAM

    Energy Technology Data Exchange (ETDEWEB)

    Niita, Koji [Research Organization for Information Science and Technology, Tokai, Ibaraki (Japan); Meigo, Shin-ichiro; Takada, Hiroshi; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    We have developed a high-energy particle transport code, NMTC/JAM, which is an upgraded version of NMTC/JAERI97. The applicable energy range of NMTC/JAM is extended in principle up to 200 GeV for nucleons and mesons by introducing the high-energy nuclear reaction code JAM for the intra-nuclear cascade part. For the evaporation and fission process, we have also implemented a new model, GEM, by which light-nucleus production from the excited residual nucleus can be described. In accordance with the extended energy range, we have upgraded the nucleon-nucleus non-elastic, elastic and differential elastic cross section data by employing new systematics. In addition, particle transport in a magnetic field has been implemented for beam transport calculations. In this upgrade, some new tally functions are added and the input data format has been made considerably more user-friendly. Owing to these new calculation functions and utilities, NMTC/JAM enables reliable neutronics studies of a large-scale target system with complex geometry to be carried out more accurately and easily than before. This report serves as a user manual of the code. (author)

  12. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    Science.gov (United States)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different-order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structures in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and a previously used, simpler balancing method.
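
    The patch-based load balancing mentioned above can be illustrated with a minimal sketch: assign each patch a space-filling-curve key, sort the patches along the curve, and cut the ordered list into contiguous chunks of roughly equal particle count. The sketch uses a simple Z-order (Morton) key; PSC's actual curve, data structures and rank assignment may differ, and the patch grid and particle counts below are hypothetical.

```python
import numpy as np

def morton2d(ix, iy, bits=10):
    """Interleave the bits of (ix, iy) to get a Z-order (Morton) key,
    a simple space-filling curve."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

def balance_patches(patch_ij, particle_counts, n_ranks):
    """Order patches along the curve, then cut the ordered list into
    contiguous chunks with roughly equal total particle counts."""
    keys = np.array([morton2d(i, j) for i, j in patch_ij])
    order = np.argsort(keys)
    loads = np.cumsum(particle_counts[order])
    targets = loads[-1] * (np.arange(1, n_ranks) / n_ranks)
    cuts = np.searchsorted(loads, targets)
    return np.split(order, cuts)          # list of patch indices per rank

# hypothetical 8x8 patch grid with random particle counts, 4 ranks
patch_ij = [(i, j) for i in range(8) for j in range(8)]
counts = np.random.randint(10, 1000, size=64)
for rank, patches in enumerate(balance_patches(patch_ij, counts, 4)):
    print(rank, counts[patches].sum())
```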

  13. Particle-in-Cell Codes for plasma-based particle acceleration

    CERN Document Server

    Pukhov, Alexander

    2016-01-01

    Basic principles of particle-in-cell (PIC) codes, with the main application to plasma-based acceleration, are discussed. The ab initio full electromagnetic relativistic PIC codes provide the most reliable description of plasmas. Their properties are considered in detail. Representing the most fundamental model, the full PIC codes are computationally expensive. Plasma-based acceleration is a multi-scale problem with very disparate scales. The smallest scale is the laser or plasma wavelength (from one to a hundred microns) and the largest scale is the acceleration distance (from a few centimeters to meters or even kilometers). The Lorentz-boost technique allows one to reduce the scale disparity at the cost of complicating the simulations and causing unphysical numerical instabilities in the code. Another possibility is to use the quasi-static approximation, where the disparate scales are separated analytically.

  14. Overview of Particle and Heavy Ion Transport Code System PHITS

    Science.gov (United States)

    Sato, Tatsuhiko; Niita, Koji; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Furuta, Takuya; Noda, Shusaku; Ogawa, Tatsuhiko; Iwase, Hiroshi; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Chiba, Satoshi; Sihver, Lembit

    2014-06-01

    A general purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS, is being developed through the collaboration of several institutes in Japan and Europe. The Japan Atomic Energy Agency is responsible for managing the entire project. PHITS can deal with the transport of nearly all particles, including neutrons, protons, heavy ions, photons, and electrons, over wide energy ranges using various nuclear reaction models and data libraries. It is written in Fortran language and can be executed on almost all computers. All components of PHITS such as its source, executable and data-library files are assembled in one package and then distributed to many countries via the Research organization for Information Science and Technology, the Data Bank of the Organization for Economic Co-operation and Development's Nuclear Energy Agency, and the Radiation Safety Information Computational Center. More than 1,000 researchers have been registered as PHITS users, and they apply the code to various research and development fields such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. This paper briefly summarizes the physics models implemented in PHITS, and introduces some important functions useful for specific applications, such as an event generator mode and beam transport functions.

  15. Pressure calculation in hybrid particle-field simulations.

    Science.gov (United States)

    Milano, Giuseppe; Kawakatsu, Toshihiro

    2010-12-07

    In the framework of a recently developed scheme for hybrid particle-field simulation techniques in which self-consistent field (SCF) theory and particle models (molecular dynamics) are combined [J. Chem. Phys. 130, 214106 (2009)], we developed a general formulation for the calculation of the instantaneous pressure and stress tensor. The expressions have been derived from the statistical mechanical definition of the pressure, starting from the expression for the free energy functional in the SCF theory. An implementation of the derived formulation suitable for hybrid particle-field molecular dynamics-self-consistent field simulations is described. A series of test simulations on model systems are reported, comparing the calculated pressure with that obtained from standard molecular dynamics simulations based on pair potentials.

  16. Hybrid spherical particle field measurement based on interference technology

    Science.gov (United States)

    Sun, Jinlu; Zhang, Hongxia; Li, Jiao; Zhou, Ye; Jia, Dagong; Liu, Tiegen

    2017-03-01

    Interferometric particle imaging is widely used in particle size measurement. Conventional algorithms, which focus on single-size particle fields, have difficulty extracting each interference fringe in a hybrid spherical particle field because of noise. To solve this problem, an iterative mean filter (IMF) algorithm is proposed. Instead of using a specific mean filter template coefficient, the noise is reduced by iterating the calculation under different template coefficients. The average of the calculation results, excluding gross errors, is output as the final result. The effects of different template coefficients are simulated, and the valid range of template coefficients is analyzed. The interferogram of a hybrid spherical particle field from 21.3 µm to 57.9 µm is processed by the conventional algorithms with specific template coefficients of 2, 8 and 12 and by the IMF algorithm. The corresponding measurement errors are 17.22%, 10.69%, 9.04% and 5.11%. The experimental results show that the IMF algorithm reduces the measurement error and could potentially be applied in particle field measurement.
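
    As a rough illustration of the IMF idea as described above, the sketch below filters an interferogram with several mean-filter template coefficients (interpreted here as template sizes), repeats a measurement on each filtered image, rejects gross errors and averages the rest. The template sizes, the outlier rule and the fringe-frequency 'measurement' are placeholders, not the paper's actual algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def imf_estimate(interferogram, measure, template_sizes=(2, 4, 8, 12)):
    """Iterative mean filter (IMF) sketch: repeat the measurement under
    several mean-filter templates, drop gross errors, average the rest.
    `measure` maps a filtered image to a scalar; it and the template sizes
    are placeholders, not the paper's choices."""
    results = np.array([measure(uniform_filter(interferogram, size=s))
                        for s in template_sizes])
    # reject gross errors: keep results within 3 median absolute deviations
    med = np.median(results)
    mad = np.median(np.abs(results - med)) + 1e-12
    keep = np.abs(results - med) < 3.0 * mad
    return results[keep].mean()

# synthetic noisy fringe pattern and a crude fringe-frequency "measurement"
xx = np.linspace(0.0, 1.0, 256)
img = np.tile(np.cos(2 * np.pi * 12 * xx), (256, 1)) + 0.3 * np.random.randn(256, 256)
measure = lambda im: np.argmax(np.abs(np.fft.rfft(im.mean(axis=0)))[1:]) + 1
print(imf_estimate(img, measure))   # ~12 fringes across the image
```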

  17. Efficient modeling of plasma wakefield acceleration in quasi-non-linear-regimes with the hybrid code Architect

    Science.gov (United States)

    Marocchino, A.; Massimo, F.; Rossi, A. R.; Chiadroni, E.; Ferrario, M.

    2016-09-01

    In this paper we present a hybrid approach aimed at assessing feasible plasma wakefield acceleration working points with reduced computational resources. The growing interest in plasma wakefield acceleration, and especially the need to control the quality of the accelerated bunch with increasing precision, demands more accurate and faster simulations. Particle-in-cell codes are the state-of-the-art technique to simulate the underlying physics; however, the run time represents their major drawback. Architect is a hybrid code that treats the bunch kinetically and the background electron plasma as a fluid, initialising bunches in vacuum so as to account for the transition from vacuum to plasma. Architect solves Maxwell's equations directly on a Yee lattice. Such an approach allows us to drastically reduce the run time without loss of generality or accuracy up to the weakly nonlinear regime.

  18. Efficient modeling of plasma wakefield acceleration in quasi-non-linear-regimes with the hybrid code Architect

    Energy Technology Data Exchange (ETDEWEB)

    Marocchino, A., E-mail: albz.uk@gmail.com [Dipartimento SBAI, “Sapienza” University of Rome and INFN-Roma 1, Rome (Italy); Massimo, F. [Dipartimento SBAI, “Sapienza” University of Rome and INFN-Roma 1, Rome (Italy); Rossi, A.R. [Dipartimento di Fisica, University of Milan and INFN-Milano, Milano (Italy); Chiadroni, E.; Ferrario, M. [INFN-LNF, Frascati (Italy)

    2016-09-01

    In this paper we present a hybrid approach aimed at assessing feasible plasma wakefield acceleration working points with reduced computational resources. The growing interest in plasma wakefield acceleration, and especially the need to control the quality of the accelerated bunch with increasing precision, demands more accurate and faster simulations. Particle-in-cell codes are the state-of-the-art technique to simulate the underlying physics; however, the run time represents their major drawback. Architect is a hybrid code that treats the bunch kinetically and the background electron plasma as a fluid, initialising bunches in vacuum so as to account for the transition from vacuum to plasma. Architect solves Maxwell's equations directly on a Yee lattice. Such an approach allows us to drastically reduce the run time without loss of generality or accuracy up to the weakly nonlinear regime.

  19. Neutral Particle Transport in Cylindrical Plasma Simulated by a Monte Carlo Code

    Institute of Scientific and Technical Information of China (English)

    YU Deliang; YAN Longwen; ZHONG Guangwu; LU Jie; YI Ping

    2007-01-01

    A Monte Carlo code (MCHGAS) has been developed to investigate neutral particle transport. The code can calculate the radial profile and energy spectrum of neutral particles in cylindrical plasmas. The calculation time of the code is dramatically reduced when the splitting and roulette schemes are applied. The plasma model of an infinite cylinder is assumed in the code, which is very convenient for simulating neutral particle transport in small and middle-sized tokamaks. The design of the multi-channel neutral particle analyser (NPA) on HL-2A can be optimized by using this code.
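
    The splitting and roulette (Russian roulette) variance-reduction step mentioned above can be sketched generically: when a weighted Monte Carlo particle moves into a more important region it is split into several lower-weight copies, and in a less important region it survives a roulette game with increased weight. The importance ratio and particle representation below are placeholders, not MCHGAS internals.

```python
import random

def split_or_roulette(particle, importance_ratio):
    """Generic splitting / Russian-roulette step for a weighted MC particle.
    particle = {'weight': w, ...}; importance_ratio = I_new / I_old (hypothetical)."""
    out = []
    if importance_ratio > 1.0:
        # entering a more important region: split into n lower-weight copies
        n = int(importance_ratio)
        for _ in range(n):
            clone = dict(particle)
            clone['weight'] = particle['weight'] / n
            out.append(clone)
    else:
        # entering a less important region: survive with probability I_new/I_old
        if random.random() < importance_ratio:
            survivor = dict(particle)
            survivor['weight'] = particle['weight'] / importance_ratio
            out.append(survivor)
    return out   # an empty list means the particle was killed by roulette

print(split_or_roulette({'weight': 1.0, 'energy': 5.0}, importance_ratio=3.0))
print(split_or_roulette({'weight': 1.0, 'energy': 5.0}, importance_ratio=0.25))
```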

  20. Second order Gyrokinetic theory for Particle-In-Cell codes

    CERN Document Server

    Tronko, Natalia; Sonnendruecker, Eric

    2016-01-01

    The main idea of the gyrokinetic dynamical reduction consists in the systematic removal of the fastest scale of motion (the gyromotion) from the plasma dynamics, resulting in a considerable simplification of the model and a gain in computing time. The gyrokinetic Maxwell-Vlasov system is broadly implemented in present-day numerical experiments for modeling strongly magnetized plasma (both laboratory and astrophysical). Different versions of the reduced set of equations exist, depending on the construction of the gyrokinetic reduction procedure and the approximations assumed during their derivation. The purpose of this paper is to explicitly show the connection between the general second-order gyrokinetic Maxwell-Vlasov system issued from the modern gyrokinetic theory derivation and the model currently implemented in the global electromagnetic particle-in-cell code ORB5. Strictly necessary information about the modern gyrokinetic formalism is given together with the consistent derivation of the gyrokinetic Maxwell-Vlasov equations from first principles...

  1. Particle, momentum and thermal transport in the PTRANSP code

    Science.gov (United States)

    Bateman, G.; Halpern, F. D.; Kritz, A. H.; Pankin, A. Y.; Rafiq, T.; McCune, D. C.; Budny, R. V.; Indireshkumar, K.

    2008-11-01

    The combined effects of particle, momentum and thermal transport are investigated in tokamak discharges using a coupled system of transport equations implemented in the PTRANSP integrated modeling code. The magnetic diffusion equation is advanced separately, along with the evolution of the equilibrium. Simulations are carried out using theory-based models to compute transport, sources and sinks. Boundary conditions are either read from data or computed using a pedestal model for H-mode discharges. Different techniques are explored for controlling numerical problems [1] in time-dependent simulations that include sawtooth oscillations and other rapid changes in the profiles. Results for the density, temperature and toroidal angular velocity profiles are compared with experimental data. [1] S.C. Jardin et al, ``On 1D diffusion problems with a gradient-dependent diffusion coefficient''; G.V. Pereverzev and G. Corrigan, ``Stable numeric scheme for diffusion equation with a stiff transport''; both papers to appear in Comp. Phys. Comm. (2008).

  2. HYBRID AND CHARACTERISTIC OF POLYANILINE-BARIUM TITANATE NANOCOMPOSITE PARTICLES

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Polyaniline-barium titanate (PAn-BaTiO3) ultrafine composite particles were prepared by the oxidative polymerization of aniline with H2O2, while the barium titanate nanoparticles were synthesized by a sol-gel method. The infrared spectrogram shows that the polymerization of PAn in the PAn-BaTiO3 hybrid process is similar to the polymerization process of pure aniline, and that there is an interaction between PAn and BaTiO3 in the PAn-BaTiO3. SEM and TEM results show that the average diameter of the composite particles is 1.50 μm and that the diameters of the BaTiO3 nanoparticles within the composite particles are 5-15 nm. The electrical conductivity of the ultrafine composite particles can be varied from 10^0 to 10^-11 S/cm by equilibrium doping or dedoping with HCl or NaOH solutions of various concentrations.

  3. Second order gyrokinetic theory for particle-in-cell codes

    Science.gov (United States)

    Tronko, Natalia; Bottino, Alberto; Sonnendrücker, Eric

    2016-08-01

    The main idea of the gyrokinetic dynamical reduction consists in a systematic removal of the fast-scale motion (the gyromotion) from the dynamics of the plasma, resulting in a considerable simplification and a significant gain of computational time. The gyrokinetic Maxwell-Vlasov equations are nowadays implemented in numerical experiments for modeling (both laboratory and astrophysical) strongly magnetized plasmas. Different versions of the reduced set of equations exist, depending on the construction of the gyrokinetic reduction procedure and the approximations performed in the derivation. The purpose of this article is to explicitly show the connection between the general second-order gyrokinetic Maxwell-Vlasov system issued from the modern gyrokinetic theory and the model currently implemented in the global electromagnetic Particle-in-Cell code ORB5. Necessary information about the modern gyrokinetic formalism is given together with the consistent derivation of the gyrokinetic Maxwell-Vlasov equations from first principles. The variational formulation of the dynamics is used to obtain the corresponding energy conservation law, which in turn is used for the verification of the energy conservation diagnostics currently implemented in ORB5. This work fits within the context of the code verification project VeriGyro currently run at the IPP Max-Planck-Institut in collaboration with other European institutions.

  4. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    Science.gov (United States)

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features.

  5. A New Class of Hybrid Particle Swarm Optimization Algorithm

    Institute of Scientific and Technical Information of China (English)

    Da-Qing Guo; Yong-Jin Zhao; Hui Xiong; Xiao Li

    2007-01-01

    A new class of hybrid particle swarm optimization (PSO) algorithm is developed to address the premature convergence caused by particles in standard PSO falling into stagnation. In this algorithm, the linearly decreasing inertia weight technique (LDIW) and the mutative-scale chaos optimization algorithm (MSCOA) are combined with standard PSO, and are used to balance the global and local exploration abilities and to enhance the local searching ability, respectively. In order to evaluate the performance of the new method, three benchmark functions are used. The simulation results confirm that the proposed algorithm can greatly enhance the searching ability and effectively mitigate premature convergence.
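
    A minimal sketch of standard PSO with the linearly decreasing inertia weight (LDIW) mentioned above is given below; the mutative-scale chaos optimization stage of the hybrid is omitted, and the swarm size, acceleration coefficients, bounds and benchmark function are assumed values rather than the paper's settings.

```python
import numpy as np

def pso_ldiw(f, dim, n_particles=30, iters=200, w_max=0.9, w_min=0.4,
             c1=2.0, c2=2.0, lo=-5.0, hi=5.0, seed=0):
    """Standard PSO with a linearly decreasing inertia weight (LDIW).
    The chaotic local-search stage of the MSCOA hybrid is not included."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()

    for t in range(iters):
        w = w_max - (w_max - w_min) * t / (iters - 1)   # LDIW schedule
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# example benchmark: the sphere function
best, val = pso_ldiw(lambda z: np.sum(z ** 2), dim=10)
print(val)
```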

  6. 3D hybrid simulations with gyrokinetic particle ions and fluid electrons

    Energy Technology Data Exchange (ETDEWEB)

    Belova, E.V.; Park, W.; Fu, G.Y. [Princeton Univ., NJ (United States). Plasma Physics Lab.; Strauss, H.R. [New York Univ., NY (United States); Sugiyama, L.E. [Massachusetts Inst. of Tech., Cambridge, MA (United States)

    1998-12-31

    The previous hybrid MHD/particle model (MH3D-K code) represented energetic ions as gyrokinetic (or drift-kinetic) particles coupled to MHD equations using the pressure or current coupling scheme. A small energetic-to-bulk ion density ratio was assumed, $n_h/n_b \ll 1$, allowing the neglect of the energetic ion perpendicular inertia in the momentum equation and the use of the MHD Ohm's law $E = -v_b \times B$. A generalization of this model in which all ions are treated as gyrokinetic/drift-kinetic particles and a fluid description is used for the electron dynamics is considered in this paper.

  7. Hybrid multinary modulation codes for page-oriented holographic data storage

    Science.gov (United States)

    Berger, G.; Dietz, M.; Denz, C.

    2008-11-01

    Hybrid multinary block codes for implementation in page-oriented holographic storage systems are proposed. The codes utilize combined phase and amplitude modulations to encode input data. In comparison to pure amplitude- or pure phase-modulated block code designs, hybrid multinary modulation coding allows us to augment the storage density at an unchanged error rate. Two different hybrid modulation code designs are introduced. Experimental implementation is thoroughly discussed, especially concentrating on readout concepts. Phase-resolved readout is accomplished by optical addition and subtraction, using an unmodulated reference page. Experimental results indicate that the overall error rate is usually dominated by errors related to amplitude detection. The study suggests that capacity gains of up to 31% or 47% are reasonable when utilizing phase modulations in conjunction with binary or ternary amplitude modulation.

  8. Performance Evaluation of Hybrid ARQ with Code Combining in Packet-Oriented CDMA System

    Institute of Scientific and Technical Information of China (English)

    CHEN Qingchun; FAN Pingzhi

    2004-01-01

    In this paper, an extended SNR (signal-to-noise ratio) concept is proposed to explain the contribution of code combining to the performance improvement of hybrid ARQ (automatic repeat request) over the additive white Gaussian noise channel. By extending Pursley's SNR analysis to hybrid ARQ with code combining in a packet-oriented CDMA (code division multiple access) system, the extended SNR formula is derived, which explicitly describes the SNR variation of the code symbols involved in code combining. It is shown that the extended SNR formula includes Pursley's SNR formula as a special case. Moreover, it is shown that the effective SNR of the combined symbol is increased by a coefficient proportional to the number of repeated replicas involved in the code combining. Based on the extended SNR formula and the resulting SNR variation, a quasi-analytical approximation method is proposed for the performance evaluation of hybrid ARQ with code combining. The residual error rates, average transmission number and throughput performance are presented by means of numerical analysis and simulations. It is validated that the extended SNR formula and the resulting quasi-analytical approximations offer a simplified routine for estimating the performance of hybrid ARQ with code combining, particularly for applications whose reliability performance relative to the FEC counterpart system can be numerically calculated or evaluated through simulations.

  9. What do Codes of Conduct do? Hybrid Constitutionalization and Militarization in Military Markets

    DEFF Research Database (Denmark)

    Leander, Anna

    2012-01-01

    ...jurisgenerativities) of these codes. The article illustrates the argument through an analysis of two jurisgenerative processes (linked to regulation and to politics) triggered by Codes of Conduct in commercial military markets. It shows that the codes are creating both a hybrid regulatory (or constitutional) network......-military/security professional involvement in the debate over the regulation of commercial military markets would be the appropriate way of handling it.

  10. A multi-scale code for flexible hybrid simulations

    CERN Document Server

    Leukkunen, L; Lopez-Acevedo, O

    2012-01-01

    Multi-scale computer simulations combine computationally efficient classical algorithms with more expensive but more accurate ab initio quantum mechanical algorithms. This work describes one implementation of multi-scale computations using the Atomistic Simulation Environment (ASE). This implementation can mix classical codes like LAMMPS and the Density Functional Theory-based GPAW; however, any combination of codes linked via the ASE interface can be mixed. We also introduce a framework to easily add classical force-field calculators for ASE using LAMMPS, which also allows harnessing the full performance of classical-only molecular dynamics. Our work makes it possible to combine different simulation codes, quantum mechanical or classical, with great ease and minimal coding effort.

  11. Electromagnetic self-consistent field initialization and fluid advance techniques for hybrid-kinetic PWFA code Architect

    Science.gov (United States)

    Massimo, F.; Marocchino, A.; Rossi, A. R.

    2016-09-01

    The realization of plasma wakefield acceleration experiments with high quality of the accelerated bunches requires an increasing number of numerical simulations to perform first-order assessments for the experimental design and online analysis of the experimental results. Particle-in-cell codes are the state-of-the-art tools to study the beam-plasma interaction mechanism, but their requirements in terms of number of cores and computational time make them unsuitable for quick parametric scans. Considerable interest has thus been shown in methods which reduce the computational time needed for the simulation of plasma acceleration. Such methods include the use of hybrid kinetic-fluid models, which treat the relativistic bunches as in a PIC code and the background plasma electrons as a fluid. A technique to properly initialize the bunch electromagnetic fields in the time-explicit hybrid kinetic-fluid code Architect is presented, as well as the implementation of the Flux Corrected Transport scheme for the fluid equations integrated in the code.

  12. Hybrid particle swarm optimization for solving resource-constrained FMS

    Institute of Scientific and Technical Information of China (English)

    Dongyun Wang; Liping Liu

    2008-01-01

    In this paper, an approach for resource-constrained flexible manufacturing system (FMS) scheduling was proposed, which is based on the particle swarm optimization (PSO) algorithm and the simulated annealing (SA) algorithm. First, the formulation of the resource-constrained FMS scheduling problem was introduced and a cost function for this problem was obtained. Then, a hybrid algorithm of PSO and SA was employed to obtain the optimal solution. The simulated results show that the approach can dislodge a state from a local minimum and guide it to the global minimum.

  13. Optical Code-Division Multiple-Access and Wavelength Division Multiplexing: Hybrid Scheme Review

    Directory of Open Access Journals (Sweden)

    P. Susthitha Menon

    2012-01-01

    Full Text Available Problem statement: Hybrid Optical Code-Division Multiple-Access (OCDMA) and Wavelength-Division Multiplexing (WDM) have flourished as successful schemes for expanding the transmission capacity as well as enhancing the security of OCDMA. However, a comprehensive review of this hybrid system has been lacking. Approach: The purpose of this paper is to review the literature on OCDMA-WDM overlay systems, including our hybrid approach of one-dimensional coding of SAC OCDMA with WDM signals. In addition, we present a review of other categories of hybrid WDM/OCDMA schemes, where OCDMA codes can be employed on each WDM wavelength. Furthermore, an essential background of OCDMA, recent coding techniques and security issues are also presented. Results: Our results indicate that the feasibility of transmitting both OCDMA and WDM users on the same spectrum band can be achieved using the MQC family code with acceptable performance as well as good data confidentiality. In addition, the WDM interference signals can be suppressed properly for detection of optical broadband CDMA using notch filters. Conclusion: The paper provides a comprehensive overview of hybrid OCDMA-WDM systems and can be used as a baseline study by other scientists in a similar scope of research.

  14. A novel neutron energy spectrum unfolding code using particle swarm optimization

    Science.gov (United States)

    Shahabinejad, H.; Sohrabpour, M.

    2017-07-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse-height distribution and a response matrix. Particle swarm optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with those of standard spectra and of the recently published Two-step Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code has previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate than those codes. The results of the SDPSO code have been demonstrated to match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO code has been shown to be nearly two times faster than the TGASU code.
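
    The core of such an unfolding problem is the objective being minimized: fold a candidate spectrum through the response matrix and compare it with the measured pulse-height distribution. The sketch below builds that least-squares objective with a hypothetical response matrix and toy spectrum, and uses a non-negative least-squares solution as a baseline; any optimizer, including a PSO such as SDPSO, could minimize the same objective.

```python
import numpy as np
from scipy.optimize import nnls

# hypothetical response matrix R (pulse-height channels x energy bins) and a
# measured pulse-height distribution m generated from a toy spectrum
rng = np.random.default_rng(1)
R = np.abs(rng.normal(size=(64, 32)))
true_phi = np.exp(-np.linspace(0.0, 4.0, 32))      # toy "neutron spectrum"
m = R @ true_phi + 0.01 * rng.normal(size=64)      # noisy measurement

def unfolding_objective(phi):
    """Least-squares misfit between the folded candidate spectrum and the
    measurement: the quantity a PSO-based unfolder would minimize."""
    return np.sum((R @ np.abs(phi) - m) ** 2)

# non-negative least-squares baseline for comparison
phi_nnls, resid_norm = nnls(R, m)
print(unfolding_objective(phi_nnls), resid_norm ** 2)
```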

  15. A Comparative Study of Several Hybrid Particle Swarm Algorithms for Function Optimization

    Directory of Open Access Journals (Sweden)

    Yanhua Zhong

    2012-11-01

    Full Text Available Many hybrid particle swarm algorithms have been proposed to overcome the tendency of standard particle swarm optimization to converge prematurely to local extrema, and these algorithms are claimed to perform better than the standard particle swarm. This study selects three representative hybrid particle swarm optimizations (differential evolution particle swarm optimization, GA particle swarm optimization and quantum particle swarm optimization) together with the standard particle swarm optimization and tests them on three objective functions. We compare the performance of the algorithms in terms of convergence speed and accuracy for a fixed number of iterations, and in terms of the number of iterations required for a fixed convergence precision, analyzing the results and practical performance of these hybrid particle swarm optimizations. Test results show that the performance of the hybrid particle swarm algorithms is significantly improved.

  16. A Comparative Study of Several Hybrid Particle Swarm Algorithms for Function Optimization

    Directory of Open Access Journals (Sweden)

    Yanhua Zhong

    2013-01-01

    Full Text Available Many hybrid particle swarm algorithms have been proposed to overcome the tendency of standard particle swarm optimization to converge prematurely to local extrema, and these algorithms are claimed to perform better than the standard particle swarm. This study selects three representative hybrid particle swarm optimizations (differential evolution particle swarm optimization, GA particle swarm optimization and quantum particle swarm optimization) together with the standard particle swarm optimization and tests them on three objective functions. We compare the performance of the algorithms in terms of convergence speed and accuracy for a fixed number of iterations, and in terms of the number of iterations required for a fixed convergence precision, analyzing the results and practical performance of these hybrid particle swarm optimizations. Test results show that the performance of the hybrid particle swarm algorithms is significantly improved.

  17. Adaptive hybrid subband image coding with DWT, DCT, and modified DPCM

    Science.gov (United States)

    Kim, Tae W.; Choe, Howard C.; Griswold, Norman C.

    1997-04-01

    Image coding based on subband decomposition with DPCM and PCM has received much attention in the areas of image compression research and industry. In this paper we present a new adaptive image subband coding with discrete wavelet transform, discrete cosine transform, and a modified DPCM. The main contribution of this work is the development of a simple, yet effective image compression and transmission algorithm. An important feature of this algorithm is the hybrid modified DPCM coding scheme which produces both simple, but significant, image compression and transmission coding.

  18. A HYDROCHEMICAL HYBRID CODE FOR ASTROPHYSICAL PROBLEMS. I. CODE VERIFICATION AND BENCHMARKS FOR A PHOTON-DOMINATED REGION (PDR)

    Energy Technology Data Exchange (ETDEWEB)

    Motoyama, Kazutaka [National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430 (Japan); Morata, Oscar; Hasegawa, Tatsuhiko [Institute of Astronomy and Astrophysics, Academia Sinica, Taipei 10617, Taiwan (China); Shang, Hsien; Krasnopolsky, Ruben, E-mail: shang@asiaa.sinica.edu.tw [Theoretical Institute for Advanced Research in Astrophysics, Academia Sinica, Taipei 10617, Taiwan (China)

    2015-07-20

    A two-dimensional hydrochemical hybrid code, KM2, is constructed to deal with astrophysical problems that would require coupled hydrodynamical and chemical evolution. The code assumes axisymmetry in a cylindrical coordinate system and consists of two modules: a hydrodynamics module and a chemistry module. The hydrodynamics module solves hydrodynamics using a Godunov-type finite volume scheme and treats included chemical species as passively advected scalars. The chemistry module implicitly solves nonequilibrium chemistry and change of energy due to thermal processes with transfer of external ultraviolet radiation. Self-shielding effects on photodissociation of CO and H{sub 2} are included. In this introductory paper, the adopted numerical method is presented, along with code verifications using the hydrodynamics module and a benchmark on the chemistry module with reactions specific to a photon-dominated region (PDR). Finally, as an example of the expected capability, the hydrochemical evolution of a PDR is presented based on the PDR benchmark.

  19. A Hydrochemical Hybrid Code for Astrophysical Problems. I. Code Verification and Benchmarks for Photon-Dominated Region (PDR)

    CERN Document Server

    Motoyama, Kazutaka; Shang, Hsien; Krasnopolsky, Ruben; Hasegawa, Tatsuhiko

    2015-01-01

    A two-dimensional hydrochemical hybrid code, KM2, is constructed to deal with astrophysical problems that would require coupled hydrodynamical and chemical evolution. The code assumes axisymmetry in a cylindrical coordinate system and consists of two modules: a hydrodynamics module and a chemistry module. The hydrodynamics module solves hydrodynamics using a Godunov-type finite volume scheme and treats included chemical species as passively advected scalars. The chemistry module implicitly solves non-equilibrium chemistry and the change of energy due to thermal processes with transfer of external ultraviolet radiation. Self-shielding effects on photodissociation of CO and H$_2$ are included. In this introductory paper, the adopted numerical method is presented, along with code verifications using the hydrodynamics module, and a benchmark on the chemistry module with reactions specific to a photon-dominated region (PDR). Finally, as an example of the expected capability, the hydrochemical evolution of a PDR is...

  20. The Plasma Simulation Code: A modern particle-in-cell code with load-balancing and GPU support

    CERN Document Server

    Germaschewski, Kai; Ahmadi, Narges; Wang, Liang; Abbott, Stephen; Ruhl, Hartmut; Bhattacharjee, Amitava

    2013-01-01

    Recent increases in supercomputing power, driven by the multi-core revolution and accelerators such as the IBM Cell processor, graphics processing units (GPUs) and Intel's Many Integrated Core (MIC) technology, have enabled kinetic simulations of plasmas at unprecedented resolutions, but changing HPC architectures also come with challenges for writing efficient numerical codes. This paper describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different-order particle shape functions. We focus on two distinguishing features of the code: patch-based load balancing using space-filling curves, and support for Nvidia GPUs, which achieves substantial speed-ups of more than 6x on the Cray XK7 architecture compared to a CPU-only implementation.

  1. Benchmarking a hybrid MHD/kinetic code with C-2 experimental data

    Science.gov (United States)

    Magee, Richard; Clary, Ryan; Dettrick, Sean; Korepanov, Sergey; Onofri, Marco; Smirnov, Artem; TAE Team

    2013-10-01

    The C-2 device creates field-reversed configuration (FRC) plasmas via the dynamic merging of two compact toroids and heated with neutral beams. Simulations of these plasmas are performed with Q2D - a hybrid MHD/Monte Carlo code that evolves the plasma according to the resistive MHD equations and treats the neutral beam injected fast ions as a minority kinetic species. Recent Q2D runs have resulted in testable predictions, namely that the axial profile of the fast ions is double-peaked, and charge-exchange neutrals are localized in pitch-angle. In some simulations, the fast particle population can induce magnetic fluctuations. These fluctuations are largest in the radial component, have a characteristic frequency approximately equal to the fast ion bounce frequency (f ~ 150 kHz), and a broad k spectrum. These fluctuations have the beneficial effect of smoothing out the double-peaked axial fast ion density profile, resulting in an increased fast ion density at the mid-plane. We will present results from a benchmarking study to quantitatively compare the results of Q2D runs to existing C-2 experimental data.

  2. Spatial resolution enhancement residual coding using hybrid wavelets and directional filter banks

    Indian Academy of Sciences (India)

    Ankit Ashokrao Bhurane; Prateek Chaplot; Dushyanth Nutulapati; Vikram M Gadre

    2015-10-01

    Traditional video coding uses classical predictive coding techniques, where a signal is initially approximated by taking advantage of the various redundancies present. Most video coding standards, including the latest HEVC, use the well-accepted procedure of applying transform coding to self-contained (intra) and inter-predicted frame residuals. Nevertheless, it has been shown in the literature that normal video frames possess distinct characteristics compared to residual frames. In this paper, we make use of hybrid wavelet transforms and directional filter banks (HWD) to encode resolution-enhancement residuals in the context of scalable video coding. Results are presented for the use of HWD in the framework of the Dirac video codec. The experiments are carried out on a variety of test frames. Our experiments on residue coding using HWD show better performance compared to the conventional DWT, when tested on the same platform of the well-known SPIHT algorithm.

  3. Codebook Design and Hybrid Digital/Analog Coding for Parallel Rayleigh Fading Channels

    OpenAIRE

    Shi, Shuying; Larsson, Erik G.; Skoglund, Mikael

    2011-01-01

    Low-delay source-channel transmission over parallel fading channels is studied. In this scenario separate source and channel coding is highly suboptimal. A scheme based on hybrid digital/analog joint source-channel coding is therefore proposed, employing scalar quantization and polynomial-based analog bandwidth expansion. Simulations demonstrate substantial performance gains. Funding agencies: European Community (248993); ELLIIT; Knut and Alice Wallenberg Foundation.

  4. Tree-Particle-Mesh: an adaptive, efficient, and parallel code for collisionless cosmological simulation

    CERN Document Server

    Bode, P; Bode, Paul; Ostriker, Jeremiah P.

    2003-01-01

    An improved implementation of an N-body code for simulating collisionless cosmological dynamics is presented. TPM (Tree-Particle-Mesh) combines the PM method on large scales with a tree code to handle particle-particle interactions at small separations. After the global PM forces are calculated, spatially distinct regions above a given density contrast are located; the tree code calculates the gravitational interactions inside these denser objects at higher spatial and temporal resolution. The new implementation includes individual particle time steps within trees, an improved treatment of tidal forces on trees, new criteria for higher force resolution and choice of time step, and parallel treatment of large trees. TPM is compared to P^3M and a tree code (GADGET) and is found to give equivalent results in significantly less time. The implementation is highly portable (requiring a Fortran compiler and MPI) and efficient on parallel machines. The source code can be found at http://astro.princeton.edu/~bode/TPM/
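
    The step where TPM locates spatially distinct regions above a given density contrast can be sketched with a grid density estimate followed by connected-component labeling; the grid size, contrast threshold and nearest-grid-point deposit below are illustrative choices, not TPM's actual implementation.

```python
import numpy as np
from scipy.ndimage import label

def find_tree_regions(pos, box, n_grid=64, contrast=10.0):
    """Locate spatially distinct overdense regions, in the spirit of the TPM
    step that hands dense objects to the tree code (parameters are placeholders).
    Returns, for each region, the indices of its member particles."""
    # nearest-grid-point density estimate (TPM would use a smoother deposit)
    cells = np.floor(pos / box * n_grid).astype(int) % n_grid
    density = np.zeros((n_grid,) * 3)
    np.add.at(density, tuple(cells.T), 1.0)
    density /= density.mean()

    # connected components of cells above the density contrast
    regions, n_regions = label(density > contrast)
    region_of_particle = regions[tuple(cells.T)]
    return [np.where(region_of_particle == r)[0] for r in range(1, n_regions + 1)]

pos = np.random.rand(100000, 3)       # hypothetical particle positions in a unit box
groups = find_tree_regions(pos, box=1.0)
print(len(groups), sum(len(g) for g in groups))
```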

  5. Asymmetric backscattering from the hybrid magneto-electric meta particle

    Science.gov (United States)

    Kozlov, Vitali; Filonov, Dmitry; Shalin, Alexander S.; Steinberg, Ben Z.; Ginzburg, Pavel

    2016-11-01

    The optical theorem relates the total scattering cross-section of a given structure to its forward scattering, but does not impose any restrictions on other directions. Strong backward-forward asymmetry in scattering can be achieved by exploiting retarded coupling between particles exhibiting both electric and magnetic resonances. Here, a hybrid magneto-electric particle (HMEP), consisting of a split-ring resonator acting as a magnetic dipole and a wire antenna acting as an electric dipole, is shown to possess asymmetric scattering properties. When illuminated from opposite directions with the same polarization of the electric field, the structure has exactly the same forward scattering, whereas the backward scattering is drastically different. The scattering cross-section is shown to be as low as zero over a narrow frequency range when the structure is illuminated from one side, while being maximal over the same frequency range when illuminated from the other side. Theoretical predictions of the phenomena are supported by both numerical and experimental confirmations, obtained in the GHz frequency range, all in good agreement with each other. HMEP meta-particles could be used as building blocks for various metamaterials, e.g. for solar cells, invisibility cloaks and holographic masks.

  6. Hybrid coding for split gray values in radiological image compression

    Science.gov (United States)

    Lo, Shih-Chung B.; Krasner, Brian; Mun, Seong K.; Horii, Steven C.

    1992-05-01

    Digital techniques are used more often than ever in a variety of fields. Medical information management is one of the largest digital technology applications. It is desirable to have both a large data storage resource and extremely fast data transmission channels for communication. On the other hand, it is also essential to compress these data into an efficient form for storage and transmission. A variety of data compression techniques have been developed to tackle a diversity of situations. A digital value decomposition method using a splitting and remapping method has recently been proposed for image data compression. This method attempts to employ an error-free compression for one part of the digital value containing highly significant value and uses another method for the second part of the digital value. We have reported that the effect of this method is substantial for the vector quantization and other spatial encoding techniques. In conjunction with DCT type coding, however, the splitting method only showed a limited improvement when compared to the nonsplitting method. With the latter approach, we used a nonoptimized method for the images possessing only the top three-most-significant-bit value (3MSBV) and produced a compression ratio of approximately 10:1. Since the 3MSB images are highly correlated and the same values tend to aggregate together, the use of area or contour coding was investigated. In our experiment, we obtained an average error-free compression ratio of 30:1 and 12:1 for 3MSB and 4MSB images, respectively, with the alternate value contour coding. With this technique, we clearly verified that the splitting method is superior to the nonsplitting method for finely digitized radiographs.
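
    The splitting-and-remapping step itself is simple bit arithmetic: the top bits form a small, highly correlated image that can be coded losslessly (e.g. by contour coding), while the remaining bits are coded separately. The sketch below shows a 12-bit split into 3 MSBs plus a 9-bit remainder and verifies that the split is exactly reversible; the image content is random placeholder data.

```python
import numpy as np

def split_gray(image12, n_msb=3, depth=12):
    """Split a 12-bit image into its n_msb most significant bits (to be coded
    losslessly) and the remaining low-order bits (coded by another method)."""
    shift = depth - n_msb
    msb = image12 >> shift                  # e.g. the 3MSB image, values 0..7
    rest = image12 & ((1 << shift) - 1)     # remaining 9-bit residual image
    return msb, rest

def merge_gray(msb, rest, n_msb=3, depth=12):
    """Recombine the two parts; the decomposition is exactly reversible."""
    return (msb << (depth - n_msb)) | rest

img = np.random.randint(0, 4096, size=(256, 256), dtype=np.uint16)  # placeholder radiograph
msb, rest = split_gray(img)
assert np.array_equal(merge_gray(msb, rest), img)
```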

  7. A semiclassical hybrid approach to many particle quantum dynamics

    Science.gov (United States)

    Grossmann, Frank

    2006-07-01

    We analytically derive a correlated approach for a mixed semiclassical many particle dynamics, treating a fraction of the degrees of freedom by the multitrajectory semiclassical initial value method of Herman and Kluk [Chem. Phys. 91, 27 (1984)] while approximately treating the dynamics of the remaining degrees of freedom with fixed initial phase space variables, analogously to the thawed Gaussian wave packet dynamics of Heller [J. Chem. Phys. 62, 1544 (1975)]. A first application of this hybrid approach to the well studied Secrest-Johnson [J. Chem. Phys. 45, 4556 (1966)] model of atom-diatomic collisions is promising. Results close to the quantum ones for correlation functions as well as scattering probabilities could be gained with considerably reduced numerical effort as compared to the full semiclassical Herman-Kluk approach. Furthermore, the harmonic nature of the different degrees of freedom can be determined a posteriori by comparing results with and without the additional approximation.

  8. Hybrid Compton camera/coded aperture imaging system

    Science.gov (United States)

    Mihailescu, Lucian [Livermore, CA; Vetter, Kai M [Alameda, CA

    2012-04-10

    A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.

  9. Shielding evaluation for solar particle events using MCNPX, PHITS and OLTARIS codes

    Science.gov (United States)

    Aghara, S. K.; Sriprisan, S. I.; Singleterry, R. C.; Sato, T.

    2015-01-01

    Detailed analyses of solar particle events (SPE) were performed to calculate primary and secondary particle spectra behind aluminum, at various thicknesses in water. The simulations were based on the Monte Carlo (MC) radiation transport codes MCNPX 2.7.0 and PHITS 2.64, and on the space radiation analysis website OLTARIS (On-Line Tool for the Assessment of Radiation in Space) version 3.4, which uses the deterministic code HZETRN for transport. The study investigates the impact of SPE spectra transported through a 10 or 20 g/cm2 Al shield followed by a 30 g/cm2 water slab. Four historical SPE events were selected and used as input source spectra; particle differential spectra for protons, neutrons, and photons are presented. The total particle fluence as a function of depth is presented. In addition to particle flux, the dose and dose equivalent values are calculated and compared between the codes and with other published results. Overall, the particle fluence spectra from all three codes show good agreement, with the MC codes in closer agreement with each other than with the OLTARIS results. The neutron particle fluence from OLTARIS is lower than the results from the MC codes at lower energies. MCNPX and PHITS agree better with each other for fluence, dose and dose equivalent than with the OLTARIS results.

  10. Performance of Hybrid Concatenated Trellis Codes CPFSK with Iterative Decoding over Fading Channels

    CERN Document Server

    Gergis, Labib Francis

    2011-01-01

    Concatenation is a method of building long codes out of shorter ones; it attempts to address the problem of decoding complexity by breaking the required computation into manageable segments. Concatenated continuous phase frequency shift keying (CPFSK) facilitates powerful error correction. CPFSK also has the advantage of being bandwidth efficient and compatible with nonlinear amplifiers. Bandwidth-efficient concatenated coded modulation schemes were designed for communication over additive white Gaussian noise (AWGN) and Rayleigh fading channels. Analytical bounds on the performance of serial concatenated convolutional codes (SCCC) and parallel concatenated convolutional codes (PCCC) were derived as a basis of comparison with the third category, known as the hybrid concatenated trellis codes scheme (HCTC). An upper bound to the average maximum-likelihood bit error probability of the three schemes was obtained. Design rules for the parallel, outer, and inner codes that maximize the interleaver's gain are discuss...

  11. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  12. Implementation and performance of FDPS: A Framework Developing Parallel Particle Simulation Codes

    CERN Document Server

    Iwasawa, Masaki; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro

    2016-01-01

    We have developed FDPS (Framework for Developing Particle Simulator), which enables researchers and programmers to develop high-performance parallel particle simulation codes easily. The basic idea of FDPS is to separate the program code for complex parallelization, including domain decomposition, redistribution of particles, and exchange of particle information for interaction calculation between nodes, from the actual interaction calculation and orbital integration. FDPS provides the former part and the users write the latter. Thus, a user can implement a high-performance, fully parallelized $N$-body code in only 120 lines. In this paper, we present the structure and implementation of FDPS, and describe its performance on three sample applications: disk galaxy simulation, cosmological simulation and giant impact simulation. All codes show very good parallel efficiency and scalability on the K computer and XC30. FDPS lets researchers concentrate on the implementation of physics and mathematical schemes, without wa...

  13. Operation management of daily economic dispatch using novel hybrid particle swarm optimization and gravitational search algorithm with hybrid mutation strategy

    Science.gov (United States)

    Wang, Yan; Huang, Song; Ji, Zhicheng

    2017-07-01

    This paper presents a hybrid particle swarm optimization and gravitational search algorithm based on a hybrid mutation strategy (HGSAPSO-M) to optimize economic dispatch (ED) including distributed generations (DGs), considering market-based energy pricing. A daily ED model was formulated and a hybrid mutation strategy was adopted in HGSAPSO-M. The hybrid mutation strategy includes two mutation operators: chaotic mutation and Gaussian mutation. The proposed algorithm was tested on the IEEE-33 bus system, and the results show that the approach is effective for this problem.
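
    The two mutation operators named above can be sketched in a generic form: a Gaussian perturbation and a chaotic perturbation driven by a logistic map, applied at random as a hybrid strategy. The logistic-map form, step sizes and bounds are assumptions for illustration; the paper's exact operators may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mutation(x, sigma=0.1, lo=-1.0, hi=1.0):
    """Perturb a candidate with zero-mean Gaussian noise (step size assumed)."""
    return np.clip(x + rng.normal(0.0, sigma, size=x.shape), lo, hi)

def chaotic_mutation(x, lo=-1.0, hi=1.0, n_map=20):
    """Perturb a candidate using a logistic-map chaotic sequence
    (one common choice; not necessarily the paper's operator)."""
    z = rng.uniform(0.01, 0.99, size=x.shape)
    for _ in range(n_map):
        z = 4.0 * z * (1.0 - z)            # logistic map, stays in (0, 1)
    return np.clip(lo + z * (hi - lo), lo, hi)

def hybrid_mutation(x, p_chaotic=0.5):
    """Apply one of the two operators at random, as a hybrid strategy might."""
    return chaotic_mutation(x) if rng.random() < p_chaotic else gaussian_mutation(x)

print(hybrid_mutation(np.zeros(5)))
```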

  14. A Novel Hybrid Statistical Particle Swarm Optimization for Multimodal Functions and Frequency Control of Hybrid Wind-Solar System

    Science.gov (United States)

    Verma, Harish Kumar; Jain, Cheshta

    2016-09-01

    In this article, a hybrid algorithm of particle swarm optimization (PSO) with a statistical parameter (HSPSO) is proposed. Basic PSO has low searching precision on shifted multimodal problems because it falls into local minima. The proposed approach uses statistical characteristics to update the velocity of the particles to avoid local minima and help the particles search for the global optimum with improved convergence. The performance of the newly developed algorithm is verified using various standard multimodal, multivariable, shifted hybrid composition benchmark problems. Further, HSPSO is compared with other variants of PSO in controlling the frequency of a hybrid renewable energy system comprising a solar system, wind system, diesel generator, aqua electrolyzer and ultracapacitor. A significant improvement in the convergence characteristic of the HSPSO algorithm over other variants of PSO is observed in solving the benchmark optimization and renewable hybrid system problems.

  15. A new hybrid coding for protein secondary structure prediction based on primary structure similarity.

    Science.gov (United States)

    Li, Zhong; Wang, Jing; Zhang, Shunpu; Zhang, Qifeng; Wu, Wuming

    2017-03-16

    The coding pattern of protein can greatly affect the prediction accuracy of protein secondary structure. In this paper, a novel hybrid coding method based on the physicochemical properties of amino acids and tendency factors is proposed for the prediction of protein secondary structure. The principal component analysis (PCA) is first applied to the physicochemical properties of amino acids to construct a 3-bit-code, and then the 3 tendency factors of amino acids are calculated to generate another 3-bit-code. Two 3-bit-codes are fused to form a novel hybrid 6-bit-code. Furthermore, we make a geometry-based similarity comparison of the protein primary structure between the reference set and the test set before the secondary structure prediction. We finally use the support vector machine (SVM) to predict those amino acids which are not detected by the primary structure similarity comparison. Experimental results show that our method achieves a satisfactory improvement in accuracy in the prediction of protein secondary structure.
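
    The coding construction described above can be sketched in a few lines: PCA reduces a physicochemical property table for the 20 amino acids to three components, three tendency factors are appended, and the two parts are fused into a 6-component code per residue type. The property matrix and tendency factors below are random placeholders, not the published values.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      props = rng.random((20, 5))              # placeholder: 20 amino acids x 5 physicochemical properties
      tendency = rng.random((20, 3))           # placeholder: 3 tendency factors per amino acid

      code_pca = PCA(n_components=3).fit_transform(props)   # first 3-component code
      hybrid_code = np.hstack([code_pca, tendency])          # fused 6-component hybrid code
      print(hybrid_code.shape)                                # (20, 6)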

  16. ORBXYZ: A 3D single-particle orbit code for following charged particle trajectories in equilibrium magnetic fields

    Science.gov (United States)

    Anderson, D. V.; Cohen, R. H.; Ferguson, J. R.; Johnston, B. M.; Sharp, C. B.; Willmann, P. A.

    1981-06-01

    The single particle orbit code, TIBRO, was modified extensively to improve the interpolation methods used and to allow the use of vector potential fields in the simulation of charged particle orbits on a 3D domain. A 3D cubic B-spline algorithm is used to generate the spline coefficients used in the interpolation. Smooth and accurate field representations are obtained. When vector potential fields are used, the 3D cubic spline interpolation formula analytically generates the magnetic field used to push the particles. This field satisfies ∇·B = 0 to computer roundoff. When the magnetic induction is interpolated directly, the interpolation allows ∇·B ≠ 0, which can lead to significant nonphysical results. Presently the code assumes quadrupole symmetry, but this is not an essential feature of the code and could be easily removed for other applications.
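
    A minimal numerical illustration of the point made above: deriving B from a vector potential keeps ∇·B essentially zero. The sketch uses central differences instead of the code's 3D cubic B-splines, and a simple analytic A, purely for demonstration.

      import numpy as np

      n = 32
      x = np.linspace(-1.0, 1.0, n)
      dx = x[1] - x[0]
      X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
      Ax, Ay, Az = -0.5 * Y, 0.5 * X, np.zeros_like(X)      # curl(A) = (0, 0, 1)

      def curl(Ax, Ay, Az, d):
          # B = curl(A); the orbit code would use analytic spline derivatives here.
          Bx = np.gradient(Az, d, axis=1) - np.gradient(Ay, d, axis=2)
          By = np.gradient(Ax, d, axis=2) - np.gradient(Az, d, axis=0)
          Bz = np.gradient(Ay, d, axis=0) - np.gradient(Ax, d, axis=1)
          return Bx, By, Bz

      Bx, By, Bz = curl(Ax, Ay, Az, dx)
      divB = (np.gradient(Bx, dx, axis=0) + np.gradient(By, dx, axis=1)
              + np.gradient(Bz, dx, axis=2))
      print("max |div B| =", np.abs(divB).max())            # ~ machine roundoff for this A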

  17. Particle Simulation Code for the Electron Temperature Gradient Instability in Tokamak Toroidal Plasmas

    Institute of Scientific and Technical Information of China (English)

    JIAN Guangde; DONG Jiaqi

    2003-01-01

    A numerical simulation code has been established using the particle simulation method in order to study the gyro-kinetic equations for the electrostatic electron temperature gradient modes in toroidal plasmas. A flowchart of the code is also given. A fourth-order adaptive step-size scheme is adopted, which is simple and saves computer time. The code is useful for research on the electron temperature gradient instability.
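
    As a hedged illustration of the fourth-order adaptive step-size idea (not the gyro-kinetic push of the paper), the sketch below integrates a simple test-particle Lorentz-force orbit with SciPy's adaptive Runge-Kutta method, which adjusts the step size to a requested tolerance. The field values and tolerances are placeholders.

      import numpy as np
      from scipy.integrate import solve_ivp

      E = np.array([0.0, 1e-2, 0.0])          # illustrative crossed E and B fields
      B = np.array([0.0, 0.0, 1.0])

      def rhs(t, y):
          v = y[3:]
          return np.concatenate([v, E + np.cross(v, B)])   # q/m = 1 units

      y0 = np.array([0.0, 0.0, 0.0, 0.1, 0.0, 0.0])
      sol = solve_ivp(rhs, (0.0, 200.0), y0, method="RK45", rtol=1e-8, atol=1e-10)
      print(sol.y[:3, -1])                    # final particle position after the E x B drift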

  18. An electrostatic particle-in-cell model for a lower hybrid grill

    Energy Technology Data Exchange (ETDEWEB)

    Rantamaeki, K

    1998-07-01

    In recent lower hybrid (LH) current drive experiments, generation of hot spots and impurities in the grill region has been observed on Tore Supra and Tokamak de Varennes (TdeV). A possible explanation is the parasitic absorption of the LH power in front of the grill. In parasitic absorption, the short-wavelength part of the lower hybrid spectrum can resonantly interact with the cold edge electrons. In this work, the absorption of the LH waves and the generation of fast electrons near the waveguide mouth are investigated with a new tool in this context: particle-in-cell (PIC) simulations. The advantage of this new method is that the electric field is calculated self-consistently. The PIC simulations also provide the key parameters for the hot spot problem: the absorbed power, the radial deposition profiles and the absorption length. A grill model has been added to the 2d3v PIC code XPDP2. Two sets of simulations were made. The first simulations used a phenomenological grill model. Strong absorption in the edge plasma was obtained. About 5% of the coupled power was absorbed within 1.7 mm in the case with a fairly large amount of power in the modes with a large parallel refractive index. Consequently, a rapid generation of fast electrons took place in the same region. In order to model experiments with realistic wave spectra, the PIC code was coupled to the slow wave antenna coupling code SWAN. The absorption within 1.7 mm in front of the grill was found to be between 2 and 5%. In the short time of a few wave periods, part of the initially thermal electrons (Te = 100 eV) was accelerated to velocities corresponding to a few keV. (orig.)

  19. Comparison of Particle Flow Code and Smoothed Particle Hydrodynamics Modelling of Landslide Run outs

    Science.gov (United States)

    Preh, A.; Poisel, R.; Hungr, O.

    2009-04-01

    In most continuum mechanics methods modelling the run out of landslides, the moving mass is divided into a number of elements, the velocities of which can be established by numerical integration of Newton's second law (Lagrangian solution). The methods are based on fluid mechanics, modelling the movements of an equivalent fluid. In 2004, McDougall and Hungr presented a three-dimensional numerical model for rapid landslides, e.g. debris flows and rock avalanches, called DAN3D. The method is based on the previous work of Hungr (1995) and uses an integrated two-dimensional Lagrangian solution and the meshless Smoothed Particle Hydrodynamics (SPH) principle to maintain continuity. DAN3D has an open rheological kernel, allowing the use of frictional (with constant pore-pressure ratio) and Voellmy rheologies, and gives the possibility to change the material rheology along the path. Discontinuum (granular) mechanics methods model the run out mass as an assembly of particles moving down a surface. Each particle is followed exactly as it moves and interacts with the surface and with its neighbours. Every particle is checked for contacts with every other particle in every time step, using a special cell logic for contact detection in order to reduce the computational effort. The Discrete Element code PFC3D was adapted in order to make discontinuum mechanics models of run outs possible. The Punta Thurwieser Rock Avalanche and the Frank Slide were modelled by DAN as well as by PFC3D. The simulations correspondingly showed that the parameters necessary to get results coinciding with observations in nature are completely different. The maximum velocity distributions from DAN3D reveal that areas of different maximum flow velocity lie next to each other in the Punta Thurwieser run out, whereas the maximum flow velocity is almost constant over the width of the run out for the Frank Slide. Some 30 percent of total kinetic energy is rotational kinetic energy in

  20. DANTSYS: A diffusion accelerated neutral particle transport code system

    Energy Technology Data Exchange (ETDEWEB)

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O`Dell, R.D.; Walters, W.F.

    1995-06-01

    The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are Solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral triangle spatial mesh. The THREEDANT Solver Module solves the time-independent, three-dimensional transport equation for XYZ and RZΘ symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond differencing method with set-to-zero fixup, with changes to accommodate the generalized spatial meshing.
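
    To make the diamond-differencing idea concrete, the following Python sketch solves a one-group, one-dimensional slab problem with an S8 Gauss-Legendre quadrature, vacuum boundaries and source iteration on the isotropic scattering term. The cross sections, mesh and source are arbitrary illustrative values, and the sketch omits the acceleration and fixup schemes of DANTSYS.

      import numpy as np

      nx, slab_len = 50, 10.0
      dx = slab_len / nx
      sig_t, sig_s = 1.0, 0.5                       # total and scattering cross sections
      q_ext = np.full(nx, 1.0)                      # flat external source

      mu, w = np.polynomial.legendre.leggauss(8)    # S8 angular quadrature (weights sum to 2)

      phi = np.zeros(nx)
      for _ in range(200):                          # source (scattering) iteration
          q = 0.5 * (sig_s * phi + q_ext)           # isotropic emission density per unit mu
          phi_new = np.zeros(nx)
          for m, wm in zip(mu, w):
              psi_in = 0.0                          # vacuum boundary
              cells = range(nx) if m > 0 else range(nx - 1, -1, -1)
              for i in cells:
                  # diamond difference: psi_cell = (psi_in + psi_out) / 2
                  psi_out = (q[i] + (abs(m) / dx - 0.5 * sig_t) * psi_in) / (abs(m) / dx + 0.5 * sig_t)
                  phi_new[i] += wm * 0.5 * (psi_in + psi_out)
                  psi_in = psi_out
          if np.max(np.abs(phi_new - phi)) < 1e-8:
              phi = phi_new
              break
          phi = phi_new
      print(phi[nx // 2])                           # scalar flux at the slab centre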

  1. GRADSPH: A parallel smoothed particle hydrodynamics code for self-gravitating astrophysical fluid dynamics

    NARCIS (Netherlands)

    Vanaverbeke, S.; Keppens, R.; Poedts, S.; Boffin, H.

    2009-01-01

    We describe the algorithms implemented in the first version of GRADSPH, a parallel, tree-based, smoothed particle hydrodynamics code for simulating self-gravitating astrophysical systems written in FORTRAN 90. The paper presents details on the implementation of the Smoothed Particle Hydrodynamics (SPH) desc

  2. Beam-splitting code for light scattering by ice crystal particles within geometric-optics approximation

    Science.gov (United States)

    Konoshonkin, Alexander V.; Kustova, Natalia V.; Borovoi, Anatoli G.

    2015-10-01

    The open-source beam-splitting code is described, which implements the geometric-optics approximation to light scattering by convex faceted particles. This code is written in C++ as a library which can be easily applied to a particular light scattering problem. The code uses only standard components, which makes it a cross-platform solution and ensures compatibility with popular Integrated Development Environments (IDEs). The included example, which solves light scattering by a randomly oriented ice crystal, is written using Qt 5.1 and is consequently cross-platform as well. Both physical and computational aspects of the beam-splitting algorithm are discussed. The computational speed of the beam-splitting code is markedly higher than that of conventional ray-tracing codes. A comparison of the phase matrix as computed by our code with the ray-tracing code by A. Macke shows excellent agreement.

  3. Development of a relativistic Particle In Cell code PARTDYN for linear accelerator beam transport

    Science.gov (United States)

    Phadte, D.; Patidar, C. B.; Pal, M. K.

    2017-04-01

    A relativistic Particle In Cell (PIC) code PARTDYN is developed for the beam dynamics simulation of z-continuous and bunched beams. The code is implemented in MATLAB using its MEX functionality, which allows both ease of development and performance similar to that of a compiled language like C. The beam dynamics calculations carried out by the code are compared with analytical results and with other well-developed codes like PARMELA and BEAMPATH. The effect of a finite number of simulation particles on the emittance growth of intense beams has been studied. Corrections to the RF cavity field expressions were incorporated in the code so that the fields could be calculated correctly. The deviations of the beam dynamics results between PARTDYN and BEAMPATH for a cavity driven in zero-mode are discussed. Beam dynamics studies of the Low Energy Beam Transport (LEBT) using PARTDYN are presented.

  4. An Allele Real-Coded Quantum Evolutionary Algorithm Based on Hybrid Updating Strategy.

    Science.gov (United States)

    Zhang, Yu-Xian; Qian, Xiao-Yi; Peng, Hui-Deng; Wang, Jian-Hui

    2016-01-01

    For improving the convergence rate and preventing prematurity in quantum evolutionary algorithms, an allele real-coded quantum evolutionary algorithm based on a hybrid updating strategy is presented. The real variables are coded with the probability superposition of alleles. A hybrid updating strategy balancing the global search and the local search is presented, in which the superior allele is defined. On the basis of the superior allele and the inferior allele, a guided evolutionary process as well as updating of alleles with variable scale contraction is adopted. An Hε gate is introduced to prevent prematurity. Furthermore, the global convergence of the proposed algorithm is proved using Markov chains. Finally, the proposed algorithm is compared with a genetic algorithm, a quantum evolutionary algorithm, and a double chains quantum genetic algorithm in solving continuous optimization problems, and the experimental results verify its advantages in convergence rate and search accuracy.

  5. Performance analysis and code recognition for dual N-ary orthogonal hybrid modulation systems

    Institute of Scientific and Technical Information of China (English)

    Qiao Xiaoqiang; Zhao Hangsheng; Cai Yueming

    2008-01-01

    A dual N-ary orthogonal hybrid modulation system is introduced in this paper, which can greatly increase the data rate compared with the conventional N-ary orthogonal spread-spectrum system, so it can be used for high-rate data communication. Then, three code recognition algorithms are presented for the dual N-ary orthogonal hybrid modulation system, and the analytic bit error rate (BER) performance of the system in additive white Gaussian noise (AWGN) and flat Rayleigh fading channels is derived. Finally, computer simulation of the system with the three code recognition algorithms is performed, which shows that the simplified maximum a posteriori (MAP) algorithm is the best for the system, offering a compromise between performance and complexity.

  6. An Allele Real-Coded Quantum Evolutionary Algorithm Based on Hybrid Updating Strategy

    Directory of Open Access Journals (Sweden)

    Yu-Xian Zhang

    2016-01-01

    Full Text Available For improving the convergence rate and preventing prematurity in quantum evolutionary algorithms, an allele real-coded quantum evolutionary algorithm based on a hybrid updating strategy is presented. The real variables are coded with the probability superposition of alleles. A hybrid updating strategy balancing the global search and the local search is presented, in which the superior allele is defined. On the basis of the superior allele and the inferior allele, a guided evolutionary process as well as updating of alleles with variable scale contraction is adopted. An Hε gate is introduced to prevent prematurity. Furthermore, the global convergence of the proposed algorithm is proved using Markov chains. Finally, the proposed algorithm is compared with a genetic algorithm, a quantum evolutionary algorithm, and a double chains quantum genetic algorithm in solving continuous optimization problems, and the experimental results verify its advantages in convergence rate and search accuracy.

  7. Outdoor Stand-Off Interrogation of Fissionable Material with a Hybrid Coded Imaging System

    Science.gov (United States)

    2013-06-01

    Hutcheson, A. L.; Phlips, B. F.; Wulf, E. A.

  8. Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition

    Energy Technology Data Exchange (ETDEWEB)

    Witherspoon, F. Douglas [HyperV Technologies Corp.; Welch, Dale R. [Voss Scientific, LLC; Thompson, John R. [FAR-TECH, Inc.; MacFarlane, Joeseph J. [Prism Computational Sciences Inc.; Phillips, Michael W. [Advanced Energy Systems, Inc.; Bruner, Nicki [Voss Scientific, LLC; Mostrom, Chris [Voss Scientific, LLC; Thoma, Carsten [Voss Scientific, LLC; Clark, R. E. [Voss Scientific, LLC; Bogatu, Nick [FAR-TECH, Inc.; Kim, Jin-Soo [FAR-TECH, Inc.; Galkin, Sergei [FAR-TECH, Inc.; Golovkin, Igor E. [Prism Computational Sciences, Inc.; Woodruff, P. R. [Prism Computational Sciences, Inc.; Wu, Linchun [HyperV Technologies Corp.; Messer, Sarah J. [HyperV Technologies Corp.

    2014-05-20

    Radiation processes play an important role in the study of both fast ignition and other inertial confinement schemes, such as plasma jet driven magneto-inertial fusion, both in their effect on energy balance, and in generating diagnostic signals. In the latter case, warm and hot dense matter may be produced by the convergence of a plasma shell formed by the merging of an assembly of high Mach number plasma jets. This innovative approach has the potential advantage of creating matter of high energy densities in voluminous amount compared with high power lasers or particle beams. An important application of this technology is as a plasma liner for the flux compression of magnetized plasma to create ultra-high magnetic fields and burning plasmas. HyperV Technologies Corp. has been developing plasma jet accelerator technology in both coaxial and linear railgun geometries to produce plasma jets of sufficient mass, density, and velocity to create such imploding plasma liners. An enabling tool for the development of this technology is the ability to model the plasma dynamics, not only in the accelerators themselves, but also in the resulting magnetized target plasma and within the merging/interacting plasma jets during transport to the target. Welch pioneered numerical modeling of such plasmas (including for fast ignition) using the LSP simulation code. Lsp is an electromagnetic, parallelized, plasma simulation code under development since 1995. It has a number of innovative features making it uniquely suitable for modeling high energy density plasmas including a hybrid fluid model for electrons that allows electrons in dense plasmas to be modeled with a kinetic or fluid treatment as appropriate. In addition to in-house use at Voss Scientific, several groups carrying out research in Fast Ignition (LLNL, SNL, UCSD, AWE (UK), and Imperial College (UK)) also use LSP. A collaborative team consisting of HyperV Technologies Corp., Voss Scientific LLC, FAR-TECH, Inc., Prism

  9. Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition

    Energy Technology Data Exchange (ETDEWEB)

    Witherspoon, F. Douglas [HyperV Technologies Corp.; Welch, Dale R. [Voss Scientific, LLC; Thompson, John R. [FAR-TECH, Inc.; MacFarlane, Joeseph J. [Prism Computational Sciences Inc.; Phillips, Michael W. [Advanced Energy Systems, Inc.; Bruner, Nicki [Voss Scientific, LLC; Mostrom, Chris [Voss Scientific, LLC; Thoma, Carsten [Voss Scientific, LLC; Clark, R. E. [Voss Scientific, LLC; Bogatu, Nick [FAR-TECH, Inc.; Kim, Jin-Soo [FAR-TECH, Inc.; Galkin, Sergei [FAR-TECH, Inc.; Golovkin, Igor E. [Prism Computational Sciences, Inc.; Woodruff, P. R. [Prism Computational Sciences, Inc.; Wu, Linchun [HyperV Technologies Corp.; Messer, Sarah J. [HyperV Technologies Corp.

    2014-05-20

    Radiation processes play an important role in the study of both fast ignition and other inertial confinement schemes, such as plasma jet driven magneto-inertial fusion, both in their effect on energy balance, and in generating diagnostic signals. In the latter case, warm and hot dense matter may be produced by the convergence of a plasma shell formed by the merging of an assembly of high Mach number plasma jets. This innovative approach has the potential advantage of creating matter of high energy densities in voluminous amount compared with high power lasers or particle beams. An important application of this technology is as a plasma liner for the flux compression of magnetized plasma to create ultra-high magnetic fields and burning plasmas. HyperV Technologies Corp. has been developing plasma jet accelerator technology in both coaxial and linear railgun geometries to produce plasma jets of sufficient mass, density, and velocity to create such imploding plasma liners. An enabling tool for the development of this technology is the ability to model the plasma dynamics, not only in the accelerators themselves, but also in the resulting magnetized target plasma and within the merging/interacting plasma jets during transport to the target. Welch pioneered numerical modeling of such plasmas (including for fast ignition) using the LSP simulation code. Lsp is an electromagnetic, parallelized, plasma simulation code under development since 1995. It has a number of innovative features making it uniquely suitable for modeling high energy density plasmas including a hybrid fluid model for electrons that allows electrons in dense plasmas to be modeled with a kinetic or fluid treatment as appropriate. In addition to in-house use at Voss Scientific, several groups carrying out research in Fast Ignition (LLNL, SNL, UCSD, AWE (UK), and Imperial College (UK)) also use LSP. A collaborative team consisting of HyperV Technologies Corp., Voss Scientific LLC, FAR-TECH, Inc., Prism

  10. Microfluidic CODES: a scalable multiplexed electronic sensor for orthogonal detection of particles in microfluidic channels.

    Science.gov (United States)

    Liu, Ruxiu; Wang, Ningquan; Kamili, Farhan; Sarioglu, A Fatih

    2016-04-21

    Numerous biophysical and biochemical assays rely on spatial manipulation of particles/cells as they are processed on lab-on-a-chip devices. Analysis of spatially distributed particles on these devices typically requires microscopy negating the cost and size advantages of microfluidic assays. In this paper, we introduce a scalable electronic sensor technology, called microfluidic CODES, that utilizes resistive pulse sensing to orthogonally detect particles in multiple microfluidic channels from a single electrical output. Combining the techniques from telecommunications and microfluidics, we route three coplanar electrodes on a glass substrate to create multiple Coulter counters producing distinct orthogonal digital codes when they detect particles. We specifically design a digital code set using the mathematical principles of Code Division Multiple Access (CDMA) telecommunication networks and can decode signals from different microfluidic channels with >90% accuracy through computation even if these signals overlap. As a proof of principle, we use this technology to detect human ovarian cancer cells in four different microfluidic channels fabricated using soft lithography. Microfluidic CODES offers a simple, all-electronic interface that is well suited to create integrated, low-cost lab-on-a-chip devices for cell- or particle-based assays in resource-limited settings.
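
    The decoding idea, distinct orthogonal codes whose contributions can be separated by correlation, can be illustrated with a synchronous toy example using Hadamard rows as stand-in codes. The actual device uses CDMA-style code design and decodes asynchronous, partially overlapping pulses, which this sketch does not attempt.

      import numpy as np
      from scipy.linalg import hadamard

      codes = hadamard(8)[1:5]                       # four mutually orthogonal +/-1 sequences, one per channel
      amplitudes = np.array([1.0, 0.0, 0.7, 0.0])    # particles present in channels 0 and 2
      signal = amplitudes @ codes                    # superimposed single electrical output

      recovered = signal @ codes.T / codes.shape[1]  # correlate with each channel's code
      print(recovered)                               # ~[1.0, 0.0, 0.7, 0.0]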

  11. Polystyrene-Core-Silica-Shell Hybrid Particles Containing Gold and Magnetic Nanoparticles.

    Science.gov (United States)

    Tian, Jia; Vana, Philipp

    2016-02-18

    Polystyrene-core-silica-shell hybrid particles were synthesized by combining the self-assembly of nanoparticles and the polymer with a silica coating strategy. The core-shell hybrid particles are composed of gold-nanoparticle-decorated polystyrene (PS-AuNP) colloids as the core and silica particles as the shell. The PS-AuNP colloids were generated by the self-assembly of the PS-grafted AuNPs. The silica coating improved the thermal stability and dispersibility of the AuNPs. By removing the "free" PS of the core, hollow particles with a hydrophobic cage having an AuNP corona and an inert silica shell were obtained. Fe3O4 nanoparticles were also encapsulated in the core by the same strategy, resulting in magnetic core-shell hybrid particles. These particles have potential applications in biomolecular separation and high-temperature catalysis and as nanoreactors.

  12. Beam-beam simulation code BBSIM for particle accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyung J.; Sen, Tanaji; /Fermilab

    2011-01-01

    A highly efficient, fully parallelized, six-dimensional tracking model for simulating interactions of colliding hadron beams in high energy ring colliders and simulating schemes for mitigating their effects is described. The model uses the weak-strong approximation for calculating the head-on interactions when the test beam has lower intensity than the other beam, a look-up table for the efficient calculation of long-range beam-beam forces, and a self-consistent Poisson solver when both beams have comparable intensities. A performance test of the model in a parallel environment is presented. The code is used to calculate beam emittance and beam loss in the Tevatron at Fermilab and compared with measurements. They also present results from studies of two schemes proposed to compensate the beam-beam interactions: (a) the compensation of long-range interactions in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven and the Large Hadron Collider (LHC) at CERN with a current carrying wire, (b) the use of a low energy electron beam to compensate the head-on interactions in RHIC.
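
    A small sketch of the weak-strong head-on model mentioned above, for the special case of a round Gaussian strong beam, where the thin-lens kick has a well-known closed form. The particle and beam parameters in the usage line are illustrative LHC-like numbers, not values taken from the paper.

      import numpy as np

      def beam_beam_kick(x, y, N, r0, gamma, sigma):
          # Thin-lens weak-strong kick from a round Gaussian beam of N particles and rms size sigma.
          r2 = x * x + y * y
          r2_safe = np.where(r2 > 0.0, r2, 1.0)                 # avoid 0/0 at the beam centre
          factor = np.where(r2 > 0.0,
                            (1.0 - np.exp(-r2_safe / (2.0 * sigma**2))) / r2_safe,
                            1.0 / (2.0 * sigma**2))
          k = -2.0 * N * r0 / gamma
          return k * factor * x, k * factor * y                 # angle kicks (dx', dy')

      # illustrative numbers: N = 1.15e11 protons, classical proton radius, gamma ~ 7460, sigma ~ 17 um
      dxp, dyp = beam_beam_kick(1.0e-5, 0.0, 1.15e11, 1.535e-18, 7460.0, 16.7e-6)
      print(dxp, dyp)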

  13. Beam-beam simulation code BBSIM for particle accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyung J.; Sen, Tanaji; /Fermilab

    2011-01-01

    A highly efficient, fully parallelized, six-dimensional tracking model for simulating interactions of colliding hadron beams in high energy ring colliders and simulating schemes for mitigating their effects is described. The model uses the weak-strong approximation for calculating the head-on interactions when the test beam has lower intensity than the other beam, a look-up table for the efficient calculation of long-range beam-beam forces, and a self-consistent Poisson solver when both beams have comparable intensities. A performance test of the model in a parallel environment is presented. The code is used to calculate beam emittance and beam loss in the Tevatron at Fermilab and compared with measurements. They also present results from studies of two schemes proposed to compensate the beam-beam interactions: (a) the compensation of long-range interactions in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven and the Large Hadron Collider (LHC) at CERN with a current carrying wire, (b) the use of a low energy electron beam to compensate the head-on interactions in RHIC.

  14. The Hybrid Detailed / Statistical Opacity Code SCO-RCG: New Developments and Applications

    CERN Document Server

    Pain, Jean-Christophe; Porcherot, Quentin; Blenski, Thomas

    2013-01-01

    We present the hybrid opacity code SCO-RCG which combines statistical approaches with fine-structure calculations. Radial integrals needed for the computation of detailed transition arrays are calculated by the code SCO (Super-configuration Code for Opacity), which calculates atomic structure at finite temperature and density, taking into account plasma effects on the wave-functions. Levels and spectral lines are then computed by an adapted RCG routine of R. D. Cowan. SCO-RCG now includes the Partially Resolved Transition Array model, which allows one to replace a complex transition array by a small-scale detailed calculation preserving energy and variance of the genuine transition array and yielding improved high-order moments. An approximate method for studying the impact of strong magnetic field on opacity and emissivity was also recently implemented.

  15. Cracking the particle code of the universe the hunt for the Higgs boson

    CERN Document Server

    Moffat, John W

    2014-01-01

    Among the current books that celebrate the discovery of the Higgs boson, Cracking the Particle Code of the Universe is a rare objective treatment of the subject. The book is an insider's behind-the-scenes look at the arcane, fascinating world of theoretical and experimental particle physics leading up to the recent discovery of a new boson. If the new boson is indeed the Higgs particle, its discovery represents an important milestone in the history of particle physics. However, despite the pressure to award Nobel Prizes to physicists associated with the Higgs boson, John Moffat argues that the

  16. Bio-bar-code functionalized magnetic nanoparticle label for ultrasensitive flow injection chemiluminescence detection of DNA hybridization.

    Science.gov (United States)

    Bi, Sai; Zhou, Hong; Zhang, Shusheng

    2009-10-07

    A signal amplification strategy based on bio-bar-code functionalized magnetic nanoparticles as labels holds promise to improve the sensitivity and detection limit of the detection of DNA hybridization and single-nucleotide polymorphisms by flow injection chemiluminescence assays.

  17. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    Science.gov (United States)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP parallelized C++ and OpenCL and includes octree based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.

  18. Acceleration of a Particle-in-Cell Code for Space Plasma Simulations with OpenACC

    Science.gov (United States)

    Peng, Ivy Bo; Markidis, Stefano; Vaivads, Andris; Vencels, Juris; Deca, Jan; Lapenta, Giovanni; Hart, Alistair; Laure, Erwin

    2015-04-01

    We simulate space plasmas with the Particle-in-Cell (PIC) method, which uses computational particles to mimic electrons and protons in the solar wind and in the Earth's magnetosphere. The magnetic and electric fields are computed by solving Maxwell's equations on a computational grid. Each PIC simulation step has four major phases: interpolation of fields to particles, updating the location and velocity of each particle, interpolation of particles to the grid, and solving Maxwell's equations on the grid. We use the iPIC3D code, which was implemented in C++ using both MPI and OpenMP, for our case study. As of November 2014, heterogeneous systems using hardware accelerators such as Graphics Processing Units (GPUs) and Many Integrated Core (MIC) coprocessors for high performance computing continue to grow in the list of the 500 most powerful supercomputers worldwide. Scientific applications for numerical simulations need to adapt to using accelerators to achieve portability and scalability on the coming exascale systems. In our work, we conduct a case study of using OpenACC to offload the computation-intensive parts, the particle mover and the interpolation of particles to the grid, in the massively parallel Particle-in-Cell simulation code iPIC3D to multi-GPU systems. We use MPI for inter-node communication, for halo exchange and for communicating particles. We identify the parts most suitable for GPU acceleration by profiling with the Cray Performance Analysis Tool (CrayPAT). We implemented manual deep copy to address the challenges of porting C++ classes to the GPU. We document the necessary changes in the existing algorithms to adapt them for GPU computation. We present the challenges and findings as well as our methodology for porting a Particle-in-Cell code for space applications to multi-GPU systems using OpenACC as follows: we profile the iPIC3D code with CrayPAT and identify

  19. Particle acceleration in tangential discontinuities by lower hybrid waves

    Directory of Open Access Journals (Sweden)

    D. Spicer

    2002-01-01

    Full Text Available We consider the role that lower-hybrid wave turbulence plays in providing the necessary resistivity at collisionless reconnection sites. The mechanism for generating the waves is considered to be the lower-hybrid drift instability. We find that the level of the wave amplitude is sufficient to heat and accelerate both electrons and ions.

  20. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    Energy Technology Data Exchange (ETDEWEB)

    PLIMPTON,STEVEN J.; SEIDEL,DAVID B.; PASIK,MICHAEL F.; COATS,REBECCA S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.

  1. Blind Decorrelating Detection Based on Particle Swarm Optimization under Spreading Code Mismatch

    Institute of Scientific and Technical Information of China (English)

    Jhih-Chung Chang; Chih-Chang Shen

    2014-01-01

    A way of resolving spreading code mismatches in blind multiuser detection with a particle swarm optimization (PSO) approach is proposed. It has been shown that the PSO algorithm incorporating the linear system of the decorrelating detector, which is termed as decorrelating PSO (DPSO), can significantly improve the bit error rate (BER) and the system capacity. As the code mismatch occurs, the output BER performance is vulnerable to degradation for DPSO. With a blind decorrelating scheme, the proposed blind DPSO (BDPSO) offers more robust capabilities over existing DPSO under code mismatch scenarios.

  2. A parallelized particle tracing code for CFD simulations in Earth Sciences

    OpenAIRE

    Vlad Constantin Manea; Marina Manea; Mihai Pomeran; Lucian Besutiu; Luminita Zlagnean

    2012-01-01

    The problem of convective flows in a highly viscous fluid represents a common research direction in Earth Sciences. In order to trace the convective motion of the fluid material, a source of passive particles (or tracers) that flow at the local convection velocity and do not affect the pattern of flow is commonly used. A parallelized tracer code is presented that uses passive and weightless particles, with their positions computed from their displacement during a small time interval at the vel...
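
    The core operation described above (moving passive tracers with the local velocity over small time steps) can be sketched in a few lines of Python. The prescribed convection-cell velocity field, the time step and the number of tracers are illustrative placeholders, not details of the parallel code.

      import numpy as np

      def velocity(px, py):
          # Prescribed incompressible convection cell (stand-in for the CFD velocity field).
          u = np.sin(np.pi * px) * np.cos(np.pi * py)
          v = -np.cos(np.pi * px) * np.sin(np.pi * py)
          return u, v

      def advect(px, py, dt, steps):
          # Forward-Euler displacement of passive, weightless tracers over small time intervals.
          for _ in range(steps):
              u, v = velocity(px, py)
              px, py = px + dt * u, py + dt * v
          return px, py

      px, py = np.random.rand(1000), np.random.rand(1000)   # tracer seed positions in the unit box
      px, py = advect(px, py, dt=1e-3, steps=500)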

  3. Production Level CFD Code Acceleration for Hybrid Many-Core Architectures

    Science.gov (United States)

    Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.

    2012-01-01

    In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.

  4. Characterizations of Polystyrene-Based Hybrid Particles Containing Hydrophobic Mg(OH)2 Powder and Composites Fabricated by Employing Resultant Hybrid Particles

    Directory of Open Access Journals (Sweden)

    Shuichi Kimura

    2007-01-01

    unchanged, even when the ST-1 powder content increased from 10 to 50 phr. Furthermore, a composite fabricated by employing the hybrid particles achieved a homogeneous distribution of ST-1 powder and showed a higher oxygen index than that of a composite fabricated by directly mixing PS pellets and ST-1 powder.

  5. Mechanical and tribological studies on nano particles reinforced hybrid aluminum based composite

    Directory of Open Access Journals (Sweden)

    Muley Aniruddha V.

    2015-01-01

    Full Text Available Hybrid metal matrix composites are a new class of materials owing to the better mechanical properties that can be achieved through proper selection and combination of materials. The work reported in this paper is based on the fabrication of hybrid composites using nanoparticles as reinforcements. The hybrid composites were fabricated by reinforcing them with nano-sized SiC and Al2O3 particles in order to study the mechanical and tribological properties of these enhanced materials. A stir casting method was used to obtain the hybrid composites. LM 6 aluminum alloy was used as the matrix material. The results show an increase in hardness as well as in the ultimate tensile strength of the composites with a small wt.% of nano-sized hybrid reinforcements. The composites produced also exhibit better tribological properties.

  6. Controlled isotropic or anisotropic nanoscale growth of coordination polymers: formation of hybrid coordination polymer particles.

    Science.gov (United States)

    Lee, Hee Jung; Cho, Yea Jin; Cho, Won; Oh, Moonhyun

    2013-01-22

    The ability to fabricate multicompositional hybrid materials in a precise and controlled manner is one of the primary goals of modern materials science research. In addition, an understanding of the phenomena associated with the systematic growth of one material on another can facilitate the evolution of multifunctional hybrid materials. Here, we demonstrate precise manipulation of the isotropic and/or anisotropic nanoscale growth of various coordination polymers (CPs) to obtain heterocompositional hybrid coordination polymer particles. Chemical composition analyses conducted at every growth step reveal the formation of accurately assembled hybrid nanoscale CPs, and microscopy images are used to examine the morphology of the particles and visualize the hybrid structures. The dissimilar growth behavior, that is, growth in an isotropic or anisotropic fashion, is found to be dependent on the size of the metal ions involved within the CPs.

  7. Using k-alpha emission to determine fast electron spectra using the Hybrid code ZEPHYROS

    CERN Document Server

    White, Thomas; Gregori, Gianluca

    2014-01-01

    A high intensity laser-solid interaction invariably drives a non-thermal fast electron current through the target; however, characterizing these fast electron distributions can prove difficult. An understanding of how these electrons propagate through dense materials is of fundamental interest and has applications relevant to fast ignition schemes and ion acceleration. Here, we utilize an upgraded version of the Hybrid code ZEPHYROS to demonstrate how the resulting k-alpha emission from such an interaction can be used as a diagnostic to obtain the characteristic temperature, divergence and total energy of the fast electron population.

  8. Rate-prediction structure complexity analysis for multi-view video coding using hybrid genetic algorithms

    Science.gov (United States)

    Liu, Yebin; Dai, Qionghai; You, Zhixiang; Xu, Wenli

    2007-01-01

    Efficient exploitation of the temporal and inter-view correlation is critical to multi-view video coding (MVC), and the key lies in the design of the prediction chain structure according to the various patterns of correlation. In this paper, we propose a novel prediction structure model to design optimal MVC coding schemes, along with an in-depth trade-off analysis between compression efficiency and prediction structure complexity for certain standard functionalities. Focusing on the representation of the entire set of possible chain structures rather than certain typical ones, the proposed model can give efficient MVC schemes that adaptively vary with the requirements on structure complexity and the video source characteristics (the number of views, the degrees of temporal and inter-view correlation). To handle the large-scale problem in model optimization, we deploy a hybrid genetic algorithm, which yields satisfactory results in the simulations.

  9. A Network Coding Based Hybrid ARQ Protocol for Underwater Acoustic Sensor Networks.

    Science.gov (United States)

    Wang, Hao; Wang, Shilian; Zhang, Eryang; Zou, Jianbin

    2016-01-01

    Underwater Acoustic Sensor Networks (UASNs) have attracted increasing interest in recent years due to their extensive commercial and military applications. However, the harsh underwater channel causes many challenges for the design of reliable underwater data transport protocol. In this paper, we propose an energy efficient data transport protocol based on network coding and hybrid automatic repeat request (NCHARQ) to ensure reliability, efficiency and availability in UASNs. Moreover, an adaptive window length estimation algorithm is designed to optimize the throughput and energy consumption tradeoff. The algorithm can adaptively change the code rate and can be insensitive to the environment change. Extensive simulations and analysis show that NCHARQ significantly reduces energy consumption with short end-to-end delay.
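
    The protocol itself is not specified in the abstract beyond combining network coding with HARQ, but its basic building block, recovering a lost packet in a window from an XOR-coded repair packet, can be sketched as follows; the packet contents and window size are illustrative.

      import numpy as np

      def xor_encode(packets):
          # Network-coded repair packet: bitwise XOR of all packets in the current window.
          coded = np.zeros_like(packets[0])
          for p in packets:
              coded ^= p
          return coded

      def recover(received, coded):
          # Recover a single lost packet from the repair packet and the packets that did arrive.
          missing = coded.copy()
          for p in received:
              missing ^= p
          return missing

      window = [np.frombuffer(s, dtype=np.uint8).copy() for s in (b"pkt0", b"pkt1", b"pkt2")]
      repair = xor_encode(window)                                # sent (or retransmitted) by the source
      print(recover([window[0], window[2]], repair).tobytes())   # recovers b'pkt1'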

  10. A Network Coding Based Hybrid ARQ Protocol for Underwater Acoustic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2016-09-01

    Full Text Available Underwater Acoustic Sensor Networks (UASNs) have attracted increasing interest in recent years due to their extensive commercial and military applications. However, the harsh underwater channel causes many challenges for the design of reliable underwater data transport protocol. In this paper, we propose an energy efficient data transport protocol based on network coding and hybrid automatic repeat request (NCHARQ) to ensure reliability, efficiency and availability in UASNs. Moreover, an adaptive window length estimation algorithm is designed to optimize the throughput and energy consumption tradeoff. The algorithm can adaptively change the code rate and can be insensitive to the environment change. Extensive simulations and analysis show that NCHARQ significantly reduces energy consumption with short end-to-end delay.

  11. A Network Coding Based Hybrid ARQ Protocol for Underwater Acoustic Sensor Networks

    Science.gov (United States)

    Wang, Hao; Wang, Shilian; Zhang, Eryang; Zou, Jianbin

    2016-01-01

    Underwater Acoustic Sensor Networks (UASNs) have attracted increasing interest in recent years due to their extensive commercial and military applications. However, the harsh underwater channel causes many challenges for the design of reliable underwater data transport protocol. In this paper, we propose an energy efficient data transport protocol based on network coding and hybrid automatic repeat request (NCHARQ) to ensure reliability, efficiency and availability in UASNs. Moreover, an adaptive window length estimation algorithm is designed to optimize the throughput and energy consumption tradeoff. The algorithm can adaptively change the code rate and can be insensitive to the environment change. Extensive simulations and analysis show that NCHARQ significantly reduces energy consumption with short end-to-end delay. PMID:27618044

  12. Hybrid Codes Needed for Coordination over the Point-to-Point Channel

    CERN Document Server

    Cuff, Paul

    2011-01-01

    We consider a new fundamental question regarding the point-to-point memoryless channel. The source-channel separation theorem indicates that random codebook construction for lossy source compression and channel coding can be independently constructed and paired to achieve optimal performance for coordinating a source sequence with a reconstruction sequence. But what if we want the channel input to also be coordinated with the source and reconstruction? Such situations arise in network communication problems, where the correlation inherent in the information sources can be used to correlate channel inputs. Hybrid codes have been shown to be useful in a number of network communication problems. In this work we highlight their advantages over purely digital codebook construction by applying them to the point-to-point setting, coordinating both the channel input and the reconstruction with the source.

  13. A fully parallel, high precision, N-body code running on hybrid computing platforms

    CERN Document Server

    Capuzzo-Dolcetta, R; Punzo, D

    2012-01-01

    We present a new implementation of the numerical integration of the classical, gravitational, N-body problem based on a high order Hermite integration scheme with block time steps, with a direct evaluation of the particle-particle forces. The main innovation of this code (called HiGPUs) is its full parallelization, exploiting both OpenMP and MPI for the multicore Central Processing Units as well as either Compute Unified Device Architecture (CUDA) or OpenCL for the hosted Graphics Processing Units. We tested both performance and accuracy of the code using up to 256 GPUs in the supercomputer IBM iDataPlex DX360M3 Linux Infiniband Cluster provided by the Italian supercomputing consortium CINECA, for values of N up to 8 million. We were able to follow the evolution of a system of 8 million bodies for a few crossing times, a task previously unreached by direct summation codes. The code is freely available to the scientific community.
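
    For readers unfamiliar with the Hermite scheme named above, the following Python sketch shows a shared-timestep fourth-order Hermite predictor-corrector step with direct O(N^2) evaluation of accelerations and jerks. The production code uses block time steps and GPUs; the softening, units and particle numbers here are illustrative.

      import numpy as np

      def acc_jerk(x, v, m, eps2=1e-6):
          # Direct-summation accelerations and jerks, G = 1 units, Plummer softening eps2.
          a, j = np.zeros_like(x), np.zeros_like(x)
          for i in range(len(m)):
              dx, dv = x - x[i], v - v[i]
              r2 = (dx * dx).sum(axis=1) + eps2
              r3 = r2 * np.sqrt(r2)
              rv = (dx * dv).sum(axis=1)
              w = m / r3
              w[i] = 0.0                                 # no self-interaction
              a[i] = (w[:, None] * dx).sum(axis=0)
              j[i] = (w[:, None] * (dv - 3.0 * (rv / r2)[:, None] * dx)).sum(axis=0)
          return a, j

      def hermite_step(x, v, m, dt):
          # One fourth-order Hermite predictor-corrector step (shared time step).
          a0, j0 = acc_jerk(x, v, m)
          xp = x + v * dt + a0 * dt**2 / 2.0 + j0 * dt**3 / 6.0
          vp = v + a0 * dt + j0 * dt**2 / 2.0
          a1, j1 = acc_jerk(xp, vp, m)
          vc = v + (a0 + a1) * dt / 2.0 + (j0 - j1) * dt**2 / 12.0
          xc = x + (v + vc) * dt / 2.0 + (a0 - a1) * dt**2 / 12.0
          return xc, vc

      rng = np.random.default_rng(0)
      x, v = rng.standard_normal((256, 3)), 0.1 * rng.standard_normal((256, 3))
      m = np.full(256, 1.0 / 256)
      for _ in range(10):
          x, v = hermite_step(x, v, m, dt=1e-3)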

  14. Nonlinear Simulation of Alfven Eigenmodes driven by Energetic Particles: Comparison between HMGC and TAEFL Codes

    Science.gov (United States)

    Bierwage, Andreas; Spong, Donald A.

    2009-05-01

    Hybrid-MHD-Gyrokinetic Code (HMGC) [1] and the gyrofluid code TAEFL [2,3] are used for nonlinear simulation of Alfven Eigenmodes in Tokamak plasma. We compare results obtained in two cases: (I) a case designed for cross-code benchmark of TAE excitation; (II) a case based on a dedicated DIII-D shot #132707 where RSAE and TAE activity is observed. Differences between the numerical simulation results are discussed and future directions are outlined. [1] S. Briguglio, G. Vlad, F. Zonca and C. Kar, Phys. Plasmas 2 (1995) 3711. [2] D.A. Spong, B.A. Carreras and C.L. Hedrick, Phys. Fluids B4 (1992) 3316. [3] D.A. Spong, B.A. Carreras and C.L. Hedrick, Phys. Plasmas 1 (1994) 1503.

  15. Lower Hybrid Current Drive and Heating for the National Transport Code Collaboration

    Science.gov (United States)

    Ignat, D. W.; Jardin, S. C.; McCune, D. C.; Valeo, E. J.

    2000-10-01

    The Lower Hybrid Simulation Code (LSC) was originally written as a subroutine of the Toroidal Simulation Code TSC (Jardin, Pomphrey, Kessel, et al.) and subsequently ported to a subroutine of TRANSP. Modifications have been undertaken to simplify the use of LSC both as a callable module and independently of larger transport codes, and to improve the documentation, with the goal of installing LSC in the NTCC library. The physical model, which includes ray tracing from a Brambilla spectrum, 1D Fokker-Planck development of the electron distribution, the Karney-Fisch treatment of the electric field, heuristic diffusion of current and power, and wall scattering, has not been changed. The computational approach is to remove numerical parameters such as step size and number of iterations from user control, while making parts of the code extremely stable under varied conditions. Essential graphics are now output as gnuplot commands and data for off-line post processing, but the original outputs to sglib are retained as an option. Examples of output are shown.

  16. R-Matrix Codes for Charged-particle Induced Reactionsin the Resolved Resonance Region

    Energy Technology Data Exchange (ETDEWEB)

    Leeb, Helmut [Technical Univ. of Wien, Vienna (Austria); Dimitriou, Paraskevi [Intl Atomic Energy Agency (IAEA), Vienna (Austria); Thompson, Ian J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-01-01

    A Consultant’s Meeting was held at the IAEA Headquarters, from 5 to 7 December 2016, to discuss the status of R-matrix codes currently used in calculations of charged-particle induced reaction cross sections at low energies. The meeting was a follow-up to the R-matrix Codes meeting held in December 2015, and served the purpose of monitoring progress in: the development of a translation code to enable exchange of input/output parameters between the various codes in different formats, fitting procedures and treatment of uncertainties, the evaluation methodology, and finally dissemination. The details of the presentations and technical discussions, as well as additional actions that were proposed to achieve all the goals of the meeting are summarized in this report.

  17. A constrained particle dynamics for continuum-particle hybrid method in micro-and nano-fluidics

    Institute of Scientific and Technical Information of China (English)

    Jia Cui; GuoWei He; Dewei Qi

    2006-01-01

    A hybrid method of continuum and particle dynamics is developed for micro- and nano-fluidics, where fluids are described by molecular dynamics (MD) in one domain and by the Navier-Stokes (NS) equations in another domain. In order to ensure the continuity of momentum flux, the continuum and molecular dynamics in the overlap domain are coupled through a constrained particle dynamics. The constrained particle dynamics is constructed with a virtual damping force and a virtual added mass force. Sudden-start Couette flows with either no-slip or slip boundary conditions are used to test the hybrid method. It is shown that the results obtained are quantitatively in agreement with the analytical solutions under the no-slip boundary conditions and with the full MD simulations under the slip boundary conditions.
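
    The exact form of the virtual forces is not given in the abstract; the fragment below sketches only the damping part under an assumed form, in which the mean particle velocity of an overlap cell is relaxed toward the local continuum (NS) velocity. The relaxation rate xi and all names are illustrative, and the added-mass term is omitted.

      import numpy as np

      def constraint_force(v_particles, m, u_ns, xi=5.0):
          # Assumed virtual damping force: relax the cell-averaged particle velocity toward
          # the continuum velocity u_ns; returns one force vector per particle in the cell.
          v_mean = v_particles.mean(axis=0)
          return -xi * m[:, None] * (v_mean - u_ns)

      # toy usage for the particles currently inside one overlap cell:
      rng = np.random.default_rng(0)
      v_cell = rng.standard_normal((50, 3))
      f_extra = constraint_force(v_cell, np.ones(50), u_ns=np.array([1.0, 0.0, 0.0]))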

  18. Hybrid optical-digital encryption system based on wavefront coding paradigm

    Science.gov (United States)

    Konnik, Mikhail V.

    2012-04-01

    Wavefront coding is widely used in optical systems to compensate for aberrations and increase the depth of field. This paper presents experimental results on the application of the wavefront coding paradigm to data encryption. We use a synthesised diffractive optical element (DOE) to deliberately introduce a phase distortion during the image registration process to encode the acquired image. In this case, an optical convolution of the input image with the point spread function (PSF) of the DOE is registered. The encryption is performed optically and is therefore fast and secure. Since the introduced distortion is the same across the image, the decryption is performed digitally using deconvolution methods. However, due to noise and the finite accuracy of the photosensor, the reconstructed image is degraded but still readable. The experimental results, which are presented in this paper, indicate that the proposed hybrid optical-digital system can be implemented as a portable device using inexpensive off-the-shelf components. We present the results of optical encryption and digital restoration with quantitative estimations of the image quality. Details of the hardware optical implementation of the hybrid optical-digital encryption system are discussed.
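
    A compact numerical sketch of the encode/decode chain described above: encoding as convolution of the image with a PSF (done optically in the paper), followed by digital Wiener deconvolution of the noisy, encoded frame. The random image, the 8x8 PSF and the regularization constant are placeholders.

      import numpy as np

      rng = np.random.default_rng(0)
      img = rng.random((64, 64))                           # stand-in for the acquired scene
      psf = np.zeros((64, 64))
      psf[28:36, 28:36] = rng.random((8, 8))
      psf /= psf.sum()

      H = np.fft.fft2(np.fft.ifftshift(psf))               # transfer function of the DOE
      encoded = np.real(np.fft.ifft2(np.fft.fft2(img) * H))           # "optical" convolution
      encoded += 0.01 * rng.standard_normal(encoded.shape)            # photosensor noise

      k = 1e-2                                                         # Wiener regularization
      decoded = np.real(np.fft.ifft2(np.fft.fft2(encoded) * np.conj(H) / (np.abs(H)**2 + k)))
      print(np.abs(decoded - img).mean())                  # residual reconstruction error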

  19. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    Science.gov (United States)

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-08-01

    With prevalent attacks on communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants, no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
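
    The classical ingredient mentioned above, threshold sharing through Lagrange interpolation, can be sketched with a standard Shamir-style construction over a prime field. The prime, the threshold and the example secret are illustrative, and the quantum OAM and Huffman-Fibonacci parts of the scheme are not modelled here.

      import random

      P = 2**31 - 1                                  # prime modulus for the share arithmetic

      def make_shares(secret, t, n, rng):
          # Degree t-1 polynomial with the secret as constant term, evaluated at x = 1..n.
          coeffs = [secret] + [rng.randrange(1, P) for _ in range(t - 1)]
          return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                  for x in range(1, n + 1)]

      def reconstruct(shares):
          # Lagrange interpolation at x = 0 recovers the constant term (the secret).
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num = den = 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = num * (-xj) % P
                      den = den * (xi - xj) % P
              secret = (secret + yi * num * pow(den, P - 2, P)) % P
          return secret

      shares = make_shares(123456789, t=3, n=5, rng=random.Random(0))
      print(reconstruct(shares[:3]))                 # any 3 of the 5 shares recover 123456789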

  20. Validation and benchmarking of two particle-in-cell codes for a glow discharge

    Science.gov (United States)

    Carlsson, Johan; Khrabrov, Alexander; Kaganovich, Igor; Sommerer, Timothy; Keating, David

    2017-01-01

    The two particle-in-cell codes EDIPIC and LSP are benchmarked and validated for a parallel-plate glow discharge in helium, in which the axial electric field had been carefully measured, primarily to investigate and improve the fidelity of their collision models. The scattering anisotropy of electron-impact ionization, as well as the value of the secondary-electron emission yield, are not well known in this case. The experimental uncertainty for the emission yield corresponds to a factor of two variation in the cathode current. If the emission yield is tuned to make the cathode current computed by each code match the experiment, the computed electric fields are in excellent agreement with each other, and within about 10% of the experimental value. The non-monotonic variation of the width of the cathode fall with the applied voltage seen in the experiment is reproduced by both codes. The electron temperature in the negative glow is within experimental error bars for both codes, but the density of slow trapped electrons is underestimated. A more detailed code comparison done for several synthetic cases of electron-beam injection into helium gas shows that the codes are in excellent agreement for ionization rate, as well as for elastic and excitation collisions with isotropic scattering pattern. The remaining significant discrepancies between the two codes are due to differences in their electron binary-collision models, and for anisotropic scattering due to elastic and excitation collisions.

  1. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    Science.gov (United States)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.

    2016-05-01

    Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low cost for new hardware. Methods: We have implemented the SPH equations to model gases, liquids, and elastic and plastic solid bodies, and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maßl2v]. We do not support the use of the code for military purposes.
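
    As a reminder of the basic SPH building block that codes such as this implement (far more efficiently) on the GPU, the following NumPy sketch evaluates the density summation with the standard 3D cubic-spline kernel by brute force; the particle positions, masses, and smoothing length are placeholders.

        import numpy as np

        def w_cubic(r, h):
            """Standard 3D cubic-spline SPH kernel with support radius 2h."""
            q = r / h
            sigma = 1.0 / (np.pi * h ** 3)
            w = np.where(q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
                np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
            return sigma * w

        def sph_density(pos, mass, h):
            """Brute-force density summation rho_i = sum_j m_j W(|r_i - r_j|, h)."""
            diff = pos[:, None, :] - pos[None, :, :]
            r = np.linalg.norm(diff, axis=-1)
            return (mass[None, :] * w_cubic(r, h)).sum(axis=1)

        rng = np.random.default_rng(1)
        pos = rng.random((500, 3))                        # 500 particles in a unit box
        rho = sph_density(pos, np.full(500, 1.0 / 500), h=0.1)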

  2. Load balancing in highly parallel processing of Monte Carlo code for particle transport

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Takemiya, Hiroshi [Japan Atomic Energy Research Inst., Tokyo (Japan); Kawasaki, Takuji [Fuji Research Institute Corporation, Tokyo (Japan)

    2001-01-01

    In parallel processing of Monte Carlo (MC) codes for neutron, photon and electron transport problems, particle histories are assigned to processors, making use of the independence of the calculation for each particle. Although the main part of an MC code can easily be parallelized by this method, it is necessary, and practically difficult, to optimize the code with respect to load balancing in order to attain a high speedup ratio in highly parallel processing. In fact, the speedup ratio in the case of 128 processors remains at only about one hundred when using the test bed for the performance evaluation. Through the parallel processing of the MCNP code, which is widely used in the nuclear field, it is shown that it is difficult to attain high performance by static load balancing, especially in neutron transport problems, and that a load balancing method which dynamically changes the number of assigned particles, minimizing the sum of the computational and communication costs, overcomes the difficulty, resulting in a reduction of nearly fifteen percent in execution time. (author)
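
    The sketch below illustrates the general idea of assigning particle histories in proportion to measured processor throughput while accounting for a per-batch communication overhead; the cost model and numbers are illustrative assumptions, not the dynamic scheme developed for MCNP in the paper.

        def assign_particles(total, rates, comm_cost=0.05):
            """Split `total` histories among processors with measured throughputs `rates`
            (histories/s). Proportional batches equalize compute time; a fixed per-batch
            communication overhead (seconds) is added when estimating the makespan."""
            total_rate = sum(rates)
            batches = [int(round(total * r / total_rate)) for r in rates]
            batches[0] += total - sum(batches)   # absorb rounding remainder
            makespan = max(n / r + comm_cost for n, r in zip(batches, rates))
            return batches, makespan

        batches, t_est = assign_particles(1_000_000, rates=[1200.0, 900.0, 850.0, 400.0])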

  3. Computational analysis of electrical conduction in hybrid nanomaterials with embedded non-penetrating conductive particles

    Science.gov (United States)

    Cai, Jizhe; Naraghi, Mohammad

    2016-08-01

    In this work, a comprehensive multi-resolution two-dimensional (2D) resistor network model is proposed to analyze the electrical conductivity of hybrid nanomaterials made of insulating matrix with conductive particles such as CNT reinforced nanocomposites and thick film resistors. Unlike existing approaches, our model takes into account the impenetrability of the particles and their random placement within the matrix. Moreover, our model presents a detailed description of intra-particle conductivity via finite element analysis, which to the authors’ best knowledge has not been addressed before. The inter-particle conductivity is assumed to be primarily due to electron tunneling. The model is then used to predict the electrical conductivity of electrospun carbon nanofibers as a function of microstructural parameters such as turbostratic domain alignment and aspect ratio. To simulate the microstructure of single CNF, randomly positioned nucleation sites were seeded and grown as turbostratic particles with anisotropic growth rates. Particle growth was in steps and growth of each particle in each direction was stopped upon contact with other particles. The study points to the significant contribution of both intra-particle and inter-particle conductivity to the overall conductivity of hybrid composites. Influence of particle alignment and anisotropic growth rate ratio on electrical conductivity is also discussed. The results show that partial alignment in contrast to complete alignment can result in maximum electrical conductivity of whole CNF. High degrees of alignment can adversely affect conductivity by lowering the probability of the formation of a conductive path. The results demonstrate approaches to enhance electrical conductivity of hybrid materials through controlling their microstructure which is applicable not only to carbon nanofibers, but also many other types of hybrid composites such as thick film resistors.
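
    A much-reduced sketch of the inter-particle part of such a model is given below: particles are coupled by exponential tunneling conductances and the node-voltage (Kirchhoff) system is solved through a graph Laplacian. The tunneling decay length, cutoff, and electrode treatment are assumptions, the configuration is assumed to percolate, and the intra-particle finite-element conductivity described in the paper is not represented.

        import numpy as np

        def network_conductance(pos, radius, xi=1.0, g0=1.0, cutoff=3.0):
            """Effective conductance of a 2D film of circular particles between a left
            electrode at x=0 (held at 1 V) and a right electrode at x=max(x) (0 V).
            Pairs are coupled by a tunneling law g = g0*exp(-d/xi) for surface gaps d
            below cutoff*xi; assumes the network percolates between the electrodes."""
            n = len(pos)
            L = pos[:, 0].max()
            G = np.zeros((n + 2, n + 2))          # nodes n and n+1 are the electrodes
            for i in range(n):
                for j in range(i + 1, n):
                    d = max(np.linalg.norm(pos[i] - pos[j]) - 2 * radius, 0.0)
                    if d < cutoff * xi:
                        G[i, j] = G[j, i] = g0 * np.exp(-d / xi)
                d_left = max(pos[i, 0] - radius, 0.0)
                if d_left < cutoff * xi:
                    G[i, n] = G[n, i] = g0 * np.exp(-d_left / xi)
                d_right = max(L - pos[i, 0] - radius, 0.0)
                if d_right < cutoff * xi:
                    G[i, n + 1] = G[n + 1, i] = g0 * np.exp(-d_right / xi)
            Lap = np.diag(G.sum(axis=1)) - G
            interior = list(range(n))
            b = G[interior, n] * 1.0              # forcing from the 1 V electrode
            V = np.linalg.solve(Lap[np.ix_(interior, interior)], b)
            return float(np.dot(G[n, interior], 1.0 - V))   # current = conductance (dV = 1 V)

        rng = np.random.default_rng(4)
        pos = rng.random((60, 2)) * np.array([20.0, 10.0])  # particle centres in a 20 x 10 film
        print(network_conductance(pos, radius=0.4))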

  4. Update on the Development and Validation of MERCURY: A Modern, Monte Carlo Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Procassini, R J; Taylor, J M; McKinley, M S; Greenman, G M; Cullen, D E; O' Brien, M J; Beck, B R; Hagmann, C A

    2005-06-06

    An update on the development and validation of the MERCURY Monte Carlo particle transport code is presented. MERCURY is a modern, parallel, general-purpose Monte Carlo code being developed at the Lawrence Livermore National Laboratory. During the past year, several major algorithm enhancements have been completed. These include the addition of particle trackers for 3-D combinatorial geometry (CG), 1-D radial meshes, 2-D quadrilateral unstructured meshes, as well as a feature known as templates for defining recursive, repeated structures in CG. New physics capabilities include an elastic-scattering neutron thermalization model, support for continuous energy cross sections and S ({alpha}, {beta}) molecular bound scattering. Each of these new physics features has been validated through code-to-code comparisons with another Monte Carlo transport code. Several important computer science features have been developed, including an extensible input-parameter parser based upon the XML data description language, and a dynamic load-balance methodology for efficient parallel calculations. This paper discusses the recent work in each of these areas, and describes a plan for future extensions that are required to meet the needs of our ever expanding user base.

  5. Modeling of a-particle redistribution by sawteeth in TFTR using FPPT code

    Energy Technology Data Exchange (ETDEWEB)

    Gorelenkov, N.N.; Budny, R.V.; Duong, H.H. [and others

    1996-06-01

    Results from recent DT experiments on TFTR to measure the radial density profiles of fast confined well-trapped {alpha}-particles using the Pellet Charge eXchange (PCX) diagnostic [PETROV M. P., et al., Nucl. Fusion, 35 (1995) 1437] indicate that sawtooth oscillations produce a significant broadening of the trapped alpha radial density profiles. Conventional models consistent with measured sawtooth effects on passing particles do not provide satisfactory simulations of the trapped particle mixing measured by the PCX diagnostic. We propose a different mechanism for fast particle mixing during the sawtooth crash to explain the trapped {alpha}-particle density profile broadening after the crash. The model is based on the orbit-averaged toroidal drift of fast particles in a perturbed helical electric field with an adjustable absolute value. Such a drift of the fast particles results in a change of their energy and a redistribution in phase space. The energy redistribution is shown to obey a diffusion equation, while the redistribution in toroidal momentum P{var_phi} (or in minor radius) is assumed stochastic with a large diffusion coefficient and was taken flat. The distribution function in a pre-sawtooth plasma and its evolution in a post-sawtooth-crash plasma is simulated using the Fokker-Planck Post-TRANSP (FPPT) processor code. It is shown that FPPT-calculated {alpha}-particle distributions are consistent with TRANSP Monte Carlo calculations. Comparison of FPPT results with Pellet Charge eXchange (PCX) measurements shows good agreement for both sawtooth-free and sawtoothing plasmas.

  6. Acceleration of the Geostatistical Software Library (GSLIB) by code optimization and hybrid parallel programming

    Science.gov (United States)

    Peredo, Oscar; Ortiz, Julián M.; Herrero, José R.

    2015-12-01

    The Geostatistical Software Library (GSLIB) has been used in the geostatistical community for more than thirty years. It was designed as a bundle of sequential Fortran codes, and today it is still in use by many practitioners and researchers. Despite its widespread use, few attempts have been reported to bring this package into the multi-core era. Using all CPU resources, GSLIB algorithms can handle large datasets and grids, where tasks are compute- and memory-intensive applications. In this work, a methodology is presented to accelerate GSLIB applications using code optimization and hybrid parallel processing, specifically for compute-intensive applications. Minimal code modifications are added, decreasing as much as possible the elapsed execution time of the studied routines. If multi-core processing is available, the user can activate OpenMP directives to speed up the execution using all resources of the CPU. If multi-node processing is available, the execution is enhanced using MPI messages between the compute nodes. Four case studies are presented: experimental variogram calculation, kriging estimation, and sequential Gaussian and indicator simulation. For each application, three scenarios (small, large and extra large) are tested using a desktop environment with 4 CPU-cores and a multi-node server with 128 CPU-nodes. Elapsed times, speedup and efficiency results are shown.

  7. Dynamic Particle Weight Remapping in Hybrid PIC Hall-effect Thruster Simulation

    Science.gov (United States)

    2015-05-01

    Presented at the 34th International Electric Propulsion Conference and 6th Nano-satellite Symposium, Hyogo-Kobe, Japan, July 4-10, 2015. Author: Robert Martin, ERC Incorporated, Huntsville. Dates covered: May 2015 - July 2015. Related work cited in the record: Koo, J. and Martin, R., "Pseudospectral model for hybrid PIC Hall-effect thruster simulation," 34th Int. Electric Propulsion Conf.

  8. Ultrasensitive Cracking-Assisted Strain Sensors Based on Silver Nanowires/Graphene Hybrid Particles.

    Science.gov (United States)

    Chen, Song; Wei, Yong; Wei, Siman; Lin, Yong; Liu, Lan

    2016-09-28

    Strain sensors with ultrahigh sensitivity under microstrain have numerous potential applications in heartbeat monitoring, pulsebeat detection, sound signal acquisition, and recognition. In this work, a two-part strain sensor (i.e., a polyurethane part with a brittle conductive hybrid-particle layer on top) based on silver nanowires/graphene hybrid particles is developed via a simple coprecipitation, reduction, vacuum filtration, and casting process. Because of the nonuniform interface, the weak interfacial bonding, and the hybrid particles' point-to-point conductive networks, crack and overlap morphologies are successfully formed on the strain sensor after prestretching; the crack-based strain sensor exhibits gauge factors as high as 20 under microstrain. Combined with its good response to bending, high strain resolution, and high working stability, the developed strain sensor is promising for applications in electronic skins, motion sensors, and health-monitoring sensors.

  9. Quantum efficiency of colloidal suspensions containing quantum dot/silica hybrid particles

    Science.gov (United States)

    Jeon, Hyungjoon; Yoon, Cheolsang; Lee, Sooho; Lee, Doh C.; Shin, Kyusoon; Lee, Kangtaek

    2016-10-01

    We have investigated the fluorescence properties of colloidal suspensions containing quantum dot (QD)/silica hybrid particles. First, we synthesized QD/silica hybrid particles with silica-QD-silica (SQS) core-shell-shell geometry, and monitored the quantum efficiencies of their suspensions at various particle concentrations. We found that the quantum efficiency (QE) of SQS particles in deionized (DI) water was much lower than that of the QDs even at low particle concentration, mainly due to the light scattering of emitted photons at the silica/water interface, followed by reabsorption by QDs. As the concentration of SQS particles was increased, both light scattering and reabsorption by QDs became more important, which further reduced the QE. A refractive index-matched solvent, however, reduced light scattering, yielding greater QE than DI water. Next, we induced aggregation of SQS particles, and found that QE increased as particles aggregated in DI water because of reduced light scattering and reabsorption, whereas it remained almost constant in the refractive index-matched solvent. Finally, we studied aggregation of highly concentrated silica particle suspensions containing a low concentration of SQS particles, and found that QE increased with aggregation because light scattering and reabsorption were reduced.

  10. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    Science.gov (United States)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we explore the feasibility of porting a particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first such prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  11. Recent advances in the smoothed-particle hydrodynamics technique: Building the code SPHYNX

    CERN Document Server

    Cabezon, Ruben M; Figueira, Joana

    2016-01-01

    A novel computational hydrocode oriented to astrophysical applications is described, discussed and validated in the following pages. The code, called SPHYNX, is of Newtonian type and grounded on the Euler-Lagrange formulation of the smoothed-particle hydrodynamics technique. The distinctive features of the code are: the use of an integral approach to estimating the gradients; the use of a flexible family of interpolators called sinc kernels, which suppress pairing instability; and the incorporation of a new type of volume elements which provides a better partition of unity. The ensuing hydrodynamic code conserves mass, linear and angular momentum, energy, and entropy, and preserves kernel normalization even in strong shocks. By a careful choice of the index of the sinc kernel and the number of neighbors in the SPH summations, there is a substantial improvement in the estimation of gradients. Additionally, the new volume elements reduce the so-called tensile instability. Both features help to suppress much of t...

  12. Design and Cost Performance of Decoding Technique for Hybrid Subcarrier Spectral Amplitude Coding-Optical Code Division Multiple Access System

    National Research Council Canada - National Science Library

    R. K.Z. Sahbudin; M. K. Abdullah; M. Mokhtar; S. Hitam; S. B.A. Anas

    2011-01-01

    ...) deploying the Khazani-Syed code was proposed. It was proposed as a means of increasing the maximum number of simultaneous active users by increasing the subcarrier and/or the SAC-OCDMA code word...

  13. Mechanism of Methylene Blue adsorption on hybrid laponite-multi-walled carbon nanotube particles.

    Science.gov (United States)

    Manilo, Maryna; Lebovka, Nikolai; Barany, Sandor

    2016-04-01

    The kinetics of adsorption and the parameters of equilibrium adsorption of Methylene Blue (MB) on hybrid laponite-multi-walled carbon nanotube (NT) particles in aqueous suspensions were determined. The laponite platelets were used in order to facilitate disaggregation of NTs in aqueous suspensions and to enhance the adsorption capacity of the hybrid particles for MB. Experiments were performed at room temperature (298 K), and the laponite/NT ratio (Xl) was varied in the range of 0-0.5. For elucidation of the mechanism of MB adsorption on the hybrid particles, the electrical conductivity of the system as well as the electrokinetic potential of the laponite-NT hybrid particles were measured. Three different stages were identified in the kinetics of MB adsorption on the surface of NTs or hybrid laponite-NT particles: a fast initial stage I (adsorption time t = 0-10 min), a slower intermediate stage II (up to t = 120 min), and a long-lasting final stage III (up to t = 24 h). The presence of these stages was explained by accounting for the different types of interactions between MB and adsorbent particles, as well as for the changes in the structure of aggregates of NT particles and the long-range processes of restructuring of laponite platelets on the surface of NTs. The analysis of experimental data on specific surface area versus the value of Xl provided evidence in favor of the model with linear contacts between rigid laponite platelets and NTs. It was also concluded that electrostatic interactions control the first stage of adsorption at low MB concentrations.

  14. HyCFS, a high-resolution shock capturing code for numerical simulation on hybrid computational clusters

    Science.gov (United States)

    Shershnev, Anton A.; Kudryavtsev, Alexey N.; Kashkovsky, Alexander V.; Khotyanovsky, Dmitry V.

    2016-10-01

    The present paper describes HyCFS code, developed for numerical simulation of compressible high-speed flows on hybrid CPU/GPU (Central Processing Unit / Graphical Processing Unit) computational clusters on the basis of full unsteady Navier-Stokes equations, using modern shock capturing high-order TVD (Total Variation Diminishing) and WENO (Weighted Essentially Non-Oscillatory) schemes on general curvilinear structured grids. We discuss the specific features of hybrid architecture and details of program implementation and present the results of code verification.

  15. Large-scale nanocomposites simulations using hybrid particle/SCFT simulations

    Science.gov (United States)

    Sides, Scott

    2009-03-01

    Preliminary 2D simulations of block copolymer nanocomposites [Phys. Rev. Lett. 96, 250601 (2006)] have been performed using a hybrid self-consistent field theory (SCFT) algorithm. While these simulation results showed that the presence of nanoparticles could induce changes in block copolymer morphologies, quantitative agreement with experiments for the particle densities at this transition is not yet possible. A feature missing in the 2D hybrid simulations is the packing behavior of real, three-dimensional spherical particles embedded in lamellar layers or hexagonally packed cylinders formed by linear diblock chains. In order to carry out these hybrid particle/SCFT 3D simulations, a new object-oriented SCFT framework has been developed. The object-oriented design enables the hybrid/SCFT simulations to be performed in a framework that is both numerically efficient and sufficiently flexible to incorporate new SCFT models easily. In particular, this new framework will be used to investigate the distribution of particle positions in diblock lamellar layers as a function of nanoparticle density, to study the interplay of patterning due to the diblock domain structure and the chain depletion interaction between spherical particles.

  16. Parallel Computing Characteristics of CUPID code under MPI and Hybrid environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Ryong; Yoon, Han Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeon, Byoung Jin; Choi, Hyoung Gwon [Seoul National Univ. of Science and Technology, Seoul (Korea, Republic of)

    2014-05-15

    In this paper, the characteristics of a parallel algorithm are presented for solving an elliptic-type equation of CUPID via a domain decomposition method using MPI, and the parallel performance is estimated in terms of scalability, expressed as the speedup ratio. In addition, the time-consuming pattern of the major subroutines is studied. Two different grid systems are taken into account: 40,000 meshes for the coarse system and 320,000 meshes for the fine system. Since the matrix of the CUPID code differs according to whether the flow is single-phase or two-phase, the effect of the matrix shape is evaluated. The effect of the preconditioner for the matrix solver is also investigated. Finally, the hybrid (OpenMP+MPI) parallel algorithm is introduced and discussed in detail for the pressure solver. The component-scale thermal-hydraulics code CUPID has been developed for two-phase flow analysis; it adopts a three-dimensional, transient, three-field model and has been parallelized to fulfill a recent demand for long-transient and highly resolved multi-phase flow behavior. In this study, the parallel performance of the CUPID code was investigated in terms of scalability. The CUPID code was parallelized with a domain decomposition method, and the MPI library was adopted to communicate information between neighboring domains. For managing the sparse matrix effectively, the CSR storage format is used. To take into account the characteristics of the pressure matrix, which becomes asymmetric for two-phase flow, both single-phase and two-phase calculations were run. In addition, the effect of matrix size and preconditioning was also investigated. The fine-mesh calculation shows better scalability than the coarse mesh, because the number of coarse-mesh cells is not large enough for the computational domain to be decomposed extensively. The fine mesh can present good scalability when the geometry is divided with consideration of the ratio between computation and communication time. For a given mesh, single-phase flow

  17. Application of hybrid coded genetic algorithm in fuzzy neural network controller

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper presents a fuzzy neural network optimized by a hybrid-coded genetic algorithm that combines decimal encoding and binary encoding. The searching ability and stability of the genetic algorithm are enhanced by using binary encoding during the crossover operation and decimal encoding during the mutation operation. A probabilistic acceptance rule for new individuals is adopted: a new individual is accepted and its parent discarded when its fitness is higher than that of its parent, while a new individual is accepted only with some probability when its fitness is lower than that of its parent. Calculations on an example show that these improvements increase the speed with which the genetic algorithm optimizes the fuzzy neural network controller.
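
    A minimal sketch of the hybrid-coding idea and the probabilistic acceptance rule described above is given below; the gene scaling, bit width, mutation width, and acceptance probability are illustrative assumptions, not values from the paper.

        import random

        def binary_crossover(a, b, bits=16, scale=1000.0):
            """Crossover acting on the binary encodings of two decimal genes."""
            ia, ib = int(a * scale), int(b * scale)
            point = random.randrange(1, bits)
            mask = (1 << point) - 1
            return ((ia & ~mask) | (ib & mask)) / scale

        def decimal_mutation(x, sigma=0.05):
            """Mutation applied directly to the decimal (real-valued) gene."""
            return x + random.gauss(0.0, sigma)

        def accept(child_fit, parent_fit, p_worse=0.2):
            """Always keep a fitter child; keep a worse one only with some probability."""
            return child_fit > parent_fit or random.random() < p_worse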

  18. Apar-T: code, validation, and physical interpretation of particle-in-cell results

    CERN Document Server

    Melzani, Mickaël; Walder, Rolf; Folini, Doris; Favre, Jean M; Krastanov, Stefan; Messmer, Peter

    2013-01-01

    We present the parallel particle-in-cell (PIC) code Apar-T and, more importantly, address the fundamental question of the relations between the PIC model, the Vlasov-Maxwell theory, and real plasmas. First, we present four validation tests: spectra from simulations of thermal plasmas, linear growth rates of the relativistic tearing instability and of the filamentation instability, and non-linear filamentation merging phase. For the filamentation instability we show that the effective growth rates measured on the total energy can differ by more than 50% from the linear cold predictions and from the fastest modes of the simulation. Second, we detail a new method for initial loading of Maxwell-Jüttner particle distributions with relativistic bulk velocity and relativistic temperature, and explain why the traditional method with individual particle boosting fails. Third, we scrutinize the question of what description of physical plasmas is obtained by PIC models. These models rely on two building blocks: coarse...

  19. Towards a more realistic sink particle algorithm for the RAMSES code

    CERN Document Server

    Bleuler, Andreas

    2014-01-01

    We present a new sink particle algorithm developed for the Adaptive Mesh Refinement code RAMSES. Our main addition is the use of a clump finder to identify density peaks and their associated regions (the peak patches). This allows us to unambiguously define a discrete set of dense molecular cores as potential sites for sink particle formation. Furthermore, we develop a new scheme to decide if the gas in which a sink could potentially form, is indeed gravitationally bound and rapidly collapsing. This is achieved using a general integral form of the virial theorem, where we use the curvature in the gravitational potential to correctly account for the background potential. We detail all the necessary steps to follow the evolution of sink particles in turbulent molecular cloud simulations, such as sink production, their trajectory integration, sink merging and finally the gas accretion rate onto an existing sink. We compare our new recipe for sink formation to other popular implementations. Statistical properties...

  20. Development and Verification of Smoothed Particle Hydrodynamics Code for Analysis of Tsunami near NPP

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Young Beom; Kim, Eung Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2014-10-15

    The analysis becomes more complicated when the shape and phase of the ground below the seawater are considered; therefore, different approaches are required to precisely analyze the behavior of tsunami. This paper introduces ongoing code-development activities at SNU based on an unconventional mesh-free fluid analysis method called Smoothed Particle Hydrodynamics (SPH), together with verification work using practice simulations. This paper summarizes the ongoing development and verification activities on the Lagrangian mesh-free SPH code at SNU. The newly developed code covers the equations of motion and the heat conduction equation so far, and verification of each model is complete. In addition, parallel computation using a GPU is now possible, and a GUI is also prepared. If users change the input geometry or input values, they can simulate various conditions and geometries. The SPH method has large advantages and potential for modeling free surfaces, highly deformable geometries, and multi-phase problems that traditional grid-based codes have difficulty analyzing. Therefore, by incorporating more complex physical models such as turbulent flow, phase change, two-phase flow, and even solid mechanics, the application of the current SPH code is expected to be extended much further, including to molten fuel behavior in severe accidents.

  1. A hybrid discrete particle swarm optimization-genetic algorithm for multi-task scheduling problem in service oriented manufacturing systems

    Institute of Scientific and Technical Information of China (English)

    武善玉; 张平; 李方; 古锋; 潘毅

    2016-01-01

    To cope with the task scheduling problem under multi-task and transportation considerations in large-scale service-oriented manufacturing systems (SOMS), a service allocation optimization mathematical model was established, and a hybrid discrete particle swarm optimization-genetic algorithm (HDPSOGA) was proposed. In SOMS, each resource involved in the whole life cycle of a product, whether provided by a piece of software or by a hardware device, is encapsulated into a service. Transportation during production of a task should therefore be taken into account, because the selected hard services may be provided by various providers in different areas. In the service allocation optimization mathematical model, multi-task and transportation requirements were considered simultaneously. In the proposed HDPSOGA algorithm, an integer coding method was applied to establish the mapping between the particle location matrix and the service allocation scheme. The position updating process was performed according to the cognition part, the social part, and the previous velocity and position, while introducing the crossover and mutation ideas of the genetic algorithm to fit the discrete space. Finally, simulation experiments were carried out to compare the proposed algorithm with two previous algorithms. The results indicate the effectiveness and efficiency of the proposed hybrid algorithm.

  2. An Unscented Kalman-Particle Hybrid Filter for Space Object Tracking

    Science.gov (United States)

    Raihan A. V, Dilshad; Chakravorty, Suman

    2017-04-01

    Optimal and consistent estimation of the state of space objects is pivotal to surveillance and tracking applications. However, probabilistic estimation of space objects is made difficult by the non-Gaussianity and nonlinearity associated with orbital mechanics. In this paper, we present an unscented Kalman-particle hybrid filtering framework for recursive Bayesian estimation of space objects. The hybrid filtering scheme is designed to provide accurate and consistent estimates when measurements are sparse without incurring a large computational cost. It employs an unscented Kalman filter (UKF) for estimation when measurements are available. When the target is outside the field of view (FOV) of the sensor, it updates the state probability density function (PDF) via a sequential Monte Carlo method. The hybrid filter addresses the problem of particle depletion through a suitably designed filter transition scheme. To assess the performance of the hybrid filtering approach, we consider two test cases of space objects that are assumed to undergo full three-dimensional orbital motion under the effects of J2 and atmospheric drag perturbations. It is demonstrated that the hybrid filters can furnish fast, accurate and consistent estimates outperforming standard UKF and particle filter (PF) implementations.

  3. Hybridizing Particle Swarm Optimization and Differential Evolution for the Mobile Robot Global Path Planning

    Directory of Open Access Journals (Sweden)

    Biwei Tang

    2016-05-01

    Full Text Available Global path planning is a challenging issue in the field of mobile robotics due to its complexity and its nondeterministic polynomial-time hard (NP-hard) nature. Particle swarm optimization (PSO) has gained increasing popularity in global path planning due to its simplicity and high convergence speed. However, since the basic PSO has difficulties balancing exploration and exploitation, and suffers from stagnation, its efficiency in solving global path planning may be restricted. Aiming at overcoming these drawbacks and solving the global path planning problem efficiently, this paper proposes a hybrid PSO algorithm that hybridizes PSO and differential evolution (DE) algorithms. To dynamically adjust the exploration and exploitation abilities of the hybrid PSO, a novel PSO, the nonlinear time-varying PSO (NTVPSO), is proposed for updating the velocities and positions of particles in the hybrid PSO. In an attempt to avoid stagnation, a modified DE, the ranking-based self-adaptive DE (RBSADE), is developed to evolve the personal best experience of particles in the hybrid PSO. The proposed algorithm is compared with four state-of-the-art evolutionary algorithms. Simulation results show that the proposed algorithm is highly competitive in terms of path optimality and can be considered as a vital alternative for solving global path planning.
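
    The sketch below illustrates the two ingredients in generic form: a PSO velocity update with time-varying coefficients, and a DE/rand/1 mutation applied to personal bests. The coefficient schedules and the ranking-based self-adaptation of RBSADE are not reproduced; the forms and constants shown are assumptions for illustration only.

        import numpy as np

        rng = np.random.default_rng(0)

        def pso_velocity(v, x, pbest, gbest, t, t_max):
            """Velocity update with generic time-varying inertia and acceleration terms."""
            w  = 0.9 - 0.5 * (t / t_max)          # inertia decays over the run
            c1 = 2.5 - 1.5 * (t / t_max)          # cognitive coefficient decays
            c2 = 0.5 + 1.5 * (t / t_max)          # social coefficient grows
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

        def de_mutate_pbest(pbests, i, F=0.5):
            """DE/rand/1 mutation of personal best i using three other personal bests."""
            a, b, c = rng.choice([k for k in range(len(pbests)) if k != i], 3, replace=False)
            return pbests[a] + F * (pbests[b] - pbests[c])

        # usage sketch: 20 particles in a 2-D search space
        pbests = rng.random((20, 2))
        trial = de_mutate_pbest(pbests, 0)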

  4. Assessment of a Hybrid Continuous/Discontinuous Galerkin Finite Element Code for Geothermal Reservoir Simulations

    Science.gov (United States)

    Xia, Yidong; Podgorney, Robert; Huang, Hai

    2017-03-01

    FALCON (Fracturing And Liquid CONvection) is a hybrid continuous/discontinuous Galerkin finite element geothermal reservoir simulation code based on the MOOSE (Multiphysics Object-Oriented Simulation Environment) framework being developed and used for multiphysics applications. In the present work, a suite of verification and validation (V&V) test problems for FALCON was defined to meet the design requirements and solved in the interest of enhanced geothermal system modeling and simulation. The intent of this test problem suite is to provide baseline comparison data that demonstrate the performance of the FALCON solution methods. The test problems vary in complexity from a single mechanical or thermal process to coupled thermo-hydro-mechanical processes in a geological porous medium. Numerical results obtained by FALCON agreed well with either the available analytical solutions or experimental data, indicating the verified and validated implementation of these capabilities in FALCON. Whenever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using the FALCON code.

  5. Hybrid particle swarm optimization for multiobjective resource allocation

    Institute of Scientific and Technical Information of China (English)

    Yi Yang; Li Xiaoxing; Gu Chunqin

    2008-01-01

    Resource allocation (RA) is the problem of allocating resources among various artifacts or business units to meet one or more expected goals, such as maximizing profits, minimizing costs, or achieving the best quality. A complex multiobjective RA is addressed, and a multiobjective mathematical model is used to find solutions efficiently. Then, an improved particle swarm algorithm (mO_PSO) is proposed, combined with a new particle diversity controller policy and a dissipation operation. Meanwhile, a modified Pareto method used in PSO to deal with multiobjective optimization is presented. The effectiveness of the proposed algorithm is validated by its application to an illustrative example dealing with multiobjective RA problems and by a comparative experiment with another algorithm.

  6. Preparation and characterization of uniformly sized sub-micrometer spherical silica/organic polymer hybrid particles

    Energy Technology Data Exchange (ETDEWEB)

    Xing, X.-S.; Li, R.K.Y.; Shek, C.-H. [Department of Physics and Materials Science, City University of Hong Kong, Tak Chee Avenue, Kowloon, Hong Kong (China)

    2003-09-01

    Hybrid particles with a core-shell structure, consisting of a silica core and a polyvinyl alcohol (PVA) shell were fabricated via a two-step sol-gel process. The PVA molecular chains are probably physically adsorbed onto the surface of silica cores by hydrogen bonds and van der Waals forces. (Abstract Copyright [2003], Wiley Periodicals, Inc.)

  7. Silica-graphene oxide hybrid composite particles and their electroresponsive characteristics.

    Science.gov (United States)

    Zhang, Wen Ling; Choi, Hyoung Jin

    2012-05-01

    Silica-graphene oxide (Si-GO) hybrid composite particles were prepared by the hydrolysis of tetraethyl orthosilicate (TEOS) in the presence of hydrophilic GO obtained from a modified Hummers method. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) images provided visible evidence of the silica nanoparticles grafted on the surface of GO, resulting in Si-GO hybrid composite particles. Energy dispersive X-ray spectroscopy (EDX) and X-ray diffraction (XRD) spectra indicated the coexistence of silica and GO in the composite particles. The Si-GO hybrid composite particles showed better thermal stability than that of GO according to thermogravimetric analysis (TGA). The electrorheological (ER) characteristics of the Si-GO hybrid composite based ER fluid were examined further by optical microscopy and a rotational rheometer in controlled shear rate mode under various electric field strengths. Shear stress curves were fitted using both conventional Bingham model and a constitutive Cho-Choi-Jhon model. The polarizability and relaxation time of the ER fluid from dielectric spectra measured using an LCR meter showed a good correlation with its ER characteristics.

  8. Studying the Mechanism of Hybrid Nanoparticle Photoresists: Effect of Particle Size on Photopatterning

    KAUST Repository

    Li, Li

    2015-07-28

    © 2015 American Chemical Society. Hf-based hybrid photoresist materials with three different organic ligands were prepared by a sol-gel-based method, and their patterning mechanism was investigated in detail. All hybrid nanoparticle resists are patternable using UV exposure. Their particle sizes show a dramatic increase from the initial 3-4 nm to submicron size after exposure, with no apparent inorganic content or thermal property change detected. XPS results showed that the mass percentage of the carboxylic group in the structure of nanoparticles decreased with increasing exposure duration. The particle coarsening sensitivities of those hybrid nanoparticles are consistent with their EUV performance. The current work provides an understanding for the development mechanism and future guidance for the design and processing of high performance resist materials for large-scale microelectronics device fabrication.

  9. Strength Pareto particle swarm optimization and hybrid EA-PSO for multi-objective optimization.

    Science.gov (United States)

    Elhossini, Ahmed; Areibi, Shawki; Dony, Robert

    2010-01-01

    This paper proposes an efficient particle swarm optimization (PSO) technique that can handle multi-objective optimization problems. It is based on the strength Pareto approach originally used in evolutionary algorithms (EA). The proposed modified particle swarm algorithm is used to build three hybrid EA-PSO algorithms to solve different multi-objective optimization problems. This algorithm and its hybrid forms are tested using seven benchmarks from the literature and the results are compared to the strength Pareto evolutionary algorithm (SPEA2) and a competitive multi-objective PSO using several metrics. The proposed algorithm shows a slower convergence, compared to the other algorithms, but requires less CPU time. Combining PSO and evolutionary algorithms leads to superior hybrid algorithms that outperform SPEA2, the competitive multi-objective PSO (MO-PSO), and the proposed strength Pareto PSO based on different metrics.

  10. HyDEn: a hybrid steganocryptographic approach for data encryption using randomized error-correcting DNA codes.

    Science.gov (United States)

    Tulpan, Dan; Regoui, Chaouki; Durand, Guillaume; Belliveau, Luc; Léger, Serge

    2013-01-01

    This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach.
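
    A toy Python sketch of the randomized, key-seeded codeword assignment and cyclic permutation is shown below; it deliberately omits the error-correcting Hamming structure of the actual HyDEn quaternary codes, and the word length, integer key handling, and permutation rule are assumptions for illustration.

        import random

        BASES = "ACGT"

        def keyed_codebook(key, word_len=8):
            """Randomized assignment of one distinct DNA word per extended-ASCII value,
            seeded by an integer private key (Hamming error correction omitted)."""
            rng = random.Random(key)
            words = set()
            while len(words) < 256:
                words.add("".join(rng.choice(BASES) for _ in range(word_len)))
            return dict(zip(range(256), sorted(words, key=lambda w: rng.random())))

        def encode(message, key):
            book = keyed_codebook(key)
            dna = "".join(book[b] for b in message.encode("latin-1"))
            shift = random.Random(key ^ 0xA5A5).randrange(len(dna))  # key-derived rotation
            return dna[shift:] + dna[:shift]

        print(encode("hybrid", 2024)[:32])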

  11. Dendrimer-like hybrid particles with tunable hierarchical pores

    Science.gov (United States)

    Du, Xin; Li, Xiaoyu; Huang, Hongwei; He, Junhui; Zhang, Xueji

    2015-03-01

    Dendrimer-like silica particles with a center-radial dendritic framework and a synergistic hierarchical porosity have attracted much attention due to their unique open three-dimensional superstructures with high accessibility to the internal surface areas; however, the delicate regulation of the hierarchical porosity has been difficult to achieve up to now. Herein, a series of dendrimer-like amino-functionalized silica particles with tunable hierarchical pores (HPSNs-NH2) were successfully fabricated by carefully regulating and optimizing the various experimental parameters in the ethyl ether emulsion systems via a one-pot sol-gel reaction. Interestingly, the simple adjustment of the stirring rate or reaction temperature was found to be an easy and effective route to achieve the controllable regulation towards center-radial large pore sizes from ca. 37-267 (148 +/- 45) nm to ca. 8-119 (36 +/- 21) nm for HPSNs-NH2 with particle sizes of 300-700 nm and from ca. 9-157 (52 +/- 28) nm to ca. 8-105 (30 +/- 16) nm for HPSNs-NH2 with particle sizes of 100-320 nm. To the best of our knowledge, this is the first successful regulation towards center-radial large pore sizes in such large ranges. The formation of HPSNs-NH2 may be attributed to the complex cross-coupling of two processes: the dynamic diffusion of ethyl ether molecules and the self-assembly of partially hydrolyzed TEOS species and CTAB molecules at the dynamic ethyl ether-water interface of uniform small quasi-emulsion droplets. Thus, these results regarding the elaborate regulation of center-radial large pores and particle sizes not only help us better understand the complicated self-assembly at the dynamic oil-water interface, but also provide a unique and ideal platform as carriers or supports for adsorption, separation, catalysis, biomedicine, and sensors.

  12. Hybrid composites of monodisperse pi-conjugated rodlike organic compounds and semiconductor quantum particles

    DEFF Research Database (Denmark)

    Hensel, V.; Godt, A.; Popovitz-Biro, R.

    2002-01-01

    Composite materials of quantum particles (Q-particles) arranged in layers within crystalline powders of pi-conjugated, rodlike dicarboxylic acids are reported. The synthesis of the composites, either as three-dimensional crystals or as thin films at the air-water interface, comprises a two...... analysis of the solids and grazing incidence X-ray diffraction analysis of the films on water. 2) Topotactic solid/gas reaction of these salts with H2S to convert the metal ions into Q-particles of CdS or PbS embedded in the organic matrix that consists of the acids 6(H) and 8(H). These hybrid materials...

  13. Hybrid particle swarm cooperative optimization algorithm and its application to MBC in alumina production

    Institute of Scientific and Technical Information of China (English)

    Shengli Song; Li Kong; Yong Gan; Rijian Su

    2008-01-01

    An effective hybrid particle swarm cooperative optimization (HPSCO) algorithm combining simulated annealing method and simplex method is proposed. The main idea is to divide particle swarm into several sub-groups and achieve optimization through cooperativeness of different sub-groups among the groups. The proposed algorithm is tested by benchmark functions and applied to material balance computation (MBC) in alumina production. Results show that HPSCO, with both a better stability and a steady convergence, has faster convergence speed and higher global convergence ability than the single method and the improved particle swarm optimization method. Most importantly, results demonstrate that HPSCO is more feasible and efficient than other algorithms in MBC.

  14. Statistical learning makes the hybridization of particle swarm and differential evolution more efficient-A novel hybrid optimizer

    Institute of Scientific and Technical Information of China (English)

    CHEN Jie; XIN Bin; PENG ZhiHong; PAN Feng

    2009-01-01

    This brief paper reports a hybrid algorithm we developed recently to solve the global optimization problems of multimodal functions, by combining the advantages of two powerful population-based metaheuristics: differential evolution (DE) and particle swarm optimization (PSO). In the hybrid, denoted DEPSO, each individual in one generation chooses its evolution method, DE or PSO, in a statistical learning way. The choice depends on the relative success ratio of the two methods in a previous learning period. The proposed DEPSO is compared with its PSO and DE parents, two advanced DE variants (one of which is suggested by the originators of DE), two advanced PSO variants (one of which is acknowledged as a recent standard by the PSO community), and also a previous DEPSO. Benchmark tests demonstrate that the DEPSO is more competent for the global optimization of multimodal functions due to its high optimization quality.
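
    The statistical-learning choice between the two operators can be pictured with the small sketch below, where the selection probability follows each method's smoothed success ratio over the preceding learning period; the Laplace smoothing and bookkeeping are illustrative assumptions, not the paper's exact rule.

        import random

        def choose_operator(successes, trials):
            """Pick DE or PSO with probability proportional to each method's
            (Laplace-smoothed) success ratio in the previous learning period."""
            ratios = {m: (successes[m] + 1) / (trials[m] + 2) for m in ("DE", "PSO")}
            total = sum(ratios.values())
            return "DE" if random.random() < ratios["DE"] / total else "PSO"

        # example bookkeeping after a learning period of 100 updates per method
        print(choose_operator({"DE": 37, "PSO": 22}, {"DE": 100, "PSO": 100}))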

  15. Study on angular variation of cosmic ray secondary particles with atmospheric depth using CORSIKA code

    Science.gov (United States)

    Patgiri, P.; Kalita, D.; Boruah, K.

    2017-04-01

    The distribution of the secondary cosmic ray charged particles in the atmosphere as a function of the zenith angle of the primary particle depends on various factors such as atmospheric depth, the latitude and longitude of the place of observation, and possibly other atmospheric conditions. This work is focussed on the investigation of atmospheric attenuation of an Extensive Air Shower using the zenith angle distribution of the secondary charged particles, at different atmospheric depths for pure primary compositions (gamma, proton and iron nucleus) and mixed compositions, employing the Monte Carlo simulation code CORSIKA (versions 6.990 and 7.3500) in the energy range 10 TeV-1 PeV. The secondary charged particles in different zenith angle bins are fitted with a differential distribution dN_sp/dθ = A(X) sinθ cos^{n(X)}θ, where the power index n(X) is a function of the atmospheric depth X. For a given zenith angle θ, the frequency of the showers with secondary charged particle intensity higher than a threshold is also fitted with the relation F(θ, X0) = F(0, X0) exp[-X0(secθ - 1)/λ], where X0 is the vertical atmospheric depth and λ is the attenuation length. Further, the angular distribution parameter n(X) and the attenuation coefficients λ from our simulation results for different primaries are compared with available experimental data.
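
    Both fits quoted above can be reproduced with a few lines of SciPy, as sketched below on placeholder binned counts (not the paper's simulation output); the vertical depth X0 and the initial parameter guesses are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def angular_model(theta, A, n):
            """dN_sp/dtheta = A * sin(theta) * cos(theta)**n at a fixed atmospheric depth X."""
            return A * np.sin(theta) * np.cos(theta) ** n

        def attenuation_model(theta, F0, lam, X0=1000.0):
            """F(theta, X0) = F(0, X0) * exp(-X0 * (sec(theta) - 1) / lambda), X0 in g/cm^2."""
            return F0 * np.exp(-X0 * (1.0 / np.cos(theta) - 1.0) / lam)

        # placeholder counts in 5-degree zenith-angle bins
        theta = np.radians(np.arange(2.5, 60.0, 5.0))
        counts = angular_model(theta, 1.0e4, 8.0)
        counts *= 1 + 0.03 * np.random.default_rng(2).standard_normal(theta.size)
        (A_fit, n_fit), _ = curve_fit(angular_model, theta, counts, p0=[1e4, 6.0])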

  16. Study on angular variation of cosmic ray secondary particles with atmospheric depth using CORSIKA code

    Science.gov (United States)

    Patgiri, P.; Kalita, D.; Boruah, K.

    2016-08-01

    The distribution of the secondary cosmic ray charged particles in the atmosphere as a function of the zenith angle of the primary particle depends on various factors such as atmospheric depth, the latitude and longitude of the place of observation, and possibly other atmospheric conditions. This work is focussed on the investigation of atmospheric attenuation of an Extensive Air Shower using the zenith angle distribution of the secondary charged particles, at different atmospheric depths for pure primary compositions (gamma, proton and iron nucleus) and mixed compositions, employing the Monte Carlo simulation code CORSIKA (versions 6.990 and 7.3500) in the energy range 10 TeV-1 PeV. The secondary charged particles in different zenith angle bins are fitted with a differential distribution dN_sp/dθ = A(X) sinθ cos^{n(X)}θ, where the power index n(X) is a function of the atmospheric depth X. For a given zenith angle θ, the frequency of the showers with secondary charged particle intensity higher than a threshold is also fitted with the relation F(θ, X0) = F(0, X0) exp[-X0(secθ - 1)/λ], where X0 is the vertical atmospheric depth and λ is the attenuation length. Further, the angular distribution parameter n(X) and the attenuation coefficients λ from our simulation results for different primaries are compared with available experimental data.

  17. Thermal Cracking Analysis during Pipe Cooling of Mass Concrete Using Particle Flow Code

    Directory of Open Access Journals (Sweden)

    Liang Li

    2016-01-01

    Full Text Available Pipe cooling systems are among the potentially effective measures to control the temperature of mass concrete. However, if not properly controlled, thermal cracking in concrete, especially near water pipes, might occur, as experienced in many mass concrete structures. In this paper, a new numerical approach to simulate thermal cracking based on particle flow code is used to shed more light onto the process of thermal crack propagation and the effect of thermal cracks on thermal fields. Key details of the simulation, including the procedure of obtaining thermal and mechanical properties of particles, are presented. Importantly, a heat flow boundary based on an analytical solution is proposed and used in particle flow code in two dimensions to simulate the effect of pipe cooling. The simulation results are in good agreement with the monitored temperature data and observations on cored specimens from a real concrete gravity dam, giving confidence to the appropriateness of the adopted simulation. The simulated results also clearly demonstrate why thermal cracks occur and how they propagate, as well as the influence of such cracks on thermal fields.

  18. Spacecraft charging analysis with the implicit particle-in-cell code iPic3D

    Energy Technology Data Exchange (ETDEWEB)

    Deca, J.; Lapenta, G. [Centre for Mathematical Plasma Astrophysics, KU Leuven, Celestijnenlaan 200B bus 2400, 3001 Leuven (Belgium); Marchand, R. [Department of Physics, University of Alberta, Edmonton, Alberta T6G 2J1 (Canada); Markidis, S. [High Performance Computing and Visualization Department, KTH Royal Institute of Technology, Stockholm (Sweden)

    2013-10-15

    We present the first results on the analysis of spacecraft charging with the implicit particle-in-cell code iPic3D, designed for running on massively parallel supercomputers. The numerical algorithm is presented, highlighting the implementation of the electrostatic solver and the immersed boundary algorithm; the latter which creates the possibility to handle complex spacecraft geometries. As a first step in the verification process, a comparison is made between the floating potential obtained with iPic3D and with Orbital Motion Limited theory for a spherical particle in a uniform stationary plasma. Second, the numerical model is verified for a CubeSat benchmark by comparing simulation results with those of PTetra for space environment conditions with increasing levels of complexity. In particular, we consider spacecraft charging from plasma particle collection, photoelectron and secondary electron emission. The influence of a background magnetic field on the floating potential profile near the spacecraft is also considered. Although the numerical approaches in iPic3D and PTetra are rather different, good agreement is found between the two models, raising the level of confidence in both codes to predict and evaluate the complex plasma environment around spacecraft.
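
    The Orbital Motion Limited comparison mentioned above reduces, in its simplest form, to a current-balance calculation. The sketch below solves that balance for a small non-emitting sphere in a stationary, unmagnetized Maxwellian hydrogen plasma (densities cancel); the temperatures are placeholders and emission processes are ignored.

        import numpy as np
        from scipy.optimize import brentq
        from scipy.constants import e, m_e, m_p

        def floating_potential(Te_eV=1.0, Ti_eV=1.0, mi=m_p):
            """OML current balance for a small sphere: retarded electron thermal flux
            equals the attracted-ion OML flux; returns the floating potential in volts."""
            Te, Ti = Te_eV * e, Ti_eV * e          # temperatures in joules
            je = np.sqrt(Te / (2 * np.pi * m_e))   # electron thermal flux factor
            ji = np.sqrt(Ti / (2 * np.pi * mi))    # ion thermal flux factor

            def balance(phi):                      # phi < 0 at the floating point
                return je * np.exp(e * phi / Te) - ji * (1.0 - e * phi / Ti)

            return brentq(balance, -50.0 * Te_eV, 0.0)

        print(floating_potential())  # roughly -2.5 V for a 1 eV hydrogen plasma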

  19. Hybrid molecular-continuum simulations using smoothed dissipative particle dynamics.

    Science.gov (United States)

    Petsev, Nikolai D; Leal, L Gary; Shell, M Scott

    2015-01-28

    We present a new multiscale simulation methodology for coupling a region with atomistic detail simulated via molecular dynamics (MD) to a numerical solution of the fluctuating Navier-Stokes equations obtained from smoothed dissipative particle dynamics (SDPD). In this approach, chemical potential gradients emerge due to differences in resolution within the total system and are reduced by introducing a pairwise thermodynamic force inside the buffer region between the two domains where particles change from MD to SDPD types. When combined with a multi-resolution SDPD approach, such as the one proposed by Kulkarni et al. [J. Chem. Phys. 138, 234105 (2013)], this method makes it possible to systematically couple atomistic models to arbitrarily coarse continuum domains modeled as SDPD fluids with varying resolution. We test this technique by showing that it correctly reproduces thermodynamic properties across the entire simulation domain for a simple Lennard-Jones fluid. Furthermore, we demonstrate that this approach is also suitable for non-equilibrium problems by applying it to simulations of the start up of shear flow. The robustness of the method is illustrated with two different flow scenarios in which shear forces act in directions parallel and perpendicular to the interface separating the continuum and atomistic domains. In both cases, we obtain the correct transient velocity profile. We also perform a triple-scale shear flow simulation where we include two SDPD regions with different resolutions in addition to a MD domain, illustrating the feasibility of a three-scale coupling.

  20. Hybrid molecular-continuum simulations using smoothed dissipative particle dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Petsev, Nikolai D.; Leal, L. Gary; Shell, M. Scott [Department of Chemical Engineering, University of California at Santa Barbara, Santa Barbara, California 93106-5080 (United States)

    2015-01-28

    We present a new multiscale simulation methodology for coupling a region with atomistic detail simulated via molecular dynamics (MD) to a numerical solution of the fluctuating Navier-Stokes equations obtained from smoothed dissipative particle dynamics (SDPD). In this approach, chemical potential gradients emerge due to differences in resolution within the total system and are reduced by introducing a pairwise thermodynamic force inside the buffer region between the two domains where particles change from MD to SDPD types. When combined with a multi-resolution SDPD approach, such as the one proposed by Kulkarni et al. [J. Chem. Phys. 138, 234105 (2013)], this method makes it possible to systematically couple atomistic models to arbitrarily coarse continuum domains modeled as SDPD fluids with varying resolution. We test this technique by showing that it correctly reproduces thermodynamic properties across the entire simulation domain for a simple Lennard-Jones fluid. Furthermore, we demonstrate that this approach is also suitable for non-equilibrium problems by applying it to simulations of the start up of shear flow. The robustness of the method is illustrated with two different flow scenarios in which shear forces act in directions parallel and perpendicular to the interface separating the continuum and atomistic domains. In both cases, we obtain the correct transient velocity profile. We also perform a triple-scale shear flow simulation where we include two SDPD regions with different resolutions in addition to a MD domain, illustrating the feasibility of a three-scale coupling.

  1. Hybrid metal organic scintillator materials system and particle detector

    Science.gov (United States)

    Bauer, Christina A.; Allendorf, Mark D.; Doty, F. Patrick; Simmons, Blake A.

    2011-07-26

    We describe the preparation and characterization of two zinc hybrid luminescent structures based on the flexible and emissive linker molecule, trans-(4-R,4'-R') stilbene, where R and R' are mono- or poly-coordinating groups, which retain their luminescence within these solid materials. For example, reaction of trans-4,4'-stilbenedicarboxylic acid and zinc nitrate in the solvent dimethylformamide (DMF) yielded a dense 2-D network featuring zinc in both octahedral and tetrahedral coordination environments connected by trans-stilbene links. A similar reaction in diethylformamide (DEF) at higher temperatures resulted in a porous, 3-D framework structure consisting of two interpenetrating cubic lattices, each featuring basic zinc carboxylate vertices joined by trans-stilbene, analogous to the isoreticular MOF (IRMOF) series. We demonstrate that the optical properties of both embodiments correlate directly with the local ligand environments observed in the crystal structures. We further demonstrate that these materials produce a high luminescent response to proton radiation and high radiation tolerance relative to prior scintillators. These features can be used to create sophisticated scintillating detection sensors.

  2. SimTrack: A compact c++ code for particle orbit and spin tracking in accelerators

    Science.gov (United States)

    Luo, Yun

    2015-11-01

    SimTrack is a compact C++ code for 6-d symplectic element-by-element particle tracking in accelerators, originally designed for head-on beam-beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides 6-d symplectic orbit tracking with 4th-order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam-beam interaction. Since its inception in 2009, SimTrack has been used intensively for dynamic aperture calculations with beam-beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this paper, I present the code architecture, physics models, and some selected examples of its applications to RHIC and a future electron-ion collider design, eRHIC.
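
    As a generic illustration of 4th-order symplectic integration (SimTrack's actual element-by-element accelerator maps are more involved), the sketch below advances a separable Hamiltonian H = p^2/2 + V(q) with the Forest-Ruth drift-kick coefficients.

        import numpy as np

        # Forest-Ruth (Yoshida) coefficients for a 4th-order symplectic drift-kick scheme
        THETA = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
        C = np.array([THETA / 2, (1 - 2.0 ** (1.0 / 3.0)) * THETA / 2,
                      (1 - 2.0 ** (1.0 / 3.0)) * THETA / 2, THETA / 2])
        D = np.array([THETA, -2.0 ** (1.0 / 3.0) * THETA, THETA, 0.0])

        def step(q, p, force, dt):
            """One 4th-order symplectic step for H = p^2/2 + V(q); `force` returns -dV/dq."""
            for c, d in zip(C, D):
                q = q + c * dt * p          # drift
                p = p + d * dt * force(q)   # kick
            return q, p

        # example: a simple harmonic oscillator; the energy error stays bounded
        q, p = 1.0, 0.0
        for _ in range(1000):
            q, p = step(q, p, lambda x: -x, 0.1)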

  3. Beam Dynamics in an Electron Lens with the Warp Particle-in-cell Code

    CERN Document Server

    Stancari, Giulio; Redaelli, Stefano

    2014-01-01

    Electron lenses are a mature technique for beam manipulation in colliders and storage rings. In an electron lens, a pulsed, magnetically confined electron beam with a given current-density profile interacts with the circulating beam to obtain the desired effect. Electron lenses were used in the Fermilab Tevatron collider for beam-beam compensation, for abort-gap clearing, and for halo scraping. They will be used in RHIC at BNL for head-on beam-beam compensation, and their application to the Large Hadron Collider for halo control is under development. At Fermilab, electron lenses will be implemented as lattice elements for nonlinear integrable optics. The design of electron lenses requires tools to calculate the kicks and wakefields experienced by the circulating beam. We use the Warp particle-in-cell code to study generation, transport, and evolution of the electron beam. For the first time, a fully 3-dimensional code is used for this purpose.

  4. Examining of abrasion resistance of hybrid composites reinforced with SiC and Cgr particles

    Directory of Open Access Journals (Sweden)

    M. Łągiewka

    2008-08-01

    Full Text Available The presented work discusses the influence of the type and volume percentage of particulate reinforcement consisting of mixed silicon carbide and graphite on the abrasion wear of hybrid composites with an AlMg10 matrix. Macro photographs of the frictional surfaces are also shown and the results of hardness measurements are presented. The performed examinations have shown that the mixture of SiC and Cgr particles changes the tribological properties of the matrix alloy in its favour. It has also been proved that introducing hard reinforcing particles along with soft lubricating ones allows for achieving a material exhibiting high abrasion resistance; moreover, the graphite particles protect the abraded surface from the destructive action of the silicon carbide particles. Hardness measurements have also been performed, and the resulting conclusion is that the composite hardness increases with an increase in the volume fraction of the reinforcing particles.

  5. Synthesis and properties of hybrid hydroxyapatite-ferrite (Fe3O4) particles for hyperthermia applications

    Science.gov (United States)

    Tkachenko, M. V.; Kamzin, A. S.

    2016-04-01

    Hybrid ceramics consisting of hydroxyapatite Ca10(PO4)6(OH)2 and ferrite Fe3O4 were synthesized using a two-stage procedure. The first stage included the synthesis of Fe3O4 ferrite particles by co-precipitation and the synthesis of hydroxyapatite. In the second stage, the magnetic hybrid hydroxyapatite-ferrite bioceramics were synthesized by a thorough mixing of the obtained powders of carbonated hydroxyapatite and Fe3O4 ferrite taken in a certain proportion, pressing into tablets, and annealing in a carbon dioxide atmosphere for 30 min at a temperature of 1200°C. The properties of the components and hybrid particles were investigated using X-ray diffraction, scanning electron microscopy, transmission electron microscopy, and Mössbauer spectroscopy. The saturation magnetization of the hybrid ceramic composite containing 20 wt % Fe3O4 was found to be 12 emu/g. The hybrid hydroxyapatite (Ca10(PO4)6(OH)2)-ferrite Fe3O4 ceramics, which are promising for the use in magnetotransport and hyperthermia treatment, were synthesized and investigated for the first time.

  6. Using field theory to construct hybrid particle-continuum simulation schemes with adaptive resolution for soft matter systems

    OpenAIRE

    Qi, Shuanhu; Behringer, Hans; Schmid, Friederike

    2013-01-01

    We develop a multiscale hybrid scheme for simulations of soft condensed matter systems, which allows one to treat the system at the particle level in selected regions of space, and at the continuum level elsewhere. It is derived systematically from an underlying particle-based model by field theoretic methods. Particles in different representation regions can switch representations on the fly, controlled by a spatially varying tuning function. As a test case, the hybrid scheme is applied to s...

  7. Hybrid Decoder Reconfiguration of AVS-P7 and MPEG-4 /AVC in the Reconfigurable Video Coding Framework

    Directory of Open Access Journals (Sweden)

    Zhang Zhaoyang

    2012-08-01

    Full Text Available With the rapid development of video coding technology, all kinds of video coding standards have been advanced in recent years with a variety of different and complex algorithms. They share common and/or similar coding tools, yet there is currently no explicit way to exploit such commonalities at the level of specifications or implementations. Reconfigurable video coding (RVC) aims to develop a video coding standard that overcomes many shortcomings of the current standardization and specification process by updating and progressively incrementing a modular library of components. In this paper, a hybrid decoder reconfiguration is instantiated in the RVC framework by grouping the coding tools from AVS-P7 and MPEG-4/AVC. Experimental results show that, compared with the MPEG-4/AVC baseline profile, the reconfigurable coding system reduces the computational complexity and guarantees the coding performance at low bit rate. Moreover, it enriches the RVC video tool library (VTL) by introducing the coding tools of AVS-P7, and also verifies the flexibility and re-configurability of the RVC framework to meet the needs of different applications.

  8. Neutron transport-burnup code MCORGS and its application in fusion fission hybrid blanket conceptual research

    Science.gov (United States)

    Shi, Xue-Ming; Peng, Xian-Jue

    2016-09-01

    Fusion science and technology has made progress in the last decades. However, commercialization of fusion reactors still faces challenges relating to higher fusion energy gain, irradiation-resistant materials, and tritium self-sufficiency. Fusion Fission Hybrid Reactors (FFHR) can be introduced to accelerate the early application of fusion energy. Traditionally, FFHRs have been classified as either breeders or transmuters. Both need partition of plutonium from spent fuel, which will pose nuclear proliferation risks. A conceptual design of a Fusion Fission Hybrid Reactor for Energy (FFHR-E), which can make full use of natural uranium with lower nuclear proliferation risk, is presented. The fusion core parameters are similar to those of the International Thermonuclear Experimental Reactor. An alloy of natural uranium and zirconium is adopted in the fission blanket, which is cooled by light water. In order to model blanket burnup problems, a linkage code MCORGS, which couples MCNP4B and ORIGEN-S, is developed and validated through several typical benchmarks. The average blanket energy multiplication and tritium breeding ratio can be maintained at 10 and 1.15, respectively, over tens of years of continuous irradiation. If simple reprocessing without separation of plutonium from uranium is adopted every few years, FFHR-E can achieve better neutronic performance. MCORGS has also been used to analyze the ultra-deep burnup model of Laser Inertial Confinement Fusion Fission Energy (LIFE) from LLNL, and a new blanket design that uses Pb instead of Be as the neutron multiplier is proposed. In addition, MCORGS has been used to simulate the fluid transmuter model of the In-Zinerater from Sandia. A brief comparison of LIFE, In-Zinerater, and FFHR-E will be given.

  9. A Particle In Cell code development for high current ion beam transport and plasma simulations

    CERN Document Server

    Joshi, N

    2016-01-01

    A simulation package employing a Particle in Cell (PIC) method is developed to study high-current beam transport and the dynamics of plasmas. This package includes subroutines suited for various planned projects at the University of Frankfurt. In the framework of the storage ring project (F8SR), the code was written to describe the beam optics in toroidal magnetic fields. It is used to design an injection system for a ring with closed magnetic field lines. The generalized numerical model in Cartesian coordinates is used to describe intense ion beam transport through the chopper system in the low-energy beam section of the FRANZ project. For the chopper system in particular, the Poisson equation is implemented with irregular geometries. The Particle In Cell model is further upgraded with a Monte Carlo collision subroutine for the simulation of plasma in a volume-type ion source.
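
    As background for the record above, a particle-in-cell cycle consists of charge deposition onto a grid, a field solve, field interpolation back to the particles, and a particle push. The minimal 1-D electrostatic sketch below (periodic grid, FFT Poisson solve, leapfrog push, unit-charge macro-particles with a user-supplied neutralizing background) is a generic illustration and does not reproduce the code described in this record.

```python
import numpy as np

rng = np.random.default_rng(0)

def pic_step(x, v, q_over_m, dt, L, ng, rho0):
    """One 1-D electrostatic PIC cycle on a periodic domain of length L.

    Particles carry unit charge; rho0 is a uniform neutralizing background.
    """
    dx = L / ng
    # 1) deposit charge with linear (cloud-in-cell) weighting
    xg = x / dx
    i0 = np.floor(xg).astype(int) % ng
    w1 = xg - np.floor(xg)
    rho = np.zeros(ng)
    np.add.at(rho, i0, 1.0 - w1)
    np.add.at(rho, (i0 + 1) % ng, w1)
    rho = rho / dx + rho0
    # 2) solve d^2(phi)/dx^2 = -rho with an FFT (mean mode dropped)
    k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2
    E = np.real(np.fft.ifft(-1j * k * phi_k))   # E = -d(phi)/dx
    # 3) gather the field at the particles and push them (leapfrog)
    Ep = (1.0 - w1) * E[i0] + w1 * E[(i0 + 1) % ng]
    v = v + q_over_m * Ep * dt
    x = (x + v * dt) % L
    return x, v

# Hypothetical usage: 200 unit-charge macro-particles plus a neutralizing background.
L, ng = 2.0 * np.pi, 64
x = rng.uniform(0.0, L, 200)
v = rng.normal(0.0, 0.1, 200)
for _ in range(100):
    x, v = pic_step(x, v, q_over_m=1.0, dt=0.05, L=L, ng=ng, rho0=-200.0 / L)
```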

  10. Hybrid discrete particle swarm optimization algorithm for capacitated vehicle routing problem

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The capacitated vehicle routing problem (CVRP) is an NP-hard problem. For large-scale problems, it is quite difficult to achieve an optimal solution with traditional optimization methods due to the high computational complexity. A new hybrid approximation algorithm is developed in this work to solve the problem. In the hybrid algorithm, discrete particle swarm optimization (DPSO) combines global search and local search to search for the optimal results, and simulated annealing (SA) uses a certain probability to avoid being trapped in a local optimum. The computational study showed that the proposed algorithm is a feasible and effective approach for the capacitated vehicle routing problem, especially for large-scale problems.
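
    A minimal sketch of the DPSO/SA hybrid idea described above is given below, under simplifying assumptions: routes are encoded as continuous random keys decoded by sorting (rather than the record's discrete particle encoding), and the instance, distance matrix, and cooling schedule are toy values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(keys, demand, capacity):
    """Random-key decoding: sort customers by key, split greedily by capacity."""
    order = np.argsort(keys)
    routes, route, load = [], [], 0.0
    for c in order:
        if load + demand[c] > capacity:
            routes.append(route)
            route, load = [], 0.0
        route.append(c)
        load += demand[c]
    routes.append(route)
    return routes

def cost(routes, dist):
    total = 0.0
    for r in routes:
        path = [0] + [c + 1 for c in r] + [0]   # node 0 is the depot
        total += sum(dist[path[i], path[i + 1]] for i in range(len(path) - 1))
    return total

# Toy instance: 10 customers at random coordinates, unit demands, capacity 4.
n, capacity = 10, 4
pts = rng.random((n + 1, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
demand = np.ones(n)

# PSO over continuous "keys", with an SA-style acceptance of worse personal bests.
swarm, vel = rng.random((20, n)), np.zeros((20, n))
pbest = swarm.copy()
pbest_cost = np.array([cost(decode(p, demand, capacity), dist) for p in swarm])
best_idx = np.argmin(pbest_cost)
gbest, gbest_cost = pbest[best_idx].copy(), pbest_cost[best_idx]
T = 1.0
for _ in range(200):
    r1, r2 = rng.random((2, 20, n))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (gbest - swarm)
    swarm = swarm + vel
    for i, p in enumerate(swarm):
        c = cost(decode(p, demand, capacity), dist)
        # SA-style acceptance: occasionally keep a worse personal best to escape local optima
        if c < pbest_cost[i] or rng.random() < np.exp(-(c - pbest_cost[i]) / T):
            pbest[i], pbest_cost[i] = p.copy(), c
        if c < gbest_cost:
            gbest, gbest_cost = p.copy(), c
    T *= 0.98
print("best total distance:", round(gbest_cost, 3))
```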

  11. Brightness through Local Constraint-LNA-Enhanced FIT Hybridization Probes for In Vivo Ribonucleotide Particle Tracking

    DEFF Research Database (Denmark)

    Hövelmann, Felix; Gaspar, Imre; Loibl, Simon

    2014-01-01

    Imaging the dynamics of RNA in living cells is usually performed by means of transgenic approaches that require modification of RNA targets and cells. Fluorogenic hybridization probes would also allow the analysis of wild-type organisms. We developed nuclease-resistant DNA forced intercalation (FIT......) probes that combine the high enhancement of fluorescence upon hybridization with the high brightness required to allow tracking of individual ribonucleotide particles (RNPs). In our design, a single thiazole orange (TO) intercalator dye is linked as a nucleobase surrogate and an adjacent locked nucleic...

  12. Development and testing of cut-cell boundaries for electromagnetic particle-in-cell codes.

    Science.gov (United States)

    Nieter, Chet; Smithe, David N.; Stoltz, Peter H.; Cary, John R.

    2007-03-01

    The finite difference time domain (FDTD) approach for electromagnetic particle-in-cell (EM-PIC) is a proven method for many problems involving interactions of charged particles with electromagnetic fields. However, accurately modeling fields and particle processes at complex boundaries with such methods is still an active research topic. A variety of methods have been developed for this purpose, but the testing and application of these methods to real-world problems is fairly limited. We have recently implemented the Dey-Mittra boundary algorithm into our EM-PIC code VORPAL. Convergence tests comparing how the frequency of cavity oscillations converges to the physical values for simulations run with stair-step and Dey-Mittra algorithms will be presented. These tests demonstrate how the Dey-Mittra algorithm provides considerable improvements over stair-step boundaries. A method to correct for the image charge accumulation from removing particles at complex surfaces will also be presented. Applications to superconducting RF cavities and high-powered microwave devices will be presented.

  13. HOTB: High precision parallel code for calculation of four-particle harmonic oscillator transformation brackets

    Science.gov (United States)

    Stepšys, A.; Mickevicius, S.; Germanas, D.; Kalinauskas, R. K.

    2014-11-01

    This new version of the HOTB program for calculation of the three- and four-particle harmonic oscillator transformation brackets provides some enhancements and corrections to the earlier version (Germanas et al., 2010) [1]. In particular, the new version allows calculations of harmonic oscillator transformation brackets to be performed in parallel using the MPI parallel communication standard. Moreover, intermediate calculations are carried out with higher precision using GNU quadruple precision and the arbitrary-precision library FMLib [2]. A package of Fortran code is presented. The calculation time for large matrices can be significantly reduced using the effective parallel code. Use of higher-precision methods in intermediate calculations increases the stability of the algorithms and extends their validity to larger input values. Catalogue identifier: AEFQ_v4_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFQ_v4_0.html Program obtainable from: CPC Program Library, Queen’s University of Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 Number of lines in programs, including test data, etc.: 1711 Number of bytes in distributed programs, including test data, etc.: 11667 Distribution format: tar.gz Program language used: FORTRAN 90 with MPI extensions for parallelism Computer: Any computer with FORTRAN 90 compiler Operating system: Windows, Linux, FreeBSD, True64 Unix Has the code been vectorized or parallelized?: Yes, parallelism using MPI extensions. Number of CPUs used: up to 999 RAM (per CPU core): Depending on allocated binomial and trinomial matrices and use of precision; at least 500 MB Catalogue identifier of previous version: AEFQ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 181, Issue 2, (2010) 420-425 Does the new version supersede the previous version? Yes Nature of problem: Calculation of matrices of three-particle harmonic oscillator brackets (3HOB) and four-particle harmonic oscillator brackets (4HOB) in a more

  14. Hybrid information privacy system: integration of chaotic neural network and RSA coding

    Science.gov (United States)

    Hsu, Ming-Kai; Willey, Jeff; Lee, Ting N.; Szu, Harold H.

    2005-03-01

    Electronic mail is adopted worldwide; most of it is easily hacked by hackers. In this paper, we propose a free, fast and convenient hybrid privacy system to protect email communication. The privacy system is implemented by combining the private-security RSA algorithm with a specific chaos neural network encryption process. The receiver can decrypt a received email as long as it can reproduce the specified chaos neural network series, the so-called spatial-temporal keys. The chaotic typing and initial seed value of the chaos neural network series, encrypted by the RSA algorithm, can reproduce the spatial-temporal keys. The encrypted chaotic typing and initial seed value are hidden in a watermark mixed nonlinearly with the message media, wrapped with convolution error correction codes for wireless 3rd generation cellular phones. The message media can be an arbitrary image. The pattern noise has to be considered during transmission and it could affect/change the spatial-temporal keys. Since any change/modification of the chaotic typing or initial seed value of the chaos neural network series is not acceptable, the RSA codec system must be robust and fault-tolerant over the wireless channel. The robust and fault-tolerant properties of chaos neural networks (CNN) were proved by a field theory of Associative Memory by Szu in 1997. The 1-D chaos-generating nodes from the logistic map with an arbitrary negative slope a = p/q, generating the N-shaped sigmoid, were first given by Szu in 1992. In this paper, we simulate the robust and fault-tolerance properties of CNN under additive noise and pattern noise. We also implement a private version of RSA coding and the chaos encryption process on messages.
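
    The division of labor described above, where RSA protects only the seed while a chaotic sequence encrypts the bulk message, can be illustrated with the toy sketch below. It uses textbook RSA with tiny primes and the standard logistic map as a stand-in for the chaos neural network series; none of the parameters are taken from the record, and the sketch is illustrative only, not secure.

```python
def rsa_encrypt(m, e=17, n=3233):        # textbook RSA with toy primes 61 and 53
    return pow(m, e, n)

def rsa_decrypt(c, d=2753, n=3233):
    return pow(c, d, n)

def keystream(seed_int, length, r=3.99):
    """Logistic-map keystream; seed_int < 3233 is mapped into (0, 1)."""
    x = (seed_int + 1) / 3235.0
    out = []
    for _ in range(length):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def encrypt(message: bytes, seed_int: int):
    ks = keystream(seed_int, len(message))
    cipher = bytes(m ^ k for m, k in zip(message, ks))
    return cipher, rsa_encrypt(seed_int)          # only the seed travels under RSA

def decrypt(cipher: bytes, sealed_seed: int):
    seed_int = rsa_decrypt(sealed_seed)
    ks = keystream(seed_int, len(cipher))
    return bytes(c ^ k for c, k in zip(cipher, ks))

cipher, sealed = encrypt(b"hello world", seed_int=1234)
assert decrypt(cipher, sealed) == b"hello world"
```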

  15. (Bio)hybrid materials based on optically active particles

    Science.gov (United States)

    Reitzig, Manuela; Härtling, Thomas; Opitz, Jörg

    2014-03-01

    In this contribution we provide an overview of current investigations on optically active particles (nanodiamonds, upconversion phosphors) for biohybrid and sensing applications. Due to their outstanding properties, nanodiamonds gain attention in various application fields such as microelectronics, optical monitoring, medicine, and biotechnology. Beyond the typical diamond properties such as high thermal conductivity and extreme hardness, the carbon surface and its various functional groups enable diverse chemical and biological surface functionalization. At Fraunhofer IKTS-MD we develop a customization of material surfaces via integration of chemically modified nanodiamonds at variable surfaces, e.g. bone implants and pipelines. For the first purpose, nanodiamonds are covalently modified at their surface with amino or phosphate functionalities that are known to increase adhesion to bone or titanium alloys. The second type of surface is approached via mechanical implementation into coatings. Besides nanodiamonds, we also investigate the properties of upconversion phosphors. In our contribution we show how upconversion phosphors are used to verify sterilization processes via a change of optical properties due to sterilizing electron beam exposure.

  16. Variable weight Khazani-Syed code using hybrid fixed-dynamic technique for optical code division multiple access system

    Science.gov (United States)

    Anas, Siti Barirah Ahmad; Seyedzadeh, Saleh; Mokhtar, Makhfudzah; Sahbudin, Ratna Kalos Zakiah

    2016-10-01

    Future Internet consists of a wide spectrum of applications with different bit rates and quality of service (QoS) requirements. Prioritizing the services is essential to ensure that the delivery of information is at its best. Existing technologies have demonstrated how service differentiation techniques can be implemented in optical networks using data link and network layer operations. However, a physical layer approach can further improve system performance at a prescribed received signal quality by applying control at the bit level. This paper proposes a coding algorithm to support optical domain service differentiation using spectral amplitude coding techniques within an optical code division multiple access (OCDMA) scenario. A particular user or service has a varying weight applied to obtain the desired signal quality. The properties of the new code are compared with other OCDMA codes proposed for service differentiation. In addition, a mathematical model is developed for performance evaluation of the proposed code using two different detection techniques, namely direct decoding and complementary subtraction.

  17. On the Way to Future's High Energy Particle Physics Transport Code

    CERN Document Server

    Bíró, Gábor; Futó, Endre

    2015-01-01

    High Energy Physics (HEP) needs a huge amount of computing resources. In addition, data acquisition, transfer, and analysis require a well-developed infrastructure too. In order to probe new physics disciplines, the luminosity of the accelerator facilities has to be increased, which produces more and more data in the experimental detectors. Both testing new theories and detector R&D are based on complex simulations. We have already reached the level where Monte Carlo detector simulation takes much more time than real data collection. This is why speeding up the calculations and simulations has become important in the HEP community. The Geant Vector Prototype (GeantV) project aims to optimize the most-used particle transport code by applying parallel computing and to exploit the capabilities of modern CPU and GPU architectures as well. With maximized concurrency at multiple levels, GeantV is intended to be the successor of the Geant4 particle transport code that has been used for two decades succe...

  18. Collaborative Multi-Layer Network Coding in Hybrid Cellular Cognitive Radio Networks

    KAUST Repository

    Moubayed, Abdallah J.

    2015-05-01

    In this paper, as an extension to [1], we propose a prioritized multi-layer network coding scheme for collaborative packet recovery in hybrid (interweave and underlay) cellular cognitive radio networks. This scheme allows the uncoordinated collaboration between the collocated primary and cognitive radio base-stations in order to minimize their own as well as each other's packet recovery overheads, thus improving their throughput. The proposed scheme ensures that each network's performance is not degraded by its help to the other network. Moreover, it guarantees that the primary network's interference threshold is not violated in the same and adjacent cells. Yet, the scheme allows the reduction of the recovery overhead in the collocated primary and cognitive radio networks. The reduction in the cognitive radio network is further amplified due to the perfect detection of spectrum holes which allows the cognitive radio base station to transmit at higher power without fear of violating the interference threshold of the primary network. For the secondary network, simulation results show reductions of 20% and 34% in the packet recovery overhead, compared to the non-collaborative scheme, for low and high probabilities of primary packet arrivals, respectively. For the primary network, this reduction was found to be 12%. © 2015 IEEE.

  19. Collaborative Multi-Layer Network Coding For Hybrid Cellular Cognitive Radio Networks

    KAUST Repository

    Moubayed, Abdallah J.

    2014-05-01

    In this thesis, as an extension to [1], we propose a prioritized multi-layer network coding scheme for collaborative packet recovery in hybrid (interweave and underlay) cellular cognitive radio networks. This scheme allows the uncoordinated collaboration between the collocated primary and cognitive radio base-stations in order to minimize their own as well as each other’s packet recovery overheads, thus improving their throughput. The proposed scheme ensures that each network’s performance is not degraded by its help to the other network. Moreover, it guarantees that the primary network’s interference threshold is not violated in the same and adjacent cells. Yet, the scheme allows the reduction of the recovery overhead in the collocated primary and cognitive radio networks. The reduction in the cognitive radio network is further amplified due to the perfect detection of spectrum holes which allows the cognitive radio base station to transmit at higher power without fear of violating the interference threshold of the primary network. For the secondary network, simulation results show reductions of 20% and 34% in the packet recovery overhead, compared to the non-collaborative scheme, for low and high probabilities of primary packet arrivals, respectively. For the primary network, this reduction was found to be 12%. Furthermore, with the use of fractional cooperation, the average recovery overhead is further reduced by around 5% for the primary network and around 10% for the secondary network when a high fractional cooperation probability is used.

  20. Hybrid scheduling mechanisms for Next-generation Passive Optical Networks based on network coding

    Science.gov (United States)

    Zhao, Jijun; Bai, Wei; Liu, Xin; Feng, Nan; Maier, Martin

    2014-10-01

    Network coding (NC) integrated into Passive Optical Networks (PONs) is regarded as a promising solution to achieve higher throughput and energy efficiency. To efficiently support multimedia traffic under this new transmission mode, novel NC-based hybrid scheduling mechanisms for Next-generation PONs (NG-PONs) including energy management, time slot management, resource allocation, and Quality-of-Service (QoS) scheduling are proposed in this paper. First, we design an energy-saving scheme that is based on Bidirectional Centric Scheduling (BCS) to reduce the energy consumption of both the Optical Line Terminal (OLT) and Optical Network Units (ONUs). Next, we propose an intra-ONU scheduling and an inter-ONU scheduling scheme, which takes NC into account to support service differentiation and QoS assurance. The presented simulation results show that BCS achieves higher energy efficiency under low traffic loads, clearly outperforming the alternative NC-based Upstream Centric Scheduling (UCS) scheme. Furthermore, BCS is shown to provide better QoS assurance.

  1. Dose estimation in space using the Particle and Heavy-Ion Transport code System (PHITS)

    Energy Technology Data Exchange (ETDEWEB)

    Gustafsson, Katarina

    2009-06-15

    The radiation risks in space are well known, but work still needs to be done in order to fully understand the radiation effects on humans and how to minimize the risks, especially now when the activity in space is increasing with plans for missions to the Moon and Mars. One goal is to develop transport codes that can estimate the radiation environment and its effects. These would be useful tools for reducing the radiation effects when designing and planning space missions. The Particle and Heavy-Ion Transport code System, PHITS, is a three-dimensional Monte Carlo code with great possibilities for performing radiation transport calculations and estimating radiation exposure such as absorbed dose, equivalent dose and dose equivalent. Therefore, a benchmark against experiments performed at the ISS was carried out, and the influence of different materials on the shielding was estimated. The simulated results already agree reasonably well with the measurements, but can most likely be significantly improved when more realistic shielding geometries are used. This indicates that PHITS is a useful tool for estimating radiation risks for humans in space and for designing the shielding of spacecraft.

  2. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    CERN Document Server

    Schäfer, Christoph M; Maindl, Thomas I; Speith, Roland; Scherrer, Samuel; Kley, Wilhelm

    2016-01-01

    Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as the x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. The new code allows for a tremendous increase in the speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. We have implemented the SPH equations to model gases, liquids, and elastic and plastic solid bodies, and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. We find an impressive performance gain using NVIDIA consumer devices compared to ou...

  3. Performance improvement of hybrid subcarrier multiplexing optical spectrum code division multiplexing system using spectral direct decoding detection technique

    Science.gov (United States)

    Sahbudin, R. K. Z.; Abdullah, M. K.; Mokhtar, M.

    2009-06-01

    This paper proposes a hybrid subcarrier multiplexing/optical spectrum code division multiplexing (SCM/OSCDM) system for the purpose of combining the advantages of both techniques. Optical spectrum code division multiple-access (OSCDMA) is one of the multiplexing techniques that is becoming popular because of the flexibility in the allocation of channels, the ability to operate asynchronously, enhanced privacy and increased capacity in networks of a bursty nature. On the other hand, the subcarrier multiplexing (SCM) technique is able to enhance the channel data rate of OSCDMA systems. In this paper, a newly developed detection technique for OSCDM, called the spectral direct decoding (SDD) detection technique, is compared mathematically with the AND subtraction detection technique. The system utilizes a new unified code construction named the KS (Khazani-Syed) code. The results characterizing the bit-error rate (BER) show that SDD offers significantly improved performance at a BER of 10⁻⁹.

  4. Optimal Control for a Parallel Hybrid Hydraulic Excavator Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Dong-yun Wang

    2013-01-01

    Full Text Available Optimal control using particle swarm optimization (PSO) is put forward for a parallel hybrid hydraulic excavator (PHHE). A power-train mathematical model of the PHHE is illustrated along with the analysis of its components' parameters. Then, the optimal control problem is addressed, and the PSO algorithm is introduced to deal with this nonlinear optimal problem, which contains many inequality/equality constraints. Comparisons between the optimal control and a rule-based one are then made, and the results show that hybrids with the optimal control would increase fuel economy. Although the PSO algorithm is an off-line optimization, it still provides a performance benchmark for the PHHE and also helps give a deeper insight into hybrid excavators.

  5. User Manual and Source Code for a LAMMPS Implementation of Constant Energy Dissipative Particle Dynamics (DPD-E)

    Science.gov (United States)

    2014-06-01

    User Manual and Source Code for a LAMMPS Implementation of Constant Energy Dissipative Particle Dynamics (DPD-E), by James P. Larentzos (Engility Corporation), John K. Brennan, Joshua D. Moore, and William D. Mattson.

  6. Particle production and chemical freezeout from the hybrid UrQMD approach at NICA energies

    CERN Document Server

    Tawfik, Abdel Nasser; Shalaby, Asmaa G; Hanafy, Mahmoud; Sorin, Alexander; Rogachevsky, Oleg; Scheinast, Werner

    2016-01-01

    The energy dependence of various particle ratios is calculated within the Ultra-Relativistic Quantum Molecular Dynamics approach and compared with the hadron resonance gas (HRG) model and measurements from various experiments, including RHIC-BES, SPS and AGS. It is found that the UrQMD particle ratios agree well with the experimental results at the RHIC-BES energies. Thus, we have utilized UrQMD in simulating particle ratios at other beam energies down to 3 GeV, which will be accessed at NICA and FAIR future facilities. We observe that the particle ratios for crossover and first-order phase transition, implemented in the hybrid UrQMD v3.4, are nearly indistinguishable, especially at low energies (at large baryon chemical potentials or high density).

  7. Particle production and chemical freezeout from the hybrid UrQMD approach at NICA energies

    Energy Technology Data Exchange (ETDEWEB)

    Nasser Tawfik, Abdel [Modern University for Technology and Information (MTI), Egyptian Center for Theoretical Physics (ECTP), Cairo (Egypt); World Laboratory for Cosmology and Particle Physics (WLCAPP), Cairo (Egypt); Abou-Salem, Loutfy I. [Benha University, Physics Department, Faculty of Science, Benha (Egypt); Shalaby, Asmaa G.; Hanafy, Mahmoud [World Laboratory for Cosmology and Particle Physics (WLCAPP), Cairo (Egypt); Benha University, Physics Department, Faculty of Science, Benha (Egypt); Sorin, Alexander [Joint Institute for Nuclear Research, Bogoliubov Laboratory of Theoretical Physics, Dubna, Moscow region (Russian Federation); Joint Institute for Nuclear Research, Veksler and Baldin Laboratory of High Energy Physics, Dubna, Moscow region (Russian Federation); National Research Nuclear University (MEPhI), Moscow (Russian Federation); Dubna International University, Dubna (Russian Federation); Rogachevsky, Oleg; Scheinast, Werner [Joint Institute for Nuclear Research, Veksler and Baldin Laboratory of High Energy Physics, Dubna, Moscow region (Russian Federation)

    2016-10-15

    The energy dependence of various particle ratios is calculated within the Ultra-relativistic Quantum Molecular Dynamics approach and compared with the hadron resonance gas (HRG) model and measurements from various experiments, including RHIC-BES, SPS and AGS. It is found that the UrQMD particle ratios agree well with the experimental results at the RHIC-BES energies. Thus, we have utilized UrQMD in simulating particle ratios at other beam energies down to 3 GeV, which will be accessed at NICA and FAIR future facilities. We observe that the particle ratios for crossover and first-order phase transition, implemented in the hybrid UrQMD v3.4, are nearly indistinguishable, especially at low energies (at large baryon chemical potentials or high density). (orig.)

  8. Solid Particle Erosion response of fiber and particulate filled polymer based hybrid composites: A review

    Directory of Open Access Journals (Sweden)

    Yogesh M

    2016-01-01

    Full Text Available The solid particle erosion behaviour of fiber and particulate filled polymer composites has been reviewed. An overview of the problem of solid particle erosion was given with respect to the processes and modes during erosion, with a focus on polymer matrix composites. The new aspects in the experimental studies of erosion of fiber and particulate filled polymer composites were emphasized in this paper. Various predictions and models proposed to describe the erosion rate were listed and their suitability was mentioned. Implementation of design of experiments and statistical techniques in analyzing the erosion behaviour of composites was discussed. Recent findings on the erosion response of multi-component hybrid composites were also presented. Recommendations were given on how to solve some open questions related to the structure-erosion resistance relationships for polymers and polymer-based hybrid composites.

  9. Detection of oligonucleotide hybridization on a single microparticle by time-resolved fluorometry: hybridization assays on polymer particles obtained by direct solid phase assembly of the oligonucleotide probes.

    Science.gov (United States)

    Hakala, H; Heinonen, P; Iitiä, A; Lönnberg, H

    1997-01-01

    Oligodeoxyribonucleotides were assembled by conventional phosphoramidite chemistry on uniformly sized (50 microns) porous glycidyl methacrylate/ethylene dimethacrylate (SINTEF) and compact polystyrene (Dynosphere) particles, the aminoalkyl side chains of which were further derivatized with DMTrO-acetyl groups. The linker was completely resistant toward ammonolytic deprotection of the base moieties. The quality of oligonucleotides was assessed by repeating the synthesis on the same particles derivatized with a cleavable ester linker. The ability of the oligonucleotide-coated particles to bind complementary sequences via hybridization was examined by following the attachment of oligonucleotides bearing a photoluminescent europium(III) chelate to the particles. The fluorescence emission was measured directly on a single particle. The effects of the following factors on the kinetics and efficiency of hybridization were studied: number of particles in a given volume of the assay solution, loading of oligonucleotide on the particle, concentration of the target oligonucleotide in solution, length of the hybridizing sequence, presence of noncomplementary sequences, and ionic strength. The fluorescence signal measured on a single particle after hybridization was observed to be proportional to the concentration of the target oligonucleotide in solution over a concentration range of 5 orders of magnitude.

  10. Hybrid three-dimensional variation and particle filtering for nonlinear systems

    Institute of Scientific and Technical Information of China (English)

    Leng Hong-Ze; Song Jun-Qiang

    2013-01-01

    This work addresses the problem of estimating the states of nonlinear dynamic systems with sparse observations. We present a hybrid three-dimensional variation (3DVar) and particle filtering (PF) method, which combines the advantages of 3DVar and particle-based filters. By minimizing the cost function, this approach produces a better proposal distribution of the state. Afterwards, the stochastic resampling step in the standard PF can be avoided through a deterministic scheme. The simulation results show that the performance of the new method is superior to the traditional ensemble Kalman filter (EnKF) and the standard PF, especially in highly nonlinear systems.
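
    A drastically simplified scalar illustration of the hybrid idea is sketched below: each particle is nudged by the closed-form minimizer of a 3DVar cost with a linear observation operator before reweighting and near-deterministic systematic resampling. The model, error variances, and observation sequence are toy assumptions, not those used in this record.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """Toy nonlinear scalar dynamics (a stand-in for the real system)."""
    return 0.5 * x + 25.0 * x / (1.0 + x ** 2) + rng.normal(0.0, 1.0, size=x.shape)

def hybrid_3dvar_pf_step(particles, y, B=2.0, R=1.0):
    """Forecast, 3DVar-like nudge of each particle, reweight, resample."""
    particles = model(particles)
    # Each particle serves as its own background; for a linear observation
    # operator the minimizer of the 3DVar cost has the closed form below.
    K = B / (B + R)
    particles = particles + K * (y - particles)
    weights = np.exp(-0.5 * (y - particles) ** 2 / R)
    weights /= weights.sum()
    # Systematic resampling: a single random offset, otherwise deterministic.
    edges = np.cumsum(weights)
    edges[-1] = 1.0
    u = (rng.random() + np.arange(particles.size)) / particles.size
    return particles[np.searchsorted(edges, u)]

particles = rng.normal(0.0, 2.0, 500)
for y in [2.0, -1.5, 0.7]:          # toy observation sequence
    particles = hybrid_3dvar_pf_step(particles, y)
print("posterior mean after last observation:", particles.mean())
```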

  11. Hybrid Adsorptive and Oxidative Removal of Natural Organic Matter Using Iron Oxide-Coated Pumice Particles

    Directory of Open Access Journals (Sweden)

    Sehnaz Sule Kaplan Bekaroglu

    2016-01-01

    Full Text Available The aim of this work was to combine the adsorptive and catalytic properties of iron oxide surfaces in a hybrid process using hydrogen peroxide and iron oxide-coated pumice particles to remove natural organic matter (NOM) in water. Experiments were conducted in batch, completely mixed reactors using various original and coated pumice particles. The results showed that both adsorption and catalytic oxidation mechanisms played a role in the removal of NOM. The hybrid process was found to be effective in removing NOM from water having a wide range of specific UV absorbance values. Iron oxide surfaces preferentially adsorbed UV280-absorbing NOM fractions. Furthermore, the strong oxidants produced from reactions between iron oxide surfaces and hydrogen peroxide also preferentially oxidized UV280-absorbing NOM fractions. Preloading of iron oxide surfaces with NOM slightly reduced the further NOM removal performance of the hybrid process. Overall, the results suggested that the tested hybrid process may be effective for the removal of NOM and the control of disinfection by-product formation.

  12. A New Hybrid Algorithm for Bankruptcy Prediction Using Switching Particle Swarm Optimization and Support Vector Machines

    OpenAIRE

    2015-01-01

    Bankruptcy prediction has been extensively investigated by data mining techniques since it is a critical issue in the accounting and finance field. In this paper, a new hybrid algorithm combining switching particle swarm optimization (SPSO) and support vector machine (SVM) is proposed to solve the bankruptcy prediction problem. In particular, a recently developed SPSO algorithm is exploited to search for the optimal parameter values of the radial basis function (RBF) kernel of the SVM. The new algori...
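
    The sketch below illustrates PSO-based tuning of the C and gamma parameters of an RBF-kernel SVM. For brevity it uses a plain PSO rather than the switching PSO of the record, and it assumes scikit-learn together with a synthetic dataset; all parameter ranges are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)  # stand-in data

def fitness(pos):
    """Cross-validated accuracy of an RBF SVM with C, gamma taken from log10 space."""
    C, gamma = 10.0 ** pos[0], 10.0 ** pos[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Plain (non-switching) PSO over (log10 C, log10 gamma) in [-2, 3] x [-4, 1].
low, high = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
swarm = rng.uniform(low, high, size=(15, 2))
vel = np.zeros_like(swarm)
pbest, pbest_fit = swarm.copy(), np.array([fitness(p) for p in swarm])
gbest = pbest[np.argmax(pbest_fit)].copy()

for _ in range(20):
    r1, r2 = rng.random((2, 15, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (gbest - swarm)
    swarm = np.clip(swarm + vel, low, high)
    fit = np.array([fitness(p) for p in swarm])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = swarm[improved], fit[improved]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print("best CV accuracy:", pbest_fit.max(), "at (C, gamma) =", 10.0 ** gbest)
```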

  13. Hybrid Optimization Algorithm of Particle Swarm Optimization and Cuckoo Search for Preventive Maintenance Period Optimization

    OpenAIRE

    Jianwen Guo; Zhenzhong Sun; Hong Tang; Xuejun Jia; Song Wang; Xiaohui Yan; Guoliang Ye; Guohong Wu

    2016-01-01

    All equipment must be maintained during its lifetime to ensure normal operation. Maintenance plays one of the critical roles in the success of manufacturing enterprises. This paper proposes a preventive maintenance period optimization model (PMPOM) to find an optimal preventive maintenance period. By making use of the advantages of the particle swarm optimization (PSO) and cuckoo search (CS) algorithms, a hybrid optimization algorithm of PSO and CS is proposed to solve the PMPOM problem. The test fun...

  14. PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Iandola, F N; O'Brien, M J; Procassini, R J

    2010-11-29

    Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

  15. A Hybrid Chaos-Particle Swarm Optimization Algorithm for the Vehicle Routing Problem with Time Window

    Directory of Open Access Journals (Sweden)

    Qi Hu

    2013-04-01

    Full Text Available State-of-the-art heuristic algorithms for the vehicle routing problem with time windows (VRPTW) usually converge slowly during the early iterations and easily fall into locally optimal solutions. Focusing on solving these problems, this paper analyzes the particle encoding and decoding strategy of the particle swarm optimization algorithm, the construction of the vehicle route and the judgment of the local optimal solution. Based on these, a hybrid chaos-particle swarm optimization algorithm (HPSO) is proposed to solve the VRPTW. The chaos algorithm is employed to re-initialize the particle swarm. An efficient insertion heuristic algorithm is also proposed to build valid vehicle routes in the particle decoding process. A particle swarm premature convergence judgment mechanism is formulated and combined with the chaos algorithm and Gaussian mutation into HPSO when the particle swarm falls into local convergence. Extensive experiments are carried out to test the parameter settings in the insertion heuristic algorithm and to verify that they correspond to the real distribution of the data in the concrete problem. It is also shown that HPSO achieves better performance than the other state-of-the-art algorithms for solving the VRPTW.

  16. Preparation and characterization of inorganic-organic trilayer core-shell polysilsesquioxane/polyacrylate/polydimethylsiloxane hybrid latex particles

    Science.gov (United States)

    Bai, Ruiqin; Qiu, Teng; Han, Feng; He, Lifan; Li, Xiaoyu

    2012-07-01

    The inorganic-organic trilayer core-shell polysilsesquioxane/polyacrylate/polydimethylsiloxane hybrid latex particles have been successfully prepared via seeded emulsion polymerization of acrylate monomers and octamethylcyclotetrasiloxane (D4) gradually, using functional polymethacryloxypropylsilsesquioxane (PSQ) latex particles with reactive methacryloxypropyl groups synthesized by the hydrolysis and polycondensation of (3-methacryloxypropyl)trimethoxysilane in the presence of mixed emulsifiers as seeds. The FTIR spectra show that acrylate monomers and D4 are effectively involved in the emulsion copolymerization and formed the polydimethylsiloxane-containing hybrid latex particles. Transmission electron microscopy (TEM) and dynamic light scattering (DLS) confirm that the resultant hybrid latex particles have evident trilayer core-shell structure and a narrow size distribution. XPS analysis also indicates that polysilsesquioxane/polyacrylate/polydimethylsiloxane hybrid latex particles have been successfully prepared and PDMS is rich in the surface of the hybrid latex film. Additionally, compared with the hybrid latex film without PDMS, the hybrid latex film containing PDMS shows higher hydrophobicity (water contact angle) and lower water absorption.

  17. Preparation and characterization of inorganic-organic trilayer core-shell polysilsesquioxane/polyacrylate/polydimethylsiloxane hybrid latex particles

    Energy Technology Data Exchange (ETDEWEB)

    Bai Ruiqin [College of Materials Science and Engineering, State Key Laboratory of Organic-Inorganic Composite, Key Laboratory of Carbon Fiber and Functional Polymers, Ministry of Education, Beijing University of Chemical Technology, Beijing 100029 (China); Qiu Teng, E-mail: qiuteng@mail.buct.edu.cn [College of Materials Science and Engineering, State Key Laboratory of Organic-Inorganic Composite, Key Laboratory of Carbon Fiber and Functional Polymers, Ministry of Education, Beijing University of Chemical Technology, Beijing 100029 (China); Han Feng; He Lifan [College of Materials Science and Engineering, State Key Laboratory of Organic-Inorganic Composite, Key Laboratory of Carbon Fiber and Functional Polymers, Ministry of Education, Beijing University of Chemical Technology, Beijing 100029 (China); Li Xiaoyu, E-mail: lixy@mail.buct.edu.cn [College of Materials Science and Engineering, State Key Laboratory of Organic-Inorganic Composite, Key Laboratory of Carbon Fiber and Functional Polymers, Ministry of Education, Beijing University of Chemical Technology, Beijing 100029 (China)

    2012-07-15

    The inorganic-organic trilayer core-shell polysilsesquioxane/polyacrylate/polydimethylsiloxane hybrid latex particles have been successfully prepared via seeded emulsion polymerization of acrylate monomers and octamethylcyclotetrasiloxane (D{sub 4}) gradually, using functional polymethacryloxypropylsilsesquioxane (PSQ) latex particles with reactive methacryloxypropyl groups synthesized by the hydrolysis and polycondensation of (3-methacryloxypropyl)trimethoxysilane in the presence of mixed emulsifiers as seeds. The FTIR spectra show that acrylate monomers and D{sub 4} are effectively involved in the emulsion copolymerization and formed the polydimethylsiloxane-containing hybrid latex particles. Transmission electron microscopy (TEM) and dynamic light scattering (DLS) confirm that the resultant hybrid latex particles have evident trilayer core-shell structure and a narrow size distribution. XPS analysis also indicates that polysilsesquioxane/polyacrylate/polydimethylsiloxane hybrid latex particles have been successfully prepared and PDMS is rich in the surface of the hybrid latex film. Additionally, compared with the hybrid latex film without PDMS, the hybrid latex film containing PDMS shows higher hydrophobicity (water contact angle) and lower water absorption.

  18. DCHAIN-SP 2001: High energy particle induced radioactivity calculation code

    Energy Technology Data Exchange (ETDEWEB)

    Kai, Tetsuya; Maekawa, Fujio; Kasugai, Yoshimi; Takada, Hiroshi; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kosako, Kazuaki [Sumitomo Atomic Energy Industries, Ltd., Tokyo (Japan)

    2001-03-01

    In order to contribute to safety design calculations of induced radioactivity in the JAERI/KEK high-intensity proton accelerator project facilities, DCHAIN-SP, which calculates high-energy-particle-induced radioactivity, has been updated to DCHAIN-SP 2001. The following three items were improved: (1) Fission yield data are included to apply the code to experimental facility design for nuclear transmutation of long-lived radioactive waste where fissionable materials are treated. (2) Activation cross section data below 20 MeV are revised. In particular, attention is paid to cross section data of materials which have a close relation to the facilities, i.e., mercury, lead and bismuth, and to tritium production cross sections which are important in terms of the safety of the facilities. (3) The user interface for input/output data is refined to perform calculations more efficiently than in the previous version. Information needed for use of the code is attached in the Appendices: the DCHAIN-SP 2001 manual, the procedures for installation and execution of DCHAIN-SP, and sample problems. (author)

  19. Application of the S3M and Mcnpx Codes in Particle Detector Development

    Science.gov (United States)

    Pavlovič, Márius; Sedlačková, Katarína; Šagátová, Andrea; Strašík, Ivan

    2014-02-01

    Semiconductor detectors can be used to detect neutrons if they are covered by a conversion layer. Some neutrons transfer their kinetic energy to hydrogen via elastic nuclear scattering in the conversion layer, and protons are produced as recoils. These protons enter the sensitive volume of the detector and are detected. In the process of detector development, Monte Carlo computer codes are necessary to simulate the detection process. This paper presents the main features of the S3M code (SRIM Supporting Software Modules) and shows its application potential. Examples are given for the neutron detectors with a conversion layer and for CVD (Chemical Vapor Deposition) diamond detectors for beam-condition monitors at the LHC (Large Hadron Collider). Special attention is paid to the S3M statistical modules that can be of interest also for other application areas like beam transport, accelerators, ion therapy, etc. The results are generated by MCNPX (Monte Carlo N-Particle eXtended) simulations used to optimize the thickness of the HDPE (high density polyethylene) conversion layer.

  20. Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian

    2017-02-01

    The THOR neutral particle transport code enables simulation of complex geometries for various problems from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V requiring computational efficiency. This has motivated various improvements including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code’s efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL’s Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former’s accuracy is bounded by the variability of communication on Falcon while the latter has an error on the order of 1%.

  1. Tribological Properties of Aluminum Alloy treated by Fine Particle Peening/DLC Hybrid Surface Modification

    Directory of Open Access Journals (Sweden)

    Nanbu H.

    2010-06-01

    Full Text Available In order to improve the adhesiveness of the DLC coating, Fine Particle Peening (FPP) treatment was employed as pre-treatment of the DLC coating process. FPP treatment was performed using SiC shot particles, and then AA6061-T6 aluminum alloy was DLC-coated. A SiC-rich layer was formed around the surface of the aluminum alloy by the FPP treatment because small chips of shot particles were embedded into the substrate surface. Reciprocating sliding tests were conducted to measure the friction coefficients. While the DLC coated specimen without FPP treatment showed a sudden increase in friction coefficient at the early stage of the wear cycles, the FPP/DLC hybrid treated specimen maintained a low friction coefficient value during the test period. Further investigation revealed that the tribological properties of the substrate after the DLC coating were improved with an increase in the amount of Si at the surface.

  2. Correlation of Particle Traversals with Clonogenic Survival Using Cell-Fluorescent Ion Track Hybrid Detector

    Science.gov (United States)

    Dokic, Ivana; Niklas, Martin; Zimmermann, Ferdinand; Mairani, Andrea; Seidel, Philipp; Krunic, Damir; Jäkel, Oliver; Debus, Jürgen; Greilich, Steffen; Abdollahi, Amir

    2015-01-01

    Development of novel approaches linking the physical characteristics of particles with biological responses are of high relevance for the field of particle therapy. In radiobiology, the clonogenic survival of cells is considered the gold standard assay for the assessment of cellular sensitivity to ionizing radiation. Toward further development of next generation biodosimeters in particle therapy, cell-fluorescent ion track hybrid detector (Cell-FIT-HD) was recently engineered by our group and successfully employed to study physical particle track information in correlation with irradiation-induced DNA damage in cell nuclei. In this work, we investigated the feasibility of Cell-FIT-HD as a tool to study the effects of clinical beams on cellular clonogenic survival. Tumor cells were grown on the fluorescent nuclear track detector as cell culture, mimicking the standard procedures for clonogenic assay. Cell-FIT-HD was used to detect the spatial distribution of particle tracks within colony-initiating cells. The physical data were associated with radiation-induced foci as surrogates for DNA double-strand breaks, the hallmark of radiation-induced cell lethality. Long-term cell fate was monitored to determine the ability of cells to form colonies. We report the first successful detection of particle traversal within colony-initiating cells at subcellular resolution using Cell-FIT-HD. PMID:26697410

  3. Correlation of Particle Traversals with Clonogenic Survival Using Cell-Fluorescent Ion Track Hybrid Detector

    Directory of Open Access Journals (Sweden)

    Ivana eDokic

    2015-12-01

    Full Text Available Development of novel approaches linking the physical characteristics of particles with biological responses are of high relevance for the field of particle therapy. In radiobiology, the clonogenic survival of cells is considered the gold standard assay for assessment of cellular sensitivity to ionizing radiation. Towards further development of next generation biodosimeters in particle therapy, the cell-fluorescent ion track hybrid detector (Cell-FIT-HD) was recently engineered by our group and successfully employed to study physical particle track information in correlation with irradiation-induced DNA damage in cell nuclei. In this work, we investigated the feasibility of Cell-FIT-HD as a tool to study the effects of clinical beams on cellular clonogenic survival. Tumor cells were grown on the FNTD as cell culture, mimicking the standard procedures for the clonogenic assay. Cell-FIT-HD was used to detect the spatial distribution of particle tracks within colony-initiating cells. The physical data were associated with radiation-induced foci as surrogates for DNA double-strand breaks (DSBs), the hallmark of radiation-induced cell lethality. Long-term cell fate was monitored to determine the ability of cells to form colonies. We report the first successful detection of particle traversal within colony-initiating cells at subcellular resolution using Cell-FIT-HD.

  4. Novel methods in the Particle-In-Cell accelerator Code-Framework Warp

    Energy Technology Data Exchange (ETDEWEB)

    Vay, J-L [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Grote, D. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Cohen, R. H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Friedman, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2012-12-26

    The Particle-In-Cell (PIC) Code-Framework Warp is being developed by the Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) to guide the development of accelerators that can deliver beams suitable for high-energy density experiments and implosion of inertial fusion capsules. It is also applied in various areas outside the Heavy Ion Fusion program to the study and design of existing and next-generation high-energy accelerators, including the study of electron cloud effects and laser wakefield acceleration for example. This study presents an overview of Warp's capabilities, summarizing recent original numerical methods that were developed by the HIFS-VNL (including PIC with adaptive mesh refinement, a large-timestep 'drift-Lorentz' mover for arbitrarily magnetized species, a relativistic Lorentz invariant leapfrog particle pusher, simulations in Lorentz-boosted frames, an electromagnetic solver with tunable numerical dispersion and efficient stride-based digital filtering), with special emphasis on the description of the mesh refinement capability. In addition, selected examples of the applications of the methods to the abovementioned fields are given.

  5. Hybrid Particle-In-Cell (PIC) simulation of heat transfer and ionization balance in overdense plasmas irradiated by subpicosecond pulse lasers

    Energy Technology Data Exchange (ETDEWEB)

    Zhidkov, A.; Sasaki, Akira [Japan Atomic Energy Research Inst., Neyagawa, Osaka (Japan). Kansai Research Establishment

    1998-11-01

    A 1D hybrid electromagnetic particle-in-cell code with new methods to include particle collisions and atomic kinetics is developed and applied to ultra-short-pulse laser plasma interaction. Using the Langevin equation to calculate the Coulomb collision term, the present code is shown to be fast and stable in calculating the particle motion in the PIC simulation. Furthermore, by noting that the scale length of the change of atomic kinetics is much longer than the Debye radius, we calculate ionization and X-ray emission on kinetics cells, which are determined by averaging plasma parameters such as the electron density and energy over a number of PIC cells. The absorption of a short-pulse laser by overdense plasmas is calculated in a self-consistent manner, including the effect of the rapid change of density and temperature caused by instantaneous heating and successive fast ionization of the target material. The calculated results agree well with those obtained from Fokker-Planck simulations as well as experiments, for non-local heat transport in plasmas with a steep temperature gradient, and for the absorption of a short laser pulse by solid-density targets. These results demonstrate the usefulness of the code and the computational method therein for understanding the physics of short-pulse laser plasma interaction experiments, and for application to gain calculations of short-pulse laser excited X-ray lasers as well. (author)
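
    To make the Langevin treatment of collisions concrete, here is a minimal sketch of an Ornstein-Uhlenbeck-type velocity update (deterministic drag plus a stochastic kick whose amplitude keeps a Maxwellian at the background temperature stationary). It is a schematic illustration, not the specific collision operator of the code described above; the collision frequency nu and temperature T_e are assumed inputs.

```python
import numpy as np

def langevin_collision_step(v, nu, T_e, m_e, dt, rng=None):
    """Advance particle velocities by one Langevin (drag + diffusion) step.

    v   : (N, 3) array of particle velocities [m/s]
    nu  : effective collision frequency [1/s] (assumed given)
    T_e : background temperature [J], m_e : particle mass [kg]
    Schematic Ornstein-Uhlenbeck update whose stationary distribution is a
    Maxwellian at temperature T_e; not the operator of the cited code.
    """
    rng = np.random.default_rng() if rng is None else rng
    drag = -nu * v * dt
    # Diffusion amplitude chosen to satisfy fluctuation-dissipation
    kick = np.sqrt(2.0 * nu * (T_e / m_e) * dt) * rng.standard_normal(v.shape)
    return v + drag + kick
```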

  6. Hybrid Multi-Objective Particle Swarm Optimization for Flexible Job Shop Scheduling Problem

    Directory of Open Access Journals (Sweden)

    S. V. Kamble

    2015-03-01

    Full Text Available A hybrid algorithm based on particle swarm optimization (PSO) and simulated annealing (SA) is proposed to solve the flexible job shop scheduling problem with five objectives to be minimized simultaneously: makespan, maximal machine workload, total workload, machine idle time and total tardiness. A rescheduling strategy is used to redistribute the workload once a machine breakdown takes place. The hybrid algorithm combines the high global search efficiency of PSO with SA's powerful ability to avoid being trapped in local minima. A hybrid multi-objective PSO (MPSO) and SA algorithm is proposed to identify an approximation of the Pareto front for the flexible job shop scheduling problem (FJSSP). Pareto ranking and crowding distance are used to evaluate the fitness of particles. MPSO drives the global search and SA is used for local search. The proposed algorithm is experimentally applied to two benchmark data sets. The results show that the proposed algorithm is better in terms of the quality of non-dominated solutions compared to the other algorithms in the literature.
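
    The coupling of the two metaheuristics can be sketched on a generic continuous objective as follows: a standard PSO velocity/position update supplies the global search, while a simulated-annealing-style Metropolis test decides whether a particle's personal best is replaced. This is a single-objective illustration only; the schedule encoding, Pareto ranking, crowding distance and rescheduling strategy of the paper are omitted, and all names and parameter values are placeholders.

```python
import numpy as np

def hybrid_pso_sa(f, dim, n_particles=30, iters=200, T0=1.0, cooling=0.95,
                  w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over R^dim with a PSO update plus an SA-style acceptance test."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    best_i = int(np.argmin(pbest_val))
    best_x, best_val = pbest[best_i].copy(), pbest_val[best_i]
    T = T0
    for _ in range(iters):
        gbest = pbest[np.argmin(pbest_val)]
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        for i, xi in enumerate(x):
            fi = f(xi)
            if fi < best_val:                      # track the true best found
                best_x, best_val = xi.copy(), fi
            # SA-style acceptance: worse candidates may still replace the
            # personal best with probability exp(-(fi - pbest_val[i]) / T)
            if fi < pbest_val[i] or rng.random() < np.exp(-(fi - pbest_val[i]) / T):
                pbest[i], pbest_val[i] = xi.copy(), fi
        T *= cooling                               # geometric cooling schedule
    return best_x, best_val

# Example usage on a simple sphere function:
# x_opt, f_opt = hybrid_pso_sa(lambda z: float(np.sum(z * z)), dim=5)
```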

  7. Development of flash nanoprecipitation as a scalable platform for production of hybrid polymer-inorganic Janus particles

    Science.gov (United States)

    Lee, Victoria E.; Prud'Homme, Robert K.; Priestley, Rodney D.

    Polymer Janus particles, containing two or more distinct domains, can act as supports for inorganic nanoparticles, stabilizing them against aggregation and templating anisotropic functionalization of the microparticles. This anisotropy can be advantageous for applications such as biofuel upgrading, bionanosensors, and responsive materials. Here, we introduce flash nanoprecipitation (FNP) as a scalable, fast process to create hybrid polymer-inorganic Janus particles with control of particle size and anisotropy. During FNP, polymer Janus particles form by rapid intermixing of a polymer solution with a poor solvent, inducing polymer precipitation and phase separation. Inorganic nanoparticles are then adsorbed selectively onto one domain of the polymer support by exploiting electrostatic interactions between the charged particles. By tuning polymer concentration and ratio in the feed stream, the particle size and anisotropy can be controlled. We further demonstrate that these hybrid particles can simultaneously stabilize emulsions and selectively catalyze the degradation of dye in one phase. With support from the Princeton Imaging Analysis Center.

  8. Global Optimization Based on the Hybridization of Harmony Search and Particle Swarm Optimization Methods

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    Full Text Available We consider a class of stochastic global optimization algorithms that are variously called behavioural, intelligent, metaheuristic, nature-inspired, swarm, multi-agent, or population algorithms; we use the last term. Experience with population algorithms for global optimization shows that applying a single such algorithm is not always effective. Much attention is therefore now paid to the hybridization of population algorithms for global optimization. Hybrid algorithms combine different algorithms, or identical algorithms with different values of their free parameters, so that the strength of one algorithm can compensate for the weakness of another. The purposes of this work are the development of a hybrid global optimization algorithm based on the known harmony search (HS) and particle swarm optimization (PSO) algorithms, its software implementation, and a study of its efficiency on a number of known benchmark problems and on a problem of dimensional optimization of a truss structure. We state the global optimization problem, review the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm, called PSO-HS, present results of computational experiments with the developed algorithm and software, and formulate the main results of the work and prospects for its development.

  9. Hybrid particle-field molecular dynamics simulations for dense polymer systems.

    Science.gov (United States)

    Milano, Giuseppe; Kawakatsu, Toshihiro

    2009-06-07

    We propose a theoretical scheme for a hybrid simulation technique in which self-consistent field theory and molecular dynamics simulation are combined (MD-SCF). We describe in detail the main implementation issues in the evaluation of a smooth three-dimensional spatial density distribution and its spatial gradient based on the positions of particles. The treatments of our multiscale model system on an atomic scale or on a specific coarse-grained scale are carefully discussed. We perform a series of test simulations on this hybrid model system and compare the structural correlations on the atomic scale with those of classical MD simulations. The results are very encouraging and open a way to an efficient strategy that possesses the main advantages common to the SCF and the atomistic approaches, while avoiding the disadvantages of each of the treatments.
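
    The core particle-field operations such a scheme relies on are the mapping of particle positions onto a smooth mesh density and the back-interpolation of the resulting density gradient to the particles as a force. A minimal one-dimensional cloud-in-cell sketch of these two steps is given below; it is an illustration under stated assumptions (periodic box, a single placeholder coupling constant kappa), not the authors' implementation.

```python
import numpy as np

def density_and_forces(x, L, n_cells, kappa=1.0):
    """1D cloud-in-cell: particle positions -> mesh density -> gradient force.

    x     : particle positions in a periodic box [0, L)
    kappa : assumed coupling constant turning a density gradient into a
            force on each particle (placeholder, not from the paper)
    Returns the mesh density and the force interpolated back to the particles.
    """
    dx = L / n_cells
    s = x / dx
    i0 = np.floor(s).astype(int) % n_cells        # left-hand cell index
    w1 = s - np.floor(s)                          # weight given to the right cell
    i1 = (i0 + 1) % n_cells

    rho = np.zeros(n_cells)
    np.add.at(rho, i0, 1.0 - w1)                  # scatter particle weights
    np.add.at(rho, i1, w1)
    rho /= dx                                     # number density per unit length

    # Central-difference gradient on the periodic mesh
    grad = (np.roll(rho, -1) - np.roll(rho, 1)) / (2.0 * dx)

    # Gather the gradient back to the particles and convert to a force
    grad_at_particles = (1.0 - w1) * grad[i0] + w1 * grad[i1]
    return rho, -kappa * grad_at_particles
```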

  10. Fluorescent Nanodiamond-Gold Hybrid Particles for Multimodal Optical and Electron Microscopy Cellular Imaging.

    Science.gov (United States)

    Liu, Weina; Naydenov, Boris; Chakrabortty, Sabyasachi; Wuensch, Bettina; Hübner, Kristina; Ritz, Sandra; Cölfen, Helmut; Barth, Holger; Koynov, Kaloian; Qi, Haoyuan; Leiter, Robert; Reuter, Rolf; Wrachtrup, Jörg; Boldt, Felix; Scheuer, Jonas; Kaiser, Ute; Sison, Miguel; Lasser, Theo; Tinnefeld, Philip; Jelezko, Fedor; Walther, Paul; Wu, Yuzhou; Weil, Tanja

    2016-10-12

    There is a continuous demand for imaging probes offering excellent performance in various microscopy techniques for comprehensive investigations of cellular processes by more than one technique. Fluorescent nanodiamond-gold nanoparticles (FND-Au) constitute a new class of "all-in-one" hybrid particles providing unique features for multimodal cellular imaging including optical imaging, electron microscopy, and potentially even quantum sensing. Confocal and optical coherence microscopy of the FND-Au allow fast investigations inside living cells via emission, scattering, and photothermal imaging techniques because the FND emission is not quenched by AuNPs. In electron microscopy, transmission electron microscopy (TEM) and scanning transmission electron microscopy (STEM) analysis of FND-Au reveals greatly enhanced contrast due to the gold particles as well as an extraordinary flickering behavior in three-dimensional cellular environments originating from the nanodiamonds. The unique multimodal imaging characteristics of FND-Au enable detailed studies inside cells ranging from statistical distributions at the entire cellular level (micrometers) down to the tracking of individual particles in subcellular organelles (nanometers). Herein, the processes of endosomal membrane uptake and release of FNDs were elucidated for the first time by the imaging of individual FND-Au hybrid nanoparticles with single-particle resolution. Their convenient preparation, the availability of various surface groups, their flexible detection modalities, and their single-particle contrast in combination with the capability for endosomal penetration and low cytotoxicity make FND-Au unique candidates for multimodal optical-electronic imaging applications with great potential for emerging techniques, such as quantum sensing inside living cells.

  11. A Hybrid Multiobjective Discrete Particle Swarm Optimization Algorithm for a SLA-Aware Service Composition Problem

    Directory of Open Access Journals (Sweden)

    Hao Yin

    2014-01-01

    Full Text Available For the SLA-aware service composition (SSC) problem, an optimization model is built and a hybrid multiobjective discrete particle swarm optimization algorithm (HMDPSO) is proposed in this paper. According to the characteristics of this problem, a particle updating strategy is designed by introducing a crossover operator. In order to restrain the particle swarm's premature convergence and increase its global search capacity, a swarm diversity indicator is introduced and a particle mutation strategy is proposed to increase the swarm diversity. To accelerate the process of obtaining feasible particle positions, a local search strategy based on constraint domination is proposed and incorporated into the algorithm. Finally, several parameters of HMDPSO are analyzed and set to suitable values, and then HMDPSO and the variant HMDPSO+, which incorporates the local search strategy, are compared with recently proposed related algorithms on cases of different scales. The results show that HMDPSO+ can solve the SSC problem more effectively.

  12. Layered double oxide (LDO) particle containing photoreactive hybrid layers with tunable superhydrophobic and photocatalytic properties

    Science.gov (United States)

    Deák, Ágota; Janovák, László; Csapó, Edit; Ungor, Ditta; Pálinkó, István; Puskás, Sándor; Ördög, Tibor; Ricza, Tamás; Dékány, Imre

    2016-12-01

    Inorganic/organic hybrid layers having superhydrophobic as well as photoreactive properties have been prepared. The hybrid thin films with micro- and nanosized dual-scale surface roughness consist of ∼25 μm layered double oxide (LDO) photocatalyst particles and a low-surface-energy poly(perfluorodecyl acrylate) [p(PFDAc)] fluoropolymer binder material. The application of p(PFDAc) resulted in a decrease in the surface free energy of the hydrophilic LDO. The structured-surface LDO particles with ∼12% ZnO phase content were synthesized from layered double hydroxide (LDH) spheres. The determined excitation wavelength and the calculated band gap energy were 386 nm and 3.23 eV, respectively. The hybrid thin films were prepared by a simple spray-coating method, which is a low-cost, fast and scalable film-forming technique. The surface roughness and also the wetting properties of the two-component hybrid layers proved to be finely adjustable via the LDO:fluoropolymer ratio. It was found that at 80-90 wt% LDO content, the thin films with a surface free energy of ∼12 mJ/m² displayed superhydrophobic behaviour (Θ > 150°) with satisfactory photocatalytic properties. This yields special photoreactive surfaces with superhydrophobic properties instead of the conventional superhydrophilic photocatalyst layers. According to the benzoic acid photodegradation test experiments, the hybrid layers with 80-90 wt% LDO content photooxidized 22-24% of the initial test molecule concentration (0.17 g/L) under UV-A (λmax = 365 nm) illumination.

  13. Enhancement of hybrid rocket combustion performance using nano-sized energetic particles

    Science.gov (United States)

    Risha, Grant Alexander

    Until now, the regression rate of classical hybrid rocket engines has typically been an order of magnitude lower than that of solid propellant motors; thus, hybrids require a relatively large fuel surface area for a given thrust level. In addition to low linear regression rates, relatively low combustion efficiency (87 to 92%), low mass burning rates, a varying oxidizer-to-fuel ratio during operation, and a lack of scaling laws have been reported. These disadvantages can be ameliorated by introducing nano-sized energetic powder additives into the solid fuel. The addition of nano-sized energetic particles into the solid fuel enhances performance as measured by parameters such as density specific impulse, mass and linear burning rates, and thrust. Thermophysical properties of the solid fuel such as density, heat of combustion, thermal diffusivity, and thermal conductivity are also enhanced. The types of nano-sized energetic particles used in this study include aluminum, boron, boron carbide, and some Viton-A coated particles. Since the combustion process of solid fuels in a hybrid rocket engine is governed by the mass flux of the oxidizer entering the combustion chamber, the rate-limiting process is the mixing and reacting of the pyrolysis products of the fuel grain with the incoming oxidizer. The overall goal of this research was to determine the relative propulsive and combustion behavior for a family of newly-developed HTPB-based solid-fuel formulations containing various nano-sized energetic particles. Seventeen formulations contained 13% additive by weight, one formulation (SF4) contained 6.5% additive by weight, and one formulation (SF19) contained 5.65% boron by weight. The two hybrid rocket engines that were used in this investigation were the Long Grain Center-Perforated (LGCP) rocket engine and the X-Ray Transparent Casing (XTC) rocket engine. The smaller scale LGCP rocket engine was used to evaluate all of the formulations because conducting experiments using the
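
    The dependence of fuel burning on oxidizer mass flux noted above is commonly captured by a textbook power-law regression-rate correlation, sketched below. The coefficients a and n are placeholders for illustration only, not values measured for the formulations in this study.

```python
def regression_rate(g_ox, a=0.1e-3, n=0.62):
    """Power-law regression-rate correlation r = a * G_ox**n.

    g_ox : oxidizer mass flux [kg/(m^2*s)]
    a, n : empirical coefficients (placeholders, not fitted values from
           the formulations studied in the cited work)
    Returns the fuel surface regression rate [m/s].
    """
    return a * g_ox ** n

# Example: regression_rate(100.0) gives the rate at G_ox = 100 kg/(m^2*s)
```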

  14. Profiling of fine- and coarse-mode particles with LIRIC (LIdar/Radiometer Inversion Code)

    Directory of Open Access Journals (Sweden)

    M. R. Perrone

    2014-08-01

    Full Text Available The paper investigates numerical procedures that allow determining the dependence on altitude of aerosol properties from multi-wavelength elastic lidar signals. In particular, the potential of the LIdar/Radiometer Inversion Code (LIRIC) to retrieve the vertical profiles of fine- and coarse-mode particles by combining 3-wavelength lidar measurements and collocated AERONET (AErosol RObotic NETwork) sun/sky photometer measurements is investigated. The used lidar signals are at 355, 532 and 1064 nm. Aerosol extinction coefficient (αL), lidar ratio (LRL), and Ångström exponent (ÅL) profiles from LIRIC are compared with the corresponding profiles (α, LR, and Å) retrieved from a Constrained Iterative Inversion (CII) procedure to investigate the LIRIC retrieval ability. Then, an aerosol classification framework which relies on the use of a graphical framework and on the combined analysis of the Ångström exponent at the 355 and 1064 nm wavelength pair, Å(355, 1064), and its spectral curvature, ΔÅ = Å(355, 532) − Å(532, 1064), is used to investigate the ability of LIRIC to retrieve vertical profiles of fine- and coarse-mode particles. The Å-ΔÅ aerosol classification framework allows estimating the dependence on altitude of the aerosol fine modal radius and of the fine-mode contribution to the whole aerosol optical thickness, as discussed in Perrone et al. (2014). The application of LIRIC to three different aerosol scenarios dealing with aerosol properties dependent on altitude has revealed that the differences between αL and α vary with the altitude and on average increase with the decrease of the lidar signal wavelength. It has also been found that the differences between ÅL and corresponding Å values vary with the altitude and the wavelength pair. The sensitivity of Ångström exponents to the aerosol size distribution, which varies with the wavelength pair, was responsible for these last results. The aerosol classification framework has revealed that
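
    For reference, the Ångström exponent of a wavelength pair and the spectral curvature used in the classification framework follow directly from the optical thickness (or extinction) at the three lidar wavelengths. A minimal sketch, with wavelengths in nanometres and function names chosen here for illustration:

```python
import numpy as np

def angstrom_exponent(tau_1, tau_2, lam_1, lam_2):
    """Ångström exponent for the wavelength pair (lam_1, lam_2) in nm."""
    return -np.log(tau_1 / tau_2) / np.log(lam_1 / lam_2)

def angstrom_and_curvature(tau_355, tau_532, tau_1064):
    """Return Å(355, 1064) and its curvature ΔÅ = Å(355, 532) - Å(532, 1064)."""
    a_355_1064 = angstrom_exponent(tau_355, tau_1064, 355.0, 1064.0)
    delta_a = (angstrom_exponent(tau_355, tau_532, 355.0, 532.0)
               - angstrom_exponent(tau_532, tau_1064, 532.0, 1064.0))
    return a_355_1064, delta_a
```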

  15. Further validation of the hybrid particle-mesh method for vortex shedding flow simulations

    Directory of Open Access Journals (Sweden)

    Lee Seung-Jae

    2015-11-01

    Full Text Available This is the continuation of a numerical study on vortex shedding from a blunt trailing-edge of a hydrofoil. In our previous work (Lee et al., 2015), numerical schemes for efficient computations were successfully implemented, i.e. multiple domains, the approximation of domain boundary conditions using cubic spline functions, and particle-based domain decomposition for better load balancing. In this study, numerical results obtained with a hybrid particle-mesh method which adopts the Vortex-In-Cell (VIC) method and the Brinkman penalization model are further rigorously validated through comparison to experimental data at a Reynolds number of 2 × 10⁶. The effects of changes in numerical parameters are also explored herein. We find that the present numerical method enables us to reasonably simulate the vortex shedding phenomenon, as well as turbulent wakes of a hydrofoil.

  16. A hybrid FEM-DEM approach to the simulation of fluid flow laden with many particles

    Science.gov (United States)

    Casagrande, Marcus V. S.; Alves, José L. D.; Silva, Carlos E.; Alves, Fábio T.; Elias, Renato N.; Coutinho, Alvaro L. G. A.

    2017-04-01

    In this work we address a contribution to the study of particle-laden fluid flows at scales smaller than those of two-fluid models (TFM). The hybrid model is based on a Lagrangian-Eulerian approach. A Lagrangian description is used for the particle system, employing the discrete element method (DEM), while a fixed Eulerian mesh is used for the fluid phase, modeled by the finite element method (FEM). The resulting coupled DEM-FEM model is integrated in time with a subcycling scheme. The aforementioned scheme is applied in the simulation of a seabed current to analyze which mechanisms lead to the emergence of bedload transport and sediment suspension, and also to quantify the effective viscosity of the seabed in comparison with the ideal no-slip wall condition. A simulation of a salt plume falling in a fluid column is performed, comparing the main characteristics of the system with an experiment.
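
    The subcycling idea mentioned above can be summarized in a short driver: the fluid (FEM) phase advances with a large time step while the particle (DEM) phase takes several smaller substeps in between, with coupling forces exchanged at the fluid time level. The sketch below only shows this control structure; the three callbacks are placeholders for the respective solvers and are not taken from the cited code.

```python
def couple_fem_dem(advance_fluid, advance_particles, exchange_forces,
                   t_end, dt_fluid, n_sub):
    """Generic subcycling driver for a two-way coupled FEM-DEM model.

    advance_fluid(dt)     : advances the Eulerian (FEM) fluid by dt
    advance_particles(dt) : advances the Lagrangian (DEM) particles by dt
    exchange_forces()     : maps fluid forces to the particles and particle
                            reactions back onto the fluid mesh
    All three callbacks are placeholder hooks; this sketch only illustrates
    the subcycling structure, not the cited implementation.
    """
    t = 0.0
    dt_dem = dt_fluid / n_sub
    while t < t_end:
        exchange_forces()              # couple at the fluid time level
        advance_fluid(dt_fluid)        # one large FEM step
        for _ in range(n_sub):         # several small DEM substeps
            advance_particles(dt_dem)
        t += dt_fluid
```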

  17. Particle Swarm and Bacterial Foraging Inspired Hybrid Artificial Bee Colony Algorithm for Numerical Function Optimization

    Directory of Open Access Journals (Sweden)

    Li Mao

    2016-01-01

    Full Text Available The artificial bee colony (ABC) algorithm has good performance in discovering the optimal solutions to difficult optimization problems, but it has weak local search ability and easily falls into local optima. In this paper, we introduce the chemotactic behavior of bacterial foraging optimization into the employed bees and adopt the principle of moving the particles toward the best solutions in particle swarm optimization to improve the global search ability of the onlooker bees, obtaining a hybrid artificial bee colony (HABC) algorithm. To reach a global optimal solution efficiently, we make the HABC algorithm converge rapidly in the early stages of the search process, while the search range contracts dynamically during the late stages. Our experimental results on 16 benchmark functions of CEC 2014 show that HABC achieves significant improvements in accuracy and convergence rate compared with the standard ABC, best-so-far ABC, directed ABC, Gaussian ABC, improved ABC, and memetic ABC algorithms.

  18. Hybrid Optimization Algorithm of Particle Swarm Optimization and Cuckoo Search for Preventive Maintenance Period Optimization

    Directory of Open Access Journals (Sweden)

    Jianwen Guo

    2016-01-01

    Full Text Available All equipment must be maintained during its lifetime to ensure normal operation. Maintenance plays a critical role in the success of manufacturing enterprises. This paper proposes a preventive maintenance period optimization model (PMPOM) to find an optimal preventive maintenance period. By making use of the advantages of particle swarm optimization (PSO) and the cuckoo search (CS) algorithm, a hybrid PSO-CS optimization algorithm is proposed to solve the PMPOM problem. Tests on benchmark functions show that the proposed algorithm exhibits better performance than particle swarm optimization and cuckoo search alone. Experimental results show that the proposed algorithm has the advantages of strong optimization ability and fast convergence speed in solving the PMPOM problem.

  19. Delay-area trade-off for MPRM circuits based on hybrid discrete particle swarm optimization

    Institute of Scientific and Technical Information of China (English)

    Jiang Zhidi; Wang Zhenhai; Wang Pengjun

    2013-01-01

    Polarity optimization for mixed polarity Reed-Muller (MPRM) circuits is a combinatorial issue. Based on the study of discrete particle swarm optimization (DPSO) and mixed polarity, the corresponding relation between particles and mixed polarities is established, and a delay-area trade-off method for large-scale MPRM circuits is proposed. Firstly, mutation operation and an elitist strategy from genetic algorithms are incorporated into DPSO to develop a hybrid DPSO (HDPSO). Then the best polarity for the delay and area trade-off is searched for large-scale MPRM circuits by combining the HDPSO with a delay estimation model. Finally, the proposed algorithm is tested on the MCNC benchmarks. Experimental results show that HDPSO achieves a better convergence than DPSO in terms of search capability for large-scale MPRM circuits.

  20. Magnetic properties of hybrid elastomers with magnetically hard fillers: rotation of particles

    Science.gov (United States)

    Stepanov, G. V.; Borin, D. Yu; Bakhtiiarov, A. V.; Storozhenko, P. A.

    2017-03-01

    Hybrid magnetic elastomers belonging to the family of magnetorheological elastomers contain magnetically hard components and are of the utmost interest for the development of semiactive and active damping devices as well as actuators and sensors. The processes of magnetizing of such elastomers are accompanied by structural rearrangements inside the material. When magnetized, the elastomer gains its own magnetic moment resulting in changes of its magneto-mechanical properties, which remain permanent, even in the absence of external magnetic fields. Influenced by the magnetic field, magnetized particles move inside the matrix forming chain-like structures. In addition, the magnetically hard particles can rotate to align their magnetic moments with the new direction of the external field. Such an elastomer cannot be demagnetized by the application of a reverse field.

  1. Polymethacrylate monolithic and hybrid particle-monolithic columns for reversed-phase and hydrophilic interaction capillary liquid chromatography.

    Science.gov (United States)

    Jandera, Pavel; Urban, Jirí; Skeríková, Veronika; Langmaier, Pavel; Kubícková, Romana; Planeta, Josef

    2010-01-01

    We prepared hybrid particle-monolithic polymethacrylate columns for micro-HPLC by in situ polymerization in fused silica capillaries pre-packed with 3-5 μm C18 and aminopropyl silica bonded particles, using polymerization mixtures based on laurylmethacrylate-ethylene dimethacrylate (co)polymers for the reversed-phase (RP) mode and [2-(methacryloyloxy)ethyl]-dimethyl-(3-sulfopropyl) zwitterionic (co)polymers for the hydrophilic interaction (HILIC) mode. The hybrid particle-monolithic columns showed reduced porosity and hold-up volumes, approximately 2-2.5 times lower in comparison to the pure monolithic columns prepared in the whole volume of empty capillaries. The elution volumes of sample compounds are also generally lower in comparison to packed or pure monolithic columns. The efficiency and permeability of the hybrid columns are intermediate between the properties of the reference pure monolithic and particle-packed columns. The chemistries of the embedded solid particles and of the interparticle monolithic moiety in the hybrid capillary columns contribute to the retention to various degrees, affecting the selectivity of separation. Some hybrid columns provided improved separations of proteins in comparison to the reference particle-packed columns in the reversed-phase mode. Zwitterionic hybrid particle-monolithic columns show dual-mode HILIC/RP retention behaviour depending on the composition of the mobile phase and allow separations of polar compounds such as phenolic acids in the HILIC mode at lower concentrations of acetonitrile and often in shorter analysis times in comparison to particle-packed and full-volume monolithic columns.

  2. Aminopropyl-Silica Hybrid Particles as Supports for Humic Acids Immobilization

    Directory of Open Access Journals (Sweden)

    Mónika Sándor

    2016-01-01

    Full Text Available A series of aminopropyl-functionalized silica nanoparticles were prepared through a basic two-step sol-gel process in water. Prior to being aminopropyl-functionalized, silica particles with an average diameter of 549 nm were prepared from tetraethyl orthosilicate (TEOS) using a Stöber method. In a second step, aminopropyl-silica particles were prepared by silanization with 3-aminopropyltriethoxysilane (APTES), added drop by drop to the sol-gel mixture. The synthesized amino-functionalized silica particles are intended to be used as supports for the immobilization of humic acids (HA) through electrostatic bonds. Furthermore, by inserting, besides APTES, unhydrolysable mono-, di- or trifunctional alkylsilanes (methyltriethoxysilane (MeTES), trimethylethoxysilane (Me3ES), diethoxydimethylsilane (Me2DES) and 1,2-bis(triethoxysilyl)ethane (BETES)) onto the silica particle surface, spacing of the free amino groups was intended in order to facilitate their interaction with large HA molecules. Two sorts of HA were used for evaluating the immobilization capacity of the novel aminosilane supports. The results proved the efficient functionalization of the silica nanoparticles with amino groups and showed that the immobilization of the two tested types of humic acid substances was well achieved for all the TEOS/APTES = 20/1 (molar ratio) silica hybrids, whether or not the amino functions were spaced by alkyl groups. It was shown that the density of aminopropyl functions is low enough at this low APTES fraction and does not require further spacing by alkyl groups. Moreover, all the hybrids having negative zeta potential values exhibited low interaction with HA molecules.

  3. Heat transfer partitioning model of film boiling of particle cluster in a liquid pool: implementation in a CFD code

    Science.gov (United States)

    Mahapatra, Pallab S.; Ghosh, Koushik; Manna, Nirmal K.

    2015-08-01

    In the present work an effective heat transfer partitioning model of three-phase (particles, liquid and vapour) flow and thermal interaction has been developed with a multi-fluid approach under film boiling conditions. The in-house multiphase flow code is based on the finite volume method of discretization and a SIMPLE-based pressure correction algorithm. From consideration of the mass, momentum and energy balance across the liquid-vapour interface, the vapour bubbles generated from the vapour film have been modeled and incorporated in the code. Different interaction terms between each phase are incorporated depending upon the flow regime. The code is validated with in-house and available experimental results. Finally, the effect of relevant parameters on void generation under film boiling conditions of particles is estimated.

  4. Hyperbolic divergence cleaning, the electrostatic limit, and potential boundary conditions for particle-in-cell codes

    Science.gov (United States)

    Pfeiffer, M.; Munz, C.-D.; Fasoulas, S.

    2015-08-01

    In a numerical solution of the Maxwell-Vlasov system, the consistency with the charge conservation and divergence conditions has to be maintained while solving the hyperbolic evolution equations of the Maxwell system, since the vector identity ∇ · (∇ × u) = 0 and/or the charge conservation of moving particles may not be satisfied completely due to discretization errors. One possible method to enforce the consistency is hyperbolic divergence cleaning. This hyperbolic constraint formulation of Maxwell's equations has been proposed previously, coupling the divergence conditions to the hyperbolic evolution equations, which can then be treated with the same numerical method. We pick up this method again and show that the electrostatic limit may be obtained by accentuating the divergence cleaning sub-system and converging to steady state. Hence, the electrostatic case can be treated by the electrodynamic code with reduced computational effort. In addition, potential boundary conditions as often given in practical applications can be coupled in a similar way to get appropriate boundary conditions for the field equations. Numerical results are shown for an electric dipole, a parallel-plate capacitor, and a Langmuir wave. The use of potential boundary conditions is demonstrated in an Einzel lens simulation.
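
    For orientation, one common form of such a hyperbolic (generalized Lagrange multiplier) cleaning couples two scalar correction potentials, φ and ψ, to Gauss's law and to the solenoidal constraint on B. The representative system below is written from the general divergence-cleaning literature rather than taken from the cited paper; χ and γ denote the dimensionless, tunable cleaning parameters:

```latex
\begin{aligned}
\partial_t \mathbf{E} - c^2\,\nabla \times \mathbf{B} + \chi c^2\,\nabla \phi &= -\,\mathbf{j}/\varepsilon_0,\\
\partial_t \mathbf{B} + \nabla \times \mathbf{E} + \gamma\,\nabla \psi &= 0,\\
\partial_t \phi + \chi\,\nabla \cdot \mathbf{E} &= \chi\,\rho/\varepsilon_0,\\
\partial_t \psi + \gamma c^2\,\nabla \cdot \mathbf{B} &= 0.
\end{aligned}
```

    Increasing χ transports errors in Gauss's law out of the domain more quickly, which is consistent with the idea, described above, of accentuating the cleaning sub-system and letting it relax to steady state to recover the electrostatic field.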

  5. Simulation of a Smith-Purcell FEL Using a Particle-in-Cell Code

    CERN Document Server

    Donohue, J T

    2005-01-01

    A simulation of the generation of Smith-Purcell (S-P) radiation at microwave frequencies is performed using the two-dimensional particle-in-cell code MAGIC. The simulation supposes that a continuous, thin (but infinitely wide), mono-energetic electron beam passes over a diffraction grating, while a strong axial magnetic field constrains the electrons to essentially one-dimensional motion. We find that the passage of the beam excites an evanescent electromagnetic wave in the proximity of the grating, which in turn leads to bunching of the initially continuous electron beam. The frequency and wave number of the bunching are determined, and found to be close to those proposed by Brau and co-workers in recent work [1]. This frequency is below the threshold for S-P radiation. However, the bunching is sufficiently strong that higher harmonics are clearly visible in the beam current. These harmonic frequencies correspond to allowed S-P radiation, and we see strong emission of such radiation at the appropriate angles...

  6. A Generalization Belief Propagation Decoding Algorithm for Polar Codes Based on Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Yingxian Zhang

    2014-01-01

    Full Text Available We propose a generalized belief propagation (BP) decoding algorithm based on particle swarm optimization (PSO) to improve the performance of polar codes. Through the analysis of the existing BP decoding algorithm, we first introduce a probability modifying factor at each node of the BP decoder, so as to enhance the error-correcting capacity of the decoding. Then, we generalize the BP decoding algorithm based on these modifying factors and derive the probability update equations for the proposed decoding. Based on the new probability update equations, we show the intrinsic relationship of the existing decoding algorithms. Finally, in order to achieve the best performance, we formulate an optimization problem to find the optimal probability modifying factors for the proposed decoding algorithm. Furthermore, a method based on the modified PSO algorithm is also introduced to solve that optimization problem. Numerical results show that the proposed generalized BP decoding algorithm achieves better performance than the existing BP decoding, which suggests the effectiveness of the proposed decoding algorithm.

  7. Preparation and characterization of TiO2-cationic hybrid nanoparticles as electrophoretic particles

    Science.gov (United States)

    Li, Jingjing; Deng, Liandong; Xing, Jinfeng; Dong, Anjie; Li, Xianggao

    2012-01-01

    Hybrid nanoparticles (TiO2-HNPs) with a TiO2 nanoparticle core and a shell of poly(N,N-dimethylaminoethyl methacrylate-co-methyl methacrylate), cross-linked with triallylamine, were first prepared via atom transfer radical polymerization (ATRP) in methanol. Positively charged hybrid nanoparticles were then produced by quaternization, with methyl iodide as the quaternization reagent, so as to endow them with greater electrophoretic mobility. The cationic hybrid nanoparticles (TiO2-CHNPs) were studied by means of transmission electron microscopy (TEM), scanning electron microscopy (SEM), Fourier-transform infrared (FTIR) spectroscopy and dynamic light scattering (DLS) measurements. The results indicate that the cationic polymer is successfully grafted onto the surface of the TiO2 nanoparticles. The particle size of the TiO2-CHNPs is about 150 nm and the polydispersity index (PDI) is 0.307. The zeta potential, the contrast ratio of white state to dark state, and the response time of the TiO2-CHNPs are +16.8 mV, 30 and 3 s, respectively, which shows their potential for application in the development of electrophoretic ink.

  8. Short-Term Wind Power Forecasting Using the Enhanced Particle Swarm Optimization Based Hybrid Method

    Directory of Open Access Journals (Sweden)

    Wen-Yeau Chang

    2013-09-01

    Full Text Available High penetration of wind power in the electricity system presents many challenges to power system operators, mainly due to the unpredictability and variability of wind power generation. Although wind energy may not be dispatchable, an accurate forecasting method for wind speed and power generation can help power system operators reduce the risk of an unreliable electricity supply. This paper proposes an enhanced particle swarm optimization (EPSO)-based hybrid forecasting method for short-term wind power forecasting. The hybrid forecasting method combines the persistence method, the back propagation neural network, and the radial basis function (RBF) neural network. The EPSO algorithm is employed to optimize the weight coefficients in the hybrid forecasting method. To demonstrate the effectiveness of the proposed method, it is tested on practical wind power generation data from a wind energy conversion system (WECS) installed on the Taichung coast of Taiwan. Comparisons of forecasting performance are made with the individual forecasting methods. Good agreement between the actual and forecast values is obtained; the test results show the proposed forecasting method is accurate and reliable.

  9. Hybrid (particle in cell-fluid) simulation of ion-acoustic soliton generation including super-thermal and trapped electrons

    Energy Technology Data Exchange (ETDEWEB)

    Nopoush, M.; Abbasi, H. [Faculty of Physics, Amirkabir University of Technology, P. O. Box 15875-4413, Tehran (Iran, Islamic Republic of)

    2011-08-15

    The present paper is devoted to the simulation of the nonlinear disintegration of a localized perturbation into an ion-acoustic soliton in a plasma. Recently, this problem was studied by a simple model [H. Abbasi et al., Plasma Phys. Controlled Fusion 50, 095007 (2008)]. The main assumptions were (i) in the electron velocity distribution function (DF), the ion-acoustic soliton velocity was neglected in comparison to the electron thermal velocity, (ii) on the ion-acoustic evolution time-scale, the electron velocity DF was assumed to be stationary, and (iii) the calculation was restricted to the small amplitude case. In order to generalize the model, one has to consider the evolution of the electron velocity DF for finite amplitudes. For this purpose, a one-dimensional electrostatic hybrid code, particle in cell (PIC)-fluid, was designed. It simulates the electron dynamics by the PIC method and the cold-ion dynamics by the fluid equations. The plasma contains a population of super-thermal electrons and, therefore, a Lorentzian (kappa) velocity DF is used to model the high-energy tail in the electron velocity DF. Electron trapping is included in the simulation in view of their nonlinear resonant interaction with the localized perturbation. A Gaussian initial perturbation is used to model the localized perturbation. The influence of both the trapped and the super-thermal electrons on this process is studied and compared with the previous model.

  10. CROWDED HYBRID PANEL MANUFACTURED WITH PEANUT HULLS REINFORCED WITH ITAÚBA WOOD PARTICLES

    Directory of Open Access Journals (Sweden)

    Guilherme Barbirato

    2014-09-01

    Full Text Available http://dx.doi.org/10.5902/1980509815726 In this paper, we studied the potential use of peanut hulls and wood particles of the itaúba (Mezilaurus itauba) species in order to add value to these materials through the manufacture of hybrid particle board, comparing their physical and mechanical performance as well as durability. For these procedures, a bi-component polyurethane resin based on castor bean (mamona) oil and urea-formaldehyde were used. The product quality was evaluated based on the requirements of the standards NBR 14.810:2006 and APA PRP 108, through physico-mechanical, microstructural and durability analyses. The results indicate that the incorporation of wood particles provides an increase in the physical-mechanical properties of the particleboard manufactured with peanut hulls, that the polyurethane resin based on castor oil was effective as a particle adhesive binder, and that the durability assay indicated that the material should be used under conditions of low exposure to moisture.

  11. Probing particle acceleration in lower hybrid turbulence via synthetic diagnostics produced by PIC simulations

    Science.gov (United States)

    Cruz, F.; Fonseca, R. A.; Silva, L. O.; Rigby, A.; Gregori, G.; Bamford, R. A.; Bingham, R.; Koenig, M.

    2016-10-01

    Efficient particle acceleration in astrophysical shocks can only be achieved in the presence of initial high energy particles. A candidate mechanism to provide an initial seed of energetic particles is lower hybrid turbulence (LHT). This type of turbulence is commonly excited in regions where space and astrophysical plasmas interact with large obstacles. Due to the nature of LH waves, energy can be resonantly transferred from ions (travelling perpendicular to the magnetic field) to electrons (travelling parallel to it) and the consequent motion of the latter in turbulent shock electromagnetic fields is believed to be responsible for the observed x-ray fluxes from non-thermal electrons produced in astrophysical shocks. Here we present PIC simulations of plasma flows colliding with magnetized obstacles showing the formation of a bow shock and the consequent development of LHT. The plasma and obstacle parameters are chosen in order to reproduce the results obtained in a recent experiment conducted at the LULI laser facility at Ecole Polytechnique (France) to study accelerated electrons via LHT. The wave and particle spectra are studied and used to produce synthetic diagnostics that show good qualitative agreement with experimental results. Work supported by the European Research Council (Accelerates ERC-2010-AdG 267841).

  12. Generation of discrete scattering cross sections and demonstration of Monte Carlo charged particle transport in the Milagro IMC code package

    Energy Technology Data Exchange (ETDEWEB)

    Walsh, J. A. [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, NW12-312 Albany, St. Cambridge, MA 02139 (United States); Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97331 (United States); Urbatsch, T. J. [XTD-5: Air Force Systems, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

    2013-07-01

    A new method for generating discrete scattering cross sections to be used in charged particle transport calculations is investigated. The method of data generation is presented and compared to current methods for obtaining discrete cross sections. The new, more generalized approach allows greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data generated with the new method is verified through a comparison with discrete data obtained with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code package, Milagro. The implementation of this capability is verified using test problems with analytic solutions as well as a comparison of electron dose-depth profiles calculated with Milagro and an already-established electron transport code. An initial investigation of a preliminary integration of the discrete cross section generation method with the new charged particle transport capability in Milagro is also presented. (authors)

  13. Confocal Raman Microscopy of Hybrid-Supported Phospholipid Bilayers within Individual C18-Functionalized Chromatographic Particles.

    Science.gov (United States)

    Kitt, Jay P; Harris, Joel M

    2016-09-01

    Measuring lipid-membrane partitioning of small molecules is critical to predicting bioavailability and investigating molecule-membrane interactions. A stable model membrane for such studies has been developed through assembly of a phospholipid monolayer on n-alkane-modified surfaces. These hybrid bilayers have recently been generated within n-alkyl-chain (C18)-modified porous silica and used in chromatographic retention studies of small molecules. Despite their successful application, determining the structure of hybrid bilayers within chromatographic silica is challenging because they reside at buried interfaces within the porous structure. In this work, we employ confocal Raman microscopy to investigate the formation and temperature-dependent structure of hybrid-phospholipid bilayers in C18-modified, porous-silica chromatographic particles. Porous silica provides sufficient surface area within a confocal probe volume centered in an individual particle to readily measure, with Raman microscopy, the formation of an ordered hybrid bilayer of 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC) with the surface C18 chains. The DMPC surface density was quantified from the relative Raman scattering intensities of C18 and phospholipid acyl chains and found to be ∼40% of a DMPC vesicle membrane. By monitoring Raman spectra acquired versus temperature, the bilayer main phase transition was observed to be broadened and shifted to higher temperature compared to a DMPC vesicle, in agreement with differential scanning calorimetry (DSC) results. Raman scattering of deuterated phospholipid was resolved from protonated C18 chain scattering, showing that the lipid acyl and C18 chains melt simultaneously in a single phase transition. The surface density of lipid in the hybrid bilayer, the ordering of both C18 and lipid acyl chains upon bilayer formation, and decoupling of C18 methylene C-H vibrations by deuterated lipid acyl chains all suggest an interdigitated acyl chain

  14. Hybrid Bacterial Foraging and Particle Swarm Optimization for detecting Bundle Branch Block.

    Science.gov (United States)

    Kora, Padmavathi; Kalva, Sri Ramakrishna

    2015-01-01

    Abnormal cardiac beat identification is a key process in the detection of heart diseases. Our present study describes a procedure for the detection of left and right bundle branch block (LBBB and RBBB) electrocardiogram (ECG) patterns. The electrical impulses that control the cardiac beat face difficulty in moving inside the heart. This problem is termed bundle branch block (BBB). BBB makes it harder for the heart to pump blood effectively through the circulatory system. ECG feature extraction is a key process in detecting heart ailments. Our present study comes up with a hybrid method combining two heuristic optimization methods, bacterial foraging optimization (BFO) and particle swarm optimization (PSO), for the feature selection of ECG signals. One of the major controlling forces of the BFO algorithm is the chemotactic movement of a bacterium that models a test solution. The chemotaxis process of BFO depends on random search directions, which may lead to a delay in achieving the global optimum solution. The hybrid technique, bacterial foraging-particle swarm optimization (BFPSO), incorporates the concepts from BFO and PSO and creates individuals in a new generation. This BFPSO method performs local search through the chemotactic movement of BFO, and the global search over the entire search domain is accomplished by a PSO operator. The BFPSO feature values are given as the input for the Levenberg-Marquardt neural network classifier.

  15. Single-Particle Cryo-EM and 3D Reconstruction of Hybrid Nanoparticles with Electron-Dense Components.

    Science.gov (United States)

    Yu, Guimei; Yan, Rui; Zhang, Chuan; Mao, Chengde; Jiang, Wen

    2015-10-01

    Single-particle cryo-electron microscopy (cryo-EM), accompanied with 3D reconstruction, is a broadly applicable tool for the structural characterization of macromolecules and nanoparticles. Recently, the cryo-EM field has pushed the limits of this technique to higher resolutions and samples of smaller molecular mass; however, some samples still present hurdles to this technique. Hybrid particles with electron-dense components, which have been studied using single-particle cryo-EM yet with limited success in 3D reconstruction due to the interference caused by electron-dense elements, constitute one group of such challenging samples. To process such hybrid particles, a masking method is developed in this work to adaptively remove pixels arising from electron-dense portions in individual projection images while maintaining maximal biomass signals for subsequent 2D alignment, 3D reconstruction, and iterative refinements. As demonstrated by the successful 3D reconstruction of an octahedral DNA/gold hybrid particle, which had previously been published without a 3D reconstruction, the devised strategy that combines adaptive masking and a standard single-particle 3D reconstruction approach overcomes the hurdle of electron-dense element interference and is generally applicable to cryo-EM structural characterization of most, if not all, hybrid nanomaterials with electron-dense components.
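
    The adaptive masking idea can be illustrated schematically: pixels whose intensity deviates strongly from the background statistics of a projection image are attributed to the electron-dense component and replaced with synthetic background noise before alignment and reconstruction. The sketch below is a generic illustration under that assumption, not the procedure implemented by the authors; the five-standard-deviation threshold is an arbitrary placeholder.

```python
import numpy as np

def mask_electron_dense(image, n_sigma=5.0, rng=None):
    """Replace pixels dominated by the electron-dense component with noise.

    Pixels deviating from the image mean by more than n_sigma standard
    deviations are treated as belonging to the electron-dense component
    (e.g. gold) and are replaced by Gaussian noise with the background
    statistics. Generic illustration, not the authors' exact procedure.
    """
    rng = np.random.default_rng() if rng is None else rng
    mu, sigma = image.mean(), image.std()
    dense = np.abs(image - mu) > n_sigma * sigma          # adaptive mask
    cleaned = image.copy()
    cleaned[dense] = rng.normal(mu, sigma, size=int(dense.sum()))
    return cleaned, dense
```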

  16. Radiation Protection Studies for Medical Particle Accelerators using Fluka Monte Carlo Code.

    Science.gov (United States)

    Infantino, Angelo; Cicoria, Gianfranco; Lucconi, Giulia; Pancaldi, Davide; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano; Marengo, Mario

    2017-04-01

    Radiation protection (RP) in the use of medical cyclotrons involves many aspects, both in routine use and in the decommissioning of a site. Guidelines for site planning and installation, as well as for RP assessment, are given in international documents; however, the latter typically offer analytic methods of calculation of shielding and materials activation, in approximate or idealised geometry set-ups. The availability of Monte Carlo (MC) codes with accurate up-to-date libraries for transport and interaction of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of modern computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to RP at the same time. In this work, the well-known FLUKA MC code was used to simulate different aspects of RP in the use of biomedical accelerators, particularly for the production of medical radioisotopes. In the context of the Young Professionals Award, held at the IRPA 14 conference, only a part of the complete work is presented. In particular, the simulation of the GE PETtrace cyclotron (16.5 MeV) installed at S. Orsola-Malpighi University Hospital evaluated the effective dose distribution around the equipment; the effective number of neutrons produced per incident proton and their spectral distribution; the activation of the structure of the cyclotron and the vault walls; and the activation of the ambient air, in particular the production of 41Ar. The simulations were validated, in terms of the physical and transport parameters to be used in the energy range of interest, through an extensive measurement campaign of the neutron environmental dose equivalent using a rem-counter and TLD dosemeters. The validated model was then used in the design and the licensing request of a new Positron Emission Tomography facility.

  17. Use of generalized curvilinear coordinate systems in electromagnetic and hybrid codes

    Energy Technology Data Exchange (ETDEWEB)

    Swift, D.W. [Univ. of Alaska, Fairbanks, AK (United States)

    1995-07-01

    The author develops a code to simulate the dynamics in the magnetosphere system. The calculation involves a single level, structured, curvilinear 2D mesh. The mesh density is varied to support regions which demand higher resolution.

  18. Application of wavelet filtering and Barker-coded pulse compression hybrid method to air-coupled ultrasonic testing

    Science.gov (United States)

    Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping

    2014-10-01

    Air-coupled ultrasonic testing (ACUT) has been viewed as a viable solution for defect detection in advanced composites used in the aerospace and aviation industries. However, the large mismatch of acoustic impedance at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) in the received signal. The utilisation of signal-processing techniques in non-destructive testing is therefore highly valuable. This paper presents a wavelet filtering and phase-coded pulse compression hybrid method to improve the SNR and output power of the received signal. The wavelet transform is utilised to filter insignificant components from the noisy ultrasonic signal, and a pulse compression process is used to improve the power of the correlated signal based on a cross-correlation algorithm. For the purpose of reasonable parameter selection, different families of wavelets (Daubechies, Symlet and Coiflet) and decomposition levels in the discrete wavelet transform are analysed, and different Barker codes (5-13 bits) are also analysed to acquire a higher main-to-side lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrated that the proposed method is very efficient in improving the SNR and signal strength. The proposed method appears to be a very promising tool for evaluating the integrity of high-ultrasound-attenuation composite materials using ACUT.
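
    A minimal sketch of the two processing stages, assuming the PyWavelets package is available: a wavelet soft-threshold denoising step followed by cross-correlation of the cleaned echo with a 13-bit Barker reference. In practice the reference would be the Barker-phase-coded excitation burst of the transducer carrier; rectangular chips and the universal threshold are simplifying assumptions, and the parameter values are placeholders rather than those selected in the paper.

```python
import numpy as np
import pywt

# Standard 13-bit Barker sequence
BARKER_13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients (universal threshold estimate)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise level estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def pulse_compress(received, chip_samples=10):
    """Denoise the echo, then cross-correlate with the 13-bit Barker reference."""
    reference = np.repeat(BARKER_13, chip_samples)        # rectangular chips
    return np.correlate(wavelet_denoise(received), reference, mode="same")
```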

  19. Influence of particles on the loading capacity and the temperature rise of water film in Ultra-high speed hybrid bearing

    Science.gov (United States)

    Zhu, Aibin; Li, Pei; Zhang, Yefan; Chen, Wei; Yuan, Xiaoyang

    2015-04-01

    Ultra-high speed machining technology enables high efficiency, high precision and high integrity of the machined surface. Previous research on hybrid bearings rarely considers the influence of solid particles in the lubricant or the ultra-high speed of the hybrid bearing, which cannot be ignored under the high-speed and micro-space conditions of ultra-high speed water-lubricated hybrid bearings. Considering the impact of solid particles in the lubricant and the turbulence and temperature-viscosity effects of the lubricant, the influences of particles on the pressure distribution, loading capacity and temperature rise of the lubricant film in a four-step-cavity ultra-high speed water-lubricated hybrid bearing are presented in this paper. The results show that the loading capacity of the hybrid bearing can be affected by changing the viscosity of the lubricant, and that large particles raise the bearing loading capacity further. The impact of solid particles in the lubricant on the water film temperature rise is related to the particle diameter and the minimum film thickness. Compared with soft particles, hard particles cause a greater increase in the water film temperature rise and loading capacity. When the speed of the hybrid bearing increases, the impact of solid particles on the hybrid bearing becomes increasingly apparent, especially for ultra-high speed water-lubricated hybrid bearings. This research presents the influences of solid particles on the loading capacity and the temperature rise of the water film in ultra-high speed hybrid bearings; the conclusions provide a new method to evaluate the influence of solid particles in the lubricant of ultra-high speed water-lubricated hybrid bearings, which is important for the performance calculation of ultra-high speed hybrid bearings, the design of filtration systems, and the safe operation of ultra-high speed hybrid bearings.

  20. Influence of Particles on the Loading Capacity and the Temperature Rise of Water Film in Ultra-high Speed Hybrid Bearing

    Institute of Scientific and Technical Information of China (English)

    ZHU Aibin; LI Pei; ZHANG Yefan; CHEN Wei; YUAN Xiaoyang

    2015-01-01

    Ultra-high speed machining technology enables high efficiency, high precision and high integrity of the machined surface. Previous research on hybrid bearings rarely considers the influence of solid particles in the lubricant or the ultra-high speed of the hybrid bearing, which cannot be ignored under the high-speed and micro-space conditions of ultra-high speed water-lubricated hybrid bearings. Considering the impact of solid particles in the lubricant and the turbulence and temperature-viscosity effects of the lubricant, the influences of particles on the pressure distribution, loading capacity and temperature rise of the lubricant film in a four-step-cavity ultra-high speed water-lubricated hybrid bearing are presented in this paper. The results show that the loading capacity of the hybrid bearing can be affected by changing the viscosity of the lubricant, and that large particles raise the bearing loading capacity further. The impact of solid particles in the lubricant on the water film temperature rise is related to the particle diameter and the minimum film thickness. Compared with soft particles, hard particles cause a greater increase in the water film temperature rise and loading capacity. When the speed of the hybrid bearing increases, the impact of solid particles on the hybrid bearing becomes increasingly apparent, especially for ultra-high speed water-lubricated hybrid bearings. This research presents the influences of solid particles on the loading capacity and the temperature rise of the water film in ultra-high speed hybrid bearings; the conclusions provide a new method to evaluate the influence of solid particles in the lubricant of ultra-high speed water-lubricated hybrid bearings, which is important for the performance calculation of ultra-high speed hybrid bearings, the design of filtration systems, and the safe operation of ultra-high speed hybrid bearings.

  1. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A

    Energy Technology Data Exchange (ETDEWEB)

    Taasti, Vicki Trier; Knudsen, Helge [Dept. of Physics and Astronomy, Aarhus University (Denmark); Holzscheiter, Michael H. [Dept. of Physics and Astronomy, Aarhus University (Denmark); Dept. of Physics and Astronomy, University of New Mexico (United States); Sobolevsky, Nikolai [Institute for Nuclear Research of the Russian Academy of Sciences (INR), Moscow (Russian Federation); Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Thomsen, Bjarne [Dept. of Physics and Astronomy, Aarhus University (Denmark); Bassler, Niels, E-mail: bassler@phys.au.dk [Dept. of Physics and Astronomy, Aarhus University (Denmark)

    2015-03-15

    The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An experimental depth dose curve obtained by the AD-4/ACE collaboration was compared with an earlier version of SHIELD-HIT, but since then inelastic annihilation cross sections for antiprotons have been updated and a more detailed geometric model of the AD-4/ACE experiment was applied. Furthermore, the Fermi–Teller Z-law, which is implemented by default in SHIELD-HIT12A has been shown not to be a good approximation for the capture probability of negative projectiles by nuclei. We investigate other theories which have been developed, and give a better agreement with experimental findings. The consequence of these updates is tested by comparing simulated data with the antiproton depth dose curve in water. It is found that the implementation of these new capture probabilities results in an overestimation of the depth dose curve in the Bragg peak. This can be mitigated by scaling the antiproton collision cross sections, which restores the agreement, but some small deviations still remain. Best agreement is achieved by using the most recent antiproton collision cross sections and the Fermi–Teller Z-law, even if experimental data conclude that the Z-law is inadequately describing annihilation on compounds. We conclude that more experimental cross section data are needed in the lower energy range in order to resolve this contradiction, ideally combined with more rigorous models for annihilation on compounds.

  2. Exact hybrid particle/population simulation of rule-based models of biochemical systems.

    Directory of Open Access Journals (Sweden)

    Justin S Hogg

    2014-04-01

    Full Text Available Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that
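
    To make the hybrid particle/population idea concrete, the toy stochastic simulation below keeps an abundant, unstructured species as a single population counter while a structured species is represented by explicit particles, in the spirit of the method described above. This is only a schematic sketch; the species, rate constants and update rules are invented for illustration and do not reproduce BioNetGen/NFsim behaviour:

      import random

      # Toy Gillespie-style hybrid simulation: free ligand L is a population
      # counter, receptors are explicit particles carrying a "bound" state.
      # All names and rate constants are hypothetical.
      k_on, k_off = 0.001, 0.1
      L = 5000                                            # population variable
      receptors = [{"bound": False} for _ in range(100)]  # particle list

      t, t_end = 0.0, 50.0
      while t < t_end:
          free = [r for r in receptors if not r["bound"]]
          bound = [r for r in receptors if r["bound"]]
          a_bind = k_on * L * len(free)     # propensity of L + R_free -> R_bound
          a_unbind = k_off * len(bound)     # propensity of R_bound -> R_free + L
          a_total = a_bind + a_unbind
          if a_total == 0.0:
              break
          t += random.expovariate(a_total)  # exponential waiting time
          if random.random() < a_bind / a_total:
              L -= 1                        # consume one ligand from the population
              random.choice(free)["bound"] = True
          else:
              L += 1                        # return one ligand to the population
              random.choice(bound)["bound"] = False

      print(f"t={t:.1f}  free ligands={L}  bound receptors={sum(r['bound'] for r in receptors)}")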

  3. Influence of Block Copolymer on Formation and Acid Resistant Properties of Hybrid CaCO3 Particles

    Institute of Scientific and Technical Information of China (English)

    LI Xiao-dong; HU Qiao-ling; ZHAO Shi-fang; SHEN Jia-cong

    2009-01-01

    The block copolymer polystyrene-b-poly(acrylic acid) (PS-b-PAA) was used as a structural template for the synthesis of CaCO3 microparticles. Through this procedure, acid-resistant hybrid CaCO3 microspheres were obtained, and the acid-resistant properties of this type of hybrid CaCO3 were studied. Size measurements show that the acid resistance of the hybrid particles differs in different solutions, such as HCl, EDTA, and H2SO4 solutions.

  4. Tribological Potential of Hybrid Composites Based on Zinc and Aluminium Alloys Reinforced with SiC and Graphite Particles

    Directory of Open Access Journals (Sweden)

    D. Džunić

    2012-12-01

    Full Text Available The paper reviews contemporary research in the area of hybrid composites based on zinc and aluminium alloys reinforced with SiC and graphite particles. Metal matrix composites (MMCs) based on a ZA matrix are being increasingly applied as light-weight and wear-resistant materials. Aluminium matrix composites with multiple reinforcements (hybrid AMCs) are finding increased applications because of their improved mechanical and tribological properties and hence are better substitutes for singly reinforced composites. The results of research show that the hybrid composites possess higher hardness, higher tensile strength, better wear resistance and a lower coefficient of friction when compared to the pure alloys.

  5. A hybrid multi-swarm particle swarm optimization to solve constrained optimization problems

    Institute of Scientific and Technical Information of China (English)

    Yong WANG; Zixing CAI

    2009-01-01

    In real-world applications, most optimization problems are subject to different types of constraints. These problems are known as constrained optimization problems (COPs), and solving them is a very important area of the optimization field. In this paper, a hybrid multi-swarm particle swarm optimization (HMPSO) is proposed to deal with COPs. This method adopts a parallel search operator in which the current swarm is partitioned into several sub-swarms and particle swarm optimization (PSO) serves as the search engine for each sub-swarm. Moreover, in order to explore more promising regions of the search space, differential evolution (DE) is incorporated to improve the personal best of each particle. First, the method is tested on 13 benchmark test functions and compared with three state-of-the-art approaches. The simulation results indicate that the proposed HMPSO is highly competitive in solving the 13 benchmark test functions. Afterward, the effectiveness of some mechanisms proposed in this paper and the effect of the parameter settings were validated by various experiments. Finally, HMPSO is further applied to solve the 24 benchmark test functions collected in the 2006 IEEE Congress on Evolutionary Computation (CEC2006), and the experimental results indicate that HMPSO is able to deal with 22 of the test functions.
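
    The core loop of such a multi-swarm PSO with a DE step applied to the personal bests can be sketched as follows. This is a simplified, unconstrained illustration on a sphere function; the constraint-handling rules of the paper are omitted and all parameter values are arbitrary:

      import random

      # Schematic sketch of a multi-swarm PSO whose personal bests are refined by a
      # DE/rand/1 step, in the spirit of the HMPSO idea summarized above.
      DIM, N_SWARMS, SWARM_SIZE, ITERS = 5, 3, 10, 200
      W, C1, C2, F, CR = 0.7, 1.5, 1.5, 0.5, 0.9

      def f(x):                                   # objective to minimize
          return sum(xi * xi for xi in x)

      particles = []
      for s in range(N_SWARMS):
          for _ in range(SWARM_SIZE):
              x = [random.uniform(-10, 10) for _ in range(DIM)]
              particles.append({"swarm": s, "x": x, "v": [0.0] * DIM,
                                "pbest": x[:], "pbest_f": f(x)})

      for _ in range(ITERS):
          # best personal best inside each sub-swarm guides that sub-swarm
          lbest = {s: min((p for p in particles if p["swarm"] == s),
                          key=lambda p: p["pbest_f"])["pbest"]
                   for s in range(N_SWARMS)}
          for p in particles:
              lb = lbest[p["swarm"]]
              for d in range(DIM):                # standard PSO velocity/position update
                  p["v"][d] = (W * p["v"][d]
                               + C1 * random.random() * (p["pbest"][d] - p["x"][d])
                               + C2 * random.random() * (lb[d] - p["x"][d]))
                  p["x"][d] += p["v"][d]
              fx = f(p["x"])
              if fx < p["pbest_f"]:
                  p["pbest"], p["pbest_f"] = p["x"][:], fx
              # DE/rand/1 mutation + binomial crossover applied to the personal best
              a, b, c = random.sample([q["pbest"] for q in particles], 3)
              jrand = random.randrange(DIM)
              trial = [a[d] + F * (b[d] - c[d]) if (random.random() < CR or d == jrand)
                       else p["pbest"][d] for d in range(DIM)]
              if f(trial) < p["pbest_f"]:
                  p["pbest"], p["pbest_f"] = trial, f(trial)

      print("best objective value:", min(p["pbest_f"] for p in particles))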

  6. Transparent and High Refractive Index Thermoplastic Polymer Glasses Using Evaporative Ligand Exchange of Hybrid Particle Fillers.

    Science.gov (United States)

    Wang, Zongyu; Lu, Zhao; Mahoney, Clare; Yan, Jiajun; Ferebee, Rachel; Luo, Danli; Matyjaszewski, Krzysztof; Bockstaller, Michael R

    2017-03-01

    Development of high refractive index glasses on the basis of commodity polymer thermoplastics presents an important requisite to further advancement of technologies ranging from energy efficient lighting to cost efficient photonics. This contribution presents a novel particle dispersion strategy that enables uniform dispersion of zinc oxide (ZnO) particles in a poly(methyl methacrylate) (PMMA) matrix to facilitate hybrid glasses with inorganic content exceeding 25% by weight, optical transparency in excess of 0.8/mm, and a refractive index greater than 1.64 in the visible wavelength range. The method is based on the application of evaporative ligand exchange to synthesize poly(styrene-r-acrylonitrile) (PSAN)-tethered zinc oxide (ZnO) particle fillers. Favorable filler-matrix interactions are shown to enable the synthesis of isomorphous blends with high molecular PMMA that exhibit improved thermomechanical stability compared to that of the pristine PMMA matrix. The concurrent realization of high refractive index and optical transparency in polymer glasses by modification of a thermoplastic commodity polymer could present a viable alternative to expensive specialty polymers in applications where high costs or demands for thermomechanical stability and/or UV resistance prohibit the application of specialty polymer solutions.

  7. Multiple description coding with spatial-temporal hybrid interpolation for video streaming in peer-to-peer networks

    Institute of Scientific and Technical Information of China (English)

    LU Meng-ting; LIN Chang-kuan; YAO Jason; CHEN Homer H.

    2006-01-01

    In this paper, we present an innovative design of multiple description coding with spatial-temporal hybrid interpolation (MDC-STHI) for peer-to-peer (P2P) video streaming. MDC can be effective in P2P networks because the nature of overlay routing makes path diversity more feasible. However, most MDC schemes require a redesign of video coding systems and are not cost-effective for wide deployment. We base our work on multiple state video coding, a form of MDC that can utilize standard codecs. Two quarter-sized video bit streams are generated as redundancies and embedded in the original-sized streams. With MDC-STHI, the nodes in P2P network can adjust the streaming traffic to satisfy the constraints of their devices and network environment. By design, the redundancies are used to compensate for missing frames, and can also be streamed independently to fulfill certain needs of low rate, low resolution applications. For better error concealment, optimal weights for spatial and temporal interpolation are determined at the source, quantized, and included in redundancies.
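
    The concealment step can be illustrated with a small sketch: a lost full-resolution frame is replaced by a weighted blend of a temporal prediction (the previously decoded frame) and a spatial prediction (the embedded quarter-sized redundancy upsampled to full size). The fixed weight, the nearest-neighbour upsampling and the toy frames below are illustrative assumptions, not the interpolation actually specified by MDC-STHI:

      import numpy as np

      def upsample2x(quarter):
          # nearest-neighbour 2x upsampling of a quarter-sized (H/2 x W/2) frame
          return np.repeat(np.repeat(quarter, 2, axis=0), 2, axis=1)

      def conceal(prev_frame, quarter_redundancy, w=0.6):
          # blend temporal prediction (previous frame) with spatial prediction
          temporal = prev_frame.astype(np.float64)
          spatial = upsample2x(quarter_redundancy).astype(np.float64)
          return np.clip(w * temporal + (1.0 - w) * spatial, 0, 255).astype(np.uint8)

      prev = np.full((8, 8), 120, dtype=np.uint8)      # toy previous frame
      redund = np.full((4, 4), 80, dtype=np.uint8)     # toy quarter-sized description
      print(conceal(prev, redund)[0, 0])               # 104 for w = 0.6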

  8. Generation expansion planning in Pool market: A hybrid modified game theory and particle swarm optimization

    Energy Technology Data Exchange (ETDEWEB)

    Moghddas-Tafreshi, S.M. [Department of Electrical Engineering, K.N. Toosi University of Technology, Tehran (Iran, Islamic Republic of); Shayanfar, H.A. [Center of Excellence for Power System Automation and Operation, Department of Electrical Engineering, Iran University of Science and Technology, Tehran (Iran, Islamic Republic of); Saliminia Lahiji, A. [Department of Electrical Engineering, K.N. Toosi University of Technology, Tehran (Iran, Islamic Republic of); Rabiee, A. [Center of Excellence for Power System Automation and Operation, Department of Electrical Engineering, Iran University of Science and Technology, Tehran (Iran, Islamic Republic of); Aghaei, J., E-mail: aghaei@iust.ac.i [Department of Electrical and Electronic Engineering, Shiraz University of Technology, Shiraz (Iran, Islamic Republic of)

    2011-02-15

    Unlike the traditional policy, Generation Expansion Planning (GEP) problem in competitive framework is complicated. In the new policy, each GENeration COmpany (GENCO) decides to invest in such a way that obtains as much profit as possible. This paper presents a new hybrid algorithm to determine GEP in a Pool market. The proposed algorithm is divided in two programming levels: master and slave. In the master level a modified game theory (MGT) is proposed to evaluate the contrast of GENCOs by the Independent System Operator (ISO). In the slave level, a particle swarm optimization (PSO) method is used to find the best solution of each GENCO for decision-making of investment. The validity of the proposed method is examined in the case study including three GENCOs with multi-types of power plants. The results show that the presented method is both satisfactory and consistent with expectation.

  9. TSV last for hybrid pixel detectors: Application to particle physics and imaging experiments

    CERN Document Server

    Henry, D; Berthelot, A; Cuchet, R; Chantre, C; Campbell, M

    Hybrid pixel detectors are now widely used in particle physics experiments and at synchrotron light sources. They have also stimulated growing interest in other fields and, in particular, in medical imaging. Through the continuous pursuit of miniaturization in CMOS it has been possible to increase the functionality per pixel while maintaining or even shrinking pixel dimensions. The main constraint on the more extensive use of the technology in all fields is the cost of module building and the difficulty of covering large areas seamlessly [1]. On the other hand, in the field of electronic component integration, a new approach, called 3D Integration, has been developed in recent years. This concept, based on using the vertical axis for component integration, allows the global performance of complex systems to be improved. Thanks to this technology, the cost and the form factor of components could be decreased and the performance of the global system could be enhanced. In the field of radiation imaging detectors the a...

  10. A Hybrid Multi Objective Particle Swarm Optimization Method to Discover Biclusters in Microarray Data

    CERN Document Server

    Lashkargir, Mohsen; Dastjerdi, Ahmad Baraani

    2009-01-01

    In recent years, with the development of the microarray technique, the discovery of useful knowledge from microarray data has become very important. Biclustering is a very useful data mining technique for discovering genes which have similar behavior. In microarray data, several objectives have to be optimized simultaneously, and often these objectives are in conflict with each other. A multi-objective model is capable of solving such problems. We propose a hybrid algorithm based on multi-objective particle swarm optimization for discovering biclusters in gene expression data. In our method, we consider a low level of overlapping amongst the biclusters and try to cover all elements of the gene expression matrix. Experimental results on the benchmark database show a significant improvement in both overlap among biclusters and coverage of elements in the gene expression matrix.

  11. Viewpoint Selection Using Hybrid Simplex Search and Particle Swarm Optimization for Volume Rendering

    Directory of Open Access Journals (Sweden)

    Zhang You-sai

    2012-09-01

    Full Text Available In this paper we proposed a novel method of viewpoint selection using the hybrid Nelder-Mead (NM simplex search and particle swarm optimization (PSO to improve the efficiency and the intelligent level of volume rendering. This method constructed the viewpoint quality evaluation function in the form of entropy by utilizing the luminance and structure features of the two-dimensional projective image of volume data. During the process of volume rendering, the hybrid NM-PSO algorithm intended to locate the globally optimal viewpoint or a set of the optimized viewpoints automatically and intelligently. Experimental results have shown that this method avoids redundant interactions and evidently improves the efficiency of volume rendering. The optimized viewpoints can focus on the important structural features or the region of interest in volume data and exhibit definite correlation with the perception character of human visual system. Compared with the methods based on PSO or NM simplex search, our method has the better performance of convergence rate, convergence accuracy and robustness.
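
    A bare-bones version of an entropy-style viewpoint quality measure, computed from the luminance histogram of the rendered 2D projection, is sketched below; the structural-feature terms used in the paper are omitted, so this only illustrates the general form of such a criterion:

      import numpy as np

      # Entropy of the luminance histogram of a projected image (values in [0, 1]);
      # a richer, more informative projection yields a higher score.
      def luminance_entropy(image, bins=64):
          hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
          p = hist.astype(np.float64) / max(hist.sum(), 1)
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      flat = np.full((128, 128), 0.5)                       # featureless projection
      rich = np.random.default_rng(0).random((128, 128))    # varied projection
      print(luminance_entropy(flat), luminance_entropy(rich))  # low vs. high score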

  12. Analysis of Winske-Daughton 3D Electromagnetic Particle Simulation of Ion Ring Generated Lower Hybrid Turbulence

    CERN Document Server

    Rudakov, Leonid; Mithaiwala, Manish; Ganguli, Gurudas

    2012-01-01

    Using electromagnetic particle-in-cell simulations Winske and Daughton [Phys Plasmas, 19, 072109, 2012] have recently demonstrated that the nonlinear evolution of a wave turbulence initiated by cold ion ring beam is vastly different in three dimensions than in two dimensions. We further analyze the Winske-Daughton three dimensional simulation data and show that the nonlinear induced scattering by thermal plasma particles is crucial for understanding the evolution of lower hybrid/whistler wave turbulence as described in the simulation.

  13. Low complexity source and channel coding for mm-wave hybrid fiber-wireless links

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Vegas Olmos, Juan José; Pang, Xiaodan;

    2014-01-01

    performance of several encoded high-definition video sequences constrained by the channel bitrate and the packet size. We argue that light video compression and low complexity channel coding for the W-band fiber-wireless link enable low-delay multiple channel 1080p wireless HD video transmission....

  14. Channel Efficiency with Security Enhancement for Remote Condition Monitoring of Multi Machine System Using Hybrid Huffman Coding

    Science.gov (United States)

    Datta, Jinia; Chowdhuri, Sumana; Bera, Jitendranath

    2016-12-01

    This paper presents a novel scheme for remote condition monitoring of a multi-machine system in which secured and coded data of induction machines with different parameters are communicated between state-of-the-art dedicated hardware units (DHU) installed at the machine terminals and centralized PC-based machine data management (MDM) software. The DHUs are built for the acquisition of different parameters from the respective machines, and hence are placed at their nearby panels in order to acquire the different parameters cost-effectively during running conditions. The MDM software collects these data through a communication channel in which all the DHUs are networked using the RS485 protocol. Before transmission, the parameter-related data are modified with the adoption of differential pulse code modulation (DPCM) and Huffman coding techniques. They are further encrypted with a private key, where different keys are used for different DHUs. In this way a data security scheme is adopted during passage through the communication channel in order to avoid any third-party attack on the channel. The hybrid mode of DPCM and Huffman coding is chosen to reduce the data packet length. A MATLAB-based simulation and its practical implementation using DHUs at three machine terminals (one healthy three-phase, one healthy single-phase and one faulty three-phase machine) prove its efficacy and usefulness for condition-based maintenance of multi-machine systems. The data at the central control room are decrypted and decoded using the MDM software. In this work it is observed that the channel efficiency with respect to the different parameter measurements has been increased considerably.
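
    The packing stage described above (differencing followed by entropy coding) can be sketched in a few lines. The sample values, and the choice of building a Huffman code for the DPCM residuals of each block, are illustrative assumptions rather than the scheme implemented in the DHUs:

      import heapq
      from collections import Counter

      def dpcm_encode(samples, predictor=0):
          # DPCM: transmit differences from the previous sample
          diffs, prev = [], predictor
          for s in samples:
              diffs.append(s - prev)
              prev = s
          return diffs

      def huffman_code(symbols):
          """Build a prefix code {symbol: bitstring} from symbol frequencies."""
          heap = [[freq, i, {sym: ""}]
                  for i, (sym, freq) in enumerate(Counter(symbols).items())]
          heapq.heapify(heap)
          if len(heap) == 1:                                 # single-symbol stream
              return {sym: "0" for sym in heap[0][2]}
          tie = len(heap)
          while len(heap) > 1:
              f1, _, c1 = heapq.heappop(heap)
              f2, _, c2 = heapq.heappop(heap)
              merged = {s: "0" + b for s, b in c1.items()}
              merged.update({s: "1" + b for s, b in c2.items()})
              heapq.heappush(heap, [f1 + f2, tie, merged])
              tie += 1
          return heap[0][2]

      samples = [230, 231, 231, 233, 232, 232, 231, 231]     # hypothetical readings
      diffs = dpcm_encode(samples)                           # small, repetitive values
      code = huffman_code(diffs)
      bitstream = "".join(code[d] for d in diffs)
      print(diffs, code, f"{len(bitstream)} bits")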

  15. Hybrid Particle Swarm Optimization based Day-Ahead Self-Scheduling for Thermal Generator in Competitive Electricity Market

    DEFF Research Database (Denmark)

    Pindoriya, Naran M.; Singh, S.N.; Østergaard, Jacob

    2009-01-01

    This paper presents a hybrid particle swarm optimization algorithm (HPSO) to solve the day-ahead self-scheduling for thermal power producer in competitive electricity market. The objective functions considered to model the self-scheduling problem are 1) to maximize the profit from selling energy...

  16. New hybrid genetic particle swarm optimization algorithm to design multi-zone binary filter.

    Science.gov (United States)

    Lin, Jie; Zhao, Hongyang; Ma, Yuan; Tan, Jiubin; Jin, Peng

    2016-05-16

    Binary phase filters have been used to achieve an optical needle with a small lateral size, but designing such a filter remains a scientific challenge. In this paper, a hybrid genetic particle swarm optimization (HGPSO) algorithm is proposed to design the binary phase filter. The HGPSO algorithm includes self-adaptive parameters and the recombination and mutation operations that originate from the genetic algorithm. On the benchmark test, the HGPSO algorithm achieves global optimization and fast convergence. In an easy-to-perform optimization procedure, the number of iterations of HGPSO is decreased to about a quarter of that of the original particle swarm optimization process. A multi-zone binary phase filter is designed using the HGPSO. A long depth of focus and high resolution are achieved simultaneously, where the depth of focus and focal spot transverse size are 6.05λ and 0.41λ, respectively. Therefore, the proposed HGPSO can be applied to the optimization of filters with multiple parameters.

  17. Hybrid particle swarm optimization with Cauchy distribution for solving reentrant flexible flow shop with blocking constraint

    Directory of Open Access Journals (Sweden)

    Chatnugrob Sangsawang

    2016-06-01

    Full Text Available This paper addresses a two-stage flexible flow shop problem with reentrant and blocking constraints in hard disk drive manufacturing. The problem can be formulated as a deterministic FFS|stage=2,rcrc, block|Cmax problem. In this study, an adaptive hybrid particle swarm optimization with Cauchy distribution (HPSO) was developed to solve the problem. The objective of this research is to find sequences that minimize the makespan. To assess its performance, computational experiments were performed on a number of test problems and the results are reported. The experimental results show that the proposed algorithm gives better solutions than classical particle swarm optimization (PSO) for all test problems. Additionally, the relative improvement (RI) of the makespan solutions obtained by the proposed algorithm with respect to those of current practice is computed in order to measure the quality of the makespan solutions generated by the proposed algorithm. The RI results show that the HPSO algorithm can improve the makespan solution by an average of 14.78%.

  18. Particle-in-cell simulation study of a lower-hybrid shock

    Science.gov (United States)

    Dieckmann, M. E.; Sarri, G.; Doria, D.; Ynnerman, A.; Borghesi, M.

    2016-06-01

    The expansion of a magnetized high-pressure plasma into a low-pressure ambient medium is examined with particle-in-cell simulations. The magnetic field points perpendicular to the plasma's expansion direction and binary collisions between particles are absent. The expanding plasma steepens into a quasi-electrostatic shock that is sustained by the lower-hybrid (LH) wave. The ambipolar electric field points in the expansion direction and it induces together with the background magnetic field a fast E cross B drift of electrons. The drifting electrons modify the background magnetic field, resulting in its pile-up by the LH shock. The magnetic pressure gradient force accelerates the ambient ions ahead of the LH shock, reducing the relative velocity between the ambient plasma and the LH shock to about the phase speed of the shocked LH wave, transforming the LH shock into a nonlinear LH wave. The oscillations of the electrostatic potential have a larger amplitude and wavelength in the magnetized plasma than in an unmagnetized one with otherwise identical conditions. The energy loss to the drifting electrons leads to a noticeable slowdown of the LH shock compared to that in an unmagnetized plasma.

  19. Particle-in-cell simulation study of a lower-hybrid shock

    CERN Document Server

    Dieckmann, Mark Eric; Doria, Domenico; Ynnerman, Anders; Borghesi, Marco

    2016-01-01

    The expansion of a magnetized high-pressure plasma into a low-pressure ambient medium is examined with particle-in-cell (PIC) simulations. The magnetic field points perpendicularly to the plasma's expansion direction and binary collisions between particles are absent. The expanding plasma steepens into a quasi-electrostatic shock that is sustained by the lower-hybrid (LH) wave. The ambipolar electric field points in the expansion direction and it induces together with the background magnetic field a fast E cross B drift of electrons. The drifting electrons modify the background magnetic field, resulting in its pile-up by the LH shock. The magnetic pressure gradient force accelerates the ambient ions ahead of the LH shock, reducing the relative velocity between the ambient plasma and the LH shock to about the phase speed of the shocked LH wave, transforming the LH shock into a nonlinear LH wave. The oscillations of the electrostatic potential have a larger amplitude and wavelength in the magnetized plasma than...

  20. OPTIMIZED PARTICLE SWARM OPTIMIZATION BASED DEADLINE CONSTRAINED TASK SCHEDULING IN HYBRID CLOUD

    Directory of Open Access Journals (Sweden)

    Dhananjay Kumar

    2016-01-01

    Full Text Available Cloud computing is a dominant way of sharing computing resources that can be configured and provisioned easily. Task scheduling in a hybrid cloud is a challenge, as it suffers from producing the best QoS (Quality of Service) when there is high demand. In this paper a new resource allocation algorithm is proposed to find the best external cloud provider when the intermediate provider's resources are not enough to satisfy the customer's demand. The proposed algorithm, called Optimized Particle Swarm Optimization (OPSO), combines two metaheuristic algorithms, namely particle swarm optimization (PSO) and ant colony optimization (ACO). These metaheuristic algorithms are used for optimization in the search space of the required solution, to find the best resource from the pool of resources and to obtain maximum profit even when the number of tasks submitted for execution is very high. This optimization is performed to allocate job requests to internal and external cloud providers to obtain maximum profit. It helps to improve the system performance by improving the CPU utilization and handling multiple requests at the same time. The simulation results show that OPSO yields 0.1%-5% more profit to the intermediate cloud provider compared with the standard PSO and ACO algorithms, and it also increases the CPU utilization by 0.1%.

  1. Assessment of Microphysical Models in the National Combustion Code (NCC) for Aircraft Particulate Emissions: Particle Loss in Sampling Lines

    Science.gov (United States)

    Wey, Thomas; Liu, Nan-Suey

    2008-01-01

    This paper at first describes the fluid network approach recently implemented into the National Combustion Code (NCC) for the simulation of transport of aerosols (volatile particles and soot) in the particulate sampling systems. This network-based approach complements the other two approaches already in the NCC, namely, the lower-order temporal approach and the CFD-based approach. The accuracy and the computational costs of these three approaches are then investigated in terms of their application to the prediction of particle losses through sample transmission and distribution lines. Their predictive capabilities are assessed by comparing the computed results with the experimental data. The present work will help establish standard methodologies for measuring the size and concentration of particles in high-temperature, high-velocity jet engine exhaust. Furthermore, the present work also represents the first step of a long term effort of validating physics-based tools for the prediction of aircraft particulate emissions.

  2. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    Energy Technology Data Exchange (ETDEWEB)

    Courau, T.; Plagne, L.; Ponicot, A. [EDF R and D, 1, Avenue du General de Gaulle, 92141 Clamart Cedex (France); Sjoden, G. [Nuclear and Radiological Engineering, Georgia Inst. of Technology, Atlanta, GA 30332 (United States)

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses a 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the k{sub eff}, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  3. Modeling Spectra of Icy Satellites and Cometary Icy Particles Using Multi-Sphere T-Matrix Code

    Science.gov (United States)

    Kolokolova, Ludmilla; Mackowski, Daniel; Pitman, Karly M.; Joseph, Emily C. S.; Buratti, Bonnie J.; Protopapa, Silvia; Kelley, Michael S.

    2016-10-01

    The Multi-Sphere T-matrix code (MSTM) allows rigorous computations of characteristics of the light scattered by a cluster of spherical particles. It was introduced to the scientific community in 1996 (Mackowski & Mishchenko, 1996, JOSA A, 13, 2266). Later it was put online and became one of the most popular codes to study photopolarimetric properties of aggregated particles. Later versions of this code, especially its parallelized version MSTM3 (Mackowski & Mishchenko, 2011, JQSRT, 112, 2182), were used to compute angular and wavelength dependence of the intensity and polarization of light scattered by aggregates of up to 4000 constituent particles (Kolokolova & Mackowski, 2012, JQSRT, 113, 2567). The version MSTM4 considers large thick slabs of spheres (Mackowski, 2014, Proc. of the Workshop ``Scattering by aggregates``, Bremen, Germany, March 2014, Th. Wriedt & Yu. Eremin, Eds., 6) and is significantly different from the earlier versions. It adopts a Discrete Fourier Convolution, implemented using a Fast Fourier Transform, for evaluation of the exciting field. MSTM4 is able to treat dozens of thousands of spheres and is about 100 times faster than the MSTM3 code. This allows us not only to compute the light scattering properties of a large number of electromagnetically interacting constituent particles, but also to perform multi-wavelength and multi-angular computations using computer resources with rather reasonable CPU and computer memory. We used MSTM4 to model near-infrared spectra of icy satellites of Saturn (Rhea, Dione, and Tethys data from Cassini VIMS), and of icy particles observed in the coma of comet 103P/Hartley 2 (data from EPOXI/DI HRII). Results of our modeling show that in the case of icy satellites the best fit to the observed spectra is provided by regolith made of spheres of radius ~1 micron with a porosity in the range 85% - 95%, which slightly varies for the different satellites. Fitting the spectra of the cometary icy particles requires icy

  4. A Hybrid Scheme Based on Pipelining and Multitasking in Mobile Application Processors for Advanced Video Coding

    Directory of Open Access Journals (Sweden)

    Muhammad Asif

    2015-01-01

    Full Text Available One of the key requirements for mobile devices is to provide high-performance computing at lower power consumption. The processors used in these devices provide specific hardware resources to handle computationally intensive video processing and interactive graphical applications. Moreover, processors designed for low-power applications may introduce limitations on the availability and usage of resources, which present additional challenges to the system designers. Owing to the specific design of the JZ47x series of mobile application processors, a hybrid software-hardware implementation scheme for H.264/AVC encoder is proposed in this work. The proposed scheme distributes the encoding tasks among hardware and software modules. A series of optimization techniques are developed to speed up the memory access and data transferring among memories. Moreover, an efficient data reusage design is proposed for the deblock filter video processing unit to reduce the memory accesses. Furthermore, fine grained macroblock (MB level parallelism is effectively exploited and a pipelined approach is proposed for efficient utilization of hardware processing cores. Finally, based on parallelism in the proposed design, encoding tasks are distributed between two processing cores. Experiments show that the hybrid encoder is 12 times faster than a highly optimized sequential encoder due to proposed techniques.

  5. OpenGeoSys-GEMS: Hybrid parallelization of a reactive transport code with MPI and threads

    Science.gov (United States)

    Kosakowski, G.; Kulik, D. A.; Shao, H.

    2012-04-01

    OpenGeoSys-GEMS is a generic purpose reactive transport code based on the operator splitting approach. The code couples the Finite-Element groundwater flow and multi-species transport modules of the OpenGeoSys (OGS) project (http://www.ufz.de/index.php?en=18345) with the GEM-Selektor research package to model thermodynamic equilibrium of aquatic (geo)chemical systems utilizing the Gibbs Energy Minimization approach (http://gems.web.psi.ch/). The combination of OGS and the GEM-Selektor kernel (GEMS3K) is highly flexible due to the object-oriented modular code structures and the well defined (memory based) data exchange modules. Like other reactive transport codes, the practical applicability of OGS-GEMS is often hampered by the long calculation time and large memory requirements. • For realistic geochemical systems which might include dozens of mineral phases and several (non-ideal) solid solutions the time needed to solve the chemical system with GEMS3K may increase exceptionally. • The codes are coupled in a sequential non-iterative loop. In order to keep the accuracy, the time step size is restricted. In combination with a fine spatial discretization the time step size may become very small which increases calculation times drastically even for small 1D problems. • The current version of OGS is not optimized for memory use and the MPI version of OGS does not distribute data between nodes. Even for moderately small 2D problems the number of MPI processes that fit into memory of up-to-date workstations or HPC hardware is limited. One strategy to overcome the above mentioned restrictions of OGS-GEMS is to parallelize the coupled code. For OGS a parallelized version already exists. It is based on a domain decomposition method implemented with MPI and provides a parallel solver for fluid and mass transport processes. In the coupled code, after solving fluid flow and solute transport, geochemical calculations are done in form of a central loop over all finite
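
    The sequential non-iterative coupling referred to above can be illustrated with a deliberately simple 1D sketch: each time step first advances transport, then equilibrates the chemistry cell by cell. The linear-sorption "chemistry" below is only a stand-in for a GEMS3K equilibrium call, and all numbers are arbitrary:

      import numpy as np

      # Sequential non-iterative operator splitting: transport step, then local
      # chemistry step per cell (linear sorption as a toy equilibrium model).
      nx, dx, dt, nsteps = 50, 1.0, 0.4, 100
      v, D, Kd = 1.0, 0.5, 0.2            # velocity, dispersion, sorption coefficient
      c = np.zeros(nx); c[0] = 1.0        # aqueous concentration
      s = np.zeros(nx)                    # sorbed concentration

      def transport_step(c):
          cn = c.copy()                   # explicit upwind advection + central dispersion
          cn[1:-1] += dt * (-v * (c[1:-1] - c[:-2]) / dx
                            + D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2)
          cn[0] = 1.0                     # fixed inlet concentration
          cn[-1] = cn[-2]                 # free outflow
          return cn

      def chemistry_step(c, s):
          total = c + s                   # equilibrate each cell (mass conserved)
          c_eq = total / (1.0 + Kd)
          return c_eq, Kd * c_eq

      for _ in range(nsteps):
          c = transport_step(c)           # operator 1: flow and transport
          c, s = chemistry_step(c, s)     # operator 2: local chemistry

      print("retarded front is near cell", int(np.argmax(c < 0.5)))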

  6. High rate particle tracking and ultra-fast timing with a thin hybrid silicon pixel detector

    Science.gov (United States)

    Fiorini, M.; Aglieri Rinella, G.; Carassiti, V.; Ceccucci, A.; Cortina Gil, E.; Cotta Ramusino, A.; Dellacasa, G.; Garbolino, S.; Jarron, P.; Kaplon, J.; Kluge, A.; Marchetto, F.; Mapelli, A.; Martin, E.; Mazza, G.; Morel, M.; Noy, M.; Nuessle, G.; Perktold, L.; Petagna, P.; Petrucci, F.; Poltorak, K.; Riedler, P.; Rivetti, A.; Statera, M.; Velghe, B.

    2013-08-01

    The Gigatracker (GTK) is a hybrid silicon pixel detector designed for the NA62 experiment at CERN. The beam spectrometer, made of three GTK stations, has to sustain a high and non-uniform particle rate (∼ 1 GHz in total) and measure momentum and angles of each beam track with a combined time resolution of 150 ps. In order to reduce multiple scattering and hadronic interactions of beam particles, the material budget of a single GTK station has been fixed to 0.5% X0. The expected fluence for 100 days of running is 2 × 10^14 1 MeV neq/cm^2, comparable to the one foreseen in the inner trackers of LHC detectors during 10 years of operation. To comply with these requirements, an efficient and very low-mass (<0.15% X0) cooling system is being constructed, using a novel microchannel cooling silicon plate. Two complementary read-out architectures have been produced as small-scale prototypes: one is based on a Time-over-Threshold circuit followed by a TDC shared by a group of pixels, while the other makes use of a constant-fraction discriminator followed by an on-pixel TDC. The read-out ASICs are produced in 130 nm IBM CMOS technology and will be thinned down to 100 μm or less. An overview of the Gigatracker detector system will be presented. Experimental results from laboratory and beam tests of prototype bump-bonded assemblies will be described as well. These results show a time resolution of about 170 ps for single hits from minimum ionizing particles, using 200 μm thick silicon sensors.

  7. High rate particle tracking and ultra-fast timing with a thin hybrid silicon pixel detector

    Energy Technology Data Exchange (ETDEWEB)

    Fiorini, M., E-mail: Massimiliano.Fiorini@cern.ch [CERN, CH-1211 Geneva 23 (Switzerland); Aglieri Rinella, G. [CERN, CH-1211 Geneva 23 (Switzerland); Carassiti, V. [INFN Sezione di Ferrara (Italy); Ceccucci, A. [CERN, CH-1211 Geneva 23 (Switzerland); Cortina Gil, E. [Université Catholique de Louvain, Louvain-la-Neuve (Belgium); Cotta Ramusino, A. [INFN Sezione di Ferrara (Italy); Dellacasa, G.; Garbolino, S.; Jarron, P. [INFN Sezione di Torino (Italy); Kaplon, J.; Kluge, A.; Marchetto, F.; Mapelli, A. [CERN, CH-1211 Geneva 23 (Switzerland); Martin, E. [Université Catholique de Louvain, Louvain-la-Neuve (Belgium); Mazza, G. [INFN Sezione di Torino (Italy); Morel, M.; Noy, M. [CERN, CH-1211 Geneva 23 (Switzerland); Nuessle, G. [Université Catholique de Louvain, Louvain-la-Neuve (Belgium); Perktold, L.; Petagna, P. [CERN, CH-1211 Geneva 23 (Switzerland); and others

    2013-08-01

    The Gigatracker (GTK) is a hybrid silicon pixel detector designed for the NA62 experiment at CERN. The beam spectrometer, made of three GTK stations, has to sustain high and non-uniform particle rate (∼1GHz in total) and measure momentum and angles of each beam track with a combined time resolution of 150 ps. In order to reduce multiple scattering and hadronic interactions of beam particles, the material budget of a single GTK station has been fixed to 0.5% X{sub 0}. The expected fluence for 100 days of running is 2×10{sup 14} 1 MeV n{sub eq}/cm{sup 2}, comparable to the one foreseen in the inner trackers of LHC detectors during 10 years of operation. To comply with these requirements, an efficient and very low-mass (<0.15%X{sub 0}) cooling system is being constructed, using a novel microchannel cooling silicon plate. Two complementary read-out architectures have been produced as small-scale prototypes: one is based on a Time-over-Threshold circuit followed by a TDC shared by a group of pixels, while the other makes use of a constant-fraction discriminator followed by an on-pixel TDC. The read-out ASICs are produced in 130 nm IBM CMOS technology and will be thinned down to 100μm or less. An overview of the Gigatracker detector system will be presented. Experimental results from laboratory and beam tests of prototype bump-bonded assemblies will be described as well. These results show a time resolution of about 170 ps for single hits from minimum ionizing particles, using 200μm thick silicon sensors.

  8. Formation and transport of entropy structures in the magnetotail simulated with a 3-D global hybrid code

    Science.gov (United States)

    Lin, Y.; Wing, S.; Johnson, J. R.; Wang, X. Y.; Perez, J. D.; Cheng, L.

    2017-06-01

    Global structure and evolution of flux tube entropy S, integrated over closed field lines, associated with magnetic reconnection in the magnetotail are investigated using the AuburN Global hybrId codE in three dimensions (3-D), ANGIE3D. Flux tubes with decreased entropy, or "bubbles," are found to be generated due to the sudden change of flux tube topology and thus volume in reconnection. By tracking the propagation of the entropy-depleted flux tubes, the roles of the entropy structure in plasma transport to the inner magnetosphere is examined with a self-consistent global hybrid simulation for the first time. The value of S first decreases due to the shortening of flux tubes and then increases due to local ion heating as the bubbles are injected earthward by interchange-ballooning instability, finally oscillating around an equilibrium radial distance where S is nearly the same as the ambient value. The pressure remains anisotropic and not constant along the flux tubes during their propagation with a nonzero heat flux along the field line throughout the duration of the simulation. The correlation of these bubbles with earthward fast flows and specific entropy s is also studied.

  9. Proton Dose Assessment to the Human Eye Using Monte Carlo N-Particle Transport Code (MCNPX)

    Science.gov (United States)

    2006-08-01

    The objective of this project was to develop a simple MCNPX model of the human eye to approximate the dose delivered from proton therapy. The calculations considered proton interactions and secondary interactions... Volume calculation: the MCNPX code has limited ability to compute the volumes of defined cells. The dosimetric volumes in the outer wall of the eye are

  10. Investigations of the response of hybrid particle detectors for the Space Environmental Viewing and Analysis Network (SEVAN)

    Directory of Open Access Journals (Sweden)

    A. Chilingarian

    2008-02-01

    Full Text Available A network of particle detectors located at middle to low latitudes known as SEVAN (Space Environmental Viewing and Analysis Network is being created in the framework of the International Heliophysical Year (IHY-2007. It aims to improve the fundamental research of the particle acceleration in the vicinity of the Sun and space environment conditions. The new type of particle detectors will simultaneously measure the changing fluxes of most species of secondary cosmic rays, thus turning into a powerful integrated device used for exploration of solar modulation effects. Ground-based detectors measure time series of secondary particles born in cascades originating in the atmosphere by nuclear interactions of protons and nuclei accelerated in the galaxy. During violent solar explosions, sometimes additional secondary particles are added to this "background" flux. The studies of the changing time series of secondary particles shed light on the high-energy particle acceleration mechanisms. The time series of intensities of high energy particles can also provide highly cost-effective information on the key characteristics of interplanetary disturbances. The recent results of the detection of the solar extreme events (2003–2005 by the monitors of the Aragats Space-Environmental Center (ASEC illustrate the wide possibilities provided by new particle detectors measuring neutron, electron and muon fluxes with inherent correlations. We present the results of the simulation studies revealing the characteristics of the SEVAN networks' basic measuring module. We illustrate the possibilities of the hybrid particle detector to measure neutral and charged fluxes of secondary CR, to estimate the efficiency and purity of detection; corresponding median energies of the primary proton flux, the ability to distinguish between neutron and proton initiated GLEs and some other important properties of hybrid particle detectors.

  11. Thermo-mechanical characterization of siliconized E-glass fiber/hematite particles reinforced epoxy resin hybrid composite

    Energy Technology Data Exchange (ETDEWEB)

    Arun Prakash, V.R., E-mail: vinprakash101@gmail.com; Rajadurai, A., E-mail: rajadurai@annauniv.edu.in

    2016-10-30

    Highlights: • Particle dimensions were reduced using a ball-milling process. • The importance of surface modification was explored. • Surface modification was carried out to improve the adhesion of the fiber/particles with the epoxy. • Mechanical properties were increased by adding the modified fiber and particles. • Thermal properties were increased. - Abstract: In the present work a hybrid polymer (epoxy) matrix composite has been strengthened with surface-modified E-glass fiber and iron(III) oxide particles of varying size. Particle sizes of 200 nm and <100 nm were prepared by high energy ball milling and sol-gel methods, respectively. To enhance the dispersion of the particles and improve the adhesion of the fibers and fillers with the epoxy matrix, a surface modification process was applied to both fiber and filler with an amino-functional silane, 3-Aminopropyltrimethoxysilane (APTMS). The crystalline and functional groups of the siliconized iron(III) oxide particles were characterized by XRD and FTIR spectroscopy analysis. A fixed quantity of surface-treated 15 vol% E-glass fiber was laid along with 0.5 and 1.0 vol% of iron(III) oxide particles into the matrix to fabricate hybrid composites. The composites were cured with an aliphatic hardener, Triethylenetetramine (TETA). The effectiveness of adding surface-modified particles and fibers into the resin matrix was revealed by mechanical testing such as tensile testing, flexural testing, impact testing, interlaminar shear strength and hardness. The thermal behavior of the composites was evaluated by TGA, DSC and thermal conductivity (Lee's disc). Scanning electron microscopy was employed to find the shape and size of the iron(III) oxide particles and the adhesion quality of the fiber with the epoxy matrix. Good dispersion of the fillers in the matrix was achieved with the surface modifier APTMS. The tensile, flexural, impact and interlaminar shear strength of the composites was improved by reinforcing with surface-modified fiber and filler. The thermal stability of the epoxy resin was improved

  12. Characterization of exposures to nanoscale particles and fibers during solid core drilling of hybrid carbon nanotube advanced composites.

    Science.gov (United States)

    Bello, Dhimiter; Wardle, Brian L; Zhang, Jie; Yamamoto, Namiko; Santeufemio, Christopher; Hallock, Marilyn; Virji, M Abbas

    2010-01-01

    This work investigated exposures to nanoparticles and nanofibers during solid core drilling of two types of advanced carbon nanotube (CNT)-hybrid composites: (1) reinforced plastic hybrid laminates (alumina fibers and CNT); and (2) graphite-epoxy composites (carbon fibers and CNT). Multiple real-time instruments were used to characterize the size distribution (5.6 nm to 20 microm), number and mass concentration, particle-bound polyaromatic hydrocarbons (b-PAHs), and surface area of airborne particles at the source and breathing zone. Time-integrated samples included grids for electron microscopy characterization of particle morphology and size resolved (2 nm to 20 microm) samples for the quantification of metals. Several new important findings herein include generation of airborne clusters of CNTs not seen during saw-cutting of similar composites, fewer nanofibers and respirable fibers released, similarly high exposures to nanoparticles with less dependence on the composite thickness, and ultrafine (composite material.

  13. Production of Hybrid Chimeric PVX Particles Using a Combination of TMV and PVX-Based Expression Vectors.

    Science.gov (United States)

    Dickmeis, Christina; Honickel, Mareike Michaela Antonia; Fischer, Rainer; Commandeur, Ulrich

    2015-01-01

    We have generated hybrid chimeric potato virus X (PVX) particles by coexpression of different PVX coat protein fusions utilizing tobacco mosaic virus (TMV) and PVX-based expression vectors. Coinfection was achieved with a modified PVX overcoat vector displaying a fluorescent protein and a TMV vector expressing another PVX fluorescent overcoat fusion protein. Coexpression of the PVX-CP fusions in the same cells was confirmed by epifluorescence microscopy. Labeling with specific antibodies and transmission electron microscopy revealed chimeric particles displaying green fluorescent protein and mCherry on the surface. These data were corroborated by bimolecular fluorescence complementation. We used split-mCherry fragments as PVX coat fusions and confirmed an interaction between the split-mCherry fragments in coinfected cells. The presence of assembled split-mCherry on the surface confirmed the hybrid character of the chimeric particles.

  14. Inside the structure of a nanocomposite electrolyte membrane: how hybrid particles get along with the polymer matrix.

    Science.gov (United States)

    Maréchal, M; Niepceron, F; Gebel, G; Mendil-Jakani, H; Galiano, H

    2015-02-21

    Hybrid materials remain the target for a fruitful range of investigations, especially for energy devices. A number of hybrid electrolyte membranes consisting of inorganic and organic phases were then synthesized. Mechanical, solvent uptake and ionic transport properties were studied with the key point being the characteristic length scale of the interaction between the phases. A group of nanocomposite membranes made of polystyrenesulfonic acid-grafted silica particles embedded in a Poly(Vinylidene Fluoride-co-HexaFluoroPropene) (PVdF-HFP) matrix was studied by combining neutron and X-ray scatterings on the nanometer to angstrom scale. This approach allows for the variation in the morphology and structure as a function of particle loading to be described. These studies showed that the particles aggregate with increasing particle loading and these aggregates swell, creating a physical interaction with the polymer matrix. Particle loadings lower than 30 wt% induce a slight strain between both of the subphases, namely the polymer matrix and the particles. This strain is decreased with particle loading between 20 and 30 wt% conjointly with the beginning of proton conduction. Then the percolation of the aggregates is the beginning of a significant increase of the conduction without any strain. This new insight can give information on the variation in other important intrinsic properties.

  15. Thermo-mechanical characterization of siliconized E-glass fiber/hematite particles reinforced epoxy resin hybrid composite

    Science.gov (United States)

    Arun Prakash, V. R.; Rajadurai, A.

    2016-10-01

    In the present work a hybrid polymer (epoxy) matrix composite has been strengthened with surface-modified E-glass fiber and iron(III) oxide particles of varying size. Particle sizes of 200 nm and <100 nm were prepared by high energy ball milling and sol-gel methods, respectively. To enhance the dispersion of the particles and improve the adhesion of the fibers and fillers with the epoxy matrix, a surface modification process was applied to both fiber and filler with an amino-functional silane, 3-Aminopropyltrimethoxysilane (APTMS). The crystalline and functional groups of the siliconized iron(III) oxide particles were characterized by XRD and FTIR spectroscopy analysis. A fixed quantity of surface-treated 15 vol% E-glass fiber was laid along with 0.5 and 1.0 vol% of iron(III) oxide particles into the matrix to fabricate hybrid composites. The composites were cured with an aliphatic hardener, Triethylenetetramine (TETA). The effectiveness of adding surface-modified particles and fibers into the resin matrix was revealed by mechanical testing such as tensile testing, flexural testing, impact testing, interlaminar shear strength and hardness. The thermal behavior of the composites was evaluated by TGA, DSC and thermal conductivity (Lee's disc). Scanning electron microscopy was employed to find the shape and size of the iron(III) oxide particles and the adhesion quality of the fiber with the epoxy matrix. Good dispersion of the fillers in the matrix was achieved with the surface modifier APTMS. The tensile, flexural, impact and interlaminar shear strength of the composites was improved by reinforcing with surface-modified fiber and filler. The thermal stability of the epoxy resin was improved when the surface-modified fiber was reinforced along with the hard hematite particles. The thermal conductivity of the epoxy increased with increasing hematite content in the epoxy matrix.

  16. Red-blood-cell-like BSA/Zn3(PO4)2 hybrid particles: Preparation and application to adsorption of heavy metal ions

    Science.gov (United States)

    Zhang, Baoliang; Li, Peitao; Zhang, Hepeng; Li, Xiangjie; Tian, Lei; Wang, Hai; Chen, Xin; Ali, Nisar; Ali, Zafar; Zhang, Qiuyu

    2016-03-01

    A novel kind of red-blood-cell-like bovine serum albumin (BSA)/Zn3(PO4)2 hybrid particle is prepared at room temperature by a facile and rapid one-step method based on coordination between BSA and zinc ions. The morphology of the monodisperse hybrid particle is of an oblate spheroidal type with a single hole on one side of the surface. The hybrid particle is constructed from BSA/Zn3(PO4)2 nanoplates 35 nm thick. The average particle size of the hybrid particle is 2.3 μm, and its BET specific surface area is 146.64 cm2/g. To clarify the evolution of the BSA/Zn3(PO4)2 hybrid particle, SEM and elemental analysis as a function of particle growth time are investigated. The formation mechanism of the BSA/Zn3(PO4)2 hybrid particle, which can be described as a crystallization, coordination and self-assembly process, is illustrated in detail. The as-prepared BSA/Zn3(PO4)2 hybrid particle is used for the adsorption of Cu2+ and displays excellent adsorption properties. The adsorption efficiencies of the BSA/Zn3(PO4)2 hybrid particles at 5 min and 30 min are 86.33% and 98.9%, respectively. The maximum adsorption capacity is 6.85 mg/g. Thus, this kind of novel adsorbent shows potential application value in the ultra-fast and highly efficient removal of Cu2+.
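
    For reference, removal efficiency and uptake figures of this kind are normally obtained from batch-test concentrations as in the small helper below; the input numbers are hypothetical and are not the measured data of this study:

      # Removal efficiency and uptake from batch adsorption data (concentrations in mg/L).
      def adsorption_metrics(c0, ct, volume_l, adsorbent_mass_g):
          removal_pct = 100.0 * (c0 - ct) / c0                   # adsorption efficiency, %
          q_t = (c0 - ct) * volume_l / adsorbent_mass_g          # uptake, mg/g
          return removal_pct, q_t

      # e.g. 10 mg/L Cu2+, 50 mL solution, 0.05 g of particles
      print(adsorption_metrics(c0=10.0, ct=0.11, volume_l=0.05, adsorbent_mass_g=0.05))
      # -> (98.9, 9.89)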

  17. Hybrid parallelization of the XTOR-2F code for the simulation of two-fluid MHD instabilities in tokamaks

    Science.gov (United States)

    Marx, Alain; Lütjens, Hinrich

    2017-03-01

    A hybrid MPI/OpenMP parallel version of the XTOR-2F code [Lütjens and Luciani, J. Comput. Phys. 229 (2010) 8130] solving the two-fluid MHD equations in full tokamak geometry by means of an iterative Newton-Krylov matrix-free method has been developed. The present work shows that the code has been parallelized significantly despite the numerical profile of the problem solved by XTOR-2F, i.e. a discretization with pseudo-spectral representations in all angular directions, the stiffness of the two-fluid stability problem in tokamaks, and the use of a direct LU decomposition to invert the physical pre-conditioner at every Krylov iteration of the solver. The execution time of the parallelized version is an order of magnitude smaller than the sequential one for low resolution cases, with an increasing speedup when the discretization mesh is refined. Moreover, it allows to perform simulations with higher resolutions, previously forbidden because of memory limitations.
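
    The matrix-free Newton-Krylov iteration at the heart of such a solver can be illustrated, at a much smaller scale, with SciPy's newton_krylov applied to a toy nonlinear boundary-value problem; this shows only the style of iteration (Jacobian-free, Krylov inner solves), not the XTOR-2F equations or its physics-based preconditioner:

      import numpy as np
      from scipy.optimize import newton_krylov

      # Toy stand-in problem: -u'' + u^3 = 1 with u(0) = u(1) = 0, discretized by
      # finite differences and solved with a Jacobian-free Newton-Krylov method.
      n = 100
      h = 1.0 / (n + 1)

      def residual(u):
          r = np.empty_like(u)                         # h^2-scaled residual
          r[0] = (2 * u[0] - u[1]) + h**2 * (u[0]**3 - 1.0)
          r[-1] = (2 * u[-1] - u[-2]) + h**2 * (u[-1]**3 - 1.0)
          r[1:-1] = (2 * u[1:-1] - u[:-2] - u[2:]) + h**2 * (u[1:-1]**3 - 1.0)
          return r

      u = newton_krylov(residual, np.zeros(n), method="lgmres", verbose=False)
      print("max |residual| =", np.abs(residual(u)).max())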

  18. GTNEUT: A code for the calculation of neutral particle transport in plasmas based on the Transmission and Escape Probability method

    Science.gov (United States)

    Mandrekas, John

    2004-08-01

    GTNEUT is a two-dimensional code for the calculation of the transport of neutral particles in fusion plasmas. It is based on the Transmission and Escape Probabilities (TEP) method and can be considered a computationally efficient alternative to traditional Monte Carlo methods. The code has been benchmarked extensively against Monte Carlo and has been used to model the distribution of neutrals in fusion experiments.
    Program summary
    Title of program: GTNEUT
    Catalogue identifier: ADTX
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTX
    Computer for which the program is designed and others on which it has been tested: the program was developed on a SUN Ultra 10 workstation and has been tested on other Unix workstations and PCs
    Operating systems or monitors under which the program has been tested: Solaris 8, 9, HP-UX 11i, Linux Red Hat v8.0, Windows NT/2000/XP
    Programming language used: Fortran 77
    Memory required to execute with typical data: 6 219 388 bytes
    No. of bits in a word: 32
    No. of processors used: 1
    Has the code been vectorized or parallelized?: No
    No. of bytes in distributed program, including test data, etc.: 300 709
    No. of lines in distributed program, including test data, etc.: 17 365
    Distribution format: compressed tar gzip file
    Keywords: Neutral transport in plasmas, Escape probability methods
    Nature of physical problem: This code calculates the transport of neutral particles in thermonuclear plasmas in two-dimensional geometric configurations.
    Method of solution: The code is based on the Transmission and Escape Probability (TEP) methodology [1], which is part of the family of integral transport methods for neutral particles and neutrons. The resulting linear system of equations is solved by standard direct linear system solvers (sparse and non-sparse versions are included).
    Restrictions on the complexity of the problem: The current version of the code can

  19. Use of hybrid composite particles prepared using alkoxysilane-functionalized amphiphilic polymer precursors for simultaneous removal of various pollutants from water.

    Science.gov (United States)

    Cho, Seulki; Kim, Nahae; Lee, Soonjae; Lee, Hoseok; Lee, Sang-Hyup; Kim, Juyoung; Choi, Jae-Woo

    2016-08-01

    In this study, we present new inorganic-organic hybrid particles and their possible application as an adsorbent for the simultaneous removal of hydrophobic and hydrophilic pollutants from water. These hybrid particles were prepared using tailor-made alkoxysilane-functionalized amphiphilic polymer precursors (M-APAS), which have amphiphilic polymers and reactive alkoxysilane groups attached to the same backbone. Through a single conventional sol-gel process, the polymerization of M-APAS and the chemical conjugation of M-APAS onto silica nanoparticles occurred simultaneously, resulting in the formation of hybrid particles (M-APAS-SiO2) comprised of hyperbranch-like amphiphilic polymers bonded onto silica nanoparticles with a relatively high grafting efficiency. A test of the adsorption of a water-soluble dye (orange-16) and a water-insoluble dye (solvent blue-35) onto the hybrid particles was performed to evaluate the possibility of adsorbing hydrophilic and hydrophobic compounds within the same particle. The hybrid particle was also evaluated as an adsorbent for the treatment of contaminated water containing various pollutants in a wastewater treatment test. The hybrid particle could remove phenolic compounds from wastewater and the azo dye reactive orange-16 from aqueous solutions, and it was easily separated from the treated wastewater because of the different densities involved. These results demonstrate that the hybrid particles are a promising sorbent for hydrophilic and/or hydrophobic pollutants in water.

  20. Optimum design of a hybrid erbium-doped fiber amplifier/fiber Raman amplifier using particle swarm optimization.

    Science.gov (United States)

    Mowla, Alireza; Granpayeh, Nosrat

    2009-02-10

    We propose and optimize a hybrid erbium-doped fiber amplifier/fiber Raman amplifier (EDFA/FRA). A large number of parameters of a wide-band hybrid amplifier consisting of an erbium-doped fiber amplifier (EDFA) and a fiber Raman amplifier (FRA) have been optimized using an effective and fast global optimization method called particle swarm optimization. Two types of hybrid EDFA/FRA, with six- and 10-pump FRAs, have been optimized. A large number of variables affect the hybrid EDFA/FRA performance, so a global optimization method is needed to handle them. Particle swarm optimization helps us find the optimum parameters of the hybrid EDFA/FRA and reduces the gain spectrum variation to 2.91 and 2.03 dB for the six- and 10-pump FRAs, respectively. The optimum design supports the amplification of 60 signal channels in the wavelength range of 1529.2-1627.1 nm for a wavelength-division multiplexing system.
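
    The abstract does not give implementation details of the optimizer, so the following is only a minimal particle swarm optimization sketch in Python: it flattens a toy Gaussian-sum gain spectrum over the 1529.2-1627.1 nm band by tuning a vector of pump weights. The gain model, the parameter bounds, and the names gain_ripple and pso are illustrative assumptions, not the EDFA/FRA model of the paper.

```python
# Minimal particle swarm optimization (PSO) sketch: flattening a toy gain
# spectrum by tuning pump "weights". The gain model below is a stand-in,
# not the EDFA/FRA model of the paper.
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(1529.2, 1627.1, 60)          # 60 signal channels (nm)

def gain_ripple(pumps):
    # Toy spectral gain: sum of Gaussian contributions, one per pump knob.
    centers = np.linspace(1530.0, 1620.0, pumps.size)
    gain = sum(p * np.exp(-0.5 * ((wavelengths - c) / 20.0) ** 2)
               for p, c in zip(pumps, centers))
    return gain.max() - gain.min()                      # peak-to-peak variation

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(0.0, 1.0, (n_particles, dim))       # positions (pump weights)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 1.0)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best, ripple = pso(gain_ripple, dim=10)                 # e.g. a 10-pump configuration
print(f"flattened ripple of the toy spectrum: {ripple:.3f}")
```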

  1. Nanogold-based bio-bar codes for label-free immunosensing of proteins coupling with an in situ DNA-based hybridization chain reaction.

    Science.gov (United States)

    Zhou, Jun; Xu, Mingdi; Tang, Dianping; Gao, Zhuangqiang; Tang, Juan; Chen, Guonan

    2012-12-28

    A label-free, non-enzyme immunosensing strategy is designed for ultrasensitive electronic detection of disease-related proteins (carcinoembryonic antigen as a model) by using gold nanoparticle-based bio-bar codes and an in situ amplified DNA-based hybridization chain reaction.

  2. Misconception Regarding Conventional Coupling of Fields and Particles in XFEL Codes

    CERN Document Server

    Geloni, Gianluca; Saldin, Evgeni

    2016-01-01

    Maxwell theory is usually treated in the lab frame under the standard time order (light-signal clock synchronization). Particle tracking in the lab frame usually treats time as an independent variable. Then, the evolution of electron beams is treated according to the absolute time convention (non-standard clock synchronization). This point has never received attention in the accelerator community. There are two ways of coupling fields and particles. The first, Lorentz's way, consists in `translating' Maxwell's electrodynamics to the absolute time world-picture. The second, Einstein's way, consists in `translating' particle tracking results to the electromagnetic world-picture. Conventional particle tracking shows that the electron beam direction changes after a transverse kick, while the orientation of the microbunching fronts stays unvaried. We show that under Einstein's time order, in the ultrarelativistic asymptote the orientation of the planes of simultaneity is always perpendicular to the electron beam v...

  3. Effect of Silane Treatment on Hybridized Use of Short Cellulose Fibers and Silica Particles for Natural Rubber Reinforcement

    Science.gov (United States)

    Lopattananon, Natinee; Jitkalong, Dolmalik; Seadan, Manus; Sakai, Tadamoto

    Processability, swelling and tensile properties of natural-rubber-based hybrid composites prepared by mixing short cellulose fibers and fine silica particles of equal contents with total loading of 20 phr using a two-roll mill were analyzed. Their properties were compared with those of natural rubber reinforced with a single filler (silica or cellulose fiber) and the corresponding unfilled natural rubber. The tensile test showed the reinforcing effect of both the single-filler and hybrid-filler systems in relation to natural rubber. The tensile modulus and tensile strength of hybrid composites generally lay between those of fiber-reinforced and silica-reinforced natural rubber composites, whereas the elongation at break of hybrid composites was equal to that of the single-filler reinforcement system. The Mooney viscosity of the silica-filled compound was much higher than that of unfilled natural rubber and short-fiber-filled compounds, and was significantly reduced when hybridized fillers were used. Furthermore, a silane coupling agent, Si 69, was used to modify the surface properties of cellulose fibers and silica particles. Three microscopic evaluation techniques, that is, elemental X-ray mapping (EDX), 3D microfocus X-ray scanning, and N-ARC methods were applied to investigate the filler dispersion/mixing effects. It was found that both of the fillers were more homogeneously dispersed in the hybrid composites, and the affinity between the fillers and natural rubber was improved after the silane treatment. The results from this work suggested that the better dispersion of short cellulose fiber/silica hybrid fillers had great advantages in rubber processing, and allowed for equal or higher composite strength compared to a composite filled with silica alone.

  4. Resistive-plate-chamber background particles simulation studies for the endcap region of a compact muon solenoid/large hadron collider using the geometry and tracking code

    National Research Council Canada - National Science Library

    Jamil, M; Rhee, J T

    2005-01-01

    We present a method to simulate the double-gap resistive plate chambers (RPC) background particles for the endcap region of a compact muon solenoid/large hadron collider using the geometry and tracking (GEANT) code...

  5. An implementation of hybrid parallel CUDA code for the hyperonic nuclear forces

    CERN Document Server

    Nemura, Hidekatsu

    2016-01-01

    We present our recent effort to develop a GPGPU program to calculate 52 channels of the Nambu-Bethe-Salpeter (NBS) wave functions in order to study the baryon interactions, from nucleon-nucleon to $\\Xi-\\Xi$, from lattice QCD. We adopt CUDA programming to perform multi-GPU execution within a hybrid parallel programming model with MPI and OpenMP. The effective baryon block algorithm, which efficiently calculates a large number of NBS wave functions at a time, is briefly outlined, and three CUDA kernel programs are implemented to realize the effective baryon block algorithm using GPUs on the single-program multiple-data (SPMD) programming model. In order to parallelize over multiple GPUs, we take two approaches: dividing the time dimension and dividing the spatial dimensions. Performances are measured using the HA-PACS supercomputer at the University of Tsukuba, which includes NVIDIA M2090 and NVIDIA K20X GPUs. Strong scaling and weak scaling measured using both M2090 and K20X GPUs are presented. We find distinct dif...

  6. Implementation and performance of FDPS: a framework for developing parallel particle simulation codes

    Science.gov (United States)

    Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro

    2016-08-01

    We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N^2) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10^7) to 300 ms (N = 10^9). These are currently limited by the time for the calculation of the domain decomposition and communication.
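
    For orientation, the sketch below shows the kind of simple, unoptimized O(N^2) gravitational interaction loop that the abstract says FDPS relieves the user from parallelizing. It is plain NumPy, not the C++ template interface of FDPS; the softening length, particle number and leapfrog step are illustrative assumptions.

```python
# Minimal direct-summation gravitational N-body step (the O(N^2) baseline the
# FDPS abstract refers to). FDPS itself is a C++ template framework; this is
# only an illustration of the user-supplied interaction.
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    # Pairwise softened gravity in G = 1 units, O(N^2).
    dx = pos[None, :, :] - pos[:, None, :]              # (N, N, 3) separations
    r2 = (dx ** 2).sum(-1) + eps ** 2
    np.fill_diagonal(r2, np.inf)                        # no self-interaction
    return (mass[None, :, None] * dx / r2[..., None] ** 1.5).sum(axis=1)

def leapfrog_step(pos, vel, mass, dt):
    # Kick-drift-kick leapfrog update of one timestep.
    vel_half = vel + 0.5 * dt * accelerations(pos, mass)
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * accelerations(pos_new, mass)
    return pos_new, vel_new

rng = np.random.default_rng(1)
N = 256                                                 # illustrative particle count
pos, vel = rng.normal(size=(N, 3)), np.zeros((N, 3))
mass = np.full(N, 1.0 / N)
pos, vel = leapfrog_step(pos, vel, mass, dt=1e-3)
```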

  7. Performance Evaluation of OLSR Using Swarm Intelligence and Hybrid Particle Swarm Optimization Using Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    S. Meenakshi Sundaram

    2014-04-01

    The aim of this research is to evaluate the performance of OLSR using swarm intelligence and hybrid PSO with the Gravitational Search Algorithm, in order to lower the jitter, data drop and end-to-end delay and improve the network throughput. Simulation was carried out for multimedia traffic and video-streamed network traffic using the OPNET Simulator. Routing is the exchange of information from one host to another in a network; it forwards packets to a destination along an efficient path. Path efficiency is measured through metrics like hop count, traffic and security. Each host node acts as a specialized router in ad-hoc networks. The table-driven proactive Optimized Link State Routing (OLSR) protocol maintains topology information and routes, and its efficiency depends on multipoint relay selection. Various studies were conducted to decrease control traffic overheads through modification of the existing OLSR routing protocol and traffic shaping based on packet priority. This study proposes a modification of OLSR using swarm intelligence, namely Hybrid Particle Swarm Optimization (HPSO) with the Gravitational Search Algorithm (GSA), and evaluates its performance in terms of jitter, end-to-end delay, data drop and throughput. Simulation was carried out to investigate the proposed method for the network's multimedia traffic.

  8. Hybrid fs/ps CARS for Sooting and Particle-laden Flames [PowerPoint

    Energy Technology Data Exchange (ETDEWEB)

    Hoffmeister, Kathryn N. Gabet; Guildenbecher, Daniel Robert; Kearney, Sean P.

    2016-01-01

    We report the application of ultrafast rotational coherent anti-Stokes Raman scattering (CARS) for temperature and relative oxygen concentration measurements in the plume emanating from a burning aluminized ammonium perchlorate propellant strand. Combustion of these metal-based propellants is a particularly hostile environment for laser-based diagnostics, with intense background luminosity, scattering and beam obstruction from hot metal particles that can be as large as several hundred microns in diameter. CARS spectra that were previously obtained using nanosecond pulsed lasers in an aluminum-particle-seeded flame are examined and are determined to be severely impacted by nonresonant background, presumably as a result of the plasma formed by particulate-enhanced laser-induced breakdown. Introduction of fs/ps laser pulses enables CARS detection at reduced pulse energies, decreasing the likelihood of breakdown, while simultaneously providing time-gated elimination of any nonresonant background interference. Temperature probability densities and temperature/oxygen correlations were constructed from ensembles of several thousand single-laser-shot measurements from the fs/ps rotational CARS measurement volume positioned within 3 mm or less of the burning propellant surface. Preliminary results in canonical flames are presented using a hybrid fs/ps vibrational CARS system to demonstrate our progress towards acquiring vibrational CARS measurements for more accurate temperatures in the very high temperature propellant burns.

  9. Hybrid fs/ps CARS for Sooting and Particle-laden Flames

    Energy Technology Data Exchange (ETDEWEB)

    Hoffmeister, Kathryn N. Gabet; Guildenbecher, Daniel Robert; Kearney, Sean P.

    2015-12-01

    We report the application of ultrafast rotational coherent anti-Stokes Raman scattering (CARS) for temperature and relative oxygen concentration measurements in the plume emanating from a burning aluminized ammonium perchlorate propellant strand. Combustion of these metal-based propellants is a particularly hostile environment for laser-based diagnostics, with intense background luminosity, scattering and beam obstruction from hot metal particles that can be as large as several hundred microns in diameter. CARS spectra that were previously obtained using nanosecond pulsed lasers in an aluminum-particle-seeded flame are examined and are determined to be severely impacted by nonresonant background, presumably as a result of the plasma formed by particulate-enhanced laser-induced breakdown. Introduction of fs/ps laser pulses enables CARS detection at reduced pulse energies, decreasing the likelihood of breakdown, while simultaneously providing time-gated elimination of any nonresonant background interference. Temperature probability densities and temperature/oxygen correlations were constructed from ensembles of several thousand single-laser-shot measurements from the fs/ps rotational CARS measurement volume positioned within 3 mm or less of the burning propellant surface. Preliminary results in canonical flames are presented using a hybrid fs/ps vibrational CARS system to demonstrate our progress towards acquiring vibrational CARS measurements for more accurate temperatures in the very high temperature propellant burns.

  10. Parallelisation of PyHEADTAIL, a Collective Beam Dynamics Code for Particle Accelerator Physics

    CERN Document Server

    Oeftiger, Adrian

    2016-01-01

    The longitudinal tracking engine of the particle accelerator simulation application PyHEADTAIL shows considerable potential for parallelisation. For basic beam circulation, the tracking functionality with the leap-frog algorithm is extracted and compared between a sequential C and a concurrent CUDA C API implementation for 1 million revolutions. Including the sequential data I/O in both versions, a pure speedup of up to S = 100 is observed, which is of the order of magnitude expected from Amdahl's law. From O(100) macro-particles onwards, the overhead of initialising the GPU CUDA device is outweighed by the concurrent computations on the 448 available CUDA cores.
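
    As a rough illustration of the tracking kernel being ported, here is a minimal longitudinal drift-kick (leap-frog) map for macro-particles in plain NumPy. It is not the PyHEADTAIL API or the CUDA implementation; the machine parameters and variable names are assumptions chosen only to make the sketch self-contained.

```python
# Simplified longitudinal drift-kick (leap-frog) tracking of macro-particles,
# in the spirit of the kernel the abstract ports to CUDA. Plain NumPy sketch,
# not the actual PyHEADTAIL or CUDA implementation.
import numpy as np

# Assumed, illustrative machine parameters (not taken from the abstract):
circumference = 26_659.0        # ring circumference (m)
harmonic, voltage = 35640, 6e6  # RF harmonic number, RF voltage (V)
energy, charge = 450e9, 1.0     # beam energy (eV), particle charge (e)
eta, beta = 3.2e-4, 1.0         # slippage factor, ultrarelativistic beta

rng = np.random.default_rng(2)
n_macro = 1_000_000
z = rng.normal(0.0, 0.1, n_macro)       # longitudinal position (m)
dp = rng.normal(0.0, 1e-4, n_macro)     # relative momentum deviation

def track_one_turn(z, dp):
    # Kick: energy gain from the RF cavity, converted to a momentum deviation.
    dp = dp + charge * voltage * np.sin(
        2 * np.pi * harmonic * z / circumference) / (beta ** 2 * energy)
    # Drift: phase slippage over one revolution.
    z = z - eta * circumference * dp
    return z, dp

for _ in range(10):                     # a few revolutions (the paper uses 10^6)
    z, dp = track_one_turn(z, dp)
```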

  11. Particle-in-cell simulations of an alpha channeling scenario: electron current drive arising from lower hybrid drift instability of fusion-born ions

    Science.gov (United States)

    Cook, James; Chapman, Sandra; Dendy, Richard

    2010-11-01

    Particle-in-cell (PIC) simulations of fusion-born protons in deuterium plasmas demonstrate a key alpha channeling phenomenon for tokamak fusion plasmas. We focus on obliquely propagating modes at the plasma edge, excited by centrally born fusion products on banana orbits, known to be responsible for observations of ion cyclotron emission in JET and TFTR. A fully self-consistent electromagnetic 1D3V PIC code evolves a ring-beam distribution of 3 MeV protons in a 10 keV thermal deuterium-electron plasma with a realistic mass ratio. A collective instability occurs, giving rise to electromagnetic field activity in the lower hybrid range of frequencies. Waves spontaneously excited by this lower hybrid drift instability undergo Landau damping on resonant electrons, drawing out an asymmetric tail in the distribution of electron parallel velocities, which constitutes a net current. These simulations demonstrate a key building block of some alpha channeling scenarios: the direct collisionless coupling of fusion product energy into a form which can help sustain the equilibrium of the tokamak.

  12. Optimal reactive power and voltage control in distribution networks with distributed generators by fuzzy adaptive hybrid particle swarm optimisation method

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Su, Chi

    2015-01-01

    A new and efficient methodology for optimal reactive power and voltage control of distribution networks with distributed generators based on fuzzy adaptive hybrid PSO (FAHPSO) is proposed. The objective is to minimize comprehensive cost, consisting of power loss and operation cost of transformers and capacitors, and subject to constraints such as minimum and maximum reactive power limits of distributed generators, maximum deviation of bus voltages, maximum allowable daily switching operation number (MADSON). Particle swarm optimization (PSO) is used to solve the corresponding mixed integer non-linear programming problem (MINLP) and the hybrid PSO method (HPSO), consisting of three PSO variants, is presented. In order to mitigate the local convergence problem, fuzzy adaptive inference is used to improve the searching process and the final fuzzy adaptive inference based hybrid PSO is proposed. The proposed...

  13. Investigation of Mechanical and Electrical Properties of Hybrid Composites Reinforced with Carbon Nanotubes and Micrometer-Sized Silica Particles

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Yun; You, Byeong Il; Ahn, Ji Ho; Lee, Gyo Woo [Chonbuk Nat’l Univ., Junju (Korea, Republic of)

    2016-12-15

    In this study, to enhance the electrical insulation of composite specimens in addition to improving their mechanical properties, epoxy composites were reinforced with carbon nanotubes and silica particles. Tensile strength, Young's modulus, dynamic mechanical behavior, and electrical resistivity of the specimens were measured for varied contents of the two fillers. The mechanical and electrical properties were discussed, and the experimental results related to the mechanical properties of the specimens were compared with those from several micromechanics models. The hybrid composite specimens with 0.6 wt% of carbon nanotubes and 50 wt% of silica particles showed improved mechanical properties, with increases in tensile strength and Young's modulus of up to 11% and 35%, respectively, with respect to those of the baseline specimen. The electrical conductivity of the composite specimens with carbon nanotube filler also improved. Further, the electrical insulation of the hybrid composite specimens with the two fillers improved in addition to the improvement in mechanical properties.

  14. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for recoil cross section spectra under neutron irradiation

    Science.gov (United States)

    Iwamoto, Yosuke; Ogawa, Tatsuhiko

    2017-04-01

    Because primary knock-on atoms (PKAs) create point defects and clusters in materials that are irradiated with neutrons, it is important to validate the calculations of recoil cross section spectra that are used to estimate radiation damage in materials. Here, the recoil cross section spectra of fission- and fusion-relevant materials were calculated using the Event Generator Mode (EGM) of the Particle and Heavy Ion Transport code System (PHITS) and also using the data processing code NJOY2012 with the nuclear data libraries TENDL2015, ENDF/BVII.1, and JEFF3.2. The heating number, which is the integral of the recoil cross section spectra, was also calculated using PHITS-EGM and compared with data extracted from the ACE files of TENDL2015, ENDF/BVII.1, and JENDL4.0. In general, only a small difference was found between the PKA spectra of PHITS + TENDL2015 and NJOY + TENDL2015. From analyzing the recoil cross section spectra extracted from the nuclear data libraries using NJOY2012, we found that the recoil cross section spectra were incorrect for 72Ge, 75As, 89Y, and 109Ag in the ENDF/B-VII.1 library, and for 90Zr and 55Mn in the JEFF3.2 library. From analyzing the heating number, we found that the data extracted from the ACE file of TENDL2015 for all nuclides were problematic in the neutron capture region because of incorrect data regarding the emitted gamma energy. However, PHITS + TENDL2015 can calculate PKA spectra and heating numbers correctly.

  15. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for recoil cross section spectra under neutron irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Iwamoto, Yosuke, E-mail: iwamoto.yosuke@jaea.go.jp; Ogawa, Tatsuhiko

    2017-04-01

    Because primary knock-on atoms (PKAs) create point defects and clusters in materials that are irradiated with neutrons, it is important to validate the calculations of recoil cross section spectra that are used to estimate radiation damage in materials. Here, the recoil cross section spectra of fission- and fusion-relevant materials were calculated using the Event Generator Mode (EGM) of the Particle and Heavy Ion Transport code System (PHITS) and also using the data processing code NJOY2012 with the nuclear data libraries TENDL2015, ENDF/BVII.1, and JEFF3.2. The heating number, which is the integral of the recoil cross section spectra, was also calculated using PHITS-EGM and compared with data extracted from the ACE files of TENDL2015, ENDF/BVII.1, and JENDL4.0. In general, only a small difference was found between the PKA spectra of PHITS + TENDL2015 and NJOY + TENDL2015. From analyzing the recoil cross section spectra extracted from the nuclear data libraries using NJOY2012, we found that the recoil cross section spectra were incorrect for 72Ge, 75As, 89Y, and 109Ag in the ENDF/B-VII.1 library, and for 90Zr and 55Mn in the JEFF3.2 library. From analyzing the heating number, we found that the data extracted from the ACE file of TENDL2015 for all nuclides were problematic in the neutron capture region because of incorrect data regarding the emitted gamma energy. However, PHITS + TENDL2015 can calculate PKA spectra and heating numbers correctly.

  16. Misconception regarding conventional coupling of fields and particles in XFEL codes

    Energy Technology Data Exchange (ETDEWEB)

    Geloni, Gianluca [Europeam XFEL GmbH, Hamburg (Germany); Kocharyan, Vitali; Saldin, Evgeni [DESY Hamburg (Germany)

    2016-01-15

    Maxwell theory is usually treated in the laboratory frame under the standard time order, that is, the usual light-signal clock synchronization. In contrast, particle tracking in the laboratory frame usually treats time as an independent variable. As a result, here we argue that the evolution of electron beams is usually treated according to the absolute time convention, i.e. using a different time order defined by a non-standard clock synchronization procedure. This essential point has never received attention in the accelerator community. There are two possible ways of coupling fields and particles in this situation. The first, Lorentz's prerelativistic way, consists in a 'translation' of Maxwell's electrodynamics to the absolute time world-picture. The second, Einstein's way, consists in a 'translation' of particle tracking results to the electromagnetic world-picture, obeying the standard time order. Conventional particle tracking shows that the electron beam direction changes after a transverse kick, while the orientation of the microbunching phase front stays unvaried. Here we show that in the ultrarelativistic asymptote v → c, the orientation of the planes of simultaneity, i.e. the orientation of the microbunching fronts, is always perpendicular to the electron beam velocity when the evolution of the modulated electron beam is treated under Einstein's time order. This effect allows for the production of coherent undulator radiation from a modulated electron beam in the kicked direction without suppression. We hold a recent FEL study at the LCLS as direct experimental evidence that the microbunching wavefront indeed readjusts its direction after the electron beam is kicked by a large angle, limited only by the beamline aperture. In a previous paper we quantitatively described this result invoking the aberration of light effect, which corresponds to Lorentz's way of coupling fields and particles. The purpose of

  17. Sliding Wear Properties of Hybrid Aluminium Composite Reinforced by Particles of Palm Shell Activated Carbon and Slag

    Directory of Open Access Journals (Sweden)

    Zamri Yusoff

    2011-09-01

    In the present work, dry sliding wear tests were conducted on hybrid composites reinforced with natural carbon-based particles such as palm shell activated carbon (PSAC) and slag. Hybrid composites containing 5-20 wt.% of both reinforcements, with average particle sizes of about 125 μm, were prepared by a conventional powder metallurgy technique, which involves the steps of mixing, compacting and sintering. Dry sliding experiments were conducted in air at room temperature using a self-built pin-on-disc rig attached to a polishing machine. The disc, which acted as the mating surface material, was made of mild steel (120 HV) cut from commercial mild steel sheet (2 mm thickness) into a 100 mm diameter. The influence of the applied load was investigated under a constant sliding velocity of 0.1 m/s with applied loads of 3 N, 11 N and 51 N. The contributions of the reinforcement content, the applied load and the sliding distance to the wear process and the wear rate have been investigated. The contributions of synergic factors such as applied load, sliding distance and reinforcement content (wt.%) have been studied using analysis of variance (ANOVA). All synergic factors contribute to the wear process of all tested composites. Among the synergic factors, the applied load makes the highest contribution to the wear process in both composites (Al/PSAC and Al/Slag) and in the hybrid composite. The degree of improvement of the wear resistance of the hybrid composite is strongly dependent on the reinforcement content.

  18. Microdosimetry of alpha particles for simple and 3D voxelised geometries using MCNPX and Geant4 Monte Carlo codes.

    Science.gov (United States)

    Elbast, M; Saudo, A; Franck, D; Petitot, F; Desbrée, A

    2012-07-01

    Microdosimetry using Monte Carlo simulation is a suitable technique to describe the stochastic nature of energy deposition by alpha particles at the cellular level. Because of its short range, the energy imparted by this particle to the targets is highly non-uniform. Thus, to achieve accurate dosimetric results, the modelling of the geometry should be as realistic as possible. The objectives of the present study were to validate the use of the MCNPX and Geant4 Monte Carlo codes for microdosimetric studies using simple and three-dimensional voxelised geometries and to study their limit of validity in the latter case. To that aim, the specific energy (z) deposited in the cell nucleus, the single-hit density of specific energy f(1)(z) and the mean specific energy were calculated. Results show a good agreement when compared with the literature using simple geometry. The maximum percentage difference found between Geant4 and MCNPX is limited, but the calculation time is 10 times higher with Geant4 than with the MCNPX code under the same conditions.

  19. Interplay of Internal Structure and Interfaces on the Emitting Properties of Hybrid ZnO Hierarchical Particles.

    Science.gov (United States)

    Distaso, Monica; Bertoni, Giovanni; Todisco, Stefano; Marras, Sergio; Gallo, Vito; Manna, Liberato; Peukert, Wolfgang

    2017-05-03

    The design of hybrid organic/inorganic nanostructures with controlled assembly drives the development of materials with new or improved properties and superior performances. In this paper, the surface and internal structure of hybrid ZnO poly-N-vinylpyrrolidone (ZnO/PVP) mesocrystals are investigated in detail and correlated with their emitting properties. A photoluminescence study at room temperature reveals that the as-synthesized particles show a remarkable ultraviolet (UV) emission, whereas an emission from defects in the visible region is not observed. On the other hand, a visible emission is achieved upon calcination of the hybrid ZnO/PVP particles in air, and its intensity is found to increase with the calcination temperature and, in some cases, to overwhelm the UV emission. A molecular description is proposed for the absence of a visible emission from defects in the as-synthesized ZnO/PVP mesocrystals on the basis of Fourier transform infrared (FTIR) and solid-state (13)C NMR (SSNMR) spectroscopy. An in-depth electron microscopy study sheds light on the internal organization of mesocrystals and reveals the formation of nanoreactors, that is, particles with enclosed porosity, upon thermal treatment.

  20. Dry powder inhaler formulation of lipid-polymer hybrid nanoparticles via electrostatically-driven nanoparticle assembly onto microscale carrier particles.

    Science.gov (United States)

    Yang, Yue; Cheow, Wean Sin; Hadinoto, Kunn

    2012-09-15

    Lipid-polymer hybrid nanoparticles have emerged as promising nanoscale carriers of therapeutics as they combine the attractive characteristics of liposomes and polymers. Herein we develop dry powder inhaler (DPI) formulation of hybrid nanoparticles composed of poly(lactic-co-glycolic acid) and soybean lecithin as the polymer and lipid constituents, respectively. The hybrid nanoparticles are transformed into inhalable microscale nanocomposite structures by a novel technique based on electrostatically-driven adsorption of nanoparticles onto polysaccharide carrier particles, which eliminates the drawbacks of conventional techniques based on controlled drying (e.g. nanoparticle-specific formulation, low yield). First, we engineer polysaccharide carrier particles made up of chitosan cross-linked with tripolyphosphate and dextran sulphate to exhibit the desired aerosolization characteristics and physical robustness. Second, we investigate the effects of nanoparticle to carrier mass ratio and salt inclusion on the adsorption efficiency, in terms of the nanoparticle loading and yield, from which the optimal formulation is determined. Desorption of the nanoparticles from the carrier particles in phosphate buffer saline is also examined. Lastly, we characterize aerosolization efficiency of the nanocomposite product in vitro, where the emitted dose and respirable fraction are found to be comparable to the values of conventional DPI formulations.

  1. On the performance of hybrid-ARQ with incremental redundancy and with code combining over relay channels

    KAUST Repository

    Chelli, Ali

    2013-08-01

    In this paper, we consider a relay network consisting of a source, a relay, and a destination. The source transmits a message to the destination using hybrid automatic repeat request (HARQ). The relay overhears the transmitted messages over the different HARQ rounds and tries to decode the data packet. In case of successful decoding at the relay, both the relay and the source cooperate to transmit the message to the destination. The channel realizations are independent for different HARQ rounds. We assume that the transmitter has no channel state information (CSI). Under such conditions, power and rate adaptation are not possible. To overcome this problem, HARQ allows the implicit adaptation of the transmission rate to the channel conditions by the use of feedback. There are two major HARQ techniques, namely HARQ with incremental redundancy (IR) and HARQ with code combining (CC). We investigate the performance of HARQ-IR and HARQ-CC over a relay channel from an information theoretic perspective. Analytical expressions are derived for the information outage probability, the average number of transmissions, and the average transmission rate. We illustrate through our investigation the benefit of relaying. We also compare the performance of HARQ-IR and HARQ-CC and show that HARQ-IR outperforms HARQ-CC. © 2013 IEEE.
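
    The difference between the two HARQ schemes can be made concrete with a small Monte Carlo sketch: under HARQ-IR the mutual information of the individual rounds accumulates, whereas under HARQ-CC the received SNRs accumulate before a single decoding attempt. The sketch below evaluates the corresponding outage events for a point-to-point Rayleigh-fading link only; the relay cooperation analysed in the paper is omitted, and the rate, SNR and round numbers are illustrative assumptions.

```python
# Monte Carlo sketch of the HARQ outage criteria compared in the abstract:
# incremental redundancy (IR) accumulates mutual information across rounds,
# while code combining (CC) accumulates SNR. Point-to-point Rayleigh fading
# only; the relay cooperation analysed in the paper is not modeled here.
import numpy as np

rng = np.random.default_rng(3)
R, snr_db, max_rounds, trials = 4.0, 10.0, 4, 200_000   # rate (bits/s/Hz), mean SNR (dB)
snr = 10 ** (snr_db / 10)
gamma = snr * rng.exponential(size=(trials, max_rounds)) # per-round instantaneous SNR

# IR: outage if the accumulated mutual information never reaches R.
mi_ir = np.cumsum(np.log2(1 + gamma), axis=1)
out_ir = (mi_ir[:, -1] < R).mean()

# CC: outage if log2(1 + accumulated SNR) never reaches R.
mi_cc = np.log2(1 + np.cumsum(gamma, axis=1))
out_cc = (mi_cc[:, -1] < R).mean()

print(f"outage after {max_rounds} rounds: IR {out_ir:.4f}  CC {out_cc:.4f}")
# Since log2(1 + sum) <= sum of log2(1 + .), IR outage is never larger than
# CC outage, consistent with HARQ-IR outperforming HARQ-CC.
```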

  2. On the Numerical Dispersion of Electromagnetic Particle-In-Cell Code : Finite Grid Instability

    CERN Document Server

    Meyers, M D; Zeng, Y; Yi, S A; Albright, B J

    2014-01-01

    The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the electromagnetic PIC algorithm to analyze the origin of these instabilities. We rigorously derive the faithful 3D numerical dispersion of the PIC algorithm, and then specialize to the Yee FDTD scheme. In particular, we account for the manner in which the PIC algorithm updates and samples the fields and distribution function. Temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme are also explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1D dispersion relation for a ...

  3. Electron cloud studies for CERN particle accelerators and simulation code development

    OpenAIRE

    Iadarola, Giovanni

    2014-01-01

    In a particle accelerator free electrons in the beam chambers can be generated by different mechanisms like the ionization of the residual gas or the photoemission from the chamber’s wall due to the synchrotron radiation emitted by the beam. The electromagnetic field of the beam can accelerate these electrons and project them onto the chamber’s wall. According to their impact energy and to the Secondary Electron Yield (SEY) of the surface, secondary electrons can be generated. Especially...

  4. Fluid-particle hybrid simulation on the transports of plasma, recycling neutrals, and carbon impurities in the Korea Superconducting Tokamak Advanced Research divertor region

    Science.gov (United States)

    Kim, Deok-Kyu; Hong, Sang Hee

    2005-06-01

    A two-dimensional simulation model, developed in a self-consistent way for the analysis of the fully coupled transport of plasma, recycling neutrals, and intrinsic carbon impurities in the divertor domain of tokamaks, is presented. The numerical model coupling the transport of the three major species in the tokamak edge is based on a fluid-particle hybrid approach where the plasma is described as a single magnetohydrodynamic fluid while the neutrals and impurities are treated as kinetic particles using the Monte Carlo technique. This simulation code is applied to the KSTAR (Korea Superconducting Tokamak Advanced Research) tokamak [G. S. Lee, J. Kim, S. M. Hwang et al., Nucl. Fusion 40, 575 (2000)] to calculate the peak heat flux on the divertor plate and to explore the divertor plasma behavior depending on the upstream conditions in its baseline operation mode for various values of input heating power and separatrix plasma density. The numerical modeling for the KSTAR tokamak shows that its full-powered operation is subject to peak heat loads on the divertor plate exceeding an engineering limit, and reveals that a recycling zone is formed in front of the divertor as the plasma density increases and the power flow into the scrape-off layer is reduced. Compared with other researchers' work, the present hybrid simulation more rigorously reproduces the severe electron pressure losses along field lines in the presence of the recycling zone, accounting for the transitions between the sheath-limited and the detached divertor regimes. The substantial profile changes in carbon impurity population and ionic composition also represent the key features of this divertor regime transition.

  5. HOTB update: Parallel code for calculation of three- and four-particle harmonic oscillator transformation brackets and their matrices using OpenMP

    Science.gov (United States)

    Germanas, D.; Stepšys, A.; Mickevičius, S.; Kalinauskas, R. K.

    2017-06-01

    This is a new version of the HOTB code designed to calculate three- and four-particle harmonic oscillator (HO) transformation brackets and their matrices. The new version uses the OpenMP parallel programming standard for the calculation of harmonic oscillator transformation brackets. A package of Fortran code is presented. The calculation time for large matrices, orthogonality conditions and arrays of coefficients can be significantly reduced using the efficient parallel code. Other functionalities of the original code (for example, the calculation of single harmonic oscillator brackets) have not been modified.

  6. MC21 v.6.0 - A Continuous-Energy Monte Carlo Particle Transport Code with Integrated Reactor Feedback Capabilities

    Science.gov (United States)

    Griesheimer, D. P.; Gill, D. F.; Nease, B. R.; Sutton, T. M.; Stedry, M. H.; Dobreff, P. S.; Carpenter, D. C.; Trumbull, T. H.; Caro, E.; Joo, H.; Millman, D. L.

    2014-06-01

    MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10^-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information. Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each

  7. Modeling secondary particle tracks generated by intermediate- and low-energy protons in water with the Low-Energy Particle Track Simulation code

    Science.gov (United States)

    Verkhovtsev, Alexey; Traore, Ali; Muñoz, Antonio; Blanco, Francisco; García, Gustavo

    2017-01-01

    Using a recent extension of the Low-Energy Particle Track Simulation (LEPTS) Monte Carlo code, we model the slowing-down of heavy charged particles propagating in water, combined with an explicit molecular-level description of radiation effects due to the formation of secondary electrons, their propagation through the medium, and electron-induced molecular dissociations. As a case study, we consider the transport of protons with the initial energy of 1 MeV until their thermalization, so that we cover the energy range that contributes mainly to the energy deposition in the Bragg peak region. In order to include protons into the simulation procedure, a comprehensive dataset of integral and differential cross sections of elastic and inelastic scattering of intermediate- and low-energy protons from water molecules is created. Experimental and theoretical cross sections available in the literature are carefully examined, compared and verified. The ionization cross section by protons includes recent experimental measurements of the production of different charged fragments.

  8. On the Numerical Dispersion of the Electromagnetic Particle-In-Cell Code: Finite Grid Instability

    Science.gov (United States)

    Meyers, M. D.; Huang, C.-K.; Zeng, Y.; Yi, S.; Albright, B. J.

    2014-10-01

    The widely used Particle-In-Cell (PIC) method in relativistic particle beam and laser plasma modeling is subject to numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We rigorously derive the faithful 3D PIC numerical dispersion relation, and specialize to the Yee FDTD scheme. The manner in which the PIC algorithm updates and samples the fields and distribution function, along with any temporal and spatial phase factors, is accounted for. Numerical solutions to the 1D dispersion relation are obtained for parameters of interest. We investigate how the finite grid instability arises from the interaction of the numerical modes admitted in the system and their aliases. The most significant interaction is due critically to the correct placement of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rates due to these interactions.

  9. On the Numerical Dispersion of Electromagnetic Particle-In-Cell Code : Finite Grid Instability

    Energy Technology Data Exchange (ETDEWEB)

    Meyers, Michael David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Univ. of California, Los Angeles, CA (United States) Dept. of Physics and Astronomy; Huang, Chengkun [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Zeng, Yong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Yi, Sunghwan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Albright, Brian James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-07-15

    The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the electromagnetic PIC algorithm to analyze the origin of these instabilities. We rigorously derive the faithful 3D numerical dispersion of the PIC algorithm, and then specialize to the Yee FDTD scheme. In particular, we account for the manner in which the PIC algorithm updates and samples the fields and distribution function. Temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme are also explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical 1D modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction.

  10. Combining cell-based hydrodynamics with hybrid particle-field simulations: efficient and realistic simulation of structuring dynamics.

    Science.gov (United States)

    Sevink, G J A; Schmid, F; Kawakatsu, T; Milano, G

    2017-02-22

    We have extended an existing hybrid MD-SCF simulation technique that employs a coarsening step to enhance the computational efficiency of evaluating non-bonded particle interactions. This technique is conceptually equivalent to the single chain in mean-field (SCMF) method in polymer physics, in the sense that non-bonded interactions are derived from the non-ideal chemical potential in self-consistent field (SCF) theory, after a particle-to-field projection. In contrast to SCMF, however, MD-SCF evolves particle coordinates by the usual Newton's equation of motion. Since collisions are seriously affected by the softening of non-bonded interactions that originates from their evaluation at the coarser continuum level, we have devised a way to reinsert the effect of collisions on the structural evolution. Merging MD-SCF with multi-particle collision dynamics (MPCD), we mimic particle collisions at the level of computational cells and at the same time properly account for the momentum transfer that is important for a realistic system evolution. The resulting hybrid MD-SCF/MPCD method was validated for a particular coarse-grained model of phospholipids in aqueous solution, against reference full-particle simulations and the original MD-SCF model. We additionally implemented and tested an alternative and more isotropic finite difference gradient. Our results show that efficiency is improved by merging MD-SCF with MPCD, as properly accounting for hydrodynamic interactions considerably speeds up the phase separation dynamics, with negligible additional computational costs compared to efficient MD-SCF. This new method enables realistic simulations of large-scale systems that are needed to investigate the applications of self-assembled structures of lipids in nanotechnologies.

  11. Variability of particle number emissions from diesel and hybrid diesel-electric buses in real driving conditions.

    Science.gov (United States)

    Sonntag, Darrell B; Gao, H Oliver; Holmén, Britt A

    2008-08-01

    A linear mixed model was developed to quantify the variability of particle number emissions from transit buses tested in real-world driving conditions. Two conventional diesel buses and two hybrid diesel-electric buses were tested throughout 2004 under different aftertreatments, fuels, drivers, and bus routes. The mixed model controlled the confounding influence of factors inherent to on-board testing. Statistical tests showed that particle number emissions varied significantly according to the aftertreatment, bus route, driver, bus type, and daily temperature, with only minor variability attributable to differences between fuel types. The daily setup and operation of the sampling equipment (electrical low pressure impactor) and mini-dilution system contributed to 30-84% of the total random variability of particle measurements among tests with diesel oxidation catalysts. By controlling for the sampling day variability, the model better defined the differences in particle emissions among bus routes. In contrast, the low particle number emissions measured with diesel particle filters (decreased by over 99%) did not vary according to operating conditions or bus type but did vary substantially with ambient temperature.
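
    A hypothetical sketch of such a linear mixed model is given below, with fixed effects for the operating factors named in the abstract and a random intercept for the sampling day. The data file, column names and response transformation are placeholders; the paper's exact model specification may differ.

```python
# Hypothetical linear mixed model for particle number emissions: fixed effects
# for the operating factors named in the abstract and a random intercept for
# the sampling day. The file and column names are placeholders, not the
# authors' dataset or exact specification.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bus_particle_tests.csv")   # assumed layout: one row per test

model = smf.mixedlm(
    "log_particle_number ~ aftertreatment + route + driver + bus_type + temperature",
    data=df,
    groups=df["sampling_day"],               # random variability between test days
)
result = model.fit()
print(result.summary())
```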

  12. On the numerical dispersion of electromagnetic particle-in-cell code: Finite grid instability

    Science.gov (United States)

    Meyers, M. D.; Huang, C.-K.; Zeng, Y.; Yi, S. A.; Albright, B. J.

    2015-09-01

    The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the Electromagnetic PIC model. We rigorously derive the faithful 3-D numerical dispersion relation of the PIC model, for a simple, direct current deposition scheme, which does not conserve electric charge exactly. We then specialize to the Yee FDTD scheme. In particular, we clarify the presence of alias modes in an eigenmode analysis of the PIC model, which combines both discrete and continuous variables. The manner in which the PIC model updates and samples the fields and distribution function, together with the temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme, is explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1-D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction, which is then verified by simulation. We demonstrate that our analysis is readily extendable to charge conserving models.

  13. NMSDECAY: A Fortran code for supersymmetric particle decays in the Next-to-Minimal Supersymmetric Standard Model

    Science.gov (United States)

    Das, Debottam; Ellwanger, Ulrich; Teixeira, Ana M.

    2012-03-01

    The code NMSDECAY allows one to compute widths and branching ratios of sparticle decays in the Next-to-Minimal Supersymmetric Standard Model. It is based on a generalization of SDECAY, to include the extended Higgs and neutralino sectors of the NMSSM. Slepton 3-body decays, possibly relevant in the case of a singlino-like lightest supersymmetric particle, have been added. NMSDECAY will be part of the NMSSMTools package, which computes Higgs and sparticle masses and Higgs decays in the NMSSM. Program summary. Program title: NMSDECAY Catalogue identifier: AELC_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 188 177 No. of bytes in distributed program, including test data, etc.: 1 896 478 Distribution format: tar.gz Programming language: FORTRAN77 Computer: All supporting g77, gfortran, ifort Operating system: All supporting g77, gfortran, ifort Classification: 11.1 External routines: Routines in the NMSSMTools package: At least one of the routines in the directory main (e.g. nmhdecay.f), all routines in the directory sources. (All software is included in the distribution package.) Nature of problem: Calculation of all decay widths and decay branching fractions of all particles in the Next-to-Minimal Supersymmetric Standard Model. Solution method: Suitable generalization of the code SDECAY [1] including the extended Higgs and neutralino sector of the Next-to-Minimal Supersymmetric Standard Model, and slepton 3-body decays. Additional comments: NMSDECAY is interfaced with NMSSMTools, available on the web page http://www.th.u-psud.fr/NMHDECAY/nmssmtools.html. Running time: On an Intel Core i7 with 2.8 GHz: about 2 seconds per point in parameter space, if all flags flagqcd, flagmulti and flagloop are switched on.

  14. Structural, thermal and ion transport studies of different particle size nanocomposite fillers incorporated PVdF-HFP hybrid membranes

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, G. Gnana [Specialized Graduate School of Hydrogen and Fuel Cell Engineering, Chonbuk National University, Jeonju 561-756 (Korea, Republic of); Kim, Pil [Specialized Graduate School of Hydrogen and Fuel Cell Engineering, Chonbuk National University, Jeonju 561-756 (Korea, Republic of); School of Chemical Engineering and Technology, Chonbuk National University, Jeonju 561-756 (Korea, Republic of); Kim, Ae Rhan [Specialized Graduate School of Hydrogen and Fuel Cell Engineering, Chonbuk National University, Jeonju 561-756 (Korea, Republic of); Nahm, Kee Suk [Specialized Graduate School of Hydrogen and Fuel Cell Engineering, Chonbuk National University, Jeonju 561-756 (Korea, Republic of); School of Chemical Engineering and Technology, Chonbuk National University, Jeonju 561-756 (Korea, Republic of)], E-mail: nahmks@chonbuk.ac.kr; Elizabeth, R. Nimma [Department of Physics, Lady Doak College, Madurai 625002 (India)

    2009-05-15

    Organic-inorganic hybrid membranes based on poly(vinylidene fluoride-co-hexafluoropropylene) (PVdF-HFP)/sulfosuccinic acid were fabricated with silica particles of different nanometer sizes. Morphological images reveal the ceramic filler embedded throughout the membrane. Structural characterizations by FT-IR and XPS confirm the inclusion of sulfosuccinic acid and silica in the PVdF-HFP polymer matrix. Sulfonic acid groups promote higher IEC values and greater swelling behavior. The silica content in the hybrid membranes had a strong effect on the crystalline character as well as the thermal properties of the membranes. A decrease in the filler size creates an effective polymer-filler interface and promotes the protonic conductivity of the membranes. High conductivities in the range of 10^-2 to 10^-3 S cm^-1 were achieved through synergistic interactions between the organic and inorganic moieties of the hybrid membranes. Owing to these features, the prepared hybrid membranes are promising candidates for fuel cell applications.

  15. Pseudospectral Maxwell solvers for an accurate modeling of Doppler harmonic generation on plasma mirrors with particle-in-cell codes

    Science.gov (United States)

    Blaclard, G.; Vincenti, H.; Lehe, R.; Vay, J. L.

    2017-09-01

    With the advent of petawatt class lasers, the very large laser intensities attainable on target should enable the production of intense high-order Doppler harmonics from relativistic laser-plasma mirror interactions. At present, the modeling of these harmonics with particle-in-cell (PIC) codes is extremely challenging as it implies an accurate description of tens to hundreds of harmonic orders on a broad range of angles. In particular, we show here that due to the numerical dispersion of waves they induce in vacuum, standard finite difference time domain (FDTD) Maxwell solvers employed in most PIC codes can induce a spurious angular deviation of harmonic beams potentially degrading simulation results. This effect was extensively studied and a simple toy model based on the Snell-Descartes law was developed that allows us to finely predict the angular deviation of harmonics depending on the spatiotemporal resolution and the Maxwell solver used in the simulations. Our model demonstrates that the mitigation of this numerical artifact with FDTD solvers mandates very high spatiotemporal resolution preventing realistic three-dimensional (3D) simulations even on the largest computers available at the time of writing. We finally show that nondispersive pseudospectral analytical time domain solvers can considerably reduce the spatiotemporal resolution required to mitigate this spurious deviation and should enable in the near future 3D accurate modeling on supercomputers in a realistic time to solution.
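
    The vacuum dispersion error of the standard Yee FDTD scheme that underlies this artifact can be illustrated with the textbook 2D numerical dispersion relation, solved below for the numerical phase velocity as a function of propagation angle. The resolution and CFL factor are arbitrary choices; this is not the solver of the paper, only a sketch of the effect it mitigates.

```python
# Numerical phase velocity of the 2D Yee FDTD scheme versus propagation angle,
# illustrating the vacuum dispersion that the paper identifies as the source
# of the spurious angular deviation of harmonics. Textbook dispersion relation;
# the resolution numbers below are arbitrary.
import numpy as np
from scipy.optimize import brentq

c = 1.0
dx = dy = 1.0 / 20.0                 # 20 cells per wavelength (lambda = 1)
dt = 0.95 * dx / (c * np.sqrt(2))    # just below the 2D CFL limit
k = 2 * np.pi                        # wavenumber of the resolved wavelength

def dispersion(omega, theta):
    # Yee scheme: [sin(w*dt/2)/(c*dt)]^2 = [sin(kx*dx/2)/dx]^2 + [sin(ky*dy/2)/dy]^2
    kx, ky = k * np.cos(theta), k * np.sin(theta)
    lhs = (np.sin(omega * dt / 2) / (c * dt)) ** 2
    rhs = (np.sin(kx * dx / 2) / dx) ** 2 + (np.sin(ky * dy / 2) / dy) ** 2
    return lhs - rhs

for theta in np.linspace(0.0, np.pi / 4, 5):
    omega = brentq(dispersion, 1e-6, np.pi / dt - 1e-9, args=(theta,))
    print(f"angle {np.degrees(theta):5.1f} deg  v_phase/c = {omega / (k * c):.5f}")
```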

  16. Medium-energy electrons and heavy ions in Jupiter's magnetosphere - Effects of lower hybrid wave-particle interactions

    Science.gov (United States)

    Barbosa, D. D.

    1986-01-01

    A theory of medium-energy (about keV) electrons and heavy ions in Jupiter's magnetosphere is presented. Lower hybrid waves are generated by the combined effects of a ring instability of neutral wind pickup ions and the modified two-stream instability associated with transport of cool Iogenic plasma. The quasi-linear energy diffusion coefficient for lower hybrid wave-particle interactions is evaluated, and several solutions to the diffusion equation are given. Calculations based on measured wave properties show that the noise substantially modifies the particle distribution functions. The effects are to accelerate superthermal ions and electrons to keV energies and to thermalize the pickup ions on time scales comparable to the particle residence time. The S(2+)/S(+) ratio at medium energies is a measure of the relative contribution from Iogenic thermal plasma and neutral wind ions, and this important quantity should be determined from future measurements. The theory also predicts a preferential acceleration of heavy ions with an acceleration time that scales inversely with the square root of the ion mass. Electrons accelerated by the process contribute to further reionization of the neutral wind by electron impact, thus providing a possible confirmation of Alfvén's critical velocity effect in the Jovian magnetosphere.

  17. Development of 2D particle-in-cell code to simulate high current, low energy beam in a beam transport system

    Indian Academy of Sciences (India)

    S C L Srivastava; S V L S Rao; P Singh

    2007-10-01

    A code for 2D space-charge dominated beam dynamics study in beam transport lines is developed. The code is used for particle-in-cell (PIC) simulation of -uniform beam in a channel containing solenoids and drift space. It can also simulate a transport line where quadrupoles are used for focusing the beam. Numerical techniques as well as the results of beam dynamics studies are presented in the paper.

  18. Multi-responsive hybrid particles: thermo-, pH-, photo-, and magneto-responsive magnetic hydrogel cores with gold nanorod optical triggers.

    Science.gov (United States)

    Rittikulsittichai, Supparesk; Kolhatkar, Arati G; Sarangi, Subhasis; Vorontsova, Maria A; Vekilov, Peter G; Brazdeikis, Audrius; Randall Lee, T

    2016-06-01

    The research strategy described in this manuscript harnesses the attractive properties of hydrogels, gold nanorods (Aurods), and magnetic nanoparticles (MNPs) by synthesizing one unique multi-responsive nanostructure. This novel hybrid structure consists of silica-coated magnetic particles encapsulated within a thermo-responsive P(NIPAM-co-AA) hydrogel network on which Aurods are assembled. Furthermore, this research demonstrates that these composite particles respond to several forms of external stimuli (temperature, pH, light, and/or applied magnetic field) owing to their specific architecture. Exposure of the hybrid particles to external stimuli led to a systematic and reversible variation in the hydrodynamic diameter (swelling-deswelling) and thus in the optical properties of the hybrid particles (red-shifting of the plasmon band). Such stimuli-responsive volume changes can be effectively exploited in drug-delivery applications.

  19. Wear Characteristics of Hybrid Composites Based on ZA27 Alloy Reinforced With Silicon Carbide and Graphite Particles

    Directory of Open Access Journals (Sweden)

    S. Mitrović

    2014-06-01

    Full Text Available The paper presents the wear characteristics of a hybrid composite based on zinc-aluminium ZA27 alloy, reinforced with silicon-carbide and graphite particles. The tested sample contains 5 vol.% of SiC and 3 vol.% Gr particles. The compocasting technique has been used to prepare the samples. The experiments were performed on a “block-on-disc” tribometer under conditions of dry sliding. The wear volumes of the alloy and the composite were determined by varying the normal loads and sliding speeds. The paper contains the procedure for preparation of the sample composites and the microstructure of the composite material and the base ZA27 alloy. The wear surface of the composite material was examined using scanning electron microscopy (SEM) and energy dispersive spectrometry (EDS). Conclusions were obtained based on the observed impact of the sliding speed, normal load and sliding distance on the tribological behaviour of the observed composite.

  20. Multi-responsive hybrid particles: thermo-, pH-, photo-, and magneto-responsive magnetic hydrogel cores with gold nanorod optical triggers

    Science.gov (United States)

    Rittikulsittichai, Supparesk; Kolhatkar, Arati G.; Sarangi, Subhasis; Vorontsova, Maria A.; Vekilov, Peter G.; Brazdeikis, Audrius; Randall Lee, T.

    2016-06-01

    The research strategy described in this manuscript harnesses the attractive properties of hydrogels, gold nanorods (Aurods), and magnetic nanoparticles (MNPs) by synthesizing one unique multi-responsive nanostructure. This novel hybrid structure consists of silica-coated magnetic particles encapsulated within a thermo-responsive P(NIPAM-co-AA) hydrogel network on which Aurods are assembled. Furthermore, this research demonstrates that these composite particles respond to several forms of external stimuli (temperature, pH, light, and/or applied magnetic field) owing to their specific architecture. Exposure of the hybrid particles to external stimuli led to a systematic and reversible variation in the hydrodynamic diameter (swelling-deswelling) and thus in the optical properties of the hybrid particles (red-shifting of the plasmon band). Such stimuli-responsive volume changes can be effectively exploited in drug-delivery applications. Electronic supplementary information (ESI) available: Contains detailed information about the synthesis of

  1. Biomimetic synthesis of raspberry-like hybrid polymer-silica core-shell nanoparticles by templating colloidal particles with hairy polyamine shell.

    Science.gov (United States)

    Pi, Mengwei; Yang, Tingting; Yuan, Jianjun; Fujii, Syuji; Kakigi, Yuichi; Nakamura, Yoshinobu; Cheng, Shiyuan

    2010-07-01

    The nanoparticles composed of polystyrene core and poly[2-(diethylamino)ethyl methacrylate] (PDEA) hairy shell were used as colloidal templates for in situ silica mineralization, allowing the well-controlled synthesis of hybrid silica core-shell nanoparticles with raspberry-like morphology and hollow silica nanoparticles by subsequent calcination. Silica deposition was performed by simply stirring a mixture of the polymeric core-shell particles in isopropanol, tetramethyl orthosilicate (TMOS) and water at 25 degrees C for 2.5h. No experimental evidence was found for nontemplated silica formation, which indicated that silica deposition occurred exclusively in the PDEA shell and formed PDEA-silica hybrid shell. The resulting hybrid silica core-shell particles were characterized by transmission electron microscopy (TEM), thermogravimetry, aqueous electrophoresis, and X-ray photoelectron spectroscopy. TEM studies indicated that the hybrid particles have well-defined core-shell structure with raspberry morphology after silica deposition. We found that the surface nanostructure of hybrid nanoparticles and the composition distribution of PDEA-silica hybrid shell could be well controlled by adjusting the silicification conditions. These new hybrid core-shell nanoparticles and hollow silica nanoparticles would have potential applications for high-performance coatings, encapsulation and delivery of active organic molecules.

  2. Multi-objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization

    Science.gov (United States)

    Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah

    2017-01-01

    Flexible manufacturing system (FMS) enhances the firm’s flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs’ battery charge. Assessment of the numerical examples’ scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software. PMID:28263994

  3. Multi-objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization.

    Science.gov (United States)

    Mousavi, Maryam; Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah

    2017-01-01

    Flexible manufacturing system (FMS) enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software.
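
    A minimal, hedged sketch of the hybrid GA-PSO idea described above is given below. It applies standard PSO velocity/position updates followed by GA-style arithmetic crossover and Gaussian mutation in each generation, on a toy continuous objective rather than the AGV scheduling model of the paper; all parameter values are assumptions.

        import numpy as np

        # Minimal hybrid GA-PSO sketch on a toy objective (sphere function).
        # This is NOT the AGV scheduling model of the paper; it only illustrates
        # the generic combination of PSO updates with GA crossover/mutation.
        rng = np.random.default_rng(0)

        def objective(x):                      # toy objective to minimize
            return np.sum(x ** 2, axis=-1)

        dim, n_particles, n_iter = 5, 30, 200
        w, c1, c2 = 0.7, 1.5, 1.5              # PSO inertia and acceleration weights
        p_cross, p_mut = 0.7, 0.1              # GA operator probabilities (assumed)

        x = rng.uniform(-5, 5, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), objective(x)
        gbest = pbest[np.argmin(pbest_f)].copy()

        for _ in range(n_iter):
            # PSO step
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            # GA step: arithmetic crossover between neighbouring particles
            for i in range(0, n_particles - 1, 2):
                if rng.random() < p_cross:
                    alpha = rng.random()
                    a, b = x[i].copy(), x[i + 1].copy()
                    x[i] = alpha * a + (1 - alpha) * b
                    x[i + 1] = alpha * b + (1 - alpha) * a
            # GA step: Gaussian mutation
            mask = rng.random(x.shape) < p_mut
            x = x + mask * rng.normal(0.0, 0.5, x.shape)
            # update personal and global bests
            f = objective(x)
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[np.argmin(pbest_f)].copy()

        print("best objective found:", objective(gbest))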

  4. Robust hybrid raspberry-like hollow particles with complex structures: a facile method of swelling polymerization towards composite spheres.

    Science.gov (United States)

    Zhang, Xu; Yao, Xiaohui; Wang, Xiaomei; Feng, Lei; Qu, Jiayan; Liu, Pange

    2014-02-14

    Novel, robust, hybrid raspberry-like TiO2/PS hollow particles with complex double-shelled structures have been fabricated in large quantities by a facile swelling polymerization approach based on commercially available hollow polystyrene (PS) spheres. The crosslinked PS protrusions are wedged firmly into the TiO2 shell, making the resultant particles both chemically and mechanically robust. By simply tuning the monomer concentration, the hierarchical morphology (the size and number of protrusions) of the surfaces can be well controlled. Due to the dual-sized hierarchical morphology, the particulate coating possesses superhydrophobicity (water contact angle ≈ 161°). Moreover, the well-compartmentalized character is similar to that of typical Janus particles. The special particles with interfacial activity can stabilize water-in-toluene (w/o) emulsions well. Meanwhile, a TiO2 double-shelled hollow sphere with a complex structure is achieved by calcination or solvent treatment. All these unique features, derived from a readily available method, will endow the products with a broader range of applications.

  5. A CASE FOR HYBRID INSTRUCTION ENCODING FOR REDUCING CODE SIZE IN EMBEDDED SYSTEM-ON-CHIPS BASED ON RISC PROCESSOR CORES

    Directory of Open Access Journals (Sweden)

    Govindarajalu Bakthavatsalam

    2014-01-01

    Full Text Available Embedded computing differs from general purpose computing in several aspects. In most embedded systems, size, cost and power consumption are more important than performance. In embedded System-on-Chips (SoC), memory is a scarce resource and it poses constraints on chip space, cost and power consumption. Whereas the fixed instruction length of RISC architecture simplifies instruction decoding and pipeline implementation, its undesirable side effect is code size increase caused by a large number of unused bits. Code size reduction minimizes memory size, chip space and power consumption, all of which are significant for low power portable embedded systems. Though code size reduction has drawn the attention of architects and developers, the solutions currently used are more cure than prevention. Considering the huge number of embedded applications, there is a need for a dedicated processor optimized for low power and portable embedded systems. In this study, we propose a variation of Hybrid Instruction Encoding (HIE) for embedded processors. Our scheme uses a fixed number of multiple instruction lengths with provision for hybrid sizes for the offset and immediate fields, thereby reducing the number of unused bits. We simulated the HIE for the MIPS32 processors and measured the code sizes of various embedded applications from the MiBench and MediaBench benchmarks using a newly developed offline tool. We observed up to 27% code reduction for large and medium-sized embedded applications. This results in a reduction of on-chip memory capacity of up to 1 megabyte, which is very significant for SoC-based embedded applications. Considering the large market share of embedded systems, it is worth investing in a new architecture and the development of dedicated HIE-RISC processor cores for portable embedded systems based on SoCs.
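
    As a rough back-of-the-envelope illustration only (not the HIE scheme of the paper), a simplified two-length encoding model shows how the fraction of instructions that fit a short form maps to overall code-size reduction; the 16-bit/32-bit lengths and the mix fractions below are assumptions.

        # Rough illustration: a simplified two-length encoding model relating the
        # fraction of instructions that fit a short form to code-size savings.
        # The 16/32-bit lengths and the fractions are assumptions, not the paper's.
        BASE_BITS, SHORT_BITS, LONG_BITS = 32, 16, 32

        def size_reduction(short_fraction):
            hybrid_bits = short_fraction * SHORT_BITS + (1.0 - short_fraction) * LONG_BITS
            return 1.0 - hybrid_bits / BASE_BITS

        for f in (0.3, 0.5, 0.54, 0.7):
            print(f"{f:.0%} short-form instructions -> {size_reduction(f):.1%} code-size reduction")

    Under these assumptions, a reduction of about 27% would correspond to slightly more than half of the instructions fitting the short form; the paper's actual scheme with hybrid offset and immediate fields is more elaborate than this toy model.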

  6. Magnetic particle-based sandwich sensor with DNA-modified carbon nanotubes as recognition elements for detection of DNA hybridization.

    Science.gov (United States)

    Hu, Po; Huang, Cheng Zhi; Li, Yuan Fang; Ling, Jian; Liu, Yu Ling; Fei, Liang Run; Xie, Jian Ping

    2008-03-01

    In this contribution, we design a visual sensor for DNA hybridization with DNA probe-modified magnetic particles (MPs) and multiwalled carbon nanotubes (MWNTs) without involving a visual recognition element such as fluorescent/chemiluminescent reagents. It was found that DNA probe-modified MWNTs, which could be dispersed in aqueous medium and have strong light scattering signals under the excitation of a light beam in the UV-vis region, could connect with DNA probe-modified MPs together in the presence of perfectly complementary target DNA and form a sandwich structure. In a magnetic field, the formed MP-MWNT species can easily be removed from the solution, resulting in a decrease of light scattering signals. Thus, a magnetic particle-based sandwich sensor could be developed to detect DNA hybridization by measuring the light scattering signals with DNA-modified MWNTs as recognition elements. Experiments showed that the DNA-modified MPs sensor could be reused at least 17 times and was stable for more than 6 months.

  7. A Scalable and Fault Tolerant Particle Simulation Code: the Meshfree/Particle Code petaPar and Its Testing

    Institute of Scientific and Technical Information of China (English)

    黎雷生; 田荣

    2013-01-01

    Powered by petaflop supercomputers, numerical simulation has entered a completely new era, and a new generation of simulation codes is expected to exploit the parallelism of hundreds of thousands of processor cores. petaPar targets petascale particle simulation on petaflop systems. It unifies two of the most popular and powerful particle methods, Smoothed Particle Hydrodynamics (SPH) and the Material Point Method (MPM). The code supports a number of material models, strength models and failure models, and is suitable for large deformation, high strain rates and fluid-solid interaction. Parallel implementations support both flat MPI and MPI+X hybrid parallel models. The code is highly fault tolerant in the sense that it supports unattended process restart from any time step. Scalability tests on Titan show that the code scales linearly up to 260K CPU cores and delivers 87% and 90% parallel efficiency relative to 8192 CPU cores for MPM and SPH, respectively.
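
    The "restart from any time step" capability mentioned above is typically realized through periodic checkpointing. The sketch below is a generic, hedged checkpoint/restart skeleton, not petaPar's implementation; the directory name, checkpoint interval and placeholder particle update are assumptions.

        import glob, os
        import numpy as np

        # Generic checkpoint/restart skeleton illustrating unattended restart.
        # This is a schematic pattern, not the actual petaPar implementation.
        CKPT_DIR = "checkpoints"          # hypothetical location
        CKPT_EVERY = 100                  # steps between checkpoints (assumption)

        def save_checkpoint(step, positions, velocities):
            os.makedirs(CKPT_DIR, exist_ok=True)
            np.savez(os.path.join(CKPT_DIR, f"state_{step:08d}.npz"),
                     step=step, positions=positions, velocities=velocities)

        def load_latest_checkpoint():
            files = sorted(glob.glob(os.path.join(CKPT_DIR, "state_*.npz")))
            if not files:
                return None
            data = np.load(files[-1])
            return int(data["step"]), data["positions"], data["velocities"]

        def run(n_steps=1000, n_particles=1024):
            restart = load_latest_checkpoint()
            if restart is None:
                step = 0
                pos = np.random.rand(n_particles, 3)
                vel = np.zeros((n_particles, 3))
            else:
                step, pos, vel = restart      # resume from the last checkpoint
            while step < n_steps:
                pos = pos + vel * 1e-3        # placeholder for the real SPH/MPM update
                step += 1
                if step % CKPT_EVERY == 0:
                    save_checkpoint(step, pos, vel)

        run()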

  8. Particle size analyses of porous silica and hybrid silica chromatographic support particles. Comparison of flow/hyperlayer field-flow fractionation with scanning electron microscopy, electrical sensing zone, and static light scattering.

    Science.gov (United States)

    Xu, Yuehong

    2008-05-16

    Porous silica and hybrid silica chromatographic support particles having particle diameters ranging approximately from 1 μm to 15 μm have been characterized by flow/hyperlayer field-flow fractionation (FFF). The particle size accuracy has been improved significantly in this work by a second-order polynomial calibration. Very good agreement between the FFF data and scanning electron microscopic (SEM) results has been achieved. The effects of particle porosity, pore sizes, and particle sizes on the particle size accuracy in electrical sensing zone (ESZ) analyses have been discussed. It has been demonstrated by computer simulation and experimental measurements that false peaks can be generated in certain particle size regions when the static light scattering (SLS) technique is applied to tightly distributed spherical chromatographic support particles.
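
    A second-order polynomial calibration of the kind mentioned above can be illustrated with a short least-squares fit; the retention values and reference diameters below are synthetic placeholders, not data from the study.

        import numpy as np

        # Hedged illustration of a second-order polynomial size calibration:
        # map an instrument response to particle diameter by quadratic least squares.
        # All numbers are synthetic placeholders, not measurements from the paper.
        retention = np.array([1.2, 2.0, 3.1, 4.4, 5.8, 7.5])       # instrument response
        diameter_ref = np.array([1.0, 2.1, 3.9, 6.2, 9.8, 14.7])   # reference sizes, micrometers

        coeffs = np.polyfit(retention, diameter_ref, deg=2)   # second-order calibration
        calib = np.poly1d(coeffs)

        unknown_retention = np.array([2.5, 6.0])
        print("estimated diameters (um):", calib(unknown_retention))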

  9. Influence of wall motion on particle sedimentation using hybrid LB-IBM scheme

    Science.gov (United States)

    Habte, Mussie A.; Wu, ChuiJie

    2017-03-01

    We integrate the lattice Boltzmann method (LBM) and the immersed boundary method (IBM) to capture the coupling between a rigid boundary surface and the hydrodynamic response of an enclosed particle-laden fluid. We focus on a rigid box filled with a Newtonian fluid, where the interaction is induced by the drag force based on the slip velocity at the wall and by the settling particles. We impose an external harmonic oscillation on the system boundary and find interesting results in the sedimentation behavior. Our results reveal that the sedimentation and particle locations are sensitive to the oscillation amplitude of the boundary walls and the subsequent changes in the enclosed flow field. Two different particle distribution analyses were performed and showed the presence of an agglomerate structure of particles. Despite the increase in the amplitude of wall motion, the turbulence level of the flow field and the dispersion of the particles are found to be lower than with stationary walls. The integrated LBM-IBM methodology offers the prospect of an efficient and accurate dynamic coupling between a non-compliant bounding surface and the flow field in a wide range of systems. Understanding the dynamics of the fluid-filled box can be particularly important in simulations of particle deposition within biological systems and other engineering applications.

  10. A hybrid particle swarm optimization and genetic algorithm for closed-loop supply chain network design in large-scale networks

    DEFF Research Database (Denmark)

    Soleimani, Hamed; Kannan, Govindan

    2015-01-01

    Meta-heuristic algorithms are considered to develop a new elevated hybrid algorithm: the genetic algorithm (GA) and particle swarm optimization (PSO). Analyzing the above-mentioned algorithms' strengths and weaknesses leads us to attempt to improve the GA using some aspects of PSO. Therefore, a new hybrid algorithm is proposed and a complete validation process is undertaken using CPLEX and MATLAB software. In small instances, the global optimum points of CPLEX for the proposed hybrid algorithm are compared to those of the genetic algorithm and particle swarm optimization. Then, in small, mid, and large-size instances, the performances of the proposed meta-heuristics are analyzed and evaluated. Finally, a case study involving an Iranian hospital furniture manufacturer is used to evaluate the proposed solution approach. The results reveal the superiority of the proposed hybrid algorithm when compared to the GA and PSO.

  11. An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes

    CERN Document Server

    Vincenti, H; Sasanka, R; Vay, J-L

    2016-01-01

    In current computer architectures, data movement (from die to network) is by far the most energy consuming part of an algorithm (10 pJ/word on-die to 10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers tend to use more and more cores on each compute node ("fat nodes") that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines that are, along with the field gathering...
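
    To make the deposition routine concrete, the hedged sketch below writes a 1D cloud-in-cell (CIC) charge deposition in array form, using NumPy's np.add.at to accumulate contributions with repeated cell indices correctly. It illustrates the kind of scatter operation whose SIMD vectorization the paper addresses; it is not the paper's algorithm, and the grid and particle parameters are assumptions.

        import numpy as np

        # Hedged sketch: 1D cloud-in-cell (CIC) charge deposition as array operations.
        # np.add.at handles the scatter with repeated indices correctly.
        def deposit_cic_1d(x, q, n_cells, dx):
            rho = np.zeros(n_cells)
            xi = x / dx                      # particle position in cell units
            i0 = np.floor(xi).astype(int)    # index of the left cell
            w1 = xi - i0                     # weight given to the right cell
            w0 = 1.0 - w1                    # weight given to the left cell
            np.add.at(rho, i0 % n_cells, q * w0)
            np.add.at(rho, (i0 + 1) % n_cells, q * w1)
            return rho / dx

        rng = np.random.default_rng(1)
        n_cells, dx = 64, 1.0
        x = rng.uniform(0, n_cells * dx, 10_000)
        q = np.full_like(x, 1e-3)
        rho = deposit_cic_1d(x, q, n_cells, dx)
        print("total deposited charge:", rho.sum() * dx, "expected:", q.sum())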

  12. Implementation of a 3D version of ponderomotive guiding center solver in particle-in-cell code OSIRIS

    Science.gov (United States)

    Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo

    2016-10-01

    Laser-driven accelerators have gained increased attention over the past decades. Typical modeling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) simulations. PIC simulations, however, are very computationally expensive due to the disparity of the relevant scales ranging from the laser wavelength, in the micrometer range, to the acceleration length, currently beyond the ten centimeter range. To minimize the gap between these disparate scales the ponderomotive guiding center (PGC) algorithm is a promising approach. By describing the evolution of the laser pulse envelope separately, only the scales larger than the plasma wavelength are required to be resolved in the PGC algorithm, leading to speedups of several orders of magnitude. Previous work was limited to two dimensions. Here we present the implementation of the 3D version of a PGC solver into the massively parallel, fully relativistic PIC code OSIRIS. We extended the solver to include periodic boundary conditions and parallelization in all spatial dimensions. We present benchmarks for distributed and shared memory parallelization. We also discuss the stability of the PGC solver.

  13. Extension of hybrid micro-depletion model for decay heat calculation in the DYN3D code

    Energy Technology Data Exchange (ETDEWEB)

    Bilodid, Yurii; Fridman, Emil [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Reactor Safety; Kotlyar, D. [Georgia Institute of Technology, Atlanta, GA (United States); Shwageraus, E. [Cambridge Univ. (United Kingdom)

    2017-06-01

    This work extends the hybrid micro-depletion methodology, recently implemented in DYN3D, to the decay heat calculation by accounting explicitly for the heat contribution from the decay of each nuclide in the fuel.

  14. Implementation of a flexible and scalable particle-in-cell method for massively parallel computations in the mantle convection code ASPECT

    Science.gov (United States)

    Gassmöller, Rene; Bangerth, Wolfgang

    2016-04-01

    Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information, the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated due to the complex communication patterns and frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish this efficient implementation by specifically designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., for chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility of the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle-transfer between parallel subdomains utilizing existing communication patterns from the finite element mesh, and the use of established parallel output algorithms like the HDF5 library. Finally, we show some relevant particle application cases, compare our implementation to a
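
    A hedged, minimal sketch of the basic operation such a particle (tracer) module performs is given below: second-order Runge-Kutta advection of tracers carrying a property through a prescribed velocity field. The field, the carried property and all names are illustrative assumptions, not ASPECT's plugin API.

        import numpy as np

        # Minimal sketch of tracer advection with an RK2 step in a prescribed
        # 2D velocity field.  The field and "property" are hypothetical, not ASPECT's.
        def velocity(p):
            # simple solid-body rotation about the domain centre (0.5, 0.5)
            x, y = p[:, 0] - 0.5, p[:, 1] - 0.5
            return np.column_stack((-y, x))

        def advect_rk2(p, dt):
            k1 = velocity(p)
            k2 = velocity(p + 0.5 * dt * k1)
            return p + dt * k2

        rng = np.random.default_rng(2)
        particles = rng.random((1000, 2))                            # tracers in the unit square
        initial_radius = np.linalg.norm(particles - 0.5, axis=1)     # a carried "property"

        dt, n_steps = 0.01, 628                                      # roughly one rotation period
        for _ in range(n_steps):
            particles = advect_rk2(particles, dt)

        final_radius = np.linalg.norm(particles - 0.5, axis=1)
        print("max radius drift after one revolution:",
              np.abs(final_radius - initial_radius).max())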

  15. Hybrid Parallel Computation of Multi-Group Particle Transport Equations on Multi-Core Cluster Systems

    Institute of Scientific and Technical Information of China (English)

    迟利华; 刘杰; 龚春叶; 徐涵; 蒋杰; 胡庆丰

    2009-01-01

    The parallel performance of solving the multi-group particle transport equations on unstructured meshes is analyzed. Adapting to the characteristics of multi-core cluster systems, this paper designs an MPI/OpenMP hybrid parallel code. For the meshes, spatial domain decomposition is adopted, with MPI used for communication between the multi-core CPU nodes. When each MPI process begins to compute the energy-group variables, several OpenMP threads are forked and compute simultaneously within the same multi-core CPU node. Using the MPI/OpenMP hybrid parallel code, we solve a 2D multi-group particle transport equation on a cluster with multi-core CPU nodes, and the results show that the code has good scalability and can be scaled to 1024 CPU cores.

  16. Exposure to nanoscale particles and fibers during machining of hybrid advanced composites containing carbon nanotubes

    Science.gov (United States)

    Bello, Dhimiter; Wardle, Brian L.; Yamamoto, Namiko; Guzman deVilloria, Roberto; Garcia, Enrique J.; Hart, Anastasios J.; Ahn, Kwangseog; Ellenbecker, Michael J.; Hallock, Marilyn

    2009-01-01

    This study investigated airborne exposures to nanoscale particles and fibers generated during dry and wet abrasive machining of two three-phase advanced composite systems containing carbon nanotubes (CNTs), micron-diameter continuous fibers (carbon or alumina), and thermoset polymer matrices. Exposures were evaluated with a suite of complementary instruments, including real-time particle number concentration and size distribution (0.005-20 μm), electron microscopy, and integrated sampling for fibers and respirable particulate at the source and breathing zone of the operator. Wet cutting, the usual procedure for such composites, did not produce exposures significantly different from background, whereas dry cutting, without any emissions controls, provided a worst-case exposure, which is the focus of this article. Overall particle release levels, peaks in the size distribution of the particles, and surface area of released particles (including size distribution) were not significantly different for composites with and without CNTs. The majority of released particle surface area originated from the respirable (1-10 μm) fraction, whereas the nano fraction contributed 10% of the surface area. CNTs, either individual or in bundles, were not observed in extensive electron microscopy of collected samples. The mean peak number concentration for dry cutting was composite dependent and varied over an order of magnitude, with the highest values for thicker laminates at the source being >1 × 10^6 particles cm^-3. The concentration of respirable fibers for dry cutting at the source ranged from 2 to 4 fibers cm^-3 depending on the composite type. Further investigation is required and underway to determine the effects of various exposure determinants, such as specimen and tool geometry, on particle release and the effectiveness of controls.

  17. Potential Applications of Hybrid Layered Double Hydroxide (LDH) Particles in Pulp and Paper Production

    Directory of Open Access Journals (Sweden)

    Sophia von Haartman

    2014-03-01

    Full Text Available Functionalization of papermaking pulp fibers using inorganic particles was investigated as a novel approach. Different layered double hydroxide (LDH) particles were used in peroxide bleaching of thermomechanical pulp (TMP) and in oxygen bleaching of eucalyptus kraft pulp. LDH particles were also tested as binding sites for optical brightening agents (OBA) that are commonly used in paper production. The surface chemistry of LDH-treated pulps was examined using X-ray photoelectron spectroscopy (XPS) and apparent contact angle with water. Adsorbed LDH was not detected by XPS on the fiber surfaces after the bleaching trials, but it had a clear impact on the processes. LDH particles modified with terephthalate anions decreased the consumption of hydrogen peroxide and increased opacity by 3 units in TMP. Unmodified LDH particles enhanced the selectivity in oxygen delignification of kraft pulp, leading to a 10% gain in ISO brightness and a reduction of 2 units in Kappa number in comparison with conventional processes. Paper strength properties were unaffected in the presented system. After bleaching with LDH, the amount of anionic groups on pulp surfaces was increased. Also, the retention of OBA onto TMP fibers was improved with modified LDH particles. LDH proved to have great potential for current and prospective applications in pulp and paper manufacture.

  18. Membrane flux dynamics in the submerged ultrafiltration hybrid treatment process during particle and natural organic matter removal

    Institute of Scientific and Technical Information of China (English)

    Wei Zhang; Xiaojian Zhang; Yonghong Li; Jun Wang; Chao Chen

    2011-01-01

    Particles and natural organic matter (NOM) are two major concerns in surface water, which greatly influence the membrane filtration process. The objective of this article is to investigate the effect of particles, NOM and their interaction on the submerged ultrafiltration (UF) membrane flux under conditions of solo UF and of coagulation and PAC adsorption as pretreatments for UF. Particles, NOM and their mixture were spiked into tap water to simulate raw water. An exponential relationship, J_p/J_p0 = a·exp{-k[t-(n-1)T]}, was developed to quantify the normalized membrane flux dynamics during the filtration period and fitted the results well. In this equation, coefficient a is determined by the value of J_p/J_p0 at the beginning of a filtration cycle, reflecting the flux recovery after backwashing, that is, the irreversible fouling. The coefficient k reflects the trend of the flux dynamics. The integrated total permeability (ΣJ_p) over one filtration period can be used as a quantified indicator for comparing different hybrid membrane processes or different scenarios. According to the results, there was an additive effect on membrane flux by NOM and particles during the solo UF process. This additive fouling could be alleviated by coagulation pretreatment, since particles helped the formation of flocs with the coagulant, which further delayed the decrease of membrane flux and benefited flux recovery by backwashing. The addition of PAC also increased membrane flux by adsorbing NOM and improved flux recovery through backwashing.
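
    The per-cycle flux relation above can be fitted directly to flux measurements. The sketch below does this on synthetic data with scipy's curve_fit and integrates the normalized permeability over one cycle as an analogue of ΣJ_p; the cycle length, cycle index and noise level are assumptions, not values from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hedged sketch: fit Jp/Jp0 = a * exp(-k * (t - (n-1)*T)) to synthetic data
        # and integrate the normalized permeability over one filtration cycle.
        T = 30.0                                   # cycle length, minutes (assumed)
        n = 3                                      # cycle index (assumed)
        t = np.linspace((n - 1) * T, n * T, 31)    # times within cycle n

        a_true, k_true = 0.92, 0.03
        rng = np.random.default_rng(3)
        jp_ratio = a_true * np.exp(-k_true * (t - (n - 1) * T)) + rng.normal(0, 0.01, t.size)

        def model(t, a, k):
            return a * np.exp(-k * (t - (n - 1) * T))

        (a_fit, k_fit), _ = curve_fit(model, t, jp_ratio, p0=(1.0, 0.01))
        sum_jp = np.trapz(model(t, a_fit, k_fit), t)   # integrated permeability per cycle

        print(f"a = {a_fit:.3f} (flux recovery), k = {k_fit:.4f} (decline rate), "
              f"integrated Jp/Jp0 over the cycle = {sum_jp:.2f}")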

  19. Effect of Particles Content on Microstructure, Mechanical Properties, and Electrochemical Behavior of Aluminum-Based Hybrid Composite Processed by Accumulative Roll Bonding Process

    Science.gov (United States)

    Fattah-Alhosseini, Arash; Naseri, Majid; Alemi, Mohamad Hesam

    2017-03-01

    Effect of B4C/SiC particles content on the microstructure, deformation, and electrochemical behavior of aluminum-based hybrid composite processed by accumulative roll bonding (ARB) was investigated. The ARB process was used to fabricate hybrid composites which consist of 1 and 2.5 wt pct of B4C/SiC mixed particles as reinforcement. The microstructure of the fabricated hybrid composites after the ninth cycle of the ARB process exhibited an excellent distribution of B4C/SiC particles in the aluminum matrix where no porosity was observed. In addition, with increasing the particle content in the aluminum matrix, the hybrid composites demonstrated higher tensile strength and lower elongation. The ARB-processed hybrid composites exhibited 3.12 and 3.37 times higher hardness for samples having 1 and 2.5 wt pct B4C/SiC, respectively, than that of the annealed aluminum. Electrochemical impedance spectroscopy and potentiodynamic polarization curves revealed that the corrosion resistance dropped drastically by increasing the number of ARB cycles from 3 to 5. However, by further ARB processing, the corrosion resistance gradually increased, and finally, after 9 cycles reached to the values higher than those of 3-cycle ARB-processed samples.

  20. Effect of Particles Content on Microstructure, Mechanical Properties, and Electrochemical Behavior of Aluminum-Based Hybrid Composite Processed by Accumulative Roll Bonding Process

    Science.gov (United States)

    Fattah-Alhosseini, Arash; Naseri, Majid; Alemi, Mohamad Hesam

    2017-01-01

    Effect of B4C/SiC particles content on the microstructure, deformation, and electrochemical behavior of aluminum-based hybrid composite processed by accumulative roll bonding (ARB) was investigated. The ARB process was used to fabricate hybrid composites which consist of 1 and 2.5 wt pct of B4C/SiC mixed particles as reinforcement. The microstructure of the fabricated hybrid composites after the ninth cycle of the ARB process exhibited an excellent distribution of B4C/SiC particles in the aluminum matrix where no porosity was observed. In addition, with increasing the particle content in the aluminum matrix, the hybrid composites demonstrated higher tensile strength and lower elongation. The ARB-processed hybrid composites exhibited 3.12 and 3.37 times higher hardness for samples having 1 and 2.5 wt pct B4C/SiC, respectively, than that of the annealed aluminum. Electrochemical impedance spectroscopy and potentiodynamic polarization curves revealed that the corrosion resistance dropped drastically by increasing the number of ARB cycles from 3 to 5. However, by further ARB processing, the corrosion resistance gradually increased, and finally, after 9 cycles reached to the values higher than those of 3-cycle ARB-processed samples.

  1. Preparation of Surfactant-free Core-Shell Poly(lactic acid) / Calcium Phosphate Hybrid Particles and Their Drug Release Characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Kuno, T; Hirao, K [Department of Frontier Materials, Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, 466-8555 (Japan); Nagata, F; Ohji, T; Kato, K, E-mail: katsuya-kato@aist.go.jp [Advanced Manufacturing Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 2266-98, Anagahora, Shimoshidami, Moriyama-ku, Nagoya, 463-8510 (Japan)

    2011-04-15

    We propose surfactant-free core-shell poly(lactic acid) (PLA)/calcium phosphate (CaP) hybrid particles as drug delivery carriers. These particles were prepared by a biomineralization process using ultrasonic irradiation, and their drug release profiles were investigated. Drug release was faster when the particles were prepared with low-molecular-weight PLA and/or with Ca(CH3COO)2 and (NH4)2HPO4. The particles also showed good protein adsorption. This work indicates that these particles have sustained-release ability without an initial burst and can acquire targeting capability through biomolecule conjugation.

  2. Emergent ultra-long-range interactions between active particles in hybrid active-inactive systems

    Science.gov (United States)

    Steimel, Joshua P.; Aragones, Juan L.; Hu, Helen; Qureshi, Naser

    2016-04-01

    Particle-particle interactions determine the state of a system. Control over the range of such interactions as well as their magnitude has been an active area of research for decades due to the fundamental challenges it poses in science and technology. Very recently, effective interactions between active particles have gathered much attention as they can lead to out-of-equilibrium cooperative states such as flocking. Inspired by nature, where active living cells coexist with lifeless objects and structures, here we study the effective interactions that appear in systems composed of active and passive mixtures of colloids. Our systems are 2D colloidal monolayers composed primarily of passive (inactive) colloids, and a very small fraction of active (spinning) ferromagnetic colloids. We find an emergent ultra-long-range attractive interaction induced by the activity of the spinning particles and mediated by the elasticity of the passive medium. Interestingly, the appearance of such interaction depends on the spinning protocol and has a minimum actuation timescale below which no attraction is observed. Overall, these results clearly show that, in the presence of elastic components, active particles can interact across very long distances without any chemical modification of the environment. Such a mechanism might potentially be important for some biological systems and can be harnessed for newer developments in synthetic active soft materials.

  3. Robustness measure of hybrid intra-particle entanglement, discord, and classical correlation with initial Werner state

    Science.gov (United States)

    Saha, P.; Sarkar, D.

    2016-02-01

    Quantum information processing is largely dependent on the robustness of non-classical correlations, such as entanglement and quantum discord. However, all the realistic quantum systems are thermodynamically open and lose their coherence with time through environmental interaction. The time evolution of quantum entanglement, discord, and the respective classical correlation for a single, spin-1/2 particle under spin and energy degrees of freedom, with an initial Werner state, has been investigated in the present study. The present intra-particle system is considered to be easier to produce than its inter-particle counterpart. Experimentally, this type of system may be realized in the well-known Penning trap. The most stable correlation was identified through maximization of a system-specific global objective function. Quantum discord was found to be the most stable, followed by the classical correlation. Moreover, all the correlations were observed to attain highest robustness under initial Bell state, with minimum possible dephasing and decoherence parameters.

  4. Preparation and antifrictional properties of surface modified hybrid fluorine-containing silica particles

    Science.gov (United States)

    Gorbunova, T. I.; Zapevalov, A. Ya.; Beketov, I. V.; Demina, T. M.; Timoshenkova, O. R.; Murzakaev, A. M.; Gaviko, V. S.; Safronov, A. P.; Saloutin, V. I.

    2015-01-01

    Modified SiO2 particles were successfully prepared via [(perfluorobutyl)methyl]oxirane and [(perfluorobutyl)methyl]thiirane in sol-gel conditions using basic catalysis. As a result of acid catalysis non-modified nano-sized SiO2 particles were formed. Chemically modified SiO2 particles were characterized by means of FT-IR, BET, TEM, XRD- and XPS-analyses. Friction coefficients were determined at steel surface for base oil with modified SiO2 additives (5, 10 and 15 wt.%) at 10, 20, 30 and 60 N loads. Friction was reduced most strongly in the oil mix with the lowest content of the additive. A possible mechanism of antifrictional improvement is the formation of boundary lubrication layers containing iron salts.

  5. Hybrid dynamic radioactive particle tracking (RPT) calibration technique for multiphase flow systems

    Science.gov (United States)

    Khane, Vaibhav; Al-Dahhan, Muthanna H.

    2017-04-01

    The radioactive particle tracking (RPT) technique has been utilized to measure three-dimensional hydrodynamic parameters for multiphase flow systems. An analytical solution to the inverse problem of the RPT technique, i.e. finding the instantaneous tracer positions based upon the instantaneous counts received in the detectors, is not possible. Therefore, a calibration to obtain a counts-distance map is needed. There are major shortcomings in the conventional RPT calibration method, due to which it has limited applicability in practical applications. In this work, the design and development of a novel dynamic RPT calibration technique are carried out to overcome the shortcomings of the conventional RPT calibration method. The dynamic RPT calibration technique has been implemented around a test reactor 1 foot in diameter and 1 foot in height, using Cobalt-60 as the tracer isotope. Two sets of experiments have been carried out to test the capability of the novel dynamic RPT calibration. In the first set of experiments, a manual calibration apparatus was used to hold the tracer particle at known static locations. In the second set of experiments, the tracer particle was moved vertically downwards along a straight-line path in a controlled manner. The reconstructed tracer particle positions were compared with the actual known positions and the reconstruction errors were estimated. The obtained results revealed that the dynamic RPT calibration technique is capable of identifying tracer particle positions with a reconstruction error between 1 and 5.9 mm for the conditions studied, which could be improved depending on various factors outlined here.
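
    The count-based inverse problem described above can be illustrated with a hedged toy reconstruction: given a model of expected counts versus tracer-detector distance, the tracer position is estimated by minimizing the mismatch with measured counts. An idealized inverse-square model stands in for the measured counts-distance calibration map here, and the detector layout and constants are assumptions, not the paper's setup.

        import numpy as np

        # Hedged illustration of RPT position reconstruction from detector counts.
        # An idealized inverse-square model replaces the measured counts-distance map.
        detectors = np.array([[0.0, 0.0, 0.45], [0.3, 0.0, 0.45],
                              [0.0, 0.3, 0.45], [0.3, 0.3, 0.45]])   # detector positions (m)
        C0 = 5.0e4                                                    # model constant (assumed)

        def expected_counts(pos):
            d = np.linalg.norm(detectors - pos, axis=1)
            return C0 / d ** 2                  # stand-in for the counts-distance map

        true_pos = np.array([0.12, 0.21, 0.15])
        measured = np.random.default_rng(4).poisson(expected_counts(true_pos))

        # brute-force least-squares search over a coarse grid of candidate positions
        grid = np.linspace(0.0, 0.3, 61)
        best, best_err = None, np.inf
        for x in grid:
            for y in grid:
                err = np.sum((expected_counts(np.array([x, y, 0.15])) - measured) ** 2)
                if err < best_err:
                    best, best_err = (x, y), err

        print("true (x, y):", true_pos[:2], " reconstructed (x, y):", best)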

  6. Silicon PIN diode hybrid arrays for charged particle detection: Building blocks for vertex detectors at the SSC

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, G.; Gaalema, S.; Shapiro, S.L.; Dunwoodie, W.M.; Arens, J.F.; Jernigan, J.G.

    1989-05-01

    Two-dimensional arrays of solid state detectors have long been used in visible and infrared systems. Hybrid arrays with separately optimized detector and readout substrates have been extensively developed for infrared sensors. The characteristics and use of these infrared readout chips with silicon PIN diode arrays produced by MICRON SEMICONDUCTOR for detecting high-energy particles are reported. Some of these arrays have been produced in formats as large as 512 × 512 pixels; others have been radiation hardened to total dose levels beyond 1 Mrad. Data generation rates of 380 megasamples/second have been achieved. Analog and digital signal transmission and processing techniques have also been developed to accept and reduce these high data rates. 9 refs., 15 figs., 2 tabs.

  7. Experimental investigation of the dynamics of a hybrid morphing wing: time resolved particle image velocimetry and force measures

    Science.gov (United States)

    Jodin, Gurvan; Scheller, Johannes; Rouchon, Jean-François; Braza, Marianna; Mit Collaboration; Imft Collaboration; Laplace Collaboration

    2016-11-01

    A quantitative characterization of the effects obtained by high frequency-low amplitude trailing edge actuation is performed. Particle image velocimetry, as well as pressure and aerodynamic force measurements, are carried out on an airfoil model. This hybrid morphing wing model is equipped with both trailing edge piezoelectric-actuators and camber control shape memory alloy actuators. It will be shown that this actuation allows for an effective manipulation of the wake turbulent structures. Frequency domain analysis and proper orthogonal decomposition show that proper actuating reduces the energy dissipation by favoring more coherent vortical structures. This modification in the airflow dynamics eventually allows for a tapering of the wake thickness compared to the baseline configuration. Hence, drag reductions relative to the non-actuated trailing edge configuration are observed. Massachusetts Institute of Technology.
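
    The proper orthogonal decomposition mentioned above is commonly computed with the snapshot method. The sketch below shows this via an SVD of mean-subtracted snapshots; the "snapshots" here are synthetic stand-ins, not the PIV fields of the experiment.

        import numpy as np

        # Hedged sketch of snapshot proper orthogonal decomposition (POD) via SVD.
        # The snapshots are synthetic stand-ins, not the PIV data of the study.
        rng = np.random.default_rng(6)
        n_points, n_snapshots = 500, 200

        # synthetic "velocity" snapshots: two coherent spatial patterns plus noise
        x = np.linspace(0, 2 * np.pi, n_points)
        t = np.linspace(0, 10, n_snapshots)
        snapshots = (np.outer(np.sin(x), np.cos(2 * np.pi * t)) +
                     0.3 * np.outer(np.sin(2 * x), np.sin(4 * np.pi * t)) +
                     0.05 * rng.normal(size=(n_points, n_snapshots)))

        mean_flow = snapshots.mean(axis=1, keepdims=True)
        fluct = snapshots - mean_flow                   # POD acts on the fluctuations

        # economy SVD: columns of U are spatial POD modes, s**2 gives modal energy
        U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
        energy = s ** 2 / np.sum(s ** 2)

        print("energy fraction of the first 4 modes:", np.round(energy[:4], 3))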

  8. Hybrid Particle Swarm Optimization based Day-Ahead Self-Scheduling for Thermal Generator in Competitive Electricity Market

    DEFF Research Database (Denmark)

    Pindoriya, Naran M.; Singh, S.N.; Østergaard, Jacob

    2009-01-01

    This paper presents a hybrid particle swarm optimization algorithm (HPSO) to solve the day-ahead self-scheduling for thermal power producer in competitive electricity market. The objective functions considered to model the self-scheduling problem are 1) to maximize the profit from selling energy in day-ahead energy market subject to operational constraints and 2) at the same time, to minimize the risk due to uncertainty in price forecast. Therefore, it is a conflicting bi-objective optimization problem which has both binary and continuous optimization variables considered as constrained mixed integer nonlinear programming. To demonstrate the effectiveness of the proposed method for self-scheduling in a day-ahead energy market, the locational margin price (LMP) forecast uncertainty in PJM electricity market is considered. An adaptive wavelet neural network (AWNN) is used to forecast the day

  9. Preparation and characterization of mesoporous hybrid particle-fiber carbon monoliths

    Energy Technology Data Exchange (ETDEWEB)

    Fuertes, A.B.; Marban, G. [Inst. Nacional del Carbon (CSIC), Oviedo (Spain); Nevskaia, D.M. [Universidad Nacional de Educacion a Distancia (UNED), Madrid (Spain). Facultad de Ciencas, Dept. de Quimica Inorganica y Tecnica

    2002-05-01

    Porous carbon materials are a subject of increasing attention in many areas of technology such as air purification, catalysis, refrigeration, gas and energy storage, and energy production. Superactivated carbons (SAC) are powdered activated carbons generally made from mesocarbon microbeads and have a very high adsorption capacity. They are highly appropriate for use in evaporative loss control devices (automobile canisters), catalytic supports, fuel-cell electrodes, and double-layer electrical capacitors. In all of these applications it is desirable that the carbon particles be immobilized in order to form rigid devices of high permeability. This communication describes a method to immobilize these fine particles in order to obtain rigid structures with a high internal porosity. (orig.)

  10. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code

    CERN Document Server

    White, M C

    2000-01-01

    The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron tran...

  11. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    Science.gov (United States)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5x5x2 in3 each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24x2.5x3 in3, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton-scattered events and coded aperture events. In this thesis, the developed coded aperture, Compton and hybrid imaging algorithms will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as from a Global Positioning System (GPS) and Inertial Navigation System (INS), must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented as well as

  12. A Hybrid Algorithm Based on Particle Swarm Optimization and Artificial Immune for an Assembly Job Shop Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Hui Du

    2016-01-01

    Full Text Available To produce the final product, parts need to be fabricated in the processing stages, and thereafter several parts are joined in assembly operations based on the predefined bill of materials. However, the assembly relationship between parts and components is not considered in the general job shop scheduling problem model. The aim of this research is to find the schedule which minimizes the completion time of the Assembly Job Shop Scheduling Problem (AJSSP). Since the complexity of AJSSP is NP-hard, a hybrid particle swarm optimization (HPSO) algorithm integrating PSO with Artificial Immune principles is proposed and developed to solve AJSSP. A selection strategy based on antibody density makes the particles of HPSO maintain diversity during the iterative process, thus overcoming the defect of premature convergence. The HPSO algorithm is then applied to a case study developed from the classical FT06 instance. Finally, the effect of key parameters on the proposed algorithm is analyzed and discussed with regard to how to select the parameters. The experimental results confirm its practicality and effectiveness.
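
    A hedged sketch of the antibody-density-based selection idea (the artificial-immune ingredient of such a hybrid) is given below: candidates lying in crowded regions of the search space receive a lower selection probability, which preserves diversity. The similarity threshold, blending weight and toy objective are assumptions, not the paper's settings.

        import numpy as np

        # Hedged sketch of antibody-density-based selection for diversity preservation.
        def density_based_probabilities(population, fitness, sim_threshold=0.5, alpha=0.7):
            pop = np.asarray(population, dtype=float)
            dist = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=2)
            density = (dist < sim_threshold).mean(axis=1)      # fraction of similar antibodies
            fit_term = fitness.max() - fitness + 1e-12         # lower cost -> larger term
            fit_term = fit_term / fit_term.sum()
            div_term = (1.0 - density) / np.maximum((1.0 - density).sum(), 1e-12)
            p = alpha * fit_term + (1.0 - alpha) * div_term    # blend fitness and diversity
            return p / p.sum()

        rng = np.random.default_rng(7)
        pop = rng.random((20, 4))
        cost = np.sum(pop ** 2, axis=1)                        # toy minimization objective
        probs = density_based_probabilities(pop, cost)
        selected = rng.choice(len(pop), size=len(pop), p=probs)  # roulette-style selection
        print("selection probabilities (first 5):", np.round(probs[:5], 3))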

  13. A hybrid particle swarm optimization approach with neural network and set pair analysis for transmission network planning

    Institute of Scientific and Technical Information of China (English)

    刘吉成; 颜苏莉; 乞建勋

    2008-01-01

    Transmission network planning (TNP) is a large-scale, complex, multi-objective constrained optimization problem with many non-linear discrete variables. In the optimization process, the line investment, network reliability and network loss are the main objectives of transmission network planning. Combining set pair analysis (SPA), particle swarm optimization (PSO) and a neural network (NN), a hybrid particle swarm optimization model with neural network and set pair analysis for transmission network planning (HPNS) was established. Firstly, the contact degree of set pair analysis was introduced, and the traditional goal set was converted into a collection of three indicators: the identity degree, the difference degree and the contrary degree. On this basis, using shi(H), the three-objective optimization problem was converted into a single-objective optimization problem. Secondly, using the fast and efficient search capability of PSO, the transmission network planning model based on set pair analysis was optimized. During the optimization, the BP neural network was trained continually so that the value of the PSO fitness function became smaller, in order to obtain an optimization program that fits the three objectives better. Finally, compared with the PSO algorithm and the classic genetic algorithm, HPNS improved efficiency by about 23% over THA, by about 3.7% over PSO and by about 2.96% over GA.

  14. Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Pham, Tien Thang; Beltrán, Marta;

    2011-01-01

    We demonstrate that, by jointly optimizing video coding and radio-over-fibre transmission, we extend the reach of 60-GHz wireless distribution of high-quality high-definition video satisfying low complexity and low delay constraints, while preserving superb video quality.

  15. Cross-Layer Approach using k-NN Based Adaptive Modulation Coding (AMC) and Incremental Redundancy Hybrid Automatic Repeat Request (IR-HARQ) for MIMO

    Directory of Open Access Journals (Sweden)

    J. Sofia Priya Dharshini

    2014-09-01

    Full Text Available In MIMO technology, a cross-layer design enhances the spectral efficiency, reliability and throughput of the network. In this paper, a cross-layer approach using k-NN based Adaptive Modulation Coding (AMC) and Incremental Redundancy Hybrid Automatic Repeat Request (IR-HARQ) is proposed for MIMO systems. The proposed cross-layer approach connects the physical layer and the data link layer to enhance the performance of the MIMO network. At the physical layer, the coded symbols are transmitted over MIMO fading channels on a frame-by-frame basis using Space Time Block Coding (STBC). The receiver computes the signal-to-noise ratio (SNR) and feeds it back to the AMC controller. The controller selects a suitable MCS for the next transmission through a k-NN supervised-learning classifier. IR-HARQ is utilized at the data link layer to regulate packet retransmissions. The obtained results show that the proposed technique has better performance in terms of throughput, BER and spectral efficiency.
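
    The k-NN step of the AMC controller can be illustrated with a hedged sketch: the feedback SNR is compared against labelled training points and the modulation-and-coding scheme (MCS) is chosen by majority vote among the k nearest. The training SNR/MCS pairs and k below are hypothetical, not the paper's values.

        import numpy as np

        # Hedged sketch of k-NN based MCS selection from feedback SNR.
        # Training SNR/MCS pairs are hypothetical, not taken from the paper.
        train_snr = np.array([2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24], dtype=float)
        train_mcs = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5])   # MCS index labels

        def knn_select_mcs(snr_feedback, k=3):
            dist = np.abs(train_snr - snr_feedback)
            nearest = np.argsort(dist)[:k]
            labels = train_mcs[nearest]
            return np.bincount(labels).argmax()     # majority vote among k nearest

        for snr in (3.2, 9.7, 19.1):
            print(f"feedback SNR = {snr:5.1f} dB -> selected MCS index = {knn_select_mcs(snr)}")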

  16. Tree Code for Collision Detection of Large Numbers of Particles: Application to the Breit-Wheeler Process

    CERN Document Server

    Jansen, Oliver; Ribeyre, Xavier; Jequier, Sophie; Tikhonchuk, Vladimir

    2016-01-01

    Collision detection of a large number N of particles can be challenging. Directly testing N particles for collisions among each other leads to N^2 queries. Especially in scenarios where fast, densely packed particles interact, challenges arise for classical methods like Particle-in-Cell or Monte-Carlo. Modern collision detection methods utilising bounding volume hierarchies are suitable to overcome these challenges and allow a detailed analysis of the interaction of large numbers of particles. This approach is applied to the analysis of the collision of two photon beams leading to the creation of electron-positron pairs.
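
    The bounding-volume-hierarchy idea mentioned above can be illustrated with a small sketch: a median-split tree of axis-aligned boxes is built over the particles and only overlapping boxes are tested pairwise, avoiding the N^2 all-pairs check. The particle radius and leaf size below are arbitrary choices, and nothing here reproduces the authors' tree code.

      # Toy BVH collision detection for spherical particles (not the paper's code).
      import numpy as np

      rng = np.random.default_rng(2)
      RADIUS = 0.01

      class Node:
          def __init__(self, idx, pts):
              self.idx = idx                               # particle indices in this node
              self.lo = pts[idx].min(axis=0) - RADIUS      # AABB lower corner
              self.hi = pts[idx].max(axis=0) + RADIUS      # AABB upper corner
              self.left = self.right = None

      def build(idx, pts, leaf_size=8):
          node = Node(idx, pts)
          if len(idx) > leaf_size:
              axis = np.argmax(node.hi - node.lo)          # split along the longest box axis
              order = idx[np.argsort(pts[idx, axis])]
              mid = len(order) // 2
              node.left, node.right = build(order[:mid], pts), build(order[mid:], pts)
          return node

      def overlap(a, b):
          return np.all(a.lo <= b.hi) and np.all(b.lo <= a.hi)

      def collect_pairs(a, b, pts, out):
          if not overlap(a, b):
              return
          if a.left is None and b.left is None:            # leaf vs leaf: brute force
              for i in a.idx:
                  for j in b.idx:
                      if i < j and np.linalg.norm(pts[i] - pts[j]) < 2 * RADIUS:
                          out.append((i, j))               # i < j keeps each pair once
              return
          # descend into the larger (or the only non-leaf) node
          if b.left is None or (a.left is not None and len(a.idx) >= len(b.idx)):
              collect_pairs(a.left, b, pts, out); collect_pairs(a.right, b, pts, out)
          else:
              collect_pairs(a, b.left, pts, out); collect_pairs(a, b.right, pts, out)

      pts = rng.random((2000, 3))
      root = build(np.arange(len(pts)), pts)
      pairs = []
      collect_pairs(root, root, pts, pairs)
      print(len(pairs), "colliding pairs")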

  17. Co-design of a particle-in-cell plasma simulation code for Intel Xeon Phi: a first look at Knights Landing

    CERN Document Server

    Surmin, Igor; Matveev, Zakhar; Efimenko, Evgeny; Gonoskov, Arkady; Meyerov, Iosif

    2016-01-01

    Three dimensional particle-in-cell laser-plasma simulation is an important area of computational physics. Solving state-of-the-art problems requires large-scale simulation on a supercomputer using specialized codes. A growing demand in computational resources inspires research in improving efficiency and co-design for supercomputers based on many-core architectures. This paper presents first performance results of the particle-in-cell plasma simulation code PICADOR on the recently introduced Knights Landing generation of Intel Xeon Phi. A straightforward rebuilding of the code yields a 2.43 x speedup compared to the previous Knights Corner generation. Further code optimization results in an additional 1.89 x speedup. The optimization performed is beneficial not only for Knights Landing, but also for high-end CPUs and Knights Corner. The optimized version achieves 100 GFLOPS double precision performance on a Knights Landing device with the speedups of 2.35 x compared to a 14-core Haswell CPU and 3.47 x compare...

  18. Production of secondary particles and nuclei in cosmic rays collisions with the interstellar gas using the FLUKA code

    Science.gov (United States)

    Mazziotta, M. N.; Cerutti, F.; Ferrari, A.; Gaggero, D.; Loparco, F.; Sala, P. R.

    2016-08-01

    The measured fluxes of secondary particles produced by the interactions of Cosmic Rays (CRs) with the astronomical environment play a crucial role in understanding the physics of CR transport. In this work we present a comprehensive calculation of the secondary hadron, lepton, gamma-ray and neutrino yields produced by the inelastic interactions between several species of stable or long-lived cosmic rays projectiles (p, D, T, 3He, 4He, 6Li, 7Li, 9Be, 10Be, 10B, 11B, 12C, 13C, 14C, 14N, 15N, 16O, 17O, 18O, 20Ne, 24Mg and 28Si) and different target gas nuclei (p, 4He, 12C, 14N, 16O, 20Ne, 24Mg, 28Si and 40Ar). The yields are calculated using FLUKA, a simulation package designed to compute the energy distributions of secondary products with large accuracy in a wide energy range. The present results provide, for the first time, a complete and self-consistent set of all the relevant inclusive cross sections regarding the whole spectrum of secondary products in nuclear collisions. We cover, for the projectiles, a kinetic energy range extending from 0.1 GeV/n up to 100 TeV/n in the lab frame. In order to show the importance of our results for multi-messenger studies about the physics of CR propagation, we evaluate the propagated spectra of Galactic secondary nuclei, leptons, and gamma rays produced by the interactions of CRs with the interstellar gas, exploiting the numerical codes DRAGON and GammaSky. We show that, adopting our cross section database, we are able to provide a good fit of a complete sample of CR observables, including: leptonic and hadronic spectra measured at Earth, the local interstellar spectra measured by Voyager, and the gamma-ray emissivities from Fermi-LAT collaboration. We also show a set of gamma-ray and neutrino full-sky maps and spectra.

  19. A hybrid path-oriented code assignment CDMA-based MAC protocol for underwater acoustic sensor networks.

    Science.gov (United States)

    Chen, Huifang; Fan, Guangyu; Xie, Lei; Cui, Jun-Hong

    2013-11-04

    Due to the characteristics of the underwater acoustic channel, media access control (MAC) protocols designed for underwater acoustic sensor networks (UWASNs) are quite different from those for terrestrial wireless sensor networks. Moreover, in a sink-oriented network with event information generation in a sensor field and message forwarding to the sink hop-by-hop, the sensors near the sink have to transmit more packets than those far from the sink, and then a funneling effect occurs, which leads to packet congestion, collisions and losses, especially in UWASNs with long propagation delays. An improved CDMA-based MAC protocol, named path-oriented code assignment (POCA) CDMA MAC (POCA-CDMA-MAC), is proposed for UWASNs in this paper. In the proposed MAC protocol, both the round-robin method and CDMA technology are adopted to make the sink receive packets from multiple paths simultaneously. Since the number of paths for information gathering is much less than that of nodes, the length of the spreading code used in the POCA-CDMA-MAC protocol is much shorter than that used in the CDMA-based protocols with transmitter-oriented code assignment (TOCA) or receiver-oriented code assignment (ROCA). Simulation results show that the proposed POCA-CDMA-MAC protocol achieves a higher network throughput and a lower end-to-end delay compared to other CDMA-based MAC protocols.
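
    The core idea of path-oriented code assignment, assigning one spreading code per path to the sink in round-robin fashion rather than one per node, can be sketched as follows. The Walsh codes and the toy path structure are assumptions used only for illustration, not the protocol's actual parameters.

      # Rough illustration of path-oriented code assignment (POCA): codes per path,
      # not per node, so a small orthogonal code set suffices.
      import numpy as np

      def walsh_codes(order):
          """Generate 2**order Walsh-Hadamard spreading codes."""
          h = np.array([[1]])
          for _ in range(order):
              h = np.block([[h, h], [h, -h]])
          return h

      # Hypothetical multi-hop paths from sensors to the sink (node 0).
      paths = [
          [3, 7, 12, 0],
          [5, 9, 0],
          [4, 8, 11, 0],
      ]

      codes = walsh_codes(2)                       # 4 orthogonal codes of length 4
      assignment = {}
      for p, path in enumerate(paths):
          code = codes[p % len(codes)]             # round-robin: one code per path
          for node in path[:-1]:                   # every non-sink node on the path
              assignment[node] = code

      for node, code in sorted(assignment.items()):
          print("node", node, "uses code", code)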

  20. Learning Concepts, Language, and Literacy in Hybrid Linguistic Codes: The Multilingual Maze of Urban Grade 1 Classrooms in South Africa

    Science.gov (United States)

    Henning, Elizabeth

    2012-01-01

    From the field of developmental psycholinguistics and from conceptual development theory there is evidence that excessive linguistic "code-switching" in early school education may pose some hazards for the learning of young multilingual children. In this article the author addresses the issue, invoking post-Piagetian and neo-Vygotskian…

  1. A Hybrid Path-Oriented Code Assignment CDMA-Based MAC Protocol for Underwater Acoustic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Huifang Chen

    2013-11-01

    Full Text Available Due to the characteristics of the underwater acoustic channel, media access control (MAC) protocols designed for underwater acoustic sensor networks (UWASNs) are quite different from those for terrestrial wireless sensor networks. Moreover, in a sink-oriented network with event information generation in a sensor field and message forwarding to the sink hop-by-hop, the sensors near the sink have to transmit more packets than those far from the sink, and then a funneling effect occurs, which leads to packet congestion, collisions and losses, especially in UWASNs with long propagation delays. An improved CDMA-based MAC protocol, named path-oriented code assignment (POCA) CDMA MAC (POCA-CDMA-MAC), is proposed for UWASNs in this paper. In the proposed MAC protocol, both the round-robin method and CDMA technology are adopted to make the sink receive packets from multiple paths simultaneously. Since the number of paths for information gathering is much less than that of nodes, the length of the spreading code used in the POCA-CDMA-MAC protocol is much shorter than that used in the CDMA-based protocols with transmitter-oriented code assignment (TOCA) or receiver-oriented code assignment (ROCA). Simulation results show that the proposed POCA-CDMA-MAC protocol achieves a higher network throughput and a lower end-to-end delay compared to other CDMA-based MAC protocols.

  2. Damage Detection in Flexible Plates through Reduced-Order Modeling and Hybrid Particle-Kalman Filtering

    Directory of Open Access Journals (Sweden)

    Giovanni Capellari

    2015-12-01

    Full Text Available Health monitoring of lightweight structures, like thin flexible plates, is of interest in several engineering fields. In this paper, a recursive Bayesian procedure is proposed to monitor the health of such structures through data collected by a network of optimally placed inertial sensors. As a main drawback of standard monitoring procedures is linked to the computational costs, two remedies are jointly considered: first, an order-reduction of the numerical model used to track the structural dynamics, enforced with proper orthogonal decomposition; and, second, an improved particle filter, which features an extended Kalman updating of each evolving particle before the resampling stage. The former remedy can reduce the number of effective degrees-of-freedom of the structural model to a few only (depending on the excitation), whereas the latter one allows to track the evolution of damage and to locate it thanks to an intricate formulation. To assess the effectiveness of the proposed procedure, the case of a plate subject to bending is investigated; it is shown that, when the procedure is appropriately fed by measurements, damage is efficiently and accurately estimated.

  3. Damage Detection in Flexible Plates through Reduced-Order Modeling and Hybrid Particle-Kalman Filtering.

    Science.gov (United States)

    Capellari, Giovanni; Azam, Saeed Eftekhar; Mariani, Stefano

    2015-12-22

    Health monitoring of lightweight structures, like thin flexible plates, is of interest in several engineering fields. In this paper, a recursive Bayesian procedure is proposed to monitor the health of such structures through data collected by a network of optimally placed inertial sensors. As a main drawback of standard monitoring procedures is linked to the computational costs, two remedies are jointly considered: first, an order-reduction of the numerical model used to track the structural dynamics, enforced with proper orthogonal decomposition; and, second, an improved particle filter, which features an extended Kalman updating of each evolving particle before the resampling stage. The former remedy can reduce the number of effective degrees-of-freedom of the structural model to a few only (depending on the excitation), whereas the latter one allows to track the evolution of damage and to locate it thanks to an intricate formulation. To assess the effectiveness of the proposed procedure, the case of a plate subject to bending is investigated; it is shown that, when the procedure is appropriately fed by measurements, damage is efficiently and accurately estimated.
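
    A toy one-dimensional sketch of the hybrid particle-Kalman idea is given below: every particle keeps its own mean and covariance, receives a Kalman-style correction against the new measurement, and only then is weighted and resampled. The scalar dynamics, noise levels and particle count are invented; the paper's reduced-order structural model is not reproduced.

      # Toy particle filter with a per-particle Kalman update before resampling.
      import numpy as np

      rng = np.random.default_rng(3)

      F, H = 0.95, 1.0            # assumed scalar dynamics and observation model
      Q, R = 0.05, 0.2            # process and measurement noise variances
      N = 500                     # number of particles

      # Synthetic true trajectory and noisy measurements.
      x_true, xs, ys = 1.0, [], []
      for _ in range(50):
          x_true = F * x_true + rng.normal(0, np.sqrt(Q))
          xs.append(x_true)
          ys.append(H * x_true + rng.normal(0, np.sqrt(R)))

      mean = rng.normal(0, 1, N)  # per-particle mean
      cov = np.full(N, 1.0)       # per-particle covariance
      est = []
      for y in ys:
          # Predict each particle.
          mean = F * mean + rng.normal(0, np.sqrt(Q), N)
          cov = F * cov * F + Q
          # Kalman-style update of every particle before resampling.
          S = H * cov * H + R
          K = cov * H / S
          innov = y - H * mean
          mean = mean + K * innov
          cov = (1 - K * H) * cov
          # Weight by the predictive likelihood and resample (systematic).
          w = np.exp(-0.5 * innov**2 / S)
          w /= w.sum()
          positions = (rng.random() + np.arange(N)) / N
          idx = np.minimum(np.searchsorted(np.cumsum(w), positions), N - 1)
          mean, cov = mean[idx], cov[idx]
          est.append(mean.mean())

      print("final tracking error:", abs(est[-1] - xs[-1]))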

  4. Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band.

    Science.gov (United States)

    Lebedev, Alexander; Pham, Tien Thang; Beltrán, Marta; Yu, Xianbin; Ukhanova, Anna; Llorente, Roberto; Monroy, Idelfonso Tafur; Forchhammer, Søren

    2011-12-12

    The paper addresses the problem of distribution of high-definition video over fiber-wireless networks. The physical layer architecture with the low complexity envelope detection solution is investigated. We present both experimental studies and simulation of high quality high-definition compressed video transmission over 60 GHz fiber-wireless link. Using advanced video coding we satisfy low complexity and low delay constraints, meanwhile preserving the superb video quality after significantly extended wireless distance.

  5. Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Pham, Tien Thang; Beltrán, Marta;

    2011-01-01

    The paper addresses the problem of distribution of high-definition video over fiber-wireless networks. The physical layer architecture with the low complexity envelope detection solution is investigated. We present both experimental studies and simulation of high quality high-definition compressed video transmission over a 60 GHz fiber-wireless link. Using advanced video coding we satisfy low complexity and low delay constraints, meanwhile preserving the superb video quality after significantly extended wireless distance. © 2011 Optical Society of America.

  6. Synthesis of organic-inorganic hybrid sols with nano silica particles and organoalkoxysilanes for transparent and high-thermal-resistance coating films using sol-gel reaction.

    Science.gov (United States)

    Na, Moonkyong; Park, Hoyyul; Ahn, Myeongsang; Lee, Hyeonhwa; Chung, Ildoo

    2010-10-01

    Organic-inorganic hybrid sols were synthesized from nano silica particles dispersed in water and from organoalkoxysilanes, using the sol-gel reaction. This work focuses on the effects of three multifunctional organoalkoxysilanes, dimethyldimethoxysilane (DMDMS), methyltrimethoxysilane (MTMS), and tetramethoxysilane (TMOS), on forming a transparent and high-thermal-resistance coating film. The stability of the hybrid sol was evaluated as a function of the reaction time over 10 d through the variation of the viscosity. The viscosity of the silica/DMDMS and silica/MTMS sols increased slightly over 10 d. The multifunctional organoalkoxysilanes formed dense silica networks through hydrolysis and condensation reactions, which enhanced the thermal resistance of the coating films. No thermal degradation occurred in the silica/DMDMS sample up to 600 degrees C, nor in the silica/MTMS and silica/TMOS samples up to 700 degrees C. The organic-inorganic hybrid sols were coated on the glass substrate using a spin-coating procedure. The organic-inorganic hybrid sols formed flat coating films without cracks. The transmittance of the hybrid sol coating films using MTMS and DMDMS was shown to be over 90%. The transmittance of the coating film from the silica/TMOS sol reacted for 10 d decreased abruptly due to faster gelation. The silica/DMDMS and silica/MTMS hybrid sols formed smooth coating films, while the surface roughness of the silica/TMOS coating film markedly increased when the hybrid sol had reacted for 10 d. The increase of the surface roughness of the silica/TMOS coating film can be attributed to the degradation of the stability of the hybrid sol and to the loss of transmittance of the coating film. It was confirmed in this study that the use of organic-inorganic hybrid sols can yield transparent and high-thermal-resistance coating films.

  7. Scheme for implementing quantum dense coding with four-particle decoherence-free states in an ion trap

    Institute of Scientific and Technical Information of China (English)

    Zheng Xiao-Juan; Cao Shuai; Fang Mao-Fa; Liao Xiang-Ping

    2008-01-01

    This paper proposes an experimentally feasible scheme for implementing quantum dense coding of a trapped-ion system in decoherence-free states. As the phase changes due to time evolution of components with different eigenenergies of the quantum superposition are completely frozen, quantum dense coding based on this model would be perfect. The scheme is insensitive to heating of the vibrational mode, and the Bell states can be exactly distinguished via detecting the ionic state.

  8. Hybridization Capture-Based Next-Generation Sequencing to Evaluate Coding Sequence and Deep Intronic Mutations in the NF1 Gene.

    Science.gov (United States)

    Cunha, Karin Soares; Oliveira, Nathalia Silva; Fausto, Anna Karoline; de Souza, Carolina Cruz; Gros, Audrey; Bandres, Thomas; Idrissi, Yamina; Merlio, Jean-Philippe; de Moura Neto, Rodrigo Soares; Silva, Rosane; Geller, Mauro; Cappellen, David

    2016-12-17

    Neurofibromatosis 1 (NF1) is one of the most common genetic disorders and is caused by mutations in the NF1 gene. NF1 gene mutational analysis presents a considerable challenge because of its large size, existence of highly homologous pseudogenes located throughout the human genome, absence of mutational hotspots, and diversity of mutations types, including deep intronic splicing mutations. We aimed to evaluate the use of hybridization capture-based next-generation sequencing to screen coding and noncoding NF1 regions. Hybridization capture-based next-generation sequencing, with genomic DNA as starting material, was used to sequence the whole NF1 gene (exons and introns) from 11 unrelated individuals and 1 relative, who all had NF1. All of them met the NF1 clinical diagnostic criteria. We showed a mutation detection rate of 91% (10 out of 11). We identified eight recurrent and two novel mutations, which were all confirmed by Sanger methodology. In the Sanger sequencing confirmation, we also included another three relatives with NF1. Splicing alterations accounted for 50% of the mutations. One of them was caused by a deep intronic mutation (c.1260 + 1604A > G). Frameshift truncation and missense mutations corresponded to 30% and 20% of the pathogenic variants, respectively. In conclusion, we show the use of a simple and fast approach to screen, at once, the entire NF1 gene (exons and introns) for different types of pathogenic variations, including the deep intronic splicing mutations.

  9. Hybridization Capture-Based Next-Generation Sequencing to Evaluate Coding Sequence and Deep Intronic Mutations in the NF1 Gene

    Science.gov (United States)

    Cunha, Karin Soares; Oliveira, Nathalia Silva; Fausto, Anna Karoline; de Souza, Carolina Cruz; Gros, Audrey; Bandres, Thomas; Idrissi, Yamina; Merlio, Jean-Philippe; de Moura Neto, Rodrigo Soares; Silva, Rosane; Geller, Mauro; Cappellen, David

    2016-01-01

    Neurofibromatosis 1 (NF1) is one of the most common genetic disorders and is caused by mutations in the NF1 gene. NF1 gene mutational analysis presents a considerable challenge because of its large size, existence of highly homologous pseudogenes located throughout the human genome, absence of mutational hotspots, and diversity of mutations types, including deep intronic splicing mutations. We aimed to evaluate the use of hybridization capture-based next-generation sequencing to screen coding and noncoding NF1 regions. Hybridization capture-based next-generation sequencing, with genomic DNA as starting material, was used to sequence the whole NF1 gene (exons and introns) from 11 unrelated individuals and 1 relative, who all had NF1. All of them met the NF1 clinical diagnostic criteria. We showed a mutation detection rate of 91% (10 out of 11). We identified eight recurrent and two novel mutations, which were all confirmed by Sanger methodology. In the Sanger sequencing confirmation, we also included another three relatives with NF1. Splicing alterations accounted for 50% of the mutations. One of them was caused by a deep intronic mutation (c.1260 + 1604A > G). Frameshift truncation and missense mutations corresponded to 30% and 20% of the pathogenic variants, respectively. In conclusion, we show the use of a simple and fast approach to screen, at once, the entire NF1 gene (exons and introns) for different types of pathogenic variations, including the deep intronic splicing mutations. PMID:27999334

  10. Hybridization Capture-Based Next-Generation Sequencing to Evaluate Coding Sequence and Deep Intronic Mutations in the NF1 Gene

    Directory of Open Access Journals (Sweden)

    Karin Soares Cunha

    2016-12-01

    Full Text Available Neurofibromatosis 1 (NF1) is one of the most common genetic disorders and is caused by mutations in the NF1 gene. NF1 gene mutational analysis presents a considerable challenge because of its large size, existence of highly homologous pseudogenes located throughout the human genome, absence of mutational hotspots, and diversity of mutations types, including deep intronic splicing mutations. We aimed to evaluate the use of hybridization capture-based next-generation sequencing to screen coding and noncoding NF1 regions. Hybridization capture-based next-generation sequencing, with genomic DNA as starting material, was used to sequence the whole NF1 gene (exons and introns) from 11 unrelated individuals and 1 relative, who all had NF1. All of them met the NF1 clinical diagnostic criteria. We showed a mutation detection rate of 91% (10 out of 11). We identified eight recurrent and two novel mutations, which were all confirmed by Sanger methodology. In the Sanger sequencing confirmation, we also included another three relatives with NF1. Splicing alterations accounted for 50% of the mutations. One of them was caused by a deep intronic mutation (c.1260 + 1604A > G). Frameshift truncation and missense mutations corresponded to 30% and 20% of the pathogenic variants, respectively. In conclusion, we show the use of a simple and fast approach to screen, at once, the entire NF1 gene (exons and introns) for different types of pathogenic variations, including the deep intronic splicing mutations.

  11. A sandwich-hybridization assay for simultaneous determination of HIV and tuberculosis DNA targets based on signal amplification by quantum dots-PowerVision™ polymer coding nanotracers.

    Science.gov (United States)

    Yan, Zhongdan; Gan, Ning; Zhang, Huairong; Wang, De; Qiao, Li; Cao, Yuting; Li, Tianhua; Hu, Futao

    2015-09-15

    A novel sandwich-hybridization assay for simultaneous electrochemical detection of multiple DNA targets related to human immunodeficiency virus (HIV) and tuberculosis (TB) was developed based on the different quantum dots-PowerVision(TM) polymer nanotracers. The polymer nanotracers were respectively fabricated by immobilizing SH-labeled oligonucleotides (s-HIV or s-TB), which can partially hybridize with virus DNA (HIV or TB), on gold nanoparticles (Au NPs) and then modifying them with PowerVision(TM) (PV) polymer-encapsulated quantum dots (CdS or PbS) as signal tags. PV is a dendrimer enzyme linked polymer, which can immobilize abundant QDs to amplify the stripping voltammetry signals from the metal ions (Pb or Cd). The capture probes were prepared through the immobilization of SH-labeled oligonucleotides, which are complementary to HIV and TB DNA, on the magnetic Fe3O4@Au (GMPs) beads. After sandwich-hybridization, the polymer nanotracers together with HIV and TB DNA targets were simultaneously introduced onto the surface of GMPs. Then the two encoding metal ions (Cd(2+) and Pb(2+)) were used to differentiate the two virus DNA targets through the subsequent anodic stripping voltammetric peaks at -0.84 V (Cd) and -0.61 V (Pb). Because of the excellent signal amplification of the polymer nanotracers and the great specificity of the DNA targets, this assay could detect target DNA at levels as low as 0.2 femtomolar and exhibited excellent selectivity with a dynamic range from 0.5 fM to 500 pM. These results demonstrate that this electrochemical coding assay has great potential for screening additional virus DNA targets by changing the probes.

  12. Optimization of Micro Strip Array Antennas Using Hybrid Particle Swarm Optimizer with Breeding and Subpopulation for Maximum Side-Lobe Reduction

    Directory of Open Access Journals (Sweden)

    F. T. Bendimerad

    2008-12-01

    Full Text Available In this paper, a technique based on a hybrid particle swarm optimizer with breeding and subpopulation is presented for the optimal design of reconfigurable dual-beam linear array antennas and planar arrays. In the amplitude-phase synthesis, the design of a reconfigurable dual-pattern antenna array is based on finding a common amplitude distribution that can generate either a pencil or a sector beam power pattern, when the phase distribution of the array is modified appropriately. The goal of this study is to introduce the hybrid model to the electromagnetic community and demonstrate its great potential in electromagnetic optimizations.

  13. Development of a fluorescence in situ hybridization protocol for the identification of micro-organisms associated with wastewater particles and flocs.

    Science.gov (United States)

    Ormeci, Banu; Linden, Karl G

    2008-11-01

    Fluorescence in situ hybridization (FISH) provides a unique tool to study micro-organisms associated with particles and flocs. FISH enables visual examination of micro-organisms while they are structurally intact and associated with particles. However, application of FISH to wastewater and sludge samples presents a specific set of problems. Wastewater samples generate high background fluorescence due to their organic and inorganic content making it difficult to differentiate a probe-conferred signal from naturally fluorescing particles with reasonable certainty. Furthermore, some of the FISH steps involve harsh treatment of samples, and are likely to disrupt the floc structure. This study developed a FISH protocol for studying micro-organisms that are associated with particles and flocs. The results indicate that choice of a proper fluorochrome and labeling technique is a key step in reducing the background fluorescence and non-specific binding, and increasing the intensity of the probe signal. Compared to other fluorochromes tested, CY3 worked very well and enabled the observation of particles and debris in red and probe signal from microbes in yellow. Fixation, hybridization, and washing steps disturbed the floc structure and particle-microbe association. Modifications to these steps were necessary, and were achieved by replacing centrifugation with filtration and employment of nylon filters. Microscope slides generated excellent quality images, but polycarbonate membrane filters performed better in preserving the floc structure.

  14. A hybrid particle swarm optimization-SVM classification for automatic cardiac auscultation

    Directory of Open Access Journals (Sweden)

    Prasertsak Charoen

    2017-04-01

    Full Text Available Cardiac auscultation is a method for a doctor to listen to heart sounds, using a stethoscope, for examining the condition of the heart. Automatic cardiac auscultation with machine learning is a promising technique to classify heart conditions without the need for doctors or expert knowledge. In this paper, we develop a classification model based on a support vector machine (SVM) and particle swarm optimization (PSO) for an automatic cardiac auscultation system. The model consists of two parts: a heart sound signal processing part and a proposed PSO for weighted SVM (WSVM) classifier part. In this method, the PSO takes into account the degree of importance of each feature extracted from wavelet packet (WP) decomposition. Then, by using principal component analysis (PCA), the features can be selected. The PSO technique is used to assign diverse weights to different features for the WSVM classifier. Experimental results show that both continuous and binary PSO-WSVM models achieve better classification accuracy on the heart sound samples, by reducing system false negatives (FNs), compared to traditional SVM and genetic algorithm (GA) based SVM.
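
    The weighted-SVM loop can be sketched as follows, with PSO searching a vector of feature weights whose fitness is the cross-validated accuracy of an SVM on the re-weighted features. The synthetic dataset, PSO constants and scikit-learn classifier below are assumptions for illustration, not the authors' heart-sound pipeline.

      # Sketch of PSO-driven feature weighting for an SVM classifier.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(4)
      X, y = make_classification(n_samples=200, n_features=12, random_state=0)

      def fitness(weights):
          # Cross-validated accuracy of an RBF SVM trained on re-weighted features.
          return cross_val_score(SVC(kernel="rbf"), X * weights, y, cv=3).mean()

      n_particles, dim, iters = 15, X.shape[1], 20
      pos = rng.random((n_particles, dim))
      vel = np.zeros_like(pos)
      pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[np.argmax(pbest_fit)].copy()

      for _ in range(iters):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, 0.0, 1.0)           # keep feature weights in [0, 1]
          fit = np.array([fitness(p) for p in pos])
          improved = fit > pbest_fit
          pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
          gbest = pbest[np.argmax(pbest_fit)].copy()

      print("best cross-validated accuracy:", pbest_fit.max())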

  15. Hybrid Artificial Bee Colony Algorithm and Particle Swarm Search for Global Optimization

    Directory of Open Access Journals (Sweden)

    Wang Chun-Feng

    2014-01-01

    Full Text Available The artificial bee colony (ABC) algorithm is one of the most recent swarm intelligence based algorithms and has been shown to be competitive with other population-based algorithms. However, there is still an insufficiency in ABC regarding its solution search equation, which is good at exploration but poor at exploitation. To overcome this problem, we propose a novel artificial bee colony algorithm based on a particle swarm search mechanism. In this algorithm, firstly, to improve the convergence speed, the initial population is generated using good point set theory rather than random selection. Secondly, in order to enhance the exploitation ability, the employed bees, onlookers, and scouts utilize the PSO mechanism to search for new candidate solutions. Finally, to further improve the searching ability, a chaotic search operator is applied to the best solution of the current iteration. Our algorithm is tested on some well-known benchmark functions and compared with other algorithms. Results show that our algorithm has good performance.
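
    A compact sketch of the hybrid idea follows: an ABC-style loop whose candidate solutions are pulled toward the global best in PSO fashion, with a logistic-map chaotic perturbation of the best solution at each iteration. The benchmark function, abandonment limit and constants are assumed, and the good-point-set initialization is replaced by a plain random one.

      # ABC skeleton with a PSO-style candidate move and a chaotic local search.
      import numpy as np

      rng = np.random.default_rng(5)

      def sphere(x):                                 # simple benchmark objective (minimize)
          return np.sum(x**2)

      dim, n_food, iters, limit = 10, 20, 300, 30
      foods = rng.uniform(-5, 5, (n_food, dim))      # food sources = candidate solutions
      fits = np.array([sphere(f) for f in foods])
      trials = np.zeros(n_food, dtype=int)
      chaos = 0.7                                    # logistic-map state for the chaotic search

      for _ in range(iters):
          gbest = foods[np.argmin(fits)].copy()
          # Employed/onlooker phase: PSO-style pull toward the global best.
          for i in range(n_food):
              cand = foods[i] + rng.uniform(-1, 1, dim) * (gbest - foods[i])
              c = sphere(cand)
              if c < fits[i]:
                  foods[i], fits[i], trials[i] = cand, c, 0
              else:
                  trials[i] += 1
          # Scout phase: abandon sources that stopped improving.
          for i in np.where(trials > limit)[0]:
              foods[i] = rng.uniform(-5, 5, dim)
              fits[i], trials[i] = sphere(foods[i]), 0
          # Chaotic local search around the current best solution.
          chaos = 4.0 * chaos * (1.0 - chaos)
          best_i = np.argmin(fits)
          cand = foods[best_i] + 0.1 * (2.0 * chaos - 1.0)
          c = sphere(cand)
          if c < fits[best_i]:
              foods[best_i], fits[best_i] = cand, c

      print("best value found:", fits.min())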

  16. PSOVina: The hybrid particle swarm optimization algorithm for protein-ligand docking.

    Science.gov (United States)

    Ng, Marcus C K; Fong, Simon; Siu, Shirley W I

    2015-06-01

    Protein-ligand docking is an essential step in the modern drug discovery process. The challenge here is to accurately predict and efficiently optimize the position and orientation of ligands in the binding pocket of a target protein. In this paper, we present a new method called PSOVina which combines the particle swarm optimization (PSO) algorithm with the efficient Broyden-Fletcher-Goldfarb-Shanno (BFGS) local search method adopted in AutoDock Vina to tackle the conformational search problem in docking. Using a diverse data set of 201 protein-ligand complexes from the PDBbind database and a full set of ligands and decoys for four representative targets from the directory of useful decoys (DUD) virtual screening data set, we assessed the docking performance of PSOVina in comparison to the original Vina program. Our results showed that PSOVina achieves a remarkable execution time reduction of 51-60% without compromising the prediction accuracies in the docking and virtual screening experiments. This improvement in time efficiency makes PSOVina a better choice of a docking tool in large-scale protein-ligand docking applications. Our work lays the foundation for the future development of swarm-based algorithms in molecular docking programs. PSOVina is freely available to non-commercial users at http://cbbio.cis.umac.mo.
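
    The general pattern described above, a PSO global search whose best candidate is polished by a BFGS local optimizer, can be sketched as below. This is not PSOVina: a Rastrigin-like test function stands in for the Vina scoring function, and all constants are assumptions.

      # PSO global search with BFGS refinement of the current best particle.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(6)

      def score(x):                                  # placeholder for the docking score
          return np.sum(x**2 - 10 * np.cos(2 * np.pi * x)) + 10 * x.size

      dim, n_particles, iters = 6, 20, 40
      pos = rng.uniform(-5, 5, (n_particles, dim))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([score(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)].copy()

      for _ in range(iters):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
          pos += vel
          vals = np.array([score(p) for p in pos])
          # Local BFGS polish of the best particle found in this iteration.
          i = np.argmin(vals)
          res = minimize(score, pos[i], method="BFGS")
          if res.fun < vals[i]:
              pos[i], vals[i] = res.x, res.fun
          better = vals < pbest_val
          pbest[better], pbest_val[better] = pos[better], vals[better]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print("best score:", pbest_val.min())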

  17. 3D Particle Track Reconstruction in a Single Layer Cadmium-Telluride Hybrid Active Pixel Detector

    CERN Document Server

    Filipenko, Mykhaylo; Anton, Gisela; Michel, Thilo

    2014-01-01

    In the past 20 years the search for neutrinoless double beta decay has driven many developments in all kinds of detector technology. A new branch in this field is highly-pixelated semiconductor detectors - such as the CdTe-Timepix detector. It comprises a cadmium-telluride sensor of 14 mm x 14 mm x 1 mm size with an ASIC which has 256 x 256 pixels of 55 μm pixel pitch and can be used to obtain either spectroscopic or timing information in every pixel. In regular operation it can provide a 2D projection of particle trajectories; however, three-dimensional trajectories are desirable for neutrinoless double beta decay and other applications. In this paper we present a method to obtain such trajectories. The method was developed and tested with simulations that assume some minor modifications to the Timepix ASIC. Also, we were able to test the method experimentally and in the best case achieved a position resolution of about 90 μm with electrons of 4.4 GeV.

  18. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines.

    Science.gov (United States)

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-08-27

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Since the space, time and frequency resources of an underground tunnel are open, it is proposed to build wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions to the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem of multiple access interference (MAI), which arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission, and applying the D-PSO algorithm.
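
    The multiuser-detection idea behind D-PSO can be illustrated with a simplified binary PSO that searches candidate bit vectors minimizing the reconstruction error of a synchronous CDMA-style received signal. The spreading codes, noise level and sigmoid-based binary update below are assumptions, not the exact D-PSO formulation.

      # Simplified binary-PSO multiuser (multi-sensor) detection: r = S b + noise.
      import numpy as np

      rng = np.random.default_rng(7)
      K, L = 8, 32                                   # users (sensors) and spreading-code length

      S = rng.choice([-1.0, 1.0], size=(L, K))       # spreading codes (one column per user)
      b_true = rng.choice([-1.0, 1.0], size=K)       # transmitted bits
      r = S @ b_true + 0.5 * rng.normal(size=L)      # received signal

      def cost(b):
          return np.sum((r - S @ b) ** 2)            # reconstruction error to minimize

      n_particles, iters = 30, 60
      bits = rng.choice([-1.0, 1.0], size=(n_particles, K))
      vel = np.zeros((n_particles, K))
      pbest, pbest_val = bits.copy(), np.array([cost(b) for b in bits])
      gbest = pbest[np.argmin(pbest_val)].copy()

      for _ in range(iters):
          r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
          vel = 0.7 * vel + 1.5 * r1 * (pbest - bits) + 1.5 * r2 * (gbest - bits)
          prob = 1.0 / (1.0 + np.exp(-vel))          # sigmoid maps velocity to P(bit = +1)
          bits = np.where(rng.random(vel.shape) < prob, 1.0, -1.0)
          vals = np.array([cost(b) for b in bits])
          better = vals < pbest_val
          pbest[better], pbest_val[better] = bits[better], vals[better]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print("bit errors:", int(np.sum(gbest != b_true)))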

  19. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines

    Directory of Open Access Journals (Sweden)

    Jingjing Xu

    2015-08-01

    Full Text Available In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Since the space, time and frequency resources of an underground tunnel are open, it is proposed to build wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions to the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem of multiple access interference (MAI), which arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission, and applying the D-PSO algorithm.

  20. EMAIL SPAM CLASSIFICATION USING HYBRID APPROACH OF RBF NEURAL NETWORK AND PARTICLE SWARM OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Mohammed Awad

    2016-07-01

    Full Text Available Email is one of the most popular communication media in the current century; it has become an effective and fast method to share and exchange information all over the world. In recent years, email users have been facing the problem of spam emails. Spam emails are unsolicited bulk emails sent by spammers. They consume storage on mail servers, waste time and consume network bandwidth. Many methods are used for spam filtering to classify email messages into two groups, spam and non-spam. In general, one of the most powerful tools used for data classification is Artificial Neural Networks (ANNs); they have the capability of dealing with huge amounts of high-dimensional data with good accuracy. One important type of ANN is the Radial Basis Function Neural Network (RBFNN), which is used in this work to classify spam messages. In this paper, we present a new spam filtering technique which combines an RBFNN and the Particle Swarm Optimization (PSO) algorithm (HC-RBFPSO). The proposed approach uses the PSO algorithm to optimize the RBFNN parameters, depending on the evolutionary heuristic search process of PSO. PSO is used to optimize the best positions of the RBFNN centers c. The radii r are optimized using the K-Nearest Neighbors algorithm and the weights w are optimized using the Singular Value Decomposition algorithm within each iterative process of PSO, depending on the fitness (error) function. The experiments are conducted on a spam dataset, namely SPAMBASE, downloaded from the UCI Machine Learning Repository. The experimental results show that our approach performs well in accuracy compared with other approaches that use the same dataset.
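
    A loose sketch of the ingredients on synthetic data is given below: PSO proposes the RBF centers, each radius is set from the k nearest neighbouring centers, and the output weights are solved by an SVD-based least-squares fit. The random features stand in for SPAMBASE, and all constants are assumptions rather than the HC-RBFPSO settings.

      # RBF network whose centers are chosen by PSO; radii from k-NN distances,
      # weights from numpy's SVD-based least squares.
      import numpy as np

      rng = np.random.default_rng(11)
      X = rng.normal(0, 1, (300, 5))
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)     # synthetic labels

      n_centers, k = 12, 3

      def rbf_design(X, centers, radii):
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
          return np.exp(-d2 / (2 * radii**2))

      def fitness(flat_centers):
          centers = flat_centers.reshape(n_centers, X.shape[1])
          # Radii from the k nearest neighbouring centers (excluding self).
          dc = np.sqrt(((centers[:, None] - centers[None, :]) ** 2).sum(axis=2))
          radii = np.sort(dc, axis=1)[:, 1:k + 1].mean(axis=1) + 1e-6
          G = rbf_design(X, centers, radii)
          w, *_ = np.linalg.lstsq(G, y, rcond=None)       # SVD-based weight fit
          return np.mean((G @ w - y) ** 2)                # training MSE as the error function

      dim = n_centers * X.shape[1]
      pos = rng.normal(0, 1, (20, dim))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)].copy()
      for _ in range(30):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos += vel
          vals = np.array([fitness(p) for p in pos])
          better = vals < pbest_val
          pbest[better], pbest_val[better] = pos[better], vals[better]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print("best training MSE:", pbest_val.min())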

  1. Crude oil price forecasting based on hybridizing wavelet multiple linear regression model, particle swarm optimization techniques, and principal component analysis.

    Science.gov (United States)

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose the original time series into several subseries at different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.
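
    The pipeline can be illustrated with a toy sketch in which a crude moving-average split stands in for the Mallat wavelet decomposition, lagged subseries feed PCA and a multiple linear regression, and the PSO parameter tuning is omitted. The data and window sizes are synthetic assumptions, not the paper's WTI experiments.

      # Toy decomposition + PCA + multiple linear regression forecast.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(8)
      price = np.cumsum(rng.normal(0, 1, 600)) + 80.0        # synthetic "price" series

      window = 8
      approx = np.convolve(price, np.ones(window) / window, mode="same")  # smooth component
      detail = price - approx                                             # high-frequency residual

      lags = 4
      rows = []
      for t in range(lags, len(price) - 1):
          # Lagged values of both subseries as predictors, next-day price as target.
          rows.append(np.r_[approx[t - lags:t], detail[t - lags:t], price[t + 1]])
      data = np.array(rows)
      X, y = data[:, :-1], data[:, -1]

      split = int(0.8 * len(X))
      pca = PCA(n_components=4).fit(X[:split])
      model = LinearRegression().fit(pca.transform(X[:split]), y[:split])
      pred = model.predict(pca.transform(X[split:]))
      rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
      print("out-of-sample RMSE:", rmse)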

  2. Receptor-mediated membrane adhesion of lipid-polymer hybrid (LPH) nanoparticles studied by dissipative particle dynamics simulations.

    Science.gov (United States)

    Li, Zhenlong; Gorfe, Alemayehu A

    2015-01-14

    Lipid-polymer hybrid (LPH) nanoparticles represent a novel class of targeted drug delivery platforms that combine the advantages of liposomes and biodegradable polymeric nanoparticles. However, the molecular details of the interaction between LPHs and their target cell membranes remain poorly understood. We have investigated the receptor-mediated membrane adhesion process of a ligand-tethered LPH nanoparticle using extensive dissipative particle dynamics (DPD) simulations. We found that the spontaneous adhesion process follows a first-order kinetics characterized by two distinct stages: a rapid nanoparticle-membrane engagement, followed by a slow growth in the number of ligand-receptor pairs coupled with structural re-organization of both the nanoparticle and the membrane. The number of ligand-receptor pairs increases with the dynamic segregation of ligands and receptors toward the adhesion zone causing an out-of-plane deformation of the membrane. Moreover, the fluidity of the lipid shell allows for strong nanoparticle-membrane interactions to occur even when the ligand density is low. The LPH-membrane avidity is enhanced by the increased stability of each receptor-ligand pair due to the geometric confinement and the cooperative effect arising from multiple binding events. Thus, our results reveal the unique advantages of LPH nanoparticles as active cell-targeting nanocarriers and provide some general principles governing nanoparticle-cell interactions that may aid future design of LPHs with improved affinity and specificity for a given target of interest.

  3. Receptor-mediated membrane adhesion of lipid-polymer hybrid (LPH) nanoparticles studied by dissipative particle dynamics simulations

    Science.gov (United States)

    Li, Zhenlong; Gorfe, Alemayehu A.

    2014-12-01

    Lipid-polymer hybrid (LPH) nanoparticles represent a novel class of targeted drug delivery platforms that combine the advantages of liposomes and biodegradable polymeric nanoparticles. However, the molecular details of the interaction between LPHs and their target cell membranes remain poorly understood. We have investigated the receptor-mediated membrane adhesion process of a ligand-tethered LPH nanoparticle using extensive dissipative particle dynamics (DPD) simulations. We found that the spontaneous adhesion process follows a first-order kinetics characterized by two distinct stages: a rapid nanoparticle-membrane engagement, followed by a slow growth in the number of ligand-receptor pairs coupled with structural re-organization of both the nanoparticle and the membrane. The number of ligand-receptor pairs increases with the dynamic segregation of ligands and receptors toward the adhesion zone causing an out-of-plane deformation of the membrane. Moreover, the fluidity of the lipid shell allows for strong nanoparticle-membrane interactions to occur even when the ligand density is low. The LPH-membrane avidity is enhanced by the increased stability of each receptor-ligand pair due to the geometric confinement and the cooperative effect arising from multiple binding events. Thus, our results reveal the unique advantages of LPH nanoparticles as active cell-targeting nanocarriers and provide some general principles governing nanoparticle-cell interactions that may aid future design of LPHs with improved affinity and specificity for a given target of interest.

  4. Optimal Energy Management Strategy of a Plug-in Hybrid Electric Vehicle Based on a Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Zeyu Chen

    2015-04-01

    Full Text Available Plug-in hybrid electric vehicles (PHEVs) have been recognized as one of the most promising vehicle categories nowadays due to their low fuel consumption and reduced emissions. Energy management is critical for improving the performance of PHEVs. This paper proposes an energy management approach based on a particle swarm optimization (PSO) algorithm. The optimization objective is to minimize the total energy cost (summation of oil and electricity) from vehicle utilization. A main drawback of optimal strategies is that they can hardly be used in real-time control. In order to solve this problem, a rule-based strategy containing three operation modes is proposed first, and then the PSO algorithm is implemented on four threshold values in the presented rule-based strategy. The proposed strategy has been verified by the US06 driving cycle under the MATLAB/Simulink software environment. Two different driving cycles are adopted to evaluate the generalization ability of the proposed strategy. Simulation results indicate that the proposed PSO-based energy management method can achieve better energy efficiency compared with traditional blended strategies. Online control performance of the proposed approach has been demonstrated through a driver-in-the-loop real-time experiment.
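
    The approach can be illustrated with a toy sketch: a rule-based controller with a handful of threshold parameters is evaluated on a synthetic power-demand profile, and PSO searches the thresholds that minimize a combined fuel-plus-electricity cost. The vehicle model, prices and threshold ranges are invented, and the US06 cycle is not reproduced.

      # PSO tuning the thresholds of a toy three-mode rule-based energy manager.
      import numpy as np

      rng = np.random.default_rng(9)
      demand = np.clip(rng.normal(20, 15, 600), 0, None)     # kW power demand, 1-s steps

      def total_cost(theta):
          soc_low, soc_high, p_ev_max, p_split = theta
          soc, fuel_kwh = 0.9, 0.0
          for p in demand:
              if soc > soc_high and p < p_ev_max:            # mode 1: electric only
                  batt, eng = p, 0.0
              elif soc > soc_low:                            # mode 2: blended operation
                  eng = p_split * p
                  batt = p - eng
              else:                                          # mode 3: charge sustaining
                  eng, batt = p + 5.0, -5.0
              soc -= batt / 3600.0 / 10.0                    # assumed 10 kWh battery pack
              fuel_kwh += eng / 3600.0 / 0.30                # assumed 30% engine efficiency
          grid_kwh = max(0.0, (0.9 - soc) * 10.0)            # recharge the depleted energy
          return 0.10 * fuel_kwh + 0.03 * grid_kwh           # assumed $/kWh prices

      lo = np.array([0.2, 0.5, 5.0, 0.1])                    # assumed threshold ranges
      hi = np.array([0.5, 0.9, 40.0, 0.9])
      pos = rng.uniform(lo, hi, (20, 4))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([total_cost(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)].copy()
      for _ in range(25):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, lo, hi)
          vals = np.array([total_cost(p) for p in pos])
          better = vals < pbest_val
          pbest[better], pbest_val[better] = pos[better], vals[better]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print("optimized thresholds:", gbest, "cost:", pbest_val.min())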

  5. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ani Shabri

    2014-01-01

    Full Text Available Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose the original time series into several subseries at different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.

  6. Hybrid approach combining dissipative particle dynamics and finite-difference diffusion model: simulation of reactive polymer coupling and interfacial polymerization.

    Science.gov (United States)

    Berezkin, Anatoly V; Kudryavtsev, Yaroslav V

    2013-10-21

    A novel hybrid approach combining dissipative particle dynamics (DPD) and finite difference (FD) solution of partial differential equations is proposed to simulate complex reaction-diffusion phenomena in heterogeneous systems. DPD is used for the detailed molecular modeling of mass transfer, chemical reactions, and phase separation near the liquid/liquid interface, while the FD approach is applied to describe the large-scale diffusion of reactants outside the reaction zone. A smooth, self-consistent procedure of matching the solute concentration is performed in the buffer region between the DPD and FD domains. The new model is tested on a simple model system admitting an analytical solution for the diffusion-controlled regime and then applied to simulate practically important heterogeneous processes of (i) reactive coupling between immiscible end-functionalized polymers and (ii) interfacial polymerization of two monomers dissolved in immiscible solvents. The results, obtained by extending the space and time scales accessible to modeling, provide new insights into the kinetics and mechanism of those processes and demonstrate the high robustness and accuracy of the novel technique.

  7. Collaborative hybrid-ARQ protocol with distributed code

    Institute of Scientific and Technical Information of China (English)

    吴熹; 龙华; 唐嘉麒; 彭永杰

    2015-01-01

    In order to improve the reliability of the cooperative communication system, a new HARQ protocol is proposed by combining distributed coding with diversity combining technology. The collaborative hybrid repeat request system is constructed with a distributed Turbo code. At the destination terminal, the retransmissions from the relay are processed with incremental redundancy, Chase combining is used to process the information from the source, and joint soft-decision decoding is performed. The outage probability and average throughput are derived. Simulation results show that, compared with non-collaborative HARQ protocols, the collaborative HARQ protocol with distributed code achieves better performance on a flat Rayleigh fading channel.

  8. Computational performance of SequenceL coding of the lattice Boltzmann method for multi-particle flow simulations

    Science.gov (United States)

    Başağaoğlu, Hakan; Blount, Justin; Blount, Jarred; Nelson, Bryant; Succi, Sauro; Westhart, Phil M.; Harwell, John R.

    2017-04-01

    This paper reports, for the first time, the computational performance of SequenceL for mesoscale simulations of large numbers of particles in a microfluidic device via the lattice-Boltzmann method. The performance of SequenceL simulations was assessed against the optimized serial and parallelized (via OpenMP directives) FORTRAN90 simulations. At present, OpenMP directives were not included in inter-particle and particle-wall repulsive (steric) interaction calculations due to difficulties that arose from inter-iteration dependencies between consecutive iterations of the do-loops. SequenceL simulations, on the other hand, relied on built-in automatic parallelism. Under these conditions, numerical simulations revealed that the parallelized FORTRAN90 outran the performance of SequenceL by a factor of 2.5 or more when the number of particles was 100 or less. SequenceL, however, outran the performance of the parallelized FORTRAN90 by a factor of 1.3 when the number of particles was 300. Our results show that when the number of particles increased by 30-fold, the computational time of SequenceL simulations increased linearly by a factor of 1.5, as compared to a 3.2-fold increase in serial and a 7.7-fold increase in parallelized FORTRAN90 simulations. Considering SequenceL's efficient built-in parallelism that led to a relatively small increase in computational time with increased number of particles, it could be a promising programming language for computationally-efficient mesoscale simulations of large numbers of particles in microfluidic experiments.

  9. Developments of Electromagnetic Particle Simulation Code for Magnetic Reconnection Researches in Open System PASMO and Visualization Library VISMO

    Science.gov (United States)

    Ohtani, H.; Horiuchi, R.; Nunami, M.; Usami, S.; Ohno, N.

    2014-10-01

    As computer capabilities improve, simulations grow larger and larger, which raises two major issues: how to develop an efficient simulation code, and how to visualize the large data sets produced by the simulation. In order to investigate magnetic reconnection from the microscopic viewpoint, we develop a three-dimensional electromagnetic PIC code in an open system (PASMO). To run the code on a distributed-memory, multi-processor computer system with a distributed parallel algorithm, we decompose the simulation domain three-dimensionally and introduce a charge conservation scheme to avoid global calculations such as an FFT-based Poisson solver. For the visualization of the simulation data, we develop an in-situ visualization library, VISMO, for the PIC simulation to carry out the visualization in tandem with the simulation on the same computers. The simulation code with VISMO generates image files instead of raw data. We will discuss the performance of the new PASMO and the simulation results on magnetic reconnection visualized by VISMO. Supported by a Grant-in-Aid for Scientific Research from JSPS (Grant No. 23340182) and General Coordinated Research at NIFS (NIFS14KNSS046, NIFS13KNXN260 and NIFS13KNTS024).

  10. Tripoli-3: Monte Carlo transport code for neutral particles - version 3.5 - user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Vergnaud, Th.; Nimal, J.C.; Chiron, M

    2001-07-01

    The TRIPOLI-3 code applies the Monte Carlo method to neutron, gamma-ray and coupled neutron and gamma-ray transport calculations in three-dimensional geometries, either in steady-state conditions or with a time dependence. It can be used to study problems where there is a high flux attenuation between the source zone and the result zone (studies of shielding configurations or source-driven sub-critical systems, with fission being taken into account), as well as problems where there is a low flux attenuation (neutronic calculations -- in a fuel lattice cell, for example -- where fission is taken into account, usually with the calculation of the effective multiplication factor, fine structure studies, numerical experiments to investigate method approximations, etc.). TRIPOLI-3 has been operational since 1995 and is the version of the TRIPOLI code that follows on from TRIPOLI-2; it can be used on SUN, RISC600 and HP workstations and on PCs using the Linux or Windows/NT operating systems. The code uses nuclear data libraries generated with the THEMIS/NJOY system. The current libraries were derived from ENDF/B6 and JEF2. There is also a response function library based on a number of evaluations, notably the dosimetry libraries IRDF/85 and IRDF/90 and also evaluations from JEF2. The treatment of particle transport is the same in version 3.5 as in version 3.4 of the TRIPOLI code, but version 3.5 is more convenient for preparing the input data and for reading the output. A French version of the user's manual also exists. (authors)

  11. Enhanced dielectric properties of poly(vinylidene fluoride) composites filled with nano iron oxide-deposited barium titanate hybrid particles

    Science.gov (United States)

    Zhang, Changhai; Chi, Qingguo; Dong, Jiufeng; Cui, Yang; Wang, Xuan; Liu, Lizhu; Lei, Qingquan

    2016-09-01

    We report enhancement of the dielectric permittivity of poly(vinylidene fluoride) (PVDF) generated by depositing magnetic iron oxide (Fe3O4) nanoparticles on the surface of barium titanate (BT) to fabricate BT–Fe3O4/PVDF composites. This process introduced an external magnetic field, and the influence of the external magnetic field on the dielectric properties of the composites was investigated systematically. The composites subjected to magnetic field treatment for 30 min at 60 °C exhibited the largest dielectric permittivity (385 at 100 Hz) when the BT–Fe3O4 concentration is approximately 33 vol.%. The BT–Fe3O4 suppressed the formation of a conducting path in the composite and induced low dielectric loss (0.3) and low conductivity (4.12 × 10^-9 S/cm) in the composite. A series-parallel model suggested that the enhanced dielectric permittivity of the BT–Fe3O4/PVDF composites should arise from the ultrahigh permittivity of the BT–Fe3O4 hybrid particles. However, the experimental results for the BT–Fe3O4/PVDF composites treated by the magnetic field agree with percolation theory, which indicates that the enhanced dielectric properties of the BT–Fe3O4/PVDF composites originate from the interfacial polarization induced by the external magnetic field. This work provides a simple and effective way for preparing nanocomposites with enhanced dielectric properties for use in the electronics industry.

  12. Stellar GADGET: A smooth particle hydrodynamics code for stellar astrophysics and its application to Type Ia supernovae from white dwarf mergers

    CERN Document Server

    Pakmor, R; Roepke, F K; Hillebrandt, W

    2012-01-01

    Mergers of two carbon-oxygen white dwarfs have long been suspected to be progenitors of Type Ia Supernovae. Here we present our modifications to the cosmological smoothed particle hydrodynamics code Gadget to apply it to stellar physics including but not limited to mergers of white dwarfs. We demonstrate a new method to map a one-dimensional profile of an object in hydrostatic equilibrium to a stable particle distribution. We use the code to study the effect of initial conditions and resolution on the properties of the merger of two white dwarfs. We compare mergers with approximate and exact binary initial conditions and find that exact binary initial conditions lead to a much more stable binary system but there is no difference in the properties of the actual merger. In contrast, we find that resolution is a critical issue for simulations of white dwarf mergers. Carbon burning hotspots which may lead to a detonation in the so-called violent merger scenario emerge only in simulations with sufficient resolutio...

  13. A fast algorithm for non-Newtonian flow. An enhanced particle-tracking finite element code for solving boundary-value problems in viscoelastic flow

    Science.gov (United States)

    Malkus, David S.

    1989-01-01

    This project concerned the development of a new fast finite element algorithm to solve flow problems of non-Newtonian fluids such as solutions or melts of polymers. Many constitutive theories for such materials involve single integrals over the deformation history of the particle at the stress evaluation point; examples are the Doi-Edwards and Curtiss-Bird molecular theories and the BKZ family derived from continuum arguments. These theories are believed to be among the most accurate in describing non-Newtonian effects important to polymer process design, effects such as stress relaxation, shear thinning, and normal stress effects. This research developed an optimized version of the algorithm which would run a factor of two faster than the pilot algorithm on scalar machines and would be able to take full advantage of vectorization on machines. Significant progress was made in code vectorization; code enhancement and streamlining; adaptive memory quadrature; model problems for the High Weissenberg Number Problem; exactly incompressible projection; development of multimesh extrapolation procedures; and solution of problems of physical interest. A portable version of the code is in the final stages of benchmarking and testing. It interfaces with the widely used FIDAP fluid dynamics package.

  14. Comparison of a 3-D multi-group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri.

    Science.gov (United States)

    Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas

    2009-12-03

    A patient dose distribution was calculated by a 3D multi-group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by not more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.

  15. On the Performance Analysis of Hybrid ARQ With Incremental Redundancy and With Code Combining Over Free-Space Optical Channels With Pointing Errors

    KAUST Repository

    Zedini, Emna

    2014-07-16

    In this paper, we investigate the performance of hybrid automatic repeat request (HARQ) with incremental redundancy (IR) and with code combining (CC) from an information-theoretic perspective over a point-to-point free-space optical (FSO) system. First, we introduce new closed-form expressions for the probability density function, the cumulative distribution function, the moment generating function, and the moments of an FSO link modeled by the Gamma fading channel subject to pointing errors and using the intensity modulation with direct detection technique at the receiver. Based on these formulas, we derive exact results for the average bit error rate and the capacity in terms of Meijer's G-functions. Moreover, we present asymptotic expressions by utilizing the Meijer's G-function expansion, and we also use the moments method for the ergodic capacity approximations. Then, we provide novel analytical expressions for the outage probability, the average number of transmissions, and the average transmission rate for HARQ with IR, assuming a maximum number of rounds for the HARQ protocol. Besides, we offer asymptotic expressions for these results in terms of simple elementary functions. Additionally, we compare the performance of HARQ with IR and HARQ with CC. Our analysis demonstrates that HARQ with IR outperforms HARQ with CC.

  16. Particle-Film Plasmons on Periodic Silver Film over Nanosphere (AgFON): A Hybrid Plasmonic Nanoarchitecture for Surface-Enhanced Raman Spectroscopy.

    Science.gov (United States)

    Lee, Jiwon; Zhang, Qianpeng; Park, Seungyoung; Choe, Ayoung; Fan, Zhiyong; Ko, Hyunhyub

    2016-01-13

    Plasmonic systems based on particle-film plasmonic couplings have recently attracted great attention because of the significantly enhanced electric field at the particle-film gaps. Here, we introduce a hybrid plasmonic architecture utilizing combined plasmonic effects of particle-film gap plasmons and silver film over nanosphere (AgFON) substrates. When gold nanoparticles (AuNPs) are assembled on AgFON substrates with controllable particle-film gap distances, the AuNP-AgFON system supports multiple plasmonic couplings from interparticle, particle-film, and crevice gaps, resulting in a huge surface-enhanced Raman spectroscopy (SERS) effect. We show that the periodicity of AgFON substrates and the particle-film gaps greatly affect the surface plasmon resonances, and thus the SERS effect, due to the interplay between multiple plasmonic couplings. The optimally designed AuNP-AgFON substrate shows a SERS enhancement of 233 times compared to the bare AgFON substrate. The ultrasensitive SERS sensing capability is also demonstrated by detecting glutathione, a neurochemical molecule that is an important antioxidant, down to the 10 pM level.

  17. Classification of Medical Datasets Using SVMs with Hybrid Evolutionary Algorithms Based on Endocrine-Based Particle Swarm Optimization and Artificial Bee Colony Algorithms.

    Science.gov (United States)

    Lin, Kuan-Cheng; Hsieh, Yi-Hsiu

    2015-10-01

    The classification and analysis of data is an important issue in today's research. Selecting a suitable set of features makes it possible to classify an enormous quantity of data quickly and efficiently. Feature selection is generally viewed as a feature subset selection problem, that is, a combinatorial optimization problem. Evolutionary algorithms using random search have proven highly effective at solving optimization problems in a wide range of applications. In this study, we developed a hybrid evolutionary algorithm based on endocrine-based particle swarm optimization (EPSO) and artificial bee colony (ABC) algorithms in conjunction with a support vector machine (SVM) for the selection of optimal feature subsets for the classification of datasets. The results of experiments using specific UCI medical datasets demonstrate that the accuracy of the proposed hybrid evolutionary algorithm is superior to that of basic PSO, EPSO and ABC algorithms, with regard to classification accuracy using subsets with a reduced number of features.
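
    As an illustration of the kind of wrapper-style feature selection described above, the sketch below uses a plain binary PSO (not the endocrine-based EPSO or the ABC variant of the paper) with cross-validated SVM accuracy as the fitness function; the dataset, swarm size and coefficients are arbitrary placeholders.

```python
# Generic binary-PSO wrapper feature selection with an SVM fitness function
# (a sketch, not the EPSO/ABC hybrid of the paper).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)     # placeholder dataset
n_particles, n_features, n_iters = 12, X.shape[1], 15

def fitness(mask):
    """3-fold cross-validated SVM accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pos = (rng.random((n_particles, n_features)) < 0.5).astype(float)   # binary positions
vel = rng.normal(0.0, 1.0, (n_particles, n_features))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, n_features))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))                                # sigmoid transfer
    pos = (rng.random((n_particles, n_features)) < prob).astype(float)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest), "cv accuracy:", round(pbest_fit.max(), 4))
```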

  18. Decision support tool for Virtual Power Players: Hybrid Particle Swarm Optimization applied to Day-ahead Vehicle-To-Grid Scheduling

    DEFF Research Database (Denmark)

    Soares, João; Valle, Zita; Morais, Hugo

    2013-01-01

    This paper presents a decision support tool methodology to help virtual power players (VPPs) in the Smart Grid (SGs) context to solve the day-ahead energy resource scheduling considering the intensive use of Distributed Generation (DG) and Vehicle-To-Grid (V2G). The main focus is the application...... of a new hybrid method combining a particle swarm approach and a deterministic technique based on mixed-integer linear programming (MILP) to solve the day-ahead scheduling minimizing total operation costs from the aggregator point of view. A realistic mathematical formulation, considering the electric network...... constraints and V2G charging and discharging efficiencies is presented. Full AC power flow calculation is included in the hybrid method to allow taking into account the network constraints. A case study with a 33-bus distribution network and 1800 V2G resources is used to illustrate the performance...

  19. Effect of surface fluorination of TiO2 particles on photocatalytic activity of a hybrid multilayer coating obtained by sol-gel method.

    Science.gov (United States)

    Zhu, Yunfeng; Piscitelli, Filomena; Buonocore, Giovanna G; Lavorgna, Marino; Amendola, Eugenio; Ambrosio, Luigi

    2012-01-01

    A multilayer photoactive coating containing surface fluorinated TiO(2) nanoparticles and hybrid matrices, prepared by a sol-gel approach based on renewable chitosan, was applied on poly(lactic acid) (PLA) film by a stepwise spin-coating method. The upper photoactive layer contains nano-sized functionalized TiO(2) particles dispersed in a siloxane-based matrix. For the purpose of improving TiO(2) dispersion at the air interface coating surface, TiO(2) nanoparticles were modified by the silane coupling agent 1H,1H,2H,2H-perfluorooctyltriethoxysilane (FTS) with fluoro-organic side chains. An additional hybrid material consisting of chitosan (CS) cross-linked with 3-glycidyloxypropyl trimethoxy silane (GOTMS) was applied as an interlayer between the PLA substrate and the upper photoactive coating to increase the adhesion and reciprocal affinity. The multilayer TiO(2)/CS-GOTMS coatings on PLA films showed a thickness of ~4-6 μm and were highly transparent. Their structure was exhaustively characterized by SEM, optical microscopy, UV-vis spectroscopy and contact angle measurements. The photocatalytic activity of the multilayer coatings was investigated using methyl orange (MeO) as a target pollutant; the results showed that PLA films coated with surface fluorinated particles exhibit higher activity than films with neat particles, because of a better dispersion of TiO(2) particles. The mechanical properties of PLA and of films coated with fluorinated particles, irradiated by UV light, were also investigated; the results showed that the degradation of the PLA substrate was markedly suppressed because of the UV-absorbing action of the multilayer coating.

  20. Reliability assessment of high energy particle induced radioactivity calculation code DCHAIN-SP 2001 by analysis of integral activation experiments with 14 MeV neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Kai, Tetsuya; Maekawa, Fujio; Kasugai, Yoshimi; Takada, Hiroshi; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kosako, Kazuaki [Sumitomo Atomic Energy Industries, Ltd., Tokyo (Japan)

    2002-03-01

    Reliability assessment for the high energy particle induced radioactivity calculation code DCHAIN-SP 2001 was carried out through analysis of integral activation experiments with 14-MeV neutrons aiming at validating the cross section and decay data revised from the previous version. The following three kinds of experiments conducted at the D-T neutron source facility, FNS, in JAERI were employed: (1) the decay gamma-ray measurement experiment for fusion reactor materials, (2) the decay heat measurement experiment for 32 fusion reactor materials, and (3) the integral activation experiment on mercury. It was found that the calculations with DCHAIN-SP 2001 predicted the experimental data for (1) - (3) within several tens of percent. It was concluded that the cross section data below 20 MeV and the associated decay data, as well as the calculation algorithm for solving the Bateman equation, which is the master equation of DCHAIN-SP, were adequate. (author)

  1. Atomic force microscopy indentation to determine mechanical property for polystyrene–silica core–shell hybrid particles with controlled shell thickness

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yang, E-mail: cy.jpu@126.com [School of Materials Science and Engineering, Changzhou University, Changzhou, Jiangsu 213164 (China); Qian, Cheng [School of Materials Science and Engineering, Changzhou University, Changzhou, Jiangsu 213164 (China); Miao, Naiming [School of Mechanical Engineering, Changzhou University, Changzhou, Jiangsu 213016 (China)

    2015-03-31

    Positively charged polystyrene (PS) particles with a size of ca. 200 nm were synthesized by soap-free polymerization. The PS cores were coated with silica shells of tunable thickness employing the modified Stöber method. The PS cores were removed by thermal decomposition at 500 °C, resulting in well-defined silica hollow spheres (10–30 nm in shell thickness). The elastic response of the as-synthesized samples was probed by an atomic force microscope (AFM). A point load was applied to the particle surface through a sharp AFM tip, and the force–displacement curves were recorded. Elastic moduli (E) for the PS particles (2.01 ± 0.70 GPa) and the core–shell structured hybrid particles were determined on the basis of the Hertzian contact model. The calculated E values of composites exhibited a linear dependence on the silica shell thickness. As the shell thickness increased from ca. 10 to 15 and 20 nm, the E values of composites increased from 4.42 ± 0.27 to 5.88 ± 0.48 and 9.07 ± 0.94 GPa. For core–shell structured organic/inorganic composites, the E values of the hybrid particles were much lower than those of inorganic shells, while these values were much closer to those of the organic cores. Moreover, the moduli of elasticity of the composites appeared to be determined by the properties of the polymer cores, the species of inorganic shells and the thickness of shells. Besides, the inorganic shells enhanced the mechanical properties of the polymer cores. This work will provide an essential experimental and theoretical basis for the design and application of core–shell structured organic/inorganic composite abrasives in chemical mechanical polishing/planarization. - Highlights: • The elastic moduli (E) of the PS/SiO2 hybrid particles were probed by AFM. • The E values of composites exhibited a linear dependence on the shell thickness. • The elasticity appeared to be determined by the properties of the organic cores. • The E values were affected
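
    The Hertzian analysis described above amounts to fitting the spherical-indenter relation F = (4/3) E_r sqrt(R) δ^(3/2) to the measured force-displacement curve and converting the reduced modulus E_r into a sample modulus. The sketch below illustrates this with synthetic data; the tip radius, Poisson ratios and tip modulus are assumed values, not those of the study.

```python
# Hedged sketch: extracting an elastic modulus from an AFM force-displacement
# curve with the Hertz model for a spherical tip, F = (4/3) * Er * sqrt(R) * d**1.5.
# Tip radius, Poisson ratios and the synthetic data below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

R = 20e-9                                       # assumed tip radius (m)
nu_sample, nu_tip, E_tip = 0.35, 0.27, 160e9    # assumed Poisson ratios, tip modulus (Pa)

def hertz(delta, Er):
    return (4.0 / 3.0) * Er * np.sqrt(R) * delta**1.5

# Synthetic indentation data standing in for a measured curve (depth in m, force in N).
delta = np.linspace(0, 20e-9, 50)
F_meas = hertz(delta, 3.0e9) * (1 + 0.02 * np.random.default_rng(1).normal(size=delta.size))

Er_fit, _ = curve_fit(hertz, delta, F_meas, p0=[1e9])
# Reduced modulus -> sample modulus: 1/Er = (1 - nu_s^2)/E_s + (1 - nu_tip^2)/E_tip.
E_sample = (1 - nu_sample**2) / (1 / Er_fit[0] - (1 - nu_tip**2) / E_tip)
print(f"fitted reduced modulus {Er_fit[0]/1e9:.2f} GPa, sample modulus {E_sample/1e9:.2f} GPa")
```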

  2. Pseudo-spectral Maxwell solvers for an accurate modeling of Doppler harmonic generation on plasma mirrors with Particle-In-Cell codes

    CERN Document Server

    Blaclard, G; Lehe, R; Vay, J L

    2016-01-01

    With the advent of PW class lasers, the very large laser intensities attainable on-target should enable the production of intense high order Doppler harmonics from relativistic laser-plasma mirror interactions. At present, the modeling of these harmonics with Particle-In-Cell (PIC) codes is extremely challenging as it implies an accurate description of tens of harmonic orders on a broad range of angles. In particular, we show here that standard Finite Difference Time Domain (FDTD) Maxwell solvers used in most PIC codes partly fail to model Doppler harmonic generation because they induce numerical dispersion of electromagnetic waves in vacuum which is responsible for a spurious angular deviation of harmonic beams. This effect was extensively studied and a simple toy-model based on Snell-Descartes law was developed that allows us to accurately predict the angular deviation of harmonics depending on the spatio-temporal resolution and the Maxwell solver used in the simulations. Our model demonstrates that the miti...
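
    For reference, the vacuum numerical dispersion the authors identify can be illustrated directly from the standard 2D Yee/FDTD dispersion relation; the sketch below evaluates the numerical phase velocity as a function of propagation angle for an assumed grid resolution and time step (values chosen arbitrarily here).

```python
# Hedged illustration of the numerical dispersion of the standard 2D Yee/FDTD scheme:
#   sin^2(w dt/2)/(c dt)^2 = sin^2(kx dx/2)/dx^2 + sin^2(ky dy/2)/dy^2.
# Grid resolution and wavelength below are arbitrary choices.
import numpy as np

c = 1.0
dx = dy = 1.0 / 20.0                    # 20 cells per wavelength (lambda = 1)
dt = 0.7 * dx / (c * np.sqrt(2.0))      # time step below the 2D CFL limit
k = 2 * np.pi                           # wavenumber of a unit wavelength

for theta_deg in (0, 15, 30, 45):
    theta = np.radians(theta_deg)
    kx, ky = k * np.cos(theta), k * np.sin(theta)
    rhs = (np.sin(kx * dx / 2) / dx) ** 2 + (np.sin(ky * dy / 2) / dy) ** 2
    omega = (2 / dt) * np.arcsin(c * dt * np.sqrt(rhs))   # numerical frequency
    print(f"angle {theta_deg:2d} deg: numerical phase velocity {omega / k:.5f} c")
```

    The angle dependence of the numerical phase velocity is what bends the harmonic beams away from their physical propagation direction, which is the effect the toy model above is built to predict.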

  3. Production of secondary particles and nuclei in cosmic rays collisions with the interstellar gas using the FLUKA code

    CERN Document Server

    Mazziotta, M N; Ferrari, A; Gaggero, D; Loparco, F; Sala, P R

    2016-01-01

    The measured fluxes of secondary particles produced by the interactions of Cosmic Rays (CRs) with the astronomical environment play a crucial role in understanding the physics of CR transport. In this work we present a comprehensive calculation of the secondary hadron, lepton, gamma-ray and neutrino yields produced by the inelastic interactions between several species of stable or long-lived cosmic-ray projectiles (p, D, T, 3He, 4He, 6Li, 7Li, 9Be, 10Be, 10B, 11B, 12C, 13C, 14C, 14N, 15N, 16O, 17O, 18O, 20Ne, 24Mg and 28Si) and different target gas nuclei (p, 4He, 12C, 14N, 16O, 20Ne, 24Mg, 28Si and 40Ar). The yields are calculated using FLUKA, a simulation package designed to compute the energy distributions of secondary products with high accuracy in a wide energy range. The present results provide, for the first time, a complete and self-consistent set of all the relevant inclusive cross sections regarding the whole spectrum of secondary products in nuclear collisions. We cover, for the projectiles, a ki...

  4. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    White, Morgan C. [Univ. of Florida, Gainesville, FL (United States)

    2000-07-01

    The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class "u" A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second

  5. Building a Hydrodynamics Code with Kinetic Theory

    CERN Document Server

    Sagert, Irina; Colbry, Dirk; Pickett, Rodney; Strother, Terrance

    2013-01-01

    We report on the development of a test-particle based kinetic Monte Carlo code for large systems and its application to simulate matter in the continuum regime. Our code combines advantages of the Direct Simulation Monte Carlo and the Point-of-Closest-Approach methods to solve the collision integral of the Boltzmann equation. With that, we achieve a high spatial accuracy in simulations while maintaining computational feasibility when applying a large number of test-particles. The hybrid setup of our approach allows us to study systems which move in and out of the hydrodynamic regime, with low and high particle densities. To demonstrate our code's ability to reproduce hydrodynamic behavior we perform shock wave simulations and focus here on the Sedov blast wave test. The blast wave problem describes the evolution of a spherical expanding shock front and is an important verification problem for codes which are applied in astrophysical simulation, especially for approaches which aim to study core-collapse supern...
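
    As a reference for the Sedov blast-wave test mentioned above, the analytic self-similar shock radius can be tabulated and compared with the simulated shock-front position; the sketch below assumes γ = 1.4 (for which the dimensionless constant is approximately 1.15) and placeholder code units for the explosion energy and background density.

```python
# Hedged sketch of the analytic reference for the Sedov blast-wave verification test:
# the self-similar shock radius R(t) = xi0 * (E * t**2 / rho0) ** (1/5).
# xi0 ~ 1.15 is the usual dimensionless constant for gamma = 1.4 in 3D;
# E and rho0 below are placeholder code units.
import numpy as np

xi0, E, rho0 = 1.15, 1.0, 1.0
t = np.linspace(0.01, 0.1, 10)
R_shock = xi0 * (E * t**2 / rho0) ** 0.2
for ti, Ri in zip(t, R_shock):
    print(f"t = {ti:.3f}  analytic shock radius = {Ri:.4f}")
# The simulated shock-front position at the same times can be compared against
# R_shock to quantify how well a kinetic code reproduces hydrodynamic behaviour.
```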

  6. Exploring the role of turbulent acceleration and heating in the fractal current sheet of solar flares from hybrid particle-in-cell and lattice Boltzmann virtual tests

    Science.gov (United States)

    Zhu, B.; Lin, J.; Yuan, X.; Li, Y.; Shen, C.

    2016-12-01

    The role of turbulent acceleration and heating in the fractal magnetic reconnection of solar flares is still not clear, especially at the X-point in the diffusion region. On the virtual-test side, it is hard to quantitatively analyze vortex generation, turbulence evolution, and particle acceleration and heating as magnetic islands coalesce in a fractal manner, form the largest plasmoid, and are ejected from the diffusion region using classical magnetohydrodynamic numerical methods. With the development of kinetic particle methods (the particle-in-cell [PIC] method and the lattice Boltzmann method [LBM]) and of high-performance computing technology over the last two decades, kinetic simulation has become an effective way to explore the role of magnetic-field and electric-field turbulence in charged-particle acceleration and heating, since all the physical aspects relating to turbulent reconnection are taken into account. In this paper, an LBM-based DxQy lattice and extended distribution are added to the charged-particle-to-grid interpolation of a PIC scheme based on the finite-difference time-domain method and the Yee grid, and the resulting hybrid PIC-LBM simulation tool is developed to investigate turbulent acceleration on TIANHE-2. Realistic solar coronal conditions (L ≈ 10⁵ km, B ≈ 50-500 G, T ≈ 5×10⁶ K, n ≈ 10⁸-10⁹, m_i/m_e ≈ 500-1836) are applied to study turbulent acceleration and heating in the fractal current sheet of a solar flare. At stage I, magnetic islands shrink due to magnetic tension forces; the shrinking halts when the kinetic energy of the accelerated particles is sufficient to halt the further collapse due to magnetic tension forces, so the particle energy gain is naturally a large fraction of the released magnetic energy. At stages II and III, the particles from the energized group come into the center of the diffusion region and stay longer in that area. In contrast, the particles from the non-energized group only skim the outer part of the

  7. Hybrid support vector regression and autoregressive integrated moving average models improved by particle swarm optimization for property crime rates forecasting with economic indicators.

    Science.gov (United States)

    Alwee, Razana; Shamsuddin, Siti Mariyam Hj; Sallehuddin, Roselina

    2013-01-01

    Crimes forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crimes data, it is common that the data consists of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rates forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on values of its parameters, while ARIMA is not robust to be applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models.
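
    A minimal sketch of the hybrid decomposition described above is given below: ARIMA models the linear component and an SVR trained on the ARIMA residuals captures the nonlinear component, with the two one-step forecasts summed. The PSO tuning of the SVR/ARIMA parameters is omitted here (fixed values are used), and a synthetic random-walk series stands in for the crime-rate data.

```python
# Hedged sketch of the ARIMA + SVR hybrid: linear part by ARIMA, nonlinear part
# by an SVR fitted on lagged ARIMA residuals. Parameter values are placeholders.
import numpy as np
from sklearn.svm import SVR
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.2, 1.0, 120))          # placeholder yearly series

arima = ARIMA(y, order=(1, 1, 1)).fit()
resid = np.asarray(arima.resid)                   # part left unexplained by ARIMA

p = 3                                             # lag order for the SVR on residuals
Xr = np.column_stack([resid[i:len(resid) - p + i] for i in range(p)])
yr = resid[p:]
svr = SVR(C=10.0, epsilon=0.01, gamma="scale").fit(Xr, yr)

linear_fc = float(np.asarray(arima.forecast(steps=1))[0])    # one-step ARIMA forecast
nonlinear_fc = float(svr.predict(resid[-p:].reshape(1, -1))[0])
print("hybrid one-step forecast:", linear_fc + nonlinear_fc)
```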

  8. Hybrid Support Vector Regression and Autoregressive Integrated Moving Average Models Improved by Particle Swarm Optimization for Property Crime Rates Forecasting with Economic Indicators

    Directory of Open Access Journals (Sweden)

    Razana Alwee

    2013-01-01

    Full Text Available Crimes forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crimes data, it is common that the data consists of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rates forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on values of its parameters, while ARIMA is not robust to be applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models.

  9. Effects of temperature and particles concentration on the dynamic viscosity of MgO-MWCNT/ethylene glycol hybrid nanofluid: Experimental study

    Science.gov (United States)

    Soltani, Omid; Akbari, Mohammad

    2016-10-01

    In this paper, the effects of temperature and particle concentration on the dynamic viscosity of MgO-MWCNT/ethylene glycol hybrid nanofluid are examined. The experiments were carried out in the solid volume fraction range of 0 to 1.0% at temperatures ranging from 30 °C to 60 °C. The results showed that the hybrid nanofluid behaves as a Newtonian fluid for all solid volume fractions and temperatures considered. The measurements also indicated that the dynamic viscosity increases with increasing solid volume fraction and decreases with rising temperature. The relative viscosity revealed that when the solid volume fraction increases from 0.1 to 1%, the dynamic viscosity increases up to 168%. Finally, using the experimental data, a new correlation has been suggested in order to predict the dynamic viscosity of MgO-MWCNT/ethylene glycol hybrid nanofluids. Comparisons between the correlation outputs and experimental results showed that the suggested correlation has an acceptable accuracy.

  10. Frequencies of complex chromosome exchange aberrations induced by 238Pu alpha-particles and detected by fluorescence in situ hybridization using single chromosome-specific probes.

    Science.gov (United States)

    Griffin, C S; Marsden, S J; Stevens, D L; Simpson, P; Savage, J R

    1995-04-01

    We undertook an analysis of chromosome-type exchange aberrations induced by alpha-particles using fluorescence in situ hybridization (FISH) with whole chromosome-specific probes for human chromosomes 1 or 4, together with a pan-centromeric probe. Contact-inhibited primary human fibroblasts (in G1) were irradiated with 0.41-1.00 Gy 238Pu alpha-particles and aberrations were analysed at the next mitosis following a single chromosome paint. Exchange and aberration painting patterns were classified according to Savage and Simpson (1994a). Of exchange aberrations, 38-47% were found to be complex derived, i.e. resulting from three or more breaks in two or more chromosomes, and the variation with dose was minimal. The class of complex aberrations most frequently observed were insertions, derived from a minimum of three breaks in two chromosomes. There was also an elevated frequency of rings. The high level of complex aberrations observed after alpha-particle irradiation indicates that, when chromosome domains are traversed by high linear energy transfer alpha-particle tracks, there is an enhanced probability of production of multiple localized double-strand breaks leading to more complicated interactions.

  11. Micromechanical analysis of a hybrid composite—effect of boron carbide particles on the elastic properties of basalt fiber reinforced polymer composite

    Science.gov (United States)

    Krishna Golla, Sai; Prasanthi, P.

    2016-11-01

    A fiber reinforced polymer (FRP) composite is an important material for structural application. The diversified application of FRP composites has become the center of attention for interdisciplinary research. However, improvements in the mechanical properties of this class of materials are still under research for different applications. The reinforcement of inorganic particles in a composite improves its structural properties due to their high stiffness. The present research work is focused on the prediction of the mechanical properties of hybrid composites where continuous fibers are reinforced in a micro boron carbide particle mixed polypropylene matrix. The effect of adding 30 wt.% of boron carbide (B4C) particles on the longitudinal and transverse properties of the basalt fiber reinforced polymer composite at various fiber volume fractions is examined by finite element analysis (FEA). The experimental approach is the best way to determine the properties of the composite but it is expensive and time-consuming. Therefore, the finite element method (FEM) and analytical methods are viable alternatives for determining the composite properties. The FEM results were obtained by adopting a micromechanics approach. Assuming a uniform distribution of reinforcement and considering one unit-cell of the whole array, the properties of the composite materials are determined. The predicted elastic properties from FEA are compared with the analytical results. The results suggest that B4C particles are a good reinforcement for the enhancement of the transverse properties of basalt fiber reinforced polypropylene.
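
    For comparison with such FEA unit-cell results, a common analytical baseline is the rule of mixtures for the longitudinal modulus and the Halpin-Tsai relation for the transverse modulus; the sketch below evaluates both over a range of fiber volume fractions using illustrative property values, not the data of this study.

```python
# Hedged analytical micromechanics sketch: rule of mixtures (longitudinal) and
# Halpin-Tsai (transverse) estimates for a unidirectional fiber composite whose
# matrix is a particle-filled polymer. All property values are placeholders.
Ef, Em_filled = 89.0, 4.5        # GPa: basalt fiber, B4C-filled polypropylene matrix (assumed)
xi = 2.0                         # Halpin-Tsai geometry parameter for E2 (common choice)

def composite_moduli(Vf):
    E1 = Ef * Vf + Em_filled * (1 - Vf)                      # rule of mixtures
    eta = (Ef / Em_filled - 1) / (Ef / Em_filled + xi)
    E2 = Em_filled * (1 + xi * eta * Vf) / (1 - eta * Vf)    # Halpin-Tsai
    return E1, E2

for Vf in (0.3, 0.4, 0.5, 0.6):
    E1, E2 = composite_moduli(Vf)
    print(f"Vf = {Vf:.1f}: E1 = {E1:5.1f} GPa, E2 = {E2:4.1f} GPa")
```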

  12. Thiophene-Functionalized Hybrid Perovskite Microrods and their Application in Photodetector Devices for Investigating Charge Transport Through Interfaces in Particle-Based Materials.

    Science.gov (United States)

    Kollek, Tom; Wurmbrand, Daniel; Birkhold, Susanne T; Zimmermann, Eugen; Kalb, Julian; Schmidt-Mende, Lukas; Polarz, Sebastian

    2017-01-11

    Particle-based semiconductor materials are promising constituents of future technologies. They are described by unique features resulting from the combination of discrete nanoparticle characteristics and the emergence of cooperative phenomena based on long-range interaction within their superstructure. (Nano)particles of outstanding quality with regard to size and shape can be prepared via colloidal synthesis using appropriate capping agents. The classical capping agents are electrically insulating, which impedes particle-particle electronic communication. Consequently, there exists a high demand for realizing charge transport through interfaces, especially for semiconductors of relevance like hybrid perovskites (HYPEs), for example, CH3NH3PbI3 (MAPI) as one of the most prominent representatives. Of particular interest are crystals in the micrometer range, as they possess synergistic advantages of single-crystalline bulk properties, shape control, and the possibility of being functionalized. Here we provide a synthetic strategy toward thiophene-functionalized single crystalline MAPI microrods originating from the single source precursor CH3NH3PbI3TEG2 (TEG = triethylene glycol). In the dark, the microrods show hole transport characteristics enhanced by 2 orders of magnitude compared to microscale cuboids with insulating alkyl surface modifiers and nonfunctionalized, randomly sized particles. In large-area prototype photodetector devices (2.21 cm²), the thiophene functionalization improves the response times because of the interparticle charge transport (tON = 190 ms, tOFF = 430 ms) compared to alkyl-functionalized particles (tON = 1055 ms, tOFF = 60 ms), at similar responsivities of 0.65 and 0.71 mA W⁻¹, respectively. Further, the surface functionalization and crystal grains on the micrometer scale improve the device stability. Therefore, this study provides clear evidence for the interplay and importance of crystal size, shape and surface

  13. High energy particles at Mars and Venus: Phobos-2, Mars Express and Venus Express observations and their interpretation by hybrid model simulations

    Science.gov (United States)

    McKenna-Lawlor, Susan; Kallio, Esa; Fram, Rudy A.; Alho, Markku; Jarvinen, Riku; Dyadechkin, Sergey; Wedlund, Cyril Simon; Zhang, Tielong; Collinson, Glyn A.; Futaana, Yoshifumi

    2013-04-01

    Mars and Venus can both be reached by Solar Energetic Particles (SEPs). Such high energy particles (protons, multiply charged heavy ions, electrons) penetrate the upper atmospheres of Mars and Venus because, in contrast to Earth, these bodies do not have a significant, global, intrinsic magnetic field to exclude them. One especially well documented, complex and prolonged SEP took place in early 1989 (Solar Cycle 23) when the Phobos-2 spacecraft was orbiting Mars. This spacecraft had a dedicated high energy particle instrument onboard (SLED), which measured particles with energies in the keV range up to a few tens of MeV. There was in addition a magnetometer as well as solar wind plasma detectors onboard, which together provided complementary data to support contemporaneous studies of the background SEP environment. Currently, while the Sun is displaying maximum activity (Solar Cycle 24), Mars and Venus are being individually monitored by instrumentation flown onboard the Mars Express (MEX) and Venus Express (VEX) spacecraft. Neither of these spacecraft carry a high energy particle instrument but their Analyzer of Space Plasmas and Energetic Atoms (ASPERA) experiments (ASPERA-3 on MEX and ASPERA-4 on VEX), can be used to study SEPs integrated over E ≥ ~30 MeV which penetrate the instrument hardware and form background counts in the plasma data. In the present work we present SEP events measured at Mars and Venus based on Phobos-2 1989 data and on more recent MEX and VEX observations (identified from particle background). We further introduce numerical global SEP simulations of the measured events based on 3-D self-consistent hybrid models (HYB-Mars and HYB-Venus). Through comparing the in situ SEP observations with these simulations, new insights are provided into the properties of the measured SEPs as well as into how their individual planetary bow shocks and magnetospheres affect the characteristics of their ambient Martian and Venusian SEP environments.

  14. An integrated high-performance beam optics-nuclear processes framework with hybrid transfer map-Monte Carlo particle transport and optimization

    Energy Technology Data Exchange (ETDEWEB)

    Bandura, L., E-mail: bandura@msu.ed [Argonne National Laboratory, Argonne, IL 60439 (United States); Erdelyi, B. [Argonne National Laboratory, Argonne, IL 60439 (United States); Northern Illinois University, DeKalb, IL 60115 (United States); Nolen, J. [Argonne National Laboratory, Argonne, IL 60439 (United States)

    2010-12-01

    An integrated beam optics-nuclear processes framework is essential for accurate simulation of fragment separator beam dynamics. The code COSY INFINITY provides powerful differential algebraic methods for modeling and beam dynamics simulations in the absence of beam-material interactions. However, these interactions are key for accurately simulating the dynamics of heavy ion fragmentation and fission. We have developed an extended version of the code that includes these interactions, and a set of new tools that allow efficient and accurate particle transport: by transfer map in vacuum and by Monte Carlo methods in materials. The new framework is presented, along with several examples from a preliminary layout of a fragment separator for a facility for rare isotope beams.

  15. Particle variations and effect on the microstructure and microhardness of Ti6Al4V hybrid metal matrix system

    CSIR Research Space (South Africa)

    Akinlabi, ET

    2017-01-01

    Full Text Available Manufacture of hybrid Ti6Al4V alloy systems for application specifically in hot parts of turbine engines has been a challenge in recent years. This is due to the need to increase efficiency and reduce combustion rate and emission. In this work...

  16. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Peiyuan [Univ. of Colorado, Boulder, CO (United States); Brown, Timothy [Univ. of Colorado, Boulder, CO (United States); Fullmer, William D. [Univ. of Colorado, Boulder, CO (United States); Hauser, Thomas [Univ. of Colorado, Boulder, CO (United States); Hrenya, Christine [Univ. of Colorado, Boulder, CO (United States); Grout, Ray [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sitaraman, Hariswaran [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-29

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10³ cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.

  17. Amino-functionalized breath-figure cavities in polystyrene-alumina hybrid films: effect of particle concentration and dispersion.

    Science.gov (United States)

    V, Lakshmi; Raju, Annu; V G, Resmi; Pancrecious, Jerin K; T P D, Rajan; C, Pavithran

    2016-03-14

    We report the formation of breath-figure (BF) patterns with amino-functionalized cavities in a BF incompatible polystyrene (PS) by incorporating functionalized alumina nanoparticles. The particles were amphiphilic-modified and the modifier ratio was regulated to achieve a specific hydrophobic/hydrophilic balance of the particles. The influence of the physical and chemical properties of the particles like particle concentration, the hydrophobic/hydrophilic balance, etc., on particle dispersion in solvents having different polarity and the corresponding changes in the BF patterns have been studied. The amphiphilic-modified alumina particles could successfully assist the BF mechanism, generating uniform patterns in polystyrene films with the cavity walls decorated with the functionalized alumina particles, even from water-miscible solvents like THF. The possibility of fabricating free-standing micropatterned films by casting and drying the suspension under ambient conditions was also demonstrated. The present method opens up a simple route for producing functionalized BF cavities, which can be post-modified by a chemical route for various biological applications.

  18. Single particle calculations for a Woods-Saxon potential with triaxial deformations, and large Cartesian oscillator basis (TRIAXIAL 2014, Third version of the code Triaxial)

    Science.gov (United States)

    Mohammed-Azizi, B.; Medjadi, D. E.

    2014-11-01

    Theory and FORTRAN program of the first version of this code (TRIAXIAL) have already been described in detail in Computer Physics Comm. 156 (2004) 241-282. A second version of this code (TRIAXIAL 2007) has been given in CPC 176 (2007) 634-635. The present FORTRAN program is the third version (TRIAXIAL 2014) of the same code. Now, it is written in free format. Like the former versions, this FORTRAN program solves the same Schrödinger equation of the independent particle model of the atomic nucleus with the same method. However, the present version is much more convenient. In effect, it is characterized by the fact that the eigenvalues and the eigenfunctions can be given by specific subroutines. The latter did not exist in the old versions (2004 and 2007). In addition, it is to be noted that in the previous versions, the eigenfunctions were given only by the coefficients of their expansion on the harmonic oscillator basis. This method is needed in some cases. But in other cases, it is preferable to treat the eigenfunctions directly in configuration space. For this reason, we have implemented an additional subroutine for this task. Some other practical subroutines have also been implemented. Moreover, eigenvalues and eigenfunctions are recorded onto several files. All these new features of the code and some important aspects of its structure are explained in the document ‘Triaxial2014 use.pdf’. Catalogue identifier: ADSK_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSK_v3_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 13672 No. of bytes in distributed program, including test data, etc.: 217598 Distribution format: tar.gz Programming language: FORTRAN 77/90 (double precision). Computer: PC. Pentium 4, 2600MHz and beyond. Operating system: WINDOWS XP

  19. Numerical Modeling and Investigation of Fluid-Driven Fracture Propagation in Reservoirs Based on a Modified Fluid-Mechanically Coupled Model in Two-Dimensional Particle Flow Code

    Directory of Open Access Journals (Sweden)

    Jian Zhou

    2016-09-01

    Full Text Available Hydraulic fracturing is a useful tool for enhancing rock mass permeability for shale gas development, enhanced geothermal systems, and geological carbon sequestration by the high-pressure injection of a fracturing fluid into tight reservoir rocks. Although significant advances have been made in hydraulic fracturing theory, experiments, and numerical modeling, knowledge is still limited when it comes to complex geological conditions. Mechanisms of fluid injection-induced fracture initiation and propagation should be better understood to take full advantage of hydraulic fracturing. This paper presents the development and application of discrete particle modeling based on two-dimensional particle flow code (PFC2D). Firstly, it is shown that the modeled value of the breakdown pressure for the hydraulic fracturing process is approximately equal to analytically calculated values under varied in situ stress conditions. Furthermore, a series of simulations for hydraulic fracturing in competent rock was performed to examine the influence of the in situ stress ratio, fluid injection rate, and fluid viscosity on the borehole pressure history, the geometry of hydraulic fractures, and the pore-pressure field, respectively. It was found that the hydraulic fractures in an isotropic medium always propagate parallel to the orientation of the maximum principal stress. When a high fluid injection rate is used, higher breakdown pressure is needed for fracture propagation and complex geometries of fractures can develop. When a low viscosity fluid is used, fluid can more easily penetrate from the borehole into the surrounding rock, which causes a reduction of the effective stress and leads to a lower breakdown pressure. Moreover, the geometry of the fractures is not particularly sensitive to the fluid viscosity in the approximately isotropic model.

  20. Synthesis of TiO2-poly(3-hexylthiophene) hybrid particles through surface-initiated Kumada catalyst-transfer polycondensation.

    Science.gov (United States)

    Boon, Florian; Moerman, David; Laurencin, Danielle; Richeter, Sébastien; Guari, Yannick; Mehdi, Ahmad; Dubois, Philippe; Lazzaroni, Roberto; Clément, Sébastien

    2014-09-30

    TiO2/conjugated polymers are promising materials in solar energy conversion where efficient photoinduced charge transfers are required. Here, a "grafting-from" approach for the synthesis of TiO2 nanoparticles supported with conjugated polymer brushes is presented. Poly(3-hexylthiophene) (P3HT), a benchmark material for organic electronics, was selectively grown from TiO2 nanoparticles by surface-initiated Kumada catalyst-transfer polycondensation. The grafting of the polymer onto the surface of the TiO2 nanoparticles by this method was demonstrated by (1)H and (13)C solid-state NMR, X-ray photoelectron spectrometry, thermogravimetric analysis, transmission electron microscopy, and UV-visible spectroscopy. Sedimentation tests in tetrahydrofuran revealed improved dispersion stability for the TiO2@P3HT hybrid material. Films were produced by solvent casting, and the quality of the dispersion of the modified TiO2 nanoparticles was evaluated by atomic force microscopy. The dispersion of the P3HT-coated TiO2 NPs in the P3HT matrix was found to be homogeneous, and the fibrillar structure of the P3HT matrix was maintained which is favorable for charge transport. Fluorescence quenching measurements on these hybrid materials in CHCl3 indicated improved photoinduced electron-transfer efficiency. All in all, better physicochemical properties for the P3HT/TiO2 hybrid material were reached via the surface-initiated "grafting-from" approach compared to the "grafting-onto" approach.

  1. Hybrid ant colony and particle swarm algorithm for solving TSP

    Institute of Scientific and Technical Information of China (English)

    孙凯; 吴红星; 王浩; 丁家栋

    2012-01-01

    The Traveling Salesman Problem (TSP) is one of the oldest and most extensively studied combinatorial optimization problems. For the TSP, a Hybrid Ant colony and Particle swarm Algorithm (HAPA) is proposed. HAPA divides the ant colony into several ant sub-colonies, treats the parameters of each sub-colony as a particle and optimizes them with the particle swarm optimization algorithm, and introduces a pheromone-exchange operation between the ant sub-colonies. Experimental results show that HAPA is superior to traditional and comparable algorithms in solving the TSP.
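
    A minimal sketch of the hybrid idea is given below: a small ant-colony TSP solver whose parameters (alpha, beta, rho) are tuned by a plain outer particle swarm. The city set, swarm size and iteration counts are toy values, and the sub-colony pheromone-exchange operation of HAPA is not reproduced.

```python
# Sketch: ant-colony TSP solver with (alpha, beta, rho) tuned by a plain PSO.
import numpy as np

rng = np.random.default_rng(0)
cities = rng.random((20, 2))                                   # toy city coordinates
n = len(cities)
D = np.linalg.norm(cities[:, None] - cities[None, :], axis=-1) + np.eye(n)

def aco_tour_length(alpha, beta, rho, n_ants=10, n_iters=20):
    """Best tour length found by a basic ant colony with the given parameters."""
    tau, best = np.ones((n, n)), np.inf
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, n))
            while unvisited:
                i, cand = tour[-1], np.array(sorted(unvisited))
                w = tau[i, cand] ** alpha * (1.0 / D[i, cand]) ** beta
                nxt = int(rng.choice(cand, p=w / w.sum()))
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(D[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            best = min(best, length)
        tau *= 1.0 - rho                                       # evaporation
        for length, tour in tours:                             # pheromone deposit
            for k in range(n):
                tau[tour[k], tour[(k + 1) % n]] += 1.0 / length
    return best

# Outer PSO over (alpha, beta, rho); fitness = ACO best tour length (minimized).
lo, hi = np.array([0.5, 1.0, 0.1]), np.array([2.0, 5.0, 0.9])
pos = rng.uniform(lo, hi, size=(6, 3))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([aco_tour_length(*p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()
for _ in range(5):
    r1, r2 = rng.random((2, *pos.shape))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    fit = np.array([aco_tour_length(*p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()

print("tuned (alpha, beta, rho):", np.round(gbest, 2), "best tour length:", round(pbest_fit.min(), 3))
```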

  2. Computational Fluid Dynamics Study of Molten Steel Flow Patterns and Particle-Wall Interactions Inside a Slide-Gate Nozzle by a Hybrid Turbulent Model

    Science.gov (United States)

    Mohammadi-Ghaleni, Mahdi; Asle Zaeem, Mohsen; Smith, Jeffrey D.; O'Malley, Ronald

    2016-10-01

    Melt flow patterns and turbulence inside a slide-gate throttled submerged entry nozzle (SEN) were studied using the Detached-Eddy Simulation (DES) model, which is a combination of Reynolds-Averaged Navier-Stokes (RANS) and Large-Eddy Simulation (LES) models. The DES switching criterion between RANS and LES was investigated to closely reproduce the flow structures of low and high turbulence regions similar to RANS and LES simulations, respectively. The melt flow patterns inside the nozzle were determined by k-ε (a RANS model), LES, and DES turbulence models, and convergence studies were performed to ensure reliability of the results. Results showed that the DES model has significant advantages over the standard k-ε model in transient simulations and in regions containing flow separation from the nozzle surface. Moreover, due to applying a hybrid approach, DES uses a RANS model at wall boundaries, which relaxes the extremely fine near-wall mesh requirement of LES simulations, and therefore it is computationally more efficient. Investigation of particle distribution inside the nozzle and particle adhesion to the nozzle wall also reveals that the DES model simulations predict more particle-wall interactions compared to the LES model.

  3. Relative drifts and temperature anisotropies of protons and α particles in the expanding solar wind -- 2.5D hybrid simulations

    CERN Document Server

    Maneva, Y G; Viñas, A

    2014-01-01

    We perform 2.5D hybrid simulations to investigate the origin and evolution of relative drift speeds between protons and α particles in the collisionless turbulent low-β solar wind plasma. We study the generation of differential streaming by wave-particle interactions and absorption of turbulent wave spectra. Next we focus on the role of the relative drifts for the turbulent heating and acceleration of ions in the collisionless fast solar wind streams. The energy source is given by an initial broad-band spectrum of parallel propagating Alfvén-cyclotron waves, which co-exists with the plasma and is self-consistently coupled to the perpendicular ion bulk velocities. We include the effect of a gradual solar wind expansion, which cools and decelerates the minor ions. This paper for the first time considers the combined effect of self-consistently initialized dispersive turbulent Alfvénic spectra with differentially streaming protons and α particles in the expanding solar wind outflows withi...

  4. Novel preparation of controlled porosity particle/fibre loaded scaffolds using a hybrid micro-fluidic and electrohydrodynamic technique.

    Science.gov (United States)

    Parhizkar, Maryam; Sofokleous, Panagiotis; Stride, Eleanor; Edirisinghe, Mohan

    2014-11-27

    The purpose of this research was to produce multi-dimensional scaffolds containing biocompatible particles and fibres. To achieve this, two techniques were combined and used: T-Junction microfluidics and electrohydrodynamic (EHD) processing. The former was used to form layers of monodispersed bovine serum albumin (BSA) bubbles, which upon drying formed porous scaffolds. By altering the T-Junction processing parameters, bubbles with different diameters were produced and hence the scaffold porosity could be controlled. EHD processing was used to spray or spin poly(lactic-co-glycolic acid) (PLGA), polymethylsilsesquioxane (PMSQ) and collagen particles/fibres onto the scaffolds during their production and after drying. As a result, multifunctional BSA scaffolds with controlled porosity containing PLGA, PMSQ and collagen particles/fibres were obtained. Product morphology was studied by optical and scanning electron microscopy. These products have potential applications in many advanced biomedical, pharmaceutical and cosmetic fields, e.g. bone regeneration, drug delivery, cosmetic cream lathers, facial scrubbing creams etc.

  5. ML-Space: Hybrid Spatial Gillespie and Particle Simulation of Multi-level Rule-based Models in Cell Biology.

    Science.gov (United States)

    Bittig, Arne; Uhrmacher, Adelinde

    2016-08-03

    Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations via the spatial Stochastic Simulation algorithm to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically and hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact descriptions of models and makes it easy to adapt their spatial resolution.

  6. Identification of protein-coding sequences using the hybridization of 18S rRNA and mRNA during translation.

    Science.gov (United States)

    Xing, Chuanhua; Bitzer, Donald L; Alexander, Winser E; Vouk, Mladen A; Stomp, Anne-Marie

    2009-02-01

    We introduce a new approach in this article to distinguish protein-coding sequences from non-coding sequences utilizing a period-3, free energy signal that arises from the interactions of the 3'-terminal nucleotides of the 18S rRNA with mRNA. We extracted the special features of the amplitude and the phase of the period-3 signal in protein-coding regions, which is not found in non-coding regions, and used them to distinguish protein-coding sequences from non-coding sequences. We tested it on all the experimental genes from Saccharomyces cerevisiae and Schizosaccharomyces pombe. The identification was consistent with the corresponding information from GenBank, and produced better performance compared to existing methods that use a period-3 signal. The primary tests on some fly, mouse and human genes suggest that our method is applicable to higher eukaryotic genes. The tests on pseudogenes indicated that most pseudogenes have no period-3 signal. Some exploration of the 3'-tail of 18S rRNA and pattern analysis of protein-coding sequences further supported our assumption that the 3'-tail of 18S rRNA plays a synchronization role throughout the translation elongation process. This, in turn, can be utilized for the identification of protein-coding sequences.
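
    The period-3 detection described above can be illustrated by measuring the magnitude of the discrete Fourier transform at frequency 1/3 over windows of a position-wise numeric signal; in the sketch below the signal is synthetic (a made-up per-position free-energy value), not the actual 18S rRNA/mRNA hybridization energies used in the paper.

```python
# Hedged illustration: compare the DFT magnitude at frequency 1/3 inside and
# outside a putative coding region, using a synthetic per-position signal.
import numpy as np

rng = np.random.default_rng(0)
coding = np.tile([-1.2, -0.4, -0.8], 200) + 0.3 * rng.normal(size=600)   # period-3 pattern
noncoding = -0.8 + 0.3 * rng.normal(size=600)                            # no periodicity

def period3_strength(signal, window=120):
    """Mean |DFT| at k = window/3 over non-overlapping windows."""
    k = window // 3
    vals = []
    for start in range(0, len(signal) - window + 1, window):
        w = signal[start:start + window] - signal[start:start + window].mean()
        vals.append(abs(np.fft.rfft(w)[k]))
    return float(np.mean(vals))

print("coding region period-3 strength    :", round(period3_strength(coding), 2))
print("non-coding region period-3 strength:", round(period3_strength(noncoding), 2))
```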

  7. Forecasting hysteresis behaviours of magnetorheological elastomer base isolator utilizing a hybrid model based on support vector regression and improved particle swarm optimization

    Science.gov (United States)

    Yu, Yang; Li, Yancheng; Li, Jianchun

    2015-03-01

    Due to its inherent hysteretic characteristics, the main challenge for the application of a magnetorheological elastomer (MRE) based isolator is the development of an accurate model that can fully describe its unique behaviour. This paper proposes a nonparametric model for an MRE-based isolator based on support vector regression (SVR). The trained identification model is used to forecast the shear force of the MRE-based isolator online; thus, the dynamic response of the MRE-based isolator can be well captured. In order to improve the forecast capacity of the model, a type of improved particle swarm optimization (IPSO) is employed to optimize the parameters in SVR. Finally, the trained model is applied to MRE-based isolator modelling with testing data. The results indicate that the proposed hybrid model has a better generalization capacity and better recognition accuracy than other conventional models, and it is an effective and suitable approach for forecasting the behaviours of an MRE-based isolator.

  8. Registration procedure for spatial correlation of physical energy deposition of particle irradiation and cellular response utilizing cell-fluorescent ion track hybrid detectors

    Science.gov (United States)

    Niklas, M.; Zimmermann, F.; Schlegel, J.; Schwager, C.; Debus, J.; Jäkel, O.; Abdollahi, A.; Greilich, S.

    2016-09-01

    The hybrid technology cell-fluorescent ion track hybrid detector (Cell-Fit-HD) enables the investigation of radiation-related cellular events along single ion tracks on the subcellular scale in clinical ion beams. The Cell-Fit-HD comprises a fluorescent nuclear track detector (FNTD, the physical compartment), a device for individual particle detection and a substrate for viable cell-coating, i.e. the biological compartment. To date, both compartments have been imaged sequentially in situ by confocal laser scanning microscopy (CLSM). This, however, conflicts with a functional read-out of the Cell-Fit-HD that uses fast, low-phototoxicity live-cell imaging of the biological compartment over longer time scales. The read-out of the biological compartment was therefore uncoupled from that of the physical compartment. A read-out procedure was developed to image the cell layer by conventional widefield microscopy, whereas the FNTD was imaged by CLSM. Point mapping registration of the confocal and widefield imaging data was performed. Non-fluorescent crystal defects (spinels) visible in both read-outs were used as control point pairs. The accuracy achieved was on the sub-µm scale. The read-out procedure by widefield microscopy does not impair the unique ability of spatial correlation by the Cell-Fit-HD. The uncoupling will enlarge the application potential of the hybrid technology significantly. The registration allows for an ultimate correlation of microscopic physical beam parameters and cell kinetics on longer time scales. The method reported herein will be instrumental for the introduction of a novel generation of compact detectors facilitating biodosimetric research towards high-throughput analysis.
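
    One simple realization of the point-mapping registration step is a least-squares affine transform estimated from paired control points (such as the spinel defects visible in both read-outs); the sketch below uses synthetic coordinates and reports the residual registration error.

```python
# Hedged sketch: estimate a 2D affine transform from paired control points by
# linear least squares and report the residual error. Coordinates are synthetic.
import numpy as np

rng = np.random.default_rng(0)
pts_widefield = rng.uniform(0, 512, size=(8, 2))            # control points, image A (px)
true_A = np.array([[0.98, -0.05], [0.04, 1.01]])            # unknown rotation/scale/shear
true_t = np.array([12.3, -4.7])                             # unknown translation
pts_confocal = pts_widefield @ true_A.T + true_t + rng.normal(0, 0.2, size=(8, 2))

# Solve [x y 1] * M = [x' y'] in the least-squares sense.
X = np.hstack([pts_widefield, np.ones((8, 1))])
M, *_ = np.linalg.lstsq(X, pts_confocal, rcond=None)
mapped = X @ M
rmse = np.sqrt(((mapped - pts_confocal) ** 2).sum(axis=1).mean())
print("estimated affine matrix:\n", np.round(M.T, 3))
print("registration RMSE (px):", round(float(rmse), 3))
```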

  9. Hybrid smoothed dissipative particle dynamics and immersed boundary method for simulation of red blood cells in flows.

    Science.gov (United States)

    Ye, Ting; Phan-Thien, Nhan; Lim, Chwee Teck; Peng, Lina; Shi, Huixin

    2017-06-01

    In biofluid flow systems, often the flow problems of fluids of complex structures, such as the flow of red blood cells (RBCs) through complex capillary vessels, need to be considered. The smoothed dissipative particle dynamics (SDPD), a particle-based method, is one of the easy and flexible methods to model such complex structure fluids. It couples the best features of the smoothed particle hydrodynamics (SPH) and dissipative particle dynamics (DPD), with parameters having specific physical meaning (coming from SPH discretization of the Navier-Stokes equations), combined with thermal fluctuations in a mesoscale simulation, in a similar manner to the DPD. On the other hand, the immersed boundary method (IBM), a preferred method for handling fluid-structure interaction problems, has also been widely used to handle the fluid-RBC interaction in RBC simulations. In this paper, we aim to couple SDPD and IBM together to carry out the simulations of RBCs in complex flow problems. First, we develop the SDPD-IBM model in details, including the SDPD model for the evolving fluid flow, the RBC model for calculating RBC deformation force, the IBM for treating fluid-RBC interaction, and the solid boundary treatment model as well. We then conduct the verification and validation of the combined SDPD-IBM method. Finally, we demonstrate the capability of the SDPD-IBM method by simulating the flows of RBCs in rectangular, cylinder, curved, bifurcated, and constricted tubes, respectively.

  10. Pentynyl Ether of β-Cyclodextrin Polymer and Silica Micro-Particles: A New Hybrid Material for Adsorption of Phenanthrene from Water

    Directory of Open Access Journals (Sweden)

    Jae Min Choi

    2017-01-01

    Full Text Available A new hybrid material for the removal of polycyclic aromatic hydrocarbons (PAHs) from water was prepared by the polymerization of pentynyl beta-cyclodextrin (PyβCD) and silica micro-particles (SMP). Phenanthrene, being one of the important members of the PAH family and a potential risk for environmental pollution, was selected for this study. Results show that the phenanthrene removal efficiency of the SMP was improved significantly after hybridization with the PyβCD-polymer. Approximately 50% of the phenanthrene was removed in the first 60 min and more than 95% was removed in less than 7 h when 25 mL of the 2 ppm aqueous phenanthrene solution was incubated with 100 mg of the SMP-PyβCD-polymer material. Infrared spectroscopy and thermal gravimetric analysis show that the enhanced efficiency of the SMP-PyβCD-polymer compared to the unmodified SMP was due to the formation of the inclusion complexation of phenanthrene with the PyβCD. These results indicate that SMP-PyβCD polymers have the potential to be applied as molecular filters in water purification systems and also for waste water treatment.

  11. Hybrid Recurrent Laguerre-Orthogonal-Polynomial NN Control System Applied in V-Belt Continuously Variable Transmission System Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Chih-Hong Lin

    2015-01-01

    Full Text Available Because the V-belt continuously variable transmission (CVT system driven by permanent magnet synchronous motor (PMSM has much unknown nonlinear and time-varying characteristics, the better control performance design for the linear control design is a time consuming procedure. In order to overcome difficulties for design of the linear controllers, the hybrid recurrent Laguerre-orthogonal-polynomial neural network (NN control system which has online learning ability to respond to the system’s nonlinear and time-varying behaviors is proposed to control PMSM servo-driven V-belt CVT system under the occurrence of the lumped nonlinear load disturbances. The hybrid recurrent Laguerre-orthogonal-polynomial NN control system consists of an inspector control, a recurrent Laguerre-orthogonal-polynomial NN control with adaptive law, and a recouped control with estimated law. Moreover, the adaptive law of online parameters in the recurrent Laguerre-orthogonal-polynomial NN is derived using the Lyapunov stability theorem. Furthermore, the optimal learning rate of the parameters by means of modified particle swarm optimization (PSO is proposed to achieve fast convergence. Finally, to show the effectiveness of the proposed control scheme, comparative studies are demonstrated by experimental results.
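    As an illustration of the last step described above (tuning learning rates with PSO), the following minimal particle swarm optimizer minimizes a stand-in cost function over a single learning-rate parameter. The `tracking_error` proxy and all PSO constants are assumptions for illustration, not the paper's controller, plant model, or modified PSO variant.

```python
import numpy as np

def pso_minimize(cost, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a scalar cost over a box-bounded vector."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # positions
    v = np.zeros_like(x)                                   # velocities
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()                   # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# hypothetical proxy cost: tracking error as a function of the NN learning rate
tracking_error = lambda p: (np.log10(p[0]) + 2.0) ** 2 + 0.1   # assumed stand-in
best_rate, err = pso_minimize(tracking_error, bounds=[(1e-4, 1e-1)])
```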

  12. Symmetrical Key Automatic Changing Cryptography Based on Simple Hybrid Selection Coding

    Institute of Scientific and Technical Information of China (English)

    罗俊; 张国平

    2012-01-01

    Aiming at encryption systems with lower security requirements, this paper puts forward a symmetrical key automatic-changing cryptography scheme based on simple hybrid selection coding, combining unilateral canonical Huffman coding with fixed-length coding. The statistical results of the plaintext are used as its own encryption key and as the coding basis, which makes the scheme easy to implement, computationally light, and low in storage cost. It is proved that when the keys are completely unknown, cracking the encryption system is very difficult.
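    The central coding step can be sketched as follows: a canonical Huffman code is derived from the plaintext's own symbol statistics, which in such a scheme also act as key material. This is a hedged, generic illustration (the symbol set, tie-breaking rules and toy plaintext are assumptions), not the paper's exact cipher or its fixed-length fallback branch.

```python
import heapq
from collections import Counter

def canonical_huffman(text):
    """Derive canonical Huffman codewords from the plaintext's symbol statistics.
    In the sketched scheme the frequency table doubles as shared key material."""
    freq = Counter(text)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # build Huffman code lengths via a min-heap of (weight, tie-breaker, {symbol: length})
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    lengths = heap[0][2]
    # canonical assignment: sort by (length, symbol), then count codewords upward
    code, prev_len, codes = 0, None, {}
    for sym, ln in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        if prev_len is not None:
            code = (code + 1) << (ln - prev_len)
        codes[sym] = format(code, "0{}b".format(ln))
        prev_len = ln
    return codes

plaintext = "hybrid selection coding example"     # toy message (assumption)
codes = canonical_huffman(plaintext)
cipher_bits = "".join(codes[c] for c in plaintext)
```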

  13. Numerical studies of petawatt laser-driven proton generation from two-species targets using a two-dimensional particle-in-cell code

    Science.gov (United States)

    Domański, J.; Badziak, J.; Jabloński, S.

    2016-04-01

    Laser-driven generation of high-energy ion beams has recently attracted considerable interest due to a variety of potential applications including proton radiography, ICF fast ignition, nuclear physics and hadron therapy. The ion beam parameters depend on both the laser pulse and the target parameters, and in order to produce an ion beam with the properties required for a particular application the laser and target parameters must be carefully selected, and the mechanism of ion beam generation should be well understood and controlled. Convenient and commonly used tools for studies of the ion acceleration process are particle-in-cell (PIC) codes. Using two-dimensional PIC simulations, the properties of a proton beam generated from a thin erbium hydride (ErH3) target irradiated by a 25 fs laser pulse of linear or circular polarization and of intensity ranging from 10^20 to 10^21 W/cm^2 are investigated and compared with the features of a proton beam produced from a hydrocarbon (CH) target. It has been found that using erbium hydride targets instead of hydrocarbon ones creates an opportunity to generate more compact proton beams of higher mean energy and intensity and of better collimation. This is especially true for linear polarization of the laser beam, for which the mean proton energy, the amount of high-energy protons and the intensity of the proton beam generated from the hydride target are an order of magnitude higher than for the hydrocarbon target. For circular polarization, the proton beam parameters are lower than those for the linear one, and the effect of target composition on the acceleration process is weaker.

  14. The solvothermal synthesis of magnetic iron oxide nanocrystals and the preparation of hybrid poly(L-lactide)-polyethyleneimine magnetic particles.

    Science.gov (United States)

    Stojanović, Zoran; Otoničar, Mojca; Lee, Jongwook; Stevanović, Magdalena M; Hwang, Mintai P; Lee, Kwan Hyi; Choi, Jonghoon; Uskoković, Dragan

    2013-09-01

    We report a simple and green procedure for the preparation of magnetic iron oxide nanocrystals via solvothermal synthesis. The nanocrystal synthesis was carried out under mild conditions in the water-ethanol-oleic acid solvent system with the use of the oleate anion as a surface modifier of nanocrystals and glucose as a reducing agent. Specific conditions for homogenous precipitation achieved in such a reaction system lead to the formation of uniform high-quality nanocrystals down to 5 nm in diameter. The obtained hydrophobic nanocrystals can easily be converted to hydrophilic magnetic nanoparticles by being immobilized in a poly(L-lactide)-polyethyleneimine polymeric matrix. These hybrid nano-constructs may find various biomedical applications, such as magnetic separation, gene transfection and/or magnetic resonance imaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Simulation of the interaction between Alfven waves and fast particles

    Energy Technology Data Exchange (ETDEWEB)

    Feher, Tamas Bela

    2014-02-18

    There is a wide variety of Alfven waves in tokamak and stellarator plasmas. While most of them are damped, some of the global eigenmodes can be driven unstable when they interact with energetic particles. By coupling the MHD code CKA with the gyrokinetic code EUTERPE, a hybrid kinetic-MHD model is created to describe this wave-particle interaction in stellarator geometry. In this thesis, the CKA-EUTERPE code package is presented. This numerical tool can be used for linear perturbative stability analysis of Alfven waves in the presence of energetic particles. The equations for the hybrid model are based on the gyrokinetic equations. The fast particles are described with linearized gyrokinetic equations. The reduced MHD equations are derived by taking velocity moments of the gyrokinetic equations. An equation for describing the Alfven waves is derived by combining the reduced MHD equations. The Alfven wave equation can retain kinetic corrections. Considering the energy transfer between the particles and the waves, the stability of the waves can be calculated. Numerically, the Alfven waves are calculated using the CKA code. The equations are solved as an eigenvalue problem to determine the frequency spectrum and the mode structure of the waves. The results of the MHD model are in good agreement with other sophisticated MHD codes. CKA results are shown for a JET and a W7-AS example. The linear version of the EUTERPE code is used to study the motion of energetic particles in the wavefield with fixed spatial structure, and harmonic oscillations in time. In EUTERPE, the gyrokinetic equations are discretized with a PIC scheme using the delta-f method, and both full orbit width and finite Larmor radius effects are included. The code is modified to be able to use the wavefield calculated externally by CKA. Different slowing-down distribution functions are also implemented. The work done by the electric field on the particles is measured to calculate the energy transfer

  16. Self-assembly of carbon nanotubes in polymer melts: simulation of structural and electrical behaviour by hybrid particle-field molecular dynamics.

    Science.gov (United States)

    Zhao, Ying; Byshkin, Maksym; Cong, Yue; Kawakatsu, Toshihiro; Guadagno, Liberata; De Nicola, Antonio; Yu, Naisen; Milano, Giuseppe; Dong, Bin

    2016-08-25

    Self-assembly processes of carbon nanotubes (CNTs) dispersed in different polymer phases have been investigated using a hybrid particle-field molecular dynamics technique (MD-SCF). This efficient computational method allowed simulations of large-scale systems (up to ∼1 500 000 particles) of flexible rod-like particles in different matrices made of bead spring chains on the millisecond time scale. The equilibrium morphologies obtained for longer CNTs are in good agreement with those proposed by several experimental studies that hypothesized a two level "multiscale" organization of CNT assemblies. In addition, the electrical properties of the assembled structures have been calculated using a resistor network approach. The calculated behaviour of the conductivities for longer CNTs is consistent with the power laws obtained by numerous experiments. In particular, according to the interpretation established by the systematic studies of Bauhofer and Kovacs, systems close to "statistical percolation" show exponents t ∼ 2 for the power law dependence of the electrical conductivity on the CNT fraction, and systems in which the CNTs reach equilibrium aggregation show exponents t close to 1.7 ("kinetic percolation"). The confinement effects on the assembled structures and their corresponding conductivity behaviour in a non-homogeneous matrix, such as the phase separating block copolymer melt, have also been simulated using different starting configurations. The simulations reported herein contribute to a microscopic interpretation of the literature results, and the proposed modelling procedure may contribute meaningfully to the rational design of strategies aimed at optimizing nanomaterials for improved electrical properties.

  17. Fuzzy Adaptive Hybrid Annealed Particle Filter Algorithm

    Institute of Scientific and Technical Information of China (English)

    蒋东明

    2013-01-01

    A new particle filter algorithm based on the hybrid annealed particle filter (HAPF) is proposed for on-line state estimation of nonlinear, non-Gaussian systems and to address the inherent degeneracy problem of the particle filter. In the filtering algorithm, an adjustment factor is introduced according to the relation between the statistical properties of the state noise and the measurement noise of the system, and an annealing coefficient is then produced by a fuzzy inference system. State parameter separation and the annealing coefficient are used to construct the importance probability density function. In this way a better annealing coefficient is obtained while the advantages of the HAPF are retained. Simulation experiments show that the proposed filter outperforms the HAPF.
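    A minimal sketch of the annealing idea is given below: a bootstrap particle filter whose log-likelihood is raised to an annealing exponent beta before weighting and resampling. The 1-D random-walk model, the noise levels and the fixed beta stand in for the paper's fuzzy-inference rule and are assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def annealed_particle_filter(y, n_particles=500, q_std=1.0, r_std=1.0, beta=0.5):
    """Bootstrap particle filter with an annealing exponent beta on the likelihood."""
    x = rng.normal(0.0, 1.0, n_particles)            # initial particle cloud
    estimates = []
    for yk in y:
        x = x + rng.normal(0.0, q_std, n_particles)  # propagate through state noise
        logw = -0.5 * beta * ((yk - x) / r_std) ** 2 # annealed Gaussian log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.sum(w * x))              # weighted-mean state estimate
        idx = rng.choice(n_particles, n_particles, p=w)   # multinomial resampling
        x = x[idx]
    return np.array(estimates)

# synthetic data: noisy observations of a slow drift (toy example)
truth = np.cumsum(rng.normal(0.0, 0.3, 100))
obs = truth + rng.normal(0.0, 1.0, 100)
est = annealed_particle_filter(obs)
```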

  18. Mesoporous silica particle-PLA-PANI hybrid scaffolds for cell-directed intracellular drug delivery and tissue vascularization

    Science.gov (United States)

    Shokry, Hussein; Vanamo, Ulriika; Wiltschka, Oliver; Niinimäki, Jenni; Lerche, Martina; Levon, Kalle; Linden, Mika; Sahlgren, Cecilia

    2015-08-01

    Instructive materials are expected to revolutionize stem cell based tissue engineering. As many stem cell cues have adverse effects on normal tissue homeostasis, there is a need to develop bioactive scaffolds which offer locally retained and cell-targeted drug delivery for intracellular release in targeted cell populations. Further, the scaffolds need to support vascularization to promote tissue growth and function. We have developed an electrospun PLA-PANI fiber scaffold, and incorporated mesoporous silica nanoparticles within the scaffold matrix to obtain cell-targeted and localized drug delivery. The isotropy of the scaffold can be tuned to find the optimal morphology for a given application and the scaffold is electroactive to support differentiation of contractile tissues. We demonstrate that there is no premature drug release from particles under physiological conditions over a period of one week and that the drug is released upon internalization of particles by cells within the scaffold. The scaffold is biocompatible, supports muscle stem cell differentiation and cell-seeded scaffolds are vascularized in vivo upon transplantation on the chorioallantoic membrane of chicken embryos. The scaffold is a step towards instructive biomaterials for local control of stem cell differentiation, and tissue formation supported by vascularization and without adverse effects on the homeostasis of adjacent tissues due to diffusion of biological cues.

  19. Electrical Resistivity, Tribological Behaviour of Multiwalled Carbon Nanotubes and Nanoboron Carbide Particles Reinforced Copper Hybrid Composites for Pantograph Application

    Directory of Open Access Journals (Sweden)

    N. Selvakumar

    2016-01-01

    Full Text Available This work focuses on the influence and contribution of multiwalled carbon nanotubes (MWCNTs) and boron carbide (B4C) on the mechanical and tribological properties of copper matrix composites. Copper composites reinforced with a fixed weight fraction of MWCNTs and different weight fractions of nano-B4C were prepared using the established cold-press sintering method of powder metallurgy. The wear losses of the sintered Cu-MWCNT-B4C composites were investigated by conducting sliding tests in a pin-on-disc apparatus. The addition of reinforcements enhanced the hardness and wear properties of the composites due to the uniform dispersion of the secondary reinforcement in the copper matrix and the self-lubricating effect of the MWCNTs. The nanoparticle distribution in the matrix, the worn surface morphology, and the elemental composition of the composites were characterized using high-resolution scanning electron microscopy and X-ray diffraction analysis. The electrical resistivity of the fabricated copper hybrid composite preforms was evaluated using a four-point probe tester. Our results highlight the experimental reinforcing limits of B4C on the wear, electrical and mechanical behaviour of copper composites.

  20. Single-inclusive particle production in proton-nucleus collisions at next-to-leading order in the hybrid formalism

    CERN Document Server

    Altinoluk, Tolga; Beuf, Guillaume; Kovner, Alex; Lublinsky, Michael

    2014-01-01

    We reconsider the perturbative next-to-leading calculation of the single inclusive hadron production in the framework of the hybrid formalism, applied to hadron production in proton-nucleus collisions. Our analysis, performed in the wave function approach, differs from the previous works in three points. First, we are careful to specify unambiguously the rapidity interval that has to be included in the evolution of the leading-order eikonal scattering amplitude. This is important, since varying this interval by a number of order unity changes the next-to-leading order correction that the calculation is meant to determine. Second, we introduce the explicit requirement that fast fluctuations in the projectile wave function which only exist for a short time are not resolved by the target. This Ioffe time cutoff also strongly affects the next-to-leading order terms. Third, our result does not employ the approximation of a large number of colors. Our final expressions are unambiguous and do not coincide at next-to...

  1. Parallelization and Improvement of the MCNP Monte Carlo Particle Transport Code under MPI

    Institute of Scientific and Technical Information of China (English)

    邓力; 刘杰; 张文勇

    2003-01-01

    The particle transport Monte Carlo code MCNP was parallelized under MPI (Message Passing Interface) in 1999. However, because a leap-ahead random number generator was adopted, some differences existed between the parallel and the serial results. The same results have now been achieved by using segmented random number sequences. For the applied problem, the speedup is nearly linear, reaching 53 on 64 processors, which corresponds to a parallel efficiency of 83%.

  2. A hybrid approach for quantizing complicated motion of a charged particle in time-varying magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Menouar, Salah [Laboratory of Optoelectronics and Compounds (LOC), Department of Physics, Faculty of Science, University of Ferhat Abbas Setif 1, Setif 19000 (Algeria); Choi, Jeong Ryeol, E-mail: choiardor@hanmail.net [Department of Radiologic Technology, Daegu Health College, Yeongsong 15, Buk-gu, Daegu 702-722 (Korea, Republic of)

    2015-02-15

    Quantum characteristics of a charged particle subjected to a singular oscillator potential under an external magnetic field are investigated via an SU(1,1) Lie algebraic approach together with the invariant operator and unitary transformation methods. The system treated here is somewhat complicated, since we consider not only the time variation of the effective mass of the system but also an arbitrary time dependence of the external magnetic field. The system is therefore a time-dependent Hamiltonian system, which requires a more delicate treatment. The complete wave functions are obtained without relying on perturbation and/or approximation methods, and the global phases of the system are identified. To promote the understanding of our development, we apply it to a particular case, assuming that the effective mass varies slowly with time under a time-dependent magnetic field.

  3. Flow Analysis of Code Customizations

    DEFF Research Database (Denmark)

    Hessellund, Anders; Sestoft, Peter

    2008-01-01

    Inconsistency between metadata and code customizations is a major concern in modern, configurable enterprise systems. The increasing reliance on metadata, in the form of XML files, and code customizations, in the form of Java files, has led to a hybrid development platform. The expected consisten...

  4. Genetic relatedness between intracisternal A particles and other major oncovirus genera.

    Science.gov (United States)

    Chiu, I M; Huang, R C; Aaronson, S A

    1985-07-01

    Intracisternal A particles represent a major oncovirus genus. By reciprocal hybridization between molecularly cloned A particles and representatives of other oncovirus genera, we established pol gene homology with type B, type D and avian type C viruses. The most extensive homology was with mammalian type D viruses. The transcriptional orientation of the IAP genome was determined, as well as evidence indicating that its pol gene, which is apparently defective, contains coding regions for both reverse transcriptase and endonuclease proteins.

  5. Modelling of the quadriceps muscle of the knee joint using a hybrid Particle Swarm Optimization-Neural Network (PSO-NN)

    Science.gov (United States)

    Kamaruddin, Saadi Bin Ahmad; Marponga Tolos, Siti; Hee, Pah Chin; Ghani, Nor Azura Md; Ramli, Norazan Mohamed; Nasir, Noorhamizah Binti Mohamed; Ksm Kader, Babul Salam Bin; Saiful Huq, Mohammad

    2017-03-01

    Neural networks have long been known for their ability to model complex nonlinear systems without an analytical model and to learn intricate nonlinear relationships. The best-known algorithm for training such a network is backpropagation (BP), which relies on minimization of the mean square error (MSE). However, this algorithm is not fully efficient in the presence of outliers, which commonly occur in dynamic data. This paper presents the modelling of the quadriceps muscle using artificial intelligence techniques, namely combined backpropagation neural network nonlinear autoregressive (BPNN-NAR) and backpropagation neural network nonlinear autoregressive moving average (BPNN-NARMA) models, based on functional electrical stimulation (FES). A particle swarm optimization (PSO) approach is adopted to enhance the performance of the backpropagation algorithm. A series of FES experiments was conducted, and the data obtained were used to develop the quadriceps muscle model: 934 training, 200 testing and 200 validation data sets were used. Both BPNN-NAR and BPNN-NARMA performed well in modelling this type of data. In conclusion, the neural network time series models are reasonably efficient for nonlinear modelling of the active properties of the quadriceps muscle, with a single output, namely muscle force.

  6. Hybrid particle swarm optimization algorithm based on a quantum genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    赵莉; 董玉民

    2014-01-01

    To improve the performance of intelligent optimization algorithms and make them better applicable to various fields, a two-stage optimization algorithm is proposed. On the basis of an improved quantum genetic algorithm, further combined with particle swarm optimization, a hybrid quantum genetic-particle swarm algorithm is constructed. The problem is first solved preliminarily by the quantum genetic algorithm, and the first-stage optimization results are then taken as initial values for the second-stage particle swarm search, which yields the final optimized solution. The new hybrid algorithm was compared with traditional optimization algorithms; the experimental results show that its performance is improved to a certain degree.
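    The two-stage structure can be sketched as follows, with a plain real-coded genetic algorithm standing in for the quantum first stage and a standard PSO seeded from its ranked population as the second stage; the sphere objective and all hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x ** 2))      # toy objective (assumption)

def stage1_ga(cost, dim, pop=30, gens=40, lo=-5.0, hi=5.0):
    """Coarse first stage: a plain real-coded GA standing in for the quantum GA."""
    P = rng.uniform(lo, hi, (pop, dim))
    for _ in range(gens):
        f = np.array([cost(p) for p in P])
        elite = P[np.argsort(f)][: pop // 2]                       # truncation selection
        parents = elite[rng.integers(0, len(elite), (pop, 2))]
        alpha = rng.random((pop, dim))
        P = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]    # blend crossover
        P = np.clip(P + rng.normal(0, 0.1, P.shape), lo, hi)       # Gaussian mutation
    f = np.array([cost(p) for p in P])
    return P[np.argsort(f)]                                        # ranked population

def stage2_pso(cost, seeds, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Second stage: PSO initialised from the stage-1 results."""
    x, v = seeds.copy(), np.zeros_like(seeds)
    pbest, pf = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pf)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([cost(p) for p in x])
        improved = f < pf
        pbest[improved], pf[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pf)].copy()
    return g, pf.min()

best, value = stage2_pso(sphere, stage1_ga(sphere, dim=5))
```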

  7. Estimation of in-situ bioremediation system cost using a hybrid Extreme Learning Machine (ELM)-particle swarm optimization approach

    Science.gov (United States)

    Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan

    2016-12-01

    In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, can help engineers design a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, nonlinear process, and modelling such a complex system requires significant computational effort. Soft computing techniques have a flexible mathematical structure which can generalize complex nonlinear processes. In in-situ bioremediation management, a physically based model is used for the simulation and the simulated data are utilized by the optimization model to minimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is an extremely tedious and time-consuming process, and thus there is a need for a simulator that reduces the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX (benzene, toluene, ethylbenzene, and xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III for the simulation. The selection of the ELM is made through a comparative analysis with an Artificial Neural Network (ANN) and a Support Vector Machine (SVM), as these were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that the ELM is a faster and more accurate proxy simulator than the ANN and the SVM, and the total cost obtained by the ELM-PSO approach is held to a minimum.
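    The surrogate idea is easy to sketch: an Extreme Learning Machine fixes random hidden-layer weights and solves the output weights in closed form, so an optimizer can query it far more cheaply than the physical simulator. The synthetic cost surface below merely stands in for BIOPLUME III output; it is an assumption, not the study's data or model.

```python
import numpy as np

rng = np.random.default_rng(42)

class ELM:
    """Minimal Extreme Learning Machine regressor: one hidden layer with random
    input weights and biases, output weights fitted by least squares."""
    def __init__(self, n_hidden=50):
        self.n_hidden = n_hidden

    def fit(self, X, y):
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))   # random input weights
        self.b = rng.normal(size=self.n_hidden)                  # random biases
        H = np.tanh(X @ self.W + self.b)                         # hidden activations
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)        # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# toy surrogate: learn a synthetic response surface, then query it cheaply
X = rng.uniform(-1, 1, (200, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2]   # stand-in for simulator output
surrogate = ELM(n_hidden=80).fit(X, y)
print(surrogate.predict(X[:5]))
```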

  8. Verification and Validation of Monte Carlo n-Particle Code 6 (MCNP6) with Neutron Protection Factor Measurements of an Iron Box

    Science.gov (United States)

    2014-03-27

    ...widely utilized radiation transport code is MCNP. First created at Los Alamos National Laboratory (LANL) in 1957, the code simulated neutral... explanation of the current capabilities of MCNP will occur within the next chapter of this document; however, it is important to note that MCNP

  9. Computational Performance of Intel MIC, Sandy Bridge, and GPU Architectures: Implementation of a 1D c++/OpenMP Electrostatic Particle-In-Cell Code

    Science.gov (United States)

    2014-05-01

    Parallelization and vectorization on the GPU are achieved by modifying the code syntax for compatibility with CUDA. We assess the speedup due to various... ExaScience Lab in Leuven, Belgium) and compare it with the performance of a GPU unit running CUDA. We implement a test case of a 1D two-stream instability... programming language syntax only in the GPU/CUDA version of the code, and these changes do not have any significant impact on the final performance.

  10. Effect of Nano-TiC Dispersed Particles and Electro-Codeposition Parameters on Morphology and Structure of Hybrid Ni/TiC Nanocomposite Layers

    Directory of Open Access Journals (Sweden)

    Lidia Benea

    2016-04-01

    Full Text Available This research work describes the effect of titanium carbide (TiC) nanoparticles dispersed in a nickel plating bath on Ni/TiC nanostructured composite layers obtained by electro-codeposition. The surface morphology of the Ni/TiC nanostructured composite layers was characterized by scanning electron microscopy (SEM). The composition of the coatings and the incorporation percentage of TiC nanoparticles into the Ni matrix were estimated using energy-dispersive X-ray analysis (EDX). X-ray diffraction (XRD) was applied in order to investigate the phase structure as well as the corresponding relative texture coefficients of the composite layers. The results show that the concentration of nano-TiC particles added to the nickel electrolyte affects the inclusion percentage of TiC in the Ni/TiC nanostructured layers, as well as the corresponding morphology, relative texture coefficients and thickness, all showing an increasing tendency with increasing nano-TiC concentration. By increasing the amount of TiC nanoparticles in the electrolyte, their incorporation into the nickel matrix also increases. The hybrid Ni/nano-TiC composite layers obtained revealed higher roughness and higher hardness; therefore, these layers are promising superhydrophobic surfaces for special applications and could be more wear resistant than pure Ni layers.

  11. Nanoscale Organic Hybrid Electrolytes

    KAUST Repository

    Nugent, Jennifer L.

    2010-08-20

    Nanoscale organic hybrid electrolytes are composed of organic-inorganic hybrid nanostructures, each with a metal oxide or metallic nanoparticle core densely grafted with an ion-conducting polyethylene glycol corona - doped with lithium salt. These materials form novel solvent-free hybrid electrolytes that are particle-rich, soft glasses at room temperature; yet manifest high ionic conductivity and good electrochemical stability above 5V. © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. A Novel DS-Based Hybrid DCT for Low Bit-Rate Video Coding

    Institute of Scientific and Technical Information of China (English)

    陈志波; 何芸; 郑君里

    2001-01-01

    In this paper, a DS-based hybrid fast DCT strategy suitable for low bit-rate video coding is proposed, including a novel three-coefficient DCT; an algorithm calculating the chroma SAD (Sum of Absolute Difference) from diagonally sampled pixels is also introduced to effectively and accurately detect all-zero or three-coefficient chroma blocks. This hybrid coding strategy efficiently decreases the computation of the DCT, IDCT, quantization and inverse quantization, achieving a significant improvement in encoder processing speed with negligible video-quality degradation.
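    The chroma pre-test can be illustrated as below: the SAD is evaluated only on a diagonally sampled half of an 8x8 block and compared with a threshold to decide whether the block can be skipped as all-zero. The sampling mask and the threshold value are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np

def diagonal_sad(cur, ref):
    """SAD between two 8x8 chroma blocks evaluated only on diagonally sampled
    pixels (a checkerboard-like half of the block), used as a cheap pre-test."""
    assert cur.shape == ref.shape == (8, 8)
    mask = (np.add.outer(np.arange(8), np.arange(8)) % 2) == 0   # diagonal sampling pattern
    return int(np.abs(cur.astype(int) - ref.astype(int))[mask].sum())

cur = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
ref = cur.copy()
if diagonal_sad(cur, ref) < 32:   # assumed threshold: treat the block as all-zero
    pass                           # skip DCT, quantisation and coefficient coding
```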

  13. Influence of the Sr and Mg Alloying Additions on the Bonding Between Matrix and Reinforcing Particles in the AlSi7Mg/SiC-Cg Hybrid Composite

    Directory of Open Access Journals (Sweden)

    Dolata A. J.

    2016-06-01

    Full Text Available The aim of the work was to perform an adequate selection of the phase composition of a composite designated for permanent-mould casting of air compressor pistons. Hybrid composites based on an AlSi7Mg matrix alloy reinforced with a mixture of silicon carbide (SiC) and glassy carbon (Cg) particles were fabricated by the stir casting method. It has been shown that the proper selection of the chemical composition of the matrix alloy and its modification by the magnesium and strontium additions used makes it possible to obtain both advantageous casting properties of the composite suspensions and good bonding between the reinforcing particles and the matrix.

  14. FOXTAIL: Modeling the nonlinear interaction between Alfvén eigenmodes and energetic particles in tokamaks

    CERN Document Server

    Tholerus, Emmi; Hellsten, Torbjörn

    2016-01-01

    FOXTAIL is a new hybrid magnetohydrodynamic-kinetic code used to describe interactions between energetic particles and Alfvén eigenmodes in tokamaks with realistic geometries. The code simulates the nonlinear dynamics of the amplitudes of individual eigenmodes and of a set of discrete markers in five-dimensional phase space representing the energetic particle distribution. Action-angle coordinates of the equilibrium system are used for efficient tracing of energetic particles, and the particle acceleration by the wave fields of the eigenmodes is Fourier decomposed in the same angles. The eigenmodes are described using temporally constant eigenfunctions with dynamic complex amplitudes. Possible applications of the code are presented, e.g., making a quantitative validity evaluation of the one-dimensional bump-on-tail approximation of the system. Expected effects of the fulfillment of the Chirikov criterion in two-mode scenarios have also been verified.

  15. Gray-coded hybrid genetic algorithm for the 0-1 knapsack problem

    Institute of Scientific and Technical Information of China (English)

    王则林; 吴志健

    2012-01-01

    This paper gives a mathematical model of the 0-1 knapsack problem and replaces the traditional binary coding with Gray coding to establish a Gray-coded hybrid genetic algorithm. A greedy algorithm is used to handle the constraint conditions, and a value-density measure is proposed to evaluate each individual, which improves search efficiency; an elitism mechanism is used to accelerate convergence. Numerical experiments prove the effectiveness of the algorithm.
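    Two building blocks of such a GA, Gray decoding and greedy repair guided by value density, are sketched below on a four-item toy instance; the instance data and the exact repair rule are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def gray_to_binary(g):
    """Convert a Gray-coded bit vector (MSB first) to plain binary bits."""
    b = [g[0]]
    for bit in g[1:]:
        b.append(b[-1] ^ bit)
    return b

def greedy_repair(x, values, weights, capacity):
    """Drop items in order of increasing value density until feasible,
    then greedily re-add any item that still fits (best density first)."""
    x = list(x)
    density = np.asarray(values) / np.asarray(weights)
    order = np.argsort(density)                       # worst value density first
    for i in order:
        if np.dot(x, weights) <= capacity:
            break
        x[i] = 0
    for i in order[::-1]:                             # best value density first
        if x[i] == 0 and np.dot(x, weights) + weights[i] <= capacity:
            x[i] = 1
    return x

# toy instance (assumed values): 4 items, capacity 10
values, weights, capacity = [10, 5, 15, 7], [2, 3, 5, 7], 10
chromosome = gray_to_binary([1, 0, 1, 1])             # Gray-coded individual from the GA
feasible = greedy_repair(chromosome, values, weights, capacity)
```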

  16. Particle methods for simulation of subsurface multiphase fluid flow and biogeological processes

    Energy Technology Data Exchange (ETDEWEB)

    Meakin, Paul; Tartakovsky, Alexandre M.; Scheibe, Timothy D.; Tartakovsky, Daniel M.; Redden, George; Long, Philip E.; Brooks, Scott C.; Xu, Zhijie

    2007-08-01

    A number of particle models that are suitable for simulating multiphase fluid flow and biogeological processes have been developed during the last few decades. Here we discuss three of them: a microscopic model - molecular dynamics; a mesoscopic model - dissipative particle dynamics; and a macroscopic model - smoothed particle hydrodynamics. Particle methods are robust and versatile, and it is relatively easy to add additional physical, chemical and biological processes into particle codes. However, the computational efficiency of particle methods is low relative to continuum methods. Multiscale particle methods and hybrid (particle–particle and particle–continuum) methods are needed to improve computational efficiency and make effective use of emerging computational capabilities. These new methods are under development.

  17. Particle methods for simulation of subsurface multiphase fluid flow and biogeological processes

    Energy Technology Data Exchange (ETDEWEB)

    Paul Meakin; Alexandre Tartakovsky; Tim Scheibe; Daniel Tartakovsky; Georgr Redden; Philip E. Long; Scott C. Brooks; Zhijie Xu

    2007-06-01

    A number of particle models that are suitable for simulating multiphase fluid flow and biogeological processes have been developed during the last few decades. Here we discuss three of them: a microscopic model - molecular dynamics; a mesoscopic model - dissipative particle dynamics; and a macroscopic model - smoothed particle hydrodynamics. Particle methods are robust and versatile, and it is relatively easy to add additional physical, chemical and biological processes into particle codes. However, the computational efficiency of particle methods is low relative to continuum methods. Multiscale particle methods and hybrid (particle–particle and particle–continuum) methods are needed to improve computational efficiency and make effective use of emerging computational capabilities. These new methods are under development.

  18. Particle-Gas Dynamics with Athena: Method and Convergence

    CERN Document Server

    Bai, Xue-Ning

    2010-01-01

    The Athena MHD code has been extended to integrate the motion of particles coupled with the gas via aerodynamic drag, in order to study the dynamics of gas and solids in protoplanetary disks and the formation of planetesimals. Our particle-gas hybrid scheme is based on a second-order predictor-corrector method. Careful treatment of the momentum feedback on the gas guarantees exact conservation. The hybrid scheme is stable and convergent in most regimes relevant to protoplanetary disks. We describe a semi-implicit integrator generalized from the leap-frog approach. In the absence of drag force, it preserves the geometric properties of a particle orbit. We also present a fully implicit integrator that is unconditionally stable for all regimes of particle-gas coupling. Using our hybrid code, we study the numerical convergence of the nonlinear saturated state of the streaming instability. We find that gas flow properties are well converged with modest grid resolution (128 cells per pressure length ηr for...
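    The fully implicit drag step described above can be written down in a few lines: a backward-Euler update relaxes the particle velocity toward the gas velocity and remains stable for arbitrarily small stopping times. The following is a schematic Python sketch of that idea, not Athena's actual integrator, and the numbers in the usage example are made up.

```python
import numpy as np

def implicit_drag_update(v_par, v_gas, dt, t_stop):
    """Backward-Euler (fully implicit) drag update for a particle coupled to gas:
        dv/dt = -(v - u) / t_stop
    solved as v^{n+1} = (v^n + (dt/t_stop) u) / (1 + dt/t_stop).
    Stable even for stopping times much shorter than the time step."""
    a = dt / t_stop
    return (v_par + a * v_gas) / (1.0 + a)

# a strongly coupled particle relaxes onto the gas velocity in a few steps
v, u = np.array([10.0, 0.0]), np.array([0.0, 1.0])
for _ in range(5):
    v = implicit_drag_update(v, u, dt=0.1, t_stop=1e-3)
print(v)   # approaches u with no stability constraint on dt
```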

  19. Beam-dynamics codes used at DARHT

    Energy Technology Data Exchange (ETDEWEB)

    Ekdahl, Jr., Carl August [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-01

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the Tricomp Trak orbit tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; for coasting-beam transport to target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.

  20. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally we also consider some relationships between coding partitions and varieties of codes.

  1. A concurrent vector-based steering framework for particle transport

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics has traditionally been a technology-limited science that has pushed the boundaries of both the detectors collecting the information about the particles and the computing infrastructure processing this information. However, for a few years now the increase in computing power has come in the form of increased parallelism at all levels, and High Energy Physics now has to optimise its code to take advantage of the new architectures, including GPUs and hybrid systems. One of the primary targets for optimisation is the particle transport code used to simulate the detector response, as it is largely experiment independent and one of the most demanding applications in terms of CPU resources. The Geant Vector Prototype project aims to explore innovative designs in particle transport aimed at obtaining maximal performance on the new architectures. This paper describes the current status of the project and its future perspectives. In particular we describe how the present design tries to expose the par...

  2. GRACOS: Scalable and Load Balanced P3M Cosmological N-body Code

    CERN Document Server

    Shirokov, A; Shirokov, Alexander; Bertschinger, Edmund

    2005-01-01

    We present a parallel implementation of the particle-particle/particle-mesh (P3M) algorithm for distributed memory clusters. The GRACOS (GRAvitational COSmology) code uses a hybrid method for both computation and domain decomposition. Long-range forces are computed using a Fourier transform gravity solver on a regular mesh; the mesh is distributed across parallel processes using a static one-dimensional slab domain decomposition. Short-range forces are computed by direct summation of close pairs; particles are distributed using a dynamic domain decomposition based on a space-filling Hilbert curve. A nearly-optimal method was devised to dynamically repartition the particle distribution so as to maintain load balance even for extremely inhomogeneous mass distributions. Tests using $800^3$ simulations on a 40-processor beowulf cluster showed good load balance and scalability up to 80 processes. We discuss the limits on scalability imposed by communication and extreme clustering and suggest how they may be remove...

  3. Smoothed Particle Hydrodynamic Simulator

    Energy Technology Data Exchange (ETDEWEB)

    2016-10-05

    This code is a highly modular framework for developing smoothed particle hydrodynamic (SPH) simulations running on parallel platforms. The compartmentalization of the code allows for rapid development of new SPH applications and modifications of existing algorithms. The compartmentalization also allows changes in one part of the code used by many applications to instantly be made available to all applications.

  4. Terrain Data Hybrid Entropy Coding Compression Based on Lifting Wavelet and Real-time Rendering

    Institute of Scientific and Technical Information of China (English)

    郭浩然; 庞建民

    2012-01-01

    High-resolution terrain Digital Elevation Models (DEMs) and orthophotos impose a heavy load on data storage, scheduling and real-time rendering. A high-performance terrain data compression method is proposed based on a lifting wavelet transform and a parallel hybrid entropy codec, combined with GPU (Graphics Processing Unit) ray-casting to achieve large-scale 3D terrain visualization. First, the multi-resolution wavelet transform model of a terrain tile is constructed to map its refinement and simplification operations. Then the multi-resolution quadtrees of the DEM and the terrain texture are built separately based on the lifting wavelet transform, and the sparse wavelet coefficients generated by quantization are compressed by a hybrid entropy codec combining parallel run-length coding with parallel variable-length Huffman coding. The compressed data are organized into a progressive stream for real-time decoding and rendering. The lifting wavelet transform and hybrid entropy codec are implemented on the GPU with CUDA (Compute Unified Device Architecture). Experimental results show that the method achieves effective compression ratios, PSNR and codec throughput, and the high frame rate (FPS) of real-time rendering satisfies the demands of interactive visualization.
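    Two ingredients of this pipeline, a lifting-scheme wavelet step and run-length coding of the quantised coefficients, are sketched below in a serial 1-D form; the Haar lifting pair, the quantisation step and the sample scan line are assumptions standing in for the paper's 2-D, GPU-parallel implementation.

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of the Haar wavelet via lifting (predict + update) on a 1-D
    signal of even length; a simple stand-in for the filters used in practice."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even                 # predict step
    approx = even + 0.5 * detail        # update step
    return approx, detail

def run_length_encode(q):
    """Run-length code a quantised coefficient list as (value, run) pairs,
    ready to be fed to a variable-length (e.g. Huffman) coder."""
    out, run, prev = [], 1, q[0]
    for v in q[1:]:
        if v == prev:
            run += 1
        else:
            out.append((prev, run))
            prev, run = v, 1
    out.append((prev, run))
    return out

row = np.array([100, 102, 101, 99, 250, 251, 250, 250])    # one DEM/texture scan line (toy)
approx, detail = haar_lifting_forward(row)
quantised = np.round(detail / 4.0).astype(int)              # coarse quantisation, step = 4
symbols = run_length_encode(quantised.tolist())             # entropy-coder input
```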

  5. Hybrid Fast Search Algorithm for Multiview Video Coding

    Institute of Scientific and Technical Information of China (English)

    雷海军; 杨辉; 杨张; 袁梅冷

    2013-01-01

    EPZS is the predictive search algorithm adopted for motion estimation in Joint Multi-view Video Coding (JMVC), but its search speed is slow. To address the performance shortcomings of EPZS, we improve it in four respects, namely the prediction vector set, the search model, the threshold settings and the search strategy, and propose a hybrid fast search algorithm. Three multi-view test sequences captured by parallel cameras, BallRoom, Exit and Vassar, were tested on the JMVC 8.3 reference platform. The experimental results show that, while preserving the reconstructed video quality and bit rate, the encoding speed is increased by 55.66%-69.62% on average compared with the EPZS algorithm in JMVC; the improved algorithm is clearly effective and the coding efficiency is improved.

  6. Improved hybrid particle swarm algorithm based on simulated annealing

    Institute of Scientific and Technical Information of China (English)

    杨文光; 严哲; 隋丽丽

    2015-01-01

    In order to enhance the ability to solve the travelling salesman problem (TSP), the hybrid particle swarm optimization (PSO) algorithm with simulated annealing is improved by introducing an adaptive optimization strategy. The hybrid PSO algorithm with crossover and mutation easily falls into local optima, whereas the simulated annealing algorithm can escape local optima and search globally, so their combination covers both global and local search. The added adaptive strategy provides a condition for judging whether the particles are trapped in a local extremum and, when they are, performs adaptive optimization with a certain probability, enhancing the global search ability. Comparison with experimental results of the hybrid particle swarm algorithm shows the effectiveness of the proposed algorithm.
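    The role of the annealing component can be sketched as a Metropolis-accepted 2-opt pass applied to a tour when the swarm's best solution stagnates; the random city set, temperature schedule and move count below are illustrative assumptions, not the paper's adaptive rule.

```python
import numpy as np

rng = np.random.default_rng(7)

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def sa_escape(tour, dist, temp=1.0, cooling=0.95, steps=200):
    """Simulated-annealing pass used when the global best stagnates: random 2-opt
    moves are accepted with the Metropolis criterion to leave a local optimum."""
    best, best_len = list(tour), tour_length(tour, dist)
    cur, cur_len = list(tour), best_len
    for _ in range(steps):
        i, j = sorted(rng.choice(len(cur), 2, replace=False))
        cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]          # 2-opt reversal
        cand_len = tour_length(cand, dist)
        if cand_len < cur_len or rng.random() < np.exp((cur_len - cand_len) / temp):
            cur, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = list(cur), cur_len
        temp *= cooling                                            # cooling schedule
    return best, best_len

# toy instance: 20 random cities in the unit square
pts = rng.random((20, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
better_tour, better_len = sa_escape(list(range(20)), dist)
```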

  7. Solid Rocket Motor Design Using Hybrid Optimization

    Directory of Open Access Journals (Sweden)

    Kevin Albarado

    2012-01-01

    Full Text Available A particle swarm/pattern search hybrid optimizer was used to drive a solid rocket motor modeling code to an optimal solution. The solid motor code models tapered motor geometries using analytical burn-back methods by slicing the grain into thin sections along the axial direction. Grains with circular perforations, stars, wagon wheels, and dog bones can be considered, and multiple tapered sections can be constructed. The hybrid approach to optimization is capable of exploring large areas of the solution space through particle swarming, but is also able to climb "hills" of optimality through gradient-based pattern searching. A preliminary method for designing tapered internal geometry as well as tapered outer mold-line geometry is presented. A total of four optimization cases were performed. The first two case studies examine designing motors to match a given regressive-progressive-regressive burn profile. The third case study considers designing a neutrally burning right circular perforated grain (utilizing inner and external geometry tapering). The final case study considers designing a linearly regressive burning profile for right circular perforated (tapered) grains.

  8. Quantum Superdense Coding Scheme Based on a High-dimensional Two-particle System

    Institute of Scientific and Technical Information of China (English)

    黄平武; 周萍; 农亮勤; 何良明; 尹彩流

    2011-01-01

    A generalized superdense coding scheme based on a high-dimensional two-particle maximally entangled state is presented, following ideas from the superdense coding scheme based on four-dimensional two-particle states: exploiting the nonlocal correlations of a pre-shared d-dimensional two-particle maximally entangled state, the sender (Bob) needs to transmit only one particle to the receiver (Alice) to convey the classical information. Superdense coding over a noisy quantum channel is also discussed. By introducing one auxiliary two-level particle and performing the corresponding unitary operations on her particles, the receiver (Alice) can extract d|α_k|² log₂d + log₂d bits of classical information, where α_k = min{α_j}, j ∈ {0, …, d−1}. The parties can use decoy photons to set up their quantum channel securely, with security equivalent to the improved original quantum key distribution scheme (Bennett-Brassard 1984, BB84). The scheme requires only a pure entangled state, which makes it more convenient than others in practical applications, and it has the advantage of high communication efficiency.

  9. Radiation in Particle Simulations

    Energy Technology Data Exchange (ETDEWEB)

    More, R; Graziani, F; Glosli, J; Surh, M

    2010-11-19

    Hot dense radiative (HDR) plasmas common to Inertial Confinement Fusion (ICF) and stellar interiors have high temperature (a few hundred eV to tens of keV), high density (tens to hundreds of g/cc) and high pressure (hundreds of megabars to thousands of gigabars). Typically, such plasmas undergo collisional, radiative, atomic and possibly thermonuclear processes. In order to describe HDR plasmas, computational physicists in ICF and astrophysics use atomic-scale microphysical models implemented in various simulation codes. Experimental validation of the models used to describe HDR plasmas is difficult to perform. Direct Numerical Simulation (DNS) of the many-body interactions of plasmas is a promising approach to model validation, but previous work either relies on the collisionless approximation or ignores radiation. We present four methods that attempt a new numerical simulation technique to address a currently unsolved problem: the extension of molecular dynamics to collisional plasmas including emission and absorption of radiation. The first method applies the Liénard-Wiechert solution of Maxwell's equations for a classical particle whose motion is assumed to be known. The second method expands the electromagnetic field in normal modes (plane waves in a box with periodic boundary conditions) and solves the equation for wave amplitudes coupled to the particle motion. The third method is a hybrid molecular dynamics/Monte Carlo (MD/MC) method which calculates radiation emitted or absorbed by electron-ion pairs during close collisions. The fourth method is a generalization of the third method to include small clusters of particles emitting radiation during close encounters: one electron simultaneously hitting two ions, two electrons simultaneously hitting one ion, etc. This approach is inspired by the virial expansion method of equilibrium statistical mechanics. Using a combination of these methods we believe it is possible to do atomic-scale particle

  10. Preparation of organic/inorganic hybrid and hollow particles by catalytic deposition of silica onto core/shell heterocoagulates modified with poly[2-(N,N-dimethylamino)ethyl methacrylate].

    Science.gov (United States)

    Taniguchi, Tatsuo; Obi, Shun; Kamata, Yoshitada; Kashiwakura, Takuya; Kasuya, Masakatsu; Ogawa, Tatsuya; Kohri, Michinari; Nakahira, Takayuki

    2012-02-15

    The organic/inorganic hybrid particles PSt/P(St-CPEM)(θ)-g-PDMAEMA/SiO(2) were prepared by catalytic hydrolysis and subsequent polycondensation of tetraethoxysilane in the poly[2-(N,N-dimethylamino)ethyl methacrylate] (PDMAEMA) layers grafted on the PSt/P(St-CPEM)(θ) core/shell heterocoagulates. The micron-sized PSt core and the submicron-sized P(St-CPEM) shell particles bearing ATRP initiating groups were synthesized by dispersion polymerization of styrene (St) and emulsifier-free emulsion polymerization of St with 2-chloropropionyloxyethyl methacrylate (CPEM), respectively. The raspberry-shaped PSt/P(St-CPEM)(θ) heterocoagulates with a controlled surface coverage (θ=0.51, 0.81) were prepared by hydrophobic coagulation between the core and the shell particles in an aqueous NaCl solution near the T(g) of P(St-CPEM). Surface modification of heterocoagulates was carried out by ATRP of DMAEMA from the shell particles adsorbed on the core particles. Silica deposition was performed by simply adding tetraethoxysilane to a water/methanol dispersion of PSt/P(St-CPEM)(θ)-g-PDMAEMA. The SEM and TGA revealed that the resulting PSt/P(St-CPEM)(θ)-g-PDMAEMA/SiO(2) composites maintain a raspberry-like morphology after deposition of silica onto the PDMAEMA layer grafted on heterocoagulates. The micron-sized, raspberry-shaped or the submicron-sized, hole-structured silica hollow particles were obtained selectively by thermal decomposition of the PSt/P(St-CPEM)(θ)-g-PDMAEMA/SiO(2). The oriented particle array was fabricated by dropping anisotropically perforated silica particles onto a glass substrate settled at the bottom of a bottle filled with chloroform.

  11. Simulation of Geotextile Filter Analysis Based on Particle Flow Code

    Institute of Scientific and Technical Information of China (English)

    李伟; 赵坚; 沈振中; 杭学军; 张松

    2013-01-01

    The particle flow code (PFC) method is used to establish a numerical model of geotextile filtration, and the movement characteristics of particles carried by water in soil with a geotextile filtration structure, together with the influencing factors, are analysed at the micromechanical level. The calculation errors arising when PFC is applied to the numerical simulation of geotextile filtration are discussed and a solution is proposed. Using an orthogonal test, the sensitivity of the porosity of the geotextile particle-flow filtration model under multiple influencing factors is analysed. Finally, the permeability coefficient "peak" phenomenon observed in geotextile clogging tests and its causes are further verified.

  12. Simulated Energetic Particle Transport in the Interplanetary Space: The Palmer Consensus Revisited

    CERN Document Server

    Tautz, R C

    2013-01-01

    Reproducing measurements of the scattering mean free paths for energetic particles propagating through the solar system has been a major problem in space physics. The pioneering work of Bieber et al. [Astrophys. J. 420, 294 (1994)] provided a theoretical explanation of such observations, which, however, was based on assumptions such as the questionable hypothesis that quasi-linear theory is correct for parallel diffusion. By employing a hybrid plasma-wave/magnetostatic turbulence model, a test-particle code is used to investigate the scattering of energetic particles. The results show excellent agreement with solar wind observations.

  13. Holographic codes

    CERN Document Server

    Latorre, Jose I

    2015-01-01

    There exists a remarkable four-qutrit state that carries absolute maximal entanglement in all its partitions. Employing this state, we construct a tensor network that delivers a holographic many body state, the H-code, where the physical properties of the boundary determine those of the bulk. This H-code is made of an even superposition of states whose relative Hamming distances are exponentially large with the size of the boundary. This property makes H-codes natural states for a quantum memory. H-codes exist on tori of definite sizes and get classified in three different sectors characterized by the sum of their qutrits on cycles wrapped through the boundaries of the system. We construct a parent Hamiltonian for the H-code which is highly non local and finally we compute the topological entanglement entropy of the H-code.

  14. Sharing code

    OpenAIRE

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  15. A hybrid method using the widely-used WIEN2k and VASP codes to calculate the complete set of XAS/EELS edges in a hundred-atoms system.

    Science.gov (United States)

    Donval, Gaël; Moreau, Philippe; Danet, Julien; Larbi, Séverine Jouanneau-Si; Bayle-Guillemaud, Pascale; Boucher, Florent

    2017-01-04

    Most of the recent developments in EELS modelling have focused on obtaining better agreement with measurements. Less work, however, has been dedicated to bringing EELS calculations to larger structures that can more realistically describe actual systems. The purpose of this paper is to present a hybrid approach well adapted to calculating the whole set of localised EELS core-loss edges (at the XAS level of theory) on larger systems using only standard tools, namely the WIEN2k and VASP codes. We illustrate the usefulness of this method by applying it to a set of amorphous silicon structures in order to explain the flattening of the silicon L2,3 EELS edge peak at the onset. We show that the peak flattening is actually caused by the collective contribution of each of the atoms to the average spectrum, as opposed to a flattening occurring in each individual spectrum. This method allowed us to reduce the execution time by a factor of 3 compared to a usual, carefully optimised WIEN2k calculation. It provided even greater speed-ups on more complex systems (interfaces, ∼300 atoms) that will be presented in a future paper. This method is suited to calculating all the localized edges of all the atoms of a structure in a single calculation for light atoms, as long as the core-hole effects can be neglected.

  16. Preparation, Characterization and Dispersion of MASC-SiO2 Hybrid Particles

    Institute of Scientific and Technical Information of China (English)

    季燕; 陈洪龄; 张渝

    2013-01-01

    A comb-type copolymer of poly(maleic anhydride-dodecene-styrene) (MASC) was synthesized, and a hybrid composite of MASC-encapsulated SiO2 nanoparticles was prepared by a simple ball-milling process. The comb polymer MASC was characterized by 1H NMR and FTIR, and the MASC-SiO2 hybrid was characterized by FTIR, TG, contact angle, TEM and transparency measurements. The results show that the maximum grafting ratio is approximately 21.05%. When the mass ratio of MASC to SiO2 is 1:1, the static contact angle reaches 140°. Adjusting the mass ratio of the copolymer to the SiO2 nanoparticles gives the hybrid particles good dispersibility in organic solvents of different polarities.

  17. Hybrid plagiarism detection method in program code based on multiple techniques

    Institute of Scientific and Technical Information of China (English)

    杨超

    2016-01-01

    Based on an analysis of the characteristics and drawbacks of existing plagiarism detection systems for program code, a hybrid plagiarism detection method combining text analysis, structure metrics and attribute counting is proposed. First, document fingerprinting and the Winnowing algorithm are used to compute the text similarity. Second, the program code is translated into a Dynamic Control Structure tree (DCS), and the Winnowing algorithm is applied to estimate the DCS tree similarity, i.e. the structural similarity. Then the information on each variable in the code is collected and counted, and a variable similarity algorithm is applied to the variable information nodes to obtain the variable similarity. Finally, the text, structural and variable similarities are each assigned a weight to compute the overall code similarity. The experimental results show that the proposed method can effectively detect various kinds of plagiarism. For different threshold values, the accuracy and recall of the test results are higher than those of the JPLAG system; in particular, for program code with simple structure, the average accuracies of the proposed method and JPLAG are 82.5% and 69.5%, respectively, which shows that the proposed method is more effective.
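    The text-similarity stage can be illustrated with a minimal Winnowing fingerprinting routine followed by a Jaccard comparison of fingerprint sets; the k-gram size, window size, hash function and normalisation used below are assumptions for illustration, not the paper's settings.

```python
import hashlib

def winnow(text, k=5, window=4):
    """Winnowing fingerprints: hash all k-grams of the normalised text, then keep
    the minimum hash of every sliding window (rightmost minimum on ties)."""
    text = "".join(text.lower().split())                 # crude whitespace/case normalisation
    hashes = [int(hashlib.md5(text[i:i + k].encode()).hexdigest(), 16) % (1 << 32)
              for i in range(len(text) - k + 1)]
    fingerprints = set()
    for i in range(len(hashes) - window + 1):
        win = hashes[i:i + window]
        j = max(range(window), key=lambda idx: (win[idx] == min(win), idx))
        fingerprints.add((i + j, win[j]))                 # (position, hash) pair
    return {h for _, h in fingerprints}

def similarity(a, b):
    """Jaccard similarity over the two fingerprint sets."""
    fa, fb = winnow(a), winnow(b)
    return len(fa & fb) / max(1, len(fa | fb))

print(similarity("int sum = a + b;", "int total = a + b;"))
```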

  18. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  19. Polar Codes

    Science.gov (United States)

    2014-12-01