MULTIGRAIN: a smoothed particle hydrodynamic algorithm for multiple small dust grains and gas
Hutchison, Mark; Price, Daniel J.; Laibe, Guillaume
2018-05-01
We present a new algorithm, MULTIGRAIN, for modelling the dynamics of an entire population of small dust grains immersed in gas, typical of conditions found in molecular clouds and protoplanetary discs. The MULTIGRAIN method is more accurate than single-phase simulations because the gas experiences a backreaction from each dust phase and communicates this change to the other phases, thereby indirectly coupling the dust phases together. The MULTIGRAIN method is fast, explicit and low-storage, requiring only an array of dust fractions and their derivatives defined for each resolution element.
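The low-storage bookkeeping the abstract describes (one dust-fraction array per resolution element) can be sketched as follows; the function name and interface are our illustration, not the MULTIGRAIN code's API:

```python
import numpy as np

def dust_densities(rho, eps):
    """Split a total (barycentric) density rho into gas and per-species dust
    densities using the dust-fraction array eps[k] carried by each resolution
    element. Hypothetical helper illustrating the low-storage bookkeeping of
    a one-fluid multigrain scheme."""
    eps = np.asarray(eps, dtype=float)
    rho_dust = rho * eps                  # one density per grain species
    rho_gas = rho * (1.0 - eps.sum())     # the remainder is gas
    return rho_gas, rho_dust
```

Because the fractions sum with the gas to unity, the gas and all dust phases share one density field, which is what couples the phases indirectly.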
Hydrodynamic limit of interacting particle systems
International Nuclear Information System (INIS)
Landim, C.
2004-01-01
We present in these notes two methods for deriving the hydrodynamic equations of conservative interacting particle systems. The intention is to present the main ideas in the simplest possible context and to refer the reader to the literature for details and references. (author)
Petascale algorithms for reactor hydrodynamics
International Nuclear Information System (INIS)
Fischer, P.; Lottes, J.; Pointer, W.D.; Siegel, A.
2008-01-01
We describe recent algorithmic developments that have enabled large eddy simulations of reactor flows on up to P = 65,000 processors on the IBM BG/P at the Argonne Leadership Computing Facility. Petascale computing is expected to play a pivotal role in the design and analysis of next-generation nuclear reactors. Argonne's SHARP project is focused on advanced reactor simulation, with a current emphasis on modeling coupled neutronics and thermal-hydraulics (TH). The TH modeling comprises a hierarchy of computational fluid dynamics approaches ranging from detailed turbulence computations, using DNS (direct numerical simulation) and LES (large eddy simulation), to full core analysis based on RANS (Reynolds-averaged Navier-Stokes) and subchannel models. Our initial study is focused on LES of sodium-cooled fast reactor cores. The aim is to leverage petascale platforms at DOE's Leadership Computing Facilities (LCFs) to provide detailed information about heat transfer within the core and to provide baseline data for less expensive RANS and subchannel models.
PHANTOM: Smoothed particle hydrodynamics and magnetohydrodynamics code
Price, Daniel J.; Wurster, James; Nixon, Chris; Tricco, Terrence S.; Toupin, Stéven; Pettitt, Alex; Chan, Conrad; Laibe, Guillaume; Glover, Simon; Dobbs, Clare; Nealon, Rebecca; Liptai, David; Worpel, Hauke; Bonnerot, Clément; Dipierro, Giovanni; Ragusa, Enrico; Federrath, Christoph; Iaconi, Roberto; Reichardt, Thomas; Forgan, Duncan; Hutchison, Mark; Constantino, Thomas; Ayliffe, Ben; Mentiplay, Daniel; Hirsh, Kieran; Lodato, Giuseppe
2017-09-01
Phantom is a smoothed particle hydrodynamics and magnetohydrodynamics code focused on stellar, galactic, planetary, and high energy astrophysics. It is modular, and handles sink particles, self-gravity, two fluid and one fluid dust, ISM chemistry and cooling, physical viscosity, non-ideal MHD, and more. Its modular structure makes it easy to add new physics to the code.
Hydrodynamic relaxations in dissipative particle dynamics
Hansen, J. S.; Greenfield, Michael L.; Dyre, Jeppe C.
2018-01-01
This paper studies the dynamics of relaxation phenomena in the standard dissipative particle dynamics (DPD) model [R. D. Groot and P. B. Warren, J. Chem. Phys. 107, 4423 (1997)]. Using fluctuating hydrodynamics as the framework of the investigation, we focus on the collective transverse and longitudinal dynamics. It is shown that classical hydrodynamic theory predicts the transverse dynamics at relatively low temperatures very well when compared to simulation data; however, the theory's predictions are, on the same length scale, less accurate for higher temperatures. The agreement with hydrodynamics depends on the definition of the viscosity, and here we find that the transverse dynamics are independent of the dissipative and random shear force contributions to the stress. For high temperatures, the spectrum for the longitudinal dynamics is dominated by the Brillouin peak at large length scales, and the relaxation is therefore governed by sound-wave propagation and is athermal. This contrasts with the results at lower temperatures and small length scales, where the thermal process is clearly present in the spectra. The DPD model recaptures, at least qualitatively, the underlying hydrodynamic mechanisms, and quantitative agreement is excellent at intermediate temperatures for the transverse dynamics.
Smoothed Particle Hydrodynamics Coupled with Radiation Transfer
Susa, Hajime
2006-04-01
We have constructed a brand-new radiation hydrodynamics solver based upon Smoothed Particle Hydrodynamics, which works on a parallel computer system. The code is designed to investigate the formation and evolution of first-generation objects at z ≳ 10, where the radiative feedback from various sources plays important roles. The code can compute the fractions of the chemical species e, H+, H, H-, H2 and H2+ by fully implicit time integration. It can also deal with multiple sources of ionizing radiation, as well as radiation in the Lyman-Werner band. We compare the results of a few test calculations with the results of one-dimensional simulations and find good agreement. We also evaluate the speedup from parallelization, which is found to be almost ideal as long as the number of sources is comparable to the number of processors.
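The kind of fully implicit update used for stiff chemistry can be illustrated with a single-species toy problem; the rate equation and names here are hypothetical, not the solver's actual chemical network:

```python
def implicit_ionisation_step(x, dt, gamma, alpha_n):
    """One backward-Euler step for an ionised fraction x obeying the
    illustrative rate equation dx/dt = gamma*(1 - x) - alpha_n*x**2
    (photoionisation vs. recombination), solved by Newton iteration.
    The implicit form stays stable even for dt far above the stiff
    chemical timescale."""
    x_new = x
    for _ in range(50):
        # residual g(x_new) = x_new - x - dt * f(x_new) and its derivative
        g = x_new - x - dt * (gamma * (1.0 - x_new) - alpha_n * x_new**2)
        dg = 1.0 + dt * (gamma + 2.0 * alpha_n * x_new)
        step = g / dg
        x_new -= step
        if abs(step) < 1e-14:
            break
    return x_new
```

For very large dt the update relaxes directly to the equilibrium fraction, which is the practical payoff of implicit integration for stiff networks.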
Vanaverbeke, S.; Keppens, R.; Poedts, S.; Boffin, H.
2009-01-01
We describe the algorithms implemented in the first version of GRADSPH, a parallel, tree-based, smoothed particle hydrodynamics code for simulating self-gravitating astrophysical systems, written in FORTRAN 90. The paper presents details on the implementation of the Smoothed Particle Hydrodynamics (SPH) method.
An implicit Smooth Particle Hydrodynamic code
Energy Technology Data Exchange (ETDEWEB)
Knapp, Charles E. [Univ. of New Mexico, Albuquerque, NM (United States)
2000-05-01
An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless, and use particles to model fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for nonlinear corrections. It uses numerical derivatives to construct the Jacobian matrix, and sparse techniques to save memory and reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code is presented. The results of a number of test cases are then discussed, including a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. For the single jet of gas it has been demonstrated that the implicit code can do the problem in a much shorter time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.
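A minimal sketch of the Newton-Raphson correction with a numerically constructed Jacobian, as the abstract describes; a dense solve stands in for the sparse Krylov machinery, and all names are our illustration:

```python
import numpy as np

def newton_numeric(F, x0, tol=1e-10, h=1e-7, max_iter=50):
    """Newton-Raphson iteration for F(x) = 0 with a finite-difference
    Jacobian built column by column from numerical derivatives.
    A Krylov method (e.g. GMRES) would replace np.linalg.solve for the
    large sparse systems an implicit SPH step produces."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        n = x.size
        J = np.empty((n, n))
        for j in range(n):                 # numerical derivative in column j
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - f) / h
        x = x - np.linalg.solve(J, f)      # Newton correction
    return x
```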
High-order hydrodynamic algorithms for exascale computing
Energy Technology Data Exchange (ETDEWEB)
Morgan, Nathaniel Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-02-05
Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi- material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.
Coupling of smooth particle hydrodynamics with the finite element method
International Nuclear Information System (INIS)
Attaway, S.W.; Heinstein, M.W.; Swegle, J.W.
1994-01-01
A gridless technique called smooth particle hydrodynamics (SPH) has been coupled with the transient dynamics finite element code PRONTO. In this paper, a new weighted residual derivation for the SPH method is presented, and the methods used to embed SPH within PRONTO are outlined. Example SPH-PRONTO calculations are also presented. One major difficulty associated with the Lagrangian finite element method is modeling materials with no shear strength; for example, gases, fluids and explosive byproducts. Typically, these materials can be modeled for only a short time with a Lagrangian finite element code. Large distortions cause tangling of the mesh, which eventually leads to numerical difficulties such as negative element area or "bow-tie" elements. Remeshing allows the problem to continue for a short while, but the large distortions can prevent a complete analysis. SPH is a gridless Lagrangian technique. Requiring no mesh, SPH has the potential to model material fracture, large shear flows and penetration. SPH computes the strain rate and the stress divergence based on the nearest neighbors of a particle, which are determined using an efficient particle-sorting technique. Embedding the SPH method within PRONTO allows part of the problem to be modeled with quadrilateral finite elements, while other parts are modeled with the gridless SPH method. SPH elements are coupled to the quadrilateral elements through a contact-like algorithm.
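The neighbour summation at the heart of SPH can be sketched in 1D with the standard cubic-spline kernel; this is an illustrative density estimate, not code from the paper:

```python
import numpy as np

def w_cubic(r, h):
    """Standard 1D cubic-spline SPH kernel with support 2h
    (normalisation sigma = 2/(3h) so the kernel integrates to 1)."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(x, m, h):
    """Density at each particle as a kernel-weighted sum over neighbours
    within 2h; a real code would find neighbours with a sorting or tree
    structure instead of this brute-force double loop."""
    x = np.asarray(x, dtype=float)
    return np.array([sum(m[j] * w_cubic(xi - x[j], h) for j in range(len(x)))
                     for xi in x])
```

Strain rates and stress divergences are built from analogous neighbour sums over kernel gradients.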
Water Flow Simulation using Smoothed Particle Hydrodynamics (SPH)
Vu, Bruce; Berg, Jared; Harris, Michael F.
2014-01-01
Simulation of water flow from the rainbird nozzles has been accomplished using Smoothed Particle Hydrodynamics (SPH). The advantage of using SPH is that no meshing is required, so grid quality is no longer an issue and accuracy can be improved.
Launch Environment Water Flow Simulations Using Smoothed Particle Hydrodynamics
Vu, Bruce T.; Berg, Jared J.; Harris, Michael F.; Crespo, Alejandro C.
2015-01-01
This paper describes the use of Smoothed Particle Hydrodynamics (SPH) to simulate the water flow from the rainbird nozzle system used in the sound suppression system during pad abort and nominal launch. The simulations help determine if water from rainbird nozzles will impinge on the rocket nozzles and other sensitive ground support elements.
Hydrodynamics in adaptive resolution particle simulations: Multiparticle collision dynamics
Energy Technology Data Exchange (ETDEWEB)
Alekseeva, Uliana, E-mail: Alekseeva@itc.rwth-aachen.de [Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation (IAS), Forschungszentrum Jülich, D-52425 Jülich (Germany); German Research School for Simulation Sciences (GRS), Forschungszentrum Jülich, D-52425 Jülich (Germany); Winkler, Roland G., E-mail: r.winkler@fz-juelich.de [Theoretical Soft Matter and Biophysics, Institute for Advanced Simulation (IAS), Forschungszentrum Jülich, D-52425 Jülich (Germany); Sutmann, Godehard, E-mail: g.sutmann@fz-juelich.de [Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation (IAS), Forschungszentrum Jülich, D-52425 Jülich (Germany); ICAMS, Ruhr-University Bochum, D-44801 Bochum (Germany)
2016-06-01
A new adaptive resolution technique for particle-based multi-level simulations of fluids is presented. In the approach, the representation of fluid and solvent particles is changed on the fly between an atomistic and a coarse-grained description. The present approach is based on a hybrid coupling of the multiparticle collision dynamics (MPC) method and molecular dynamics (MD), thereby coupling stochastic and deterministic particle-based methods. Hydrodynamics is examined by calculating velocity and current correlation functions for various mixed and coupled systems. We demonstrate that hydrodynamic properties of the mixed fluid are conserved by a suitable coupling of the two particle methods, and that the simulation results agree well with theoretical expectations.
Smoothed particle hydrodynamic simulations of expanding HII regions
Bisbas, Thomas G.
2009-09-01
This thesis deals with numerical simulations of expanding ionized regions, known as HII regions. We implement a new three-dimensional algorithm in Smoothed Particle Hydrodynamics for including the dynamical effects of the interaction between ionizing radiation and the interstellar medium. This interaction plays a crucial role in star formation at all epochs. We study the influence of ionizing radiation in spherically symmetric clouds. In particular, we study the spherically symmetric expansion of an HII region inside a uniform-density, non-self-gravitating cloud. We examine the ability of our algorithm to reproduce the known theoretical solution and find that the agreement is very good. We also study the spherically symmetric expansion inside a uniform-density, self-gravitating cloud. We propose a new differential equation of motion for the expanding shell that includes the effects of gravity. Comparing its numerical solution with the simulations, we find that the equation predicts the position of the shell accurately. We also study the expansion of an off-centre HII region inside a uniform-density, non-self-gravitating cloud. This results in an evolution known as the rocket effect, where the ionizing radiation pushes and accelerates the cloud away from the exciting star, leading to its dispersal. During this evolution, cometary knots appear as a result of Rayleigh-Taylor and Vishniac instabilities. The knots are composed of a dense head with a conic tail behind it, a structure that points towards the ionizing source. Our simulations show that these knots are very reminiscent of the observed structures in planetary nebulae, such as the Helix nebula. The last part of this thesis is dedicated to the study of cores ionized by an exciting source placed outside and far away from them. The evolution of these cores is known as radiation-driven compression (or implosion). We perform simulations and compare our findings with the results of other workers.
Clustering and phase behaviour of attractive active particles with hydrodynamics.
Navarro, Ricard Matas; Fielding, Suzanne M
2015-10-14
We simulate clustering, phase separation and hexatic ordering in a monolayered suspension of active squirming disks subject to an attractive Lennard-Jones-like pairwise interaction potential, taking hydrodynamic interactions between the particles fully into account. By comparing the hydrodynamic case with counterpart simulations for passive and active Brownian particles, we elucidate the relative roles of self-propulsion, interparticle attraction, and hydrodynamic interactions in determining clustering and phase behaviour. Even in the presence of an attractive potential, we find that hydrodynamic interactions strongly suppress the motility-induced phase separation that might a priori have been expected in a highly active suspension. Instead, we find only a weak tendency for the particles to form stringlike clusters in this regime. At lower activities we demonstrate phase behaviour that is broadly equivalent to that of the counterpart passive system at low temperatures, characterized by regimes of gas-liquid, gas-solid and liquid-solid phase coexistence. In this way, we suggest that a dimensionless quantity representing the level of activity relative to the strength of attraction plays the role of something like an effective non-equilibrium temperature, counterpart to the (dimensionless) true thermodynamic temperature in the passive system. However, there are also some important differences from the equilibrium case, most notably with regard to the degree of hexatic ordering, which we discuss carefully.
Workshop on advances in smooth particle hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Wingate, C.A.; Miller, W.A.
1993-12-31
These proceedings contain viewgraphs presented at the 1993 workshop held at Los Alamos National Laboratory. Discussed topics include: negative stress, reactive flow calculations, interface problems, boundaries and interfaces, energy conservation in viscous flows, linked penetration calculations, stability and consistency of the SPH method, instabilities, wall heating and conservative smoothing, tensors, tidal disruption of stars, breaking the 10,000,000-particle limit, modelling relativistic collapse, SPH without H, relativistic KSPH avoidance of velocity-based kernels, tidal compression and disruption of stars near a supermassive rotating black hole, and finally relativistic SPH viscosity and energy.
Smooth Particle Hydrodynamics-based Wind Representation
Energy Technology Data Exchange (ETDEWEB)
Prescott, Steven [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States); Hess, Stephen [Idaho National Lab. (INL), Idaho Falls, ID (United States); Lin, Linyu [Idaho National Lab. (INL), Idaho Falls, ID (United States); Sampath, Ram [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2016-12-01
and computation time. An advanced method of combining results from grid-based methods with SPH through a data-driven model is proposed. This method could allow for more accurate simulation of particle movement near rigid bodies even with larger SPH particle sizes. If successful, the data-driven model would eliminate the need for an SPH turbulence model and increase the simulation domain size. Continued research beyond the scope of this project will be needed to determine the viability of a data-driven model.
IUTAM symposium on hydrodynamic diffusion of suspended particles
Energy Technology Data Exchange (ETDEWEB)
Davis, R.H. [ed.
1995-12-31
Hydrodynamic diffusion refers to the fluctuating motion of non-Brownian particles (or droplets or bubbles) which occurs in a dispersion due to multiparticle interactions. For example, in a concentrated sheared suspension, particles do not move along streamlines but instead exhibit fluctuating motions as they tumble around each other. This leads to a net migration of particles down gradients in particle concentration and in shear rate, due to the higher frequency of encounters of a test particle with other particles on the side of the test particle which has higher concentration or shear rate. As another example, suspended particles subject to sedimentation, centrifugation, or fluidization do not generally move relative to the fluid with a constant velocity, but instead experience diffusion-like fluctuations in velocity due to interactions with neighboring particles and the resulting variation in the microstructure or configuration of the suspended particles. In flowing granular materials, the particles interact through direct collisions or contacts (rather than through the surrounding fluid); these collisions also cause the particles to undergo fluctuating motions characteristic of diffusion processes. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.
Many-particle hydrodynamic interactions in parallel-wall geometry: Cartesian-representation method
International Nuclear Information System (INIS)
Blawzdziewicz, J.; Wajnryb, E.; Bhattacharya, S.
2005-01-01
This talk will describe the results of our theoretical and numerical studies of hydrodynamic interactions in a suspension of spherical particles confined between two parallel planar walls, under creeping-flow conditions. We propose an efficient algorithm for evaluating the many-particle friction matrix in this system; no Stokesian-dynamics algorithm of this kind has been available so far. Our approach involves expanding the fluid velocity field in the wall-bounded suspension into spherical and Cartesian fundamental sets of Stokes flows. The spherical set is used to describe the interaction of the fluid with the particles and the Cartesian set to describe the interaction with the walls. At the core of our method are transformation relations between the spherical and Cartesian fundamental sets. Using the transformation formulas, we derive a system of linear equations for the force multipoles induced on the particle surfaces; the coefficients in these equations are given in terms of lateral Fourier integrals corresponding to the directions parallel to the walls. The force-multipole equations have been implemented in a numerical algorithm for the evaluation of the multiparticle friction matrix in the wall-bounded system. The algorithm involves subtraction of the particle-wall and particle-particle lubrication contributions to accelerate the convergence of the results with the spherical-harmonics order, and a subtraction of the single-wall contributions to accelerate the convergence of the Fourier integrals. (author)
Numerical simulations of glass impacts using smooth particle hydrodynamics
International Nuclear Information System (INIS)
Mandell, D.A.; Wingate, C.A.
1995-01-01
As part of a program to develop advanced hydrocode design tools, we have implemented a brittle fracture model for glass in the SPHINX smooth particle hydrodynamics code. We have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass. Since the fractured glass properties needed by the model are not available, we performed sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.
From particle to kinetic and hydrodynamic descriptions of flocking
Ha, Seung-Yeal; Tadmor, Eitan
2008-01-01
We discuss the Cucker-Smale (C-S) particle model for flocking, deriving precise conditions for flocking to occur when pairwise interactions are sufficiently strong and long-ranged. We then derive a Vlasov-type kinetic model for the C-S particle model and prove that it exhibits time-asymptotic flocking behavior for arbitrary compactly supported initial data. Finally, we introduce a hydrodynamic description of flocking based on the C-S Vlasov-type kinetic model and prove flocking behavior without closure of higher moments.
An Efficient Sleepy Algorithm for Particle-Based Fluids
Directory of Open Access Journals (Sweden)
Xiao Nie
2014-01-01
We present a novel Smoothed Particle Hydrodynamics (SPH) based algorithm for efficiently simulating compressible and weakly compressible particle fluids. Prior particle-based methods simulate all fluid particles; however, in many cases some particles appearing to be at rest can be safely ignored without notably affecting the fluid flow behavior. To identify these particles, a novel sleepy strategy is introduced. By utilizing this strategy, only a portion of the fluid particles requires computational resources; thus an obvious performance gain can be achieved. In addition, in order to resolve the unphysical clumping issue due to tensile instability in SPH-based methods, a new artificial repulsive force is provided. We demonstrate that our approach can be easily integrated with existing SPH-based methods to improve efficiency without sacrificing visual quality.
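The sleepy strategy can be sketched as a simple awake/asleep mask; the threshold and names are hypothetical, not the paper's actual criterion:

```python
import numpy as np

V_SLEEP = 1e-3   # hypothetical speed threshold below which a particle sleeps

def sleepy_mask(vel, threshold=V_SLEEP):
    """Mark particles whose speed is below a threshold as 'sleeping';
    returns True for the awake particles that still get integrated."""
    speed = np.linalg.norm(vel, axis=1)
    return speed >= threshold

def step_awake(pos, vel, forces, dt, awake):
    """Advance only the awake particles (forward Euler placeholder);
    sleeping particles keep their state and cost nothing."""
    pos, vel = pos.copy(), vel.copy()
    vel[awake] += dt * forces[awake]
    pos[awake] += dt * vel[awake]
    return pos, vel
```

A production scheme would also wake particles when an awake neighbour approaches, so the mask cannot freeze genuinely moving fluid.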
A generalized transport-velocity formulation for smoothed particle hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Zhang, Chi; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A.
2017-05-15
The standard smoothed particle hydrodynamics (SPH) method suffers from tensile instability. In fluid-dynamics simulations this instability leads to particle clumping and void regions when negative pressure occurs. In solid-dynamics simulations, it results in unphysical structure fragmentation. In this work the transport-velocity formulation of Adami et al. (2013) is generalized to provide a solution to this long-standing problem. Instead of imposing a global background pressure, a variable background pressure is used to modify the particle transport velocity and eliminate the tensile instability completely. Furthermore, this modification is localized by defining a shortened smoothing length. The generalized formulation is suitable for fluid and solid materials with and without free surfaces. The results of extensive numerical tests on both fluid and solid dynamics problems indicate that the new method provides a unified approach for multi-physics SPH simulations.
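A hedged 1D sketch of the background-pressure acceleration that a transport-velocity formulation adds to regularise particle positions; the discretisation and names are our illustration, not the paper's generalized scheme:

```python
import numpy as np

P_B = 1.0   # illustrative uniform background pressure

def background_accel(m, rho, grad_w, p_b=P_B):
    """Sketch of a background-pressure acceleration in 1D:
    a_i = -p_b * sum_j m_j * (1/rho_i^2 + 1/rho_j^2) * dW_ij,
    where grad_w[i, j] is the kernel gradient between particles i and j.
    It vanishes for a uniform particle distribution and pushes particles
    toward uniformity otherwise."""
    n = len(rho)
    a = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i] -= p_b * m[j] * (1.0 / rho[i]**2 + 1.0 / rho[j]**2) * grad_w[i, j]
    return a

def transport_velocity(v, a_b, dt):
    """Advection velocity: momentum velocity plus the background-pressure
    correction accumulated over half a step (sketch)."""
    return v + 0.5 * dt * a_b
```

Particles are advected with the transport velocity while momentum is still evolved with the physical forces, which is what removes the clumping.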
Pattern recognition issues on anisotropic smoothed particle hydrodynamics
Pereira Marinho, Eraldo
2014-03-01
This is a preliminary theoretical discussion of the computational requirements of state-of-the-art smoothed particle hydrodynamics (SPH) from the perspective of pattern recognition and artificial intelligence. It is pointed out that, when anisotropy detection is included to improve resolution in shock layers, SPH is a very peculiar case of unsupervised machine learning. On the other hand, the free-particle nature of SPH opens an opportunity for artificial intelligence to study particles as agents acting in a collaborative framework, in which the timed outcomes of a fluid simulation form a large knowledge base; this might be very attractive in computational astrophysics phenomenological problems like self-propagating star formation.
Hydrodynamic limit of a nongradient interacting particle process
International Nuclear Information System (INIS)
Wick, W.D.
1989-01-01
A simple example of a nongradient stochastic interacting particle system is analyzed. In this model, symmetric simple exclusion in one dimension in a periodic environment, the dynamical term in the Green-Kubo formula contributes to the bulk diffusion constant. The law of large numbers for the density field and the central limit theorem for the density fluctuation field are proven, and the Green-Kubo expression for the diffusion constant is computed exactly. The hydrodynamic equation for the model turns out to be linear.
Mishler, Grant; Tsang, Alan Cheng Hou; Pak, On Shun
2018-03-01
The transport of active and passive particles plays central roles in diverse biological phenomena and engineering applications. In this paper, we present a theoretical investigation of a system consisting of an active particle and a passive particle in a confined micro-fluidic flow. The introduction of an external flow is found to induce the capture of the passive particle by the active particle via long-range hydrodynamic interactions among the particles. This hydrodynamic capture mechanism relies on an attracting stable equilibrium configuration formed by the particles, which occurs when the external flow intensity exceeds a certain threshold. We evaluate this threshold by studying the stability of the equilibrium configurations analytically and numerically. Furthermore, we study the dynamics of typical capture and non-capture events and characterize the basins of attraction of the equilibrium configurations. Our findings reveal a critical dependence of the hydrodynamic capture mechanism on the external flow intensity. Through adjusting the external flow intensity across the stability threshold, we demonstrate that the active particle can capture and release the passive particle in a controllable manner. Such a capture-and-release mechanism is desirable for biomedical applications such as the capture and release of therapeutic payloads by synthetic micro-swimmers in targeted drug delivery.
Smoothed particle hydrodynamics modelling in continuum mechanics: fluid-structure interaction
Directory of Open Access Journals (Sweden)
Groenenboom P. H. L.
2009-06-01
Within this study, the implementation of the smoothed particle hydrodynamics (SPH) method for solving the complex problem of interaction between a quasi-incompressible fluid with a free surface and an elastic structure is outlined. A brief description of the SPH model for both the quasi-incompressible fluid and the isotropic elastic solid is presented. The interaction between the fluid and the elastic structure is realised through a contact algorithm. The results of numerical computations are compared with experimental and computational data published in the literature.
Energy Technology Data Exchange (ETDEWEB)
De Corato, M., E-mail: marco.decorato@unina.it [Dipartimento di Ingegneria Chimica, dei Materiali e della Produzione Industriale, Università di Napoli Federico II, Piazzale Tecchio 80, 80125 Napoli (Italy); Slot, J.J.M., E-mail: j.j.m.slot@tue.nl [Department of Mathematics and Computer Science, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven (Netherlands); Hütter, M., E-mail: m.huetter@tue.nl [Department of Mechanical Engineering, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven (Netherlands); D' Avino, G., E-mail: gadavino@unina.it [Dipartimento di Ingegneria Chimica, dei Materiali e della Produzione Industriale, Università di Napoli Federico II, Piazzale Tecchio 80, 80125 Napoli (Italy); Maffettone, P.L., E-mail: pierluca.maffettone@unina.it [Dipartimento di Ingegneria Chimica, dei Materiali e della Produzione Industriale, Università di Napoli Federico II, Piazzale Tecchio 80, 80125 Napoli (Italy); Hulsen, M.A., E-mail: m.a.hulsen@tue.nl [Department of Mechanical Engineering, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven (Netherlands)
2016-07-01
In this paper, we present a finite element implementation of fluctuating hydrodynamics with a moving boundary fitted mesh for treating the suspended particles. The thermal fluctuations are incorporated into the continuum equations using the Landau and Lifshitz approach [1]. The proposed implementation fulfills the fluctuation–dissipation theorem exactly at the discrete level. Since we restrict the equations to the creeping flow case, this takes the form of a relation between the diffusion coefficient matrix and friction matrix both at the particle and nodal level of the finite elements. Brownian motion of arbitrarily shaped particles in complex confinements can be considered within the present formulation. A multi-step time integration scheme is developed to correctly capture the drift term required in the stochastic differential equation (SDE) describing the evolution of the positions of the particles. The proposed approach is validated by simulating the Brownian motion of a sphere between two parallel plates and the motion of a spherical particle in a cylindrical cavity. The time integration algorithm and the fluctuating hydrodynamics implementation are then applied to study the diffusion and the equilibrium probability distribution of a confined circle under an external harmonic potential.
Ding, E. J.
2015-06-01
The time-independent lattice Boltzmann algorithm (TILBA) is developed to calculate the hydrodynamic interactions between two particles in a Stokes flow. The TILBA is distinguished from the traditional lattice Boltzmann method in that a background matrix (BGM) is generated prior to the calculation. The BGM, once prepared, can be reused for calculations for different scenarios, and the computational cost for each such calculation will be significantly reduced. The advantage of the TILBA is that it is easy to code and can be applied to any particle shape without complicated implementation, and the computational cost is independent of the shape of the particle. The TILBA is validated and shown to be accurate by comparing calculation results obtained from the TILBA to analytical or numerical solutions for certain problems.
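The reuse of a precomputed background matrix can be illustrated in miniature: factorise the expensive operator once, then each new scenario costs only a cheap solve. This sketch uses a small generic dense matrix as a stand-in for the BGM (the actual TILBA matrices come from the lattice Boltzmann discretisation):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# The expensive, scenario-independent part of the problem is assembled and
# factorised once; each new scenario then reduces to a cheap triangular solve.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)  # well-conditioned "background matrix"

lu, piv = lu_factor(A)        # done once, reused for every scenario

def solve_scenario(b):
    """Cheap per-scenario solve against the precomputed factorisation."""
    return lu_solve((lu, piv), b)

b1 = rng.standard_normal(4)
b2 = rng.standard_normal(4)
x1, x2 = solve_scenario(b1), solve_scenario(b2)
```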
Particle swarm genetic algorithm and its application
International Nuclear Information System (INIS)
Liu Chengxiang; Yan Changxiang; Wang Jianjun; Liu Zhenhai
2012-01-01
To address the slow convergence and the tendency to fall into local optima of standard particle swarm optimization when dealing with nonlinear constrained optimization problems, a particle swarm genetic algorithm is designed. The proposed algorithm adopts the feasibility principle to handle constraint conditions, avoiding the difficulty of selecting a penalty factor in the penalty function method; it generates a random initial feasible population, which accelerates particle swarm convergence, and introduces the crossover and mutation strategies of genetic algorithms to keep the particle swarm from falling into local optima. Optimization runs on typical test functions show that the particle swarm genetic algorithm has better optimization performance. The algorithm is applied to nuclear power plant optimization, and the optimization results are significant. (authors)
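A hedged sketch of the hybrid idea described above: Deb-style feasibility rules for constraint handling plus a GA-like mutation step inside PSO. The paper's exact operators, rates and parameters are not specified here, so all values below are illustrative.

```python
import numpy as np

def feasibility_better(f_a, v_a, f_b, v_b):
    """Feasibility rule: a feasible solution beats an infeasible one;
    otherwise compare objective (both feasible) or violation (both infeasible)."""
    if v_a == 0 and v_b == 0:
        return f_a < f_b
    if v_a == 0 or v_b == 0:
        return v_a == 0
    return v_a < v_b

def pso_ga(f, violation, bounds, n=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    pbest_v = np.array([violation(p) for p in x])
    g = min(range(n), key=lambda i: (pbest_v[i], pbest_f[i]))
    gbest, gbest_f, gbest_v = pbest[g].copy(), pbest_f[g], pbest_v[g]
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        # GA-style mutation: perturb a few particles to escape local optima
        mask = rng.random(n) < 0.1
        x[mask] += 0.1 * rng.standard_normal((mask.sum(), dim))
        x = np.clip(x, lo, hi)
        for i in range(n):
            fi, vi = f(x[i]), violation(x[i])
            if feasibility_better(fi, vi, pbest_f[i], pbest_v[i]):
                pbest[i], pbest_f[i], pbest_v[i] = x[i].copy(), fi, vi
                if feasibility_better(fi, vi, gbest_f, gbest_v):
                    gbest, gbest_f, gbest_v = x[i].copy(), fi, vi
    return gbest, gbest_f

# Toy constrained problem: minimise x^2 + y^2 subject to x + y >= 1
# (known optimum f = 0.5 at x = y = 0.5).
best, best_f = pso_ga(
    f=lambda p: p[0] ** 2 + p[1] ** 2,
    violation=lambda p: max(0.0, 1.0 - (p[0] + p[1])),
    bounds=(np.array([-2.0, -2.0]), np.array([2.0, 2.0])),
)
```

The feasibility rule means no penalty factor has to be tuned: comparisons between candidate solutions never mix objective values with constraint violations.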
Numerical modelling of extreme waves by Smoothed Particle Hydrodynamics
Directory of Open Access Journals (Sweden)
M. H. Dao
2011-02-01
The impact of extreme/rogue waves can lead to serious damage of vessels as well as marine and coastal structures. Such extreme waves in deep water are characterized by steep wave fronts and an energetic wave crest. The process of wave breaking is highly complex and, apart from the general knowledge that impact loadings are highly impulsive, the dynamics of the breaking and impact are still poorly understood. An advanced numerical method, Smoothed Particle Hydrodynamics enhanced with parallel computing, is able to reproduce well the extreme waves and their breaking process. Once the waves and their breaking process are modelled successfully, the dynamics of the breaking and the characteristics of their impact on offshore structures can be studied. The computational methodology and numerical results are presented in this paper.
Modelling free surface flows with smoothed particle hydrodynamics
Directory of Open Access Journals (Sweden)
L.Di G.Sigalotti
2006-01-01
In this paper the method of Smoothed Particle Hydrodynamics (SPH) is extended to include an adaptive density kernel estimation (ADKE) procedure. It is shown that for a van der Waals (vdW) fluid, this method can be used to deal with free-surface phenomena without difficulty. In particular, arbitrary moving boundaries can be easily handled because surface tension is effectively simulated by the cohesive pressure forces. Moreover, the ADKE method is seen to increase both the accuracy and stability of SPH, since it allows the width of the kernel interpolant to vary locally so that only the minimum necessary smoothing is applied at and near free surfaces and sharp fluid-fluid interfaces. The method is robust and easy to implement. Examples of its resolving power are given for both the formation of a circular liquid drop under surface tension and the nonlinear oscillation of excited drops.
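The ADKE procedure can be sketched in one dimension: a pilot density estimate with a fixed smoothing length assigns each particle a local smoothing length, larger where the pilot density is low. The exponent and scaling constant below are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def w_cubic(r, h):
    """1D cubic spline kernel with support 2h (normalisation 2/(3h))."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

def adke_density(x, m, h0, eps=0.5, k=1.0):
    """Two-pass adaptive density kernel estimate (sketch of the procedure
    described above; eps and k are tunable, values here are illustrative)."""
    # Pass 1: pilot density with a fixed smoothing length h0
    pilot = np.array([np.sum(m * w_cubic(xi - x, h0)) for xi in x])
    # Pass 2: local smoothing lengths, larger where the pilot density is low
    g = np.exp(np.mean(np.log(pilot)))          # geometric mean
    h = k * h0 * (pilot / g) ** (-eps)
    rho = np.array([np.sum(m * w_cubic(xi - x, hi)) for xi, hi in zip(x, h)])
    return rho, h

# Uniform 1D particle distribution of unit density
n = 100
x = (np.arange(n) + 0.5) / n
m = np.full(n, 1.0 / n)
rho, h = adke_density(x, m, h0=2.0 / n)
```

On a uniform interior distribution the adaptive pass leaves the smoothing lengths nearly unchanged; the widening only kicks in where the pilot density drops, e.g. near a free surface.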
StarSmasher: Smoothed Particle Hydrodynamics code for smashing stars and planets
Gaburov, Evghenii; Lombardi, James C., Jr.; Portegies Zwart, Simon; Rasio, F. A.
2018-05-01
Smoothed Particle Hydrodynamics (SPH) is a Lagrangian particle method that approximates a continuous fluid as discrete nodes, each carrying various parameters such as mass, position, velocity, pressure, and temperature. In an SPH simulation the resolution scales with the particle density; StarSmasher is able to handle both equal-mass and equal number-density particle models. StarSmasher solves for hydro forces by calculating the pressure for each particle as a function of the particle's properties - density, internal energy, and internal properties (e.g. temperature and mean molecular weight). The code implements variational equations of motion and libraries to calculate the gravitational forces between particles using direct summation on NVIDIA graphics cards. Using a direct summation instead of a tree-based algorithm for gravity increases the accuracy of the gravity calculations at the cost of speed. The code uses a cubic spline for the smoothing kernel and an artificial viscosity prescription coupled with a Balsara Switch to prevent unphysical interparticle penetration. The code also implements an artificial relaxation force to the equations of motion to add a drag term to the calculated accelerations during relaxation integrations. Initially called StarCrash, StarSmasher was developed originally by Rasio.
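The direct-summation gravity mentioned above amounts to an O(N^2) pairwise loop. A minimal CPU sketch with Plummer softening (illustrative only; the actual code runs this summation on NVIDIA GPUs):

```python
import numpy as np

def direct_gravity(pos, mass, eps=0.0, G=1.0):
    """O(N^2) direct-summation gravitational acceleration with Plummer
    softening eps (a stand-in for the GPU kernel described above)."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                       # vectors to all other bodies
        r2 = np.einsum('ij,ij->i', d, d) + eps ** 2
        r2[i] = 1.0                            # placeholder; self-term zeroed below
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                        # no self-interaction
        acc[i] = G * np.sum((mass * inv_r3)[:, None] * d, axis=0)
    return acc

# Two unit masses at unit separation: accelerations of magnitude G m / r^2 = 1
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
mass = np.array([1.0, 1.0])
acc = direct_gravity(pos, mass)
```

Unlike a tree code, the pairwise sum is exact (up to the softening), which is the accuracy-for-speed trade mentioned in the abstract.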
Hydrodynamic Stability Analysis of Particle-Laden Solid Rocket Motors
Elliott, T. S.; Majdalani, J.
2014-11-01
Fluid-wall interactions within solid rocket motors can result in parietal vortex shedding giving rise to hydrodynamic instabilities, or unsteady waves, that translate into pressure oscillations. The oscillations can result in vibrations observed by the rocket, rocket subsystems, or payload, which can lead to changes in flight characteristics, design failure, or other undesirable effects. For many years particles have been embedded in solid rocket propellants with the understanding that their presence increases specific impulse and suppresses fluctuations in the flowfield. This study utilizes a two dimensional framework to understand and quantify the aforementioned two-phase flowfield inside a motor case with a cylindrical grain perforation. This is accomplished through the use of linearized Navier-Stokes equations with the Stokes drag equation and application of the biglobal ansatz. Obtaining the biglobal equations for analysis requires quantification of the mean flowfield within the solid rocket motor. To that end, the extended Taylor-Culick form will be utilized to represent the gaseous phase of the mean flowfield while the self-similar form will be employed for the particle phase. Advancing the mean flowfield by quantifying the particle mass concentration with a semi-analytical solution the finalized mean flowfield is combined with the biglobal equations resulting in a system of eight partial differential equations. This system is solved using an eigensolver within the framework yielding the entire spectrum of eigenvalues, frequency and growth rate components, at once. This work will detail the parametric analysis performed to demonstrate the stabilizing and destabilizing effects of particles within solid rocket combustion.
Simulations of reactive transport and precipitation with smoothed particle hydrodynamics
Tartakovsky, Alexandre M.; Meakin, Paul; Scheibe, Timothy D.; Eichler West, Rogene M.
2007-03-01
A numerical model based on smoothed particle hydrodynamics (SPH) was developed for reactive transport and mineral precipitation in fractured and porous materials. Because of its Lagrangian particle nature, SPH has several advantages for modeling Navier-Stokes flow and reactive transport, including: (1) in a Lagrangian framework there is no non-linear term in the momentum conservation equation, so that accurate solutions can be obtained for momentum-dominated flows; and (2) complicated physical and chemical processes, such as surface growth due to precipitation/dissolution and chemical reactions, are easy to implement. In addition, SPH simulations explicitly conserve mass and linear momentum. The SPH solution of the diffusion equation with fixed and moving reactive solid-fluid boundaries was compared with analytical solutions, lattice Boltzmann simulations [Q. Kang, D. Zhang, P. Lichtner, I. Tsimpanogiannis, Lattice Boltzmann model for crystal growth from supersaturated solution, Geophysical Research Letters, 31 (2004) L21604], and diffusion limited aggregation (DLA) model simulations [P. Meakin, Fractals, scaling and far from equilibrium, Cambridge University Press, Cambridge, UK, 1998]. To illustrate the capabilities of the model, coupled three-dimensional flow, reactive transport and precipitation in a fracture aperture with a complex geometry were simulated.
Energy Technology Data Exchange (ETDEWEB)
Ramis, Rafael, E-mail: rafael.ramis@upm.es
2017-02-01
A new one-dimensional hydrodynamic algorithm, specifically developed for Inertial Confinement Fusion (ICF) applications, is presented. The scheme uses a fully conservative Lagrangian formulation in planar, cylindrical, and spherically symmetric geometries, and supports arbitrary equations of state with separate ion and electron components. Fluid equations are discretized on a staggered grid and stabilized by means of an artificial viscosity formulation. The space-discretized equations are advanced in time using an implicit algorithm. The method includes several numerical parameters that can be adjusted locally. In regions with a low Courant–Friedrichs–Lewy (CFL) number, where stability is not an issue, they can be adjusted to optimize accuracy; in typical problems, the truncation error can be reduced by a factor of between 2 and 10 in comparison with conventional explicit algorithms. In regions with high CFL numbers, on the other hand, the parameters can be set to guarantee unconditional stability. The method can be integrated into complex ICF codes. This is demonstrated through several examples covering a wide range of situations: from thermonuclear ignition physics, where alpha particles are managed as an additional species, to low-intensity laser–matter interaction, where liquid–vapor phase transitions occur.
Particle algorithms for population dynamics in flows
International Nuclear Information System (INIS)
Perlekar, Prasad; Toschi, Federico; Benzi, Roberto; Pigolotti, Simone
2011-01-01
We present and discuss particle-based algorithms to numerically study the dynamics of populations subjected to an advecting flow. We discuss a few possible variants of the algorithms and compare them in a model compressible flow. A comparison against appropriate versions of the continuum stochastic Fisher equation (sFKPP) is also presented and discussed. The algorithms can be used to study population genetics in fluid environments.
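A zero-dimensional caricature of such particle-based population algorithms is an individual-based logistic birth-death process; the spatial advection and diffusion of the actual algorithms are omitted here, and all rates are illustrative.

```python
import numpy as np

def birth_death_logistic(n0, carrying, mu, dt, nsteps, seed=0):
    """Individual-based logistic birth-death process: each particle
    reproduces at rate mu and dies at rate mu * N / carrying, so the
    population fluctuates around the carrying capacity."""
    rng = np.random.default_rng(seed)
    n = n0
    history = [n]
    for _ in range(nsteps):
        births = rng.binomial(n, mu * dt)
        deaths = rng.binomial(n, min(1.0, mu * dt * n / carrying))
        n = max(0, n + births - deaths)
        history.append(n)
    return np.array(history)

hist = birth_death_logistic(n0=50, carrying=500, mu=1.0, dt=0.05, nsteps=400)
```

In the continuum limit this process reduces to the logistic (reaction) part of the Fisher equation; coupling the particles to a compressible velocity field is what the algorithms in the abstract add.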
Simulating Magnetized Laboratory Plasmas with Smoothed Particle Hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Johnson, Jeffrey N. [Univ. of California, Davis, CA (United States)
2009-01-01
The creation of plasmas in the laboratory continues to generate excitement in the physics community. Despite the best efforts of the intrepid plasma diagnostics community, the dynamics of these plasmas remains a difficult challenge to both the theorist and the experimentalist. This dissertation describes the simulation of strongly magnetized laboratory plasmas with Smoothed Particle Hydrodynamics (SPH), a method born of astrophysics but gaining broad support in the engineering community. We describe the mathematical formulation that best characterizes a strongly magnetized plasma under our circumstances of interest, and we review the SPH method and its application to astrophysical plasmas based on research by Phillips [1], Børve [2], and Price and Monaghan [3]. Some modifications and extensions to this method are necessary to simulate terrestrial plasmas, such as a treatment of magnetic diffusion based on work by Brookshaw [4] and by Atluri [5]; we describe these changes as we turn our attention toward laboratory experiments. Test problems that verify the method are provided throughout the discussion. Finally, we apply our method to the compression of a magnetized plasma performed by the Compact Toroid Injection eXperiment (CTIX) [6] and show that the experimental results support our computed predictions.
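The Brookshaw-type treatment of diffusion referred to above estimates a Laplacian from pairwise field differences and kernel gradients. A one-dimensional sketch (cubic spline kernel, applied here to a generic scalar field rather than to magnetic diffusion specifically):

```python
import numpy as np

def grad_w_cubic(r, h):
    """Derivative of the 1D cubic spline kernel (normalisation 2/(3h))."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    dw = np.where(q < 1.0, -3.0 * q + 2.25 * q**2,
         np.where(q < 2.0, -0.75 * (2.0 - q) ** 2, 0.0))
    return sigma * dw * np.sign(r) / h

def brookshaw_laplacian(x, m, rho, A, h):
    """Brookshaw-style SPH estimate of the Laplacian of a field A in 1D:
    lap A_i ~= 2 sum_j (m_j/rho_j) (A_i - A_j) (x_ij dW_ij/dx) / x_ij^2."""
    lap = np.zeros_like(A)
    for i in range(len(x)):
        dx = x[i] - x
        mask = (dx != 0) & (np.abs(dx) < 2 * h)
        lap[i] = 2.0 * np.sum(
            m[mask] / rho[mask] * (A[i] - A[mask])
            * dx[mask] * grad_w_cubic(dx[mask], h) / dx[mask] ** 2)
    return lap

# Uniform unit-density particles; A = x^2 has exact Laplacian 2
n = 100
x = (np.arange(n) + 0.5) / n
m = np.full(n, 1.0 / n)
rho = np.ones(n)
A = x ** 2
lap = brookshaw_laplacian(x, m, rho, A, h=0.02)
```

The pairwise form avoids second kernel derivatives, which is what makes it robust for diffusion terms; by antisymmetry of the pair contributions the operator also conserves the total of the diffused quantity.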
An analysis of 1-D smoothed particle hydrodynamics kernels
International Nuclear Information System (INIS)
Fulk, D.A.; Quinn, D.W.
1996-01-01
In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but the results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measures of merit, demonstrating the general usefulness of the measures and comparing the individual kernels. In general, it was found that bell-shaped kernels perform better than other shapes. 12 refs., 16 figs., 7 tabs
International Nuclear Information System (INIS)
Mohd Amirul Syafiq Mohd Yunos
2016-01-01
Radioactive particle tracking (RPT) techniques have been widely applied in the field of chemical engineering, especially to the hydrodynamics of multiphase reactors. The technique monitors the motion of the flow inside a reactor using a single radioactive particle tracer, neutrally buoyant with respect to the phase of interest, as a tracker. The particle moves inside the volume of interest and its positions are determined by an array of scintillation detectors counting incoming photons. Particle position reconstruction algorithms have traditionally been used to map measured count rates into coordinates by solving a minimization problem between measured events and calibration data. RPT has been used to validate respective-scale CFD models with partial success. This presentation gives an introduction to radioactive particle tracking, summarizes the history of such developments and the current state of this method at the Malaysian Nuclear Agency, and offers a perspective on how these investigations may help scale-up developments. (author)
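The reconstruction step described above is a minimization between measured counts and a calibrated count model. As a hedged illustration, the sketch below uses a hypothetical inverse-square count model and noise-free synthetic data in place of real calibration maps:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical count model: a detector at position p_d records a rate
# proportional to S / |x - p_d|^2.  Real RPT reconstruction uses calibrated
# count maps; this sketch only illustrates the minimisation step.
detectors = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, -0.2]])
S = 1000.0

def expected_counts(x):
    r2 = np.sum((detectors - x) ** 2, axis=1)
    return S / r2

true_pos = np.array([0.3, 0.6])
measured = expected_counts(true_pos)          # noise-free synthetic data

def misfit(x):
    """Least-squares mismatch between model and measured count rates."""
    return np.sum((expected_counts(x) - measured) ** 2)

result = minimize(misfit, x0=np.array([0.5, 0.5]), method='Nelder-Mead')
```

With real, noisy counts the minimum of the misfit no longer reaches zero, but the same minimisation machinery applies.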
Particle simulation of 3D galactic hydrodynamics on the ICL DAP
International Nuclear Information System (INIS)
Johns, T.C.; Nelson, A.H.
1985-01-01
A non-self-gravitating galactic hydrodynamics code based on a quasi-particle technique and making use of a mesh for force evaluation and sorting purposes is described. The short-range nature of the interparticle pressure forces, coupled with the use of a mesh allows a particularly fast algorithm. The 3D representation of the galaxy is mapped onto the ''3D'' main store of ICL DAP in a natural way, the 2 spatial dimensions in the plane of the galaxy becoming the 2 dimensions of the processor plane on the DAP and the third dimension varying within individual processor storage elements. This leads to a fairly straightforward implementation and a high degree of parallelism in the crucial parts of the code. The particle shuffling which is necessary after each timestep is facilitated by the use of a parallel variant of the bitonic sorting algorithm. Some results of simulations using a 63x63x16 mesh and about 50,000 particles to follow the evolution of a model disk galaxy are presented
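The bitonic sorting network mentioned above consists of stages of independent compare-exchange operations, which is what maps naturally onto lock-step parallel hardware such as the DAP. A serial sketch of the classic network:

```python
def bitonic_sort(a):
    """Bitonic merge sort for sequences whose length is a power of two.
    Every compare-exchange within an inner stage is independent of the
    others, so a stage can execute in a single parallel step."""
    n = len(a)
    assert n & (n - 1) == 0, "length must be a power of two"
    a = list(a)
    k = 2
    while k <= n:               # size of the bitonic sequences being merged
        j = k // 2
        while j >= 1:           # compare-exchange distance within a stage
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[partner]) == ascending:
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a
```

The particle shuffling in the abstract pads or blocks the particle list to a power-of-two length so that this fixed network of comparisons can be applied.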
Smooth particle hydrodynamic modeling and validation for impact bird substitution
Babu, Arun; Prasad, Ganesh
2018-04-01
Bird strike events occur occasionally and can at times be fatal for airframe structures. Federal Aviation Regulations (FAR) and similar standards mandate that aircraft be designed to withstand various levels of bird-hit damage. The subject of this paper is the numerical modelling of a soft-body geometry that realistically substitutes for an actual bird in simulations of bird strikes on target structures. Evolution of such a numerical model, reproducing actual bird behaviour through impact, is much desired for making use of state-of-the-art computational facilities in simulating bird strike events. The validity of simulations depicting bird strikes depends largely on the correctness of the bird model. In an impact, a set of complex and coupled dynamic interactions exists between the target and the impactor. To simplify the problem, the impactor response needs to be decoupled from that of the target, which can be done by modelling the target as non-compliant. The bird is assumed to behave as a fluid during impact, since the stresses generated in the bird body are significantly higher than its yield stress; hydrodynamic theory is therefore ideal for describing the problem. The impactor literally flows steadily over the target for most of the event. The impact starts with an initial shock, falls into a radial release-shock regime, and subsequently a steady flow is established in the bird body that continues until the whole length of the bird body has turned around. The initial shock pressure and steady-state pressure are ideal variables for comparing and validating the bird model. Spatial discretization of the bird is done using the Smooth Particle Hydrodynamic (SPH) approach. This Discrete Element Model (DEM) offers significant advantages over other contemporary approaches. Thermodynamic state variable relations are established using a polynomial equation of state (EOS). ANSYS AUTODYN is used to perform the explicit dynamic simulation of the impact event. Validation of the shock and steady
On Using Particle Finite Element for Hydrodynamics Problems Solving
Directory of Open Access Journals (Sweden)
E. V. Davidova
2015-01-01
The aim of the present research is to develop software for the Particle Finite Element Method (PFEM) and to verify it on the model problem of viscous incompressible flow in a square cavity. The Lagrangian description of the medium motion is used: the nodes of the finite element mesh move together with the fluid, which allows them to be considered as particles of the medium. Mesh cells deform during the time-stepping procedure, so the mesh must be reconstructed to preserve the stability of the finite element numerical procedure. The meshing algorithm, called "the possible triangles method", produces a mesh satisfying the Delaunay criterion. This algorithm is based on the well-known Fortune method of constructing the Voronoi diagram for a certain set of points in the plane. A graphical representation of the possible triangles method is shown. It is convenient to use a generalization of the Delaunay triangulation in order to construct meshes with polygonal cells when multiple nodes lie close to the same circle. The viscous incompressible fluid flow is described by the Navier-Stokes equations and the mass conservation equation with certain initial and boundary conditions. A fractional steps method, which avoids non-physical oscillations of the pressure, provides the time-stepping procedure. Spatial discretization is carried out using finite elements and the Bubnov-Galerkin method. For the calculation of shape functions on finite element meshes with polygonal cells,
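The Delaunay criterion invoked above reduces to an empty-circumcircle test per triangle. A sketch of the standard determinant predicate (this is the generic test, not the possible-triangles method itself):

```python
import numpy as np

def in_circumcircle(a, b, c, p):
    """Empty-circumcircle test at the heart of the Delaunay criterion:
    returns True if point p lies strictly inside the circumcircle of
    triangle (a, b, c), which must be given in counter-clockwise order."""
    m = np.array([
        [a[0] - p[0], a[1] - p[1], (a[0] - p[0]) ** 2 + (a[1] - p[1]) ** 2],
        [b[0] - p[0], b[1] - p[1], (b[0] - p[0]) ** 2 + (b[1] - p[1]) ** 2],
        [c[0] - p[0], c[1] - p[1], (c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2],
    ])
    return np.linalg.det(m) > 0.0

tri = ([0.0, 0.0], [1.0, 0.0], [0.0, 1.0])   # CCW unit right triangle
```

A triangulation is Delaunay exactly when no mesh node lies inside the circumcircle of any triangle; the near-degenerate case of several nodes on one common circle is what motivates the polygonal-cell generalization mentioned above.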
International Nuclear Information System (INIS)
Amanifard, N.; Haghighat Namini, V.
2012-01-01
In this study a Modified Compressible Smoothed Particle Hydrodynamics method is introduced which is applicable to problems involving shock wave structures and elastic-plastic deformations of solids. The algorithm is based on an approach which discretizes the momentum equation into three parts, solves each part separately, and computes their effects on the velocity field and displacement of particles. The most distinctive feature of the method is that it exactly removes artificial viscosity from the formulation and shows good agreement with other established numerical methods, without spurious numerical fractures or tensile instabilities and without any extra modifications. Two types of problems involving elastic-plastic deformations and shock waves are presented to demonstrate the capability of the method in simulating such problems and its ability to capture shocks. The problems proposed here are low- and high-velocity impacts between aluminum projectiles and semi-infinite aluminum beams. An elastic-perfectly plastic constitutive model is chosen for the aluminum, and the results of the simulations are compared with other relevant studies.
Jin, Chao; Ren, Carolyn L; Emelko, Monica B
2016-04-19
It is widely believed that media surface roughness enhances particle deposition; numerous, but inconsistent, examples of this effect have been reported. Here, a new mathematical framework describing the effects of hydrodynamics and interaction forces on particle deposition on rough spherical collectors in the absence of an energy barrier was developed and validated. In addition to quantifying the DLVO force, the model includes improved descriptions of flow field profiles and hydrodynamic retardation functions. This work demonstrates that hydrodynamic effects can significantly alter particle deposition relative to expectations when only the DLVO force is considered. Moreover, the combined effects of hydrodynamics and interaction forces on particle deposition on rough, spherical media are not additive, but synergistic. Notably, the developed model's particle deposition predictions are in closer agreement with experimental observations than those from current models, demonstrating the importance of including roughness impacts in particle deposition description/simulation. Consideration of hydrodynamic contributions to particle deposition may help to explain discrepancies between model-based expectations and experimental outcomes and improve descriptions of particle deposition during physicochemical filtration in systems with nonsmooth collector surfaces.
Neural Network Algorithm for Particle Loading
International Nuclear Information System (INIS)
Lewandowski, J.L.V.
2003-01-01
An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.
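The moment renormalization described above can be illustrated without a neural network: for linear moment constraints, the minimum-norm weight correction is available in closed form via least squares. The constraint targets below (unit total weight, zero mean, unit second moment) are illustrative:

```python
import numpy as np

# Moment renormalisation of particle weights (a deterministic least-squares
# stand-in for the neural-network minimiser described above): find the
# smallest weight correction that makes the weighted sample moments exact.
rng = np.random.default_rng(2)
x = rng.standard_normal(1000)
w0 = np.full(x.size, 1.0 / x.size)

# Constraints: sum w = 1, sum w x = 0, sum w x^2 = 1
A = np.vstack([np.ones_like(x), x, x ** 2])
b = np.array([1.0, 0.0, 1.0])
dw, *_ = np.linalg.lstsq(A, b - A @ w0, rcond=None)  # minimum-norm correction
w = w0 + dw
```

Because the system is underdetermined (3 constraints, 1000 weights), `lstsq` returns the minimum-norm solution, i.e. the smallest perturbation of the original loading that enforces the moments exactly.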
Computational plasticity algorithm for particle dynamics simulations
Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.
2018-01-01
The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.
Partially linearized algorithms in gyrokinetic particle simulation
Energy Technology Data Exchange (ETDEWEB)
Dimits, A.M.; Lee, W.W.
1990-10-01
In this paper, particle simulation algorithms with time-varying weights for the gyrokinetic Vlasov-Poisson system have been developed. The primary purpose is to use them for the removal of the selected nonlinearities in the simulation of gradient-driven microturbulence so that the relative importance of the various nonlinear effects can be assessed. It is hoped that the use of these procedures will result in a better understanding of the transport mechanisms and scaling in tokamaks. Another application of these algorithms is for the improvement of the numerical properties of the simulation plasma. For instance, implementations of such algorithms (1) enable us to suppress the intrinsic numerical noise in the simulation, and (2) also make it possible to regulate the weights of the fast-moving particles and, in turn, to eliminate the associated high frequency oscillations. Examples of their application to drift-type instabilities in slab geometry are given. We note that the work reported here represents the first successful use of the weighted algorithms in particle codes for the nonlinear simulation of plasmas.
High-Throughput Particle Manipulation Based on Hydrodynamic Effects in Microchannels
Directory of Open Access Journals (Sweden)
Chao Liu
2017-03-01
Microfluidic techniques are effective tools for precise manipulation of particles and cells, whose enrichment and separation is crucial for a wide range of applications in biology, medicine, and chemistry. Recently, lateral particle migration induced by the intrinsic hydrodynamic effects in microchannels, such as inertia and elasticity, has shown its promise for high-throughput and label-free particle manipulation. The particle migration can be engineered to realize the controllable focusing and separation of particles based on a difference in size. The widespread use of inertial and viscoelastic microfluidics depends on the understanding of hydrodynamic effects on particle motion. This review will summarize the progress in the fundamental mechanisms and key applications of inertial and viscoelastic particle manipulation.
Multi-Algorithm Particle Simulations with Spatiocyte.
Arjunan, Satya N V; Takahashi, Koichi
2017-01-01
As quantitative biologists get more measurements of spatially regulated systems such as cell division and polarization, simulation of reaction and diffusion of proteins using the data is becoming increasingly relevant to uncover the mechanisms underlying the systems. Spatiocyte is a lattice-based stochastic particle simulator for biochemical reaction and diffusion processes. Simulations can be performed at single-molecule and compartment spatial scales simultaneously. Molecules can diffuse and react in 1D (filament), 2D (membrane), and 3D (cytosol) compartments. The implications of crowded regions in the cell can be investigated because each diffusing molecule has spatial dimensions. Spatiocyte adopts multi-algorithm and multi-timescale frameworks to simulate models that simultaneously employ deterministic, stochastic, and particle reaction-diffusion algorithms. Comparison of light microscopy images to simulation snapshots is supported by Spatiocyte microscopy visualization and molecule tagging features. Spatiocyte is open-source software and is freely available at http://spatiocyte.org.
Analysis of Hydrodynamic Mechanism on Particles Focusing in Micro-Channel Flows
Directory of Open Access Journals (Sweden)
Qikun Wang
2017-06-01
In this paper, the hydrodynamic mechanism of moving particles in laminar micro-channel flows was numerically investigated. A hydrodynamic criterion was proposed to determine whether particles in channel flows can form a focusing pattern or not. A simple formula was derived to demonstrate how the focusing position varies with Reynolds number and particle size. Based on the proposed criterion, a possible hydrodynamic mechanism was discussed as to why particles are not focused if their size is too small or the channel Reynolds number is too low. The Re-λ curve (where Re is the channel-based Reynolds number and λ is the particle diameter scaled by the channel size) was obtained by least-squares data fitting so as to obtain the parameter range of the focusing pattern. In addition, the importance of particle rotation to numerical modelling of particle focusing was discussed from a hydrodynamic viewpoint. This research is expected to deepen the understanding of particle transport phenomena in bounded flows, in both the micro and macro fluidic scope.
Fast algorithm for two-dimensional data table use in hydrodynamic and radiative-transfer codes
International Nuclear Information System (INIS)
Slattery, W.L.; Spangenberg, W.H.
1982-01-01
A fast algorithm is described for finding interpolated atomic data in irregular two-dimensional tables for differing materials. The algorithm is tested in a hydrodynamic/radiative-transfer code and shown to be comparable in speed to interpolation in regularly spaced tables, which requires no table search. The concepts presented are expected to apply in any situation with irregular vector lengths. Procedures that were rejected, either because they were too slow or because they required too much assembly coding, are also described.
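A minimal sketch of the idea, assuming a table layout in which each temperature row carries its own density grid of possibly different length (the "irregular vector lengths" of the abstract); the data structures and names are illustrative, not the paper's interface. A binary search replaces any linear table scan, so the lookup cost stays close to that of a regular grid:

```python
import bisect

def interp_irregular(table, t_grid, rho_grids, t, rho):
    """Bilinearly interpolate an irregular 2D table.

    table[i][j] holds the value at temperature t_grid[i] and density
    rho_grids[i][j]; each temperature row may have its own density grid
    of a different length.
    """
    # Bracket the temperature with a binary search (no linear scan).
    i = min(max(bisect.bisect_right(t_grid, t) - 1, 0), len(t_grid) - 2)
    ft = (t - t_grid[i]) / (t_grid[i + 1] - t_grid[i])

    def interp_row(k):
        # Linear interpolation along row k's own density grid.
        grid, vals = rho_grids[k], table[k]
        j = min(max(bisect.bisect_right(grid, rho) - 1, 0), len(grid) - 2)
        fr = (rho - grid[j]) / (grid[j + 1] - grid[j])
        return (1 - fr) * vals[j] + fr * vals[j + 1]

    return (1 - ft) * interp_row(i) + ft * interp_row(i + 1)
```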
Particle emission in the hydrodynamical description of relativistic nuclear collisions
International Nuclear Information System (INIS)
Grassi, F.; Hama, Y.; Kodama, T.
1994-09-01
Continuous particle emission during the whole expansion of thermalized matter is studied and a new formula for the observed transverse mass spectrum is derived. In a suitable limit, the usual freeze-out emission scenario (Cooper-Frye formula) is recovered. In a simplified description of the expansion, it is shown that continuous particle emission can lead to a sizable curvature in the pion transverse mass spectrum and to parallel slopes for the various particles. These results are compared to experimental data. (author). 26 refs, 3 figs
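The freeze-out limit mentioned here is the standard Cooper-Frye prescription, which for a single species of degeneracy g reads:

```latex
E \frac{dN}{d^{3}p} \;=\; \frac{g}{(2\pi)^{3}} \int_{\sigma} f(x,p)\, p^{\mu}\, d\sigma_{\mu}
```

where f(x,p) is the thermal phase-space distribution on the freeze-out hypersurface σ and dσ_μ is its normal surface element; the continuous-emission formula of the paper generalizes this by letting particles decouple throughout the expansion rather than on a single hypersurface.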
Sandalski, Stou
Smoothed particle hydrodynamics is an efficient method for modeling the dynamics of fluids and is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU-accelerated smoothed particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP-parallelized C++ and OpenCL and includes octree-based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT open source license and publicly available at http://code.google.com/p/neptune-sph/.
Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide
2015-09-01
The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
Pelssers, E.G.M.; Cohen Stuart, M.A.; Fleer, G.J.
1990-01-01
The shear and extension forces occurring during the hydrodynamic focusing in our SPOS instrument which is described in the preceding paper ([1.]), are analyzed and compared with estimates for the binding forces between particles in salt coagulation and polymer flocculation. It is found that the
Smoothed particle hydrodynamics model for phase separating fluid mixtures. I. General equations
Thieulot, C; Janssen, LPBM; Espanol, P
We present a thermodynamically consistent discrete fluid particle model for the simulation of a recently proposed set of hydrodynamic equations for a phase separating van der Waals fluid mixture [P. Espanol and C.A.P. Thieulot, J. Chem. Phys. 118, 9109 (2003)]. The discrete model is formulated by
A solution algorithm for fluid-particle flows across all flow regimes
Kong, Bo; Fox, Rodney O.
2017-09-01
Many fluid-particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle-particle collisions are rare. Thus, in order to simulate such fluid-particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas-particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid-particle flows.
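The dynamic, local solver splitting described above can be caricatured as a per-cell dispatch on the particle volume fraction. The threshold value and both solver stubs below are illustrative assumptions, not the authors' actual OpenFOAM implementation:

```python
def hydrodynamic_flux(cell, dt):
    # Dense-regime two-fluid hydrodynamic solver (stub for illustration).
    return ("hydro", cell)

def kinetic_moment_flux(cell, dt):
    # Dilute-regime kinetic finite-volume solver using quadrature-based
    # moment methods (stub for illustration).
    return ("kinetic", cell)

def advance_cell(alpha_p, cell, dt, alpha_switch=0.01):
    """Choose the free-transport flux solver cell by cell from the local
    particle volume fraction alpha_p, so that dense and very dilute
    regions coexisting in one domain each get an appropriate treatment."""
    if alpha_p >= alpha_switch:
        return hydrodynamic_flux(cell, dt)
    return kinetic_moment_flux(cell, dt)
```

In a real solver the two branches would share a consistent set of moments so that the splitting is seamless across the regime boundary, as the abstract emphasizes.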
About kinematics and hydrodynamics of spinning particles: some simple considerations
International Nuclear Information System (INIS)
Recami, Erasmo; Rodrigues Junior, Waldyr A.; Salesi, Giovanni
1995-12-01
In the first part (Sections 1 and 2) of this paper, starting from the Pauli current in ordinary tensorial language, we obtain the decomposition of the non-relativistic field velocity into two orthogonal parts: the classical part, that is, the velocity w = p/m of the center of mass (CM), and the so-called quantum part, that is, the velocity V of the motion in the CM frame (namely, the internal spin motion, or Zitterbewegung). By inserting this complete, composite expression for the velocity into the kinetic energy term of the non-relativistic classical (Newtonian) Lagrangian, we straightforwardly obtain the so-called quantum potential associated, as is known, with the Madelung fluid. This result provides further evidence that the quantum behaviour of micro-systems can be a direct consequence of the fundamental existence of spin. In the second part (Sections 3 and 4), we fix our attention on the total velocity v = w + V, it now being necessary to pass to relativistic (classical) physics; we show that the proper time entering the definition of the four-velocity v^μ for spinning particles has to be the proper time τ of the CM frame. Inserting the correct Lorentz factor into the definition of v^μ leads to completely new kinematical properties for v². The important constraint p_μ v^μ = m, identically true for scalar particles but merely assumed a priori in all previous spinning-particle theories, is herein derived in a self-consistent way. (author). 24 refs
Variational Algorithms for Test Particle Trajectories
Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.
2015-11-01
The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
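The simplest example of a variational integrator is the leapfrog (Störmer-Verlet) scheme, which arises from discretizing the action rather than the equations of motion and is symplectic, hence the long-time fidelity discussed above. The sketch below is a generic one-dimensional illustration, not the gauge-invariant Lorentz-force or guiding-center schemes of the authors:

```python
def leapfrog(x, v, accel, dt, steps):
    """Stormer-Verlet (kick-drift-kick leapfrog) for dx/dt = v, dv/dt = accel(x).

    Being symplectic, it conserves a nearby 'shadow' energy, so the
    numerical energy error stays bounded for arbitrarily long runs
    instead of drifting as with non-variational methods.
    """
    v = v + 0.5 * dt * accel(x)        # initial half kick
    for _ in range(steps):
        x = x + dt * v                  # drift
        v = v + dt * accel(x)           # full kick (staggered velocity)
    v = v - 0.5 * dt * accel(x)         # undo the extra half kick to resync
    return x, v
```

For a harmonic oscillator (accel = -x) the energy 0.5*(x² + v²) oscillates within O(dt²) of its initial value indefinitely, which is the qualitative behavior variational integrators are designed to retain.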
Investigation of the hydrodynamic behavior of diatom aggregates using particle image velocimetry.
Xiao, Feng; Li, Xiaoyan; Lam, Kitming; Wang, Dongsheng
2012-01-01
The hydrodynamic behavior of diatom aggregates has a significant influence on the interactions and flocculation kinetics of algae. However, characterizing the hydrodynamics of diatoms and diatom aggregates in water is rather difficult. In this laboratory study, an advanced visualization technique, particle image velocimetry (PIV), was employed to investigate the hydrodynamic properties of settling diatom aggregates. The experiments were conducted in a settling column filled with a suspension of fluorescent polymeric beads as seed tracers. A laser light sheet generated by the PIV setup illuminated a thin vertical planar region in the settling column, while the motions of particles were recorded by a high-speed charge-coupled device (CCD) camera. This technique captured the trajectories of the tracers as a diatom aggregate settled through the tracer suspension. The PIV results directly revealed the curvilinear nature of the streamlines around diatom aggregates. The rectilinear collision model largely overestimated the collision areas of the settling particles. Diatom aggregates appeared to be highly porous and fractal, which allowed streamlines to penetrate into the aggregate interior; the diatom aggregates had a fluid collection efficiency of 10%-40%. The permeable nature of the aggregates can significantly enhance collisions and flocculation between the aggregates and other small particles, including algal cells, in water.
Stochastic-hydrodynamic model of halo formation in charged particle beams
Directory of Open Access Journals (Sweden)
Nicola Cufaro Petroni
2003-03-01
The formation of the beam halo in charged particle accelerators is studied in the framework of a stochastic-hydrodynamic model for the collective motion of the particle beam. In such a stochastic-hydrodynamic theory the density and the phase of the charged beam obey a set of coupled nonlinear hydrodynamic equations with explicit time-reversal invariance. This leads to a linearized theory that describes the collective dynamics of the beam in terms of a classical Schrödinger equation. Taking into account space-charge effects, we derive a set of coupled nonlinear hydrodynamic equations. These equations define a collective dynamics of self-interacting systems much in the same spirit as in the Gross-Pitaevskii and Landau-Ginzburg theories of the collective dynamics for interacting quantum many-body systems. Self-consistent solutions of the dynamical equations lead to quasistationary beam configurations with enhanced transverse dispersion and transverse emittance growth. In the limit of a frozen space-charge core it is then possible to determine and study the properties of stationary, stable core-plus-halo beam distributions. In this scheme the possible reproduction of the halo after its elimination is a consequence of the stationarity of the transverse distribution which plays the role of an attractor for every other distribution.
A Coulomb collision algorithm for weighted particle simulations
Miller, Ronald H.; Combi, Michael R.
1994-01-01
A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
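The weighted-pairing idea can be sketched as follows: both particles of a pair see the same center-of-mass deflection, but each accepts its velocity change only with a probability set by the partner's weight, so momentum is conserved on average over an ensemble of collisions. This is a 2D, equal-mass caricature of that idea with an illustrative acceptance rule, not the authors' exact algorithm:

```python
import math
import random

def weighted_coulomb_scatter(v1, w1, v2, w2, theta, rng=random.random):
    """One binary collision between weighted, equal-mass particles (2D).

    The relative velocity is rotated by the scattering angle theta; each
    particle then accepts its new velocity with probability
    w_other / max(w1, w2). Energy is conserved per accepted update and
    momentum is conserved statistically across many collisions.
    """
    # Center-of-mass and relative velocities (equal masses assumed).
    cm = [(a + b) / 2 for a, b in zip(v1, v2)]
    rel = [a - b for a, b in zip(v1, v2)]
    c, s = math.cos(theta), math.sin(theta)
    rel_new = [c * rel[0] - s * rel[1], s * rel[0] + c * rel[1]]

    v1_new = [cm[i] + rel_new[i] / 2 for i in range(2)]
    v2_new = [cm[i] - rel_new[i] / 2 for i in range(2)]

    wmax = max(w1, w2)
    out1 = v1_new if rng() < w2 / wmax else list(v1)
    out2 = v2_new if rng() < w1 / wmax else list(v2)
    return out1, out2
```

With equal weights both updates are always accepted and the scheme reduces to the unweighted Takizuka-Abe-style pair scattering, conserving momentum and energy exactly.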
International Nuclear Information System (INIS)
Mo Zeyao
2004-11-01
Multiphysics parallel numerical simulations are usually essential for studying complex physical phenomena in which several physics processes are tightly coupled. How to concatenate the coupled physics is very important for fully scalable parallel simulation. Meanwhile, three objectives should be balanced: the first is efficient data transfer among simulations; the second and third are efficient parallel execution and independent development of the simulation codes. Two concatenating algorithms for multiphysics parallel numerical simulations coupling radiation hydrodynamics with neutron transport on unstructured grids are presented. The first algorithm, Fully Loosely Concatenation (FLC), focuses on independent code development and independent execution with optimal per-code performance. The second algorithm, Two Level Tightly Concatenation (TLTC), focuses on the optimal trade-off among the three objectives above. Theoretical analyses of communication complexity and parallel numerical experiments on hundreds of processors on two parallel machines have shown that these two algorithms are efficient and can be generalized to other multiphysics parallel numerical simulations. In particular, algorithm TLTC is linearly scalable and has achieved optimal parallel performance. (authors)
Smoothed Particle Hydrodynamics Simulations of Dam-Break Flows Around Movable Structures
Jian, Wei; Liang, Dongfang; Shao, Songdong; Chen, Ridong; Yang, Kejun
2015-01-01
In this paper, 3D weakly compressible and incompressible Smoothed Particle Hydrodynamics (WCSPH & ISPH) models are used to study dam-break flows impacting on either a fixed or a movable structure. First, the two models’ performances are compared in terms of CPU time efficiency and numerical accuracy, as well as the water surface shapes and pressure fields. Then, they are applied to investigate dam-break flow interactions with structures placed in the path of the flood. The study found that th...
Afshar, Sepideh; Nath, Shubhankar; Demirci, Utkan; Hasan, Tayyaba; Scarcelli, Giuliano; Rizvi, Imran; Franco, Walfre
2018-02-01
Previous studies have demonstrated that flow-induced shear stress induces a motile and aggressive tumor phenotype in a microfluidic model of 3D ovarian cancer. However, the magnitude and distribution of the hydrodynamic forces that drive this biological modulation of the 3D cancer nodules are not known. We have developed a series of numerical and experimental tools to identify these forces within a 3D microchannel. In this work, we used particle image velocimetry (PIV) to find the velocity profile, using fluorescent micro-spheres as surrogates and nano-particles as tracers, from which hydrodynamic forces can be derived. The fluid velocity is obtained by imaging the trajectories of a range of fluorescent nano-particles (500-800 nm) via confocal microscopy. Imaging was done at different horizontal planes with a 50 μm bead as the surrogate. For an inlet flow rate of 2 μl/s, the maximum velocity at the center of the channel was 51 μm/s. The velocity profile around the sphere was symmetric, as expected, since the flow is dominated by viscous rather than inertial forces. Confocal PIV was successfully employed to find the velocity profile in a microchannel containing a nodule surrogate; it therefore seems feasible to use PIV to investigate the hydrodynamic forces around 3D biological models.
Behafarid, Farhad; Brasseur, James G.
2017-11-01
Following tablet disintegration, clouds of drug particles 5-200 μm in diameter pass through the intestines where drug molecules are absorbed into the blood. Release rate depends on particle size, drug solubility, local drug concentration and the hydrodynamic environment driven by patterned gut contractions. To analyze the dynamics underlying drug release and absorption, we use a 3D lattice Boltzmann model of the velocity and concentration fields driven by peristaltic contractions in vivo, combined with a mathematical model of dissolution-rate from each drug particle transported through the grid. The model is empirically extended for hydrodynamic enhancements to release rate by local convection and shear-rate, and incorporates heterogeneity in bulk concentration. Drug dosage and solubility are systematically varied along with peristaltic wave speed and volume. We predict large hydrodynamic enhancements (35-65%) from local shear-rate with minimal enhancement from convection. With high permeability boundary conditions, a quasi-equilibrium balance between release and absorption is established with volume and wave-speed dependent transport time scale, after an initial transient and before a final period of dissolution/absorption. Supported by FDA.
A dynamic global and local combined particle swarm optimization algorithm
International Nuclear Information System (INIS)
Jiao Bin; Lian Zhigang; Chen Qunxian
2009-01-01
Particle swarm optimization (PSO) algorithms have been developing rapidly and many results have been reported. The PSO algorithm has shown some important advantages by providing a high speed of convergence on specific problems, but it has a tendency to get stuck in near-optimal solutions, and it can be difficult to improve solution accuracy by fine tuning. This paper presents a dynamic global and local combined particle swarm optimization (DGLCPSO) algorithm to improve the performance of the original PSO, in which all particles dynamically share the best information of the local particle, the global particle, and the group particles. It is tested on a set of eight benchmark functions with different dimensions and compared with the original PSO. Experimental results indicate that the DGLCPSO algorithm significantly improves search performance on the benchmark functions, showing the effectiveness of the algorithm for solving optimization problems.
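For reference, the baseline that DGLCPSO extends is the classic personal-best/global-best PSO update sketched below. The DGLCPSO variant additionally blends in dynamically shared local-best information; only the standard update is shown here, since the abstract does not give the variant's exact coefficients, and all parameter values are illustrative:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best particle swarm optimization.

    Each particle is pulled toward its own best position (pbest) and the
    swarm's best position (gbest), with inertia weight w damping the
    velocity from one iteration to the next.
    """
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]
    pbest_f = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fi = f(x[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = list(x[i]), fi
                if fi < gbest_f:
                    gbest, gbest_f = list(x[i]), fi
    return gbest, gbest_f
```

The tendency to stall near a good solution that the paper addresses shows up here as gbest attracting the whole swarm; DGLCPSO's shared local-best terms are one way to keep diversity in the pull directions.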
Directory of Open Access Journals (Sweden)
Usama Umer
2016-05-01
This study performs comparative analyses of modeling serrated chip morphologies using the traditional finite element method and smoothed particle hydrodynamics. Although finite element models have been employed to predict machining performance variables for the last two decades, many drawbacks and limitations remain in current finite element models. Problems such as excessive mesh distortion, the high numerical cost of adaptive meshing techniques, and the need for geometric chip-separation criteria hinder their practical implementation in the metal cutting industry. In this study, a mesh-free method, smoothed particle hydrodynamics, is implemented for modeling serrated chip morphology while machining AISI H13 hardened tool steel. The smoothed particle hydrodynamics models are compared with the traditional finite element models, and it is found that the smoothed particle hydrodynamics models handle large distortions well and do not need any geometric or mesh-based chip-separation criterion.
Li, Shunbo; Li, Ming; Bougot-Robin, Kristelle; Cao, Wenbin; Yeung Yeung Chau, Irene; Li, Weihua; Wen, Weijia
2013-01-01
Integrating different steps on a chip for cell manipulations and sample preparation is of foremost importance to fully take advantage of microfluidic possibilities, and therefore make tests faster, cheaper and more accurate. We demonstrated particle manipulation in an integrated microfluidic device by applying hydrodynamic, electroosmotic (EO), electrophoretic (EP), and dielectrophoretic (DEP) forces. The process involves generation of fluid flow by pressure difference, particle trapping by DEP force, and particle redirect by EO and EP forces. Both DC and AC signals were applied, taking advantages of DC EP, EO and AC DEP for on-chip particle manipulation. Since different types of particles respond differently to these signals, variations of DC and AC signals are capable to handle complex and highly variable colloidal and biological samples. The proposed technique can operate in a high-throughput manner with thirteen independent channels in radial directions for enrichment and separation in microfluidic chip. We evaluated our approach by collecting Polystyrene particles, yeast cells, and E. coli bacteria, which respond differently to electric field gradient. Live and dead yeast cells were separated successfully, validating the capability of our device to separate highly similar cells. Our results showed that this technique could achieve fast pre-concentration of colloidal particles and cells and separation of cells depending on their vitality. Hydrodynamic, DC electrophoretic and DC electroosmotic forces were used together instead of syringe pump to achieve sufficient fluid flow and particle mobility for particle trapping and sorting. By eliminating bulky mechanical pumps, this new technique has wide applications for in situ detection and analysis.
Energy Technology Data Exchange (ETDEWEB)
Park, Dae Woong [Korea Testing and Research Institute, Kwachun (Korea, Republic of)
2015-03-15
A centrifuge works on the principle that particles with different densities separate at a rate proportional to the centrifugal force during high-speed rotation. Dense particles are precipitated quickly, while particles with relatively lower densities are precipitated more slowly. A decanter-type centrifuge is used to remove, concentrate, and dehydrate sludge in a water treatment process; measuring improvements in its sludge conveyance efficiency is therefore a core concern. In this study, a smoothed particle hydrodynamics analysis was performed for a decanter centrifuge used to convey sludge in order to evaluate the efficiency improvement. The analysis was applied both to the original centrifuge model and to the design-change model, a ball-plate rail model, to evaluate the sludge transfer efficiency.
A Novel Particle Swarm Optimization Algorithm for Global Optimization.
Wang, Chun-Feng; Liu, Kui
2016-01-01
Particle swarm optimization (PSO) is a recently developed optimization method which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel particle swarm optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of the algorithm, a chaotic search is adopted around the best solution of the current iteration. To verify the performance of the algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing particle swarm optimization algorithms.
Hydrodynamic interaction of two particles in confined linear shear flow at finite Reynolds number
Yan, Yiguang; Morris, Jeffrey F.; Koplik, Joel
2007-11-01
We discuss the hydrodynamic interactions of two solid bodies placed in linear shear flow between parallel plane walls in a periodic geometry at finite Reynolds number. The computations are based on the lattice Boltzmann method for particulate flow, validated here by comparison to previous results for a single particle. Most of our results pertain to cylinders in two dimensions but some examples are given for spheres in three dimensions. Either one mobile and one fixed particle or else two mobile particles are studied. The motion of a mobile particle is qualitatively similar in both cases at early times, exhibiting either trajectory reversal or bypass, depending upon the initial vector separation of the pair. At longer times, if a mobile particle does not approach a periodic image of the second, its trajectory tends to a stable limit point on the symmetry axis. The effect of interactions with periodic images is to produce nonconstant asymptotic long-time trajectories. For one free particle interacting with a fixed second particle within the unit cell, the free particle may either move to a fixed point or take up a limit cycle. Pairs of mobile particles starting from symmetric initial conditions are shown to asymptotically reach either fixed points, or mirror image limit cycles within the unit cell, or to bypass one another (and periodic images) indefinitely on a streamwise periodic trajectory. The limit cycle possibility requires finite Reynolds number and arises as a consequence of streamwise periodicity when the system length is sufficiently short.
Hydrodynamic and thermal modelling of gas-particle flow in fluidized beds
International Nuclear Information System (INIS)
Abdelkawi, O.S; Abdalla, A.M.; Atwan, E.F; Abdelmonem, S.A.; Elshazly, K.M.
2009-01-01
In this study a mathematical model was developed to simulate a two-dimensional fluidized bed with uniform fluidization. The model consists of two sub-models for the hydrodynamic and thermal behavior of the fluidized bed, implemented in a FORTRAN program entitled NEWFLUIDIZED. The program is used to predict the volume fractions of the gas and particle phases, the velocities of the two phases, the gas pressure, and the temperature distributions of the two phases. The program also calculates the heat transfer coefficient, predicts the fluidized-bed stability, and determines the optimum input gas velocity for the fluidized bed to achieve the best thermal behavior. The hydrodynamic model is verified by comparing its results with the computational fluid dynamics code MFIX, while the thermal model is tested against previously available experimental correlations. The model results show good agreement with the MFIX results, and the thermal model of the present work confirms the Zenz and Gunn equations.
MODA: a new algorithm to compute optical depths in multidimensional hydrodynamic simulations
Perego, Albino; Gafton, Emanuel; Cabezón, Rubén; Rosswog, Stephan; Liebendörfer, Matthias
2014-08-01
Aims: We introduce the multidimensional optical depth algorithm (MODA) for the calculation of optical depths in approximate multidimensional radiative transport schemes, equally applicable to neutrinos and photons. Motivated by (but not limited to) neutrino transport in three-dimensional simulations of core-collapse supernovae and neutron star mergers, our method makes no assumptions about the geometry of the matter distribution, apart from expecting optically transparent boundaries. Methods: Based on local information about opacities, the algorithm determines an escape route that tends to minimize the optical depth without assuming any predefined paths for radiation. Its adaptivity makes it suitable for a variety of astrophysical settings with complicated geometry (e.g., core-collapse supernovae, compact binary mergers, tidal disruptions, star formation). We implement the MODA algorithm in both a Eulerian hydrodynamics code with a fixed, uniform grid and an SPH code, where we use the tree structure that is otherwise used for neighbor searching and gravity calculation. Results: In a series of numerical experiments, we compare the MODA results with analytically known solutions. We also use snapshots from actual 3D simulations and compare the results of MODA with those obtained with other methods, such as the global and local ray-by-ray methods. It turns out that MODA achieves excellent accuracy at a moderate computational cost. In the appendix we also discuss implementation details and parallelization strategies.
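The idea of building an escape route from purely local opacity information can be caricatured on a 2D grid: from each cell, greedily step toward the neighbor with the lowest opacity and accumulate optical depth until a (transparent) boundary is reached. This toy sketch conveys the flavor of the approach only; the grid, the 4-neighbor stepping, and the boundary handling are illustrative assumptions, not MODA's actual machinery:

```python
def greedy_optical_depth(kappa, start, step=1.0):
    """Greedy escape-route estimate of optical depth on a 2D opacity grid.

    From the starting cell, repeatedly move to the 4-neighbor with the
    lowest local opacity kappa, accumulating kappa * path length, until
    the grid edge is reached.
    """
    ny, nx = len(kappa), len(kappa[0])
    i, j = start
    tau = 0.0
    visited = {(i, j)}
    while 0 < i < ny - 1 and 0 < j < nx - 1:
        tau += kappa[i][j] * step
        nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
        # Avoid immediately revisiting cells; fall back if all are visited.
        nbrs = [p for p in nbrs if p not in visited] or nbrs
        i, j = min(nbrs, key=lambda p: kappa[p[0]][p[1]])
        visited.add((i, j))
    tau += 0.5 * kappa[i][j] * step  # partial contribution of the edge cell
    return tau
```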
Cooper, Andrew P.; Cole, Shaun; Frenk, Carlos S.; Le Bret, Theo; Pontzen, Andrew
2017-08-01
Particle tagging is an efficient, but approximate, technique for using cosmological N-body simulations to model the phase-space evolution of the stellar populations predicted, for example, by a semi-analytic model of galaxy formation. We test the technique developed by Cooper et al. (which we call stings here) by comparing particle tags with stars in a smooth particle hydrodynamic (SPH) simulation. We focus on the spherically averaged density profile of stars accreted from satellite galaxies in a Milky Way (MW)-like system. The stellar profile in the SPH simulation can be recovered accurately by tagging dark matter (DM) particles in the same simulation according to a prescription based on the rank order of particle binding energy. Applying the same prescription to an N-body version of this simulation produces a density profile differing from that of the SPH simulation by ≲10 per cent on average between 1 and 200 kpc. This confirms that particle tagging can provide a faithful and robust approximation to a self-consistent hydrodynamical simulation in this regime (in contradiction to previous claims in the literature). We find only one systematic effect, likely due to the collisionless approximation, namely that massive satellites in the SPH simulation are disrupted somewhat earlier than their collisionless counterparts. In most cases, this makes remarkably little difference to the spherically averaged distribution of their stellar debris. We conclude that, for galaxy formation models that do not predict strong baryonic effects on the present-day DM distribution of MW-like galaxies or their satellites, differences in stellar halo predictions associated with the treatment of star formation and feedback are much more important than those associated with the dynamical limitations of collisionless particle tagging.
Weighted Flow Algorithms (WFA) for stochastic particle coagulation
International Nuclear Information System (INIS)
DeVille, R.E.L.; Riemer, N.; West, M.
2011-01-01
Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently-developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number and mass-based quantities of relevance for atmospheric sciences applications.
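A minimal sketch may help illustrate the weighting idea above: each computational particle carries a power-law weight in its size, and number- and mass-based quantities become weighted sums over the ensemble. The function names and the default exponent here are illustrative assumptions, not the PartMC-MOSAIC implementation.

```python
import math

def weight(diameter, alpha=-2.0, w0=1.0):
    """A computational particle of size d stands in for w(d) = w0 * d**alpha
    physical particles; alpha < 0 oversamples the small-particle tail."""
    return w0 * diameter ** alpha

def number_concentration(diameters, alpha=-2.0):
    # Total physical number represented by the weighted ensemble.
    return sum(weight(d, alpha) for d in diameters)

def mass_concentration(diameters, alpha=-2.0, rho=1000.0):
    # Physical mass: the weight times the mass of one spherical particle.
    return sum(weight(d, alpha) * rho * math.pi / 6.0 * d ** 3
               for d in diameters)
```

With a negative exponent, many computational particles are spent on the numerous small sizes while a few heavily weighted ones cover the rare large sizes, which is the efficiency gain the abstract describes.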
Lefauve, Adrien; Saintillan, David
2014-02-01
Strongly confined active liquids are subject to unique hydrodynamic interactions due to momentum screening and lubricated friction by the confining walls. Using numerical simulations, we demonstrate that two-dimensional dilute suspensions of fore-aft asymmetric polar swimmers in a Hele-Shaw geometry can exhibit a rich variety of novel phase behaviors depending on particle shape, including coherent polarized density waves with global alignment, persistent counterrotating vortices, density shocks and rarefaction waves. We also explain these phenomena using a linear stability analysis and a nonlinear traffic flow model, both derived from a mean-field kinetic theory.
Parallel Global Optimization with the Particle Swarm Algorithm (Preprint)
National Research Council Canada - National Science Library
Schutte, J. F; Reinbolt, J. A; Fregly, B. J; Haftka, R. T; George, A. D
2004-01-01
.... To obtain enhanced computational throughput and global search capability, we detail the coarse-grained parallelization of an increasingly popular global search method, the Particle Swarm Optimization (PSO) algorithm...
Chaotically encoded particle swarm optimization algorithm and its applications
International Nuclear Information System (INIS)
Alatas, Bilal; Akin, Erhan
2009-01-01
This paper proposes a novel particle swarm optimization (PSO) algorithm, the chaotically encoded particle swarm optimization algorithm (CENPSOA), based on the recently proposed notion of chaos numbers, which gives numbers a novel meaning. Various chaos arithmetic operations and evaluation measures that can be used in CENPSOA are described. Furthermore, CENPSOA has been designed to be effectively utilized in data mining applications.
Effects of hydrodynamic interaction on random adhesive loose packings of micron-sized particles
Directory of Open Access Journals (Sweden)
Liu Wenwei
2017-01-01
Random loose packings of monodisperse spherical micron-sized particles under a uniform flow field are investigated via an adhesive discrete-element method with two-way coupling between the particles and the fluid. Characterized by a dimensionless adhesion parameter, the packing fraction follows a law similar to that without fluid, but reaches larger values due to hydrodynamic compression. The total pressure drop through the packed bed shows critical behaviour at a packing fraction of ϕ ≈ 0.22 in the present study. The normalized permeability of the packed bed for different parameters increases with increasing porosity and is also consistent with the Kozeny-Carman equation.
Smoothed-particle-hydrodynamics modeling of dissipation mechanisms in gravity waves.
Colagrossi, Andrea; Souto-Iglesias, Antonio; Antuono, Matteo; Marrone, Salvatore
2013-02-01
The smoothed-particle-hydrodynamics (SPH) method has been used to study the evolution of free-surface Newtonian viscous flows specifically focusing on dissipation mechanisms in gravity waves. The numerical results have been compared with an analytical solution of the linearized Navier-Stokes equations for Reynolds numbers in the range 50-5000. We found that a correct choice of the number of neighboring particles is of fundamental importance in order to obtain convergence towards the analytical solution. This number has to increase with higher Reynolds numbers in order to prevent the onset of spurious vorticity inside the bulk of the fluid, leading to an unphysical overdamping of the wave amplitude. This generation of spurious vorticity strongly depends on the specific kernel function used in the SPH model.
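The link between neighbour count and convergence discussed above can be illustrated with the standard SPH summation density estimate: the number of neighbours grows with the smoothing length h relative to the particle spacing. This is a generic textbook 1D sketch with a cubic spline kernel, not the authors' code.

```python
def cubic_spline_1d(r, h):
    """Standard 1D cubic spline SPH kernel (normalisation 2/(3h))."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0  # compact support: no contribution beyond 2h

def sph_density(x_i, positions, masses, h):
    # Summation density estimate: rho_i = sum_j m_j W(x_i - x_j, h).
    return sum(m * cubic_spline_1d(x_i - x_j, h)
               for x_j, m in zip(positions, masses))
```

For equally spaced unit-density particles, an interior density estimate recovers 1 to within a fraction of a percent already at h = 1.2 dx; the abstract's point is that higher Reynolds numbers require a larger h/dx (more neighbours) for the viscous term to converge.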
Hydrodynamic behavior of particles in a jet flow of a gas fluidized bed
International Nuclear Information System (INIS)
Mirmomen, L.; Alavi, M.
2005-01-01
Numerous investigations have been devoted to understanding the hydrodynamics of gas jets in fluidized beds. However, most of them address the problem from a macroscopic point of view, which does not reveal the true behavior in the jet region at the single-particle level. The present work aims to understand the jet behavior at a more fundamental level, i.e. the level of individual particles. A thin rectangular gas fluidized bed, constructed from acrylic glass, with a vertical jet nozzle located at the center of the distributor was used in this work. A high-speed camera operating at up to 10,000 frames per second was used to observe the jet behavior. Analysis of a large quantity of images allowed determination of the solids flux, solids velocity and solids concentration in the jet region. The model presented in this work shows better agreement with the experimental data than the previous models presented in the literature.
Advective isotope transport by mixing cell and particle tracking algorithms
International Nuclear Information System (INIS)
Tezcan, L.; Meric, T.
1999-01-01
The 'mixing cell' algorithm of environmental isotope data evaluation is integrated with the three-dimensional finite difference groundwater flow model (MODFLOW) to simulate advective isotope transport, and the approach is compared with the 'particle tracking' algorithm of MOC3D, which simulates three-dimensional solute transport using the method-of-characteristics technique.
Use of Genetic Algorithms to solve Inverse Problems in Relativistic Hydrodynamics
Guzmán, F. S.; González, J. A.
2018-04-01
We present the use of Genetic Algorithms (GAs) as a strategy to solve inverse problems associated with models of relativistic hydrodynamics. The signal we consider to emulate an observation is the density of a relativistic gas, measured at a point where a shock is traveling. This shock is generated numerically out of a Riemann problem with mildly relativistic conditions. The inverse problem we propose is the prediction of the initial conditions of density, velocity and pressure of the Riemann problem that gave origin to that signal. For this we use the density, velocity and pressure of the gas on both sides of the discontinuity as the six genes of an organism, initially with random values within a tolerance. We then prepare an initial population of N such organisms and evolve them using methods based on GAs. The organism with the best fitness of each generation is compared to the signal, and the process ends when the initial conditions of the organisms of a later generation fit the signal within a tolerance.
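The GA strategy described above can be sketched with a toy forward model: candidate "organisms" are parameter tuples whose fitness is the mismatch with an observed signal. The forward model here is a simple stand-in, not a relativistic Riemann solver, and the selection, crossover and mutation choices are illustrative assumptions.

```python
import random

def forward_model(params, ts):
    # Stand-in forward model: the hidden parameters generate the signal.
    a, b = params
    return [a * t + b for t in ts]

def evolve(signal, ts, pop_size=60, generations=200, seed=1):
    rng = random.Random(seed)

    def fitness(p):
        # Negative squared mismatch with the observed signal.
        return -sum((y - s) ** 2 for y, s in zip(forward_model(p, ts), signal))

    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            pa, pb = rng.sample(parents, 2)
            child = tuple(0.5 * (x + y) + rng.gauss(0.0, 0.05)  # crossover + mutation
                          for x, y in zip(pa, pb))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

Because the best organisms survive each generation unmutated, the best fitness is monotonically non-decreasing, mirroring the convergence criterion in the abstract.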
Douillet-Grellier, Thomas; Pramanik, Ranjan; Pan, Kai; Albaiz, Abdulaziz; Jones, Bruce D.; Williams, John R.
2017-10-01
This paper develops a method for imposing stress boundary conditions in smoothed particle hydrodynamics (SPH) with and without the need for dummy particles. SPH has been used for simulating phenomena in a number of fields, such as astrophysics and fluid mechanics. More recently, the method has gained traction as a technique for simulation of deformation and fracture in solids, where the meshless property of SPH can be leveraged to represent arbitrary crack paths. Despite this interest, application of boundary conditions within the SPH framework is typically limited to imposed velocity or displacement using fictitious dummy particles to compensate for the lack of particles beyond the boundary interface. While this is enough for a large variety of problems, especially in the case of fluid flow, for problems in solid mechanics there is a clear need to impose stresses upon boundaries. In addition to this, the use of dummy particles to impose a boundary condition is not always suitable or even feasible, especially for problems which include internal boundaries. In order to overcome these difficulties, this paper first presents an improved method for applying stress boundary conditions in SPH with dummy particles. This is then followed by a proposal of a formulation which does not require dummy particles. These techniques are then validated against analytical solutions to two common problems in rock mechanics, the Brazilian test and the penny-shaped crack problem, both in 2D and 3D. This study highlights the fact that SPH offers a good level of accuracy to solve these problems and that results are reliable. This validation work serves as a foundation for addressing more complex problems involving plasticity and fracture propagation.
International Nuclear Information System (INIS)
Cregg, P.J.; Murphy, Kieran; Mardinoglu, Adil; Prina-Mello, Adriele
2010-01-01
The implant assisted magnetic targeted drug delivery system of Aviles, Ebner and Ritter is considered both experimentally (in vitro) and theoretically. The results of a 2D mathematical model are compared with 3D experimental results for a magnetizable wire stent. In this experiment a ferromagnetic, coiled wire stent is implanted to aid collection of particles which consist of single domain magnetic nanoparticles (radius ∼10nm). In order to model the agglomeration of particles known to occur in this system, the magnetic dipole-dipole and hydrodynamic interactions for multiple particles are included. Simulations based on this mathematical model were performed using open source C++ code. Different initial positions are considered and the system performance is assessed in terms of collection efficiency. The results of this model show closer agreement with the measured in vitro experimental results and with the literature. The implications in nanotechnology and nanomedicine are based on the prediction of the particle efficiency, in conjunction with the magnetizable stent, for targeted drug delivery.
Refined holonomic summation algorithms in particle physics
Energy Technology Data Exchange (ETDEWEB)
Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Round, Mark; Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation (RISC)
2017-06-15
An improved multi-summation approach is introduced and discussed that enables one to simultaneously handle indefinite nested sums and products in the setting of difference rings and holonomic sequences. Relevant mathematics is reviewed and the underlying advanced difference ring machinery is elaborated upon. The flexibility of this new toolbox contributed substantially to evaluating complicated multi-sums coming from particle physics. Illustrative examples of the functionality of the new software package RhoSum are given.
Li, Feng-guo; Ai, Bao-quan
2014-04-01
Transport of overdamped Brownian particles in a periodic hydrodynamical channel is investigated in the presence of an asymmetric unbiased force, a transverse gravitational force, and a pressure-driven flow. With the help of the generalized Fick-Jacobs approach, we obtain an analytical expression for the directed current and the generalized potential of mean force. It is found that, when the transverse gravitational force is larger than a certain value, the current is suppressed. Moreover, when the temporal asymmetry parameter of the unbiased force is negative, the current is always negative. However, when the temporal asymmetry parameter is positive, the transverse gravitational force and the pressure drop not only determine the direction of the current but also affect its amplitude. In particular, the competition between the asymmetric unbiased force and the pressure drop can result in multiple current reversals.
Pahar, Gourabananda; Dhar, Anirban
2017-04-01
A coupled solenoidal Incompressible Smoothed Particle Hydrodynamics (ISPH) model is presented for simulation of sediment displacement in erodible bed. The coupled framework consists of two separate incompressible modules: (a) granular module, (b) fluid module. The granular module considers a friction based rheology model to calculate deviatoric stress components from pressure. The module is validated for Bagnold flow profile and two standardized test cases of sediment avalanching. The fluid module resolves fluid flow inside and outside porous domain. An interaction force pair containing fluid pressure, viscous term and drag force acts as a bridge between two different flow modules. The coupled model is validated against three dambreak flow cases with different initial conditions of movable bed. The simulated results are in good agreement with experimental data. A demonstrative case considering effect of granular column failure under full/partial submergence highlights the capability of the coupled model for application in generalized scenario.
Quantum Behaved Particle Swarm Optimization Algorithm Based on Artificial Fish Swarm
Yumin, Dong; Li, Zhao
2014-01-01
The quantum-behaved particle swarm algorithm is a new intelligent optimization algorithm; it has few parameters and is easily implemented. To address the premature convergence problem of the existing quantum-behaved particle swarm optimization algorithm, we put forward a quantum particle swarm optimization algorithm based on the artificial fish swarm. The new algorithm builds on the quantum-behaved particle swarm algorithm, introducing the swarming and following behaviors, meanwhile using the a...
Particle tracing in the magnetosphere: New algorithms and results
International Nuclear Information System (INIS)
Sheldon, R.B.; Gaffey, J.D. Jr.
1993-01-01
The authors present new algorithms for calculating charged-particle trajectories in realistic magnetospheric fields in a fast and efficient manner. The scheme is based on a Hamiltonian energy conservation principle. It requires that particles conserve the first two adiabatic invariants, and thus also conserve energy. It is applicable to particles ranging in energy from 0.01 to 100 keV, with arbitrary charge and pitch angle. In addition to rapid particle trajectory calculations, it allows topological boundaries to be located efficiently. The results can be combined with fluid models to provide quantitative models of the time development of the whole convecting plasma.
Directory of Open Access Journals (Sweden)
Alejandro C Crespo
Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam-break flow impacting an obstacle, where good agreement with the experimental results is observed. Both the achieved speedups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability.
A dynamic inertia weight particle swarm optimization algorithm
International Nuclear Information System (INIS)
Jiao Bin; Lian Zhigang; Gu Xingsheng
2008-01-01
The particle swarm optimization (PSO) algorithm has developed rapidly and has been applied widely since it was introduced, as it is easily understood and realized. This paper presents an improved particle swarm optimization algorithm (IPSO) to improve the performance of standard PSO, which uses a dynamic inertia weight that decreases as the iterative generation increases. It is tested on a set of 6 benchmark functions in 30, 50 and 150 dimensions and compared with standard PSO. Experimental results indicate that the IPSO improves the search performance on the benchmark functions significantly.
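The decreasing-inertia-weight scheme above can be sketched in a minimal PSO loop. This is a generic illustration of the technique, not the paper's IPSO; the parameter values (w from 0.9 to 0.4, c1 = c2 = 2.0) are common textbook choices assumed here.

```python
import random

def pso_ldiw(f, dim, bounds, n_particles=30, iters=200,
             w_start=0.9, w_end=0.4, c1=2.0, c2=2.0, seed=0):
    """Minimise f over [lo, hi]^dim with a linearly decreasing inertia weight."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_val = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]
    for t in range(iters):
        # Inertia weight decreases linearly with the generation count.
        w = w_start - (w_start - w_end) * t / (iters - 1)
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            val = f(xs[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = list(xs[i]), val
                if val < gbest_val:
                    gbest, gbest_val = list(xs[i]), val
    return gbest, gbest_val
```

A large early weight favours exploration; the small late weight favours local exploitation, which is the rationale for the linear decrease.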
Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem
Rahmalia, Dinita
2017-08-01
The Linear Transportation Problem (LTP) is a case of constrained optimization where we want to minimize cost subject to the balance between supply and demand. Exact methods such as the northwest-corner, Vogel, Russell, and minimal-cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem with any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solution obtained by PSO.
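The GA-style mutation that such hybrids inject into PSO can be sketched as a per-coordinate random perturbation applied after the swarm update; it helps particles escape local optima. The rate, scale and bounds here are illustrative assumptions, not the paper's settings.

```python
import random

def mutate(position, rate=0.1, scale=1.0, lo=0.0, hi=10.0, rng=random):
    """GA-style mutation: with probability `rate`, perturb each coordinate
    by Gaussian noise and clip it back into the feasible range."""
    return [min(hi, max(lo, x + rng.gauss(0.0, scale)))
            if rng.random() < rate else x
            for x in position]
```

In a PSOGA-style loop, `mutate` would be applied to each particle's position once per iteration, between the velocity update and the fitness evaluation.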
Machine learning based global particle identification algorithms at the LHCb experiment
Derkach, Denis; Likhomanenko, Tatiana; Rogozhnikov, Aleksei; Ratnikov, Fedor
2017-01-01
One of the most important aspects of data processing at LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging CHerenkov (RICH) detector, the hadronic and electromagnetic calorimeters, and the muon chambers. To improve charged particle identification, several neural networks including a deep architecture and gradient boosting have been applied to data. These new approaches provide higher identification efficiencies than existing implementations for all charged particle types. It is also necessary to achieve a flat dependency between efficiencies and spectator variables such as particle momentum, in order to reduce systematic uncertainties during later stages of data analysis. For this purpose, "flat" algorithms that guarantee the flatness property for efficiencies have also been developed. This talk presents this new approach based on machine learning and its performance.
A decoupled power flow algorithm using particle swarm optimization technique
International Nuclear Information System (INIS)
Acharjee, P.; Goswami, S.K.
2009-01-01
A robust, nondivergent power flow method has been developed using the particle swarm optimization (PSO) technique. The decoupling properties between the power system quantities have been exploited in developing the power flow algorithm. The speed of the power flow algorithm has been improved using a simple perturbation technique. The basic power flow algorithm and the improvement scheme have been designed to retain the simplicity of the evolutionary approach. The power flow is rugged, can determine the critical loading conditions and also can handle the flexible alternating current transmission system (FACTS) devices efficiently. Test results on standard test systems show that the proposed method can find the solution when the standard power flows fail.
Lorentz covariant canonical symplectic algorithms for dynamics of charged particles
Wang, Yulei; Liu, Jian; Qin, Hong
2016-12-01
In this paper, the Lorentz covariance of algorithms is introduced. Under Lorentz transformation, both the form and performance of a Lorentz covariant algorithm are invariant. To acquire the advantages of symplectic algorithms and Lorentz covariance, a general procedure for constructing Lorentz covariant canonical symplectic algorithms (LCCSAs) is provided, based on which an explicit LCCSA for dynamics of relativistic charged particles is built. LCCSA possesses Lorentz invariance as well as long-term numerical accuracy and stability, due to the preservation of a discrete symplectic structure and the Lorentz symmetry of the system. For situations with time-dependent electromagnetic fields, which are difficult to handle in traditional construction procedures of symplectic algorithms, LCCSA provides a perfect explicit canonical symplectic solution by implementing the discretization in 4-spacetime. We also show that LCCSA has built-in energy-based adaptive time steps, which can optimize the computation performance when the Lorentz factor varies.
Effects of Random Values for Particle Swarm Optimization Algorithm
Directory of Open Access Journals (Sweden)
Hou-Ping Dai
2018-02-01
Particle swarm optimization (PSO) algorithms are generally improved by adaptively adjusting the inertia weight or combining with other evolutionary algorithms. However, in most modified PSO algorithms, the random values are always generated by a uniform distribution on the range [0, 1]. In this study, random values generated by uniform distributions on the ranges [0, 1] and [−1, 1], and by a Gaussian distribution with mean 0 and variance 1 (U[0,1], U[−1,1] and G(0,1)), are respectively used in the standard PSO and linear decreasing inertia weight (LDIW) PSO algorithms. For comparison, the deterministic PSO algorithm, in which the random values are set to 0.5, is also investigated in this study. Some benchmark functions and the pressure vessel design problem are selected to test these algorithms with different types of random values in three space dimensions (10, 30, and 100). The experimental results show that the standard PSO and LDIW-PSO algorithms with random values generated by U[−1,1] or G(0,1) are more likely to avoid falling into local optima and quickly obtain the global optima. This is because the large-scale random values can expand the range of particle velocity, making a particle more likely to escape from local optima and obtain the global optima. Although the random values generated by U[−1,1] or G(0,1) are beneficial for improving the global searching ability, the local searching ability for a low-dimensional practical optimization problem may be decreased due to the finite number of particles.
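The random values compared above enter PSO through the stochastic coefficients of the velocity update. A sketch of the four variants (the three distributions plus the deterministic 0.5 case), with the remaining update parameters assumed for illustration:

```python
import random

# Samplers for the stochastic coefficients r1, r2 in the velocity update
# v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x).
SAMPLERS = {
    "U01": lambda rng: rng.random(),            # uniform on [0, 1]
    "U11": lambda rng: rng.uniform(-1.0, 1.0),  # uniform on [-1, 1]
    "G01": lambda rng: rng.gauss(0.0, 1.0),     # Gaussian, mean 0, variance 1
    "DET": lambda rng: 0.5,                     # deterministic PSO variant
}

def velocity_update(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
                    sampler="U01", rng=random):
    r = SAMPLERS[sampler]
    return (w * v
            + c1 * r(rng) * (pbest - x)
            + c2 * r(rng) * (gbest - x))
```

With U[−1,1] or G(0,1) the coefficients can be negative or exceed 1, so velocities span a wider range, which is the mechanism the abstract credits for improved escape from local optima.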
Shen, Zaiyi; Würger, Alois; Lintuvuori, Juho S
2018-03-27
Using lattice Boltzmann simulations we study the hydrodynamics of an active spherical particle near a no-slip wall. We develop a computational model for an active Janus particle by considering different and independent mobilities on the two hemispheres, and compare the behaviour to a standard squirmer model. We show that the topology of the far-field hydrodynamic nature of the active Janus particle is similar to the standard squirmer model, but in the near field the hydrodynamics differ. In order to study how the near-field effects affect the interaction between the particle and a flat wall, we compare the behaviour of a Janus swimmer and a squirmer near a no-slip surface via extensive numerical simulations. Our results show generally good agreement between these two models, but they reveal some key differences, especially at low magnitudes of the squirming parameter. Notably, the affinity of the particles to be trapped at a surface is increased for the active Janus particles compared with standard squirmers. Finally, we find that when the particle is trapped on the surface, the velocity parallel to the surface exceeds the bulk swimming speed and scales linearly with the squirming parameter.
An efficient particle Fokker–Planck algorithm for rarefied gas flows
Energy Technology Data Exchange (ETDEWEB)
Gorji, M. Hossein; Jenny, Patrick
2014-04-01
This paper is devoted to the algorithmic improvement and careful analysis of the Fokker–Planck kinetic model derived by Jenny et al. [1] and Gorji et al. [2]. The motivation behind the Fokker–Planck based particle methods is to gain efficiency in low-Knudsen rarefied gas flow simulations, where conventional direct simulation Monte Carlo (DSMC) becomes expensive. This can be achieved due to the fact that the resulting model equations are continuous stochastic differential equations in velocity space. Accordingly, the computational particles evolve along independent stochastic paths and thus no collisions need to be calculated. Therefore the computational cost of the solution algorithm becomes independent of the Knudsen number. In the present study, different computational improvements were pursued in order to augment the method, including an accurate time integration scheme, local time stepping and noise reduction. For assessment of the performance, gas flow around a cylinder and lid-driven cavity flow were studied. Convergence rates, accuracy and computational costs were compared with respect to DSMC for a range of Knudsen numbers (from the hydrodynamic regime up to above one). In all the considered cases, the model together with the proposed scheme gives rise to very efficient yet accurate solution algorithms.
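The key idea, that each particle's velocity follows an independent stochastic differential equation so no pairwise collisions are computed, can be sketched with a generic Euler-Maruyama step for an Ornstein-Uhlenbeck relaxation toward the local mean flow. This is an illustrative model step, not the cubic Fokker-Planck model of the paper; tau and the temperature are assumed parameters.

```python
import math
import random

def ou_velocity_step(v, u_mean, tau, temperature, dt, rng=random):
    """One Euler-Maruyama step of dv = -(v - u)/tau dt + sqrt(2T/tau) dW.

    Each computational particle takes such steps independently, so the cost
    per time step is linear in the particle count, independent of Knudsen
    number (the efficiency argument made in the abstract)."""
    drift = -(v - u_mean) / tau * dt
    noise = math.sqrt(2.0 * temperature / tau * dt) * rng.gauss(0.0, 1.0)
    return v + drift + noise
```

At zero temperature the step reduces to a deterministic relaxation of the velocity toward the mean flow at rate 1/tau.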
Energy Technology Data Exchange (ETDEWEB)
Lemoine, Romain; Behkish, Arsam; Sehabiague, Laurent; Heintz, Yannick J.; Morsi, Badie I. [Chemical and Petroleum Engineering Department, University of Pittsburgh, Pittsburgh, PA 15261 (United States); Oukaci, Rachid [Energy Technology Partners, Pittsburgh, PA 15238 (United States)
2008-04-15
A large number of experimental data points obtained in our laboratory as well as from the literature, covering wide ranges of reactor geometry (column diameter, gas distributor type/open area), physicochemical properties (liquid and gas densities and molecular weights, liquid viscosity and surface tension, gas diffusivity, solid particle size/density), and operating variables (superficial gas velocity, temperature and pressure, solid loading, impurities concentration, mixtures), were used to develop empirical as well as Back-Propagation Neural Network (BPNN) correlations in order to predict the hydrodynamic and mass transfer parameters in bubble column reactors (BCRs) and slurry bubble column reactors (SBCRs). The empirical and BPNN correlations developed were incorporated in an algorithm for predicting gas holdups (ε_G, ε_G-Small, ε_G-Large); volumetric liquid-side mass transfer coefficients (k_L a, k_L a-Small, k_L a-Large); Sauter mean bubble diameters (d_S, d_S-Small, d_S-Large); gas-liquid interfacial areas (a, a_Small, a_Large); and liquid-side mass transfer coefficients (k_L, k_L-Large, k_L-Small) for total, small and large gas bubbles in BCRs and SBCRs. The developed algorithm was used to predict the effects of reactor diameter and solid (alumina) loading on the hydrodynamic and mass transfer parameters in the Fischer-Tropsch (F-T) synthesis for the hydrogenation of carbon monoxide in an SBCR, and to predict the effects of the presence of organic impurities (which decrease the liquid surface tension) and air superficial mass velocity in the Loprox process for the wet air oxidation of organic pollutants in a BCR. In the F-T process, the predictions showed that increasing the reactor diameter from 0.1 to 7.0 m and/or increasing the alumina loading from 25 to 50 wt.% significantly decreased ε_G, k_L a_H2 and k_L a_CO and
A Synchronous-Asynchronous Particle Swarm Optimisation Algorithm
Ab Aziz, Nor Azlina; Mubin, Marizan; Mohamad, Mohd Saberi; Ab Aziz, Kamarulzaman
2014-01-01
In the original particle swarm optimisation (PSO) algorithm, the particles' velocities and positions are updated after the whole swarm performance is evaluated. This algorithm is also known as synchronous PSO (S-PSO). The strength of this update method is in the exploitation of the information. Asynchronous update PSO (A-PSO) has been proposed as an alternative to S-PSO. A particle in A-PSO updates its velocity and position as soon as its own performance has been evaluated. Hence, particles are updated using partial information, leading to stronger exploration. In this paper, we attempt to improve PSO by merging both update methods to utilise the strengths of both methods. The proposed synchronous-asynchronous PSO (SA-PSO) algorithm divides the particles into smaller groups. The best member of a group and the swarm's best are chosen to lead the search. Members within a group are updated synchronously, while the groups themselves are asynchronously updated. Five well-known unimodal functions, four multimodal functions, and a real world optimisation problem are used to study the performance of SA-PSO, which is compared with the performances of S-PSO and A-PSO. The results are statistically analysed and show that the proposed SA-PSO has performed consistently well. PMID:25121109
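The synchronous-within-groups, asynchronous-across-groups update described above can be sketched as follows. This is a minimal illustration using standard PSO velocity rules, not the authors' exact SA-PSO: all members of a group are moved before the group is re-evaluated (synchronous), while groups are processed one after another so later groups immediately see earlier groups' improvements (asynchronous).

```python
import random

def sphere_fn(x):
    return sum(xi * xi for xi in x)

def sa_pso(f, dim=2, n_groups=3, group_size=4, iters=50, seed=1):
    """Toy synchronous-asynchronous PSO: members within a group update
    together; groups update sequentially against a shared global best."""
    rng = random.Random(seed)
    n = n_groups * group_size
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest_pos, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for grp in range(n_groups):                      # groups: asynchronous
            members = range(grp * group_size, (grp + 1) * group_size)
            lead = min(members, key=lambda i: pbest_f[i])  # group leader
            for i in members:                            # members: synchronous
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * rng.random() * (pbest[lead][d] - pos[i][d])
                                 + c2 * rng.random() * (gbest_pos[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
            for i in members:                 # evaluate after whole-group move
                fi = f(pos[i])
                if fi < pbest_f[i]:
                    pbest[i], pbest_f[i] = pos[i][:], fi
                    if fi < gbest_f:
                        gbest_pos, gbest_f = pos[i][:], fi
    return gbest_f

best = sa_pso(sphere_fn)
```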
STAR FORMATION AND FEEDBACK IN SMOOTHED PARTICLE HYDRODYNAMIC SIMULATIONS. II. RESOLUTION EFFECTS
International Nuclear Information System (INIS)
Christensen, Charlotte R.; Quinn, Thomas; Bellovary, Jillian; Stinson, Gregory; Wadsley, James
2010-01-01
We examine the effect of mass and force resolution on a specific star formation (SF) recipe using a set of N-body/smooth particle hydrodynamic simulations of isolated galaxies. Our simulations span halo masses from 10^9 to 10^13 M_sun, more than 4 orders of magnitude in mass resolution, and 2 orders of magnitude in the gravitational softening length, ε, representing the force resolution. We examine the total global SF rate, the SF history, and the quantity of stellar feedback and compare the disk structure of the galaxies. Based on our analysis, we recommend using at least 10^4 particles each for the dark matter (DM) and gas components and a force resolution of ε ∼ 10^-3 R_vir when studying global SF and feedback. When the spatial distribution of stars is important, the number of gas and DM particles must be increased to at least 10^5 of each. Low-mass resolution simulations with fixed softening lengths show particularly weak stellar disks due to two-body heating. While decreasing spatial resolution in low-mass resolution simulations limits two-body effects, density and potential gradients cannot be sustained. Regardless of the softening, low-mass resolution simulations contain fewer high-density regions where SF may occur. Galaxies of approximately 10^10 M_sun display unique sensitivity to both mass and force resolution. This mass of galaxy has a shallow potential and is on the verge of forming a disk. The combination of these factors gives this galaxy the potential for strong gas outflows driven by supernova feedback and makes it particularly sensitive to any changes to the simulation parameters.
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled of Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
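The asynchronous master-worker pattern described above can be illustrated with a small sketch (a generic illustration, not the paper's implementation): evaluations are submitted to a pool and results are consumed as each one finishes, so fast workers never idle behind slow design points, whereas a synchronous scheme would barrier-wait on the whole batch.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def expensive_eval(x):
    # stand-in for a costly analysis whose runtime varies per design point
    time.sleep(0.005 * (x % 3))
    return x * x

def async_evaluate(points, workers=4):
    """Asynchronous evaluation: consume each result as soon as it is ready
    rather than waiting for the whole design iteration to complete."""
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(expensive_eval, p): p for p in points}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results

res = async_evaluate(range(8))
```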
Directory of Open Access Journals (Sweden)
Alejandro Acevedo-Malavé
2012-06-01
Smoothed Particle Hydrodynamics (SPH) is a Lagrangian mesh-free formalism that has proved useful for modelling continuous fluids. The formalism solves the Navier-Stokes equations by replacing the fluid with a set of particles, which serve as interpolation points from which properties of the fluid can be determined. In this study, the SPH method is applied to simulate the hydrodynamic interaction of many drops, showing some settings for the coalescence, fragmentation and flocculation of equally sized liquid drops in three-dimensional space. For small velocities the drops interact only through their deformed surfaces and flocculation of the droplets arises. The outcome is very different if the collision velocity is large enough that fragmentation of the droplets takes place. We observe that for velocities around 15 mm/ms coalescence of the droplets occurs. The velocity vector fields formed inside the drops during the collision process are shown.
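The core SPH idea the abstract relies on, replacing the fluid with interpolation particles, can be sketched as a kernel-weighted density estimate. This is a generic 1D illustration with the standard cubic spline kernel, not the authors' code:

```python
def cubic_spline_w(r, h):
    """Standard 1D cubic spline SPH kernel with normalization 2/(3h)."""
    q = r / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q * (1.0 - 0.5 * q))
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(positions, masses, h):
    """SPH density estimate: rho_i = sum_j m_j * W(|x_i - x_j|, h)."""
    return [sum(m_j * cubic_spline_w(abs(x_i - x_j), h)
                for x_j, m_j in zip(positions, masses))
            for x_i in positions]

# uniformly spaced unit-density particles: interior estimate should be ~1
xs = [i * 0.1 for i in range(41)]
ms = [0.1] * 41
rho = sph_density(xs, ms, 0.12)
```

Any field quantity (pressure, velocity) is interpolated the same way, which is what lets SPH dispense with a mesh.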
Economic dispatch optimization algorithm based on particle diffusion
International Nuclear Information System (INIS)
Han, Li; Romero, Carlos E.; Yao, Zheng
2015-01-01
Highlights: • A dispatch model that considers fuel, emissions control and wind power cost is built. • An optimization algorithm named diffusion particle optimization (DPO) is proposed. • DPO was used to analyze the impact of wind power risk and emissions on dispatch. - Abstract: Due to the widespread installation of emissions control equipment in fossil fuel-fired power plants, the cost of emissions control needs to be considered, together with the plant fuel cost, in providing economic power dispatch of those units to the grid. On the other hand, while using wind power decreases the overall power generation cost for the power grid, it poses a risk to a traditional grid because of its inherent stochastic characteristics. Therefore, an economic dispatch optimization model needs to consider all of the fuel cost, emissions control cost and wind power cost for each of the generating units comprising the fleet that meets the required grid power demand. In this study, an optimization algorithm referred to as diffusion particle optimization (DPO) is proposed to solve such a complex optimization problem. In this algorithm, Brownian motion theory is used to guide the movement of particles so that the particles can search for an optimal solution over the entire definition region. Several benchmark functions and power grid system data were used to test the performance of DPO against traditional algorithms used for economic dispatch optimization, such as particle swarm optimization and the artificial bee colony algorithm. It was found that DPO is less likely to be trapped in local optima, and across different power systems DPO was able to find economic dispatch solutions with lower costs. DPO was also used to analyze the impact of wind power risk and fossil unit emissions coefficients on power dispatch. The results are encouraging for the use of DPO as a dynamic tool for economic dispatch of the power grid.
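A minimal sketch of Brownian-motion-guided search in the spirit described above (an illustration only; the authors' DPO details are not reproduced here): each particle drifts toward the best-known point while a Gaussian diffusion term with decaying amplitude keeps it exploring the definition region.

```python
import random

def diffusion_minimize(f, dim=2, n=20, iters=200, seed=0):
    """Toy diffusion-guided minimizer: drift toward the incumbent best plus
    Brownian (Gaussian) perturbation; moves are accepted greedily."""
    rng = random.Random(seed)
    pts = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    best = min(pts, key=f)
    best_f = f(best)
    for t in range(iters):
        sigma = 1.0 * (1.0 - t / iters) + 0.01   # diffusion amplitude decays
        for i, p in enumerate(pts):
            cand = [pi + 0.5 * (b - pi) + rng.gauss(0.0, sigma)
                    for pi, b in zip(p, best)]
            if f(cand) < f(p):
                pts[i] = cand
                if f(cand) < best_f:
                    best, best_f = cand, f(cand)
    return best_f
```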
A multi-parametric particle-pairing algorithm for particle tracking in single and multiphase flows
International Nuclear Information System (INIS)
Cardwell, Nicholas D; Vlachos, Pavlos P; Thole, Karen A
2011-01-01
Multiphase flows (MPFs) offer a rich area of fundamental study with many practical applications. Examples of such flows range from the ingestion of foreign particulates in gas turbines to the transport of particles within the human body. Experimental investigation of MPFs, however, is challenging and requires techniques that simultaneously resolve both the carrier and discrete phases present in the flowfield. This paper presents a new multi-parametric particle-pairing algorithm for particle tracking velocimetry (MP3-PTV) in MPFs. MP3-PTV improves upon previous particle tracking algorithms by employing a novel variable pair-matching algorithm which utilizes displacement preconditioning in combination with estimated particle size and intensity to match particle pairs between successive images more effectively and accurately. To improve the method's efficiency, a new particle identification and segmentation routine was also developed. Validation of the new method was initially performed on two artificial data sets: a traditional single-phase flow published by the Visualization Society of Japan (VSJ) and an in-house generated MPF data set having a bi-modal distribution of particle diameters. Metrics of the measurement yield, reliability and overall tracking efficiency were used for method comparison. On the VSJ data set, the newly presented segmentation routine delivered a twofold improvement in identifying particles when compared to other published methods. For the simulated MPF data set, the measurement efficiency of the carrier phase improved from 9% to 41% for MP3-PTV as compared to a traditional hybrid PTV. When employed on experimental data of a gas–solid flow, MP3-PTV effectively identified the two particle populations and reported a vector efficiency and velocity measurement error comparable to measurements for the single-phase flow images. Simultaneous measurement of the dispersed particle and carrier flowfield velocities allowed for the calculation of
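The multi-parametric pairing idea can be sketched as follows (an illustration of the concept, not the published MP3-PTV): candidate pairs inside a displacement window are ranked by a cost that mixes distance with particle-size and intensity differences, then matched greedily. The weights `w_size` and `w_int` are illustrative assumptions.

```python
def match_particles(frame_a, frame_b, max_disp=5.0, w_size=1.0, w_int=0.01):
    """Greedy multi-parametric particle pairing. Each particle is a tuple
    (x, y, size, intensity); returns matched (index_a, index_b) pairs."""
    candidates = []
    for i, (xa, ya, sa, ia) in enumerate(frame_a):
        for j, (xb, yb, sb, ib) in enumerate(frame_b):
            d = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
            if d <= max_disp:                    # displacement preconditioning
                cost = d + w_size * abs(sa - sb) + w_int * abs(ia - ib)
                candidates.append((cost, i, j))
    candidates.sort()                            # cheapest matches first
    used_a, used_b, pairs = set(), set(), []
    for cost, i, j in candidates:
        if i not in used_a and j not in used_b:
            pairs.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return sorted(pairs)
```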
Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms.
Garro, Beatriz A; Vázquez, Roberto A
2015-01-01
Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems.
Energy Technology Data Exchange (ETDEWEB)
Jo, Young Beom; Kim, Eung Soo [Seoul National Univ., Seoul (Korea, Republic of)
2014-10-15
The analysis becomes more complicated when the shape and phase of the ground below the seawater are considered, so different approaches are required to precisely analyze the behavior of a tsunami. This paper introduces on-going code development activities at SNU based on an unconventional mesh-free fluid analysis method called Smoothed Particle Hydrodynamics (SPH), together with verification work on some practice simulations. This paper summarizes the on-going development and verification activities on the Lagrangian mesh-free SPH code in SNU. The newly developed code covers the equations of motion and the heat conduction equation so far, and verification of each model is complete. In addition, parallel computation using GPUs is now possible, and a GUI is also provided; by changing the input geometry or input values, users can run simulations for various conditions and geometries. The SPH method has large advantages and potential for modelling free surfaces, highly deformable geometries and multi-phase problems that traditional grid-based codes have difficulty analyzing. Therefore, by incorporating more complex physical models such as turbulent flow, phase change, two-phase flow and even solid mechanics, the application range of the current SPH code is expected to extend considerably, including molten fuel behavior in severe accidents.
Smooth Particle Hydrodynamics GPU-Acceleration Tool for Asteroid Fragmentation Simulation
Buruchenko, Sergey K.; Schäfer, Christoph M.; Maindl, Thomas I.
2017-10-01
The impact threat of near-Earth objects (NEOs) is a concern to the global community, as evidenced by the Chelyabinsk event (caused by a 17-m meteorite) in Russia on February 15, 2013 and a near miss by asteroid 2012 DA14 (about 30 m in diameter) on the same day. The expected energy, from either a low-altitude air burst or a direct impact, would have severe consequences, especially in populated regions. One method to mitigate this threat is the employment of large kinetic-energy impactors (KEIs). The simulation of asteroid target fragmentation is a challenging task which demands efficient and accurate numerical methods with large computational power. Modern graphics processing units (GPUs) provide a major increase, of 10 times and more, in the performance of computations of astrophysical and high-velocity impacts. The paper presents a new implementation of the numerical method smooth particle hydrodynamics (SPH) using NVIDIA GPUs and the first astrophysical and high-velocity application of the new code. The code allows for a tremendous increase in the speed of astrophysical simulations with SPH and self-gravity at low cost for new hardware. We have implemented the SPH equations to model gases, liquids, and elastic and plastic solid bodies, and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations.
Simulation of impact ballistic of Cu-10wt%Sn frangible bullet using smoothed particle hydrodynamics
Hidayat, Mas Irfan P.; Widyastuti, Simaremare, Peniel
2018-04-01
A frangible bullet is designed to disintegrate upon impact against a hard target; understanding the impact response and performance of frangible bullets is therefore of high interest. In this paper, simulation of the impact ballistics of a Cu-10wt%Sn frangible bullet using the smoothed particle hydrodynamics (SPH) method is presented. The frangible bullet is impacted against a hard, cylindrical stainless steel target. The effect of variability in the frangible bullet material properties, due to the variation of sintering temperature in its manufacturing process, on the bullet frangibility factor (FF) is investigated numerically. In addition, the bullet kinetic energy during impact as well as its ricochet and fragmentation are also examined and simulated. A failure criterion based upon maximum strain is employed in the simulation. It is shown that the SPH simulation can produce a good estimate of the kinetic energy of the bullet after impact, thus giving the FF prediction with respect to the variation of frangible bullet material properties. In comparison to explicit finite element (FE) simulation, in which fragmentation can only be represented by material/element deletion, the SPH simulation shows frangible bullet fragmentation directly. As a result, the effect of sintering temperature on the way the frangible bullet fragments can also be observed clearly.
Pavlović, Marko Z.; Urošević, Dejan; Arbutina, Bojan; Orlando, Salvatore; Maxted, Nigel; Filipović, Miroslav D.
2018-01-01
We present a model for the radio evolution of supernova remnants (SNRs) obtained by using three-dimensional hydrodynamic simulations coupled with nonlinear kinetic theory of cosmic-ray (CR) acceleration in SNRs. We model the radio evolution of SNRs on a global level by performing simulations for a wide range of the relevant physical parameters, such as the ambient density, supernova (SN) explosion energy, acceleration efficiency, and magnetic field amplification (MFA) efficiency. We attribute the observed spread of radio surface brightnesses for corresponding SNR diameters to the spread of these parameters. In addition to our simulations of Type Ia SNRs, we also considered SNR radio evolution in denser, nonuniform circumstellar environments modified by the progenitor star wind. These simulations start with the mass of the ejecta substantially higher than in the case of a Type Ia SN and presumably lower shock speed. The magnetic field is understandably seen as very important for the radio evolution of SNRs. In terms of MFA, we include both resonant and nonresonant modes in our large-scale simulations by implementing models obtained from first-principles, particle-in-cell simulations and nonlinear magnetohydrodynamical simulations. We test the quality and reliability of our models on a sample consisting of Galactic and extragalactic SNRs. Our simulations give Σ ‑ D slopes between ‑4 and ‑6 for the full Sedov regime. Recent empirical slopes obtained for the Galactic samples are around ‑5, while those for the extragalactic samples are around ‑4.
Kordilla, Jannes; Pan, Wenxiao; Tartakovsky, Alexandre
2014-12-14
We propose a novel smoothed particle hydrodynamics (SPH) discretization of the fully coupled Landau-Lifshitz-Navier-Stokes (LLNS) and stochastic advection-diffusion equations. The accuracy of the SPH solution of the LLNS equations is demonstrated by comparing the scaling of velocity variance and the self-diffusion coefficient with kinetic temperature and particle mass obtained from the SPH simulations and analytical solutions. The spatial covariance of pressure and velocity fluctuations is found to be in good agreement with theoretical models. To validate the accuracy of the SPH method for the coupled LLNS and advection-diffusion equations, we simulate the interface between two miscible fluids. We study the formation of so-called "giant fluctuations" of the front between light and heavy fluids with and without gravity, where the light fluid lies on top of the heavy fluid. We find that the power spectra of the simulated concentration field are in good agreement with the experiments and analytical solutions. In the absence of gravity, the power spectra decay as the −4 power of the wavenumber, except for small wavenumbers, which diverge from this power-law behavior due to the effect of finite domain size. Gravity suppresses the fluctuations, resulting in a much weaker dependence of the power spectra on the wavenumber. Finally, the model is used to study the effect of thermal fluctuations on the Rayleigh-Taylor instability, the unstable dynamics of a front in which a heavy fluid overlies a light fluid. The front dynamics is shown to agree well with the analytical solutions.
Thermo-hydrodynamic lubrication in hydrodynamic bearings
Bonneau, Dominique; Souchet, Dominique
2014-01-01
This Series provides the necessary elements to the development and validation of numerical prediction models for hydrodynamic bearings. This book describes the thermo-hydrodynamic and the thermo-elasto-hydrodynamic lubrication. The algorithms are methodically detailed and each section is thoroughly illustrated.
Improved multi-objective clustering algorithm using particle swarm optimization.
Gong, Congcong; Chen, Haisong; He, Weixiong; Zhang, Zhanliang
2017-01-01
Multi-objective clustering has received widespread attention recently, as it can obtain more accurate and reasonable solution. In this paper, an improved multi-objective clustering framework using particle swarm optimization (IMCPSO) is proposed. Firstly, a novel particle representation for clustering problem is designed to help PSO search clustering solutions in continuous space. Secondly, the distribution of Pareto set is analyzed. The analysis results are applied to the leader selection strategy, and make algorithm avoid trapping in local optimum. Moreover, a clustering solution-improved method is proposed, which can increase the efficiency in searching clustering solution greatly. In the experiments, 28 datasets are used and nine state-of-the-art clustering algorithms are compared, the proposed method is superior to other approaches in the evaluation index ARI.
Particle identification algorithms for the PANDA Endcap Disc DIRC
Schmidt, M.; Ali, A.; Belias, A.; Dzhygadlo, R.; Gerhardt, A.; Götzen, K.; Kalicy, G.; Krebs, M.; Lehmann, D.; Nerling, F.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwarz, C.; Schwiening, J.; Traxler, M.; Böhm, M.; Eyrich, W.; Lehmann, A.; Pfaffinger, M.; Uhlig, F.; Düren, M.; Etzelmüller, E.; Föhl, K.; Hayrapetyan, A.; Kreutzfeld, K.; Merle, O.; Rieke, J.; Wasem, T.; Achenbach, P.; Cardinali, M.; Hoek, M.; Lauth, W.; Schlimme, S.; Sfienti, C.; Thiel, M.
2017-12-01
The Endcap Disc DIRC has been developed to provide excellent particle identification for the future PANDA experiment by separating pions and kaons up to a momentum of 4 GeV/c with a separation power of 3 standard deviations in the polar angle region from 5° to 22°. This goal will be achieved using dedicated particle identification algorithms based on likelihood methods, which will be applied in offline analysis and online event filtering. This paper evaluates the resulting PID performance using Monte Carlo simulations to study basic single-track PID as well as the analysis of complex physics channels. The online reconstruction algorithm has been tested with a Virtex-4 FPGA card and optimized with regard to the resulting constraints.
Canonical algorithms for numerical integration of charged particle motion equations
Efimov, I. N.; Morozov, E. A.; Morozova, A. R.
2017-02-01
A technique for numerically integrating the equation of charged particle motion in a magnetic field is considered. It is based on canonical transformations of the phase space in Hamiltonian mechanics. The canonical transformations make the integration process stable against the accumulation of numerical error. The integration algorithms require a minimum of arithmetic operations and can be used to design accelerators and devices of electron and ion optics.
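A widely used structure-preserving integrator in this spirit is the Boris scheme for dx/dt = v, dv/dt = (q/m) v × B, shown here as a generic illustration (the paper's own canonical algorithms differ in detail). With no electric field, the Boris rotation preserves the particle speed exactly, the kind of stability against error accumulation the abstract refers to:

```python
def boris_push(x, v, qm, B, dt, steps):
    """Boris scheme for a charged particle in a uniform magnetic field B
    (no electric field). qm is the charge-to-mass ratio q/m."""
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    for _ in range(steps):
        t = tuple(0.5 * dt * qm * Bi for Bi in B)          # half-step rotation vector
        t2 = sum(ti * ti for ti in t)
        s = tuple(2.0 * ti / (1.0 + t2) for ti in t)
        v_prime = tuple(vi + ci for vi, ci in zip(v, cross(v, t)))
        v = tuple(vi + ci for vi, ci in zip(v, cross(v_prime, s)))
        x = tuple(xi + dt * vi for xi, vi in zip(x, v))    # drift with rotated v
    return x, v
```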
International Nuclear Information System (INIS)
Li Tianjin; Zhang He; Huang Zhiyong; Q, Weiwei; Bo Hanliang
2014-01-01
The absorber sphere pneumatic conveying system in the pebble-bed high temperature gas-cooled reactor is a special application of pneumatic conveying technology. The whole conveying process is an intermittent circulation of absorber spheres between the side reflector borings and the sphere storage vessel in the reactor. The absorber spheres are designed to drop into the reflector borings under their own gravity when the sphere discharge valve is opened by the driving mechanism, and are transported back to the sphere storage vessel when the reactor needs to be started up. The hydrodynamics and particle motion behavior of the absorber spheres are very important for the design and operation of this special pneumatic conveying system. The whole conveying process consists of four subprocesses: discharge of spheres from the sphere storage vessel and the side reflector boring, entrainment of spheres in the feeder, conveying of spheres in the transport pipe, and gas-solid separation and piling of spheres in the sphere storage vessel. The research status on the hydrodynamics and particle motion behavior of absorber spheres in the pneumatic conveying system of HTR-PM is introduced, mainly from the viewpoint of granular flow and gas-solid flow. The experimental systems and apparatus constructed, and the numerical simulation work conducted, to investigate the absorber sphere pneumatic conveying process are introduced. Some typical experimental and numerical simulation results on the hydrodynamics and particle motion behavior of absorber sphere conveying are briefly reported. (author)
International Nuclear Information System (INIS)
Krafft, G.A.; Mark, J.W.K.; Wang, T.S.F.
1983-01-01
In an earlier paper, closed hydrodynamic equations were derived with possible application to the simulation of beam plasmas relevant to designs of heavy ion accelerators for inertial confinement fusion energy applications. The closure equations involved a novel feature of anisotropic stresses even transverse to the beam. A related hydrodynamic model is used in this paper to examine further the boundaries of validity of such hydrodynamic approximations. It is also proposed as a useful tool to provide an economic means for searching the large parameter space relevant to three-dimensional stability problems involving coupling of longitudinal and transverse motions in the presence of wall impedance
Smoothed particle hydrodynamics study of the roughness effect on contact angle and droplet flow.
Shigorina, Elena; Kordilla, Jannes; Tartakovsky, Alexandre M
2017-09-01
We employ a pairwise force smoothed particle hydrodynamics (PF-SPH) model to simulate sessile and transient droplets on rough hydrophobic and hydrophilic surfaces. PF-SPH allows modeling of free-surface flows without discretizing the air phase, which is achieved by imposing the surface tension and dynamic contact angles with pairwise interaction forces. We use the PF-SPH model to study the effect of surface roughness and microscopic contact angle on the effective contact angle and droplet dynamics. In the first part of this work, we investigate static contact angles of sessile droplets on different types of rough surfaces. We find that the effective static contact angles of Cassie and Wenzel droplets on a rough surface are greater than the corresponding microscale static contact angles. As a result, microscale hydrophobic rough surfaces also show effective hydrophobic behavior. On the other hand, microscale hydrophilic surfaces may be macroscopically hydrophilic or hydrophobic, depending on the type of roughness. We study the dependence of the transition between Cassie and Wenzel states on roughness and droplet size, which can be linked to the critical pressure for the given fluid-substrate combination. We observe good agreement between simulations and theoretical predictions. Finally, we study the impact of the roughness orientation (i.e., an anisotropic roughness) and surface inclination on droplet flow velocities. Simulations show that droplet flow velocities are lower if the surface roughness is oriented perpendicular to the flow direction. If the predominant elements of surface roughness are in alignment with the flow direction, the flow velocities increase compared to smooth surfaces, which can be attributed to the decrease in fluid-solid contact area similar to the lotus effect. We demonstrate that classical linear scaling relationships between Bond and capillary numbers for droplet flow on flat surfaces also hold for flow on rough surfaces.
An External Archive-Guided Multiobjective Particle Swarm Optimization Algorithm.
Zhu, Qingling; Lin, Qiuzhen; Chen, Weineng; Wong, Ka-Chun; Coello Coello, Carlos A; Li, Jianqiang; Chen, Jianyong; Zhang, Jun
2017-09-01
The selection of swarm leaders (i.e., the personal best and global best), is important in the design of a multiobjective particle swarm optimization (MOPSO) algorithm. Such leaders are expected to effectively guide the swarm to approach the true Pareto optimal front. In this paper, we present a novel external archive-guided MOPSO algorithm (AgMOPSO), where the leaders for velocity update are all selected from the external archive. In our algorithm, multiobjective optimization problems (MOPs) are transformed into a set of subproblems using a decomposition approach, and then each particle is assigned accordingly to optimize each subproblem. A novel archive-guided velocity update method is designed to guide the swarm for exploration, and the external archive is also evolved using an immune-based evolutionary strategy. These proposed approaches speed up the convergence of AgMOPSO. The experimental results fully demonstrate the superiority of our proposed AgMOPSO in solving most of the test problems adopted, in terms of two commonly used performance measures. Moreover, the effectiveness of our proposed archive-guided velocity update method and immune-based evolutionary strategy is also experimentally validated on more than 30 test MOPs.
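The external archive at the heart of the approach can be sketched minimally as a set of mutually nondominated solutions (this shows only the archive bookkeeping; AgMOPSO additionally uses decomposition into subproblems and an immune-based evolution of the archive):

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, cand):
    """Insert a candidate objective vector, keeping only nondominated points.
    Leaders for the velocity update would then be drawn from this archive."""
    if any(dominates(a, cand) for a in archive):
        return archive                                  # candidate is dominated
    return [a for a in archive if not dominates(cand, a)] + [cand]

archive = []
for point in [(1, 3), (3, 1), (2, 2), (2, 4), (0, 5)]:
    archive = update_archive(archive, point)
```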
Digital signal processing algorithms for nuclear particle spectroscopy
International Nuclear Information System (INIS)
Zejnalova, O.; Zejnalov, Sh.; Hambsch, F.J.; Oberstedt, S.
2007-01-01
Digital signal processing algorithms for nuclear particle spectroscopy are described, along with a digital pile-up elimination method applicable to equidistantly sampled detector signals pre-processed by a charge-sensitive preamplifier. The signal processing algorithms are provided as recursive one- or multi-step procedures which can be easily programmed using modern computer programming languages. The influence of the number of bits of the sampling analogue-to-digital converter on the final signal-to-noise ratio of the spectrometer is considered. Algorithms for a digital shaping-filter amplifier, for a digital pile-up elimination scheme and for ballistic deficit correction were investigated using a high purity germanium detector. The pile-up elimination method was originally developed for fission fragment spectroscopy using a Frisch-grid back-to-back double ionization chamber and was mainly intended for pile-up elimination in the case of high alpha-radioactivity of the fissile target. The developed pile-up elimination method affects only the electronic noise generated by the preamplifier; therefore, the influence of the pile-up elimination scheme on the final resolution of the spectrometer is investigated in terms of the distance between pile-up pulses. The efficiency of the developed algorithms is compared with other signal processing schemes published in the literature.
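One common recursive shaping-filter form is the trapezoidal shaper, sketched below for a step-like preamplifier signal (a generic illustration, not the authors' algorithms; pole-zero correction for a finite preamplifier decay time is omitted, and the parameter names are illustrative):

```python
def trapezoidal_shaper(x, k, gap):
    """Difference of two delayed k-sample moving averages: turns a unit step
    into a trapezoid with a rise of k samples and a flat top of `gap`
    samples. Prefix sums make each output an O(1) recursive update."""
    c = [0.0]
    for v in x:
        c.append(c[-1] + v)                  # running (prefix) sum of samples
    def ma(i):                               # mean of x[max(i-k+1,0) .. i]
        if i < 0:
            return 0.0
        lo = max(i - k + 1, 0)
        return (c[i + 1] - c[lo]) / k
    return [ma(i) - ma(i - k - gap) for i in range(len(x))]

# a unit step at sample 10 yields a trapezoid with a unit-height flat top
signal = [0.0] * 10 + [1.0] * 50
shaped = trapezoidal_shaper(signal, 8, 4)
```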
Optimal configuration of power grid sources based on optimal particle swarm algorithm
Wen, Yuanhua
2018-04-01
To optimize the configuration of power grid sources, an improved particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are introduced. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated respectively. Comparison of the test results demonstrates the superiority of the improved algorithm in convergence and optimization performance, which lays the foundation for solving the subsequent micro-grid power configuration optimization problem.
International Nuclear Information System (INIS)
Huang, Xiaobiao; Safranek, James
2014-01-01
Nonlinear dynamics optimization is carried out for a low-emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms is compared. The result shows that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm, and it does not require seeding of good solutions in the initial population. These advantages of the particle swarm algorithm may make it more suitable for many accelerator optimization applications.
Energy Technology Data Exchange (ETDEWEB)
Huang, Xiaobiao, E-mail: xiahuang@slac.stanford.edu; Safranek, James
2014-09-01
Nonlinear dynamics optimization is carried out for a low-emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms is compared. The result shows that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm, and it does not require seeding of good solutions in the initial population. These advantages of the particle swarm algorithm may make it more suitable for many accelerator optimization applications.
Capecelatro, Jesse
2018-03-01
It has long been suggested that a purely Lagrangian solution to global-scale atmospheric/oceanic flows could potentially outperform traditional Eulerian schemes; however, a demonstration of a scalable and practical framework has remained elusive. Motivated by recent progress in particle-based methods applied to convection-dominated flows, this work presents a fully Lagrangian method for solving the inviscid shallow water equations on a rotating sphere in a smoothed particle hydrodynamics framework. To avoid singularities at the poles, the governing equations are solved in Cartesian coordinates, augmented with a Lagrange multiplier to ensure that fluid particles are constrained to the surface of the sphere. An underlying grid in spherical coordinates is used to facilitate efficient neighbor detection and parallelization. The method is applied to a suite of canonical test cases, and conservation, accuracy, and parallel performance are assessed.
International Nuclear Information System (INIS)
Wong Unhong; Wong Honcheng; Tang Zesheng
2010-01-01
Smoothed particle hydrodynamics (SPH), a class of meshfree particle methods (MPMs), has a wide range of applications from micro-scale to macro-scale and from discrete to continuum systems. Graphics hardware, originally designed for computer graphics, now provides unprecedented computational power for scientific computation, and particle systems require a huge amount of computation in physical simulation. In this paper, an efficient parallel implementation of an SPH method on graphics hardware using the Compute Unified Device Architecture (CUDA) is developed for fluid simulation. Compared with the corresponding CPU implementation, our experimental results show that the new approach achieves significant speedups by handling the huge amount of computation in parallel on graphics hardware.
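The per-particle work that makes SPH a good fit for GPUs is visible even in a serial sketch: each particle's density is an independent kernel-weighted sum over its neighbors, so one GPU thread can own one particle. Below is a minimal CPU illustration (2D cubic-spline kernel, brute-force all-pairs neighbors; a real CUDA version would add neighbor lists), written as an assumption-laden sketch rather than the paper's implementation:

```python
import numpy as np

def cubic_spline_kernel(r, h):
    # Standard 2D cubic-spline SPH kernel with support radius 2h.
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h * h)     # 2D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """O(N^2) all-pairs density summation; on a GPU each particle's
    sum would be computed by one thread (neighbor lists cut the cost)."""
    n = len(positions)
    rho = np.zeros(n)
    for i in range(n):
        r = np.linalg.norm(positions - positions[i], axis=1)
        rho[i] = np.sum(masses * cubic_spline_kernel(r, h))
    return rho
```

For a uniform particle lattice with spacing dx and h around 1.3 dx, the summation density at interior particles reproduces the reference density to within a few per cent, which is a common sanity check for SPH codes.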
A multi-frame particle tracking algorithm robust against input noise
International Nuclear Information System (INIS)
Li, Dongning; Zhang, Yuanhui; Sun, Yigang; Yan, Wei
2008-01-01
The performance of a particle tracking algorithm that detects particle trajectories from discretely recorded particle positions can be substantially hindered by input noise. In this paper, a particle tracking algorithm is developed that is robust against input noise. This algorithm employs a regression method, instead of the extrapolation method usually employed by existing algorithms, to predict future particle positions. If a trajectory cannot be linked to a particle in one frame, the algorithm can still proceed by trying to find a candidate in the next frame. The connectivity of tracked trajectories is inspected to remove false ones. The algorithm is validated with synthetic data. The results show that the algorithm is superior to traditional algorithms in tracking long trajectories.
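The regression idea, fitting the recent history of a track and evaluating the fit at the next frame rather than extrapolating from the last two or three points, can be sketched as follows (the window length and polynomial order are illustrative choices, not the paper's):

```python
import numpy as np

def predict_next(track, m=4, order=2):
    """Predict a particle's next position by least-squares polynomial
    regression over its last m known positions (one fit per axis).
    Regression smooths input noise, unlike pure extrapolation.
    track: sequence of shape (n_frames, dims)."""
    pts = np.asarray(track[-m:], dtype=float)
    t = np.arange(len(pts))
    pred = np.empty(pts.shape[1])
    for d in range(pts.shape[1]):
        coeffs = np.polyfit(t, pts[:, d], deg=min(order, len(pts) - 1))
        pred[d] = np.polyval(coeffs, len(pts))   # evaluate at next frame
    return pred
```

The predicted position is then matched against detected particles in the next frame; with noisy inputs the least-squares fit averages the noise over the window instead of amplifying it, which is the robustness property the abstract highlights.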
Dose calculations algorithm for narrow heavy charged-particle beams
Energy Technology Data Exchange (ETDEWEB)
Barna, E A; Kappas, C [Department of Medical Physics, School of Medicine, University of Patras (Greece)]; Scarlat, F [National Institute for Laser and Plasma Physics, Bucharest (Romania)]
1999-12-31
The dose-distributional advantages of heavy charged particles can be fully exploited by using very efficient and accurate dose calculation algorithms, which can generate optimal three-dimensional scanning patterns. An inverse therapy planning algorithm for dynamically scanned, narrow heavy charged-particle beams is presented in this paper. The irradiation 'start point' is defined at the distal end of the target volume, lower right in a beam's-eye view. The peak dose of the first elementary beam is set equal to the prescribed dose in the target volume and is defined as the reference dose. The weighting factor of any Bragg peak is determined by the residual dose at the point of irradiation, calculated as the difference between the reference dose and the cumulative dose delivered at that point by all the previous Bragg peaks. The final pattern consists of the weighted Bragg-peak irradiation density. Dose distributions were computed using two different scanning steps, equal to 0.5 mm and 1 mm respectively. Very accurate and precisely localized dose distributions, conformal to the target volume, were obtained. (authors) 6 refs., 3 figs.
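The sequential weighting rule the abstract describes, each Bragg peak weighted by the residual dose (reference dose minus the dose already delivered at its irradiation point by previous peaks), can be sketched with a toy dose matrix; the matrix and names below are invented for illustration, not the authors' code:

```python
import numpy as np

def bragg_weights(dose_matrix, d_ref):
    """Distal-to-proximal sequential weighting of Bragg peaks.

    dose_matrix[i, j] : dose a unit-weight peak i deposits at point j,
    with peaks/points ordered distal first. Each weight covers exactly
    the dose still missing at the peak's own irradiation point."""
    dose_matrix = np.asarray(dose_matrix, dtype=float)
    n = len(dose_matrix)
    w = np.zeros(n)
    delivered = np.zeros(n)          # cumulative dose at each point
    for i in range(n):
        residual = d_ref - delivered[i]
        w[i] = max(residual, 0.0) / dose_matrix[i, i]
        delivered += w[i] * dose_matrix[i]
    return w
```

For two peaks where the distal peak spills 30% of its dose onto the proximal point, the proximal weight shrinks accordingly, and the delivered dose at both points ends up equal to the prescription.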
Energy Technology Data Exchange (ETDEWEB)
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain-decomposed Monte Carlo particle transport on up to 2^{21} = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load-imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple adjacent domains to maintain within-workgroup load balance and minimize memory usage.
An analysis of 3D particle path integration algorithms
International Nuclear Information System (INIS)
Darmofal, D.L.; Haimes, R.
1996-01-01
Several techniques for the numerical integration of particle paths in steady and unsteady vector (velocity) fields are analyzed. Most of the analysis applies to unsteady vector fields; however, some results apply to steady vector field integration. Multistep, multistage, and some hybrid schemes are considered. It is shown that, due to initialization errors, many unsteady particle path integration schemes are limited to third-order accuracy in time. Multistage schemes require at least three times more internal data storage than multistep schemes of equal order. However, for timesteps within the stability bounds, multistage schemes are generally more accurate. A linearized analysis shows that the stability of these integration algorithms is determined by the eigenvalues of the local velocity tensor. Thus, the accuracy and stability of the methods are interpreted with concepts typically used in critical point theory. This paper shows how integration schemes can lead to erroneous classification of critical points when the timestep is finite and fixed. For steady velocity fields, we demonstrate that timesteps outside of the relative stability region can lead to similar integration errors. From this analysis, guidelines for accurate timestep sizing are suggested for both steady and unsteady flows. In particular, using simulation data for the unsteady flow around a tapered cylinder, we show that accurate particle path integration requires timesteps which are at most on the order of the physical timescale of the flow.
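As a concrete example of the multistage schemes discussed, here is a minimal sketch of classical fourth-order Runge-Kutta particle path integration through a user-supplied velocity field; the solid-body-rotation check in the test is an illustrative assumption, not a case from the paper:

```python
import numpy as np

def rk4_step(x, t, dt, velocity):
    """One classical fourth-order Runge-Kutta (multistage) step of
    dx/dt = u(x, t). Unlike multistep schemes it needs no start-up
    history, at the price of extra velocity evaluations per step."""
    k1 = velocity(x, t)
    k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(x + dt * k3, t + dt)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def integrate_path(x0, t0, dt, n_steps, velocity):
    # Accumulate the particle path from the initial seed position.
    xs = [np.asarray(x0, dtype=float)]
    for n in range(n_steps):
        xs.append(rk4_step(xs[-1], t0 + n * dt, dt, velocity))
    return np.array(xs)
```

A simple accuracy check in the spirit of the paper's critical-point discussion: in the solid-body rotation field u = (-y, x), a particle seeded at (1, 0) should return to its start after one period, and any drift in radius reveals the scheme's amplitude error at the chosen timestep.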
Particle filters for object tracking: enhanced algorithm and efficient implementations
International Nuclear Information System (INIS)
Abd El-Halym, H.A.
2010-01-01
Object tracking and recognition is a hot research topic. In spite of the extensive research efforts expended, the development of a robust and efficient object tracking algorithm remains unsolved due to the inherent difficulty of the tracking problem. Particle filters (PFs) were recently introduced as a powerful post-Kalman estimation tool that provides a general framework for the estimation of nonlinear/non-Gaussian dynamic systems, and they have been advanced for building robust object trackers capable of operation under severe conditions (small image size, noisy background, occlusions, fast object maneuvers, etc.). The heavy computational load of the particle filter remains a major obstacle to its wide use. In this thesis, an Excitation Particle Filter (EPF) is introduced for object tracking. A new likelihood model is proposed that depends on multiple functions: a position likelihood, a gray-level intensity likelihood and a similarity likelihood. We also modified the PF as a robust estimator to overcome the well-known sample impoverishment problem of the PF; this modification is based on re-exciting the particles if their weights fall below a memorized weight value. The proposed enhanced PF is implemented in software and evaluated. Its results are compared with a single-likelihood-function PF tracker, a Particle Swarm Optimization (PSO) tracker, a correlation tracker, as well as an edge tracker. The experimental results demonstrated the superior performance of the proposed tracker in terms of accuracy, robustness, and occlusion handling compared with the other methods. Efficient novel hardware architectures of the Sampling Importance Resampling Filter (SIRF) and the EPF are implemented. Three novel hardware architectures of the SIRF for object tracking are introduced. The first architecture is a two-step sequential PF machine, where particle generation, weight calculation and normalization are carried out in parallel during the first step followed by a sequential re
International Nuclear Information System (INIS)
Tarasov, Y.A.
1987-01-01
Hydrodynamic theory is used to calculate the dependence of the mean transverse momentum p̄⊥ on the Feynman variable x and rapidity y for pions, kaons, and antiprotons at ISR and collider energies. Quantitative agreement with the experimental data (at ISR energies) is found. The values of p̄⊥(y) are determined by the profile of the "hardening" temperature as a function of rapidity, and this profile is calculated in the hydrodynamic theory. The experimental data fit this profile well. The dependence p̄⊥(x) for π⁻ mesons has the shape of a seagull wing. In the calculations the contribution of the Riemannian traveling wave is taken into account. For collider energies there are no experimental data, and the theoretical results play the role of predictions.
Hydrodynamic and thermal modeling of solid particles in a multi-phase, multi-component flow
International Nuclear Information System (INIS)
Tentner, A.M.; Wider, H.U.
1984-01-01
This paper presents the new thermal hydraulic models describing the hydrodynamics of the solid fuel/steel chunks during an LMFBR hypothetical core disruptive accident. These models, which account for two-way coupling between the solid and fluid phases, describe the mass, momentum and energy exchanges which occur when the chunks are present at any axial location. They have been incorporated in LEVITATE, a code for the analysis of fuel and cladding dynamics under Loss-of-Flow (LOF) conditions. Their influence on fuel motion is presented in the context of the L6 TREAT experiment analysis. It is shown that the overall hydrodynamic behavior of the molten fuel and solid fuel chunks is dependent on both the size of the chunks and the power level. At low and intermediate power levels the fuel motion is more dispersive when small chunks, rather than large ones, are present. At high power levels the situation is reversed
Vectorizing and macrotasking Monte Carlo neutral particle algorithms
International Nuclear Information System (INIS)
Heifetz, D.B.
1987-04-01
Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches in as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays as flights complete, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. On future machines it may be possible to push macrotasking to its limit, with each test flight, and each split test flight, being a separate task.
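The array-of-flights strategy, including compacting away the gaps left by completed flights, can be illustrated with a toy one-dimensional slab problem; the geometry, cross section and scattering model here are invented for illustration and are not the paper's plasma problem:

```python
import numpy as np

rng = np.random.default_rng(0)

def transport(n_flights, sigma_t, c, slab_width):
    """Vectorized toy neutral-particle transport in a 1D slab.

    All active flights advance together in array operations; finished
    flights (escaped or absorbed) are compacted out with boolean masks
    so the arrays stay gap-free, mirroring the vectorization strategy."""
    x = np.zeros(n_flights)                       # all born at the left face
    mu = rng.uniform(-1.0, 1.0, n_flights)        # isotropic direction cosines
    escaped = 0
    while x.size:
        # Sample exponential path lengths and advance every flight at once.
        x = x + mu * rng.exponential(1.0 / sigma_t, x.size)
        escaped += int(np.count_nonzero(x > slab_width))
        inside = (x >= 0.0) & (x <= slab_width)
        survives = inside & (rng.random(x.size) < c)   # scatter w.p. c, else absorb
        x = x[survives]                            # compact: remove finished flights
        mu = rng.uniform(-1.0, 1.0, x.size)        # isotropic re-emission
    return escaped / n_flights                     # transmission fraction
```

Each pass through the loop is branch-free over the whole array, which is exactly the structure that vectorizes well; the mask-and-compact step is the serial analogue of the gap-removal solutions the abstract mentions.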
Directory of Open Access Journals (Sweden)
VELIZAR D. STANKOVIC
2001-01-01
The influence of an electrochemically generated gas phase on the hydrodynamic characteristics of a three-phase system has been examined. The two-phase (gas-liquid) fluid, in which the liquid phase is the continuous one, flows through a packed bed of glass spheres. The influence of the liquid velocity, the gas velocity and the particle diameter on the pressure drop through the fixed bed was examined. It was found that with increasing liquid velocity (wl = 0.016-0.03 m/s) the relative pressure drop through the fixed bed decreases. With increasing current density, the pressure drop increases, since greater gas quantities are held up in the fixed bed. In addition, it was found that with decreasing diameter of the glass particles the relative pressure drop also decreases. The relationship between the experimentally obtained friction factor and the Reynolds number was established.
Application of ant colony Algorithm and particle swarm optimization in architectural design
Song, Ziyi; Wu, Yunfa; Song, Jianhua
2018-02-01
By studying the development of the ant colony algorithm and the particle swarm algorithm, this paper expounds the core idea of each algorithm, explores how the algorithms can be combined with architectural design, and sums up the application rules of intelligent algorithms in architectural design. Combining the characteristics of the two algorithms, it obtains a research route and a way of realizing intelligent algorithms in architectural design, and establishes algorithm rules to assist the design process. Taking intelligent algorithms as a starting point for architectural design research, the authors provide the theoretical foundation of the ant colony algorithm and the particle swarm algorithm in architectural design, broaden the range of applications of intelligent algorithms in architectural design, and provide a new idea for architects.
Hydrodynamics of multi-sized particles in stable regime of a swirling bed
Energy Technology Data Exchange (ETDEWEB)
Miin, Chin Swee; Sulaiman, Shaharin Anwar; Raghavan, Vijay Raj; Heikal, Morgan Raymond; Naz, Muhammad Yasin [Universiti Teknologi PETRONAS, Perak (Malaysia)
2015-11-15
Using particle image velocimetry (PIV), we observed particle motion within the stable operating regime of a swirling fluidized bed with an annular blade distributor. This paper presents velocity profiles of the particle flow in an effort to determine the effects of blade angle, particle size and shape, and bed weight on the characteristics of a swirling fluidized bed. Generally, particle velocity increased with airflow rate and shallow bed height, but decreased with bed weight. A 3° increase in blade angle reduced particle velocity by approximately 18%. In addition, particle shape, size and bed weight affected various characteristics of the swirling regime. Swirling began soon after incipience in the form of a supra-linear curve, which is characteristic of the swirling regime. The relationship between particle and gas velocities enabled us to predict heat and mass transfer rates between gas and particles.
Multiobjective Reliable Cloud Storage with Its Particle Swarm Optimization Algorithm
Directory of Open Access Journals (Sweden)
Xiyang Liu
2016-01-01
Information abounds in all fields of real life, where it is often recorded as digital data in computer systems and treated as an increasingly important resource. Its growing volume causes great difficulties in both storage and analysis. Massive data storage in cloud environments has significant impacts on the quality of service (QoS) of the systems, which is becoming an increasingly challenging problem. In this paper, we propose a multiobjective optimization model for reliable data storage in clouds that considers both the cost and the reliability of the storage service simultaneously. In the proposed model, the total cost is composed of storage space occupation cost, data migration cost, and communication cost. Based on an analysis of the storage process, the transmission reliability, equipment stability, and software reliability are taken into account in the storage reliability evaluation. To solve the proposed multiobjective model, a Constrained Multiobjective Particle Swarm Optimization (CMPSO) algorithm is designed. Finally, experiments are designed to validate the proposed model and its PSO-based solution. In the experiments, the proposed model is tested in cooperation with 3 storage strategies. Experimental results show that the proposed model is effective, and that it performs much better in combination with proper file-splitting methods.
Particle swarm optimization algorithm based low cost magnetometer calibration
Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.
2011-12-01
Inertial navigation systems (INS) consist of accelerometers, gyroscopes and a microprocessor, and provide inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low-cost sensors are corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high-accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the bias and scale factor values of a low-cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The estimated bias and scale factor errors from the proposed algorithm improve the heading accuracy, and the results are statistically significant. The approach can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
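A bias-and-scale calibration of this kind can be sketched with a plain global-best PSO that matches the magnitudes of corrected samples to the known reference field; the swarm parameters, search bounds and cost function below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_calibrate(meas, b_ref, n_particles=40, iters=200):
    """PSO sketch: estimate per-axis bias b and scale s of a magnetometer
    from raw readings `meas` (N x 3), so that corrected samples
    (m - b) / s all have the reference field magnitude b_ref."""
    def cost(p):
        b, s = p[:3], p[3:]
        mags = np.linalg.norm((meas - b) / s, axis=1)
        return np.mean((mags - b_ref) ** 2)

    dim = 6                                   # 3 biases + 3 scale factors
    low = np.array([-1.0] * 3 + [0.5] * 3)    # assumed search bounds
    high = np.array([1.0] * 3 + [1.5] * 3)
    pos = rng.uniform(low, high, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest[:3], gbest[3:]
```

Because only the field magnitude is constrained, no error modeling is needed: the swarm simply searches the six-dimensional bias/scale space for parameters that make all corrected magnitudes agree with the reference field.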
A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm
Shamsi, Mousa; Sedaaghi, Mohammad Hossein
2016-01-01
Particle swarm optimization (PSO) is an evolutionary computing method based on intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of PSO algorithm depends greatly on the appropriate parameter selection strategies for fine tuning its parameters. Inertia weight (IW) is one of PSO’s parameters used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting inertia weight which is named Flexible Exponential Inertia Weight (FEIW) strategy because according to each problem we can construct an increasing or decreasing inertia weight strategy with suitable parameters selection. The efficacy and efficiency of PSO algorithm with FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. Also FEIW is compared with best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis prove that FEIW improves the search performance in terms of solution quality as well as convergence rate. PMID:27560945
A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm.
Amoshahy, Mohammad Javad; Shamsi, Mousa; Sedaaghi, Mohammad Hossein
2016-01-01
Particle swarm optimization (PSO) is an evolutionary computing method based on intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of PSO algorithm depends greatly on the appropriate parameter selection strategies for fine tuning its parameters. Inertia weight (IW) is one of PSO's parameters used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting inertia weight which is named Flexible Exponential Inertia Weight (FEIW) strategy because according to each problem we can construct an increasing or decreasing inertia weight strategy with suitable parameters selection. The efficacy and efficiency of PSO algorithm with FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. Also FEIW is compared with best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis prove that FEIW improves the search performance in terms of solution quality as well as convergence rate.
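The abstract does not give the FEIW formula itself, but the role any inertia-weight schedule plays is easy to show: it scales the previous velocity in the PSO update v ← w(t)·v + c1·r1·(pbest − x) + c2·r2·(gbest − x). Below is a generic flexible exponential schedule with illustrative parameters (an assumption standing in for the authors' FEIW, not their exact strategy):

```python
import math

def exponential_inertia_weight(t, t_max, w_start=0.9, w_end=0.4, a=3.0):
    """Exponentially decaying inertia weight for iteration t of t_max.

    Large w early favors exploration (particles keep their momentum);
    small w late favors exploitation around the best-known solutions.
    The shape parameter a controls how quickly the decay happens."""
    return w_end + (w_start - w_end) * math.exp(-a * t / t_max)
```

Depending on the problem, the same functional form can be re-parameterized to give an increasing schedule instead, which is the flexibility the abstract emphasizes.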
Multivariable optimization of liquid rocket engines using particle swarm algorithms
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
Luciano, Rezzolla
2013-01-01
Relativistic hydrodynamics is a very successful theoretical framework to describe the dynamics of matter from scales as small as those of colliding elementary particles, up to the largest scales in the universe. This book provides an up-to-date, lively, and approachable introduction to the mathematical formalism, numerical techniques, and applications of relativistic hydrodynamics. The topic is typically covered either by very formal or by very phenomenological books, but is instead presented here in a form that will be appreciated both by students and researchers in the field. The topics covered in the book are the results of work carried out over the last 40 years, which can be found in rather technical research articles with dissimilar notations and styles. The book is not just a collection of scattered information, but a well-organized description of relativistic hydrodynamics, from the basic principles of statistical kinetic theory, down to the technical aspects of numerical methods devised for the solut...
Inline motion and hydrodynamic interaction of 2D particles in a viscoplastic fluid
Chaparian, Emad; Wachs, Anthony; Frigaard, Ian A.
2018-03-01
In Stokes flow of a particle settling within a bath of viscoplastic fluid, a critical resistive force must be overcome in order for the particle to move. This leads to a critical ratio of the buoyancy stress to the yield stress: the critical yield number. Geometrically, this translates to an envelope around the particle in the limit of zero flow that contains both the particle and encapsulated unyielded fluid. Such unyielded envelopes and critical yield numbers, as well as the means of calculating them, have become well understood through our previous studies of single (2D) particles. Here we address the case of multiple particles, which introduces interesting new phenomena. First, plug regions can appear between the particles and connect them together, depending on their proximity and the yield number. This can change the yielding behaviour, since the combination forms a larger (and heavier) "particle." Moreover, small particles (that cannot move alone) can be pulled or pushed by larger particles or assemblies of particles. Increasing the number of particles leads to interesting chain dynamics, including breaking and reforming.
Transverse-momentum distribution of particles according to the hydrodynamical model
International Nuclear Information System (INIS)
Yogiro, H.
1977-12-01
A fit to the transverse-momentum distribution is performed in the context of the hydrodynamical model. By fixing a (total-energy-independent) dissociation temperature T and a transverse fluid-rapidity distribution whose width increases logarithmically with s, the existing data can be reproduced over the whole p⊥ interval (where the invariant cross section ω dσ/d³p varies by a factor of 10⁻¹⁰), including their energy dependence. The final inclusive cross section appears to be approximately factorized in the longitudinal and transverse rapidities, as verified experimentally.
An Orthogonal Multi-Swarm Cooperative PSO Algorithm with a Particle Trajectory Knowledge Base
Directory of Open Access Journals (Sweden)
Jun Yang
2017-01-01
A novel orthogonal multi-swarm cooperative particle swarm optimization (PSO) algorithm with a particle trajectory knowledge base is presented in this paper. Different from traditional PSO algorithms and other variants of PSO, the proposed orthogonal multi-swarm cooperative PSO algorithm not only introduces an orthogonal initialization mechanism and a particle trajectory knowledge base for multi-dimensional optimization problems, but also conceives a new adaptive cooperation mechanism to accomplish the information interaction among swarms and particles. Experiments are conducted on a set of benchmark functions, and the results show better performance compared with the traditional PSO algorithm in terms of convergence, computational efficiency and avoidance of premature convergence.
Particle Identification algorithm for the CLIC ILD and CLIC SiD detectors
Nardulli, J
2011-01-01
This note describes the algorithm presently used to determine the particle identification performance for single particles for the CLIC ILD and CLIC SiD detector concepts as prepared in the CLIC Conceptual Design Report.
Particle mis-identification rate algorithm for the CLIC ILD and CLIC SiD detectors
Nardulli, J
2011-01-01
This note describes the algorithm presently used to determine the particle mis-identification rate and gives results for single particles for the CLIC ILD and CLIC SiD detector concepts as prepared for the CLIC Conceptual Design Report.
Hydrodynamics of a commercial scale CFB boiler-study with radioactive tracer particles
DEFF Research Database (Denmark)
Lin, Weigang; Hansen, Peter F.B.; Dam-Johansen, Kim
1999-01-01
This paper presents the experimental results with radioactive tracer particles in an 80 MWth circulating fluidized-bed boiler. Batches of gamma-ray emitting tracer particles were injected into the standpipe. The response curves of the impulse injection were measured by a set of successive scintil...
IMPOSING A LAGRANGIAN PARTICLE FRAMEWORK ON AN EULERIAN HYDRODYNAMICS INFRASTRUCTURE IN FLASH
International Nuclear Information System (INIS)
Dubey, A.; Daley, C.; Weide, K.; Graziani, C.; ZuHone, J.; Ricker, P. M.
2012-01-01
In many astrophysical simulations, both Eulerian and Lagrangian quantities are of interest. For example, in a galaxy cluster merger simulation, the intracluster gas can have Eulerian discretization, while dark matter can be modeled using particles. FLASH, a component-based scientific simulation code, superimposes a Lagrangian framework atop an adaptive mesh refinement Eulerian framework to enable such simulations. The discretization of the field variables is Eulerian, while the Lagrangian entities occur in many different forms including tracer particles, massive particles, charged particles in particle-in-cell mode, and Lagrangian markers to model fluid-structure interactions. These widely varying roles for Lagrangian entities are possible because of the highly modular, flexible, and extensible architecture of the Lagrangian framework. In this paper, we describe the Lagrangian framework in FLASH in the context of two very different applications, Type Ia supernovae and galaxy cluster mergers, which use the Lagrangian entities in fundamentally different ways.
Imposing a Lagrangian Particle Framework on an Eulerian Hydrodynamics Infrastructure in Flash
Dubey, A.; Daley, C.; ZuHone, J.; Ricker, P. M.; Weide, K.; Graziani, C.
2012-01-01
In many astrophysical simulations, both Eulerian and Lagrangian quantities are of interest. For example, in a galaxy cluster merger simulation, the intracluster gas can have Eulerian discretization, while dark matter can be modeled using particles. FLASH, a component-based scientific simulation code, superimposes a Lagrangian framework atop an adaptive mesh refinement Eulerian framework to enable such simulations. The discretization of the field variables is Eulerian, while the Lagrangian entities occur in many different forms including tracer particles, massive particles, charged particles in particle-in-cell mode, and Lagrangian markers to model fluid structure interactions. These widely varying roles for Lagrangian entities are possible because of the highly modular, flexible, and extensible architecture of the Lagrangian framework. In this paper, we describe the Lagrangian framework in FLASH in the context of two very different applications, Type Ia supernovae and galaxy cluster mergers, which use the Lagrangian entities in fundamentally different ways.
Directory of Open Access Journals (Sweden)
V. Krenn
2014-01-01
Full Text Available In the histopathological diagnosis of the synovial-like interface membrane (SLIM), particle identification plays an important role alongside the diagnosis of periprosthetic infection. The differences in particle pathogenesis and the variety of materials used in endoprosthetics explain the particle heterogeneity that hampers diagnostic particle identification. For this reason, a histopathological particle algorithm has been developed. With minimal methodological complexity, this algorithm offers a guide to identifying prosthesis material particles. Light-microscopic morphological and enzyme-histochemical characteristics and polarization-optical properties have been established; particles are classified by size (microparticles, macroparticles and supra-macroparticles) and definitively characterized according to a dichotomous principle. Based on these criteria, identification and validation of the particles was carried out in 120 joint endoprosthesis pathology cases. A histopathological particle score (HPS) is proposed that summarizes the most important information on particle identification in the SLIM for the orthopedist, materials scientist and histopathologist.
Zainol, M. R. R. M. A.; Kamaruddin, M. A.; Zawawi, M. H.; Wahab, K. A.
2017-11-01
Smoothed particle hydrodynamics (SPH) is a three-dimensional (3D) modelling method. In this work, three cases and one validation case were simulated using DualSPHysics. The study area was the Sarawak Barrage. The cases differ in the water level at the downstream side. The study simulates riverbed erosion and scouring properties using multi-phase cases with sand as the sediment phase and water. The velocity and the scouring profile were recorded as results. The validation result is acceptable: the scouring profile and the velocity differed only slightly between the laboratory experiment and the simulation. Hence, it can be concluded that SPH simulation can be used as an alternative for simulating real cases.
Parallel particle swarm optimization algorithm in nuclear problems
International Nuclear Information System (INIS)
Waintraub, Marcel; Pereira, Claudio M.N.A.; Schirru, Roberto
2009-01-01
Particle Swarm Optimization (PSO) is a population-based metaheuristic (PBM) in which solution candidates evolve through simulation of a simplified social adaptation model. Combining robustness, efficiency and simplicity, PSO has gained great popularity. Many successful applications of PSO have been reported, in which PSO was demonstrated to have advantages over other well-established PBMs. However, computational cost is still a great constraint for PSO, as for all other PBMs, especially in optimization problems with time-consuming objective functions. To overcome this difficulty, parallel computation has been used. The most immediate advantage of parallel PSO (PPSO) is the reduction of computational time, and master-slave approaches exploiting this characteristic are the most investigated. However, much more can be expected: it is known that PSO may be improved by more elaborate neighborhood topologies. Hence, in this work, we develop several different PPSO algorithms exploring the advantages of enhanced neighborhood topologies implemented by communication strategies in multiprocessor architectures. The proposed PPSOs have been applied to two complex and time-consuming nuclear engineering problems: reactor core design and fuel reload optimization. After exhaustive experiments, it has been concluded that PPSO still improves solutions after many thousands of iterations, making efficient use of serial (non-parallel) PSO prohibitive in such real-world problems, and that PPSO with more elaborate communication strategies is more efficient and robust than the master-slave model. Advantages and peculiarities of each model are carefully discussed in this work. (author)
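As an illustration of the serial baseline that such parallel variants accelerate, here is a minimal global-best PSO sketch in Python; the function name, parameter values and bounds are ours for illustration, not the paper's implementation:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal serial PSO sketch with a global-best (gbest) topology."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]          # each particle's best position so far
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f
```

A master-slave parallelization would distribute only the `f(xs[i])` evaluations; the neighborhood-topology variants studied in the paper change which particles contribute to the social term instead of using a single `gbest`.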
Towards a smoothed particle hydrodynamics algorithm for shocks through layered materials
Zisis, I.A.; Linden, van der B.J.; Giannopapa, C.G.
2013-01-01
Hypervelocity impacts (HVIs) are collisions at velocities greater than the target object’s speed of sound. Such impacts produce pressure waves that generate sharp and sudden changes in the density of the materials. These are propagated as shock waves. Previous computational research has given
Pebble bed reactor fuel cycle optimization using particle swarm algorithm
Energy Technology Data Exchange (ETDEWEB)
Tavron, Barak, E-mail: btavron@bgu.ac.il [Planning, Development and Technology Division, Israel Electric Corporation Ltd., P.O. Box 10, Haifa 31000 (Israel); Shwageraus, Eugene, E-mail: es607@cam.ac.uk [Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ (United Kingdom)
2016-10-15
Highlights: • Particle swarm method has been developed for fuel cycle optimization of a PBR reactor. • Results show low sensitivity of uranium utilization to fuel and core design parameters. • Multi-zone fuel loading patterns lead to a small improvement in uranium utilization. • Mixing thorium with highly enriched uranium yields the best uranium utilization. - Abstract: Pebble bed reactor (PBR) features, such as robust thermo-mechanical fuel design and on-line continuous fueling, facilitate a wide range of fuel cycle alternatives. A range of fuel pebble types, containing different amounts of fertile or fissile fuel material, may be loaded into the reactor core. Several fuel loading zones may be used, since radial mixing of the pebbles was shown to be limited. This radial separation suggests the possibility of implementing the “seed-blanket” concept for the utilization of fertile fuels such as thorium, and for enhancing reactor fuel utilization. In this study, the particle-swarm meta-heuristic evolutionary optimization method (PSO) has been used to find the optimal fuel cycle design that yields the highest natural uranium utilization. The PSO method is known for efficiently solving complex problems with non-linear objective functions, continuous or discrete parameters and complex constraints. The VSOP system of codes has been used for PBR fuel utilization calculations and a MATLAB script has been used to implement the PSO algorithm. Optimization of PBR natural uranium utilization (NUU) has been carried out for a 3000 MWth High Temperature Reactor (HTR) design operating on the Once Through Then Out (OTTO) fuel management scheme, and for a 400 MWth Pebble Bed Modular Reactor (PBMR) operating on the multi-pass (MEDUL) fuel management scheme. Results showed only a modest improvement in the NUU (<5%) over reference designs. Investigation of thorium fuel cases showed that the use of HEU in combination with thorium results in the most favorable reactor performance in terms of
Fast weighted centroid algorithm for single particle localization near the information limit.
Fish, Jeremie; Scrimgeour, Jan
2015-07-10
A simple weighting scheme that enhances the localization precision of center of mass calculations for radially symmetric intensity distributions is presented. The algorithm effectively removes the biasing that is common in such center of mass calculations. Localization precision compares favorably with other localization algorithms used in super-resolution microscopy and particle tracking, while significantly reducing the processing time and memory usage. We expect that the algorithm presented will be of significant utility when fast computationally lightweight particle localization or tracking is desired.
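The core idea, a weighted centre-of-mass in which weighting suppresses the background pixels that bias a plain centroid, can be sketched as follows. The power-law weighting here is a generic illustration of such schemes, not the authors' exact algorithm:

```python
import numpy as np

def weighted_centroid(img, power=2.0):
    """Centre-of-mass of an intensity image with power-law weighting.

    Raising intensities to a power > 1 down-weights diffuse background
    relative to the bright, radially symmetric spot, reducing the bias
    of a plain centroid. Returns (x, y) in pixel coordinates.
    """
    w = np.asarray(img, dtype=float) ** power
    total = w.sum()
    ys, xs = np.indices(w.shape)      # pixel coordinate grids
    return (xs * w).sum() / total, (ys * w).sum() / total
```

For a noise-free symmetric spot the weighted and unweighted centroids agree; the benefit of weighting appears when background noise and truncation at the image edge would otherwise pull the plain centroid toward the image centre.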
GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS
International Nuclear Information System (INIS)
Rogers, Adam; Fiege, Jason D.
2011-01-01
Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ² and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ² is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.
Magneto-Hydrodynamic Activity and Energetic Particles - Application to Beta Alfven Eigenmodes
International Nuclear Information System (INIS)
Nguyen, Ch.
2009-12-01
The goal of magnetic fusion research is to extract the power released by fusion reactions and carried by the product of these reactions, liberated at energies of the order of a few MeV. The feasibility of fusion energy production relies on our ability to confine these energetic particles, while keeping the thermonuclear plasma in safe operating conditions. For that purpose, it is necessary to understand and find ways to control the interaction between energetic particles and the thermonuclear plasma. Reaching these two goals is the general motivation for this work. More specifically, our focus is on one type of instability, the Beta Alfven Eigenmode (BAE), which can be driven by energetic particles and impact on the confinement of both energetic and thermal particles. In this work, we study the characteristics of BAEs analytically and derive its dispersion relation and structure. Next, we analyze the linear stability of the mode in the presence of energetic particles. First, a purely linear description is used, which makes possible to get an analytical linear criterion for BAE destabilization in the presence of energetic particles. This criterion is compared with experiments conducted in the Tore-Supra tokamak. Secondly, because the linear analysis reveals some features of the BAE stability which are subject to a strong nonlinear modification, the question is raised of the possibility of a sub-critical activity of the mode. We propose a simple scenario which makes possible the existence of meta-stable modes, verified analytically and numerically. Such a scenario is found to be relevant to the physics and scales characterizing BAEs. (author)
Directory of Open Access Journals (Sweden)
Qi Hu
2013-04-01
Full Text Available State-of-the-art heuristic algorithms for the vehicle routing problem with time windows (VRPTW) usually converge slowly during the early iterations and easily fall into local optima. To address these problems, this paper analyzes the particle encoding and decoding strategy of the particle swarm optimization algorithm, the construction of the vehicle route and the detection of local optima. Based on these, a hybrid chaos-particle swarm optimization algorithm (HPSO) is proposed to solve the VRPTW. The chaos algorithm is employed to re-initialize the particle swarm. An efficient insertion heuristic is also proposed to build valid vehicle routes in the particle decoding process. A premature-convergence judgment mechanism for the particle swarm is formulated and combined with the chaos algorithm and Gaussian mutation in HPSO when the swarm falls into local convergence. Extensive experiments are carried out to test the parameter settings of the insertion heuristic and to verify that they correspond to the real distribution of the data in the concrete problem. The results also show that HPSO achieves better performance than other state-of-the-art algorithms in solving the VRPTW.
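Chaos-based re-initialization of a stagnant swarm, as mentioned above, is commonly built on the logistic map, whose iterates densely and unpredictably cover (0, 1). A minimal sketch of such a sequence generator (generic, not the paper's exact HPSO operator):

```python
def logistic_sequence(x0, n, r=4.0):
    """Generate n iterates of the logistic map x <- r*x*(1-x).

    For r = 4 and a non-degenerate seed in (0, 1), the iterates are
    chaotic and remain in [0, 1]; scaling them to the search bounds
    gives diverse positions for re-initializing stalled particles.
    """
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs
```

A swarm re-initialization would map each value `v` to `lo + v * (hi - lo)` per coordinate; the chaotic, non-repeating spread is what restores diversity after premature convergence.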
The application of particle image velocimetry for the analysis of high-speed craft hydrodynamics
Jacobi, G.; Thill, C.H.; Huijsmans, R.H.M.; Huijsmans, R.H.M.
2016-01-01
The particle image velocimetry (PIV) technique has become a reliable method for capturing the velocity field and its derivatives, even in complex flows and is now also widely used for validation of numerical codes. As the imaging system is sensitive to vibrations, the application in environments
Algorithm of Data Reduce in Determination of Aerosol Particle Size Distribution at Damps/C
International Nuclear Information System (INIS)
Muhammad-Priyatna; Otto-Pribadi-Ruslanto
2001-01-01
An analysis was carried out of the data-reduction algorithm of the Damps/C (Differential Mobility Particle Sizer with Condensation Particle Counter) system, which determines the aerosol particle size distribution over the diameter range 0.01 μm to 1 μm. The Damps/C system consists of software and hardware: the hardware determines the mobilities of the aerosol particles, and the software determines the aerosol particle size distribution in diameter. Particle mobility and diameter are connected through the electric field, which is the basis of the program for data reduction and for converting particle mobility to particle diameter. The analysis yields a transfer-function value, Ω, of 0.5. The data-reduction program converts the mobility basis to a diameter basis with corrections for number efficiency, the transfer function, and poly-charged particles. (author)
Optimization of Particle Search Algorithm for CFD-DEM Simulations
Directory of Open Access Journals (Sweden)
G. Baryshev
2013-09-01
Full Text Available The discrete element method has numerous applications in particle physics. However, simulating particles as discrete entities can become costly for large systems. In time-driven DEM simulations, most computation time is spent in the contact search stage. We propose an efficient collision detection method based on sorting particles by their coordinates. Using multiple sorting criteria minimizes the number of potential neighbours and demonstrates the suitability of this approach for simulating massive systems in 3D. The method is compared to a common approach that places particles onto a grid of cells. An advantage of the new approach is that its performance is independent of particle radius and domain size.
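A minimal 2-D sketch of sort-based contact search along one coordinate (a sweep over the x-sort only; the paper combines several sorting criteria, and the function name and data layout here are ours):

```python
def find_contacts(particles, radius):
    """Find touching pairs of equal-radius discs by sorting on x.

    After sorting particle centres by x, the inner scan can stop as
    soon as the x-separation alone exceeds one contact diameter,
    because all later particles are even farther away in x.
    Returns sorted (i, j) index pairs with i < j.
    """
    order = sorted(range(len(particles)), key=lambda i: particles[i][0])
    contact_d2 = (2 * radius) ** 2
    contacts = []
    for a in range(len(order)):
        i = order[a]
        xi, yi = particles[i]
        for b in range(a + 1, len(order)):
            j = order[b]
            xj, yj = particles[j]
            if xj - xi > 2 * radius:
                break  # prune: no later particle can be in contact
            if (xj - xi) ** 2 + (yj - yi) ** 2 <= contact_d2:
                contacts.append(tuple(sorted((i, j))))
    return sorted(contacts)
```

Unlike a cell grid, this needs no tuning of cell size against particle radius or domain extent, which matches the independence claim in the abstract.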
A Swarm Optimization Genetic Algorithm Based on Quantum-Behaved Particle Swarm Optimization.
Sun, Tao; Xu, Ming-Hai
2017-01-01
Quantum-behaved particle swarm optimization (QPSO) algorithm is a variant of the traditional particle swarm optimization (PSO). The QPSO that was originally developed for continuous search spaces outperforms the traditional PSO in search ability. This paper analyzes the main factors that impact the search ability of QPSO and converts the particle movement formula to the mutation condition by introducing the rejection region, thus proposing a new binary algorithm, named swarm optimization genetic algorithm (SOGA), because it is more like genetic algorithm (GA) than PSO in form. SOGA has crossover and mutation operator as GA but does not need to set the crossover and mutation probability, so it has fewer parameters to control. The proposed algorithm was tested with several nonlinear high-dimension functions in the binary search space, and the results were compared with those from BPSO, BQPSO, and GA. The experimental results show that SOGA is distinctly superior to the other three algorithms in terms of solution accuracy and convergence.
Wei, Yongjie; Ge, Baozhen; Wei, Yaolin
2009-03-20
In general, model-independent algorithms are sensitive to noise during laser particle size measurement. An improved conjugate gradient algorithm (ICGA) that can be used to invert particle size distribution (PSD) from diffraction data is presented. By use of the ICGA to invert simulated data with multiplicative or additive noise, we determined that additive noise is the main factor that induces distorted results. Thus the ICGA is amended by introduction of an iteration step-adjusting parameter and is used experimentally on simulated data and some samples. The experimental results show that the sensitivity of the ICGA to noise is reduced and the inverted results are in accord with the real PSD.
Algorithms for tracking of charged particles in circular accelerators
International Nuclear Information System (INIS)
Iselin, F.Ch.
1986-01-01
An important problem in accelerator design is the determination of the largest stable betatron amplitude. This stability limit is also known as the dynamic aperture. The equations describing the particle motion are non-linear, and the Linear Lattice Functions cannot be used to compute the stability limits. The stability limits are therefore usually searched for by particle tracking. One selects a set of particles with different betatron amplitudes and tracks them for many turns around the machine. The particles which survive a sufficient number of turns are termed stable. This paper concentrates on conservative systems. For this case the particle motion can be described by a Hamiltonian, i.e. tracking particles means application of canonical transformations. Canonical transformations are equivalent to symplectic mappings, which implies that there exist invariants. These invariants should not be destroyed in tracking
Ballistic target tracking algorithm based on improved particle filtering
Ning, Xiao-lei; Chen, Zhan-qi; Li, Xiao-yang
2015-10-01
Tracking a ballistic re-entry target is a typical nonlinear filtering problem. In order to track a ballistic re-entry target in a nonlinear, non-Gaussian, complex environment, a novel chaos map particle filter (CMPF) is used to estimate the target state. The CMPF performs well in estimating the state and parameters of nonlinear, non-Gaussian systems. Monte Carlo simulation results show that this method can effectively solve the particle degeneracy and particle impoverishment problems by improving the efficiency of particle sampling, obtaining better particles for the estimation. Meanwhile, the CMPF improves the state estimation precision and convergence speed compared with the EKF, the UKF and the ordinary particle filter.
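For contrast with the chaos-map variant, the "ordinary particle filter" used as a baseline above can be sketched as a generic bootstrap (SIR) filter for a 1-D random-walk state observed in Gaussian noise; this toy model and all parameter values are illustrative only:

```python
import math
import random

def particle_filter(observations, n=500, proc_std=1.0, obs_std=1.0, seed=1):
    """Bootstrap (SIR) particle filter for x_k = x_{k-1} + w, z_k = x_k + v.

    Returns the weighted-mean state estimate after each observation.
    Multinomial resampling at every step combats particle degeneracy
    (the same problem the CMPF attacks with chaos-map sampling).
    """
    rng = random.Random(seed)
    parts = [rng.uniform(-10.0, 10.0) for _ in range(n)]  # diffuse prior
    estimates = []
    for z in observations:
        # propagate through the random-walk process model
        parts = [x + rng.gauss(0.0, proc_std) for x in parts]
        # weight by the Gaussian observation likelihood
        ws = [math.exp(-0.5 * ((z - x) / obs_std) ** 2) for x in parts]
        total = sum(ws)
        ws = [w / total for w in ws]
        estimates.append(sum(w * x for w, x in zip(ws, parts)))
        # multinomial resampling: clone high-weight particles
        parts = rng.choices(parts, weights=ws, k=n)
    return estimates
```

Resampling every step is the simplest policy; practical filters resample only when the effective sample size drops, which reduces the particle impoverishment the abstract mentions.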
Tran-Duc, Thien; Phan-Thien, Nhan; Khoo, Boo Cheong
2018-02-01
Technical activities to collect poly-metallic nodules on a seabed are likely to disturb the top-layer sediment and re-suspend it into the ambient ocean water. The transport of the re-suspended polydisperse-sized sediment is a process in which particles' size variation leads to a difference in their settling velocities; and thus the polydispersity in sizes of sediment has to be taken into account in the modeling process. The sediment transport within a window of 12 km is simulated and analyzed numerically in this study. The sediment characteristic and the ocean current data taken from the Peru Basin, Pacific Ocean, are used in the simulations. More than 50% of the re-suspended sediment are found to return to the bottom after 24 h. The sediment concentration in the ambient ocean water does not exceed 3.5 kg/m3 during the observed period. The deposition rate steadily increases and reaches 70% of the sediment re-suspension rate after 24 h. The sediment plume created by the activities comprises mainly very fine sediment particles (clays and silts), whereas coarser particles (sands) are found in abundance in the deposited sediment within 1 km from the source location. It is also found that the deposition process of the re-suspended sediment is changed remarkably as the current velocity increases from 0.05 m/s (medium current) to 0.1 m/s (strong current). The strong sediment deposition trend is also observed as the sediment source moves continuously over a region due to the sediment scattering effect.
The influence of acoustic field and frequency on Hydrodynamics of Group B particles
Directory of Open Access Journals (Sweden)
R L Sonolikar
2011-01-01
Full Text Available A sound-assisted fluidized bed (SAFB) of group B particles (180 μm glass beads) was studied in a 46 mm I.D. column with aspect ratios of 1.4 and 2.9. A loudspeaker mounted on top of the bed, driven by a function generator with a square wave, served as the vibration source for the fluidized bed. The sound pressure level (referred to 20 μPa) was varied from 102 to 140 dB, and frequencies from 70 Hz to 170 Hz were applied. The effects of sound pressure level, sound frequency and particle loading on the properties of the SAFB were investigated. The experimental results showed that the minimum fluidization velocity decreased with increasing sound pressure level and also varied with frequency; at the resonance frequency the minimum fluidization velocity was lowest. The bed height did not increase appreciably in the presence of a high acoustic field at the resonant frequency. The shape of the minimum fluidization velocity versus frequency curve in the presence of sound varied with bed weight.
Directory of Open Access Journals (Sweden)
Hao Yin
2014-01-01
Full Text Available For the SLA-aware service composition problem (SSC), an optimization model is built and a hybrid multiobjective discrete particle swarm optimization algorithm (HMDPSO) is proposed in this paper. According to the characteristics of this problem, a particle updating strategy is designed by introducing a crossover operator. In order to restrain premature convergence of the particle swarm and increase its global search capacity, a swarm diversity indicator is introduced and a particle mutation strategy is proposed to increase swarm diversity. To accelerate the process of obtaining feasible particle positions, a local search strategy based on constraint domination is proposed and incorporated into the algorithm. Finally, some parameters of HMDPSO are analyzed and set to suitable values, and HMDPSO and its variant HMDPSO+, which incorporates the local search strategy, are compared with recently proposed related algorithms on cases of different scales. The results show that HMDPSO+ solves the SSC problem more effectively.
Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm
Directory of Open Access Journals (Sweden)
KeKe Gen
2015-01-01
Full Text Available The article considers the data association problem of the simultaneous localization and mapping algorithm for determining the route of unmanned aerial vehicles (UAVs). Such vehicles are already widely used, but they are mainly controlled by a remote operator; an urgent task is to develop a control system that allows autonomous flight. SLAM (simultaneous localization and mapping), which predicts the location, speed and flight parameters as well as the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem by using an improved ant algorithm. Data association for SLAM means establishing a matching between the set of observed landmarks and the landmarks in the state vector. The ant algorithm is a widely used optimization algorithm with positive feedback and the ability to search in parallel, so it is suitable for solving the data association problem for SLAM. However, the traditional ant algorithm easily falls into local optima while searching for routes. Adding random perturbations when updating the global pheromone helps avoid local optima, and setting pheromone limits on the route increases the search space with a reasonable amount of computation for finding the optimal route. The paper proposes a local data association algorithm for SLAM based on an improved ant algorithm. To increase computation speed, local data association is used instead of global data association. The first stage of the algorithm defines targets in the matching space and the observed landmarks that can be associated by the criterion of individual compatibility (IC). The second stage determines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and
Tang, Ge; Wei, Biao; Wu, Decao; Feng, Peng; Liu, Juan; Tang, Yuan; Xiong, Shuangfei; Zhang, Zheng
2018-03-01
To select the optimal wavelengths in the light extinction spectroscopy measurement, genetic algorithm-particle swarm optimization (GAPSO) based on genetic algorithm (GA) and particle swarm optimization (PSO) is adopted. The change of the optimal wavelength positions in different feature size parameters and distribution parameters is evaluated. Moreover, the Monte Carlo method based on random probability is used to identify the number of optimal wavelengths, and good inversion effects of the particle size distribution are obtained. The method proved to have the advantage of resisting noise. In order to verify the feasibility of the algorithm, spectra with bands ranging from 200 to 1000 nm are computed. Based on this, the measured data of standard particles are used to verify the algorithm.
A general concurrent algorithm for plasma particle-in-cell simulation codes
International Nuclear Information System (INIS)
Liewer, P.C.; Decyk, V.K.
1989-01-01
We have developed a new algorithm for implementing plasma particle-in-cell (PIC) simulation codes on concurrent processors with distributed memory. This algorithm, named the general concurrent PIC algorithm (GCPIC), has been used to implement an electrostatic PIC code on the 33-node JPL Mark III Hypercube parallel computer. To decompose a PIC code using the GCPIC algorithm, the physical domain of the particle simulation is divided into sub-domains, equal in number to the number of processors, such that all sub-domains have roughly equal numbers of particles. For problems with non-uniform particle densities, these sub-domains will be of unequal physical size. Each processor is assigned a sub-domain and is responsible for updating the particles in its sub-domain. This algorithm has led to a very efficient parallel implementation of a well-benchmarked 1-dimensional PIC code. The dominant portion of the code, updating the particle positions and velocities, is nearly 100% efficient when the number of particles is increased linearly with the number of hypercube processors used, so that the number of particles per processor is constant. For example, the increase in time spent updating particles in going from a problem with 11,264 particles run on 1 processor to 360,448 particles on 32 processors was only 3% (a parallel efficiency of 97%). Although implemented on a hypercube concurrent computer, this algorithm should also be efficient for PIC codes on other parallel architectures and for large PIC codes on sequential computers where part of the data must reside on external disks. copyright 1989 Academic Press, Inc
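The equal-particle-count decomposition described above can be sketched in one dimension: sort the particle positions and place each sub-domain boundary between the particles that split the population evenly. This is an illustration of the idea, not the GCPIC implementation:

```python
def decompose(positions, n_procs):
    """Choose 1-D sub-domain boundaries giving equal particle counts.

    Sorting the positions and cutting at the (k*n/n_procs)-th particle
    yields sub-domains of unequal physical size wherever the density
    is non-uniform, but with (nearly) equal particle loads.
    Returns the n_procs - 1 interior boundary coordinates.
    """
    xs = sorted(positions)
    n = len(xs)
    bounds = []
    for p in range(1, n_procs):
        k = p * n // n_procs
        # boundary midway between the last particle of one sub-domain
        # and the first particle of the next
        bounds.append(0.5 * (xs[k - 1] + xs[k]))
    return bounds
```

With a dense cluster at one end of the domain, the boundaries crowd into the cluster, so each processor still updates about the same number of particles, which is what keeps the particle-update phase near 100% parallel efficiency.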
Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.
2018-03-01
Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with artificial intelligence approach. Similarly, the Particle Swarm Optimization (PSO) algorithm. Both algorithms have different advantages and disadvantages when applied to the case of optimization of the Model Integer Programming for Bus Timetabling Problem (MIPBTP), where in the case of MIPBTP will be found the optimal number of trips confronted with various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iteration and program simplicity in finding the optimal solution.
Optimization of multi-objective micro-grid based on improved particle swarm optimization algorithm
Zhang, Jian; Gan, Yang
2018-04-01
The paper presents a multi-objective optimal configuration model for independent micro-grid with the aim of economy and environmental protection. The Pareto solution set can be obtained by solving the multi-objective optimization configuration model of micro-grid with the improved particle swarm algorithm. The feasibility of the improved particle swarm optimization algorithm for multi-objective model is verified, which provides an important reference for multi-objective optimization of independent micro-grid.
Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.
Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard
2012-06-07
We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.
Particle Swarm Optimization algorithms for geophysical inversion, practical hints
Garcia Gonzalo, E.; Fernandez Martinez, J.; Fernandez Alvarez, J.; Kuzma, H.; Menendez Perez, C.
2008-12-01
PSO is a stochastic optimization technique that has been successfully used in many different engineering fields. The PSO algorithm can be physically interpreted as a stochastic damped mass-spring system (Fernandez Martinez and Garcia Gonzalo 2008). Based on this analogy we present a whole family of PSO algorithms and their respective first-order and second-order stability regions. Their performance is also checked using synthetic functions (Rosenbrock and Griewank) showing a degree of ill-posedness similar to that found in many geophysical inverse problems. Finally, we present the application of these algorithms to the analysis of a Vertical Electrical Sounding inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. We analyze the role of the PSO parameters (inertia, local and global accelerations, and discretization step), both in the convergence curves and in the a posteriori sampling of the depth of the intrusion. Comparison is made with binary genetic algorithms and simulated annealing. As a result of this analysis, practical hints are given to select the correct algorithm and to tune the corresponding PSO parameters. Fernandez Martinez, J.L., Garcia Gonzalo, E., 2008a. The generalized PSO: a new door to PSO evolution. Journal of Artificial Evolution and Applications. DOI:10.1155/2008/861275.
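The abstract above leans on the canonical PSO update with inertia and local/global accelerations. As a hedged illustration (a minimal textbook sketch, not the generalized PSO family analyzed in the paper), that update applied to the Rosenbrock benchmark mentioned above might look like:

```python
import random

def rosenbrock(x):
    # Classic ill-conditioned benchmark cited in the abstract.
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def pso(f, dim=2, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                      # personal bests
    pval = [f(x) for x in xs]
    g = pbest[min(range(n_particles), key=lambda i: pval[i])][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + local (cognitive) + global (social) accelerations
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (g[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            v = f(xs[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, xs[i][:]
                if v < f(g):
                    g = xs[i][:]
    return g, f(g)

best, val = pso(rosenbrock)
```

The inertia `w` and accelerations `c1`, `c2` are the same tuning knobs whose stability regions the paper maps; the values used here are common defaults, not the paper's recommendations.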
Combinatorial Clustering Algorithm of Quantum-Behaved Particle Swarm Optimization and Cloud Model
Directory of Open Access Journals (Sweden)
Mi-Yuan Shan
2013-01-01
We propose a combinatorial clustering algorithm of the cloud model and quantum-behaved particle swarm optimization (COCQPSO) to solve stochastic clustering problems. The algorithm employs a novel probability model as well as a permutation-based local search method. The parameters of COCQPSO are set based on design of experiments. In a comprehensive computational study, we scrutinize the performance of COCQPSO on a set of widely used benchmark instances. Benchmarking the combinatorial clustering algorithm against state-of-the-art algorithms shows that its performance compares very favorably. The fuzzy combinatorial optimization algorithm of the cloud model and quantum-behaved particle swarm optimization (FCOCQPSO) in vague sets (IVSs) is more expressive than other fuzzy sets. Finally, numerical examples show the remarkable clustering effectiveness of the COCQPSO and FCOCQPSO algorithms.
Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao
2018-03-01
Receiver autonomous integrity monitoring (RAIM) is one of the most important parts of an avionic navigation system. Two problems need to be addressed to improve this system: the degeneracy phenomenon and the lack of samples in the standard particle filter (PF), whereby the number of samples cannot adequately express the real distribution of the probability density function (i.e., sample impoverishment). This study presents a GPS RAIM method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped to the interval of the optimization variables to improve particle quality. This chaos perturbation prevents the search from becoming trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed method is better than that of RAIM based on the PF or PSO-PF algorithm.
Comparison of several algorithms of the electric force calculation in particle plasma models
International Nuclear Information System (INIS)
Lachnitt, J; Hrach, R
2014-01-01
This work is devoted to plasma modelling using the technique of molecular dynamics. The crucial problem in most such models is the efficient calculation of the electric force. This is usually solved by using the particle-in-cell (PIC) algorithm. However, PIC is an approximate algorithm, as it underestimates the short-range interactions of charged particles. We propose a hybrid algorithm which adds these interactions to PIC. We then include this algorithm in a set of algorithms which we test against each other in a two-dimensional collisionless magnetized plasma model. Besides our hybrid algorithm, this set includes two variants of pure PIC and the direct application of Coulomb's law. We compare particle forces, particle trajectories, total energy conservation and the speed of the algorithms. We find that the hybrid algorithm can be a good replacement for the direct application of Coulomb's law (quite accurate and much faster), although it is probably unnecessary in practical 2D models.
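The direct application of Coulomb's law that serves as the accuracy reference above is an O(N^2) pairwise sum. A minimal 2D sketch, assuming point charges with an inverse-square force law (the paper's actual interaction model and units are not reproduced here):

```python
import math

def coulomb_forces(charges, positions, k=1.0):
    # Direct O(N^2) pairwise Coulomb forces -- the accuracy reference
    # against which PIC and the hybrid algorithm are compared.
    n = len(charges)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            r = math.hypot(dx, dy)
            f = k * charges[i] * charges[j] / (r * r)  # magnitude
            fx, fy = f * dx / r, f * dy / r            # along r-hat
            forces[i][0] += fx; forces[i][1] += fy
            forces[j][0] -= fx; forces[j][1] -= fy     # Newton's third law
    return forces

# Two like charges a unit distance apart repel each other.
f = coulomb_forces([1.0, 1.0], [(0.0, 0.0), (1.0, 0.0)])
```

A PIC scheme would instead deposit charge onto a grid and interpolate the field back, smoothing out exactly the short-range part that this direct sum captures.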
An improved particle filtering algorithm for aircraft engine gas-path fault diagnosis
Directory of Open Access Journals (Sweden)
Qihang Wang
2016-07-01
In this article, an improved particle filter with an electromagnetism-like mechanism algorithm is proposed for abrupt fault diagnosis of aircraft engine gas-path components. To avoid the particle degeneracy and sample impoverishment of the normal particle filter, the electromagnetism-like mechanism optimization algorithm is introduced into the resampling procedure; it adjusts the positions of the particles by simulating the attraction-repulsion mechanism between charged particles in electromagnetism theory. The improved particle filter solves the particle degradation problem and ensures the diversity of the particle set. Meanwhile, it enhances the ability to track abrupt faults because it takes the latest measurement information into account. The proposed method is compared with three different filter algorithms on a univariate nonstationary growth model. Simulations on a turbofan engine model indicate that, compared with the normal particle filter, the improved particle filter completes fault diagnosis within fewer sampling periods and reduces the root-mean-square error of parameter estimation.
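For context on the resampling step that the electromagnetism-like mechanism replaces, a standard systematic resampling routine (a common baseline, not the paper's method) can be sketched as:

```python
import random

def systematic_resample(particles, weights, rng=random.Random(0)):
    # Standard systematic resampling: draw one uniform offset, then take
    # evenly spaced points through the cumulative weight distribution.
    # This is the baseline step the improved filter replaces with an
    # electromagnetism-like repositioning of particles.
    n = len(particles)
    total = sum(weights)
    cum, c = [], 0.0
    for w in weights:
        c += w / total
        cum.append(c)
    start = rng.random() / n
    out, j = [], 0
    for i in range(n):
        u = start + i / n
        while cum[j] < u:
            j += 1
        out.append(particles[j])
    return out

# A dominant-weight particle gets duplicated; low-weight ones vanish --
# exactly the loss of diversity (sample impoverishment) described above.
ps = [0.0, 1.0, 2.0, 3.0]
ws = [0.01, 0.01, 0.9, 0.08]
resampled = systematic_resample(ps, ws)
```

The duplication visible in the output is what motivates moving particles after resampling rather than merely copying them.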
Max–min Bin Packing Algorithm and its application in nano-particles filling
International Nuclear Information System (INIS)
Zhu, Dingju
2016-01-01
With existing bin packing algorithms, higher packing efficiency often comes at the cost of lower packing speed, and vice versa. Packing speed and packing efficiency of existing bin packing algorithms, including NFD, NF, FF, FFD, BF and BFD, correlate negatively with each other, so existing algorithms cannot satisfy the demand of nano-particle filling for both high speed and high efficiency. This paper provides a new bin packing algorithm, the Max-min Bin Packing Algorithm (MM), which achieves both high packing speed and high packing efficiency. MM has the same packing speed as NFD (whose packing speed ranks first among existing bin packing algorithms); when the size repetition rate of the objects to be packed is over 5, MM achieves almost the same packing efficiency as BFD (whose packing efficiency ranks first among existing bin packing algorithms), and when the size repetition rate is over 500, MM achieves exactly the same packing efficiency as BFD. In nano-particle filling applications, the size repetition rate of the particles to be packed is usually in the thousands or tens of thousands, far higher than 5 or 500; consequently, in nano-particle filling the packing efficiency of MM is exactly equal to that of BFD. The irreconcilable conflict between packing speed and packing efficiency is thus removed by MM, giving it a better packing effect than any existing bin packing algorithm. In practice, there are few cases in which the size repetition rate of the objects to be packed is lower than 5, so MM is not limited to nano-particle filling and can be widely used in other applications as well. In particular, MM has significant value in nano-particle filling applications such as nano printing and nano tooth filling.
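For reference, the BFD baseline that MM's efficiency is measured against places each item, largest first, into the feasible bin with the least remaining space. A minimal sketch of BFD follows (MM itself is not described in enough detail above to reproduce):

```python
def best_fit_decreasing(sizes, capacity):
    # BFD: sort items largest-first, then place each into the open bin
    # whose remaining space is smallest but still sufficient; open a new
    # bin when none fits. Returns the number of bins used.
    bins = []  # remaining capacity of each open bin
    for s in sorted(sizes, reverse=True):
        best = None
        for i, rem in enumerate(bins):
            if rem >= s and (best is None or rem < bins[best]):
                best = i
        if best is None:
            bins.append(capacity - s)
        else:
            bins[best] -= s
    return len(bins)

# BFD is efficient but not optimal: these items fit in 3 bins of
# capacity 10 (5+5, 4+3+3, 4+3+3), yet BFD uses 4.
n_bins = best_fit_decreasing([5, 5, 4, 4, 3, 3, 3, 3], 10)
```

The quadratic scan over open bins is also why BFD is slower than NFD, which only ever considers the most recently opened bin.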
Novotny, M.A.; Watanabe, Hiroshi; Ito, Nobuyasu
2010-01-01
The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented to confirm the theoretical efficiencies. © 2010.
An Improved Particle Swarm Optimization Algorithm and Its Application in the Community Division
Directory of Open Access Journals (Sweden)
Jiang Hao
2016-01-01
With the deepening of research on complex networks, methods for detecting and classifying communities in social networks are springing up. In this paper, the basic particle swarm algorithm is improved based on the GN algorithm. Modularity is taken as the measure of community division [1]. For dynamic network community division, a rolling calculation method is put forward. Experiments show that the improved particle swarm optimization algorithm improves the accuracy of community division and also achieves higher modularity values for dynamic communities.
Couceiro, Micael
2015-01-01
This book examines the bottom-up applicability of swarm intelligence to solving multiple problems, such as curve fitting, image segmentation, and swarm robotics. It compares the capabilities of some of the better-known bio-inspired optimization approaches, especially Particle Swarm Optimization (PSO), Darwinian Particle Swarm Optimization (DPSO) and the recently proposed Fractional Order Darwinian Particle Swarm Optimization (FODPSO), and comprehensively discusses their advantages and disadvantages. Further, it demonstrates the superiority and key advantages of using the FODPSO algorithm, suc
Application of particle swarm optimization algorithm in the heating system planning problem.
Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi
2013-01-01
Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system over a given life cycle time. Because of the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check the approach's feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the PSO algorithm. Moreover, the results also show the potential to provide useful information for decisions in the practical planning process. It is therefore believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem.
Explicit symplectic algorithms based on generating functions for charged particle dynamics
Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan
2016-07-01
Dynamics of a charged particle in canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method with a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians of the form H(x, p) = p_i f(x) or H(x, p) = x_i g(p). Applied to simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superior conservation properties and efficiency.
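For contrast with the product-separable case treated above, the familiar explicit symplectic treatment of a sum-separable Hamiltonian H = p^2/2 + V(x) is the Störmer-Verlet (leapfrog) scheme, sketched here on a harmonic oscillator to show the bounded long-term energy error that motivates symplectic methods:

```python
def leapfrog(x, p, dVdx, dt, steps):
    # Störmer-Verlet: explicit, symplectic, second-order accurate for the
    # sum-separable case H = p^2/2 + V(x). Half kick, full drift, half kick.
    for _ in range(steps):
        p -= 0.5 * dt * dVdx(x)
        x += dt * p
        p -= 0.5 * dt * dVdx(x)
    return x, p

# Harmonic oscillator V(x) = x^2/2, exact energy 0.5. Over 10^5 steps the
# energy error stays bounded instead of drifting, the hallmark property
# the abstract above extends to product-separable Hamiltonians.
x, p = leapfrog(1.0, 0.0, lambda x: x, dt=0.01, steps=100000)
energy = 0.5 * p * p + 0.5 * x * x
```

A non-symplectic explicit Euler step on the same problem would show secular energy growth over this many steps.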
Directory of Open Access Journals (Sweden)
Xun Zhang
2014-01-01
Optimal sensor placement is a key issue in the structural health monitoring of large-scale structures. However, some aspects of existing approaches require improvement, such as the empirical and unreliable selection of mode and sensor numbers and time-consuming computation. A novel improved particle swarm optimization (IPSO) algorithm is proposed to address these problems. The approach first employs the cumulative effective modal mass participation ratio to select the mode number. Three strategies are then adopted to improve the PSO algorithm. Finally, the IPSO algorithm is utilized to determine the optimal sensor number and configuration. A case study of a latticed shell model is implemented to verify the feasibility of the proposed algorithm and four different PSO algorithms. The effective independence method is also taken as a contrast experiment. The comparison results show that the optimal placement schemes obtained by the PSO algorithms are valid, and the proposed IPSO algorithm shows better convergence speed and precision.
Wang, Xingmei; Hao, Wenqian; Li, Qiming
2017-12-18
This paper proposes an adaptive cultural algorithm with improved quantum-behaved particle swarm optimization (ACA-IQPSO) to detect underwater sonar images. In the population space, to improve the searching ability of particles, the iteration count and the fitness values of particles are regarded as factors to adaptively adjust the contraction-expansion coefficient of the quantum-behaved particle swarm optimization (QPSO) algorithm. The improved quantum-behaved particle swarm optimization algorithm (IQPSO) lets particles adjust their behaviour according to their quality. In the belief space, a new update strategy is adopted to update cultural individuals according to the idea of the update strategy in the shuffled frog leaping algorithm (SFLA). Moreover, to enhance the utilization of information in the population space and belief space, the acceptance and influence functions are redesigned in the new communication protocol. The experimental results show that ACA-IQPSO can obtain good clustering centres according to the grey distribution information of underwater sonar images and accurately complete underwater object detection. Compared with other algorithms, the proposed ACA-IQPSO has good effectiveness, excellent adaptability, a powerful searching ability and high convergence efficiency. The experimental results on benchmark functions further demonstrate that the proposed ACA-IQPSO has better searching ability, convergence efficiency and stability.
Babu, Mannam Naga Praveen; Mallikarjuna, J M; Krishnankutty, P
Two-dimensional velocity fields around a freely swimming freshwater black shark fish in the longitudinal (XZ) and transverse (YZ) planes are measured using digital particle image velocimetry (DPIV). Fish generate thrust by transferring momentum to the fluid. Thrust is generated not only by the caudal fin but also by the pectoral and anal fins, whose contribution depends on the fish's morphology and swimming movements. These fins also act as roll and pitch stabilizers for the swimming fish. In this paper, studies are performed on the flow induced by the fins of a freely swimming undulatory carangiform fish (freshwater black shark, L = 26 cm) using an experimental hydrodynamic approach based on quantitative flow visualization. We used 2D PIV to visualize the water flow pattern in the wake of the caudal, pectoral and anal fins of the fish swimming at speeds of 0.5-1.5 body lengths per second. The kinematic analysis and pressure distribution of the carangiform fish are presented here. The body and fin undulations create circular flow patterns (vortices) that travel along with the body waves and change the flow around the tail to increase swimming efficiency. The wake of the different fins consists of two counter-rotating vortices about the mean path of fish motion. These wakes resemble a reverse von Karman vortex street, which is a thrust-producing wake. The velocity vectors around a C-start maneuvering fish (a straight swimming fish bending into a C shape) are also discussed. Studying flows around flapping fins will contribute to the design of bioinspired propulsors for marine vehicles.
Kinetic-Monte-Carlo-Based Parallel Evolution Simulation Algorithm of Dust Particles
Directory of Open Access Journals (Sweden)
Xiaomei Hu
2014-01-01
The evolution simulation of dust particles provides an important way to analyze the impact of dust on the environment. A KMC-based parallel algorithm is proposed to simulate the evolution of dust particles. In the parallel evolution simulation algorithm, a data distribution scheme and a communication optimization strategy are proposed to balance the load of every process and to reduce the communication overhead among processes. The experimental results show that the diffusion, sedimentation and resuspension of dust particles in a virtual campus are simulated and that the simulation time is shortened by the parallel algorithm, which makes up for the shortcomings of serial computing and makes the simulation of large-scale virtual environments possible.
A hand tracking algorithm with particle filter and improved GVF snake model
Sun, Yi-qi; Wu, Ai-guo; Dong, Na; Shao, Yi-zhe
2017-07-01
To solve the problem that accurate hand information cannot be obtained by the particle filter alone, a hand tracking algorithm based on a particle filter combined with a skin-colour-adaptive gradient vector flow (GVF) snake model is proposed. An adaptive GVF and a skin-colour-adaptive external guidance force are introduced into the traditional GVF snake model, guiding the curve to converge quickly to the deep concave regions of the hand contour and capturing the complex hand contour accurately. The algorithm performs a real-time correction of the particle filter parameters, avoiding particle drift. Experimental results show that the proposed algorithm reduces the root-mean-square error of hand tracking by 53% and improves tracking accuracy against complex and moving backgrounds, even with a large range of occlusion.
International Nuclear Information System (INIS)
Niu Lili; Qian Ming; Yu Wentao; Jin Qiaofeng; Ling Tao; Zheng Hairong; Wan Kun; Gao Shen
2010-01-01
This paper presents a new algorithm for ultrasonic particle image velocimetry (Echo PIV) that improves the accuracy and efficiency of flow velocity measurement in regions with high velocity gradients. The conventional Echo PIV algorithm has been modified by incorporating a multiple iterative algorithm, a sub-pixel method, filtering and interpolation, and a spurious vector elimination algorithm. The performance of the new algorithm is assessed by analyzing simulated images with known displacements, and ultrasonic B-mode images of in vitro laminar pipe flow, rotational flow and in vivo rat carotid arterial flow. Results on the simulated images show that the new algorithm produces a much smaller bias from the known displacements. For laminar flow, the new algorithm deviates by 1.1% from the analytically derived value, versus 8.8% for the conventional algorithm. The vector quality evaluation for rotational flow imaging shows that the new algorithm produces better velocity vectors. For in vivo rat carotid arterial flow imaging, the results from the new algorithm deviate on average by 6.6% from the Doppler-measured peak velocities, compared with 15% for the conventional algorithm. The new Echo PIV algorithm effectively improves measurement accuracy when imaging flow fields with high velocity gradients.
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-09-03
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm: the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, the state variables are divided by the RB particle filter algorithm into linear and non-linear variables, which are estimated using a Kalman filter and a particle filter, respectively; this improves the computational efficiency over using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture components based on the observational data. This improves the estimation accuracy of clock offset and skew and thereby achieves time synchronization. The time synchronization performance of the algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm achieves higher time synchronization precision than traditional time synchronization algorithms.
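The linear sub-state that Rao-Blackwellisation handles analytically is propagated by a Kalman filter. A scalar predict/update cycle is sketched below as an illustration only; the paper's clock offset/skew model is multivariate and the noise parameters here are invented for the example:

```python
def kalman_step(mean, var, q, r, z, a=1.0, h=1.0):
    # One predict/update cycle for a scalar linear-Gaussian state:
    # the analytic sub-filter that the RB particle filter runs per particle.
    mean_p = a * mean                      # predict state
    var_p = a * a * var + q                # predict variance (q: process noise)
    k = var_p * h / (h * h * var_p + r)    # Kalman gain (r: measurement noise)
    mean_u = mean_p + k * (z - h * mean_p) # correct with measurement z
    var_u = (1.0 - k * h) * var_p
    return mean_u, var_u

# Repeated measurements of 1.0 pull the estimate toward 1.0 while the
# posterior variance shrinks -- the exact-inference benefit over sampling.
m, v = 0.0, 1.0
for z in [1.0, 1.0, 1.0]:
    m, v = kalman_step(m, v, q=0.01, r=0.25, z=z)
```

In the full RB filter, each particle carries its own non-linear variables plus one such analytic Gaussian for the linear ones, which is why fewer particles suffice.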
A nowcasting technique based on application of the particle filter blending algorithm
Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai
2017-10-01
To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed using the radar mosaic at an altitude of 2.5 km obtained from the images of 12 S-band radars in Guangdong Province, China. First, a bilateral filter was applied for quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm were used to track radar echoes and retrieve the echo motion vectors; the motion vectors were then blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. The particle filter blending method is therefore superior to the traditional forecasting methods and can enhance nowcasting ability in operational weather forecasts.
A parallel algorithm for 3D particle tracking and Lagrangian trajectory reconstruction
International Nuclear Information System (INIS)
Barker, Douglas; Zhang, Yuanhui; Lifflander, Jonathan; Arya, Anshu
2012-01-01
Particle-tracking methods are widely used in fluid mechanics and multi-target tracking research because of their unique ability to reconstruct long trajectories with high spatial and temporal resolution. Researchers have recently demonstrated 3D tracking of several objects in real time, but as the number of objects increases, real-time tracking becomes impossible due to data transfer and processing bottlenecks. This problem may be solved by parallel processing. In this paper, a parallel-processing framework has been developed based on frame decomposition and programmed using the asynchronous object-oriented Charm++ paradigm. This framework can be a key step toward a scalable Lagrangian measurement system for particle-tracking velocimetry and may lead to real-time measurement capabilities. The parallel tracking algorithm was evaluated with three data sets: the particle image velocimetry standard 3D image data set #352, a uniform data set for optimal parallel performance, and a computational-fluid-dynamics-generated non-uniform data set to test trajectory reconstruction accuracy, consistency with the sequential version and scalability to more than 500 processors. The algorithm showed strong scaling up to 512 processors and no inherent limits of scalability were seen. Ultimately, up to a 200-fold speedup was observed over the serial algorithm when 256 processors were used. The parallel algorithm is adaptable and could easily be modified to use any sequential tracking algorithm that inputs frames of 3D particle location data and outputs particle trajectories.
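The sequential kernel that such a framework distributes is, at its simplest, linking particle positions between consecutive frames. A hedged nearest-neighbour sketch follows (production trackers use multi-frame predictors and global assignment, not this greedy rule):

```python
import math

def link_frames(frame_a, frame_b, max_disp):
    # Greedy nearest-neighbour linking between two frames of 3D particle
    # positions: each particle in frame_a claims its closest unclaimed
    # match in frame_b within a displacement threshold.
    links, used = [], set()
    for i, p in enumerate(frame_a):
        best, best_d = None, max_disp
        for j, q in enumerate(frame_b):
            if j in used:
                continue
            d = math.dist(p, q)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            links.append((i, best))
            used.add(best)
    return links

# Two particles whose detections are listed in a different order in the
# second frame are still matched correctly by proximity.
a = [(0.0, 0.0, 0.0), (5.0, 5.0, 0.0)]
b = [(5.1, 5.0, 0.0), (0.1, 0.0, 0.0)]
links = link_frames(a, b, max_disp=1.0)
```

Frame decomposition, as in the paper, assigns blocks of consecutive frames to workers so that many such linking steps run concurrently.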
Explicit K-symplectic algorithms for charged particle dynamics
International Nuclear Information System (INIS)
He, Yang; Zhou, Zhaoqi; Sun, Yajuan; Liu, Jian; Qin, Hong
2017-01-01
We study the Lorentz force equation of charged particle dynamics by considering its K-symplectic structure. As the Hamiltonian of the system can be decomposed into four parts, we are able to construct numerical methods that preserve the K-symplectic structure based on the Hamiltonian splitting technique. The newly derived numerical methods are explicit and are shown in numerical experiments to be stable over long-term simulations. The error convergence as well as the long-term energy conservation of the numerical solutions is also analyzed by means of the Darboux transformation.
Explicit K-symplectic algorithms for charged particle dynamics
Energy Technology Data Exchange (ETDEWEB)
He, Yang [School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083 (China); Zhou, Zhaoqi [LSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100190 (China); Sun, Yajuan, E-mail: sunyj@lsec.cc.ac.cn [LSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100190 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Liu, Jian [Department of Modern Physics and School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui 230026 (China); Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026 (China); Qin, Hong [Department of Modern Physics and School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui 230026 (China); Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States)
2017-02-12
We study the Lorentz force equation of charged particle dynamics by considering its K-symplectic structure. As the Hamiltonian of the system can be decomposed into four parts, we are able to construct numerical methods that preserve the K-symplectic structure based on the Hamiltonian splitting technique. The newly derived numerical methods are explicit and are shown in numerical experiments to be stable over long-term simulations. The error convergence as well as the long-term energy conservation of the numerical solutions is also analyzed by means of the Darboux transformation.
Parallel-vector algorithms for particle simulations on shared-memory multiprocessors
International Nuclear Information System (INIS)
Nishiura, Daisuke; Sakaguchi, Hide
2011-01-01
Over the last few decades, the computational demands of massive particle-based simulations for both scientific and industrial purposes have been continuously increasing. Hence, considerable efforts are being made to develop parallel computing techniques on various platforms. In such simulations, particles move freely within a given space, so on a distributed-memory system, load balancing, i.e., assigning an equal number of particles to each processor, is not guaranteed. Shared-memory systems achieve better load balancing for particle models, but suffer from the intrinsic drawback of memory access competition, particularly during (1) pairing of contact candidates from among neighboring particles and (2) force summation for each particle. Here, novel algorithms are proposed to overcome these two problems. For the first problem, the key is a pre-conditioning process during which particle labels are sorted by the label of the cell in the domain to which the particles belong. A list of contact candidates is then constructed by pairing the sorted particle labels. For the latter problem, a table comprising the list indexes of the contact candidate pairs is created and used to sum the contact forces acting on each particle for all contacts according to Newton's third law. With just these methods, memory access competition is avoided without additional redundant procedures. The parallel efficiency and compatibility of these two algorithms were evaluated in discrete element method (DEM) simulations on four types of shared-memory parallel computers: a multicore multiprocessor computer, a scalar supercomputer, a vector supercomputer, and a graphics processing unit. The computational efficiency of a DEM code was found to be drastically improved with our algorithms on all but the scalar supercomputer. The developed parallel algorithms are thus useful on shared-memory parallel computers with sufficient memory bandwidth.
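The pre-conditioning idea described above, grouping particles by cell label and pairing within neighbouring cells, can be sketched in serial form as follows; the dictionary-based grouping here stands in for the sorted-label pass, and the 2D layout is a simplification:

```python
def contact_candidates(positions, cell_size):
    # Bin particle indices by cell label, then pair particles in the same
    # or adjacent cells -- the contact-candidate list a DEM step filters
    # further by actual overlap checks.
    cells = {}
    for i, (x, y) in enumerate(positions):
        key = (int(x // cell_size), int(y // cell_size))
        cells.setdefault(key, []).append(i)
    pairs = set()
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), []):
                    for i in members:
                        if i < j:  # each unordered pair recorded once
                            pairs.add((i, j))
    return sorted(pairs)

# Particles 0 and 1 share a cell; particle 2 is far away, so only one
# candidate pair is produced instead of all N(N-1)/2 pairs.
pts = [(0.1, 0.1), (0.2, 0.2), (5.0, 5.0)]
cands = contact_candidates(pts, cell_size=1.0)
```

In the shared-memory version described above, the point of sorting by cell label first is that threads then read contiguous index ranges instead of competing for scattered memory locations.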
A neuro-fuzzy inference system tuned by particle swarm optimization algorithm for sensor monitoring
Energy Technology Data Exchange (ETDEWEB)
Oliveira, Mauro Vitor de [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil). Div. de Instrumentacao e Confiabilidade Humana]. E-mail: mvitor@ien.gov.br; Schirru, Roberto [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Lab. de Monitoracao de Processos
2005-07-01
A neuro-fuzzy inference system (ANFIS) tuned by a particle swarm optimization (PSO) algorithm has been developed to monitor a relevant sensor in a nuclear plant using the information from other sensors. The antecedent parameters of the ANFIS that estimates the relevant sensor signal are optimized by a PSO algorithm, and the consequent parameters are obtained with a least-squares algorithm. The proposed sensor-monitoring algorithm was demonstrated through the estimation of the nuclear power value in a pressurized water reactor, using six other correlated signals as inputs to the ANFIS. The obtained results are compared with those of two similar ANFIS models, one using gradient descent (GD) and the other a genetic algorithm (GA) as the antecedent-parameter training algorithm. (author)
A Novel Chaotic Particle Swarm Optimization Algorithm for Parking Space Guidance
Directory of Open Access Journals (Sweden)
Na Dong
2016-01-01
Full Text Available An evolutionary approach to parking space guidance based upon a novel Chaotic Particle Swarm Optimization (CPSO) algorithm is proposed. In the newly proposed CPSO algorithm, chaotic dynamics is incorporated into the position updating rules of Particle Swarm Optimization to improve the diversity of solutions and to avoid being trapped in local optima. This novel approach, which combines the strengths of Particle Swarm Optimization and chaotic dynamics, is then applied to the route optimization (RO) problem of parking lots, an important issue in the management systems of large-scale parking lots. It is used to find optimized paths between any source and destination nodes in the route network. Route optimization problems based on real parking lots are introduced for analysis, and the effectiveness and practicability of this novel optimization algorithm for parking space guidance are verified through the application results.
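As a concrete illustration of the core idea, coupling the PSO position update with a chaotic sequence, here is a minimal sketch using the logistic map as the chaotic driver. The update coefficients, perturbation scale, and the sphere test function are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def logistic(z):
    """Logistic map, a classic chaotic sequence used to perturb PSO updates."""
    return 4.0 * z * (1.0 - z)

def cpso_minimise(f, dim=2, n=20, iters=200, seed=0):
    """Chaos-enhanced PSO sketch: the usual velocity rule plus a small
    chaotic term added to each new position to diversify the swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    z = rng.uniform(0.1, 0.9, (n, dim))            # chaotic state per coordinate
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pval)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        z = logistic(z)                             # advance the chaotic sequence
        x = x + v + 0.05 * (z - 0.5)                # chaotic diversification term
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)].copy()
    return g, float(pval.min())
```

The chaotic term never vanishes, so the swarm retains some diversity even after the personal and global bests have converged, which is the mechanism the abstract credits for escaping local optima.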
A Novel Adaptive Particle Swarm Optimization Algorithm with Foraging Behavior in Optimization Design
Directory of Open Access Journals (Sweden)
Liu Yan
2018-01-01
Full Text Available The method of repeated trial and proofreading is generally used in conventional reducer design, but it is inefficient and the resulting reducer is often large. To address these problems, this paper presents an adaptive particle swarm optimization algorithm with foraging behavior, in which the bacterial foraging process is introduced into the adaptive particle swarm optimization algorithm to provide the functions of particle chemotaxis, swarming, reproduction, elimination and dispersal, improving the ability of local search and avoiding premature behavior. Verification on typical test functions and application to the optimization design of a reducer structure with discrete and continuous variables show that the new algorithm has the advantages of good reliability, strong searching ability and high accuracy. It can be used in engineering design and has strong applicability.
Directory of Open Access Journals (Sweden)
Weitian Lin
2014-01-01
Full Text Available The particle swarm optimization algorithm (PSOA) is an effective optimization tool. However, it has a tendency to get stuck in near-optimal solutions, especially for middle- and large-size problems, and it is difficult to improve solution accuracy by fine-tuning parameters. To address this insufficiency, this paper studies a combined local and global search particle swarm optimization algorithm (LGSCPSOA), analyzes its convergence, and obtains its convergence conditions. The algorithm is tested on a set of 8 benchmark continuous functions and its optimization results are compared with those of the original particle swarm optimization algorithm (OPSOA). Experimental results indicate that the LGSCPSOA significantly improves search performance, especially on the middle- and large-size benchmark functions.
Directory of Open Access Journals (Sweden)
Jianwen Guo
2016-01-01
Full Text Available All equipment must be maintained during its lifetime to ensure normal operation. Maintenance plays a critical role in the success of manufacturing enterprises. This paper proposes a preventive maintenance period optimization model (PMPOM) to find an optimal preventive maintenance period. Making use of the advantages of particle swarm optimization (PSO) and the cuckoo search (CS) algorithm, a hybrid optimization algorithm combining PSO and CS is proposed to solve the PMPOM problem. Tests on benchmark functions show that the proposed algorithm exhibits more outstanding performance than particle swarm optimization and cuckoo search alone. Experiment results show that the proposed algorithm has the advantages of strong optimization ability and fast convergence speed in solving the PMPOM problem.
A Parallel Adaptive Particle Swarm Optimization Algorithm for Economic/Environmental Power Dispatch
Directory of Open Access Journals (Sweden)
Jinchao Li
2012-01-01
Full Text Available A parallel adaptive particle swarm optimization algorithm (PAPSO) is proposed for economic/environmental power dispatch. It overcomes premature convergence, slow convergence in the late evolutionary phase, and the lack of good direction in the particles' evolutionary process. The search population is randomly divided into several subpopulations. For each subpopulation, the optimal solution is searched synchronously using the proposed method, and thus parallel computing is realized. To avoid converging to a local optimum, a crossover operator is introduced to exchange information among the subpopulations while sustaining the diversity of the population. Simulation results show that the proposed algorithm can effectively solve the economic/environmental operation problem of hydropower generating units. Performance comparisons show that the solution from the proposed method is better than those from the conventional particle swarm algorithm and other optimization algorithms.
Xu, Sheng-Hua; Liu, Ji-Ping; Zhang, Fu-Hao; Wang, Liang; Sun, Li-Jian
2015-08-27
A combination of a genetic algorithm and particle swarm optimization (PSO) for vehicle routing problems with time windows (VRPTW) is proposed in this paper. The improvements of the proposed algorithm include: using a particle real-number encoding method to decode the route and alleviate the computation burden, applying a linear decreasing function based on the number of iterations to balance global and local exploration abilities, and integrating the crossover operator of the genetic algorithm to avoid premature convergence and local minima. The experimental results show that the proposed algorithm is not only efficient and competitive with other published results but can also obtain better solutions to the VRPTW problem. One new best-known solution for this benchmark problem is also outlined in the following.
He Wang
2015-01-01
Demand prediction is an important part of, and the first premise for, supply chain management in enterprises of all kinds, and it has become one of the difficult and hot research topics for researchers in the field. This paper takes fresh food demand prediction as an example and presents a new algorithm for predicting demand in a fresh food supply chain. First, the working principle and the root causes of the defects of the particle swarm optimization algorithm are analyzed in the study; Second, the...
Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao
2016-06-01
An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local optimum problem encountered by traditional particle swarm optimization when estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and does not fall into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared to the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that of the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias fields.
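The abstract does not give the exact premature-convergence indicator, so the sketch below uses a simple normalized fitness spread as a stand-in: when the swarm's fitness values cluster (low spread), the inertia weight is pushed up to re-encourage global exploration. The indicator and the mapping are assumptions for illustration only.

```python
import numpy as np

def adaptive_inertia(fitness, w_min=0.4, w_max=0.9):
    """Map a premature-convergence indicator to an inertia weight.

    The indicator here is the swarm's fitness standard deviation normalised
    by the mean fitness magnitude (an illustrative assumption, not the
    paper's formula). Small spread means the swarm has clustered, so a
    larger inertia weight is returned to restore exploration.
    """
    fitness = np.asarray(fitness, dtype=float)
    spread = np.std(fitness) / (abs(np.mean(fitness)) + 1e-12)
    s = min(float(spread), 1.0)                  # s -> 0 when clustered
    return w_min + (w_max - w_min) * (1.0 - s)   # clustered swarm -> larger w
```

In a full optimizer this value would replace the fixed `w` in the velocity update each iteration, which is how the adaptation keeps the swarm from settling into a local optimum of the bias-field fit.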
A simple algorithm for measuring particle size distributions on an uneven background from TEM images
DEFF Research Database (Denmark)
Gontard, Lionel Cervera; Ozkaya, Dogan; Dunin-Borkowski, Rafal E.
2011-01-01
Nanoparticles have a wide range of applications in science and technology. Their sizes are often measured using transmission electron microscopy (TEM) or X-ray diffraction. Here, we describe a simple computer algorithm for measuring particle size distributions from TEM images in the presence of an uneven background. An application to images of heterogeneous catalysts is presented.
Research on Multiple Particle Swarm Algorithm Based on Analysis of Scientific Materials
Directory of Open Access Journals (Sweden)
Zhao Hongwei
2017-01-01
Full Text Available This paper proposes an improved particle swarm optimization algorithm based on an analysis of scientific materials. The core idea of MPSO (Multiple Particle Swarm Algorithm) is to extend single-population PSO to interacting multi-swarms, which addresses the problem of being trapped in local minima during later iterations due to a lack of diversity. The simulation results show that the convergence rate is fast, the search performance is good, and very good results have been achieved.
Directory of Open Access Journals (Sweden)
Wei Li
2015-01-01
Full Text Available We propose a new optimization algorithm inspired by the formation and change of clouds in nature, referred to as the Cloud Particles Differential Evolution (CPDE) algorithm. The cloud is assumed to have three states in the proposed algorithm: the gaseous state represents global exploration; the liquid state represents the intermediate process from global exploration to local exploitation; and the solid state represents local exploitation. The best solution found so far acts as a nucleus. In the gaseous state, the nucleus leads the population to explore by a condensation operation. In the liquid state, cloud particles carry out macro-local exploitation by a liquefaction operation. A new mutation strategy called cloud differential mutation is introduced to address the problem that the misleading effect of a nucleus may cause premature convergence. In the solid state, cloud particles carry out micro-local exploitation by a solidification operation. The effectiveness of the algorithm is validated on different benchmark problems and the results are compared with eight well-known optimization algorithms. The statistical analysis of performance on 10 benchmark functions and the CEC2013 problems indicates that CPDE attains good performance.
Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.
Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie
2018-05-04
Particle swarm optimization is a powerful metaheuristic population-based global optimization algorithm. However, when applied to non-separable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant particle swarm optimization algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared to the rotation-invariant Particle Swarm Optimization (PSO) algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field is carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also presents better performance than a Genetic Algorithm optimization method in the optimization of the ReaxFF-lg correction model parameters. The computational framework is implemented in a standalone C++ code that allows straightforward development of ReaxFF reactive force fields.
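The enhancement described, adding isotropic Gaussian mutation to a rotation-invariant PSO, can be isolated as a small operator. The following is a generic sketch of such an operator (the function name, defaults, and selection rule are assumptions), not the paper's C++ implementation:

```python
import numpy as np

def gaussian_mutate(x, sigma=0.1, rate=0.2, rng=None):
    """Isotropic Gaussian mutation for a swarm of positions.

    Randomly selected particles are displaced by a spherically symmetric
    normal step; because the step distribution has no preferred axis, the
    operator preserves the rotation invariance of the underlying PSO.
    """
    rng = rng if rng is not None else np.random.default_rng()
    x = np.array(x, dtype=float)
    mask = rng.random(len(x)) < rate                      # particles to mutate
    x[mask] += rng.normal(0.0, sigma, (int(mask.sum()), x.shape[1]))
    return x
```

In the hybrid algorithm this operator would be applied after each position update, injecting diversity that a plain rotation-invariant PSO lacks on multimodal landscapes.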
Artificial Fish Swarm Algorithm-Based Particle Filter for Li-Ion Battery Life Prediction
Directory of Open Access Journals (Sweden)
Ye Tian
2014-01-01
Full Text Available An intelligent online prognostic approach is proposed for predicting the remaining useful life (RUL) of lithium-ion (Li-ion) batteries based on the artificial fish swarm algorithm (AFSA) and the particle filter (PF), an integrated approach combining a model-based method with a data-driven method. The parameters used in the empirical model, which is based on the capacity fade trends of Li-ion batteries, are identified using the tracking ability of the PF. AFSA-PF aims to improve the performance of the basic PF. By driving the prior particles to the domain with high likelihood, AFSA-PF allows global optimization and prevents particle degeneracy, thereby improving particle distribution and increasing prediction accuracy and algorithm convergence. Data provided by NASA are used to verify this approach and compare it with the basic PF and the regularized PF. AFSA-PF is shown to be more accurate and precise.
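For context, the baseline that AFSA-PF sets out to improve is the basic bootstrap particle filter: propagate particles through the state model, weight them by the measurement likelihood, and resample to fight degeneracy. The sketch below assumes a simple random-walk state model with Gaussian measurement noise; it is illustrative only and is not the paper's battery capacity model.

```python
import numpy as np

def particle_filter(observations, n=500, q=0.1, r=0.5, seed=1):
    """Bootstrap particle filter with systematic resampling.

    State model: random walk with process noise q (an assumption for this
    sketch). Measurement model: observation = state + Gaussian noise r.
    Returns the weighted state estimate at each step.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n)                    # initial particle cloud
    estimates = []
    for z in observations:
        x = x + rng.normal(0.0, q, n)              # propagate the state model
        w = np.exp(-0.5 * ((z - x) / r) ** 2)      # measurement likelihood
        w /= w.sum()
        estimates.append(float(np.sum(w * x)))     # weighted state estimate
        # systematic resampling: n evenly spaced pointers, one random offset
        pointers = (rng.random() + np.arange(n)) / n
        idx = np.searchsorted(np.cumsum(w), pointers).clip(max=n - 1)
        x = x[idx]
    return estimates
```

Degeneracy shows up when nearly all weight concentrates on a few particles; AFSA-PF's contribution is to move the prior particles toward high-likelihood regions before weighting, so the resampling step has better material to work with.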
A Constructive Data Classification Version of the Particle Swarm Optimization Algorithm
Directory of Open Access Journals (Sweden)
Alexandre Szabo
2013-01-01
Full Text Available The particle swarm optimization algorithm was originally introduced to solve continuous parameter optimization problems. It was soon modified to solve other types of optimization tasks and also to be applied to data analysis. In the latter case, however, there are few works in the literature that deal with the problem of dynamically building the architecture of the system. This paper introduces new particle swarm algorithms specifically designed to solve classification problems. The first proposal, named Particle Swarm Classifier (PSClass), is a derivation of a particle swarm clustering algorithm and its architecture, as in most classifiers, is pre-defined. The second proposal, named Constructive Particle Swarm Classifier (cPSClass), uses ideas from the immune system to automatically build the swarm. A sensitivity analysis of the growing procedure of cPSClass and an investigation into a proposed pruning procedure for this algorithm are performed. The proposals were applied to a wide range of databases from the literature and the results show that they are competitive in relation to other approaches, with the advantage of having a dynamically constructed architecture.
The Improved Locating Algorithm of Particle Filter Based on ROS Robot
Fang, Xun; Fu, Xiaoyang; Sun, Ming
2018-03-01
This paper analyzes the basic theory and primary algorithms of a real-time locating system and SLAM technology based on a ROS robot. It proposes an improved particle filter locating algorithm that effectively reduces the matching time between the laser radar and the map; additional ultra-wideband technology directly accelerates the overall efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, re-sampling has been reduced by about 5/6, which directly eliminates the corresponding matching work in the robot algorithm.
Algorithmic Information Dynamics of Persistent Patterns and Colliding Particles in the Game of Life
Zenil, Hector
2018-02-18
We demonstrate how to apply and exploit the concept of algorithmic information dynamics in the characterization and classification of dynamic and persistent patterns, motifs and colliding particles, using, without loss of generality, Conway's Game of Life (GoL) cellular automaton as a case study. We analyze the distribution of prevailing motifs that occur in GoL from the perspective of algorithmic probability. We demonstrate how the tools introduced are an alternative to computable measures such as entropy and compression algorithms, which are often insensitive to small changes and to features of a non-statistical nature in the study of evolving complex systems and their emergent structures.
Analysis of Population Diversity of Dynamic Probabilistic Particle Swarm Optimization Algorithms
Directory of Open Access Journals (Sweden)
Qingjian Ni
2014-01-01
Full Text Available In evolutionary algorithms, population diversity is an important factor in solving performance. In this paper, drawing on population diversity analysis methods from other evolutionary algorithms, three indicators are introduced as measures of population diversity in PSO algorithms: the standard deviation of population fitness values, the population entropy, and the Manhattan norm of the standard deviation of population positions. The three measures are used to analyze the population diversity of a relatively new PSO variant, Dynamic Probabilistic Particle Swarm Optimization (DPPSO). The results show that the three measures fully reflect the evolution of population diversity in DPPSO algorithms from different angles, and the impact of population diversity on the DPPSO variants is also discussed. The conclusions on population diversity in DPPSO can be used to analyze, design, and improve DPPSO algorithms, thus improving optimization performance, and could also be beneficial for understanding the working mechanism of DPPSO theoretically.
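The three indicators named above are easy to state concretely for a real-valued swarm. A minimal sketch follows; the histogram-based entropy is one common discretization choice, assumed here since the abstract does not specify one.

```python
import numpy as np

def diversity_measures(positions, fitness, bins=10):
    """Compute the three population-diversity indicators from the abstract:
    (1) standard deviation of fitness values,
    (2) population entropy over a fitness histogram,
    (3) Manhattan norm of the per-dimension std of particle positions."""
    fitness = np.asarray(fitness, dtype=float)
    fit_std = float(np.std(fitness))
    counts, _ = np.histogram(fitness, bins=bins)
    p = counts[counts > 0] / counts.sum()          # drop empty bins, normalise
    entropy = float(-np.sum(p * np.log(p)))
    pos_spread = float(np.sum(np.std(np.asarray(positions, dtype=float), axis=0)))
    return fit_std, entropy, pos_spread
```

All three indicators go to zero as the swarm collapses onto a single point, so tracking them over iterations exposes exactly the loss of diversity that the paper studies in DPPSO.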
DEFF Research Database (Denmark)
Nica, Florin Valentin Traian; Ritchie, Ewen; Leban, Krisztina Monika
2013-01-01
Nowadays the requirements imposed by industry and the economy ask for better quality and performance while the price must be maintained in the same range. To achieve this goal, optimization must be introduced in the design process. Two of the best known optimization algorithms for machine design, the genetic algorithm and particle swarm optimization, are briefly presented in this paper. These two algorithms are tested to determine their performance on five different benchmark test functions, based on three requirements: precision of the result, number of iterations, and calculation time. Both algorithms are also tested on an analytical design process of a Transverse Flux Permanent Magnet Generator to observe their performance in an electrical machine design application.
Iswari, T.; Asih, A. M. S.
2018-04-01
In a logistics system, transportation plays an important role in connecting every element of the supply chain, but it can also produce the greatest cost. Therefore, it is important to keep transportation costs as low as possible. One way to minimize transportation cost is to optimize the routing of vehicles, which is the Vehicle Routing Problem (VRP). The most common type of VRP is the Capacitated Vehicle Routing Problem (CVRP), in which each vehicle has its own capacity and the total demand served by a vehicle must not exceed that capacity. CVRP belongs to the class of NP-hard problems, which makes it complex to solve: exact algorithms become highly time-consuming as problem sizes increase. Thus, for large-scale problem instances, as typically found in industrial applications, finding an optimal solution is not practicable. Therefore, this paper uses two metaheuristic approaches to solve CVRP: the Genetic Algorithm and Particle Swarm Optimization. It compares the results of both algorithms and examines the performance of each. The results show that both algorithms perform well in solving CVRP but still need to be improved. From algorithm testing and a numerical example, the Genetic Algorithm yields a better solution than Particle Swarm Optimization in total distance travelled.
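For reference, the objective that both metaheuristics optimize can be written down directly: the total depot-to-depot distance of all routes, subject to the capacity constraint. A minimal sketch follows; taking node 0 as the depot and using Euclidean distances are assumptions of this example, not details from the paper.

```python
import numpy as np

def route_cost(routes, coords, demands, capacity):
    """CVRP objective sketch: total travelled distance over all vehicle
    routes, where each route starts and ends at the depot (node 0) and the
    demand it serves must not exceed the vehicle capacity."""
    total = 0.0
    for route in routes:
        if sum(demands[i] for i in route) > capacity:
            raise ValueError("route exceeds vehicle capacity")
        path = [0] + list(route) + [0]             # depot -> customers -> depot
        for a, b in zip(path, path[1:]):
            total += float(np.hypot(*(coords[a] - coords[b])))
    return total
```

Both the GA and PSO in the paper would evaluate candidate route assignments through a function of this kind, so the comparison between them reduces to which search strategy finds the lower `route_cost`.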
Roman, Marco; Rigo, Chiara; Castillo-Michel, Hiram; Munivrana, Ivan; Vindigni, Vincenzo; Mičetić, Ivan; Benetti, Federico; Manodori, Laura; Cairns, Warren R L
2016-07-01
Silver nanoparticles (AgNPs) are increasingly used in medical devices as innovative antibacterial agents, but no data are currently available on their chemical transformations and fate in vivo in the human body, particularly on their potential to reach the circulatory system. To study the processes involving AgNPs in human plasma and blood, we developed an analytical method based on hydrodynamic chromatography (HDC) coupled to inductively coupled plasma mass spectrometry (ICP-MS) in single-particle detection mode. An innovative algorithm was implemented to deconvolute the signals of dissolved Ag and AgNPs and to extrapolate a multiparametric characterization of the particles in the same chromatogram. From a single injection, the method provides the concentration of dissolved Ag and the distribution of AgNPs in terms of hydrodynamic diameter, mass-derived diameter, number and mass concentration. This analytical approach is robust and suitable to study quantitatively the dynamics and kinetics of AgNPs in complex biological fluids, including processes such as agglomeration, dissolution and formation of protein coronas. The method was applied to study the transformations of AgNP standards and an AgNP-coated dressing in human plasma, supported by micro X-ray fluorescence (μXRF) and micro X-ray absorption near-edge spectroscopy (μXANES) speciation analysis and imaging, and to investigate, for the first time, the possible presence of AgNPs in the blood of three burn patients treated with the same dressing. Together with our previous studies, the results strongly support the hypothesis that the systemic mobilization of the metal after topical administration of AgNPs is driven by their dissolution in situ. Graphical Abstract Simplified scheme of the combined analytical approach adopted for studying the chemical dynamics of AgNPs in human plasma/blood.
V.A.F. Dallagnol (V. A F); J.H. van den Berg (Jan); L. Mous (Lonneke)
2009-01-01
In this paper, a comparison is presented of the application of particle swarm optimization and genetic algorithms to portfolio management, in a constrained portfolio optimization problem where no short sales are allowed. The objective function to be minimized is the value at risk
The Study on Food Sensory Evaluation based on Particle Swarm Optimization Algorithm
Hairong Wang; Huijuan Xu
2015-01-01
This study explores the procedures and methods for establishing a food sensory evaluation system based on the particle swarm optimization algorithm, by explaining the interpretation of sensory evaluation and sensory analysis, combined with the application of sensory evaluation in the food industry.
Czech Academy of Sciences Publication Activity Database
Larentzos, J.P.; Brennan, J.K.; Moore, J.D.; Lísal, Martin; Mattson, W.D.
2014-01-01
Roč. 185, č. 7 (2014), s. 1987-1998 ISSN 0010-4655 Grant - others:ARL(US) W911NF-10-2-0039 Institutional support: RVO:67985858 Keywords: dissipative particle dynamics * Shardlow splitting algorithm * numerical integration Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 3.112, year: 2014
Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms
Bianchi, E.; Doppelbauer, G.; Filion, L.C.; Dijkstra, M.; Kahl, G.
2012-01-01
We consider several patchy particle models that have been proposed in the literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the
Design of Wire Antennas by Using an Evolved Particle Swarm Optimization Algorithm
Lepelaars, E.S.A.M.; Zwamborn, A.P.M.; Rogovic, A.; Marasini, C.; Monorchio, A.
2007-01-01
A Particle Swarm Optimization (PSO) algorithm has been used in conjunction with a full-wave numerical code based on the Method of Moments (MoM) to design and optimize wire antennas. The PSO is a robust stochastic evolutionary numerical technique that is very effective in optimizing multidimensional
Romano, Paul Kollath
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O( N ) whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes---in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with
Identification of nuclear power plant transients using the Particle Swarm Optimization algorithm
International Nuclear Information System (INIS)
Canedo Medeiros, Jose Antonio Carlos; Schirru, Roberto
2008-01-01
In order to help nuclear power plant operators reduce their cognitive load and increase their available time to keep the plant operating in a safe condition, transient identification systems have been devised to help operators identify possible plant transients and take fast and correct actions in due time. In the design of classification systems for identifying nuclear power plant transients, several artificial intelligence techniques have been used, involving expert systems, neuro-fuzzy systems and genetic algorithms. In this work we explore the ability of the Particle Swarm Optimization (PSO) algorithm as a tool for optimizing a distance-based discrimination transient classification method, also giving an innovative solution for searching for the best set of prototypes for the identification of transients. The Particle Swarm Optimization algorithm was successfully applied to the optimization of a nuclear power plant transient identification problem. Compared to similar methods found in the literature, the PSO has shown better results
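The distance-based discrimination step that the PSO tunes can be sketched as a nearest-prototype classifier. The prototype vectors and class labels below are hypothetical; in the paper they would be the PSO-optimized prototype set for the plant's transient signatures.

```python
import numpy as np

def classify_transient(signal, prototypes, labels):
    """Minimum-distance discrimination: assign an observed plant signature
    to the class of the nearest prototype vector (Euclidean distance).
    The prototypes themselves are what the PSO searches for."""
    d = np.linalg.norm(np.asarray(prototypes, dtype=float)
                       - np.asarray(signal, dtype=float), axis=1)
    return labels[int(np.argmin(d))]
```

With this classifier fixed, the optimization problem the PSO solves is choosing prototype positions that maximize classification accuracy over a set of labelled transient recordings.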
A tracking algorithm for the reconstruction of the daughters of long-lived particles in LHCb
Dendek, Adam Mateusz
2018-01-01
Poster presented 5 Jun 2018, Library, Centro San Domenico (LHC experiments poster session); speaker: Katharina Mueller (Universitaet Zuerich (CH)). The LHCb experiment at CERN operates a high-precision and robust tracking system to reach its physics goals, including precise measurements of CP-violation phenomena in the heavy-flavour quark sector and searches for New Physics beyond the Standard Model. The track reconstruction procedure is performed by a number of algorithms. One of these, PatLongLivedTracking, is optimised to reconstruct "downstream tracks", which are tracks originating from decays outside the LHCb vertex detector of long-lived particles, such as Ks or Λ0. After an overview of the LHCb tracking system, we provide a detailed description of the LHCb downstream track reconstruction algorithm. Its computational intelligence part is described in detail, including the adaptation of the employed...
Yu, Chaoyin; Yuan, Zhengwu; Wu, Yuanfeng
2017-10-01
Hyperspectral image unmixing is an important part of hyperspectral data analysis. Mixed-pixel decomposition consists of two steps: endmember extraction (finding the unique signatures of pure ground components) and abundance estimation (the proportion of each endmember in each pixel). Recently, a Discrete Particle Swarm Optimization (DPSO) algorithm was proposed to extract endmembers accurately with high optimization performance. However, the DPSO algorithm has very high computational complexity, which makes the endmember extraction procedure very time-consuming for hyperspectral image unmixing. Thus, in this paper, the DPSO endmember extraction algorithm was parallelized, implemented on the CUDA (GPU K20) platform, and evaluated on real hyperspectral remote sensing data. The experimental results show that, with an increasing number of particles, the parallelized version obtained much higher computing efficiency while maintaining the same endmember extraction accuracy.
Directory of Open Access Journals (Sweden)
Yu Huang
Full Text Available Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization, and can essentially be formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve parameter estimation for fractional-order chaotic systems. QPPSO exploits the parallel characteristic of quantum computing, which increases the amount of computation performed in each generation exponentially. The behavior of particles in quantum space is governed by the quantum evolution equation, which consists of the current rotation angle, the individual optimal quantum rotation angle, and the global optimal quantum rotation angle. Numerical simulations based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.
The Splashback Radius of Halos from Particle Dynamics. I. The SPARTA Algorithm
Diemer, Benedikt
2017-07-01
Motivated by the recent proposal of the splashback radius as a physical boundary of dark-matter halos, we present a parallel computer code for Subhalo and PARticle Trajectory Analysis (SPARTA). The code analyzes the orbits of all simulation particles in all host halos, billions of orbits in the case of typical cosmological N-body simulations. Within this general framework, we develop an algorithm that accurately extracts the location of the first apocenter of particles after infall into a halo, or splashback. We define the splashback radius of a halo as the smoothed average of the apocenter radii of individual particles. This definition allows us to reliably measure the splashback radii of 95% of host halos above a resolution limit of 1000 particles. We show that, on average, the splashback radius and mass are converged to better than 5% accuracy with respect to mass resolution, snapshot spacing, and all free parameters of the method.
Hybrid Artificial Bee Colony Algorithm and Particle Swarm Search for Global Optimization
Directory of Open Access Journals (Sweden)
Wang Chun-Feng
2014-01-01
Full Text Available The artificial bee colony (ABC) algorithm is one of the most recent swarm-intelligence-based algorithms and has been shown to be competitive with other population-based algorithms. However, ABC still has a weakness in its solution search equation, which is good at exploration but poor at exploitation. To overcome this problem, we propose a novel artificial bee colony algorithm based on a particle swarm search mechanism. In this algorithm, to improve the convergence speed, the initial population is first generated using good point set theory rather than random selection. Secondly, to enhance the exploitation ability, the employed bees, onlookers, and scouts utilize the PSO mechanism to search for new candidate solutions. Finally, to further improve the searching ability, a chaotic search operator is applied to the best solution of the current iteration. Our algorithm is tested on some well-known benchmark functions and compared with other algorithms. The results show that our algorithm has good performance.
Algorithms for the optimization of RBE-weighted dose in particle therapy.
Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M
2013-01-21
We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. The dose calculations consider carbon ions, with biological effects calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms in GSI's treatment planning system TRiP98, such as the BFGS algorithm and the method of conjugate gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented in terms of convergence over iterations and computation time. We found that the Fletcher-Reeves variant of the conjugate gradient method had the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes, leading to good dose distributions. Finally, we discuss future goals concerning dose optimization in particle therapy that might benefit from fast optimization solvers.
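The Fletcher-Reeves conjugate gradient scheme singled out above can be sketched as follows. This is the generic textbook form with a simple Armijo backtracking line search, not the modified procedure implemented in TRiP98:

```python
def cg_fletcher_reeves(f, grad, x0, steps=50):
    """Nonlinear conjugate gradients with the Fletcher-Reeves beta update;
    a sketch, not GSI's TRiP98 implementation."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]                            # initial steepest descent
    for _ in range(steps):
        fx = f(x)
        gd = sum(gi * di for gi, di in zip(g, d))    # directional derivative
        alpha = 1.0
        while (f([xi + alpha * di for xi, di in zip(x, d)])
               > fx + 1e-4 * alpha * gd and alpha > 1e-12):
            alpha *= 0.5                             # Armijo backtracking
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        # Fletcher-Reeves: beta = |g_new|^2 / |g|^2
        beta = sum(t * t for t in g_new) / max(sum(t * t for t in g), 1e-30)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# Toy quadratic objective standing in for the RBE-weighted dose metric.
x_min = cg_fletcher_reeves(
    f=lambda v: sum((t - 1.0) ** 2 for t in v),
    grad=lambda v: [2.0 * (t - 1.0) for t in v],
    x0=[3.0, -2.0, 0.0],
)
```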
Dagum, Leonardo
1989-01-01
The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.
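The ranking step described above, assigning each particle its destination slot when particles are ordered by cell, can be illustrated with a serial counting-based sketch. The actual algorithm is data-parallel on the Connection Machine, so this Python version only conveys the idea:

```python
def rank_by_cell(cell_of_particle, n_cells):
    """Counting-based ranking: for each particle, the slot it occupies once
    all particles are sorted by cell index; O(N + C), stable within a cell."""
    counts = [0] * n_cells
    for c in cell_of_particle:
        counts[c] += 1
    # Exclusive prefix sum gives the first slot owned by each cell.
    start, total = [0] * n_cells, 0
    for i in range(n_cells):
        start[i], total = total, total + counts[i]
    rank = [0] * len(cell_of_particle)
    offset = [0] * n_cells
    for p, c in enumerate(cell_of_particle):
        rank[p] = start[c] + offset[c]
        offset[c] += 1
    return rank

# Four particles living in cells 2, 0, 1, 0 on a 3-cell grid.
ranks = rank_by_cell([2, 0, 1, 0], n_cells=3)
```

On parallel hardware the prefix sum becomes a scan and the per-cell offsets a segmented scan, which is where the speedup over a general sort comes from.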
Genetic particle swarm parallel algorithm analysis of optimization arrangement on mistuned blades
Zhao, Tianyu; Yuan, Huiqun; Yang, Wenjun; Sun, Huagang
2017-12-01
This article introduces a method of mistuned parameter identification consisting of static frequency testing of blades, dichotomy and finite element analysis. A lumped-parameter model of an engine bladed-disc system is then set up. A blade arrangement optimization method, namely the genetic particle swarm optimization algorithm, is presented. It combines a discrete particle swarm optimization with a genetic algorithm, from which both local and global search ability are obtained. A CUDA-based co-evolution particle swarm optimization using a graphics processing unit is presented and its performance is analysed. The results show that the optimized arrangement can reduce the amplitude and localization of the forced vibration response of a bladed-disc system, while optimization based on the CUDA framework improves the computing speed. This method could provide support for engineering applications in terms of effectiveness and efficiency.
Hybrid particle swarm optimization algorithm and its application in nuclear engineering
International Nuclear Information System (INIS)
Liu, C.Y.; Yan, C.Q.; Wang, J.J.
2014-01-01
Highlights: • We propose a hybrid particle swarm optimization algorithm (HPSO). • A modified Nelder–Mead simplex search method is applied in HPSO. • The algorithm has high search precision and rapid calculation speed. • HPSO can be used in nuclear engineering optimization design problems. - Abstract: A hybrid particle swarm optimization algorithm with a feasibility-based rule for solving constrained optimization problems has been developed in this research. First, the zone of the global optimal solution is obtained through the particle swarm optimization process; then the refined search for the global optimum is achieved through the modified Nelder–Mead simplex algorithm. Simulations based on two well-studied benchmark problems demonstrate that the proposed algorithm is an efficient alternative for solving constrained optimization problems. The vertical electrically heated pressurizer is one of the key components in the reactor coolant system. A mathematical model of the pressurizer was established in steady state, and the optimization design of the pressurizer weight was carried out with the HPSO algorithm. The results show that the pressurizer weight can be reduced by 16.92%. The thermal efficiencies of conventional PWR nuclear power plants are about 31–35% so far, which is much lower than that of fossil-fuelled plants based on a steam cycle. A thermal equilibrium mathematical model for the nuclear power plant secondary loop was established, and an optimization case study was conducted to improve the efficiency of the nuclear power plant with the proposed algorithm. The results show that the thermal efficiency is improved by 0.5%.
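The local-refinement stage of the hybrid scheme uses a modified Nelder–Mead simplex search. A compact generic Nelder–Mead (reflection, expansion, inside contraction, shrink), not the authors' modified variant, might look like:

```python
def nelder_mead(f, x0, iters=200, step=0.5):
    """Compact Nelder-Mead simplex minimization; a sketch of the
    local-refinement stage, not the paper's modified method."""
    n = len(x0)
    # Initial simplex: x0 plus one vertex perturbed along each axis.
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(v[j] for v in simplex[:-1]) / n for j in range(n)]
        refl = [2.0 * centroid[j] - worst[j] for j in range(n)]   # reflection
        if f(refl) < f(best):
            exp = [3.0 * centroid[j] - 2.0 * worst[j] for j in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl       # expansion
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [0.5 * (centroid[j] + worst[j]) for j in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr                               # contraction
            else:                                                 # shrink
                simplex = [best] + [
                    [0.5 * (best[j] + v[j]) for j in range(n)]
                    for v in simplex[1:]
                ]
    return min(simplex, key=f)

# Refine a toy quadratic, as PSO would hand its best point to this stage.
x_min = nelder_mead(lambda v: (v[0] - 2.0) ** 2 + (v[1] + 1.0) ** 2, [0.0, 0.0])
```

In the hybrid, the PSO global best would be passed in as `x0`, so the derivative-free simplex only polishes an already promising region.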
Multi-objective Reactive Power Optimization Based on Improved Particle Swarm Algorithm
Cui, Xue; Gao, Jian; Feng, Yunbin; Zou, Chenlu; Liu, Huanlei
2018-01-01
In this paper, an optimization model is proposed for reactive power optimization problems, with minimum active power loss, minimum node voltage deviation, and maximum static voltage stability margin as the objectives. By defining an index value for reactive power compensation, the optimal reactive power compensation nodes were selected. The particle swarm optimization algorithm was improved by introducing a selection pool of global bests and a probabilistic global best (p-gbest). A set of Pareto-optimal solutions is obtained by this algorithm, and by calculating the fuzzy membership value of the Pareto-optimal solution set, the individual with the smallest fuzzy membership value is selected as the final optimization result. The improved algorithm was used to optimize the reactive power of the IEEE 14-bus standard test system. Comparison and analysis of the results show that the algorithm achieves a good optimization effect.
Directory of Open Access Journals (Sweden)
Jiaxi Wang
2016-01-01
Full Text Available The shunting schedule of an electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements, and the constraints include track occupation conflicts, shunting route conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and the EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm in terms of optimality.
Jin, Junchen
2016-01-01
The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements and the constraints include track occupation conflicts, shunting routes conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm on the aspect of optimality. PMID:27436998
International Nuclear Information System (INIS)
Hong, W.-C.
2009-01-01
Accurate forecasting of electric load has always been one of the most important issues in the electricity industry, particularly for developing countries. Due to various influences, electric load exhibits highly nonlinear characteristics. Recently, support vector regression (SVR), with its nonlinear mapping capability, has been successfully employed to solve nonlinear regression and time-series problems. However, systematic approaches for determining an appropriate parameter combination for an SVR model are still lacking. This investigation elucidates the feasibility of applying the chaotic particle swarm optimization (CPSO) algorithm to choose a suitable parameter combination for an SVR model. The empirical results reveal that the proposed model outperforms two models based on other algorithms, the genetic algorithm (GA) and simulated annealing (SA). Finally, it also provides a theoretical exploration of the electric load forecasting support system (ELFSS).
Directory of Open Access Journals (Sweden)
Keivan Borna
2015-12-01
Full Text Available The traveling salesman problem (TSP) is a well-established NP-complete problem, and many evolutionary techniques, such as particle swarm optimization (PSO), are used to optimize solutions for it. PSO is a method inspired by the social behavior of birds: each member changes its position in the search space according to its personal experience or the social experience of the whole society. In this paper, we combine the principles of PSO with the crossover operator of genetic algorithms to propose a heuristic algorithm for solving the TSP more efficiently. Finally, experimental results on instances from TSPLIB demonstrate the effectiveness of our method and show that our algorithm can achieve better results than other approaches.
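The paper combines PSO with a GA crossover operator on tours. One plausible choice for such an operator is order crossover (OX), which keeps a slice of one parent and fills the remaining positions in the other parent's order; the paper's exact operator is an assumption here:

```python
import random

def order_crossover(p1, p2):
    """Order crossover (OX) on permutations of cities: copy a random slice
    from parent 1, then fill the gaps with parent 2's cities in order.
    A generic GA operator, not necessarily the authors' exact variant."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]                 # inherited slice
    fill = [c for c in p2 if c not in child]      # remaining cities, p2 order
    k = 0
    for idx in range(n):
        if child[idx] is None:
            child[idx] = fill[k]
            k += 1
    return child

# Cross a forward tour with a reversed tour of six cities.
random.seed(1)
child = order_crossover(list(range(6)), [5, 4, 3, 2, 1, 0])
```

The key property is that the child is always a valid permutation, so no repair step is needed after crossover.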
Directory of Open Access Journals (Sweden)
Weizhe Zhang
2014-01-01
Full Text Available Energy consumption in computer systems has become an increasingly important issue. High energy consumption has already damaged the environment to some extent, especially in heterogeneous multiprocessors. In this paper, we first formulate and describe the energy-aware real-time task scheduling problem in heterogeneous multiprocessors. We then propose a particle swarm optimization (PSO) based algorithm, which successfully reduces the energy cost and the time spent searching for feasible solutions. Experimental results show that the PSO-based energy-aware metaheuristic uses 40–50% less energy than the GA-based and SFLA-based algorithms and spends 10% less time than the SFLA-based algorithm in finding solutions. Besides, it can also find 19% more feasible solutions than the SFLA-based algorithm.
Bourque, Alexandra E; Bedwani, Stéphane; Carrier, Jean-François; Ménard, Cynthia; Borman, Pim; Bos, Clemens; Raaymakers, Bas W; Mickevicius, Nikolai; Paulson, Eric; Tijssen, Rob H N
PURPOSE: To assess overall robustness and accuracy of a modified particle filter-based tracking algorithm for magnetic resonance (MR)-guided radiation therapy treatments. METHODS AND MATERIALS: An improved particle filter-based tracking algorithm was implemented, which used a normalized
International Nuclear Information System (INIS)
Coban, Ramazan
2011-01-01
Research highlights: → A closed-loop fuzzy logic controller based on the particle swarm optimization algorithm is proposed for controlling the power level of nuclear research reactors. → The proposed control system was tested for various initial and desired power levels, and it could control the reactor successfully in most situations. → The proposed controller is robust against disturbances. - Abstract: In this paper, a closed-loop fuzzy logic controller based on the particle swarm optimization algorithm is proposed for controlling the power level of nuclear research reactors. The principle of the fuzzy logic controller is based on rules constructed from numerical experiments performed with a computer code for core dynamics calculation and from a human operator's experience and knowledge. In addition to these intuitive and experimental design efforts, the consequent parts of the fuzzy rules are optimally (or near-optimally) determined using the particle swarm optimization algorithm. The contribution of the proposed algorithm to a reactor control system is investigated in detail. The performance of the controller is also tested with numerical simulations under numerous operating conditions, from various initial power levels to desired power levels, as well as under disturbance. It is shown that the proposed control system performs satisfactorily under almost all operating conditions, even for very small initial power levels.
3D head pose estimation and tracking using particle filtering and ICP algorithm
Ben Ghorbel, Mahdi; Baklouti, Malek; Couvet, Serge
2010-01-01
This paper addresses the issue of 3D head pose estimation and tracking. Existing approaches generally need a huge database, a training procedure, manual initialization, or manually extracted face features. We propose a framework for estimating the 3D head pose at a fine level and tracking it continuously across multiple degrees of freedom (DOF) based on ICP and particle filtering. We approach the problem using 3D computational techniques, aligning a face model to the 3D dense estimation computed by a stereo vision method, and propose a particle filter algorithm to refine and track the posterior estimate of the position of the face. This work makes two contributions: the first concerns the alignment part, where we propose an extended ICP algorithm using an anisotropic scale transformation; the second concerns the tracking part, where we use a particle filtering algorithm and constrain the search space using the ICP algorithm in the propagation step. The results show that the system is able to fit and track the head properly and remains accurate on new individuals without manual adaptation or training. © Springer-Verlag Berlin Heidelberg 2010.
PS-FW: A Hybrid Algorithm Based on Particle Swarm and Fireworks for Global Optimization
Chen, Shuangqing; Wei, Lixin; Guan, Bing
2018-01-01
Particle swarm optimization (PSO) and the fireworks algorithm (FWA) are two recently developed optimization methods that have been applied in various areas due to their simplicity and efficiency. However, when applied to high-dimensional optimization problems, the PSO algorithm may be trapped in local optima owing to its lack of powerful global exploration capability, while the fireworks algorithm can fail to converge in some cases because of its relatively low local exploitation efficiency for non-core fireworks. In this paper, a hybrid algorithm called PS-FW is presented, in which modified operators of FWA are embedded into the solving process of PSO. In the iteration process, an abandonment and supplement mechanism is adopted to balance the exploration and exploitation ability of PS-FW, and a modified explosion operator and a novel mutation operator are proposed to speed up global convergence and avoid prematurity. To verify the performance of the proposed PS-FW algorithm, 22 high-dimensional benchmark functions were employed, and it was compared with the PSO, FWA, stdPSO, CPSO, CLPSO, FIPS, Frankenstein, and ALWPSO algorithms. The results show that the PS-FW algorithm is an efficient, robust, and fast-converging optimization method for solving global optimization problems. PMID:29675036
Directory of Open Access Journals (Sweden)
Kazem Mohammadi-Aghdam
2015-10-01
Full Text Available This paper proposes the application of a new version of the heuristic particle swarm optimization (PSO) method for designing water distribution networks (WDNs). The optimization problem of looped water distribution networks is recognized as an NP-hard combinatorial problem which cannot be easily solved using traditional mathematical optimization techniques. In this paper, the concept of dynamic swarm size is considered in an attempt to increase the convergence speed of the original PSO algorithm. In this strategy, the size of the swarm is dynamically changed according to the iteration number of the algorithm. Furthermore, a novel mutation approach is introduced to increase the diversification of the PSO and to help the algorithm avoid trapping in local optima. The new version of the PSO algorithm is called dynamic mutated particle swarm optimization (DMPSO). The proposed DMPSO is then applied to solve WDN design problems. Finally, two illustrative examples are used for comparison to verify the efficiency of the proposed DMPSO as compared to other intelligent algorithms.
Qi, Xin; Ju, Guohao; Xu, Shuyan
2018-04-10
The phase diversity (PD) technique needs optimization algorithms to minimize the error metric and find the global minimum. Particle swarm optimization (PSO) is very suitable for PD due to its simple structure, fast convergence, and global searching ability. However, the traditional PSO algorithm for PD still suffers from the stagnation problem (premature convergence), which can result in a wrong solution. In this paper, the stagnation problem of the traditional PSO algorithm for PD is illustrated first. Then, an explicit strategy is proposed to solve this problem, based on an in-depth understanding of the inherent optimization mechanism of the PSO algorithm. Specifically, a criterion is proposed to detect premature convergence; then a redistributing mechanism is proposed to prevent premature convergence. To improve the efficiency of this redistributing mechanism, randomized Halton sequences are further introduced to ensure the uniform distribution and randomness of the redistributed particles in the search space. Simulation results show that this strategy can effectively solve the stagnation problem of the PSO algorithm for PD, especially for large-scale and high-dimension wavefront sensing and noisy conditions. This work is further verified by an experiment. This work can improve the robustness and performance of PD wavefront sensing.
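The redistribution step above relies on Halton sequences to place particles uniformly in the search space. The radical-inverse construction behind a Halton sequence is short enough to sketch:

```python
def halton(index, base):
    """Radical-inverse Halton value in [0, 1) for a 1-based index: write the
    index in the given base and mirror its digits about the radix point."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def halton_point(index, bases=(2, 3)):
    """One low-discrepancy point in [0,1)^d, one coprime base per dimension;
    a generic construction, not the paper's exact (randomized) variant."""
    return [halton(index, b) for b in bases]
```

In the paper the sequence is additionally randomized; scaling each coordinate from [0, 1) to the search bounds then yields evenly spread redistributed particles, avoiding the clumping that plain uniform sampling can produce.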
International Nuclear Information System (INIS)
Li Yongjie; Yao Dezhong; Yao, Jonathan; Chen Wufan
2005-01-01
Automatic beam angle selection is an important but challenging problem for intensity-modulated radiation therapy (IMRT) planning. Though many efforts have been made, it is still not very satisfactory in clinical IMRT practice because solving the inverse problem is computationally expensive. In this paper, a new technique named BASPSO (Beam Angle Selection with a Particle Swarm Optimization algorithm) is presented to improve the efficiency of beam angle optimization. Originally developed as a tool for simulating social behaviour, the particle swarm optimization (PSO) algorithm is a relatively new population-based evolutionary optimization technique first introduced by Kennedy and Eberhart in 1995. In the proposed BASPSO, the beam angles are optimized using PSO by treating each beam configuration as a particle (individual), while the beam intensity maps for each configuration are optimized using the conjugate gradient (CG) algorithm; these two optimization processes are applied iteratively. The performance of each individual is evaluated by a fitness value calculated with a physical objective function, and a population of individuals is evolved through generations by cooperation and competition among the individuals. The optimization results for a simulated case with known optimal beam angles and two clinical cases (a prostate case and a head-and-neck case) show that PSO is valid and efficient and can speed up the beam angle optimization process. Furthermore, performance comparisons based on the preliminary results indicate that, as a whole, the PSO-based algorithm seems to outperform, or at least compete with, the GA-based algorithm in computation time and robustness. In conclusion, this work suggests that the PSO algorithm is a promising solution to the beam angle optimization problem, and potentially to other optimization problems in IMRT, though further studies are needed.
A Particle Swarm Optimization Algorithm with Variable Random Functions and Mutation
Institute of Scientific and Technical Information of China (English)
ZHOU Xiao-Jun; YANG Chun-Hua; GUI Wei-Hua; DONG Tian-Xue
2014-01-01
The convergence analysis of standard particle swarm optimization (PSO) has shown that changing the random functions, the personal best, and the group best has the potential to improve the performance of PSO. In this paper, a novel strategy with variable random functions and polynomial mutation is introduced into the PSO, called particle swarm optimization with variable random functions and mutation (PSO-RM). The random functions are adjusted with the density of the population so as to manipulate the weights of the cognitive and social parts. Mutation is applied to both the personal best particle and the group best particle to explore new areas. Experimental results demonstrate the effectiveness of the strategy.
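Polynomial mutation, as applied here to the personal and group best particles, is commonly implemented in the textbook Deb-Goyal form sketched below; the paper's exact variant is assumed:

```python
import random

def polynomial_mutation(x, lo, hi, eta=20.0, pm=0.1):
    """Polynomial mutation: each coordinate is perturbed with probability pm
    by a bounded, eta-shaped random offset. Textbook form (Deb & Goyal);
    the paper's exact variant and parameter values are assumptions."""
    y = list(x)
    for j in range(len(y)):
        if random.random() < pm:
            u = random.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            # Scale by the variable range and clamp to the bounds.
            y[j] = min(hi, max(lo, y[j] + delta * (hi - lo)))
    return y

# Mutate every coordinate (pm=1.0) of a mid-range best particle.
random.seed(0)
mutated = polynomial_mutation([0.5] * 10, lo=0.0, hi=1.0, pm=1.0)
```

Larger `eta` concentrates the offsets near zero, so mutation of the best particles nudges the swarm locally rather than scattering it.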
Innovations in ILC detector design using a particle flow algorithm approach
International Nuclear Information System (INIS)
Magill, S.; High Energy Physics
2007-01-01
The International Linear Collider (ILC) is a future e+e- collider that will produce particles with masses up to the design center-of-mass (CM) energy of 500 GeV. The ILC complements the Large Hadron Collider (LHC) which, although colliding protons at 14 TeV in the CM, will be luminosity-limited to particle production with masses up to ∼1-2 TeV. At the ILC, interesting cross-sections are small, but there are no backgrounds from underlying events, so masses should be measurable via hadronic decays to dijets (∼80% BR) as well as in leptonic decay modes. The precise measurement of jets will require major detector innovations, in particular to the calorimeter, which will be optimized to reconstruct final-state particle 4-vectors, the so-called particle flow algorithm approach to jet reconstruction.
Ef: Software for Nonrelativistic Beam Simulation by Particle-in-Cell Algorithm
Directory of Open Access Journals (Sweden)
Boytsov A. Yu.
2018-01-01
Full Text Available Understanding particle dynamics is crucial in the construction of electron guns, ion sources and other types of nonrelativistic beam devices. Apart from external guiding and focusing systems, a prominent role in the evolution of such low-energy beams is played by particle-particle interaction. Numerical simulations taking these effects into account are typically accomplished by the well-known particle-in-cell method. In practice, for convenient work a simulation program should not only implement this method, but also support parallelization, provide integration with CAD systems and allow access to details of the simulation algorithm. To address these requirements, development of a new open-source code, Ef, has been started. Its current features and main functionality are presented. Comparison with several analytical models demonstrates good agreement between the numerical results and the theory. Further development plans are discussed.
Ef: Software for Nonrelativistic Beam Simulation by Particle-in-Cell Algorithm
Boytsov, A. Yu.; Bulychev, A. A.
2018-04-01
Understanding particle dynamics is crucial in the construction of electron guns, ion sources and other types of nonrelativistic beam devices. Apart from external guiding and focusing systems, a prominent role in the evolution of such low-energy beams is played by particle-particle interaction. Numerical simulations taking these effects into account are typically accomplished by the well-known particle-in-cell method. In practice, for convenient work a simulation program should not only implement this method, but also support parallelization, provide integration with CAD systems and allow access to details of the simulation algorithm. To address these requirements, development of a new open-source code, Ef, has been started. Its current features and main functionality are presented. Comparison with several analytical models demonstrates good agreement between the numerical results and the theory. Further development plans are discussed.
A benchmark study of the Signed-particle Monte Carlo algorithm for the Wigner equation
Directory of Open Access Journals (Sweden)
Muscato Orazio
2017-12-01
Full Text Available The Wigner equation represents a promising model for the simulation of electronic nanodevices, allowing the comprehension and prediction of quantum mechanical phenomena in terms of quasi-distribution functions. In recent years, a Monte Carlo technique for the solution of this kinetic equation has been developed, based on the generation and annihilation of signed particles. This technique can be understood in depth in terms of the theory of pure jump processes with a general state space, producing a class of stochastic algorithms. One of these algorithms has been validated successfully by numerical experiments on a benchmark test case.
DEFF Research Database (Denmark)
Vesterstrøm, Jacob Svaneborg; Thomsen, Rene
2004-01-01
Several extensions to evolutionary algorithms (EAs) and particle swarm optimization (PSO) have been suggested during the last decades, offering improved performance on selected benchmark problems. Recently, another search heuristic termed differential evolution (DE) has shown superior performance in several real-world applications. In this paper, we evaluate the performance of DE, PSO, and EAs regarding their general applicability as numerical optimization techniques. The comparison is performed on a suite of 34 widely used benchmark problems. The results from our study show that DE generally outperforms the other algorithms. However, on two noisy functions, both DE and PSO were outperformed by the EA.
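The DE scheme benchmarked in such comparisons is usually the classic DE/rand/1/bin strategy: mutate with a scaled difference of two random population members, crossover coordinate-wise, and keep the trial only if it is no worse. A minimal sketch (illustrative parameter values, not the study's exact settings) is:

```python
import random

def differential_evolution(f, dim, bounds, pop_size=20, cr=0.9, fw=0.8,
                           gens=200):
    """Classic DE/rand/1/bin minimization; a generic sketch of the scheme."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Three distinct members, all different from the target i.
            a, b, c = random.sample([k for k in range(pop_size) if k != i], 3)
            jrand = random.randrange(dim)   # guarantees one mutated coordinate
            trial = []
            for j in range(dim):
                if random.random() < cr or j == jrand:
                    v = pop[a][j] + fw * (pop[b][j] - pop[c][j])
                    trial.append(min(hi, max(lo, v)))
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:                # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda k: fit[k])
    return pop[best], fit[best]

# Sphere function as a stand-in for one of the 34 benchmark problems.
random.seed(0)
best, val = differential_evolution(lambda x: sum(t * t for t in x),
                                   dim=3, bounds=(-5.0, 5.0))
```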
International Nuclear Information System (INIS)
Semwal, Girish; Rastogi, Vipul
2014-01-01
We present design optimization of wavelength filters based on long period waveguide gratings (LPWGs) using the adaptive particle swarm optimization (APSO) technique. We demonstrate optimization of the LPWG parameters for single-band, wide-band and dual-band rejection filters for testing the convergence of APSO algorithms. After convergence tests on the algorithms, the optimization technique has been implemented to design more complicated application specific filters such as erbium doped fiber amplifier (EDFA) amplified spontaneous emission (ASE) flattening, erbium doped waveguide amplifier (EDWA) gain flattening and pre-defined broadband rejection filters. The technique is useful for designing and optimizing the parameters of LPWGs to achieve complicated application specific spectra. (paper)
Particle simulation algorithms with short-range forces in MHD and fluid flow
International Nuclear Information System (INIS)
Cable, S.; Tajima, T.; Umegaki, K.
1992-07-01
Attempts are made to develop numerical algorithms for handling fluid flows involving liquids and liquid-gas mixtures. In these systems, short-range intermolecular interactions are important enough to significantly alter the behavior predicted by standard fluid mechanics and magnetohydrodynamics alone. We have constructed a particle-in-cell (PIC) code to study the effects of these interactions. Of the algorithms considered, the one successfully implemented is based on an MHD particle code developed by Brunel et al. In the version presented here, short-range forces are included in particle motion by first calculating the forces between individual particles and then, to prevent aliasing, interpolating these forces to the computational grid points and back to the particles. The code has been used to model a simple two-fluid Rayleigh-Taylor instability. Limitations to the accuracy of the code exist at short wavelengths, where the effects of the short-range forces are expected to be most pronounced.
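The interpolation between particles and grid points described above is the standard deposit/gather pair of PIC codes. A one-dimensional cloud-in-cell sketch (generic, not the Brunel-based MHD code's exact scheme) is:

```python
def deposit_cic(positions, values, nx, dx):
    """Cloud-in-cell deposition: spread each particle's value (e.g. a force
    component) onto its two nearest nodes of a periodic grid with linear
    weights. Positions are assumed non-negative."""
    grid = [0.0] * nx
    for x, v in zip(positions, values):
        s = x / dx
        i = int(s) % nx
        w = s - int(s)                   # fractional distance past node i
        grid[i] += v * (1.0 - w)
        grid[(i + 1) % nx] += v * w
    return grid

def gather_cic(grid, x, dx):
    """Interpolate a grid field back to a particle position (the inverse
    step, which smooths out sub-grid components and prevents aliasing)."""
    nx = len(grid)
    s = x / dx
    i = int(s) % nx
    w = s - int(s)
    return grid[i] * (1.0 - w) + grid[(i + 1) % nx] * w
```

Because the same linear weights are used in both directions, a deposited quantity is conserved on the grid, and gathering returns a smoothed value at the particle, which is exactly the anti-aliasing role the abstract describes.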
Energy Technology Data Exchange (ETDEWEB)
Español, Pep [Dept. Física Fundamental, Universidad Nacional de Educación a Distancia, Aptdo. 60141, E-28080 Madrid (Spain); Donev, Aleksandar [Dept. Física Fundamental, Universidad Nacional de Educación a Distancia, Aptdo. 60141, E-28080 Madrid (Spain); Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, New York 10012 (United States)
2015-12-21
We derive a coarse-grained description of the dynamics of a nanoparticle immersed in an isothermal simple fluid by performing a systematic coarse graining of the underlying microscopic dynamics. As coarse-grained or relevant variables, we select the position of the nanoparticle and the total mass and momentum density field of the fluid, which are locally conserved slow variables because they are defined to include the contribution of the nanoparticle. The theory of coarse graining based on the Zwanzig projection operator leads us to a system of stochastic ordinary differential equations that are closed in the relevant variables. We demonstrate that our discrete coarse-grained equations are consistent with a Petrov-Galerkin finite-element discretization of a system of formal stochastic partial differential equations which resemble previously used phenomenological models based on fluctuating hydrodynamics. Key to this connection between our “bottom-up” and previous “top-down” approaches is the use of the same dual orthogonal set of linear basis functions familiar from finite element methods (FEMs), both as a way to coarse-grain the microscopic degrees of freedom and as a way to discretize the equations of fluctuating hydrodynamics. Another key ingredient is the use of a “linear for spiky” weak approximation which replaces microscopic “fields” with a linear FE interpolant inside expectation values. For the irreversible or dissipative dynamics, we approximate the constrained Green-Kubo expressions for the dissipation coefficients with their equilibrium averages. Under suitable approximations, we obtain closed approximations of the coarse-grained dynamics in a manner which gives them a clear physical interpretation and provides explicit microscopic expressions for all of the coefficients appearing in the closure. Our work leads to a model for dilute nanocolloidal suspensions that can be simulated effectively using feasibly short molecular dynamics simulations.
Directory of Open Access Journals (Sweden)
Narinder Singh
2017-01-01
A new hybrid nature-inspired algorithm called HPSOGWO is presented, combining Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO). The main idea is to combine the exploitation ability of Particle Swarm Optimization with the exploration ability of the Grey Wolf Optimizer, producing the strengths of both variants. Some unimodal, multimodal, and fixed-dimension multimodal test functions are used to check the solution quality and performance of the HPSOGWO variant. The numerical and statistical results show that the hybrid variant significantly outperforms the PSO and GWO variants in terms of solution quality, solution stability, convergence speed, and ability to find the global optimum.
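For reference, the standard PSO baseline that HPSOGWO builds on can be sketched as below. This is a generic sketch with assumed coefficient values, not the hybrid algorithm itself (the hybrid additionally mixes in the GWO leader-following update):

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=1):
    """Minimal standard PSO: each particle is pulled toward its personal
    best and the swarm's global best, with an inertia-damped velocity."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights (assumed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# unimodal sphere function: global minimum 0 at the origin
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
```

On unimodal functions like the sphere, the exploitation-heavy PSO update converges quickly; the GWO component in the hybrid is intended to add exploration on multimodal landscapes.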
Fourtakas, G.; Rogers, B. D.
2016-06-01
A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment induced under rapid flows passes through several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between geotechnics, non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layers needed to predict the global erosion phenomena accurately from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases, using a Newtonian and a non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive model. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface, and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58x over an optimised single-thread serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.
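The Bingham-type Herschel-Bulkley-Papanastasiou constitutive model mentioned above can be sketched as an apparent-viscosity function. This is a minimal illustration with assumed symbol names (consistency index k, power-law index n, yield stress tau_y, regularization exponent m); the paper's actual parameter values are not reproduced:

```python
import math

def hbp_apparent_viscosity(gamma_dot, k=1.0, n=1.0, tau_y=0.5, m=100.0):
    """Apparent viscosity eta(gamma_dot) of the Herschel-Bulkley-Papanastasiou
    model (gamma_dot > 0):
        eta = k * gamma_dot**(n - 1) + tau_y * (1 - exp(-m * gamma_dot)) / gamma_dot
    The Papanastasiou exponent m regularizes the yield stress: for large m the
    yield term approaches the ideal tau_y / gamma_dot at finite shear rates,
    while eta stays bounded (tending to k + tau_y * m for n = 1) as the shear
    rate tends to zero, which keeps an explicit SPH scheme stable."""
    return (k * gamma_dot ** (n - 1.0)
            + tau_y * (1.0 - math.exp(-m * gamma_dot)) / gamma_dot)
```

At high shear rates the resulting shear stress eta * gamma_dot recovers the Herschel-Bulkley form k * gamma_dot**n + tau_y.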
Energy Technology Data Exchange (ETDEWEB)
Young, Steven; Montakhab, Mohammad; Nouri, Hassan
2011-07-15
Economic dispatch (ED) is one of the most important problems to be solved in power generation, as fractional percentage fuel reductions represent significant cost savings. ED aims to optimise the power generated by each generating unit in a system in order to find the minimum operating cost at a required load demand, whilst ensuring both equality and inequality constraints are met. For the process of optimisation, a model must be created for each generating unit. The particle swarm optimisation technique is an evolutionary computation technique and one of the most powerful methods for solving global optimisation problems. The aim of this paper is to add a constriction factor to the particle swarm optimisation algorithm (CFBPSO). Results show that the algorithm is very good at solving the ED problem; for CFBPSO to work in a practical environment, valve-point effects and transmission losses should be included in future work.
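In the usual Clerc-Kennedy formulation, the constriction factor is a coefficient chi multiplying the whole PSO velocity update. A minimal sketch, assuming the standard setting c1 = c2 = 2.05 (which gives chi ≈ 0.73):

```python
import math

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction coefficient chi used in CFBPSO-style
    velocity updates; the analysis requires phi = c1 + c2 > 4."""
    phi = c1 + c2
    assert phi > 4.0, "constriction analysis requires c1 + c2 > 4"
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def update_velocity(v, x, pbest, gbest, r1, r2, c1=2.05, c2=2.05):
    """Velocity update with the constriction factor multiplying the whole
    standard PSO expression (r1, r2 are uniform random numbers in [0, 1]);
    the damping replaces an explicit velocity clamp."""
    chi = constriction_factor(c1, c2)
    return chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
```

The constriction guarantees bounded particle trajectories without hand-tuned inertia decay, which is the motivation for adding it to the ED solver.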
Energy Technology Data Exchange (ETDEWEB)
Chen, Zaigao; Wang, Jianguo [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China); Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi'an, Shaanxi 710024 (China); Wang, Yue; Qiao, Hailiang; Zhang, Dianhui [Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi'an, Shaanxi 710024 (China); Guo, Weijie [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China)
2013-11-15
An optimal design method for high-power microwave sources using particle simulation and parallel genetic algorithms is presented in this paper. The output power of the high-power microwave device, simulated by the fully electromagnetic particle simulation code UNIPIC, is used as the fitness function, and float-encoding genetic algorithms are used to optimize the high-power microwave devices. Using this method, we encode the heights of the non-uniform slow wave structure in a relativistic backward wave oscillator (RBWO), and optimize the parameters on massively parallel processors. Simulation results demonstrate that we can obtain the optimal parameters of the non-uniform slow wave structure in the RBWO, and the output microwave power is enhanced by 52.6% after the device is optimized.
International Nuclear Information System (INIS)
Svensson, Urban
2001-04-01
A particle tracking algorithm, PARTRACK, that simulates transport and dispersion in a sparsely fractured rock is described. The main novel feature of the algorithm is the introduction of multiple particle states. It is demonstrated that this feature allows for the simultaneous simulation of Taylor dispersion, sorption and matrix diffusion. A number of test cases are used to verify and demonstrate the features of PARTRACK. It is shown that PARTRACK can simulate the following processes, believed to be important for the problem addressed: the splitting of a tracer cloud at a fracture intersection, channeling in a fracture plane, Taylor dispersion, matrix diffusion and sorption. From the results of the test cases, it is concluded that PARTRACK is an adequate framework for simulating transport and dispersion of a solute in a sparsely fractured rock.
Optimization of heat pump system in indoor swimming pool using particle swarm algorithm
Energy Technology Data Exchange (ETDEWEB)
Lee, Wen-Shing; Kung, Chung-Kuan [Department of Energy and Refrigerating Air-Conditioning Engineering, National Taipei University of Technology, 1, Section 3, Chung-Hsiao East Road, Taipei (China)
2008-09-15
In indoor swimming pool facilities, a large amount of energy is required to heat low-temperature outdoor air before it is introduced indoors to maintain indoor humidity. Since water evaporates from the pool surface, the exhausted air contains more moisture and has a higher specific enthalpy. A heat pump is therefore generally used for heat recovery in indoor swimming pools. To reduce energy costs, this paper uses the particle swarm algorithm to optimize the design of the heat pump system. The optimized parameters include continuous parameters and discrete parameters: the former consist of the outdoor air mass flow and the heat conductance of the heat exchangers; the latter comprise the compressor type and the boiler type. In a case study, life-cycle energy cost is considered as the objective function. The optimized outdoor air flow and the optimized design for the heating system can then be deduced using the particle swarm algorithm. (author)
Directory of Open Access Journals (Sweden)
Qi Hong
2015-01-01
The particle size distribution (PSD) plays an important role in environmental pollution detection and human health protection, in contexts such as fog, haze and soot. In this study, the Attractive and Repulsive Particle Swarm Optimization (ARPSO) algorithm and the basic PSO were applied to retrieve the PSD. The spectral extinction technique, coupled with the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law, was employed to investigate the retrieval of the PSD. Three commonly used monomodal PSDs, i.e. the Rosin-Rammler (R-R), normal (N-N) and logarithmic normal (L-N) distributions, were studied in the dependent model. An optimal wavelength selection algorithm was then proposed. To study the accuracy and robustness of the inverse results, some characteristic parameters were employed. The research revealed that ARPSO was more accurate and converged faster than the basic PSO, even with random measurement error. Moreover, the investigation demonstrated that the inverse results from four incident laser wavelengths were more accurate and robust than those from two wavelengths, and that increasing the interval between the selected incident laser wavelengths made the inverse results more accurate, even in the presence of random error.
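The forward model behind the spectral extinction technique can be sketched as below, assuming the van de Hulst ADA extinction efficiency for a non-absorbing sphere and a discretized Lambert-Beer integral. Symbol names and the relative refractive index value are illustrative only:

```python
import math

def q_ext_ada(d, wavelength, m_rel):
    """Extinction efficiency of a non-absorbing sphere under the anomalous
    diffraction approximation (van de Hulst):
        rho = 2 * pi * d * (m - 1) / wavelength
        Q   = 2 - (4/rho) * sin(rho) + (4/rho**2) * (1 - cos(rho))"""
    rho = 2.0 * math.pi * d * (m_rel - 1.0) / wavelength
    return 2.0 - (4.0 / rho) * math.sin(rho) + (4.0 / rho ** 2) * (1.0 - math.cos(rho))

def extinction(wavelength, diameters, number_density, path_length, m_rel=1.33):
    """Discretized Lambert-Beer forward model: spectral extinction
    ln(I0/I) = L * sum_i N_i * (pi * d_i**2 / 4) * Q_ext(d_i, lambda).
    Inverting measurements of this quantity at several wavelengths for the
    size histogram N_i is the PSD retrieval problem the PSO variants solve."""
    return path_length * sum(
        n * (math.pi * d * d / 4.0) * q_ext_ada(d, wavelength, m_rel)
        for d, n in zip(diameters, number_density))
```

Each extra wavelength adds one equation of this form, which is why retrievals from four wavelengths are better conditioned than those from two.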
International Nuclear Information System (INIS)
Pryce, M.H.L.
1985-01-01
A dominant mechanism contributing to hydrodynamic dispersion in fluid flow through rocks is variation of travel speeds within the channels carrying the fluid, whether these be interstices between grains, in granular rocks, or cracks in fractured crystalline rocks. The complex interconnections of the channels ensure a mixing of those parts of the fluid which travel more slowly and those which travel faster. On a macroscopic scale this can be treated statistically in terms of the distribution of times taken by a particle of fluid to move from one surface of constant hydraulic potential to another, lower, potential. The distributions in the individual channels are such that very long travel times make a very important contribution. Indeed, while the mean travel time is related to distance by a well-defined transport speed, the mean square is effectively infinite. This results in an asymmetrical plume which differs markedly from a gaussian shape. The distribution of microscopic travel times is related to the distribution of apertures in the interstices, or in the microcracks, which in turn are affected in a complex way by the stresses acting on the rock matrix.
GPU-accelerated algorithms for many-particle continuous-time quantum walks
Piccinini, Enrico; Benedetti, Claudia; Siloi, Ilaria; Paris, Matteo G. A.; Bordone, Paolo
2017-06-01
Many-particle continuous-time quantum walks (CTQWs) represent a resource for several tasks in quantum technology, including quantum search algorithms and universal quantum computation. In order to design and implement CTQWs in a realistic scenario, one needs effective simulation tools for Hamiltonians that take into account static noise and fluctuations in the lattice, i.e. Hamiltonians containing stochastic terms. To this aim, we suggest a parallel algorithm based on the Taylor series expansion of the evolution operator, and compare its performance with that of algorithms based on exact diagonalization of the Hamiltonian or 4th-order Runge-Kutta integration. We prove that both the Taylor-series expansion and Runge-Kutta algorithms are reliable and have a low computational cost, the Taylor-series expansion having the additional advantage of a memory allocation that does not depend on the precision of calculation. Both algorithms are also highly parallelizable within the SIMT paradigm, and are thus suitable for GPGPU computing. We have benchmarked 4 NVIDIA GPUs and 3 quad-core Intel CPUs for a 2-particle system over lattices of increasing dimension, showing that the speedup provided by GPU computing with respect to the OPENMP parallelization lies in the range between 8x and (more than) 20x, depending on the frequency of post-processing. GPU-accelerated codes thus allow one to overcome concerns about execution time, and make simulations with many interacting particles on large lattices possible, with the only limit being the memory available on the device.
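The Taylor-series propagation scheme reduces to repeated matrix-vector products, which is why its memory footprint is independent of the expansion order. A minimal single-step sketch for a small dense Hamiltonian (the benchmarked GPU codes operate on much larger, stochastic Hamiltonians):

```python
import cmath

def matvec(h, v):
    """Dense complex matrix-vector product."""
    return [sum(h[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def taylor_step(h, psi, dt, order=10):
    """One step of |psi(t+dt)> = exp(-i H dt) |psi(t)> via the truncated
    Taylor series sum_k (-i H dt)^k / k! |psi>. Only the running term and
    the accumulator are stored, so memory does not grow with the order."""
    out = psi[:]
    term = psi[:]
    for k in range(1, order + 1):
        term = [(-1j * dt / k) * c for c in matvec(h, term)]   # next series term
        out = [a + b for a, b in zip(out, term)]
    return out

# two-site hopping Hamiltonian: exact evolution is cos(t)|0> - i sin(t)|1>
psi = taylor_step([[0, 1], [1, 0]], [1 + 0j, 0j], 0.3)
err = abs(psi[0] - cmath.cos(0.3))
```

The per-step loop is a chain of identical matrix-vector products, which maps naturally onto the SIMT model on GPUs.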
An Image Filter Based on Shearlet Transformation and Particle Swarm Optimization Algorithm
Directory of Open Access Journals (Sweden)
Kai Hu
2015-01-01
Digital images are often polluted by noise, which makes data post-processing difficult. To remove noise while preserving image detail as much as possible, this paper proposes an image filter algorithm that combines the merits of the Shearlet transformation and the particle swarm optimization (PSO) algorithm. First, we use the classical Shearlet transform to decompose the noised image into many subwavelets across multiple scales and orientations. Second, we assign a weighting factor to each of the subwavelets obtained. Then, using the classical inverse Shearlet transform, we obtain a composite image composed of those weighted subwavelets. After that, we design a fast, rough evaluation method to estimate the noise level of the new image; using this measure as the fitness function, we adopt PSO to find the optimal weighting factors. After many iterations, the optimal factors and the inverse Shearlet transform yield the best denoised image. Experimental results show that the proposed algorithm eliminates noise effectively and yields a good peak signal-to-noise ratio (PSNR).
Energy Technology Data Exchange (ETDEWEB)
Lee, Kyun Ho [Sejong University, Sejong (Korea, Republic of); Kim, Ki Wan [Agency for Defense Development, Daejeon (Korea, Republic of)
2014-09-15
The heat transfer mechanism for radiation is directly related to the emission of photons and electromagnetic waves. Depending on the participation of the medium, the radiation can be classified into two forms: surface and gas radiation. In the present study, unknown radiation properties were estimated using an inverse boundary analysis of surface radiation in an axisymmetric cylindrical enclosure. For efficiency, a repulsive particle swarm optimization (RPSO) algorithm, which is a relatively recent heuristic search method, was used as inverse solver. By comparing the convergence rates and accuracies with the results of a genetic algorithm (GA), the performances of the proposed RPSO algorithm as an inverse solver was verified when applied to the inverse analysis of the surface radiation problem.
International Nuclear Information System (INIS)
Lee, Kyun Ho; Kim, Ki Wan
2014-01-01
The heat transfer mechanism for radiation is directly related to the emission of photons and electromagnetic waves. Depending on the participation of the medium, the radiation can be classified into two forms: surface and gas radiation. In the present study, unknown radiation properties were estimated using an inverse boundary analysis of surface radiation in an axisymmetric cylindrical enclosure. For efficiency, a repulsive particle swarm optimization (RPSO) algorithm, which is a relatively recent heuristic search method, was used as inverse solver. By comparing the convergence rates and accuracies with the results of a genetic algorithm (GA), the performances of the proposed RPSO algorithm as an inverse solver was verified when applied to the inverse analysis of the surface radiation problem.
New hybrid genetic particle swarm optimization algorithm to design multi-zone binary filter.
Lin, Jie; Zhao, Hongyang; Ma, Yuan; Tan, Jiubin; Jin, Peng
2016-05-16
Binary phase filters have been used to achieve an optical needle with small lateral size. Designing a binary phase filter is still a scientific challenge in such fields. In this paper, a hybrid genetic particle swarm optimization (HGPSO) algorithm is proposed to design the binary phase filter. The HGPSO algorithm includes self-adaptive parameters and the recombination and mutation operations that originated from the genetic algorithm. On the benchmark test, the HGPSO algorithm achieved global optimization and fast convergence. In an easy-to-perform optimization procedure, the iteration number of HGPSO is decreased to about a quarter of that of the original particle swarm optimization process. A multi-zone binary phase filter is designed using the HGPSO. A long depth of focus and high resolution are achieved simultaneously, with a depth of focus and focal spot transverse size of 6.05λ and 0.41λ, respectively. Therefore, the proposed HGPSO can be applied to the optimization of filters with multiple parameters.
Optimization of C4.5 algorithm-based particle swarm optimization for breast cancer diagnosis
Muslim, M. A.; Rukmana, S. H.; Sugiharti, E.; Prasetiyo, B.; Alimah, S.
2018-03-01
Data mining has become a basic methodology for computational applications in medical domains. Data mining can be applied in the health field, for example for the diagnosis of breast cancer, heart disease, diabetes and others. Breast cancer is the most common cancer in women, with more than one million cases and nearly 600,000 deaths occurring worldwide each year. The most effective way to reduce breast cancer deaths is early diagnosis. This study aims to determine the accuracy level of breast cancer diagnosis. The research data use the Wisconsin Breast Cancer (WBC) dataset from the UCI machine learning repository. The method used in this research is the C4.5 algorithm with Particle Swarm Optimization (PSO) for feature selection and to optimize the C4.5 algorithm. Ten-fold cross-validation and a confusion matrix are used as the validation method. The result is that the accuracy of the C4.5 algorithm with particle swarm optimization increased by 0.88%.
Ghose-Hajra, M.; McCorquodale, A.; Mattson, G.; Jerolleman, D.; Filostrat, J.
2015-03-01
Sea-level rise, the increasing number and intensity of storms, oil and groundwater extraction, and coastal land subsidence are putting people and property at risk along Louisiana's coast, with major implications for human safety and economic health of coastal areas. A major goal towards re-establishing a healthy and sustainable coastal ecosystem has been to rebuild Louisiana's disappearing wetlands with fine grained sediments that are dredged or diverted from nearby rivers, channels and lakes to build land in open water areas. A thorough geo-hydrodynamic characterization of the deposited sediments is important in the correct design and a more realistic outcome assessment of the long-term performance measures for ongoing coastal restoration projects. This paper evaluates the effects of salinity and solid particle concentration on the re-suspension characteristics of fine-grained dredged sediments obtained from multiple geographic locations along the Gulf coast. The critical bed-shear-stress for erosion has been evaluated as a function of sedimentation time. The sediment hydrodynamic properties obtained from the laboratory testing were used in a numerical coastal sediment distribution model to aid in evaluating sediment diversions from the Mississippi River into Breton Sound and Barataria Bay.
Directory of Open Access Journals (Sweden)
M. Ghose-Hajra
2015-03-01
Sea-level rise, the increasing number and intensity of storms, oil and groundwater extraction, and coastal land subsidence are putting people and property at risk along Louisiana’s coast, with major implications for human safety and economic health of coastal areas. A major goal towards re-establishing a healthy and sustainable coastal ecosystem has been to rebuild Louisiana’s disappearing wetlands with fine grained sediments that are dredged or diverted from nearby rivers, channels and lakes to build land in open water areas. A thorough geo-hydrodynamic characterization of the deposited sediments is important in the correct design and a more realistic outcome assessment of the long-term performance measures for ongoing coastal restoration projects. This paper evaluates the effects of salinity and solid particle concentration on the re-suspension characteristics of fine-grained dredged sediments obtained from multiple geographic locations along the Gulf coast. The critical bed-shear-stress for erosion has been evaluated as a function of sedimentation time. The sediment hydrodynamic properties obtained from the laboratory testing were used in a numerical coastal sediment distribution model to aid in evaluating sediment diversions from the Mississippi River into Breton Sound and Barataria Bay.
Directory of Open Access Journals (Sweden)
Rongxiao Wang
2017-09-01
The accurate prediction of air contaminant dispersion is essential to air quality monitoring and the emergency management of contaminant gas leakage incidents in chemical industry parks. Conventional atmospheric dispersion models can seldom give accurate predictions due to inaccurate input parameters. In order to improve the prediction accuracy of dispersion models, two data assimilation methods (i.e., a typical particle filter, and the combination of a particle filter with the expectation-maximization algorithm) are proposed to assimilate virtual Unmanned Aerial Vehicle (UAV) observations with measurement error into the atmospheric dispersion model. Two emission cases with different dimensions of state parameters are considered. To test the performance of the proposed methods, two numerical experiments corresponding to the two emission cases are designed and implemented. The results show that the particle filter can effectively estimate the model parameters and improve the accuracy of model predictions when the dimension of the state parameters is relatively low. In contrast, when the dimension of the state parameters becomes higher, the particle filter combined with the expectation-maximization algorithm performs better in terms of parameter estimation accuracy. Therefore, the proposed data assimilation methods are able to effectively support air quality monitoring and emergency management in chemical industry parks.
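The "typical particle filter" of the two methods can be sketched as a bootstrap filter on a toy scalar model; the state model, noise levels and particle count here are illustrative, not those of the dispersion-model experiments:

```python
import math
import random

def bootstrap_particle_filter(observations, n_particles=500,
                              proc_std=0.5, obs_std=1.0, seed=7):
    """Minimal bootstrap particle filter for a scalar random-walk state
    observed in Gaussian noise: predict, weight by the observation
    likelihood, then resample. The EM variant in the paper additionally
    re-estimates model parameters from the weighted particles."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # predict: propagate each particle through the process model
        particles = [p + rng.gauss(0.0, proc_std) for p in particles]
        # update: weight particles by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - p) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # resample (multinomial) to concentrate on high-likelihood regions
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# track a constant true state of 3.0 from noisy observations
rng = random.Random(0)
obs = [3.0 + rng.gauss(0.0, 1.0) for _ in range(30)]
est = bootstrap_particle_filter(obs)
```

In the dispersion application, the "state" is the vector of uncertain source/model parameters and the likelihood comes from comparing model output with the UAV observations.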
Gong, Yuezheng; Zhao, Jia; Wang, Qi
2017-10-01
A quasi-incompressible hydrodynamic phase field model for flows of fluid mixtures of two incompressible viscous fluids of distinct densities and viscosities is derived by using the generalized Onsager principle, which warrants the variational structure, the mass conservation and the energy dissipation law. We recast the model in an equivalent form and first discretize the equivalent system in space to arrive at a time-dependent differential-algebraic equation (DAE) system, which preserves the mass conservation and energy dissipation law at the semi-discrete level. Then, we develop a temporal discretization scheme for the DAE system, where the mass conservation and the energy dissipation law are once again preserved at the fully discretized level. We prove that the fully discretized algorithm is unconditionally energy stable. Several numerical examples, including drop dynamics of viscous fluid drops immersed in another viscous fluid matrix and mixing dynamics of binary polymeric solutions, are presented to show the convergence property as well as the accuracy and efficiency of the new scheme.
International Nuclear Information System (INIS)
Kajzer, A; Pozorski, J; Szewc, K
2014-01-01
In the paper we present Large-eddy simulation (LES) results of the 3D Taylor-Green vortex obtained with three different computational approaches: Smoothed Particle Hydrodynamics (SPH), the Lattice Boltzmann Method (LBM) and the Finite Volume Method (FVM). The Smagorinsky model was chosen as the subgrid-scale closure in LES for all considered methods, and a selection of spatial resolutions has been investigated. The SPH and LBM computations have been carried out with in-house codes executed on GPU and compared, for validation purposes, with the FVM results obtained using the open-source CFD software OpenFOAM. A comparative study in terms of one-point statistics and turbulent energy spectra shows a good agreement of LES results for all methods. An analysis of the GPU code efficiency and implementation difficulties has been made. It is shown that both SPH and LBM may offer a significant advantage over mesh-based CFD methods.
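The Smagorinsky subgrid-scale closure used in all three methods reduces to an eddy viscosity computed from the resolved strain rate. A minimal sketch (the Smagorinsky constant value is an assumption, and each method applies nu_t within its own discretization):

```python
import math

def smagorinsky_viscosity(grad_u, delta, c_s=0.17):
    """Smagorinsky subgrid-scale eddy viscosity
        nu_t = (C_s * Delta)**2 * |S|,  |S| = sqrt(2 * S_ij * S_ij),
    where S_ij = 0.5 * (du_i/dx_j + du_j/dx_i) is the resolved strain-rate
    tensor, grad_u is the 3x3 velocity gradient tensor at a point, and
    Delta is the filter width (typically the local grid/particle spacing)."""
    s = [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(3)] for i in range(3)]
    s_mag = math.sqrt(2.0 * sum(s[i][j] ** 2 for i in range(3) for j in range(3)))
    return (c_s * delta) ** 2 * s_mag
```

The only method-dependent ingredients are how grad_u is evaluated (SPH kernel gradients, LBM moments, or FVM face gradients) and what plays the role of Delta.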
Jovanović, Dušan; Fedele, Renato; De Nicola, Sergio; Akhter, Tamina; Belić, Milivoj
2017-12-01
A self-consistent nonlinear hydrodynamic theory is presented of the propagation of a long and thin relativistic electron beam, for a typical plasma wake field acceleration configuration in an unmagnetized and overdense plasma. The random component of the trajectories of the beam particles as well as of their velocity spread is modelled by an anisotropic temperature, allowing the beam dynamics to be approximated as a 3D adiabatic expansion/compression. It is shown that even in the absence of the nonlinear plasma wake force, the localisation of the beam in the transverse direction can be achieved owing to the nonlinearity associated with the adiabatic compression/rarefaction and a coherent stationary state is constructed. Numerical calculations reveal the possibility of the beam focussing and defocussing, but the lifetime of the beam can be significantly extended by the appropriate adjustments, so that transverse oscillations are observed, similar to those predicted within the thermal wave and Vlasov kinetic models.
Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo
2017-01-01
In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to overcome the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose volume histograms. Furthermore, a perturbation strategy - the crossover and mutation operator hybrid approach - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6 MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all of the cases, without human intervention.
Directory of Open Access Journals (Sweden)
Elahe Fallah Mehdipour
2012-12-01
Optimal operation of multipurpose reservoirs is one of the complex, and sometimes nonlinear, problems in the field of multi-objective optimization. Evolutionary algorithms are optimization tools that search the decision space by simulating natural biological evolution and present a set of points as the optimum solutions of a problem. This research considers the application of multi-objective particle swarm optimization (MOPSO) to the optimal operation of the Bazoft reservoir with different objectives, including generating hydropower energy, supplying downstream demands (drinking, industry and agriculture), recreation and flood control. Solution sets of the MOPSO algorithm for pairwise combinations of objectives were first compared with compromise programming (CP) using different weighting and power coefficients; in all combinations of objectives the MOPSO algorithm was more capable than CP of finding solutions with an appropriate distribution, and these solutions dominated the CP solutions. Then, the ending points of the solution set from the MOPSO algorithm were compared with nonlinear programming (NLP) results. Results showed that, with a 0.3 percent difference from the NLP results, the MOPSO algorithm is more capable of presenting optimum solutions at the ending points of the solution set.
Kota, Sujatha; Padmanabhuni, Venkata Nageswara Rao; Budda, Kishor; K, Sruthi
2018-05-01
Elliptic Curve Cryptography (ECC) uses two keys, a private key and a public key, and is considered a public-key cryptographic algorithm that is used both for authentication of a person and for confidentiality of data. Either one of the keys is used in encryption and the other in decryption, depending on usage. The private key is used by the user in encryption, and the public key is used to identify the user in the case of authentication. Similarly, the sender encrypts with the private key and the public key is used to decrypt the message in the case of confidentiality. Choosing the private key is always an issue in all public-key cryptographic algorithms such as RSA and ECC: if tiny values are chosen at random, the security of the complete algorithm becomes an issue. Since the public key is computed from the private key, keys that are not chosen optimally can generate infinity values. The proposed Modified Elliptic Curve Cryptography offers two options for this selection: the first uses Particle Swarm Optimization and the second uses the Cuckoo Search Algorithm for randomly choosing the values. The proposed algorithms were developed and tested using a sample database, and both were found to be secure and reliable. The test results prove that the private key is chosen optimally, being neither repetitive nor tiny, and that the computations in the public key will not reach infinity.
Prediction of Tibial Rotation Pathologies Using Particle Swarm Optimization and K-Means Algorithms.
Sari, Murat; Tuna, Can; Akogul, Serkan
2018-03-28
The aim of this article is to identify pathological subjects in a population through different physical factors. To achieve this, the particle swarm optimization (PSO) and K-means (KM) clustering algorithms have been combined (PSO-KM). Datasets provided by the literature were divided into three clusters based on age and weight parameters, and the right tibial external rotation (RTER), right tibial internal rotation (RTIR), left tibial external rotation (LTER), and left tibial internal rotation (LTIR) values were each divided into three types, Type 1, Type 2 and Type 3 (Type 2 is non-pathological (normal) and the other two types are pathological (abnormal)). The rotation values of every subject in each cluster were noted. Then the algorithm was run and the produced values were compared with the real values. The hybrid PSO-KM algorithm was very successful at the optimal clustering of the tibial rotation types through the physical criteria. In this investigation, Type 2 in particular was highly predictable, and the PSO-KM algorithm was very successful as a system for clustering and optimizing the tibial motion data assessments. These research findings are expected to be very useful for health providers, such as physiotherapists and orthopedists, as they may help clinicians design appropriate treatment schedules for patients.
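The KM half of the PSO-KM hybrid is plain Lloyd's K-means. A minimal sketch on scalar data (clustering on age/weight in the paper is analogous but multidimensional, and the PSO part, not shown here, searches over candidate cluster centers to avoid poor local minima):

```python
def kmeans_1d(data, centers, iters=100):
    """Plain Lloyd's K-means on scalar data: assign each point to its
    nearest center, then move each center to the mean of its cluster,
    repeating until the centers stop moving."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in data:
            j = min(range(len(centers)), key=lambda k: abs(x - centers[k]))
            clusters[j].append(x)
        new_centers = [sum(c) / len(c) if c else centers[j]
                       for j, c in enumerate(clusters)]
        if new_centers == centers:
            break
        centers = new_centers
    return centers

# three well-separated groups recover their means
data = [1.0, 1.2, 0.8, 5.0, 5.1, 4.9, 9.0, 9.2, 8.8]
centers = kmeans_1d(data, centers=[0.0, 4.0, 10.0])
```

Because Lloyd's iterations only refine the initial centers, wrapping the center initialization in a PSO search (as in PSO-KM) trades extra evaluations for robustness to bad starting points.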
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
Thermodynamic design of Stirling engine using multi-objective particle swarm optimization algorithm
International Nuclear Information System (INIS)
Duan, Chen; Wang, Xinggang; Shu, Shuiming; Jing, Changwei; Chang, Huawei
2014-01-01
Highlights: • An improved thermodynamic model taking into account an irreversibility parameter was developed. • A multi-objective optimization method for designing Stirling engines was investigated. • The multi-objective particle swarm optimization algorithm was adopted in the area of Stirling engines for the first time. - Abstract: In recent years, interest in the Stirling engine has remarkably increased due to its ability to use any external heat source, including solar energy, fossil fuels and biomass. A large number of studies have been done on Stirling cycle analysis. In the present study, a mathematical model based on thermodynamic analysis of the Stirling engine, considering regenerative losses and internal irreversibilities, has been developed. Power output, thermal efficiency and the cycle irreversibility parameter of the Stirling engine are optimized simultaneously using the Particle Swarm Optimization (PSO) algorithm, which is more effective than traditional genetic algorithms. In this optimization problem, some important parameters of the Stirling engine are considered as decision variables, such as the temperatures of the working fluid in the high-temperature and low-temperature isothermal processes, the dead volume ratios of each heat exchanger, the volumes of the working spaces, the effectiveness of the regenerator, and the system charge pressure. The Pareto optimal frontier is obtained and the final design solution is selected by the Linear Programming Technique for Multidimensional Analysis of Preference (LINMAP). Results show that the proposed multi-objective optimization approach can significantly outperform traditional single-objective approaches.
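Multi-objective formulations like this rest on Pareto dominance. A minimal sketch of extracting the non-dominated front from candidate designs follows; for brevity only two objectives (power, efficiency, both maximized) are shown, the third objective is omitted, and all numbers are invented:

```python
def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    (here both objectives are maximized) and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only solutions not dominated by any other candidate."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

# Invented (power in kW, thermal efficiency) pairs for illustration.
candidates = [(3.2, 0.30), (2.8, 0.34), (3.5, 0.28), (3.0, 0.29)]
front = pareto_front(candidates)
```

A selection rule such as LINMAP would then pick one compromise solution from `front`.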
Directory of Open Access Journals (Sweden)
GholamReza Havaei
2015-09-01
Full Text Available Reinforced concrete reservoirs (RCRs) have been used extensively in municipal and industrial facilities for several decades. The design of these structures requires attention not only to strength requirements but to serviceability requirements as well. These structures may be square, round, or oval reinforced concrete tanks, located above, below, or partially below ground. The main challenge is to design concrete liquid-containing structures that resist the extremes of seasonal temperature changes and a variety of loading conditions while remaining liquid-tight for a useful life of 50 to 60 years. In this study, optimization of the structural design is performed with a particle swarm algorithm. First, structural analysis establishes the full range of feasible shell thicknesses and rebar areas. In the second step, through a parameter-identification interchange scheme, source code implementing the particle swarm algorithm in MATLAB is linked to the analysis software, so that the best thicknesses and total bar areas for each element are found. Finally, the structure is optimized with circumferential stiffeners, showing a 19% decrease in rebar weight, a 20% decrease in concrete volume, and at least a 13% cost reduction in construction compared with conventional 10,000 m3 RCR structures.
Fitness Estimation Based Particle Swarm Optimization Algorithm for Layout Design of Truss Structures
Directory of Open Access Journals (Sweden)
Ayang Xiao
2014-01-01
Full Text Available Because vastly different variables and constraints must be considered simultaneously, truss layout optimization is a typically difficult constrained mixed-integer nonlinear program. Moreover, the computational cost of truss analysis is often quite expensive. In this paper, a novel fitness-estimation-based particle swarm optimization algorithm with an adaptive penalty function approach (FEPSO-AP) is proposed to handle this problem. FEPSO-AP adopts a special fitness estimation strategy to evaluate similar particles in the current population in order to reduce the computational cost. Furthermore, a concise adaptive penalty function is employed by FEPSO-AP, which can handle multiple constraints effectively by making good use of historical iteration information. Four benchmark examples with fixed topologies and up to 44 design dimensions were studied to verify the generality and efficiency of the proposed algorithm. Numerical results of the present work, compared with results of other state-of-the-art hybrid algorithms in the literature, demonstrate that the convergence rate and the solution quality of FEPSO-AP are essentially competitive.
International Nuclear Information System (INIS)
Huang, Chia-Ling
2015-01-01
This paper proposes a new swarm intelligence method known as the Particle-based Simplified Swarm Optimization (PSSO) algorithm, together with a modification of the updating mechanism (UM), called N-UM and R-UM, and simultaneously applies an orthogonal array test (OA) to solve reliability-redundancy allocation problems (RRAPs) successfully. One difficulty of RRAP is the need to maximize system reliability in cases where the number of redundant components and the reliability of corresponding components in each subsystem are simultaneously decided with nonlinear constraints. In this paper, four RRAP benchmarks are used to display the applicability of the proposed PSSO, which advances the strengths of both PSO and SSO to enable optimizing the RRAP, a mixed-integer nonlinear program. When the computational results are compared with those of previously developed algorithms in the existing literature, the findings indicate that the proposed PSSO is highly competitive and performs well. - Highlights: • This paper proposes a particle-based simplified swarm optimization algorithm (PSSO) to optimize RRAP. • Furthermore, the UM and an OA are adapted to improve the optimization of RRAP. • Four systems are introduced and the results demonstrate that the PSSO performs particularly well.
Directory of Open Access Journals (Sweden)
Zhiwei Ye
2015-01-01
Full Text Available Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered.
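The transformation step can be sketched as a global grey-level mapping through a normalized incomplete Beta function, numerically integrated here. The parameters `a` and `b` stand in for the values the CS-PSO search would tune; the function names, pixel values and integration scheme are all illustrative:

```python
def beta_transform(u, a, b, steps=1000):
    """Normalized incomplete Beta function B_x(a,b)/B(a,b), evaluated by
    midpoint integration; maps u in [0,1] monotonically onto [0,1]."""
    def integral(x):
        h = x / steps
        return sum(((i + 0.5) * h) ** (a - 1) * (1 - (i + 0.5) * h) ** (b - 1)
                   for i in range(steps)) * h
    return integral(u) / integral(1.0)

def enhance(pixels, a, b):
    """Stretch a low-contrast strip of grey levels to the full 0..255 range."""
    lo, hi = min(pixels), max(pixels)
    return [round(255 * beta_transform((p - lo) / (hi - lo), a, b))
            for p in pixels]

# Invented low-contrast grey levels; a = b = 2 gives an S-shaped curve.
out = enhance([100, 110, 120, 130, 140], a=2.0, b=2.0)
```

In the paper's setting, CS-PSO would pick `a` and `b` to maximize the fitness criterion built from threshold, entropy, and grey-level density.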
A novel robust and efficient algorithm for charge particle tracking in high background flux
International Nuclear Information System (INIS)
Fanelli, C; Cisbani, E; Dotto, A Del
2015-01-01
The high luminosity that will be reached in the new generation of high-energy particle and nuclear physics experiments implies high background rates and large tracker occupancy, and therefore represents a new challenge for particle tracking algorithms. For instance, at Jefferson Laboratory (JLab) (VA, USA), one of the most demanding experiments in this respect, performed with a 12 GeV electron beam, is characterized by a luminosity up to 10^39 cm^-2 s^-1. To this end, Gaseous Electron Multiplier (GEM) based trackers are under development for a new spectrometer that will operate at these high rates in Hall A of JLab. Within this context, we developed a new tracking algorithm based on a multistep approach: (i) all hardware information (time and charge) is exploited to minimize the number of hits to associate; (ii) a dedicated neural network (NN) has been designed for fast and efficient association of the hits measured by the GEM detector; (iii) the measurements of the associated hits are further improved in resolution through the application of a Kalman filter and a Rauch-Tung-Striebel smoother. The algorithm is briefly presented along with a discussion of the promising first results. (paper)
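Step (iii) combines a forward Kalman filter with a Rauch-Tung-Striebel backward smoothing pass. A minimal scalar sketch follows, using a random-walk state model and invented noise values rather than the paper's full track model:

```python
def kalman_rts(zs, q=0.01, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter (random-walk model x_k = x_{k-1} + noise)
    followed by a Rauch-Tung-Striebel backward smoothing pass."""
    xs, ps, xps, pps = [], [], [], []    # filtered and predicted estimates
    x, p = x0, p0
    for z in zs:
        xp, pp = x, p + q                # predict
        k = pp / (pp + r)                # Kalman gain
        x = xp + k * (z - xp)            # update with measurement z
        p = (1 - k) * pp
        xs.append(x); ps.append(p); xps.append(xp); pps.append(pp)
    xs_s = xs[:]                         # RTS backward pass
    for i in range(len(zs) - 2, -1, -1):
        c = ps[i] / pps[i + 1]           # smoother gain
        xs_s[i] = xs[i] + c * (xs_s[i + 1] - xps[i + 1])
    return xs_s

# Invented noisy measurements of a quantity near 1.0.
smoothed = kalman_rts([1.1, 0.9, 1.2, 0.8, 1.0])
```

The smoother reuses the stored predicted means and covariances, so every estimate benefits from all measurements, not just earlier ones.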
Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing
2015-01-01
An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm to perform a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected to be the new personal best particles and global best particle to guide the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of better global search capability and faster convergence rate.
Jets/MET Performance with the combination of Particle flow algorithm and SoftKiller
Yamamoto, Kohei
2017-01-01
The main purpose of my work is to study the performance of the combination of the Particle Flow algorithm (PFlow) and SoftKiller (SK), "PF+SK". The ATLAS experiment currently employs topological clusters (Topo) for jet reconstruction, but we want to replace them with a more effective method, PFlow. PFlow provides another method to reconstruct jets [1]. With this algorithm, we combine the energy deposits in the calorimeters with the measurements in the ID tracker. This strategy enables us to associate these consistent measurements in the detector with the same particles and avoid double counting. SK is a simple and effective way of suppressing pile-up [2]. In this method, we divide the rapidity-azimuth plane into square patches and eliminate particles with transverse momentum below a threshold pt_cut, which is derived from the patch pt densities so that the median of the pt density becomes zero. Practically, this is equivalent to gradually increasing pt_cut until exactly half of the patches become empty. Because there is no official calibration for PF+SK so far, we have t...
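The "half the patches become empty" rule can be sketched as a toy one-liner over (patch, pt) pairs. This is a simplified reconstruction, assuming unit-area patches and using the hardest-particle-per-patch formulation of the median; the event content is invented:

```python
def softkiller(particles, patches):
    """particles: (patch_index, pt) pairs. The cut is the median over
    patches of the hardest particle pt in each patch; removing everything
    softer than it empties roughly half of the patches."""
    pt_max = [0.0] * patches
    for idx, pt in particles:
        pt_max[idx] = max(pt_max[idx], pt)
    cut = sorted(pt_max)[patches // 2]   # upper median for even patch counts
    return [(idx, pt) for idx, pt in particles if pt >= cut]

# Four patches: two soft (pile-up-like), two containing hard particles.
evts = [(0, 0.3), (0, 0.5), (1, 0.4), (2, 5.0), (2, 0.2), (3, 8.0)]
kept = softkiller(evts, patches=4)
```

After the cut, the two purely soft patches are empty (half of the four), while the hard particles in the other two survive.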
Wu, Lingling
composite deuterium-xenon liners reduce the energy gain due to lower target compression rates. The effect of heating of targets by alpha particles on the fusion energy gain has also been investigated. The study of the dependence of the ram pressure amplification on radial compressibility showed good agreement with the theory. The study concludes that a liner with higher Mach number and lower adiabatic index gamma (the ratio of specific heats) will generate higher ram pressure amplification and higher fusion energy gain. We implemented a second-order embedded boundary method for the Maxwell equations in geometrically complex domains. The numerical scheme is second order in both space and time. Compared to the first-order stair-step approximation of complex geometries within the FDTD method, this method avoids the spurious solutions introduced by the stair-step approximation. Unlike the finite element method and the FE-FD hybrid method, no triangulation is needed for this scheme. This method preserves the simplicity of the embedded boundary method and is easy to implement. We also propose a conservative (symplectic) fourth-order scheme for uniform geometry boundaries.
An Adaptive Multi-Objective Particle Swarm Optimization Algorithm for Multi-Robot Path Planning
Directory of Open Access Journals (Sweden)
Nizar Hadi Abbas
2016-07-01
Full Text Available This paper discusses an optimal path planning algorithm based on an Adaptive Multi-Objective Particle Swarm Optimization Algorithm (AMOPSO) for two case studies. In the first case, a single robot must reach a goal in a static environment containing two obstacles and two danger sources. The second case improves the ability of five robots to reach their goals by the shortest way. For the first case, the proposed algorithm solves the optimization problem by finding the minimum distance from the initial to the goal position while ensuring that the generated path keeps a maximum distance from the danger zones. For the second case, it finds the shortest path for every robot, without any collision between them, in the shortest time. In order to evaluate the proposed algorithm in terms of finding the best solution, six benchmark test functions are used to compare AMOPSO with the standard MOPSO. The results show that AMOPSO has a better ability to escape local optima, with quicker convergence than MOPSO. The simulation results using Matlab 2014a indicate that this methodology is extremely valuable for every robot in a multi-robot framework to discover its own proper path from the start to the destination position with minimum distance and time.
Energy Technology Data Exchange (ETDEWEB)
Wang, Cheng-Der, E-mail: jdwang@iner.gov.tw [Nuclear Engineering Division, Institute of Nuclear Energy Research, No. 1000, Wenhua Rd., Jiaan Village, Longtan Township, Taoyuan County 32546, Taiwan, ROC (China); Lin, Chaung [National Tsing Hua University, Department of Engineering and System Science, 101, Section 2, Kuang Fu Road, Hsinchu 30013, Taiwan (China)
2013-02-15
Highlights: ► The PSO algorithm was adopted to automatically design a BWR CRP. ► A local search procedure was added to improve the result of the PSO algorithm. ► The results show that the obtained CRP is as good as that in the previous work. -- Abstract: This study developed a method for the automatic design of a boiling water reactor (BWR) control rod pattern (CRP) using the particle swarm optimization (PSO) algorithm. The PSO algorithm is more random than the rank-based ant system (RAS) that was used to solve the same BWR CRP design problem in previous work. In addition, a local search procedure was used to make improvements after PSO by adding the single control rod (CR) effect. The design goal was to obtain a CRP such that the thermal limits and shutdown margin satisfy the design requirements and the cycle length, which is implicitly controlled by the axial power distribution, is acceptable. The results showed that the same acceptable CRP found in the previous work could be obtained.
A Particle Swarm Optimization Algorithm for Neural Networks in Recognition of Maize Leaf Diseases
Directory of Open Access Journals (Sweden)
Zhiyong ZHANG
2014-03-01
Full Text Available Neural networks are significant for the recognition of crop diseases, but they suffer from slow convergence and a tendency toward local optima. In order to identify maize leaf diseases using machine vision more accurately, we propose an improved particle swarm optimization algorithm for neural networks. With the algorithm, the neural network's performance is improved: it reasonably determines the thresholds and connection weights of the neural network and improves its capability for solving image recognition problems. Finally, a simulation example shows that the neural network model with optimization recognizes significantly better than the one without. The model accuracy has been improved to meet the actual needs of maize leaf disease recognition.
Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng
2018-02-01
Large-scale access of distributed power can relieve current environmental pressure while increasing the complexity and uncertainty of the overall distribution system. Rational planning of distributed power can effectively improve the system voltage level. To this end, the specific impact on distribution network power quality caused by the access of typical distributed power was analyzed and, by improving the learning factors and the inertia weight, an improved particle swarm optimization algorithm (IPSO) was proposed to solve distributed generation planning for the distribution network and to improve the local and global search performance of the algorithm. Results show that the proposed method can reduce the system network loss and improve the economic performance of system operation with distributed generation.
OPTIMIZATION OF PLY STACKING SEQUENCE OF COMPOSITE DRIVE SHAFT USING PARTICLE SWARM ALGORITHM
Directory of Open Access Journals (Sweden)
CHANNAKESHAVA K. R.
2011-06-01
Full Text Available In this paper an attempt has been made to optimize the ply stacking sequence of single-piece E-Glass/Epoxy and Boron/Epoxy composite drive shafts using the particle swarm algorithm (PSA). PSA is a population-based evolutionary stochastic optimization technique, a recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations. A PSA program is developed to optimize the ply stacking sequence with the objective of weight minimization, considering the design constraints of torque transmission capacity, fundamental natural frequency, lateral vibration and torsional buckling strength, with the number of laminates, ply thickness and stacking sequence as design variables. The weight savings of the E-Glass/Epoxy and Boron/Epoxy shafts from PSA were 51% and 85% relative to the steel shaft, respectively. The optimum results of PSA are compared with genetic algorithm (GA) results, and PSA is found to yield better results than GA.
The particle swarm optimization algorithm applied to nuclear systems surveillance test planning
International Nuclear Information System (INIS)
Siqueira, Newton Norat
2006-12-01
This work shows a new approach to solve availability maximization problems in electromechanical systems under periodic preventive scheduled tests. This approach uses a new optimization tool called PSO (Particle Swarm Optimization), developed by Kennedy and Eberhart (2001), integrated with a probabilistic safety analysis model. Two maintenance optimization problems are solved by the proposed technique: the first is a hypothetical electromechanical configuration and the second is a real case from a nuclear power plant (emergency diesel generators). For both problems PSO is compared to a genetic algorithm (GA). In the experiments made, PSO was able to obtain results comparable to or even slightly better than those obtained by GA. Moreover, the PSO algorithm is simpler and its convergence is faster, indicating that PSO is a good alternative for solving such problems. (author)
International Nuclear Information System (INIS)
Lian Zhigang; Gu Xingsheng; Jiao Bin
2008-01-01
It is well known that the flow-shop scheduling problem (FSSP) is a branch of production scheduling and is NP-hard. Many different approaches have been applied to permutation flow-shop scheduling to minimize makespan, but current algorithms cannot guarantee optimality even for moderate-size problems. Several studies apply PSO to continuous optimization problems, but few address discrete scheduling problems. In this paper, according to the discrete character of the FSSP, a novel particle swarm optimization (NPSO) algorithm is presented and successfully applied to permutation flow-shop scheduling to minimize makespan. Computational experiments on seven representative instances (Taillard) based on practical data were made, and comparing the NPSO with a standard GA, we find that the NPSO is clearly more efficacious than the standard GA for the FSSP with the makespan objective.
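The objective being minimized here, the makespan of a job permutation, follows the standard flow-shop recursion: each job starts on a machine only after both that machine is free and the job has finished on the previous machine. A minimal sketch with invented processing times:

```python
from itertools import permutations

def makespan(perm, p):
    """Completion time of the last job on the last machine for a given
    job permutation. p[j][m] = processing time of job j on machine m."""
    machines = len(p[0])
    done = [0] * machines                # completion times per machine
    for j in perm:
        for m in range(machines):
            start = max(done[m], done[m - 1] if m > 0 else 0)
            done[m] = start + p[j][m]
    return done[-1]

# Three jobs on two machines (made-up times); exhaustive search is feasible
# at this size, which is what PSO-style heuristics replace at scale.
times = [[3, 2], [1, 4], [2, 1]]
best = min(makespan(order, times) for order in permutations(range(3)))
```

Any discrete PSO for the FSSP ultimately scores candidate permutations with exactly this kind of evaluation.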
Multiple R&D projects scheduling optimization with improved particle swarm algorithm.
Liu, Mengqi; Shan, Miyuan; Wu, Juan
2014-01-01
For most enterprises, a key step in winning the initiative in fierce market competition is to improve their R&D ability so as to meet customers' various demands more quickly and at lower cost. This paper discusses the features of multiple R&D environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a given period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the version developed here to the resource-constrained multi-project scheduling model in a simulation experiment. The feasibility of the model and the validity of the algorithm are demonstrated in the experiment.
A Hard Constraint Algorithm to Model Particle Interactions in DNA-laden Flows
Energy Technology Data Exchange (ETDEWEB)
Trebotich, D; Miller, G H; Bybee, M D
2006-08-01
We present a new method for particle interactions in polymer models of DNA. The DNA is represented by a bead-rod polymer model and is fully-coupled to the fluid. The main objective in this work is to implement short-range forces to properly model polymer-polymer and polymer-surface interactions, specifically, rod-rod and rod-surface uncrossing. Our new method is based on a rigid constraint algorithm whereby rods elastically bounce off one another to prevent crossing, similar to our previous algorithm used to model polymer-surface interactions. We compare this model to a classical (smooth) potential which acts as a repulsive force between rods, and rods and surfaces.
A method and algorithm for correlating scattered light and suspended particles in polluted water
International Nuclear Information System (INIS)
Sami Gumaan Daraigan; Mohd Zubir Matjafri; Khiruddin Abdullah; Azlan Abdul Aziz; Abdul Aziz Tajuddin; Mohd Firdaus Othman
2005-01-01
An optical model has been developed for measuring total suspended solids (TSS) concentrations in water. This approach is based on the characteristics of light scattered from the suspended particles in water samples. An optical sensor system (an active spectrometer) has been developed to correlate pollutant (TSS) concentration with the scattered radiation. Scattered light was measured in terms of the output voltage of the sensor system's phototransistor. The developed algorithm was used to calculate and estimate the concentrations of the polluted water samples, and the proposed algorithm was calibrated using the observed readings. The results display a strong correlation between the radiation values and the total suspended solids concentrations. The proposed system yields a high degree of accuracy, with a correlation coefficient (R) of 0.99 and a root mean square error (RMS) of 63.57 mg/l. (Author)
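The calibration step amounts to fitting phototransistor voltage against known TSS concentrations and reporting the correlation coefficient. A least-squares sketch follows; the voltages and concentrations below are invented, not the paper's data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope, intercept, and correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / (sxx * syy) ** 0.5

# Invented sensor voltages (V) vs. known TSS concentrations (mg/l).
volts = [0.10, 0.22, 0.35, 0.48, 0.60]
tss = [50, 160, 290, 410, 520]
slope, intercept, r = fit_line(volts, tss)
estimate = slope * 0.30 + intercept      # predict TSS for a new reading
```

Once calibrated, the fitted line converts any new voltage reading into an estimated concentration.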
Sung, Wen-Tsai; Chiang, Yen-Chun
2012-12-01
This study examines a wireless sensor network with real-time remote identification using the Android study of things (HCIOT) platform in community healthcare. An improved particle swarm optimization (PSO) method is proposed to efficiently enhance the measurement precision of physiological multi-sensor data fusion in the Internet of Things (IoT) system. The improved PSO (IPSO) includes inertia weight factor design and shrinkage factor adjustment to improve the data fusion performance of the PSO algorithm. The Android platform is employed to build multi-physiological signal processing and timely medical care analysis. Wireless sensor network signal transmission and Internet links allow community or family members to have timely medical care network services.
Sathish Kumar, V. R.; Anbuudayasankar, S. P.; Rameshkumar, K.
2018-02-01
In the current globalized scenario, business organizations depend heavily on cost-effective supply chains to enhance profitability and better handle competition. Demand uncertainty is an important factor in the success or failure of a supply chain. An efficient supply chain limits the stock held at all echelons to the extent of avoiding a stock-out situation. In this paper, a three-echelon supply chain model consisting of supplier, manufacturing plant and market is developed and optimized using a particle swarm intelligence algorithm.
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the region ACO (RACO) algorithm, the stochastic ACO (SACO) algorithm and the homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the variation of the length of the rotational semi-axis is considered.
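Of the candidate monomodal forms, the Rosin-Rammler cumulative distribution has the closed form F(D) = 1 - exp(-(D/D̄)^n), with characteristic size D̄ and spread exponent n. A small sketch of evaluating it (the parameter values are invented):

```python
import math

def rosin_rammler_cdf(d, d_bar, n):
    """Cumulative undersize fraction for the Rosin-Rammler (Weibull-type)
    size distribution with characteristic size d_bar and spread n."""
    return 1.0 - math.exp(-((d / d_bar) ** n))

def median_size(d_bar, n):
    """Size below which half of the particle population lies."""
    return d_bar * math.log(2.0) ** (1.0 / n)

# At d = d_bar the undersize fraction is 1 - 1/e regardless of n.
cdf_at_dbar = rosin_rammler_cdf(5.0, d_bar=5.0, n=2.0)
```

An inverse scheme like the ACO variants above would search over (D̄, n) so that the predicted extinction matches the measurements.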
LC HCAL Absorber And Active Media Comparisons Using a Particle-Flow Algorithm
International Nuclear Information System (INIS)
Magill, Steve; Kuhlmann, S.
2006-01-01
We compared Stainless Steel (SS) to Tungsten (W) as absorber for the HCAL in simulation, using single particles (pions) and a Particle-Flow Algorithm applied to e+e- → Z → qqbar events. We then used the PFA to evaluate the performance characteristics of a LC HCAL using W absorber, comparing scintillator and RPC as active media. The W/Scintillator HCAL performs better than the SS/Scintillator version due to finer λ_I sampling and narrower showers in the dense absorber. The W/Scintillator HCAL performs better than the W/RPC HCAL except in the number of unused hits in the PFA. Since this represents the confusion term in the PFA response, additional tuning and optimization of a W/RPC HCAL might significantly improve this HCAL configuration.
A new multiple robot path planning algorithm: dynamic distributed particle swarm optimization.
Ayari, Asma; Bouamama, Sadok
2017-01-01
Multiple robot systems have become a major study concern in the field of robotics research. Their control becomes unreliable and even infeasible as the number of robots increases. In this paper, a new dynamic distributed particle swarm optimization (D²PSO) algorithm is proposed for trajectory path planning of multiple robots in order to find a collision-free optimal path for each robot in the environment. The proposed approach consists in calculating two local optima detectors, LOD_pBest and LOD_gBest. Particles which are unable to improve their personal best and global best for a predefined number of successive iterations are replaced with restructured ones. Stagnation and local optima problems are avoided by adding diversity to the population, without losing the fast convergence characteristic of PSO. Experiments with multiple robots are provided and prove the effectiveness of this approach compared with the distributed PSO.
A Novel Cluster Head Selection Algorithm Based on Fuzzy Clustering and Particle Swarm Optimization.
Ni, Qingjian; Pan, Qianqian; Du, Huimin; Cao, Cen; Zhai, Yuqing
2017-01-01
An important objective of a wireless sensor network is to prolong the network life cycle, and topology control is of great significance for extending it. Based on previous work, for cluster head selection in hierarchical topology control, we propose a solution based on fuzzy clustering preprocessing and particle swarm optimization. More specifically, the fuzzy clustering algorithm is first used for initial clustering of sensor nodes according to geographical location, where a sensor node belongs to a cluster with a determined probability, and the number of initial clusters is analyzed and discussed. Furthermore, the fitness function is designed considering both the energy consumption and distance factors of the wireless sensor network. Finally, the cluster head nodes in the hierarchical topology are determined based on the improved particle swarm optimization. Experimental results show that, compared with traditional methods, the proposed method achieves the purpose of reducing the mortality rate of nodes and extending the network life cycle.
Morales, V. L.; Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.
2017-12-01
Biofilms are ubiquitous bacterial communities growing in various porous media including soils, trickling and sand filters and are relevant for applications such as the degradation of pollutants for bioremediation, waste water or drinking water production purposes. By their development, biofilms dynamically change the structure of porous media, increasing the heterogeneity of the pore network and the non-Fickian or anomalous dispersion. In this work, we use an experimental approach to investigate the influence of biofilm growth on pore scale hydrodynamics and transport processes and propose a correlated continuous time random walk model capturing these observations. We perform three-dimensional particle tracking velocimetry at four different time points from 0 to 48 hours of biofilm growth. The biofilm growth notably impacts pore-scale hydrodynamics, as shown by strong increase of the average velocity and in tailing of Lagrangian velocity probability density functions. Additionally, the spatial correlation length of the flow increases substantially. This points at the formation of preferential flow pathways and stagnation zones, which ultimately leads to an increase of anomalous transport in the porous media considered, characterized by non-Fickian scaling of mean-squared displacements and non-Gaussian distributions of the displacement probability density functions. A gamma distribution provides a remarkable approximation of the bulk and the high tail of the Lagrangian pore-scale velocity magnitude, indicating a transition from a parallel pore arrangement towards a more serial one. Finally, a correlated continuous time random walk based on a stochastic relation velocity model accurately reproduces the observations and could be used to predict transport beyond the time scales accessible to the experiment.
Fully implicit Particle-in-cell algorithms for multiscale plasma simulation
Energy Technology Data Exchange (ETDEWEB)
Chacon, Luis [Los Alamos National Laboratory]
2015-07-16
The outline of the paper is as follows: particle-in-cell (PIC) methods for fully ionized collisionless plasmas; explicit vs. implicit PIC; 1D electrostatic implicit PIC (charge and energy conservation, moment-based acceleration); and generalization to multi-D electromagnetic PIC with the Vlasov-Darwin model (review and motivation for the Darwin model; conservation properties for energy, charge, and canonical momenta; and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle mover strategy), exact global energy conservation (no particle self-heating or self-cooling), an adaptive particle orbit integrator to control errors in momentum conservation, and conservation of canonical momenta (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities even for ω_{pe}Δt >> 1 and Δx >> λ_{D}. It requires far fewer degrees of freedom than explicit PIC for comparable accuracy in challenging problems, and significant CPU gains over explicit PIC have been demonstrated. The method has much potential for efficiency gains over explicit schemes in long-time-scale applications. Moment-based acceleration is effective in minimizing the number of nonlinear function evaluations N_{FE}, leading to an optimal algorithm.
Brownian dynamics with hydrodynamic interactions
International Nuclear Information System (INIS)
Ermak, D.L.; McCammon, J.A.
1978-01-01
A method for simulating the Brownian dynamics of N particles with the inclusion of hydrodynamic interactions is described. The particles may also be subject to the usual interparticle or external forces (e.g., electrostatic) which have been included in previous methods for simulating the Brownian dynamics of particles in the absence of hydrodynamic interactions. The present method is derived from the Langevin equations for the N-particle assembly, and the results are shown to be consistent with the corresponding Fokker-Planck results. Sample calculations on small systems illustrate the importance of including hydrodynamic interactions in Brownian dynamics simulations. The method should be useful for simulation studies of diffusion-limited reactions, polymer dynamics, protein folding, particle coagulation, and other phenomena in solution.
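A minimal sketch of the propagation step of such a Brownian dynamics algorithm is given below, assuming the Rotne-Prager-Yamakawa form of the diffusion tensor for equal, non-overlapping spheres (a common choice; the paper itself only requires some configuration-dependent tensor). Function names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def rpy_diffusion_tensor(pos, a, kT, eta):
    """Rotne-Prager-Yamakawa diffusion tensor (3N x 3N) for N
    non-overlapping spheres of radius a in a solvent of viscosity eta."""
    n = len(pos)
    D = np.zeros((3 * n, 3 * n))
    d0 = kT / (6 * np.pi * eta * a)      # Stokes-Einstein self-diffusion
    for i in range(n):
        D[3*i:3*i+3, 3*i:3*i+3] = d0 * np.eye(3)
        for j in range(i + 1, n):
            r = pos[j] - pos[i]
            dist = np.linalg.norm(r)
            rhat = np.outer(r, r) / dist**2
            c1 = 1 + 2 * a**2 / (3 * dist**2)
            c2 = 1 - 2 * a**2 / dist**2
            block = d0 * (3 * a / (4 * dist)) * (c1 * np.eye(3) + c2 * rhat)
            D[3*i:3*i+3, 3*j:3*j+3] = block
            D[3*j:3*j+3, 3*i:3*i+3] = block
    return D

def ermak_mccammon_step(pos, forces, a, kT, eta, dt, rng):
    """One step: deterministic drift D F dt / kT plus a correlated random
    displacement with covariance 2 D dt.  (The divergence term of the
    full scheme vanishes identically for the RPY tensor.)"""
    n = len(pos)
    D = rpy_diffusion_tensor(pos, a, kT, eta)
    drift = D @ forces.ravel() * dt / kT
    L = np.linalg.cholesky(D)            # correlates the random kicks
    noise = np.sqrt(2 * dt) * L @ rng.standard_normal(3 * n)
    return pos + (drift + noise).reshape(n, 3)

# Example: two beads on the z-axis, no external forces.
rng = np.random.default_rng(1)
pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
new_pos = ermak_mccammon_step(pos, np.zeros((2, 3)), a=0.1, kT=1.0,
                              eta=1.0, dt=1e-4, rng=rng)
```

The Cholesky factor is the step that encodes the hydrodynamic coupling: without it, each particle would receive an independent random kick, recovering free-draining Brownian dynamics.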
DEFF Research Database (Denmark)
Ren, Jingzheng; Tan, Shiyu; Dong, Lichun
2010-01-01
A mathematical model relating operation profits with reflux ratio of a stage distillation column was established. In order to optimize the reflux ratio by solving the nonlinear objective function, an improved particle swarm algorithm was developed and has been proved to be able to enhance...... the searching ability of basic particle swarm algorithm significantly. An example of utilizing the improved algorithm to solve the mathematical model was demonstrated; the result showed that it is efficient and convenient to optimize the reflux ratio for a distillation column by using the mathematical model...
Directory of Open Access Journals (Sweden)
Zhou Feng
2013-09-01
A path-planning method for mobile robots based on the Rapidly-exploring Random Tree (RRT) and Particle Swarm Optimization (PSO) is proposed. First, the grid method is used to describe the working space of the mobile robot; then the RRT algorithm is used to obtain a global navigation path, and the PSO algorithm is adopted to refine it into a better path. Computer experiment results demonstrate that this novel algorithm can rapidly plan an optimal path in a cluttered environment. Successful obstacle avoidance is achieved, and the model is robust and performs reliably.
Frydel, Derek; Rice, Stuart A
2007-12-01
We report a hydrodynamic analysis of the long-time behavior of the linear and angular velocity autocorrelation functions of an isolated colloid particle constrained to have quasi-two-dimensional motion, and compare the predicted behavior with the results of lattice-Boltzmann simulations. Our analysis uses the singularity method to characterize unsteady linear motion of an incompressible fluid. For bounded fluids we construct an image system with a discrete set of fundamental solutions of the Stokes equation from which we extract the long-time decay of the velocity. For the case that there are free-slip boundary conditions at walls separated by H particle diameters, the time evolution of the parallel linear velocity and the perpendicular rotational velocity following impulsive excitation both correspond to the time evolution of a two-dimensional (2D) fluid with effective density ρ_2D = ρH. For the case that there are no-slip boundary conditions at the walls, the same types of motion correspond to 2D fluid motions with a coefficient of friction ξ = π²ν/H², modulo a prefactor of order 1, with ν the kinematic viscosity. The linear particle motion perpendicular to the walls also experiences an effective frictional force, but the time dependence is proportional to t^(-2), which cannot be related to either pure 3D or pure 2D fluid motion. Our incompressible fluid model predicts correct self-diffusion constants but it does not capture all of the effects of the fluid confinement on the particle motion. In particular, the linear motion of a particle perpendicular to the walls is influenced by coupling between the density flux and the velocity field, which leads to damped velocity oscillations whose frequency is proportional to c_s/H, with c_s the velocity of sound. For particle motion parallel to no-slip walls there is a slowing down of a density flux that spreads diffusively, which generates a long-time decay proportional to t^(-1).
Directory of Open Access Journals (Sweden)
Huan Zhang
2017-01-01
For the problem of multiaircraft cooperative suppression interference array (MACSIA) against an enemy air defense radar network in electronic warfare mission planning, the concept of a route-planning security zone is first proposed, together with a solution for its minimum width based on mathematical morphology. Secondly, the minimum width of the security zone and the sum of the distances between each jamming aircraft and the center of the radar network are taken as objective functions, the multiobjective optimization model of MACSIA is built, and an improved multiobjective particle swarm optimization algorithm is used to solve the model. A decomposition mechanism is adopted, and proportional distribution is used to maintain the diversity of newly found nondominated solutions. Finally, the Pareto optimal solutions are analyzed by simulation, the optimal MACSIA schemes for each jamming aircraft suppressing the enemy air defense radar network are obtained, and the correctness of the multiobjective optimization model is verified. The results also show that the improved multiobjective particle swarm optimization algorithm is feasible and effective for solving the MACSIA problem.
Directory of Open Access Journals (Sweden)
Quanzhen Huang
2017-01-01
The numbers and locations of sensors and actuators play an important role in the cost and control performance of an active vibration control system for a piezoelectric smart structure, and an improper configuration may degrade the control system. An optimal location method for piezoelectric actuators and sensors is proposed in this paper based on the particle swarm algorithm (PSA). Due to the complexity of the frame structure, it can be treated as a combination of many piezoelectric intelligent beams and L-type structures. Firstly, an optimal criterion for sensors and actuators is proposed with an optimal objective function. Secondly, each order natural frequency and modal strain are calculated and substituted into the optimal objective function. A preliminary optimal allocation is obtained using the particle swarm algorithm, based on a similar optimization method and the combination of the vibration stress and strain distribution at the lower modal frequencies. Finally, the optimal locations are given. An experimental platform was established, and the experimental results indirectly verified the feasibility and effectiveness of the proposed method.
Zhang, Ruili; Wang, Yulei; He, Yang; Xiao, Jianyuan; Liu, Jian; Qin, Hong; Tang, Yifa
2018-02-01
The relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. The numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term integration. Therefore, explicit symplectic algorithms are much more preferable than non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. We then give explicit symplectic algorithms based on generating functions of orders 2 and 3 for the relativistic dynamics of a charged particle. The methodology is not new; it has previously been applied to the non-relativistic dynamics of charged particles. However, the algorithm for relativistic dynamics is of considerable significance in practical simulations, such as the secular simulation of runaway electrons in tokamaks.
Buaria, D.; Yeung, P. K.
2017-12-01
A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations of turbulent fluid flow. The work is motivated by the desire to obtain Lagrangian information necessary for the study of turbulent dispersion at the largest problem sizes feasible on current and next-generation multi-petaflop supercomputers. A large population of fluid particles is distributed among parallel processes dynamically, based on instantaneous particle positions, such that all of the interpolation information needed for each particle is available either locally on its host process or on neighboring processes holding adjacent sub-domains of the velocity field. With cubic splines as the preferred interpolation method, the new algorithm is designed to minimize the need for communication by transferring between adjacent processes only those spline coefficients determined to be necessary for specific particles. This transfer is implemented very efficiently as a one-sided communication, using Co-Array Fortran (CAF) features which facilitate small data movements between different local partitions of a large global array. The cost of monitoring the transfer of particle properties between adjacent processes for particles migrating across sub-domain boundaries is found to be small. Detailed benchmarks are obtained on the Cray petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign. For operations on the particles in an 8192^3 simulation (0.55 trillion grid points) on 262,144 Cray XE6 cores, the new algorithm is found to be orders of magnitude faster than a prior algorithm in which each particle is tracked by the same parallel process at all times. This large speedup reduces the additional cost of tracking of order 300 million particles to just over 50% of the cost of computing the Eulerian velocity field at this scale. Improving support of PGAS models on
MISR Dark Water aerosol retrievals: operational algorithm sensitivity to particle non-sphericity
Directory of Open Access Journals (Sweden)
O. V. Kalashnikova
2013-08-01
The aim of this study is to theoretically investigate the sensitivity of the Multi-angle Imaging SpectroRadiometer (MISR) operational (version 22) Dark Water retrieval algorithm to aerosol non-sphericity over the global oceans under actual observing conditions, accounting for current algorithm assumptions. Non-spherical (dust) aerosol models, which were introduced in version 16 of the MISR aerosol product, improved the quality and coverage of retrievals in dusty regions. Due to the sensitivity of the retrieval to the presence of non-spherical aerosols, the MISR aerosol product has been successfully used to track the location and evolution of mineral dust plumes from the Sahara across the Atlantic, for example. However, the MISR global non-spherical aerosol optical depth (AOD) fraction product has been found to have several climatological artifacts superimposed on valid detections of mineral dust, including high non-spherical fraction in the Southern Ocean and seasonally variable bands of high non-sphericity. In this paper we introduce a formal approach to examine the ability of the operational MISR Dark Water algorithm to distinguish among various spherical and non-spherical particles as a function of the variable MISR viewing geometry. We demonstrate the following under the criteria currently implemented: (1) Dark Water retrieval sensitivity to particle non-sphericity decreases for AOD below about 0.1, primarily due to an unnecessarily large lower bound imposed on the uncertainty in MISR observations at low light levels, and improves when this lower bound is removed; (2) Dark Water retrievals are able to distinguish between the spherical and non-spherical particles currently used for all MISR viewing geometries when the AOD exceeds 0.1; (3) the sensitivity of the MISR retrievals to aerosol non-sphericity varies in a complex way that depends on the sampling of the scattering phase function and the contribution from multiple scattering; and (4) non
Pashaei, Elnaz; Pashaei, Elham; Aydin, Nizamettin
2018-04-14
In cancer classification, gene selection is an important data preprocessing technique, but it is a difficult task due to the large search space. Accordingly, the objective of this study is to develop a hybrid meta-heuristic model combining the Binary Black Hole Algorithm (BBHA) and Binary Particle Swarm Optimization (BPSO (4-2)) that emphasizes gene selection. In this model, the BBHA is embedded in the BPSO (4-2) algorithm to improve its exploration and exploitation and thereby its overall performance. The model is combined with the Random Forest Recursive Feature Elimination (RF-RFE) pre-filtering technique. The classifiers evaluated in the proposed framework are Sparse Partial Least Squares Discriminant Analysis (SPLSDA), k-nearest neighbor, and naive Bayes. The performance of the proposed method was evaluated on two benchmark and three clinical microarray datasets. The experimental results and statistical analysis confirm the better performance of the BPSO (4-2)-BBHA compared with the BBHA, the BPSO (4-2) and several state-of-the-art methods in terms of avoiding local minima, convergence rate, accuracy, and number of selected genes. The results also show that the BPSO (4-2)-BBHA model can successfully identify known biologically and statistically significant genes from the clinical datasets. Copyright © 2018 Elsevier Inc. All rights reserved.
A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications
Directory of Open Access Journals (Sweden)
Yudong Zhang
2015-01-01
Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995, and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we survey applications of PSO in the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
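The canonical inertia-weight PSO that this survey covers can be sketched in a few lines: each particle's velocity is updated from an inertia term, a cognitive pull towards its personal best, and a social pull towards the swarm's global best. Parameter values below (w = 0.7, c1 = c2 = 1.5) are common illustrative choices, not prescriptions from the survey.

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical (inertia-weight) PSO: minimize f over the box `bounds`,
    given as a list of (lo, hi) pairs, one per dimension."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros((n_particles, dim))                   # velocities
    pbest = x.copy()                                   # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()             # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Example: minimize the sphere function in 5 dimensions.
best_x, best_f = pso(lambda z: float(np.sum(z**2)), [(-5, 5)] * 5)
print(best_f)
```

Most of the variants the survey catalogues (topologies, hybridizations, binary and multiobjective extensions) modify exactly this velocity-update line or the choice of which neighbours contribute the social term.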
International Nuclear Information System (INIS)
Hoisie, A.; Lubeck, O.; Wasserman, H.
1998-01-01
The authors develop a model for the parallel performance of algorithms that consist of concurrent, two-dimensional wavefronts implemented in a message-passing environment. The model, based on a LogGP machine parameterization, combines the separate contributions of computation and communication wavefronts. They validate the model on three important supercomputer systems, on up to 500 processors. They use data from a deterministic particle transport application taken from the ASCI workload, although the model is general to any wavefront algorithm implemented on a 2-D processor domain. They also use the validated model to make estimates of the performance and scalability of wavefront algorithms on 100-TFLOPS computer systems expected to be in existence within the next decade as part of the ASCI program and elsewhere. In this context, the authors analyze two problem sizes. Their model shows that on the largest such problem (1 billion cells), inter-processor communication performance is not the bottleneck; single-node efficiency is the dominant factor.
Directory of Open Access Journals (Sweden)
Meiping Wang
2016-01-01
We developed an effective intelligent model to predict the dynamic heat supply of a heat source. A hybrid forecasting method was proposed based on a support vector regression (SVR) model optimized by the particle swarm optimization (PSO) algorithm. Due to the interaction of meteorological conditions and the heating parameters of the heating system, it is extremely difficult to forecast dynamic heat supply. Firstly, the correlations among heat supply and related influencing factors in the heating system were analyzed through correlation analysis based on statistical theory. Then, the SVR model was employed to forecast dynamic heat supply. In the model, the input variables were selected based on the correlation analysis, and three crucial parameters, namely the penalty factor, the gamma of the RBF kernel, and the insensitive loss function, were optimized by the PSO algorithm. The optimized SVR model was compared with the basic SVR, the genetic-algorithm-optimized SVR (GA-SVR), and an artificial neural network (ANN) on six groups of experimental data from two heat sources. The results of the correlation coefficient analysis revealed the relationship between the influencing factors and the forecasted heat supply and determined the input variables. The performance of the PSO-SVR model is superior to those of the other three models. The PSO-SVR method is statistically robust and can be applied to practical heating systems.
Annavarapu, Chandra Sekhara Rao; Dara, Suresh; Banka, Haider
2016-01-01
Cancer investigations in microarray data play a major role in cancer analysis and treatment. Cancer microarray data consist of complex gene expression patterns of cancer. In this article, a Multi-Objective Binary Particle Swarm Optimization (MOBPSO) algorithm is proposed for analyzing cancer gene expression data. Due to its high dimensionality, a fast heuristic-based pre-processing technique is employed to remove some of the crude domain features from the initial feature set. Since these pre-processed and reduced features are still high dimensional, the proposed MOBPSO algorithm is used for finding further feature subsets. The objective functions are suitably modeled by optimizing two conflicting objectives, i.e., the cardinality of the feature subsets and the discriminative capability of the selected subsets. As these two objective functions are conflicting in nature, they are well suited to multi-objective modeling. The experiments are carried out on benchmark gene expression datasets, i.e., the Colon, Lymphoma and Leukaemia datasets available in the literature. The performance of the selected feature subsets is measured by their classification accuracy and validated using 10-fold cross-validation. A detailed comparative study is also made to show the superiority or competitiveness of the proposed algorithm. PMID:27822174
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
Prognostics 101: A tutorial for particle filter-based prognostics algorithm using Matlab
International Nuclear Information System (INIS)
An, Dawn; Choi, Joo-Ho; Kim, Nam Ho
2013-01-01
This paper presents a Matlab-based tutorial for model-based prognostics, which combines a physical model with observed data to identify model parameters, from which the remaining useful life (RUL) can be predicted. Among many model-based prognostics algorithms, the particle filter is used in this tutorial for parameter estimation of a damage or degradation model. The tutorial is presented using a Matlab script with 62 lines, including detailed explanations. As examples, a battery degradation model and a crack growth model are used to explain the updating process of model parameters, damage progression, and RUL prediction. In order to illustrate the results, the RUL at an arbitrary cycle is predicted in the form of a distribution, along with the median and 90% prediction interval. This tutorial will be helpful for beginners in prognostics to understand and use the prognostics method, and we hope it provides a standard of particle filter based prognostics. -- Highlights: ► Matlab-based tutorial for model-based prognostics is presented. ► A battery degradation model and a crack growth model are used as examples. ► The RUL at an arbitrary cycle is predicted using the particle filter
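The general flavour of such a particle-filter prognostic can be sketched as follows. This is a hypothetical exponential capacity-fade model with an assumed failure threshold, written in Python for illustration — it is not the paper's 62-line Matlab script, and all numbers are invented: particles for the unknown fade rate are reweighted by a Gaussian likelihood at each measurement, resampled, and then propagated to the threshold to give an RUL distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical capacity-fade model C(t) = C0 * exp(-b t) with unknown
# rate b; synthetic noisy measurements stand in for observed data.
b_true, C0, sigma = 0.003, 1.0, 0.005
cycles = np.arange(0, 50, 5)
data = C0 * np.exp(-b_true * cycles) + rng.normal(0, sigma, len(cycles))

n = 5000
b = rng.uniform(0.0, 0.01, n)              # prior particles for the rate b

for t, z in zip(cycles, data):
    pred = C0 * np.exp(-b * t)             # model prediction per particle
    w = np.exp(-0.5 * ((z - pred) / sigma) ** 2)
    w /= w.sum()                           # normalized likelihood weights
    idx = rng.choice(n, size=n, p=w)       # multinomial resampling
    b = np.clip(b[idx] + rng.normal(0, 1e-4, n), 1e-6, None)  # jitter

# RUL: cycles remaining until capacity crosses an assumed 0.8*C0 threshold.
eol = np.log(0.8) / -b                     # end-of-life cycle per particle
rul = eol - cycles[-1]
print(np.median(rul), np.percentile(rul, [5, 95]))
```

Because each surviving particle carries its own degradation trajectory, the RUL comes out directly as a distribution, from which the median and a 90% prediction interval can be read off as in the tutorial.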
Optimization of the Infrastructure of Reinforced Concrete Reservoirs by a Particle Swarm Algorithm
Directory of Open Access Journals (Sweden)
Kia Saeed
2015-03-01
Optimization techniques may be effective in finding the best modeling and shapes for reinforced concrete reservoirs (RCRs) to improve their durability and mechanical behavior, particularly for avoiding or reducing the bending moments in these structures. RCRs are among the major structures used for storing fluids for drinking water networks. Usually, these structures have fixed shapes which are designed and calculated based on input discharges, the conditions of the structure's topology, and geotechnical locations with various combinations of static and dynamic loads. In this research, the elements of the reservoir walls are first typed according to the performance analyzed; then the range of the membrane, based on the thickness and the minimum and maximum cross sections of the bar used, is determined in each element. This is done by considering the variable constraints, which are estimated from the maximum stress capacity. In the next phase, based on the reservoir analysis and using the algorithm of the PARIS connector, the related information is combined with the code for the PSO algorithm, i.e., an algorithm for a swarming search, to determine the optimum thickness of the cross sections for the reservoir membrane's elements and the optimum cross section of the bar used. Based on very complex mathematical linear models for the correct embedding and angles related to a chain of peripheral strengthening membranes, which optimize the vibration of the structure, a mutual relation is selected between the modeling software and the code for a particle swarm optimization algorithm. Finally, the comparative weight of the concrete reservoir optimized by the peripheral strengthening membrane is analyzed using common methods. This analysis shows a 19% decrease in the bar's weight, a 20% decrease in the concrete's weight, and a minimum 13% saving in construction costs according to the items of a checklist for a concrete reservoir of 10,000 m³.
A set of particle locating algorithms not requiring face belonging to cell connectivity data
Sani, M.; Saidi, M. S.
2009-10-01
Existing efficient directed particle locating (host determination) algorithms rely on the face-belonging-to-cell relationship (F2C) to find the next cell on the search path and the cell in which the target is located. Recently, finite volume methods have been devised which do not need F2C, so existing search algorithms are not directly applicable (unless F2C is included). F2C is a major memory burden in grid description; if the memory benefit of these finite volume methods is desirable, new search algorithms must be devised. In this work two new algorithms (line of sight and closest cell) are proposed which do not need F2C. They are based on the structure of the sparse coefficient matrix involved (stored, for example, in the compressed row storage, CRS, format) to determine the next cell. Since F2C is not available, testing a cell for the presence of the target is not possible; therefore, the proposed methods may wrongly mark a nearby cell as the host in some rare cases. The issue of the importance of finding the correct host cell (not wrongly hitting its neighbor) is addressed. Quantitative measures are introduced to assess the efficiency of the methods, and a comparison is made for typical grid types used in computational fluid dynamics. In comparison, the closest cell method, having a lower computational cost than the family of line-of-sight methods and the existing efficient maximum dot product methods, gives very good performance with tolerable and harmless wrong hits. If more accuracy is needed, the method of approximate line of sight then closest cell (LS-A-CC) is recommended.
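The closest-cell idea can be sketched as follows — an illustrative reconstruction, not the authors' implementation. Cell adjacency is read from the CRS structure of the coefficient matrix (the off-diagonal column indices of a cell's row are its neighbours), and the walk repeatedly moves to the candidate cell whose centroid is nearest the target, stopping at a local minimum. The CRS arrays below encode a toy 1-D chain of five unit cells.

```python
import numpy as np

def closest_cell_search(start, target, centroids, indptr, indices):
    """Closest-cell particle locating: walk the cell adjacency graph
    encoded by the CRS (compressed row storage) coefficient-matrix
    structure, moving to the neighbour whose centroid is nearest the
    target point, and stop when no neighbour improves on the current
    cell.  As noted in the abstract, this may rarely return a nearby
    cell rather than the true host, since no containment test exists
    without F2C data."""
    cell = start
    while True:
        # Neighbours of `cell` = column indices stored in its CRS row
        # (the off-diagonal entries of the FV coefficient matrix).
        nbrs = indices[indptr[cell]:indptr[cell + 1]]
        cand = np.append(nbrs, cell)
        dists = np.linalg.norm(centroids[cand] - target, axis=1)
        best = cand[np.argmin(dists)]
        if best == cell:           # local minimum: declare host cell
            return cell
        cell = best

# Toy 1-D chain of 5 unit cells: cell i spans [i, i+1], neighbours i-1, i+1.
centroids = np.array([[0.5], [1.5], [2.5], [3.5], [4.5]])
indptr = np.array([0, 1, 3, 5, 7, 8])
indices = np.array([1, 0, 2, 1, 3, 2, 4, 3])
host = closest_cell_search(0, np.array([3.2]), centroids, indptr, indices)
print(host)  # cell 3, which indeed contains x = 3.2
```

On a convex, well-shaped grid this greedy descent on centroid distance reaches the host cell in a number of hops proportional to the start-target separation, which is why its cost undercuts the line-of-sight family.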
Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems
International Nuclear Information System (INIS)
Xiao, Jianyuan; Liu, Jian; He, Yang; Zhang, Ruili; Qin, Hong; Sun, Yajuan
2015-01-01
Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint http://arxiv.org/abs/arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems; high-order structure-preserving algorithms follow by combination. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large numbers of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified on two physics problems, i.e., nonlinear Landau damping and the electron Bernstein wave.
Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems
Energy Technology Data Exchange (ETDEWEB)
Xiao, Jianyuan [School of Nuclear Science and Technology and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026, China]; Qin, Hong [School of Nuclear Science and Technology and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543, USA]; Liu, Jian [School of Nuclear Science and Technology and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026, China]; He, Yang [School of Nuclear Science and Technology and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026, China]; Zhang, Ruili [School of Nuclear Science and Technology and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026, China]; Sun, Yajuan [LSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100190, China]
2015-11-01
Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems; high-order structure-preserving algorithms follow by combination. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large numbers of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified on two physics problems, i.e., nonlinear Landau damping and the electron Bernstein wave. (C) 2015 AIP Publishing LLC.
Xenakis, A M; Lind, S J; Stansby, P K; Rogers, B D
2017-03-01
Tsunamis caused by landslides may result in significant destruction of the surroundings, with both societal and industrial impact. The 1958 Lituya Bay landslide and tsunami is a recent and well-documented terrestrial landslide generating a tsunami with a run-up of 524 m. Although recent computational techniques have shown good performance in the estimation of the run-up height, they fail to capture all the physical processes, in particular the landslide-entry profile and interaction with the water. Smoothed particle hydrodynamics (SPH) is a versatile numerical technique for describing free-surface and multi-phase flows, particularly those that exhibit the highly nonlinear deformation seen in landslide-generated tsunamis. In the current work, the novel multi-phase incompressible SPH method with shifting is applied to the Lituya Bay tsunami and landslide, and is the first methodology able to reproduce realistically both the run-up and landslide-entry as documented in a benchmark experiment. This is the first paper to develop a realistic implementation of the physics that, in addition to the non-Newtonian rheology of the landslide, includes turbulence in the water phase and soil saturation. Sensitivity to the experimental initial conditions is also considered. This work demonstrates the ability of the proposed method in modelling challenging environmental multi-phase, non-Newtonian and turbulent flows.
Bonneau, Dominique; Souchet, Dominique
2014-01-01
This series provides the elements necessary for the development and validation of numerical prediction models for hydrodynamic bearings. This book describes the rheological models and the equations of lubrication. It also presents the numerical approaches used to solve these equations by finite difference, finite volume and finite element methods.
Indian Academy of Sciences (India)
Hydrodynamic Lubrication Experiment with 'Floating' Drops. Jaywant H Arakeri, K R Sreenivas. General Article, Resonance – Journal of Science Education, Volume 1, Issue 9, September 1996, pp 51-58.
Milne-Thomson, L M
2011-01-01
This classic exposition of the mathematical theory of fluid motion is applicable to both hydrodynamics and aerodynamics. Based on vector methods and notation with their natural consequence in two dimensions - the complex variable - it offers more than 600 exercises and nearly 400 diagrams. Prerequisites include a knowledge of elementary calculus. 1968 edition.
Energy Technology Data Exchange (ETDEWEB)
Bulatov, A.I.; Chernov, V.S.; Prokopov, L.I.; Proselkov, Yu.M.; Tikhonov, Yu.P.
1980-01-15
A hydrodynamic disperser is suggested which contains a housing, slit nozzles installed on a circular base and arranged opposite each other, resonators secured opposite the nozzles, and an outlet sleeve. In order to improve the effectiveness of dispersion by throttling the flow, each resonator is made in the form of a crimped plate with crimpings that decrease in height towards the nozzle.
International Nuclear Information System (INIS)
Braumann, Andreas; Kraft, Markus; Wagner, Wolfgang
2010-01-01
This paper is concerned with computational aspects of a multidimensional population balance model of a wet granulation process. Wet granulation is a manufacturing method to form composite particles, granules, from small particles and binders. A detailed numerical study of a stochastic particle algorithm for the solution of a five-dimensional population balance model for wet granulation is presented. Each particle consists of two types of solids (containing pores) and of external and internal liquid (located in the pores). Several transformations of particles are considered, including coalescence, compaction and breakage. A convergence study is performed with respect to the parameter that determines the number of numerical particles. Averaged properties of the system are computed. In addition, the ensemble is subdivided into practically relevant size classes and analysed with respect to the amount of mass and the particle porosity in each class. These results illustrate the importance of the multidimensional approach. Finally, the kinetic equation corresponding to the stochastic model is discussed.
Directory of Open Access Journals (Sweden)
Ho-Lung Hung
2008-08-01
Full Text Available A suboptimal partial transmit sequence (PTS) technique based on the particle swarm optimization (PSO) algorithm is presented to reduce both the computational complexity and the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) system. In general, the PTS technique can improve the PAPR statistics of an OFDM system. However, it requires an exhaustive search over all combinations of allowed phase weighting factors, with search complexity increasing exponentially with the number of subblocks. In this paper, we work around this potential computational intractability: the proposed PSO scheme exploits heuristics to search for the optimal combination of phase factors with low complexity. Simulation results show that the new technique can effectively reduce both the computational complexity and the PAPR.
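As a concrete illustration of the PTS mechanism described above (partition an OFDM symbol into subblocks, weight each by a phase factor, and search the factor combinations for the lowest PAPR), here is a minimal numpy sketch; the function names and the exhaustive {+1, -1} baseline are illustrative assumptions, not the authors' code:

```python
import itertools
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts_exhaustive(X, n_sub=4):
    """Exhaustive PTS over {+1, -1} phase factors: the baseline whose
    exponential cost motivates the PSO search described in the abstract."""
    N = len(X)
    parts = np.zeros((n_sub, N), complex)
    for v in range(n_sub):                      # disjoint interleaved subblocks
        parts[v, v::n_sub] = X[v::n_sub]
    time_parts = np.fft.ifft(parts, axis=1)     # IFFT of each subblock
    best = None
    for phases in itertools.product((1, -1), repeat=n_sub):
        x = np.dot(phases, time_parts)          # phase-weighted recombination
        p = papr_db(x)
        if best is None or p < best[0]:
            best = (p, phases)
    return best

rng = np.random.default_rng(1)
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=64)  # random QPSK symbol
papr0 = papr_db(np.fft.ifft(X))                 # unmodified OFDM symbol
papr_opt, phases = pts_exhaustive(X)
```

Since the all-ones phase vector is among the candidates, the searched PAPR can never exceed the unmodified one; a PSO would explore only a small fraction of the 2^V combinations instead.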
Directory of Open Access Journals (Sweden)
Lee Shu-Hong
2008-01-01
Full Text Available Abstract A suboptimal partial transmit sequence (PTS) technique based on the particle swarm optimization (PSO) algorithm is presented to reduce both the computational complexity and the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) system. In general, the PTS technique can improve the PAPR statistics of an OFDM system. However, it requires an exhaustive search over all combinations of allowed phase weighting factors, with search complexity increasing exponentially with the number of subblocks. In this paper, we work around this potential computational intractability: the proposed PSO scheme exploits heuristics to search for the optimal combination of phase factors with low complexity. Simulation results show that the new technique can effectively reduce both the computational complexity and the PAPR.
International Nuclear Information System (INIS)
Pang, X.; Rybarcyk, L.J.
2014-01-01
Particle swarm optimization (PSO) and genetic algorithms (GA) are both nature-inspired, population-based optimization methods. Compared to GA, whose history traces back to 1975, PSO is a relatively new heuristic search method, first proposed in 1995. Due to its fast convergence rate in the single-objective optimization domain, the PSO method has been extended to optimize multi-objective problems. In this paper, we introduce the PSO method and its multi-objective extension, the MOPSO, and apply it along with the MOGA (mainly the NSGA-II) to simulations of the LANSCE linac and operational set point optimizations. Our tests show that both methods can provide very similar Pareto fronts, but the MOPSO converges faster.
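For reference, the canonical single-objective PSO update underlying this work can be sketched as follows (a generic textbook version with assumed coefficient values, not the LANSCE implementation):

```python
import random

def pso(f, dim, n_particles=20, iters=100, seed=0):
    """Minimal canonical PSO minimizing f over the box [-5, 5]^dim."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

A MOPSO replaces the single global best with an external archive of non-dominated solutions, but the velocity/position update is the same.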
Directory of Open Access Journals (Sweden)
A. Muthukumar
2012-02-01
Full Text Available In general, identification and verification are done with passwords, PIN numbers, etc., which are easily cracked by others. To overcome this issue, biometrics is a unique tool for authenticating an individual. Nevertheless, unimodal biometrics suffers from noise, intra-class variations, spoof attacks, non-universality and other attacks. To avoid these attacks, multimodal biometrics, i.e. the combination of several modalities, is adopted. In a biometric authentication system, the acceptance or rejection of an entity depends on the similarity score falling above or below a threshold. This paper therefore focuses on the security of the biometric system, because compromised biometric templates cannot be revoked or reissued, and proposes a multimodal system based on an evolutionary algorithm, particle swarm optimization, that adapts to varying security environments. With these two concerns, this paper develops a design incorporating adaptability, authenticity and security.
Directory of Open Access Journals (Sweden)
Ahmad Shokuh Saljoughi
2018-01-01
Full Text Available Today, cloud computing has become popular among users in organizations and companies. Security and efficiency are the two major issues facing cloud service providers and their customers. Since cloud computing is a virtual pool of resources provided in an open environment (the Internet), cloud-based services entail security risks. Detection of intrusions and attacks by unauthorized users is one of the biggest challenges for both cloud service providers and cloud users. In the present study, artificial intelligence techniques, namely MLP neural networks and the particle swarm optimization algorithm, were used to detect intrusions and attacks. The methods were tested on the NSL-KDD and KDD-CUP datasets. The results showed improved accuracy in detecting attacks and intrusions by unauthorized users.
Directory of Open Access Journals (Sweden)
Indrajit Bhattacharya
2011-05-01
Full Text Available The present paper proposes a departmental store automation system based on Radio Frequency Identification (RFID) technology and the Particle Swarm Optimization (PSO) algorithm. The items in the departmental store, spread over different sections and multiple floors, are tagged with passive RFID tags. Each floor is divided into a number of zones depending on the types of items placed in their respective racks. Each zone is equipped with one RFID reader, which constantly monitors the items in its zone and periodically sends that information to the application. The problem of systematic periodic monitoring of the store is addressed in this application, so that the locations, distributions and demands of every item in the store can be monitored intelligently. The proposed application is successfully demonstrated on a simulated case study.
Directory of Open Access Journals (Sweden)
Jude Hemanth Duraisamy
2016-01-01
Full Text Available Image steganography is one of the ever-growing computational approaches that has found application in many fields. Frequency domain techniques are highly preferred for image steganography applications. However, there are significant drawbacks associated with these techniques. In transform-based approaches, the secret data is embedded in a random manner in the transform coefficients of the cover image. These transform coefficients may not be optimal in terms of stego image quality and embedding capacity. In this work, the application of Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) has been explored in the context of determining the optimal coefficients in these transforms. Frequency domain transforms such as the Bandelet Transform (BT) and the Finite Ridgelet Transform (FRIT) are used in combination with GA and PSO to improve the efficiency of the image steganography system.
Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.
Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane
2016-08-01
Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.
International Nuclear Information System (INIS)
Campos, Gustavo L.; Campos, Tarcísio P.R.
2017-01-01
This paper presents an optimized proposal for a circular particle accelerator for proton beam therapy (named ACPT). The methodology applies computational metaheuristics based on genetic algorithms (GA) to obtain optimized parameters for the equipment. Some fundamental concepts of the metaheuristics, developed in Matlab® software, are presented. Four parameters were considered in the proposed model of the equipment: potential difference, magnetic field, and length and radius of the resonant cavity. As a result, this article presents optimized parameters, obtained through GA-based metaheuristics, for two ACPTs: ACPT-65, intended for ocular radiation therapy, and ACPT-250, whose parameters will allow teletherapy. (author)
Energy Technology Data Exchange (ETDEWEB)
Pang, X., E-mail: xpang@lanl.gov; Rybarcyk, L.J.
2014-03-21
Particle swarm optimization (PSO) and genetic algorithms (GA) are both nature-inspired, population-based optimization methods. Compared to GA, whose history traces back to 1975, PSO is a relatively new heuristic search method, first proposed in 1995. Due to its fast convergence rate in the single-objective optimization domain, the PSO method has been extended to optimize multi-objective problems. In this paper, we introduce the PSO method and its multi-objective extension, the MOPSO, and apply it along with the MOGA (mainly the NSGA-II) to simulations of the LANSCE linac and operational set point optimizations. Our tests show that both methods can provide very similar Pareto fronts, but the MOPSO converges faster.
Energy Technology Data Exchange (ETDEWEB)
Campos, Gustavo L.; Campos, Tarcísio P.R., E-mail: gustavo.lobato@ifmg.edu.br, E-mail: tprcampos@pq.cnpq.br, E-mail: gustavo.lobato@ifmg.edu.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear
2017-07-01
This paper presents an optimized proposal for a circular particle accelerator for proton beam therapy (named ACPT). The methodology applies computational metaheuristics based on genetic algorithms (GA) to obtain optimized parameters for the equipment. Some fundamental concepts of the metaheuristics, developed in Matlab® software, are presented. Four parameters were considered in the proposed model of the equipment: potential difference, magnetic field, and length and radius of the resonant cavity. As a result, this article presents optimized parameters, obtained through GA-based metaheuristics, for two ACPTs: ACPT-65, intended for ocular radiation therapy, and ACPT-250, whose parameters will allow teletherapy. (author)
Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong
2018-03-01
With the establishment of the integrated model of relay protection and the expanding scale of power systems, the global setting and optimization of relay protection is an extremely difficult task. This paper presents an application of an improved particle swarm optimization algorithm to the global optimization of relay protection, taking inverse-time current protection as an example. Reliability, selectivity, speed of action and flexibility of the relay protection are selected as the four requirements from which the optimization targets are established, and the protection setting values of the whole system are optimized. Finally, for an actual power system, the setting values obtained by the proposed method are compared with those of the standard particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and is suitable for optimizing setting values in the relay protection of a whole power system.
International Nuclear Information System (INIS)
Jiang Chuanwen; Bompard, Etorre
2005-01-01
This paper proposes a short-term hydroelectric plant dispatch model based on the rule of maximizing the benefit. For the optimal dispatch model, which is a large-scale nonlinear planning problem with multiple constraints and variables, this paper proposes a novel self-adaptive chaotic particle swarm optimization algorithm to better solve the short-term generation scheduling of a hydro-system in a deregulated environment. Since chaotic mapping enjoys certainty, ergodicity and the stochastic property, the proposed approach introduces chaos mapping and an adaptive scaling term into the particle swarm optimization algorithm, which increases its convergence rate and resulting precision. The new method has been examined and tested on a practical hydro-system. The results are promising and show the effectiveness and robustness of the proposed approach in comparison with the traditional particle swarm optimization algorithm.
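The chaos-mapping ingredient is typically realized with a simple ergodic map such as the logistic map. A minimal sketch of how such a sequence could seed a swarm (an illustrative assumption, not the paper's exact scheme):

```python
def logistic_sequence(x0, n, mu=4.0):
    """Logistic map x_{k+1} = mu * x_k * (1 - x_k); for mu = 4 it is
    ergodic on (0, 1), which is the property chaotic PSO variants
    exploit to seed and perturb the swarm instead of uniform noise."""
    seq = []
    x = x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_init(n_particles, dim, lo, hi, x0=0.345):
    """Map one chaotic sequence into the search box [lo, hi]^dim."""
    flat = logistic_sequence(x0, n_particles * dim)
    return [[lo + (hi - lo) * flat[i * dim + d] for d in range(dim)]
            for i in range(n_particles)]
```

The same map can also drive a "chaotic search" around the current best solution, which is one common way the adaptive scaling term is combined with it.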
Wang, Li; Li, Feng; Xing, Jian
2017-10-01
In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied to the recovery of particle size distributions (PSD) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's SB function, which overcomes the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated with the ABC algorithm alone. In addition, the performance of the proposed algorithm is further tested on actual extinction measurements of standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, the proposed algorithm produces more accurate and robust inversion results while taking almost the same CPU time as the ABC algorithm alone. The superiority of the ABC and PS hybridization strategy in reaching a better balance of estimation accuracy and computational effort increases its potential as an inversion technique for reliable and efficient measurement of PSDs.
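The pattern search (PS) stage of such a hybrid is a derivative-free local refinement. A generic coordinate-wise variant, sketched here under the assumption of a simple poll-and-shrink rule rather than the authors' exact PS:

```python
def pattern_search(f, x0, step=0.5, tol=1e-6, shrink=0.5):
    """Coordinate pattern search: poll +/- step along each axis, move to
    any improving point, and halve the step when no move improves.
    This is the kind of local refinement hybridized with a global
    search such as ABC."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                trial = x[:]
                trial[d] += s
                ft = f(trial)
                if ft < fx:            # accept the first improving poll
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink             # contract the poll pattern
    return x, fx
```

In the hybrid, the global search supplies `x0` (a candidate set of distribution parameters) and the PS polishes it at low cost.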
International Nuclear Information System (INIS)
Banerjee, Amit; Abu-Mahfouz, Issam
2014-01-01
The use of evolutionary algorithms has been popular in recent years for solving the inverse problem of identifying system parameters given the chaotic response of a dynamical system. The inverse problem is reformulated as a minimization problem, and population-based optimizers such as evolutionary algorithms have been shown to be efficient solvers of it. However, to the best of our knowledge, there has been no published work that evaluates the efficacy of the two most popular evolutionary techniques – particle swarm optimization and the differential evolution algorithm – on a wide range of parameter estimation problems. In this paper, the two methods, along with their variants (for a total of seven algorithms), are applied to fifteen parameter estimation problems of varying complexity. Estimation results are analyzed using nonparametric statistical methods to identify whether an algorithm is statistically superior to others over the class of problems analyzed. Results based on parameter estimation quality suggest that there are significant differences between the algorithms, with the newer, more sophisticated algorithms performing better than their canonical versions. More importantly, significant differences were also found among the variants of the particle swarm optimizer and the best performing differential evolution algorithm.
Double-layer evolutionary algorithm for distributed optimization of particle detection on the Grid
International Nuclear Information System (INIS)
Padée, Adam; Zaremba, Krzysztof; Kurek, Krzysztof
2013-01-01
Reconstruction of particle tracks from information collected by position-sensitive detectors is an important procedure in HEP experiments. It is usually controlled by a set of numerical parameters that have to be manually optimized. This paper proposes an automatic approach to this task utilizing an evolutionary algorithm (EA) operating on both real-valued and binary representations. Because of the computational complexity of the task, a special distributed architecture of the algorithm is proposed, designed to run in a grid environment. It is a two-level hierarchical hybrid utilizing an asynchronous master-slave EA at the level of clusters and an island-model EA at the level of the grid. The technical aspects of using a production grid infrastructure are covered, including the communication protocols on both levels. The paper also deals with the problem of heterogeneity of the resources, presenting efficiency tests on a benchmark function. These tests confirm that even relatively small islands (clusters) can be beneficial to the optimization process when connected to larger ones. Finally, a real-life usage example is presented: the optimization of track reconstruction in the Large Angle Spectrometer of the NA-58 COMPASS experiment at CERN, using a sample of Monte Carlo simulated data. The overall reconstruction efficiency gain achieved by the proposed method is more than 4% compared to the manually optimized parameters.
An extension theory-based maximum power tracker using a particle swarm optimization algorithm
International Nuclear Information System (INIS)
Chao, Kuei-Hsiang
2014-01-01
Highlights: • We propose an adaptive maximum power point tracking (MPPT) approach for PV systems. • Transient and steady state performances in the tracking process are improved. • The proposed MPPT can automatically tune the tracking step size along a P–V curve. • A PSO algorithm is used to determine the weighting values of extension theory. - Abstract: The aim of this work is to present an adaptive maximum power point tracking (MPPT) approach for photovoltaic (PV) power generation systems. Integrating extension theory with the conventional perturb and observe method, a maximum power point (MPP) tracker is able to automatically tune its tracking step size by way of category recognition along a P–V characteristic curve. Accordingly, the transient and steady state performances in the tracking process are improved. Furthermore, an optimization approach based on a particle swarm optimization (PSO) algorithm is proposed to reduce the complexity of determining the weighting values. Finally, the simulated improvement in tracking performance is experimentally validated by an MPP tracker with a programmable system-on-chip (PSoC) based controller.
Directory of Open Access Journals (Sweden)
Po-Chen Cheng
2015-06-01
Full Text Available In this paper, an asymmetrical fuzzy-logic-control (FLC) based maximum power point tracking (MPPT) algorithm for photovoltaic (PV) systems is presented. Two membership function (MF) design methodologies that can improve the effectiveness of the proposed asymmetrical FLC-based MPPT method are then proposed. The first method can quickly determine the input MF setting values via the power–voltage (P–V) curve of solar cells under standard test conditions (STC). The second method uses the particle swarm optimization (PSO) technique to optimize the input MF setting values. Because the PSO approach must target and optimize a cost function, a cost function design methodology that meets the performance requirements of practical photovoltaic generation systems (PGSs) is also proposed. According to the simulated and experimental results, the proposed asymmetrical FLC-based MPPT method has the highest fitness value; therefore, it can successfully address the tracking speed/tracking accuracy dilemma of the traditional perturb and observe (P&O) and symmetrical FLC-based MPPT algorithms. Compared to the conventional FLC-based MPPT method, the optimal asymmetrical FLC-based MPPT improves the transient time and the MPPT tracking accuracy by 25.8% and 0.98% under STC, respectively.
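Both MPPT entries above benchmark against the perturb and observe (P&O) method; a bare-bones sketch of that baseline follows, with a toy P-V curve and step size as illustrative assumptions:

```python
def perturb_and_observe(pv_power, v0, dv=0.5, steps=60):
    """Classic P&O hill climbing: keep perturbing the operating voltage
    in the same direction while the measured power rises, and reverse
    the direction when it falls."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(steps):
        v_new = v + direction * dv
        p_new = pv_power(v_new)
        if p_new < p:
            direction = -direction     # overshot the peak: reverse
        v, p = v_new, p_new
    return v, p

def toy_curve(v):
    """Hypothetical single-peak P-V curve with its MPP near v = 17."""
    return max(0.0, 60.0 - 0.25 * (v - 17.0) ** 2)

v_mpp, p_mpp = perturb_and_observe(toy_curve, v0=10.0)
```

The fixed step `dv` is exactly the speed/accuracy dilemma mentioned above: a large step tracks fast but oscillates around the MPP, a small step settles accurately but slowly, which is what the adaptive FLC-based step sizing addresses.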
He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming
2014-12-01
The Ant Colony Optimization algorithm based on the probability density function (PDF-ACO) is applied to estimate bimodal aerosol particle size distributions (PSD). The direct problem is solved by the modified Anomalous Diffraction Approximation (ADA, an approximation for optically large and soft spheres, i.e., χ⪢1 and |m-1|⪡1) and the Beer-Lambert law. First, a popular bimodal aerosol PSD and three other bimodal PSDs are retrieved in the dependent model by the multi-wavelength extinction technique. All the results reveal that the PDF-ACO algorithm can be used as an effective technique to investigate bimodal PSDs. Then, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the bimodal PSDs under the independent model. Finally, the J-SB and M-β functions are applied to recover actual aerosol PSD measurements over Beijing and Shanghai obtained from the Aerosol Robotic Network (AERONET). The numerical simulation and experimental results demonstrate that these two general functions, especially the J-SB function, can serve as versatile distribution functions to retrieve bimodal aerosol PSDs when no a priori information about the PSD is available.
Detection of Carious Lesions and Restorations Using Particle Swarm Optimization Algorithm
Directory of Open Access Journals (Sweden)
Mohammad Naebi
2016-01-01
Full Text Available Background/Purpose. No intelligent detection of dental conditions has been carried out to date; dentists simply look at images and detect the position of caries or restorations based on their experience. Using new technologies, detection and repair can be implemented intelligently. In this paper, we introduce an intelligent detection method using particle swarm optimization (PSO) and our mathematical formulation. The method was applied to special 2D images; by developing it further, detection could be extended to both 2D and 3D images. Materials and Methods. In recent years, high-efficiency optimization algorithms have made it possible to process images intelligently in many applications, especially for the detection of dental caries and restorations without human intervention. In the present work, we explain the PSO algorithm together with our detection formula for the detection of dental caries and restorations. Image processing helped us implement the method, using pictures taken by digital radiography systems. Results and Conclusion. We implemented a mathematical formula as the fitness function of the PSO. Our results show that this method can detect dental caries and restorations in digital radiography pictures with good convergence. The error rate of this method was 8%, so it can be used for the detection of dental caries and restorations; with suitable parameters, the error rate could possibly be reduced below 0.5%.
PSOVina: The hybrid particle swarm optimization algorithm for protein-ligand docking.
Ng, Marcus C K; Fong, Simon; Siu, Shirley W I
2015-06-01
Protein-ligand docking is an essential step in the modern drug discovery process. The challenge is to accurately predict and efficiently optimize the position and orientation of ligands in the binding pocket of a target protein. In this paper, we present a new method called PSOVina, which combines the particle swarm optimization (PSO) algorithm with the efficient Broyden-Fletcher-Goldfarb-Shanno (BFGS) local search method adopted in AutoDock Vina to tackle the conformational search problem in docking. Using a diverse data set of 201 protein-ligand complexes from the PDBbind database and a full set of ligands and decoys for four representative targets from the directory of useful decoys (DUD) virtual screening data set, we assessed the docking performance of PSOVina in comparison to the original Vina program. Our results showed that PSOVina achieves a remarkable execution time reduction of 51-60% without compromising the prediction accuracies in the docking and virtual screening experiments. This improvement in time efficiency makes PSOVina a better choice of docking tool in large-scale protein-ligand docking applications. Our work lays the foundation for the future development of swarm-based algorithms in molecular docking programs. PSOVina is freely available to non-commercial users at http://cbbio.cis.umac.mo.
Diyana Rosli, Anis; Adenan, Nur Sabrina; Hashim, Hadzli; Ezan Abdullah, Noor; Sulaiman, Suhaimi; Baharudin, Rohaiza
2018-03-01
This paper presents findings on the application of the Particle Swarm Optimization (PSO) algorithm to optimizing an artificial neural network that categorizes citrus suhuensis as ripe or unripe. The algorithm adjusts the network connection weights, adapting their values during training for the best results at the output. Initially, the skin of the citrus suhuensis fruit is measured using an optically non-destructive method via a spectrometer. The spectrometer transmits VIS (visible spectrum) photonic light radiation to the surface (skin) of the sample. The light reflected from the sample's surface is received and measured by the same spectrometer as a reflectance percentage over the VIS range. These measured data are used to train and test the best optimized ANN model. Accuracy is assessed using receiver operating characteristic (ROC) performance. The outcomes of this investigation show that the accuracy achieved by the optimized ANN is 70.5%, with a sensitivity and specificity of 60.1% and 80.0%, respectively.
Wang, Ji; Zhang, Ru; Yan, Yuting; Dong, Xiaoqiang; Li, Jun Ming
2017-05-01
Hazardous gas leaks in the atmosphere can cause significant economic losses in addition to environmental hazards, such as fires and explosions. A three-stage hazardous gas leak source localization method was developed that uses movable and stationary gas concentration sensors. The method calculates a preliminary source inversion with a modified genetic algorithm (MGA) and has the potential to crossover with eliminated individuals from the population, following the selection of the best candidate. The method then determines a search zone using Markov Chain Monte Carlo (MCMC) sampling, utilizing a partial evaluation strategy. The leak source is then accurately localized using a modified guaranteed convergence particle swarm optimization algorithm with several bad-performing individuals, following selection of the most successful individual with dynamic updates. The first two stages are based on data collected by motionless sensors, and the last stage is based on data from movable robots with sensors. The measurement error adaptability and the effect of the leak source location were analyzed. The test results showed that this three-stage localization process can localize a leak source within 1.0 m of the source for different leak source locations, with measurement error standard deviation smaller than 2.0.
Directory of Open Access Journals (Sweden)
A. A. Heidari
2017-09-01
Full Text Available Yin-Yang-pair optimization (YYPO) is one of the latest metaheuristic algorithms (MA), proposed in 2015 and inspired by the philosophy of balance between conflicting concepts. The particle swarm optimizer (PSO) is one of the first population-based MAs, inspired by the social behavior of birds. Unlike PSO, YYPO is not a nature-inspired optimizer. It has low complexity, starts with only two initial positions, and can produce more points with regard to the dimension of the target problem. Due to the unique advantages of these methodologies, and to mitigate the premature convergence and local optima (LO) stagnation problems of PSO, in this work a continuous hybrid strategy based on the behaviors of PSO and YYPO is proposed to attain suboptimal solutions of uncapacitated warehouse location (UWL) problems. This efficient hierarchical PSO-based optimizer (PSOYPO) can improve the effectiveness of PSO on spatial optimization tasks such as the family of UWL problems. The performance of the proposed PSOYPO is verified on several UWL benchmark cases, which have been used in several works to evaluate the efficacy of different MAs. The PSOYPO is then compared to the standard PSO, genetic algorithm (GA), harmony search (HS), modified HS (OBCHS), and evolutionary simulated annealing (ESA). The experimental results demonstrate that the PSOYPO shows better or competitive efficacy compared to PSO and the other MAs.
A Comparison of Selected Modifications of the Particle Swarm Optimization Algorithm
Directory of Open Access Journals (Sweden)
Michala Jakubcová
2014-01-01
Full Text Available We compare 27 modifications of the original particle swarm optimization (PSO algorithm. The analysis evaluated nine basic PSO types, which differ according to the swarm evolution as controlled by various inertia weights and constriction factor. Each of the basic PSO modifications was analyzed using three different distributed strategies. In the first strategy, the entire swarm population is considered as one unit (OC-PSO, the second strategy periodically partitions the population into equally large complexes according to the particle’s functional value (SCE-PSO, and the final strategy periodically splits the swarm population into complexes using random permutation (SCERand-PSO. All variants are tested using 11 benchmark functions that were prepared for the special session on real-parameter optimization of CEC 2005. It was found that the best modification of the PSO algorithm is a variant with adaptive inertia weight. The best distribution strategy is SCE-PSO, which gives better results than do OC-PSO and SCERand-PSO for seven functions. The sphere function showed no significant difference between SCE-PSO and SCERand-PSO. It follows that a shuffling mechanism improves the optimization process.
Morfa, Carlos Recarey; Farias, Márcio Muniz de; Morales, Irvin Pablo Pérez; Navarra, Eugenio Oñate Ibañez de; Valera, Roberto Roselló
2018-04-01
The influence of the microstructural heterogeneities is an important topic in the study of materials. In the context of computational mechanics, it is therefore necessary to generate virtual materials that are statistically equivalent to the microstructure under study, and to connect that geometrical description to the different numerical methods. Herein, the authors present a procedure to model continuous solid polycrystalline materials, such as rocks and metals, preserving their representative statistical grain size distribution. The first phase of the procedure consists of segmenting an image of the material into adjacent polyhedral grains representing the individual crystals. This segmentation allows estimating the grain size distribution, which is used as the input for an advancing front sphere packing algorithm. Finally, Laguerre diagrams are calculated from the obtained sphere packings. The centers of the spheres give the centers of the Laguerre cells, and their radii determine the cells' weights. The cell sizes in the obtained Laguerre diagrams have a distribution similar to that of the grains obtained from the image segmentation. That is why those diagrams are a convenient model of the original crystalline structure. The above-outlined procedure has been used to model real polycrystalline metallic materials. The main difference with previously existing methods lies in the use of a better particle packing algorithm.
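The final step above, turning a sphere packing into Laguerre cells, reduces to minimizing the power distance |x − c|² − r² over the packed spheres. A minimal sketch (the centers and radii are toy values, not data from the paper):

```python
import numpy as np

def laguerre_assign(points, centers, radii):
    """Assign each point to the Laguerre (power-diagram) cell that minimizes
    the power distance |x - c|^2 - r^2; the squared radius acts as the weight."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return (d2 - radii[None, :] ** 2).argmin(axis=1)

# Two packed spheres on the x-axis; the larger one claims a larger cell, so
# the point at x = 1.4 (past the Euclidean midpoint at x = 1.0) still
# belongs to the big sphere's cell.
centers = np.array([[0.0, 0.0], [2.0, 0.0]])
radii = np.array([1.5, 0.5])
pts = np.array([[1.0, 0.0], [1.4, 0.0], [1.8, 0.0]])
cells = laguerre_assign(pts, centers, radii)  # -> [0, 0, 1]
```

This weight-driven shift of cell boundaries is what lets the Laguerre diagram reproduce the grain size distribution of the packing, where an ordinary Voronoi diagram would not.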
Feature Selection of Network Intrusion Data using Genetic Algorithm and Particle Swarm Optimization
Directory of Open Access Journals (Sweden)
Iwan Syarif
2016-12-01
Full Text Available This paper describes the advantages of using evolutionary algorithms (EA) for feature selection on a network intrusion dataset. Most current network intrusion detection systems (NIDS) are unable to detect intrusions in real time because of the high-dimensional data produced during daily operation. Extracting knowledge from huge data such as intrusion data requires new approaches; the more complex the datasets, the higher the computation time and the harder they are to interpret and analyze. This paper investigates the performance of feature selection algorithms on network intrusion data, using genetic algorithms (GA) and particle swarm optimization (PSO). When applied to network intrusion datasets, both GA and PSO significantly reduce the number of features. Our experiments show that GA reduces the number of attributes from 41 to 15, while PSO reduces it from 41 to 9. Using k-nearest neighbour (k-NN) as a classifier, the GA-reduced dataset, which retains 37% of the original attributes, improves accuracy from 99.28% to 99.70% and executes 4.8 times faster than the original dataset. With the same classifier, the PSO-reduced dataset, which retains 22% of the original attributes, has the fastest execution time (7.2 times faster than the original dataset), although its accuracy is slightly reduced, by 0.02%, from 99.28% to 99.26%. Overall, both GA and PSO are good feature selection techniques: they significantly reduce the number of features while maintaining, and sometimes improving, classification accuracy, as well as reducing computation time.
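A common way to adapt PSO to feature selection, as in studies like this one, is binary PSO, where a sigmoid of the velocity gives the probability that each feature bit is set. A minimal sketch with a toy fitness standing in for classifier accuracy (the penalty weight and feature counts are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def binary_pso_select(score, n_feat, n_particles=20, iters=60, seed=1):
    """Binary PSO for feature selection: velocities pass through a sigmoid
    to give per-bit selection probabilities; `score` maps a 0/1 mask to a
    fitness to maximize."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, (n_particles, n_feat))
    v = np.zeros((n_particles, n_feat))
    pbest = x.copy()
    pval = np.array([score(m) for m in x])
    g = pbest[pval.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(v.shape), rng.random(v.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        prob = 1.0 / (1.0 + np.exp(-v))                # sigmoid transfer
        x = (rng.random(v.shape) < prob).astype(int)
        vals = np.array([score(m) for m in x])
        up = vals > pval
        pbest[up], pval[up] = x[up], vals[up]
        g = pbest[pval.argmax()].copy()
    return g

# Toy stand-in for classifier accuracy: features 0-4 are informative and
# each selected feature pays a small penalty, so compact subsets win.
informative = np.zeros(15)
informative[:5] = 1.0
fitness = lambda m: float((m * informative).sum() - 0.2 * m.sum())
mask = binary_pso_select(fitness, n_feat=15)
```

In the paper's setting the toy fitness would be replaced by k-NN cross-validation accuracy on the masked attributes.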
Swarm of bees and particles algorithms in the problem of gradual failure reliability assurance
Directory of Open Access Journals (Sweden)
M. F. Anop
2015-01-01
Full Text Available The probability-statistical framework of reliability theory uses models based on the analysis of chance failures. These models are not functional and do not relate reliability characteristics to the object's performance. At the same time, a significant share of technical-system failures are gradual failures caused by degradation of the internal parameters of the system under the influence of various external factors. The paper shows how to provide the required level of reliability at the design stage using a functional model of a technical object, and describes a method for solving this problem under incomplete initial information, when there is no information about the patterns of technological deviations and parameter degradation, and the system model considered is a "black box". To this end, we formulate the optimal parametric synthesis problem: choosing the nominal values of the system parameters so as to satisfy the requirements for its operation while accounting for the unavoidable deviations of the parameters from their design values during operation. As the optimization criterion we propose, rather than statistical values, a deterministic geometric criterion, the "reliability reserve": the minimum distance, measured along the coordinate directions, from the nominal parameter value to the boundary of the acceptability region. The paper presents the results of applying heuristic swarm-intelligence methods to the formulated optimization problem. The efficiency of the particle swarm and bee swarm algorithms is compared with an undirected random search algorithm on a number of test optimal parametric synthesis problems in three respects: reliability, convergence rate and operating time. The study suggests that the use of a bee swarm method for ensuring the gradual-failure reliability of technical systems is preferred because of the greater flexibility of the
International Nuclear Information System (INIS)
Gholami, Ali; Honarvar, Farhang; Moghaddam, Hamid Abrishami
2017-01-01
This paper presents an accurate and easy-to-implement algorithm for estimating the parameters of the asymmetric Gaussian chirplet model (AGCM) used for modeling echoes measured in ultrasonic nondestructive testing (NDT) of materials. The proposed algorithm is a combination of the particle swarm optimization (PSO) and Levenberg–Marquardt (LM) algorithms. PSO does not need an accurate initial guess and quickly converges to a reasonable output, while LM needs a good initial guess in order to provide an accurate output. In the combined algorithm, PSO is run first to provide a rough estimate, and this result is then passed to the LM algorithm for more accurate parameter estimation. To apply the algorithm to signals with multiple echoes, the space alternating generalized expectation maximization (SAGE) algorithm is used. To examine the performance of the proposed combined algorithm, it is applied to a number of simulated echoes with various signal-to-noise ratios, as well as to a number of experimental ultrasonic signals. The results corroborate the accuracy, robustness and reliability of the proposed combined algorithm. (paper)
Directory of Open Access Journals (Sweden)
Patel G.C.M.
2016-09-01
Full Text Available The near-net-shape manufacturing ability of the squeeze casting process requires setting the process variable combinations at their optimal levels to obtain both aesthetic appearance and internal soundness of the cast parts. Aesthetic appearance and internal soundness concern surface roughness and tensile strength, which can put the part directly into service without costly secondary manufacturing processes (such as polishing, shot blasting, plating, heat treatment, etc.). It is difficult to determine the levels of the process variables (that is, pressure duration, squeeze pressure, pouring temperature and die temperature) for extreme values of the responses (that is, surface roughness, yield strength and ultimate tensile strength) due to conflicting requirements. In the present manuscript, three population-based search and optimization methods, namely genetic algorithm (GA), particle swarm optimization (PSO) and multi-objective particle swarm optimization based on crowding distance (MOPSO-CD), have been used to optimize multiple outputs simultaneously. Further, validation tests have been conducted for the optimal casting conditions suggested by GA, PSO and MOPSO-CD. The results showed that PSO outperformed GA with regard to computation time.
Modeling of pedestrian evacuation based on the particle swarm optimization algorithm
Zheng, Yaochen; Chen, Jianqiao; Wei, Junhong; Guo, Xiwei
2012-09-01
By applying the evolutionary algorithm of Particle Swarm Optimization (PSO), we have developed a new pedestrian evacuation model. In the new model, we first introduce the local pedestrian density, defined as the number of pedestrians distributed in a certain area divided by that area. Both the maximum velocity and the size of a particle (pedestrian) are taken to be functions of the local density. An attempt to account for impact consequences between pedestrians is also made by introducing an injury threshold into the model. The updating rule of the model possesses heterogeneous spatial and temporal characteristics. Numerical examples demonstrate that the model is capable of simulating the typical evacuation features captured by CA (Cellular Automata) based models. In contrast to CA-based simulations, in which the velocity (via step size) of a pedestrian in each time step is a constant value limited to several directions, the new model is more flexible in describing pedestrians' velocities, since under the new updating rule they are not limited to discrete values and directions.
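The local-density concept above can be sketched directly; the speed-density relation below is an illustrative linear law, not the exact function used in the model:

```python
import numpy as np

def local_density(positions, center, radius):
    """Local pedestrian density: number of pedestrians within `radius` of
    `center`, divided by the area of that circle."""
    d = np.linalg.norm(positions - center, axis=1)
    return int((d <= radius).sum()) / (np.pi * radius ** 2)

def max_speed(rho, v_free=1.5, rho_max=5.0):
    """Illustrative linear speed-density law (an assumption, not the paper's
    exact function): free speed v_free decays to zero at jam density rho_max."""
    return max(0.0, v_free * (1.0 - rho / rho_max))

peds = np.array([[0.0, 0.0], [0.5, 0.0], [3.0, 3.0]])
rho = local_density(peds, np.array([0.0, 0.0]), radius=1.0)  # 2 / pi
```

A pedestrian's maximum step in each update would then be capped by `max_speed(rho)` evaluated at its own position.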
Recent development of hydrodynamic modeling
Hirano, Tetsufumi
2014-09-01
In this talk, I give an overview of recent developments in hydrodynamic modeling of high-energy nuclear collisions. First, I briefly discuss the current status of hydrodynamic modeling by showing results from the integrated dynamical approach, in which Monte Carlo calculation of initial conditions, quark-gluon fluid dynamics and hadronic cascading are combined. In particular, I focus on rescattering effects of strange hadrons on final observables. Next, I highlight three topics in the recent development of hydrodynamic modeling: (1) medium response to jet propagation in di-jet asymmetric events, (2) causal hydrodynamic fluctuations and their application to Bjorken expansion, and (3) the chiral magnetic wave from anomalous hydrodynamic simulations. (1) Recent CMS data suggest the existence of a QGP response to the propagation of jets. To investigate this phenomenon, we solve the hydrodynamic equations with a source term describing the deposition of energy and momentum from jets. We find that a large number of low-momentum particles are emitted at large angles from the jet axis, which gives a novel interpretation of the CMS data. (2) It has been claimed that matter created even in p-p/p-A collisions may behave like a fluid. However, fluctuation effects would be important in such a small system. We formulate relativistic fluctuating hydrodynamics and apply it to Bjorken expansion. We find that the final multiplicity fluctuates around the mean value even if the initial condition is fixed; this effect is relatively important in peripheral A-A collisions and in p-p/p-A collisions. (3) Anomalous transport in the quark-gluon fluid is predicted when an extremely strong magnetic field is applied. We investigate this possibility by solving the anomalous hydrodynamic equations and find that a difference in the elliptic flow parameter between positive and negative particles appears due to the chiral magnetic wave. Finally, I provide a personal perspective on hydrodynamic modeling of high-energy nuclear collisions.
International Nuclear Information System (INIS)
Sadeghzadeh, H.; Ehyaei, M.A.; Rosen, M.A.
2015-01-01
Highlights: • Pressure drop and heat transfer coefficient are calculated by the Delaware method. • The Delaware method is more accurate than the Kern method. • The results of the PSO are better than the results of the GA. • The optimization yields the best and most economic design. - Abstract: The use of genetic and particle swarm algorithms in the design of techno-economically optimum shell-and-tube heat exchangers is demonstrated. A cost function (including costs of the heat exchanger based on surface area and power consumption to overcome pressure drops) is the objective function to be minimized. Selected decision variables include tube diameter, central baffle spacing and shell diameter. The Delaware method is used to calculate the heat transfer coefficient and the shell-side pressure drop. The accuracy and efficiency of the suggested algorithm and the Delaware method are investigated. A comparison of the results obtained by the two algorithms shows that the particle swarm optimization method yields results superior to those of the genetic algorithm. By comparing these results with those from various references employing the Kern method and other algorithms, it is shown that the Delaware method, accompanied by genetic and particle swarm algorithms, achieves more optimal results, based on assessments of two case studies.
Simplified particle swarm optimization algorithm - doi: 10.4025/actascitechnol.v34i1.9679
Directory of Open Access Journals (Sweden)
Ricardo Paupitz Barbosa dos Santos
2011-11-01
Full Text Available Real ants and bees are social insects that exhibit remarkable characteristics which can serve as inspiration for solving complex optimization problems; this field of study is known as swarm intelligence. This paper presents a new algorithm that can be understood as a simplified version of the well-known Particle Swarm Optimization (PSO). The proposed algorithm saves some computational effort while achieving considerable performance in the optimization of nonlinear functions. We employed four nonlinear benchmark functions, the Sphere, Schwefel, Schaffer and Ackley functions, to test and validate the new proposal. Simulation results illustrate the efficiency of the proposed algorithm.
International Nuclear Information System (INIS)
He Yaoyao; Zhou Jianzhong; Xiang Xiuqiao; Chen Heng; Qin Hui
2009-01-01
The goal of this paper is to present a novel chaotic particle swarm optimization (CPSO) algorithm and to compare the efficiency of three one-dimensional chaotic maps within a symmetrical region for long-term scheduling of a cascaded hydroelectric system. The introduced chaotic maps improve the global search capability of the CPSO algorithm. Moreover, a piecewise linear interpolation function is employed to transform all constraints into restrictions on the upriver water level in order to maximize the objective function. Numerical results and comparisons demonstrate the effectiveness and speed of the different algorithms on a practical hydro system.
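A typical one-dimensional chaotic map used in CPSO variants is the logistic map, whose orbit at μ = 4 is chaotic on (0, 1) and can replace uniform random draws when initializing or perturbing particles (a generic sketch; the three maps compared in the paper are not reproduced here):

```python
def logistic_map(x0=0.3, n=5, mu=4.0):
    """Logistic map x_{k+1} = mu * x_k * (1 - x_k); at mu = 4 the orbit is
    chaotic on (0, 1), so it can replace uniform random draws in CPSO."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

seq = logistic_map()  # first terms ~0.84, ~0.5376, ...
```

The ergodic, non-repeating orbit helps particles escape local optima that trap plain uniform sampling.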
Thieberger, P.; Gassner, D.; Hulsart, R.; Michnoff, R.; Miller, T.; Minty, M.; Sorrell, Z.; Bartnik, A.
2018-04-01
A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.
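For orientation, the ideal-cylinder forward model and the conventional first-order difference-over-sum inversion (the textbook baseline, not the paper's analytically exact algorithm) can be sketched as:

```python
import math

def pue_signals(x, y, R=1.0):
    """Image-charge signals on four point-like pickups (right, top, left,
    bottom) of an ideal cylindrical BPM of radius R, pencil beam at (x, y)."""
    r2 = x * x + y * y
    def s(theta):
        return (R * R - r2) / (
            R * R + r2 - 2.0 * R * (x * math.cos(theta) + y * math.sin(theta)))
    return s(0.0), s(math.pi / 2), s(math.pi), s(3 * math.pi / 2)

def diff_over_sum(right, top, left, bottom, R=1.0):
    """Conventional first-order estimate; accurate only near the BPM axis."""
    return ((R / 2) * (right - left) / (right + left),
            (R / 2) * (top - bottom) / (top + bottom))

x_est, y_est = diff_over_sum(*pue_signals(0.1, 0.0))  # x_est ~ 0.099
```

The few-percent nonlinearity visible even at r = 0.1 R is exactly the kind of deviation the paper's exact algorithm and its empirical correction terms are designed to remove.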
International Nuclear Information System (INIS)
He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming
2014-01-01
The Ant Colony Optimization algorithm based on the probability density function (PDF-ACO) is applied to estimate the bimodal aerosol particle size distribution (PSD). The direct problem is solved by the modified Anomalous Diffraction Approximation (ADA, an approximation for optically large and soft spheres, i.e., χ ≫ 1 and |m−1| ≪ 1) and the Beer–Lambert law. First, a popular bimodal aerosol PSD and three other bimodal PSDs are retrieved in the dependent model by the multi-wavelength extinction technique. All the results reveal that the PDF-ACO algorithm can be used as an effective technique to investigate the bimodal PSD. Then, the Johnson's S_B (J-S_B) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the bimodal PSDs under the independent model. Finally, the J-S_B and M-β functions are applied to recover actual measured aerosol PSDs over Beijing and Shanghai obtained from the Aerosol Robotic Network (AERONET). The numerical simulations and experimental results demonstrate that these two general functions, especially the J-S_B function, can be used as versatile distribution functions to retrieve the bimodal aerosol PSD when no a priori information about the PSD is available. - Highlights: • Bimodal PSDs are retrieved accurately by ACO based on the probability density function. • The J-S_B and M-β functions can be used as versatile functions to recover bimodal PSDs. • Bimodal aerosol PSDs can be estimated more reasonably by the J-S_B function.
Kordilla, J.; Bresinsky, L. T.
2017-12-01
The physical mechanisms that govern preferential flow dynamics in unsaturated fractured rock formations are complex and not well understood. Fracture intersections may act as integrators of unsaturated flow, leading to temporal delay, intermittent flow and partitioning dynamics. In this work, a three-dimensional pairwise-force smoothed particle hydrodynamics (PF-SPH) model is applied to simulate gravity-driven multiphase flow at synthetic fracture intersections. SPH, as a meshless Lagrangian method, is particularly suitable for modeling deformable interfaces, such as the three-phase contact dynamics of droplets, rivulets and free-surface films. The static and dynamic contact angle can be regarded as the most important parameter of gravity-driven free-surface flow. In SPH, surface tension and adhesion emerge naturally from the implemented pairwise fluid-fluid (s_ff) and solid-fluid (s_sf) interaction forces. The model was calibrated to a contact angle of 65°, which corresponds to the wetting properties of water on poly(methyl methacrylate). The accuracy of the SPH simulations was validated against an analytical solution for Poiseuille flow between two parallel plates and against laboratory experiments. Using the SPH model, the complex flow-mode transitions from droplet to rivulet flow observed in an experimental study were reproduced, and laboratory dimensionless scaling experiments with water droplets were successfully replicated. Finally, SPH simulations were used to investigate the partitioning dynamics of single droplets entering synthetic horizontal fractures with various apertures (Δd_f = 0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0 mm) and offsets (Δd_off = -1.5, -1.0, -0.5, 0, 1.0, 2.0, 3.0 mm). Fluid masses were measured in the domains R1, R2 and R3. The ideally smooth surfaces and the SPH-inherent advantage of particle tracking allow the recognition of small-scale partitioning mechanisms and their importance for bulk flow
Hydrodynamic interactions in active colloidal crystal microrheology
Weeber, R; Harting, JDR Jens
2012-01-01
In dense colloids it is commonly assumed that hydrodynamic interactions do not play a role. However, a sound theoretical quantification is often missing. We present computer simulations that are motivated by experiments in which a large colloidal particle is dragged through a colloidal crystal. To quantify the influence of long-ranged hydrodynamics, we model the setup by conventional Langevin dynamics simulations and by an improved scheme with limited hydrodynamic interactions. This scheme signif...
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
Directory of Open Access Journals (Sweden)
Hongjun Li
2012-01-01
Full Text Available This paper proposes a modified particle swarm optimization algorithm coupled with the finite element limit equilibrium method (FELEM) for finding the minimum factor of safety and the location of the associated noncircular critical failure surfaces in various geotechnical practices. During the search process, stress compatibility constraints, together with geometrical and kinematical compatibility constraints, are first established based on the features of the slope geometry and stress distribution, in order to prevent unrealistic slip surfaces. Furthermore, in the FELEM, based on rigorous theoretical analysis and derivation, the factor of safety is formulated on the basis of strength-reserving theory rather than overloading theory. Consequently, as shown by comparison with the limit equilibrium method (LEM) and the shear strength reduction method (SSRM) through several numerical examples, the FELEM in conjunction with the improved search strategy proves to be an effective and efficient approach for routine analysis and design in geotechnical practice with a high level of confidence.
Particle swarm optimization algorithm for optimizing assignment of blood in blood banking system.
Olusanya, Micheal O; Arasomwan, Martins A; Adewumi, Aderemi O
2015-01-01
This paper reports the performance of particle swarm optimization (PSO) for the assignment of blood to meet patients' blood transfusion requests. While the drive for blood donation lingers, there is a need for effective and efficient management of the blood available in blood banking systems. Moreover, the inherent danger of transfusing wrong blood types to patients, unnecessary importation of blood units from external sources, and wastage of blood products due to non-usage necessitate the development of mathematical models and techniques for effective handling of blood distribution among the available blood types, in order to minimize wastage and importation from external sources. This gives rise to the blood assignment problem (BAP) recently introduced in the literature. We propose queue and multiple-knapsack models with a PSO-based solution to address this challenge. Simulation is based on sets of randomly generated data that mimic the real-world population distribution of blood types. Results obtained show the efficiency of the proposed algorithm for the BAP, with no blood units wasted and very low importation, where necessary, from outside the blood bank. The result can therefore serve as a benchmark and basis for decision support tools for real-life deployment.
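A simple greedy baseline for the compatibility-constrained assignment can be sketched as follows (standard ABO/Rh rules; this is a toy stand-in for the paper's queue-and-knapsack PSO model, with illustrative stock and request data):

```python
# Standard ABO/Rh compatibility: donor types each recipient type may receive.
COMPATIBLE = {
    "O-": ["O-"], "O+": ["O-", "O+"],
    "A-": ["O-", "A-"], "A+": ["O-", "O+", "A-", "A+"],
    "B-": ["O-", "B-"], "B+": ["O-", "O+", "B-", "B+"],
    "AB-": ["O-", "A-", "B-", "AB-"],
    "AB+": ["O-", "O+", "A-", "A+", "B-", "B+", "AB-", "AB+"],
}

def assign_blood(stock, requests):
    """Greedy baseline: serve each request from exact-type stock first and
    only then fall back to other compatible types, so universal O- units
    are not consumed unnecessarily early."""
    served = {}
    for patient, (rtype, units) in requests.items():
        got = 0
        # stable sort: the exact type sorts first, other compatible types
        # keep their listed order
        for donor in sorted(COMPATIBLE[rtype], key=lambda d: d != rtype):
            take = min(units - got, stock.get(donor, 0))
            stock[donor] = stock.get(donor, 0) - take
            got += take
            if got == units:
                break
        served[patient] = got
    return served

stock = {"O-": 2, "A+": 3, "O+": 1}
served = assign_blood(stock, {"p1": ("A+", 2), "p2": ("O+", 2), "p3": ("AB+", 1)})
```

The PSO-based model in the paper optimizes globally over all requests at once, whereas this greedy pass can still strand later requests; it serves only as a reference point.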
Proportional–Integral–Derivative (PID) Controller Tuning using Particle Swarm Optimization Algorithm
Directory of Open Access Journals (Sweden)
J. S. Bassi
2012-08-01
Full Text Available Proportional-integral-derivative (PID) controllers are the most popular controllers used in industry because of their remarkable effectiveness, simplicity of implementation and broad applicability. However, manual tuning of these controllers is time-consuming and tedious, and generally leads to poor performance. This tuning, which is application-specific, also deteriorates with time as a result of plant parameter changes. This paper presents an artificial intelligence (AI) method, the particle swarm optimization (PSO) algorithm, for tuning the optimal PID controller parameters for industrial processes. This approach has superior features, including easy implementation, stable convergence characteristics and good computational efficiency, compared with conventional methods. The Ziegler-Nichols tuning method was applied, and the results were compared with the PSO-based PID for optimal control. Simulation results show that the PSO-optimized PID controller provides improved closed-loop performance over the Ziegler-Nichols tuned PID controller. Compared with the heuristic Ziegler-Nichols tuning method, the proposed method was more efficient in improving the step response characteristics: reducing the steady-state error, rise time, settling time and maximum overshoot in the speed control of a DC motor.
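The closed-loop behavior being tuned can be sketched with a discrete PID law driving a first-order plant (the plant model and the gains below are illustrative assumptions, not the DC-motor model or PSO-tuned gains from the paper):

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000, tau=0.5):
    """Discrete PID law u = kp*e + ki*integral(e) + kd*de/dt driving a
    first-order plant dy/dt = (u - y) / tau (explicit Euler integration)."""
    y, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += (u - y) / tau * dt
    return y

y_final = simulate_pid(kp=2.0, ki=1.0, kd=0.05)  # settles near the setpoint
```

A PSO tuner would wrap this simulation in a cost such as the integral of absolute error over the step response and search (kp, ki, kd) space for the minimum.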
Inverse estimation of the particle size distribution using the Fruit Fly Optimization Algorithm
International Nuclear Information System (INIS)
He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming
2015-01-01
The Fruit Fly Optimization Algorithm (FOA) is applied to retrieve the particle size distribution (PSD) for the first time. The direct problems are solved by the modified Anomalous Diffraction Approximation (ADA) and the Lambert–Beer law. Firstly, three commonly used monomodal PSDs, i.e. the Rosin–Rammler (R–R) distribution, the normal (N–N) distribution and the logarithmic normal (L–N) distribution, as well as the bimodal Rosin–Rammler distribution function, are estimated in the dependent model. All the results show that the FOA can be used as an effective technique to estimate PSDs under the dependent model. Then, an optimal wavelength selection technique is proposed to improve the retrieval of the bimodal PSD. Finally, combined with two general functions, i.e. the Johnson's S_B (J-S_B) function and the modified beta (M-β) function, the FOA is employed to recover actual measured aerosol PSDs over Beijing and Hangzhou obtained from the Aerosol Robotic Network (AERONET). All the numerical simulations and experimental results demonstrate that the FOA can retrieve actual measured PSDs, and more reliable and accurate results can be obtained if the J-S_B function is employed.
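The basic FOA iteration is short enough to sketch in full: flies scatter randomly around the swarm location, the "smell concentration" s = 1/distance-to-origin is taken as the candidate solution, and the swarm flies to the best-smelling position found (a generic sketch; the paper's PSD-retrieval objective is replaced here by a toy quadratic):

```python
import random

def foa_minimize(f, iters=200, pop=20, seed=3):
    """Minimal Fruit Fly Optimization sketch: flies scatter around the swarm
    location, the smell concentration s = 1 / distance-to-origin is the
    candidate solution, and the swarm relocates to the best spot found."""
    random.seed(seed)
    ax, ay = random.random(), random.random()          # swarm location
    best_s, best_val = None, float("inf")
    for _ in range(iters):
        gen = []
        for _ in range(pop):
            x = ax + random.uniform(-1.0, 1.0)         # random flight
            y = ay + random.uniform(-1.0, 1.0)
            s = 1.0 / (x * x + y * y) ** 0.5           # smell concentration
            gen.append((f(s), x, y, s))
        val, x, y, s = min(gen)                        # best fly this round
        if val < best_val:
            best_val, best_s, ax, ay = val, s, x, y    # swarm relocates
    return best_s, best_val

# Toy objective standing in for the paper's retrieval residual.
s, val = foa_minimize(lambda s: (s - 2.0) ** 2)
```

In the retrieval setting, f would be the misfit between measured and ADA-predicted spectral extinction for the PSD parameters encoded by s.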
International Nuclear Information System (INIS)
Luz, Andre Ferreira da
2009-01-01
In this work, a particle swarm optimization (PSO) algorithm is developed for preventive maintenance optimization. The proposed methodology, which allows flexible intervals between maintenance interventions instead of the usual fixed periods, enables better adaptation of the schedule to the failure rates of aging components. However, because of this flexibility, planning preventive maintenance becomes a difficult task. Motivated by the fact that PSO has proved very competitive compared with other optimization tools, this work investigates the use of PSO as an alternative optimization tool. Considering that PSO works in a real, continuous space, using it for discrete optimization, in which a schedule may comprise a variable number of maintenance interventions, is a challenge; the PSO model developed in this work overcomes this difficulty. The proposed PSO searches for the best maintenance policy and considers several aspects, such as the probability of needing repair (corrective maintenance), the cost of such repairs, typical outage times, the costs of preventive maintenance, the impact of maintenance on the reliability of the system as a whole, and the probability of imperfect maintenance. To evaluate the proposed methodology, we investigate an electro-mechanical system consisting of three pumps and four valves, the High Pressure Injection System (HPIS) of a PWR. Results show that PSO is quite efficient in finding the optimum preventive maintenance policies for the HPIS. (author)
Directory of Open Access Journals (Sweden)
D. Ramyachitra
2015-09-01
Full Text Available Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty thus lies in the high dimensionality of the data combined with the small sample size. This research work addresses the problem by classifying the resulting dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), together with the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.
Directory of Open Access Journals (Sweden)
Li Ran
2017-01-01
Full Text Available Optimal allocation of generalized power sources in a distribution network is researched, and a simple index of voltage stability is put forward. Considering investment and operation benefits, voltage stability, and the pollution emissions of generalized power sources in the distribution network, a multi-objective optimization planning model is established, and a multi-objective particle swarm optimization algorithm is proposed to solve it. In order to improve the global search ability, the strategies of fast non-dominated sorting, elitism and crowding distance are adopted in this algorithm. Finally, the model and algorithm are tested on the IEEE-33 node system to find the best configuration of generalized power sources. The computed results show that with reasonable access of generalized power to the active distribution network, the investment benefit and the voltage stability of the system are improved, and that the proposed algorithm has better global search capability.
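Of the strategies mentioned (fast non-dominated sorting, elitism, crowding distance), crowding distance is simple to show in isolation. A minimal NSGA-II-style sketch over a list of objective vectors, unrelated to the paper's planning model:

```python
def crowding_distance(front):
    """Crowding distance of each point in a non-dominated front.
    Boundary points get infinity; interior points sum the normalized
    gaps between their neighbors along every objective."""
    n = len(front)
    if n == 0:
        return []
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / (hi - lo)
    return dist

dist = crowding_distance([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
```

In a multi-objective PSO this distance is typically used to prefer leaders from sparse regions of the archive.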
Ramyachitra, D; Sofia, M; Manikandan, P
2015-09-01
Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty thus lies in the high dimensionality of the data combined with the small sample size. This research work addresses the problem by classifying the resulting dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN), Interval Valued Classification (IVC) and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.
Directory of Open Access Journals (Sweden)
Zhigang Lian
2010-01-01
Full Text Available The job-shop scheduling problem (JSSP) is a branch of production scheduling and is among the hardest combinatorial optimization problems. Many different approaches have been applied to JSSP, but some instances of even moderate size cannot be solved with guaranteed optimality. The original particle swarm optimization algorithm (OPSOA) is generally used to solve continuous problems, and rarely to optimize discrete problems such as JSSP; our investigation finds that it tends to get stuck in near-optimal solutions, especially for middle- and large-size problems. A combined local and global search particle swarm optimization algorithm (LGSCPSOA) is used to solve JSSP, in which the particle-updating mechanism benefits from the searching experience of the particle itself, the best of all particles in the swarm, and the best of the particles in its neighborhood population. A new coding method used in LGSCPSOA guarantees that all sequences are feasible solutions. Computational experiments on three representative instances show that the LGSCPSOA is efficacious for minimizing makespan in JSSP.
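The third attractor in the LGSCPSOA update (the best particle in the neighborhood population) can be sketched independently of the full algorithm. A ring ("lbest") topology is assumed here for illustration; the paper may define the neighborhood differently:

```python
def ring_neighborhood_best(pbest_f, i, k=1):
    """Return the index of the best (lowest) personal-best fitness among
    particle i and its k neighbors on each side of a ring topology."""
    n = len(pbest_f)
    candidates = [(i + d) % n for d in range(-k, k + 1)]
    return min(candidates, key=lambda j: pbest_f[j])

# Particle 0's ring neighbors (indices 3, 0, 1) include the swarm's best, index 1.
best_idx = ring_neighborhood_best([5.0, 1.0, 4.0, 3.0], i=0)
```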
Na, Dong-Yeop; Omelchenko, Yuri A.; Moon, Haksu; Borges, Ben-Hur V.; Teixeira, Fernando L.
2017-10-01
We present a charge-conservative electromagnetic particle-in-cell (EM-PIC) algorithm optimized for the analysis of vacuum electronic devices (VEDs) with cylindrical symmetry (axisymmetry). We exploit the axisymmetry present in the device geometry, fields, and sources to reduce the dimensionality of the problem from 3D to 2D. Further, we employ 'transformation optics' principles to map the original problem in polar coordinates with metric tensor diag (1 ,ρ2 , 1) to an equivalent problem on a Cartesian metric tensor diag (1 , 1 , 1) with an effective (artificial) inhomogeneous medium introduced. The resulting problem in the meridian (ρz) plane is discretized using an unstructured 2D mesh considering TEϕ-polarized fields. Electromagnetic field and source (node-based charges and edge-based currents) variables are expressed as differential forms of various degrees, and discretized using Whitney forms. Using leapfrog time integration, we obtain a mixed E - B finite-element time-domain scheme for the full-discrete Maxwell's equations. We achieve a local and explicit time update for the field equations by employing the sparse approximate inverse (SPAI) algorithm. Interpolating field values to particles' positions for solving Newton-Lorentz equations of motion is also done via Whitney forms. Particles are advanced using the Boris algorithm with relativistic correction. A recently introduced charge-conserving scatter scheme tailored for 2D unstructured grids is used in the scatter step. The algorithm is validated considering cylindrical cavity and space-charge-limited cylindrical diode problems. We use the algorithm to investigate the physical performance of VEDs designed to harness particle bunching effects arising from the coherent (resonance) Cerenkov electron beam interactions within micro-machined slow wave structures.
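The particle advance mentioned above is the Boris algorithm. Its core step, half an electric kick, a magnetic rotation, and another half kick, can be sketched as follows (non-relativistic form, unlike the relativistically corrected push used in the paper):

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_push(v, E, B, q_m, dt):
    """Advance velocity v by one step: half E-kick, exact B-rotation, half E-kick."""
    v_minus = [v[i] + 0.5 * q_m * dt * E[i] for i in range(3)]
    t = [0.5 * q_m * dt * B[i] for i in range(3)]
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    v_prime = [v_minus[i] + c for i, c in enumerate(cross(v_minus, t))]
    v_plus = [v_minus[i] + c for i, c in enumerate(cross(v_prime, s))]
    return [v_plus[i] + 0.5 * q_m * dt * E[i] for i in range(3)]

# With E = 0 the rotation conserves kinetic energy exactly.
v1 = boris_push([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], q_m=1.0, dt=0.1)
speed = math.sqrt(sum(c * c for c in v1))
```

The exact norm preservation under pure rotation is the property that makes the Boris push a standard choice in PIC codes.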
Kulper, Sloan A; Fang, Christian X; Ren, Xiaodan; Guo, Margaret; Sze, Kam Y; Leung, Frankie K L; Lu, William W
2018-04-01
A novel computational model of implant migration in trabecular bone was developed using smoothed-particle hydrodynamics (SPH), and an initial validation was performed via correlation with experimental data. Six fresh-frozen human cadaveric specimens measuring 10 × 10 × 20 mm were extracted from the proximal femurs of female donors (mean age of 82 years, range 75-90, BV/TV ratios between 17.88% and 30.49%). These specimens were then penetrated under axial loading to a depth of 10 mm with 5 mm diameter cylindrical indenters bearing either flat or sharp/conical tip designs similar to blunt and self-tapping cancellous screws, assigned in a random manner. SPH models were constructed based on microCT scans (17.33 µm) of the cadaveric specimens. Two initial specimens were used for calibration of material model parameters. The remaining four specimens were then simulated in silico using identical material model parameters. Peak forces varied between 92.0 and 365.0 N in the experiments, and 115.5-352.2 N in the SPH simulations. The concordance correlation coefficient between experimental and simulated pairs was 0.888, with a 95%CI of 0.8832-0.8926, a Pearson ρ (precision) value of 0.9396, and a bias correction factor Cb (accuracy) value of 0.945. Patterns of bone compaction were qualitatively similar; both experimental and simulated flat-tipped indenters produced dense regions of compacted material adjacent to the advancing face of the indenter, while sharp-tipped indenters deposited compacted material along their peripheries. Simulations based on SPH can produce accurate predictions of trabecular bone penetration that are useful for characterizing implant performance under high-strain loading conditions. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 36:1114-1123, 2018.
Hydrodynamic interactions in active colloidal crystal microrheology.
Weeber, R; Harting, J
2012-11-01
In dense colloids it is commonly assumed that hydrodynamic interactions do not play a role; however, a sound theoretical quantification is often missing. We present computer simulations motivated by experiments in which a large colloidal particle is dragged through a colloidal crystal. To quantify the influence of long-ranged hydrodynamics, we model the setup by conventional Langevin dynamics simulations and by an improved scheme with limited hydrodynamic interactions. This scheme significantly improves our results and allows us to show that hydrodynamics strongly impacts the development of defects, the crystal regeneration, as well as the jamming behavior.
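For reference, the "conventional Langevin dynamics" baseline can be sketched with an overdamped one-dimensional update; this is a generic textbook form with a made-up harmonic force, not the authors' simulation code:

```python
import math
import random

def langevin_step(x, force, gamma, kT, dt, rng=random):
    """Overdamped Langevin update: drift by F/gamma plus a thermal kick
    of variance 2*kT*dt/gamma (fluctuation-dissipation relation)."""
    sigma = math.sqrt(2.0 * kT * dt / gamma)
    return [xi + force(xi) * dt / gamma + sigma * rng.gauss(0.0, 1.0) for xi in x]

# Zero-temperature limit: a particle in a harmonic trap relaxes deterministically.
x = [1.0]
for _ in range(10):
    x = langevin_step(x, lambda xi: -xi, gamma=1.0, kT=0.0, dt=0.1)
```

Hydrodynamic schemes add solvent-mediated coupling between particles, which this bare Langevin update deliberately lacks.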
Indian Academy of Sciences (India)
will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...
International Nuclear Information System (INIS)
Colgate, S.A.
1981-01-01
The explosion of a star as a supernova occurs at the end of its evolution, when the nuclear fuel in its core is almost, or completely, consumed. The star may explode due to a small residual thermonuclear detonation (type I SN), or it may collapse (type I and type II SN), leaving a neutron star remnant. The type I progenitor is thought to be an old accreting white dwarf of 1.4 M☉ with a close companion star; a type II SN is thought to arise from a massive young star of 6 to 10 M☉. The mechanism of explosion is still a challenge to our ability to model the most extreme conditions of matter and hydrodynamics occurring in the universe. 39 references
Renilson, Martin
2015-01-01
This book adopts a practical approach and presents recent research together with applications in real submarine design and operation. Topics covered include hydrostatics, manoeuvring, resistance and propulsion of submarines. The author briefly reviews basic concepts in ship hydrodynamics and goes on to show how they are applied to submarines, including a look at the use of physical model experiments. The issues associated with manoeuvring in both the horizontal and vertical planes are explained, and readers will discover suggested criteria for stability, along with rudder and hydroplane effectiveness. The book includes a section on appendage design which includes information on sail design, different arrangements of bow planes and alternative stern configurations. Other themes explored in this book include hydro-acoustic performance, the components of resistance and the effect of hull shape. Readers will value the author’s applied experience as well as the empirical expressions that are presented for use a...
Guyon, Etienne; Petit, Luc; Mitescu, Catalin D
2015-01-01
This new edition is an enriched version of the textbook of fluid dynamics published more than 10 years ago. It retains the same physically oriented pedagogical perspective. This book emphasizes, as in the first edition, experimental inductive approaches and relies on the study of the mechanisms at play and on dimensional analysis rather than more formal approaches found in many classical textbooks in the field. The need for a completely new version also originated from the increase, over the last few decades, of the cross-overs between the mechanical and physical approaches, as is visible in international meetings and joint projects. Hydrodynamics is more widely linked today to other fields of experimental sciences: materials, environment, life sciences and earth sciences, as well as engineering sciences.
International Nuclear Information System (INIS)
Wilkins, M.L.
1979-01-01
Various aspects of hydrodynamics and elastic-plastic flow are introduced for the purpose of defining hydrodynamic terms and explaining some of the important hydrodynamic concepts. The first part covers hydrodynamic theory, discussing fundamental hydrodynamic equations, discontinuities, and shock, detonation, and elastic-plastic waves. The second part deals with applications of hydrodynamic theory to material equations of state, spall, Taylor instabilities, and detonation pressure measurements
Wang, Yan; Huang, Song; Ji, Zhicheng
2017-07-01
This paper presents a hybrid particle swarm optimization and gravitational search algorithm based on a hybrid mutation strategy (HGSAPSO-M) to optimize economic dispatch (ED) including distributed generations (DGs) under market-based energy pricing. A daily ED model was formulated, and a hybrid mutation strategy comprising two mutation operators, chaotic mutation and Gaussian mutation, was adopted in HGSAPSO-M. The proposed algorithm was tested on the IEEE-33 bus system, and the results show that the approach is effective for this problem.
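The two mutation operators can be sketched as below. The logistic map driving the chaotic mutation, the 50/50 operator choice, and the bound clipping are illustrative assumptions, not details taken from the paper:

```python
import random

def hybrid_mutate(x, lo, hi, z, p_chaotic=0.5, sigma=0.1, rng=random):
    """Mutate each gene with either a chaotic (logistic-map) or a Gaussian step."""
    out = []
    for xi in x:
        z = 4.0 * z * (1.0 - z)                # logistic map in its chaotic regime
        if rng.random() < p_chaotic:
            xi = lo + z * (hi - lo)            # chaotic relocation within bounds
        else:
            xi = xi + rng.gauss(0.0, sigma) * (hi - lo)  # local Gaussian step
        out.append(min(hi, max(lo, xi)))
    return out, z

random.seed(3)
child, z = hybrid_mutate([0.2, 0.8, 0.5], lo=0.0, hi=1.0, z=0.37)
```

The chaotic branch supplies broad, non-repeating exploration while the Gaussian branch refines locally, which is the usual rationale for mixing the two.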
Song, Lei; Zhang, Bo
2017-07-01
Nowadays, the grid faces many more challenges caused by wind power and the connection of electric vehicles (EVs). Based on the potential of coordinated dispatch, a model of wind-EV coordinated dispatch was developed, and a bi-level particle swarm optimization algorithm for solving the model is proposed in this paper. Application of this algorithm to a 10-unit test system showed that coordinated dispatch can benefit the power system in the following ways: (1) reducing operating costs; (2) improving the utilization of wind power; (3) stabilizing the peak-valley difference.
DEFF Research Database (Denmark)
Zhuang, Guisheng; Jensen, Thomas G.; Kutter, Jörg P.
2012-01-01
constrained in the out‐of‐plane direction into a narrow sheet, and then focused in‐plane into a small core region, obtaining on‐chip three‐dimensional (3D) hydrodynamic focusing. All the microoptical elements, including waveguides, microlens, and fiber‐to‐waveguide couplers, and the in‐plane focusing channels...... are fabricated in one SU‐8 layer by standard photolithography. The channels for out‐of‐plane focusing are made in a polydimethylsiloxane (PDMS) layer by a single cast using a SU‐8 master. Numerical and experimental results indicate that the device can realize 3D hydrodynamic focusing reliably over a wide range...
A geometric viewpoint on generalized hydrodynamics
Directory of Open Access Journals (Sweden)
Benjamin Doyon
2018-01-01
Full Text Available Generalized hydrodynamics (GHD is a large-scale theory for the dynamics of many-body integrable systems. It consists of an infinite set of conservation laws for quasi-particles traveling with effective (“dressed” velocities that depend on the local state. We show that these equations can be recast into a geometric dynamical problem. They are conservation equations with state-independent quasi-particle velocities, in a space equipped with a family of metrics, parametrized by the quasi-particles' type and speed, that depend on the local state. In the classical hard rod or soliton gas picture, these metrics measure the free length of space as perceived by quasi-particles; in the quantum picture, they weigh space with the density of states available to them. Using this geometric construction, we find a general solution to the initial value problem of GHD, in terms of a set of integral equations where time appears explicitly. These integral equations are solvable by iteration and provide an extremely efficient solution algorithm for GHD.
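The closing claim, that the integral equations are solvable by iteration, refers to a fixed-point scheme. A generic sketch follows; the map `f` in the demo is a simple numerical stand-in (the Babylonian square-root map), not the GHD dressing equations themselves:

```python
def solve_by_iteration(f, x0, tol=1e-12, max_iter=1000):
    """Fixed-point iteration x_{n+1} = f(x_n), stopping when successive
    iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Stand-in contraction whose fixed point is sqrt(2).
root = solve_by_iteration(lambda x: 0.5 * (x + 2.0 / x), x0=1.0)
```

In GHD the unknown is a function (a quasi-particle density or dressed quantity) rather than a scalar, but the iterate-until-converged structure is the same.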
Villar, Xabier; Piso, Daniel; Bruguera, Javier D.
2014-02-01
This paper presents an FPGA implementation of a previously published algorithm for the reconstruction of cosmic rays' trajectories and the determination of the time of arrival and velocity of the particles. The accuracy and precision issues of the algorithm have been analyzed to propose a suitable implementation. Thus, a 32-bit fixed-point format has been used for the representation of the data values. Moreover, the dependencies among the different operations have been taken into account to obtain a highly parallel and efficient hardware implementation. The final hardware architecture requires 18 cycles to process every particle, and has been exhaustively simulated to validate all the design decisions. The architecture has been mapped over different commercial FPGAs, with a frequency of operation ranging from 300 MHz to 1.3 GHz, depending on the FPGA being used. Consequently, the number of particle trajectories processed per second is between 16 million and 72 million. The high number of particle trajectories calculated per second shows that the proposed FPGA implementation might be used also in high-rate environments such as those found in particle and nuclear physics experiments.
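The 32-bit fixed-point representation can be illustrated with a quantize/dequantize pair. The 16/16 integer/fraction split below is an assumption for illustration; the paper only states a 32-bit fixed-point format:

```python
def to_fixed(x, frac_bits=16, word_bits=32):
    """Quantize a real number to two's-complement fixed point with
    word_bits total bits and frac_bits fractional bits (saturating)."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return max(lo, min(hi, round(x * scale)))

def from_fixed(q, frac_bits=16):
    """Recover the real value represented by the fixed-point integer q."""
    return q / (1 << frac_bits)

q = to_fixed(1.5)          # 1.5 * 2^16 = 98304, exactly representable
err = abs(from_fixed(to_fixed(3.14159)) - 3.14159)
```

The round-trip error is bounded by half an LSB, i.e. about 2^-17 for this split, which is the kind of bound an accuracy analysis like the paper's would establish per operation.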
The RAGE radiation-hydrodynamic code
Energy Technology Data Exchange (ETDEWEB)
Gittings, Michael; Clover, Michael; Betlach, Thomas; Byrne, Nelson; Ranta, Dale [Science Applications International Corp. MS A-1, 10260 Campus Point Drive, San Diego, CA 92121 (United States); Weaver, Robert; Coker, Robert; Dendy, Edward; Hueckstaedt, Robert; New, Kim; Oakes, W Rob [Los Alamos National Laboratory, MS T087, PO Box 1663, Los Alamos, NM 87545 (United States); Stefan, Ryan [TaylorMade-adidas Golf, 5545 Fermi Court, Carlsbad, CA 92008-7324 (United States)], E-mail: michael.r.clover@saic.com
2008-10-01
We describe RAGE, the 'radiation adaptive grid Eulerian' radiation-hydrodynamics code, including its data structures, its parallelization strategy and performance, its hydrodynamic algorithm(s), its (gray) radiation diffusion algorithm, and some of the considerable amount of verification and validation efforts. The hydrodynamics is a basic Godunov solver, to which we have made significant improvements to increase the advection algorithm's robustness and to converge stiffnesses in the equation of state. Similarly, the radiation transport is a basic gray diffusion, but our treatment of the radiation-material coupling, wherein we converge nonlinearities in a novel manner to allow larger timesteps and more robust behavior, can be applied to any multi-group transport algorithm.
The RAGE radiation-hydrodynamic code
International Nuclear Information System (INIS)
Gittings, Michael; Clover, Michael; Betlach, Thomas; Byrne, Nelson; Ranta, Dale; Weaver, Robert; Coker, Robert; Dendy, Edward; Hueckstaedt, Robert; New, Kim; Oakes, W Rob; Stefan, Ryan
2008-01-01
We describe RAGE, the 'radiation adaptive grid Eulerian' radiation-hydrodynamics code, including its data structures, its parallelization strategy and performance, its hydrodynamic algorithm(s), its (gray) radiation diffusion algorithm, and some of the considerable amount of verification and validation efforts. The hydrodynamics is a basic Godunov solver, to which we have made significant improvements to increase the advection algorithm's robustness and to converge stiffnesses in the equation of state. Similarly, the radiation transport is a basic gray diffusion, but our treatment of the radiation-material coupling, wherein we converge nonlinearities in a novel manner to allow larger timesteps and more robust behavior, can be applied to any multi-group transport algorithm.
International Nuclear Information System (INIS)
Xu, Kai-Jiang; Pan, Xiao-Min; Li, Ren-Xian; Sheng, Xin-Qing
2017-01-01
In optical trapping applications, the optical force should be investigated within a wide range of the beam-configuration parameter space to reach the desired performance. A simple but reliable way of conducting the related investigation is to evaluate the optical forces corresponding to all possible beam configurations. Although the optical force exerted on arbitrarily shaped particles can be well predicted by the boundary element method (BEM), such an investigation is time-consuming because it involves many repetitions of an expensive computation, in which the forces are calculated from the equivalent surface currents. An algorithm is proposed to alleviate this difficulty by exploiting our previously developed skeletonization framework; it succeeds in reducing the number of repetitions. Since the number of skeleton beams is always much smaller than the number of beams in question, the computation can be very efficient. The proposed algorithm is accurate because the skeletonization is accuracy controllable. - Highlights: • A fast and accurate algorithm is proposed in terms of the boundary element method to reduce the number of repetitions of computing the optical forces from the equivalent currents. • The algorithm is accuracy controllable because the accuracy of the associated rank-revealing process is well controlled. • The acceleration rate can reach over one thousand because the number of skeleton beams can be very small. • The algorithm can be applied to other methods, e.g., FE-BI.
Adachi, Y.; Matsumoto, T.; Cohen Stuart, M.A.
2002-01-01
Effects of hydrodynamic mixing intensity on the initial-stage dynamics of bridging flocculation induced by adsorbing polyelectrolyte were analyzed as an extension of a previous report on the effect of ionic strength (J. Coll. Int. Sci. 204 (1998) 328). Mixing conditions were changed by adopting forked
Energy Technology Data Exchange (ETDEWEB)
Castellano, T.; De Palma, L.; Laneve, D.; Strippoli, V.; Cuccovilllo, A.; Prudenzano, F. [Electrical and Information Engineering Department (DEI), Polytechnic Institute of Bari, 4 Orabona Street, CAP 70125, Bari, (Italy); Dimiccoli, V.; Losito, O.; Prisco, R. [ITEL Telecomunicazioni, 39 Labriola Street, CAP 70037, Ruvo di Puglia, Bari, (Italy)
2015-07-01
A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) is written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The main aim of the code is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure, assisted via the aforesaid approach, seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)
International Nuclear Information System (INIS)
Castellano, T.; De Palma, L.; Laneve, D.; Strippoli, V.; Cuccovilllo, A.; Prudenzano, F.; Dimiccoli, V.; Losito, O.; Prisco, R.
2015-01-01
A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) is written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The main aim of the code is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure, assisted via the aforesaid approach, seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)
International Nuclear Information System (INIS)
Wang, Bo; Tai, Neng-ling; Zhai, Hai-qing; Ye, Jian; Zhu, Jia-dong; Qi, Liang-bo
2008-01-01
In this paper, a new ARMAX model based on an evolutionary algorithm and particle swarm optimization for short-term load forecasting is proposed. Auto-regressive (AR) and moving average (MA) models with exogenous variables (ARMAX) have been widely applied in the load forecasting area. Because of the nonlinear characteristics of power system loads, the forecasting function has many local optima, and traditional methods based on gradient searching may be trapped in them and lead to high error. The hybrid method based on an evolutionary algorithm and particle swarm optimization can solve this problem more efficiently than the traditional approaches: it takes advantage of an evolutionary strategy to speed up the convergence of particle swarm optimization (PSO), and applies the crossover operation of a genetic algorithm to enhance the global search ability. The new ARMAX model for short-term load forecasting has been tested on load data from the Eastern China market, and the results indicate that the proposed approach achieves good accuracy. (author)
Directory of Open Access Journals (Sweden)
Tsekeri Alexandra
2016-01-01
Full Text Available The importance of studying the vertical distribution of aerosol plumes is prominent in regional and climate studies. The new Generalized Aerosol Retrieval from Radiometer and Lidar Combined data (GARRLiC) algorithm provides this opportunity by combining active and passive ground-based remote sensing from lidar and sun photometer measurements. Here, we utilize the capabilities of GARRLiC for the characterization of Saharan dust and marine particles in the Eastern Mediterranean region during the Characterization of Aerosol mixtures of Dust And Marine origin Experiment (CHARADMExp). Two different case studies are presented: a dust-dominated case, which we characterized successfully in terms of the particle microphysical properties and their vertical distribution, and a case of two separate layers of marine and dust particles, for which the characterization proved more challenging.
Lattice Boltzmann method used to simulate particle motion in a conduit
Directory of Open Access Journals (Sweden)
Dolanský Jindřich
2017-06-01
Full Text Available A three-dimensional numerical simulation of particle motion in a pipe with a rough bed is presented. The simulation, based on the Lattice Boltzmann Method (LBM), employs the hybrid diffuse bounce-back approach to model moving boundaries. The bed of the pipe is formed by stationary spherical particles of the same size as the moving particles. Particle movements are induced by gravitational and hydrodynamic forces, and the hydrodynamic forces are evaluated with the Momentum Exchange Algorithm. The unified computational framework of the LBM makes it possible to simulate both the particle motion and the fluid flow, and to study the mutual interactions of the carrier liquid flow and the particles as well as the particle–bed and particle–particle collisions. The trajectories of simulated and experimental particles, the latter obtained with the Particle Tracking method, are compared to assess the correctness of the applied approach.
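The Momentum Exchange Algorithm used to evaluate the hydrodynamic forces sums, over all boundary links, the momentum carried by the populations that bounce back at the particle surface. A minimal sketch of that summation (link detection and the lattice details of the actual LBM solver are omitted, and the link list below is made up):

```python
def momentum_exchange_force(links):
    """Sum the momentum transferred along boundary links.
    Each link is (f_out, f_in, c): the population leaving the fluid toward
    the solid along lattice vector c and the one bounced back along -c;
    the transferred momentum per link is (f_out + f_in) * c."""
    force = [0.0, 0.0, 0.0]
    for f_out, f_in, c in links:
        for d in range(3):
            force[d] += (f_out + f_in) * c[d]
    return force

force = momentum_exchange_force([(1.0, 0.5, (1, 0, 0)),
                                 (0.2, 0.1, (0, 1, 0))])
```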
Soliton Gases and Generalized Hydrodynamics
Doyon, Benjamin; Yoshimura, Takato; Caux, Jean-Sébastien
2018-01-01
We show that the equations of generalized hydrodynamics (GHD), a hydrodynamic theory for integrable quantum systems at the Euler scale, emerge in full generality in a family of classical gases, which generalize the gas of hard rods. In this family, the particles, upon colliding, jump forward or backward by a distance that depends on their velocities, reminiscent of classical soliton scattering. This provides a "molecular dynamics" for GHD: a numerical solver which is efficient, flexible, and which applies to the presence of external force fields. GHD also describes the hydrodynamics of classical soliton gases. We identify the GHD of any quantum model with that of the gas of its solitonlike wave packets, thus providing a remarkable quantum-classical equivalence. The theory is directly applicable, for instance, to integrable quantum chains and to the Lieb-Liniger model realized in cold-atom experiments.
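The hard-rod special case can be illustrated with two rods in one dimension: at contact the rods exchange velocities, which is equivalent to each velocity tracer jumping by the rod length, the soliton-like shift the paper generalizes. A toy event-driven sketch for exactly two rods (assumes x1 < x2 initially):

```python
def evolve_two_rods(x1, v1, x2, v2, a, t_end):
    """Evolve two hard rods of length a on a line; on contact (gap = 0)
    they exchange velocities, the classical precursor of GHD dynamics."""
    t = 0.0
    while t < t_end:
        gap = (x2 - x1) - a          # free space between the rods
        rel = v1 - v2                # closing speed
        if rel > 0 and gap / rel <= t_end - t:
            tc = gap / rel           # time until contact
            x1 += v1 * tc
            x2 += v2 * tc
            t += tc
            v1, v2 = v2, v1          # elastic exchange at contact
        else:
            x1 += v1 * (t_end - t)   # free flight to the final time
            x2 += v2 * (t_end - t)
            t = t_end
    return (x1, v1), (x2, v2)

(x1, v1), (x2, v2) = evolve_two_rods(0.0, 1.0, 2.0, 0.0, a=1.0, t_end=2.0)
```

An efficient many-rod solver of the kind the paper describes would process such collision events for a whole gas, possibly with external force fields between events.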
Chen, Guangye; Luis, Chacon; Bird, Robert; Stark, David; Yin, Lin; Albright, Brian
2017-10-01
Leap-frog based explicit algorithms, whether ``energy-conserving'' or ``momentum-conserving'', do not conserve energy discretely. Time-centered fully implicit algorithms can conserve discrete energy exactly, but introduce large dispersion errors in the light-wave modes regardless of timestep size. This can lead to intolerable simulation errors where highly accurate light propagation is needed (e.g. laser-plasma interactions, LPI). In this study, we selectively combine the leap-frog and Crank-Nicolson methods to produce a low-dispersion, exactly energy- and charge-conserving PIC algorithm. Specifically, we employ the leap-frog method for the Maxwell equations and the Crank-Nicolson method for the particle equations. Such an algorithm admits exact global energy conservation and exact local charge conservation, and preserves the dispersion properties of the leap-frog method for the light wave. The algorithm has been implemented in a code named iVPIC, based on the VPIC code developed at LANL. We will present numerical results that demonstrate the properties of the scheme with sample test problems (e.g. a Weibel instability run for 10^7 timesteps, and LPI applications).
Location-allocation algorithm for multiple particle tracking using Birmingham MWPC positron camera
International Nuclear Information System (INIS)
Gundogdu, O.; Tarcan, E.
2004-01-01
Positron Emission Particle Tracking is a powerful, non-invasive technique that employs a single radioactive particle and has been applied to a wide range of industrial systems. This paper presents an original application of a location-allocation technique, developed mainly in economics and resource management, which allows the tracking of multiple particles using a small number of trajectories with correct tagging. The technique provides very encouraging results
Location-allocation algorithm for multiple particle tracking using Birmingham MWPC positron camera
Energy Technology Data Exchange (ETDEWEB)
Gundogdu, O. E-mail: o.gundogdu@surrey.ac.uk, o_gundo@yahoo.co.uk, o.gundogdu@kingston.ac.uk; Tarcan, E
2004-05-01
Positron Emission Particle Tracking is a powerful, non-invasive technique that employs a single radioactive particle and has been applied to a wide range of industrial systems. This paper presents an original application of a location-allocation technique, developed mainly in economics and resource management, which allows the tracking of multiple particles using a small number of trajectories with correct tagging. The technique provides very encouraging results.
Bruinsma, Robijn; Grosberg, Alexander Y.; Rabin, Yitzhak; Zidovska, Alexandra
2014-01-01
Following recent observations of large scale correlated motion of chromatin inside the nuclei of live differentiated cells, we present a hydrodynamic theory—the two-fluid model—in which the content of a nucleus is described as a chromatin solution with the nucleoplasm playing the role of the solvent and the chromatin fiber that of a solute. This system is subject to both passive thermal fluctuations and active scalar and vector events that are associated with free energy consumption, such as ATP hydrolysis. Scalar events drive the longitudinal viscoelastic modes (where the chromatin fiber moves relative to the solvent) while vector events generate the transverse modes (where the chromatin fiber moves together with the solvent). Using linear response methods, we derive explicit expressions for the response functions that connect the chromatin density and velocity correlation functions to the corresponding correlation functions of the active sources and the complex viscoelastic moduli of the chromatin solution. We then derive general expressions for the flow spectral density of the chromatin velocity field. We use the theory to analyze experimental results recently obtained by one of the present authors and her co-workers. We find that the time dependence of the experimental data for both native and ATP-depleted chromatin can be well-fitted using a simple model—the Maxwell fluid—for the complex modulus, although there is some discrepancy in terms of the wavevector dependence. Thermal fluctuations of ATP-depleted cells are predominantly longitudinal. ATP-active cells exhibit intense transverse long wavelength velocity fluctuations driven by force dipoles. Fluctuations with wavenumbers larger than a few inverse microns are dominated by concentration fluctuations with the same spectrum as thermal fluctuations but with increased intensity. PMID:24806919
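The Maxwell-fluid model used to fit the complex modulus has the closed form G*(ω) = G·iωτ/(1 + iωτ); the plateau modulus G and relaxation time τ below are arbitrary illustrative values, not fitted chromatin parameters:

```python
def maxwell_modulus(omega, G, tau):
    """Complex shear modulus of a Maxwell fluid: viscous (imaginary part
    dominates) at low frequency, elastic plateau G at high frequency."""
    iwt = 1j * omega * tau
    return G * iwt / (1.0 + iwt)

G_high = maxwell_modulus(1.0e6, G=1.0, tau=1.0)   # elastic plateau limit
G_low = maxwell_modulus(1.0e-6, G=1.0, tau=1.0)   # viscous limit
```

A fit of the kind described in the abstract would adjust G and τ so that the resulting response functions reproduce the measured velocity correlation spectra.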
Directory of Open Access Journals (Sweden)
Laxmi A. Bewoor
2017-10-01
Full Text Available The no-wait flow shop is a flow shop in which the scheduling of jobs is continuous and simultaneous through all machines, without waiting between any consecutive machines. Scheduling a no-wait flow shop requires finding an appropriate sequence of jobs, which in turn reduces total processing time. The classical brute-force method for enumerating schedules to improve resource utilization can become trapped in local optima, and the problem is a typical NP-hard combinatorial optimization problem that requires near-optimal solutions from heuristic and metaheuristic techniques. This paper proposes an effective hybrid Particle Swarm Optimization (PSO) metaheuristic algorithm for solving no-wait flow shop scheduling problems with the objective of minimizing the total flow time of jobs. The Proposed Hybrid Particle Swarm Optimization (PHPSO) algorithm uses the random-key representation rule to convert the continuous position values of particles into discrete job permutations. The proposed algorithm initializes the population efficiently with the Nawaz-Enscore-Ham (NEH) heuristic and uses an evolutionary search guided by the mechanism of PSO, as well as simulated annealing based on a local neighborhood search, to avoid getting stuck in local optima and to provide an appropriate balance of global exploration and local exploitation. Extensive computational experiments are carried out based on Taillard's benchmark suite. Computational results and comparisons with existing metaheuristics show that the PHPSO algorithm outperforms the existing methods in terms of search quality and robustness for the problem considered. The improvement in solution quality is confirmed by statistical tests of significance.
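Two pieces of the PHPSO pipeline are easy to sketch independently: the random-key decoding of a particle position into a job permutation, and the no-wait total-flow-time objective. The processing-time matrix below is a made-up toy instance, not data from the paper or Taillard's suite:

```python
def decode_random_keys(keys):
    """Random-key rule: the argsort of the continuous positions is the permutation."""
    return sorted(range(len(keys)), key=lambda j: keys[j])

def no_wait_total_flowtime(perm, p):
    """Total flow time of a no-wait flow shop; p[j][k] is the processing
    time of job j on machine k."""
    start, total, prev = 0, 0, None
    for j in perm:
        if prev is not None:
            done_prev, done_j, delay = 0, 0, 0
            for k in range(len(p[0])):
                done_prev += p[prev][k]                 # prev leaves machine k
                delay = max(delay, done_prev - done_j)  # j must not arrive earlier
                done_j += p[j][k]
            start += delay                              # minimal no-wait offset
        total += start + sum(p[j])                      # completion time of job j
        prev = j
    return total

perm = decode_random_keys([0.3, 0.1, 0.9])   # job order [1, 0, 2]
tft = no_wait_total_flowtime(perm, [[1, 1], [1, 1], [2, 1]])
```

Because every argsort of real keys is a valid permutation, any particle position decodes to a feasible schedule, which is the point of the representation.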
Directory of Open Access Journals (Sweden)
Fereydoun Naghibi
2016-12-01
Full Text Available This paper presents an advanced method in urban growth modeling that discovers the transition rules of cellular automata (CA) using the artificial bee colony (ABC) optimization algorithm. Comparisons between the simulation results of CA models optimized by the ABC algorithm and by the particle swarm optimization (PSO) algorithm were also performed to evaluate the potential of the proposed methods. According to previous studies, swarm intelligence algorithms can produce reasonable results for optimization problems such as discovering the transition rules of CA in land use change/urban growth modeling. Modeling urban growth as a dynamic process is not straightforward because of the nonlinearity and heterogeneity among the variables involved, which pose a number of challenges for traditional CA. The ABC algorithm, a powerful new swarm-based optimization algorithm, can be used to capture optimized transition rules of CA. This paper proposes a methodology based on remote sensing data for modeling urban growth with CA calibrated by the ABC algorithm. The performance of the ABC-CA, PSO-CA, and CA-logistic models in land use change detection is tested for the city of Urmia, Iran, between 2004 and 2014. The models were validated with statistical measures such as overall accuracy, figure of merit, and total operating characteristic. We show that the overall accuracy of the ABC-CA model was 89%, which was 1.5% and 6.2% higher than those of the PSO-CA and CA-logistic models, respectively. Moreover, the allocation disagreements (simulation errors) of the ABC-CA, PSO-CA, and CA-logistic models are 11%, 12.5%, and 17.2%, respectively. Finally, for all evaluation indices, including running time, convergence capability, flexibility, statistical measures, and the produced spatial patterns, the ABC-CA model showed relative improvement and its superiority was confirmed.
International Nuclear Information System (INIS)
Jordehi, Ahmad Rezaee
2016-01-01
Highlights: • A modified PSO has been proposed for parameter estimation of PV cells and modules. • In the proposed modified PSO, acceleration coefficients are changed during the run. • The proposed modified PSO mitigates the premature convergence problem. • The parameter estimation problem has been solved for both PV cells and PV modules. • The results show that the proposed PSO outperforms other state-of-the-art algorithms. - Abstract: Estimating the circuit model parameters of PV cells/modules is a challenging problem. It is typically translated into an optimisation problem and solved by metaheuristic optimisation algorithms. Particle swarm optimisation (PSO) is a popular and well-established optimisation algorithm. Despite all its advantages, PSO suffers from the premature convergence problem, meaning that it may get trapped in local optima. The personal and social acceleration coefficients are two control parameters that, owing to their effect on explorative and exploitative capabilities, play important roles in the computational behaviour of PSO. In this paper, in an attempt to mitigate premature convergence in PSO, the personal acceleration coefficient is decreased during the course of the run, while the social acceleration coefficient is increased. In this way, an appropriate trade-off between the explorative and exploitative capabilities of PSO is established during the run and the premature convergence problem is significantly mitigated. The results clearly show that in parameter estimation of PV cells and modules, the proposed time-varying acceleration coefficients PSO (TVACPSO) yields more accurate parameters than conventional PSO, the teaching-learning-based optimisation (TLBO) algorithm, the imperialistic competitive algorithm (ICA), grey wolf optimisation (GWO), the water cycle algorithm (WCA), pattern search (PS) and the Newton algorithm. For validation of the proposed methodology, parameter estimation has been done both for PV cells and PV modules.
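The time-varying acceleration coefficient schedule described above (personal coefficient decreasing, social coefficient increasing over the run) can be sketched as a linear interpolation; the boundary values below are typical TVAC settings from the PSO literature, not necessarily the ones used in this paper:

```python
def tvac(t, T, c1_init=2.5, c1_final=0.5, c2_init=0.5, c2_final=2.5):
    """Linearly vary PSO acceleration coefficients over a run of T iterations:
    c1 (personal) decreases to reduce self-attraction late in the run,
    c2 (social) increases to strengthen attraction to the global best."""
    frac = t / T
    c1 = c1_init + (c1_final - c1_init) * frac
    c2 = c2_init + (c2_final - c2_init) * frac
    return c1, c2

print(tvac(0, 100))    # (2.5, 0.5): exploration-heavy at the start
print(tvac(100, 100))  # (0.5, 2.5): exploitation-heavy at the end
```

Early in the run the swarm explores (large c1), while late in the run particles converge toward the global best (large c2), which is the premature-convergence mitigation the abstract describes.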
Simulating prescribed particle densities in the grand canonical ensemble using iterative algorithms.
Malasics, Attila; Gillespie, Dirk; Boda, Dezso
2008-03-28
We present two efficient iterative Monte Carlo algorithms in the grand canonical ensemble with which the chemical potentials corresponding to prescribed (targeted) partial densities can be determined. The first algorithm works by always using the targeted densities in the kT log ρ_i (ideal-gas) terms and updating the excess chemical potentials from the previous iteration. The second algorithm extrapolates the chemical potentials in the next iteration from the results of the previous iteration using a first-order series expansion of the densities. The coefficients of the series, the derivatives of the densities with respect to the chemical potentials, are obtained from the simulations by fluctuation formulas. The convergence of this procedure is shown for the examples of a homogeneous Lennard-Jones mixture and a NaCl-CaCl₂ electrolyte mixture in the primitive model. The methods are quite robust under the conditions investigated. The first algorithm is less sensitive to initial conditions.
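The first iterative scheme (keep the target density in the ideal-gas term, reuse the excess chemical potential measured in the previous iteration) can be illustrated on a toy one-component model in which the "GCMC run" is replaced by an analytically invertible equation of state; the toy EOS μ = ln ρ + Bρ (kT = 1) and all names are assumptions for illustration only:

```python
import math

def density_from_mu(mu, B=1.0):
    """Toy stand-in for a GCMC simulation: for mu = ln(rho) + B*rho (kT = 1),
    return the density rho at the given chemical potential, by bisection."""
    lo, hi = 1e-12, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.log(mid) + B * mid < mu:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def iterate_mu(rho_target, B=1.0, n_iter=20):
    """First algorithm of the abstract: always keep the *target* density in the
    ideal-gas term, update the excess part from the previous iteration."""
    mu = math.log(rho_target)  # ideal-gas starting guess
    for _ in range(n_iter):
        rho = density_from_mu(mu, B)       # "simulation" at the current mu
        mu_ex = mu - math.log(rho)         # measured excess chemical potential
        mu = math.log(rho_target) + mu_ex  # targeted density in the ideal term
    return mu

mu = iterate_mu(0.5)
print(abs(density_from_mu(mu) - 0.5) < 1e-6)  # True: converged to the target
```

In the toy model the fixed-point map is a contraction, so the iteration converges to the chemical potential that reproduces the prescribed density, mirroring the behavior reported for the mixtures studied.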
Cell-centered particle weighting algorithm for PIC simulations in a non-uniform 2D axisymmetric mesh
Araki, Samuel J.; Wirz, Richard E.
2014-09-01
Standard area weighting methods for particle-in-cell simulations result in systematic errors in particle densities on a non-uniform mesh in cylindrical coordinates. These errors can be significantly reduced by using weighted cell volumes for density calculations. A detailed description of the corrected volume calculations and the cell-centered weighting algorithm in a non-uniform mesh is provided. The simple formulas for the corrected volume can be used for any type of quadrilateral and/or triangular mesh in cylindrical coordinates. Density errors arising from the cell-centered weighting algorithm are computed for uniform, linearly decreasing, and Bessel-function radial density profiles in an adaptive Cartesian mesh and an unstructured mesh. For all the density profiles, it is shown that the weighting algorithm provides a significant improvement in density calculations. However, relatively large density errors may persist at the outermost cells for monotonically decreasing density profiles. A further analysis has been performed to investigate the effect of the density errors on potential calculations, and it is shown that the error at the outermost cell does not propagate into the potential solution for the density profiles investigated.
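One standard way to compute the volume of an arbitrary quadrilateral (or any polygonal) cell in the (r, z) plane of an axisymmetric mesh, consistent in spirit with the corrected-volume idea above, is Pappus' theorem: the swept volume equals 2π times the radial centroid times the polygon area. This sketch is illustrative, not the authors' exact formulas:

```python
import numpy as np

def cell_volume_rz(r, z):
    """Volume swept by revolving a polygonal (r, z) cell about the z-axis,
    via Pappus' theorem: V = 2*pi * r_centroid * A, with the signed area A
    and radial centroid r_centroid from the shoelace formulas."""
    r, z = np.asarray(r, float), np.asarray(z, float)
    rn, zn = np.roll(r, -1), np.roll(z, -1)
    cross = r * zn - rn * z
    A = 0.5 * np.sum(cross)                      # signed polygon area
    r_c = np.sum((r + rn) * cross) / (6.0 * A)   # radial centroid
    return 2.0 * np.pi * r_c * abs(A)

# Annular cell r in [1, 2], z in [0, 1]: exact V = pi*(2^2 - 1^2)*1 = 3*pi
print(cell_volume_rz([1, 2, 2, 1], [0, 0, 1, 1]))
```

Because the centroid formula is orientation-independent (both A and the cross terms flip sign together), the same routine handles clockwise and counter-clockwise vertex orderings.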
Directory of Open Access Journals (Sweden)
Maryam Mousavi
Full Text Available Flexible manufacturing system (FMS) enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software.
Directory of Open Access Journals (Sweden)
Ying Zhang
2016-02-01
Full Text Available Due to their special environment, Underwater Wireless Sensor Networks (UWSNs) are usually deployed over a large sea area and the nodes are usually floating. This results in a lower beacon node distribution density, a longer localization time, and more energy consumption. Most current localization algorithms in this field do not give enough consideration to the mobility of the nodes. In this paper, by analyzing the mobility patterns of water near the seashore, a localization method for UWSNs based on Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO) is proposed. In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is calculated using the spatial correlation of underwater objects' mobility, and then their locations can be predicted. The range-based PSO algorithm may cause considerable energy consumption and its computational complexity is somewhat high; however, since the number of beacon nodes is relatively small, the calculation for the large number of unknown nodes is succinct, and this method can markedly decrease the energy consumption and time cost of localizing these mobile nodes. The simulation results indicate that this method has higher localization accuracy and a better localization coverage rate compared with other widely used localization methods in this field.
Mousavi, Maryam; Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah
2017-01-01
Flexible manufacturing system (FMS) enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software.
Zhang, Ying; Liang, Jixing; Jiang, Shengming; Chen, Wei
2016-02-06
Due to their special environment, Underwater Wireless Sensor Networks (UWSNs) are usually deployed over a large sea area and the nodes are usually floating. This results in a lower beacon node distribution density, a longer localization time, and more energy consumption. Most current localization algorithms in this field do not give enough consideration to the mobility of the nodes. In this paper, by analyzing the mobility patterns of water near the seashore, a localization method for UWSNs based on Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO) is proposed. In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is calculated using the spatial correlation of underwater objects' mobility, and then their locations can be predicted. The range-based PSO algorithm may cause considerable energy consumption and its computational complexity is somewhat high; however, since the number of beacon nodes is relatively small, the calculation for the large number of unknown nodes is succinct, and this method can markedly decrease the energy consumption and time cost of localizing these mobile nodes. The simulation results indicate that this method has higher localization accuracy and a better localization coverage rate compared with other widely used localization methods in this field.
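The range-based PSO localization step amounts to minimizing the sum of squared range residuals to the beacons. A minimal self-contained sketch, with synthetic beacons, noise-free ranges, and a canonical inertia-weight PSO (all parameter values are illustrative, not the paper's), is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known beacon positions and range measurements to one unknown node
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1)  # noise-free for the demo

def cost(x):
    """Sum of squared range residuals at candidate position x."""
    return np.sum((np.linalg.norm(beacons - x, axis=1) - ranges) ** 2)

# Canonical inertia-weight PSO
n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
pos = rng.uniform(0, 10, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([cost(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(np.round(gbest, 3))  # close to the true position [3, 7]
```

With noise-free ranges and four beacons the cost has a unique minimum at the true position, so the swarm's global best converges there; with noisy ranges the same objective yields a least-squares estimate.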
Behavior of passive admixture in a vortical hydrodynamic field
Bobrov, R.O.; Kyrylyuk, A.V; Zatovsky, A.V.
2006-01-01
The motion of passive admixture of spherical particles in the stationary hydrodynamic field of a swirling flow is studied. A spherical particle of a given mass in the hydrodynamic field of a swirling flow is located on a certain circular orbit, where the centrifugal force is compensated by the
Wang, Zhen-yu; Yu, Jian-cheng; Zhang, Ai-qun; Wang, Ya-xing; Zhao, Wen-tao
2017-12-01
Combining high-precision numerical analysis methods with optimization algorithms to explore a design space systematically has become an important topic in modern design methods. During the design process of an underwater glider's flying-wing structure, a surrogate model is introduced to decrease the computation time of a high-precision analysis. By these means, the contradiction between precision and efficiency is resolved effectively. Based on parametric geometry modeling, mesh generation, and computational fluid dynamics analysis, a surrogate model is constructed by adopting design of experiment (DOE) theory to solve the multi-objective design optimization problem of the underwater glider. The procedure of surrogate model construction is presented, and the Gaussian kernel function is specifically discussed. The Particle Swarm Optimization (PSO) algorithm is applied to the hydrodynamic design optimization. The hydrodynamic performance of the optimized flying-wing-structure underwater glider increases by 9.1%.
Poultangari, Iman; Shahnazi, Reza; Sheikhan, Mansour
2012-09-01
In order to control the pitch angle of blades in wind turbines, the proportional-integral (PI) controller is commonly employed due to its simplicity and industrial usability. Neural networks and evolutionary algorithms provide suitable tools for determining the optimal PI gains. In this paper, a radial basis function (RBF) neural network based PI controller is proposed for collective pitch control (CPC) of a 5-MW wind turbine. In order to provide an optimal dataset to train the RBF neural network, the particle swarm optimization (PSO) evolutionary algorithm is used. The proposed method does not require a model of the complexities, nonlinearities and uncertainties of the system under control. The simulation results show that the proposed controller has satisfactory performance. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Local anisotropy effects in the hydrodynamical theory of multiparticle production
International Nuclear Information System (INIS)
Gorenstein, M.I.; Sinyukov, Yu.M.
1984-01-01
The stage of secondary particle formation in the hydrodynamic theory of multiparticle production is analysed. We find that the secondary particle spectrum from the decay of a fluid element is anisotropic in the rest frame of that element. (orig.)
Local anisotropy effects in the hydrodynamical theory of multiparticle production
International Nuclear Information System (INIS)
Gorenshtejn, M.I.; Sinyukov, Yu.M.
1983-01-01
The stage of final particle formation in the hydrodynamic theory of multiparticle production is analysed. It is shown that the secondary particle spectrum from the decay of a fluid element is anisotropic in the rest frame of that element.
Cui, Ying; Chen, Qinggang; Li, Yaxiao; Tang, Ling
2017-02-01
Flavonoids exhibit a high affinity for the purified cytosolic NBD (C-terminal nucleotide-binding domain) of P-glycoprotein (P-gp). To explore the affinity of flavonoids for P-gp, quantitative structure-activity relationship (QSAR) models were developed using support vector machines (SVMs). A novel method coupling a modified particle swarm optimization algorithm with a random mutation strategy, together with a genetic algorithm coupled with the SVM, was proposed to simultaneously optimize the kernel parameters of the SVM and determine the subset of optimized features for the first time. Using DRAGON descriptors to represent the compounds, three subsets (training, prediction and external validation sets) derived from the dataset were employed to investigate the QSAR. After excluding an outlier, the correlation coefficient (R²) of the whole training set (training and prediction) was 0.924, and the R² of the external validation set was 0.941. The root-mean-square error (RMSE) of the whole training set was 0.0588; the RMSE of the cross-validation of the external validation set was 0.0443. The mean Q² value of leave-many-out cross-validation was 0.824. Together with the results of the randomization analysis and the applicability domain, these findings indicate that the proposed model has good predictive ability and stability.
Prayogi, A.; Majidi, M. A.
2017-07-01
In condensed-matter physics, strongly correlated systems refer to materials that exhibit a variety of fascinating properties and ordered phases, depending on temperature, doping, and other factors. Such unique properties most notably arise from strong electron-electron interactions, and in some cases from interactions involving other quasiparticles as well. Electronic correlation effects are sufficiently non-trivial that one may need an accurate approximation technique with quite heavy computation, such as quantum Monte Carlo, in order to capture particular material properties arising from such effects. Meanwhile, less accurate techniques may come at lower numerical cost, but their ability to capture particular properties may depend strongly on the choice of approximation. Among the many-body techniques derivable from Feynman diagrams, we aim to formulate an algorithmic implementation of the ladder-diagram approximation to capture the effects of electron-electron interactions. We wish to investigate how these correlation effects influence the temperature-dependent properties of strongly correlated metals and semiconductors. As we are interested in the temperature-dependent properties of the system, the ladder-diagram method needs to be applied in the Matsubara frequency domain to obtain the self-consistent self-energy. However, in the end we also need to compute dynamical properties such as the density of states (DOS) and the optical conductivity, which are defined in the real frequency domain. For this purpose, we need to perform an analytic continuation procedure. At the end of this study, we test the technique by observing the occurrence of the metal-insulator transition in strongly correlated metals, and the renormalization of the band gap in strongly correlated semiconductors.
Carrel, M.; Morales, V. L.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.
2018-03-01
Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling filters, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both the average and the variance of the flow velocity, as well as in its correlation length. Measurements of spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes the Lagrangian velocity distributions well and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered.
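Fitting a gamma distribution to Lagrangian velocity magnitudes, as done for the flow statistics above, can be sketched with a simple method-of-moments estimator on synthetic data; the estimator choice and the synthetic sample are illustrative and may differ from the study's actual fitting procedure:

```python
import numpy as np

def gamma_moments(v):
    """Method-of-moments gamma fit: shape k = mean^2/var, scale theta = var/mean."""
    m, var = v.mean(), v.var()
    return m * m / var, var / m

# Synthetic "Lagrangian velocity magnitude" sample with known parameters
rng = np.random.default_rng(2)
v = rng.gamma(shape=2.0, scale=0.5, size=200_000)
k, theta = gamma_moments(v)
print(round(k, 1), round(theta, 1))  # close to the generating values (2.0, 0.5)
```

The fitted shape parameter is the quantity that tracks the parallel-to-serial transition of the pore network: a larger shape corresponds to a narrower (more parallel-like) velocity distribution, a smaller shape to a broader, more serial one.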
Hydrodynamic effects on coalescence.
Energy Technology Data Exchange (ETDEWEB)
Dimiduk, Thomas G.; Bourdon, Christopher Jay; Grillet, Anne Mary; Baer, Thomas A.; de Boer, Maarten Pieter; Loewenberg, Michael (Yale University, New Haven, CT); Gorby, Allen D.; Brooks, Carlton, F.
2006-10-01
The goal of this project was to design, build and test novel diagnostics to probe the effect of hydrodynamic forces on coalescence dynamics. Our investigation focused on how a drop coalesces onto a flat surface which is analogous to two drops coalescing, but more amenable to precise experimental measurements. We designed and built a flow cell to create an axisymmetric compression flow which brings a drop onto a flat surface. A computer-controlled system manipulates the flow to steer the drop and maintain a symmetric flow. Particle image velocimetry was performed to confirm that the control system was delivering a well conditioned flow. To examine the dynamics of the coalescence, we implemented an interferometry capability to measure the drainage of the thin film between the drop and the surface during the coalescence process. A semi-automated analysis routine was developed which converts the dynamic interferogram series into drop shape evolution data.
Albert, A.; André, M.; Anghinolfi, M.; Anton, G.; Ardid, M.; Aubert, J.-J.; Avgitas, T.; Baret, B.; Barrios-Martí, J.; Basa, S.; Bertin, V.; Biagi, S.; Bormuth, R.; Bourret, S.; Bouwhuis, M.C.; Bruijn, R.; Brunner, J.; Busto, J.; Capone, A.; Caramete, L.; Carr, J.; Celli, S.; Chiarusi, T.; Circella, M.; Coelho, C.O.A.; Coleiro, A.; Coniglione, R.; Costantini, H.; Coyle, P.; Creusot, A.; Deschamps, A.; De Bonis, G.; Distefano, C.; Di Palma, I.; Domi, A.; Donzaud, C.; Dornic, D.; Drouhin, D.; Eberl, T.; El Bojaddaini, I.; Elsässer, D.; Enzenhofer, A.; Felis, I.; Folger, F.; Fusco, L.A.; Galata, S.; Gay, P.; Giordano, V.; Glotin, H.; Grégoire, T.; Gracia-Ruiz, R.; Graf, K.; Hallmann, S.; van Haren, H.; Heijboer, A.J.; Hello, Y.; Hernandez-Rey, J.J.; Hößl, J.; Hofestädt, J.; Hugon, C.; Illuminati, G.; James, C.W.; de Jong, M.; Jongen, M.; Kadler, M.; Kalekin, O.; Katz, U.; Kießling, D.; Kouchner, A.; Kreter, M.; Kreykenbohm, I.; Kulikovskiy, V.; Lachaud, C.; Lahmann, R.; Lefevre, D.; Leonora, E.; Lotze, M.; Loucatos, S.; Marcelin, M.; Margiotta, A.; Marinelli, A.; Martinez-Mora, J.A.; Mele, R.; Melis, K.; Michael, T.; Migliozzi, P.; Moussa, A.; Nezri, E.; Organokov, M.; Pavalas, G.E.; Pellegrino, C.; Perrina, C.; Piattelli, P.; Popa, V.; Pradier, T.; Quinn, L.; Racca, C.; Riccobene, G.; Sanchez-Losa, A.; Saldaña, M.; Salvadori, I.; Samtleben, D.F.E.; Sanguineti, M.; Sapienza, P.; Schussler, F.; Sieger, C.; Spurio, M.; Stolarczyk, T.; Taiuti, M.; Tayalati, Y.; Trovato, A.; Turpin, D.; Tönnis, C.; Vallage, B.; Van Elewyck, V.; Versari, F.; Vivolo, D.; Vizzoca, A.; Wilms, J.; Zornoza, J.D.; Zuniga, J.
2017-01-01
A novel algorithm to reconstruct neutrino-induced particle showers within the ANTARES neutrino telescope is presented. The method achieves a median angular resolution of 6° for shower energies below 100 TeV. Applying this algorithm to 6 years of data taken with the ANTARES detector, 8 events with
International Nuclear Information System (INIS)
Garcia, A.L.; Alexander, F.J.; Alder, B.J.
1997-01-01
The consistent Boltzmann algorithm (CBA) for dense, hard-sphere gases is generalized to obtain the van der Waals equation of state and the corresponding exact viscosity at all densities except at the highest temperatures. A general scheme for adjusting any transport coefficients to higher values is presented
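For reference, the van der Waals equation of state that the generalized CBA is constructed to reproduce can be written in terms of the number density n as P = n kT / (1 − nb) − a n². A minimal sketch in reduced units (kB = 1), with illustrative parameter values:

```python
def vdw_pressure(n, T, a, b, kB=1.0):
    """van der Waals pressure P = n*kB*T/(1 - n*b) - a*n**2 as a function of
    number density n; a (attraction) and b (excluded volume) are in reduced units."""
    return n * kB * T / (1.0 - n * b) - a * n * n

print(vdw_pressure(0.1, 2.0, a=0.0, b=0.0))  # ideal-gas limit: P = n*kB*T = 0.2
print(vdw_pressure(0.3, 1.0, a=1.0, b=0.5))  # excluded volume raises P, attraction lowers it
```

Setting a = b = 0 recovers the ideal-gas law, which is a convenient sanity check when calibrating the algorithm's advection step against the target EOS.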
Modelling Systems of Classical/Quantum Identical Particles by Focusing on Algorithms
Guastella, Ivan; Fazio, Claudio; Sperandeo-Mineo, Rosa Maria
2012-01-01
A procedure modelling ideal classical and quantum gases is discussed. The proposed approach is mainly based on the idea that modelling and algorithm analysis can provide a deeper understanding of particularly complex physical systems. Appropriate representations and physical models able to mimic possible pseudo-mechanisms of functioning and having…
Algorithmic Information Dynamics of Persistent Patterns and Colliding Particles in the Game of Life
Zenil, Hector; Kiani, Narsis A.; Tegner, Jesper
2018-01-01
Conway's Game of Life (GoL) cellular automaton is used as a case study. We analyze the distribution of prevailing motifs that occur in GoL from the perspective of algorithmic probability. We demonstrate how the tools introduced are an alternative to computable
Wang, Rongxiao; Chen, B.; Qiu, S.; Ma, Liang; Zhu, Zhengqiu; Wang, Yiping; Qiu, Xiaogang
2018-01-01
Locating and quantifying the emission source plays a significant role in the emergency management of hazardous gas leak accidents. Due to the lack of a desirable atmospheric dispersion model, current source estimation algorithms cannot meet the requirements of both accuracy and efficiency. In
Montoya, Gustavo; Valecillos, María; Romero, Carlos; Gonzáles, Dosinda
2009-11-01
In the present research, a digital image processing-based automated algorithm was developed to determine the phases' heights, the holdup, and the statistical distribution of drop size in a two-phase water-air system, using pipes at 0°, 10°, and 90° of inclination. Digital images were acquired with a high-speed camera (up to 4500 fps), using equipment consisting of a system of three acrylic pipes with diameters of 1.905, 3.175, and 4.445 cm. Each pipe is arranged in two sections of 8 m in length. Various flow patterns were visualized for different superficial velocities of water and air. Finally, using the image processing program designed in Matlab/Simulink, the captured images were processed to establish the parameters previously mentioned. The image processing algorithm is based on frequency-domain analysis of the source pictures, which allows the interface to be found as the edge between the water and the air, through a Sobel filter that extracts the high-frequency components of the image. The drop size was found by calculating the Feret diameter. Three flow patterns were observed: annular, ST, and ST&MI.
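The Sobel-based interface extraction can be sketched as follows on a synthetic frame; the plain-Python convolution and the test image are illustrative, not the authors' Matlab implementation:

```python
import numpy as np

def sobel_edges(img):
    """3x3 Sobel gradient magnitude, extracting the high-frequency content
    used to locate the water-air interface in each frame."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            out[i, j] = np.hypot(np.sum(patch * kx), np.sum(patch * ky))
    return out

# Synthetic frame: one phase (0) in rows 0-9, the other phase (1) in rows 10-19
img = np.zeros((20, 20))
img[10:, :] = 1.0
edges = sobel_edges(img)
rows = np.where(edges.max(axis=1) > 0)[0]
print(rows.tolist())  # [8, 9]: only windows straddling the interface respond
```

Thresholding the response and taking the strongest row per column yields the interface curve, from which the phase heights and holdup follow directly.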
CERN. Geneva
2018-01-01
Particle identification (PID) plays a crucial role in LHCb analyses. Combining information from LHCb subdetectors allows one to distinguish between various species of long-lived charged and neutral particles. PID performance directly affects the sensitivity of most LHCb measurements. Advanced multivariate approaches are used at LHCb to obtain the best PID performance and control systematic uncertainties. This talk highlights recent developments in PID that use innovative machine learning techniques, as well as novel data-driven approaches which ensure that PID performance is well reproduced in simulation.
International Nuclear Information System (INIS)
Lin Chaung; Lin, Tung-Hsien
2012-01-01
Highlights: ► An automatic procedure was developed to design the radial enrichment and gadolinia (Gd) distribution of the fuel lattice. ► The method is based on a particle swarm optimization algorithm and local search. ► The design goal was to achieve the minimum local peaking factor. ► The number of fuel pins with Gd and the Gd concentration are fixed to reduce search complexity. ► In this study, three axial sections are designed and lattice performance is calculated using CASMO-4. - Abstract: The axial section of a fuel assembly in a boiling water reactor (BWR) consists of five or six different distributions; this requires a radial lattice design. In this study, an automatic procedure based on a particle swarm optimization (PSO) algorithm and local search was developed to design the radial enrichment and gadolinia (Gd) distribution of the fuel lattice. The design goals were to achieve the minimum local peaking factor (LPF), and to come as close as possible to the specified target average enrichment and target infinite multiplication factor (k∞), with the number of fuel pins containing Gd and the Gd concentration held fixed. In this study, three axial sections are designed, and lattice performance is calculated using CASMO-4. Finally, the neutron cross-section library of the designed lattice is established by CMSLINK; the core status during depletion, such as thermal limits, cold shutdown margin, and cycle length, is then calculated using SIMULATE-3 in order to confirm that the lattice design satisfies the design requirements.
Directory of Open Access Journals (Sweden)
Yifan Hu
2012-01-01
Full Text Available The fault-tolerant routing problem is an important consideration in the design of heterogeneous wireless sensor network (H-WSN) applications, and has recently been attracting growing research interest. In order to maintain k disjoint communication paths from source sensors to the macronodes, we present a hybrid routing scheme and model in which multiple paths are calculated and maintained in advance, and alternate paths are created once the previous routing is broken. We then propose an immune cooperative particle swarm optimization algorithm (ICPSOA) in the model to provide fast routing recovery and reconstruct the network topology upon path failure in H-WSNs. In the ICPSOA, the mutation direction of a particle is determined by a multi-swarm evolution equation, and its diversity is improved by an immune mechanism, which can enhance the global search capability and improve the convergence rate of the algorithm. We then validate this theoretical model with simulation results. The results indicate that the ICPSOA-based fault-tolerant routing protocol outperforms several other protocols thanks to its fast routing recovery mechanism, reliable communications, and prolonging of the network lifetime.
Savastru, D.; Dontu, Simona; Savastru, Roxana; Sterian, Andreea Rodica
2013-01-01
Our knowledge of our surroundings is achieved through observations and measurements, but both are influenced by errors (noise). Therefore one of the first tasks is to try to eliminate the noise by constructing instruments of high accuracy. However, any real observed and measured system is characterized by natural limits due to the deterministic nature of the measured information. The present work is dedicated to the identification of these limits. We have analyzed some algorithms for selection and ...
Directory of Open Access Journals (Sweden)
Tzu-An Chiang
2014-01-01
Full Text Available This study designed a cross-stage reverse logistics course for defective products so that damaged products generated by downstream partners can be directly returned to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), the VMax method (PSOA_VMM), and the constriction factor method (PSOA_CFM), which we employed to find solutions supporting this mathematical model. Finally, a real case and five simulated cases with different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms in order to validate the model proposed in this study. Regarding system execution time, the GA consumed more time than the other three PSO algorithms did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM could obtain a lower convergence value than PSOA_VMM could. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did.
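The constriction factor used by PSOA_CFM-style variants is conventionally the Clerc-Kennedy coefficient, which scales the whole velocity update instead of applying an inertia weight; a minimal sketch with the standard parameter values (which may differ from those used in the paper):

```python
import math

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction factor chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|
    with phi = c1 + c2 > 4; the velocity update becomes
    v = chi * (v + c1*r1*(pbest - x) + c2*r2*(gbest - x))."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

print(round(constriction_factor(), 4))  # 0.7298 for the standard phi = 4.1
```

With phi = 4.1 this gives chi ≈ 0.7298, which guarantees convergent particle trajectories without an explicit velocity clamp, in contrast to the VMax variant that bounds velocities directly.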
Marciana Lima Góes
2012-01-01
In this work, a numerical simulator based on the mesh-free Smoothed Particle Hydrodynamics (SPH) method was developed for solving incompressible Newtonian fluid flows. Unlike most existing versions of the method, the numerical code uses an iterative technique to determine the pressure field. This procedure employs the differential form of an equation of state for a compressible fluid together with the continuity equation ...
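The pressure scheme described above rests on a compressible equation of state plus the continuity equation. As a hedged sketch: the Tait equation of state with illustrative reference density and artificial sound speed is the common choice in weakly compressible SPH, though not necessarily the exact form used in this work:

```python
def tait_pressure(rho, rho0=1000.0, c0=10.0, gamma=7):
    """Tait equation of state used in weakly compressible SPH:
    p = B * ((rho/rho0)**gamma - 1), with stiffness B = rho0*c0^2/gamma."""
    B = rho0 * c0 * c0 / gamma
    return B * ((rho / rho0) ** gamma - 1.0)

def density_update(rho, div_v, dt):
    """Continuity equation d(rho)/dt = -rho * div(v), explicit Euler step."""
    return rho + dt * (-rho * div_v)
```

Coupling the two, each time step updates particle densities from the velocity divergence and then recovers pressures from the equation of state, rather than solving a global pressure Poisson equation.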
Directory of Open Access Journals (Sweden)
Wenliao Du
2013-01-01
Promptly and accurately dealing with equipment breakdowns is very important for enhancing reliability and decreasing downtime. A novel fault diagnosis method, PSO-RVM, based on relevance vector machines (RVM) with a particle swarm optimization (PSO) algorithm, is proposed for the plunger pump of a truck crane. The particle swarm optimization algorithm is used to determine the kernel width parameter of the kernel function in the RVM, and five two-class RVMs with a binary tree architecture are trained to recognize the condition of the mechanism. The proposed method is employed in the diagnosis of the plunger pump of a truck crane. Six states, including the normal state, bearing inner race fault, bearing roller fault, plunger wear fault, thrust plate wear fault, and swash plate wear fault, are used to test the classification performance of the proposed PSO-RVM model, which is compared with classical models such as the back-propagation artificial neural network (BP-ANN), ant colony optimization artificial neural network (ANT-ANN), RVM, and support vector machines with particle swarm optimization (PSO-SVM). The experimental results show that PSO-RVM is superior to the first three classical models and has performance comparable to PSO-SVM, with diagnostic accuracies as high as 99.17% and 99.58%, respectively. However, the number of relevance vectors is far smaller than the number of support vectors (about 1/12 to 1/3 of the latter), which indicates that the proposed PSO-RVM model is more suitable for applications that require low complexity and real-time monitoring.
A new logistic dynamic particle swarm optimization algorithm based on random topology.
Ni, Qingjian; Deng, Jianming
2013-01-01
The population topology of particle swarm optimization (PSO) directly affects the dissemination of optimal information during the evolutionary process and has a significant impact on the performance of PSO. Classic static population topologies, such as the fully connected, ring, star, and square topologies, are usually used in PSO. In this paper, the performance of PSO with the proposed random topologies is analyzed, and the relationship between population topology and PSO performance is explored from the perspective of the graph-theoretic characteristics of population topologies. Further, an extensive simulation study of a relatively new PSO variant, logistic dynamic particle swarm optimization, is presented to discuss the effectiveness of the random topology and design strategies for population topologies. Finally, the experimental data are analyzed and discussed, and some useful conclusions about the design and use of population topologies in PSO are proposed, which can provide a basis for further discussion and research.
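The role of topology is easy to see in code: each particle draws its social attractor from its informers only, rather than from the whole swarm. A minimal sketch of a random ("stochastic star") topology construction of the kind discussed above, with illustrative function names:

```python
import random

def random_topology(n, k, rng):
    """Each particle informs k randomly chosen particles (plus itself) --
    a common construction for random PSO population topologies."""
    informers = {i: {i} for i in range(n)}
    for i in range(n):
        for _ in range(k):
            informers[rng.randrange(n)].add(i)
    return informers

def local_best(informers, fitness):
    """For each particle, the index of the best particle among its informers
    (lower fitness is better); this replaces the global best in the update."""
    return {i: min(srcs, key=lambda j: fitness[j])
            for i, srcs in informers.items()}
```

With a fully connected topology every particle's local best equals the global best; sparser random topologies slow information spread, which often helps on multimodal problems.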
International Nuclear Information System (INIS)
Ihle, Thomas
2008-01-01
Detailed calculations of the transport coefficients of a recently introduced particle-based model for fluid dynamics with a non-ideal equation of state are presented. Excluded volume interactions are modeled by means of biased stochastic multi-particle collisions which depend on the local velocities and densities. Momentum and energy are exactly conserved locally. A general scheme to derive transport coefficients for such biased, velocity-dependent collision rules is developed. Analytic expressions for the self-diffusion coefficient and the shear viscosity are obtained, and very good agreement is found with numerical results at small and large mean free paths. The viscosity turns out to be proportional to the square root of temperature, as in a real gas. In addition, the theoretical framework is applied to a two-component version of the model, and expressions for the viscosity and the difference in diffusion of the two species are given.
Tests of a Particle Flow Algorithm with CALICE test beam data
Czech Academy of Sciences Publication Activity Database
Adloff, C.; Blaha, J.; Blaising, J.J.; Cvach, Jaroslav; Gallus, Petr; Havránek, Miroslav; Janata, Milan; Kvasnička, Jiří; Lednický, Denis; Marčišovský, Michal; Polák, Ivo; Popule, Jiří; Tomášek, Lukáš; Tomášek, Michal; Růžička, Pavel; Šícho, Petr; Smolík, Jan; Vrba, Václav; Zálešák, Jaroslav
2011-01-01
Roč. 6, č. 7 (2011), s. 1-15 ISSN 1748-0221 R&D Projects: GA MŠk LA09042; GA MŠk LA08032 Institutional research plan: CEZ:AV0Z10100502 Keywords : calorimeters * PFA * CALICE * calorimeter methods Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 1.869, year: 2011 http://iopscience.iop.org/1748-0221/6/07/P07005
Dragonfly: an implementation of the expand–maximize–compress algorithm for single-particle imaging
Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N. Duane
2016-01-01
Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map...
Advances in Macrostatistical Hydrodynamics
International Nuclear Information System (INIS)
Graham, A.L.; Tetlow, N.; Abbott, J.R.; Mondy, L.S.; Brenner, H.
1993-01-01
An overview is presented of research that focuses on slow flows of suspensions in which colloidal and inertial effects are negligibly small (Macrostatistical Hydrodynamics). First, we describe nuclear magnetic resonance imaging experiments to quantitatively measure particle migration occurring in concentrated suspensions undergoing a flow with a nonuniform shear rate. These experiments address the issue of how the flow field affects the microstructure of suspensions. In order to understand the local viscosity in a suspension with such a flow-induced, spatially varying concentration, one must know how the viscosity of a homogeneous suspension depends on such variables as solids concentration and particle orientation. We suggest the technique of falling ball viscometry, using small balls, as a method to determine the effective viscosity of a suspension without affecting the original microstructure significantly. We also describe data from experiments in which the detailed fluctuations of a falling ball's velocity indicate the noncontinuum nature of the suspension and may lead to more insights into the effects of suspension microstructure on macroscopic properties. Finally, we briefly describe other experiments that can be performed in quiescent suspensions (in contrast to the use of conventional shear rotational viscometers) in order to learn more about the microstructure and boundary effects in concentrated suspensions.
Development of pattern recognition algorithms for particles detection from atmospheric images
International Nuclear Information System (INIS)
Khatchadourian, S.
2010-01-01
The HESS experiment consists of a system of telescopes intended to observe cosmic rays. Since the project has achieved a high level of performance, a second phase has been initiated. This implies the addition of a new telescope which is more sensitive than its predecessors and which is capable of collecting a huge amount of images. In this context, not all data collected by the telescope can be retained, because of storage limitations. Therefore, a new real-time trigger system must be designed in order to select interesting events on the fly. The purpose of this thesis was to propose a trigger solution to efficiently discriminate events (images) captured by the telescope. The first part of this thesis was to develop pattern recognition algorithms to be implemented within the trigger. A processing chain based on neural networks and Zernike moments has been validated. The second part of the thesis focused on the implementation of the proposed algorithms on an FPGA target, taking into account the application constraints in terms of resources and execution time. (author)
Hydrodynamic Coefficients Identification and Experimental Investigation for an Underwater Vehicle
Directory of Open Access Journals (Sweden)
Shaorong XIE
2014-02-01
Hydrodynamic coefficients are the foundation of unmanned underwater vehicle modeling and controller design. In order to reduce identification complexity and acquire the hydrodynamic coefficients necessary for controller design, the motion of the unmanned underwater vehicle was separated into vertical and horizontal motion models. Hydrodynamic coefficients were regarded as the parameters mapping input forces and moments to output velocities and accelerations of the vehicle. The motion models of the unmanned underwater vehicle are nonlinear, and a genetic algorithm was adopted to identify the hydrodynamic coefficients. To verify the identification quality, velocities and accelerations of the vehicle were measured with an inertial sensor under the same conditions used for the genetic algorithm identification. The similarity between the measured velocity and acceleration curves and those reproduced with the identified coefficients was used as the optimization criterion. The curve similarity was found to be high, and the identified hydrodynamic coefficients reproduced the measured motion states of the vehicle well.
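The identification idea can be sketched with a hypothetical one-degree-of-freedom surge model with quadratic drag (this toy model and all parameter values are assumptions, not the authors' vehicle model): an elitist genetic algorithm recovers the drag coefficient by matching simulated to "measured" velocity curves.

```python
import random

def simulate(d, F=10.0, m=50.0, dt=0.1, steps=100):
    """Forward-integrate the surge equation m*dv/dt = F - d*v*|v| from rest."""
    v, out = 0.0, []
    for _ in range(steps):
        v += dt * (F - d * v * abs(v)) / m
        out.append(v)
    return out

def fitness(d, measured):
    """Negative squared error between simulated and measured velocity curves."""
    return -sum((a - b) ** 2 for a, b in zip(simulate(d), measured))

def ga_identify(measured, pop=20, gens=60, rng=None):
    """Elitist GA: keep the better half, refill with mutated copies of it."""
    rng = rng or random.Random()
    P = [rng.uniform(0.1, 20.0) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda d: fitness(d, measured), reverse=True)
        elite = P[: pop // 2]
        P = elite + [max(0.01, rng.choice(elite) + rng.gauss(0.0, 0.5))
                     for _ in elite]
    return max(P, key=lambda d: fitness(d, measured))
```

The curve-similarity criterion from the abstract corresponds to the `fitness` function here; in the real problem the scalar `d` becomes a vector of hydrodynamic coefficients.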
Millet, Bertrand; Pinazo, Christel; Banaru, Daniela; Pagès, Rémi; Guiart, Pierre; Pairaud, Ivane
2018-01-01
Our study highlights the Lagrangian transport of solid particles discharged at the Marseille Wastewater Treatment Plant (WWTP), located at Cortiou on the southern coastline. We focused on episodic situations characterized by a coastal circulation pattern induced by intrusion events of the Northern Current (NC) on the continental shelf, associated with SE wind regimes. Using the MARS3D-RHOMA and ICHTHYOP models, we computed the trajectories of a patch of 5×10⁴ passive, conservative fine particles released at the WWTP outlet during two representative periods of NC intrusion, in June 2008 and October 2011, associated with S-SE and E-SE winds, respectively. Unexpectedly, the particles reaching the vulnerable shorelines of the northern and southern bays accounted for 21.2% and 46.3% of the initial WWTP patch in June 2008 and October 2011, respectively. Finally, a conceptual diagram is proposed to highlight the mechanisms of dispersion within the bays of Marseille of the fine particles released at the WWTP outlet, which have long been underestimated.
DEFF Research Database (Denmark)
Hou, Peng; Hu, Weihao; Soltani, Mohsen
2015-01-01
With the increasing size of wind farms, the impact of the wake effect on wind farm energy yields becomes more and more evident. The arrangement of the wind turbines' (WT) locations influences the capital investment and contributes to the wake losses which reduce energy production. As a consequence, optimized placement of the wind turbines may be achieved by considering the wake effect as well as the component costs within the wind farm. In this paper, a mathematical model which includes the variation of both wind direction and wake deficit is proposed. The problem is formulated using the Levelized Production Cost (LPC) as the objective function. The optimization is performed with a Particle Swarm Optimization (PSO) algorithm with the purpose of maximizing the energy yield while minimizing the total investment. The simulation results indicate that the proposed method is effective...
EMHP: an accurate automated hole masking algorithm for single-particle cryo-EM image processing.
Berndsen, Zachary; Bowman, Charles; Jang, Haerin; Ward, Andrew B
2017-12-01
The Electron Microscopy Hole Punch (EMHP) is a streamlined suite of tools for the quick assessment, sorting and hole masking of electron micrographs. With recent advances in single-particle electron cryo-microscopy (cryo-EM) data processing allowing the rapid determination of protein structures with a smaller computational footprint, we saw the need for a fast and simple tool for data pre-processing that could run independently of existing high-performance computing (HPC) infrastructures. EMHP provides a data preprocessing platform in a small package that requires minimal Python dependencies to function. https://www.bitbucket.org/chazbot/emhp Apache 2.0 License. bowman@scripps.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
Directory of Open Access Journals (Sweden)
E. G. Dada
2017-04-01
Acute damage to the retinal vessels has been identified as the main cause of blindness and impaired vision worldwide. Timely detection and control of these illnesses can greatly decrease the number of cases of sight loss. Developing a high-performance unsupervised retinal vessel segmentation technique remains an uphill task. This paper presents a study of the Primal-Dual Asynchronous Particle Swarm Optimisation (pdAPSO) method for the segmentation of retinal vessels. A maximum average accuracy of 0.9243, with an average specificity of 0.9834 and an average sensitivity of 0.5721, was achieved on the DRIVE database. The proposed method produces higher mean sensitivity and accuracy rates within the same range of very good specificity.
Applying Sequential Particle Swarm Optimization Algorithm to Improve Power Generation Quality
Directory of Open Access Journals (Sweden)
Abdulhafid Sallama
2014-10-01
Swarm optimization is a heuristic search approach whose mechanics are inspired by the swarming or collaborative behaviour of biological populations. It is used to solve constrained, unconstrained, continuous and discrete problems. Swarm intelligence systems are widely used and very effective in solving standard and large-scale optimization problems, provided that the problem does not require multiple solutions. In this paper, a particle swarm optimisation technique is used to optimise a fuzzy logic controller (FLC) for stabilising a power generation and distribution network that consists of four generators. The system is subject to different types of faults (single and multi-phase). Simulation studies show that the optimised FLC performs well in stabilising the network after it recovers from a fault. The controller is compared with multi-band and standard controllers.
International Nuclear Information System (INIS)
Parvin, Dan; Clarke, Sean
2015-01-01
Particle Swarm Imaging (PSIM) overcomes some of the challenges associated with the accurate declaration of measurement uncertainties of radionuclide inventories within waste items when the distribution of activity is unknown. Implementation requires minimal equipment, making use of gamma‑ray measurements taken from different locations around the waste item, using only a single electrically cooled HRGS gamma‑ray detector for objects up to a UK ISO freight container in size. The PSIM technique is a computational method that iteratively ‘homes‑in’ on the true location of activity concentrations in waste items. PSIM differs from conventional assay techniques by allowing only viable solutions - that is those that could actually give rise to the measured data - to be considered. Thus PSIM avoids the drawback of conventional analyses, namely, the adoption of unrealistic assumptions about the activity distribution that inevitably leads to the declaration of pessimistic (and in some cases optimistic) activity estimates and uncertainties. PSIM applies an optimisation technique based upon ‘particle swarming’ methods to determine a set of candidate solutions within a ‘search space’ defined by the interior volume of a waste item. The positions and activities of the swarm are used in conjunction with a mathematical model to simulate the measurement response for the current swarm location. The swarm is iteratively updated (with modified positions and activities) until a match with sufficient quality is obtained between the simulated and actual measurement data. This process is repeated to build up a distribution of candidate solutions, which is subsequently analysed to calculate a measurement result and uncertainty along with a visual image of the activity distribution. The application of ‘swarming’ computational methods to non‑destructive assay (NDA) measurements is considered novel and this paper is intended to introduce the PSIM concept and provide
Directory of Open Access Journals (Sweden)
Jing Li
2017-01-01
The goal of this study is to improve thermal comfort and indoor air quality with the adaptive network-based fuzzy inference system (ANFIS) model and an improved particle swarm optimization (PSO) algorithm. A method to optimize air conditioning parameters and installation distance is proposed. The methodology is demonstrated through a prototype case, which corresponds to a typical laboratory in colleges and universities. A laboratory model is established, and simulated flow field information is obtained with CFD software. Subsequently, the ANFIS model is employed instead of the CFD model to predict indoor flow parameters, and the CFD database is utilized to train ANN input-output “metamodels” for the subsequent optimization. With the improved PSO algorithm and the stratified sequence method, the objective functions, comprising PMV, PPD, and the mean age of air, are optimized. The optimal installation distance is determined with the hemisphere model. Results show that most of the staff obtain a satisfactory degree of thermal comfort and that the proposed method can significantly reduce the cost of building an experimental device. The proposed methodology can be used to determine appropriate air supply parameters and the air conditioner installation position for a pleasant and healthy indoor environment.
Chuang, Li-Yeh; Lane, Hsien-Yuan; Lin, Yu-Da; Lin, Ming-Teng; Yang, Cheng-Hong; Chang, Hsueh-Wei
2014-01-01
Facial emotion perception (FEP) can affect social function. We previously reported that parts of five tested single-nucleotide polymorphisms (SNPs) in the MET and AKT1 genes may individually affect FEP performance. However, the effects of SNP-SNP interactions on FEP performance remain unclear. This study compared patients with high and low FEP performances (n = 89 and 93, respectively). A particle swarm optimization (PSO) algorithm was used to identify the best SNP barcodes (i.e., the SNP combinations and genotypes that revealed the largest differences between the high and low FEP groups). The analyses of individual SNPs showed no significant differences between the high and low FEP groups. However, comparisons of multiple SNP-SNP interactions involving different combinations of two to five SNPs showed that the best PSO-generated SNP barcodes were significantly associated with high FEP score. The analyses of the joint effects of the best SNP barcodes for two to five interacting SNPs also showed that the best SNP barcodes had significantly higher odds ratios (2.119 to 3.138; P < 0.05) compared to other SNP barcodes. In conclusion, the proposed PSO algorithm effectively identifies the best SNP barcodes that have the strongest associations with FEP performance. This study also proposes a computational methodology for analyzing complex SNP-SNP interactions in social cognition domains such as recognition of facial emotion.
Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk
2018-04-20
The threat posed by infrared (IR) detection is greater than that posed by other sensing modalities such as radar or sonar, because an object detected by an IR sensor cannot easily recognize that it is being detected. Recently, research on actively reducing IR signals has been conducted to control the IR signature by adjusting the surface temperature of the object. In this paper, we propose an active IR stealth algorithm to synchronize the IR signals from the object and the background around it. The proposed method uses the repulsive particle swarm optimization statistical algorithm to estimate the IR stealth surface temperature that synchronizes the IR signals from the object and the surrounding background by setting the inverse-distance-weighted contrast radiant intensity (CRI) equal to zero. We tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene to verify the proposed method. Our results show that the inverse-distance-weighted active IR stealth technique proposed in this study is an effective method, reducing the contrast radiant intensity between the object and background by up to 32% compared to the previous method, which computed the CRI as the simple signal difference between the object and the background.
International Nuclear Information System (INIS)
Kıran, Mustafa Servet; Özceylan, Eren; Gündüz, Mesut; Paksoy, Turan
2012-01-01
Highlights: ► PSO and ACO algorithms are hybridized for forecasting the energy demand of Turkey. ► Linear and quadratic forms are developed to capture the fluctuations of the indicators. ► GDP, population, export and import have significant impacts on energy demand. ► The quadratic form provides a better fit than the linear form. ► The proposed approach gives a lower estimation error than ACO and PSO separately. - Abstract: This paper proposes a new hybrid method (HAP) for estimating the energy demand of Turkey using Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO). The proposed energy demand model (HAPE) is the first to integrate these two meta-heuristic techniques. PSO, developed for solving continuous optimization problems, is a population-based stochastic technique, while ACO, which simulates the behaviour of real ants between nest and food source, is generally used for discrete optimization. The hybrid PSO-ACO method is developed to estimate energy demand using gross domestic product (GDP), population, import and export. HAPE is developed in two forms, linear (HAPEL) and quadratic (HAPEQ). Future energy demand is estimated under different scenarios. In order to show the accuracy of the algorithm, a comparison is made with ACO and PSO models developed for the same problem. According to the results obtained, the relative estimation errors of the HAPE model are the lowest, and the quadratic form (HAPEQ) provides better-fit solutions due to the fluctuations of the socio-economic indicators.
Ma, Denglong; Tan, Wei; Zhang, Zaoxiao; Hu, Jun
2017-03-05
In order to identify the parameters of a hazardous gas emission source in the atmosphere with little prior information and reliable probability estimation, a hybrid algorithm coupling Tikhonov regularization with particle swarm optimization (PSO) was proposed. When the source location is known, the source strength can be estimated successfully by the common Tikhonov regularization method, but this method fails when information about both source strength and location is absent. Therefore, a hybrid method combining linear Tikhonov regularization and the PSO algorithm was designed. With this method, the nonlinear inverse dispersion model is transformed to a linear form under some assumptions, and the source parameters, including source strength and location, are identified simultaneously by the linear Tikhonov-PSO regularization method. The regularization parameters were selected by the L-curve method. The estimation results with different regularization matrices showed that the confidence interval with a high-order regularization matrix is narrower than that with a zero-order regularization matrix, while the estimates of the source parameters are close to each other for the different regularization matrices. A nonlinear Tikhonov-PSO hybrid regularization was also designed, using the primary nonlinear dispersion model, to estimate the source parameters. Comparisons on simulated and experimental cases showed that the linear Tikhonov-PSO method with the transformed linear inverse model has higher computational efficiency than the nonlinear Tikhonov-PSO method, and its confidence intervals are more reasonable than those of the nonlinear method. The estimates from the linear Tikhonov-PSO method are similar to those from the single PSO algorithm, but Tikhonov-PSO can additionally give a reasonable confidence interval at given probability levels. Therefore, the presented linear Tikhonov-PSO regularization method is a good potential method for hazardous emission
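When the source location is fixed, the strength-estimation step reduces to linear least squares with Tikhonov regularization. A minimal numpy sketch (the matrix A stands in for a linearized dispersion forward operator, which is an assumption here; the PSO layer would wrap this solve while searching over candidate locations):

```python
import numpy as np

def tikhonov(A, y, lam, L=None):
    """Tikhonov-regularized solution of A q ~ y:
    minimizes ||A q - y||^2 + lam^2 * ||L q||^2 via the normal equations.
    L = identity gives the zero-order form; a difference matrix gives
    the higher-order forms compared in the abstract."""
    L = np.eye(A.shape[1]) if L is None else L
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ y)
```

For small lam the solution approaches the unregularized least-squares fit; increasing lam trades data fidelity for stability, and the L-curve method picks the corner of that trade-off.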
Wu, Huafeng; Mei, Xiaojun; Chen, Xinqiang; Li, Junjun; Wang, Jun; Mohapatra, Prasant
2018-07-01
Maritime search and rescue (MSR) plays a significant role in Safety of Life at Sea (SOLAS). However, it suffers from scenarios in which the measurement information is inaccurate due to the wave shadow effect when utilizing wireless sensor network (WSN) technology. In this paper, we develop a Novel Cooperative Localization Algorithm (NCLA) for MSR by using an enhanced particle filter method to reduce the measurement errors in the observation model caused by the wave shadow effect. First, we account for the mobility of nodes at sea with a motion model, the Lagrangian model. Furthermore, we introduce both a state model and an observation model to constitute the system model for the particle filter (PF). To address the impact of the wave shadow effect on the observation model, we derive an optimal parameter via the Kullback-Leibler divergence (KLD) to mitigate the error. After the optimal parameter is acquired, an improved likelihood function is presented. Finally, the estimated position is acquired. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
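The enhanced particle filter above builds on the standard sequential-importance-resampling loop. A generic one-dimensional sketch (the motion and log-likelihood functions here are placeholders, not the paper's Lagrangian model or improved likelihood):

```python
import math
import random

def pf_step(particles, weights, move, loglik, rng):
    """One sequential-importance-resampling step:
    propagate each particle, reweight by the observation likelihood,
    then resample systematically back to uniform weights."""
    particles = [move(p, rng) for p in particles]
    w = [wi * math.exp(loglik(p)) for p, wi in zip(particles, weights)]
    total = sum(w)
    w = [wi / total for wi in w]
    # systematic resampling: one random offset, n evenly spaced pointers
    n = len(particles)
    u = rng.random() / n
    cum, out, j = w[0], [], 0
    for i in range(n):
        target = u + i / n
        while cum < target:
            j += 1
            cum += w[j]
        out.append(particles[j])
    return out, [1.0 / n] * n
```

The KLD-derived parameter described in the abstract would enter through `loglik`, flattening or sharpening the likelihood to compensate for wave-shadow measurement error.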
Disruptive Innovation in Numerical Hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Waltz, Jacob I. [Los Alamos National Laboratory
2012-09-06
We propose the research and development of a high-fidelity hydrodynamic algorithm for tetrahedral meshes that will lead to a disruptive innovation in the numerical modeling of Laboratory problems. Our proposed innovation has the potential to reduce turnaround time by orders of magnitude relative to Advanced Simulation and Computing (ASC) codes; reduce simulation setup costs by millions of dollars per year; and effectively leverage Graphics Processing Unit (GPU) and future Exascale computing hardware. If successful, this work will lead to a dramatic leap forward in the Laboratory's quest for a predictive simulation capability.
Kumar, Gaurav; Kumar, Ashok
2017-11-01
Structural control has gained significant attention in recent times. The standalone issue of the power requirement during an earthquake has already been solved to a large extent by designing semi-active control systems using conventional linear quadratic control theory and many other intelligent control algorithms, such as fuzzy controllers and artificial neural networks. In conventional linear-quadratic regulator (LQR) theory, the values of the design parameters are decided at the time of designing the controller and cannot be subsequently altered. During an earthquake event, the response of the structure may increase or decrease depending on the quasi-resonance occurring between the structure and the earthquake. In this case, it is essential to modify the values of the design parameters of the conventional LQR controller to obtain the optimum control force to mitigate the vibrations due to the earthquake. A few studies have addressed this issue, but all of them required a database of earthquakes to be maintained. To solve this problem and to find the optimized design parameters of the LQR controller in real time, a fast Fourier transform and particle swarm optimization based modified linear quadratic regulator method is presented here. This method comprises four different algorithms: particle swarm optimization (PSO), the fast Fourier transform (FFT), a clipped control algorithm and the LQR. The FFT helps to obtain the dominant frequency for every time window. PSO finds the optimum gain matrix through a real-time update of the weighting matrix R, thereby dispensing with experimentation. The clipped control law is employed to match the magnetorheological (MR) damper force with the desired force given by the controller. The modified Bouc-Wen phenomenological model is used to capture the nonlinearities of the MR damper. The proposed method is assessed by simulation of a three-story structure
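The per-window frequency-tracking step is straightforward to sketch with numpy (window length and sampling rate below are illustrative, not the paper's choices):

```python
import numpy as np

def dominant_frequency(window, fs):
    """Dominant frequency of one time window via the FFT magnitude spectrum --
    the quantity used above to adapt the LQR weighting matrix per window."""
    spec = np.abs(np.fft.rfft(window))
    spec[0] = 0.0                       # ignore the DC component
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return freqs[int(np.argmax(spec))]
```

The frequency resolution is fs divided by the window length, so the window must be long enough to separate the structure's modal frequencies from the excitation's dominant frequency.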
On the hydrodynamics and the scale-up of flotation processes
International Nuclear Information System (INIS)
Schubert, H.
1986-01-01
In flotation machines, turbulence is process-determining. Macroturbulence is necessary for suspension; microturbulence controls air dispersion, the rate of particle-bubble collisions and the stresses on agglomerates. Consequently, the hydrodynamic optimization of flotation processes plays an important role in flotation efficiency. In the paper the following aspects are considered: the turbulent microprocesses of flotation; the integral hydrodynamic characterization of flotation processes; correlations between particle size and optimum hydrodynamics; correlations between flocculation of fine particles and optimum hydrodynamics; and the hydrodynamic scale-up of flotation processes.
A particle swarm-based algorithm for optimization of multi-layered and graded dental ceramics.
Askari, Ehsan; Flores, Paulo; Silva, Filipe
2018-01-01
The thermal residual stresses (TRSs) generated during cooling from the processing temperature in layered ceramic systems can lead to crack formation and can influence the bending stress distribution and the strength of the structure. The purpose of this study is to minimize the thermal residual and bending stresses in dental ceramics to enhance their strength and to prevent structural failure. Analytical parametric models are developed to evaluate thermal residual stresses in zirconia-porcelain multi-layered and graded discs and to simulate the piston-on-ring test. To identify optimal designs of zirconia-based dental restorations, a particle swarm optimizer is also developed. The thickness of each interlayer and the compositional distribution are taken as design variables. The effect of the number of layers constituting the interlayer between the two base materials on the performance of graded prosthetic systems is also investigated. The developed methodology is validated against results available in the literature and a finite element model constructed in the present study. Three different cases are considered to determine the optimal design of a graded prosthesis, based on minimizing (a) TRSs; (b) bending stresses; and (c) both TRSs and bending stresses. It is demonstrated that each layer's thickness and composition profile contribute significantly to the resulting stress field and magnitude. Copyright © 2017 Elsevier Ltd. All rights reserved.
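A global-best particle swarm optimizer of the kind used here can be sketched compactly. The `stress_proxy` objective below is a hypothetical stand-in for the paper's analytical TRS/bending-stress models, chosen only so the example has a known minimum (a thickness of 0.4 and a grading exponent of 1.2 are invented values):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(objective, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO; bounds is a sequence of (lo, hi) per design variable."""
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Hypothetical stand-in for the paper's stress models: a smooth penalty whose
# minimum sits at an interlayer thickness of 0.4 and grading exponent of 1.2.
def stress_proxy(d):
    thickness, exponent = d
    return (thickness - 0.4) ** 2 + 0.5 * (exponent - 1.2) ** 2

best, fmin = pso_minimize(stress_proxy, [(0.1, 1.0), (0.5, 3.0)])
print(best)  # close to [0.4, 1.2]
```

In the study itself, the design vector would hold the interlayer thicknesses and composition-profile parameters, and the objective would be the analytical TRS and/or bending-stress evaluation.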
Algorithm for Wave-Particle Resonances in Fluid Codes - Final Report
International Nuclear Information System (INIS)
Mattor, N.
2000-01-01
We review the work performed under LDRD ER grant 98-ERD-099. The goal of this work is to write a subroutine for a fluid turbulence code that allows it to incorporate wave-particle resonances (WPR). WPR historically have required a kinetic code, with extra dimensions needed to evolve the phase-space distribution function, f(x, v, t). The main results accomplished under this grant have been: (1) derivation of a nonlinear closure term for a 1D electrostatic collisionless fluid; (2) writing of a 1D electrostatic fluid code, "es1f", with a subroutine to calculate the aforementioned closure term; (3) derivation of several methods to calculate the closure term, including Eulerian, Euler-local, fully local, linearized, and linearized zero-phase-velocity, and implementation of these in es1f; (4) successful modeling of the Landau damping of an arbitrary Langmuir wave; (5) successful description of a kinetic two-stream instability up to the point of the first bounce; and (6) a spin-off project which uses a mathematical technique developed for the closure, known as the Phase Velocity Transform (PVT), to decompose turbulent fluctuations.
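The report's PVT closure is not reproduced here, but the general idea of a fluid closure for Landau damping can be illustrated with the well-known Hammett-Perkins closure, which replaces the kinetic heat flux by an operator that is diagonal in Fourier space (a sketch under that substitution, not the report's method):

```python
import numpy as np

def landau_closure_heat_flux(T, dx, n0=1.0, vt=1.0):
    """Hammett-Perkins Landau-fluid closure (not the report's PVT closure):
    q_k = -n0 * chi1 * sqrt(2) * vt * i * sign(k) * T_k, with chi1 = 2/sqrt(pi),
    evaluated spectrally on a periodic 1D grid."""
    chi1 = 2.0 / np.sqrt(np.pi)
    Tk = np.fft.fft(T)
    k = 2.0 * np.pi * np.fft.fftfreq(len(T), d=dx)
    qk = np.zeros_like(Tk)
    nz = k != 0
    qk[nz] = -n0 * chi1 * np.sqrt(2.0) * vt * 1j * np.sign(k[nz]) * Tk[nz]
    return np.fft.ifft(qk).real

# A cosine temperature perturbation yields a sine heat flux: the 90-degree
# phase shift is the signature of dissipative (Landau) rather than
# diffusive transport.
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
q = landau_closure_heat_flux(np.cos(x), x[1] - x[0])
```

This captures, in a fluid model, the same physics the report attacks with its closure term: resonant damping without evolving f(x, v, t).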
Elasto-hydrodynamic lubrication
Dowson, D; Hopkins, D W
1977-01-01
Elasto-Hydrodynamic Lubrication deals with the mechanism of elasto-hydrodynamic lubrication, that is, the lubrication regime in operation over the small areas where machine components are in nominal point or line contact. The lubrication of rigid contacts is discussed, along with the effects of high pressure on the lubricant and bounding solids. The governing equations for the solution of elasto-hydrodynamic problems are presented. Comprised of 13 chapters, this volume begins with an overview of elasto-hydrodynamic lubrication and the representation of contacts by cylinders, followed by a discussion
Elementary classical hydrodynamics
Chirgwin, B H; Langford, W J; Maxwell, E A; Plumpton, C
1967-01-01
Elementary Classical Hydrodynamics deals with the fundamental principles of elementary classical hydrodynamics, with emphasis on the mechanics of inviscid fluids. Topics covered by this book include direct use of the equations of hydrodynamics, potential flows, two-dimensional fluid motion, waves in liquids, and compressible flows. Some general theorems such as Bernoulli's equation are also considered. This book is comprised of six chapters and begins by introducing the reader to the fundamental principles of fluid hydrodynamics, with emphasis on ways of studying the motion of a fluid. Basic c
Yang, Minglin; Wu, Yueqian; Sheng, Xinqing; Ren, Kuan Fang
2017-12-01
Computation of scattering of shaped beams by large nonspherical particles is a challenge in both the optics and electromagnetics domains, since it concerns many research fields. In this paper, we report our new progress in the numerical computation of scattering diagrams. Our algorithm can calculate the scattering of a particle as large as 110 wavelengths, or about 700 in size parameter. The particle can be transparent or absorbing, of arbitrary shape, smooth or with a sharp surface, such as Chebyshev particles or ice crystals. To illustrate the capacity of the algorithm, a zero-order Bessel beam is taken as the incident beam, and the scattering of ellipsoidal and Chebyshev particles is taken as an example. Some special phenomena have been revealed and examined. The scattering problem is formulated with the combined tangential formulation and solved iteratively with the aid of the multilevel fast multipole algorithm, which is well parallelized with the message passing interface on a distributed-memory computer platform using a hybrid partitioning strategy. The numerical predictions are compared with the results of the rigorous method for a spherical particle to validate the accuracy of the approach. The scattering diagrams of large ellipsoidal particles with various parameters are examined. The effects of the aspect ratio, the half-cone angle of the incident zero-order Bessel beam and the off-axis distance on the scattered intensity are studied. Scattering by an asymmetric Chebyshev particle with size parameter larger than 700 is also given to show the capability of the method for computing scattering by arbitrarily shaped particles.
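The two size figures quoted above are consistent under the usual definition of the size parameter, x = 2πa/λ, if the quoted "size" is read as the radius a (an assumption on our part):

```python
import numpy as np

# Size parameter x = 2*pi*a / wavelength. Reading the quoted "size" of
# 110 wavelengths as the radius a gives x = 2*pi*110, i.e. about 691,
# consistent with the "about 700 in size parameter" figure above.
x = 2.0 * np.pi * 110.0
print(round(x))  # 691
```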
Hydrodynamic cavitation: a bottom-up approach to liquid aeration
Raut, J.S.; Stoyanov, S.D.; Duggal, C.; Pelan, E.G.; Arnaudov, L.N.; Naik, V.M.
2012-01-01
We report the use of hydrodynamic cavitation as a novel, bottom-up method for the continuous creation of foams comprising air microbubbles in aqueous systems containing surface-active ingredients, such as proteins or particles. The hydrodynamic cavitation was created using a converging-diverging nozzle.
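Whether a nozzle flow cavitates is conventionally characterized by the cavitation number; a minimal sketch (the 20 m/s throat velocity and water properties below are illustrative values, not the paper's operating point):

```python
def cavitation_number(p, p_vapor, rho, v):
    """Cavitation number sigma = (p - p_v) / (0.5 * rho * v^2). In a
    converging-diverging nozzle, cavitation sets in roughly when the throat
    velocity drives sigma toward unity (an illustrative rule of thumb)."""
    return (p - p_vapor) / (0.5 * rho * v * v)

# Water at 25 C (vapor pressure ~3.17 kPa) discharging at 20 m/s under 1 atm
print(round(cavitation_number(101325.0, 3170.0, 998.0, 20.0), 2))  # 0.49
```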
Directory of Open Access Journals (Sweden)
Jie-Sheng Wang
2015-01-01
For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining particle swarm optimization (PSO) and the gravitational search algorithm (GSA) is proposed. Although GSA has good optimization capability, it converges slowly and easily falls into local optima. In this paper, the velocity and position vectors of GSA are therefore adjusted by the PSO algorithm to improve convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensor requirements of real-time control in the flotation process.
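The hybrid update described above, GSA's gravitational acceleration combined with a PSO-style pull toward the global best, can be sketched as follows (the constants and the sphere test function are illustrative; this is the general PSOGSA pattern, not the paper's exact settings):

```python
import numpy as np

rng = np.random.default_rng(1)

def psogsa_minimize(f, bounds, n=30, iters=150, w=0.6, c1=0.8, c2=1.6, G0=1.0):
    """Hybrid update in the spirit of the paper: GSA's gravitational
    acceleration drives exploration while a PSO-style pull toward the global
    best accelerates convergence. Constants are illustrative defaults."""
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n, len(bounds)))
    v = np.zeros_like(x)
    gbest, gbest_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(p) for p in x])
        if fit.min() < gbest_f:
            gbest_f, gbest = fit.min(), x[fit.argmin()].copy()
        # GSA masses: best particle heaviest, worst (near-)massless
        m = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
        M = m / (m.sum() + 1e-12)
        G = G0 * np.exp(-5.0 * t / iters)  # gravity decays over iterations
        acc = np.zeros_like(x)
        for i in range(n):
            d = x - x[i]
            r = np.linalg.norm(d, axis=1)[:, None] + 1e-9
            acc[i] = np.sum(rng.random((n, 1)) * G * M[:, None] * d / r, axis=0)
        # the hybrid step: GSA acceleration inside a PSO velocity update
        v = (w * v + c1 * rng.random(x.shape) * acc
             + c2 * rng.random(x.shape) * (gbest - x))
        x = np.clip(x + v, lo, hi)
    return gbest, gbest_f

best, val = psogsa_minimize(lambda p: float(np.sum(p * p)), [(-5.0, 5.0)] * 2)
print(best, val)
```

In the paper, the quantity being optimized is not a test function but the FNN soft-sensor weights, scored by prediction error on flotation data.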
Filter-Feeding Zoobenthos and Hydrodynamics
DEFF Research Database (Denmark)
Riisgård, Hans Ulrik; Larsen, Poul Scheel
2017-01-01
interplay between benthic filter feeders and hydrodynamics. Starting from the general concept of grazing potential and typical data on benthic population densities its realization is considered, first at the level of the individual organism through the processes of pumping and trapping of food particles...
Microflow Cytometers with Integrated Hydrodynamic Focusing
Directory of Open Access Journals (Sweden)
Martin Schmidt
2013-04-01
This study demonstrates the suitability of microfluidic structures for high-throughput blood cell analysis. The microfluidic chips exploit fully integrated hydrodynamic focusing based on two different concepts: two-stage cascade focusing and spin focusing (vortex principle). The sample, a suspension of microparticles or blood cells, is injected into a sheath fluid streaming at a substantially higher flow rate, which assures positioning of the particles in the center of the flow channel. Particle velocities of a few m/s are achieved, as required for high-throughput blood cell analysis. The stability of hydrodynamic particle positioning was evaluated by measuring the pulse-height distributions of fluorescence signals from calibration beads. Quantitative assessment based on the coefficient of variation of the fluorescence intensity distributions yielded a value of about 3% for the micro-device exploiting cascade hydrodynamic focusing. For the spin-focusing approach, similar values were achieved at sample flow rates 1.5 times lower. Our results indicate that both variants of hydrodynamic focusing are suitable for blood cell differentiation and counting. The potential of the micro flow cytometer is demonstrated by detecting immunologically labeled CD3-positive and CD4-positive T-lymphocytes in blood.
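The ~3% figure quoted above is a coefficient of variation of the pulse-height distribution, i.e. standard deviation over mean. A sketch with simulated bead signals (the mean and spread are invented to mimic the quoted result, not measured data):

```python
import numpy as np

def coefficient_of_variation(pulse_heights):
    """CV (%) of a pulse-height distribution: 100 * std / mean."""
    h = np.asarray(pulse_heights, dtype=float)
    return 100.0 * h.std() / h.mean()

# Simulated calibration-bead fluorescence signals: mean 1000 a.u. with a
# 3% relative spread, mimicking the cascade-focusing figure quoted above.
rng = np.random.default_rng(42)
signals = rng.normal(1000.0, 30.0, size=10000)
print(coefficient_of_variation(signals))  # about 3
```

A low CV indicates that particles pass the detection spot on nearly identical trajectories, which is exactly what stable hydrodynamic focusing provides.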
Directory of Open Access Journals (Sweden)
Deyun Wang
2018-04-01
China's natural gas consumption increased at an average annual growth rate of about 10% between 2012 and 2017. Natural gas accounted for 6.4% of consumed primary energy resources in 2016, up from 5.4% in 2012, making China the world's third-largest gas user. Accurately predicting natural gas consumption has therefore become very important for market participants in organizing indigenous production, foreign supply contracts and infrastructure. This paper first presents the main factors affecting China's natural gas consumption, and then proposes a hybrid forecasting model combining the particle swarm optimization algorithm and a wavelet neural network (PSO-WNN). In the PSO-WNN model, the initial weights and wavelet parameters are optimized using the PSO algorithm and updated through a dynamic learning rate to improve the training speed and forecasting precision and to reduce fluctuation of the WNN. The experimental results show the superiority of the proposed model compared with ANN- and WNN-based models. The study then conducts a scenario analysis of natural gas consumption in China from 2017 to 2025 under three scenarios, namely a low, a reference and a high scenario; the results indicate that China's natural gas consumption will reach 342.70, 358.27 and 366.42 million tce (tons of coal equivalent) in 2020, and 407.01, 437.95 and 461.38 million tce in 2025, under the low, reference and high scenarios, respectively. Finally, the paper provides policy suggestions on natural gas exploration and development, infrastructure construction and technical innovation to promote the sustainable development of China's natural gas industry.
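A wavelet neural network differs from an ordinary FNN in that its hidden units apply a mother wavelet (commonly Morlet) to dilated and translated projections of the input; the dilation, translation and weight parameters are the quantities PSO initializes in this model. A minimal forward pass (our own sketch with random, untrained parameters, not the paper's network):

```python
import numpy as np

def morlet(t):
    """Morlet mother wavelet, a common WNN hidden-unit activation."""
    return np.cos(1.75 * t) * np.exp(-0.5 * t ** 2)

def wnn_forward(x, w_in, a, b, w_out):
    """Forward pass of a one-hidden-layer wavelet neural network: each hidden
    'wavelon' applies the wavelet to a dilated (a) and translated (b)
    projection of the input."""
    z = x @ w_in              # (n_samples, n_hidden) input projections
    h = morlet((z - b) / a)   # wavelet activations
    return h @ w_out          # one prediction per sample

# Shapes only -- weights are random and untrained.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))                  # 5 samples, 3 input factors
y = wnn_forward(x, rng.normal(size=(3, 4)),  # 4 hidden wavelons
                np.ones(4), np.zeros(4), rng.normal(size=4))
print(y.shape)  # (5,)
```

In the paper's pipeline, PSO searches over (w_in, a, b, w_out) for a good starting point, after which gradient training with a dynamic learning rate refines them.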
International Nuclear Information System (INIS)
Medeiros, Jose Antonio Carlos Canedo; Machado, Marcelo Dornellas; Lima, Alan Miranda M. de; Schirru, Roberto
2007-01-01
Predictive control systems use a model of the controlled system (plant) to predict its future behavior, allowing anticipative control based on a future condition of the plant, together with an optimizer that, considering a future time horizon of the plant output and a recent horizon of the control action, determines the controller outputs that optimize a performance index of the controlled plant. Predictive control does not require an analytical model of the plant; the predictor model can be learned from historical plant operation data. The optimizer of the predictive controller establishes the control strategy: present and future control actions are computed so as to minimize a performance index (objective function). This strategy induces an optimal control mechanism whose effect is to reduce the stabilization time, the overshoot and undershoot, and the control actuation, so that a compromise among those objectives is attained. The optimizer of a predictive controller is usually implemented using gradient-based algorithms. In this work we instead use the particle swarm optimization (PSO) algorithm in the optimizer component of a predictive controller applied to the control of the xenon oscillation of a pressurized water reactor (PWR). PSO is a stochastic optimization technique applied in several disciplines; it is simple, capable of finding global optima for highly complex problems that are difficult to optimize, and in many cases provides better results than conventional and/or other artificial optimization techniques. (author)
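The receding-horizon loop with a PSO optimizer in place of a gradient-based one can be sketched on a toy plant (the scalar linear plant, horizon and cost below are illustrative; the paper's plant is the PWR xenon dynamics):

```python
import numpy as np

rng = np.random.default_rng(7)

def pso_mpc_step(x0, horizon=5, a=0.9, b=0.5, r=0.01,
                 n=40, iters=60, w=0.7, c1=1.5, c2=1.5):
    """One receding-horizon step: PSO searches the control sequence
    u[0..H-1] minimizing the predicted cost sum(x_k^2 + r*u_k^2) for the toy
    scalar plant x_{k+1} = a*x_k + b*u_k. Only u[0] is applied, as in
    predictive control."""
    def cost(u):
        xk, J = x0, 0.0
        for uk in u:
            xk = a * xk + b * uk
            J += xk * xk + r * uk * uk
        return J
    u = rng.uniform(-2.0, 2.0, (n, horizon))
    v = np.zeros_like(u)
    pb, pbf = u.copy(), np.array([cost(p) for p in u])
    g = pb[pbf.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(u.shape), rng.random(u.shape)
        v = w * v + c1 * r1 * (pb - u) + c2 * r2 * (g - u)
        u = np.clip(u + v, -2.0, 2.0)
        f = np.array([cost(p) for p in u])
        better = f < pbf
        pb[better], pbf[better] = u[better], f[better]
        g = pb[pbf.argmin()].copy()
    return g[0]  # first control action of the best sequence

# Closed loop: the controller should drive the state toward zero.
x = 1.0
for _ in range(10):
    x = 0.9 * x + 0.5 * pso_mpc_step(x)
print(abs(x))
```

Because PSO needs only objective evaluations, the same loop works unchanged when the predictor is a learned (non-analytical) plant model, which is precisely the advantage the abstract highlights over gradient-based optimizers.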