Advective isotope transport by mixing cell and particle tracking algorithms
International Nuclear Information System (INIS)
Tezcan, L.; Meric, T.
1999-01-01
The 'mixing cell' algorithm for environmental isotope data evaluation is integrated with the three-dimensional finite difference groundwater flow model (MODFLOW) to simulate advective isotope transport. The approach is compared with the 'particle tracking' algorithm of MOC3D, which simulates three-dimensional solute transport by the method of characteristics technique.
Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem
Rahmalia, Dinita
2017-08-01
Linear Transportation Problem (LTP) is the case of constrained optimization where we want to minimize cost subject to the balance between the amount of supply and the amount of demand. Exact methods such as the northwest corner, Vogel, Russell and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem at any size of decision variable. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solution obtained by PSO.
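The PSOGA idea described in this abstract, a standard PSO velocity and position update with an added GA-style mutation operator applied to a penalized transportation objective, can be sketched as follows. The cost matrix, supplies, demands, penalty weight, and all swarm parameters are illustrative assumptions, not values from the paper.

```python
import random

# Tiny balanced transportation instance (illustrative data).
COST = [[4, 6, 8], [5, 3, 7]]   # COST[i][j]: unit cost from supply i to demand j
SUPPLY = [30, 40]
DEMAND = [20, 25, 25]

def objective(x):
    # Penalized cost: transport cost plus a penalty for violating balance.
    cost = sum(COST[i][j] * x[i][j] for i in range(2) for j in range(3))
    pen = sum(abs(sum(x[i]) - SUPPLY[i]) for i in range(2))
    pen += sum(abs(sum(x[i][j] for i in range(2)) - DEMAND[j]) for j in range(3))
    return cost + 1000.0 * pen

def pso_ga(n_particles=30, iters=200, seed=1):
    rng = random.Random(seed)
    def new_x():
        return [[rng.uniform(0.0, 40.0) for _ in range(3)] for _ in range(2)]
    xs = [new_x() for _ in range(n_particles)]
    vs = [[[0.0] * 3 for _ in range(2)] for _ in range(n_particles)]
    pbest = [[row[:] for row in x] for x in xs]
    gbest = [row[:] for row in min(xs, key=objective)]
    for _ in range(iters):
        for k in range(n_particles):
            for i in range(2):
                for j in range(3):
                    r1, r2 = rng.random(), rng.random()
                    # standard PSO velocity update (inertia + cognitive + social)
                    vs[k][i][j] = (0.7 * vs[k][i][j]
                                   + 1.5 * r1 * (pbest[k][i][j] - xs[k][i][j])
                                   + 1.5 * r2 * (gbest[i][j] - xs[k][i][j]))
                    xs[k][i][j] = max(0.0, xs[k][i][j] + vs[k][i][j])
                    if rng.random() < 0.05:   # GA-style mutation operator
                        xs[k][i][j] = max(0.0, xs[k][i][j] + rng.gauss(0.0, 2.0))
            if objective(xs[k]) < objective(pbest[k]):
                pbest[k] = [row[:] for row in xs[k]]
            if objective(xs[k]) < objective(gbest):
                gbest = [row[:] for row in xs[k]]
    return gbest, objective(gbest)

best, best_cost = pso_ga()
```

Because the constraints are handled by a penalty term, the returned flows are only approximately balanced; the mutation operator mainly helps the swarm escape local optima of the penalized landscape.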
Energy Technology Data Exchange (ETDEWEB)
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^{21} = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e., the calculation is already load balanced. We also examine load imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
International Nuclear Information System (INIS)
Svensson, Urban
2001-04-01
A particle tracking algorithm, PARTRACK, that simulates transport and dispersion in a sparsely fractured rock is described. The main novel feature of the algorithm is the introduction of multiple particle states. It is demonstrated that the introduction of this feature allows for the simultaneous simulation of Taylor dispersion, sorption and matrix diffusion. A number of test cases are used to verify and demonstrate the features of PARTRACK. It is shown that PARTRACK can simulate the following processes, believed to be important for the problem addressed: the split-up of a tracer cloud at a fracture intersection, channeling in a fracture plane, Taylor dispersion, matrix diffusion and sorption. From the results of the test cases, it is concluded that PARTRACK is an adequate framework for simulation of transport and dispersion of a solute in a sparsely fractured rock.
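A minimal sketch of the multiple-particle-state idea: each tracked particle switches between a mobile state (advection plus a dispersive random kick) and an immobile state standing in for sorption or matrix diffusion, which retards the plume. The state names, rates and velocity below are assumptions for illustration, not values from the PARTRACK report.

```python
import random

MOBILE, IMMOBILE = 0, 1

def track(n_particles=2000, n_steps=500, v=1.0, disp=0.5,
          p_trap=0.02, p_release=0.01, seed=7):
    rng = random.Random(seed)
    x = [0.0] * n_particles        # 1D particle positions
    state = [MOBILE] * n_particles
    for _ in range(n_steps):
        for i in range(n_particles):
            if state[i] == MOBILE:
                # advection plus a random dispersive kick
                x[i] += v + rng.gauss(0.0, disp)
                if rng.random() < p_trap:
                    state[i] = IMMOBILE   # sorption / matrix diffusion
            elif rng.random() < p_release:
                state[i] = MOBILE         # release back to the fracture
    return x

positions = track()
mean_x = sum(positions) / len(positions)
```

With these rates the particles spend roughly a third of the time mobile, so the plume center of mass lags well behind the purely advective distance of 500, illustrating the retardation that the immobile states produce.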
Romano, Paul Kollath
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups particle histories on a single processor into batches for tally purposes; in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with
International Nuclear Information System (INIS)
Hoisie, A.; Lubeck, O.; Wasserman, H.
1998-01-01
The authors develop a model for the parallel performance of algorithms that consist of concurrent, two-dimensional wavefronts implemented in a message passing environment. The model, based on a LogGP machine parameterization, combines the separate contributions of computation and communication wavefronts. They validate the model on three important supercomputer systems, on up to 500 processors. They use data from a deterministic particle transport application taken from the ASCI workload, although the model is general to any wavefront algorithm implemented on a 2-D processor domain. They also use the validated model to make estimates of performance and scalability of wavefront algorithms on 100-TFLOPS computer systems expected to be in existence within the next decade as part of the ASCI program and elsewhere. In this context, the authors analyze two problem sizes. Their model shows that on the largest such problem (1 billion cells), inter-processor communication performance is not the bottleneck. Single-node efficiency is the dominant factor.
International Nuclear Information System (INIS)
Garcia, A.L.; Alexander, F.J.; Alder, B.J.
1997-01-01
The consistent Boltzmann algorithm (CBA) for dense, hard-sphere gases is generalized to obtain the van der Waals equation of state and the corresponding exact viscosity at all densities except at the highest temperatures. A general scheme for adjusting any transport coefficients to higher values is presented.
International Nuclear Information System (INIS)
Ihle, Thomas
2008-01-01
Detailed calculations of the transport coefficients of a recently introduced particle-based model for fluid dynamics with a non-ideal equation of state are presented. Excluded volume interactions are modeled by means of biased stochastic multi-particle collisions which depend on the local velocities and densities. Momentum and energy are exactly conserved locally. A general scheme to derive transport coefficients for such biased, velocity-dependent collision rules is developed. Analytic expressions for the self-diffusion coefficient and the shear viscosity are obtained, and very good agreement is found with numerical results at small and large mean free paths. The viscosity turns out to be proportional to the square root of temperature, as in a real gas. In addition, the theoretical framework is applied to a two-component version of the model, and expressions for the viscosity and the difference in diffusion of the two species are given.
Safari, Mir Jafar Sadegh; Shirzad, Akbar; Mohammadi, Mirali
2017-08-01
May proposed two dimensionless parameters of transport (η) and mobility (F_s) for self-cleansing design of sewers with deposited bed condition. The relationships between those two parameters were introduced in conditional form for specific ranges of F_s, which makes them difficult to use as a practical tool for sewer design. In this study, using the same experimental data used by May and employing the particle swarm optimization algorithm, a unified equation is recommended based on η and F_s. The developed model is compared with the original May relationships as well as corresponding models available in the literature. A large amount of data taken from the literature is used for the models' evaluation. The results demonstrate that the model developed in this study is superior to May's and other existing models in the literature. Because May's dimensionless parameters incorporate the variables most effective in the sediment transport process in sewers with deposited bed condition, it is concluded that the revised May equation proposed in this study is a reliable model for sewer design.
Directory of Open Access Journals (Sweden)
Deep Agnani
Full Text Available P-glycoprotein, a human multidrug resistance transporter, has been extensively studied due to its importance to human health and disease. In order to understand transport kinetics via P-gp, confluent cell monolayers overexpressing P-gp are widely used. The purpose of this study is to obtain the mass action elementary rate constants for P-gp's transport and to functionally characterize members of P-gp's network, i.e., other transporters that transport P-gp substrates in hMDR1-MDCKII confluent cell monolayers and are essential to the net substrate flux. Transport of a range of concentrations of amprenavir, loperamide, quinidine and digoxin across the confluent monolayer of cells was measured in both directions, apical to basolateral and basolateral to apical. We developed a global optimization algorithm using the Particle Swarm method that can simultaneously fit all datasets to yield accurate and exhaustive fits of these elementary rate constants. The statistical sensitivity of the fitted values was determined by using 24 identical replicate fits, yielding simple averages and standard deviations for all of the kinetic parameters, including the efflux active P-gp surface density. Digoxin required additional basolateral and apical transporters, while loperamide required just a basolateral transporter. The data were better fit by assuming bidirectional transporters, rather than active importers, suggesting that they are not MRP or active OATP transporters. The P-gp efflux rate constants for quinidine and digoxin were about 3-fold smaller than reported ATP hydrolysis rate constants from P-gp proteoliposomes. This suggests a roughly 3:1 stoichiometry between ATP hydrolysis and P-gp transport for these two drugs. The fitted values of the elementary rate constants for these P-gp substrates support the hypotheses that the selective pressures on P-gp are to maintain a broad substrate range and to keep xenobiotics out of the cytosol, but not out of the
Scalable Domain Decomposed Monte Carlo Particle Transport
Energy Technology Data Exchange (ETDEWEB)
O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
A Coulomb collision algorithm for weighted particle simulations
Miller, Ronald H.; Combi, Michael R.
1994-01-01
A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
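The equal-weight building block of such binary collision schemes, rotating the relative velocity of a sampled pair so that momentum and kinetic energy are conserved exactly, can be sketched as below (equal masses assumed). The weighted algorithm of the abstract additionally modifies the pairing statistics, which is not reproduced in this sketch.

```python
import math
import random

def scatter_pair(v1, v2, rng):
    """Elastically scatter two equal-mass particles: rotate their relative
    velocity to a random isotropic direction, which conserves momentum and
    kinetic energy exactly. (A real Coulomb kernel would sample a small-angle
    deflection instead of an isotropic one.)"""
    vcm = [(a + b) / 2.0 for a, b in zip(v1, v2)]     # center-of-mass velocity
    vrel = [a - b for a, b in zip(v1, v2)]            # relative velocity
    speed = math.sqrt(sum(c * c for c in vrel))
    cos_t = 1.0 - 2.0 * rng.random()                  # isotropic polar angle
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = 2.0 * math.pi * rng.random()
    new_rel = [speed * sin_t * math.cos(phi),
               speed * sin_t * math.sin(phi),
               speed * cos_t]
    v1p = [c + r / 2.0 for c, r in zip(vcm, new_rel)]
    v2p = [c - r / 2.0 for c, r in zip(vcm, new_rel)]
    return v1p, v2p

rng = random.Random(0)
a, b = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
ap, bp = scatter_pair(a, b, rng)
```

As the abstract notes, applying an unweighted kernel like this directly to weighted particles equilibrates to the wrong temperature; the correction lives entirely in how pairs are selected and how often each partner's velocity change is accepted.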
Partially linearized algorithms in gyrokinetic particle simulation
Energy Technology Data Exchange (ETDEWEB)
Dimits, A.M.; Lee, W.W.
1990-10-01
In this paper, particle simulation algorithms with time-varying weights for the gyrokinetic Vlasov-Poisson system have been developed. The primary purpose is to use them for the removal of the selected nonlinearities in the simulation of gradient-driven microturbulence so that the relative importance of the various nonlinear effects can be assessed. It is hoped that the use of these procedures will result in a better understanding of the transport mechanisms and scaling in tokamaks. Another application of these algorithms is for the improvement of the numerical properties of the simulation plasma. For instance, implementations of such algorithms (1) enable us to suppress the intrinsic numerical noise in the simulation, and (2) also make it possible to regulate the weights of the fast-moving particles and, in turn, to eliminate the associated high frequency oscillations. Examples of their application to drift-type instabilities in slab geometry are given. We note that the work reported here represents the first successful use of the weighted algorithms in particle codes for the nonlinear simulation of plasmas.
Particle swarm genetic algorithm and its application
International Nuclear Information System (INIS)
Liu Chengxiang; Yan Changxiang; Wang Jianjun; Liu Zhenhai
2012-01-01
To solve the problems of slow convergence and the tendency of the standard particle swarm optimization to fall into local optima when dealing with nonlinear constrained optimization problems, a particle swarm genetic algorithm is designed. The proposed algorithm adopts the feasibility principle to handle constraint conditions, avoiding the difficulty of selecting a penalty factor in the penalty function method; generates the initial feasible group randomly, which accelerates particle swarm convergence; and introduces the genetic algorithm's crossover and mutation strategies to prevent the particle swarm from falling into a local optimum. Through optimization calculations on typical test functions, the results show that the particle swarm genetic algorithm has better optimization performance. The algorithm is applied to nuclear power plant optimization, and the optimization results are significant. (authors)
A transport-based condensed history algorithm
International Nuclear Information System (INIS)
Tolar, D. R. Jr.
1999-01-01
Condensed history algorithms are approximate electron transport Monte Carlo methods in which the cumulative effects of multiple collisions are modeled in a single step of (user-specified) path length s_0. This path length is the distance each Monte Carlo electron travels between collisions. Current condensed history techniques utilize a splitting routine over the range 0 ≤ s ≤ s_0. For example, the PENELOPE method splits each step into two substeps; one with length ξs_0 and one with length (1 − ξ)s_0, where ξ is a random number from 0 < ξ < 1. Because s_0 is fixed (not sampled from an exponential distribution), conventional condensed history schemes are not transport processes. Here the authors describe a new condensed history algorithm that is a transport process. The method simulates a transport equation that approximates the exact Boltzmann equation. The new transport equation has a larger mean free path than, and preserves two angular moments of, the Boltzmann equation. Thus, the new process is solved more efficiently by Monte Carlo, and it conserves both particles and scattering power.
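The substep splitting described above can be sketched in one dimension: a fixed step s_0 is divided at a random fraction ξ, with the direction cosine updated at the split point. The Gaussian angular kick below is a stand-in for a real multiple-scattering distribution (e.g. Goudsmit-Saunderson) and is purely illustrative.

```python
import random

def condensed_history_step(x, mu, s0, rng, sigma=0.1):
    """Advance position x with direction cosine mu through one condensed
    history step of fixed path length s0, split at a random fraction xi."""
    xi = rng.random()
    x += xi * s0 * mu                                     # substep of length xi*s0
    mu = max(-1.0, min(1.0, mu + rng.gauss(0.0, sigma)))  # scatter at the split
    x += (1.0 - xi) * s0 * mu                             # substep of length (1-xi)*s0
    return x, mu
```

The contrast the abstract draws is that s0 here is deterministic; a true transport process would sample the distance between collisions from an exponential distribution.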
Cooper, M A
2000-01-01
We present various approximations for the angular distribution of particles emerging from an optically thick, purely isotropically scattering region into a vacuum. Our motivation is to use such a distribution for the Fleck-Canfield random walk method [1] for implicit Monte Carlo (IMC) [2] radiation transport problems. We demonstrate that the cosine distribution recommended in the original random walk paper [1] is a poor approximation to the angular distribution predicted by transport theory. Then we examine other approximations that more closely match the transport angular distribution.
Particle algorithms for population dynamics in flows
International Nuclear Information System (INIS)
Perlekar, Prasad; Toschi, Federico; Benzi, Roberto; Pigolotti, Simone
2011-01-01
We present and discuss particle-based algorithms to numerically study the dynamics of populations subjected to an advecting flow condition. We discuss a few possible variants of the algorithms and compare them in a model compressible flow. A comparison against appropriate versions of the continuum stochastic Fisher equation (sFKPP) is also presented and discussed. The algorithms can be used to study population genetics in fluid environments.
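A toy particle version of logistic (Fisher-KPP-type) population dynamics in a periodic 1D domain illustrates the ingredients: particles drift and diffuse, and duplicate at a density-limited rate so the population saturates near a carrying capacity. The uniform drift, rates and capacity are illustrative assumptions, not the model flow of the paper.

```python
import random

def step(pop, rng, L=10.0, drift=0.1, diff=0.05, mu=0.5, capacity=200):
    """One update of the particle population: advect + diffuse each particle,
    then duplicate it with a logistic, density-limited birth probability."""
    n = len(pop)
    new = []
    for x in pop:
        x = (x + drift + rng.gauss(0.0, diff)) % L   # periodic advection + diffusion
        new.append(x)
        if rng.random() < mu * (1.0 - n / capacity): # birth shuts off near capacity
            new.append(x)
    return new

rng = random.Random(3)
pop = [rng.uniform(0.0, 10.0) for _ in range(20)]
for _ in range(100):
    pop = step(pop, rng)
```

Starting from 20 particles, the population grows logistically and levels off just below the capacity of 200; a continuum sFKPP solver should reproduce the same saturation in the mean-field limit.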
Particle transport in urban dwellings
International Nuclear Information System (INIS)
Cannell, R.J.; Goddard, A.J.H.; ApSimon, H.M.
1988-01-01
A quantitative investigation of the potential for contamination of a dwelling by material carried in on the occupants' footwear has been completed. Data are now available on the transport capacity of different footwear for a small range of particle sizes and contamination source strengths. Additional information is also given on the rate of redistribution
Neural Network Algorithm for Particle Loading
International Nuclear Information System (INIS)
Lewandowski, J.L.V.
2003-01-01
An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.
Computational plasticity algorithm for particle dynamics simulations
Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.
2018-01-01
The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.
A multi-parametric particle-pairing algorithm for particle tracking in single and multiphase flows
International Nuclear Information System (INIS)
Cardwell, Nicholas D; Vlachos, Pavlos P; Thole, Karen A
2011-01-01
Multiphase flows (MPFs) offer a rich area of fundamental study with many practical applications. Examples of such flows range from the ingestion of foreign particulates in gas turbines to transport of particles within the human body. Experimental investigation of MPFs, however, is challenging, and requires techniques that simultaneously resolve both the carrier and discrete phases present in the flowfield. This paper presents a new multi-parametric particle-pairing algorithm for particle tracking velocimetry (MP3-PTV) in MPFs. MP3-PTV improves upon previous particle tracking algorithms by employing a novel variable pair-matching algorithm which utilizes displacement preconditioning in combination with estimated particle size and intensity to more effectively and accurately match particle pairs between successive images. To improve the method's efficiency, a new particle identification and segmentation routine was also developed. Validation of the new method was initially performed on two artificial data sets: a traditional single-phase flow published by the Visualization Society of Japan (VSJ) and an in-house generated MPF data set having a bi-modal distribution of particles diameters. Metrics of the measurement yield, reliability and overall tracking efficiency were used for method comparison. On the VSJ data set, the newly presented segmentation routine delivered a twofold improvement in identifying particles when compared to other published methods. For the simulated MPF data set, measurement efficiency of the carrier phases improved from 9% to 41% for MP3-PTV as compared to a traditional hybrid PTV. When employed on experimental data of a gas–solid flow, the MP3-PTV effectively identified the two particle populations and reported a vector efficiency and velocity measurement error comparable to measurements for the single-phase flow images. Simultaneous measurement of the dispersed particle and the carrier flowfield velocities allowed for the calculation of
Multi-Algorithm Particle Simulations with Spatiocyte.
Arjunan, Satya N V; Takahashi, Koichi
2017-01-01
As quantitative biologists get more measurements of spatially regulated systems such as cell division and polarization, simulation of reaction and diffusion of proteins using the data is becoming increasingly relevant to uncover the mechanisms underlying the systems. Spatiocyte is a lattice-based stochastic particle simulator for biochemical reaction and diffusion processes. Simulations can be performed at single molecule and compartment spatial scales simultaneously. Molecules can diffuse and react in 1D (filament), 2D (membrane), and 3D (cytosol) compartments. The implications of crowded regions in the cell can be investigated because each diffusing molecule has spatial dimensions. Spatiocyte adopts multi-algorithm and multi-timescale frameworks to simulate models that simultaneously employ deterministic, stochastic, and particle reaction-diffusion algorithms. Comparison of light microscopy images to simulation snapshots is supported by Spatiocyte microscopy visualization and molecule tagging features. Spatiocyte is open-source software and is freely available at http://spatiocyte.org.
International Nuclear Information System (INIS)
Ueki, T.; Larsen, E.W.
1998-01-01
The authors show that Monte Carlo simulations of neutral particle transport in planar-geometry anisotropically scattering media, using the exponential transform with angular biasing as a variance reduction device, are governed by a new Boltzmann Monte Carlo (BMC) equation, which includes particle weight as an extra independent variable. The weight moments of the solution of the BMC equation determine the moments of the score and the mean number of collisions per history in the nonanalog Monte Carlo simulations. Therefore, the solution of the BMC equation predicts the variance of the score and the figure of merit in the simulation. Also, by (1) using an angular biasing function that is closely related to the ''asymptotic'' solution of the linear Boltzmann equation and (2) requiring isotropic weight changes at collisions, they derive a new angular biasing scheme. Using the BMC equation, they propose a universal ''safe'' upper limit of the transform parameter, valid for any type of exponential transform. In numerical calculations, they demonstrate that the behavior of the Monte Carlo simulations and the performance predicted by deterministically solving the BMC equation agree well, and that the new angular biasing scheme is always advantageous.
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled of Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
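The asynchronous pattern can be sketched with a thread pool: each particle is moved and resubmitted as soon as its own evaluation returns, with no per-iteration barrier, so slow evaluations never stall the rest of the swarm. The objective (a sphere function) and all swarm constants below are illustrative assumptions, not the paper's wing-design problem.

```python
import concurrent.futures as cf
import random

def sphere(x):
    return sum(c * c for c in x)   # stand-in for an expensive analysis

def async_pso(dim=3, n_particles=8, n_evals=200, seed=5):
    rng = random.Random(seed)
    pos = {i: [rng.uniform(-5.0, 5.0) for _ in range(dim)] for i in range(n_particles)}
    vel = {i: [0.0] * dim for i in range(n_particles)}
    pbest = {i: pos[i][:] for i in range(n_particles)}
    pbest_f = {i: float("inf") for i in range(n_particles)}
    gbest, gbest_f = pos[0][:], float("inf")
    with cf.ThreadPoolExecutor(max_workers=4) as ex:
        # one in-flight evaluation per particle
        pending = {ex.submit(sphere, pos[i][:]): i for i in range(n_particles)}
        done_evals = 0
        while pending and done_evals < n_evals:
            fut = next(cf.as_completed(pending))   # consume results as they arrive
            i = pending.pop(fut)
            f = fut.result()
            done_evals += 1
            if f < pbest_f[i]:
                pbest_f[i], pbest[i] = f, pos[i][:]
            if f < gbest_f:
                gbest_f, gbest = f, pos[i][:]
            # move particle i immediately -- no iteration barrier
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            pending[ex.submit(sphere, pos[i][:])] = i
    return gbest_f

best = async_pso()
```

The cost of dropping the barrier is that particles sometimes move using a slightly stale gbest; the paper's observation is that this loss is usually outweighed by the improved processor utilization on heterogeneous clusters.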
JIT-transportation problem and its algorithm
Bai, Guozhong; Gan, Xiao-Xiong
2011-12-01
This article introduces the (just-in-time) JIT-transportation problem, which requires that all demanded goods be shipped to their destinations on schedule, at a zero or minimal destination-storage cost. The JIT-transportation problem is a special goal programming problem with discrete constraints. This article provides a mathematical model for such a transportation problem and introduces the JIT solution, the deviation solution, the JIT deviation, etc. By introducing the B(λ)-problem, this article establishes the equivalence between the optimal solutions of the B(λ)-problem and the optimal solutions of the JIT-transportation problem, and then provides an algorithm for the JIT-transportation problems. This algorithm is proven mathematically and is also illustrated by an example.
Los Alamos neutral particle transport codes: New and enhanced capabilities
International Nuclear Information System (INIS)
Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Clark, B.A.; Koch, K.R.; Marr, D.R.
1992-01-01
We present new developments in Los Alamos discrete-ordinates transport codes and introduce THREEDANT, the latest in the series of Los Alamos discrete ordinates transport codes. THREEDANT solves the multigroup, neutral-particle transport equation in X-Y-Z and R-Θ-Z geometries. THREEDANT uses computationally efficient algorithms: Diffusion Synthetic Acceleration (DSA) is used to accelerate the convergence of transport iterations, and the DSA solution is itself accelerated using the multigrid technique. THREEDANT runs on a wide range of computers, from scientific workstations to CRAY supercomputers. The algorithms are highly vectorized on CRAY computers. Recently, the THREEDANT transport algorithm was implemented on the massively parallel CM-2 computer, with performance that is comparable to a single-processor CRAY Y-MP. We present the results of THREEDANT analysis of test problems.
Fast algorithms for transport models. Final report
International Nuclear Information System (INIS)
Manteuffel, T.A.
1994-01-01
This project has developed a multigrid in space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid in space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state-of-the-art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel in angle algorithm was developed. A parallel version of the multilevel in angle algorithm has also been developed. Upon first glance, the shifted transport sweep has limited parallelism. Once the right-hand-side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel in angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M)).
Variational Algorithms for Test Particle Trajectories
Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.
2015-11-01
The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
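As a concrete example of a structure-preserving test-particle update (a simpler relative of the variational integrators discussed above), the standard Boris rotation for the magnetic part of the Lorentz force conserves the particle speed exactly when E = 0. Units with q/m = 1 are assumed, and the field and timestep below are illustrative.

```python
import math

def boris_step(v, B, dt):
    """One Boris rotation of velocity v about magnetic field B (E = 0, q/m = 1).
    Algebraically an exact rotation, so |v| is conserved to machine precision."""
    t = [0.5 * dt * b for b in B]
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    # vp = v + v x t  (half rotation)
    vp = [v[0] + (v[1] * t[2] - v[2] * t[1]),
          v[1] + (v[2] * t[0] - v[0] * t[2]),
          v[2] + (v[0] * t[1] - v[1] * t[0])]
    # v_new = v + vp x s  (complete the rotation)
    return [v[0] + (vp[1] * s[2] - vp[2] * s[1]),
            v[1] + (vp[2] * s[0] - vp[0] * s[2]),
            v[2] + (vp[0] * s[1] - vp[1] * s[0])]

v = [1.0, 0.0, 0.2]
for _ in range(1000):
    v = boris_step(v, [0.0, 0.0, 1.0], 0.1)
speed2 = sum(c * c for c in v)
```

A non-conservative scheme such as forward Euler would spiral outward in this test; the retention of the conserved quantity over long times is exactly the qualitative behavior the abstract attributes to variational integrators.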
Algorithm for the Stochastic Generalized Transportation Problem
Directory of Open Access Journals (Sweden)
Marcin Anholcer
2012-01-01
The equalization method for the stochastic generalized transportation problem is presented. The algorithm allows us to find the optimal solution to the problem of minimizing the expected total cost in the generalized transportation problem with random demand. After a short introduction and literature review, the algorithm is presented. It is a version of the method proposed by the author for the nonlinear generalized transportation problem. It is shown that this version of the method generates a sequence of solutions convergent to the KKT point. This guarantees the global optimality of the obtained solution, as the expected cost functions are convex and twice differentiable. The computational experiments performed for test problems of reasonable size show that the method is fast. (original abstract)
Vectorizing and macrotasking Monte Carlo neutral particle algorithms
International Nuclear Information System (INIS)
Heifetz, D.B.
1987-04-01
Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches into as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a large number of tasks. On future machines, macrotasking may be taken to its limit, with each test flight, and each split test flight, treated as a separate task.
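The array-compaction strategy described above can be sketched in a few lines: flights advance in lockstep over dense state arrays, and completed flights are squeezed out after each step so no gaps remain. The names and the toy random-walk physics below are illustrative assumptions, not the code of the paper.

```python
import random

def simulate_flights(n_flights, absorb_prob=0.3, max_steps=50, seed=1):
    """Advance a batch of test flights in lockstep, compacting the state
    arrays whenever flights terminate (the 'gap' problem in the text)."""
    rng = random.Random(seed)
    positions = [0.0] * n_flights      # one entry per active flight
    steps = [0] * n_flights
    finished = 0
    while positions:
        # "vector" step: advance every active flight at once
        positions = [x + rng.random() for x in positions]
        steps = [s + 1 for s in steps]
        # termination test for each flight (absorption or step cutoff)
        alive = [rng.random() > absorb_prob and s < max_steps for s in steps]
        # compaction: squeeze out completed flights so the arrays stay dense
        positions = [x for x, a in zip(positions, alive) if a]
        steps = [s for s, a in zip(steps, alive) if a]
        finished = n_flights - len(positions)
    return finished
```

In a real vectorized code the list comprehensions would be masked array operations, but the compaction step plays the same role either way.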
Adaptive multilevel splitting for Monte Carlo particle transport
Directory of Open Access Journals (Sweden)
Louvin Henri
2017-01-01
In the Monte Carlo simulation of particle transport, and especially for shielding applications, variance reduction techniques are widely used to help simulate realisations of rare events and to reduce the relative errors on the estimated scores for a given computation time. Adaptive Multilevel Splitting (AMS) is one of these variance reduction techniques that has recently appeared in the literature. In the present paper, we propose an alternative version of the AMS algorithm, adapted for the first time to the field of particle transport. Within this context, it can be used to build an unbiased estimator of any quantity associated with particle tracks, such as flux, reaction rates or even non-Boltzmann tallies like pulse-height tallies and other spectra. Furthermore, the efficiency of the AMS algorithm is shown not to be very sensitive to variations of its input parameters, which makes it capable of significant variance reduction without requiring extended user effort.
A dynamic global and local combined particle swarm optimization algorithm
International Nuclear Information System (INIS)
Jiao Bin; Lian Zhigang; Chen Qunxian
2009-01-01
The particle swarm optimization (PSO) algorithm has been developing rapidly and many results have been reported. PSO has shown some important advantages, providing high convergence speed on specific problems, but it has a tendency to get stuck near an optimal solution, making it difficult to improve solution accuracy by fine-tuning. This paper presents a dynamic global and local combined particle swarm optimization (DGLCPSO) algorithm to improve the performance of the original PSO, in which all particles dynamically share the best information of the local particle, the global particle, and the group particles. It is tested on a set of eight benchmark functions with different dimensions and compared with the original PSO. Experimental results indicate that the DGLCPSO algorithm significantly improves search performance on the benchmark functions and is effective at solving optimization problems.
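As a point of reference for the PSO variants discussed in these entries, the baseline global-best PSO can be sketched as follows. This is the textbook algorithm that DGLCPSO extends, not the paper's variant; the coefficient values are common illustrative choices.

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, seed=0):
    """Minimal global-best PSO: each particle is pulled toward its own
    personal best and toward the swarm-wide best position."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pval[i]:            # update personal best
                pbest[i], pval[i] = x[i][:], val
                if val < gval:           # update swarm best
                    gbest, gval = x[i][:], val
    return gbest, gval

sphere = lambda p: sum(t * t for t in p)
best, val = pso(sphere)   # val should approach the optimum 0.0
```

The DGLCPSO modification would replace the single `gbest` pull with a dynamic combination of local, global, and group-best information.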
Particle transport in porous media
Corapcioglu, M. Yavuz; Hunt, James R.
The migration and capture of particles (such as colloidal materials and microorganisms) through porous media occur in fields as diverse as water and wastewater treatment, well drilling, and various liquid-solid separation processes. In liquid waste disposal projects, suspended solids can clog the injection well, and groundwater quality can be endangered by suspended clay and silt particles that migrate to the formation adjacent to the well bore. In addition to reducing the permeability of the soil, mobile particles can carry groundwater contaminants adsorbed onto their surfaces. Furthermore, as in the case of contamination from septic tanks, the particles themselves may be pathogens, i.e., bacteria and viruses.
Optimal transport of particle beams
International Nuclear Information System (INIS)
Allen, C.K.; Reiser, M.
1997-01-01
The transport and matching problem for a low energy transport system is approached from a control theoretical viewpoint. We develop a model for a beam transport and matching section based on a multistage control network. To this model we apply the principles of optimal control to formulate techniques aiding in the design of the transport and matching section. Both nonlinear programming and dynamic programming techniques are used in the optimization. These techniques are implemented in a computer-aided design program called SPOT. Examples are presented to demonstrate the procedure and outline the results. (orig.)
A Novel Particle Swarm Optimization Algorithm for Global Optimization.
Wang, Chun-Feng; Liu, Kui
2016-01-01
Particle Swarm Optimization (PSO) is a recently developed optimization method which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of the algorithm, a chaotic search is adopted around the best solution of the current iteration. To verify the performance of the algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms.
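The chaotic search step can be illustrated with a logistic map driving candidate points around the current best solution. This is a generic sketch of the idea under assumed parameters, not the paper's exact scheme.

```python
def chaotic_search(f, center, radius=0.5, steps=100, z0=0.7):
    """Logistic-map chaotic local search around a candidate solution.
    The chaotic variable z densely explores (0, 1), so the candidates
    cover the neighborhood of `center` without a fixed grid."""
    z = z0
    best, best_val = list(center), f(center)
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)  # logistic map, fully chaotic at r = 4
        # map z into [-radius, radius] around the center (same offset in
        # every dimension -- a simplification for this sketch)
        cand = [c + radius * (2.0 * z - 1.0) for c in center]
        val = f(cand)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val
```

In a full algorithm this search would be applied to the swarm's best particle at the end of each iteration.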
Particle and heat transport in Tokamaks
International Nuclear Information System (INIS)
Chatelier, M.
1984-01-01
A limitation on the performance of tokamaks is heat transport through magnetic surfaces. The principles of "classical" or "neoclassical" transport, i.e., particle and heat fluxes due to Coulomb scattering of charged particles in a magnetic field, are presented. It is shown that besides this classical effect, "anomalous" transport occurs; it is associated with fluctuating electric or magnetic fields which can appear in the plasma as a result of charge and current perturbations. Tearing modes and drift wave instabilities are taken as typical examples. Experimental results are presented which show that ions behave approximately classically, whereas electron transport is strongly anomalous. [fr]
Stochastic transport of particles across single barriers
International Nuclear Information System (INIS)
Kreuter, Christian; Siems, Ullrich; Henseler, Peter; Nielaba, Peter; Leiderer, Paul; Erbe, Artur
2012-01-01
Transport phenomena of interacting particles are of high interest for many applications in biology and mesoscopic systems. Here we present measurements on colloidal particles which are confined in narrow channels on a substrate and interact with a barrier that impedes motion along the channel. The substrate is tilted so that the particles are driven towards the barrier and, if the energy gained from the tilt is large enough, surmount it by thermal activation. We therefore study the influence of this barrier, as well as the influence of particle interactions, on particle transport through such systems. All experiments are supported by Brownian dynamics simulations, which complement the experiments by covering a large range of parameter space that cannot be accessed experimentally.
Weighted Flow Algorithms (WFA) for stochastic particle coagulation
International Nuclear Information System (INIS)
DeVille, R.E.L.; Riemer, N.; West, M.
2011-01-01
Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently-developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number and mass-based quantities of relevance for atmospheric sciences applications.
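The power-law weighting can be sketched as follows: each computational particle of diameter D stands in for w(D) = (D/D0)^α physical particles, so number- and mass-type tallies become weighted sums. The values of `d0` and `alpha` and the D³ mass proxy are illustrative assumptions, not PartMC-MOSAIC's actual interface.

```python
def powerlaw_weight(diam, d0=1e-6, alpha=-1.0):
    """w(D) = (D/d0)**alpha: how many physical particles one
    computational particle of diameter D represents (illustrative)."""
    return (diam / d0) ** alpha

def weighted_tallies(diams, volume=1.0):
    """Number concentration and a mass-proportional tally (~D^3) from a
    weighted computational-particle population."""
    num = sum(powerlaw_weight(d) for d in diams) / volume
    mass = sum(powerlaw_weight(d) * d ** 3 for d in diams) / volume
    return num, mass
```

With a negative exponent, large (rare) particles carry small weights, so more computational particles are spent resolving the large-particle tail, which is the point of the weighting scheme.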
Particle-transport simulation with the Monte Carlo method
International Nuclear Information System (INIS)
Carter, L.L.; Cashwell, E.D.
1975-01-01
Attention is focused on the application of the Monte Carlo method to particle transport problems, with emphasis on neutron and photon transport. Topics covered include sampling methods, mathematical prescriptions for simulating particle transport, mechanics of simulating particle transport, neutron transport, and photon transport. A literature survey of 204 references is included. (GMT)
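The mechanics of simulating particle transport covered by the survey reduce, in the simplest analog case, to sampling exponential free paths and collision outcomes. A toy 1-D slab leakage estimator (all parameters illustrative) is:

```python
import math
import random

def slab_leakage(sigma_t=1.0, thickness=2.0, c=0.5, n_hist=5000, seed=7):
    """Toy analog Monte Carlo for a 1-D slab: sample exponential free
    paths, scatter isotropically with probability c, absorb otherwise,
    and tally the fraction of histories leaking through either face."""
    rng = random.Random(seed)
    leaked = 0
    for _ in range(n_hist):
        x, mu = 0.0, 1.0                 # start at left face, moving right
        while True:
            # free flight: exponential path length, projected onto x
            x += mu * (-math.log(1.0 - rng.random()) / sigma_t)
            if x < 0.0 or x > thickness:  # escaped the slab
                leaked += 1
                break
            if rng.random() > c:          # absorbed at the collision site
                break
            mu = 2.0 * rng.random() - 1.0  # isotropic scatter in mu
    return leaked / n_hist

frac = slab_leakage()
```

Real transport codes add energy dependence, multiple dimensions, and variance reduction, but the sampling skeleton is the same.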
Fast algorithms for transport models. Final report, June 1, 1993--May 31, 1994
International Nuclear Information System (INIS)
Manteuffel, T.
1994-12-01
The focus of this project is the study of multigrid and multilevel algorithms for the numerical solution of Boltzmann models of the transport of neutral and charged particles. In previous work a fast multigrid algorithm was developed for the numerical solution of the Boltzmann model of neutral particle transport in slab geometry assuming isotropic scattering. The new algorithm is extremely fast in the thick diffusion limit; the multigrid V-cycle convergence factor approaches zero as the mean free path between collisions approaches zero, independent of the mesh. Also, a fast multilevel method was developed for the numerical solution of the Boltzmann model of charged particle transport in the thick Fokker-Planck limit for slab geometry. Parallel implementations were developed for both algorithms.
Parallel Global Optimization with the Particle Swarm Algorithm (Preprint)
National Research Council Canada - National Science Library
Schutte, J. F; Reinbolt, J. A; Fregly, B. J; Haftka, R. T; George, A. D
2004-01-01
.... To obtain enhanced computational throughput and global search capability, we detail the coarse-grained parallelization of an increasingly popular global search method, the Particle Swarm Optimization (PSO) algorithm...
Iswari, T.; Asih, A. M. S.
2018-04-01
In a logistics system, transportation plays an important role in connecting every element of the supply chain, but it can also produce the greatest cost. Therefore, it is important to keep transportation costs as low as possible. One way to minimize transportation cost is to optimize the routing of vehicles, which leads to the Vehicle Routing Problem (VRP). The most common type of VRP is the Capacitated Vehicle Routing Problem (CVRP). In CVRP, each vehicle has its own capacity, and the total demand of the customers it serves must not exceed that capacity. CVRP belongs to the class of NP-hard problems, so exact algorithms become highly time-consuming as the problem size grows. For large-scale problem instances, as typically found in industrial applications, finding an optimal solution is therefore not practicable. This paper applies two metaheuristic approaches to CVRP, Genetic Algorithm and Particle Swarm Optimization, and compares their results and performance. The results show that both algorithms perform well in solving CVRP but still need improvement. In the algorithm tests and numerical example, the Genetic Algorithm yields a better solution than Particle Swarm Optimization in total distance travelled.
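Whatever metaheuristic searches the route space, both GA and PSO need the same CVRP evaluation: total route distance subject to the capacity constraint. A hypothetical evaluator (Euclidean distances, depot at the origin) might look like this:

```python
import math

def route_cost(routes, coords, demands, capacity, depot=(0.0, 0.0)):
    """Evaluate a CVRP solution: total Euclidean distance of all routes,
    rejecting any route whose summed demand exceeds vehicle capacity."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    total = 0.0
    for route in routes:
        if sum(demands[c] for c in route) > capacity:
            raise ValueError("capacity exceeded on route %r" % (route,))
        # each route starts and ends at the depot
        stops = [depot] + [coords[c] for c in route] + [depot]
        total += sum(dist(stops[i], stops[i + 1])
                     for i in range(len(stops) - 1))
    return total
```

A GA would encode `routes` as a chromosome and a PSO as a particle position; both would call an evaluator of this shape as the fitness function.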
Chaotically encoded particle swarm optimization algorithm and its applications
International Nuclear Information System (INIS)
Alatas, Bilal; Akin, Erhan
2009-01-01
This paper proposes a novel particle swarm optimization (PSO) algorithm, the chaotically encoded particle swarm optimization algorithm (CENPSOA), based on the recently proposed notion of chaos numbers. Various chaos arithmetic operations and evaluation measures that can be used in CENPSOA are described. Furthermore, CENPSOA has been designed to be effectively utilized in data mining applications.
Particle transport in inclined annuli
Energy Technology Data Exchange (ETDEWEB)
Kurtzhals, Erik
1993-12-31
A new model for the formation and behaviour of deposits in inclined wellbores is formulated. The annular space is divided into two layers, separated by a distinct plane boundary. While the lower layer is taken to consist of closely packed cuttings, the upper layer is presumed to behave as a pure fluid. A force balance for the lower layer decides whether it is stationary or slides in the upwards- or downwards direction. The position of the deposit surface is governed by the fluid shear stress at the deposit surface. The proposed model represents a major improvement compared to an earlier model. The predictions from the SCSB-model are in good qualitative agreement with experimental results obtained by the author, and results published by research groups in the U.S.A., United Kingdom and Germany. The quantitative agreement is variable, presumably because the SCSB-model is a somewhat simplified description of particle behaviour in inclined annuli. However, the model provides a clearer understanding of the physical background for previously published experimental results. In order to couple the theoretical work with experimental observations, an annular flow loop has been constructed. A characteristic feature in the flow loop design is the application of load cells, which permits determination of the annular particle content at steady state as well as under transient conditions. Due to delays in the constructional work, it has only been possible to perform a limited number of investigations in the loop. However, the results produced are in agreement with results published by other research groups. (au)
Particle Tracking Model and Abstraction of Transport Processes
International Nuclear Information System (INIS)
Robinson, B.
2000-01-01
The purpose of the transport methodology and component analysis is to provide the numerical methods for simulating radionuclide transport and the model setup for transport in the unsaturated zone (UZ) site-scale model. The particle-tracking method of simulating radionuclide transport is incorporated into the FEHM computer code, and the resulting changes in the FEHM code are to be submitted to the software configuration management system. This Analysis and Model Report (AMR) outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the unsaturated zone at Yucca Mountain. In addition, methods for determining colloid-facilitated transport parameters are outlined for use in the Total System Performance Assessment (TSPA) analyses. Concurrently, process-level flow model calculations are being carried out in a PMR for the unsaturated zone. The computer code TOUGH2 is being used to generate three-dimensional, dual-permeability flow fields that are supplied to the Performance Assessment group for subsequent transport simulations. These flow fields are converted to input files compatible with the FEHM code, which for this application simulates radionuclide transport using the particle-tracking algorithm outlined in this AMR. Therefore, this AMR establishes the numerical method and demonstrates the use of the model, but the specific breakthrough curves presented do not necessarily represent the behavior of the Yucca Mountain unsaturated zone.
General particle transport equation. Final report
International Nuclear Information System (INIS)
Lafi, A.Y.; Reyes, J.N. Jr.
1994-12-01
The general objectives of this research are as follows: (1) to develop fundamental models for fluid particle coalescence and breakage rates for incorporation into statistically based (Population Balance Approach or Monte Carlo Approach) two-phase thermal hydraulics codes; (2) to develop fundamental models for flow structure transitions based on stability theory and fluid particle interaction rates. This report details the derivation of the mass, momentum and energy conservation equations for a distribution of spherical, chemically non-reacting fluid particles of variable size and velocity. Studying the effects of fluid particle interactions on interfacial transfer and flow structure requires detailed particulate flow conservation equations. The equations are derived using a particle continuity equation analogous to Boltzmann's transport equation. When coupled with the appropriate closure equations, the conservation equations can be used to model nonequilibrium, two-phase, dispersed fluid flow behavior. Unlike the Eulerian volume- and time-averaged conservation equations, the statistically averaged conservation equations contain additional terms that account for the change due to fluid particle interfacial acceleration and fluid particle dynamics. Two types of particle dynamics are considered: coalescence and breakage. The rate of change due to particle dynamics therefore accounts for the gains and losses involved in these processes and implements phenomenological models for fluid particle breakage and coalescence.
The energetic alpha particle transport method EATM
International Nuclear Information System (INIS)
Kirkpatrick, R.C.
1998-02-01
The EATM method is an evolving attempt to find an efficient method of treating the transport of energetic charged particles in a dynamic magnetized (MHD) plasma for which the mean free path of the particles and the Larmor radius may be long compared to the gradient lengths in the plasma. The intent is to span the range of parameter space with the efficiency and accuracy thought necessary for experimental analysis and design of magnetized fusion targets
A solution algorithm for fluid-particle flows across all flow regimes
Kong, Bo; Fox, Rodney O.
2017-09-01
Many fluid-particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle-particle collisions are rare. Thus, in order to simulate such fluid-particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas-particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid-particle flows.
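The local regime selection described above can be sketched as a per-cell switch on the particle volume fraction. The thresholds and linear blend below are illustrative assumptions, not the paper's actual OpenFOAM implementation.

```python
def flux_blend(alpha_p, alpha_dilute=1e-3, alpha_dense=1e-1):
    """Weight given to the hydrodynamic flux solver in a cell with
    particle volume fraction alpha_p; the kinetic-based finite-volume
    solver receives the complement. Dilute cells use the kinetic
    solver, dense cells the hydrodynamic one, with a linear blend in
    between (thresholds are illustrative)."""
    if alpha_p <= alpha_dilute:
        return 0.0
    if alpha_p >= alpha_dense:
        return 1.0
    return (alpha_p - alpha_dilute) / (alpha_dense - alpha_dilute)
```

Evaluating this per cell and per time step is what makes the splitting "dynamic and local": a cluster sweeping through the domain carries its hydrodynamic treatment with it.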
FLUKA: A Multi-Particle Transport Code
Energy Technology Data Exchange (ETDEWEB)
Ferrari, A.; Sala, P.R.; /CERN /INFN, Milan; Fasso, A.; /SLAC; Ranft, J.; /Siegen U.
2005-12-14
This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner's guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.
Refined holonomic summation algorithms in particle physics
Energy Technology Data Exchange (ETDEWEB)
Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Round, Mark; Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation (RISC)
2017-06-15
An improved multi-summation approach is introduced and discussed that enables one to simultaneously handle indefinite nested sums and products in the setting of difference rings and holonomic sequences. Relevant mathematics is reviewed and the underlying advanced difference ring machinery is elaborated upon. The flexibility of this new toolbox contributed substantially to evaluating complicated multi-sums coming from particle physics. Illustrative examples of the functionality of the new software package RhoSum are given.
Refined holonomic summation algorithms in particle physics
International Nuclear Information System (INIS)
Bluemlein, Johannes; Round, Mark; Schneider, Carsten
2017-06-01
An improved multi-summation approach is introduced and discussed that enables one to simultaneously handle indefinite nested sums and products in the setting of difference rings and holonomic sequences. Relevant mathematics is reviewed and the underlying advanced difference ring machinery is elaborated upon. The flexibility of this new toolbox contributed substantially to evaluating complicated multi-sums coming from particle physics. Illustrative examples of the functionality of the new software package RhoSum are given.
Applying Dispersive Changes to Lagrangian Particles in Groundwater Transport Models
Konikow, Leonard F.
2010-01-01
Method-of-characteristics groundwater transport models require that changes in concentrations computed within an Eulerian framework to account for dispersion be transferred to moving particles used to simulate advective transport. A new algorithm was developed to accomplish this transfer between nodal values and advecting particles more precisely and realistically compared to currently used methods. The new method scales the changes and adjustments of particle concentrations relative to limiting bounds of concentration values determined from the population of adjacent nodal values. The method precludes unrealistic undershoot or overshoot for concentrations of individual particles. In the new method, if dispersion causes cell concentrations to decrease during a time step, those particles in the cell having the highest concentration will decrease the most, and those with the lowest concentration will decrease the least. The converse is true if dispersion is causing concentrations to increase. Furthermore, if the initial concentration on a particle is outside the range of the adjacent nodal values, it will automatically be adjusted in the direction of the acceptable range of values. The new method is inherently mass conservative. © US Government 2010.
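The bounded-scaling idea can be sketched as follows. This is a simplified reading of the abstract, not the published algorithm: each particle's share of the cell's dispersive change is proportional to its distance from the bound it is moving toward, normalized so the mean particle change matches the cell change.

```python
def adjust_particle_concs(c_parts, dc_cell, c_min, c_max):
    """Distribute a cell's dispersive concentration change dc_cell over
    its particles. Particles nearest the bound they are moving toward
    change least; the mean particle change equals dc_cell."""
    span = max(c_max - c_min, 1e-30)
    if dc_cell < 0.0:
        fracs = [(c - c_min) / span for c in c_parts]  # room to decrease
    else:
        fracs = [(c_max - c) / span for c in c_parts]  # room to increase
    mean_f = sum(fracs) / len(fracs)
    if mean_f == 0.0:
        return list(c_parts)  # no particle has room to move
    return [c + dc_cell * f / mean_f for c, f in zip(c_parts, fracs)]
```

For a decreasing cell, the highest-concentration particles take the largest share of the decrease, reproducing the qualitative behavior the abstract describes.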
A Fano cavity test for Monte Carlo proton transport algorithms
International Nuclear Information System (INIS)
Sterpin, Edmond; Sorriaux, Jefferson; Souris, Kevin; Vynckier, Stefaan; Bouchard, Hugo
2014-01-01
Purpose: In the scope of reference dosimetry of radiotherapy beams, Monte Carlo (MC) simulations are widely used to compute ionization chamber dose response accurately. Uncertainties related to the transport algorithm can be verified by performing self-consistency tests, i.e., the so-called "Fano cavity test." The Fano cavity test is based on the Fano theorem, which states that under charged particle equilibrium conditions, the charged particle fluence is independent of the mass density of the media as long as the cross sections are uniform. Such tests have not yet been performed for MC codes simulating proton transport. The objectives of this study are to design a new Fano cavity test for proton MC and to implement the methodology in two MC codes: Geant4 and PENELOPE extended to protons (PENH). Methods: The new Fano test is designed to evaluate the accuracy of proton transport. Virtual particles with an energy of E_0 and a mass macroscopic cross section of Σ/ρ are transported, having the ability to generate protons with kinetic energy E_0 and to be restored after each interaction, thus providing proton equilibrium. To perform the test, the authors use a simplified simulation model and rigorously demonstrate that the computed cavity dose per incident fluence must equal ΣE_0/ρ, as expected in classic Fano tests. The implementation of the test is performed in Geant4 and PENH. The geometry used for testing is a 10 × 10 cm² parallel virtual field and a cavity (2 × 2 × 0.2 cm³ in size) in a water phantom with dimensions large enough to ensure proton equilibrium. Results: For conservative user-defined simulation parameters (leading to small step sizes), both Geant4 and PENH pass the Fano cavity test within 0.1%. However, differences of 0.6% and 0.7% were observed for PENH and Geant4, respectively, when using larger step sizes. For PENH, the difference is attributed to the random-hinge method, which introduces an artificial energy straggling if the step size is not small enough.
Heavy particle transport in sputtering systems
Trieschmann, Jan
2015-09-01
This contribution discusses the theoretical background of heavy particle transport in plasma sputtering systems such as direct current magnetron sputtering (dcMS), high power impulse magnetron sputtering (HiPIMS), or multi-frequency capacitively coupled plasmas (MFCCP). Due to inherently low process pressures below one Pa, only kinetic simulation models are suitable. In this work a model appropriate for describing the transport of film-forming particles sputtered off a target material has been devised within the framework of the OpenFOAM software (specifically dsmcFoam). The three-dimensional model comprises the ejection of sputtered particles into the reactor chamber, their collisional transport through the volume, and their deposition onto the surrounding surfaces (i.e., substrates and walls). An angular-dependent Thompson energy distribution fitted to results from Monte Carlo simulations is assumed initially. Binary collisions are treated via the M1 collision model, a modified variable hard sphere (VHS) model. The dynamics of the sputtered and background gas species can be resolved self-consistently following the direct simulation Monte Carlo (DSMC) approach or, whenever possible, simplified with the test particle method (TPM) under the assumption of a constant, non-stationary background at a given temperature. Using an MFCCP research reactor as an example, the transport of sputtered aluminum is specifically discussed. For this particular configuration, and under typical process conditions with argon as process gas, the transport of aluminum sputtered off a circular target is shown to be governed by a one-dimensional interaction of the imposed and backscattered particle fluxes. The results are analyzed and discussed on the basis of the obtained velocity distribution functions (VDF). This work is supported by the German Research Foundation (DFG) in the frame of the Collaborative Research Centre TRR 87.
Particle transport in field-reversed configurations
Energy Technology Data Exchange (ETDEWEB)
Tuszewski, M.; Linford, R.K.
1982-05-01
Particle transport in field-reversed configurations is investigated using a one-dimensional, nondecaying, magnetic field structure. The radial profiles are constrained to satisfy an average β condition from two-dimensional equilibrium and a boundary condition at the separatrix to model the balance between closed and open-field-line transport. When applied to the FRX-B experimental data and to the projected performance of the FRX-C device, this model suggests that the particle confinement times obtained with anomalous lower-hybrid-drift transport are in good agreement with the available numerical and experimental data. Larger values of confinement times can be achieved by increasing the ratio of the separatrix radius to the conducting wall radius. Even larger increases in lifetimes might be obtained by improving the open-field-line confinement.
Transport with three-particle interaction
International Nuclear Information System (INIS)
Morawetz, K.
2000-01-01
Starting from a point-like two- and three-particle interaction, the kinetic equation is derived. While the drift term of the kinetic equation turns out to be determined by the known Skyrme mean field, the collision integral appears in two- and three-particle parts. The cross section results from the same microscopic footing and is naturally density dependent due to the three-particle force. In this way, no hybrid model for drift and cross section is needed for nuclear transport. The resulting equation of state has, besides the mean-field correlation energy, also two- and three-particle correlation energies, both of which are calculated analytically for the ground state. These energies contribute to the equation of state and lead to the occurrence of a maximum in the total energy at three times nuclear density. (author)
Quantum Behaved Particle Swarm Optimization Algorithm Based on Artificial Fish Swarm
Yumin, Dong; Li, Zhao
2014-01-01
The quantum-behaved particle swarm algorithm is a new intelligent optimization algorithm; it has few parameters and is easily implemented. In view of the premature convergence problem of the existing quantum-behaved particle swarm optimization algorithm, a quantum particle swarm optimization algorithm based on artificial fish swarm is put forward. The new algorithm builds on the quantum-behaved particle swarm algorithm, introducing the swarming and following behaviors, meanwhile using the a...
An Efficient Sleepy Algorithm for Particle-Based Fluids
Directory of Open Access Journals (Sweden)
Xiao Nie
2014-01-01
We present a novel Smoothed Particle Hydrodynamics (SPH) based algorithm for efficiently simulating compressible and weakly compressible particle fluids. Prior particle-based methods simulate all fluid particles; however, in many cases some particles appearing to be at rest can be safely ignored without notably affecting the fluid flow behavior. To identify these particles, a novel sleepy strategy is introduced. By utilizing this strategy, only a portion of the fluid particles requires computational resources; thus an obvious performance gain can be achieved. In addition, in order to resolve the unphysical clumping issue due to tensile instability in SPH-based methods, a new artificial repulsive force is provided. We demonstrate that our approach can be easily integrated with existing SPH-based methods to improve efficiency without sacrificing visual quality.
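The bookkeeping behind such a sleepy strategy can be sketched compactly. The following is an illustrative Python sketch, not the paper's actual rules or code: particles slower than a threshold are flagged asleep, but a sleeper is woken whenever an awake particle enters its neighbourhood, since incoming flow may soon disturb it (the names `v_eps` and `r_neigh` are assumptions).

```python
import numpy as np

def update_sleep_flags(pos, vel, v_eps=1e-3, r_neigh=0.1):
    """One bookkeeping pass of a 'sleepy' strategy (illustrative sketch).

    Particles with speed below v_eps are candidates for sleep; a sleeping
    candidate is woken if any awake particle lies within r_neigh of it.
    The neighbour search is O(N^2) here; real SPH codes use spatial grids.
    Chained wake-ups would be handled on the next pass.
    """
    awake = np.linalg.norm(vel, axis=1) >= v_eps
    active = np.flatnonzero(awake)
    for i in np.flatnonzero(~awake):
        if active.size and np.min(np.linalg.norm(pos[active] - pos[i], axis=1)) < r_neigh:
            awake[i] = True
    return awake

# Particle 0 moves; particle 2 rests within r_neigh of it and must stay awake;
# particle 1 rests far away and can safely be skipped by the SPH update.
pos = np.array([[0.0, 0.0], [5.0, 0.0], [0.05, 0.0]])
vel = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 0.0]])
flags = update_sleep_flags(pos, vel)
```

Only particles with `flags[i]` set would then enter the (omitted) SPH force evaluation and time integration, which is where the claimed performance gain comes from.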
Particle tracing in the magnetosphere: New algorithms and results
International Nuclear Information System (INIS)
Sheldon, R.B.; Gaffey, J.D. Jr.
1993-01-01
The authors present new algorithms for calculating charged-particle trajectories in realistic magnetospheric fields in a fast and efficient manner. The scheme is based on a Hamiltonian energy conservation principle. It requires that particles conserve the first two adiabatic invariants, and thus also conserve energy. It is applicable to particles ranging in energy from 0.01 to 100 keV, with arbitrary charge and pitch angle. In addition to rapid particle trajectory calculations, it allows topological boundaries to be located efficiently. The results can be combined with fluid models to provide quantitative models of the time development of the whole convecting plasma.
On the Langevin approach to particle transport
International Nuclear Information System (INIS)
Bringuier, Eric
2006-01-01
In the Langevin description of Brownian motion, the action of the surrounding medium upon the Brownian particle is split up into a systematic friction force of Stokes type and a randomly fluctuating force, alternatively termed noise. That simple description accounts for several basic features of particle transport in a medium, making it attractive to teach at the undergraduate level, but its range of applicability is limited. The limitation is illustrated here by showing that the Langevin description fails to account realistically for the transport of a charged particle in a medium under crossed electric and magnetic fields and the ensuing Hall effect. That particular failure is rooted in the concept of the friction force rather than in the accompanying random force. It is then shown that the framework of kinetic theory offers a better account of the Hall effect. It is concluded that the Langevin description is nothing but an extension of Drude's transport model subsuming diffusion, and so it inherits basic limitations from that model. This paper thus describes the interrelationship of the Langevin approach, the Drude model and kinetic theory, in a specific transport problem of physical interest
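The friction-plus-noise picture discussed above is easy to integrate numerically. The following is a minimal Euler-Maruyama sketch (not from the paper; parameters are dimensionless and illustrative, with charge-to-mass ratio set to 1) of a Langevin particle in crossed electric and magnetic fields; the time-averaged velocity reproduces the steady drift that the Stokes-friction model predicts, v_x = γE/(γ²+B²) and v_y = −EB/(γ²+B²), which is the very prediction the article then contrasts with kinetic theory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative, dimensionless parameters: friction rate gamma, electric
# field E along x, magnetic field B along z, noise strength D.
gamma, E, B, D = 1.0, 1.0, 2.0, 0.05
dt, n_steps = 1e-3, 400_000

v = np.zeros(2)          # velocity in the plane perpendicular to B
v_sum = np.zeros(2)
for _ in range(n_steps):
    # Systematic part: Lorentz force (E + v x B) minus Stokes friction -gamma*v
    force = np.array([E + v[1] * B - gamma * v[0],
                      -v[0] * B - gamma * v[1]])
    # Euler-Maruyama step with a white-noise kick of strength sqrt(2 D dt)
    v = v + force * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(2)
    v_sum += v

v_mean = v_sum / n_steps
# Friction-model prediction: v_x = gamma*E/(gamma^2+B^2), v_y = -E*B/(gamma^2+B^2)
```

Setting the deterministic force to zero and solving the two linear equations gives the drift quoted in the comment; with γ = 1, E = 1, B = 2 that is (0.2, −0.4).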
Empirical particle transport model for tokamaks
International Nuclear Information System (INIS)
Petravic, M.; Kuo-Petravic, G.
1986-08-01
A simple empirical particle transport model has been constructed with the purpose of gaining insight into the L- to H-mode transition in tokamaks. The aim was to construct the simplest possible model which would reproduce the measured density profiles in the L-regime, and also produce a qualitatively correct transition to the H-regime without having to assume a completely different transport mode for the bulk of the plasma. Rather than using completely ad hoc constructions for the particle diffusion coefficient, we assume D = (1/5)χ_total, where χ_total ≅ χ_e is the thermal diffusivity, and then use the κ_e = n_e χ_e values derived from experiments. The observed temperature profiles are then automatically reproduced, but nontrivially, the correct density profiles are also obtained, for realistic fueling rates and profiles. Our conclusion is that it is sufficient to reduce the transport coefficients within a few centimeters of the surface to produce the H-mode behavior. An additional simple assumption, concerning the particle mean-free path, leads to a convective transport term which reverses sign a few centimeters inside the surface, as required by the H-mode density profiles
Particle transport due to magnetic fluctuations
International Nuclear Information System (INIS)
Stoneking, M.R.; Hokin, S.A.; Prager, S.C.; Fiksel, G.; Ji, H.; Den Hartog, D.J.
1994-01-01
Electron current fluctuations are measured with an electrostatic energy analyzer at the edge of the MST reversed-field pinch plasma. The radial flux of fast electrons (E > T_e) due to parallel streaming along a fluctuating magnetic field is determined locally by measuring the correlated product ⟨j̃_e B̃_r⟩. Particle transport is small just inside the last closed flux surface (Γ_e,mag ≪ Γ_e,total), but can account for all observed particle losses inside r/a = 0.8. Electron diffusion is found to increase with parallel velocity, as expected for diffusion in a region of field stochasticity.
A dynamic inertia weight particle swarm optimization algorithm
International Nuclear Information System (INIS)
Jiao Bin; Lian Zhigang; Gu Xingsheng
2008-01-01
The particle swarm optimization (PSO) algorithm has developed rapidly and has been applied widely since it was introduced, as it is easily understood and realized. This paper presents an improved particle swarm optimization algorithm (IPSO) to improve the performance of standard PSO, which uses a dynamic inertia weight that decreases as the iterative generation increases. It is tested on a set of 6 benchmark functions with 30, 50 and 150 different dimensions and compared with standard PSO. Experimental results indicate that the IPSO improves the search performance on the benchmark functions significantly.
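The iteration-dependent inertia weight described in this abstract fits in a few lines. The following is an illustrative implementation, not the authors' code (the bounds, seed, and acceleration coefficients c1 = c2 = 2.0 are assumptions), applied here to the sphere benchmark function:

```python
import numpy as np

def pso_dynamic_inertia(f, dim, n_particles=30, iters=200,
                        w_max=0.9, w_min=0.4, c1=2.0, c2=2.0,
                        bounds=(-5.0, 5.0), seed=0):
    """Global-best PSO whose inertia weight decreases linearly with iteration."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for t in range(iters):
        # Dynamic inertia: w shrinks from w_max to w_min as generations pass,
        # shifting the swarm from exploration toward exploitation.
        w = w_max - (w_max - w_min) * t / (iters - 1)
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

sphere = lambda z: float(np.sum(z * z))
best_x, best_f = pso_dynamic_inertia(sphere, dim=10)
```

With the weight fixed instead of decreasing, the same code reduces to standard PSO, which is the comparison the paper reports.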
Machine learning based global particle identification algorithms at the LHCb experiment
Derkach, Denis; Likhomanenko, Tatiana; Rogozhnikov, Aleksei; Ratnikov, Fedor
2017-01-01
One of the most important aspects of data processing at LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging CHerenkov (RICH) detector, the hadronic and electromagnetic calorimeters, and the muon chambers. To improve charged particle identification, several neural networks, including a deep architecture, and gradient boosting have been applied to data. These new approaches provide higher identification efficiencies than existing implementations for all charged particle types. It is also necessary to achieve a flat dependency between efficiencies and spectator variables such as particle momentum, in order to reduce systematic uncertainties during later stages of data analysis. For this purpose, "flat" algorithms that guarantee the flatness property for efficiencies have also been developed. This talk presents this new approach based on machine learning and its performance.
A decoupled power flow algorithm using particle swarm optimization technique
International Nuclear Information System (INIS)
Acharjee, P.; Goswami, S.K.
2009-01-01
A robust, nondivergent power flow method has been developed using the particle swarm optimization (PSO) technique. The decoupling properties between the power system quantities have been exploited in developing the power flow algorithm. The speed of the power flow algorithm has been improved using a simple perturbation technique. The basic power flow algorithm and the improvement scheme have been designed to retain the simplicity of the evolutionary approach. The power flow is rugged, can determine the critical loading conditions and also can handle the flexible alternating current transmission system (FACTS) devices efficiently. Test results on standard test systems show that the proposed method can find the solution when the standard power flows fail.
Lorentz covariant canonical symplectic algorithms for dynamics of charged particles
Wang, Yulei; Liu, Jian; Qin, Hong
2016-12-01
In this paper, the Lorentz covariance of algorithms is introduced. Under Lorentz transformation, both the form and performance of a Lorentz covariant algorithm are invariant. To acquire the advantages of symplectic algorithms and Lorentz covariance, a general procedure for constructing Lorentz covariant canonical symplectic algorithms (LCCSAs) is provided, based on which an explicit LCCSA for dynamics of relativistic charged particles is built. LCCSA possesses Lorentz invariance as well as long-term numerical accuracy and stability, due to the preservation of a discrete symplectic structure and the Lorentz symmetry of the system. For situations with time-dependent electromagnetic fields, which are difficult to handle in traditional construction procedures of symplectic algorithms, LCCSA provides a perfect explicit canonical symplectic solution by implementing the discretization in 4-spacetime. We also show that LCCSA has built-in energy-based adaptive time steps, which can optimize the computation performance when the Lorentz factor varies.
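The paper's LCCSA itself is involved, but the flavor of a structure-preserving charged-particle integrator can be conveyed with the standard nonrelativistic Boris scheme, which likewise preserves phase-space structure and shows long-term stability. This is an illustrative sketch, not the authors' algorithm; units are normalized so that q/m enters as a single parameter.

```python
import numpy as np

def boris_push(x, v, E, B, qm, dt, n_steps):
    """Standard Boris scheme for dx/dt = v, dv/dt = qm * (E + v x B).

    The update splits each step into a half electric kick, an exact
    rotation about B, and a second half kick; for E = 0 the rotation
    conserves kinetic energy to machine precision, step after step.
    """
    for _ in range(n_steps):
        v_minus = v + 0.5 * qm * dt * E           # half electric kick
        t = 0.5 * qm * dt * B                     # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v = v_minus + np.cross(v_prime, s)        # rotation about B
        v = v + 0.5 * qm * dt * E                 # second half kick
        x = x + v * dt
    return x, v

B = np.array([0.0, 0.0, 1.0])
E0 = np.zeros(3)
x0, v0 = np.zeros(3), np.array([1.0, 0.0, 0.0])
x, v = boris_push(x0, v0, E0, B, qm=1.0, dt=0.1, n_steps=10_000)
# With E = 0 the speed |v| stays at 1 over all 10,000 gyration steps.
```

The LCCSA of the paper extends this kind of structure preservation to a Lorentz-covariant, canonically symplectic setting by discretizing in 4-spacetime.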
Solar energetic particle anisotropies and insights into particle transport
Energy Technology Data Exchange (ETDEWEB)
Leske, R. A., E-mail: ral@srl.caltech.edu; Cummings, A. C.; Cohen, C. M. S.; Mewaldt, R. A.; Labrador, A. W.; Stone, E. C. [California Institute of Technology, Pasadena, CA 91125 (United States); Wiedenbeck, M. E. [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States); Christian, E. R.; Rosenvinge, T. T. von [NASA/Goddard Space Flight Center, Greenbelt, MD 20771 (United States)
2016-03-25
As solar energetic particles (SEPs) travel through interplanetary space, their pitch-angle distributions are shaped by the competing effects of magnetic focusing and scattering. Measurements of SEP anisotropies can therefore reveal information about interplanetary conditions such as magnetic field strength, topology, and turbulence levels at remote locations from the observer. Onboard each of the two STEREO spacecraft, the Low Energy Telescope (LET) measures pitch-angle distributions for protons and heavier ions up to iron at energies of about 2-12 MeV/nucleon. Anisotropies observed using LET include bidirectional flows within interplanetary coronal mass ejections, sunward-flowing particles when STEREO was magnetically connected to the back side of a shock, and loss-cone distributions in which particles with large pitch angles underwent magnetic mirroring at an interplanetary field enhancement that was too weak to reflect particles with the smallest pitch angles. Unusual oscillations in the width of a beamed distribution at the onset of the 23 July 2012 SEP event were also observed and remain puzzling. We report LET anisotropy observations at both STEREO spacecraft and discuss their implications for SEP transport, focusing exclusively on the extreme event of 23 July 2012 in which a large variety of anisotropies were present at various times during the event.
Gyrokinetic particle simulation of neoclassical transport
International Nuclear Information System (INIS)
Lin, Z.; Tang, W.M.; Lee, W.W.
1995-01-01
A time varying weighting (δf) scheme for gyrokinetic particle simulation is applied to a steady-state, multispecies simulation of neoclassical transport. Accurate collision operators conserving momentum and energy are developed and implemented. Simulation results using these operators are found to agree very well with neoclassical theory. For example, it is dynamically demonstrated that like-particle collisions produce no particle flux and that the neoclassical fluxes are ambipolar for an ion-electron plasma. An important physics feature of the present scheme is the introduction of toroidal flow to the simulations. Simulation results are in agreement with the existing analytical neoclassical theory. The poloidal electric field associated with toroidal mass flow is found to enhance density gradient-driven electron particle flux and the bootstrap current while reducing temperature gradient-driven flux and current. Finally, neoclassical theory in steep gradient profiles relevant to the edge regime is examined by taking into account finite banana width effects. In general, in the present work a valuable new capability for studying important aspects of neoclassical transport inaccessible by conventional analytical calculation processes is demonstrated. copyright 1995 American Institute of Physics
Effects of Random Values for Particle Swarm Optimization Algorithm
Directory of Open Access Journals (Sweden)
Hou-Ping Dai
2018-02-01
The particle swarm optimization (PSO) algorithm is generally improved by adaptively adjusting the inertia weight or combining with other evolution algorithms. However, in most modified PSO algorithms, the random values are always generated by a uniform distribution in the range of [0, 1]. In this study, random values generated by the uniform distributions on [0, 1] and [−1, 1] and by the Gauss distribution with mean 0 and variance 1 (U[0,1], U[−1,1] and G(0,1)) are respectively used in the standard PSO and linear decreasing inertia weight (LDIW) PSO algorithms. For comparison, the deterministic PSO algorithm, in which the random values are set to 0.5, is also investigated in this study. Some benchmark functions and the pressure vessel design problem are selected to test these algorithms with different types of random values in three space dimensions (10, 30, and 100). The experimental results show that the standard PSO and LDIW-PSO algorithms with random values generated by U[−1,1] or G(0,1) are more likely to avoid falling into local optima and quickly obtain the global optima. This is because the large-scale random values can expand the range of particle velocities, making a particle more likely to escape from local optima and obtain the global optima. Although the random values generated by U[−1,1] or G(0,1) are beneficial to improving the global searching ability, the local searching ability for a low-dimensional practical optimization problem may be decreased due to the finite number of particles.
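Swapping the random-value source amounts to making the stochastic factors r1, r2 pluggable in the velocity update. The sketch below (illustrative only, not the study's code or test suite) runs a compact LDIW-PSO on the sphere function with each of the three distributions the abstract compares:

```python
import numpy as np

def ldiw_pso(f, dim, rand, n=30, iters=300, w0=0.9, w1=0.4,
             c1=2.0, c2=2.0, seed=1):
    """Minimal LDIW-PSO where rand(rng, shape) supplies the factors r1, r2."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pb, pbf = x.copy(), np.array([f(p) for p in x])
    g = pb[pbf.argmin()].copy()
    for t in range(iters):
        w = w0 - (w0 - w1) * t / (iters - 1)       # linearly decreasing inertia
        r1, r2 = rand(rng, (n, dim)), rand(rng, (n, dim))
        v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, -5.0, 5.0)
        fx = np.array([f(p) for p in x])
        improved = fx < pbf
        pb[improved], pbf[improved] = x[improved], fx[improved]
        g = pb[pbf.argmin()].copy()
    return float(pbf.min())

sphere = lambda z: float(z @ z)
u01 = lambda rng, s: rng.random(s)                 # U[0, 1] (the usual choice)
u11 = lambda rng, s: rng.uniform(-1.0, 1.0, s)     # U[-1, 1]
g01 = lambda rng, s: rng.standard_normal(s)        # G(0, 1)
results = {name: ldiw_pso(sphere, 10, r)
           for name, r in [("U[0,1]", u01), ("U[-1,1]", u11), ("G(0,1)", g01)]}
```

The U[−1,1] and G(0,1) generators can make the attraction terms point away from the attractors, which is exactly the velocity-range expansion the study credits for the improved escape from local optima.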
International Nuclear Information System (INIS)
Apisit, Patchimpattapong; Alireza, Haghighat; Shedlock, D.
2003-01-01
An expert system for generating an effective mesh distribution for the SN particle transport simulation has been developed. This expert system consists of two main parts: 1) an algorithm for generating an effective mesh distribution in a serial environment, and 2) an algorithm for inference of an effective domain decomposition strategy for parallel computing. For the first part, the algorithm prepares an effective mesh distribution considering problem physics and the spatial differencing scheme. For the second part, the algorithm determines a parallel-performance-index (PPI), which is defined as the ratio of the granularity to the degree-of-coupling. The parallel-performance-index provides expected performance of an algorithm depending on computing environment and resources. A large index indicates a high granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems. (authors)
A Synchronous-Asynchronous Particle Swarm Optimisation Algorithm
Ab Aziz, Nor Azlina; Mubin, Marizan; Mohamad, Mohd Saberi; Ab Aziz, Kamarulzaman
2014-01-01
In the original particle swarm optimisation (PSO) algorithm, the particles' velocities and positions are updated after the whole swarm performance is evaluated. This algorithm is also known as synchronous PSO (S-PSO). The strength of this update method is in the exploitation of the information. Asynchronous update PSO (A-PSO) has been proposed as an alternative to S-PSO. A particle in A-PSO updates its velocity and position as soon as its own performance has been evaluated. Hence, particles are updated using partial information, leading to stronger exploration. In this paper, we attempt to improve PSO by merging both update methods to utilise the strengths of both methods. The proposed synchronous-asynchronous PSO (SA-PSO) algorithm divides the particles into smaller groups. The best member of a group and the swarm's best are chosen to lead the search. Members within a group are updated synchronously, while the groups themselves are asynchronously updated. Five well-known unimodal functions, four multimodal functions, and a real world optimisation problem are used to study the performance of SA-PSO, which is compared with the performances of S-PSO and A-PSO. The results are statistically analysed and show that the proposed SA-PSO has performed consistently well. PMID:25121109
Economic dispatch optimization algorithm based on particle diffusion
International Nuclear Information System (INIS)
Han, Li; Romero, Carlos E.; Yao, Zheng
2015-01-01
Highlights: • A dispatch model that considers fuel, emissions control and wind power cost is built. • An optimization algorithm named diffusion particle optimization (DPO) is proposed. • DPO was used to analyze the impact of wind power risk and emissions on dispatch. - Abstract: Due to the widespread installation of emissions control equipment in fossil fuel-fired power plants, the cost of emissions control needs to be considered, together with the plant fuel cost, in providing economic power dispatch of those units to the grid. On the other hand, while using wind power decreases the overall power generation cost for the power grid, it poses a risk to a traditional grid because of its inherent stochastic characteristics. Therefore, an economic dispatch optimization model needs to consider all of the fuel cost, emissions control cost and wind power cost for each of the generating units comprising the fleet that meets the required grid power demand. In this study, an optimization algorithm referred to as diffusion particle optimization (DPO) is proposed to solve such a complex optimization problem. In this algorithm, Brownian motion theory is used to guide the movement of particles so that the particles can search for an optimal solution over the entire definition region. Several benchmark functions and power grid system data were used to test the performance of DPO, compared to traditional algorithms used for economic dispatch optimization such as particle swarm optimization and the artificial bee colony algorithm. It was found that DPO has less probability of being trapped in local optima. According to results for different power systems, DPO was able to find economic dispatch solutions with lower costs. DPO was also used to analyze the impact of wind power risk and fossil unit emissions coefficients on power dispatch. The results are encouraging for the use of DPO as a dynamic tool for economic dispatch of the power grid.
Transport of Particle Swarms Through Fractures
Boomsma, E.; Pyrak-Nolte, L. J.
2011-12-01
The transport of engineered micro- and nano-scale particles through fractured rock is often assumed to occur as dispersions or emulsions. Another potential transport mechanism is the release of particle swarms from natural or industrial processes where small liquid drops, containing thousands to millions of colloidal-size particles, are released over time from seepage or leaks. Swarms have higher velocities than any individual colloid because the interactions among the particles maintain the cohesiveness of the swarm as it falls under gravity. Thus particle swarms give rise to the possibility that engineered particles may be transported farther and faster in fractures than predicted by traditional dispersion models. In this study, the effect of fractures on colloidal swarm cohesiveness and evolution was studied as a swarm falls under gravity and interacts with fracture walls. Transparent acrylic was used to fabricate synthetic fracture samples with either (1) a uniform aperture or (2) a converging aperture followed by a uniform aperture (funnel-shaped). The samples consisted of two blocks that measured 100 x 100 x 50 mm. The separation between these blocks determined the aperture (0.5 mm to 50 mm). During experiments, a fracture was fully submerged in water and swarms were released into it. The swarms consisted of dilute suspensions of either 25 micron soda-lime glass beads (2% by mass) or 3 micron polystyrene fluorescent beads (1% by mass) with an initial volume of 5μL. The swarms were illuminated with a green (525 nm) LED array and imaged optically with a CCD camera. In the uniform aperture fracture, the speed of the swarm prior to bifurcation increased with aperture up to a maximum at a fracture width of approximately 10 mm. For apertures greater than ~15 mm, the velocity was essentially constant with fracture width (but less than at 10 mm). This peak suggests that two competing mechanisms affect swarm velocity in fractures. The wall provides both drag, which
Sawtooth driven particle transport in tokamak plasmas
International Nuclear Information System (INIS)
Nicolas, T.
2013-01-01
The radial transport of particles in tokamaks is one of the most stringent issues faced by the magnetic confinement fusion community, because the fusion power is proportional to the square of the pressure, and also because accumulation of heavy impurities in the core leads to important power losses which can lead to a 'radiative collapse'. Sawteeth and the associated periodic redistribution of the core quantities can significantly impact the radial transport of electrons and impurities. In this thesis, we perform numerical simulations of sawteeth using a nonlinear three-dimensional magnetohydrodynamic code called XTOR-2F to study the particle transport induced by sawtooth crashes. We show that the code recovers, after the crash, the fine structures of electron density that are observed with fast-sweeping reflectometry on the JET and TS tokamaks. The presence of these structures may indicate a low efficiency of the sawtooth in expelling the impurities from the core. However, applying the same code to impurity profiles, we show that the redistribution is quantitatively similar to that predicted by Kadomtsev's model, which could not be predicted a priori. Hence, the sawtooth flushing is ultimately efficient in expelling impurities from the core. (author) [fr]
Ripple enhanced transport of suprathermal alpha particles
International Nuclear Information System (INIS)
Tani, K.; Takizuka, T.; Azumi, M.
1986-01-01
The ripple-enhanced transport of suprathermal alpha particles has been studied with a newly developed Monte Carlo code in which the motion of banana orbits in a toroidal field ripple is described by a mapping method. The existence of ripple-resonance diffusion has been confirmed numerically. We have developed another new code in which the radial displacement of banana orbits is given by the diffusion coefficients from the mapping code or the orbit-following Monte Carlo code. The ripple loss of α particles during slowing down has been estimated by the mapping model code as well as the diffusion model code. Comparison of the results with those from the orbit-following Monte Carlo code shows that all of them agree very well. (author)
Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms.
Garro, Beatriz A; Vázquez, Roberto A
2015-01-01
Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems.
Particle transport in breathing quantum graph
International Nuclear Information System (INIS)
Matrasulov, D.U.; Yusupov, J.R.; Sabirov, K.K.; Sobirov, Z.A.
2012-01-01
Full text: Particle transport in nanoscale networks and discrete structures is of fundamental and practical importance. Usually such systems are modeled by so-called quantum graphs, systems that have attracted much attention in physics and mathematics over the past two decades [1-5], during which quantum graphs have found numerous applications in modeling different discrete structures and networks in nanoscale and mesoscopic physics (e.g., see reviews [1-3]). Despite considerable progress made in the study of particle dynamics, most of the problems deal with the unperturbed case, and the case of time-dependent perturbation has not yet been explored. In this work we treat particle dynamics for a quantum star graph with time-dependent bonds. In particular, we consider harmonically breathing quantum star graphs and the cases of monotonically contracting and expanding graphs. The latter can be solved exactly analytically. Edge boundaries are considered to be time-dependent, while the branching point is assumed to be fixed. Quantum dynamics of a particle in such graphs is studied by solving the Schrodinger equation with time-dependent boundary conditions given on a star graph. The time-dependence of the average kinetic energy is analyzed. The space-time evolution of a Gaussian wave packet is treated for a harmonically breathing star graph. It is found that for certain frequencies the energy is a periodic function of time, while for others it can be a non-monotonically growing function of time. Such a feature can be caused by possible synchronization of the particle's motion and the motions of the moving edges of the graph bonds. (authors) References: [1] Tsampikos Kottos and Uzy Smilansky, Ann. Phys. 274, 76 (1999). [2] Sven Gnutzmann and Uzy Smilansky, Adv. Phys. 55, 527 (2006). [3] S. Gnutzmann, J.P. Keating, F. Piotet, Ann. Phys. 325, 2595 (2010). [4] P. Exner, P. Seba, P. Stovicek, J. Phys. A: Math. Gen. 21, 4009 (1988). [5] J. Boman, P. Kurasov, Adv. Appl. Math. 35, 58 (2005)
Public Transport Route Finding using a Hybrid Genetic Algorithm
Directory of Open Access Journals (Sweden)
Liviu Adrian COTFAS; Andreea DIOSTEANU
2011-01-01
Full Text Available In this paper we present a public transport route finding solution based on a hybrid genetic algorithm. The algorithm uses two heuristics that take into consideration the number of transfers and the remaining distance to the destination station in order to improve the convergence speed. The interface of the system uses the latest web technologies to offer both portability and advanced functionality. The approach has been evaluated using data for the Bucharest public transport network.
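As a rough illustration of the two heuristics described in the abstract, a candidate route's fitness could combine the number of transfers with the remaining straight-line distance to the destination station. All names, weights, and the tournament selection below are assumptions for the sketch, not the authors' implementation:

```python
import random

def route_score(route, dest, stop_coords, w_transfer=1.0, w_dist=0.1):
    # Each line change counts as one transfer; add weighted remaining distance.
    transfers = max(len(route["lines"]) - 1, 0)
    last = stop_coords[route["stops"][-1]]
    remaining = ((last[0] - dest[0]) ** 2 + (last[1] - dest[1]) ** 2) ** 0.5
    return w_transfer * transfers + w_dist * remaining      # lower is better

def select_parent(population, dest, stop_coords, k=3):
    # Tournament selection biased toward low-score (fitter) routes.
    contenders = random.sample(population, min(k, len(population)))
    return min(contenders, key=lambda r: route_score(r, dest, stop_coords))
```

A route that already ends at the destination with no transfers scores zero, so the genetic search is pulled toward short, direct itineraries.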
Modeling Dynamic Objects in Monte Carlo Particle Transport Calculations
International Nuclear Information System (INIS)
Yegin, G.
2008-01-01
In this study, the Multi-Geometry modeling technique was improved in order to handle moving objects in a Monte Carlo particle transport calculation. In the Multi-Geometry technique, the geometry is a superposition of objects, not surfaces. Using this feature, we developed a new algorithm which allows a user to enable or disable geometry elements during particle transport. A disabled object can be ignored at a certain stage of a calculation, and switching among identical copies of the same object located at adjacent points during a particle simulation corresponds to the movement of that object in space. We call this powerful feature the Dynamic Multi-Geometry (DMG) technique; it is used for the first time in the BrachyDose Monte Carlo code to simulate HDR brachytherapy treatment systems. Our results showed that having disabled objects in a geometry does not affect calculated dose values. This technique is also suitable for use in other areas such as IMRT treatment planning systems.
Improved multi-objective clustering algorithm using particle swarm optimization.
Directory of Open Access Journals (Sweden)
Gong, Congcong; Chen, Haisong; He, Weixiong; Zhang, Zhanliang
2017-01-01
Full Text Available Multi-objective clustering has received widespread attention recently, as it can obtain more accurate and reasonable solutions. In this paper, an improved multi-objective clustering framework using particle swarm optimization (IMCPSO) is proposed. First, a novel particle representation for the clustering problem is designed to help PSO search for clustering solutions in continuous space. Second, the distribution of the Pareto set is analyzed, and the results of this analysis are applied to the leader selection strategy to keep the algorithm from becoming trapped in local optima. Moreover, a method for improving clustering solutions is proposed, which greatly increases the efficiency of the search. In experiments on 28 datasets against nine state-of-the-art clustering algorithms, the proposed method is superior to the other approaches on the ARI evaluation index.
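The paper's particle representation is not reproduced here, but a common way to let PSO search clustering solutions in continuous space is to encode the k cluster centres as one flat vector and decode by nearest-centre assignment. The following is a minimal sketch under that assumption:

```python
import numpy as np

def decode_particle(position, k, data):
    # Interpret the particle position as k cluster centres in d dimensions.
    centroids = position.reshape(k, -1)
    dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)                              # cluster label per point

def sse(position, k, data):
    # Within-cluster sum of squared errors, a common clustering objective.
    labels = decode_particle(position, k, data)
    centroids = position.reshape(k, -1)
    return float(((data - centroids[labels]) ** 2).sum())
```

Because the encoding is a plain real vector, the standard PSO velocity and position updates apply unchanged; only the fitness evaluation needs the decode step.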
Particle identification algorithms for the PANDA Endcap Disc DIRC
Schmidt, M.; Ali, A.; Belias, A.; Dzhygadlo, R.; Gerhardt, A.; Götzen, K.; Kalicy, G.; Krebs, M.; Lehmann, D.; Nerling, F.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwarz, C.; Schwiening, J.; Traxler, M.; Böhm, M.; Eyrich, W.; Lehmann, A.; Pfaffinger, M.; Uhlig, F.; Düren, M.; Etzelmüller, E.; Föhl, K.; Hayrapetyan, A.; Kreutzfeld, K.; Merle, O.; Rieke, J.; Wasem, T.; Achenbach, P.; Cardinali, M.; Hoek, M.; Lauth, W.; Schlimme, S.; Sfienti, C.; Thiel, M.
2017-12-01
The Endcap Disc DIRC has been developed to provide excellent particle identification for the future PANDA experiment by separating pions and kaons up to a momentum of 4 GeV/c with a separation power of 3 standard deviations in the polar angle region from 5° to 22°. This goal will be achieved using dedicated particle identification algorithms based on likelihood methods, which will be applied in offline analysis and online event filtering. This paper evaluates the resulting PID performance using Monte Carlo simulations to study basic single-track PID as well as the analysis of complex physics channels. The online reconstruction algorithm has been tested with a Virtex-4 FPGA card and optimized with respect to the resulting constraints.
Ship Block Transportation Scheduling Problem Based on Greedy Algorithm
Directory of Open Access Journals (Sweden)
Chong Wang
2016-05-01
Full Text Available Ship block transportation problems are crucial to address in reducing construction cost and improving the productivity of shipyards. Shipyards aim to maximize the workload balance of transporters under the time constraint that all blocks must be transported during the planning horizon. This process leads to three types of penalty time: empty transporter travel time, delay time, and tardy time. This study aims to minimize the sum of these penalty times. First, the ship block transportation problem is presented with a generalization of the block transportation restriction to multiple transporter types. Second, the problem is transformed into the classical traveling salesman problem and assignment problem through a reasonable model simplification and by adding a virtual node to the proposed directed graph. Then, a heuristic based on the greedy algorithm is proposed to assign blocks to available transporters and sequence the blocks for each transporter simultaneously. Finally, numerical experiments are used to validate the model; the results show that the proposed algorithm is effective in realizing the efficient use of transporters in shipyards. Numerical simulation results demonstrate the promising application of the proposed method to efficiently improve the utilization of transporters and to reduce the cost of ship block logistics for shipyards.
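A hypothetical sketch of the greedy assignment step described above: each block is given to the transporter that can reach its pickup point with the least added cost (empty travel on top of the transporter's current finish time). This mirrors the greedy idea only loosely; the paper's actual penalty terms and sequencing rules are richer.

```python
def greedy_assign(blocks, transporters, travel_time):
    # blocks: list of (block_id, pickup, delivery)
    # transporters: dict id -> current location (mutated as blocks are assigned)
    schedule = {t: [] for t in transporters}
    finish = {t: 0.0 for t in transporters}                  # time each transporter frees up

    for block_id, pickup, delivery in blocks:
        def added_cost(t):
            # Empty travel from the transporter's last position to the pickup.
            return finish[t] + travel_time(transporters[t], pickup)

        best = min(transporters, key=added_cost)
        start = added_cost(best)
        finish[best] = start + travel_time(pickup, delivery)
        transporters[best] = delivery
        schedule[best].append(block_id)
    return schedule
```

On a one-dimensional yard with distance as travel time, nearby blocks naturally cluster onto the nearest transporter, which is the empty-travel penalty the abstract describes.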
Canonical algorithms for numerical integration of charged particle motion equations
Efimov, I. N.; Morozov, E. A.; Morozova, A. R.
2017-02-01
A technique for numerically integrating the equations of charged particle motion in a magnetic field is considered. It is based on canonical transformations of the phase space in Hamiltonian mechanics. The canonical transformations make the integration process stable against the accumulation of round-off errors. The integration algorithms involve a minimal amount of arithmetic and can be used to design accelerators and devices of electron and ion optics.
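The authors' canonical-transformation integrator is not reproduced here. As a minimal illustration of a structure-preserving step for a charged particle in a magnetic field, the following is the standard Boris push (pure rotation for a uniform B field, no electric field), which preserves the particle speed exactly and so avoids the secular energy drift of naive schemes:

```python
import numpy as np

def boris_step(v, B, q_over_m, dt):
    # Rotate the velocity about B by the gyro-angle for one time step.
    t = 0.5 * dt * q_over_m * np.asarray(B, dtype=float)     # half-rotation vector
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = v + np.cross(v, t)
    return v + np.cross(v_prime, s)                          # exactly norm-preserving
```

The norm preservation follows algebraically: the two-stage cross-product update is an exact rotation of v about the direction of B.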
Vectorization of Monte Carlo particle transport
International Nuclear Information System (INIS)
Burns, P.J.; Christon, M.; Schweitzer, R.; Lubeck, O.M.; Wasserman, H.J.; Simmons, M.L.; Pryor, D.V.
1989-01-01
This paper reports that fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements of the vector and scalar implementations were modeled with a modified Amdahl's Law that accounts for additional data motion in the vector code. The performance and implementation strategy of the vector codes are related to architectural features of each machine. Speedups of between fifteen and eighteen on the Cyber 205/ETA-10 architectures, and about nine on the CRAY X-MP/Y-MP architectures, are observed. The best single-processor execution time for the problem was 0.33 seconds on the ETA-10G and 0.42 seconds on the CRAY Y-MP.
Modeling pollutant transport using a meshless-lagrangian particle model
International Nuclear Information System (INIS)
Carrington, D.B.; Pepper, D.W.
2002-01-01
A combined meshless-Lagrangian particle transport model is used to predict pollutant transport over irregular terrain. The numerical model for initializing the velocity field is based on a meshless approach utilizing multiquadrics established by Kansa. The Lagrangian particle transport technique uses a random walk procedure to depict the advection and dispersion of pollutants over any type of surface, including street and city canyons
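A minimal random-walk sketch of the Lagrangian transport step described above: each particle is advected deterministically by the local wind and given a Gaussian dispersion kick. The paper's meshless (multiquadric) velocity interpolation is replaced here by a user-supplied velocity function, so this is an illustration of the random-walk procedure only:

```python
import numpy as np

def random_walk_step(positions, velocity_at, D, dt, rng):
    # Advection by the local velocity plus a Gaussian dispersion kick
    # with variance 2*D*dt per coordinate (Fickian diffusion).
    u = velocity_at(positions)                               # (n, dim) velocities
    kick = rng.normal(0.0, np.sqrt(2.0 * D * dt), positions.shape)
    return positions + u * dt + kick

# Release 1000 particles at the origin in a uniform unit wind.
rng = np.random.default_rng(0)
pts = np.zeros((1000, 2))
for _ in range(10):
    pts = random_walk_step(pts, lambda p: np.full_like(p, 1.0), D=0.5, dt=0.1, rng=rng)
```

After total time t = 1.0 the plume centre sits near the advected distance u*t while the spread grows like sqrt(2*D*t), which is the expected advection-dispersion behaviour.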
Computer codes in particle transport physics
International Nuclear Information System (INIS)
Pesic, M.
2004-01-01
Simulation of the transport and interaction of various particles in complex media over a wide energy range (from 1 MeV up to 1 TeV) is a very complicated problem that requires a valid model of the real process in nature and an appropriate solving tool: a computer code and data library. A brief overview of computer codes based on Monte Carlo techniques for simulating the transport and interaction of hadrons and ions over a wide energy range in three-dimensional (3D) geometry is given. First, attention is paid to the approach to the solution of the problem - the process in nature - through selection of an appropriate 3D model and the corresponding tools: computer codes and cross-section data libraries. The process of collecting and evaluating data from experimental measurements and theoretical approaches to establish reliable libraries of evaluated cross-section data is a long, difficult and not straightforward activity. For this reason, world reference data centers and specialized ones are acknowledged, together with the currently available, state-of-the-art evaluated nuclear data libraries, such as ENDF/B-VI, JEF, JENDL, CENDL, BROND, etc. Codes for experimental and theoretical data evaluation (e.g., SAMMY and GNASH), together with codes for data processing (e.g., NJOY, PREPRO and GRUCON), are briefly described. Examples of data evaluation and data processing to generate computer-usable data libraries are shown. Among the numerous computer codes developed for particle transport physics, only the most general ones are described: MCNPX, FLUKA and SHIELD. A short overview of the basic applications of these codes and the physical models implemented, with their limitations, energy ranges of particles and types of interactions, is given. General information about the codes also covers the programming language, operating system, calculation speed and code availability. An example of increasing the computation speed of the MCNPX code by running on an MPI cluster, compared to the code's sequential option, is given.
An External Archive-Guided Multiobjective Particle Swarm Optimization Algorithm.
Zhu, Qingling; Lin, Qiuzhen; Chen, Weineng; Wong, Ka-Chun; Coello Coello, Carlos A; Li, Jianqiang; Chen, Jianyong; Zhang, Jun
2017-09-01
The selection of swarm leaders (i.e., the personal best and global best) is important in the design of a multiobjective particle swarm optimization (MOPSO) algorithm. Such leaders are expected to effectively guide the swarm to approach the true Pareto optimal front. In this paper, we present a novel external archive-guided MOPSO algorithm (AgMOPSO), where the leaders for the velocity update are all selected from the external archive. In our algorithm, multiobjective optimization problems (MOPs) are transformed into a set of subproblems using a decomposition approach, and each particle is then assigned to optimize one subproblem. A novel archive-guided velocity update method is designed to guide the swarm for exploration, and the external archive is also evolved using an immune-based evolutionary strategy. These proposed approaches speed up the convergence of AgMOPSO. The experimental results fully demonstrate the superiority of our proposed AgMOPSO in solving most of the test problems adopted, in terms of two commonly used performance measures. Moreover, the effectiveness of our proposed archive-guided velocity update method and immune-based evolutionary strategy is also experimentally validated on more than 30 test MOPs.
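The abstract's decomposition step can be sketched with a scalarizing function. The Tchebycheff aggregation below is a common choice for such decompositions, used here purely as an assumed example (the paper does not specify which aggregation AgMOPSO uses); each particle would then optimize one weight vector:

```python
import numpy as np

def tchebycheff(objectives, weight, ideal):
    # Scalarize a multiobjective point: worst weighted distance from the
    # ideal point. Smaller is better for minimization problems.
    return float(np.max(weight * (np.asarray(objectives) - np.asarray(ideal))))

# One subproblem (weight vector) per particle for a 2-objective problem.
weights = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
```

Different weight vectors pull particles toward different regions of the Pareto front, which is what lets a single-objective update rule cover a multiobjective problem.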
Vectorising the detector geometry to optimize particle transport
Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro
2014-01-01
Among the components contributing to particle transport, geometry navigation is an important consumer of CPU cycles. The tasks performed to answer "basic" queries, such as locating a point within a geometry hierarchy or accurately computing the distance to the next boundary, can become very computing-intensive for complex detector setups. So far, existing geometry algorithms employ mainly scalar optimisation strategies (voxelization, caching) to reduce their CPU consumption. In this paper, we take a different approach and investigate how geometry navigation can benefit from the vector instruction set extensions that are one of the primary sources of performance enhancement on current and future hardware. While on paper this form of microparallelism promises increasing performance opportunities, applying the technology to the highly hierarchical and multiply branched geometry code is a difficult challenge. We refer to the current work done to vectorise an important part of the critica...
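To make the vectorisation idea concrete, here is an illustrative "distance to next boundary" query answered for a whole basket of tracks at once, with a single sphere standing in for a real detector solid. NumPy array code stands in for explicit SIMD intrinsics; none of this is the actual VecGeom/Geant4 implementation:

```python
import numpy as np

def distance_to_sphere(origins, directions, radius):
    # Solve |o + t*d|^2 = r^2 for the smallest positive t per track,
    # assuming unit-length directions and points inside the sphere.
    b = np.einsum("ij,ij->i", origins, directions)
    c = np.einsum("ij,ij->i", origins, origins) - radius ** 2
    disc = b * b - c
    t = -b + np.sqrt(np.maximum(disc, 0.0))                  # exit distance
    return np.where(disc >= 0.0, t, np.inf)
```

The key point is that the same quadratic is evaluated for every track with no per-track branching, which is exactly the shape of computation that maps onto vector units.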
New features of the Mercury Monte Carlo particle transport code
International Nuclear Information System (INIS)
Procassini, Richard; Brantley, Patrick; Dawson, Shawn
2010-01-01
Several new capabilities have been added to the Mercury Monte Carlo transport code over the past four years. The most important algorithmic enhancement is a general, extensible infrastructure to support source, tally and variance reduction actions. For each action, the user defines a phase space, as well as any number of responses that are applied to a specified event. Tallies are accumulated into a correlated, multi-dimensional, Cartesian-product result phase space. Our approach employs a common user interface to specify the data sets and distributions that define the phase, response and result for each action. Modifications to the particle trackers include the use of facet halos (instead of extrapolative fuzz) for robust tracking, and material interface reconstruction for use in shape-overlaid meshes. Support for expected-value criticality eigenvalue calculations has also been implemented. Computer science enhancements include an in-line Python interface for user customization of problem setup and output. (author)
Digital signal processing algorithms for nuclear particle spectroscopy
International Nuclear Information System (INIS)
Zejnalova, O.; Zejnalov, Sh.; Hambsch, F.J.; Oberstedt, S.
2007-01-01
Digital signal processing algorithms for nuclear particle spectroscopy are described, along with a digital pile-up elimination method applicable to equidistantly sampled detector signals pre-processed by a charge-sensitive preamplifier. The signal processing algorithms are provided as recursive one- or multi-step procedures which can be easily programmed using modern computer programming languages. The influence of the number of bits of the sampling analogue-to-digital converter on the final signal-to-noise ratio of the spectrometer is considered. Algorithms for a digital shaping-filter amplifier, for a digital pile-up elimination scheme and for ballistic deficit correction were investigated using a high-purity germanium detector. The pile-up elimination method was originally developed for fission fragment spectroscopy using a Frisch-grid back-to-back double ionization chamber and was mainly intended for pile-up elimination in the case of high alpha-radioactivity of the fissile target. The developed pile-up elimination method affects only the electronic noise generated by the preamplifier. Therefore, the influence of the pile-up elimination scheme on the final resolution of the spectrometer is investigated in terms of the distance between pile-up pulses. The efficiency of the developed algorithms is compared with other signal processing schemes published in the literature.
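A hedged sketch of one such recursive shaping filter: the classic moving-sum trapezoid (rise time k, flat top m samples) applied to an already pole-zero-corrected step signal. The paper's exact recursions may differ; this shows only the one-pass recursive structure the abstract describes:

```python
def trapezoidal_filter(x, k, m):
    # Recursive trapezoidal shaper: each output sample is updated from a
    # four-term difference of delayed inputs, so the whole trace is one pass.
    n = len(x)
    out = [0.0] * n
    acc = 0.0
    for i in range(n):
        d = x[i]
        if i >= k:
            d -= x[i - k]
        if i >= k + m:
            d -= x[i - k - m]
        if i >= 2 * k + m:
            d += x[i - 2 * k - m]
        acc += d
        out[i] = acc / k                                     # normalise to step height
    return out
```

For an ideal unit step the response rises over k samples, holds a flat top of height 1 for m samples, then returns to baseline, which is why the flat-top amplitude is a noise-robust estimate of the pulse height.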
High performance stream computing for particle beam transport simulations
International Nuclear Information System (INIS)
Appleby, R; Bailey, D; Higham, J; Salt, M
2008-01-01
Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed
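The stream-computing flavour can be shown in miniature: transporting many particles through a linear beamline element is a single matrix-on-array operation. The drift below uses the standard linear-optics transfer matrix on (x, x') phase-space coordinates; it is an illustration of the data-parallel pattern, not the DIAMOND transfer-line model or the MAD code:

```python
import numpy as np

def drift(coords, L):
    # Drift of length L in linear optics: x -> x + L*x', x' unchanged.
    M = np.array([[1.0, L],
                  [0.0, 1.0]])
    return coords @ M.T                                      # all particles at once

# A tiny bunch: one row of (x [m], x' [rad]) per particle.
bunch = np.array([[0.0, 1e-3], [1e-3, 0.0]])
```

On a GPU the same pattern applies with thousands of particles per kernel launch, which is where the large speedups over per-particle scalar code come from.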
Mechanism of travelling-wave transport of particles
International Nuclear Information System (INIS)
Kawamoto, Hiroyuki; Seki, Kyogo; Kuromiya, Naoyuki
2006-01-01
Numerical and experimental investigations have been carried out on the transport of particles in an electrostatic travelling field. A three-dimensional hard-sphere model of the distinct element method was developed to simulate the dynamics of the particles. The forces applied to particles in the model were the Coulomb force, the dielectrophoretic force on polarized dipole particles in a non-uniform field, the image force, gravity and air drag. Friction and repulsion between particle and particle and between particle and conveyer were included in the model to reset initial conditions after mechanical contacts. Two kinds of experiments were performed to confirm the model. One was the measurement of the charge of particles, which is indispensable for determining the Coulomb force. The charge distribution was measured from the locus of free-falling particles in a parallel electrostatic field, and the averaged charge of the bulk particles was confirmed by measurement with a Faraday cage. The other experiment measured the differential dynamics of particles on a conveyer consisting of parallel electrodes to which a four-phase travelling electrostatic wave was applied. Calculated results agreed with measurements, and the following characteristics were clarified: (1) the Coulomb force is the predominant driving force on the particles compared with the other forces; (2) the direction of particle transport did not always coincide with that of the travelling wave but changed in part, depending on the frequency of the travelling wave, the particle diameter and the electric field; (3) although some particles overtook the travelling wave at very low frequency, the motion of particles was almost synchronized with the wave at low frequency; and (4) the transport of some particles lagged the wave at medium frequency, the majority of particles were transported backwards at high frequency, and particles were not transported but only vibrated at very high frequency.
Data decomposition of Monte Carlo particle transport simulations via tally servers
International Nuclear Information System (INIS)
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord
2013-01-01
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations
Barão, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro
2001-01-01
This book focuses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving in particular the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and to the analysis of experiments and measurements in a variety of fields ranging from particle physics to medical physics.
Optimal configuration of power grid sources based on optimal particle swarm algorithm
Wen, Yuanhua
2018-04-01
In order to optimize the configuration of power grid sources, an improved particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are introduced. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated. Comparison of the test results of each algorithm demonstrates the superiority of the improved algorithm in convergence and optimization performance, which lays the foundation for the subsequent solution of the micro-grid power optimization configuration.
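For reference, the textbook PSO update that such improved variants build on is shown below. The inertia weight w and acceleration coefficients c1, c2 are assumed typical values, not the paper's tuned parameters:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # Classical PSO: inertia + cognitive pull toward the personal best
    # + social pull toward the global best, then a position update.
    r1, r2 = random.random(), random.random()
    v_new = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```

Improved variants typically modify w over the iterations or change how pbest/gbest are selected; the two-term pull structure stays the same.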
Energy Technology Data Exchange (ETDEWEB)
Huang, Xiaobiao, E-mail: xiahuang@slac.stanford.edu; Safranek, James
2014-09-01
Nonlinear dynamics optimization is carried out for a low emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms is compared. The results show that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm, and it does not require seeding of good solutions in the initial population. These advantages of the particle swarm algorithm may make it more suitable for many accelerator optimization applications.
A multi-frame particle tracking algorithm robust against input noise
International Nuclear Information System (INIS)
Li, Dongning; Zhang, Yuanhui; Sun, Yigang; Yan, Wei
2008-01-01
The performance of a particle tracking algorithm that detects particle trajectories from discretely recorded particle positions can be substantially hindered by input noise. In this paper, a particle tracking algorithm is developed which is robust against input noise. This algorithm employs a regression method, instead of the extrapolation method usually employed by existing algorithms, to predict future particle positions. If a trajectory cannot be linked to a particle at a frame, the algorithm can still proceed by trying to find a candidate at the next frame. The connectivity of tracked trajectories is inspected to remove false ones. The algorithm is validated with synthetic data. The results show that the algorithm is superior to traditional algorithms in tracking long trajectories.
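The regression idea can be sketched as follows: instead of extrapolating from the last two frames, fit a least-squares polynomial to the last few observed positions and evaluate it at the next frame, which averages out input noise. Window size and polynomial degree are assumptions; the paper's exact regression model may differ:

```python
import numpy as np

def predict_next(positions, degree=1, window=4):
    # Fit each coordinate over the last `window` frames and extrapolate
    # one frame ahead via least squares rather than raw extrapolation.
    pts = np.asarray(positions[-window:], dtype=float)
    t = np.arange(len(pts))
    coeffs = [np.polyfit(t, pts[:, d], degree) for d in range(pts.shape[1])]
    return np.array([np.polyval(c, len(pts)) for c in coeffs])
```

For noiseless straight-line motion the prediction is exact; for noisy input, the fit uses all window points, so a single bad measurement perturbs the prediction far less than two-point extrapolation would.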
Sustainable logistics and transportation optimization models and algorithms
Gakis, Konstantinos; Pardalos, Panos
2017-01-01
Focused on the logistics and transportation operations within a supply chain, this book brings together the latest models, algorithms, and optimization possibilities. Logistics and transportation problems are examined within a sustainability perspective to offer a comprehensive assessment of environmental, social, ethical, and economic performance measures. Featured models, techniques, and algorithms may be used to construct policies on alternative transportation modes and technologies, green logistics, and incentives by the incorporation of environmental, economic, and social measures. Researchers, professionals, and graduate students in urban regional planning, logistics, transport systems, optimization, supply chain management, business administration, information science, mathematics, and industrial and systems engineering will find the real life and interdisciplinary issues presented in this book informative and useful.
High energy electromagnetic particle transportation on the GPU
Energy Technology Data Exchange (ETDEWEB)
Canal, P. [Fermilab; Elvira, D. [Fermilab; Jun, S. Y. [Fermilab; Kowalkowski, J. [Fermilab; Paterno, M. [Fermilab; Apostolakis, J. [CERN
2014-01-01
We present massively parallel high-energy electromagnetic particle transport through a finely segmented detector on a Graphics Processing Unit (GPU). Simulating events of energetic particle decay in a general-purpose high energy physics (HEP) detector requires intensive computing resources, due to the complexity of the geometry as well as the physics processes applied to particles copiously produced by primary collisions and secondary interactions. The recent advent of many-core and accelerated processor architectures provides a variety of concurrent programming models applicable not only to high performance parallel computing, but also to conventional compute-intensive applications such as HEP detector simulation. The components of our prototype are a transportation process under a non-uniform magnetic field, geometry navigation with a set of solid shapes and materials, electromagnetic physics processes for electrons and photons, and an interface to a framework that dispatches bundles of tracks in a highly vectorized manner, optimizing for spatial locality and throughput. Core algorithms and methods are excerpted from the Geant4 toolkit, and are modified and optimized for the GPU application. Program kernels written in C/C++ are designed to be compatible with CUDA and OpenCL, with the aim of being generic enough for easy porting to future programming models and hardware architectures. To improve throughput by overlapping data transfers with kernel execution, multiple CUDA streams are used. Issues with floating point accuracy, random number generation, data structures, kernel divergence and register spills are also considered. A performance evaluation of the relative speedup compared to the corresponding sequential execution on a CPU is presented as well.
Dose calculations algorithm for narrow heavy charged-particle beams
Energy Technology Data Exchange (ETDEWEB)
Barna, E A; Kappas, C [Department of Medical Physics, School of Medicine, University of Patras (Greece); Scarlat, F [National Institute for Laser and Plasma Physics, Bucharest (Romania)
1999-12-31
The dose distributional advantages of heavy charged particles can be fully exploited by using very efficient and accurate dose calculation algorithms, which can generate optimal three-dimensional scanning patterns. An inverse therapy planning algorithm for dynamically scanned, narrow heavy charged-particle beams is presented in this paper. The irradiation 'start point' is defined at the distal end of the target volume, bottom-right in a beam's eye view. The peak dose of the first elementary beam is set equal to the prescribed dose in the target volume and is defined as the reference dose. The weighting factor of any Bragg peak is determined by the residual dose at the point of irradiation, calculated as the difference between the reference dose and the cumulative dose delivered at that point of irradiation by all the previous Bragg peaks. The final pattern consists of the weighted Bragg-peak irradiation density. Dose distributions were computed using two different scanning steps, equal to 0.5 mm and 1 mm respectively. Very accurate and precisely localized dose distributions, conforming to the target volume, were obtained. (authors) 6 refs., 3 figs.
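The residual-dose weighting rule described above can be sketched directly: scan points are visited from the distal end, and each Bragg peak's weight is the reference dose minus the dose already delivered at that point by earlier peaks. The `dose_kernel` function and the ordering below are hypothetical stand-ins for the paper's beam model:

```python
def peak_weights(scan_points, dose_kernel, reference_dose):
    # dose_kernel(i, j): dose at point j from a unit-weight peak at point i.
    # Points are assumed pre-sorted distal-first, as the paper prescribes.
    weights = []
    delivered = [0.0] * len(scan_points)
    for i in range(len(scan_points)):
        w = max(reference_dose - delivered[i], 0.0)          # residual dose here
        weights.append(w)
        for j in range(len(scan_points)):
            delivered[j] += w * dose_kernel(i, j)
    return weights
```

Because each distal peak's entrance plateau pre-doses the more proximal points, the proximal weights shrink automatically, which is what flattens the cumulative dose across the target.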
Entropic Ratchet transport of interacting active Brownian particles
Energy Technology Data Exchange (ETDEWEB)
Ai, Bao-Quan, E-mail: aibq@hotmail.com [Laboratory of Quantum Engineering and Quantum Materials, School of Physics and Telecommunication Engineering, South China Normal University, 510006 Guangzhou (China); He, Ya-Feng [College of Physics Science and Technology, Hebei University, 071002 Baoding (China); Zhong, Wei-Rong, E-mail: wrzhong@jnu.edu.cn [Department of Physics and Siyuan Laboratory, College of Science and Engineering, Jinan University, 510632 Guangzhou (China)
2014-11-21
Directed transport of interacting active (self-propelled) Brownian particles is numerically investigated in confined geometries (entropic barriers). The self-propelled velocity can break thermodynamic equilibrium and induce directed transport. It is found that the interaction between active particles can greatly affect the ratchet transport. For attractive particles, on increasing the interaction strength the average velocity first decreases to a minimum, then increases, and finally decreases to zero. For repulsive particles, when the interaction is weak there exists a critical interaction at which the average velocity is minimal, nearly tending to zero; for strong interaction, however, the average velocity is independent of the interaction.
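A minimal single-particle sketch of the active Brownian dynamics underlying studies like the one above: overdamped motion at self-propulsion speed v0 along an orientation that diffuses rotationally. Interactions, thermal translational noise, and the entropic barriers are all omitted, so this illustrates only the base model:

```python
import math
import random

def abp_step(x, y, theta, v0, Dr, dt, rng):
    # Overdamped active Brownian particle: ballistic motion along the
    # heading, plus rotational diffusion of the heading itself.
    x += v0 * math.cos(theta) * dt
    y += v0 * math.sin(theta) * dt
    theta += math.sqrt(2.0 * Dr * dt) * rng.gauss(0.0, 1.0)
    return x, y, theta
```

With Dr = 0 the motion is purely ballistic; increasing Dr shortens the persistence length 1/Dr, which is the knob that controls how strongly activity drives ratchet transport in confined geometries.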
Transport of the moving barrier driven by chiral active particles
Liao, Jing-jing; Huang, Xiao-qun; Ai, Bao-quan
2018-03-01
Transport of a moving V-shaped barrier exposed to a bath of chiral active particles is investigated in a two-dimensional channel. Due to the chirality of active particles and the transversal asymmetry of the barrier position, active particles can power and steer the directed transport of the barrier in the longitudinal direction. The transport of the barrier is determined by the chirality of active particles. The moving barrier and active particles move in the opposite directions. The average velocity of the barrier is much larger than that of active particles. There exist optimal parameters (the chirality, the self-propulsion speed, the packing fraction, and the channel width) at which the average velocity of the barrier takes its maximal value. In particular, tailoring the geometry of the barrier and the active concentration provides novel strategies to control the transport properties of micro-objects or cargoes in an active medium.
Parallelization of a spherical S_N transport theory algorithm
International Nuclear Information System (INIS)
Haghighat, A.
1989-01-01
The work described in this paper derives a parallel algorithm for an R-dependent spherical S_N transport theory algorithm and studies its performance by testing different sample problems. The S_N transport method is one of the most accurate techniques used to solve the linear Boltzmann equation. Several studies have been done on the vectorization of S_N algorithms; however, very few studies have been performed on the parallelization of this algorithm. Weinke and Hommoto have looked at the parallel processing of the different energy groups, and Azmy recently studied the parallel processing of the inner iterations of an X-Y S_N nodal transport theory method. Both studies have reported very encouraging results, which have prompted us to look at the parallel processing of an R-dependent S_N spherical geometry algorithm. This geometry was chosen because, in spite of its simplicity, it contains the complications of the curvilinear geometries (i.e., redistribution of neutrons over the discretized angular bins).
An analysis of 3D particle path integration algorithms
International Nuclear Information System (INIS)
Darmofal, D.L.; Haimes, R.
1996-01-01
Several techniques for the numerical integration of particle paths in steady and unsteady vector (velocity) fields are analyzed. Most of the analysis applies to unsteady vector fields; however, some results also apply to steady vector field integration. Multistep, multistage, and some hybrid schemes are considered. It is shown that due to initialization errors, many unsteady particle path integration schemes are limited to third-order accuracy in time. Multistage schemes require at least three times more internal data storage than multistep schemes of equal order. However, for timesteps within the stability bounds, multistage schemes are generally more accurate. A linearized analysis shows that the stability of these integration algorithms is determined by the eigenvalues of the local velocity tensor. Thus, the accuracy and stability of the methods are interpreted with concepts typically used in critical point theory. This paper shows how integration schemes can lead to erroneous classification of critical points when the timestep is finite and fixed. For steady velocity fields, we demonstrate that timesteps outside of the relative stability region can lead to similar integration errors. From this analysis, guidelines for accurate timestep sizing are suggested for both steady and unsteady flows. In particular, using simulation data for the unsteady flow around a tapered cylinder, we show that accurate particle path integration requires timesteps which are at most on the order of the physical timescale of the flow.
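As a minimal illustration of these accuracy and stability points (not the paper's test cases), consider a steady solid-body-rotation velocity field, whose exact particle paths are circles: a single-step forward Euler integrator spirals off the circle, while a classical four-stage Runge-Kutta scheme holds the radius to high accuracy.

```python
import numpy as np

def velocity(x):
    """Steady 2-D solid-body rotation: exact particle paths are circles."""
    return np.array([-x[1], x[0]])

def euler_step(x, dt):
    return x + dt * velocity(x)

def rk4_step(x, dt):
    k1 = velocity(x)
    k2 = velocity(x + 0.5 * dt * k1)
    k3 = velocity(x + 0.5 * dt * k2)
    k4 = velocity(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, x0, dt, n):
    x = np.array(x0, dtype=float)
    for _ in range(n):
        x = step(x, dt)
    return x

# One full revolution (period 2*pi); drift of the radius measures the error.
x0, dt, n = [1.0, 0.0], 2 * np.pi / 200, 200
r_euler = np.linalg.norm(integrate(euler_step, x0, dt, n))
r_rk4 = np.linalg.norm(integrate(rk4_step, x0, dt, n))
# Euler spirals outward; RK4 stays on the unit circle to high accuracy.
```

With a fixed finite timestep, the Euler path's outward spiral would misclassify a center-type critical point as an unstable focus, which is the kind of error the paper analyzes.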
Particle filters for object tracking: enhanced algorithm and efficient implementations
International Nuclear Information System (INIS)
Abd El-Halym, H.A.
2010-01-01
Object tracking and recognition is a hot research topic. In spite of the extensive research efforts expended, the development of a robust and efficient object tracking algorithm remains unsolved due to the inherent difficulty of the tracking problem. Particle filters (PFs) were recently introduced as a powerful, post-Kalman-filter estimation tool that provides a general framework for the estimation of nonlinear/non-Gaussian dynamic systems. Particle filters were advanced for building robust object trackers capable of operation under severe conditions (small image size, noisy background, occlusions, fast object maneuvers, etc.). The heavy computational load of the particle filter remains a major obstacle towards its wide use. In this thesis, an Excitation Particle Filter (EPF) is introduced for object tracking. A new likelihood model is proposed. It depends on multiple functions: a position likelihood, a gray-level intensity likelihood, and a similarity likelihood. Also, we modified the PF as a robust estimator to overcome the well-known sample impoverishment problem of the PF. This modification is based on re-exciting the particles if their weights fall below a memorized weight value. The proposed enhanced PF is implemented in software and evaluated. Its results are compared with a single-likelihood-function PF tracker, a Particle Swarm Optimization (PSO) tracker, a correlation tracker, as well as an edge tracker. The experimental results demonstrated the superior performance of the proposed tracker in terms of accuracy, robustness, and occlusion handling compared with the other methods. Efficient novel hardware architectures of the Sampling Importance Resampling Filter (SIRF) and the EPF are implemented. Three novel hardware architectures of the SIRF for object tracking are introduced. The first architecture is a two-step sequential PF machine, where particle generation, weight calculation and normalization are carried out in parallel during the first step followed by a sequential re
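A minimal sketch of the re-excitation idea described above: a bootstrap particle filter in one dimension, where particles whose weights fall below a fraction of the best remembered weight are re-scattered around the current best particle. The threshold rule and likelihood model here are simplifying assumptions, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, proc_std=1.0, obs_std=1.0,
                    excite_frac=0.1):
    """Bootstrap PF with a simple re-excitation step (EPF-like sketch)."""
    particles = rng.normal(0.0, 5.0, n_particles)
    estimates = []
    for z in observations:
        # Predict: random-walk motion model.
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # Weight: Gaussian observation likelihood.
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        # Re-excite impoverished particles around the current best particle.
        w_max = w.max()
        low = w < excite_frac * w_max
        particles[low] = rng.normal(particles[np.argmax(w)], proc_std,
                                    low.sum())
        w[low] = excite_frac * w_max
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # Resample proportionally to the weights.
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)

true_path = np.linspace(0.0, 10.0, 50)
obs = true_path + rng.normal(0.0, 1.0, 50)
est = particle_filter(obs)
# The estimate should track the linearly drifting state.
```

Re-seeding low-weight particles near a high-weight one keeps diversity in the particle set, which is the essence of the sample-impoverishment countermeasure the abstract describes.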
Limits on the efficiency of event-based algorithms for Monte Carlo neutron transport
Directory of Open Access Journals (Sweden)
Paul K. Romano
2017-09-01
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector width to achieve a vector efficiency greater than 90%. When the execution times for events are allowed to vary, the vector speedup is also limited by differences in the execution time for events being carried out in a single event-iteration.
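A toy model along these lines (assumed geometric event counts per particle and a fixed vector width; not the paper's calibrated model) shows why vector efficiency grows with the particle bank size: as the bank drains, the last vectors are only partially filled, and a larger bank amortizes that waste.

```python
import math
import random

def vector_efficiency(bank_size, vector_width, mean_events=10.0, seed=1):
    """Toy model: each particle needs a geometrically distributed number of
    event-iterations; alive particles are packed into fixed-width vectors
    each iteration. Efficiency = useful lanes / total lanes issued."""
    rng = random.Random(seed)
    p = 1.0 / mean_events  # per-iteration termination probability
    events = [1 + math.floor(math.log(1.0 - rng.random()) / math.log(1.0 - p))
              for _ in range(bank_size)]
    useful = sum(events)
    total = 0
    for it in range(1, max(events) + 1):
        alive = sum(1 for e in events if e >= it)
        # Partially filled vectors still occupy a full vector's worth of lanes.
        total += math.ceil(alive / vector_width) * vector_width
    return useful / total

# Larger banks amortize the partially filled vectors in the draining tail.
eff_small = vector_efficiency(bank_size=64, vector_width=8)
eff_large = vector_efficiency(bank_size=4096, vector_width=8)
```

This mirrors the constant-event-time case of the abstract: the waste is a fixed per-iteration cost, so efficiency approaches 1 as the bank-to-vector-width ratio grows.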
Application of ant colony Algorithm and particle swarm optimization in architectural design
Song, Ziyi; Wu, Yunfa; Song, Jianhua
2018-02-01
By studying the development of the ant colony algorithm and the particle swarm algorithm, this paper expounds the core ideas of the algorithms, explores the combination of algorithms and architectural design, and sums up the application rules of intelligent algorithms in architectural design. Combining the characteristics of the two algorithms, it obtains a research route and a means of realization for intelligent algorithms in architectural design, establishing algorithm rules to assist architectural design. Taking intelligent algorithms as a starting point for architectural design research, the authors provide a theoretical foundation for the ant colony algorithm and the particle swarm algorithm in architectural design, broaden the range of applications of intelligent algorithms in architectural design, and provide a new idea for architects.
Energy and particle core transport in tokamaks and stellarators compared
Energy Technology Data Exchange (ETDEWEB)
Beurskens, Marc; Angioni, Clemente; Beidler, Craig; Dinklage, Andreas; Fuchert, Golo; Hirsch, Matthias; Puetterich, Thomas; Wolf, Robert [Max-Planck-Institut fuer Plasmaphysik, Greifswald/Garching (Germany)
2016-07-01
The paper discusses expectations for core transport in the Wendelstein 7-X stellarator (W7-X) and presents a comparison to tokamaks. In tokamaks, the neoclassical trapped-particle-driven losses are small and turbulence dominates the energy and particle transport. At reactor-relevant low collisionality, the heat transport is limited by ion-temperature-gradient-driven turbulence, clamping the temperature gradient. The particle transport is set by an anomalous inward pinch, yielding peaked profiles. A strong edge pedestal adds to the good confinement properties. In traditional stellarators the 3D geometry causes increased trapped-orbit losses. At reactor-relevant low collisionality and high temperatures, these neoclassical losses would be well above the turbulent transport losses. The W7-X design minimizes neoclassical losses, and turbulent transport can become dominant. Moreover, the separation of the regions of bad curvature from those of trapped particle orbits in W7-X may have favourable implications for the turbulent electron heat transport. The neoclassical particle thermodiffusion is outward. Without core particle sources the density profile is flat or even hollow. The presence of a turbulence-driven inward anomalous particle pinch in W7-X (as in tokamaks) is an open topic of research.
Multiobjective Reliable Cloud Storage with Its Particle Swarm Optimization Algorithm
Directory of Open Access Journals (Sweden)
Xiyang Liu
2016-01-01
Information abounds in all fields of real life; it is often recorded as digital data in computer systems and treated as a kind of increasingly important resource. Its increasing volume growth causes great difficulties in both storage and analysis. Massive data storage in cloud environments has significant impacts on the quality of service (QoS) of the systems, which is becoming an increasingly challenging problem. In this paper, we propose a multiobjective optimization model for reliable data storage in clouds that considers both the cost and the reliability of the storage service simultaneously. In the proposed model, the total cost is analyzed to be composed of storage space occupation cost, data migration cost, and communication cost. According to the analysis of the storage process, the transmission reliability, equipment stability, and software reliability are taken into account in the storage reliability evaluation. To solve the proposed multiobjective model, a Constrained Multiobjective Particle Swarm Optimization (CMPSO) algorithm is designed. Finally, experiments are designed to validate the proposed model and its CMPSO solution algorithm. In the experiments, the proposed model is tested in cooperation with 3 storage strategies. Experimental results show that the proposed model is valid and effective. The experimental results also demonstrate that the proposed model can perform much better in combination with proper file splitting methods.
Particle swarm optimization algorithm based low cost magnetometer calibration
Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.
2011-12-01
Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a microprocessor that provide inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low-cost sensors are corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high-accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the values of the bias and scale factor of a low-cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The estimated bias and scale factor errors from the proposed algorithm improve the heading accuracy, and the results are also statistically significant. Also, it can help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
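A sketch of the approach on synthetic data: PSO searches for per-axis scale factors and biases that map the raw readings back onto the unit sphere. The error model and the sphere-fit cost are standard assumptions for magnetometer calibration, not necessarily the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic magnetometer data: unit-magnitude field in random directions,
# corrupted by per-axis scale factors and biases (assumed error model).
true_scale = np.array([1.2, 0.8, 1.1])
true_bias = np.array([0.3, -0.2, 0.1])
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
raw = dirs * true_scale + true_bias

def cost(theta):
    """Mean squared deviation of corrected samples from the unit sphere."""
    scale, bias = theta[:3], theta[3:]
    mag = np.linalg.norm((raw - bias) / scale, axis=1)
    return np.mean((mag - 1.0) ** 2)

def pso(cost, dim=6, n=40, iters=300, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO with inertia weight w."""
    x = rng.uniform(-0.5, 1.5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

theta, err = pso(cost)
```

No gradient or error linearization is needed, which is the "artificial intelligence" advantage the abstract points to.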
A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm
Amoshahy, Mohammad Javad; Shamsi, Mousa; Sedaaghi, Mohammad Hossein
2016-01-01
Particle swarm optimization (PSO) is an evolutionary computing method based on the intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of the PSO algorithm depends greatly on appropriate parameter selection strategies for fine-tuning its parameters. Inertia weight (IW) is one of PSO's parameters, used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting the inertia weight, named the Flexible Exponential Inertia Weight (FEIW) strategy, because for each problem we can construct an increasing or decreasing inertia weight strategy with suitable parameter selection. The efficacy and efficiency of the PSO algorithm with the FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. FEIW is also compared with the best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis prove that FEIW improves the search performance in terms of solution quality as well as convergence rate. PMID:27560945
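The kind of schedule described can be sketched as an exponential interpolation between an initial and a final inertia weight, with one shape parameter controlling how fast the decay happens. This is an illustrative form; the paper's exact FEIW expression may differ.

```python
import math

def flexible_exp_inertia(t, t_max, w_start=0.9, w_end=0.4, alpha=3.0):
    """Exponential inertia-weight schedule (illustrative, not necessarily
    the paper's exact FEIW formula): moves smoothly from w_start at t=0
    to exactly w_end at t=t_max; alpha sets the decay speed, and swapping
    w_start/w_end gives an increasing schedule instead."""
    frac = 1.0 - math.exp(-alpha * t / t_max)
    norm = 1.0 - math.exp(-alpha)
    return w_start + (w_end - w_start) * frac / norm

weights = [flexible_exp_inertia(t, 100) for t in range(101)]
# Starts at w_start, ends exactly at w_end, decreasing monotonically.
```

Large early weights favor exploration; small late weights favor exploitation, which is the balance the IW parameter is meant to control.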
A Novel Radiation Transport Algorithm for Radiography Simulations
International Nuclear Information System (INIS)
Inanc, Feyzi
2004-01-01
The simulations used in the NDE community are becoming more realistic with the introduction of more physics. In this work, we have developed a new algorithm that is capable of representing photon and charged-particle fluxes through spherical harmonic expansions in a manner similar to the well-known discrete ordinates method, with the exception that the Boltzmann operator is treated through exact integration rather than conventional Legendre expansions. This approach provides a means to include radiation interactions for higher-energy regimes where there are additional physical mechanisms for photons and charged particles.
Multivariable optimization of liquid rocket engines using particle swarm algorithms
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
ALGORITHMS FOR TRAFFIC MANAGEMENT IN THE INTELLIGENT TRANSPORT SYSTEMS
Directory of Open Access Journals (Sweden)
Andrey Borisovich Nikolaev
2017-09-01
Traffic jams inconvenience drivers, cost billions of dollars per year, and lead to a substantial increase in fuel consumption. In order to avoid such problems, the paper describes algorithms for traffic management in an intelligent transportation system, which collects traffic information in real time and is able to detect and manage congestion on the basis of this information. The results show that the proposed algorithms reduce the average travel time, emissions and fuel consumption. In particular, travel time decreased by about 23%, average fuel consumption by 9%, and average emissions by 10%.
Fully multidimensional flux-corrected transport algorithms for fluids
International Nuclear Information System (INIS)
Zalesak, S.T.
1979-01-01
The theory of flux-corrected transport (FCT) developed by Boris and Book is placed in a simple, generalized format, and a new algorithm for implementing the critical flux-limiting stage in multidimensions without resort to time splitting is presented. The new flux-limiting algorithm allows the use of FCT techniques in multidimensional fluid problems for which time splitting would produce unacceptable numerical results, such as those involving incompressible or nearly incompressible flow fields. The 'clipping' problem associated with the original one-dimensional flux limiter is also eliminated or alleviated. Test results and applications to a two-dimensional fluid plasma problem are presented.
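A 1-D reduction of the limiter can be sketched as follows: donor-cell low-order fluxes, Lax-Wendroff high-order fluxes, and Zalesak-style bounds computed from the transported-diffused solution. Periodic boundaries and positive advection speed are assumed; the paper's contribution is the genuinely multidimensional version of this limiting stage.

```python
import numpy as np

def fct_advect(u, c):
    """One FCT step for 1-D advection (periodic, v > 0), c = v*dt/dx in (0,1).
    Flux at interface i+1/2 is stored at index i."""
    f_low = c * u                                               # donor cell
    f_high = c * u + 0.5 * c * (1 - c) * (np.roll(u, -1) - u)   # Lax-Wendroff
    a = f_high - f_low                          # antidiffusive fluxes
    td = u - (f_low - np.roll(f_low, 1))        # transported-diffused solution
    # Local bounds from the transported-diffused solution.
    u_max = np.maximum(np.roll(td, 1), np.maximum(td, np.roll(td, -1)))
    u_min = np.minimum(np.roll(td, 1), np.minimum(td, np.roll(td, -1)))
    # Total antidiffusive flux into / out of each cell.
    p_in = np.maximum(0, np.roll(a, 1)) - np.minimum(0, a)
    p_out = np.maximum(0, a) - np.minimum(0, np.roll(a, 1))
    r_in = np.where(p_in > 0, np.minimum(1, (u_max - td) / (p_in + 1e-15)), 0)
    r_out = np.where(p_out > 0, np.minimum(1, (td - u_min) / (p_out + 1e-15)), 0)
    # Limit each interface flux by the more restrictive of its two cells.
    cfx = np.where(a >= 0, np.minimum(np.roll(r_in, -1), r_out),
                   np.minimum(r_in, np.roll(r_out, -1)))
    a_lim = cfx * a
    return td - (a_lim - np.roll(a_lim, 1))

# Advect a square wave at c = 0.5: mass is conserved and the limiter
# creates no new extrema (no over- or undershoots).
u = np.zeros(100)
u[40:60] = 1.0
for _ in range(100):
    u = fct_advect(u, 0.5)
```

Because the correction is applied in flux form, conservation is exact by construction; the limiter only scales the antidiffusive fluxes, never the conservative low-order update.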
Bodin, Jacques
2015-03-01
In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
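For the 1-D advection-dispersion case, the TDRW travel time over a fixed distance is the first-passage time of drift-diffusion, which is inverse-Gaussian distributed with mean L/v and shape L^2/(2D) (a standard 1-D result; the parameter values below are illustrative, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(3)

def tdrw_travel_times(L, v, D, n):
    """Sample 1-D TDRW travel times over a distance L in a homogeneous
    medium with velocity v and dispersion coefficient D: inverse-Gaussian
    with mean L/v and shape L**2/(2*D)."""
    mean = L / v
    shape = L ** 2 / (2.0 * D)
    return rng.wald(mean, shape, n)

# Particles crossing a 10 m homogeneous cell at v = 1 m/d, D = 0.1 m^2/d.
t = tdrw_travel_times(L=10.0, v=1.0, D=0.1, n=200_000)
# Sample moments should match mu = 10 d and var = mu**3 / shape = 2 d^2.
```

In a heterogeneous medium represented as piecewise-homogeneous cells, one such draw per cell accumulated along the particle's path gives the total arrival time, which is the decomposition along intermediate checkpoints that the paper describes.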
Liu, Jian-li; Lu, Shi-cai; Ai, Bao-quan
2018-06-01
Due to the chirality of active particles, transversal asymmetry can induce longitudinal directed transport. The transport of chiral active particles in a periodic channel is investigated in the presence of two types of transversal asymmetry: a transverse force and transverse rigid half-circle obstacles. In all cases, the counterclockwise and clockwise particles move in opposite directions. In the case of only the transverse force, the chiral active particles can reverse their direction when the transverse force is increased. When the transverse rigid half-circle obstacles are introduced, the transport behavior of the particles becomes more complex and multiple current reversals occur. The direction of the transport is determined by the competition between the two types of transversal asymmetry. For a given chirality, by suitably tailoring the parameters, particles with different self-propulsion speeds can move in different directions and can be separated.
Microstripes for transport and separation of magnetic particles
DEFF Research Database (Denmark)
Donolato, Marco; Dalslet, Bjarke Thomas; Hansen, Mikkel Fougt
2012-01-01
We present a simple technique for creating an on-chip magnetic particle conveyor based on exchange-biased permalloy microstripes. The particle transportation relies on an array of stripes with a spacing smaller than their width in conjunction with a periodic sequence of four different externally applied magnetic fields. We demonstrate the controlled transportation of a large population of particles over several millimeters of distance as well as the spatial separation of two populations of magnetic particles with different magnetophoretic mobilities. The technique can be used for the controlled selective manipulation and separation of magnetically labelled species. (C) 2012 American Institute of Physics.
Stress, Flow and Particle Transport in Rock Fractures
Energy Technology Data Exchange (ETDEWEB)
Koyama, Tomofumi
2007-09-15
The fluid flow and tracer transport in a single rock fracture during shear processes has been an important issue in rock mechanics and is investigated in this thesis using the Finite Element Method (FEM) and a streamline particle tracking method, considering the evolution of aperture and transmissivity with shear displacement histories under different normal stresses, based on laboratory tests. The distributions of fracture aperture and their evolution during shear were calculated from the initial aperture fields, based on the laser-scanned surface roughness features of replicas of rock fracture specimens, and shear dilations measured during the coupled shear-flow-tracer tests performed in the laboratory using a newly developed testing apparatus at Nagasaki University, Nagasaki, Japan. Three rock fractures of granite with different roughness characteristics were used as parent samples from which nine plaster replicas were made, and coupled shear-flow tests were performed under three normal loading conditions (two levels of constant normal loading (CNL) and one constant normal stiffness (CNS) condition). In order to visualize the tracer transport, transparent acrylic upper parts and plaster lower parts of the fracture specimens were manufactured from an artificially created tensile fracture of sandstone, and the coupled shear-flow tests with fluid visualization were performed using a dye tracer injected from upstream and a CCD camera to record the dye movement. A special algorithm that treats the contact areas as zero-aperture elements was used to produce more accurate FEM flow field simulations, which is important for the subsequent simulations of particle transport but was often not properly treated in the literature. The simulation results agreed well with the flow rate data obtained from the laboratory tests, showing that the complex histories of fracture aperture and the tortuous flow channels evolving with changing normal stresses and increasing shear displacements were also captured.
Dynamical theory of anomalous particle transport
International Nuclear Information System (INIS)
Meiss, J.D.; Cary, J.R.; Escande, D.F.; MacKay, R.S.; Percival, I.C.; Tennyson, J.L.
1985-01-01
The quasi-linear theory of transport applies only in a restricted parameter range, which does not necessarily correspond to experimental conditions. Theories are developed which extend transport calculations to the regimes of marginal stochasticity and strong turbulence. Near the stochastic threshold the description of transport involves the leakage through destroyed invariant surfaces, and the dynamical scaling theory is used to obtain a universal form for transport coefficients. In the strong-turbulence regime, there is an adiabatic invariant which is preserved except near separatrices. Breakdown of this invariant leads to a new form for the diffusion coefficient. (author)
Turbulent transport of large particles in the atmospheric boundary layer
Richter, D. H.; Chamecki, M.
2017-12-01
To describe the transport of heavy dust particles in the atmosphere, assumptions must typically be made in order to connect the micro-scale emission processes with the larger-scale atmospheric motions. In the context of numerical models, this can be thought of as the transport process which occurs between the domain bottom and the first vertical grid point. For example, in the limit of small particles (both low inertia and low settling velocity), theory built upon Monin-Obukhov similarity has proven effective in relating mean dust concentration profiles to surface emission fluxes. For increasing particle mass, however, it becomes more difficult to represent dust transport as a simple extension of the transport of a passive scalar due to issues such as the crossing trajectories effect. This study focuses specifically on the problem of large particle transport and dispersion in the turbulent boundary layer by utilizing direct numerical simulations with Lagrangian point-particle tracking to determine under what, if any, conditions the large dust particles (larger than 10 micron in diameter) can be accurately described in a simplified Eulerian framework. In particular, results will be presented detailing the independent contributions of both particle inertia and particle settling velocity relative to the strength of the surrounding turbulent flow, and consequences of overestimating surface fluxes via traditional parameterizations will be demonstrated.
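The point-particle limit discussed here reduces, in still fluid, to linear Stokes drag plus gravity, dv/dt = (u - v)/tau_p - g, and the settling velocity relaxes to -tau_p*g. A sketch with an illustrative particle response time (not a value from the study):

```python
def settle(tau_p, g=9.81, dt=1e-4, t_end=1.0):
    """Explicit Euler integration of dv/dt = -v/tau_p - g in still fluid
    (u = 0): the vertical velocity relaxes to the terminal value -tau_p*g."""
    v = 0.0
    for _ in range(int(t_end / dt)):
        v += dt * (-v / tau_p - g)
    return v

# tau_p ~ 0.02 s is an illustrative dust-like response time.
v_term = settle(tau_p=0.02)
# Terminal velocity approaches -tau_p * g = -0.1962 m/s.
```

It is precisely the interplay of this response time (inertia) and the settling velocity with the turbulent velocity scales that determines when the Eulerian passive-scalar description breaks down, e.g. through the crossing-trajectories effect the abstract mentions.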
Asymptotics of a Particle Transport Problem
Directory of Open Access Journals (Sweden)
Kuzmina Ludmila Ivanovna
2017-11-01
Subject: groundwater filtration affects the strength and stability of underground and hydrotechnical constructions. Research objectives: the study of the one-dimensional problem of displacement of a suspension by a flow of pure water in a porous medium. Materials and methods: when filtering a suspension, some particles pass through the porous medium and some of them get stuck in the pores. It is assumed that the size distributions of the solid particles and the pores overlap. In this case, the main mechanism of particle retention is size exclusion: the particles pass freely through the large pores and get stuck at the inlets of the tiny pores that are smaller than the particle diameter. The concentrations of suspended and retained particles satisfy two quasi-linear differential equations of the first order. To solve the filtration problem, methods of nonlinear asymptotic analysis are used. Results: in a mathematical model of filtration of suspensions that takes into account the dependence of the porosity and permeability of the porous medium on the concentration of retained particles, the boundary between the two phases moves with variable velocity. The asymptotic solution to the problem is constructed for a small filtration coefficient. A theorem on the existence of the asymptotics is proved. Analytical expressions for the principal asymptotic terms are presented for the case of linear coefficients and initial conditions. The asymptotics of the boundary of the two phases is given in explicit form. Conclusions: the filtration problem under study can be solved analytically.
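A discrete sketch of the displacement problem: an upwind scheme for a constant-coefficient version of the two first-order equations, dc/dt + dc/dx = -lam*c for the suspended concentration and drho/dt = lam*c for the retained one. The concentration-dependent porosity and permeability of the paper's model are omitted here for simplicity.

```python
import numpy as np

def filtrate(nx=200, nt=600, dt=0.0025, lam=1.0, c_in=0.0):
    """Upwind scheme for the simplified deep-bed filtration system:
    clean water (c_in = 0) displaces an initially uniform suspension (c = 1)
    moving with unit velocity; retained particles accumulate as rho."""
    dx = 1.0 / nx
    c = np.ones(nx)      # suspension initially fills the medium
    rho = np.zeros(nx)   # no particles retained yet
    for _ in range(nt):
        flux = np.empty(nx)
        flux[0] = c_in            # clean-water inlet boundary
        flux[1:] = c[:-1]         # upwind flux for unit velocity
        c = c - (dt / dx) * (c - flux) - dt * lam * c
        rho = rho + dt * lam * c
    return c, rho

c, rho = filtrate()
# The analytic solution of this simplified system gives rho(x) = 1 - exp(-lam*x)
# after the clean-water front has passed: retention grows toward the outlet.
```

The sharp front between the clean-water and suspension phases, smeared here by numerical diffusion, is exactly the moving phase boundary whose asymptotics the paper constructs analytically.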
An Orthogonal Multi-Swarm Cooperative PSO Algorithm with a Particle Trajectory Knowledge Base
Directory of Open Access Journals (Sweden)
Jun Yang
2017-01-01
A novel orthogonal multi-swarm cooperative particle swarm optimization (PSO) algorithm with a particle trajectory knowledge base is presented in this paper. Different from traditional PSO algorithms and other variants of PSO, the proposed orthogonal multi-swarm cooperative PSO algorithm not only introduces an orthogonal initialization mechanism and a particle trajectory knowledge base for multi-dimensional optimization problems, but also conceives a new adaptive cooperation mechanism to accomplish the information interaction among swarms and particles. Experiments are conducted on a set of benchmark functions, and the results show its better performance compared with the traditional PSO algorithm in terms of convergence, computational efficiency and avoidance of premature convergence.
Particle Identification algorithm for the CLIC ILD and CLIC SiD detectors
Nardulli, J
2011-01-01
This note describes the algorithm presently used to determine the particle identification performance for single particles for the CLIC ILD and CLIC SiD detector concepts as prepared in the CLIC Conceptual Design Report.
Particle mis-identification rate algorithm for the CLIC ILD and CLIC SiD detectors
Nardulli, J
2011-01-01
This note describes the algorithm presently used to determine the particle mis-identification rate and gives results for single particles for the CLIC ILD and CLIC SiD detector concepts as prepared for the CLIC Conceptual Design Report.
Massively parallel performance of neutron transport response matrix algorithms
International Nuclear Information System (INIS)
Hanebutte, U.R.; Lewis, E.E.
1993-01-01
Massively parallel red/black response matrix algorithms for the solution of within-group neutron transport problems are implemented on the Connection Machines CM-2, CM-200 and CM-5. The response matrices are derived from the diamond-difference and linear-linear nodal discrete ordinates and variational nodal P_3 approximations. The unaccelerated performance of the iterative procedure is examined relative to the maximum rated performances of the machines. The effects of processor partition size, of virtual processor ratio and of problem size are examined in detail. For the red/black algorithm, the ratio of inter-node communication to computing times is found to be quite small, normally of the order of ten percent or less. Performance increases with problem size and with virtual processor ratio, within the memory-per-physical-processor limitation. Algorithm adaptation to coarser-grain machines is straightforward, with total computing time being virtually inversely proportional to the number of physical processors. (orig.)
Directed Transport of Brownian Particles in a Periodic Channel
International Nuclear Information System (INIS)
Jiang Jie; Ai Bao-Quan; Wu Jian-Chun
2015-01-01
The transport of Brownian particles in an infinite channel under an external force along the channel axis has been studied previously. In this paper, we study the transport of Brownian particles in an infinite channel under an external force along the channel axis combined with an external force in the transversal direction. In this more sophisticated situation, some properties are similar to those of the simpler case, but some interesting new properties also appear. (paper)
Time-dependent 2-stream particle transport
International Nuclear Information System (INIS)
Corngold, Noel
2015-01-01
Highlights: • We consider time-dependent transport in the 2-stream or “rod” model via an attractive matrix formalism. • After reviewing some classical problems in homogeneous media we discuss transport in materials whose density may vary. • There we achieve a significant contraction of the underlying Telegrapher’s equation. • We conclude with a discussion of stochastics, treated by the “first-order smoothing approximation.” - Abstract: We consider time-dependent transport in the 2-stream or “rod” model via an attractive matrix formalism. After reviewing some classical problems in homogeneous media we discuss transport in materials whose density may vary. There we achieve a significant contraction of the underlying Telegrapher’s equation. We conclude with a discussion of stochastics, treated by the “first-order smoothing approximation.”
Spatiotemporal Structure of Aeolian Particle Transport on Flat Surface
Niiya, Hirofumi; Nishimura, Kouichi
2017-05-01
We conduct numerical simulations based on a model of blowing snow to reveal the long-term properties and equilibrium state of aeolian particle transport from 10⁻⁵ to 10 m above the flat surface. The numerical results are as follows. (i) Time-series data of particle transport are divided into development, relaxation, and equilibrium phases, which are formed by rapid wind response below 10 cm and gradual wind response above 10 cm. (ii) The particle transport rate at equilibrium is expressed as a power function of friction velocity, and the index of 2.35 implies that most particles are transported by saltation. (iii) The friction velocity below 100 µm remains roughly constant and lower than the fluid threshold at equilibrium. (iv) The mean particle speed above 300 µm is less than the wind speed, whereas that below 300 µm exceeds the wind speed because of descending particles. (v) The particle diameter increases with height in the saltation layer, and the relationship is expressed as a power function. Through comparisons with the previously reported random-flight model, we find a crucial problem that empirical splash functions cannot reproduce particle dynamics at a relatively high wind speed.
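The reported power-law relation between transport rate and friction velocity can be recovered from data by linear regression in log-log space; the sketch below uses synthetic values (the 0.12 prefactor is invented for illustration; only the 2.35 exponent comes from the abstract).

```python
import numpy as np

# Hypothetical fit of the transport-rate power law Q = a * u_star**b.
# The data are synthetic: generated with b = 2.35 so the fit must recover it.
u_star = np.array([0.3, 0.4, 0.5, 0.6, 0.8, 1.0])   # friction velocity [m/s]
Q = 0.12 * u_star ** 2.35                            # transport rate (synthetic)

# A power law is linear in log-log space: log Q = b*log(u_star) + log a.
b, log_a = np.polyfit(np.log(u_star), np.log(Q), 1)
```

`np.polyfit` returns the slope first, which is the power-law index `b`.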
Relativity primer for particle transport. A LASL monograph
International Nuclear Information System (INIS)
Everett, C.J.; Cashwell, E.D.
1979-04-01
The basic principles of special relativity involved in Monte Carlo transport problems are developed with emphasis on the possible transmutations of particles, and on computational methods. Charged particle ballistics and polarized scattering are included, as well as a discussion of colliding beams.
International Nuclear Information System (INIS)
Goharzadeh, A; Rodgers, P
2009-01-01
This paper presents an experimental study of the effect of gas-liquid slug flow on solid particle transport inside a horizontal pipe, with two types of experiments conducted. The influence of slug length on solid particle transportation is characterized using high-speed photography. Using Particle Image Velocimetry (PIV) combined with Refractive Index Matching (RIM) and fluorescent tracers in a two-phase oil-air loop, the velocity distribution inside the slug body is measured. Combining these experimental analyses, an insight is provided into the physical mechanism of solid particle transportation due to slug flow. It was observed that the slug body significantly influences solid particle mobility. The physical mechanism of solid particle transportation was found to be discontinuous. The inactive region (in terms of solid particle transport) upstream of the slug nose was quantified as a function of gas-liquid composition and solid particle size. Measured velocity distributions showed a significant drop in velocity magnitude immediately upstream of the slug nose, and therefore the critical velocity for solid particle lifting is reached further upstream.
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
Energy Technology Data Exchange (ETDEWEB)
Romano, Paul K.; Siegel, Andrew R.
2017-04-16
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector width in order to achieve a vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting the performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
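A toy simulation of the bank-draining effect discussed above (our own construction under simplified assumptions, not the authors' model): particles are processed in fixed-width vector passes, each particle survives an event with a constant probability, and lanes in partially filled passes are wasted.

```python
import random

def vector_efficiency(bank_size, vector_width, survival=0.9, seed=1):
    """Toy model of event-based vector efficiency.  A bank of particles
    is processed in vector passes of fixed width; after each event a
    particle survives with probability `survival`.  Lanes in a partially
    filled pass do no useful work, so efficiency drops as the bank drains."""
    rng = random.Random(seed)
    active = bank_size
    events = passes = 0
    while active > 0:
        passes += -(-active // vector_width)   # ceil(active / width) passes
        events += active                       # useful lane-events this step
        active = sum(rng.random() < survival for _ in range(active))
    return events / (vector_width * passes)    # useful fraction of all lanes

eff_large = vector_efficiency(bank_size=20 * 64, vector_width=64)
eff_small = vector_efficiency(bank_size=2 * 64, vector_width=64)
```

A larger bank amortizes the partially filled tail passes, qualitatively reproducing the "bank must be much larger than the vector width" observation.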
Directory of Open Access Journals (Sweden)
V. Krenn
2014-01-01
In histopathologic SLIM diagnostics (synovial-like interface membrane, SLIM), particle identification plays an important role alongside the diagnosis of periprosthetic infection. The differences in particle pathogenesis and the variability of materials in endoprosthetics explain the particle heterogeneity that hampers the diagnostic identification of particles. For this reason, a histopathological particle algorithm has been developed. With minimal methodical complexity, this histopathological particle algorithm offers a guide to prosthesis material particle identification. Light-microscopic morphological as well as enzyme-histochemical characteristics and polarization-optical properties have been established, and particles are defined by size (microparticles, macroparticles and supra-macroparticles) and definitively characterized in accordance with a dichotomous principle. Based on these criteria, identification and validation of the particles was carried out in 120 joint endoprosthesis pathology cases. A histopathological particle score (HPS) is proposed that summarizes the most important information for the orthopedist, materials scientist and histopathologist concerning particle identification in the SLIM.
The OpenMC Monte Carlo particle transport code
International Nuclear Information System (INIS)
Romano, Paul K.; Forget, Benoit
2013-01-01
Highlights: ► An open source Monte Carlo particle transport code, OpenMC, has been developed. ► Solid geometry and continuous-energy physics allow high-fidelity simulations. ► Development has focused on high performance and modern I/O techniques. ► OpenMC is capable of scaling up to hundreds of thousands of processors. ► Results on a variety of benchmark problems agree with MCNP5. -- Abstract: A new Monte Carlo code called OpenMC is currently under development at the Massachusetts Institute of Technology as a tool for simulation on high-performance computing platforms. Given that many legacy codes do not scale well on existing and future parallel computer architectures, OpenMC has been developed from scratch with a focus on high performance scalable algorithms as well as modern software design practices. The present work describes the methods used in the OpenMC code and demonstrates the performance and accuracy of the code on a variety of problems.
Parallel particle swarm optimization algorithm in nuclear problems
International Nuclear Information System (INIS)
Waintraub, Marcel; Pereira, Claudio M.N.A.; Schirru, Roberto
2009-01-01
Particle Swarm Optimization (PSO) is a population-based metaheuristic (PBM), in which solution candidates evolve through simulation of a simplified social adaptation model. Combining robustness, efficiency and simplicity, PSO has gained great popularity. Many successful applications of PSO are reported, in which PSO demonstrated advantages over other well-established PBMs. However, computational cost remains a great constraint for PSO, as for all other PBMs, especially in optimization problems with time-consuming objective functions. To overcome this difficulty, parallel computation has been used. The most direct advantage of parallel PSO (PPSO) is the reduction of computational time, and master-slave approaches exploiting this characteristic are the most investigated. However, much more can be expected: it is known that PSO may be improved by more elaborate neighborhood topologies. Hence, in this work, we develop several different PPSO algorithms exploiting the advantages of enhanced neighborhood topologies implemented by communication strategies in multiprocessor architectures. The proposed PPSOs have been applied to two complex and time-consuming nuclear engineering problems: reactor core design and fuel reload optimization. After exhaustive experiments, it has been concluded that: PPSO still improves solutions after many thousands of iterations, making the efficient use of serial (non-parallel) PSO prohibitive in such real-world problems; and PPSO with more elaborate communication strategies demonstrated to be more efficient and robust than the master-slave model. Advantages and peculiarities of each model are carefully discussed in this work. (author)
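A minimal serial sketch of PSO with a ring neighborhood topology, the kind of enhanced topology the abstract refers to (the parameter values and structure are generic textbook choices, not the authors' parallel implementation):

```python
import random

def pso_ring(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    """PSO where each particle is guided by the best position found in its
    two-neighbor ring instead of a single global best (lbest topology)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.72, 1.49, 1.49   # common inertia / acceleration weights
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    for _ in range(iters):
        for i in range(n_particles):
            # best personal-best among the ring neighborhood {i-1, i, i+1}
            nbrs = [(i - 1) % n_particles, i, (i + 1) % n_particles]
            g = min(nbrs, key=lambda j: pval[j])
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (pbest[g][d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pval[i]:
                pval[i], pbest[i] = fx, x[i][:]
    best = min(range(n_particles), key=lambda j: pval[j])
    return pbest[best], pval[best]

sphere = lambda p: sum(c * c for c in p)   # simple test objective
pos, val = pso_ring(sphere, dim=3)
```

In a parallel setting the inner loop over particles is what gets distributed, and the neighborhood structure determines the communication pattern between processors.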
Aerosol and particle transport in biomass furnaces
Kemenade, van H.P.; Obernberger, G.
2005-01-01
The particulate emissions of solid fuel fired furnaces typically exhibit a bimodal distribution: a small peak in the range of 0.1 µm and a larger one above 10 µm. The particles with sizes above 10 µm are formed by a mechanical process like disintegration of the fuel after combustion, or erosion,
PHITS-a particle and heavy ion transport code system
International Nuclear Information System (INIS)
Niita, Koji; Sato, Tatsuhiko; Iwase, Hiroshi; Nose, Hiroyuki; Nakashima, Hiroshi; Sihver, Lembit
2006-01-01
The paper presents a summary of the recent development of the multi-purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS. In particular, we discuss in detail the development of two new models, JAM and JQMD, for high-energy particle interactions, incorporated in PHITS, and show comparisons between model calculations and experiments for the validation of these models. The paper presents three applications of the code: spallation neutron sources, heavy-ion therapy and space radiation. The results and examples shown indicate that PHITS is well suited to carrying out radiation transport analysis of almost all particles, including heavy ions, within a wide energy range.
Fueling profile sensitivities of trapped particle mode transport to TNS
International Nuclear Information System (INIS)
Mense, A.T.; Attenberger, S.E.; Houlberg, W.A.
1977-01-01
A key factor in the plasma thermal behavior is the anticipated existence of dissipative trapped particle modes. A possible scheme for controlling the strength of these modes was found. The scheme involves varying the cold fueling profile. A one dimensional multifluid transport code was used to simulate plasma behavior. A multiregime model for particle and energy transport was incorporated based on pseudoclassical, trapped electron, and trapped ion regimes used elsewhere in simulation of large tokamaks. Fueling profiles peaked toward the plasma edge may provide a means for reducing density-gradient-driven trapped particle modes, thus reducing diffusion and conduction losses
Ebtehaj, Isa; Bonakdari, Hossein
2014-01-01
The existence of sediments in wastewater greatly affects the performance of sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. This article reviews the performance of the genetic algorithm (GA) and the imperialist competitive algorithm (ICA) in minimizing the target function (the mean square error of the observed and predicted Froude numbers). To study the impact of bed-load transport parameters, six different models based on four non-dimensional groups are presented. Moreover, the roulette wheel selection method is used to select the parents. For the selected model, the ICA (root mean square error (RMSE) = 0.007, mean absolute percentage error (MAPE) = 3.5%) shows better results than the GA (RMSE = 0.007, MAPE = 5.6%). In all six models, the ICA returns better results than the GA. The results of these two algorithms were also compared with a multi-layer perceptron and existing equations.
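The two error measures quoted above are standard; a small sketch of their definitions on hypothetical observed/predicted Froude numbers (the data below are invented for illustration):

```python
import math

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mape(obs, pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / len(obs)

obs = [1.20, 0.95, 1.40, 1.10]    # hypothetical observed Froude numbers
pred = [1.15, 1.00, 1.35, 1.12]   # hypothetical model predictions
r = rmse(obs, pred)
m = mape(obs, pred)
```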
Particle Transport Simulation on Heterogeneous Hardware
CERN. Geneva
2014-01-01
CPUs and GPGPUs. About the speaker Vladimir Koylazov is CTO and founder of Chaos Software and one of the original developers of the V-Ray raytracing software. Passionate about 3D graphics and programming, Vlado is the driving force behind Chaos Group's software solutions. He participated in the implementation of algorithms for accurate light simulations and support for different hardware platforms, including CPU and GPGPU, as well as distributed calculat...
DANTSYS: a system for deterministic, neutral particle transport calculations
Energy Technology Data Exchange (ETDEWEB)
Alcouffe, R.E.; Baker, R.S.
1996-12-31
The THREEDANT code is the latest addition to our system of codes, DANTSYS, which performs neutral particle transport computations on a given system of interest. The codes in the system are distinguished by geometrical or symmetry considerations. For example, ONEDANT and TWODANT are designed for one- and two-dimensional geometries respectively. We have TWOHEX for hexagonal geometries, TWODANT/GQ for arbitrary quadrilaterals in XY and RZ geometry, and THREEDANT for three-dimensional geometries. The design of this system of codes is such that they share the same input and edit module, and hence the input and output are uniform for all the codes (with the obvious additions needed to specify each type of geometry). The codes in this system are also designed to be general purpose, solving both eigenvalue and source-driven problems. In this paper we concentrate on the THREEDANT module, since there are special considerations that need to be taken into account when designing such a module. The main issues that need to be addressed in a three-dimensional transport solver are the computational time needed to solve a problem and the amount of storage needed to accomplish that solution. Both issues are directly related to the number of spatial mesh cells required to obtain a solution of a specified accuracy, but they are also related to the spatial discretization method chosen and to the requirements of the iteration acceleration scheme employed, as noted below. Another related consideration is the robustness of the resulting algorithms as implemented, because insistence on complete robustness has a significant impact on computation time. We address each of these issues below, giving reasons for the choices made in our approach to this code; this is useful in outlining how the code is evolving to better address the shortcomings that presently exist.
Charged-particle calculations using Boltzmann transport methods
International Nuclear Information System (INIS)
Hoffman, T.J.; Dodds, H.L. Jr.; Robinson, M.T.; Holmes, D.K.
1981-01-01
Several aspects of radiation damage effects in fusion reactor neutron and ion irradiation environments are amenable to treatment by transport theory methods. In this paper, multigroup transport techniques are developed for the calculation of charged particle range distributions, reflection coefficients, and sputtering yields. The Boltzmann transport approach can be implemented, with minor changes, in standard neutral particle computer codes. With the multigroup discrete ordinates code, ANISN, determination of ion and target atom distributions as functions of position, energy, and direction can be obtained without the stochastic error associated with atomistic computer codes such as MARLOWE and TRIM. With the multigroup Monte Carlo code, MORSE, charged particle effects can be obtained for problems associated with very complex geometries. Results are presented for several charged particle problems. Good agreement is obtained between quantities calculated with the multigroup approach and those obtained experimentally or by atomistic computer codes
Non deterministic methods for charged particle transport
International Nuclear Information System (INIS)
Besnard, D.C.; Buresi, E.; Hermeline, F.; Wagon, F.
1985-04-01
The coupling of Monte Carlo methods for solving the Fokker-Planck equation (FPE) with inertial confinement fusion (ICF) codes requires them to be economical and to preserve gross conservation properties. Besides, the presence in the FPE of diffusion terms due to collisions between test particles and the background plasma challenges standard Monte Carlo techniques if this phenomenon is dominant. We address these problems through the use of a fixed mesh in phase space, which allows us to handle highly variable sources, avoiding any Russian roulette for lowering the size of the sample. On this mesh, diffusion equations obtained from a splitting of the FPE are also solved. Any nonlinear diffusion terms of the FPE can be handled in this manner. Another method, also presented here, is to use a direct particle method for solving the full FPE.
Transport of large particles released in a nuclear accident
International Nuclear Information System (INIS)
Poellaenen, R.; Toivonen, H.; Lahtinen, J.; Ilander, T.
1995-10-01
Highly radioactive particulate material may be released in a nuclear accident or sometimes during normal operation of a nuclear power plant. However, consequence analyses related to radioactive releases are often performed neglecting the particle nature of the release. The properties of the particles have an important role in the radiological hazard. A particle deposited on the skin may cause a large and highly non-uniform skin beta dose. Skin dose limits may be exceeded although the overall activity concentration in air is below the level of countermeasures. For sheltering purposes it is crucial to find out the transport range, i.e. the travel distance of the particles. A method for estimating the transport range of large particles (aerodynamic diameter d_a > 20 µm) in simplified meteorological conditions is presented. A user-friendly computer code, known as TROP, is developed for fast range calculations in a nuclear emergency. (orig.) (23 refs., 13 figs.)
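A rough back-of-the-envelope version of such a range estimate (our simplification using Stokes settling at constant wind, not the TROP model itself; all input values below are assumed):

```python
def settling_velocity(d_a, rho=1000.0, mu=1.81e-5, g=9.81):
    """Stokes terminal settling velocity [m/s] for aerodynamic diameter
    d_a [m] (unit-density sphere, by definition of aerodynamic diameter).
    Stokes' law is a rough approximation that degrades for larger particles."""
    return rho * g * d_a ** 2 / (18.0 * mu)

def transport_range(d_a, wind_speed, release_height):
    """Ballistic range estimate: horizontal drift at constant wind speed
    while the particle falls from the release height at its terminal
    velocity.  Illustrative only; real plume models are far richer."""
    return wind_speed * release_height / settling_velocity(d_a)

# assumed scenario: 20 um particle, 5 m/s wind, 100 m effective release height
r = transport_range(d_a=20e-6, wind_speed=5.0, release_height=100.0)
```

Even this crude estimate shows why large particles matter for sheltering decisions: a 20 µm particle can still travel tens of kilometres under moderate wind.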
Semi-analytic modeling of tokamak particle transport
International Nuclear Information System (INIS)
Shi Bingren; Long Yongxing; Li Jiquan
2000-01-01
The linear particle transport equation of a tokamak plasma is analyzed. The particle flow consists of an outward diffusion and an inward convection. The general solution is expressed in terms of a Green function constituted by eigenfunctions of the corresponding Sturm-Liouville problem. For a particle source near the plasma edge (shadow fueling), a well-behaved solution in terms of a Fourier series can be constituted by using the complementarity relation. It can be seen from the lowest eigenfunction that the particle density becomes peaked when the wall recycling is reduced. For a transient point source in the inner region, a well-behaved solution can be obtained by the complementarity relation as well.
An Algorithm for the Mixed Transportation Network Design Problem.
Liu, Xinyu; Chen, Qun
2016-01-01
This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical programming with equilibrium constraint (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; then, the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately.
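The alternation idea behind DDIA can be sketched on a toy problem with one continuous and one discrete variable (our illustration, not the MNDP code); note how the result depends on the initial discrete value, mirroring the local-optimum behavior reported for the budget-constrained case:

```python
def argmin_continuous(g, lo=-10.0, hi=10.0, iters=60):
    """Ternary search for the minimizer of a unimodal 1-D function."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def ddia_toy(f, y_choices, y0, rounds=10):
    """Dimension-down alternation: fix the discrete variable y and solve
    the continuous subproblem for x, then fix x and pick the best discrete
    y, repeating until the pair stops changing."""
    y = y0
    for _ in range(rounds):
        x = argmin_continuous(lambda t: f(t, y))       # continuous subproblem
        y_new = min(y_choices, key=lambda c: f(x, c))  # discrete subproblem
        if y_new == y:
            break
        y = y_new
    return x, y

f = lambda x, y: (x - y) ** 2 + 0.5 * y   # toy objective, convex in x
x_glob, y_glob = ddia_toy(f, y_choices=[0, 1, 2, 3], y0=0)  # reaches (0, 0)
x_loc, y_loc = ddia_toy(f, y_choices=[0, 1, 2, 3], y0=3)    # stuck near (3, 3)
```

Starting from `y0=0` the alternation finds the global optimum, while `y0=3` converges to a different fixed point of the alternation, illustrating the initial-value sensitivity discussed in the abstract.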
Pebble bed reactor fuel cycle optimization using particle swarm algorithm
Energy Technology Data Exchange (ETDEWEB)
Tavron, Barak, E-mail: btavron@bgu.ac.il [Planning, Development and Technology Division, Israel Electric Corporation Ltd., P.O. Box 10, Haifa 31000 (Israel); Shwageraus, Eugene, E-mail: es607@cam.ac.uk [Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ (United Kingdom)
2016-10-15
Highlights: • A particle swarm method has been developed for fuel cycle optimization of PBR reactors. • Results show low sensitivity of uranium utilization to fuel and core design parameters. • A multi-zone fuel loading pattern leads to a small improvement in uranium utilization. • Thorium mixed with highly enriched uranium yields the best uranium utilization. - Abstract: Pebble bed reactor (PBR) features, such as robust thermo-mechanical fuel design and on-line continuous fueling, facilitate a wide range of fuel cycle alternatives. A range of fuel pebble types, containing different amounts of fertile or fissile fuel material, may be loaded into the reactor core. Several fuel loading zones may be used, since radial mixing of the pebbles was shown to be limited. This radial separation suggests the possibility to implement the “seed-blanket” concept for the utilization of fertile fuels such as thorium, and for enhancing reactor fuel utilization. In this study, the particle-swarm meta-heuristic evolutionary optimization method (PSO) has been used to find the optimal fuel cycle design which yields the highest natural uranium utilization. The PSO method is known for efficiently solving complex problems with non-linear objective functions, continuous or discrete parameters and complex constraints. The VSOP system of codes has been used for PBR fuel utilization calculations and a MATLAB script has been used to implement the PSO algorithm. Optimization of PBR natural uranium utilization (NUU) has been carried out for a 3000 MWth High Temperature Reactor (HTR) design operating on the Once-Through-Then-Out (OTTO) fuel management scheme, and for a 400 MWth Pebble Bed Modular Reactor (PBMR) operating on the multi-pass (MEDUL) fuel management scheme. Results showed only a modest improvement in the NUU (<5%) over reference designs. Investigation of thorium fuel cases showed that the use of HEU in combination with thorium results in the most favorable reactor performance in terms of
Flux-corrected transport principles, algorithms, and applications
Kuzmin, Dmitri; Turek, Stefan
2005-01-01
Addressing students and researchers as well as CFD practitioners, this book describes the state of the art in the development of high-resolution schemes based on the Flux-Corrected Transport (FCT) paradigm. Intended for readers who have a solid background in Computational Fluid Dynamics, the book begins with historical notes by J.P. Boris and D.L. Book. Review articles that follow describe recent advances in the design of FCT algorithms as well as various algorithmic aspects. The topics addressed in the book and its main highlights include: the derivation and analysis of classical FCT schemes with special emphasis on the underlying physical and mathematical constraints; flux limiting for hyperbolic systems; generalization of FCT to implicit time-stepping and finite element discretizations on unstructured meshes and its role as a subgrid scale model for Monotonically Integrated Large Eddy Simulation (MILES) of turbulent flows. The proposed enhancements of the FCT methodology also comprise the prelimiting and '...
Fast weighted centroid algorithm for single particle localization near the information limit.
Fish, Jeremie; Scrimgeour, Jan
2015-07-10
A simple weighting scheme that enhances the localization precision of center of mass calculations for radially symmetric intensity distributions is presented. The algorithm effectively removes the biasing that is common in such center of mass calculations. Localization precision compares favorably with other localization algorithms used in super-resolution microscopy and particle tracking, while significantly reducing the processing time and memory usage. We expect that the algorithm presented will be of significant utility when fast computationally lightweight particle localization or tracking is desired.
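A generic version of such a weighting scheme (a sketch in the spirit of the abstract; the paper's exact weights may differ): raising pixel intensities to a power greater than one before the center-of-mass sum suppresses the background pixels that bias a plain centroid.

```python
import numpy as np

def weighted_centroid(img, p=2.0):
    """Center-of-mass localization with intensity weighting I**p.
    For a radially symmetric spot, the weighting sharpens the effective
    profile and reduces the bias contributed by background pixels."""
    w = img.astype(float) ** p
    rows, cols = np.indices(img.shape)
    total = w.sum()
    return (w * cols).sum() / total, (w * rows).sum() / total  # (x, y)

# synthetic noiseless Gaussian spot centered at (x, y) = (12.3, 8.7)
rows, cols = np.indices((24, 24))
img = np.exp(-(((cols - 12.3) ** 2 + (rows - 8.7) ** 2) / (2 * 2.0 ** 2)))
cx, cy = weighted_centroid(img)
```

On noisy data the choice of `p` trades bias suppression against noise amplification; the value 2.0 here is an arbitrary illustration.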
Gyrokinetic theory for particle and energy transport in fusion plasmas
Falessi, Matteo Valerio; Zonca, Fulvio
2018-03-01
A set of equations is derived describing the macroscopic transport of particles and energy in a thermonuclear plasma on the energy confinement time scale. The equations thus derived allow studying collisional and turbulent transport self-consistently, retaining the effect of magnetic field geometry without postulating any scale separation between the reference state and fluctuations. Previously, assuming scale separation, transport equations have been derived from kinetic equations by means of multiple-scale perturbation analysis and spatio-temporal averaging. In this work, the evolution equations for the moments of the distribution function are obtained following the standard approach; meanwhile, gyrokinetic theory has been used to explicitly express the fluctuation induced fluxes. In this way, equations for the transport of particles and energy up to the transport time scale can be derived using standard first order gyrokinetics.
Design of a fuzzy differential evolution algorithm to predict non-deposition sediment transport
Ebtehaj, Isa; Bonakdari, Hossein
2017-12-01
Since the flow entering a sewer contains solid matter, deposition at the bottom of the channel is inevitable. It is difficult to understand the complex, three-dimensional mechanism of sediment transport in sewer pipelines. Therefore, a method to estimate the limiting velocity is necessary for optimal designs. Due to the inability of gradient-based algorithms to train Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for non-deposition sediment transport prediction, a new hybrid ANFIS method based on a differential evolutionary algorithm (ANFIS-DE) is developed. The training and testing performance of ANFIS-DE is evaluated using a wide range of dimensionless parameters gathered from the literature. The input combination used to estimate the densimetric Froude number (Fr) includes the volumetric sediment concentration (CV), the ratio of median particle diameter to hydraulic radius (d/R), the ratio of median particle diameter to pipe diameter (d/D) and the overall friction factor of sediment (λs). The testing results are compared with the ANFIS model and regression-based equation results. The ANFIS-DE technique predicted sediment transport at the limit of deposition with lower root mean square error (RMSE = 0.323), lower mean absolute percentage error (MAPE = 0.065) and higher accuracy (R² = 0.965) than the ANFIS model and regression-based equations.
GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS
International Nuclear Information System (INIS)
Rogers, Adam; Fiege, Jason D.
2011-01-01
Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ² and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ² is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.
Directory of Open Access Journals (Sweden)
Qi Hu
2013-04-01
Full Text Available State-of-the-art heuristic algorithms for the vehicle routing problem with time windows (VRPTW) usually converge slowly during the early iterations and easily fall into local optima. To address these problems, this paper analyzes the particle encoding and decoding strategy of the particle swarm optimization algorithm, the construction of the vehicle route, and the detection of local optima. On this basis, a hybrid chaos-particle swarm optimization algorithm (HPSO) is proposed to solve VRPTW. The chaos algorithm is employed to re-initialize the particle swarm. An efficient insertion heuristic algorithm is also proposed to build valid vehicle routes in the particle decoding process. A premature convergence judgment mechanism is formulated and combined with the chaos algorithm and Gaussian mutation into HPSO when the particle swarm falls into local convergence. Extensive experiments are carried out to test the parameter settings in the insertion heuristic algorithm and to verify that they correspond to the real distribution of the data in the concrete problem. The experiments also show that HPSO outperforms the other state-of-the-art algorithms on VRPTW.
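The chaotic re-initialization step can be sketched with a logistic map, a common choice in chaos-PSO hybrids; the specific map, its parameter mu = 4, and the seed value are assumptions, since the abstract does not specify them.

```python
import numpy as np

def chaotic_reinit(n_particles, dim, lo, hi, x0=0.7, mu=4.0):
    """Re-seed particle positions from a logistic-map chaotic sequence.

    The logistic map x <- mu*x*(1-x) with mu = 4 is ergodic on (0, 1), so the
    sequence covers the range densely; values are rescaled to [lo, hi].
    (Illustrative: the paper's exact chaos map is not given here.)
    """
    x = x0
    out = np.empty((n_particles, dim))
    for i in range(n_particles):
        for j in range(dim):
            x = mu * x * (1.0 - x)
            out[i, j] = lo + (hi - lo) * x
    return out
```

Unlike uniform random restarts, the deterministic chaotic sequence is reproducible and avoids clustering at the previous swarm location, which is the property such hybrids exploit.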
ENERGETIC PARTICLE TRANSPORT ACROSS THE MEAN MAGNETIC FIELD: BEFORE DIFFUSION
International Nuclear Information System (INIS)
Laitinen, T.; Dalla, S.
2017-01-01
Current particle transport models describe the propagation of charged particles across the mean field direction in turbulent plasmas as diffusion. However, recent studies suggest that at short timescales, such as soon after solar energetic particle (SEP) injection, particles remain on turbulently meandering field lines, which results in nondiffusive initial propagation across the mean magnetic field. In this work, we use a new technique to investigate how the particles are displaced from their original field lines, and we quantify the parameters of the transition from field-aligned particle propagation along meandering field lines to particle diffusion across the mean magnetic field. We show that the initial decoupling of the particles from the field lines is slow, and particles remain within a Larmor radius from their initial meandering field lines for tens to hundreds of Larmor periods, for 0.1–10 MeV protons in turbulence conditions typical of the solar wind at 1 au. Subsequently, particles decouple from their initial field lines and after hundreds to thousands of Larmor periods reach time-asymptotic diffusive behavior consistent with particle diffusion across the mean field caused by the meandering of the field lines. We show that the typical duration of the prediffusive phase, hours to tens of hours for 10 MeV protons in 1 au solar wind turbulence conditions, is significant for SEP propagation to 1 au and must be taken into account when modeling SEP propagation in the interplanetary space.
Modeling airflow and particle transport/deposition in pulmonary airways.
Kleinstreuer, Clement; Zhang, Zhe; Li, Zheng
2008-11-30
A review of research papers is presented, pertinent to computer modeling of airflow as well as nano- and micron-size particle deposition in pulmonary airway replicas. The key modeling steps are outlined, including construction of suitable airway geometries, mathematical description of the air-particle transport phenomena and computer simulation of micron and nanoparticle depositions. Specifically, diffusion-dominated nanomaterial deposits on airway surfaces much more uniformly than micron particles of the same material. This may imply different toxicity effects. Due to impaction and secondary flows, micron particles tend to accumulate around the carinal ridges and to form "hot spots", i.e., locally high concentrations which may lead to tumor development. Inhaled particles in the size range 20 nm ≤ dp ≤ 3 μm may readily reach the deeper lung region. Concerning inhaled therapeutic particles, optimal parameters for mechanical drug-aerosol targeting of predetermined lung areas can be computed, given representative pulmonary airways.
Transient particle transport studies at the W7-AS stellarator
International Nuclear Information System (INIS)
Koponen, J.
2000-01-01
One of the crucial problems in fusion research is understanding the transport of particles and heat in plasmas relevant for energy production. Extensive experimental transport studies have unraveled many details of heat transport in tokamaks and stellarators. However, owing to greater experimental difficulties, the properties of particle transport have remained much less well known. In particular, very few particle transport studies have been carried out in stellarators. This thesis summarises the transient particle transport experiments carried out at the Wendelstein 7-Advanced Stellarator (W7-AS). The main diagnostic tool was a 10-channel microwave interferometer. A technique for reconstructing the electron density profiles from the multichannel interferometer data was developed and implemented. The interferometer and the reconstruction software provide high quality electron density measurements with high temporal and sufficient spatial resolution. The density reconstruction is based on regularization methods studied during the development work. An extensive program of transient particle transport studies was carried out with the gas modulation method. The experiments resulted in a scaling expression for the diffusion coefficient. Transient inward convection was found in the edge plasma. The role of convection is minor in the core plasma, except at higher heating power, when an outward directed convective flux is observed. Radially peaked density profiles were found in discharges free of significant central density sources. Such density profiles are usually observed in tokamaks, but never before in W7-AS. The existence of an inward pinch is confirmed with two independent transient transport analysis methods. The density peaking is possible if the plasma is heated with extreme off-axis Electron Cyclotron Heating (ECH), when the temperature gradient vanishes in the core plasma, and if the gas puffing level is relatively low. The transport of plasma particles and heat
Algorithm of Data Reduce in Determination of Aerosol Particle Size Distribution at Damps/C
International Nuclear Information System (INIS)
Muhammad-Priyatna; Otto-Pribadi-Ruslanto
2001-01-01
An analysis was carried out of the data reduction algorithm of the Damps/C (Differential Mobility Particle Sizer with Condensation Particle Counter) system, which determines aerosol particle size distributions in the 0.01 μm to 1 μm diameter range. The Damps/C system consists of hardware and software: the hardware determines the mobilities of the aerosol particles, and the software determines the particle size distribution in diameter. Mobility and particle diameter are connected through the electric field, which is the basis of the program for data reduction and for converting particle mobility to particle diameter. The analysis gives a transfer function value Ω of 0.5. The data reduction program converts the mobility basis into a diameter basis, with corrections for counting efficiency, the transfer function value, and multiply charged particles. (author)
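The mobility-to-diameter conversion rests on the standard relation Z = n e Cc(d) / (3 π μ d), where Cc is the Cunningham slip correction; because Cc itself depends on d, the inversion is iterative. The sketch below uses textbook constants for air; the initial guess, iteration scheme, and tolerance are illustrative, not the Damps/C implementation.

```python
import math

def slip_correction(d, mfp=68e-9):
    """Cunningham slip correction for a particle of diameter d (m) in air."""
    kn = 2.0 * mfp / d  # Knudsen number
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def electrical_mobility(d, n_charges=1, viscosity=1.81e-5):
    """Electrical mobility Z = n e Cc(d) / (3 pi mu d), SI units."""
    e = 1.602e-19
    return n_charges * e * slip_correction(d) / (3.0 * math.pi * viscosity * d)

def diameter_from_mobility(z, n_charges=1, viscosity=1.81e-5, tol=1e-15):
    """Invert Z(d) by fixed-point iteration (Cc depends on d itself)."""
    e = 1.602e-19
    d = 100e-9  # initial guess: 100 nm
    for _ in range(500):
        d_new = n_charges * e * slip_correction(d) / (3.0 * math.pi * viscosity * z)
        if abs(d_new - d) < tol:
            return d_new
        d = d_new
    return d
```

A round trip (diameter to mobility and back) recovers the original diameter across the instrument's size range, which is the essential consistency check for such a reduction program.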
Drift Wave Test Particle Transport in Reversed Shear Profile
International Nuclear Information System (INIS)
Horton, W.; Park, H.B.; Kwon, J.M.; Stronzzi, D.; Morrison, P.J.; Choi, D.I.
1998-01-01
Drift wave maps, area preserving maps that describe the motion of charged particles in drift waves, are derived. The maps allow the integration of particle orbits on the long time scale needed to describe transport. Calculations using the drift wave maps show that dramatic improvement in the particle confinement, in the presence of a given level and spectrum of E x B turbulence, can occur for q(r)-profiles with reversed shear. A similar reduction in the transport, i.e. one that is independent of the turbulence, is observed in the presence of an equilibrium radial electric field with shear. The transport reduction, caused by the combined effects of radial electric field shear and both monotonic and reversed shear magnetic q-profiles, is also investigated
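An area-preserving map of this kind can be sketched as a standard-map-like iteration: a radial kick by the wave followed by an angle rotation. The kick form, amplitude K, and variable names below are a generic stand-in, not the maps derived in the paper.

```python
import numpy as np

def drift_kick_map(J, theta, K=0.5, steps=1000):
    """Iterate a standard-map-like area-preserving map.

    Illustrative stand-in for a drift wave map: J plays the role of a radial
    action, theta a poloidal angle, and K the E x B turbulence level. Each
    step has unit Jacobian determinant, so phase-space area is preserved.
    """
    traj = np.empty((steps, 2))
    for n in range(steps):
        J = J + K * np.sin(theta)          # radial kick by the wave
        theta = (theta + J) % (2 * np.pi)  # rotation with J-dependent frequency
        traj[n] = (J, theta)
    return traj
```

Below the chaos threshold of such maps, invariant curves bound the radial excursion, which is the map-level picture of the confinement improvement discussed in the abstract.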
Optimization of magnetic switches for single particle and cell transport
Energy Technology Data Exchange (ETDEWEB)
Abedini-Nassab, Roozbeh; Yellen, Benjamin B., E-mail: yellen@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Box 90300 Hudson Hall, Durham, North Carolina 27708 (United States); Joint Institute, University of Michigan—Shanghai Jiao Tong University, Shanghai Jiao Tong University, Shanghai 200240 (China); Murdoch, David M. [Department of Medicine, Duke University, Durham, North Carolina 27708 (United States); Kim, CheolGi [Department of Emerging Materials Science, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 711-873 (Korea, Republic of)
2014-06-28
The ability to manipulate an ensemble of single particles and cells is a key aim of lab-on-a-chip research; however, the control mechanisms must be optimized for minimal power consumption to enable future large-scale implementation. Recently, we demonstrated a matter transport platform, which uses overlaid patterns of magnetic films and metallic current lines to control magnetic particles and magnetic-nanoparticle-labeled cells; however, we have made no prior attempts to optimize the device geometry and power consumption. Here, we provide an optimization analysis of particle-switching devices based on stochastic variation in the particle's size and magnetic content. These results are immediately applicable to the design of robust, multiplexed platforms capable of transporting, sorting, and storing single cells in large arrays with low power and high efficiency.
Optimization of Particle Search Algorithm for CFD-DEM Simulations
Directory of Open Access Journals (Sweden)
G. Baryshev
2013-09-01
Full Text Available Discrete element method has numerous applications in particle physics. However, simulating particles as discrete entities can become costly for large systems. In time-driven DEM simulation, most of the computation time is taken by the contact search stage. We propose an efficient collision detection method based on sorting particles by their coordinates. Using multiple sorting criteria minimizes the number of potential neighbours and makes this approach well suited to the simulation of massive systems in 3D. This method is compared to a common approach that consists of placing particles onto a grid of cells. An advantage of the new approach is that its performance is independent of the particle radius and domain size.
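The coordinate-sorting idea resembles a sweep-and-prune search: sort particles along one axis and stop testing a pair as soon as the axis gap alone rules out contact. The sketch below uses a single sorting criterion in 2D for brevity (the paper uses multiple criteria in 3D), and assumes equal particle radii.

```python
def find_contacts(positions, radius):
    """Contact search by coordinate sorting (sweep-and-prune sketch).

    Particles are sorted by x; for each particle, candidates are scanned in
    x order and the inner loop breaks once the x gap alone exceeds the
    contact distance, so distant pairs are never tested.
    """
    order = sorted(range(len(positions)), key=lambda i: positions[i][0])
    contacts = []
    for a in range(len(order)):
        i = order[a]
        for b in range(a + 1, len(order)):
            j = order[b]
            if positions[j][0] - positions[i][0] > 2 * radius:
                break  # all later particles are even farther away in x
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if dx * dx + dy * dy <= (2 * radius) ** 2:
                contacts.append((min(i, j), max(i, j)))
    return contacts
```

Unlike a cell grid, nothing here depends on choosing a cell size from the particle radius or the domain extent, which is the independence property the abstract highlights.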
Discrete elements method of neutral particle transport
International Nuclear Information System (INIS)
Mathews, K.A.
1983-01-01
A new discrete elements (L_N) transport method is derived and compared to the discrete ordinates S_N method, theoretically and by numerical experimentation. The discrete elements method is more accurate than discrete ordinates and strongly ameliorates ray effects for the practical problems studied. The discrete elements method is shown to be more cost effective, in terms of execution time with comparable storage to attain the same accuracy, for a one-dimensional test case using linear characteristic spatial quadrature. In a two-dimensional test case, a vacuum duct in a shield, L_N is more consistently convergent toward a Monte Carlo benchmark solution than S_N, using step characteristic spatial quadrature. An analysis of the interaction of angular and spatial quadrature in xy-geometry indicates the desirability of using linear characteristic spatial quadrature with the L_N method
Computational methods for two-phase flow and particle transport
Lee, Wen Ho
2013-01-01
This book describes mathematical formulations and computational methods for solving two-phase flow problems with a computer code that calculates thermal hydraulic problems related to light water and fast breeder reactors. The physical model also handles the particle and gas flow problems that arise from coal gasification and fluidized beds. The second part of this book deals with the computational methods for particle transport.
Solitary Model of the Charge Particle Transport in Collisionless Plasma
International Nuclear Information System (INIS)
Simonchik, L.V.; Trukhachev, F.M.
2006-01-01
The one-dimensional MHD solitary model of charged particle transport in plasma is developed. It is shown that the self-consistent electric field of ion-acoustic solitons can displace charged particles in space, which can be a source of local electric current generation. The displacement is on the order of a few Debye lengths. It is shown that the current associated with a soliton cascade has a pulsating nature with a DC component. Methods for verifying the developed theory in dusty plasma are proposed
Particle Acceleration and Fractional Transport in Turbulent Reconnection
Isliker, Heinz; Pisokas, Theophilos; Vlahos, Loukas; Anastasiadis, Anastasios
2017-11-01
We consider a large-scale environment of turbulent reconnection that is fragmented into a number of randomly distributed unstable current sheets (UCSs), and we statistically analyze the acceleration of particles within this environment. We address two important cases of acceleration mechanisms when particles interact with the UCS: (a) electric field acceleration and (b) acceleration by reflection at contracting islands. Electrons and ions are accelerated very efficiently, attaining an energy distribution of power-law shape with an index 1-2, depending on the acceleration mechanism. The transport coefficients in energy space are estimated from test-particle simulation data, and we show that the classical Fokker-Planck (FP) equation fails to reproduce the simulation results when the transport coefficients are inserted into it and it is solved numerically. The cause for this failure is that the particles perform Levy flights in energy space, while the distributions of the energy increments exhibit power-law tails. We then use the fractional transport equation (FTE) derived by Isliker et al., whose parameters and the order of the fractional derivatives are inferred from the simulation data, and solving the FTE numerically, we show that the FTE successfully reproduces the kinetic energy distribution of the test particles. We discuss in detail the analysis of the simulation data and the criteria that allow one to judge the appropriateness of either an FTE or a classical FP equation as a transport model.
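The failure mode described, power-law increment tails producing Lévy flights in energy space, can be illustrated with a toy random walk; the Pareto tail index, step count, and particle number below are illustrative choices, not values fitted to the paper's test-particle data.

```python
import numpy as np

def levy_energy_walk(n_particles=5000, n_steps=50, alpha=1.5, seed=0):
    """Random walk with symmetric power-law (Pareto) increments.

    Increments drawn from a tail ~ x**-(1+alpha) with alpha < 2 have infinite
    variance, so the walk performs Levy flights rather than ordinary
    diffusion, which is the minimal reason a classical Fokker-Planck
    (Gaussian-increment) description fails.
    """
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=(n_particles, n_steps))
    steps = signs * rng.pareto(alpha, size=(n_particles, n_steps))
    return steps.sum(axis=1)
```

The resulting displacement distribution has a sample kurtosis far above the Gaussian value of 3, a simple diagnostic of the heavy tails that motivate a fractional transport equation.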
A Swarm Optimization Genetic Algorithm Based on Quantum-Behaved Particle Swarm Optimization.
Sun, Tao; Xu, Ming-Hai
2017-01-01
Quantum-behaved particle swarm optimization (QPSO) is a variant of traditional particle swarm optimization (PSO). The QPSO, originally developed for continuous search spaces, outperforms the traditional PSO in search ability. This paper analyzes the main factors that affect the search ability of QPSO and converts the particle movement formula into a mutation condition by introducing a rejection region, thus proposing a new binary algorithm, named swarm optimization genetic algorithm (SOGA) because in form it resembles a genetic algorithm (GA) more than PSO. SOGA has crossover and mutation operators like GA but does not need crossover and mutation probabilities to be set, so it has fewer parameters to control. The proposed algorithm was tested with several nonlinear high-dimensional functions in the binary search space, and the results were compared with those from BPSO, BQPSO, and GA. The experimental results show that SOGA is distinctly superior to the other three algorithms in terms of solution accuracy and convergence.
Wei, Yongjie; Ge, Baozhen; Wei, Yaolin
2009-03-20
In general, model-independent algorithms are sensitive to noise during laser particle size measurement. An improved conjugate gradient algorithm (ICGA) that can be used to invert particle size distribution (PSD) from diffraction data is presented. By use of the ICGA to invert simulated data with multiplicative or additive noise, we determined that additive noise is the main factor that induces distorted results. Thus the ICGA is amended by introduction of an iteration step-adjusting parameter and is used experimentally on simulated data and some samples. The experimental results show that the sensitivity of the ICGA to noise is reduced and the inverted results are in accord with the real PSD.
Quasilinear Line Broadened Model for Energetic Particle Transport
Ghantous, Katy; Gorelenkov, Nikolai; Berk, Herbert
2011-10-01
We present a self-consistent quasi-linear model that describes wave-particle interaction in toroidal geometry and computes fast ion transport during TAE mode evolution. The model bridges the gap between single mode resonances, where it predicts the analytically expected saturation levels, and the case of multiple overlapping modes, where particles diffuse across phase space. Results are presented in the large aspect ratio limit, where analytic expressions are used for the Fourier harmonics of the power exchange between waves and particles. Implementation of a more realistic mode structure calculated by the NOVA-K code is also presented. This work is funded by DOE contract DE-AC02-09CH11466.
Algorithms for tracking of charged particles in circular accelerators
International Nuclear Information System (INIS)
Iselin, F.Ch.
1986-01-01
An important problem in accelerator design is the determination of the largest stable betatron amplitude. This stability limit is also known as the dynamic aperture. The equations describing the particle motion are non-linear, and the Linear Lattice Functions cannot be used to compute the stability limits. The stability limits are therefore usually searched for by particle tracking. One selects a set of particles with different betatron amplitudes and tracks them for many turns around the machine. The particles which survive a sufficient number of turns are termed stable. This paper concentrates on conservative systems. For this case the particle motion can be described by a Hamiltonian, i.e. tracking particles means application of canonical transformations. Canonical transformations are equivalent to symplectic mappings, which implies that there exist invariants. These invariants should not be destroyed in tracking
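A minimal symplectic tracking loop of this kind can be sketched with the Hénon map: a linear betatron rotation composed with a thin sextupole-like kick. Both steps have unit Jacobian determinant, so the invariants the text mentions are not destroyed by the iteration itself. The tune, aperture cutoff, and scan step below are illustrative choices, not taken from any particular machine.

```python
import math

def henon_track(x, p, nu=0.254, n_turns=1000):
    """Track one particle through a sextupole-like Henon map.

    One turn = rotation by phase 2*pi*nu, then a quadratic kick. Returns the
    number of turns survived before crossing the (arbitrary) aperture cutoff.
    """
    c, s = math.cos(2 * math.pi * nu), math.sin(2 * math.pi * nu)
    for turn in range(n_turns):
        x, p = c * x + s * p, -s * x + c * p  # linear betatron rotation
        p += x * x                            # thin sextupole kick (symplectic shear)
        if x * x + p * p > 100.0:             # particle lost beyond aperture
            return turn
    return n_turns

def dynamic_aperture(step=0.02, n_turns=1000):
    """Largest on-axis starting amplitude surviving n_turns (scan upward)."""
    a = 0.0
    while henon_track(a + step, 0.0, n_turns=n_turns) == n_turns:
        a += step
    return a
```

The amplitude scan mirrors the procedure in the text: particles at increasing betatron amplitude are tracked for many turns, and the largest surviving amplitude estimates the dynamic aperture.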
Ballistic target tracking algorithm based on improved particle filtering
Ning, Xiao-lei; Chen, Zhan-qi; Li, Xiao-yang
2015-10-01
Tracking a ballistic re-entry target is a typical nonlinear filtering problem. In order to track the ballistic re-entry target in a nonlinear, non-Gaussian complex environment, a novel chaos map particle filter (CMPF) is used to estimate the target state. CMPF performs well in estimating the states and parameters of nonlinear, non-Gaussian systems. Monte Carlo simulation results show that this method can effectively solve the particle degeneracy and particle impoverishment problems by improving the efficiency of particle sampling, so that better particles take part in the estimation. Meanwhile, CMPF improves the state estimation precision and convergence speed compared with the EKF, the UKF, and the ordinary particle filter.
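For context, a baseline bootstrap (SIR) particle filter is sketched below; the CMPF replaces part of the sampling with a chaos map, which is not reproduced here. The 1-D random-walk state model, noise levels, and particle count are all assumptions, and systematic resampling stands in as the degeneracy countermeasure.

```python
import numpy as np

def particle_filter(observations, n_particles=500, proc_std=0.5, obs_std=1.0, seed=1):
    """Bootstrap (SIR) particle filter for a 1-D random-walk state.

    Propagate, weight by the Gaussian observation likelihood, estimate as the
    weighted mean, then systematically resample to combat degeneracy.
    """
    rng = np.random.default_rng(seed)
    parts = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        parts = parts + rng.normal(0.0, proc_std, n_particles)   # propagate
        w = np.exp(-0.5 * ((z - parts) / obs_std) ** 2)          # likelihood weights
        w /= w.sum()
        estimates.append(np.dot(w, parts))
        cs = np.cumsum(w)
        cs[-1] = 1.0  # guard against floating-point shortfall
        positions = (np.arange(n_particles) + rng.random()) / n_particles
        parts = parts[np.searchsorted(cs, positions)]            # systematic resampling
    return np.array(estimates)
```

Without the resampling step, the weights of most particles collapse toward zero after a few updates, which is the degeneracy problem the abstract refers to.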
Directory of Open Access Journals (Sweden)
Hao Yin
2014-01-01
Full Text Available For the SLA-aware service composition problem (SSC), an optimization model is built and a hybrid multiobjective discrete particle swarm optimization algorithm (HMDPSO) is proposed in this paper. According to the characteristics of this problem, a particle updating strategy is designed by introducing a crossover operator. In order to restrain the particle swarm's premature convergence and increase its global search capacity, a swarm diversity indicator is introduced and a particle mutation strategy is proposed to increase swarm diversity. To accelerate the process of obtaining feasible particle positions, a local search strategy based on constraint domination is proposed and incorporated into the proposed algorithm. Finally, some parameters in HMDPSO are analyzed and set to suitable values, and then HMDPSO and its variant HMDPSO+, which incorporates the local search strategy, are compared with recently proposed related algorithms on cases of different scales. The results show that HMDPSO+ solves the SSC problem more effectively.
FLUKA A multi-particle transport code (program version 2005)
Ferrari, A; Fassò, A; Ranft, Johannes
2005-01-01
This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner’s guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.
Linear kinetic theory and particle transport in stochastic mixtures
Energy Technology Data Exchange (ETDEWEB)
Pomraning, G.C. [Univ. of California, Los Angeles, CA (United States)
1995-12-31
We consider the formulation of linear transport and kinetic theory describing energy and particle flow in a random mixture of two or more immiscible materials. Following an introduction, we summarize early and fundamental work in this area, and we conclude with a brief discussion of recent results.
Transient fluctuation relations for time-dependent particle transport
Altland, Alexander; de Martino, Alessandro; Egger, Reinhold; Narozhny, Boris
2010-09-01
We consider particle transport under the influence of time-varying driving forces, where fluctuation relations connect the statistics of pairs of time-reversed evolutions of physical observables. In many “mesoscopic” transport processes, the effective many-particle dynamics is dominantly classical while the microscopic rates governing particle motion are of quantum-mechanical origin. We here employ the stochastic path-integral approach as an optimal tool to probe the fluctuation statistics in such applications. Describing the classical limit of the Keldysh quantum nonequilibrium field theory, the stochastic path integral encapsulates the quantum origin of microscopic particle exchange rates. Dynamically, it is equivalent to a transport master equation which is a formalism general enough to describe many applications of practical interest. We apply the stochastic path integral to derive general functional fluctuation relations for current flow induced by time-varying forces. We show that the successive measurement processes implied by this setup do not put the derivation of quantum fluctuation relations in jeopardy. While in many cases the fluctuation relation for a full time-dependent current profile may contain excessive information, we formulate a number of reduced relations, and demonstrate their application to mesoscopic transport. Examples include the distribution of transmitted charge, where we show that the derivation of a fluctuation relation requires the combined monitoring of the statistics of charge and work.
Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm
Directory of Open Access Journals (Sweden)
KeKe Gen
2015-01-01
Full Text Available The article considers the data association problem in simultaneous localization and mapping (SLAM) for route determination of unmanned aerial vehicles (UAVs). Currently, these vehicles are already widely used, but they are mainly controlled by a remote operator. An urgent task is to develop a control system that allows for autonomous flight. The SLAM algorithm (simultaneous localization and mapping), which predicts the location, speed, flight parameters, and the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem by using an improved ant algorithm. The data association step of SLAM is meant to establish a matching between observed landmarks and landmarks in the state vector. The ant algorithm is a widely used optimization algorithm with positive feedback and the ability to search in parallel, so it is suitable for solving the data association problem for SLAM. However, the traditional ant algorithm easily falls into local optima while searching for routes. Random perturbations are therefore added during the global pheromone update to avoid local optima, and limiting the pheromone on each route increases the search space at a reasonable computational cost. The paper proposes a local data association algorithm for SLAM based on an improved ant algorithm. To increase the speed of calculation, local data association is used instead of global data association. The first stage of the algorithm defines targets in the matching space and the observed landmarks that can be associated by the criterion of individual compatibility (IC). The second stage determines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and
Development of particle and heavy ion transport code system
International Nuclear Information System (INIS)
Niita, Koji
2004-01-01
The Particle and Heavy Ion Transport code System (PHITS) is a three-dimensional, general-purpose Monte Carlo simulation code for describing the transport and reactions of particles and heavy ions in materials. It was developed on the basis of NMTC/JAM for the design and safety assessment of J-PARC. What PHITS is, its physical processes, its physical models and the development history of the PHITS code are described. As examples of application, the evaluation of neutron optics, cancer treatment with heavy-particle beams and cosmic radiation are presented. The JAM and JQMD models are used as the physical models. Neutron motion in a sextupole magnetic field and a gravitational field, PHITS simulations of the track of a 12C beam and of secondary neutron tracks in a small model of the HIMAC cancer treatment device, and the neutron flux in the Space Shuttle are explained. (S.Y.)
A generalized transport-velocity formulation for smoothed particle hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Zhang, Chi; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A.
2017-05-15
The standard smoothed particle hydrodynamics (SPH) method suffers from tensile instability. In fluid-dynamics simulations this instability leads to particle clumping and void regions when negative pressure occurs. In solid-dynamics simulations, it results in unphysical structure fragmentation. In this work the transport-velocity formulation of Adami et al. (2013) is generalized to provide a solution to this long-standing problem. Instead of imposing a global background pressure, a variable background pressure is used to modify the particle transport velocity and eliminate the tensile instability completely. Furthermore, the modification is localized by defining a shortened smoothing length. The generalized formulation is suitable for fluid and solid materials, with and without free surfaces. The results of extensive numerical tests on both fluid- and solid-dynamics problems indicate that the new method provides a unified approach for multi-physics SPH simulations.
Directed transport of confined Brownian particles with torque
Radtke, Paul K.; Schimansky-Geier, Lutz
2012-05-01
We investigate the influence of an additional torque on the motion of Brownian particles confined in a channel geometry with varying width. The particles are driven by random fluctuations modeled by an Ornstein-Uhlenbeck process with given correlation time τc. The latter causes persistent motion and is implemented as (i) thermal noise in equilibrium and (ii) noisy propulsion in nonequilibrium. In the nonthermal process a directed transport emerges; its properties are studied in detail with respect to the correlation time, the torque, and the channel geometry. Eventually, the transport mechanism is traced back to a persistent sliding of particles along the even boundaries in contrast to scattered motion at uneven or rough ones.
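The driving noise described above can be sketched numerically: Euler-Maruyama integration of an Ornstein-Uhlenbeck process with correlation time tau_c, whose stationary variance is sigma^2. This is a generic illustration; the function name and parameter values are assumptions, not the paper's:

```python
import math
import random

def ou_path(tau_c, sigma, dt=1e-3, n_steps=10_000, seed=1):
    """Euler-Maruyama integration of an Ornstein-Uhlenbeck process:
    dv = -(v / tau_c) dt + sqrt(2 sigma^2 / tau_c) dW.
    Returns the sampled path; correlation time tau_c gives persistent motion."""
    rng = random.Random(seed)
    amp = math.sqrt(2.0 * sigma ** 2 / tau_c * dt)  # noise increment amplitude
    v, path = 0.0, []
    for _ in range(n_steps):
        v += -(v / tau_c) * dt + amp * rng.gauss(0.0, 1.0)
        path.append(v)
    return path
```

After a few correlation times the process relaxes to its stationary distribution, so the empirical variance of the late path should approach sigma^2.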
van den Bremer, Ton S.; Taylor, Paul H.
2014-11-01
Although the literature has examined Stokes drift, the net Lagrangian transport by particles due to surface gravity waves, in great detail, the motion of fluid particles transported by surface gravity wave groups has received considerably less attention. Nevertheless, in practice the wave field on the open sea often has a group-like structure. The motion of particles is different in this case, as particles at sufficient depth are transported backwards by the Eulerian return current that was first described by Longuet-Higgins & Stewart (1962) and forms an inseparable counterpart of Stokes drift for wave groups, ensuring that the (irrotational) mass balance holds. We use WKB theory to study the variation with depth of the Lagrangian transport by the return current, distinguishing between two-dimensional seas, three-dimensional seas, infinite depth and finite depth. We then provide dimensional estimates of the net horizontal Lagrangian transport by the Stokes drift on the one hand and the return flow on the other for realistic sea states in all four cases. Finally, we propose a simple scaling relationship for the transition depth: the depth above which Lagrangian particles are transported forwards by the Stokes drift and below which they are transported backwards by the return current.
Particle transport methods for LWR dosimetry developed by the Penn State transport theory group
International Nuclear Information System (INIS)
Haghighat, A.; Petrovic, B.
1997-01-01
This paper reviews advanced particle transport theory methods developed by the Penn State Transport Theory Group (PSTTG) over the past several years. These methods have been developed in response to increasing needs for accuracy of results and for three-dimensional modeling of nuclear systems.
Directory of Open Access Journals (Sweden)
Long Ma
2015-05-01
Full Text Available We studied sediment cores from Sayram Lake in the Tianshan Mountains of northwest China to evaluate variations in aeolian transport processes over the past ~150 years. Using an end-member modeling algorithm of particle size data, we interpreted end members with a strong bimodal distribution as having been transported by aeolian processes, whereas other end members were interpreted to have been transported by fluvial processes. The aeolian fraction accounted for an average of 27% of the terrigenous components in the core. We used the ratio of aeolian to fluvial content in the Sayram Lake sediments as an index of the past intensity of aeolian transport in the Tianshan Mountains. During the interval 1910-1930, the index was high, reflecting the fact that a dry climate provided optimal conditions for aeolian dust transport. From 1930 to 1980, the intensity of aeolian transport was weak. From the 1980s to the 2000s, aeolian transport to Sayram Lake increased. Although the climate of northwest China became more humid in the mid-1980s, human activity had by that time altered the impact of climate on the landscape, leading to enhanced surface erosion, which provided more transportable material for dust storms. Comparison of the Sayram Lake sediment record with sediment records from other lakes in the region indicates synchronous intervals of enhanced aeolian transport from 1910 to 1930 and 1980 to 2000.
Tang, Ge; Wei, Biao; Wu, Decao; Feng, Peng; Liu, Juan; Tang, Yuan; Xiong, Shuangfei; Zhang, Zheng
2018-03-01
To select the optimal wavelengths in light extinction spectroscopy measurements, genetic algorithm-particle swarm optimization (GAPSO), based on the genetic algorithm (GA) and particle swarm optimization (PSO), is adopted. The change in the optimal wavelength positions for different feature size parameters and distribution parameters is evaluated. Moreover, the Monte Carlo method based on random probability is used to identify the number of optimal wavelengths, and good inversion of the particle size distribution is obtained. The method proved to have the advantage of resisting noise. In order to verify the feasibility of the algorithm, spectra with bands ranging from 200 to 1000 nm are computed. On this basis, measured data from standard particles are used to verify the algorithm.
A general concurrent algorithm for plasma particle-in-cell simulation codes
International Nuclear Information System (INIS)
Liewer, P.C.; Decyk, V.K.
1989-01-01
We have developed a new algorithm for implementing plasma particle-in-cell (PIC) simulation codes on concurrent processors with distributed memory. This algorithm, named the general concurrent PIC algorithm (GCPIC), has been used to implement an electrostatic PIC code on the 33-node JPL Mark III Hypercube parallel computer. To decompose a PIC code using the GCPIC algorithm, the physical domain of the particle simulation is divided into sub-domains, equal in number to the number of processors, such that all sub-domains have roughly equal numbers of particles. For problems with non-uniform particle densities, these sub-domains will be of unequal physical size. Each processor is assigned a sub-domain and is responsible for updating the particles in its sub-domain. This algorithm has led to a very efficient parallel implementation of a well-benchmarked one-dimensional PIC code. The dominant portion of the code, updating the particle positions and velocities, is nearly 100% efficient when the number of particles is increased linearly with the number of hypercube processors used, so that the number of particles per processor is constant. For example, the increase in time spent updating particles in going from a problem with 11,264 particles run on 1 processor to 360,448 particles on 32 processors was only 3% (a parallel efficiency of 97%). Although implemented on a hypercube concurrent computer, this algorithm should also be efficient for PIC codes on other parallel architectures and for large PIC codes on sequential computers where part of the data must reside on external disks. copyright 1989 Academic Press, Inc.
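The load-balanced decomposition step, sub-domains of unequal physical size but roughly equal particle counts, can be sketched in one dimension. This is a minimal illustration of the idea, not the GCPIC implementation:

```python
def decompose_domain(positions, n_proc):
    """Split a 1-D domain into n_proc contiguous sub-domains holding roughly
    equal numbers of particles. For non-uniform densities the sub-domains
    end up with unequal physical size, as in the GCPIC decomposition."""
    xs = sorted(positions)
    n = len(xs)
    bounds = []
    for p in range(1, n_proc):
        k = (p * n) // n_proc                  # particle index at the cut
        bounds.append(0.5 * (xs[k - 1] + xs[k]))  # cut between two particles
    return bounds  # processor p owns particles with bounds[p-1] < x <= bounds[p]
```

In an actual PIC step, particles crossing a boundary after the push would be communicated to the neighbouring processor; here only the static decomposition is shown.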
Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.
2018-03-01
The Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. The two algorithms have different advantages and disadvantages when applied to the optimization of the Integer Programming Model for the Bus Timetabling Problem (MIPBTP), in which the optimal number of trips must be found subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iteration and program simplicity in finding the optimal solution.
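A minimal PSO of the kind compared here can be sketched as follows. The inertia and acceleration values are common textbook defaults, not the paper's settings:

```python
import random

def pso(f, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimal particle swarm optimizer (illustrative sketch).
    Each particle tracks its personal best; the swarm tracks a global best."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pval[i]:               # update personal best
                pbest[i], pval[i] = x[i][:], val
                if val < gval:              # update global best
                    gbest, gval = x[i][:], val
    return gbest, gval
```

For a constrained integer problem such as MIPBTP, positions would additionally be rounded and infeasible solutions penalized; that layer is omitted here.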
Modeling reactive transport with particle tracking and kernel estimators
Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier
2015-04-01
Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate such a system, an infinite number of particles is required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly mixed system that is limited by diffusion. Recent works have used this effect to model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect should in most cases be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in diluted systems should be modeled based on alternative conceptual models and not on a limited number of particles.
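The concentration estimate advocated above can be sketched with a one-dimensional Gaussian kernel: each particle's mass is spread over a region of influence of width `bandwidth` rather than binned into a single cell. The bandwidth choice and function names are illustrative assumptions:

```python
import math

def kde_concentration(particle_x, mass, query_x, bandwidth):
    """Gaussian kernel density estimate of concentration at query points.
    Each particle contributes m * N(x_q; x_p, bandwidth^2), so total mass
    is conserved when the estimate is integrated over space."""
    norm = 1.0 / (bandwidth * math.sqrt(2.0 * math.pi))
    out = []
    for xq in query_x:
        c = 0.0
        for xp, m in zip(particle_x, mass):
            u = (xq - xp) / bandwidth
            c += m * norm * math.exp(-0.5 * u * u)
        out.append(c)
    return out
```

With a suitable (e.g. plug-in or cross-validated) bandwidth, the estimated concentration field is far smoother than a histogram built from the same finite particle set, which is the effect the authors exploit.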
Optimization of multi-objective micro-grid based on improved particle swarm optimization algorithm
Zhang, Jian; Gan, Yang
2018-04-01
The paper presents a multi-objective optimal configuration model for an independent micro-grid with the aims of economy and environmental protection. The Pareto solution set can be obtained by solving this model with an improved particle swarm algorithm. The feasibility of the improved particle swarm optimization algorithm for the multi-objective model is verified, which provides an important reference for the multi-objective optimization of independent micro-grids.
Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.
Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard
2012-06-07
We consider several patchy particle models that have been proposed in the literature and investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on the crystalline phases of patchy particle systems.
Particle Swarm Optimization algorithms for geophysical inversion, practical hints
Garcia Gonzalo, E.; Fernandez Martinez, J.; Fernandez Alvarez, J.; Kuzma, H.; Menendez Perez, C.
2008-12-01
PSO is a stochastic optimization technique that has been successfully used in many engineering fields. The PSO algorithm can be physically interpreted as a stochastic damped mass-spring system (Fernandez Martinez and Garcia Gonzalo 2008). Based on this analogy, we present a whole family of PSO algorithms together with their first- and second-order stability regions. Their performance is also checked using synthetic functions (Rosenbrock and Griewank) showing a degree of ill-posedness similar to that found in many geophysical inverse problems. Finally, we present the application of these algorithms to a Vertical Electrical Sounding inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. We analyze the role of the PSO parameters (inertia, local and global accelerations, and discretization step), both in the convergence curves and in the a posteriori sampling of the depth of the intrusion. Comparison is made with binary genetic algorithms and simulated annealing. As a result of this analysis, practical hints are given for selecting the correct algorithm and tuning the corresponding PSO parameters. Fernandez Martinez, J.L., Garcia Gonzalo, E., 2008a. The generalized PSO: a new door to PSO evolution. Journal of Artificial Evolution and Applications. DOI:10.1155/2008/861275.
Optimization of China Crude Oil Transportation Network with Genetic Ant Colony Algorithm
Directory of Open Access Journals (Sweden)
Yao Wang
2015-08-01
Full Text Available Taking into consideration both shipping and pipeline transport, this paper first analysed the risk factors for different modes of crude oil import transportation. Then, based on the minimum of both transportation cost and overall risk, a multi-objective programming model was established to optimize the transportation network of crude oil import, and the genetic algorithm and ant colony algorithm were employed to solve the problem. The optimized result shows that VLCC (Very Large Crude Carrier is superior in long distance sea transportation, whereas pipeline transport is more secure than sea transport. Finally, this paper provides related safeguard suggestions on crude oil import transportation.
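A two-objective trade-off between transportation cost and overall risk, as in the model above, reduces, before any weighting or metaheuristic search, to keeping the non-dominated options. A minimal sketch on hypothetical (cost, risk) data:

```python
def pareto_front(options):
    """Return the non-dominated (cost, risk) options: those for which no other
    option is at least as good in both objectives and strictly better in one."""
    front = []
    for i, (c1, r1) in enumerate(options):
        dominated = any(
            c2 <= c1 and r2 <= r1 and (c2 < c1 or r2 < r1)
            for j, (c2, r2) in enumerate(options) if j != i
        )
        if not dominated:
            front.append((c1, r1))
    return front
```

A GA or ant colony search over routes would then select among (or evolve toward) members of this front according to the decision maker's preferences.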
Combinatorial Clustering Algorithm of Quantum-Behaved Particle Swarm Optimization and Cloud Model
Directory of Open Access Journals (Sweden)
Mi-Yuan Shan
2013-01-01
Full Text Available We propose a combinatorial clustering algorithm of the cloud model and quantum-behaved particle swarm optimization (COCQPSO) to solve the stochastic problem. The algorithm employs a novel probability model as well as a permutation-based local search method. We set the parameters of COCQPSO based on a design of experiments. In a comprehensive computational study, we scrutinize the performance of COCQPSO on a set of widely used benchmark instances. By benchmarking the combinatorial clustering algorithm against state-of-the-art algorithms, we show that its performance compares very favorably. The fuzzy combinatorial optimization algorithm of the cloud model and quantum-behaved particle swarm optimization (FCOCQPSO) in vague sets (IVSs) is more expressive than other fuzzy set formulations. Finally, numerical examples show the remarkable clustering effectiveness of the COCQPSO and FCOCQPSO algorithms.
Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao
2018-03-01
The receiver autonomous integrity monitoring (RAIM) system is one of the most important parts of an avionic navigation system. Two problems need to be addressed to improve it: the degeneracy phenomenon and the lack of samples in the standard particle filter (PF), whereby the number of samples cannot adequately express the real distribution of the probability density function (sample impoverishment). This study presents a GPS RAIM method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped onto the interval of the optimization variables to improve particle quality. This chaos perturbation overcomes the tendency of the search to become trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed method is better than that of RAIM based on the PF or PSO-PF algorithm.
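The chaotic perturbation can be sketched with a logistic map whose iterates are mapped onto the interval of the optimization variables. The choice mu = 4 (the fully chaotic regime) and the seed value are assumptions for illustration:

```python
def chaos_sequence(n, x0=0.7, mu=4.0):
    """Logistic map x_{k+1} = mu * x_k * (1 - x_k): a chaotic sequence in [0, 1]
    used as a deterministic pseudo-random perturbation source."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        xs.append(x)
    return xs

def map_to_interval(seq, lo, hi):
    """Map chaotic variables onto the interval of the optimization variables."""
    return [lo + (hi - lo) * x for x in seq]
```

In a CPSO-style filter, values from `map_to_interval` would displace degenerate particles, spreading them over the search interval instead of letting them collapse onto a local optimum.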
DEFF Research Database (Denmark)
Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael
2015-01-01
The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An...
Comparison of several algorithms of the electric force calculation in particle plasma models
International Nuclear Information System (INIS)
Lachnitt, J; Hrach, R
2014-01-01
This work is devoted to plasma modelling using the technique of molecular dynamics. The crucial problem of most such models is the efficient calculation of the electric force. This is usually solved by using the particle-in-cell (PIC) algorithm. However, PIC is an approximative algorithm, as it underestimates the short-range interactions of charged particles. We propose a hybrid algorithm which adds these interactions to PIC. We then include this algorithm in a set of algorithms which we test against each other in a two-dimensional collisionless magnetized plasma model. Besides our hybrid algorithm, this set includes two variants of pure PIC and the direct application of Coulomb's law. We compare particle forces, particle trajectories, total energy conservation and the speed of the algorithms. We find that the hybrid algorithm can be a good replacement for the direct application of Coulomb's law (quite accurate and much faster). It is, however, probably unnecessary to use it in practical 2D models.
Dynamic airspace configuration algorithms for next generation air transportation system
Wei, Jian
The National Airspace System (NAS) is under great pressure to safely and efficiently handle today's record-high air traffic volume, and will face an even greater challenge in keeping pace with the steady growth of air travel demand, which is projected to reach two to three times the current level by 2025. The inefficiency of traffic flow management initiatives causes severe airspace congestion and frequent flight delays, which cause billions of dollars in economic losses every year. To address the increasingly severe congestion and delays, the Next Generation Air Transportation System (NextGen) has been proposed to transform the current static and rigid radar-based system into a dynamic and flexible satellite-based system. New operational concepts such as Dynamic Airspace Configuration (DAC) have been under development to provide the flexibility required to mitigate demand-capacity imbalances and increase the throughput of the entire NAS. In this dissertation, we address the DAC problem in the en route and terminal airspace under the framework of NextGen. We develop a series of algorithms to facilitate the implementation of innovative DAC-related concepts in both the en route and terminal airspace. We also develop a performance evaluation framework for comprehensive benefit analyses of different aspects of future sector design algorithms. First, we complete a graph-based sectorization algorithm for DAC in the en route airspace, which models the underlying air route network with a weighted graph, converts the sectorization problem into a graph partition problem, partitions the weighted graph with an iterative spectral bipartition method, and constructs the sectors from the partitioned graph. The algorithm uses a graph model to accurately capture the complex traffic patterns of real flights, and generates sectors with high efficiency while evenly distributing the workload among them. We further improve
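One level of the spectral bipartition step can be sketched generically: split the weighted route graph by the Fiedler vector (the eigenvector of the second-smallest eigenvalue) of its Laplacian. This is a textbook illustration, not the dissertation's code:

```python
import numpy as np

def spectral_bisect(adj):
    """One level of recursive spectral bisection: partition a weighted graph
    by thresholding the Fiedler vector of its Laplacian at the median,
    which yields a balanced two-way split with a small edge cut."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian L = D - A
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]                    # second-smallest eigenvalue's vector
    return fiedler >= np.median(fiedler)    # boolean side assignment per node
```

Applying this recursively to each side, with edge weights encoding traffic coupling, yields the kind of sectorization the dissertation describes (each call balances workload while cutting weak inter-sector links).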
Numerical investigations for insulation particle transport phenomena in water flow
International Nuclear Information System (INIS)
Krepper, E.; Grahn, A.; Alt, S.; Kaestner, W.; Kratzsch, A.; Seeliger, A.
2005-01-01
The investigation of insulation debris generation, transport and sedimentation is gaining importance in reactor safety research for PWRs and BWRs, considering the long-term behaviour of emergency core cooling systems during all types of LOCA. The insulation debris released near the break during a LOCA consists of a mixture of particles that differ widely in size, shape, consistency and other properties. Some fraction of the released insulation debris will be transported into the reactor sump, where it may affect emergency core cooling. Open questions of generic interest are, e.g., the sedimentation of the insulation debris in a water pool, possible re-suspension, transport in the sump water flow, the particle load on strainers and the corresponding pressure difference. A joint research project in cooperation with the Institute of Process Technology, Process Automation and Measuring Technology (IPM) Zittau deals with the experimental investigation and the development of CFD models for the description of particle transport phenomena in coolant flow. While experiments are performed at IPM Zittau, theoretical work is concentrated at Forschungszentrum Rossendorf. In the present paper the basic concepts for CFD modelling are described and first results, including feasibility studies, are shown. Further results are expected from the ongoing work. (author)
Particle and heavy ion transport code system; PHITS
International Nuclear Information System (INIS)
Niita, Koji
2004-01-01
Intermediate- and high-energy nuclear data are strongly required in design studies of many facilities such as accelerator-driven systems and intense pulsed spallation neutron sources, as well as in medical and space technology. There are, however, few evaluated nuclear data for intermediate- and high-energy nuclear reactions. Therefore, we have to use models or systematics for the cross sections, which are essential ingredients of a high-energy particle and heavy ion transport code for estimating neutron yield, heat deposition and many other quantities of the transport phenomena in materials. We have developed a general-purpose particle and heavy ion transport Monte Carlo code system, PHITS (Particle and Heavy Ion Transport code System), based on the NMTC/JAM code, through the collaboration of Tohoku University, JAERI and RIST. PHITS has three important ingredients which enable us to calculate (1) high-energy nuclear reactions up to 200 GeV, (2) heavy ion collisions and their transport in material, and (3) low-energy neutron transport based on evaluated nuclear data. In PHITS, the cross sections of high-energy nuclear reactions are obtained with the JAM model. JAM (Jet AA Microscopic Transport Model) is a hadronic cascade model, which explicitly treats all established hadronic states, including resonances, with all hadron-hadron cross sections parametrized based on the resonance model and string model by fitting the available experimental data. PHITS can describe the transport of heavy ions and their collisions by making use of the JQMD and SPAR codes. JQMD (JAERI Quantum Molecular Dynamics) is a simulation code for nucleus-nucleus collisions based on molecular dynamics. The SPAR code is widely used to calculate the stopping powers and ranges of charged particles and heavy ions. PHITS has incorporated part of the MCNP4C code, by which the transport of low-energy neutrons, photons and electrons based on evaluated nuclear data can be described. Furthermore, the high energy nuclear
An improved particle filtering algorithm for aircraft engine gas-path fault diagnosis
Directory of Open Access Journals (Sweden)
Qihang Wang
2016-07-01
Full Text Available In this article, an improved particle filter with an electromagnetism-like mechanism algorithm is proposed for aircraft engine gas-path component abrupt fault diagnosis. In order to avoid the particle degeneracy and sample impoverishment of the normal particle filter, the electromagnetism-like mechanism optimization algorithm is introduced into the resampling procedure; it adjusts the positions of the particles by simulating the attraction-repulsion mechanism between charged particles in electromagnetism theory. The improved particle filter can solve the particle degradation problem and ensure the diversity of the particle set. Meanwhile, it enhances the ability to track abrupt faults because it considers the latest measurement information. Comparison of the proposed method with three different filter algorithms is carried out on a univariate nonstationary growth model. Simulations on a turbofan engine model indicate that, compared to the normal particle filter, the improved particle filter can complete the fault diagnosis within fewer sampling periods, and the root mean square error of parameter estimation is reduced.
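The resampling stage that the electromagnetism-like mechanism improves can be sketched generically: systematic resampling followed by a small "move" step that restores particle diversity. Here a Gaussian jitter stands in for the attraction-repulsion update; all names and values are illustrative, not the article's implementation:

```python
import random

def resample_move(particles, weights, jitter=0.05, seed=0):
    """Systematic resampling of a 1-D particle set, then a small perturbation
    of each copy. Without the move step, resampling duplicates high-weight
    particles verbatim, which causes sample impoverishment."""
    rng = random.Random(seed)
    n = len(particles)
    total = sum(weights)
    w = [wi / total for wi in weights]      # normalize weights
    u0 = rng.random() / n                   # single random offset
    out, cum, j = [], w[0], 0
    for i in range(n):
        u = u0 + i / n                      # evenly spaced sampling points
        while u > cum and j < n - 1:
            j += 1
            cum += w[j]
        out.append(particles[j] + rng.gauss(0.0, jitter))  # move step
    return out
```

In the article's scheme, the jitter line would be replaced by the electromagnetism-like attraction-repulsion update, which moves particles toward high-likelihood regions rather than perturbing them blindly.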
A study on the particle penetration in RMS' particle transport system
International Nuclear Information System (INIS)
Son, S. M.; Oh, S. H.; Choi, C. R.
2014-01-01
In nuclear facilities, a radiation monitoring system (RMS) monitors the exhaust gas containing radioactive material. Samples of exhaust gas are collected in the downstream region of the air cleaning units (ACUs) in order to examine radioactive materials. It is possible to predict the amount of radioactive material by analyzing the collected samples. Representativeness of the collected samples should be assured in order to accurately sense and measure radioactive materials. The radius of curvature is usually 5 times the tube diameter. Sometimes, a booster fan is added to enhance the particle penetration rate... In this study, particle penetration is calculated to evaluate the penetration rate for various design parameters (tube lengths, tube inclination angles, radii of curvature, etc.). The particle penetration rates have been calculated for several elements in the particle transport system. In general, the horizontal length of the tube and the number of bends have a big impact on the penetration rate in the particle transport system. If the sampling location is far from the radiation monitoring system, the additional installation of booster fans can be considered in the case of large-diameter tubes, but is not recommended for small-diameter tubes. In order to enhance the particle penetration rate, the following measures are recommended, in order of priority: 1) reduce the distance between the sampling location and the radiation monitoring system; 2) reduce the number of bends in the tube.
Max–min Bin Packing Algorithm and its application in nano-particles filling
International Nuclear Information System (INIS)
Zhu, Dingju
2016-01-01
With regard to existing bin packing algorithms, higher packing efficiency often comes at the cost of lower packing speed, and vice versa. The packing speed and packing efficiency of existing bin packing algorithms, including NFD, NF, FF, FFD, BF and BFD, correlate negatively with each other, so existing bin packing algorithms fail to satisfy the demand of nano-particle filling for both high speed and high efficiency. This paper provides a new bin packing algorithm, the Max-min Bin Packing Algorithm (MM), which realizes both high packing speed and high packing efficiency. MM has the same packing speed as NFD (whose packing speed ranks first among existing bin packing algorithms); when the size repetition rate of the objects to be packed is over 5, MM realizes almost the same packing efficiency as BFD (whose packing efficiency ranks first among existing bin packing algorithms), and when the size repetition rate is over 500, MM achieves exactly the same packing efficiency as BFD. In the application of nano-particle filling, the size repetition rate of the nano-particles to be packed is usually in the thousands or ten thousands, far higher than 5 or 500. Consequently, in nano-particle filling, the packing efficiency of MM is exactly equal to that of BFD. The irreconcilable conflict between packing speed and packing efficiency is thus successfully removed by MM, which gives MM better packing performance than any existing bin packing algorithm. In practice, there are few cases in which the size repetition rate of the objects to be packed is lower than 5. The MM algorithm is therefore not limited to nano-particle filling and can also be widely used in other applications. In particular, MM has significant value in nano-particle filling applications such as nano printing and nano tooth filling.
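For reference, the FFD/BFD family that MM is benchmarked against works as follows; a minimal First-Fit Decreasing sketch (the generic textbook algorithm, not the paper's MM algorithm):

```python
def first_fit_decreasing(sizes, capacity):
    """First-Fit Decreasing: sort items large-to-small, place each item into
    the first open bin with enough room, opening a new bin when none fits.
    High packing efficiency, but the per-item scan over bins costs speed."""
    bins = []
    for s in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:                      # no existing bin fits: open a new one
            bins.append([s])
    return bins
```

The inner scan over existing bins is exactly the efficiency-versus-speed trade-off the paper discusses: NFD skips it (fast, wasteful), while FFD/BFD pay for it with extra comparisons per item.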
Modelling of neutral particle transport in divertor plasma
International Nuclear Information System (INIS)
Kakizuka, Tomonori; Shimizu, Katsuhiro
1995-01-01
An outline of the modelling of neutral particle transport in the divertor plasma is given in this paper. The characteristic properties of the divertor plasma are largely affected by the interaction between neutral particles and the divertor plasma. Accordingly, the behaviour of neutral particles should be investigated quantitatively, and plasma and neutral gas should be traced self-consistently in the plasma simulation. The transport modelling approaches are Monte Carlo modelling and neutral gas fluid modelling. The former needs long calculation times, but it allows detailed modelling of the physical processes; an ultra-large parallel computer is well suited to it. Despite several proposed models, the latter has not been established. From the viewpoint of reducing calculation time, a workstation is adequate for simulations of the latter kind, although some physical problems have not been solved. For Monte Carlo particle modelling, reducing the calculation time and introducing the interaction of particles are important subjects in developing 'the evolutional Monte Carlo method'. To reduce the calculation time, two new methods, the 'implicit Monte Carlo method' and the 'free- and diffusive-motion hybrid Monte Carlo method', are being developed. (S.Y.)
Drift-induced perpendicular transport of solar energetic particles
International Nuclear Information System (INIS)
Marsh, M. S.; Dalla, S.; Kelly, J.; Laitinen, T.
2013-01-01
Drifts are known to play a role in galactic cosmic ray transport within the heliosphere and are a standard component of cosmic ray propagation models. However, the current paradigm of solar energetic particle (SEP) propagation holds the effects of drifts to be negligible, and they are not accounted for in most current SEP modeling efforts. We present full-orbit test particle simulations of SEP propagation in a Parker spiral interplanetary magnetic field (IMF), which demonstrate that high-energy particle drifts cause significant asymmetric propagation perpendicular to the IMF. Thus in many cases the assumption of field-aligned propagation of SEPs may not be valid. We show that SEP drifts have dependencies on energy, heliographic latitude, and charge-to-mass ratio that are capable of transporting energetic particles perpendicular to the field over significant distances within interplanetary space, e.g., protons of initial energy 100 MeV propagate distances across the field on the order of 1 AU, over timescales typical of a gradual SEP event. Our results demonstrate the need for current models of SEP events to include the effects of particle drift. We show that the drift is considerably stronger for heavy ion SEPs due to their larger mass-to-charge ratio. This paradigm shift has important consequences for the modeling of SEP events and is crucial to the understanding and interpretation of in situ observations
Novotny, M.A.; Watanabe, Hiroshi; Ito, Nobuyasu
2010-01-01
The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented to confirm the theoretical efficiencies.
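The rejection-free method analyzed in the abstract replaces trial-and-reject moves with a direct choice among all possible events, weighted by their rates (the n-fold way / BKL scheme); a minimal sketch:

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free (BKL / n-fold way) kinetic Monte Carlo step:
    select event i with probability rates[i]/R, where R = sum(rates),
    and advance time by an exponentially distributed increment Exp(R)."""
    R = sum(rates)
    u = rng.random() * R
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if u < acc:
            break
    dt = -math.log(rng.random()) / R
    return i, dt
```

Every step performs an event, so no computation is wasted on rejections; the cost moves into event selection, which is precisely where absorbing-Markov-chain variants optimize further.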
An Improved Particle Swarm Optimization Algorithm and Its Application in the Community Division
Directory of Open Access Journals (Sweden)
Jiang Hao
2016-01-01
As research on complex networks deepens, methods for detecting and classifying communities in social networks are proliferating. In this paper, the basic particle swarm algorithm is improved on the basis of the GN algorithm, with modularity taken as the measure of community division [1]. For dynamic network community division, a rolling calculation method is put forward. Experiments show that the improved particle swarm optimization algorithm improves the accuracy of community division and also attains higher modularity values in dynamic communities.
Couceiro, Micael
2015-01-01
This book examines the bottom-up applicability of swarm intelligence to solving multiple problems, such as curve fitting, image segmentation, and swarm robotics. It compares the capabilities of some of the better-known bio-inspired optimization approaches, especially Particle Swarm Optimization (PSO), Darwinian Particle Swarm Optimization (DPSO) and the recently proposed Fractional Order Darwinian Particle Swarm Optimization (FODPSO), and comprehensively discusses their advantages and disadvantages. Further, it demonstrates the superiority and key advantages of using the FODPSO algorithm, suc
Energy Technology Data Exchange (ETDEWEB)
Lasuik, J.; Shalchi, A., E-mail: andreasm4@yahoo.com [Department of Physics and Astronomy, University of Manitoba, Winnipeg, MB R3T 2N2 (Canada)
2017-09-20
Recently, a new theory for the transport of energetic particles across a mean magnetic field was presented. Compared to other nonlinear theories, the new approach has the advantage that it provides a fully time-dependent description of the transport; furthermore, a diffusion approximation is no longer part of the theory. The purpose of this paper is to combine this new approach with a time-dependent model for parallel transport and different turbulence configurations in order to explore the parameter regimes for which we get ballistic transport, compound subdiffusion, and normal Markovian diffusion.
Application of particle swarm optimization algorithm in the heating system planning problem.
Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi
2013-01-01
Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system over a given life cycle time. To address the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the PSO algorithm. Moreover, the results also show the potential to provide useful information for decisions in the practical planning process. If applied correctly and in combination with other elements, this approach can therefore become a powerful and effective optimization tool for the HSP problem.
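The canonical PSO loop that such papers start from can be sketched compactly; this is a generic implementation, not the paper's IPSO, and the inertia weight `w` and acceleration coefficients `c1`, `c2` are illustrative values:

```python
import random

def pso(f, dim, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f over [lo, hi]^dim with a basic particle swarm."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                      # personal best positions
    pf = [f(x) for x in X]                     # personal best values
    g = P[min(range(n), key=lambda i: pf[i])][:]
    gf = min(pf)                               # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # clamp to bounds
            fx = f(X[i])
            if fx < pf[i]:
                pf[i], P[i] = fx, X[i][:]
                if fx < gf:
                    gf, g = fx, X[i][:]
    return g, gf
```

Problem-specific improvements of the kind the abstract mentions typically modify the inertia schedule, the velocity update, or the handling of constraints, while keeping this overall loop.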
Explicit symplectic algorithms based on generating functions for charged particle dynamics
Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan
2016-07-01
The dynamics of a charged particle in canonical coordinates is a Hamiltonian system, and symplectic algorithms have become the de facto standard for numerical integration of Hamiltonian systems due to their long-term accuracy and fidelity. For efficient long-term simulations, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method with a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to produce explicit symplectic algorithms for product-separable Hamiltonians of the form H(x, p) = p_i f(x) or H(x, p) = x_i g(p). Applied to simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superior conservation properties and efficiency.
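For the sum-separable case the abstract contrasts with, H(x, p) = p^2/2 + V(x), the standard second-order explicit symplectic scheme is the Strang-split leapfrog (velocity Verlet); a minimal sketch (the paper's generating-function construction for product-separable Hamiltonians is not reproduced here):

```python
def leapfrog(x, p, force, dt, steps):
    """Second-order explicit symplectic integrator for H = p^2/2 + V(x),
    with force(x) = -V'(x). Splitting: half kick, full drift, half kick."""
    for _ in range(steps):
        p += 0.5 * dt * force(x)   # half kick (momentum update)
        x += dt * p                # full drift (position update)
        p += 0.5 * dt * force(x)   # half kick
    return x, p
```

The hallmark of such schemes, and the reason the paper seeks explicit symplectic methods for charged particles, is bounded long-term energy error: for a harmonic oscillator the energy oscillates near its initial value instead of drifting.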
Directory of Open Access Journals (Sweden)
Xun Zhang
2014-01-01
Optimal sensor placement is a key issue in the structural health monitoring of large-scale structures. However, some aspects of existing approaches require improvement, such as the empirical and unreliable selection of mode and sensor numbers and time-consuming computation. A novel improved particle swarm optimization (IPSO) algorithm is proposed to address these problems. The approach first employs the cumulative effective modal mass participation ratio to select the mode number. Three strategies are then adopted to improve the PSO algorithm. Finally, the IPSO algorithm is used to determine the optimal sensor number and configuration. A case study of a latticed shell model is implemented to verify the feasibility of the proposed algorithm against four other PSO algorithms, with the effective independence method as a contrast experiment. The comparison results show that the optimal placement schemes obtained by the PSO algorithms are valid, and that the proposed IPSO algorithm offers better convergence speed and precision.
Wang, Xingmei; Hao, Wenqian; Li, Qiming
2017-12-18
This paper proposes an adaptive cultural algorithm with improved quantum-behaved particle swarm optimization (ACA-IQPSO) to detect underwater sonar images. In the population space, to improve the searching ability of particles, the iteration count and the fitness values of particles are used to adaptively adjust the contraction-expansion coefficient of the quantum-behaved particle swarm optimization algorithm (QPSO). The improved quantum-behaved particle swarm optimization algorithm (IQPSO) lets particles adjust their behaviour according to their quality. In the belief space, a new update strategy is adopted to update cultural individuals, following the idea of the update strategy in the shuffled frog leaping algorithm (SFLA). Moreover, to enhance the utilization of information in the population space and belief space, the accept function and influence function are redesigned in the new communication protocol. The experimental results show that ACA-IQPSO can obtain good cluster centres from the grey-level distribution of underwater sonar images and accurately complete underwater object detection. Compared with other algorithms, the proposed ACA-IQPSO shows good effectiveness, excellent adaptability, a powerful searching ability and high convergence efficiency. The experimental results on the benchmark functions further demonstrate that ACA-IQPSO has better searching ability, convergence efficiency and stability.
The Random Ray Method for neutral particle transport
Energy Technology Data Exchange (ETDEWEB)
Tramm, John R., E-mail: jtramm@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science Engineering, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Argonne National Laboratory, Mathematics and Computer Science Department 9700 S Cass Ave, Argonne, IL 60439 (United States); Smith, Kord S., E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science Engineering, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Forget, Benoit, E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science Engineering, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Siegel, Andrew R., E-mail: siegela@mcs.anl.gov [Argonne National Laboratory, Mathematics and Computer Science Department 9700 S Cass Ave, Argonne, IL 60439 (United States)
2017-08-01
A new approach to solving partial differential equations (PDEs) based on the method of characteristics (MOC) is presented. The Random Ray Method (TRRM) uses a stochastic rather than deterministic discretization of characteristic tracks to integrate the phase space of a problem. TRRM is potentially applicable in a number of transport simulation fields where long characteristic methods are used, such as neutron transport and gamma ray transport in reactor physics as well as radiative transfer in astrophysics. In this study, TRRM is developed and then tested on a series of exemplar reactor physics benchmark problems. The results show extreme improvements in memory efficiency compared to deterministic MOC methods, while also reducing algorithmic complexity, allowing for a sparser computational grid to be used while maintaining accuracy.
The Random Ray Method for neutral particle transport
International Nuclear Information System (INIS)
Tramm, John R.; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.
2017-01-01
A new approach to solving partial differential equations (PDEs) based on the method of characteristics (MOC) is presented. The Random Ray Method (TRRM) uses a stochastic rather than deterministic discretization of characteristic tracks to integrate the phase space of a problem. TRRM is potentially applicable in a number of transport simulation fields where long characteristic methods are used, such as neutron transport and gamma ray transport in reactor physics as well as radiative transfer in astrophysics. In this study, TRRM is developed and then tested on a series of exemplar reactor physics benchmark problems. The results show extreme improvements in memory efficiency compared to deterministic MOC methods, while also reducing algorithmic complexity, allowing for a sparser computational grid to be used while maintaining accuracy.
Recent advances in neutral particle transport methods and codes
International Nuclear Information System (INIS)
Azmy, Y.Y.
1996-01-01
An overview of ORNL's three-dimensional neutral particle transport code, TORT, is presented. Special features that make the code invaluable for large applications are summarized for the prospective user. Advanced capabilities currently under development and installation in the production release of TORT are discussed; they include multitasking on Cray platforms running the UNICOS operating system, the Adjacent-cell Preconditioning acceleration scheme, and graphics codes for displaying computed quantities such as the flux. Further developments for TORT and its companion codes, to enhance its present capabilities and expand its range of applications, are also discussed. The next generation of neutral particle transport codes at ORNL, especially regarding unstructured grids and high-order spatial approximations, is also considered.
Dust particle diffusion in ion beam transport region
Energy Technology Data Exchange (ETDEWEB)
Miyamoto, N.; Okajima, Y.; Romero, C. F.; Kuwata, Y.; Kasuya, T.; Wada, M., E-mail: mwada@mail.doshisha.ac.jp [Graduate school of Science and Engineering, Doshisha University, Kyotanabe, Kyoto 610-0321 (Japan)
2016-02-15
Dust particles of μm size produced by a monoplasmatron ion source are observed by laser light scattering. The scattered light signal from an incident laser at 532 nm wavelength indicates when and where a particle passes through the ion beam transport region. As a result, dust particles larger than 10 μm are found to be distributed in the center of the ion beam, while those smaller than 10 μm are distributed along the edge of the beam. The floating potential and electron temperature in the beam transport region are measured with an electrostatic probe. The observation can be explained by a charge-up model of the dust in the plasma boundary region.
Kinetic-Monte-Carlo-Based Parallel Evolution Simulation Algorithm of Dust Particles
Directory of Open Access Journals (Sweden)
Xiaomei Hu
2014-01-01
The evolution simulation of dust particles provides an important way to analyze the impact of dust on the environment. A KMC-based parallel algorithm is proposed to simulate the evolution of dust particles. In the parallel evolution simulation algorithm, a data distribution scheme and a communication optimization strategy are introduced to balance the load across processes and reduce the communication cost among them. The experimental results show that the simulation of diffusion, sedimentation, and resuspension of dust particles in a virtual campus is realized, and that the simulation time is shortened by the parallel algorithm, which makes up for the shortcomings of serial computing and makes simulation of large-scale virtual environments possible.
A hand tracking algorithm with particle filter and improved GVF snake model
Sun, Yi-qi; Wu, Ai-guo; Dong, Na; Shao, Yi-zhe
2017-07-01
To solve the problem that an accurate hand contour cannot be obtained by the particle filter alone, a hand tracking algorithm is proposed that combines a particle filter with a skin-color-adaptive gradient vector flow (GVF) snake model. An adaptive GVF and a skin-color-adaptive external guidance force are introduced into the traditional GVF snake model, guiding the curve to converge quickly to the deep concave regions of the hand contour and capturing the complex hand contour accurately. The algorithm corrects the particle filter parameters in real time, avoiding particle drift. Experimental results show that the proposed algorithm reduces the root mean square error of hand tracking by 53% and improves tracking accuracy against complex, moving backgrounds, even under large occlusions.
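The particle filter underlying such trackers can be illustrated with a minimal 1-D bootstrap filter. This is a generic sketch with an assumed random-walk motion model and Gaussian observation noise, not the paper's hand tracker:

```python
import bisect
import math
import random

def bootstrap_filter(obs, n=500, q=0.5, r=0.5, seed=3):
    """Minimal 1-D bootstrap particle filter: predict with random-walk
    noise (std q), weight by Gaussian likelihood (std r), then resample."""
    rng = random.Random(seed)
    xs = [obs[0] + rng.gauss(0, 1.0) for _ in range(n)]   # initial particles
    estimates = []
    for z in obs:
        xs = [x + rng.gauss(0, q) for x in xs]            # predict step
        ws = [math.exp(-0.5 * ((z - x) / r) ** 2) for x in xs]
        tot = sum(ws)
        ws = [w / tot for w in ws]                        # normalize weights
        estimates.append(sum(w * x for w, x in zip(ws, xs)))
        cdf, acc = [], 0.0                                # multinomial resampling
        for w in ws:
            acc += w
            cdf.append(acc)
        xs = [xs[min(bisect.bisect(cdf, rng.random()), n - 1)] for _ in range(n)]
    return estimates
```

The "parameter correction" and snake-model coupling described in the abstract would replace the fixed `q` and `r` here with values adapted from the extracted contour at each frame.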
Particle transport in JET and TCV-H mode plasmas
International Nuclear Information System (INIS)
Maslov, M.
2009-10-01
Understanding particle transport physics is of great importance for magnetically confined plasma devices and for the development of thermonuclear fusion power for energy production. From the beginnings of fusion research, more than half a century ago, the problem of heat transport in tokamaks attracted the attention of researchers, but particle transport phenomena were largely neglected until fairly recently. As tokamak physics advanced to its present level, the community realized that there are many hurdles to the development of fusion power beyond energy confinement, and particle transport is one of the outstanding issues. The aim of this thesis is to study anomalous (turbulence-driven) particle transport in tokamaks on the basis of experiments on two devices: JET (Joint European Torus) and TCV (Tokamak à Configuration Variable). In particular, the physics of particle inward convection (pinch), which causes the formation of peaked density profiles, is addressed. Density profile peaking has a direct, favorable effect on fusion power in a reactor; we therefore also propose an extrapolation to the international experimental reactor ITER, which is currently under construction. A comprehensive experimental database was created from data collected on JET and TCV during the thesis. Improvements in density profile measurement techniques and careful analysis of the experimental data allowed us to derive the dependencies of the density profile shape on the relevant plasma parameters; these improved techniques also allowed us to dispel doubts that had been voiced about previous results. The major conclusions from previous work on JET and other tokamaks were generally confirmed, with some minor supplements. The main novelty of the thesis resides in systematic tests of the predictions of linear gyrokinetic simulations of the ITG (Ion Temperature Gradient) mode against the
Particle and heavy ion transport code system, PHITS, version 2.52
International Nuclear Information System (INIS)
Sato, Tatsuhiko; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Noda, Shusaku; Ogawa, Tatsuhiko; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Niita, Koji; Iwase, Hiroshi; Chiba, Satoshi; Furuta, Takuya; Sihver, Lembit
2013-01-01
An upgraded version of the Particle and Heavy Ion Transport code System, PHITS2.52, was developed and released to the public. The new version has been greatly improved from the previously released version, PHITS2.24, in terms of not only the code itself but also the contents of its package, such as the attached data libraries. In the new version, a higher accuracy of simulation was achieved by implementing several latest nuclear reaction models. The reliability of the simulation was improved by modifying both the algorithms for the electron-, positron-, and photon-transport simulations and the procedure for calculating the statistical uncertainties of the tally results. Estimation of the time evolution of radioactivity became feasible by incorporating the activation calculation program DCHAIN-SP into the new package. The efficiency of the simulation was also improved as a result of the implementation of shared-memory parallelization and the optimization of several time-consuming algorithms. Furthermore, a number of new user-support tools and functions that help users to intuitively and effectively perform PHITS simulations were developed and incorporated. Due to these improvements, PHITS is now a more powerful tool for particle transport simulation applicable to various research and development fields, such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. (author)
Gyrokinetics Simulation of Energetic Particle Turbulence and Transport
Energy Technology Data Exchange (ETDEWEB)
Diamond, Patrick H.
2011-09-21
Progress in research during this year elucidated the physics of precession resonance and its interaction with radial scattering to form phase space density granulations. Momentum theorems for drift wave-zonal flow systems involving precession resonance were derived. These are directly generalizable to energetic particle modes. A novel nonlinear, subcritical growth mechanism was identified, which has now been verified by simulation. These results strengthen the foundation of our understanding of transport in burning plasmas
Fluid description of particle transport in hf heated magnetized plasma
International Nuclear Information System (INIS)
Klima, R.
1980-01-01
Particle fluxes averaged over high-frequency oscillations are analyzed. The collisional effects and the kinetic mechanisms of energy absorption are included. Spatial dependences of both the high-frequency and the (quasi-)steady electromagnetic fields are arbitrary. The equations governing the fluxes are deduced from the moments of the averaged kinetic equation. Explicit expressions for steady state fluxes are given in terms of electromagnetic field quantities. The results can also be applied to anomalous transport phenomena in weakly turbulent plasmas. (author)
Gyrokinetics Simulation of Energetic Particle Turbulence and Transport
International Nuclear Information System (INIS)
Diamond, Patrick H.
2011-01-01
Progress in research during this year elucidated the physics of precession resonance and its interaction with radial scattering to form phase space density granulations. Momentum theorems for drift wave-zonal flow systems involving precession resonance were derived. These are directly generalizable to energetic particle modes. A novel nonlinear, subcritical growth mechanism was identified, which has now been verified by simulation. These results strengthen the foundation of our understanding of transport in burning plasmas
International Nuclear Information System (INIS)
Niu Lili; Qian Ming; Yu Wentao; Jin Qiaofeng; Ling Tao; Zheng Hairong; Wan Kun; Gao Shen
2010-01-01
This paper presents a new algorithm for ultrasonic particle image velocimetry (Echo PIV) that improves flow velocity measurement accuracy and efficiency in regions with high velocity gradients. The conventional Echo PIV algorithm is modified by incorporating a multiple-iteration algorithm, a sub-pixel method, filtering and interpolation, and a spurious vector elimination algorithm. The new algorithm's performance is assessed by analyzing simulated images with known displacements, and ultrasonic B-mode images of in vitro laminar pipe flow, rotational flow and in vivo rat carotid arterial flow. Results on the simulated images show that the new algorithm produces much smaller bias from the known displacements. For laminar flow, the new algorithm deviates 1.1% from the analytically derived value, versus 8.8% for the conventional algorithm. The vector quality evaluation for the rotational flow imaging shows that the new algorithm produces better velocity vectors. For in vivo rat carotid arterial flow imaging, results from the new algorithm deviate from the Doppler-measured peak velocities by 6.6% on average, compared with 15% for the conventional algorithm. The new Echo PIV algorithm thus effectively improves measurement accuracy when imaging flow fields with high velocity gradients.
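The core of any PIV algorithm, including the ultrasonic variant described above, is locating the cross-correlation peak between an interrogation window and a search region; a minimal integer-shift sketch (sub-pixel refinement and the paper's iterative improvements omitted):

```python
def xcorr_peak(win, img):
    """Return the (dy, dx) shift maximizing the cross-correlation of
    window `win` inside the larger image `img` (both lists of lists)."""
    h, w = len(win), len(win[0])
    best, shift = None, (0, 0)
    for dy in range(len(img) - h + 1):
        for dx in range(len(img[0]) - w + 1):
            # raw correlation score of the window at this offset
            s = sum(win[i][j] * img[i + dy][j + dx]
                    for i in range(h) for j in range(w))
            if best is None or s > best:
                best, shift = s, (dy, dx)
    return shift
```

In PIV the window comes from the first frame and the search region from the second; the peak shift divided by the frame interval gives the local velocity, which is why sub-pixel peak interpolation (as in the paper) directly improves velocity resolution.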
Weighted-delta-tracking for Monte Carlo particle transport
International Nuclear Information System (INIS)
Morgan, L.W.G.; Kotlyar, D.
2015-01-01
Highlights: • This paper presents an alteration to the Monte Carlo Woodcock tracking technique. • The alteration improves computational efficiency within regions of high absorbers. • The rejection technique is replaced by a statistical weighting mechanism. • The modified Woodcock method is shown to be faster than standard Woodcock tracking. • The modified Woodcock method achieves a lower variance, given a specified accuracy. - Abstract: Monte Carlo particle transport (MCPT) codes are powerful and versatile tools for simulating particle behavior in a multitude of scenarios, such as core and criticality studies, radiation protection, shielding, medicine and fusion research, to name just a small subset of applications. However, MCPT codes can be very computationally expensive to run when the model geometry contains large attenuation depths and/or many components. This paper proposes a simple modification to the Woodcock tracking method used by some Monte Carlo particle transport codes. The Woodcock method uses rejection sampling of virtual collisions to avoid collision distance sampling at material boundaries, but it suffers from poor computational efficiency when the sample acceptance rate is low. The proposed method replaces the rejection sampling in the Woodcock method with a statistical weighting scheme, which improves the computational efficiency of a Monte Carlo particle tracking code. It is shown that the modified Woodcock method is less computationally expensive than standard ray-tracing and rejection-based Woodcock tracking methods and achieves a lower variance, given a specified accuracy.
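The idea can be illustrated on a 1-D purely absorbing slab with a spatially varying cross section: standard Woodcock (delta) tracking rejects "virtual" collisions probabilistically, while a weighted variant keeps every particle alive and folds the rejection probability into a statistical weight. This is a simplified illustration of the principle, not the paper's exact scheme:

```python
import math
import random

def woodcock(sigma, sigma_maj, L, n, rng):
    """Analog delta-tracking: transmission through an absorbing slab [0, L]."""
    hits = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += -math.log(rng.random()) / sigma_maj   # majorant free path
            if x >= L:
                hits += 1
                break
            if rng.random() < sigma(x) / sigma_maj:    # real collision
                break                                  # absorbed
    return hits / n

def weighted_woodcock(sigma, sigma_maj, L, n, rng):
    """Weighted variant: no rejection step; each tentative collision just
    multiplies the weight by the survival probability 1 - sigma/sigma_maj."""
    total = 0.0
    for _ in range(n):
        x, w = 0.0, 1.0
        while x < L:
            x += -math.log(rng.random()) / sigma_maj
            if x < L:
                w *= 1.0 - sigma(x) / sigma_maj
        total += w
    return total / n
```

Both estimators converge to exp(-∫σ(x) dx); the weighted form avoids discarding histories when the majorant is much larger than the local cross section, which is the regime of "high absorbers" the highlights refer to.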
Transport of Particle Swarms Through Variable Aperture Fractures
Boomsma, E.; Pyrak-Nolte, L. J.
2012-12-01
Particle transport through fractured rock is a key concern with the increased use of micro- and nano-size particles in consumer products as well as from other activities in the sub- and near surface (e.g. mining, industrial waste, hydraulic fracturing). While particle transport is often studied as the transport of emulsions or dispersions, particles may also enter the subsurface from leaks or seepage that lead to particle swarms. Swarms are drop-like collections of millions of colloidal-sized particles that exhibit a number of unique characteristics compared to dispersions and emulsions. Any contaminant or engineered particle that forms a swarm can be transported farther, faster, and more cohesively in fractures than would be expected from a traditional dispersion model. In this study, the effects of several variable-aperture fractures on colloidal swarm cohesiveness and evolution were studied as a swarm fell under gravity and interacted with the fracture walls. Transparent acrylic was used to fabricate synthetic fracture samples with (1) a uniform aperture, (2) a converging region followed by a uniform region (funnel shaped), (3) a uniform region followed by a diverging region (inverted funnel), and (4) a cast of an induced fracture from a carbonate rock. All of the samples consisted of two blocks measuring 100 x 100 x 50 mm; the minimum separation between the blocks determined the nominal aperture (0.5 mm to 20 mm). During experiments a fracture was fully submerged in water and swarms were released into it. The swarms consisted of a dilute suspension of 3 micron polystyrene fluorescent beads (1% by mass) with an initial volume of 5 μL. The swarms were illuminated with a green (525 nm) LED array and imaged optically with a CCD camera. The variation in fracture aperture controlled swarm behavior: diverging apertures caused a sudden loss of confinement that resulted in a rapid change in the swarm's shape as well as a sharp increase in its velocity.
Particle Tracking Model and Abstraction of Transport Processes
Energy Technology Data Exchange (ETDEWEB)
B. Robinson
2004-10-21
The purpose of this report is to document the abstraction model being used in total system performance assessment (TSPA) model calculations for radionuclide transport in the unsaturated zone (UZ). The UZ transport abstraction model uses the particle-tracking method that is incorporated into the finite element heat and mass model (FEHM) computer code (Zyvoloski et al. 1997 [DIRS 100615]) to simulate radionuclide transport in the UZ. This report outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the UZ at Yucca Mountain. In addition, methods for determining and inputting transport parameters are outlined for use in the TSPA for license application (LA) analyses. Process-level transport model calculations are documented in another report for the UZ (BSC 2004 [DIRS 164500]). Three-dimensional, dual-permeability flow fields generated to characterize UZ flow (documented by BSC 2004 [DIRS 169861]; DTN: LB03023DSSCP9I.001 [DIRS 163044]) are converted to make them compatible with the FEHM code for use in this abstraction model. This report establishes the numerical method and demonstrates the use of the model that is intended to represent UZ transport in the TSPA-LA. Capability of the UZ barrier for retarding the transport is demonstrated in this report, and by the underlying process model (BSC 2004 [DIRS 164500]). The technical scope, content, and management of this report are described in the planning document ''Technical Work Plan for: Unsaturated Zone Transport Model Report Integration'' (BSC 2004 [DIRS 171282]). Deviations from the technical work plan (TWP) are noted within the text of this report, as appropriate. The latest version of this document is being prepared principally to correct parameter values found to be in error due to transcription errors, changes in source data that were not captured in the report, calculation errors, and errors in interpretation of source data.
Particle Tracking Model and Abstraction of Transport Processes
International Nuclear Information System (INIS)
Robinson, B.
2004-01-01
The purpose of this report is to document the abstraction model being used in total system performance assessment (TSPA) model calculations for radionuclide transport in the unsaturated zone (UZ). The UZ transport abstraction model uses the particle-tracking method that is incorporated into the finite element heat and mass model (FEHM) computer code (Zyvoloski et al. 1997 [DIRS 100615]) to simulate radionuclide transport in the UZ. This report outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the UZ at Yucca Mountain. In addition, methods for determining and inputting transport parameters are outlined for use in the TSPA for license application (LA) analyses. Process-level transport model calculations are documented in another report for the UZ (BSC 2004 [DIRS 164500]). Three-dimensional, dual-permeability flow fields generated to characterize UZ flow (documented by BSC 2004 [DIRS 169861]; DTN: LB03023DSSCP9I.001 [DIRS 163044]) are converted to make them compatible with the FEHM code for use in this abstraction model. This report establishes the numerical method and demonstrates the use of the model that is intended to represent UZ transport in the TSPA-LA. Capability of the UZ barrier for retarding the transport is demonstrated in this report, and by the underlying process model (BSC 2004 [DIRS 164500]). The technical scope, content, and management of this report are described in the planning document ''Technical Work Plan for: Unsaturated Zone Transport Model Report Integration'' (BSC 2004 [DIRS 171282]). Deviations from the technical work plan (TWP) are noted within the text of this report, as appropriate. The latest version of this document is being prepared principally to correct parameter values found to be in error due to transcription errors, changes in source data that were not captured in the report, calculation errors, and errors in interpretation of source data
A Global algorithm for linear radiosity
Sbert Cassasayas, Mateu; Pueyo Sánchez, Xavier
1993-01-01
A linear algorithm for radiosity is presented, linear in both time and storage. The new algorithm is based on previous work by the authors and on the well-known algorithms for progressive radiosity and Monte Carlo particle transport.
Flux-corrected transport principles, algorithms, and applications
Löhner, Rainald; Turek, Stefan
2012-01-01
Many modern high-resolution schemes for Computational Fluid Dynamics trace their origins to the Flux-Corrected Transport (FCT) paradigm. FCT maintains monotonicity using a nonoscillatory low-order scheme to determine the bounds for a constrained high-order approximation. This book begins with historical notes by J.P. Boris and D.L. Book who invented FCT in the early 1970s. The chapters that follow describe the design of fully multidimensional FCT algorithms for structured and unstructured grids, limiting for systems of conservation laws, and the use of FCT as an implicit subgrid scale model. The second edition presents 200 pages of additional material. The main highlights of the three new chapters include: FCT-constrained interpolation for Arbitrary Lagrangian-Eulerian methods, an optimization-based approach to flux correction, and FCT simulations of high-speed flows on overset grids. Addressing students and researchers, as well as CFD practitioners, the book is focused on computational aspects and contains m...
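The FCT idea described above (a monotone low-order scheme bounding a limited high-order correction) can be sketched for 1D linear advection. This is a minimal illustration with an upwind low-order flux, a Lax-Wendroff high-order flux, and a Zalesak-style limiter; the book's fully multidimensional algorithms are far more general.

```python
import numpy as np

def fct_advect_1d(u, c):
    """One FCT step for 1D linear advection on a periodic grid, Courant number c in (0, 1].

    Low-order: upwind. High-order: Lax-Wendroff. The antidiffusive flux
    (their difference) is limited so the corrected solution stays within the
    local bounds set by the low-order solution (Zalesak-style limiter).
    Face i sits between cells i-1 and i.
    """
    um = np.roll(u, 1)                                 # u[i-1]
    f_low = c * um                                     # upwind face flux
    f_high = c * um + 0.5 * c * (1 - c) * (u - um)     # Lax-Wendroff face flux
    a = f_high - f_low                                 # raw antidiffusive flux

    # Monotone low-order update: u[i] - (F[i+1] - F[i])
    utd = u - (np.roll(f_low, -1) - f_low)

    # Allowed bounds from the low-order solution and its neighbours
    umax = np.maximum.reduce([np.roll(utd, 1), utd, np.roll(utd, -1)])
    umin = np.minimum.reduce([np.roll(utd, 1), utd, np.roll(utd, -1)])

    # Total raw antidiffusive inflow/outflow for each cell
    ain = np.maximum(a, 0) - np.minimum(np.roll(a, -1), 0)
    aout = np.maximum(np.roll(a, -1), 0) - np.minimum(a, 0)
    eps = 1e-15
    rp = np.minimum(1.0, (umax - utd) / (ain + eps))   # inflow capacity
    rm = np.minimum(1.0, (utd - umin) / (aout + eps))  # outflow capacity

    # Each face flux is limited by the donor and receiver cells it connects
    cface = np.where(a >= 0,
                     np.minimum(rp, np.roll(rm, 1)),
                     np.minimum(np.roll(rp, 1), rm))
    a_lim = cface * a
    return utd - (np.roll(a_lim, -1) - a_lim)
```

Advecting a step profile with this routine keeps the solution conservative and free of new extrema, which is precisely the monotonicity property FCT was designed to guarantee.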
Characterization of molecule and particle transport through nanoscale conduits
Alibakhshi, Mohammad Amin
Nanofluidic devices have been of great interest due to their applications in a variety of fields, including energy conversion and storage, water desalination, biological and chemical separations, and lab-on-a-chip devices. Although these applications cross the boundaries of many different disciplines, they all share the demand for understanding transport in nanoscale conduits. In this thesis, different elusive aspects of molecule and particle transport through nanofluidic conduits are investigated, including liquid and ion transport in nanochannels, diffusion- and reaction-governed enzyme transport in nanofluidic channels, and finally translocation of nanobeads through nanopores. Liquid or solvent transport through nanoconfinements is an essential yet barely characterized component of any nanofluidic system. In the first chapter, water transport through single hydrophilic nanochannels with heights down to 7 nm is experimentally investigated using a new measurement technique. This technique has been developed based on capillary flow and a novel hybrid nanochannel design, and is capable of characterizing flow in both single nanoconduits and nanoporous media. The presence of a 0.7 nm thick hydration layer on hydrophilic surfaces and its effect on increasing the hydraulic resistance of the nanochannels is verified. Next, ion transport in a new class of nanofluidic rectifiers is theoretically and experimentally investigated. These so-called nanofluidic diodes are nanochannels with asymmetric geometries which preferentially allow ion transport in one direction. A nondimensional number as a function of electrolyte concentration, nanochannel dimensions, and surface charge is derived that summarizes the rectification behavior of this system. In the fourth chapter, diffusion- and reaction-governed enzyme transport in nanofluidic channels is studied and the theoretical background necessary for understanding enzymatic activity in nanofluidic channels is presented. A
Parallelization of particle transport using Intel® TBB
International Nuclear Information System (INIS)
Apostolakis, J; Brun, R; Carminati, F; Gheata, A; Wenzel, S; Belogurov, S; Ovcharenko, E
2014-01-01
One of the current challenges in HEP computing is the development of particle propagation algorithms capable of efficiently using all performance aspects of modern computing devices. The Geant-Vector project at CERN has recently introduced an approach in this direction. This paper describes the implementation of a similar workflow using the Intel® Threading Building Blocks (Intel® TBB) library. This approach is intended to overcome the potential bottleneck of having a single dispatcher on many-core architectures and to result in better scalability compared to the initial pthreads-based version.
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-09-03
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using a Kalman filter and a particle filter, respectively, which improves the computational efficiency over using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which enables time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.
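The Rao-Blackwellisation described above (particles for the non-linear state, a per-particle Kalman filter for the conditionally linear state) can be sketched on a toy conditionally linear-Gaussian model. The model below is an illustrative assumption, not the paper's clock offset/skew model or its DPM delay distribution.

```python
import numpy as np

def rbpf(ys, n_particles=500, seed=0):
    """Rao-Blackwellised particle filter sketch for the toy model
        nonlinear state:  s' = 0.9 s + w_s,    w_s ~ N(0, 0.5)
        linear state:     l' = 0.99 l + w_l,   w_l ~ N(0, 0.01)
        observation:      y  = sin(s) + l + v, v   ~ N(0, 0.1)
    Particles carry s; conditional on each particle's trajectory, l is
    Gaussian and tracked exactly by a scalar Kalman filter.
    Returns the posterior-mean estimate of l at each step.
    """
    rng = np.random.default_rng(seed)
    qs, ql, r = 0.5, 0.01, 0.1
    s = rng.normal(0, 1, n_particles)      # nonlinear particles
    m = np.zeros(n_particles)              # KF mean of l, per particle
    P = np.full(n_particles, 1.0)          # KF variance of l, per particle
    est = []
    for y in ys:
        # propagate: sample the nonlinear part, KF-predict the linear part
        s = 0.9 * s + rng.normal(0, np.sqrt(qs), n_particles)
        m = 0.99 * m
        P = 0.99**2 * P + ql
        # weight by the marginal likelihood N(y; sin(s)+m, P+r)
        S = P + r
        innov = y - np.sin(s) - m
        w = np.exp(-0.5 * innov**2 / S) / np.sqrt(S)
        w /= w.sum()
        # KF measurement update of the linear state
        K = P / S
        m = m + K * innov
        P = (1 - K) * P
        est.append((w * m).sum())          # E[l | y_1..t]
        # multinomial resampling
        idx = rng.choice(n_particles, n_particles, p=w)
        s, m, P = s[idx], m[idx], P[idx]
    return np.array(est)
```

Because the linear state is marginalized analytically, the particle cloud only has to cover the non-linear dimension, which is the source of the efficiency gain the abstract describes.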
Measurement of particle transport coefficients on Alcator C-Mod
International Nuclear Information System (INIS)
Luke, T.C.T.
1994-10-01
The goal of this thesis was to study the behavior of the plasma transport during the divertor detachment in order to explain the central electron density rise. The measurement of particle transport coefficients requires sophisticated diagnostic tools. A two color interferometer system was developed and installed on Alcator C-Mod to measure the electron density with high spatial (∼2 cm) and high temporal (≤1.0 ms) resolution. The system consists of 10 CO₂ (10.6 μm) and 4 HeNe (0.6328 μm) chords that are used to measure the line integrated density to within 0.08 CO₂ degrees or 2.3 × 10¹⁶ m⁻² theoretically. Using the two color interferometer, a series of gas puffing experiments were conducted. The density was varied above and below the threshold density for detachment at a constant magnetic field and plasma current. Using a gas modulation technique, the particle diffusion, D, and the convective velocity, V, were determined. Profiles were inverted using a SVD inversion and the transport coefficients were extracted with a time regression analysis and a transport simulation analysis. Results from each analysis were in good agreement. Measured profiles of the coefficients increased with the radius and the values were consistent with measurements from other experiments. The values exceeded neoclassical predictions by a factor of 10. The profiles also exhibited an inverse dependence with plasma density. The scaling of both attached and detached plasmas agreed well with this inverse scaling. This result and the lack of change in the energy and impurity transport indicate that there was no change in the underlying transport processes after detachment.
Measurement of particle transport coefficients on Alcator C-Mod
Energy Technology Data Exchange (ETDEWEB)
Luke, T.C.T.
1994-10-01
The goal of this thesis was to study the behavior of the plasma transport during the divertor detachment in order to explain the central electron density rise. The measurement of particle transport coefficients requires sophisticated diagnostic tools. A two color interferometer system was developed and installed on Alcator C-Mod to measure the electron density with high spatial (∼2 cm) and high temporal (≤1.0 ms) resolution. The system consists of 10 CO₂ (10.6 μm) and 4 HeNe (0.6328 μm) chords that are used to measure the line integrated density to within 0.08 CO₂ degrees or 2.3 × 10¹⁶ m⁻² theoretically. Using the two color interferometer, a series of gas puffing experiments were conducted. The density was varied above and below the threshold density for detachment at a constant magnetic field and plasma current. Using a gas modulation technique, the particle diffusion, D, and the convective velocity, V, were determined. Profiles were inverted using a SVD inversion and the transport coefficients were extracted with a time regression analysis and a transport simulation analysis. Results from each analysis were in good agreement. Measured profiles of the coefficients increased with the radius and the values were consistent with measurements from other experiments. The values exceeded neoclassical predictions by a factor of 10. The profiles also exhibited an inverse dependence with plasma density. The scaling of both attached and detached plasmas agreed well with this inverse scaling. This result and the lack of change in the energy and impurity transport indicate that there was no change in the underlying transport processes after detachment.
A nowcasting technique based on application of the particle filter blending algorithm
Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai
2017-10-01
To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed by using the radar mosaic at an altitude of 2.5 km obtained from the radar images of 12 S-band radars in Guangdong Province, China. First, a bilateral filter was applied in the quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm were then used to track radar echoes and retrieve the echo motion vectors; next, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. Therefore, the particle filter blending method proves superior to the traditional forecasting methods and can be used to enhance the ability of nowcasting in operational weather forecasts.
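The final step in the pipeline above, semi-Lagrangian extrapolation along a motion-vector field, can be sketched as follows. The constant-vector backtracking and bilinear interpolation are illustrative assumptions, not the paper's operational implementation.

```python
import numpy as np

def semi_lagrangian_step(field, u, v, dt=1.0):
    """Advect a 2D field one step along per-pixel motion vectors (u, v),
    given in grid units per step.

    For each grid point, trace back along the motion vector to the
    departure point and bilinearly interpolate the field there.
    """
    ny, nx = field.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    # departure points, clamped to the grid
    xd = np.clip(xx - u * dt, 0, nx - 1)
    yd = np.clip(yy - v * dt, 0, ny - 1)
    x0 = np.floor(xd).astype(int); y0 = np.floor(yd).astype(int)
    x1 = np.minimum(x0 + 1, nx - 1); y1 = np.minimum(y0 + 1, ny - 1)
    fx = xd - x0; fy = yd - y0
    # bilinear interpolation of the four surrounding grid values
    return (field[y0, x0] * (1 - fx) * (1 - fy) + field[y0, x1] * fx * (1 - fy)
          + field[y1, x0] * (1 - fx) * fy + field[y1, x1] * fx * fy)
```

Applied repeatedly with the blended motion field, this produces the 30- and 60-min extrapolated echo forecasts the abstract evaluates.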
A parallel algorithm for 3D particle tracking and Lagrangian trajectory reconstruction
International Nuclear Information System (INIS)
Barker, Douglas; Zhang, Yuanhui; Lifflander, Jonathan; Arya, Anshu
2012-01-01
Particle-tracking methods are widely used in fluid mechanics and multi-target tracking research because of their unique ability to reconstruct long trajectories with high spatial and temporal resolution. Researchers have recently demonstrated 3D tracking of several objects in real time, but as the number of objects is increased, real-time tracking becomes impossible due to data transfer and processing bottlenecks. This problem may be solved by using parallel processing. In this paper, a parallel-processing framework has been developed based on frame decomposition and is programmed using the asynchronous object-oriented Charm++ paradigm. This framework can be a key step in achieving a scalable Lagrangian measurement system for particle-tracking velocimetry and may lead to real-time measurement capabilities. The parallel tracking algorithm was evaluated with three data sets, including the particle image velocimetry standard 3D images data set #352, a uniform data set for optimal parallel performance, and a computational-fluid-dynamics-generated non-uniform data set, to test trajectory reconstruction accuracy, consistency with the sequential version, and scalability to more than 500 processors. The algorithm showed strong scaling up to 512 processors and no inherent limits of scalability were seen. Ultimately, up to a 200-fold speedup was observed compared to the serial algorithm when 256 processors were used. The parallel algorithm is adaptable and could be easily modified to use any sequential tracking algorithm that inputs frames of 3D particle location data and outputs particle trajectories.
Explicit K-symplectic algorithms for charged particle dynamics
International Nuclear Information System (INIS)
He, Yang; Zhou, Zhaoqi; Sun, Yajuan; Liu, Jian; Qin, Hong
2017-01-01
We study the Lorentz force equation of charged particle dynamics by considering its K-symplectic structure. As the Hamiltonian of the system can be decomposed into four parts, we are able to construct numerical methods that preserve the K-symplectic structure based on the Hamiltonian splitting technique. The newly derived numerical methods are explicit, and are shown in numerical experiments to be stable over long-term simulation. The error convergence as well as the long-term energy conservation of the numerical solutions is also analyzed by means of the Darboux transformation.
Explicit K-symplectic algorithms for charged particle dynamics
Energy Technology Data Exchange (ETDEWEB)
He, Yang [School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083 (China); Zhou, Zhaoqi [LSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100190 (China); Sun, Yajuan, E-mail: sunyj@lsec.cc.ac.cn [LSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100190 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Liu, Jian [Department of Modern Physics and School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui 230026 (China); Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026 (China); Qin, Hong [Department of Modern Physics and School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui 230026 (China); Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States)
2017-02-12
We study the Lorentz force equation of charged particle dynamics by considering its K-symplectic structure. As the Hamiltonian of the system can be decomposed into four parts, we are able to construct numerical methods that preserve the K-symplectic structure based on the Hamiltonian splitting technique. The newly derived numerical methods are explicit, and are shown in numerical experiments to be stable over long-term simulation. The error convergence as well as the long-term energy conservation of the numerical solutions is also analyzed by means of the Darboux transformation.
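The paper's K-symplectic splitting is tied to its specific four-part Hamiltonian decomposition and is not reproduced here. As a hedged stand-in for the same class of problem, the sketch below shows the standard explicit Boris scheme for the Lorentz force, another structure-preserving integrator for charged-particle dynamics, and checks the long-term invariant (kinetic energy in a pure magnetic field) that such methods preserve.

```python
import numpy as np

def boris_push(x, v, E, B, dt, qm=1.0):
    """One step of the explicit Boris scheme for dv/dt = qm (E + v x B).

    Half electric kick, exact-norm magnetic rotation, half electric kick;
    with E = 0 the speed |v| is preserved to machine precision, which is
    why such schemes stay stable over very long simulations.
    """
    v_minus = v + 0.5 * dt * qm * E
    t = 0.5 * dt * qm * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)   # rotation: |v_plus| == |v_minus|
    v_new = v_plus + 0.5 * dt * qm * E
    return x + dt * v_new, v_new
```

The long-term energy behavior this illustrates is the same property the abstract analyzes for its K-symplectic methods, here verified over a thousand gyration steps.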
Parallel-vector algorithms for particle simulations on shared-memory multiprocessors
International Nuclear Information System (INIS)
Nishiura, Daisuke; Sakaguchi, Hide
2011-01-01
Over the last few decades, the computational demands of massive particle-based simulations for both scientific and industrial purposes have been continuously increasing. Hence, considerable efforts are being made to develop parallel computing techniques on various platforms. In such simulations, particles freely move within a given space, and so on a distributed-memory system, load balancing, i.e., assigning an equal number of particles to each processor, is not guaranteed. In contrast, shared-memory systems achieve better load balancing for particle models, but suffer from the intrinsic drawback of memory access competition, particularly during (1) pairing of contact candidates from among neighboring particles and (2) force summation for each particle. Here, novel algorithms are proposed to overcome these two problems. For the first problem, the key is a pre-conditioning process during which particle labels are sorted by the label of the cell in the domain to which the particles belong. Then, a list of contact candidates is constructed by pairing the sorted particle labels. For the latter problem, a table comprising the list indexes of the contact candidate pairs is created and used to sum the contact forces acting on each particle for all contacts according to Newton's third law. With just these methods, memory access competition is avoided without additional redundant procedures. The parallel efficiency and compatibility of these two algorithms were evaluated in discrete element method (DEM) simulations on four types of shared-memory parallel computers: a multicore multiprocessor computer, scalar supercomputer, vector supercomputer, and graphics processing unit. The computational efficiency of a DEM code was found to be drastically improved with our algorithms on all but the scalar supercomputer. Thus, the developed parallel algorithms are useful on shared-memory parallel computers with sufficient memory bandwidth.
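The sort-by-cell pre-conditioning step described above can be sketched in serial 2D form. This is only an illustration of the candidate-pairing idea; the paper's shared-memory scheme for avoiding memory access competition is not reproduced.

```python
import numpy as np
from collections import defaultdict

def contact_candidates(pos, cell_size):
    """Contact-candidate pairing via cell binning (2D serial sketch).

    Particle labels are sorted by the label of the cell they belong to and
    bucketed; candidate pairs are particles sharing a cell or sitting in
    adjacent cells. cell_size must be >= the largest particle diameter so
    that no true contact is missed.
    """
    cells = np.floor(np.asarray(pos) / cell_size).astype(int)
    # sort particle labels by cell label, then bucket the sorted labels
    order = np.lexsort((cells[:, 1], cells[:, 0]))
    bucket = defaultdict(list)
    for i in order:
        bucket[tuple(cells[i])].append(int(i))
    pairs = set()
    for (cx, cy), members in bucket.items():
        # pairs within the same cell
        for j, a in enumerate(members):
            for b in members[j + 1:]:
                pairs.add((min(a, b), max(a, b)))
        # pairs with half of the neighbouring cells (avoids double counting)
        for dx, dy in ((1, 0), (0, 1), (1, 1), (1, -1)):
            for a in members:
                for b in bucket.get((cx + dx, cy + dy), ()):
                    pairs.add((min(a, b), max(a, b)))
    return sorted(pairs)
```

In the paper's force-summation step, a table of these pair indexes is then what allows each contact force to be applied to both partners (Newton's third law) without write conflicts.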
A neuro-fuzzy inference system tuned by particle swarm optimization algorithm for sensor monitoring
Energy Technology Data Exchange (ETDEWEB)
Oliveira, Mauro Vitor de [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil). Div. de Instrumentacao e Confiabilidade Humana]. E-mail: mvitor@ien.gov.br; Schirru, Roberto [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Lab. de Monitoracao de Processos
2005-07-01
A neuro-fuzzy inference system (ANFIS) tuned by a particle swarm optimization (PSO) algorithm has been developed to monitor a relevant sensor in a nuclear plant using the information from other sensors. The antecedent parameters of the ANFIS that estimates the relevant sensor signal are optimized by a PSO algorithm, and the consequent parameters are obtained by a least-squares algorithm. The proposed sensor-monitoring algorithm was demonstrated through the estimation of the nuclear power value in a pressurized water reactor using six other correlated signals as inputs to the ANFIS. The obtained results are compared to those of two similar ANFIS models using gradient descent (GD) and a genetic algorithm (GA), respectively, as the antecedent-parameter training algorithm. (author)
A neuro-fuzzy inference system tuned by particle swarm optimization algorithm for sensor monitoring
International Nuclear Information System (INIS)
Oliveira, Mauro Vitor de; Schirru, Roberto
2005-01-01
A neuro-fuzzy inference system (ANFIS) tuned by a particle swarm optimization (PSO) algorithm has been developed to monitor a relevant sensor in a nuclear plant using the information from other sensors. The antecedent parameters of the ANFIS that estimates the relevant sensor signal are optimized by a PSO algorithm, and the consequent parameters are obtained by a least-squares algorithm. The proposed sensor-monitoring algorithm was demonstrated through the estimation of the nuclear power value in a pressurized water reactor using six other correlated signals as inputs to the ANFIS. The obtained results are compared to those of two similar ANFIS models using gradient descent (GD) and a genetic algorithm (GA), respectively, as the antecedent-parameter training algorithm. (author)
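PSO tuning of model parameters, as used here for the ANFIS antecedent parameters, follows a generic global-best loop. The sketch below is a minimal stand-in with an arbitrary objective; the swarm size, inertia, and acceleration constants are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pso_minimize(f, dim, n=30, iters=200, seed=0):
    """Minimal global-best PSO: velocities are pulled toward each
    particle's personal best and the swarm's global best; in the paper's
    setting f would be the ANFIS estimation error as a function of the
    antecedent parameters."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration constants
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        improved = val < pval
        pbest[improved] = x[improved]
        pval[improved] = val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

On a smooth test objective such as the sphere function the loop converges rapidly, which is the behavior exploited when fitting the antecedent membership-function parameters.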
Directory of Open Access Journals (Sweden)
B. T. Tsurutani
2002-04-01
Energetic particles and MHD waves are studied using simultaneous ISEE-3 data to investigate particle propagation and scattering between the source near the Sun and 1 AU. ³He-rich events are of particular interest because they are typically low-intensity "scatter-free" events. The largest solar proton events are of interest because they have been postulated to generate their own waves through beam instabilities. For ³He-rich events, simultaneous interplanetary magnetic spectra are measured. The intensity of the interplanetary "fossil" turbulence through which the particles have traversed is found to be at the "quiet" to "intermediate" level of IMF activity. Pitch angle scattering rates and the corresponding particle mean free paths λW−P are calculated using the measured wave intensities, polarizations, and k directions. The values of λW−P are found to be ~5 times less than the value of λHe, the latter derived from He intensity and anisotropy time profiles. It is demonstrated by computer simulation that scattering rates through a 90° pitch angle are lower than those at other pitch angles, and that this is a possible explanation for the discrepancy between the λW−P and λHe values. At this time the scattering mechanism(s) is unknown. We suggest a means by which a direct comparison between the two λ values could be made. Computer simulations indicate that although scattering through 90° is lower, it still occurs. Possibilities are either large pitch angle scattering through resonant interactions, or particle mirroring off of field compression regions. The largest solar proton events are analyzed to investigate the possibilities of local wave generation at 1 AU. In accordance with the results of a previous calculation (Gary et al., 1985) of beam stability, proton beams at 1 AU are found to be marginally stable. No evidence for substantial wave amplitude was found. Locally generated waves, if present, were less than 10⁻³ nT² Hz⁻¹ at the leading
Directory of Open Access Journals (Sweden)
T. Hada
Energetic particles and MHD waves are studied using simultaneous ISEE-3 data to investigate particle propagation and scattering between the source near the Sun and 1 AU. ³He-rich events are of particular interest because they are typically low-intensity "scatter-free" events. The largest solar proton events are of interest because they have been postulated to generate their own waves through beam instabilities. For ³He-rich events, simultaneous interplanetary magnetic spectra are measured. The intensity of the interplanetary "fossil" turbulence through which the particles have traversed is found to be at the "quiet" to "intermediate" level of IMF activity. Pitch angle scattering rates and the corresponding particle mean free paths λW−P are calculated using the measured wave intensities, polarizations, and k directions. The values of λW−P are found to be ~5 times less than the value of λHe, the latter derived from He intensity and anisotropy time profiles. It is demonstrated by computer simulation that scattering rates through a 90° pitch angle are lower than those at other pitch angles, and that this is a possible explanation for the discrepancy between the λW−P and λHe values. At this time the scattering mechanism(s) is unknown. We suggest a means by which a direct comparison between the two λ values could be made. Computer simulations indicate that although scattering through 90° is lower, it still occurs. Possibilities are either large pitch angle scattering through resonant interactions, or particle mirroring off of field compression regions. The largest solar proton events are analyzed to investigate the possibilities of local wave generation at 1 AU. In accordance with the results of a previous calculation (Gary et al., 1985) of beam stability, proton beams at 1 AU are found to be marginally stable. No evidence for substantial wave amplitude was found. Locally generated waves, if present, were less than 10⁻³ nT² Hz⁻¹ at the leading
Experimental study of particle transport and density fluctuation in LHD
International Nuclear Information System (INIS)
Tanaka, K.; Morita, S.; Sanin, A.; Michael, C.; Kawahata, K.; Yamada, H.; Miyazawa, J.; Tokuzawa, T.; Akiyama, T.; Goto, M.; Ida, K.; Yoshinuma, M.; Narihara, K.; Yamada, I.; Yokoyama, M.; Masuzaki, S.; Morisaki, T.; Sakamoto, R.; Funaba, H.; Komori, A.; Vyacheslavov, L.N.; Murakami, S.; Wakasa, A.
2005-01-01
A variety of electron density (n_e) profiles have been observed in the Large Helical Device (LHD). The density profiles change dramatically with heating power and toroidal magnetic field (B_t) at the same line-averaged density. The particle transport coefficients, i.e., the diffusion coefficient (D) and convection velocity (V), are experimentally obtained from density modulation experiments in the standard configuration. The values of D and V are estimated separately in the core and at the edge. The diffusion coefficients are a strong function of electron temperature (T_e) and are proportional to T_e^(1.7±0.9) in the core and T_e^(1.1±0.14) at the edge, and the edge diffusion coefficients are proportional to B_t^(-2.08). The scaling of D at the edge is thus found to be close to gyro-Bohm in nature. Non-zero V is observed, and the electron temperature gradient is found to drive particle convection. This is particularly clear in the core region, where the convection velocity reverses direction from inward to outward as the T_e gradient increases. At the edge, the convection is inward directed in most cases in the present data set, with a magnitude roughly proportional to the T_e gradient while remaining inward directed. However, the toroidal magnetic field also significantly affects the value and direction of V. The spectrum of density fluctuations changes with heating power, suggesting an influence on particle transport. The peak wavenumber is around 0.1 times the inverse ion Larmor radius, as expected from gyro-Bohm diffusion. The peaks of fluctuation intensity are localized at the plasma edge, where the density gradient becomes negative and diffusion contributes most to the particle flux. These results suggest a qualitative correlation of fluctuations with particle diffusion. (author)
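The decomposition fitted in such density-modulation experiments is the radial particle flux Γ = -D dn/dr + V n, a diffusive term plus a convective term. A minimal numerical sketch (profiles and coefficients below are illustrative assumptions):

```python
import numpy as np

def particle_flux(r, n, D, V):
    """Radial particle flux Gamma = -D dn/dr + V n.

    D and V may be scalars or radial profiles; dn/dr is evaluated
    numerically. Modulation experiments infer D and V by matching the
    simulated response of n to a modulated particle source.
    """
    dndr = np.gradient(n, r)
    return -D * dndr + V * n
```

As a consistency check, a Gaussian density profile n = exp(-r²) with D = 1 and an inward pinch V = -2r is a zero-flux steady state, since V n exactly cancels -D dn/dr.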
A ballistic transport model for electronic excitation following particle impact
Hanke, S.; Heuser, C.; Weidtmann, B.; Wucher, A.
2018-01-01
We present a ballistic model for the transport of electronic excitation energy induced by keV particle bombardment onto a solid surface. Starting from a free electron gas model, the Boltzmann transport equation (BTE) is employed to follow the evolution of the temporal and spatial distribution function f(r⃗, k⃗, t) describing the occupation probability of an electronic state k⃗ at position r⃗ and time t. Three different initializations of the distribution function are considered: i) a thermal distribution function with a locally and temporally elevated electron temperature, ii) a peak excitation at a specific energy above the Fermi level with a quasi-isotropic distribution in k-space and iii) an anisotropic peak excitation with k-vectors oriented in a specific transport direction. While the first initialization resembles a distribution function which may, for instance, result from electronic friction of moving atoms within an ion induced collision cascade, the peak excitation can in principle result from an autoionization process after excitation in close binary collisions. By numerically solving the BTE, we study the electronic energy exchange along a one dimensional transport direction to obtain a time and space resolved excitation energy distribution function, which is then analyzed in view of general transport characteristics of the chosen model system.
A Novel Chaotic Particle Swarm Optimization Algorithm for Parking Space Guidance
Directory of Open Access Journals (Sweden)
Na Dong
2016-01-01
An evolutionary approach to parking space guidance based upon a novel Chaotic Particle Swarm Optimization (CPSO) algorithm is proposed. In the newly proposed CPSO algorithm, chaotic dynamics is combined into the position updating rules of Particle Swarm Optimization to improve the diversity of solutions and to avoid being trapped in local optima. This novel approach, which combines the strengths of Particle Swarm Optimization and chaotic dynamics, is then applied to the route optimization (RO) problem of parking lots, an important issue in the management systems of large-scale parking lots. It is used to find optimized paths between any source and destination nodes in the route network. Route optimization problems based on real parking lots are introduced for analysis, and the effectiveness and practicability of this novel optimization algorithm for parking space guidance are verified through the application results.
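One common way to combine chaotic dynamics into the PSO update, sketched below, is to replace the uniform random coefficients r1, r2 in the velocity rule with iterates of the logistic map. This is a hedged reading of the abstract; the paper's exact CPSO variant may differ.

```python
def logistic_sequence(z0, steps):
    """Iterate the logistic map z' = 4 z (1 - z), a chaotic surrogate for
    the uniform random coefficients in the PSO velocity update.

    For z0 in (0, 1) (excluding the fixed points) the iterates stay in
    [0, 1] but are aperiodic and extremely sensitive to z0, which is the
    diversity-preserving property CPSO exploits.
    """
    z, out = z0, []
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)
        out.append(z)
    return out
```

Two swarms seeded with nearly identical z0 values thus explore very different coefficient sequences, reducing the chance that all particles collapse into the same local optimum.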
A Novel Adaptive Particle Swarm Optimization Algorithm with Foraging Behavior in Optimization Design
Directory of Open Access Journals (Sweden)
Liu Yan
2018-01-01
Conventional reducer design generally relies on repeated trial and proofreading, but this approach is inefficient and the resulting reducer is often large. To address these problems, this paper presents an adaptive particle swarm optimization algorithm with foraging behavior, in which the bacterial foraging process is introduced into the adaptive particle swarm optimization algorithm to provide particle chemotaxis, swarming, reproduction, and elimination-dispersal, improving the ability of local search and avoiding premature behavior. Verified on typical test functions and applied to the optimization design of a reducer structure with discrete and continuous variables, the new algorithm shows good reliability, strong searching ability, and high accuracy. It can be used in engineering design and has strong applicability.
Directory of Open Access Journals (Sweden)
Weitian Lin
2014-01-01
The particle swarm optimization algorithm (PSOA) is an effective optimization tool. However, it tends to get stuck in near-optimal solutions, especially for middle- and large-size problems, and it is difficult to improve solution accuracy by fine-tuning parameters. To address this insufficiency, this paper studies the combined local and global search particle swarm algorithm (LGSCPSOA), analyzes its convergence, and obtains its convergence conditions. The algorithm is tested on a set of 8 benchmark continuous functions and its optimization results are compared with those of the original particle swarm algorithm (OPSOA). Experimental results indicate that the LGSCPSOA improves search performance significantly, especially on the middle- and large-size benchmark functions.
Parallelization of a Monte Carlo particle transport simulation code
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with more accurate physical models, and improve statistics, as more particle tracks can be simulated in a short response time.
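The role that SPRNG and DCMT play above, giving each parallel worker an independent, reproducible random-number stream, can be sketched with NumPy's stream-splitting facility. The toy "transport" kernel (sampling exponential free paths) and the reduction are illustrative assumptions, not the MC4 physics.

```python
import numpy as np

def mean_free_path_mc(n_total, n_streams=4, base_seed=1234):
    """Monte Carlo estimate of the mean free path, with the workload split
    across independent RNG streams.

    SeedSequence.spawn() produces statistically independent child streams;
    in the parallel code each child would live on one MPI rank, and the
    per-rank sums would be combined with an MPI_Reduce. Here the streams
    are processed sequentially to keep the sketch self-contained.
    """
    children = np.random.SeedSequence(base_seed).spawn(n_streams)
    total, count = 0.0, 0
    for ss in children:                       # one stream per "rank"
        rng = np.random.default_rng(ss)
        d = rng.exponential(1.0, n_total // n_streams)  # free-path samples
        total += d.sum()
        count += d.size
    return total / count
```

Because every stream is derived deterministically from the base seed, the parallel decomposition is reproducible, the same property the validation against the serial code relies on.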
Directory of Open Access Journals (Sweden)
Jianwen Guo
2016-01-01
Full Text Available All equipment must be maintained during its lifetime to ensure normal operation. Maintenance plays a critical role in the success of manufacturing enterprises. This paper proposes a preventive maintenance period optimization model (PMPOM) to find an optimal preventive maintenance period. Exploiting the advantages of particle swarm optimization (PSO) and the cuckoo search (CS) algorithm, a hybrid PSO-CS optimization algorithm is proposed to solve the PMPOM problem. Tests on benchmark functions show that the proposed algorithm outperforms both particle swarm optimization and cuckoo search. Experimental results show that the proposed algorithm has strong optimization ability and fast convergence when solving the PMPOM problem.
A Parallel Adaptive Particle Swarm Optimization Algorithm for Economic/Environmental Power Dispatch
Directory of Open Access Journals (Sweden)
Jinchao Li
2012-01-01
Full Text Available A parallel adaptive particle swarm optimization algorithm (PAPSO) is proposed for economic/environmental power dispatch, which overcomes premature convergence, slow convergence in the late evolutionary phase, and the lack of good search direction in the particles' evolution. A search population is randomly divided into several subpopulations. For each subpopulation, the optimal solution is searched synchronously using the proposed method, thus realizing parallel computing. To avoid converging to a local optimum, a crossover operator is introduced to exchange information among the subpopulations while sustaining the diversity of the population. Simulation results show that the proposed algorithm can effectively solve the economic/environmental operation problem of hydropower generating units. Performance comparisons show that the solution from the proposed method is better than those from the conventional particle swarm algorithm and other optimization algorithms.
Xu, Sheng-Hua; Liu, Ji-Ping; Zhang, Fu-Hao; Wang, Liang; Sun, Li-Jian
2015-08-27
A combination of genetic algorithm and particle swarm optimization (PSO) for vehicle routing problems with time windows (VRPTW) is proposed in this paper. The improvements in the proposed algorithm include: using a real-number particle encoding method to decode the route and alleviate the computation burden, applying a linear decreasing function based on the number of iterations to balance global and local exploration abilities, and integrating the crossover operator of the genetic algorithm to avoid premature convergence and local minima. The experimental results show that the proposed algorithm is not only more efficient and competitive with other published results but also obtains better solutions for the VRPTW. A new best-known solution for this benchmark problem is also reported.
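The linear decreasing inertia weight mentioned above is a standard PSO ingredient and can be sketched as follows (a minimal illustration; the function names, the coefficients c1 = c2 = 2.0, and the weight bounds are our assumptions, not values from the paper):

```python
import random

def linear_inertia(w_max, w_min, it, max_it):
    """Linearly decreasing inertia weight: large early in the run
    (global exploration), small late (local refinement)."""
    return w_max - (w_max - w_min) * it / max_it

def pso_step(pos, vel, pbest, gbest, w, c1=2.0, c2=2.0):
    """One canonical PSO velocity/position update for a single particle."""
    new_vel = [w * v
               + c1 * random.random() * (pb - x)
               + c2 * random.random() * (gb - x)
               for v, x, pb, gb in zip(vel, pos, pbest, gbest)]
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel

# The weight sweeps from 0.9 at iteration 0 down to 0.4 at the last iteration.
assert linear_inertia(0.9, 0.4, 0, 100) == 0.9
assert abs(linear_inertia(0.9, 0.4, 100, 100) - 0.4) < 1e-12
```

Early iterations thus weight the particle's own momentum heavily, while later iterations let the personal-best and global-best attractions dominate.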
Simulations of reactive transport and precipitation with smoothed particle hydrodynamics
Tartakovsky, Alexandre M.; Meakin, Paul; Scheibe, Timothy D.; Eichler West, Rogene M.
2007-03-01
A numerical model based on smoothed particle hydrodynamics (SPH) was developed for reactive transport and mineral precipitation in fractured and porous materials. Because of its Lagrangian particle nature, SPH has several advantages for modeling Navier-Stokes flow and reactive transport, including: (1) in a Lagrangian framework there is no non-linear term in the momentum conservation equation, so that accurate solutions can be obtained for momentum-dominated flows; and (2) complicated physical and chemical processes, such as surface growth due to precipitation/dissolution and chemical reactions, are easy to implement. In addition, SPH simulations explicitly conserve mass and linear momentum. The SPH solution of the diffusion equation with fixed and moving reactive solid-fluid boundaries was compared with analytical solutions, Lattice Boltzmann [Q. Kang, D. Zhang, P. Lichtner, I. Tsimpanogiannis, Lattice Boltzmann model for crystal growth from supersaturated solution, Geophysical Research Letters, 31 (2004) L21604] simulations, and diffusion limited aggregation (DLA) [P. Meakin, Fractals, scaling and far from equilibrium. Cambridge University Press, Cambridge, UK, 1998] model simulations. To illustrate the capabilities of the model, coupled three-dimensional flow, reactive transport, and precipitation in a fracture aperture with a complex geometry were simulated.
High energy particle transport code NMTC/JAM
International Nuclear Information System (INIS)
Niita, Koji; Meigo, Shin-ichiro; Takada, Hiroshi; Ikeda, Yujiro
2001-03-01
We have developed a high energy particle transport code NMTC/JAM, which is an upgraded version of NMTC/JAERI97. The applicable energy range of NMTC/JAM is extended in principle up to 200 GeV for nucleons and mesons by introducing the high energy nuclear reaction code JAM for the intra-nuclear cascade part. For the evaporation and fission process, we have also implemented a new model, GEM, by which light-nucleus production from the excited residual nucleus can be described. In line with the extended applicable energy range, we have upgraded the nucleon-nucleus non-elastic, elastic and differential elastic cross section data by employing new systematics. In addition, particle transport in a magnetic field has been implemented for beam transport calculations. In this upgrade, some new tally functions are added and the input data format has been made considerably more user-friendly. With these new calculation functions and utilities, NMTC/JAM enables reliable neutronics studies of large-scale target systems with complex geometry to be carried out more accurately and easily than before. This report serves as a user manual of the code. (author)
Recently developed methods in neutral-particle transport calculations: overview
International Nuclear Information System (INIS)
Alcouffe, R.E.
1982-01-01
It has become increasingly apparent that successful, general methods for the solution of the neutral particle transport equation involve a close connection between the spatial-discretization method used and the source-acceleration method chosen. The first-order form of the transport equation is considered, with discrete-ordinates angular discretization and spatial discretization based upon a mesh arrangement. Characteristic methods are considered briefly in the context of future, desirable developments. The ideal spatial-discretization method is described as having the following attributes: (1) positive boundary data yields a positive angular flux within the mesh, including its boundaries; (2) it satisfies the particle balance equation over the mesh, that is, the method is conservative; (3) it possesses the diffusion limit independent of spatial mesh size, that is, for a linearly isotropic flux assumption, the transport differencing reduces to a suitable diffusion equation differencing; (4) the method is unconditionally acceleratable, i.e., for each mesh size, the method is unconditionally convergent with a source iteration acceleration. It is doubtful that a single method possesses all these attributes for a general problem. Some commonly used methods are outlined and their computational performance and usefulness are compared; recommendations for future development, including practical computational considerations, are detailed.
Production and global transport of Titan's sand particles
Barnes, Jason W.; Lorenz, Ralph D.; Radebaugh, Jani; Hayes, Alexander G.; Arnold, Karl; Chandler, Clayton
2015-06-01
Previous authors have suggested that Titan's individual sand particles form by either sintering or by lithification and erosion. We suggest two new mechanisms for the production of Titan's organic sand particles that would occur within bodies of liquid: flocculation and evaporitic precipitation. Such production mechanisms would suggest discrete sand sources in dry lakebeds. We search for such sources, but find no convincing candidates with the present Cassini Visual and Infrared Mapping Spectrometer coverage. As a result we propose that Titan's equatorial dunes may represent a single, global sand sea with west-to-east transport providing sources and sinks for sand in each interconnected basin. The sand might then be transported around Xanadu by fast-moving Barchan dune chains and/or fluvial transport in transient riverbeds. A river at the Xanadu/Shangri-La border could explain the sharp edge of the sand sea there, much like the Kuiseb River stops the Namib Sand Sea in southwest Africa on Earth. Future missions could use the composition of Titan's sands to constrain the global hydrocarbon cycle.
Algorithm for Public Electric Transport Schedule Control for Intelligent Embedded Devices
Alps, Ivars; Potapov, Andrey; Gorobetz, Mikhail; Levchenkov, Anatoly
2010-01-01
In this paper the authors present a heuristic algorithm for precise schedule fulfilment in city traffic conditions, taking into account traffic lights. The algorithm is intended for a programmable logic controller (PLC) installed in an electric vehicle to control its motion speed based on traffic light signals. The algorithm is tested using a real controller connected to virtual devices and functional models of real tram devices. Experimental results show high precision of public transport schedule fulfilment using the proposed algorithm.
He Wang
2015-01-01
Demand prediction is an important part and the first premise of supply chain management in different enterprises, and has become one of the difficult and active research fields for related researchers. Taking fresh food demand prediction as an example, the paper presents a new algorithm for predicting demand in a fresh food supply chain. First, the working principle and the root causes of the defects of the particle swarm optimization algorithm are analyzed in the study; second, the...
Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao
2016-06-01
An adaptive inertia weight particle swarm algorithm is proposed in this study to overcome the local-optimum problem of traditional particle swarm optimization in estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure the particle swarm is optimized globally and to keep it from falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared with the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that with the improved entropy minimum algorithm. The algorithm can be applied to the correction of MR image bias fields.
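The adaptive scheme can be sketched roughly as follows; the paper's exact premature-convergence indicator is not reproduced here, so this sketch uses the relative spread of swarm fitness values as a stand-in (all names, constants, and the indicator itself are our assumptions):

```python
import statistics

def premature_indicator(fitnesses):
    """Stand-in premature-convergence indicator: relative spread of swarm
    fitness values. Near 0 means the swarm has clustered on one value."""
    f_avg = statistics.fmean(fitnesses)
    return statistics.pstdev(fitnesses) / (abs(f_avg) + 1e-12)

def adaptive_inertia(fitnesses, w_min=0.4, w_max=0.9):
    """Adapt the inertia weight from the indicator: a clustered (possibly
    premature) swarm gets a larger w, pushing particles back toward global
    exploration; a spread-out swarm gets a smaller w for local refinement."""
    d = min(premature_indicator(fitnesses), 1.0)  # clamp to [0, 1]
    return w_max - (w_max - w_min) * d

# A nearly converged swarm receives a larger inertia weight than a
# spread-out one.
assert adaptive_inertia([1.0, 1.0001, 0.9999, 1.0002]) > \
       adaptive_inertia([1.0, 5.0, 9.0, 2.0])
```

The key design point is the feedback loop: diversity is measured each iteration, and the exploration/exploitation balance is steered by it rather than by a fixed schedule.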
A simple algorithm for measuring particle size distributions on an uneven background from TEM images
DEFF Research Database (Denmark)
Gontard, Lionel Cervera; Ozkaya, Dogan; Dunin-Borkowski, Rafal E.
2011-01-01
Nanoparticles have a wide range of applications in science and technology. Their sizes are often measured using transmission electron microscopy (TEM) or X-ray diffraction. Here, we describe a simple computer algorithm for measuring particle size distributions from TEM images in the presence of an uneven background. An application to images of heterogeneous catalysts is presented.
Research on Multiple Particle Swarm Algorithm Based on Analysis of Scientific Materials
Directory of Open Access Journals (Sweden)
Zhao Hongwei
2017-01-01
Full Text Available This paper proposes an improved particle swarm optimization algorithm based on an analysis of scientific materials. The core idea of MPSO (Multiple Particle Swarm Algorithm) is to extend single-population PSO to interacting multi-swarms, which addresses the problem of being trapped in local minima during later iterations due to lack of diversity. The simulation results show that the convergence rate is fast, the search performance is good, and very good results are achieved.
Directory of Open Access Journals (Sweden)
Wei Li
2015-01-01
Full Text Available We propose a new optimization algorithm inspired by the formation and change of clouds in nature, referred to as the Cloud Particles Differential Evolution (CPDE) algorithm. The cloud is assumed to have three states in the proposed algorithm. The gaseous state represents global exploration. The liquid state represents the intermediate process from global exploration to local exploitation. The solid state represents local exploitation. The best solution found so far acts as a nucleus. In the gaseous state, the nucleus leads the population to explore by a condensation operation. In the liquid state, cloud particles carry out macrolocal exploitation by a liquefaction operation. A new mutation strategy called cloud differential mutation is introduced to address the problem that the misleading effect of a nucleus may cause premature convergence. In the solid state, cloud particles carry out microlocal exploitation by a solidification operation. The effectiveness of the algorithm is validated on different benchmark problems. The results have been compared with eight well-known optimization algorithms. The statistical analysis of the performance of the different algorithms on 10 benchmark functions and the CEC2013 problems indicates that CPDE attains good performance.
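The cloud differential mutation above builds on the classic differential-evolution mutation operator; a minimal sketch of the base DE/rand/1 operator follows (the scale factor F = 0.5 and all names are illustrative assumptions, not the paper's variant):

```python
import random

def de_mutation(population, f=0.5):
    """Classic DE/rand/1 mutation: the base operator that variants such as
    cloud differential mutation build on. Three distinct individuals are
    combined into a donor vector: r1 + f * (r2 - r3)."""
    r1, r2, r3 = random.sample(population, 3)
    return [a + f * (b - c) for a, b, c in zip(r1, r2, r3)]

random.seed(2)
pop = [[0.0, 0.0], [1.0, 1.0], [2.0, -1.0], [0.5, 0.5]]
donor = de_mutation(pop)
assert len(donor) == 2  # donor has the same dimensionality as the parents
```

Because the donor is built from randomly chosen individuals rather than the best one, the base operator avoids the nucleus-misleading effect the paper targets, at the cost of slower convergence.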
Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.
Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie
2018-05-04
Particle swarm optimization is a powerful metaheuristic population-based global optimization algorithm. However, when applied to non-separable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant particle swarm optimization algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared with the rotation-invariant Particle Swarm Optimization (PSO) algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field is carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also outperforms a Genetic Algorithm optimization method in the optimization of ReaxFF-lg correction model parameters. The computational framework is implemented in a standalone C++ code that allows straightforward development of ReaxFF reactive force fields.
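A minimal sketch of the isotropic Gaussian mutation operator described above (the mutation rate, sigma, and function names are illustrative assumptions, not the paper's settings):

```python
import random

def gaussian_mutate(pos, sigma, rate=0.2):
    """Isotropic Gaussian mutation: with probability `rate`, perturb every
    coordinate with i.i.d. normal noise of the same scale. Because the noise
    has no preferred axis, the operator preserves rotation invariance."""
    if random.random() < rate:
        return [x + random.gauss(0.0, sigma) for x in pos]
    return pos

random.seed(0)
p = [1.0, 2.0, 3.0]
q = gaussian_mutate(p, sigma=0.1, rate=1.0)  # rate=1.0 forces a mutation
assert len(q) == 3 and q != p
assert gaussian_mutate(p, sigma=0.1, rate=0.0) == p  # rate=0.0 never mutates
```

In a full PSO loop the operator would typically be applied to particle positions after the velocity update, injecting diversity on multimodal landscapes without biasing any coordinate direction.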
Artificial Fish Swarm Algorithm-Based Particle Filter for Li-Ion Battery Life Prediction
Directory of Open Access Journals (Sweden)
Ye Tian
2014-01-01
Full Text Available An intelligent online prognostic approach is proposed for predicting the remaining useful life (RUL) of lithium-ion (Li-ion) batteries based on the artificial fish swarm algorithm (AFSA) and particle filter (PF), an integrated approach combining a model-based method with a data-driven method. The parameters used in the empirical model, which is based on the capacity fade trends of Li-ion batteries, are identified using the tracking ability of the PF. AFSA-PF aims to improve the performance of the basic PF. By driving the prior particles toward the domain of high likelihood, AFSA-PF enables global optimization and prevents particle degeneracy, thereby improving the particle distribution and increasing prediction accuracy and algorithm convergence. Data provided by NASA are used to verify this approach and compare it with the basic PF and the regularized PF. AFSA-PF is shown to be more accurate and precise.
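The particle degeneracy that AFSA-PF is designed to mitigate is conventionally handled by resampling in the basic PF; a minimal sketch of systematic resampling follows (function name and the toy weights are our own, not from the paper):

```python
import random

def systematic_resample(particles, weights):
    """Systematic resampling: draw n equally spaced points on the cumulative
    weight axis, so high-weight particles are replicated and near-zero-weight
    particles are dropped, combating degeneracy."""
    n = len(particles)
    step = sum(weights) / n
    u = random.uniform(0.0, step)   # single random offset for all draws
    out, cum, i = [], weights[0], 0
    for _ in range(n):
        while cum < u:              # advance to the particle covering u
            i += 1
            cum += weights[i]
        out.append(particles[i])
        u += step
    return out

random.seed(1)
parts = ['a', 'b', 'c', 'd']
w = [0.01, 0.01, 0.97, 0.01]
resampled = systematic_resample(parts, w)
assert len(resampled) == 4
assert resampled.count('c') >= 3  # the dominant particle is replicated
```

AFSA-PF improves on this baseline by moving prior particles toward high-likelihood regions before the weights collapse, rather than only discarding degenerate particles afterwards.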
A Constructive Data Classification Version of the Particle Swarm Optimization Algorithm
Directory of Open Access Journals (Sweden)
Alexandre Szabo
2013-01-01
Full Text Available The particle swarm optimization algorithm was originally introduced to solve continuous parameter optimization problems. It was soon modified to solve other types of optimization tasks and also to be applied to data analysis. In the latter case, however, there are few works in the literature that deal with the problem of dynamically building the architecture of the system. This paper introduces new particle swarm algorithms specifically designed to solve classification problems. The first proposal, named Particle Swarm Classifier (PSClass, is a derivation of a particle swarm clustering algorithm and its architecture, as in most classifiers, is pre-defined. The second proposal, named Constructive Particle Swarm Classifier (cPSClass, uses ideas from the immune system to automatically build the swarm. A sensitivity analysis of the growing procedure of cPSClass and an investigation into a proposed pruning procedure for this algorithm are performed. The proposals were applied to a wide range of databases from the literature and the results show that they are competitive in relation to other approaches, with the advantage of having a dynamically constructed architecture.
Particle Transport in ECRH Plasmas of the TJ-II
International Nuclear Information System (INIS)
Vargas, V. I.; Lopez-Bruna, D.; Estrada, T.; Guasp, J.; Reynolds, J. M.; Velasco, J. L.; Herranz, J.
2007-01-01
We present a systematic study of particle transport in ECRH plasmas of TJ-II at different densities. The goal is to find the dependence of the particle confinement time and electron diffusivity on line-averaged density. The experimental information consists of electron temperature profiles, T_e (Thomson Scattering, TS), electron density, n_e (TS and reflectometry), and measured puffing data in stationary discharges. The profile of the electron source, S_e, was obtained with the 3D Monte Carlo code EIRENE. The particle balance analysis was done by linking the results of EIRENE with those of a model that reproduces ECRH plasmas in stationary conditions. In the range of densities studied (0.58 ≤ ⟨n_e⟩ (10^19 m^-3) ≤ 0.80) there are two confinement regions separated by a threshold density, ⟨n_e⟩ ≈ 0.65 × 10^19 m^-3. Below this threshold density the particle confinement time is low, and vice versa. This is reflected in the effective diffusivity, D_e, whose profiles, in the range of validity of this study, are flat for ⟨n_e⟩ ≥ 0.63 × 10^19 m^-3. (Author) 35 refs
Approximate models for neutral particle transport calculations in ducts
International Nuclear Information System (INIS)
Ono, Shizuca
2000-01-01
The problem of neutral particle transport in evacuated ducts of arbitrary, but axially uniform, cross-sectional geometry and isotropic reflection at the wall is studied. The model makes use of basis functions to represent the transverse and azimuthal dependences of the particle angular flux in the duct. For the approximation in terms of two basis functions, an improvement in the method is implemented by decomposing the problem into uncollided and collided components. A new quadrature set, more suitable to the problem, is developed and generated by one of the techniques of the constructive theory of orthogonal polynomials. The approximation in terms of three basis functions is developed and implemented to improve the precision of the results. For both models of two and three basis functions, the energy dependence of the problem is introduced through the multigroup formalism. The results of sample problems are compared to literature results and to results of the Monte Carlo code, MCNP. (author)
On the use of antithetic variates in particle transport problems
International Nuclear Information System (INIS)
Milgram, M.S.
2001-01-01
The possible use of antithetic variates as a method of variance reduction in particle transport problems is investigated, by performing some numerical experiments. It is found that if variance reduction is not very carefully defined, it is possible, with antithetic variates, to spuriously detect reduction, or not detect true reduction. Once such subtleties are overcome, it is shown that antithetic variates can reduce variance in multidimensional integration up to a point. The phenomenon of spontaneous correlation is defined and identified as the cause of failure. The surprising result that it sometimes pays to track non-contributing particle histories is demonstrated by means of a zero variance integration analogue. The principles developed in the investigation of multi-variable integration are then employed in a simple calculation of energy deposition using the EGS4 computer code. Promising results are obtained for the total energy deposition problem, but the depth/dose problem remains unsolved. Possible means of overcoming the difficulties are suggested
Transport and containment of plasma, particles and energy within flares
Acton, L. W.; Brown, W. A.; Bruner, M. E. C.; Haisch, B. M.; Strong, K. T.
1983-01-01
Results from the analysis of flares observed by the Solar Maximum Mission (SMM) and a recent rocket experiment are discussed. Evidence for primary energy release in the corona through the interaction of magnetic structures, particle and plasma transport into more than a single magnetic structure at the time of a flare and a complex and changing magnetic topology during the course of a flare is found. The rocket data are examined for constraints on flare cooling, within the context of simple loop models. These results form a basis for comments on the limitations of simple loop models for flares.
Development of a Coupled Fluid and Colloidal Particle Transport Model
Ripplinger, Scott
2013-01-01
A colloidal system usually refers to very small particles suspended within a solution. The study of these systems encompasses a variety of cases, including bacteria in ground water, blood cells and platelets in blood plasma, and river silt transport. Examining these kinds of systems using computer simulation can provide a great deal of insight into how they work. Most approaches to date, however, do not look at the details of the system and are specific to a given system. In this...
Computational transport phenomena of fluid-particle systems
Arastoopour, Hamid; Abbasi, Emad
2017-01-01
This book concerns the most up-to-date advances in computational transport phenomena (CTP), an emerging tool for the design of gas-solid processes such as fluidized bed systems. The authors examine recent work in kinetic theory and CTP and illustrate gas-solid processes’ many applications in the energy, chemical, pharmaceutical, and food industries. They also discuss the kinetic theory approach in developing constitutive equations for gas-solid flow systems and how it has advanced over the last decade as well as the possibility of obtaining innovative designs for multiphase reactors, such as those needed to capture CO2 from flue gases. Suitable as a concise reference and a textbook supplement for graduate courses, Computational Transport Phenomena of Gas-Solid Systems is ideal for practitioners in industries involved with the design and operation of processes based on fluid/particle mixtures, such as the energy, chemicals, pharmaceuticals, and food processing. Explains how to couple the population balance e...
The Improved Locating Algorithm of Particle Filter Based on ROS Robot
Fang, Xun; Fu, Xiaoyang; Sun, Ming
2018-03-01
This paper analyzes the basic theory and primary algorithms of the real-time locating system and SLAM technology based on ROS system robots. It proposes an improved particle filter locating algorithm that effectively reduces the time needed to match laser radar scans to the map; additional ultra-wideband technology directly accelerates the global efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, resampling has been reduced by about 5/6, directly cancelling the matching behavior in the robotics algorithm.
Algorithmic Information Dynamics of Persistent Patterns and Colliding Particles in the Game of Life
Zenil, Hector
2018-02-18
We demonstrate how to apply and exploit the concept of algorithmic information dynamics in the characterization and classification of dynamic and persistent patterns, motifs, and colliding particles in, without loss of generality, Conway's Game of Life (GoL) cellular automaton as a case study. We analyze the distribution of prevailing motifs that occur in GoL from the perspective of algorithmic probability. We demonstrate how the tools introduced are an alternative to computable measures such as entropy and compression algorithms, which are often insensitive to small changes and to features of a non-statistical nature in the study of evolving complex systems and their emergent structures.
Analysis of Population Diversity of Dynamic Probabilistic Particle Swarm Optimization Algorithms
Directory of Open Access Journals (Sweden)
Qingjian Ni
2014-01-01
Full Text Available In evolutionary algorithms, population diversity is an important factor in solution performance. In this paper, drawing on population diversity analysis methods from other evolutionary algorithms, three indicators are introduced as measures of population diversity in PSO algorithms: the standard deviation of population fitness values, population entropy, and the Manhattan norm of the standard deviation of population positions. The three measures are used to analyze population diversity in a relatively new PSO variant, Dynamic Probabilistic Particle Swarm Optimization (DPPSO). The results show that the three measures fully reflect the evolution of population diversity in DPPSO algorithms from different angles, and we also discuss the impact of population diversity on the DPPSO variants. These conclusions on population diversity can be used to analyze, design, and improve DPPSO algorithms, thus improving optimization performance, and could also help in understanding the working mechanism of DPPSO theoretically.
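The three diversity indicators named above can be sketched as follows (the binning choice for the entropy and all function names are our assumptions; the paper's exact definitions may differ):

```python
import math
import statistics

def fitness_std(fitnesses):
    """Indicator 1: standard deviation of population fitness values."""
    return statistics.pstdev(fitnesses)

def population_entropy(fitnesses, bins=5):
    """Indicator 2: Shannon entropy of the fitness distribution over bins."""
    lo, hi = min(fitnesses), max(fitnesses)
    width = (hi - lo) / bins or 1.0      # degenerate case: all values equal
    counts = [0] * bins
    for f in fitnesses:
        counts[min(int((f - lo) / width), bins - 1)] += 1
    probs = [c / len(fitnesses) for c in counts if c]
    return -sum(p * math.log(p) for p in probs)

def position_spread(positions):
    """Indicator 3: Manhattan norm of the per-dimension standard deviation
    of particle positions."""
    return sum(statistics.pstdev(d) for d in zip(*positions))

clustered = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]
scattered = [[0.0, 0.0], [5.0, -3.0], [-4.0, 6.0]]
assert position_spread(scattered) > position_spread(clustered)
assert fitness_std([1.0, 1.0, 1.0]) == 0.0
```

The first two indicators see diversity through fitness values and can miss distinct particles with equal fitness; the third sees it through positions, which is why the paper uses all three angles together.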
DEFF Research Database (Denmark)
Nica, Florin Valentin Traian; Ritchie, Ewen; Leban, Krisztina Monika
2013-01-01
Nowadays the requirements imposed by industry and the economy call for better quality and performance while the price must be kept in the same range. To achieve this goal, optimization must be introduced in the design process. Two of the best known optimization algorithms for machine design, the genetic algorithm and particle swarm optimization, are briefly presented in this paper. The two algorithms are tested to determine their performance on five different benchmark test functions, based on three requirements: precision of the result, number of iterations, and calculation time. Both algorithms are also tested on an analytical design process of a Transverse Flux Permanent Magnet Generator to observe their performance in an electrical machine design application.
Efficient implementation of the transportation algorithm for the ...
African Journals Online (AJOL)
This paper analyses the transportation problem of the NBC across its distribution pattern in the South-South Zone of Nigeria. The TORA software was used to analyze the data. We solved the transportation planning problems sequentially, each by the transportation model, for the available data from NBC for the 2012 ...
Creating and using a type of free-form geometry in Monte Carlo particle transport
International Nuclear Information System (INIS)
Wessol, D.E.; Wheeler, F.J.
1993-01-01
While the reactor physicists were fine-tuning the Monte Carlo paradigm for particle transport in regular geometries, the computer scientists were developing rendering algorithms to display extremely realistic renditions of irregular objects ranging from the ubiquitous teakettle to dynamic Jell-O. Even though the modeling methods share a common basis, the initial strategies each discipline developed for variance reduction were remarkably different. Initially, the reactor physicist used Russian roulette, importance sampling, particle splitting, and rejection techniques. In the early stages of development, the computer scientist relied primarily on rejection techniques, including a very elegant hierarchical construction and sampling method. This sampling method allowed the computer scientist to viably track particles through irregular geometries in three-dimensional space, while the initial methods developed by the reactor physicists would only allow for efficient searches through analytical surfaces or objects. As time goes by, it appears there has been some merging of the variance reduction strategies between the two disciplines. This is an early (possibly first) incorporation of geometric hierarchical construction and sampling into the reactor physicists' Monte Carlo transport model that permits efficient tracking through nonuniform rational B-spline surfaces in three-dimensional space. After some discussion, the results from this model are compared with experiments and the model employing implicit (analytical) geometric representation
V.A.F. Dallagnol (V. A F); J.H. van den Berg (Jan); L. Mous (Lonneke)
2009-01-01
In this paper we present a comparison of the application of particle swarm optimization and genetic algorithms to portfolio management, in a constrained portfolio optimization problem where no short sales are allowed. The objective function to be minimized is the value at risk.
The Study on Food Sensory Evaluation based on Particle Swarm Optimization Algorithm
Hairong Wang; Huijuan Xu
2015-01-01
In this study, we explore the procedures and methods for establishing a food sensory evaluation system based on the particle swarm optimization algorithm, explaining the interpretation of sensory evaluation and sensory analysis and reviewing how sensory evaluation is applied in the food industry.
Czech Academy of Sciences Publication Activity Database
Larentzos, J.P.; Brennan, J.K.; Moore, J.D.; Lísal, Martin; Mattson, w.D.
2014-01-01
Roč. 185, č. 7 (2014), s. 1987-1998 ISSN 0010-4655 Grant - others:ARL(US) W911NF-10-2-0039 Institutional support: RVO:67985858 Keywords : dissipative particle dynamics * shardlow splitting algorithm * numerical integration Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.112, year: 2014
Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms
Bianchi, E.; Doppelbauer, G.; Filion, L.C.; Dijkstra, M.; Kahl, G.
2012-01-01
We consider several patchy particle models that have been proposed in the literature and investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the
Design of Wire Antennas by Using an Evolved Particle Swarm Optimization Algorithm
Lepelaars, E.S.A.M.; Zwamborn, A.P.M.; Rogovic, A.; Marasini, C.; Monorchio, A.
2007-01-01
A Particle Swarm Optimization (PSO) algorithm has been used in conjunction with a full-wave numerical code based on the Method of Moments (MoM) to design and optimize wire antennas. The PSO is a robust stochastic evolutionary numerical technique that is very effective in optimizing multidimensional
Directory of Open Access Journals (Sweden)
Maryam Mousavi
Flexible manufacturing system (FMS) enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms; the mean AGV operation efficiency was found to be 69.4, 74, and 79.8 percent for PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software.
Mousavi, Maryam; Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah
2017-01-01
Particle Swarm Transport through Immiscible Fluid Layers in a Fracture
Teasdale, N. D.; Boomsma, E.; Pyrak-Nolte, L. J.
2011-12-01
Immiscible fluids occur either naturally (e.g. oil & water) or from anthropogenic processes (e.g. liquid CO2 & water) in the subsurface and complicate the transport of natural or engineered micro- or nano-scale particles. In this study, we examined the effect of immiscible fluids on the formation and evolution of particle swarms in a fracture. A particle swarm is a collection of colloidal-size particles in a dilute suspension that exhibits cohesive behavior. Swarms fall under gravity with a velocity that is greater than the settling velocity of a single particle. Thus a particle swarm of colloidal contaminants can potentially travel farther and faster in a fracture than expected for a dispersion or emulsion of colloidal particles. We investigated the formation, evolution, and break-up of colloidal swarms under gravity in a uniform-aperture fracture as hydrophobic/hydrophilic particle swarms move across an oil-water interface. A uniform-aperture fracture was fabricated from two transparent acrylic rectangular prisms (100 mm x 50 mm x 100 mm) separated by 1, 2.5, 5, 10 or 50 mm. The fracture was placed vertically inside a glass tank containing a layer of pure silicone oil (polydimethylsiloxane) on distilled water. Along the length of the fracture, 30 mm was filled with oil and 70 mm with water. Experiments were conducted using silicone oils with viscosities of 5, 10, 100, or 1000 cSt. Particle swarms (5 μl) were comprised of a 1% concentration (by mass) of 25 micron glass beads (hydrophilic) suspended in a water drop, or a 1% concentration (by mass) of 3 micron polystyrene fluorescent beads (hydrophobic) suspended in a water drop. The swarm behavior was imaged using an optical fluorescent imaging system composed of a CCD camera and green (525 nm) LED arrays for illumination. Swarms were spherical and remained coherent as they fell through the oil because of the immiscibility of oil and water. However, as a swarm approached the oil-water interface, it
Evidence for particle transport between alveolar macrophages in vivo
Energy Technology Data Exchange (ETDEWEB)
Benson, J.M.; Nikula, K.J.; Guilmette, R.A.
1995-12-01
Recent studies at this Institute have focused on determining the role of alveolar macrophages (AMs) in the transport of particles within and from the lung. For those studies, AMs previously labeled using the nuclear stain Hoechst 33342 and polychromatic Fluoresbrite microspheres (1 μm diameter, Polysciences, Inc., Warrington, PA) were instilled into lungs of recipient F344 rats. The fate of the donor particles and the doubly labeled AMs within recipient lungs was followed for 32 d. Within 2-4 d after instillation, the polychromatic microspheres were found in both donor and resident AMs, suggesting that particle transfer occurred between the donor and resident AMs. However, this may also have been an artifact resulting from phagocytosis of the microspheres from dead donor cells or from the fading or degradation of Hoechst 33342 within the donor cells leading to their misidentification as resident AMs. The results support the earlier findings that microspheres in donor AMs can be transferred to resident AMs within 2 d after instillation.
Semiclassical transport of particles with dynamical spectral functions
International Nuclear Information System (INIS)
Cassing, W.; Juchem, S.
2000-01-01
The conventional transport of particles in the on-shell quasiparticle limit is extended to particles of finite lifetime by means of a spectral function A(X, P, M²) for a particle moving in an area of complex self-energy Σ_ret(X) = Re Σ_ret(X) - iΓ(X)/2. Starting from the Kadanoff-Baym equations, we derive in first-order gradient expansion equations of motion for test particles with respect to their time evolution in X, P and M². The off-shell propagation is demonstrated for a couple of model cases that simulate hadron-nucleus collisions. In the case of nucleus-nucleus collisions, the imaginary part of the hadron self-energy Γ(X) is determined dynamically by the local space-time-dependent collision rate. A first application is presented for A+A reactions up to 95 A MeV, where the effects of the off-shell propagation of nucleons are discussed with respect to high-energy proton spectra, high-energy photon production, and kaon yields in comparison with the available data from GANIL.
Adaptive sampling method in deep-penetration particle transport problem
International Nuclear Information System (INIS)
Wang Ruihong; Ji Zhicheng; Pei Lucheng
2012-01-01
The deep-penetration problem has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, a particle transport random-walk system using the emission point as a sampling station is built. An adaptive sampling scheme is then derived to obtain a better solution from the information achieved. The main advantage of the adaptive scheme is to choose the most suitable sampling number at the emission point station so as to minimize the total cost of the random walk. Further, a related importance sampling method is introduced. Its main principle is to define an importance function of the particle state and to ensure that the sampling number of emission particles is proportional to the importance function. The numerical results show that the adaptive scheme can to some degree overcome the tendency to underestimate the result, and that the adaptive importance sampling method also gives satisfactory results. (authors)
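The allocation rule described above, a sampling number per emission point proportional to an importance function, can be sketched as a budget split under a fixed total. This is a generic illustration under assumed inputs, not the paper's actual scheme; the largest-remainder rounding is an illustrative choice.

```python
def allocate_samples(importances, total_samples):
    """Allocate a fixed sample budget across emission points in
    proportion to an importance function value for each point.

    Each point gets at least one sample; leftover samples go to the
    points with the largest fractional share (largest-remainder rule).
    """
    total_imp = sum(importances)
    raw = [imp / total_imp * total_samples for imp in importances]
    counts = [max(1, int(r)) for r in raw]
    # Hand out any remaining samples to the largest fractional parts.
    remainder = total_samples - sum(counts)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - int(raw[i]),
                   reverse=True)
    for i in order[:max(0, remainder)]:
        counts[i] += 1
    return counts
```

Concentrating samples where the importance function is large is what lets the adaptive scheme avoid underestimating deep-penetration tallies.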
Particle transport model sensitivity on wave-induced processes
Staneva, Joanna; Ricker, Marcel; Krüger, Oliver; Breivik, Oyvind; Stanev, Emil; Schrum, Corinna
2017-04-01
Different effects of wind waves on the hydrodynamics in the North Sea are investigated using a coupled wave (WAM) and circulation (NEMO) model system. The terms accounting for the wave-current interaction are the Stokes-Coriolis force and the sea-state-dependent momentum and energy fluxes. The role of the different Stokes drift parameterizations is investigated using a particle-drift model. The particles can be considered simple representations of either oil fractions or fish larvae. In ocean circulation models the momentum flux from the atmosphere, which is related to the wind speed, is passed directly to the ocean, controlled by the drag coefficient. In the real ocean, however, the waves also act as a reservoir for momentum and energy, because varying amounts of the momentum flux from the atmosphere are taken up by the waves. In the coupled model system the momentum transferred into the ocean model is estimated as the fraction of the total flux that goes directly to the currents plus the momentum lost from wave dissipation. Additionally, we demonstrate that the wave-induced Stokes-Coriolis force leads to a deflection of the current. During extreme events the Stokes velocity is comparable in magnitude to the current velocity, and the resulting wave-induced drift is crucial for the transport of particles in the upper ocean. The sensitivity analyses performed demonstrate that the model skill depends on the chosen processes. The results are validated using surface drifters, ADCP, HF radar data and other in-situ measurements in different regions of the North Sea, with a focus on the coastal areas. Using a coupled model system reveals that the newly introduced wave effects are important for drift-model performance, especially during extremes. These effects cannot be neglected in search and rescue, oil-spill, biological material transport, or larva drift modelling.
Experimental study of particle transport and density fluctuation in LHD
International Nuclear Information System (INIS)
Tanaka, K.; Michael, C.; Sanin, A.
2005-01-01
A variety of electron density (n_e) profiles have been observed in the Large Helical Device (LHD). The density profiles change dramatically with heating power and toroidal magnetic field (B_t) at the same line-averaged density. The particle transport coefficients, i.e., the diffusion coefficient (D) and convection velocity (V), are obtained experimentally in the standard configuration from density modulation experiments. The values of D and V are estimated separately in the core and edge. The diffusion coefficients are found to be a strong function of electron temperature (T_e), proportional to T_e^(1.7±0.9) in the core and T_e^(1.1±0.14) in the edge. Edge diffusion coefficients are proportional to B_t^(-2.08). It is found that the scaling of D in the edge is close to gyro-Bohm-like in nature. Non-zero V is observed, and it is found that the electron temperature gradient can drive particle convection, particularly in the core region. The convection velocity in the core reverses direction from inward to outward as the T_e gradient increases. In the edge, convection is inward directed in most cases of the present data set; it shows a modest tendency, being proportional to the T_e gradient and remaining inward directed. However, the toroidal magnetic field also significantly affects the value and direction of V. The density fluctuation spectrum varies with heating power, suggesting that it has an influence on particle transport. The value of k_⊥ρ_i is around 0.1, as expected for gyro-Bohm diffusion. Fluctuations are localized in both positive and negative density gradient regions of the hollow density profiles. The fluctuation power in each region is clearly distinguished, having different phase velocity profiles. (author)
Identification of nuclear power plant transients using the Particle Swarm Optimization algorithm
International Nuclear Information System (INIS)
Canedo Medeiros, Jose Antonio Carlos; Schirru, Roberto
2008-01-01
In order to help nuclear power plant operators reduce their cognitive load and increase the time available to keep the plant operating in a safe condition, transient identification systems have been devised to help operators identify possible plant transients and take fast, correct actions in due time. In the design of classification systems for the identification of nuclear power plant transients, several artificial intelligence techniques have been used, involving expert systems, neuro-fuzzy systems and genetic algorithms. In this work we explore the ability of the Particle Swarm Optimization algorithm (PSO) as a tool for optimizing a distance-based discrimination transient classification method, giving also an innovative solution for searching the best set of prototypes for the identification of transients. The Particle Swarm Optimization algorithm was successfully applied to the optimization of a nuclear power plant transient identification problem. Compared with similar methods found in the literature, the PSO has shown better results.
A tracking algorithm for the reconstruction of the daughters of long-lived particles in LHCb
Dendek, Adam Mateusz
2018-01-01
A tracking algorithm for the reconstruction of the daughters of long-lived particles in LHCb. Poster session, LHC experiments, 5 Jun 2018, 16:00, 1h 30m, Library, Centro San Domenico. Speaker: Katharina Mueller (Universitaet Zuerich (CH)). Description: The LHCb experiment at CERN operates a high-precision and robust tracking system to reach its physics goals, including precise measurements of CP-violation phenomena in the heavy flavour quark sector and searches for New Physics beyond the Standard Model. The track reconstruction procedure is performed by a number of algorithms. One of these, PatLongLivedTracking, is optimised to reconstruct "downstream tracks", which are tracks originating from decays outside the LHCb vertex detector of long-lived particles, such as Ks or Λ0. After an overview of the LHCb tracking system, we provide a detailed description of the LHCb downstream track reconstruction algorithm. Its computational intelligence part is described in detail, including the adaptation of the employed...
Identification of nuclear power plant transients using the Particle Swarm Optimization algorithm
Energy Technology Data Exchange (ETDEWEB)
Canedo Medeiros, Jose Antonio Carlos [Universidade Federal do Rio de Janeiro, PEN/COPPE, UFRJ, Ilha do Fundao s/n, CEP 21945-970 Rio de Janeiro (Brazil)], E-mail: canedo@lmp.ufrj.br; Schirru, Roberto [Universidade Federal do Rio de Janeiro, PEN/COPPE, UFRJ, Ilha do Fundao s/n, CEP 21945-970 Rio de Janeiro (Brazil)], E-mail: schirru@lmp.ufrj.br
2008-04-15
Yu, Chaoyin; Yuan, Zhengwu; Wu, Yuanfeng
2017-10-01
Hyperspectral image unmixing is an important part of hyperspectral data analysis. Mixed pixel decomposition consists of two steps: endmember extraction (the unique signatures of pure ground components) and abundance estimation (the proportion of each endmember in each pixel). Recently, a Discrete Particle Swarm Optimization algorithm (DPSO) was proposed for accurately extracting endmembers with high optimization performance. However, the DPSO algorithm has very high computational complexity, which makes the endmember extraction procedure very time consuming for hyperspectral image unmixing. Thus, in this paper, the DPSO endmember extraction algorithm was parallelized, implemented on the CUDA (GPU K20) platform, and evaluated with real hyperspectral remote sensing data. The experimental results show that, as the number of particles increases, the parallelized version obtains much higher computing efficiency while maintaining the same endmember extraction accuracy.
Directory of Open Access Journals (Sweden)
Yu Huang
Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and can essentially be formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation problem for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO; this characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is constrained by the quantum evolution equation, which consists of the current rotation angle, the individual optimal quantum rotation angle, and the global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.
The Splashback Radius of Halos from Particle Dynamics. I. The SPARTA Algorithm
Diemer, Benedikt
2017-07-01
Motivated by the recent proposal of the splashback radius as a physical boundary of dark-matter halos, we present a parallel computer code for Subhalo and PARticle Trajectory Analysis (SPARTA). The code analyzes the orbits of all simulation particles in all host halos, billions of orbits in the case of typical cosmological N-body simulations. Within this general framework, we develop an algorithm that accurately extracts the location of the first apocenter of particles after infall into a halo, or splashback. We define the splashback radius of a halo as the smoothed average of the apocenter radii of individual particles. This definition allows us to reliably measure the splashback radii of 95% of host halos above a resolution limit of 1000 particles. We show that, on average, the splashback radius and mass are converged to better than 5% accuracy with respect to mass resolution, snapshot spacing, and all free parameters of the method.
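The apocenter-extraction step described above can be sketched for a single particle's halo-centric radius history: find the first local maximum of r(t) that follows the first local minimum (the pericenter passage after infall). This is a generic local-extremum search under assumed inputs, not the SPARTA code's actual algorithm, and the function name is illustrative.

```python
def first_apocenter(radii):
    """Return (index, radius) of a particle's first apocenter after
    infall: the first local maximum of its halo-centric radius r(t)
    that follows the first local minimum (pericenter).
    Returns None if no full peri-to-apo passage is present.
    """
    seen_pericenter = False
    for i in range(1, len(radii) - 1):
        if radii[i] < radii[i - 1] and radii[i] <= radii[i + 1]:
            seen_pericenter = True  # pericenter passage detected
        elif seen_pericenter and radii[i] > radii[i - 1] and radii[i] >= radii[i + 1]:
            return i, radii[i]  # first apocenter after pericenter
    return None
```

Averaging such per-particle apocenter radii (with smoothing) is how the abstract defines the halo's splashback radius.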
Mehdinejadiani, Behrouz
2017-08-01
This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Both numerical and experimental studies were performed to certify the integrity of the Bees Algorithm; the experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routine in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm estimated the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. The results obtained from the Bees Algorithm were also more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performance for all cases, while the Genetic Algorithm and LSQNONLIN performed differently across cases. The performance of LSQNONLIN depends strongly on the initial guess values, so that, compared to the Genetic Algorithm, it can estimate the sFADE parameters more accurately when suitable initial guesses are provided. To sum up, the Bees Algorithm was found to be a very simple, robust and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation.
International Nuclear Information System (INIS)
Krommes, J.A.; Kleva, R.G.; Oberman, C.
1978-05-01
A systematic theory is developed for the computation of electron transport in stochastic magnetic fields. Small scale magnetic perturbations arising, for example, from finite-β micro-instabilities are assumed to destroy the flux surfaces of a standard tokamak equilibrium. Because the magnetic lines then wander in a volume, electron radial flux is enhanced due to the rapid particle transport along as well as across the lines. By treating the magnetic lines as random variables, it is possible to develop a kinetic equation for the electron distribution function. This is solved approximately to yield the diffusion coefficient
Hybrid Artificial Bee Colony Algorithm and Particle Swarm Search for Global Optimization
Directory of Open Access Journals (Sweden)
Wang Chun-Feng
2014-01-01
Artificial bee colony (ABC) is one of the most recent swarm-intelligence-based algorithms and has been shown to be competitive with other population-based algorithms. However, ABC still has a deficiency in its solution search equation, which is good at exploration but poor at exploitation. To overcome this problem, we propose a novel artificial bee colony algorithm based on a particle swarm search mechanism. In this algorithm, the initial population is first generated using good-point-set theory rather than random selection, to improve convergence speed. Second, to enhance exploitation ability, the employed bees, onlookers, and scouts use the PSO mechanism to search for new candidate solutions. Finally, to further improve searching ability, a chaotic search operator is applied to the best solution of the current iteration. Our algorithm is tested on some well-known benchmark functions and compared with other algorithms. Results show that our algorithm has good performance.
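The PSO search mechanism that this hybrid borrows can be sketched as the canonical velocity/position update, in which each particle is pulled toward its personal best and the swarm's global best. This is a generic textbook sketch, not the paper's implementation; the inertia weight w and acceleration coefficients c1, c2 are conventional defaults, not values from the paper.

```python
import random

def pso_step(positions, velocities, pbest, gbest,
             w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update, in place.

    positions, velocities, pbest: lists of per-particle coordinate lists.
    gbest: coordinates of the best solution found by the whole swarm.
    """
    for i, (x, v) in enumerate(zip(positions, velocities)):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            # inertia + cognitive pull (personal best) + social pull (global best)
            v[d] = (w * v[d]
                    + c1 * r1 * (pbest[i][d] - x[d])
                    + c2 * r2 * (gbest[d] - x[d]))
            x[d] += v[d]
    return positions, velocities
```

In the hybrid described above, this update replaces ABC's own solution search equation when bees generate candidate solutions.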
Algorithms for the optimization of RBE-weighted dose in particle therapy.
Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M
2013-01-21
We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. For the dose calculation, carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms in GSI's treatment planning system TRiP98, such as the BFGS algorithm and the method of conjugate gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented in terms of convergence per iteration and computation time. We found that the Fletcher-Reeves variant of the method of conjugate gradients has the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared with the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes, leading to good dose distributions. At the end we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.
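The Fletcher-Reeves scheme singled out above can be sketched as follows: steepest descent on the first step, then search directions combined with the Fletcher-Reeves beta, the ratio of successive squared gradient norms. The backtracking line search and all constants are generic textbook choices, not TRiP98's implementation.

```python
def fletcher_reeves(f, grad, x, max_iter=100, tol=1e-8):
    """Minimize f by nonlinear conjugate gradients with the
    Fletcher-Reeves beta and a simple backtracking line search."""
    g = grad(x)
    d = [-gi for gi in g]  # first direction: steepest descent
    for _ in range(max_iter):
        # Backtracking (Armijo) line search along d.
        t, fx = 1.0, f(x)
        slope = sum(gi * di for gi, di in zip(g, d))
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * slope:
            t *= 0.5
            if t < 1e-12:
                break
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        if sum(gi * gi for gi in g_new) < tol:
            break
        # Fletcher-Reeves beta: |g_new|^2 / |g|^2
        beta = sum(gi * gi for gi in g_new) / sum(gi * gi for gi in g)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x
```

On ill-conditioned quadratic-like objectives this direction reuse is what buys the reported speedup over plain steepest descent.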
Dagum, Leonardo
1989-01-01
The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.
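The ranking step described above, arranging particle records so that all particles in the same cell are contiguous, can be sketched serially as a counting sort rank computation. This is a stand-in for the data-parallel rank on the Connection Machine, under the assumption of integer cell indices; the function name is illustrative.

```python
def rank_by_cell(cell_of_particle, n_cells):
    """Compute, for each particle, its rank in a cell-ordered layout:
    particles of cell 0 first, then cell 1, etc., preserving input
    order within a cell (a serial counting-sort rank computation)."""
    counts = [0] * n_cells
    for c in cell_of_particle:
        counts[c] += 1
    # Exclusive prefix sum: starting offset of each cell's block.
    offsets, total = [], 0
    for c in counts:
        offsets.append(total)
        total += c
    ranks = []
    for c in cell_of_particle:
        ranks.append(offsets[c])
        offsets[c] += 1  # next particle of this cell goes one slot later
    return ranks
```

Scattering record i to position ranks[i] then gives immediate, contiguous access to every cell's particles, which is the property the simulation needs each time step.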
Genetic particle swarm parallel algorithm analysis of optimization arrangement on mistuned blades
Zhao, Tianyu; Yuan, Huiqun; Yang, Wenjun; Sun, Huagang
2017-12-01
This article introduces a method of mistuned parameter identification consisting of static frequency testing of blades, dichotomy and finite element analysis. A lumped parameter model of an engine bladed-disc system is then set up. A blade arrangement optimization method, the genetic particle swarm optimization algorithm, is presented. It combines a discrete particle swarm optimization with a genetic algorithm, introducing both local and global search ability. CUDA-based co-evolution particle swarm optimization, using a graphics processing unit, is presented and its performance is analysed. The results show that using the optimization results can reduce the amplitude and localization of the forced vibration response of a bladed-disc system, while optimization based on the CUDA framework can improve the computing speed. This method could provide support for engineering applications in terms of effectiveness and efficiency.
Electrokinetic Particle Transport in Micro-/Nanofluidics: Direct Numerical Simulation Analysis
Qian, Shizhi
2012-01-01
Numerous applications of micro-/nanofluidics are related to particle transport in micro-/nanoscale channels, and electrokinetics has proved to be one of the most promising tools to manipulate particles in micro-/nanofluidics. Therefore, a comprehensive understanding of electrokinetic particle transport in micro-/nanoscale channels is crucial to the development of micro-/nanofluidic devices. Electrokinetic Particle Transport in Micro-/Nanofluidics: Direct Numerical Simulation Analysis provides a fundamental understanding of electrokinetic particle transport in micro-/nanofluidics involving elect
Hybrid particle swarm optimization algorithm and its application in nuclear engineering
International Nuclear Information System (INIS)
Liu, C.Y.; Yan, C.Q.; Wang, J.J.
2014-01-01
Highlights: • We propose a hybrid particle swarm optimization algorithm (HPSO). • A modified Nelder–Mead simplex search method is applied in HPSO. • The algorithm has high search precision and rapid calculation speed. • HPSO can be used in nuclear engineering optimization design problems. - Abstract: A hybrid particle swarm optimization algorithm with a feasibility-based rule for solving constrained optimization problems has been developed in this research. First, the zone of the global optimal solution is obtained through the particle swarm optimization process; the refined search for the global optimal solution is then achieved through the modified Nelder–Mead simplex algorithm. Simulations based on two well-studied benchmark problems demonstrate that the proposed algorithm is an efficient alternative for solving constrained optimization problems. The vertical electrical heating pressurizer is one of the key components in the reactor coolant system. A mathematical model of the pressurizer has been established in steady state, and the optimization design of the pressurizer weight has been carried out with the HPSO algorithm. The results show the pressurizer weight can be reduced by 16.92%. The thermal efficiencies of conventional PWR nuclear power plants are so far about 31–35%, much lower than those of fossil-fueled plants based on a steam cycle. A thermal equilibrium mathematical model for the nuclear power plant secondary loop has been established, and an optimization case study has been conducted to improve the efficiency of the nuclear power plant with the proposed algorithm. The results show the thermal efficiency is improved by 0.5%.
Helium, iron and electron particle transport and energy transport studies on the TFTR tokamak
International Nuclear Information System (INIS)
Synakowski, E.J.; Efthimion, P.C.; Rewoldt, G.; Stratton, B.C.; Tang, W.M.; Grek, B.; Hill, K.W.; Hulse, R.A.; Johnson, D.W.; Mansfield, D.K.; McCune, D.; Mikkelsen, D.R.; Park, H.K.; Ramsey, A.T.; Redi, M.H.; Scott, S.D.; Taylor, G.; Timberlake, J.; Zarnstorff, M.C.
1993-03-01
Results from helium, iron, and electron transport on TFTR in L-mode and Supershot deuterium plasmas with the same toroidal field, plasma current, and neutral beam heating power are presented. They are compared to results from thermal transport analysis based on power balance. Particle diffusivities and thermal conductivities are radially hollow and larger than neoclassical values, except possibly near the magnetic axis. The ion channel dominates over the electron channel in both particle and thermal diffusion. A peaked helium profile, supported by inward convection that is stronger than predicted by neoclassical theory, is measured in the Supershot. The helium profile shape is consistent with predictions from quasilinear electrostatic drift-wave theory. While the perturbative particle diffusion coefficients of all three species are similar in the Supershot, differences are found in the L-mode. Quasilinear theory calculations of the ratios of impurity diffusivities are in good accord with measurements. Theory estimates indicate that the ion heat flux should be larger than the electron heat flux, consistent with power balance analysis. However, theoretical values of the ratio of the ion to electron heat flux can be more than a factor of three larger than experimental values. A correlation between helium diffusion and ion thermal transport is observed and has favorable implications for sustained ignition of a tokamak fusion reactor.
Helium, Iron and Electron Particle Transport and Energy Transport Studies on the TFTR Tokamak
Synakowski, E. J.; Efthimion, P. C.; Rewoldt, G.; Stratton, B. C.; Tang, W. M.; Grek, B.; Hill, K. W.; Hulse, R. A.; Johnson, D. W.; Mansfield, D. K.; McCune, D.; Mikkelsen, D. R.; Park, H. K.; Ramsey, A. T.; Redi, M. H.; Scott, S. D.; Taylor, G.; Timberlake, J.; Zarnstorff, M. C. (Princeton Univ., NJ (United States). Plasma Physics Lab.); Kissick, M. W. (Wisconsin Univ., Madison, WI (United States))
1993-03-01
Results from helium, iron, and electron transport on TFTR in L-mode and Supershot deuterium plasmas with the same toroidal field, plasma current, and neutral beam heating power are presented. They are compared to results from thermal transport analysis based on power balance. Particle diffusivities and thermal conductivities are radially hollow and larger than neoclassical values, except possibly near the magnetic axis. The ion channel dominates over the electron channel in both particle and thermal diffusion. A peaked helium profile, supported by inward convection that is stronger than predicted by neoclassical theory, is measured in the Supershot. The helium profile shape is consistent with predictions from quasilinear electrostatic drift-wave theory. While the perturbative particle diffusion coefficients of all three species are similar in the Supershot, differences are found in the L-mode. Quasilinear theory calculations of the ratios of impurity diffusivities are in good accord with measurements. Theory estimates indicate that the ion heat flux should be larger than the electron heat flux, consistent with power balance analysis. However, theoretical values of the ratio of the ion to electron heat flux can be more than a factor of three larger than experimental values. A correlation between helium diffusion and ion thermal transport is observed and has favorable implications for sustained ignition of a tokamak fusion reactor.
Relationship between particle and heat transport in JT-60U plasmas with internal transport barrier
International Nuclear Information System (INIS)
Takenaga, H.
2002-01-01
The relationship between particle and heat transport in an internal transport barrier (ITB) has been systematically investigated for the first time in reversed shear (RS) and high-βp ELMy H-mode (weak positive shear) plasmas of JT-60U, in order to understand the compatibility of improved energy confinement with effective particle control, such as exhaust of helium ash and reduction of impurity contamination. In the RS plasma, no helium or carbon accumulation inside the ITB is observed, even with highly improved energy confinement. In the high-βp plasma, both the helium and carbon density profiles are flat. As the ion temperature profile changes from parabolic- to box-type, the helium diffusivity in the RS plasma decreases by a factor of about 2, as does the ion thermal diffusivity. In the Ar-injected RS plasma with the box-type profile, the measured soft X-ray profile is more peaked than that calculated by assuming the same n_Ar profile as the n_e profile, suggesting accumulation of Ar inside the ITB. Particle transport is improved with no change of ion temperature in the RS plasma when the density fluctuation is drastically reduced by a pellet injection. (author)
Multi-objective Reactive Power Optimization Based on Improved Particle Swarm Algorithm
Cui, Xue; Gao, Jian; Feng, Yunbin; Zou, Chenlu; Liu, Huanlei
2018-01-01
In this paper, an optimization model is proposed for the reactive power optimization problem, with minimum active power loss, minimum node voltage deviation, and maximum static voltage stability margin as the objectives. By defining an index value of reactive power compensation, the optimal reactive power compensation nodes are selected. The particle swarm optimization algorithm is improved by introducing a selection pool of global bests and a probabilistic global best (p-gbest). A set of Pareto-optimal solutions is obtained by this algorithm, and by calculating the fuzzy membership value of each solution in the Pareto set, the individual with the smallest fuzzy membership value is selected as the final optimization result. The improved algorithm is used to optimize the reactive power of the IEEE 14-bus standard test system. Comparison and analysis of the results show that the algorithm performs well.
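The selection step described above, filtering the Pareto set and ranking it by fuzzy membership, can be sketched as follows for a minimization problem. The linear membership function used here is a common convention assumed for illustration, not necessarily the paper's exact formula:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors, preserving input order."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def fuzzy_membership(front):
    """Per-solution satisfaction in [0, 1], averaged over normalized objectives."""
    n_obj = len(front[0])
    lo = [min(p[k] for p in front) for k in range(n_obj)]
    hi = [max(p[k] for p in front) for k in range(n_obj)]
    out = []
    for p in front:
        mu = [1.0 if hi[k] == lo[k] else (hi[k] - p[k]) / (hi[k] - lo[k])
              for k in range(n_obj)]
        out.append(sum(mu) / n_obj)
    return out

pts = [(1, 5), (2, 2), (5, 1), (4, 4)]   # objective vectors (minimize both)
front = pareto_front(pts)                # (4, 4) is dominated by (2, 2)
membership = fuzzy_membership(front)
```

The final pick is then a simple argmin/argmax over the membership values; the abstract selects the individual with the smallest value.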
Directory of Open Access Journals (Sweden)
Jiaxi Wang
2016-01-01
The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements, and the constraints include track occupation conflicts, shunting route conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and the EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm in terms of optimality.
Jin, Junchen
2016-01-01
The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements, and the constraints include track occupation conflicts, shunting route conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and the EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm in terms of optimality. PMID:27436998
International Nuclear Information System (INIS)
Hong, W.-C.
2009-01-01
Accurate forecasting of electric load has always been one of the most important issues in the electricity industry, particularly for developing countries. Owing to various influences, electric load exhibits highly nonlinear characteristics. Recently, support vector regression (SVR), with its nonlinear mapping capability, has been successfully employed to solve nonlinear regression and time series problems. However, systematic approaches to determine an appropriate parameter combination for an SVR model are still lacking. This investigation elucidates the feasibility of applying a chaotic particle swarm optimization (CPSO) algorithm to choose a suitable parameter combination for an SVR model. The empirical results reveal that the proposed model outperforms two comparison models based on other algorithms, the genetic algorithm (GA) and the simulated annealing algorithm (SA). Finally, a theoretical exploration of the electric load forecasting support system (ELFSS) is also provided.
Directory of Open Access Journals (Sweden)
Keivan Borna
2015-12-01
The traveling salesman problem (TSP) is a well-established NP-complete problem, and many evolutionary techniques such as particle swarm optimization (PSO) are used to improve existing solutions for it. PSO is a method inspired by the social behavior of birds: each member changes its position in the search space according to the personal experience of the individual or the social experience of the whole swarm. In this paper, we combine the principles of PSO with the crossover operator of the genetic algorithm to propose a heuristic algorithm for solving the TSP more efficiently. Finally, experimental results on instances from TSPLIB demonstrate the effectiveness of our method and show that our algorithm can achieve better results than other approaches.
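A permutation-preserving crossover is the key ingredient of such a PSO-GA hybrid for the TSP. The abstract does not specify the operator, so the sketch below assumes order crossover (OX), a common choice for tour representations:

```python
import random

def order_crossover(p1, p2, rng=random):
    """Order crossover (OX): copy a slice from p1, fill the rest in p2's order."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]              # slice inherited from parent 1
    fill = [c for c in p2 if c not in child]  # remaining cities, parent 2's order
    k = 0
    for idx in range(n):
        if child[idx] is None:
            child[idx] = fill[k]
            k += 1
    return child

def tour_length(tour, dist):
    """Length of a closed tour under a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

rng = random.Random(0)
child = order_crossover(list(range(8)), [7, 6, 5, 4, 3, 2, 1, 0], rng)
```

By construction the child visits every city exactly once, which is what makes OX suitable for blending two tours inside a swarm update.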
Directory of Open Access Journals (Sweden)
Weizhe Zhang
2014-01-01
Energy consumption in computer systems has become an increasingly important issue, and high energy consumption has already damaged the environment to some extent, especially in heterogeneous multiprocessors. In this paper, we first formulate and describe the energy-aware real-time task scheduling problem in heterogeneous multiprocessors. We then propose a particle swarm optimization (PSO) based algorithm, which successfully reduces both the energy cost and the time needed to search for feasible solutions. Experimental results show that the PSO-based energy-aware metaheuristic uses 40%–50% less energy than the GA-based and SFLA-based algorithms and spends 10% less time than the SFLA-based algorithm in finding solutions. It can also find 19% more feasible solutions than the SFLA-based algorithm.
An evolutionary algorithm for order splitting with multiple transport alternatives
Dullaert, Wout; Maes, Bart; Vernimmen, Bert; Witlox, Frank
In this paper, a new methodology is suggested for determining the optimal mix of transport alternatives to minimize total logistics costs when goods are shipped from a supplier to a receiver. The total logistics costs comprise order costs, transportation costs and inventory costs. It is assumed that
Transport, Acceleration and Spatial Access of Solar Energetic Particles
Borovikov, D.; Sokolov, I.; Effenberger, F.; Jin, M.; Gombosi, T. I.
2017-12-01
Solar Energetic Particles (SEPs) are a major component of space weather. Often driven by coronal mass ejections (CMEs), SEPs have a very high destructive potential, which includes but is not limited to disrupting communication systems on Earth, inflicting harmful and potentially fatal radiation doses on crew members onboard spacecraft and, in extreme cases, on people aboard high-altitude flights. However, the research community currently lacks efficient tools to predict such hazardous SEP events. Such a tool would serve as the first step towards improving humanity's preparedness for SEP events and ultimately its ability to mitigate their effects. The main goal of the presented research is to develop a computational tool that provides these capabilities and meets the community's demand. Our model has forecasting capability and can be the basis for an operational system that provides live information on current potential threats posed by SEPs, based on observations of the Sun. The tool comprises several numerical models designed to simulate different physical aspects of SEPs. The background conditions in the interplanetary medium, in particular the coronal mass ejection driving the particle acceleration, play a defining role and are simulated with a state-of-the-art MHD solver, the Block-Adaptive-Tree Solar-wind Roe-type Upwind Scheme (BATS-R-US). The newly developed particle code, the Multiple-Field-Line-Advection Model for Particle Acceleration (M-FLAMPA), simulates the actual transport and acceleration of SEPs and is coupled to the MHD code. The computational model takes full advantage of a special property of SEPs, their tendency to follow magnetic lines of force, by substituting a multitude of 1-D models for a complicated 3-D model. This approach significantly simplifies the computations and improves the time performance of the overall model. It also plays the important role of mapping the affected region by connecting it with the origin of
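M-FLAMPA's reduction of 3-D transport to many independent 1-D problems along magnetic field lines can be illustrated with a minimal 1-D advection step. The first-order upwind scheme, periodic boundary, and grid parameters below are assumptions for illustration, not the model's actual numerics:

```python
def advect_1d(u, speed, dx, dt, steps):
    """First-order upwind advection of a density profile u along one field line.

    Assumes speed > 0 on a periodic domain; the Courant number c must satisfy
    0 < c <= 1 for stability.
    """
    c = speed * dt / dx
    assert 0.0 < c <= 1.0, "CFL condition violated"
    u = list(u)
    for _ in range(steps):
        # u[i - 1] wraps to u[-1] at i = 0, giving the periodic boundary.
        u = [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]
    return u
```

With many field lines, each line gets its own independent call like this, which is what makes the 1-D decomposition cheap to parallelize.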
A parallel version of a multigrid algorithm for isotropic transport equations
International Nuclear Information System (INIS)
Manteuffel, T.; McCormick, S.; Yang, G.; Morel, J.; Oliveira, S.
1994-01-01
The focus of this paper is on a parallel algorithm for solving the transport equations in a slab geometry using multigrid. The spatial discretization scheme used is a finite element method called the modified linear discontinuous (MLD) scheme. The MLD scheme represents a lumped version of the standard linear discontinuous (LD) scheme. The parallel algorithm was implemented on the Connection Machine 2 (CM2). Convergence rates and timings for this algorithm on the CM2 and Cray-YMP are shown
Bourque, Alexandra E; Bedwani, Stéphane; Carrier, Jean-François; Ménard, Cynthia; Borman, Pim; Bos, Clemens; Raaymakers, Bas W; Mickevicius, Nikolai; Paulson, Eric; Tijssen, Rob H N
PURPOSE: To assess overall robustness and accuracy of a modified particle filter-based tracking algorithm for magnetic resonance (MR)-guided radiation therapy treatments. METHODS AND MATERIALS: An improved particle filter-based tracking algorithm was implemented, which used a normalized
The energy band memory server algorithm for parallel Monte Carlo transport calculations
International Nuclear Information System (INIS)
Felker, K.G.; Siegel, A.R.; Smith, K.S.; Romano, P.K.; Forget, B.
2013-01-01
An algorithm is developed to significantly reduce the on-node footprint of cross section memory in Monte Carlo particle tracking. The classic method of per-node replication of cross section data is replaced by a memory server model, in which the read-only lookup tables reside on a remote set of disjoint processors. The main particle tracking algorithm is then modified to make efficient use of the remotely stored data. Results of a prototype code on a Blue Gene/Q installation reveal that the penalty for remote storage is reasonable in the context of time scales for real-world applications, thus yielding a path forward for a broad range of applications that are memory bound using current techniques. (authors)
Energy Technology Data Exchange (ETDEWEB)
Kopp, Andreas [Université Libre de Bruxelles, Service de Physique Statistique et des Plasmas, CP 231, B-1050 Brussels (Belgium); Wiengarten, Tobias; Fichtner, Horst [Institut für Theoretische Physik IV, Ruhr-Universität Bochum, D-44780 Bochum (Germany); Effenberger, Frederic [Department of Physics and KIPAC, Stanford University, Stanford, CA 94305 (United States); Kühl, Patrick; Heber, Bernd [Institut für Experimentelle und Angewandte Physik, Christian-Albrecht-Universität zu Kiel, D-24098 Kiel (Germany); Raath, Jan-Louis; Potgieter, Marius S. [Centre for Space Research, North-West University, 2520 Potchefstroom (South Africa)
2017-03-01
The transport of cosmic rays (CRs) in the heliosphere is determined by the properties of the solar wind plasma. The heliospheric plasma environment has been probed by spacecraft for decades and provides a unique opportunity for testing transport theories. Of particular interest for three-dimensional (3D) heliospheric CR transport are structures such as corotating interaction regions (CIRs), which influence CR diffusion and drift through the enhancement of the magnetic field strength and magnetic fluctuations within them, and through the associated shocks and stream interfaces. In a three-part series of papers, we investigate these effects by modeling inner-heliospheric solar wind conditions with the numerical magnetohydrodynamic (MHD) framework Cronos (Wiengarten et al., referred to as Paper I), whose results serve as input to a transport code employing a stochastic differential equation approach (this paper). While in Paper I we presented results from 3D simulations with Cronos, the MHD output is now taken as input to the CR transport modeling. We discuss the diffusion and drift behavior of Galactic cosmic rays using the example of different theories, and study the effects of CIRs on these transport processes. In particular, we point out the wide range of possible particle fluxes at a given point in space resulting from these different theories. The restriction of this variety by fitting the numerical results to spacecraft data will be the subject of the third paper of this series.
International Nuclear Information System (INIS)
Coban, Ramazan
2011-01-01
Research highlights: → A closed-loop fuzzy logic controller based on the particle swarm optimization algorithm is proposed for controlling the power level of nuclear research reactors. → The proposed control system was tested for various initial and desired power levels, and it could control the reactor successfully in most situations. → The proposed controller is robust against disturbances. - Abstract: In this paper, a closed-loop fuzzy logic controller based on the particle swarm optimization algorithm is proposed for controlling the power level of nuclear research reactors. The fuzzy logic controller is based on rules constructed from numerical experiments made by means of a computer code for the core dynamics calculation and from a human operator's experience and knowledge. In addition to these intuitive and experimental design efforts, the consequent parts of the fuzzy rules are optimally (or near-optimally) determined using the particle swarm optimization algorithm. The contribution of the proposed algorithm to a reactor control system is investigated in detail. The performance of the controller is also tested with numerical simulations in numerous operating conditions, from various initial power levels to desired power levels, as well as under disturbance. It is shown that the proposed control system performs satisfactorily under almost all operating conditions, even in the case of very small initial power levels.
3D head pose estimation and tracking using particle filtering and ICP algorithm
Ben Ghorbel, Mahdi; Baklouti, Malek; Couvet, Serge
2010-01-01
This paper addresses the issue of 3D head pose estimation and tracking. Existing approaches generally need a huge database, a training procedure, manual initialization, or manually extracted face features. We propose a framework for estimating the 3D head pose at a fine level and tracking it continuously across multiple degrees of freedom (DOF) based on ICP and particle filtering. We approach the problem using 3D computational techniques, by aligning a face model to the dense 3D estimation computed by a stereo vision method, and propose a particle filter algorithm to refine and track the posterior estimate of the position of the face. This work makes two contributions. The first concerns the alignment part, where we propose an extended ICP algorithm using an anisotropic scale transformation. The second concerns the tracking part, where we propose the use of a particle filtering algorithm whose search space is constrained by the ICP algorithm in the propagation step. The results show that the system is able to fit and track the head properly and remains accurate on new individuals without manual adaptation or training. © Springer-Verlag Berlin Heidelberg 2010.
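The predict-weight-resample cycle underlying such particle-filter tracking can be sketched for a 1-D toy state. The random-walk motion model, Gaussian likelihood, and all numeric parameters here are assumptions for illustration; the paper's propagation step is additionally constrained by ICP:

```python
import math
import random

def pf_step(particles, weights, observation, motion_noise, obs_noise, rng):
    """One bootstrap-filter cycle: propagate, reweight, resample (1-D state)."""
    n = len(particles)
    # 1. Propagate each particle through a random-walk motion model.
    particles = [p + rng.gauss(0.0, motion_noise) for p in particles]
    # 2. Reweight by the Gaussian likelihood of the observation.
    weights = [w * math.exp(-0.5 * ((observation - p) / obs_noise) ** 2)
               for p, w in zip(particles, weights)]
    total = sum(weights)
    if total == 0.0:                      # degenerate case: reset to uniform
        weights = [1.0 / n] * n
    else:
        weights = [w / total for w in weights]
    # 3. Stratified resampling back to uniform weights.
    resampled, cum, j = [], weights[0], 0
    for i in range(n):
        pos = (i + rng.random()) / n      # one draw per stratum of [0, 1)
        while pos > cum and j < n - 1:
            j += 1
            cum += weights[j]
        resampled.append(particles[j])
    return resampled, [1.0 / n] * n

# Track a stationary true state at 2.0 from repeated noisy observations.
rng = random.Random(42)
parts = [rng.gauss(0.0, 2.0) for _ in range(400)]
wts = [1.0 / 400] * 400
for _ in range(12):
    parts, wts = pf_step(parts, wts, observation=2.0,
                         motion_noise=0.2, obs_noise=0.5, rng=rng)
estimate = sum(parts) / len(parts)
```

After a few cycles the particle cloud concentrates near the observed state, and the posterior mean serves as the tracked estimate.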
PS-FW: A Hybrid Algorithm Based on Particle Swarm and Fireworks for Global Optimization
Chen, Shuangqing; Wei, Lixin; Guan, Bing
2018-01-01
Particle swarm optimization (PSO) and fireworks algorithm (FWA) are two recently developed optimization methods which have been applied in various areas due to their simplicity and efficiency. However, when being applied to high-dimensional optimization problems, PSO algorithm may be trapped in the local optima owing to the lack of powerful global exploration capability, and fireworks algorithm is difficult to converge in some cases because of its relatively low local exploitation efficiency for noncore fireworks. In this paper, a hybrid algorithm called PS-FW is presented, in which the modified operators of FWA are embedded into the solving process of PSO. In the iteration process, the abandonment and supplement mechanism is adopted to balance the exploration and exploitation ability of PS-FW, and the modified explosion operator and the novel mutation operator are proposed to speed up the global convergence and to avoid prematurity. To verify the performance of the proposed PS-FW algorithm, 22 high-dimensional benchmark functions have been employed, and it is compared with PSO, FWA, stdPSO, CPSO, CLPSO, FIPS, Frankenstein, and ALWPSO algorithms. Results show that the PS-FW algorithm is an efficient, robust, and fast converging optimization method for solving global optimization problems. PMID:29675036
Fundamentals of charged particle transport in gases and condensed matter
Robson, Robert E; Hildebrandt, Malte
2018-01-01
This book offers a comprehensive and cohesive overview of transport processes associated with all kinds of charged particles, including electrons, ions, positrons, and muons, in both gases and condensed matter. The emphasis is on fundamental physics, linking experiment, theory and applications. In particular, the authors discuss: The kinetic theory of gases, from the traditional Boltzmann equation to modern generalizations A complementary approach: Maxwell’s equations of change and fluid modeling Calculation of ion-atom scattering cross sections Extension to soft condensed matter, amorphous materials Applications: drift tube experiments, including the Franck-Hertz experiment, modeling plasma processing devices, muon catalysed fusion, positron emission tomography, gaseous radiation detectors Straightforward, physically-based arguments are used wherever possible to complement mathematical rigor.
High energy particle transport code NMTC/JAM
International Nuclear Information System (INIS)
Niita, K.; Takada, H.; Meigo, S.; Ikeda, Y.
2001-01-01
We have developed a high energy particle transport code, NMTC/JAM, which is an upgraded version of NMTC/JAERI97. The available energy range of NMTC/JAM is, in principle, extended to 200 GeV for nucleons and mesons by including the high energy nuclear reaction code JAM for the intra-nuclear cascade part. We compare calculations by the NMTC/JAM code with experimental data from thin and thick targets for proton-induced reactions up to several tens of GeV. The results of the NMTC/JAM code show excellent agreement with the experimental data. From this code validation, it is concluded that NMTC/JAM is reliable for neutronics optimization studies of high-intensity spallation neutron utilization facilities. (author)
The simulation status of particle transport system JPTS
International Nuclear Information System (INIS)
Deng, L.
2015-01-01
The particle transport system JPTS has been developed by IAPCM. It is based on three support frameworks (JASMIN, JAUMIN and JCOGIN) and is used to simulate reactor full-core and radiation shielding problems. The system achieves high fidelity. In this presentation, analyses of the H-M, BEAVRS, VENUS-III and SG-III models are shown. HZP conditions of the BEAVRS model are analyzed with the Monte Carlo codes JMCT, MC21 and OpenMC to assess code accuracy against available data, and the feasibility of analyzing a PWR using JMCT is assessed. The large-scale depletion solver is also shown, and the feasibility of radiation shielding analysis using JSNT is assessed. JPTS has thus demonstrated full-core pin-by-pin and radiation shielding capability. (author)
Particle transport across a circular shear layer with coherent structures
International Nuclear Information System (INIS)
Nielsen, A.H.; Lynov, J.P.; Juul Rasmussen, J.
1998-01-01
In the study of the dynamics of coherent structures, forced circular shear flows offer many desirable features. The inherent quantisation of circular geometries due to the periodic boundary conditions makes it possible to design experiments in which the spatial and temporal complexity of the coherent structures can be accurately controlled. Experiments on circular shear flows demonstrating the formation of coherent structures have been performed in different physical systems, including quasi-neutral plasmas, non-neutral plasmas and rotating fluids. In this paper we investigate the evolution of such coherent structures by solving the forced incompressible Navier-Stokes equations numerically using a spectral code. The model is formulated in the context of a rotating fluid but applies equally well to low-frequency electrostatic oscillations in a homogeneous magnetized plasma. In order to reveal the Lagrangian properties of the flow, and in particular to investigate the transport capacity of the shear layer, passive particles are traced by the velocity field. (orig.)
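Tracing passive particles by a velocity field, as in the last step above, amounts to integrating each tracer through the flow. A minimal sketch with a second-order midpoint (RK2) step in an assumed solid-body rotation field follows; in the paper the velocity comes from the spectral Navier-Stokes solution, not from a closed-form field:

```python
import math

def velocity(x, y):
    """Assumed steady velocity field: solid-body rotation about the origin."""
    return -y, x

def trace(x, y, dt, steps):
    """Advect one passive tracer with the second-order midpoint (RK2) scheme."""
    for _ in range(steps):
        u1, v1 = velocity(x, y)
        u2, v2 = velocity(x + 0.5 * dt * u1, y + 0.5 * dt * v1)
        x, y = x + dt * u2, y + dt * v2
    return x, y

# One full revolution should return the tracer close to its starting point.
x1, y1 = trace(1.0, 0.0, dt=2.0 * math.pi / 1000, steps=1000)
```

For the rotation field the RK2 step nearly conserves the tracer's radius, which makes the closed-orbit check a useful sanity test of the integrator.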
International Nuclear Information System (INIS)
Miller, I.; Roman, K.
1979-12-01
In order to perform studies of the influence of regional groundwater flow systems on the long-term performance of potential high-level nuclear waste repositories, it was determined that an adequate computer model would have to consider the full three-dimensional flow system. Golder Associates' SOLTR code, while three-dimensional, has an overly simple algorithm for simulating the passage of radionuclides from one aquifer to another above or below it. Part 1 of this report describes the algorithm developed to provide SOLTR with an improved capability for simulating interaquifer transport.
Directory of Open Access Journals (Sweden)
Kazem Mohammadi- Aghdam
2015-10-01
This paper proposes the application of a new version of the heuristic particle swarm optimization (PSO) method for designing water distribution networks (WDNs). The optimization problem of looped water distribution networks is recognized as an NP-hard combinatorial problem which cannot be easily solved using traditional mathematical optimization techniques. In this paper, the concept of dynamic swarm size is considered in an attempt to increase the convergence speed of the original PSO algorithm: the size of the swarm is changed dynamically according to the iteration number of the algorithm. Furthermore, a novel mutation approach is introduced to increase the diversification property of the PSO and to help the algorithm avoid trapping in local optima. The new version of the PSO algorithm is called dynamic mutated particle swarm optimization (DMPSO). The proposed DMPSO is then applied to solve WDN design problems. Finally, two illustrative examples are used to verify the efficiency of the proposed DMPSO in comparison with other intelligent algorithms.
Qi, Xin; Ju, Guohao; Xu, Shuyan
2018-04-10
The phase diversity (PD) technique needs optimization algorithms to minimize the error metric and find the global minimum. Particle swarm optimization (PSO) is very suitable for PD due to its simple structure, fast convergence, and global searching ability. However, the traditional PSO algorithm for PD still suffers from the stagnation problem (premature convergence), which can result in a wrong solution. In this paper, the stagnation problem of the traditional PSO algorithm for PD is illustrated first. Then, an explicit strategy is proposed to solve this problem, based on an in-depth understanding of the inherent optimization mechanism of the PSO algorithm. Specifically, a criterion is proposed to detect premature convergence; then a redistributing mechanism is proposed to prevent premature convergence. To improve the efficiency of this redistributing mechanism, randomized Halton sequences are further introduced to ensure the uniform distribution and randomness of the redistributed particles in the search space. Simulation results show that this strategy can effectively solve the stagnation problem of the PSO algorithm for PD, especially for large-scale and high-dimension wavefront sensing and noisy conditions. This work is further verified by an experiment. This work can improve the robustness and performance of PD wavefront sensing.
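The redistribution step described above relies on Halton sequences for uniform, low-discrepancy coverage of the search space. A minimal generator can be sketched as follows; using one prime base per dimension is the standard construction, but the dimensions and bases chosen here are illustrative:

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base."""
    result, f, i = 0.0, 1.0 / base, index
    while i > 0:
        result += f * (i % base)   # append the next digit of index in `base`
        i //= base
        f /= base
    return result

def halton_point(index, bases=(2, 3)):
    """One point of a low-discrepancy sequence in [0, 1)^d, one prime base per dimension."""
    return tuple(halton(index, b) for b in bases)
```

Redistributing stagnated particles at successive Halton points then fills the search space far more evenly than fresh uniform random draws of the same size.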
International Nuclear Information System (INIS)
Li Yongjie; Yao Dezhong; Yao, Jonathan; Chen Wufan
2005-01-01
Automatic beam angle selection is an important but challenging problem for intensity-modulated radiation therapy (IMRT) planning. Though many efforts have been made, it is still not very satisfactory in clinical IMRT practice because of the extensive computation required by the inverse problem. In this paper, a new technique named BASPSO (Beam Angle Selection with a Particle Swarm Optimization algorithm) is presented to improve the efficiency of the beam angle optimization problem. Originally developed as a tool for simulating social behaviour, the particle swarm optimization (PSO) algorithm is a relatively new population-based evolutionary optimization technique first introduced by Kennedy and Eberhart in 1995. In the proposed BASPSO, the beam angles are optimized using PSO by treating each beam configuration as a particle (individual), and the beam intensity maps for each beam configuration are optimized using the conjugate gradient (CG) algorithm. These two optimization processes are implemented iteratively. The performance of each individual is evaluated by a fitness value calculated with a physical objective function, and a population of these individuals is evolved through generations by cooperation and competition among the individuals themselves. The optimization results for a simulated case with known optimal beam angles and two clinical cases (a prostate case and a head-and-neck case) show that PSO is valid and efficient and can speed up the beam angle optimization process. Furthermore, performance comparisons based on the preliminary results indicate that, as a whole, the PSO-based algorithm seems to outperform, or at least compete with, the GA-based algorithm in computation time and robustness. In conclusion, the reported work suggests that the introduced PSO algorithm could act as a promising new solution to the beam angle optimization problem and potentially to other optimization problems in IMRT, though further studies are needed.
A Particle Swarm Optimization Algorithm with Variable Random Functions and Mutation
Institute of Scientific and Technical Information of China (English)
ZHOU Xiao-Jun; YANG Chun-Hua; GUI Wei-Hua; DONG Tian-Xue
2014-01-01
The convergence analysis of the standard particle swarm optimization (PSO) has shown that changing the random functions, the personal best and the group best has the potential to improve the performance of the PSO. In this paper, a novel strategy with variable random functions and polynomial mutation is introduced into the PSO, called the particle swarm optimization algorithm with variable random functions and mutation (PSO-RM). Random functions are adjusted with the density of the population so as to control the weights of the cognitive and social components. Mutation is executed on both the personal best particle and the group best particle to explore new areas. Experimental results demonstrate the effectiveness of the strategy.
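The polynomial mutation applied to the best particles in PSO-RM can be illustrated with the standard Deb-Goyal operator. This is a hedged sketch of that operator in isolation; the distribution index `eta` and the bounds are assumed values, not taken from the paper.

```python
import random

def polynomial_mutation(x, lower, upper, eta=20.0, rng=random):
    """Polynomial mutation (Deb & Goyal): perturb each coordinate by a
    bounded, polynomially distributed delta, then clamp to [lower, upper]."""
    y = list(x)
    for d in range(len(y)):
        u = rng.random()
        if u < 0.5:
            delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
        else:
            delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
        # scale the unit delta by the variable range and clamp
        y[d] = min(max(y[d] + delta * (upper - lower), lower), upper)
    return y
```

Applied to the personal best and group best particles, the operator nudges them into nearby unexplored regions without leaving the feasible box.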
Innovations in ILC detector design using a particle flow algorithm approach
International Nuclear Information System (INIS)
Magill, S.; High Energy Physics
2007-01-01
The International Linear Collider (ILC) is a future e⁺e⁻ collider that will produce particles with masses up to the design center-of-mass (CM) energy of 500 GeV. The ILC complements the Large Hadron Collider (LHC) which, although colliding protons at 14 TeV in the CM, will be luminosity-limited to particle production with masses up to ∼1-2 TeV. At the ILC, interesting cross-sections are small, but there are no backgrounds from underlying events, so masses can be measured through hadronic decays to dijets (∼80% BR) as well as in leptonic decay modes. The precise measurement of jets will require major detector innovations, in particular to the calorimeter, which will be optimized to reconstruct final-state particle 4-vectors, the so-called particle flow algorithm approach to jet reconstruction.
Ef: Software for Nonrelativistic Beam Simulation by Particle-in-Cell Algorithm
Directory of Open Access Journals (Sweden)
Boytsov A. Yu.
2018-01-01
Understanding of particle dynamics is crucial in the construction of electron guns, ion sources and other types of nonrelativistic beam devices. Apart from external guiding and focusing systems, a prominent role in the evolution of such low-energy beams is played by particle-particle interaction. Numerical simulations taking these effects into account are typically accomplished by the well-known particle-in-cell method. In practice, a convenient simulation program should not only implement this method, but also support parallelization, provide integration with CAD systems and allow access to details of the simulation algorithm. To address these requirements, development of a new open source code, Ef, has been started. Its current features and main functionality are presented. Comparison with several analytical models demonstrates good agreement between the numerical results and the theory. Further development plans are discussed.
Ef: Software for Nonrelativistic Beam Simulation by Particle-in-Cell Algorithm
Boytsov, A. Yu.; Bulychev, A. A.
2018-04-01
Understanding of particle dynamics is crucial in the construction of electron guns, ion sources and other types of nonrelativistic beam devices. Apart from external guiding and focusing systems, a prominent role in the evolution of such low-energy beams is played by particle-particle interaction. Numerical simulations taking these effects into account are typically accomplished by the well-known particle-in-cell method. In practice, a convenient simulation program should not only implement this method, but also support parallelization, provide integration with CAD systems and allow access to details of the simulation algorithm. To address these requirements, development of a new open source code, Ef, has been started. Its current features and main functionality are presented. Comparison with several analytical models demonstrates good agreement between the numerical results and the theory. Further development plans are discussed.
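The core of any particle-in-cell step, as used in codes like Ef, is depositing particle quantities onto a grid and gathering grid fields back to particle positions. A minimal 1-D cloud-in-cell sketch on a periodic grid might look like the following; this is a generic illustration, not taken from the Ef source.

```python
def deposit_cic(positions, n_cells, L):
    """Cloud-in-cell charge deposition onto a periodic 1-D grid.
    Each unit-charge particle is shared between its two nearest nodes."""
    dx = L / n_cells
    rho = [0.0] * n_cells
    for xp in positions:
        s = (xp % L) / dx          # position in cell units
        j = int(s)
        frac = s - j               # fractional offset within the cell
        rho[j % n_cells] += (1.0 - frac) / dx
        rho[(j + 1) % n_cells] += frac / dx
    return rho

def gather_cic(field, xp, L):
    """Interpolate a grid field back to a particle position (same CIC weights)."""
    n_cells = len(field)
    dx = L / n_cells
    s = (xp % L) / dx
    j = int(s)
    frac = s - j
    return (1.0 - frac) * field[j % n_cells] + frac * field[(j + 1) % n_cells]
```

Using the same weights for deposition and gathering is what keeps the scheme momentum-conserving and free of self-forces.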
Spokes and charged particle transport in HiPIMS magnetrons
International Nuclear Information System (INIS)
Brenning, N; Lundin, D; Minea, T; Vitelaru, C; Costin, C
2013-01-01
Two separate scientific communities are shown to have studied one common phenomenon, azimuthally rotating dense plasma structures, also called spokes, in pulsed-power E × B discharges, starting from quite different approaches. The first body of work is motivated by fundamental plasma science and concerns a phenomenon called the critical ionization velocity, CIV, while the other body of work is motivated by the applied plasma science of high power impulse magnetron sputtering (HiPIMS). Here we make use of this situation by applying experimental observations, and theoretical analysis, from the CIV literature to HiPIMS discharges. For a practical example, we take data from observed spokes in HiPIMS discharges and focus on their role in charged particle transport and in electron energization. We also touch upon the closely related questions of how they channel the cross-B discharge current, how they maintain their internal potential structure and how they influence the energy spectrum of the ions. New particle-in-cell Monte Carlo collisional simulations that shed light on the azimuthal drift and expansion of the spokes are also presented. (paper)
Entropic transport of active particles driven by a transverse ac force
Energy Technology Data Exchange (ETDEWEB)
Wu, Jian-chun, E-mail: wjchun2010@163.com; Chen, Qun; Ai, Bao-quan, E-mail: aibq@scnu.edu.cn
2015-12-18
Transport of active particles is numerically investigated in a two-dimensional periodic channel. In the presence of a transverse ac force, the directed transport of active particles demonstrates striking behaviors. By adjusting the amplitude and the frequency of the transverse ac force, the average velocity is influenced significantly and the direction of the transport can be reversed several times. Remarkably, it is also found that the direction of the transport varies with the self-propelled speed. Therefore, particles with different self-propelled speeds will move in different directions, which makes it possible to separate particles by their self-propelled speed. - Highlights: • A transverse ac force strongly influences the transport of active particles. • The direction of the transport can be reversed several times. • Active particles with different self-propelled speeds can be separated.
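The kind of dynamics studied here can be sketched with a simple Euler-Maruyama integration of an overdamped active particle subject to a transverse ac force. The equations of motion, the parameter values, and the omission of the channel walls are simplifying assumptions for illustration, not the authors' model.

```python
import math
import random

def simulate_active_particle(v0=1.0, A=0.5, omega=2.0, D_theta=0.1,
                             dt=1e-3, steps=10000, seed=0):
    """Overdamped active particle: self-propulsion at speed v0 along the
    orientation theta, a transverse ac drive A*sin(omega*t) on y, and
    rotational diffusion with coefficient D_theta (Euler-Maruyama)."""
    rng = random.Random(seed)
    x = y = theta = 0.0
    for n in range(steps):
        t = n * dt
        x += v0 * math.cos(theta) * dt
        y += (v0 * math.sin(theta) + A * math.sin(omega * t)) * dt
        # rotational noise randomizes the propulsion direction
        theta += math.sqrt(2.0 * D_theta * dt) * rng.gauss(0.0, 1.0)
    return x, y
```

Averaging the final displacement over many noise realizations and self-propelled speeds is what reveals the speed-dependent transport direction described in the abstract.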
A benchmark study of the Signed-particle Monte Carlo algorithm for the Wigner equation
Directory of Open Access Journals (Sweden)
Muscato Orazio
2017-12-01
The Wigner equation represents a promising model for the simulation of electronic nanodevices, which allows the comprehension and prediction of quantum mechanical phenomena in terms of quasi-distribution functions. In recent years, a Monte Carlo technique for the solution of this kinetic equation has been developed, based on the generation and annihilation of signed particles. This technique can be deeply understood in terms of the theory of pure jump processes with a general state space, producing a class of stochastic algorithms. One of these algorithms has been validated successfully by numerical experiments on a benchmark test case.
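The annihilation step at the heart of the signed-particle technique (cancelling particles of opposite sign that fall into the same phase-space cell, so the ensemble size stays bounded) can be sketched as follows. The cell discretization and data layout are illustrative assumptions, not the benchmarked algorithm itself.

```python
from collections import defaultdict

def annihilate(particles, cell_of):
    """Cancel +/- signed particles that fall into the same phase-space cell.

    particles: list of (state, sign) with sign in {+1, -1};
    cell_of:   maps a state to a discrete cell index.
    Returns the surviving particles."""
    buckets = defaultdict(list)
    for state, sign in particles:
        bucket = buckets[cell_of(state)]
        # a particle of opposite sign already waiting in this cell cancels us
        if bucket and bucket[-1][1] == -sign:
            bucket.pop()
        else:
            bucket.append((state, sign))
    return [p for bucket in buckets.values() for p in bucket]
```

Because a +/- pair in the same cell contributes nothing to any cell-resolved average of the quasi-distribution, removing both leaves the estimator unchanged while halting the exponential growth of the particle count.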
DEFF Research Database (Denmark)
Vesterstrøm, Jacob Svaneborg; Thomsen, Rene
2004-01-01
Several extensions to evolutionary algorithms (EAs) and particle swarm optimization (PSO) have been suggested during the last decades offering improved performance on selected benchmark problems. Recently, another search heuristic termed differential evolution (DE) has shown superior performance...... in several real-world applications. In this paper, we evaluate the performance of DE, PSO, and EAs regarding their general applicability as numerical optimization techniques. The comparison is performed on a suite of 34 widely used benchmark problems. The results from our study show that DE generally...... outperforms the other algorithms. However, on two noisy functions, both DE and PSO were outperformed by the EA....
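The differential evolution heuristic compared above rests on a simple mutate-crossover-select generation loop. A minimal sketch of the common DE/rand/1/bin variant follows; the control parameters F and CR are conventional illustrative values, not those of the study.

```python
import random

def de_step(pop, f, F=0.8, CR=0.9, rng=random):
    """One generation of DE/rand/1/bin over a population of real vectors."""
    dim = len(pop[0])
    new_pop = []
    for i, target in enumerate(pop):
        # mutation: difference of two random members added to a third
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        jrand = rng.randrange(dim)  # guarantees at least one mutated coordinate
        # binomial crossover between mutant and target
        trial = [a[d] + F * (b[d] - c[d])
                 if (rng.random() < CR or d == jrand) else target[d]
                 for d in range(dim)]
        # greedy selection: keep whichever is better
        new_pop.append(trial if f(trial) <= f(target) else target)
    return new_pop
```

The greedy selection makes the best objective value in the population monotonically non-increasing, one reason DE is robust on the kinds of benchmarks used in this comparison.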
International Nuclear Information System (INIS)
Semwal, Girish; Rastogi, Vipul
2014-01-01
We present design optimization of wavelength filters based on long period waveguide gratings (LPWGs) using the adaptive particle swarm optimization (APSO) technique. We demonstrate optimization of the LPWG parameters for single-band, wide-band and dual-band rejection filters for testing the convergence of APSO algorithms. After convergence tests on the algorithms, the optimization technique has been implemented to design more complicated application specific filters such as erbium doped fiber amplifier (EDFA) amplified spontaneous emission (ASE) flattening, erbium doped waveguide amplifier (EDWA) gain flattening and pre-defined broadband rejection filters. The technique is useful for designing and optimizing the parameters of LPWGs to achieve complicated application specific spectra. (paper)
International Nuclear Information System (INIS)
Brown, P.; Chang, B.
1998-01-01
The linear Boltzmann transport equation (BTE) is an integro-differential equation arising in deterministic models of neutral and charged particle transport. In slab (one-dimensional Cartesian) geometry and certain higher-dimensional cases, Diffusion Synthetic Acceleration (DSA) is known to be an effective algorithm for the iterative solution of the discretized BTE. Fourier and asymptotic analyses have been applied to various idealizations (e.g., problems on infinite domains with constant coefficients) to obtain sharp bounds on the convergence rate of DSA in such cases. While DSA has been shown to be a highly effective acceleration (or preconditioning) technique in one-dimensional problems, it has been observed to be less effective in higher dimensions. This is due in part to the expense of solving the related diffusion linear system. We investigate here the effectiveness of a parallel semicoarsening multigrid (SMG) solution approach to DSA preconditioning in several three-dimensional problems. In particular, we consider the algorithmic and implementation scalability of a parallel SMG-DSA preconditioner on several types of test problems.
Verification of Gyrokinetic Particle Simulation of Device Size Scaling of Turbulent Transport
Institute of Scientific and Technical Information of China (English)
LIN Zhihong; S. ETHIER; T. S. HAHM; W. M. TANG
2012-01-01
Verification and historical perspective are presented on the gyrokinetic particle simulations that discovered the device size scaling of turbulent transport and identified the geometry model as the source of the long-standing disagreement between gyrokinetic particle and continuum simulations.
Graphical User Interface for High Energy Multi-Particle Transport, Phase II
National Aeronautics and Space Administration — Computer codes such as MCNPX now have the capability to transport most high energy particle types (34 particle types now supported in MCNPX) with energies extending...
Graphical User Interface for High Energy Multi-Particle Transport, Phase I
National Aeronautics and Space Administration — Computer codes such as MCNPX now have the capability to transport most high energy particle types (34 particle types now supported in MCNPX) with energies extending...
Particle simulation algorithms with short-range forces in MHD and fluid flow
International Nuclear Information System (INIS)
Cable, S.; Tajima, T.; Umegaki, K.
1992-07-01
Attempts are made to develop numerical algorithms for handling fluid flows involving liquids and liquid-gas mixtures. In these types of systems, the short-range intermolecular interactions are important enough to significantly alter behavior predicted on the basis of standard fluid mechanics and magnetohydrodynamics alone. We have constructed a particle-in-cell (PIC) code for the purpose of studying the effects of these interactions. Of the algorithms considered, the one which has been successfully implemented is based on an MHD particle code developed by Brunel et al. In the version presented here, short-range forces are included in particle motion by first calculating the forces between individual particles and then, to prevent aliasing, interpolating these forces to the computational grid points, then interpolating the forces back to the particles. The code has been used to model a simple two-fluid Rayleigh-Taylor instability. Limitations to the accuracy of the code exist at short wavelengths, where the effects of the short-range forces would be expected to be most pronounced
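The anti-aliasing scheme described above, interpolating per-particle short-range forces to the grid and back, can be sketched in 1-D as follows. The cloud-in-cell weighting and the normalisation step are illustrative assumptions, not the code of Brunel et al.

```python
def smooth_forces_via_grid(positions, forces, n_cells, L):
    """Deposit per-particle forces onto a periodic 1-D grid with CIC weights
    and interpolate them back, filtering out sub-grid (aliasing) structure."""
    dx = L / n_cells
    f_grid = [0.0] * n_cells
    w_grid = [0.0] * n_cells
    cache = []
    for xp, fp in zip(positions, forces):
        s = (xp % L) / dx
        j = int(s)
        frac = s - j
        j0, j1 = j % n_cells, (j + 1) % n_cells
        cache.append((j0, j1, frac))
        f_grid[j0] += (1.0 - frac) * fp   # force-weighted deposit
        f_grid[j1] += frac * fp
        w_grid[j0] += 1.0 - frac          # weight deposit for normalisation
        w_grid[j1] += frac
    for k in range(n_cells):
        if w_grid[k] > 0.0:
            f_grid[k] /= w_grid[k]        # grid force = weighted average
    # gather: interpolate the smoothed grid force back to each particle
    return [(1.0 - frac) * f_grid[j0] + frac * f_grid[j1]
            for j0, j1, frac in cache]
```

The round trip through the grid acts as a low-pass filter: force components varying on scales below the cell size, exactly where aliasing arises, are averaged away.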
Directory of Open Access Journals (Sweden)
Narinder Singh
2017-01-01
A new hybrid nature-inspired algorithm called HPSOGWO is presented, combining Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO). The main idea is to combine the exploitation ability of Particle Swarm Optimization with the exploration ability of the Grey Wolf Optimizer to produce both variants' strengths. Some unimodal, multimodal, and fixed-dimension multimodal test functions are used to check the solution quality and performance of the HPSOGWO variant. The numerical and statistical results show that the hybrid variant significantly outperforms the PSO and GWO variants in terms of solution quality, solution stability, convergence speed, and ability to find the global optimum.
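The grey-wolf half of such a hybrid updates each position toward the three best solutions found so far (alpha, beta, delta). A minimal sketch of that update rule follows; in the full algorithm the coefficient `a` decreases over the iterations, whereas here it is simply passed in, and nothing of the PSO coupling is shown.

```python
import random

def gwo_update(x, alpha, beta, delta, a, rng=random):
    """Grey-wolf position update: average of the moves toward the three leaders."""
    new_x = []
    for d in range(len(x)):
        moves = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(), rng.random()
            A = 2.0 * a * r1 - a          # |A| > 1 favors exploration, < 1 exploitation
            C = 2.0 * r2                  # random emphasis on the leader position
            D = abs(C * leader[d] - x[d])
            moves.append(leader[d] - A * D)
        new_x.append(sum(moves) / 3.0)
    return new_x
```

In HPSOGWO-style hybrids, an update of this form is typically blended with the PSO velocity term so that the swarm inherits GWO's exploration pressure.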
Energy Technology Data Exchange (ETDEWEB)
Young, Steven; Montakhab, Mohammad; Nouri, Hassan
2011-07-15
Economic dispatch (ED) is one of the most important problems to be solved in power generation, as fractional percentage fuel reductions represent significant cost savings. ED seeks to optimise the power generated by each generating unit in a system in order to find the minimum operating cost at a required load demand, whilst ensuring both equality and inequality constraints are met. For the process of optimisation, a model must be created for each generating unit. The particle swarm optimisation technique is an evolutionary computation technique and one of the most powerful methods for solving global optimisation problems. The aim of this paper is to add a constriction factor to the particle swarm optimisation algorithm (CFBPSO). Results show that the algorithm solves the ED problem well; for CFBPSO to be usable in a practical environment, valve-point effects and transmission losses should be included in future work.
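The constriction factor referred to above is usually the Clerc-Kennedy coefficient chi computed from the acceleration constants. A hedged sketch follows; c1 = c2 = 2.05 is the conventional choice (giving chi ≈ 0.73) and is an assumption here, not a value taken from the paper.

```python
import math

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction coefficient chi, valid for phi = c1 + c2 > 4."""
    phi = c1 + c2
    if phi <= 4.0:
        raise ValueError("constriction requires c1 + c2 > 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def constricted_velocity(v, x, pbest, gbest, r1, r2, c1=2.05, c2=2.05):
    """Scalar constricted PSO velocity update for one coordinate."""
    chi = constriction_factor(c1, c2)
    return chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
```

Multiplying the whole velocity by chi damps the swarm's oscillations and guarantees convergence without an explicit velocity clamp, which is the appeal of the CFBPSO formulation.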
Energy Technology Data Exchange (ETDEWEB)
Chen, Zaigao; Wang, Jianguo [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China); Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi'an, Shaanxi 710024 (China)]; Wang, Yue; Qiao, Hailiang; Zhang, Dianhui [Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi'an, Shaanxi 710024 (China)]; Guo, Weijie [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China)]
2013-11-15
An optimal design method for high-power microwave sources using particle simulation and parallel genetic algorithms is presented in this paper. The output power of the high-power microwave device, simulated by the fully electromagnetic particle simulation code UNIPIC, is taken as the fitness function, and float-encoded genetic algorithms are used to optimize the high-power microwave devices. Using this method, we encode the heights of the non-uniform slow wave structure in relativistic backward wave oscillators (RBWO), and optimize the parameters on massively parallel processors. Simulation results demonstrate that we can obtain the optimal parameters of the non-uniform slow wave structure in the RBWO, and the output microwave power increases by 52.6% after the device is optimized.
Optimization of heat pump system in indoor swimming pool using particle swarm algorithm
Energy Technology Data Exchange (ETDEWEB)
Lee, Wen-Shing; Kung, Chung-Kuan [Department of Energy and Refrigerating Air-Conditioning Engineering, National Taipei University of Technology, 1, Section 3, Chung-Hsiao East Road, Taipei (China)
2008-09-15
When it comes to indoor swimming pool facilities, a large amount of energy is required to heat up low-temperature outdoor air before it is introduced indoors to maintain indoor humidity. Since water evaporates from the pool surface, the exhaust air carries more moisture and a higher specific enthalpy. A heat pump is therefore generally used for heat recovery in indoor swimming pools. To reduce the cost of energy consumption, this paper utilizes a particle swarm algorithm to optimize the design of the heat pump system. The optimized parameters include continuous parameters and discrete parameters. The former consist of the outdoor air mass flow and the heat conductance of the heat exchangers; the latter comprise the compressor type and boiler type. In a case study, life cycle energy cost is considered as the objective function. In this regard, the optimized outdoor air flow and the optimized design of the heating system can be deduced by using the particle swarm algorithm. (author)
Directory of Open Access Journals (Sweden)
Qi Hong
2015-01-01
The particle size distribution (PSD) plays an important role in the detection of environmental pollution, such as fog, haze and soot, and in the protection of human health. In this study, the Attractive and Repulsive Particle Swarm Optimization (ARPSO) algorithm and the basic PSO were applied to retrieve the PSD. The spectral extinction technique coupled with the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law was employed to investigate the retrieval of the PSD. Three commonly used monomodal PSDs, i.e. the Rosin-Rammler (R-R) distribution, the normal (N-N) distribution and the logarithmic normal (L-N) distribution, were studied in the dependent model. Then, an optimal wavelength selection algorithm was proposed. To study the accuracy and robustness of the inverse results, some characteristic parameters were employed. The research revealed that ARPSO gave more accurate results and a faster convergence rate than the basic PSO, even with random measurement error. Moreover, the investigation also demonstrated that the inverse results from four incident laser wavelengths were more accurate and robust than those from two wavelengths. The research also found that increasing the interval between the selected incident laser wavelengths made the inverse results more accurate, even in the presence of random error.
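The attractive-repulsive mechanism of ARPSO switches the sign of the swarm's attraction based on a diversity measure: when the swarm collapses below a lower diversity threshold the particles repel, and when it spreads beyond an upper threshold they attract again. A minimal sketch follows; the threshold values `d_low` and `d_high` are illustrative assumptions, not the paper's settings.

```python
import math

def diversity(swarm, diag_length):
    """Mean particle distance to the swarm centroid, normalised by the
    number of particles and the search-space diagonal length."""
    n, dim = len(swarm), len(swarm[0])
    centroid = [sum(p[d] for p in swarm) / n for d in range(dim)]
    total = sum(math.sqrt(sum((p[d] - centroid[d]) ** 2 for d in range(dim)))
                for p in swarm)
    return total / (n * diag_length)

def arpso_direction(swarm, diag_length, d_low=5e-6, d_high=0.25, current=1):
    """Return +1 (attraction phase) or -1 (repulsion phase) per the ARPSO rule."""
    d = diversity(swarm, diag_length)
    if d < d_low:
        return -1   # swarm collapsed: repel to restore diversity
    if d > d_high:
        return 1    # swarm dispersed: attract to resume convergence
    return current  # otherwise keep the current phase
```

The returned sign multiplies the attraction terms of the standard PSO velocity update, which is what lets ARPSO escape the premature convergence that plagues the basic PSO on retrieval problems like this one.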
GPU-accelerated algorithms for many-particle continuous-time quantum walks
Piccinini, Enrico; Benedetti, Claudia; Siloi, Ilaria; Paris, Matteo G. A.; Bordone, Paolo
2017-06-01
Many-particle continuous-time quantum walks (CTQWs) represent a resource for several tasks in quantum technology, including quantum search algorithms and universal quantum computation. In order to design and implement CTQWs in a realistic scenario, one needs effective simulation tools for Hamiltonians that take into account static noise and fluctuations in the lattice, i.e. Hamiltonians containing stochastic terms. To this aim, we suggest a parallel algorithm based on the Taylor series expansion of the evolution operator, and compare its performance with that of algorithms based on the exact diagonalization of the Hamiltonian or a 4th-order Runge-Kutta integration. We prove that both the Taylor-series expansion and Runge-Kutta algorithms are reliable and have a low computational cost, the Taylor-series expansion showing the additional advantage of a memory allocation that does not depend on the precision of calculation. Both algorithms are also highly parallelizable within the SIMT paradigm, and are thus suitable for GPGPU computing. We have benchmarked four NVIDIA GPUs and three quad-core Intel CPUs for a 2-particle system over lattices of increasing dimension, showing that the speedup provided by GPU computing, with respect to the OPENMP parallelization, lies in the range between 8x and (more than) 20x, depending on the frequency of post-processing. GPU-accelerated codes thus allow one to overcome concerns about execution time, and make simulations with many interacting particles on large lattices possible, with the only limit being the memory available on the device.
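The Taylor-series propagation scheme discussed above approximates exp(-iH dt) by a truncated power series applied term by term to the state vector, so only the current term and the accumulator need be stored. A minimal dense-matrix sketch in pure Python (no GPU; the truncation order is an illustrative assumption):

```python
def taylor_step(H, psi, dt, order=10):
    """One step of psi(t+dt) = exp(-i H dt) psi via a truncated Taylor series.

    H: dense Hermitian matrix as a list of lists; psi: list of complex amplitudes.
    Each term is built from the previous one: term_k = (-i dt / k) * H @ term_{k-1}."""
    n = len(psi)
    result = list(psi)   # k = 0 term
    term = list(psi)
    for k in range(1, order + 1):
        term = [(-1j * dt / k) * sum(H[r][c] * term[c] for c in range(n))
                for r in range(n)]
        result = [result[r] + term[r] for r in range(n)]
    return result
```

Because only `term` and `result` are kept, the memory footprint is independent of the truncation order (and hence of the precision), the property the abstract highlights.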
Development of general-purpose particle and heavy ion transport monte carlo code
International Nuclear Information System (INIS)
Iwase, Hiroshi; Nakamura, Takashi; Niita, Koji
2002-01-01
The high-energy particle transport code NMTC/JAM, which has been developed at JAERI, was improved for high-energy heavy ion transport calculations by incorporating the JQMD code, the SPAR code and the Shen formula. The new NMTC/JAM, named PHITS (Particle and Heavy-Ion Transport code System), is the first general-purpose heavy ion transport Monte Carlo code for incident energies from several MeV/nucleon to several GeV/nucleon. (author)
An Image Filter Based on Shearlet Transformation and Particle Swarm Optimization Algorithm
Directory of Open Access Journals (Sweden)
Kai Hu
2015-01-01
Digital images are often corrupted by noise, which makes data postprocessing difficult. To remove noise while preserving as much image detail as possible, this paper proposes an image filtering algorithm that combines the merits of the Shearlet transform and the particle swarm optimization (PSO) algorithm. First, we use the classical Shearlet transform to decompose the noisy image into many subwavelets over multiple scales and orientations. Second, we assign weighting factors to the subwavelets obtained. Then, using the classical inverse Shearlet transform, we obtain a composite image composed of those weighted subwavelets. After that, we design a fast, coarse evaluation method to estimate the noise level of the new image; using this measure as the fitness, we adopt PSO to find the optimal weighting factors; after many iterations, the optimal factors and the inverse Shearlet transform yield the best denoised image. Experimental results show that the proposed algorithm eliminates noise effectively and yields a good peak signal-to-noise ratio (PSNR).
Energy Technology Data Exchange (ETDEWEB)
Lee, Kyun Ho [Sejong University, Sejong (Korea, Republic of); Kim, Ki Wan [Agency for Defense Development, Daejeon (Korea, Republic of)
2014-09-15
The heat transfer mechanism for radiation is directly related to the emission of photons and electromagnetic waves. Depending on the participation of the medium, the radiation can be classified into two forms: surface and gas radiation. In the present study, unknown radiation properties were estimated using an inverse boundary analysis of surface radiation in an axisymmetric cylindrical enclosure. For efficiency, a repulsive particle swarm optimization (RPSO) algorithm, which is a relatively recent heuristic search method, was used as the inverse solver. By comparing the convergence rates and accuracies with the results of a genetic algorithm (GA), the performance of the proposed RPSO algorithm as an inverse solver was verified when applied to the inverse analysis of the surface radiation problem.
International Nuclear Information System (INIS)
Lee, Kyun Ho; Kim, Ki Wan
2014-01-01
The heat transfer mechanism for radiation is directly related to the emission of photons and electromagnetic waves. Depending on the participation of the medium, the radiation can be classified into two forms: surface and gas radiation. In the present study, unknown radiation properties were estimated using an inverse boundary analysis of surface radiation in an axisymmetric cylindrical enclosure. For efficiency, a repulsive particle swarm optimization (RPSO) algorithm, which is a relatively recent heuristic search method, was used as the inverse solver. By comparing the convergence rates and accuracies with the results of a genetic algorithm (GA), the performance of the proposed RPSO algorithm as an inverse solver was verified when applied to the inverse analysis of the surface radiation problem.
New hybrid genetic particle swarm optimization algorithm to design multi-zone binary filter.
Lin, Jie; Zhao, Hongyang; Ma, Yuan; Tan, Jiubin; Jin, Peng
2016-05-16
Binary phase filters have been used to achieve an optical needle with a small lateral size, but designing one remains a scientific challenge in this field. In this paper, a hybrid genetic particle swarm optimization (HGPSO) algorithm is proposed to design the binary phase filter. The HGPSO algorithm includes self-adaptive parameters and the recombination and mutation operations that originated in the genetic algorithm. On benchmark tests, the HGPSO algorithm achieves global optimization and fast convergence. In an easy-to-perform optimization procedure, the number of iterations of HGPSO is reduced to about a quarter of that of the original particle swarm optimization process. A multi-zone binary phase filter is designed using the HGPSO. A long depth of focus and high resolution are achieved simultaneously, with a depth of focus and focal spot transverse size of 6.05λ and 0.41λ, respectively. Therefore, the proposed HGPSO can be applied to the optimization of filters with multiple parameters.
Optimization of C4.5 algorithm-based particle swarm optimization for breast cancer diagnosis
Muslim, M. A.; Rukmana, S. H.; Sugiharti, E.; Prasetiyo, B.; Alimah, S.
2018-03-01
Data mining has become a basic methodology for computational applications in medical domains. It can be applied in the health field for the diagnosis of breast cancer, heart disease, diabetes and other conditions. Breast cancer is the most common cancer in women, with more than one million cases and nearly 600,000 deaths occurring worldwide each year. The most effective way to reduce breast cancer deaths is early diagnosis. This study aims to determine the accuracy of breast cancer diagnosis. The research uses the Wisconsin Breast Cancer (WBC) dataset from the UCI machine learning repository. The methods used in this research are the C4.5 algorithm and Particle Swarm Optimization (PSO), where PSO performs feature selection and optimizes the C4.5 algorithm. Ten-fold cross-validation and a confusion matrix are used for validation. The results show that the accuracy of the PSO-optimized C4.5 algorithm increased by 0.88%.
Directory of Open Access Journals (Sweden)
Rongxiao Wang
2017-09-01
The accurate prediction of air contaminant dispersion is essential to air quality monitoring and to the emergency management of contaminant gas leakage incidents in chemical industry parks. Conventional atmospheric dispersion models can seldom give accurate predictions due to inaccurate input parameters. In order to improve the prediction accuracy of dispersion models, two data assimilation methods (i.e., the typical particle filter, and the combination of a particle filter with the expectation-maximization algorithm) are proposed to assimilate virtual Unmanned Aerial Vehicle (UAV) observations with measurement error into the atmospheric dispersion model. Two emission cases with different dimensions of state parameters are considered. To test the performance of the proposed methods, two numerical experiments corresponding to the two emission cases are designed and implemented. The results show that the particle filter can effectively estimate the model parameters and improve the accuracy of model predictions when the dimension of the state parameters is relatively low. In contrast, when the dimension of the state parameters becomes higher, the method combining the particle filter with the expectation-maximization algorithm performs better in terms of parameter estimation accuracy. Therefore, the proposed data assimilation methods are able to effectively support air quality monitoring and emergency management in chemical industry parks.
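A bootstrap particle filter of the kind used for data assimilation here follows a predict-reweight-resample cycle. A generic sketch follows; the transition and likelihood models are caller-supplied toy assumptions, not the paper's dispersion model.

```python
import random

def bootstrap_pf_step(particles, weights, transition, likelihood, obs, rng=random):
    """One predict-update-resample step of a bootstrap particle filter."""
    # predict: propagate each particle through the (possibly stochastic) model
    particles = [transition(p, rng) for p in particles]
    # update: reweight each particle by the likelihood of the new observation
    weights = [w * likelihood(obs, p) for w, p in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # resample: multinomial draw proportional to weight, then reset to uniform
    n = len(particles)
    particles = rng.choices(particles, weights=weights, k=n)
    return particles, [1.0 / n] * n
```

Resampling concentrates the ensemble on states consistent with the observations, which is how the filter pulls the dispersion model's parameters toward the UAV measurements.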
System convergence in transport models: algorithms efficiency and output uncertainty
DEFF Research Database (Denmark)
Rich, Jeppe; Nielsen, Otto Anker
2015-01-01
of this paper is to analyse convergence performance for the external loop and to illustrate how an improper linkage between the converging parts can lead to substantial uncertainty in the final output. Although this loop is crucial for the performance of large-scale transport models it has not been analysed...... much in the literature. The paper first investigates several variants of the Method of Successive Averages (MSA) by simulation experiments on a toy-network. It is found that the simulation experiments produce support for a weighted MSA approach. The weighted MSA approach is then analysed on large......-scale in the Danish National Transport Model (DNTM). It is revealed that system convergence requires that either demand or supply is without random noise but not both. In that case, if MSA is applied to the model output with random noise, it will converge effectively as the random effects are gradually dampened...
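The MSA variants analysed above all share the same averaging recursion: the next iterate is a convex combination of the current solution and the new model response. A minimal sketch follows; the 1/k step size is plain MSA, and a weighted variant simply supplies a different weight function. The fixed-point map used in the usage example is a toy assumption, not a transport model.

```python
def msa(assignment_step, x0, iters=50, weight=lambda k: 1.0 / k):
    """Method of Successive Averages: average each new model response into
    the current solution with a (possibly weighted) step size."""
    x = x0
    for k in range(1, iters + 1):
        y = assignment_step(x)                    # model response to current solution
        step = weight(k)
        x = [(1.0 - step) * xi + step * yi for xi, yi in zip(x, y)]
    return x
```

With the 1/k step the iterate is the running average of all responses, which dampens random noise in either demand or supply; this damping is exactly what the external-loop convergence analysis in the paper relies on.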
Dynamics and transport of laser-accelerated particle beams
International Nuclear Information System (INIS)
Becker, Stefan
2010-01-01
The subject of this thesis is the investigation and optimization of beam transport elements in the context of the steadily growing field of laser-driven particle acceleration. The first topic is the examination of the free vacuum expansion of an electron beam at high current density. It could be shown that particle tracking codes which are commonly used for the calculation of space charge effects will generate substantial artifacts in the regime considered here. The artifacts occurring hitherto predominantly involve insufficient prerequisites for the Lorentz transformation, the application of inadequate initial conditions and non negligible retardation artifacts. A part of this thesis is dedicated to the development of a calculation approach which uses a more adequate ansatz calculating space charge effects for laser-accelerated electron beams. It can also be used to validate further approaches for the calculation of space charge effects. The next elements considered are miniature magnetic quadrupole devices for the focusing of charged particle beams. General problems involved with their miniaturization concern distorting higher order field components. If these distorting components cannot be controlled, the field of applications is very limited. In this thesis a new method for the characterization and compensation of the distorting components was developed, which might become a standard method when assembling these permanent magnet multipole devices. The newly developed characterization method has been validated at the Mainz Microtron (MAMI) electron accelerator. Now that we can ensure optimum performance, the first application of permanent magnet quadrupole devices in conjunction with laser-accelerated ion beams is presented. The experiment was carried out at the Z-Petawatt laser system at Sandia National Laboratories. A promising application for laser-accelerated electron beams is the FEL in a university-scale size. The first discussion of all relevant aspects
Dynamics and transport of laser-accelerated particle beams
Energy Technology Data Exchange (ETDEWEB)
Becker, Stefan
2010-04-19
The subject of this thesis is the investigation and optimization of beam transport elements in the context of the steadily growing field of laser-driven particle acceleration. The first topic is the examination of the free vacuum expansion of an electron beam at high current density. It could be shown that particle tracking codes which are commonly used for the calculation of space charge effects will generate substantial artifacts in the regime considered here. The artifacts occurring hitherto predominantly involve insufficient prerequisites for the Lorentz transformation, the application of inadequate initial conditions and non negligible retardation artifacts. A part of this thesis is dedicated to the development of a calculation approach which uses a more adequate ansatz calculating space charge effects for laser-accelerated electron beams. It can also be used to validate further approaches for the calculation of space charge effects. The next elements considered are miniature magnetic quadrupole devices for the focusing of charged particle beams. General problems involved with their miniaturization concern distorting higher order field components. If these distorting components cannot be controlled, the field of applications is very limited. In this thesis a new method for the characterization and compensation of the distorting components was developed, which might become a standard method when assembling these permanent magnet multipole devices. The newly developed characterization method has been validated at the Mainz Microtron (MAMI) electron accelerator. Now that we can ensure optimum performance, the first application of permanent magnet quadrupole devices in conjunction with laser-accelerated ion beams is presented. The experiment was carried out at the Z-Petawatt laser system at Sandia National Laboratories. A promising application for laser-accelerated electron beams is the FEL in a university-scale size. The first discussion of all relevant aspects
International Nuclear Information System (INIS)
Meng, Jianxin; Mei, Deqing; Yang, Keji; Fan, Zongwei
2014-01-01
In existing ultrasonic transportation methods, long-range transportation of micro-particles is always realized in a step-by-step way. Due to the substantial decrease of the driving force in each step, the transportation is slow and stair-stepping. To improve the transporting velocity, a non-stepping ultrasonic transportation approach is proposed. By quantitatively analyzing the acoustic potential well, an optimal region is defined as the position where the largest driving force is provided under the condition that the driving force is also the major component of the acoustic radiation force. To keep the micro-particle trapped in the optimal region during the whole transportation process, the phase-shifting velocity and phase-shifting step are optimized. Due to the stable and large driving force, the displacement of the micro-particle is an approximately linear function of time, instead of a stair-stepping function of time as in the existing step-by-step methods. An experimental setup was also developed to validate this approach. Long-range ultrasonic transportation of zirconium beads with high transporting velocity was realized. The experimental results demonstrated that this approach is an effective way to improve the transporting velocity in the long-range ultrasonic transportation of micro-particles.
Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo
2017-01-01
In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights in a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to remove the need for manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's best location and the population's global best location for the PSO are calculated from these results. (iii) Next, the weighting factors are updated based on the particle's best location and the population's global best location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose-volume histograms. Furthermore, a perturbation strategy - a hybrid of the crossover and mutation operators - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6 MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all of the cases, without human intervention.
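The inner PSO loop that drives steps (i)-(iii) can be sketched as follows. Here the plan-optimization solver and the dose-volume evaluation are replaced by a toy quadratic objective, and the function name, swarm size, and coefficients are illustrative assumptions rather than the paper's settings:

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (illustrative, not the paper's solver)."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                      # personal best positions
    pbest_f = [f(xi) for xi in x]
    g = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (g[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))  # clip to bounds
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fx
        g = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
    return g, f(g)

# Toy stand-in for a plan score over three objective weights
best, score = pso_minimize(lambda ws: sum((wi - 0.5) ** 2 for wi in ws),
                           dim=3, bounds=(0.0, 1.0))
```

In the paper's setting, `f` would wrap a full plan optimization for one weight vector and return the evaluation score built from key dose-volume-histogram points.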
Directory of Open Access Journals (Sweden)
Elahe Fallah Mehdipour
2012-12-01
Full Text Available Optimal operation of multipurpose reservoirs is one of the complex and sometimes nonlinear problems in the field of multi-objective optimization. Evolutionary algorithms are optimization tools that search the decision space by simulating natural biological evolution and present a set of points as the optimum solutions of a problem. In this research, the application of multi-objective particle swarm optimization (MOPSO) to the optimal operation of the Bazoft reservoir with different objectives, including generating hydropower energy, supplying downstream demands (drinking, industry and agriculture), recreation and flood control, has been considered. In this regard, the solution sets of the MOPSO algorithm for two-objective combinations were first compared with compromise programming (CP) using different weighting and power coefficients; in all combinations of objectives the MOPSO algorithm was more capable than CP of finding solutions with an appropriate distribution, and these solutions dominated the CP solutions. Then, the end points of the MOPSO solution set were compared with nonlinear programming (NLP) results. Results showed that, with a 0.3 percent difference from the NLP results, the MOPSO algorithm is more capable of presenting optimum solutions at the end points of the solution set.
Kota, Sujatha; Padmanabhuni, Venkata Nageswara Rao; Budda, Kishor; K, Sruthi
2018-05-01
Elliptic Curve Cryptography (ECC) uses two keys, a private key and a public key, and is a public-key cryptographic algorithm used both for authentication of a person and for confidentiality of data. One of the keys is used in encryption and the other in decryption, depending on usage. The private key is used by the user in encryption, and the public key is used to identify the user in the case of authentication. Similarly, the sender encrypts with the private key and the public key is used to decrypt the message in the case of confidentiality. Choosing the private key is always an issue in public-key cryptographic algorithms such as RSA and ECC: if tiny values are chosen at random, the security of the complete algorithm is compromised, and since the public key is computed from the private key, keys that are not chosen optimally can generate values at infinity. The proposed Modified Elliptic Curve Cryptography selects the key by either of two methods: the first option uses Particle Swarm Optimization and the second uses the Cuckoo Search Algorithm for randomly choosing the values. The proposed algorithms were developed and tested using a sample database, and both were found to be secure and reliable. The test results prove that the private key is chosen optimally, neither repetitive nor tiny, and that the public-key computations will not reach infinity.
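The abstract's concern about tiny randomly chosen private keys can be illustrated with a plain rejection sampler. The curve order and the bit-length floor below are illustrative assumptions, and the paper's PSO / Cuckoo Search selection is not reproduced:

```python
import random

def choose_private_key(n_order, min_bits=32, rng=random):
    """Rejection-sample a private key d in [1, n_order - 1], rejecting
    'tiny' values (the weakness the abstract warns about). The paper
    instead searches for d with PSO or Cuckoo Search; min_bits is an
    illustrative floor, not a value from the paper."""
    while True:
        d = rng.randrange(1, n_order)
        if d.bit_length() >= min_bits:
            return d

# Hypothetical group order, for illustration only
d = choose_private_key(2**255 - 19, rng=random.Random(42))
```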
Prediction of Tibial Rotation Pathologies Using Particle Swarm Optimization and K-Means Algorithms.
Sari, Murat; Tuna, Can; Akogul, Serkan
2018-03-28
The aim of this article is to investigate pathological subjects from a population through different physical factors. To achieve this, particle swarm optimization (PSO) and K-means (KM) clustering algorithms have been combined (PSO-KM). Datasets provided by the literature were divided into three clusters based on age and weight parameters, and each of the right tibial external rotation (RTER), right tibial internal rotation (RTIR), left tibial external rotation (LTER), and left tibial internal rotation (LTIR) values was divided into three types, Type 1, Type 2 and Type 3 (Type 2 is non-pathological (normal) and the other two types are pathological (abnormal)). The rotation values of every subject in each cluster were noted. Then the algorithm was run and the produced values were also considered. The values produced by the algorithm, the PSO-KM, have been compared with the real values. The hybrid PSO-KM algorithm has been very successful at optimally clustering the tibial rotation types through the physical criteria. In this investigation, Type 2 (non-pathological subjects) was especially highly predictable, and the PSO-KM algorithm has been very successful as an operational system for clustering and optimizing the tibial motion data assessments. These research findings are expected to be very useful for health providers, such as physiotherapists and orthopedists, and may help clinicians to design proper treatment schedules for patients.
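The KM half of PSO-KM is ordinary Lloyd's iteration; a bare version over (age, weight) pairs is sketched below. The PSO seeding of the centroids is omitted, and the data points are invented for illustration:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means (Lloyd's algorithm). In PSO-KM the initial centers
    would come from a particle swarm; here they are random samples."""
    rng = random.Random(seed)
    dim = len(points[0])
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assign every point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # recompute centers as cluster means (keep empty clusters' centers)
        new = [tuple(sum(p[d] for p in cl) / len(cl) for d in range(dim))
               if cl else centers[j]
               for j, cl in enumerate(clusters)]
        if new == centers:
            break
        centers = new
    return centers, clusters

# Invented (age, weight) samples forming three loose groups
data = [(20, 60), (22, 62), (21, 61), (40, 80), (41, 82),
        (39, 79), (65, 70), (66, 72), (64, 71)]
centers, clusters = kmeans(data, 3)
```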
Transient Particle Transport Analysis on TJ-II Stellarator
Energy Technology Data Exchange (ETDEWEB)
Eguilior, S.; Castejon, F.; Guasp, J.; Estrada, T.; Medina, F.; Tabares, F.L.; Branas, B.
2006-12-18
Particle diffusivity and convective velocity have been determined in ECRH plasmas confined in the stellarator TJ-II by analysing the evolving density profile. This is obtained from an amplitude modulation reflectometry system in addition to an X-ray tomographic reconstruction. The source term, which is needed as an input for transport equations, is obtained using EIRENE code. In order to discriminate between the diffusive and convective contributions, the dynamics of the density evolution has been analysed in several perturbative experiments. This evolution has been considered in discharges with injection of a single pulse of H2 as well as in those that present a spontaneous transition to an enhanced confinement mode and whose confinement properties are modified by inducing an ohmic current. The pinch velocity and diffusivity are parameterized by different expressions in order to fit the experimental time evolution of density profile. The profile evolution is very different from one case to another due to the different values of convective velocities and diffusivities, besides the different source terms. (Author) 19 refs.
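Separating the diffusive and convective contributions rests on a 1-D continuity equation for the density, dn/dt = d/dx(D dn/dx - v n) + S. A slab-geometry explicit solver (a simplification of the flux-surface-averaged form actually used for TJ-II; all parameter values below are illustrative) can be sketched as:

```python
def evolve_density(n, D, v, S, dx, dt, steps):
    """Explicit finite-difference update of dn/dt = d/dx(D dn/dx - v n) + S
    with zero-flux boundaries. Slab-geometry sketch with constant D and v,
    not the TJ-II transport model itself."""
    n = list(n)
    for _ in range(steps):
        flux = [0.0] * (len(n) + 1)          # flux at cell faces
        for i in range(1, len(n)):
            grad = (n[i] - n[i - 1]) / dx
            mid = 0.5 * (n[i] + n[i - 1])
            flux[i] = -D * grad + v * mid    # diffusive + convective parts
        n = [n[i] + dt * (-(flux[i + 1] - flux[i]) / dx + S[i])
             for i in range(len(n))]
    return n

# A density pulse spreading and drifting; total particle content is conserved
n0 = [0.0] * 10
n0[5] = 1.0
out = evolve_density(n0, D=1.0, v=0.2, S=[0.0] * 10, dx=1.0, dt=0.1, steps=50)
```

Fitting D and v so the modeled evolution matches the measured profiles is the essence of the perturbative analysis described above.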
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
Density Dependence of Particle Transport in ECH Plasmas of the TJ-II Stellarator
Energy Technology Data Exchange (ETDEWEB)
Vargas, V. I.; Lopez-Bruna, D.; Guasp, J.; Herranz, J.; Estrada, T.; Medina, F.; Ochando, M.A.; Velasco, J.L.; Reynolds, J.M.; Ferreira, J.A.; Tafalla, D.; Castejon, F.; Salas, A.
2009-05-21
We present the experimental dependence of particle transport on average density in electron cyclotron heated (ECH) hydrogen plasmas of the TJ-II stellarator. The results are based on: (I) electron density and temperature data from Thomson Scattering and reflectometry diagnostics; (II) a transport model that reproduces the particle density profiles in steady state; and (III) Eirene, a code for neutrals transport that calculates the particle source in the plasma from the particle confinement time and the appropriate geometry of the machine/plasma. After estimating an effective particle diffusivity and the particle confinement time, a threshold density separating qualitatively and quantitatively different plasma transport regimes is found. The poor confinement times found below the threshold are coincident with the presence of ECH-induced fast electron losses and a positive radial electric field all over the plasma. (Author) 40 refs.
Thermodynamic design of Stirling engine using multi-objective particle swarm optimization algorithm
International Nuclear Information System (INIS)
Duan, Chen; Wang, Xinggang; Shu, Shuiming; Jing, Changwei; Chang, Huawei
2014-01-01
Highlights: • An improved thermodynamic model taking into account an irreversibility parameter was developed. • A multi-objective optimization method for designing Stirling engines was investigated. • A multi-objective particle swarm optimization algorithm was adopted in the area of Stirling engines for the first time. - Abstract: In recent years, interest in the Stirling engine has increased remarkably due to its ability to use any external heat source, including solar energy, fossil fuels and biomass. A large number of studies have been done on Stirling cycle analysis. In the present study, a mathematical model based on a thermodynamic analysis of the Stirling engine considering regenerative losses and internal irreversibilities has been developed. The power output, thermal efficiency and cycle irreversibility parameter of the Stirling engine are optimized simultaneously using a Particle Swarm Optimization (PSO) algorithm, which is more effective than traditional genetic algorithms. In this optimization problem, some important parameters of the Stirling engine are considered as decision variables, such as the temperatures of the working fluid in the high-temperature and low-temperature isothermal processes, the dead volume ratios of each heat exchanger, the volumes of the working spaces, the effectiveness of the regenerator, and the system charge pressure. The Pareto optimal frontier is obtained and the final design solution is selected by the Linear Programming Technique for Multidimensional Analysis of Preference (LINMAP). Results show that the proposed multi-objective optimization approach can significantly outperform traditional single-objective approaches
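The final LINMAP pick from the Pareto frontier amounts to choosing the point nearest an ideal corner in normalized objective space. The sketch below assumes all objectives are minimized with equal preference weights, a simplification of the paper's setup (where, for instance, power output is maximized):

```python
def linmap_select(points):
    """LINMAP-style pick: normalize each (minimized) objective to [0, 1]
    and return the Pareto point closest to the ideal all-zeros corner.
    Real LINMAP uses preference weights; equal weights are assumed here."""
    m = len(points[0])
    lo = [min(p[k] for p in points) for k in range(m)]
    hi = [max(p[k] for p in points) for k in range(m)]
    def dist(p):
        # squared distance to the ideal point in normalized space
        return sum(((p[k] - lo[k]) / (hi[k] - lo[k] or 1.0)) ** 2
                   for k in range(m))
    return min(points, key=dist)

# Toy two-objective frontier: the balanced point wins
pick = linmap_select([(1.0, 5.0), (2.0, 2.0), (5.0, 1.0)])
```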
Directory of Open Access Journals (Sweden)
GholamReza Havaei
2015-09-01
Full Text Available Reinforced concrete reservoirs (RCRs) have been used extensively in municipal and industrial facilities for several decades. The design of these structures requires attention not only to strength requirements but to serviceability requirements as well. These structures may be square, round, or oval reinforced concrete structures above, below, or partially below ground. The main challenge is to design concrete liquid-containing structures that resist the extremes of seasonal temperature change and a variety of loading conditions, and remain liquid-tight for a useful life of 50 to 60 years. In this study, optimization is performed by a particle swarm algorithm based on the structural design. First, structural analysis finds the full range of shell thicknesses and rebar areas. In the second step, source code implementing the particle swarm algorithm in MATLAB is linked to the analysis software through a parameter-identification interchange algorithm, so that the best, optimized thicknesses and total bar areas for each element are found. Lastly, the structure is optimized with circumferential stiffeners, showing a 19% decrease in the weight of rebar, a 20% decrease in the volume of concrete, and a minimum 13% cost reduction in the construction procedure compared with conventional 10,000 m3 RCR structures.
Fitness Estimation Based Particle Swarm Optimization Algorithm for Layout Design of Truss Structures
Directory of Open Access Journals (Sweden)
Ayang Xiao
2014-01-01
Full Text Available Due to the fact that vastly different variables and constraints are simultaneously considered, truss layout optimization is a typical, difficult constrained mixed-integer nonlinear program. Moreover, the computational cost of truss analysis is often quite expensive. In this paper, a novel fitness estimation based particle swarm optimization algorithm with an adaptive penalty function approach (FEPSO-AP) is proposed to handle this problem. FEPSO-AP adopts a special fitness estimation strategy to evaluate similar particles in the current population, in order to reduce the computational cost. Furthermore, a laconic adaptive penalty function is employed by FEPSO-AP, which can handle multiple constraints effectively by making good use of historical iteration information. Four benchmark examples with fixed topologies and up to 44 design dimensions were studied to verify the generality and efficiency of the proposed algorithm. Numerical results of the present work, compared with results of other state-of-the-art hybrid algorithms in the literature, demonstrate that the convergence rate and the solution quality of FEPSO-AP are essentially competitive.
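An adaptive penalty of the kind FEPSO-AP relies on can be illustrated as below. The exact formula and its use of historical iteration information are not reproduced; `feasible_ratio` is an assumed population statistic, and the whole function is a sketch:

```python
def penalized_fitness(obj, violations, feasible_ratio, scale=1.0):
    """Illustrative adaptive penalty for constraints in g(x) <= 0 form:
    the smaller the fraction of feasible particles in the population,
    the harder violations are punished. Not the exact FEPSO-AP formula."""
    total_violation = sum(max(0.0, g) for g in violations)
    adapt = scale * (2.0 - feasible_ratio)  # grows as feasibility drops
    return obj + adapt * total_violation
```

A feasible design keeps its raw objective value, while an infeasible one is pushed up in proportion to how much it violates the constraints and how rare feasibility currently is in the swarm.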
International Nuclear Information System (INIS)
Huang, Chia-Ling
2015-01-01
This paper proposes a new swarm intelligence method known as the Particle-based Simplified Swarm Optimization (PSSO) algorithm, undertaking a modification of the Updating Mechanism (UM), called N-UM and R-UM, and simultaneously applying an Orthogonal Array Test (OA) to successfully solve reliability–redundancy allocation problems (RRAPs). One difficulty of RRAP is the need to maximize system reliability in cases where the number of redundant components and the reliability of the corresponding components in each subsystem are decided simultaneously under nonlinear constraints. In this paper, four RRAP benchmarks are used to display the applicability of the proposed PSSO, which combines the strengths of both PSO and SSO to optimize the RRAP, a mixed-integer nonlinear programming problem. When the computational results are compared with those of previously developed algorithms in the existing literature, the findings indicate that the proposed PSSO is highly competitive and performs well. - Highlights: • This paper proposes a particle-based simplified swarm optimization algorithm (PSSO) to optimize RRAP. • Furthermore, the UM and an OA are adapted to improve the optimization of RRAP. • Four systems are introduced and the results demonstrate that the PSSO performs particularly well
Directory of Open Access Journals (Sweden)
Zhiwei Ye
2015-01-01
Full Text Available Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) for low contrast images, to enhance images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors, which are the threshold, the entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.
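The incomplete Beta transformation at the heart of the enhancement maps each normalized gray level u in [0, 1] to the regularized value I_u(a, b). A direct numerical version is sketched below (trapezoid integration, with a and b assumed >= 1; the CS-PSO search that tunes a and b is omitted):

```python
def reg_inc_beta(u, a, b, n=2000):
    """Regularized incomplete beta function I_u(a, b) by the trapezoid
    rule (assumes a, b >= 1 so the integrand is bounded). Used as the
    gray-level mapping y = I_u(a, b) on intensities scaled to [0, 1]."""
    u = min(1.0, max(0.0, u))
    f = lambda t: t ** (a - 1) * (1 - t) ** (b - 1)
    def integral(hi):
        if hi == 0.0:
            return 0.0
        h = hi / n
        return h * (0.5 * (f(0.0) + f(hi)) + sum(f(i * h) for i in range(1, n)))
    return integral(u) / integral(1.0)
```

For a = b = 2 the mapping is symmetric about 0.5, which is a convenient sanity check; asymmetric (a, b) pairs brighten or darken the mid-tones, and the optimizer searches that space against the quality measure.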
An efficient particle Fokker–Planck algorithm for rarefied gas flows
Energy Technology Data Exchange (ETDEWEB)
Gorji, M. Hossein; Jenny, Patrick
2014-04-01
This paper is devoted to the algorithmic improvement and careful analysis of the Fokker–Planck kinetic model derived by Jenny et al. [1] and Gorji et al. [2]. The motivation behind Fokker–Planck based particle methods is to gain efficiency in low Knudsen number rarefied gas flow simulations, where conventional direct simulation Monte Carlo (DSMC) becomes expensive. This is possible because the resulting model equations are continuous stochastic differential equations in velocity space. Accordingly, the computational particles evolve along independent stochastic paths and thus no collisions need to be calculated, so the computational cost of the solution algorithm becomes independent of the Knudsen number. In the present study, different computational improvements were pursued in order to augment the method, including an accurate time integration scheme, local time stepping and noise reduction. To assess the performance, gas flow around a cylinder and lid-driven cavity flow were studied. Convergence rates, accuracy and computational costs were compared with respect to DSMC for a range of Knudsen numbers (from the hydrodynamic regime up to above one). In all the considered cases, the model together with the proposed scheme gives rise to very efficient yet accurate solution algorithms.
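The continuous velocity-space evolution the method exploits can be illustrated with an Euler-Maruyama step of a Langevin-type (Ornstein-Uhlenbeck) SDE. The actual Fokker-Planck model's nonlinear drift coefficients are omitted here, and the relaxation time tau and thermal speed a are illustrative values:

```python
import math
import random

def fp_velocity_step(v, dt, tau, u_mean=0.0, a=1.0, rng=random):
    """One Euler-Maruyama step of dv = -(v - U)/tau dt + sqrt(2 a^2/tau) dW.
    Each particle follows its own stochastic path, so no collision pairs are
    computed. This linear-drift form is a simplification of the full model."""
    drift = -(v - u_mean) / tau * dt
    diffusion = math.sqrt(2.0 * a * a / tau * dt) * rng.gauss(0.0, 1.0)
    return v + drift + diffusion

# Relax an ensemble from rest; the velocity variance should approach a^2 = 1
rng = random.Random(1)
vs = [0.0] * 5000
for _ in range(400):
    vs = [fp_velocity_step(v, dt=0.05, tau=1.0, rng=rng) for v in vs]
variance = sum(v * v for v in vs) / len(vs)
```

Because the cost per particle per step is constant, the work does not grow as the Knudsen number drops, which is the efficiency argument made in the abstract.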
A novel robust and efficient algorithm for charged particle tracking in high background flux
International Nuclear Information System (INIS)
Fanelli, C; Cisbani, E; Dotto, A Del
2015-01-01
The high luminosity that will be reached in the new generation of high energy particle and nuclear physics experiments implies high background rates and large tracker occupancy, and therefore represents a new challenge for particle tracking algorithms. For instance, at Jefferson Laboratory (JLab) (VA, USA), one of the most demanding experiments in this respect, performed with a 12 GeV electron beam, is characterized by a luminosity up to 10^39 cm^-2 s^-1. To this end, Gaseous Electron Multiplier (GEM) based trackers are under development for a new spectrometer that will operate at these high rates in Hall A of JLab. Within this context, we developed a new tracking algorithm based on a multistep approach: (i) all hardware information - time and charge - is exploited to minimize the number of hits to associate; (ii) a dedicated neural network (NN) has been designed for fast and efficient association of the hits measured by the GEM detector; (iii) the measurements of the associated hits are further improved in resolution through the application of a Kalman filter and a Rauch–Tung–Striebel smoother. The algorithm is presented briefly along with a discussion of the promising first results. (paper)
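Step (iii) refines the associated hits with a Kalman filter; a bare one-dimensional constant-velocity filter conveys the idea. This is a drastic simplification of a real track model, and the noise parameters `q` and `r` are assumed values:

```python
def kalman_1d(zs, dt=1.0, q=1e-3, r=0.5):
    """Constant-velocity Kalman filter over 1-D hit positions zs.
    State is (position x, velocity v); q is process noise, r is the
    measurement variance. A toy stand-in for the tracker's Kalman stage."""
    x, v = zs[0], 0.0
    pxx, pxv, pvv = 1.0, 0.0, 1.0          # covariance matrix entries
    out = []
    for z in zs:
        # predict: x' = x + v dt, P' = F P F^T + Q
        x += v * dt
        pxx += dt * (2 * pxv + dt * pvv) + q
        pxv += dt * pvv
        pvv += q
        # update with measurement z (H = [1, 0])
        s = pxx + r
        k_x, k_v = pxx / s, pxv / s
        resid = z - x
        x += k_x * resid
        v += k_v * resid
        pxx, pxv, pvv = (1 - k_x) * pxx, (1 - k_x) * pxv, pvv - k_v * pxv
        out.append(x)
    return out

# Noiseless straight "track": the filter locks onto slope 2 per step
est = kalman_1d([2.0 * i for i in range(30)])
```

The smoother mentioned in the abstract would then run backwards over the stored states to improve the early estimates as well.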
Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing
2015-01-01
An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm to perform a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected to be the new personal best particles and global best particle to guide the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of better global search capability and faster convergence rate.
Investigation of particle reduction and its transport mechanism in UHF-ECR dielectric etching system
International Nuclear Information System (INIS)
Kobayashi, Hiroyuki; Yokogawa, Ken'etsu; Maeda, Kenji; Izawa, Masaru
2008-01-01
Control of particle transport was investigated by using a UHF-ECR etching apparatus with a laser particle monitor. The particles, which float at a plasma-sheath boundary, fall on a wafer when the plasma is turned off. These floating particles can be removed from the region above the wafer by changing the plasma distribution. We measured the distribution of the rotational temperature of nitrogen molecules across the wafer to investigate the effect of the thermophoretic force. We found that mechanisms of particle transport in directions parallel to the wafer surface can be explained by the balance between thermophoretic and gas viscous forces
International Nuclear Information System (INIS)
Synakowski, E.J.; Efthimion, P.C.; Rewoldt, G.; Stratton, B.C.; Tang, W.M.; Bell, R.E.; Grek, B.; Hulse, R.A.; Johnson, D.W.; Hill, K.W.; Mansfield, D.K.; McCune, D.; Mikkelsen, D.R.; Park, H.K.; Ramsey, A.T.; Scott, S.D.; Taylor, G.; Timberlake, J.; Zarnstorff, M.C.
1992-01-01
Particle and energy transport in tokamak plasmas have long been subjects of vigorous investigation. Present-day measurement techniques permit radially resolved studies of the transport of electron perturbations, low- and high-Z impurities, and energy. In addition, developments in transport theory provide tools that can be brought to bear on transport issues. Here, we examine local particle transport measurements of electrons, fully stripped thermal helium, and helium-like iron in balanced-injection L-mode and enhanced confinement deuterium plasmas on TFTR of the same plasma current, toroidal field, and auxiliary heating power. He2+ and Fe24+ transport has been studied with charge exchange recombination spectroscopy, while electron transport has been studied by analyzing the perturbed electron flux following the same helium puff used for the He2+ studies. By examining the electron and He2+ responses following the same gas puff in the same plasmas, an unambiguous comparison of the transport of the two species has been made. The local energy transport has been examined with power balance analysis, allowing comparisons to the local thermal fluxes. Some particle and energy transport results from the Supershot have been compared to a transport model based on a quasilinear picture of electrostatic toroidal drift-type microinstabilities. Finally, the implications for future fusion reactors of the observed correlation between thermal transport and helium particle transport are discussed
Jets/MET Performance with the combination of Particle flow algorithm and SoftKiller
Yamamoto, Kohei
2017-01-01
The main purpose of my work is to study the performance of the combination of the particle flow algorithm (PFlow) and SoftKiller (SK), "PF+SK". The ATLAS experiment currently employs topological clusters (Topo) for jet reconstruction, but we want to replace them with a more effective alternative, PFlow. PFlow provides another method of reconstructing jets [1]: with this algorithm, we combine the energy deposits in the calorimeters with the measurements in the ID tracker. This strategy enables us to claim that these consistent measurements in the detector come from the same particles, and to avoid double counting. SK is a simple and effective way of suppressing pile-up [2]: we divide the rapidity-azimuth plane into square patches and eliminate particles with transverse momentum below a threshold pt_cut, which is derived from the pt of the patches so that the median pt density becomes zero. Practically, this is equal to gradually increasing pt_cut until exactly half of the patches become empty. Because there is no official calibration for PF+SK so far, we have t...
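The patch-median construction of the SK threshold can be sketched directly: grid the plane, take each patch's hardest particle, and use the median of those maxima as the cut, which is the smallest cut that empties half the patches. The patch size and rapidity acceptance below are assumed values, not ATLAS settings:

```python
import math

def softkiller_threshold(particles, patch_size, y_max):
    """SoftKiller sketch: bin (rapidity, phi) into square patches, record
    each patch's maximum particle pt, and return the median of those
    maxima as pt_cut (the smallest cut emptying half the patches)."""
    n_y = max(1, int(2 * y_max / patch_size))
    n_phi = max(1, int(2 * math.pi / patch_size))
    patch_max = [[0.0] * n_phi for _ in range(n_y)]
    for y, phi, pt in particles:
        iy = min(n_y - 1, int((y + y_max) / patch_size))
        ip = min(n_phi - 1, int((phi % (2 * math.pi)) / patch_size))
        patch_max[iy][ip] = max(patch_max[iy][ip], pt)
    flat = sorted(m for row in patch_max for m in row)
    return flat[len(flat) // 2]

def apply_softkiller(particles, patch_size=2.5, y_max=2.5):
    """Drop every (y, phi, pt) particle at or below the SK threshold."""
    cut = softkiller_threshold(particles, patch_size, y_max)
    return [p for p in particles if p[2] > cut]

# Uniform soft activity in every patch plus one hard particle:
# the soft level sets the cut, and only the hard particle survives
soft = [(-1.0, 1.0, 0.5), (-1.0, 3.0, 0.5), (1.0, 1.0, 0.5), (1.0, 3.0, 0.5)]
hard = [(1.0, 1.0, 10.0)]
kept = apply_softkiller(soft + hard)
```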
Particle transport in subaqueous eruptions: An experimental investigation
Verolino, A.; White, J. D. L.; Zimanowski, B.
2018-01-01
Subaqueous volcanic eruptions are natural events common under the world's oceans. Here we report results from bench-scale underwater explosions that entrain and eject particles into a water tank. Our aim was to examine how particles are transferred to the water column and begin to sediment from it, and to visualize and interpret evolution of the 'eruption' cloud. Understanding particle transfer to water is a key requirement for using deposit characteristics to infer behaviour and evolution of an underwater eruption. For the experiments here, we used compressed argon to force different types of particles, under known driving pressures, into water within a container, and recorded the results at 1 MPx/frame and 1000 fps. Three types of runs were completed: (1) particles within water were driven into a water-filled container; (2) dry particles were driven into water; (3) dry particles were driven into air at atmospheric pressure. Across the range of particles used for all subaqueous runs, we observed: a) initial doming, b) a main expansion of decompressing gas, and c) a phase of necking, when a forced plume separated from the driving jet. Phase c did not take place for the subaerial runs. A key observation is that none of the subaqueous explosions produced a single, simple, open cavity; in all cases, multiphase mixtures of gas bubbles, particles and water were formed. Explosions in which the expanding argon ejects particles in air, analogous to delivery of particles created in an explosion, produce jets and forced plumes that release particles into the tank more readily than do those in which particles in water are driven into the tank. The latter runs mimic propulsion of an existing vent slurry by an explosion. Explosions with different particle types also yielded differences in behaviour controlled primarily by particle mass, particle density, and particle-population homogeneity. Particles were quickly delivered into the water column during plume rise following
Parallel/vector algorithms for the spherical S_N transport theory method
International Nuclear Information System (INIS)
Haghighat, A.; Mattis, R.E.
1990-01-01
This paper discusses vector and parallel processing of a 1-D curvilinear (i.e. spherical) S_N transport theory algorithm on the Cornell National SuperComputer Facility (CNSF) IBM 3090/600E. Two different vector algorithms were developed and parallelized based on angular decomposition. It is shown that significant speedups are attainable. For example, for problems with large granularity, using 4 processors, the parallel/vector algorithm achieves speedups (for wall-clock time) of more than 4.5 relative to the old serial/scalar algorithm. Furthermore, this work has demonstrated the existing potential for the development of faster processing vector and parallel algorithms for multidimensional curvilinear geometries. (author)
Analysis of Massively Parallel Discrete-Ordinates Transport Sweep Algorithms with Collisions
International Nuclear Information System (INIS)
Bailey, T.S.; Falgout, R.D.
2008-01-01
We present theoretical scaling models for a variety of discrete-ordinates sweep algorithms. In these models, we pay particular attention to the way each algorithm handles collisions. A collision is defined as a processor having multiple angles ready to be swept during one stage of the sweep. The models also take into account how subdomains are assigned to processors and how angles are grouped during the sweep. We describe a data-driven algorithm that resolves collisions efficiently during the sweep, as well as other algorithms that have been designed to avoid collisions completely. Our models are validated using the ARGES and AMTRAN transport codes. We then use the models to study and predict scaling trends in all of the sweep algorithms
Convective and diffusive effects on particle transport in asymmetric periodic capillaries.
Directory of Open Access Journals (Sweden)
Nazmul Islam
Full Text Available We present here results of a theoretical investigation of particle transport in longitudinally asymmetric but axially symmetric capillaries, allowing for the influence of both diffusion and convection. In this study we have focused attention primarily on characterizing the influence of tube geometry and applied hydraulic pressure on the magnitude, direction and rate of transport of particles in axisymmetric, saw-tooth shaped tubes. Three initial value problems are considered. The first involves the evolution of a fixed number of particles initially confined to a central wave-section. The second involves the evolution of the same initial state but with an ongoing production of particles in the central wave-section. The third involves the evolution of particles in a fully laden tube. Based on a physical model of convective-diffusive transport, assuming an underlying oscillatory fluid velocity field that is unaffected by the presence of the particles, we find that transport rates and even net transport directions depend critically on design specifics, such as tube geometry, flow rate, initial particle configuration and whether or not particles are continuously introduced. The second transient scenario is qualitatively independent of the details of how particles are generated. In the third scenario there is no net transport. As the study is fundamental in nature, our findings could engender greater understanding of practical systems.
International Nuclear Information System (INIS)
Walsh, Jonathan A.; Palmer, Todd S.; Urbatsch, Todd J.
2015-01-01
Highlights: • Generation of discrete differential scattering angle and energy loss cross sections. • Gauss–Radau quadrature utilizing numerically computed cross section moments. • Development of a charged particle transport capability in the Milagro IMC code. • Integration of cross section generation and charged particle transport capabilities. - Abstract: We investigate a method for numerically generating discrete scattering cross sections for use in charged particle transport simulations. We describe the cross section generation procedure and compare it to existing methods used to obtain discrete cross sections. The numerical approach presented here is generalized to allow greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data computed with this method compare favorably with discrete data generated with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code, Milagro. We verify the implementation of charged particle transport in Milagro with analytic test problems and we compare calculated electron depth–dose profiles with another particle transport code that has a validated electron transport capability. Finally, we investigate the integration of the new discrete cross section generation method with the charged particle transport capability in Milagro.
An Adaptive Multi-Objective Particle Swarm Optimization Algorithm for Multi-Robot Path Planning
Directory of Open Access Journals (Sweden)
Nizar Hadi Abbas
2016-07-01
Full Text Available This paper discusses an optimal path planning algorithm based on an Adaptive Multi-Objective Particle Swarm Optimization Algorithm (AMOPSO) for two case studies. In the first case, a single robot must reach a goal in a static environment that contains two obstacles and two danger sources. In the second case, five robots must each find the shortest way to their goals. For the first case, the proposed algorithm finds the minimum distance from the initial to the goal position while ensuring that the generated path keeps a maximum distance from the danger zones. For the second case, it finds the shortest path for every robot, without any collision between robots, in the shortest time. In order to evaluate the proposed algorithm in terms of finding the best solution, six benchmark test functions are used to compare AMOPSO with the standard MOPSO. The results show that AMOPSO escapes local optima more readily and converges faster than MOPSO. The simulation results, obtained using Matlab 2014a, indicate that this methodology is extremely valuable for every robot in a multi-robot framework to discover its own proper path from the start to the destination position with minimum distance and time.
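Several of the PSO variants catalogued in these abstracts start from the same canonical update rule: each particle's velocity blends inertia, a pull toward its personal best, and a pull toward the swarm's global best. A minimal sketch in Python (the parameter values w, c1, c2 and the sphere test function are illustrative assumptions, not taken from any of the papers here):

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over a box using the canonical PSO update rule."""
    lo, hi = bounds
    # Initialize particle positions and velocities inside the box.
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                      # personal best positions
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]        # global best
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull + social pull.
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
                x[d] = min(hi, max(lo, x[d] + vs[i][d]))
            fx = f(x)
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[:], fx
    return gbest, gbest_f

random.seed(1)
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3, bounds=(-5.0, 5.0))
```

Multi-objective variants such as AMOPSO replace the single global best with an archive of non-dominated solutions, but the velocity/position update itself is unchanged.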
Energy Technology Data Exchange (ETDEWEB)
Wang, Cheng-Der, E-mail: jdwang@iner.gov.tw [Nuclear Engineering Division, Institute of Nuclear Energy Research, No. 1000, Wenhua Rd., Jiaan Village, Longtan Township, Taoyuan County 32546, Taiwan, ROC (China); Lin, Chaung [National Tsing Hua University, Department of Engineering and System Science, 101, Section 2, Kuang Fu Road, Hsinchu 30013, Taiwan (China)
2013-02-15
Highlights: ► The PSO algorithm was adopted to automatically design a BWR CRP. ► A local search procedure was added to improve the result of the PSO algorithm. ► The obtained CRP is as good as that found in the previous work. -- Abstract: This study developed a method for the automatic design of a boiling water reactor (BWR) control rod pattern (CRP) using the particle swarm optimization (PSO) algorithm. The PSO algorithm involves more randomness than the rank-based ant system (RAS) that was used to solve the same BWR CRP design problem in previous work. In addition, a local search procedure was used to make improvements after PSO by adding the single control rod (CR) effect. The design goal was to obtain a CRP such that the thermal limits and shutdown margin satisfy the design requirements and the cycle length, which is implicitly controlled by the axial power distribution, is acceptable. The results showed that a CRP as acceptable as that found in the previous work could be obtained.
A Particle Swarm Optimization Algorithm for Neural Networks in Recognition of Maize Leaf Diseases
Directory of Open Access Journals (Sweden)
Zhiyong ZHANG
2014-03-01
Full Text Available Neural networks are valuable for the recognition and diagnosis of crop diseases, but they suffer from slow convergence and a tendency to become trapped in local optima. In order to identify maize leaf diseases more accurately using machine vision, we propose an improved particle swarm optimization algorithm for training neural networks. The algorithm improves the neural network's performance: it reasonably determines the thresholds and connection weights of the neural network and improves its problem-solving capability in image recognition. Finally, a simulation example shows that the neural network model optimized by the proposed algorithm recognizes maize leaf diseases significantly better than the unoptimized model, and its accuracy has been improved enough to meet the practical needs of maize leaf disease recognition.
Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng
2018-02-01
Large-scale integration of distributed generation can relieve current environmental pressure but, at the same time, increases the complexity and uncertainty of the overall distribution system. Rational planning of distributed generation can effectively improve the system voltage level. To this end, the specific impact of typical distributed generation on distribution network power quality was analyzed, and an improved particle swarm optimization algorithm (IPSO) was proposed that modifies the learning factors and the inertia weight to improve the local and global search performance of the algorithm when solving distributed generation planning for the distribution network. Results show that the proposed method can substantially reduce the system network loss and improve the economic performance of system operation with distributed generation.
OPTIMIZATION OF PLY STACKING SEQUENCE OF COMPOSITE DRIVE SHAFT USING PARTICLE SWARM ALGORITHM
Directory of Open Access Journals (Sweden)
CHANNAKESHAVA K. R.
2011-06-01
Full Text Available In this paper an attempt has been made to optimize the ply stacking sequence of single-piece E-Glass/Epoxy and Boron/Epoxy composite drive shafts using the particle swarm algorithm (PSA). PSA is a population-based evolutionary stochastic optimization technique and a recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations. A PSA program was developed to optimize the ply stacking sequence with the objective of weight minimization, considering torque transmission capacity, fundamental natural frequency, lateral vibration and torsional buckling strength as design constraints, and the number of laminates, ply thickness and stacking sequence as design variables. The weight savings of the E-Glass/Epoxy and Boron/Epoxy shafts from PSA were 51% and 85% relative to the steel shaft, respectively. The optimum results of PSA are compared with those of a genetic algorithm (GA), and PSA is found to yield better results than GA.
The particle swarm optimization algorithm applied to nuclear systems surveillance test planning
International Nuclear Information System (INIS)
Siqueira, Newton Norat
2006-12-01
This work shows a new approach to solve availability maximization problems in electromechanical systems under periodic preventive scheduled tests. The approach integrates a probabilistic safety analysis model with Particle Swarm Optimization (PSO), an optimization tool developed by Kennedy and Eberhart (2001). Two maintenance optimization problems are solved by the proposed technique: the first is a hypothetical electromechanical configuration and the second is a real case from a nuclear power plant (emergency diesel generators). For both problems, PSO is compared to a genetic algorithm (GA). In the experiments performed, PSO obtained results comparable to, or even slightly better than, those obtained by GA, while being simpler and converging faster, indicating that PSO is a good alternative for solving this kind of problem. (author)
International Nuclear Information System (INIS)
Lian Zhigang; Gu Xingsheng; Jiao Bin
2008-01-01
It is well known that the flow-shop scheduling problem (FSSP) is a branch of production scheduling and is NP-hard. Many different approaches have been applied to permutation flow-shop scheduling to minimize makespan, but current algorithms cannot guarantee optimality even for moderately sized problems. Several studies have applied PSO to continuous optimization problems, but few address discrete scheduling problems. In this paper, according to the discrete character of FSSP, a novel particle swarm optimization (NPSO) algorithm is presented and successfully applied to permutation flow-shop scheduling to minimize makespan. Computational experiments on seven representative instances (Taillard), based on practical data, comparing NPSO with a standard GA show that NPSO is clearly more efficacious than the standard GA for FSSP makespan minimization.
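The objective that any such algorithm evaluates for every candidate permutation is the flow-shop makespan, computable with a simple recurrence: a job starts on machine m only when it has finished on machine m-1 and machine m is free. A minimal sketch (the 3x2 processing-time matrix is an illustrative assumption, not a Taillard instance):

```python
def makespan(perm, proc):
    """Completion time of the last job on the last machine for a permutation
    flow shop. proc[j][m] is the processing time of job j on machine m."""
    n_machines = len(proc[0])
    finish = [0] * n_machines  # finish time of the previous job on each machine
    for j in perm:
        t = 0
        for m in range(n_machines):
            # Job j starts on machine m when it leaves machine m-1 (time t)
            # and machine m has finished its previous job (finish[m]).
            t = max(t, finish[m]) + proc[j][m]
            finish[m] = t
    return finish[-1]

# 3 jobs x 2 machines; evaluating one candidate permutation.
proc = [[3, 2], [1, 4], [2, 2]]
best_order_cost = makespan([1, 0, 2], proc)  # -> 9
```

A discrete PSO for FSSP then only has to encode permutations as particles; this fitness function is the part common to all the metaheuristics compared.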
Multiple R&D projects scheduling optimization with improved particle swarm algorithm.
Liu, Mengqi; Shan, Miyuan; Wu, Juan
2014-01-01
For most enterprises, a key step to winning the initiative in fierce market competition is to improve their R&D ability in order to meet the various demands of customers more quickly and at lower cost. This paper discusses the features of multiple R&D environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a given period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the version developed here to the resource-constrained multi-project scheduling model in a simulation experiment. The experiment demonstrates both the feasibility of the model and the validity of the algorithm.
A Hard Constraint Algorithm to Model Particle Interactions in DNA-laden Flows
Energy Technology Data Exchange (ETDEWEB)
Trebotich, D; Miller, G H; Bybee, M D
2006-08-01
We present a new method for particle interactions in polymer models of DNA. The DNA is represented by a bead-rod polymer model and is fully-coupled to the fluid. The main objective in this work is to implement short-range forces to properly model polymer-polymer and polymer-surface interactions, specifically, rod-rod and rod-surface uncrossing. Our new method is based on a rigid constraint algorithm whereby rods elastically bounce off one another to prevent crossing, similar to our previous algorithm used to model polymer-surface interactions. We compare this model to a classical (smooth) potential which acts as a repulsive force between rods, and rods and surfaces.
A method and algorithm for correlating scattered light and suspended particles in polluted water
International Nuclear Information System (INIS)
Sami Gumaan Daraigan; Mohd Zubir Matjafri; Khiruddin Abdullah; Azlan Abdul Aziz; Abdul Aziz Tajuddin; Mohd Firdaus Othman
2005-01-01
An optical model has been developed for measuring total suspended solids (TSS) concentrations in water. The approach is based on the characteristics of light scattered from the suspended particles in water samples. An optical sensor system (an active spectrometer) was developed to correlate pollutant (TSS) concentration with the scattered radiation. Scattered light was measured in terms of the output voltage of the sensor system's phototransistor. The developed algorithm was used to estimate the concentrations of the polluted water samples, and was calibrated using the observed readings. The results display a strong correlation between the radiation values and the total suspended solids concentrations. The proposed system yields a high degree of accuracy, with a correlation coefficient (R) of 0.99 and a root mean square error (RMS) of 63.57 mg/l. (Author)
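The calibration step described, correlating sensor voltage with TSS concentration and reporting R and RMS error, amounts to an ordinary least-squares line fit. A minimal sketch (the sample data are illustrative, not the paper's measurements):

```python
import math

def calibrate(volts, tss):
    """Least-squares line tss ~ a*volt + b, plus correlation R and RMS error."""
    n = len(volts)
    mx, my = sum(volts) / n, sum(tss) / n
    sxx = sum((x - mx) ** 2 for x in volts)
    sxy = sum((x - mx) * (y - my) for x, y in zip(volts, tss))
    syy = sum((y - my) ** 2 for y in tss)
    a = sxy / sxx                      # slope of the calibration line
    b = my - a * mx                    # intercept
    r = sxy / math.sqrt(sxx * syy)    # Pearson correlation coefficient
    rms = math.sqrt(sum((a * x + b - y) ** 2
                        for x, y in zip(volts, tss)) / n)
    return a, b, r, rms
```

With the fitted (a, b), a new voltage reading maps directly to an estimated TSS concentration, and R and RMS quantify the calibration quality as in the abstract.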
Relationship between particle and heat transport in JT-60U plasmas with internal transport barrier
International Nuclear Information System (INIS)
Takenaga, H.; Higashijima, S.; Oyama, N.
2003-01-01
The relationship between particle and heat transport in an internal transport barrier (ITB) has been systematically investigated in reversed shear (RS) and high β p ELMy H-mode plasmas in JT-60U. No helium and carbon accumulation inside the ITB is observed even with ion heat transport reduced to a neoclassical level. On the other hand, the heavy impurity argon is accumulated inside the ITB. The argon density profile estimated from the soft x-ray profile is more peaked, by a factor of 2-4 in the RS plasma and of 1.6 in the high β p mode plasma, than the electron density profile. The helium diffusivity (D He ) and the ion thermal diffusivity (χ i ) are at an anomalous level in the high β p mode plasma, where D He and χ i are higher by a factor of 5-10 than the neoclassical value. In the RS plasma, D He is reduced from the anomalous to the neoclassical level, together with χ i . The carbon and argon density profiles calculated using the transport coefficients reduced to the neoclassical level only in the ITB are more peaked than the measured profiles, even when χ i is reduced to the neoclassical level. Argon exhaust from the inside of the ITB is demonstrated by applying ECH in the high β p mode plasma, where both electron and argon density profiles become flatter. The reduction of the neoclassical inward velocity for argon due to the reduction of density gradient is consistent with the experimental observation. In the RS plasma, the density gradient is not decreased by ECH and argon is not exhausted. These results suggest the importance of density gradient control to suppress heavy impurity accumulation. (author)
Electron cyclotron absorption in Tokamak plasmas in the presence of radial transport of particles
International Nuclear Information System (INIS)
Rosa, Paulo R. da S.; Ziebell, Luiz F.
1998-01-01
We use quasilinear theory to study the effects of radial particle transport on the electron cyclotron absorption coefficient of a current-carrying plasma in a tokamak, modeled as a plasma slab. Our numerical results indicate a significant modification in the profile of the electron cyclotron absorption coefficient when transport is taken into account, relative to the situation without transport. (author)
Sung, Wen-Tsai; Chiang, Yen-Chun
2012-12-01
This study examines a wireless sensor network with real-time remote identification using the Android-based Internet of Things (HCIOT) platform in community healthcare. An improved particle swarm optimization (PSO) method is proposed to efficiently enhance the measurement precision of physiological multi-sensor data fusion in the Internet of Things (IoT) system. The improved PSO (IPSO) includes inertia weight factor design and shrinkage factor adjustment to improve the algorithm's data fusion performance. The Android platform is employed to build multi-physiological signal processing and timely medical care analysis. Wireless sensor network signal transmission and Internet links allow community or family members to receive timely medical care network services.
Sathish Kumar, V. R.; Anbuudayasankar, S. P.; Rameshkumar, K.
2018-02-01
In the current globalized scenario, business organizations are more dependent on cost effective supply chain to enhance profitability and better handle competition. Demand uncertainty is an important factor in success or failure of a supply chain. An efficient supply chain limits the stock held at all echelons to the extent of avoiding a stock-out situation. In this paper, a three echelon supply chain model consisting of supplier, manufacturing plant and market is developed and the same is optimized using particle swarm intelligence algorithm.
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the region ACO (RACO) algorithm, the stochastic ACO (SACO) algorithm and the homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R), normal (N-N), and logarithmic normal (L-N) distribution functions, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original and the general distribution functions when only the length of the rotational semi-axis is varied.
Yang, Y M; Bednarz, B
2013-02-21
Following the proposal by several groups to integrate magnetic resonance imaging (MRI) with radiation therapy, much attention has been afforded to examining the impact of strong (on the order of a Tesla) transverse magnetic fields on photon dose distributions. The effect of the magnetic field on dose distributions must be considered in order to take full advantage of the benefits of real-time intra-fraction imaging. In this investigation, we compared the handling of particle transport in magnetic fields between two Monte Carlo codes, EGSnrc and Geant4, to analyze various aspects of their electromagnetic transport algorithms; both codes are well-benchmarked for medical physics applications in the absence of magnetic fields. A water-air-water slab phantom and a water-lung-water slab phantom were used to highlight dose perturbations near high- and low-density interfaces. We have implemented a method of calculating the Lorentz force in EGSnrc based on theoretical models in literature, and show very good consistency between the two Monte Carlo codes. This investigation further demonstrates the importance of accurate dosimetry for MRI-guided radiation therapy (MRIgRT), and facilitates the integration of a ViewRay MRIgRT system in the University of Wisconsin-Madison's Radiation Oncology Department.
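The abstract does not reproduce either code's field-stepping method; as an illustration of the kind of magnetic-field step a particle transport code takes between interactions, the standard Boris rotation (an assumption here, not necessarily the EGSnrc or Geant4 implementation) advances the velocity under the Lorentz force while conserving speed exactly, which is why rotation-based steps are attractive in strong transverse fields:

```python
import math

def boris_push(v, b, qm, dt):
    """One Boris rotation step for velocity v in magnetic field b (no E field).
    qm is the charge-to-mass ratio. The step is a pure rotation of v about b,
    so |v| is preserved up to floating-point roundoff."""
    t = [qm * dt / 2.0 * bi for bi in b]
    t2 = sum(ti * ti for ti in t)
    s = [2.0 * ti / (1.0 + t2) for ti in t]
    # Half rotation: v' = v + v x t
    vp = [v[0] + (v[1] * t[2] - v[2] * t[1]),
          v[1] + (v[2] * t[0] - v[0] * t[2]),
          v[2] + (v[0] * t[1] - v[1] * t[0])]
    # Full rotation: v_new = v + v' x s
    return [v[0] + (vp[1] * s[2] - vp[2] * s[1]),
            v[1] + (vp[2] * s[0] - vp[0] * s[2]),
            v[2] + (vp[0] * s[1] - vp[1] * s[0])]

# Electron-like gyration in a uniform transverse field (units are illustrative).
v = [1.0, 0.0, 0.0]
for _ in range(1000):
    v = boris_push(v, b=[0.0, 0.0, 1.0], qm=1.0, dt=0.01)
speed = math.sqrt(sum(vi * vi for vi in v))
```

After many steps the speed remains at its initial value to roundoff, the property that makes such steps well suited to dose calculations in magnetic fields.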
LC HCAL Absorber And Active Media Comparisons Using a Particle-Flow Algorithm
International Nuclear Information System (INIS)
Magill, Steve; Kuhlmann, S.
2006-01-01
We compared stainless steel (SS) to tungsten (W) as absorber for the HCAL in simulation, using single particles (pions) and a particle-flow algorithm (PFA) applied to e + e - -> Z -> qqbar events. We then used the PFA to evaluate the performance characteristics of a LC HCAL using W absorber, comparing scintillator and RPC as active media. The W/scintillator HCAL performs better than the SS/scintillator version due to finer λ I sampling and narrower showers in the dense absorber. The W/scintillator HCAL also performs better than the W/RPC HCAL except in the number of unused hits in the PFA. Since this number represents the confusion term in the PFA response, additional tuning and optimization of a W/RPC HCAL might significantly improve this configuration.
A new multiple robot path planning algorithm: dynamic distributed particle swarm optimization.
Ayari, Asma; Bouamama, Sadok
2017-01-01
Multiple robot systems have become a major study concern in the field of robotic research. Their control becomes unreliable and even infeasible as the number of robots increases. In this paper, a new dynamic distributed particle swarm optimization (D²PSO) algorithm is proposed for trajectory path planning of multiple robots in order to find a collision-free optimal path for each robot in the environment. The proposed approach consists of calculating two local optima detectors, LOD pBest and LOD gBest . Particles that are unable to improve their personal best and global best for a predefined number of successive iterations are replaced with restructured ones. Stagnation and local optima problems are thus avoided by adding diversity to the population, without losing the fast convergence characteristic of PSO. Experiments with multiple robots are provided and prove the effectiveness of this approach compared with distributed PSO.
A Novel Cluster Head Selection Algorithm Based on Fuzzy Clustering and Particle Swarm Optimization.
Ni, Qingjian; Pan, Qianqian; Du, Huimin; Cao, Cen; Zhai, Yuqing
2017-01-01
An important objective of a wireless sensor network is to prolong the network life cycle, and topology control is of great significance for extending it. Building on previous work, for cluster head selection in hierarchical topology control, we propose a solution based on fuzzy clustering preprocessing and particle swarm optimization. More specifically, first, a fuzzy clustering algorithm is used for initial clustering of sensor nodes according to geographical location, where a sensor node belongs to a cluster with a determined probability, and the number of initial clusters is analyzed and discussed. Furthermore, the fitness function is designed considering both the energy consumption and distance factors of the wireless sensor network. Finally, the cluster head nodes in the hierarchical topology are determined based on the improved particle swarm optimization. Experimental results show that, compared with traditional methods, the proposed method reduces the mortality rate of nodes and extends the network life cycle.
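The fuzzy-clustering preprocessing step, in which a sensor node "belongs to a cluster with a determined probability", corresponds to the standard fuzzy c-means membership rule: a node's degree of membership in each cluster is set by its relative distances to the cluster centers. A minimal sketch (the fuzzifier m = 2 and the example coordinates are illustrative assumptions):

```python
def fcm_memberships(points, centers, m=2.0):
    """Standard fuzzy c-means membership update: degrees in each row sum to 1,
    and closer centers get larger degrees. m > 1 is the fuzzifier."""
    def dist(p, c):
        return sum((pi - ci) ** 2 for pi, ci in zip(p, c)) ** 0.5
    u = []
    for p in points:
        # Clamp distances to avoid division by zero at a center.
        ds = [max(dist(p, c), 1e-12) for c in centers]
        row = [1.0 / sum((dj / dk) ** (2.0 / (m - 1)) for dk in ds)
               for dj in ds]
        u.append(row)
    return u
```

A full FCM iteration would also recompute centers as membership-weighted means; here only the membership step, the "determined probability" of the abstract, is shown.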
Fully implicit Particle-in-cell algorithms for multiscale plasma simulation
Energy Technology Data Exchange (ETDEWEB)
Chacon, Luis [Los Alamos National Laboratory
2015-07-16
The outline of the paper is as follows: Particle-in-cell (PIC) methods for fully ionized collisionless plasmas, explicit vs. implicit PIC, 1D ES implicit PIC (charge and energy conservation, moment-based acceleration), and generalization to Multi-D EM PIC: Vlasov-Darwin model (review and motivation for Darwin model, conservation properties (energy, charge, and canonical momenta), and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle mover strategy), exact global energy conservation (no particle self-heating or self-cooling), adaptive particle orbit integrator to control errors in momentum conservation, and canonical momenta (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities: ω_{pe}Δt >> 1, and Δx >> λ_{D}. It requires many fewer dofs (vs. explicit PIC) for comparable accuracy in challenging problems. Significant CPU gains (vs explicit PIC) have been demonstrated. The method has much potential for efficiency gains vs. explicit in long-time-scale applications. Moment-based acceleration is effective in minimizing N_{FE}, leading to an optimal algorithm.
International Nuclear Information System (INIS)
Chen, Zhenping; Song, Jing; Zheng, Huaqing; Wu, Bin; Hu, Liqin
2015-01-01
Highlights: • The subdivision combines the advantages of uniform and non-uniform schemes. • The grid models were proved to be more efficient than traditional CSG models. • Monte Carlo simulation performance was enhanced by Optimal Spatial Subdivision. • Efficiency gains were obtained for realistic whole reactor core models. - Abstract: Geometry navigation is one of the key factors dominating Monte Carlo particle transport simulation performance for large-scale whole reactor models. In such cases, spatial subdivision is an easily established and high-potential method to improve the run-time performance. In this study, a dedicated method, named Optimal Spatial Subdivision, is proposed for generating numerically optimal spatial grid models, which are demonstrated to be more efficient for geometry navigation than traditional Constructive Solid Geometry (CSG) models. The method uses a recursive subdivision algorithm to subdivide a CSG model into non-overlapping grids, which are labeled as totally or partially occupied, or not occupied at all, by CSG objects. The most important point is that, at each stage of subdivision, a quality factor based on a cost estimation function is derived to evaluate the qualities of the candidate subdivision schemes. Only the scheme with the optimal quality factor is chosen as the final subdivision strategy for generating the grid model. Eventually, the model built with the optimal quality factor is efficient for Monte Carlo particle transport simulation. The method has been implemented and integrated into the Super Monte Carlo program SuperMC developed by FDS Team. Testing cases were used to highlight the performance gains that could be achieved. Results showed that Monte Carlo simulation runtime could be reduced significantly when using the new method, even as cases reached whole reactor core model sizes.
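The core idea, scoring candidate subdivision schemes with a cost-based quality factor and splitting only when it pays off, can be illustrated in one dimension. The candidate split fractions and the cost estimate below are illustrative assumptions, not SuperMC's actual quality factor:

```python
def subdivide(objs, lo, hi, max_objs=2, depth=0, max_depth=8):
    """Recursively split [lo, hi) into a binary grid over 1-D object
    intervals (start, end). At each stage a cost estimate plays the role
    of the quality factor: split only if the best split beats not splitting."""
    inside = [o for o in objs if o[1] > lo and o[0] < hi]
    if len(inside) <= max_objs or depth >= max_depth:
        return ("leaf", lo, hi, inside)
    best = None
    for frac in (0.25, 0.5, 0.75):           # candidate split planes
        mid = lo + frac * (hi - lo)
        left = [o for o in inside if o[0] < mid]
        right = [o for o in inside if o[1] > mid]
        # Cost estimate: probability of a track entering a child (its width)
        # times the number of objects that must be tested there.
        cost = (mid - lo) * len(left) + (hi - mid) * len(right)
        if best is None or cost < best[0]:
            best = (cost, mid)
    no_split_cost = (hi - lo) * len(inside)
    if best[0] >= no_split_cost:             # splitting does not improve quality
        return ("leaf", lo, hi, inside)
    mid = best[1]
    return ("node", mid,
            subdivide(objs, lo, mid, max_objs, depth + 1, max_depth),
            subdivide(objs, mid, hi, max_objs, depth + 1, max_depth))
```

The same pattern generalizes to 3-D cells containing CSG objects, with the cost function estimating ray-traversal work per grid cell.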
Parallel algorithms for 2-D cylindrical transport equations of Eigenvalue problem
International Nuclear Information System (INIS)
Wei, J.; Yang, S.
2013-01-01
In this paper, aimed at the neutron transport eigenvalue problem in 2-D cylindrical geometry on unstructured grids, a discrete scheme combining the Sn discrete ordinates method with discontinuous finite elements is built, and parallel computation for the scheme is realized on MPI systems. Numerical experiments indicate that the designed parallel algorithm achieves near-perfect speedup and has good practicality and scalability. (authors)
Drummond, Jen; Davies-Colley, Rob; Stott, Rebecca; Sukias, James; Nagels, John; Sharp, Alice; Packman, Aaron
2014-05-01
Transport dynamics of microbial cells and organic fine particles are important to stream ecology and biogeochemistry. Cells and particles continuously deposit and resuspend during downstream transport owing to a variety of processes, including gravitational settling, interactions with in-stream structures or biofilms at the sediment-water interface, and hyporheic exchange and filtration within underlying sediments. Deposited cells and particles are also resuspended following increases in streamflow. Fine particle retention influences biogeochemical processing of substrates and nutrients (C, N, P), while remobilization of pathogenic microbes during flood events presents a hazard to downstream uses such as water supplies and recreation. We are conducting studies to gain insights into the dynamics of fine particles and microbes in streams through a campaign of experiments and modeling. The results improve understanding of fine sediment transport, carbon cycling, nutrient spiraling, and microbial hazards in streams. We developed a stochastic model to describe the transport and retention of fine particles and microbes in rivers that accounts for hyporheic exchange and transport through porewaters, reversible filtration within the streambed, and microbial inactivation in the water column and subsurface. This model framework is an advance over previous work in that it incorporates detailed transport and retention processes that are amenable to measurement. Solute, particle, and microbial transport were observed both locally within sediment and at the whole-stream scale. A multi-tracer whole-stream injection experiment compared the transport and retention of a conservative solute, fluorescent fine particles, and the fecal indicator bacterium Escherichia coli. Retention occurred within both the underlying sediment bed and stands of submerged macrophytes. The results demonstrate that the combination of local measurements, whole-stream tracer experiments, and advanced modeling improves understanding of particle and microbial dynamics in streams.
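A stripped-down version of such a stochastic transport model treats each particle as alternating between a mobile (advecting) state and an immobile (retained) state with exponential waiting times. The rates and velocity below are illustrative assumptions; the full model in the abstract additionally includes hyporheic porewater transport and microbial inactivation:

```python
import random

def simulate(n=5000, t_end=100.0, v=0.2, k_dep=0.1, k_res=0.05):
    """Two-state stochastic model: particles advect at velocity v while
    mobile, deposit at rate k_dep, and resuspend at rate k_res (all values
    illustrative). Returns the mean downstream position at t_end."""
    total = 0.0
    for _ in range(n):
        t, x, mobile = 0.0, 0.0, True
        while t < t_end:
            rate = k_dep if mobile else k_res
            dt = min(random.expovariate(rate), t_end - t)  # time to next switch
            if mobile:
                x += v * dt        # advection only while mobile
            t += dt
            mobile = not mobile
        total += x
    return total / n
```

At steady state a fraction k_res / (k_dep + k_res) of particles is mobile, so the mean velocity of the plume is retarded by exactly that factor relative to the water, which is the basic signature such models are fit against in tracer experiments.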
International Nuclear Information System (INIS)
Cox, R.G.
1984-01-01
Much controversy surrounds government regulation of the routing and scheduling of Hazardous Materials Transportation (HMT). Increases in operating costs must be balanced against expected benefits from local HMT bans and curfews when promulgating or preempting HMT regulations. Algorithmic approaches for evaluating HMT routing and scheduling regulatory policy are described. A review of current US HMT regulatory policy is presented to provide context for the analysis. Next, a multiobjective shortest path algorithm to find the set of efficient routes under conflicting objectives is presented. This algorithm generates all efficient routes under any partial ordering in a single pass through the network. Scheduling algorithms are also presented to estimate the travel time delay due to HMT curfews along a route, assuming either deterministic or stochastic travel times between curfew cities, with possible rerouting to avoid such cities. These algorithms are applied to a case study of US highway transport of spent nuclear fuel from reactors to permanent repositories. Two data sets were used: one included the US Interstate Highway System (IHS) network with reactor locations, possible repository sites, and 150 heavily populated areas (HPAs); the other contained estimates of the population residing within 0.5 miles of the IHS in the Eastern US. Curfew delay is dramatically reduced by optimally scheduling departure times unless inter-HPA travel times are highly uncertain. Rerouting shipments to avoid HPAs is a less efficient approach to reducing delay.
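A multiobjective shortest-path algorithm of the kind described maintains, at each node, a set of non-dominated labels rather than a single distance. A minimal biobjective sketch (the (cost, risk) edge weights and the small graph are illustrative; the paper's algorithm handles arbitrary partial orderings):

```python
from heapq import heappush, heappop

def pareto_paths(graph, src, dst):
    """Biobjective shortest-path labeling: each edge carries (cost, risk);
    the result is the sorted set of non-dominated (cost, risk) pairs at dst."""
    labels = {src: [(0, 0)]}
    heap = [(0, 0, src)]
    while heap:
        c, r, u = heappop(heap)
        if (c, r) not in labels.get(u, []):
            continue                          # label was pruned meanwhile
        for v, (dc, dr) in graph.get(u, []):
            cand = (c + dc, r + dr)
            cur = labels.setdefault(v, [])
            # Keep cand only if no existing label dominates it.
            if any(lc <= cand[0] and lr <= cand[1] for lc, lr in cur):
                continue
            # Drop labels that cand dominates, then record cand.
            cur[:] = [(lc, lr) for lc, lr in cur
                      if not (cand[0] <= lc and cand[1] <= lr)]
            cur.append(cand)
            heappush(heap, (cand[0], cand[1], v))
    return sorted(labels.get(dst, []))

# Cheap-but-risky route vs. costly-but-safe route: both are efficient.
graph = {'A': [('B', (1, 5)), ('C', (4, 1))],
         'B': [('D', (1, 5))],
         'C': [('D', (1, 1))]}
fronts = pareto_paths(graph, 'A', 'D')
```

The returned Pareto front is exactly the "set of efficient routes" a regulator would weigh cost against population exposure over.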
Energy Technology Data Exchange (ETDEWEB)
Berkolaiko, G. [Department of Mathematics, Texas A&M University, College Station, Texas 77843-3368 (United States)]; Kuipers, J. [Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg (Germany)]
2013-12-15
Electronic transport through chaotic quantum dots exhibits universal behaviour which can be understood through the semiclassical approximation. Within the approximation, calculation of transport moments reduces to codifying classical correlations between scattering trajectories. These can be represented as ribbon graphs and we develop an algorithmic combinatorial method to generate all such graphs with a given genus. This provides an expansion of the linear transport moments for systems both with and without time reversal symmetry. The computational implementation is then able to progress several orders further than previous semiclassical formulae as well as those derived from an asymptotic expansion of random matrix results. The patterns observed also suggest a general form for the higher orders.
Directory of Open Access Journals (Sweden)
Sergey Kharitonov
2015-06-01
Optimum transport infrastructure usage is an important aspect of the development of the national economy of the Russian Federation. Thus, development of instruments for assessing the efficiency of infrastructure is impossible without constant monitoring of a number of significant indicators. This work is devoted to the selection of indicators and the method of their calculation for the transport subsystem of airport infrastructure. The work also evaluates how algorithmic computational mechanisms can improve the tools of public administration of transport subsystems.
DEFF Research Database (Denmark)
Ren, Jingzheng; Tan, Shiyu; Dong, Lichun
2010-01-01
A mathematical model relating operation profits with the reflux ratio of a stage distillation column was established. In order to optimize the reflux ratio by solving the nonlinear objective function, an improved particle swarm algorithm was developed and proved able to enhance the searching ability of the basic particle swarm algorithm significantly. An example of utilizing the improved algorithm to solve the mathematical model was demonstrated; the result showed that it is efficient and convenient to optimize the reflux ratio for a distillation column by using the mathematical model.
Directory of Open Access Journals (Sweden)
Zhou Feng
2013-09-01
A path-planning method for mobile robots based on the Rapidly-exploring Random Tree (RRT) and the Particle Swarm Optimizer (PSO) is proposed. First, the grid method is used to describe the working space of the mobile robot; then the Rapidly-exploring Random Tree algorithm is used to obtain a global navigation path, and the Particle Swarm Optimizer algorithm is adopted to refine it into a better path. Computer experiment results demonstrate that this novel algorithm can plan an optimal path rapidly in a cluttered environment. Successful obstacle avoidance is achieved, and the model is robust and performs reliably.
Directory of Open Access Journals (Sweden)
Huan Zhang
2017-01-01
For the problem of multiaircraft cooperative suppression interference array (MACSIA) against an enemy air defense radar network in electronic warfare mission planning, firstly, the concept of a route planning security zone is proposed and a solution for the minimum width of the security zone based on mathematical morphology is put forward. Secondly, the minimum width of the security zone and the sum of the distances between each jamming aircraft and the center of the radar network are taken as objective functions, the multiobjective optimization model of MACSIA is built, and an improved multiobjective particle swarm optimization algorithm is used to solve the model. A decomposition mechanism is adopted, and proportional distribution is used to maintain the diversity of newly found nondominated solutions. Finally, the Pareto optimal solutions are analyzed by simulation, the optimal MACSIA schemes for suppression of the enemy air defense radar network by each jamming aircraft are obtained, and the built multiobjective optimization model is verified to be correct. The results also show that the improved multiobjective particle swarm optimization algorithm is feasible and effective for solving the MACSIA problem.
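The maintenance of nondominated solutions mentioned above relies on Pareto dominance; a minimal dominance filter (a generic sketch, not the paper's decomposition-based mechanism) can be written as:

```python
def pareto_front(points):
    """Return the nondominated points, minimising every objective.
    A point q dominates p when q is no worse than p in all objectives
    and differs from p in at least one."""
    def dominated(p):
        return any(q != p and all(qk <= pk for qk, pk in zip(q, p))
                   for q in points)
    return [p for p in points if not dominated(p)]

# Four candidate solutions with two objectives each (both minimised).
front = pareto_front([(1, 2), (2, 1), (2, 2), (3, 3)])
```

Here `(2, 2)` and `(3, 3)` are dominated by `(1, 2)`, so only the two trade-off solutions survive in the front.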
Directory of Open Access Journals (Sweden)
Quanzhen Huang
2017-01-01
The numbers and locations of sensors and actuators play an important role in the cost and control performance of an active vibration control system for a piezoelectric smart structure, and the control system may perform poorly if sensors and actuators are not configured properly. An optimal location method for piezoelectric actuators and sensors is proposed in this paper based on a particle swarm algorithm (PSA). Due to the complexity of the frame structure, it can be treated as a combination of many piezoelectric intelligent beams and L-type structures. Firstly, an optimal criterion for sensors and actuators is proposed with an optimal objective function. Secondly, each order natural frequency and modal strain are calculated and substituted into the optimal objective function. Preliminary optimal allocation is done using the particle swarm algorithm, based on the similar optimization method and the combination of the vibration stress and strain distribution at the lower modal frequencies. Finally, the optimal locations are given. An experimental platform was established, and the experimental results indirectly verified the feasibility and effectiveness of the proposed method.
Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization
Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard
2002-01-01
The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that exhibits severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization as well as the numerical noise and truly discrete variables present in the current example problem.
International Nuclear Information System (INIS)
Noack, K.
1982-01-01
The perturbation source method may be a powerful Monte-Carlo means to calculate small effects in a particle field. In a preceding paper we have formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and have derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles, and draw conclusions on how to improve this method.
Zhang, Ruili; Wang, Yulei; He, Yang; Xiao, Jianyuan; Liu, Jian; Qin, Hong; Tang, Yifa
2018-02-01
Relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. The numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term numerical simulations. Therefore, explicit symplectic algorithms are much more preferable than non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. Then, we give the explicit symplectic algorithms based on the generating functions of orders 2 and 3 for relativistic dynamics of a charged particle. The methodology is not new; it has been applied to non-relativistic dynamics of charged particles, but the algorithm for relativistic dynamics has much significance in practical simulations, such as the secular simulation of runaway electrons in tokamaks.
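The splitting idea, expressing the Hamiltonian as a sum of exactly solvable terms and composing their exact flows, can be illustrated with a simple 1-D electrostatic case (a generic Strang-splitting sketch for H = sqrt(1 + p^2) + phi(x), not the paper's generating-function schemes; the harmonic potential is an arbitrary test case):

```python
import math

def strang_step(x, p, dt, dphidx):
    """One Strang-splitting step for H(x, p) = sqrt(1 + p^2) + phi(x).
    Each sub-flow is solved exactly (p is constant during the drift,
    x is constant during the kicks), so the composition is explicitly
    symplectic and second-order accurate."""
    p -= 0.5 * dt * dphidx(x)             # half kick from the potential term
    x += dt * p / math.sqrt(1.0 + p * p)  # exact drift of the relativistic kinetic term
    p -= 0.5 * dt * dphidx(x)             # second half kick
    return x, p

# Long-time integration in a harmonic potential phi(x) = x^2 / 2:
# the energy error stays bounded instead of drifting secularly.
grad = lambda q: q
x, p, dt = 1.0, 0.0, 0.01
energy0 = math.sqrt(1.0 + p * p) + 0.5 * x * x
for _ in range(100_000):
    x, p = strang_step(x, p, dt, grad)
energy = math.sqrt(1.0 + p * p) + 0.5 * x * x
```

The bounded energy error over 10^5 steps is the practical payoff of symplecticity that the abstract emphasises for long-term simulations.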
Buaria, D.; Yeung, P. K.
2017-12-01
A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations of turbulent fluid flow. The work is motivated by the desire to obtain Lagrangian information necessary for the study of turbulent dispersion at the largest problem sizes feasible on current and next-generation multi-petaflop supercomputers. A large population of fluid particles is distributed among parallel processes dynamically, based on instantaneous particle positions, such that all of the interpolation information needed for each particle is available either locally on its host process or on neighboring processes holding adjacent sub-domains of the velocity field. With cubic splines as the preferred interpolation method, the new algorithm is designed to minimize the need for communication, by transferring between adjacent processes only those spline coefficients determined to be necessary for specific particles. This transfer is implemented very efficiently as a one-sided communication, using Co-Array Fortran (CAF) features which facilitate small data movements between different local partitions of a large global array. The cost of monitoring transfer of particle properties between adjacent processes for particles migrating across sub-domain boundaries is found to be small. Detailed benchmarks are obtained on the Cray petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign. For operations on the particles in an 8192^3 simulation (0.55 trillion grid points) on 262,144 Cray XE6 cores, the new algorithm is found to be orders of magnitude faster relative to a prior algorithm in which each particle is tracked by the same parallel process at all times. This large speedup reduces the additional cost of tracking of order 300 million particles to just over 50% of the cost of computing the Eulerian velocity field at this scale. Improving support of PGAS models on
MISR Dark Water aerosol retrievals: operational algorithm sensitivity to particle non-sphericity
Directory of Open Access Journals (Sweden)
O. V. Kalashnikova
2013-08-01
The aim of this study is to theoretically investigate the sensitivity of the Multi-angle Imaging SpectroRadiometer (MISR) operational (version 22) Dark Water retrieval algorithm to aerosol non-sphericity over the global oceans under actual observing conditions, accounting for current algorithm assumptions. Non-spherical (dust) aerosol models, which were introduced in version 16 of the MISR aerosol product, improved the quality and coverage of retrievals in dusty regions. Due to the sensitivity of the retrieval to the presence of non-spherical aerosols, the MISR aerosol product has been successfully used to track the location and evolution of mineral dust plumes from the Sahara across the Atlantic, for example. However, the MISR global non-spherical aerosol optical depth (AOD) fraction product has been found to have several climatological artifacts superimposed on valid detections of mineral dust, including high non-spherical fraction in the Southern Ocean and seasonally variable bands of high non-sphericity. In this paper we introduce a formal approach to examine the ability of the operational MISR Dark Water algorithm to distinguish among various spherical and non-spherical particles as a function of the variable MISR viewing geometry. We demonstrate the following under the criteria currently implemented: (1) Dark Water retrieval sensitivity to particle non-sphericity decreases for AOD below about 0.1, primarily due to an unnecessarily large lower bound imposed on the uncertainty in MISR observations at low light levels, and improves when this lower bound is removed; (2) Dark Water retrievals are able to distinguish between the spherical and non-spherical particles currently used for all MISR viewing geometries when the AOD exceeds 0.1; (3) the sensitivity of the MISR retrievals to aerosol non-sphericity varies in a complex way that depends on the sampling of the scattering phase function and the contribution from multiple scattering; and (4) non
Pashaei, Elnaz; Pashaei, Elham; Aydin, Nizamettin
2018-04-14
In cancer classification, gene selection is an important data preprocessing technique, but it is a difficult task due to the large search space. Accordingly, the objective of this study is to develop a hybrid meta-heuristic Binary Black Hole Algorithm (BBHA) and Binary Particle Swarm Optimization (BPSO) (4-2) model that emphasizes gene selection. In this model, the BBHA is embedded in the BPSO (4-2) algorithm to make the BPSO (4-2) more effective and to facilitate the exploration and exploitation of the BPSO (4-2) algorithm, further improving its performance. The model is combined with the Random Forest Recursive Feature Elimination (RF-RFE) pre-filtering technique. The classifiers evaluated in the proposed framework are Sparse Partial Least Squares Discriminant Analysis (SPLSDA), k-nearest neighbor, and Naive Bayes. The performance of the proposed method was evaluated on two benchmark and three clinical microarray datasets. The experimental results and statistical analysis confirm the better performance of the BPSO (4-2)-BBHA compared with the BBHA, the BPSO (4-2) and several state-of-the-art methods in terms of avoiding local minima, convergence rate, accuracy and number of selected genes. The results also show that the BPSO (4-2)-BBHA model can successfully identify known biologically and statistically significant genes from the clinical datasets.
A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications
Directory of Open Access Journals (Sweden)
Yudong Zhang
2015-01-01
Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topology (fully connected, von Neumann, ring, star, random, etc.), hybridization (with genetic algorithm, simulated annealing, Tabu search, artificial immune system, ant colony algorithm, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we offer a survey of applications of PSO to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, and chemistry and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
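The canonical velocity and position update at the heart of all these variants can be sketched as a minimal global-best PSO (an illustrative textbook implementation, not code from the survey; the sphere test function and coefficient values are arbitrary choices):

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over a box using a basic global-best PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initialise positions uniformly inside the box, velocities at zero.
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive + social terms.
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # Position update, clamped to the box.
                x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]), bounds[d][1])
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i][:], fx
    return gbest, gbest_f

# Example: minimise the 2-D sphere function, whose optimum is the origin.
best, best_f = pso(lambda p: sum(t * t for t in p), [(-5, 5), (-5, 5)])
```

The modifications surveyed above (inertia schedules, topologies, hybrids) all alter some piece of this loop while keeping the same overall structure.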
Directory of Open Access Journals (Sweden)
Meiping Wang
2016-01-01
We developed an effective intelligent model to predict the dynamic heat supply of a heat source. A hybrid forecasting method was proposed based on a support vector regression (SVR) model optimized by particle swarm optimization (PSO) algorithms. Due to the interaction of meteorological conditions and the heating parameters of the heating system, it is extremely difficult to forecast dynamic heat supply. Firstly, the correlations among heat supply and related influencing factors in the heating system were analyzed through the correlation analysis of statistical theory. Then, the SVR model was employed to forecast dynamic heat supply. In the model, the input variables were selected based on the correlation analysis, and three crucial parameters, including the penalty factor, the gamma of the RBF kernel, and the insensitive loss function, were optimized by PSO algorithms. The optimized SVR model was compared with the basic SVR, the genetic algorithm-optimized SVR (GA-SVR), and an artificial neural network (ANN) through six groups of experiment data from two heat sources. The results of the correlation coefficient analysis revealed the relationship between the influencing factors and the forecasted heat supply and determined the input variables. The performance of the PSO-SVR model is superior to those of the other three models. The PSO-SVR method is statistically robust and can be applied to practical heating systems.
Annavarapu, Chandra Sekhara Rao; Dara, Suresh; Banka, Haider
2016-01-01
Cancer investigations in microarray data play a major role in cancer analysis and treatment. Cancer microarray data consist of complex gene expression patterns of cancer. In this article, a Multi-Objective Binary Particle Swarm Optimization (MOBPSO) algorithm is proposed for analyzing cancer gene expression data. Due to its high dimensionality, a fast heuristic-based pre-processing technique is employed to remove some of the crude domain features from the initial feature set. Since these pre-processed and reduced features are still high dimensional, the proposed MOBPSO algorithm is used for finding further feature subsets. The objective functions are suitably modeled by optimizing two conflicting objectives, i.e., the cardinality of the feature subsets and the distinctive capability of those selected subsets. As these two objective functions are conflicting in nature, they are well suited to multi-objective modeling. The experiments are carried out on benchmark gene expression datasets, i.e., Colon, Lymphoma and Leukaemia, available in the literature. The performance of the selected feature subsets is measured by their classification accuracy and validated using a 10-fold cross validation technique. A detailed comparative study is also made to show the superiority or competitiveness of the proposed algorithm. PMID:27822174
Influence of particle sorting in transport of sediment-associated contaminants
International Nuclear Information System (INIS)
Lane, L.J.; Hakonson, T.E.
1982-01-01
Hydrologic and sediment transport models are developed to route the flow of water and sediment (by particle size classes) in alluvial stream channels. A simplified infiltration model is used to compute runoff from upland areas, and flow is routed in ephemeral stream channels to account for infiltration or transmission losses in the channel alluvium. Hydraulic calculations, based on the normal flow assumption and an approximating hydrograph, are used to compute sediment transport by particle size classes. Contaminants associated with sediment particles are routed in the stream channels to predict contaminant transport by particle size classes. An empirical adjustment factor, the enrichment ratio, is shown to be a function of the particle size distribution of stream bed sediments, contaminant concentrations by particle size, differential sediment transport rates, and the magnitude of the runoff event causing transport of sediment and contaminants. This analysis and an example application in a liquid effluent-receiving area illustrate the significance of particle sorting in the transport of sediment-associated contaminants.
A New Method for Tracking Individual Particles During Bed Load Transport in a Gravel-Bed River
Tremblay, M.; Marquis, G. A.; Roy, A. G.; Chaire de Recherche Du Canada En Dynamique Fluviale
2010-12-01
Many particle tracers (passive or active) have been developed to study gravel movement in rivers. It remains difficult, however, to document resting and moving periods and to know how particles travel from one deposition site to another. Our new tracking method uses the Hobo Pendant G acceleration Data Logger to quantitatively describe the motion of individual particles from the initiation of movement, through the displacement and to the rest, in a natural gravel river. The Hobo measures the acceleration in three dimensions at a chosen temporal frequency. The Hobo was inserted into 11 artificial rocks. The rocks were seeded in Ruisseau Béard, a small gravel-bed river in the Yamaska drainage basin (Québec) where the hydraulics, particle sizes and bed characteristics are well known. The signals recorded during eight floods (Summer and Fall 2008-2009) allowed us to develop an algorithm which classifies the periods of rest and motion. We can differentiate two types of motion: sliding and rolling. The particles can also vibrate while remaining in the same position. The examination of the movement and vibration periods with respect to the hydraulic conditions (discharge, shear stress, stream power) showed that vibration occurred mostly before the rising limb of the hydrograph and allowed us to establish movement thresholds and response times. In all cases, particle movements occurred during floods but not always in direct response to increased bed shear stress and stream power. This method offers great potential to track individual particles, to establish a spatiotemporal sequence of the intermittent transport of a particle during a flood, and to test theories concerning the resting periods of particles on a gravel bed.
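The rest/motion classification of three-axis accelerometer records can be sketched with a simple variance-threshold rule (a hypothetical illustration, far simpler than the authors' algorithm; the window length and threshold are arbitrary assumptions):

```python
import math

def classify_windows(samples, window=8, threshold=0.05):
    """Label consecutive windows of 3-axis accelerometer samples as 'rest'
    or 'motion' from the variance of the acceleration magnitude: a resting
    particle shows a nearly constant 1 g magnitude, a moving one does not."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    labels = []
    for s in range(0, len(mags) - window + 1, window):
        w = mags[s:s + window]
        mean = sum(w) / window
        var = sum((m - mean) ** 2 for m in w) / window
        labels.append("motion" if var > threshold else "rest")
    return labels

# A resting period (constant 1 g) followed by a fluctuating (moving) period.
rest = [(0.0, 0.0, 1.0)] * 8
motion = [(0.0, 0.0, 0.5), (0.0, 0.0, 1.5)] * 4
labels = classify_windows(rest + motion)
```

Distinguishing sliding from rolling, as the authors do, would additionally require looking at how the acceleration components rotate within each window.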
Transport of inertial particles in a turbulent premixed jet flame
International Nuclear Information System (INIS)
Battista, F; Picano, F; Casciola, C M; Troiani, G
2011-01-01
The heat release occurring in reacting flows induces a sudden fluid acceleration which particles follow with a certain lag, due to their finite inertia. Actually, the coupling between particle inertia and the flame front expansion strongly biases the spatial distribution of the particles, by inducing the formation of localized clouds of different dimensions downstream of the thin flame front. A possible indicator of this preferential localization is the so-called Clustering Index, quantifying the departure of the actual particle distribution from the Poissonian, which would correspond to a purely random spatial arrangement. Most of the clustering is found in the flame brush region, which is spanned by the fluctuating instantaneous flame front. The effect is significant also for very light particles. In this case a simple model based on the Bray-Moss-Libby formalism is able to account for most of the deviation from the Poissonian. When the particle inertia increases, the effect is found to increase and persist well within the region of burned gases. The effect is maximum when the particle relaxation time is of the order of the flame front time scale. The evidence of this peculiar source of clustering is here provided by data from a direct numerical simulation of a turbulent premixed jet flame and confirmed by experimental data.
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
Prognostics 101: A tutorial for particle filter-based prognostics algorithm using Matlab
International Nuclear Information System (INIS)
An, Dawn; Choi, Joo-Ho; Kim, Nam Ho
2013-01-01
This paper presents a Matlab-based tutorial for model-based prognostics, which combines a physical model with observed data to identify model parameters, from which the remaining useful life (RUL) can be predicted. Among many model-based prognostics algorithms, the particle filter is used in this tutorial for parameter estimation of damage or a degradation model. The tutorial is presented using a Matlab script with 62 lines, including detailed explanations. As examples, a battery degradation model and a crack growth model are used to explain the updating process of model parameters, damage progression, and RUL prediction. In order to illustrate the results, the RUL at an arbitrary cycle is predicted in the form of a distribution along with the median and 90% prediction interval. This tutorial will be helpful for beginners in prognostics to understand and use the prognostics method, and we hope it provides a standard for particle filter-based prognostics.
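The parameter-updating loop that such a tutorial builds around can be sketched as a minimal sampling-importance-resampling (SIR) particle filter (a hypothetical Python illustration assuming an exponential degradation model x_k = exp(-b k); the function names, noise levels, and priors are not from the paper):

```python
import math
import random

def estimate_degradation_rate(measurements, n=500, sigma=0.05, seed=1):
    """SIR particle filter estimating the unknown rate b of x_k = exp(-b*k)."""
    rng = random.Random(seed)
    b = [rng.uniform(0.0, 0.1) for _ in range(n)]  # prior particles for b
    for k, z in enumerate(measurements, start=1):
        # Weight each particle by the Gaussian likelihood of the measurement z.
        w = [math.exp(-0.5 * ((z - math.exp(-bi * k)) / sigma) ** 2) for bi in b]
        total = sum(w)
        w = [wi / total for wi in w]
        # Stratified resampling, plus a small jitter to keep particle diversity.
        cum, acc = [], 0.0
        for wi in w:
            acc += wi
            cum.append(acc)
        j, resampled = 0, []
        for i in range(n):
            p = (i + rng.random()) / n
            while j < n - 1 and cum[j] < p:
                j += 1
            resampled.append(b[j] + rng.gauss(0.0, 1e-4))
        b = resampled
    return sum(b) / n  # posterior mean estimate of b

# Synthetic noise-free capacity data generated with a true rate b = 0.02.
data = [math.exp(-0.02 * k) for k in range(1, 31)]
b_hat = estimate_degradation_rate(data)
```

Once the posterior particles for b are available, RUL prediction amounts to propagating each particle's model forward until it crosses a failure threshold, which yields the RUL distribution the abstract describes.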
Optimization of the Infrastructure of Reinforced Concrete Reservoirs by a Particle Swarm Algorithm
Directory of Open Access Journals (Sweden)
Kia Saeed
2015-03-01
Optimization techniques may be effective in finding the best modeling and shapes for reinforced concrete reservoirs (RCR) to improve their durability and mechanical behavior, particularly for avoiding or reducing the bending moments in these structures. RCRs are among the major structures used for storing fluids in drinking water networks. Usually, these structures have fixed shapes which are designed and calculated based on input discharges, the conditions of the structure's topology, and geotechnical locations with various combinations of static and dynamic loads. In this research, the elements of the reservoir walls are first typed according to the performance analyzed; then the range of the membrane, based on the thickness and the minimum and maximum cross sections of the bar used, is determined in each element. This is done by considering the variable constraints, which are estimated by the maximum stress capacity. In the next phase, based on the reservoir analysis and using the algorithm of the PARIS connector, the related information is combined with the code for the PSO algorithm, i.e., an algorithm for a swarming search, to determine the optimum thickness of the cross sections of the reservoir membrane's elements and the optimum cross section of the bar used. Based on very complex mathematical linear models for the correct embedding and angles related to a chain of peripheral strengthening membranes, which optimize the vibration of the structure, a mutual relation is selected between the modeling software and the code for a particle swarm optimization algorithm. Finally, the comparative weight of the concrete reservoir optimized by the peripheral strengthening membrane is analyzed using common methods. This analysis shows a 19% decrease in the bar's weight, a 20% decrease in the concrete's weight, and a minimum 13% saving in construction costs according to the items of a checklist for a 10,000 m3 concrete reservoir.
A set of particle locating algorithms not requiring face belonging to cell connectivity data
Sani, M.; Saidi, M. S.
2009-10-01
Existing efficient directed particle locating (host determination) algorithms rely on the face-belonging-to-cell relationship (F2C) to find the next cell on the search path and the cell in which the target is located. Recently, finite volume methods have been devised which do not need F2C. Therefore, existing search algorithms are not directly applicable (unless F2C is included). F2C is a major memory burden in grid description. If the memory benefit from these finite volume methods is desired, new search algorithms should be devised. In this work two new algorithms (line of sight and closest cell) are proposed which do not need F2C. They are based on the structure of the sparse coefficient matrix involved (stored, for example, in the compressed row storage, CRS, format) to determine the next cell. Since F2C is not available, testing a cell for the presence of the target is not possible. Therefore, the proposed methods may wrongly mark a nearby cell as the host in some rare cases. The issue of the importance of finding the correct host cell (not wrongly hitting its neighbor) is addressed. Quantitative measures are introduced to assess the efficiency of the methods, and a comparison is made for typical grid types used in computational fluid dynamics. In comparison, the closest cell method, having a lower computational cost than the family of line of sight methods and the existing efficient maximum dot product methods, gives very good performance with tolerable and harmless wrong hits. If more accuracy is needed, the method of approximate line of sight then closest cell (LS-A-CC) is recommended.
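The closest-cell idea can be sketched as follows (an illustrative Python version under assumed data structures: cell centroids plus the CRS `indptr`/`indices` arrays of the coefficient matrix; the paper's actual implementation may differ). As the abstract notes, a walk of this kind can occasionally stop in a neighbour of the true host:

```python
def closest_cell_walk(target, start, centroids, indptr, indices):
    """Walk toward the host cell: repeatedly jump to the neighbouring cell
    whose centroid is nearest the target point, stopping when no neighbour
    is closer. The neighbours of a cell are read off the CRS row of the
    coefficient matrix, so no face-to-cell (F2C) connectivity is needed."""
    def dist2(c):
        return sum((a - t) ** 2 for a, t in zip(centroids[c], target))
    cell = start
    while True:
        neighbours = indices[indptr[cell]:indptr[cell + 1]]
        best = min(neighbours, key=dist2)
        if dist2(best) >= dist2(cell):
            return cell  # no neighbour is closer: take this cell as the host
        cell = best

# A 1-D chain of 10 unit cells; each CRS row holds the cell and its neighbours,
# mimicking the sparsity pattern of a finite volume coefficient matrix.
indices, indptr = [], [0]
for i in range(10):
    indices.extend(j for j in (i - 1, i, i + 1) if 0 <= j < 10)
    indptr.append(len(indices))
centroids = [(i + 0.5,) for i in range(10)]
host = closest_cell_walk((7.3,), 0, centroids, indptr, indices)
```

On this regular chain the walk always finds the true host; the rare wrong hits discussed in the abstract arise on skewed or stretched cells, where the nearest centroid need not belong to the containing cell.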
International Nuclear Information System (INIS)
Faure, M.H.
1994-03-01
Understanding the mechanisms which control the transient transport of particles and radionuclides in natural and artificial porous media is a key problem for the assessment of the safety of radioactive waste disposal. An experimental study has been performed to characterize clayey particle mobility in porous media: a laboratory-made column, packed with an unconsolidated sand-bentonite (5% weight) sample, is flushed with a salt solution. An original method of salinity gradient allowed us to show and to quantify some typical behaviours of this system: threshold effects in the peptization of particles, the creation of preferential pathways, and the formation of immobile water zones that induce solute-transfer limitation. The mathematical modelling accounts for a phenomenological law, where the distribution of particles between the stagnant water zone and the porous medium is a function of the sodium chloride concentration. This distribution function is associated with a radionuclide adsorption model and is included in a convective-dispersive transport model with stagnant water zones. It allowed us to simulate particle and solute transport when the salt environment is modified. The complete model has been validated with experiments involving cesium, calcium and neptunium in a sodium chloride gradient. (author). refs., figs., tabs
Small particle transport across turbulent nonisothermal boundary layers
Rosner, D. E.; Fernandez De La Mora, J.
1982-01-01
The interaction between turbulent diffusion, Brownian diffusion, and particle thermophoresis in the limit of vanishing particle inertial effects is quantitatively modeled for applications in gas turbines. The model is initiated with consideration of the particle phase mass conservation equation for a two-dimensional boundary layer, including the thermophoretic flux term directed toward the cold wall. A formalism of a turbulent flow near a flat plate in a heat transfer problem is adopted, and variable property effects are neglected. Attention is given to the limit of very large Schmidt numbers and the particle concentration depletion outside of the Brownian sublayer. It is concluded that, in the parameter range of interest, thermophoresis augments the high Schmidt number mass-transfer coefficient by a factor equal to the product of the outer sink and the thermophoretic suction.
A review of the facile (FN) method in particle transport theory
International Nuclear Information System (INIS)
Garcia, R.D.M.
1986-02-01
The facile F N method for solving particle transport problems is reviewed. The fundamentals of the method are summarized, recent developments are discussed and several applications of the method are described in detail. (author) [pt
Symmetry properties of the transport coefficients of charged particles in disordered materials
International Nuclear Information System (INIS)
Baird, J.K.
1979-01-01
The transport coefficients of a charged particle in an isotropic material are shown to be even functions of the applied electric field. We discuss the limitations which this result and its consequences place upon formulae used to represent these coefficients.
Influence of coal slurry particle composition on pipeline hydraulic transportation behavior
Li-an, Zhao; Ronghuan, Cai; Tieli, Wang
2018-02-01
As a new mode of energy transportation, coal pipeline hydraulic transport can reduce transportation costs and the fly-ash pollution associated with conventional coal transport. In this study, the effect of average velocity, particle size, and pumping time on the particle composition of coal slurry during hydraulic conveying was investigated by ring-tube tests. The effects of changes in particle composition on slurry viscosity, transmission resistance, and critical sedimentation velocity were then studied on the basis of the experimental data. The experimental and theoretical analyses indicate that changes in slurry particle composition lead to changes in the viscosity, resistance, and critical velocity of the slurry. Moreover, building on previous studies, a critical-velocity calculation model for coal slurry is proposed.
Alternate mutation based artificial immune algorithm for step fixed charge transportation problem
Directory of Open Access Journals (Sweden)
Mahmoud Moustafa El-Sherbiny
2012-07-01
The step fixed charge transportation problem (SFCTP) is a special version of the fixed-charge transportation problem (FCTP). In an SFCTP, a fixed cost is incurred for every route used in the solution and is proportional to the amount shipped, a cost structure that makes the objective function behave like a step function. Both FCTP and SFCTP are NP-hard. While much research has addressed the FCTP, little has been done on the SFCTP. This paper introduces an alternate Mutation based Artificial Immune (MAI) algorithm for solving SFCTPs. The proposed MAI algorithm solves both balanced and unbalanced SFCTPs without introducing a dummy supplier or a dummy customer. In the MAI algorithm, a coding schema is designed, and procedures are developed for decoding the schema and shipping units. The MAI algorithm guarantees the feasibility of all generated solutions. Because the mutation function plays a significant role in the quality of the MAI algorithm, 16 mutation functions are presented and their performances compared to select the best one. For this purpose, forty problems of different sizes were generated at random, and a robust calibration was applied using the relative percentage deviation (RPD) method. Through two illustrative problems of different sizes, the performance of the MAI algorithm is compared with the most recent methods.
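The step-function cost structure described above can be made concrete with a minimal sketch. The breakpoints, unit costs, and fixed charges below are invented for illustration, not taken from the paper:

```python
def sfctp_cost(x, unit_cost, breakpoints, step_charges):
    """Objective value of a step fixed charge transportation plan.

    x[i][j]          -- units shipped from supplier i to customer j
    unit_cost[i][j]  -- variable cost per unit on route (i, j)
    breakpoints      -- upper bounds of the shipment brackets
    step_charges     -- fixed charge added for each bracket entered
    """
    total = 0.0
    for i, row in enumerate(x):
        for j, qty in enumerate(row):
            if qty <= 0:
                continue  # unused routes incur no fixed charge
            total += unit_cost[i][j] * qty
            # The fixed charge grows in steps with the amount shipped.
            for k, bound in enumerate(breakpoints):
                total += step_charges[k]
                if qty <= bound:
                    break
    return total

# Two suppliers, two customers; one step boundary at 4 units.
x = [[5, 0], [0, 7]]
unit_cost = [[2, 3], [4, 1]]
print(sfctp_cost(x, unit_cost, breakpoints=[4, float("inf")],
                 step_charges=[10, 5]))  # 5*2 + 7*1 + 15 + 15 = 47.0
```

Each used route pays its variable cost plus a fixed charge that jumps at every bracket boundary it crosses, which is exactly what makes the objective a step function of the shipped amounts.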
Clogging and transport of driven particles in asymmetric funnel arrays
Reichhardt, C. J. O.; Reichhardt, C.
2018-06-01
We numerically examine the flow and clogging of particles driven through asymmetric funnel arrays when the commensurability ratio of the number of particles per plaquette is varied. The particle–particle interactions are modeled with a soft repulsive potential that could represent vortex flow in type-II superconductors or driven charged colloids. The velocity-force curves for driving in the easy flow direction of the funnels exhibit a single depinning threshold; however, for driving in the hard flow direction, we find that there can be both negative mobility where the velocity decreases with increasing driving force as well as a reentrant pinning effect in which the particles flow at low drives but become pinned at intermediate drives. This reentrant pinning is associated with a transition from smooth 1D flow at low drives to a clogged state at higher drives that occurs when the particles cluster in a small number of plaquettes and block the flow. When the drive is further increased, particle rearrangements occur that cause the clog to break apart. We map out the regimes in which the pinned, flowing, and clogged states appear as a function of plaquette filling and drive. The clogged states remain robust at finite temperatures but develop intermittent bursts of flow in which a clog temporarily breaks apart but quickly reforms.
Holzinger, Dennis; Koch, Iris; Burgard, Stefan; Ehresmann, Arno
2015-07-28
An approach for a remotely controllable transport of magnetic micro- and/or nanoparticles above a topographically flat exchange-bias (EB) thin film system, magnetically patterned into parallel stripe domains, is presented where the particle manipulation is achieved by sub-mT external magnetic field pulses. Superparamagnetic core-shell particles are moved stepwise by the dynamic transformation of the particles' magnetic potential energy landscape due to the external magnetic field pulses without affecting the magnetic state of the thin film system. The magnetic particle velocity is adjustable in the range of 1-100 μm/s by the design of the substrate's magnetic field landscape (MFL), the particle-substrate distance, and the magnitude of the applied external magnetic field pulses. The agglomeration of magnetic particles is avoided by the intrinsic magnetostatic repulsion of particles due to the parallel alignment of the particles' magnetic moments perpendicular to the transport direction and parallel to the surface normal of the substrate during the particle motion. The transport mechanism is modeled by a quantitative theory based on the precise knowledge of the sample's MFL and the particle-substrate distance.
The design of the public transport lines with the use of the fast genetic algorithm
Directory of Open Access Journals (Sweden)
Aleksander Król
2015-09-01
Background: The growing role of public transport and the pressure of economic criteria require new optimization tools for the public transport planning process. These problems are computationally very complex, so it is preferable to use various approximate methods that lead to a good solution within an acceptable time. Methods: One such method is the genetic algorithm, which mimics the processes of evolution and natural selection in nature. In this paper, different variants of the public transport line layout are subjected to artificial selection. The essence of the proposed approach is a simplified method of calculating the value of the fitness function for a single individual, which yields relatively short computation times even for large problems. Results: It was shown that, despite the introduced simplifications, the quality of the results is not worsened. Using data obtained from KZK GOP (Communications Municipal Association of the Upper Silesian Industrial Region), the described algorithm was used to optimize the layout of the network of bus lines within the borders of Katowice. Conclusion: The proposed algorithm was applied to a real, very complex public transport network, and a significant improvement in its efficiency was shown to be possible. The obtained results give hope that the presented model, after some improvements, can form the basis of a systematic method and, with further development, find practical application.
Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems
International Nuclear Information System (INIS)
Xiao, Jianyuan; Liu, Jian; He, Yang; Zhang, Ruili; Qin, Hong; Sun, Yajuan
2015-01-01
Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint http://arxiv.org/abs/arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems; high-order structure-preserving algorithms follow by combination. The explicit, high-order, and conservative nature of the algorithms makes them especially suitable for long-term simulations of particle-field systems with extremely large numbers of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified with two physics problems: nonlinear Landau damping and the electron Bernstein wave.
Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems
Energy Technology Data Exchange (ETDEWEB)
Xiao, Jianyuan [School of Nuclear Science and Technology and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026, China; Qin, Hong [School of Nuclear Science and Technology and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543, USA; Liu, Jian [School of Nuclear Science and Technology and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026, China; He, Yang [School of Nuclear Science and Technology and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026, China; Zhang, Ruili [School of Nuclear Science and Technology and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026, China; Sun, Yajuan [LSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100190, China
2015-11-01
Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv: 1505.06076 (2015)], which produces five exactly soluble sub-systems; high-order structure-preserving algorithms follow by combination. The explicit, high-order, and conservative nature of the algorithms makes them especially suitable for long-term simulations of particle-field systems with extremely large numbers of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified with two physics problems: nonlinear Landau damping and the electron Bernstein wave.
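Why structure preservation matters for long-term simulation can be illustrated with a one-degree-of-freedom sketch (a toy harmonic oscillator, not the non-canonical PIC scheme of the paper): the symplectic Euler method keeps the energy error bounded over arbitrarily many steps instead of letting it drift secularly.

```python
def symplectic_euler(q, p, dt, steps):
    """Symplectic Euler for H = (p^2 + q^2)/2: update p first, then q."""
    energies = []
    for _ in range(steps):
        p -= dt * q          # dp/dt = -dH/dq = -q
        q += dt * p          # dq/dt =  dH/dp =  p
        energies.append(0.5 * (p * p + q * q))
    return q, p, energies

q, p, energies = symplectic_euler(q=1.0, p=0.0, dt=0.01, steps=100_000)
# The energy error stays bounded at O(dt) over 100,000 steps.
print(max(abs(e - 0.5) for e in energies))
```

The same bounded-error behavior, generalized to the discrete non-canonical symplectic structure of the particle-field Lagrangian, is what the paper's splitting scheme provides.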
Wang, Lichun; Cardenas, M Bayani
2015-08-01
The quantitative study of transport through fractured media has continued for many decades, but has often been constrained by observational and computational challenges. Here, we developed an efficient quasi-3D random walk particle tracking (RWPT) algorithm to simulate solute transport through natural fractures based on a 2D flow field generated from the modified local cubic law (MLCL). As a reference, we also modeled the actual breakthrough curves (BTCs) through direct simulations with the 3D advection-diffusion equation (ADE) and Navier-Stokes equations. The RWPT algorithm along with the MLCL accurately reproduced the actual BTCs calculated with the 3D ADE. The BTCs exhibited non-Fickian behavior, including early arrival and long tails. Using the spatial information of particle trajectories, we further analyzed the dynamic dispersion process through moment analysis. From this, asymptotic time scales were determined for solute dispersion to distinguish non-Fickian from Fickian regimes. This analysis illustrates the advantage and benefit of using an efficient combination of flow modeling and RWPT.
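The core RWPT update is simple. A minimal 1D sketch with constant velocity v and diffusion coefficient D (not the quasi-3D MLCL flow field of the paper) moves each particle by an advective drift plus a Wiener increment:

```python
import math
import random

def rwpt_1d(n_particles, v, D, dt, t_end, seed=1):
    """Random walk particle tracking: drift v*dt plus sqrt(2*D*dt)*N(0,1)."""
    rng = random.Random(seed)
    x = [0.0] * n_particles
    for _ in range(round(t_end / dt)):
        for i in range(n_particles):
            # x(t+dt) = x(t) + v*dt + sqrt(2*D*dt) * N(0,1)
            x[i] += v * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
    return x

x = rwpt_1d(n_particles=20_000, v=1.0, D=0.5, dt=0.01, t_end=1.0)
mean = sum(x) / len(x)                             # approaches v*t   = 1.0
var = sum((xi - mean) ** 2 for xi in x) / len(x)   # approaches 2*D*t = 1.0
print(mean, var)
```

Because each walker is independent, the ensemble mean and variance recover the advection-dispersion moments, and spatially varying v and D (as in the fracture flow field) enter simply by evaluating them at each particle's position.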
Direct measurements of particle transport in dc glow discharge dusty plasmas
International Nuclear Information System (INIS)
Thomas, E. Jr.
2001-01-01
Many recent experiments in dc glow discharge plasmas have shown that clouds of dust particles can be suspended near the biased electrodes. Once formed, the dust clouds have well defined boundaries while particle motion within the clouds can be quite complex. Because the dust particles in the cloud can remain suspended in the plasma for tens of minutes, it implies that the particles have a low diffusive loss rate and follow closed trajectories within the cloud. In the experiments discussed in this paper, direct measurements of the dust particle velocities are made using particle image velocimetry (PIV) techniques. From the velocity measurements, a reconstruction of the three-dimensional transport of the dust particles is performed. A qualitative model is developed for the closed motion of the dust particles in a dc glow discharge dusty plasma. (orig.)
Chen, Xingxin; Wu, Zhonghan; Cai, Qipeng; Cao, Wei
2018-04-01
It is well established that seismic waves traveling through porous media stimulate fluid flow and accelerate particle transport. However, the mechanism remains poorly understood. To quantify the coupling effect of hydrodynamic force, transportation distance, and ultrasonic stimulation on particle transport and fate in porous media, laboratory experiments were conducted using custom-built ultrasonic-controlled soil column equipment. Three column lengths (23 cm, 33 cm, and 43 cm) were selected to examine the influence of transportation distance. Transport experiments were performed with 0 W, 600 W, 1000 W, 1400 W, and 1800 W of applied ultrasound, and flow rates of 0.065 cm/s, 0.130 cm/s, and 0.195 cm/s, to establish the roles of ultrasonic stimulation and hydrodynamic force. The laboratory results suggest that whilst ultrasonic stimulation does inhibit suspended-particle deposition and accelerate deposited-particle release, both hydrodynamic force and transportation distance are the principal controlling factors. The median particle diameter for the peak concentration was approximately 50% of that retained in the soil column. Simulated particle-breakthrough curves using extended traditional filtration theory effectively described the experimental curves, particularly the curves that exhibited a higher tailing concentration.
Control of alpha-particle transport by ion cyclotron resonance heating
International Nuclear Information System (INIS)
Chang, C.S.; Imre, K.; Weitzner, H.; Colestock, P.
1990-01-01
In this paper, control of radial alpha-particle transport using ion cyclotron range of frequency (ICRF) waves is investigated in a large-aspect-ratio tokamak geometry. Spatially inhomogeneous ICRF wave energy with properly selected frequencies and wave numbers can induce fast convective transport of alpha particles at a speed of order v_α ∼ (P_RF/n_α ε_0)ρ_p, where P_RF is the ICRF wave power density, n_α is the alpha-particle density, ε_0 is the alpha-particle birth energy, and ρ_p is the poloidal gyroradius of alpha particles at the birth energy. Application to the International Thermonuclear Experimental Reactor (ITER) plasma is studied, and possible antenna designs to control the alpha-particle flux are discussed.
International Nuclear Information System (INIS)
Braumann, Andreas; Kraft, Markus; Wagner, Wolfgang
2010-01-01
This paper is concerned with computational aspects of a multidimensional population balance model of a wet granulation process. Wet granulation is a manufacturing method to form composite particles, granules, from small particles and binders. A detailed numerical study of a stochastic particle algorithm for the solution of a five-dimensional population balance model for wet granulation is presented. Each particle consists of two types of solids (containing pores) and of external and internal liquid (located in the pores). Several transformations of particles are considered, including coalescence, compaction and breakage. A convergence study is performed with respect to the parameter that determines the number of numerical particles. Averaged properties of the system are computed. In addition, the ensemble is subdivided into practically relevant size classes and analysed with respect to the amount of mass and the particle porosity in each class. These results illustrate the importance of the multidimensional approach. Finally, the kinetic equation corresponding to the stochastic model is discussed.
DANTSYS: A diffusion accelerated neutral particle transport code system
International Nuclear Information System (INIS)
Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O'Dell, R.D.; Walters, W.F.
1995-06-01
The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post-processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are Solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral triangle spatial mesh. The THREEDANT Solver Module solves the time-independent, three-dimensional transport equation for XYZ and RZΘ symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond differencing method with set-to-zero fixup, with changes to accommodate the generalized spatial meshing.
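The diamond-differencing update at the heart of these solvers can be sketched for a single direction in a 1D slab. This is a minimal purely absorbing example, not the full multigroup discrete ordinates machinery of DANTSYS:

```python
import math

def sweep_diamond(psi_in, mu, sigma_t, source, dx):
    """One transport sweep in a 1D slab for a single direction mu > 0.

    Diamond differencing closes mu*(psi_out - psi_in)/dx + sigma_t*psi_c = q
    with the cell-average psi_c = (psi_in + psi_out)/2, giving the update below.
    """
    psi = psi_in
    for q in source:
        a = mu / dx
        psi = ((a - 0.5 * sigma_t) * psi + q) / (a + 0.5 * sigma_t)
    return psi

# Pure absorber with unit incident flux: the exiting angular flux should
# approach the exact attenuation exp(-sigma_t * L) as the mesh is refined.
n_cells, L, sigma_t = 100, 1.0, 1.0
psi_out = sweep_diamond(1.0, mu=1.0, sigma_t=sigma_t,
                        source=[0.0] * n_cells, dx=L / n_cells)
print(psi_out, math.exp(-sigma_t * L))
```

The per-cell factor (a - σ_t/2)/(a + σ_t/2) is the Padé approximant of the exact exponential attenuation, which is why the sweep converges rapidly with mesh refinement; the set-to-zero fixup mentioned in the abstract guards against the negative fluxes this closure can produce on coarse cells.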
On the Use of Importance Sampling in Particle Transport Problems
International Nuclear Information System (INIS)
Eriksson, B.
1965-06-01
The idea of importance sampling is applied to the problem of solving integral equations of Fredholm's type; in particular, Boltzmann's neutron transport equation is considered. For the solution of the latter equation, an importance sampling technique is derived from some simple transformations of the original transport equation into a similar equation. Examples of transformations that have been used with great success in practice are given.
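The idea can be illustrated with a deep-penetration toy problem (a sketch, not the Fredholm-equation machinery of the report): estimating the probability exp(-d) that an exponentially distributed free path exceeds an optical depth d = 10. Analog sampling almost never scores; drawing from a stretched exponential and carrying the weight f/g recovers the answer with modest sample counts:

```python
import math
import random

def deep_penetration(d, n_samples, b, seed=7):
    """Importance-sampling estimate of P(S > d) for S ~ Exp(1).

    Paths are drawn from the biased density g(s) = b*exp(-b*s) with b < 1
    (favoring long flights); each score carries the weight f(s)/g(s).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        s = rng.expovariate(b)                              # biased free path
        if s > d:
            total += math.exp(-s) / (b * math.exp(-b * s))  # weight f/g
    return total / n_samples

est = deep_penetration(d=10.0, n_samples=100_000, b=0.1)
print(est, math.exp(-10.0))  # both around 4.5e-5
```

The transformation of the transport kernel described in the report plays the same role as the biased density g here: it redirects sampling effort toward the important region while the weights keep the estimator unbiased.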
Phenomena of charged particles transport in variable magnetic fields
International Nuclear Information System (INIS)
Savane, Sy Y.; Faza Barry, M.; Vladmir, L.; Diaby, I.
2002-11-01
This work is dedicated to the study of the dynamical phenomena governing the transport of ions in the presence of variable magnetic fields in front of the Jovian shock wave. We obtain the spectrum of the accelerated ions and study the conditions of acceleration by solving the transport equation in the planetocentric system. The theoretical results obtained are discussed and compared with the experimental parameters in the acceleration region behind the Jovian shock wave. (author)
DANTSYS: A diffusion accelerated neutral particle transport code system
Energy Technology Data Exchange (ETDEWEB)
Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O`Dell, R.D.; Walters, W.F.
1995-06-01
The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post-processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are Solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral triangle spatial mesh. The THREEDANT Solver Module solves the time-independent, three-dimensional transport equation for XYZ and RZΘ symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond differencing method with set-to-zero fixup, with changes to accommodate the generalized spatial meshing.
On the Use of Importance Sampling in Particle Transport Problems
Energy Technology Data Exchange (ETDEWEB)
Eriksson, B
1965-06-15
The idea of importance sampling is applied to the problem of solving integral equations of Fredholm's type; in particular, Boltzmann's neutron transport equation is considered. For the solution of the latter equation, an importance sampling technique is derived from some simple transformations of the original transport equation into a similar equation. Examples of transformations that have been used with great success in practice are given.
Transport of transient solar wind particles in Earth's cusps
International Nuclear Information System (INIS)
Parks, G. K.; Lee, E.; Teste, A.; Wilber, M.; Lin, N.; Canu, P.; Dandouras, I.; Reme, H.; Fu, S. Y.; Goldstein, M. L.
2008-01-01
An important problem in space physics that is still not well understood is how the solar wind enters the Earth's magnetosphere. Evidence is presented that transient solar wind particles produced by solar disturbances can appear in the Earth's mid-altitude (∼5 R_E geocentric) cusps with densities nearly equal to those in the magnetosheath. That these are magnetosheath particles is established by showing that they have the same 'flattop' electron distributions as magnetosheath electrons behind the bow shock. The transient ions move parallel to the magnetic field (B) toward Earth and often coexist with ionospheric particles flowing outward. The accompanying waves include electromagnetic and broadband electrostatic noise emissions and Bernstein-mode waves. Phase-space distributions show a mixture of hot and cold electrons and multiple ion species, including field-aligned ionospheric O+ beams.
Directory of Open Access Journals (Sweden)
Ho-Lung Hung
2008-08-01
A suboptimal partial transmit sequence (PTS) technique based on the particle swarm optimization (PSO) algorithm is presented for low computational complexity and reduction of the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) system. In general, the PTS technique can improve the PAPR statistics of an OFDM system. However, it requires an exhaustive search over all combinations of allowed phase weighting factors, with search complexity increasing exponentially with the number of subblocks. In this paper, we work around this potential computational intractability: the proposed PSO scheme exploits heuristics to search for the optimal combination of phase factors with low complexity. Simulation results show that the new technique can effectively reduce both the computational complexity and the PAPR.
Directory of Open Access Journals (Sweden)
Lee Shu-Hong
2008-01-01
A suboptimal partial transmit sequence (PTS) technique based on the particle swarm optimization (PSO) algorithm is presented for low computational complexity and reduction of the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) system. In general, the PTS technique can improve the PAPR statistics of an OFDM system. However, it requires an exhaustive search over all combinations of allowed phase weighting factors, with search complexity increasing exponentially with the number of subblocks. In this paper, we work around this potential computational intractability: the proposed PSO scheme exploits heuristics to search for the optimal combination of phase factors with low complexity. Simulation results show that the new technique can effectively reduce both the computational complexity and the PAPR.
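The quantity being minimized can be made concrete. A minimal PAPR computation for an OFDM symbol (illustrative only; no PTS phase search is shown) takes the inverse DFT of the frequency-domain symbol and compares peak to average power:

```python
import cmath

def papr(freq_symbols):
    """Peak-to-average power ratio of the time-domain OFDM symbol (linear scale)."""
    n = len(freq_symbols)
    # Inverse DFT of the subcarrier symbols gives the time-domain signal.
    time = [sum(X * cmath.exp(2j * cmath.pi * k * t / n)
                for k, X in enumerate(freq_symbols)) / n
            for t in range(n)]
    powers = [abs(x) ** 2 for x in time]
    return max(powers) / (sum(powers) / n)

# All subcarriers in phase is the worst case: the IDFT is an impulse,
# so the PAPR equals the number of subcarriers.
print(papr([1.0] * 8))  # 8.0 (up to rounding)
```

PTS attacks exactly this worst case: by rotating subblocks of `freq_symbols` with phase factors before the IDFT, it searches for the combination whose time-domain peak is lowest, and PSO replaces the exhaustive search over those combinations.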
International Nuclear Information System (INIS)
Pang, X.; Rybarcyk, L.J.
2014-01-01
Particle swarm optimization (PSO) and the genetic algorithm (GA) are both nature-inspired, population-based optimization methods. Compared to the GA, whose history traces back to 1975, PSO is a relatively new heuristic search method, first proposed in 1995. Due to its fast convergence rate in the single-objective optimization domain, the PSO method has been extended to optimize multi-objective problems. In this paper, we introduce the PSO method and its multi-objective extension, the MOPSO, and apply it along with the MOGA (mainly the NSGA-II) to simulations of the LANSCE linac and operational set-point optimizations. Our tests show that both methods provide very similar Pareto fronts, but the MOPSO converges faster.
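A minimal single-objective PSO loop, using the textbook velocity update with inertia w and acceleration constants c1 and c2 (the parameter values and test function are illustrative, not from the paper), looks like:

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Particle swarm optimization of f over the box [-5, 5]^dim."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_val = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + pull toward personal best + pull toward global best.
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = list(xs[i]), val
                if val < gbest_val:
                    gbest, gbest_val = list(xs[i]), val
    return gbest, gbest_val

sphere = lambda x: sum(xi * xi for xi in x)
best, best_val = pso_minimize(sphere, dim=2)
print(best_val)  # converges toward 0
```

The MOPSO extension of the paper keeps the same update but replaces the single global best with a selection from an archive of non-dominated solutions, so the swarm is pulled toward the whole Pareto front rather than one optimum.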
Directory of Open Access Journals (Sweden)
A. Muthukumar
2012-02-01
In general, identification and verification are done with passwords, PIN numbers, etc., which are easily cracked by others. To overcome this issue, biometrics is a unique tool for authenticating an individual. Nevertheless, unimodal biometrics suffers from noise, intra-class variations, spoof attacks, non-universality, and other attacks. To avoid these attacks, multimodal biometrics, i.e., the combination of multiple modalities, is adopted. In a biometric authentication system, the acceptance or rejection of an entity depends on whether the similarity score falls above or below a threshold. This paper therefore focuses on the security of the biometric system, since compromised biometric templates cannot be revoked or reissued, and proposes a multimodal system based on an evolutionary algorithm, Particle Swarm Optimization, that adapts to varying security environments. With these two concerns, a design incorporating adaptability, authenticity, and security is developed.
Directory of Open Access Journals (Sweden)
Ahmad Shokuh Saljoughi
2018-01-01
Today, cloud computing has become popular among users in organizations and companies. Security and efficiency are the two major issues facing cloud service providers and their customers. Since cloud computing is a virtual pool of resources provided in an open environment (the Internet), cloud-based services entail security risks. Detection of intrusions and attacks by unauthorized users is one of the biggest challenges for both cloud service providers and cloud users. In the present study, artificial intelligence techniques, e.g. MLP neural networks and the particle swarm optimization algorithm, were used to detect intrusions and attacks. The methods were tested on the NSL-KDD and KDD-CUP datasets. The results showed improved accuracy in detecting attacks and intrusions by unauthorized users.
Directory of Open Access Journals (Sweden)
Indrajit Bhattacharya
2011-05-01
The present paper proposes a departmental store automation system based on Radio Frequency Identification (RFID) technology and the Particle Swarm Optimization (PSO) algorithm. The items in the departmental store, spanning different sections and multiple floors, are tagged with passive RFID tags. The floor is divided into a number of zones depending on the different types of items placed in their respective racks. Each zone is equipped with one RFID reader, which constantly monitors the items in its zone and periodically sends that information to the application. The problem of systematic periodic monitoring of the store is addressed in this application, so that the locations, distributions, and demands of every item in the store can be monitored intelligently. The proposed application is successfully demonstrated on a simulated case study.
Directory of Open Access Journals (Sweden)
Jude Hemanth Duraisamy
2016-01-01
Image steganography is an ever-growing computational approach that has found application in many fields. Frequency domain techniques are highly preferred for image steganography applications. However, there are significant drawbacks associated with these techniques. In transform-based approaches, the secret data is embedded in a random manner in the transform coefficients of the cover image. These transform coefficients may not be optimal in terms of stego image quality and embedding capacity. In this work, the application of the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) is explored in the context of determining the optimal coefficients in these transforms. Frequency domain transforms such as the Bandelet Transform (BT) and the Finite Ridgelet Transform (FRIT) are used in combination with GA and PSO to improve the efficiency of the image steganography system.
Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.
Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane
2016-08-01
Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.
International Nuclear Information System (INIS)
Campos, Gustavo L.; Campos, Tarcísio P.R.
2017-01-01
This paper presents an optimized proposal for a circular particle accelerator for proton beam therapy purposes (named ACPT). Computational metaheuristics based on genetic algorithms (GA), developed in Matlab® software, were used to obtain optimized parameters of the equipment; some fundamental concepts behind the metaheuristics are presented. Four parameters were considered in the proposed modeling of the equipment: potential difference, magnetic field, and length and radius of the resonant cavity. As a result, this article presents optimized parameters for two ACPT designs obtained through the GA-based metaheuristics: ACPT-65, intended for ocular radiation therapy, and ACPT-250, whose parameters will allow teletherapy. (author)
Energy Technology Data Exchange (ETDEWEB)
Pang, X., E-mail: xpang@lanl.gov; Rybarcyk, L.J.
2014-03-21
Particle swarm optimization (PSO) and genetic algorithm (GA) are both nature-inspired, population-based optimization methods. Compared to GA, whose history traces back to 1975, PSO is a relatively new heuristic search method, first proposed in 1995. Due to its fast convergence rate in the single-objective optimization domain, the PSO method has been extended to optimize multi-objective problems. In this paper, we introduce the PSO method and its multi-objective extension, MOPSO, and apply it along with the MOGA (mainly the NSGA-II) to simulations of the LANSCE linac and operational set point optimizations. Our tests show that both methods provide very similar Pareto fronts, but the MOPSO converges faster.
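As a rough illustration of the canonical single-objective PSO update that these abstracts build on (a generic toy sketch, not the LANSCE implementation), in Python:

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box via canonical global-best PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    # initialize positions uniformly inside the bounds, velocities at zero
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull toward pbest + social pull toward gbest
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]), bounds[d][1])
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), fx
                if fx < gbest_f:
                    gbest, gbest_f = list(xs[i]), fx
    return gbest, gbest_f

# minimize the 2D sphere function; the optimum is at the origin
best, best_f = pso(lambda x: sum(t * t for t in x), [(-5, 5)] * 2)
```

The multi-objective MOPSO extension replaces the single `gbest` with an external archive of non-dominated solutions, but the velocity update has the same shape.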
Energy Technology Data Exchange (ETDEWEB)
Campos, Gustavo L.; Campos, Tarcísio P.R., E-mail: gustavo.lobato@ifmg.edu.br, E-mail: tprcampos@pq.cnpq.br, E-mail: gustavo.lobato@ifmg.edu.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear
2017-07-01
This paper presents an optimized proposal for a circular particle accelerator for proton beam therapy purposes (named ACPT). Computational metaheuristics based on genetic algorithms (GA), developed in Matlab® software, were used to obtain optimized parameters of the equipment; some fundamental concepts behind the metaheuristics are presented. Four parameters were considered in the proposed modeling of the equipment: potential difference, magnetic field, and length and radius of the resonant cavity. As a result, this article presents optimized parameters for two ACPT designs obtained through the GA-based metaheuristics: ACPT-65, intended for ocular radiation therapy, and ACPT-250, whose parameters will allow teletherapy. (author)
Application of an Intelligent Fuzzy Regression Algorithm in Road Freight Transportation Modeling
Directory of Open Access Journals (Sweden)
Pooya Najaf
2013-07-01
Road freight transportation between provinces of a country has an important effect on the traffic flow of intercity transportation networks. Therefore, an accurate estimation of the road freight transportation of a country's provinces is crucial for improving rural traffic operation in large-scale management. Accordingly, the case study database in this research is the information related to Iran's provinces in the year 2008. The correlation of road freight transportation with variables such as transport cost and distance, population, average household income and Gross Domestic Product (GDP) of each province is calculated. Results clarify that population is the most effective factor in the prediction of a province's transported freight. A Linear Regression Model (LRM) is calibrated based on the population variable, and afterwards a Fuzzy Regression Algorithm (FRA) is generated on the basis of the LRM. The proposed FRA is an intelligent modified algorithm with accurate prediction and fitting ability. This methodology can be significantly useful in macro-level planning problems where decreasing prediction error is one of the most important concerns for decision makers. In addition, a Back-Propagation Neural Network (BPNN) is developed to evaluate the prediction capability of the models and to be compared with the FRA. According to the final results, the modified FRA estimates road freight transportation values more accurately than the BPNN and LRM. Finally, in order to predict the road freight transportation values, the reliability of the calibrated models is analyzed using the information of the year 2009. Results show higher reliability for the proposed modified FRA.
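The core idea of building a fuzzy regression on top of a calibrated linear model can be sketched in a minimal way: fit the crisp line by least squares, then widen it into a fuzzy band. The covering-spread convention below is one simple illustrative choice, not the paper's FRA:

```python
def fuzzy_linear_fit(xs, ys):
    """Fit y = a + b*x by least squares, then widen it into a symmetric
    fuzzy band whose spread s is the smallest value covering every point."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    # smallest symmetric spread so all observations lie inside [line-s, line+s]
    s = max(abs(y - (a + b * x)) for x, y in zip(xs, ys))
    return a, b, s

# toy data (illustrative values, not the Iran 2008 province data)
a, b, s = fuzzy_linear_fit([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
```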
Effects of fuel particle size distributions on neutron transport in stochastic media
International Nuclear Information System (INIS)
Liang, Chao; Pavlou, Andrew T.; Ji, Wei
2014-01-01
Highlights: • Effects of fuel particle size distributions on neutron transport are evaluated. • Neutron channeling is identified as the fundamental reason for the effects. • The effects are noticeable at low packing and low optical thickness systems. • Unit cells of realistic reactor designs are studied for different size particles. • Fuel particle size distribution effects are not negligible in realistic designs. - Abstract: This paper presents a study of the fuel particle size distribution effects on neutron transport in three-dimensional stochastic media. Particle fuel is used in gas-cooled nuclear reactor designs and innovative light water reactor designs loaded with accident tolerant fuel. Due to the design requirements and fuel fabrication limits, the size of fuel particles may not be perfectly constant but instead follows a certain distribution. This brings a fundamental question to the radiation transport computation community: how does the fuel particle size distribution affect the neutron transport in particle fuel systems? To answer this question, size distribution effects and their physical interpretations are investigated by performing a series of neutron transport simulations at different fuel particle size distributions. An eigenvalue problem is simulated in a cylindrical container consisting of fissile fuel particles with five different size distributions: constant, uniform, power, exponential and Gaussian. A total of 15 parametric cases are constructed by altering the fissile particle volume packing fraction and its optical thickness, but keeping the mean chord length of the spherical fuel particle the same at different size distributions. The tallied effective multiplication factor (keff) and the spatial distribution of fission power density along axial and radial directions are compared between different size distributions. At low packing fraction and low optical thickness, the size distribution shows a noticeable effect on neutron
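The constraint of keeping the mean chord length fixed across size distributions can be made concrete. For a convex body the mean chord length is 4V/S, so for a sphere it is (4/3)r, and for a polydisperse population (4/3)⟨r³⟩/⟨r²⟩. A small Python illustration (the radius values are arbitrary, not the paper's cases):

```python
import random

def mean_chord(radii):
    """Population mean chord length of polydisperse spheres:
    l = 4<V>/<S> = (4/3) * <r^3> / <r^2>."""
    r2 = sum(r ** 2 for r in radii) / len(radii)
    r3 = sum(r ** 3 for r in radii) / len(radii)
    return (4.0 / 3.0) * r3 / r2

def rescale_to_chord(radii, target):
    """Uniformly scale all radii so the population mean chord equals target."""
    k = target / mean_chord(radii)
    return [k * r for r in radii]

rng = random.Random(1)
# a uniform size distribution rescaled to mean chord length 1.0 ...
uniform = rescale_to_chord([rng.uniform(0.5, 1.5) for _ in range(10000)], 1.0)
# ... and the constant-radius distribution with the same mean chord: r = 3l/4
constant = [0.75] * 10000
```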
International Nuclear Information System (INIS)
Torok, J.; Buckley, L.P.; Woods, B.L.
1989-01-01
Laboratory-scale lysimeter experiments were performed with simulated waste forms placed in candidate buffer materials which have been chosen for a low-level radioactive waste repository. Radionuclide releases into the effluent water and radionuclide capture by the buffer material were determined. The results could not be explained by traditional solution transport mechanisms, and transport by particles released from the waste form and/or transport by buffer particles were suspected as the dominant mechanism for radionuclide release from the lysimeters. To elucidate the relative contribution of particle and solution transport, the waste forms were replaced by a wafer of neutron-activated buffer soaked with selected soluble isotopes. Particle transport was determined by the movement of gamma-emitting neutron-activation products through the lysimeter. Solution transport was quantified by comparing the migration of soluble radionuclides relative to the transport of neutron activation products. The new approach for monitoring radionuclide migration in soil is presented. It facilitates the determination of most of the fundamental coefficients required to model the transport process
vanWees, BJ
1996-01-01
We have investigated supercurrent and quasi-particle transport in the 2DEG present in InAs/Al(Ga)Sb quantum wells. The physics of these systems will be discussed with two examples: (i) supercurrent transport in Nb/InAs/Nb junctions, and (ii) phase-dependent resistance in a superconductor-2DEG
Nuclear fuel particles in the environment - characteristics, atmospheric transport and skin doses
International Nuclear Information System (INIS)
Poellaenen, R.
2002-05-01
In the present thesis, nuclear fuel particles are studied from the perspective of their characteristics, atmospheric transport and possible skin doses. These particles, often referred to as 'hot' particles, can be released into the environment, as has happened in past years, through human activities, incidents and accidents, such as the Chernobyl nuclear power plant accident in 1986. Nuclear fuel particles with a diameter of tens of micrometers, referred to here as large particles, may be hundreds of kilobecquerels in activity and even an individual particle may present a quantifiable health hazard. The detection of individual nuclear fuel particles in the environment, their isolation for subsequent analysis and their characterisation are complicated and require well-designed sampling and tailored analytical methods. In the present study, the need to develop particle analysis methods is highlighted. It is shown that complementary analytical techniques are necessary for proper characterisation of the particles. Methods routinely used for homogeneous samples may produce erroneous results if they are carelessly applied to radioactive particles. Large nuclear fuel particles are transported differently in the atmosphere compared with small particles or gaseous species. Thus, the trajectories of gaseous species are not necessarily appropriate for calculating the areas that may receive large particle fallout. A simplified model and a more advanced model based on the data on real weather conditions were applied in the case of the Chernobyl accident to calculate the transport of the particles of different sizes. The models were appropriate in characterising general transport properties but were not able to properly predict the transport of the particles with an aerodynamic diameter of tens of micrometers, detected at distances of hundreds of kilometres from the source, using only the current knowledge of the source term. Either the effective release height has been higher
Nuclear fuel particles in the environment - characteristics, atmospheric transport and skin doses
Energy Technology Data Exchange (ETDEWEB)
Poellaenen, R
2002-05-01
In the present thesis, nuclear fuel particles are studied from the perspective of their characteristics, atmospheric transport and possible skin doses. These particles, often referred to as 'hot' particles, can be released into the environment, as has happened in past years, through human activities, incidents and accidents, such as the Chernobyl nuclear power plant accident in 1986. Nuclear fuel particles with a diameter of tens of micrometers, referred to here as large particles, may be hundreds of kilobecquerels in activity and even an individual particle may present a quantifiable health hazard. The detection of individual nuclear fuel particles in the environment, their isolation for subsequent analysis and their characterisation are complicated and require well-designed sampling and tailored analytical methods. In the present study, the need to develop particle analysis methods is highlighted. It is shown that complementary analytical techniques are necessary for proper characterisation of the particles. Methods routinely used for homogeneous samples may produce erroneous results if they are carelessly applied to radioactive particles. Large nuclear fuel particles are transported differently in the atmosphere compared with small particles or gaseous species. Thus, the trajectories of gaseous species are not necessarily appropriate for calculating the areas that may receive large particle fallout. A simplified model and a more advanced model based on the data on real weather conditions were applied in the case of the Chernobyl accident to calculate the transport of the particles of different sizes. The models were appropriate in characterising general transport properties but were not able to properly predict the transport of the particles with an aerodynamic diameter of tens of micrometers, detected at distances of hundreds of kilometres from the source, using only the current knowledge of the source term. Either the effective release height has
Cai, Li; Tong, Meiping; Wang, Xueting; Kim, Hyunjung
2014-07-01
This study investigated the influence of two representative suspended clay particles, bentonite and kaolinite, on the transport of titanium dioxide nanoparticles (nTiO2) in saturated quartz sand in both NaCl (1 and 10 mM ionic strength) and CaCl2 solutions (0.1 and 1 mM ionic strength) at pH 7. The breakthrough curves of nTiO2 with bentonite or kaolinite were higher than those without the presence of clay particles in NaCl solutions, indicating that both types of clay particles increased nTiO2 transport in NaCl solutions. Moreover, the enhancement of nTiO2 transport was more significant when bentonite was present in nTiO2 suspensions relative to kaolinite. Similar to NaCl solutions, in CaCl2 solutions the breakthrough curves of nTiO2 with bentonite were also higher than those without clay particles, while the breakthrough curves of nTiO2 with kaolinite were lower than those without clay particles. Clearly, in CaCl2 solutions the presence of bentonite in suspensions increased nTiO2 transport, whereas kaolinite decreased nTiO2 transport in quartz sand. The attachment of nTiO2 onto clay particles (both bentonite and kaolinite) was observed under all experimental conditions. The increased transport of nTiO2 in most experimental conditions (except for kaolinite in CaCl2 solutions) was attributed mainly to clay-facilitated nTiO2 transport. The straining of larger nTiO2-kaolinite clusters, however, contributed to the decreased transport (enhanced retention) of nTiO2 in divalent CaCl2 solutions when kaolinite particles were copresent in suspensions.
Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong
2018-03-01
With the establishment of the integrated model of relay protection and the expanding scale of the power system, the global setting and optimization of relay protection is an extremely difficult task. This paper presents an application of an improved particle swarm optimization algorithm to the global optimization of relay protection, taking inverse-time current protection as an example. Reliability, selectivity, speed of action and flexibility of the relay protection are selected as the four requirements for establishing the optimization targets, and the protection setting values of the whole system are optimized. Finally, for an actual power system, the optimized setting value results of the proposed method are compared with those of the standard particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and is suitable for optimizing setting values in the relay protection of the whole power system.
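For reference, the inverse-time overcurrent characteristic whose settings are being optimized is conventionally the IEC standard-inverse curve, t = TMS · k / ((I/Is)^α − 1) with k = 0.14 and α = 0.02. A minimal sketch with generic illustrative settings (not the paper's system values):

```python
def idmt_trip_time(current, pickup, tms, k=0.14, alpha=0.02):
    """IEC standard-inverse overcurrent characteristic:
    t = TMS * k / ((I/Is)^alpha - 1), defined only above pickup."""
    m = current / pickup
    if m <= 1.0:
        return float("inf")  # below pickup current: the relay never trips
    return tms * k / (m ** alpha - 1.0)

# fault current of 2 kA on a relay picked up at 400 A, TMS = 0.1
t = idmt_trip_time(current=2000.0, pickup=400.0, tms=0.1)
```

Coordinating many such relays means choosing each pickup and TMS so that downstream relays always trip faster than upstream backups, which is the constrained global optimization the abstract describes.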
International Nuclear Information System (INIS)
Jiang Chuanwen; Bompard, Etorre
2005-01-01
This paper proposes a short-term hydroelectric plant dispatch model based on the rule of maximizing the benefit. The optimal dispatch model is a large-scale nonlinear programming problem with multiple constraints and variables; to solve the short-term generation scheduling of a hydro-system better in a deregulated environment, this paper proposes a novel self-adaptive chaotic particle swarm optimization algorithm. Since chaotic mapping enjoys certainty, ergodicity and the stochastic property, the proposed approach introduces chaos mapping and an adaptive scaling term into the particle swarm optimization algorithm, which increases its convergence rate and resulting precision. The new method has been examined and tested on a practical hydro-system. The results are promising and show the effectiveness and robustness of the proposed approach in comparison with the traditional particle swarm optimization algorithm
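The chaotic mapping referred to in such chaotic-PSO variants is typically the logistic map, whose iterates at μ = 4 wander ergodically over (0, 1) and can stand in for the uniform random coefficients of the velocity update. A minimal sketch (the seed value is arbitrary):

```python
def logistic_sequence(z0, n, mu=4.0):
    """Iterate the logistic map z_{k+1} = mu * z_k * (1 - z_k).
    With mu = 4 the orbit is chaotic and ergodic on (0, 1)."""
    z, out = z0, []
    for _ in range(n):
        z = mu * z * (1.0 - z)
        out.append(z)
    return out

# a short chaotic sequence that could replace rand() draws in a PSO update
seq = logistic_sequence(0.37, 5)
```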
Wang, Li; Li, Feng; Xing, Jian
2017-10-01
In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied to the recovery of particle size distribution (PSD) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's SB function, which can overcome the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated by the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested with actual extinction measurements on real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, the proposed algorithm produces more accurate and robust inversion results while taking almost the same CPU time as the ABC algorithm alone. The superiority of the ABC and PS hybridization strategy in reaching a better balance of estimation accuracy and computational effort increases its potential as an inversion technique for reliable and efficient actual measurement of PSDs.
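The pattern search stage used for local refinement in such hybrids can be sketched as a derivative-free compass search (a generic implementation under simple assumptions, not the authors' code):

```python
def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=1000):
    """Derivative-free compass search: poll +/- step along each coordinate
    axis, move to the first improving point, otherwise shrink the step."""
    x, fx = list(x0), f(x0)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for d in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[d] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= shrink  # no axis direction improved: refine the mesh
    return x, fx

# refine toward the minimum of a simple quadratic at (2, -1)
x, fx = pattern_search(lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2, [0.0, 0.0])
```

In the hybrid, the global ABC phase would supply `x0` and the compass search would polish it, which is where the accuracy gain over ABC alone comes from.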
International Nuclear Information System (INIS)
Banerjee, Amit; Abu-Mahfouz, Issam
2014-01-01
The use of evolutionary algorithms has been popular in recent years for solving the inverse problem of identifying system parameters given the chaotic response of a dynamical system. The inverse problem is reformulated as a minimization problem, and population-based optimizers such as evolutionary algorithms have been shown to be efficient solvers of the minimization problem. However, to the best of our knowledge, there has been no published work that evaluates the efficacy of the two most popular evolutionary techniques, particle swarm optimization and the differential evolution algorithm, on a wide range of parameter estimation problems. In this paper, the two methods along with their variants (for a total of seven algorithms) are applied to fifteen different parameter estimation problems of varying degrees of complexity. Estimation results are analyzed using nonparametric statistical methods to identify whether an algorithm is statistically superior to the others over the class of problems analyzed. Results based on parameter estimation quality suggest that there are significant differences between the algorithms, with the newer, more sophisticated algorithms performing better than their canonical versions. More importantly, significant differences were also found among variants of the particle swarm optimizer and the best performing differential evolution algorithm.
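The canonical differential evolution scheme evaluated in such comparisons is DE/rand/1/bin: mutate with a scaled difference of two random population members, apply binomial crossover, and keep the trial only if it is no worse. A generic toy sketch (not the benchmarked variants from the paper):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=150, seed=3):
    """Minimize f over a box with canonical DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # three distinct donors, none equal to the target index i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for d in range(dim):
                if rng.random() < CR or d == jrand:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                else:
                    v = pop[i][d]
                trial.append(min(max(v, bounds[d][0]), bounds[d][1]))
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    k = min(range(pop_size), key=lambda j: fit[j])
    return pop[k], fit[k]

best, best_f = differential_evolution(lambda x: sum(t * t for t in x),
                                      [(-5, 5)] * 2)
```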
Double-layer evolutionary algorithm for distributed optimization of particle detection on the Grid
International Nuclear Information System (INIS)
Padée, Adam; Zaremba, Krzysztof; Kurek, Krzysztof
2013-01-01
Reconstruction of particle tracks from information collected by position-sensitive detectors is an important procedure in HEP experiments. It is usually controlled by a set of numerical parameters which have to be manually optimized. This paper proposes an automatic approach to this task, utilizing an evolutionary algorithm (EA) operating on both real-valued and binary representations. Because of the computational complexity of the task, a special distributed architecture of the algorithm is proposed, designed to be run in a grid environment. It is a two-level hierarchical hybrid, utilizing an asynchronous master-slave EA on the level of clusters and an island-model EA on the level of the grid. The technical aspects of the usage of production grid infrastructure are covered, including communication protocols on both levels. The paper also deals with the problem of heterogeneity of the resources, presenting efficiency tests on a benchmark function. These tests confirm that even relatively small islands (clusters) can be beneficial to the optimization process when connected to larger ones. Finally, a real-life usage example is presented: an optimization of track reconstruction in the Large Angle Spectrometer of the NA-58 COMPASS experiment at CERN, using a sample of Monte Carlo simulated data. The overall reconstruction efficiency gain achieved by the proposed method is more than 4%, compared to the manually optimized parameters.
An extension theory-based maximum power tracker using a particle swarm optimization algorithm
International Nuclear Information System (INIS)
Chao, Kuei-Hsiang
2014-01-01
Highlights: • We propose an adaptive maximum power point tracking (MPPT) approach for PV systems. • Transient and steady state performances in the tracking process are improved. • The proposed MPPT can automatically tune the tracking step size along a P–V curve. • A PSO algorithm is used to determine the weighting values of extension theory. - Abstract: The aim of this work is to present an adaptive maximum power point tracking (MPPT) approach for photovoltaic (PV) power generation systems. By integrating extension theory with the conventional perturb and observe method, a maximum power point (MPP) tracker is made able to automatically tune the tracking step size by way of category recognition along a P–V characteristic curve. Accordingly, the transient and steady state performances in the tracking process are improved. Furthermore, an optimization approach based on a particle swarm optimization (PSO) algorithm is proposed to reduce the complexity of determining the weighting values. At the end of this work, the simulated improvement in tracking performance is experimentally validated by an MPP tracker with a programmable system-on-chip (PSoC) based controller
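The conventional perturb-and-observe loop that these adaptive trackers start from can be sketched with a crude slope-scaled step (a generic toy model and an idealized parabolic P–V curve, not the extension-theory tracker):

```python
def p_and_o_step(v, p, v_prev, p_prev, base_step=0.5):
    """One perturb-and-observe MPPT iteration: keep perturbing in the
    direction that raised power, reverse when it fell; the step is scaled
    by the observed power slope, so it shrinks near the MPP."""
    dv, dp = v - v_prev, p - p_prev
    if dv == 0:
        return v + base_step  # no information yet: just perturb
    slope = dp / dv
    direction = 1.0 if slope > 0 else -1.0
    step = base_step * min(abs(slope), 1.0)
    return v + direction * step

# idealized P-V curve with its maximum power point at v = 17 (illustrative)
power = lambda v: 100.0 - (v - 17.0) ** 2
v_prev, v = 10.0, 10.5
for _ in range(200):
    v_prev, v = v, p_and_o_step(v, power(v), v_prev, power(v_prev))
```

The fixed-step version of this loop is what causes the classic tracking speed versus steady-state oscillation trade-off that the adaptive schemes address.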
Directory of Open Access Journals (Sweden)
Po-Chen Cheng
2015-06-01
In this paper, an asymmetrical fuzzy-logic-control (FLC) based maximum power point tracking (MPPT) algorithm for photovoltaic (PV) systems is presented. Two membership function (MF) design methodologies that can improve the effectiveness of the proposed asymmetrical FLC-based MPPT method are then proposed. The first method can quickly determine the input MF setting values via the power–voltage (P–V) curve of solar cells under standard test conditions (STC). The second method uses the particle swarm optimization (PSO) technique to optimize the input MF setting values. Because the PSO approach must target and optimize a cost function, a cost function design methodology that meets the performance requirements of practical photovoltaic generation systems (PGSs) is also proposed. According to the simulated and experimental results, the proposed asymmetrical FLC-based MPPT method has the highest fitness value; therefore, it can successfully address the tracking speed/tracking accuracy dilemma, compared with the traditional perturb and observe (P&O) and symmetrical FLC-based MPPT algorithms. Compared to the conventional FLC-based MPPT method, the obtained optimal asymmetrical FLC-based MPPT can improve the transient time and the MPPT tracking accuracy by 25.8% and 0.98% under STC, respectively.
He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming
2014-12-01
The Ant Colony Optimization algorithm based on the probability density function (PDF-ACO) is applied to estimate the bimodal aerosol particle size distribution (PSD). The direct problem is solved by the modified Anomalous Diffraction Approximation (ADA, as an approximation for optically large and soft spheres, i.e., χ⪢1 and |m-1|⪡1) and the Beer-Lambert law. First, a popular bimodal aerosol PSD and three other bimodal PSDs are retrieved in the dependent model by the multi-wavelength extinction technique. All the results reveal that the PDF-ACO algorithm can be used as an effective technique to investigate the bimodal PSD. Then, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as the general distribution function to retrieve the bimodal PSDs under the independent model. Finally, the J-SB and M-β functions are applied to recover actual measurement aerosol PSDs over Beijing and Shanghai obtained from the aerosol robotic network (AERONET). The numerical simulation and experimental results demonstrate that these two general functions, especially the J-SB function, can be used as a versatile distribution function to retrieve the bimodal aerosol PSD when no priori information about the PSD is available.
Detection of Carious Lesions and Restorations Using Particle Swarm Optimization Algorithm
Directory of Open Access Journals (Sweden)
Mohammad Naebi
2016-01-01
Background/Purpose. Up to now, no intelligent detection of tooth lesions has been implemented; dentists simply look at images and detect the position of the lesion in the tooth based on their experience. Using new technologies, detection and repair of tooth lesions can be implemented intelligently. In this paper, we introduce an intelligent detection method using particle swarm optimization (PSO) and our mathematical formulation. The method was applied to 2D images; by extending it, tooth lesions can be detected in both 2D and 3D images. Materials and Methods. In recent years, it has become possible to implement intelligent image processing with high-efficiency optimization algorithms in many applications, especially for the detection of dental caries and restorations without human intervention. In the present work, we describe the PSO algorithm together with our detection formula for the detection of dental caries and restorations; image processing helped us to implement the method, using pictures of teeth taken by digital radiography systems. Results and Conclusion. We implement a mathematical formula for the fitness of the PSO. Our results show that this method can detect dental caries and restorations in digital radiography pictures with good convergence. The error rate of this method was 8%, so it can be implemented for the detection of dental caries and restorations; with suitable parameter choices, the error rate can be reduced even below 0.5%.
PSOVina: The hybrid particle swarm optimization algorithm for protein-ligand docking.
Ng, Marcus C K; Fong, Simon; Siu, Shirley W I
2015-06-01
Protein-ligand docking is an essential step in the modern drug discovery process. The challenge is to accurately predict and efficiently optimize the position and orientation of ligands in the binding pocket of a target protein. In this paper, we present a new method called PSOVina, which combines the particle swarm optimization (PSO) algorithm with the efficient Broyden-Fletcher-Goldfarb-Shanno (BFGS) local search method adopted in AutoDock Vina to tackle the conformational search problem in docking. Using a diverse data set of 201 protein-ligand complexes from the PDBbind database and a full set of ligands and decoys for four representative targets from the directory of useful decoys (DUD) virtual screening data set, we assessed the docking performance of PSOVina in comparison to the original Vina program. Our results showed that PSOVina achieves a remarkable execution time reduction of 51-60% without compromising the prediction accuracies in the docking and virtual screening experiments. This improvement in time efficiency makes PSOVina a better choice of docking tool in large-scale protein-ligand docking applications. Our work lays the foundation for the future development of swarm-based algorithms in molecular docking programs. PSOVina is freely available to non-commercial users at http://cbbio.cis.umac.mo .
Diyana Rosli, Anis; Adenan, Nur Sabrina; Hashim, Hadzli; Ezan Abdullah, Noor; Sulaiman, Suhaimi; Baharudin, Rohaiza
2018-03-01
This paper shows findings of the application of Particle Swarm Optimization (PSO) algorithm in optimizing an Artificial Neural Network that could categorize between ripeness and unripeness stage of citrus suhuensis. The algorithm would adjust the network connections weights and adapt its values during training for best results at the output. Initially, citrus suhuensis fruit’s skin is measured using optically non-destructive method via spectrometer. The spectrometer would transmit VIS (visible spectrum) photonic light radiation to the surface (skin of citrus) of the sample. The reflected light from the sample’s surface would be received and measured by the same spectrometer in terms of reflectance percentage based on VIS range. These measured data are used to train and test the best optimized ANN model. The accuracy is based on receiver operating characteristic (ROC) performance. The result outcomes from this investigation have shown that the achieved accuracy for the optimized is 70.5% with a sensitivity and specificity of 60.1% and 80.0% respectively.