WorldWideScience

Sample records for particle transport algorithm

  1. Advective isotope transport by mixing cell and particle tracking algorithms

    International Nuclear Information System (INIS)

    Tezcan, L.; Meric, T.

    1999-01-01

    The 'mixing cell' algorithm for environmental isotope data evaluation is integrated with the three-dimensional finite-difference groundwater flow model MODFLOW to simulate advective isotope transport. The approach is compared with the 'particle tracking' algorithm of MOC3D, which simulates three-dimensional solute transport using the method-of-characteristics technique.

  2. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    Science.gov (United States)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a constrained optimization problem in which we want to minimize cost subject to a balance between total supply and total demand. Exact methods such as the northwest-corner, Vogel, Russell, and minimal-cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve on the optimal solution obtained by PSO.
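
    The abstract does not give the PSOGA update equations; the following is only a minimal illustrative sketch in Python, assuming a standard PSO velocity/position update, a GA-style Gaussian mutation applied to a random subset of particles, and a quadratic penalty for the supply/demand balance. All names and parameter values (psoga_transport, penalty_weight, mutation_rate, and so on) are hypothetical, not taken from the paper.

      import numpy as np

      def psoga_transport(cost, supply, demand, n_particles=40, iters=200,
                          w=0.7, c1=1.5, c2=1.5, mutation_rate=0.1, penalty_weight=1e3):
          """Sketch of a PSO + GA-mutation heuristic for a linear transportation problem.

          cost: (m, n) unit shipping costs; supply: (m,); demand: (n,).
          Positions are non-negative shipment matrices flattened to vectors;
          constraint violations are handled with a penalty term (an assumption,
          not necessarily the scheme used in the paper).
          """
          rng = np.random.default_rng(0)
          m, n = cost.shape
          dim = m * n

          def fitness(x):
              ship = x.reshape(m, n)
              violation = (np.abs(ship.sum(axis=1) - supply).sum()
                           + np.abs(ship.sum(axis=0) - demand).sum())
              return np.sum(cost * ship) + penalty_weight * violation

          pos = rng.uniform(0, supply.max(), size=(n_particles, dim))
          vel = np.zeros_like(pos)
          pbest = pos.copy()
          pbest_val = np.array([fitness(p) for p in pos])
          gbest = pbest[pbest_val.argmin()].copy()

          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
              pos = np.clip(pos + vel, 0.0, None)          # shipments stay non-negative
              # GA-style mutation: perturb a random subset of particles
              mutate = rng.random(n_particles) < mutation_rate
              pos[mutate] += rng.normal(0, 0.1 * supply.max(), size=(mutate.sum(), dim))
              pos = np.clip(pos, 0.0, None)
              vals = np.array([fitness(p) for p in pos])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
              gbest = pbest[pbest_val.argmin()].copy()
          return gbest.reshape(m, n), pbest_val.min()

    On a small instance (for example, a 2x2 cost matrix with balanced supply and demand), the returned plan approximately satisfies the row and column sums while keeping the shipping cost low.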

  3. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, M. J.; Brantley, P. S.

    2015-01-20

    In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load imbalanced calculations where each domain’s replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.

  4. A Coulomb collision algorithm for weighted particle simulations

    Science.gov (United States)

    Miller, Ronald H.; Combi, Michael R.

    1994-01-01

    A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.

  5. PARTRACK - A particle tracking algorithm for transport and dispersion of solutes in a sparsely fractured rock

    International Nuclear Information System (INIS)

    Svensson, Urban

    2001-04-01

    A particle tracking algorithm, PARTRACK, that simulates transport and dispersion in a sparsely fractured rock is described. The main novel feature of the algorithm is the introduction of multiple particle states. It is demonstrated that the introduction of this feature allows for the simultaneous simulation of Taylor dispersion, sorption and matrix diffusion. A number of test cases are used to verify and demonstrate the features of PARTRACK. It is shown that PARTRACK can simulate the following processes, believed to be important for the problem addressed: the split up of a tracer cloud at a fracture intersection, channeling in a fracture plane, Taylor dispersion and matrix diffusion and sorption. From the results of the test cases, it is concluded that PARTRACK is an adequate framework for simulation of transport and dispersion of a solute in a sparsely fractured rock

  6. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    Science.gov (United States)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N) whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes---in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with

  7. Performance analysis of multidimensional wavefront algorithms with application to deterministic particle transport

    International Nuclear Information System (INIS)

    Hoisie, A.; Lubeck, O.; Wasserman, H.

    1998-01-01

    The authors develop a model for the parallel performance of algorithms that consist of concurrent, two-dimensional wavefronts implemented in a message passing environment. The model, based on a LogGP machine parameterization, combines the separate contributions of computation and communication wavefronts. They validate the model on three important supercomputer systems, on up to 500 processors. They use data from a deterministic particle transport application taken from the ASCI workload, although the model is general to any wavefront algorithm implemented on a 2-D processor domain. They also use the validated model to make estimates of performance and scalability of wavefront algorithms on 100-TFLOPS computer systems expected to be in existence within the next decade as part of the ASCI program and elsewhere. In this context, the authors analyze two problem sizes. Their model shows that on the largest such problem (1 billion cells), inter-processor communication performance is not the bottleneck. Single-node efficiency is the dominant factor

  8. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)]

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  9. Fast algorithms for transport models. Final report, June 1, 1993--May 31, 1994

    International Nuclear Information System (INIS)

    Manteuffel, T.

    1994-12-01

    The focus of this project is the study of multigrid and multilevel algorithms for the numerical solution of Boltzmann models of the transport of neutral and charged particles. In previous work a fast multigrid algorithm was developed for the numerical solution of the Boltzmann model of neutral particle transport in slab geometry assuming isotropic scattering. The new algorithm is extremely fast in the thick diffusion limit; the multigrid v-cycle convergence factor approaches zero as the mean free path between collisions approaches zero, independent of the mesh. Also, a fast multilevel method was developed for the numerical solution of the Boltzmann model of charged particle transport in the thick Fokker-Planck limit for slab geometry. Parallel implementations were developed for both algorithms.

  10. Partially linearized algorithms in gyrokinetic particle simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dimits, A.M.; Lee, W.W.

    1990-10-01

    In this paper, particle simulation algorithms with time-varying weights for the gyrokinetic Vlasov-Poisson system have been developed. The primary purpose is to use them for the removal of the selected nonlinearities in the simulation of gradient-driven microturbulence so that the relative importance of the various nonlinear effects can be assessed. It is hoped that the use of these procedures will result in a better understanding of the transport mechanisms and scaling in tokamaks. Another application of these algorithms is for the improvement of the numerical properties of the simulation plasma. For instance, implementations of such algorithms (1) enable us to suppress the intrinsic numerical noise in the simulation, and (2) also make it possible to regulate the weights of the fast-moving particles and, in turn, to eliminate the associated high frequency oscillations. Examples of their application to drift-type instabilities in slab geometry are given. We note that the work reported here represents the first successful use of the weighted algorithms in particle codes for the nonlinear simulation of plasmas.

  11. Partially linearized algorithms in gyrokinetic particle simulation

    International Nuclear Information System (INIS)

    Dimits, A.M.; Lee, W.W.

    1990-10-01

    In this paper, particle simulation algorithms with time-varying weights for the gyrokinetic Vlasov-Poisson system have been developed. The primary purpose is to use them for the removal of the selected nonlinearities in the simulation of gradient-driven microturbulence so that the relative importance of the various nonlinear effects can be assessed. It is hoped that the use of these procedures will result in a better understanding of the transport mechanisms and scaling in tokamaks. Another application of these algorithms is for the improvement of the numerical properties of the simulation plasma. For instance, implementations of such algorithms (1) enable us to suppress the intrinsic numerical noise in the simulation, and (2) also make it possible to regulate the weights of the fast-moving particles and, in turn, to eliminate the associated high frequency oscillations. Examples of their application to drift-type instabilities in slab geometry are given. We note that the work reported here represents the first successful use of the weighted algorithms in particle codes for the nonlinear simulation of plasmas

  12. A transport-based condensed history algorithm

    International Nuclear Information System (INIS)

    Tolar, D. R. Jr.

    1999-01-01

    Condensed history algorithms are approximate electron transport Monte Carlo methods in which the cumulative effects of multiple collisions are modeled in a single step of (user-specified) path length s_0. This path length is the distance each Monte Carlo electron travels between collisions. Current condensed history techniques utilize a splitting routine over the range 0 ≤ s ≤ s_0. For example, the PENELOPE method splits each step into two substeps; one with length ξs_0 and one with length (1 − ξ)s_0, where ξ is a random number between 0 and 1. Because s_0 is fixed (not sampled from an exponential distribution), conventional condensed history schemes are not transport processes. Here the authors describe a new condensed history algorithm that is a transport process. The method simulates a transport equation that approximates the exact Boltzmann equation. The new transport equation has a larger mean free path than, and preserves two angular moments of, the Boltzmann equation. Thus, the new process is solved more efficiently by Monte Carlo, and it conserves both particles and scattering power.
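
    As a rough illustration of the splitting idea described above (not the authors' new transport-based method), the sketch below advances an electron one condensed-history step of fixed length s_0 in two substeps of lengths ξs_0 and (1 − ξ)s_0, applying a single aggregate deflection at the hinge point. The deflection sampler sample_deflection is a hypothetical placeholder for a multiple-scattering angular distribution.

      import numpy as np

      def condensed_history_step(pos, direction, s0, sample_deflection, rng):
          """One split condensed-history step: drift xi*s0, deflect once, drift (1-xi)*s0.

          pos, direction: 3-vectors (direction has unit length).
          sample_deflection(rng) -> (cos_theta, phi): hypothetical multiple-scattering sampler.
          """
          xi = rng.random()
          pos = pos + xi * s0 * direction           # first substep along the incoming direction

          cos_t, phi = sample_deflection(rng)       # aggregate deflection at the hinge
          sin_t = np.sqrt(max(0.0, 1.0 - cos_t**2))
          # build an orthonormal frame around the current direction and rotate
          a = np.array([1.0, 0.0, 0.0]) if abs(direction[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
          u = np.cross(direction, a)
          u /= np.linalg.norm(u)
          v = np.cross(direction, u)
          direction = cos_t * direction + sin_t * (np.cos(phi) * u + np.sin(phi) * v)

          pos = pos + (1.0 - xi) * s0 * direction   # second substep along the deflected direction
          return pos, direction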

  13. Comparing genetic algorithm and particle swarm optimization for solving capacitated vehicle routing problem

    Science.gov (United States)

    Iswari, T.; Asih, A. M. S.

    2018-04-01

    In a logistics system, transportation plays an important role in connecting every element in the supply chain, but it can produce the greatest cost. Therefore, it is important to keep transportation costs as low as possible. Reducing the transportation cost can be done in several ways. One of the ways to minimize the transportation cost is by optimizing the routing of its vehicles. This refers to the Vehicle Routing Problem (VRP). The most common type of VRP is the Capacitated Vehicle Routing Problem (CVRP). In CVRP, the vehicles have their own capacity and the total demand from the customers should not exceed the capacity of the vehicle. CVRP belongs to the class of NP-hard problems. These NP-hard problems are complex to solve, such that exact algorithms become highly time-consuming as the problem size increases. Thus, for large-scale problem instances, as typically found in industrial applications, finding an optimal solution is not practicable. Therefore, this paper uses two kinds of metaheuristic approaches to solve CVRP: Genetic Algorithm and Particle Swarm Optimization. This paper compares the results of both algorithms and examines the performance of each. The results show that both algorithms perform well in solving CVRP but still need improvement. From algorithm testing and a numerical example, the Genetic Algorithm yields a better solution than Particle Swarm Optimization in total distance travelled.

  14. A multi-parametric particle-pairing algorithm for particle tracking in single and multiphase flows

    International Nuclear Information System (INIS)

    Cardwell, Nicholas D; Vlachos, Pavlos P; Thole, Karen A

    2011-01-01

    Multiphase flows (MPFs) offer a rich area of fundamental study with many practical applications. Examples of such flows range from the ingestion of foreign particulates in gas turbines to transport of particles within the human body. Experimental investigation of MPFs, however, is challenging, and requires techniques that simultaneously resolve both the carrier and discrete phases present in the flowfield. This paper presents a new multi-parametric particle-pairing algorithm for particle tracking velocimetry (MP3-PTV) in MPFs. MP3-PTV improves upon previous particle tracking algorithms by employing a novel variable pair-matching algorithm which utilizes displacement preconditioning in combination with estimated particle size and intensity to more effectively and accurately match particle pairs between successive images. To improve the method's efficiency, a new particle identification and segmentation routine was also developed. Validation of the new method was initially performed on two artificial data sets: a traditional single-phase flow published by the Visualization Society of Japan (VSJ) and an in-house generated MPF data set having a bi-modal distribution of particles diameters. Metrics of the measurement yield, reliability and overall tracking efficiency were used for method comparison. On the VSJ data set, the newly presented segmentation routine delivered a twofold improvement in identifying particles when compared to other published methods. For the simulated MPF data set, measurement efficiency of the carrier phases improved from 9% to 41% for MP3-PTV as compared to a traditional hybrid PTV. When employed on experimental data of a gas–solid flow, the MP3-PTV effectively identified the two particle populations and reported a vector efficiency and velocity measurement error comparable to measurements for the single-phase flow images. Simultaneous measurement of the dispersed particle and the carrier flowfield velocities allowed for the calculation of

  15. Adaptive multilevel splitting for Monte Carlo particle transport

    Directory of Open Access Journals (Sweden)

    Louvin Henri

    2017-01-01

    In the Monte Carlo simulation of particle transport, and especially for shielding applications, variance reduction techniques are widely used to help simulate realisations of rare events and reduce the relative errors on the estimated scores for a given computation time. Adaptive Multilevel Splitting (AMS) is one of these variance reduction techniques that has recently appeared in the literature. In the present paper, we propose an alternative version of the AMS algorithm, adapted for the first time to the field of particle transport. Within this context, it can be used to build an unbiased estimator of any quantity associated with particle tracks, such as flux, reaction rates or even non-Boltzmann tallies like pulse-height tallies and other spectra. Furthermore, the efficiency of the AMS algorithm is shown not to be very sensitive to variations of its input parameters, which makes it capable of significant variance reduction without requiring extended user effort.
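
    The paper itself defines the algorithm for particle-transport tracks; what follows is only a generic Adaptive Multilevel Splitting loop for estimating a rare-event probability P(score(X) >= threshold), sketched under the usual fixed-k AMS assumptions: iteratively discard the k lowest-scoring samples, resample them from the survivors above the splitting level, and multiply the estimate by the survival fraction. The callables sample_path, score, and resample_from are hypothetical placeholders.

      import numpy as np

      def adaptive_multilevel_splitting(sample_path, score, resample_from, threshold,
                                        n=1000, k=10, rng=None):
          """Generic AMS estimator of p = P(score(path) >= threshold).

          sample_path(rng) -> path; score(path) -> float;
          resample_from(parent, level, rng) -> new path whose score exceeds `level`
          (in particle transport this would mean re-simulating from the stored
          track state at the splitting level).  All three callables are placeholders.
          """
          rng = rng or np.random.default_rng()
          paths = [sample_path(rng) for _ in range(n)]
          scores = np.array([score(p) for p in paths])
          p_est = 1.0

          while True:
              order = np.argsort(scores)
              level = scores[order[k - 1]]          # k-th smallest score = new splitting level
              if level >= threshold:
                  break
              p_est *= (n - k) / n                  # survival fraction at this iteration
              survivors = order[k:]
              for idx in order[:k]:                 # resplit each discarded sample
                  parent = paths[int(rng.choice(survivors))]
                  paths[idx] = resample_from(parent, level, rng)
                  scores[idx] = score(paths[idx])

          return p_est * np.mean(scores >= threshold)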

  16. Los Alamos neutral particle transport codes: New and enhanced capabilities

    International Nuclear Information System (INIS)

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Clark, B.A.; Koch, K.R.; Marr, D.R.

    1992-01-01

    We present new developments in Los Alamos discrete-ordinates transport codes and introduce THREEDANT, the latest in the series of Los Alamos discrete ordinates transport codes. THREEDANT solves the multigroup, neutral-particle transport equation in X-Y-Z and R-Θ-Z geometries. THREEDANT uses computationally efficient algorithms: Diffusion Synthetic Acceleration (DSA) is used to accelerate the convergence of transport iterations, and the DSA solution is itself accelerated using the multigrid technique. THREEDANT runs on a wide range of computers, from scientific workstations to CRAY supercomputers. The algorithms are highly vectorized on CRAY computers. Recently, the THREEDANT transport algorithm was implemented on the massively parallel CM-2 computer, with performance that is comparable to a single-processor CRAY Y-MP. We present the results of THREEDANT analysis of test problems.

  17. A solution algorithm for fluid-particle flows across all flow regimes

    Science.gov (United States)

    Kong, Bo; Fox, Rodney O.

    2017-09-01

    Many fluid-particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle-particle collisions are rare. Thus, in order to simulate such fluid-particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas-particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid-particle flows.
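
    A toy illustration of the dynamic, local solver selection described above: each cell is flagged by its particle volume fraction and handled by a hydrodynamic closure in dense regions or a kinetic/moment closure in dilute regions. The threshold value and the callables kinetic_step and hydro_step below are arbitrary placeholders; the actual switching criterion and closures in the paper are more elaborate.

      import numpy as np

      # Hypothetical per-cell solver selection for a fluid-particle flow field.
      DILUTE_LIMIT = 0.01   # assumed threshold on particle volume fraction, not from the paper

      def select_flux_solver(volume_fraction):
          """Return an array of solver tags: 'kinetic' in dilute cells, 'hydro' elsewhere."""
          vf = np.asarray(volume_fraction)
          return np.where(vf < DILUTE_LIMIT, "kinetic", "hydro")

      def advance_cells(volume_fraction, moments, dt, kinetic_step, hydro_step):
          """Advance particle-phase moments cell by cell with the locally selected solver.

          kinetic_step / hydro_step are placeholder callables taking (moments_cell, dt).
          """
          tags = select_flux_solver(volume_fraction)
          out = np.empty_like(moments)
          for i, tag in enumerate(tags):
              step = kinetic_step if tag == "kinetic" else hydro_step
              out[i] = step(moments[i], dt)
          return out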

  18. A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations

    Science.gov (United States)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reducing the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled of Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.

  19. Particle swarm genetic algorithm and its application

    International Nuclear Information System (INIS)

    Liu Chengxiang; Yan Changxiang; Wang Jianjun; Liu Zhenhai

    2012-01-01

    To solve the problems of slow convergence speed and the tendency of the standard particle swarm optimization to fall into local optima when dealing with nonlinear constrained optimization problems, a particle swarm genetic algorithm is designed. The proposed algorithm adopts the feasibility principle to handle constraint conditions, avoiding the difficulty of selecting a penalty factor in the penalty function method; it generates an initial feasible population randomly, which accelerates particle swarm convergence, and it introduces the genetic algorithm crossover and mutation strategies to keep the particle swarm from falling into a local optimum. Through optimization calculations on typical test functions, the results show that the particle swarm genetic algorithm has better optimization performance. The algorithm is applied to nuclear power plant optimization, and the optimization results are significant. (authors)

  20. A particle method with adjustable transport properties - the generalized consistent Boltzmann algorithm

    International Nuclear Information System (INIS)

    Garcia, A.L.; Alexander, F.J.; Alder, B.J.

    1997-01-01

    The consistent Boltzmann algorithm (CBA) for dense, hard-sphere gases is generalized to obtain the van der Waals equation of state and the corresponding exact viscosity at all densities except at the highest temperatures. A general scheme for adjusting any transport coefficients to higher values is presented

  1. An expert system for automatic mesh generation for Sn particle transport simulation in parallel environment

    International Nuclear Information System (INIS)

    Apisit, Patchimpattapong; Alireza, Haghighat; Shedlock, D.

    2003-01-01

    An expert system for generating an effective mesh distribution for the SN particle transport simulation has been developed. This expert system consists of two main parts: 1) an algorithm for generating an effective mesh distribution in a serial environment, and 2) an algorithm for inference of an effective domain decomposition strategy for parallel computing. For the first part, the algorithm prepares an effective mesh distribution considering problem physics and the spatial differencing scheme. For the second part, the algorithm determines a parallel-performance-index (PPI), which is defined as the ratio of the granularity to the degree-of-coupling. The parallel-performance-index provides expected performance of an algorithm depending on computing environment and resources. A large index indicates a high granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems. (authors)

  2. An expert system for automatic mesh generation for Sn particle transport simulation in parallel environment

    Energy Technology Data Exchange (ETDEWEB)

    Apisit, Patchimpattapong [Electricity Generating Authority of Thailand, Office of Corporate Planning, Bangkruai, Nonthaburi (Thailand); Alireza, Haghighat; Shedlock, D. [Florida Univ., Department of Nuclear and Radiological Engineering, Gainesville, FL (United States)

    2003-07-01

    An expert system for generating an effective mesh distribution for the SN particle transport simulation has been developed. This expert system consists of two main parts: 1) an algorithm for generating an effective mesh distribution in a serial environment, and 2) an algorithm for inference of an effective domain decomposition strategy for parallel computing. For the first part, the algorithm prepares an effective mesh distribution considering problem physics and the spatial differencing scheme. For the second part, the algorithm determines a parallel-performance-index (PPI), which is defined as the ratio of the granularity to the degree-of-coupling. The parallel-performance-index provides expected performance of an algorithm depending on computing environment and resources. A large index indicates a high granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems. (authors)

  3. Transport coefficients of multi-particle collision algorithms with velocity-dependent collision rules

    International Nuclear Information System (INIS)

    Ihle, Thomas

    2008-01-01

    Detailed calculations of the transport coefficients of a recently introduced particle-based model for fluid dynamics with a non-ideal equation of state are presented. Excluded volume interactions are modeled by means of biased stochastic multi-particle collisions which depend on the local velocities and densities. Momentum and energy are exactly conserved locally. A general scheme to derive transport coefficients for such biased, velocity-dependent collision rules is developed. Analytic expressions for the self-diffusion coefficient and the shear viscosity are obtained, and very good agreement is found with numerical results at small and large mean free paths. The viscosity turns out to be proportional to the square root of temperature, as in a real gas. In addition, the theoretical framework is applied to a two-component version of the model, and expressions for the viscosity and the difference in diffusion of the two species are given

  4. A multi-frame particle tracking algorithm robust against input noise

    International Nuclear Information System (INIS)

    Li, Dongning; Zhang, Yuanhui; Sun, Yigang; Yan, Wei

    2008-01-01

    The performance of a particle tracking algorithm which detects particle trajectories from discretely recorded particle positions could be substantially hindered by the input noise. In this paper, a particle tracking algorithm is developed which is robust against input noise. This algorithm employs the regression method instead of the extrapolation method usually employed by existing algorithms to predict future particle positions. If a trajectory cannot be linked to a particle at a frame, the algorithm can still proceed by trying to find a candidate at the next frame. The connectivity of tracked trajectories is inspected to remove the false ones. The algorithm is validated with synthetic data. The result shows that the algorithm is superior to traditional algorithms in the aspect of tracking long trajectories
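
    A minimal sketch of the regression-based prediction idea: instead of extrapolating from the last two positions, fit a low-order polynomial to the last few tracked positions and evaluate it at the next frame, which is less sensitive to position noise. The window length and polynomial order below are arbitrary choices, not the paper's.

      import numpy as np

      def predict_next_position(track, order=2, window=5):
          """Predict a particle's next-frame position from its recent trajectory.

          track: array of shape (n_frames, 2) of past (x, y) positions.
          A least-squares polynomial of the given order is fitted to each coordinate
          over the last `window` frames (regression), rather than extrapolating
          from the last two points only.
          """
          track = np.asarray(track, dtype=float)
          recent = track[-window:]
          t = np.arange(len(recent))
          t_next = len(recent)                      # time index of the frame to predict
          prediction = []
          for coord in recent.T:                    # fit x(t) and y(t) separately
              deg = min(order, len(recent) - 1)
              coeffs = np.polyfit(t, coord, deg)
              prediction.append(np.polyval(coeffs, t_next))
          return np.array(prediction)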

  5. Data decomposition of Monte Carlo particle transport simulations via tally servers

    International Nuclear Information System (INIS)

    Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord

    2013-01-01

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations

  6. Particle Tracking Model and Abstraction of Transport Processes

    International Nuclear Information System (INIS)

    Robinson, B.

    2000-01-01

    The purpose of the transport methodology and component analysis is to provide the numerical methods for simulating radionuclide transport and the model setup for transport in the unsaturated zone (UZ) site-scale model. The particle-tracking method of simulating radionuclide transport is incorporated into the FEHM computer code, and the resulting changes in the FEHM code are to be submitted to the software configuration management system. This Analysis and Model Report (AMR) outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the unsaturated zone at Yucca Mountain. In addition, methods for determining colloid-facilitated transport parameters are outlined for use in the Total System Performance Assessment (TSPA) analyses. Concurrently, process-level flow model calculations are being carried out in a PMR for the unsaturated zone. The computer code TOUGH2 is being used to generate three-dimensional, dual-permeability flow fields that are supplied to the Performance Assessment group for subsequent transport simulations. These flow fields are converted to input files compatible with the FEHM code, which for this application simulates radionuclide transport using the particle-tracking algorithm outlined in this AMR. Therefore, this AMR establishes the numerical method and demonstrates the use of the model, but the specific breakthrough curves presented do not necessarily represent the behavior of the Yucca Mountain unsaturated zone.

  7. Applying Dispersive Changes to Lagrangian Particles in Groundwater Transport Models

    Science.gov (United States)

    Konikow, Leonard F.

    2010-01-01

    Method-of-characteristics groundwater transport models require that changes in concentrations computed within an Eulerian framework to account for dispersion be transferred to moving particles used to simulate advective transport. A new algorithm was developed to accomplish this transfer between nodal values and advecting particles more precisely and realistically compared to currently used methods. The new method scales the changes and adjustments of particle concentrations relative to limiting bounds of concentration values determined from the population of adjacent nodal values. The method precludes unrealistic undershoot or overshoot for concentrations of individual particles. In the new method, if dispersion causes cell concentrations to decrease during a time step, those particles in the cell having the highest concentration will decrease the most, and those with the lowest concentration will decrease the least. The converse is true if dispersion is causing concentrations to increase. Furthermore, if the initial concentration on a particle is outside the range of the adjacent nodal values, it will automatically be adjusted in the direction of the acceptable range of values. The new method is inherently mass conservative. © US Government 2010.
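
    The description above translates into a fairly direct particle-update rule. The sketch below is an interpretation, not the exact published algorithm: each particle's dispersive concentration change is scaled by its distance from the limiting bound taken from adjacent nodal values, so particles nearest the bound change least, the mean change matches the nodal change, and no particle overshoots the bound.

      import numpy as np

      def apply_dispersive_change(particle_conc, delta_c, c_min, c_max):
          """Distribute a nodal dispersive concentration change over a cell's particles.

          particle_conc: concentrations of the particles currently in the cell.
          delta_c: change in the cell's nodal concentration due to dispersion.
          c_min, c_max: limiting bounds taken from the adjacent nodal values.

          For a decrease, high-concentration particles lose the most and particles at
          c_min are unchanged; the converse holds for an increase.  This is one way to
          realize the scaling described in the abstract, not the published scheme itself.
          """
          c = np.asarray(particle_conc, dtype=float)
          if delta_c < 0.0:
              span = np.mean(c - c_min)
              weights = (c - c_min) / span if span > 0.0 else np.zeros_like(c)
          else:
              span = np.mean(c_max - c)
              weights = (c_max - c) / span if span > 0.0 else np.zeros_like(c)
          updated = c + delta_c * weights           # mean change over particles equals delta_c
          return np.clip(updated, c_min, c_max)     # guard against overshoot in extreme cases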

  8. Limits on the efficiency of event-based algorithms for Monte Carlo neutron transport

    Directory of Open Access Journals (Sweden)

    Paul K. Romano

    2017-09-01

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size to achieve vector efficiency greater than 90%. When the execution times for events are allowed to vary, the vector speedup is also limited by differences in the execution time for events being carried out in a single event-iteration.

  9. From analytical solutions of solute transport equations to multidimensional time-domain random walk (TDRW) algorithms

    Science.gov (United States)

    Bodin, Jacques

    2015-03-01

    In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
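
    For the simplest 1-D case, the advective-dispersive travel time over a distance L in a homogeneous medium can be sampled directly, since the first-passage-time density of 1-D advection-dispersion is an inverse-Gaussian distribution with mean L/v and shape L^2/(2D). The sketch below illustrates that single TDRW step only; it is a minimal special case, not the paper's full multi-dimensional algorithm with mass-recovery corrections, and the function name is hypothetical.

      import numpy as np

      def tdrw_step_1d(L, v, D, n_particles, rng=None):
          """Sample 1-D advective-dispersive travel times over a path segment of length L.

          v: mean pore velocity, D: longitudinal dispersion coefficient.
          Travel times follow an inverse-Gaussian (Wald) distribution with
          mean L/v and shape L**2 / (2*D) for the 1-D advection-dispersion equation.
          """
          rng = rng or np.random.default_rng()
          mean = L / v
          shape = L**2 / (2.0 * D)
          return rng.wald(mean, shape, size=n_particles)

      # Example: travel-time statistics over a 10 m segment with v = 1 m/d, D = 0.1 m^2/d
      times = tdrw_step_1d(10.0, 1.0, 0.1, n_particles=100_000)
      print(times.mean())   # close to 10 days; the spread reflects dispersion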

  10. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Paul K.; Siegel, Andrew R.

    2017-04-16

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
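
    Under the constant-event-time assumption, the efficiency loss comes only from partially filled vector lanes when the banked particles are split among event types. A toy version of that accounting is sketched below; the random splitting among a fixed number of event types is an arbitrary stand-in for the real per-event populations, so the numbers are illustrative only and the names are hypothetical.

      import numpy as np

      def vector_efficiency(bank_size, vector_width, n_event_types=3, n_iters=1000, seed=0):
          """Toy estimate of vector efficiency for an event-based transport loop.

          Each iteration, the banked particles are split at random among the event
          types; each event type is then processed in vector_width-wide chunks, so
          partially filled chunks waste lanes.
          Efficiency = useful particle-events / vector lanes issued.
          """
          rng = np.random.default_rng(seed)
          useful = issued = 0
          for _ in range(n_iters):
              counts = rng.multinomial(bank_size, np.full(n_event_types, 1.0 / n_event_types))
              useful += bank_size
              issued += vector_width * int(np.ceil(counts / vector_width).sum())
          return useful / issued

      # A bank ~20x the vector width keeps lane waste small in this toy model:
      print(vector_efficiency(bank_size=20 * 256, vector_width=256))   # typically > 0.9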

  11. Modeling Dynamic Objects in Monte Carlo Particle Transport Calculations

    International Nuclear Information System (INIS)

    Yegin, G.

    2008-01-01

    In this study, the Multi-Geometry geometry modeling technique was improved in order to handle moving objects in a Monte Carlo particle transport calculation. In the Multi-Geometry technique, the geometry is a superposition of objects, not surfaces. By using this feature, we developed a new algorithm which allows a user to enable or disable geometry elements during particle transport. A disabled object can be ignored at a certain stage of a calculation, and switching among identical copies of the same object located at adjacent points during a particle simulation corresponds to the movement of that object in space. We call this powerful feature the Dynamic Multi-Geometry (DMG) technique, which is used for the first time in the Brachy Dose Monte Carlo code to simulate HDR brachytherapy treatment systems. Our results showed that having disabled objects in a geometry does not affect calculated dose values. This technique is also suitable for use in other areas such as IMRT treatment planning systems.

  12. A Novel Particle Swarm Optimization Algorithm for Global Optimization.

    Science.gov (United States)

    Wang, Chun-Feng; Liu, Kui

    2016-01-01

    Particle Swarm Optimization (PSO) is a recently developed optimization method which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of our algorithm, a chaotic search is adopted around the best solution of the current iteration. To verify the performance of our algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms.
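
    The chaotic search component mentioned above is commonly implemented with a logistic map; a minimal sketch of such a local refinement step around the current best solution is given below. The step scale, iteration count, and the use of the logistic map specifically are assumptions, since the paper's exact scheme is not reproduced here.

      import numpy as np

      def chaotic_search(best, objective, lower, upper, n_steps=50, radius=0.1, seed=0):
          """Refine the current best solution with a logistic-map chaotic local search.

          best: current best position (1-D array); objective: function to minimize;
          lower/upper: box bounds.  Chaotic variables z in (0, 1) generated by the
          logistic map z <- 4 z (1 - z) are mapped to small perturbations around `best`.
          """
          rng = np.random.default_rng(seed)
          best = np.asarray(best, dtype=float).copy()
          best_val = objective(best)
          z = rng.uniform(0.01, 0.99, size=best.shape)   # avoid the map's fixed points
          span = radius * (np.asarray(upper) - np.asarray(lower))
          for _ in range(n_steps):
              z = 4.0 * z * (1.0 - z)                    # logistic map iteration
              candidate = np.clip(best + span * (2.0 * z - 1.0), lower, upper)
              val = objective(candidate)
              if val < best_val:
                  best, best_val = candidate, val
          return best, best_val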

  13. Quantum Behaved Particle Swarm Optimization Algorithm Based on Artificial Fish Swarm

    OpenAIRE

    Yumin, Dong; Li, Zhao

    2014-01-01

    The quantum-behaved particle swarm algorithm is a new intelligent optimization algorithm; it has few parameters and is easily implemented. To address the premature convergence problem of the existing quantum-behaved particle swarm optimization algorithm, a quantum particle swarm optimization algorithm based on artificial fish swarm is put forward. The new algorithm is based on the quantum-behaved particle swarm algorithm, introducing the swarming and following behaviours, meanwhile using the a...

  14. Sediment transport modeling in deposited bed sewers: unified form of May's equations using the particle swarm optimization algorithm.

    Science.gov (United States)

    Safari, Mir Jafar Sadegh; Shirzad, Akbar; Mohammadi, Mirali

    2017-08-01

    May proposed two dimensionless parameters of transport (η) and mobility (F_s) for the self-cleansing design of sewers with deposited bed condition. The relationships between those two parameters were introduced in conditional form for specific ranges of F_s, which makes them difficult to use as a practical tool for sewer design. In this study, using the same experimental data used by May and employing the particle swarm optimization algorithm, a unified equation is recommended based on η and F_s. The developed model is compared with the original May relationships as well as corresponding models available in the literature. A large amount of data taken from the literature is used for the models' evaluation. The results demonstrate that the developed model in this study is superior to May and other existing models in the literature. Due to the fact that May's dimensionless parameters consider the more effective variables in the sediment transport process in sewers with deposited bed condition, it is concluded that the revised May equation proposed in this study is a reliable model for sewer design.

  15. A dynamic global and local combined particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Jiao Bin; Lian Zhigang; Chen Qunxian

    2009-01-01

    Particle swarm optimization (PSO) algorithm has been developing rapidly and many results have been reported. PSO algorithm has shown some important advantages by providing high speed of convergence in specific problems, but it has a tendency to get stuck in a near optimal solution and one may find it difficult to improve solution accuracy by fine tuning. This paper presents a dynamic global and local combined particle swarm optimization (DGLCPSO) algorithm to improve the performance of original PSO, in which all particles dynamically share the best information of the local particle, global particle and group particles. It is tested with a set of eight benchmark functions with different dimensions and compared with original PSO. Experimental results indicate that the DGLCPSO algorithm improves the search performance on the benchmark functions significantly, and shows the effectiveness of the algorithm to solve optimization problems.

  16. Weighted Flow Algorithms (WFA) for stochastic particle coagulation

    International Nuclear Information System (INIS)

    DeVille, R.E.L.; Riemer, N.; West, M.

    2011-01-01

    Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently-developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number and mass-based quantities of relevance for atmospheric sciences applications.

  17. Weighted Flow Algorithms (WFA) for stochastic particle coagulation

    Science.gov (United States)

    DeVille, R. E. L.; Riemer, N.; West, M.

    2011-09-01

    Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently-developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number and mass-based quantities of relevance for atmospheric sciences applications.

  18. Particle and heavy ion transport code system, PHITS, version 2.52

    International Nuclear Information System (INIS)

    Sato, Tatsuhiko; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Noda, Shusaku; Ogawa, Tatsuhiko; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Niita, Koji; Iwase, Hiroshi; Chiba, Satoshi; Furuta, Takuya; Sihver, Lembit

    2013-01-01

    An upgraded version of the Particle and Heavy Ion Transport code System, PHITS2.52, was developed and released to the public. The new version has been greatly improved from the previously released version, PHITS2.24, in terms of not only the code itself but also the contents of its package, such as the attached data libraries. In the new version, a higher accuracy of simulation was achieved by implementing several latest nuclear reaction models. The reliability of the simulation was improved by modifying both the algorithms for the electron-, positron-, and photon-transport simulations and the procedure for calculating the statistical uncertainties of the tally results. Estimation of the time evolution of radioactivity became feasible by incorporating the activation calculation program DCHAIN-SP into the new package. The efficiency of the simulation was also improved as a result of the implementation of shared-memory parallelization and the optimization of several time-consuming algorithms. Furthermore, a number of new user-support tools and functions that help users to intuitively and effectively perform PHITS simulations were developed and incorporated. Due to these improvements, PHITS is now a more powerful tool for particle transport simulation applicable to various research and development fields, such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. (author)

  19. Vectorizing and macrotasking Monte Carlo neutral particle algorithms

    International Nuclear Information System (INIS)

    Heifetz, D.B.

    1987-04-01

    Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches to as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. In future machines, the limit of macrotasking may be possible, with each test flight, and split test flight, being a separate task.
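
    One of the simple gap-handling fixes alluded to above is periodic compaction of the flight arrays, so that vectorized loops only touch live flights. A small numpy illustration of that idea (not the code from the report, and with hypothetical names) is shown below.

      import numpy as np

      def compact_flights(positions, directions, weights, alive):
          """Remove completed flights so the remaining loops run over dense arrays.

          positions, directions: (n, 3) arrays; weights: (n,); alive: (n,) bool mask.
          Completed (dead) flights leave gaps that spoil vectorized loops, so the
          live entries are gathered into contiguous arrays before the next pass.
          """
          keep = np.flatnonzero(alive)
          return positions[keep], directions[keep], weights[keep], keep.size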

  20. JIT-transportation problem and its algorithm

    Science.gov (United States)

    Bai, Guozhong; Gan, Xiao-Xiong

    2011-12-01

    This article introduces the (just-in-time) JIT-transportation problem, which requires that all demanded goods be shipped to their destinations on schedule, at a zero or minimal destination-storage cost. The JIT-transportation problem is a special goal programming problem with discrete constraints. This article provides a mathematical model for such a transportation problem and introduces the JIT solution, the deviation solution, the JIT deviation, etc. By introducing the B(λ)-problem, this article establishes the equivalence between the optimal solutions of the B(λ)-problem and the optimal solutions of the JIT-transportation problem, and then provides an algorithm for the JIT-transportation problems. This algorithm is proven mathematically and is also illustrated by an example.

  1. A Synchronous-Asynchronous Particle Swarm Optimisation Algorithm

    Science.gov (United States)

    Ab Aziz, Nor Azlina; Mubin, Marizan; Mohamad, Mohd Saberi; Ab Aziz, Kamarulzaman

    2014-01-01

    In the original particle swarm optimisation (PSO) algorithm, the particles' velocities and positions are updated after the whole swarm performance is evaluated. This algorithm is also known as synchronous PSO (S-PSO). The strength of this update method is in the exploitation of the information. Asynchronous update PSO (A-PSO) has been proposed as an alternative to S-PSO. A particle in A-PSO updates its velocity and position as soon as its own performance has been evaluated. Hence, particles are updated using partial information, leading to stronger exploration. In this paper, we attempt to improve PSO by merging both update methods to utilise the strengths of both methods. The proposed synchronous-asynchronous PSO (SA-PSO) algorithm divides the particles into smaller groups. The best member of a group and the swarm's best are chosen to lead the search. Members within a group are updated synchronously, while the groups themselves are asynchronously updated. Five well-known unimodal functions, four multimodal functions, and a real world optimisation problem are used to study the performance of SA-PSO, which is compared with the performances of S-PSO and A-PSO. The results are statistically analysed and show that the proposed SA-PSO has performed consistently well. PMID:25121109

  2. Design of a fuzzy differential evolution algorithm to predict non-deposition sediment transport

    Science.gov (United States)

    Ebtehaj, Isa; Bonakdari, Hossein

    2017-12-01

    Since the flow entering a sewer contains solid matter, deposition at the bottom of the channel is inevitable. It is difficult to understand the complex, three-dimensional mechanism of sediment transport in sewer pipelines. Therefore, a method to estimate the limiting velocity is necessary for optimal designs. Due to the inability of gradient-based algorithms to train Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for non-deposition sediment transport prediction, a new hybrid ANFIS method based on a differential evolution algorithm (ANFIS-DE) is developed. The training and testing performance of ANFIS-DE is evaluated using a wide range of dimensionless parameters gathered from the literature. The input combination used to estimate the densimetric Froude number (Fr) includes the volumetric sediment concentration (C_V), the ratio of median particle diameter to hydraulic radius (d/R), the ratio of median particle diameter to pipe diameter (d/D) and the overall friction factor of sediment (λ_s). The testing results are compared with the ANFIS model and regression-based equation results. The ANFIS-DE technique predicted sediment transport at the limit of deposition with lower root mean square error (RMSE = 0.323) and mean absolute percentage error (MAPE = 0.065) and higher accuracy (R^2 = 0.965) than the ANFIS model and regression-based equations.

  3. Particle-transport simulation with the Monte Carlo method

    International Nuclear Information System (INIS)

    Carter, L.L.; Cashwell, E.D.

    1975-01-01

    Attention is focused on the application of the Monte Carlo method to particle transport problems, with emphasis on neutron and photon transport. Topics covered include sampling methods, mathematical prescriptions for simulating particle transport, mechanics of simulating particle transport, neutron transport, and photon transport. A literature survey of 204 references is included. (GMT)

  4. A general concurrent algorithm for plasma particle-in-cell simulation codes

    International Nuclear Information System (INIS)

    Liewer, P.C.; Decyk, V.K.

    1989-01-01

    We have developed a new algorithm for implementing plasma particle-in-cell (PIC) simulation codes on concurrent processors with distributed memory. This algorithm, named the general concurrent PIC algorithm (GCPIC), has been used to implement an electrostatic PIC code on the 33-node JPL Mark III Hypercube parallel computer. To decompose a PIC code using the GCPIC algorithm, the physical domain of the particle simulation is divided into sub-domains, equal in number to the number of processors, such that all sub-domains have roughly equal numbers of particles. For problems with non-uniform particle densities, these sub-domains will be of unequal physical size. Each processor is assigned a sub-domain and is responsible for updating the particles in its sub-domain. This algorithm has led to a very efficient parallel implementation of a well-benchmarked 1-dimensional PIC code. The dominant portion of the code, updating the particle positions and velocities, is nearly 100% efficient when the number of particles is increased linearly with the number of hypercube processors used so that the number of particles per processor is constant. For example, the increase in time spent updating particles in going from a problem with 11,264 particles run on 1 processor to 360,448 particles on 32 processors was only 3% (parallel efficiency of 97%). Although implemented on a hypercube concurrent computer, this algorithm should also be efficient for PIC codes on other parallel architectures and for large PIC codes on sequential computers where part of the data must reside on external disks. copyright 1989 Academic Press, Inc
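
    The load-balancing rule described above (sub-domains of unequal physical size but roughly equal particle counts) amounts to cutting the domain at particle-position quantiles. A one-dimensional sketch, with hypothetical names, follows.

      import numpy as np

      def gcpic_subdomain_edges(x_particles, n_processors, x_min, x_max):
          """Choose 1-D sub-domain boundaries so each processor gets ~equal particle counts.

          The interior edges are the particle-position quantiles, so dense regions get
          physically smaller sub-domains, as described for the GCPIC decomposition.
          """
          interior = np.quantile(x_particles, np.arange(1, n_processors) / n_processors)
          return np.concatenate(([x_min], interior, [x_max]))

      def assign_to_processor(x_particles, edges):
          """Return, for each particle, the index of the sub-domain/processor owning it."""
          return np.clip(np.searchsorted(edges, x_particles, side="right") - 1,
                         0, len(edges) - 2)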

  5. Particle algorithms for population dynamics in flows

    International Nuclear Information System (INIS)

    Perlekar, Prasad; Toschi, Federico; Benzi, Roberto; Pigolotti, Simone

    2011-01-01

    We present and discuss particle-based algorithms to numerically study the dynamics of populations subjected to an advecting flow. We discuss a few possible variants of the algorithms and compare them in a model compressible flow. A comparison against appropriate versions of the continuum stochastic Fisher equation (sFKPP) is also presented and discussed. The algorithms can be used to study population genetics in fluid environments.

  6. Optimal configuration of power grid sources based on optimal particle swarm algorithm

    Science.gov (United States)

    Wen, Yuanhua

    2018-04-01

    In order to optimize the configuration of power grid sources, an improved particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are introduced. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated respectively. Comparison of the test results of each algorithm demonstrates the superiority of the improved algorithm in convergence and optimization performance, which lays the foundation for the subsequent micro-grid power optimization configuration solution.

  7. Explicit symplectic algorithms based on generating functions for charged particle dynamics

    Science.gov (United States)

    Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan

    2016-07-01

    Dynamics of a charged particle in canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method with a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians of the form H(x, p) = p_i f(x) or H(x, p) = x_i g(p). Applied to simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superior conservation properties and efficiency.
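
    To make the product-separable form concrete, the equations of motion generated by the first of the two Hamiltonians quoted in the abstract can be written out directly (a worked statement of Hamilton's equations, not material from the paper):

```latex
% Hamilton's equations for the product-separable Hamiltonian H(x,p) = p_i f(x)
\begin{align}
  \dot{x}_j &= \frac{\partial H}{\partial p_j} = \delta_{ij}\, f(x), &
  \dot{p}_j &= -\frac{\partial H}{\partial x_j} = -\,p_i \,\frac{\partial f}{\partial x_j},
\end{align}
% and analogously for H(x,p) = x_i g(p); the generating-function construction
% supplies an explicit symplectic map for each such sub-Hamiltonian, which is
% then composed with the sum-split pieces.
```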

  8. Economic dispatch optimization algorithm based on particle diffusion

    International Nuclear Information System (INIS)

    Han, Li; Romero, Carlos E.; Yao, Zheng

    2015-01-01

    Highlights: • A dispatch model that considers fuel, emissions control and wind power cost is built. • An optimization algorithm named diffusion particle optimization (DPO) is proposed. • DPO was used to analyze the impact of wind power risk and emissions on dispatch. - Abstract: Due to the widespread installation of emissions control equipment in fossil fuel-fired power plants, the cost of emissions control needs to be considered, together with the plant fuel cost, in providing economic power dispatch of those units to the grid. On the other hand, while using wind power decreases the overall power generation cost for the power grid, it poses a risk to a traditional grid because of its inherent stochastic characteristics. Therefore, an economic dispatch optimization model needs to consider the fuel cost, emissions control cost and wind power cost for each of the generating units comprising the fleet that meets the required grid power demand. In this study, an optimization algorithm referred to as diffusion particle optimization (DPO) is proposed to solve this complex optimization problem. In this algorithm, Brownian motion theory is used to guide the movement of particles so that the particles can search for an optimal solution over the entire definition region. Several benchmark functions and power grid system data were used to test the performance of DPO, which was compared to traditional algorithms used for economic dispatch optimization, such as particle swarm optimization and the artificial bee colony algorithm. It was found that DPO is less likely to be trapped in local optima. For the different power systems tested, DPO was able to find economic dispatch solutions with lower costs. DPO was also used to analyze the impact of wind power risk and fossil unit emissions coefficients on power dispatch. The results are encouraging for the use of DPO as a dynamic tool for economic dispatch of the power grid.

  9. Mechanism of travelling-wave transport of particles

    International Nuclear Information System (INIS)

    Kawamoto, Hiroyuki; Seki, Kyogo; Kuromiya, Naoyuki

    2006-01-01

    Numerical and experimental investigations have been carried out on the transport of particles in an electrostatic travelling field. A three-dimensional hard-sphere model of the distinct element method was developed to simulate the dynamics of particles. The forces applied to particles in the model were the Coulomb force, the dielectrophoretic force on polarized dipole particles in a non-uniform field, the image force, gravity and air drag. Friction and repulsion between particle-particle and particle-conveyer contacts were included in the model to reset initial conditions after mechanical contacts. Two kinds of experiments were performed to confirm the model. One was the measurement of the particle charge, which is indispensable for determining the Coulomb force. The charge distribution was measured from the locus of free-falling particles in a parallel electrostatic field, and the averaged charge of the bulk particles was confirmed by measurement with a Faraday cage. The other experiment was a measurement of the dynamics of particles on a conveyer consisting of parallel electrodes to which a four-phase travelling electrostatic wave was applied. Calculated results agreed with measurements, and the following characteristics were clarified: (1) the Coulomb force is the predominant driving force on the particles compared with the other forces; (2) the direction of particle transport did not always coincide with that of the travelling wave but depended on the frequency of the travelling wave, the particle diameter and the electric field; (3) although some particles overtook the travelling wave at very low frequency, the motion of particles was almost synchronized with the wave at low frequency; and (4) the transport of some particles was delayed relative to the wave at medium frequency, the majority of particles were transported backwards at high frequency, and particles were not transported but only vibrated at very high frequency.

  10. Experimental characterization of solid particle transport by slug flow using Particle Image Velocimetry

    International Nuclear Information System (INIS)

    Goharzadeh, A; Rodgers, P

    2009-01-01

    This paper presents an experimental study of the effect of gas-liquid slug flow on solid particle transport inside a horizontal pipe, with two types of experiments conducted. The influence of slug length on solid particle transport is characterized using high speed photography. Using Particle Image Velocimetry (PIV) combined with Refractive Index Matching (RIM) and fluorescent tracers in a two-phase oil-air loop, the velocity distribution inside the slug body is measured. Combining these experimental analyses, an insight is provided into the physical mechanism of solid particle transport due to slug flow. It was observed that the slug body significantly influences solid particle mobility. The physical mechanism of solid particle transport was found to be discontinuous. The inactive region (in terms of solid particle transport) upstream of the slug nose was quantified as a function of gas-liquid composition and solid particle size. Measured velocity distributions showed a significant drop in velocity magnitude immediately upstream of the slug nose, and therefore the critical velocity for solid particle lifting is reached further upstream.

  11. Monte Carlo 2000 Conference : Advanced Monte Carlo for Radiation Physics, Particle Transport Simulation and Applications

    CERN Document Server

    Baräo, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro

    2001-01-01

    This book focuses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving in particular the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and to the analysis of experiments and measurements in a variety of fields ranging from particle physics to medical physics.

  12. Chaotically encoded particle swarm optimization algorithm and its applications

    International Nuclear Information System (INIS)

    Alatas, Bilal; Akin, Erhan

    2009-01-01

    This paper proposes a novel particle swarm optimization (PSO) algorithm, the chaotically encoded particle swarm optimization algorithm (CENPSOA), based on the notion of chaos numbers, which have recently been proposed as a novel way of representing numbers. Various chaos arithmetic operations and evaluation measures that can be used in CENPSOA are described. Furthermore, CENPSOA has been designed to be effectively utilized in data mining applications.

  13. Effects of Random Values for Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Hou-Ping Dai

    2018-02-01

    Particle swarm optimization (PSO) algorithms are generally improved by adaptively adjusting the inertia weight or by combining PSO with other evolutionary algorithms. However, in most modified PSO algorithms, the random values are always generated by a uniform distribution in the range [0, 1]. In this study, random values generated by uniform distributions over the ranges [0, 1] and [-1, 1], and by a Gaussian distribution with mean 0 and variance 1 (U[0,1], U[-1,1] and G(0,1)), are respectively used in the standard PSO and linear decreasing inertia weight (LDIW) PSO algorithms. For comparison, a deterministic PSO algorithm, in which the random values are set to 0.5, is also investigated in this study. Some benchmark functions and the pressure vessel design problem are selected to test these algorithms with different types of random values in three space dimensions (10, 30, and 100). The experimental results show that the standard PSO and LDIW-PSO algorithms with random values generated by U[-1,1] or G(0,1) are more likely to avoid falling into local optima and to quickly obtain the global optima. This is because large-scale random values expand the range of the particle velocity, making a particle more likely to escape from local optima and obtain the global optima. Although the random values generated by U[-1,1] or G(0,1) are beneficial for improving the global search ability, the local search ability for a low-dimensional practical optimization problem may be decreased due to the finite number of particles.
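
    For reference, the sketch below shows where these random values enter the standard PSO velocity update; the distribution is selectable so the variants compared in the study (and the deterministic r = 0.5 case) can be swapped in. It is an illustrative reconstruction, not the authors' code, and the parameter defaults are arbitrary.

```python
import numpy as np

def velocity_update(v, x, pbest, gbest, w=0.7, c1=2.0, c2=2.0,
                    random_values="U01", rng=None):
    """One PSO velocity update with a selectable random-value distribution."""
    rng = rng or np.random.default_rng()
    d = x.shape[0]
    if random_values == "U01":         # uniform on [0, 1] (the usual choice)
        r1, r2 = rng.random(d), rng.random(d)
    elif random_values == "U11":       # uniform on [-1, 1]
        r1, r2 = rng.uniform(-1.0, 1.0, d), rng.uniform(-1.0, 1.0, d)
    elif random_values == "G01":       # Gaussian with mean 0, variance 1
        r1, r2 = rng.standard_normal(d), rng.standard_normal(d)
    else:                              # deterministic variant: r1 = r2 = 0.5
        r1 = r2 = np.full(d, 0.5)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```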

  14. Computational plasticity algorithm for particle dynamics simulations

    Science.gov (United States)

    Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.

    2018-01-01

    The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.

  15. Parallelization of a spherical Sn transport theory algorithm

    International Nuclear Information System (INIS)

    Haghighat, A.

    1989-01-01

    The work described in this paper derives a parallel algorithm for an R-dependent spherical Sn transport theory algorithm and studies its performance by testing different sample problems. The Sn transport method is one of the most accurate techniques used to solve the linear Boltzmann equation. Several studies have been done on the vectorization of Sn algorithms; however, very few studies have been performed on the parallelization of this algorithm. Weinke and Hommoto have looked at the parallel processing of the different energy groups, and Azmy recently studied the parallel processing of the inner iterations of an X-Y Sn nodal transport theory method. Both studies have reported very encouraging results, which have prompted us to look at the parallel processing of an R-dependent Sn spherical geometry algorithm. This geometry was chosen because, in spite of its simplicity, it contains the complications of the curvilinear geometries (i.e., redistribution of neutrons over the discretized angular bins).

  16. A dynamic inertia weight particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Jiao Bin; Lian Zhigang; Gu Xingsheng

    2008-01-01

    The particle swarm optimization (PSO) algorithm has developed rapidly and has been applied widely since it was introduced, as it is easily understood and realized. This paper presents an improved particle swarm optimization algorithm (IPSO) to improve the performance of standard PSO, which uses a dynamic inertia weight that decreases as the iteration count increases. It is tested on a set of 6 benchmark functions with 30, 50 and 150 dimensions and compared with standard PSO. Experimental results indicate that IPSO significantly improves the search performance on the benchmark functions.
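
    A minimal sketch of such a dynamic inertia weight is shown below, assuming the common linear decrease from a start value to an end value over the run; the paper's exact schedule may differ, and the default values are conventional rather than taken from the paper. The returned weight simply replaces the fixed w in the standard velocity update.

```python
def dynamic_inertia_weight(iteration, max_iterations, w_start=0.9, w_end=0.4):
    """Inertia weight that decreases as the iteration count grows
    (linear schedule shown for illustration)."""
    return w_start - (w_start - w_end) * iteration / max_iterations

# example: weight used at generation 50 of a 200-generation run
w = dynamic_inertia_weight(50, 200)   # -> 0.775
```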

  17. A Fano cavity test for Monte Carlo proton transport algorithms

    International Nuclear Information System (INIS)

    Sterpin, Edmond; Sorriaux, Jefferson; Souris, Kevin; Vynckier, Stefaan; Bouchard, Hugo

    2014-01-01

    Purpose: In the scope of reference dosimetry of radiotherapy beams, Monte Carlo (MC) simulations are widely used to compute ionization chamber dose response accurately. Uncertainties related to the transport algorithm can be verified by performing self-consistency tests, i.e., the so-called “Fano cavity test.” The Fano cavity test is based on the Fano theorem, which states that under charged particle equilibrium conditions, the charged particle fluence is independent of the mass density of the media as long as the cross-sections are uniform. Such tests have not been performed yet for MC codes simulating proton transport. The objectives of this study are to design a new Fano cavity test for proton MC and to implement the methodology in two MC codes: Geant4 and PENELOPE extended to protons (PENH). Methods: The new Fano test is designed to evaluate the accuracy of proton transport. Virtual particles with an energy of E_0 and a mass macroscopic cross section of Σ/ρ are transported, having the ability to generate protons with kinetic energy E_0 and to be restored after each interaction, thus providing proton equilibrium. To perform the test, the authors use a simplified simulation model and rigorously demonstrate that the computed cavity dose per incident fluence must equal ΣE_0/ρ, as expected in classic Fano tests. The implementation of the test is performed in Geant4 and PENH. The geometry used for testing is a 10 × 10 cm² parallel virtual field and a cavity (2 × 2 × 0.2 cm³ in size) in a water phantom with dimensions large enough to ensure proton equilibrium. Results: For conservative user-defined simulation parameters (leading to small step sizes), both Geant4 and PENH pass the Fano cavity test within 0.1%. However, differences of 0.6% and 0.7% were observed for PENH and Geant4, respectively, using larger step sizes. For PENH, the difference is attributed to the random-hinge method, which introduces an artificial energy straggling if the step size is not sufficiently small.
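
    The pass criterion quoted in the abstract can be written compactly as the expected cavity dose per unit incident fluence (a restatement of the relation given above, not an additional result from the paper):

```latex
% Expected cavity dose per unit incident fluence in the proton Fano cavity test
\begin{equation}
  \frac{D_{\mathrm{cav}}}{\Phi} \;=\; \frac{\Sigma\, E_{0}}{\rho},
\end{equation}
% where \Sigma/\rho is the mass macroscopic cross section of the virtual
% particles, E_0 the kinetic energy of the generated protons, and \Phi the
% incident fluence.
```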

  18. Multi-Algorithm Particle Simulations with Spatiocyte.

    Science.gov (United States)

    Arjunan, Satya N V; Takahashi, Koichi

    2017-01-01

    As quantitative biologists get more measurements of spatially regulated systems such as cell division and polarization, simulation of reaction and diffusion of proteins using the data is becoming increasingly relevant to uncover the mechanisms underlying the systems. Spatiocyte is a lattice-based stochastic particle simulator for biochemical reaction and diffusion processes. Simulations can be performed at single molecule and compartment spatial scales simultaneously. Molecules can diffuse and react in 1D (filament), 2D (membrane), and 3D (cytosol) compartments. The implications of crowded regions in the cell can be investigated because each diffusing molecule has spatial dimensions. Spatiocyte adopts multi-algorithm and multi-timescale frameworks to simulate models that simultaneously employ deterministic, stochastic, and particle reaction-diffusion algorithms. Comparison of light microscopy images to simulation snapshots is supported by Spatiocyte microscopy visualization and molecule tagging features. Spatiocyte is open-source software and is freely available at http://spatiocyte.org .

  19. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    Science.gov (United States)

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, the state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables are estimated using a Kalman filter and a particle filter, respectively, which improves the computational efficiency compared with using only a particle filter. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of the clock offset and skew, which allows time synchronization to be achieved. The time synchronization performance of this algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm has higher time synchronization precision than traditional time synchronization algorithms.

  20. Kinetic-Monte-Carlo-Based Parallel Evolution Simulation Algorithm of Dust Particles

    Directory of Open Access Journals (Sweden)

    Xiaomei Hu

    2014-01-01

    The evolution simulation of dust particles provides an important way to analyze the impact of dust on the environment. A KMC-based parallel algorithm is proposed to simulate the evolution of dust particles. In the parallel evolution simulation algorithm, a data distribution scheme and a communication optimization strategy are proposed to balance the load of every process and reduce the communication overhead among processes. The experimental results show that the simulation of diffusion, sedimentation, and resuspension of dust particles in a virtual campus is realized and that the simulation time is shortened by the parallel algorithm, which overcomes the limitations of serial computing and makes the simulation of large-scale virtual environments possible.

  1. Parallel-vector algorithms for particle simulations on shared-memory multiprocessors

    International Nuclear Information System (INIS)

    Nishiura, Daisuke; Sakaguchi, Hide

    2011-01-01

    Over the last few decades, the computational demands of massive particle-based simulations for both scientific and industrial purposes have been continuously increasing. Hence, considerable efforts are being made to develop parallel computing techniques on various platforms. In such simulations, particles move freely within a given space, so on a distributed-memory system load balancing, i.e., assigning an equal number of particles to each processor, is not guaranteed. Shared-memory systems achieve better load balancing for particle models, but suffer from the intrinsic drawback of memory access competition, particularly during (1) pairing of contact candidates from among neighboring particles and (2) force summation for each particle. Here, novel algorithms are proposed to overcome these two problems. For the first problem, the key is a pre-conditioning process during which particle labels are sorted by the label of the cell in the domain to which each particle belongs. Then, a list of contact candidates is constructed by pairing the sorted particle labels. For the latter problem, a table comprising the list indexes of the contact candidate pairs is created and used to sum the contact forces acting on each particle for all contacts according to Newton's third law. With just these methods, memory access competition is avoided without additional redundant procedures. The parallel efficiency and compatibility of these two algorithms were evaluated in discrete element method (DEM) simulations on four types of shared-memory parallel computers: a multicore multiprocessor computer, a scalar supercomputer, a vector supercomputer, and a graphics processing unit. The computational efficiency of a DEM code was found to be drastically improved with our algorithms on all but the scalar supercomputer. Thus, the developed parallel algorithms are useful on shared-memory parallel computers with sufficient memory bandwidth.
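
    The pre-conditioning step described above (sorting particle labels by cell label and then pairing within each cell) can be sketched as follows. This is an illustrative serial reconstruction, not the authors' parallel implementation, and pairs from neighbouring cells are omitted for brevity.

```python
import numpy as np

def contact_candidates(positions, cell_size):
    """Pair particles that share a cell, after sorting labels by cell label."""
    cells = np.floor(positions / cell_size).astype(int)
    n_x = int(cells[:, 0].max()) + 1
    cell_label = cells[:, 1] * n_x + cells[:, 0]       # flatten the 2D cell index
    order = np.argsort(cell_label, kind="stable")      # pre-conditioning sort
    sorted_labels = cell_label[order]
    pairs, start = [], 0
    for end in range(1, len(order) + 1):               # walk over runs of equal labels
        if end == len(order) or sorted_labels[end] != sorted_labels[start]:
            group = order[start:end]
            pairs.extend((int(group[i]), int(group[j]))
                         for i in range(len(group))
                         for j in range(i + 1, len(group)))
            start = end
    return pairs
```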

  2. Application of ant colony Algorithm and particle swarm optimization in architectural design

    Science.gov (United States)

    Song, Ziyi; Wu, Yunfa; Song, Jianhua

    2018-02-01

    By studying the development of the ant colony algorithm and the particle swarm algorithm, this paper expounds the core ideas of the two algorithms, explores their combination with architectural design, and summarizes application rules for intelligent algorithms in architectural design. Combining the characteristics of the two algorithms, it obtains a research route and a way of realizing intelligent algorithms in architectural design, and establishes algorithm rules to assist the design process. Taking intelligent algorithms as a starting point for architectural design research, the authors provide a theoretical foundation for the ant colony algorithm and the particle swarm algorithm in architectural design, broaden the range of applications of intelligent algorithms in architectural design, and provide a new idea for architects.

  3. An improved particle filtering algorithm for aircraft engine gas-path fault diagnosis

    Directory of Open Access Journals (Sweden)

    Qihang Wang

    2016-07-01

    In this article, an improved particle filter with an electromagnetism-like mechanism algorithm is proposed for aircraft engine gas-path component abrupt fault diagnosis. In order to avoid the particle degeneracy and sample impoverishment of the normal particle filter, the electromagnetism-like mechanism optimization algorithm is introduced into the resampling procedure, which adjusts the positions of the particles by simulating the attraction-repulsion mechanism between charged particles from electromagnetism theory. The improved particle filter solves the particle degradation problem and ensures the diversity of the particle set. Meanwhile, it enhances the ability to track abrupt faults because it takes the latest measurement information into account. A comparison of the proposed method with three different filter algorithms is carried out on a univariate nonstationary growth model. Simulations on a turbofan engine model indicate that, compared to the normal particle filter, the improved particle filter can complete the fault diagnosis within fewer sampling periods and the root mean square error of the parameter estimates is reduced.
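
    For orientation, the sketch below shows the predict-update-resample cycle of a standard bootstrap particle filter, with the degeneracy problem handled by plain systematic resampling; the electromagnetism-like repositioning that the paper adds to this step is not reproduced here. The model functions are placeholders supplied by the caller.

```python
import numpy as np

def particle_filter_step(particles, weights, measurement,
                         transition, likelihood, rng=None):
    """One bootstrap particle filter cycle with systematic resampling."""
    rng = rng or np.random.default_rng()
    particles = transition(particles, rng)                  # predict
    weights = weights * likelihood(measurement, particles)  # update
    weights = weights / weights.sum()
    n = len(weights)
    if 1.0 / np.sum(weights ** 2) < n / 2:                  # effective sample size low
        u = (rng.random() + np.arange(n)) / n               # systematic resampling
        particles = particles[np.searchsorted(np.cumsum(weights), u)]
        weights = np.full(n, 1.0 / n)
    return particles, weights
```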

  4. Stress, Flow and Particle Transport in Rock Fractures

    Energy Technology Data Exchange (ETDEWEB)

    Koyama, Tomofumi

    2007-09-15

    The fluid flow and tracer transport in a single rock fracture during shear processes have been an important issue in rock mechanics and are investigated in this thesis using the Finite Element Method (FEM) and a streamline particle tracking method, considering the evolution of aperture and transmissivity with shear displacement histories under different normal stresses, based on laboratory tests. The distributions of fracture aperture and its evolution during shear were calculated from the initial aperture fields, based on the laser-scanned surface roughness features of replicas of rock fracture specimens, and from shear dilations measured during the coupled shear-flow-tracer tests performed in the laboratory using a newly developed testing apparatus at Nagasaki University, Nagasaki, Japan. Three granite fractures with different roughness characteristics were used as parent samples, from which nine plaster replicas were made, and coupled shear-flow tests were performed under three normal loading conditions (two levels of constant normal loading (CNL) and one constant normal stiffness (CNS) condition). In order to visualize the tracer transport, transparent acrylic upper parts and plaster lower parts of the fracture specimens were manufactured from an artificially created tensile fracture of sandstone, and the coupled shear-flow tests with fluid visualization were performed using a dye tracer injected from upstream and a CCD camera to record the dye movement. A special algorithm for treating the contact areas as zero-aperture elements was used to produce more accurate flow field simulations with the FEM, which is important for the subsequent simulations of particle transport but was often not properly treated in the literature. The simulation results agreed well with the flow rate data obtained from the laboratory tests, showing complex histories of fracture aperture and tortuous flow channels with changing normal stresses and increasing shear displacements, which were also captured

  5. A Constructive Data Classification Version of the Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Alexandre Szabo

    2013-01-01

    The particle swarm optimization algorithm was originally introduced to solve continuous parameter optimization problems. It was soon modified to solve other types of optimization tasks and also to be applied to data analysis. In the latter case, however, there are few works in the literature that deal with the problem of dynamically building the architecture of the system. This paper introduces new particle swarm algorithms specifically designed to solve classification problems. The first proposal, named the Particle Swarm Classifier (PSClass), is a derivation of a particle swarm clustering algorithm and its architecture, as in most classifiers, is pre-defined. The second proposal, named the Constructive Particle Swarm Classifier (cPSClass), uses ideas from the immune system to automatically build the swarm. A sensitivity analysis of the growing procedure of cPSClass and an investigation into a proposed pruning procedure for this algorithm are performed. The proposals were applied to a wide range of databases from the literature and the results show that they are competitive in relation to other approaches, with the advantage of having a dynamically constructed architecture.

  6. Comparison of several algorithms of the electric force calculation in particle plasma models

    International Nuclear Information System (INIS)

    Lachnitt, J; Hrach, R

    2014-01-01

    This work is devoted to plasma modelling using the technique of molecular dynamics. The crucial problem of most such models is the efficient calculation of the electric force. This is usually solved by using the particle-in-cell (PIC) algorithm. However, PIC is an approximate algorithm, as it underestimates the short-range interactions of charged particles. We propose a hybrid algorithm which adds these interactions to PIC. We then include this algorithm in a set of algorithms which we test against each other in a two-dimensional collisionless magnetized plasma model. Besides our hybrid algorithm, this set includes two variants of pure PIC and the direct application of Coulomb's law. We compare particle forces, particle trajectories, total energy conservation and the speed of the algorithms. We find that the hybrid algorithm can be a good replacement for the direct application of Coulomb's law (quite accurate and much faster). It is, however, probably unnecessary to use it in practical 2D models.

  7. High energy electromagnetic particle transportation on the GPU

    Energy Technology Data Exchange (ETDEWEB)

    Canal, P. [Fermilab; Elvira, D. [Fermilab; Jun, S. Y. [Fermilab; Kowalkowski, J. [Fermilab; Paterno, M. [Fermilab; Apostolakis, J. [CERN

    2014-01-01

    We present massively parallel high energy electromagnetic particle transport through a finely segmented detector on a Graphics Processing Unit (GPU). Simulating events of energetic particle decay in a general-purpose high energy physics (HEP) detector requires intensive computing resources, due to the complexity of the geometry as well as the physics processes applied to the particles copiously produced by primary collisions and secondary interactions. The recent advent of many-core and accelerated processor architectures provides a variety of concurrent programming models applicable not only to high performance parallel computing, but also to conventional compute-intensive applications such as HEP detector simulation. The components of our prototype are a transportation process under a non-uniform magnetic field, geometry navigation with a set of solid shapes and materials, electromagnetic physics processes for electrons and photons, and an interface to a framework that dispatches bundles of tracks in a highly vectorized manner, optimizing for spatial locality and throughput. Core algorithms and methods are excerpted from the Geant4 toolkit, and are modified and optimized for the GPU application. Program kernels written in C/C++ are designed to be compatible with CUDA and OpenCL, with the aim of being generic enough for easy porting to future programming models and hardware architectures. To improve throughput by overlapping data transfers with kernel execution, multiple CUDA streams are used. Issues with floating point accuracy, random number generation, data structures, kernel divergence and register spills are also considered. A performance evaluation of the relative speedup compared to the corresponding sequential execution on a CPU is presented as well.

  8. Particle and heat transport in Tokamaks

    International Nuclear Information System (INIS)

    Chatelier, M.

    1984-01-01

    A limitation to the performance of tokamaks is heat transport through magnetic surfaces. The principles of 'classical' or 'neoclassical' transport, i.e. transport due to particle and heat fluxes arising from Coulomb scattering of charged particles in a magnetic field, are presented. It is shown that beside this classical effect, 'anomalous' transport occurs; it is associated with the existence of fluctuating electric or magnetic fields which can appear in the plasma as a result of charge and current perturbations. Tearing modes and drift wave instabilities are taken as typical examples. Experimental results are presented which show that ions behave approximately in a classical way whereas electrons are strongly anomalous. [fr]

  9. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms.

    Science.gov (United States)

    Garro, Beatriz A; Vázquez, Roberto A

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems.
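
    As a small illustration of how a particle can encode a network in such a scheme, the sketch below flattens the weights of a one-hidden-layer network into a single vector and evaluates the mean square error fitness on a data set. It is a simplified reconstruction with a fixed tanh hidden layer; the actual methodology also evolves the architecture and the transfer functions, which is not shown here, and all names are illustrative.

```python
import numpy as np

def decode(particle, n_in, n_hidden, n_out=1):
    """Interpret a flat particle position as the weights of a 1-hidden-layer ANN."""
    k = n_hidden * (n_in + 1)
    w1 = particle[:k].reshape(n_hidden, n_in + 1)          # hidden weights + bias
    w2 = particle[k:].reshape(n_out, n_hidden + 1)         # output weights + bias
    return w1, w2

def mse_fitness(particle, X, y, n_hidden):
    """Mean square error of the decoded network, one ingredient of the fitness."""
    w1, w2 = decode(particle, X.shape[1], n_hidden)
    h = np.tanh(np.c_[X, np.ones(len(X))] @ w1.T)          # hidden activations
    out = np.c_[h, np.ones(len(X))] @ w2.T                 # linear output layer
    return np.mean((out.ravel() - y) ** 2)
```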

  10. A Global algorithm for linear radiosity

    OpenAIRE

    Sbert Cassasayas, Mateu; Pueyo Sánchez, Xavier

    1993-01-01

    A linear algorithm for radiosity is presented, linear both in time and storage. The new algorithm is based on previous work by the authors and on the well known algorithms for progressive radiosity and Monte Carlo particle transport.

  11. An Orthogonal Multi-Swarm Cooperative PSO Algorithm with a Particle Trajectory Knowledge Base

    Directory of Open Access Journals (Sweden)

    Jun Yang

    2017-01-01

    A novel orthogonal multi-swarm cooperative particle swarm optimization (PSO) algorithm with a particle trajectory knowledge base is presented in this paper. Different from the traditional PSO algorithms and other variants of PSO, the proposed orthogonal multi-swarm cooperative PSO algorithm not only introduces an orthogonal initialization mechanism and a particle trajectory knowledge base for multi-dimensional optimization problems, but also conceives a new adaptive cooperation mechanism to accomplish the information interaction among swarms and particles. Experiments are conducted on a set of benchmark functions, and the results show better performance than the traditional PSO algorithm in terms of convergence, computational efficiency and avoidance of premature convergence.

  12. Nonlinear dynamics optimization with particle swarm and genetic algorithms for SPEAR3 emittance upgrade

    International Nuclear Information System (INIS)

    Huang, Xiaobiao; Safranek, James

    2014-01-01

    Nonlinear dynamics optimization is carried out for a low emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms is compared. The result shows that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm and that it does not require seeding of good solutions in the initial population. These advantages of the particle swarm algorithm may make it more suitable for many accelerator optimization applications.

  13. Nonlinear dynamics optimization with particle swarm and genetic algorithms for SPEAR3 emittance upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Xiaobiao, E-mail: xiahuang@slac.stanford.edu; Safranek, James

    2014-09-01

    Nonlinear dynamics optimization is carried out for a low emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms is compared. The result shows that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm and that it does not require seeding of good solutions in the initial population. These advantages of the particle swarm algorithm may make it more suitable for many accelerator optimization applications.

  14. Ship Block Transportation Scheduling Problem Based on Greedy Algorithm

    Directory of Open Access Journals (Sweden)

    Chong Wang

    2016-05-01

    Ship block transportation problems are crucial issues to address in reducing the construction cost and improving the productivity of shipyards. Shipyards aim to maximize the workload balance of transporters under the time constraint that all blocks should be transported during the planning horizon. This process leads to three types of penalty time: empty transporter travel time, delay time, and tardy time. This study aims to minimize the sum of the penalty time. First, this study formulates the ship block transportation problem with a generalization of the block transportation restriction to multiple transporter types. Second, the problem is transformed into the classical traveling salesman problem and assignment problem through a reasonable model simplification and by adding a virtual node to the proposed directed graph. Then, a heuristic based on a greedy algorithm is proposed to assign blocks to the available transporters and to sequence the blocks for each transporter simultaneously. Finally, numerical experiments are used to validate the model, and the results show that the proposed algorithm is effective in realizing efficient use of the transporters in shipyards. Numerical simulation results demonstrate the promising application of the proposed method to improve the utilization of transporters and to reduce the cost of ship block logistics for shipyards.
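
    A very small sketch of the greedy assignment idea is given below: blocks are taken in due-date order and each is assigned to the feasible transporter whose current position gives the least empty-travel distance. This is a simplification for illustration only, not the paper's heuristic; the block and transporter fields (id, weight, capacity, pos, from, to, due) are hypothetical.

```python
def greedy_schedule(blocks, transporters, distance):
    """Greedy block-to-transporter assignment minimizing empty travel."""
    schedule = {t["id"]: [] for t in transporters}
    for block in sorted(blocks, key=lambda b: b["due"]):          # earliest due date first
        feasible = [t for t in transporters if t["capacity"] >= block["weight"]]
        best = min(feasible, key=lambda t: distance(t["pos"], block["from"]))
        schedule[best["id"]].append(block["id"])
        best["pos"] = block["to"]                                 # transporter ends at the drop-off
    return schedule
```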

  15. Sustainable logistics and transportation optimization models and algorithms

    CERN Document Server

    Gakis, Konstantinos; Pardalos, Panos

    2017-01-01

    Focused on the logistics and transportation operations within a supply chain, this book brings together the latest models, algorithms, and optimization possibilities. Logistics and transportation problems are examined within a sustainability perspective to offer a comprehensive assessment of environmental, social, ethical, and economic performance measures. Featured models, techniques, and algorithms may be used to construct policies on alternative transportation modes and technologies, green logistics, and incentives by the incorporation of environmental, economic, and social measures. Researchers, professionals, and graduate students in urban regional planning, logistics, transport systems, optimization, supply chain management, business administration, information science, mathematics, and industrial and systems engineering will find the real life and interdisciplinary issues presented in this book informative and useful.

  16. Neural Network Algorithm for Particle Loading

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given

  17. Algorithm for the Stochastic Generalized Transportation Problem

    Directory of Open Access Journals (Sweden)

    Marcin Anholcer

    2012-01-01

    The equalization method for the stochastic generalized transportation problem is presented. The algorithm allows us to find the optimal solution to the problem of minimizing the expected total cost in the generalized transportation problem with random demand. After a short introduction and literature review, the algorithm is presented. It is a version of the method proposed by the author for the nonlinear generalized transportation problem. It is shown that this version of the method generates a sequence of solutions convergent to the KKT point. This guarantees the global optimality of the obtained solution, as the expected cost functions are convex and twice differentiable. The computational experiments performed for test problems of reasonable size show that the method is fast. (original abstract)

  18. Max–min Bin Packing Algorithm and its application in nano-particles filling

    International Nuclear Information System (INIS)

    Zhu, Dingju

    2016-01-01

    With existing bin packing algorithms, higher packing efficiency often comes at the cost of lower packing speed and vice versa. The packing speed and packing efficiency of existing bin packing algorithms, including NFD, NF, FF, FFD, BF and BFD, correlate negatively with each other, so existing bin packing algorithms fail to satisfy the demand of nano-particle filling for both high speed and high efficiency. This paper provides a new bin packing algorithm, the Max–min Bin Packing Algorithm (MM), which achieves both high packing speed and high packing efficiency. MM has the same packing speed as NFD (whose packing speed ranks No. 1 among existing bin packing algorithms); when the size repetition rate of the objects to be packed is over 5, MM achieves almost the same packing efficiency as BFD (whose packing efficiency ranks No. 1 among existing bin packing algorithms), and when the size repetition rate is over 500, MM achieves exactly the same packing efficiency as BFD. In nano-particle filling applications, the size repetition rate of the nano particles to be packed is usually in the thousands or ten thousands, far higher than 5 or 500. Consequently, in nano-particle filling applications, the packing efficiency of MM is exactly equal to that of BFD. Thus the irreconcilable conflict between packing speed and packing efficiency is successfully removed by MM, which gives MM better packing performance than any existing bin packing algorithm. In practice, there are few cases in which the size repetition rate of the objects to be packed is lower than 5. Therefore MM is not limited to nano-particle filling and can also be widely used in other applications. In particular, MM has significant value in nano-particle filling applications such as nano printing and nano tooth filling.

  19. High performance stream computing for particle beam transport simulations

    International Nuclear Information System (INIS)

    Appleby, R; Bailey, D; Higham, J; Salt, M

    2008-01-01

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed
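
    The core idea of transporting many particles simultaneously can be illustrated with a vectorized linear-optics sketch: one transfer matrix per beamline element is applied to the whole coordinate array at once. This is a generic illustration under simplified (x, x') linear optics, not the DIAMOND transfer-line model or the GPU code described above.

```python
import numpy as np

def drift(length):
    return np.array([[1.0, length], [0.0, 1.0]])          # (x, x') drift matrix

def thin_quad(focal_length):
    return np.array([[1.0, 0.0], [-1.0 / focal_length, 1.0]])

def track(coords, elements):
    """Apply each element's transfer matrix to all particles at once."""
    for m in elements:
        coords = coords @ m.T                              # one matrix, many particles
    return coords

rng = np.random.default_rng(1)
bunch = rng.normal(scale=[1e-3, 1e-4], size=(100_000, 2))  # x [m], x' [rad]
line = [drift(2.0), thin_quad(5.0), drift(2.0)]
print(track(bunch, line).std(axis=0))                      # beam sizes after the line
```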

  20. A Hybrid Chaos-Particle Swarm Optimization Algorithm for the Vehicle Routing Problem with Time Window

    Directory of Open Access Journals (Sweden)

    Qi Hu

    2013-04-01

    State-of-the-art heuristic algorithms for solving the vehicle routing problem with time windows (VRPTW) usually exhibit slow convergence during the early iterations and easily fall into local optima. Focusing on these problems, this paper analyzes the particle encoding and decoding strategy of the particle swarm optimization algorithm, the construction of the vehicle route and the detection of local optima. Based on these, a hybrid chaos-particle swarm optimization algorithm (HPSO) is proposed to solve the VRPTW. The chaos algorithm is employed to re-initialize the particle swarm. An efficient insertion heuristic algorithm is also proposed to build valid vehicle routes in the particle decoding process. A particle swarm premature convergence judgment mechanism is formulated and combined with the chaos algorithm and Gaussian mutation in HPSO when the particle swarm falls into local convergence. Extensive experiments are carried out to test the parameter settings of the insertion heuristic algorithm and to verify that they correspond to the real distribution of the data in the concrete problem. It is also shown that HPSO achieves better performance than the other state-of-the-art algorithms in solving the VRPTW.

  1. Electrokinetic Particle Transport in Micro-Nanofluidics Direct Numerical Simulation Analysis

    CERN Document Server

    Qian, Shizhi

    2012-01-01

    Numerous applications of micro-/nanofluidics are related to particle transport in micro-/nanoscale channels, and electrokinetics has proved to be one of the most promising tools to manipulate particles in micro/nanofluidics. Therefore, a comprehensive understanding of electrokinetic particle transport in micro-/nanoscale channels is crucial to the development of micro/nano-fluidic devices. Electrokinetic Particle Transport in Micro-/Nanofluidics: Direct Numerical Simulation Analysis provides a fundamental understanding of electrokinetic particle transport in micro-/nanofluidics involving elect

  2. A Swarm Optimization Genetic Algorithm Based on Quantum-Behaved Particle Swarm Optimization.

    Science.gov (United States)

    Sun, Tao; Xu, Ming-Hai

    2017-01-01

    The quantum-behaved particle swarm optimization (QPSO) algorithm is a variant of the traditional particle swarm optimization (PSO). QPSO, which was originally developed for continuous search spaces, outperforms the traditional PSO in search ability. This paper analyzes the main factors that affect the search ability of QPSO and converts the particle movement formula into a mutation condition by introducing a rejection region, thus proposing a new binary algorithm named the swarm optimization genetic algorithm (SOGA), because in form it is more like a genetic algorithm (GA) than PSO. SOGA has crossover and mutation operators like GA but does not need the crossover and mutation probabilities to be set, so it has fewer parameters to control. The proposed algorithm was tested with several nonlinear high-dimensional functions in the binary search space, and the results were compared with those from BPSO, BQPSO, and GA. The experimental results show that SOGA is distinctly superior to the other three algorithms in terms of solution accuracy and convergence.

  3. A Hybrid Multiobjective Discrete Particle Swarm Optimization Algorithm for a SLA-Aware Service Composition Problem

    Directory of Open Access Journals (Sweden)

    Hao Yin

    2014-01-01

    For the SLA-aware service composition (SSC) problem, an optimization model is built and a hybrid multiobjective discrete particle swarm optimization algorithm (HMDPSO) is proposed in this paper. According to the characteristics of this problem, a particle updating strategy is designed by introducing a crossover operator. In order to restrain premature convergence of the particle swarm and increase its global search capacity, a swarm diversity indicator is introduced and a particle mutation strategy is proposed to increase the swarm diversity. To accelerate the process of obtaining feasible particle positions, a local search strategy based on constraint domination is proposed and incorporated into the proposed algorithm. Finally, some parameters of HMDPSO are analyzed and set to appropriate values, and then HMDPSO and HMDPSO+, which incorporates the local search strategy, are compared with recently proposed related algorithms on cases of different scales. The results show that HMDPSO+ can solve the SSC problem more effectively.

  4. Entropic Ratchet transport of interacting active Brownian particles

    Energy Technology Data Exchange (ETDEWEB)

    Ai, Bao-Quan, E-mail: aibq@hotmail.com [Laboratory of Quantum Engineering and Quantum Materials, School of Physics and Telecommunication Engineering, South China Normal University, 510006 Guangzhou (China); He, Ya-Feng [College of Physics Science and Technology, Hebei University, 071002 Baoding (China); Zhong, Wei-Rong, E-mail: wrzhong@jnu.edu.cn [Department of Physics and Siyuan Laboratory, College of Science and Engineering, Jinan University, 510632 Guangzhou (China)

    2014-11-21

    Directed transport of interacting active (self-propelled) Brownian particles is numerically investigated in confined geometries (entropic barriers). The self-propelled velocity can break thermodynamic equilibrium and induce directed transport. It is found that the interaction between active particles can greatly affect the ratchet transport. For attractive particles, on increasing the interaction strength the average velocity first decreases to a minimum, then increases, and finally decreases to zero. For repulsive particles, when the interaction is very weak there exists a critical interaction strength at which the average velocity is minimal, nearly tending to zero; for strong interaction, however, the average velocity is independent of the interaction strength.

  5. Entropic Ratchet transport of interacting active Brownian particles

    International Nuclear Information System (INIS)

    Ai, Bao-Quan; He, Ya-Feng; Zhong, Wei-Rong

    2014-01-01

    Directed transport of interacting active (self-propelled) Brownian particles is numerically investigated in confined geometries (entropic barriers). The self-propelled velocity can break thermodynamic equilibrium and induce directed transport. It is found that the interaction between active particles can greatly affect the ratchet transport. For attractive particles, on increasing the interaction strength the average velocity first decreases to a minimum, then increases, and finally decreases to zero. For repulsive particles, when the interaction is very weak there exists a critical interaction strength at which the average velocity is minimal, nearly tending to zero; for strong interaction, however, the average velocity is independent of the interaction strength.

  6. Influence of particle sorting in transport of sediment-associated contaminants

    International Nuclear Information System (INIS)

    Lane, L.J.; Hakonson, T.E.

    1982-01-01

    Hydrologic and sediment transport models are developed to route the flow of water and sediment (by particle size class) in alluvial stream channels. A simplified infiltration model is used to compute runoff from upland areas, and flow is routed in ephemeral stream channels to account for infiltration or transmission losses in the channel alluvium. Hydraulic calculations, based on the normal flow assumption and an approximating hydrograph, are used to compute sediment transport by particle size class. Contaminants associated with sediment particles are routed in the stream channels to predict contaminant transport by particle size class. An empirical adjustment factor, the enrichment ratio, is shown to be a function of the particle size distribution of stream bed sediments, contaminant concentrations by particle size, differential sediment transport rates, and the magnitude of the runoff event causing transport of sediment and contaminants. This analysis and an example application in a liquid effluent-receiving area illustrate the significance of particle sorting in the transport of sediment-associated contaminants.

  7. Heavy particle transport in sputtering systems

    Science.gov (United States)

    Trieschmann, Jan

    2015-09-01

    This contribution aims to discuss the theoretical background of heavy particle transport in plasma sputtering systems such as direct current magnetron sputtering (dcMS), high power impulse magnetron sputtering (HiPIMS), or multi frequency capacitively coupled plasmas (MFCCP). Due to inherently low process pressures below one Pa, only kinetic simulation models are suitable. In this work a model appropriate for the description of the transport of film-forming particles sputtered off a target material has been devised within the frame of the OpenFOAM software (specifically dsmcFoam). The three dimensional model comprises ejection of sputtered particles into the reactor chamber, their collisional transport through the volume, as well as deposition onto the surrounding surfaces (i.e. substrates, walls). An angular dependent Thompson energy distribution fitted to results from Monte-Carlo simulations is assumed initially. Binary collisions are treated via the M1 collision model, a modified variable hard sphere (VHS) model. The dynamics of sputtered and background gas species can be resolved self-consistently following the direct simulation Monte-Carlo (DSMC) approach or, whenever possible, simplified based on the test particle method (TPM) with the assumption of a constant, non-stationary background at a given temperature. Using the example of an MFCCP research reactor, the transport of sputtered aluminum is specifically discussed. For this particular configuration and under typical process conditions with argon as process gas, the transport of aluminum sputtered off a circular target is shown to be governed by a one dimensional interaction of the imposed and backscattered particle fluxes. The results are analyzed and discussed on the basis of the obtained velocity distribution functions (VDF). This work is supported by the German Research Foundation (DFG) in the frame of the Collaborative Research Centre TRR 87.

  8. An Efficient Sleepy Algorithm for Particle-Based Fluids

    Directory of Open Access Journals (Sweden)

    Xiao Nie

    2014-01-01

    We present a novel Smoothed Particle Hydrodynamics (SPH) based algorithm for efficiently simulating compressible and weakly compressible particle fluids. Prior particle-based methods simulate all fluid particles; however, in many cases some particles appearing to be at rest can be safely ignored without notably affecting the fluid flow behavior. To identify these particles, a novel sleepy strategy is introduced. By utilizing this strategy, only a portion of the fluid particles requires computational resources; thus an obvious performance gain can be achieved. In addition, in order to resolve the unphysical clumping issue caused by tensile instability in SPH based methods, a new artificial repulsive force is provided. We demonstrate that our approach can be easily integrated with existing SPH based methods to improve efficiency without sacrificing visual quality.
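
    A minimal sketch of the idea of skipping particles that appear to be at rest is shown below, assuming a simple velocity and acceleration threshold as the sleep criterion; the paper's actual sleepy strategy and thresholds are not reproduced here.

```python
import numpy as np

def sleepy_mask(velocities, accelerations, v_eps=1e-4, a_eps=1e-3):
    """Mark particles whose speed and acceleration are both below small
    thresholds as asleep, so the SPH update can skip them."""
    speed = np.linalg.norm(velocities, axis=1)
    accel = np.linalg.norm(accelerations, axis=1)
    return (speed < v_eps) & (accel < a_eps)

# inside the time loop, only awake particles are advanced:
#   awake = ~sleepy_mask(v, a)
#   x[awake] += dt * v[awake]
```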

  9. Stochastic transport of particles across single barriers

    International Nuclear Information System (INIS)

    Kreuter, Christian; Siems, Ullrich; Henseler, Peter; Nielaba, Peter; Leiderer, Paul; Erbe, Artur

    2012-01-01

    Transport phenomena of interacting particles are of high interest for many applications in biology and mesoscopic systems. Here we present measurements on colloidal particles which are confined in narrow channels on a substrate and interact with a barrier that impedes the motion along the channel. The substrate is tilted so that the particles are driven towards the barrier and, if the energy gained from the tilt is large enough, surmount the barrier by thermal activation. We therefore study the influence of this barrier, as well as the influence of particle interactions, on particle transport through such systems. All experiments are complemented by Brownian dynamics simulations, which allow a large range of the parameter space that cannot be accessed experimentally to be tested.

  10. Charged-particle calculations using Boltzmann transport methods

    International Nuclear Information System (INIS)

    Hoffman, T.J.; Dodds, H.L. Jr.; Robinson, M.T.; Holmes, D.K.

    1981-01-01

    Several aspects of radiation damage effects in fusion reactor neutron and ion irradiation environments are amenable to treatment by transport theory methods. In this paper, multigroup transport techniques are developed for the calculation of charged particle range distributions, reflection coefficients, and sputtering yields. The Boltzmann transport approach can be implemented, with minor changes, in standard neutral particle computer codes. With the multigroup discrete ordinates code, ANISN, determination of ion and target atom distributions as functions of position, energy, and direction can be obtained without the stochastic error associated with atomistic computer codes such as MARLOWE and TRIM. With the multigroup Monte Carlo code, MORSE, charged particle effects can be obtained for problems associated with very complex geometries. Results are presented for several charged particle problems. Good agreement is obtained between quantities calculated with the multigroup approach and those obtained experimentally or by atomistic computer codes

  11. A hand tracking algorithm with particle filter and improved GVF snake model

    Science.gov (United States)

    Sun, Yi-qi; Wu, Ai-guo; Dong, Na; Shao, Yi-zhe

    2017-07-01

    To solve the problem that an accurate hand contour cannot be obtained by a particle filter alone, a hand tracking algorithm based on a particle filter combined with a skin-color adaptive gradient vector flow (GVF) snake model is proposed. Adaptive GVF and a skin-color adaptive external guidance force are introduced to the traditional GVF snake model, guiding the curve to quickly converge to the deep concave regions of the hand contour and obtaining the complex hand contour accurately. This algorithm realizes a real-time correction of the particle filter parameters, avoiding the particle drift phenomenon. Experimental results show that the proposed algorithm can reduce the root mean square error of the hand tracking by 53%, and improve the accuracy of hand tracking in the case of complex and moving background, even with a large range of occlusion.

  12. Energy and particle core transport in tokamaks and stellarators compared

    Energy Technology Data Exchange (ETDEWEB)

    Beurskens, Marc; Angioni, Clemente; Beidler, Craig; Dinklage, Andreas; Fuchert, Golo; Hirsch, Matthias; Puetterich, Thomas; Wolf, Robert [Max-Planck-Institut fuer Plasmaphysik, Greifswald/Garching (Germany)

    2016-07-01

    The paper discusses expectations for core transport in the Wendelstein 7-X stellarator (W7-X) and presents a comparison to tokamaks. In tokamaks, the neoclassical trapped-particle-driven losses are small and turbulence dominates the energy and particle transport. At reactor-relevant low collisionality, the heat transport is governed by ion-temperature-gradient (ITG) turbulence, which clamps the temperature gradient. The particle transport is set by an anomalous inward pinch, yielding peaked profiles. A strong edge pedestal adds to the good confinement properties. In traditional stellarators the 3D geometry causes increased trapped-orbit losses. At reactor-relevant low collisionality and high temperatures, these neoclassical losses would be well above the turbulent transport losses. The W7-X design minimizes neoclassical losses and turbulent transport can become dominant. Moreover, the separation of regions of bad curvature from those of trapped particle orbits in W7-X may have favourable implications for the turbulent electron heat transport. The neoclassical particle thermodiffusion is outward. Without core particle sources the density profile is flat or even hollow. The presence of a turbulence-driven inward anomalous particle pinch in W7-X (as in tokamaks) is an open topic of research.

  13. Ratchet Transport of Chiral Particles Caused by the Transversal Asymmetry: Current Reversals and Particle Separation

    Science.gov (United States)

    Liu, Jian-li; Lu, Shi-cai; Ai, Bao-quan

    2018-06-01

    Due to the chirality of active particles, transversal asymmetry can induce longitudinal directed transport. The transport of chiral active particles in a periodic channel is investigated in the presence of two types of transversal asymmetry: a transverse force and transverse rigid half-circle obstacles. For all cases, the counterclockwise and clockwise particles move in opposite directions. For the case of the transverse force alone, the chiral active particles can reverse their directions when the transverse force is increased. When the transverse rigid half-circle obstacles are introduced, the transport behavior of particles becomes more complex and multiple current reversals occur. The direction of the transport is determined by the competition between the two types of transversal asymmetry. For a given chirality, by suitably tailoring parameters, particles with different self-propulsion speeds can move in different directions and can be separated.

  14. Machine learning based global particle identification algorithms at the LHCb experiment

    CERN Multimedia

    Derkach, Denis; Likhomanenko, Tatiana; Rogozhnikov, Aleksei; Ratnikov, Fedor

    2017-01-01

    One of the most important aspects of data processing at LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging CHerenkov (RICH) detector, the hadronic and electromagnetic calorimeters, and the muon chambers. To improve charged particle identification, several neural networks including a deep architecture and gradient boosting have been applied to data. These new approaches provide higher identification efficiencies than existing implementations for all charged particle types. It is also necessary to achieve a flat dependency between efficiencies and spectator variables such as particle momentum, in order to reduce systematic uncertainties during later stages of data analysis. For this purpose, "flat" algorithms that guarantee the flatness property for efficiencies have also been developed. This talk presents this new approach based on machine learning and its performance.

  15. Application of particle swarm optimization algorithm in the heating system planning problem.

    Science.gov (United States)

    Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi

    2013-01-01

    Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system as the objective for a given life cycle time. For the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the standard PSO algorithm. Moreover, the results also present the potential to provide useful information when making decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem.
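
    For orientation, a plain global-best PSO loop is sketched below; it illustrates the class of optimizer referred to, not the paper's improved IPSO variant, and all hyperparameters are illustrative defaults.

        import numpy as np

        def pso_minimize(f, bounds, n_particles=30, n_iter=200,
                         w=0.7, c1=1.5, c2=1.5, seed=0):
            """Plain global-best PSO; bounds is a list of (lo, hi) pairs per dimension."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, float).T
            dim = lo.size
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
            g = pbest[np.argmin(pbest_val)].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([f(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                g = pbest[np.argmin(pbest_val)].copy()
            return g, pbest_val.min()

    A call such as pso_minimize(lambda p: (p**2).sum(), [(-5, 5)] * 3) returns an approximate minimizer of a simple test function.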

  16. Spatiotemporal Structure of Aeolian Particle Transport on Flat Surface

    Science.gov (United States)

    Niiya, Hirofumi; Nishimura, Kouichi

    2017-05-01

    We conduct numerical simulations based on a model of blowing snow to reveal the long-term properties and equilibrium state of aeolian particle transport from 10^-5 to 10 m above a flat surface. The numerical results are as follows. (i) Time-series data of particle transport are divided into development, relaxation, and equilibrium phases, which are formed by rapid wind response below 10 cm and gradual wind response above 10 cm. (ii) The particle transport rate at equilibrium is expressed as a power function of friction velocity, and the index of 2.35 implies that most particles are transported by saltation. (iii) The friction velocity below a height of 100 µm remains roughly constant and lower than the fluid threshold at equilibrium. (iv) The mean particle speed above a height of 300 µm is less than the wind speed, whereas that below 300 µm exceeds the wind speed because of descending particles. (v) The particle diameter increases with height in the saltation layer, and the relationship is expressed as a power function. Through comparisons with the previously reported random-flight model, we find a crucial problem that empirical splash functions cannot reproduce particle dynamics at a relatively high wind speed.

  17. Fast weighted centroid algorithm for single particle localization near the information limit.

    Science.gov (United States)

    Fish, Jeremie; Scrimgeour, Jan

    2015-07-10

    A simple weighting scheme that enhances the localization precision of center of mass calculations for radially symmetric intensity distributions is presented. The algorithm effectively removes the biasing that is common in such center of mass calculations. Localization precision compares favorably with other localization algorithms used in super-resolution microscopy and particle tracking, while significantly reducing the processing time and memory usage. We expect that the algorithm presented will be of significant utility when fast computationally lightweight particle localization or tracking is desired.
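
    The exact weighting scheme of the paper is not reproduced here; the sketch below shows a plain intensity-weighted centroid with an optional power weighting that emphasises bright pixels, as an illustrative stand-in. The background-removal step and the default power are assumptions.

        import numpy as np

        def centroid(image, power=2):
            """Intensity-weighted centre of mass; power > 1 emphasises bright pixels.
            Illustrative only; the specific weighting of the paper is not reproduced."""
            img = np.asarray(image, float)
            img = img - img.min()                    # crude background removal (assumption)
            w = img ** power
            ys, xs = np.indices(img.shape)
            total = w.sum()
            return (xs * w).sum() / total, (ys * w).sum() / total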

  18. A parallel algorithm for 3D particle tracking and Lagrangian trajectory reconstruction

    International Nuclear Information System (INIS)

    Barker, Douglas; Zhang, Yuanhui; Lifflander, Jonathan; Arya, Anshu

    2012-01-01

    Particle-tracking methods are widely used in fluid mechanics and multi-target tracking research because of their unique ability to reconstruct long trajectories with high spatial and temporal resolution. Researchers have recently demonstrated 3D tracking of several objects in real time, but as the number of objects is increased, real-time tracking becomes impossible due to data transfer and processing bottlenecks. This problem may be solved by using parallel processing. In this paper, a parallel-processing framework has been developed based on frame decomposition and is programmed using the asynchronous object-oriented Charm++ paradigm. This framework can be a key step in achieving a scalable Lagrangian measurement system for particle-tracking velocimetry and may lead to real-time measurement capabilities. The parallel tracking algorithm was evaluated with three data sets including the particle image velocimetry standard 3D images data set #352, a uniform data set for optimal parallel performance and a computational-fluid-dynamics-generated non-uniform data set to test trajectory reconstruction accuracy, consistency with the sequential version and scalability to more than 500 processors. The algorithm showed strong scaling up to 512 processors and no inherent limits of scalability were seen. Ultimately, up to a 200-fold speedup is observed compared to the serial algorithm when 256 processors were used. The parallel algorithm is adaptable and could be easily modified to use any sequential tracking algorithm, which inputs frames of 3D particle location data and outputs particle trajectories

  19. Empirical particle transport model for tokamaks

    International Nuclear Information System (INIS)

    Petravic, M.; Kuo-Petravic, G.

    1986-08-01

    A simple empirical particle transport model has been constructed with the purpose of gaining insight into the L- to H-mode transition in tokamaks. The aim was to construct the simplest possible model which would reproduce the measured density profiles in the L-regime, and also produce a qualitatively correct transition to the H-regime without having to assume a completely different transport mode for the bulk of the plasma. Rather than using completely ad hoc constructions for the particle diffusion coefficient, we assume D = (1/5)χ_total, where χ_total ≅ χ_e is the thermal diffusivity, and then use the κ_e = n_e χ_e values derived from experiments. The observed temperature profiles are then automatically reproduced, but nontrivially, the correct density profiles are also obtained, for realistic fueling rates and profiles. Our conclusion is that it is sufficient to reduce the transport coefficients within a few centimeters of the surface to produce the H-mode behavior. An additional simple assumption, concerning the particle mean-free path, leads to a convective transport term which reverses sign a few centimeters inside the surface, as required by the H-mode density profiles

  20. Entropic transport of active particles driven by a transverse ac force

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jian-chun, E-mail: wjchun2010@163.com; Chen, Qun; Ai, Bao-quan, E-mail: aibq@scnu.edu.cn

    2015-12-18

    Transport of active particles is numerically investigated in a two-dimensional periodic channel. In the presence of a transverse ac force, the directed transport of active particles demonstrates striking behaviors. By adjusting the amplitude and the frequency of the transverse ac force, the average velocity is influenced significantly and the direction of the transport can be reversed several times. Remarkably, it is also found that the direction of the transport varies with different self-propelled speeds. Therefore, particles with different self-propelled speeds will move in different directions, which makes it possible to separate particles of different self-propelled speeds. - Highlights: • A transverse ac force strongly influences the transport of active particles. • The direction of the transport can be reversed several times. • Active particles with different self-propelled speeds can be separated.

  1. Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.

    Science.gov (United States)

    Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard

    2012-06-07

    We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.

  2. Combinatorial Clustering Algorithm of Quantum-Behaved Particle Swarm Optimization and Cloud Model

    Directory of Open Access Journals (Sweden)

    Mi-Yuan Shan

    2013-01-01

    Full Text Available We propose a combinatorial clustering algorithm of cloud model and quantum-behaved particle swarm optimization (COCQPSO to solve the stochastic problem. The algorithm employs a novel probability model as well as a permutation-based local search method. The parameters of COCQPSO are set based on design of experiments. In the comprehensive computational study, we scrutinize the performance of COCQPSO on a set of widely used benchmark instances. By benchmarking the combinatorial clustering algorithm against state-of-the-art algorithms, we show that its performance compares very favorably. The fuzzy combinatorial optimization algorithm of cloud model and quantum-behaved particle swarm optimization (FCOCQPSO in vague sets (IVSs is more expressive than the other fuzzy sets. Finally, numerical examples show the remarkable clustering effectiveness of the COCQPSO and FCOCQPSO clustering algorithms.

  3. A Novel Chaotic Particle Swarm Optimization Algorithm for Parking Space Guidance

    Directory of Open Access Journals (Sweden)

    Na Dong

    2016-01-01

    Full Text Available An evolutionary approach to parking space guidance based upon a novel Chaotic Particle Swarm Optimization (CPSO algorithm is proposed. In the newly proposed CPSO algorithm, chaotic dynamics is combined into the position updating rules of Particle Swarm Optimization to improve the diversity of solutions and to avoid being trapped in the local optima. This novel approach, which combines the strengths of Particle Swarm Optimization and chaotic dynamics, is then applied to the route optimization (RO problem of parking lots, which is an important issue in the management systems of large-scale parking lots. It is used to find the optimized paths between any source and destination nodes in the route network. Route optimization problems based on real parking lots are introduced for analysis, and the effectiveness and practicability of this novel optimization algorithm for parking space guidance are verified through the application results.
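
    The paper's exact CPSO update rule is not reproduced here; as an illustration of the chaotic ingredient, the sketch below generates a logistic-map chaotic sequence and uses it to nudge a particle toward a chaotically chosen point inside the search box. The map parameters, the mapping to the box, and the perturbation strength are assumptions.

        import numpy as np

        def logistic_chaos(n, x0=0.631, r=4.0):
            """Generate a chaotic sequence in (0, 1) with the logistic map."""
            seq = np.empty(n)
            x = x0
            for i in range(n):
                x = r * x * (1.0 - x)
                seq[i] = x
            return seq

        def chaotic_perturbation(position, lo, hi, chaos_value, strength=0.1):
            """Map a chaotic value into the search box and nudge the particle toward it."""
            target = lo + chaos_value * (hi - lo)
            return position + strength * (target - position)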

  4. Parallel Global Optimization with the Particle Swarm Algorithm (Preprint)

    National Research Council Canada - National Science Library

    Schutte, J. F; Reinbolt, J. A; Fregly, B. J; Haftka, R. T; George, A. D

    2004-01-01

    .... To obtain enhanced computational throughput and global search capability, we detail the coarse-grained parallelization of an increasingly popular global search method, the Particle Swarm Optimization (PSO) algorithm...

  5. Particle transport in field-reversed configurations

    Energy Technology Data Exchange (ETDEWEB)

    Tuszewski, M.; Linford, R.K.

    1982-05-01

    Particle transport in field-reversed configurations is investigated using a one-dimensional, nondecaying, magnetic field structure. The radial profiles are constrained to satisfy an average β condition from two-dimensional equilibrium and a boundary condition at the separatrix to model the balance between closed and open-field-line transport. When applied to the FRX-B experimental data and to the projected performance of the FRX-C device, this model suggests that the particle confinement times obtained with anomalous lower-hybrid-drift transport are in good agreement with the available numerical and experimental data. Larger values of confinement times can be achieved by increasing the ratio of the separatrix radius to the conducting wall radius. Even larger increases in lifetimes might be obtained by improving the open-field-line confinement.

  6. Particle transport in field-reversed configurations

    International Nuclear Information System (INIS)

    Tuszewski, M.; Linford, R.K.

    1982-01-01

    Particle transport in field-reversed configurations is investigated using a one-dimensional, nondecaying, magnetic field structure. The radial profiles are constrained to satisfy an average β condition from two-dimensional equilibrium and a boundary condition at the separatrix to model the balance between closed and open-field-line transport. When applied to the FRX-B experimental data and to the projected performance of the FRX-C device, this model suggests that the particle confinement times obtained with anomalous lower-hybrid-drift transport are in good agreement with the available numerical and experimental data. Larger values of confinement times can be achieved by increasing the ratio of the separatrix radius to the conducting wall radius. Even larger increases in lifetimes might be obtained by improving the open-field-line confinement

  7. Microstripes for transport and separation of magnetic particles

    DEFF Research Database (Denmark)

    Donolato, Marco; Dalslet, Bjarke Thomas; Hansen, Mikkel Fougt

    2012-01-01

    We present a simple technique for creating an on-chip magnetic particle conveyor based on exchange-biased permalloy microstripes. The particle transportation relies on an array of stripes with a spacing smaller than their width in conjunction with a periodic sequence of four different externally applied magnetic fields. We demonstrate the controlled transportation of a large population of particles over several millimeters of distance as well as the spatial separation of two populations of magnetic particles with different magnetophoretic mobilities. The technique can be used for the controlled selective manipulation and separation of magnetically labelled species. (C) 2012 American Institute of Physics.

  8. Estimates of Lagrangian particle transport by wave groups: forward transport by Stokes drift and backward transport by the return flow

    Science.gov (United States)

    van den Bremer, Ton S.; Taylor, Paul H.

    2014-11-01

    Although the literature has examined Stokes drift, the net Lagrangian transport of particles due to surface gravity waves, in great detail, the motion of fluid particles transported by surface gravity wave groups has received considerably less attention. In practice, nevertheless, the wave field on the open sea often has a group-like structure. The motion of particles is different, as particles at sufficient depth are transported backwards by the Eulerian return current that was first described by Longuet-Higgins & Stewart (1962) and forms an inseparable counterpart of Stokes drift for wave groups, ensuring the (irrotational) mass balance holds. We use WKB theory to study the variation of the Lagrangian transport by the return current with depth, distinguishing two-dimensional seas, three-dimensional seas, infinite depth and finite depth. We then provide dimensional estimates of the net horizontal Lagrangian transport by the Stokes drift on the one hand and the return flow on the other hand for realistic sea states in all four cases. Finally we propose a simple scaling relationship for the transition depth: the depth above which Lagrangian particles are transported forwards by the Stokes drift and below which such particles are transported backwards by the return current.
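
    As a reference point for the forward near-surface transport discussed, the sketch below evaluates the classical deep-water Stokes drift profile u_s(z) = (a k)^2 c exp(2 k z); the group-induced return flow and the transition-depth scaling of the paper are not reproduced, and the function name and arguments are illustrative.

        import numpy as np

        def stokes_drift_deep_water(z, amplitude, wavelength, g=9.81):
            """Classical deep-water Stokes drift u_s(z) = (a k)^2 c exp(2 k z),
            with z <= 0 measured downward from the mean surface."""
            k = 2 * np.pi / wavelength
            c = np.sqrt(g / k)                 # deep-water phase speed
            return (amplitude * k) ** 2 * c * np.exp(2 * k * np.asarray(z))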

  9. Particle tracing in the magnetosphere: New algorithms and results

    International Nuclear Information System (INIS)

    Sheldon, R.B.; Gaffey, J.D. Jr.

    1993-01-01

    The authors present new algorithms for calculating charged-particle trajectories in realistic magnetospheric fields in a fast and efficient manner. The scheme is based on a Hamiltonian energy conservation principle. It requires that particles conserve the first two adiabatic invariants, and thus also conserve energy. It is applicable for particles ranging in energy from 0.01 to 100 keV, having arbitrary charge, and pitch angle. In addition to rapid particle trajectory calculations, it allows topological boundaries to be located efficiently. The results can be combined with fluid models to provide quantitative models of the time development of the whole convecting plasma model

  10. Public Transport Route Finding using a Hybrid Genetic Algorithm

    OpenAIRE

    Liviu Adrian COTFAS; Andreea DIOSTEANU

    2011-01-01

    In this paper we present a public transport route finding solution based on a hybrid genetic algorithm. The algorithm uses two heuristics that take into consideration the number of transfers and the remaining distance to the destination station in order to improve the convergence speed. The interface of the system uses the latest web technologies to offer both portability and advanced functionality. The approach has been evaluated using the data for the Bucharest public transport network.

  11. Hybrid particle swarm optimization algorithm and its application in nuclear engineering

    International Nuclear Information System (INIS)

    Liu, C.Y.; Yan, C.Q.; Wang, J.J.

    2014-01-01

    Highlights: • We propose a hybrid particle swarm optimization algorithm (HPSO). • A modified Nelder–Mead simplex search method is applied in HPSO. • The algorithm has high search precision and rapid calculation speed. • HPSO can be used in nuclear engineering optimization design problems. - Abstract: A hybrid particle swarm optimization algorithm with a feasibility-based rule for solving constrained optimization problems has been developed in this research. Firstly, the global optimal solution zone can be obtained through the particle swarm optimization process, and then the refined search of the global optimal solution is achieved through the modified Nelder–Mead simplex algorithm. Simulations based on two well-studied benchmark problems demonstrate the proposed algorithm will be an efficient alternative for solving constrained optimization problems. The vertical electrical heating pressurizer is one of the key components in the reactor coolant system. The mathematical model of the pressurizer has been established in steady state. The optimization design of the pressurizer weight has been carried out through the HPSO algorithm. The results show the pressurizer weight can be reduced by 16.92%. The thermal efficiencies of conventional PWR nuclear power plants are about 31–35% so far, which are much lower than those of fossil-fueled plants based on a similar steam cycle. The thermal equilibrium mathematical model for the nuclear power plant secondary loop has been established. An optimization case study has been conducted to improve the efficiency of the nuclear power plant with the proposed algorithm. The results show the thermal efficiency is improved by 0.5%
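
    A hedged sketch of the two-stage idea only: a PSO run locates the region of the global optimum, after which SciPy's Nelder–Mead routine refines it. The paper uses a modified simplex method and a feasibility rule that are not reproduced; the pso_minimize callable, its signature, and the tolerances are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def hybrid_pso_simplex(f, bounds, pso_minimize, seed=0):
            """Stage 1: any PSO routine with signature (f, bounds, seed=...) locates
            the global-optimum region. Stage 2: Nelder-Mead refines that point."""
            x_pso, _ = pso_minimize(f, bounds, seed=seed)
            res = minimize(f, x_pso, method="Nelder-Mead",
                           options={"xatol": 1e-8, "fatol": 1e-8})
            return res.x, res.fun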

  12. An approach to improving transporting velocity in the long-range ultrasonic transportation of micro-particles

    International Nuclear Information System (INIS)

    Meng, Jianxin; Mei, Deqing; Yang, Keji; Fan, Zongwei

    2014-01-01

    In existing ultrasonic transportation methods, the long-range transportation of micro-particles is always realized in a step-by-step way. Due to the substantial decrease of the driving force in each step, the transportation is slow and stair-stepping. To improve the transporting velocity, a non-stepping ultrasonic transportation approach is proposed. By quantitatively analyzing the acoustic potential well, an optimal region is defined as the position where the largest driving force is provided, under the condition that the driving force is simultaneously the major component of the acoustic radiation force. To keep the micro-particle trapped in the optimal region during the whole transportation process, an approach of optimizing the phase-shifting velocity and phase-shifting step is adopted. Due to the stable and large driving force, the displacement of the micro-particle is an approximately linear function of time, instead of a stair-stepping function of time as in the existing step-by-step methods. An experimental setup was also developed to validate this approach. Long-range ultrasonic transportations of zirconium beads with high transporting velocity were realized. The experimental results demonstrated that this approach is an effective way to improve transporting velocity in the long-range ultrasonic transportation of micro-particles

  13. A nowcasting technique based on application of the particle filter blending algorithm

    Science.gov (United States)

    Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai

    2017-10-01

    To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed by using the radar mosaic at an altitude of 2.5 km obtained from the radar images of 12 S-band radars in Guangdong Province, China. First, a bilateral filter was applied for quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm were used to track radar echoes and retrieve the echo motion vectors; then, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. Therefore, the particle filter blending method is proved to be superior to the traditional forecasting methods and it can be used to enhance the ability of nowcasting in operational weather forecasts.

  14. Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems

    International Nuclear Information System (INIS)

    Xiao, Jianyuan; Liu, Jian; He, Yang; Zhang, Ruili; Qin, Hong; Sun, Yajuan

    2015-01-01

    Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint http://arxiv.org/abs/arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combinations. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified by the two physics problems, i.e., the nonlinear Landau damping and the electron Bernstein wave

  15. Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Jianyuan [School of Nuclear Science and Technology and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026, China; Qin, Hong [School of Nuclear Science and Technology and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543, USA; Liu, Jian [School of Nuclear Science and Technology and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026, China; He, Yang [School of Nuclear Science and Technology and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026, China; Zhang, Ruili [School of Nuclear Science and Technology and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; Key Laboratory of Geospace Environment, CAS, Hefei, Anhui 230026, China; Sun, Yajuan [LSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100190, China

    2015-11-01

    Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv: 1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combinations. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified by the two physics problems, i.e., the nonlinear Landau damping and the electron Bernstein wave. (C) 2015 AIP Publishing LLC.

  16. Improved multi-objective clustering algorithm using particle swarm optimization.

    Science.gov (United States)

    Gong, Congcong; Chen, Haisong; He, Weixiong; Zhang, Zhanliang

    2017-01-01

    Multi-objective clustering has received widespread attention recently, as it can obtain more accurate and reasonable solutions. In this paper, an improved multi-objective clustering framework using particle swarm optimization (IMCPSO) is proposed. Firstly, a novel particle representation for the clustering problem is designed to help PSO search clustering solutions in continuous space. Secondly, the distribution of the Pareto set is analyzed. The analysis results are applied to the leader selection strategy and make the algorithm avoid trapping in local optima. Moreover, a clustering solution-improved method is proposed, which greatly increases the efficiency of searching for clustering solutions. In the experiments, 28 datasets are used and nine state-of-the-art clustering algorithms are compared; the proposed method is superior to the other approaches in the evaluation index ARI.

  17. Particle identification algorithms for the PANDA Endcap Disc DIRC

    Science.gov (United States)

    Schmidt, M.; Ali, A.; Belias, A.; Dzhygadlo, R.; Gerhardt, A.; Götzen, K.; Kalicy, G.; Krebs, M.; Lehmann, D.; Nerling, F.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwarz, C.; Schwiening, J.; Traxler, M.; Böhm, M.; Eyrich, W.; Lehmann, A.; Pfaffinger, M.; Uhlig, F.; Düren, M.; Etzelmüller, E.; Föhl, K.; Hayrapetyan, A.; Kreutzfeld, K.; Merle, O.; Rieke, J.; Wasem, T.; Achenbach, P.; Cardinali, M.; Hoek, M.; Lauth, W.; Schlimme, S.; Sfienti, C.; Thiel, M.

    2017-12-01

    The Endcap Disc DIRC has been developed to provide excellent particle identification for the future PANDA experiment by separating pions and kaons up to a momentum of 4 GeV/c with a separation power of 3 standard deviations in the polar angle region from 5° to 22°. This goal will be achieved using dedicated particle identification algorithms based on likelihood methods and will be applied in an offline analysis and online event filtering. This paper evaluates the resulting PID performance using Monte-Carlo simulations to study basic single track PID as well as the analysis of complex physics channels. The online reconstruction algorithm has been tested with a Virtex-4 FPGA card and optimized regarding the resulting constraints.

  18. Transport of the moving barrier driven by chiral active particles

    Science.gov (United States)

    Liao, Jing-jing; Huang, Xiao-qun; Ai, Bao-quan

    2018-03-01

    Transport of a moving V-shaped barrier exposed to a bath of chiral active particles is investigated in a two-dimensional channel. Due to the chirality of active particles and the transversal asymmetry of the barrier position, active particles can power and steer the directed transport of the barrier in the longitudinal direction. The transport of the barrier is determined by the chirality of active particles. The moving barrier and active particles move in the opposite directions. The average velocity of the barrier is much larger than that of active particles. There exist optimal parameters (the chirality, the self-propulsion speed, the packing fraction, and the channel width) at which the average velocity of the barrier takes its maximal value. In particular, tailoring the geometry of the barrier and the active concentration provides novel strategies to control the transport properties of micro-objects or cargoes in an active medium.

  19. GPU-accelerated algorithms for many-particle continuous-time quantum walks

    Science.gov (United States)

    Piccinini, Enrico; Benedetti, Claudia; Siloi, Ilaria; Paris, Matteo G. A.; Bordone, Paolo

    2017-06-01

    Many-particle continuous-time quantum walks (CTQWs) represent a resource for several tasks in quantum technology, including quantum search algorithms and universal quantum computation. In order to design and implement CTQWs in a realistic scenario, one needs effective simulation tools for Hamiltonians that take into account static noise and fluctuations in the lattice, i.e. Hamiltonians containing stochastic terms. To this aim, we suggest a parallel algorithm based on the Taylor series expansion of the evolution operator, and compare its performance with that of algorithms based on the exact diagonalization of the Hamiltonian or a 4th order Runge-Kutta integration. We prove that both the Taylor-series expansion and Runge-Kutta algorithms are reliable and have a low computational cost, the Taylor-series expansion showing the additional advantage of a memory allocation not depending on the precision of calculation. Both algorithms are also highly parallelizable within the SIMT paradigm, and are thus suitable for GPGPU computing. In turn, we have benchmarked 4 NVIDIA GPUs and 3 quad-core Intel CPUs for a 2-particle system over lattices of increasing dimension, showing that the speedup provided by GPU computing, with respect to the OPENMP parallelization, lies in the range between 8x and (more than) 20x, depending on the frequency of post-processing. GPU-accelerated codes thus allow one to overcome concerns about the execution time, and make simulations with many interacting particles on large lattices possible, with the only limit being the memory available on the device.
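
    A CPU-only NumPy sketch of the Taylor-series propagation idea: one time step of exp(-i H dt) applied to a state vector via a truncated series. The GPU implementation, noise model, and truncation order of the paper are not reproduced; the default order and the renormalisation step are assumptions.

        import numpy as np

        def ctqw_step(H, psi, dt, order=20):
            """Advance a CTQW state by dt using a truncated Taylor expansion of exp(-i H dt)."""
            term = psi.astype(complex)
            out = term.copy()
            for k in range(1, order + 1):
                term = (-1j * dt / k) * (H @ term)   # next Taylor term (-i H dt)^k / k! psi
                out += term
            return out / np.linalg.norm(out)         # renormalise to curb truncation error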

  20. Public Transport Route Finding using a Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Liviu Adrian COTFAS

    2011-01-01

    Full Text Available In this paper we present a public transport route finding solution based on a hybrid genetic algorithm. The algorithm uses two heuristics that take into consideration the number of transfers and the remaining distance to the destination station in order to improve the convergence speed. The interface of the system uses the latest web technologies to offer both portability and advanced functionality. The approach has been evaluated using the data for the Bucharest public transport network.

  1. Modeling pollutant transport using a meshless-lagrangian particle model

    International Nuclear Information System (INIS)

    Carrington, D.B.; Pepper, D.W.

    2002-01-01

    A combined meshless-Lagrangian particle transport model is used to predict pollutant transport over irregular terrain. The numerical model for initializing the velocity field is based on a meshless approach utilizing multiquadrics established by Kansa. The Lagrangian particle transport technique uses a random walk procedure to depict the advection and dispersion of pollutants over any type of surface, including street and city canyons
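
    The textbook random-walk step underlying such Lagrangian dispersion models is sketched below: advection by the local velocity plus a Gaussian diffusive displacement. The velocity field and dispersion coefficient are placeholders, not the meshless multiquadric field of the paper.

        import numpy as np

        def random_walk_step(pos, velocity_at, D, dt, rng):
            """One advective-dispersive random-walk step per particle:
            x_new = x + u(x) dt + sqrt(2 D dt) * xi,  xi ~ N(0, 1) per component."""
            u = np.apply_along_axis(velocity_at, 1, pos)    # velocity_at maps position -> velocity
            return pos + u * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(pos.shape)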

  2. Fast algorithms for transport models. Final report

    International Nuclear Information System (INIS)

    Manteuffel, T.A.

    1994-01-01

    This project has developed a multigrid in space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid in space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state-of-the-art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel in angle algorithm was developed. A parallel version of the multilevel in angle algorithm has also been developed. Upon first glance, the shifted transport sweep has limited parallelism. Once the right-hand-side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial value ODE's. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel in angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M))

  3. The energy band memory server algorithm for parallel Monte Carlo transport calculations

    International Nuclear Information System (INIS)

    Felker, K.G.; Siegel, A.R.; Smith, K.S.; Romano, P.K.; Forget, B.

    2013-01-01

    An algorithm is developed to significantly reduce the on-node footprint of cross section memory in Monte Carlo particle tracking algorithms. The classic method of per-node replication of cross section data is replaced by a memory server model, in which the read-only lookup tables reside on a remote set of disjoint processors. The main particle tracking algorithm is then modified in such a way as to enable efficient use of the remotely stored data in the particle tracking algorithm. Results of a prototype code on a Blue Gene/Q installation reveal that the penalty for remote storage is reasonable in the context of time scales for real-world applications, thus yielding a path forward for a broad range of applications that are memory bound using current techniques. (authors)

  4. Fault detection and isolation in GPS receiver autonomous integrity monitoring based on chaos particle swarm optimization-particle filter algorithm

    Science.gov (United States)

    Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao

    2018-03-01

    The receiver autonomous integrity monitoring (RAIM) is one of the most important parts in an avionic navigation system. Two problems of the standard particle filter (PF) need to be addressed to improve this system: the degeneracy phenomenon and the lack of samples, whereby the number of samples cannot adequately express the real distribution of the probability density function (i.e., sample impoverishment). This study presents a GPS receiver autonomous integrity monitoring (RAIM) method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped to the interval of optimization variables to improve particle quality. This chaos perturbation overcomes the potential for the search to become trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed novel method is better than that of RAIM based on the PF or PSO-PF algorithm.

  5. IMPLANT-ASSOCIATED PATHOLOGY: AN ALGORITHM FOR IDENTIFYING PARTICLES IN HISTOPATHOLOGIC SYNOVIALIS/SLIM DIAGNOSTICS

    Directory of Open Access Journals (Sweden)

    V. Krenn

    2014-01-01

    Full Text Available In histopathologic SLIM diagnostics (SLIM: synovial-like interface membrane), particle identification plays an important role apart from diagnosing periprosthetic infection. The differences in particle pathogenesis and the variability of materials in endoprosthetics explain the particle heterogeneity that hampers the diagnostic identification of particles. For this reason, a histopathological particle algorithm has been developed. With minimal methodical complexity, this histopathological particle algorithm offers a guide to prosthesis material-particle identification. Light-microscopic morphological as well as enzyme-histochemical characteristics and polarization-optical properties have been established; particles are defined by size (microparticles, macroparticles and supra-macroparticles) and definitively characterized in accordance with a dichotomous principle. Based on these criteria, identification and validation of the particles was carried out in 120 joint endoprosthesis pathological cases. A histopathological particle score (HPS) is proposed that summarizes the most important information for the orthopedist, material scientist and histopathologist concerning particle identification in the SLIM.

  6. Vectorising the detector geometry to optimize particle transport

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    Among the components contributing to particle transport, geometry navigation is an important consumer of CPU cycles. The tasks performed to get answers to "basic" queries such as locating a point within a geometry hierarchy or computing accurately the distance to the next boundary can become very computing intensive for complex detector setups. So far, the existing geometry algorithms employ mainly scalar optimisation strategies (voxelization, caching) to reduce their CPU consumption. In this paper, we would like to take a different approach and investigate how geometry navigation can benefit from the vector instruction set extensions that are one of the primary source of performance enhancements on current and future hardware. While on paper, this form of microparallelism promises increasing performance opportunities, applying this technology to the highly hierarchical and multiply branched geometry code is a difficult challenge. We refer to the current work done to vectorise an important part of the critica...

  7. The Optimal Wavelengths for Light Absorption Spectroscopy Measurements Based on Genetic Algorithm-Particle Swarm Optimization

    Science.gov (United States)

    Tang, Ge; Wei, Biao; Wu, Decao; Feng, Peng; Liu, Juan; Tang, Yuan; Xiong, Shuangfei; Zhang, Zheng

    2018-03-01

    To select the optimal wavelengths in the light extinction spectroscopy measurement, genetic algorithm-particle swarm optimization (GAPSO) based on genetic algorithm (GA) and particle swarm optimization (PSO) is adopted. The change of the optimal wavelength positions in different feature size parameters and distribution parameters is evaluated. Moreover, the Monte Carlo method based on random probability is used to identify the number of optimal wavelengths, and good inversion effects of the particle size distribution are obtained. The method proved to have the advantage of resisting noise. In order to verify the feasibility of the algorithm, spectra with bands ranging from 200 to 1000 nm are computed. Based on this, the measured data of standard particles are used to verify the algorithm.

  8. On the Langevin approach to particle transport

    International Nuclear Information System (INIS)

    Bringuier, Eric

    2006-01-01

    In the Langevin description of Brownian motion, the action of the surrounding medium upon the Brownian particle is split up into a systematic friction force of Stokes type and a randomly fluctuating force, alternatively termed noise. That simple description accounts for several basic features of particle transport in a medium, making it attractive to teach at the undergraduate level, but its range of applicability is limited. The limitation is illustrated here by showing that the Langevin description fails to account realistically for the transport of a charged particle in a medium under crossed electric and magnetic fields and the ensuing Hall effect. That particular failure is rooted in the concept of the friction force rather than in the accompanying random force. It is then shown that the framework of kinetic theory offers a better account of the Hall effect. It is concluded that the Langevin description is nothing but an extension of Drude's transport model subsuming diffusion, and so it inherits basic limitations from that model. This paper thus describes the interrelationship of the Langevin approach, the Drude model and kinetic theory, in a specific transport problem of physical interest
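
    For concreteness, the sketch below integrates the Langevin equation for a charged particle in crossed E and B fields with a Stokes friction term and Gaussian noise, using a simple Euler-Maruyama scheme; this is exactly the naive setup whose limitations the paper discusses. All parameter values and the integration scheme are illustrative assumptions.

        import numpy as np

        def langevin_crossed_fields(n_steps=200000, dt=1e-3, q=1.0, m=1.0,
                                    gamma=1.0, kT=1.0, E=(1.0, 0.0, 0.0),
                                    B=(0.0, 0.0, 1.0), seed=0):
            """Euler-Maruyama for m dv/dt = q(E + v x B) - gamma*v + sqrt(2*gamma*kT)*xi(t);
            returns the time-averaged velocity (its transverse component is the naive Hall drift)."""
            rng = np.random.default_rng(seed)
            E, B = np.asarray(E, float), np.asarray(B, float)
            v = np.zeros(3)
            v_sum = np.zeros(3)
            for _ in range(n_steps):
                noise = np.sqrt(2.0 * gamma * kT * dt) * rng.standard_normal(3)
                v += (dt / m) * (q * (E + np.cross(v, B)) - gamma * v) + noise / m
                v_sum += v
            return v_sum / n_steps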

  9. Turbulent transport of large particles in the atmospheric boundary layer

    Science.gov (United States)

    Richter, D. H.; Chamecki, M.

    2017-12-01

    To describe the transport of heavy dust particles in the atmosphere, assumptions must typically be made in order to connect the micro-scale emission processes with the larger-scale atmospheric motions. In the context of numerical models, this can be thought of as the transport process which occurs between the domain bottom and the first vertical grid point. For example, in the limit of small particles (both low inertia and low settling velocity), theory built upon Monin-Obukhov similarity has proven effective in relating mean dust concentration profiles to surface emission fluxes. For increasing particle mass, however, it becomes more difficult to represent dust transport as a simple extension of the transport of a passive scalar due to issues such as the crossing trajectories effect. This study focuses specifically on the problem of large particle transport and dispersion in the turbulent boundary layer by utilizing direct numerical simulations with Lagrangian point-particle tracking to determine under what, if any, conditions the large dust particles (larger than 10 micron in diameter) can be accurately described in a simplified Eulerian framework. In particular, results will be presented detailing the independent contributions of both particle inertia and particle settling velocity relative to the strength of the surrounding turbulent flow, and consequences of overestimating surface fluxes via traditional parameterizations will be demonstrated.

  10. An efficient quasi-3D particle tracking-based approach for transport through fractures with application to dynamic dispersion calculation.

    Science.gov (United States)

    Wang, Lichun; Cardenas, M Bayani

    2015-08-01

    The quantitative study of transport through fractured media has continued for many decades, but has often been constrained by observational and computational challenges. Here, we developed an efficient quasi-3D random walk particle tracking (RWPT) algorithm to simulate solute transport through natural fractures based on a 2D flow field generated from the modified local cubic law (MLCL). As a reference, we also modeled the actual breakthrough curves (BTCs) through direct simulations with the 3D advection-diffusion equation (ADE) and Navier-Stokes equations. The RWPT algorithm along with the MLCL accurately reproduced the actual BTCs calculated with the 3D ADE. The BTCs exhibited non-Fickian behavior, including early arrival and long tails. Using the spatial information of particle trajectories, we further analyzed the dynamic dispersion process through moment analysis. From this, asymptotic time scales were determined for solute dispersion to distinguish non-Fickian from Fickian regimes. This analysis illustrates the advantage and benefit of using an efficient combination of flow modeling and RWPT. Copyright © 2015 Elsevier B.V. All rights reserved.
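
    A hedged sketch of the moment analysis referred to: the apparent longitudinal dispersion coefficient estimated from the growth rate of the displacement variance of a particle ensemble. The array layout and the use of a simple finite-difference derivative are assumptions, not the paper's implementation.

        import numpy as np

        def apparent_dispersion(x_positions, times):
            """D_app(t) = 0.5 * d Var[x] / dt from particle trajectories.
            x_positions: array of shape (n_times, n_particles) of longitudinal positions."""
            var = np.var(x_positions, axis=1)
            return 0.5 * np.gradient(var, times)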

  11. Algorithms for the optimization of RBE-weighted dose in particle therapy.

    Science.gov (United States)

    Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M

    2013-01-21

    We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. Concerning the dose calculation, carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms into GSI's treatment planning system TRiP98, like the BFGS algorithm and the method of conjugated gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented by convergence in terms of iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugated gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes leading to good dose distributions. At the end we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.
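
    For reference, a generic Fletcher-Reeves nonlinear conjugate-gradient loop is sketched below; TRiP98's dose model and line search are not reproduced, and the fixed step size is an illustrative simplification (a real solver would use a proper line search).

        import numpy as np

        def fletcher_reeves(f, grad, x0, n_iter=100, step=1e-2, tol=1e-8):
            """Plain Fletcher-Reeves conjugate gradients with a fixed step size."""
            x = np.asarray(x0, float).copy()
            g = grad(x)
            d = -g
            for _ in range(n_iter):
                x = x + step * d
                g_new = grad(x)
                if np.linalg.norm(g_new) < tol:
                    break
                beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
                d = -g_new + beta * d
                g = g_new
            return x, f(x)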

  12. Improved multi-objective clustering algorithm using particle swarm optimization.

    Directory of Open Access Journals (Sweden)

    Congcong Gong

    Full Text Available Multi-objective clustering has received widespread attention recently, as it can obtain more accurate and reasonable solutions. In this paper, an improved multi-objective clustering framework using particle swarm optimization (IMCPSO is proposed. Firstly, a novel particle representation for the clustering problem is designed to help PSO search clustering solutions in continuous space. Secondly, the distribution of the Pareto set is analyzed. The analysis results are applied to the leader selection strategy and make the algorithm avoid trapping in local optima. Moreover, a clustering solution-improved method is proposed, which greatly increases the efficiency of searching for clustering solutions. In the experiments, 28 datasets are used and nine state-of-the-art clustering algorithms are compared; the proposed method is superior to the other approaches in the evaluation index ARI.

  13. Convective and diffusive effects on particle transport in asymmetric periodic capillaries.

    Directory of Open Access Journals (Sweden)

    Nazmul Islam

    Full Text Available We present here results of a theoretical investigation of particle transport in longitudinally asymmetric but axially symmetric capillaries, allowing for the influence of both diffusion and convection. In this study we have focused attention primarily on characterizing the influence of tube geometry and applied hydraulic pressure on the magnitude, direction and rate of transport of particles in axi-symmetric, saw-tooth shaped tubes. Three initial value problems are considered. The first involves the evolution of a fixed number of particles initially confined to a central wave-section. The second involves the evolution of the same initial state but including an ongoing production of particles in the central wave-section. The third involves the evolution of particles in a fully laden tube. Based on a physical model of convective-diffusive transport, assuming an underlying oscillatory fluid velocity field that is unaffected by the presence of the particles, we find that transport rates and even net transport directions depend critically on the design specifics, such as tube geometry, flow rate, initial particle configuration and whether or not particles are continuously introduced. The second transient scenario is qualitatively independent of the details of how particles are generated. In the third scenario there is no net transport. As the study is fundamental in nature, our findings could engender greater understanding of practical systems.

  14. Cloud Particles Differential Evolution Algorithm: A Novel Optimization Method for Global Numerical Optimization

    Directory of Open Access Journals (Sweden)

    Wei Li

    2015-01-01

    Full Text Available We propose a new optimization algorithm inspired by the formation and change of clouds in nature, referred to as the Cloud Particles Differential Evolution (CPDE) algorithm. The cloud is assumed to have three states in the proposed algorithm. The gaseous state represents global exploration. The liquid state represents the intermediate process from global exploration to local exploitation. The solid state represents local exploitation. The best solution found so far acts as a nucleus. In the gaseous state, the nucleus leads the population to explore by a condensation operation. In the liquid state, cloud particles carry out macrolocal exploitation by a liquefaction operation. A new mutation strategy called cloud differential mutation is introduced in order to address the problem that the misleading effect of a nucleus may cause premature convergence. In the solid state, cloud particles carry out microlocal exploitation by a solidification operation. The effectiveness of the algorithm is validated upon different benchmark problems. The results have been compared with eight well-known optimization algorithms. The statistical analysis of the performance of the different algorithms on 10 benchmark functions and CEC2013 problems indicates that CPDE attains good performance.

  15. Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.

    Science.gov (United States)

    Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie

    2018-05-04

    Particle swarm optimization is a powerful metaheuristic population-based global optimization algorithm. However, when applied to non-separable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in the search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant particle swarm optimization algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared to the rotation-invariant Particle Swarm Optimization (PSO) algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field is carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also presents better performance compared to a Genetic Algorithm optimization method in the optimization of ReaxFF-lg correction model parameters. The computational framework is implemented in a standalone C++ code that allows a straightforward development of ReaxFF reactive force fields.
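
    As a hedged illustration of the idea described above, the sketch below grafts an isotropic Gaussian mutation onto a textbook (not rotation-invariant) PSO loop; the hyper-parameter values, mutation probability and the sphere test function are assumptions for the sketch, not the published enhanced algorithm.

        import numpy as np

        rng = np.random.default_rng(42)

        def pso_with_gaussian_mutation(f, dim=5, n_particles=30, n_iter=200,
                                       w=0.7, c1=1.5, c2=1.5, sigma=0.1, p_mut=0.1):
            """Textbook PSO loop with an isotropic Gaussian mutation on a few particles."""
            x = rng.uniform(-5.0, 5.0, (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = x + v
                mutate = rng.random(n_particles) < p_mut
                x[mutate] += rng.normal(0.0, sigma, (int(mutate.sum()), dim))  # isotropic kick
                fx = np.apply_along_axis(f, 1, x)
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest

        best = pso_with_gaussian_mutation(lambda z: float(np.sum(z ** 2)))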

  16. User's manual for ONEDANT: a code package for one-dimensional, diffusion-accelerated, neutral-particle transport

    International Nuclear Information System (INIS)

    O'Dell, R.D.; Brinkley, F.W. Jr.; Marr, D.R.

    1982-02-01

    ONEDANT is designed for the CDC-7600, but the program has been implemented and run on the IBM-370/190 and CRAY-I computers. ONEDANT solves the one-dimensional multigroup transport equation in plane, cylindrical, spherical, and two-angle plane geometries. Both regular and adjoint, inhomogeneous and homogeneous (k_eff and eigenvalue search) problems subject to vacuum, reflective, periodic, white, albedo, or inhomogeneous boundary flux conditions are solved. General anisotropic scattering is allowed and anisotropic inhomogeneous sources are permitted. ONEDANT numerically solves the one-dimensional, multigroup form of the neutral-particle, steady-state form of the Boltzmann transport equation. The discrete-ordinates approximation is used for treating the angular variation of the particle distribution and the diamond-difference scheme is used for phase space discretization. Negative fluxes are eliminated by a local set-to-zero-and-correct algorithm. A standard inner (within-group) iteration, outer (energy-group-dependent source) iteration technique is used. Both inner and outer iterations are accelerated using the diffusion synthetic acceleration method.
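
    For reference, the diamond-difference scheme mentioned above couples cell-average and cell-edge angular fluxes; written here for slab geometry as a simplified illustration (ONEDANT itself also treats curvilinear geometries), the per-cell balance and closure for discrete direction \mu_m and cell i read

        \mu_m\left(\psi_{m,i+1/2} - \psi_{m,i-1/2}\right) + \sigma_{t,i}\,\Delta x_i\,\psi_{m,i} = \Delta x_i\,q_{m,i},
        \qquad
        \psi_{m,i} = \tfrac{1}{2}\left(\psi_{m,i+1/2} + \psi_{m,i-1/2}\right),

    so each inner-iteration sweep marches from the upstream to the downstream cell edge, one cell at a time, for every discrete ordinate.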

  17. Fitting the elementary rate constants of the P-gp transporter network in the hMDR1-MDCK confluent cell monolayer using a particle swarm algorithm.

    Directory of Open Access Journals (Sweden)

    Deep Agnani

    Full Text Available P-glycoprotein, a human multidrug resistance transporter, has been extensively studied due to its importance to human health and disease. In order to understand transport kinetics via P-gp, confluent cell monolayers overexpressing P-gp are widely used. The purpose of this study is to obtain the mass action elementary rate constants for P-gp's transport and to functionally characterize members of P-gp's network, i.e., other transporters that transport P-gp substrates in hMDR1-MDCKII confluent cell monolayers and are essential to the net substrate flux. Transport of a range of concentrations of amprenavir, loperamide, quinidine and digoxin across the confluent monolayer of cells was measured in both directions, apical to basolateral and basolateral to apical. We developed a global optimization algorithm using the Particle Swarm method that can simultaneously fit all datasets to yield accurate and exhaustive fits of these elementary rate constants. The statistical sensitivity of the fitted values was determined by using 24 identical replicate fits, yielding simple averages and standard deviations for all of the kinetic parameters, including the efflux active P-gp surface density. Digoxin required additional basolateral and apical transporters, while loperamide required just a basolateral transporter. The data were better fit by assuming bidirectional transporters, rather than active importers, suggesting that they are not MRP or active OATP transporters. The P-gp efflux rate constants for quinidine and digoxin were about 3-fold smaller than reported ATP hydrolysis rate constants from P-gp proteoliposomes. This suggests a roughly 3∶1 stoichiometry between ATP hydrolysis and P-gp transport for these two drugs. The fitted values of the elementary rate constants for these P-gp substrates support the hypotheses that the selective pressures on P-gp are to maintain a broad substrate range and to keep xenobiotics out of the cytosol, but not out of the

  18. Transport with three-particle interaction

    International Nuclear Information System (INIS)

    Morawetz, K.

    2000-01-01

    Starting from a point-like two- and three-particle interaction, the kinetic equation is derived. While the drift term of the kinetic equation turns out to be determined by the known Skyrme mean field, the collision integral appears in two- and three-particle parts. The cross section results from the same microscopic footing and is naturally density dependent due to the three-particle force. In this way, no hybrid model for drift and cross section is needed for nuclear transport. The resulting equation of state has, besides the mean field correlation energy, also two- and three-particle correlation energies, both of which are calculated analytically for the ground state. These energies contribute to the equation of state and lead to the occurrence of a maximum in the total energy at 3 times nuclear density. (author)

  19. Optimization of China Crude Oil Transportation Network with Genetic Ant Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Yao Wang

    2015-08-01

    Full Text Available Taking into consideration both shipping and pipeline transport, this paper first analysed the risk factors for different modes of crude oil import transportation. Then, based on the minimum of both transportation cost and overall risk, a multi-objective programming model was established to optimize the transportation network of crude oil import, and the genetic algorithm and ant colony algorithm were employed to solve the problem. The optimized result shows that VLCC (Very Large Crude Carrier is superior in long distance sea transportation, whereas pipeline transport is more secure than sea transport. Finally, this paper provides related safeguard suggestions on crude oil import transportation.

  20. An Adaptive Cultural Algorithm with Improved Quantum-behaved Particle Swarm Optimization for Sonar Image Detection.

    Science.gov (United States)

    Wang, Xingmei; Hao, Wenqian; Li, Qiming

    2017-12-18

    This paper proposes an adaptive cultural algorithm with improved quantum-behaved particle swarm optimization (ACA-IQPSO) to detect underwater sonar images. In the population space, to improve the searching ability of particles, the iteration count and the fitness values of particles are used as factors to adaptively adjust the contraction-expansion coefficient of the quantum-behaved particle swarm optimization algorithm (QPSO). The improved quantum-behaved particle swarm optimization algorithm (IQPSO) can make particles adjust their behaviours according to their quality. In the belief space, a new update strategy is adopted to update cultural individuals according to the idea of the update strategy in the shuffled frog leaping algorithm (SFLA). Moreover, to enhance the utilization of information in the population space and belief space, the accept function and influence function are redesigned in the new communication protocol. The experimental results show that ACA-IQPSO can obtain good clustering centres according to the grey-level distribution information of underwater sonar images, and accurately complete underwater object detection. Compared with other algorithms, the proposed ACA-IQPSO has good effectiveness, excellent adaptability, a powerful searching ability and high convergence efficiency. Meanwhile, the experimental results on the benchmark functions further demonstrate that the proposed ACA-IQPSO has better searching ability, convergence efficiency and stability.

  1. An Improved Particle Swarm Optimization Algorithm and Its Application in the Community Division

    Directory of Open Access Journals (Sweden)

    Jiang Hao

    2016-01-01

    Full Text Available With the deepening of research on complex networks, methods for detecting and classifying communities in social networks are springing up. In this paper, the basic particle swarm algorithm is improved based on the GN algorithm. Modularity is taken as the measure of community division [1]. For dynamic network community division, a scrolling calculation method is put forward. Experiments show that the improved particle swarm optimization algorithm can improve the accuracy of the community division and can also obtain a higher value of the modularity in the dynamic community.
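
    The record does not reproduce the modularity definition it optimizes; for reference, work of this kind typically uses the standard Newman-Girvan modularity

        Q = \frac{1}{2m}\sum_{ij}\left(A_{ij} - \frac{k_i k_j}{2m}\right)\delta(c_i, c_j),

    where A_{ij} is the adjacency matrix, k_i the degree of node i, m the total number of edges, and \delta(c_i, c_j) = 1 when nodes i and j are assigned to the same community.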

  2. Gyrokinetic particle simulation of neoclassical transport

    International Nuclear Information System (INIS)

    Lin, Z.; Tang, W.M.; Lee, W.W.

    1995-01-01

    A time varying weighting (δf ) scheme for gyrokinetic particle simulation is applied to a steady-state, multispecies simulation of neoclassical transport. Accurate collision operators conserving momentum and energy are developed and implemented. Simulation results using these operators are found to agree very well with neoclassical theory. For example, it is dynamically demonstrated that like-particle collisions produce no particle flux and that the neoclassical fluxes are ambipolar for an ion--electron plasma. An important physics feature of the present scheme is the introduction of toroidal flow to the simulations. Simulation results are in agreement with the existing analytical neoclassical theory. The poloidal electric field associated with toroidal mass flow is found to enhance density gradient-driven electron particle flux and the bootstrap current while reducing temperature gradient-driven flux and current. Finally, neoclassical theory in steep gradient profile relevant to the edge regime is examined by taking into account finite banana width effects. In general, in the present work a valuable new capability for studying important aspects of neoclassical transport inaccessible by conventional analytical calculation processes is demonstrated. copyright 1995 American Institute of Physics

  3. Noise effect in an improved conjugate gradient algorithm to invert particle size distribution and the algorithm amendment.

    Science.gov (United States)

    Wei, Yongjie; Ge, Baozhen; Wei, Yaolin

    2009-03-20

    In general, model-independent algorithms are sensitive to noise during laser particle size measurement. An improved conjugate gradient algorithm (ICGA) that can be used to invert particle size distribution (PSD) from diffraction data is presented. By use of the ICGA to invert simulated data with multiplicative or additive noise, we determined that additive noise is the main factor that induces distorted results. Thus the ICGA is amended by introduction of an iteration step-adjusting parameter and is used experimentally on simulated data and some samples. The experimental results show that the sensitivity of the ICGA to noise is reduced and the inverted results are in accord with the real PSD.

  4. Particle Identification algorithm for the CLIC ILD and CLIC SiD detectors

    CERN Document Server

    Nardulli, J

    2011-01-01

    This note describes the algorithm presently used to determine the particle identification performance for single particles for the CLIC ILD and CLIC SiD detector concepts as prepared in the CLIC Conceptual Design Report.

  5. The OpenMC Monte Carlo particle transport code

    International Nuclear Information System (INIS)

    Romano, Paul K.; Forget, Benoit

    2013-01-01

    Highlights: ► An open source Monte Carlo particle transport code, OpenMC, has been developed. ► Solid geometry and continuous-energy physics allow high-fidelity simulations. ► Development has focused on high performance and modern I/O techniques. ► OpenMC is capable of scaling up to hundreds of thousands of processors. ► Results on a variety of benchmark problems agree with MCNP5. -- Abstract: A new Monte Carlo code called OpenMC is currently under development at the Massachusetts Institute of Technology as a tool for simulation on high-performance computing platforms. Given that many legacy codes do not scale well on existing and future parallel computer architectures, OpenMC has been developed from scratch with a focus on high performance scalable algorithms as well as modern software design practices. The present work describes the methods used in the OpenMC code and demonstrates the performance and accuracy of the code on a variety of problems.

  6. Particle Acceleration and Fractional Transport in Turbulent Reconnection

    Science.gov (United States)

    Isliker, Heinz; Pisokas, Theophilos; Vlahos, Loukas; Anastasiadis, Anastasios

    2017-11-01

    We consider a large-scale environment of turbulent reconnection that is fragmented into a number of randomly distributed unstable current sheets (UCSs), and we statistically analyze the acceleration of particles within this environment. We address two important cases of acceleration mechanisms when particles interact with the UCS: (a) electric field acceleration and (b) acceleration by reflection at contracting islands. Electrons and ions are accelerated very efficiently, attaining an energy distribution of power-law shape with an index 1-2, depending on the acceleration mechanism. The transport coefficients in energy space are estimated from test-particle simulation data, and we show that the classical Fokker-Planck (FP) equation fails to reproduce the simulation results when the transport coefficients are inserted into it and it is solved numerically. The cause for this failure is that the particles perform Levy flights in energy space, while the distributions of the energy increments exhibit power-law tails. We then use the fractional transport equation (FTE) derived by Isliker et al., whose parameters and the order of the fractional derivatives are inferred from the simulation data, and solving the FTE numerically, we show that the FTE successfully reproduces the kinetic energy distribution of the test particles. We discuss in detail the analysis of the simulation data and the criteria that allow one to judge the appropriateness of either an FTE or a classical FP equation as a transport model.

  8. Creating and using a type of free-form geometry in Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Wessol, D.E.; Wheeler, F.J.

    1993-01-01

    While the reactor physicists were fine-tuning the Monte Carlo paradigm for particle transport in regular geometries, the computer scientists were developing rendering algorithms to display extremely realistic renditions of irregular objects ranging from the ubiquitous teakettle to dynamic Jell-O. Even though the modeling methods share a common basis, the initial strategies each discipline developed for variance reduction were remarkably different. Initially, the reactor physicist used Russian roulette, importance sampling, particle splitting, and rejection techniques. In the early stages of development, the computer scientist relied primarily on rejection techniques, including a very elegant hierarchical construction and sampling method. This sampling method allowed the computer scientist to viably track particles through irregular geometries in three-dimensional space, while the initial methods developed by the reactor physicists would only allow for efficient searches through analytical surfaces or objects. As time goes by, it appears there has been some merging of the variance reduction strategies between the two disciplines. This is an early (possibly first) incorporation of geometric hierarchical construction and sampling into the reactor physicists' Monte Carlo transport model that permits efficient tracking through nonuniform rational B-spline surfaces in three-dimensional space. After some discussion, the results from this model are compared with experiments and the model employing implicit (analytical) geometric representation

  9. Particle and heavy ion transport code system; PHITS

    International Nuclear Information System (INIS)

    Niita, Koji

    2004-01-01

    Intermediate and high energy nuclear data are strongly required in the design study of many facilities such as accelerator-driven systems and intense pulsed spallation neutron sources, and also in medical and space technology. There are, however, few evaluated nuclear data for intermediate and high energy nuclear reactions. Therefore, we have to use models or systematics for the cross sections, which are essential ingredients of a high energy particle and heavy ion transport code used to estimate neutron yield, heat deposition and many other quantities of the transport phenomena in materials. We have developed a general purpose particle and heavy ion transport Monte Carlo code system, PHITS (Particle and Heavy Ion Transport code System), based on the NMTC/JAM code, through the collaboration of Tohoku University, JAERI and RIST. PHITS has three important ingredients which enable us to calculate (1) high energy nuclear reactions up to 200 GeV, (2) heavy ion collisions and their transport in material, and (3) low energy neutron transport based on evaluated nuclear data. In PHITS, the cross sections of high energy nuclear reactions are obtained with the JAM model. JAM (Jet AA Microscopic Transport Model) is a hadronic cascade model which explicitly treats all established hadronic states, including resonances, with all hadron-hadron cross sections parametrized based on the resonance model and string model by fitting the available experimental data. PHITS can describe the transport of heavy ions and their collisions by making use of the JQMD and SPAR codes. JQMD (JAERI Quantum Molecular Dynamics) is a simulation code for nucleus-nucleus collisions based on molecular dynamics. The SPAR code is widely used to calculate stopping powers and ranges for charged particles and heavy ions. PHITS has included part of the MCNP4C code, by which the transport of low energy neutrons, photons and electrons based on evaluated nuclear data can be described. Furthermore, the high energy nuclear

  10. Research on Multiple Particle Swarm Algorithm Based on Analysis of Scientific Materials

    Directory of Open Access Journals (Sweden)

    Zhao Hongwei

    2017-01-01

    Full Text Available This paper proposes an improved particle swarm optimization algorithm based on an analysis of scientific materials. The core idea of MPSO (Multiple Particle Swarm Algorithm) is to extend single-population PSO to interacting multi-swarms, which addresses the problem of being trapped in local minima during later iterations due to a lack of diversity. The simulation results show that the convergence rate is fast, the search performance is good, and very good results have been achieved.

  11. An External Archive-Guided Multiobjective Particle Swarm Optimization Algorithm.

    Science.gov (United States)

    Zhu, Qingling; Lin, Qiuzhen; Chen, Weineng; Wong, Ka-Chun; Coello Coello, Carlos A; Li, Jianqiang; Chen, Jianyong; Zhang, Jun

    2017-09-01

    The selection of swarm leaders (i.e., the personal best and global best) is important in the design of a multiobjective particle swarm optimization (MOPSO) algorithm. Such leaders are expected to effectively guide the swarm to approach the true Pareto optimal front. In this paper, we present a novel external archive-guided MOPSO algorithm (AgMOPSO), where the leaders for velocity update are all selected from the external archive. In our algorithm, multiobjective optimization problems (MOPs) are transformed into a set of subproblems using a decomposition approach, and then each particle is assigned accordingly to optimize each subproblem. A novel archive-guided velocity update method is designed to guide the swarm for exploration, and the external archive is also evolved using an immune-based evolutionary strategy. These proposed approaches speed up the convergence of AgMOPSO. The experimental results fully demonstrate the superiority of our proposed AgMOPSO in solving most of the test problems adopted, in terms of two commonly used performance measures. Moreover, the effectiveness of our proposed archive-guided velocity update method and immune-based evolutionary strategy is also experimentally validated on more than 30 test MOPs.
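
    The abstract does not state which decomposition is used; a common choice in decomposition-based multiobjective algorithms (given here only as an illustrative assumption) is the Tchebycheff scalarization, in which each particle minimizes one subproblem of the form

        g^{te}(x \mid \lambda, z^{*}) = \max_{1 \le i \le m} \lambda_i \left| f_i(x) - z_i^{*} \right|,

    where \lambda is a weight vector assigned to the particle and z^{*} is the ideal point formed from the best objective values found so far.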

  12. A Parallel Adaptive Particle Swarm Optimization Algorithm for Economic/Environmental Power Dispatch

    Directory of Open Access Journals (Sweden)

    Jinchao Li

    2012-01-01

    Full Text Available A parallel adaptive particle swarm optimization algorithm (PAPSO) is proposed for economic/environmental power dispatch, which can overcome premature convergence, slow convergence in the late evolutionary phase, and the lack of good search direction in the particles' evolutionary process. A search population is randomly divided into several subpopulations. Then for each subpopulation, the optimal solution is searched for synchronously using the proposed method, and thus parallel computing is realized. To avoid converging to a local optimum, a crossover operator is introduced to exchange information among the subpopulations while the diversity of the population is sustained. Simulation results show that the proposed algorithm can effectively solve the economic/environmental operation problem of hydropower generating units. Performance comparisons show that the solution from the proposed method is better than those from the conventional particle swarm algorithm and other optimization algorithms.

  13. Transport of Particle Swarms Through Fractures

    Science.gov (United States)

    Boomsma, E.; Pyrak-Nolte, L. J.

    2011-12-01

    The transport of engineered micro- and nano-scale particles through fractured rock is often assumed to occur as dispersions or emulsions. Another potential transport mechanism is the release of particle swarms from natural or industrial processes where small liquid drops, containing thousands to millions of colloidal-size particles, are released over time from seepage or leaks. Swarms have higher velocities than any individual colloid because the interactions among the particles maintain the cohesiveness of the swarm as it falls under gravity. Thus particle swarms give rise to the possibility that engineered particles may be transported farther and faster in fractures than predicted by traditional dispersion models. In this study, the effect of fractures on colloidal swarm cohesiveness and evolution was studied as a swarm falls under gravity and interacts with fracture walls. Transparent acrylic was used to fabricate synthetic fracture samples with either (1) a uniform aperture or (2) a converging aperture followed by a uniform aperture (funnel-shaped). The samples consisted of two blocks that measured 100 x 100 x 50 mm. The separation between these blocks determined the aperture (0.5 mm to 50 mm). During experiments, a fracture was fully submerged in water and swarms were released into it. The swarms consisted of dilute suspensions of either 25 micron soda-lime glass beads (2% by mass) or 3 micron polystyrene fluorescent beads (1% by mass) with an initial volume of 5μL. The swarms were illuminated with a green (525 nm) LED array and imaged optically with a CCD camera. In the uniform aperture fracture, the speed of the swarm prior to bifurcation increased with aperture up to a maximum at a fracture width of approximately 10 mm. For apertures greater than ~15 mm, the velocity was essentially constant with fracture width (but less than at 10 mm). This peak suggests that two competing mechanisms affect swarm velocity in fractures. The wall provides both drag, which

  14. Faster Heavy Ion Transport for HZETRN

    Science.gov (United States)

    Slaba, Tony C.

    2013-01-01

    The deterministic particle transport code HZETRN was developed to enable fast and accurate space radiation transport through materials. As more complex transport solutions are implemented for neutrons, light ions (Z ≤ 2) and heavy ions, computational cost increases. In this work, the heavy ion (Z > 2) transport algorithm in HZETRN is reviewed, and a simple modification is shown to provide an approximate 5x decrease in execution time for galactic cosmic ray transport. Convergence tests and other comparisons are carried out to verify that numerical accuracy is maintained in the new algorithm.

  15. A Local and Global Search Combined Particle Swarm Optimization Algorithm and Its Convergence Analysis

    Directory of Open Access Journals (Sweden)

    Weitian Lin

    2014-01-01

    Full Text Available The particle swarm optimization algorithm (PSOA) is an effective optimization tool. However, it has a tendency to get stuck in near-optimal solutions, especially for middle- and large-size problems, and it is difficult to improve solution accuracy by fine-tuning parameters. To address this shortcoming, this paper investigates a local and global search combined particle swarm optimization algorithm (LGSCPSOA), analyzes its convergence, and obtains its convergence conditions. At the same time, it is tested on a set of 8 benchmark continuous functions and its optimization results are compared with those of the original particle swarm optimization algorithm (OPSOA). Experimental results indicate that LGSCPSOA significantly improves the search performance, especially on the middle- and large-size benchmark functions.

  16. Transient fluctuation relations for time-dependent particle transport

    Science.gov (United States)

    Altland, Alexander; de Martino, Alessandro; Egger, Reinhold; Narozhny, Boris

    2010-09-01

    We consider particle transport under the influence of time-varying driving forces, where fluctuation relations connect the statistics of pairs of time-reversed evolutions of physical observables. In many “mesoscopic” transport processes, the effective many-particle dynamics is dominantly classical while the microscopic rates governing particle motion are of quantum-mechanical origin. We here employ the stochastic path-integral approach as an optimal tool to probe the fluctuation statistics in such applications. Describing the classical limit of the Keldysh quantum nonequilibrium field theory, the stochastic path integral encapsulates the quantum origin of microscopic particle exchange rates. Dynamically, it is equivalent to a transport master equation which is a formalism general enough to describe many applications of practical interest. We apply the stochastic path integral to derive general functional fluctuation relations for current flow induced by time-varying forces. We show that the successive measurement processes implied by this setup do not put the derivation of quantum fluctuation relations in jeopardy. While in many cases the fluctuation relation for a full time-dependent current profile may contain excessive information, we formulate a number of reduced relations, and demonstrate their application to mesoscopic transport. Examples include the distribution of transmitted charge, where we show that the derivation of a fluctuation relation requires the combined monitoring of the statistics of charge and work.

  17. On the use of diffusion synthetic acceleration in parallel 3D neutral particle transport calculations

    International Nuclear Information System (INIS)

    Brown, P.; Chang, B.

    1998-01-01

    The linear Boltzmann transport equation (BTE) is an integro-differential equation arising in deterministic models of neutral and charged particle transport. In slab (one-dimensional Cartesian) geometry and certain higher-dimensional cases, Diffusion Synthetic Acceleration (DSA) is known to be an effective algorithm for the iterative solution of the discretized BTE. Fourier and asymptotic analyses have been applied to various idealizations (e.g., problems on infinite domains with constant coefficients) to obtain sharp bounds on the convergence rate of DSA in such cases. While DSA has been shown to be a highly effective acceleration (or preconditioning) technique in one-dimensional problems, it has been observed to be less effective in higher dimensions. This is due in part to the expense of solving the related diffusion linear system. We investigate here the effectiveness of a parallel semicoarsening multigrid (SMG) solution approach to DSA preconditioning in several three dimensional problems. In particular, we consider the algorithmic and implementation scalability of a parallel SMG-DSA preconditioner on several types of test problems

  18. Drift Wave Test Particle Transport in Reversed Shear Profile

    International Nuclear Information System (INIS)

    Horton, W.; Park, H.B.; Kwon, J.M.; Stronzzi, D.; Morrison, P.J.; Choi, D.I.

    1998-01-01

    Drift wave maps, area preserving maps that describe the motion of charged particles in drift waves, are derived. The maps allow the integration of particle orbits on the long time scale needed to describe transport. Calculations using the drift wave maps show that dramatic improvement in the particle confinement, in the presence of a given level and spectrum of E x B turbulence, can occur for q(r)-profiles with reversed shear. A similar reduction in the transport, i.e. one that is independent of the turbulence, is observed in the presence of an equilibrium radial electric field with shear. The transport reduction, caused by the combined effects of radial electric field shear and both monotonic and reversed shear magnetic q-profiles, is also investigated

  19. Optimization of multi-objective micro-grid based on improved particle swarm optimization algorithm

    Science.gov (United States)

    Zhang, Jian; Gan, Yang

    2018-04-01

    The paper presents a multi-objective optimal configuration model for an independent micro-grid with the aims of economy and environmental protection. The Pareto solution set can be obtained by solving the multi-objective optimization configuration model of the micro-grid with the improved particle swarm algorithm. The feasibility of the improved particle swarm optimization algorithm for the multi-objective model is verified, which provides an important reference for multi-objective optimization of independent micro-grids.

  20. Directed transport of confined Brownian particles with torque

    Science.gov (United States)

    Radtke, Paul K.; Schimansky-Geier, Lutz

    2012-05-01

    We investigate the influence of an additional torque on the motion of Brownian particles confined in a channel geometry with varying width. The particles are driven by random fluctuations modeled by an Ornstein-Uhlenbeck process with given correlation time τc. The latter causes persistent motion and is implemented as (i) thermal noise in equilibrium and (ii) noisy propulsion in nonequilibrium. In the nonthermal process a directed transport emerges; its properties are studied in detail with respect to the correlation time, the torque, and the channel geometry. Eventually, the transport mechanism is traced back to a persistent sliding of particles along the even boundaries in contrast to scattered motion at uneven or rough ones.
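
    As a minimal sketch of the driving process described above, the snippet below integrates a single particle pushed by Ornstein-Uhlenbeck noise with correlation time tau_c using an Euler-Maruyama step; the parameter values and the bare overdamped update are assumptions for the sketch, and the channel geometry and torque of the paper are omitted.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, n_steps = 1e-3, 100_000
        tau_c, D = 0.5, 1.0              # correlation time and noise intensity (illustrative)

        x, eta = 0.0, 0.0                # particle position and OU driving force
        for _ in range(n_steps):
            # Ornstein-Uhlenbeck update: tau_c * d(eta) = -eta dt + sqrt(2 D) dW
            eta += (-eta * dt + np.sqrt(2.0 * D * dt) * rng.normal()) / tau_c
            x += eta * dt                # overdamped position update driven by eta
        print(x)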

  1. Variational Algorithms for Test Particle Trajectories

    Science.gov (United States)

    Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.

    2015-11-01

    The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.

  2. Methane Bubbles Transport Particles From Contaminated Sediment to a Lake Surface

    Science.gov (United States)

    Delwiche, K.; Hemond, H.

    2017-12-01

    Methane bubbling from aquatic sediments has long been known to transport carbon to the atmosphere, but new evidence presented here suggests that methane bubbles also transport particulate matter to a lake surface. This transport pathway is of particular importance in lakes with contaminated sediments, as bubble transport could increase human exposure to toxic metals. The Upper Mystic Lake in Arlington, MA has a documented history of methane bubbling and sediment contamination by arsenic and other heavy metals, and we have conducted laboratory and field studies demonstrating that methane bubbles are capable of transporting sediment particles over depths as great as 15 m in Upper Mystic Lake. Methane bubble traps were used in-situ to capture particles adhered to bubble interfaces, and to relate particle mass transport to bubble flux. Laboratory studies were conducted in a custom-made 15 m tall water column to quantify the relationship between water column height and the mass of particulate transport. We then couple this particle transport data with historical estimates of ebullition from Upper Mystic Lake to quantify the significance of bubble-mediated particle transport to heavy metal cycling within the lake. Results suggest that methane bubbles can represent a significant pathway for contaminated sediment to reach surface waters even in relatively deep water bodies. Given the frequent co-occurrence of contaminated sediments and high bubble flux rates, and the potential for human exposure to heavy metals, it will be critical to study the significance of this transport pathway for a range of sediment and contaminant types.

  3. The Improved Locating Algorithm of Particle Filter Based on ROS Robot

    Science.gov (United States)

    Fang, Xun; Fu, Xiaoyang; Sun, Ming

    2018-03-01

    This paper analyzes the basic theory and primary algorithms of the real-time localization system and SLAM technology based on ROS robots. It proposes an improved particle filter localization algorithm that effectively reduces the matching time between the laser radar and the map; additional ultra-wideband technology directly accelerates the global efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, re-sampling has been reduced by about 5/6, which directly cancels the corresponding matching behavior in the robotics algorithm.
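
    For orientation, the snippet below shows one predict/weight/resample cycle of a generic 1-D bootstrap particle filter, with resampling triggered only when the effective sample size drops, which is one simple way to cut down on resampling work; the motion and measurement models, noise levels and threshold are assumptions for the sketch, not the paper's algorithm.

        import numpy as np

        rng = np.random.default_rng(1)

        def particle_filter_step(particles, weights, control, measurement,
                                 motion_noise=0.1, meas_noise=0.5):
            """One predict/update/resample cycle of a 1-D bootstrap particle filter."""
            # predict: propagate every particle through the motion model
            particles = particles + control + rng.normal(0.0, motion_noise, particles.size)
            # update: re-weight particles by the Gaussian measurement likelihood
            weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
            weights = weights + 1e-300
            weights = weights / weights.sum()
            # resample only when the effective sample size is low (limits resampling work)
            if 1.0 / np.sum(weights ** 2) < 0.5 * particles.size:
                idx = rng.choice(particles.size, particles.size, p=weights)
                particles = particles[idx]
                weights = np.full(particles.size, 1.0 / particles.size)
            return particles, weights

        particles = rng.uniform(0.0, 10.0, 500)
        weights = np.full(500, 1.0 / 500)
        particles, weights = particle_filter_step(particles, weights, control=0.2, measurement=5.1)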

  4. A Novel Adaptive Particle Swarm Optimization Algorithm with Foraging Behavior in Optimization Design

    Directory of Open Access Journals (Sweden)

    Liu Yan

    2018-01-01

    Full Text Available The method of repeated trial and correction is generally used in conventional reducer design, but it is inefficient and the resulting reducer is often large. To address these problems, this paper presents an adaptive particle swarm optimization algorithm with foraging behavior. In this method, the bacterial foraging process is introduced into the adaptive particle swarm optimization algorithm, providing the functions of particle chemotaxis, swarming, reproduction, elimination and dispersal, to improve the ability of local search and avoid premature convergence. Verification on typical test functions and application to the optimization design of a reducer structure with discrete and continuous variables show that the new algorithm has the advantages of good reliability, a strong searching ability and high accuracy. It can be used in engineering design and has strong applicability.

  5. Transient particle transport studies at the W7-AS stellarator

    International Nuclear Information System (INIS)

    Koponen, J.

    2000-01-01

    One of the crucial problems in fusion research is the understanding of the transport of particles and heat in plasmas relevant for energy production. Extensive experimental transport studies have unraveled many details of heat transport in tokamaks and stellarators. However, due to larger experimental difficulties, the properties of particle transport have remained much less known. In particular, very few particle transport studies have been carried out in stellarators. This thesis summarises the transient particle transport experiments carried out at the Wendelstein 7-Advanced Stellarator (W7-AS). The main diagnostics tool was a 10-channel microwave interferometer. A technique for reconstructing the electron density profiles from the multichannel interferometer data was developed and implemented. The interferometer and the reconstruction software provide high quality electron density measurements with high temporal and sufficient spatial resolution. The density reconstruction is based on regularization methods studied during the development work. An extensive program of transient particle transport studies was carried out with the gas modulation method. The experiments resulted in a scaling expression for the diffusion coefficient. Transient inward convection was found in the edge plasma. The role of convection is minor in the core plasma, except at higher heating power, when an outward directed convective flux is observed. Radially peaked density profiles were found in discharges free of significant central density sources. Such density profiles are usually observed in tokamaks, but never before in W7-AS. Existence of an inward pinch is confirmed with two independent transient transport analysis methods. The density peaking is possible if the plasma is heated with extreme off-axis Electron Cyclotron Heating (ECH), when the temperature gradient vanishes in the core plasma, and if the gas puffing level is relatively low. The transport of plasma particles and heat

  6. Particle Filter-Based Target Tracking Algorithm for Magnetic Resonance-Guided Respiratory Compensation : Robustness and Accuracy Assessment

    NARCIS (Netherlands)

    Bourque, Alexandra E; Bedwani, Stéphane; Carrier, Jean-François; Ménard, Cynthia; Borman, Pim; Bos, Clemens; Raaymakers, Bas W; Mickevicius, Nikolai; Paulson, Eric; Tijssen, Rob H N

    PURPOSE: To assess overall robustness and accuracy of a modified particle filter-based tracking algorithm for magnetic resonance (MR)-guided radiation therapy treatments. METHODS AND MATERIALS: An improved particle filter-based tracking algorithm was implemented, which used a normalized

  7. Influence of coal slurry particle composition on pipeline hydraulic transportation behavior

    Science.gov (United States)

    Li-an, Zhao; Ronghuan, Cai; Tieli, Wang

    2018-02-01

    Acting as a new type of energy transportation mode, coal pipeline hydraulic transport can reduce the energy transportation cost and the fly ash pollution of conventional coal transportation. In this study, the effects of average velocity, particle size and pumping time on the particle composition of coal particles during hydraulic conveying were investigated by ring-tube tests. Meanwhile, the effects of changes in particle composition on slurry viscosity, transmission resistance and critical sedimentation velocity were studied based on the experimental data. The experimental and theoretical analysis indicates that changes in slurry particle composition can lead to changes in the viscosity, resistance and critical velocity of the slurry. Moreover, based on the previous studies, a critical velocity calculation model for coal slurry is proposed.

  8. Relativity primer for particle transport. A LASL monograph

    International Nuclear Information System (INIS)

    Everett, C.J.; Cashwell, E.D.

    1979-04-01

    The basic principles of special relativity involved in Monte Carlo transport problems are developed with emphasis on the possible transmutations of particles, and on computational methods. Charged particle ballistics and polarized scattering are included, as well as a discussion of colliding beams

  9. Particle transport in urban dwellings

    International Nuclear Information System (INIS)

    Cannell, R.J.; Goddard, A.J.H.; ApSimon, H.M.

    1988-01-01

    A quantitative investigation of the potential for contamination of a dwelling by material carried in on the occupants' footwear has been completed. Data are now available on the transport capacity of different footwear for a small range of particle sizes and contamination source strengths. Additional information is also given on the rate of redistribution

  10. Particle simulation algorithms with short-range forces in MHD and fluid flow

    International Nuclear Information System (INIS)

    Cable, S.; Tajima, T.; Umegaki, K.

    1992-07-01

    Attempts are made to develop numerical algorithms for handling fluid flows involving liquids and liquid-gas mixtures. In these types of systems, the short-range intermolecular interactions are important enough to significantly alter behavior predicted on the basis of standard fluid mechanics and magnetohydrodynamics alone. We have constructed a particle-in-cell (PIC) code for the purpose of studying the effects of these interactions. Of the algorithms considered, the one which has been successfully implemented is based on a MHD particle code developed by Brunel et al. In the version presented here, short range forces are included in particle motion by, first, calculating the forces between individual particles and then, to prevent aliasing, interpolating these forces to the computational grid points, then interpolating the forces back to the particles. The code has been used to model a simple two-fluid Rayleigh-Taylor instability. Limitations to the accuracy of the code exist at short wavelengths, where the effects of the short-range forces would be expected to be most pronounced
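
    The scatter/gather smoothing described above (particle-pair forces deposited on grid points and then interpolated back to the particles) can be sketched in 1-D with linear cloud-in-cell weights; the averaging by deposited weight and the periodic indexing below are illustrative assumptions, not the scheme of the cited code.

        import numpy as np

        def smooth_pair_forces(x, f_pair, n_grid, length):
            """Deposit per-particle forces on a 1-D periodic grid (CIC weights) and gather back."""
            dx = length / n_grid
            s = x / dx
            j = s.astype(int) % n_grid           # left grid node of each particle
            frac = s - np.floor(s)               # fractional distance to that node
            grid_f = np.zeros(n_grid)
            grid_w = np.zeros(n_grid)
            for i in range(x.size):              # scatter: particle forces -> grid
                jl, jr = j[i], (j[i] + 1) % n_grid
                grid_f[jl] += (1.0 - frac[i]) * f_pair[i]
                grid_f[jr] += frac[i] * f_pair[i]
                grid_w[jl] += 1.0 - frac[i]
                grid_w[jr] += frac[i]
            grid_f = np.where(grid_w > 0.0, grid_f / np.maximum(grid_w, 1e-300), 0.0)
            # gather: grid-averaged forces -> particles
            return (1.0 - frac) * grid_f[j] + frac * grid_f[(j + 1) % n_grid]

        forces = smooth_pair_forces(np.array([0.3, 0.31, 2.7]), np.array([1.0, -1.0, 0.2]),
                                    n_grid=16, length=4.0)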

  11. Control of alpha-particle transport by ion cyclotron resonance heating

    International Nuclear Information System (INIS)

    Chang, C.S.; Imre, K.; Weitzner, H.; Colestock, P.

    1990-01-01

    In this paper, control of radial alpha-particle transport using ion cyclotron range of frequency (ICRF) waves is investigated in a large-aspect-ratio tokamak geometry. Spatially inhomogeneous ICRF wave energy with properly selected frequencies and wave numbers can induce fast convective transport of alpha particles at a speed of order v_α ∼ (P_RF/n_α ε_0)ρ_p, where P_RF is the ICRF wave power density, n_α is the alpha-particle density, ε_0 is the alpha-particle birth energy, and ρ_p is the poloidal gyroradius of alpha particles at the birth energy. Application to the International Thermonuclear Experimental Reactor (ITER) plasma is studied and possible antenna designs to control the alpha-particle flux are discussed.

  12. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    Science.gov (United States)

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local optimum problem of traditional particle swarm optimization in the process of estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure the particle swarm is optimized globally and to avoid it falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared to the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate in this study. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that with the improved entropy minimum algorithm. This algorithm can be applied to the correction of the MR image bias field.
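
    The record does not give the indicator or the weight-update rule; the sketch below shows one plausible shape of such an adaptive inertia weight, mapping a premature-convergence indicator (here the normalized spread of swarm fitness, an assumption) linearly onto the weight range.

        import numpy as np

        def adaptive_inertia(fitness, w_min=0.4, w_max=0.9):
            """Map a premature-convergence indicator onto an inertia weight (illustrative)."""
            spread = np.std(fitness) / (abs(np.mean(fitness)) + 1e-12)
            indicator = float(np.clip(spread, 0.0, 1.0))   # ~0 when the swarm has collapsed
            # low diversity -> larger inertia weight, pushing particles back to exploration
            return w_max - (w_max - w_min) * indicator

        w = adaptive_inertia(np.array([3.20, 3.10, 3.25, 3.18]))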

  13. General particle transport equation. Final report

    International Nuclear Information System (INIS)

    Lafi, A.Y.; Reyes, J.N. Jr.

    1994-12-01

    The general objectives of this research are as follows: (1) To develop fundamental models for fluid particle coalescence and breakage rates for incorporation into statistically based (Population Balance Approach or Monte Carlo Approach) two-phase thermal hydraulics codes. (2) To develop fundamental models for flow structure transitions based on stability theory and fluid particle interaction rates. This report details the derivation of the mass, momentum and energy conservation equations for a distribution of spherical, chemically non-reacting fluid particles of variable size and velocity. To study the effects of fluid particle interactions on interfacial transfer and flow structure requires detailed particulate flow conservation equations. The equations are derived using a particle continuity equation analogous to Boltzmann's transport equation. When coupled with the appropriate closure equations, the conservation equations can be used to model nonequilibrium, two-phase, dispersed, fluid flow behavior. Unlike the Eulerian volume and time averaged conservation equations, the statistically averaged conservation equations contain additional terms that take into account the change due to fluid particle interfacial acceleration and fluid particle dynamics. Two types of particle dynamics are considered; coalescence and breakage. Therefore, the rate of change due to particle dynamics will consider the gain and loss involved in these processes and implement phenomenological models for fluid particle breakage and coalescence

  14. 3D head pose estimation and tracking using particle filtering and ICP algorithm

    KAUST Repository

    Ben Ghorbel, Mahdi; Baklouti, Malek; Couvet, Serge

    2010-01-01

    This paper addresses the issue of 3D head pose estimation and tracking. Existing approaches generally need a huge database, a training procedure, manual initialization, or rely on manually extracted face features. We propose a framework for estimating the 3D head pose at a fine level and tracking it continuously across multiple degrees of freedom (DOF) based on ICP and particle filtering. We propose to approach the problem using 3D computational techniques, by aligning a face model to the 3D dense estimation computed by a stereo vision method, and propose a particle filter algorithm to refine and track the posterior estimate of the position of the face. This work comes with two contributions: the first concerns the alignment part, where we propose an extended ICP algorithm using an anisotropic scale transformation. The second contribution concerns the tracking part. We propose the use of a particle filtering algorithm and propose to constrain the search space using the ICP algorithm in the propagation step. The results show that the system is able to fit and track the head properly, and remains accurate on new individuals without manual adaptation or training. © Springer-Verlag Berlin Heidelberg 2010.

  15. Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization

    Science.gov (United States)

    Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard

    2002-01-01

    The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that demonstrates severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization as well as the numerical noise and truly discrete variables present in the current example problem.

  16. A Novel Radiation Transport Algorithm for Radiography Simulations

    International Nuclear Information System (INIS)

    Inanc, Feyzi

    2004-01-01

    The simulations used in the NDE community are becoming more realistic with the introduction of more physics. In this work, we have developed a new algorithm that is capable of representing photon and charged particle fluxes through spherical harmonic expansions in a manner similar to the well known discrete ordinates method, with the exception that the Boltzmann operator is treated through exact integration rather than conventional Legendre expansions. This approach provides a means to include radiation interactions for higher energy regimes where there are additional physical mechanisms for photons and charged particles.

  17. Lorentz covariant canonical symplectic algorithms for dynamics of charged particles

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong

    2016-12-01

    In this paper, the Lorentz covariance of algorithms is introduced. Under Lorentz transformation, both the form and performance of a Lorentz covariant algorithm are invariant. To acquire the advantages of symplectic algorithms and Lorentz covariance, a general procedure for constructing Lorentz covariant canonical symplectic algorithms (LCCSAs) is provided, based on which an explicit LCCSA for dynamics of relativistic charged particles is built. LCCSA possesses Lorentz invariance as well as long-term numerical accuracy and stability, due to the preservation of a discrete symplectic structure and the Lorentz symmetry of the system. For situations with time-dependent electromagnetic fields, which are difficult to handle in traditional construction procedures of symplectic algorithms, LCCSA provides a perfect explicit canonical symplectic solution by implementing the discretization in 4-spacetime. We also show that LCCSA has built-in energy-based adaptive time steps, which can optimize the computation performance when the Lorentz factor varies.

  18. Directed Transport of Brownian Particles in a Periodic Channel

    International Nuclear Information System (INIS)

    Jiang Jie; Ai Bao-Quan; Wu Jian-Chun

    2015-01-01

    The transport of Brownian particles in an infinite channel subject to an external force along the channel axis has been studied previously. In this paper, we study the transport of Brownian particles in an infinite channel subject to an external force along the channel axis together with an external force in the transversal direction. In this more sophisticated situation, some properties are similar to those of the simpler case, but some interesting new properties also appear. (paper)

  19. Inverse estimation of the spheroidal particle size distribution using Ant Colony Optimization algorithms in multispectral extinction technique

    Science.gov (United States)

    He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming

    2014-10-01

    Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the region ACO (RACO) algorithm, the stochastic ACO (SACO) algorithm and the homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the variation of the length of the rotational semi-axis is considered.

  20. Semi-analytic modeling of tokamak particle transport

    International Nuclear Information System (INIS)

    Shi Bingren; Long Yongxing; Li Jiquan

    2000-01-01

    The linear particle transport equation of a tokamak plasma is analyzed. The particle flow consists of an outward diffusion and an inward convection. The general solution is expressed in terms of a Green function constituted by the eigenfunctions of the corresponding Sturm-Liouville problem. For a particle source near the plasma edge (shadow fueling), a well-behaved solution in terms of a Fourier series can be constructed by using the complementarity relation. It can be seen from the lowest eigenfunction that the particle density becomes peaked when the wall recycling is reduced. For a transient point source in the inner region, a well-behaved solution can be obtained by the complementarity relation as well.

  1. An efficient particle Fokker–Planck algorithm for rarefied gas flows

    Energy Technology Data Exchange (ETDEWEB)

    Gorji, M. Hossein; Jenny, Patrick

    2014-04-01

    This paper is devoted to the algorithmic improvement and careful analysis of the Fokker–Planck kinetic model derived by Jenny et al. [1] and Gorji et al. [2]. The motivation behind Fokker–Planck based particle methods is to gain efficiency in low Knudsen number rarefied gas flow simulations, where conventional direct simulation Monte Carlo (DSMC) becomes expensive. This can be achieved due to the fact that the resulting model equations are continuous stochastic differential equations in velocity space. Accordingly, the computational particles evolve along independent stochastic paths and thus no collisions need to be calculated. Therefore the computational cost of the solution algorithm becomes independent of the Knudsen number. In the present study, different computational improvements were pursued in order to augment the method, including an accurate time integration scheme, local time stepping and noise reduction. For assessment of the performance, gas flow around a cylinder and lid driven cavity flow were studied. Convergence rates, accuracy and computational costs were compared with respect to DSMC for a range of Knudsen numbers (from the hydrodynamic regime up to Knudsen numbers above one). In all the considered cases, the model together with the proposed scheme gives rise to very efficient yet accurate solution algorithms.
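
    The key point of the abstract is that each computational particle follows an independent stochastic differential equation in velocity space, so no binary collisions are computed. The sketch below illustrates this with the simplest linear Langevin (Ornstein-Uhlenbeck) relaxation of an ensemble of particle velocities toward a Maxwellian; it is not the cubic Fokker-Planck drift model of the cited papers, and the relaxation time, temperature and time step are assumed values.

```python
# Sketch of the idea behind Fokker-Planck particle methods: every particle's
# velocity evolves along its own stochastic path, so no collision pairs are
# needed. Shown: linear Langevin (Ornstein-Uhlenbeck) relaxation toward a
# Maxwellian; tau, T and dt are illustrative assumptions, not the cited model.
import numpy as np

rng = np.random.default_rng(1)
n_particles, tau, T, dt, n_steps = 10000, 1.0, 1.0, 0.01, 500

v = rng.normal(0.0, 3.0, size=(n_particles, 3))      # far-from-equilibrium start
for _ in range(n_steps):
    drift = -(v - v.mean(axis=0)) / tau * dt          # relaxation toward the mean
    diffusion = np.sqrt(2.0 * T / tau * dt) * rng.normal(size=v.shape)
    v += drift + diffusion

print("temperature estimate:", v.var())   # should relax toward T = 1
```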

  2. PHITS-a particle and heavy ion transport code system

    International Nuclear Information System (INIS)

    Niita, Koji; Sato, Tatsuhiko; Iwase, Hiroshi; Nose, Hiroyuki; Nakashima, Hiroshi; Sihver, Lembit

    2006-01-01

    The paper presents a summary of the recent development of the multi-purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS. In particular, we discuss in detail the development of two new models, JAM and JQMD, for high energy particle interactions, incorporated in PHITS, and show comparisons between model calculations and experiments for the validation of these models. The paper presents three applications of the code: the spallation neutron source, heavy ion therapy and space radiation. The results and examples shown indicate that PHITS is capable of carrying out radiation transport analysis of almost all particles, including heavy ions, within a wide energy range.

  3. Parallel/vector algorithms for the spherical SN transport theory method

    International Nuclear Information System (INIS)

    Haghighat, A.; Mattis, R.E.

    1990-01-01

    This paper discusses vector and parallel processing of a 1-D curvilinear (i.e. spherical) S_N transport theory algorithm on the Cornell National SuperComputer Facility (CNSF) IBM 3090/600E. Two different vector algorithms were developed and parallelized based on angular decomposition. It is shown that significant speedups are attainable. For example, for problems with large granularity, using 4 processors, the parallel/vector algorithm achieves speedups (for wall-clock time) of more than 4.5 relative to the old serial/scalar algorithm. Furthermore, this work has demonstrated the existing potential for the development of faster processing vector and parallel algorithms for multidimensional curvilinear geometries. (author)

  4. Genetic particle swarm parallel algorithm analysis of optimization arrangement on mistuned blades

    Science.gov (United States)

    Zhao, Tianyu; Yuan, Huiqun; Yang, Wenjun; Sun, Huagang

    2017-12-01

    This article introduces a method of mistuned parameter identification which consists of static frequency testing of blades, dichotomy and finite element analysis. A lumped parameter model of an engine bladed-disc system is then set up. A blade arrangement optimization method, namely the genetic particle swarm optimization algorithm, is presented. It combines a discrete particle swarm optimization with a genetic algorithm, from which both local and global search ability are obtained. CUDA-based co-evolution particle swarm optimization, using a graphics processing unit, is presented and its performance is analysed. The results show that using the optimization results can reduce the amplitude and localization of the forced vibration response of a bladed-disc system, while optimization based on the CUDA framework can improve the computing speed. This method could provide support for engineering applications in terms of effectiveness and efficiency.

  5. Numerical computation of discrete differential scattering cross sections for Monte Carlo charged particle transport

    International Nuclear Information System (INIS)

    Walsh, Jonathan A.; Palmer, Todd S.; Urbatsch, Todd J.

    2015-01-01

    Highlights: • Generation of discrete differential scattering angle and energy loss cross sections. • Gauss–Radau quadrature utilizing numerically computed cross section moments. • Development of a charged particle transport capability in the Milagro IMC code. • Integration of cross section generation and charged particle transport capabilities. - Abstract: We investigate a method for numerically generating discrete scattering cross sections for use in charged particle transport simulations. We describe the cross section generation procedure and compare it to existing methods used to obtain discrete cross sections. The numerical approach presented here is generalized to allow greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data computed with this method compare favorably with discrete data generated with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code, Milagro. We verify the implementation of charged particle transport in Milagro with analytic test problems and we compare calculated electron depth–dose profiles with another particle transport code that has a validated electron transport capability. Finally, we investigate the integration of the new discrete cross section generation method with the charged particle transport capability in Milagro.

  6. A decoupled power flow algorithm using particle swarm optimization technique

    International Nuclear Information System (INIS)

    Acharjee, P.; Goswami, S.K.

    2009-01-01

    A robust, nondivergent power flow method has been developed using the particle swarm optimization (PSO) technique. The decoupling properties between the power system quantities have been exploited in developing the power flow algorithm. The speed of the power flow algorithm has been improved using a simple perturbation technique. The basic power flow algorithm and the improvement scheme have been designed to retain the simplicity of the evolutionary approach. The power flow is rugged, can determine the critical loading conditions and can also handle flexible alternating current transmission system (FACTS) devices efficiently. Test results on standard test systems show that the proposed method can find the solution when the standard power flows fail.

  7. Application of Dynamic Mutated Particle Swarm Optimization Algorithm to Design Water Distribution Networks

    Directory of Open Access Journals (Sweden)

    Kazem Mohammadi- Aghdam

    2015-10-01

    Full Text Available This paper proposes the application of a new version of the heuristic particle swarm optimization (PSO method for designing water distribution networks (WDNs. The optimization problem of looped water distribution networks is recognized as an NP-hard combinatorial problem which cannot be easily solved using traditional mathematical optimization techniques. In this paper, the concept of dynamic swarm size is considered in an attempt to increase the convergence speed of the original PSO algorithm. In this strategy, the size of the swarm is dynamically changed according to the iteration number of the algorithm. Furthermore, a novel mutation approach is introduced to increase the diversification property of the PSO and to help the algorithm to avoid trapping in local optima. The new version of the PSO algorithm is called dynamic mutated particle swarm optimization (DMPSO. The proposed DMPSO is then applied to solve WDN design problems. Finally, two illustrative examples are used for comparison to verify the efficiency of the proposed DMPSO as compared to other intelligent algorithms.
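
    A minimal sketch of a particle swarm optimizer augmented with a random-reset mutation step, in the spirit of the diversification described above, is given below; it does not reproduce the paper's dynamic swarm-size rule, and the test function, mutation probability and PSO coefficients are illustrative assumptions.

```python
# Hedged sketch of a PSO with a simple mutation operator for diversification.
# Not the DMPSO of the paper: the dynamic swarm-size rule is omitted and all
# coefficients, the mutation rate and the test function are assumptions.
import numpy as np

def pso_with_mutation(f, dim=10, n_particles=30, iters=200,
                      w=0.7, c1=1.5, c2=1.5, p_mut=0.05, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(2)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        mutate = rng.random(x.shape) < p_mut          # random-reset mutation
        x[mutate] = rng.uniform(lo, hi, mutate.sum())
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best_x, best_f = pso_with_mutation(lambda z: np.sum(z**2))   # sphere test function
print("best objective:", best_f)
```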

  8. Algorithm for Public Electric Transport Schedule Control for Intelligent Embedded Devices

    Science.gov (United States)

    Alps, Ivars; Potapov, Andrey; Gorobetz, Mikhail; Levchenkov, Anatoly

    2010-01-01

    In this paper the authors present a heuristic algorithm for precise schedule fulfilment in city traffic conditions, taking into account traffic lights. The algorithm is intended for a programmable logic controller (PLC), which is proposed to be installed in an electric vehicle to control its motion speed and the signals of traffic lights. The algorithm is tested using a real controller connected to virtual devices and functional models of real tram devices. The results of the experiments show high precision of public transport schedule fulfilment using the proposed algorithm.

  9. Modelling of neutral particle transport in divertor plasma

    International Nuclear Information System (INIS)

    Kakizuka, Tomonori; Shimizu, Katsuhiro

    1995-01-01

    An outline of the modelling of neutral particle transport in the divertor plasma is described in the paper. The characteristic properties of the divertor plasma are largely affected by the interaction between neutral particles and the divertor plasma. Accordingly, the behavior of neutral particles should be investigated quantitatively, and the plasma and the neutral gas should be traced consistently in plasma simulations. Two approaches to transport modelling exist: Monte Carlo modelling and neutral gas fluid modelling. The former needs long calculation times, but it allows detailed modelling of the physical processes; a massively parallel computer is well suited to it. Although several kinds of models have been proposed, the latter has not yet been established. From the viewpoint of reducing calculation time, a workstation is adequate for simulations with the latter, although some physical problems remain unsolved. For Monte Carlo particle modelling, reducing the calculation time and introducing particle interactions are important subjects in developing 'the evolutional Monte Carlo method'. To reduce the calculation time, two new methods, the 'implicit Monte Carlo method' and the 'free- and diffusive-motion hybrid Monte Carlo method', are being developed. (S.Y.)

  10. Transport of large particles released in a nuclear accident

    International Nuclear Information System (INIS)

    Poellaenen, R.; Toivonen, H.; Lahtinen, J.; Ilander, T.

    1995-10-01

    Highly radioactive particulate material may be released in a nuclear accident or sometimes during normal operation of a nuclear power plant. However, consequence analyses related to radioactive releases are often performed neglecting the particle nature of the release. The properties of the particles have an important role in the radiological hazard. A particle deposited on the skin may cause a large and highly non-uniform skin beta dose. Skin dose limits may be exceeded although the overall activity concentration in air is below the level of countermeasures. For sheltering purposes it is crucial to find out the transport range, i.e. the travel distance of the particles. A method for estimating the transport range of large particles (aerodynamic diameter d_a > 20 μm) in simplified meteorological conditions is presented. A user-friendly computer code, known as TROP, is developed for fast range calculations in a nuclear emergency. (orig.) (23 refs., 13 figs.)

  11. Transport of large particles released in a nuclear accident

    Energy Technology Data Exchange (ETDEWEB)

    Poellaenen, R; Toivonen, H; Lahtinen, J; Ilander, T

    1995-10-01

    Highly radioactive particulate material may be released in a nuclear accident or sometimes during normal operation of a nuclear power plant. However, consequence analyses related to radioactive releases are often performed neglecting the particle nature of the release. The properties of the particles have an important role in the radiological hazard. A particle deposited on the skin may cause a large and highly non-uniform skin beta dose. Skin dose limits may be exceeded although the overall activity concentration in air is below the level of countermeasures. For sheltering purposes it is crucial to find out the transport range, i.e. the travel distance of the particles. A method for estimating the transport range of large particles (aerodynamic diameter d_a > 20 μm) in simplified meteorological conditions is presented. A user-friendly computer code, known as TROP, is developed for fast range calculations in a nuclear emergency. (orig.) (23 refs., 13 figs.)

  12. Artificial Fish Swarm Algorithm-Based Particle Filter for Li-Ion Battery Life Prediction

    Directory of Open Access Journals (Sweden)

    Ye Tian

    2014-01-01

    Full Text Available An intelligent online prognostic approach is proposed for predicting the remaining useful life (RUL) of lithium-ion (Li-ion) batteries based on the artificial fish swarm algorithm (AFSA) and particle filter (PF), which is an integrated approach combining a model-based method with a data-driven method. The parameters used in the empirical model, which is based on the capacity fade trends of Li-ion batteries, are identified by exploiting the tracking ability of the PF. AFSA-PF aims to improve the performance of the basic PF. By driving the prior particles to the domain with high likelihood, AFSA-PF allows global optimization and prevents particle degeneracy, thereby improving the particle distribution and increasing prediction accuracy and algorithm convergence. Data provided by NASA are used to verify this approach and compare it with the basic PF and the regularized PF. AFSA-PF is shown to be more accurate and precise.
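
    For readers unfamiliar with the baseline that AFSA-PF builds on, the following is a generic bootstrap (SIR) particle filter sketch on a toy scalar state-space model; the artificial fish swarm step and the battery capacity-fade model of the paper are not included, and the noise levels and fade factor are assumptions.

```python
# Generic bootstrap (SIR) particle filter on a toy slowly-decaying scalar state.
# This is the plain PF that methods like AFSA-PF improve upon; all model
# parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_particles, n_steps = 1000, 50
q, r = 0.05, 0.2                                        # process / measurement noise std

true_x = 1.0
particles = rng.normal(1.0, 0.5, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)
for t in range(n_steps):
    true_x = 0.98 * true_x + rng.normal(0, q)           # assumed slow fade dynamics
    z = true_x + rng.normal(0, r)                        # noisy measurement
    particles = 0.98 * particles + rng.normal(0, q, n_particles)   # propagate
    weights *= np.exp(-0.5 * ((z - particles) / r) ** 2)           # reweight
    weights /= weights.sum()
    if 1.0 / np.sum(weights**2) < n_particles / 2:       # resample when degenerate
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

print("true state:", round(true_x, 3),
      "PF estimate:", round(float(np.sum(weights * particles)), 3))
```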

  13. Density Dependence of Particle Transport in ECH Plasmas of the TJ-II Stellarator

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, V. I.; Lopez-Bruna, D.; Guasp, J.; Herranz, J.; Estrada, T.; Medina, F.; Ochando, M.A.; Velasco, J.L.; Reynolds, J.M.; Ferreira, J.A.; Tafalla, D.; Castejon, F.; Salas, A.

    2009-05-21

    We present the experimental dependence of particle transport on average density in electron cyclotron heated (ECH) hydrogen plasmas of the TJ-II stellarator. The results are based on: (I) electron density and temperature data from Thomson Scattering and reflectometry diagnostics; (II) a transport model that reproduces the particle density profiles in steady state; and (III) Eirene, a code for neutrals transport that calculates the particle source in the plasma from the particle confinement time and the appropriate geometry of the machine/plasma. After estimating an effective particle diffusivity and the particle confinement time, a threshold density separating qualitatively and quantitatively different plasma transport regimes is found. The poor confinement times found below the threshold are coincident with the presence of ECH-induced fast electron losses and a positive radial electric field all over the plasma. (Author) 40 refs.

  14. Hybrid Optimization Algorithm of Particle Swarm Optimization and Cuckoo Search for Preventive Maintenance Period Optimization

    Directory of Open Access Journals (Sweden)

    Jianwen Guo

    2016-01-01

    Full Text Available All equipment must be maintained during its lifetime to ensure normal operation. Maintenance plays a critical role in the success of manufacturing enterprises. This paper proposes a preventive maintenance period optimization model (PMPOM) to find an optimal preventive maintenance period. By making use of the advantages of particle swarm optimization (PSO) and the cuckoo search (CS) algorithm, a hybrid optimization algorithm of PSO and CS is proposed to solve the PMPOM problem. Tests on benchmark functions show that the proposed algorithm exhibits more outstanding performance than particle swarm optimization and cuckoo search alone. Experimental results show that the proposed algorithm has the advantages of strong optimization ability and fast convergence speed for solving the PMPOM problem.

  15. New hybrid genetic particle swarm optimization algorithm to design multi-zone binary filter.

    Science.gov (United States)

    Lin, Jie; Zhao, Hongyang; Ma, Yuan; Tan, Jiubin; Jin, Peng

    2016-05-16

    The binary phase filters have been used to achieve an optical needle with small lateral size. Designing a binary phase filter is still a scientific challenge in such fields. In this paper, a hybrid genetic particle swarm optimization (HGPSO) algorithm is proposed to design the binary phase filter. The HGPSO algorithm includes self-adaptive parameters, recombination and mutation operations that originated from the genetic algorithm. Based on the benchmark test, the HGPSO algorithm has achieved global optimization and fast convergence. In an easy-to-perform optimizing procedure, the iteration number of HGPSO is decreased to about a quarter of the original particle swarm optimization process. A multi-zone binary phase filter is designed by using the HGPSO. The long depth of focus and high resolution are achieved simultaneously, where the depth of focus and focal spot transverse size are 6.05λ and 0.41λ, respectively. Therefore, the proposed HGPSO can be applied to the optimization of filter with multiple parameters.

  16. Transport of Particle Swarms Through Variable Aperture Fractures

    Science.gov (United States)

    Boomsma, E.; Pyrak-Nolte, L. J.

    2012-12-01

    Particle transport through fractured rock is a key concern with the increased use of micro- and nano-size particles in consumer products as well as from other activities in the sub- and near surface (e.g. mining, industrial waste, hydraulic fracturing, etc.). While particle transport is often studied as the transport of emulsions or dispersions, particles may also enter the subsurface from leaks or seepage that lead to particle swarms. Swarms are drop-like collections of millions of colloidal-sized particles that exhibit a number of unique characteristics when compared to dispersions and emulsions. Any contaminant or engineered particle that forms a swarm can be transported farther, faster, and more cohesively in fractures than would be expected from a traditional dispersion model. In this study, the effects of several variable aperture fractures on colloidal swarm cohesiveness and evolution were studied as a swarm fell under gravity and interacted with the fracture walls. Transparent acrylic was used to fabricate synthetic fracture samples with (1) a uniform aperture, (2) a converging region followed by a uniform region (funnel shaped), (3) a uniform region followed by a diverging region (inverted funnel), and (4) a cast of an induced fracture from a carbonate rock. All of the samples consisted of two blocks that measured 100 x 100 x 50 mm. The minimum separation between these blocks determined the nominal aperture (0.5 mm to 20 mm). During experiments a fracture was fully submerged in water and swarms were released into it. The swarms consisted of a dilute suspension of 3 micron polystyrene fluorescent beads (1% by mass) with an initial volume of 5 μL. The swarms were illuminated with a green (525 nm) LED array and imaged optically with a CCD camera. The variation in fracture aperture controlled swarm behavior. Diverging apertures caused a sudden loss of confinement that resulted in a rapid change in the swarm's shape as well as a sharp increase in its velocity.

  17. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    Science.gov (United States)

    Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempts to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. Through a case study of a multi-phase, multi-product reverse supply chain network, this paper demonstrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods. PMID:24892057

  18. Digital signal processing algorithms for nuclear particle spectroscopy

    International Nuclear Information System (INIS)

    Zejnalova, O.; Zejnalov, Sh.; Hambsch, F.J.; Oberstedt, S.

    2007-01-01

    Digital signal processing algorithms for nuclear particle spectroscopy are described, along with a digital pile-up elimination method applicable to equidistantly sampled detector signals pre-processed by a charge-sensitive preamplifier. The signal processing algorithms are provided as recursive one- or multi-step procedures which can be easily programmed using modern computer programming languages. The influence of the number of bits of the sampling analogue-to-digital converter on the final signal-to-noise ratio of the spectrometer is considered. Algorithms for a digital shaping-filter amplifier, for a digital pile-up elimination scheme and for ballistic deficit correction were investigated using a high-purity germanium detector. The pile-up elimination method was originally developed for fission fragment spectroscopy using a Frisch-grid back-to-back double ionization chamber and was mainly intended for pile-up elimination in the case of high alpha-radioactivity of the fissile target. The developed pile-up elimination method affects only the electronic noise generated by the preamplifier. Therefore the influence of the pile-up elimination scheme on the final resolution of the spectrometer is investigated in terms of the distance between pile-up pulses. The efficiency of the developed algorithms is compared with other signal processing schemes published in the literature.
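
    As an example of the recursive one- or multi-step procedures mentioned above, the sketch below implements a simple recursive trapezoidal shaping filter for equidistantly sampled pulses; deconvolution of the preamplifier decay constant, which a practical shaper needs, is omitted, and the rise-time and flat-top lengths are assumed values.

```python
# Illustrative recursive trapezoidal shaper for sampled step-like pulses.
# The preamplifier decay-time deconvolution used in real spectrometers is
# omitted (an ideal step is assumed); k and m are assumed shaping parameters.
import numpy as np

def trapezoidal_shaper(v, k=50, m=20):
    """Recursive moving-sum trapezoid: rise of k samples, flat top of m samples."""
    n = len(v)
    out = np.zeros(n)
    acc = 0.0
    for i in range(n):
        d = (v[i]
             - (v[i - k] if i >= k else 0.0)
             - (v[i - k - m] if i >= k + m else 0.0)
             + (v[i - 2 * k - m] if i >= 2 * k + m else 0.0))
        acc += d                     # running sum turns the difference into a trapezoid
        out[i] = acc / k             # normalize so the flat top equals the step height
    return out

# Toy step pulse (ideal preamplifier output, no decay) plus a little noise:
rng = np.random.default_rng(4)
pulse = np.concatenate([np.zeros(200), np.ones(600)]) + rng.normal(0.0, 0.01, 800)
shaped = trapezoidal_shaper(pulse)
print("flat-top amplitude estimate:", round(shaped[260], 3))   # ~ step height of 1
```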

  19. Effects of fuel particle size distributions on neutron transport in stochastic media

    International Nuclear Information System (INIS)

    Liang, Chao; Pavlou, Andrew T.; Ji, Wei

    2014-01-01

    Highlights: • Effects of fuel particle size distributions on neutron transport are evaluated. • Neutron channeling is identified as the fundamental reason for the effects. • The effects are noticeable at low packing and low optical thickness systems. • Unit cells of realistic reactor designs are studied for different size particles. • Fuel particle size distribution effects are not negligible in realistic designs. - Abstract: This paper presents a study of fuel particle size distribution effects on neutron transport in three-dimensional stochastic media. Particle fuel is used in gas-cooled nuclear reactor designs and innovative light water reactor designs loaded with accident tolerant fuel. Due to design requirements and fuel fabrication limits, the size of fuel particles may not be perfectly constant but instead follows a certain distribution. This brings a fundamental question to the radiation transport computation community: how does the fuel particle size distribution affect neutron transport in particle fuel systems? To answer this question, size distribution effects and their physical interpretations are investigated by performing a series of neutron transport simulations at different fuel particle size distributions. An eigenvalue problem is simulated in a cylindrical container consisting of fissile fuel particles with five different size distributions: constant, uniform, power, exponential and Gaussian. A total of 15 parametric cases are constructed by altering the fissile particle volume packing fraction and its optical thickness, but keeping the mean chord length of the spherical fuel particle the same at different size distributions. The tallied effective multiplication factor (k_eff) and the spatial distribution of fission power density along the axial and radial directions are compared between the different size distributions. At low packing fraction and low optical thickness, the size distribution shows a noticeable effect on neutron transport.

  20. Identification of nuclear power plant transients using the Particle Swarm Optimization algorithm

    International Nuclear Information System (INIS)

    Canedo Medeiros, Jose Antonio Carlos; Schirru, Roberto

    2008-01-01

    In order to help the nuclear power plant operator reduce the cognitive load and increase the time available to keep the plant operating in a safe condition, transient identification systems have been devised to help operators identify possible plant transients and take fast and correct actions in due time. In the design of classification systems for the identification of nuclear power plant transients, several artificial intelligence techniques have been used, involving expert systems, neuro-fuzzy systems and genetic algorithms. In this work we explore the ability of the Particle Swarm Optimization (PSO) algorithm as a tool for optimizing a distance-based discrimination transient classification method, also giving an innovative solution for searching the best set of prototypes for the identification of transients. The Particle Swarm Optimization algorithm was successfully applied to the optimization of a nuclear power plant transient identification problem. Compared to similar methods found in the literature, the PSO has shown better results.
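
    The distance-based discrimination idea can be sketched as a nearest-prototype classifier: each transient class is represented by a prototype vector, and an observed plant state is assigned to the class of the closest prototype. In the cited work the prototype set is what the PSO optimizes; the prototypes and the observation below are purely hypothetical.

```python
# Sketch of distance-based transient discrimination with class prototypes.
# The prototype vectors would be the quantities a PSO could optimize; the
# values and class labels below are hypothetical illustrations only.
import numpy as np

def classify(state, prototypes):
    """Return the class index of the prototype closest (Euclidean) to `state`."""
    distances = np.linalg.norm(prototypes - state, axis=1)
    return int(np.argmin(distances))

# Three hypothetical transient classes described by two normalized plant variables:
prototypes = np.array([[0.2, 0.9],    # class 0
                       [0.8, 0.4],    # class 1
                       [0.5, 0.1]])   # class 2
observed = np.array([0.75, 0.45])
print("identified transient class:", classify(observed, prototypes))
```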

  1. Identification of nuclear power plant transients using the Particle Swarm Optimization algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Canedo Medeiros, Jose Antonio Carlos [Universidade Federal do Rio de Janeiro, PEN/COPPE, UFRJ, Ilha do Fundao s/n, CEP 21945-970 Rio de Janeiro (Brazil)], E-mail: canedo@lmp.ufrj.br; Schirru, Roberto [Universidade Federal do Rio de Janeiro, PEN/COPPE, UFRJ, Ilha do Fundao s/n, CEP 21945-970 Rio de Janeiro (Brazil)], E-mail: schirru@lmp.ufrj.br

    2008-04-15

    In order to help the nuclear power plant operator reduce the cognitive load and increase the time available to keep the plant operating in a safe condition, transient identification systems have been devised to help operators identify possible plant transients and take fast and correct actions in due time. In the design of classification systems for the identification of nuclear power plant transients, several artificial intelligence techniques have been used, involving expert systems, neuro-fuzzy systems and genetic algorithms. In this work we explore the ability of the Particle Swarm Optimization (PSO) algorithm as a tool for optimizing a distance-based discrimination transient classification method, also giving an innovative solution for searching the best set of prototypes for the identification of transients. The Particle Swarm Optimization algorithm was successfully applied to the optimization of a nuclear power plant transient identification problem. Compared to similar methods found in the literature, the PSO has shown better results.

  2. A Particle Swarm Optimization Algorithm with Variable Random Functions and Mutation

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xiao-Jun; YANG Chun-Hua; GUI Wei-Hua; DONG Tian-Xue

    2014-01-01

    The convergence analysis of the standard particle swarm optimization (PSO) has shown that changing the random functions, the personal best and the group best has the potential to improve the performance of the PSO. In this paper, a novel strategy with variable random functions and polynomial mutation is introduced into the PSO, called the particle swarm optimization algorithm with variable random functions and mutation (PSO-RM). The random functions are adjusted according to the density of the population so as to manipulate the weights of the cognitive part and the social part. Mutation is executed on both the personal best particle and the group best particle to explore new areas. Experimental results have demonstrated the effectiveness of the strategy.

  3. Gyrokinetic theory for particle and energy transport in fusion plasmas

    Science.gov (United States)

    Falessi, Matteo Valerio; Zonca, Fulvio

    2018-03-01

    A set of equations is derived describing the macroscopic transport of particles and energy in a thermonuclear plasma on the energy confinement time. The equations thus derived allow studying collisional and turbulent transport self-consistently, retaining the effect of magnetic field geometry without postulating any scale separation between the reference state and fluctuations. Previously, assuming scale separation, transport equations have been derived from kinetic equations by means of multiple-scale perturbation analysis and spatio-temporal averaging. In this work, the evolution equations for the moments of the distribution function are obtained following the standard approach; meanwhile, gyrokinetic theory has been used to explicitly express the fluctuation induced fluxes. In this way, equations for the transport of particles and energy up to the transport time scale can be derived using standard first order gyrokinetics.

  4. Fueling profile sensitivities of trapped particle mode transport to TNS

    International Nuclear Information System (INIS)

    Mense, A.T.; Attenberger, S.E.; Houlberg, W.A.

    1977-01-01

    A key factor in the plasma thermal behavior is the anticipated existence of dissipative trapped particle modes. A possible scheme for controlling the strength of these modes was found. The scheme involves varying the cold fueling profile. A one dimensional multifluid transport code was used to simulate plasma behavior. A multiregime model for particle and energy transport was incorporated based on pseudoclassical, trapped electron, and trapped ion regimes used elsewhere in simulation of large tokamaks. Fueling profiles peaked toward the plasma edge may provide a means for reducing density-gradient-driven trapped particle modes, thus reducing diffusion and conduction losses

  5. Development of general-purpose particle and heavy ion transport monte carlo code

    International Nuclear Information System (INIS)

    Iwase, Hiroshi; Nakamura, Takashi; Niita, Koji

    2002-01-01

    The high-energy particle transport code NMTC/JAM, which has been developed at JAERI, was improved for the high-energy heavy ion transport calculation by incorporating the JQMD code, the SPAR code and the Shen formula. The new NMTC/JAM named PHITS (Particle and Heavy-Ion Transport code System) is the first general-purpose heavy ion transport Monte Carlo code over the incident energies from several MeV/nucleon to several GeV/nucleon. (author)

  6. Particle transport due to magnetic fluctuations

    International Nuclear Information System (INIS)

    Stoneking, M.R.; Hokin, S.A.; Prager, S.C.; Fiksel, G.; Ji, H.; Den Hartog, D.J.

    1994-01-01

    Electron current fluctuations are measured with an electrostatic energy analyzer at the edge of the MST reversed-field pinch plasma. The radial flux of fast electrons (E > T_e) due to parallel streaming along a fluctuating magnetic field is determined locally by measuring the correlated product ⟨δJ_∥e δB_r⟩. Particle transport is small just inside the last closed flux surface (Γ_e,mag ≪ Γ_e,total), but can account for all observed particle losses inside r/a = 0.8. Electron diffusion is found to increase with parallel velocity, as expected for diffusion in a region of field stochasticity.

  7. Fully multidimensional flux-corrected transport algorithms for fluids

    International Nuclear Information System (INIS)

    Zalesak, S.T.

    1979-01-01

    The theory of flux-corrected transport (FCT) developed by Boris and Book is placed in a simple, generalized format, and a new algorithm for implementing the critical flux limiting stage in multidimensions without resort to time splitting is presented. The new flux limiting algorithm allows the use of FCT techniques in multidimensional fluid problems for which time splitting would produce unacceptable numerical results, such as those involving incompressible or nearly incompressible flow fields. The 'clipping' problem associated with the original one dimensional flux limiter is also eliminated or alleviated. Test results and applications to a two dimensional fluid plasma problem are presented
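
    A minimal one-dimensional sketch of the critical flux-limiting stage is given below, assuming periodic neighbours for the local extrema and antidiffusive fluxes already scaled by dt/dx; it follows the standard FCT limiter construction rather than reproducing the paper's multidimensional formulation.

```python
# 1-D sketch of the FCT flux-limiting stage: scale each antidiffusive flux so
# that adding it cannot push the solution outside the local extrema of the
# low-order solution. Periodic wraparound (np.roll) is an assumption here.
import numpy as np

def fct_limiter(u_td, flux_ad):
    """u_td: low-order ('transported-diffused') solution, shape (n,).
    flux_ad: antidiffusive fluxes at interfaces i+1/2 (already times dt/dx),
    shape (n-1,); the update is u_new[i] = u_td[i] - (F[i+1/2] - F[i-1/2])."""
    n = len(u_td)
    fin = np.zeros(n)                 # sum of antidiffusive fluxes into each cell
    fout = np.zeros(n)                # sum of antidiffusive fluxes out of each cell
    fin[1:] += np.maximum(flux_ad, 0.0);  fin[:-1] += np.maximum(-flux_ad, 0.0)
    fout[1:] += np.maximum(-flux_ad, 0.0); fout[:-1] += np.maximum(flux_ad, 0.0)
    # Allowed increase/decrease from the local extrema of the low-order solution
    u_max = np.maximum(u_td, np.maximum(np.roll(u_td, 1), np.roll(u_td, -1)))
    u_min = np.minimum(u_td, np.minimum(np.roll(u_td, 1), np.roll(u_td, -1)))
    q_plus, q_minus = u_max - u_td, u_td - u_min
    with np.errstate(divide="ignore", invalid="ignore"):
        r_plus = np.where(fin > 0, np.minimum(1.0, q_plus / fin), 0.0)
        r_minus = np.where(fout > 0, np.minimum(1.0, q_minus / fout), 0.0)
    # Each interface flux is limited by its receiving and donating cells
    c = np.where(flux_ad >= 0.0,
                 np.minimum(r_plus[1:], r_minus[:-1]),
                 np.minimum(r_plus[:-1], r_minus[1:]))
    return c * flux_ad

# Toy demo: limit antidiffusive fluxes near a step in the low-order solution
u_low = np.array([1.0, 1.0, 1.0, 0.2, 0.0, 0.0])
f_ad = np.array([0.0, 0.0, 0.3, 0.3, 0.0])
print(fct_limiter(u_low, f_ad))
```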

  8. Particle transport methods for LWR dosimetry developed by the Penn State transport theory group

    International Nuclear Information System (INIS)

    Haghighat, A.; Petrovic, B.

    1997-01-01

    This paper reviews advanced particle transport theory methods developed by the Penn State Transport Theory Group (PSTTG) over the past several years. These methods have been developed in response to increasing needs for accuracy of results and for three-dimensional modeling of nuclear systems

  9. The energetic alpha particle transport method EATM

    International Nuclear Information System (INIS)

    Kirkpatrick, R.C.

    1998-02-01

    The EATM method is an evolving attempt to find an efficient method of treating the transport of energetic charged particles in a dynamic magnetized (MHD) plasma for which the mean free path of the particles and the Larmor radius may be long compared to the gradient lengths in the plasma. The intent is to span the range of parameter space with the efficiency and accuracy thought necessary for experimental analysis and design of magnetized fusion targets

  10. Particle mis-identification rate algorithm for the CLIC ILD and CLIC SiD detectors

    CERN Document Server

    Nardulli, J

    2011-01-01

    This note describes the algorithm presently used to determine the particle mis-identification rate and gives results for single particles for the CLIC ILD and CLIC SiD detector concepts as prepared for the CLIC Conceptual Design Report.

  11. New features of the mercury Monte Carlo particle transport code

    International Nuclear Information System (INIS)

    Procassini, Richard; Brantley, Patrick; Dawson, Shawn

    2010-01-01

    Several new capabilities have been added to the Mercury Monte Carlo transport code over the past four years. The most important algorithmic enhancement is a general, extensible infrastructure to support source, tally and variance reduction actions. For each action, the user defines a phase space, as well as any number of responses that are applied to a specified event. Tallies are accumulated into a correlated, multi-dimensional, Cartesian-product result phase space. Our approach employs a common user interface to specify the data sets and distributions that define the phase, response and result for each action. Modifications to the particle trackers include the use of facet halos (instead of extrapolative fuzz) for robust tracking, and material interface reconstruction for use in shape overlaid meshes. Support for expected-value criticality eigenvalue calculations has also been implemented. Computer science enhancements include an in-line Python interface for user customization of problem setup and output. (author)

  12. Dose calculations algorithm for narrow heavy charged-particle beams

    Energy Technology Data Exchange (ETDEWEB)

    Barna, E A; Kappas, C [Department of Medical Physics, School of Medicine, University of Patras (Greece); Scarlat, F [National Institute for Laser and Plasma Physics, Bucharest (Romania)

    1999-12-31

    The dose distributional advantages of heavy charged particles can be fully exploited by using very efficient and accurate dose calculation algorithms, which can generate optimal three-dimensional scanning patterns. An inverse therapy planning algorithm for dynamically scanned, narrow heavy charged-particle beams is presented in this paper. The irradiation 'start point' is defined at the distal end of the target volume, right-down, in a beam's-eye view. The peak dose of the first elementary beam is set to be equal to the prescribed dose in the target volume, and is defined as the reference dose. The weighting factor of any Bragg peak is determined by the residual dose at the point of irradiation, calculated as the difference between the reference dose and the cumulative dose delivered at that point of irradiation by all the previous Bragg peaks. The final pattern consists of the weighted Bragg-peak irradiation density. Dose distributions were computed using two different scanning steps, equal to 0.5 mm and 1 mm respectively. Very accurate and precisely localized dose distributions, conformal to the target volume, were obtained. (authors) 6 refs., 3 figs.
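
    The residual-dose weighting rule described above can be sketched as a loop over scan positions starting from the distal end, where each Bragg peak receives the dose still missing at its irradiation point; the Gaussian-like depth-dose kernel and the scan geometry in the sketch are purely illustrative assumptions, not a physical beam model.

```python
# Hedged sketch of residual-dose weighting for a distal-to-proximal scan.
# The "bragg_kernel" below is a toy Gaussian, not a real depth-dose curve;
# grid, scan positions and prescribed dose are illustrative assumptions.
import numpy as np

def bragg_kernel(depth, peak_depth, width=0.4):
    """Toy elementary depth-dose curve peaked at `peak_depth`."""
    return np.exp(-0.5 * ((depth - peak_depth) / width) ** 2)

depths = np.linspace(0.0, 10.0, 201)          # cm, dose calculation grid
scan_points = np.arange(8.0, 4.0, -0.1)       # distal-to-proximal scan positions
reference_dose = 1.0                          # prescribed dose = reference dose

cumulative = np.zeros_like(depths)
weights = []
for p in scan_points:
    dose_at_p = np.interp(p, depths, cumulative)
    w = max(reference_dose - dose_at_p, 0.0)  # residual dose becomes the peak weight
    weights.append(w)
    cumulative += w * bragg_kernel(depths, p)

print("first (distal) weight:", round(weights[0], 3),
      "last (proximal) weight:", round(weights[-1], 3))
```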

  13. Hybrid Artificial Bee Colony Algorithm and Particle Swarm Search for Global Optimization

    Directory of Open Access Journals (Sweden)

    Wang Chun-Feng

    2014-01-01

    Full Text Available The artificial bee colony (ABC) algorithm is one of the most recent swarm intelligence based algorithms and has been shown to be competitive with other population-based algorithms. However, there is still an insufficiency in ABC regarding its solution search equation, which is good at exploration but poor at exploitation. To overcome this problem, we propose a novel artificial bee colony algorithm based on a particle swarm search mechanism. In this algorithm, to improve the convergence speed, the initial population is first generated using good point set theory rather than random selection. Secondly, in order to enhance the exploitation ability, the employed bees, onlookers, and scouts utilize the mechanism of PSO to search for new candidate solutions. Finally, to further improve the searching ability, a chaotic search operator is applied to the best solution of the current iteration. Our algorithm is tested on some well-known benchmark functions and compared with other algorithms. The results show that our algorithm has good performance.

  14. Nuclear fuel particles in the environment - characteristics, atmospheric transport and skin doses

    International Nuclear Information System (INIS)

    Poellaenen, R.

    2002-05-01

    In the present thesis, nuclear fuel particles are studied from the perspective of their characteristics, atmospheric transport and possible skin doses. These particles, often referred to as 'hot' particles, can be released into the environment, as has happened in past years, through human activities, incidents and accidents, such as the Chernobyl nuclear power plant accident in 1986. Nuclear fuel particles with a diameter of tens of micrometers, referred to here as large particles, may be hundreds of kilobecquerels in activity and even an individual particle may present a quantifiable health hazard. The detection of individual nuclear fuel particles in the environment, their isolation for subsequent analysis and their characterisation are complicated and require well-designed sampling and tailored analytical methods. In the present study, the need to develop particle analysis methods is highlighted. It is shown that complementary analytical techniques are necessary for proper characterisation of the particles. Methods routinely used for homogeneous samples may produce erroneous results if they are carelessly applied to radioactive particles. Large nuclear fuel particles are transported differently in the atmosphere compared with small particles or gaseous species. Thus, the trajectories of gaseous species are not necessarily appropriate for calculating the areas that may receive large particle fallout. A simplified model and a more advanced model based on the data on real weather conditions were applied in the case of the Chernobyl accident to calculate the transport of the particles of different sizes. The models were appropriate in characterising general transport properties but were not able to properly predict the transport of the particles with an aerodynamic diameter of tens of micrometers, detected at distances of hundreds of kilometres from the source, using only the current knowledge of the source term. Either the effective release height has been higher

  15. Modeling reactive transport with particle tracking and kernel estimators

    Science.gov (United States)

    Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-04-01

    Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate this system, an infinite number of particles is required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly mixed system which is limited by diffusion. Recent works have used this effect to actually model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect in most cases should be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle's mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in dilute systems should be modeled based on alternative conceptual models and not on a limited number of particles.
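
    A minimal sketch of the concentration-estimation step is given below: a Gaussian kernel density estimate built from particle positions is compared with plain box counting on the same grid, illustrating how the kernel widens each particle's region of influence; the bandwidth and the synthetic particle plume are illustrative assumptions rather than the paper's optimal KDE choice.

```python
# Gaussian kernel density estimate of a 1-D concentration field from particle
# positions, compared with a histogram (box-counting) estimate. The plume,
# bandwidth and grid are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
particles = rng.normal(loc=5.0, scale=1.0, size=2000)   # 1-D plume of particle positions
mass_per_particle = 1.0 / particles.size

x = np.linspace(0.0, 10.0, 101)
h = 0.3                                                  # kernel bandwidth (assumed)
# KDE concentration: each particle spreads its mass over a Gaussian of width h
conc_kde = np.sum(
    mass_per_particle * np.exp(-0.5 * ((x[:, None] - particles[None, :]) / h) ** 2),
    axis=1) / (h * np.sqrt(2.0 * np.pi))

# Box-counting (histogram) estimate on 0.1-wide bins for comparison
conc_hist, _ = np.histogram(particles, bins=np.linspace(0.0, 10.0, 101),
                            weights=np.full(particles.size, mass_per_particle))
conc_hist /= 0.1                                         # divide by bin width

print("peak concentration (KDE):", round(float(conc_kde.max()), 3),
      "(histogram):", round(float(conc_hist.max()), 3))
```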

  16. Solar energetic particle anisotropies and insights into particle transport

    Energy Technology Data Exchange (ETDEWEB)

    Leske, R. A., E-mail: ral@srl.caltech.edu; Cummings, A. C.; Cohen, C. M. S.; Mewaldt, R. A.; Labrador, A. W.; Stone, E. C. [California Institute of Technology, Pasadena, CA 91125 (United States); Wiedenbeck, M. E. [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States); Christian, E. R.; Rosenvinge, T. T. von [NASA/Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2016-03-25

    As solar energetic particles (SEPs) travel through interplanetary space, their pitch-angle distributions are shaped by the competing effects of magnetic focusing and scattering. Measurements of SEP anisotropies can therefore reveal information about interplanetary conditions such as magnetic field strength, topology, and turbulence levels at remote locations from the observer. Onboard each of the two STEREO spacecraft, the Low Energy Telescope (LET) measures pitch-angle distributions for protons and heavier ions up to iron at energies of about 2-12 MeV/nucleon. Anisotropies observed using LET include bidirectional flows within interplanetary coronal mass ejections, sunward-flowing particles when STEREO was magnetically connected to the back side of a shock, and loss-cone distributions in which particles with large pitch angles underwent magnetic mirroring at an interplanetary field enhancement that was too weak to reflect particles with the smallest pitch angles. Unusual oscillations in the width of a beamed distribution at the onset of the 23 July 2012 SEP event were also observed and remain puzzling. We report LET anisotropy observations at both STEREO spacecraft and discuss their implications for SEP transport, focusing exclusively on the extreme event of 23 July 2012 in which a large variety of anisotropies were present at various times during the event.

  17. Solar energetic particle anisotropies and insights into particle transport

    Science.gov (United States)

    Leske, R. A.; Cummings, A. C.; Cohen, C. M. S.; Mewaldt, R. A.; Labrador, A. W.; Stone, E. C.; Wiedenbeck, M. E.; Christian, E. R.; Rosenvinge, T. T. von

    2016-03-01

    As solar energetic particles (SEPs) travel through interplanetary space, their pitch-angle distributions are shaped by the competing effects of magnetic focusing and scattering. Measurements of SEP anisotropies can therefore reveal information about interplanetary conditions such as magnetic field strength, topology, and turbulence levels at remote locations from the observer. Onboard each of the two STEREO spacecraft, the Low Energy Telescope (LET) measures pitch-angle distributions for protons and heavier ions up to iron at energies of about 2-12 MeV/nucleon. Anisotropies observed using LET include bidirectional flows within interplanetary coronal mass ejections, sunward-flowing particles when STEREO was magnetically connected to the back side of a shock, and loss-cone distributions in which particles with large pitch angles underwent magnetic mirroring at an interplanetary field enhancement that was too weak to reflect particles with the smallest pitch angles. Unusual oscillations in the width of a beamed distribution at the onset of the 23 July 2012 SEP event were also observed and remain puzzling. We report LET anisotropy observations at both STEREO spacecraft and discuss their implications for SEP transport, focusing exclusively on the extreme event of 23 July 2012 in which a large variety of anisotropies were present at various times during the event.

  18. Development of particle and heavy ion transport code system

    International Nuclear Information System (INIS)

    Niita, Koji

    2004-01-01

    The Particle and Heavy Ion Transport code System (PHITS) is a three-dimensional general-purpose Monte Carlo simulation code for the description of the transport and reactions of particles and heavy ions in materials. It was developed on the basis of NMTC/JAM for the design and safety of J-PARC. An overview of PHITS, its physical processes and models, and the development process of the code are described. As examples of applications, the evaluation of neutron optics, cancer treatment with heavy-particle beams and cosmic radiation are presented. The JAM and JQMD models are used as the physical models. Neutron motion in a sextupole magnetic field and a gravitational field, PHITS simulations of the tracks of a ¹²C beam and of secondary neutrons in a small model of the HIMAC cancer treatment device, and the neutron flux in the Space Shuttle are explained. (S.Y.)

  19. Optimization of magnetic switches for single particle and cell transport

    Energy Technology Data Exchange (ETDEWEB)

    Abedini-Nassab, Roozbeh; Yellen, Benjamin B., E-mail: yellen@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Box 90300 Hudson Hall, Durham, North Carolina 27708 (United States); Joint Institute, University of Michigan—Shanghai Jiao Tong University, Shanghai Jiao Tong University, Shanghai 200240 (China); Murdoch, David M. [Department of Medicine, Duke University, Durham, North Carolina 27708 (United States); Kim, CheolGi [Department of Emerging Materials Science, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 711-873 (Korea, Republic of)

    2014-06-28

    The ability to manipulate an ensemble of single particles and cells is a key aim of lab-on-a-chip research; however, the control mechanisms must be optimized for minimal power consumption to enable future large-scale implementation. Recently, we demonstrated a matter transport platform, which uses overlaid patterns of magnetic films and metallic current lines to control magnetic particles and magnetic-nanoparticle-labeled cells; however, we have made no prior attempts to optimize the device geometry and power consumption. Here, we provide an optimization analysis of particle-switching devices based on stochastic variation in the particle's size and magnetic content. These results are immediately applicable to the design of robust, multiplexed platforms capable of transporting, sorting, and storing single cells in large arrays with low power and high efficiency.

  20. Sawtooth driven particle transport in tokamak plasmas

    International Nuclear Information System (INIS)

    Nicolas, T.

    2013-01-01

    The radial transport of particles in tokamaks is one of the most stringent issues faced by the magnetic confinement fusion community, because the fusion power is proportional to the square of the pressure, and also because the accumulation of heavy impurities in the core leads to significant power losses which can end in a 'radiative collapse'. Sawteeth and the associated periodic redistribution of the core quantities can significantly impact the radial transport of electrons and impurities. In this thesis, we perform numerical simulations of sawteeth using a nonlinear three-dimensional magnetohydrodynamic code called XTOR-2F to study the particle transport induced by sawtooth crashes. We show that the code recovers, after the crash, the fine structures of electron density that are observed with fast-sweeping reflectometry on the JET and TS tokamaks. The presence of these structures may indicate a low efficiency of the sawtooth in expelling impurities from the core. However, applying the same code to impurity profiles, we show that the redistribution is quantitatively similar to that predicted by Kadomtsev's model, which could not be predicted a priori. Hence, finally, the sawtooth flushing is efficient in expelling impurities from the core. (author)

  1. Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms

    NARCIS (Netherlands)

    Bianchi, E.; Doppelbauer, G.; Filion, L.C.; Dijkstra, M.; Kahl, G.

    2012-01-01

    We consider several patchy particle models that have been proposed in the literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations with variable box shape and (ii) an approach based on evolutionary algorithms.

  2. Aeolian particle transport inferred using a ~150-year sediment record from Sayram Lake, arid northwest China

    Directory of Open Access Journals (Sweden)

    Long Ma

    2015-05-01

    Full Text Available We studied sediment cores from Sayram Lake in the Tianshan Mountains of northwest China to evaluate variations in aeolian transport processes over the past ~150 years. Using an end-member modeling algorithm applied to particle size data, we interpreted end members with a strong bimodal distribution as having been transported by aeolian processes, whereas other end members were interpreted to have been transported by fluvial processes. The aeolian fraction accounted for an average of 27% of the terrigenous components in the core. We used the ratio of aeolian to fluvial content in the Sayram Lake sediments as an index of the past intensity of aeolian transport in the Tianshan Mountains. During the interval 1910-1930, the index was high, reflecting the fact that a dry climate provided optimal conditions for aeolian dust transport. From 1930 to 1980, the intensity of aeolian transport was weak. From the 1980s to the 2000s, aeolian transport to Sayram Lake increased. Although the climate in northwest China became more humid in the mid-1980s, human activity had by that time altered the impact of climate on the landscape, leading to enhanced surface erosion, which provided more transportable material for dust storms. Comparison of the Sayram Lake sediment record with sediment records from other lakes in the region indicates synchronous intervals of enhanced aeolian transport from 1910 to 1930 and 1980 to 2000.

  3. Explicit symplectic algorithms based on generating functions for relativistic charged particle dynamics in time-dependent electromagnetic field

    Science.gov (United States)

    Zhang, Ruili; Wang, Yulei; He, Yang; Xiao, Jianyuan; Liu, Jian; Qin, Hong; Tang, Yifa

    2018-02-01

    Relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. The numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term numerical simulations. Therefore, explicit symplectic algorithms are much preferable to non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. Then, we give explicit symplectic algorithms based on generating functions of orders 2 and 3 for the relativistic dynamics of a charged particle. The methodology is not new, having previously been applied to the non-relativistic dynamics of charged particles, but the algorithm for relativistic dynamics is of considerable significance in practical simulations, such as the secular simulation of runaway electrons in tokamaks.

  4. Analysis of Population Diversity of Dynamic Probabilistic Particle Swarm Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Qingjian Ni

    2014-01-01

    Full Text Available In evolutionary algorithms, population diversity is an important factor in solving performance. In this paper, drawing on population diversity analysis methods used in other evolutionary algorithms, three indicators are introduced as measures of population diversity in PSO algorithms: the standard deviation of the population fitness values, the population entropy, and the Manhattan norm of the standard deviation of the population positions. The three measures are used to analyze population diversity in a relatively new PSO variant, Dynamic Probabilistic Particle Swarm Optimization (DPPSO). The results show that the three measures can fully reflect the evolution of population diversity in DPPSO algorithms from different angles, and we also discuss the impact of population diversity on the DPPSO variants. The relevant conclusions on population diversity in DPPSO can be used to analyze, design, and improve DPPSO algorithms, thus improving optimization performance, and could also be beneficial for understanding the working mechanism of DPPSO theoretically.
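
    The three indicators can be computed directly from the swarm state, as in the sketch below; the entropy here is based on binned fitness values, which is an assumption since the exact entropy definition used in the paper may differ.

```python
# Sketch of the three diversity indicators for a random swarm: the standard
# deviation of fitness, a population entropy over binned fitness values (the
# binning is an assumption), and the Manhattan (L1) norm of the per-dimension
# standard deviation of particle positions.
import numpy as np

def diversity_measures(positions, fitness, n_bins=10):
    fitness_std = float(np.std(fitness))
    counts, _ = np.histogram(fitness, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    entropy = float(-np.sum(p * np.log(p)))
    manhattan_pos_std = float(np.sum(np.std(positions, axis=0)))
    return fitness_std, entropy, manhattan_pos_std

rng = np.random.default_rng(6)
pos = rng.uniform(-5.0, 5.0, size=(30, 10))        # 30 particles, 10 dimensions
fit = np.sum(pos**2, axis=1)                       # sphere-function fitness
print(diversity_measures(pos, fit))
```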

  5. DRIFT-INDUCED PERPENDICULAR TRANSPORT OF SOLAR ENERGETIC PARTICLES

    International Nuclear Information System (INIS)

    Marsh, M. S.; Dalla, S.; Kelly, J.; Laitinen, T.

    2013-01-01

    Drifts are known to play a role in galactic cosmic ray transport within the heliosphere and are a standard component of cosmic ray propagation models. However, the current paradigm of solar energetic particle (SEP) propagation holds the effects of drifts to be negligible, and they are not accounted for in most current SEP modeling efforts. We present full-orbit test particle simulations of SEP propagation in a Parker spiral interplanetary magnetic field (IMF), which demonstrate that high-energy particle drifts cause significant asymmetric propagation perpendicular to the IMF. Thus in many cases the assumption of field-aligned propagation of SEPs may not be valid. We show that SEP drifts have dependencies on energy, heliographic latitude, and charge-to-mass ratio that are capable of transporting energetic particles perpendicular to the field over significant distances within interplanetary space, e.g., protons of initial energy 100 MeV propagate distances across the field on the order of 1 AU, over timescales typical of a gradual SEP event. Our results demonstrate the need for current models of SEP events to include the effects of particle drift. We show that the drift is considerably stronger for heavy ion SEPs due to their larger mass-to-charge ratio. This paradigm shift has important consequences for the modeling of SEP events and is crucial to the understanding and interpretation of in situ observations

  6. Optimization of C4.5 algorithm-based particle swarm optimization for breast cancer diagnosis

    Science.gov (United States)

    Muslim, M. A.; Rukmana, S. H.; Sugiharti, E.; Prasetiyo, B.; Alimah, S.

    2018-03-01

    Data mining has become a basic methodology for computational applications in medical domains. It can be applied in the health field, for example to the diagnosis of breast cancer, heart disease, diabetes and other conditions. Breast cancer is most common in women, with more than one million cases and nearly 600,000 deaths occurring worldwide each year. The most effective way to reduce breast cancer deaths is early diagnosis. This study aims to determine the accuracy of breast cancer diagnosis. The research uses the Wisconsin Breast Cancer (WBC) dataset from the UCI machine learning repository. The method used is the C4.5 algorithm combined with Particle Swarm Optimization (PSO) for feature selection and for optimizing the C4.5 algorithm. Ten-fold cross-validation and a confusion matrix are used for validation. The result of this research is that the accuracy of the C4.5 algorithm optimized by particle swarm optimization increased by 0.88%.
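
    In a wrapper approach of this kind, the PSO searches over binary feature masks and the classifier's cross-validated accuracy serves as the fitness function. The sketch below shows that fitness evaluation using a CART decision tree from scikit-learn as a stand-in for C4.5, and scikit-learn's diagnostic Wisconsin breast cancer data as a stand-in for the WBC dataset; both substitutions are assumptions for illustration only.

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        def mask_fitness(mask, X, y):
            """Mean 10-fold CV accuracy of a decision tree trained on the selected features.

            This is the objective a PSO wrapper would maximise during feature selection.
            """
            if not mask.any():
                return 0.0
            clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
            return cross_val_score(clf, X[:, mask], y, cv=10).mean()

        X, y = load_breast_cancer(return_X_y=True)
        rng = np.random.default_rng(1)
        mask = rng.random(X.shape[1]) < 0.5   # one candidate particle position (binary mask)
        print(f"10-fold CV accuracy with {mask.sum()} features: {mask_fitness(mask, X, y):.4f}")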

  7. Modelling of shear effects on thermal and particle transport in advanced Tokamak scenarios

    International Nuclear Information System (INIS)

    Moreau, D.; Voitsekhovitch, I.; Baker, D.R.

    1999-01-01

    The evolution of thermal and particle internal transport barriers (ITBs) is studied by modelling the time-dependent energy and particle balance in DIII-D plasmas with reversed magnetic shear configurations and in JET discharges with monotonic or slightly reversed q-profiles and large ExB rotation shear. Simulations are performed with semi-empirical models for anomalous diffusion and particle pinch. The stabilizing effects of magnetic and ExB rotation shear are included in the anomalous particle and heat diffusivities, and their effects on particle and thermal transport are compared. Improved particle and energy confinement with the formation of an internal transport barrier (ITB) has been produced in DIII-D plasmas during current ramp-up accompanied by neutral beam injection (NBI). These plasmas are characterized by strong reversed magnetic shear and large ExB rotation shear, which reduce the anomalous fluxes. The formation of ITBs in the optimized shear (OS) JET scenario starts with strong NBI heating in a target plasma with a flat or slightly reversed q-profile pre-formed during current ramp-up with ion cyclotron resonance heating (ICRH). Our paper presents the modelling of particle and thermal transport for these scenarios. (authors)

  8. Dust particle diffusion in ion beam transport region

    Energy Technology Data Exchange (ETDEWEB)

    Miyamoto, N.; Okajima, Y.; Romero, C. F.; Kuwata, Y.; Kasuya, T.; Wada, M., E-mail: mwada@mail.doshisha.ac.jp [Graduate school of Science and Engineering, Doshisha University, Kyotanabe, Kyoto 610-0321 (Japan)

    2016-02-15

    Dust particles of μm size produced by a monoplasmatron ion source are observed by laser light scattering. The scattered light signal from an incident laser at 532 nm wavelength indicates when and where a particle passes through the ion beam transport region. As a result, dust particles larger than 10 μm are found to be distributed in the center of the ion beam, while dust particles smaller than 10 μm are distributed along the edge of the ion beam. The floating potential and electron temperature in the beam transport region are measured by an electrostatic probe. The observation can be explained by a charge-up model of the dust in the plasma boundary region.

  9. A study on the particle penetration in RMS' particle transport system

    International Nuclear Information System (INIS)

    Son, S. M.; Oh, S. H.; Choi, C. R.

    2014-01-01

    In nuclear facilities, a radiation monitoring system (RMS) monitors the exhaust gas containing radioactive material. Samples of the exhaust gas are collected in the downstream region of the air cleaning units (ACUs) in order to examine radioactive materials, and the amount of radioactive material can be predicted by analyzing the collected samples. The representativeness of the collected samples must be assured in order to sense and measure radioactive materials accurately. The radius of curvature of the sampling tube is typically about 5 times the tube diameter, and a booster fan is sometimes added to enhance the particle penetration rate. In this study, particle penetration is calculated in order to evaluate the penetration rate for various design parameters (tube length, tube inclination angle, radius of curvature, etc.). The particle penetration rates have been calculated for several elements of the particle transport system. In general, the horizontal length of the tube and the number of bends have a large impact on the penetration rate. If the sampling location is far from the radiation monitoring system, the installation of additional booster fans could be considered for large-diameter tubes, but is not recommended for small-diameter tubes. In order to enhance the particle penetration rate, the following measures are recommended, in order of priority: 1) reduce the distance between the sampling location and the radiation monitoring system; 2) reduce the number of bends in the tube.

  10. GPU implementation of discrete particle swarm optimization algorithm for endmember extraction from hyperspectral image

    Science.gov (United States)

    Yu, Chaoyin; Yuan, Zhengwu; Wu, Yuanfeng

    2017-10-01

    Hyperspectral image unmixing is an important part of hyperspectral data analysis. Mixed pixel decomposition consists of two steps: endmember extraction (finding the unique signatures of the pure ground components) and abundance estimation (the proportion of each endmember in each pixel). Recently, a Discrete Particle Swarm Optimization (DPSO) algorithm was proposed to extract endmembers accurately with high optimization performance. However, the DPSO algorithm has very high computational complexity, which makes the endmember extraction procedure very time consuming for hyperspectral image unmixing. Thus, in this paper, the DPSO endmember extraction algorithm was parallelized, implemented on the CUDA platform (GPU K20), and evaluated with real hyperspectral remote sensing data. The experimental results show that, as the number of particles increases, the parallelized version achieves much higher computing efficiency while maintaining the same endmember extraction accuracy.

  11. A parallel version of a multigrid algorithm for isotropic transport equations

    International Nuclear Information System (INIS)

    Manteuffel, T.; McCormick, S.; Yang, G.; Morel, J.; Oliveira, S.

    1994-01-01

    The focus of this paper is on a parallel algorithm for solving the transport equations in a slab geometry using multigrid. The spatial discretization scheme used is a finite element method called the modified linear discontinuous (MLD) scheme. The MLD scheme represents a lumped version of the standard linear discontinuous (LD) scheme. The parallel algorithm was implemented on the Connection Machine 2 (CM2). Convergence rates and timings for this algorithm on the CM2 and Cray-YMP are shown

  12. Fractional order Darwinian particle swarm optimization applications and evaluation of an evolutionary algorithm

    CERN Document Server

    Couceiro, Micael

    2015-01-01

    This book examines the bottom-up applicability of swarm intelligence to solving multiple problems, such as curve fitting, image segmentation, and swarm robotics. It compares the capabilities of some of the better-known bio-inspired optimization approaches, especially Particle Swarm Optimization (PSO), Darwinian Particle Swarm Optimization (DPSO) and the recently proposed Fractional Order Darwinian Particle Swarm Optimization (FODPSO), and comprehensively discusses their advantages and disadvantages. Further, it demonstrates the superiority and key advantages of using the FODPSO algorithm, suc

  13. A generalized transport-velocity formulation for smoothed particle hydrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Chi; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A.

    2017-05-15

    The standard smoothed particle hydrodynamics (SPH) method suffers from tensile instability. In fluid-dynamics simulations this instability leads to particle clumping and void regions when negative pressure occurs. In solid-dynamics simulations, it results in unphysical structure fragmentation. In this work the transport-velocity formulation of Adami et al. (2013) is generalized for providing a solution of this long-standing problem. Other than imposing a global background pressure, a variable background pressure is used to modify the particle transport velocity and eliminate the tensile instability completely. Furthermore, such a modification is localized by defining a shortened smoothing length. The generalized formulation is suitable for fluid and solid materials with and without free surfaces. The results of extensive numerical tests on both fluid and solid dynamics problems indicate that the new method provides a unified approach for multi-physics SPH simulations.

  14. FLUKA: A Multi-Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Ferrari, A.; Sala, P.R.; /CERN /INFN, Milan; Fasso, A.; /SLAC; Ranft, J.; /Siegen U.

    2005-12-14

    This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner's guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.

  15. Modeling particle transport and discoloration risk in drinking water distribution networks

    Directory of Open Access Journals (Sweden)

    J. van Summeren

    2017-10-01

    Full Text Available Discoloration of drinking water is a worldwide phenomenon caused by the accumulation and subsequent remobilization of particulate matter in drinking water distribution systems (DWDSs). It accounts for a substantial fraction of customer complaints to water utilities. Accurate discoloration risk predictions could improve system operation by allowing for more effective programs of cleaning and prevention actions and field measurements, but are challenged by incomplete understanding of the origins and properties of the particles and by a complex and not fully understood interplay of processes in distribution networks. In this paper, we assess and describe the relevant hydraulic processes that govern particle transport in turbulent pipe flow, including gravitational settling, bed-load transport, and particle entrainment into suspension. We assess which transport mechanisms are dominant for a range of bulk flow velocities, particle diameters, and particle mass densities that includes common conditions for DWDSs in the Netherlands, the UK, and Australia. Our analysis shows that the theoretically predicted particle settling velocity and threshold shear stresses for incipient particle motion are in the same range as, but more variable than, previous estimates from lab experiments, field measurements, and modeling. The presented material will be used in the future development of a numerical modeling tool to determine and predict the spatial distribution of particulate material and discoloration risk in DWDSs. Our approach is aimed at understanding specific causalities and processes, which can complement data-driven approaches.

  16. Numerical investigations for insulation particle transport phenomena in water flow

    International Nuclear Information System (INIS)

    Krepper, E.; Grahn, A.; Alt, S.; Kaestner, W.; Kratzsch, A.; Seeliger, A.

    2005-01-01

    The investigation of insulation debris generation, transport and sedimentation is gaining importance in reactor safety research for PWRs and BWRs, considering the long-term behaviour of emergency core cooling systems during all types of LOCA. The insulation debris released near the break during a LOCA consists of a mixture of particles that differ widely in size, shape, consistency and other properties. Some fraction of the released insulation debris will be transported into the reactor sump, where it may affect emergency core cooling. Open questions of generic interest include the sedimentation of the insulation debris in a water pool, possible re-suspension, transport in the sump water flow, the particle load on strainers and the corresponding pressure difference. A joint research project in cooperation with the Institute of Process Technology, Process Automation and Measuring Technology (IPM) Zittau deals with the experimental investigation and the development of CFD models for the description of particle transport phenomena in coolant flow. While the experiments are performed at IPM Zittau, the theoretical work is concentrated at Forschungszentrum Rossendorf. In the present paper the basic concepts for CFD modelling are described and first results including feasibility studies are shown. Further results are expected from the ongoing work. (author)

  17. A fast sorting algorithm for a hypersonic rarefied flow particle simulation on the connection machine

    Science.gov (United States)

    Dagum, Leonardo

    1989-01-01

    The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.

  18. Nuclear fuel particles in the environment - characteristics, atmospheric transport and skin doses

    Energy Technology Data Exchange (ETDEWEB)

    Poellaenen, R

    2002-05-01

    In the present thesis, nuclear fuel particles are studied from the perspective of their characteristics, atmospheric transport and possible skin doses. These particles, often referred to as 'hot' particles, can be released into the environment, as has happened in past years, through human activities, incidents and accidents, such as the Chernobyl nuclear power plant accident in 1986. Nuclear fuel particles with a diameter of tens of micrometers, referred to here as large particles, may be hundreds of kilobecquerels in activity and even an individual particle may present a quantifiable health hazard. The detection of individual nuclear fuel particles in the environment, their isolation for subsequent analysis and their characterisation are complicated and require well-designed sampling and tailored analytical methods. In the present study, the need to develop particle analysis methods is highlighted. It is shown that complementary analytical techniques are necessary for proper characterisation of the particles. Methods routinely used for homogeneous samples may produce erroneous results if they are carelessly applied to radioactive particles. Large nuclear fuel particles are transported differently in the atmosphere compared with small particles or gaseous species. Thus, the trajectories of gaseous species are not necessarily appropriate for calculating the areas that may receive large particle fallout. A simplified model and a more advanced model based on the data on real weather conditions were applied in the case of the Chernobyl accident to calculate the transport of the particles of different sizes. The models were appropriate in characterising general transport properties but were not able to properly predict the transport of the particles with an aerodynamic diameter of tens of micrometers, detected at distances of hundreds of kilometres from the source, using only the current knowledge of the source term. Either the effective release height has

  19. Monte Carlo simulations of the particle transport in semiconductor detectors of fast neutrons

    International Nuclear Information System (INIS)

    Sedlačková, Katarína; Zaťko, Bohumír; Šagátová, Andrea; Nečas, Vladimír

    2013-01-01

    Several Monte Carlo all-particle transport codes are under active development around the world. In this paper we focus on the capabilities of the MCNPX code (Monte Carlo N-Particle eXtended) to follow the particle transport in a semiconductor detector of fast neutrons. A semiconductor detector based on semi-insulating GaAs was the object of our investigation. High-density polyethylene (HDPE) was employed as the converter material capable of producing charged particles through the (n, p) interaction, and a 239Pu–Be neutron source was used as the source of fast neutrons in the model. The simulations were performed using the MCNPX code, which makes it possible to track not only neutrons but also recoil protons at all energies of interest. Hence, the MCNPX code enables seamless particle transport and no other computer program is needed to process the particle transport. The determination of the optimal thickness of the conversion layer and the minimum thickness of the active region of the semiconductor detector, as well as the simulation of the energy spectra, were the principal goals of the computer modeling. Theoretical detector responses showed that the best detection efficiency is achieved for a 500 μm thick HDPE converter layer. The minimum detector active region thickness has been estimated to be about 400 μm. -- Highlights: ► Application of the MCNPX code for fast neutron detector design is demonstrated. ► Simulations of the particle transport through a conversion film of HDPE are presented. ► Simulations of the particle transport through the detector active region are presented. ► The optimal thickness of the HDPE conversion film has been calculated. ► A detection efficiency of 0.135% was reached for a 500 μm thick HDPE conversion film.

  20. A self-adaptive chaotic particle swarm algorithm for short term hydroelectric system scheduling in deregulated environment

    International Nuclear Information System (INIS)

    Jiang Chuanwen; Bompard, Etorre

    2005-01-01

    This paper proposes a short-term hydroelectric plant dispatch model based on the rule of maximizing the benefit. For the optimal dispatch model, which is a large-scale nonlinear planning problem with multiple constraints and variables, this paper proposes a novel self-adaptive chaotic particle swarm optimization algorithm to better solve the short-term generation scheduling of a hydro system in a deregulated environment. Since chaotic mapping enjoys certainty, ergodicity and the stochastic property, the proposed approach introduces a chaos mapping and an adaptive scaling term into the particle swarm optimization algorithm, which increases its convergence rate and resulting precision. The new method has been examined and tested on a practical hydro system. The results are promising and show the effectiveness and robustness of the proposed approach in comparison with the traditional particle swarm optimization algorithm.
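
    The "chaotic" ingredient in such variants is typically a deterministic map whose iterates are ergodic on (0, 1) and are used in place of, or in addition to, uniform random numbers in the velocity update. The sketch below uses the logistic map as a representative choice; the paper's exact mapping and adaptive scaling term are not reproduced here.

        def logistic_sequence(x0, n, r=4.0):
            """Chaotic logistic-map sequence often used to drive chaotic PSO variants.

            With r = 4 and x0 avoiding the fixed points (e.g. 0, 0.25, 0.5, 0.75, 1),
            the iterates wander ergodically over (0, 1).
            """
            xs, x = [], x0
            for _ in range(n):
                x = r * x * (1.0 - x)
                xs.append(x)
            return xs

        # Example: successive chaotic values can replace the uniform random factors in
        # the PSO velocity update, v = w*v + c1*chaos1*(pbest - x) + c2*chaos2*(gbest - x).
        print(logistic_sequence(0.37, 5))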

  1. Testing of a "smart-pebble" for measuring particle transport statistics

    Science.gov (United States)

    Kitsikoudis, Vasileios; Avgeris, Loukas; Valyrakis, Manousos

    2017-04-01

    This paper presents preliminary results from novel experiments aiming to assess coarse sediment transport statistics for a range of transport conditions, via the use of an innovative "smart-pebble" device. This device is a waterproof sphere, 7 cm in diameter, equipped with a number of sensors that provide information about the velocity, acceleration and position of the "smart-pebble" within the flow field. A series of specifically designed experiments is carried out to monitor the entrainment of a "smart-pebble" for fully developed, uniform, turbulent flow conditions over a hydraulically rough bed. Specifically, the bed surface is configured in three sections, each consisting of well-packed glass beads of slightly increasing size in the downstream direction. The first section has a streamwise length of L1=150 cm and a bead size of D1=15 mm, the second section has a length of L2=85 cm and a bead size of D2=22 mm, and the third bed section has a length of L3=55 cm and a bead size of D3=25.4 mm. Two cameras monitor the area of interest to provide additional information regarding the "smart-pebble" movement. Three-dimensional flow measurements are obtained with the aid of an acoustic Doppler velocimeter along a measurement grid to assess the flow forcing field. A wide range of flow rates near and above the threshold of entrainment is tested, using four distinct densities for the "smart-pebble", which affect its transport speed and total momentum. The acquired data are analyzed to derive Lagrangian transport statistics, and the implications of such an experiment for the transport of particles by rolling are discussed. The flow conditions for the initiation of motion, particle accelerations and equilibrium particle velocities (translating into transport rates), and statistics of particle impact and its motion can be extracted from the acquired data, which can be further compared to develop meaningful insights for sediment transport.

  2. Mechanism for Particle Transport and Size Sorting via Low-Frequency Vibrations

    Science.gov (United States)

    Sherrit, Stewart; Scott, James S.; Bar-Cohen, Yoseph; Badescu, Mircea; Bao, Xiaoqi

    2010-01-01

    There is a need for effective sample handling tools to deliver and sort particles for analytical instruments that are planned for use in future NASA missions. Specifically, a need exists for a compact mechanism that allows transporting and sieving particle sizes of powdered cuttings and soil grains that may be acquired by sampling tools such as a robotic scoop or drill. The required tool needs to be low mass and compact to operate from such platforms as a lander or rover. This technology also would be applicable to sample handling when transporting samples to analyzers and sorting particles by size.

  3. Atmospheric fate and transport of fine volcanic ash: Does particle shape matter?

    Science.gov (United States)

    White, C. M.; Allard, M. P.; Klewicki, J.; Proussevitch, A. A.; Mulukutla, G.; Genareau, K.; Sahagian, D. L.

    2013-12-01

    Volcanic ash presents hazards to infrastructure, agriculture, and human and animal health. In particular, given the economic importance of intercontinental aviation, understanding how long ash is suspended in the atmosphere, and how far it is transported has taken on greater importance. Airborne ash abrades the exteriors of aircraft, enters modern jet engines and melts while coating interior engine parts causing damage and potential failure. The time fine ash stays in the atmosphere depends on its terminal velocity. Existing models of ash terminal velocities are based on smooth, quasi-spherical particles characterized by Stokes velocity. Ash particles, however, violate the various assumptions upon which Stokes flow and associated models are based. Ash particles are non-spherical and can have complex surface and internal structure. This suggests that particle shape may be one reason that models fail to accurately predict removal rates of fine particles from volcanic ash clouds. The present research seeks to better parameterize predictive models for ash particle terminal velocities, diffusivity, and dispersion in the atmospheric boundary layer. The fundamental hypothesis being tested is that particle shape irreducibly impacts the fate and transport properties of fine volcanic ash. Pilot studies, incorporating modeling and experiments, are being conducted to test this hypothesis. Specifically, a statistical model has been developed that can account for actual volcanic ash size distributions, complex ash particle geometry, and geometry variability. Experimental results are used to systematically validate and improve the model. The experiments are being conducted at the Flow Physics Facility (FPF) at UNH. Terminal velocities and dispersion properties of fine ash are characterized using still air drop experiments in an unconstrained open space using a homogenized mix of source particles. Dispersion and sedimentation dynamics are quantified using particle image

  4. A fuzzy controller design for nuclear research reactors using the particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Coban, Ramazan

    2011-01-01

    Research highlights: → A closed-loop fuzzy logic controller based on the particle swarm optimization algorithm was proposed for controlling the power level of nuclear research reactors. → The proposed control system was tested for various initial and desired power levels, and it could control the reactor successfully in most situations. → The proposed controller is robust against disturbances. - Abstract: In this paper, a closed-loop fuzzy logic controller based on the particle swarm optimization algorithm is proposed for controlling the power level of nuclear research reactors. The fuzzy logic controller is based on rules constructed from numerical experiments performed with a computer code for the core dynamics calculation and from the human operator's experience and knowledge. In addition to these intuitive and experimental design efforts, the consequent parts of the fuzzy rules are optimally (or near optimally) determined using the particle swarm optimization algorithm. The contribution of the proposed algorithm to a reactor control system is investigated in detail. The performance of the controller is also tested with numerical simulations in numerous operating conditions, from various initial power levels to desired power levels, as well as under disturbance. It is shown that the proposed control system performs satisfactorily under almost all operating conditions, even in the case of very small initial power levels.

  5. Efficiencies of dynamic Monte Carlo algorithms for off-lattice particle systems with a single impurity

    KAUST Repository

    Novotny, M.A.; Watanabe, Hiroshi; Ito, Nobuyasu

    2010-01-01

    The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing

  6. Techno-economic optimization of a shell and tube heat exchanger by genetic and particle swarm algorithms

    International Nuclear Information System (INIS)

    Sadeghzadeh, H.; Ehyaei, M.A.; Rosen, M.A.

    2015-01-01

    Highlights: • The pressure drop and heat transfer coefficient are calculated by the Delaware method. • The accuracy of the Delaware method is higher than that of the Kern method. • The results of the PSO are better than the results of the GA. • The PSO-based optimization yields the best and most economical design. - Abstract: The use of genetic and particle swarm algorithms in the design of techno-economically optimum shell-and-tube heat exchangers is demonstrated. A cost function (including the cost of the heat exchanger based on surface area and the cost of the power consumed to overcome pressure drops) is the objective function, which is to be minimized. The selected decision variables are the tube diameter, the central baffle spacing and the shell diameter. The Delaware method is used to calculate the heat transfer coefficient and the shell-side pressure drop. The accuracy and efficiency of the suggested algorithm and of the Delaware method are investigated. A comparison of the results obtained by the two algorithms shows that the results obtained with the particle swarm optimization method are superior to those obtained with the genetic algorithm. By comparing these results with those from various references employing the Kern method and other algorithms, it is shown that the Delaware method accompanied by the genetic and particle swarm algorithms achieves better optima, based on assessments for two case studies.
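
    The optimization layer here is a standard continuous PSO minimising a total-cost objective over the three decision variables. The sketch below shows such a loop with a purely hypothetical stand-in cost function and variable bounds; the real objective would come from the Delaware correlations for heat transfer and pressure drop.

        import numpy as np

        def pso_minimize(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimal global-best PSO minimiser (generic sketch, not the paper's exact variant)."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([cost(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest, pbest_f.min()

        # Hypothetical stand-in cost: capital cost rising with shell size plus a pumping-cost
        # term falling with flow area; decision variables (tube diameter, baffle spacing,
        # shell diameter) are in metres, with illustrative bounds.
        def total_cost(d):
            d_tube, b_spacing, d_shell = d
            return 9000 * d_shell**1.2 + 350 / (d_tube * b_spacing) + 120 * b_spacing

        best_x, best_f = pso_minimize(total_cost, bounds=[(0.015, 0.05), (0.2, 0.6), (0.5, 1.5)])
        print(best_x, best_f)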

  7. Jets/MET Performance with the combination of Particle flow algorithm and SoftKiller

    CERN Document Server

    Yamamoto, Kohei

    2017-01-01

    The main purpose of this work is to study the performance of the combination of the Particle Flow algorithm (PFlow) and SoftKiller (SK), "PF+SK". The ATLAS experiment currently employs topological clusters (Topo) for jet reconstruction, but we want to replace them with a more effective input, PFlow. PFlow provides another method to reconstruct jets[1]: the energy deposits in the calorimeters are combined with the measurements in the ID tracker. This strategy allows consistent measurements in the detector to be attributed to the same particles and avoids double counting. SK is a simple and effective way of suppressing pile-up[2]: the rapidity-azimuth plane is divided into square patches, and particles with transverse momentum below a threshold pT_cut are eliminated, where pT_cut is derived event by event so that the median of the patch pT density becomes zero. In practice, this is equivalent to gradually increasing pT_cut until exactly half of the patches become empty. Because there is no official calibration of PF+SK so far, we have t...
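
    The SoftKiller threshold can be computed in a few lines: the event is gridded into patches, the hardest-particle pT of each patch is found, and the cut is set to the median over patches, which by construction empties half of them. The sketch below is a generic illustration in that spirit, not ATLAS code; the patch size, acceptance and toy particle distributions are assumptions.

        import numpy as np

        def softkiller(pt, y, phi, patch=0.4, y_max=2.5):
            """Minimal SoftKiller sketch: choose pt_cut so the median patch density is zero."""
            ny = int(np.ceil(2 * y_max / patch))
            nphi = int(np.ceil(2 * np.pi / patch))
            iy = np.clip(((y + y_max) / patch).astype(int), 0, ny - 1)
            iphi = ((phi % (2 * np.pi)) / patch).astype(int) % nphi
            # Hardest-particle pt in each patch (zero for empty patches).
            max_pt = np.zeros((ny, nphi))
            np.maximum.at(max_pt, (iy, iphi), pt)
            # Raising the cut to the median of these values empties exactly half the patches.
            pt_cut = np.median(max_pt)
            return pt > pt_cut, pt_cut

        # Toy event: 2000 soft particles with exponentially falling pt.
        rng = np.random.default_rng(2)
        n = 2000
        keep, cut = softkiller(rng.exponential(1.0, n),
                               rng.uniform(-2.5, 2.5, n),
                               rng.uniform(0, 2 * np.pi, n))
        print(f"pt_cut = {cut:.2f}, kept {keep.sum()} of {n} particles")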

  8. High energy particle transport code NMTC/JAM

    International Nuclear Information System (INIS)

    Niita, Koji; Meigo, Shin-ichiro; Takada, Hiroshi; Ikeda, Yujiro

    2001-03-01

    We have developed a high energy particle transport code NMTC/JAM, which is an upgraded version of NMTC/JAERI97. The applicable energy range of NMTC/JAM is extended in principle up to 200 GeV for nucleons and mesons by introducing the high energy nuclear reaction code JAM for the intra-nuclear cascade part. For the evaporation and fission processes, we have also implemented a new model, GEM, by which the production of light nuclei from the excited residual nucleus can be described. In accordance with the extension of the applicable energy range, we have upgraded the nucleon-nucleus non-elastic, elastic and differential elastic cross section data by employing new systematics. In addition, particle transport in a magnetic field has been implemented for beam transport calculations. In this upgrade, some new tally functions have been added and the input data format has been made much more user friendly. Thanks to these new calculation functions and utilities, NMTC/JAM enables us to carry out reliable neutronics studies of large-scale target systems with complex geometry more accurately and easily than before. This report serves as a user manual for the code. (author)

  9. Production and global transport of Titan's sand particles

    Science.gov (United States)

    Barnes, Jason W.; Lorenz, Ralph D.; Radebaugh, Jani; Hayes, Alexander G.; Arnold, Karl; Chandler, Clayton

    2015-06-01

    Previous authors have suggested that Titan's individual sand particles form by either sintering or by lithification and erosion. We suggest two new mechanisms for the production of Titan's organic sand particles that would occur within bodies of liquid: flocculation and evaporitic precipitation. Such production mechanisms would suggest discrete sand sources in dry lakebeds. We search for such sources, but find no convincing candidates with the present Cassini Visual and Infrared Mapping Spectrometer coverage. As a result we propose that Titan's equatorial dunes may represent a single, global sand sea with west-to-east transport providing sources and sinks for sand in each interconnected basin. The sand might then be transported around Xanadu by fast-moving Barchan dune chains and/or fluvial transport in transient riverbeds. A river at the Xanadu/Shangri-La border could explain the sharp edge of the sand sea there, much like the Kuiseb River stops the Namib Sand Sea in southwest Africa on Earth. Future missions could use the composition of Titan's sands to constrain the global hydrocarbon cycle.

  10. Optimization of the reflux ratio for a stage distillation column based on an improved particle swarm algorithm

    DEFF Research Database (Denmark)

    Ren, Jingzheng; Tan, Shiyu; Dong, Lichun

    2010-01-01

    A mathematical model relating the operating profit of a staged distillation column to its reflux ratio was established. In order to optimize the reflux ratio by solving the nonlinear objective function, an improved particle swarm algorithm was developed and shown to enhance the searching ability of the basic particle swarm algorithm significantly. An example of using the improved algorithm to solve the mathematical model is demonstrated; the results show that it is efficient and convenient to optimize the reflux ratio of a distillation column by using the mathematical model...

  11. A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations

    Science.gov (United States)

    Buaria, D.; Yeung, P. K.

    2017-12-01

    A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations of turbulent fluid flow. The work is motivated by the desire to obtain Lagrangian information necessary for the study of turbulent dispersion at the largest problem sizes feasible on current and next-generation multi-petaflop supercomputers. A large population of fluid particles is distributed among parallel processes dynamically, based on instantaneous particle positions such that all of the interpolation information needed for each particle is available either locally on its host process or neighboring processes holding adjacent sub-domains of the velocity field. With cubic splines as the preferred interpolation method, the new algorithm is designed to minimize the need for communication, by transferring between adjacent processes only those spline coefficients determined to be necessary for specific particles. This transfer is implemented very efficiently as a one-sided communication, using Co-Array Fortran (CAF) features which facilitate small data movements between different local partitions of a large global array. The cost of monitoring transfer of particle properties between adjacent processes for particles migrating across sub-domain boundaries is found to be small. Detailed benchmarks are obtained on the Cray petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign. For operations on the particles in an 8192^3 simulation (0.55 trillion grid points) on 262,144 Cray XE6 cores, the new algorithm is found to be orders of magnitude faster relative to a prior algorithm in which each particle is tracked by the same parallel process at all times. This large speedup reduces the additional cost of tracking of order 300 million particles to just over 50% of the cost of computing the Eulerian velocity field at this scale. Improving support of PGAS models on

  12. Matrix-operator method for calculation of dynamics of intense beams of charged particles

    International Nuclear Information System (INIS)

    Kapchinskij, M.I.; Korenev, I.L.; Rinskij, L.A.

    1989-01-01

    A calculation algorithm for particle dynamics in high-current cyclic and linear accelerators is suggested. Particle motion in six-dimensional phase space is divided into coherent and incoherent components. The incoherent motion is described by the envelope method, with the particle bunch treated as a uniformly charged triaxial ellipsoid. The coherent motion is described in the paraxial approximation; each structural element of the accelerator transport channel is characterized by a six-dimensional matrix transforming the phase coordinates of the bunch centre and by a shift vector resulting from the deviation of the focusing element parameters from their design values. The effect of space-charge reflected forces is taken into account in the element matrices. The algorithm software is realized using the well-known TRANSPORT program.

  13. Analysis of Massively Parallel Discrete-Ordinates Transport Sweep Algorithms with Collisions

    International Nuclear Information System (INIS)

    Bailey, T.S.; Falgout, R.D.

    2008-01-01

    We present theoretical scaling models for a variety of discrete-ordinates sweep algorithms. In these models, we pay particular attention to the way each algorithm handles collisions. A collision is defined as a processor having multiple angles ready to be swept during one stage of the sweep. The models also take into account how subdomains are assigned to processors and how angles are grouped during the sweep. We describe a data-driven algorithm that resolves collisions efficiently during the sweep as well as other algorithms that have been designed to avoid collisions completely. Our models are validated using the ARGES and AMTRAN transport codes. We then use the models to study and predict scaling trends in all of the sweep algorithms.

  14. A neuro-fuzzy inference system tuned by particle swarm optimization algorithm for sensor monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Mauro Vitor de [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil). Div. de Instrumentacao e Confiabilidade Humana]. E-mail: mvitor@ien.gov.br; Schirru, Roberto [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Lab. de Monitoracao de Processos

    2005-07-01

    A neuro-fuzzy inference system (ANFIS) tuned by a particle swarm optimization (PSO) algorithm has been developed to monitor a relevant sensor in a nuclear plant using the information from other sensors. The antecedent parameters of the ANFIS that estimates the relevant sensor signal are optimized by a PSO algorithm, and the consequent parameters are obtained with a least-squares algorithm. The proposed sensor-monitoring algorithm was demonstrated through the estimation of the nuclear power value in a pressurized water reactor, using six other correlated signals as inputs to the ANFIS. The obtained results are compared to those of two similar ANFIS models, one using gradient descent (GD) and the other a genetic algorithm (GA) as the antecedent-parameter training algorithm. (author)

  15. A neuro-fuzzy inference system tuned by particle swarm optimization algorithm for sensor monitoring

    International Nuclear Information System (INIS)

    Oliveira, Mauro Vitor de; Schirru, Roberto

    2005-01-01

    A neuro-fuzzy inference system (ANFIS) tuned by a particle swarm optimization (PSO) algorithm has been developed to monitor a relevant sensor in a nuclear plant using the information from other sensors. The antecedent parameters of the ANFIS that estimates the relevant sensor signal are optimized by a PSO algorithm, and the consequent parameters are obtained with a least-squares algorithm. The proposed sensor-monitoring algorithm was demonstrated through the estimation of the nuclear power value in a pressurized water reactor, using six other correlated signals as inputs to the ANFIS. The obtained results are compared to those of two similar ANFIS models, one using gradient descent (GD) and the other a genetic algorithm (GA) as the antecedent-parameter training algorithm. (author)

  16. Innovations in ILC detector design using a particle flow algorithm approach

    International Nuclear Information System (INIS)

    Magill, S.; High Energy Physics

    2007-01-01

    The International Linear Collider (ILC) is a future e + e - collider that will produce particles with masses up to the design center-of-mass (CM) energy of 500 GeV. The ILC complements the Large Hadron Collider (LHC) which, although colliding protons at 14 TeV in the CM, will be luminosity-limited to particle production with masses up to ∼1-2 TeV. At the ILC, interesting cross-sections are small, but there are no backgrounds from underlying events, so masses should be able to be measured by hadronic decays to dijets (∼80% BR) as well as in leptonic decay modes. The precise measurement of jets will require major detector innovations, in particular to the calorimeter, which will be optimized to reconstruct final state particle 4-vectors--called the particle flow algorithm approach to jet reconstruction

  17. Ef: Software for Nonrelativistic Beam Simulation by Particle-in-Cell Algorithm

    Science.gov (United States)

    Boytsov, A. Yu.; Bulychev, A. A.

    2018-04-01

    Understanding of particle dynamics is crucial in the construction of electron guns, ion sources and other types of nonrelativistic beam devices. Apart from external guiding and focusing systems, a prominent role in the evolution of such low-energy beams is played by particle-particle interaction. Numerical simulations taking these effects into account are typically accomplished by the well-known particle-in-cell method. In practice, for convenient work a simulation program should not only implement this method, but also support parallelization, provide integration with CAD systems and allow access to the details of the simulation algorithm. To address these requirements, development of a new open source code - Ef - has been started. Its current features and main functionality are presented. Comparison with several analytical models demonstrates good agreement between the numerical results and the theory. Further development plans are discussed.

  18. Ef: Software for Nonrelativistic Beam Simulation by Particle-in-Cell Algorithm

    Directory of Open Access Journals (Sweden)

    Boytsov A. Yu.

    2018-01-01

    Full Text Available Understanding of particle dynamics is crucial in the construction of electron guns, ion sources and other types of nonrelativistic beam devices. Apart from external guiding and focusing systems, a prominent role in the evolution of such low-energy beams is played by particle-particle interaction. Numerical simulations taking these effects into account are typically accomplished by the well-known particle-in-cell method. In practice, for convenient work a simulation program should not only implement this method, but also support parallelization, provide integration with CAD systems and allow access to the details of the simulation algorithm. To address these requirements, development of a new open source code - Ef - has been started. Its current features and main functionality are presented. Comparison with several analytical models demonstrates good agreement between the numerical results and the theory. Further development plans are discussed.
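
    The particle-in-cell cycle referred to in both records follows a fixed pattern each time step: deposit particle charge on a grid, solve the field equation on that grid, gather the field back at the particle positions, and push the particles. The sketch below implements one such cycle for a 1D periodic electrostatic plasma; it is a generic textbook illustration, not the Ef code, and all parameters are arbitrary.

        import numpy as np

        def pic_step(x, v, q_over_m, dt, L, n_grid, n0):
            """One step of a 1D periodic electrostatic particle-in-cell cycle."""
            dx = L / n_grid
            # 1. Charge deposition: nearest-grid-point weighting minus a neutralising background n0.
            idx = np.floor(x / dx).astype(int) % n_grid
            rho = np.bincount(idx, minlength=n_grid) / dx - n0
            # 2. Field solve: d^2(phi)/dx^2 = -rho (units with epsilon_0 = 1), done spectrally.
            k = 2 * np.pi * np.fft.fftfreq(n_grid, d=dx)
            rho_k = np.fft.fft(rho)
            phi_k = np.zeros_like(rho_k)
            phi_k[1:] = rho_k[1:] / k[1:]**2
            E = np.real(np.fft.ifft(-1j * k * phi_k))   # E = -d(phi)/dx
            # 3. Gather the field at the particles and push them (leapfrog-style update).
            v = v + q_over_m * E[idx] * dt
            x = (x + v * dt) % L
            return x, v

        # Usage: 10,000 electrons (q/m = -1) in a periodic box of length 2*pi.
        rng = np.random.default_rng(3)
        L, Ng, Np = 2 * np.pi, 64, 10_000
        x, v = rng.uniform(0, L, Np), rng.normal(0, 0.1, Np)
        for _ in range(100):
            x, v = pic_step(x, v, q_over_m=-1.0, dt=0.05, L=L, n_grid=Ng, n0=Np / L)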

  19. A particle swarm optimization algorithm for beam angle selection in intensity-modulated radiotherapy planning

    International Nuclear Information System (INIS)

    Li Yongjie; Yao Dezhong; Yao, Jonathan; Chen Wufan

    2005-01-01

    Automatic beam angle selection is an important but challenging problem for intensity-modulated radiation therapy (IMRT) planning. Though many efforts have been made, it is still not very satisfactory in clinical IMRT practice because of the extensive computation required by the inverse problem. In this paper, a new technique named BASPSO (Beam Angle Selection with a Particle Swarm Optimization algorithm) is presented to improve the efficiency of the beam angle optimization problem. Originally developed as a tool for simulating social behaviour, the particle swarm optimization (PSO) algorithm is a relatively new population-based evolutionary optimization technique first introduced by Kennedy and Eberhart in 1995. In the proposed BASPSO, the beam angles are optimized using PSO by treating each beam configuration as a particle (individual), and the beam intensity maps for each beam configuration are optimized using the conjugate gradient (CG) algorithm. These two optimization processes are implemented iteratively. The performance of each individual is evaluated by a fitness value calculated with a physical objective function. A population of these individuals is evolved through generations by cooperation and competition among the individuals themselves. The optimization results for a simulated case with known optimal beam angles and two clinical cases (a prostate case and a head-and-neck case) show that PSO is valid and efficient and can speed up the beam angle optimization process. Furthermore, the performance comparisons based on the preliminary results indicate that, as a whole, the PSO-based algorithm seems to outperform, or at least compete with, the GA-based algorithm in computation time and robustness. In conclusion, the reported work suggests that the introduced PSO algorithm could act as a new promising solution to the beam angle optimization problem and potentially other optimization problems in IMRT, though further studies need to be carried out.

  20. A Combination of Genetic Algorithm and Particle Swarm Optimization for Vehicle Routing Problem with Time Windows.

    Science.gov (United States)

    Xu, Sheng-Hua; Liu, Ji-Ping; Zhang, Fu-Hao; Wang, Liang; Sun, Li-Jian

    2015-08-27

    A combination of a genetic algorithm and particle swarm optimization (PSO) for vehicle routing problems with time windows (VRPTW) is proposed in this paper. The improvements in the proposed algorithm include: using a real-number particle encoding method to decode the route and alleviate the computational burden, applying a linearly decreasing function based on the number of iterations to balance global and local exploration abilities, and integrating the crossover operator of the genetic algorithm to avoid premature convergence and local minima. The experimental results show that the proposed algorithm is not only efficient and competitive with other published results but can also obtain better solutions for the VRPTW. One new best-known solution for this benchmark problem is also reported.

  1. Modeling of Particle Transport on Channels and Gaps Exposed to Plasma Fluxes

    International Nuclear Information System (INIS)

    Nieto-Perez, Martin

    2008-01-01

    Many problems in particle transport in fusion devices involve the transport of plasma or eroded particles through channels or gaps, such as in the case of trying to assess damage to delicate optical diagnostics collecting light through a slit or determining the deposition and codeposition on the gaps between tiles of plasma-facing components. A dynamic-composition Monte Carlo code in the spirit of TRIDYN, previously developed to study composition changes on optical mirrors subject to ion bombardment, has been upgraded to include motion of particles through a volume defined by sets of plane surfaces. Particles sputtered or reflected from the walls of the channel/gap can be tracked as well, allowing the calculation of wall impurity transport, either back to the plasma (for the case of a gap) or to components separated from the plasma by a channel/slit (for the case of optical diagnostics). Two examples of the code application to particle transport in fusion devices will be presented in this work: one will evaluate the erosion/impurity deposition rate on a mirror separated from a plasma source by a slit; the other case will look at the enhanced emission of tile material in the region of the gap between two tiles

  2. Graphical User Interface for High Energy Multi-Particle Transport, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Computer codes such as MCNPX now have the capability to transport most high energy particle types (34 particle types now supported in MCNPX) with energies extending...

  3. Graphical User Interface for High Energy Multi-Particle Transport, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Computer codes such as MCNPX now have the capability to transport most high energy particle types (34 particle types now supported in MCNPX) with energies extending...

  4. ALGORITHMS FOR TRAFFIC MANAGEMENT IN THE INTELLIGENT TRANSPORT SYSTEMS

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    Full Text Available Traffic jams inconvenience drivers, cost billions of dollars per year and lead to a substantial increase in fuel consumption. In order to avoid such problems, this paper describes algorithms for traffic management in an intelligent transportation system, which collects traffic information in real time and is able to detect and manage congestion on the basis of this information. The results show that the proposed algorithms reduce the average travel time, emissions and fuel consumption. In particular, travel time decreased by about 23%, average fuel consumption by 9%, and average emissions by 10%.

  5. Silver (Ag) Transport Mechanisms in TRISO coated particles: A Critical Review

    Energy Technology Data Exchange (ETDEWEB)

    I J van Rooyen; J H Neethling; J A A Engelbrecht; P M van Rooyen; G Strydom

    2012-10-01

    Transport of 110mAg through the intact SiC layer of TRISO coated particles has been studied for approximately 30 years without arriving at a satisfactory explanation of the transport mechanism. In this paper the possible mechanisms postulated in previous experimental studies, in both in-reactor and out-of-reactor research environments, are critically reviewed; of particular interest is their relevance to very high temperature gas reactor operating and accident conditions. Among the factors thought to influence Ag transport are grain boundary stoichiometry, SiC grain size and shape, the presence of free silicon, nano-cracks, thermal decomposition, palladium attack, transmutation products, layer thinning and coated particle shape. Additionally, new insight into the nature and location of fission products has been gained through recent post-irradiation electron microscopy examination of TRISO coated particles from the DOE's fuel development program. The combination of the critical review and the new analyses indicates a direction for investigating the possible Ag transport mechanisms, including the confidence level with which these mechanisms may be experimentally verified.

  6. Silver (Ag) transport mechanisms in TRISO coated particles: A critical review

    Energy Technology Data Exchange (ETDEWEB)

    Rooyen, I.J. van, E-mail: isabella.vanrooyen@inl.gov [Idaho National Laboratory, Idaho Falls, ID 83415-6188 (United States); Dunzik-Gougar, M.L. [Department of Nuclear Engineering, Idaho State University, ID (United States); Rooyen, P.M. van [Philip M. van Rooyen Network Consultants, Midlands Estates (South Africa)

    2014-05-01

    Transport of 110mAg through the intact SiC layer of TRISO coated particles has been studied for approximately 30 years without arriving at a satisfactory explanation of the transport mechanism. In this paper the possible mechanisms postulated in previous experimental studies, in both in-reactor and out-of-reactor research environments, are critically reviewed; of particular interest is their relevance to very high temperature gas reactor operating and accident conditions. Among the factors thought to influence Ag transport are grain boundary stoichiometry, SiC grain size and shape, the presence of free silicon, nano-cracks, thermal decomposition, palladium attack, transmutation products, layer thinning and coated particle shape. Additionally, new insight into the nature and location of fission products has been gained through recent post-irradiation electron microscopy examination of TRISO coated particles from the DOE's fuel development program. The combination of the critical review and the new analyses indicates a direction for investigating the possible Ag transport mechanisms, including the confidence level with which these mechanisms may be experimentally verified.

  7. A particle-based simplified swarm optimization algorithm for reliability redundancy allocation problems

    International Nuclear Information System (INIS)

    Huang, Chia-Ling

    2015-01-01

    This paper proposes a new swarm intelligence method, the Particle-based Simplified Swarm Optimization (PSSO) algorithm, together with two modifications of the updating mechanism (UM), called N-UM and R-UM, and an orthogonal array test (OA), to solve reliability-redundancy allocation problems (RRAPs) successfully. One difficulty of RRAP is the need to maximize system reliability when the number of redundant components and the reliability of the corresponding components in each subsystem must be decided simultaneously under nonlinear constraints. In this paper, four RRAP benchmarks are used to demonstrate the applicability of the proposed PSSO, which combines the strengths of both PSO and SSO to optimize RRAPs belonging to mixed-integer nonlinear programming. When the computational results are compared with those of previously developed algorithms in the existing literature, the findings indicate that the proposed PSSO is highly competitive and performs well. - Highlights: • This paper proposes a particle-based simplified swarm optimization algorithm (PSSO) to optimize RRAP. • The modified UM and an OA are adopted to further improve the optimization of RRAP. • Four systems are introduced and the results demonstrate that the PSSO performs particularly well.

  8. Effect of ultrasonic stimulation on particle transport and fate over different lengths of porous media

    Science.gov (United States)

    Chen, Xingxin; Wu, Zhonghan; Cai, Qipeng; Cao, Wei

    2018-04-01

    It is well established that seismic waves traveling through porous media stimulate fluid flow and accelerate particle transport. However, the mechanism remains poorly understood. To quantify the coupling effect of hydrodynamic force, transportation distance, and ultrasonic stimulation on particle transport and fate in porous media, laboratory experiments were conducted using custom-built ultrasonic-controlled soil column equipment. Three column lengths (23 cm, 33 cm, and 43 cm) were selected to examine the influence of transportation distance. Transport experiments were performed with 0 W, 600 W, 1000 W, 1400 W, and 1800 W of applied ultrasound, and flow rates of 0.065 cm/s, 0.130 cm/s, and 0.195 cm/s, to establish the roles of ultrasonic stimulation and hydrodynamic force. The laboratory results suggest that whilst ultrasonic stimulation does inhibit suspended-particle deposition and accelerate deposited-particle release, both hydrodynamic force and transportation distance are the principal controlling factors. The median particle diameter for the peak concentration was approximately 50% of that retained in the soil column. Simulated particle-breakthrough curves using extended traditional filtration theory effectively described the experimental curves, particularly the curves that exhibited a higher tailing concentration.

  9. Multi-objective Reactive Power Optimization Based on Improved Particle Swarm Algorithm

    Science.gov (United States)

    Cui, Xue; Gao, Jian; Feng, Yunbin; Zou, Chenlu; Liu, Huanlei

    2018-01-01

    In this paper, an optimization model with the minimum active power loss, the minimum node voltage deviation and the maximum static voltage stability margin as objectives is proposed for reactive power optimization problems. By defining an index value for reactive power compensation, the optimal reactive power compensation nodes are selected. The particle swarm optimization algorithm is improved by introducing a selection pool of global bests and a probabilistic global best (p-gbest). A set of Pareto-optimal solutions is obtained by this algorithm, and by calculating the fuzzy membership values of the Pareto-optimal solutions, the individual with the smallest fuzzy membership value is selected as the final optimization result. The improved algorithm is used to optimize the reactive power of the IEEE 14-bus standard test system. The comparison and analysis of the results show that the optimization effect of the algorithm is very good.
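
    Once a Pareto front is available, the fuzzy-membership step reduces to a normalised per-objective score followed by a selection convention. The sketch below is a generic implementation of that step; the membership definition, the toy objective values, and the choice of selecting the smallest value (as stated in the abstract) are assumptions for illustration.

        import numpy as np

        def fuzzy_membership(F):
            """Normalised fuzzy membership of each Pareto solution.

            F: (n_solutions, n_objectives) array of minimisation objectives. For each
            objective, membership is 1 at the best value and 0 at the worst, linearly in
            between; rows are then normalised over the whole front.
            """
            f_min, f_max = F.min(axis=0), F.max(axis=0)
            span = np.where(f_max > f_min, f_max - f_min, 1.0)
            mu = (f_max - F) / span
            return mu.sum(axis=1) / mu.sum()

        # Hypothetical 3-objective Pareto front of 5 candidate compensation settings.
        F = np.array([[0.9, 3.1, 0.20],
                      [1.1, 2.8, 0.25],
                      [1.3, 2.5, 0.22],
                      [1.0, 3.0, 0.30],
                      [1.2, 2.6, 0.28]])
        m = fuzzy_membership(F)
        print(m, "-> selected index:", m.argmin())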

  10. Angular Distribution of Particles Emerging from a Diffusive Region and its Implications for the Fleck-Canfield Random Walk Algorithm for Implicit Monte Carlo Radiation Transport

    CERN Document Server

    Cooper, M A

    2000-01-01

    We present various approximations for the angular distribution of particles emerging from an optically thick, purely isotropically scattering region into a vacuum. Our motivation is to use such a distribution for the Fleck-Canfield random walk method [1] for implicit Monte Carlo (IMC) [2] radiation transport problems. We demonstrate that the cosine distribution recommended in the original random walk paper [1] is a poor approximation to the angular distribution predicted by transport theory. Then we examine other approximations that more closely match the transport angular distribution.
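
    A small sketch contrasting the cosine-law sampling recommended in the original random walk prescription with a diffusion-theory-motivated emergent distribution; the second density, p(μ) proportional to μ(2 + 3μ), is an illustrative Marshak-type stand-in chosen for this example, not the specific approximation derived in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_cosine(n):
            # p(mu) = 2*mu on [0, 1]  =>  inverse-CDF sampling gives mu = sqrt(xi)
            return np.sqrt(rng.random(n))

        def sample_current_weighted(n):
            # Illustrative alternative: p(mu) proportional to mu*(2 + 3*mu),
            # i.e. a diffusion-theory boundary flux weighted by the outgoing current,
            # sampled here by simple rejection (envelope value 5 = f(1))
            out = []
            while len(out) < n:
                mu = rng.random()
                if rng.random() * 5.0 < mu * (2.0 + 3.0 * mu):
                    out.append(mu)
            return np.array(out)

        # Mean emergent cosine: 2/3 for the cosine law, ~0.71 for the weighted form
        print(sample_cosine(100_000).mean(), sample_current_weighted(10_000).mean())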

  11. Development of an inter-layer solute transport algorithm for SOLTR computer program. Part 1. The algorithm

    International Nuclear Information System (INIS)

    Miller, I.; Roman, K.

    1979-12-01

    In order to perform studies of the influence of regional groundwater flow systems on the long-term performance of potential high-level nuclear waste repositories, it was determined that an adequate computer model would have to consider the full three-dimensional flow system. Golder Associates' SOLTR code, while three-dimensional, has an overly simple algorithm for simulating the passage of radionuclides from one aquifer to another above or below it. Part 1 of this report describes the algorithm developed to provide SOLTR with an improved capability for simulating interaquifer transport

  12. Algorithm of Data Reduce in Determination of Aerosol Particle Size Distribution at Damps/C

    International Nuclear Information System (INIS)

    Muhammad-Priyatna; Otto-Pribadi-Ruslanto

    2001-01-01

    An analysis was performed of the data reduction algorithm of the DMPS/C (Differential Mobility Particle Sizer with Condensation Particle Counter) system, which determines aerosol particle size distributions in the 0.01 μm to 1 μm diameter range. The DMPS/C system consists of hardware and software: the hardware measures the electrical mobilities of the aerosol particles, and the software determines the aerosol particle size distribution in diameter. Particle mobility and particle diameter are connected through the electric field; this relation is the basis of the data reduction program and of the conversion from particle mobility to particle diameter. The analysis gives a transfer function value, Ω, of 0.5. The data reduction program converts the mobility basis into a diameter basis with corrections for counting efficiency, the transfer function value, and multiply charged particles. (author)
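
    A hedged sketch of the mobility-to-diameter conversion that such a data reduction step relies on (standard aerosol relations, not the program described here); the gas properties are approximate room-condition values, and multiple-charge and transfer-function corrections are omitted.

        import math

        E_CHARGE = 1.602e-19   # elementary charge [C]
        MU_AIR = 1.81e-5       # dynamic viscosity of air [Pa s], approximate
        MFP = 68e-9            # gas mean free path [m], approximate

        def cunningham(d):
            kn = 2.0 * MFP / d
            return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

        def mobility(d, n_charges=1):
            # Electrical mobility Z = n e Cc(d) / (3 pi mu d)
            return n_charges * E_CHARGE * cunningham(d) / (3.0 * math.pi * MU_AIR * d)

        def diameter_from_mobility(z, n_charges=1, d0=50e-9, tol=1e-6):
            # Fixed-point iteration, since Cc depends on the unknown diameter
            d = d0
            for _ in range(200):
                d_new = n_charges * E_CHARGE * cunningham(d) / (3.0 * math.pi * MU_AIR * z)
                if abs(d_new - d) < tol * d:
                    break
                d = d_new
            return d_new

        z = mobility(100e-9)                        # mobility of a 100 nm singly charged particle
        print(z, diameter_from_mobility(z) * 1e9)   # recovers a diameter of ~100 nm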

  13. Quasilinear Line Broadened Model for Energetic Particle Transport

    Science.gov (United States)

    Ghantous, Katy; Gorelenkov, Nikolai; Berk, Herbert

    2011-10-01

    We present a self-consistent quasi-linear model that describes wave-particle interaction in toroidal geometry and computes fast ion transport during TAE mode evolution. The model bridges the gap between single mode resonances, where it predicts the analytically expected saturation levels, and the case of multiple overlapping modes, where particles diffuse across phase space. Results are presented in the large aspect ratio limit, where analytic expressions are used for the Fourier harmonics of the power exchange between waves and particles. Implementation of a more realistic mode structure calculated by the NOVAK code is also presented. This work is funded by DOE contract DE-AC02-09CH11466.

  14. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A

    DEFF Research Database (Denmark)

    Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael

    2015-01-01

    The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An...

  15. Relationship between particle and heat transport in JT-60U plasmas with internal transport barrier

    International Nuclear Information System (INIS)

    Takenaga, H.

    2002-01-01

    Relationship between particle and heat transport in an internal transport barrier (ITB) has been systematically investigated for the first time in reversed shear (RS) and high-β_p ELMy H-mode (weak positive shear) plasmas of JT-60U, in order to understand the compatibility of improved energy confinement with effective particle control, such as exhaust of helium ash and reduction of impurity contamination. In the RS plasma, no helium or carbon accumulation inside the ITB is observed, even with highly improved energy confinement. In the high-β_p plasma, both the helium and carbon density profiles are flat. As the ion temperature profile changes from parabolic- to box-type, the helium diffusivity decreases by a factor of about 2, as does the ion thermal diffusivity, in the RS plasma. The measured soft X-ray profile is more peaked than that calculated by assuming the same n_Ar profile as the n_e profile in the Ar-injected RS plasma with the box-type profile, suggesting accumulation of Ar inside the ITB. Particle transport is improved with no change of ion temperature in the RS plasma when the density fluctuation is drastically reduced by a pellet injection. (author)

  16. Ice cloud formation potential by free tropospheric particles from long-range transport over the Northern Atlantic Ocean

    Science.gov (United States)

    China, Swarup; Alpert, Peter A.; Zhang, Bo; Schum, Simeon; Dzepina, Katja; Wright, Kendra; Owen, R. Chris; Fialho, Paulo; Mazzoleni, Lynn R.; Mazzoleni, Claudio; Knopf, Daniel A.

    2017-03-01

    Long-range transported free tropospheric particles can play a significant role in heterogeneous ice nucleation. Using optical and electron microscopy we examine the physicochemical characteristics of ice nucleating particles (INPs). Particles were collected on substrates from the free troposphere at the remote Pico Mountain Observatory in the Azores Islands, after long-range transport and aging over the Atlantic Ocean. We investigate four specific events to study the ice formation potential of the collected particles with different ages and transport patterns. We use single-particle analysis, as well as bulk analysis, to characterize particle populations. Both analyses show substantial differences in particle composition between samples from the four events; in addition, single-particle microscopy analysis indicates that most particles are coated by organic material. The identified INPs contained mixtures of dust, aged sea salt and soot, and organic material acquired either at the source or during transport. The temperature and relative humidity (RH) at which ice formed varied by only 5% between samples, despite differences in particle composition, sources, and transport patterns. We hypothesize that this small variation in the onset RH may be due to the coating material on the particles. This study underscores and motivates the need to further investigate how long-range transported and atmospherically aged free tropospheric particles impact ice cloud formation.

  17. Canonical algorithms for numerical integration of charged particle motion equations

    Science.gov (United States)

    Efimov, I. N.; Morozov, E. A.; Morozova, A. R.

    2017-02-01

    A technique for numerically integrating the equation of charged particle motion in a magnetic field is considered. It is based on the canonical transformations of the phase space in Hamiltonian mechanics. The canonical transformations make the integration process stable against counting error accumulation. The integration algorithms contain a minimum possible amount of arithmetics and can be used to design accelerators and devices of electron and ion optics.
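
    A minimal sketch of why structure-preserving integration matters for charged-particle orbits. This is the standard Boris-type magnetic rotation, shown here only as an illustration of bounded energy error; it is not the canonical-transformation scheme developed in the paper, and the field and step size are arbitrary.

        import numpy as np

        Q_OVER_M = 1.0
        B = np.array([0.0, 0.0, 1.0])      # uniform magnetic field (arbitrary units)
        DT = 0.05

        def step_euler(x, v):
            a = Q_OVER_M * np.cross(v, B)  # Lorentz force q v x B per unit mass
            return x + DT * v, v + DT * a

        def step_rotation(x, v):
            # Boris-style velocity rotation; exactly preserves |v| in a pure magnetic field
            t = 0.5 * DT * Q_OVER_M * B
            s = 2.0 * t / (1.0 + t @ t)
            v_new = v + np.cross(v + np.cross(v, t), s)
            return x + DT * v_new, v_new

        x_e = x_r = np.zeros(3)
        v_e = v_r = np.array([1.0, 0.0, 0.0])
        for _ in range(2000):
            x_e, v_e = step_euler(x_e, v_e)
            x_r, v_r = step_rotation(x_r, v_r)

        # Kinetic energy: the explicit Euler orbit spirals outward and gains energy,
        # while the rotation-based step stays at the initial value of 0.5
        print(0.5 * v_e @ v_e, 0.5 * v_r @ v_r)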

  18. A Novel Path Planning for Robots Based on Rapidly-Exploring Random Tree and Particle Swarm Optimizer Algorithm

    Directory of Open Access Journals (Sweden)

    Zhou Feng

    2013-09-01

    Full Text Available A path planning method for mobile robots based on the Rapidly-exploring Random Tree (RRT) and the Particle Swarm Optimizer (PSO) is proposed. First, the grid method is used to describe the working space of the mobile robot; then the Rapidly-exploring Random Tree algorithm is used to obtain a global navigation path, and the Particle Swarm Optimizer algorithm is adopted to obtain a better path. Computer experiment results demonstrate that this novel algorithm can plan an optimal path rapidly in a cluttered environment. Successful obstacle avoidance is achieved, and the model is robust and performs reliably.
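
    A toy 2D RRT sketch under stated assumptions: a continuous workspace with one hypothetical circular obstacle rather than the paper's grid model, and without the PSO path-refinement stage.

        import math, random

        random.seed(1)
        OBSTACLES = [((4.0, 4.0), 1.5)]           # hypothetical circular obstacles: (center, radius)
        START, GOAL, STEP, GOAL_TOL = (0.0, 0.0), (9.0, 9.0), 0.5, 0.5

        def collision_free(p):
            return all(math.dist(p, c) > r for c, r in OBSTACLES)

        def rrt(max_iters=5000):
            nodes, parent = [START], {START: None}
            for _ in range(max_iters):
                sample = (random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
                near = min(nodes, key=lambda n: math.dist(n, sample))
                d = math.dist(near, sample)
                new = sample if d <= STEP else (near[0] + STEP * (sample[0] - near[0]) / d,
                                                near[1] + STEP * (sample[1] - near[1]) / d)
                if not collision_free(new):
                    continue                       # grow the tree only into free space
                nodes.append(new)
                parent[new] = near
                if math.dist(new, GOAL) < GOAL_TOL:
                    path, n = [], new
                    while n is not None:           # walk back to the root to recover the path
                        path.append(n)
                        n = parent[n]
                    return path[::-1]
            return None

        path = rrt()
        print(None if path is None else (len(path), path[-1]))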

  19. Neutral particle transport modeling with a reflective source in the plasma edge

    International Nuclear Information System (INIS)

    Valenti, M.E.

    1992-01-01

    A reflective source term is incorporated into the Boltzmann neutral particle transport equation to account for boundary reflection. This reflective neutral model is integrated over a uniform axis and subsequently discretized. The discrete two-dimensional equations are solved iteratively with a computer code. The results of the reflective neutral model computer code are benchmarked against the neutral particle transport code ONEDANT. The benchmark process demonstrates the validity of the reflective neutral model. The reflective neutral model is coupled to the Braams plasma particle and energy transport code. The coupled system generates self-consistent plasma edge transport solutions. These solutions, which utilize the transport equation, are similar to solutions which utilize simple plasma edge neutral models when high-recycle divertors are modeled. In the high-recycle mode, the high electron density at the divertor plate reduces the mean free path of plate neutrals; hence the similarity in results. It is concluded that simple neutral models are sufficient for the analysis of high-recycle power reactor edge plasmas. Low-recycle edge plasmas were not examined

  20. Multi-objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization.

    Directory of Open Access Journals (Sweden)

    Maryam Mousavi

    Full Text Available Flexible manufacturing system (FMS enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs. An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA, particle swarm optimization (PSO, and hybrid GA-PSO to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software.

  1. Multi-objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization.

    Science.gov (United States)

    Mousavi, Maryam; Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah

    2017-01-01

    Flexible manufacturing system (FMS) enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software.

  2. Parameter estimation of fractional-order chaotic systems by using quantum parallel particle swarm optimization algorithm.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Full Text Available Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.

  3. Inverse Estimation of Surface Radiation Properties Using Repulsive Particle Swarm Optimization Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyun Ho [Sejong University, Sejong (Korea, Republic of); Kim, Ki Wan [Agency for Defense Development, Daejeon (Korea, Republic of)

    2014-09-15

    The heat transfer mechanism for radiation is directly related to the emission of photons and electromagnetic waves. Depending on the participation of the medium, the radiation can be classified into two forms: surface and gas radiation. In the present study, unknown radiation properties were estimated using an inverse boundary analysis of surface radiation in an axisymmetric cylindrical enclosure. For efficiency, a repulsive particle swarm optimization (RPSO) algorithm, which is a relatively recent heuristic search method, was used as inverse solver. By comparing the convergence rates and accuracies with the results of a genetic algorithm (GA), the performances of the proposed RPSO algorithm as an inverse solver was verified when applied to the inverse analysis of the surface radiation problem.

  4. Inverse Estimation of Surface Radiation Properties Using Repulsive Particle Swarm Optimization Algorithm

    International Nuclear Information System (INIS)

    Lee, Kyun Ho; Kim, Ki Wan

    2014-01-01

    The heat transfer mechanism for radiation is directly related to the emission of photons and electromagnetic waves. Depending on the participation of the medium, the radiation can be classified into two forms: surface and gas radiation. In the present study, unknown radiation properties were estimated using an inverse boundary analysis of surface radiation in an axisymmetric cylindrical enclosure. For efficiency, a repulsive particle swarm optimization (RPSO) algorithm, which is a relatively recent heuristic search method, was used as inverse solver. By comparing the convergence rates and accuracies with the results of a genetic algorithm (GA), the performances of the proposed RPSO algorithm as an inverse solver was verified when applied to the inverse analysis of the surface radiation problem

  5. The separation of radionuclide migration by solution and particle transport in LLRW repository buffer material

    International Nuclear Information System (INIS)

    Torok, J.; Buckley, L.P.; Woods, B.L.

    1989-01-01

    Laboratory-scale lysimeter experiments were performed with simulated waste forms placed in candidate buffer materials which have been chosen for a low-level radioactive waste repository. Radionuclide releases into the effluent water and radionuclide capture by the buffer material were determined. The results could not be explained by traditional solution transport mechanisms, and transport by particles released from the waste form and/or transport by buffer particles were suspected as the dominant mechanism for radionuclide release from the lysimeters. To elucidate the relative contribution of particle and solution transport, the waste forms were replaced by a wafer of neutron-activated buffer soaked with selected soluble isotopes. Particle transport was determined by the movement of gamma-emitting neutron-activation products through the lysimeter. Solution transport was quantified by comparing the migration of soluble radionuclides relative to the transport of neutron activation products. The new approach for monitoring radionuclide migration in soil is presented. It facilitates the determination of most of the fundamental coefficients required to model the transport process

  6. Fitness Estimation Based Particle Swarm Optimization Algorithm for Layout Design of Truss Structures

    Directory of Open Access Journals (Sweden)

    Ayang Xiao

    2014-01-01

    Full Text Available Due to the fact that vastly different variables and constraints are simultaneously considered, truss layout optimization is a typical difficult constrained mixed-integer nonlinear program. Moreover, the computational cost of truss analysis is often quite expensive. In this paper, a novel fitness estimation based particle swarm optimization algorithm with an adaptive penalty function approach (FEPSO-AP) is proposed to handle this problem. FEPSO-AP adopts a special fitness estimation strategy to evaluate similar particles in the current population, with the purpose of reducing the computational cost. Furthermore, a concise adaptive penalty function is employed by FEPSO-AP, which can handle multiple constraints effectively by making good use of historical iteration information. Four benchmark examples with fixed topologies and up to 44 design dimensions were studied to verify the generality and efficiency of the proposed algorithm. Numerical results of the present work, compared with results of other state-of-the-art hybrid algorithms in the literature, demonstrate that the convergence rate and the solution quality of FEPSO-AP are essentially competitive.

  7. Computational methods for two-phase flow and particle transport

    CERN Document Server

    Lee, Wen Ho

    2013-01-01

    This book describes mathematical formulations and computational methods for solving two-phase flow problems with a computer code that calculates thermal hydraulic problems related to light water and fast breeder reactors. The physical model also handles the particle and gas flow problems that arise from coal gasification and fluidized beds. The second part of this book deals with the computational methods for particle transport.

  8. Time-dependent Perpendicular Transport of Energetic Particles for Different Turbulence Configurations and Parallel Transport Models

    Energy Technology Data Exchange (ETDEWEB)

    Lasuik, J.; Shalchi, A., E-mail: andreasm4@yahoo.com [Department of Physics and Astronomy, University of Manitoba, Winnipeg, MB R3T 2N2 (Canada)

    2017-09-20

    Recently, a new theory for the transport of energetic particles across a mean magnetic field was presented. Compared to other nonlinear theories the new approach has the advantage that it provides a full time-dependent description of the transport. Furthermore, a diffusion approximation is no longer part of that theory. The purpose of this paper is to combine this new approach with a time-dependent model for parallel transport and different turbulence configurations in order to explore the parameter regimes for which we get ballistic transport, compound subdiffusion, and normal Markovian diffusion.

  9. The Random Ray Method for neutral particle transport

    Energy Technology Data Exchange (ETDEWEB)

    Tramm, John R., E-mail: jtramm@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science Engineering, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Argonne National Laboratory, Mathematics and Computer Science Department 9700 S Cass Ave, Argonne, IL 60439 (United States); Smith, Kord S., E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science Engineering, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Forget, Benoit, E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science Engineering, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Siegel, Andrew R., E-mail: siegela@mcs.anl.gov [Argonne National Laboratory, Mathematics and Computer Science Department 9700 S Cass Ave, Argonne, IL 60439 (United States)

    2017-08-01

    A new approach to solving partial differential equations (PDEs) based on the method of characteristics (MOC) is presented. The Random Ray Method (TRRM) uses a stochastic rather than deterministic discretization of characteristic tracks to integrate the phase space of a problem. TRRM is potentially applicable in a number of transport simulation fields where long characteristic methods are used, such as neutron transport and gamma ray transport in reactor physics as well as radiative transfer in astrophysics. In this study, TRRM is developed and then tested on a series of exemplar reactor physics benchmark problems. The results show extreme improvements in memory efficiency compared to deterministic MOC methods, while also reducing algorithmic complexity, allowing for a sparser computational grid to be used while maintaining accuracy.
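
    A toy sketch of the per-segment relations that characteristic-based methods such as TRRM rely on: angular flux attenuated along a randomly sampled track through flat-source regions, with the standard track-averaged flux tally. The 1D slab data and the cosine-law direction sampling are illustrative assumptions, not the TRRM estimator itself.

        import math, random

        random.seed(0)
        WIDTHS  = [1.0, 2.0, 1.0]     # region widths [cm] (hypothetical)
        SIGMA_T = [0.5, 1.5, 0.5]     # total cross sections [1/cm]
        SOURCE  = [1.0, 0.2, 1.0]     # flat isotropic sources per region

        def trace_random_ray():
            """One random characteristic: random entry face and cosine-law direction."""
            mu = math.sqrt(random.random())
            order = list(range(len(WIDTHS)))
            if random.random() < 0.5:
                order.reverse()
            psi, tallies = 0.0, [0.0] * len(WIDTHS)
            for i in order:
                tau = SIGMA_T[i] * WIDTHS[i] / mu          # optical length of the segment
                q = SOURCE[i] / SIGMA_T[i]
                psi_out = q + (psi - q) * math.exp(-tau)   # exact attenuation along the track
                tallies[i] = q + (psi - psi_out) / tau     # track-averaged angular flux
                psi = psi_out
            return tallies

        n_rays = 20_000
        flux = [0.0] * len(WIDTHS)
        for _ in range(n_rays):
            for i, t in enumerate(trace_random_ray()):
                flux[i] += t / n_rays
        print(flux)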

  10. The Random Ray Method for neutral particle transport

    International Nuclear Information System (INIS)

    Tramm, John R.; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2017-01-01

    A new approach to solving partial differential equations (PDEs) based on the method of characteristics (MOC) is presented. The Random Ray Method (TRRM) uses a stochastic rather than deterministic discretization of characteristic tracks to integrate the phase space of a problem. TRRM is potentially applicable in a number of transport simulation fields where long characteristic methods are used, such as neutron transport and gamma ray transport in reactor physics as well as radiative transfer in astrophysics. In this study, TRRM is developed and then tested on a series of exemplar reactor physics benchmark problems. The results show extreme improvements in memory efficiency compared to deterministic MOC methods, while also reducing algorithmic complexity, allowing for a sparser computational grid to be used while maintaining accuracy.

  11. A simple algorithm for measuring particle size distributions on an uneven background from TEM images

    DEFF Research Database (Denmark)

    Gontard, Lionel Cervera; Ozkaya, Dogan; Dunin-Borkowski, Rafal E.

    2011-01-01

    Nanoparticles have a wide range of applications in science and technology. Their sizes are often measured using transmission electron microscopy (TEM) or X-ray diffraction. Here, we describe a simple computer algorithm for measuring particle size distributions from TEM images in the presence of an uneven background. An application to images of heterogeneous catalysts is presented.

  12. Microbial and Organic Fine Particle Transport Dynamics in Streams - a Combined Experimental and Stochastic Modeling Approach

    Science.gov (United States)

    Drummond, Jen; Davies-Colley, Rob; Stott, Rebecca; Sukias, James; Nagels, John; Sharp, Alice; Packman, Aaron

    2014-05-01

    Transport dynamics of microbial cells and organic fine particles are important to stream ecology and biogeochemistry. Cells and particles continuously deposit and resuspend during downstream transport owing to a variety of processes including gravitational settling, interactions with in-stream structures or biofilms at the sediment-water interface, and hyporheic exchange and filtration within underlying sediments. Deposited cells and particles are also resuspended following increases in streamflow. Fine particle retention influences biogeochemical processing of substrates and nutrients (C, N, P), while remobilization of pathogenic microbes during flood events presents a hazard to downstream uses such as water supplies and recreation. We are conducting studies to gain insights into the dynamics of fine particles and microbes in streams, with a campaign of experiments and modeling. The results improve understanding of fine sediment transport, carbon cycling, nutrient spiraling, and microbial hazards in streams. We developed a stochastic model to describe the transport and retention of fine particles and microbes in rivers that accounts for hyporheic exchange and transport through porewaters, reversible filtration within the streambed, and microbial inactivation in the water column and subsurface. This model framework is an advance over previous work in that it incorporates detailed transport and retention processes that are amenable to measurement. Solute, particle, and microbial transport were observed both locally within sediment and at the whole-stream scale. A multi-tracer whole-stream injection experiment compared the transport and retention of a conservative solute, fluorescent fine particles, and the fecal indicator bacterium Escherichia coli. Retention occurred within both the underlying sediment bed and stands of submerged macrophytes. The results demonstrate that the combination of local measurements, whole-stream tracer experiments, and advanced modeling

  13. Helium, Iron and Electron Particle Transport and Energy Transport Studies on the TFTR Tokamak

    Science.gov (United States)

    Synakowski, E. J.; Efthimion, P. C.; Rewoldt, G.; Stratton, B. C.; Tang, W. M.; Grek, B.; Hill, K. W.; Hulse, R. A.; Johnson, D .W.; Mansfield, D. K.; McCune, D.; Mikkelsen, D. R.; Park, H. K.; Ramsey, A. T.; Redi, M. H.; Scott, S. D.; Taylor, G.; Timberlake, J.; Zarnstorff, M. C. (Princeton Univ., NJ (United States). Plasma Physics Lab.); Kissick, M. W. (Wisconsin Univ., Madison, WI (United States))

    1993-03-01

    Results from helium, iron, and electron transport on TFTR in L-mode and Supershot deuterium plasmas with the same toroidal field, plasma current, and neutral beam heating power are presented. They are compared to results from thermal transport analysis based on power balance. Particle diffusivities and thermal conductivities are radially hollow and larger than neoclassical values, except possibly near the magnetic axis. The ion channel dominates over the electron channel in both particle and thermal diffusion. A peaked helium profile, supported by inward convection that is stronger than predicted by neoclassical theory, is measured in the Supershot. The helium profile shape is consistent with predictions from quasilinear electrostatic drift-wave theory. While the perturbative particle diffusion coefficients of all three species are similar in the Supershot, differences are found in the L-Mode. Quasilinear theory calculations of the ratios of impurity diffusivities are in good accord with measurements. Theory estimates indicate that the ion heat flux should be larger than the electron heat flux, consistent with power balance analysis. However, theoretical values of the ratio of the ion to electron heat flux can be more than a factor of three larger than experimental values. A correlation between helium diffusion and ion thermal transport is observed and has favorable implications for sustained ignition of a tokamak fusion reactor.

  14. Helium, iron and electron particle transport and energy transport studies on the TFTR tokamak

    International Nuclear Information System (INIS)

    Synakowski, E.J.; Efthimion, P.C.; Rewoldt, G.; Stratton, B.C.; Tang, W.M.; Grek, B.; Hill, K.W.; Hulse, R.A.; Johnson, D.W.; Mansfield, D.K.; McCune, D.; Mikkelsen, D.R.; Park, H.K.; Ramsey, A.T.; Redi, M.H.; Scott, S.D.; Taylor, G.; Timberlake, J.; Zarnstorff, M.C.

    1993-03-01

    Results from helium, iron, and electron transport on TFTR in L-mode and Supershot deuterium plasmas with the same toroidal field, plasma current, and neutral beam heating power are presented. They are compared to results from thermal transport analysis based on power balance. Particle diffusivities and thermal conductivities are radially hollow and larger than neoclassical values, except possibly near the magnetic axis. The ion channel dominates over the electron channel in both particle and thermal diffusion. A peaked helium profile, supported by inward convection that is stronger than predicted by neoclassical theory, is measured in the Supershot. The helium profile shape is consistent with predictions from quasilinear electrostatic drift-wave theory. While the perturbative particle diffusion coefficients of all three species are similar in the Supershot, differences are found in the L-Mode. Quasilinear theory calculations of the ratios of impurity diffusivities are in good accord with measurements. Theory estimates indicate that the ion heat flux should be larger than the electron heat flux, consistent with power balance analysis. However, theoretical values of the ratio of the ion to electron heat flux can be more than a factor of three larger than experimental values. A correlation between helium diffusion and ion thermal transport is observed and has favorable implications for sustained ignition of a tokamak fusion reactor.

  15. An optimization method of relativistic backward wave oscillator using particle simulation and genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Zaigao; Wang, Jianguo [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China); Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi' an, Shaanxi 710024 (China); Wang, Yue; Qiao, Hailiang; Zhang, Dianhui [Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi' an, Shaanxi 710024 (China); Guo, Weijie [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China)

    2013-11-15

    An optimal design method for high-power microwave sources using particle simulation and parallel genetic algorithms is presented in this paper. The output power of the high-power microwave device, simulated by the fully electromagnetic particle simulation code UNIPIC, is taken as the fitness function, and float-encoding genetic algorithms are used to optimize the high-power microwave devices. Using this method, we encode the heights of the non-uniform slow wave structure in a relativistic backward wave oscillator (RBWO) and optimize the parameters on massively parallel processors. Simulation results demonstrate that the optimal parameters of the non-uniform slow wave structure in the RBWO can be obtained, and the output microwave power increases by 52.6% after the device is optimized.

  16. Recent advances in neutral particle transport methods and codes

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1996-01-01

    An overview of ORNL's three-dimensional neutral particle transport code, TORT, is presented. Special features of the code that make it invaluable for large applications are summarized for the prospective user. Advanced capabilities currently under development and installation in the production release of TORT are discussed; they include: multitasking on Cray platforms running the UNICOS operating system; the Adjacent-cell Preconditioning acceleration scheme; and graphics codes for displaying computed quantities such as the flux. Further developments for TORT and its companion codes to enhance their present capabilities, as well as to expand their range of applications, are discussed. Speculation on the next generation of neutron particle transport codes at ORNL, especially regarding unstructured grids and high-order spatial approximations, is also mentioned

  17. Optimization of Multipurpose Reservoir Operation with Application Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Elahe Fallah Mehdipour

    2012-12-01

    Full Text Available Optimal operation of multipurpose reservoirs is one of the complex and sometimes nonlinear problems in the field of multi-objective optimization. Evolutionary algorithms are optimization tools that search the decision space by simulating natural biological evolution and present a set of points as the optimal solutions of the problem. In this research, the application of multi-objective particle swarm optimization (MOPSO) to the optimal operation of the Bazoft reservoir with different objectives, including generating hydropower energy, supplying downstream demands (drinking, industry and agriculture), recreation, and flood control, has been considered. In this regard, solution sets of the MOPSO algorithm for pairwise combinations of objectives were first compared with compromise programming (CP) using different weighting and power coefficients; in all combinations of objectives the MOPSO algorithm was more capable than CP of finding solutions with an appropriate distribution, and these solutions dominated the CP solutions. Then, the end points of the solution set from the MOPSO algorithm were compared with nonlinear programming (NLP) results. Results showed that the MOPSO algorithm, with a 0.3 percent difference from the NLP results, is more capable of presenting optimal solutions at the end points of the solution set.

  18. Ultrasonic particle image velocimetry for improved flow gradient imaging: algorithms, methodology and validation

    International Nuclear Information System (INIS)

    Niu Lili; Qian Ming; Yu Wentao; Jin Qiaofeng; Ling Tao; Zheng Hairong; Wan Kun; Gao Shen

    2010-01-01

    This paper presents a new algorithm for ultrasonic particle image velocimetry (Echo PIV) for improving the flow velocity measurement accuracy and efficiency in regions with high velocity gradients. The conventional Echo PIV algorithm has been modified by incorporating a multiple iterative algorithm, a sub-pixel method, a filter and interpolation method, and a spurious vector elimination algorithm. The new algorithm's performance is assessed by analyzing simulated images with known displacements, and ultrasonic B-mode images of in vitro laminar pipe flow, rotational flow, and in vivo rat carotid arterial flow. Results for the simulated images show that the new algorithm produces a much smaller bias from the known displacements. For laminar flow, the new algorithm results in a 1.1% deviation from the analytically derived value, versus 8.8% for the conventional algorithm. The vector quality evaluation for the rotational flow imaging shows that the new algorithm produces better velocity vectors. For in vivo rat carotid arterial flow imaging, the results from the new algorithm deviate on average by 6.6% from the Doppler-measured peak velocities, compared to 15% for the conventional algorithm. The new Echo PIV algorithm is able to effectively improve the measurement accuracy in imaging flow fields with high velocity gradients.
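
    A hedged sketch of the core PIV step such algorithms build on: FFT-based cross-correlation of two interrogation windows with a three-point Gaussian sub-pixel peak fit. This is a generic textbook scheme, not the authors' Echo PIV code; the iterative, filtering, and vector-validation stages are omitted.

        import numpy as np

        def subpixel_peak(c):
            """Three-point Gaussian sub-pixel fit around the correlation peak."""
            i, j = np.unravel_index(np.argmax(c), c.shape)
            def fit(cm, c0, cp):
                lm, l0, lp = np.log(cm), np.log(c0), np.log(cp)
                return 0.5 * (lm - lp) / (lm - 2.0 * l0 + lp)
            return i + fit(c[i - 1, j], c[i, j], c[i + 1, j]), \
                   j + fit(c[i, j - 1], c[i, j], c[i, j + 1])

        def displacement(window_a, window_b):
            """Estimate the shift of window_b relative to window_a."""
            a = window_a - window_a.mean()
            b = window_b - window_b.mean()
            c = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b))))
            c -= c.min() - 1e-12            # keep values positive for the log fit
            pi, pj = subpixel_peak(c)
            return pi - a.shape[0] // 2, pj - a.shape[1] // 2

        # Synthetic check: shift a random speckle pattern by (2, 3) pixels
        rng = np.random.default_rng(0)
        img = rng.random((64, 64))
        shifted = np.roll(img, (2, 3), axis=(0, 1))
        print(displacement(img, shifted))   # approximately (2.0, 3.0)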

  19. Axisymmetric charge-conservative electromagnetic particle simulation algorithm on unstructured grids: Application to microwave vacuum electronic devices

    Science.gov (United States)

    Na, Dong-Yeop; Omelchenko, Yuri A.; Moon, Haksu; Borges, Ben-Hur V.; Teixeira, Fernando L.

    2017-10-01

    We present a charge-conservative electromagnetic particle-in-cell (EM-PIC) algorithm optimized for the analysis of vacuum electronic devices (VEDs) with cylindrical symmetry (axisymmetry). We exploit the axisymmetry present in the device geometry, fields, and sources to reduce the dimensionality of the problem from 3D to 2D. Further, we employ 'transformation optics' principles to map the original problem in polar coordinates with metric tensor diag (1 ,ρ2 , 1) to an equivalent problem on a Cartesian metric tensor diag (1 , 1 , 1) with an effective (artificial) inhomogeneous medium introduced. The resulting problem in the meridian (ρz) plane is discretized using an unstructured 2D mesh considering TEϕ-polarized fields. Electromagnetic field and source (node-based charges and edge-based currents) variables are expressed as differential forms of various degrees, and discretized using Whitney forms. Using leapfrog time integration, we obtain a mixed E - B finite-element time-domain scheme for the full-discrete Maxwell's equations. We achieve a local and explicit time update for the field equations by employing the sparse approximate inverse (SPAI) algorithm. Interpolating field values to particles' positions for solving Newton-Lorentz equations of motion is also done via Whitney forms. Particles are advanced using the Boris algorithm with relativistic correction. A recently introduced charge-conserving scatter scheme tailored for 2D unstructured grids is used in the scatter step. The algorithm is validated considering cylindrical cavity and space-charge-limited cylindrical diode problems. We use the algorithm to investigate the physical performance of VEDs designed to harness particle bunching effects arising from the coherent (resonance) Cerenkov electron beam interactions within micro-machined slow wave structures.
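
    For reference, a standard relativistic Boris push of the kind the abstract refers to, in the generic textbook form with u = γv; this is not the Whitney-form axisymmetric implementation described above, and the field values and charge-to-mass ratio below are illustrative.

        import numpy as np

        C = 299_792_458.0                                   # speed of light [m/s]

        def boris_relativistic(x, u, E, B, q_over_m, dt):
            """Advance position x and momentum-per-mass u = gamma*v by one time step."""
            h = 0.5 * q_over_m * dt
            u_minus = u + h * E                             # first half electric kick
            gamma = np.sqrt(1.0 + (u_minus @ u_minus) / C**2)
            t = h * B / gamma                               # magnetic rotation vector
            s = 2.0 * t / (1.0 + t @ t)
            u_plus = u_minus + np.cross(u_minus + np.cross(u_minus, t), s)
            u_new = u_plus + h * E                          # second half electric kick
            v_new = u_new / np.sqrt(1.0 + (u_new @ u_new) / C**2)
            return x + dt * v_new, u_new

        # Electron-like charge-to-mass ratio in crossed E and B fields (illustrative values)
        x, u = np.zeros(3), np.array([1.0e7, 0.0, 0.0])
        for _ in range(100):
            x, u = boris_relativistic(x, u, np.array([0.0, 1.0e4, 0.0]),
                                      np.array([0.0, 0.0, 1.0e-2]), -1.76e11, 1.0e-12)
        print(x, u)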

  20. Efficient solution to the stagnation problem of the particle swarm optimization algorithm for phase diversity.

    Science.gov (United States)

    Qi, Xin; Ju, Guohao; Xu, Shuyan

    2018-04-10

    The phase diversity (PD) technique needs optimization algorithms to minimize the error metric and find the global minimum. Particle swarm optimization (PSO) is very suitable for PD due to its simple structure, fast convergence, and global searching ability. However, the traditional PSO algorithm for PD still suffers from the stagnation problem (premature convergence), which can result in a wrong solution. In this paper, the stagnation problem of the traditional PSO algorithm for PD is illustrated first. Then, an explicit strategy is proposed to solve this problem, based on an in-depth understanding of the inherent optimization mechanism of the PSO algorithm. Specifically, a criterion is proposed to detect premature convergence; then a redistributing mechanism is proposed to prevent premature convergence. To improve the efficiency of this redistributing mechanism, randomized Halton sequences are further introduced to ensure the uniform distribution and randomness of the redistributed particles in the search space. Simulation results show that this strategy can effectively solve the stagnation problem of the PSO algorithm for PD, especially for large-scale and high-dimension wavefront sensing and noisy conditions. This work is further verified by an experiment. This work can improve the robustness and performance of PD wavefront sensing.
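
    A hedged sketch of the two ingredients named in the abstract, a stagnation check and a Halton-based re-seeding of the swarm; the collapse criterion and the random shift used here are simplified placeholders rather than the paper's specific formulation.

        import numpy as np

        def halton(n, dims):
            """First n points of a Halton sequence in [0, 1]^dims (bases: first primes)."""
            primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:dims]
            points = np.empty((n, dims))
            for d, base in enumerate(primes):
                for i in range(n):
                    f, r, k = 1.0, 0.0, i + 1
                    while k > 0:                      # radical inverse of (i + 1) in this base
                        f /= base
                        r += f * (k % base)
                        k //= base
                    points[i, d] = r
            return points

        def maybe_redistribute(positions, gbest, lower, upper, tol=1e-3, rng=None):
            """If the swarm has collapsed around gbest, re-seed it on a shifted Halton set."""
            rng = rng or np.random.default_rng()
            radius = np.linalg.norm(positions - gbest, axis=1).mean()
            if radius > tol * np.linalg.norm(upper - lower):
                return positions                      # no premature convergence detected
            pts = (halton(len(positions), positions.shape[1]) + rng.random(positions.shape[1])) % 1.0
            return lower + pts * (upper - lower)

        rng = np.random.default_rng(0)
        collapsed = np.full((8, 2), 0.5) + 1e-6 * rng.standard_normal((8, 2))
        print(maybe_redistribute(collapsed, np.array([0.5, 0.5]),
                                 np.array([0.0, 0.0]), np.array([1.0, 1.0]), rng=rng))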

  1. Flux-corrected transport principles, algorithms, and applications

    CERN Document Server

    Kuzmin, Dmitri; Turek, Stefan

    2005-01-01

    Addressing students and researchers as well as CFD practitioners, this book describes the state of the art in the development of high-resolution schemes based on the Flux-Corrected Transport (FCT) paradigm. Intended for readers who have a solid background in Computational Fluid Dynamics, the book begins with historical notes by J.P. Boris and D.L. Book. Review articles that follow describe recent advances in the design of FCT algorithms as well as various algorithmic aspects. The topics addressed in the book and its main highlights include: the derivation and analysis of classical FCT schemes with special emphasis on the underlying physical and mathematical constraints; flux limiting for hyperbolic systems; generalization of FCT to implicit time-stepping and finite element discretizations on unstructured meshes and its role as a subgrid scale model for Monotonically Integrated Large Eddy Simulation (MILES) of turbulent flows. The proposed enhancements of the FCT methodology also comprise the prelimiting and '...

  2. AUTOMATION OF CALCULATION ALGORITHMS FOR EFFICIENCY ESTIMATION OF TRANSPORT INFRASTRUCTURE DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Sergey Kharitonov

    2015-06-01

    Full Text Available Optimal use of transport infrastructure is an important aspect of the development of the national economy of the Russian Federation. The development of instruments for assessing the efficiency of infrastructure is therefore impossible without constant monitoring of a number of significant indicators. This work is devoted to the selection of such indicators and the method of their calculation for a transport subsystem, namely airport infrastructure. The work also evaluates the potential of algorithmic computational mechanisms to improve the tools of public administration for transport subsystems.

  3. The Study on Food Sensory Evaluation based on Particle Swarm Optimization Algorithm

    OpenAIRE

    Hairong Wang; Huijuan Xu

    2015-01-01

    This study explores the procedures and methods for establishing a food sensory evaluation system based on the particle swarm optimization algorithm, by explaining sensory evaluation and sensory analysis in combination with the current application of sensory evaluation in the food industry.

  4. Design of Wire Antennas by Using an Evolved Particle Swarm Optimization Algorithm

    NARCIS (Netherlands)

    Lepelaars, E.S.A.M.; Zwamborn, A.P.M.; Rogovic, A.; Marasini, C.; Monorchio, A.

    2007-01-01

    A Particle Swarm Optimization (PSO) algorithm has been used in conjunction with a full-wave numerical code based on the Method of Moments (MoM) to design and optimize wire antennas. The PSO is a robust stochastic evolutionary numerical technique that is very effective in optimizing multidimensional

  5. Particle transport in JET and TCV-H mode plasmas

    International Nuclear Information System (INIS)

    Maslov, M.

    2009-10-01

    Understanding particle transport physics is of great importance for magnetically confined plasma devices and for the development of thermonuclear fusion power for energy production. From the beginnings of fusion research, more than half a century ago, the problem of heat transport in tokamaks attracted the attention of researchers, but the particle transport phenomena were largely neglected until fairly recently. As tokamak physics advanced to its present level, the physics community realized that there are many hurdles to the development of fusion power beyond the energy confinement. Particle transport is one of the outstanding issues. The aim of this thesis work is to study the anomalous (turbulence driven) particle transport in tokamaks on the basis of experiments on two different devices: JET (Joint European Torus) and TCV (Tokamak à Configuration Variable). In particular the physics of particle inward convection (pinch), which causes formation of peaked density profiles, is addressed in this work. Density profile peaking has a direct, favorable effect on fusion power in a reactor, we therefore also propose an extrapolation to the international experimental reactor ITER, which is currently under construction. To complete the thesis research, a comprehensive experimental database was created on the basis of data collected on JET and TCV during the duration of the thesis. Improvements of the density profile measurements techniques and careful analysis of the experimental data allowed us to derive the dependencies of density profile shape on the relevant plasma parameters. These improved techniques also allowed us to dispel any doubts that had been voiced about previous results. The major conclusions from previous work on JET and other tokamaks were generally confirmed, with some minor supplements. The main novelty of the thesis resides in systematic tests of the predictions of linear gyrokinetic simulations of the ITG (Ion Temperature Gradient) mode against the

  6. Optimal planning of electric vehicle charging station at the distribution system using hybrid optimization algorithm

    DEFF Research Database (Denmark)

    Awasthi, Abhishek; Venkitusamy, Karthikeyan; Padmanaban, Sanjeevikumar

    2017-01-01

    India's ever increasing population has made it necessary to develop alternative modes of transportation with electric vehicles being the most preferred option. The major obstacle is the deteriorating impact on the utility distribution system brought about by improper setup of these charging...... stations. This paper deals with the optimal planning (siting and sizing) of charging station infrastructure in the city of Allahabad, India. This city is one of the upcoming smart cities, where electric vehicle transportation pilot project is going on under Government of India initiative. In this context......, a hybrid algorithm based on genetic algorithm and improved version of conventional particle swarm optimization is utilized for finding optimal placement of charging station in the Allahabad distribution system. The particle swarm optimization algorithm re-optimizes the received sub-optimal solution (site...

  7. Weighted-delta-tracking for Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Morgan, L.W.G.; Kotlyar, D.

    2015-01-01

    Highlights: • This paper presents an alteration to the Monte Carlo Woodcock tracking technique. • The alteration improves computational efficiency within regions of high absorbers. • The rejection technique is replaced by a statistical weighting mechanism. • The modified Woodcock method is shown to be faster than standard Woodcock tracking. • The modified Woodcock method achieves a lower variance, given a specified accuracy. - Abstract: Monte Carlo particle transport (MCPT) codes are incredibly powerful and versatile tools to simulate particle behavior in a multitude of scenarios, such as core/criticality studies, radiation protection, shielding, medicine and fusion research, to name just a small subset of applications. However, MCPT codes can be very computationally expensive to run when the model geometry contains large attenuation depths and/or contains many components. This paper proposes a simple modification to the Woodcock tracking method used by some Monte Carlo particle transport codes. The Woodcock method utilizes rejection sampling of virtual collisions to remove the need for collision distance sampling at material boundaries. However, it suffers from poor computational efficiency when the sample acceptance rate is low. The proposed method removes rejection sampling from the Woodcock method in favor of a statistical weighting scheme, which improves the computational efficiency of a Monte Carlo particle tracking code. It is shown that the modified Woodcock method is less computationally expensive than standard ray-tracing and rejection-based Woodcock tracking methods and achieves a lower variance, given a specified accuracy
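
    A minimal 1D pure-absorber sketch of the idea summarized in the highlights: classic Woodcock (delta) tracking rejects virtual collisions, while the weighted variant scales the particle weight at every tentative collision instead. The slab data are hypothetical and this is not the authors' implementation.

        import math, random

        random.seed(2)
        L = 4.0                                        # slab thickness [cm] (illustrative)
        SIGMA_MAJ = 1.0                                # majorant cross section [1/cm]

        def sigma_t(x):                                # hypothetical heterogeneous pure absorber
            return 0.2 + 0.8 * (1.0 < x < 3.0)

        def transmit_rejection():
            """Classic Woodcock tracking: virtual collisions handled by rejection."""
            x = 0.0
            while True:
                x += -math.log(random.random()) / SIGMA_MAJ
                if x >= L:
                    return 1.0                         # escaped through the far face
                if random.random() < sigma_t(x) / SIGMA_MAJ:
                    return 0.0                         # real collision: absorbed

        def transmit_weighted():
            """Weighted variant: each tentative collision scales the statistical weight."""
            x, w = 0.0, 1.0
            while True:
                x += -math.log(random.random()) / SIGMA_MAJ
                if x >= L:
                    return w
                w *= 1.0 - sigma_t(x) / SIGMA_MAJ      # survival probability at this point

        n = 200_000
        print(sum(transmit_rejection() for _ in range(n)) / n,
              sum(transmit_weighted() for _ in range(n)) / n,
              math.exp(-(0.2 * L + 0.8 * 2.0)))        # analytic transmission ~0.0907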

  8. A New Method for Tracking Individual Particles During Bed Load Transport in a Gravel-Bed River

    Science.gov (United States)

    Tremblay, M.; Marquis, G. A.; Roy, A. G.; Chaire de Recherche Du Canada En Dynamique Fluviale

    2010-12-01

    Many particle tracers (passive or active) have been developed to study gravel movement in rivers. It remains difficult, however, to document resting and moving periods and to know how particles travel from one deposition site to another. Our new tracking method uses the Hobo Pendant G acceleration Data Logger to quantitatively describe the motion of individual particles from the initiation of movement, through the displacement and to the rest, in a natural gravel river. The Hobo measures the acceleration in three dimensions at a chosen temporal frequency. The Hobo was inserted into 11 artificial rocks. The rocks were seeded in Ruisseau Béard, a small gravel-bed river in the Yamaska drainage basin (Québec) where the hydraulics, particle sizes and bed characteristics are well known. The signals recorded during eight floods (Summer and Fall 2008-2009) allowed us to develop an algorithm which classifies the periods of rest and motion. We can differentiate two types of motion: sliding and rolling. The particles can also vibrate while remaining in the same position. The examination of the movement and vibration periods with respect to the hydraulic conditions (discharge, shear stress, stream power) showed that vibration occurred mostly before the rise of hydrograph and allowed us to establish movement threshold and response times. In all cases, particle movements occurred during floods but not always in direct response to increased bed shear stress and stream power. This method offers great potential to track individual particles and to establish a spatiotemporal sequence of the intermittent transport of the particle during a flood and to test theories concerning the resting periods of particles on a gravel bed.

  9. Setting value optimization method in integration for relay protection based on improved quantum particle swarm optimization algorithm

    Science.gov (United States)

    Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong

    2018-03-01

    With the establishment of integrated relay protection models and the expanding scale of power systems, the global setting and optimization of relay protection has become an extremely difficult task. This paper presents an application of an improved quantum particle swarm optimization algorithm to the global optimization of relay protection, taking inverse-time overcurrent protection as an example. Reliability, selectivity, speed of operation, and flexibility of the relay protection are selected as the four requirements used to establish the optimization targets, and the protection setting values of the whole system are optimized. Finally, for an actual power system, the optimized setting values obtained by the proposed method are compared with those of the standard particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and that it is suitable for optimizing setting values in the relay protection of the whole power system.

  10. Control of alpha particle transport by spatially inhomogeneous ion cyclotron resonance heating

    International Nuclear Information System (INIS)

    Chang, C.S.; Imre, K.; Weitzner, H.; Colestock, P.

    1990-02-01

    Control of the radial alpha particle transport by using Ion Cyclotron Range of Frequency waves is investigated in a large-aspect-ratio tokamak geometry. It is shown that spatially inhomogeneous ICRF-wave energy with properly selected frequencies and wave numbers can induce fast convective transport of alpha particles at a speed of order v_α ∼ (P_RF/(n_α ε_0)) ρ_p, where P_RF is the ICRF-wave power density, n_α is the alpha density, ε_0 is the alpha birth energy, and ρ_p is the poloidal gyroradius of alpha particles at the birth energy. Application to ITER plasmas is studied and possible antenna designs to control the alpha particle flux are discussed. 8 refs., 3 figs

  11. Performance comparison of genetic algorithms and particle swarm optimization for model integer programming bus timetabling problem

    Science.gov (United States)

    Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.

    2018-03-01

    Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. Both algorithms have different advantages and disadvantages when applied to the optimization of the Model Integer Programming for Bus Timetabling Problem (MIPBTP), in which the optimal number of trips is to be found subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iteration, and program simplicity in finding the optimal solution.

  12. Investigation of particle reduction and its transport mechanism in UHF-ECR dielectric etching system

    International Nuclear Information System (INIS)

    Kobayashi, Hiroyuki; Yokogawa, Ken'etsu; Maeda, Kenji; Izawa, Masaru

    2008-01-01

    Control of particle transport was investigated by using a UHF-ECR etching apparatus with a laser particle monitor. The particles, which float at a plasma-sheath boundary, fall on a wafer when the plasma is turned off. These floating particles can be removed from the region above the wafer by changing the plasma distribution. We measured the distribution of the rotational temperature of nitrogen molecules across the wafer to investigate the effect of the thermophoretic force. We found that mechanisms of particle transport in directions parallel to the wafer surface can be explained by the balance between thermophoretic and gas viscous forces

  13. Chain segmentation for the Monte Carlo solution of particle transport problems

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.

    1984-01-01

    A Monte Carlo approach is proposed where the random walk chains generated in particle transport simulations are segmented. Forward and adjoint-mode estimators are then used in conjunction with the first-event source density on the segmented chains to obtain multiple estimates of the individual terms of the Neumann series solution at each collision point. The solution is then constructed by summation of the series. The approach is compared to the exact analytical results and to the Monte Carlo nonabsorption weighting method results for two representative slowing down and deep penetration problems. Application of the proposed approach leads to unbiased estimates for limited numbers of particle simulations and is useful in suppressing an effective bias problem observed in some cases of deep penetration particle transport problems

  14. Analytic theory of the energy and time independent particle transport in the plane geometry

    International Nuclear Information System (INIS)

    Simovic, R.D.

    2001-01-01

    An analytic investigation of the energy and time independent particle transport in the plane geometry described by a common anisotropic scattering function is carried out. Regarding the particles with specific diffusion histories in the infinite or the semi-infinite medium, new exact solutions of the corresponding transport equations are analytically derived by means of the Fourier inversion technique. Two particular groups of particles, scattered after each successive collision into the directions μ > 0 or μ < 0, were considered. Their Fourier-transformed transport equations have solutions without logarithmic singular points in the upper part or the lower part of the complex k-plane. The Fourier inversion of the solutions is carried out analytically, and the obtained formulae represent a valid generalization of the expressions for the flux of once-scattered particles. (author)

  15. Fully implicit Particle-in-cell algorithms for multiscale plasma simulation

    Energy Technology Data Exchange (ETDEWEB)

    Chacon, Luis [Los Alamos National Laboratory

    2015-07-16

    The outline of the paper is as follows: Particle-in-cell (PIC) methods for fully ionized collisionless plasmas, explicit vs. implicit PIC, 1D ES implicit PIC (charge and energy conservation, moment-based acceleration), and generalization to Multi-D EM PIC: Vlasov-Darwin model (review and motivation for Darwin model, conservation properties (energy, charge, and canonical momenta), and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle mover strategy), exact global energy conservation (no particle self-heating or self-cooling), adaptive particle orbit integrator to control errors in momentum conservation, and canonical momenta (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities: ωpeΔt >> 1, and Δx >> λD. It requires many fewer dofs (vs. explicit PIC) for comparable accuracy in challenging problems. Significant CPU gains (vs explicit PIC) have been demonstrated. The method has much potential for efficiency gains vs. explicit in long-time-scale applications. Moment-based acceleration is effective in minimizing NFE, leading to an optimal algorithm.
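
    To make the nonlinear implicit update concrete, here is a minimal sketch of a Crank-Nicolson particle push solved by Picard (fixed-point) iteration. In a genuinely implicit PIC code the field is part of the nonlinear system and a moment-accelerated Newton solver is used; this single-particle toy with a prescribed field efield(x) is only an assumption-laden illustration.

```python
import numpy as np

def implicit_push(x, v, dt, efield, qm=1.0, n_picard=8):
    """Crank-Nicolson push: x' = x + dt*(v + v')/2, v' = v + dt*qm*E(x_mid),
    solved by fixed-point iteration.  efield(x) is a user-supplied field;
    in real implicit PIC the field solve is coupled into this iteration."""
    x_new, v_new = x, v
    for _ in range(n_picard):
        x_mid = 0.5 * (x + x_new)              # midpoint position
        v_new = v + dt * qm * efield(x_mid)    # implicit velocity update
        x_new = x + 0.5 * dt * (v + v_new)     # implicit position update
    return x_new, v_new
```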

  16. The Splashback Radius of Halos from Particle Dynamics. I. The SPARTA Algorithm

    Science.gov (United States)

    Diemer, Benedikt

    2017-07-01

    Motivated by the recent proposal of the splashback radius as a physical boundary of dark-matter halos, we present a parallel computer code for Subhalo and PARticle Trajectory Analysis (SPARTA). The code analyzes the orbits of all simulation particles in all host halos, billions of orbits in the case of typical cosmological N-body simulations. Within this general framework, we develop an algorithm that accurately extracts the location of the first apocenter of particles after infall into a halo, or splashback. We define the splashback radius of a halo as the smoothed average of the apocenter radii of individual particles. This definition allows us to reliably measure the splashback radii of 95% of host halos above a resolution limit of 1000 particles. We show that, on average, the splashback radius and mass are converged to better than 5% accuracy with respect to mass resolution, snapshot spacing, and all free parameters of the method.
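
    The core geometric idea (the first apocenter after infall is the first local maximum of the orbital radius) can be sketched in a few lines; the real SPARTA code adds smoothing, interpolation between snapshots and many robustness checks omitted here, and all names below are illustrative.

```python
import numpy as np

def first_apocenter(times, radii, r_infall):
    """Return (t, r) of the first local maximum of the orbital radius after
    the particle has fallen inside r_infall, or None if not found."""
    radii = np.asarray(radii, dtype=float)
    inside = np.flatnonzero(radii < r_infall)
    if inside.size == 0:
        return None                            # particle never fell in
    i0 = max(inside[0], 1)
    for i in range(i0, len(radii) - 1):
        if radii[i] >= radii[i - 1] and radii[i] > radii[i + 1]:
            return times[i], radii[i]          # first apocenter after infall
    return None                                # apocenter not yet reached

# The halo's splashback radius would then be a smoothed average of the
# per-particle apocenter radii returned by calls like the one above.
```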

  17. Accounting for beta-particle energy loss to cortical bone via paired-image radiation transport (PIRT)

    International Nuclear Information System (INIS)

    Shah, Amish P.; Rajon, Didier A.; Patton, Phillip W.; Jokisch, Derek W.; Bolch, Wesley E.

    2005-01-01

    Current methods of skeletal dose assessment in both medical physics (radionuclide therapy) and health physics (dose reconstruction and risk assessment) rely heavily on a single set of bone and marrow cavity chord-length distributions in which particle energy deposition is tracked within an infinite extent of trabecular spongiosa, with no allowance for particle escape to cortical bone. In the present study, we introduce a paired-image radiation transport (PIRT) model which provides a more realistic three-dimensional (3D) geometry for particle transport in the skeletal site at both microscopic and macroscopic levels of its histology. Ex vivo CT scans were acquired of the pelvis, cranial cap, and individual ribs excised from a 66-year-old male cadaver (BMI of 22.7 kg m -2 ). For the three skeletal sites, regions of trabecular spongiosa and cortical bone were identified and segmented. Physical sections of interior spongiosa were taken and subjected to microCT imaging. Voxels within the resulting microCT images were then segmented and labeled as regions of bone trabeculae, endosteum, active marrow, and inactive marrow through application of image processing algorithms. The PIRT methodology was then implemented within the EGSNRC radiation transport code whereby electrons of various initial energies are simultaneously tracked within both the ex vivo CT macroimage and the CT microimage of the skeletal site. At initial electron energies greater than 50-200 keV, a divergence in absorbed fractions to active marrow is noted between PIRT model simulations and those estimated under existing techniques of infinite spongiosa transport. Calculations of radionuclide S values under both methodologies imply that current chord-based models may overestimate the absorbed dose to active bone marrow in these skeletal sites by 0% to 27% for low-energy beta emitters ( 33 P, 169 Er, and 177 Lu), by ∼4% to 49% for intermediate-energy beta emitters ( 153 Sm, 186 Re, and 89 Sr), and by ∼14% to

  18. Selective transport of Fe(III) using ionic imprinted polymer (IIP) membrane particle

    Science.gov (United States)

    Djunaidi, Muhammad Cholid; Jumina, Siswanta, Dwi; Ulbricht, Mathias

    2015-12-01

    The membrane particles were prepared from polyvinyl alcohol (PVA) and the IIP polymer at weight ratios of 1:2 and 1:1, using different adsorbent templates and casting thicknesses. The permeability of the membrane towards Fe(III) and the mechanism of transport were studied. The selectivity of the membrane for Fe(III) was studied by performing adsorption experiments separately with Cr(III). In this study, Ionic Imprinted Polymer (IIP) membrane particles for selective transport of Fe(III) were prepared using polyeugenol as the functional polymer. Polyeugenol was imprinted with Fe(III) and then crosslinked with PEGDE under alkaline conditions to produce polyeugenol-Fe-PEGDE polymer aggregates. The aggregates were then crushed and sieved with an 80-mesh sieve, and the powder was used to prepare the membrane particles by mixing it with a PVA (Mr 125,000) solution in 1-methyl-2-pyrrolidone (NMP) solvent. The membrane was obtained after casting at a speed of 25 m/s and soaking in NaOH solution overnight. The membrane sheet was then cut, and Fe(III) was removed by acid to produce the IIP membrane particles. The membrane and its constituents were analysed by XRD, SEM and a size-selectivity test. Experimental results showed that the transport of Fe(III) was faster with decreasing membrane thickness, while a higher concentration of template ion correlated with more Fe(III) being transported. However, the transport of Fe(III) was slower for higher concentrations of PVA in the membrane. The IIP membrane particles work through a retarded permeation mechanism, in which Fe(III) binds to the active sites of the IIP. The active sites of the IIP membrane were dominated by -OH groups. The selectivity of all IIP membranes was confirmed, as they were all unable to transport Cr(III), while the NIP (non-imprinted polymer) membrane was able to transport Cr(III).

  19. Estimating the solute transport parameters of the spatial fractional advection-dispersion equation using Bees Algorithm.

    Science.gov (United States)

    Mehdinejadiani, Behrouz

    2017-08-01

    This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Numerical studies as well as experimental studies were performed to certify the integrity of the Bees Algorithm. The experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm appraised the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values close to 1 in the numerical study. Also, the results obtained from the Bees Algorithm were more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performance for all cases, while the Genetic Algorithm and LSQNONLIN yielded different performances for various cases. The performance of LSQNONLIN depends strongly on the initial guess values, so that, given suitable initial guesses, it can estimate the sFADE parameters more accurately than the Genetic Algorithm. To sum up, the Bees Algorithm was found to be a very simple, robust and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Estimating the solute transport parameters of the spatial fractional advection-dispersion equation using Bees Algorithm

    Science.gov (United States)

    Mehdinejadiani, Behrouz

    2017-08-01

    This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Numerical studies as well as experimental studies were performed to certify the integrity of the Bees Algorithm. The experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm appraised the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values close to 1 in the numerical study. Also, the results obtained from the Bees Algorithm were more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performance for all cases, while the Genetic Algorithm and LSQNONLIN yielded different performances for various cases. The performance of LSQNONLIN depends strongly on the initial guess values, so that, given suitable initial guesses, it can estimate the sFADE parameters more accurately than the Genetic Algorithm. To sum up, the Bees Algorithm was found to be a very simple, robust and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation.
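
    For readers unfamiliar with the optimizer, a minimal Bees Algorithm sketch follows; fitting the sFADE parameters would amount to passing a loss function measuring the misfit between simulated and observed breakthrough curves. All parameter names and values are illustrative assumptions, not those used in the study.

```python
import numpy as np

def bees_algorithm(loss, bounds, n_scouts=30, n_best=5, n_elite=2,
                   recruits_elite=20, recruits_best=10,
                   patch=0.1, n_iter=200, seed=None):
    """Minimize loss(x) within box bounds with a basic Bees Algorithm.
    bounds: sequence of (low, high) pairs, one per parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    scouts = rng.uniform(lo, hi, size=(n_scouts, dim))
    for _ in range(n_iter):
        scouts = scouts[np.argsort([loss(s) for s in scouts])]
        new_sites = []
        for i in range(n_best):                      # local search around best sites
            n_rec = recruits_elite if i < n_elite else recruits_best
            step = patch * (hi - lo)
            cand = np.clip(scouts[i] + rng.uniform(-step, step, (n_rec, dim)), lo, hi)
            site = np.vstack([cand, scouts[i]])
            new_sites.append(min(site, key=loss))    # keep the best bee per site
        rest = rng.uniform(lo, hi, size=(n_scouts - n_best, dim))   # global scouts
        scouts = np.vstack([new_sites, rest])
    return min(scouts, key=loss)

# Example: loss(params) could return the sum of squared differences between
# an sFADE solver's predicted concentrations and the measured breakthrough data.
```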

  1. Particle swarm optimizer for weighting factor selection in intensity-modulated radiation therapy optimization algorithms.

    Science.gov (United States)

    Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo

    2017-01-01

    In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of the weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to overcome the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose volume histograms. Furthermore, a perturbation strategy - the crossover and mutation operator hybrid approach - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all of the cases, without human
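
    The three-step procedure is essentially a bilevel optimization: an outer PSO searches over objective weights while an inner solver produces a plan for each candidate weight vector. The sketch below shows only that structure; solve_plan and plan_quality are hypothetical stand-ins for the inner plan optimizer and the DVH-based evaluation function, and the paper's perturbation (crossover/mutation) strategy is omitted.

```python
import numpy as np

def tune_weights(solve_plan, plan_quality, n_weights, n_particles=10,
                 n_iter=30, w=0.7, c1=1.5, c2=1.5, seed=None):
    """Outer PSO over objective weights.  solve_plan(weights) returns a plan;
    plan_quality(plan) returns a scalar score to minimize."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, size=(n_particles, n_weights))   # weight vectors
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.full(n_particles, np.inf)
    gbest, gbest_val = pos[0].copy(), np.inf
    for _ in range(n_iter):
        for i in range(n_particles):
            score = plan_quality(solve_plan(pos[i]))              # inner optimization
            if score < pbest_val[i]:
                pbest_val[i], pbest[i] = score, pos[i].copy()
            if score < gbest_val:
                gbest_val, gbest = score, pos[i].copy()
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)                        # keep weights in [0, 1]
    return gbest, gbest_val
```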

  2. Alternate mutation based artificial immune algorithm for step fixed charge transportation problem

    Directory of Open Access Journals (Sweden)

    Mahmoud Moustafa El-Sherbiny

    2012-07-01

    Full Text Available Step fixed charge transportation problem (SFCTP) is considered a special version of the fixed-charge transportation problem (FCTP). In SFCTP, the fixed cost is incurred for every route that is used in the solution and is proportional to the amount shipped. This cost structure causes the value of the objective function to behave like a step function. Both FCTP and SFCTP are considered to be NP-hard problems. While a lot of research has been carried out concerning FCTP, not much has been done concerning SFCTP. This paper introduces an alternate Mutation based Artificial Immune (MAI) algorithm for solving SFCTPs. The proposed MAI algorithm solves both balanced and unbalanced SFCTPs without introducing a dummy supplier or a dummy customer. In the MAI algorithm, a coding schema is designed and procedures are developed for decoding such schema and shipping units. The MAI algorithm guarantees the feasibility of all the generated solutions. Due to the significant role of the mutation function in the MAI algorithm’s quality, 16 mutation functions are presented and their performances are compared to select the best one. For this purpose, forty problems with different sizes have been generated at random and then a robust calibration is applied using the relative percentage deviation (RPD) method. Through two illustrative problems of different sizes, the performance of the MAI algorithm has been compared with the most recent methods.

  3. Particle and solute migration in porous media. Modeling of simultaneous transport of clay particles and radionuclides in a salinity gradient

    International Nuclear Information System (INIS)

    Faure, M.H.

    1994-03-01

    Understanding the mechanisms which control the transient transport of particles and radionuclides in natural and artificial porous media is a key problem for the assessment of safety of radioactive waste disposals. An experimental study has been performed to characterize clayey particle mobility in porous media: a laboratory-made column, packed with an unconsolidated sand-bentonite (5% weight) sample, is flushed with a salt solution. An original salinity-gradient method allowed us to demonstrate and quantify some typical behaviours of this system: threshold effects in the peptization of particles, the creation of preferential pathways, and the formation of immobile water zones that limit solute transfer. The mathematical modelling accounts for a phenomenological law, where the distribution of particles between the stagnant water zone and the porous medium is a function of sodium chloride concentration. This distribution function is associated with a radionuclide adsorption model, and is included in a convective-dispersive transport model with stagnant water zones. It allowed us to simulate the particle and solute transport when the salt environment is modified. The complete model has been validated with experiments involving cesium, calcium and neptunium in a sodium chloride gradient. (author). refs., figs., tabs

  4. Efficiencies of dynamic Monte Carlo algorithms for off-lattice particle systems with a single impurity

    KAUST Repository

    Novotny, M.A.

    2010-02-01

    The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented to confirm the theoretical efficiencies. © 2010.

  5. A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm.

    Science.gov (United States)

    Amoshahy, Mohammad Javad; Shamsi, Mousa; Sedaaghi, Mohammad Hossein

    2016-01-01

    Particle swarm optimization (PSO) is an evolutionary computing method based on the intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of the PSO algorithm depends greatly on appropriate parameter selection strategies for fine tuning its parameters. Inertia weight (IW) is one of PSO's parameters, used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting the inertia weight, named the Flexible Exponential Inertia Weight (FEIW) strategy, because for each problem an increasing or decreasing inertia weight strategy can be constructed with suitable parameter selection. The efficacy and efficiency of the PSO algorithm with the FEIW strategy (FEPSO) are validated on a suite of benchmark problems with different dimensions. FEIW is also compared with the best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis prove that FEIW improves the search performance in terms of solution quality as well as convergence rate.
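
    The abstract does not give the FEIW formula itself, so the snippet below only illustrates the general idea of an exponential-type inertia weight schedule with assumed parameters; it should not be read as the exact FEIW expression.

```python
import numpy as np

def exponential_inertia_weight(t, t_max, w_start=0.9, w_end=0.4, alpha=3.0):
    """Exponential-type inertia weight schedule, moving from w_start at t = 0
    to w_end at t = t_max (increasing or decreasing depending on the choice
    of w_start and w_end); alpha controls how sharply the curve bends."""
    frac = (np.exp(alpha * t / t_max) - 1.0) / (np.exp(alpha) - 1.0)
    return w_start + (w_end - w_start) * frac

# Used inside the standard PSO velocity update, e.g.:
# v = exponential_inertia_weight(t, T) * v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
```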

  6. Massively parallel performance of neutron transport response matrix algorithms

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Lewis, E.E.

    1993-01-01

    Massively parallel red/black response matrix algorithms for the solution of within-group neutron transport problems are implemented on the Connection Machines CM-2, CM-200 and CM-5. The response matrices are derived from the diamond-difference and linear-linear nodal discrete ordinates and variational nodal P3 approximations. The unaccelerated performance of the iterative procedure is examined relative to the maximum rated performances of the machines. The effects of processor partition size, of virtual processor ratio and of problem size are examined in detail. For the red/black algorithm, the ratio of inter-node communication to computing times is found to be quite small, normally of the order of ten percent or less. Performance increases with problem size and with virtual processor ratio, within the memory per physical processor limitation. Algorithm adaptation to coarser-grain machines is straightforward, with total computing time being virtually inversely proportional to the number of physical processors. (orig.)

  7. Multiple R&D projects scheduling optimization with improved particle swarm algorithm.

    Science.gov (United States)

    Liu, Mengqi; Shan, Miyuan; Wu, Juan

    2014-01-01

    For most enterprises, in order to win the initiative in the fierce competition of the market, a key step is to improve their R&D ability to meet the various demands of customers more timely and at lower cost. This paper discusses the features of multiple R&D environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a certain period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the one developed here to the resource-constrained multi-project scheduling model in a simulation experiment. Simultaneously, the feasibility of the model and the validity of the algorithm are proved in the experiment.

  8. A new ARMAX model based on evolutionary algorithm and particle swarm optimization for short-term load forecasting

    International Nuclear Information System (INIS)

    Wang, Bo; Tai, Neng-ling; Zhai, Hai-qing; Ye, Jian; Zhu, Jia-dong; Qi, Liang-bo

    2008-01-01

    In this paper, a new ARMAX model based on an evolutionary algorithm and particle swarm optimization for short-term load forecasting is proposed. The auto-regressive (AR) moving average (MA) model with exogenous variables (ARMAX) has been widely applied in the load forecasting area. Because of the nonlinear characteristics of power system loads, the forecasting function has many local optimal points. The traditional method based on gradient searching may be trapped in local optimal points and lead to high error, while the hybrid method based on an evolutionary algorithm and particle swarm optimization can solve this problem more efficiently than the traditional approaches. It takes advantage of the evolutionary strategy to speed up the convergence of particle swarm optimization (PSO), and applies the crossover operation of the genetic algorithm to enhance the global search ability. The new ARMAX model for short-term load forecasting has been tested based on the load data of the Eastern China location market, and the results indicate that the proposed approach has achieved good accuracy. (author)

  9. Inward particle transport at high collisionality in the Experimental Advanced Superconducting Tokamak

    International Nuclear Information System (INIS)

    Wang, G. Q.; Ma, J.; Weiland, J.; Zang, Q.

    2013-01-01

    We have made the first drift wave study of particle transport in the Experimental Advanced Superconducting Tokamak (Wan et al., Nucl. Fusion 49, 104011 (2009)). The results reveal that collisions make the particle flux more inward in the high collisionality regime. This can be traced back to effects that are quadratic in the collision frequency. The particle pinch is due to electron trapping, which is not very efficient in the high collisionality regime, so the approach to equilibrium is slow. We have also included the electron temperature gradient (ETG) mode to give the right electron temperature gradient, since the trapped electron mode (TE mode) is weak in this regime. However, at the ETG mode number the ions are Boltzmann distributed, so the ETG mode does not give particle transport.

  10. Application of an Intelligent Fuzzy Regression Algorithm in Road Freight Transportation Modeling

    Directory of Open Access Journals (Sweden)

    Pooya Najaf

    2013-07-01

    Full Text Available Road freight transportation between the provinces of a country has an important effect on the traffic flow of intercity transportation networks. Therefore, an accurate estimation of road freight transportation for the provinces of a country is crucial to improving rural traffic operation in large-scale management. Accordingly, the case study database in this research is the information related to Iran’s provinces in the year 2008. The correlation of road freight transportation with variables such as transport cost and distance, population, average household income and Gross Domestic Product (GDP) of each province is calculated. Results show that population is the most effective factor in the prediction of the provinces’ transported freight. A Linear Regression Model (LRM) is calibrated based on the population variable, and afterwards a Fuzzy Regression Algorithm (FRA) is generated on the basis of the LRM. The proposed FRA is an intelligent modified algorithm with accurate prediction and fitting ability. This methodology can be significantly useful in macro-level planning problems where decreasing prediction error values is one of the most important concerns for decision makers. In addition, a Back-Propagation Neural Network (BPNN) is developed to evaluate the prediction capability of the models and to be compared with the FRA. According to the final results, the modified FRA estimates road freight transportation values more accurately than the BPNN and LRM. Finally, in order to predict the road freight transportation values, the reliability of the calibrated models is analyzed using the information of the year 2009. Results show higher reliability for the proposed modified FRA.

  11. Modeling airflow and particle transport/deposition in pulmonary airways.

    Science.gov (United States)

    Kleinstreuer, Clement; Zhang, Zhe; Li, Zheng

    2008-11-30

    A review of research papers is presented, pertinent to computer modeling of airflow as well as nano- and micron-size particle deposition in pulmonary airway replicas. The key modeling steps are outlined, including construction of suitable airway geometries, mathematical description of the air-particle transport phenomena and computer simulation of micron and nanoparticle depositions. Specifically, diffusion-dominated nanomaterial deposits on airway surfaces much more uniformly than micron particles of the same material. This may imply different toxicity effects. Due to impaction and secondary flows, micron particles tend to accumulate around the carinal ridges and to form "hot spots", i.e., locally high concentrations which may lead to tumor developments. Inhaled particles in the size range of 20 nm ≤ dp ≤ 3 μm may readily reach the deeper lung region. Concerning inhaled therapeutic particles, optimal parameters for mechanical drug-aerosol targeting of predetermined lung areas can be computed, given representative pulmonary airways.

  12. Monoenergetic particle transport in a semi-infinite medium with reflection

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1993-01-01

    Next to neutron or photon transport in infinite geometry, particle transport in semi-infinite geometry is probably the most investigated transport problem. When the mean free path for particle interaction is small compared to the physical dimension of the scattering medium, the infinite or semi-infinite geometry assumption is reasonable for a variety of applications. These include nondestructive testing, photon transport in plant canopies, and inverse problems associated with well logging. Another important application of the transport solution in a semi-infinite medium is as a benchmark to which other more approximate methods can be compared. In this paper, the transport solution in a semi-infinite medium with both diffuse and specular reflection at the free surface is solved analytically and numerically evaluated. The approach is based on a little-known solution obtained by Sobolev for the problem with specular reflection, which itself originates from the classical albedo problem solution without reflection. Using Sobolev's solution as a partial Green's function, the exiting flux for diffuse reflection can be obtained. In this way, the exiting flux for a half-space with both constant diffuse and specular reflection coefficients is obtained for the first time. This expression can then be extended to the complex plane to obtain the interior flux as an inverse Laplace transform, which is numerically evaluated.

  13. Particle and energy transport studies on TFTR and implications for helium ash in future fusion devices

    International Nuclear Information System (INIS)

    Synakowski, E.J.; Efthimion, P.C.; Rewoldt, G.; Stratton, B.C.; Tang, W.M.; Bell, R.E.; Grek, B.; Hulse, R.A.; Johnson, D.W.; Hill, K.W.; Mansfield, D.K.; McCune, D.; Mikkelsen, D.R.; Park, H.K.; Ramsey, A.T.; Scott, S.D.; Taylor, G.; Timberlake, J.; Zarnstorff, M.C.

    1992-01-01

    Particle and energy transport in tokamak plasmas have long been subjects of vigorous investigation. Present-day measurement techniques permit radially resolved studies of the transport of electron perturbations, low- and high-Z impurities, and energy. In addition, developments in transport theory provide tools that can be brought to bear on transport issues. Here, we examine local particle transport measurements of electrons, fully-stripped thermal helium, and helium-like iron in balanced-injection L-mode and enhanced confinement deuterium plasmas on TFTR of the same plasma current, toroidal field, and auxiliary heating power. He 2+ and Fe 24+ transport has been studied with charge exchange recombination spectroscopy, while electron transport has been studied by analyzing the perturbed electron flux following the same helium puff used for the He 2+ studies. By examining the electron and He 2+ responses following the same gas puff in the same plasmas, an unambiguous comparison of the transport of the two species has been made. The local energy transport has been examined with power balance analysis, allowing for comparisons to the local thermal fluxes. Some particle and energy transport results from the Supershot have been compared to a transport model based on a quasilinear picture of electrostatic toroidal drift-type microinstabilities. Finally, implications for future fusion reactors of the observed correlation between thermal transport and helium particle transport is discussed

  14. A Local and Global Search Combine Particle Swarm Optimization Algorithm for Job-Shop Scheduling to Minimize Makespan

    Directory of Open Access Journals (Sweden)

    Zhigang Lian

    2010-01-01

    Full Text Available The Job-shop scheduling problem (JSSP) is a branch of production scheduling, which is among the hardest combinatorial optimization problems. Many different approaches have been applied to optimize JSSP, but some JSSP instances, even of moderate size, cannot be solved with a guarantee of optimality. The original particle swarm optimization algorithm (OPSOA) is generally used to solve continuous problems, and rarely to optimize discrete problems such as JSSP. Through research I find that OPSOA has a tendency to get stuck in a near-optimal solution, especially for middle- and large-size problems. The local and global search combine particle swarm optimization algorithm (LGSCPSOA) is used to solve JSSP, where the particle-updating mechanism benefits from the searching experience of the particle itself, the best of all particles in the swarm, and the best of the particles in the neighborhood population. A new coding method is used in LGSCPSOA to optimize JSSP, and it ensures that all generated sequences are feasible solutions. Computational experiments are performed on three representative instances, and simulation shows that the LGSCPSOA is efficacious for JSSP to minimize makespan.

  15. Design of sampling tools for Monte Carlo particle transport code JMCT

    International Nuclear Information System (INIS)

    Shangguan Danhua; Li Gang; Zhang Baoyin; Deng Li

    2012-01-01

    A class of sampling tools for the general-purpose Monte Carlo particle transport code JMCT is designed. Two ways are provided to sample from distributions. One is the use of special sampling methods for special distributions; the other is the use of general sampling methods for arbitrary discrete distributions and one-dimensional continuous distributions on a finite interval. Some open source codes are included in the general sampling method for the maximum convenience of users. The sampling results show that distributions which are common in particle transport can be sampled correctly with these tools, and that the user's convenience can be assured. (authors)
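
    As an illustration of the "general sampling methods" category (arbitrary discrete distributions and one-dimensional continuous distributions on a finite interval), here are two textbook samplers, inverse-transform for a tabulated discrete distribution and rejection sampling on a finite interval. These are generic sketches, not the JMCT implementation.

```python
import numpy as np

def sample_discrete(probs, n, seed=None):
    """Inverse-transform sampling from an arbitrary discrete distribution
    given as a table of (not necessarily normalized) probabilities."""
    rng = np.random.default_rng(seed)
    cdf = np.cumsum(probs, dtype=float)
    u = rng.random(n) * cdf[-1]
    return np.searchsorted(cdf, u, side='right')

def sample_rejection(pdf, a, b, pdf_max, n, seed=None):
    """Rejection sampling from a 1-D density pdf(x) on the finite interval
    [a, b]; pdf must accept NumPy arrays and be bounded above by pdf_max."""
    rng = np.random.default_rng(seed)
    out = np.empty(n)
    filled = 0
    while filled < n:
        x = rng.uniform(a, b, size=n - filled)
        keep = rng.uniform(0.0, pdf_max, size=x.size) < pdf(x)
        accepted = x[keep]
        out[filled:filled + accepted.size] = accepted
        filled += accepted.size
    return out
```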

  16. A comparative analysis of particle swarm optimization and differential evolution algorithms for parameter estimation in nonlinear dynamic systems

    International Nuclear Information System (INIS)

    Banerjee, Amit; Abu-Mahfouz, Issam

    2014-01-01

    The use of evolutionary algorithms has been popular in recent years for solving the inverse problem of identifying system parameters given the chaotic response of a dynamical system. The inverse problem is reformulated as a minimization problem and population-based optimizers such as evolutionary algorithms have been shown to be efficient solvers of the minimization problem. However, to the best of our knowledge, there has been no published work that evaluates the efficacy of using the two most popular evolutionary techniques – particle swarm optimization and differential evolution algorithm, on a wide range of parameter estimation problems. In this paper, the two methods along with their variants (for a total of seven algorithms) are applied to fifteen different parameter estimation problems of varying degrees of complexity. Estimation results are analyzed using nonparametric statistical methods to identify if an algorithm is statistically superior to others over the class of problems analyzed. Results based on parameter estimation quality suggest that there are significant differences between the algorithms with the newer, more sophisticated algorithms performing better than their canonical versions. More importantly, significant differences were also found among variants of the particle swarm optimizer and the best performing differential evolution algorithm

  17. Surface transport and stable trapping of particles and cells by an optical waveguide loop.

    Science.gov (United States)

    Hellesø, Olav Gaute; Løvhaugen, Pål; Subramanian, Ananth Z; Wilkinson, James S; Ahluwalia, Balpreet Singh

    2012-09-21

    Waveguide trapping has emerged as a useful technique for parallel and planar transport of particles and biological cells and can be integrated with lab-on-a-chip applications. However, particles trapped on waveguides are continuously propelled forward along the surface of the waveguide. This limits the practical usability of the waveguide trapping technique with other functions (e.g. analysis, imaging) that require particles to be stationary during diagnosis. In this paper, an optical waveguide loop with an intentional gap at the centre is proposed to hold propelled particles and cells. The waveguide acts as a conveyor belt to transport and deliver the particles/cells towards the gap. At the gap, the diverging light fields hold the particles at a fixed position. The proposed waveguide design is numerically studied and experimentally implemented. The optical forces on the particle at the gap are calculated using the finite element method. Experimentally, the method is used to transport and trap micro-particles and red blood cells at the gap with varying separations. The waveguides are only 180 nm thick and thus could be integrated with other functions on the chip, e.g. microfluidics or optical detection, to make an on-chip system for single cell analysis and to study the interaction between cells.

  18. A tracking algorithm for the reconstruction of the daughters of long-lived particles in LHCb

    CERN Document Server

    Dendek, Adam Mateusz

    2018-01-01

    The LHCb experiment at CERN operates a high-precision and robust tracking system to reach its physics goals, including precise measurements of CP-violation phenomena in the heavy flavour quark sector and searches for New Physics beyond the Standard Model. The track reconstruction procedure is performed by a number of algorithms. One of these, PatLongLivedTracking, is optimised to reconstruct "downstream tracks", which are tracks originating from decays outside the LHCb vertex detector of long-lived particles, such as Ks or Λ0. After an overview of the LHCb tracking system, we provide a detailed description of the LHCb downstream track reconstruction algorithm. Its computational intelligence part is described in detail, including the adaptation of the employed...

  19. Transport and selective chaining of bidisperse particles in a travelling wave potential.

    Science.gov (United States)

    Tierno, Pietro; Straube, Arthur V

    2016-05-01

    We combine experiments, theory and numerical simulation to investigate the dynamics of a binary suspension of paramagnetic colloidal particles dispersed in water and transported above a stripe-patterned magnetic garnet film. The substrate generates a one-dimensional periodic energy landscape above its surface. The application of an elliptically polarized rotating magnetic field causes the landscape to translate, inducing direct transport of paramagnetic particles placed above the film. The ellipticity of the applied field can be used to control and tune the interparticle interactions, from net repulsive to net attractive. When considering particles of two distinct sizes, we find that, depending on their elevation above the surface of the magnetic substrate, the particles feel effectively different potentials, resulting in different mobilities. We exploit this feature to induce selective chaining for certain values of the applied field parameters. In particular, when driving two types of particles, we force only one type to condense into travelling parallel chains. These chains confine the movement of the other non-chaining particles within narrow colloidal channels. This phenomenon is explained by considering the balance of pairwise magnetic forces between the particles and their individual coupling with the travelling landscape.

  20. Magnetic fluctuation driven cross-field particle transport in the reversed-field pinch

    International Nuclear Information System (INIS)

    Scheffel, J.; Liu, D.

    1997-01-01

    Electrostatic and electromagnetic fluctuations generally cause cross-field particle transport in confined plasmas. Thus core localized turbulence must be kept at low levels for sufficient energy confinement in magnetic fusion plasmas. Reversed-field pinch (RFP) equilibria can, theoretically, be completely stable to ideal and resistive (tearing) magnetohydrodynamic (MHD) modes at zero beta. Unstable resistive interchange modes are, however, always present at experimentally relevant values of the poloidal beta β_θ. An analytical quasilinear, ambipolar diffusion model is here used to model associated particle transport. The results indicate that core density fluctuations should not exceed a level of about 1% for plasmas of fusion interest. Parameters of experimentally relevant stationary states of the RFP were adjusted to minimize growth rates, using a fully resistive linearized MHD stability code. Density gradient effects are included through employing a parabolic density profile. The scaling of particle diffusion [D(r) ∝ λ²n^0.5T/aB, where λ is the mode width] is such that the effects of particle transport are milder in present day RFP experiments than in future reactor-relevant plasmas. copyright 1997 American Institute of Physics

  1. Modeling particle-facilitated solute transport using the C-Ride module of HYDRUS

    Science.gov (United States)

    Simunek, Jiri; Bradford, Scott A.

    2017-04-01

    Strongly sorbing chemicals (e.g., heavy metals, radionuclides, pharmaceuticals, and/or explosives) in soils are associated predominantly with the solid phase, which is commonly assumed to be stationary. However, recent field- and laboratory-scale observations have shown that, in the presence of mobile colloidal particles (e.g., microbes, humic substances, clays and metal oxides), the colloids could act as pollutant carriers and thus provide a rapid transport pathway for strongly sorbing contaminants. Such transport can be further accelerated since these colloidal particles may travel through interconnected larger pores where the water velocity is relatively high. Additionally, colloidal particles have a considerable adsorption capacity for other species present in water because of their large specific surface areas and their high concentrations in soil-water and groundwater. As a result, the transport of contaminants can be significantly, sometimes dramatically, enhanced when they are adsorbed to mobile colloids. To address this problem, we have developed the C-Ride module for HYDRUS-1D. This one-dimensional numerical module is based on the HYDRUS-1D software package and incorporates mechanisms associated with colloid and colloid-facilitated solute transport in variably saturated porous media. This numerical model accounts for both colloid and solute movement due to convection, diffusion, and dispersion in variably-saturated soils, as well as for solute movement facilitated by colloid transport. The colloids transport module additionally considers processes of attachment/detachment to/from the solid phase, straining, and/or size exclusion. Various blocking and depth dependent functions can be used to modify the attachment and straining coefficients. The module additionally considers the effects of changes in the water content on colloid/bacteria transport and attachment/detachment to/from solid-water and air-water interfaces. For example, when the air

  2. Design of tallying function for general purpose Monte Carlo particle transport code JMCT

    International Nuclear Information System (INIS)

    Shangguan Danhua; Li Gang; Deng Li; Zhang Baoyin

    2013-01-01

    A new postponed accumulation algorithm was proposed. Based on the JCOGIN (J combinatorial geometry Monte Carlo transport infrastructure) framework and the postponed accumulation algorithm, the tallying function of the general-purpose Monte Carlo neutron-photon transport code JMCT was improved markedly. JMCT achieves a tallying efficiency 28% higher than that of MCNP 4C for a simple geometry model, and is faster than MCNP 4C by two orders of magnitude for a complicated repeated-structure model. The tallying capability of JMCT lays a firm foundation for reactor analysis and multi-step burnup calculations. (authors)

  3. PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

    International Nuclear Information System (INIS)

    Iandola, F.N.; O'Brien, M.J.; Procassini, R.J.

    2010-01-01

    Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

  4. Data Assimilation in Air Contaminant Dispersion Using a Particle Filter and Expectation-Maximization Algorithm

    Directory of Open Access Journals (Sweden)

    Rongxiao Wang

    2017-09-01

    Full Text Available The accurate prediction of air contaminant dispersion is essential to air quality monitoring and the emergency management of contaminant gas leakage incidents in chemical industry parks. Conventional atmospheric dispersion models can seldom give accurate predictions due to inaccurate input parameters. In order to improve the prediction accuracy of dispersion models, two data assimilation methods (i.e., the typical particle filter and the combination of a particle filter and the expectation-maximization algorithm) are proposed to assimilate virtual Unmanned Aerial Vehicle (UAV) observations with measurement error into the atmospheric dispersion model. Two emission cases with different dimensions of state parameters are considered. To test the performance of the proposed methods, two numerical experiments corresponding to the two emission cases are designed and implemented. The results show that the particle filter can effectively estimate the model parameters and improve the accuracy of model predictions when the dimension of the state parameters is relatively low. In contrast, when the dimension of the state parameters becomes higher, the method combining the particle filter with the expectation-maximization algorithm performs better in terms of parameter estimation accuracy. Therefore, the proposed data assimilation methods are able to effectively support air quality monitoring and emergency management in chemical industry parks.
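
    To make the assimilation step concrete, a minimal bootstrap (SIR) particle filter sketch is given below. The dispersion forward model f, observation operator h and noise levels are placeholders, and the expectation-maximization extension (re-estimating the noise parameters from the filtered ensemble) is only indicated in a comment.

```python
import numpy as np

def particle_filter(f, h, observations, x0_particles, q_std, r_std, seed=None):
    """Bootstrap particle filter.  f(X): propagate an (N, dim) array of state
    particles through the dispersion model; h(X): predicted observations,
    shape (N, n_obs); observations: iterable of length-n_obs arrays (e.g.
    UAV readings); q_std, r_std: process and measurement noise std."""
    rng = np.random.default_rng(seed)
    particles = np.array(x0_particles, dtype=float)
    n, dim = particles.shape
    means = []
    for y in observations:
        particles = f(particles) + q_std * rng.standard_normal((n, dim))
        resid = y - h(particles)                       # innovation for each particle
        logw = -0.5 * np.sum((resid / r_std) ** 2, axis=-1)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(w @ particles)                    # posterior mean estimate
        idx = np.searchsorted(np.cumsum(w), (rng.random() + np.arange(n)) / n)
        particles = particles[np.minimum(idx, n - 1)]  # systematic resampling
        # An EM variant would, after a filtering pass, re-estimate q_std and
        # r_std from the weighted ensemble and repeat the pass until convergence.
    return np.array(means)
```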

  5. Energy-Aware Real-Time Task Scheduling for Heterogeneous Multiprocessors with Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Weizhe Zhang

    2014-01-01

    Full Text Available Energy consumption in computer systems has become a more and more important issue. High energy consumption, especially in heterogeneous multiprocessors, has already damaged the environment to some extent. In this paper, we first formulate and describe the energy-aware real-time task scheduling problem in heterogeneous multiprocessors. Then we propose a particle swarm optimization (PSO) based algorithm, which can successfully reduce the energy cost and the time for searching for feasible solutions. Experimental results show that the PSO-based energy-aware metaheuristic uses 40%–50% less energy than the GA-based and SFLA-based algorithms and spends 10% less time than the SFLA-based algorithm in finding the solutions. Besides, it can also find 19% more feasible solutions than the SFLA-based algorithm.

  6. Verification of Gyrokinetic Particle Simulation of Device Size Scaling of Turbulent Transport

    Institute of Scientific and Technical Information of China (English)

    LIN Zhihong; S. ETHIER; T. S. HAHM; W. M. TANG

    2012-01-01

    Verification and a historical perspective are presented on the gyrokinetic particle simulations that discovered the device size scaling of turbulent transport and identified the geometry model as the source of the long-standing disagreement between gyrokinetic particle and continuum simulations.

  7. Measurement of particle transport coefficients on Alcator C-Mod

    Energy Technology Data Exchange (ETDEWEB)

    Luke, T.C.T.

    1994-10-01

    The goal of this thesis was to study the behavior of the plasma transport during the divertor detachment in order to explain the central electron density rise. The measurement of particle transport coefficients requires sophisticated diagnostic tools. A two color interferometer system was developed and installed on Alcator C-Mod to measure the electron density with high spatial (∼ 2 cm) and high temporal (≤ 1.0 ms) resolution. The system consists of 10 CO2 (10.6 μm) and 4 HeNe (0.6328 μm) chords that are used to measure the line integrated density to within 0.08 CO2 degrees or 2.3 × 10^16 m^-2 theoretically. Using the two color interferometer, a series of gas puffing experiments were conducted. The density was varied above and below the threshold density for detachment at a constant magnetic field and plasma current. Using a gas modulation technique, the particle diffusion, D, and the convective velocity, V, were determined. Profiles were inverted using a SVD inversion and the transport coefficients were extracted with a time regression analysis and a transport simulation analysis. Results from each analysis were in good agreement. Measured profiles of the coefficients increased with the radius and the values were consistent with measurements from other experiments. The values exceeded neoclassical predictions by a factor of 10. The profiles also exhibited an inverse dependence with plasma density. The scaling of both attached and detached plasmas agreed well with this inverse scaling. This result and the lack of change in the energy and impurity transport indicate that there was no change in the underlying transport processes after detachment.

  8. Measurement of particle transport coefficients on Alcator C-Mod

    International Nuclear Information System (INIS)

    Luke, T.C.T.

    1994-10-01

    The goal of this thesis was to study the behavior of the plasma transport during the divertor detachment in order to explain the central electron density rise. The measurement of particle transport coefficients requires sophisticated diagnostic tools. A two color interferometer system was developed and installed on Alcator C-Mod to measure the electron density with high spatial (∼ 2 cm) and high temporal (≤ 1.0 ms) resolution. The system consists of 10 CO 2 (10.6 μm) and 4 HeNe (.6328 μm) chords that are used to measure the line integrated density to within 0.08 CO 2 degrees or 2.3 x 10 16 m -2 theoretically. Using the two color interferometer, a series of gas puffing experiments were conducted. The density was varied above and below the threshold density for detachment at a constant magnetic field and plasma current. Using a gas modulation technique, the particle diffusion, D, and the convective velocity, V, were determined. Profiles were inverted using a SVD inversion and the transport coefficients were extracted with a time regression analysis and a transport simulation analysis. Results from each analysis were in good agreement. Measured profiles of the coefficients increased with the radius and the values were consistent with measurements from other experiments. The values exceeded neoclassical predictions by a factor of 10. The profiles also exhibited an inverse dependence with plasma density. The scaling of both attached and detached plasmas agreed well with this inverse scaling. This result and the lack of change in the energy and impurity transport indicate that there was no change in the underlying transport processes after detachment

  9. Influence of clay particles on the transport and retention of titanium dioxide nanoparticles in quartz sand.

    Science.gov (United States)

    Cai, Li; Tong, Meiping; Wang, Xueting; Kim, Hyunjung

    2014-07-01

    This study investigated the influence of two representative suspended clay particles, bentonite and kaolinite, on the transport of titanium dioxide nanoparticles (nTiO2) in saturated quartz sand in both NaCl (1 and 10 mM ionic strength) and CaCl2 solutions (0.1 and 1 mM ionic strength) at pH 7. The breakthrough curves of nTiO2 with bentonite or kaolinite were higher than those without the presence of clay particles in NaCl solutions, indicating that both types of clay particles increased nTiO2 transport in NaCl solutions. Moreover, the enhancement of nTiO2 transport was more significant when bentonite was present in nTiO2 suspensions relative to kaolinite. Similar to NaCl solutions, in CaCl2 solutions the breakthrough curves of nTiO2 with bentonite were also higher than those without clay particles, while the breakthrough curves of nTiO2 with kaolinite were lower than those without clay particles. Clearly, in CaCl2 solutions, the presence of bentonite in suspensions increased nTiO2 transport, whereas kaolinite decreased nTiO2 transport in quartz sand. The attachment of nTiO2 onto clay particles (both bentonite and kaolinite) was observed under all experimental conditions. The increased transport of nTiO2 in most experimental conditions (except for kaolinite in CaCl2 solutions) was attributed mainly to clay-facilitated nTiO2 transport. In contrast, the straining of larger nTiO2-kaolinite clusters contributed to the decreased transport (enhanced retention) of nTiO2 in divalent CaCl2 solutions when kaolinite particles were copresent in suspensions.

  10. Plasma transport in stochastic magnetic fields. I. General considerations and test particle transport

    International Nuclear Information System (INIS)

    Krommes, J.A.; Kleva, R.G.; Oberman, C.

    1978-05-01

    A systematic theory is developed for the computation of electron transport in stochastic magnetic fields. Small scale magnetic perturbations arising, for example, from finite-β micro-instabilities are assumed to destroy the flux surfaces of a standard tokamak equilibrium. Because the magnetic lines then wander in a volume, electron radial flux is enhanced due to the rapid particle transport along as well as across the lines. By treating the magnetic lines as random variables, it is possible to develop a kinetic equation for the electron distribution function. This is solved approximately to yield the diffusion coefficient

  11. Plasma transport in stochastic magnetic fields. I. General considerations and test particle transport

    Energy Technology Data Exchange (ETDEWEB)

    Krommes, J.A.; Kleva, R.G.; Oberman, C.

    1978-05-01

    A systematic theory is developed for the computation of electron transport in stochastic magnetic fields. Small scale magnetic perturbations arising, for example, from finite-β micro-instabilities are assumed to destroy the flux surfaces of a standard tokamak equilibrium. Because the magnetic lines then wander in a volume, electron radial flux is enhanced due to the rapid particle transport along as well as across the lines. By treating the magnetic lines as random variables, it is possible to develop a kinetic equation for the electron distribution function. This is solved approximately to yield the diffusion coefficient.
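
    As a toy illustration of the test-particle picture (not of the kinetic derivation in the paper), the sketch below lets particles stream along field lines whose radial excursions random-walk with an assumed field-line diffusivity d_m, and recovers the familiar collisionless estimate D ≈ d_m·|v_par| from the mean squared displacement. All parameter values are illustrative.

```python
import numpy as np

def radial_diffusion_estimate(n_particles=5000, n_steps=1000, dt=1e-3,
                              v_par=1.0, d_m=1e-4, seed=None):
    """Test-particle estimate of the radial diffusion coefficient for
    electrons tied to stochastically wandering field lines.  Per step the
    parallel distance travelled is |v_par|*dt, and the radial position
    random-walks so that <x^2> grows as 2*d_m*|v_par|*t.  Returns
    D = <x^2>/(2 t), which should approach d_m*|v_par|."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)
    step_rms = np.sqrt(2.0 * d_m * abs(v_par) * dt)
    for _ in range(n_steps):
        x += step_rms * rng.standard_normal(n_particles)
    t = n_steps * dt
    return np.mean(x ** 2) / (2.0 * t)
```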

  12. A Hard Constraint Algorithm to Model Particle Interactions in DNA-laden Flows

    Energy Technology Data Exchange (ETDEWEB)

    Trebotich, D; Miller, G H; Bybee, M D

    2006-08-01

    We present a new method for particle interactions in polymer models of DNA. The DNA is represented by a bead-rod polymer model and is fully-coupled to the fluid. The main objective in this work is to implement short-range forces to properly model polymer-polymer and polymer-surface interactions, specifically, rod-rod and rod-surface uncrossing. Our new method is based on a rigid constraint algorithm whereby rods elastically bounce off one another to prevent crossing, similar to our previous algorithm used to model polymer-surface interactions. We compare this model to a classical (smooth) potential which acts as a repulsive force between rods, and rods and surfaces.

  13. SIMULATION OF ENERGETIC PARTICLE TRANSPORT AND ACCELERATION AT SHOCK WAVES IN A FOCUSED TRANSPORT MODEL: IMPLICATIONS FOR MIXED SOLAR PARTICLE EVENTS

    Energy Technology Data Exchange (ETDEWEB)

    Kartavykh, Y. Y.; Dröge, W. [Institut für Theoretische Physik und Astrophysik, Universität Würzburg, D-97074 Würzburg (Germany); Gedalin, M. [Department of Physics, Ben-Gurion University of the Negev, Beer-Sheva (Israel)

    2016-03-20

    We use numerical solutions of the focused transport equation obtained by an implicit stochastic differential equation scheme to study the evolution of the pitch-angle dependent distribution function of protons in the vicinity of shock waves. For a planar stationary parallel shock, the effects of anisotropic distribution functions, pitch-angle dependent spatial diffusion, and first-order Fermi acceleration at the shock are examined, including the timescales on which the energy spectrum approaches the predictions of diffusive shock acceleration theory. We then consider the case that a flare-accelerated population of ions is released close to the Sun simultaneously with a traveling interplanetary shock for which we assume a simplified geometry. We investigate the consequences of adiabatic focusing in the diverging magnetic field on the particle transport at the shock, and of the competing effects of acceleration at the shock and adiabatic energy losses in the expanding solar wind. We analyze the resulting intensities, anisotropies, and energy spectra as a function of time and find that our simulations can naturally reproduce the morphologies of so-called mixed particle events in which sometimes the prompt and sometimes the shock component is more prominent, by assuming parameter values which are typically observed for scattering mean free paths of ions in the inner heliosphere and energy spectra of the flare particles which are injected simultaneously with the release of the shock.
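
    The pitch-angle part of the focused transport equation maps onto a stochastic differential equation that can be advanced particle by particle. The sketch below uses an explicit Euler-Maruyama step with an assumed pitch-angle diffusion coefficient D_mumu = d0*(1 - mu^2) and focusing length L; the paper itself uses an implicit scheme, so this is only a simplified illustration.

```python
import numpy as np

def focused_transport_step(z, mu, dt, v=1.0, L=1.0, d0=0.1, rng=None):
    """One explicit Euler-Maruyama step for the focused-transport SDEs of the
    field-aligned position z and pitch-angle cosine mu:
    dmu = [dD/dmu + v(1 - mu^2)/(2L)] dt + sqrt(2 D) dW,  dz = v mu dt."""
    rng = np.random.default_rng() if rng is None else rng
    d_mumu = d0 * (1.0 - mu ** 2)
    drift = (-2.0 * d0 * mu + v * (1.0 - mu ** 2) / (2.0 * L)) * dt
    noise = np.sqrt(np.maximum(2.0 * d_mumu * dt, 0.0)) * rng.standard_normal(np.shape(mu))
    mu_new = np.clip(mu + drift + noise, -1.0, 1.0)
    z_new = z + v * mu * dt
    return z_new, mu_new
```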

  14. Microtubule self-organisation by reaction-diffusion processes causes collective transport and organisation of cellular particles

    Directory of Open Access Journals (Sweden)

    Demongeot Jacques

    2004-06-01

    Full Text Available Abstract Background The transport of intra-cellular particles by microtubules is a major biological function. Under appropriate in vitro conditions, microtubule preparations behave as a 'complex' system and show 'emergent' phenomena. In particular, they form dissipative structures that self-organise over macroscopic distances by a combination of reaction and diffusion. Results Here, we show that self-organisation also gives rise to a collective transport of colloidal particles along a specific direction. Particles, such as polystyrene beads, chromosomes, nuclei, and vesicles are carried at speeds of several microns per minute. The process also results in the macroscopic self-organisation of these particles. After self-organisation is completed, they show the same pattern of organisation as the microtubules. Numerical simulations of a population of growing and shrinking microtubules, incorporating experimentally realistic reaction dynamics, predict self-organisation. They forecast that during self-organisation, macroscopic parallel arrays of oriented microtubules form which cross the reaction space in successive waves. Such travelling waves are capable of transporting colloidal particles. The fact that in the simulations, the aligned arrays move along the same direction and at the same speed as the particles move, suggest that this process forms the underlying mechanism for the observed transport properties. Conclusions This process constitutes a novel physical chemical mechanism by which chemical energy is converted into collective transport of colloidal particles along a given direction. Self-organisation of this type provides a new mechanism by which intra cellular particles such as chromosomes and vesicles can be displaced and simultaneously organised by microtubules. It is plausible that processes of this type occur in vivo.

  15. Microtubule self-organisation by reaction-diffusion processes causes collective transport and organisation of cellular particles

    Science.gov (United States)

    Glade, Nicolas; Demongeot, Jacques; Tabony, James

    2004-01-01

    Background: The transport of intra-cellular particles by microtubules is a major biological function. Under appropriate in vitro conditions, microtubule preparations behave as a 'complex' system and show 'emergent' phenomena. In particular, they form dissipative structures that self-organise over macroscopic distances by a combination of reaction and diffusion. Results: Here, we show that self-organisation also gives rise to a collective transport of colloidal particles along a specific direction. Particles, such as polystyrene beads, chromosomes, nuclei, and vesicles are carried at speeds of several microns per minute. The process also results in the macroscopic self-organisation of these particles. After self-organisation is completed, they show the same pattern of organisation as the microtubules. Numerical simulations of a population of growing and shrinking microtubules, incorporating experimentally realistic reaction dynamics, predict self-organisation. They forecast that during self-organisation, macroscopic parallel arrays of oriented microtubules form which cross the reaction space in successive waves. Such travelling waves are capable of transporting colloidal particles. The fact that, in the simulations, the aligned arrays move along the same direction and at the same speed as the particles suggests that this process forms the underlying mechanism for the observed transport properties. Conclusions: This process constitutes a novel physical chemical mechanism by which chemical energy is converted into collective transport of colloidal particles along a given direction. Self-organisation of this type provides a new mechanism by which intra-cellular particles such as chromosomes and vesicles can be displaced and simultaneously organised by microtubules. It is plausible that processes of this type occur in vivo. PMID:15176973

  16. Volume-weighted particle-tracking method for solute-transport modeling; Implementation in MODFLOW–GWT

    Science.gov (United States)

    Winston, Richard B.; Konikow, Leonard F.; Hornberger, George Z.

    2018-02-16

    In the traditional method of characteristics for groundwater solute-transport models, advective transport is represented by moving particles that track concentration. This approach can lead to global mass-balance problems because in models of aquifers having complex boundary conditions and heterogeneous properties, particles can originate in cells having different pore volumes and (or) be introduced (or removed) at cells representing fluid sources (or sinks) of varying strengths. Use of volume-weighted particles means that each particle tracks solute mass. In source or sink cells, the changes in particle weights will match the volume of water added or removed through external fluxes. This enables the new method to conserve mass in source or sink cells as well as globally. This approach also leads to potential efficiencies by allowing the number of particles per cell to vary spatially, using more particles where concentration gradients are high and fewer where gradients are low. The approach also eliminates the need for the model user to distinguish between “weak” and “strong” fluid source (or sink) cells. The new model determines whether solute mass added by fluid sources in a cell should be represented by (1) new particles having weights representing appropriate fractions of the volume of water added by the source, or (2) distributing the solute mass added over all particles already in the source cell. The first option is more appropriate for the condition of a strong source; the latter option is more appropriate for a weak source. At sinks, decisions whether or not to remove a particle are replaced by a reduction in particle weight in proportion to the volume of water removed. A number of test cases demonstrate that the new method works well and conserves mass. The method is incorporated into a new version of the U.S. Geological Survey’s MODFLOW–GWT solute-transport model.
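
    A minimal sketch of the weighting idea described above: solute mass added by a fluid source is either carried by newly created particles (strong source) or spread over the particles already in the cell (weak source), while sinks simply scale particle weights down. The simple lists and function names below are illustrative assumptions, not MODFLOW-GWT code.

```python
def apply_source(particle_weights, particle_concs, q_in, c_in,
                 strong_source, new_particles_per_cell=4):
    """Distribute solute mass added by a fluid source over the particles of one cell.

    particle_weights : water volumes represented by each particle in the cell
    particle_concs   : corresponding particle concentrations (mass / volume)
    q_in, c_in       : volume and concentration of water added during the step
    strong_source    : True -> add new particles; False -> spread mass over existing ones
    """
    if strong_source:
        # option (1): new particles, each carrying a fraction of the added volume
        w = q_in / new_particles_per_cell
        for _ in range(new_particles_per_cell):
            particle_weights.append(w)
            particle_concs.append(c_in)
    else:
        # option (2): distribute added volume and mass over the existing particles
        # (assumes the cell already contains at least one particle)
        total_w = sum(particle_weights)
        added_mass = q_in * c_in
        for i, w in enumerate(particle_weights):
            share = w / total_w
            mass_i = particle_concs[i] * w + added_mass * share
            particle_weights[i] = w + q_in * share
            particle_concs[i] = mass_i / particle_weights[i]
    return particle_weights, particle_concs

def apply_sink(particle_weights, q_out):
    """Reduce particle weights in proportion to the volume of water removed by a sink."""
    total_w = sum(particle_weights)
    factor = max(0.0, 1.0 - q_out / total_w)
    return [w * factor for w in particle_weights]
```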

  17. Optimization of heat pump system in indoor swimming pool using particle swarm algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Wen-Shing; Kung, Chung-Kuan [Department of Energy and Refrigerating Air-Conditioning Engineering, National Taipei University of Technology, 1, Section 3, Chung-Hsiao East Road, Taipei (China)

    2008-09-15

    In indoor swimming pool facilities, a large amount of energy is required to heat low-temperature outdoor air before it is introduced indoors to maintain indoor humidity. Since water evaporates from the pool surface, the exhausted air carries additional moisture and a higher specific enthalpy, so a heat pump is generally used for heat recovery in indoor swimming pools. To reduce energy costs, this paper applies a particle swarm algorithm to optimize the design of the heat pump system. The optimized parameters include both continuous and discrete parameters: the former consist of the outdoor air mass flow and the heat conductance of the heat exchangers, while the latter comprise the compressor type and the boiler type. In a case study, the life cycle energy cost is taken as the objective function, and the optimized outdoor air flow and heating system design are deduced by using the particle swarm algorithm. (author)

  18. Optimization of Reinforced Concrete Reservoir with Circumferential Stiffeners Strips by Particle Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    GholamReza Havaei

    2015-09-01

    Reinforced concrete reservoirs (RCR) have been used extensively in municipal and industrial facilities for several decades. The design of these structures requires that attention be given not only to strength requirements, but to serviceability requirements as well. These structures may be square, round, or oval reinforced concrete tanks built above, below, or partially below ground. The main challenge is to design concrete liquid-containing structures which will resist the extremes of seasonal temperature changes and a variety of loading conditions, and remain liquid tight for a useful life of 50 to 60 years. In this study, optimization based on the structural design is performed with a particle swarm algorithm. First, structural analysis is used to determine the full range of shell thicknesses and rebar areas. In the second step, a parameter identification and interchange scheme links the particle swarm source code, developed in MATLAB, to the analysis software, so that the best thicknesses and total bar areas are found for each element. Finally, with circumferential stiffeners, the optimized structure shows a 19% decrease in the weight of rebar, a 20% decrease in the volume of concrete, and at least a 13% cost reduction in the construction procedure compared with conventional 10,000 m3 RCR structures.

  19. Momentum, heat, and mass transfer analogy for vertical hydraulic transport of inert particles

    Directory of Open Access Journals (Sweden)

    Jaćimovski Darko R.

    2014-01-01

    Wall-to-bed momentum, heat and mass transfer in vertical liquid-solids flow, as well as in single-phase flow, were studied. The aim of this investigation was to establish the analogy among those phenomena. The effect of particle concentration on momentum, heat and mass transfer was also studied. The hydraulic transport experiments were performed in a 25.4 mm I.D. copper tube equipped with a steam jacket, using spherical glass particles of 1.94 mm in diameter and water as the transport fluid. The segment of the transport tube used for mass transfer measurements was coated on the inside with benzoic acid. In hydraulic transport, two characteristic flow regimes were observed: the turbulent and the parallel particle flow regime. The transition between the two characteristic regimes (γ* = 0) occurs at a critical voidage ε ≈ 0.85. The vertical two-phase flow was treated as a pseudofluid, and a modified mixture-wall friction coefficient (fw) and a modified mixture Reynolds number (Rem) were introduced to describe this system. Experimental data show that the wall-to-bed momentum, heat and mass transfer coefficients in vertical pseudofluid flow are significantly higher in the turbulent regime than in the parallel regime. Wall-to-bed mass and heat transfer coefficients in hydraulic transport of particles were much higher than in single-phase flow at lower Reynolds numbers (Re below about 15,000), while at higher Reynolds numbers there was no significant difference. The experimental data for wall-to-bed momentum, heat and mass transfer in vertical pseudofluid flow in the parallel particle flow regime show the existing analogy among these three phenomena. [Project of the Ministry of Science of the Republic of Serbia, no. 172022]

  20. Radial transport processes as a precursor to particle deposition in drinking water distribution systems.

    Science.gov (United States)

    van Thienen, P; Vreeburg, J H G; Blokker, E J M

    2011-02-01

    Various particle transport mechanisms play a role in the build-up of discoloration potential in drinking water distribution networks. In order to enhance our understanding of and ability to predict this build-up, it is essential to recognize and understand their role. Gravitational settling with drag has primarily been considered in this context. However, since flow in water distribution pipes is nearly always in the turbulent regime, turbulent processes should be considered also. In addition to these, single particle effects and forces may affect radial particle transport. In this work, we present an application of a previously published turbulent particle deposition theory to conditions relevant for drinking water distribution systems. We predict quantitatively under which conditions turbophoresis, including the virtual mass effect, the Saffman lift force, and the Magnus force may contribute significantly to sediment transport in radial direction and compare these results to experimental observations. The contribution of turbophoresis is mostly limited to large particles (>50 μm) in transport mains, and not expected to play a major role in distribution mains. The Saffman lift force may enhance this process to some degree. The Magnus force is not expected to play any significant role in drinking water distribution systems. © 2010 Elsevier Ltd. All rights reserved.

  1. Effects of oil dispersants on settling of marine sediment particles and particle-facilitated distribution and transport of oil components.

    Science.gov (United States)

    Cai, Zhengqing; Fu, Jie; Liu, Wen; Fu, Kunming; O'Reilly, S E; Zhao, Dongye

    2017-01-15

    This work investigated effects of three model oil dispersants (Corexit EC9527A, Corexit EC9500A and SPC1000) on settling of fine sediment particles and particle-facilitated distribution and transport of oil components in sediment-seawater systems. All three dispersants enhanced settling of sediment particles. The nonionic surfactants (Tween 80 and Tween 85) play key roles in promoting particle aggregation. Yet, the effects varied with environmental factors (pH, salinity, DOM, and temperature). Strongest dispersant effect was observed at neutral or alkaline pH and in salinity range of 0-3.5wt%. The presence of water accommodated oil and dispersed oil accelerated settling of the particles. Total petroleum hydrocarbons in the sediment phase were increased from 6.9% to 90.1% in the presence of Corexit EC9527A, and from 11.4% to 86.7% for PAHs. The information is useful for understanding roles of oil dispersants in formation of oil-sediment aggregates and in sediment-facilitated transport of oil and PAHs in marine eco-systems. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Algorithmic Information Dynamics of Persistent Patterns and Colliding Particles in the Game of Life

    KAUST Repository

    Zenil, Hector

    2018-02-18

    We demonstrate how to apply and exploit the concept of algorithmic information dynamics in the characterization and classification of dynamic and persistent patterns, motifs and colliding particles, using, without loss of generality, Conway's Game of Life (GoL) cellular automaton as a case study. We analyze the distribution of prevailing motifs that occur in GoL from the perspective of algorithmic probability. We demonstrate how the tools introduced are an alternative to computable measures such as entropy and compression algorithms, which are often insensitive to small changes and to features of a non-statistical nature in the study of evolving complex systems and their emergent structures.

  3. Commuter exposure to inhalable, thoracic and alveolic particles in various transportation modes in Delhi.

    Science.gov (United States)

    Kumar, Pramod; Gupta, N C

    2016-01-15

    A public health concern is to understand the linkages between specific pollution sources and adverse health impacts. Commuting can be viewed as one of the significant exposure activities in high-vehicle-density areas. This paper investigates commuter exposure to inhalable, thoracic and alveolic particles in various transportation modes in Delhi, India. Automobile exhaust contributes significantly to air pollution levels, and in-vehicle exposure can sometimes be higher than ambient levels. Motorcycle, auto rickshaw, car and bus were selected to study particle concentrations along two routes in Delhi between Kashmere Gate and Dwarka. The bus and auto rickshaw were running on compressed natural gas (CNG) while the car and motorcycle were operated on gasoline fuel. An aerosol spectrometer was employed to measure inhalable, thoracic and alveolic particles during morning and evening rush hours for five weekdays. From the study, we observed that the concentration levels of these particles were greatly influenced by the transportation mode. Concentrations of inhalable particles were higher during the morning in the auto rickshaw (332.81 ± 90.97 μg/m3), while bus commuters experienced higher exposure to thoracic particles (292.23 ± 110.45 μg/m3) and car commuters were exposed to the maximum concentrations of alveolic particles (222.37 ± 26.56 μg/m3). We observed that in the evening car commuters experienced the maximum concentrations of all particle sizes among the four commuting modes. Interestingly, motorcycle commuters were exposed to lower levels of inhalable and thoracic particles during morning and evening hours compared to the other modes of transport. The mean values were greater than the median values for all modes of transport, suggesting that positively skewed distributions are characteristic of this naturally occurring phenomenon. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Comparison of genetic algorithm and imperialist competitive algorithms in predicting bed load transport in clean pipe.

    Science.gov (United States)

    Ebtehaj, Isa; Bonakdari, Hossein

    2014-01-01

    The existence of sediments in wastewater greatly affects the performance of sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. This article reviews the performance of the genetic algorithm (GA) and the imperialist competitive algorithm (ICA) in minimizing the target function (the mean square error of observed and predicted Froude numbers). To study the impact of the bed load transport parameters, six different models based on four non-dimensional groups are presented. Moreover, the roulette wheel selection method is used to select the parents. For the selected model, the ICA, with root mean square error (RMSE) = 0.007 and mean absolute percentage error (MAPE) = 3.5%, shows better results than the GA (RMSE = 0.007, MAPE = 5.6%). The ICA returns better results than the GA for all six models. The results of these two algorithms were also compared with a multi-layer perceptron and with existing equations.
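
    The roulette wheel parent selection mentioned above can be sketched as follows, assuming a minimization setting where a lower error should translate into a higher selection probability; the inverse-error weighting is an illustrative choice, not necessarily the one used in the paper.

```python
import random

def roulette_wheel_select(population, errors):
    """Pick one parent with probability inversely proportional to its error.

    population : list of candidate solutions
    errors     : list of mean square errors (lower is better)
    """
    # convert errors to positive selection weights (smaller error -> larger weight)
    weights = [1.0 / (1e-12 + e) for e in errors]
    total = sum(weights)
    r = random.uniform(0.0, total)
    cumulative = 0.0
    for individual, w in zip(population, weights):
        cumulative += w
        if cumulative >= r:
            return individual
    return population[-1]

# usage: select a parent from three candidate solutions
parent = roulette_wheel_select(["A", "B", "C"], [0.12, 0.05, 0.30])
```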

  5. Computer codes in particle transport physics

    International Nuclear Information System (INIS)

    Pesic, M.

    2004-01-01

    Simulation of the transport and interaction of various particles in complex media over a wide energy range (from 1 MeV up to 1 TeV) is a very complicated problem that requires a valid model of the real process in nature and appropriate solving tools - a computer code and a data library. A brief overview of computer codes based on Monte Carlo techniques for simulation of the transport and interaction of hadrons and ions over a wide energy range in three-dimensional (3D) geometry is given. First, attention is paid to the approach to the solution of the problem - a process in nature - by selection of an appropriate 3D model and the corresponding tools - computer codes and cross section data libraries. The process of collecting and evaluating data from experimental measurements, and the theoretical approach to establishing reliable libraries of evaluated cross section data, is a long, difficult and not straightforward activity. For this reason, world reference data centers and specialized ones are acknowledged, together with the currently available, state-of-the-art evaluated nuclear data libraries, such as ENDF/B-VI, JEF, JENDL, CENDL, BROND, etc. Codes for experimental and theoretical data evaluation (e.g., SAMMY and GNASH) together with codes for data processing (e.g., NJOY, PREPRO and GRUCON) are briefly described. Examples of data evaluation and data processing to generate computer-usable data libraries are shown. Among the numerous and various computer codes developed for particle transport physics, only the most general ones are described: MCNPX, FLUKA and SHIELD. A short overview of the basic applications of these codes, the physical models implemented with their limitations, and the energy ranges of particles and types of interactions is given. General information about the codes also covers the programming language, operating system, calculation speed and code availability. An example of the increase in computation speed obtained by running the MCNPX code on an MPI cluster, compared to the sequential option of the code, is also given.

  6. MISR Dark Water aerosol retrievals: operational algorithm sensitivity to particle non-sphericity

    Directory of Open Access Journals (Sweden)

    O. V. Kalashnikova

    2013-08-01

    The aim of this study is to theoretically investigate the sensitivity of the Multi-angle Imaging SpectroRadiometer (MISR) operational (version 22) Dark Water retrieval algorithm to aerosol non-sphericity over the global oceans under actual observing conditions, accounting for current algorithm assumptions. Non-spherical (dust) aerosol models, which were introduced in version 16 of the MISR aerosol product, improved the quality and coverage of retrievals in dusty regions. Due to the sensitivity of the retrieval to the presence of non-spherical aerosols, the MISR aerosol product has been successfully used to track the location and evolution of mineral dust plumes from the Sahara across the Atlantic, for example. However, the MISR global non-spherical aerosol optical depth (AOD) fraction product has been found to have several climatological artifacts superimposed on valid detections of mineral dust, including high non-spherical fraction in the Southern Ocean and seasonally variable bands of high non-sphericity. In this paper we introduce a formal approach to examine the ability of the operational MISR Dark Water algorithm to distinguish among various spherical and non-spherical particles as a function of the variable MISR viewing geometry. We demonstrate the following under the criteria currently implemented: (1) Dark Water retrieval sensitivity to particle non-sphericity decreases for AOD below about 0.1 primarily due to an unnecessarily large lower bound imposed on the uncertainty in MISR observations at low light levels, and improves when this lower bound is removed; (2) Dark Water retrievals are able to distinguish between the spherical and non-spherical particles currently used for all MISR viewing geometries when the AOD exceeds 0.1; (3) the sensitivity of the MISR retrievals to aerosol non-sphericity varies in a complex way that depends on the sampling of the scattering phase function and the contribution from multiple scattering; and (4) non

  7. Representation of mathematical expectation of symmetrical functionals in the particle transport theory

    International Nuclear Information System (INIS)

    Uchajkin, V.V.

    1977-01-01

    The two-dimensional functional is used to show that the mathematical expectation of symmetrical functionals may be represented as a nonlinear functional obtained from the solution of the Boltzmann equation (Green's function). For the highest moments of additive detector readings, which are a particular case of symmetrical functionals, a similar result was obtained by the author previously when he studied particle transport with and without multiplication. In physical terms, such a representation rests on the absence of interaction of moving particles with one another, the assumption which is the basis of the linear transport theory.

  8. A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm

    Science.gov (United States)

    Shamsi, Mousa; Sedaaghi, Mohammad Hossein

    2016-01-01

    Particle swarm optimization (PSO) is an evolutionary computing method based on the intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of the PSO algorithm depends greatly on appropriate parameter selection strategies for fine tuning its parameters. The inertia weight (IW) is one of PSO’s parameters, used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting the inertia weight, named the Flexible Exponential Inertia Weight (FEIW) strategy, because for each problem an increasing or decreasing inertia weight schedule can be constructed through suitable parameter selection. The efficacy and efficiency of the PSO algorithm with the FEIW strategy (FEPSO) are validated on a suite of benchmark problems with different dimensions. FEIW is also compared with the best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis show that FEIW improves the search performance in terms of solution quality as well as convergence rate. PMID:27560945
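
    A brief sketch of an inertia-weight PSO with an exponential schedule, in the spirit of the FEIW idea described above. The exact FEIW formula is not reproduced here; the schedule shape, parameter values and helper names below are illustrative assumptions.

```python
import numpy as np

def exponential_inertia(w_start, w_end, t, t_max, alpha=4.0):
    """Illustrative exponential inertia-weight schedule (the exact FEIW form may differ).

    Decreases from w_start to w_end when w_start > w_end, increases otherwise."""
    return w_end + (w_start - w_end) * np.exp(-alpha * t / t_max)

def pso(f, dim, n_particles=30, t_max=200, w_start=0.9, w_end=0.4,
        c1=2.0, c2=2.0, bounds=(-5.0, 5.0)):
    """Global-best PSO minimizing f over a box, with a time-varying inertia weight."""
    rng = np.random.default_rng(1)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for t in range(t_max):
        w = exponential_inertia(w_start, w_end, t, t_max)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

# example: minimize the sphere function in 10 dimensions
best, best_val = pso(lambda z: float(np.sum(z * z)), dim=10)
```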

  9. Chaotic particle swarm optimization algorithm in a support vector regression electric load forecasting model

    International Nuclear Information System (INIS)

    Hong, W.-C.

    2009-01-01

    Accurate forecasting of electric load has always been one of the most important issues in the electricity industry, particularly for developing countries. Due to various influences, electric load forecasting exhibits highly nonlinear characteristics. Recently, support vector regression (SVR), with its nonlinear mapping capabilities, has been successfully employed to solve nonlinear regression and time series problems. However, systematic approaches for determining an appropriate parameter combination for an SVR model are still lacking. This investigation elucidates the feasibility of applying the chaotic particle swarm optimization (CPSO) algorithm to choose a suitable parameter combination for an SVR model. The empirical results reveal that the proposed model outperforms two other models based on alternative algorithms, the genetic algorithm (GA) and the simulated annealing algorithm (SA). Finally, a theoretical exploration of the electric load forecasting support system (ELFSS) is also provided.

  10. Ripple enhanced transport of suprathermal alpha particles

    International Nuclear Information System (INIS)

    Tani, K.; Takizuka, T.; Azumi, M.

    1986-01-01

    The ripple enhanced transport of suprathermal alpha particles has been studied by the newly developed Monte-Carlo code in which the motion of banana orbit in a toroidal field ripple is described by a mapping method. The existence of ripple-resonance diffusion has been confirmed numerically. We have developed another new code in which the radial displacement of banana orbit is given by the diffusion coefficients from the mapping code or the orbit following Monte-Carlo code. The ripple loss of α particles during slowing down has been estimated by the mapping model code as well as the diffusion model code. From the comparison of the results with those from the orbit-following Monte-Carlo code, it has been found that all of them agree very well. (author)

  11. DANTSYS: a system for deterministic, neutral particle transport calculations

    Energy Technology Data Exchange (ETDEWEB)

    Alcouffe, R.E.; Baker, R.S.

    1996-12-31

    The THREEDANT code is the latest addition to our system of codes, DANTSYS, which performs neutral particle transport computations on a given system of interest. The system of codes is distinguished by geometrical or symmetry considerations. For example, ONEDANT and TWODANT are designed for one- and two-dimensional geometries, respectively. We have TWOHEX for hexagonal geometries, TWODANT/GQ for arbitrary quadrilaterals in XY and RZ geometry, and THREEDANT for three-dimensional geometries. The design of this system of codes is such that they share the same input and edit module, and hence the input and output are uniform for all the codes (with the obvious additions needed to specify each type of geometry). The codes in this system are also designed to be general purpose, solving both eigenvalue and source-driven problems. In this paper we concentrate on the THREEDANT module, since there are special considerations that need to be taken into account when designing such a module. The main issues that need to be addressed in a three-dimensional transport solver are the computational time needed to solve a problem and the amount of storage needed to accomplish that solution. Both these issues are directly related to the number of spatial mesh cells required to obtain a solution to a specified accuracy, but they are also related to the spatial discretization method chosen and to the requirements of the iteration acceleration scheme employed, as noted below. Another related consideration is the robustness of the resulting algorithms as implemented, because insistence on complete robustness has a significant impact upon the computation time. We address each of these issues in the following, giving the reasons for the choices made in our approach to this code and outlining how the code is evolving to better address the shortcomings that presently exist.

  12. Thermodynamic design of Stirling engine using multi-objective particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Duan, Chen; Wang, Xinggang; Shu, Shuiming; Jing, Changwei; Chang, Huawei

    2014-01-01

    Highlights: • An improved thermodynamic model taking into account irreversibility parameter was developed. • A multi-objective optimization method for designing Stirling engine was investigated. • Multi-objective particle swarm optimization algorithm was adopted in the area of Stirling engine for the first time. - Abstract: In the recent years, the interest in Stirling engine has remarkably increased due to its ability to use any heat source from outside including solar energy, fossil fuels and biomass. A large number of studies have been done on Stirling cycle analysis. In the present study, a mathematical model based on thermodynamic analysis of Stirling engine considering regenerative losses and internal irreversibilities has been developed. Power output, thermal efficiency and the cycle irreversibility parameter of Stirling engine are optimized simultaneously using Particle Swarm Optimization (PSO) algorithm, which is more effective than traditional genetic algorithms. In this optimization problem, some important parameters of Stirling engine are considered as decision variables, such as temperatures of the working fluid both in the high temperature isothermal process and in the low temperature isothermal process, dead volume ratios of each heat exchanger, volumes of each working spaces, effectiveness of the regenerator, and the system charge pressure. The Pareto optimal frontier is obtained and the final design solution has been selected by Linear Programming Technique for Multidimensional Analysis of Preference (LINMAP). Results show that the proposed multi-objective optimization approach can significantly outperform traditional single objective approaches
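
    The LINMAP step mentioned above picks a single design from the Pareto front; one common formulation normalizes the objectives and selects the point closest to the ideal point, sketched below with made-up example numbers (the exact variant and normalization used in the paper may differ).

```python
import numpy as np

def linmap_select(pareto_points, maximize):
    """Pick the Pareto point closest to the ideal point in normalized objective space.

    pareto_points : array of shape (n_points, n_objectives)
    maximize      : list of booleans, True if that objective is to be maximized
    """
    pts = np.asarray(pareto_points, dtype=float)
    # non-dimensionalize each objective column (Euclidean normalization)
    norm = pts / np.linalg.norm(pts, axis=0)
    # ideal point: best achievable value of each normalized objective
    ideal = np.where(maximize, norm.max(axis=0), norm.min(axis=0))
    dist = np.linalg.norm(norm - ideal, axis=1)
    return int(np.argmin(dist))

# example objectives: (power output, efficiency, irreversibility); the last is minimized
front = [[6000.0, 0.34, 0.30], [5500.0, 0.38, 0.25], [5000.0, 0.40, 0.22]]
best_index = linmap_select(front, maximize=[True, True, False])
```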

  13. A novel robust and efficient algorithm for charge particle tracking in high background flux

    International Nuclear Information System (INIS)

    Fanelli, C; Cisbani, E; Dotto, A Del

    2015-01-01

    The high luminosity that will be reached in the new generation of high energy particle and nuclear physics experiments implies high background rates and large tracker occupancy, and therefore represents a new challenge for particle tracking algorithms. For instance, at Jefferson Laboratory (JLab) (VA, USA), one of the most demanding experiments in this respect, performed with a 12 GeV electron beam, is characterized by a luminosity up to 10^39 cm^-2 s^-1. To this end, Gaseous Electron Multiplier (GEM) based trackers are under development for a new spectrometer that will operate at these high rates in Hall A of JLab. Within this context, we developed a new tracking algorithm based on a multistep approach: (i) all hardware information - time and charge - is exploited to minimize the number of hits to associate; (ii) a dedicated Neural Network (NN) has been designed for a fast and efficient association of the hits measured by the GEM detector; (iii) the measurements of the associated hits are further improved in resolution through the application of a Kalman filter and a Rauch-Tung-Striebel smoother. The algorithm is briefly presented along with a discussion of the promising first results. (paper)
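
    The final refinement stage described above combines a Kalman filter with a Rauch-Tung-Striebel smoother. The sketch below applies that combination to a toy one-dimensional constant-velocity track rather than to GEM hits; the state model and noise values are illustrative assumptions.

```python
import numpy as np

def kalman_rts(zs, dt=1.0, q=1e-3, r=0.25):
    """Kalman filter plus Rauch-Tung-Striebel smoother for a 1D constant-velocity track.

    zs : sequence of position measurements
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])                       # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])                                   # only position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])  # process noise
    R = np.array([[r]])                                          # measurement noise
    x, P = np.array([zs[0], 0.0]), np.eye(2)
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    for z in zs:
        # predict
        xp, Pp = F @ x, F @ P @ F.T + Q
        # update with the new measurement
        S = H @ Pp @ H.T + R
        K = Pp @ H.T @ np.linalg.inv(S)
        x = xp + K @ (np.array([z]) - H @ xp)
        P = (np.eye(2) - K @ H) @ Pp
        xs_p.append(xp); Ps_p.append(Pp); xs_f.append(x); Ps_f.append(P)
    # RTS backward smoothing pass
    xs_s, Ps_s = [xs_f[-1]], [Ps_f[-1]]
    for k in range(len(zs) - 2, -1, -1):
        C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
        xs_s.insert(0, xs_f[k] + C @ (xs_s[0] - xs_p[k + 1]))
        Ps_s.insert(0, Ps_f[k] + C @ (Ps_s[0] - Ps_p[k + 1]) @ C.T)
    return np.array(xs_s)

smoothed = kalman_rts([0.1, 1.05, 2.2, 2.9, 4.1, 5.0])
```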

  14. Particle transport and fluctuation characteristics around neoclassically optimized configurations in LHD

    International Nuclear Information System (INIS)

    Tanaka, K.; Michael, C.; Vyacheslavov, L.N.

    2008-01-01

    Density profiles in LHD were measured and particle transport coefficients were estimated from density modulation experiments. The data set covers a wide range of discharge conditions: scans of the magnetic axis position, toroidal magnetic field and heating power provided a data set in which the neoclassical transport is widely varied. The configuration with minimized neoclassical transport in the data set (Rax = 3.5 m, Bt = 2.8 T) showed a peaked density profile, whose peaking factor increased gradually with decreasing collision frequency. This is similar to the result observed in tokamak databases. At the other configurations, the peaking factor decreased with decreasing collision frequency. The data set showed that a larger contribution of neoclassical transport produced hollow density profiles. Comparison between the neoclassical and the experimentally estimated particle diffusivities showed different minimum conditions, which suggests that neoclassical optimization is not the same as optimization of the anomalous transport. A clear difference in the spatial profile of the turbulence was observed between hollow and peaked density profiles. The major part of the fluctuations was located in the region that is linearly unstable to the ion temperature gradient mode. (author)

  15. Direct measurements of particle transport in dc glow discharge dusty plasmas

    International Nuclear Information System (INIS)

    Thomas, E. Jr.

    2001-01-01

    Many recent experiments in dc glow discharge plasmas have shown that clouds of dust particles can be suspended near the biased electrodes. Once formed, the dust clouds have well defined boundaries while particle motion within the clouds can be quite complex. Because the dust particles in the cloud can remain suspended in the plasma for tens of minutes, it implies that the particles have a low diffusive loss rate and follow closed trajectories within the cloud. In the experiments discussed in this paper, direct measurements of the dust particle velocities are made using particle image velocimetry (PIV) techniques. From the velocity measurements, a reconstruction of the three-dimensional transport of the dust particles is performed. A qualitative model is developed for the closed motion of the dust particles in a dc glow discharge dusty plasma. (orig.)

  16. Directed Magnetic Particle Transport above Artificial Magnetic Domains Due to Dynamic Magnetic Potential Energy Landscape Transformation.

    Science.gov (United States)

    Holzinger, Dennis; Koch, Iris; Burgard, Stefan; Ehresmann, Arno

    2015-07-28

    An approach for a remotely controllable transport of magnetic micro- and/or nanoparticles above a topographically flat exchange-bias (EB) thin film system, magnetically patterned into parallel stripe domains, is presented where the particle manipulation is achieved by sub-mT external magnetic field pulses. Superparamagnetic core-shell particles are moved stepwise by the dynamic transformation of the particles' magnetic potential energy landscape due to the external magnetic field pulses without affecting the magnetic state of the thin film system. The magnetic particle velocity is adjustable in the range of 1-100 μm/s by the design of the substrate's magnetic field landscape (MFL), the particle-substrate distance, and the magnitude of the applied external magnetic field pulses. The agglomeration of magnetic particles is avoided by the intrinsic magnetostatic repulsion of particles due to the parallel alignment of the particles' magnetic moments perpendicular to the transport direction and parallel to the surface normal of the substrate during the particle motion. The transport mechanism is modeled by a quantitative theory based on the precise knowledge of the sample's MFL and the particle-substrate distance.

  17. Optimal Sensor Placement for Latticed Shell Structure Based on an Improved Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Xun Zhang

    2014-01-01

    Optimal sensor placement is a key issue in the structural health monitoring of large-scale structures. However, some aspects of existing approaches require improvement, such as the empirical and unreliable selection of mode and sensor numbers and time-consuming computation. A novel improved particle swarm optimization (IPSO) algorithm is proposed to address these problems. The approach first employs the cumulative effective modal mass participation ratio to select the number of modes. Three strategies are then adopted to improve the PSO algorithm. Finally, the IPSO algorithm is used to determine the optimal number of sensors and their configuration. A case study of a latticed shell model is implemented to verify the feasibility of the proposed algorithm and of four different PSO algorithms. The effective independence method is also used as a comparison. The results show that the optimal placement schemes obtained by the PSO algorithms are valid, and that the proposed IPSO algorithm offers better convergence speed and precision.

  18. Linear kinetic theory and particle transport in stochastic mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Pomraning, G.C. [Univ. of California, Los Angeles, CA (United States)

    1995-12-31

    We consider the formulation of linear transport and kinetic theory describing energy and particle flow in a random mixture of two or more immiscible materials. Following an introduction, we summarize early and fundamental work in this area, and we conclude with a brief discussion of recent results.

  19. Comparison between Genetic Algorithms and Particle Swarm Optimization Methods on Standard Test Functions and Machine Design

    DEFF Research Database (Denmark)

    Nica, Florin Valentin Traian; Ritchie, Ewen; Leban, Krisztina Monika

    2013-01-01

    Nowadays the requirements imposed by industry and the economy ask for better quality and performance while the price must be maintained in the same range. To achieve this goal, optimization must be introduced in the design process. Two of the best known optimization algorithms for machine design, the genetic algorithm and particle swarm optimization, are shortly presented in this paper. These two algorithms are tested to determine their performance on five different benchmark test functions. The algorithms are tested based on three requirements: precision of the result, number of iterations and calculation time. Both algorithms are also tested on an analytical design process of a Transverse Flux Permanent Magnet Generator to observe their performances in an electrical machine design application.

  20. A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2015-01-01

    Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presented a comprehensive investigation of PSO. On one hand, we provided advances with PSO, including its modifications (including quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topology (as fully connected, von Neumann, ring, star, random, etc.), hybridization (with genetic algorithm, simulated annealing, Tabu search, artificial immune system, ant colony algorithm, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we offered a survey on applications of PSO to the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey would be beneficial for the researchers studying PSO algorithms.

  1. Experimental and theoretical study of particle transport in the TCV Tokamak

    International Nuclear Information System (INIS)

    Fable, E.

    2009-06-01

    The main scope of this thesis work is to compare theoretical models with experimental observations on particle transport in particular regimes of plasma operation from the Tokamak à Configuration Variable (TCV) located at CRPP–EPFL in Lausanne. We introduce the main topics in Tokamak fusion research and the challenging problems in the first Chapter. A particular attention is devoted to the modelling of heat and particle transport. In the second Chapter the experimental part is presented, including an overview of TCV capabilities, a brief review of the relevant diagnostic systems, and a discussion of the numerical tools used to analyze the experimental data. In addition, the numerical codes that are used to interpret the experimental data and to compare them with theoretical predictions are introduced. The third Chapter deals with the problem of understanding the mechanisms that regulate the transport of energy in TCV plasmas, in particular in the electron Internal Transport Barrier (eITB) scenario. A radial transport code, integrated with an external module for the calculation of the turbulence-induced transport coefficients, is employed to reproduce the experimental scenario and to understand the physics at play. It is shown how the sustainment of an improved confinement regime is linked to the presence of a reversed safety factor profile. The improvement of confinement in the eITB regime is visible in the energy channel and in the particle channel as well. The density profile shows strong correlation with the temperature profile and has a large local logarithmic gradient. This is an important result obtained from the TCV eITB scenario analysis and is presented in the fourth Chapter. In the same chapter we present the estimate of the particle diffusion and convection coefficients obtained from density transient experiments performed in the eITB scenario. The theoretical understanding of the strong correlation between density and temperature observed in the e

  2. Time-dependent transport of energetic particles in magnetic turbulence: computer simulations versus analytical theory

    Science.gov (United States)

    Arendt, V.; Shalchi, A.

    2018-06-01

    We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.

  3. Neutron secondary-particle production cross sections and their incorporation into Monte-Carlo transport codes

    International Nuclear Information System (INIS)

    Brenner, D.J.; Prael, R.E.; Little, R.C.

    1987-01-01

    Realistic simulations of the passage of fast neutrons through tissue require a large quantity of cross section data. What are needed are differential (in particle type, energy and angle) cross sections. A computer code is described which produces such spectra for neutrons above ∼14 MeV incident on light nuclei such as carbon and oxygen. Comparisons have been made with experimental measurements of double-differential secondary charged-particle production on carbon and oxygen at energies from 27 to 60 MeV; they indicate that the model is adequate in this energy range. In order to utilize fully the results of these calculations, they should be incorporated into a neutron transport code. This requires defining a generalized format for describing charged-particle production, putting the calculated results in this format, interfacing the neutron transport code with these data, and transporting the charged particles. The design and development of such a program is described. 13 refs., 3 figs

  4. Simplified particle swarm optimization algorithm - doi: 10.4025/actascitechnol.v34i1.9679

    Directory of Open Access Journals (Sweden)

    Ricardo Paupitz Barbosa dos Santos

    2011-11-01

    Real ants and bees are considered social insects, which present some remarkable characteristics that can be used as inspiration to solve complex optimization problems. This field of study is known as swarm intelligence. This paper presents a new algorithm that can be understood as a simplified version of the well known Particle Swarm Optimization (PSO). The proposed algorithm saves some computational effort and achieves considerable performance in the optimization of nonlinear functions. We employed four nonlinear benchmark functions, the Sphere, Schwefel, Schaffer and Ackley functions, to test and validate the new proposal. Simulated results are used to illustrate the efficiency of the proposed algorithm.
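
    The four benchmark functions named above are standard test problems; as an illustration, two of them (Sphere and Ackley) are written out below in their commonly used forms (the exact parameter choices in the paper may differ).

```python
import numpy as np

def sphere(x):
    """Sphere function: global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x * x))

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    """Ackley function (common parameterization): global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(-a * np.exp(-b * np.sqrt(np.sum(x * x) / n))
                 - np.exp(np.sum(np.cos(c * x)) / n) + a + np.e)

# both evaluate to (near) zero at the global optimum
print(sphere([0.0, 0.0]), ackley([0.0, 0.0]))
```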

  5. Recently developed methods in neutral-particle transport calculations: overview

    International Nuclear Information System (INIS)

    Alcouffe, R.E.

    1982-01-01

    It has become increasingly apparent that successful, general methods for the solution of the neutral particle transport equation involve a close connection between the spatial-discretization method used and the source-acceleration method chosen. The first-order form of the transport equation is considered, with angular discretization by discrete ordinates and spatial discretization based upon a mesh arrangement. Characteristic methods are considered briefly in the context of future, desirable developments. The ideal spatial-discretization method is described as having the following attributes: (1) positive-positive boundary data yields a positive angular flux within the mesh including its boundaries; (2) satisfies the particle balance equation over the mesh, that is, the method is conservative; (3) possesses the diffusion limit independent of spatial mesh size, that is, for a linearly isotropic flux assumption, the transport differencing reduces to a suitable diffusion equation differencing; (4) the method is unconditionally acceleratable, i.e., for each mesh size, the method is unconditionally convergent with a source iteration acceleration. It is doubtful that a single method possesses all these attributes for a general problem. Some commonly used methods are outlined and their computational performance and usefulness are compared; recommendations for future development are detailed, which include practical computational considerations.

  6. An analysis of 3D particle path integration algorithms

    International Nuclear Information System (INIS)

    Darmofal, D.L.; Haimes, R.

    1996-01-01

    Several techniques for the numerical integration of particle paths in steady and unsteady vector (velocity) fields are analyzed. Most of the analysis applies to unsteady vector fields, however, some results apply to steady vector field integration. Multistep, multistage, and some hybrid schemes are considered. It is shown that due to initialization errors, many unsteady particle path integration schemes are limited to third-order accuracy in time. Multistage schemes require at least three times more internal data storage than multistep schemes of equal order. However, for timesteps within the stability bounds, multistage schemes are generally more accurate. A linearized analysis shows that the stability of these integration algorithms are determined by the eigenvalues of the local velocity tensor. Thus, the accuracy and stability of the methods are interpreted with concepts typically used in critical point theory. This paper shows how integration schemes can lead to erroneous classification of critical points when the timestep is finite and fixed. For steady velocity fields, we demonstrate that timesteps outside of the relative stability region can lead to similar integration errors. From this analysis, guidelines for accurate timestep sizing are suggested for both steady and unsteady flows. In particular, using simulation data for the unsteady flow around a tapered cylinder, we show that accurate particle path integration requires timesteps which are at most on the order of the physical timescale of the flow
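
    A minimal sketch of a multistage scheme of the kind analyzed above: a classical fourth-order Runge-Kutta step advancing a particle position through a time-dependent velocity field. The toy velocity field is a placeholder, not one of the data sets used in the paper.

```python
import numpy as np

def rk4_path_step(x, t, dt, velocity):
    """Advance a particle position x by one RK4 step through velocity(x, t)."""
    k1 = velocity(x, t)
    k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(x + dt * k3, t + dt)
    return x + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# toy unsteady velocity field: a rotating flow whose rate oscillates in time
def velocity(x, t):
    omega = 1.0 + 0.5 * np.sin(t)
    return np.array([-omega * x[1], omega * x[0], 0.1])

x, t, dt = np.array([1.0, 0.0, 0.0]), 0.0, 0.01
for _ in range(1000):
    x = rk4_path_step(x, t, dt, velocity)
    t += dt
```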

  7. Ripple induced trapped particle loss in tokamaks

    International Nuclear Information System (INIS)

    White, R.B.

    1996-05-01

    The threshold for stochastic transport of high energy trapped particles in a tokamak due to toroidal field ripple is calculated by explicit construction of primary resonances, and a numerical examination of the route to chaos. Critical field ripple amplitude is determined for loss. The expression is given in magnetic coordinates and makes no assumptions regarding shape or up-down symmetry. An algorithm is developed including the effects of prompt axisymmetric orbit loss, ripple trapping, convective banana flow, and stochastic ripple loss, which gives accurate ripple loss predictions for representative Tokamak Fusion Test Reactor and International Thermonuclear Experimental Reactor equilibria. The algorithm is extended to include the effects of collisions and drag, allowing rapid estimation of alpha particle loss in tokamaks

  8. Variance analysis of the Monte-Carlo perturbation source method in inhomogeneous linear particle transport problems

    International Nuclear Information System (INIS)

    Noack, K.

    1982-01-01

    The perturbation source method may be a powerful Monte-Carlo means to calculate small effects in a particle field. In a preceding paper we formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles, and draw conclusions on how to improve this method.

  9. Solitary Model of the Charge Particle Transport in Collisionless Plasma

    International Nuclear Information System (INIS)

    Simonchik, L.V.; Trukhachev, F.M.

    2006-01-01

    A one-dimensional MHD solitary-wave model of charged particle transport in plasma is developed. It is shown that the self-consistent electric field of ion-acoustic solitons can displace charged particles in space, which can be a source of local electric current generation. The displacement is of the order of a few Debye lengths. It is shown that the current associated with a soliton cascade has a pulsating nature with a DC component. Methods for verifying the developed theory in dusty plasma are proposed.

  10. An Algorithm for the Mixed Transportation Network Design Problem.

    Science.gov (United States)

    Liu, Xinyu; Chen, Qun

    2016-01-01

    This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical programming with equilibrium constraint (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; then, the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately.
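
    A toy sketch of the alternating idea behind DDIA: fix the discrete link-addition decisions while optimizing the continuous variables, then fix the continuous variables while searching the discrete ones, and repeat until the objective stops improving. The objective and the two sub-solvers below are simple placeholders, not the MNDP/user-equilibrium formulation.

```python
import itertools
import numpy as np

def objective(x, y):
    """Toy stand-in for the network performance measure (lower is better)."""
    return float(np.sum((x - y) ** 2) + 0.3 * np.sum(y) + 0.1 * np.sum(x))

def solve_continuous(y, lo=0.0, hi=2.0, n_grid=41):
    """Grid search over each continuous variable with the discrete decisions fixed
    (a crude stand-in for a CNDP solver)."""
    grid = np.linspace(lo, hi, n_grid)
    x = np.zeros_like(y, dtype=float)
    for i in range(len(y)):
        vals = []
        for g in grid:
            xi = x.copy()
            xi[i] = g
            vals.append(objective(xi, y))
        x[i] = grid[int(np.argmin(vals))]
    return x

def solve_discrete(x, n_links):
    """Enumerate the binary link-addition decisions with the continuous variables fixed
    (a crude stand-in for a DNDP solver)."""
    best_y, best_val = None, np.inf
    for bits in itertools.product([0, 1], repeat=n_links):
        y = np.array(bits, dtype=float)
        val = objective(x, y)
        if val < best_val:
            best_y, best_val = y, val
    return best_y

def ddia_like(n_links=4, max_iter=20, tol=1e-9):
    """Alternate between the two sub-problems until the objective stops improving."""
    y, prev = np.zeros(n_links), np.inf
    for _ in range(max_iter):
        x = solve_continuous(y)
        y = solve_discrete(x, n_links)
        val = objective(x, y)
        if prev - val < tol:
            break
        prev = val
    return x, y, val

x_opt, y_opt, best = ddia_like()
```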

  11. An Algorithm for the Mixed Transportation Network Design Problem.

    Directory of Open Access Journals (Sweden)

    Xinyu Liu

    This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical programming with equilibrium constraint (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; then, the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately.

  12. Hemodynamic and oxygen transport patterns for outcome prediction, therapeutic goals, and clinical algorithms to improve outcome. Feasibility of artificial intelligence to customize algorithms.

    Science.gov (United States)

    Shoemaker, W C; Patil, R; Appel, P L; Kram, H B

    1992-11-01

    A generalized decision tree or clinical algorithm for treatment of high-risk elective surgical patients was developed from a physiologic model based on empirical data. First, a large data bank was used to do the following: (1) describe temporal hemodynamic and oxygen transport patterns that interrelate cardiac, pulmonary, and tissue perfusion functions in survivors and nonsurvivors; (2) define optimal therapeutic goals based on the supranormal oxygen transport values of high-risk postoperative survivors; (3) compare the relative effectiveness of alternative therapies in a wide variety of clinical and physiologic conditions; and (4) to develop criteria for titration of therapy to the endpoints of the supranormal optimal goals using cardiac index (CI), oxygen delivery (DO2), and oxygen consumption (VO2) as proxy outcome measures. Second, a general purpose algorithm was generated from these data and tested in preoperatively randomized clinical trials of high-risk surgical patients. Improved outcome was demonstrated with this generalized algorithm. The concept that the supranormal values represent compensations that have survival value has been corroborated by several other groups. We now propose a unique approach to refine the generalized algorithm to develop customized algorithms and individualized decision analysis for each patient's unique problems. The present article describes a preliminary evaluation of the feasibility of artificial intelligence techniques to accomplish individualized algorithms that may further improve patient care and outcome.

  13. An Image Filter Based on Shearlet Transformation and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2015-01-01

    Digital images are always polluted by noise, which makes data postprocessing difficult. To remove noise while preserving as much image detail as possible, this paper proposes an image filter algorithm which combines the merits of the Shearlet transformation and the particle swarm optimization (PSO) algorithm. Firstly, we use the classical Shearlet transform to decompose the noisy image into many subwavelets at multiple scales and orientations. Secondly, we assign a weighting factor to each of the obtained subwavelets. Then, using the classical inverse Shearlet transform, we obtain a composite image composed of those weighted subwavelets. After that, we design a fast, rough evaluation method to estimate the noise level of the new image; using this measure as the fitness, we adopt PSO to find the optimal weighting factors. After many iterations, the optimal factors and the inverse Shearlet transform yield the best denoised image. Experimental results show that the proposed algorithm eliminates noise effectively and yields a good peak signal-to-noise ratio (PSNR).
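
    An illustrative sketch of the weighting step described above: a noisy image is split into a few coefficient bands (a crude stand-in for the Shearlet subwavelets), a candidate weight vector is applied before recombination, and a rough noise measure serves as the fitness a PSO would minimize. The decomposition, reconstruction and noise measure below are all illustrative assumptions, not the paper's transform.

```python
import numpy as np

def toy_decompose(img):
    """Stand-in decomposition: one smooth band plus two directional detail bands."""
    smooth = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                     + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    detail_h = img - 0.5 * (np.roll(img, 1, 1) + np.roll(img, -1, 1))  # horizontal detail
    detail_v = img - 0.5 * (np.roll(img, 1, 0) + np.roll(img, -1, 0))  # vertical detail
    return [smooth, detail_h, detail_v]

def toy_reconstruct(bands, weights):
    """Weighted recombination of the bands; with unit weights it reproduces the input exactly."""
    smooth, detail_h, detail_v = bands
    w0, w1, w2 = weights
    return w0 * smooth + 0.5 * (w1 * detail_h + w2 * detail_v)

def noise_level(img):
    """Rough noise estimate: mean absolute Laplacian response."""
    lap = 4 * img - (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                     + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return float(np.mean(np.abs(lap)))

def fitness(weights, bands):
    """Fitness a PSO (such as the one sketched under record 8 above) would minimize."""
    return noise_level(toy_reconstruct(bands, weights))

rng = np.random.default_rng(0)
noisy = np.ones((64, 64)) + 0.1 * rng.standard_normal((64, 64))
bands = toy_decompose(noisy)
print(fitness([1.0, 1.0, 1.0], bands), fitness([1.0, 0.3, 0.3], bands))
```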

  14. A combination of genetic algorithm and particle swarm optimization method for solving traveling salesman problem

    Directory of Open Access Journals (Sweden)

    Keivan Borna

    2015-12-01

    Full Text Available The traveling salesman problem (TSP) is a well-established NP-complete problem, and many evolutionary techniques such as particle swarm optimization (PSO) are used to improve existing solutions to it. PSO is a method inspired by the social behavior of birds. In PSO, each member changes its position in the search space according to its own experience and the social experience of the whole swarm. In this paper, we combine the principles of PSO with the crossover operator of the genetic algorithm to propose a heuristic algorithm for solving the TSP more efficiently. Finally, experimental results for our algorithm on several instances from TSPLIB demonstrate the effectiveness of our method and show that our algorithm can achieve better results than other approaches.
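
    The sketch below illustrates one common way to hybridize PSO with a GA crossover for the TSP, in the spirit of the abstract above: each particle (a tour) moves toward its personal best and the global best via order crossover, with a random swap playing the role of a small velocity perturbation. The operators and parameter values are illustrative assumptions, not the authors' exact scheme.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(parent, guide):
    """Copy a random slice from `guide`, then fill the rest in `parent` order."""
    n = len(parent)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = guide[a:b]
    kept = set(guide[a:b])
    fill = [c for c in parent if c not in kept]
    j = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[j]
            j += 1
    return child

def pso_ga_tsp(dist, n_particles=30, iters=200):
    n = len(dist)
    swarm = [random.sample(range(n), n) for _ in range(n_particles)]
    pbest = list(swarm)
    gbest = min(swarm, key=lambda t: tour_length(t, dist))
    for _ in range(iters):
        for k, tour in enumerate(swarm):
            tour = order_crossover(tour, pbest[k])   # move toward personal best
            tour = order_crossover(tour, gbest)      # move toward global best
            i, j = random.sample(range(n), 2)        # small random "velocity"
            tour[i], tour[j] = tour[j], tour[i]
            swarm[k] = tour
            if tour_length(tour, dist) < tour_length(pbest[k], dist):
                pbest[k] = tour
        gbest = min(pbest, key=lambda t: tour_length(t, dist))
    return gbest, tour_length(gbest, dist)

# Example: 8 random cities on the unit square
pts = [(random.random(), random.random()) for _ in range(8)]
d = [[math.dist(p, q) for q in pts] for p in pts]
best_tour, best_len = pso_ga_tsp(d)
```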

  15. Application of State Quantization-Based Methods in HEP Particle Transport Simulation

    Science.gov (United States)

    Santi, Lucio; Ponieman, Nicolás; Jun, Soon Yung; Genser, Krzysztof; Elvira, Daniel; Castro, Rodrigo

    2017-10-01

    Simulation of particle-matter interactions in complex geometries is one of the main tasks in high energy physics (HEP) research. An essential aspect of it is accurate and efficient particle transport in a non-uniform magnetic field, which includes the handling of volume crossings within a predefined 3D geometry. Quantized State Systems (QSS) is a family of numerical methods that provides attractive features for particle transport, such as dense output (sequences of polynomial segments changing only according to accuracy-driven discrete events) and lightweight detection and handling of volume crossings (based on simple root-finding of polynomial functions). In this work we present a proof-of-concept performance comparison between a QSS-based standalone numerical solver and an application based on the Geant4 simulation toolkit with its default Runge-Kutta based adaptive step method. In a case study with a charged particle circulating in a vacuum (with interactions with matter turned off), in a uniform magnetic field, and crossing up to 200 volume boundaries twice per turn, simulation results showed speedups of up to 6 times in favor of QSS, while QSS was 10 times slower in the case with no volume boundaries.

  16. Inversion of particle size distribution by spectral extinction technique using the attractive and repulsive particle swarm optimization algorithm

    Directory of Open Access Journals (Sweden)

    Qi Hong

    2015-01-01

    Full Text Available The particle size distribution (PSD) plays an important role in environmental pollution detection and human health protection, for aerosols such as fog, haze and soot. In this study, the Attractive and Repulsive Particle Swarm Optimization (ARPSO) algorithm and the basic PSO were applied to retrieve the PSD. The spectral extinction technique, coupled with the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law, was employed to investigate the retrieval of the PSD. Three commonly used monomodal PSDs, i.e. the Rosin-Rammler (R-R) distribution, the normal (N-N) distribution and the logarithmic normal (L-N) distribution, were studied in the dependent model. Then, an optimal wavelength selection algorithm was proposed. To study the accuracy and robustness of the inverse results, some characteristic parameters were employed. The research revealed that ARPSO was more accurate and converged faster than the basic PSO, even in the presence of random measurement error. Moreover, the investigation also demonstrated that the inverse results obtained with four incident laser wavelengths were more accurate and robust than those obtained with two wavelengths. The research also found that increasing the interval between the selected incident laser wavelengths makes the inverse results more accurate, even in the presence of random error.
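
    The attractive/repulsive variant referred to above is commonly implemented as a diversity-guided PSO: when swarm diversity falls below a lower threshold the social and cognitive terms switch sign (repulsion), and they switch back once diversity recovers. The sketch below shows that mechanism for a generic objective; the thresholds and coefficients are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def arpso(f, bounds, n_particles=30, iters=300,
          w=0.7, c1=1.5, c2=1.5, d_low=5e-6, d_high=0.25):
    """Diversity-guided (attractive/repulsive) PSO sketch.
    `f` maps an array of shape (dim,) to a scalar; `bounds` is (lo, hi)."""
    lo, hi = map(np.asarray, bounds)
    diag = np.linalg.norm(hi - lo)
    x = lo + (hi - lo) * np.random.rand(n_particles, lo.size)
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pval.argmin()].copy()
    sign = 1.0                                   # +1 attraction, -1 repulsion
    for _ in range(iters):
        diversity = np.mean(np.linalg.norm(x - x.mean(axis=0), axis=1)) / diag
        if diversity < d_low:
            sign = -1.0                          # swarm has collapsed: repel
        elif diversity > d_high:
            sign = 1.0                           # diverse enough: attract again
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = w * v + sign * (c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()
```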

  17. Parallelization of particle transport using Intel® TBB

    International Nuclear Information System (INIS)

    Apostolakis, J; Brun, R; Carminati, F; Gheata, A; Wenzel, S; Belogurov, S; Ovcharenko, E

    2014-01-01

    One of the current challenges in HEP computing is the development of particle propagation algorithms capable of efficiently using all performance aspects of modern computing devices. The Geant-Vector project at CERN has recently introduced an approach in this direction. This paper describes the implementation of a similar workflow using the Intel® Threading Building Blocks (Intel® TBB) library. This approach is intended to overcome the potential bottleneck of having a single dispatcher on many-core architectures and to achieve better scalability compared with the initial pthreads-based version.

  18. FPGA Implementation of an Efficient Algorithm for the Calculation of Charged Particle Trajectories in Cosmic Ray Detectors

    Science.gov (United States)

    Villar, Xabier; Piso, Daniel; Bruguera, Javier D.

    2014-02-01

    This paper presents an FPGA implementation of a previously published algorithm for the reconstruction of cosmic ray trajectories and the determination of the time of arrival and velocity of the particles. The accuracy and precision issues of the algorithm have been analyzed to propose a suitable implementation. Thus, a 32-bit fixed-point format has been used for the representation of the data values. Moreover, the dependencies among the different operations have been taken into account to obtain a highly parallel and efficient hardware implementation. The final hardware architecture requires 18 cycles to process each particle and has been exhaustively simulated to validate all the design decisions. The architecture has been mapped onto different commercial FPGAs, with operating frequencies ranging from 300 MHz to 1.3 GHz depending on the FPGA being used. Consequently, the number of particle trajectories processed per second is between 16 million and 72 million. This high throughput shows that the proposed FPGA implementation might also be used in high-rate environments such as those found in particle and nuclear physics experiments.

  19. Modeling Solar Energetic Particle Transport near a Wavy Heliospheric Current Sheet

    Science.gov (United States)

    Battarbee, Markus; Dalla, Silvia; Marsh, Mike S.

    2018-02-01

    Understanding the transport of solar energetic particles (SEPs) from acceleration sites at the Sun into interplanetary space and to the Earth is an important question for forecasting space weather. The interplanetary magnetic field (IMF), with two distinct polarities and a complex structure, governs energetic particle transport and drifts. We analyze for the first time the effect of a wavy heliospheric current sheet (HCS) on the propagation of SEPs. We inject protons close to the Sun and propagate them by integrating fully 3D trajectories within the inner heliosphere in the presence of weak scattering. We model the HCS position using fits based on neutral lines of magnetic field source surface maps (SSMs). We map 1 au proton crossings, which show efficient transport in longitude via HCS, depending on the location of the injection region with respect to the HCS. For HCS tilt angles around 30°–40°, we find significant qualitative differences between A+ and A‑ configurations of the IMF, with stronger fluences along the HCS in the former case but with a distribution of particles across a wider range of longitudes and latitudes in the latter. We show how a wavy current sheet leads to longitudinally periodic enhancements in particle fluence. We show that for an A+ IMF configuration, a wavy HCS allows for more proton deceleration than a flat HCS. We find that A‑ IMF configurations result in larger average fluences than A+ IMF configurations, due to a radial drift component at the current sheet.

  20. Parallel Implementation of Isothermal and Isoenergetic Dissipative Particle Dynamics using Shardlow-like Splitting Algorithms

    Czech Academy of Sciences Publication Activity Database

    Larentzos, J.P.; Brennan, J.K.; Moore, J.D.; Lísal, Martin; Mattson, W.D.

    2014-01-01

    Roč. 185, č. 7 (2014), s. 1987-1998 ISSN 0010-4655 Grant - others:ARL(US) W911NF-10-2-0039 Institutional support: RVO:67985858 Keywords : dissipative particle dynamics * shardlow splitting algorithm * numerical integration Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.112, year: 2014

  1. Collective transport of Lennard–Jones particles through one-dimensional periodic potentials

    International Nuclear Information System (INIS)

    He Jian-hui; Wen Jia-le; Chen Pei-rong; Zheng Dong-qin; Zhong Wei-rong

    2017-01-01

    The surrounding medium in which transport occurs contains various kinds of fields, such as particle potentials and external potentials. An important question is how these elements act, and how position and momentum are redistributed during diffusion under such conditions. To enrich Fick's law, ordinary non-equilibrium statistical physics can be used to understand this complex process. This study discusses particle transport in a one-dimensional channel under external potential fields. Two kinds of potentials, a potential well and a barrier, which leave the total potential unchanged, are built into the diffusion process. Quite distinct phenomena arise from the different one-dimensional periodic potentials. By combining a Monte Carlo method with molecular dynamics, we explore in detail how an external potential field affects transport, using a piecewise (subsection) statistical method. In addition, the Maxwell velocity distribution is confirmed under the assumption of local equilibrium. The simple model rests on the key idea of relating the flux to sectional statistics of position and momentum, and it could serve as a reference for similar transport problems. (rapid communication)
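
    As a simple illustration of this kind of setup (not the authors' many-particle Lennard-Jones model), the sketch below integrates the overdamped Langevin dynamics of a single Brownian particle in a one-dimensional periodic potential and estimates an effective diffusion coefficient from independent runs. The potential form, parameters, and units are assumptions made only for the example.

```python
import numpy as np

def brownian_in_periodic_potential(u0=1.0, period=1.0, kT=1.0, gamma=1.0,
                                   dt=1e-3, n_steps=50_000, seed=0):
    """Overdamped Langevin dynamics of one Brownian particle in
    U(x) = u0 * sin(2*pi*x/period); returns the trajectory."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi / period
    x = np.empty(n_steps)
    x[0] = 0.0
    noise = np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal(n_steps - 1)
    for i in range(1, n_steps):
        force = -u0 * k * np.cos(k * x[i - 1])        # force = -dU/dx
        x[i] = x[i - 1] + force * dt / gamma + noise[i - 1]
    return x

# Crude effective-diffusion estimate from independent trajectories (x(0) = 0)
finals = np.array([brownian_in_periodic_potential(seed=s)[-1] for s in range(20)])
t_total = 50_000 * 1e-3
d_eff = np.mean(finals ** 2) / (2.0 * t_total)
```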

  2. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    Science.gov (United States)

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.

  3. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Zhiwei Ye

    2015-01-01

    Full Text Available Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.

  4. Particle swarm optimization algorithm based low cost magnetometer calibration

    Science.gov (United States)

    Ali, A. S.; Siddharth, S., Syed, Z., El-Sheimy, N.

    2011-12-01

    Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a microprocessor, and provide inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low-cost sensors are corrupted by several errors, including manufacturing defects and external electro-magnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high-accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the bias and scale factor of a low-cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The bias and scale factor errors estimated by the proposed algorithm improve the heading accuracy, and the results are statistically significant. The approach can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
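
    A minimal sketch of this kind of calibration is shown below: per-axis bias and scale factor are chosen by PSO so that the magnitude of the corrected measurements stays close to the known local field strength over all orientations. The six-parameter error model, search bounds, and PSO coefficients are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def calibrate_magnetometer(raw, field_norm=1.0, n_particles=40, iters=400, seed=0):
    """Estimate per-axis bias b and scale s so that |(raw - b) / s| is as close
    as possible to the known field magnitude for every sample; raw is (n, 3)."""
    rng = np.random.default_rng(seed)

    def cost(p):
        bias, scale = p[:3], p[3:]
        mag = np.linalg.norm((raw - bias) / scale, axis=1)
        return np.mean((mag - field_norm) ** 2)

    lo = np.array([-1.0, -1.0, -1.0, 0.5, 0.5, 0.5])   # assumed search bounds
    hi = np.array([1.0, 1.0, 1.0, 1.5, 1.5, 1.5])
    x = lo + (hi - lo) * rng.random((n_particles, 6))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([cost(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([cost(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest[:3], gbest[3:]          # estimated bias and scale factors

# Synthetic check: unit field in random directions, distorted by bias and scale
gen = np.random.default_rng(1)
dirs = gen.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
raw = dirs * np.array([0.9, 1.1, 1.05]) + np.array([0.2, -0.1, 0.05])
bias_est, scale_est = calibrate_magnetometer(raw)
```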

  5. Large Eddy Simulation of Transient Flow, Solidification, and Particle Transport Processes in Continuous-Casting Mold

    Science.gov (United States)

    Liu, Zhongqiu; Li, Linmin; Li, Baokuan; Jiang, Maofa

    2014-07-01

    The current study developed a coupled computational model to simulate the transient fluid flow, solidification, and particle transport processes in a slab continuous-casting mold. Transient flow of molten steel in the mold is calculated using large eddy simulation. An enthalpy-porosity approach is used for the analysis of solidification processes. The transport of bubbles and non-metallic inclusions inside the liquid pool is calculated using a Lagrangian approach based on the transient flow field. A criterion for particle entrapment in the solidified shell is developed using the user-defined functions of the FLUENT software (ANSYS, Inc., Canonsburg, PA). The predicted results of this model are compared with ultrasonic testing measurements of rolled steel plates and with water model experiments. The transient asymmetrical flow pattern inside the liquid pool exhibits quite satisfactory agreement with the corresponding measurements. The predicted complex instantaneous velocity field is composed of various small recirculation zones and multiple vortices. The transport of particles inside the liquid pool and the entrapment of particles in the solidified shell are not symmetric. The Magnus force can reduce the entrapment ratio of particles in the solidified shell, especially for smaller particles, but the effect is not pronounced. The Marangoni force can play an important role in controlling the motion of particles and noticeably increases the entrapment ratio of particles in the solidified shell.

  6. Giving peeps to my props: Using 3D printing to shed new light on particle transport in fractured rock.

    Science.gov (United States)

    Walsh, S. D.; Du Frane, W. L.; Vericella, J. J.; Aines, R. D.

    2014-12-01

    Smart tracers and smart proppants promise new methods for sensing and manipulating rock fractures. However, the correct use and interpretation of these technologies relies on accurate models of their transport. Even for less exotic particles, the factors controlling particle transport through fractures are poorly understood. In this presentation, we will describe ongoing research at Lawrence Livermore National Laboratory into the transport properties of particles in natural rock fractures. Using three dimensional printing techniques, we create clear-plastic reproductions of real-world fracture surfaces, thereby enabling direct observation of the particle movement. We will also discuss how particle tracking of dense particle packs can be further enhanced by using such specially tailored flow cells in combination with micro-encapsulated tracer particles. Experimental results investigating the transport behavior of smart tracers and proppants close to the neutrally buoyant limit will be presented and we will describe how data from these experiments can be used to improve large-scale models of particle transport in fractures. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  7. Enhancement of transport properties of a Brownian particle due to quantum effects: Smoluchowski limit

    International Nuclear Information System (INIS)

    Shit, Anindita; Chattopadhyay, Sudip; Chaudhuri, Jyotipratim Ray

    2012-01-01

    Graphical abstract: By invoking a physically motivated coordinate transformation in the quantum Smoluchowski equation, we present a transparent treatment for the determination of the effective diffusion coefficient and current of a quantum Brownian particle. Substantial enhancement in the efficiency of the diffusive transport is envisaged due to quantum correction effects. Highlights: ► Transport of a quantum Brownian particle in a periodic potential is addressed. ► The governing quantum Smoluchowski equation (QSE) includes state-dependent diffusion. ► A coordinate transformation is used to recast the QSE with constant diffusion. ► Transport properties increase in comparison with the corresponding classical result. ► This enhancement is purely a quantum effect. - Abstract: The transport properties of a quantum Brownian particle that interacts strongly with a bath (in which a typical damping constant far exceeds a characteristic frequency of the isolated system) under the influence of a tilted periodic potential have been studied by solving the quantum Smoluchowski equation (QSE). By invoking a physically motivated coordinate transformation in the QSE, we have presented a transparent treatment for the determination of the effective diffusion coefficient of a quantum Brownian particle and the current (the average stationary velocity). Substantial enhancement in the efficiency of the diffusive transport is envisaged due to quantum correction effects, but only if the bath temperature lies in an appropriate range of intermediate values. Our findings also confirm the results obtained in the classical case.

  8. Laboratory observations of sediment transport using combined particle image and tracking velocimetry (Conference Presentation)

    Science.gov (United States)

    Frank, Donya; Calantoni, Joseph

    2017-05-01

    Improved understanding of coastal hydrodynamics and morphology will lead to more effective mitigation measures that reduce fatalities and property damage caused by natural disasters such as hurricanes. We investigated sediment transport under oscillatory flow over flat and rippled beds with phase-separated stereoscopic Particle Image Velocimetry (PIV). Standard PIV techniques severely limit measurements at the fluid-sediment interface and do not allow for the observation of separate phases in multi-phase flow (e.g. sand grains in water). We have implemented phase-separated Particle Image Velocimetry by adding fluorescent tracer particles to the fluid in order to observe fluid flow and sediment transport simultaneously. While sand grains scatter 532 nm wavelength laser light, the fluorescent particles absorb 532 nm laser light and re-emit light at a wavelength of 584 nm. Optical long-pass filters with a cut-on wavelength of 550 nm were installed on two cameras configured to perform stereoscopic PIV to capture only the light emitted by the fluorescent tracer particles. A third high-speed camera was used to capture the light scattered by the sand grains allowing for sediment particle tracking via particle tracking velocimetry (PTV). Together, these overlapping, simultaneously recorded images provided sediment particle and fluid velocities at high temporal and spatial resolution (100 Hz sampling with 0.8 mm vector spacing for the 2D-3C fluid velocity field). Measurements were made under a wide range of oscillatory flows over flat and rippled sand beds. The set of observations allow for the investigation of the relative importance of pressure gradients and shear stresses on sediment transport.

  9. Development of a tracer transport option for the NAPSAC fracture network computer code

    International Nuclear Information System (INIS)

    Herbert, A.W.

    1990-06-01

    The NAPSAC computer code predicts groundwater flow through fractured rock using a direct fracture network approach. This paper describes the development of a tracer transport algorithm for the NAPSAC code. A very efficient particle-following approach is used, enabling tracer transport to be predicted through large fracture networks. The new algorithm is tested against three examples. These demonstrations confirm the accuracy of the code for simple networks, where there is an analytical solution to the transport problem, and illustrate the use of the computer code on a more realistic problem. (author)

  10. A discrete particle swarm optimization algorithm with local search for a production-based two-echelon single-vendor multiple-buyer supply chain

    Science.gov (United States)

    Seifbarghy, Mehdi; Kalani, Masoud Mirzaei; Hemmati, Mojtaba

    2016-03-01

    This paper formulates a two-echelon single-producer multi-buyer supply chain model in which a single product is produced and transported to the buyers by the producer. The producer and the buyers apply a vendor-managed inventory mode of operation. It is assumed that the producer applies an economic production quantity policy, which implies a constant production rate at the producer. The operational parameters of each buyer are sales quantity, sales price and production rate. The channel profit of the supply chain and the contract price between the producer and each buyer are determined by the values of the operational parameters. Since the model belongs to the class of nonlinear integer programs, we use a discrete particle swarm optimization algorithm (DPSO) to solve the addressed problem; the performance of the DPSO is compared with that of two well-known heuristics, namely the genetic algorithm and simulated annealing. A number of examples are provided to verify the model and assess the performance of the proposed heuristics. Experimental results indicate that DPSO outperforms the rival heuristics with respect to several comparison metrics.

  11. Simulations of reactive transport and precipitation with smoothed particle hydrodynamics

    Science.gov (United States)

    Tartakovsky, Alexandre M.; Meakin, Paul; Scheibe, Timothy D.; Eichler West, Rogene M.

    2007-03-01

    A numerical model based on smoothed particle hydrodynamics (SPH) was developed for reactive transport and mineral precipitation in fractured and porous materials. Because of its Lagrangian particle nature, SPH has several advantages for modeling Navier-Stokes flow and reactive transport, including: (1) in a Lagrangian framework there is no non-linear term in the momentum conservation equation, so that accurate solutions can be obtained for momentum-dominated flows; and (2) complicated physical and chemical processes, such as surface growth due to precipitation/dissolution and chemical reactions, are easy to implement. In addition, SPH simulations explicitly conserve mass and linear momentum. The SPH solution of the diffusion equation with fixed and moving reactive solid-fluid boundaries was compared with analytical solutions, Lattice Boltzmann [Q. Kang, D. Zhang, P. Lichtner, I. Tsimpanogiannis, Lattice Boltzmann model for crystal growth from supersaturated solution, Geophysical Research Letters, 31 (2004) L21604] simulations and diffusion limited aggregation (DLA) [P. Meakin, Fractals, scaling and far from equilibrium. Cambridge University Press, Cambridge, UK, 1998] model simulations. To illustrate the capabilities of the model, coupled three-dimensional flow, reactive transport and precipitation in a fracture aperture with a complex geometry were simulated.
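
    The Lagrangian particle nature mentioned above rests on kernel-weighted sums over neighboring particles. The sketch below shows the most basic such operation, summation density with a cubic spline kernel in one dimension; it is a generic SPH building block, not code from the model described in this record.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1D cubic spline (M4) smoothing kernel W(r, h) with support 2h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)                      # 1D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """Summation density: rho_i = sum_j m_j * W(|x_i - x_j|, h)."""
    dx = positions[:, None] - positions[None, :]
    return (masses[None, :] * cubic_spline_kernel(dx, h)).sum(axis=1)

# Usage: uniformly spaced particles carrying total mass 1 on a unit line
x = np.linspace(0.0, 1.0, 51)
m = np.full_like(x, 1.0 / 50.0)
rho = sph_density(x, m, h=0.04)   # roughly 1 away from the domain ends
```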

  12. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered

  13. Particle modeling of transport of α-ray generated ion clusters in air

    International Nuclear Information System (INIS)

    Tong, Lizhu; Nanbu, Kenichi; Hirata, Yosuke; Izumi, Mikio; Miyamoto, Yasuaki; Yamaguchi, Hiromi

    2006-01-01

    A particle model is developed using the test-particle Monte Carlo method to study the transport properties of α-ray generated ion clusters in a flow of air. An efficient ion-molecule collision model is proposed to simulate the collisions between ions and air molecules. The simulations are performed for a steady state of ion transport in a circular pipe. In the steady state, the generation of ions is balanced by losses of ions such as absorption at the measuring sensor or pipe wall and disappearance by positive-negative ion recombination. The calculated ion current to the measuring sensor agrees well with previously measured data. (author)

  14. Particle transport analysis in lower hybrid current drive discharges of JT-60U

    International Nuclear Information System (INIS)

    Nagashima, K.; Ide, S.; Naito, O.

    1996-01-01

    Particle transport is modified in lower hybrid current drive discharges of JT-60U. The density profile becomes broad during the lower hybrid wave injection and the profile change depends on the injected wave spectrum. Particle transport coefficients (diffusion coefficient and profile peaking factor) were evaluated using gas-puff modulation experiments. The diffusion coefficient in the current drive discharges is about three times larger than in the ohmic discharges. The profile peaking factor decreases in the current drive discharges and the evaluated values are consistent with the measured density profiles. (author)

  15. Particle transport simulation for spaceborne, NaI gamma-ray spectrometers

    International Nuclear Information System (INIS)

    Dyer, C.S.; Truscott, P.R.; Sims, A.J.; Comber, C.; Hammond, N.D.A.

    1988-11-01

    Radioactivity induced in detectors by protons and secondary neutrons limits the sensitivity of spaceborne gamma-ray spectrometers. Three dimensional Monte Carlo transport codes have been employed to simulate particle transport of cosmic rays and inner-belt protons in various representations of the Gamma Ray Observatory Spacecraft and the Oriented Scintillation Spectrometer Experiment. Results are used to accurately quantify the contributions to the radioactive background, assess shielding options and examine the effect of detector and space-craft orientation in anisotropic trapped proton fluxes. (author)

  16. An Improved Quantum-Behaved Particle Swarm Optimization Algorithm with Elitist Breeding for Unconstrained Optimization.

    Science.gov (United States)

    Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing

    2015-01-01

    An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm toward a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected as the new personal best particles and global best particle to guide the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of better global search capability and faster convergence rate.

  17. Parallel Monte Carlo Particle Transport and the Quality of Random Number Generators: How Good is Good Enough?

    International Nuclear Information System (INIS)

    Procassini, R J; Beck, B R

    2004-01-01

    It might be assumed that use of a "high-quality" random number generator (RNG), producing a sequence of "pseudo random" numbers with a "long" repetition period, is crucial for producing unbiased results in Monte Carlo particle transport simulations. While several theoretical and empirical tests have been devised to check the quality (randomness and period) of an RNG, for many applications it is not clear what level of RNG quality is required to produce unbiased results. This paper explores the issue of RNG quality in the context of parallel Monte Carlo transport simulations in order to determine how "good" is "good enough". This study employs the MERCURY Monte Carlo code, which incorporates the CNPRNG library for the generation of pseudo-random numbers via linear congruential generator (LCG) algorithms. The paper outlines the usage of random numbers during parallel MERCURY simulations, and then describes the source and criticality transport simulations which comprise the empirical basis of this study. A series of calculations for each test problem in which the quality of the RNG (period of the LCG) is varied provides the empirical basis for determining the minimum repetition period which may be employed without producing a bias in the mean integrated results.
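
    For readers unfamiliar with the generator family being discussed, the sketch below is a minimal linear congruential generator. The modulus, multiplier, and increment are the well-known drand48/java.util.Random parameters, used here purely for illustration; they are not the parameters of the CNPRNG library.

```python
class LCG:
    """Minimal 48-bit linear congruential generator: x_{n+1} = (a*x_n + c) mod m.
    With these parameters (c odd, a-1 divisible by 4) the period is the full
    modulus 2**48, the kind of repetition period whose adequacy is studied above."""

    def __init__(self, seed=1):
        self.m = 2 ** 48
        self.a = 0x5DEECE66D     # multiplier used by drand48 / java.util.Random
        self.c = 0xB
        self.state = seed % self.m

    def uniform(self):
        """Return the next pseudo-random float in [0, 1)."""
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m

rng = LCG(seed=12345)
samples = [rng.uniform() for _ in range(5)]
```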

  18. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    Science.gov (United States)

    Thieberger, P.; Gassner, D.; Hulsart, R.; Michnoff, R.; Miller, T.; Minty, M.; Sorrell, Z.; Bartnik, A.

    2018-04-01

    A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.

  19. PHITS: Particle and heavy ion transport code system, version 2.23

    International Nuclear Information System (INIS)

    Niita, Koji; Matsuda, Norihiro; Iwamoto, Yosuke; Sato, Tatsuhiko; Nakashima, Hiroshi; Sakamoto, Yukio; Iwase, Hiroshi; Sihver, Lembit

    2010-10-01

    A Particle and Heavy-Ion Transport code System, PHITS, has been developed under the collaboration of JAEA (Japan Atomic Energy Agency), RIST (Research Organization for Information Science and Technology) and KEK (High Energy Accelerator Research Organization). PHITS can deal with the transport of all particles (nucleons, nuclei, mesons, photons, and electrons) over wide energy ranges, using several nuclear reaction models and nuclear data libraries. The geometrical configuration of the simulation can be set with GG (General Geometry) or CG (Combinatorial Geometry). Various quantities such as heat deposition, track length and production yields can be deduced from the simulation, using implemented estimator functions called 'tallies'. The code also has a function to draw 2D and 3D figures of the calculated results as well as the setup geometries, using the code ANGEL. Because of these features, PHITS has been widely used for various purposes such as the design of accelerator shielding, radiation therapy and space exploration. Recently, PHITS introduced an event generator for the particle transport part in the low-energy region. Thus, PHITS was completely rewritten to incorporate the event generator for neutron-induced reactions in the energy region below 20 MeV. Furthermore, several new tallies were incorporated for estimation of relative biological effects. This document provides a manual for the new PHITS. (author)

  20. A low-dispersion, exactly energy-charge-conserving semi-implicit relativistic particle-in-cell algorithm

    Science.gov (United States)

    Chen, Guangye; Luis, Chacon; Bird, Robert; Stark, David; Yin, Lin; Albright, Brian

    2017-10-01

    Leap-frog based explicit algorithms, either "energy-conserving" or "momentum-conserving", do not conserve energy discretely. Time-centered fully implicit algorithms can conserve discrete energy exactly, but introduce large dispersion errors in the light-wave modes, regardless of timestep sizes. This can lead to intolerable simulation errors where highly accurate light propagation is needed (e.g. laser-plasma interactions, LPI). In this study, we selectively combine the leap-frog and Crank-Nicolson methods to produce a low-dispersion, exactly energy- and charge-conserving PIC algorithm. Specifically, we employ the leap-frog method for the Maxwell equations, and the Crank-Nicolson method for the particle equations. Such an algorithm admits exact global energy conservation, exact local charge conservation, and preserves the dispersion properties of the leap-frog method for the light wave. The algorithm has been implemented in a code named iVPIC, based on the VPIC code developed at LANL. We will present numerical results that demonstrate the properties of the scheme with sample test problems (e.g. a Weibel instability run for 10^7 timesteps, and LPI applications).
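
    To make the particle half of such a scheme concrete, the sketch below applies a Crank-Nicolson (implicit midpoint) velocity update to a charged particle in a static magnetic field; because the rotation is time-centered, the speed, and hence the kinetic energy, is preserved to round-off. This is a generic illustration of a time-centered particle push, with assumed unit charge, mass, and field, not the iVPIC implementation.

```python
import numpy as np

def crank_nicolson_push(v, B, q=1.0, m=1.0, dt=0.1):
    """Implicit midpoint velocity update: solves
        v_new = v + (q*dt/m) * 0.5*(v + v_new) x B
    exactly, which preserves |v| (and hence kinetic energy) to round-off."""
    h = 0.5 * q * dt / m
    # Matrix C such that C @ u = u x B
    C = np.array([[0.0,  B[2], -B[1]],
                  [-B[2], 0.0,  B[0]],
                  [B[1], -B[0], 0.0]])
    I = np.eye(3)
    return np.linalg.solve(I - h * C, (I + h * C) @ v)

v = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    v = crank_nicolson_push(v, B)
print(np.linalg.norm(v))   # remains 1.0 up to round-off after 1000 steps
```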

  1. Concatenating algorithms for parallel numerical simulations coupling radiation hydrodynamics with neutron transport

    International Nuclear Information System (INIS)

    Mo Zeyao

    2004-11-01

    Multiphysics parallel numerical simulations are usually essential for simplifying research on complex physical phenomena in which several physics components are tightly coupled. How to concatenate those coupled physics components for a fully scalable parallel simulation is therefore very important. Meanwhile, three objectives should be balanced: the first is efficient data transfer among the simulations, and the second and third are efficient parallel execution and simultaneous development of the simulation codes. Two concatenating algorithms for multiphysics parallel numerical simulations coupling radiation hydrodynamics with neutron transport on unstructured grids are presented. The first algorithm, Fully Loosely Concatenation (FLC), focuses on the independence of code development and on independent running with optimal performance of each code. The second algorithm, Two Level Tightly Concatenation (TLTC), focuses on optimal tradeoffs among the three objectives above. Theoretical analyses of communication complexity and parallel numerical experiments on hundreds of processors on two parallel machines have shown that these two algorithms are efficient and can be generalized to other multiphysics parallel numerical simulations. In particular, algorithm TLTC is linearly scalable and has achieved optimal parallel performance. (authors)

  2. PS-FW: A Hybrid Algorithm Based on Particle Swarm and Fireworks for Global Optimization

    Science.gov (United States)

    Chen, Shuangqing; Wei, Lixin; Guan, Bing

    2018-01-01

    Particle swarm optimization (PSO) and fireworks algorithm (FWA) are two recently developed optimization methods which have been applied in various areas due to their simplicity and efficiency. However, when being applied to high-dimensional optimization problems, PSO algorithm may be trapped in the local optima owing to the lack of powerful global exploration capability, and fireworks algorithm is difficult to converge in some cases because of its relatively low local exploitation efficiency for noncore fireworks. In this paper, a hybrid algorithm called PS-FW is presented, in which the modified operators of FWA are embedded into the solving process of PSO. In the iteration process, the abandonment and supplement mechanism is adopted to balance the exploration and exploitation ability of PS-FW, and the modified explosion operator and the novel mutation operator are proposed to speed up the global convergence and to avoid prematurity. To verify the performance of the proposed PS-FW algorithm, 22 high-dimensional benchmark functions have been employed, and it is compared with PSO, FWA, stdPSO, CPSO, CLPSO, FIPS, Frankenstein, and ALWPSO algorithms. Results show that the PS-FW algorithm is an efficient, robust, and fast converging optimization method for solving global optimization problems. PMID:29675036

  3. Computational study of scattering of a zero-order Bessel beam by large nonspherical homogeneous particles with the multilevel fast multipole algorithm

    Science.gov (United States)

    Yang, Minglin; Wu, Yueqian; Sheng, Xinqing; Ren, Kuan Fang

    2017-12-01

    Computation of the scattering of shaped beams by large nonspherical particles is a challenge in both the optics and electromagnetics domains, since it concerns many research fields. In this paper, we report our new progress in the numerical computation of scattering diagrams. Our algorithm permits the calculation of scattering by a particle as large as 110 wavelengths, or a size parameter of 700. The particle can be transparent or absorbing, of arbitrary shape, smooth or with a sharp surface, such as Chebyshev particles or ice crystals. To illustrate the capacity of the algorithm, a zero-order Bessel beam is taken as the incident beam, and the scattering of ellipsoidal particles and Chebyshev particles is taken as an example. Some special phenomena have been revealed and examined. The scattering problem is formulated with the combined tangential formulation and solved iteratively with the aid of the multilevel fast multipole algorithm, which is well parallelized with the message passing interface on a distributed memory computer platform using a hybrid partitioning strategy. The numerical predictions are compared with the results of the rigorous method for a spherical particle to validate the accuracy of the approach. The scattering diagrams of large ellipsoidal particles with various parameters are examined. The effects of the aspect ratio, of the half-cone angle of the incident zero-order Bessel beam, and of the off-axis distance on the scattered intensity are studied. Scattering by an asymmetric Chebyshev particle with size parameter larger than 700 is also given to show the capability of the method for computing scattering by arbitrarily shaped particles.

  4. LC HCAL Absorber And Active Media Comparisons Using a Particle-Flow Algorithm

    International Nuclear Information System (INIS)

    Magill, Steve; Kuhlmann, S.

    2006-01-01

    We compared Stainless Steel (SS) to Tungsten (W) as absorber for the HCAL in simulation using single particles (pions) and a Particle-Flow Algorithm applied to e+e- -> Z -> qqbar events. We then used the PFA to evaluate the performance characteristics of a LC HCAL using W absorber and comparing scintillator and RPC as active media. The W/Scintillator HCAL performs better than the SS/Scintillator version due to finer λ_I sampling and narrower showers in the dense absorber. The W/Scintillator HCAL performs better than the W/RPC HCAL except in the number of unused hits in the PFA. Since this represents the confusion term in the PFA response, additional tuning and optimization of a W/RPC HCAL might significantly improve this HCAL configuration

  5. Particle transport in breathing quantum graph

    International Nuclear Information System (INIS)

    Matrasulov, D.U.; Yusupov, J.R.; Sabirov, K.K.; Sobirov, Z.A.

    2012-01-01

    Full text: Particle transport in nanoscale networks and discrete structures is of fundamental and practical importance. Such systems are usually modeled by so-called quantum graphs, which have attracted much attention in physics and mathematics during the past two decades [1-5]. Over this period quantum graphs have found numerous applications in modeling different discrete structures and networks in nanoscale and mesoscopic physics (e.g., see reviews [1-3]). Despite considerable progress in the study of particle dynamics, most of the problems treated deal with the unperturbed case, and the case of time-dependent perturbation has not yet been explored. In this work we treat particle dynamics for a quantum star graph with time-dependent bonds. In particular, we consider harmonically breathing quantum star graphs and the cases of monotonically contracting and expanding graphs. The latter can be solved exactly analytically. The edge boundaries are considered to be time-dependent, while the branching point is assumed to be fixed. The quantum dynamics of a particle in such graphs is studied by solving the Schrödinger equation with time-dependent boundary conditions given on a star graph. The time-dependence of the average kinetic energy is analyzed. The space-time evolution of a Gaussian wave packet is treated for a harmonically breathing star graph. It is found that for certain frequencies the energy is a periodic function of time, while for others it can be a non-monotonically growing function of time. Such a feature can be caused by possible synchronization of the particle's motion with the motion of the moving edges of the graph bonds. (authors) References: [1] Tsampikos Kottos and Uzy Smilansky, Ann. Phys. 76, 274 (1999). [2] Sven Gnutzmann and Uzy Smilansky, Adv. Phys. 55, 527 (2006). [3] S. Gnutzmann, J.P. Keating, F. Piotet, Ann. Phys. 325, 2595 (2010). [4] P. Exner, P. Seba, P. Stovicek, J. Phys. A: Math. Gen. 21, 4009 (1988). [5] J. Boman, P. Kurasov, Adv. Appl. Math. 35, 58 (2005)

  6. Drift-Alfven wave mediated particle transport in an elongated density depression

    International Nuclear Information System (INIS)

    Vincena, Stephen; Gekelman, Walter

    2006-01-01

    Cross-field particle transport due to drift-Alfven waves is measured in an elongated density depression within an otherwise uniform, magnetized helium plasma column. The depression is formed by drawing an electron current to a biased copper plate with cross-field dimensions of 28 x 0.24 ion sound-gyroradii ρ_s = c_s/ω_ci. The process of density depletion and replenishment via particle flux repeats in a quasiperiodic fashion for the duration of the current collection. The mode structure of the wave density fluctuations in the plane perpendicular to the background magnetic field is revealed using a two-probe correlation technique. The particle flux as a function of frequency is measured using a linear array of Langmuir probes, and the only significant transport occurs for waves with frequencies between 15% and 25% of the ion cyclotron frequency (measured in the laboratory frame) and with perpendicular wavelengths k_⊥ρ_s ∼ 0.7. The frequency-integrated particle flux is in rough agreement with observed increases in density in the center of the depletion as a function of time. The experiments are carried out in the Large Plasma Device (LAPD) [Gekelman et al., Rev. Sci. Instrum. 62, 2875 (1991)] at the Basic Plasma Science Facility located at the University of California, Los Angeles

  7. Coupled Particle Transport and Pattern Formation in a Nonlinear Leaky-Box Model

    Science.gov (United States)

    Barghouty, A. F.; El-Nemr, K. W.; Baird, J. K.

    2009-01-01

    Effects of particle-particle coupling on particle characteristics in nonlinear leaky-box type descriptions of the acceleration and transport of energetic particles in space plasmas are examined in the framework of a simple two-particle model based on the Fokker-Planck equation in momentum space. In this model, the two particles are assumed coupled via a common nonlinear source term. In analogy with a prototypical mathematical system of diffusion-driven instability, this work demonstrates that steady-state patterns with strong dependence on the magnetic turbulence but a rather weak one on the coupled particles attributes can emerge in solutions of a nonlinearly coupled leaky-box model. The insight gained from this simple model may be of wider use and significance to nonlinearly coupled leaky-box type descriptions in general.

  8. Particle integrity, sampling, and application of a DNA-tagged tracer for aerosol transport studies

    Energy Technology Data Exchange (ETDEWEB)

    Kaeser, Cynthia Jeanne [Michigan State Univ., East Lansing, MI (United States)

    2017-07-21

    Aerosols are an ever-present part of our daily environment and have extensive effects on both human and environmental health. Particles in the inhalable range (1-10 μm diameter) are of particular concern because their deposition in the lung can lead to a variety of illnesses including allergic reactions, viral or bacterial infections, and cancer. Understanding the transport of inhalable aerosols across both short and long distances is necessary to predict human exposures to aerosols. To assess the transport of hazardous aerosols, surrogate tracer particles are required to measure their transport through occupied spaces. These tracer particles must not only possess similar transport characteristics to those of interest but also be easily distinguished from the background at low levels and survive the environmental conditions of the testing environment. A previously-developed DNA-tagged particle (DNATrax), composed of food-grade sugar and a DNA oligonucleotide as a “barcode” label, shows promise as a new aerosol tracer. Herein, the use of DNATrax material is validated for use in both indoor and outdoor environments. Utilizing passive samplers made of materials commonly found in indoor environments followed by quantitative polymerase chain reaction (qPCR) assay for endpoint particle detection, particles detection was achieved up to 90 m from the aerosolization location and across shorter distances with high spatial resolution. The unique DNA label and PCR assay specificity were leveraged to perform multiple simultaneous experiments. This allowed the assessment of experimental reproducibility, a rare occurrence among aerosol field tests. To transition to outdoor testing, the solid material provides some protection of the DNA label when exposed to ultraviolet (UV) radiation, with 60% of the DNA remaining intact after 60 minutes under a germicidal lamp and the rate of degradation declining with irradiation time. Additionally, exposure of the DNATrax material using

  9. Evolutionary Hybrid Particle Swarm Optimization Algorithm for Solving NP-Hard No-Wait Flow Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Laxmi A. Bewoor

    2017-10-01

    Full Text Available The no-wait flow shop is a flow shop in which the scheduling of jobs is continuous and simultaneous through all machines, without waiting between any consecutive machines. Scheduling a no-wait flow shop requires finding an appropriate sequence of jobs, which in turn reduces the total processing time. The classical brute-force method of exploring possible schedules to improve the utilization of resources may become trapped in local optima, and the problem can hence be regarded as a typical NP-hard combinatorial optimization problem that requires finding a near-optimal solution with heuristic and metaheuristic techniques. This paper proposes an effective hybrid Particle Swarm Optimization (PSO) metaheuristic algorithm for solving no-wait flow shop scheduling problems with the objective of minimizing the total flow time of jobs. The Proposed Hybrid Particle Swarm Optimization (PHPSO) algorithm represents a solution by the random-key rule, which converts the continuous position values of particles into a discrete job permutation. The proposed algorithm initializes the population efficiently with the Nawaz-Enscore-Ham (NEH) heuristic technique and uses an evolutionary search guided by the mechanism of PSO, as well as simulated annealing based on a local neighborhood search, to avoid getting stuck in local optima and to provide an appropriate balance of global exploration and local exploitation. Extensive computational experiments are carried out based on Taillard's benchmark suite. Computational results and comparisons with existing metaheuristics show that the PHPSO algorithm outperforms the existing methods in terms of search quality and robustness for the problem considered. The improvement in solution quality is confirmed by statistical tests of significance.
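
    The random-key representation mentioned above maps a particle's continuous position vector to a job permutation by sorting, so the standard continuous PSO velocity and position updates can still be applied. A minimal sketch of that decoding step (illustrative only, not the PHPSO implementation):

```python
import numpy as np

def decode_random_keys(position):
    """Random-key rule: a particle's continuous position vector is mapped to a
    job permutation by sorting, so standard PSO updates remain applicable."""
    return np.argsort(position)          # job indices in increasing key order

# e.g. position [0.7, 0.1, 0.5, 0.9] schedules jobs in the order [1, 2, 0, 3]
print(decode_random_keys(np.array([0.7, 0.1, 0.5, 0.9])))
```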

  10. Transport of rare earth element-tagged soil particles in response to thunderstorm runoff.

    Science.gov (United States)

    Matisoff, G; Ketterer, M E; Wilson, C G; Layman, R; Whiting, P J

    2001-08-15

    The downslope transport of rare earth element-tagged soil particles remobilized during a spring thunderstorm was studied on both a natural prairie and an agricultural field in southwestern Iowa (U.S.A.). A technique was developed for tagging natural soils with the rare earth elements Eu, Tb, and Ho to approximately 1,000 ppm via coprecipitation with MnO2. Tagged material was replaced in target locations; surficial soil samples were collected following precipitation and runoff; and rare earth element concentrations were determined by inductively coupled plasma mass spectrometry. Diffusion and exponential models were applied to the concentration-distance data to determine particle transport distances. The results indicate that the concentration-distance data are well described by the diffusion model, but the exponential model does not simulate the rapid drop-off in concentrations near the tagged source. Using the diffusion model, calculated particle transport distances at all hillside locations and at both the cultivated and natural prairie sites were short, ranging from 3 to 73 cm during this single runoff event. This study successfully demonstrates a new tool for studying soil erosion.

  11. A benchmark study of the Signed-particle Monte Carlo algorithm for the Wigner equation

    Directory of Open Access Journals (Sweden)

    Muscato Orazio

    2017-12-01

    Full Text Available The Wigner equation represents a promising model for the simulation of electronic nanodevices, which allows the comprehension and prediction of quantum mechanical phenomena in terms of quasi-distribution functions. During these years, a Monte Carlo technique for the solution of this kinetic equation has been developed, based on the generation and annihilation of signed particles. This technique can be deeply understood in terms of the theory of pure jump processes with a general state space, producing a class of stochastic algorithms. One of these algorithms has been validated successfully by numerical experiments on a benchmark test case.

  12. Poisson-Box Sampling algorithms for three-dimensional Markov binary mixtures

    Science.gov (United States)

    Larmier, Coline; Zoia, Andrea; Malvagi, Fausto; Dumonteil, Eric; Mazzolo, Alain

    2018-02-01

    Particle transport in Markov mixtures can be addressed by the so-called Chord Length Sampling (CLS) methods, a family of Monte Carlo algorithms taking into account the effects of stochastic media on particle propagation by generating on-the-fly the material interfaces crossed by the random walkers during their trajectories. Such methods enable a significant reduction of computational resources as opposed to reference solutions obtained by solving the Boltzmann equation for a large number of realizations of random media. CLS solutions, which neglect correlations induced by the spatial disorder, are faster albeit approximate, and might thus show discrepancies with respect to reference solutions. In this work we propose a new family of algorithms (called 'Poisson Box Sampling', PBS) aimed at improving the accuracy of the CLS approach for transport in d-dimensional binary Markov mixtures. In order to probe the features of PBS methods, we will focus on three-dimensional Markov media and revisit the benchmark problem originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]: for these configurations we will compare reference solutions, standard CLS solutions and the new PBS solutions for scalar particle flux, transmission and reflection coefficients. PBS will be shown to perform better than CLS at the expense of a reasonable increase in computational time.
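
    As a concrete illustration of the Chord Length Sampling idea that PBS builds on, the sketch below estimates uncollided transmission through a one-dimensional slab filled with a two-material Markov mixture: the distances to the next material interface and to the next collision are both sampled on the fly from exponential distributions. The cross-sections, mean chord lengths, and purely absorbing assumption are illustrative choices, not the benchmark configurations discussed in the record.

```python
import random

def cls_transmission(L, sigma=(1.0, 0.1), chord=(0.5, 1.5),
                     n_histories=100_000, seed=1):
    """Chord Length Sampling estimate of uncollided transmission through a 1D
    slab of thickness L filled with a two-material Markov mixture.
    sigma[i] : total cross-section of material i (purely absorbing here)
    chord[i] : mean chord length of material i along the flight direction."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_histories):
        x = 0.0
        # start in material i with probability equal to its volume fraction
        mat = 0 if rng.random() < chord[0] / (chord[0] + chord[1]) else 1
        while True:
            d_interface = rng.expovariate(1.0 / chord[mat])
            d_collision = rng.expovariate(sigma[mat])
            if x + min(d_interface, d_collision) >= L:
                transmitted += 1            # escaped through the far face
                break
            if d_collision < d_interface:
                break                       # absorbed inside the slab
            x += d_interface
            mat = 1 - mat                   # crossed into the other material
    return transmitted / n_histories

print(cls_transmission(L=2.0))
```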

  13. Gyrokinetics Simulation of Energetic Particle Turbulence and Transport

    Energy Technology Data Exchange (ETDEWEB)

    Diamond, Patrick H.

    2011-09-21

    Progress in research during this year elucidated the physics of precession resonance and its interaction with radial scattering to form phase space density granulations. Momentum theorems for drift wave-zonal flow systems involving precession resonance were derived. These are directly generalizable to energetic particle modes. A novel nonlinear, subcritical growth mechanism was identified, which has now been verified by simulation. These results strengthen the foundation of our understanding of transport in burning plasmas

  14. Gyrokinetics Simulation of Energetic Particle Turbulence and Transport

    International Nuclear Information System (INIS)

    Diamond, Patrick H.

    2011-01-01

    Progress in research during this year elucidated the physics of precession resonance and its interaction with radial scattering to form phase space density granulations. Momentum theorems for drift wave-zonal flow systems involving precession resonance were derived. These are directly generalizable to energetic particle modes. A novel nonlinear, subcritical growth mechanism was identified, which has now been verified by simulation. These results strengthen the foundation of our understanding of transport in burning plasmas

  15. Modeling the ultrasonic testing echoes by a combination of particle swarm optimization and Levenberg–Marquardt algorithms

    International Nuclear Information System (INIS)

    Gholami, Ali; Honarvar, Farhang; Moghaddam, Hamid Abrishami

    2017-01-01

    This paper presents an accurate and easy-to-implement algorithm for estimating the parameters of the asymmetric Gaussian chirplet model (AGCM) used for modeling echoes measured in ultrasonic nondestructive testing (NDT) of materials. The proposed algorithm is a combination of particle swarm optimization (PSO) and Levenberg–Marquardt (LM) algorithms. PSO does not need an accurate initial guess and quickly converges to a reasonable output, while LM needs a good initial guess in order to provide an accurate output. In the combined algorithm, PSO is run first to provide a rough estimate of the output, and this result is subsequently input to the LM algorithm for more accurate estimation of the parameters. To apply the algorithm to signals with multiple echoes, the space alternating generalized expectation maximization (SAGE) algorithm is used. The proposed combined algorithm is robust and accurate. To examine the performance of the proposed algorithm, it is applied to a number of simulated echoes having various signal to noise ratios. The combined algorithm is also applied to a number of experimental ultrasonic signals. The results corroborate the accuracy and reliability of the proposed combined algorithm. (paper)
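
    A minimal sketch of the two-stage PSO-then-LM idea described above (not the authors' implementation): a hand-rolled PSO provides a rough parameter estimate, which is then refined with SciPy's Levenberg-Marquardt least-squares solver. The echo model below is a simplified symmetric Gaussian-windowed cosine rather than the full AGCM, and all parameter names, bounds and coefficients are illustrative assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      def echo(t, p):
          """Simplified Gaussian-windowed cosine echo (a stand-in for the full AGCM):
          p = [amplitude, arrival time, bandwidth, centre frequency, phase]."""
          amp, tau, alpha, f, phi = p
          return amp * np.exp(-alpha * (t - tau) ** 2) * np.cos(2 * np.pi * f * (t - tau) + phi)

      def pso_then_lm(t, y, bounds, n_particles=30, n_iter=200, seed=0):
          """Rough global search with a hand-rolled PSO, then local Levenberg-Marquardt
          refinement of the best particle (sketch of the two-stage idea only)."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          cost = lambda p: np.sum((echo(t, p) - y) ** 2)
          x = rng.uniform(lo, hi, size=(n_particles, lo.size))
          v = np.zeros_like(x)
          pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
          gbest = pbest[np.argmin(pbest_cost)]
          for _ in range(n_iter):
              r1, r2 = rng.random((2, *x.shape))
              v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              c = np.array([cost(p) for p in x])
              better = c < pbest_cost
              pbest[better], pbest_cost[better] = x[better], c[better]
              gbest = pbest[np.argmin(pbest_cost)]
          # LM refinement of the PSO estimate
          return least_squares(lambda p: echo(t, p) - y, gbest, method="lm").x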

  16. Characterization of molecule and particle transport through nanoscale conduits

    Science.gov (United States)

    Alibakhshi, Mohammad Amin

    Nanofluidic devices have been of great interest due to their applications in variety of fields, including energy conversion and storage, water desalination, biological and chemical separations, and lab-on-a-chip devices. Although these applications cross the boundaries of many different disciplines, they all share the demand for understanding transport in nanoscale conduits. In this thesis, different elusive aspects of molecule and particle transport through nanofluidic conduits are investigated, including liquid and ion transport in nanochannels, diffusion- and reaction-governed enzyme transport in nanofluidic channels, and finally translocation of nanobeads through nanopores. Liquid or solvent transport through nanoconfinements is an essential yet barely characterized component of any nanofluidic systems. In the first chapter, water transport through single hydrophilic nanochannels with heights down to 7 nm is experimentally investigated using a new measurement technique. This technique has been developed based on the capillary flow and a novel hybrid nanochannel design and is capable of characterizing flow in both single nanoconduits as well as nanoporous media. The presence of a 0.7 nm thick hydration layer on hydrophilic surfaces and its effect on increasing the hydraulic resistance of the nanochannels is verified. Next, ion transport in a new class of nanofluidic rectifiers is theoretically and experimentally investigated. These so called nanofluidic diodes are nanochannels with asymmetric geometries which preferentially allow ion transport in one direction. A nondimensional number as a function of electrolyte concentration, nanochannel dimensions, and surface charge is derived that summarizes the rectification behavior of this system. In the fourth chapter, diffusion- and reaction-governed enzyme transport in nanofluidic channels is studied and the theoretical background necessary for understanding enzymatic activity in nanofluidic channels is presented. A

  17. A review of the facile (FN) method in particle transport theory

    International Nuclear Information System (INIS)

    Garcia, R.D.M.

    1986-02-01

    The facile F_N method for solving particle transport problems is reviewed. The fundamentals of the method are summarized, recent developments are discussed and several applications of the method are described in detail. (author)

  18. The particle swarm optimization algorithm applied to nuclear systems surveillance test planning

    International Nuclear Information System (INIS)

    Siqueira, Newton Norat

    2006-12-01

    This work shows a new approach to solving availability maximization problems in electromechanical systems under periodic preventive scheduled tests. The approach uses a new optimization tool called PSO (Particle Swarm Optimization), developed by Kennedy and Eberhart (2001), integrated with a probabilistic safety analysis model. Two maintenance optimization problems are solved by the proposed technique: the first is a hypothetical electromechanical configuration and the second is a real case from a nuclear power plant (Emergency Diesel Generators). For both problems PSO is compared to a genetic algorithm (GA). In the experiments made, PSO was able to obtain results comparable to or even slightly better than those obtained by GA. Moreover, the PSO algorithm is simpler and converges faster, indicating that PSO is a good alternative for solving such kinds of problems. (author)

  19. Parallel algorithms for 2-D cylindrical transport equations of Eigenvalue problem

    International Nuclear Information System (INIS)

    Wei, J.; Yang, S.

    2013-01-01

    In this paper, aimed at the eigenvalue problem of the neutron transport equations in 2-D cylindrical geometry on an unstructured grid, a discrete scheme based on the Sn discrete ordinates method and discontinuous finite elements is built, and the parallel computation for the scheme is realized on MPI systems. Numerical experiments indicate that the designed parallel algorithm can reach nearly perfect speedup and has good practicality and scalability. (authors)

  20. Faster and more accurate transport procedures for HZETRN

    International Nuclear Information System (INIS)

    Slaba, T.C.; Blattnig, S.R.; Badavi, F.F.

    2010-01-01

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ≤ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times

  1. Semiclassical transport of particles with dynamical spectral functions

    International Nuclear Information System (INIS)

    Cassing, W.; Juchem, S.

    2000-01-01

    The conventional transport of particles in the on-shell quasiparticle limit is extended to particles of finite lifetime by means of a spectral function A(X,P,M²) for a particle moving in an area of complex self-energy Σ^ret_X = Re Σ^ret_X − iΓ_X/2. Starting from the Kadanoff-Baym equations we derive, in first-order gradient expansion, equations of motion for test particles with respect to their time evolution in X, P and M². The off-shell propagation is demonstrated for a couple of model cases that simulate hadron-nucleus collisions. In the case of nucleus-nucleus collisions the imaginary part of the hadron self-energy Γ_X is determined dynamically by the local space-time dependent collision rate. A first application is presented for A+A reactions up to 95 A MeV, where the effects from the off-shell propagation of nucleons are discussed with respect to high energy proton spectra, high energy photon production as well as kaon yields in comparison to the available data from GANIL.

  2. ENERGETIC PARTICLE TRANSPORT ACROSS THE MEAN MAGNETIC FIELD: BEFORE DIFFUSION

    International Nuclear Information System (INIS)

    Laitinen, T.; Dalla, S.

    2017-01-01

    Current particle transport models describe the propagation of charged particles across the mean field direction in turbulent plasmas as diffusion. However, recent studies suggest that at short timescales, such as soon after solar energetic particle (SEP) injection, particles remain on turbulently meandering field lines, which results in nondiffusive initial propagation across the mean magnetic field. In this work, we use a new technique to investigate how the particles are displaced from their original field lines, and we quantify the parameters of the transition from field-aligned particle propagation along meandering field lines to particle diffusion across the mean magnetic field. We show that the initial decoupling of the particles from the field lines is slow, and particles remain within a Larmor radius from their initial meandering field lines for tens to hundreds of Larmor periods, for 0.1–10 MeV protons in turbulence conditions typical of the solar wind at 1 au. Subsequently, particles decouple from their initial field lines and after hundreds to thousands of Larmor periods reach time-asymptotic diffusive behavior consistent with particle diffusion across the mean field caused by the meandering of the field lines. We show that the typical duration of the prediffusive phase, hours to tens of hours for 10 MeV protons in 1 au solar wind turbulence conditions, is significant for SEP propagation to 1 au and must be taken into account when modeling SEP propagation in the interplanetary space.

  3. ENERGETIC PARTICLE TRANSPORT ACROSS THE MEAN MAGNETIC FIELD: BEFORE DIFFUSION

    Energy Technology Data Exchange (ETDEWEB)

    Laitinen, T.; Dalla, S., E-mail: tlmlaitinen@uclan.ac.uk [Jeremiah Horrocks Institute, University of Central Lancashire, Preston (United Kingdom)

    2017-01-10

    Current particle transport models describe the propagation of charged particles across the mean field direction in turbulent plasmas as diffusion. However, recent studies suggest that at short timescales, such as soon after solar energetic particle (SEP) injection, particles remain on turbulently meandering field lines, which results in nondiffusive initial propagation across the mean magnetic field. In this work, we use a new technique to investigate how the particles are displaced from their original field lines, and we quantify the parameters of the transition from field-aligned particle propagation along meandering field lines to particle diffusion across the mean magnetic field. We show that the initial decoupling of the particles from the field lines is slow, and particles remain within a Larmor radius from their initial meandering field lines for tens to hundreds of Larmor periods, for 0.1–10 MeV protons in turbulence conditions typical of the solar wind at 1 au. Subsequently, particles decouple from their initial field lines and after hundreds to thousands of Larmor periods reach time-asymptotic diffusive behavior consistent with particle diffusion across the mean field caused by the meandering of the field lines. We show that the typical duration of the prediffusive phase, hours to tens of hours for 10 MeV protons in 1 au solar wind turbulence conditions, is significant for SEP propagation to 1 au and must be taken into account when modeling SEP propagation in the interplanetary space.

  4. Portfolio management using value at risk: A comparison between genetic algorithms and particle swarm optimization

    NARCIS (Netherlands)

    V.A.F. Dallagnol (V. A F); J.H. van den Berg (Jan); L. Mous (Lonneke)

    2009-01-01

    In this paper, a comparison is presented of the application of particle swarm optimization and genetic algorithms to portfolio management, in a constrained portfolio optimization problem where no short sales are allowed. The objective function to be minimized is the value at risk

  5. Parallelization of a Monte Carlo particle transport simulation code

    Science.gov (United States)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language for improving code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem size, which is limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a short response time.

  6. Comparison of different chaotic maps in particle swarm optimization algorithm for long-term cascaded hydroelectric system scheduling

    International Nuclear Information System (INIS)

    He Yaoyao; Zhou Jianzhong; Xiang Xiuqiao; Chen Heng; Qin Hui

    2009-01-01

    The goal of this paper is to present a novel chaotic particle swarm optimization (CPSO) algorithm and to compare the efficiency of three one-dimensional chaotic maps within a symmetrical region for long-term cascaded hydroelectric system scheduling. The introduced chaotic maps improve the global search capability of the CPSO algorithm. Moreover, a piecewise linear interpolation function is employed to transform all constraints into restrictions on the upriver water level for maximizing the objective function. Numerical results and comparisons demonstrate the effectiveness and speed of the different algorithms on a practical hydro-system.
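
    A brief sketch of how a one-dimensional chaotic map can drive a PSO update (a generic CPSO construction; the specific maps and coefficients compared in the paper may differ): the chaotic sequence simply replaces the uniform random numbers of the standard velocity update. The logistic map and all numeric values below are illustrative assumptions.

      import numpy as np

      def logistic_map(n, x0=0.7, mu=4.0):
          """Generate n values of the logistic map x_{k+1} = mu * x_k * (1 - x_k),
          a one-dimensional chaotic map commonly used in CPSO variants."""
          seq = np.empty(n)
          x = x0
          for i in range(n):
              x = mu * x * (1.0 - x)
              seq[i] = x
          return seq

      def cpso_velocity(v, x, pbest, gbest, chaos1, chaos2, w=0.7, c1=1.5, c2=1.5):
          """One CPSO velocity update in which chaotic values replace the usual
          uniform random numbers r1, r2 of standard PSO."""
          return w * v + c1 * chaos1 * (pbest - x) + c2 * chaos2 * (gbest - x)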

  7. Consistency evaluation between EGSnrc and Geant4 charged particle transport in an equilibrium magnetic field.

    Science.gov (United States)

    Yang, Y M; Bednarz, B

    2013-02-21

    Following the proposal by several groups to integrate magnetic resonance imaging (MRI) with radiation therapy, much attention has been afforded to examining the impact of strong (on the order of a Tesla) transverse magnetic fields on photon dose distributions. The effect of the magnetic field on dose distributions must be considered in order to take full advantage of the benefits of real-time intra-fraction imaging. In this investigation, we compared the handling of particle transport in magnetic fields between two Monte Carlo codes, EGSnrc and Geant4, to analyze various aspects of their electromagnetic transport algorithms; both codes are well-benchmarked for medical physics applications in the absence of magnetic fields. A water-air-water slab phantom and a water-lung-water slab phantom were used to highlight dose perturbations near high- and low-density interfaces. We have implemented a method of calculating the Lorentz force in EGSnrc based on theoretical models in literature, and show very good consistency between the two Monte Carlo codes. This investigation further demonstrates the importance of accurate dosimetry for MRI-guided radiation therapy (MRIgRT), and facilitates the integration of a ViewRay MRIgRT system in the University of Wisconsin-Madison's Radiation Oncology Department.
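
    For readers unfamiliar with how a transport step can account for the magnetic force, the sketch below shows a standard Boris-rotation push for a charged particle in a static magnetic field; it is a generic textbook scheme, not the specific Lorentz-force treatment implemented in EGSnrc or Geant4, and the example values are illustrative only.

      import numpy as np

      def boris_push(r, v, q_over_m, B, dt):
          """Advance position r and velocity v of a charged particle over one step dt
          in a static magnetic field B (no electric field), rotating the velocity as
          required by the Lorentz force q v x B."""
          t = 0.5 * q_over_m * dt * np.asarray(B, dtype=float)
          s = 2.0 * t / (1.0 + np.dot(t, t))
          v_prime = v + np.cross(v, t)
          v_new = v + np.cross(v_prime, s)
          return r + v_new * dt, v_new

      # e.g. a proton-like particle in a 1.5 T field (illustrative values):
      # r, v = boris_push(np.zeros(3), np.array([1.0e6, 0.0, 0.0]), 9.58e7, [0.0, 0.0, 1.5], 1.0e-9)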

  8. Consistency evaluation between EGSnrc and Geant4 charged particle transport in an equilibrium magnetic field

    International Nuclear Information System (INIS)

    Yang, Y M; Bednarz, B

    2013-01-01

    Following the proposal by several groups to integrate magnetic resonance imaging (MRI) with radiation therapy, much attention has been afforded to examining the impact of strong (on the order of a Tesla) transverse magnetic fields on photon dose distributions. The effect of the magnetic field on dose distributions must be considered in order to take full advantage of the benefits of real-time intra-fraction imaging. In this investigation, we compared the handling of particle transport in magnetic fields between two Monte Carlo codes, EGSnrc and Geant4, to analyze various aspects of their electromagnetic transport algorithms; both codes are well-benchmarked for medical physics applications in the absence of magnetic fields. A water–air–water slab phantom and a water–lung–water slab phantom were used to highlight dose perturbations near high- and low-density interfaces. We have implemented a method of calculating the Lorentz force in EGSnrc based on theoretical models in literature, and show very good consistency between the two Monte Carlo codes. This investigation further demonstrates the importance of accurate dosimetry for MRI-guided radiation therapy (MRIgRT), and facilitates the integration of a ViewRay MRIgRT system in the University of Wisconsin-Madison's Radiation Oncology Department. (note)

  9. Particle Tracking Model and Abstraction of Transport Processes

    Energy Technology Data Exchange (ETDEWEB)

    B. Robinson

    2004-10-21

    The purpose of this report is to document the abstraction model being used in total system performance assessment (TSPA) model calculations for radionuclide transport in the unsaturated zone (UZ). The UZ transport abstraction model uses the particle-tracking method that is incorporated into the finite element heat and mass model (FEHM) computer code (Zyvoloski et al. 1997 [DIRS 100615]) to simulate radionuclide transport in the UZ. This report outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the UZ at Yucca Mountain. In addition, methods for determining and inputting transport parameters are outlined for use in the TSPA for license application (LA) analyses. Process-level transport model calculations are documented in another report for the UZ (BSC 2004 [DIRS 164500]). Three-dimensional, dual-permeability flow fields generated to characterize UZ flow (documented by BSC 2004 [DIRS 169861]; DTN: LB03023DSSCP9I.001 [DIRS 163044]) are converted to make them compatible with the FEHM code for use in this abstraction model. This report establishes the numerical method and demonstrates the use of the model that is intended to represent UZ transport in the TSPA-LA. Capability of the UZ barrier for retarding the transport is demonstrated in this report, and by the underlying process model (BSC 2004 [DIRS 164500]). The technical scope, content, and management of this report are described in the planning document ''Technical Work Plan for: Unsaturated Zone Transport Model Report Integration'' (BSC 2004 [DIRS 171282]). Deviations from the technical work plan (TWP) are noted within the text of this report, as appropriate. The latest version of this document is being prepared principally to correct parameter values found to be in error due to transcription errors, changes in source data that were not captured in the report, calculation errors, and errors in interpretation of source data.

  10. Particle Tracking Model and Abstraction of Transport Processes

    International Nuclear Information System (INIS)

    Robinson, B.

    2004-01-01

    The purpose of this report is to document the abstraction model being used in total system performance assessment (TSPA) model calculations for radionuclide transport in the unsaturated zone (UZ). The UZ transport abstraction model uses the particle-tracking method that is incorporated into the finite element heat and mass model (FEHM) computer code (Zyvoloski et al. 1997 [DIRS 100615]) to simulate radionuclide transport in the UZ. This report outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the UZ at Yucca Mountain. In addition, methods for determining and inputting transport parameters are outlined for use in the TSPA for license application (LA) analyses. Process-level transport model calculations are documented in another report for the UZ (BSC 2004 [DIRS 164500]). Three-dimensional, dual-permeability flow fields generated to characterize UZ flow (documented by BSC 2004 [DIRS 169861]; DTN: LB03023DSSCP9I.001 [DIRS 163044]) are converted to make them compatible with the FEHM code for use in this abstraction model. This report establishes the numerical method and demonstrates the use of the model that is intended to represent UZ transport in the TSPA-LA. Capability of the UZ barrier for retarding the transport is demonstrated in this report, and by the underlying process model (BSC 2004 [DIRS 164500]). The technical scope, content, and management of this report are described in the planning document ''Technical Work Plan for: Unsaturated Zone Transport Model Report Integration'' (BSC 2004 [DIRS 171282]). Deviations from the technical work plan (TWP) are noted within the text of this report, as appropriate. The latest version of this document is being prepared principally to correct parameter values found to be in error due to transcription errors, changes in source data that were not captured in the report, calculation errors, and errors in interpretation of source data

  11. A hybrid artificial bee colony algorithm and pattern search method for inversion of particle size distribution from spectral extinction data

    Science.gov (United States)

    Wang, Li; Li, Feng; Xing, Jian

    2017-10-01

    In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied for recovery of the particle size distribution (PSD) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's ? function, which overcomes the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated by the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested by actual extinction measurements with real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, the proposed algorithm produces more accurate and robust inversion results while taking nearly the same CPU time as the ABC algorithm alone. The superiority of the ABC and PS hybridization strategy in reaching a better balance of estimation accuracy and computational effort increases its potential as an excellent inversion technique for reliable and efficient actual measurement of PSDs.

  12. FLUKA A multi-particle transport code (program version 2005)

    CERN Document Server

    Ferrari, A; Fassò, A; Ranft, Johannes

    2005-01-01

    This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner’s guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.

  13. Vectorization of Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Burns, P.J.; Christon, M.; Schweitzer, R.; Lubeck, O.M.; Wasserman, H.J.; Simmons, M.L.; Pryor, D.V.

    1989-01-01

    This paper reports that fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements of the vector and scalar implementations were modeled in a modified Amdahl's Law that accounts for additional data motion in the vector code. The performance and implementation strategy of the vector codes are related to architectural features of each machine. Speedups between fifteen and eighteen for Cyber 205/ETA-10 architectures, and about nine for CRAY X-MP/Y-MP architectures are observed. The best single processor execution time for the problem was 0.33 seconds on the ETA-10G, and 0.42 seconds on the CRAY Y-MP

  14. Decabrominated Diphenyl Ethers (BDE-209) in Chinese and Global Air: Levels, Gas/Particle Partitioning, and Long-Range Transport: Is Long-Range Transport of BDE-209 Really Governed by the Movement of Particles?

    Science.gov (United States)

    Li, Yi-Fan; Qiao, Li-Na; Ren, Nan-Qi; Sverko, Ed; Mackay, Donald; Macdonald, Robie W

    2017-01-17

    In this paper, we report air concentrations of BDE-209 in both gas- and particle-phases across China. The annual mean concentrations of BDE-209 were from below detection limit (BDL) to 77.0 pg·m⁻³ in the gas-phase and 1.06-728 pg·m⁻³ in the particle-phase. Among the nine PBDEs measured, BDE-209 is the dominant congener in the Chinese atmosphere in both gas and particle phases. We predicted the partitioning behavior of BDE-209 in air using our newly developed steady state equation, and the results matched the monitoring data worldwide very well. It was found that the logarithm of the partition quotient of BDE-209 is a constant, equal to -1.53 over the global ambient temperature range (from -50 to +50 °C). The gaseous fractions of BDE-209 in air depend on the concentration of total suspended particles (TSP). The most important conclusion derived from this study is that BDE-209, like other semivolatile organic compounds (SVOCs), cannot be sorbed entirely to atmospheric particles, and there is a significant amount of gaseous BDE-209 in the global atmosphere, which is subject to long-range atmospheric transport (LRAT). Therefore, it is not surprising that BDE-209 can enter the Arctic through LRAT mainly by air transport rather than by particle movement. This is a significant advancement in understanding the global transport process and the pathways entering the Arctic for chemicals with low volatility and high octanol-air partition coefficients, such as BDE-209.

  15. Impacts on particles and ozone by transport processes recorded at urban and high-altitude monitoring stations

    International Nuclear Information System (INIS)

    Nicolás, J.F.; Crespo, J.; Yubero, E.; Soler, R.; Carratalá, A.; Mantilla, E.

    2014-01-01

    In order to evaluate the influence of particle transport episodes on particle number concentration temporal trends at both urban and high-altitude (Aitana peak, 1558 m a.s.l.) stations, a simultaneous sampling campaign from October 2011 to September 2012 was performed. The monitoring stations are located in southeastern Spain, close to the Mediterranean coast. The annual average value of particle concentration obtained in the larger accumulation mode (size range 0.25–1 μm) at the mountain site, 55.0 ± 3.0 cm⁻³, was practically half that of the value obtained at the urban station (112.0 ± 4.0 cm⁻³). The largest difference between both stations was recorded during December 2011 and January 2012, when particles at the mountain station registered the lowest values. It was observed that during urban stagnant episodes, particle transport from urban sites to the mountain station could take place under specific atmospheric conditions. During these transports, the major particle transfer is produced in the 0.5–2 μm size range. The minimum difference between stations was recorded in summer, particularly in July 2012, which is most likely due to several particle transport events that affected only the mountain station. The particle concentration in the coarse mode was very similar at both monitoring sites, with the biggest difference being recorded during the summer months, 0.4 ± 0.1 cm⁻³ at the urban site and 0.9 ± 0.1 cm⁻³ at the Aitana peak in August 2012. Saharan dust outbreaks were the main factor responsible for these values during summer time. The regional station was affected more by these outbreaks, recording values of > 4.0 cm⁻³, than the urban site. This long-range particle transport from the Sahara desert also had an effect upon O₃ levels measured at the mountain station. During periods affected by Saharan dust outbreaks, ozone levels underwent a significant decrease (3–17%) with respect to its mean value.

  16. Research on Demand Prediction of Fresh Food Supply Chain Based on Improved Particle Swarm Optimization Algorithm

    OpenAIRE

    He Wang

    2015-01-01

    Demand prediction of supply chain is an important content and the first premise in supply management of different enterprises and has become one of the difficulties and hot research fields for the researchers related. The paper takes fresh food demand prediction for example and presents a new algorithm for predicting demand of fresh food supply chain. First, the working principle and the root causes of the defects of particle swarm optimization algorithm are analyzed in the study; Second, the...

  17. Effects of varying the step particle distribution on a probabilistic transport model

    International Nuclear Information System (INIS)

    Bouzat, S.; Farengo, R.

    2005-01-01

    The consequences of varying the step particle distribution on a probabilistic transport model, which captures the basic features of transport in plasmas and was recently introduced in Ref. 1 [B. Ph. van Milligen et al., Phys. Plasmas 11, 2272 (2004)], are studied. Different superdiffusive transport mechanisms generated by a family of distributions with algebraic decays (Tsallis distributions) are considered. It is observed that the possibility of changing the superdiffusive transport mechanism improves the flexibility of the model for describing different situations. The use of the model to describe the low (L) and high (H) confinement modes is also analyzed

  18. Computational transport phenomena of fluid-particle systems

    CERN Document Server

    Arastoopour, Hamid; Abbasi, Emad

    2017-01-01

    This book concerns the most up-to-date advances in computational transport phenomena (CTP), an emerging tool for the design of gas-solid processes such as fluidized bed systems. The authors examine recent work in kinetic theory and CTP and illustrate gas-solid processes’ many applications in the energy, chemical, pharmaceutical, and food industries. They also discuss the kinetic theory approach in developing constitutive equations for gas-solid flow systems and how it has advanced over the last decade as well as the possibility of obtaining innovative designs for multiphase reactors, such as those needed to capture CO2 from flue gases. Suitable as a concise reference and a textbook supplement for graduate courses, Computational Transport Phenomena of Gas-Solid Systems is ideal for practitioners in industries involved with the design and operation of processes based on fluid/particle mixtures, such as the energy, chemicals, pharmaceuticals, and food processing. Explains how to couple the population balance e...

  19. A kinetic theory for nonanalog Monte Carlo particle transport algorithms: Exponential transform with angular biasing in planar-geometry anisotropically scattering media

    International Nuclear Information System (INIS)

    Ueki, T.; Larsen, E.W.

    1998-01-01

    The authors show that Monte Carlo simulations of neutral particle transport in planar-geometry anisotropically scattering media, using the exponential transform with angular biasing as a variance reduction device, are governed by a new Boltzmann Monte Carlo (BMC) equation, which includes particle weight as an extra independent variable. The weight moments of the solution of the BMC equation determine the moments of the score and the mean number of collisions per history in the nonanalog Monte Carlo simulations. Therefore, the solution of the BMC equation predicts the variance of the score and the figure of merit in the simulation. Also, by (1) using an angular biasing function that is closely related to the ''asymptotic'' solution of the linear Boltzmann equation and (2) requiring isotropic weight changes at collisions, they derive a new angular biasing scheme. Using the BMC equation, they propose a universal ''safe'' upper limit of the transform parameter, valid for any type of exponential transform. In numerical calculations, they demonstrate that the behavior of the Monte Carlo simulations and the performance predicted by deterministically solving the BMC equation agree well, and that the new angular biasing scheme is always advantageous.
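
    A minimal sketch of the standard exponential transform (free-flight stretching only, without the angular-biasing refinement introduced above): the distance to collision is sampled from a biased cross section that depends on the direction cosine, and the particle weight is multiplied by the ratio of the analog to the biased path-length density. The symbols and the transform parameter p are generic assumptions, not the authors' notation.

      import math
      import random

      def biased_free_flight(sigma_t, mu, p):
          """Sample a distance to collision under the exponential transform and return
          (distance, weight multiplier).  sigma_t: true total cross section; mu: direction
          cosine with respect to the preferred direction; p: transform parameter
          (|p| < 1 keeps the biased cross section positive)."""
          sigma_b = sigma_t * (1.0 - p * mu)            # biased ("stretched") cross section
          s = -math.log(random.random()) / sigma_b      # flight distance from the biased pdf
          w_mult = (sigma_t / sigma_b) * math.exp(-(sigma_t - sigma_b) * s)  # true pdf / biased pdf
          return s, w_mult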

  20. Results and perspectives of particle transport measurements in gases in microgravity

    Science.gov (United States)

    Vedernikov, Andrei; Balapanov, Daniyar; Beresnev, Sergey

    2016-07-01

    Solid or liquid particles floating in a gas belong to dispersed systems, most often referred to as aerosols or dust clouds. They are widespread in nature and involve both environmental and technological issues. They attract growing attention in microgravity, particularly in complex plasmas, simulation of protoplanetary dust clouds, atmospheric aerosols, etc. Brownian random walk and the motion of particles under gravity and in electrostatic and magnetic fields are well defined. We present a survey showing that the quantitative description of a vast variety of other types of motion is much less accurate, often known only in a limited region of parameters, sometimes described by contradictory models, and poorly verified experimentally. This is true even for the most extensively investigated transport phenomena, thermophoresis and photophoresis, not to mention diffusiophoresis, gravito-photophoresis, various other types of particle motion driven by physicochemical transformation and accommodation peculiarities at the particle-gas interface, and combinations of different processes. The number of publications grows very quickly; those dealing with thermophoresis alone exceeded 300 in 2015. Hence, there is a strong need for high quality experimental data on particle transport properties, with growing interest in expanding the scope to non-isometric particles, agglomerates, dense clouds, and the interrelation with two-phase flow dynamics. In most cases, the accuracy and sometimes the very possibility of the measurement is limited by the presence of gravity. Floating particles have a density considerably different from that of the gas. They sediment, often with gliding and tumbling, which perturbs the motion trajectory and the local hydrodynamic environment around the particles, all together complicating the definition of the response. Measurements at very high or very low Knudsen numbers (rarefied gas or too large particles) are of particular difficulty. Experiments assume creating a well-defined force, i

  1. Turbulent particle transport in streams: can exponential settling be reconciled with fluid mechanics?

    Science.gov (United States)

    McNair, James N; Newbold, J Denis

    2012-05-07

    Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances. Copyright © 2012 Elsevier Ltd. All rights reserved.
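
    For orientation, the far-field exponential settling discussed above can be written as a one-line model: the fraction of released particles still in suspension decays exponentially with distance, at a rate set by the settling velocity, the water depth and the mean velocity. The sketch below is a generic textbook form with illustrative parameter names, not the specific formulation of either the ESM or the LEM as used in the paper.

      import numpy as np

      def esm_suspended_fraction(x, v_settling, depth, velocity):
          """Far-field Exponential Settling Model: fraction of released particles still
          in suspension after travel distance x, with longitudinal settling rate
          k = v_settling / (velocity * depth)."""
          k = v_settling / (velocity * depth)
          return np.exp(-k * np.asarray(x, dtype=float))

      # e.g. esm_suspended_fraction(x=100.0, v_settling=1e-3, depth=0.3, velocity=0.2)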

  2. The design of the public transport lines with the use of the fast genetic algorithm

    Directory of Open Access Journals (Sweden)

    Aleksander Król

    2015-09-01

    Full Text Available Background: The growing role of public transport and the pressure of economic criteria require new optimization tools for the process of public transport planning. These problems are computationally very complex, so it is preferable to use various approximate methods that lead to a good solution within an acceptable time. Methods: One such method is the genetic algorithm, which mimics the processes of evolution and natural selection in nature. In this paper, different variants of the public transport line layout are subjected to artificial selection. The essence of the proposed approach is a simplified method of calculating the value of the fitness function for a single individual, which yields relatively short computation times even for large jobs. Results: It was shown that despite the introduced simplifications the quality of the results is not worsened. Using data obtained from KZK GOP (Communications Municipal Association of the Upper Silesian Industrial Region), the described algorithm was used to optimize the layout of the network of bus lines located within the borders of Katowice. Conclusion: The proposed algorithm was applied to a real, very complex public transportation network, and the possibility of a significant improvement of its efficiency was indicated. The obtained results give hope that the presented model, after some improvements, can form the basis of a scientific method and, with further development, find practical application.

  3. Transport and trapping of dust particles in a potential well created by inductively coupled diffused plasmas.

    Science.gov (United States)

    Choudhary, Mangilal; Mukherjee, S; Bandyopadhyay, P

    2016-05-01

    A versatile linear dusty (complex) plasma device is designed to study the transport and dynamical behavior of dust particles in a large volume. Diffused inductively coupled plasma is generated in the background of argon gas. A novel technique is used to introduce the dust particles in the main plasma by striking a secondary direct current glow discharge. These dust particles are found to get trapped in an electrostatic potential well, which is formed due to the combination of the ambipolar electric field caused by diffusive plasma and the field produced by the charged glass wall of the vacuum chamber. According to the requirements, the volume of the dust cloud can be controlled very precisely by tuning the plasma and discharge parameters. The present device can be used to address the underlying physics behind the transport of dust particles, self-excited dust acoustic waves, and instabilities. The detailed design of this device, plasma production and characterization, trapping and transport of the dust particle, and some of the preliminary experimental results are presented.

  4. The effect of load imbalances on the performance of Monte Carlo algorithms in LWR analysis

    International Nuclear Information System (INIS)

    Siegel, A.R.; Smith, K.; Romano, P.K.; Forget, B.; Felker, K.

    2013-01-01

    A model is developed to predict the impact of particle load imbalances on the performance of domain-decomposed Monte Carlo neutron transport algorithms. Expressions for upper bound performance “penalties” are derived in terms of simple machine characteristics, material characterizations and initial particle distributions. The hope is that these relations can be used to evaluate tradeoffs among different memory decomposition strategies in next generation Monte Carlo codes, and perhaps as a metric for triggering particle redistribution in production codes

  5. Experimental study of particle transport and density fluctuation in LHD

    International Nuclear Information System (INIS)

    Tanaka, K.; Morita, S.; Sanin, A.; Michael, C.; Kawahata, K.; Yamada, H.; Miyazawa, J.; Tokuzawa, T.; Akiyama, T.; Goto, M.; Ida, K.; Yoshinuma, M.; Narihara, K.; Yamada, I.; Yokoyama, M.; Masuzaki, S.; Morisaki, T.; Sakamoto, R.; Funaba, H.; Komori, A.; Vyacheslavov, L.N.; Murakami, S.; Wakasa, A.

    2005-01-01

    A variety of electron density (n_e) profiles have been observed in the Large Helical Device (LHD). The density profiles change dramatically with heating power and toroidal magnetic field (B_t) at the same line-averaged density. The particle transport coefficients, i.e., the diffusion coefficient (D) and convection velocity (V), are experimentally obtained from density modulation experiments in the standard configuration. The values of D and V are estimated separately for the core and the edge. The diffusion coefficients are a strong function of electron temperature (T_e) and are proportional to T_e^(1.7±0.9) in the core and T_e^(1.1±0.14) at the edge. The edge diffusion coefficients are proportional to B_t^(-2.08). It is found that the scaling of D at the edge is close to gyro-Bohm-like in nature. The existence of non-zero V is observed, and it is found that the electron temperature (T_e) gradient can drive particle convection. This is particularly clear in the core region. The convection velocity in the core region reverses direction from inward to outward as the T_e gradient increases. At the edge, the convection is inward directed in most cases of the present data set, and its value shows a modest tendency to be proportional to the T_e gradient while keeping the inward direction. However, the toroidal magnetic field also significantly affects the value and direction of V. The spectrum of density fluctuation changes with heating power, suggesting that it has an influence on particle transport. The peak wavenumber is around 0.1 times the inverse ion Larmor radius, as is expected from gyro-Bohm diffusion. The peaks of fluctuation intensity are localized at the plasma edge, where the density gradient becomes negative and diffusion contributes most to the particle flux. These results suggest a qualitative correlation of fluctuations with particle diffusion. (author)

  6. Routing and scheduling of hazardous materials shipments: algorithmic approaches to managing spent nuclear fuel transport

    International Nuclear Information System (INIS)

    Cox, R.G.

    1984-01-01

    Much controversy surrounds government regulation of routing and scheduling of Hazardous Materials Transportation (HMT). Increases in operating costs must be balanced against expected benefits from local HMT bans and curfews when promulgating or preempting HMT regulations. Algorithmic approaches for evaluating HMT routing and scheduling regulatory policy are described. A review of current US HMT regulatory policy is presented to provide a context for the analysis. Next, a multiobjective shortest path algorithm to find the set of efficient routes under conflicting objectives is presented. This algorithm generates all efficient routes under any partial ordering in a single pass through the network. Also, scheduling algorithms are presented to estimate the travel time delay due to HMT curfews along a route. Algorithms are presented assuming either deterministic or stochastic travel times between curfew cities and also possible rerouting to avoid such cities. These algorithms are applied to the case study of US highway transport of spent nuclear fuel from reactors to permanent repositories. Two data sets were used. One data set included the US Interstate Highway System (IHS) network with reactor locations, possible repository sites, and 150 heavily populated areas (HPAs). The other data set contained estimates of the population residing within 0.5 miles of the IHS in the Eastern US. Curfew delay is dramatically reduced by optimally scheduling departure times unless inter-HPA travel times are highly uncertain. Rerouting shipments to avoid HPAs is a less efficient approach to reducing delay.

  7. Two analytic transport equation solutions for particular cases of particle history

    International Nuclear Information System (INIS)

    Simovic, R.

    1997-01-01

    For anisotropic scattering and plane geometry, the linear transport equation for particles generated by a monodirectional unit source A(x,μ) = δ(x−0)δ(μ − μ₀), μ₀ > 0, can be stated in the form of an integral equation

  8. Electron cyclotron absorption in Tokamak plasmas in the presence of radial transport of particles

    International Nuclear Information System (INIS)

    Rosa, Paulo R. da S.; Ziebell, Luiz F.

    1998-01-01

    We use quasilinear theory to study the effects of radial particle transport on the electron cyclotron absorption coefficient of a current-carrying plasma, in a tokamak modeled as a plasma slab. Our numerical results indicate a significant modification in the profile of the electron cyclotron absorption coefficient when transport is taken into account, relative to the situation without transport. (author)

  9. Coupling fine particle and bedload transport in gravel-bedded streams

    Science.gov (United States)

    Park, Jungsu; Hunt, James R.

    2017-09-01

    Fine particles in the silt- and clay-size range are important determinants of surface water quality. Since fine particle loading rates are not unique functions of stream discharge this limits the utility of the available models for water quality assessment. Data from 38 minimally developed watersheds within the United States Geological Survey stream gauging network in California, USA reveal three lines of evidence that fine particle release is coupled with bedload transport. First, there is a transition in fine particle loading rate as a function of discharge for gravel-bedded sediments that does not appear when the sediment bed is composed of sand, cobbles, boulders, or bedrock. Second, the discharge at the transition in the loading rate is correlated with the initiation of gravel mobilization. Third, high frequency particle concentration and discharge data are dominated by clockwise hysteresis where rising limb discharges generally have higher concentrations than falling limb discharges. These three observations across multiple watersheds lead to a conceptual model that fine particles accumulate within the sediment bed at discharges less than the transition and then the gravel bed fluidizes with fine particle release at discharges above the transition discharge. While these observations were individually recognized in the literature, this analysis provides a consistent conceptual model based on the coupling of fine particle dynamics with filtration at low discharges and gravel bed fluidization at higher discharges.

  10. Particle transport model sensitivity on wave-induced processes

    Science.gov (United States)

    Staneva, Joanna; Ricker, Marcel; Krüger, Oliver; Breivik, Oyvind; Stanev, Emil; Schrum, Corinna

    2017-04-01

    Different effects of wind waves on the hydrodynamics in the North Sea are investigated using a coupled wave (WAM) and circulation (NEMO) model system. The terms accounting for the wave-current interaction are the Stokes-Coriolis force and the sea-state dependent momentum and energy fluxes. The role of the different Stokes drift parameterizations is investigated using a particle-drift model. Those particles can be considered as simple representations of either oil fractions or fish larvae. In ocean circulation models the momentum flux from the atmosphere, which is related to the wind speed, is passed directly to the ocean, and this is controlled by the drag coefficient. However, in the real ocean, the waves also play the role of a reservoir for momentum and energy, because different amounts of the momentum flux from the atmosphere are taken up by the waves. In the coupled model system the momentum transferred into the ocean model is estimated as the fraction of the total flux that goes directly to the currents plus the momentum lost from wave dissipation. Additionally, we demonstrate that the wave-induced Stokes-Coriolis force leads to a deflection of the current. During extreme events the Stokes velocity is comparable in magnitude to the current velocity. The resulting wave-induced drift is crucial for the transport of particles in the upper ocean. The performed sensitivity analyses demonstrate that the model skill depends on the chosen processes. The results are validated using surface drifters, ADCP, HF radar data and other in-situ measurements in different regions of the North Sea with a focus on the coastal areas. The use of a coupled model system reveals that the newly introduced wave effects are important for the drift-model performance, especially during extremes. Those effects cannot be neglected in search and rescue, oil-spill, biological material transport, or larva drift modelling.

  11. Load Balancing of Parallel Monte Carlo Transport Calculations

    International Nuclear Information System (INIS)

    Procassini, R J; O'Brien, M J; Taylor, J M

    2005-01-01

    The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle workload varies over the course of the simulation, this algorithm determines each cycle whether dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations.
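
    A toy sketch of the per-cycle decision described above (not the production algorithm): compare the expected time of the next cycle under the current particle distribution against a balanced distribution plus the cost of moving the excess particles. The linear cost model and parameter names are illustrative assumptions.

      def should_load_balance(particles_per_domain, comm_cost_per_particle, cost_per_particle=1.0):
          """Decide whether redistributing particles would shorten the next cycle.
          The cycle time is limited by the most loaded processor; balancing trades
          that peak against the cost of moving the excess particles."""
          n_proc = len(particles_per_domain)
          total = sum(particles_per_domain)
          t_unbalanced = max(particles_per_domain) * cost_per_particle
          moved = sum(max(0.0, n - total / n_proc) for n in particles_per_domain)
          t_balanced = (total / n_proc) * cost_per_particle + moved * comm_cost_per_particle
          return t_balanced < t_unbalanced

      # e.g. should_load_balance([900, 50, 50], comm_cost_per_particle=0.2) -> True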

  12. Implementing displacement damage calculations for electrons and gamma rays in the Particle and Heavy-Ion Transport code System

    Science.gov (United States)

    Iwamoto, Yosuke

    2018-03-01

    In this study, the Monte Carlo displacement damage calculation method in the Particle and Heavy-Ion Transport code System (PHITS) was improved to calculate displacements per atom (DPA) values due to irradiation by electrons (or positrons) and gamma rays. For the damage due to electrons and gamma rays, PHITS simulates electromagnetic cascades using the Electron Gamma Shower version 5 (EGS5) algorithm and calculates DPA values using the recoil energies and the McKinley-Feshbach cross section. A comparison of DPA values calculated by PHITS and the Monte Carlo assisted Classical Method (MCCM) reveals that they were in good agreement for gamma-ray irradiations of silicon and iron at energies that were less than 10 MeV. Above 10 MeV, PHITS can calculate DPA values not only for electrons but also for charged particles produced by photonuclear reactions. In DPA depth distributions under electron and gamma-ray irradiations, build-up effects can be observed near the target's surface. For irradiation of 90-cm-thick carbon by protons with energies of more than 30 GeV, the ratio of the secondary electron DPA values to the total DPA values is more than 10% and increases with an increase in incident energy. In summary, PHITS can calculate DPA values for all particles and materials over a wide energy range between 1 keV and 1 TeV for electrons, gamma rays, and charged particles and between 10⁻⁵ eV and 1 TeV for neutrons.
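
    For context, the number of displacements produced by a single recoil is often estimated with the Norgett-Robinson-Torrens (NRT) formula; the sketch below is that generic textbook expression, not the exact damage-energy partition implemented in PHITS, and the 40 eV threshold is only a typical illustrative value.

      def nrt_displacements(damage_energy_eV, threshold_eV=40.0):
          """Norgett-Robinson-Torrens estimate of the number of stable displacements
          produced by one recoil of given damage energy; threshold_eV is the material
          dependent displacement threshold (40 eV is a typical value for iron)."""
          if damage_energy_eV < threshold_eV:
              return 0.0
          if damage_energy_eV < 2.5 * threshold_eV:
              return 1.0
          return 0.8 * damage_energy_eV / (2.0 * threshold_eV)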

  13. Cell-centered particle weighting algorithm for PIC simulations in a non-uniform 2D axisymmetric mesh

    Science.gov (United States)

    Araki, Samuel J.; Wirz, Richard E.

    2014-09-01

    Standard area weighting methods for particle-in-cell simulations result in systematic errors on particle densities for a non-uniform mesh in cylindrical coordinates. These errors can be significantly reduced by using weighted cell volumes for density calculations. A detailed description on the corrected volume calculations and cell-centered weighting algorithm in a non-uniform mesh is provided. The simple formulas for the corrected volume can be used for any type of quadrilateral and/or triangular mesh in cylindrical coordinates. Density errors arising from the cell-centered weighting algorithm are computed for radial density profiles of uniform, linearly decreasing, and Bessel function in an adaptive Cartesian mesh and an unstructured mesh. For all the density profiles, it is shown that the weighting algorithm provides a significant improvement for density calculations. However, relatively large density errors may persist at outermost cells for monotonically decreasing density profiles. A further analysis has been performed to investigate the effect of the density errors in potential calculations, and it is shown that the error at the outermost cell does not propagate into the potential solution for the density profiles investigated.
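
    To illustrate the kind of geometry involved (a generic construction, not the paper's corrected-volume formulas), the sketch below computes the baseline volume of a quadrilateral cell in an axisymmetric (r, z) mesh via Pappus' theorem; a cell-centered density is then the sum of particle weights deposited in the cell divided by this volume.

      import numpy as np

      def axisymmetric_cell_volume(r, z):
          """Volume of a quadrilateral (r, z) cell rotated about the symmetry axis,
          via Pappus' theorem V = 2*pi*r_centroid*A (shoelace area and centroid)."""
          r = np.asarray(r, dtype=float)          # node radii, ordered around the cell
          z = np.asarray(z, dtype=float)          # node axial coordinates, same order
          cross = r * np.roll(z, -1) - np.roll(r, -1) * z
          a_signed = 0.5 * np.sum(cross)          # signed cross-sectional area
          r_centroid = np.sum((r + np.roll(r, -1)) * cross) / (6.0 * a_signed)
          return 2.0 * np.pi * abs(r_centroid * a_signed)

      # density = np.sum(weights_in_cell) / axisymmetric_cell_volume(r_nodes, z_nodes)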

  14. The role of colloids and suspended particles in radionuclide transport in the Canadian concept for nuclear fuel waste disposal

    International Nuclear Information System (INIS)

    Vilks, P.

    1994-02-01

    AECL Research is developing a concept for the permanent disposal of nuclear fuel waste in a deep engineered vault in plutonic rock of the Canadian Shield and is preparing an Environmental Impact Statement (EIS) to document its case for the acceptability of the disposal concept. This report, one in a series of supporting documents for the EIS, addresses the role of particles in radionuclide transport. It summarizes our studies of natural particles in groundwater and presents the arguments used to justify the omission of particle-facilitated transport in the geosphere model that is based on the Whiteshell Research Area (WRA) and used in the postclosure assessment study case. Because radiocolloids formed in the vault will not be able to migrate through the clay buffer, radiocolloid formation in the geosphere will be determined by the sorption of radionuclides onto particles in groundwater. These particles consist of typical fracture-lining minerals, such as clays, micas and quartz; precipitated particles, such as colloidal silica and Fe-Si oxyhydroxides; and organic particles. In groundwater from the WRA, the average concentrations of colloids and suspended particles are 0.34 and 1.4 mg/L respectively. Particle-facilitated transport is not included in the geosphere model because the concentrations of particles in groundwater from the WRA are too low to have a significant impact on radionuclide transport. (author). 92 refs., 11 tabs., 13 figs

  15. Evidence for particle transport between alveolar macrophages in vivo

    Energy Technology Data Exchange (ETDEWEB)

    Benson, J.M.; Nikula, K.J.; Guilmette, R.A.

    1995-12-01

    Recent studies at this Institute have focused on determining the role of alveolar macrophages (AMs) in the transport of particles within and from the lung. For those studies, AMs previously labeled using the nuclear stain Hoechst 33342 and polychromatic Fluoresbrite microspheres (1 μm diameter, Polysciences, Inc., Warrington, PA) were instilled into lungs of recipient F344 rats. The fate of the donor particles and the doubly labeled AMs within recipient lungs was followed for 32 d. Within 2-4 d after instillation, the polychromatic microspheres were found in both donor and resident AMs, suggesting that particle transfer occurred between the donor and resident AMs. However, this may also have been an artifact resulting from phagocytosis of the microspheres from dead donor cells or from the fading or degradation of Hoechst 33342 within the donor cells leading to their misidentification as resident AMs. The results support the earlier findings that microspheres in donor AMs can be transferred to resident AMs within 2 d after instillation.

  16. Modeling Transport in Fractured Porous Media with the Random-Walk Particle Method: The Transient Activity Range and the Particle-Transfer Probability

    International Nuclear Information System (INIS)

    Lehua Pan; G.S. Bodvarsson

    2001-01-01

    Multiscale features of transport processes in fractured porous media make numerical modeling a difficult task, both in conceptualization and computation. Modeling the mass transfer through the fracture-matrix interface is one of the critical issues in the simulation of transport in a fractured porous medium. Because conventional dual-continuum-based numerical methods are unable to capture the transient features of the diffusion depth into the matrix (unless they assume a passive matrix medium), such methods will overestimate the transport of tracers through the fractures, especially for the cases with large fracture spacing, resulting in artificial early breakthroughs. We have developed a new method for calculating the particle-transfer probability that can capture the transient features of diffusion depth into the matrix within the framework of the dual-continuum random-walk particle method (RWPM) by introducing a new concept of activity range of a particle within the matrix. Unlike the multiple-continuum approach, the new dual-continuum RWPM does not require using additional grid blocks to represent the matrix. It does not assume a passive matrix medium and can be applied to the cases where global water flow exists in both continua. The new method has been verified against analytical solutions for transport in the fracture-matrix systems with various fracture spacing. The calculations of the breakthrough curves of radionuclides from a potential repository to the water table in Yucca Mountain demonstrate the effectiveness of the new method for simulating 3-D, mountain-scale transport in a heterogeneous, fractured porous medium under variably saturated conditions
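    As a purely schematic illustration (not the authors' formulation), a dual-continuum random-walk particle step can be organized as an advective-dispersive move along the fracture followed by a probabilistic fracture-to-matrix transfer. In the sketch below the transfer probability is a simple placeholder; the paper instead derives it from the particle's transient activity range in the matrix.

        import math
        import random

        def fracture_step(x, v, disp, dt, rng=random):
            """1-D advection-dispersion step of a particle travelling in the fracture."""
            return x + v * dt + math.sqrt(2.0 * disp * dt) * rng.gauss(0.0, 1.0)

        def transfer_probability(dt, d_matrix, spacing):
            """Placeholder fracture-to-matrix transfer probability (hypothetical form);
            the paper instead computes it from the transient activity range."""
            return min(1.0, 2.0 * math.sqrt(d_matrix * dt / math.pi) / spacing)

        def track_particle(x0, t_end, v, disp, d_matrix, spacing, dt, rng=random):
            x, t, in_fracture = x0, 0.0, True
            while t < t_end and in_fracture:
                x = fracture_step(x, v, disp, dt, rng)
                if rng.random() < transfer_probability(dt, d_matrix, spacing):
                    in_fracture = False   # particle diffuses into the matrix
                    # (matrix residence time and return to the fracture omitted)
                t += dt
            return x, in_fracture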

  17. Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm

    Directory of Open Access Journals (Sweden)

    KeKe Gen

    2015-01-01

    Full Text Available The article considers the data association problem in simultaneous localization and mapping (SLAM) for determining the route of unmanned aerial vehicles (UAVs). Such vehicles are already widely used, but they are mostly controlled by a remote operator, so an urgent task is to develop a control system that allows autonomous flight. The SLAM (simultaneous localization and mapping) algorithm, which allows prediction of the location, speed, flight parameters and the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem with an improved ant algorithm. Data association for SLAM establishes a matching between observed landmarks and the landmarks in the state vector. The ant algorithm is a widely used optimization algorithm with positive feedback and the ability to search in parallel, so it is suitable for the SLAM data association problem. However, the traditional ant algorithm easily falls into local optima while searching for routes. Random perturbations are therefore added during the global pheromone update to escape local optima, and setting limits on the pheromone along each route enlarges the search space while keeping the amount of computation needed to find the optimal route reasonable. The paper proposes a local data association algorithm for SLAM based on this improved ant algorithm. To increase the speed of calculation, local data association is used instead of global data association. The first stage of the algorithm identifies targets in the matching space and the observed landmarks that can possibly be associated, using the individual compatibility (IC) criterion. The second stage determines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and
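    A minimal sketch of the two modifications described above, a random perturbation added during the global pheromone update and explicit bounds on the pheromone values, is shown below for a generic pheromone matrix; the parameter names and the exact perturbation form are assumptions, not the paper's scheme.

        import random

        def update_pheromone(tau, best_tour, best_cost, rho=0.1, sigma=0.05,
                             tau_min=0.01, tau_max=10.0, rng=random):
            """Global pheromone update with evaporation, a small random perturbation
            to escape local optima, reinforcement of the best tour, and clamping of
            every entry to [tau_min, tau_max]."""
            n = len(tau)
            for i in range(n):
                for j in range(n):
                    tau[i][j] *= (1.0 - rho)                      # evaporation
                    tau[i][j] += sigma * rng.uniform(-1.0, 1.0)   # random perturbation
            for i, j in zip(best_tour, best_tour[1:]):            # reinforce best tour
                tau[i][j] += 1.0 / best_cost
            for i in range(n):
                for j in range(n):
                    tau[i][j] = min(tau_max, max(tau_min, tau[i][j]))
            return tau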

  18. Implementation vigenere algorithm using microcontroller for sending SMS in monitoring radioactive substances transport system

    International Nuclear Information System (INIS)

    Adi Abimanyu; Nurhidayat; Jumari

    2013-01-01

    The safety and security of radioactive substances must be ensured from the sender to the receiver so that they do not harm humans. In general, the transport of radioactive materials is monitored through telephone conversations to determine the location and the exposure rate of the radioactive substances. From a safety standpoint, communication through telephone conversations is easily intercepted by others, and in addition the possibility of human error is quite high. The SMS service is known for its ease of use, so SMS can be used as a substitute for telephone conversations to monitor the radiation exposure rate and the position of radioactive substances during transport. The radioactive-substance transport monitoring system developed here implements the Vigenère algorithm on a microcontroller and uses SMS (Short Message Service) for communication. Tests were conducted of the encryption and decryption and of the computation time required. The test results show that the Vigenère algorithm was successfully implemented to encrypt and decrypt the messages in the transport monitoring system, and that the computation time required to encrypt and decrypt the data is 13.05 ms for 36 characters and 13.61 ms for 37 characters, so each additional character adds about 0.56 ms of computing time. (author)
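    The Vigenère cipher itself is a standard polyalphabetic substitution; a minimal sketch over the A-Z alphabet is given below for reference. The message text and key are made up, and the paper's microcontroller code and SMS framing are not reproduced.

        def vigenere(text, key, decrypt=False):
            """Classic Vigenère cipher over A-Z; non-letters pass through unchanged."""
            out, k = [], 0
            for ch in text.upper():
                if ch.isalpha():
                    shift = ord(key[k % len(key)].upper()) - ord('A')
                    if decrypt:
                        shift = -shift
                    out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
                    k += 1
                else:
                    out.append(ch)
            return ''.join(out)

        # Round trip with a made-up monitoring message and key
        cipher = vigenere("EXPOSURE RATE NORMAL AT CHECKPOINT 3", "REAKTOR")
        plain = vigenere(cipher, "REAKTOR", decrypt=True)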

  19. Parallel Implementation and Scaling of an Adaptive Mesh Discrete Ordinates Algorithm for Transport

    International Nuclear Information System (INIS)

    Howell, L H

    2004-01-01

    Block-structured adaptive mesh refinement (AMR) uses a mesh structure built up out of locally-uniform rectangular grids. In the BoxLib parallel framework used by the Raptor code, each processor operates on one or more of these grids at each refinement level. The decomposition of the mesh into grids and the distribution of these grids among processors may change every few timesteps as a calculation proceeds. Finer grids use smaller timesteps than coarser grids, requiring additional work to keep the system synchronized and ensure conservation between different refinement levels. In a paper for NECDC 2002 I presented preliminary results on implementation of parallel transport sweeps on the AMR mesh, conjugate gradient acceleration, accuracy of the AMR solution, and scalar speedup of the AMR algorithm compared to a uniform fully-refined mesh. This paper continues with a more in-depth examination of the parallel scaling properties of the scheme, both in single-level and multi-level calculations. Both sweeping and setup costs are considered. The algorithm scales with acceptable performance to several hundred processors. Trends suggest, however, that this is the limit for efficient calculations with traditional transport sweeps, and that modifications to the sweep algorithm will be increasingly needed as job sizes in the thousands of processors become common

  20. Particle Transport in ECRH Plasmas of the TJ-II

    International Nuclear Information System (INIS)

    Vargas, V. I.; Lopez-Bruna, D.; Estrada, T.; Guasp, J.; Reynolds, J. M.; Velasco, J. L.; Herranz, J.

    2007-01-01

    We present a systematic study of particle transport in ECRH plasmas of TJ-II at different densities. The goal is to find the dependence of the particle confinement time and the electron diffusivity on the line-averaged density. The experimental information consists of electron temperature profiles, Te (Thomson scattering, TS), electron density profiles, ne (TS and reflectometry), and measured puffing data in stationary discharges. The profile of the electron source, Se, was obtained with the 3D Monte Carlo code EIRENE. The particle balance analysis has been done by linking the results of the code EIRENE with the results of a model that reproduces ECRH plasmas in stationary conditions. In the range of densities studied (0.58 ≤ <ne> (10^19 m^-3) ≤ 0.80) there are two regions of confinement separated by a threshold density, <ne> ∼ 0.65 × 10^19 m^-3. Below this threshold density the particle confinement time is low, and vice versa. This is reflected in the effective diffusivity, De, whose profiles, within the range of validity of this study, are flat for <ne> ≥ 0.63 × 10^19 m^-3. (Author) 35 refs

  1. Fully kinetic particle simulations of high pressure streamer propagation

    Science.gov (United States)

    Rose, David; Welch, Dale; Thoma, Carsten; Clark, Robert

    2012-10-01

    Streamer and leader formation in high pressure devices is a dynamic process involving a hierarchy of physical phenomena. These include elastic and inelastic particle collisions in the gas, radiation generation, transport and absorption, and electrode interactions. We have performed 2D and 3D fully electromagnetic, implicit particle-in-cell simulations of gas breakdown leading to streamer formation under DC and RF fields. The model uses a Monte Carlo treatment for all particle interactions and includes discrete photon generation, transport, and absorption for ultra-violet and soft x-ray radiation. Central to the realization of this fully kinetic particle treatment is an algorithm [D. R. Welch, et al., J. Comp. Phys. 227, 143 (2007)] that manages the total particle count by species while preserving the local momentum distribution functions and conserving charge. These models are being applied to the analysis of high-pressure gas switches [D. V. Rose, et al., Phys. Plasmas 18, 093501 (2011)] and gas-filled RF accelerator cavities [D. V. Rose, et al., Proc. IPAC12, to appear].
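    The cited particle-management algorithm (Welch et al.) preserves the local momentum distribution functions. As a much cruder illustration of the bookkeeping such schemes must respect, the sketch below merges two like-species macro-particles while conserving total statistical weight (and hence charge) and total momentum; kinetic energy is generally not conserved, which is precisely why production algorithms are more elaborate.

        def merge_pair(p1, p2):
            """Merge two macro-particles of the same species into one.

            Each particle is a dict with keys 'w' (weight), 'x' (position tuple)
            and 'v' (velocity tuple). Total weight and momentum are conserved.
            """
            w = p1["w"] + p2["w"]
            x = tuple((p1["w"] * a + p2["w"] * b) / w for a, b in zip(p1["x"], p2["x"]))
            v = tuple((p1["w"] * a + p2["w"] * b) / w for a, b in zip(p1["v"], p2["v"]))
            return {"w": w, "x": x, "v": v}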

  2. Optimal Spatial Subdivision method for improving geometry navigation performance in Monte Carlo particle transport simulation

    International Nuclear Information System (INIS)

    Chen, Zhenping; Song, Jing; Zheng, Huaqing; Wu, Bin; Hu, Liqin

    2015-01-01

    Highlights: • The subdivision combines both advantages of uniform and non-uniform schemes. • The grid models were proved to be more efficient than traditional CSG models. • Monte Carlo simulation performance was enhanced by Optimal Spatial Subdivision. • Efficiency gains were obtained for realistic whole reactor core models. - Abstract: Geometry navigation is one of the key aspects of dominating Monte Carlo particle transport simulation performance for large-scale whole reactor models. In such cases, spatial subdivision is an easily-established and high-potential method to improve the run-time performance. In this study, a dedicated method, named Optimal Spatial Subdivision, is proposed for generating numerically optimal spatial grid models, which are demonstrated to be more efficient for geometry navigation than traditional Constructive Solid Geometry (CSG) models. The method uses a recursive subdivision algorithm to subdivide a CSG model into non-overlapping grids, which are labeled as totally or partially occupied, or not occupied at all, by CSG objects. The most important point is that, at each stage of subdivision, a conception of quality factor based on a cost estimation function is derived to evaluate the qualities of the subdivision schemes. Only the scheme with optimal quality factor will be chosen as the final subdivision strategy for generating the grid model. Eventually, the model built with the optimal quality factor will be efficient for Monte Carlo particle transport simulation. The method has been implemented and integrated into the Super Monte Carlo program SuperMC developed by FDS Team. Testing cases were used to highlight the performance gains that could be achieved. Results showed that Monte Carlo simulation runtime could be reduced significantly when using the new method, even as cases reached whole reactor core model sizes
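    The recursive, cost-driven subdivision can be sketched generically as follows. The quality factor below, which simply counts how many object references a candidate split creates, is only a placeholder for SuperMC's cost-estimation function, and the split candidates and stopping criteria are likewise assumptions; objects are represented as axis-aligned bounding boxes ((x0, y0, z0), (x1, y1, z1)).

        def split_box(box, axis, pos):
            (lo, hi) = box
            left_hi, right_lo = list(hi), list(lo)
            left_hi[axis], right_lo[axis] = pos, pos
            return (lo, tuple(left_hi)), (tuple(right_lo), hi)

        def overlaps(box, other):
            return all(box[0][k] < other[1][k] and other[0][k] < box[1][k] for k in range(3))

        def quality_factor(left, right, objects):
            """Placeholder cost model: total object references created by the split."""
            return (sum(1 for o in objects if overlaps(left, o)) +
                    sum(1 for o in objects if overlaps(right, o)))

        def subdivide(box, objects, depth=0, max_depth=8, min_objects=4):
            """Recursively split an axis-aligned box, keeping the candidate split
            with the best (lowest) quality factor at each stage."""
            local = [o for o in objects if overlaps(box, o)]
            if depth >= max_depth or len(local) <= min_objects:
                return {"box": box, "objects": local}           # leaf grid cell
            best = None
            for axis in (0, 1, 2):
                lo, hi = box[0][axis], box[1][axis]
                for frac in (0.25, 0.5, 0.75):                  # candidate split planes
                    left, right = split_box(box, axis, lo + frac * (hi - lo))
                    cost = quality_factor(left, right, local)
                    if best is None or cost < best[0]:
                        best = (cost, left, right)
            _, left, right = best
            return {"box": box,
                    "children": [subdivide(left, local, depth + 1, max_depth, min_objects),
                                 subdivide(right, local, depth + 1, max_depth, min_objects)]}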

  3. Investigations of the transportation characteristics of biomass fuel particles in a horizontal pipeline through CFD modelling and experimental measurement

    International Nuclear Information System (INIS)

    Gubba, S.R.; Ingham, D.B.; Larsen, K.J.; Ma, L.; Pourkashanian, M.; Qian, X.; Williams, A.; Yan, Y.

    2012-01-01

    Recent national and international emission legislations to reduce emissions of carbon dioxide are forcing power generation industries using coal to look at various alternatives, such as biomass and especially by co-firing techniques. Biomass is transported to the burners either mixed with the primary fuel, in general, coal, or used in dedicated pipelines. In both cases, transportation of biomass is difficult due to its composition, size, shape and physical behaviour in comparison to the transportation of coal. This study considers experimental measurements for biomass particle transportation in a pipeline with a transverse elbow and compares the results with those using computation fluid dynamic (CFD) techniques. Various materials: flour, willow, wood, bark and a mixture of flour and willow, have been considered in the present investigation. The experimental work was performed using the dynamic changes in the electrostatic charges of biomass particles in conjunction with correlation signal processing techniques. The CFD simulations were performed by considering the effects of gravity, non-spherical drag (based on estimated shape factor), detailed information of the particle distribution, particle wall collisions and particle–particle interactions. Good quantitative and qualitative agreement was obtained between the CFD simulations and the experimental data. It is concluded that particle–particle interactions are of less importance if the mass loading ratio of particles to air is less than 0.03. -- Highlights: ► Dispersed biomass particle transportation is studied using experiments and CFD. ► Inclusion of asphericity in the drag model clearly demonstrated the improvements. ► Gravity effects are found to be important for correct particle distribution in pipe lines. ► Inter-particle collisions were less important for mass loading ratios <0.05 kg/kg.

  4. Hybrid Algorithm of Particle Swarm Optimization and Grey Wolf Optimizer for Improving Convergence Performance

    Directory of Open Access Journals (Sweden)

    Narinder Singh

    2017-01-01

    Full Text Available A new hybrid nature-inspired algorithm called HPSOGWO is presented, combining Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO). The main idea is to couple the exploitation ability of Particle Swarm Optimization with the exploration ability of the Grey Wolf Optimizer so as to combine the strengths of both variants. Unimodal, multimodal, and fixed-dimension multimodal test functions are used to check the solution quality and performance of the HPSOGWO variant. The numerical and statistical results show that the hybrid variant significantly outperforms the PSO and GWO variants in terms of solution quality, solution stability, convergence speed, and ability to find the global optimum.
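    One plausible way to realize such a hybrid, not necessarily the exact update used in the paper, is to let the three best agents (alpha, beta, delta) generate GWO-style candidate positions and then pull each particle toward them through a PSO-style inertial velocity update, as sketched below; the coefficient names and values are assumptions.

        import random

        def hpsogwo_step(positions, velocities, fitness, a, w=0.5, c=(0.5, 0.5, 0.5)):
            """One iteration of a PSO/GWO hybrid on a minimization problem.

            positions, velocities: lists of equal-length lists; fitness: objective
            values for the current positions; a: GWO parameter, usually decreased
            linearly from 2 to 0 over the run.
            """
            order = sorted(range(len(positions)), key=lambda i: fitness[i])
            alpha, beta, delta = (list(positions[i]) for i in order[:3])
            dim = len(positions[0])
            for i, x in enumerate(positions):
                new_v = []
                for d in range(dim):
                    cand = []
                    for leader in (alpha, beta, delta):        # GWO-style candidates
                        r1, r2 = random.random(), random.random()
                        A, C = 2.0 * a * r1 - a, 2.0 * r2
                        cand.append(leader[d] - A * abs(C * leader[d] - x[d]))
                    v = w * velocities[i][d]                   # PSO-style inertia
                    for ck, cd in zip(c, cand):                # pull toward candidates
                        v += ck * random.random() * (cd - x[d])
                    new_v.append(v)
                velocities[i] = new_v
                positions[i] = [x[d] + new_v[d] for d in range(dim)]
            return positions, velocities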

  5. Impacts on particles and ozone by transport processes recorded at urban and high-altitude monitoring stations

    Energy Technology Data Exchange (ETDEWEB)

    Nicolás, J.F., E-mail: j.nicolas@umh.es [Laboratory of Atmospheric Pollution (LCA), Miguel Hernández University, Av. de la Universidad s/n, Edif. Alcudia, 03202 Elche (Spain); Crespo, J.; Yubero, E.; Soler, R. [Laboratory of Atmospheric Pollution (LCA), Miguel Hernández University, Av. de la Universidad s/n, Edif. Alcudia, 03202 Elche (Spain); Carratalá, A. [Department of Chemical Engineering, University of Alicante, P.O. Box 99, 03080 Alicante (Spain); Mantilla, E. [Instituto Universitario CEAM-UMH, Parque Tecnológico, C/Charles R. Darwin 14, E-46980 Paterna (Spain)

    2014-01-01

    In order to evaluate the influence of particle transport episodes on particle number concentration temporal trends at both urban and high-altitude (Aitana peak-1558 m a.s.l.) stations, a simultaneous sampling campaign from October 2011 to September 2012 was performed. The monitoring stations are located in southeastern Spain, close to the Mediterranean coast. The annual average value of particle concentration obtained in the larger accumulation mode (size range 0.25–1 μm) at the mountain site, 55.0 ± 3.0 cm⁻³, was practically half that of the value obtained at the urban station (112.0 ± 4.0 cm⁻³). The largest difference between both stations was recorded during December 2011 and January 2012, when particles at the mountain station registered the lowest values. It was observed that during urban stagnant episodes, particle transport from urban sites to the mountain station could take place under specific atmospheric conditions. During these transports, the major particle transfer is produced in the 0.5–2 μm size range. The minimum difference between stations was recorded in summer, particularly in July 2012, which is most likely due to several particle transport events that affected only the mountain station. The particle concentration in the coarse mode was very similar at both monitoring sites, with the biggest difference being recorded during the summer months, 0.4 ± 0.1 cm⁻³ at the urban site and 0.9 ± 0.1 cm⁻³ at the Aitana peak in August 2012. Saharan dust outbreaks were the main factor responsible for these values during summer time. The regional station was affected more by these outbreaks, recording values of > 4.0 cm⁻³, than the urban site. This long-range particle transport from the Sahara desert also had an effect upon O₃ levels measured at the mountain station. During periods affected by Saharan dust outbreaks, ozone levels underwent a significant decrease (3–17%) with respect to its mean

  6. RANS modeling for particle transport and deposition in turbulent duct flows: Near wall model uncertainties

    International Nuclear Information System (INIS)

    Jayaraju, S.T.; Sathiah, P.; Roelofs, F.; Dehbi, A.

    2015-01-01

    Highlights: • Near-wall modeling uncertainties in the RANS particle transport and deposition are addressed in a turbulent duct flow. • Discrete Random Walk (DRW) model and Continuous Random Walk (CRW) model performances are tested. • The accuracy of several near-wall anisotropy models is assessed. • Numerous sensitivity studies are performed to recommend a robust, well-validated near-wall model for accurate particle deposition predictions. - Abstract: Dust accumulation in the primary system of a (V)HTR is identified as one of the foremost concerns during a potential accident. Several numerical efforts have focused on the use of RANS methodology to better understand the complex phenomena of fluid–particle interaction at various flow conditions. In the present work, several uncertainties relating to the near-wall modeling of particle transport and deposition are addressed for the RANS approach. The validation analyses are performed in a fully developed turbulent duct flow setup. A standard k − ε turbulence model with enhanced wall treatment is used for modeling the turbulence. For the Lagrangian phase, the performance of a continuous random walk (CRW) model and a discrete random walk (DRW) model for the particle transport and deposition is assessed. For wall-bounded flows, it is generally seen that accounting for near-wall anisotropy is important to accurately predict particle deposition. The various near-wall correlations available in the literature are derived either from DNS data or from experimental data. A thorough investigation of these near-wall correlations and their applicability for accurate particle deposition predictions is carried out. The main outcome of the present work is a well-validated turbulence model with optimal near-wall modeling which provides realistic particle deposition predictions.
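    For reference, the core of a discrete random walk (eddy-interaction) model can be sketched as follows: the fluctuating fluid velocity seen by a particle is sampled from an isotropic Gaussian based on the local turbulent kinetic energy and is held for one eddy lifetime. The isotropic sampling and the constant used here are exactly the kind of near-wall simplifications whose replacement by anisotropic correlations the paper investigates.

        import math
        import random

        def sample_eddy(k, eps, c_t=0.15, rng=random):
            """Sample one eddy for a discrete random walk model.

            Returns an isotropic fluctuating-velocity vector, each component drawn
            from N(0, sqrt(2k/3)), and an eddy lifetime of order c_t * k / eps.
            The particle keeps this u' until the eddy lifetime (or the eddy
            crossing time, not shown) expires, then a new eddy is sampled.
            """
            sigma = math.sqrt(2.0 * k / 3.0)
            u_fluct = [rng.gauss(0.0, sigma) for _ in range(3)]
            t_eddy = 2.0 * c_t * k / eps
            return u_fluct, t_eddy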

  7. Toroidally asymmetric particle transport caused by phase-locking of MHD modes in RFX-mod

    International Nuclear Information System (INIS)

    Lorenzini, R.; Terranova, D.; Auriemma, F.; Cavazzana, R.; Innocente, P.; Martini, S.; Serianni, G.; Zuin, M.

    2007-01-01

    The particle and energy transport in reversed field pinch experiments is affected by the locking in phase of the tearing modes, also dubbed dynamo modes, that sustain the magnetic configuration. In standard RFP pulses many m = 1 and m = 0 resonant modes have a relatively large amplitude (a spectrum dubbed MH for multiple helicity). The locking in phase of m = 1 tearing modes produces a helical deformation (locked mode (LM)) of the magnetic surfaces in a region of approximately 40 toroidal degrees. The region of the LM is characterized by a strong plasma-wall interaction and by high losses of energy and particles that account for a significant fraction of the input power and of the total particle outflux. The locking in phase of m = 0 modes modifies the plasma radius, shrinking and enlarging the plasma cross section in two wide toroidal regions of about 100°. The purpose of this paper is to investigate to what extent the locking in phase of m = 0 modes introduces toroidal asymmetries in the transport properties of the plasma. This study has been carried out by investigating the shape of the density profile in the RFX-mod experiment. The analyses show that the profile exhibits a dependence on the toroidal angle, which is related to the deformation of the plasma column due to the locking in phase of m = 0 modes: the least steep density gradients at the edge are found in the region where the plasma column is shrunk, entailing that in this region the particle transport is enhanced. An analogous asymmetry also characterizes the density and magnetic fluctuations at the edge, which are enhanced in the same toroidal region where the particle transport is also enhanced. This result can be considered the first experimental evidence of an instability localized where the plasma column is shrunk

  8. Radiative transport-based frequency-domain fluorescence tomography

    International Nuclear Information System (INIS)

    Joshi, Amit; Rasmussen, John C; Sevick-Muraca, Eva M; Wareing, Todd A; McGhee, John

    2008-01-01

    We report the development of radiative transport model-based fluorescence optical tomography from frequency-domain boundary measurements. The coupled radiative transport model for describing NIR fluorescence propagation in tissue is solved by a novel software based on the established Attila(TM) particle transport simulation platform. The proposed scheme enables the prediction of fluorescence measurements with non-contact sources and detectors at a minimal computational cost. An adjoint transport solution-based fluorescence tomography algorithm is implemented on dual grids to efficiently assemble the measurement sensitivity Jacobian matrix. Finally, we demonstrate fluorescence tomography on a realistic computational mouse model to locate nM to μM fluorophore concentration distributions in simulated mouse organs

  9. Particle Swarm Optimization

    Science.gov (United States)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2002-01-01

    The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.
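    For readers unfamiliar with the method, a minimal global-best PSO loop for continuous minimization is sketched below; this is the textbook form with typical default parameters, not the improved variant presented in the paper.

        import random

        def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            """Minimize f over box bounds [(lo, hi), ...] with a basic global-best PSO."""
            dim = len(bounds)
            x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
            v = [[0.0] * dim for _ in range(n_particles)]
            pbest = [list(p) for p in x]
            pbest_f = [f(p) for p in x]
            g_i = min(range(n_particles), key=lambda i: pbest_f[i])
            gbest, gbest_f = list(pbest[g_i]), pbest_f[g_i]
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        v[i][d] = (w * v[i][d]
                                   + c1 * r1 * (pbest[i][d] - x[i][d])
                                   + c2 * r2 * (gbest[d] - x[i][d]))
                        x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]), bounds[d][1])
                    fi = f(x[i])
                    if fi < pbest_f[i]:
                        pbest[i], pbest_f[i] = list(x[i]), fi
                        if fi < gbest_f:
                            gbest, gbest_f = list(x[i]), fi
            return gbest, gbest_f

        # Example: minimize the 5-dimensional sphere function
        best, best_val = pso(lambda p: sum(t * t for t in p), [(-5.0, 5.0)] * 5)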

  10. Computation of many-particle quantum trajectories with exchange interaction: application to the simulation of nanoelectronic devices

    International Nuclear Information System (INIS)

    Alarcón, A; Yaro, S; Cartoixà, X; Oriols, X

    2013-01-01

    Following Oriols (2007 Phys. Rev. Lett. 98 066803), an algorithm to deal with the exchange interaction in non-separable quantum systems is presented. The algorithm can be applied to fermions or bosons and, by construction, it exactly ensures that any observable is totally independent of the interchange of particles. It is based on the use of conditional Bohmian wave functions which are solutions of single-particle pseudo-Schrödinger equations. The exchange symmetry is directly defined by demanding symmetry properties of the quantum trajectories in the configuration space with a universal algorithm, rather than through a particular exchange–correlation functional introduced into the single-particle pseudo-Schrödinger equation. It requires the computation of N² conditional wave functions to deal with N identical particles. For separable Hamiltonians, the algorithm reduces to the standard Slater determinant for fermions (or permanent for bosons). A numerical test for a two-particle system, where exact solutions for non-separable Hamiltonians are computationally accessible, is presented. The numerical viability of the algorithm for quantum electron transport (in a far-from-equilibrium time-dependent open system) is demonstrated by computing the current and fluctuations in a nano-resistor, with exchange and Coulomb interactions among electrons. (paper)

  11. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    Science.gov (United States)

    Jin, Junchen

    2016-01-01

    The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements and the constraints include track occupation conflicts, shunting routes conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm on the aspect of optimality. PMID:27436998

  12. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jiaxi Wang

    2016-01-01

    Full Text Available The shunting schedule of electric multiple units depot (SSED is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements and the constraints include track occupation conflicts, shunting routes conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm on the aspect of optimality.

  13. An algorithm for the reconstruction of high-energy neutrino-induced particle showers and its application to the ANTARES neutrino telescope

    NARCIS (Netherlands)

    Albert, A.; André, M.; Anghinolfi, M.; Anton, G.; Ardid, M.; Aubert, J.-J.; Avgitas, T.; Baret, B.; Barrios-Martí, J.; Basa, S.; Bertin, V.; Biagi, S.; Bormuth, R.; Bourret, S.; Bouwhuis, M.C.; Bruijn, R.; Brunner, J.; Busto, J.; Capone, A.; Caramete, L.; Carr, J.; Celli, S.; Chiarusi, T.; Circella, M.; Coelho, C.O.A.; Coleiro, A.; Coniglione, R.; Costantini, H.; Coyle, P.; Creusot, A.; Deschamps, A.; De Bonis, G.; Distefano, C.; Di Palma, I.; Domi, A.; Donzaud, C.; Dornic, D.; Drouhin, D.; Eberl, T.; El Bojaddaini, I.; Elsässer, D.; Enzenhofer, A.; Felis, I.; Folger, F.; Fusco, L.A.; Galata, S.; Gay, P.; Giordano, V.; Glotin, H.; Grégoire, T.; Gracia-Ruiz, R.; Graf, K.; Hallmann, S.; van Haren, H.; Heijboer, A.J.; Hello, Y.; Hernandez-Rey, J.J.; Hößl, J.; Hofestädt, J.; Hugon, C.; Illuminati, G.; James, C.W.; de Jong, M.; Jongen, M.; Kadler, M.; Kalekin, O.; Katz, U.; Kießling, D.; Kouchner, A.; Kreter, M.; Kreykenbohm, I.; Kulikovskiy, V.; Lachaud, C.; Lahmann, R.; Lefevre, D.; Leonora, E.; Lotze, M.; Loucatos, S.; Marcelin, M.; Margiotta, A.; Marinelli, A.; Martinez-Mora, J.A.; Mele, R.; Melis, K.; Michael, T.; Migliozzi, P.; Moussa, A.; Nezri, E.; Organokov, M.; Pavalas, G.E.; Pellegrino, C.; Perrina, C.; Piattelli, P.; Popa, V.; Pradier, T.; Quinn, L.; Racca, C.; Riccobene, G.; Sanchez-Losa, A.; Saldaña, M.; Salvadori, I.; Samtleben, D.F.E.; Sanguineti, M.; Sapienza, P.; Schussler, F.; Sieger, C.; Spurio, M.; Stolarczyk, T.; Taiuti, M.; Tayalati, Y.; Trovato, A.; Turpin, D.; Tönnis, C.; Vallage, B.; Van Elewyck, V.; Versari, F.; Vivolo, D.; Vizzoca, A.; Wilms, J.; Zornoza, J.D.; Zuniga, J.

    2017-01-01

    A novel algorithm to reconstruct neutrino-induced particle showers within the ANTARES neutrino telescope is presented. The method achieves a median angular resolution of 6° for shower energies below 100 TeV. Applying this algorithm to 6 years of data taken with the ANTARES detector, 8 events with

  14. High-Speed Transport of Fluid Drops and Solid Particles via Surface Acoustic Waves

    Science.gov (United States)

    Bar-Cohen, Yoseph; Bao, Xiaoqi; Sherrit, Stewart; Badescu, Mircea; Lih, Shyh-shiuh

    2012-01-01

    A compact sampling tool mechanism that can operate at various temperatures, and that transports and sieves powdered cuttings and soil grains of different particle sizes with no moving parts, has been created using traveling surface acoustic waves (SAWs) emitted by an inter-digital transducer (IDT). The generated waves are driven at about 10 MHz and cause powder to move towards the IDT at high speed, with different speeds for different particle sizes, which enables the particles to be sieved. The design is based on the use of SAWs and their propelling effect on powder particles and fluids along the path of the waves. Generally, SAWs are elastic waves propagating in a shallow layer, about one wavelength deep, beneath the surface of a solid substrate. To generate SAWs, a piezoelectric plate is used, made of LiNbO3 crystal cut along the x-axis with a rotation of 127.8° about the y-axis. On this plate are printed pairs of finger-like electrodes in the form of a grating that are activated by subjecting the gap between the electrodes to an electric field. This configuration of a surface wave transmitter is called an IDT. The IDT used here consists of 20 pairs of fingers with 0.4-mm spacing and a total length of 12.5 mm. The surface wave is produced by the tendency of the piezoelectric material to contract or expand when subjected to an electric field. Driving the IDT to generate waves at high amplitude provides an actuation mechanism in which the surface particles move elliptically, pulling powder particles on the surface toward the wave source and pushing liquids in the opposite direction. This behavior allows the device to separate large particles and fluids that are mixed together. Fluids are removed at speeds of 7.5 to 15 cm/s, enabling the device to act as a bladeless wiper for raindrops. For a windshield design, the electrodes could be made transparent so that they do not disturb the driver or pilot. Multiple IDTs can be synchronized to transport water or powder over larger

  15. Modeling the solute transport by particle-tracing method with variable weights

    Science.gov (United States)

    Jiang, J.

    2016-12-01

    The particle-tracing method is usually used to simulate solute transport in fractured media. In this method, the concentration at a point is proportional to the number of particles visiting that point. However, the method is rather inefficient at points with small concentration: few particles visit them, which leads to violent oscillations or gives a zero value of concentration. In this paper, we propose a particle-tracing method with variable weights, in which the concentration at a point is proportional to the sum of the weights of the particles visiting it. The method adjusts the weight factors during the simulation according to the estimated probabilities of the corresponding walks. If the weight W of a tracked particle is larger than the relative concentration C at the corresponding site, the particle is split into Int(W/C) copies and each copy is simulated independently with weight W/Int(W/C). If the weight W of a tracked particle is less than the relative concentration C at the corresponding site, the particle continues to be tracked with probability W/C and its weight is adjusted to C. By adjusting weights in this way, the number of visiting particles is distributed evenly over the whole range. Through this variable-weights scheme, we can eliminate the violent oscillations and increase the accuracy by orders of magnitude.
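    The splitting and roulette rules described above translate almost directly into code. The sketch below assumes each tracked particle is represented by a dict carrying its weight, and C is the relative concentration at the particle's current site.

        import random

        def adjust_weight(particle, C, rng=random):
            """Split or roulette a tracked particle against the local relative
            concentration C. Returns the (possibly empty) list of particles that
            continue to be tracked."""
            W = particle["w"]
            if W > C:
                n = int(W / C)                      # split into Int(W/C) copies...
                return [dict(particle, w=W / n) for _ in range(n)]   # ...of weight W/Int(W/C)
            if rng.random() < W / C:                # otherwise survive with probability W/C
                return [dict(particle, w=C)]        # and continue with weight C
            return []                               # terminated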

  16. Understanding Air Transportation Market Dynamics Using a Search Algorithm for Calibrating Travel Demand and Price

    Science.gov (United States)

    Kumar, Vivek; Horio, Brant M.; DeCicco, Anthony H.; Hasan, Shahab; Stouffer, Virginia L.; Smith, Jeremy C.; Guerreiro, Nelson M.

    2015-01-01

    This paper presents a search-algorithm-based framework to calibrate origin-destination (O-D) market-specific airline ticket demands and prices for the Air Transportation System (ATS). The framework is used to calibrate an agent-based model of the air ticket buy-sell process, the Airline Evolutionary Simulation (Airline EVOS), whose fidelity of detail accounts for airline and consumer behaviors and the interdependencies they share between themselves and the NAS. More specifically, the algorithm simultaneously calibrates demand and airfares for each O-D market to within a specified threshold of a pre-specified target value. The proposed algorithm is illustrated with market data targets provided by the Transportation System Analysis Model (TSAM) and the Airline Origin and Destination Survey (DB1B). Although we specify these models and data sources for this calibration exercise, the methods described in this paper are applicable to calibrating any low-level model of the ATS to other demand-forecast model-based data. We argue that using a calibration algorithm such as the one presented here to synchronize ATS models with specialized demand forecast models is a powerful tool for establishing credible baseline conditions in experiments analyzing the effects of proposed policy changes to the ATS.
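    The market-by-market calibration can be pictured as a simple one-dimensional search per O-D pair. The generic sketch below applies a damped proportional correction to a demand multiplier until the simulated demand matches the target within a relative threshold; the function names, the multiplier parameterization and the damping scheme are illustrative assumptions, not the Airline EVOS calibration logic.

        def calibrate_market(simulate, target, threshold=0.01, max_iter=50,
                             multiplier=1.0, damping=0.5):
            """Adjust a scalar demand multiplier until simulate(multiplier), the
            model's O-D demand, is within `threshold` (relative) of `target`."""
            for _ in range(max_iter):
                demand = max(simulate(multiplier), 1e-9)   # guard against zero demand
                if abs(demand - target) <= threshold * target:
                    break
                # damped proportional correction keeps the agent-based model stable
                multiplier *= (target / demand) ** damping
            return multiplier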

  17. An Adaptive Multi-Objective Particle Swarm Optimization Algorithm for Multi-Robot Path Planning

    Directory of Open Access Journals (Sweden)

    Nizar Hadi Abbas

    2016-07-01

    Full Text Available This paper discusses an optimal path planning algorithm based on an Adaptive Multi-Objective Particle Swarm Optimization Algorithm (AMOPSO) for two case studies. In the first case, a single robot must reach a goal in a static environment that contains two obstacles and two danger sources. The second case concerns improving the ability of five robots to find the shortest paths. For the first case, the proposed algorithm solves the optimization problem by finding the minimum distance from the initial to the goal position while ensuring that the generated path keeps a maximum distance from the danger zones; for the second case, it finds the shortest path for every robot, without any collisions between them, in the shortest time. In order to evaluate the proposed algorithm in terms of finding the best solution, six benchmark test functions are used to compare AMOPSO with the standard MOPSO. The results show that AMOPSO has a better ability to escape local optima, with faster convergence, than MOPSO. The simulation results, obtained using Matlab 2014a, indicate that this methodology is extremely valuable for every robot in a multi-robot framework in discovering its own proper path from the start to the destination position with minimum distance and time.

  18. Adaptive tree multigrids and simplified spherical harmonics approximation in deterministic neutral and charged particle transport

    International Nuclear Information System (INIS)

    Kotiluoto, P.

    2007-05-01

    A new deterministic three-dimensional neutral and charged particle transport code, MultiTrans, has been developed. In the novel approach, the adaptive tree multigrid technique is used in conjunction with simplified spherical harmonics approximation of the Boltzmann transport equation. The development of the new radiation transport code started in the framework of the Finnish boron neutron capture therapy (BNCT) project. Since the application of the MultiTrans code to BNCT dose planning problems, the testing and development of the MultiTrans code has continued in conventional radiotherapy and reactor physics applications. In this thesis, an overview of different numerical radiation transport methods is first given. Special features of the simplified spherical harmonics method and the adaptive tree multigrid technique are then reviewed. The usefulness of the new MultiTrans code has been indicated by verifying and validating the code performance for different types of neutral and charged particle transport problems, reported in separate publications. (orig.)

  19. Dynamic Load Balancing of Parallel Monte Carlo Transport Calculations

    International Nuclear Information System (INIS)

    O'Brien, M; Taylor, J; Procassini, R

    2004-01-01

    The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most heavily worked domains. Since the particle workload varies over the course of the simulation, the algorithm determines each cycle whether dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations
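    The per-cycle decision can be sketched schematically as follows: estimate the cost of the coming cycle from the current particle counts, build a greedily rebalanced domain-to-processor assignment, charge it for the particle migrations it requires, and adopt it only if it is cheaper overall. The cost model and the greedy assignment below are placeholders, not the production algorithm.

        def plan_cycle(domain_work, assignment, n_procs, comm_cost_per_particle=0.05):
            """Decide whether to rebalance this cycle.

            domain_work: particles per domain; assignment: domain -> processor id.
            A cycle's cost is taken to be the maximum per-processor particle count
            (processors advance in lockstep); rebalancing adds a migration cost.
            """
            def max_load(assign):
                load = [0] * n_procs
                for dom, proc in assign.items():
                    load[proc] += domain_work[dom]
                return max(load)

            current_cost = max_load(assignment)

            # Greedy rebalance: largest domains first onto the least-loaded processor.
            load = [0] * n_procs
            proposal = {}
            for dom in sorted(domain_work, key=domain_work.get, reverse=True):
                proc = load.index(min(load))
                proposal[dom] = proc
                load[proc] += domain_work[dom]

            moved = sum(domain_work[d] for d in domain_work if proposal[d] != assignment[d])
            rebalanced_cost = max(load) + comm_cost_per_particle * moved

            if rebalanced_cost < current_cost:
                return proposal, True
            return assignment, False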

  20. Load balancing in highly parallel processing of Monte Carlo code for particle transport

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Takemiya, Hiroshi; Kawasaki, Takuji

    1998-01-01

    In parallel processing of Monte Carlo (MC) codes for neutron, photon and electron transport problems, particle histories are assigned to processors by exploiting the independence of the calculation for each particle. Although the main part of an MC code can easily be parallelized by this method, it is necessary, and in practice difficult, to optimize the code with respect to load balancing in order to attain a high speedup ratio in highly parallel processing. In fact, the speedup ratio for 128 processors remains at only about one hundred on the test bed used for the performance evaluation. Through the parallel processing of the MCNP code, which is widely used in the nuclear field, it is shown that it is difficult to attain high performance with static load balancing, especially in neutron transport problems, and that a load balancing method which dynamically changes the number of assigned particles, minimizing the sum of the computational and communication costs, overcomes the difficulty, resulting in a reduction of nearly fifteen percent in execution time. (author)