Kalman Filter Tracking on Parallel Architectures
International Nuclear Information System (INIS)
Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi
2016-01-01
Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors such as GPGPUs, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment.
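The per-layer structure of a Kalman filter track fit can be sketched with a minimal scalar predict/update step. This is a hedged illustration of the filter's mechanics, not the experiments' multi-parameter track state; all numbers are illustrative.

```python
# Minimal 1D Kalman filter step: the same predict/update structure that
# track fitting applies layer by layer. All constants are illustrative.

def kf_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.25):
    """One predict/update cycle for a scalar state.

    x, P : prior state estimate and its variance
    z    : new measurement (e.g., a hit position)
    F, Q : state transition and process noise (propagation between layers)
    H, R : measurement model and measurement noise
    """
    # Predict: propagate the state to the next measurement surface.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend the prediction with the measurement via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Filtering a noisy constant signal shrinks the variance step by step.
x, P = 0.0, 1.0
for z in [1.1, 0.9, 1.05, 0.95]:
    x, P = kf_step(x, P, z)
```

In a real fit the state is a vector (position, direction, curvature) and F, H are matrices, but the gain computation keeps this shape.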
The numerical parallel computing of photon transport
International Nuclear Information System (INIS)
Huang Qingnan; Liang Xiaoguang; Zhang Lifa
1998-12-01
The parallel computing of photon transport is investigated, and the parallel algorithm and the parallelization of programs on parallel computers, both with shared memory and with distributed memory, are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport in light of the architecture of parallel computers, using a divide-and-conquer strategy, adjusting the algorithm structure of the program, dissolving data dependences, finding parallelizable components and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on several parallel computers, such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup was obtained.
Parallel thermal radiation transport in two dimensions
International Nuclear Information System (INIS)
Smedley-Stevenson, R.P.; Ball, S.R.
2003-01-01
This paper describes the distributed-memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm combines a variety of components to produce a state-of-the-art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3-TeraFlop MPP (massively parallel processing) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)
A highly parallel algorithm for track finding
International Nuclear Information System (INIS)
Dell'Orso, M.
1990-01-01
We describe a very fast algorithm for track finding, which is applicable to a whole class of detectors like drift chambers, silicon microstrip detectors, etc. The algorithm uses a pattern bank stored in a large memory and organized into a tree structure. (orig.)
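The tree-organized pattern bank can be sketched as a layer-by-layer trie. The hit codes and stored patterns below are hypothetical, but the lookup shows the key property: non-matching branches are pruned after the first mismatching layer.

```python
# Sketch of a tree-structured pattern bank: each stored pattern is a tuple
# of coarse hit codes, one per detector layer (hypothetical codes), and
# lookup walks the tree one layer at a time so non-matching branches are
# abandoned early.

def build_bank(patterns):
    root = {}
    for pid, pattern in enumerate(patterns):
        node = root
        for code in pattern:
            node = node.setdefault(code, {})
        node["id"] = pid  # leaf: which stored pattern this path encodes
    return root

def match(bank, pattern):
    node = bank
    for code in pattern:
        if code not in node:
            return None  # no stored pattern continues along this branch
        node = node[code]
    return node.get("id")

bank = build_bank([(1, 4, 2), (1, 4, 3), (2, 5, 3)])
```

A hardware pattern bank holds the tree in a large memory and compares many candidate paths at once; the pruning logic is the same.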
Particle orbit tracking on a parallel computer: Hypertrack
International Nuclear Information System (INIS)
Cole, B.; Bourianoff, G.; Pilat, F.; Talman, R.
1991-05-01
A program has been written which performs particle orbit tracking on the Intel iPSC/860 distributed-memory parallel computer. The tracking is performed using a thin-element approach. A brief description of the structure and performance of the code is presented, along with applications of the code to the analysis of accelerator lattices for the SSC. The concept of 'ensemble tracking', i.e. the tracking of ensemble averages of noninteracting particles, such as the emittance, is presented. Preliminary results of such studies will be presented. 2 refs., 6 figs.
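One ensemble average of the kind mentioned above is the rms emittance, computed from the centered second moments of the phase-space coordinates (x, x'). A sketch with illustrative coordinates; in an MPP setting each processor would hold a slice of the particles and the moments would be reduced across nodes.

```python
# rms emittance of an ensemble of noninteracting particles:
#   eps = sqrt(<x^2><x'^2> - <x x'>^2), with centered moments.
import math

def rms_emittance(xs, xps):
    n = len(xs)
    mx = sum(xs) / n
    mxp = sum(xps) / n
    # Centered second moments of the phase-space distribution.
    x2 = sum((x - mx) ** 2 for x in xs) / n
    xp2 = sum((xp - mxp) ** 2 for xp in xps) / n
    xxp = sum((x - mx) * (xp - mxp) for x, xp in zip(xs, xps)) / n
    return math.sqrt(x2 * xp2 - xxp ** 2)

# An uncorrelated bunch: emittance is the product of the two rms widths.
eps = rms_emittance([1.0, -1.0, 1.0, -1.0], [0.5, 0.5, -0.5, -0.5])
```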
Kalman filter tracking on parallel architectures
Cerati, G.; Elmer, P.; Krutelyov, S.; Lantz, S.; Lefebvre, M.; McDermott, K.; Riley, D.; Tadel, M.; Wittich, P.; Wurthwein, F.; Yagil, A.
2017-10-01
We report on the progress of our studies towards a Kalman filter track reconstruction algorithm with optimal performance on manycore architectures. The combinatorial structure of these algorithms is not immediately compatible with an efficient SIMD (or SIMT) implementation; the challenge for us is to recast the existing software so it can readily generate hundreds of shared-memory threads that exploit the underlying instruction set of modern processors. We show how the data and associated tasks can be organized in a way that is conducive to both multithreading and vectorization. We demonstrate very good performance on Intel Xeon and Xeon Phi architectures, as well as promising first results on Nvidia GPUs.
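One way to organize the data so it is conducive to vectorization is a structure-of-arrays layout: each track parameter lives in its own contiguous array, and the same update is applied across a batch of tracks, one SIMD lane per track. The scalar state and gain below are a hedged sketch of the layout idea, not the authors' actual track state.

```python
# Structure-of-arrays sketch: instead of one object per track, each
# parameter is a contiguous array, so the identical update runs over many
# tracks at once -- the layout a SIMD compiler can vectorize.

def update_tracks_soa(x, P, z, R=0.25):
    """Kalman-style update across a batch of tracks (one lane per track).

    x, P : per-track state estimates and variances (illustrative scalars)
    z    : per-track measurements
    """
    x_new, P_new = [], []
    for i in range(len(x)):  # in C++ this loop body is one vector instruction
        K = P[i] / (P[i] + R)
        x_new.append(x[i] + K * (z[i] - x[i]))
        P_new.append((1.0 - K) * P[i])
    return x_new, P_new

xs, Ps = update_tracks_soa([0.0, 1.0], [1.0, 1.0], [1.0, 1.0])
```

The combinatorial part of track finding then amounts to keeping these batches full as candidates branch and die, which is where the recasting effort lies.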
Parallelization of a Monte Carlo particle transport simulation code
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a short response time.
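The key requirement of a parallel Monte Carlo random-number scheme is that each worker gets a reproducible stream independent of the others. A minimal stdlib sketch in that spirit; the real SPRNG/DCMT libraries use properly parameterized generators, and the seeding scheme and toy "transport" below are assumptions for illustration only.

```python
# One seeded generator per worker: results are reproducible and do not
# depend on how the particle workload is distributed across workers.
import random

def simulate_chunk(worker_id, n_particles, base_seed=12345):
    rng = random.Random(base_seed + 1000 * worker_id)  # stream unique to worker
    # Toy "transport": count particles whose sampled value exceeds a cut.
    return sum(1 for _ in range(n_particles) if rng.random() > 0.5)

# Four workers, 1000 particles each; rerunning gives identical counts.
counts = [simulate_chunk(w, 1000) for w in range(4)]
total = sum(counts)
```

Naive per-worker seeding like this risks overlapping streams; that is precisely the problem parameterized libraries such as SPRNG and DCMT solve.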
Comparative eye-tracking evaluation of scatterplots and parallel coordinates
Directory of Open Access Journals (Sweden)
Rudolf Netzel
2017-06-01
We investigate task performance and reading characteristics for scatterplots (Cartesian coordinates) and parallel coordinates. In a controlled eye-tracking study, we asked 24 participants to assess the relative distance of points in multidimensional space, depending on the diagram type (parallel coordinates or a horizontal collection of scatterplots), the number of data dimensions (2, 4, 6, or 8), and the relative distance between points (15%, 20%, or 25%). For a given reference point and two target points, we instructed participants to choose the target point that was closer to the reference point in multidimensional space. We present a visual scanning model that describes different strategies to solve this retrieval task for both diagram types, and propose corresponding hypotheses that we test using task completion time, accuracy, and gaze positions as dependent variables. Our results show that scatterplots outperform parallel coordinates significantly in 2 dimensions; however, the task was solved more quickly and more accurately with parallel coordinates in 8 dimensions. The eye-tracking data further show significant differences between Cartesian and parallel coordinates, as well as between different numbers of dimensions. For parallel coordinates, there is a clear trend toward shorter fixations and longer saccades with increasing number of dimensions. Using an area-of-interest (AOI) based approach, we identify different reading strategies for each diagram type: for parallel coordinates, the participants' gaze frequently jumped back and forth between pairs of axes, while axes were rarely focused on when viewing Cartesian coordinates. We further found that participants' attention is biased: toward the center of the whole plot for parallel coordinates and skewed to the center/left side for Cartesian coordinates. We anticipate that these results may support the design of more effective visualizations for multidimensional data.
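The retrieval task itself is simple to state in code: given a reference point and two targets in d-dimensional space, pick the target with the smaller Euclidean distance to the reference. The points and the "A"/"B" labels below are illustrative.

```python
# The participants' distance-judgment task in code form.
import math

def closer_target(reference, target_a, target_b):
    da = math.dist(reference, target_a)  # Euclidean distance, any dimension
    db = math.dist(reference, target_b)
    return "A" if da < db else "B"

# A 4-dimensional example (d = 4 is one of the dimension counts studied).
choice = closer_target((0, 0, 0, 0), (1, 1, 1, 1), (1, 1, 1, 2))
```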
Multiscale Architectures and Parallel Algorithms for Video Object Tracking
2011-10-01
The report evaluates scaling video object tracking to larger numbers of cores using IBM QS22 Cell/B.E. Blade processors, which can handle higher video processing workloads and offer more main memory, but at higher cost per core and higher power consumption; detailed performance figures are given for HD and SD video.
Traditional Tracking with Kalman Filter on Parallel Architectures
Cerati, Giuseppe; Elmer, Peter; Lantz, Steven; MacNeill, Ian; McDermott, Kevin; Riley, Dan; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi
2015-05-01
Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The most common track finding techniques in use today are, however, those based on the Kalman Filter. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. We report the results of our investigations into the potential and limitations of these algorithms on the new parallel hardware.
Beam dynamics calculations and particle tracking using massively parallel processors
International Nuclear Information System (INIS)
Ryne, R.D.; Habib, S.
1995-01-01
During the past decade massively parallel processors (MPPs) have slowly gained acceptance within the scientific community. At present these machines typically contain a few hundred to one thousand off-the-shelf microprocessors and a total memory of up to 32 GBytes. The potential performance of these machines is illustrated by the fact that a month-long job on a high-end workstation might require only a few hours on an MPP. The acceptance of MPPs has been slow for a variety of reasons. For example, some algorithms are not easily parallelizable. Also, in the past these machines were difficult to program. But in recent years the development of Fortran-like languages such as CM Fortran and High Performance Fortran has made MPPs much easier to use. In the following we will describe how MPPs can be used for beam dynamics calculations and long-term particle tracking.
Mouse-tracking evidence for parallel anticipatory option evaluation.
Cranford, Edward A; Moss, Jarrod
2017-12-23
In fast-paced, dynamic tasks, the ability to anticipate the future outcome of a sequence of events is crucial to quickly selecting an appropriate course of action among multiple alternative options. There are two classes of theories that describe how anticipation occurs. Serial theories assume options are generated and evaluated one at a time, in order of quality, whereas parallel theories assume simultaneous generation and evaluation. The present research examined the option evaluation process during a task designed to be analogous to prior anticipation tasks, but within the domain of narrative text comprehension. Prior research has relied on indirect, off-line measurement of the option evaluation process during anticipation tasks. Because the movement of the hand can provide a window into underlying cognitive processes, online metrics such as continuous mouse tracking provide more fine-grained measurements of cognitive processing as it occurs in real time. In this study, participants listened to three-sentence stories and predicted the protagonists' final action by moving a mouse toward one of three possible options. Each story was presented with either one (control condition) or two (distractor condition) plausible ending options. Results seem most consistent with a parallel option evaluation process because initial mouse trajectories deviated further from the best option in the distractor condition compared to the control condition. It is difficult to completely rule out all possible serial processing accounts, although the results do place constraints on the time frame in which a serial processing explanation must operate.
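A common way to quantify how strongly a mouse trajectory is drawn toward a competing option is the maximum perpendicular deviation of the path from the straight start-to-target line. This is a hedged sketch of that class of metric, not necessarily the authors' exact measure; coordinates are illustrative.

```python
# Maximum perpendicular deviation of an observed mouse path from the
# straight line connecting the start position to the chosen target.
import math

def max_deviation(path, start, target):
    sx, sy = start
    tx, ty = target
    length = math.hypot(tx - sx, ty - sy)
    dev = 0.0
    for px, py in path:
        # Perpendicular distance from the sample to the start->target line.
        d = abs((tx - sx) * (sy - py) - (sx - px) * (ty - sy)) / length
        dev = max(dev, d)
    return dev

# A path that bows away from the straight line before settling on it.
d = max_deviation([(0, 0), (1, 1), (2, 1), (4, 2)], (0, 0), (4, 2))
```

Larger deviations in the distractor condition are the kind of evidence the study reads as parallel evaluation of the competing option.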
Parallel processing Monte Carlo radiation transport codes
International Nuclear Information System (INIS)
McKinney, G.W.
1994-01-01
Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP, which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine.
Parallel SN transport calculations on a transputer network
International Nuclear Information System (INIS)
Kim, Yong Hee; Cho, Nam Zin
1994-01-01
A parallel computing algorithm for the neutron transport problems has been implemented on a transputer network and two reactor benchmark problems (a fixed-source problem and an eigenvalue problem) are solved. We have shown that the parallel calculations provided significant reduction in execution time over the sequential calculations.
Gauge field governing parallel transport along mixed states
International Nuclear Information System (INIS)
Uhlmann, A.
1990-01-01
At first a short account is given of some basic notations and results on parallel transport along mixed states. A new connection form (gauge field) is introduced to give a geometric meaning to the concept of parallelity in the theory of density operators. (Author) 11 refs
Angular parallelization of a curvilinear Sn transport theory method
International Nuclear Information System (INIS)
Haghighat, A.
1991-01-01
In this paper a parallel algorithm for angular domain decomposition (or parallelization) of an r-dependent spherical Sn transport theory method is derived. The parallel formulation is incorporated into TWOTRAN-II using the IBM Parallel Fortran compiler and implemented on an IBM 3090/400 (with four processors). The behavior of the parallel algorithm for different physical problems is studied, and it is concluded that the parallel algorithm behaves differently in the presence of a fission source as opposed to the absence of a fission source; this is attributed to the relative contributions of the source and the angular redistribution terms in the Sn algorithm. Further, the parallel performance of the algorithm is measured for various problem sizes and different combinations of angular subdomains or processors. Poor parallel efficiencies between ∼35% and 50% are achieved in situations where the relative difference of parallel to serial iterations is ∼50%. High parallel efficiencies between ∼60% and 90% are obtained in situations where the relative difference of parallel to serial iterations is <35%.
Scalable parallel prefix solvers for discrete ordinates transport
International Nuclear Information System (INIS)
Pautz, S.; Pandya, T.; Adams, M.
2009-01-01
The well-known 'sweep' algorithm for inverting the streaming-plus-collision term in first-order deterministic radiation transport calculations has some desirable numerical properties. However, it suffers from parallel scaling issues caused by a lack of concurrency. The maximum degree of concurrency, and thus the maximum parallelism, grows more slowly than the problem size for sweep-based solvers. We investigate a new class of parallel algorithms that involves recasting the streaming-plus-collision problem in prefix form and solving via cyclic reduction. This method, although computationally more expensive at low levels of parallelism than the sweep algorithm, offers better theoretical scalability properties. Previous work has demonstrated this approach for one-dimensional calculations; we show how to extend it to multidimensional calculations. Notably, for multiple dimensions it appears that this approach is limited to long-characteristics discretizations; other discretizations cannot be cast in prefix form. We implement two variants of the algorithm within the radlib/SCEPTRE transport code library at Sandia National Laboratories and show results on two different massively parallel systems. Both the 'forward' and 'symmetric' solvers behave similarly, scaling well to larger degrees of parallelism than sweep-based solvers. We do observe some issues at the highest levels of parallelism (relative to the system size) and discuss possible causes. We conclude that this approach shows good potential for future parallel systems, but the parallel scalability will depend heavily on the architecture of the communication networks of these systems. (authors)
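The recasting in prefix form rests on the observation that a sweep is a chain of cell-by-cell relations of the form ψ[i+1] = a[i]·ψ[i] + b[i], and affine maps compose associatively, so the chain is a scan that cyclic reduction can evaluate in O(log n) parallel steps. A sketch with illustrative coefficients; the scan here is written serially, standing in for the parallel version.

```python
# The sweep recurrence psi[i+1] = a[i]*psi[i] + b[i] as a prefix scan over
# affine maps. Associativity of composition is what enables cyclic reduction.

def compose(f, g):
    """Composition of affine maps (a, b) ~ x -> a*x + b, applying f then g."""
    return (g[0] * f[0], g[0] * f[1] + g[1])

def prefix_scan(maps):
    """Inclusive scan; a parallel solver does this in O(log n) steps."""
    out, acc = [], (1.0, 0.0)  # (1, 0) is the identity map
    for m in maps:
        acc = compose(acc, m)
        out.append(acc)
    return out

def sweep(psi0, maps):
    """The sequential recurrence the scan must reproduce."""
    psi, states = psi0, []
    for a, b in maps:
        psi = a * psi + b
        states.append(psi)
    return states

maps = [(0.5, 1.0), (0.8, 0.2), (0.9, 0.3)]  # illustrative cell coefficients
scanned = [a * 2.0 + b for a, b in prefix_scan(maps)]  # psi0 = 2.0
```

Applying each scanned composite map to ψ0 reproduces the sweep states, which is exactly the equivalence the prefix solvers exploit.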
Directory of Open Access Journals (Sweden)
Schöning André
2016-01-01
Track reconstruction in high-track-multiplicity environments at current and future high-rate particle physics experiments is a big challenge and very time consuming. The search for track seeds and the fitting of track candidates are usually the most time-consuming steps in the track reconstruction. Here, a new and fast track reconstruction method based on hit triplets is proposed which exploits a three-dimensional fit model including multiple scattering and hit uncertainties from the very start, including the search for track seeds. The hit-triplet-based reconstruction method assumes a homogeneous magnetic field, which permits an analytical solution for the triplet fit result. This method is highly parallelizable, needs fewer operations than other standard track reconstruction methods and is therefore ideal for implementation on parallel computing architectures. The proposed track reconstruction algorithm has been studied in the context of the Mu3e experiment and a typical LHC experiment.
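The geometric core of such a triplet fit, ignoring multiple scattering, is that in a homogeneous field a track segment is a circle in the bending plane, so three hits determine the curvature analytically. A sketch via the Menger curvature formula; hit coordinates are illustrative, and the full fit described above is considerably richer.

```python
# Signed curvature (1/R) of the circle through three hits in the bending
# plane, via the Menger formula k = 2 * (twice triangle area) / (abc).
import math

def triplet_curvature(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Twice the signed area of the hit triangle.
    area2 = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)
    a = math.hypot(x2 - x1, y2 - y1)
    b = math.hypot(x3 - x2, y3 - y2)
    c = math.hypot(x3 - x1, y3 - y1)
    return 2.0 * area2 / (a * b * c)

# Three hits on the unit circle: curvature magnitude 1.
k = triplet_curvature((1.0, 0.0), (0.0, 1.0), (-1.0, 0.0))
```

Since each triplet is fit independently, a detector full of hits maps naturally onto many parallel threads, which is the parallelism the method exploits.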
Parallel heat transport in integrable and chaotic magnetic fields
Energy Technology Data Exchange (ETDEWEB)
Castillo-Negrete, D. del; Chacon, L. [Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-8071 (United States)
2012-05-15
The study of transport in magnetized plasmas is a problem of fundamental interest in controlled fusion, space plasmas, and astrophysics research. Three issues make this problem particularly challenging: (i) the extreme anisotropy between the parallel (i.e., along the magnetic field) conductivity χ_∥ and the perpendicular conductivity χ_⊥ (χ_∥/χ_⊥ may exceed 10^10 in fusion plasmas); (ii) nonlocal parallel transport in the limit of small collisionality; and (iii) magnetic field line chaos, which in general complicates (and may preclude) the construction of magnetic field line coordinates. Motivated by these issues, we present a Lagrangian Green's function method to solve the local and non-local parallel transport equation applicable to integrable and chaotic magnetic fields in arbitrary geometry. The method avoids by construction the numerical pollution issues of grid-based algorithms. The potential of the approach is demonstrated with nontrivial applications to integrable (magnetic island), weakly chaotic (Devil's staircase), and fully chaotic magnetic field configurations. For the latter, numerical solutions of the parallel heat transport equation show that the effective radial transport, with local and non-local parallel closures, is non-diffusive, thus casting doubts on the applicability of quasilinear diffusion descriptions. General conditions for the existence of non-diffusive, multivalued flux-gradient relations in the temperature evolution are derived.
A parallel algorithm for 3D particle tracking and Lagrangian trajectory reconstruction
International Nuclear Information System (INIS)
Barker, Douglas; Zhang, Yuanhui; Lifflander, Jonathan; Arya, Anshu
2012-01-01
Particle-tracking methods are widely used in fluid mechanics and multi-target tracking research because of their unique ability to reconstruct long trajectories with high spatial and temporal resolution. Researchers have recently demonstrated 3D tracking of several objects in real time, but as the number of objects is increased, real-time tracking becomes impossible due to data transfer and processing bottlenecks. This problem may be solved by using parallel processing. In this paper, a parallel-processing framework has been developed based on frame decomposition and is programmed using the asynchronous object-oriented Charm++ paradigm. This framework can be a key step in achieving a scalable Lagrangian measurement system for particle-tracking velocimetry and may lead to real-time measurement capabilities. The parallel tracking algorithm was evaluated with three data sets including the particle image velocimetry standard 3D images data set #352, a uniform data set for optimal parallel performance, and a computational-fluid-dynamics-generated non-uniform data set to test trajectory reconstruction accuracy, consistency with the sequential version, and scalability to more than 500 processors. The algorithm showed strong scaling up to 512 processors and no inherent limits of scalability were seen. Ultimately, up to a 200-fold speedup is observed compared to the serial algorithm when 256 processors are used. The parallel algorithm is adaptable and could be easily modified to use any sequential tracking algorithm, which inputs frames of 3D particle location data and outputs particle trajectories.
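A sketch of the kind of sequential kernel such a frame-decomposed framework can wrap: nearest-neighbour linking of particle positions between two consecutive frames. Real PTV linkers are more robust, and the positions and distance cut below are illustrative.

```python
# Frame decomposition: each worker links particle positions between one
# pair of consecutive frames independently; trajectories are stitched
# together afterwards. This is the simplest possible per-pair kernel.
import math

def link_frames(frame_a, frame_b, max_dist=1.0):
    """Match each particle in frame_a to its nearest unclaimed one in frame_b."""
    links, taken = {}, set()
    for i, pa in enumerate(frame_a):
        best, best_d = None, max_dist
        for j, pb in enumerate(frame_b):
            if j in taken:
                continue
            d = math.dist(pa, pb)  # 3D Euclidean distance
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            links[i] = best
            taken.add(best)
    return links

links = link_frames([(0, 0, 0), (5, 5, 5)], [(0.1, 0, 0), (5.2, 5, 5)])
```

Because each frame pair is independent, the pairs can be distributed across processors exactly as the framework's frame decomposition does.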
Load Balancing of Parallel Monte Carlo Transport Calculations
International Nuclear Information System (INIS)
Procassini, R J; O'Brien, M J; Taylor, J M
2005-01-01
The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle work load varies over the course of the simulation, this algorithm determines each cycle whether dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations.
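The per-cycle assignment decision can be sketched as a greedy balance: repeatedly grant the next processor to the domain with the highest particle workload per assigned processor. This is a hedged sketch of the idea, not the paper's actual algorithm; workloads are illustrative.

```python
# Greedy processor assignment: each extra processor goes to the domain
# whose load per assigned processor is currently highest.
import heapq

def assign_processors(workloads, n_procs):
    """Return processors per domain; every domain keeps at least one."""
    n = len(workloads)
    assert n_procs >= n
    counts = [1] * n
    # Max-heap keyed on load per processor (negated for heapq's min-heap).
    heap = [(-workloads[i] / counts[i], i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(n_procs - n):
        _, i = heapq.heappop(heap)
        counts[i] += 1
        heapq.heappush(heap, (-workloads[i] / counts[i], i))
    return counts

# One heavily worked domain and one light one, sharing 10 processors.
counts = assign_processors([900.0, 100.0], 10)
```

In the actual code the decision is made each cycle from measured particle counts, and rebalancing is only triggered when the predicted savings exceed the communication cost.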
Dynamic Load Balancing of Parallel Monte Carlo Transport Calculations
International Nuclear Information System (INIS)
O'Brien, M; Taylor, J; Procassini, R
2004-01-01
The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle work load varies over the course of the simulation, this algorithm determines each cycle whether dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations.
MINARET: Towards a time-dependent neutron transport parallel solver
International Nuclear Information System (INIS)
Baudron, A.M.; Lautard, J.J.; Maday, Y.; Mula, O.
2013-01-01
We present the newly developed time-dependent 3D multigroup discrete ordinates neutron transport solver that has recently been implemented in the MINARET code. The solver is the support for a study of computing acceleration techniques that involve parallel architectures. In this work, we focus on the parallelization of two of the variables involved in our equation: the angular directions and the time. This last variable has been parallelized by a (time) domain decomposition method called the parareal-in-time algorithm. (authors)
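The parareal (para-real in time) idea can be sketched on a scalar test equation y' = -y: a cheap coarse propagator sweeps the time slices serially, an accurate fine propagator runs independently (hence in parallel) on each slice, and a correction sweep is iterated. The propagators and step counts below are illustrative, not MINARET's.

```python
# Parareal iteration: y[n+1] = G(y_new[n]) + F(y_old[n]) - G(y_old[n]),
# where G is a cheap coarse propagator and F an accurate fine one.
import math

def coarse(y, dt):               # one explicit Euler step
    return y * (1.0 - dt)

def fine(y, dt, substeps=20):    # many small Euler steps; per-slice, parallel
    h = dt / substeps
    for _ in range(substeps):
        y = y * (1.0 - h)
    return y

def parareal(y0, dt, n_slices, iterations):
    y = [y0]
    for n in range(n_slices):    # initial serial coarse sweep
        y.append(coarse(y[n], dt))
    for _ in range(iterations):
        f = [fine(y[n], dt) for n in range(n_slices)]      # parallelizable
        g_old = [coarse(y[n], dt) for n in range(n_slices)]
        new = [y0]
        for n in range(n_slices):                          # serial correction
            new.append(coarse(new[n], dt) + f[n] - g_old[n])
        y = new
    return y[-1]

approx = parareal(1.0, 0.1, n_slices=10, iterations=3)
exact = math.exp(-1.0)
```

After k iterations the parareal solution is exact on the first k slices, so a few iterations already recover the fine solver's accuracy at a fraction of its serial cost.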
'Iconic' tracking algorithms for high energy physics using the TRAX-I massively parallel processor
International Nuclear Information System (INIS)
Vesztergombi, G.
1989-01-01
TRAX-I, a cost-effective parallel microcomputer, applying associative string processor (ASP) architecture with 16 K parallel processing elements, is being built by Aspex Microsystems Ltd. (UK). When applied to the tracking problem of very complex events with several hundred tracks, the large number of processors allows one to dedicate one or more processors to each wire (in MWPC), each pixel (in digitized images from streamer chambers or other visual detectors), or each pad (in TPC) to perform very efficient pattern recognition. Some linear tracking algorithms based on this 'iconic' representation are presented. (orig.)
Local and Nonlocal Parallel Heat Transport in General Magnetic Fields
International Nuclear Information System (INIS)
Castillo-Negrete, D. del; Chacon, L.
2011-01-01
A novel approach for the study of parallel transport in magnetized plasmas is presented. The method avoids numerical pollution issues of grid-based formulations and applies to integrable and chaotic magnetic fields with local or nonlocal parallel closures. In weakly chaotic fields, the method gives the fractal structure of the devil's staircase radial temperature profile. In fully chaotic fields, the temperature exhibits self-similar spatiotemporal evolution with a stretched-exponential scaling function for local closures and an algebraically decaying one for nonlocal closures. It is shown that, for both closures, the effective radial heat transport is incompatible with the quasilinear diffusion model.
Parallel processing of neutron transport in fuel assembly calculation
International Nuclear Information System (INIS)
Song, Jae Seung
1992-02-01
Group constants, which are used for reactor analyses by the nodal method, are generated by fuel assembly calculations based on neutron transport theory, since one fuel assembly, or a quarter of one, corresponds to a unit mesh in the current nodal calculation. The group constant calculation for a fuel assembly is performed through spectrum calculations, a two-dimensional fuel assembly calculation, and depletion calculations. The purpose of this study is to develop a parallel algorithm to be used in a parallel processor for the fuel assembly calculation and the depletion calculations of the group constant generation. A serial program, which solves the neutron integral transport equation using the transmission probability method and the linear depletion equation, was prepared and verified by a benchmark calculation. Small changes to the serial program were enough to parallelize the depletion calculation, which has inherent parallel characteristics. In the fuel assembly calculation, however, efficient parallelization is not simple, because of the many coupling parameters in the calculation and the data communications among CPUs. In this study, the group distribution method is introduced for the parallel processing of the fuel assembly calculation to minimize the data communications. The parallel processing was performed on a Quadputer with 4 CPUs operating in the NURAD Lab. at KAIST. Efficiencies of 54.3% and 78.0% were obtained in the fuel assembly calculation and depletion calculation, respectively, leading to an overall speedup of about 2.5. As a result, it is concluded that the computing time consumed for the group constant generation can be easily reduced by parallel processing on a parallel computer with small-size CPUs.
Parallel Kalman filter track fit based on vector classes
Energy Technology Data Exchange (ETDEWEB)
Kisel, Ivan [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH (Germany)]; Kretz, Matthias [Kirchhoff-Institut fuer Physik, Ruprecht-Karls Universitaet, Heidelberg (Germany)]; Kulakov, Igor [Goethe-Universitaet, Frankfurt am Main (Germany); National Taras Shevchenko University, Kyiv (Ukraine)]
2010-07-01
Modern high energy physics experiments have to process terabytes of input data produced in particle collisions. The core of the data reconstruction in high energy physics is the Kalman filter. Therefore, developing a fast Kalman filter algorithm, which uses the maximum available power of modern processors, is important, in particular for the initial selection of events interesting for new physics. One processor feature which can speed up the algorithm is a SIMD instruction set, which makes it possible to pack several data items into one register and operate on all of them in one go, thus achieving more operations per clock cycle. Therefore a flexible and useful interface, which uses the SIMD instruction set on different CPU and GPU processor architectures, has been realized as a vector classes library. The Kalman filter based track fitting algorithm has been implemented using the vector classes. Fitting quality tests show good results, with residuals equal to 49 μm and 44 μm for the x and y track parameters and a relative momentum resolution of 0.7%. A fitting time of 0.053 μs per track has been achieved on an Intel Xeon X5550 with 8 cores at 2.6 GHz, using in addition Intel Threading Building Blocks.
Parallelization of a spherical Sn transport theory algorithm
International Nuclear Information System (INIS)
Haghighat, A.
1989-01-01
The work described in this paper derives a parallel algorithm for an R-dependent spherical S_N transport theory algorithm and studies its performance on different sample problems. The S_N transport method is one of the most accurate techniques used to solve the linear Boltzmann equation. Several studies have been done on the vectorization of S_N algorithms; however, very few have addressed the parallelization of this algorithm. Weinke and Hommoto have looked at parallel processing of the different energy groups, and Azmy recently studied parallel processing of the inner iterations of an X-Y S_N nodal transport theory method. Both studies reported very encouraging results, which prompted us to look at parallel processing of an R-dependent S_N spherical geometry algorithm. This geometry was chosen because, in spite of its simplicity, it contains the complications of curvilinear geometries (i.e., redistribution of neutrons over the discretized angular bins).
An ion beam tracking system based on a parallel plate avalanche counter
International Nuclear Information System (INIS)
Carter, I. P.; Ramachandran, K.; Dasgupta, M.; Hinde, D. J.; Rafiei, R.; Luong, D. H.; Williams, E.; Cook, K. J.; McNeil, S.; Rafferty, D. C.; Harding, A. B.; Muirhead, A. G.; Tunningley, T.
2013-01-01
A pair of twin position-sensitive parallel plate avalanche counters has been developed at the Australian National University as a tracking system to aid in the further rejection of unwanted beam particles from a 6.5 T superconducting solenoid separator named SOLEROO. Their function is to track and identify each beam particle passing through the detectors on an event-by-event basis. In-beam studies have been completed and the detectors are in successful operation, demonstrating the tracking capability. A high-efficiency 512-pixel wide-angle silicon detector array will next be integrated with the tracking system for nuclear reaction studies with radioactive ions. (authors)
Inter-dot coupling effects on transport through correlated parallel
Indian Academy of Sciences (India)
Transport through a symmetric parallel-coupled quantum dot system has been studied using the non-equilibrium Green function formalism. The inter-dot tunnelling with on-dot and inter-dot Coulomb repulsion is included. The transmission coefficient and a Landauer–Büttiker-like current formula are expressed in terms of internal states ...
Parallel processing of two-dimensional Sn transport calculations
International Nuclear Information System (INIS)
Uematsu, M.
1997-01-01
A parallel processing method for the two-dimensional S_N transport code DOT3.5 has been developed to achieve a drastic reduction in computation time. In the proposed method, parallelization is achieved with angular domain decomposition and/or space domain decomposition. The calculational speed of parallel processing by angular domain decomposition is largely influenced by frequent communications between processing elements. To assess parallelization efficiency, sample problems with up to 32 x 32 spatial meshes were solved with a Sun workstation using the PVM message-passing library. As a result, parallel calculation using 16 processing elements, for example, was found to be nine times as fast as that with one processing element. As for parallel processing by geometry segmentation, the influence of processing element communications on computation time is small; however, discontinuity at the segment boundary degrades convergence speed. To accelerate the convergence, an alternate sweep of angular flux in conjunction with space domain decomposition and a two-step rescaling method consisting of segmentwise rescaling and ordinary pointwise rescaling have been developed. By applying the developed method, the number of iterations needed to obtain a converged flux solution was reduced by a factor of 2. As a result, parallel calculation using 16 processing elements was found to be 5.98 times as fast as the original DOT3.5 calculation.
Parallel computing solution of Boltzmann neutron transport equation
International Nuclear Information System (INIS)
Ansah-Narh, T.
2010-01-01
The focus of the research was on developing a parallel computing algorithm for solving eigenvalues of the Boltzmann Neutron Transport Equation (BNTE) in slab geometry using a multigrid approach. In response to the slow execution of serial computing on large problems such as the BNTE, the study focused on the design of parallel computing systems, which use multiple processing elements simultaneously to solve complex physical and mathematical problems. The finite element method (FEM) was used for the spatial discretization, while angular discretization was accomplished by expanding the angular dependence in terms of Legendre polynomials. The eigenvalues representing the multiplication factors in the BNTE were determined by the power method. MATLAB Compiler Version 4.1 (R2009a) was used to compile the MATLAB codes for the BNTE. The implemented parallel algorithms were enabled with matlabpool, a Parallel Computing Toolbox function, with the option UseParallel set to 'always' (its default value being 'never'); under these conditions the solvers compute estimated gradients in parallel. The parallel computing system was used to handle the bottlenecks in the matrix generated by the finite element scheme and in each domain generated by the power method. The parallel algorithm was implemented on a Symmetric Multi-Processor (SMP) cluster machine with Intel 32-bit quad-core x86 processors. Convergence rates and timings for the algorithm on the SMP cluster machine were obtained. Numerical experiments indicated that the designed parallel algorithm could reach perfect speedup and had good stability and scalability. (au)
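The power method mentioned above can be sketched in a few lines. In the sketch below the small symmetric matrix is a toy stand-in for the FEM-discretized operator, and the matlabpool-style parallelism is omitted:

```python
import numpy as np

def power_method(A, tol=1e-12, max_iter=10_000):
    """Dominant eigenvalue by the power method, the scheme used for the
    multiplication factor: repeatedly apply A and renormalize until the
    eigenvalue estimate stops changing."""
    x = np.ones(A.shape[0])
    k_old = 0.0
    for _ in range(max_iter):
        y = A @ x
        k = np.linalg.norm(y)      # eigenvalue estimate (|lambda_max|)
        x = y / k                  # renormalized iterate
        if abs(k - k_old) < tol:
            break
        k_old = k
    return k, x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])         # toy operator; lambda_max = (7 + sqrt(5)) / 2
k_dom, mode = power_method(A)
```

Each iteration is one operator application, which is exactly the step that gets distributed across processing elements in the parallel version.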
Simulation of neutron transport equation using parallel Monte Carlo for deep penetration problems
International Nuclear Information System (INIS)
Bekar, K. K.; Tombakoglu, M.; Soekmen, C. N.
2001-01-01
The neutron transport equation is simulated using a parallel Monte Carlo method for a deep penetration neutron transport problem. The Monte Carlo simulation is parallelized using three different techniques: direct parallelization, domain decomposition, and domain decomposition with load balancing, implemented with PVM (Parallel Virtual Machine) software on a LAN (Local Area Network). Results of the parallel simulation are given for various model problems. The performances of the parallelization techniques are compared with each other. Moreover, the effects of variance reduction techniques on parallelization are discussed.
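Direct parallelization of Monte Carlo amounts to splitting the histories into independent chunks, each with its own random stream, and reducing the tallies at the end. A minimal sketch follows (a one-flight slab-transmission toy, not the authors' deep penetration problem); the serial `map` marks the spot where PVM tasks or a process pool would run the chunks concurrently:

```python
import math
import random

SLAB = 2.0  # slab thickness in mean free paths (illustrative)

def chunk_tally(n_histories, seed):
    """One worker's independent share of histories: its own seeded stream,
    no communication with other chunks until the final reduction."""
    rng = random.Random(seed)
    # One free flight per history; the particle penetrates the slab if its
    # exponentially distributed path length exceeds SLAB.
    return sum(1 for _ in range(n_histories)
               if -math.log(1.0 - rng.random()) > SLAB)

def transmission(n_total, n_chunks):
    """Direct parallelization: split histories into chunks, then reduce.
    The serial map below is where PVM tasks (or multiprocessing.Pool.map)
    would execute the chunks in parallel."""
    n_per = n_total // n_chunks
    tallies = map(chunk_tally, [n_per] * n_chunks, range(1, n_chunks + 1))
    return sum(tallies) / (n_per * n_chunks)

p_est = transmission(200_000, 4)   # analytic answer: exp(-2) ~ 0.135
```

Because the chunks share no state, the estimate is reproducible for any worker count, which is what makes this decomposition embarrassingly parallel.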
Hybrid shared/distributed parallelism for 3D characteristics transport solvers
International Nuclear Information System (INIS)
Dahmani, M.; Roy, R.
2005-01-01
In this paper, we present a new hybrid parallel model for solving large-scale 3-dimensional neutron transport problems used in nuclear reactor simulations. Large heterogeneous reactor problems, like the ones that occur when simulating CANDU cores, have remained computationally intensive and impractical for routine applications on single-node or even vector computers. Based on the method of characteristics, this new model is designed to solve the transport equation after distributing the calculation load over a network of shared-memory multiprocessors. The tracks are either generated on the fly at each characteristics sweep or stored in sequential files. Load balancing is taken into account by estimating the calculation load of the tracks and by distributing batches of uniform load to each node of the network. Moreover, the communication overhead can be predicted after benchmarking the latency and bandwidth with an appropriate network test suite. These models are useful for predicting the performance of the parallel application and for analyzing the scalability of the parallel system. (authors)
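Distributing batches of uniform load from estimated per-track costs can be sketched as a greedy least-loaded assignment. The heap-based version below is an illustrative stand-in, not the authors' scheduler; the load values are hypothetical:

```python
import heapq

def balance(batch_loads, n_nodes):
    """Greedy longest-processing-time assignment: sort batches by estimated
    calculation load, then repeatedly give the next batch to the currently
    least-loaded node, so node totals stay as uniform as possible."""
    heap = [(0.0, node) for node in range(n_nodes)]   # (total load, node id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_nodes)]
    for load in sorted(batch_loads, reverse=True):
        total, node = heapq.heappop(heap)             # least-loaded node
        assignment[node].append(load)
        heapq.heappush(heap, (total + load, node))
    return assignment

# Hypothetical estimated track-batch loads spread over 2 nodes:
nodes = balance([5, 4, 3, 3, 2, 2, 1], 2)
```

For these toy loads the greedy rule splits the total work of 20 into two equal halves of 10, illustrating the uniform-load goal described above.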
Sn transport calculations on vector and parallel processors
International Nuclear Information System (INIS)
Rhoades, W.A.; Childs, R.L.
1987-01-01
The transport of radiation from a source to the location of people or equipment gives rise to some of the most challenging calculations. A problem may involve as many as a billion unknowns, each evaluated several times to resolve interdependence. Such calculations run many hours on a Cray computer, and a typical study involves many such calculations. This paper discusses the steps taken to vectorize the DOT code, which solves transport problems in two space dimensions (2-D); the extension of this code to 3-D; and the plans for extension to parallel processors.
Parallel computing for homogeneous diffusion and transport equations in neutronics
International Nuclear Information System (INIS)
Pinchedez, K.
1999-06-01
Parallel computing meets the ever-increasing requirements for neutronic computer code speed and accuracy. In this work, two different approaches have been considered. We first parallelized the sequential algorithm used by the neutronics code CRONOS, developed at the French Atomic Energy Commission. The algorithm computes the dominant eigenvalue associated with the simplified P_N transport equations by a mixed finite element method. Several parallel algorithms were developed on distributed-memory machines. The performance of the parallel algorithms was studied experimentally by implementation on a Cray T3D and theoretically by complexity models. A comparison of the various parallel algorithms confirmed the chosen implementations. We next applied a domain subdivision technique to the two-group diffusion eigenvalue problem. In the modal-synthesis-based method, the global spectrum is determined from the partial spectra associated with the subdomains. The eigenvalue problem is then expanded on a family composed, on the one hand, of eigenfunctions associated with the subdomains and, on the other hand, of functions corresponding to the contribution from the interfaces between the subdomains. For a 2-D homogeneous core, this modal method has been validated and its accuracy has been measured. (author)
SMEs in Energy: Are they the parallel fast track for electrification of Africa?
Energy Technology Data Exchange (ETDEWEB)
Abdel-Rahman, Mohamed
2010-09-15
The African continent is suffering from a chronic energy shortage that hinders its development. The conventional wisdom is to put mega projects under focus. However, a parallel fast track treating energy as an SME business may bring faster results to the continent. To that end, this paper presents proposed steps to promote the concept within the continent.
Fast parallel tracking algorithm for the muon detector of the CBM experiment at FAIR
International Nuclear Information System (INIS)
Lebedev, A.; Hoehne, C.; Kisel', I.; Ososkov, G.
2010-01-01
Particle trajectory recognition is an important and challenging task in the Compressed Baryonic Matter (CBM) experiment at the future FAIR accelerator at Darmstadt. The tracking algorithms have to process terabytes of input data produced in particle collisions; the speed of the tracking software is therefore extremely important for data analysis. In this contribution, a fast parallel track reconstruction algorithm which uses available features of modern processors is presented. These features comprise a SIMD instruction set (SSE) and multithreading. The first allows one to pack several data items into one register and to operate on all of them in parallel, thus achieving more operations per cycle. The second enables the routines to exploit all available CPU cores and hardware threads. This parallel version of the tracking algorithm has been compared to the initial serial scalar version, which uses a similar approach for tracking. A speed-up factor of 487 was achieved (from 730 to 1.5 ms/event) on a computer with two Intel Core i7 processors at 2.66 GHz.
Transport through track etched polymeric blend membrane
Indian Academy of Sciences (India)
Unknown
Department of Physics, University of Rajasthan, Jaipur 302 004, India. MS received 10 June 2005 ... Both the track and bulk etching takes place in the irradiated membrane. ... using rotating flywheel attachment, the details having been given ...
Provably optimal parallel transport sweeps on regular grids
Energy Technology Data Exchange (ETDEWEB)
Adams, M. P.; Adams, M. L.; Hawkins, W. D. [Dept. of Nuclear Engineering, Texas A and M University, 3133 TAMU, College Station, TX 77843-3133 (United States); Smith, T.; Rauchwerger, L.; Amato, N. M. [Dept. of Computer Science and Engineering, Texas A and M University, 3133 TAMU, College Station, TX 77843-3133 (United States); Bailey, T. S.; Falgout, R. D. [Lawrence Livermore National Laboratory (United States)
2013-07-01
We have found provably optimal algorithms for full-domain discrete-ordinates transport sweeps on regular grids in 3D Cartesian geometry. We describe these algorithms and sketch a proof that they always execute the full eight-octant sweep in the minimum possible number of stages for a given P_x x P_y x P_z partitioning. Computational results demonstrate that our optimal scheduling algorithms execute sweeps in the minimum possible stage count. Observed parallel efficiencies agree well with our performance model. An older version of our PDT transport code achieves almost 80% parallel efficiency on 131,072 cores, on a weak-scaling problem with only one energy group, 80 directions, and 4096 cells/core. A newer version is less efficient at present, as we are still improving its implementation, but achieves almost 60% parallel efficiency on 393,216 cores. These results conclusively demonstrate that sweeps can perform with high efficiency on core counts approaching 10^6. (authors)
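For a single octant, the stage count of a wavefront sweep is easy to reproduce: processor (i, j, k) can start only after its upstream neighbors, so a P_x x P_y x P_z grid needs P_x + P_y + P_z - 2 stages. The sketch below simulates this dependence; note that the paper's optimal schedules additionally overlap all eight octants, which this single-octant toy does not attempt:

```python
def sweep_stages(px, py, pz):
    """Stage count for one octant of a wavefront sweep on a px*py*pz
    processor grid. Processor (i, j, k) runs one stage after the latest of
    its upstream neighbors (i-1, j, k), (i, j-1, k), (i, j, k-1), which
    puts it at stage i + j + k; the total is px + py + pz - 2 stages."""
    stage = {}
    for i in range(px):
        for j in range(py):
            for k in range(pz):
                upstream = [stage.get(n, -1)
                            for n in ((i - 1, j, k), (i, j - 1, k), (i, j, k - 1))]
                stage[(i, j, k)] = max(upstream) + 1
    return max(stage.values()) + 1
```

Counting stages this way is the starting point for the performance models against which the observed efficiencies are compared.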
Parallel MCNP Monte Carlo transport calculations with MPI
International Nuclear Information System (INIS)
Wagner, J.C.; Haghighat, A.
1996-01-01
The steady increase in computational performance has made Monte Carlo calculations for large/complex systems possible. However, to make these calculations practical, order-of-magnitude increases in performance are necessary. The Monte Carlo method is inherently parallel (particles are simulated independently) and thus has the potential for near-linear speedup with respect to the number of processors. Further, the ever-increasing accessibility of parallel computers, such as workstation clusters, facilitates the practical use of parallel Monte Carlo. Recognizing the nature of the Monte Carlo method and the trends in available computing, the code developers at Los Alamos National Laboratory implemented a message-passing version of the general-purpose Monte Carlo radiation transport code MCNP (version 4A). The PVM package was chosen by the MCNP code developers because it supports a variety of communication networks, several UNIX platforms, and heterogeneous computer systems. This PVM version of MCNP has been shown to produce speedups that approach the number of processors and thus is a very useful tool for transport analysis. Due to software incompatibilities on the local IBM SP2, however, PVM has not been available, making it impossible to take advantage of this useful tool. Hence, it became necessary to implement an alternative message-passing library in MCNP. The message-passing interface (MPI) was selected because it is supported on the local system, takes advantage of the high-speed communication switches in the SP2, and is considered to be the emerging standard.
Novel Parallel Numerical Methods for Radiation and Neutron Transport
International Nuclear Information System (INIS)
Brown, P N
2001-01-01
In many of the multiphysics simulations performed at LLNL, transport calculations can take up 30 to 50% of the total run time. If Monte Carlo methods are used, the percentage can be as high as 80%. Thus, a significant core competence in the formulation, software implementation, and solution of the numerical problems arising in transport modeling is essential to Laboratory and DOE research. In this project, we worked on developing scalable solution methods for the equations that model the transport of photons and neutrons through materials. Our goal was to reduce the transport solve time in these simulations by means of more advanced numerical methods and their parallel implementations. These methods must be scalable; that is, the time to solution must remain constant as the problem size grows and additional computer resources are used. For iterative methods, scalability requires that (1) the number of iterations to reach convergence be independent of problem size, and (2) the computational cost grow linearly with problem size. We focused on deterministic approaches to transport, building on our earlier work, in which we performed a new, detailed analysis of some existing transport methods and developed new approaches. The Boltzmann equation (the underlying equation to be solved) and various solution methods have been developed over many years; consequently, many laboratory codes are based on these methods, which are in some cases decades old. For the transport of x-rays through partially ionized plasmas in local thermodynamic equilibrium, the transport equation is coupled to nonlinear diffusion equations for the electron and ion temperatures via the highly nonlinear Planck function. We investigated the suitability of traditional solution approaches to transport on terascale architectures and also designed new scalable algorithms; in some cases, we investigated hybrid approaches that combine both.
Time-dependent deterministic transport on parallel architectures using PARTISN
International Nuclear Information System (INIS)
Alcouffe, R.E.; Baker, R.S.
1998-01-01
In addition to solving the static transport equation, the authors have also incorporated time dependence into the parallel S_N code PARTISN. Using a semi-implicit scheme, PARTISN is capable of performing time-dependent calculations for both fissioning and pure source-driven problems. They have applied this to various types of problems such as shielding and prompt fission experiments. This paper describes the form of the time-dependent equations implemented, their solution strategies in PARTISN including iteration acceleration, and the strategies used for time-step control. Results are presented for an iron-water shielding calculation and a criticality excursion in a uranium solution configuration.
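The appeal of semi-implicit time stepping is that treating the loss term implicitly keeps the update stable for any time step. A much-simplified scalar analogue (a zero-dimensional balance equation, not the PARTISN scheme itself; all coefficients are illustrative):

```python
def step_semi_implicit(phi, dt, sigma_v, q):
    """One semi-implicit step for the toy balance d(phi)/dt = -sigma_v*phi + q.
    The loss term is evaluated at the new time level, so the update
    phi_new = (phi + dt*q) / (1 + dt*sigma_v) is stable for any dt."""
    return (phi + dt * q) / (1.0 + dt * sigma_v)

# Pure source-driven problem: phi relaxes to the steady state q / sigma_v = 2.
phi = 0.0
dt, sigma_v, q = 0.5, 2.0, 4.0
for _ in range(200):
    phi = step_semi_implicit(phi, dt, sigma_v, q)
```

An explicit update, phi + dt*(q - sigma_v*phi), would oscillate or diverge once dt*sigma_v exceeds 2, while the implicit form above converges monotonically for any step size; step-size control then becomes an accuracy question rather than a stability one.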
Parallel 4-dimensional cellular automaton track finder for the CBM experiment
Energy Technology Data Exchange (ETDEWEB)
Akishina, Valentina [Goethe-Universitaet Frankfurt am Main, Frankfurt am Main (Germany); Frankfurt Institute for Advanced Studies, Frankfurt am Main (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); JINR Joint Institute for Nuclear Research, Dubna (Russian Federation); Kisel, Ivan [Goethe-Universitaet Frankfurt am Main, Frankfurt am Main (Germany); Frankfurt Institute for Advanced Studies, Frankfurt am Main (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Collaboration: CBM-Collaboration
2016-07-01
The CBM experiment at FAIR will focus on the measurement of rare probes at interaction rates up to 10 MHz. The beam will provide a free stream of particles, so that information from different collisions may overlap in time. This requires full online event reconstruction not only in space, but also in time, so-called 4D (4-dimensional) event building. This is a task of the First-Level Event Selection (FLES) package. The FLES reconstruction package consists of several modules: track finding, track fitting, short-lived particle finding, event building, and selection. The Silicon Tracking System (STS) time measurement information was included in the Cellular Automaton (CA) track finder algorithm. The speed (8.5 ms per event in a time-slice) and efficiency of the 4D track finder algorithm are comparable with the event-based analysis. The CA track finder was fully parallelized inside the time-slice. The parallel version achieves a speed-up factor of 10.6 when parallelizing over 10 Intel Xeon physical cores with hyper-threading. The first version of event building based on the 4D track finder was implemented.
Advective isotope transport by mixing cell and particle tracking algorithms
International Nuclear Information System (INIS)
Tezcan, L.; Meric, T.
1999-01-01
The 'mixing cell' algorithm for environmental isotope data evaluation is integrated with the three-dimensional finite difference ground water flow model (MODFLOW) to simulate advective isotope transport, and the approach is compared with the 'particle tracking' algorithm of MOC3D, which simulates three-dimensional solute transport with the method of characteristics.
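The mixing-cell idea treats the flow domain as a cascade of well-mixed cells that exchange a fraction of their volume each step. A one-dimensional toy version (not the MODFLOW-coupled implementation; cell count and flow fraction are assumptions for illustration):

```python
def mixing_cell_step(conc, inflow_conc, flow_fraction):
    """One advective step of a 1D mixing-cell cascade: each cell receives
    flow_fraction of its volume from its upstream neighbor (the first cell
    from the boundary inflow) and is assumed fully mixed afterwards."""
    new = []
    upstream = inflow_conc
    for c in conc:
        new.append(c + flow_fraction * (upstream - c))  # mix inflow parcel
        upstream = c                                    # pre-mix value advects on
    return new

# A step change in inflow isotope concentration flushes down the cascade.
cells = [0.0] * 5
for _ in range(400):
    cells = mixing_cell_step(cells, 1.0, 0.5)
```

Each step is equivalent to explicit upwind advection with Courant number equal to the flow fraction, which is why the scheme stays stable for fractions up to 1.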
Knoeferle, Pia; Crocker, Matthew W
2009-12-01
Reading times for the second conjunct of and-coordinated clauses are faster when the second conjunct parallels the first conjunct in its syntactic or semantic (animacy) structure than when its structure differs (Frazier, Munn, & Clifton, 2000; Frazier, Taft, Roeper, & Clifton, 1984). What remains unclear, however, is the time course of parallelism effects, their scope, and the kinds of linguistic information to which they are sensitive. Findings from the first two eye-tracking experiments revealed incremental constituent order parallelism across the board: both during structural disambiguation (Experiment 1) and in sentences with unambiguously case-marked constituent order (Experiment 2), as well as for both marked and unmarked constituent orders (Experiments 1 and 2). Findings from Experiment 3 revealed effects of both constituent order and subtle semantic (noun phrase similarity) parallelism. Together our findings provide evidence for an across-the-board account of parallelism in the processing of and-coordinated clauses, in which both constituent order and semantic aspects of representations contribute towards incremental parallelism effects. We discuss our findings in the context of existing findings on parallelism and priming, as well as mechanisms of sentence processing.
Improved parallel solution techniques for the integral transport matrix method
Energy Technology Data Exchange (ETDEWEB)
Zerr, R. Joseph, E-mail: rjz116@psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA (United States); Azmy, Yousry Y., E-mail: yyazmy@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Burlington Engineering Laboratories, Raleigh, NC (United States)
2011-07-01
Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method (ITMM) operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple subdomains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10x when eight subdomains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because the subdomains decouple, yielding faster convergence. Further tests revealed that 64 subdomains per processor was the best-performing level of subdomain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver, but could be improved by an as yet undeveloped, more efficient preconditioner. (author)
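The Jacobi versus red-black Gauss-Seidel trade-off can be seen on a standard model problem. The sketch below uses a 1D Poisson equation rather than the ITMM operators, so it only illustrates the iteration-count advantage: plain Jacobi updates every point from old values (the PBJ analogue), while red-black ordering updates one color and then the other using the fresh values, with each color still embarrassingly parallel (the PGS analogue):

```python
import numpy as np

def iteration_count(n, tol, red_black):
    """Iterations to converge the 1D model problem u'' = -1, u(0) = u(1) = 0,
    via the point update u_i = (u_{i-1} + u_{i+1} + h^2) / 2."""
    h2 = (1.0 / (n + 1)) ** 2
    u = np.zeros(n + 2)                                   # includes boundary zeros
    for it in range(1, 100_000):
        old = u.copy()
        if red_black:
            u[1:-1:2] = 0.5 * (u[0:-2:2] + u[2::2] + h2)  # red (odd) points
            u[2:-1:2] = 0.5 * (u[1:-2:2] + u[3::2] + h2)  # black points, fresh reds
        else:
            u[1:-1] = 0.5 * (old[:-2] + old[2:] + h2)     # Jacobi: all old values
        if np.max(np.abs(u - old)) < tol:
            return it
    return -1
```

For this problem the Gauss-Seidel iteration matrix has the square of the Jacobi spectral radius, so red-black roughly halves the iteration count, consistent with PGS improving on PBJ.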
Energy Technology Data Exchange (ETDEWEB)
Lasuik, J.; Shalchi, A., E-mail: andreasm4@yahoo.com [Department of Physics and Astronomy, University of Manitoba, Winnipeg, MB R3T 2N2 (Canada)
2017-09-20
Recently, a new theory for the transport of energetic particles across a mean magnetic field was presented. Compared to other nonlinear theories the new approach has the advantage that it provides a full time-dependent description of the transport. Furthermore, a diffusion approximation is no longer part of that theory. The purpose of this paper is to combine this new approach with a time-dependent model for parallel transport and different turbulence configurations in order to explore the parameter regimes for which we get ballistic transport, compound subdiffusion, and normal Markovian diffusion.
Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Architectures
Energy Technology Data Exchange (ETDEWEB)
Cerati, Giuseppe [Fermilab; Elmer, Peter [Princeton U.; Krutelyov, Slava [UC, San Diego; Lantz, Steven [Cornell U., Phys. Dept.; Lefebvre, Matthieu [Princeton U.; Masciovecchio, Mario [UC, San Diego; McDermott, Kevin [Cornell U., Phys. Dept.; Riley, Daniel [Cornell U., Phys. Dept.; Tadel, Matevž [UC, San Diego; Wittich, Peter [Cornell U., Phys. Dept.; Würthwein, Frank [UC, San Diego; Yagil, Avi [UC, San Diego
2017-11-16
Faced with physical and energy density limitations on clock speed, contemporary microprocessor designers have increasingly turned to on-chip parallelism for performance gains. Examples include the Intel Xeon Phi, GPGPUs, and similar technologies. Algorithms should accordingly be designed with ample amounts of fine-grained parallelism if they are to realize the full performance of the hardware. This requirement can be challenging for algorithms that are naturally expressed as a sequence of small-matrix operations, such as the Kalman filter methods widely in use in high-energy physics experiments. In the High-Luminosity Large Hadron Collider (HL-LHC), for example, one of the dominant computational problems is expected to be finding and fitting charged-particle tracks during event reconstruction; today, the most common track-finding methods are those based on the Kalman filter. Experience at the LHC, both in the trigger and offline, has shown that these methods are robust and provide high physics performance. Previously we reported the significant parallel speedups that resulted from our efforts to adapt Kalman-filter-based tracking to many-core architectures such as Intel Xeon Phi. Here we report on how effectively those techniques can be applied to more realistic detector configurations and event complexity.
International Nuclear Information System (INIS)
Deng Li; Xie Zhongsheng
1999-01-01
The coupled neutron and photon transport Monte Carlo code MCNP (version 3B) has been parallelized under the Parallel Virtual Machine (PVM) and the Message Passing Interface (MPI) by modifying the previous serial code. The new code has been verified by solving sample problems. The speedup increases linearly with the number of processors and the average efficiency is up to 99% for 12 processors. (author)
Bao, Jian; Lau, Calvin; Kuley, Animesh; Lin, Zhihong; Fulton, Daniel; Tajima, Toshiki; Tri Alpha Energy, Inc. Team
2017-10-01
Collisional and turbulent transport in a field-reversed configuration (FRC) is studied in global particle simulation using GTC (the gyrokinetic toroidal code). The global FRC geometry is incorporated in GTC using a field-aligned mesh in cylindrical coordinates, which enables global simulation coupling the core and the scrape-off layer (SOL) across the separatrix. Furthermore, fully kinetic ions are implemented in GTC to treat the magnetic-null point in the FRC core. Both global simulations coupling the core and SOL regions and independent SOL-region simulations have been carried out to study turbulence. In this work, the "logical sheath boundary condition" is implemented to study parallel transport in the SOL. This method relaxes the time and spatial steps by not resolving the electron plasma frequency and Debye length, which enables turbulent transport simulations with sheath effects. We will study collisional and turbulent SOL parallel transport with mirror geometry and the sheath boundary condition in the C-2W divertor.
Arctic water tracks retain phosphorus and transport ammonium
Harms, T.; Cook, C. L.; Wlostowski, A. N.; Godsey, S.; Gooseff, M. N.
2017-12-01
Hydrologic flowpaths propagate biogeochemical signals among adjacent ecosystems, but reactions may attenuate signals by retaining, removing, or transforming dissolved and suspended materials. The theory of nutrient spiraling describes these simultaneous reaction and transport processes, but its application has been limited to stream channels. We applied nutrient spiraling theory to water tracks, zero-order channels draining Arctic hillslopes that contain perennially saturated soils and flow at the surface either perennially or in response to precipitation. In the Arctic, experimental warming results in increased availability of nitrogen, the limiting nutrient for hillslope vegetation at the study site, which may be delivered to aquatic ecosystems by water tracks. Increased intensity of rain events, deeper snowpack, earlier snowmelt, and increasing thaw depth resulting from climate change might support increased transport of nutrients, but the reactive capacity of hillslope flowpaths, including sorption and uptake by plants and microbes, could counter transport to regulate solute flux. Characteristics of flowpaths might influence the opportunity for reaction, where slower flowpaths increase the contact time between solutes and soils or roots. We measured nitrogen and phosphorus uptake and transient storage of water tracks through the growing season and found that water tracks retain inorganic phosphorus, but transport ammonium. Nutrient uptake was unrelated to transient storage, suggesting high capacity for nutrient retention by shallow organic soils and vegetation. These observations indicate that increased availability of ammonium, the biogeochemical signal of warming tundra, is propagated by hillslope flowpaths, whereas water tracks attenuate delivery of phosphorus to aquatic ecosystems, where its availability typically limits production.
Resolution of the neutron transport equation by massively parallel computer in the Cronos code
International Nuclear Information System (INIS)
Zardini, D.M.
1996-01-01
The feasibility of parallel resolution of neutron transport problems by the SN module of the CRONOS code is studied here. In this report we give first results on the parallel resolution of the transport equation by decomposition of the angular variable. Problems concerning parallel resolution by decomposition of the spatial variable and memory storage limits are also explained. (author)
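Angular decomposition exploits the fact that, within one source iteration, each discrete ordinate can be swept independently; only the scalar-flux accumulation couples the angles. The sketch below illustrates this on a toy 1D diamond-difference SN solver, with a thread pool standing in for the message-passing processors of the real code (all parameters illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def sweep(mu, sigma_t, q, dx, n):
    """Diamond-difference sweep of one discrete ordinate across a 1D slab
    with vacuum boundaries.  Each ordinate is independent of the others,
    which is what makes the angular decomposition embarrassingly parallel."""
    psi_cell = [0.0] * n
    cells = range(n) if mu > 0 else range(n - 1, -1, -1)
    psi_in, a = 0.0, abs(mu) / dx
    for i in cells:
        psi_out = ((a - sigma_t / 2) * psi_in + q[i]) / (a + sigma_t / 2)
        psi_cell[i] = 0.5 * (psi_in + psi_out)
        psi_in = psi_out
    return psi_cell

def solve(n=50, width=5.0, sigma_t=1.0, sigma_s=0.5, src=1.0, tol=1e-8):
    dx = width / n
    mus = [-0.5773502691896257, 0.5773502691896257]  # 2-point Gauss quadrature
    wts = [1.0, 1.0]
    phi = [0.0] * n
    for _ in range(500):
        q = [0.5 * (sigma_s * p + src) for p in phi]  # isotropic source
        # the loop over ordinates is the parallel dimension; threads stand
        # in for the distributed processors of the actual code
        with ThreadPoolExecutor() as pool:
            psis = list(pool.map(lambda m: sweep(m, sigma_t, q, dx, n), mus))
        new_phi = [sum(w * psi[i] for w, psi in zip(wts, psis)) for i in range(n)]
        if max(abs(a - b) for a, b in zip(new_phi, phi)) < tol:
            return new_phi
        phi = new_phi
    return phi

phi = solve()
print(phi[25])  # mid-slab flux, below the infinite-medium value src/(sigma_t - sigma_s) = 2
```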
Parallel Transport Quantum Logic Gates with Trapped Ions.
de Clercq, Ludwig E; Lo, Hsiang-Yu; Marinelli, Matteo; Nadlinger, David; Oswald, Robin; Negnevitsky, Vlad; Kienzler, Daniel; Keitch, Ben; Home, Jonathan P
2016-02-26
We demonstrate single-qubit operations by transporting a beryllium ion with a controlled velocity through a stationary laser beam. We use these to perform coherent sequences of quantum operations, and to perform parallel quantum logic gates on two ions in different processing zones of a multiplexed ion trap chip using a single recycled laser beam. For the latter, we demonstrate individually addressed single-qubit gates by local control of the speed of each ion. The fidelities we observe are consistent with operations performed using standard methods involving static ions and pulsed laser fields. This work therefore provides a path to scalable ion trap quantum computing with reduced requirements on the optical control complexity.
Non-Almost Periodicity of Parallel Transports for Homogeneous Connections
International Nuclear Information System (INIS)
Brunnemann, Johannes; Fleischhack, Christian
2012-01-01
Let A be the affine space of all connections in an SU(2) principal fibre bundle over ℝ³. The set of homogeneous isotropic connections forms a line l in A. We prove that the parallel transports for general, non-straight paths in the base manifold do not depend almost periodically on l. Consequently, the embedding l ↪ A does not continuously extend to an embedding l-bar ↪ A-bar of the respective compactifications. Here, the Bohr compactification l-bar corresponds to the configuration space of homogeneous isotropic loop quantum cosmology and A-bar to that of loop quantum gravity. Analogous results are given for the anisotropic case.
Neutron transport solver parallelization using a Domain Decomposition method
International Nuclear Information System (INIS)
Van Criekingen, S.; Nataf, F.; Have, P.
2008-01-01
A domain decomposition (DD) method is investigated for the parallel solution of the second-order even-parity form of the time-independent Boltzmann transport equation. The spatial discretization is performed using finite elements, and the angular discretization using spherical harmonic expansions (P N method). The main idea developed here is due to P.L. Lions. It consists in having sub-domains exchanging not only interface point flux values, but also interface flux 'derivative' values. (The word 'derivative' is here used with quotes, because in the case considered here, it in fact consists in the Ω.∇ operator, with Ω the angular variable vector and ∇ the spatial gradient operator.) A parameter α is introduced, as proportionality coefficient between point flux and 'derivative' values. This parameter can be tuned - so far heuristically - to optimize the method. (authors)
Massively parallel performance of neutron transport response matrix algorithms
International Nuclear Information System (INIS)
Hanebutte, U.R.; Lewis, E.E.
1993-01-01
Massively parallel red/black response matrix algorithms for the solution of within-group neutron transport problems are implemented on the Connection Machines CM-2, CM-200 and CM-5. The response matrices are derived from the diamond-difference and linear-linear nodal discrete ordinates and variational nodal P 3 approximations. The unaccelerated performance of the iterative procedure is examined relative to the maximum rated performances of the machines. The effects of processor partition size, of virtual processor ratio and of problem size are examined in detail. For the red/black algorithm, the ratio of inter-node communication to computing time is found to be quite small, normally of the order of ten percent or less. Performance increases with problem size and with virtual processor ratio, within the memory-per-physical-processor limitation. Algorithm adaptation to coarser-grain machines is straightforward, with total computing time being virtually inversely proportional to the number of physical processors. (orig.)
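The red/black colouring referred to here is the classic device for exposing parallelism in nodal iterations: nodes of one colour depend only on nodes of the other colour, so each half-sweep can update all of its nodes simultaneously. A minimal 1D Gauss-Seidel illustration of the colouring (a toy Poisson relaxation, not a response-matrix solver):

```python
def red_black_sweep(u, f, h):
    """One red/black Gauss-Seidel sweep for the 1D Poisson equation
    -u'' = f with fixed endpoints: all interior points of one colour
    depend only on points of the other colour, so each half-sweep could
    update its points simultaneously on a parallel machine."""
    n = len(u)
    for parity in (1, 0):                  # one colour, then the other
        for i in range(1, n - 1):
            if i % 2 == parity:
                u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

# -u'' = 0 with u(0) = 0, u(1) = 1 relaxes towards the straight line u = x
h = 1.0 / 8
u = [0.0] * 8 + [1.0]
for _ in range(200):
    red_black_sweep(u, [0.0] * 9, h)
print(round(u[4], 3))  # midpoint converges to 0.5
```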
Plane parallel radiance transport for global illumination in vegetation
Energy Technology Data Exchange (ETDEWEB)
Max, N.; Mobley, C.; Keating, B.; Wu, E.H.
1997-01-05
This paper applies plane-parallel radiance transport techniques to scattering from vegetation. The leaves, stems, and branches are represented as a volume density of scattering surfaces, depending only on height and the vertical component of the surface normal. Ordinary differential equations are written for the multiply scattered radiance as a function of the height above the ground, with the sky radiance and ground reflectance as boundary conditions. They are solved using a two-pass integration scheme to unify the two-point boundary conditions, and Fourier series for the dependence on the azimuthal angle. The resulting radiance distribution is used to precompute diffuse and specular 'ambient' shading tables, as a function of height and surface normal, to be used in rendering, together with a z-buffer shadow algorithm for direct solar illumination.
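The two-pass idea, integrating the downward radiance from the sky boundary condition and the upward radiance from the ground reflectance until the two-point boundary problem is consistent, can be sketched with a crude two-stream discretization. All parameter values here are illustrative, and the paper's Fourier expansion in azimuth is omitted:

```python
def two_stream(tau_total, omega, n, i_sky, ground_albedo):
    """Iterative two-pass solution of a crude two-stream model of a
    plane-parallel scattering layer: sweep the downward flux from the sky
    boundary, reflect at the ground, sweep the upward flux back, and
    repeat until the two-point boundary problem is consistent."""
    d = tau_total / n                      # optical thickness per sub-layer
    down = [i_sky] + [0.0] * n
    up = [0.0] * (n + 1)
    for _ in range(200):
        for i in range(n):                 # downward pass (top to bottom)
            src = 0.5 * omega * d * (down[i] + up[i])
            down[i + 1] = down[i] * (1 - d) + src
        up[n] = ground_albedo * down[n]    # ground boundary condition
        for i in range(n - 1, -1, -1):     # upward pass (bottom to top)
            src = 0.5 * omega * d * (down[i + 1] + up[i + 1])
            up[i] = up[i + 1] * (1 - d) + src
    return down, up

down, up = two_stream(1.0, omega=0.5, n=100, i_sky=1.0, ground_albedo=0.2)
print(down[-1], up[0])  # attenuated transmission, non-zero upward flux at the top
```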
Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs
Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; Masciovecchio, Mario; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi
2017-08-01
For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as Graphical Processing Units (GPU), ARM CPUs, and Intel MICs. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem at the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward understanding these processors and the new developments to port the Kalman filter to NVIDIA GPUs.
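The Kalman filter at the core of these track finders alternates a propagation (predict) step between detector layers with a measurement update per hit. A minimal straight-line version in pure Python, with the 2x2 matrices written out by hand much as vectorized implementations do; the geometry and numbers are illustrative, and this is a sketch of the general technique, not the authors' code:

```python
import random

def kalman_track_fit(zs, hits, sigma):
    """Minimal Kalman filter for a straight-line track y = y0 + t*z crossing
    detector layers at positions zs, each measuring y with resolution sigma.
    State x = (y, t); covariance P kept as an explicit 2x2 list."""
    y, t = hits[0], 0.0                    # crude initial state from first hit
    P = [[100.0, 0.0], [0.0, 100.0]]       # large initial covariance
    R = sigma * sigma
    for k in range(1, len(zs)):
        dz = zs[k] - zs[k - 1]
        # predict: x <- F x with F = [[1, dz], [0, 1]], P <- F P F^T (no process noise)
        y = y + dz * t
        P = [[P[0][0] + dz * (P[1][0] + P[0][1]) + dz * dz * P[1][1],
              P[0][1] + dz * P[1][1]],
             [P[1][0] + dz * P[1][1], P[1][1]]]
        # update with measurement m = H x + noise, H = [1, 0]
        S = P[0][0] + R
        K0, K1 = P[0][0] / S, P[1][0] / S  # Kalman gain
        r = hits[k] - y                    # residual
        y, t = y + K0 * r, t + K1 * r
        P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
             [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
    return y, t

random.seed(1)
zs = [float(i) for i in range(10)]
hits = [1.0 + 0.5 * z + random.gauss(0, 0.05) for z in zs]  # true y0=1, slope=0.5
y, t = kalman_track_fit(zs, hits, 0.05)
print(y, t)  # filtered position at the last layer and fitted slope
```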
Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs
Directory of Open Access Journals (Sweden)
Cerati Giuseppe
2017-01-01
Full Text Available For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as Graphical Processing Units (GPU), ARM CPUs, and Intel MICs. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem at the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward understanding these processors and the new developments to port the Kalman filter to NVIDIA GPUs.
Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs
Energy Technology Data Exchange (ETDEWEB)
Cerati, Giuseppe [Fermilab; Elmer, Peter [Princeton U.; Krutelyov, Slava [UC, San Diego; Lantz, Steven [Cornell U.; Lefebvre, Matthieu [Princeton U.; Masciovecchio, Mario [UC, San Diego; McDermott, Kevin [Cornell U.; Riley, Daniel [Cornell U., LNS; Tadel, Matevž [UC, San Diego; Wittich, Peter [Cornell U.; Würthwein, Frank [UC, San Diego; Yagil, Avi [UC, San Diego
2017-01-01
For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as Graphical Processing Units (GPU), ARM CPUs, and Intel MICs. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem at the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward understanding these processors and the new developments to port the Kalman filter to NVIDIA GPUs.
A parallel implementation of particle tracking with space charge effects on an INTEL iPSC/860
International Nuclear Information System (INIS)
Chang, L.; Bourianoff, G.; Cole, B.; Machida, S.
1993-05-01
Particle-tracking simulation is one of the scientific applications that are well suited to parallel computation. At the Superconducting Super Collider, it has been demonstrated both theoretically and empirically that particle tracking on a designed lattice can achieve very high parallel efficiency on a MIMD Intel iPSC/860 machine. The key to this success is the realization that the particles can be tracked independently without considering their interactions. The perfectly parallel nature of particle tracking is broken if the interaction effects between particles are included. The space charge introduces an electromagnetic force that affects the motion of the tracked particles in 3-D space. For accurate modeling of the beam dynamics with space charge effects, one needs to solve the three-dimensional Maxwell field equations, usually by a particle-in-cell (PIC) algorithm. This requires each particle to communicate with its neighboring grid points to compute the momentum changes at each time step. It is expected that the 3-D PIC method will degrade the parallel efficiency of a particle-tracking implementation on any parallel computer. In this paper, we describe an efficient scheme for implementing particle tracking with space charge effects on an Intel iPSC/860 machine. Experimental results show that a parallel efficiency of 75% can be obtained.
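The communication burden described above enters in the PIC scatter step, where each particle deposits charge onto its neighbouring grid points, so particles near a subdomain boundary must exchange data with the adjacent processor. The deposition kernel itself is tiny; here is a cloud-in-cell (linear-weighting) version on a periodic 1D grid, an illustrative sketch rather than the iPSC/860 code:

```python
def deposit_charge(xs, n_cells, L=1.0):
    """Cloud-in-cell charge deposition from particle positions to a periodic
    1D grid: each unit-charge particle is shared linearly between its two
    neighbouring grid points.  This scatter step is what forces neighbour
    communication and breaks the perfect parallelism of pure tracking."""
    dx = L / n_cells
    rho = [0.0] * n_cells
    for x in xs:
        s = (x % L) / dx
        i = int(s)
        frac = s - i
        rho[i % n_cells] += (1.0 - frac) / dx        # left grid point
        rho[(i + 1) % n_cells] += frac / dx          # right neighbour
    return rho

# four particles; the grid integral recovers the total charge
rho = deposit_charge([0.1, 0.35, 0.35, 0.8], 10)
print(sum(r * 0.1 for r in rho))  # total deposited charge = 4.0
```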
Study on MPI/OpenMP hybrid parallelism for Monte Carlo neutron transport code
International Nuclear Information System (INIS)
Liang Jingang; Xu Qi; Wang Kan; Liu Shiwen
2013-01-01
Parallel programming with a mixed mode of message-passing and shared memory has several advantages when used in Monte Carlo neutron transport codes, such as fitting the hardware of distributed-shared-memory clusters, economizing the memory demand of Monte Carlo transport, and improving parallel performance. MPI/OpenMP hybrid parallelism was implemented based on a one-dimensional Monte Carlo neutron transport code. Some critical factors affecting the parallel performance were analyzed, and solutions were proposed for several problems such as access contention, lock contention and false sharing. After optimization the code was tested. It is shown that the hybrid parallel code can reach performance as good as a pure MPI parallel program, while saving a large amount of memory at the same time. Hybrid parallelism is therefore efficient for achieving large-scale parallelism in Monte Carlo neutron transport. (authors)
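The hybrid layout can be mimicked in miniature: independent "ranks" with private tallies (the message-passing level) each spawn threads that share the rank's memory (the OpenMP level). In the sketch below, threads estimate uncollided transmission through a slab as a stand-in for the histories of the real code; Python threads and an explicit sum replace OpenMP worksharing and MPI_Reduce, and all names and numbers are illustrative:

```python
import random
from concurrent.futures import ThreadPoolExecutor

SIGMA_T, THICKNESS = 1.0, 2.0   # illustrative slab: optical depth 2

def thread_batch(seed, n):
    """Shared-memory worker: count histories that cross the slab uncollided
    (free path drawn from an exponential with the total cross-section)."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if rng.expovariate(SIGMA_T) > THICKNESS)

def run_rank(rank_id, n_threads=4, n_per_thread=25000):
    """One 'MPI rank': its threads share this process's memory and return
    partial tallies that the rank combines locally."""
    seeds = [1000 * rank_id + t for t in range(n_threads)]
    with ThreadPoolExecutor(n_threads) as pool:
        return sum(pool.map(thread_batch, seeds, [n_per_thread] * n_threads))

# the loop over ranks stands in for mpirun -np 4; the final sum for MPI_Reduce
counts = [run_rank(r) for r in range(4)]
estimate = sum(counts) / (4 * 4 * 25000)
print(estimate)  # uncollided transmission, close to exp(-2) ≈ 0.135
```

In the real hybrid code the memory saving comes from each rank's threads sharing one copy of the cross-section data, which the per-rank `thread_batch` workers model here only structurally.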
Particle Tracking Model and Abstraction of Transport Processes
International Nuclear Information System (INIS)
Robinson, B.
2000-01-01
The purpose of the transport methodology and component analysis is to provide the numerical methods for simulating radionuclide transport and the model setup for transport in the unsaturated zone (UZ) site-scale model. The particle-tracking method of simulating radionuclide transport is incorporated into the FEHM computer code, and the resulting changes in the FEHM code are to be submitted to the software configuration management system. This Analysis and Model Report (AMR) outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the unsaturated zone at Yucca Mountain. In addition, methods for determining colloid-facilitated transport parameters are outlined for use in the Total System Performance Assessment (TSPA) analyses. Concurrently, process-level flow model calculations are being carried out in a PMR for the unsaturated zone. The computer code TOUGH2 is being used to generate three-dimensional, dual-permeability flow fields, which are supplied to the Performance Assessment group for subsequent transport simulations. These flow fields are converted to input files compatible with the FEHM code, which for this application simulates radionuclide transport using the particle-tracking algorithm outlined in this AMR. Therefore, this AMR establishes the numerical method and demonstrates the use of the model, but the specific breakthrough curves presented do not necessarily represent the behavior of the Yucca Mountain unsaturated zone.
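Particle-tracking transport methods of this kind move a cloud of particles by an advective displacement plus a random dispersive step each time step, and read breakthrough curves off the arrival-time distribution. A generic 1D sketch of the idea (not the FEHM dual-permeability algorithm; all parameters illustrative):

```python
import random, math

def random_walk_breakthrough(n_particles, v, D, L, dt, seed=7):
    """Random-walk particle tracking for 1D advection-dispersion: each step
    is x += v*dt + sqrt(2*D*dt)*xi with xi ~ N(0,1).  Returns the arrival
    times at x = L, i.e. the breakthrough-curve samples."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_particles):
        x, t = 0.0, 0.0
        while x < L:
            x += v * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
            t += dt
        times.append(t)
    return times

times = random_walk_breakthrough(1000, v=1.0, D=0.01, L=10.0, dt=0.01)
mean_t = sum(times) / len(times)
print(mean_t)  # mean breakthrough time, close to L/v = 10
```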
Study on tracking system for radioactive material transport
Energy Technology Data Exchange (ETDEWEB)
Watanabe, F.; Igarashi, M.; Nomura, T. [Nuclear Emergency Assistance and Training Center, Japan Nuclear Cycle Development Inst., Ibaraki (Japan); Nakagome, Y. [Research Reactor Inst., Kyoto Univ., Osaka (Japan)
2004-07-01
When a transportation accident occurs, all entities involved, including the shipper, the transportation organization, local governments, and emergency response organizations, must have organized and planned for the safety of citizens, property, and the environment. Many related organizations will be involved, and their cooperation determines the success or failure of the response. The point where an accident happens cannot be pinpointed in advance. Nuclear fuel transportation also requires a quick response from the viewpoint of security. A tracking system for radioactive material transport is being developed for use in Japan. The objective of this system is to provide, in the rare event of an accident, communication capabilities for sharing specific information among the relevant organizations, the transporter, and others.
Study on tracking system for radioactive material transport
International Nuclear Information System (INIS)
Watanabe, F.; Igarashi, M.; Nomura, T.; Nakagome, Y.
2004-01-01
When a transportation accident occurs, all entities involved, including the shipper, the transportation organization, local governments, and emergency response organizations, must have organized and planned for the safety of citizens, property, and the environment. Many related organizations will be involved, and their cooperation determines the success or failure of the response. The point where an accident happens cannot be pinpointed in advance. Nuclear fuel transportation also requires a quick response from the viewpoint of security. A tracking system for radioactive material transport is being developed for use in Japan. The objective of this system is to provide, in the rare event of an accident, communication capabilities for sharing specific information among the relevant organizations, the transporter, and others.
Weighted-delta-tracking for Monte Carlo particle transport
International Nuclear Information System (INIS)
Morgan, L.W.G.; Kotlyar, D.
2015-01-01
Highlights: • This paper presents an alteration to the Monte Carlo Woodcock tracking technique. • The alteration improves computational efficiency within regions of high absorbers. • The rejection technique is replaced by a statistical weighting mechanism. • The modified Woodcock method is shown to be faster than standard Woodcock tracking. • The modified Woodcock method achieves a lower variance, given a specified accuracy. - Abstract: Monte Carlo particle transport (MCPT) codes are incredibly powerful and versatile tools for simulating particle behavior in a multitude of scenarios, such as core/criticality studies, radiation protection, shielding, medicine and fusion research, to name just a small subset of applications. However, MCPT codes can be very computationally expensive to run when the model geometry contains large attenuation depths and/or many components. This paper proposes a simple modification to the Woodcock tracking method used by some Monte Carlo particle transport codes. The Woodcock method utilizes the rejection method for sampling virtual collisions as a way to remove collision-distance sampling at material boundaries. However, it suffers from poor computational efficiency when the sample acceptance rate is low. The proposed method removes rejection sampling from the Woodcock method in favor of a statistical weighting scheme, which improves the computational efficiency of a Monte Carlo particle tracking code. It is shown that the modified Woodcock method is less computationally expensive than standard ray-tracing and rejection-based Woodcock tracking methods and achieves a lower variance, given a specified accuracy.
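The modification can be shown in a few lines. Standard Woodcock tracking samples flight distances from a majorant cross-section and uses rejection to decide whether a tentative collision is real; the weighted variant always continues the flight and folds the rejection probability into the particle's statistical weight. A sketch for a purely absorbing two-region slab follows; this illustrates the general idea rather than the authors' exact scheme, and the geometry and numbers are made up:

```python
import random

def sig(x):
    """Heterogeneous total cross-section of an illustrative two-region slab."""
    return 0.5 if x < 1.0 else 1.5

SIG_MAJ, L = 1.5, 2.0                     # majorant cross-section, slab width

def woodcock(rng):
    """Standard Woodcock: virtual collisions handled by rejection."""
    x = 0.0
    while True:
        x += rng.expovariate(SIG_MAJ)
        if x >= L:
            return 1.0                    # transmitted
        if rng.random() < sig(x) / SIG_MAJ:
            return 0.0                    # real collision (pure absorber)

def weighted_woodcock(rng):
    """Weighted variant: rejection replaced by multiplying the statistical
    weight by the survival probability 1 - sigma(x)/sigma_maj."""
    x, w = 0.0, 1.0
    while True:
        x += rng.expovariate(SIG_MAJ)
        if x >= L:
            return w
        w *= 1.0 - sig(x) / SIG_MAJ
        if w == 0.0:                      # sigma equals the majorant here
            return 0.0

rng, n = random.Random(42), 100000
t1 = sum(woodcock(rng) for _ in range(n)) / n
t2 = sum(weighted_woodcock(rng) for _ in range(n)) / n
print(t1, t2)  # both estimate the transmission exp(-2) ≈ 0.135
```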
Parallel processing of Monte Carlo code MCNP for particle transport problem
Energy Technology Data Exchange (ETDEWEB)
Higuchi, Kenji; Kawasaki, Takuji
1996-06-01
It is possible to vectorize or parallelize Monte Carlo (MC) codes for photon and neutron transport problems by making use of the independence of the calculation for each particle. The applicability of existing MC codes to parallel processing is discussed. For parallel computers, we have used both a vector-parallel processor and a scalar-parallel processor in the performance evaluation. We have carried out (i) vector-parallel processing of the MCNP code on the Monte Carlo machine Monte-4 with four vector processors, and (ii) parallel processing on the Paragon XP/S with 256 processors. In this report we describe the methodology and results for parallel processing on these two types of parallel or distributed-memory computers. In addition, we evaluate the parallel programming environments for the parallel computers used in the present work, as part of the work developing the STA (Seamless Thinking Aid) Basic Software. (author)
Transcom's next move: Improvements to DOE's transportation satellite tracking systems
International Nuclear Information System (INIS)
Harmon, L.H.; Harris, A.D. III; Driscoll, K.L.; Ellis, L.G.
1990-01-01
In today's society, the use of satellites is becoming the state-of-the-art method of tracking shipments. The United States Department of Energy (US DOE) has advanced technology in this area with its transportation tracking and communications system, TRANSCOM, which has been in operation for over one year. TRANSCOM was developed by DOE to monitor selected, unclassified shipments of radioactive materials across the country. With the latest technology in satellite communications, Long Range Navigation (Loran), and computer networks, TRANSCOM tracks shipments in near-real time, disseminates information on each shipment to authorized users of the system, and offers two-way communications between vehicle operators and TRANSCOM users anywhere in the country. TRANSCOM's successful tracking record, during fiscal year 1989, includes shipments of spent fuel, cesium, uranium hexafluoride, and demonstration shipments for the Waste Isolation Pilot Plant (WIPP). Plans for fiscal year 1990 include tracking additional shipments, implementing system enhancements designed to meet the users' needs, and continuing to research the technology of tracking systems so that TRANSCOM can provide its users with the newest technology available in satellite communications. 3 refs., 1 fig
Adaptive robust trajectory tracking control of a parallel manipulator driven by pneumatic cylinders
Directory of Open Access Journals (Sweden)
Ce Shang
2016-04-01
Full Text Available Due to the compressibility of air and the non-linear characteristics and parameter uncertainties of pneumatic elements, position control of a pneumatic cylinder or parallel platform remains very difficult compared with systems driven by electric or hydraulic power. In this article, based on a basic dynamic model and descriptions of the thermal processes, a controller integrated with online parameter estimation is proposed to improve the performance of a pneumatic cylinder controlled by a proportional valve. The trajectory tracking error is significantly decreased by applying this method. Moreover, the algorithm is extended to the problem of posture trajectory tracking for the three-revolute-prismatic-spherical pneumatic parallel manipulator. Lyapunov's method is used to prove the stability of the controller. Using NI-CompactRio, NI-PXI, and the VeriStand platform as the controller hardware and data-interaction environment, the adaptive robust control algorithm was applied to the physical system successfully. Experimental results and data analysis showed that the posture error of the platform can be about 0.5%–0.7% of the desired trajectory amplitude. By integrating this method into the mechatronic system, pneumatic servo solutions can be much more competitive in the industrial market for position and posture control.
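The structure of such a controller, certainty-equivalence feedback plus an online adaptation law whose stability follows from a Lyapunov function, can be shown on a scalar toy plant. The sketch below is a generic adaptive tracking example standing in for the paper's pneumatic-platform controller; the plant, gains, and reference are all illustrative:

```python
import math

def simulate(a_true=2.0, k=5.0, gamma=10.0, dt=1e-3, T=20.0):
    """Adaptive tracking of xd(t) = sin(t) for the scalar plant
    xdot = a*x + u with unknown parameter a.  Certainty-equivalence control
    u = xd_dot - a_hat*x - k*e with gradient adaptation a_hat_dot = gamma*e*x
    gives V = e^2/2 + (a - a_hat)^2/(2*gamma) with Vdot = -k*e^2 <= 0,
    so the tracking error e = x - xd is driven to zero (Euler simulation)."""
    x, a_hat, t, e = 0.0, 0.0, 0.0, 0.0
    while t < T:
        xd, xd_dot = math.sin(t), math.cos(t)
        e = x - xd
        u = xd_dot - a_hat * x - k * e     # certainty-equivalence control
        x += dt * (a_true * x + u)         # plant step
        a_hat += dt * gamma * e * x        # adaptation law
        t += dt
    return e, a_hat

e, a_hat = simulate()
print(e, a_hat)  # small final tracking error; a_hat near the true value 2
```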
Li, Jiuyi; Busscher, Henk J.; Norde, Willem; Sjollema, Jelmer
2011-01-01
In order to investigate bacterium-substratum interactions, understanding of bacterial mass transport is necessary. Comparisons of experimentally observed initial deposition rates with mass transport rates in parallel-plate-flow-chambers (PPFC) predicted by convective-diffusion yielded deposition
Troublesome transportation concerns can be mitigated - RADMAT tracking system
International Nuclear Information System (INIS)
Harmon, L.H.
1987-01-01
There are three troublesome institutional concerns which face every large-quantity radioactive materials shipment - routing, pre-notification, and emergency response. People want to know: where's the shipment going and how's it getting there? States want to know what's being shipped and when? What kind of response to accidents is needed for this shipment and who'll respond? DOE is developing a transportation tracking system, based on a rapidly developing technology to determine geographical location using geo-positioning satellite systems. This technology will be used to track unclassified radioactive materials shipments in real-time. It puts those charged with monitoring transportation status on top of every shipment. Besides its practical benefits in the areas of logistics planning and execution, it demonstrates emergency preparedness has indeed been considered and close monitoring is possible. This paper will describe the system's technical detail, DOE plans and policy for its implementation, and the state of satellite positioning technology
The effect of plasma fluctuations on parallel transport parameters in the SOL
DEFF Research Database (Denmark)
Havlíčková, E.; Fundamenski, W.; Naulin, Volker
2011-01-01
The effect of plasma fluctuations due to turbulence at the outboard midplane on parallel transport properties is investigated. Time-dependent fluctuating signals at different radial locations are used to study the effect of signal statistics. Further, a computational analysis of parallel transport...... to a comparison of steady-state and time-dependent modelling....
Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar
2018-01-01
Visual tracking in aerial videos is a challenging task in computer vision and remote sensing due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach has reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and is extracted based on the frame difference with Sauvola local adaptive thresholding algorithms. Spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features are used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods.
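The temporal-saliency step reduces to thresholding the per-pixel intensity change between consecutive frames. The paper uses Sauvola local-adaptive thresholding; in the sketch below a fixed global threshold stands in for it, and the tiny frames are made-up data:

```python
def temporal_saliency(frame_a, frame_b, threshold=25):
    """Frame-difference temporal saliency: mark pixels whose intensity change
    between consecutive grayscale frames exceeds a threshold.  A fixed global
    threshold is used here in place of the paper's Sauvola local-adaptive one."""
    return [[1 if abs(p - q) > threshold else 0
             for p, q in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

# a 'target' moves: one pixel changes strongly, the rest only by noise
f0 = [[10, 10, 10], [10, 10, 10]]
f1 = [[10, 12, 10], [10, 10, 90]]
mask = temporal_saliency(f0, f1)
print(mask)  # only the strongly changed pixel is marked salient
```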
International Nuclear Information System (INIS)
Masukawa, Fumihiro; Takano, Makoto; Naito, Yoshitaka; Yamazaki, Takao; Fujisaki, Masahide; Suzuki, Koichiro; Okuda, Motoi.
1993-11-01
In order to improve the accuracy and computing speed of shielding analyses, MCNP 4, a Monte Carlo neutron and photon transport code system, has been parallelized and its efficiency measured on the AP1000, a highly parallel distributed-memory computer. The code was analyzed statically and dynamically, and a suitable parallelization algorithm was determined for the shielding analysis functions of MCNP 4. This includes a strategy in which a new history is assigned dynamically to an idling processor element during execution. Furthermore, to avoid congestion in communication processing, a batch concept, in which multiple histories are processed as a unit, has been introduced. In analyzing a sample cask problem with 2,000,000 histories on the AP1000 with 512 processor elements, a parallelization efficiency of 82% was achieved, and the computational speed was estimated to be around 50 times that of the FACOM M-780. (author)
T839 fiber tracking transporter at New Muon Lab
International Nuclear Information System (INIS)
Krider, J.
1991-01-01
A darkbox and its transporter have been designed for T839 fiber tracking tests. The darkbox is 3.35 m x 0.76 m x 0.25 m (l·w·h) and contains a scintillating fiber ribbon suspension system and mechanical hardware to support the readout electronics. The transporter provides 3.0 m of horizontal motion transverse to the beam for linear scans of fiber characteristics. In addition, 70 degrees of rotation about a vertical axis is provided to simulate tracking of particles emanating from a collision point at lab angles in the range 0 degrees--70 degrees. The transporter, which is located inside a radiation area, is remotely controlled to permit scanning the fiber array through the region defined by four small stationary triggering scintillators without disabling the beam. The transporter rails extend 20 feet to the west beyond a gate in the radiation enclosure fencing. This provides a staging area for working on the apparatus while the beam is on. 4 figs
Parallel transport in ideal magnetohydrodynamics and applications to resistive wall modes
International Nuclear Information System (INIS)
Finn, J.M.; Gerwin, R.A.
1996-01-01
It is shown that in magnetohydrodynamics (MHD) with an ideal Ohm's law, in the presence of parallel heat flux, density gradient, temperature gradient, and parallel compression, but in the absence of perpendicular compressibility, there is an exact cancellation of the parallel transport terms. This cancellation is due to the fact that magnetic flux is advected in the presence of an ideal Ohm's law, and therefore parallel transport of temperature and density gives the same result as perpendicular advection of the same quantities. Discussions are also presented regarding parallel viscosity and parallel velocity shear, and the generalization to toroidal geometry. These results suggest that a correct generalization of the Hammett-Perkins fluid operator [G. W. Hammett and F. W. Perkins, Phys. Rev. Lett. 64, 3019 (1990)] to simulate Landau damping for electromagnetic modes must give an operator that acts on the dynamics parallel to the perturbed magnetic field lines. copyright 1996 American Institute of Physics
Su, Hao; Dickstein-Fischer, Laurie; Harrington, Kevin; Fu, Qiushi; Lu, Weina; Huang, Haibo; Cole, Gregory; Fischer, Gregory S
2010-01-01
This paper presents the development of a new prismatic actuation approach and its application in human-safe humanoid head design. To reduce actuator output impedance and mitigate unexpected external shocks, the prismatic actuation method uses cables to drive a piston with a preloaded spring. By leveraging the advantages of parallel manipulators and cable-driven mechanisms, the developed neck has a parallel manipulator embodiment with two cable-driven limbs embedded with preloaded springs and one passive limb. The eye mechanism is adapted for a low-cost webcam with a succinct "ball-in-socket" structure. Based on human head anatomy and biomimetics, the neck has 3-degree-of-freedom (DOF) motion: pan, tilt, and one decoupled roll, while each eye has independent pan and synchronous tilt motion (3-DOF eyes). A Kalman-filter-based face tracking algorithm is implemented so that the robot can interact with humans. This neck and eye structure is translatable to other human-safe humanoid robots. The robot's appearance reflects the non-threatening image of a penguin, which can be translated into a possible therapeutic intervention for children with Autism Spectrum Disorders.
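The Kalman-filter-based tracking mentioned above reduces, in its simplest form, to a scalar predict/update cycle. The sketch below is a generic one-dimensional Kalman filter tracking a static face position under measurement noise; the process noise q, measurement variance r, and the random-walk motion model are assumptions, not details from the paper.

```python
import random

def kalman_step(x, P, z, q=1e-3, r=0.05 ** 2):
    """One predict/update cycle of a scalar Kalman filter (random-walk model).

    x, P : current state estimate and its variance
    z    : new noisy measurement
    q, r : assumed process and measurement noise variances
    """
    P = P + q                # predict: state unchanged, uncertainty grows
    K = P / (P + r)          # Kalman gain
    x = x + K * (z - x)      # correct the estimate toward the measurement
    P = (1 - K) * P          # posterior variance shrinks
    return x, P

rng = random.Random(1)
true_pos = 0.5               # normalized horizontal face position in the image
x, P = 0.0, 1.0              # deliberately wrong prior
for _ in range(200):
    z = true_pos + rng.gauss(0, 0.05)   # noisy face detection
    x, P = kalman_step(x, P, z)
```

After a few hundred frames the estimate x settles near the true position with variance well below that of a single raw detection.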
Particle Tracking Model and Abstraction of Transport Processes
Energy Technology Data Exchange (ETDEWEB)
B. Robinson
2004-10-21
The purpose of this report is to document the abstraction model being used in total system performance assessment (TSPA) model calculations for radionuclide transport in the unsaturated zone (UZ). The UZ transport abstraction model uses the particle-tracking method that is incorporated into the finite element heat and mass model (FEHM) computer code (Zyvoloski et al. 1997 [DIRS 100615]) to simulate radionuclide transport in the UZ. This report outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the UZ at Yucca Mountain. In addition, methods for determining and inputting transport parameters are outlined for use in the TSPA for license application (LA) analyses. Process-level transport model calculations are documented in another report for the UZ (BSC 2004 [DIRS 164500]). Three-dimensional, dual-permeability flow fields generated to characterize UZ flow (documented by BSC 2004 [DIRS 169861]; DTN: LB03023DSSCP9I.001 [DIRS 163044]) are converted to make them compatible with the FEHM code for use in this abstraction model. This report establishes the numerical method and demonstrates the use of the model that is intended to represent UZ transport in the TSPA-LA. Capability of the UZ barrier for retarding the transport is demonstrated in this report, and by the underlying process model (BSC 2004 [DIRS 164500]). The technical scope, content, and management of this report are described in the planning document ''Technical Work Plan for: Unsaturated Zone Transport Model Report Integration'' (BSC 2004 [DIRS 171282]). Deviations from the technical work plan (TWP) are noted within the text of this report, as appropriate. The latest version of this document is being prepared principally to correct parameter values found to be in error due to transcription errors, changes in source data that were not captured in the report, calculation errors, and errors in interpretation of source data.
Particle Tracking Model and Abstraction of Transport Processes
International Nuclear Information System (INIS)
Robinson, B.
2004-01-01
The purpose of this report is to document the abstraction model being used in total system performance assessment (TSPA) model calculations for radionuclide transport in the unsaturated zone (UZ). The UZ transport abstraction model uses the particle-tracking method that is incorporated into the finite element heat and mass model (FEHM) computer code (Zyvoloski et al. 1997 [DIRS 100615]) to simulate radionuclide transport in the UZ. This report outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the UZ at Yucca Mountain. In addition, methods for determining and inputting transport parameters are outlined for use in the TSPA for license application (LA) analyses. Process-level transport model calculations are documented in another report for the UZ (BSC 2004 [DIRS 164500]). Three-dimensional, dual-permeability flow fields generated to characterize UZ flow (documented by BSC 2004 [DIRS 169861]; DTN: LB03023DSSCP9I.001 [DIRS 163044]) are converted to make them compatible with the FEHM code for use in this abstraction model. This report establishes the numerical method and demonstrates the use of the model that is intended to represent UZ transport in the TSPA-LA. Capability of the UZ barrier for retarding the transport is demonstrated in this report, and by the underlying process model (BSC 2004 [DIRS 164500]). The technical scope, content, and management of this report are described in the planning document ''Technical Work Plan for: Unsaturated Zone Transport Model Report Integration'' (BSC 2004 [DIRS 171282]). Deviations from the technical work plan (TWP) are noted within the text of this report, as appropriate. The latest version of this document is being prepared principally to correct parameter values found to be in error due to transcription errors, changes in source data that were not captured in the report, calculation errors, and errors in interpretation of source data
International Nuclear Information System (INIS)
Kong, Rong; Spanier, Jerome
2013-01-01
In this paper we develop novel extensions of collision and track-length estimators for the complete space-angle solutions of radiative transport problems. We derive the relevant equations, prove that our new estimators are unbiased, and compare their performance with that of more conventional estimators. Such comparisons, based on numerical solutions of simple one-dimensional slab problems, indicate the potential superiority of the new estimators for a wide variety of more general transport problems
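The two conventional estimators named in this abstract can be illustrated on the simplest one-dimensional slab problem. The sketch below (an assumption-laden toy, not the authors' extended estimators) scores the integrated flux in a purely absorbing slab both ways: the collision estimator adds 1/Σt per collision, the track-length estimator adds the path length traversed; both are unbiased for the analytic answer 1 - exp(-ΣtL).

```python
import math
import random

def slab_estimates(n, L=2.0, sigma_t=1.0, seed=7):
    """Integrated flux in a purely absorbing slab [0, L], unit normally
    incident source, via the collision and track-length estimators."""
    rng = random.Random(seed)
    coll = track = 0.0
    for _ in range(n):
        d = rng.expovariate(sigma_t)   # sampled distance to absorption
        track += min(d, L)             # track-length estimator: path inside the slab
        if d < L:
            coll += 1.0 / sigma_t      # collision estimator: 1/sigma_t per collision
    return coll / n, track / n

c, t = slab_estimates(200_000)
exact = 1 - math.exp(-2.0)             # analytic integrated flux for L=2, sigma_t=1
```

With 200,000 histories both estimates agree with the analytic value to well under one percent, while their variances differ, which is exactly the kind of comparison the abstract describes.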
Effects of parallel electron dynamics on plasma blob transport
Energy Technology Data Exchange (ETDEWEB)
Angus, Justin R.; Krasheninnikov, Sergei I. [University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093 (United States); Umansky, Maxim V. [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, California 94550 (United States)
2012-08-15
The 3D effects on sheath connected plasma blobs that result from parallel electron dynamics are studied by allowing for the variation of blob density and potential along the magnetic field line and using collisional Ohm's law to model the parallel current density. The parallel current density from linear sheath theory, typically used in the 2D model, is implemented as parallel boundary conditions. This model includes electrostatic 3D effects, such as resistive drift waves and blob spinning, while retaining all of the fundamental 2D physics of sheath connected plasma blobs. If the growth time of unstable drift waves is comparable to the 2D advection time scale of the blob, then the blob's density gradient will be depleted, resulting in a much more diffusive blob with little radial motion. Furthermore, blob profiles that are initially varying along the field line drive the potential to a Boltzmann relation that spins the blob and thereby acts as an additional sink of the 2D potential. Basic dimensionless parameters are presented to estimate the relative importance of these two 3D effects. The deviation of blob dynamics from that predicted by 2D theory in the appropriate limits of these parameters is demonstrated by a direct comparison of 2D and 3D seeded blob simulations.
A portable, parallel, object-oriented Monte Carlo neutron transport code in C++
International Nuclear Information System (INIS)
Lee, S.R.; Cummings, J.C.; Nolen, S.D.
1997-01-01
We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α-eigenvalues and is portable to, and runs in parallel on, a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed
Parallel algorithms for 2-D cylindrical transport equations of Eigenvalue problem
International Nuclear Information System (INIS)
Wei, J.; Yang, S.
2013-01-01
In this paper, aimed at the neutron transport eigenvalue problem in 2-D cylindrical geometry on unstructured grids, a discrete scheme combining the SN discrete-ordinates method with discontinuous finite elements is built, and parallel computation for the scheme is realized on MPI systems. Numerical experiments indicate that the designed parallel algorithm reaches nearly perfect speedup and has good practicality and scalability. (authors)
Improvements in fast-response flood modeling: desktop parallel computing and domain tracking
Energy Technology Data Exchange (ETDEWEB)
Judi, David R [Los Alamos National Laboratory; Mcpherson, Timothy N [Los Alamos National Laboratory; Burian, Steven J [UNIV. OF UTAH
2009-01-01
It is becoming increasingly important to have the ability to accurately forecast flooding, as flooding accounts for the greatest losses due to natural disasters in the world and the United States. Flood inundation modeling has been dominated by one-dimensional approaches. These models are computationally efficient and are considered by many engineers to produce reasonably accurate water surface profiles. However, because the profiles estimated by these models must be superimposed on digital elevation data to create a two-dimensional map, the result may be sensitive to the ability of the elevation data to capture relevant features (e.g. dikes/levees, roads, walls, etc.). Moreover, one-dimensional models do not explicitly represent the complex flow processes present in floodplains and urban environments, and because two-dimensional models based on the shallow water equations have significantly greater ability to determine flow velocity and direction, the National Research Council (NRC) has recommended that two-dimensional models be used over one-dimensional models for flood inundation studies. This paper has shown that two-dimensional flood modeling computational time can be greatly reduced through the use of Java multithreading on multi-core computers, which effectively provides a means for parallel computing on a desktop computer. In addition, this paper has shown that when desktop parallel computing is coupled with a domain tracking algorithm, significant computation time can be eliminated when computations are completed only on inundated cells. The drastic reduction in computational time shown here enhances the ability of two-dimensional flood inundation models to be used as a near-real-time flood forecasting tool, engineering design tool, or planning tool. Perhaps even of greater significance, the reduction in computation time makes the incorporation of risk and uncertainty/ensemble forecasting more feasible for flood inundation modeling (NRC 2000; Sayers et al
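The domain tracking idea, updating only inundated cells and their immediate neighbors, can be sketched in a few lines. This toy is a 1-D explicit diffusion surrogate for the shallow water update, in Python rather than the paper's Java, with the multithreading aspect omitted; all numbers and the update rule are assumptions. The point it demonstrates is that the tracked update reproduces the full-domain update exactly while touching far fewer cells.

```python
N = 200            # number of grid cells
ALPHA = 0.2        # explicit update coefficient (stable for alpha <= 0.5)

def step_full(h):
    """Full-domain update: every cell recomputed, wet or dry."""
    return [h[i] + ALPHA * ((h[i - 1] if i else 0.0)
                            + (h[i + 1] if i < N - 1 else 0.0) - 2 * h[i])
            for i in range(N)]

def step_tracked(h, active):
    """Domain-tracked update: only wet cells and their neighbors can change."""
    frontier = set()
    for i in active:
        frontier.update(j for j in (i - 1, i, i + 1) if 0 <= j < N)
    new = h[:]
    for i in frontier:
        new[i] = h[i] + ALPHA * ((h[i - 1] if i else 0.0)
                                 + (h[i + 1] if i < N - 1 else 0.0) - 2 * h[i])
    return new, {i for i in frontier if new[i] > 0.0}

h_full = [0.0] * N
h_full[100] = 1.0                      # a point release of water
h_trk, active = h_full[:], {100}
for _ in range(50):
    h_full = step_full(h_full)
    h_trk, active = step_tracked(h_trk, active)
```

Dry cells outside the frontier cannot change (their update is identically zero), so skipping them loses nothing; after 50 steps only about half the domain has ever been touched.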
Intelligent products for enhancing the utilization of tracking technology in transportation
Meyer, Gerben G.; Buijs, Paul; Szirbik, Nick B.; Wortmann, J.C.
2014-01-01
Purpose – Many transportation companies struggle to effectively utilize the information provided by tracking technology for performing operational control. The research as presented in this paper aims to identify the problems underlying the inability to utilize tracking technology within this
International Nuclear Information System (INIS)
Helander, P.; Hazeltine, R.D.; Catto, P.J.
1996-01-01
The orderings in the kinetic equations commonly used to study the plasma core of a tokamak do not allow a balance between parallel ion streaming and radial diffusion, and are, therefore, inappropriate in the plasma edge. Different orderings are required in the edge region where radial transport across the steep gradients associated with the scrape-off layer is large enough to balance the rapid parallel flow caused by conditions close to collecting surfaces (such as the Bohm sheath condition). In the present work, we derive and solve novel kinetic equations, allowing for such a balance, and construct distinctive transport laws for impure, collisional, edge plasmas in which the perpendicular transport is (i) due to Coulomb collisions of ions with heavy impurities, or (ii) governed by anomalous diffusion driven by electrostatic turbulence. In both the collisional and anomalous radial transport cases, we find that one single diffusion coefficient determines the radial transport of particles, momentum and heat. The parallel transport laws and parallel thermal force in the scrape-off layer assume an unconventional form, in which the relative ion-impurity flow is driven by a combination of the conventional parallel gradients, and new (i) collisional or (ii) anomalous terms involving products of radial derivatives of the temperature and density with the radial shear of the parallel velocity. Thus, in the presence of anomalous radial diffusion, the parallel ion transport cannot be entirely classical, as usually assumed in numerical edge computations. The underlying physical reason is the appearance of a novel type of parallel thermal force resulting from the combined action of anomalous diffusion and radial temperature and velocity gradients. In highly sheared flows the new terms can modify impurity penetration into the core plasma
Parallelizing an electron transport Monte Carlo simulator (MOCASIN 2.0)
International Nuclear Information System (INIS)
Schwetman, H.; Burdick, S.
1988-01-01
Electron transport simulators are tools for studying electrical properties of semiconducting materials and devices. As demands for modeling more complex devices and new materials have emerged, so have demands for more processing power. This paper documents a project to convert an electron transport simulator (MOCASIN 2.0) to a parallel processing environment. In addition to describing the conversion, the paper presents PPL, a parallel programming version of C running on a Sequent multiprocessor system. In timing tests, models that simulated the movement of 2,000 particles for 100 time steps were executed on ten processors, with a parallel efficiency of over 97%
Momentum-energy transport from turbulence driven by parallel flow shear
International Nuclear Information System (INIS)
Dong, J.Q.; Horton, W.; Bengtson, R.D.; Li, G.X.
1994-04-01
The low frequency E x B turbulence driven by the shear in the mass flow velocity parallel to the magnetic field is studied using the fluid theory in a slab configuration with magnetic shear. Ion temperature gradient effects are taken into account. The eigenfunctions of the linear instability are asymmetric about the mode rational surfaces. Quasilinear Reynolds stress induced by such asymmetric fluctuations produces momentum and energy transport across the magnetic field. Analytic formulas for the parallel and perpendicular Reynolds stress, viscosity and energy transport coefficients are given. Experimental observations of the parallel and poloidal plasma flows on TEXT-U are presented and compared with the theoretical models
MC++: A parallel, portable, Monte Carlo neutron transport code in C++
International Nuclear Information System (INIS)
Lee, S.R.; Cummings, J.C.; Nolen, S.D.
1997-01-01
MC++ is an implicit multi-group Monte Carlo neutron transport code written in C++ and based on the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, SMPs, and clusters of UNIX workstations. MC++ is being developed to provide transport capabilities to the Accelerated Strategic Computing Initiative (ASCI). It is also intended to form the basis of the first transport physics framework (TPF), which is a C++ class library containing appropriate abstractions, objects, and methods for the particle transport problem. The transport problem is briefly described, as well as the current status and algorithms in MC++ for solving the transport equation. The alpha version of the POOMA class library is also discussed, along with the implementation of the transport solution algorithms using POOMA. Finally, a simple test problem is defined and performance and physics results from this problem are discussed on a variety of platforms
TRACKING VEHICLE IN GSM NETWORK TO SUPPORT INTELLIGENT TRANSPORTATION SYSTEMS
Directory of Open Access Journals (Sweden)
Z. Koppanyi
2012-07-01
The penetration of GSM-capable devices is very high, especially in Europe. To exploit the potential of turning these mobile devices into dynamic data acquisition nodes that provide valuable data for Intelligent Transportation Systems (ITS), position information is needed. The paper describes the basic operation principles of the GSM system and provides an overview of the existing methods for deriving location data in the network. A novel positioning solution is presented that relies on handover (HO) zone measurements; the zone geometry properties are also discussed. A new concept of HO zone sequence recognition is introduced that involves the application of Probabilistic Deterministic Finite State Automata (PDFA). Both the potential commercial applications and the use of the derived position data in ITS are discussed for tracking vehicles and monitoring traffic flow. As a practical cutting-edge example, the integration possibility of the technology in the SafeTRIP platform (developed in an EC FP7 project) is presented.
Regional Atmospheric Transport Code for Hanford Emission Tracking (RATCHET)
International Nuclear Information System (INIS)
Ramsdell, J.V. Jr.; Simonen, C.A.; Burk, K.W.
1994-02-01
The purpose of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate radiation doses that individuals may have received from operations at the Hanford Site since 1944. This report deals specifically with the atmospheric transport model, Regional Atmospheric Transport Code for Hanford Emission Tracking (RATCHET). RATCHET is a major rework of the MESOILT2 model used in the first phase of the HEDR Project; only the bookkeeping framework escaped major changes. Changes to the code include (1) significant changes in the representation of atmospheric processes and (2) incorporation of Monte Carlo methods for representing uncertainty in input data, model parameters, and coefficients. To a large extent, the revisions to the model are based on recommendations of a peer working group that met in March 1991. Technical bases for other portions of the atmospheric transport model are addressed in two other documents. This report has three major sections: a description of the model, a user's guide, and a programmer's guide. These sections discuss RATCHET from three different perspectives. The first provides a technical description of the code with emphasis on details such as the representation of the model domain, the data required by the model, and the equations used to make the model calculations. The technical description is followed by a user's guide to the model with emphasis on running the code. The user's guide contains information about the model input and output. The third section is a programmer's guide to the code. It discusses the hardware and software required to run the code. The programmer's guide also discusses program structure and each of the program elements
Modeling reactive transport with particle tracking and kernel estimators
Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier
2015-04-01
Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate this system, an infinite number of particles is required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly mixed system which is limited by diffusion. Recent works have used this effect to model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect should in most cases be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in diluted systems should be modeled based on alternative conceptual models and not on a limited number of particles.
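The core KDE idea above, replacing point particles with smooth kernels so that a modest particle count yields a usable concentration field, can be sketched directly. This minimal Gaussian KDE is an illustration of the general technique, not the authors' specific estimator; the bandwidth, particle count, and Gaussian plume are assumptions.

```python
import math
import random

def kde(x, particles, h=0.2):
    """Gaussian kernel density estimate of concentration at x.

    Each particle's mass is spread over a region of width ~h instead of
    being treated as a point, which is what lets a finite particle set
    approximate a well-mixed concentration field.
    """
    n = len(particles)
    norm = n * h * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in particles) / norm

rng = random.Random(3)
# 2000 particles sampled from a unit Gaussian plume
particles = [rng.gauss(0.0, 1.0) for _ in range(2000)]

c0 = kde(0.0, particles)                 # estimated concentration at the plume center
true_c0 = 1 / math.sqrt(2 * math.pi)     # exact N(0,1) density at 0
```

A naive histogram with the same 2000 particles would be far noisier cell by cell; the kernel smoothing trades a small, controllable bias for a large variance reduction.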
Energy Technology Data Exchange (ETDEWEB)
Pinchedez, K
1999-06-01
Parallel computing meets the ever-increasing requirements for neutronic computer code speed and accuracy. In this work, two different approaches have been considered. We first parallelized the sequential algorithm used by the neutronics code CRONOS developed at the French Atomic Energy Commission. The algorithm computes the dominant eigenvalue associated with the PN simplified transport equations by a mixed finite element method. Several parallel algorithms have been developed for distributed memory machines. The performance of the parallel algorithms has been studied experimentally by implementation on a Cray T3D and theoretically by complexity models. A comparison of various parallel algorithms has confirmed the chosen implementations. We next applied a domain subdivision technique to the two-group diffusion eigenproblem. In the modal-synthesis-based method, the global spectrum is determined from the partial spectra associated with sub-domains. The eigenproblem is then expanded on a family composed, on the one hand, of eigenfunctions associated with the sub-domains and, on the other hand, of functions corresponding to the contribution from the interfaces between the sub-domains. For a 2-D homogeneous core, this modal method has been validated and its accuracy has been measured. (author)
International Nuclear Information System (INIS)
Apisit, Patchimpattapong; Alireza, Haghighat; Shedlock, D.
2003-01-01
An expert system for generating an effective mesh distribution for the SN particle transport simulation has been developed. This expert system consists of two main parts: 1) an algorithm for generating an effective mesh distribution in a serial environment, and 2) an algorithm for inference of an effective domain decomposition strategy for parallel computing. For the first part, the algorithm prepares an effective mesh distribution considering problem physics and the spatial differencing scheme. For the second part, the algorithm determines a parallel-performance-index (PPI), which is defined as the ratio of the granularity to the degree-of-coupling. The parallel-performance-index provides expected performance of an algorithm depending on computing environment and resources. A large index indicates a high granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems. (authors)
Energy Technology Data Exchange (ETDEWEB)
Apisit, Patchimpattapong [Electricity Generating Authority of Thailand, Office of Corporate Planning, Bangkruai, Nonthaburi (Thailand); Alireza, Haghighat; Shedlock, D. [Florida Univ., Department of Nuclear and Radiological Engineering, Gainesville, FL (United States)
2003-07-01
An expert system for generating an effective mesh distribution for the SN particle transport simulation has been developed. This expert system consists of two main parts: 1) an algorithm for generating an effective mesh distribution in a serial environment, and 2) an algorithm for inference of an effective domain decomposition strategy for parallel computing. For the first part, the algorithm prepares an effective mesh distribution considering problem physics and the spatial differencing scheme. For the second part, the algorithm determines a parallel-performance-index (PPI), which is defined as the ratio of the granularity to the degree-of-coupling. The parallel-performance-index provides expected performance of an algorithm depending on computing environment and resources. A large index indicates a high granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems. (authors)
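The parallel-performance-index described in this abstract is defined only as the ratio of granularity to degree-of-coupling, with larger values indicating coarse-grained work and little inter-processor communication. A minimal sketch of how such an index could rank candidate domain decompositions follows; the candidate names, units, and numbers are entirely hypothetical.

```python
def parallel_performance_index(granularity, degree_of_coupling):
    """PPI as defined in the abstract: granularity / degree-of-coupling.

    A large index suggests a high-granularity decomposition with
    relatively low coupling among processors.
    """
    return granularity / degree_of_coupling

# Hypothetical decomposition candidates:
# (name, work per processor per iteration, inter-processor coupling cost)
candidates = [
    ("x-slabs", 8.0e6, 4.0e4),
    ("z-slabs", 8.0e6, 1.0e5),
    ("blocks",  4.0e6, 6.0e4),
]

best = max(candidates, key=lambda c: parallel_performance_index(c[1], c[2]))
```

With these made-up figures the x-slab decomposition wins: equal work to the z-slab option but markedly less coupling.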
A parallel version of a multigrid algorithm for isotropic transport equations
International Nuclear Information System (INIS)
Manteuffel, T.; McCormick, S.; Yang, G.; Morel, J.; Oliveira, S.
1994-01-01
The focus of this paper is on a parallel algorithm for solving the transport equations in a slab geometry using multigrid. The spatial discretization scheme used is a finite element method called the modified linear discontinuous (MLD) scheme. The MLD scheme represents a lumped version of the standard linear discontinuous (LD) scheme. The parallel algorithm was implemented on the Connection Machine 2 (CM2). Convergence rates and timings for this algorithm on the CM2 and Cray-YMP are shown
Parallel processing implementation for the coupled transport of photons and electrons using OpenMP
Doerner, Edgardo
2016-05-01
In this work the use of OpenMP to implement the parallel processing of the Monte Carlo (MC) simulation of the coupled transport of photons and electrons is presented. This implementation was carried out using a modified EGSnrc platform which enables the use of the Microsoft Visual Studio 2013 (VS2013) environment, together with the development tools available in the Intel Parallel Studio XE 2015 (XE2015). The performance study of this new implementation was carried out on a desktop PC with a multi-core CPU, taking as a reference the performance of the original platform. The results were satisfactory, both in terms of scalability and parallelization efficiency.
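The essential structure of such an OpenMP parallelization, independent random streams per thread followed by a reduction over partial tallies, can be sketched in Python with a thread pool standing in for the OpenMP runtime. This is an analogy, not the EGSnrc/OpenMP code; the toy "photon batch" and all parameters are assumptions.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def photon_batch(n, seed):
    """Toy surrogate for a batch of photon histories: total penetration depth.

    Each batch owns an independent RNG stream keyed by its seed, the same
    role threadprivate generators play in an OpenMP Monte Carlo code.
    """
    rng = random.Random(seed)
    return sum(rng.expovariate(1.0) for _ in range(n))

seeds = range(8)
batch_size = 25_000

# Parallel run: 8 batches distributed over 4 workers, then reduced.
with ThreadPoolExecutor(max_workers=4) as pool:
    partial = list(pool.map(photon_batch, [batch_size] * 8, seeds))
parallel_mean = sum(partial) / (8 * batch_size)

# Serial reference with the same streams, for a reproducibility check.
serial_mean = sum(photon_batch(batch_size, s) for s in seeds) / (8 * batch_size)
```

Because each batch's stream is fixed by its seed, the parallel and serial runs produce bit-identical tallies, which is the standard way to validate such a parallelization before measuring its speedup.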
On the adequacy of message-passing parallel supercomputers for solving neutron transport problems
International Nuclear Information System (INIS)
Azmy, Y.Y.
1990-01-01
A coarse-grained, static-scheduling parallelization of the standard iterative scheme used for solving the discrete-ordinates approximation of the neutron transport equation is described. The parallel algorithm is based on a decomposition of the angular domain along the discrete ordinates, thus naturally producing a set of completely uncoupled systems of equations in each iteration. Implementation of the parallel code on Intel's iPSC/2 hypercube, and solutions to test problems are presented as evidence of the high speedup and efficiency of the parallel code. The performance of the parallel code on the iPSC/2 is analyzed, and a model for the CPU time as a function of the problem size (order of angular quadrature) and the number of participating processors is developed and validated against measured CPU times. The performance model is used to speculate on the potential of massively parallel computers for significantly speeding up real-life transport calculations at acceptable efficiencies. We conclude that parallel computers with a few hundred processors are capable of producing large speedups at very high efficiencies in very large three-dimensional problems. 10 refs., 8 figs
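The key property exploited above is that, within one iteration, each discrete ordinate's equation is independent of the others, so the angular domain can be split across processors with no coupling until the final reduction to scalar flux. A toy illustration (uncollided flux in a purely absorbing slab; the three-point quadrature set is illustrative, not a real SN set):

```python
import math
from concurrent.futures import ThreadPoolExecutor

# Illustrative (mu, weight) pairs on (0, 1]; not a standard quadrature.
ordinates = [(0.2, 0.3), (0.6, 0.4), (0.9, 0.3)]
sigma_t, x = 1.0, 1.5       # total cross section and depth of interest

def angular_flux(mu):
    # Uncollided angular flux for one ordinate. It depends only on its own
    # direction cosine mu, which is what makes the angular decomposition
    # embarrassingly parallel within an iteration.
    return math.exp(-sigma_t * x / mu)

# Each "processor" evaluates its ordinate independently...
with ThreadPoolExecutor() as pool:
    fluxes = list(pool.map(angular_flux, [mu for mu, _ in ordinates]))

# ...and the only communication is the quadrature sum for the scalar flux.
scalar_flux = sum(w * f for (_, w), f in zip(ordinates, fluxes))
```

In a real code the scattering source couples ordinates between iterations, so the reduction step above is exactly where the per-iteration communication happens.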
International Nuclear Information System (INIS)
Mo Zeyao
2004-11-01
Multiphysics parallel numerical simulations are usually essential to simplify research on complex physical phenomena in which several physics are tightly coupled. How to concatenate those coupled physics is very important for fully scalable parallel simulation. Meanwhile, three objectives should be balanced: the first is efficient data transfer among simulations, and the second and third are efficient parallel execution and simultaneous development of the simulation codes. Two concatenating algorithms for multiphysics parallel numerical simulations coupling radiation hydrodynamics with neutron transport on unstructured grids are presented. The first algorithm, Fully Loose Concatenation (FLC), focuses on the independence of code development and on independent running with optimal code performance. The second algorithm, Two-Level Tight Concatenation (TLTC), focuses on optimal tradeoffs among the above three objectives. Theoretical analyses of communication complexity and parallel numerical experiments on hundreds of processors on two parallel machines have shown that these two algorithms are efficient and can be generalized to other multiphysics parallel numerical simulations. In particular, algorithm TLTC is linearly scalable and has achieved optimal parallel performance. (authors)
penORNL: a parallel Monte Carlo photon and electron transport package using PENELOPE
International Nuclear Information System (INIS)
Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.
2015-01-01
The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high-performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.
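The pulse-height tally highlighted in this abstract differs from per-interaction tallies in that it bins the total energy deposited by each complete history, which is what a physical detector pulse records. A minimal sketch of that bookkeeping (the toy deposition model and all parameters are assumptions, not penORNL's physics):

```python
import random

def pulse_height_tally(histories, n_bins=10, e_max=1.0, seed=5):
    """Histogram of total energy deposited per history.

    Binning per history (rather than per interaction) is the defining
    feature of a pulse-height tally for detector simulations.
    """
    rng = random.Random(seed)
    tally = [0] * n_bins
    for _ in range(histories):
        # Toy detector response: 1-3 depositions, summed and clipped to e_max.
        n_events = rng.randint(1, 3)
        deposited = min(sum(rng.uniform(0.0, 0.4) for _ in range(n_events)), e_max)
        bin_index = min(int(deposited / e_max * n_bins), n_bins - 1)
        tally[bin_index] += 1
    return tally

spectrum = pulse_height_tally(10_000)
```

Summing depositions within a history before binning is the crucial step; scoring each interaction separately would give an energy-deposition spectrum, not a pulse-height spectrum.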
Parallel/vector algorithms for the spherical SN transport theory method
International Nuclear Information System (INIS)
Haghighat, A.; Mattis, R.E.
1990-01-01
This paper discusses vector and parallel processing of a 1-D curvilinear (i.e. spherical) S N transport theory algorithm on the Cornell National SuperComputer Facility (CNSF) IBM 3090/600E. Two different vector algorithms were developed and parallelized based on angular decomposition. It is shown that significant speedups are attainable. For example, for problems with large granularity, using 4 processors, the parallel/vector algorithm achieves speedups (for wall-clock time) of more than 4.5 relative to the old serial/scalar algorithm. Furthermore, this work has demonstrated the existing potential for the development of faster processing vector and parallel algorithms for multidimensional curvilinear geometries. (author)
Energy Technology Data Exchange (ETDEWEB)
Azmy, Yousry
2014-06-10
We employ the Integral Transport Matrix Method (ITMM) as the kernel of new parallel solution methods for the discrete ordinates approximation of the within-group neutron transport equation. The ITMM abandons the repetitive mesh sweeps of the traditional source iteration (SI) scheme in favor of constructing stored operators that account for the direct coupling factors among all the cells' fluxes and between the cells' and boundary surfaces' fluxes. The main goals of this work are to develop the algorithms that construct these operators and employ them in the solution process, determine the most suitable way to parallelize the entire procedure, and evaluate the behavior and parallel performance of the developed methods with increasing number of processes, P. The fastest observed parallel solution method, Parallel Gauss-Seidel (PGS), was used in a weak scaling comparison with the PARTISN transport code, which uses the source iteration (SI) scheme parallelized with the Koch-Baker-Alcouffe (KBA) method. Compared to the state-of-the-art SI-KBA with diffusion synthetic acceleration (DSA), this new method, even without acceleration/preconditioning, is competitive for optically thick problems as P is increased to the tens of thousands range. For the most optically thick cells tested, PGS reduced execution time by an approximate factor of three for problems with more than 130 million computational cells on P = 32,768. Moreover, the SI-DSA execution-time trend generally rises more steeply with increasing P than the PGS trend. Furthermore, the PGS method outperforms SI for the periodic heterogeneous layers (PHL) configuration problems, outperforming SI and SI-DSA on as few as P = 16 and reducing execution time by a factor of ten or more for all problems considered with more than 2 million computational cells on P = 4,096.
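The core ITMM idea in the abstract, replacing repeated mesh sweeps by a stored cell-to-cell coupling operator that is then iterated on directly, can be sketched on a toy problem. The following is a minimal illustration only: a 1-D slab with two discrete directions (S2), step differencing, vacuum boundaries, and a plain fixed-point iteration standing in for the paper's Parallel Gauss-Seidel; all problem parameters are assumptions, not values from the paper.

```python
# Toy ITMM sketch: build the flux-response operator once, then solve the
# scattering problem without any further transport sweeps.

def sweep(q, sigma_t, dx, mu=0.5773502691896257):
    """One transport sweep: cell-average scalar flux for emission density q."""
    n = len(q)
    phi = [0.0] * n
    psi_in = 0.0
    for i in range(n):                      # right-going direction (+mu)
        psi = (q[i] / 2.0 + (mu / dx) * psi_in) / (sigma_t + mu / dx)
        phi[i] += psi                       # unit quadrature weight (S2)
        psi_in = psi
    psi_in = 0.0
    for i in reversed(range(n)):            # left-going direction (-mu)
        psi = (q[i] / 2.0 + (mu / dx) * psi_in) / (sigma_t + mu / dx)
        phi[i] += psi
        psi_in = psi
    return phi

def build_itmm_operator(n, sigma_t, dx):
    """Column j = scalar-flux response to a unit emission density in cell j.
    These sweeps are done once up front; no sweeps occur afterwards."""
    cols = [sweep([1.0 if i == j else 0.0 for i in range(n)], sigma_t, dx)
            for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def solve_itmm(T, sigma_s, q, iters=200):
    """Solve phi = T(sigma_s*phi + q) by fixed-point iteration on the stored
    operator (a serial stand-in for the paper's Parallel Gauss-Seidel)."""
    n = len(q)
    phi = [0.0] * n
    for _ in range(iters):
        emis = [sigma_s * phi[i] + q[i] for i in range(n)]
        phi = [sum(T[i][j] * emis[j] for j in range(n)) for i in range(n)]
    return phi
```

With no scattering the stored operator reproduces a single direct sweep exactly, which is a quick sanity check on the construction.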
Parallelization of a three-dimensional whole core transport code DeCART
Energy Technology Data Exchange (ETDEWEB)
Jin Young, Cho; Han Gyu, Joo; Ha Yong, Kim; Moon-Hee, Chang [Korea Atomic Energy Research Institute, Yuseong-gu, Daejon (Korea, Republic of)
2003-07-01
Parallelization of the DeCART (deterministic core analysis based on ray tracing) code is presented that reduces the computational burden of the tremendous computing time and memory required in three-dimensional whole core transport calculations. The parallelization employs the concept of MPI grouping and the MPI/OpenMP mixed scheme as well. Since most of the computing time and memory are used in MOC (method of characteristics) and the multi-group CMFD (coarse mesh finite difference) calculation in DeCART, variables and subroutines related to these two modules are the primary targets for parallelization. Specifically, the ray tracing module was parallelized using a planar domain decomposition scheme and an angular domain decomposition scheme. The parallel performance of the DeCART code is evaluated by solving a rodded variation of the C5G7MOX three-dimensional benchmark problem and a simplified three-dimensional SMART PWR core problem. In the C5G7MOX problem with 24 CPUs, a maximum speedup of 21 is obtained on an IBM Regatta machine and 22 on a LINUX cluster in the MOC kernel, which indicates good parallel performance of the DeCART code. In the simplified SMART problem, the memory requirement of about 11 GBytes in the single-processor case reduces to 940 MBytes with 24 processors, which means that the DeCART code can now solve large core problems with affordable LINUX clusters. (authors)
Tracking cellular telephones as an input for developing transport models
CSIR Research Space (South Africa)
Cooper, Antony K
2010-08-01
Full Text Available in the Cape Town area. We discuss the technologies used to track participants and construct their travel routes, problems with recruiting participants, the ethical issues, and the results of the project...
Development of parallel 3D discrete ordinates transport program on JASMIN framework
International Nuclear Information System (INIS)
Cheng, T.; Wei, J.; Shen, H.; Zhong, B.; Deng, L.
2015-01-01
A parallel 3D discrete ordinates radiation transport code JSNT-S is developed, aiming at simulating real-world radiation shielding and reactor physics applications in a reasonable time. Through a patch-based domain partition algorithm, the memory requirement is shared among processors, and a space-angle parallel sweeping algorithm is developed based on a data-driven algorithm. Acceleration methods such as partial current rebalance are implemented. Correctness is verified using the VENUS-3 and other benchmark models. In the radiation shielding calculation of the Qinshan-II reactor pressure vessel model with 24.3 billion DoF, only 88 seconds are required, and an overall parallel efficiency of 44% is achieved on 1536 CPU cores. (author)
A massively parallel discrete ordinates response matrix method for neutron transport
International Nuclear Information System (INIS)
Hanebutte, U.R.; Lewis, E.E.
1992-01-01
In this paper a discrete ordinates response matrix method is formulated with anisotropic scattering for the solution of neutron transport problems on massively parallel computers. The response matrix formulation eliminates iteration on the scattering source. The nodal matrices that result from the diamond-differenced equations are utilized in a factored form that minimizes memory requirements and significantly reduces the number of arithmetic operations required per node. The red-black solution algorithm utilizes massive parallelism by assigning each spatial node to one or more processors. The algorithm is accelerated by a synthetic method in which the low-order diffusion equations are also solved by massively parallel red-black iterations. The method is implemented on a 16K Connection Machine-2, and S 8 and S 16 solutions are obtained for fixed-source benchmark problems in x-y geometry
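The red-black scheme the abstract relies on, for both the response matrix solve and the synthetic diffusion acceleration, assigns each node a checkerboard color so that every node of one color depends only on nodes of the other color, letting a half-sweep update all same-color nodes simultaneously. A minimal sketch on a toy 2-D Poisson problem (an assumption for illustration, not the paper's diamond-differenced equations):

```python
# Red-black Gauss-Seidel: within each color, every update reads only
# opposite-color neighbours, so all same-color updates are independent
# and can be mapped one-node-per-processor on a massively parallel machine.

def red_black_gauss_seidel(n, source, sweeps=500):
    """Solve the 5-point Poisson equation -lap(u) = source with u = 0 on the
    boundary of an n x n interior grid, unit spacing."""
    u = [[0.0] * (n + 2) for _ in range(n + 2)]
    for _ in range(sweeps):
        for color in (0, 1):                      # 0 = red, 1 = black
            for i in range(1, n + 1):
                for j in range(1, n + 1):
                    if (i + j) % 2 != color:
                        continue
                    u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                      + u[i][j - 1] + u[i][j + 1]
                                      + source[i - 1][j - 1])
    return u
```

Because the 5-point stencil never couples two cells of the same color, the loop order inside each color is immaterial, which is exactly what makes the half-sweeps safe to execute in parallel.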
Monte Carlo photon transport on shared memory and distributed memory parallel processors
International Nuclear Information System (INIS)
Martin, W.R.; Wan, T.C.; Abdel-Rahman, T.S.; Mudge, T.N.; Miura, K.
1987-01-01
Parallelized Monte Carlo algorithms for analyzing photon transport in an inertially confined fusion (ICF) plasma are considered. Algorithms were developed for shared memory (vector and scalar) and distributed memory (scalar) parallel processors. The shared memory algorithm was implemented on the IBM 3090/400, and timing results are presented for dedicated runs with two, three, and four processors. Two alternative distributed memory algorithms (replication and dispatching) were implemented on a hypercube parallel processor (1 through 64 nodes). The replication algorithm yields essentially full efficiency for all cube sizes; with the 64-node configuration, the absolute performance is nearly the same as with the CRAY X-MP. The dispatching algorithm also yields efficiencies above 80% in a large simulation for the 64-processor configuration
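The replication strategy in the abstract, where every node runs the full simulation on its own share of the histories with an independent random stream and the tallies are combined at the end, can be sketched as below. The worker loop runs sequentially here as a stand-in for distributed nodes, and the purely absorbing slab problem and seeds are illustrative assumptions, not details from the paper.

```python
# Replication-style parallel Monte Carlo: independent per-"processor"
# random streams, equal shares of histories, tallies averaged at the end.
import math
import random

def transmit_fraction(histories, sigma_t, thickness, rng):
    """Fraction of photons crossing a purely absorbing slab uncollided."""
    hits = 0
    for _ in range(histories):
        path = -math.log(rng.random()) / sigma_t   # sampled free path
        if path > thickness:
            hits += 1
    return hits / histories

def replicated_estimate(total_histories, sigma_t, thickness, n_procs):
    """Each 'processor' gets its own seed and an equal share of histories;
    the per-processor tallies are then reduced (averaged)."""
    share = total_histories // n_procs
    results = [transmit_fraction(share, sigma_t, thickness,
                                 random.Random(1234 + p))
               for p in range(n_procs)]
    return sum(results) / n_procs
```

The histories never communicate, which is why the abstract reports essentially full efficiency for replication at all cube sizes; the only synchronization is the final reduction.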
Vlasov modelling of parallel transport in a tokamak scrape-off layer
International Nuclear Information System (INIS)
Manfredi, G; Hirstoaga, S; Devaux, S
2011-01-01
A one-dimensional Vlasov-Poisson model is used to describe the parallel transport in a tokamak scrape-off layer. Thanks to a recently developed 'asymptotic-preserving' numerical scheme, it is possible to lift numerical constraints on the time step and grid spacing, which are no longer limited by, respectively, the electron plasma period and Debye length. The Vlasov approach provides a good velocity-space resolution even in regions of low density. The model is applied to the study of parallel transport during edge-localized modes, with particular emphasis on the particles and energy fluxes on the divertor plates. The numerical results are compared with analytical estimates based on a free-streaming model, with good general agreement. An interesting feature is the observation of an early electron energy flux, due to suprathermal electrons escaping the ions' attraction. In contrast, the long-time evolution is essentially quasi-neutral and dominated by the ion dynamics.
International Nuclear Information System (INIS)
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-01-01
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000 ® problems. These benchmark and scaling studies show promising results
Load balancing in highly parallel processing of Monte Carlo code for particle transport
International Nuclear Information System (INIS)
Higuchi, Kenji; Takemiya, Hiroshi; Kawasaki, Takuji
1998-01-01
In parallel processing of Monte Carlo (MC) codes for neutron, photon and electron transport problems, particle histories are assigned to processors, making use of the independence of the calculation for each particle. Although the main part of an MC code can easily be parallelized by this method, it is necessary, and practically difficult, to optimize the code with respect to load balancing in order to attain a high speedup ratio in highly parallel processing. In fact, with 128 processors the speedup ratio remains at only about one hundred on the test bed used for the performance evaluation. Through parallel processing of the MCNP code, which is widely used in the nuclear field, it is shown that it is difficult to attain high performance with static load balancing, especially in neutron transport problems; a load balancing method that dynamically changes the number of assigned particles, minimizing the sum of the computational and communication costs, overcomes the difficulty, resulting in a reduction of nearly fifteen percent in execution time. (author)
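The dynamic rebalancing idea described above, adjusting each processor's particle count from its measured per-particle cost so that all processors finish together, can be sketched as follows. The cost figures and the simple inverse-cost rule are illustrative assumptions; the paper's method also folds in communication cost.

```python
# Dynamic load balancing sketch: assign history counts inversely
# proportional to the measured per-particle compute cost.

def rebalance(total_particles, cost_per_particle):
    """Particle counts proportional to each processor's processing rate."""
    rates = [1.0 / c for c in cost_per_particle]   # particles per second
    total_rate = sum(rates)
    counts = [int(total_particles * r / total_rate) for r in rates]
    counts[0] += total_particles - sum(counts)     # absorb rounding remainder
    return counts

def makespan(counts, cost_per_particle):
    """Wall-clock time = the slowest processor's compute time."""
    return max(n * c for n, c in zip(counts, cost_per_particle))
```

For three processors whose per-particle costs differ by factors of two and four, an equal split leaves the slowest processor dominating the wall-clock time, while the rebalanced assignment makes all three finish at the same moment.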
Directional Transport of a Liquid Drop between Parallel-Nonparallel Combinative Plates.
Huang, Yao; Hu, Liang; Chen, Wenyu; Fu, Xin; Ruan, Xiaodong; Xie, Haibo
2018-04-17
Liquids confined between two parallel plates can perform the function of transmission, support, or lubrication in many practical applications, due to which to maintain liquids stable within their working area is very important. However, instabilities may lead to the formation of leaking drops outside the bulk liquid, thus it is necessary to transport the detached drops back without overstepping the working area and causing destructive leakage to the system. In this study, we report a novel and facile method to solve this problem by introducing the wedgelike geometry into the parallel gap to form a parallel-nonparallel combinative construction. Transport performances of this structure were investigated. The criterion for self-propelled motion was established, which seemed more difficult to meet than that in the nonparallel gap. Then, we performed a more detailed investigation into the drop dynamics under squeezing and relaxing modes because the drops can surely return in hydrophilic combinative gaps, whereas uncertainties arose in gaps with a weak hydrophobic character. Therefore, through exploration of the transition mechanism of the drop motion state, a crucial factor named turning point was discovered and supposed to be directly related to the final state of the drops. On the basis of the theoretical model of turning point, the criterion to identify whether a liquid drop returns to the parallel part under squeezing and relaxing modes was achieved. These criteria can provide guidance on parameter selection and structural optimization for the combinative gap, so that the destructive leakage in practical productions can be avoided.
Track recognition in 4 μs by a systolic trigger processor using a parallel Hough transform
International Nuclear Information System (INIS)
Klefenz, F.; Noffz, K.H.; Conen, W.; Zoz, R.; Kugel, A.; Maenner, R.; Univ. Heidelberg
1993-01-01
A parallel Hough transform processor has been developed that identifies circular particle tracks in a 2D projection of the OPAL jet chamber. The high-speed requirements imposed by the 8 bunch crossing mode of LEP could be fulfilled by computing the starting angle and the radius of curvature for each well defined track in less than 4 μs. The system consists of a Hough transform processor that determines well defined tracks, and a Euler processor that counts their number by applying the Euler relation to the thresholded result of the Hough transform. A prototype of a systolic processor has been built that handles one sector of the jet chamber. It consists of 35 x 32 processing elements that were loaded into 21 programmable gate arrays (XILINX). This processor runs at a clock rate of 40 MHz. It has been tested offline with about 1,000 original OPAL events. No deviations from the off-line simulation have been found. A trigger efficiency of 93% has been obtained. The prototype together with the associated drift time measurement unit has been installed at the OPAL detector at LEP and 100k events have been sampled to evaluate the system under detector conditions
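The Hough transform at the heart of the processor above maps each chamber hit into votes in a (starting angle, curvature) parameter space; tracks from the interaction point appear as peaks. The sketch below is a software illustration only, not the systolic hardware design: hits lie on circles through the origin, and the geometry, binning, and noise-free hits are all assumptions for the example.

```python
# Hough-transform track finding sketch: each hit votes once per starting-angle
# bin; every accumulator update is independent, which is what maps well onto
# arrays of parallel processing elements.
import math

def hough_vote(hits, n_phi=64, n_k=50, k_max=0.05):
    """Fill the (phi0, curvature) accumulator. For a circle of curvature k
    through the origin, a hit at distance d and azimuth alpha satisfies
    k = 2*sin(alpha - phi0)/d."""
    acc = [[0] * n_k for _ in range(n_phi)]
    for x, y in hits:
        d = math.hypot(x, y)
        alpha = math.atan2(y, x)
        for ip in range(n_phi):
            phi0 = ip * math.pi / n_phi
            k = 2.0 * math.sin(alpha - phi0) / d
            ik = int(k / k_max * n_k)
            if 0 <= ik < n_k:
                acc[ip][ik] += 1
    return acc

def track_hits(phi0, k, n=12):
    """Hits along a circle of curvature k through the origin, initial
    direction phi0 (a synthetic, noise-free track)."""
    return [((math.sin(phi0 + t) - math.sin(phi0)) / k,
             -(math.cos(phi0 + t) - math.cos(phi0)) / k)
            for t in [0.08 * (i + 1) for i in range(n)]]
```

All hits of a well-defined track fall into the same accumulator cell, so thresholding the accumulator (as the Euler processor stage does in hardware) isolates the track candidates.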
Parallel 4-Dimensional Cellular Automaton Track Finder for the CBM Experiment
International Nuclear Information System (INIS)
Akishina, Valentina; Kisel, Ivan
2016-01-01
The CBM experiment (FAIR/GSI, Darmstadt, Germany) will focus on the measurement of rare probes at interaction rates up to 10 MHz with data flow of up to 1 TB/s. It requires a novel read-out and data-acquisition concept with self-triggered electronics and free-streaming data. In this case resolving different collisions is not a trivial task and event building must be performed in software online. That requires full online event reconstruction and selection not only in space, but also in time, so-called 4D event building and selection. This is a task of the First-Level Event Selection (FLES). The FLES reconstruction and selection package consists of several modules: track finding, track fitting, short-lived particles finding, event building and event selection. The Cellular Automaton (CA) track finder algorithm was adapted towards time-slice-based reconstruction and included into the CBMROOT framework. In this article, we describe the modification done to the algorithm, as well as the performance of the developed time-based approach. (paper)
Lateral charge transport from heavy-ion tracks in integrated circuit chips
Zoutendyk, J. A.; Schwartz, H. R.; Nevill, L. R.
1988-01-01
A 256K DRAM has been used to study the lateral transport of charge (electron-hole pairs) induced by direct ionization from heavy-ion tracks in an IC. The qualitative charge transport has been simulated using a two-dimensional numerical code in cylindrical coordinates. The experimental bit-map data clearly show the manifestation of lateral charge transport in the creation of adjacent multiple-bit errors from a single heavy-ion track. The heavy-ion data further demonstrate the occurrence of multiple-bit errors from single ion tracks with sufficient stopping power. The qualitative numerical simulation results suggest that electric-field-funnel-aided (drift) collection accounts for a single error generated by an ion passing through a charge-collecting junction, while multiple errors from a single ion track are due to lateral diffusion of ion-generated charge.
Disruption Management in Passenger Transportation - from Air to Tracks
DEFF Research Database (Denmark)
Clausen, Jens
2007-01-01
Over the last 10 years there has been a tremendous growth in air transportation of passengers. Both airports and airspace are close to saturation with respect to capacity, leading to delays caused by disruptions. At the same time, the amount of vehicular traffic around and in all larger cities of the world has shown a dramatic increase as well. Public transportation by e.g. rail has come into focus, and hence also the service level provided by suppliers of public transportation. These transportation systems are likewise very vulnerable to disruptions. In the airline industry there is a long tradition...
Generalized fluid equations for parallel transport in collisional to weakly collisional plasmas
International Nuclear Information System (INIS)
Zawaideh, E.S.
1985-01-01
A new set of two-fluid equations which are valid from collisional to weakly collisional limits are derived. Starting from gyrokinetic equations in flux coordinates with no zeroth order drifts, a set of moment equations describing plasma transport along the field lines of a space and time dependent magnetic field are derived. No restriction on the anisotropy of the ion distribution function is imposed. In the highly collisional limit, these equations reduce to those of Braginskii while in the weakly collisional limit, they are similar to the double adiabatic or Chew, Goldberger, and Low (CGL) equations. The new transport equations are used to study the effects of collisionality, magnetic field structure, and plasma anisotropy on plasma parallel transport. Numerical examples comparing these equations with conventional transport equations show that the conventional equations may contain large errors near the sound speed (M approx. = 1). It is also found that plasma anisotropy, which is not included in the conventional equations, is a critical parameter in determining plasma transport in varying magnetic field. The new transport equations are also used to study axial confinement in multiple mirror devices from the strongly to weakly collisional regime. A new ion conduction model was worked out to extend the regime of validity of the transport equations to the low density multiple mirror regime
Cutter, Michael G; Drieghe, Denis; Liversedge, Simon P
2017-08-01
In the current study we investigated whether orthographic information available from 1 upcoming parafoveal word influences the processing of another parafoveal word. Across 2 experiments we used the boundary paradigm (Rayner, 1975) to present participants with an identity preview of the 2 words after the boundary (e.g., hot pan), a preview in which 2 letters were transposed between these words (e.g., hop tan), or a preview in which the same 2 letters were substituted (e.g., hob fan). We hypothesized that if these 2 words were processed in parallel in the parafovea then we may observe significant preview benefits for the condition in which the letters were transposed between words relative to the condition in which the letters were substituted. However, no such effect was observed, with participants fixating the words for the same amount of time in both conditions. This was the case both when the transposition was made between the final and first letter of the 2 words (e.g., hop tan as a preview of hot pan; Experiment 1) and when the transposition maintained within-word letter position (e.g., pit hop as a preview of hit pop; Experiment 2). The implications of these findings are considered in relation to serial and parallel lexical processing during reading. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
International Nuclear Information System (INIS)
Fischer, J.W.; Azmy, Y.Y.
2003-01-01
A previously reported parallel performance model for Angular Domain Decomposition (ADD) of the Discrete Ordinates method for solving multidimensional neutron transport problems is revisited for further validation. Three communication schemes: native MPI, the bucket algorithm, and the distributed bucket algorithm, are included in the validation exercise that is successfully conducted on a Beowulf cluster. The parallel performance model is comprised of three components: serial, parallel, and communication. The serial component is largely independent of the number of participating processors, P, while the parallel component decreases like 1/P. These two components are independent of the communication scheme, in contrast with the communication component that typically increases with P in a manner highly dependent on the global reduction algorithm. Correct trends for each component and each communication scheme were measured for the Arbitrarily High Order Transport (AHOT) code, thus validating the performance models. Furthermore, extensive experiments illustrate the superiority of the bucket algorithm. The primary question addressed in this research is: for a given problem size, which domain decomposition method, angular or spatial, is best suited to parallelize Discrete Ordinates methods on a specific computational platform? We address this question for three-dimensional applications via parallel performance models that include parameters specifying the problem size and system performance: the above-mentioned ADD, and a previously constructed and validated Spatial Domain Decomposition (SDD) model. We conclude that for large problems the parallel component dwarfs the communication component even on moderately large numbers of processors. The main advantages of SDD are: (a) scalability to higher numbers of processors of the order of the number of computational cells; (b) smaller memory requirement; (c) better performance than ADD on high-end platforms and large number of
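The three-component decomposition described above (a serial term independent of P, a parallel term scaling like 1/P, and a communication term growing with P) can be written down directly. The coefficients and the log-shaped communication term below are illustrative assumptions, not values fitted to AHOT.

```python
# Three-component parallel performance model sketch:
#   T(P) = T_serial + T_parallel / P + T_comm(P)
import math

def model_time(P, t_serial, t_parallel, t_comm_unit):
    """Predicted wall-clock time on P processors; the communication term is
    assumed to grow like log2(P), as for a tree-based global reduction."""
    return t_serial + t_parallel / P + t_comm_unit * math.log2(P)

def speedup(P, t_serial, t_parallel, t_comm_unit):
    return (model_time(1, t_serial, t_parallel, t_comm_unit)
            / model_time(P, t_serial, t_parallel, t_comm_unit))
```

The model captures the qualitative conclusion in the abstract: while the parallel component dominates, speedup keeps rising with P, but once the communication term catches up, adding processors begins to hurt.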
Parallel transport of long mean-free-path plasma along open magnetic field lines: Parallel heat flux
International Nuclear Information System (INIS)
Guo Zehua; Tang Xianzhu
2012-01-01
In a long mean-free-path plasma where temperature anisotropy can be sustained, the parallel heat flux has two components with one associated with the parallel thermal energy and the other the perpendicular thermal energy. Due to the large deviation of the distribution function from local Maxwellian in an open field line plasma with low collisionality, the conventional perturbative calculation of the parallel heat flux closure in its local or non-local form is no longer applicable. Here, a non-perturbative calculation is presented for a collisionless plasma in a two-dimensional flux expander bounded by absorbing walls. Specifically, closures of previously unfamiliar form are obtained for ions and electrons, which relate two distinct components of the species parallel heat flux to the lower order fluid moments such as density, parallel flow, parallel and perpendicular temperatures, and the field quantities such as the magnetic field strength and the electrostatic potential. The plasma source and boundary condition at the absorbing wall enter explicitly in the closure calculation. Although the closure calculation does not take into account wave-particle interactions, the results based on passing orbits from steady-state collisionless drift-kinetic equation show remarkable agreement with fully kinetic-Maxwell simulations. As an example of the physical implications of the theory, the parallel heat flux closures are found to predict a surprising observation in the kinetic-Maxwell simulation of the 2D magnetic flux expander problem, where the parallel heat flux of the parallel thermal energy flows from low to high parallel temperature region.
Parallel Implementation and Scaling of an Adaptive Mesh Discrete Ordinates Algorithm for Transport
International Nuclear Information System (INIS)
Howell, L H
2004-01-01
Block-structured adaptive mesh refinement (AMR) uses a mesh structure built up out of locally-uniform rectangular grids. In the BoxLib parallel framework used by the Raptor code, each processor operates on one or more of these grids at each refinement level. The decomposition of the mesh into grids and the distribution of these grids among processors may change every few timesteps as a calculation proceeds. Finer grids use smaller timesteps than coarser grids, requiring additional work to keep the system synchronized and ensure conservation between different refinement levels. In a paper for NECDC 2002 I presented preliminary results on implementation of parallel transport sweeps on the AMR mesh, conjugate gradient acceleration, accuracy of the AMR solution, and scalar speedup of the AMR algorithm compared to a uniform fully-refined mesh. This paper continues with a more in-depth examination of the parallel scaling properties of the scheme, both in single-level and multi-level calculations. Both sweeping and setup costs are considered. The algorithm scales with acceptable performance to several hundred processors. Trends suggest, however, that this is the limit for efficient calculations with traditional transport sweeps, and that modifications to the sweep algorithm will be increasingly needed as job sizes in the thousands of processors become common
Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code
Energy Technology Data Exchange (ETDEWEB)
Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian
2017-02-01
The THOR neutral particle transport code enables simulation of complex geometries for various problems from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V requiring computational efficiency. This has motivated various improvements including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code’s efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL’s Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former’s accuracy is bounded by the variability of communication on Falcon while the latter has an error on the order of 1%.
Recent Improvements to the IMPACT-T Parallel Particle Tracking Code
International Nuclear Information System (INIS)
Qiang, J.; Pogorelov, I.V.; Ryne, R.
2006-01-01
The IMPACT-T code is a parallel three-dimensional quasi-static beam dynamics code for modeling high brightness beams in photoinjectors and RF linacs. Developed under the US DOE Scientific Discovery through Advanced Computing (SciDAC) program, it includes several key features including a self-consistent calculation of 3D space-charge forces using a shifted and integrated Green function method, multiple energy bins for beams with large energy spread, and models for treating RF standing wave and traveling wave structures. In this paper, we report on recent improvements to the IMPACT-T code including modeling traveling wave structures, short-range transverse and longitudinal wakefields, and longitudinal coherent synchrotron radiation through bending magnets
A non overlapping parallel domain decomposition method applied to the simplified transport equations
International Nuclear Information System (INIS)
Lathuiliere, B.; Barrault, M.; Ramet, P.; Roman, J.
2009-01-01
A reactivity computation requires computing the highest eigenvalue of a generalized eigenvalue problem. An inverse power algorithm is commonly used. Very fine modelizations are difficult to tackle for our sequential solver, based on the simplified transport equations, in terms of memory consumption and computational time. We therefore propose a non-overlapping domain decomposition method for the approximate resolution of the linear system to be solved at each inverse power iteration. Our method requires little development effort, as the inner multigroup solver can be re-used without modification, and allows us to adapt the numerical resolution locally (mesh, finite element order). Numerical results are obtained with a parallel implementation of the method on two different cases with a pin-by-pin discretization. These results are analyzed in terms of memory consumption and parallel efficiency. (authors)
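The inverse power iteration mentioned above repeatedly solves a linear system with the loss operator and the previous fission source, converging to the dominant eigenvalue (the multiplication factor). A minimal serial sketch with 2x2 stand-in operators follows; the matrices, the direct 2x2 solve (which stands in for the domain-decomposed solver), and the normalization scheme are all illustrative assumptions.

```python
# Inverse power iteration sketch for the generalized eigenvalue problem
#   L * phi = (1/k) * F * phi
# where L is the loss (transport) operator and F the fission operator.

def solve2(A, b):
    """Direct 2x2 solve; a stand-in for the inner (parallel) linear solver."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def power_iteration(L, F, iters=100):
    """k-eigenvalue iteration: solve L*phi_new = F*phi_old / k each pass,
    then update k from the ratio of successive fission sources."""
    phi = [1.0, 1.0]
    k = 1.0
    for _ in range(iters):
        src = [F[i][0] * phi[0] + F[i][1] * phi[1] for i in range(2)]
        phi = solve2(L, [s / k for s in src])
        new_src = [F[i][0] * phi[0] + F[i][1] * phi[1] for i in range(2)]
        k *= sum(new_src) / sum(src)
        norm = max(abs(p) for p in phi)
        phi = [p / norm for p in phi]      # keep the flux bounded
    return k, phi
```

Each pass requires one linear solve with L, which is exactly the system the abstract's domain decomposition method approximates in parallel.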
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
International Nuclear Information System (INIS)
Satake, Shin-ichi; Kanamori, Hiroyuki; Kunugi, Tomoaki; Sato, Kazuho; Ito, Tomoyoshi; Yamamoto, Keisuke
2007-01-01
We have developed a parallel algorithm for micro-digital-holographic particle-tracking velocimetry. The algorithm is used in (1) numerical reconstruction of a particle image from a digital hologram, and (2) searching for particles. The numerical reconstruction from the digital hologram makes use of the Fresnel diffraction equation and the FFT (fast Fourier transform), whereas the particle search algorithm looks for local maxima of gradation in a reconstruction field represented by a 3D matrix. To achieve high-performance computing for both calculations (reconstruction and particle search), two memory partitions are allocated to the 3D matrix. In this matrix, the reconstruction part consists of horizontally placed 2D memory partitions on the x-y plane for the FFT, whereas the particle search part consists of vertically placed 2D memory partitions set along the z axis. Consequently, scalability is obtained in proportion to the number of processor elements; benchmarks were carried out for parallel computation on an SGI Altix machine
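The particle-search step described above reduces to locating local intensity maxima in the reconstructed 3-D field, a task that partitions naturally because each voxel test is independent. A minimal serial sketch on a tiny synthetic field (the field values, threshold, and 6-neighbour criterion are illustrative assumptions):

```python
# Particle search sketch: particles are reported at voxels brighter than a
# threshold and brighter than all 6 face neighbours. Every voxel test is
# independent, so the volume can be split into memory partitions and scanned
# by separate processor elements.

def local_maxima(field, threshold=0.5):
    """Return (i, j, k) of every interior voxel that is a strict local max."""
    nx, ny, nz = len(field), len(field[0]), len(field[0][0])
    neighbours = [(-1, 0, 0), (1, 0, 0), (0, -1, 0),
                  (0, 1, 0), (0, 0, -1), (0, 0, 1)]
    peaks = []
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            for k in range(1, nz - 1):
                v = field[i][j][k]
                if v > threshold and all(
                        v > field[i + di][j + dj][k + dk]
                        for di, dj, dk in neighbours):
                    peaks.append((i, j, k))
    return peaks
```

In the paper's layout, the z-aligned memory partitions let each processor run this scan over its own vertical slab of the volume.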
Energy Technology Data Exchange (ETDEWEB)
Moryakov, A. V., E-mail: sailor@orc.ru [National Research Centre Kurchatov Institute (Russian Federation)
2016-12-15
An algorithm for solving the time-dependent transport equation in the P{sub m}S{sub n} group approximation with the use of parallel computations is presented. The algorithm is implemented in the LUCKY-TD code for supercomputers employing the MPI standard for the data exchange between parallel processes.
Zerr, Robert Joseph
2011-12-01
The integral transport matrix method (ITMM) has been used as the kernel of new parallel solution methods for the discrete ordinates approximation of the within-group neutron transport equation. The ITMM abandons the repetitive mesh sweeps of the traditional source iteration (SI) scheme in favor of constructing stored operators that account for the direct coupling factors among all the cells and between the cells and boundary surfaces. The main goals of this work were to develop the algorithms that construct these operators and employ them in the solution process, determine the most suitable way to parallelize the entire procedure, and evaluate the behavior and performance of the developed methods for increasing numbers of processes. This project compares the effectiveness of the ITMM with the SI scheme parallelized with the Koch-Baker-Alcouffe (KBA) method. The primary parallel solution method involves a decomposition of the domain into smaller spatial sub-domains, each with its own transport matrices, coupled together via interface boundary angular fluxes. Each sub-domain has its own set of ITMM operators and represents an independent transport problem. Multiple iterative parallel solution methods have been investigated, including parallel block Jacobi (PBJ), parallel red/black Gauss-Seidel (PGS), and parallel GMRES (PGMRES). The fastest observed parallel solution method, PGS, was used in a weak scaling comparison with the PARTISN code. Compared to the state-of-the-art SI-KBA with diffusion synthetic acceleration (DSA), this new method without acceleration/preconditioning is not competitive for any problem parameters considered. The best comparisons occur for problems that are difficult for SI DSA, namely highly scattering and optically thick. SI DSA execution time curves are generally steeper than the PGS ones. However, until further testing is performed it cannot be concluded that SI DSA does not outperform the ITMM with PGS even on several thousand or tens of
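The block Jacobi structure that makes PBJ attractive for parallelization can be illustrated on a generic partitioned linear system. This is a sketch of the lagged-coupling idea only, on assumed toy data, and does not construct the ITMM operators themselves: each block row is solved with the off-block contributions frozen at the previous iterate, so every block solve is independent.

```python
import numpy as np

def parallel_block_jacobi(A, b, blocks, iters=100):
    # One PBJ sweep solves every diagonal block independently, with the
    # coupling to the other blocks lagged at the previous iterate --
    # the property that lets all block solves run in parallel.
    x = np.zeros_like(b)
    for _ in range(iters):
        x_new = np.empty_like(x)
        for sl in blocks:  # each solve could run on its own process
            off = A[sl, :] @ x - A[sl, sl] @ x[sl]  # off-block coupling only
            x_new[sl] = np.linalg.solve(A[sl, sl], b[sl] - off)
        x = x_new
    return x
```

For a diagonally dominant system the iteration converges to the direct solution, which is the check used below.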
Gudmundsson, Vidar; Abdullah, Nzar Rauf; Sitek, Anna; Goan, Hsi-Sheng; Tang, Chi-Shung; Manolescu, Andrei
2018-06-01
We calculate the current correlations for the steady-state electron transport through multi-level parallel quantum dots embedded in a short quantum wire, that is placed in a non-perfect photon cavity. We account for the electron-electron Coulomb interaction, and the para- and diamagnetic electron-photon interactions with a stepwise scheme of configuration interactions and truncation of the many-body Fock spaces. In the spectral density of the temporal current-current correlations we identify all the transitions, radiative and non-radiative, active in the system in order to maintain the steady state. We observe strong signs of two types of Rabi oscillations.
Vectorization and parallelization of Monte-Carlo programs for calculation of radiation transport
International Nuclear Information System (INIS)
Seidel, R.
1995-01-01
The versatile MCNP-3B Monte Carlo code, written in FORTRAN 77 for simulation of the radiation transport of neutral particles, has been subjected to vectorization and parallelization of essential parts without touching its versatility. The vectorization is not dependent on a specific computer. Several sample tasks have been selected in order to test the vectorized MCNP-3B code against the scalar MCNP-3B code. The samples are representative examples of the 3-D calculations to be performed for simulation of radiation transport in neutron and reactor physics: (1) a 4π neutron detector, (2) a high-energy calorimeter, and (3) the PROTEUS benchmark (conversion rates and neutron multiplication factors for the HCLWR (High Conversion Light Water Reactor)).
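The core idea of vectorizing a Monte Carlo history loop is to advance all particles at once with array operations instead of a history-by-history scalar loop. The toy problem below, a mono-energetic beam through a purely absorbing slab, is an assumed simplification for illustration and is in no way the MCNP-3B physics.

```python
import numpy as np

def slab_transmission(n_histories, sigma_t, thickness, seed=0):
    # Sample all free-path lengths in one vectorized call and count the
    # histories whose first flight carries them through the slab.
    # Analytic answer for comparison: exp(-sigma_t * thickness).
    rng = np.random.default_rng(seed)
    free_paths = rng.exponential(1.0 / sigma_t, size=n_histories)
    return np.mean(free_paths > thickness)  # uncollided fraction
```

With 2 x 10^5 histories the estimate agrees with exp(-1) ≈ 0.368 to well within statistical error for a unit-thickness, unit-cross-section slab.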
Global restructuring of the CPM-2 transport algorithm for vector and parallel processing
International Nuclear Information System (INIS)
Vujic, J.L.; Martin, W.R.
1989-01-01
The CPM-2 code is an assembly transport code based on the collision probability (CP) method. It can in principle be applied to global reactor problems, but its excessive computational demands prevent this application. Therefore, a new transport algorithm for CPM-2 has been developed for vector-parallel architectures, which has resulted in an overall factor of 20 speedup (wall clock) on the IBM 3090-600E. This paper presents the detailed results of this effort as well as a brief description of ongoing efforts to remove some of the modeling limitations in CPM-2 that inhibit its use for global applications, such as the pure CP treatment and the assumption of isotropic scattering.
International Nuclear Information System (INIS)
Rosa, M.; Warsa, J. S.; Chang, J. H.
2006-01-01
A Fourier analysis is conducted for the discrete-ordinates (SN) approximation of the neutron transport problem solved with Richardson iteration (Source Iteration) and with Richardson iteration preconditioned with Transport Synthetic Acceleration (TSA), using the Parallel Block-Jacobi (PBJ) algorithm. Both 'traditional' TSA (TTSA) and a 'modified' TSA (MTSA), in which only the scattering in the low-order equations is reduced by some non-negative factor β < 1, are considered. The results for the un-accelerated algorithm show that convergence of the PBJ algorithm can degrade. The PBJ algorithm with TTSA can be effective provided the β parameter is properly tuned for a given scattering ratio c, but it is potentially unstable. Compared to TTSA, MTSA is less sensitive to the choice of β, more effective for the same computational effort, and unconditionally stable. (authors)
International Nuclear Information System (INIS)
Satake, Shinsuke; Okamoto, Masao; Nakajima, Noriyoshi; Takamaru, Hisanori
2005-11-01
A neoclassical transport simulation code (FORTEC-3D) applicable to three-dimensional configurations has been developed using High Performance Fortran (HPF). The adoption of computing techniques for parallelization, together with a hybrid simulation model for the δf Monte Carlo transport simulation including non-local transport effects in three-dimensional configurations, makes it possible to simulate the dynamics of global, non-local transport phenomena with a self-consistent radial electric field within a reasonable computation time. In this paper, development of the transport code using HPF is reported. Optimization techniques used to achieve both high vectorization and parallelization efficiency, the adoption of a parallel random number generator, and benchmark results are shown. (author)
Transport of secondary electrons and reactive species in ion tracks
Surdutovich, Eugene; Solov'yov, Andrey V.
2015-08-01
The transport of reactive species brought about by ions traversing a tissue-like medium is analysed analytically. Secondary electrons ejected by ions are capable of ionizing other molecules; the transport of these generations of electrons is studied using the random walk approximation for as long as the electrons remain ballistic. Then, the distribution of solvated electrons produced as a result of the interaction of low-energy electrons with water molecules is obtained. The radial distribution of energy loss by ions and secondary electrons to the medium yields the initial radial dose distribution, which can be used as the initial condition for the predicted shock waves. The formation, diffusion, and chemical evolution of hydroxyl radicals in liquid water are studied as well. COST Action Nano-IBCT: Nano-scale Processes Behind Ion-Beam Cancer Therapy.
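The random-walk treatment of ballistic secondary electrons can be illustrated with a simple ensemble calculation. This is a stand-in sketch under assumed parameters (a fixed step length playing the role of an elastic mean free path), not the paper's analytical model; for an unbiased isotropic walk the mean squared displacement grows as n_steps × step_length².

```python
import numpy as np

def electron_msd(n_electrons, n_steps, step_length=1.0, seed=1):
    # Ensemble of isotropic 3D random walks. Unit step directions are
    # obtained by normalizing Gaussian vectors; the positions after
    # n_steps give the mean squared radial displacement.
    rng = np.random.default_rng(seed)
    steps = rng.normal(size=(n_steps, n_electrons, 3))
    steps /= np.linalg.norm(steps, axis=2, keepdims=True)  # unit directions
    displacement = step_length * steps.sum(axis=0)
    return np.mean(np.sum(displacement ** 2, axis=1))
```

With 20,000 walkers and 50 unit steps the estimate clusters tightly around the theoretical value of 50.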
Vlasov modelling of parallel transport in a tokamak scrape-off layer
Energy Technology Data Exchange (ETDEWEB)
Manfredi, G [Institut de Physique et Chimie des Materiaux, CNRS and Universite de Strasbourg, BP 43, F-67034 Strasbourg (France); Hirstoaga, S [INRIA Nancy Grand-Est and Institut de Recherche en Mathematiques Avancees, 7 rue Rene Descartes, F-67084 Strasbourg (France); Devaux, S, E-mail: Giovanni.Manfredi@ipcms.u-strasbg.f, E-mail: hirstoaga@math.unistra.f, E-mail: Stephane.Devaux@ccfe.ac.u [JET-EFDA, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom)
2011-01-15
A one-dimensional Vlasov-Poisson model is used to describe the parallel transport in a tokamak scrape-off layer. Thanks to a recently developed 'asymptotic-preserving' numerical scheme, it is possible to lift numerical constraints on the time step and grid spacing, which are no longer limited by, respectively, the electron plasma period and Debye length. The Vlasov approach provides a good velocity-space resolution even in regions of low density. The model is applied to the study of parallel transport during edge-localized modes, with particular emphasis on the particle and energy fluxes on the divertor plates. The numerical results are compared with analytical estimates based on a free-streaming model, with good general agreement. An interesting feature is the observation of an early electron energy flux, due to suprathermal electrons escaping the ions' attraction. In contrast, the long-time evolution is essentially quasi-neutral and dominated by the ion dynamics.
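The free-streaming part of such a Vlasov solver can be sketched with a semi-Lagrangian update: trace characteristics back by v·dt and interpolate. This minimal sketch handles the spatial advection only, on an assumed periodic grid; the actual asymptotic-preserving scheme also treats the Poisson coupling and the stiff electron time scales.

```python
import numpy as np

def free_streaming_step(f, x, v, dt):
    # Semi-Lagrangian step: f(x, v, t+dt) = f(x - v*dt, v, t), evaluated
    # per velocity column by periodic linear interpolation on the
    # uniform grid x (endpoint excluded).
    length = x[-1] - x[0] + (x[1] - x[0])  # period of the uniform grid
    f_new = np.empty_like(f)
    for j, vj in enumerate(v):
        x_back = (x - vj * dt - x[0]) % length + x[0]
        f_new[:, j] = np.interp(x_back, x, f[:, j], period=length)
    return f_new
```

A spatially uniform distribution is an exact steady state of pure streaming, which gives a quick correctness check.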
International Nuclear Information System (INIS)
Brown, P.; Chang, B.
1998-01-01
The linear Boltzmann transport equation (BTE) is an integro-differential equation arising in deterministic models of neutral and charged particle transport. In slab (one-dimensional Cartesian) geometry and certain higher-dimensional cases, Diffusion Synthetic Acceleration (DSA) is known to be an effective algorithm for the iterative solution of the discretized BTE. Fourier and asymptotic analyses have been applied to various idealizations (e.g., problems on infinite domains with constant coefficients) to obtain sharp bounds on the convergence rate of DSA in such cases. While DSA has been shown to be a highly effective acceleration (or preconditioning) technique in one-dimensional problems, it has been observed to be less effective in higher dimensions. This is due in part to the expense of solving the related diffusion linear system. We investigate here the effectiveness of a parallel semicoarsening multigrid (SMG) solution approach to DSA preconditioning in several three-dimensional problems. In particular, we consider the algorithmic and implementation scalability of a parallel SMG-DSA preconditioner on several types of test problems.
Hsu, Liang-Yan; Wu, Ning; Rabitz, Herschel
2016-11-30
We investigate electron transport through series and parallel intramolecular circuits in the framework of the multi-level Redfield theory. Based on the assumption of weak monomer-bath couplings, the simulations depict the length and temperature dependence in six types of intramolecular circuits. In the tunneling regime, we find that the intramolecular circuit rule is only valid in the weak monomer coupling limit. In the thermally activated hopping regime, for circuits based on two different molecular units M_a and M_b with distinct activation energies E_act,a > E_act,b, the activation energies of M_a and M_b in series are nearly the same as E_act,a while those in parallel are nearly the same as E_act,b. This study gives a comprehensive description of electron transport through intramolecular circuits from tunneling to thermally activated hopping. We hope that this work can motivate additional studies to design intramolecular circuits based on different types of building blocks, and to explore the corresponding circuit laws and the length and temperature dependence of conductance.
Parallel unstructured mesh optimisation for 3D radiation transport and fluids modelling
International Nuclear Information System (INIS)
Gorman, G.J.; Pain, Ch. C.; Oliveira, C.R.E. de; Umpleby, A.P.; Goddard, A.J.H.
2003-01-01
In this paper we describe the theory and application of a parallel mesh optimisation procedure to obtain self-adapting finite element solutions on unstructured tetrahedral grids. The optimisation procedure adapts the tetrahedral mesh to the solution of a radiation transport or fluid flow problem without sacrificing the integrity of the boundary (geometry), or internal boundaries (regions) of the domain. The objective is to obtain a mesh which has both a uniform interpolation error in every direction and element shapes of good quality. This is accomplished with the use of a non-Euclidean (anisotropic) metric which is related to the Hessian of the solution field. Appropriate scaling of the metric enables the resolution of multi-scale phenomena as encountered in transient incompressible fluids and multigroup transport calculations. The resulting metric is used to calculate element size and shape quality. The mesh optimisation method is based on a series of mesh connectivity and node position searches of the landscape defining mesh quality, which is gauged by a functional. The mesh modification thus fits the solution field(s) in an optimal manner. The parallel mesh optimisation/adaptivity procedure presented in this paper is of general applicability. We illustrate this by applying it to a transient CFD (computational fluid dynamics) problem. Incompressible flow past a cylinder at moderate Reynolds numbers is modelled to demonstrate that the mesh can follow transient flow features. (authors)
Yang, Jianwen
2012-04-01
A general analytical solution is derived by using the Laplace transformation to describe transient reactive silica transport in a conceptualized 2-D system involving a set of parallel fractures embedded in an impermeable host rock matrix, taking into account hydrodynamic dispersion and advection of silica transport along the fractures, molecular diffusion from each fracture to the intervening rock matrix, and dissolution of quartz. A special analytical solution is also developed by ignoring the longitudinal hydrodynamic dispersion term while keeping the other conditions the same. The general and special solutions are in the form of a double infinite integral and a single infinite integral, respectively, and can be evaluated using the Gauss-Legendre quadrature technique. A simple criterion is developed to determine under what conditions the general analytical solution can be approximated by the special analytical solution. It is proved analytically that the general solution always lags behind the special solution, unless a dimensionless parameter is less than a critical value. Several illustrative calculations are undertaken to demonstrate the effect of fracture spacing, fracture aperture and fluid flow rate on silica transport. The analytical solutions developed here can serve as a benchmark to validate numerical models that simulate reactive mass transport in fractured porous media.
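The Gauss-Legendre evaluation of a semi-infinite integral can be sketched generically: map (0, ∞) onto (0, 1) and apply the standard quadrature rule. The substitution x = t/(1−t) used below is one common choice, an assumption made here for illustration rather than the paper's specific change of variables, and it is shown on a generic integrand rather than the silica-transport kernels.

```python
import numpy as np

def integrate_semi_infinite(f, n_nodes=64):
    # Substitute x = t/(1-t), dx = dt/(1-t)^2, mapping (0, 1) -> (0, inf),
    # then apply n_nodes-point Gauss-Legendre quadrature on (0, 1).
    t, w = np.polynomial.legendre.leggauss(n_nodes)
    t = 0.5 * (t + 1.0)  # nodes from (-1, 1) to (0, 1)
    w = 0.5 * w          # weights rescaled for the half-length interval
    x = t / (1.0 - t)
    return float(np.sum(w * f(x) / (1.0 - t) ** 2))
```

A quick check: the rule reproduces the known value of the integral of e^(−x) over (0, ∞), which is 1.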
TRACKING AND TRACING SOLUTION FOR DANGEROUS GOODS CARRIED BY INTERMODAL TRANSPORT
Directory of Open Access Journals (Sweden)
Marek Kvet
2014-03-01
This paper deals with the problem of designing a complex tracking and tracing solution for dangerous goods transportation with the support of modern information technologies. This research activity is part of the “ChemLogTT” [2] project at the University of Žilina. The main goal of our contribution is to present the basic conception of a complex software tool developed for monitoring and analyzing such dangerous goods transportation.
Energy Technology Data Exchange (ETDEWEB)
Sanchez Vicente, A.
2013-12-01
The EEA works in the transport area to assess the impacts of the sector on human health and the environment. This work also allows the EEA to monitor the progress of integrating transport and environmental policies, and to inform the EU, EEA member countries and the public about such progress. This is achieved by the production of relevant indicators that track progress towards environment-related policy targets for transport, as well as through the elaboration of periodic assessments that cover all transport modes and the impacts of transport on the environment. The annual TERM report aims to enable policymakers to gauge the progress of those policies aiming to improve the environmental performance of the transport system as a whole. TERM 2013 has two distinct parts. Part A provides an annual assessment of the EU's transport and environment policies based on the TERM-CSI, a selection of 12 indicators from the broader set of EEA transport indicators that enables monitoring of the most important aspects of transport. Part B focuses on urban transport and its effects on the environment. (LN)
International Nuclear Information System (INIS)
Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.
1997-01-01
Work in the field of parallel processing has developed through research activities using several numerical Monte Carlo simulations related to current basic or applied problems of nuclear and particle physics. For applications using the GEANT code, development and improvement work was done on the parts simulating low-energy physical phenomena such as radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking at low energies, down to the thermal region, has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other work in this field concerns simulations for nuclear medicine applications such as the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness), and methods for dosimetric calculations. In particular, these calculations are suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other work in the same field concerns simulation of electron channelling in crystals and of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for monitoring natural and artificial radioactivity in the environment.
Energy Technology Data Exchange (ETDEWEB)
Erkut, M. Hakan [Physics Engineering Department, Faculty of Science and Letters, Istanbul Technical University, 34469, Istanbul (Turkey); Çatmabacak, Onur, E-mail: mherkut@gmail.com [Institute for Computational Sciences Y11 F74, University of Zurich, Winterthurerstrasse 190, CH-8057 Zurich (Switzerland)
2017-11-01
The neutron stars in low-mass X-ray binaries (LMXBs) are usually thought to be weakly magnetized objects accreting matter from their low-mass companions in the form of a disk. Albeit weak compared to those in young neutron-star systems, the neutron-star magnetospheres in LMXBs can play an important role in determining the correlations between spectral and temporal properties. Parallel tracks appearing in the kilohertz (kHz) quasi-periodic oscillation (QPO) frequency versus X-ray flux plane can be used as a tool to study the magnetosphere–disk interaction in neutron-star LMXBs. For dynamically important weak fields, the formation of a non-Keplerian magnetic boundary layer at the innermost disk truncated near the surface of the neutron star is highly likely. Such a boundary region may harbor oscillatory modes of frequencies in the kHz range. We generate parallel tracks using the boundary region model of kHz QPOs. We also present the direct application of our model to the reproduction of the observed parallel tracks of individual sources such as 4U 1608–52, 4U 1636–53, and Aql X-1. We reveal how the radial width of the boundary layer must vary in the long-term flux evolution of each source to regenerate the parallel tracks. The run of the radial width looks similar for different sources and can be fitted by a generic model function describing the average steady behavior of the boundary region over the long term. The parallel tracks then correspond to the possible quasi-steady states the source can occupy around the average trend.
International Nuclear Information System (INIS)
Zhang, B.; Li, G.; Wang, W.; Shangguan, D.; Deng, L.
2015-01-01
This paper introduces the strategy of multilevel hybrid parallelism of the JCOGIN infrastructure for Monte Carlo particle transport in large-scale full-core pin-by-pin simulations. Particle parallelism, domain-decomposition parallelism and MPI/OpenMP parallelism are designed and implemented. In testing, JMCT demonstrates the parallel scalability of JCOGIN, reaching 80% parallel efficiency on 120,000 cores for the pin-by-pin computation of the BEAVRS benchmark. (author)
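The efficiency figure quoted above follows the standard definition E = S/n, with speedup S the ratio of serial to parallel run time. The timing values in the check below are illustrative only, chosen so that the result matches the quoted 0.8 on 120,000 cores.

```python
def parallel_efficiency(serial_time, parallel_time, n_cores):
    # Parallel efficiency E = S / n, where the speedup S is the ratio of
    # the one-core run time to the n-core run time.
    return (serial_time / parallel_time) / n_cores
```

A perfectly scaling code has E = 1; the 80% figure means each core delivers 0.8 of its ideal share of the work.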
Making Tracks 1.0: Action Researching an Active Transportation Education Program
Robinson, Daniel; Foran, Andrew; Robinson, Ingrid
2014-01-01
This paper reports on the results of the first cycle of an action research project. The objective of this action research was to examine the implementation of a school-based active transportation education program (Making Tracks). A two-cycle action research design was employed in which elementary school students' (ages 7-9), middle school…
Muna, Joseph T.; Prescott, Kevin
2011-08-01
Traditionally, freight transport and telematics solutions that exploit the GPS capabilities of in-vehicle devices to provide innovative Location Based Services (LBS) including track and trace transport systems have been the preserve of a select cluster of transport operators and organisations with the financial resources to develop the requisite custom software and hardware on which they are deployed. The average cost of outfitting a typical transport vehicle or truck with the latest Intelligent Transport System (ITS) increases the cost of the vehicle by anything from a couple to several thousand Euros, depending on the complexity and completeness of the solution. Though this does not generally deter large fleet transport owners since they typically get Return on Investment (ROI) based on economies of scale, it presents a barrier for the smaller independent entities that constitute the majority of freight transport operators [1]. The North Sea Freight Intelligent Transport Solution (NS FRITS), a project co-funded by the European Commission Interreg IVB North Sea Region Programme, aims to make acquisition of such transport solutions easier for those organisations that cannot afford the expensive, bespoke systems used by their larger competitors. The project addresses transport security threats by developing a system capable of informing major actors along the freight logistics supply chain of changing circumstances within the region's major transport corridors and between transport modes. The project also addresses issues of freight volumes, inter-modality, congestion and eco-mobility [2].
International Nuclear Information System (INIS)
Beucher, J.
2007-10-01
PIM (Parallel Ionization Multiplier) is a multi-stage micro-pattern gaseous detector using micro-mesh technology. This new device, based on the Micromegas (micro-mesh gaseous structure) detector principle of operation, offers good characteristics for minimum-ionizing-particle track detection. However, detectors of this kind placed in a hadron environment suffer discharges which appreciably degrade the detection efficiency and pose a hazard to the front-end electronics. In order to minimize these strong events, it is convenient to perform the charge multiplication in several successive steps. Within the framework of a European hadron physics project we have investigated the multi-stage PIM detector for high hadron flux applications. For this part of the research and development, a systematic study of many geometrical configurations of two amplification stages separated by a transfer space, operated with the gaseous mixture Ne + 10% CO2, has been performed. Beam tests performed with high-energy hadrons at the CERN facility have shown that the discharge probability can be strongly reduced with a suitable PIM device. A discharge rate below 10^-9 per incident hadron and a spatial resolution of 51 μm have been measured at the operating point at the beginning of the efficiency plateau (>96%). (author)
Directory of Open Access Journals (Sweden)
Alessandro Palla
2017-05-01
In the last few years, power wheelchairs have become the only devices able to provide autonomy and independence to people with motor impairments. In particular, many power wheelchairs feature robotic arms for gesture emulation, such as interaction with objects. However, complex robotic arms often require a joystick to be controlled; this makes the arm hard to control for impaired users. Paradoxically, if the user were able to proficiently control such devices, he would not need them. For that reason, this paper presents a highly autonomous robotic arm, designed to minimize the effort necessary for its control. The arm features an easy-to-use human-machine interface and is controlled by a computer vision algorithm implementing Position Based Visual Servoing (PBVS) control. This was realized by extracting features from the camera and fusing them with the distance from the target, obtained by a proximity sensor. The Parallel Tracking and Mapping (PTAM) algorithm was used to find the 3D position of the task object in the camera reference system. The visual servoing algorithm was implemented in real time on an embedded platform. Each part of the control loop was developed in the Robot Operating System (ROS) environment, which allows the previous algorithms to be implemented as different nodes. Theoretical analysis, simulations and in-system measurements proved the effectiveness of the proposed solution.
Overview of development and design of MPACT: Michigan parallel characteristics transport code
Energy Technology Data Exchange (ETDEWEB)
Kochunas, B.; Collins, B.; Jabaay, D.; Downar, T. J.; Martin, W. R. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2200 Bonisteel, Ann Arbor, MI 48109 (United States)
2013-07-01
MPACT (Michigan Parallel Characteristics Transport Code) is a new reactor analysis tool. It is being developed by students and research staff at the University of Michigan to be used for an advanced pin-resolved transport capability within VERA (Virtual Environment for Reactor Analysis). VERA is the end-user reactor simulation tool being produced by the Consortium for the Advanced Simulation of Light Water Reactors (CASL). The MPACT development project is itself unique for the way it is changing how students do research to achieve the instructional and research goals of an academic institution, while providing immediate value to industry. The MPACT code makes use of modern lean/agile software processes and extensive testing to maintain a level of productivity and quality required by CASL. MPACT's design relies heavily on object-oriented programming concepts and design patterns and is programmed in Fortran 2003. These designs are explained and illustrated as to how they can be readily extended to incorporate new capabilities and research ideas in support of academic research objectives. The transport methods currently implemented in MPACT include the 2-D and 3-D method of characteristics (MOC) and 2-D and 3-D method of collision direction probabilities (CDP). For the cross section resonance treatment, presently the subgroup method and the new embedded self-shielding method (ESSM) are implemented within MPACT. (authors)
Parallel FE Electron-Photon Transport Analysis on 2-D Unstructured Mesh
International Nuclear Information System (INIS)
Drumm, C.R.; Lorenz, J.
1999-01-01
A novel solution method has been developed to solve the coupled electron-photon transport problem on an unstructured triangular mesh. Instead of tackling the first-order form of the linear Boltzmann equation, this approach is based on the second-order form in conjunction with the conventional multi-group discrete-ordinates approximation. The highly forward-peaked electron scattering is modeled with a multigroup Legendre expansion derived from the Goudsmit-Saunderson theory. The finite element method is used to treat the spatial dependence. The solution method is unique in that the space-direction dependence is solved simultaneously, eliminating the need for the conventional inner iterations, a method that is well suited for massively parallel computers
Bentonite electrical conductivity: a model based on series–parallel transport
Lima, Ana T.
2010-01-30
Bentonite has significant applications nowadays, among them as landfill liners, in the concrete industry as a repair material, and as drilling mud in oil well construction. The application of an electric field to such perimeters is under wide discussion and the subject of many studies. However, to understand the behaviour of such an expansive and plastic material under the influence of an electric field, an understanding of its electrical properties is essential. This work compares existing data on such electrical behaviour with new laboratory results. Electrical conductivity is a pertinent parameter since it indicates how prone a material is to conduct electricity. In the current study, the total conductivity of a compacted porous medium was established to be dependent upon the density of the bentonite plug. Therefore, surface conductivity was addressed and a series-parallel transport model used to quantify and predict the total conductivity of the system. © The Author(s) 2010.
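The series-parallel idea behind such a model can be sketched with the two classical mixing rules: conduction paths in series add resistivities, paths in parallel add conductivities. The two-path combination function below is hypothetical, a porosity-weighted parallel pairing of pore-fluid and surface conduction chosen here for illustration; the paper's actual weighting depends on plug density and is not reproduced.

```python
def conductivity_series(sigmas, fractions):
    # Phases stacked along the current path: resistivities (1/sigma)
    # add, weighted by the fraction of the path each phase occupies.
    return 1.0 / sum(f / s for f, s in zip(fractions, sigmas))

def conductivity_parallel(sigmas, fractions):
    # Phases side by side across the section: conductivities add,
    # weighted by the cross-sectional fraction of each phase.
    return sum(f * s for f, s in zip(fractions, sigmas))

def bentonite_total(sigma_pore, sigma_surface, phi):
    # Hypothetical two-path model: pore-fluid and particle-surface
    # conduction in parallel, weighted by porosity phi.
    return conductivity_parallel([sigma_pore, sigma_surface], [phi, 1.0 - phi])
```

For example, equal fractions of phases with conductivities 2 and 4 give 3 in parallel but only 1.5 when phases of 1 and 3 are placed in series.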
Gudmundsson, Vidar; Abdulla, Nzar Rauf; Sitek, Anna; Goan, Hsi-Sheng; Tang, Chi-Shung; Manolescu, Andrei
2018-02-01
We show that a Rabi-splitting of the states of strongly interacting electrons in parallel quantum dots embedded in a short quantum wire placed in a photon cavity can be produced by either the para- or the dia-magnetic electron-photon interactions when the geometry of the system is properly accounted for and the photon field is tuned close to a resonance with the electron system. We use these two resonances to explore the electroluminescence caused by the transport of electrons through the one- and two-electron ground states of the system and their corresponding conventional and vacuum electroluminescence as the central system is opened up by coupling it to external leads acting as electron reservoirs. Our analysis indicates that high-order electron-photon processes are necessary to adequately construct the cavity-photon dressed electron states needed to describe both types of electroluminescence.
Classical parallel transport in a multi-species plasma from a 21 moment approximation
International Nuclear Information System (INIS)
Radford, G.J.
1993-11-01
Momentum equations from a 21-moment Grad approximation are presented, including full expressions for the collision terms for the case of elastic collisions. Collision terms for the particular case of an electron-ion-impurity plasma are then given. In addition, for the positive ions, approximations to the collision terms are given for a common ion temperature, T_z = T_i, and for a massive impurity species, m_z >> m_i, with general temperatures. The moment equations are solved for the classical parallel transport coefficients for the specific case of a low-impurity-density plasma and the results are compared with those given by other authors. The range of forms for the collision terms is given to allow more general or other types of solutions to be obtained. (Author)
A massively parallel method of characteristic neutral particle transport code for GPUs
International Nuclear Information System (INIS)
Boyd, W. R.; Smith, K.; Forget, B.
2013-01-01
Over the past 20 years, parallel computing has enabled computers to grow ever larger and more powerful while scientific applications have advanced in sophistication and resolution. This trend is being challenged, however, as the power consumption for conventional parallel computing architectures has risen to unsustainable levels and memory limitations have come to dominate compute performance. Heterogeneous computing platforms, such as Graphics Processing Units (GPUs), are an increasingly popular paradigm for solving these issues. This paper explores the applicability of GPUs for deterministic neutron transport. A 2D method of characteristics (MOC) code - OpenMOC - has been developed with solvers for both shared memory multi-core platforms as well as GPUs. The multi-threading and memory locality methodologies for the GPU solver are presented. Performance results for the 2D C5G7 benchmark demonstrate a 25-35× speedup for MOC on the GPU. The lessons learned from this case study will provide the basis for further exploration of MOC on GPUs as well as design decisions for hardware vendors exploring technologies for the next generation of machines for scientific computing. (authors)
Effect of parallel transport currents on the d-wave Josephson junction
International Nuclear Information System (INIS)
Rashedi, Gholamreza
2009-01-01
In this paper, the non-local mixing of coherent current states in d-wave superconducting banks is investigated. The superconducting banks are connected via a ballistic point contact. The banks have a mis-orientation and a phase difference. Furthermore, they are subjected to a tangential transport current along the ab plane of the d-wave crystals, parallel to the interface between the superconductors. The effects of mis-orientation and external transport current on the current-phase relations and current distributions are the subjects of this paper. It is observed that, at values of phase difference close to 0, π and 2π, the current distribution may have a vortex-like form in the vicinity of the point contact. The current distribution of the above-mentioned junction between d-wave superconductors is totally different from that of a junction between s-wave superconductors. An interesting result of this study is that spontaneous and Josephson currents are observed even for the case of φ = 0.
Parallel Transport and Profile of Boundary Plasma with a Low Recycling Wall
Energy Technology Data Exchange (ETDEWEB)
Tang, X.; Guo, Z., E-mail: xtang@lanl.gov [Los Alamos National Laboratory, Los Alamos (United States)
2012-09-15
Full text: Reduction of wall recycling by, for example, a flowing liquid surface at the divertor and first wall holds the promise of accessing a distinct tokamak reactor operational mode with boundary plasmas of high temperature and low density. Earlier work has indicated that such a boundary plasma would reduce the temperature gradient across the entire plasma and hence remove the primary micro-instability drive responsible for anomalous particle and energy transport. Here we present a systematic study solving the kinetic equations both analytically and numerically, with and without Coulomb collisions. The distinct roles of magnetic field strength modulation and the ambipolar electric field on the electron and ion distribution functions are clarified. The resulting behavior of the plasma profile and parallel heat flux, which is often surprising and counter to the expectations of the collisional fluid models on which previous work was based, is explained both intuitively and through a contrast between analytical calculation and numerical simulations. The transport-induced plasma instabilities, and their essential role in maintaining ambipolarity, are clarified, along with the subtle effect of Coulomb collisions on electron temperature and wall potential as small but finite collisionality is taken into account. (author)
Advanced quadratures and periodic boundary conditions in parallel 3D Sn transport
International Nuclear Information System (INIS)
Manalo, K.; Yi, C.; Huang, M.; Sjoden, G.
2013-01-01
Significant updates in numerical quadratures have warranted investigation with 3D Sn discrete ordinates transport. We show new applications of quadrature departing from level symmetric (≤ S20) and Pn-Tn (> S20), investigating three recently developed quadratures: Even-Odd (EO), Linear-Discontinuous Finite Element - Surface Area (LDFE-SA), and the non-symmetric Icosahedral Quadrature (IC). We discuss implementation changes to 3D Sn codes (applied to the hybrid MOC-Sn TITAN code and the 3D parallel PENTRAN code) that can be made to accommodate the Icosahedral Quadrature, as this quadrature is not 90-degree rotation invariant. In particular, as demonstrated using PENTRAN, the properties of the Icosahedral Quadrature make it suitable for trivial application with periodic BCs, as opposed to reflective BCs. In addition to implementing periodic BCs for 3D Sn PENTRAN, we implemented a technique termed 'angular re-sweep' which properly conditions periodic BCs for convergence of the outer eigenvalue iterative loop. As demonstrated by two simple transport problems (a 3-group fixed source problem and a 3-group reflected/periodic eigenvalue pin cell), we find that all of the quadratures investigated are generally superior to level symmetric quadrature, with the Icosahedral Quadrature performing most efficiently for the problems tested. (authors)
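The distinction drawn above between reflective and periodic boundary conditions comes down to a symmetry property of the quadrature set. A small sketch, using a toy 8-direction set of our own (EO, LDFE-SA and IC sets are constructed quite differently):

```python
import math

# Toy symmetric quadrature: the 8 octant directions with equal weights.
w = 4.0 * math.pi / 8.0
m = 1.0 / math.sqrt(3.0)
directions = [(sx * m, sy * m, sz * m)
              for sx in (1, -1) for sy in (1, -1) for sz in (1, -1)]

def moment(k):
    """k-th moment of mu over the quadrature; should match the exact
    integral of mu**k over the unit sphere."""
    return sum(w * mu ** k for mu, eta, xi in directions)

def supports_reflective_bc(dirs, axis=0, tol=1e-12):
    """Reflective BCs require that every ordinate has a mirror ordinate
    with the component along `axis` negated. Sets without this symmetry
    (such as the icosahedral quadrature) force periodic BCs instead."""
    def has_mirror(d):
        md = list(d)
        md[axis] = -md[axis]
        return any(all(abs(a - b) < tol for a, b in zip(md, e)) for e in dirs)
    return all(has_mirror(d) for d in dirs)
```

A set that fails `supports_reflective_bc` cannot close a reflective sweep, which is why the periodic-BC route is taken for the non-symmetric quadrature.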
Energy Technology Data Exchange (ETDEWEB)
2009-07-01
The TERM (Transport and Environment Reporting Mechanism) 2008 report examines the performance of the transport sector vis-a-vis environmental performance and concludes that there are plenty of options for synergies between different policy initiatives, but also a risk of measures counteracting each other. Although there is growing awareness of the transport sector's disproportionate impact on the environment, the report shows that there is little evidence of improved performance or a shift to sustainable transport across Europe. In particular: 1) freight transport has continued to grow; 2) passenger travel by road and air has continued to increase; 3) greenhouse gas emissions increased between 1990 and 2006; 4) air quality is still a problem across Europe despite continued reductions in air pollutant emissions from vehicles; and 5) transport noise levels are affecting the quality of life and health of EU citizens. (ln)
Steady-state and time-dependent modelling of parallel transport in the scrape-off layer
DEFF Research Database (Denmark)
Havlickova, E.; Fundamenski, W.; Naulin, Volker
2011-01-01
The one-dimensional fluid code SOLF1D has been used for modelling of plasma transport in the scrape-off layer (SOL) along magnetic field lines, both in steady state and under transient conditions that arise due to plasma turbulence. The presented work summarizes results of SOLF1D with attention...... given to transient parallel transport which reveals two distinct time scales due to the transport mechanisms of convection and diffusion. Time-dependent modelling combined with the effect of ballooning shows propagation of particles along the magnetic field line with Mach number up to M ≈ 1...... temperature calculated in SOLF1D is compared with the approximative model used in the turbulence code ESEL both for steady-state and turbulent SOL. Dynamics of the parallel transport are investigated for a simple transient event simulating the propagation of particles and energy to the targets from a blob...
Quasineutral plasma expansion into infinite vacuum as a model for parallel ELM transport
Moulton, D.; Ghendrih, Ph; Fundamenski, W.; Manfredi, G.; Tskhakaya, D.
2013-08-01
An analytic solution for the expansion of a plasma into vacuum is assessed for its relevance to the parallel transport of edge localized mode (ELM) filaments along field lines. This solution solves the 1D1V Vlasov-Poisson equations for the adiabatic (instantaneous source), collisionless expansion of a Gaussian plasma bunch into an infinite space in the quasineutral limit. The quasineutral assumption is found to hold as long as λD0/σ0 ≲ 0.01 (where λD0 is the initial Debye length at peak density and σ0 is the parallel length of the Gaussian filament), a condition that is physically realistic. The inclusion of a boundary at x = L and consequent formation of a target sheath is found to have a negligible effect when L/σ0 ≳ 5, a condition that is physically plausible. Under the same condition, the target flux densities predicted by the analytic solution are well approximated by the ‘free-streaming’ equations used in previous experimental studies, strengthening the notion that these simple equations are physically reasonable. Importantly, the analytic solution predicts a zero heat flux density so that a fluid approach to the problem can be used equally well, at least when the source is instantaneous. It is found that, even for JET-like pedestal parameters, collisions can affect the expansion dynamics via electron temperature isotropization, although this is probably a secondary effect. Finally, the effect of a finite duration, τsrc, for the plasma source is investigated. As is found for an instantaneous source, when L/σ0 ≳ 5 the presence of a target sheath has a negligible effect, at least up to the explored range of τsrc = L/cs (where cs is the sound speed at the initial temperature).
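The 'free-streaming' behavior invoked above has a simple closed form for a collisionless Gaussian bunch: each particle moves ballistically, so the spatial variance grows as σ²(t) = σ0² + (v_th t)². A quick numerical check of this textbook result (not the paper's full quasineutral Vlasov-Poisson solution):

```python
import math, random

def expanded_sigma(sigma0, v_th, t):
    """Width of a collisionless Gaussian bunch with thermal spread v_th:
    ballistic motion x(t) = x0 + v*t adds the variances in quadrature."""
    return math.sqrt(sigma0 ** 2 + (v_th * t) ** 2)

def monte_carlo_sigma(sigma0, v_th, t, n=200_000, seed=1):
    """Transport n sampled particles ballistically and measure the width."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, sigma0) + rng.gauss(0.0, v_th) * t for _ in range(n)]
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
```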
DSU Department
2008-01-01
The French authorities have informed CERN that, once the corresponding road signs have been installed, the single-track road running parallel to the dual carriageway culminating at Gate E will be closed to all motorised vehicle traffic, with the exception of agricultural plant, motorcycles, and service, emergency and police vehicles. Relations with the Host States Service, Tel.: 72848, relations.secretariat@cern.ch, http://www.cern.ch/relations
Masciopinto, Costantino; Volpe, Angela; Palmiotta, Domenico; Cherubini, Claudia
2010-09-01
A combination of a parallel fracture model with the PHREEQC-2 geochemical model was developed to simulate sequential flow and chemical transport with reactions in fractured media where both laminar and turbulent flows occur. The integration of non-laminar flow resistances in one model produced relevant effects on water flow velocities, thus improving the model's predictive capabilities for contaminant transport. The proposed conceptual model consists of 3D rock blocks separated by horizontal bedding-plane fractures with variable apertures. Particle tracking solved the transport equations for conservative compounds and provided input for PHREEQC-2. For each cluster of contaminant pathways, PHREEQC-2 determined the concentration for mass transfer, sorption/desorption, ion exchange, mineral dissolution/precipitation and biodegradation, under kinetically controlled reactive processes of equilibrated chemical species. Field tests were performed for code verification. As an example, the combined model was applied to a contaminated fractured aquifer of southern Italy to simulate phenol transport. The code correctly fitted the available field data and also predicted a possible rapid depletion of phenols as a result of an increased biodegradation rate induced by a simulated artificial injection of nitrates upgradient of the sources.
Nanoparticle Traffic on Helical Tracks: Thermophoretic Mass Transport through Carbon Nanotubes
DEFF Research Database (Denmark)
Schoen, Philipp A.E.; Walther, Jens Honore; Arcidiacono, Salvatore
2006-01-01
Using molecular dynamics simulations, we demonstrate and quantify thermophoretic motion of solid gold nanoparticles inside carbon nanotubes subject to wall temperature gradients ranging from 0.4 to 25 K/nm. For temperature gradients below 1 K/nm, we find that the particles move "on tracks......" in a predictable fashion as they follow unique helical orbits depending on the geometry of the carbon nanotubes. These findings markedly advance our knowledge of mass transport mechanisms relevant to nanoscale applications....
Regional Atmospheric Transport Code for Hanford Emission Tracking, Version 2 (RATCHET2)
International Nuclear Information System (INIS)
Ramsdell, James V.; Rishel, Jeremy P.
2006-01-01
This manual describes the atmospheric model and computer code for the Atmospheric Transport Module within SAC. The Atmospheric Transport Module, called RATCHET2, calculates the time-integrated air concentration and surface deposition of airborne contaminants to the soil. The RATCHET2 code is an adaptation of the Regional Atmospheric Transport Code for Hanford Emissions Tracking (RATCHET). The original RATCHET code was developed to perform the atmospheric transport for the Hanford Environmental Dose Reconstruction Project. Fundamentally, the two sets of codes are identical; no capabilities have been deleted from the original version of RATCHET. Most modifications are generally limited to revision of the run-specification file to streamline the simulation process for SAC.
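For orientation, the two quantities RATCHET2 reports (time-integrated air concentration and surface deposition) can be illustrated with the classical Gaussian-plume formula. RATCHET2 itself is a Lagrangian puff model, so this is only a textbook stand-in with illustrative parameter names:

```python
import math

def time_integrated_concentration(Q, u, y, z, sigma_y, sigma_z, h):
    """Ground-reflected Gaussian-plume time-integrated air concentration
    for total release Q, wind speed u and release height h."""
    lateral = math.exp(-y * y / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2.0 * sigma_z ** 2))
                + math.exp(-(z + h) ** 2 / (2.0 * sigma_z ** 2)))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

def surface_deposition(v_d, tic):
    """Dry deposition: deposition velocity times time-integrated
    air concentration."""
    return v_d * tic
```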
International Nuclear Information System (INIS)
Saindane, Shashank; Pujari, R.N.; Narsaiah, M.V.R.; Chaudhury, Probal; Pradeepkumar, K.S.
2016-01-01
The safety aspects of the transport of radioactive material have to ensure that, even in the event of an accident, the potential for radiation exposure of the public is extremely small. Continuous monitoring and online data transfer to an emergency control room will strengthen the emergency preparedness to respond to any such accident during transport of radioactive material. The paper presents the combined application of Geographical Information Systems (GIS), the Global Positioning System (GPS), General Packet Radio Service (GPRS) and the Internet for tracking a shipment vehicle transporting radioactive isotopes for use in the medical industry. The key features of the prototype system are real-time radiological status updates, photo snapshots of the shipping flask at predefined intervals along with positional coordinates, a GIS platform and a web-based user interface. The system consists of a GM-based radiation monitoring device (RMD) along with a LAN camera, GPS for tracking the shipment vehicle, a communications server, a web server, a database server, and a map server. The RMD and tracking device mounted in the shipment vehicle collect location and radiological information in real time via the GPS. This information is transferred continuously through GPRS to a central database. Users are able to view the current location of the vehicle via a web-based application.
Impact of Preferred Eddy Tracks on Transport and Mixing in the Eastern South Pacific
Belmadani, A.; Donoso, D.; Auger, P. A.; Chaigneau, A.
2017-12-01
Mesoscale eddies, which play a fundamental role in the transport of mass, heat, nutrients, and biota across the oceans, have been suggested to propagate preferentially along specific tracks. These preferred pathways, also called eddy trains, are near-zonal due to the westward drift of individual vortices, and tend to be polarized (i.e. alternately dominated by anticyclonic/cyclonic eddies), coinciding with the recently discovered latent striations (quasi-zonal mesoscale jet-like features). While significant effort has been made to understand the dynamics of striations and their interplay with mesoscale eddies, the impact of repeated eddy tracks on physical (temperature, salinity), biogeochemical (oxygen, carbon, nutrients) and other tracers (e.g. chlorophyll, marine debris) has received little attention. Here we report on the results of numerical modeling experiments that simulate the impact of preferred eddy tracks on the transport and mixing of water particles in the Eastern South Pacific off Chile. A 30-year interannual simulation of the oceanic circulation in this region has been performed over 1984-2013 with ROMS (Regional Oceanic Modeling System) at an eddy-resolving resolution (10 km). Objective tracking of mesoscale coherent vortices is obtained using automated methods, allowing computation of the contribution of eddies to the ocean circulation. Preferred eddy tracks are further isolated from the more random eddies by comparing the distances between individual tracks and the striated pattern in long-term mean eddy polarity with a least-squares approach. The remaining non-eddying flow may also be decomposed into time-mean and anomalous circulation, and/or small- and large-scale circulation. Neutrally-buoyant Lagrangian floats are then released uniformly into the various flow components as well as the total flow, and tracked forward in time with the ARIANE software. The dispersion patterns of water particles are used to estimate the respective contributions of
International Nuclear Information System (INIS)
Fischer, G.A.
2010-01-01
The PCA Benchmark is analyzed using RAPTOR-M3G, a parallel SN radiation transport code. A variety of mesh structures, angular quadrature sets, cross section treatments, and reactor dosimetry cross sections are presented. The results show that RAPTOR-M3G is generally suitable for PWR neutron dosimetry applications. (authors)
Romano, Paul Kollath
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing the large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing the network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups particle histories on a single processor into batches for tally purposes---in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with
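The O(√N)-versus-O(N) communication claim can be illustrated with a back-of-the-envelope simulation of our own (not OpenMC's implementation): the sites a nearest-neighbor exchange must ship across each processor boundary track the prefix-sum deviation of the per-processor site counts, whose magnitude grows like the sampling noise, i.e. like √N.

```python
import math, random

def neighbor_traffic(n_sites, n_procs, rng):
    """Sites shipped across processor boundaries in one rebalancing step.
    Per-processor site counts are drawn from a normal approximation to the
    sampling noise; restoring an even share moves the running prefix-sum
    deviation across each boundary."""
    mean = n_sites / n_procs
    counts = [max(0.0, rng.gauss(mean, math.sqrt(mean)))
              for _ in range(n_procs)]
    share = sum(counts) / n_procs
    moved = deviation = 0.0
    for c in counts[:-1]:
        deviation += c - share
        moved += abs(deviation)
    return moved

def mean_traffic(n_sites, n_procs=64, trials=200, seed=7):
    rng = random.Random(seed)
    return sum(neighbor_traffic(n_sites, n_procs, rng)
               for _ in range(trials)) / trials
```

Increasing the site count a hundredfold raises the shipped traffic only about tenfold, the √N signature, while a master-based exchange would move all N sites.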
Energy Technology Data Exchange (ETDEWEB)
Kostin, Mikhail [Michigan State Univ., East Lansing, MI (United States); Mokhov, Nikolai [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Niita, Koji [Research Organization for Information Science and Technology, Ibaraki-ken (Japan)
2013-09-25
A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90 or C. The module is largely independent of the radiation transport codes it is used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes such as PHITS, FLUKA and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.
International Nuclear Information System (INIS)
Le Roux, J. A.
2011-01-01
Earlier work based on nonlinear guiding center (NLGC) theory suggested that perpendicular cosmic-ray transport is diffusive when cosmic rays encounter random three-dimensional magnetohydrodynamic turbulence dominated by uniform two-dimensional (2D) turbulence with a minor uniform slab turbulence component. In this approach large-scale perpendicular cosmic-ray transport is due to cosmic rays microscopically diffusing along the meandering magnetic field dominated by 2D turbulence because of gyroresonant interactions with slab turbulence. However, turbulence in the solar wind is intermittent and it has been suggested that intermittent turbulence might be responsible for the observation of 'dropout' events in solar energetic particle fluxes on small scales. In a previous paper le Roux et al. suggested, using NLGC theory as a basis, that if gyro-scale slab turbulence is intermittent, large-scale perpendicular cosmic-ray transport in weak uniform 2D turbulence will be superdiffusive or subdiffusive depending on the statistical characteristics of the intermittent slab turbulence. In this paper we expand and refine our previous work further by investigating how both parallel and perpendicular transport are affected by intermittent slab turbulence for weak as well as strong uniform 2D turbulence. The main new finding is that both parallel and perpendicular transport are the net effect of an interplay between diffusive and nondiffusive (superdiffusive or subdiffusive) transport effects as a consequence of this intermittency.
Sun, Wei; Gu, Yan; Wang, Gufeng; Fang, Ning
2012-01-17
The single particle orientation and rotational tracking (SPORT) technique was introduced recently to follow the rotational motion of plasmonic gold nanorods under a differential interference contrast (DIC) microscope. In biological studies, however, cellular activities usually involve a multiplicity of molecules; thus, tracking the motion of a single molecule/object is insufficient. Fluorescence-based techniques have long been used to follow the spatial and temporal distributions of biomolecules of interest thanks to the availability of multiplexing fluorescent probes. To learn the type and number of molecules and the timing of their involvement in a biological process under investigation by SPORT, we constructed a dual-modality DIC/fluorescence microscope to simultaneously image fluorescently tagged biomolecules and plasmonic nanoprobes in living cells. With the dual-modality SPORT technique, microtubule-based intracellular transport can be unambiguously identified while the dynamic orientation of nanometer-sized cargos is monitored at video rate. Furthermore, active transport on the microtubule can be easily separated from diffusion before the nanocargo docks on the microtubule or after it undocks from the microtubule. The potential of dual-modality SPORT is demonstrated for shedding new light on unresolved questions in intracellular transport.
Tracking Oxidation During Transport of Trace Gases in Air from the Northern to Southern Hemisphere
Montzka, S. A.; Moore, F. L.; Atlas, E. L.; Parrish, D. D.; Miller, B. R.; Sweeney, C.; McKain, K.; Hall, B. D.; Siso, C.; Crotwell, M.; Hintsa, E. J.; Elkins, J. W.; Blake, D. R.; Barletta, B.; Meinardi, S.; Claxton, T.; Hossaini, R.
2017-12-01
Trace gas mole fractions contain the imprint of recent influences on an air mass such as sources, transport, and oxidation. Covariations among the many gases measured from flasks during ATom and HIPPO, and from the ongoing NOAA cooperative air sampling program enable recent influences to be identified from a wide range of sources including industrial activity, biomass burning, emissions from wetlands, and uptake by terrestrial ecosystems. In this work we explore the evolution of trace gas concentrations owing to atmospheric oxidation as air masses pass through the tropics, the atmospheric region with the highest concentrations of the hydroxyl radical. Variations in C2-C5 hydrocarbon concentrations downwind of source regions provide a measure of photochemical ageing in an air mass since emission, but they become less useful when tracking photochemical ageing as air is transported from the NH into the SH owing to their low mixing ratios, lifetimes that are very short relative to transport times, non-industrial sources in the tropics (e.g., biomass burning), and southern hemispheric sources. Instead, we consider a range of trace gases and trace gas pairs that provide a measure of photochemical processing as air transits the tropics. To be useful in this analysis, these trace gases would have lifetimes comparable to interhemispheric transport times, emissions arising from only the NH at constant relative magnitudes, and concentrations sufficient to allow precise and accurate measurements in both hemispheres. Some anthropogenically-emitted chlorinated hydrocarbons meet these requirements and have been measured during ATom, HIPPO, and from NOAA's ongoing surface sampling efforts. Consideration of these results and their implications for tracking photochemical processing in air as it is transported across the tropics will be presented.
International Nuclear Information System (INIS)
McGhee, J.M.; Roberts, R.M.; Morel, J.E.
1997-01-01
A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic, transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion based preconditioner for scattering dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated
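The preconditioned conjugate gradient core solver named above has a compact generic structure. A sketch with a simple Jacobi preconditioner on a 1D diffusion-plus-absorption stencil standing in for DANTE's diffusion-based preconditioner (all names and the example system are illustrative):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcg(apply_A, b, precond, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for a symmetric positive-definite
    operator: apply_A maps x -> A x, precond maps r -> M^{-1} r."""
    x = [0.0] * len(b)
    r = b[:]
    z = precond(r)
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if math.sqrt(dot(r, r)) < tol:
            break
        z = precond(r)
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

# Illustrative SPD system: 1D diffusion-plus-absorption tridiagonal stencil.
N, SIGMA = 50, 0.5

def apply_A(x):
    out = []
    for i, xi in enumerate(x):
        v = (2.0 + SIGMA) * xi
        if i > 0:
            v -= x[i - 1]
        if i < len(x) - 1:
            v -= x[i + 1]
        out.append(v)
    return out

def jacobi(r):
    # Diagonal (Jacobi) preconditioner.
    return [ri / (2.0 + SIGMA) for ri in r]
```

A good preconditioner (diffusion-based, in DANTE's case) is what keeps the iteration count low for scattering-dominated problems.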
Steady-state and time-dependent modelling of parallel transport in the scrape-off layer
Czech Academy of Sciences Publication Activity Database
Havlíčková, E.; Fundameski, W.; Naulin, V.; Nielsen, A.H.; Zagórski, R.; Seidl, Jakub; Horáček, Jan
2011-01-01
Vol. 53, No. 6 (2011), 065004. ISSN 0741-3335. R&D Projects: GA ČR GAP205/10/2055; GA MŠk 7G09042. Institutional research plan: CEZ:AV0Z20430508. Keywords: parallel transport; SOLF1D. Subject RIV: BL - Plasma and Gas Discharge Physics. Impact factor: 2.425, year: 2011. http://iopscience.iop.org/0741-3335/53/6/065004/pdf/0741-3335_53_6_065004.pdf
Chen, Kewei; Zhan, Hongbin
2018-06-01
The reactive solute transport in a single fracture bounded by upper and lower matrixes is a classical problem that captures the dominant factors affecting transport behavior beyond the pore scale. A parallel fracture-matrix system, which considers the interaction among multiple parallel fractures, is an extension of the single fracture-matrix system. Existing analytical or semi-analytical solutions for solute transport in a parallel fracture-matrix system simplify the problem to various degrees, such as neglecting the transverse dispersion in the fracture and/or the longitudinal diffusion in the matrix. The difficulty of solving the full two-dimensional (2-D) problem lies in the calculation of the mass exchange between the fracture and matrix. In this study, we propose an innovative Green's function approach to address the 2-D reactive solute transport in a parallel fracture-matrix system. The flux at the interface is calculated numerically. It is found that the transverse dispersion in the fracture can be safely neglected due to the small scale of the fracture aperture. However, neglecting the longitudinal matrix diffusion would overestimate the concentration profile near the solute entrance face and underestimate the concentration profile at the far side. The error caused by neglecting the longitudinal matrix diffusion decreases with increasing Peclet number, and the longitudinal matrix diffusion has no obvious influence on the concentration profile in the long term. The developed model is applied to a dense non-aqueous-phase-liquid (DNAPL) contamination field case in the New Haven Arkose of Connecticut, USA, to estimate trichloroethylene (TCE) behavior over 40 years. The ratio of the TCE mass stored in the matrix to the injected TCE mass increases to above 90% in less than 10 years.
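As a point of reference for the concentration profiles discussed above, the classical 1D advection-dispersion solution for a constant-concentration inlet (Ogata-Banks) is easy to evaluate. It is the no-matrix-exchange limit of such problems, not the authors' 2D Green's-function model:

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Classical 1D advection-dispersion solution for a constant-
    concentration inlet at x = 0 (velocity v, dispersion coefficient D)."""
    s = 2.0 * math.sqrt(D * t)
    return 0.5 * c0 * (math.erfc((x - v * t) / s)
                       + math.exp(v * x / D) * math.erfc((x + v * t) / s))
```

The Peclet number v*L/D controls how sharp the resulting front is, which is the same dimensionless group the abstract invokes for the matrix-diffusion error.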
International Nuclear Information System (INIS)
Petrosyan, Lyudvig S
2016-01-01
We study coherent transport in a system of periodic linear chain of quantum dots situated between two parallel quantum wires. We show that the resonant-tunneling conductance between the wires exhibits a Rabi splitting of the resonance peak as a function of Fermi energy in the wires. This effect is an electron transport analogue of the Rabi splitting in optical spectra of two interacting systems. The conductance peak splitting originates from the anticrossing of Bloch bands in a periodic system that is caused by a strong coupling between the electron states in the quantum dot chain and quantum wires. (paper)
International Nuclear Information System (INIS)
Ehrglis, K.Eh.
1980-01-01
Errors in the determination of the coordinates of spatial reference points (SRPs) reconstructed from photographic reference points are considered. The width of the paths of probable track positions on photographs and the length of the intersection zones of these paths with interfering track images are estimated. Conditions for stable automatic tracing of tracks passing close to one another in space are determined. It is concluded that, once 5-6 SRPs have been accumulated, the method of spatial tracing, with the local scanning centres on the photographs shifted at a corresponding speed, permits automatic tracing of closely passing tracks in the middle zone of the Mirabelle chamber when the angle between them is approximately 1 deg and the distance in space is 3-7 mm. It is emphasized that, when forecasting from 8-10 SRPs, the spatial or angular track resolution improves by a further factor of 1.5 due to the reduction of forecasting errors and the corresponding narrowing of the sensitivity paths. The described method will be especially effective for processing photographs taken in bubble chambers of a new generation at particle energies of tens to hundreds of GeV [ru]
International Nuclear Information System (INIS)
Svensson, Urban
2001-04-01
A particle tracking algorithm, PARTRACK, that simulates transport and dispersion in sparsely fractured rock is described. The main novel feature of the algorithm is the introduction of multiple particle states. It is demonstrated that this feature allows the simultaneous simulation of Taylor dispersion, sorption and matrix diffusion. A number of test cases are used to verify and demonstrate the features of PARTRACK. It is shown that PARTRACK can simulate the following processes, believed to be important for the problem addressed: the split-up of a tracer cloud at a fracture intersection, channeling in a fracture plane, Taylor dispersion, matrix diffusion and sorption. From the results of the test cases, it is concluded that PARTRACK is an adequate framework for simulating the transport and dispersion of a solute in sparsely fractured rock.
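The "multiple particle states" idea can be sketched as a small state machine: a particle is mobile in the fracture, held in the rock matrix, or sorbed, and immobile episodes retard its breakthrough. A hypothetical illustration only; the states and all transition probabilities are invented for this sketch, not PARTRACK's calibrated model:

```python
import random

# Three illustrative particle states in the spirit of PARTRACK.
MOBILE, MATRIX, SORBED = 0, 1, 2

def step(state, rng):
    """One time step of the (hypothetical) state-transition model."""
    r = rng.random()
    if state == MOBILE:
        if r < 0.05:
            return MATRIX        # diffuse into the matrix
        if r < 0.07:
            return SORBED        # sorb onto the fracture wall
        return MOBILE
    if state == MATRIX:
        return MOBILE if r < 0.02 else MATRIX
    return MOBILE if r < 0.10 else SORBED

def travel_time(mobile_steps_needed=100, seed=0):
    """Total time steps until the particle has spent the required number
    of steps mobile (i.e. has traversed the fracture). Time spent in the
    MATRIX or SORBED states produces the long breakthrough tails
    characteristic of matrix diffusion and sorption."""
    rng = random.Random(seed)
    state, mobile, t = MOBILE, 0, 0
    while mobile < mobile_steps_needed:
        t += 1
        if state == MOBILE:
            mobile += 1
        state = step(state, rng)
    return t
```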
Energy Technology Data Exchange (ETDEWEB)
Lichtner, Peter C. [OFM Research, Redmond, WA (United States); Hammond, Glenn E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lu, Chuan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Karra, Satish [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bisht, Gautam [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Andre, Benjamin [National Center for Atmospheric Research, Boulder, CO (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mills, Richard [Intel Corporation, Portland, OR (United States); Univ. of Tennessee, Knoxville, TN (United States); Kumar, Jitendra [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-01-20
PFLOTRAN solves a system of generally nonlinear partial differential equations describing multi-phase, multi-component and multi-scale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Parallelization is achieved through domain decomposition using the PETSc (Portable, Extensible Toolkit for Scientific Computation) libraries as the parallelization framework (Balay et al., 1997). PFLOTRAN has been developed from the ground up for parallel scalability and has been run on up to 2^18 processor cores with problem sizes up to 2 billion degrees of freedom. Written in object-oriented Fortran 90, the code requires compilers compatible with Fortran 2003; at the time of this writing this means gcc 4.7.x, Intel 12.1.x, or PGI compilers. Because problems with large numbers of degrees of freedom must be supported, PFLOTRAN can read input data that is too large to fit into the memory allotted to a single processor core. The current limitation on the problem size PFLOTRAN can handle is the 32-bit integer limit of the HDF5 file format used for parallel I/O. Noting that 2^32 = 4,294,967,296, this gives an estimate of the maximum problem size that can currently be run with PFLOTRAN; this limitation will hopefully be remedied in the near future.
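A quick back-of-the-envelope on the limits quoted above. Reading the processor count as 2^18 = 262,144 cores (a lost superscript in the record) is an assumption of this sketch:

```python
# Back-of-the-envelope checks on the figures quoted in the abstract.
max_hdf5_index = 2**32        # 32-bit limit of the parallel HDF5 I/O path
total_dofs = 2_000_000_000    # "up to 2 billion degrees of freedom"
cores = 2**18                 # assumed core count for the largest runs

dofs_per_core = total_dofs / cores       # average local problem size
headroom = max_hdf5_index / total_dofs   # distance from the 32-bit cap
```

At roughly 7,600 unknowns per core, the 2-billion-DOF runs sit at about half the 32-bit indexing ceiling, which is why the HDF5 limit is the binding constraint.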
Electrical Transport Through Micro Porous Track Etch Membranes of same Porosity
Garg, Ravish; Kumar, Vijay; Kumar, Dinesh; Chakarvarti, S. K.
2012-12-01
Porosity, pore size and membrane thickness are vital factors influencing transport phenomena through microporous track etch membranes (TEMs), and they affect transport-based applications such as separations, drug release, flow control, bio-sensing and cell size detection. A better understanding of the transport mechanism through TEMs is therefore required for new applications in thrust areas such as biomedical devices and the packaging of foods and drugs. Transport studies of electrolytic solutions of potassium chloride through porous polycarbonate TEMs having cylindrical pores of size 0.2 μm and 0.4 μm, with the same porosity of 15%, have been carried out using an electrochemical cell. In this technique, the etched filter is sandwiched between the two compartments of the cell in such a way that the TEM acts as a membrane separating the cell into two chambers. The two chambers are then filled with electrolyte solution (KCl in distilled water). Current-voltage characteristics were recorded by stepping the voltage from 0 to 10 V using a Keithley 2400 Series Source Measurement Unit. The results indicate that the rate of ion transport through cylindrical pores is independent of pore size for TEMs of the same porosity, although the TEM aperture area exposed to the electrolyte in the conducting cell appears to affect the magnitude of ion transport. The experimental conduction through the TEMs deviated strongly from theoretical expectations, indicating that the applicability of simple Ohm's law to conduction through TEMs needs modification. Ion transport is found to increase with the aperture area of the TEM, but much less than the theoretically expected value.
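The naive Ohm's-law baseline the authors compare against can be written down directly. Note that it predicts exactly the porosity-controlled, pore-size-independent conduction mentioned above; an illustrative sketch, not the authors' analysis code:

```python
import math

def membrane_current(voltage, resistivity, thickness, pore_radius,
                     porosity, area):
    """Naive Ohm's-law estimate of ionic current through a track-etch
    membrane: every cylindrical pore is a resistor, and the N pores in
    the exposed aperture conduct in parallel. The pore radius cancels
    at fixed porosity, reproducing the pore-size independence reported
    above; real membranes deviate strongly from this estimate."""
    pore_area = math.pi * pore_radius**2
    n_pores = porosity * area / pore_area          # pores in the aperture
    r_one = resistivity * thickness / pore_area    # resistance of one pore
    return voltage * n_pores / r_one               # I = V / (r_one / N)
```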
Reactor Dosimetry Applications Using RAPTOR-M3G:. a New Parallel 3-D Radiation Transport Code
Longoni, Gianluca; Anderson, Stanwood L.
2009-08-01
The numerical solution of the Linearized Boltzmann Equation (LBE) via the discrete ordinates (SN) method requires extensive computational resources for large 3-D neutron and gamma transport applications because of the concurrent discretization of the angular, spatial, and energy domains. This paper discusses the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, in which the spatial and angular domains are allocated and processed on multi-processor computer architectures. Compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap are compared to the RAPTOR-M3G predictions. The paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained; Section 3 addresses the parallel performance of the code; and Section 4 concludes with final remarks and future work.
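The spatial-and-angular decomposition idea can be sketched as a toy partitioning (illustrative only, not the code's actual algorithm): each (space, angle) rank pair owns one contiguous slab of cells and one block of discrete ordinates, so both memory and work shrink per processor.

```python
def split(n, parts):
    """Partition range(n) into `parts` near-equal contiguous chunks."""
    base, extra = divmod(n, parts)
    out, start = [], 0
    for p in range(parts):
        size = base + (1 if p < extra else 0)
        out.append((start, start + size))
        start += size
    return out

def decompose(n_cells, n_angles, space_ranks, angle_ranks):
    """Toy spatial x angular domain decomposition: map every
    (space rank, angle rank) pair to its (cell slab, ordinate block)."""
    cells, angles = split(n_cells, space_ranks), split(n_angles, angle_ranks)
    return {(i, j): (c, a)
            for i, c in enumerate(cells)
            for j, a in enumerate(angles)}
```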
High-speed parallel forward error correction for optical transport networks
DEFF Research Database (Denmark)
Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert
2010-01-01
This paper presents a highly parallelized hardware implementation of the standard OTN Reed-Solomon Forward Error Correction algorithm. The proposed circuit is designed to meet the immense throughput required by OTN4, using commercially available FPGA technology.
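In software terms, the parallelism such circuits exploit comes from OTN's byte interleaving of 16 RS(255,239) codewords per row: each lane can be encoded or decoded by an independent FEC unit. A Python stand-in for that lane structure (not the FPGA circuit itself):

```python
def interleave(frame, lanes=16):
    """Split a frame into `lanes` byte-interleaved streams, mirroring
    OTN's 16 interleaved RS(255,239) codewords; each stream can then be
    processed by a separate parallel FEC unit."""
    return [frame[i::lanes] for i in range(lanes)]

def deinterleave(streams):
    """Merge lane streams back into the original frame byte order."""
    lanes = len(streams)
    out = bytearray(sum(len(s) for s in streams))
    for i, s in enumerate(streams):
        out[i::lanes] = s
    return bytes(out)
```

Interleaving also spreads burst errors across codewords, which is why the standard mandates it independently of its convenience for parallel hardware.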
Haziza, Simon; Mohan, Nitin; Loe-Mie, Yann; Lepagnol-Bestel, Aude-Marie; Massou, Sophie; Adam, Marie-Pierre; Le, Xuan Loc; Viard, Julia; Plancon, Christine; Daudin, Rachel; Koebel, Pascale; Dorard, Emilie; Rose, Christiane; Hsieh, Feng-Jen; Wu, Chih-Che; Potier, Brigitte; Herault, Yann; Sala, Carlo; Corvin, Aiden; Allinquant, Bernadette; Chang, Huan-Cheng; Treussart, François; Simonneau, Michel
2017-05-01
Brain diseases such as autism and Alzheimer's disease (each afflicting >1% of the world population) involve a large network of genes displaying subtle changes in their expression. Abnormalities in intraneuronal transport have been linked to genetic risk factors found in patients, suggesting the relevance of measuring this key biological process. However, current techniques are not sensitive enough to detect minor abnormalities. Here we report a sensitive method to measure the changes in intraneuronal transport induced by brain-disease-related genetic risk factors using fluorescent nanodiamonds (FNDs). We show that the high brightness, photostability and absence of cytotoxicity allow FNDs to be tracked inside the branches of dissociated neurons with a spatial resolution of 12 nm and a temporal resolution of 50 ms. As proof of principle, we applied the FND tracking assay on two transgenic mouse lines that mimic the slight changes in protein concentration (∼30%) found in the brains of patients. In both cases, we show that the FND assay is sufficiently sensitive to detect these changes.
Route planning with transportation network maps: an eye-tracking study.
Grison, Elise; Gyselinck, Valérie; Burkhardt, Jean-Marie; Wiener, Jan Malte
2017-09-01
Planning routes using transportation network maps is a common task that has received little attention in the literature. Here, we present a novel eye-tracking paradigm to investigate psychological processes and mechanisms involved in such a route planning. In the experiment, participants were first presented with an origin and destination pair before we presented them with fictitious public transportation maps. Their task was to find the connecting route that required the minimum number of transfers. Based on participants' gaze behaviour, each trial was split into two phases: (1) the search for origin and destination phase, i.e., the initial phase of the trial until participants gazed at both origin and destination at least once and (2) the route planning and selection phase. Comparisons of other eye-tracking measures between these phases and the time to complete them, which depended on the complexity of the planning task, suggest that these two phases are indeed distinct and supported by different cognitive processes. For example, participants spent more time attending the centre of the map during the initial search phase, before directing their attention to connecting stations, where transitions between lines were possible. Our results provide novel insights into the psychological processes involved in route planning from maps. The findings are discussed in relation to the current theories of route planning.
A first generation dynamic ingress, redistribution and transport model of soil track-in: DIRT.
Johnson, D L
2008-12-01
This work introduces a spatially resolved quantitative model, based on conservation of mass and first order transfer kinetics, for following the transport and redistribution of outdoor soil to, and within, the indoor environment by track-in on footwear. Implementations of the DIRT model examined the influence of room size, rug area and location, shoe size, and mass transfer coefficients for smooth and carpeted floor surfaces using the ratio of mass loading on carpeted to smooth floor surfaces as a performance metric. Results showed that in the limit for large numbers of random steps the dual aspects of deposition to and track-off from the carpets govern this ratio. Using recently obtained experimental measurements, historic transport and distribution parameters, cleaning efficiencies for the different floor surfaces, and indoor dust deposition rates to provide model boundary conditions, DIRT predicts realistic floor surface loadings. The spatio-temporal variability in model predictions agrees with field observations and suggests that floor surface dust loadings are constantly in flux; steady state distributions are hardly, if ever, achieved.
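The first-order transfer kinetics underlying DIRT can be sketched in a toy two-surface model. The structure and all coefficients below are illustrative, not the published parameterization:

```python
def track_step(shoe, smooth, carpet, on_carpet,
               k_dep_smooth=0.1, k_dep_carpet=0.4, k_off=0.05):
    """One footstep of a two-surface, first-order mass-transfer model in
    the spirit of DIRT. A fraction of the sole loading deposits on the
    surface stepped on, and a fraction of that surface's loading tracks
    off onto the sole. Returns updated (shoe, smooth, carpet) masses."""
    if on_carpet:
        dep, off = k_dep_carpet * shoe, k_off * carpet
        return shoe - dep + off, smooth, carpet + dep - off
    dep, off = k_dep_smooth * shoe, k_off * smooth
    return shoe - dep + off, smooth + dep - off, carpet
```

Iterating this over alternating surfaces conserves mass and drives the carpet-to-smooth loading ratio toward a value set by the deposition and track-off coefficients, the same ratio the paper uses as its performance metric.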
Frank, Donya; Calantoni, Joseph
2017-05-01
Improved understanding of coastal hydrodynamics and morphology will lead to more effective mitigation measures that reduce fatalities and property damage caused by natural disasters such as hurricanes. We investigated sediment transport under oscillatory flow over flat and rippled beds with phase-separated stereoscopic Particle Image Velocimetry (PIV). Standard PIV techniques severely limit measurements at the fluid-sediment interface and do not allow for the observation of separate phases in multi-phase flow (e.g. sand grains in water). We have implemented phase-separated Particle Image Velocimetry by adding fluorescent tracer particles to the fluid in order to observe fluid flow and sediment transport simultaneously. While sand grains scatter 532 nm wavelength laser light, the fluorescent particles absorb 532 nm laser light and re-emit light at a wavelength of 584 nm. Optical long-pass filters with a cut-on wavelength of 550 nm were installed on two cameras configured to perform stereoscopic PIV to capture only the light emitted by the fluorescent tracer particles. A third high-speed camera was used to capture the light scattered by the sand grains allowing for sediment particle tracking via particle tracking velocimetry (PTV). Together, these overlapping, simultaneously recorded images provided sediment particle and fluid velocities at high temporal and spatial resolution (100 Hz sampling with 0.8 mm vector spacing for the 2D-3C fluid velocity field). Measurements were made under a wide range of oscillatory flows over flat and rippled sand beds. The set of observations allow for the investigation of the relative importance of pressure gradients and shear stresses on sediment transport.
The energy band memory server algorithm for parallel Monte Carlo transport calculations
International Nuclear Information System (INIS)
Felker, K.G.; Siegel, A.R.; Smith, K.S.; Romano, P.K.; Forget, B.
2013-01-01
An algorithm is developed to significantly reduce the on-node footprint of cross section memory in Monte Carlo particle tracking. The classic method of per-node replication of cross section data is replaced by a memory server model, in which the read-only lookup tables reside on a remote, disjoint set of processors. The main particle tracking algorithm is then modified to enable efficient use of the remotely stored data. Results of a prototype code on a Blue Gene/Q installation reveal that the penalty for remote storage is reasonable on the time scales of real-world applications, thus yielding a path forward for a broad range of applications that are memory bound using current techniques. (authors)
Parallel SOL transport in MAST and JET: the impact of the mirror force
International Nuclear Information System (INIS)
Kirk, A; Fundamenski, W; Ahn, J-W; Counsell, G
2003-01-01
Interpretative modelling of the SOL plasma in conventional (JET) and tight (MAST) aspect ratio devices has been performed using OSM2/EIRENE. A detailed comparison has been made of the solutions of the fluid equations, and one key issue uncovered by this modelling is the significance of the mirror force for the spherical tokamak (ST) SOL. This force is proportional to ∇∥B/B, which is typically a factor of 10 larger in an ST due to the low aspect ratio. This term leads to changes in the charged-particle velocity distributions near regions with large ∇∥B/B, representing an effective upstream particle and momentum source. The modelling performed in this paper indicates that exclusion of the ∇∥B term may lead to incorrect conclusions on, for example, the upstream density, especially in STs
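For reference, the quoted scaling follows from the guiding-centre mirror force; in a standard textbook form (not taken from the paper):

```latex
F_\parallel \;=\; -\,\mu\,\nabla_\parallel B
            \;=\; -\,\frac{m v_\perp^{2}}{2}\,\frac{\nabla_\parallel B}{B},
```

so at fixed perpendicular energy the parallel force scales with ∇∥B/B, the quantity that is roughly a factor of 10 larger at tight aspect ratio.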
Implementation of a Monte Carlo algorithm for neutron transport on a massively parallel SIMD machine
International Nuclear Information System (INIS)
Baker, R.S.
1992-01-01
We present some results from the recent adaptation of a vectorized Monte Carlo algorithm to a massively parallel architecture. The performance of the algorithm on a single-processor Cray Y-MP and on a Thinking Machines Corporation CM-2 and CM-200 is compared for several test problems. The results show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when the algorithms are applied to realistic problems which require extensive variance reduction. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well
Effects of model approximations for electron, hole, and photon transport in swift heavy ion tracks
Energy Technology Data Exchange (ETDEWEB)
Rymzhanov, R.A. [Joint Institute for Nuclear Research, Joliot-Curie 6, 141980 Dubna, Moscow Region (Russian Federation); Medvedev, N.A., E-mail: nikita.medvedev@fzu.cz [Department of Radiation and Chemical Physics, Institute of Physics, Czech Academy of Sciences, Na Slovance 2, 182 21 Prague 8 (Czech Republic); Laser Plasma Department, Institute of Plasma Physics, Czech Academy of Sciences, Za Slovankou 3, 182 00 Prague 8 (Czech Republic); Volkov, A.E. [Joint Institute for Nuclear Research, Joliot-Curie 6, 141980 Dubna, Moscow Region (Russian Federation); National Research Centre ‘Kurchatov Institute’, Kurchatov Sq. 1, 123182 Moscow (Russian Federation); Lebedev Physical Institute of the Russian Academy of Sciences, Leninskij pr., 53,119991 Moscow (Russian Federation); National University of Science and Technology MISiS, Leninskij pr., 4, 119049 Moscow (Russian Federation); National Research Nuclear University MEPhI, Kashirskoye sh., 31, 115409 Moscow (Russian Federation)
2016-12-01
The event-by-event Monte Carlo code TREKIS was recently developed to describe excitation of the electron subsystems of solids in the nanometric vicinity of the trajectory of a nonrelativistic swift heavy ion (SHI) decelerated in the electronic stopping regime. The complex dielectric function (CDF) formalism was applied in the cross sections used, to account for the collective response of the matter to excitation. Using this model we investigate the effects of the basic assumptions on the modeled kinetics of the electronic subsystem, which ultimately determine the parameters of the excited material in an SHI track. In particular, (a) the effects of different momentum dependencies of the CDF on the scattering of projectiles on the electron subsystem are investigated. The 'effective one-band' approximation for target electrons produces good agreement of the calculated electron mean free paths with those obtained in experiments on metals. (b) Effects of the collective response of the lattice appear to dominate the randomization of electron motion. We study how sensitive these effects are to the target temperature. We also compare the results of applying different model forms of (quasi-)elastic cross sections in simulations of the ion track kinetics, e.g. those calculated taking into account optical phonons in the CDF form vs. Mott's atomic cross sections. (c) It is demonstrated that the kinetics of valence holes significantly affects the redistribution of the excess electronic energy in the vicinity of an SHI trajectory, as well as its conversion into lattice excitation in dielectrics and semiconductors. (d) It is also shown that the transport of photons originating from the radiative decay of core holes carries the excess energy faster and farther away from the track core; however, the amount of this energy is relatively small.
Energy Technology Data Exchange (ETDEWEB)
Philip, Bobby, E-mail: philipb@ornl.gov [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Berrill, Mark A.; Allu, Srikanth; Hamilton, Steven P.; Sampath, Rahul S.; Clarno, Kevin T. [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Dilts, Gary A. [Los Alamos National Laboratory, PO Box 1663, Los Alamos, NM 87545 (United States)
2015-04-01
This paper describes an efficient and nonlinearly consistent parallel solution methodology for solving coupled nonlinear thermal transport problems that occur in nuclear reactor applications over hundreds of individual 3D physical subdomains. Efficiency is obtained by leveraging knowledge of the physical domains, the physics on individual domains, and the couplings between them for preconditioning within a Jacobian-Free Newton Krylov method. The computational infrastructure that enabled this work, namely the open-source Advanced Multi-Physics (AMP) package developed by the authors, is described. Verification and validation experiments, and parallel performance analyses in weak and strong scaling studies demonstrating the achieved efficiency of the algorithm, are presented. Furthermore, numerical experiments demonstrate that the preconditioner developed is independent of the number of fuel subdomains in a fuel rod, which is particularly important when simulating different types of fuel rods. Finally, we demonstrate the power of the coupling methodology by considering problems with couplings between surface and volume physics and by coupling the nonlinear thermal transport in fuel rods to an external radiation transport code.
International Nuclear Information System (INIS)
Procassini, R J; Beck, B R
2004-01-01
It might be assumed that use of a "high-quality" random number generator (RNG), producing a sequence of "pseudo-random" numbers with a "long" repetition period, is crucial for producing unbiased results in Monte Carlo particle transport simulations. While several theoretical and empirical tests have been devised to check the quality (randomness and period) of an RNG, for many applications it is not clear what level of RNG quality is required to produce unbiased results. This paper explores the issue of RNG quality in the context of parallel Monte Carlo transport simulations in order to determine how "good" is "good enough". This study employs the MERCURY Monte Carlo code, which incorporates the CNPRNG library for the generation of pseudo-random numbers via linear congruential generator (LCG) algorithms. The paper outlines the usage of random numbers during parallel MERCURY simulations, and then describes the source and criticality transport simulations which comprise the empirical basis of this study. A series of calculations for each test problem, in which the quality of the RNG (period of the LCG) is varied, provides the empirical basis for determining the minimum repetition period which may be employed without biasing the mean integrated results
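As a minimal illustration of the quantities under study, here is a linear congruential generator and a brute-force period check. The multiplier and increment are common textbook choices (Numerical Recipes), not CNPRNG's actual parameters:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator x -> (a*x + c) mod m, yielding
    floats in [0, 1). Period and quality depend on (a, c, m)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def period_of(a, c, m, seed=1):
    """Brute-force the repetition period of a small, full-period LCG
    (parameters satisfying the Hull-Dobell full-period criteria)."""
    x0 = (a * seed + c) % m
    x, n = (a * x0 + c) % m, 1
    while x != x0:
        x = (a * x + c) % m
        n += 1
    return n
```

By Hull-Dobell, (a=5, c=3, m=16) achieves the full period m = 16; varying such parameters is the small-scale analogue of the period study described above.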
Directory of Open Access Journals (Sweden)
Xueli Chen
2010-01-01
During the past decade, the Monte Carlo method has found wide application in optical imaging for simulating photon transport inside tissues. However, the method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, consisting of the simulation of photon transport both in tissues and in free space. Specifically, the simplification theory of lens systems is utilized to model the camera lens of the optical imaging system, and the Monte Carlo method is employed to describe the energy transfer from the tissue surface to the CCD camera. The focusing effect of the camera lens is also considered to establish the correspondence of points between the tissue surface and the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and efficient. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results.
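A sketch of the in-tissue half of such a simulation: a single-photon random walk with exponential free paths and a per-collision absorption test. Coefficients are illustrative, and the paper's framework additionally models the lens system and free space, which this sketch omits:

```python
import math
import random

def photon_path_length(mu_s=10.0, mu_a=0.1, seed=0):
    """Random walk of one photon in an infinite homogeneous medium with
    scattering coefficient mu_s and absorption coefficient mu_a (1/cm):
    free paths are drawn from an exponential distribution with rate
    mu_t = mu_s + mu_a, and each collision absorbs the photon with
    probability mu_a/mu_t. Returns total path length before absorption."""
    rng = random.Random(seed)
    mu_t = mu_s + mu_a
    total = 0.0
    while True:
        # 1 - random() lies in (0, 1], avoiding log(0)
        total += -math.log(1.0 - rng.random()) / mu_t
        if rng.random() < mu_a / mu_t:   # absorbed at this collision
            return total
```

The mean path length over many photons approaches 1/mu_a, a standard sanity check for kernels of this kind.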
PARALLEL MEASUREMENT AND MODELING OF TRANSPORT IN THE DARHT II BEAMLINE ON ETA II
International Nuclear Information System (INIS)
Chambers, F W; Raymond, B A; Falabella, S; Lee, B S; Richardson, R A; Weir, J T; Davis, H A; Schultze, M E
2005-01-01
To successfully tune the DARHT II transport beamline requires the close coupling of a model of the beam transport and measurement of the beam observables as the beam conditions and magnet settings are varied. For the ETA II experiment using the DARHT II beamline components, this was achieved using the SUICIDE (Simple User Interface Connecting to an Integrated Data Environment) data analysis environment and the FITS (Fully Integrated Transport Simulation) model. The SUICIDE environment has direct access to the experimental beam transport data at acquisition and to the FITS predictions of the transport for immediate comparison. The FITS model is coupled into the control system, where it can read magnet current settings for real-time modeling. We find this integrated coupling is essential for model verification and for the successful development of a tuning aid for efficient convergence on a usable tune. We show real-time comparisons of simulation and experiment and explore the successes and limitations of this close-coupled approach
Kuiroukidis, Ap.; Throumoulopoulos, G. N.
2015-08-01
We construct nonlinear toroidal equilibria of fixed diverted boundary shaping with reversed magnetic shear and flows parallel to the magnetic field. The equilibria have hole-like current density and the reversed magnetic shear increases as the equilibrium nonlinearity becomes stronger. Also, application of a sufficient condition for linear stability implies that the stability is improved as the equilibrium nonlinearity correlated to the reversed magnetic shear gets stronger with a weaker stabilizing contribution from the flow. These results indicate synergetic stabilizing effects of reversed magnetic shear, equilibrium nonlinearity and flow in the establishment of Internal Transport Barriers (ITBs).
International Nuclear Information System (INIS)
Rosa, M.; Warsa, J. S.; Chang, J. H.
2007-01-01
A Fourier analysis is conducted in two-dimensional (2D) Cartesian geometry for the discrete-ordinates (SN) approximation of the neutron transport problem solved with Richardson iteration (Source Iteration) and Richardson iteration preconditioned with Transport Synthetic Acceleration (TSA), using the Parallel Block-Jacobi (PBJ) algorithm. The results for the un-accelerated algorithm show that convergence of PBJ can degrade, leading in particular to stagnation of GMRES(m) in problems containing optically thin sub-domains. The results for the accelerated algorithm indicate that TSA can be used to efficiently precondition an iterative method in the optically thin case when implemented in the 'modified' version MTSA, in which only the scattering in the low order equations is reduced by some non-negative factor β<1. (authors)
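The slow convergence that Source Iteration exhibits, and that acceleration schemes such as TSA target, can be illustrated with a toy infinite-medium fixed point. A sketch only, not the SN solver analyzed in the paper:

```python
def source_iteration(c, q, tol=1e-10, max_it=10_000):
    """Toy infinite-medium analogue of Source Iteration (Richardson
    iteration on the scattering source): iterate the scalar-flux fixed
    point phi = c*phi + q. The error contracts by the scattering ratio
    c each sweep, so convergence is geometric and very slow as c -> 1,
    which is precisely what acceleration schemes address."""
    phi = 0.0
    for it in range(max_it):
        new = c * phi + q
        if abs(new - phi) < tol:
            return new, it
        phi = new
    return phi, max_it

phi, its = source_iteration(0.5, 1.0)   # exact fixed point is q/(1-c) = 2
```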
Mills, R. T.; Rupp, K.; Smith, B. F.; Brown, J.; Knepley, M.; Zhang, H.; Adams, M.; Hammond, G. E.
2017-12-01
As the high-performance computing community pushes towards the exascale horizon, power and heat considerations have driven the increasing importance and prevalence of fine-grained parallelism in new computer architectures. High-performance computing centers have become increasingly reliant on GPGPU accelerators and "manycore" processors such as the Intel Xeon Phi line, and 512-bit SIMD registers have even been introduced in the latest generation of Intel's mainstream Xeon server processors. The high degree of fine-grained parallelism and more complicated memory hierarchy considerations of such "manycore" processors present several challenges to existing scientific software. Here, we consider how the massively parallel, open-source hydrologic flow and reactive transport code PFLOTRAN - and the underlying Portable, Extensible Toolkit for Scientific Computation (PETSc) library on which it is built - can best take advantage of such architectures. We will discuss some key features of these novel architectures and our code optimizations and algorithmic developments targeted at them, and present experiences drawn from working with a wide range of PFLOTRAN benchmark problems on these architectures.
E.J. Spee (Edwin); P.M. de Zeeuw (Paul); J.G. Verwer (Jan); J.G. Blom (Joke); W. Hundsdorfer (Willem)
1996-01-01
Atmospheric air quality modeling relies in part on numerical simulation. Required numerical simulations are often hampered by lack of computer capacity and computational speed. This problem is most severe in the field of global modeling where transport and exchange of trace constituents
Multi-agent model predictive control for transportation networks : Serial versus parallel schemes
Negenborn, R.R.; De Schutter, B.; Hellendoorn, J.
2006-01-01
We consider the control of large-scale transportation networks, like road traffic networks, power distribution networks, water distribution networks, etc. Control of these networks is often not possible from a single point by a single intelligent control agent; instead control has to be performed
Wheeler, K. I.; Levia, D. F.; Hudson, J. E.
2017-09-01
In autumn, the dissolved organic matter (DOM) contribution of leaf litter leachate to streams in forested watersheds changes as trees undergo resorption, senescence, and leaf abscission. Despite its biogeochemical importance, little work has investigated how leaf litter leachate DOM changes throughout autumn and how any changes might differ interspecifically and intraspecifically. Since climate change is expected to cause vegetation migration, it is necessary to learn how changes in forest composition could affect DOM inputs via leaf litter leachate. We examined changes in leaf litter leachate fluorescent DOM (FDOM) from American beech (Fagus grandifolia Ehrh.) leaves in Maryland, Rhode Island, Vermont, and North Carolina and from yellow poplar (Liriodendron tulipifera L.) leaves from Maryland. FDOM in leachate samples was characterized by excitation-emission matrices (EEMs). A six-component parallel factor analysis (PARAFAC) model was created to identify components that accounted for the majority of the variation in the data set. Self-organizing maps (SOM) compared the PARAFAC component proportions of leachate samples. Phenophase and species exerted much stronger influence on the determination of a sample's SOM placement than geographic origin. As expected, FDOM from all trees transitioned from more protein-like components to more humic-like components with senescence. Percent greenness of sampled leaves and the proportion of tyrosine-like component 1 were found to be significantly different between the two genetic beech clusters, suggesting differences in photosynthesis and resorption. Our results highlight the need to account for interspecific and intraspecific variations in leaf litter leachate FDOM throughout autumn when examining the influence of allochthonous inputs to streams.
libmpdata++ 1.0: a library of parallel MPDATA solvers for systems of generalised transport equations
Jaruga, A.; Arabas, S.; Jarecka, D.; Pawlowska, H.; Smolarkiewicz, P. K.; Waruszewski, M.
2015-04-01
This paper accompanies the first release of libmpdata++, a C++ library implementing the multi-dimensional positive-definite advection transport algorithm (MPDATA) on regular structured grid. The library offers basic numerical solvers for systems of generalised transport equations. The solvers are forward-in-time, conservative and non-linearly stable. The libmpdata++ library covers the basic second-order-accurate formulation of MPDATA, its third-order variant, the infinite-gauge option for variable-sign fields and a flux-corrected transport extension to guarantee non-oscillatory solutions. The library is equipped with a non-symmetric variational elliptic solver for implicit evaluation of pressure gradient terms. All solvers offer parallelisation through domain decomposition using shared-memory parallelisation. The paper describes the library programming interface, and serves as a user guide. Supported options are illustrated with benchmarks discussed in the MPDATA literature. Benchmark descriptions include code snippets as well as quantitative representations of simulation results. Examples of applications include homogeneous transport in one, two and three dimensions in Cartesian and spherical domains; a shallow-water system compared with analytical solution (originally derived for a 2-D case); and a buoyant convection problem in an incompressible Boussinesq fluid with interfacial instability. All the examples are implemented out of the library tree. Regardless of the differences in the problem dimensionality, right-hand-side terms, boundary conditions and parallelisation approach, all the examples use the same unmodified library, which is a key goal of libmpdata++ design. The design, based on the principle of separation of concerns, prioritises the user and developer productivity. The libmpdata++ library is implemented in C++, making use of the Blitz++ multi-dimensional array containers, and is released as free/libre and open-source software.
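The donor-cell (first-order upwind) pass at the core of MPDATA can be sketched in a few lines. A Python stand-in for illustration (the library itself is C++), showing only the first pass and omitting the anti-diffusive corrective iterations that define the full scheme:

```python
def donor_cell_step(psi, courant):
    """One donor-cell (first-order upwind) step for 1-D advection on a
    periodic grid, assuming courant = u*dt/dx in [0, 1]. MPDATA applies
    this pass first, then reduces its implicit numerical diffusion with
    anti-diffusive corrective velocities (not shown here)."""
    n = len(psi)
    return [psi[i] + courant * (psi[i - 1] - psi[i]) for i in range(n)]
```

For 0 ≤ courant ≤ 1 the step is conservative and sign-preserving on a periodic grid, mirroring the "conservative and non-linearly stable" properties claimed for the library's solvers.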
libmpdata++ 0.1: a library of parallel MPDATA solvers for systems of generalised transport equations
Jaruga, A.; Arabas, S.; Jarecka, D.; Pawlowska, H.; Smolarkiewicz, P. K.; Waruszewski, M.
2014-11-01
This paper accompanies the first release of libmpdata++, a C++ library implementing the Multidimensional Positive-Definite Advection Transport Algorithm (MPDATA). The library offers basic numerical solvers for systems of generalised transport equations. The solvers are forward-in-time, conservative and non-linearly stable. The libmpdata++ library covers the basic second-order-accurate formulation of MPDATA, its third-order variant, the infinite-gauge option for variable-sign fields and a flux-corrected transport extension to guarantee non-oscillatory solutions. The library is equipped with a non-symmetric variational elliptic solver for implicit evaluation of pressure gradient terms. All solvers offer parallelisation through domain decomposition using shared-memory parallelisation. The paper describes the library programming interface, and serves as a user guide. Supported options are illustrated with benchmarks discussed in the MPDATA literature. Benchmark descriptions include code snippets as well as quantitative representations of simulation results. Examples of applications include: homogeneous transport in one, two and three dimensions in Cartesian and spherical domains; a shallow-water system compared with an analytical solution (originally derived for a 2-D case); and a buoyant convection problem in an incompressible Boussinesq fluid with interfacial instability. All the examples are implemented out of the library tree. Regardless of the differences in the problem dimensionality, right-hand-side terms, boundary conditions and parallelisation approach, all the examples use the same unmodified library, which is a key goal of the libmpdata++ design. The design, based on the principle of separation of concerns, prioritises user and developer productivity. The libmpdata++ library is implemented in C++, making use of the Blitz++ multi-dimensional array containers, and is released as free/libre and open-source software.
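To make the MPDATA scheme concrete, here is a minimal 1-D sketch (not libmpdata++'s actual C++ API; function names and parameters are illustrative): a donor-cell pass followed by antidiffusive corrective passes built from Smolarkiewicz's pseudo-velocity, with periodic boundaries and a non-negative field.

```python
import numpy as np

def upwind(psi, c):
    """Donor-cell pass in flux form; c[i] is the Courant number at face i+1/2."""
    cp, cm = np.maximum(c, 0.0), np.minimum(c, 0.0)
    flux = cp * psi + cm * np.roll(psi, -1)      # face flux at i+1/2
    return psi - (flux - np.roll(flux, 1))       # conservative update

def mpdata_step(psi, courant, n_corr=1, eps=1e-15):
    """One MPDATA step for 1-D constant-velocity advection, periodic
    boundaries, non-negative psi: an upwind pass plus n_corr
    antidiffusive corrective passes."""
    c = np.full_like(psi, courant)
    psi = upwind(psi, c)
    for _ in range(n_corr):
        d = np.roll(psi, -1) - psi
        s = np.roll(psi, -1) + psi
        c = (np.abs(c) - c**2) * d / (s + eps)   # antidiffusive pseudo-velocity
        psi = upwind(psi, c)
    return psi

# advect a smooth bump for a few steps
x = np.linspace(0.0, 1.0, 100, endpoint=False)
psi0 = np.exp(-200.0 * (x - 0.3) ** 2)
psi = psi0.copy()
for _ in range(50):
    psi = mpdata_step(psi, courant=0.5)
```

Because both passes are written in flux form, the scheme conserves the integral of psi to machine precision; the corrective pass counteracts the implicit diffusion of plain upwind.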
International Nuclear Information System (INIS)
Rosa, Massimiliano; Warsa, James S.; Perks, Michael
2011-01-01
We have implemented a cell-wise, block-Gauss-Seidel (bGS) iterative algorithm for the solution of the S_n transport equations on the Roadrunner hybrid, parallel computer architecture. A compute node of this massively parallel machine comprises AMD Opteron cores that are linked to a Cell Broadband Engine (Cell/B.E.). LAPACK routines have been ported to the Cell/B.E. in order to make use of its parallel Synergistic Processing Elements (SPEs). The bGS algorithm is based on the LU factorization and solution of a linear system that couples the fluxes for all S_n angles and energy groups on a mesh cell. For every cell of a mesh that has been parallel-decomposed on the higher-level Opteron processors, a linear system is transferred to the Cell/B.E., and the parallel LAPACK routines are used to compute a solution, which is then transferred back to the Opteron, where the rest of the computations for the S_n transport problem take place. Compared to standard parallel machines, a hundred-fold speedup of the bGS was observed on the hybrid Roadrunner architecture. Numerical experiments with strong and weak parallel scaling demonstrate that the bGS method is viable and compares favorably to full parallel sweeps (FPS) on two-dimensional, unstructured meshes when it is applied to optically thick, multi-material problems. As expected, however, it is not as efficient as FPS in optically thin problems. (author)
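The cell-wise block solve at the heart of such a bGS iteration can be sketched as follows, with NumPy's dense solver standing in for the LAPACK LU kernels that the paper offloads to the Cell/B.E. SPEs. The block-tridiagonal test system and all sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
ncell, nblk = 8, 4   # mesh cells; angles x groups coupled per cell

# block-tridiagonal test system with strongly dominant diagonal blocks
Dblk = [np.eye(nblk) * 10.0 + rng.random((nblk, nblk)) for _ in range(ncell)]
Lblk = [rng.random((nblk, nblk)) * 0.1 for _ in range(ncell)]  # coupling to cell i-1
Ublk = [rng.random((nblk, nblk)) * 0.1 for _ in range(ncell)]  # coupling to cell i+1
b = [rng.random(nblk) for _ in range(ncell)]

x = [np.zeros(nblk) for _ in range(ncell)]
for sweep in range(50):
    for i in range(ncell):
        rhs = b[i].copy()
        if i > 0:
            rhs -= Lblk[i] @ x[i - 1]
        if i < ncell - 1:
            rhs -= Ublk[i] @ x[i + 1]
        # this per-cell dense solve is the LU kernel that the paper
        # ships to the accelerator (parallel LAPACK on the SPEs)
        x[i] = np.linalg.solve(Dblk[i], rhs)
```

Diagonal dominance of the per-cell blocks guarantees the sweep converges; the residual after a few dozen sweeps is at machine precision.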
Energy Technology Data Exchange (ETDEWEB)
Nakka, B W; Chan, T
1994-12-01
A deterministic particle-tracking code (TRACK3D) has been developed to compute convective flow paths of conservative (nonreactive) contaminants through porous geological media. TRACK3D requires the groundwater velocity distribution, which, in our applications, results from flow simulations using AECL's MOTIF code. The MOTIF finite-element code solves the transient and steady-state coupled equations of groundwater flow, solute transport and heat transport in fractured/porous media. With few modifications, TRACK3D can be used to analyse the velocity distributions calculated by other finite-element or finite-difference flow codes. This report describes the assumptions, limitations, organization, operation and applications of the TRACK3D code, and provides a comprehensive user's manual.
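A minimal particle-tracking kernel in the spirit of TRACK3D, reduced to 2-D with explicit Euler steps and bilinear velocity interpolation (TRACK3D itself is 3-D and tracks element crossings more carefully), might look like:

```python
import numpy as np

def bilinear(field, x, y):
    """Bilinear interpolation of a node-centred field at (x, y),
    unit grid spacing, node (i, j) at coordinate (x=j, y=i)."""
    ny, nx = field.shape
    i, j = int(np.clip(y, 0, ny - 2)), int(np.clip(x, 0, nx - 2))
    fy, fx = y - i, x - j
    return ((1 - fy) * (1 - fx) * field[i, j] + (1 - fy) * fx * field[i, j + 1]
            + fy * (1 - fx) * field[i + 1, j] + fy * fx * field[i + 1, j + 1])

def track(u, v, x0, y0, dt=0.1, nsteps=200):
    """Advect a conservative particle through a steady velocity field
    (u, v) by explicit Euler steps, recording the flow path."""
    path = [(x0, y0)]
    x, y = x0, y0
    for _ in range(nsteps):
        x += dt * bilinear(u, x, y)
        y += dt * bilinear(v, x, y)
        path.append((x, y))
    return np.array(path)

# smoke test: uniform diagonal flow
u = np.ones((20, 20))
v = 0.5 * np.ones((20, 20))
path = track(u, v, 1.0, 1.0, dt=0.1, nsteps=100)
```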
Generalized fluid equations for parallel transport in collisional to weakly collisional plasmas
International Nuclear Information System (INIS)
Zawaideh, E.; Najmabadi, F.; Conn, R.W.
1986-01-01
A new set of two-fluid equations that are valid from collisional to weakly collisional limits is derived. Starting from gyrokinetic equations in flux coordinates with no zero-order drifts, a set of moment equations describing plasma transport along the field lines of a space- and time-dependent magnetic field is derived. No restriction on the anisotropy of the ion distribution function is imposed. In the highly collisional limit, these equations reduce to those of Braginskii, while in the weakly collisional limit they are similar to the double adiabatic or Chew, Goldberger, and Low (CGL) equations [Proc. R. Soc. London, Ser. A 236, 112 (1956)]. The new set of equations also exhibits a physical singularity at the sound speed. This singularity is used to derive and compute the sound speed. Numerical examples comparing these equations with conventional transport equations show that in the limit where the ratio of the mean free path lambda to the scale length of the magnetic field gradient L_B approaches zero, there is no significant difference between the solutions of the new and conventional transport equations. However, conventional fluid equations, ordinarily expected to be correct to order (lambda/L_B)^2, are found to have errors of order (lambda/L_u)^2 = (lambda/L_B)^2/(1 - M^2)^2, where L_u is the scale length of the flow velocity gradient and M is the Mach number. As such, the conventional equations may contain large errors near the sound speed (M ≈ 1)
The effect of plasma fluctuations on parallel transport parameters in the SOL
Czech Academy of Sciences Publication Activity Database
Havlíčková, Eva; Fundameski, W.; Naulin, V.; Nielsen, A.H.; Wiesen, S.; Horáček, Jan; Seidl, Jakub
2011-01-01
Roč. 415, č. 1 (2011), S471-S474 ISSN 0022-3115. [International Conference on Plasma-Surface Interactions in Controlled Fusion Device/19th./. San Diego, 24.05.2010-28.05.2010] R&D Projects: GA ČR GAP205/10/2055; GA MŠk 7G09042 Institutional research plan: CEZ:AV0Z20430508 Keywords : Tokamak * plasma * transport Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 2.052, year: 2011 http://www.sciencedirect.com/science/article/pii/S002231151000560X
International Nuclear Information System (INIS)
Santos, Marcelo C. dos; Pereira, Claudio M.N.A.; Schirru, Roberto; Pinheiro, André; Coordenacao de Pos-Graduacao e Pesquisa de Engenharia
2017-01-01
Atmospheric radionuclide dispersion systems (ARDS) are essential tools to predict the consequences of unexpected radioactive releases from nuclear power plants. During an accident involving a radioactive release, an accurate forecast is vital to guide the evacuation of potentially affected areas. In order to predict the dispersion of the radioactive material and its impact on the environment, the model must process information about the source term (radioactive materials released, their activities and location), weather conditions (wind, humidity and precipitation) and geographical characteristics (topography). An ARDS is basically composed of four main modules: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The Wind Field and Plume Dispersion modules are the ones that require high computational performance to achieve accurate results within an acceptable time. Taking this into account, this work focuses on the development of a GPU-based parallel Plume Dispersion module, centred on the radionuclide transport and diffusion calculations, which take a given wind field and a released source term as parameters. The program is being developed in the C++ programming language with CUDA libraries. In a comparative case study between parallel and sequential versions of the slowest function of the Plume Dispersion module, a speedup of 11.63 times was observed. (author)
Energy Technology Data Exchange (ETDEWEB)
Santos, Marcelo C. dos; Pereira, Claudio M.N.A.; Schirru, Roberto; Pinheiro, André, E-mail: jovitamarcelo@gmail.com, E-mail: cmnap@ien.gov.br, E-mail: schirru@lmp.ufrj.br, E-mail: apinheiro99@gmail.com [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear
2017-07-01
Atmospheric radionuclide dispersion systems (ARDS) are essential tools to predict the consequences of unexpected radioactive releases from nuclear power plants. During an accident involving a radioactive release, an accurate forecast is vital to guide the evacuation of potentially affected areas. In order to predict the dispersion of the radioactive material and its impact on the environment, the model must process information about the source term (radioactive materials released, their activities and location), weather conditions (wind, humidity and precipitation) and geographical characteristics (topography). An ARDS is basically composed of four main modules: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The Wind Field and Plume Dispersion modules are the ones that require high computational performance to achieve accurate results within an acceptable time. Taking this into account, this work focuses on the development of a GPU-based parallel Plume Dispersion module, centred on the radionuclide transport and diffusion calculations, which take a given wind field and a released source term as parameters. The program is being developed in the C++ programming language with CUDA libraries. In a comparative case study between parallel and sequential versions of the slowest function of the Plume Dispersion module, a speedup of 11.63 times was observed. (author)
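As a loose CPU-side stand-in for such a Plume Dispersion kernel, the sketch below advances one explicit step of 2-D advection-diffusion with upwind advection. Each whole-array operation is the part a CUDA kernel would map to one thread per grid cell; all coefficients are invented for illustration, not the ARDS model's.

```python
import numpy as np

def plume_step(c, u, v, D, dt, dx):
    """One explicit step of 2-D advection-diffusion for a plume
    concentration c on a uniform grid: upwind advection by a uniform
    wind (u, v >= 0 here, for brevity) plus centred diffusion."""
    cp = np.pad(c, 1, mode="edge")
    dcdx = (cp[1:-1, 1:-1] - cp[1:-1, :-2]) / dx      # upwind x-derivative
    dcdy = (cp[1:-1, 1:-1] - cp[:-2, 1:-1]) / dx      # upwind y-derivative
    lap = (cp[1:-1, 2:] + cp[1:-1, :-2] + cp[2:, 1:-1] + cp[:-2, 1:-1]
           - 4.0 * c) / dx**2
    return c + dt * (-u * dcdx - v * dcdy + D * lap)

c = np.zeros((64, 64))
c[32, 8] = 1.0                       # instantaneous release (source term)
for _ in range(100):
    c = plume_step(c, u=1.0, v=0.2, D=0.05, dt=0.01, dx=0.1)
```

With these numbers the advective and diffusive stability limits are respected, so the update keeps concentrations non-negative while the peak drifts downwind and spreads.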
Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access
International Nuclear Information System (INIS)
Romano, Paul K.; Forget, Benoit; Brown, Forrest
2010-01-01
One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations. (author)
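The decomposition idea can be mimicked in a few lines: tallies live with the node that owns each spatial domain, and every boundary crossing triggers the equivalent of a remote get (cross sections) plus a remote accumulate (tallies). This toy uses plain arrays as stand-ins for MPI memory windows; there is no real RMA here, only the bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(1)

NDOM = 4          # compute nodes, each owning one spatial domain
NCELL = 25        # cells per domain
tallies = [np.zeros(NCELL) for _ in range(NDOM)]   # node-local tallies

def owner(cell):
    """Map a global cell index to (owning node, local cell index)."""
    return cell // NCELL, cell % NCELL

nevents = 0
for _ in range(1000):                      # random-walk histories
    cell = int(rng.integers(0, NDOM * NCELL))
    for _ in range(int(rng.integers(1, 20))):
        dom, local = owner(cell)
        # stand-in for an MPI_Get of geometry/cross-section data plus
        # an MPI_Accumulate into the owning node's tally window
        tallies[dom][local] += 1.0
        nevents += 1
        cell = (cell + int(rng.choice((-1, 1)))) % (NDOM * NCELL)
```

The invariant worth checking is that no tally contribution is lost when ownership and computation are on different nodes: the tallies summed over all owners equal the number of scoring events.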
Analysis of Massively Parallel Discrete-Ordinates Transport Sweep Algorithms with Collisions
International Nuclear Information System (INIS)
Bailey, T.S.; Falgout, R.D.
2008-01-01
We present theoretical scaling models for a variety of discrete-ordinates sweep algorithms. In these models, we pay particular attention to the way each algorithm handles collisions. A collision is defined as a processor having multiple angles ready to be swept during one stage of the sweep. The models also take into account how subdomains are assigned to processors and how angles are grouped during the sweep. We describe a data-driven algorithm that resolves collisions efficiently during the sweep, as well as other algorithms that have been designed to avoid collisions completely. Our models are validated using the ARGES and AMTRAN transport codes. We then use the models to study and predict scaling trends in all of the sweep algorithms.
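For intuition, a textbook KBA-style stage-count model (a generic sketch, not the paper's ARGES/AMTRAN-validated models) distinguishes sweeping octants serially, which avoids collisions entirely, from an idealized pipelined schedule:

```python
def kba_stages(px, py, nsets):
    """Stage count for one sweep direction on a px-by-py processor
    grid with nsets pipelined angle/energy work sets: the far corner
    starts after px + py - 2 stages, then streams nsets chunks."""
    return px + py - 2 + nsets

def serial_octants(px, py, nsets, noct=4):
    """Collision-free upper bound: sweep the octants one after another."""
    return noct * kba_stages(px, py, nsets)

def pipelined_octants(px, py, nsets, noct=4):
    """Idealized lower bound if a data-driven scheduler resolved every
    collision and kept the pipeline full across octants."""
    return px + py - 2 + noct * nsets
```

On a 4x4 grid with 8 work sets, the serial-octant schedule needs 56 stages while the fully pipelined bound is 38, which is the gap a collision-resolving data-driven algorithm tries to close.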
A fast, parallel algorithm to solve the basic fluvial erosion/transport equations
Braun, J.
2012-04-01
Quantitative models of landform evolution are commonly based on the solution of a set of equations representing the processes of fluvial erosion, transport and deposition, which leads to predictions of the geometry of a river channel network and its evolution through time. The river network is often regarded as the backbone of any surface processes model (SPM) that might include other physical processes acting at a range of spatial and temporal scales along hill slopes. The basic laws of fluvial erosion require the computation of local (slope) and non-local (drainage area) quantities at every point of a given landscape, a computationally expensive operation which limits the resolution of most SPMs. I present here an algorithm to compute the various components required in the parameterization of fluvial erosion (and transport) and thus solve the basic fluvial geomorphic equation, that is very efficient because it is O(n) (the number of required arithmetic operations is linearly proportional to the number of nodes defining the landscape), and is fully parallelizable (the computational cost decreases in inverse proportion to the number of processors used to solve the problem). The algorithm is ideally suited for use on the latest multi-core processors. Using this new technique, geomorphic problems can be solved at an unprecedented resolution (typically of the order of 10,000 x 10,000 nodes) while keeping the computational cost reasonable (of order 1 s per time step). Furthermore, I will show that the algorithm is applicable to any regular or irregular representation of the landform, and is such that the temporal evolution of the landform can be discretized by a fully implicit time-marching algorithm, making it unconditionally stable. I will demonstrate that such an efficient algorithm is ideally suited to produce a fully predictive SPM that links observationally based parameterizations of small-scale processes to the evolution of large-scale features of the landscapes on
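An O(n) accumulation of the non-local quantity (drainage area) of the kind the abstract describes can be sketched with a receiver array and a node ordering in which every node appears after its receiver. The function name and the toy network are illustrative.

```python
import numpy as np

def drainage_area(rcv, cell_area=1.0):
    """Accumulate drainage area in O(n), given each node's receiver
    (its steepest-descent downstream neighbour; base-level nodes
    point to themselves)."""
    n = len(rcv)
    # donors: the inverse of the receiver function
    donors = [[] for _ in range(n)]
    for i in range(n):
        if rcv[i] != i:
            donors[rcv[i]].append(i)
    # build an ordering in which every node follows its receiver
    stack = []
    for i in range(n):
        if rcv[i] == i:              # base-level outlet
            queue = [i]
            while queue:
                j = queue.pop()
                stack.append(j)
                queue.extend(donors[j])
    # traverse upstream-to-downstream, accumulating area
    area = np.full(n, cell_area, dtype=float)
    for j in reversed(stack):
        if rcv[j] != j:
            area[rcv[j]] += area[j]
    return area

# tiny 1-D valley: 3 -> 2 -> 1 -> 0, with node 0 at base level
rcv = np.array([0, 0, 1, 2])
area = drainage_area(rcv)
```

Every node is visited a constant number of times, so the cost is linear in the number of nodes; the per-outlet sub-trees are also independent, which is what makes the accumulation parallelizable.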
International Nuclear Information System (INIS)
Chen, C.T.; Li, S.H.
1997-01-01
Analytical solutions are developed for the problem of radionuclide transport in a system of parallel fractures situated in a porous rock matrix. A constant flux is used as the inlet boundary condition. The solutions consider the following processes: (a) advective transport along the fractures; (b) mechanical dispersion and molecular diffusion along the fractures; (c) molecular diffusion from a fracture to the porous matrix; (d) molecular diffusion within the porous matrix in the direction perpendicular to the fracture axis; (e) adsorption onto the fracture wall; (f) adsorption within the porous matrix, and (g) radioactive decay. The solutions are based on the Laplace transform method. The general transient solution is in the form of a double integral that is evaluated using composite Gauss-Legendre quadrature. A simpler transient solution that is in the form of a single integral is also presented for the case that assumes negligible longitudinal dispersion along the fractures. The steady-state solutions are also provided. A number of examples are given to illustrate the effects of various important parameters, including: (a) fracture spacing; (b) fracture dispersion coefficient; (c) matrix diffusion coefficient; (d) fracture width; (e) groundwater velocity; (f) matrix retardation factor; and (g) matrix porosity
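The composite Gauss-Legendre quadrature used to evaluate the transient solution can be illustrated generically; the integrand below is a smooth stand-in, not the paper's kernel.

```python
import numpy as np

def composite_gauss_legendre(f, a, b, npanels=16, npts=8):
    """Evaluate the integral of f over [a, b] by npts-point
    Gauss-Legendre quadrature on each of npanels equal sub-intervals,
    i.e. the composite scheme named in the abstract."""
    nodes, weights = np.polynomial.legendre.leggauss(npts)
    edges = np.linspace(a, b, npanels + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        half = 0.5 * (hi - lo)       # map [-1, 1] onto [lo, hi]
        mid = 0.5 * (hi + lo)
        total += half * np.sum(weights * f(mid + half * nodes))
    return total

# smoke test on an integrand with decaying-oscillatory character
val = composite_gauss_legendre(lambda x: np.exp(-x) * np.cos(x), 0.0, 10.0)
```

For this integrand the exact value is (1 + e^(-10)(sin 10 - cos 10))/2, and the composite rule reproduces it essentially to machine precision.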
Preziosi-Ribero, A.; Fox, A.; Packman, A. I.; Escobar-Vargas, J.; Donado-Garzon, L. D.; Li, A.; Arnon, S.
2017-12-01
Exchange of mass, momentum and energy between surface water and groundwater is a driving factor for the biology, ecology and chemistry of rivers and water bodies in general. This exchange is controlled by factors such as topography, bed morphology, and the large-scale hydraulic gradient. In the particular case of fine sediments like clay, conservative tracer modeling is not applicable because the particles are trapped in river beds for long periods, so the standard advection-dispersion approach leads to errors and results that do not agree with observations. This study proposes a numerical particle tracking model that represents the behavior of kaolinite in a sand flume, and how its deposition varies under different flow conditions, namely losing and gaining flow. Since fine particles do not behave like solutes, kaolinite dynamics are represented using a settling velocity and a filtration coefficient that allows the particles to be trapped in the bed. This approach allows us to use measurable parameters directly related to fine-particle features, such as size and shape, together with hydraulic parameters. Results are then compared with experimental results obtained in a recirculating laboratory flume, in order to assess the impact of losing and gaining conditions on sediment transport and deposition. Furthermore, our model is able to identify the zones where kaolinite deposition concentrates over the flume due to the bed geometry, and later relate these results with clogging of the bed and hence changes in the bed's hydraulic conductivity. Our results suggest that kaolinite deposition is higher under losing conditions, since the vertical velocity of the flow is added to the deposition velocity of the modeled particles. Moreover, the zones where kaolinite concentrates vary under different flow conditions due to the differences in pressure and velocity in the river bed.
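A deterministic back-of-envelope version of the settling-plus-filtration idea (all numbers are hypothetical, not the study's): losing conditions add a downward seepage velocity, so particles reach the bed over a shorter horizontal distance, and classical filtration retains a fraction 1 - exp(-lam*s) over a travel distance s.

```python
import numpy as np

def deposited_fraction(path_length, lam):
    """Classical filtration: fraction of particles retained over a
    travel distance path_length with filtration coefficient lam (1/m)."""
    return 1.0 - np.exp(-lam * path_length)

def settle_distance(depth, v_settle, v_seepage, u_horizontal):
    """Horizontal distance travelled before reaching the bed: the
    vertical velocity is settling plus downward seepage (losing
    conditions have v_seepage > 0, gaining conditions < 0)."""
    return depth * u_horizontal / (v_settle + v_seepage)

# hypothetical numbers for illustration only
depth, u = 0.02, 0.05        # water depth (m), horizontal velocity (m/s)
v_s = 1e-4                   # kaolinite settling velocity (m/s)
s_losing = settle_distance(depth, v_s, 5e-5, u)    # seepage downward
s_gaining = settle_distance(depth, v_s, -5e-5, u)  # seepage upward
```

The shorter settling distance under losing conditions is the deterministic core of the observation that deposition is higher when downwelling adds to the settling velocity.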
Energy Technology Data Exchange (ETDEWEB)
Sanchez Vicente, A
2011-11-15
For the first time ever, the European Commission is proposing a greenhouse gas emissions target for transport. But how is transport going to provide the services that our society needs while minimising its environmental impacts? This is the theme of the Transport White Paper launched in 2011. TERM 2011 and future reports aim to deliver an annual assessment of progress towards these targets by introducing the Transport and Environment Reporting Mechanism Core Set of Indicators (TERM-CSI). TERM 2011 also provides the baseline against which progress will be checked, covering most of the environmental areas, including energy consumption, emissions, noise and transport demand. In addition, this report presents the latest data and discusses the different aspects that can contribute the most to minimising transport impacts. TERM 2011 applies the avoid-shift-improve (ASI) approach, introduced in the previous TERM report, analysing ways to optimise transport demand, obtain a more sustainable modal split or use the best technology available. (Author)
Parallel transport studies of high-Z impurities in the core of Alcator C-Mod plasmas
Energy Technology Data Exchange (ETDEWEB)
Reinke, M. L.; Hutchinson, I. H.; Rice, J. E.; Greenwald, M.; Howard, N. T.; Hubbard, A.; Hughes, J. W.; Terry, J. L.; Wolfe, S. M. [MIT-Plasma Science and Fusion Center Cambridge, Massachusetts 02139 (United States)
2013-05-15
Measurements of poloidal variation, ñ_z/
International Nuclear Information System (INIS)
Bacon, Diana H.; White, Mark D.; McGrail, B. Peter
2004-01-01
The U.S. Department of Energy must approve a performance assessment (PA) to support the design, construction, approval, and closure of disposal facilities for immobilized low-activity waste (ILAW) currently stored in underground tanks at Hanford, Washington. A critical component of the PA is to provide quantitative estimates of radionuclide release rates from the engineered portion of the disposal facilities. Computer simulations are essential for this purpose because impacts on groundwater resources must be projected to periods of 10,000 years and longer. The computer code selected for simulating the radionuclide release rates is the Subsurface Transport Over Reactive Multiphases (STORM) simulator. The STORM simulator solves coupled conservation equations for component mass and energy that describe subsurface flow over aqueous and gas phases through variably saturated geologic media. The resulting flow fields are used to sequentially solve conservation equations for reactive aqueous phase transport through variably saturated geologic media. These conservation equations for component mass, energy, and solute mass are partial differential equations that mathematically describe flow and transport through porous media. The STORM simulator solves the governing-conservation equations and constitutive functions using numerical techniques for nonlinear systems. The partial differential equations governing thermal and fluid flow processes are solved by the integral volume finite difference method. These governing equations are solved simultaneously using Newton-Raphson iteration. The partial differential equations governing reactive solute transport are solved using either an operator split technique where geochemical reactions and solute transport are solved separately, or a fully coupled technique where these equations are solved simultaneously. The STORM simulator is written in the FORTRAN 77 language, following American National Standards Institute (ANSI) standards
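The operator-split alternative for reactive transport can be sketched in one dimension: advance advection, then chemistry (here, exact first-order decay), as separate sub-steps of each time step. This is a generic illustration of the splitting idea, not STORM's FORTRAN implementation.

```python
import numpy as np

def transport(c, courant):
    """Donor-cell upwind advection, periodic boundaries, 0 < courant <= 1."""
    return c - courant * (c - np.roll(c, 1))

def react(c, k, dt):
    """First-order decay, solved exactly over one step."""
    return c * np.exp(-k * dt)

def operator_split_step(c, courant, k, dt):
    """Operator splitting: solve transport and geochemistry separately
    within one time step (the alternative to a fully coupled solve)."""
    return react(transport(c, courant), k, dt)

c = np.zeros(50)
c[10] = 1.0                  # initial solute pulse
k, dt = 0.1, 1.0
for _ in range(20):
    c = operator_split_step(c, 0.5, k, dt)
```

Because the advection sub-step is conservative and the reaction sub-step is exact, the total mass after n steps is exactly exp(-k*n*dt) times the initial mass, which gives a crisp correctness check on the splitting.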
Li, Jiuyi; Busscher, Henk J.; van der Mei, Henny C.; Norde, Willem; Krom, Bastiaan P.; Sjollema, Jelmer
2011-01-01
Using a new phase-contrast microscopy-based method of analysis, sedimentation has recently been demonstrated to be the major mass transport mechanism of bacteria towards substratum surfaces in a parallel plate flow chamber (J. Li, H.J. Busscher, W. Norde, J. Sjollema, Colloid Surf. B. 84 (2011)76).
Energy Technology Data Exchange (ETDEWEB)
Sanchez Vicente, A.; Pastorello, C.; Foltescu, V.L. [and others
2012-11-15
TERM 2012 (Transport and Environment Reporting Mechanism) presents the most relevant and up to date information on the main issues regarding transport and environment in Europe, particularly in areas with specific policy targets such as greenhouse gas emissions and energy consumption, transport demand levels, noise and other issues. It also offers an overview of the transport sector's impact on air pollutant emissions and air quality. It discusses the contributions made by all modes of transport to direct air pollutant emissions and also to 'secondary' air pollutants formed in the atmosphere. Alongside the recently published Air quality in Europe - 2012 report, TERM 2012 aims to inform the European Commission's review of the Thematic Strategy on Air Pollution. (Author)
Morales, V. L.; Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.
2017-12-01
Biofilms are ubiquitous bacterial communities growing in various porous media, including soils, trickling and sand filters, and are relevant for applications such as the degradation of pollutants for bioremediation, waste water treatment or drinking water production. As they develop, biofilms dynamically change the structure of porous media, increasing the heterogeneity of the pore network and the non-Fickian or anomalous dispersion. In this work, we use an experimental approach to investigate the influence of biofilm growth on pore-scale hydrodynamics and transport processes, and propose a correlated continuous time random walk model capturing these observations. We perform three-dimensional particle tracking velocimetry at four different time points from 0 to 48 hours of biofilm growth. The biofilm growth notably impacts pore-scale hydrodynamics, as shown by a strong increase of the average velocity and by tailing of the Lagrangian velocity probability density functions. Additionally, the spatial correlation length of the flow increases substantially. This points at the formation of preferential flow pathways and stagnation zones, which ultimately leads to an increase of anomalous transport in the porous media considered, characterized by non-Fickian scaling of mean-squared displacements and non-Gaussian distributions of the displacement probability density functions. A gamma distribution provides a remarkable approximation of the bulk and the high tail of the Lagrangian pore-scale velocity magnitude, indicating a transition from a parallel pore arrangement towards a more serial one. Finally, a correlated continuous time random walk based on a stochastic relation velocity model accurately reproduces the observations and could be used to predict transport beyond the time scales accessible to the experiment.
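A toy correlated CTRW in the spirit described above (gamma-distributed speeds, with a finite velocity correlation length mimicked by occasional resampling; all parameters invented, and this is not the paper's calibrated model):

```python
import numpy as np

rng = np.random.default_rng(2)

def correlated_ctrw(npart=2000, nsteps=400, ell=10.0, dx=1.0,
                    shape=0.5, scale=1.0):
    """Each particle advances in fixed space increments dx; its speed
    is gamma-distributed (as observed for pore velocities) and is
    resampled on average every ell steps, mimicking a finite spatial
    velocity correlation length. The transition time per step is dx/v,
    so slow particles generate the heavy waiting-time tail."""
    t = np.zeros(npart)
    v = rng.gamma(shape, scale, npart)
    times = np.empty((nsteps, npart))
    for s in range(nsteps):
        t += dx / v
        times[s] = t
        renew = rng.random(npart) < 1.0 / ell     # decorrelation events
        v[renew] = rng.gamma(shape, scale, renew.sum())
    return times

times = correlated_ctrw()
```

Small gamma-distributed speeds translate into very long transition times, which is the mechanism behind the non-Fickian displacement statistics; increasing `ell` strengthens the velocity correlation and hence the anomalous character.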
2011-06-01
Cities across the United States are grappling with a looming transportation crisis as a : result of ever-increasing passenger and freight transport demands and overburdened : networks of aging infrastructure. All levels of government, but particularl...
International Nuclear Information System (INIS)
Mula-Hernandez, Olga
2014-01-01
In this thesis, we have first developed a time-dependent 3D neutron transport solver on unstructured meshes with discontinuous Galerkin finite element spatial discretization. The solver (called MINARET) represents in itself an important contribution to reactor physics thanks to the accuracy that it can provide in the knowledge of the state of the core during severe accidents. It will also play an important role in vessel fluence calculations. From a mathematical point of view, the most important contribution has consisted in the implementation of modern algorithms that are well adapted to modern parallel architectures and that significantly decrease the computing times. A special effort has been made to efficiently parallelize the time variable by the use of the parareal in time algorithm. For this, we have first analyzed the performance that the classical parareal scheme can provide when applied to the resolution of the neutron transport equation in a reactor core. Then, with the purpose of improving this performance, a parareal scheme that takes more efficiently into account the presence of other iterative schemes in the resolution of each time step has been proposed. The main idea consists in limiting the number of internal iterations for each time step and reaching convergence across the parareal iterations. A second phase of our work has been motivated by the following question: given the high degree of accuracy that MINARET can provide in the modeling of the neutron population, could we somehow use it as a tool to monitor the neutron population in real time, for the purpose of helping in the operation of the reactor? And, what is more, how can such a tool be made coherent, in some sense, with the measurements taken in situ? One of the main challenges of this problem is the real-time aspect of the simulations. Indeed, despite all of our efforts to speed up the calculations, the discretization methods used in MINARET do not provide simulations
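The classical parareal scheme that the thesis starts from can be sketched on a scalar decay problem, with backward-Euler coarse and fine propagators. Only the fine solves, which dominate the cost, are independent across time slices and hence parallelizable; the problem and step counts here are illustrative, not MINARET's.

```python
import numpy as np

lam = 2.0                  # decay rate of the test problem dy/dt = -lam*y
T, nslices = 1.0, 10
dt = T / nslices

def coarse(y, dt):
    """Cheap propagator G: a single backward-Euler step."""
    return y / (1.0 + lam * dt)

def fine(y, dt, m=100):
    """Expensive propagator F: m small backward-Euler steps."""
    for _ in range(m):
        y = y / (1.0 + lam * dt / m)
    return y

y0 = 1.0
Y = np.empty(nslices + 1)
Y[0] = y0
for n in range(nslices):               # initial coarse prediction
    Y[n + 1] = coarse(Y[n], dt)

# parareal: Y_{n+1}^{k+1} = G(Y_n^{k+1}) + F(Y_n^k) - G(Y_n^k)
for k in range(5):
    F = np.array([fine(Y[n], dt) for n in range(nslices)])     # parallel in n
    G_old = np.array([coarse(Y[n], dt) for n in range(nslices)])
    for n in range(nslices):           # cheap sequential correction
        Y[n + 1] = coarse(Y[n], dt) + F[n] - G_old[n]

# serial fine reference for comparison
ref = y0
for n in range(nslices):
    ref = fine(ref, dt)
```

After k iterations the first k slices match the serial fine solution exactly, and for this contracting problem the remaining slices converge rapidly, so a handful of iterations reproduces the fine trajectory.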
Energy Technology Data Exchange (ETDEWEB)
Anon.
1986-10-15
In many modern tracking chambers, the sense wires, rather than being lined up uniformly, are grouped into clusters to facilitate the pattern recognition process. However, with higher energy machines providing collisions richer in secondary particles, event reconstruction becomes more complicated. A Caltech / Illinois / SLAC / Washington group developed an ingenious track finding and fitting approach for the Mark III detector used at the SPEAR electron-positron ring at SLAC (Stanford). This capitalizes on the detector's triggering, which uses programmable logic circuits operating in parallel, each 'knowing' the cell patterns for all tracks passing through a specific portion of the tracker (drift chamber)
Jackson, Patrick Ryan; Lageman, Jonathan D.
2013-01-01
Piscicide applications in riverine environments are complicated by the advection and dispersion of the piscicide by the flowing water. Proper deactivation of the fish toxin is required outside of the treatment reach to ensure that there is minimal collateral damage to fisheries downstream or in connecting and adjacent water bodies. In urban settings and highly managed waterways, further complications arise from the influence of industrial intakes and outfalls, stormwater outfalls, lock and dam operations, and general unsteady flow conditions. These complications affect the local hydrodynamics and ultimately the transport and fate of the piscicide. This report presents two techniques using Rhodamine WT dye for real-time tracking of a piscicide plume—or any passive contaminant—in rivers and waterways in natural and urban settings. Passive contaminants are those that are present in such low concentration that there is no effect (such as buoyancy) on the fluid dynamics of the receiving water body. These methods, when combined with data logging and archiving, allow for visualization and documentation of the application and deactivation process. Real-time tracking and documentation of rotenone applications in rivers and urban waterways was accomplished by encasing the rotenone plume in a plume of Rhodamine WT dye and using vessel-mounted submersible fluorometers together with acoustic Doppler current profilers (ADCP) and global positioning system (GPS) receivers to track the dye and map the water currents responsible for advection and dispersion. In this study, two methods were used to track rotenone plumes: (1) simultaneous injection of dye with rotenone and (2) delineation of the upstream and downstream boundaries of the treatment zone with dye. All data were logged and displayed on a shipboard laptop computer, so that survey personnel provided real-time feedback about the extent of the rotenone plume to rotenone application and deactivation personnel. Further
International Nuclear Information System (INIS)
Masahiro, Tatsumi; Akio, Yamamoto
2003-01-01
A production code SCOPE2 was developed based on the fine-grained parallel algorithm by the red/black iterative method targeting parallel computing environments such as a PC-cluster. It can perform a depletion calculation in a few hours using a PC-cluster with the model based on a 9-group nodal-SP3 transport method in 3-dimensional pin-by-pin geometry for in-core fuel management of commercial PWRs. The present algorithm guarantees the identical convergence process as that in serial execution, which is very important from the viewpoint of quality management. The fine-mesh geometry is constructed by hierarchical decomposition with introduction of intermediate management layer as a block that is a quarter piece of a fuel assembly in radial direction. A combination of a mesh division scheme forcing even meshes on each edge and a latency-hidden communication algorithm provided simplicity and efficiency to message passing to enhance parallel performance. Inter-processor communication and parallel I/O access were realized using the MPI functions. Parallel performance was measured for depletion calculations by the 9-group nodal-SP3 transport method in 3-dimensional pin-by-pin geometry with 340 x 340 x 26 meshes for full core geometry and 170 x 170 x 26 for quarter core geometry. A PC cluster that consists of 24 Pentium-4 processors connected by the Fast Ethernet was used for the performance measurement. Calculations in full core geometry gave better speedups compared to those in quarter core geometry because of larger granularity. Fine-mesh sweep and feedback calculation parts gave almost perfect scalability since granularity is large enough, while 1-group coarse-mesh diffusion acceleration gave only around 80%. The speedup and parallel efficiency for total computation time were 22.6 and 94%, respectively, for the calculation in full core geometry with 24 processors. (authors)
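The red/black idea behind such fine-grained parallelism can be illustrated on a 2-D Poisson problem: cells of one colour depend only on cells of the other colour, so each half-sweep is fully parallel while the iterates match any serial ordering of the same colouring, which is what makes the parallel convergence history reproducible. This sketch is generic, not SCOPE2's nodal-SP3 operator.

```python
import numpy as np

def red_black_gs(b, niter=500, h=1.0):
    """Red/black Gauss-Seidel for -lap(u) = b with zero Dirichlet
    boundaries on a uniform grid of spacing h. Each colour's update
    reads only the other colour, so it vectorizes (or parallelizes)
    without changing the iterates."""
    u = np.zeros_like(b)
    ii, jj = np.indices(b.shape)
    interior = ((ii > 0) & (ii < b.shape[0] - 1)
                & (jj > 0) & (jj < b.shape[1] - 1))
    for _ in range(niter):
        for parity in (0, 1):                      # red, then black
            mask = interior & ((ii + jj) % 2 == parity)
            nbr = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                   + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u[mask] = 0.25 * (h * h * b[mask] + nbr[mask])
    return u

b = np.zeros((17, 17))
b[8, 8] = 1.0                   # point source in the centre
u = red_black_gs(b)
```

The converged solution inherits the symmetry of the source and peaks at it, and the boundary stays exactly zero, which gives simple invariants to check.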
International Nuclear Information System (INIS)
Kravets, L I; Dmitriev, S N; Drachev, A I; Gilman, A B; Lazea, A; Dinescu, G
2007-01-01
A process of plasma polymerization of dimethylaniline and acrylic acid vapours on the surface of poly(ethylene terephthalate) track membranes has been investigated. The surface and hydrodynamic properties of the composite membranes produced in this case have been studied. It is shown that the water permeability of the obtained polymeric membranes can be controlled by changing the filtrate pH. Membranes with such properties can be used for controllable drug delivery and in sensor control
Directory of Open Access Journals (Sweden)
JONG WOON KIM
2014-04-01
In this paper, we introduce a modified scattering kernel approach to avoid the unnecessarily repeated calculations involved in the scattering source calculation, and use it with parallel computing to effectively reduce the computation time. Its computational efficiency was tested for three-dimensional fully coupled photon-electron transport problems using our computer program, which solves the multi-group discrete ordinates transport equation by the discontinuous finite element method with unstructured tetrahedral meshes for complicated geometries. The numerical tests show that the modified scattering kernel improves the speed by 17∼42 times in the elapsed time per iteration, not only in single-CPU calculations but also in parallel computing with several CPUs.
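The saving from folding repeated per-group work into one precomputed operator can be illustrated with a toy multigroup example. The matrix sizes and cross sections below are invented, and the paper's actual kernel is certainly more involved; the point is only that a naive group-by-group rebuild of the scattering source and a single precomputed matrix product give the same result:

```python
import numpy as np

rng = np.random.default_rng(0)
G, N = 8, 100                          # energy groups, spatial cells (toy sizes)
sigma_s = rng.random((G, G)) * 0.1     # hypothetical group-to-group scattering matrix
phi = rng.random((G, N))               # scalar flux per group and cell

def source_naive(phi):
    """Rebuild each group's scattering source term by term every call."""
    q = np.zeros_like(phi)
    for g in range(G):
        for gp in range(G):
            q[g] += sigma_s[g, gp] * phi[gp]
    return q

def source_precomputed(phi):
    """Apply the scattering kernel as one precomputed matrix product."""
    return sigma_s @ phi

q1 = source_naive(phi)
q2 = source_precomputed(phi)
```

Both routines produce the same source; the second avoids the repeated inner loop each iteration.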
International Nuclear Information System (INIS)
Peeters, A. G.; Camenen, Y.; Casson, F. J.; Hornsby, W. A.; Snodin, A. P.; Strintzi, D.; Angioni, C.
2009-01-01
The paper derives the gyro-kinetic equation in the comoving frame of a toroidally rotating plasma, including both the Coriolis drift effect [A. G. Peeters et al., Phys. Rev. Lett. 98, 265003 (2007)] as well as the centrifugal force. The relation with the laboratory frame is discussed. A low field side gyro-fluid model is derived from the gyro-kinetic equation and applied to the description of parallel momentum transport. The model includes the effects of the Coriolis and centrifugal force as well as the parallel dynamics. The latter physics effect allows for a consistent description of both the Coriolis drift effect as well as the E×B shear effect [R. R. Dominguez and G. M. Staebler, Phys. Fluids B 5, 3876 (1993)] on the momentum transport. Strong plasma rotation as well as parallel dynamics reduce the Coriolis (inward) pinch of momentum and can lead to a sign reversal generating an outward pinch velocity. Also, the E×B shear effect is, in a similar manner, reduced by the parallel dynamics and stronger rotation.
03. Disruption Management in Passenger Transportation - from Air to Tracks
Clausen, Jens
2007-01-01
Over the last 10 years there has been a tremendous growth in air transportation of passengers. Both airports and airspace are close to saturation with respect to capacity, leading to delays caused by disruptions. At the same time the amount of vehicular traffic around and in all larger cities of the world has shown a dramatic increase as well. Public transportation by e.g. rail has come into focus, and hence also the service level provided by suppliers and public transportatio...
Winston, Richard B.; Konikow, Leonard F.; Hornberger, George Z.
2018-02-16
In the traditional method of characteristics for groundwater solute-transport models, advective transport is represented by moving particles that track concentration. This approach can lead to global mass-balance problems because in models of aquifers having complex boundary conditions and heterogeneous properties, particles can originate in cells having different pore volumes and (or) be introduced (or removed) at cells representing fluid sources (or sinks) of varying strengths. Use of volume-weighted particles means that each particle tracks solute mass. In source or sink cells, the changes in particle weights will match the volume of water added or removed through external fluxes. This enables the new method to conserve mass in source or sink cells as well as globally. This approach also leads to potential efficiencies by allowing the number of particles per cell to vary spatially—using more particles where concentration gradients are high and fewer where gradients are low. The approach also eliminates the need for the model user to have to distinguish between “weak” and “strong” fluid source (or sink) cells. The new model determines whether solute mass added by fluid sources in a cell should be represented by (1) new particles having weights representing appropriate fractions of the volume of water added by the source, or (2) distributing the solute mass added over all particles already in the source cell. The first option is more appropriate for the condition of a strong source; the latter option is more appropriate for a weak source. At sinks, decisions whether or not to remove a particle are replaced by a reduction in particle weight in proportion to the volume of water removed. A number of test cases demonstrate that the new method works well and conserves mass. The method is incorporated into a new version of the U.S. Geological Survey’s MODFLOW–GWT solute-transport model.
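The weight-reduction rule at sinks described above can be sketched directly: instead of deciding whether to remove a particle, every particle in the sink cell gives up water volume and solute mass in proportion to the volume of water removed. This is a hypothetical minimal illustration, not the MODFLOW-GWT source:

```python
def apply_sink(weights, masses, v_removed, v_cell):
    """Reduce particle weights (water volume) and solute masses at a sink cell.

    weights   : water volume represented by each particle in the cell
    masses    : solute mass carried by each particle
    v_removed : volume of water leaving the cell through the sink
    v_cell    : total water volume represented by the particles in the cell

    Each particle loses the same fraction of its volume and mass, so the
    solute mass removed matches the concentration of the water withdrawn
    and mass is conserved both locally and globally.
    """
    frac = v_removed / v_cell
    new_w = [w * (1.0 - frac) for w in weights]
    new_m = [m * (1.0 - frac) for m in masses]
    return new_w, new_m

# Two particles; the sink removes 30% of the cell's water volume.
w, m = apply_sink([1.0, 2.0], [0.5, 1.0], v_removed=0.3, v_cell=1.0)
```

Here 30% of the total solute mass (0.45 of 1.5) leaves with the withdrawn water, and 1.05 remains.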
International Nuclear Information System (INIS)
Ojo, T.O.; Bonner, J.S.
2002-01-01
A study was conducted to demonstrate the dynamic behaviour of the turbulent mixing process in coastal environments for both advective and dispersive transport. The spatial variability of the coefficients that characterize the process was also examined. Every transport model should be calibrated to include specific information regarding geomorphology and climatic conditions. HF-radar equipment eliminates the need for recalibration and validation of transport models whose coefficients vary in space and time. The HF-radar has a grid resolution of 1000 m and provides real-time velocities by measuring surface currents. Dispersion coefficients can be derived from the velocity time series using the principle of autocorrelation functions (ACF). This concept was applied to two Gulf of Mexico bays in Texas, Corpus Christi and Matagorda. It was determined that the within-bay spatial variability of dispersion coefficients was many orders of magnitude higher than the between-bay variability. The proposed model effectively reduced model complexity. The results of a 3-D contaminant transport model were presented. It was successfully used in the simulation of a contaminant spill scenario in the two bays using spatially distributed time-dependent transport coefficients. 5 refs., 8 figs
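The ACF-based derivation of a dispersion coefficient mentioned above follows Taylor's classical relation D = var(u) × T_L, where T_L is the integral time scale of the velocity autocorrelation function. A toy sketch on a synthetic correlated velocity signal (all parameters invented, not the study's HF-radar data):

```python
import numpy as np

def dispersion_from_velocity(u, dt):
    """Estimate a dispersion coefficient from a velocity time series via
    D = var(u) * T_L, where T_L integrates the normalized velocity
    autocorrelation function up to its first zero crossing."""
    up = u - u.mean()                              # velocity fluctuations
    acf = np.correlate(up, up, mode='full')[len(up) - 1:]
    acf = acf / acf[0]                             # normalized ACF, R(0) = 1
    zero = np.argmax(acf < 0) if (acf < 0).any() else len(acf)
    T_L = acf[:zero].sum() * dt                    # integral time scale
    return up.var() * T_L

# Synthetic correlated "surface current" signal: an AR(1) process.
rng = np.random.default_rng(1)
u = np.zeros(5000)
for k in range(1, len(u)):
    u[k] = 0.9 * u[k - 1] + rng.normal()
D = dispersion_from_velocity(u, dt=1.0)
```

For a real deployment, u would be the measured current component at a radar grid cell and dt the sampling interval.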
Well-to-Wheels Water Consumption: Tracking the Virtual Flow of Water into Transportation
Lampert, D. J.; Elgowainy, A.; Hao, C.
2015-12-01
Water and energy resources are fundamental to life on Earth and essential for the production of consumer goods and services in the economy. Energy and water resources are heavily interdependent: energy production consumes water, while water treatment and distribution consume energy. One example of this so-called energy-water nexus is the consumption of water associated with the production of transportation fuels. The Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET) model is an analytical tool that can be used to compare the environmental impacts of different transportation fuels on a consistent basis. In this presentation, the expansion of GREET to perform life cycle water accounting, tracking the "virtual flow" of water into transportation and other energy sectors, will be discussed along with the associated implications. The results indicate that increased usage of alternative fuels may increase freshwater resource consumption. The increased water consumption must be weighed against the benefits of decreased greenhouse gas and fossil energy consumption. Our analysis highlights the importance of regionality, co-product allocation, and consistent system boundaries when comparing the water intensity of alternative transportation fuel production pathways such as ethanol, biodiesel, compressed natural gas, hydrogen, and electricity with conventional petroleum-based fuels such as diesel and gasoline.
Energy Technology Data Exchange (ETDEWEB)
Bian, Nicolas H.; Kontar, Eduard P. [School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Emslie, A. Gordon, E-mail: n.bian@physics.gla.ac.uk, E-mail: emslieg@wku.edu [Department of Physics and Astronomy, Western Kentucky University, Bowling Green, KY 42101 (United States)
2016-06-20
The transport of the energy contained in electrons, both thermal and suprathermal, in solar flares plays a key role in our understanding of many aspects of the flare phenomenon, from the spatial distribution of hard X-ray emission to global energetics. Motivated by recent RHESSI observations that point to the existence of a mechanism that confines electrons to the coronal parts of flare loops more effectively than Coulomb collisions, we here consider the impact of pitch-angle scattering off turbulent magnetic fluctuations on the parallel transport of electrons in flaring coronal loops. It is shown that the presence of such a scattering mechanism in addition to Coulomb collisional scattering can significantly reduce the parallel thermal and electrical conductivities relative to their collisional values. We provide illustrative expressions for the resulting thermoelectric coefficients that relate the thermal flux and electrical current density to the temperature gradient and the applied electric field. We then evaluate the effect of these modified transport coefficients on the flare coronal temperature that can be attained, on the post-impulsive-phase cooling of heated coronal plasma, and on the importance of the beam-neutralizing return current on both ambient heating and the energy loss rate of accelerated electrons. We also discuss the possible ways in which anomalous transport processes have an impact on the required overall energy associated with accelerated electrons in solar flares.
Energy Technology Data Exchange (ETDEWEB)
Kim, Young-Keun, E-mail: ykkim@handong.edu [Department of Mechanical and Control Engineering, Handong Global University, Pohang (Korea, Republic of); Kim, Kyung-Soo [Department of Mechanical Engineering, KAIST, Daejeon 305-701 (Korea, Republic of)
2014-10-15
Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.
Zhang, Ya-Jing; Zhang, Lian-Lian; Jiang, Cui; Gong, Wei-Jiang
2018-02-01
We theoretically investigate the electronic transport through a parallel-coupled multi-quantum-dot system, in which the terminal dots of a one-dimensional quantum-dot chain are embedded in the two arms of an Aharonov-Bohm interferometer. It is found that in structures with odd (even) numbers of dots, all the even (odd) molecular states have opportunities to decouple from the leads, and in this process antiresonances occur that coincide with the odd (even)-numbered eigenenergies of the sub-molecule without the terminal dots. Next, when Majorana zero modes are introduced to couple laterally to the terminal dots, the antiresonance and decoupling phenomena still co-exist in the quantum transport process. Such a result can be helpful in understanding the special influence of a Majorana zero mode on the electronic transport through quantum-dot systems.
Directory of Open Access Journals (Sweden)
James W. Tam
2012-02-01
Chronic total occlusion (CTO) angioplasty is one of the most challenging procedures remaining for the interventional operator. Recanalizing CTOs can improve exercise capacity, symptoms, left ventricular function and possibly reduce mortality. Multiple strategies such as escalating wire, parallel wire, seesaw, contralateral injection, subintimal tracking and re-entry (STAR), retrograde wire techniques (controlled antegrade retrograde subintimal tracking, CART, reverse CART, confluent balloon, rendezvous in coronary), and other techniques have all been described. Selection of the most appropriate approach is based on assessment of vessel course, length of occluded segment, presence of bridging collaterals, presence of bifurcating side branches at the occlusion site, and other variables. Today, with significant operator expertise and the use of available techniques, the literature reports a 50-95% success rate for recanalizing CTOs.
International Nuclear Information System (INIS)
Kim, Nam Seok
2010-01-01
This dissertation aims to evaluate environmental and economic performances of an intermodal freight transport system and to estimate the trade-off between CO2 emissions, which is presented as an indicator of environmental performance, and freight costs, which indicate the economic performance of the intermodal freight system. The truck-only system is always regarded as the counterpart of the intermodal freight system in this dissertation. To examine the environmental performance of the intermodal freight system, CO2 emissions generated from all the processes in the intermodal chain, such as pre-haulage and post-haulage, long distance haulage, and transshipment, are estimated considering different sources that generate electricity and transmission loss of electricity (Chapters 3 and 4). To examine the economic performance of the system, two approaches are considered: (1) finding the intermodal breakeven distance for which the intermodal system is more competitive than the truck-only system (Chapter 5); (2) examining the economies of scale in the intermodal network and finding the route/system choice that minimizes the total freight transportation costs (Chapter 6). Finally, this dissertation attempts to find the trade-off between CO2 emissions (representing the environmental performance) and freight transportation cost (representing the economic performance) (Chapter 7)
Energy Technology Data Exchange (ETDEWEB)
Konersmann, Rainer; Kuehl, Christiane; Wilk, Werner [Bundesanstalt fuer Materialforschung und -pruefung (BAM), Berlin (Germany). Arbeitsgruppe ' Risikomanagement'
2008-11-15
Pipelines are usually buried and serve long-distance transport. For this, special safety measures are required. For example, pipelines of the chemical industry have defined locations that must be easily identifiable. Pipelines must be adapted to infrastructural and topographic boundary conditions, both in consideration of environmental protection and because the pipeline may be damaged by external influences. Even minor releases of chemical substances may pollute water and soil and cause harm to humans. The contribution discusses the possible damage resulting from a pipeline leak. (orig.)
Wang, Lichun; Cardenas, M Bayani
2015-08-01
The quantitative study of transport through fractured media has continued for many decades, but has often been constrained by observational and computational challenges. Here, we developed an efficient quasi-3D random walk particle tracking (RWPT) algorithm to simulate solute transport through natural fractures based on a 2D flow field generated from the modified local cubic law (MLCL). As a reference, we also modeled the actual breakthrough curves (BTCs) through direct simulations with the 3D advection-diffusion equation (ADE) and Navier-Stokes equations. The RWPT algorithm along with the MLCL accurately reproduced the actual BTCs calculated with the 3D ADE. The BTCs exhibited non-Fickian behavior, including early arrival and long tails. Using the spatial information of particle trajectories, we further analyzed the dynamic dispersion process through moment analysis. From this, asymptotic time scales were determined for solute dispersion to distinguish non-Fickian from Fickian regimes. This analysis illustrates the advantage and benefit of using an efficient combination of flow modeling and RWPT.
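A random walk particle tracking step combines advection by the local velocity with a diffusive jump whose variance is 2 D dt. The following is a generic 2D sketch with invented parameters; the paper's quasi-3D algorithm on the MLCL flow field is far more detailed:

```python
import numpy as np

def rwpt_step(x, v, D, dt, rng):
    """One random walk particle tracking step: deterministic advection by
    the velocity v plus a Gaussian diffusive jump with variance 2*D*dt
    per coordinate (toy 2D version with uniform velocity and diffusion)."""
    return x + v * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=x.shape)

rng = np.random.default_rng(2)
x = np.zeros((10000, 2))          # all particles start at the origin
v = np.array([1.0, 0.0])          # uniform velocity along the fracture axis
for _ in range(100):
    x = rwpt_step(x, v, D=0.01, dt=0.01, rng=rng)
# After t = 1: mean displacement ~ v*t, transverse spread ~ sqrt(2*D*t)
```

Breakthrough curves are then obtained by recording when each particle crosses a control plane downstream.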
Accurate transport simulation of electron tracks in the energy range 1 keV-4 MeV
International Nuclear Information System (INIS)
Cobut, V.; Cirioni, L.; Patau, J.P.
2004-01-01
Multipurpose electron transport simulation codes are widely used in the fields of radiation protection and dosimetry. Broadly based on multiple scattering theories and continuous energy loss stopping powers, with some mechanism taking straggling into account, they give reliable answers to many problems. However, they may be unsuitable in some specific situations. In fact, many of them are not able to accurately describe particle transport through very thin slabs and/or in high atomic number materials, or when knowledge of high-resolution depth dose distributions is required. To circumvent these deficiencies, we developed a Monte Carlo code simulating each interaction along electron tracks. Gas phase elastic cross sections are corrected to take into account solid state effects. Inelastic interactions are described within the framework of the Martinez et al. [J. Appl. Phys. 67 (1990) 2955] theory intended to deal with energy deposition in both condensed insulators and conductors. The model described in this paper is validated for materials such as aluminium and silicon, encountered in spectrometric and dosimetric devices. Comparisons with experimental, theoretical and other simulation results are made for angular distributions and energy spectra of transmitted electrons through slabs of different thicknesses and for depth energy distributions in semi-infinite media. These comparisons are quite satisfactory.
A vector/parallel method for a three-dimensional transport model coupled with bio-chemical terms
B.P. Sommeijer (Ben); J. Kok (Jan)
1995-01-01
A so-called fractional step method is considered for the time integration of a three-dimensional transport-chemical model in shallow seas. In this method, the transport part and the chemical part are treated separately by appropriate integration techniques. This separation is motivated
Deformation data modeling through numerical models: an efficient method for tracking magma transport
Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.
2017-12-01
Nowadays, multivariate collected data and robust physical models at volcano observatories are becoming crucial for providing effective volcano monitoring. Nevertheless, the forecast of volcanic eruptions is notoriously difficult. Within this frame, one of the most promising methods to evaluate volcano hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic media features such as topography and crustal heterogeneities to be included, although it is still very time consuming to solve the inverse problem for near-real-time interpretations. Here, we present a method that can be efficiently used to estimate the location and evolution of magmatic sources based on real-time surface deformation data and Finite Element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted over an array of 3-D locations extending below a predefined volume region, and the Green functions for all the array components have to be precomputed. We propose an FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually will lead to a better description of the status of the volcanic area. The number of Green functions is reduced here to the number of observational points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a Genetic Algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool to magma tracking during the 2007 Kilauea volcano intrusion and eruption. We show how data inversion with numerical models can speed up the source parameter estimation for a given volcano showing signs of unrest.
Tracking of Short Distance Transport Pathways in Biological Tissues by Ultra-Small Nanoparticles
Segmehl, Jana S.; Lauria, Alessandro; Keplinger, Tobias; Berg, John K.; Burgert, Ingo
2018-03-01
In this work, ultra-small europium-doped HfO2 nanoparticles were infiltrated into native wood and used as trackers for studying penetrability and diffusion pathways in the hierarchical wood structure. The high electron density, laser induced luminescence, and crystallinity of these particles allowed for a complementary detection of the particles in the cellular tissue. Confocal Raman microscopy and high-resolution synchrotron scanning wide-angle X-ray scattering (WAXS) measurements were used to detect the infiltrated particles in the native wood cell walls. This approach allows for simultaneously obtaining chemical information of the probed biological tissue and the spatial distribution of the integrated particles. The in-depth information about particle distribution in the complex wood structure can be used for revealing transport pathways in plant tissues, but also for gaining better understanding of modification treatments of plant scaffolds aiming at novel functionalized materials.
A New Method for Tracking Individual Particles During Bed Load Transport in a Gravel-Bed River
Tremblay, M.; Marquis, G. A.; Roy, A. G.; Chaire de Recherche Du Canada En Dynamique Fluviale
2010-12-01
Many particle tracers (passive or active) have been developed to study gravel movement in rivers. It remains difficult, however, to document resting and moving periods and to know how particles travel from one deposition site to another. Our new tracking method uses the Hobo Pendant G Acceleration Data Logger to quantitatively describe the motion of individual particles from the initiation of movement, through the displacement and to the rest, in a natural gravel river. The Hobo measures the acceleration in three dimensions at a chosen temporal frequency. The Hobo was inserted into 11 artificial rocks. The rocks were seeded in Ruisseau Béard, a small gravel-bed river in the Yamaska drainage basin (Québec) where the hydraulics, particle sizes and bed characteristics are well known. The signals recorded during eight floods (Summer and Fall 2008-2009) allowed us to develop an algorithm which classifies the periods of rest and motion. We can differentiate two types of motion: sliding and rolling. The particles can also vibrate while remaining in the same position. The examination of the movement and vibration periods with respect to the hydraulic conditions (discharge, shear stress, stream power) showed that vibration occurred mostly before the rise of the hydrograph and allowed us to establish movement thresholds and response times. In all cases, particle movements occurred during floods but not always in direct response to increased bed shear stress and stream power. This method offers great potential to track individual particles, to establish a spatiotemporal sequence of the intermittent transport of a particle during a flood, and to test theories concerning the resting periods of particles on a gravel bed.
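A rest/vibration/motion classification from a 3-axis acceleration record can be sketched with simple variance thresholds on the acceleration magnitude. The window length and thresholds below are invented for illustration; the abstract does not give the authors' actual algorithm:

```python
import numpy as np

def classify_motion(acc, window, rest_thresh, vib_thresh):
    """Label fixed windows of a 3-axis acceleration record as 'rest',
    'vibration' or 'motion' from the variance of the acceleration
    magnitude (hypothetical thresholds, not the paper's algorithm)."""
    mag = np.linalg.norm(acc, axis=1)
    labels = []
    for s in range(0, len(mag) - window + 1, window):
        v = mag[s:s + window].var()
        labels.append('rest' if v < rest_thresh
                      else 'vibration' if v < vib_thresh else 'motion')
    return labels

# Synthetic record: a quiet period (gravity only plus sensor noise)
# followed by a strongly agitated period.
rng = np.random.default_rng(3)
quiet = rng.normal(0.0, 0.01, (100, 3)) + [0.0, 0.0, 9.81]
shaking = rng.normal(0.0, 2.0, (100, 3)) + [0.0, 0.0, 9.81]
acc = np.vstack([quiet, shaking])
labels = classify_motion(acc, window=100, rest_thresh=0.01, vib_thresh=1.0)
```

Distinguishing sliding from rolling, as the authors do, would additionally require looking at how the gravity vector rotates between axes.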
Changing storm track diffusivity and the upper limit to poleward latent heat transport
Caballero, R.
2010-12-01
Poleward atmospheric energy transport plays a key role in the climate system by helping set the mean equator-pole temperature gradient. The mechanisms controlling the response of poleward heat flux to climate change are still poorly understood. Recent work shows that midlatitude poleward latent heat flux in atmospheric GCMs generally increases as the climate warms but reaches an upper limit at sufficiently high temperature and decreases with further warming. The reasons for this non-monotonic behavior have remained unclear. Simple arguments suggest that the latent heat flux F_l should scale as F_l ~ v_ref q_s, where v_ref is a typical meridional velocity in the baroclinic zone and q_s is the saturation humidity. While v_ref decreases with temperature, q_s increases much more rapidly, so this scaling implies a monotonically increasing moisture flux. We study this problem using a series of simulations employing NCAR's CAM3 GCM coupled to a slab-ocean aquaplanet and spanning a wide range of atmospheric CO2 concentrations. We find that a modified scaling, F_l ~ v_ref^2 q_s, describes the changes in moisture flux much more accurately. Using Lagrangian trajectory analysis, we explain the success of this scaling in terms of changes in the mixing length, which contracts proportionally to v_ref.
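The competing factors in the modified scaling can be illustrated numerically: saturation humidity grows roughly exponentially with temperature (Clausius-Clapeyron), while the eddy velocity weakens with warming, so F_l ~ v_ref^2 q_s can rise and then fall. The linear v_ref(T) profile below is purely hypothetical; only the Clausius-Clapeyron constants are standard textbook values:

```python
import numpy as np

def qsat(T):
    """Saturation specific humidity at ~1000 hPa from a Clausius-Clapeyron
    approximation (standard constants; illustrative accuracy only)."""
    e0, T0, L, Rv = 611.0, 273.15, 2.5e6, 461.5   # Pa, K, J/kg, J/(kg K)
    es = e0 * np.exp(L / Rv * (1.0 / T0 - 1.0 / T))
    return 0.622 * es / 1.0e5

# Hypothetical weakening of the eddy velocity with warming.
T = np.linspace(260.0, 320.0, 61)                 # K
v_ref = 10.0 - 0.12 * (T - 260.0)                 # m/s, invented profile
F_l = v_ref**2 * qsat(T)                          # modified scaling
peak = T[np.argmax(F_l)]                          # temperature of maximum flux
```

With these (made-up) numbers the quadratic velocity dependence is strong enough to overcome the exponential humidity growth at high temperature, reproducing the non-monotonic behavior the abstract describes.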
DEFF Research Database (Denmark)
Høyerby, Mikkel Christian Wendelboe; Andersen, Michael Andreas E.
2005-01-01
This paper presents a high-performance power conversion scheme for power supply applications that require very high output voltage slew rates (dV/dt). The concept is to parallel two switching bandpass current sources, each optimized for its passband frequency space and the expected load current. ... The principle is demonstrated with a power supply designed for supplying a 40 W linear RF power amplifier for efficient amplification of a 16-QAM modulated data stream...
Parallel beam dynamics simulation of linear accelerators
International Nuclear Information System (INIS)
Qiang, Ji; Ryne, Robert D.
2002-01-01
In this paper we describe parallel particle-in-cell methods for the large scale simulation of beam dynamics in linear accelerators. These techniques have been implemented in the IMPACT (Integrated Map and Particle Accelerator Tracking) code. IMPACT is being used to study the behavior of intense charged particle beams and as a tool for the design of next-generation linear accelerators. As examples, we present applications of the code to the study of emittance exchange in high intensity beams and to the study of beam transport in a proposed accelerator for the development of accelerator-driven waste transmutation technologies
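The first stage of each particle-in-cell cycle, depositing particle charge onto the grid, can be sketched in 1D with the standard cloud-in-cell (linear) weighting. This generic illustration is not the IMPACT implementation:

```python
import numpy as np

def deposit_cic(positions, n_grid, q=1.0):
    """Cloud-in-cell charge deposition, the first step of a particle-in-cell
    cycle: each particle's charge is shared linearly between its two
    neighbouring grid points (toy 1D version on a unit periodic domain)."""
    rho = np.zeros(n_grid)
    dx = 1.0 / n_grid
    for x in positions:
        cell = int(x / dx) % n_grid
        frac = x / dx - int(x / dx)         # fractional position inside the cell
        rho[cell] += q * (1.0 - frac) / dx  # share with the left grid point
        rho[(cell + 1) % n_grid] += q * frac / dx
    return rho

# Three unit charges on a 10-cell periodic grid.
rho = deposit_cic(np.array([0.25, 0.5, 0.76]), n_grid=10)
```

The field solve and particle push that complete the PIC cycle then use this charge density; linear weighting guarantees the deposited charge integrates to exactly the total particle charge.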
Tracking Organic Carbon Transport From the Stordalen Mire to Glacial Lake Tornetrask, Abisko, Sweden
Beck, M. A.; Hamilton, B. T.; Spry, E.; Johnson, J. E.; Palace, M. W.; McCalley, C. K.; Varner, R. K.; Bothner, W. A.
2016-12-01
In subarctic regions, labile organic carbon from thawing permafrost and productivity of terrestrial and aquatic vegetation are sources of carbon to lake sediments. Methane is produced in lake sediments from the decomposition of organic carbon at rates affected by vegetation presence and type as well as sediment temperature. Recent research in the Stordalen Mire in northern Sweden has suggested that labile organic carbon sources in young, shallow lake sediments yield the highest in situ sediment methane concentrations. Ebullition (or bubbling) of this methane is predominantly controlled by seasonal warming. In this project we sampled stream, glacial and post-glacial lake sediments along a drainage transect through the Stordalen Mire into the large glacial Lake Torneträsk. Our results indicate that the highest methane and total organic carbon (TOC) concentrations were observed in lake and stream sediments in the upper 25 centimeters, consistent with previous studies. C/N ratios range from 8 to 32, and suggest that a mix of aquatic and terrestrial vegetation sources dominate the sedimentary record. Although water transport occurs throughout the mire, major depositional centers for sediments and organic carbon occur within the lakes and prohibit young, labile TOC from entering the larger glacial Lake Torneträsk. The lack of an observed sediment fan at the outlet of the Mire to the lake is consistent with this observation. Our results suggest that carbon produced in the mire stays in the mire, allowing methane production to be greater in the mire bound lakes and streams than in the larger adjacent glacial lake.
International Nuclear Information System (INIS)
Odry, Nans
2016-01-01
Deterministic calculation schemes are devised to numerically solve the neutron transport equation in nuclear reactors. Dealing with core-sized problems is very challenging for computers, so much so that dedicated core calculations have had no choice but to rely on simplifying assumptions (assembly-scale then core-scale steps...). The PhD work aims at overcoming some of these approximations: thanks to important changes in computer architecture and capacities (HPC), nowadays one can solve 3D core-sized problems using both high mesh refinement and the transport operator. It is an essential step forward in order to perform, in the future, reference calculations using deterministic schemes. This work focuses on a spatial domain decomposition method (DDM). Using massive parallelism, DDM allows much more ambitious computations in terms of both memory requirements and calculation time. Developments were performed inside the Sn core solver Minaret, from the new CEA neutronics platform APOLLO3. Only fast reactors (hexagonal periodicity) are considered, even if all kinds of geometries can be dealt with using Minaret. The work has been divided into four steps: 1) The spatial domain decomposition with no overlap is inserted into the standard algorithmic structure of Minaret. The fundamental idea involves splitting a core-sized problem into smaller, independent, spatial sub-problems. Angular flux is exchanged between adjacent sub-domains. In doing so, all combined sub-problems converge to the global solution at the outcome of an iterative process. Various strategies were explored regarding both data management and algorithm design. Results (k_eff and flux) are systematically compared to the reference in a numerical verification step. 2) Introducing more parallelism is an unprecedented opportunity to heighten the performances of deterministic schemes. Domain decomposition is particularly suited to this. A two-layer hybrid parallelism strategy, suited to HPC, is chosen. It benefits from the
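The interface-exchange idea behind the non-overlapping domain decomposition can be sketched on a 1D model problem. This toy analogue (not the Minaret Sn solver, and a scalar diffusion problem rather than angular-flux transport) relaxes each subdomain independently and lets neighbouring subdomains see each other's current interface values at every outer iteration:

```python
import numpy as np

def dd_solve(n_sub, n_local, n_outer, n_inner):
    """Spatial domain decomposition for -u'' = 1 on (0,1), u(0) = u(1) = 0.

    Each subdomain is relaxed with its neighbours' current interface
    values acting as boundary data; repeating the outer loop exchanges
    that information until all sub-problems converge to the global
    solution, mirroring the iterative flux exchange in the thesis."""
    n = n_sub * n_local + 1
    h = 1.0 / (n - 1)
    u = np.zeros(n)
    for _ in range(n_outer):
        for s in range(n_sub):                     # independent local solves
            lo, hi = s * n_local, (s + 1) * n_local
            for _ in range(n_inner):               # local Gauss-Seidel sweeps
                for i in range(lo + 1, min(hi + 1, n - 1)):
                    u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h)
    return u

u = dd_solve(n_sub=4, n_local=10, n_outer=200, n_inner=50)
x = np.linspace(0.0, 1.0, len(u))
exact = 0.5 * x * (1.0 - x)                        # analytic solution
```

In the real scheme each local solve runs on its own processor; here the loop over subdomains simply plays that role serially.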
Kuang, Simeng Max
This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated from a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. It is proved that the preconditioning procedure minimizes the remaining transportation cost.
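In one dimension the sample-based L2 optimal transport problem has a closed-form solution: sorting both samples and pairing by rank gives the optimal assignment, a classical fact about convex costs on the line. This special case (not the thesis's feature-based algorithms) makes the objects concrete:

```python
import numpy as np

def ot_map_1d(x, y):
    """Sample-based L2 optimal transport map in 1D.

    Sorting both samples and sending the k-th smallest x to the k-th
    smallest y is the optimal pairing for the squared-distance cost."""
    ix, iy = np.argsort(x), np.argsort(y)
    T = np.empty_like(x)
    T[ix] = y[iy]
    return T

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 2000)   # source sample
y = rng.normal(5.0, 2.0, 2000)   # target sample
T = ot_map_1d(x, y)              # image of each source point under the map
cost = np.mean((T - x) ** 2)     # transport cost of the optimal pairing
```

Because the optimal pairing minimizes over all one-to-one assignments of the samples, its cost can never exceed that of the naive index-by-index pairing.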
The parallel adult education system
DEFF Research Database (Denmark)
Wahlgren, Bjarne
2015-01-01
for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...
International Nuclear Information System (INIS)
Moriakov, A.; Vasyukhno, V.; Netecha, M.; Khacheresov, G.
2003-01-01
Powerful supercomputers are available today. MBC-1000M is one of the Russian supercomputers and may be accessed remotely. The programs LUCKY and LUCKY C were created for multiprocessor systems. Their algorithms were designed especially for these computers and use the MPI (Message Passing Interface) service for exchanges between processors. LUCKY solves shielding tasks by the multigroup discrete ordinates method; LUCKY C solves criticality tasks by the same method. Only XYZ orthogonal geometry is available, but with sufficiently small spatial steps in the approximation of the discrete operator this geometry can serve as a universal one for describing complex geometrical structures. Cross-section libraries in GIT format are used, with nuclear data up to a P8 approximation by Legendre polynomials. The programming language is Fortran-90. 'Vector' processors could give a speedup of up to 30 times, but unfortunately the MBC-1000M does not have them. Nevertheless, good parallel efficiency was obtained with 'space' (LUCKY) and 'space and energy' (LUCKY C) parallelization. The AUTOCAD program is used to check the geometry after processing of the input data. The programs have a powerful geometry module, a fine tool for building any geometry. Output results may be processed by graphics programs on a personal computer. (authors)
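The 'space' parallelization described above splits the spatial mesh among processors, with MPI exchanges limited to interface data. As a rough illustration only (a Python sketch with invented names, not the LUCKY code), a 1-D attenuation sweep can be decomposed into contiguous blocks, one per hypothetical rank, where each rank passes its exit flux to the next:

```python
# Sketch of "space" domain decomposition for a 1-D sweep: the mesh is split
# into contiguous blocks (one per hypothetical MPI rank); each rank attenuates
# the flux entering its block and hands the exit flux to the next rank, so the
# only inter-rank exchange is the interface flux. Names are illustrative.
import math

def sweep_block(psi_in, sigma_t, dx, n_cells):
    """Attenuate an incoming angular flux across one spatial block."""
    psi = psi_in
    absorbed = 0.0
    for _ in range(n_cells):
        psi_next = psi * math.exp(-sigma_t * dx)
        absorbed += psi - psi_next
        psi = psi_next
    return psi, absorbed

def parallel_sweep(psi0, sigma_t, dx, cells_per_rank):
    """Emulate the rank-by-rank sweep; real code would exchange via MPI."""
    psi, total_absorbed = psi0, 0.0
    for n_cells in cells_per_rank:          # one entry per "rank"
        psi, absorbed = sweep_block(psi, sigma_t, dx, n_cells)
        total_absorbed += absorbed
    return psi, total_absorbed

# Decomposing 40 cells over 4 ranks gives the same answer as one rank:
psi_par, _ = parallel_sweep(1.0, 0.5, 0.1, [10, 10, 10, 10])
psi_seq, _ = parallel_sweep(1.0, 0.5, 0.1, [40])
```

The point of the decomposition is that each block's work is independent once its inlet flux is known, which is what makes the spatial parallelization efficient.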
International Nuclear Information System (INIS)
Badal, Andreu; Badano, Aldo
2009-01-01
Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
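The reason photon transport maps so well onto GPUs is that each photon history is statistically independent, so one thread can be assigned per photon. The following Python sketch (illustrative only; it reduces the physics to straight-line exponential attenuation, nothing like the PENELOPE models) shows the embarrassingly parallel structure:

```python
# Minimal Monte Carlo sketch of why photon transport suits GPUs: every
# history below is independent, so a GPU could assign one thread per photon.
# Here we simply loop; physics is reduced to straight-line attenuation.
import math, random

def transmitted_fraction(mu, thickness, n_photons, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_photons):
        # Sample a free path from the exponential distribution (u in (0, 1]).
        path = -math.log(1.0 - rng.random()) / mu
        if path > thickness:
            hits += 1                     # photon crosses the slab
    return hits / n_photons

frac = transmitted_fraction(mu=2.0, thickness=0.5, n_photons=50000)
# Beer-Lambert predicts exp(-mu * thickness) = exp(-1) for this slab
```

On a GPU the loop body would become a kernel; the absence of inter-history communication is what produces the reported speedups.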
Energy Technology Data Exchange (ETDEWEB)
Badal, Andreu; Badano, Aldo [Division of Imaging and Applied Mathematics, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, Maryland 20993-0002 (United States)
2009-11-15
Directory of Open Access Journals (Sweden)
Sang Soon Hwang
2008-03-01
Full Text Available Modeling and simulation of heat and mass transport in micro channels are used extensively in research and industrial applications to gain a better understanding of the fundamental processes and to optimize fuel cell designs before building a prototype for engineering application. In this study, we used a single-phase, fully three-dimensional simulation model for a PEMFC that can deal with both anode and cathode flow fields for examining the micro flow channel with electrochemical reaction. The results show that hydrogen and oxygen were supplied to the membrane solely by the diffusion mechanism rather than by convective transport, and the higher pressure drop at the cathode side is thought to be caused by the higher flow rate of oxygen at the cathode. It was found that the amount of water in the cathode channel was determined by water formation due to the electrochemical reaction plus the electro-osmotic mass flux directed toward the cathode side. It is very important to model the back diffusion and electro-osmotic mass flux accurately, since the two fluxes are closely correlated with each other and greatly influence the ionic conductivity of the membrane, which directly affects the performance of the fuel cell.
Lee, Pil Hyong; Han, Sang Seok; Hwang, Sang Soon
2008-03-03
International Nuclear Information System (INIS)
Fevotte, F.; Lathuiliere, B.
2013-01-01
The large increase in computing power over the past few years now makes it possible to consider developing 3D full-core heterogeneous deterministic neutron transport solvers for reference calculations. Among all approaches presented in the literature, the method first introduced in [1] seems very promising. It consists in iterating over resolutions of 2D and 1D MOC problems by taking advantage of prismatic geometries without introducing approximations of a low order operator such as diffusion. However, before developing a solver with all industrial options at EDF, several points needed to be clarified. In this work, we first prove the convergence of this iterative process, under some assumptions. We then present our high-performance, parallel implementation of this algorithm in the MICADO solver. Benchmarking the solver against the Takeda case shows that the 2D-1D coupling algorithm does not seem to affect the spatial convergence order of the MOC solver. As for performance issues, our study shows that even though the data distribution is suited to the 2D solver part, the efficiency of the 1D part is sufficient to ensure a good parallel efficiency of the global algorithm. After this study, the main remaining difficulty implementation-wise is about the memory requirement of a vector used for initialization. An efficient acceleration operator will also need to be developed. (authors)
Wheeler, K. I.; Levia, D. F., Jr.; Hudson, J. E.
2017-12-01
As trees undergo autumnal processes such as resorption, senescence, and leaf abscission, the dissolved organic matter (DOM) contribution of leaf litter leachate to streams changes. However, little research has investigated how the fluorescent DOM (FDOM) changes throughout the autumn and how this differs inter- and intraspecifically. Two of the major impacts of global climate change on forested ecosystems include altering phenology and causing forest community species and subspecies composition restructuring. We examined changes in FDOM in leachate from American beech (Fagus grandifolia Ehrh.) leaves in Maryland, Rhode Island, Vermont, and North Carolina and yellow poplar (Liriodendron tulipifera L.) leaves from Maryland throughout three different phenophases: green, senescing, and freshly abscissed. Beech leaves from Maryland and Rhode Island have previously been identified as belonging to the same distinct genetic cluster and beech trees from Vermont and the study site in North Carolina from the other. FDOM in samples was characterized using excitation-emission matrices (EEMs) and a six-component parallel factor analysis (PARAFAC) model was created to identify components. Self-organizing maps (SOMs) were used to visualize variation and patterns in the PARAFAC component proportions of the leachate samples. Phenophase and species had the greatest influence on determining where a sample mapped on the SOM when compared to genetic clusters and geographic origin. Throughout senescence, FDOM from all the trees transitioned from more protein-like components to more humic-like ones. Percent greenness of the sampled leaves and the proportion of the tyrosine-like component 1 were found to significantly differ between the two genetic beech clusters. This suggests possible differences in photosynthesis and resorption between the two genetic clusters of beech. The use of SOMs to visualize differences in patterns of senescence between the different species and genetic
International Nuclear Information System (INIS)
Li, S.H.; Chen, C.T.
1997-01-01
Analytical solutions are developed for the problem of radionuclide transport in a system of parallel fractures situated in a porous rock matrix. A kinetic solubility-limited dissolution model is used as the inlet boundary condition. The solutions consider the following processes: (a) advective transport in the fractures, (b) mechanical dispersion and molecular diffusion along the fractures, (c) molecular diffusion from a fracture to the porous matrix, (d) molecular diffusion within the porous matrix in the direction perpendicular to the fracture axis, (e) adsorption onto the fracture wall, (f) adsorption within the porous matrix, and (g) radioactive decay. The solutions are based on the Laplace transform method. The general transient solution is in the form of a double integral that is evaluated using composite Gauss-Legendre quadrature. A simpler transient solution that is in the form of a single integral is also presented for the case that assumes negligible longitudinal dispersion along the fractures. The steady-state solutions are also provided. A number of examples are given to illustrate the effects of the following important parameters: (a) fracture spacings, (b) dissolution-rate constants, (c) fracture dispersion coefficient, (d) matrix retardation factor, and (e) fracture retardation factor
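The double-integral solutions above are evaluated with composite Gauss-Legendre quadrature. As a generic sketch of that technique (a 3-point rule applied panel by panel, in Python; the integrand is a placeholder, not the authors' transport solution):

```python
# Composite Gauss-Legendre quadrature: apply a fixed low-order rule on each
# subinterval ("panel") and sum. The 3-point rule is exact for polynomials
# up to degree 5 on each panel.
import math

# 3-point Gauss-Legendre nodes and weights on the reference interval [-1, 1]
NODES = (-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0))
WEIGHTS = (5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0)

def composite_gauss_legendre(f, a, b, n_panels):
    total = 0.0
    h = (b - a) / n_panels
    for i in range(n_panels):
        mid = a + (i + 0.5) * h            # centre of panel i
        half = 0.5 * h                     # half-width of panel i
        for x, w in zip(NODES, WEIGHTS):
            total += w * f(mid + half * x) * half
    return total

# Check against a known value: integral of exp(-x) over [0, 2] is 1 - exp(-2)
approx = composite_gauss_legendre(lambda x: math.exp(-x), 0.0, 2.0, 8)
```

For smooth integrands like the Laplace-inversion kernels, few panels suffice; for oscillatory ones the panel count must grow.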
Crockett, Thomas W.
1995-01-01
This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
Fraser, Graham M.; Goldman, Daniel; Ellis, Christopher G.
2013-01-01
Objective We compare Reconstructed Microvascular Networks (RMN) to Parallel Capillary Arrays (PCA) under several simulated physiological conditions to determine how the use of different vascular geometry affects oxygen transport solutions. Methods Three discrete networks were reconstructed from intravital video microscopy of rat skeletal muscle (84×168×342 μm, 70×157×268 μm and 65×240×571 μm) and hemodynamic measurements were made in individual capillaries. PCAs were created based on statistical measurements from RMNs. Blood flow and O2 transport models were applied and the resulting solutions for RMN and PCA models were compared under 4 conditions (rest, exercise, ischemia and hypoxia). Results Predicted tissue PO2 was consistently lower in all RMN simulations compared to the paired PCA. PO2 for 3D reconstructions at rest were 28.2±4.8, 28.1±3.5, and 33.0±4.5 mmHg for networks I, II, and III compared to the PCA mean values of 31.2±4.5, 30.6±3.4, and 33.8±4.6 mmHg. Simulated exercise yielded mean tissue PO2 in the RMN of 10.1±5.4, 12.6±5.7, and 19.7±5.7 mmHg compared to 15.3±7.3, 18.8±5.3, and 21.7±6.0 in PCA. Conclusions These findings suggest that volume-matched PCA yield different results compared to reconstructed microvascular geometries when applied to O2 transport modeling; the predominant characteristic of this difference being an overestimate of mean tissue PO2. Despite this limitation, PCA models remain important for theoretical studies as they produce PO2 distributions with similar shape and parameter dependence as RMN. PMID:23841679
1982-01-01
Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn
Coupled electron-photon radiation transport
International Nuclear Information System (INIS)
Lorence, L.; Kensek, R.P.; Valdez, G.D.; Drumm, C.R.; Fan, W.C.; Powell, J.L.
2000-01-01
Massively-parallel computers allow detailed 3D radiation transport simulations to be performed to analyze the response of complex systems to radiation. This has recently been demonstrated with the coupled electron-photon Monte Carlo code, ITS. To enable such calculations, the combinatorial geometry capability of ITS was improved. For greater geometrical flexibility, a version of ITS is under development that can track particles in CAD geometries. Deterministic radiation transport codes that utilize an unstructured spatial mesh are also being devised. For electron transport, the authors are investigating second-order forms of the transport equations which, when discretized, yield symmetric positive definite matrices. A novel parallelization strategy, simultaneously solving for spatial and angular unknowns, has been applied to the even- and odd-parity forms of the transport equation on a 2D unstructured spatial mesh. Another second-order form, the self-adjoint angular flux transport equation, also shows promise for electron transport
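The appeal of symmetric positive definite (SPD) discretizations is that they admit Krylov solvers with guaranteed convergence, such as conjugate gradients. As a plain sketch of that solver (in Python, on a tiny invented SPD matrix, not a transport discretization):

```python
# Plain conjugate-gradient iteration for an SPD system A x = b. For an
# n-by-n SPD matrix, CG converges in at most n iterations in exact
# arithmetic; the test matrix here is purely illustrative.
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                              # residual b - A x with x = 0
    p = r[:]                              # initial search direction
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol * tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]              # small SPD test matrix
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

Symmetry is what makes the short-recurrence iteration valid; positive definiteness guarantees the step length `alpha` is well defined.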
Energy Technology Data Exchange (ETDEWEB)
Ozdemir, Ozkan Emre, E-mail: ozdemir@psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA 16802 (United States); Avramova, Maria N., E-mail: mna109@psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA 16802 (United States); Sato, Kenya, E-mail: kenya_sato@mhi.co.jp [Mitsubishi Heavy Industries (MHI), Kobe (Japan)
2014-10-15
Highlights: ► Implementation of multidimensional boron transport model in a subchannel approach. ► Studies on cross flow mechanism, heat transfer and lateral pressure drop effects. ► Verification of the implemented model via code-to-code comparison with CFD code. - Abstract: The risk of reflux condensation especially during a Small Break Loss Of Coolant Accident (SB-LOCA) and the complications of tracking the boron concentration experimentally inside the primary coolant system have stimulated and subsequently have been a focus of many computational studies on boron tracking simulations in nuclear reactors. This paper presents the development and implementation of a multidimensional boron transport model with Modified Godunov Scheme within a thermal-hydraulic code based on a subchannel approach. The cross flow mechanism in multiple-subchannel rod bundle geometry as well as the heat transfer and lateral pressure drop effects are considered in the performed studies on simulations of deboration and boration cases. The Pennsylvania State University (PSU) version of the COBRA-TF (CTF) code was chosen for the implementation of three different boron tracking models: First Order Accurate Upwind Difference Scheme, Second Order Accurate Godunov Scheme, and Modified Godunov Scheme. Based on the performed nodalization sensitivity studies, the Modified Godunov Scheme approach with a physical diffusion term was determined to provide the best solution in terms of precision and accuracy. As a part of the verification and validation activities, a code-to-code comparison was carried out with the STAR-CD computational fluid dynamics (CFD) code and presented here. The objective of this study was two-fold: (1) to verify the accuracy of the newly developed CTF boron tracking model against CFD calculations; and (2) to investigate its numerical advantages as compared to other thermal-hydraulics codes.
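The schemes compared above are all finite-volume advection updates. As a generic illustration of the simplest of them, a first-order upwind step for 1-D boron transport c_t + u c_x = 0 can be sketched as follows (Python, periodic toy grid; this is textbook upwinding, not the CTF implementation):

```python
# Explicit first-order upwind update for c_t + u c_x = 0 with u > 0 on a
# periodic grid. Conservative and positivity-preserving under CFL <= 1,
# but numerically diffusive, which is why Godunov-type schemes are preferred.
def upwind_step(c, u, dx, dt):
    nu = u * dt / dx                       # Courant number
    assert 0.0 < nu <= 1.0, "CFL condition violated"
    n = len(c)
    return [c[i] - nu * (c[i] - c[i - 1]) for i in range(n)]

# Advect a boron concentration pulse; total mass should be conserved and the
# centre of mass should move at exactly the flow speed.
c = [0.0] * 20
c[5] = 1.0
mass0 = sum(c)
for _ in range(10):
    c = upwind_step(c, u=1.0, dx=1.0, dt=0.5)
```

The Modified Godunov Scheme in the abstract improves on this by limiting the numerical diffusion while keeping the same conservative structure.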
Directory of Open Access Journals (Sweden)
Hyun Cheol Roh
2013-05-01
Full Text Available Zinc is an essential metal involved in a wide range of biological processes, and aberrant zinc metabolism is implicated in human diseases. The gastrointestinal tract of animals is a critical site of zinc metabolism that is responsible for dietary zinc uptake and distribution to the body. However, the role of the gastrointestinal tract in zinc excretion remains unclear. Zinc transporters are key regulators of zinc metabolism that mediate the movement of zinc ions across membranes. Here, we identified a comprehensive list of 14 predicted Cation Diffusion Facilitator (CDF) family zinc transporters in Caenorhabditis elegans and demonstrated that zinc is excreted from intestinal cells by one of these CDF proteins, TTM-1B. The ttm-1 locus encodes two transcripts, ttm-1a and ttm-1b, that use different transcription start sites. ttm-1b expression was induced by high levels of zinc specifically in intestinal cells, whereas ttm-1a was not induced by zinc. TTM-1B was localized to the apical plasma membrane of intestinal cells, and analyses of loss-of-function mutant animals indicated that TTM-1B promotes zinc excretion into the intestinal lumen. Zinc excretion mediated by TTM-1B contributes to zinc detoxification. These observations indicate that ttm-1 is a component of a negative feedback circuit, since high levels of cytoplasmic zinc increase ttm-1b transcript levels and TTM-1B protein functions to reduce the level of cytoplasmic zinc. We showed that TTM-1 isoforms function in tandem with CDF-2, which is also induced by high levels of cytoplasmic zinc and reduces cytoplasmic zinc levels by sequestering zinc in lysosome-related organelles. These findings define a parallel negative feedback circuit that promotes zinc homeostasis and advance the understanding of the physiological roles of the gastrointestinal tract in zinc metabolism in animals.
Casanova, Henri; Robert, Yves
2008-01-01
"…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
Parallel Monte Carlo reactor neutronics
International Nuclear Information System (INIS)
Blomquist, R.N.; Brown, F.B.
1994-01-01
The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
Directory of Open Access Journals (Sweden)
En-jin Zhao
2018-01-01
Full Text Available In view of the severity of oceanic pollution, based on the finite volume coastal ocean model (FVCOM, a Lagrangian particle-tracking model was used to numerically investigate the coastal pollution transport and water exchange capability in Tangdao Bay, in China. The severe pollution in the bay was numerically simulated by releasing and tracking particles inside it. The simulation results demonstrate that the water exchange capability in the bay is very low. Once the bay has suffered pollution, a long period will be required before the environment can purify itself. In order to eliminate or at least reduce the pollution level, environmental improvement measures have been proposed to enhance the seawater exchange capability and speed up the water purification inside the bay. The study findings presented in this paper are believed to be instructive and useful for future environmental policy makers and it is also anticipated that the numerical model in this paper can serve as an effective technological tool to study many emerging coastal environment problems. Keywords: Particle-tracking, Water exchange capability, Lagrangian system, Coastal pollution, Tangdao bay, FVCOM
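The particle-tracking approach above reduces to seeding particles in the bay, advecting each one through the flow field, and counting how many leave. A minimal Python sketch of that idea (uniform toy velocity field and invented parameters, nothing like the actual FVCOM flow):

```python
# Generic Lagrangian particle tracking: seed particles in a "bay", advect
# them with a prescribed velocity field by explicit Euler steps, and report
# the fraction flushed past the bay mouth as a crude exchange metric.
def track_particles(particles, velocity, dt, n_steps, bay_length):
    flushed = 0
    remaining = []
    for x in particles:
        for _ in range(n_steps):
            x += velocity(x) * dt         # explicit Euler step
        if x > bay_length:
            flushed += 1                  # particle left the bay
        else:
            remaining.append(x)
    return flushed / len(particles), remaining

# Uniform 0.1 m/s outflow along a 100 m bay; particles seeded every 10 m.
seeds = [float(x) for x in range(0, 100, 10)]
frac, left = track_particles(seeds, lambda x: 0.1, dt=10.0,
                             n_steps=60, bay_length=100.0)
```

In a real model the velocity would come from the hydrodynamic solver at each particle position and time, and a random-walk term would represent turbulent diffusion.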
Fast parallel event reconstruction
CERN. Geneva
2010-01-01
On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...
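The key to the SIMD speedup above is laying tracks out as structure-of-arrays so one scalar Kalman update applies to every track in lockstep, exactly what a vector unit does in one instruction. A toy Python sketch of that batched update (a scalar filter estimating a constant track parameter from noisy hits; all names and numbers are illustrative, not the actual fit):

```python
# Batched scalar Kalman measurement update, applied element-wise to a whole
# batch of tracks at once (structure-of-arrays layout). In vectorized C++
# each list comprehension below would be a single SIMD operation.
def kf_update_batch(x, P, z, R):
    """One measurement update for all tracks: states x, variances P,
    measurements z, measurement variance R."""
    K = [p / (p + R) for p in P]                         # Kalman gains
    x = [xi + k * (zi - xi) for xi, k, zi in zip(x, K, z)]
    P = [(1.0 - k) * p for k, p in zip(K, P)]
    return x, P

# Two tracks with true parameters 1.0 and 2.0, three "hits" each:
x, P = [0.0, 0.0], [100.0, 100.0]                        # vague priors
for z in ([1.0, 2.0], [1.0, 2.0], [1.0, 2.0]):
    x, P = kf_update_batch(x, P, z, R=0.1)
```

Because every track runs the identical instruction sequence, there is no branching, which is what makes the fit vectorize so well.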
A sequential/parallel track selector
Bertolino, F; Bressani, Tullio; Chiavassa, E; Costa, S; Dellacasa, G; Gallio, M; Musso, A
1980-01-01
A medium speed (~1 μs) hardware pre-analyzer for the selection of events detected in four planes of drift chambers in the magnetic field of the Omicron Spectrometer at the CERN SC is described. Specific geometrical criteria determine patterns of hits in the four planes of vertical wires that have to be recognized and that are stored as patterns of '1's in random access memories. Pairs of good hits are found sequentially, then the RAMs are used as look-up tables. (6 refs).
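The look-up-table idea above can be sketched in a few lines: pre-store every geometrically valid hit pattern as a key, then accept an event with a single table access, just as the RAMs do in hardware. (Python sketch; the pattern encoding and the "straight track" validity rule are invented for illustration.)

```python
# Pattern look-up table in the spirit of the RAM-based selector: each valid
# four-plane hit pattern is pre-stored; event selection is one O(1) lookup.
def make_pattern(hits):
    """Pack four plane hit positions (each 0-15) into one 16-bit key."""
    key = 0
    for pos in hits:
        key = (key << 4) | pos
    return key

# Pre-load the "RAM" with toy straight-track patterns: the same wire number
# fires in all four planes (a real table would encode sloped tracks too).
VALID = {make_pattern([w, w, w, w]) for w in range(16)}

def accept(hits):
    return make_pattern(hits) in VALID    # one look-up per event
```

In hardware the set membership test is simply a RAM read at the packed address, which is why the decision takes about a microsecond.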
Energy Technology Data Exchange (ETDEWEB)
Beucher, J
2007-10-15
PIM (Parallel Ionization Multiplier) is a multi-stage micro-pattern gaseous detector using micro-mesh technology. This new device, based on the operating principle of the Micromegas (micro-mesh gaseous structure) detector, offers good characteristics for tracking minimum ionizing particles. However, detectors of this kind placed in a hadron environment suffer discharges which appreciably degrade the detection efficiency and pose a hazard to the front-end electronics. In order to minimize these events, it is convenient to perform the charge multiplication in several successive steps. Within the framework of a European hadron physics project we have investigated the multi-stage PIM detector for high hadron-flux applications. For this part of the research and development, a systematic study of many geometrical configurations of two amplification stages separated by a transfer space, operated with the gas mixture Ne + 10% CO₂, has been performed. Beam tests performed with high-energy hadrons at a CERN facility showed that the discharge probability can be strongly reduced with a suitable PIM device. A discharge rate lower than 10⁻⁹ per incident hadron and a spatial resolution of 51 μm were measured at the operating point at the start of the efficiency plateau (>96%). (author)
Matter Tracking Information System -
Department of Transportation — The Matter Tracking Information System's (MTIS) principal function is to streamline and integrate the workload and work activity generated or addressed by our 300 plus...
Department of Transportation — AVS is now required to collect, track, and report on data from the following Flight, Business and Workforce Plan. The Human Resource Management’s Performance Target...
Eckerle, Kate
This dissertation begins with a review of Calabi-Yau manifolds and their moduli spaces, flux compactification largely tailored to the case of type IIb supergravity, and Coleman-De Luccia vacuum decay. The three chapters that follow present the results of novel research conducted as a graduate student. Our first project is concerned with bubble collisions in single scalar field theories with multiple vacua. Lorentz boosted solitons traveling in one spatial dimension are used as a proxy to the colliding 3-dimensional spherical bubble walls. Recent work found that at sufficiently high impact velocities collisions between such bubble vacua are governed by "free passage" dynamics in which field interactions can be ignored during the collision, providing a systematic process for populating local minima without quantum nucleation. We focus on the time period that follows the bubble collision and provide evidence that, for certain potentials, interactions can drive significant deviations from the free passage bubble profile, thwarting the production of a new patch with different field value. However, for simple polynomial potentials a fine-tuning of vacuum locations is required to reverse the free passage kick enough that the field in the collision region returns to the original bubble vacuum. Hence we deem classical transitions mediated by free passage robust. Our second project continues with soliton collisions in the limit of relativistic impact velocity, but with the new feature of nontrivial field space curvature. We establish a simple geometrical interpretation of such collisions in terms of a double family of field profiles whose tangent vector fields stand in mutual parallel transport. This provides a generalization of the well-known limit in flat field space (free passage). We investigate the limits of this approximation and illustrate our analytical results with numerical simulations. In our third and final project we investigate the distribution of field
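The "mutual parallel transport" condition used in the second project has a standard covariant form. As a generic statement (the connection and coordinates here are the usual differential-geometry ones, not the authors' specific field-space model):

```latex
% Parallel transport of a vector field v along a curve x(\lambda) with
% tangent u on a manifold with connection \Gamma:
\nabla_{u}\, v^{k}
  = \frac{\mathrm{d} v^{k}}{\mathrm{d}\lambda}
  + \Gamma^{k}{}_{ij}\, u^{i}\, v^{j} = 0,
\qquad
u^{i} = \frac{\mathrm{d} x^{i}}{\mathrm{d}\lambda}.
```

In flat field space the connection coefficients vanish and the condition reduces to constant vector components, recovering the free-passage limit mentioned above.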
International Nuclear Information System (INIS)
Misdaq, M.A.; Ktata, A.; Bakhchi, A.
2000-01-01
Radon (²²²Rn) and thoron (²²⁰Rn) α-activities per unit volume were measured inside and outside different building materials by using two types of solid state nuclear track detectors (SSNTD) (CR-39 and LR-115 type II). In addition, the radon and thoron emanation coefficients of the studied materials were evaluated. Based on these data, the transport of radon and thoron across parallelepipedic blocks of the building materials could be investigated and radon and thoron global α-activities per unit volume outside different building material blocks were determined. Moreover, the diffusion length and the effective diffusion coefficient of radon in the building materials were evaluated and the total alpha activity due to radon in the atmospheres of different rooms consisting of different building materials was studied
Carrel, M.; Morales, V. L.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.
2018-03-01
Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling filters, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered.
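The correlated CTRW with stochastic velocity relaxation can be caricatured in a few lines: draw particle velocities from a gamma distribution, redraw them occasionally (the relaxation), and track the spread of positions. This Python sketch uses invented parameters purely to show the mechanism, not the fitted model:

```python
# Toy correlated CTRW: gamma-distributed velocities with stochastic
# relaxation (each step a velocity is redrawn with probability
# 1 / corr_steps). Longer correlation makes motion more ballistic, so the
# spread (variance of positions) at fixed time grows, i.e. superdiffusion.
import random

def ctrw_spread(n_particles, n_steps, corr_steps, shape, scale,
                dt=1.0, seed=7):
    rng = random.Random(seed)
    pos = [0.0] * n_particles
    vel = [rng.gammavariate(shape, scale) for _ in range(n_particles)]
    for _ in range(n_steps):
        for i in range(n_particles):
            if rng.random() < 1.0 / corr_steps:   # stochastic relaxation
                vel[i] = rng.gammavariate(shape, scale)
            pos[i] += vel[i] * dt
    mean = sum(pos) / n_particles
    return sum((x - mean) ** 2 for x in pos) / n_particles

# Longer velocity correlation -> larger spread at the same elapsed time.
spread_short = ctrw_spread(500, 50, corr_steps=2, shape=2.0, scale=1.0)
spread_long = ctrw_spread(500, 50, corr_steps=25, shape=2.0, scale=1.0)
```

The paper's model additionally correlates successive velocities in space along streamlines; the sketch keeps only the temporal-relaxation ingredient.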
McCallum, Ethan
2011-01-01
It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.
International Nuclear Information System (INIS)
Fevotte, F.
2008-01-01
At the various stages of a nuclear reactor's life, numerous studies are needed to guarantee the safety and efficiency of the design, analyse the fuel cycle, prepare the dismantlement, and so on. Due to the extreme difficulty of taking extensive and accurate measurements in the reactor core, most of these studies are numerical simulations. The complete numerical simulation of a nuclear reactor involves many types of physics: neutronics, thermal hydraulics, materials, control engineering, and so on. Among these, the neutron transport simulation is one of the fundamental steps, since it allows computation - among other things - of various fundamental values such as the power density (used in thermal hydraulics computations) or fuel burn-up. The neutron transport simulation is based on the Boltzmann equation, which models the neutron population inside the reactor core. Among the various methods allowing its numerical solution, much interest has been devoted in the past few years to the Method of Characteristics in unstructured meshes (MOC), since it offers good accuracy and operates in complicated geometries. The aim of this work is to propose improvements to the calculation scheme based on the two-dimensional MOC, in order to reduce the computational resources required. (A.L.B.)
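The core update of the Method of Characteristics, sweeping the angular flux along one segment of a characteristic under a flat-source assumption, can be sketched as follows; the symbols are generic textbook notation, not taken from the code described in the abstract.

```python
import math

def moc_segment(psi_in, sigma_t, q, length):
    """Advance the angular flux along one characteristic segment.

    Assumes a flat (constant) source q and total cross section sigma_t > 0
    over the segment.  This is the standard step-characteristics relation
        psi_out = psi_in * exp(-sigma_t * L) + (q / sigma_t) * (1 - exp(-sigma_t * L)),
    i.e. exponential attenuation of the incoming flux plus in-segment
    source build-up toward the infinite-medium value q / sigma_t.
    """
    tau = sigma_t * length          # optical thickness of the segment
    att = math.exp(-tau)
    return psi_in * att + (q / sigma_t) * (1.0 - att)
```

Two sanity checks follow directly from the formula: with no source the flux is purely attenuated, and a flux already at the infinite-medium value q/sigma_t passes through unchanged.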
Energy Technology Data Exchange (ETDEWEB)
Fang, Chin [SLAC National Accelerator Lab., Menlo Park, CA (United States); Cottrell, R. A. [SLAC National Accelerator Lab., Menlo Park, CA (United States)
2015-05-06
This Technical Note provides an overview of high-performance parallel Big Data transfers with and without encryption for data in transit over multiple network channels. It shows that with the parallel approach, it is feasible to carry out high-performance parallel "encrypted" Big Data transfers without serious impact to throughput, although other impacts, e.g. energy consumption, should still be investigated. It also explains our rationale for using a statistics-based approach to gain understanding from test results and to improve the system. The presentation is high-level in nature. Nevertheless, at the end we pose some questions and identify potentially fruitful directions for future work.
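The multi-channel idea in this note can be illustrated in miniature: split a payload into chunks, move the chunks over several parallel workers (standing in for network channels), then reassemble and verify integrity. This is an illustrative sketch only, not the transfer tool described above; a real channel would encrypt and ship each chunk.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def parallel_transfer(payload: bytes, n_channels: int = 4, chunk_size: int = 1 << 16):
    """Simulate a multi-channel transfer: chunk, 'send' in parallel, reassemble.

    Returns (reassembled_bytes, integrity_ok).  Integrity is checked by
    comparing SHA-256 digests of the source and reassembled payloads.
    """
    chunks = [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]

    def send(indexed_chunk):
        # Stand-in for one network channel; a real implementation would
        # encrypt `data` and transmit it over its own TCP stream.
        idx, data = indexed_chunk
        return idx, data

    with ThreadPoolExecutor(max_workers=n_channels) as pool:
        received = dict(pool.map(send, enumerate(chunks)))

    out = b"".join(received[i] for i in range(len(chunks)))
    ok = hashlib.sha256(out).digest() == hashlib.sha256(payload).digest()
    return out, ok
```

Because chunks may complete out of order on real channels, the index carried alongside each chunk is what makes reassembly deterministic.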
Directory of Open Access Journals (Sweden)
Wim J. Kimmerer
2008-02-01
Movements of pelagic organisms in the tidal freshwater regions of estuaries are sensitive to the movements of water. In the Sacramento-San Joaquin Delta—the tidal freshwater reach of the San Francisco Estuary—such movements are key to losses of fish and other organisms to entrainment in large water-export facilities. We used the Delta Simulation Model-2 hydrodynamic model and its particle tracking model to examine the principal determinants of entrainment losses to the export facilities and how movement of fish through the Delta may be influenced by flow. We modeled 936 scenarios for 74 different conditions of flow, diversions, tides, and removable barriers to address seven questions regarding hydrodynamics and entrainment risk in the Delta. Tide had relatively small effects on fate and residence time of particles. Release location and hydrology interacted to control particle fate and residence time. The ratio of flow into the export facilities to freshwater flow into the Delta (export:inflow, or EI, ratio) was a useful predictor of entrainment probability if the model were allowed to run long enough to resolve particles’ ultimate fate. Agricultural diversions within the Delta increased total entrainment losses and altered local movement patterns. Removable barriers in channels of the southern Delta and gates in the Delta Cross Channel in the northern Delta had minor effects on particles released in the rivers above these channels. A simulation of losses of larval delta smelt showed substantial cumulative losses depending on both inflow and export flow. A simulation mimicking mark–recapture experiments on Chinook salmon smolts suggested that both inflow and export flow may be important factors determining survival of salmon in the upper estuary. To the extent that fish behave passively, this model is probably suitable for describing Delta-wide movement, but it is less suitable for smaller scales or alternative configurations of the Delta.
National Research Council Canada - National Science Library
Adams, James; Carr, Ron; Chebl, Maroun; Coleman, Robert; Costantini, William; Cox, Robert; Dial, William; Jenkins, Robert; McGovern, James; Mueller, Peter
2006-01-01
...., trains, ships, etc.) and maximizing intermodal efficiency. A healthy balance must be achieved between the flow of international commerce and security requirements regardless of transportation mode...
Parallel hierarchical global illumination
Energy Technology Data Exchange (ETDEWEB)
Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)
1997-10-08
Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.
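Direct simulation of light transport from the sources, in the spirit of the Photon algorithm described above, reduces at its core to tracing photons through scatter/absorb events until they leave the scene or are absorbed. A minimal 1-D slab sketch of that core loop follows; the geometry and material constants are invented for illustration and have nothing to do with the thesis itself.

```python
import math
import random

def slab_transport(n_photons=5000, thickness=2.0, sigma_t=1.0,
                   albedo=0.5, seed=7):
    """Monte Carlo photon transport through a 1-D slab.

    Photons enter at x = 0 moving in +x.  Free paths are exponential with
    mean 1/sigma_t; at each collision the photon scatters (probability
    `albedo`) into +x or -x, else it is absorbed.  Returns the fractions
    reflected, absorbed, and transmitted, which must sum to one (every
    photon's energy is accounted for somewhere).
    """
    rng = random.Random(seed)
    counts = {"reflected": 0, "absorbed": 0, "transmitted": 0}
    for _ in range(n_photons):
        x, mu = 0.0, 1.0
        while True:
            # Sample an exponential free path; 1 - random() lies in (0, 1].
            x += mu * (-math.log(1.0 - rng.random()) / sigma_t)
            if x < 0.0:
                counts["reflected"] += 1
                break
            if x > thickness:
                counts["transmitted"] += 1
                break
            if rng.random() < albedo:
                mu = rng.choice((-1.0, 1.0))   # isotropic scatter in 1-D
            else:
                counts["absorbed"] += 1        # deposit the photon's energy
                break
    return {k: v / n_photons for k, v in counts.items()}
```

The rendering problem adds geometry, wavelengths, and surface models on top of this loop, but the energy-conservation bookkeeping is the same.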
Directory of Open Access Journals (Sweden)
James G. Worner
2017-05-01
James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship. ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.
International Nuclear Information System (INIS)
Anon.
1998-01-01
Here is the decree of the thirtieth of July 1998 relative to road transportation, trade and brokerage of wastes. It requires firms that carry out road transportation of wastes, as well as waste traders and brokers, to declare their operations to the prefect. The declaration has to be renewed every five years. (O.M.)
International Nuclear Information System (INIS)
Peterson, Richard N.; Burnett, William C.; Opsahl, Stephen P.; Santos, Isaac R.; Misra, Sambuddha; Froelich, Philip N.
2013-01-01
Suspended particles in rivers can carry metals, nutrients, and pollutants downstream which can become bioactive in estuaries and coastal marine waters. In river systems with multiple sources of both suspended particles and contamination sources, it is important to assess the hydrologic conditions under which contaminated particles can be delivered to downstream ecosystems. The Apalachicola–Chattahoochee–Flint (ACF) River system in the southeastern United States represents an ideal system to study these hydrologic impacts on particle transport through a heavily-impacted river (the Chattahoochee River) and one much less impacted by anthropogenic activities (the Flint River). We demonstrate here the utility of natural radioisotopes as tracers of suspended particles through the ACF system, where particles contaminated with arsenic (As) and antimony (Sb) have been shown to be contributed from coal-fired power plants along the Chattahoochee River, and have elevated concentrations in the surficial sediments of the Apalachicola Bay Delta. Radium isotopes (228Ra and 226Ra) on suspended particles should vary throughout the different geologic provinces of this river system, allowing differentiation of the relative contributions of the Chattahoochee and Flint Rivers to the suspended load delivered to Lake Seminole, the Apalachicola River, and ultimately to Apalachicola Bay. We also use various geochemical proxies (40K, organic carbon, and calcium) to assess the relative composition of suspended particles (lithogenic, organic, and carbonate fractions, respectively) under a range of hydrologic conditions. During low (base) flow conditions, the Flint River contributed 70% of the suspended particle load to both the Apalachicola River and the bay, whereas the Chattahoochee River became the dominant source during higher discharge, contributing 80% of the suspended load to the Apalachicola River and 62% of the particles entering the estuary. Neither of these hydrologic
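The source-apportionment percentages quoted above rest on the standard two-endmember linear mixing model: a conservative tracer value measured in the mixture is expressed as a weighted average of the two source values. A sketch with invented tracer values (not data from the study):

```python
def mixing_fraction(r_mix, r_a, r_b):
    """Fraction of a two-source mixture contributed by endmember A.

    Solves the linear two-endmember mixing model
        r_mix = f * r_a + (1 - f) * r_b
    for f, as is done with conservative tracers such as 228Ra/226Ra
    activity ratios.  Requires the endmembers to differ in the tracer.
    """
    if r_a == r_b:
        raise ValueError("endmembers are indistinguishable for this tracer")
    return (r_mix - r_b) / (r_a - r_b)

# Hypothetical example: endmember ratios 2.0 and 1.0, mixture measured at 1.7
f_a = mixing_fraction(1.7, 2.0, 1.0)   # fraction from endmember A
```

In practice one also propagates measurement uncertainty through this expression, since f becomes poorly constrained when the endmember ratios are close.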
P. Sharp
The CMS Inner Tracking Detector continues to make good progress. The successful commissioning of ~ 25% of the Silicon Strip Tracker was completed in the Tracker Integration Facility (TIF) at CERN in July 2007 and the Tracker has since been prepared for moving and installation into CMS at P5. The Tracker was ready to move on schedule in September 2007. The installation of the Tracker cooling pipes and LV cables between Patch Panel 1 (PP1), on the inside of the CMS magnet cryostat, and the cooling plants and power system racks on the balconies has been completed. The optical fibres from PP1 to the readout FEDs in the USC have been installed, together with the Tracker cable channels, in parallel with the installation of the EB/HB services. All of the Tracker Safety, Power, DCS and the VME Readout Systems have been installed at P5 and are being tested and commissioned with CMS. It is planned to install the Tracker into CMS before Christmas. The Tracker will then be connected to the pre-installed services on Y...
P. Sharp
The CMS Inner Tracking Detector continues to make good progress. The successful commissioning of ~ 25% of the Silicon Strip Tracker was completed in the Tracker Integration Facility (TIF) at CERN on 18 July 2007 and the Tracker has since been prepared for moving and installation into CMS at P5. The Tracker will be ready to move on schedule in September 2007. The installation of the Tracker cooling pipes and LV cables between Patch Panel 1 (PP1), on the inside of the CMS magnet cryostat, and the cooling plants and power system racks on the balconies has been completed. The optical fibres from PP1 to the readout FEDs in the USC will be installed in parallel with the installation of the EB/HB services, and will be completed in October. It is planned to install the Tracker into CMS at the end of October, after the completion of the installation of the EB/HB services. The Tracker will then be connected to the pre-installed services on YB0 and commissioned with CMS in December. The FPix and BPix continue to make ...
Energy Technology Data Exchange (ETDEWEB)
Franz, A., LLNL
1998-02-17
The numerical simulation of chemically reacting flows is a topic that has attracted a great deal of current research. At the heart of numerical reactive flow simulations are large sets of coupled, nonlinear Partial Differential Equations (PDEs). Due to the stiffness that is usually present, explicit time differencing schemes are not used despite their inherent simplicity and efficiency on parallel and vector machines, since these schemes require prohibitively small numerical stepsizes. Implicit time differencing schemes, although possessing good stability characteristics, introduce a great deal of computational overhead necessary to solve the simultaneous algebraic system at each timestep. This thesis examines an algorithm based on a preconditioned time differencing scheme. The algorithm is explicit and permits a large stable time step. An investigation of the algorithm's accuracy, stability and performance on a parallel architecture is presented.
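The stepsize restriction that rules out naive explicit schemes for stiff systems can be seen on the scalar model problem y' = -λy, for which explicit Euler is stable only when h < 2/λ. This is a textbook illustration of the motivation above, not the thesis algorithm itself.

```python
def explicit_euler(lam, h, n_steps, y0=1.0):
    """Integrate y' = -lam * y with explicit Euler.

    Each step multiplies y by (1 - h * lam), so the iteration decays only
    when |1 - h * lam| < 1, i.e. h < 2 / lam.  For stiff lam this forces
    a tiny step even though the exact solution is perfectly smooth.
    """
    y = y0
    for _ in range(n_steps):
        y += h * (-lam * y)     # one explicit Euler step
    return y

# Stiff coefficient lam = 1000, so the stability bound is h < 0.002:
stable = abs(explicit_euler(1000.0, 0.0015, 100))    # within the bound: decays
unstable = abs(explicit_euler(1000.0, 0.0025, 100))  # beyond the bound: blows up
```

Preconditioned or stabilized explicit schemes, like the one examined in the thesis, aim to relax exactly this bound while keeping the per-step cost of an explicit method.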
National Research Council Canada - National Science Library
Allshouse, Michael; Armstrong, Frederick Henry; Burns, Stephen; Courts, Michael; Denn, Douglas; Fortunato, Paul; Gettings, Daniel; Hansen, David; Hoffman, D. W; Jones, Robert
2007-01-01
.... The ability of the global transportation industry to rapidly move passengers and products from one corner of the globe to another continues to amaze even those wise to the dynamics of such operations...
Schein, Jessica R; Hunt, Kevin A; Minton, Janet A; Schultes, Neil P; Mourad, George S
2013-09-01
The single cell alga Chlamydomonas reinhardtii is capable of importing purines as nitrogen sources. An analysis of the annotated C. reinhardtii genome reveals at least three distinct gene families encoding for known nucleobase transporters. In this study the solute transport and binding properties for the lone C. reinhardtii nucleobase cation symporter 1 (CrNCS1) are determined through heterologous expression in Saccharomyces cerevisiae. CrNCS1 acts as a transporter of adenine, guanine, uracil and allantoin, sharing similar - but not identical - solute recognition specificity with the evolutionary distant NCS1 from Arabidopsis thaliana. The results suggest that the solute specificity for plant NCS1 occurred early in plant evolution and are distinct from solute transport specificities of single cell fungal NCS1 proteins. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Clark, Brian R.; Landon, Matthew K.; Kauffman, Leon J.; Hornberger, George Z.
2008-01-01
Contamination of public-supply wells has resulted in public-health threats and negative economic effects for communities that must treat contaminated water or find alternative water supplies. To investigate factors controlling vulnerability of public-supply wells to anthropogenic and natural contaminants using consistent and systematic data collected in a variety of principal aquifer settings in the United States, a study of Transport of Anthropogenic and Natural Contaminants to public-supply wells was begun in 2001 as part of the U.S. Geological Survey National Water-Quality Assessment Program. The area simulated by the ground-water flow model described in this report was selected for a study of processes influencing contaminant distribution and transport along the direction of ground-water flow towards a public-supply well in southeastern York, Nebraska. Ground-water flow is simulated for a 60-year period from September 1, 1944, to August 31, 2004. Steady-state conditions are simulated prior to September 1, 1944, and represent conditions prior to use of ground water for irrigation. Irrigation, municipal, and industrial wells were simulated using the Multi-Node Well package of the modular three-dimensional ground-water flow model code, MODFLOW-2000, which allows simulation of flow and solutes through wells that are simulated in multiple nodes or layers. Ground-water flow, age, and transport of selected tracers were simulated using the Ground-Water Transport process of MODFLOW-2000. Simulated ground-water age was compared to interpreted ground-water age in six monitoring wells in the unconfined aquifer. The tracer chlorofluorocarbon-11 was simulated directly using Ground-Water Transport for comparison with concentrations measured in six monitoring wells and one public supply well screened in the upper confined aquifer. Three alternative model simulations indicate that simulation results are highly sensitive to the distribution of multilayer well bores where leakage
Flight Activity and Crew Tracking System -
Department of Transportation — The Flight Activity and Crew Tracking System (FACTS) is a Web-based application that provides an overall management and tracking tool of FAA Airmen performing Flight...
Modeling Ballasted Tracks for Runoff Coefficient C
2012-08-01
In this study, the Regional Transportation District (RTD)'s light rail tracks were modeled to determine the Rational Method runoff coefficient, C, values corresponding to ballasted tracks. To accomplish this, a laboratory study utilizing a rain...
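The quantity being calibrated above is the coefficient C in the Rational Method relation Q = C·i·A, so a lab run that measures peak runoff under a known rainfall intensity determines C directly. The numbers below are invented for illustration and are not RTD measurements.

```python
def runoff_coefficient(q_peak, intensity, area):
    """Back out the Rational Method coefficient C from Q = C * i * A.

    q_peak    peak runoff rate, m^3/s
    intensity rainfall intensity, m/s (e.g. 50 mm/h = 0.050 / 3600 m/s)
    area      drainage area, m^2

    C is dimensionless and should lie in [0, 1] for physically sensible
    data (runoff cannot exceed rainfall).
    """
    return q_peak / (intensity * area)

# Hypothetical lab run: 50 mm/h rain on a 200 m^2 ballasted test section,
# with a measured peak runoff of 0.0011 m^3/s.
i_si = 50e-3 / 3600.0                    # 50 mm/h converted to m/s
c = runoff_coefficient(0.0011, i_si, 200.0)
```

Note the unit discipline: the familiar US form of the Rational Method (Q in cfs, i in in/h, A in acres) hides a conversion factor close to 1, while the SI form above is exactly dimensionless.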
The concurrent emergence and causes of double volcanic hotspot tracks on the Pacific plate
DEFF Research Database (Denmark)
Jones, David T; Davies, D. R.; Campbell, I. H.
2017-01-01
Mantle plumes are buoyant upwellings of hot rock that transport heat from Earth's core to its surface, generating anomalous regions of volcanism that are not directly associated with plate tectonic processes. The best-studied example is the Hawaiian-Emperor chain, but the emergence of two sub-parallel volcanic tracks along this chain, Loa and Kea, and the systematic geochemical differences between them have remained unexplained. Here we argue that the emergence of these tracks coincides with the appearance of other double volcanic tracks on the Pacific plate and a recent azimuthal change in the motion of the plate. We propose a three-part model that explains the evolution of Hawaiian double-track volcanism: first, mantle flow beneath the rapidly moving Pacific plate strongly tilts the Hawaiian plume and leads to lateral separation between high- and low-pressure melt source regions; second, the recent...
TRACK The New Beam Dynamics Code
Mustapha, Brahim; Ostroumov, Peter; Schnirman-Lessner, Eliane
2005-01-01
The new ray-tracing code TRACK was developed* to fulfill the special requirements of the RIA accelerator systems. The RIA lattice includes an ECR ion source, a LEBT containing a MHB and a RFQ followed by three SC linac sections separated by two stripping stations with appropriate magnetic transport systems. No available beam dynamics code meets all the necessary requirements for an end-to-end simulation of the RIA driver linac. The latest version of TRACK was used for end-to-end simulations of the RIA driver including errors and beam loss analysis.** In addition to the standard capabilities, the code includes the following new features: i) multiple charge states; ii) a realistic stripper model; iii) static and dynamic errors; iv) automatic steering to correct for misalignments; v) detailed beam-loss analysis; vi) parallel computing to perform large-scale simulations. Although primarily developed for simulations of the RIA machine, TRACK is a general beam dynamics code. Currently it is being used for the design and ...
2007-01-01
Faculty ... INDUSTRY TRAVEL, Domestic: Assistant Deputy Under Secretary of Defense (Transportation Policy), Washington, DC; Department of ... developed between the railroad and trucking industries. Railroads: Today’s seven Class I freight railroad systems move 42% of the nation’s intercity ... has been successfully employed in London to reduce congestion and observed by this industry study during its travels. It is currently being
Wurch, Louie L; Gobler, Christopher J; Dyhrman, Sonya T
2014-08-01
Targeted gene expression using quantitative reverse transcription polymerase chain reaction (qRT-PCR) was employed to track patterns in the expression of genes indicative of nitrogen or phosphorus deficiency in the brown tide-forming alga Aureococcus anophagefferens. During culture experiments, a xanthine/uracil/vitamin C permease (XUV) was upregulated ∼20-fold under nitrogen-deficient conditions relative to a nitrogen-replete control and rapidly returned to nitrogen-replete levels after nitrogen-deficient cells were resupplied with nitrate or ammonium. It was not responsive to phosphorus deficiency. Expression of an inorganic phosphate transporter (PTA3) was enriched ∼10-fold under phosphorus-deficient conditions relative to a phosphorus-replete control, and this signal was rapidly lost upon phosphate resupply. PTA3 was not upregulated by nitrogen deficiency. Natural A. anophagefferens populations from a dense brown tide that occurred in Long Island, NY, in 2009 were assayed for XUV and PTA3 expression and compared with nutrient concentrations over the peak of a bloom. Patterns in XUV expression were consistent with nitrogen-replete growth, never reaching the values observed in N-deficient cultures. PTA3 expression was highest prior to peak bloom stages, reaching expression levels within the range of P-deficient cultures. These data highlight the value of molecular-level assessments of nutrient deficiency and suggest that phosphorus deficiency could play a role in the dynamics of destructive A. anophagefferens blooms. © 2013 Society for Applied Microbiology and John Wiley & Sons Ltd.
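Fold changes like the ~20-fold XUV signal reported above are conventionally computed from qRT-PCR cycle thresholds with the 2^-ΔΔCt (Livak) method; whether this study used that exact normalization is not stated in the abstract, so the sketch below with made-up Ct values is purely illustrative.

```python
def fold_change(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method.

    Normalizes the target gene to a reference gene within each condition
    (dCt = Ct_target - Ct_reference), then compares treated vs control:
        fold = 2 ** -(dCt_treated - dCt_control).
    A lower Ct means more template, hence the negative exponent.
    """
    d_ct_treat = ct_target_treat - ct_ref_treat
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_treat - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values chosen so the target comes out ~20-fold enriched
# in the treated condition:
f = fold_change(20.0, 18.0, 24.32, 18.0)
```

The method assumes near-100% amplification efficiency for both genes; efficiency-corrected variants replace the base 2 with the measured per-gene efficiency.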
Dynamic Ocean Track System Plus -
Department of Transportation — Dynamic Ocean Track System Plus (DOTS Plus) is a planning tool implemented at the ZOA, ZAN, and ZNY ARTCCs. It is utilized by Traffic Management Unit (TMU) personnel...
Roy, Swarnendu; Chakraborty, Usha
2018-01-01
Comparative analyses of the responses to NaCl in Cynodon dactylon and a sensitive crop species like rice could effectively unravel the salt tolerance mechanism in the former. C. dactylon, a wild perennial chloridoid grass having a wide range of ecological distribution is generally adaptable to varying degrees of salinity stress. The role of salt exclusion mechanism present exclusively in the wild grass was one of the major factors contributing to its tolerance. Salt exclusion was found to be induced at 4 days when the plants were treated with a minimum conc. of 200 mM NaCl. The structural peculiarities of the salt exuding glands were elucidated by the SEM and TEM studies, which clearly revealed the presence of a bicellular salt gland actively functioning under NaCl stress to remove the excess amount of Na + ion from the mesophyll tissues. Moreover, the intracellular effect of NaCl on the photosynthetic apparatus was found to be lower in C. dactylon in comparison to rice; at the same time, the vacuolization process increased in the former. Accumulation of osmolytes like proline and glycine betaine also increased significantly in C. dactylon with a concurrent check on the H 2 O 2 levels, electrolyte leakage and membrane lipid peroxidation. This accounted for the proper functioning of the Na + ion transporters in the salt glands and also in the vacuoles for the exudation and loading of excess salts, respectively, to maintain the osmotic balance of the protoplasm. In real-time PCR analyses, CdSOS1 expression was found to increase by 2.5- and 5-fold, respectively, and CdNHX expression increased by 1.5- and 2-fold, respectively, in plants subjected to 100 and 200 mM NaCl treatment for 72 h. Thus, the comparative analyses of the expression pattern of the plasma membrane and tonoplast Na + ion transporters, SOS1 and NHX in both the plants revealed the significant role of these two ion transporters in conferring salinity tolerance in Cynodon.
Evidence of Parallel Processing During Translation
DEFF Research Database (Denmark)
Balling, Laura Winther; Hvelplund, Kristian Tangsgaard; Sjørup, Annette Camilla
2014-01-01
conclude that translation is a parallel process and that literal translation is likely to be a universal initial default strategy in translation. This conclusion is strengthened by the fact that all three experiments were relatively naturalistic, due to the combination of remote eye tracking and mixed...
Parallel Volunteer Learning during Youth Programs
Lesmeister, Marilyn K.; Green, Jeremy; Derby, Amy; Bothum, Candi
2012-01-01
Lack of time is a hindrance for volunteers to participate in educational opportunities, yet volunteer success in an organization is tied to the orientation and education they receive. Meeting diverse educational needs of volunteers can be a challenge for program managers. Scheduling a Volunteer Learning Track for chaperones that is parallel to a…
International Nuclear Information System (INIS)
Mais, H.; Ripken, G.; Wrulich, A.; Schmidt, F.
1986-02-01
After a brief description of typical applications of particle tracking in storage rings and after a short discussion of some limitations and problems related with tracking we summarize some concepts and methods developed in the qualitative theory of dynamical systems. We show how these concepts can be applied to the proton ring HERA. (orig.)
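At its simplest, single-particle tracking in a storage ring iterates a one-turn map, and for a stable linear map in normalized coordinates that map is a rotation whose phase-space invariant must be preserved turn by turn. The toy map below illustrates this sanity check; it is not the HERA lattice, and the tune value is arbitrary.

```python
import math

def track(x, p, tune, n_turns):
    """Track (x, p) through n_turns of a linear one-turn map (a rotation).

    In normalized (Courant-Snyder) coordinates the one-turn map of a
    stable linear lattice is a rotation by 2*pi*tune, so the invariant
    x**2 + p**2 (proportional to the action) is conserved exactly.
    Nonlinear elements break this, which is where the qualitative theory
    of dynamical systems enters.
    """
    mu = 2.0 * math.pi * tune
    c, s = math.cos(mu), math.sin(mu)
    for _ in range(n_turns):
        x, p = c * x + s * p, -s * x + c * p   # one turn around the ring
    return x, p

x0, p0 = 1e-3, 0.0
x1, p1 = track(x0, p0, tune=0.31, n_turns=10000)
```

Long-term tracking studies watch precisely this kind of invariant (or its nonlinear analogues) drift to diagnose chaos and dynamic aperture.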
DEFF Research Database (Denmark)
Düdder, Boris; Ross, Omry
2017-01-01
Managing and verifying forest products in a value chain is often reliant on easily manipulated document or digital tracking methods - Chain of Custody Systems. We aim to create a new means of tracking timber by developing a tamper proof digital system based on Blockchain technology. Blockchain...
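The tamper-evidence property this abstract relies on comes from hash chaining: each record commits to the digest of its predecessor, so editing any record invalidates every later link. A minimal sketch of that mechanism, not the system proposed in the abstract:

```python
import hashlib

def chain(records):
    """Build a hash chain over custody records; returns (record, link) pairs.

    Each link is SHA-256(previous_link + record), so every link transitively
    commits to the entire history before it.
    """
    links = []
    prev = "0" * 64                          # genesis link
    for rec in records:
        digest = hashlib.sha256((prev + rec).encode()).hexdigest()
        links.append((rec, digest))
        prev = digest
    return links

def verify(links):
    """Re-derive every link; any tampered record breaks the chain from there on."""
    prev = "0" * 64
    for rec, digest in links:
        if hashlib.sha256((prev + rec).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True
```

A blockchain adds distributed consensus on top of this structure, so that no single custodian can quietly rewrite the chain and recompute the digests.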
Parallel Programming with Intel Parallel Studio XE
Blair-Chappell , Stephen
2012-01-01
Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the
KIM, Jong Woon; LEE, Young-Ouk
2017-09-01
As computing power gets better and better, computer codes that use a deterministic method seem to be less useful than those using the Monte Carlo method. In addition, users do not like to think about space, angle, and energy discretization for deterministic codes. However, a deterministic method is still powerful in that we can obtain a solution of the flux throughout the problem, particularly when particles can barely penetrate, such as in a deep penetration problem with small detection volumes. Recently, a new state-of-the-art discrete-ordinates code, ATTILA, was developed and has been widely used in several applications. ATTILA provides the capabilities to solve geometrically complex 3-D transport problems by using an unstructured tetrahedral mesh. Since 2009, we have been developing our own code by benchmarking ATTILA. AETIUS is a discrete ordinates code that uses an unstructured tetrahedral mesh, like ATTILA. For pre- and post-processing, Gmsh is used to generate an unstructured tetrahedral mesh by importing a CAD file (*.step) and to visualize the calculation results of AETIUS. Using a CAD tool, the geometry can be modeled very easily. In this paper, we give a brief overview of AETIUS and provide numerical results from both AETIUS and a Monte Carlo code, MCNP5, in a deep penetration problem with small detection volumes. The results demonstrate the effectiveness and efficiency of AETIUS for such calculations.
International Nuclear Information System (INIS)
Satoh, Akira; Hayasaka, Ryo; Majima, Tamotsu
2008-01-01
We have treated a dilute dispersion composed of ferromagnetic rodlike particles with a magnetic moment normal to the particle axis, such as hematites, to investigate the influences of the magnetic field strength, shear rate, and random forces on the orientational distribution of rodlike particles and also on transport coefficients, such as viscosity and diffusion coefficient. In the present analysis, these rodlike particles are assumed to undergo rotational Brownian motion in a simple shear flow as well as in an external magnetic field. The results obtained here are summarized as follows. In the case of a strong magnetic field and a smaller shear rate, the rodlike particle can freely rotate in the xy-plane with the magnetic moment continuing to point in the magnetic field direction. On the other hand, for a strong shear flow, the particle has a tendency to incline in the flow direction with the magnetic moment pointing in the magnetic field direction. In the case of the magnetic field applied normal to the direction of the sedimentation, the diffusion coefficient gives rise to smaller values than expected, since the rodlike particle sediments with the particle axis inclining toward directions normal to the movement direction and, of course, toward the direction along that direction.
Directory of Open Access Journals (Sweden)
Yasemin eKarabacak
2015-08-01
A series of drugs have been reported to increase memory performance by modulating the dopaminergic system, and herein modafinil was tested for its working memory (WM) enhancing properties. Reuptake inhibition of dopamine, serotonin (SERT) and norepinephrine (NET) by modafinil was tested. 60 male Sprague Dawley rats were divided into six groups (modafinil-treated at 1, 5, and 10 mg/kg body weight, trained and untrained, and vehicle-treated, trained and untrained), injected intraperitoneally daily for a period of 10 days and tested in a radial arm maze (RAM), a paradigm for testing spatial WM. Hippocampi were taken six hours following the last day of training and complexes containing the unphosphorylated or phosphorylated dopamine transporter (DAT-CC and pDAT-CC) and complexes containing the D1-D3 dopamine receptor subunits (D1-D3-CC) were determined. Modafinil bound to the DAT but insignificantly to SERT or NET, and dopamine reuptake was blocked specifically (IC50 = 11.11; SERT 1547; NET 182). From day 8 (day 9 for 1 mg/kg body weight), modafinil decreased WM errors in the RAM significantly and remarkably at all doses tested as compared to the vehicle controls. WM errors were linked to the D2R-CC and the pDAT-CC. pDAT and D1-D3-CC levels were modulated significantly, and modafinil was shown to enhance spatial WM in the rat in a well-documented paradigm at all three doses; dopamine reuptake inhibition with subsequent modulation of the D1-D3-CC is proposed as a possible mechanism of action.
Parallel plasma fluid turbulence calculations
International Nuclear Information System (INIS)
Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.
1994-01-01
The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated
Morse, H Stephen
1994-01-01
Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment. Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for a variety of architectures, such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the
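Since the book's machine models cannot be reproduced here, the flavour of linear-array sorting can be sketched with odd-even transposition sort: the compare-exchange pairs within a phase are disjoint, so each pair could run on a separate processor. This is an illustrative sketch, not the book's code.

```python
# Odd-even transposition sort: n phases of disjoint neighbour
# compare-exchanges, the classic algorithm for a linear array of
# processors. Within one phase the pairs share no elements, so they
# could all execute concurrently.

def odd_even_transposition_sort(a):
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2                 # alternate even/odd pairings
        for i in range(start, n - 1, 2):  # disjoint pairs: parallelizable
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

After n phases every element has had the chance to travel to its final slot, which is why n rounds always suffice.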
International Nuclear Information System (INIS)
Burchart, J.; Kral, J.
1979-01-01
A comparison is made of two methods of determining the age of rocks, i.e., the potassium-argon method and the fission-track method. The former method is more accurate but is dependent on the temperature and on the grain size of the investigated rocks (apatites, biotites, muscovites). As for the fission-track method, the determination is not dependent on grain size. This method allows dating and the determination of uranium concentration and distribution in rocks. (H.S.)
Bayer image parallel decoding based on GPU
Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua
2012-11-01
In the photoelectrical tracking system, the Bayer image is decompressed by the traditional, CPU-based method. However, this is too slow when the images become large, for example, 2K×2K×16bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA's Graphics Processing Unit (GPU), which supports the CUDA architecture. The decoding procedure can be divided into three parts: the first is the serial part, the second is the task-parallelism part, and the last is the data-parallelism part, including inverse quantization, the inverse discrete wavelet transform (IDWT) and image post-processing. To reduce the execution time, the task-parallelism part is optimized by OpenMP techniques. The data-parallelism part improves its efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization and texture memory optimization. In particular, the IDWT can be sped up significantly by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16bit Bayer image, the data-parallelism part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the CPU serial method.
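The row/column separability that makes the "2D IDWT as 1D passes" rewrite possible can be sketched with a one-level Haar transform: every row is transformed independently, then every column, and the passes commute. The paper's actual wavelet and codec details are not reproduced; all names here are illustrative.

```python
# One-level Haar transform, stored as [approx half | detail half] per
# vector. Because the 2D transform is separable, the inverse is just a
# 1D pass over all rows followed by a 1D pass over all columns; each
# row/column is independent, which is what a GPU exploits.

def haar_fwd_1d(x):                        # x -> (approx, detail)
    s = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x) // 2)]
    return s, d

def haar_inv_1d(s, d):                     # exact inverse of haar_fwd_1d
    out = []
    for si, di in zip(s, d):
        out += [si + di, si - di]
    return out

def fwd2d(img):
    rows = [s + d for s, d in (haar_fwd_1d(r) for r in img)]
    cols = [s + d for s, d in (haar_fwd_1d(list(c)) for c in zip(*rows))]
    return [list(r) for r in zip(*cols)]

def idwt2d(coeffs):
    h = len(coeffs[0]) // 2
    rows = [haar_inv_1d(r[:h], r[h:]) for r in coeffs]   # 1D pass on rows
    cols = list(zip(*rows))                               # transpose
    v = len(cols[0]) // 2
    cols = [haar_inv_1d(c[:v], c[v:]) for c in cols]      # 1D pass on columns
    return [list(r) for r in zip(*cols)]                  # transpose back
```

Because the row and column passes are independent linear maps, a parallel implementation can assign one row (or column) per thread in each pass.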
Parallel S/sub n/ iteration schemes
International Nuclear Information System (INIS)
Wienke, B.R.; Hiromoto, R.E.
1986-01-01
The iterative, multigroup, discrete ordinates (S/sub n/) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional S/sub n/ transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial S/sub n/ algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial
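A minimal sketch of the multigroup source iteration underlying the schemes above, reduced to an infinite homogeneous medium so that no spatial sweep appears; the HEP-specific ordered and chaotic variants are not reproduced, and the `sigs[gp][g]` scattering-matrix layout is an assumption of this sketch.

```python
# Jacobi-style ("ordered") source iteration for the multigroup balance
#   phi_g = (S_g + sum_g' sigs[g'][g] * phi_g') / sigt_g,
# where sigs[gp][g] is the scattering cross section from group gp into
# group g. The per-group updates within one iteration are independent,
# which is the natural entry point for parallelism.

def source_iteration(sigt, sigs, S, tol=1e-12, max_it=10_000):
    G = len(S)
    phi = [0.0] * G
    for _ in range(max_it):
        new = [(S[g] + sum(sigs[gp][g] * phi[gp] for gp in range(G))) / sigt[g]
               for g in range(G)]          # each group g: independent update
        if max(abs(a - b) for a, b in zip(new, phi)) < tol:
            return new
        phi = new
    return phi
```

The iteration converges whenever scattering is subcritical relative to the total cross section; a chaotic scheme would simply let the group updates use whichever `phi` values are freshest.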
Introduction to parallel programming
Brawer, Steven
1989-01-01
Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race
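The barrier and shared-memory ideas the book introduces can be sketched with Python threads; this is an illustrative stand-in, not the book's own examples.

```python
# Shared-memory workers with a barrier: each thread fills an interleaved
# slice of a shared array, waits at the barrier, and only then reads the
# whole array. Without the barrier, a fast thread could read slices that
# other threads have not written yet.

import threading

def parallel_squares(n, nthreads=4):
    data = [0] * n
    barrier = threading.Barrier(nthreads)
    totals = [0] * nthreads

    def worker(tid):
        for i in range(tid, n, nthreads):  # interleaved slice of the work
            data[i] = i * i
        barrier.wait()                     # no reads until all writes finish
        totals[tid] = sum(data)            # safe: array is now complete

    threads = [threading.Thread(target=worker, args=(t,))
               for t in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert all(t == totals[0] for t in totals)  # every thread saw the same sum
    return data
```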
Fox, Geoffrey C; Messina, Guiseppe C
2014-01-01
A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop
Adapting algorithms to massively parallel hardware
Sioulas, Panagiotis
2016-01-01
In the recent years, the trend in computing has shifted from delivering processors with faster clock speeds to increasing the number of cores per processor. This marks a paradigm shift towards parallel programming in which applications are programmed to exploit the power provided by multi-cores. Usually there is gain in terms of the time-to-solution and the memory footprint. Specifically, this trend has sparked an interest towards massively parallel systems that can provide a large number of processors, and possibly computing nodes, as in the GPUs and MPPAs (Massively Parallel Processor Arrays). In this project, the focus was on two distinct computing problems: k-d tree searches and track seeding cellular automata. The goal was to adapt the algorithms to parallel systems and evaluate their performance in different cases.
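The first of the two problems named, the k-d tree search, can be sketched as follows; this is a generic textbook version, not the project's code.

```python
# A 2D k-d tree with nearest-neighbour search. The search descends the
# near subtree first and visits the far subtree only when the splitting
# plane is closer than the current best -- the pruning that makes k-d
# searches fast, and the independent subtree visits are where parallel
# versions find their work.

def build_kdtree(points, depth=0):
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, q, best=None):
    """Return (squared_distance, point) of the nearest stored point to q."""
    if node is None:
        return best
    d2 = sum((a - b) ** 2 for a, b in zip(node["point"], q))
    if best is None or d2 < best[0]:
        best = (d2, node["point"])
    diff = q[node["axis"]] - node["point"][node["axis"]]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, q, best)
    if diff * diff < best[0]:         # plane closer than the current best:
        best = nearest(far, q, best)  # the far subtree may still win
    return best
```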
Coupled transport in field-reversed configurations
Steinhauer, L. C.; Berk, H. L.; TAE Team
2018-02-01
Coupled transport is the close interconnection between the cross-field and parallel fluxes in different regions due to topological changes in the magnetic field. This occurs because perpendicular transport is necessary for particles or energy to leave closed field-line regions, while parallel transport strongly affects evolution of open field-line regions. In most toroidal confinement systems, the periphery, namely, the portion with open magnetic surfaces, is small in thickness and volume compared to the core plasma, the portion with closed surfaces. In field-reversed configurations (FRCs), the periphery plays an outsized role in overall confinement. This effect is addressed by an FRC-relevant model of coupled particle transport that is well suited for immediate interpretation of experiments. The focus here is particle confinement rather than energy confinement since the two track together in FRCs. The interpretive tool yields both the particle transport rate χn and the end-loss time τǁ. The results indicate that particle confinement depends on both χn across magnetic surfaces throughout the plasma and τǁ along open surfaces and that they provide roughly equal transport barriers, inhibiting particle loss. The interpretation of traditional FRCs shows Bohm-like χn and inertial (free-streaming) τǁ. However, in recent advanced beam-driven FRC experiments, χn approaches the classical rate and τǁ is comparable to classic empty-loss-cone mirrors.
[Falsified medicines in parallel trade].
Muckenfuß, Heide
2017-11-01
The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. Europe-wide regulations, e.g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.
Formation of etchable tracks in plastics
International Nuclear Information System (INIS)
Katz, R.
1984-01-01
It is proposed that etchable tracks in plastics are formed by the interaction of delta-rays with polymer clusters, paralleling the formation of developable tracks in emulsion. We speak of a latent image, a grain count regime, and a track-width regime for the tracks of single particles. We may speak of 'ion-kill' and 'gamma-kill', as in radiobiology, when dealing with irradiation by a beam of particles. Existing but incomplete experimental evidence is consistent with these concepts. Such evidence as there is suggests that CR-39 is a 1-or-more-hit detector. (author)
International Nuclear Information System (INIS)
Arsenault, Benoit; Le Tellier, Romain; Hebert, Alain
2008-01-01
The paper presents the results of a first implementation of a Monte Carlo module in DRAGON Version 4 based on the delta-tracking technique. The Monte Carlo module uses the geometry and the self-shielded multigroup cross-sections calculated with a deterministic model. The module has been tested with three different configurations of an ACR™-type lattice. The paper also discusses the impact of this approach on the efficiency of the Monte Carlo module. (authors)
Parallel Atomistic Simulations
Energy Technology Data Exchange (ETDEWEB)
HEFFELFINGER,GRANT S.
2000-01-18
Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
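The spatial decomposition mentioned above can be sketched with a 2D cell list: space is cut into cells of side at least the interaction cutoff, so a cell (which a parallel MD code would assign to one processor) needs only its own and its neighbouring cells' particles to find every interacting pair. This is an illustrative serial model of the data layout, not a parallel implementation.

```python
# Cell-list neighbour search in a non-periodic square box. Each cell's
# pair search touches only 9 cells, so per-cell work is independent and
# maps directly onto a spatial domain decomposition.

def pairs_within_cutoff(positions, box, cutoff):
    ncell = max(1, int(box // cutoff))    # guarantees cell side >= cutoff
    size = box / ncell
    cells = {}
    for i, (x, y) in enumerate(positions):
        key = (min(int(x / size), ncell - 1), min(int(y / size), ncell - 1))
        cells.setdefault(key, []).append(i)
    found = set()
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):             # own cell plus 8 neighbours
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), []):
                    for i in members:
                        if i < j:
                            d2 = ((positions[i][0] - positions[j][0]) ** 2 +
                                  (positions[i][1] - positions[j][1]) ** 2)
                            if d2 <= cutoff ** 2:
                                found.add((i, j))
    return found
```

The cost drops from O(n²) to roughly O(n) for uniform densities, and in a parallel code each processor would own a block of cells plus a "ghost" layer of neighbouring particles.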
International Nuclear Information System (INIS)
Ishizuki, Shigeru; Kawai, Wataru; Nemoto, Toshiyuki; Ogasawara, Shinobu; Kume, Etsuo; Adachi, Masaaki; Kawasaki, Nobuo; Yatake, Yo-ichi
2000-03-01
Several computer codes in the nuclear field have been vectorized, parallelized and transported on the FUJITSU VPP500 system, the AP3000 system and the Paragon system at Center for Promotion of Computational Science and Engineering in Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. These results are reported in 3 parts, i.e., the vectorization and parallelization on vector processors part, the parallelization on scalar processors part and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this vectorization and parallelization on vector processors part, the vectorization of General Tokamak Circuit Simulation Program code GTCSP, the vectorization and parallelization of Molecular Dynamics NTV (n-particle, Temperature and Velocity) Simulation code MSP2, Eddy Current Analysis code EDDYCAL, Thermal Analysis Code for Test of Passive Cooling System by HENDEL T2 code THANPACST2 and MHD Equilibrium code SELENEJ on the VPP500 are described. In the parallelization on scalar processors part, the parallelization of Monte Carlo N-Particle Transport code MCNP4B2, Plasma Hydrodynamics code using Cubic Interpolated Propagation Method PHCIP and Vectorized Monte Carlo code (continuous energy model / multi-group model) MVP/GMVP on the Paragon are described. In the porting part, the porting of Monte Carlo N-Particle Transport code MCNP4B2 and Reactor Safety Analysis code RELAP5 on the AP3000 are described. (author)
CERN. Geneva
2016-01-01
The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable number of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...
Parallelism in matrix computations
Gallopoulos, Efstratios; Sameh, Ahmed H
2016-01-01
This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...
Evaluation of environmental commitment tracking systems for use at CDOT.
2011-10-01
"The purpose of this study is to review existing Environmental Tracking Systems (ETSs) used by other select state Departments of Transportation (DOTs), as well as the existing Environmental Commitment Tracking System (ECTS) currently in use by C...
Implementation and development of vehicle tracking and immobilization technologies.
2010-01-01
Since the mid-1980s, limited use has been made of vehicle tracking using satellite communications to mitigate the security and safety risks created by the highway transportation of certain types of hazardous materials. However, vehicle-tracking techn...
DEFF Research Database (Denmark)
Sitchinava, Nodar; Zeh, Norbert
2012-01-01
We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of... in the optimal O(psortN + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and psortN is the parallel I/O complexity of sorting N elements using P processors.
Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole
2012-07-01
Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
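The SENSE-type reconstruction described above reduces, for acceleration factor R = 2 and two coils, to a 2x2 linear solve per pair of pixels folded onto the same aliased location. A toy one-dimensional sketch follows; the coil sensitivities are made-up numbers, and real MRI data are complex-valued.

```python
# Toy SENSE unfolding for R = 2: each aliased pixel k of coil c is
#   aliased[c][k] = sens[c][k] * x[k] + sens[c][k + half] * x[k + half],
# so two coils give a 2x2 system per k that recovers both true pixels.

def sense_unfold(aliased, sens):
    """aliased[c][k]: aliased pixel k of coil c (FOV/2 pixels each).
    sens[c][y]: sensitivity of coil c at true pixel y (FOV pixels)."""
    half = len(aliased[0])
    image = [0.0] * (2 * half)
    for k in range(half):
        y1, y2 = k, k + half               # the two pixels folded onto k
        a, b = sens[0][y1], sens[0][y2]    # 2x2 system per aliased pixel:
        c, d = sens[1][y1], sens[1][y2]    # [a b][x1]   [coil0 pixel]
        det = a * d - b * c                # [c d][x2] = [coil1 pixel]
        x1 = (d * aliased[0][k] - b * aliased[1][k]) / det
        x2 = (a * aliased[1][k] - c * aliased[0][k]) / det
        image[y1], image[y2] = x1, x2
    return image
```

When the sensitivity rows are nearly parallel the determinant shrinks and noise is amplified, which is the geometry-factor penalty mentioned in the parallel-imaging literature.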
Parallel Algorithms and Patterns
Energy Technology Data Exchange (ETDEWEB)
Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
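One of the listed patterns, the prefix scan, can be sketched in its work-efficient (Blelloch-style) form: within each up-sweep or down-sweep level the index pairs are disjoint, so a level's updates could all run concurrently. This sketch is illustrative, not from the presentation.

```python
# Work-efficient exclusive prefix scan (Blelloch): an up-sweep builds a
# reduction tree in place, the root is cleared, and a down-sweep pushes
# prefix values back down. Each level's pair updates are independent.

def exclusive_scan(a):
    n = len(a)
    assert n and n & (n - 1) == 0, "power-of-two length for simplicity"
    t = list(a)
    step = 1
    while step < n:                        # up-sweep (reduce) phase
        for i in range(0, n, 2 * step):    # disjoint pairs -> parallel
            t[i + 2 * step - 1] += t[i + step - 1]
        step *= 2
    t[-1] = 0                              # identity element at the root
    while step > 1:                        # down-sweep phase
        step //= 2
        for i in range(0, n, 2 * step):    # disjoint pairs -> parallel
            tmp = t[i + step - 1]
            t[i + step - 1] = t[i + 2 * step - 1]
            t[i + 2 * step - 1] += tmp
    return t
```

The two phases do O(n) additions in O(log n) parallel steps, which is why this form is preferred over the naive O(n log n) scan on wide machines.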
Application Portable Parallel Library
Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott
1995-01-01
Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collections of networked computers.) Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.
Feedback from operational experience in front-end transportation
International Nuclear Information System (INIS)
Mondonel, J.L.; Parison, C.
1998-01-01
Transport forms an integral part of the nuclear fuel cycle, representing the strategic link between each stage of the cycle. In a way there is a transport cycle that parallels the nuclear fuel cycle. This concerns particularly the front-end of the cycle, whose steps - mining, conversion, enrichment and fuel fabrication - require numerous transports. Back-end shipments involve a handful of countries, but front-end transports involve all five continents, and many exotic countries. All over Europe such transports are routinely performed with an excellent safety track record. Transnucleaire dominates the French nuclear transportation market and carries out both front and back-end transports. For instance in 1996 more than 28,400 front-end packages were transported as well as more than 3,600 back-end packages. However front-end transport is now a business undergoing much change. A nuclear transportation company must now cope with an evolving picture including new technical requirements, new transportation schemes and new business conditions. This paper describes the latest evolutions in terms of front-end transportation and the way this activity is carried out by Transnucleaire, and goes on to discuss future prospects. (authors)
49 CFR 1150.36 - Exempt construction of connecting track.
2010-10-01
... shippers from abuse of market power; and that the construction of connecting track would be of limited...
Xyce parallel electronic simulator design.
Energy Technology Data Exchange (ETDEWEB)
Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.
2010-09-01
This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, it is to be expected on long term projects for there to be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.
Parallel discrete event simulation
Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.
1991-01-01
In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation
Parallel reservoir simulator computations
International Nuclear Information System (INIS)
Hemanth-Kumar, K.; Young, L.C.
1995-01-01
The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and relative to scalar calculations it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively on the CRAY C-90
Totally parallel multilevel algorithms
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
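Of the four algorithms, cyclic reduction is the easiest to sketch: at each level the reduced rows are independent, so every level's updates could run concurrently. Below is a serial model for a tridiagonal system with n = 2^k - 1 unknowns; it is a generic textbook form, not the TPMA formulation.

```python
# Cyclic reduction for a tridiagonal system. Forward phase: at stride s,
# every other remaining row eliminates its neighbours, halving the
# system; back substitution reverses the strides. Rows touched at one
# level are disjoint, hence parallelizable.

def cyclic_reduction(a, b, c, d):
    """Solve for n = 2**k - 1 unknowns. a: sub-diagonal (a[0] must be
    0.0), b: diagonal, c: super-diagonal (c[-1] must be 0.0), d: RHS.
    All four lists are overwritten."""
    n = len(b)
    s = 1
    while 2 * s <= n:                          # forward reduction
        for i in range(2 * s - 1, n, 2 * s):   # independent within a level
            al = -a[i] / b[i - s]
            be = -c[i] / b[i + s] if i + s < n else 0.0
            b[i] += al * c[i - s] + (be * a[i + s] if i + s < n else 0.0)
            d[i] += al * d[i - s] + (be * d[i + s] if i + s < n else 0.0)
            a[i] = al * a[i - s]
            c[i] = be * c[i + s] if i + s < n else 0.0
        s *= 2
    x = [0.0] * n
    while s >= 1:                              # back substitution
        for i in range(s - 1, n, 2 * s):       # independent within a level
            x[i] = d[i]
            if i - s >= 0:
                x[i] -= a[i] * x[i - s]
            if i + s < n:
                x[i] -= c[i] * x[i + s]
            x[i] /= b[i]
        s //= 2
    return x
```

With one processor per row, the log n levels give the O(log n) parallel depth that makes the method attractive on hypercubes.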
Energy Technology Data Exchange (ETDEWEB)
1991-10-23
An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.
Massively parallel mathematical sieves
Energy Technology Data Exchange (ETDEWEB)
Montry, G.R.
1989-01-01
The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
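The scattered decomposition compared above can be modelled serially: candidates are dealt round-robin to P "processors", and for each prime every processor strikes only the multiples it owns, so the P marking loops are independent. This is an illustrative sketch, not the hypercube code.

```python
# Sieve of Eratosthenes with a scattered (round-robin) decomposition:
# candidate n is owned by processor (n - 2) % P, so each prime's
# multiples are spread evenly over the processors. The per-processor
# striking loops share no data and could run concurrently; finding the
# next prime is the global (communication) step.

def scattered_sieve(limit, P=4):
    def owner(n):
        return (n - 2) % P                  # round-robin owner of candidate n

    flags = [dict() for _ in range(P)]      # per-processor slice of the sieve
    for n in range(2, limit + 1):
        flags[owner(n)][n] = True
    p = 2
    while p * p <= limit:
        for proc in range(P):               # independent per-processor loops
            for m in range(p * p, limit + 1, p):
                if owner(m) == proc:
                    flags[proc][m] = False
        p += 1                              # global step: next surviving prime
        while p * p <= limit and not flags[owner(p)].get(p, False):
            p += 1
    return sorted(n for f in flags for n, ok in f.items() if ok)
```

The round-robin dealing is what balances the load: every processor strikes roughly the same number of multiples of each prime, unlike a blocked split where small primes hit the low block hardest.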
Origins and nature of non-Fickian transport through fractures
Wang, L.; Cardenas, M. B.
2014-12-01
Non-Fickian transport occurs across all scales within fractured and porous geological media. Fundamental understanding and appropriate characterization of non-Fickian transport through fractures is critical for understanding and prediction of the fate of solutes and other scalars. We use both analytical and numerical modeling, including direct numerical simulation and particle tracking random walk, to investigate the origin of non-Fickian transport through both homogeneous and heterogeneous fractures. For the simple homogeneous fracture case, i.e., parallel plates, we theoretically derived a formula for the dynamic longitudinal dispersion (D) within Poiseuille flow. Using the closed-form expression for the theoretical D, we quantified the time (T) and length (L) scales separating preasymptotic and asymptotic dispersive transport, with T and L proportional to the aperture (b) of the parallel plates to the second and fourth orders, respectively. As for heterogeneous fractures, the fracture roughness and correlation length are closely associated with T and L, and thus indicate the origin of non-Fickian transport. Modeling solute transport through 2D rough-walled fractures with a continuous time random walk with truncated power law shows that the degree of deviation from Fickian transport is proportional to fracture roughness. The estimated L for 2D rough-walled fractures is significantly longer than that derived from the formula within Poiseuille flow with equivalent b. Moreover, we artificially generated normally distributed 3D fractures with fixed correlation length but different fracture dimensions. Solute transport through 3D fractures was modeled with a particle tracking random walk algorithm. We found that transport transitions from non-Fickian to Fickian with increasing fracture dimensions, where the estimated L for the studied 3D fractures is related to the correlation length.
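The particle-tracking random-walk approach used above can be sketched for the parallel-plate case: advection by the local Poiseuille velocity plus Gaussian diffusive steps, with reflecting walls keeping particles in the aperture. All parameter values here are illustrative, not from the study.

```python
# Particle-tracking random walk between parallel plates. Each particle
# advects with u(y) = 4*umax*(y/b)*(1 - y/b) and diffuses with step
# std sqrt(2*D*dt) per axis; walls at y = 0 and y = b reflect. The
# spread of final x positions is the (pre)asymptotic dispersion the
# abstract analyses. Particles are independent: trivially parallel.

import random

def track_particles(n_particles=1000, n_steps=200, b=1.0, umax=1.0,
                    D=1e-3, dt=0.01, seed=1):
    rng = random.Random(seed)
    sigma = (2 * D * dt) ** 0.5            # diffusive step size per axis
    xs = []
    for _ in range(n_particles):
        x = 0.0
        y = rng.uniform(0.0, b)            # transverse start in the aperture
        for _ in range(n_steps):
            u = 4 * umax * (y / b) * (1 - y / b)   # Poiseuille profile
            x += u * dt + rng.gauss(0.0, sigma)
            y += rng.gauss(0.0, sigma)
            if y < 0.0:
                y = -y                     # reflect at the walls
            if y > b:
                y = 2 * b - y
        xs.append(x)
    mean = sum(xs) / len(xs)
    var = sum((v - mean) ** 2 for v in xs) / len(xs)
    return mean, var
```

At early times the variance is dominated by the velocity contrast across the aperture (preasymptotic, non-Fickian); only after transverse diffusion samples the whole gap does the cloud disperse at a constant Fickian rate.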
Track-structure simulations for charged particles.
Dingfelder, Michael
2012-11-01
Monte Carlo track-structure simulations provide a detailed and accurate picture of radiation transport of charged particles through condensed matter of biological interest. Liquid water serves as a surrogate for soft tissue and is used in most Monte Carlo track-structure codes. Basic theories of radiation transport and track-structure simulations are discussed and differences compared to condensed history codes highlighted. Interaction cross sections for electrons, protons, alpha particles, and light and heavy ions are required input data for track-structure simulations. Different calculation methods, including the plane-wave Born approximation, the dielectric theory, and semi-empirical approaches are presented using liquid water as a target. Low-energy electron transport and light ion transport are discussed as areas of special interest.
Vector and parallel processors in computational science
International Nuclear Information System (INIS)
Duff, I.S.; Reid, J.K.
1985-01-01
This book presents the papers given at a conference which reviewed the new developments in parallel and vector processing. Topics considered at the conference included hardware (array processors, supercomputers), programming languages, software aids, numerical methods (e.g., Monte Carlo algorithms, iterative methods, finite elements, optimization), and applications (e.g., neutron transport theory, meteorology, image processing)
Implementing Shared Memory Parallelism in MCBEND
Directory of Open Access Journals (Sweden)
Bird Adam
2017-01-01
MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.
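The shared-memory tally problem that motivates such OpenMP work can be modelled with Python threads; this is an illustrative analogue only, not MCBEND. Each worker accumulates a thread-private partial sum and merges it under a lock, the same pattern an OpenMP reduction generates.

```python
# Shared-memory scoring with per-thread partial sums: the hot loop runs
# lock-free on thread-private data, and each thread enters one short
# critical section to merge into the shared total.

import threading

def parallel_tally(samples, nthreads=4):
    total = [0.0]                     # shared tally
    lock = threading.Lock()

    def worker(chunk):
        partial = sum(chunk)          # thread-private accumulation, no lock
        with lock:                    # one critical section per thread
            total[0] += partial

    chunks = [samples[i::nthreads] for i in range(nthreads)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]
```

Locking every individual score instead would serialize the whole loop; batching into private partials is what keeps a threaded tally close to the serial cost.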
Parallel Tracking and Mapping for Controlling VTOL Airframe
Directory of Open Access Journals (Sweden)
Michal Jama
2011-01-01
Full Text Available This work presents a vision-based system for navigation on a vertical takeoff and landing unmanned aerial vehicle (UAV). This is a monocular vision based, simultaneous localization and mapping (SLAM) system, which measures the position and orientation of the camera and builds a map of the environment using a video stream from a single camera. This differs from past SLAM solutions on UAVs, which use sensors that measure depth, such as LIDAR, stereoscopic cameras or depth cameras. The solution presented in this paper extends and significantly modifies a recent open-source algorithm that solves the SLAM problem using an approach fundamentally different from the traditional approach. The proposed modifications provide the position measurements necessary for the navigation solution on a UAV. The main contributions of this work include: (1) extension of the map-building algorithm to enable it to be used realistically while controlling a UAV and simultaneously building the map; (2) improved performance of the SLAM algorithm for lower camera frame rates; and (3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV that uses monocular vision for navigation is feasible.
Latent tracks in polymeric etched track detectors
International Nuclear Information System (INIS)
Yamauchi, Tomoya
2013-01-01
Track registration properties in polymeric track detectors, including poly(allyl diglycol carbonate), bisphenol A polycarbonate, poly(ethylene terephthalate), and polyimide, have been investigated by means of Fourier transform infrared (FT-IR) spectrometry. A chemical criterion for the track formation threshold has been proposed, instead of the conventional physical track registration models. (author)
Energy Technology Data Exchange (ETDEWEB)
Stastny, P.
2007-03-15
Many employees are now choosing to work from home using laptops and telephones. Employers in the oil and gas industry are now reaping a number of benefits from their telecommuting employees, including increased productivity, higher levels of employee satisfaction, and less absenteeism. Providing a telecommuting option can prove to be advantageous for employers wishing to hire or retain employees. Telecommuting may also help to reduce greenhouse gas (GHG) emissions. This article provided details of Teletrips Inc., a company that aids in the production of corporate social responsibility reports. Teletrips provides reports that document employee savings in time, vehicle depreciation and maintenance, and gasoline costs. Teletrips currently tracks 12 companies in Calgary, and plans to grow through the development of key technology partnerships. The company is also working with the federal government to provide their clients with emission trading credits, and has forged a memorandum of understanding with the British Columbia government for tracking emissions. Calgary now openly supports telecommuting and is encouraging businesses in the city to adopt telecommuting on a larger scale. It was concluded that the expanding needs for road infrastructure and the energy used by cars to move workers in and out of the city are a massive burden to the city's tax base. 1 fig.
P. Sharp
The CMS Inner Tracking Detector continues to make good progress. The objective for 2006 was to complete all of the CMS Tracker sub-detectors and to start the integration of the sub-detectors into the Tracker Support Tube (TST). The objective for 2007 is to deliver to CMS a completed, installed, commissioned and calibrated Tracking System (Silicon Strip and Pixels), aligned to < 100 µm, in April 2008, ready for the first physics collisions at the LHC. By November 2006 all of the sub-detectors had been delivered to the Tracker Integration Facility (TIF) at CERN, and the tests and QA procedures to be carried out on each sub-detector before integration had been established. In December 2006, TIB/TID+ was integrated into TOB+, TIB/TID- was being prepared for integration, and TEC+ was undergoing tests at the final tracker operating temperature (-10 °C) in the Lyon cold room. In February 2007, TIB/TID- was integrated into TOB-, and the installation of the pixel support tube and the services for TI...
Adaptive track scheduling to optimize concurrency and vectorization in GeantV
International Nuclear Information System (INIS)
Apostolakis, J; Brun, R; Carminati, F; Gheata, A; Novak, M; Wenzel, S; Bandieramonte, M; Bitzes, G; Canal, P; Elvira, V D; Jun, S Y; Lima, G; Licht, J C De Fine; Duhem, L; Sehgal, R; Shadura, O
2015-01-01
The GeantV project is focused on the R and D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics requiring tuning the thresholds to switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events in the transport pipeline to boost locality, dynamically adjusting the particle vector size, or switching from vector to single-track mode when vectorization causes only overhead. This work requires a comprehensive study for optimizing these parameters to make the behaviour of the scheduler self-adapting; its initial results are presented here. (paper)
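The basket-scheduling idea described above can be sketched as follows. This is an illustrative toy model, not GeantV code: the class and parameter names are invented, and the vector and scalar code paths are simple stand-ins for the real SIMD kernels.

```python
from collections import defaultdict

class BasketScheduler:
    """Toy scheduler in the spirit of track basketization: tracks are
    gathered per geometry volume into 'baskets'; a basket is dispatched
    to the vector code path once it reaches vector_size, and leftover
    tracks fall back to a scalar code path at flush time."""

    def __init__(self, vector_size=4):
        self.vector_size = vector_size
        self.baskets = defaultdict(list)   # volume id -> pending tracks
        self.vector_calls = 0
        self.scalar_calls = 0

    def add_track(self, volume_id, track):
        self.baskets[volume_id].append(track)
        if len(self.baskets[volume_id]) >= self.vector_size:
            self.dispatch_vector(volume_id)

    def dispatch_vector(self, volume_id):
        # Process a full basket with (pretend) SIMD-friendly vector code.
        self.baskets[volume_id] = []
        self.vector_calls += 1

    def flush(self):
        # Remaining partial baskets are cheaper to run in scalar mode,
        # since vectorization would cause only overhead.
        for basket in self.baskets.values():
            self.scalar_calls += len(basket)
        self.baskets.clear()

sched = BasketScheduler(vector_size=4)
for i in range(10):
    sched.add_track(volume_id=i % 2, track=i)  # tracks alternate between 2 volumes
sched.flush()
# 10 tracks over 2 volumes: each volume fills one basket of 4 (2 vector
# dispatches), leaving 1 track per volume for the scalar fallback.
```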
Evolution of the SOFIA tracking control system
Fiebig, Norbert; Jakob, Holger; Pfüller, Enrico; Röser, Hans-Peter; Wiedemann, Manuel; Wolf, Jürgen
2014-07-01
The airborne observatory SOFIA (Stratospheric Observatory for Infrared Astronomy) is undergoing a modernization of its tracking system. This includes new, highly sensitive tracking cameras, control computers, filter wheels and other equipment, as well as a major redesign of the control software. The experiences along the migration path from an aged 19-inch VMEbus-based control system to the application of modern industrial PCs, from the VxWorks real-time operating system to embedded Linux and a state-of-the-art software architecture are presented. Further, the concept is presented to operate the new camera also as a scientific instrument, in parallel to tracking.
Algorithms for parallel computers
International Nuclear Information System (INIS)
Churchhouse, R.F.
1985-01-01
Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)
Parallelism and array processing
International Nuclear Information System (INIS)
Zacharov, V.
1983-01-01
Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)
Parallel magnetic resonance imaging
International Nuclear Information System (INIS)
Larkman, David J; Nunes, Rita G
2007-01-01
Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section (SENSE, SMASH, g-SMASH and GRAPPA) selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. We show how to recognize potential failure modes and their associated artefacts. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed, and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
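The unfolding problem that SENSE solves can be illustrated with a minimal 1-D sketch: undersampling by a factor R = 2 folds two pixels on top of each other in every coil image, and known coil sensitivities let a small least-squares system separate them again. All data here are synthetic; a real implementation works on 2-D k-space data with measured sensitivities.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                           # full field of view (pixels), acceleration R = 2
rho = rng.random(N)             # "true" 1-D image (synthetic)
S = rng.random((4, N)) + 0.5    # sensitivities of 4 coils (synthetic, nonzero)

# Undersampling by R = 2 folds pixel x with pixel x + N/2 in each coil image.
folded = np.array([[S[c, x] * rho[x] + S[c, x + N // 2] * rho[x + N // 2]
                    for x in range(N // 2)] for c in range(4)])

# SENSE unfolding: for each folded pixel, solve a small least-squares system
# relating the 4 coil measurements to the 2 overlapping true pixels.
recon = np.zeros(N)
for x in range(N // 2):
    A = S[:, [x, x + N // 2]]               # 4 coils x 2 unknown pixels
    sol, *_ = np.linalg.lstsq(A, folded[:, x], rcond=None)
    recon[x], recon[x + N // 2] = sol
```

With noiseless data the reconstruction is exact; in practice noise amplification in this inversion is what the g-factor quantifies.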
Dynamic grid refinement for partial differential equations on parallel computers
International Nuclear Information System (INIS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems. 6 refs
DEFF Research Database (Denmark)
Bruun, Maja Hojer; Krause-Jensen, Jakob; Saltofte, Margit
2015-01-01
In this chapter, we argue that although anthropology has its specific methodology – including a myriad of ethnographic data-gathering tools, techniques, analytical approaches and theories – it must first and foremost be understood as a craft. Anthropology as craft requires a specific ‘anthropological sensibility’ that differs from the standardized procedures of normal science. To establish our points we use an example of problem-based project work conducted by a group of Techno-Anthropology students at Aalborg University; we focus on key aspects of this craft and how the students began to learn it: for two weeks the students followed the work of a group of porters. Drawing on anthropological concepts and research strategies the students gained crucial insights about the potential effects of using tracking technologies in the hospital.
International Nuclear Information System (INIS)
Gaillard, J.M.
1994-03-01
A large-size scintillating plastic fibre tracking detector was built as part of the upgrade of the UA2 central detector at the SPS proton-antiproton collider. The cylindrical fibre detector of average radius of 40 cm consisted of 60000 plastic fibres with an active length of 2.1 m. One of the main motivations was to improve the electron identification. The fibre ends were bunched to be coupled to read-out systems of image intensifier plus CCD, 32 in total. The quality and the reliability of the UA2 fibre detector performance exceeded expectations throughout its years of operation. A few examples of the use of image intensifiers and of scintillating fibres in biological instrumentation are described. (R.P.) 11 refs., 15 figs., 2 tabs
Scalable Track Detection in SAR CCD Images
Energy Technology Data Exchange (ETDEWEB)
Chow, James G [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Quach, Tu-Thach [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-03-01
Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are often too simple to capture natural track features such as continuity and parallelism. We present a simple convolutional network architecture consisting of a series of 3-by-3 convolutions to detect tracks. The network is trained end-to-end to learn natural track features entirely from data. The network is computationally efficient and improves the F-score on a standard dataset to 0.988, up from 0.907 obtained by the current state-of-the-art method.
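A stack of plain 3-by-3 convolutions of the kind the abstract describes can be sketched in NumPy. This is an illustrative toy, not the authors' trained network: the kernels here are random rather than learned, and a real network would also include learned biases and many channels per layer.

```python
import numpy as np

def conv3x3(image, kernel):
    """Valid 3x3 cross-correlation of a 2-D image (no padding, stride 1)."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return out

rng = np.random.default_rng(1)
ccd = rng.random((16, 16))                 # stand-in for a CCD image patch

# A stack of 3x3 convolutions: each layer shrinks each side by 2 (valid mode)
# while growing the receptive field, which is how a deep 3x3-only network can
# capture extended features such as track continuity and parallelism.
k1, k2 = rng.random((3, 3)), rng.random((3, 3))
feat = np.maximum(conv3x3(ccd, k1), 0.0)   # ReLU nonlinearity between layers
feat = conv3x3(feat, k2)                   # 16x16 -> 14x14 -> 12x12
```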
The 2nd Symposium on the Frontiers of Massively Parallel Computations
Mills, Ronnie (Editor)
1988-01-01
Programming languages, computer graphics, neural networks, massively parallel computers, SIMD architecture, algorithms, digital terrain models, sort computation, simulation of charged particle transport on the massively parallel processor and image processing are among the topics discussed.
New ways of polymeric ion track characterization
International Nuclear Information System (INIS)
Fink, D.; Mueller, M.; Ghosh, S.; Dwivedi, K.K.; Vacik, J.; Hnatowicz, V.; Cervena, J.; Kobayashi, Y.; Hirata, K.
1999-01-01
New ways have been applied for characterization of ion tracks in polymers in the last few years, which are essentially related to depth profile determinations of ions, molecules, or positrons penetrating into these tracks. In combination with tomography, the first three-dimensional results have been obtained. Extensive diffusion simulations accompanying the measurements have enabled us to obtain a better understanding of the transport processes going on in ion tracks. This paper gives an overview about the range of new possibilities accessible by these techniques, and summarizes the presently obtained understanding of ion tracks in polymers
The STAPL Parallel Graph Library
Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence
2013-01-01
This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable
Fiber tracking for brain tumor
International Nuclear Information System (INIS)
Yamada, Kei; Nakamura, Hisao; Ito, Hirotoshi; Tanaka, Osamu; Kubota, Takao; Yuen, Sachiko; Kizu, Osamu; Nishimura, Tsunehiko
2003-01-01
The purpose of this study was to validate an innovative scanning method for patients diagnosed with brain tumors. Using a 1.5 Tesla whole body magnetic resonance (MR) imager, 23 patients with brain tumors were scanned. The recorded data points of the diffusion-tensor imaging (DTI) sequences were 128 x 37 with the parallel imaging technique. The parallel imaging technique was equivalent to a true resolution of 128 x 74. The scan parameters were repetition time (TR) = 6000 ms, echo time (TE) = 88 ms, and 6 averages with a b-value of 800 s/mm². The total scan time for DTI was 4 minutes and 24 seconds. DTI scans and subsequent fiber tracking were successfully applied in all cases. All fiber tracts on the contralesional side were visualized in the expected locations. Fiber tracts on the lesional side had varying degrees of displacement, disruption, or a combination of displacement and disruption due to the tumor. Tract disruption resulted from direct tumor involvement, compression upon the tract, and vasogenic edema surrounding the tumor. This DTI method using a parallel imaging technique allows for clinically feasible fiber tracking that can be incorporated into a routine MR examination. (author)
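Deterministic fiber tracking of this kind boils down to repeatedly stepping along the principal eigenvector of the local diffusion tensor. A minimal sketch, assuming a synthetic uniform tensor field; the field, seed, and step parameters are invented for illustration:

```python
import numpy as np

def principal_direction(D):
    """Principal diffusion direction: eigenvector of the largest eigenvalue."""
    w, v = np.linalg.eigh(D)
    return v[:, np.argmax(w)]

# Hypothetical tensor field: diffusion everywhere strongest along +x.
def D_field(pos):
    return np.diag([1.0, 0.2, 0.2])

def track_fiber(seed, steps=50, step_size=0.5):
    """Deterministic streamline tracking: step along the local principal
    eigenvector, keeping the direction sign-consistent between steps."""
    pos, prev = np.array(seed, float), np.array([1.0, 0.0, 0.0])
    path = [pos.copy()]
    for _ in range(steps):
        d = principal_direction(D_field(pos))
        if np.dot(d, prev) < 0:        # eigenvectors have an arbitrary sign
            d = -d
        pos = pos + step_size * d
        prev = d
        path.append(pos.copy())
    return np.array(path)

path = track_fiber(seed=(0.0, 0.0, 0.0))
```

A clinical tracker adds stopping criteria (low anisotropy, sharp turning angles) and interpolates the tensor field between voxels.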
Bridging Theory and Practice in an Applied Retail Track
Lange, Fredrik; Rosengren, Sara; Colliander, Jonas; Hernant, Mikael; Liljedal, Karina T.
2018-01-01
In this article, we present an educational approach that bridges theory and practice: an applied retail track. The track has been co-created by faculty and 10 partnering retail companies and runs in parallel with traditional courses during a 3-year bachelor's degree program in retail management. The underlying pedagogical concept is to move retail…
Delay, F.; de Marsily, G.; Carlier, E.
1994-10-01
For the last fifteen years or so, the random-walk methods have proved their worth in solving the transport equation in porous and fractured media. Their principal shortcomings remain their relatively slow calculation speed and their lack of precision at low concentrations. This paper proposes a new code which eliminates these disadvantages by managing the particles not individually but in the form of numerical values (representing the number of particles in each phase, mobile and immobile) assigned to each cell in a 1-D system. The calculation time then is short, and it is possible to introduce as many particles as desired into the model without increasing the calculation time. A large number of injection types can be simulated, and to the classical convection-dispersion phenomenon can be added a process of exchange between the mobile and immobile phase according to first-order kinetics. Because the particles are managed as numbers, the analytical solution obtained for the exchange during a time step reduces the calculation to a simple assignation of numerical values to two variables, one of which represents the mobile and the other the immobile phase; the calculation is then almost instantaneous. Because the program is developed in C, it leaves much room for graphic interaction which greatly facilitates the fitting of tracer experiments with a limited set of parameters.
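The per-cell particle-number scheme described above can be sketched in a few lines. This toy 1-D version uses invented rate constants, and a forward-Euler exchange step stands in for the paper's analytical per-step solution; dispersion is omitted to keep the sketch short.

```python
import numpy as np

# Each cell stores particle *numbers* in a mobile and an immobile phase,
# so the cost is independent of how many particles are injected.
ncells, nsteps = 100, 50
mobile = np.zeros(ncells)
immobile = np.zeros(ncells)
mobile[0] = 1e6                     # instantaneous injection at the inlet

k1, k2, dt = 0.05, 0.02, 1.0        # first-order exchange rates, time step

for _ in range(nsteps):
    # Convection: shift the mobile phase one cell downstream per step.
    mobile[1:], mobile[0] = mobile[:-1].copy(), 0.0
    # First-order mobile <-> immobile exchange over one time step
    # (forward-Euler stand-in for the analytical per-step solution).
    transfer = (k1 * mobile - k2 * immobile) * dt
    mobile -= transfer
    immobile += transfer

total = mobile.sum() + immobile.sum()   # mass is conserved inside the domain
```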
Massively parallel multicanonical simulations
Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard
2018-03-01
Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10⁴ parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
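The communication pattern the abstract describes, independent walkers that periodically merge their histograms to update a shared multicanonical weight table, can be sketched on a toy system. This is illustrative only: the walkers run serially here, the system is a 1-D walker with energy E(x) = |x|, and the simple merged-histogram update stands in for more refined weight-iteration schemes.

```python
import numpy as np

rng = np.random.default_rng(2)

def energy(x):
    return abs(x)                   # toy "energy landscape" on x in [-10, 10]

logw = np.zeros(11)                 # shared log-weights, indexed by energy 0..10

def run_walker(x, nsweeps):
    """One independent multicanonical walker between weight updates."""
    hist = np.zeros(11)
    for _ in range(nsweeps):
        xp = x + rng.choice((-1, 1))
        if -10 <= xp <= 10:
            # Metropolis acceptance with multicanonical weights w(E).
            if np.log(rng.random()) < logw[energy(xp)] - logw[energy(x)]:
                x = xp
        hist[energy(x)] += 1
    return x, hist

walkers = [0] * 8                   # 8 "parallel" walkers (run serially here)
for _ in range(20):                 # 20 weight-update iterations
    total_hist = np.zeros(11)
    for i, x in enumerate(walkers):
        walkers[i], h = run_walker(x, nsweeps=200)
        total_hist += h
    # Merged-histogram update: penalize over-visited energies so the
    # combined histogram flattens over the iterations.
    visited = total_hist > 0
    logw[visited] -= np.log(total_hist[visited])
    logw -= logw.max()              # fix the overall gauge of the weights
```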
2006-01-01
13 March 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a portion of a trough in the Sirenum Fossae region. On the floor and walls of the trough, large -- truck- to house-sized -- boulders are observed at rest. However, there is evidence in this image for the potential for mobility. In the central portion of the south (bottom) wall, a faint line of depressions extends from near the middle of the wall, down to the rippled trough floor, ending very near one of the many boulders in the area. This line of depressions is a boulder track; it indicates the path followed by the boulder as it trundled downslope and eventually came to rest on the trough floor. Because it is on Mars, even when the boulder is sitting still, this once-rolling stone gathers no moss. Location near: 29.4°S, 146.6°W Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Summer
SPINning parallel systems software
International Nuclear Information System (INIS)
Matlin, O.S.; Lusk, E.; McCune, W.
2002-01-01
We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin
Parallel programming with Python
Palach, Jan
2014-01-01
A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.
The MAJORANA Parts Tracking Database
Energy Technology Data Exchange (ETDEWEB)
Abgrall, N. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Aguayo, E. [Pacific Northwest National Laboratory, Richland, WA (United States); Avignone, F.T. [Department of Physics and Astronomy, University of South Carolina, Columbia, SC (United States); Oak Ridge National Laboratory, Oak Ridge, TN (United States); Barabash, A.S. [Institute for Theoretical and Experimental Physics, Moscow (Russian Federation); Bertrand, F.E. [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Brudanin, V. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Busch, M. [Department of Physics, Duke University, Durham, NC (United States); Triangle Universities Nuclear Laboratory, Durham, NC (United States); Byram, D. [Department of Physics, University of South Dakota, Vermillion, SD (United States); Caldwell, A.S. [South Dakota School of Mines and Technology, Rapid City, SD (United States); Chan, Y-D. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Christofferson, C.D. [South Dakota School of Mines and Technology, Rapid City, SD (United States); Combs, D.C. [Department of Physics, North Carolina State University, Raleigh, NC (United States); Triangle Universities Nuclear Laboratory, Durham, NC (United States); Cuesta, C.; Detwiler, J.A.; Doe, P.J. [Center for Experimental Nuclear Physics and Astrophysics, and Department of Physics, University of Washington, Seattle, WA (United States); Efremenko, Yu. [Department of Physics and Astronomy, University of Tennessee, Knoxville, TN (United States); Egorov, V. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Ejiri, H. [Research Center for Nuclear Physics and Department of Physics, Osaka University, Ibaraki, Osaka (Japan); Elliott, S.R. [Los Alamos National Laboratory, Los Alamos, NM (United States); and others
2015-04-11
The MAJORANA DEMONSTRATOR is an ultra-low background physics experiment searching for the neutrinoless double beta decay of {sup 76}Ge. The MAJORANA Parts Tracking Database is used to record the history of components used in the construction of the DEMONSTRATOR. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. A web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.
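The schema-free record-plus-location-history idea can be sketched as follows. The field names, shielding factors, and dates below are invented for illustration and are not the experiment's actual schema or exposure model.

```python
from datetime import date

# Hypothetical, simplified part record in the spirit of a schema-free
# (CouchDB-style) document: free-form fields plus a location history that
# doubles as a cosmic-ray exposure log. Shielding factors are invented.
SHIELDING = {"surface lab": 1.0, "shallow storage": 0.1, "underground": 0.0}

part = {
    "_id": "Cu-plate-042",
    "process_history": ["machining", "acid etch", "cleaning"],
    "location_history": [
        ("surface lab", date(2013, 1, 1), date(2013, 1, 31)),
        ("shallow storage", date(2013, 1, 31), date(2013, 3, 2)),
        ("underground", date(2013, 3, 2), date(2013, 6, 1)),
    ],
}

def surface_equivalent_days(record):
    """Cosmic exposure as surface-equivalent days from the location history."""
    return sum(SHIELDING[loc] * (end - start).days
               for loc, start, end in record["location_history"])

exposure = surface_equivalent_days(part)
# 30 days at the surface + 30 days at 0.1 shielding = 33 surface-equivalent days
```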
The MAJORANA Parts Tracking Database
Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y.-D.; Christofferson, C. D.; Combs, D. C.; Cuesta, C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Esterline, J.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J. Diaz; Leviner, L. E.; Loach, J. C.; MacMullin, J.; Martin, R. D.; Meijer, S. J.; Mertens, S.; Miller, M. L.; Mizouni, L.; Nomachi, M.; Orrell, J. L.; O`Shaughnessy, C.; Overman, N. R.; Petersburg, R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Soin, A.; Suriano, A. M.; Tedeschi, D.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C.-H.; Yumatov, V.; Zhitnikov, I.
2015-04-01
The MAJORANA DEMONSTRATOR is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The MAJORANA Parts Tracking Database is used to record the history of components used in the construction of the DEMONSTRATOR. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. A web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.
Parallel algorithms for online trackfinding at PANDA
Energy Technology Data Exchange (ETDEWEB)
Bianchi, Ludovico; Ritman, James; Stockmanns, Tobias [IKP, Forschungszentrum Juelich GmbH (Germany); Herten, Andreas [JSC, Forschungszentrum Juelich GmbH (Germany); Collaboration: PANDA-Collaboration
2016-07-01
The PANDA experiment, one of the four scientific pillars of the FAIR facility currently in construction in Darmstadt, is a next-generation particle detector that will study collisions of antiprotons with beam momenta of 1.5-15 GeV/c on a fixed proton target. Because of the broad physics scope and the similar signature of signal and background events, PANDA's strategy for data acquisition is to continuously record data from the whole detector and use this global information to perform online event reconstruction and filtering. A real-time rejection factor of up to 1000 must be achieved to match the incoming data rate for offline storage, making all components of the data processing system computationally very challenging. Online particle track identification and reconstruction is an essential step, since track information is used as input in all following phases. Online tracking algorithms must ensure a delicate balance between high tracking efficiency and quality, and minimal computational footprint. For this reason, a massively parallel solution exploiting multiple Graphic Processing Units (GPUs) is under investigation. The talk presents the core concepts of the algorithms being developed for primary trackfinding, along with details of their implementation on GPUs.
Volpi, Guido; The ATLAS collaboration
2015-01-01
An overview of the ATLAS Fast Tracker processor is presented, reporting the design of the system, its expected performance, and the integration status. The next LHC runs, with a significant increase in instantaneous luminosity, will provide a big challenge to the trigger and data acquisition systems of all the experiments. An intensive use of the tracking information at the trigger level will be important to keep high efficiency in interesting events, despite the increase in multiple p-p collisions per bunch crossing (pile-up). In order to increase the use of tracks within the High Level Trigger (HLT), the ATLAS experiment planned the installation of a hardware processor dedicated to tracking: the Fast TracKer (FTK) processor. The FTK is designed to perform full scan track reconstruction at every Level-1 accept. To achieve this goal, the FTK uses a fully parallel architecture, with algorithms designed to exploit the computing power of custom VLSI chips, the Associative Memory, as well as modern FPGAs. The FT...
Vectoring of parallel synthetic jets
Berk, Tim; Ganapathisubramani, Bharathram; Gomit, Guillaume
2015-11-01
A pair of parallel synthetic jets can be vectored by applying a phase difference between the two driving signals. The resulting jet can be merged or bifurcated and either vectored towards the actuator leading in phase or the actuator lagging in phase. In the present study, the influence of phase difference and Strouhal number on the vectoring behaviour is examined experimentally. Phase-locked vorticity fields, measured using Particle Image Velocimetry (PIV), are used to track vortex pairs. The physical mechanisms that explain the diversity in vectoring behaviour are observed based on the vortex trajectories. For a fixed phase difference, the vectoring behaviour is shown to be primarily influenced by pinch-off time of vortex rings generated by the synthetic jets. Beyond a certain formation number, the pinch-off timescale becomes invariant. In this region, the vectoring behaviour is determined by the distance between subsequent vortex rings. We acknowledge the financial support from the European Research Council (ERC grant agreement no. 277472).
VLSI structures for track finding
International Nuclear Information System (INIS)
Dell'Orso, M.
1989-01-01
We discuss the architecture of a device based on the concept of associative memory designed to solve the track finding problem, typical of high energy physics experiments, in a time span of a few microseconds even for very high multiplicity events. This ''machine'' is implemented as a large array of custom VLSI chips. All the chips are equal and each of them stores a number of ''patterns''. All the patterns in all the chips are compared in parallel to the data coming from the detector while the detector is being read out. (orig.)
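The associative-memory matching principle, every stored pattern independently accumulating matches as hits stream in, can be sketched in software. This is a toy model of hardware that in reality compares all patterns simultaneously during readout; the pattern contents and names are invented.

```python
# Each "chip" stores hit patterns as tuples of (detector layer, address);
# incoming hits are broadcast to all patterns, and each pattern
# independently records which of its layers have been hit, firing
# ("road found") when all of its layers are matched.
patterns = {
    "trk-A": (("layer0", 3), ("layer1", 5), ("layer2", 8)),
    "trk-B": (("layer0", 3), ("layer1", 6), ("layer2", 9)),
}

def find_roads(hits):
    """Return the patterns ('roads') fully matched by the hit stream."""
    matched_layers = {name: set() for name in patterns}
    for hit in hits:                       # hits arrive during readout
        for name, pattern in patterns.items():
            if hit in pattern:             # all patterns compared "in parallel"
                matched_layers[name].add(hit[0])
    return [name for name, layers in matched_layers.items()
            if len(layers) == len(patterns[name])]

roads = find_roads([("layer0", 3), ("layer2", 8), ("layer1", 5)])
```

In hardware the inner loop costs no extra time: each pattern has its own comparison logic, which is what makes the lookup finish within microseconds even for high-multiplicity events.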
Hardware packet pacing using a DMA in a parallel computer
Chen, Dong; Heidelberger, Phillip; Vranas, Pavlos
2013-08-13
Method and system for hardware packet pacing using a direct memory access controller in a parallel computer which, in one aspect, keeps track of a total number of bytes put on the network as a result of a remote get operation, using a hardware token counter.
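The byte-counting pacing mechanism can be sketched as follows. This is an illustrative software model with invented names, not the patented hardware design: a counter tracks in-flight bytes, and further packets are injected only while the counter stays under a limit.

```python
from collections import deque

class PacedInjector:
    """Sketch of packet pacing with a byte counter: a packet is put on the
    network only while the number of in-flight bytes stays below a limit;
    otherwise it waits in a queue until earlier packets drain."""

    def __init__(self, max_inflight_bytes):
        self.limit = max_inflight_bytes
        self.inflight = 0           # hardware token-counter analogue
        self.queue = deque()
        self.sent = []

    def submit(self, size):
        self.queue.append(size)
        self.pump()

    def pump(self):
        while self.queue and self.inflight + self.queue[0] <= self.limit:
            size = self.queue.popleft()
            self.inflight += size
            self.sent.append(size)  # packet put on the network

    def ack(self, size):
        self.inflight -= size       # receiver drained the packet
        self.pump()

pace = PacedInjector(max_inflight_bytes=1000)
for _ in range(4):
    pace.submit(400)                # 4 packets of 400 B requested
first_burst = len(pace.sent)        # only 2 fit under the 1000 B cap ...
pace.ack(400)                       # ... a third goes out after one ack
```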
Expressing Parallelism with ROOT
Energy Technology Data Exchange (ETDEWEB)
Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab
2017-11-22
The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
Expressing Parallelism with ROOT
Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.
2017-10-01
The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
Parallel Fast Legendre Transform
Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.
1998-01-01
We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation.
Practical parallel programming
Bauer, Barr E
2014-01-01
This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.
Parallel hierarchical radiosity rendering
Energy Technology Data Exchange (ETDEWEB)
Carter, Michael [Iowa State Univ., Ames, IA (United States)
1993-07-01
In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
Parallel universes beguile science
2007-01-01
A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.
Energy Technology Data Exchange (ETDEWEB)
2017-04-04
A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVIDIA's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than was possible before.
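As a reference for what the three ports parallelize, here is a serial sketch of k-means++ seed selection on 1-D data (the paper's code is C++; the function names here are illustrative, and the inner distance scan `d2` is the step that is data-parallel on GPU, OpenMP and the XMT):

```python
import random

def kmeans_pp_seeds(points, k, seed=0):
    """k-means++ seeding (Arthur & Vassilvitskii, 2007): the first seed is
    uniform at random; each later seed is drawn with probability proportional
    to its squared distance from the nearest seed chosen so far."""
    rng = random.Random(seed)
    seeds = [rng.choice(points)]
    while len(seeds) < k:
        # Distance scan: embarrassingly parallel over points.
        d2 = [min((p - s) ** 2 for s in seeds) for p in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):   # weighted sampling by prefix sum
            acc += w
            if acc >= r:
                seeds.append(p)
                break
        else:
            seeds.append(points[-1])   # guard against float round-off
    return seeds
```

The weighted-sampling step is a prefix sum over the distances, which is itself a classic parallel primitive on all three target platforms.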
International Nuclear Information System (INIS)
Gardes, D.; Volkov, P.
1981-01-01
Two parallel plate avalanche counters (PPACs) are considered: a 5×3 cm² counter (timing only) and a 15×5 cm² counter (timing and position). The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters. [fr]
GPU Computing For Particle Tracking
International Nuclear Information System (INIS)
Nishimura, Hiroshi; Song, Kai; Muriki, Krishna; Sun, Changchun; James, Susan; Qin, Yong
2011-01-01
This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize the accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performances, issues, and challenges from introducing GPU are also discussed. General Purpose computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from the GPU due to its embarrassingly parallel feature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multi-processors (MP) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ (2) is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
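The thread-per-particle mapping can be illustrated with a toy one-turn map standing in for Tracy++ (the map coefficients and aperture cut below are invented for illustration; a thread pool stands in for GPU threads):

```python
from concurrent.futures import ThreadPoolExecutor

def track_particle(state, n_turns=100):
    """Toy kick-drift map standing in for one turn of ring tracking; the
    particle 'survives' if it stays inside the (arbitrary) aperture cut."""
    x, px = state
    for _ in range(n_turns):
        px -= 0.1 * x + 0.02 * x * x   # linear focusing plus a small nonlinearity
        x += px
        if abs(x) > 1.0:               # dynamic-aperture cut
            return False
    return True

def dynamic_aperture(initial_states):
    # One independent task per particle: the embarrassingly parallel
    # structure that maps onto one GPU thread per particle.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(track_particle, initial_states))
```

Because each particle's fate is independent of all the others, the only scaling limit is how many initial conditions are launched, which is why the calculation fits the GPU model so naturally.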
International Nuclear Information System (INIS)
Brassier, Stephane
1998-01-01
The Magnetohydrodynamic (MHD) equations represent the coupling between the fluid dynamics equations and Maxwell's equations. We consider here a new MHD model with two temperatures. A Roe scheme is first constructed in the one-dimensional case, for a multi-species model and a general equation of state. The multidimensional case is treated thanks to the Powell approach. The notion of a Roe-Powell matrix, a generalization of the notion of a Roe matrix for multidimensional MHD, allows us to develop an original scheme on a curvilinear grid. The second part focuses on the modelling of a Plasma Opening Switch (POS). A front-tracking method is first set up, in order to correctly handle the deformation of the front between the vacuum and the plasma. Besides, by taking into account a general Ohm's law, we have to deal with the Hall effect, which leads to nonlinear transport equations with discontinuous coefficients. Several numerical schemes are proposed and tested on a variety of test cases. This work has allowed us to construct an industrial MHD code, intended to handle complex flows and in particular to correctly simulate the behaviour of the POS. (author) [fr]
Impact analysis on a massively parallel computer
International Nuclear Information System (INIS)
Zacharia, T.; Aramayo, G.A.
1994-01-01
Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper
Wald, Ingo; Ize, Santiago
2015-07-28
Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
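The two-phase method of the claim can be sketched as follows, with plain Python loops standing in for the n processors (function and variable names are illustrative assumptions):

```python
def parallel_populate(objects, n_procs, portion_of):
    """Two-phase parallel grid population.
    portion_of(obj) -> iterable of grid-portion ids the object overlaps."""
    # Phase 1: divide the objects into n distinct sets; each 'processor'
    # determines which grid portions bound each object in its set.
    per_proc_bins = []
    for p in range(n_procs):
        bins = {}
        for obj in objects[p::n_procs]:          # this processor's object share
            for portion in portion_of(obj):
                bins.setdefault(portion, []).append(obj)
        per_proc_bins.append(bins)
    # Phase 2: each 'processor' owns one grid portion and populates it with
    # every object any processor found to be bounded by that portion.
    grid = {p: [] for p in range(n_procs)}
    for bins in per_proc_bins:
        for portion, objs in bins.items():
            grid[portion].extend(objs)
    return grid
```

Splitting the work this way means no processor ever needs the full object list and no two processors ever write the same grid portion, which is the point of the two assignments in the claim.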
Ultrascalable petaflop parallel supercomputer
Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY
2010-07-20
A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
DEFF Research Database (Denmark)
Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert
Abstract [en] More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: first, you may read the extremely brief version of the in total 11 recommendations for best practice; second, you may acquaint yourself with the extended version of the recommendations; and finally, you may study the reasoning behind each of them. At the end of the text, we give...
PARALLEL MOVING MECHANICAL SYSTEMS
Directory of Open Access Journals (Sweden)
Florian Ion Tiberius Petrescu
2014-09-01
Full Text Available Parallel moving mechanical systems are solid, fast, and accurate. Among parallel systems, Stewart platforms stand out as the oldest, being fast, solid and precise. The work outlines a few main elements of Stewart platforms, beginning with the geometry of the platform and its kinematic elements, and then presenting a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile platform are then recorded by a rotation-matrix method. If a structural motoelement consists of two moving elements which translate relative to each other, it is more convenient for the drive train, and especially for the dynamics, to represent the motoelement as a single moving component. We thus have seven moving parts (the six motoelements, or feet, plus the mobile platform) and one fixed part.
Parallel pic plasma simulation through particle decomposition techniques
International Nuclear Information System (INIS)
Briguglio, S.; Vlad, G.; Di Martino, B.; Naples, Univ. 'Federico II'
1998-02-01
Particle-in-cell (PIC) codes are among the major candidates to yield a satisfactory description of the detail of kinetic effects, such as the resonant wave-particle interaction, relevant in determining the transport mechanism in magnetically confined plasmas. A significant improvement of the simulation performance of such codes can be expected from parallelization, e.g., by distributing the particle population among several parallel processors. Parallelization of a hybrid magnetohydrodynamic-gyrokinetic code has been accomplished within the High Performance Fortran (HPF) framework, and tested on the IBM SP2 parallel system, using a 'particle decomposition' technique. The adopted technique requires a moderate effort in porting the code to parallel form and results in intrinsic load balancing and modest inter-processor communication. The performance tests obtained confirm the hypothesis of high effectiveness of the strategy, if targeted towards moderately parallel architectures. Optimal use of resources is also discussed with reference to a specific physics problem. [it]
Xyce parallel electronic simulator.
Energy Technology Data Exchange (ETDEWEB)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.
2010-05-01
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.
Betchov, R
2012-01-01
Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
Okandan, Murat; Nielson, Gregory N.
2016-07-12
Solar tracking systems, as well as methods of using such solar tracking systems, are disclosed. More particularly, embodiments of the solar tracking systems include lateral supports horizontally positioned between uprights to support photovoltaic modules. The lateral supports may be raised and lowered along the uprights or translated to cause the photovoltaic modules to track the moving sun.
Energy Technology Data Exchange (ETDEWEB)
Adachi, Masaaki; Ogasawara, Shinobu; Kume, Etsuo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishizuki, Shigeru; Nemoto, Toshiyuki; Kawasaki, Nobuo; Kawai, Wataru [Fujitsu Ltd., Tokyo (Japan); Yatake, Yo-ichi [Hitachi Ltd., Tokyo (Japan)
2001-02-01
Several computer codes in the nuclear field have been vectorized, parallelized and transported on the FUJITSU VPP500 system, the AP3000 system, the SX-4 system and the Paragon system at the Center for Promotion of Computational Science and Engineering in the Japan Atomic Energy Research Institute. We dealt with 18 codes in fiscal 1999. These results are reported in 3 parts, i.e., the vectorization and parallelization part on vector processors, the parallelization part on scalar processors, and the porting part. In this report, we describe the vectorization and parallelization on vector processors: the vectorization of the Relativistic Molecular Orbital Calculation code RSCAT, the microscopic transport code JAM for high-energy nuclear collisions, the three-dimensional non-steady thermal-fluid analysis code STREAM, the Relativistic Density Functional Theory code RDFT, and the High Speed Three-Dimensional Nodal Diffusion code MOSRA-Light, on the VPP500 system and the SX-4 system. (author)
A programmable associative memory for track finding
International Nuclear Information System (INIS)
Bardi, A.; Belforte, S.; Donati, S.; Galeotti, S.; Giannetti, P.; Morsani, F.; Passuello, D.; Spinella, F.; Cerri, A.; Punzi, G.; Ristori, L.; Dell'Orso, M.; Meschi, E.; Leger, A.; Speer, T.; Wu, X.
1998-01-01
We present a device, based on the concept of associative memory for pattern recognition, dedicated to on-line track finding in high-energy physics experiments. A large pattern bank, describing all possible tracks, can be organized into field programmable gate arrays where all patterns are compared in parallel to data coming from the detector during readout. Patterns, recognized among 2^66 possible combinations, are output in a few 30 MHz clock cycles. Programmability results in a flexible, simple architecture and allows one to keep up smoothly with technology improvements. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Nishioka, K.; Nakamura, Y. [Graduate School of Energy Science, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan); Nishimura, S. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); Lee, H. Y. [Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of); Kobayashi, S.; Mizuuchi, T.; Nagasaki, K.; Okada, H.; Minami, T.; Kado, S.; Yamamoto, S.; Ohshima, S.; Konoshima, S.; Sano, F. [Institute of Advanced Energy, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan)
2016-03-15
A moment approach to calculate neoclassical transport in non-axisymmetric torus plasmas composed of multiple ion species is extended to include the external parallel momentum sources due to unbalanced tangential neutral beam injections (NBIs). The momentum sources that are included in the parallel momentum balance are calculated from the collision operators of background particles with fast ions. This method is applied to clarify the physical mechanism of the neoclassical parallel ion flows and the multi-ion-species effect on them in Heliotron J NBI plasmas. It is found that the parallel ion flow can be determined by the balance between the parallel viscosity and the external momentum source in the region where the external source is much larger than the thermodynamic-force-driven source in collisional plasmas. This is because the friction between C⁶⁺ and D⁺ prevents a large difference between the C⁶⁺ and D⁺ flow velocities in such plasmas. The C⁶⁺ flow velocities, which are measured by the charge exchange recombination spectroscopy system, are numerically evaluated with this method. It is shown that the experimentally measured C⁶⁺ impurity flow velocities do not clearly contradict the neoclassical estimations, and the dependence of the parallel flow velocities on the magnetic field ripples is consistent in both results.
Resistor Combinations for Parallel Circuits.
McTernan, James P.
1978-01-01
To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
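Tables like those the article describes are easy to regenerate from the parallel-resistance formula 1/R = 1/R1 + 1/R2 + ... (a sketch; the function names are ours, and exact rational arithmetic keeps the whole-number test robust):

```python
from fractions import Fraction

def parallel_resistance(*rs):
    """Total resistance of resistors in parallel: 1/R = sum of 1/Ri."""
    return 1 / sum(Fraction(1, r) for r in rs)

def whole_number_pairs(limit):
    """Pairs (R1, R2) up to `limit` ohms whose parallel total is a whole
    number: the kind of values such classroom tables list."""
    return [(a, b)
            for a in range(1, limit + 1)
            for b in range(a, limit + 1)
            if parallel_resistance(a, b).denominator == 1]
```

For example, 3 Ω and 6 Ω in parallel give exactly 2 Ω, a convenient classroom case, while 2 Ω and 3 Ω give the less tidy 6/5 Ω.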
SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS
Directory of Open Access Journals (Sweden)
M. K. Bouza
2017-01-01
Full Text Available The object of research is the tools to support the development of parallel programs in C/C++. Methods and software which automate the process of designing parallel applications are proposed.
Parallel External Memory Graph Algorithms
DEFF Research Database (Denmark)
Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari
2010-01-01
In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest common ancestors ... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
Parallel learning in an autoshaping paradigm.
Naeem, Maliha; White, Norman M
2016-08-01
In an autoshaping task, a single conditioned stimulus (CS; lever insertion) was repeatedly followed by the delivery of an unconditioned stimulus (US; food pellet into an adjacent food magazine) irrespective of the rats' behavior. After repeated training trials, some rats responded to the onset of the CS by approaching and pressing the lever (sign-trackers). Lesions of dorsolateral striatum almost completely eliminated responding to the lever CS while facilitating responding to the food magazine (US). Lesions of the dorsomedial striatum attenuated but did not eliminate responding to the lever CS. Lesions of the basolateral or central nucleus of the amygdala had no significant effects on sign-tracking, but combined lesions of the 2 structures impaired sign-tracking by significantly increasing latency to the first lever press without affecting the number of lever presses. Lesions of the dorsal hippocampus had no effect on any of the behavioral measures. The findings suggest that sign-tracking with a single lever insertion as the CS may consist of 2 separate behaviors learned in parallel: An amygdala-mediated conditioned orienting and approach response and a dorsal striatum-mediated instrumental response. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Parallel Robot for Lower Limb Rehabilitation Exercises
Directory of Open Access Journals (Sweden)
Alireza Rastegarpanah
2016-01-01
Full Text Available The aim of this study is to investigate the capability of a 6-DoF parallel robot to perform various rehabilitation exercises. The foot trajectories of twenty healthy participants were measured by a Vicon system during the performance of four different exercises. Based on the kinematics and dynamics of a parallel robot, a MATLAB program was developed in order to calculate the lengths of the actuators, the actuators' forces, the workspace, and the singularity locus of the robot during the performance of the exercises. The calculated actuator lengths and forces were used by motion analysis in SolidWorks in order to simulate the different foot trajectories with the CAD model of the robot. A physical parallel robot prototype was built in order to simulate and execute the foot trajectories of the participants. A Kinect camera was used to track the motion of the leg model placed on the robot. The results demonstrate the robot's capability to perform a full range of various rehabilitation exercises.
Parallel Monte Carlo Search for Hough Transform
Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.
2017-10-01
We investigate the problem of line detection in digital image processing, and in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of a Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of space forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transfers the problem of line detection into one of optimization of the peak in a vote-counting process for cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, it can have a reduced effectiveness in detection in the presence of noise. Our first contribution consists of an evaluation of the use of a variation of the Radon Transform as a means of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
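The vote-counting process at the heart of the Hough Transform can be sketched as follows (a serial reference version; the bin sizes and threshold are illustrative assumptions, and this voting loop over points is exactly what the parallel variants distribute):

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0, threshold=3):
    """Classic Hough voting for lines: each point (x, y) votes for every
    (theta, rho) cell it could lie on, using rho = x cos(theta) + y sin(theta);
    cells whose vote count reaches `threshold` are reported as lines."""
    votes = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (t, round(rho / rho_res))   # quantize into accumulator cells
            votes[key] = votes.get(key, 0) + 1
    return [cell for cell, v in votes.items() if v >= threshold]
```

Each point votes independently, so the accumulator can be filled in parallel with an atomic or reduced merge, but the memory cost of the accumulator and its sensitivity to noise are the weaknesses the abstract highlights.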
Parallel inter channel interaction mechanisms
International Nuclear Information System (INIS)
Jovic, V.; Afgan, N.; Jovic, L.
1995-01-01
Parallel channel interactions are examined. Results of the phenomenon analysis and the mechanisms of parallel channel interaction, from experimental research on non-stationary flow regimes in three parallel vertical channels, are shown for adiabatic single-phase fluid and two-phase mixture flow. (author)
International Nuclear Information System (INIS)
Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampapa, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G
2007-01-01
The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results
A Parallel Butterfly Algorithm
Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing
2014-01-01
The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
New Parallel Algorithms for Landscape Evolution Model
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEMs) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize because of the computation of the drainage area for each node, which requires a huge amount of communication if run in parallel. To overcome this difficulty, we developed two parallel algorithms for LEMs with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that both are adequate to handle large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
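The quantity this record identifies as the parallel bottleneck, the per-node drainage area, can be sketched serially in a few lines. A hypothetical toy illustration of what the global reduction must produce; the function and variable names are invented:

```python
def drainage_areas(receiver, cell_area):
    # receiver[i] is the downstream node that node i drains into
    # (receiver[i] == i marks an outlet). Each node accumulates its own
    # cell area plus everything upstream, peeled off in topological order.
    n = len(receiver)
    area = list(cell_area)
    donors = [0] * n
    for i, r in enumerate(receiver):
        if r != i:
            donors[r] += 1
    stack = [i for i in range(n) if donors[i] == 0]
    while stack:
        i = stack.pop()
        r = receiver[i]
        if r != i:
            area[r] += area[i]
            donors[r] -= 1
            if donors[r] == 0:
                stack.append(r)
    return area

# A tiny stream net: 0 -> 1 -> 2 (outlet), plus 3 -> 1.
print(drainage_areas([1, 2, 2, 1], [1.0, 1.0, 1.0, 1.0]))  # → [1.0, 3.0, 4.0, 1.0]
```

In the paper's setting each process owns a subset of catchments and the upstream sums are combined by reductions across processes; the serial accumulation above defines the result those reductions must reproduce.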
Performance studies of the parallel VIM code
International Nuclear Information System (INIS)
Shi, B.; Blomquist, R.N.
1996-01-01
In this paper, the authors evaluate the performance of the parallel version of the VIM Monte Carlo code on the IBM SPx at the High Performance Computing Research Facility at ANL. Three test problems with contrasting computational characteristics were used to assess their effects on performance. A statistical method for estimating the inefficiencies due to load imbalance and communication is also introduced. VIM is a large-scale continuous-energy Monte Carlo radiation transport program and was parallelized using history partitioning, the master/worker approach, and the p4 message-passing library. Dynamic load balancing is accomplished by having the master processor assign chunks of histories to workers that have completed a previously assigned task, accommodating variations in the lengths of histories, processor speeds, and worker loads. At the end of each batch (generation), the fission sites and tallies are sent from each worker to the master process, contributing to the parallel inefficiency. All communications are between master and workers, and are serial. The SPx is a scalable 128-node parallel supercomputer with high-performance Omega switches of 63 microsecond latency and 35 MBytes/sec bandwidth. For uniform and reproducible performance, they used only the 120 identical regular processors (IBM RS/6000) and excluded the remaining eight planet nodes, which may be loaded by others' jobs
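The dynamic load-balancing scheme described here (the master hands a new chunk of histories to whichever worker finishes first) can be sketched with a shared task queue. A hypothetical toy using threads, not the p4 master/worker implementation:

```python
import queue
import threading

def run_batch(histories, n_workers=4, chunk=10):
    # Master side: split the batch into chunks of histories.
    tasks = queue.Queue()
    for i in range(0, len(histories), chunk):
        tasks.put(histories[i:i + chunk])
    tallies = []
    lock = threading.Lock()

    def worker():
        # Worker side: grab a fresh chunk as soon as the last one is done,
        # so faster workers naturally absorb more work.
        while True:
            try:
                work = tasks.get_nowait()
            except queue.Empty:
                return
            local = sum(work)          # stand-in for tracking the histories
            with lock:                 # "send tallies back to the master"
                tallies.append(local)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(tallies)

print(run_batch(list(range(100))))  # → 4950
```

Chunking is the key knob: chunks small enough that no worker sits idle at the end of a batch, but large enough that the master is not a bottleneck.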
International Nuclear Information System (INIS)
DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.
2010-01-01
The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement
Mueller, Matthias
2016-01-01
A persistent, robust and autonomous object tracking system for unmanned aerial vehicles (UAVs), called Persistent Aerial Tracking (PAT), is presented. A computer vision and control strategy is applied to a diverse set of moving objects (e.g. humans, animals, cars, boats, etc.)
Renewable Energy Tracking Systems
Renewable energy generation ownership can be accounted through tracking systems. Tracking systems are highly automated, contain specific information about each MWh, and are accessible over the internet to market participants.
Indian Academy of Sciences (India)
Abstract. Forward tracking is an essential part of a detector at the international linear collider (ILC). The requirements for forward tracking are explained and the proposed solutions in the detector concepts are shown.
Parallel Polarization State Generation.
She, Alan; Capasso, Federico
2016-05-17
The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by using a digital micromirror device to modulate spatially separated polarization components of a laser, which are subsequently beam-combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics, with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
Parallel imaging microfluidic cytometer.
Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching
2011-01-01
By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.
Parallel ray tracing for one-dimensional discrete ordinate computations
International Nuclear Information System (INIS)
Jarvis, R.D.; Nelson, P.
1996-01-01
The ray-tracing sweep in discrete-ordinates, spatially discrete numerical approximation methods applied to the linear, steady-state, plane-parallel, mono-energetic, azimuthally symmetric, neutral-particle transport equation can be reduced to a parallel prefix computation. In so doing, the often severe penalty in convergence rate of the source iteration, suffered by most current parallel algorithms using spatial domain decomposition, can be avoided while attaining parallelism in the spatial domain to whatever extent desired. In addition, the reduction implies parallel algorithm complexity limits for the ray-tracing sweep. The reduction applies to all closed, linear, one-cell functional (CLOF) spatial approximation methods, which encompasses most in current popular use. Scalability test results of an implementation of the algorithm on a 64-node nCube-2S hypercube-connected, message-passing, multi-computer are described. (author)
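The reduction the paper describes can be illustrated in one dimension: a sweep of the form psi[i+1] = a[i]*psi[i] + b[i] is a prefix computation over affine maps, because composing such maps is associative. A hedged toy sketch (serial evaluation of the scan; a parallel version would combine the same `compose` pairs in O(log n) rounds):

```python
def sweep_prefix(coeffs, psi0):
    # The 1-D sweep psi[i+1] = a[i]*psi[i] + b[i] written as a scan over
    # affine maps (a, b); composition of these maps is associative.
    def compose(f, g):
        # apply f, then g: g(f(x)) = g_a*(f_a*x + f_b) + g_b
        fa, fb = f
        ga, gb = g
        return (ga * fa, ga * fb + gb)

    out, acc = [psi0], (1.0, 0.0)   # start from the identity map
    for ab in coeffs:
        acc = compose(acc, ab)
        a, b = acc
        out.append(a * psi0 + b)
    return out

# Three identical attenuation-plus-source steps: psi -> 0.5*psi + 1.0
print(sweep_prefix([(0.5, 1.0)] * 3, 4.0))  # → [4.0, 3.0, 2.5, 2.25]
```

Because only `compose` is needed, the spatial recursion no longer forces a serial traversal, which is exactly how the source-iteration penalty of spatial domain decomposition is avoided.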
Detector independent cellular automaton algorithm for track reconstruction
Energy Technology Data Exchange (ETDEWEB)
Kisel, Ivan; Kulakov, Igor; Zyzak, Maksym [Goethe Univ. Frankfurt am Main (Germany); Frankfurt Institute for Advanced Studies, Frankfurt am Main (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH (Germany); Collaboration: CBM-Collaboration
2013-07-01
Track reconstruction is one of the most challenging problems of data analysis in modern high energy physics (HEP) experiments, which have to process, per second, of the order of 10^7 events with high track multiplicity and density, registered by detectors of different types and, in many cases, located in a non-homogeneous magnetic field. Creation of a reconstruction package common to all experiments is considered important in order to consolidate efforts. The cellular automaton (CA) track reconstruction approach has been used successfully in many HEP experiments. It is very simple, efficient, local and parallel. Meanwhile it is intrinsically independent of detector geometry, making it a good candidate for common track reconstruction. The CA implementation for the CBM experiment has been generalized and applied to the ALICE ITS and STAR HFT detectors. Tests with simulated collisions have been performed. The track reconstruction efficiencies are at the level of 95% for the majority of the signal tracks for all detectors.
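As a rough illustration of why the CA approach is "simple, efficient, local and parallel", here is a hypothetical toy on 1-D hits: segments between adjacent layers evolve a counter from their left neighbours only, so every update touches purely local state. Names and tolerances are invented for the sketch; this is not the CBM implementation:

```python
def ca_tracks(layers, tol=0.6):
    # Hits are 1-D positions per layer. Segments join hits on adjacent
    # layers within tol; each segment's counter grows from its best left
    # neighbour; the longest track length is read off the maximal counter.
    segs = []  # [layer, left_hit_index, right_hit_index, state]
    for L in range(len(layers) - 1):
        for i, x in enumerate(layers[L]):
            for j, y in enumerate(layers[L + 1]):
                if abs(y - x) <= tol:
                    segs.append([L, i, j, 1])
    changed = True
    while changed:  # CA evolution: local neighbour updates until stable
        changed = False
        for s in segs:
            for t in segs:
                if t[0] == s[0] - 1 and t[2] == s[1] and t[3] + 1 > s[3]:
                    s[3] = t[3] + 1
                    changed = True
    return max(s[3] for s in segs) + 1  # hits on the longest track

layers = [[0.0, 5.0], [0.1, 5.2], [0.2, 5.1], [0.3]]
print(ca_tracks(layers))  # → 4
```

Every segment update depends only on segments one layer upstream, so all segments can in principle be updated concurrently, which is the property that makes the method vectorize and parallelize well.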
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Li, K.-J.; Pakalnis, Stardas
2005-01-01
efficient tracking techniques. More specifically, while almost all commercially available tracking solutions simply offer time-based sampling of positions, this paper's techniques aim to offer a guaranteed tracking accuracy for each vehicle at the lowest possible costs, in terms of network traffic...
CALTRANS: A parallel, deterministic, 3D neutronics code
Energy Technology Data Exchange (ETDEWEB)
Carson, L.; Ferguson, J.; Rogers, J.
1994-04-01
Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message-passing library (based on the Intel NX/2 interface) are provided in appendices.
About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems
Directory of Open Access Journals (Sweden)
Loredana MOCEAN
2009-01-01
Full Text Available In recent years, efforts have been made to delineate a stable and unified framework in which the problems of parallel logical processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with the efforts made. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.
FTK: a Fast Track Trigger for ATLAS
International Nuclear Information System (INIS)
Anderson, J; Auerbach, B; Blair, R; Andreani, A; Andreazza, A; Citterio, M; Annovi, A; Beretta, M; Castegnaro, A; Atkinson, M; Cavaliere, V; Chang, P; Bevacqua, V; Crescioli, F; Blazey, G; Bogdan, M; Boveia, A; Canelli, F; Cheng, Y; Cervigni, F
2012-01-01
We describe the design and expected performance of the Fast Tracker Trigger (FTK) system for the ATLAS detector at the Large Hadron Collider. The FTK is a highly parallel hardware system designed to operate at the Level 1 trigger output rate. It is designed to provide global tracks reconstructed in the inner detector, with resolution comparable to the full offline reconstruction, as input to the Level 2 trigger processing. The hardware system is based on associative memories for pattern recognition and fast FPGAs for track reconstruction. The FTK is expected to dramatically improve the performance of track-based isolation and b-tagging with little to no dependence on pile-up interactions.
Online track processor for the CDF upgrade
International Nuclear Information System (INIS)
Thomson, E. J.
2002-01-01
A trigger track processor, called the eXtremely Fast Tracker (XFT), has been designed for the CDF upgrade. This processor identifies high transverse momentum (> 1.5 GeV/c) charged particles in the new central outer tracking chamber for CDF II. The XFT design is highly parallel to handle the input rate of 183 Gbits/s and output rate of 44 Gbits/s. The processor is pipelined and reports the result for a new event every 132 ns. The processor uses three stages: hit classification, segment finding, and segment linking. The pattern recognition algorithms for the three stages are implemented in programmable logic devices (PLDs) which allow in-situ modification of the algorithm at any time. The PLDs reside on three different types of modules. The complete system has been installed and commissioned at CDF II. An overview of the track processor and performance in CDF Run II are presented
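The quoted rates imply some useful back-of-envelope numbers. A sketch derived only from the figures above; the per-event bit counts are inferred, not stated in the record:

```python
clock_ns = 132                              # one result per 132 ns pipeline clock
event_rate_hz = 1e9 / clock_ns              # about 7.6e6 events per second
bits_in_per_event = 183e9 / event_rate_hz   # from the 183 Gbit/s input rate
bits_out_per_event = 44e9 / event_rate_hz   # from the 44 Gbit/s output rate
print(round(event_rate_hz), round(bits_in_per_event), round(bits_out_per_event))
```

So the processor digests roughly 24 kbit of hit data per event and emits roughly 6 kbit of track data, at about 7.6 million events per second, which is why the three stages must be both pipelined and highly parallel.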
Design, analysis and control of cable-suspended parallel robots and its applications
Zi, Bin
2017-01-01
This book provides an essential overview of the authors’ work in the field of cable-suspended parallel robots, focusing on innovative design, mechanics, control, development and applications. It presents and analyzes several typical mechanical architectures of cable-suspended parallel robots in practical applications, including the feed cable-suspended structure for super antennae, hybrid-driven-based cable-suspended parallel robots, and cooperative cable parallel manipulators for multiple mobile cranes. It also addresses the fundamental mechanics of cable-suspended parallel robots on the basis of their typical applications, including the kinematics, dynamics and trajectory tracking control of the feed cable-suspended structure for super antennae. In addition it proposes a novel hybrid-driven-based cable-suspended parallel robot that uses integrated mechanism design methods to improve the performance of traditional cable-suspended parallel robots. A comparative study on error and performance indices of hybr...
Parallel Framework for Cooperative Processes
Directory of Open Access Journals (Sweden)
Mitică Craus
2005-01-01
Full Text Available This paper describes the work of an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and it should be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
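The framework contract described (algorithms that run in cycles, with per-cycle work split among processing units and recombined by the master) can be sketched as follows. A hypothetical skeleton, not the authors' code, and it uses a thread pool where their system uses message passing:

```python
from concurrent.futures import ThreadPoolExecutor

class CyclicMasterSlave:
    # Minimal master-slave skeleton: any algorithm that runs in cycles and
    # whose per-cycle work splits across "processing units" plugs in a
    # per-partition work function and a master-side combiner.
    def __init__(self, work, combine, n_slaves=4):
        self.work = work
        self.combine = combine
        self.pool = ThreadPoolExecutor(n_slaves)

    def run(self, partitions, cycles):
        state = None
        for _ in range(cycles):
            results = list(self.pool.map(self.work, partitions))  # slaves
            state = self.combine(results, state)                  # master
        return state

# Toy use: each "slave" sums a slice; the master accumulates over cycles.
ms = CyclicMasterSlave(sum, lambda rs, s: (s or 0) + sum(rs))
print(ms.run([range(0, 50), range(50, 100)], cycles=3))  # → 14850
```

An ACO cycle would fit the same shape: `work` builds tours on a colony partition, and `combine` merges pheromone updates before the next cycle.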
Tracks: Nurses and the Tracking Network
Centers for Disease Control (CDC) Podcasts
2012-06-06
This podcast highlights the utility of the National Environmental Public Health Tracking Network for nurses in a variety of work settings. It features commentary from the American Nurses Association and includes stories from a public health nurse in Massachusetts. Created: 6/6/2012 by National Center for Environmental Health (NCEH)/Division of Environmental Hazards and Health Effects (DEHHE)/Environmental Health Tracking Branch (EHTB). Date Released: 6/6/2012.
The tracking of high level waste shipments-TRANSCOM system
International Nuclear Information System (INIS)
Johnson, P.E.; Joy, D.S.; Pope, R.B.
1995-01-01
The TRANSCOM (transportation tracking and communication) system is the U.S. Department of Energy's (DOE's) real-time system for tracking shipments of spent fuel, high-level wastes, and other high-visibility shipments of radioactive material. The TRANSCOM system has been operational since 1988. The system was used during FY 1993 to track almost 100 shipments within the U.S. DOE complex, and it is accessed weekly by 10 to 20 users
The tracking of high level waste shipments - TRANSCOM system
International Nuclear Information System (INIS)
Johnson, P.E.; Joy, D.S.; Pope, R.B.; Thomas, T.M.; Lester, P.B.
1994-01-01
The TRANSCOM (transportation tracking and communication) system is the US Department of Energy's (DOE's) real-time system for tracking shipments of spent fuel, high-level wastes, and other high-visibility shipments of radioactive material. The TRANSCOM system has been operational since 1988. The system was used during FY 1993 to track almost 100 shipments within the US DOE complex, and it is accessed weekly by 10 to 20 users
DEFF Research Database (Denmark)
Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.
2015-01-01
The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. … about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in the case of nucleobases Y and Z, a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one of the phosphate groups in the Watson-Crick strand. It was also shown that the nucleobase Y made good stacking and binding with the other nucleobases in the TFO and Watson-Crick duplex, respectively. In contrast, the nucleobase Z with the LNA moiety was forced to twist out of the plane of the Watson-Crick base pair, which …
Parallel consensual neural networks.
Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H
1997-01-01
A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
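The combination step of the PCNN (weight the stage-network outputs, then take a consensual decision) can be sketched in a few lines. A hypothetical toy with fixed weights; the paper instead obtains the weights by optimization:

```python
def consensus(stage_outputs, weights):
    # Each stage network classifies its own transformed view of the input;
    # the per-class scores are combined by the stage weights and the
    # argmax taken as the consensual decision.
    n_classes = len(stage_outputs[0])
    combined = [sum(w * out[c] for w, out in zip(weights, stage_outputs))
                for c in range(n_classes)]
    return max(range(n_classes), key=combined.__getitem__)

# Three stages disagree; the weighting decides which stages carry the vote.
stages = [[0.2, 0.8], [0.6, 0.4], [0.9, 0.1]]
print(consensus(stages, [0.5, 0.25, 0.25]))  # → 1
print(consensus(stages, [0.2, 0.3, 0.5]))    # → 0
```

The stages are independent given their transformed inputs, so they can be evaluated in parallel; only this cheap combination step is serial.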
Resolving Neighbourhood Relations in a Parallel Fluid Dynamic Solver
Frisch, Jerome
2012-06-01
Computational Fluid Dynamics simulations require an enormous computational effort if physically reasonable accuracy is to be reached. Therefore, a parallel implementation is inevitable. This paper describes the basics of our implemented fluid solver, with special attention to the hierarchical data structure, unique cell and grid identification, and the neighbourhood relations between grids on different processes. A special server concept keeps track of every grid over all processes while minimising data transfer between the nodes. © 2012 IEEE.
A Massively Parallel Code for Polarization Calculations
Akiyama, Shizuka; Höflich, Peter
2001-03-01
We present an implementation of our Monte-Carlo radiation transport method for rapidly expanding, NLTE atmospheres for massively parallel computers which utilizes both the distributed and shared memory models. This allows us to take full advantage of the fast communication and low latency inherent to nodes with multiple CPUs, and to stretch the limits of scalability with the number of nodes compared to a version which is based on the shared memory model. Test calculations on a local 20-node Beowulf cluster with dual CPUs showed an improved scalability by about 40%.
A parallel input composite transimpedance amplifier
Kim, D. J.; Kim, C.
2018-01-01
A new approach to high-performance current-to-voltage preamplifier design is presented. The design, using multiple operational amplifiers (op-amps), has a parasitic capacitance compensation network and a composite amplifier topology for fast, precise, and low-noise performance. The input stage, consisting of parallel-linked JFET op-amps, and a high-speed bipolar junction transistor (BJT) gain stage driving the output in the composite amplifier topology, cooperating with the capacitance compensation feedback network, ensure wide-bandwidth stability in the presence of input capacitance above 40 nF. The design is ideal for any two-probe measurement, including high-impedance transport and scanning tunneling microscopy measurements.
A Parallel Particle Swarm Optimizer
National Research Council Canada - National Science Library
Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D
2003-01-01
.... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...
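The pattern the report motivates (farm the expensive fitness evaluations out in parallel each iteration) can be sketched with a standard particle swarm. A hypothetical toy using a thread pool in place of the report's distributed workers; the coefficients are conventional defaults, not the authors' settings:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def pso(f, dim, n=20, iters=60, seed=1):
    # Standard global-best PSO; the only parallel part is the batched
    # fitness evaluation, which dominates the cost for expensive f.
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P, Pf = [x[:] for x in X], [float("inf")] * n
    gbest, gf = None, float("inf")
    with ThreadPoolExecutor(4) as pool:
        for _ in range(iters):
            fits = list(pool.map(f, X))        # parallel fitness evaluation
            for i, fi in enumerate(fits):
                if fi < Pf[i]:
                    P[i], Pf[i] = X[i][:], fi
                if fi < gf:
                    gbest, gf = X[i][:], fi
            for i in range(n):                 # velocity and position update
                for d in range(dim):
                    V[i][d] = (0.7 * V[i][d]
                               + 1.5 * rng.random() * (P[i][d] - X[i][d])
                               + 1.5 * rng.random() * (gbest[d] - X[i][d]))
                    X[i][d] += V[i][d]
    return gf

print(pso(lambda x: sum(v * v for v in x), dim=3))  # converges toward 0
```

Only the `pool.map` line changes when moving from serial to parallel evaluation, which is what makes population-based optimizers such natural candidates for parallelization.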
Patterns for Parallel Software Design
Ortega-Arjona, Jorge Luis
2010-01-01
Essential reading to understand patterns for parallel programming. Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managing...
DEFF Research Database (Denmark)
Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo
2013-01-01
… adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral …
Properties of Polyethylene Naphthalate Track Membranes
Akimenko, S N; Orelovich, O L; Maekawa, J; Ioshida, M; Apel, P Yu
2002-01-01
Basic characteristics of track membranes made of polyethylene naphthalate (which is a polyester synthesized from dimethyl naphthalate and ethylene glycol) are studied and presented. Polyethylene naphthalate possesses some properties (mechanical strength, thermal and chemical stability), which make this polymer a promising material for the production of track membranes. Water flow rate and air flow rate characteristics, burst strength, wettability, and amount of extractables are determined. Surface structure and pore structure are examined using scanning electron microscopy. It is found that the pores in the membranes are cylindrical in shape. The measured water and air flow rates follow known theoretical relations for the transport in narrow capillaries. The burst strength of polyethylene naphthalate membranes is found to be similar to that of polyethylene terephthalate track membranes. Polyethylene naphthalate track membranes can be categorized as moderately hydrophilic. Being treated with boiling water, pol...
Parallel 3-D method of characteristics in MPACT
International Nuclear Information System (INIS)
Kochunas, B.; Downar, T. J.; Liu, Z.
2013-01-01
A new parallel 3-D MOC kernel has been developed and implemented in MPACT which makes use of the modular ray tracing technique to reduce computational requirements and to facilitate parallel decomposition. The parallel model makes use of both distributed and shared memory parallelism, which are implemented with the MPI and OpenMP standards, respectively. The kernel is capable of parallel decomposition of problems in space, angle, and by characteristic rays up to O(10^4) processors. Initial verification of the parallel 3-D MOC kernel was performed using the Takeda 3-D transport benchmark problems. The eigenvalues computed by MPACT are within the statistical uncertainty of the benchmark reference and agree well with the averages of other participants. The MPACT k_eff differs from the benchmark results for rodded and un-rodded cases by 11 and -40 pcm, respectively. The calculations were performed for various numbers of processors and parallel decompositions up to 15625 processors, all producing the same result at convergence. The parallel efficiency of the worst case was 60%, while very good efficiency (>95%) was observed for cases using 500 processors. The overall run time for the 500 processor case was 231 seconds and 19 seconds for the case with 15625 processors. Ongoing work is focused on developing theoretical performance models and the implementation of acceleration techniques to minimize the number of iterations to converge. (authors)
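The scaling figures quoted can be checked with the usual definition of relative parallel efficiency (actual over ideal speedup between two processor counts). A small sketch using the run times in this record:

```python
def relative_efficiency(t1, p1, t2, p2):
    # Relative parallel efficiency when scaling from p1 to p2 processes:
    # (actual speedup t1/t2) divided by (ideal speedup p2/p1).
    return (t1 * p1) / (t2 * p2)

# Run times quoted above: 231 s on 500 processors, 19 s on 15625.
print(round(relative_efficiency(231, 500, 19, 15625), 2))  # → 0.39
```

The drop from 231 s to 19 s is a 12x speedup against an ideal 31x, i.e. roughly 39% efficiency relative to the 500-processor run, illustrating how efficiency degrades at the largest decompositions.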
Wireless GPS fleet tracking system at the University of Albany.
2014-07-01
This report provides an overview of the project undertaken at the University at Albany to make alternative transportation a more viable option by implementing a GPS tracking system on the University bus fleet and broadcasting the bus locations to c...
Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.
Yuan, J; Moses, G A; McKenty, P W
2005-10-01
A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight-line approximation is used to follow the propagation of "Monte Carlo particles" which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal severely distorted mesh cells, particle relocation on the moving mesh, and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show the Monte Carlo transport method predicts earlier ignition than the diffusion method and generates a higher hot-spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.
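A 1-D toy of the tracking scheme (straight-line propagation with continuous slowing down onto mesh cells) is easy to sketch. Everything here is hypothetical and scalar, where the paper works on a distorted cylindrical Lagrangian mesh:

```python
import random

def deposit(n_particles, e0, mesh_cells, cell_len, dedx, seed=2):
    # Each "Monte Carlo particle" starts at x=0 moving right and deposits
    # dE = dedx * path_length in every cell it crosses until its energy
    # is exhausted (continuous slowing down approximation).
    rng = random.Random(seed)
    tally = [0.0] * mesh_cells
    for _ in range(n_particles):
        e = e0 * rng.uniform(0.9, 1.1)   # small source energy spread
        cell = 0
        while e > 0 and cell < mesh_cells:
            de = min(e, dedx * cell_len)
            tally[cell] += de
            e -= de
            cell += 1
    return tally

t = deposit(1000, e0=3.5, mesh_cells=10, cell_len=1.0, dedx=1.0)
print(sum(t))   # total deposited energy, about n_particles * mean source energy
```

Independent histories make the outer loop trivially parallel, which is the source of the nearly linear multi-processor speed-up the paper reports; the hard part on a moving mesh is locating which cell each path segment crosses.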
Parallel community climate model: Description and user's guide
Energy Technology Data Exchange (ETDEWEB)
Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H. [and others
1996-07-15
This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written, and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.
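The decomposition strategy described (geographical patches, each processor computing physics only on its local points) can be sketched for a rectangular latitude-longitude grid. A hypothetical sketch; PCCM2's actual patch assignment and load balancing are more involved:

```python
def decompose(nlat, nlon, pgrid):
    # Split an nlat x nlon grid into rectangular patches over a pr x pc
    # logical processor grid; each processor's physics then touches only
    # the grid points in its own patch.
    pr, pc = pgrid

    def split(n, parts):
        q, r = divmod(n, parts)
        sizes = [q + (1 if i < r else 0) for i in range(parts)]
        starts = [sum(sizes[:i]) for i in range(parts)]
        return list(zip(starts, sizes))

    patches = {}
    for i, (la0, nla) in enumerate(split(nlat, pr)):
        for j, (lo0, nlo) in enumerate(split(nlon, pc)):
            patches[i * pc + j] = (range(la0, la0 + nla),
                                   range(lo0, lo0 + nlo))
    return patches

p = decompose(64, 128, (4, 8))  # 32 processors
print(len(p), sum(len(a) * len(b) for a, b in p.values()))  # → 32 8192
```

The patches tile the grid exactly, so the physics step needs no communication at all; data movement is confined to the transform-based dynamics.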
49 CFR 213.59 - Elevation of curved track; runoff.
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Elevation of curved track; runoff. 213.59 Section... track; runoff. (a) If a curve is elevated, the full elevation shall be provided throughout the curve, unless physical conditions do not permit. If elevation runoff occurs in a curve, the actual minimum...
49 CFR 231.22 - Operation of track motor cars.
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Operation of track motor cars. 231.22 Section 231... motor cars. On and after August 1, 1963, it shall be unlawful for any railroad subject to the requirements of the Safety Appliance Acts to operate or permit to be operated on its line track motor cars to...
Parallel processing at the SSC: The fact and the fiction
International Nuclear Information System (INIS)
Bourianoff, G.; Cole, B.
1991-10-01
Accurately modelling the behavior of particles circulating in accelerators is a computationally demanding task. The particle tracking code currently in use at the SSC is based upon a ''thin element'' analysis (TEAPOT). In this model each magnet in the lattice is described by a thin element at which the particle experiences an impulsive kick. Each kick requires approximately 200 floating point operations (FLOP). For the SSC collider lattice, consisting of 10^4 elements, performing a tracking study for a set of 100 particles over 10^7 turns would require 2 x 10^15 FLOP. Even on a machine capable of 100 MFLOP/sec (MFLOPS), this would require 2 x 10^7 seconds, and many such runs are necessary. It should be noted that the accuracy with which the kicks are calculated is important: the large number of iterations involved will magnify the effects of small errors. The inability of current computational resources to perform the full calculation effectively motivates the migration of this calculation to the most powerful computers available. A survey of current research into new supercomputing technologies reveals that the supercomputers of the future will be parallel in nature. Further, numerous such machines exist today and are being used to solve other difficult problems. Thus it seems clear that it is not too early to begin developing tracking codes for parallel architectures. This report discusses implementing parallel processing at the SSC
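The cost estimate quoted in the abstract can be reproduced with a few lines of arithmetic (all figures are taken from the abstract; the 200-FLOP kick cost is the authors' approximation):

```python
# Back-of-the-envelope cost of the SSC tracking study described above.
flop_per_kick = 200      # approximate cost of one thin-element kick
elements = 10**4         # magnets in the SSC collider lattice
particles = 100          # particles tracked
turns = 10**7            # revolutions simulated

total_flop = flop_per_kick * elements * particles * turns
machine_flops = 100e6    # a 100 MFLOP/sec machine

print(total_flop)                  # total work: 2 x 10^15 FLOP
print(total_flop / machine_flops)  # 2 x 10^7 seconds, i.e. many months per run
```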
Deposition of molecular probes in heavy ion tracks
Esser, M
1999-01-01
By using polarized fluorescence techniques the physical properties of heavy ion tracks such as the dielectric number, molecular alignment and track radius can be traced by molecular fluorescence probes. Foils of poly(ethylene terephthalate) (PET) were used as a matrix for the ion tracks wherein fluorescence probes such as aminostyryl-derivatives can be incorporated using a suitable solvent, e.g. N,N'-dimethylformamide (DMF) as transport medium. The high sensitivity of fluorescence methods allowed the comparison of the probe properties in ion tracks with the virgin material. From the fluorescence Stokes shift the dielectric constants could be calculated, describing the dielectric surroundings of the molecular probes. The lower dielectric constant in the tracks gives clear evidence that there is no higher accommodation of the highly polar solvent DMF in the tracks compared with the virgin material. Otherwise the dielectric constant in the tracks should be higher than in the virgin material. The orientation of t...
Optical flow optimization using parallel genetic algorithm
Zavala-Romero, Olmo; Botella, Guillermo; Meyer-Bäse, Anke; Meyer Base, Uwe
2011-06-01
A new approach to optimizing the parameters of a gradient-based optical flow model using a parallel genetic algorithm (GA) is proposed. The main characteristics of the optical flow algorithm are its bio-inspiration and robustness against contrast, static patterns and noise, besides working consistently with several optical illusions where other algorithms fail. This model depends on many parameters, which determine the number of channels, the orientations required, and the length and shape of the kernel functions used in the convolution stage, among many more. The GA is used to find a set of parameters which improves the accuracy of the optical flow on inputs where ground-truth data is available. This set of parameters helps to understand which of them are better suited for each type of input, and can be used to estimate the parameters of the optical flow algorithm when used with videos that share similar characteristics. The proposed implementation takes into account the embarrassingly parallel nature of the GA and uses the OpenMP Application Programming Interface (API) to speed up the process of estimating an optimal set of parameters. The information obtained in this work can be used to dynamically reconfigure systems, with potential applications in robotics, medical imaging and tracking.
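The paper's OpenMP code is not shown; as an illustrative sketch, the embarrassingly parallel step of a GA is the independent fitness evaluation of each candidate parameter set. The Python version below uses a thread pool in place of an OpenMP thread team, and the fitness function is a stand-in, not the optical-flow model:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(params):
    # Stand-in objective; in the paper this would run the optical-flow
    # model with these parameters and score it against ground-truth flow.
    return sum((p - 0.5) ** 2 for p in params)

def evolve(pop, n_generations=5, seed=1):
    rng = random.Random(seed)
    # Thread team standing in for the paper's OpenMP threads: the fitness
    # of every candidate is evaluated independently, in parallel.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for _ in range(n_generations):
            scores = list(pool.map(fitness, pop))
            ranked = [p for _, p in sorted(zip(scores, pop))]
            elite = ranked[: len(pop) // 2]
            # Refill the population by mutating members of the elite half.
            pop = elite + [
                [g + rng.gauss(0, 0.1) for g in rng.choice(elite)]
                for _ in range(len(pop) - len(elite))
            ]
    return min(pop, key=fitness)

population = [[random.Random(i).random() for _ in range(4)] for i in range(16)]
best = evolve(population)
print(fitness(best))   # no worse than the best initial candidate's score
```

Because the elite half is carried forward each generation, the returned score is monotonically non-increasing over generations.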
PARALLEL IMPORT: REALITY FOR RUSSIA
Directory of Open Access Journals (Sweden)
Т. А. Сухопарова
2014-01-01
Full Text Available The problem of parallel imports is an urgent question at present. Legalization of parallel imports in Russia is expedient. This statement is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of this decision and to apply remedies to minimize them.
The Galley Parallel File System
Nieuwejaar, Nils; Kotz, David
1996-01-01
Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.
Parallelization of the FLAPW method
International Nuclear Information System (INIS)
Canning, A.; Mannstadt, W.; Freeman, A.J.
1999-01-01
The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer
Parallelization of the FLAPW method
Canning, A.; Mannstadt, W.; Freeman, A. J.
2000-08-01
The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.
Track filter on the basis of a cellular automaton
International Nuclear Information System (INIS)
Glazov, A.A.; Kisel', I.V.; Konotopskaya, E.V.; Ososkov, G.A.
1991-01-01
A filtering method for tracks in discrete detectors, based on a cellular automaton, is described. Application of this method to experimental data (from the ARES spectrometer) has been quite successful: a threefold reduction of input information, with the data grouped according to their belonging to separate tracks. This raises the percentage of useful events, which considerably simplifies and accelerates their subsequent recognition. The described cellular automaton for track filtering can be successfully applied on parallel computers, and also in on-line mode if a hardware implementation is used. 21 refs.; 11 figs
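The paper's concrete automaton rules are not reproduced in the abstract; the sketch below illustrates the general idea of CA-based track filtering on a toy hit set. The neighborhood rule (a hit survives only if supported by a hit in an adjacent detector plane) is an illustrative assumption:

```python
def ca_filter(hits, n_iter=3):
    """Cellular-automaton-style filter: a hit survives an iteration only
    if some hit in an adjacent detector plane lies within one cell of it.
    Isolated noise hits die out; hits lying on a track support each other."""
    hits = set(hits)
    for _ in range(n_iter):
        survivors = {
            (plane, cell)
            for plane, cell in hits
            if any((plane + dp, cell + dc) in hits
                   for dp in (-1, 1) for dc in (-1, 0, 1))
        }
        if survivors == hits:   # automaton has reached a fixed point
            break
        hits = survivors
    return hits

# A straight track through 6 planes plus two isolated noise hits.
track = {(p, p) for p in range(6)}
noise = {(2, 9), (4, 0)}
print(sorted(ca_filter(track | noise)))   # only the track hits remain
```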
Parallelizing AT with MatlabMPI
International Nuclear Information System (INIS)
2011-01-01
The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems in computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations rather than programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to upgrade its performance to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, which were developed by MIT Lincoln Laboratory, to set up a message-passing environment that could be called within MATLAB, establishing the necessary prerequisites for multithreaded processing capabilities. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we were able to demonstrate highly efficient per-processor speed increases in AT's beam-tracking functions. Extrapolating from these predictions, we can expect to reduce week-long computation runtimes to less than 15 minutes. This is a huge performance improvement and has enormous implications for the future computing power of the accelerator physics group at SSRL. However, one of the downfalls of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while
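The quoted figures (95% efficiency, ~380% speedup on four cores) can be placed in context with Amdahl's law, which bounds speedup by the serial fraction of a program. The serial fraction below is a hypothetical value chosen to reproduce roughly 95% efficiency on four cores, not a measured quantity from the paper:

```python
def amdahl_speedup(serial_fraction, n):
    # Amdahl's law: S(n) = 1 / (s + (1 - s) / n)
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

s = 0.017   # hypothetical serial fraction giving ~95% efficiency on 4 cores
for n in (4, 64, 512):
    speedup = amdahl_speedup(s, n)
    # Efficiency = speedup / processor count; it falls as n grows.
    print(n, round(speedup, 1), f"{speedup / n:.0%}")
```

Even a small serial fraction caps the achievable speedup at large processor counts, which is why extrapolations from quad-core efficiency should be treated cautiously.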
49 CFR 214.509 - Required visual illumination and reflective devices for new on-track roadway maintenance machines.
2010-10-01
... devices for new on-track roadway maintenance machines. 214.509 Section 214.509 Transportation Other... TRANSPORTATION RAILROAD WORKPLACE SAFETY On-Track Roadway Maintenance Machines and Hi-Rail Vehicles § 214.509 Required visual illumination and reflective devices for new on-track roadway maintenance machines. Each new...
Environmental Public Health Tracking
Centers for Disease Control (CDC) Podcasts
In this podcast series, CDC scientists address frequently asked questions about the National Environmental Public Health Tracking Network, including using and applying data, running queries, and much more.
Social Security Administration — DCS Budget Tracking System database contains budget information for the Information Technology budget and the 'Other Objects' budget. This data allows for monitoring...
Parallel Numerical Simulations of Water Reservoirs
Torres, Pedro; Mangiavacchi, Norberto
2010-11-01
The study of the water flow and scalar transport in water reservoirs is important for the determination of the water quality during the initial stages of the reservoir filling and during the life of the reservoir. For this scope, a parallel 2D finite element code for solving the incompressible Navier-Stokes equations coupled with scalar transport was implemented using the message-passing programming model, in order to perform simulations of hydropower water reservoirs in a computer cluster environment. The spatial discretization is based on the MINI element that satisfies the Babuska-Brezzi (BB) condition, which provides sufficient conditions for a stable mixed formulation. All the distributed data structures needed in the different stages of the code, such as preprocessing, solving and post-processing, were implemented using the PETSc library. The resulting linear systems for the velocity and the pressure fields were solved using the projection method, implemented by an approximate block LU factorization. In order to increase the parallel performance in the solution of the linear systems, we employ the static condensation method for solving the intermediate velocity at vertex and centroid nodes separately. We compare performance results of the static condensation method with the approach of solving the complete system. In our tests the static condensation method shows better performance for large problems, at the cost of an increased memory usage. Performance results for other intensive parts of the code in a computer cluster are also presented.
ATLAS Fast Tracker Status and Tracking at High luminosity LHC
Ilic, Nikolina; The ATLAS collaboration
2018-01-01
The LHC’s increase in centre of mass energy and luminosity in 2015 makes controlling trigger rates with high efficiency challenging. The ATLAS Fast TracKer (FTK) is a hardware processor built to reconstruct tracks at a rate of up to 100 kHz and provide them to the high level trigger. The FTK reconstructs tracks by matching incoming detector hits with pre-defined track patterns stored in associative memory on custom ASICs. Inner detector hits are fit to these track patterns using modern FPGAs. This talk describes the electronics system used for the FTK’s massive parallelization. The installation, commissioning and running of the system is happening in 2016, and is detailed in this talk. Tracking at High luminosity LHC is also presented.
3D neutron transport modelization
International Nuclear Information System (INIS)
Warin, X.
1996-12-01
Some nodal methods to solve the transport equation in 3D are presented. Two nodal methods presented at an OECD congress are described: the first is a low-degree method called RTN0; the second is a high-degree method called BDM1. Both methods can be accelerated with a fully consistent DSA. Parallelization results show that 98% of the time is spent in sweeps, and that transport sweeps are easily parallelized. (K.A.)
3D neutron transport modelization
Energy Technology Data Exchange (ETDEWEB)
Warin, X.
1996-12-01
Some nodal methods to solve the transport equation in 3D are presented. Two nodal methods presented at an OECD congress are described: the first is a low-degree method called RTN0; the second is a high-degree method called BDM1. Both methods can be accelerated with a fully consistent DSA. Parallelization results show that 98% of the time is spent in sweeps, and that transport sweeps are easily parallelized. (K.A.). 10 refs.
Parallel diffusion length on thermal neutrons in rod type lattices
International Nuclear Information System (INIS)
Ahmed, T.; Siddiqui, S.A.M.M.; Khan, A.M.
1981-11-01
Calculations of the diffusion length of thermal neutrons in lead-water and aluminum-water lattices in the direction parallel to the rods are performed using the one-group diffusion equation together with the Shevelev transport correction. The formalism is then applied to two practical cases, the Kawasaki (Hitachi) and Douglas Point (CANDU) reactor lattices. Our results are in good agreement with the observed values. (author)
Development of Industrial High-Speed Transfer Parallel Robot
International Nuclear Information System (INIS)
Kim, Byung In; Kyung, Jin Ho; Do, Hyun Min; Jo, Sang Hyun
2013-01-01
Parallel robots used in industry require high stiffness or high speed because of their structural characteristics. Nowadays, the importance of rapid transportation has increased in the distribution industry. In this light, an industrial parallel robot has been developed for high-speed transfer. The developed parallel robot can handle a maximum payload of 3 kg. For a payload of 0.1 kg, the trajectory cycle time is 0.3 s (come and go), and the maximum velocity is 4.5 m/s (pick & place work, adept cycle). In this motion, its maximum acceleration is very high, reaching approximately 13 g. In this paper, the design, analysis, and performance test results of the developed parallel robot system are introduced
Is Monte Carlo embarrassingly parallel?
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)
2012-07-01
Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
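The per-cycle rendezvous the paper identifies can be sketched with a barrier: no worker may begin the next fission-source cycle until all workers have banked their tallies. The "transport" here is a trivial stand-in for a real Monte Carlo history loop:

```python
import threading, random, queue

N_WORKERS = 4
results = queue.Queue()
barrier = threading.Barrier(N_WORKERS)

def run_cycles(worker_id, n_cycles=3, histories=1000):
    rng = random.Random(worker_id)
    for cycle in range(n_cycles):
        # Stand-in for transporting this worker's share of histories:
        # count "fission" events with some fixed probability per history.
        fissions = sum(rng.random() < 0.5 for _ in range(histories))
        results.put((cycle, worker_id, fissions))
        # Rendezvous point: every worker waits here until the fission
        # source for the next cycle can be assembled -- this is the
        # synchronization cost the paper identifies as a bottleneck.
        barrier.wait()

threads = [threading.Thread(target=run_cycles, args=(i,)) for i in range(N_WORKERS)]
for t in threads: t.start()
for t in threads: t.join()
print(results.qsize())   # 4 workers x 3 cycles = 12 banked tallies
```

In a real MPI code this barrier is the collective that gathers the fission bank and the k-effective estimate, and its cost grows with the number of processors.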
Is Monte Carlo embarrassingly parallel?
International Nuclear Information System (INIS)
Hoogenboom, J. E.
2012-01-01
Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
Parallel integer sorting with medium and fine-scale parallelism
Dagum, Leonardo
1993-01-01
Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
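The exact algorithms are specified in the paper itself; the bucketing idea behind barrel-sort (each processor owns one contiguous key range, sorts it locally, and the sorted buckets are concatenated) can be sketched serially:

```python
def barrel_sort(keys, n_buckets=4):
    """Sketch of the barrel-sort idea: route each key to the bucket
    ("processor") owning its key range, sort each bucket independently,
    then concatenate. In the real algorithm the routing step is done
    with message passing between processors."""
    lo, hi = min(keys), max(keys)
    width = (hi - lo) // n_buckets + 1   # size of each contiguous key range
    buckets = [[] for _ in range(n_buckets)]
    for k in keys:
        buckets[(k - lo) // width].append(k)
    out = []
    for b in buckets:          # each bucket could be sorted in parallel
        out.extend(sorted(b))
    return out

data = [29, 3, 17, 8, 44, 3, 12, 40]
print(barrel_sort(data))   # [3, 3, 8, 12, 17, 29, 40, 44]
```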
Template based parallel checkpointing in a massively parallel computer system
Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN
2009-01-13
A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
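A minimal sketch of the block-comparison idea in the patent follows: per-block checksums of a node's checkpoint data are compared against a template's checksums, and only the differing blocks are kept. MD5 and the 4-byte block size are illustrative choices, and the compression step mentioned in the patent is omitted:

```python
import hashlib

BLOCK = 4  # toy block size in bytes

def checksums(data):
    # One checksum per fixed-size block of the checkpoint data.
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta_checkpoint(node_data, template_sums):
    """Keep only the blocks whose checksum differs from the template --
    the rsync-style idea that shrinks the data each node must transmit."""
    return {
        i: node_data[i * BLOCK:(i + 1) * BLOCK]
        for i, s in enumerate(checksums(node_data))
        if i >= len(template_sums) or s != template_sums[i]
    }

template = b"AAAABBBBCCCCDDDD"
node = b"AAAAbbBBCCCCDDDD"          # one block changed on this node
delta = delta_checkpoint(node, checksums(template))
print(delta)                        # only block 1 needs to be stored
```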
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled of Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
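The asynchronous idea can be sketched as follows: rather than waiting for every particle in an iteration to finish, a particle is moved and resubmitted as soon as its own evaluation returns. This is an illustrative sketch, not the paper's algorithm; the objective, coefficients, and thread pool (standing in for a heterogeneous cluster) are all assumptions:

```python
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

def objective(x):
    # Stand-in analysis; in the paper this is an expensive MDO evaluation
    # whose runtime varies with the design point.
    return sum(xi * xi for xi in x)

def async_pso(n_particles=8, dim=3, n_evals=64, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best = min(pos, key=objective)
    done = 0
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {pool.submit(objective, pos[i]): i for i in range(n_particles)}
        while futures and done < n_evals:
            fut = next(as_completed(futures))   # take whichever finishes first
            i = futures.pop(fut)
            if fut.result() < objective(best):
                best = list(pos[i])
            # Asynchronous update: move particle i immediately, using the
            # best point known so far, instead of waiting for the rest of
            # the swarm to finish this "iteration".
            vel[i] = [0.7 * v + 1.5 * rng.random() * (b - x)
                      for v, x, b in zip(vel[i], pos[i], best)]
            pos[i] = [x + v for x, v in zip(pos[i], vel[i])]
            futures[pool.submit(objective, pos[i])] = i
            done += 1
    return best

print(objective(async_pso()))   # the best-known value only ever improves
```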
Parallel education: what is it?
Amos, Michelle Peta
2017-01-01
In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...
Balanced, parallel operation of flashlamps
International Nuclear Information System (INIS)
Carder, B.M.; Merritt, B.T.
1979-01-01
A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests
Ion track annealing in quartz investigated by small angle X-ray scattering
Energy Technology Data Exchange (ETDEWEB)
Schauries, D.; Afra, B.; Rodriguez, M.D. [Department of Electronic Materials Engineering, Research School of Physics and Engineering, The Australian National University, Canberra, ACT 2601 (Australia); Trautmann, C. [GSI Helmholtz Centre for Heavy Ion Research, Planckstrasse 1, 64291 Darmstadt (Germany); Technische Universität Darmstadt, 64287 Darmstadt (Germany); Hawley, A. [Australian Synchrotron, 800 Blackburn Road, Clayton, VIC 3168 (Australia); Kluth, P. [Department of Electronic Materials Engineering, Research School of Physics and Engineering, The Australian National University, Canberra, ACT 2601 (Australia)
2015-12-15
We report on the reduction of cross-section and length of amorphous ion tracks embedded within crystalline quartz during thermal annealing. The ion tracks were created via Au ion irradiation with an energy of 2.2 GeV. The use of synchrotron-based small angle X-ray scattering (SAXS) allowed characterization of the latent tracks, without the need for chemical etching. Temperatures between 900 and 1000 °C were required to see a notable change in track size. The shrinkage in cross-section and length was found to be comparable for tracks aligned perpendicular and parallel to the c-axis.
Tracking Ultrafast Carrier Dynamics in Single Semiconductor Nanowire Heterostructures
Directory of Open Access Journals (Sweden)
Taylor A.J.
2013-03-01
Full Text Available An understanding of non-equilibrium carrier dynamics in silicon (Si nanowires (NWs and NW heterostructures is very important due to their many nanophotonic and nanoelectronics applications. Here, we describe the first measurements of ultrafast carrier dynamics and diffusion in single heterostructured Si nanowires, obtained using ultrafast optical microscopy. By isolating individual nanowires, we avoid complications resulting from the broad size and alignment distribution in nanowire ensembles, allowing us to directly probe ultrafast carrier dynamics in these quasi-one-dimensional systems. Spatially-resolved pump-probe spectroscopy demonstrates the influence of surface-mediated mechanisms on carrier dynamics in a single NW, while polarization-resolved femtosecond pump-probe spectroscopy reveals a clear anisotropy in carrier lifetimes measured parallel and perpendicular to the NW axis, due to density-dependent Auger recombination. Furthermore, separating the pump and probe spots along the NW axis enabled us to track space and time dependent carrier diffusion in radial and axial NW heterostructures. These results enable us to reveal the influence of radial and axial interfaces on carrier dynamics and charge transport in these quasi-one-dimensional nanosystems, which can then be used to tailor carrier relaxation in a single nanowire heterostructure for a given application.
Can Tracking Improve Learning?
Duflo, Esther; Dupas, Pascaline; Kremer, Michael
2009-01-01
Tracking students into different classrooms according to their prior academic performance is controversial among both scholars and policymakers. If teachers find it easier to teach a homogeneous group of students, tracking could enhance school effectiveness and raise test scores of both low- and high-ability students. If students benefit from…
Attitude and position tracking
CSIR Research Space (South Africa)
Candy, LP
2011-01-01
Full Text Available Several applications require the tracking of attitude and position of a body based on velocity data. It is tempting to use direction cosine matrices (DCM), for example, to track attitude based on angular velocity data, and to integrate the linear...
International Nuclear Information System (INIS)
Reuther, H.
1976-11-01
This paper gives a survey of the present state of the development and application of solid state track detectors. The fundamentals of the physical and chemical processes of track formation and development are explained, the different detector materials and their registration characteristics are described, and the possibilities of experimental practice and the most varied applications are discussed. (author)
Dailey, Charles H.; Rankin, Kelly D.
This guide was developed to serve both the novice and experienced starter in track and field events. Each year in the United States, runners encounter dozens of different starters' mannerisms as they travel to track meets in various towns and states. The goal of any competent and conscientious starter is to insure that all runners receive a fair…
Large scale tracking algorithms
Energy Technology Data Exchange (ETDEWEB)
Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
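The evaluated algorithms are not specified in the abstract; as a point of contrast with multi-hypothesis tracking, the sketch below shows a minimal greedy nearest-neighbour data association step, which keeps exactly one hypothesis per track and therefore avoids combinatorial growth (the gate value and 2-D points are illustrative):

```python
import math

def associate(tracks, detections, gate=5.0):
    """Greedy nearest-neighbour data association: each track claims its
    closest unclaimed detection inside the gate. Keeping only this single
    hypothesis avoids the combinatorial explosion of an MHT, at the cost
    of irrecoverable mistakes on closely spaced objects."""
    pairs = sorted(
        (math.dist(t, d), ti, di)
        for ti, t in enumerate(tracks)
        for di, d in enumerate(detections)
    )
    assigned, used_t, used_d = {}, set(), set()
    for dist, ti, di in pairs:
        if dist <= gate and ti not in used_t and di not in used_d:
            assigned[ti] = di
            used_t.add(ti)
            used_d.add(di)
    return assigned

tracks = [(0.0, 0.0), (10.0, 0.0)]
detections = [(9.0, 1.0), (1.0, 0.0), (30.0, 30.0)]
print(associate(tracks, detections))   # {0: 1, 1: 0}; far detection unassigned
```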
George, J.; Irkens, M.; Neumann, S.; Scherer, U. W.; Srivastava, A.; Sinha, D.; Fink, D.
2006-03-01
It has long been common practice to follow the ion track-etching process in thin foils via conductometry, i.e. by measuring the electrical current that passes through the etched track once the track breakthrough condition has been achieved. The major disadvantage of this approach, namely the absence of any significant detectable signal before breakthrough, can be avoided by examining the track-etching process capacitively. This method allows one to determine precisely not only the breakthrough point before it is reached, but also the length of any non-transient track. Combining capacitive and conductive etching allows one to control the etching process completely. Examples and possible applications are given.
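One simple way to see why capacitance carries length information before breakthrough: an etched, electrolyte-filled pore of depth L behaves roughly like a coaxial capacitor, whose capacitance grows linearly with L. The model and all geometry parameters below are my own illustrative assumptions, not taken from the paper:

```python
from math import pi, log

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pore_capacitance(depth_m: float, r_inner: float, r_outer: float,
                     eps_r: float) -> float:
    """Coaxial-capacitor estimate for an electrolyte-filled etched pore:
    C = 2*pi*eps0*eps_r*L / ln(r_outer/r_inner)."""
    return 2 * pi * EPS0 * eps_r * depth_m / log(r_outer / r_inner)

# Capacitance scales linearly with etched depth, so monitoring C during
# etching tracks the pore length long before breakthrough occurs.
c1 = pore_capacitance(1e-6, 10e-9, 50e-9, eps_r=80.0)
c2 = pore_capacitance(2e-6, 10e-9, 50e-9, eps_r=80.0)
```

Doubling the etched depth doubles the estimated capacitance, which is the signal the capacitive method exploits.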
DEFF Research Database (Denmark)
Tække, Jesper
2015-01-01
In this short essay, concerning why we are tracking, I will try to frame tracking as an evolutionarily developed skill that humans need to survive. From an evolutionary point of view, life must reflect upon itself in regard to its surrounding world, as a kind of societal self-synchronization (Spencer 1890, Luhmann 2000, Tække 2014, 2011). I was inspired by Jill Walker Rettberg's book "Seeing Ourselves through Technology" and her presentation at the seminar "Tracking Culture" arranged by Anders Albrechtslund in Aarhus, January 2015.
Mueller, Matthias
2016-04-13
In this thesis, we propose a new aerial video dataset and benchmark for low-altitude UAV target tracking, as well as a photo-realistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. We also present a simulator that can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV "in the field", as well as to generate synthetic but photo-realistic tracking datasets with free ground-truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator will be made publicly available to the vision community to further research in the area of object tracking from UAVs. Additionally, we propose a persistent, robust and autonomous object tracking system for unmanned aerial vehicles (UAVs) called Persistent Aerial Tracking (PAT). A computer vision and control strategy is applied to a diverse set of moving objects (e.g. humans, animals, cars, boats, etc.), integrating multiple UAVs with a stabilized RGB camera. A novel strategy is employed to successfully track objects over a long period, by 'handing over the camera' from one UAV to another. We integrate the complete system into an off-the-shelf UAV, and obtain promising results showing the robustness of our solution in real-world aerial scenarios.
49 CFR 213.307 - Class of track: operating speed limits.
2010-10-01
... requirements for its intended class, it is to be reclassified to the next lower class of track for which it... 49 Transportation 4 § 213.307 Class of track: operating speed limits. (a) Except as provided in paragraph (b) of this...
49 CFR 236.16 - Electric lock, main track releasing circuit.
2010-10-01
... 49 Transportation 4 Rules and Instructions: All Systems General § 236.16 Electric lock, main track releasing circuit. When an electric lock releasing circuit is provided on the main track to permit a train or an engine to...
49 CFR 214.513 - Retrofitting of existing on-track roadway maintenance machines; general.
2010-10-01
... maintenance machines; general. Section 214.513 Transportation Other Regulations Relating to... SAFETY On-Track Roadway Maintenance Machines and Hi-Rail Vehicles § 214.513 Retrofitting of existing on-track roadway maintenance machines; general. (a) Each existing on-track roadway maintenance machine...
49 CFR 1242.10 - Administration-track (account XX-19-02).
2010-10-01
... 49 Transportation 9 Structures § 1242.10 Administration—track (account XX-19-02). Separate common administration—track expenses... accounts are separated between freight and passenger services: Roadway: Running (XX-17-10) Switching (XX-18...
Workspace Analysis for Parallel Robot
Directory of Open Access Journals (Sweden)
Ying Sun
2013-05-01
Full Text Available As a comparatively new type of robot, the parallel robot possesses advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, low self-weight/load ratio, good dynamic behavior and easy control, which has extended its range of application. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limits on link length is introduced. This paper analyses the position workspace and orientation workspace of a parallel robot with six degrees of freedom. The results show that changing the lengths of the branches of the parallel mechanism is the main means of enlarging or shrinking its workspace, and that the radius of the moving platform has no effect on the size of the workspace but does shift its position.
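The boundary-searching idea above can be illustrated with a minimal sketch: for a simplified planar parallel mechanism (a hypothetical three-leg geometry, not the paper's six-DOF robot), sample candidate platform positions, compute each leg length by inverse kinematics, and keep only points where every leg stays within its stroke limits:

```python
from math import dist
import random

# Hypothetical planar 3-leg parallel mechanism: fixed base anchor points
# and leg stroke limits (the paper treats a six-DOF spatial robot).
BASE = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.9)]
L_MIN, L_MAX = 0.3, 0.8

def in_workspace(p) -> bool:
    """Inverse kinematics for a point platform: each leg length is the
    distance from its base anchor; all must lie within the stroke limits."""
    return all(L_MIN <= dist(b, p) <= L_MAX for b in BASE)

def sample_workspace(n: int = 20000, seed: int = 0):
    """Rejection-sample the reachable region over a bounding box."""
    rng = random.Random(seed)
    pts = [(rng.uniform(-0.5, 1.5), rng.uniform(-0.5, 1.5)) for _ in range(n)]
    return [p for p in pts if in_workspace(p)]

reachable = sample_workspace()
```

Widening or narrowing the stroke limits (the analogue of changing branch lengths in the paper) directly grows or shrinks the sampled region, matching the paper's conclusion.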
"Feeling" Series and Parallel Resistances.
Morse, Robert A.
1993-01-01
Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
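The straw analogy maps airflow resistance to electrical resistance; the arithmetic behind it is the standard series and parallel combination rules (this sketch is generic, not taken from the note itself):

```python
def series(*resistances: float) -> float:
    """Series resistances add: R = R1 + R2 + ..."""
    return sum(resistances)

def parallel(*resistances: float) -> float:
    """Parallel conductances add: 1/R = 1/R1 + 1/R2 + ..."""
    return 1.0 / sum(1.0 / r for r in resistances)

# Two equal straws end to end double the resistance;
# side by side they halve it.
print(series(100.0, 100.0))    # 200.0
print(parallel(100.0, 100.0))  # 50.0
```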
Parallel encoders for pixel detectors
International Nuclear Information System (INIS)
Nikityuk, N.M.
1991-01-01
A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example of the construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs
Massively Parallel Finite Element Programming
Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang
2010-01-01
Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.
Event monitoring of parallel computations
Directory of Open Access Journals (Sweden)
Gruzlikov Alexander M.
2015-06-01
Full Text Available The paper considers the monitoring of parallel computations for detection of abnormal events. It is assumed that computations are organized according to an event model, and monitoring is based on specific test sequences
The STAPL Parallel Graph Library
Harshvardhan,
2013-01-01
This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.
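Of the three paradigms mentioned, the level-synchronous one is the simplest to illustrate: all vertices at distance d are processed before any vertex at distance d+1, with a global synchronization between levels. The sketch below is a generic single-process illustration of the paradigm, not the stapl API:

```python
def level_sync_bfs(adj: dict, source) -> dict:
    """Level-synchronous BFS: process the entire current frontier, then
    advance. In a distributed setting, each level boundary corresponds
    to a global barrier across all processors."""
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        next_frontier = []
        for u in frontier:               # whole frontier processed in parallel
            for v in adj.get(u, []):
                if v not in dist:
                    dist[v] = level
                    next_frontier.append(v)
        frontier = next_frontier         # barrier: advance to the next level
    return dist

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(level_sync_bfs(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```

The asynchronous paradigm, by contrast, lets updates propagate without level barriers, trading redundant work for reduced synchronization cost.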
International Nuclear Information System (INIS)
Le, Anh Dinh; Zhou Biao
2009-01-01
A three-dimensional, unsteady proton exchange membrane fuel cell (PEMFC) model with serpentine-parallel channels has been developed to simulate not only the fluid flow, heat transfer, species transport, electrochemical reaction, and current density distribution, but also the behavior of liquid water in the gas-liquid flow of the channels and porous media. Using this general model, the behavior of liquid water was investigated by following the motion, deformation, coalescence and detachment of water droplets inside the channels and the penetration of liquid through the porous media at different time instants. The results showed that tracking the interface of liquid water in a reacting gas-liquid flow in a PEMFC can be accomplished by using a volume-of-fluid (VOF) algorithm combined with solving the conservation equations of continuity, momentum, energy, species transport and electrochemistry, and that the presence of liquid water in the channels has a significant impact on the flow fields: the gas flow becomes unevenly distributed due to blockage by liquid water, high pressure builds up suddenly at blocked locations, and reactant gas transport in the channels and porous media is hindered by liquid water occupation
Monte Carlo Transport for Electron Thermal Transport
Chenhall, Jeffrey; Cao, Duc; Moses, Gregory
2015-11-01
The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short-mean-free-path regions with the accuracy of a transport method in long-mean-free-path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.
Parallel Tensor Compression for Large-Scale Scientific Data.
Energy Technology Data Exchange (ETDEWEB)
Kolda, Tamara G. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ballard, Grey [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Austin, Woody Nathan [Univ. of Texas, Austin, TX (United States)
2015-10-01
As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation of the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
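The 8 TB figure quoted above follows directly from the stated grid dimensions, assuming 8-byte double-precision values (the precision is my assumption; the abstract states only the total size):

```python
# 512^3 spatial grid x 64 variables x 128 time steps, 8 bytes per double.
points = 512 ** 3        # 134,217,728 grid points
variables = 64
timesteps = 128
bytes_total = points * variables * timesteps * 8
print(bytes_total / 1e12)  # 8.796... i.e. about 8 TB (exactly 8 TiB)
```

This is also the shape of the dense five-way tensor (512 x 512 x 512 x 64 x 128) to which the Tucker decomposition is applied.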