WorldWideScience

Sample records for gpu-based particle system

  1. Comparison of GPU-Based Numerous Particles Simulation and Experiment

    International Nuclear Information System (INIS)

    Park, Sang Wook; Jun, Chul Woong; Sohn, Jeong Hyun; Lee, Jae Wook

    2014-01-01

    The dynamic behavior of numerous grains interacting with each other can be easily observed. In this study, this dynamic behavior was analyzed based on the contacts between numerous grains. The discrete element method was used to analyze the dynamic behavior of each particle, and the neighboring-cell algorithm was employed to detect their contacts. The Hertzian and tangential sliding-friction contact models were used to calculate the contact forces acting between the particles. A GPU-based parallel program was developed to run the computer simulation and calculate the numerous contacts. A dam-break experiment was performed to verify the simulation results. The reliability of the program was verified by comparing the results of the simulation with those of the experiment.
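
The Hertzian normal contact model mentioned above can be sketched in a few lines. This is an illustrative stand-in, not the paper's code; the function and parameter names (e.g. `E_eff`) are assumptions.

```python
import numpy as np

def hertz_contact_force(x1, x2, r1, r2, E_eff):
    """Hertzian normal contact force on sphere 1 due to sphere 2.

    E_eff is the effective Young's modulus of the pair; all names
    here are illustrative, not taken from the paper's code.
    """
    d = np.linalg.norm(x2 - x1)            # centre-to-centre distance
    overlap = (r1 + r2) - d                # penetration depth
    if overlap <= 0.0:
        return np.zeros(3)                 # no contact, no force
    r_eff = r1 * r2 / (r1 + r2)            # effective radius
    k = (4.0 / 3.0) * E_eff * np.sqrt(r_eff)
    n = (x1 - x2) / d                      # unit normal pushing 1 away from 2
    return k * overlap**1.5 * n            # F = (4/3) E* sqrt(R*) delta^(3/2)

# two overlapping unit spheres: a repulsive force along the line of centres
f = hertz_contact_force(np.array([0.0, 0.0, 0.0]), np.array([1.5, 0.0, 0.0]),
                        r1=1.0, r2=1.0, E_eff=1e6)
```

In a DEM code this pairwise force is evaluated only for neighbor pairs returned by the cell algorithm, which is what makes the per-pair work embarrassingly parallel on a GPU.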

  2. Moving-Target Position Estimation Using GPU-Based Particle Filter for IoT Sensing Applications

    Directory of Open Access Journals (Sweden)

    Seongseop Kim

    2017-11-01

    Full Text Available A particle filter (PF) has been introduced for effective position estimation of moving targets in non-Gaussian and nonlinear systems. The time difference of arrival (TDOA) method using an acoustic sensor array is normally used for estimating the location of a concealed moving target, especially underwater. In this paper, we propose a GPU-based acceleration of target position estimation using a PF and propose an efficient system and software architecture. The proposed graphics processing unit (GPU)-based algorithm is well suited to applying PF signal processing in a target system consisting of large-scale Internet of Things (IoT)-driven sensors, because its parallelization is scalable. For the TDOA measurement from the acoustic sensor array, we use the generalized cross-correlation phase transform (GCC-PHAT) method to obtain the correlation coefficient of the signal using the Fast Fourier Transform (FFT), and we accelerate the GCC-PHAT-based TDOA calculations using FFT with the GPU Compute Unified Device Architecture (CUDA). The proposed approach applies a parallelization method to the target position estimation algorithm using GPU-based PF processing. In addition, it can efficiently estimate sudden changes in target movement using GPU-based parallel computing, which can also be used for multiple-target tracking. It also provides scalability in extending the detection algorithm as the number of sensors increases. Therefore, the proposed architecture can be applied in IoT sensing applications with a large number of sensors. The target estimation algorithm was verified using MATLAB and implemented using GPU CUDA. We implemented the proposed signal processing acceleration system on a target GPU and analyzed its execution time. The execution time of the algorithm is reduced by 55% relative to standalone CPU operation on the target embedded board, an NVIDIA Jetson TX1. Also, to apply large
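
The GCC-PHAT step described above whitens the cross-power spectrum so that only phase (i.e. delay) information remains. A minimal NumPy sketch for one sensor pair, assuming an integer-sample delay; names are illustrative, not from the paper's implementation:

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Delay of `sig` relative to `ref` via GCC-PHAT: the cross-power
    spectrum is normalized by its magnitude (PHAT weighting), leaving
    only phase information, then peak-picked in the time domain."""
    n = sig.shape[0] + ref.shape[0]
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-15                      # PHAT weighting
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

fs = 8000
rng = np.random.default_rng(0)
ref = rng.standard_normal(2048)                 # broadband source signal
sig = np.roll(ref, 25)                          # sensor hears it 25 samples late
tau = gcc_phat(sig, ref, fs)                    # tau is approximately 25 / fs
```

On a GPU, the per-pair FFTs and the whitened cross-spectra are the parts that parallelize naturally, which is the acceleration the paper targets.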

  3. Implementation and Optimization of GPU-Based Static State Security Analysis in Power Systems

    Directory of Open Access Journals (Sweden)

    Yong Chen

    2017-01-01

    Full Text Available Static state security analysis (SSSA) is one of the most important computations for checking whether a power system is in a normal and secure operating state. Due to the intensity of the computations, it is challenging to satisfy real-time requirements with CPU-based concurrent methods. A sensitivity-analysis-based method using a graphics processing unit (GPU) is proposed for power systems, which can reduce calculation time by 40% compared to execution on a 4-core CPU. The proposed method involves load flow analysis and sensitivity analysis. In load flow analysis, a multifrontal method for sparse LU factorization is explored on the GPU through dynamic frontal task scheduling between CPU and GPU. The varying matrix operations during sensitivity analysis on the GPU are highly optimized in this study. The results of performance evaluations show that the proposed GPU-based SSSA with optimized matrix operations achieves a significant reduction in computation time.

  4. SU-E-T-500: Initial Implementation of GPU-Based Particle Swarm Optimization for 4D IMRT Planning in Lung SBRT

    International Nuclear Information System (INIS)

    Modiri, A; Hagan, A; Gu, X; Sawant, A

    2015-01-01

    Purpose: 4D-IMRT planning, combined with dynamic MLC tracking delivery, utilizes the temporal dimension as an additional degree of freedom to achieve improved OAR-sparing. The computational complexity of such optimization increases exponentially with dimensionality. In order to accomplish this task in a clinically feasible time frame, we present an initial implementation of GPU-based 4D-IMRT planning based on particle swarm optimization (PSO). Methods: The target and normal structures were manually contoured on ten phases of a 4DCT scan of an NSCLC patient with a 54 cm³ right-lower-lobe tumor (1.5 cm motion). Ten corresponding 3D-IMRT plans were created in the Eclipse treatment planning system (Ver. 13.6). A vendor-provided scripting interface was used to export 3D dose matrices corresponding to each control point (10 phases × 9 beams × 166 control points = 14,940), which served as input to PSO. The optimization task was to iteratively adjust the weights of each control point and scale the corresponding dose matrices. In order to handle the large amount of data in GPU memory, dose matrices were sparsified and placed in contiguous memory blocks along with the 14,940 weight variables. PSO was implemented on CPU (dual Xeon, 3.1 GHz) and GPU (dual K20 Tesla, 2496 cores, 3.52 TFLOPS each) platforms. NiftyReg, an open-source deformable image registration package, was used to calculate the summed dose. Results: The 4D-PSO plan yielded PTV coverage comparable to the clinical ITV-based plan and significantly higher OAR-sparing, as follows: lung Dmean = 33%; lung V20 = 27%; spinal cord Dmax = 26%; esophagus Dmax = 42%; heart Dmax = 0%; heart Dmean = 47%. The GPU-PSO processing time for 14,940 variables and 7 PSO particles was 41% that of CPU-PSO (199 vs. 488 minutes). Conclusion: Truly 4D-IMRT planning can yield significant OAR dose-sparing while preserving PTV coverage. The corresponding optimization problem is large-scale, non-convex and computationally rigorous. Our initial results
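
The PSO weight optimization described in Methods can be illustrated with a generic global-best PSO. The objective below is a toy quadratic stand-in for the actual dose objective, and the coefficients and names are assumptions, not the authors' settings:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=200, seed=0):
    """Minimal global-best particle swarm optimizer.

    A generic sketch of the PSO update used in the abstract; the
    objective, bounds and coefficients here are illustrative only.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                           # particle velocities
    pbest = x.copy()                               # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()         # global best
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# toy stand-in for the plan objective: a shifted quadratic
best, val = pso_minimize(lambda w_: float(np.sum((w_ - 0.3) ** 2)), dim=5)
```

In the paper's setting the position vector has 14,940 components (one weight per control point) and the fitness evaluation, i.e. scaling and summing the sparsified dose matrices, is the GPU-parallelized step.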

  5. SU-E-T-500: Initial Implementation of GPU-Based Particle Swarm Optimization for 4D IMRT Planning in Lung SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Modiri, A; Hagan, A; Gu, X; Sawant, A [UT Southwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    Purpose: 4D-IMRT planning, combined with dynamic MLC tracking delivery, utilizes the temporal dimension as an additional degree of freedom to achieve improved OAR-sparing. The computational complexity of such optimization increases exponentially with dimensionality. In order to accomplish this task in a clinically feasible time frame, we present an initial implementation of GPU-based 4D-IMRT planning based on particle swarm optimization (PSO). Methods: The target and normal structures were manually contoured on ten phases of a 4DCT scan of an NSCLC patient with a 54 cm³ right-lower-lobe tumor (1.5 cm motion). Ten corresponding 3D-IMRT plans were created in the Eclipse treatment planning system (Ver. 13.6). A vendor-provided scripting interface was used to export 3D dose matrices corresponding to each control point (10 phases × 9 beams × 166 control points = 14,940), which served as input to PSO. The optimization task was to iteratively adjust the weights of each control point and scale the corresponding dose matrices. In order to handle the large amount of data in GPU memory, dose matrices were sparsified and placed in contiguous memory blocks along with the 14,940 weight variables. PSO was implemented on CPU (dual Xeon, 3.1 GHz) and GPU (dual K20 Tesla, 2496 cores, 3.52 TFLOPS each) platforms. NiftyReg, an open-source deformable image registration package, was used to calculate the summed dose. Results: The 4D-PSO plan yielded PTV coverage comparable to the clinical ITV-based plan and significantly higher OAR-sparing, as follows: lung Dmean = 33%; lung V20 = 27%; spinal cord Dmax = 26%; esophagus Dmax = 42%; heart Dmax = 0%; heart Dmean = 47%. The GPU-PSO processing time for 14,940 variables and 7 PSO particles was 41% that of CPU-PSO (199 vs. 488 minutes). Conclusion: Truly 4D-IMRT planning can yield significant OAR dose-sparing while preserving PTV coverage. The corresponding optimization problem is large-scale, non-convex and computationally rigorous. Our initial results

  6. NaNet: a low-latency NIC enabling GPU-based, real-time low level trigger systems

    International Nuclear Information System (INIS)

    Ammendola, Roberto; Biagioni, Andrea; Frezza, Ottorino; Cicero, Francesca Lo; Lonardo, Alessandro; Paolucci, Pier Stanislao; Rossetti, Davide; Simula, Francesco; Tosoratto, Laura; Vicini, Piero; Fantechi, Riccardo; Lamanna, Gianluca; Pantaleo, Felice; Piandani, Roberto; Sozzi, Marco; Pontisso, Luca

    2014-01-01

    We implemented the NaNet FPGA-based PCIe Gen2 GbE/APElink NIC, featuring GPUDirect RDMA capabilities and UDP protocol management offloading. NaNet is able to receive a UDP input data stream from its GbE interface and redirect it, without any intermediate buffering or CPU intervention, to the memory of a Fermi/Kepler GPU hosted on the same PCIe bus, provided that the two devices share the same upstream root complex. Synthetic benchmarks for latency and bandwidth are presented. We describe how NaNet can be employed in the prototype of the GPU-based RICH low-level trigger processor of the NA62 CERN experiment, to implement the data link between the TEL62 readout boards and the low level trigger processor. Results for the throughput and latency of the integrated system are presented and discussed.

  7. NaNet: a low-latency NIC enabling GPU-based, real-time low level trigger systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, Roberto [INFN, Rome – Tor Vergata (Italy); Biagioni, Andrea; Frezza, Ottorino; Cicero, Francesca Lo; Lonardo, Alessandro; Paolucci, Pier Stanislao; Rossetti, Davide; Simula, Francesco; Tosoratto, Laura; Vicini, Piero [INFN, Rome – Sapienza (Italy); Fantechi, Riccardo [CERN, Geneve (Switzerland); Lamanna, Gianluca; Pantaleo, Felice; Piandani, Roberto; Sozzi, Marco [INFN, Pisa (Italy); Pontisso, Luca [University, Rome (Italy)

    2014-06-11

    We implemented the NaNet FPGA-based PCIe Gen2 GbE/APElink NIC, featuring GPUDirect RDMA capabilities and UDP protocol management offloading. NaNet is able to receive a UDP input data stream from its GbE interface and redirect it, without any intermediate buffering or CPU intervention, to the memory of a Fermi/Kepler GPU hosted on the same PCIe bus, provided that the two devices share the same upstream root complex. Synthetic benchmarks for latency and bandwidth are presented. We describe how NaNet can be employed in the prototype of the GPU-based RICH low-level trigger processor of the NA62 CERN experiment, to implement the data link between the TEL62 readout boards and the low level trigger processor. Results for the throughput and latency of the integrated system are presented and discussed.

  8. NaNet: a low-latency NIC enabling GPU-based, real-time low level trigger systems

    CERN Document Server

    INSPIRE-00646837; Biagioni, Andrea; Fantechi, Riccardo; Frezza, Ottorino; Lamanna, Gianluca; Lo Cicero, Francesca; Lonardo, Alessandro; Paolucci, Pier Stanislao; Pantaleo, Felice; Piandani, Roberto; Pontisso, Luca; Rossetti, Davide; Simula, Francesco; Sozzi, Marco; Tosoratto, Laura; Vicini, Piero

    2014-01-01

    We implemented the NaNet FPGA-based PCIe Gen2 GbE/APElink NIC, featuring GPUDirect RDMA capabilities and UDP protocol management offloading. NaNet is able to receive a UDP input data stream from its GbE interface and redirect it, without any intermediate buffering or CPU intervention, to the memory of a Fermi/Kepler GPU hosted on the same PCIe bus, provided that the two devices share the same upstream root complex. Synthetic benchmarks for latency and bandwidth are presented. We describe how NaNet can be employed in the prototype of the GPU-based RICH low-level trigger processor of the NA62 CERN experiment, to implement the data link between the TEL62 readout boards and the low level trigger processor. Results for the throughput and latency of the integrated system are presented and discussed.

  9. GPU based Monte Carlo for PET image reconstruction: detector modeling

    International Nuclear Information System (INIS)

    Légrády; Cserkaszky, Á; Lantos, J.; Patay, G.; Bükki, T.

    2011-01-01

    Given the similarities between visible light transport and neutral particle trajectories, Graphics Processing Units (GPUs) are almost like dedicated hardware designed for Monte Carlo (MC) calculations. A GPU-based MC gamma transport code has been developed for Positron Emission Tomography iterative image reconstruction, calculating the projection from unknowns to data at each iteration step while taking into account the full physics of the system. This paper describes the simplified scintillation detector modeling and its effect on convergence. (author)

  10. Clinical implementation of a GPU-based simplified Monte Carlo method for a treatment planning system of proton beam therapy

    International Nuclear Information System (INIS)

    Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T

    2011-01-01

    We implemented the simplified Monte Carlo (SMC) method on graphics processing unit (GPU) architecture under the compute unified device architecture (CUDA) platform developed by NVIDIA. The GPU-based SMC was clinically applied for four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC showed 12.30–16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged from 9 to 67 s. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning. (note)

  11. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung [Seoul National University, Seoul (Korea, Republic of)

    2009-10-15

    The maximum likelihood-expectation maximization (ML-EM) algorithm is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and NVIDIA's CUDA (compute unified device architecture) technology, the projection and backprojection steps of the ML-EM algorithm were parallelized. The computation times for projection, for the errors between measured and estimated data, and for backprojection in one iteration were measured. Total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135-fold and was caused by delays in the CPU-based computation after a certain number of iterations. In contrast, the GPU-based computation showed very little variation in time per iteration, due to the use of shared memory. GPU-based parallel computation significantly improved the computing speed and stability of ML-EM. The developed GPU-based ML-EM algorithm could be easily modified for other imaging geometries.
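
The ML-EM iteration parallelized in this work has a compact multiplicative form: the current estimate is scaled by the backprojected ratio of measured to forward-projected data. A minimal dense-matrix sketch (the paper's GPU kernels are not shown; names here are illustrative):

```python
import numpy as np

def ml_em(A, y, n_iter=200):
    """ML-EM for emission tomography: at each iteration the estimate
    is multiplied by the backprojected ratio of measured to estimated
    projections, normalized by the detector sensitivity."""
    x = np.ones(A.shape[1])                 # flat initial image
    sens = A.T @ np.ones(A.shape[0])        # sensitivity: backprojected ones
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / sens           # multiplicative EM update
    return x

# tiny 2-pixel, 3-measurement system with noiseless data
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_x = np.array([2.0, 5.0])
x_hat = ml_em(A, A @ true_x)                # recovers true_x for consistent data
```

The forward projection `A @ x` and backprojection `A.T @ ratio` are exactly the two operations the study parallelizes on the GPU, one thread per projection bin or voxel.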

  12. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    International Nuclear Information System (INIS)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung

    2009-01-01

    The maximum likelihood-expectation maximization (ML-EM) algorithm is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and NVIDIA's CUDA (compute unified device architecture) technology, the projection and backprojection steps of the ML-EM algorithm were parallelized. The computation times for projection, for the errors between measured and estimated data, and for backprojection in one iteration were measured. Total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135-fold and was caused by delays in the CPU-based computation after a certain number of iterations. In contrast, the GPU-based computation showed very little variation in time per iteration, due to the use of shared memory. GPU-based parallel computation significantly improved the computing speed and stability of ML-EM. The developed GPU-based ML-EM algorithm could be easily modified for other imaging geometries.

  13. APEnet+: a 3D Torus network optimized for GPU-based HPC Systems

    International Nuclear Information System (INIS)

    Ammendola, R; Biagioni, A; Frezza, O; Lo Cicero, F; Lonardo, A; Paolucci, P S; Rossetti, D; Simula, F; Tosoratto, L; Vicini, P

    2012-01-01

    In the supercomputing arena, the strong rise of GPU-accelerated clusters is a matter of fact. Within INFN, we proposed an initiative — the QUonG project — whose aim is to deploy a high performance computing system dedicated to scientific computations, leveraging commodity multi-core processors coupled with latest-generation GPUs. The inter-node interconnection system is based on a point-to-point, high-performance, low-latency 3D torus network built in the framework of the APEnet+ project. It takes the form of an FPGA-based PCIe network card exposing six fully bidirectional links running at 34 Gbps each and implements the RDMA protocol. In order to enable a significant access-latency reduction for inter-node data transfer, a direct network-to-GPU interface was built. The specialized hardware blocks, integrated in the APEnet+ board, provide support for GPU-initiated communications using so-called PCIe peer-to-peer (P2P) transactions. This development is made in close collaboration with the GPU vendor NVIDIA. The final shape of a complete QUonG deployment is an assembly of standard 42U racks, each one capable of 80 TFLOPS/rack of peak performance, at a cost of 5 k€/TFLOPS and an estimated power consumption of 25 kW/rack. In this paper we report on the status of the final rack deployment and on the R&D activities for 2012, which will focus on performance enhancement of the APEnet+ hardware through the adoption of new-generation 28 nm FPGAs, allowing the implementation of a PCIe Gen3 host interface and the addition of new fault-tolerance-oriented capabilities.

  14. APEnet+: a 3D Torus network optimized for GPU-based HPC Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R [INFN Tor Vergata (Italy); Biagioni, A; Frezza, O; Lo Cicero, F; Lonardo, A; Paolucci, P S; Rossetti, D; Simula, F; Tosoratto, L; Vicini, P [INFN Roma (Italy)

    2012-12-13

    In the supercomputing arena, the strong rise of GPU-accelerated clusters is a matter of fact. Within INFN, we proposed an initiative - the QUonG project - whose aim is to deploy a high performance computing system dedicated to scientific computations, leveraging commodity multi-core processors coupled with latest-generation GPUs. The inter-node interconnection system is based on a point-to-point, high-performance, low-latency 3D torus network built in the framework of the APEnet+ project. It takes the form of an FPGA-based PCIe network card exposing six fully bidirectional links running at 34 Gbps each and implements the RDMA protocol. In order to enable a significant access-latency reduction for inter-node data transfer, a direct network-to-GPU interface was built. The specialized hardware blocks, integrated in the APEnet+ board, provide support for GPU-initiated communications using so-called PCIe peer-to-peer (P2P) transactions. This development is made in close collaboration with the GPU vendor NVIDIA. The final shape of a complete QUonG deployment is an assembly of standard 42U racks, each one capable of 80 TFLOPS/rack of peak performance, at a cost of 5 k€/TFLOPS and an estimated power consumption of 25 kW/rack. In this paper we report on the status of the final rack deployment and on the R&D activities for 2012, which will focus on performance enhancement of the APEnet+ hardware through the adoption of new-generation 28 nm FPGAs, allowing the implementation of a PCIe Gen3 host interface and the addition of new fault-tolerance-oriented capabilities.

  15. A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions

    Science.gov (United States)

    Liang, Yihao; Xing, Xiangjun; Li, Yaohang

    2017-06-01

    In this work we present an efficient implementation of canonical Monte Carlo simulation for Coulomb many-body systems on graphics processing units (GPUs). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architecture and adopts the sequential updating scheme of the Metropolis algorithm. It makes no approximation in the computation of energy and reaches a remarkable 440-fold speedup compared with a serial implementation on CPU. We further use this method to simulate primitive-model electrolytes and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of the constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
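
The sequential-update Metropolis scheme mentioned above can be sketched for a toy Coulomb gas. Unlike the paper's exact-energy method, this sketch uses a plain minimum-image 1/r energy without long-range corrections, and all names are illustrative:

```python
import numpy as np

def metropolis_sweep(pos, q, L, beta, step, rng):
    """One sequential-update Metropolis sweep for a toy 3D Coulomb gas
    in a periodic box of side L (minimum-image convention only -- a
    deliberate simplification of the paper's exact energy treatment).
    """
    n = len(pos)
    for i in range(n):                      # particles updated one by one
        trial = (pos[i] + rng.uniform(-step, step, 3)) % L
        dE = 0.0
        for j in range(n):
            if j == i:
                continue
            r_old = pos[i] - pos[j]
            r_new = trial - pos[j]
            r_old -= L * np.round(r_old / L)    # minimum-image distances
            r_new -= L * np.round(r_new / L)
            dE += q[i] * q[j] * (1.0 / np.linalg.norm(r_new)
                                 - 1.0 / np.linalg.norm(r_old))
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            pos[i] = trial                  # accept the move
    return pos

rng = np.random.default_rng(1)
L = 10.0
pos = rng.uniform(0, L, (8, 3))             # 8 ions
q = np.array([1.0, -1.0] * 4)               # overall-neutral mixture
pos = metropolis_sweep(pos, q, L, beta=1.0, step=0.5, rng=rng)
```

The inner energy-difference loop over all other particles is the O(N) work per trial move that the paper distributes across GPU threads.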

  16. GPU-based cone beam computed tomography.

    Science.gov (United States)

    Noël, Peter B; Walczak, Alan M; Xu, Jinhui; Corso, Jason J; Hoffmann, Kenneth R; Schafer, Sebastian

    2010-06-01

    The use of cone beam computed tomography (CBCT) is growing in the clinical arena due to its ability to provide 3D information during interventions, its high diagnostic quality (sub-millimeter resolution), and its short scanning times (60 s). In many situations, the short scanning time of CBCT is followed by a time-consuming 3D reconstruction. The standard reconstruction algorithm for CBCT data is the filtered backprojection, which for a volume of size 256³ takes up to 25 min on a standard system. Recent developments in the area of Graphics Processing Units (GPUs) make it possible to have access to high-performance computing solutions at a low cost, allowing their use in many scientific problems. We have implemented an algorithm for 3D reconstruction of CBCT data using the Compute Unified Device Architecture (CUDA) provided by NVIDIA (NVIDIA Corporation, Santa Clara, California), which was executed on an NVIDIA GeForce GTX 280. Our implementation results in improved reconstruction times from minutes, and perhaps hours, to a matter of seconds, while also giving the clinician the ability to view 3D volumetric data at higher resolutions. We evaluated our implementation on ten clinical data sets and one phantom data set to observe if differences occur between CPU- and GPU-based reconstructions. By using our approach, the computation time for 256³ is reduced from 25 min on the CPU to 3.2 s on the GPU. The GPU reconstruction time for 512³ volumes is 8.5 s. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
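
The filtered backprojection pipeline that the GPU implementation accelerates can be sketched in 2D parallel-beam form. This is a toy illustration of the algorithmic skeleton (ramp filter, then per-pixel backprojection), not the CUDA implementation, and all names are assumptions:

```python
import numpy as np

def fbp_reconstruct(sinogram, thetas):
    """Toy 2D filtered backprojection: ramp-filter each projection in
    the frequency domain, then accumulate it along its view angle.
    The per-pixel accumulation is the step a GPU parallelizes."""
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))            # ramp filter response
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp,
                                   axis=1))
    recon = np.zeros((n_det, n_det))
    centre = n_det // 2
    ys, xs = np.mgrid[:n_det, :n_det] - centre
    for theta, proj in zip(thetas, filtered):
        t = xs * np.cos(theta) + ys * np.sin(theta) + centre
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += proj[idx]                           # nearest-neighbour backprojection
    return recon * np.pi / len(thetas)

# sinogram of a point source at the grid centre: every projection is a delta
n_det, n_ang = 64, 60
sino = np.zeros((n_ang, n_det))
sino[:, n_det // 2] = 1.0
recon = fbp_reconstruct(sino, np.linspace(0, np.pi, n_ang, endpoint=False))
# the reconstruction peaks at the centre pixel
```

In the cone-beam (FDK) case the same structure holds with weighted projections and a 3D voxel loop, which is why the per-voxel work maps so well onto GPU threads.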

  17. GPU Based Software Correlators - Perspectives for VLBI2010

    Science.gov (United States)

    Hobiger, Thomas; Kimura, Moritaka; Takefuji, Kazuhiro; Oyama, Tomoaki; Koyama, Yasuhiro; Kondo, Tetsuro; Gotoh, Tadahiro; Amagai, Jun

    2010-01-01

    Caused by historical separation and driven by the requirements of the PC gaming industry, Graphics Processing Units (GPUs) have evolved into massively parallel processing systems that have entered the arena of non-graphics applications. Although a single processing core on the GPU is much slower and provides less functionality than its counterpart on the CPU, the huge number of these small processing entities outperforms classical processors when the application can be parallelized. Thus, in recent years various radio astronomical projects have started to make use of this technology, either to realize the correlator on this platform or to establish the post-processing pipeline with GPUs. Therefore, the feasibility of GPUs as a choice for a VLBI correlator is investigated, including the pros and cons of this technology. Additionally, a GPU-based software correlator is reviewed with respect to energy consumption and cost per GFLOP/s.
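
The core of an FX software correlator of the kind discussed here is small: channelize each station's stream with FFTs, then cross-multiply and accumulate per frequency channel. A toy single-baseline NumPy sketch (names are illustrative):

```python
import numpy as np

def fx_correlate(v1, v2, n_chan):
    """Toy FX correlator for one baseline: split each voltage stream
    into spectra of n_chan channels (the F step), then cross-multiply
    and time-average per channel (the X step)."""
    n_spec = len(v1) // n_chan
    s1 = np.fft.fft(v1[:n_spec * n_chan].reshape(n_spec, n_chan), axis=1)
    s2 = np.fft.fft(v2[:n_spec * n_chan].reshape(n_spec, n_chan), axis=1)
    return (s1 * np.conj(s2)).mean(axis=0)   # averaged visibility spectrum

rng = np.random.default_rng(2)
x = rng.standard_normal(4096)
vis = fx_correlate(x, x, n_chan=64)          # autocorrelation: real, non-negative
```

Both the batched FFTs and the per-channel cross-multiply-accumulate are regular, data-parallel kernels, which is why this workload suits GPUs so well.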

  18. Quaternary Morphodynamics of Fluvial Dispersal Systems Revealed: The Fly River, PNG, and the Sunda Shelf, SE Asia, simulated with the Massively Parallel GPU-based Model 'GULLEM'

    Science.gov (United States)

    Aalto, R. E.; Lauer, J. W.; Darby, S. E.; Best, J.; Dietrich, W. E.

    2015-12-01

    During glacial-marine transgressions vast volumes of sediment are deposited due to the infilling of lowland fluvial systems and shallow shelves, material that is removed during ensuing regressions. Modelling these processes would illuminate system morphodynamics, fluxes, and 'complexity' in response to base level change, yet such problems are computationally formidable. Environmental systems are characterized by strong interconnectivity, yet traditional supercomputers have slow inter-node communication -- whereas rapidly advancing Graphics Processing Unit (GPU) technology offers vastly higher (>100x) bandwidths. GULLEM (GpU-accelerated Lowland Landscape Evolution Model) employs massively parallel code to simulate coupled fluvial-landscape evolution for complex lowland river systems over large temporal and spatial scales. GULLEM models the accommodation space carved/infilled by representing a range of geomorphic processes, including: river & tributary incision within a multi-directional flow regime, non-linear diffusion, glacial-isostatic flexure, hydraulic geometry, tectonic deformation, sediment production, transport & deposition, and full 3D tracking of all resulting stratigraphy. Model results concur with the Holocene dynamics of the Fly River, PNG -- as documented with dated cores, sonar imaging of floodbasin stratigraphy, and the observations of topographic remnants from LGM conditions. Other supporting research was conducted along the Mekong River, the largest fluvial system of the Sunda Shelf. These and other field data provide tantalizing empirical glimpses into the lowland landscapes of large rivers during glacial-interglacial transitions, observations that can be explored with this powerful numerical model. GULLEM affords estimates for the timing and flux budgets within the Fly and Sunda Systems, illustrating complex internal system responses to the external forcing of sea level and climate. Furthermore, GULLEM can be applied to most ANY fluvial system to

  19. GPU-based low-level trigger system for the standalone reconstruction of the ring-shaped hit patterns in the RICH Cherenkov detector of NA62 experiment

    International Nuclear Information System (INIS)

    Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Cicero, F. Lo; Lonardo, A.; Martinelli, M.; Paolucci, P.S.; Pastorelli, E.; Chiozzi, S.; Ramusino, A. Cotta; Fiorini, M.; Gianoli, A.; Neri, I.; Lorenzo, S. Di; Fantechi, R.; Piandani, R.; Pontisso, L.; Lamanna, G.; Piccini, M.

    2017-01-01

    This project aims to exploit the parallel computing power of a commercial Graphics Processing Unit (GPU) to implement fast pattern matching in the Ring Imaging Cherenkov (RICH) detector for the level 0 (L0) trigger of the NA62 experiment. In this approach, the ring-fitting algorithm is seedless, being fed with raw RICH data, with no prior information on the ring position from other detectors. Moreover, since the L0 trigger is provided with more elaborate information than a simple multiplicity number, it achieves a higher selection power. Two methods have been studied to reduce the data transfer latency from the readout boards of the detector to the GPU: the use of a dedicated NIC device driver with very low latency, and a direct data transfer protocol from a custom FPGA-based NIC to the GPU. The performance of the system, developed through the FPGA approach, for multi-ring Cherenkov online reconstruction obtained during the NA62 physics runs is presented.
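
The abstract does not specify the ring-fitting algorithm; a common seedless choice for such trigger-level fits is an algebraic (Kåsa) least-squares circle fit, sketched here as an illustrative stand-in rather than the trigger's actual fitter:

```python
import numpy as np

def fit_ring(x, y):
    """Seedless algebraic (Kasa) circle fit to hit coordinates.

    Solves the linear least-squares system for a circle written as
    x^2 + y^2 + D*x + E*y + F = 0, then recovers centre and radius.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# noiseless hits on a ring of radius 3 centred at (1, 2)
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
cx, cy, r = fit_ring(1 + 3 * np.cos(theta), 2 + 3 * np.sin(theta))
# recovers cx = 1, cy = 2, r = 3 (up to floating-point error)
```

Because the fit is a small linear solve per ring candidate with no seed, many candidates can be fitted concurrently, which matches the GPU-parallel, raw-data approach the abstract describes.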

  20. Cobalt: A GPU-based correlator and beamformer for LOFAR

    Science.gov (United States)

    Broekema, P. Chris; Mol, J. Jan David; Nijboer, R.; van Amesfoort, A. S.; Brentjens, M. A.; Loose, G. Marcel; Klijn, W. F. A.; Romein, J. W.

    2018-04-01

    For low-frequency radio astronomy, software correlation and beamforming on general purpose hardware is a viable alternative to custom designed hardware. LOFAR, a new-generation radio telescope centered in the Netherlands with international stations in Germany, France, Ireland, Poland, Sweden and the UK, has successfully used software real-time processors based on IBM Blue Gene technology since 2004. Since then, developments in technology have allowed us to build a system based on commercial off-the-shelf components that combines the same capabilities with lower operational cost. In this paper, we describe the design and implementation of a GPU-based correlator and beamformer with the same capabilities as the Blue Gene based systems. We focus on the design approach taken, and show the challenges faced in selecting an appropriate system. The design, implementation and verification of the software system show the value of a modern test-driven development approach. Operational experience, based on three years of operations, demonstrates that a general purpose system is a good alternative to the previous supercomputer-based system or custom-designed hardware.

  1. A Performance/Cost Evaluation for a GPU-Based Drug Discovery Application on Volunteer Computing

    Science.gov (United States)

    Guerrero, Ginés D.; Imbernón, Baldomero; García, José M.

    2014-01-01

    Bioinformatics is an interdisciplinary research field that develops tools for the analysis of large biological databases, and, thus, the use of high performance computing (HPC) platforms is mandatory for the generation of useful biological knowledge. The latest generation of graphics processing units (GPUs) has democratized the use of HPC, as they push desktop computers to cluster-level performance. Many applications within this field have been developed to leverage these powerful and low-cost architectures. However, these applications still need to scale to larger GPU-based systems to enable remarkable advances in the fields of healthcare, drug discovery, genome research, etc. The inclusion of GPUs in HPC systems exacerbates power and temperature issues, increasing the total cost of ownership (TCO). This paper explores the benefits of volunteer computing for scaling bioinformatics applications as an alternative to owning large GPU-based local infrastructures. We use as a benchmark a GPU-based drug discovery application called BINDSURF, whose computational requirements go beyond a single desktop machine. Volunteer computing is presented as a cheap and valid HPC system for those bioinformatics applications that need to process huge amounts of data and where the response time is not a critical factor. PMID:25025055

  2. A Performance/Cost Evaluation for a GPU-Based Drug Discovery Application on Volunteer Computing

    Directory of Open Access Journals (Sweden)

    Ginés D. Guerrero

    2014-01-01

    Full Text Available Bioinformatics is an interdisciplinary research field that develops tools for the analysis of large biological databases, and, thus, the use of high performance computing (HPC) platforms is mandatory for the generation of useful biological knowledge. The latest generation of graphics processing units (GPUs) has democratized the use of HPC, as they push desktop computers to cluster-level performance. Many applications within this field have been developed to leverage these powerful and low-cost architectures. However, these applications still need to scale to larger GPU-based systems to enable remarkable advances in the fields of healthcare, drug discovery, genome research, etc. The inclusion of GPUs in HPC systems exacerbates power and temperature issues, increasing the total cost of ownership (TCO). This paper explores the benefits of volunteer computing for scaling bioinformatics applications as an alternative to owning large GPU-based local infrastructures. We use as a benchmark a GPU-based drug discovery application called BINDSURF, whose computational requirements go beyond a single desktop machine. Volunteer computing is presented as a cheap and valid HPC system for those bioinformatics applications that need to process huge amounts of data and where the response time is not a critical factor.

  3. High-throughput GPU-based LDPC decoding

    Science.gov (United States)

    Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin

    2010-08-01

    Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems, such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance, flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
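The iterative structure that makes LDPC decoding GPU-friendly can be seen even in a much simpler decoder than the sum-product algorithm the paper accelerates. The sketch below is a hard-decision bit-flipping decoder (a deliberately simplified stand-in, not the paper's method): every iteration evaluates all parity checks, then flips the bits involved in the most failed checks. Both steps are embarrassingly parallel across checks and bits.

```python
def bitflip_decode(H, word, max_iters=20):
    """Hard-decision bit-flipping LDPC decoding.
    H: parity-check matrix as a list of 0/1 rows; word: received hard bits.
    Returns the (hopefully corrected) bit vector."""
    bits = list(word)
    n = len(bits)
    for _ in range(max_iters):
        # Evaluate every parity check (parallel across rows on a GPU).
        syndromes = [sum(row[j] & bits[j] for j in range(n)) % 2 for row in H]
        if not any(syndromes):
            return bits  # all checks satisfied: valid codeword
        # Count failed checks touching each bit (parallel across columns).
        fails = [sum(s for row, s in zip(H, syndromes) if row[j])
                 for j in range(n)]
        worst = max(fails)
        # Flip the bits participating in the most failed checks.
        bits = [b ^ (1 if fails[j] == worst else 0) for j, b in enumerate(bits)]
    return bits
```

The sum-product algorithm replaces these hard decisions with soft log-likelihood message passing over the same check/bit structure, so the same parallel decomposition applies.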

  4. High-Dimensional Adaptive Particle Swarm Optimization on Heterogeneous Systems

    International Nuclear Information System (INIS)

    Wachowiak, M P; Sarlo, B B; Foster, A E Lambe

    2014-01-01

    Much work has recently been reported in parallel GPU-based particle swarm optimization (PSO). Motivated by the encouraging results of these investigations, while also recognizing the limitations of GPU-based methods for big problems using a large amount of data, this paper explores the efficacy of employing other types of parallel hardware for PSO. Most commodity systems feature a variety of architectures whose high-performance capabilities can be exploited. In this paper, high-dimensional problems and those that employ a large amount of external data are explored within the context of heterogeneous systems. Large problems are decomposed into constituent components, and analyses are undertaken of which components would benefit from multi-core or GPU parallelism. The current study therefore provides another demonstration that "supercomputing on a budget" is possible when subtasks of large problems are run on the hardware most suited to these tasks. Experimental results show that large speedups can be achieved on high-dimensional, data-intensive problems. Cost functions must first be analysed for parallelization opportunities, and assigned to hardware based on the particular task.
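For reference, the serial PSO loop that such papers decompose onto multi-core or GPU hardware is short. The sketch below is a minimal global-best PSO (generic, not the authors' implementation); the per-particle velocity/position updates and the cost-function evaluations are the two components typically mapped to parallel hardware.

```python
import random

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best particle swarm optimizer.
    Returns (best_position, best_cost)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # per-particle best positions
    pbest_f = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):  # this loop is the parallel hot spot
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = cost(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

When the cost function is expensive or data-intensive, it is usually the piece worth offloading; the light-weight update loop can stay on the CPU, which is exactly the kind of split the paper analyses.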

  5. Fast GPU-based spot extraction for energy-dispersive X-ray Laue diffraction

    International Nuclear Information System (INIS)

    Alghabi, F.; Schipper, U.; Kolb, A.; Send, S.; Abboud, A.; Pashniak, N.; Pietsch, U.

    2014-01-01

    This paper describes a novel method for fast online analysis of X-ray Laue spots taken by means of an energy-dispersive X-ray 2D detector. Current pnCCD detectors typically operate at some 100 Hz (up to a maximum of 400 Hz) and have a resolution of 384 × 384 pixels; future devices head for even higher pixel counts and frame rates. The proposed online data analysis is based on a computer utilizing multiple Graphics Processing Units (GPUs), which allow for fast and parallel data processing. Our multi-GPU based algorithm is compliant with the rules of stream-based data processing, for which GPUs are optimized. The paper's main contribution is therefore an alternative algorithm for the determination of spot positions and energies over the full sequence of pnCCD data frames. Furthermore, an improved background suppression algorithm is presented. The resulting system is able to process data at the maximum acquisition rate of 400 Hz. We present a detailed analysis of the spot positions and energies deduced from a prior (single-core) CPU-based and the novel GPU-based data processing, showing that the results computed in parallel using the GPU implementation are at least of the same quality as the prior CPU-based results. Furthermore, the GPU-based algorithm is able to speed up the data processing by a factor of 7 (in comparison to the single-core CPU-based algorithm), which effectively makes the detector system more suitable for online data processing.
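A per-frame spot-extraction pipeline of this general kind can be sketched in a few lines: subtract a background estimate, threshold, group the surviving pixels into connected spots, and report each spot's intensity-weighted centroid (position) and summed signal (an energy proxy). This is a toy CPU illustration of the general idea, not the paper's stream-based multi-GPU algorithm; all names are illustrative.

```python
def extract_spots(frame, background, threshold):
    """Find connected above-threshold regions in a background-subtracted
    frame. Returns a list of ((centroid_x, centroid_y), total_signal)."""
    h, w = len(frame), len(frame[0])
    sig = [[frame[y][x] - background[y][x] for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    spots = []
    for y in range(h):
        for x in range(w):
            if seen[y][x] or sig[y][x] < threshold:
                continue
            # Flood-fill the 4-connected region above threshold.
            stack, pix = [(y, x)], []
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                pix.append((cy, cx))
                for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                               (cy, cx + 1), (cy, cx - 1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and sig[ny][nx] >= threshold:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            total = sum(sig[py][px] for py, px in pix)
            cen_y = sum(py * sig[py][px] for py, px in pix) / total
            cen_x = sum(px * sig[py][px] for py, px in pix) / total
            spots.append(((cen_x, cen_y), total))
    return spots
```

On a GPU the per-pixel steps (background subtraction, thresholding) vectorize trivially, while the grouping step is typically replaced by a parallel labeling scheme.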

  6. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus; Krueger, Jens; Beyer, Johanna; Bruckner, Stefan

    2013-01-01

    and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems

  7. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous
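The working-set idea at the heart of out-of-core and virtual-texturing schemes (keep resident only the bricks the current view touches, within a fixed memory budget) can be shown with a small cache sketch. This is a generic illustration under stated assumptions, not the course's or any specific renderer's code; `load_fn` stands in for a disk or network fetch.

```python
from collections import OrderedDict

class BrickCache:
    """Toy working-set manager: bricks are fetched on demand and an LRU
    policy evicts the stalest brick once the capacity is exceeded."""
    def __init__(self, capacity, load_fn):
        self.capacity = capacity
        self.load_fn = load_fn
        self.resident = OrderedDict()  # brick_id -> data, oldest first
        self.misses = 0

    def fetch(self, brick_id):
        if brick_id in self.resident:
            self.resident.move_to_end(brick_id)  # mark most-recently used
        else:
            self.misses += 1
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)  # evict the LRU brick
            self.resident[brick_id] = self.load_fn(brick_id)
        return self.resident[brick_id]
```

Real systems add asynchronous prefetching and multi-resolution fallback (render a coarser brick while the fine one streams in), but the budgeted-residency logic is the same.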

  8. GPU-based high-performance computing for radiation therapy

    International Nuclear Information System (INIS)

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

    Recent developments in radiotherapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous number of studies have been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPUs in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPUs with other platforms will also be presented. (topical review)

  9. GPU-based Branchless Distance-Driven Projection and Backprojection.

    Science.gov (United States)

    Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong

    2017-12-01

    Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. Distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of this branch behavior makes it inefficient to implement on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behavior can be eliminated by factorizing the DD operation into three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved a 137-fold speedup for forward projection and a 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. The GPU-based branchless DD method was evaluated with iterative reconstruction algorithms on both simulated and real datasets. It obtained visually identical images to the CPU reference algorithm.
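The integrate/interpolate/differentiate factorization can be demonstrated in 1D: resampling the cumulative integral of a signal onto new bin boundaries and then differencing reproduces exact overlap-weighted bin contents without any per-overlap branching. The sketch below is a 1D illustration of that idea (the paper's algorithm applies it in the full 3D cone-beam geometry, with the interpolation done by texture hardware); function and variable names are illustrative.

```python
def branchless_resample(values, src_edges, dst_edges):
    """Resample bin contents from src bins onto dst bins via the
    branchless three-step scheme:
      (1) integrate: cumulative integral at source boundaries,
      (2) interpolate: sample that integral at destination boundaries,
      (3) differentiate: difference adjacent samples."""
    # Step 1: cumulative integral at source boundaries.
    cum = [0.0]
    for i, v in enumerate(values):
        cum.append(cum[-1] + v * (src_edges[i + 1] - src_edges[i]))

    def interp(x):
        # Piecewise-linear interpolation of the integral, clamped at ends.
        if x <= src_edges[0]:
            return cum[0]
        if x >= src_edges[-1]:
            return cum[-1]
        for i in range(len(values)):  # linear scan keeps the sketch short
            if x <= src_edges[i + 1]:
                t = (x - src_edges[i]) / (src_edges[i + 1] - src_edges[i])
                return cum[i] + t * (cum[i + 1] - cum[i])

    # Steps 2-3: sample and differentiate.
    samples = [interp(x) for x in dst_edges]
    return [samples[i + 1] - samples[i] for i in range(len(dst_edges) - 1)]
```

Every destination bin is computed by the same uniform sequence of operations, which is precisely what removes the divergent inner loop of a classic DD kernel.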

  10. Validation of GPU based TomoTherapy dose calculation engine.

    Science.gov (United States)

    Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond

    2012-04-01

    The graphics processing unit (GPU) based TomoTherapy convolution/superposition (C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architecture difference between the GPU and CPU, there are several algorithm changes from the CPU dose engine to the GPU dose engine. These changes make the GPU dose slightly different from the CPU-cluster dose. For the commercial release of the GPU dose engine, its accuracy had to be validated. Thirty-eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate the equivalency between the two. Gamma indices (Γ) were used for the equivalency evaluation. The GPU dose was further verified against absolute point dose measurements with an ion chamber and film measurements for phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in heterogeneous phantoms and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine. The majority of cases had over 99.99% of voxels with Γ(1%, 1 mm) < 1. The GPU dose engine also showed a similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster based dose engine without degradation in dose accuracy.
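The gamma index used for this kind of equivalency evaluation combines a dose-difference tolerance and a distance-to-agreement tolerance into a single pass/fail number per point (a point passes when Γ < 1). A minimal 1D sketch of the metric, with absolute tolerances for simplicity (clinical implementations use percentage dose criteria and 3D search), is:

```python
import math

def gamma_index(ref, evl, spacing, dose_tol, dist_tol):
    """1D gamma index: for each reference point, the minimum combined
    dose/distance disagreement over all evaluated points.
    ref, evl: dose profiles on the same grid; spacing: grid step;
    dose_tol, dist_tol: tolerances in dose units and distance units."""
    gammas = []
    for i, dose_ref in enumerate(ref):
        best = float("inf")
        for j, dose_evl in enumerate(evl):
            dd = (dose_evl - dose_ref) / dose_tol       # dose axis
            dx = (j - i) * spacing / dist_tol           # distance axis
            best = min(best, math.sqrt(dd * dd + dx * dx))
        gammas.append(best)
    return gammas
```

On a GPU, each reference voxel's search is an independent thread, which is why gamma evaluation itself is a popular target for GPU acceleration.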

  11. SPATIOTEMPORAL VISUALIZATION OF TIME-SERIES SATELLITE-DERIVED CO2 FLUX DATA USING VOLUME RENDERING AND GPU-BASED INTERPOLATION ON A CLOUD-DRIVEN DIGITAL EARTH

    Directory of Open Access Journals (Sweden)

    S. Wu

    2017-10-01

    Full Text Available The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both the spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.
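The temporal half of such an interpolation framework reduces, in its simplest form, to per-cell linear blending between two satellite time steps, an embarrassingly parallel operation well suited to CUDA. A minimal CPU sketch of that blend (an illustration of the general idea, not the paper's CUDA kernels):

```python
def interpolate_frames(frame_a, frame_b, t):
    """Per-cell linear interpolation between two gridded time steps,
    with blend factor 0 <= t <= 1 (t=0 gives frame_a, t=1 gives frame_b)."""
    return [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]
```

Because every output cell depends only on the two corresponding input cells, a GPU version assigns one thread per cell and achieves smooth temporal transitions at rendering rates.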

  12. Development of a GPU-based high-performance radiative transfer model for the Infrared Atmospheric Sounding Interferometer (IASI)

    International Nuclear Information System (INIS)

    Huang Bormin; Mielikainen, Jarno; Oh, Hyunjong; Allen Huang, Hung-Lung

    2011-01-01

    Satellite-observed radiance is a nonlinear functional of surface properties and atmospheric temperature and absorbing gas profiles as described by the radiative transfer equation (RTE). In the era of hyperspectral sounders with thousands of high-resolution channels, the computation of the radiative transfer model becomes more time-consuming. The radiative transfer model performance in operational numerical weather prediction systems still limits the number of channels we can use in hyperspectral sounders to only a few hundreds. To take the full advantage of such high-resolution infrared observations, a computationally efficient radiative transfer model is needed to facilitate satellite data assimilation. In recent years the programmable commodity graphics processing unit (GPU) has evolved into a highly parallel, multi-threaded, many-core processor with tremendous computational speed and very high memory bandwidth. The radiative transfer model is very suitable for the GPU implementation to take advantage of the hardware's efficiency and parallelism where radiances of many channels can be calculated in parallel in GPUs. In this paper, we develop a GPU-based high-performance radiative transfer model for the Infrared Atmospheric Sounding Interferometer (IASI) launched in 2006 onboard the first European meteorological polar-orbiting satellites, METOP-A. Each IASI spectrum has 8461 spectral channels. The IASI radiative transfer model consists of three modules. The first module for computing the regression predictors takes less than 0.004% of CPU time, while the second module for transmittance computation and the third module for radiance computation take approximately 92.5% and 7.5%, respectively. Our GPU-based IASI radiative transfer model is developed to run on a low-cost personal supercomputer with four GPUs with total 960 compute cores, delivering near 4 TFlops theoretical peak performance. By massively parallelizing the second and third modules, we reached 364x

  13. SU-E-T-395: Multi-GPU-Based VMAT Treatment Plan Optimization Using a Column-Generation Approach

    International Nuclear Information System (INIS)

    Tian, Z; Shi, F; Jia, X; Jiang, S; Peng, F

    2014-01-01

    Purpose: GPUs have been employed to speed up VMAT optimization from hours to minutes. However, their limited memory capacity makes it difficult to handle cases with a huge dose-deposition-coefficient (DDC) matrix, e.g. those with a large target size, multiple arcs, small beam angle intervals and/or small beamlet size. We propose multi-GPU-based VMAT optimization to solve this memory issue and make GPU-based VMAT more practical for clinical use. Methods: Our column-generation-based method generates apertures sequentially by iteratively searching for an optimal feasible aperture (referred to as the pricing problem, PP) and optimizing aperture intensities (referred to as the master problem, MP). The PP requires access to the large DDC matrix, which is implemented on a multi-GPU system. Each GPU stores a DDC sub-matrix corresponding to one fraction of the beam angles and is only responsible for calculations related to those angles. Broadcast and parallel reduction schemes are adopted for inter-GPU data transfer. The MP is a relatively small-scale problem and is implemented on one GPU. One head-and-neck cancer case was used for testing. Three different strategies for VMAT optimization on a single GPU were also implemented for comparison: (S1) truncating the DDC matrix to ignore its small-value entries during optimization; (S2) transferring the DDC matrix part by part to the GPU during optimization whenever needed; (S3) moving DDC matrix related calculations onto the CPU. Results: Our multi-GPU-based implementation reaches a good plan within 1 minute. Although S1 was 10 seconds faster than our method, the obtained plan quality is worse. Both S2 and S3 handle the full DDC matrix and hence yield the same plan as our method. However, the computation time is longer, namely 4 minutes and 30 minutes, respectively.
Conclusion: Our multi-GPU-based VMAT optimization can effectively solve the limited memory issue with good plan quality and high efficiency, making GPU-based ultra-fast VMAT planning practical for real clinical use.

  14. SU-E-T-395: Multi-GPU-Based VMAT Treatment Plan Optimization Using a Column-Generation Approach

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Z; Shi, F; Jia, X; Jiang, S [UT Southwestern Medical Ctr at Dallas, Dallas, TX (United States); Peng, F [Carnegie Mellon University, Pittsburgh, PA (United States)

    2014-06-01

    Purpose: GPUs have been employed to speed up VMAT optimization from hours to minutes. However, their limited memory capacity makes it difficult to handle cases with a huge dose-deposition-coefficient (DDC) matrix, e.g. those with a large target size, multiple arcs, small beam angle intervals and/or small beamlet size. We propose multi-GPU-based VMAT optimization to solve this memory issue and make GPU-based VMAT more practical for clinical use. Methods: Our column-generation-based method generates apertures sequentially by iteratively searching for an optimal feasible aperture (referred to as the pricing problem, PP) and optimizing aperture intensities (referred to as the master problem, MP). The PP requires access to the large DDC matrix, which is implemented on a multi-GPU system. Each GPU stores a DDC sub-matrix corresponding to one fraction of the beam angles and is only responsible for calculations related to those angles. Broadcast and parallel reduction schemes are adopted for inter-GPU data transfer. The MP is a relatively small-scale problem and is implemented on one GPU. One head-and-neck cancer case was used for testing. Three different strategies for VMAT optimization on a single GPU were also implemented for comparison: (S1) truncating the DDC matrix to ignore its small-value entries during optimization; (S2) transferring the DDC matrix part by part to the GPU during optimization whenever needed; (S3) moving DDC matrix related calculations onto the CPU. Results: Our multi-GPU-based implementation reaches a good plan within 1 minute. Although S1 was 10 seconds faster than our method, the obtained plan quality is worse. Both S2 and S3 handle the full DDC matrix and hence yield the same plan as our method. However, the computation time is longer, namely 4 minutes and 30 minutes, respectively.
Conclusion: Our multi-GPU-based VMAT optimization can effectively solve the limited memory issue with good plan quality and high efficiency, making GPU-based ultra-fast VMAT planning practical for real clinical use.

  15. MO-A-BRD-10: A Fast and Accurate GPU-Based Proton Transport Monte Carlo Simulation for Validating Proton Therapy Treatment Plans

    Energy Technology Data Exchange (ETDEWEB)

    Wan Chan Tseung, H; Ma, J; Beltran, C [Mayo Clinic, Rochester, MN (United States)

    2014-06-15

    Purpose: To build a GPU-based Monte Carlo (MC) simulation of proton transport with detailed modeling of elastic and non-elastic (NE) proton-nucleus interactions, for use in a very fast and cost-effective proton therapy treatment plan verification system. Methods: Using the CUDA framework, we implemented kernels for the following tasks: (1) Simulation of beam spots from our possible scanning nozzle configurations, (2) Proton propagation through CT geometry, taking into account nuclear elastic and multiple scattering, as well as energy straggling, (3) Bertini-style modeling of the intranuclear cascade stage of NE interactions, and (4) Simulation of nuclear evaporation. To validate our MC, we performed: (1) Secondary particle yield calculations in NE collisions with therapeutically-relevant nuclei, (2) Pencil-beam dose calculations in homogeneous phantoms, (3) A large number of treatment plan dose recalculations, and compared with Geant4.9.6p2/TOPAS. A workflow was devised for calculating plans from a commercially available treatment planning system, with scripts for reading DICOM files and generating inputs for our MC. Results: Yields, energy and angular distributions of secondaries from NE collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D-gamma pass rate at 2%–2mm for 70–230 MeV pencil-beam dose distributions in water, soft tissue, bone and Ti phantoms is 100%. The pass rate at 2%–2mm for treatment plan calculations is typically above 98%. The net computational time on a NVIDIA GTX680 card, including all CPU-GPU data transfers, is around 20 s for 1×10⁷ proton histories. Conclusion: Our GPU-based proton transport MC is the first of its kind to include a detailed nuclear model to handle NE interactions on any nucleus. Dosimetric calculations demonstrate very good agreement with Geant4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil

  16. GPU-based real-time triggering in the NA62 experiment

    CERN Document Server

    Ammendola, R.; Cretaro, P.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P.S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-01-01

    Over the last few years the GPGPU (General-Purpose computing on Graphics Processing Units) paradigm has represented a remarkable development in the world of computing. Computing for High-Energy Physics is no exception: several works have demonstrated the effectiveness of integrating GPU-based systems into the high level triggers of different experiments. On the other hand, the use of GPUs in low level trigger systems, characterized by stringent real-time constraints such as a tight time budget and high throughput, poses several challenges. In this paper we focus on the low level trigger in the CERN NA62 experiment, investigating the use of real-time computing on GPUs in this synchronous system. Our approach aimed at harvesting the GPU computing power to build, in real time, refined physics-related trigger primitives for the RICH detector, as knowledge of the Cherenkov ring parameters allows stringent conditions to be built for data selection at trigger level. Latencies of all components of the trigger chain have...

  17. An optimization of a GPU-based parallel wind field module

    International Nuclear Information System (INIS)

    Pinheiro, André L.S.; Shirru, Roberto

    2017-01-01

    Atmospheric radionuclide dispersion systems (ARDS) are important tools to predict the impact of radioactive releases from Nuclear Power Plants and guide the evacuation of people from affected areas. Four modules comprise an ARDS: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The slowest is the Wind Field module, which was previously parallelized using the CUDA C language. The purpose of this work is to show the speedup gained with the optimization of the already parallel code of the GPU-based Wind Field module, based on the WEST model (Extrapolated from Stability and Terrain). Due to the parallelization done in the Wind Field module, it was observed that some CUDA processors became idle, thus contributing to a reduction in speedup. This work proposes a way of allocating these idle CUDA processors in order to increase the speedup. An acceleration of about 4 times can be seen in the comparative case study between the regular CUDA code and the optimized CUDA code. These results are quite motivating and point out that even after the parallelization of a code, parallel code optimization should be taken into account. (author)

  18. Visualizing whole-brain DTI tractography with GPU-based Tuboids and LoD management.

    Science.gov (United States)

    Petrovic, Vid; Fallon, James; Kuester, Falko

    2007-01-01

    Diffusion Tensor Imaging (DTI) of the human brain, coupled with tractography techniques, enables the extraction of large collections of three-dimensional tract pathways per subject. These pathways and pathway bundles represent the connectivity between different brain regions and are critical for the understanding of brain-related diseases. A flexible and efficient GPU-based rendering technique for DTI tractography data is presented that addresses common performance bottlenecks and image-quality issues, allowing interactive render rates to be achieved on commodity hardware. An occlusion query-based pathway LoD management system for streamlines/streamtubes/tuboids is introduced that optimizes input geometry, vertex processing, and fragment processing loads, and helps reduce overdraw. The tuboid, a fully-shaded streamtube impostor constructed entirely on the GPU from streamline vertices, is also introduced. Unlike full streamtubes and other impostor constructs, tuboids require little to no preprocessing or extra space over the original streamline data. The supported fragment processing levels of detail range from texture-based draft shading to full raycast normal computation, Phong shading, environment mapping, and curvature-correct text labeling. The presented text labeling technique for tuboids provides adaptive, aesthetically pleasing labels that appear attached to the surface of the tubes. Furthermore, an occlusion query aggregating and scheduling scheme for tuboids is described that reduces the query overhead. Results for a tractography dataset are presented, and demonstrate that LoD-managed tuboids offer benefits over traditional streamtubes both in performance and appearance.

  19. An optimization of a GPU-based parallel wind field module

    Energy Technology Data Exchange (ETDEWEB)

    Pinheiro, André L.S.; Shirru, Roberto [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Pereira, Cláudio M.N.A., E-mail: apinheiro99@gmail.com, E-mail: schirru@lmp.ufrj.br, E-mail: cmnap@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    Atmospheric radionuclide dispersion systems (ARDS) are important tools to predict the impact of radioactive releases from Nuclear Power Plants and guide the evacuation of people from affected areas. Four modules comprise an ARDS: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The slowest is the Wind Field module, which was previously parallelized using the CUDA C language. The purpose of this work is to show the speedup gained with the optimization of the already parallel code of the GPU-based Wind Field module, based on the WEST model (Extrapolated from Stability and Terrain). Due to the parallelization done in the Wind Field module, it was observed that some CUDA processors became idle, thus contributing to a reduction in speedup. This work proposes a way of allocating these idle CUDA processors in order to increase the speedup. An acceleration of about 4 times can be seen in the comparative case study between the regular CUDA code and the optimized CUDA code. These results are quite motivating and point out that even after the parallelization of a code, parallel code optimization should be taken into account. (author)

  20. The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran

    Directory of Open Access Journals (Sweden)

    Hamed Kargaran

    2016-04-01

    Full Text Available The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo random number generator (GPPRNG) has been proposed for use in high performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL_MODE and SHARED_MODE. To generate parallel random numbers based on the independent sequence method, a combination of the middle-square method and a chaotic map along with the Xorshift PRNG has been employed. Implementation of our developed GPPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL_MODE and SHARED_MODE, respectively. To evaluate the accuracy of our developed GPPRNG, its performance was compared to that of some other available PRNGs, such as those of MATLAB and FORTRAN and the Miller-Park algorithm, through employing specific standard tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
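Two of the ingredients named in the abstract, the Xorshift generator and the middle-square method, are each only a few lines. The sketch below is a generic pure-Python illustration of how a middle-square step can derive distinct per-stream seeds for a 32-bit Xorshift generator (the paper's actual seeding scheme also involves a chaotic map and differs in detail); the function names are illustrative.

```python
def xorshift32(state):
    """One step of Marsaglia's 32-bit Xorshift generator (13/17/5 triple).
    In a GPU implementation each thread holds its own state."""
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

def middle_square_seed(seed, digits=8):
    """Middle-square step: square the seed and keep the middle digits,
    used here to spread raw stream ids into scattered nonzero seeds."""
    sq = str(seed * seed).zfill(2 * digits)
    mid = len(sq) // 2
    return int(sq[mid - digits // 2: mid + digits // 2]) or 1  # avoid 0

def stream(seed, n):
    """Generate n uniform floats in [0, 1) from an independently
    seeded Xorshift state."""
    s = middle_square_seed(seed)
    out = []
    for _ in range(n):
        s = xorshift32(s)
        out.append(s / 2 ** 32)
    return out
```

Note the `or 1` guard: Xorshift maps the all-zero state to itself, so any seeding scheme must avoid producing state 0.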

  1. The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran

    Energy Technology Data Exchange (ETDEWEB)

    Kargaran, Hamed, E-mail: h-kargaran@sbu.ac.ir; Minuchehr, Abdolhamid; Zolfaghari, Ahmad [Department of nuclear engineering, Shahid Behesti University, Tehran, 1983969411 (Iran, Islamic Republic of)

    2016-04-15

    The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo-random number generator (GPPRNG) has been proposed for use in high-performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL_MODE and SHARED_MODE. To generate parallel random numbers based on the independent sequence method, a combination of the middle-square method and a chaotic map, together with the Xorshift PRNG, has been employed. Implementation of our developed PPRNG on a single GPU showed speedups of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL_MODE and SHARED_MODE, respectively. To evaluate the accuracy of our developed GPPRNG, its performance was compared to that of some other commercially available PPRNGs such as MATLAB, FORTRAN and the Miller-Park algorithm through specific standard tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
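
    One of the ingredients named above, the Xorshift generator, is simple enough to sketch. The Python class below uses a commonly published 64-bit shift triple (13/7/17), which is an assumption, not necessarily the constants or the middle-square/chaotic-map combination used in the paper; in the independent-sequence scheme each GPU thread would hold its own distinctly seeded state.

```python
class Xorshift64:
    """Minimal 64-bit xorshift generator (Marsaglia-style); shift
    constants 13/7/17 are a standard published triple, assumed here."""
    MASK = (1 << 64) - 1

    def __init__(self, seed):
        assert seed != 0, "xorshift state must be nonzero"
        self.state = seed & self.MASK

    def next_u64(self):
        x = self.state
        x ^= (x << 13) & self.MASK   # left shifts must be masked to 64 bits
        x ^= x >> 7
        x ^= (x << 17) & self.MASK
        self.state = x
        return x

    def next_float(self):
        """Uniform double in [0, 1)."""
        return self.next_u64() / 2.0**64

# In the independent-sequence approach each thread owns its own state;
# here a single stream is drawn and its mean checked informally.
rng = Xorshift64(seed=88172645463325252)
samples = [rng.next_float() for _ in range(1000)]
print(sum(samples) / len(samples))  # should hover near 0.5
```

    On a GPU, each thread would keep this two-word state in registers or shared memory, which is exactly the distinction behind the paper's GLOBAL_MODE and SHARED_MODE.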

  2. SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification

    International Nuclear Information System (INIS)

    Folkerts, M; Graves, Y; Tian, Z; Gu, X; Jia, X; Jiang, S

    2014-01-01

    Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute “delivered dose” from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application which is based on existing technologies (HTML5, Python, and Django). This tool interfaces with Python, C-code libraries, and command-line-based GPU applications to perform MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run a MC dose calculation. The resultant web app is powerful, easy to use, and is able to re-compute both plan dose (from DICOM data) and delivered dose (from logfile data). Both dynalog and trajectorylog file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistical uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additionally, the user may upload the delivery logfile data from the linac to compute a 'delivered dose' calculation and corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parsing, fluence map generation, CT image processing, GPU-based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file. The system takes both DICOM data and logfile data to compute plan dose and delivered dose, respectively. Conclusion: We successfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA.

  3. SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification

    Energy Technology Data Exchange (ETDEWEB)

    Folkerts, M [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States); University of California, San Diego, La Jolla, CA (United States); Graves, Y [University of California, San Diego, La Jolla, CA (United States); Tian, Z; Gu, X; Jia, X; Jiang, S [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2014-06-01

    Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute “delivered dose” from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application which is based on existing technologies (HTML5, Python, and Django). This tool interfaces with Python, C-code libraries, and command-line-based GPU applications to perform MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run a MC dose calculation. The resultant web app is powerful, easy to use, and is able to re-compute both plan dose (from DICOM data) and delivered dose (from logfile data). Both dynalog and trajectorylog file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistical uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additionally, the user may upload the delivery logfile data from the linac to compute a 'delivered dose' calculation and corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parsing, fluence map generation, CT image processing, GPU-based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file. The system takes both DICOM data and logfile data to compute plan dose and delivered dose, respectively. Conclusion: We successfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA.
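
    The gamma test mentioned in both records combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1D sketch follows (simplified from the 3D test the tool performs; the 3%/3 mm criteria are the conventional choice, not necessarily the tool's defaults):

```python
import math

def gamma_index_1d(ref_dose, eval_dose, spacing=1.0,
                   dose_crit=0.03, dist_crit=3.0):
    """1D gamma index for two equally spaced dose profiles: for each
    reference point, the minimum over evaluated points of the combined
    (normalized) dose-difference and distance-to-agreement."""
    gammas = []
    for i, rd in enumerate(ref_dose):
        best = min(
            math.sqrt(((j - i) * spacing / dist_crit) ** 2 +
                      ((ed - rd) / dose_crit) ** 2)
            for j, ed in enumerate(eval_dose)
        )
        gammas.append(best)
    return gammas

def passing_rate(gammas):
    """Fraction of points with gamma <= 1 (the usual pass criterion)."""
    return sum(g <= 1.0 for g in gammas) / len(gammas)

# Identical profiles pass everywhere; a uniform 2% dose offset still
# passes a 3%/3 mm test.
ref = [0.0, 0.2, 0.6, 1.0, 0.6, 0.2, 0.0]
print(passing_rate(gamma_index_1d(ref, ref)))                      # 1.0
print(passing_rate(gamma_index_1d(ref, [d + 0.02 for d in ref])))  # 1.0
```

    The real tool evaluates this over a 3D grid with interpolation, but the pass/fail logic per point is the same.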

  4. SU-E-T-806: Very Fast GPU-Based IMPT Dose Computation

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, A; Brand, M [Mitsubishi Electric Research Lab, Cambridge, MA (United States)

    2015-06-15

    Purpose: Designing particle therapy treatment plans is a dosimetrist-in-the-loop optimization wherein the conflicting constraints of achieving a desired tumor dose distribution must be balanced against the need to minimize the dose to nearby OARs. IMPT introduces an additional, inner, numerical optimization step in which the dosimetrist’s current set of constraints are used to determine the weighting of beam spots. Very fast dose calculations are needed to enable the dosimetrist to perform many iterations of the outer optimization in a commercially reasonable time. Methods: We have developed a GPU-based convolution-type dose computation algorithm that more accurately handles heterogeneities than earlier algorithms by redistributing energy from dose computed in a water volume. The depth dependence of the beam size is handled by pre-processing Bragg curves using a weighted superposition of Gaussian bases. Additionally, scattering, the orientation of treatment ports, and the non-parallel propagation of beams are handled by large, but sparse, energy-redistribution matrices that implement affine transforms. Results: We tested our algorithm using a brain tumor dataset with 1 mm voxels and a single treatment port from the patient’s anterior through the sinuses. The resulting dose volume is 100 × 100 × 230 mm with 66,200 beam spots on a 3 × 3 × 2 mm grid. The dose computation takes <1 msec on a GeForce GTX Titan GPU with the Gamma passing rate for 2mm/2% criterion of 99.1% compared to dose calculated by an alternative dose algorithm based on pencil beams. We will present comparisons to Monte Carlo dose calculations. Conclusion: Our high-speed dose computation method enables the IMPT spot weights to be optimized in <1 second, resulting in a nearly instantaneous response to user changes to dose constraints. This permits the creation of higher quality plans by allowing the dosimetrist to evaluate more alternatives in a short period of time.

  5. SU-E-T-37: A GPU-Based Pencil Beam Algorithm for Dose Calculations in Proton Radiation Therapy

    International Nuclear Information System (INIS)

    Kalantzis, G; Leventouri, T; Tachibana, H; Shang, C

    2015-01-01

    Purpose: Recent developments in radiation therapy have focused on applications of charged particles, especially protons. Over the years several dose calculation methods have been proposed in proton therapy. A common characteristic of all these methods is their extensive computational burden. In the current study we present, for the first time to the best of our knowledge, a GPU-based PBA for proton dose calculations in MATLAB. Methods: In the current study we employed an analytical expression for the proton depth-dose distribution. The central-axis term is taken from the broad-beam central-axis depth dose in water, modified by an inverse-square correction, while the distribution of the off-axis term was considered Gaussian. The serial code was implemented in MATLAB and was launched on a desktop with a quad-core Intel Xeon X5550 at 2.67 GHz with 8 GB of RAM. For the parallelization on the GPU, the Parallel Computing Toolbox was employed and the code was launched on a GTX 770 with Kepler architecture. The performance comparison was established on the speedup factors. Results: The performance of the GPU code was evaluated for three different energies: low (50 MeV), medium (100 MeV) and high (150 MeV). Four square fields were selected for each energy, and the dose calculations were performed with both the serial and parallel codes for a homogeneous water phantom with size 300×300×300 mm3. The resolution of the PBs was set to 1.0 mm. A maximum speedup of ∼127 was achieved for the highest energy and the largest field size. Conclusion: A GPU-based PB algorithm for proton dose calculations in MATLAB was presented. A maximum speedup of ∼127 was achieved. Future directions of the current work include extension of our method to dose calculation in heterogeneous phantoms.
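
    The dose model described above, a central-axis depth-dose term with an inverse-square correction multiplied by a Gaussian off-axis term, can be sketched as follows. The depth-dose and beam-width callables and all numbers are hypothetical stand-ins for measured lookup tables, not the paper's data:

```python
import math

def pencil_beam_dose(x, y, z, central_axis_dose, sigma_of_depth, ssd=1000.0):
    """Dose at lateral offset (x, y) [mm] and depth z [mm] from a single
    pencil beam: broad-beam central-axis depth dose, an inverse-square
    correction, and a normalized 2D Gaussian off-axis term."""
    inv_sq = (ssd / (ssd + z)) ** 2
    s = sigma_of_depth(z)
    gauss = math.exp(-(x * x + y * y) / (2.0 * s * s)) / (2.0 * math.pi * s * s)
    return central_axis_dose(z) * inv_sq * gauss

# Toy inputs: a crude depth-dose curve peaking near 150 mm and a beam
# width that grows slowly with depth (both invented).
dd = lambda z: math.exp(-((z - 150.0) / 40.0) ** 2)
sg = lambda z: 3.0 + 0.01 * z

print(pencil_beam_dose(0.0, 0.0, 150.0, dd, sg),
      pencil_beam_dose(10.0, 0.0, 150.0, dd, sg))
```

    The parallelization opportunity is clear from the form: every (voxel, pencil beam) pair is an independent evaluation of this product, which is what the GPU version exploits.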

  6. Single particle detecting telescope system

    International Nuclear Information System (INIS)

    Yamamoto, I.; Tomiyama, T.; Iga, Y.; Komatsubara, T.; Kanada, M.; Yamashita, Y.; Wada, T.; Furukawa, S.

    1981-01-01

    We constructed a single-particle detecting telescope system for detecting fractionally charged particles. The telescope consists of position-detecting counters, wall-less multi-cell chambers, single-particle detecting circuits, and a microcomputer system as a data I/O processor. In particular, the frequency of double-particle events in single-particle detection is compared with that in an ordinary measurement.

  7. State-of-the-Art in GPU-Based Large-Scale Volume Visualization

    KAUST Repository

    Beyer, Johanna

    2015-05-01

    This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera- and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. 'output-sensitive' algorithms and system designs. This leads to recent output-sensitive approaches that are 'ray-guided', 'visualization-driven' or 'display-aware'. In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks-the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context-the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey. © 2015 The Eurographics Association and John Wiley & Sons Ltd.

  8. State-of-the-Art in GPU-Based Large-Scale Volume Visualization

    KAUST Repository

    Beyer, Johanna; Hadwiger, Markus; Pfister, Hanspeter

    2015-01-01

    This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera- and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. 'output-sensitive' algorithms and system designs. This leads to recent output-sensitive approaches that are 'ray-guided', 'visualization-driven' or 'display-aware'. In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks-the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context-the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey. © 2015 The Eurographics Association and John Wiley & Sons Ltd.
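
    The 'address translation' idea running through this survey can be illustrated with a toy CPU-side page table that maps virtual brick coordinates to slots in a fixed-size brick cache, evicting the least-recently-used brick when the working set exceeds the cache. This is a minimal sketch of the concept only, not the data structure of any particular system in the survey:

```python
from collections import OrderedDict

class BrickPageTable:
    """Toy address translation for a bricked volume: virtual brick
    coordinate -> cache slot, with LRU eviction when the cache is full.
    A miss means the brick would have to be streamed in (out-of-core)."""

    def __init__(self, num_slots):
        self.num_slots = num_slots
        self.table = OrderedDict()  # brick coord -> slot, in LRU order

    def lookup(self, brick):
        if brick in self.table:
            self.table.move_to_end(brick)        # mark as recently used
            return self.table[brick], False      # hit: no upload needed
        if len(self.table) >= self.num_slots:
            _evicted, slot = self.table.popitem(last=False)  # evict LRU
        else:
            slot = len(self.table)
        self.table[brick] = slot
        return slot, True                        # miss: upload brick

pt = BrickPageTable(num_slots=2)
print(pt.lookup((0, 0, 0)))  # (0, True): miss, brick uploaded to slot 0
```

    A ray-guided renderer would drive `lookup` only with the bricks actually intersected by visible rays, which is exactly the output-sensitive working set the survey emphasizes.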

  9. GPU-based discrete element rigid body transport

    CSIR Research Space (South Africa)

    Govender, Nicolin

    2013-08-01

    Full Text Available For applications in coastal engineering and also in pavement engineering, the capture of particle shapes as polyhedra rather than clumped spheres is particularly important. The development of a Discrete Element Model applicable to both fields, and to industrial...

  10. GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    Science.gov (United States)

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2018-01-01

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.
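
    As a toy illustration of the evolution-strategy idea, the sketch below runs a (1+1) ES over a 2D translation only; the actual method operates on large populations on the GPU and also searches rotations while handling point labels, so all parameters and the decay schedule here are illustrative:

```python
import random

def sq_error(offset, cloud_a, cloud_b):
    """Sum of squared distances between corresponding points after
    translating cloud_b by `offset`."""
    ox, oy = offset
    return sum((ax - bx - ox) ** 2 + (ay - by - oy) ** 2
               for (ax, ay), (bx, by) in zip(cloud_a, cloud_b))

def one_plus_one_es(cloud_a, cloud_b, iters=2000, sigma=0.5,
                    decay=0.999, seed=7):
    """Tiny (1+1) evolution strategy: mutate the current translation
    with Gaussian noise, keep the child only if it is no worse, and
    slowly shrink the mutation radius."""
    rng = random.Random(seed)
    parent = (0.0, 0.0)
    best = sq_error(parent, cloud_a, cloud_b)
    for _ in range(iters):
        child = (parent[0] + rng.gauss(0.0, sigma),
                 parent[1] + rng.gauss(0.0, sigma))
        err = sq_error(child, cloud_a, cloud_b)
        if err <= best:
            parent, best = child, err
        sigma *= decay
    return parent, best

# cloud_b is cloud_a shifted by (2, -1); the ES should recover the
# inverse offset, roughly (-2, 1).
a = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
b = [(x + 2.0, y - 1.0) for x, y in a]
offset, err = one_plus_one_es(a, b)
print(offset, err)
```

    A GPU version would evaluate thousands of such candidate transforms per generation in parallel, which is where the reported factor-1,500 speedups for large populations come from.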

  11. NaNet-10: a 10GbE network interface card for the GPU-based low-level trigger of the NA62 RICH detector

    International Nuclear Information System (INIS)

    Ammendola, R.; Biagioni, A.; Frezza, O.; Lonardo, A.; Cicero, F. Lo; Martinelli, M.; Paolucci, P.S.; Pastorelli, E.; Simula, F.; Tosoratto, L.; Vicini, P.; Fiorini, M.; Neri, I.; Lamanna, G.; Piandani, R.; Pontisso, L.; Sozzi, M.; Rossetti, D.

    2016-01-01

    A GPU-based low level (L0) trigger is currently integrated in the experimental setup of the RICH detector of the NA62 experiment to assess the feasibility of building more refined physics-related trigger primitives and thus improve the trigger discriminating power. To ensure the real-time operation of the system, a dedicated data transport mechanism has been implemented: an FPGA-based Network Interface Card (NaNet-10) receives data from detectors and forwards them with low, predictable latency to the memory of the GPU performing the trigger algorithms. Results of the ring-shaped hit patterns reconstruction will be reported and discussed

  12. Particle measurement systems and methods

    Science.gov (United States)

    Steele, Paul T [Livermore, CA

    2011-10-04

    A system according to one embodiment includes a light source for generating light fringes; a sampling mechanism for directing a particle through the light fringes; and at least one light detector for detecting light scattered by the particle as the particle passes through the light fringes. A method according to one embodiment includes generating light fringes using a light source; directing a particle through the light fringes; and detecting light scattered by the particle as the particle passes through the light fringes using at least one light detector.

  13. GPU-based fast pencil beam algorithm for proton therapy

    International Nuclear Information System (INIS)

    Fujimoto, Rintaro; Nagamine, Yoshihiko; Kurihara, Tsuneya

    2011-01-01

    Performance of a treatment planning system is an essential factor in making sophisticated plans. The dose calculation is a major time-consuming process in planning operations. The standard algorithm for proton dose calculations is the pencil beam algorithm which produces relatively accurate results, but is time consuming. In order to shorten the computational time, we have developed a GPU (graphics processing unit)-based pencil beam algorithm. We have implemented this algorithm and calculated dose distributions in the case of a water phantom. The results were compared to those obtained by a traditional method with respect to the computational time and discrepancy between the two methods. The new algorithm shows 5-20 times faster performance using the NVIDIA GeForce GTX 480 card in comparison with the Intel Core-i7 920 processor. The maximum discrepancy of the dose distribution is within 0.2%. Our results show that GPUs are effective for proton dose calculations.

  14. GPU based Monte Carlo for PET image reconstruction: parameter optimization

    International Nuclear Information System (INIS)

    Cserkaszky, Á; Légrády, D.; Wirth, A.; Bükki, T.; Patay, G.

    2011-01-01

    This paper presents the optimization of a fully Monte Carlo (MC) based iterative image reconstruction of Positron Emission Tomography (PET) measurements. With our MC reconstruction method all the physical effects in a PET system are taken into account, thus superior image quality is achieved in exchange for increased computational effort. The method is feasible because we utilize the enormous processing power of Graphical Processing Units (GPUs) to solve the inherently parallel problem of photon transport. The MC approach regards the simulated positron decays as samples in the mathematical sums required by the iterative reconstruction algorithm, so to complement the fast architecture, our optimization work focuses on the number of simulated positron decays required to obtain sufficient image quality. We have achieved significant results in determining the optimal number of samples for arbitrary measurement data, which allows the best image quality to be achieved with the least possible computational effort. Based on this research, recommendations can be given for effective partitioning of computational effort into the iterations in limited-time reconstructions. (author)
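
    The trade-off the authors optimize, image quality versus the number of simulated decays, follows from the 1/sqrt(n) error scaling of any Monte Carlo estimator. A stand-in sketch using a simple MC estimate of pi (unrelated to PET, but exhibiting the same scaling):

```python
import math
import random

def mc_pi(n, seed=0):
    """Monte Carlo estimate of pi from n uniform points in the unit
    square; its statistical error shrinks like 1/sqrt(n), which is the
    scaling behind choosing how many decays to simulate per iteration."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n))
    return 4.0 * hits / n

for n in (100, 10_000, 200_000):
    print(n, abs(mc_pi(n) - math.pi))
```

    Quadrupling the sample count only halves the statistical error, so past some point extra samples buy almost no image quality, which is why an optimal per-iteration sample count exists.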

  15. GPU-based Parallel Application Design for Emerging Mobile Devices

    Science.gov (United States)

    Gupta, Kshitij

    A revolution is underway in the computing world that is causing a fundamental paradigm shift in device capabilities and form-factor, with a move from well-established legacy desktop/laptop computers to mobile devices in varying sizes and shapes. Amongst all the tasks these devices must support, graphics has emerged as the 'killer app' for providing a fluid user interface and high-fidelity game rendering, effectively making the graphics processor (GPU) one of the key components in (present and future) mobile systems. By utilizing the GPU as a general-purpose parallel processor, this dissertation explores the GPU computing design space from an applications standpoint, in the mobile context, by focusing on key challenges presented by these devices---limited compute, memory bandwidth, and stringent power consumption requirements---while improving the overall application efficiency of the increasingly important speech recognition workload for mobile user interaction. We broadly partition trends in GPU computing into four major categories. We analyze hardware and programming model limitations in current-generation GPUs and detail an alternate programming style called Persistent Threads, identify four use case patterns, and propose minimal modifications that would be required for extending native support. We show how by manually extracting data locality and altering the speech recognition pipeline, we are able to achieve significant savings in memory bandwidth while simultaneously reducing the compute burden on GPU-like parallel processors. As we foresee GPU computing to evolve from its current 'co-processor' model into an independent 'applications processor' that is capable of executing complex work independently, we create an alternate application framework that enables the GPU to handle all control-flow dependencies autonomously at run-time while minimizing host involvement to just issuing commands, that facilitates an efficient application implementation. 
Finally, as

  16. GPU based numerical simulation of core shooting process

    Directory of Open Access Journals (Sweden)

    Yi-zhong Zhang

    2017-11-01

    Full Text Available Core shooting is the most widely used technique to make sand cores and it plays an important role in the quality of sand cores. Although numerical simulation can hopefully optimize the core shooting process, research on numerical simulation of the core shooting process is very limited. Based on a two-fluid model (TFM) and a kinetic-friction constitutive correlation, a program for 3D numerical simulation of the core shooting process has been developed and has achieved good agreement with in-situ experiments. To meet the needs of engineering applications, a graphics processing unit (GPU) has also been used to improve the calculation efficiency. The parallel algorithm based on the Compute Unified Device Architecture (CUDA) platform can significantly decrease computing time through multi-threaded GPU execution. In this work, the program accelerated by the CUDA parallelization method was developed and the accuracy of the calculations was ensured by comparison with in-situ experimental results photographed by a high-speed camera. The design and optimization of the parallel algorithm were discussed. The simulation result of a sand core test-piece demonstrated the improvement in calculation efficiency by the GPU. The developed program has also been validated by in-situ experiments with a transparent core-box, a high-speed camera, and a pressure measuring system. The computing time of the parallel program was reduced by nearly 95% while the simulation result remained quite consistent with the experimental data. The GPU parallelization method successfully solves the problem of low computational efficiency of the 3D sand shooting simulation program, and the developed GPU program is thus appropriate for engineering applications.

  17. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy

    International Nuclear Information System (INIS)

    Tian, Zhen; Jia, Xun; Jiang, Steve B; Graves, Yan Jiang

    2014-01-01

    Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL it came from. The dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the d_max dose for the open fields tested improved on average from 70.56% to 99.36% for the 2%/2 mm criterion and from 32.22% to 89.65% for the 1%/1 mm criterion. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The
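
    The commissioning step above solves a regularized least-squares problem in the PSL weights. As a toy stand-in for the paper's augmented-Lagrangian solver, the sketch below fits weights to a "measured" dose by projected gradient descent with a small smoothness penalty; the dose kernels and all numbers are invented:

```python
def fit_psl_weights(psl_doses, measured, iters=3000, lr=0.05, smooth=0.001):
    """Adjust PSL weights w so that sum_j w_j * psl_doses[j] matches the
    measured dose: gradient descent on the least-squares residual plus a
    small penalty pulling each weight toward its neighbors, projecting
    weights to stay nonnegative. A toy stand-in, not the paper's solver."""
    m, n = len(psl_doses), len(measured)
    w = [1.0] * m
    for _ in range(iters):
        pred = [sum(w[j] * psl_doses[j][i] for j in range(m))
                for i in range(n)]
        resid = [p - d for p, d in zip(pred, measured)]
        grads = []
        for j in range(m):
            g = sum(2.0 * resid[i] * psl_doses[j][i] for i in range(n))
            if j > 0:
                g += smooth * (w[j] - w[j - 1])       # smoothness term
            if j < m - 1:
                g += smooth * (w[j] - w[j + 1])
            grads.append(g)
        w = [max(0.0, wj - lr * g) for wj, g in zip(w, grads)]
    return w

# Three overlapping (invented) PSL dose kernels over four dose points,
# with a "measurement" generated from known weights [0.5, 1.0, 1.5]:
kernels = [[1.0, 0.5, 0.0, 0.0],
           [0.5, 1.0, 0.5, 0.0],
           [0.0, 0.5, 1.0, 0.5]]
measured = [1.0, 2.0, 2.0, 0.75]
w = fit_psl_weights(kernels, measured)
print([round(x, 2) for x in w])
```

    The fitted weights should land near the generating values, with the smoothness penalty introducing only a slight bias; the paper's symmetry regularization and Lagrangian constraint handling are omitted here.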

  18. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    International Nuclear Information System (INIS)

    Min Yugang; Santhanam, Anand; Ruddy, Bari H; Neelakkantan, Harini; Meeks, Sanford L; Kupelian, Patrick A

    2010-01-01

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan using Pinnacle Treatment Planning System, Phillips, for one of the 3DCTs of the 4DCT and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  19. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    Energy Technology Data Exchange (ETDEWEB)

    Min Yugang; Santhanam, Anand; Ruddy, Bari H [University of Central Florida, FL (United States); Neelakkantan, Harini; Meeks, Sanford L [M D Anderson Cancer Center Orlando, FL (United States); Kupelian, Patrick A, E-mail: anand.santhanam@orlandohealth.co [Department of Radiation Oncology, University of California, Los Angeles, CA (United States)

    2010-09-07

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan using Pinnacle Treatment Planning System, Phillips, for one of the 3DCTs of the 4DCT and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  20. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion.

    Science.gov (United States)

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H; Meeks, Sanford L; Kupelian, Patrick A

    2010-09-07

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan using Pinnacle Treatment Planning System, Phillips, for one of the 3DCTs of the 4DCT and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.
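
    The delivered-dose summation described in these three records can be sketched in a few lines: the total for a beam is the per-voxel sum of the dose deposited at each discrete motion step (lung-volume increment) while the beam is on. The voxel grid and dose values below are invented:

```python
def delivered_dose(per_step_dose, beam_on_steps):
    """Accumulate per-voxel dose over the discrete tumor-motion steps
    during which the beam is on."""
    total = [0.0] * len(per_step_dose[0])
    for step in beam_on_steps:
        for v, dose in enumerate(per_step_dose[step]):
            total[v] += dose
    return total

# 20 motion steps per breath (as in the framework), 3 voxels: the tumor
# voxel (index 1) receives dose at every step, while a neighboring voxel
# is only carried into the beam on every fourth step.
steps = [[0.0, 1.0, 0.5 if s % 4 == 0 else 0.0] for s in range(20)]
print(delivered_dose(steps, beam_on_steps=range(20)))  # [0.0, 20.0, 2.5]
```

    Running this per beam and summing over beams gives the deposited-dose map; the framework's real-time claim amounts to completing each such step-wise accumulation within the delivery time.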

  1. Rapid simulation of X-ray transmission imaging for baggage inspection via GPU-based ray-tracing

    Science.gov (United States)

    Gong, Qian; Stoian, Razvan-Ionut; Coccarelli, David S.; Greenberg, Joel A.; Vera, Esteban; Gehm, Michael E.

    2018-01-01

    We present a pipeline that rapidly simulates X-ray transmission imaging for arbitrary system architectures using GPU-based ray-tracing techniques. The purpose of the pipeline is to enable statistical analysis of threat detection in the context of airline baggage inspection. As a faster alternative to Monte Carlo methods, we adopt a deterministic approach for simulating photoelectric absorption-based imaging. The highly-optimized NVIDIA OptiX API is used to implement ray-tracing, greatly speeding code execution. In addition, we implement the first hierarchical representation structure to determine the interaction path length of rays traversing heterogeneous media described by layered polygons. The accuracy of the pipeline has been validated by comparing simulated data with experimental data collected using a heterogeneous phantom and a laboratory X-ray imaging system. On a single computer, our approach allows us to generate over 400 2D transmission projections (125 × 125 pixels per frame) per hour for a bag packed with hundreds of everyday objects. By implementing our approach on cloud-based GPU computing platforms, we find that the same 2D projections of approximately 3.9 million bags can be obtained in a single day using 400 GPU instances, at a cost of only 0.001 per bag.
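
The deterministic (non-Monte-Carlo) transmission model described above reduces, per detector pixel, to accumulating attenuation over each ray's path segments through the objects it crosses. A minimal CPU sketch of that Beer-Lambert accumulation (the attenuation coefficients and the `transmission` helper are illustrative, not part of the paper's OptiX pipeline):

```python
import math

def transmission(segments):
    """Beer-Lambert transmission for a ray crossing a sequence of
    (path_length_cm, mu_per_cm) segments: I/I0 = exp(-sum(mu_i * L_i)).
    The per-segment path lengths are what the ray tracer computes."""
    optical_depth = sum(length * mu for length, mu in segments)
    return math.exp(-optical_depth)

# Ray crossing 2 cm of a water-like material (mu = 0.2/cm)
# then 1 cm of an aluminum-like material (mu = 0.5/cm).
t = transmission([(2.0, 0.2), (1.0, 0.5)])
```

Each pixel's ray is independent, which is exactly the parallelism a GPU ray tracer exploits.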

  2. GPU-based parallel computing in real-time modeling of atmospheric transport and diffusion of radioactive material

    International Nuclear Information System (INIS)

    Santos, Marcelo C. dos; Pereira, Claudio M.N.A.; Schirru, Roberto; Pinheiro, André; Coordenacao de Pos-Graduacao e Pesquisa de Engenharia

    2017-01-01

    Atmospheric radionuclide dispersion systems (ARDS) are essential mechanisms to predict the consequences of unexpected radioactive releases from nuclear power plants. During an accident involving a release of radioactive material, an accurate forecast is vital to guide the evacuation of potentially affected areas. However, in order to predict the dispersion of the radioactive material and its impact on the environment, the model must process information about the source term (radioactive materials released, activities and location), weather conditions (wind, humidity and precipitation) and geographical characteristics (topography). An ARDS is basically composed of four main modules: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The Wind Field and Plume Dispersion modules are the ones that require high computational performance to achieve accurate results within an acceptable time. Taking this into account, this work focuses on the development of a GPU-based parallel Plume Dispersion module, focusing on the radionuclide transport and diffusion calculations, which use a given wind field and a released source term as parameters. The program is being developed using the C++ programming language together with CUDA libraries. In a comparative case study between parallel and sequential versions of the slowest function of the Plume Dispersion module, a speedup of 11.63 times was observed. (author)

  3. GPU-based parallel computing in real-time modeling of atmospheric transport and diffusion of radioactive material

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Marcelo C. dos; Pereira, Claudio M.N.A.; Schirru, Roberto; Pinheiro, André, E-mail: jovitamarcelo@gmail.com, E-mail: cmnap@ien.gov.br, E-mail: schirru@lmp.ufrj.br, E-mail: apinheiro99@gmail.com [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2017-07-01

    Atmospheric radionuclide dispersion systems (ARDS) are essential mechanisms to predict the consequences of unexpected radioactive releases from nuclear power plants. During an accident involving a release of radioactive material, an accurate forecast is vital to guide the evacuation of potentially affected areas. However, in order to predict the dispersion of the radioactive material and its impact on the environment, the model must process information about the source term (radioactive materials released, activities and location), weather conditions (wind, humidity and precipitation) and geographical characteristics (topography). An ARDS is basically composed of four main modules: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The Wind Field and Plume Dispersion modules are the ones that require high computational performance to achieve accurate results within an acceptable time. Taking this into account, this work focuses on the development of a GPU-based parallel Plume Dispersion module, focusing on the radionuclide transport and diffusion calculations, which use a given wind field and a released source term as parameters. The program is being developed using the C++ programming language together with CUDA libraries. In a comparative case study between parallel and sequential versions of the slowest function of the Plume Dispersion module, a speedup of 11.63 times was observed. (author)
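
The transport-and-diffusion update at the heart of such a Plume Dispersion module is a per-cell stencil, which is why it maps well to one GPU thread per grid cell. A 1D CPU sketch of one explicit time step (the upwind/central scheme, periodic boundaries, and all coefficients are illustrative, not the authors' model):

```python
import numpy as np

def advect_diffuse_step(c, u, D, dx, dt):
    """One explicit step of dc/dt = -u*dc/dx + D*d2c/dx2.
    Upwind differencing for advection (valid for u > 0), central
    differencing for diffusion; np.roll gives periodic boundaries.
    Every cell updates independently, so a CUDA kernel can assign
    one thread per cell."""
    adv = -u * (c - np.roll(c, 1)) / dx
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx ** 2
    return c + dt * (adv + dif)

c0 = np.zeros(64)
c0[32] = 1.0                                  # unit release in one cell
c1 = advect_diffuse_step(c0, u=1.0, D=0.1, dx=1.0, dt=0.1)
```

With periodic boundaries the scheme conserves total mass, and the upwind term skews the plume in the wind direction.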

  4. Particle detection systems and methods

    Science.gov (United States)

    Morris, Christopher L.; Makela, Mark F.

    2010-05-11

    Techniques, apparatus and systems for detecting particles such as muons and neutrons. In one implementation, a particle detection system employs a plurality of drift cells, which can be for example sealed gas-filled drift tubes, arranged on sides of a volume to be scanned to track incoming and outgoing charged particles, such as cosmic ray-produced muons. The drift cells can include a neutron sensitive medium to enable concurrent counting of neutrons. The system can selectively detect devices or materials, such as iron, lead, gold, uranium, plutonium, and/or tungsten, occupying the volume from multiple scattering of the charged particles passing through the volume and can concurrently detect any unshielded neutron sources occupying the volume from neutrons emitted therefrom. If necessary, the drift cells can be used to also detect gamma rays. The system can be employed to inspect occupied vehicles at border crossings for nuclear threat objects.

  5. Development of parallel GPU based algorithms for problems in nuclear area

    International Nuclear Information System (INIS)

    Almeida, Adino Americo Heimlich

    2009-01-01

    Graphics Processing Units (GPU) are high performance co-processors intended, originally, to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPU for general purpose, their application has been extended to other fields outside the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPU in two typical problems of the nuclear area: neutron transport simulation by the Monte Carlo method, and solution of the heat equation in a two-dimensional domain by the finite-difference method. To achieve this, we developed parallel algorithms for GPU and CPU for the two problems described above. The comparison showed that the GPU-based approach is faster than the CPU on a computer with two quad-core processors, without precision loss. (author)

  6. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of the matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up a calculation process that usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on the GPU (graphics processing unit).
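
Whatever partitioning assigns nonzeros to processors, the kernel being parallelized is sparse matrix-vector multiplication; in CSR storage each row's dot product is independent, which is what a one-thread-per-row CUDA kernel exploits. A CPU reference sketch (the hypergraph partitioning step itself is not shown):

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """y = A @ x with A stored in CSR form (indptr/indices/data).
    Each row's partial dot product is independent of the others,
    which is the parallelism a GPU kernel maps to threads."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(indptr) - 1):
        start, end = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

# A = [[1, 0, 2],
#      [0, 3, 0]]  in CSR form:
indptr = [0, 2, 3]
indices = np.array([0, 2, 1])
data = np.array([1.0, 2.0, 3.0])
y = csr_spmv(indptr, indices, data, np.array([1.0, 1.0, 1.0]))
```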

  7. GPU-based high performance Monte Carlo simulation in neutron transport

    Energy Technology Data Exchange (ETDEWEB)

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Inteligencia Artificial Aplicada], e-mail: cmnap@ien.gov.br

    2009-07-01

    Graphics Processing Units (GPU) are high performance co-processors intended, originally, to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPU for general purpose, their application has been extended to other fields outside the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPU in neutron transport simulation by the Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)

  8. GPU-based high performance Monte Carlo simulation in neutron transport

    International Nuclear Information System (INIS)

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A.

    2009-01-01

    Graphics Processing Units (GPU) are high performance co-processors intended, originally, to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPU for general purpose, their application has been extended to other fields outside the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPU in neutron transport simulation by the Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)
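
The speedups these records report come from the independence of Monte Carlo histories. A toy CPU example for mono-energetic neutrons crossing a purely absorbing slab, where the estimate can be checked against the analytic answer exp(-Σt·d) (a stand-in problem, not the one benchmarked in the papers):

```python
import math
import random

def slab_transmission(n_histories, sigma_t, thickness, seed=1):
    """Fraction of neutrons crossing a purely absorbing slab.
    Each history samples one free flight s = -ln(xi) / sigma_t;
    histories are independent, which is what a GPU parallelizes
    (one history per thread, with per-thread RNG streams)."""
    rng = random.Random(seed)
    transmitted = sum(
        1 for _ in range(n_histories)
        if -math.log(rng.random()) / sigma_t > thickness
    )
    return transmitted / n_histories

est = slab_transmission(100_000, sigma_t=1.0, thickness=2.0)
# analytic transmission for this slab is exp(-2)
```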

  9. A GPU-based solution for fast calculation of the betweenness centrality in large weighted networks

    Directory of Open Access Journals (Sweden)

    Rui Fan

    2017-12-01

    Betweenness, a widely employed centrality measure in network science, is a decent proxy for investigating network loads and rankings. However, its extremely high computational cost greatly hinders its applicability in large networks. Although several parallel algorithms have been presented to reduce its calculation cost for unweighted networks, a fast solution for weighted networks, which are commonly encountered in many realistic applications, is still lacking. In this study, we develop an efficient parallel GPU-based approach to boost the calculation of the betweenness centrality (BC) for large weighted networks. We parallelize the traditional Dijkstra algorithm by selecting more than one frontier vertex each time and then inspecting the frontier vertices simultaneously. By combining the parallel SSSP algorithm with the parallel BC framework, our GPU-based betweenness algorithm achieves much better performance than its CPU counterparts. Moreover, to further improve performance, we integrate the work-efficient strategy, and to address the load-imbalance problem, we introduce a warp-centric technique, which assigns many threads rather than one to a single frontier vertex. Experiments on both realistic and synthetic networks demonstrate the efficiency of our solution, which achieves 2.9× to 8.44× speedups over the parallel CPU implementation. Our algorithm is open-source and free to the community; it is publicly available through https://dx.doi.org/10.6084/m9.figshare.4542405. Considering the pervasive deployment and declining price of GPUs in personal computers and servers, our solution will offer unprecedented opportunities for exploring betweenness-related problems and will motivate follow-up efforts in network science.
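
For context, the sequential baseline being parallelized here is Brandes' algorithm: one SSSP pass per source (Dijkstra, for weighted graphs) followed by dependency accumulation in reverse settle order. A compact CPU sketch (per-source passes are the coarse-grained parallelism; the paper's multi-frontier and warp-centric refinements are not reproduced):

```python
import heapq
from collections import defaultdict

def betweenness(adj):
    """Brandes' betweenness for a weighted digraph adj: {u: [(v, w), ...]}.
    For each source: Dijkstra SSSP with shortest-path counts (sigma),
    then back-propagate pair dependencies (delta) in reverse settle order."""
    bc = defaultdict(float)
    nodes = set(adj) | {v for nbrs in adj.values() for v, _ in nbrs}
    for s in nodes:
        dist = {s: 0.0}
        sigma = defaultdict(float); sigma[s] = 1.0
        preds = defaultdict(list)
        order = []                      # settle order
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue                # stale heap entry
            order.append(u)
            for v, w in adj.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd; sigma[v] = sigma[u]; preds[v] = [u]
                    heapq.heappush(pq, (nd, v))
                elif nd == dist[v]:
                    sigma[v] += sigma[u]; preds[v].append(u)
        delta = defaultdict(float)
        for u in reversed(order):
            for p in preds[u]:
                delta[p] += sigma[p] / sigma[u] * (1.0 + delta[u])
            if u != s:
                bc[u] += delta[u]
    return dict(bc)

# Directed path 0 -> 1 -> 2: node 1 lies on the only 0->2 shortest path.
bc = betweenness({0: [(1, 1.0)], 1: [(2, 1.0)]})
```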

  10. BLAZE-DEM: A GPU based Polyhedral DEM particle transport code

    CSIR Research Space (South Africa)

    Govender, Nicolin

    2013-05-01

    …expensive and cannot be done in real time. This paper will discuss methods and algorithms that substantially reduce the computational run-time of such simulations. An example is the spatial partitioning and hashing algorithm that allows just the nearest...
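
The spatial partitioning and hashing idea the abstract refers to: bucket particles into grid cells no smaller than the interaction radius, so contact detection only inspects the 3^d surrounding cells instead of all N particles. A minimal 2D sketch (the helper names `build_grid` and `neighbor_candidates` are made up for illustration):

```python
from collections import defaultdict
from itertools import product

def build_grid(positions, cell):
    """Hash each particle index into an integer grid cell of size `cell`.
    With cell >= interaction radius, all potential contacts of a particle
    lie in its own cell or the adjacent ones."""
    grid = defaultdict(list)
    for i, p in enumerate(positions):
        grid[tuple(int(c // cell) for c in p)].append(i)
    return grid

def neighbor_candidates(positions, i, grid, cell):
    """Indices of particles in the 3^d cells around particle i -- the only
    ones that can possibly be within `cell` of it."""
    base = tuple(int(c // cell) for c in positions[i])
    out = []
    for offset in product((-1, 0, 1), repeat=len(base)):
        key = tuple(b + o for b, o in zip(base, offset))
        out.extend(j for j in grid.get(key, []) if j != i)
    return out

pts = [(0.1, 0.1), (0.3, 0.2), (5.0, 5.0)]
grid = build_grid(pts, cell=1.0)
near = neighbor_candidates(pts, 0, grid, cell=1.0)
```

On GPU the same idea is typically realized by sorting particles by cell hash so each cell's particles are contiguous in memory.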

  11. Particle Systems and PDEs II

    CERN Document Server

    Soares, Ana

    2015-01-01

    This book focuses on mathematical problems concerning different applications in physics, engineering, chemistry and biology. It covers topics ranging from interacting particle systems to partial differential equations (PDEs), statistical mechanics and dynamical systems. The purpose of the second meeting on Particle Systems and PDEs was to bring together renowned researchers working actively in the respective fields, to discuss their topics of expertise and to present recent scientific results in both areas. Further, the meeting was intended to present the subject of interacting particle systems, its roots in and impacts on the field of physics, and its relation with PDEs to a vast and varied public, including young researchers. The book also includes the notes from two mini-courses presented at the conference, allowing readers who are less familiar with these areas of mathematics to more easily approach them. The contributions will be of interest to mathematicians, theoretical physicists and other researchers...

  12. Ultrafast cone-beam CT scatter correction with GPU-based Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Yuan Xu

    2014-03-01

    Purpose: Scatter artifacts severely degrade image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to automatically finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC noise from the simulated scatter images caused by low photon numbers. The method is validated on one simulated head-and-neck case with 364 projection angles. Results: We have examined the variation of the scatter signal among projection angles using Fourier analysis. It is found that scatter images at 31 angles are sufficient to restore those at all angles with < 0.1% error. For the simulated patient case with a resolution of 512 × 512 × 100, we simulated 5 × 10⁶ photons per angle. The total computation time is 20.52 seconds on an Nvidia GTX Titan GPU, and the time at each step is 2.53, 0.64, 14.78, 0.13, 0.19, and 2.25 seconds, respectively. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. It accomplishes the whole procedure of scatter correction and reconstruction within 30 seconds.
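
Step 4 of the pipeline exploits the smoothness of scatter across view angles: simulate at sparse angles, then interpolate to all projection angles. A toy sketch with a stand-in per-angle scatter scalar (the real signal is a 2D scatter image per angle, and the sparse/full angle counts here are illustrative):

```python
import numpy as np

# Scatter is simulated only at sparse angles and linearly interpolated
# to every projection angle; the paper reports 31 sparse angles sufficing
# for 364 projections with < 0.1% error.
sparse_angles = np.linspace(0.0, 360.0, 13)           # simulated angles (deg)
all_angles = np.arange(0.0, 360.0, 1.0)               # every projection angle
scatter_sparse = np.cos(np.radians(sparse_angles)) + 2.0   # stand-in signal
scatter_all = np.interp(all_angles, sparse_angles, scatter_sparse)
```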

  13. Particle contamination in vacuum systems

    International Nuclear Information System (INIS)

    Martignac, J.; Bonin, B.; Henriot, C.; Poupeau, J.P.; Koltchakian, I.; Kocic, D.; Herbeaux, Ch.; Marx, J.P.

    1996-01-01

    Many vacuum devices, like RF cavities, are sensitive to particle contamination. This fact has motivated a considerable effort of cleanliness from the SRF community. The present paper reports the first results of a general study trying to identify the most contaminating steps during assembly and vacuum operation of the cavity. The steps investigated here are gasket assembly, evacuation and venting of the vacuum system, and operation of sputter ion pumps. (author)

  14. Particle contamination in vacuum systems

    International Nuclear Information System (INIS)

    Martignac, J.; Bonin, B.; Henriot, C.; Poupeau, J.P.; Koltchakian, I.; Kocic, D.; Herbeaux, Ch.; Marx, J.P.

    1996-01-01

    Many vacuum devices, like RF cavities, are sensitive to particle contamination. This fact has motivated a considerable effort of cleanliness from the SRF community. The first results of a general study trying to identify the most contaminating steps during assembly and vacuum operation of the cavity are reported. The steps investigated here are gasket assembly, evacuation and venting of the vacuum system, and operation of sputter ion pumps. (author)

  15. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Liu Li

    2013-01-01

    Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively little work has been done on 3D ultrasound speckle suppression, where the whole volume rather than just one frame needs to be considered. The most crucial problem with 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM) approach provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit (GPU)-based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma-distribution noise model, which is able to reliably capture image statistics for log-compressed ultrasound images, was used for the 3D block-wise NLM filter on the basis of a Bayesian framework. The most significant aspect of our method is the adoption of the powerful data-parallel computing capability of the GPU to improve the overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm.
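
The NLM principle, stripped to 1D and pixel-wise (the paper's version is 3D, block-wise, and uses a Gamma noise model): each sample becomes a weighted average of samples whose surrounding patches look similar. Every output sample is independent, which is what the GPU parallelizes. All parameters here are illustrative:

```python
import numpy as np

def nlm_1d(signal, patch=1, search=3, h=2.0):
    """Minimal pixel-wise 1D nonlocal means. For each sample i, weight
    every sample j in the search window by the similarity of the patches
    around i and j, then output the weighted average."""
    n = len(signal)
    pad = np.pad(signal, patch, mode="edge")
    out = np.empty(n)
    for i in range(n):
        pi = pad[i:i + 2 * patch + 1]
        weights, values = [], []
        for j in range(max(0, i - search), min(n, i + search + 1)):
            pj = pad[j:j + 2 * patch + 1]
            w = np.exp(-np.sum((pi - pj) ** 2) / h ** 2)
            weights.append(w)
            values.append(signal[j])
        out[i] = np.dot(weights, values) / np.sum(weights)
    return out

noisy = np.array([1.0, 1.1, 0.9, 1.0, 5.0, 1.0, 0.95, 1.05])
den = nlm_1d(noisy)
```

Because every weight is positive, the output stays within the range of the input, and the isolated outlier at index 4 is pulled toward its dissimilar but numerous neighbours.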

  16. GPU-based online track reconstruction for the MuPix-telescope

    Energy Technology Data Exchange (ETDEWEB)

    Grzesik, Carsten [JGU, Mainz (Germany); Collaboration: Mu3e-Collaboration

    2016-07-01

    The MuPix telescope is a beam telescope consisting of High Voltage Monolithic Active Pixel Sensors (HV-MAPS). This type of sensor is going to be used for the Mu3e experiment, which aims to measure the lepton-flavor-violating decay μ→ eee with an ultimate sensitivity of 10⁻¹⁶. This sensitivity requires a high muon decay rate on the order of 1 GHz, leading to a data rate of about 1 TBit/s for the whole detector. This needs to be reduced by a factor of 1000 using online event selection algorithms on graphics processing units (GPUs) before passing the data to storage. A test setup for the MuPix sensors and parts of the Mu3e tracking detector readout is realized in a four-plane telescope. The telescope can also be used to demonstrate the usability of an online track reconstruction using GPUs. As a result, the telescope can provide online information about the efficiencies of a device under test or the alignment of the telescope itself. This talk discusses the implementation of the GPU-based track reconstruction and shows some results from recent testbeam campaigns.
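
In a field-free telescope, a track candidate is a straight line through the hit positions of the planes, so each candidate's fit is an independent small least-squares problem; that independence is what makes GPU batching attractive. A minimal sketch (plane positions and hit coordinates are made up, and this is not the Mu3e reconstruction code):

```python
import numpy as np

def fit_track(z, x):
    """Least-squares straight-line fit x = a + b*z through telescope
    planes at positions z. Many candidates can be fitted independently,
    one per GPU thread, in an online selection."""
    A = np.vstack([np.ones_like(z), z]).T
    (a, b), *_ = np.linalg.lstsq(A, x, rcond=None)
    return a, b

z = np.array([0.0, 10.0, 20.0, 30.0])   # plane positions (mm), illustrative
x = np.array([1.0, 2.0, 3.0, 4.0])      # hits lying exactly on x = 1 + 0.1*z
a, b = fit_track(z, x)
```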

  17. GRay: A MASSIVELY PARALLEL GPU-BASED CODE FOR RAY TRACING IN RELATIVISTIC SPACETIMES

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal [Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721 (United States)

    2013-11-01

    We introduce GRay, a massively parallel integrator designed to trace the trajectories of billions of photons in a curved spacetime. This graphics-processing-unit (GPU)-based integrator employs the stream processing paradigm, is implemented in CUDA C/C++, and runs on nVidia graphics cards. The peak performance of GRay using single-precision floating-point arithmetic on a single GPU exceeds 300 GFLOP (or 1 ns per photon per time step). For a realistic problem, where the peak performance cannot be reached, GRay is two orders of magnitude faster than existing central-processing-unit-based ray-tracing codes. This performance enhancement allows more effective searches of large parameter spaces when comparing theoretical predictions of images, spectra, and light curves from the vicinities of compact objects to observations. GRay can also perform on-the-fly ray tracing within general relativistic magnetohydrodynamic algorithms that simulate accretion flows around compact objects. Making use of this algorithm, we calculate the properties of the shadows of Kerr black holes and the photon rings that surround them. We also provide accurate fitting formulae of their dependencies on black hole spin and observer inclination, which can be used to interpret upcoming observations of the black holes at the center of the Milky Way, as well as M87, with the Event Horizon Telescope.

  18. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    Science.gov (United States)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for GPU hardware architecture is of great significance. In order to solve the problem of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is also used to restore the image with hardly any recursion or iteration. Combining the algorithm with data intensiveness, data-parallel computing and the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for stream computing on the GPU. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and midfrequency-based filtering. Aiming at better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain after optimization of data access and the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data to ensure the transmission rate gets around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  19. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    International Nuclear Information System (INIS)

    Gu Xuejun; Jia Xun; Jiang, Steve B; Jelen, Urszula; Li Jinsheng

    2011-01-01

    Targeting at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm and for most cases the improvement is significant. Regarding the efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing the computational efficiency (∼5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning.

  20. A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom

    International Nuclear Information System (INIS)

    Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H.

    2014-08-01

    As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in PENELOPE, and the cross-section table used is the one generated by the Material routine, also present in the PENELOPE code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered. One of them is composed of a homogeneous water-based medium, the second is composed of bone, the third is composed of lung and the fourth is composed of a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which a stop at the frontier depends on whether the material changes along the photon's travel line. Dose calculations using these methods are compared for validation with the PENELOPE and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
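
The second transport approach described above, Woodcock (delta) tracking, samples flight distances with a majorant cross-section μ_max and accepts a real collision with probability μ(x)/μ_max, so photons never need to stop at voxel boundaries. A 1D sketch, checked against the analytic escape probability of a homogeneous medium (the function and its parameters are illustrative, not CUBMC code):

```python
import math
import random

def woodcock_first_collision(mu_of_x, mu_max, x_max, rng):
    """Sample the first real collision site in [0, x_max] by delta tracking:
    fly with the majorant mu_max, then accept the collision with probability
    mu(x)/mu_max; rejections are 'virtual' collisions that change nothing.
    Returns None if the particle escapes past x_max."""
    x = 0.0
    while True:
        x += -math.log(rng.random()) / mu_max
        if x >= x_max:
            return None
        if rng.random() < mu_of_x(x) / mu_max:
            return x

rng = random.Random(7)
# Homogeneous medium with mu = 0.5 over a thickness of 4:
# the escape probability must come out as exp(-0.5 * 4) = exp(-2).
n, escapes = 20_000, 0
for _ in range(n):
    if woodcock_first_collision(lambda x: 0.5, 1.0, 4.0, rng) is None:
        escapes += 1
```

Delta tracking is exact (not an approximation), which is why it can be validated against the closed-form attenuation law here.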

  1. GPU-based RFA simulation for minimally invasive cancer treatment of liver tumours.

    Science.gov (United States)

    Mariappan, Panchatcharam; Weir, Phil; Flanagan, Ronan; Voglreiter, Philip; Alhonnoro, Tuomas; Pollari, Mika; Moche, Michael; Busse, Harald; Futterer, Jurgen; Portugaller, Horst Rupert; Sequeiros, Roberto Blanco; Kolesnik, Marina

    2017-01-01

    Radiofrequency ablation (RFA) is one of the most popular and well-standardized minimally invasive cancer treatments (MICT) for liver tumours, employed where surgical resection has been contraindicated. Less-experienced interventional radiologists (IRs) require an appropriate planning tool for the treatment to help avoid incomplete treatment and so reduce the tumour recurrence risk. Although a few tools are available to predict the ablation lesion geometry, the process is computationally expensive. Also, in our implementation, a few patient-specific parameters are used to improve the accuracy of the lesion prediction. Advanced heterogeneous computing using personal computers, incorporating the graphics processing unit (GPU) and the central processing unit (CPU), is proposed to predict the ablation lesion geometry. The most recent GPU technology is used to accelerate the finite element approximation of the Pennes bioheat equation and a three-state cell model. Patient-specific input parameters are used in the bioheat model to improve the accuracy of the predicted lesion. A fast GPU-based RFA solver is developed to predict the lesion by doing most of the computational tasks on the GPU, while reserving the CPU for concurrent tasks such as lesion extraction based on the heat deposition at each finite element node. The solver takes less than 3 min for a treatment duration of 26 min. When the model receives patient-specific input parameters, the deviation between the real and predicted lesion is below 3 mm. A multi-centre retrospective study indicates that the fast RFA solver is capable of providing the IR with the predicted lesion in the short time period before the intervention begins, when the patient has been clinically prepared for the treatment.
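
The Pennes bioheat equation balances conduction, perfusion cooling toward arterial temperature, and the deposited RF power. A 1D explicit finite-difference sketch (coefficients, boundary handling, and the lumped perfusion constant are illustrative; the cited solver uses a patient-specific finite element model on GPU):

```python
import numpy as np

def pennes_step(T, dt, dx, k=0.5, rho_c=3.6e6, perf=2000.0, T_a=37.0, Q=0.0):
    """One explicit step of the 1D Pennes bioheat equation
        rho*c * dT/dt = k * d2T/dx2 + perf * (T_a - T) + Q,
    where perf lumps w_b * rho_b * c_b (W/m^3/K) and Q is the RF
    power density. Each node updates independently -> GPU-friendly."""
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx ** 2
    lap[0] = lap[-1] = 0.0           # crude insulated boundaries
    return T + dt / rho_c * (k * lap + perf * (T_a - T) + Q)

T = np.full(50, 37.0)                # body temperature (deg C)
T[25] = 90.0                         # hot spot near the RF electrode
T1 = pennes_step(T, dt=0.1, dx=1e-3)
```

The chosen dt and dx satisfy the explicit stability limit (the diffusion number k·dt/(ρc·dx²) ≈ 0.014 is well below 0.5), so the hot spot diffuses outward and perfusion pulls it back toward 37 °C.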

  2. A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom

    Energy Technology Data Exchange (ETDEWEB)

    Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)

    2014-08-15

    As the most accurate method to estimate absorbed dose in radiotherapy, Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based Mc photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross section table used is the one generated by the Material routine, als present in Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work four 128 x 128 x 128 voxel phantoms have been considered. One of them is composed by a homogeneous water-based media, the second is composed by bone, the third is composed by lung and the fourth is composed by a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. There are two distinct approaches that were used for transport simulation. The first of them forces the photon to stop at every voxel frontier, the second one is the Woodcock method, where the photon stop in the frontier will be considered depending on the material changing across the photon travel line. Dose calculations using these methods are compared for validation with Penelope and MCNP5 codes. Speed-up factors are compared using a NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)

  3. GPU-based simulation of optical propagation through turbulence for active and passive imaging

    Science.gov (United States)

    Monnier, Goulven; Duval, François-Régis; Amram, Solène

    2014-10-01

    IMOTEP is a GPU-based (Graphical Processing Units) software package relying on a fast parallel implementation of Fresnel diffraction through successive phase screens. Its applications include active imaging, laser telemetry and passive imaging through turbulence with anisoplanatic spatial and temporal fluctuations. Thanks to the parallel implementation on GPU, speedups ranging from 40X to 70X are achieved. The present paper gives a brief overview of IMOTEP models, algorithms, implementation and user interface. It then focuses on major improvements recently brought to the anisoplanatic imaging simulation method. Previously, we took advantage of the computational power offered by the GPU to develop a simulation method based on large series of deterministic realisations of the PSF distorted by turbulence. The phase screen propagation algorithm, by reproducing higher moments of the incident wavefront distortion, provides realistic PSFs. However, we first used a coarse Gaussian model to fit the numerical PSFs and characterise their spatial statistics through only 3 parameters (two-dimensional displacements of centroid, and width). This approach was unable to reproduce the effects related to the details of the PSF structure, especially the "speckles" leading to prominent high-frequency content in short-exposure images. To overcome this limitation, we recently implemented a new empirical model of the PSF, based on Principal Components Analysis (PCA), intended to capture most of the PSF complexity. The GPU implementation allows estimating and handling efficiently the numerous (up to several hundred) principal components typically required under the strong turbulence regime. A first demanding computational step involves PCA, phase screen propagation and covariance estimates. In a second step, realistic instantaneous images, fully accounting for anisoplanatic effects, are quickly generated. Preliminary results are presented.
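
Between successive phase screens, such codes repeatedly apply Fresnel diffraction as an FFT-based transfer function, which is where the GPU speedups concentrate. A minimal angular-spectrum sketch (grid size, wavelength, and distances are illustrative, and this is not IMOTEP's implementation):

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z with the Fresnel
    transfer function H(fx, fy) = exp(-i*pi*lambda*z*(fx^2 + fy^2)),
    applied in the Fourier domain: ifft2(fft2(field) * H)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

n = 64
aperture = np.zeros((n, n), dtype=complex)
aperture[28:36, 28:36] = 1.0                      # square aperture
out = fresnel_propagate(aperture, wavelength=1e-6, dx=1e-4, z=0.5)
```

Because |H| = 1 everywhere, the operation is unitary and total energy is conserved, a handy sanity check for any propagation kernel.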

  4. Interacting particle systems on graphs

    Science.gov (United States)

    Sood, Vishal

    In this dissertation, the dynamics of socially or biologically interacting populations are investigated. The individual members of the population are treated as particles that interact via links on a social or biological network represented as a graph. The effect of the structure of the graph on the properties of the interacting particle system is studied using statistical physics techniques. In the first chapter, the central concepts of graph theory and social and biological networks are presented. Next, interacting particle systems that are drawn from physics, mathematics and biology are discussed in the second chapter. In the third chapter, the random walk on a graph is studied. The mean time for a random walk to traverse between two arbitrary sites of a random graph is evaluated. Using an effective medium approximation it is found that the mean first-passage time between pairs of sites, as well as all moments of this first-passage time, are insensitive to the density of links in the graph. The inverse of the mean first-passage time varies non-monotonically with the density of links near the percolation transition of the random graph. Much of the behavior can be understood by simple heuristic arguments. Evolutionary dynamics, by which mutants overspread an otherwise uniform population on heterogeneous graphs, are studied in the fourth chapter. Such a process underlies epidemic propagation, the emergence of fads, social cooperation, and the invasion of an ecological niche by a new species. The first part of this chapter is devoted to neutral dynamics, in which the mutant genotype does not have a selective advantage over the resident genotype. The time to extinction of one of the two genotypes is derived. In the second part of this chapter, selective advantage or fitness is introduced such that the mutant genotype has a higher birth rate or a lower death rate. This selective advantage leads to a dynamical competition in which selection dominates for large populations.
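
The mean first-passage time studied in the third chapter can be computed exactly for a small graph by solving the linear system T(i) = 1 + Σ_j P(i,j) T(j) with T(target) = 0. The sketch below is a generic illustration (not code from the dissertation), checked against the known result for a ring of four nodes.

```python
import numpy as np

def mfpt(adj, target):
    """Mean first-passage times to `target` for a random walk on adjacency `adj`."""
    n = adj.shape[0]
    P = adj / adj.sum(axis=1, keepdims=True)      # transition probabilities
    idx = [i for i in range(n) if i != target]
    # Solve (I - P_sub) T = 1 on the non-target sites.
    A = np.eye(n - 1) - P[np.ix_(idx, idx)]
    T = np.linalg.solve(A, np.ones(n - 1))
    out = np.zeros(n)
    out[idx] = T
    return out

# 4-node ring: the known result is T = d * (N - d) for a site at distance d.
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)
times = mfpt(ring, target=0)                      # times[1] = 3, times[2] = 4
```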

  5. Infinite Particle Systems: Complex Systems III

    Directory of Open Access Journals (Sweden)

    Editorial Board

    2008-06-01

    Full Text Available In the years 2002-2005, a group of German and Polish mathematicians worked under a DFG research project No 436 POL 113/98/0-1 entitled "Methods of stochastic analysis in the theory of collective phenomena: Gibbs states and statistical hydrodynamics". The results of their study were summarized at the German-Polish conference, which took place in Poland in October 2005. The venue of the conference was Kazimierz Dolny upon Vistula - a lovely town and a popular place for various cultural, scientific, and even political events of international significance. The conference was also attended by scientists from France, Italy, Portugal, UK, Ukraine, and USA, which predetermined its international character. Since that time, the conference, entitled "Infinite Particle Systems: Complex Systems", has become an annual international event, attended by leading scientists from Germany, Poland and many other countries. The present volume of "Condensed Matter Physics" contains the proceedings of the conference "Infinite Particle Systems: Complex Systems III", which took place in June 2007.

  6. AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics

    Science.gov (United States)

    Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.

    2017-05-01

    We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs to disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving the hybrid model equations, are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing, well-tested hybrid model of plasma that runs in parallel on multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for the computation of Faraday's equation, resulting in an explicit-implicit scheme for the hybrid model equations. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.
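
One of the kernels mentioned above, calculating macroscopic properties of particles on a grid, reduces in its simplest form to a weighted deposition. Below is a minimal 1D cloud-in-cell deposition sketch; it is a generic PIC building block, not AMITIS source code.

```python
import numpy as np

def deposit_density(x, grid_n, dx):
    """Deposit unit-weight particles at positions x onto a periodic 1D grid."""
    rho = np.zeros(grid_n)
    xi = x / dx
    i0 = np.floor(xi).astype(int) % grid_n
    w1 = xi - np.floor(xi)                  # linear weight to the right-hand node
    np.add.at(rho, i0, 1.0 - w1)            # scatter-add; tolerates index collisions
    np.add.at(rho, (i0 + 1) % grid_n, w1)
    return rho / dx

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 100_000)         # particle positions
rho = deposit_density(x, grid_n=10, dx=1.0)
# the total deposited weight equals the particle count
```

On a GPU, this scatter-add is exactly the step that requires atomic operations or per-cell sorting, which is why deposition kernels dominate the design of GPU PIC codes.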

  7. The Review-of-Particle-Properties system

    International Nuclear Information System (INIS)

    Trippe, T.G.

    1984-01-01

    The Berkeley Particle Data Group is engaged in a major modernization of its primary project, the Review of Particle Properties, a compilation of experimental data on elementary particles. The goal of this modernization is to develop an integrated system for data storage, manipulation, interactive access and publication using modern techniques for database management, text processing and phototypesetting. The existing system and the plans for modernization are described. The group's other projects and the computer systems used are also discussed. (orig.)

  8. Desirable Elements for a Particle System Interface

    Directory of Open Access Journals (Sweden)

    Daniel Schroeder

    2014-01-01

    Full Text Available Particle systems have many applications, with the most popular being to produce special effects in video games and films. To permit particle systems to be created quickly and easily, Particle System Interfaces (PSIs have been developed. A PSI is a piece of software designed to perform common tasks related to particle systems for clients, while providing them with a set of parameters whose values can be adjusted to create different particle systems. Most PSIs are inflexible, and when clients require functionality that is not supported by the PSI they are using, they are forced to either find another PSI that meets their requirements or, more commonly, create their own particle system or PSI from scratch. This paper presents three original contributions. First, it identifies 18 features that a PSI should provide in order to be capable of creating diverse effects. If these features are implemented in a PSI, clients will be more likely to be able to accomplish all desired effects related to particle systems with one PSI. Secondly, it introduces a novel use of events to determine, at run time, which particle system code to execute in each frame. Thirdly, it describes a software architecture called the Dynamic Particle System Framework (DPSF. Simulation results show that DPSF possesses all 18 desirable features.
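
The event mechanism described in the second contribution can be sketched as a list of callables consulted once per frame to decide which particle-system code runs. The class and function names below are illustrative assumptions, not the DPSF API.

```python
import random

class Particle:
    def __init__(self, pos, vel, life):
        self.pos, self.vel, self.life = pos, vel, life

class ParticleSystem:
    def __init__(self):
        self.particles = []
        self.events = []                   # callables run once per frame

    def emit(self, n):
        for _ in range(n):
            vel = (random.uniform(-1.0, 1.0), random.uniform(1.0, 2.0))
            self.particles.append(Particle((0.0, 0.0), vel, life=10))

    def update(self, dt):
        for ev in self.events:             # event hooks choose extra behaviour
            ev(self, dt)
        for p in self.particles:
            p.pos = (p.pos[0] + p.vel[0] * dt, p.pos[1] + p.vel[1] * dt)
            p.life -= 1
        self.particles = [p for p in self.particles if p.life > 0]

def gravity(system, dt):
    """Example per-frame event: apply gravity to every live particle."""
    for p in system.particles:
        p.vel = (p.vel[0], p.vel[1] - 9.8 * dt)

ps = ParticleSystem()
ps.events.append(gravity)
ps.emit(100)
for _ in range(5):
    ps.update(0.1)                         # all 100 particles still alive here
```

Swapping event callables at run time, rather than subclassing, is what allows one framework to drive many different effects.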

  9. GPU-based streaming architectures for fast cone-beam CT image reconstruction and demons deformable registration

    International Nuclear Information System (INIS)

    Sharp, G C; Kandasamy, N; Singh, H; Folkert, M

    2007-01-01

    This paper shows how to significantly accelerate cone-beam CT reconstruction and 3D deformable image registration using the stream-processing model. We describe data-parallel designs for the Feldkamp, Davis and Kress (FDK) reconstruction algorithm, and the demons deformable registration algorithm, suitable for use on a commodity graphics processing unit. The streaming versions of these algorithms are implemented using the Brook programming environment and executed on an NVidia 8800 GPU. Performance results using CT data of a preserved swine lung indicate that the GPU-based implementations of the FDK and demons algorithms achieve a substantial speedup, up to 80 times for FDK and 70 times for demons, when compared to an optimized reference implementation on a 2.8 GHz Intel processor. In addition, the accuracy of the GPU-based implementations was found to be excellent. Compared with CPU-based implementations, the RMS differences were less than 0.1 Hounsfield unit for reconstruction and less than 0.1 mm for deformable registration.
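
The demons algorithm accelerated above updates a displacement field from the image difference and the fixed-image gradient. A minimal sketch of one classic (Thirion-style) demons increment follows, as a generic formulation rather than the paper's GPU code.

```python
import numpy as np

def demons_step(fixed, moving, eps=1e-9):
    """One additive demons displacement increment for 2D images."""
    gy, gx = np.gradient(fixed)             # fixed-image gradient
    diff = moving - fixed
    denom = gx**2 + gy**2 + diff**2 + eps   # classic demons normalisation
    return diff * gx / denom, diff * gy / denom

# Two shifted squares: the force is nonzero only where the images disagree.
fixed = np.zeros((32, 32)); fixed[8:24, 8:24] = 1.0
moving = np.zeros((32, 32)); moving[10:26, 10:26] = 1.0
ux, uy = demons_step(fixed, moving)
```

In a full registration loop this increment is accumulated into a displacement field and smoothed (e.g. with a Gaussian) between iterations; both steps are embarrassingly parallel per pixel, which is what makes the GPU mapping effective.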

  10. Habitat Particle Impact Monitoring System

    Data.gov (United States)

    National Aeronautics and Space Administration — The purpose of this project is the development of particle impact detection technology for application to habitable space exploration modules, both in space and on...

  11. Quantum statistics of many-particle systems

    International Nuclear Information System (INIS)

    Kraeft, W.D.; Ebeling, W.; Kremp, D.; Ropke, G.

    1986-01-01

    This paper presents the elements of quantum statistics and discusses the quantum mechanics of many-particle systems. The method of second quantization is discussed and the Bogolyubov hierarchy is examined. The general properties of the correlation function and one-particle Green's function are examined. The paper presents dynamical and thermodynamical information contained in the spectral function. An equation of motion is given for the one-particle Green's function. T-matrix and thermodynamic properties in binary collision approximation are discussed

  12. Classical dynamics of particles and systems

    CERN Document Server

    Marion, Jerry B

    1965-01-01

    Classical Dynamics of Particles and Systems presents a modern and reasonably complete account of the classical mechanics of particles, systems of particles, and rigid bodies for physics students at the advanced undergraduate level. The book aims to present a modern treatment of classical mechanical systems in such a way that the transition to the quantum theory of physics can be made with the least possible difficulty; to acquaint the student with new mathematical techniques and provide sufficient practice in solving problems; and to impart to the student some degree of sophistication in handl

  13. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation

    International Nuclear Information System (INIS)

    Jia Xun; Lou Yifei; Li Ruijiang; Song, William Y.; Jiang, Steve B.

    2010-01-01

    Purpose: Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. Methods: The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. Results: It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of ∼360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved by our fast CBCT reconstruction algorithm. Conclusions: This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering the imaging dose considerably. The high computational efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
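
The energy functional above, data fidelity plus total variation, can be illustrated on a tiny 1D denoising problem. The sketch uses plain gradient descent on a smoothed TV term, as a stand-in for intuition rather than the authors' forward-backward splitting solver.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, step=0.05, iters=1000, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum(sqrt(dx^2 + eps))."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        w = d / np.sqrt(d**2 + eps)         # derivative of the smoothed |d|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= w                   # adjoint of the difference operator
        tv_grad[1:] += w
        x -= step * ((x - y) + lam * tv_grad)
    return x

clean = np.concatenate([np.zeros(20), np.ones(20)])   # piecewise-constant signal
y = clean + 0.2 * np.random.default_rng(2).standard_normal(40)
x = tv_denoise_1d(y)
```

The TV term suppresses noise while preserving the jump, which is exactly why a TV prior lets a CBCT be recovered from far fewer, noisier projections.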

  14. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation.

    Science.gov (United States)

    Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B

    2010-04-01

    Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved by our fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering the imaging dose considerably. The high computational efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.

  15. SU-D-207-04: GPU-Based 4D Cone-Beam CT Reconstruction Using Adaptive Meshing Method

    International Nuclear Information System (INIS)

    Zhong, Z; Gu, X; Iyengar, P; Mao, W; Wang, J; Guo, X

    2015-01-01

    Purpose: Due to the limited number of projections at each phase, the image quality of a four-dimensional cone-beam CT (4D-CBCT) is often degraded, which decreases the accuracy of subsequent motion modeling. One of the promising methods is the simultaneous motion estimation and image reconstruction (SMEIR) approach. The objective of this work is to enhance the computational speed of the SMEIR algorithm using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the tetrahedral mesh based on the features of a reference phase of the 4D-CBCT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. After the mesh generation, the updated motion model and the other phases of the 4D-CBCT can be obtained by matching the 4D-CBCT projection images at each phase with the corresponding forward projections of the deformed reference phase. The entire 4D-CBCT reconstruction process is implemented on GPU, significantly increasing the computational efficiency due to its tremendous parallel computing ability. Results: A 4D XCAT digital phantom was used to test the proposed mesh-based image reconstruction algorithm. The image results show that both bone structures and the inside of the lung are well preserved and the tumor position is well captured. Compared to the previous voxel-based CPU implementation of SMEIR, the proposed method is about 157 times faster for reconstructing a 10-phase 4D-CBCT with dimension 256×256×150. Conclusion: The GPU-based parallel 4D-CBCT reconstruction method uses a feature-based mesh for estimating the motion model and demonstrates image results equivalent to the previous voxel-based SMEIR approach, with significantly improved computational speed.
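
The step of diffusing deformation from mesh vertices to image voxels amounts, inside each tetrahedron, to barycentric interpolation. A generic geometric sketch (not the SMEIR implementation):

```python
import numpy as np

def barycentric_weights(p, verts):
    """Barycentric coordinates of point p inside a tetrahedron (verts is 4x3)."""
    T = np.column_stack([verts[i] - verts[3] for i in range(3)])
    w = np.linalg.solve(T, p - verts[3])
    return np.append(w, 1.0 - w.sum())

# Unit tetrahedron with a displacement vector stored at each vertex.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
dvf_at_verts = np.array([[0, 0, 0], [1, 0, 0], [0, 2, 0], [0, 0, 3]], dtype=float)

p = np.array([0.25, 0.25, 0.25])          # the centroid of this tetrahedron
w = barycentric_weights(p, verts)         # all four weights equal 0.25 here
dvf_at_p = w @ dvf_at_verts               # interpolated displacement at p
```

Because each voxel's weights depend only on its own tetrahedron, this interpolation parallelizes trivially over voxels, which is what the GPU implementation exploits.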

  16. DYNAMIC PARTICLE SYSTEMS FOR OBJECT STRUCTURE EXTRACTION

    Directory of Open Access Journals (Sweden)

    Olivier Lavialle

    2011-05-01

    Full Text Available A new deformable model based on the use of a particle system is introduced. By defining the local behavior of each particle, the system behaves as an active contour model showing a variable topology and regularization properties. The efficiency of the particle system is illustrated by two applications: the first one concerns the use of the system as a skeleton extractor based on the propagation of particles inside a tree-shaped object. Using this method, it is possible to generate a cartography of structures such as veins or channels. In a second illustration, the system avoids the problem of initialization of a piecewise cubic B-spline network used to straighten curved text lines.

  17. Unstable particles as open quantum systems

    International Nuclear Information System (INIS)

    Caban, Pawel; Rembielinski, Jakub; Smolinski, Kordian A.; Walczak, Zbigniew

    2005-01-01

    We present the probability-preserving description of the decaying particle within the framework of quantum mechanics of open systems, taking into account the superselection rule prohibiting the superposition of the particle and vacuum. In our approach the evolution of the system is given by a family of completely positive trace-preserving maps forming a one-parameter dynamical semigroup. We give the Kraus representation for the general evolution of such systems, which allows one to write the evolution for systems with two or more particles. Moreover, we show that the decay of the particle can be regarded as a Markov process by finding explicitly the master equation in the Lindblad form. We also show that there are remarkable restrictions on the possible strength of decoherence
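
The Kraus representation mentioned above takes a particularly simple form for a two-level system in which the particle state decays to the vacuum: amplitude damping with decay probability p = 1 - exp(-Γt). A minimal numerical sketch, ignoring the paper's superselection structure:

```python
import numpy as np

def decay_channel(rho, p):
    """Amplitude-damping channel: |1> (particle) decays to |0> (vacuum)."""
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - p)]])
    K1 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])
    # Completeness K0†K0 + K1†K1 = I guarantees trace preservation.
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho = np.array([[0.0, 0.0], [0.0, 1.0]])   # particle initially present
p = 1.0 - np.exp(-0.5)                     # decay probability for Gamma*t = 0.5
rho_t = decay_channel(rho, p)
# trace is preserved; the survival probability rho_t[1, 1] equals exp(-0.5)
```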

  18. Hydrodynamic limit of interacting particle systems

    International Nuclear Information System (INIS)

    Landim, C.

    2004-01-01

    We present in these notes two methods to derive the hydrodynamic equation of conservative interacting particle systems. The intention is to present the main ideas in the simplest possible context and refer for details and references. (author)

  19. Future of motion graphics and particle systems

    OpenAIRE

    Warambo, Bryan

    2012-01-01

    The purpose of this research is to study the use of particle systems in motion graphics, where they are known to be the most popular graphics tool for creating multiple animated elements. A particle system is a form of procedural animation: as the emitter runs, more particles are generated to create a motion effect. The research also explores the connection between motion graphics and particle systems and its relevance to their longevity as a major post-production element in digital media. Th...

  20. Comprehensive evaluations of cone-beam CT dose in image-guided radiation therapy via GPU-based Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Montanari, Davide; Scolari, Enrica; Silvestri, Chiara; Graves, Yan Jiang; Cervino, Laura [Center for Advanced Radiotherapy Technologies, University of California San Diego, La Jolla, CA 92037-0843 (United States); Yan, Hao; Jiang, Steve B; Jia, Xun [Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390-9315 (United States); Rice, Roger [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA 92037-0843 (United States)

    2014-03-07

    Cone beam CT (CBCT) has been widely used for patient setup in image-guided radiation therapy (IGRT). Radiation dose from CBCT scans has become a clinical concern. The purposes of this study are (1) to commission a graphics processing unit (GPU)-based Monte Carlo (MC) dose calculation package, gCTD, for the Varian On-Board Imaging (OBI) system and test the calculation accuracy, and (2) to quantitatively evaluate CBCT dose from the OBI system in typical IGRT scan protocols. We first conducted dose measurements in a water phantom. X-ray source model parameters used in gCTD are obtained through a commissioning process. gCTD accuracy is demonstrated by comparing calculations with measurements in water and in CTDI phantoms. Twenty-five brain cancer patients are used to study dose in a standard-dose head protocol, and 25 prostate cancer patients are used to study dose in the pelvis protocol and the pelvis spotlight protocol. Mean dose to each organ is calculated. Mean dose to the 2% of voxels receiving the highest dose is also computed to quantify the maximum dose. It is found that the mean dose to an organ varies widely among patients. Moreover, the dose distribution is highly non-homogeneous inside an organ. The maximum dose is found to be 1–3 times higher than the mean dose depending on the organ, and is up to eight times higher for the entire body due to the very high dose region in bony structures. High computational efficiency has also been observed in our studies, such that MC dose calculation time is less than 5 min for a typical case. (paper)

  1. Aerosol particle losses in sampling systems

    International Nuclear Information System (INIS)

    Fan, B.J.; Wong, F.S.; Ortiz, C.A.; Anand, N.K.; McFarland, A.R.

    1993-01-01

    When aerosols are sampled from stacks and ducts, it is usually necessary to transport them from the point of sampling to a location of collection or analysis. Losses of aerosol particles can occur in the inlet region of the probe, in straight horizontal and vertical tubes, and in elbows. For probes in laminar flow, the Saffman lift force can cause substantial losses of particles in a short inlet region. An empirical model has been developed to predict probe inlet losses, which are often on the order of 40% for 10 μm AED particles. A user-friendly PC computer code, DEPOSITION, has been set up to model losses in transport systems. Experiments have been conducted to compare the actual aerosol particle losses in transport systems with those predicted by the DEPOSITION code.

  2. Dispersion relations in three-particle systems

    International Nuclear Information System (INIS)

    Grach, I.L.; Harodetskij, I.M.; Shmatikov, M.Zh.

    1979-01-01

    Positions of all dynamical singularities of the triangular nonrelativistic diagram are calculated, including the form factors. The jumps of the amplitude are written in analytical form. The dispersion-method predictions for bound states in the three-particle system are compared with the results of the exactly solvable Amado model. It is shown that the one-channel N/D method is equivalent to the pole approximation in the Amado model, and that the three-particle s-channel unitarity should be taken into account when calculating (in the dispersion method) the ground and excited states of the three-particle system. The relation of the three-particle unitarity contribution to the Thomas theorem and the Efimov effect is briefly discussed.

  3. Leptons as systems of Dirac particles

    International Nuclear Information System (INIS)

    Borstnik, N.M.; Kaluza, M.

    1988-03-01

    Charged leptons are treated as systems of three equal independent Dirac particles in an external static effective potential which has a vector and a scalar term. The potential is constructed to reproduce the experimental mass spectrum of the charged leptons. The Dirac covariant equation for three interacting particles is discussed in order to comment on the magnetic moment of leptons. (author). 9 refs, 2 figs, 4 tabs

  4. An Out-of-Core GPU based dimensionality reduction algorithm for Big Mass Spectrometry Data and its application in bottom-up Proteomics.

    Science.gov (United States)

    Awan, Muaaz Gul; Saeed, Fahad

    2017-08-01

    Modern high resolution Mass Spectrometry instruments can generate millions of spectra in a single systems biology experiment. Each spectrum consists of thousands of peaks, but only a small number of peaks actively contribute to the deduction of peptides. Therefore, pre-processing of MS data to detect noisy and non-useful peaks is an active area of research. Most sequential noise-reducing algorithms are impractical to use as a pre-processing step due to high time-complexity. In this paper, we present a GPU based dimensionality-reduction algorithm, called G-MSR, for MS2 spectra. Our proposed algorithm uses novel data structures which optimize the memory and computational operations inside the GPU. These novel data structures include Binary Spectra and Quantized Indexed Spectra (QIS). The former helps in communicating essential information between the CPU and GPU using a minimum amount of data, while the latter enables storing and processing a complex 3-D data structure as a 1-D array while maintaining the integrity of the MS data. Our proposed algorithm also takes into account the limited memory of GPUs and switches between in-core and out-of-core modes based upon the size of the input data. G-MSR achieves a peak speed-up of 386x over its sequential counterpart and is shown to process over a million spectra in just 32 seconds. The code for this algorithm is available as a GPL open-source at GitHub at the following link: https://github.com/pcdslab/G-MSR.
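
The flavour of such a reduction, though not G-MSR itself, can be sketched in a few lines: keep the top-k most intense peaks of a spectrum and quantize their m/z values into integer bins, so that spectra pack into flat integer arrays of fixed size. The names and sizes below are illustrative assumptions.

```python
import numpy as np

def reduce_spectrum(mz, intensity, top_k=50, bin_width=0.01):
    """Keep the top_k most intense peaks and quantize m/z into integer bins."""
    order = np.argsort(intensity)[::-1][:top_k]     # indices of strongest peaks
    kept = np.sort(mz[order])                       # m/z of retained peaks
    return np.round(kept / bin_width).astype(np.int64)

rng = np.random.default_rng(3)
mz = rng.uniform(100.0, 2000.0, 3000)               # 3000 raw peaks
intensity = rng.exponential(1.0, 3000)
reduced = reduce_spectrum(mz, intensity)            # 50 quantized, sorted peaks
```

Fixed-size integer representations like this are what make it practical to stream millions of spectra through limited GPU memory.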

  5. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology.

    Science.gov (United States)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-02-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D Registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512×512 pixels from a 53 MB CT dataset at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.
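
A DRR is, at heart, a set of line integrals of CT attenuation. For an idealized parallel-beam geometry this collapses to a sum along one volume axis; the real registration pipeline uses perspective raycasting on the GPU, so the sketch below only shows the principle.

```python
import numpy as np

def drr_parallel(volume, axis=0, voxel_size=1.0):
    """Parallel-beam DRR: line integral of attenuation along one volume axis."""
    return volume.sum(axis=axis) * voxel_size

ct = np.zeros((64, 64, 64))
ct[20:40, 20:40, 20:40] = 0.02    # a weakly attenuating cube
drr = drr_parallel(ct, axis=0)
# rays through the cube integrate 20 voxels * 0.02 = 0.4; other pixels stay 0
```

Each output pixel is independent, which is why DRR generation maps so naturally onto one GPU thread per ray.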

  6. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph [Medical University of Vienna (Austria). Center of Medical Physics and Biomedical Engineering] [and others

    2012-07-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D Registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference X-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 x 512 pixels from a 53 MB CT dataset at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. (orig.)

  7. Quantum many-particle systems

    CERN Document Server

    Negele, John W

    1988-01-01

    This book explains the fundamental concepts and theoretical techniques used to understand the properties of quantum systems having large numbers of degrees of freedom. A number of complementary approaches are developed, including perturbation theory; nonperturbative approximations based on functional integrals; general arguments based on order parameters, symmetry, and Fermi liquid theory; and stochastic methods.

  8. Particle physics data system at IHEP

    International Nuclear Information System (INIS)

    Alekhin, S.I.; Grudtsin, S.N.; Demidov, N.G.; Ezhela, V.V.

    1981-01-01

    This note presents the description of information search and retrieval facilities supplied by the Berkeley Database Management System - BDMS V2.2 implemented for ICL-1906A computers at IHEP. The system is used for creation and maintenance of archive Particle Physics Data Bases [ru

  9. Fully iterative scatter corrected digital breast tomosynthesis using GPU-based fast Monte Carlo simulation and composition ratio update

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr [Bio Imaging and Signal Processing Laboratory, Department of Bio and Brain Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Lee, Taewon; Cho, Seungryong [Medical Imaging and Radiotherapeutics Laboratory, Department of Nuclear and Quantum Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Seong, Younghun; Lee, Jongha; Jang, Kwang Eun [Samsung Advanced Institute of Technology, Samsung Electronics, 130, Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 443-803 (Korea, Republic of); Choi, Jaegu; Choi, Young Wook [Korea Electrotechnology Research Institute (KERI), 111, Hanggaul-ro, Sangnok-gu, Ansan-si, Gyeonggi-do, 426-170 (Korea, Republic of); Kim, Hak Hee; Shin, Hee Jung; Cha, Joo Hee [Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro, 43-gil, Songpa-gu, Seoul, 138-736 (Korea, Republic of)

    2015-09-15

    accurate under a variety of conditions. Our GPU-based fast MCS implementation took approximately 3 s to generate each angular projection for a 6 cm thick breast, which is believed to make this process acceptable for clinical applications. In addition, the clinical preferences of three radiologists were evaluated; the preference for the proposed method compared to the preference for the convolution-based method was statistically meaningful (p < 0.05, McNemar test). Conclusions: The proposed fully iterative scatter correction method and the GPU-based fast MCS using tissue-composition ratio estimation successfully improved the image quality within a reasonable computational time, which may potentially increase the clinical utility of DBT.

  10. TU-AB-202-05: GPU-Based 4D Deformable Image Registration Using Adaptive Tetrahedral Mesh Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Zhong, Z; Zhuang, L [Wayne State University, Detroit, MI (United States); Gu, X; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States); Chen, H; Zhen, X [Southern Medical University, Guangzhou, Guangdong (China)

    2016-06-15

    Purpose: Deformable image registration (DIR) is employed today as an automated and effective segmentation method to transfer tumor or organ contours from the planning image to daily images, instead of manual segmentation. However, the computational time and accuracy of current DIR approaches are still insufficient for online adaptive radiation therapy (ART), which requires real-time and high-quality image segmentation, especially for large 4D-CT datasets. The objective of this work is to propose a new DIR algorithm, with high computational speed and accuracy, by using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the adaptive tetrahedral mesh based on the image features of a reference phase of 4D-CT, so that the deformation can be well captured and accurately diffused from the mesh vertices to voxels of the image volume. Subsequently, the deformation vector fields (DVF) of the other phases of 4D-CT can be obtained by matching each phase of the target 4D-CT images with the corresponding deformed reference phase. The proposed 4D DIR method is implemented on the GPU, significantly increasing computational efficiency through parallel computing. Results: A 4D NCAT digital phantom was used to test the efficiency and accuracy of our method. Both the image and DVF results show that the fine structures and shapes of the lung are well preserved, and the tumor position is well captured, with a 3D distance error of 1.14 mm. Compared to previous voxel-based CPU implementations of DIR, such as demons, the proposed method is about 160x faster for registering a 10-phase 4D-CT with a phase dimension of 256×256×150. Conclusion: The proposed 4D DIR method uses a feature-based mesh and GPU-based parallelism, and demonstrates the capability to compute both high-quality image and motion results with a significant improvement in computational speed.
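
    The vertex-to-voxel diffusion step can be pictured with barycentric interpolation: inside each tetrahedron, the DVF at a voxel is the barycentric-weighted combination of the four vertex vectors, which reproduces any linear deformation exactly. A minimal NumPy sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

def interp_dvf(verts, vert_dvf, p):
    # Diffuse a deformation vector from a tetrahedron's four vertices to an
    # interior point p using barycentric weights (simplified mesh-to-voxel step).
    verts = np.asarray(verts, dtype=float)
    T = np.column_stack([verts[i] - verts[3] for i in range(3)])
    w = np.linalg.solve(T, np.asarray(p, dtype=float) - verts[3])
    weights = np.append(w, 1.0 - w.sum())     # four barycentric coordinates
    return weights @ np.asarray(vert_dvf, dtype=float)

# unit tetrahedron carrying a linear deformation field u(x) = 2x + (1, 0, 0)
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
vert_dvf = 2.0 * verts + np.array([1.0, 0.0, 0.0])
u = interp_dvf(verts, vert_dvf, (0.25, 0.25, 0.25))  # linear fields are exact
```

    Barycentric weights sum to one, so rigid translations of the mesh are carried to the voxels unchanged.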

  11. TU-AB-202-05: GPU-Based 4D Deformable Image Registration Using Adaptive Tetrahedral Mesh Modeling

    International Nuclear Information System (INIS)

    Zhong, Z; Zhuang, L; Gu, X; Wang, J; Chen, H; Zhen, X

    2016-01-01

    Purpose: Deformable image registration (DIR) is employed today as an automated and effective segmentation method to transfer tumor or organ contours from the planning image to daily images, instead of manual segmentation. However, the computational time and accuracy of current DIR approaches are still insufficient for online adaptive radiation therapy (ART), which requires real-time and high-quality image segmentation, especially for large 4D-CT datasets. The objective of this work is to propose a new DIR algorithm, with high computational speed and accuracy, by using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the adaptive tetrahedral mesh based on the image features of a reference phase of 4D-CT, so that the deformation can be well captured and accurately diffused from the mesh vertices to voxels of the image volume. Subsequently, the deformation vector fields (DVF) of the other phases of 4D-CT can be obtained by matching each phase of the target 4D-CT images with the corresponding deformed reference phase. The proposed 4D DIR method is implemented on the GPU, significantly increasing computational efficiency through parallel computing. Results: A 4D NCAT digital phantom was used to test the efficiency and accuracy of our method. Both the image and DVF results show that the fine structures and shapes of the lung are well preserved, and the tumor position is well captured, with a 3D distance error of 1.14 mm. Compared to previous voxel-based CPU implementations of DIR, such as demons, the proposed method is about 160x faster for registering a 10-phase 4D-CT with a phase dimension of 256×256×150. Conclusion: The proposed 4D DIR method uses a feature-based mesh and GPU-based parallelism, and demonstrates the capability to compute both high-quality image and motion results with a significant improvement in computational speed.

  12. Permanent magnet system to guide superparamagnetic particles

    Science.gov (United States)

    Baun, Olga; Blümler, Peter

    2017-10-01

    A new concept of using permanent magnet systems for guiding superparamagnetic nano-particles on arbitrary trajectories over a large volume is proposed. The basic idea is to use one magnet system which provides a strong, homogeneous, dipolar magnetic field to magnetize and orient the particles, and a second constantly graded, quadrupolar field, superimposed on the first, to generate a force on the oriented particles. In this configuration the motion of the particles is driven predominantly by the component of the gradient field which is parallel to the direction of the homogeneous field. As a result, particles are guided with constant force and in a single direction over the entire volume. The direction is simply adjusted by varying the angle between quadrupole and dipole. Since a single gradient is impossible due to Gauß' law, the other gradient component of the quadrupole determines the angular deviation of the force. However, the latter can be neglected if the homogeneous field is stronger than the local contribution of the quadrupole field. A possible realization of this idea is a coaxial arrangement of two Halbach cylinders. A dipole to evenly magnetize and orient the particles, and a quadrupole to generate the force. The local force was calculated analytically for this particular geometry and the directional limits were analyzed and discussed. A simple prototype was constructed to demonstrate the principle in two dimensions on several nano-particles of different size, which were moved along a rough square by manual adjustment of the force angle. The observed velocities of superparamagnetic particles in this prototype were always several orders of magnitude higher than the theoretically expected value. This discrepancy is attributed to the observed formation of long particle chains as a result of their polarization by the homogeneous field. The magnetic moment of such a chain is then the combination of that of its constituents, while its hydrodynamic radius
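
    The guiding principle — the homogeneous field fixes the moment's orientation while the quadrupole gradient supplies the force — can be checked numerically. In this toy 2D sketch (our own field model, not the Halbach-cylinder geometry; all parameter values are illustrative), the force on a field-aligned moment has the constant magnitude mG everywhere, with its direction deviating from the dipole axis only through the quadrupole's second gradient component, as argued above:

```python
import numpy as np

def B(p, B0=0.5, G=10.0):
    # toy 2D field: homogeneous "dipole" field B0 along y plus a planar
    # quadrupole of constant gradient G (illustrative model, not the prototype)
    x, y = p
    return np.array([G * x, B0 - G * y])

def force(p, m=1e-3, h=1e-6):
    # superparamagnetic limit: the moment m aligns with the local field, so the
    # energy is U = -m|B| and the force is F = -grad U = m grad|B|
    U = lambda q: -m * np.linalg.norm(B(q))
    ex, ey = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    return np.array([-(U(p + h * ex) - U(p - h * ex)) / (2 * h),
                     -(U(p + h * ey) - U(p - h * ey)) / (2 * h)])

F = force(np.array([0.01, 0.01]))
# |F| = m*G independent of position; the deviation from the dipole (y) axis is
# set by the second quadrupole gradient component and shrinks as B0 grows
```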

  13. Neutral particle beam distributed data acquisition system

    International Nuclear Information System (INIS)

    Daly, R.T.; Kraimer, M.R.; Novick, A.H.

    1987-01-01

    A distributed data acquisition system has been designed to support experiments at the Argonne Neutral Particle Beam Accelerator. The system uses a host VAXstation II/GPX computer acting as an experimenter's station linked via Ethernet with multiple MicroVAX IIs and rtVAXs dedicated to acquiring data and controlling hardware at remote sites. This paper describes the hardware design of the system, the applications support software on the host and target computers, and the real-time performance

  14. Control system technology for particle accelerators

    International Nuclear Information System (INIS)

    Tsumura, Yoshihiko; Matsuo, Keiichi; Maruyama, Takayuki.

    1995-01-01

    Control systems for particle accelerators are being designed around open-architecture systems, which allow easy upgrading, high-speed networks, and high-speed processors. Mitsubishi Electric is applying realtime Unix operating systems, fiber-distributed data interface (FDDI), shared memory networks and remote I/O systems to achieve these objectives. In the area of vacuum control systems, which require large-scale sequence control, the corporation is employing general-purpose programmable logic controllers (PLCs) to achieve cost-effective design. Software for these applications is designed around a library of application program interfaces (APIs) that give users direct access to key system functions. (author)

  15. GPU based 3D feature profile simulation of high-aspect ratio contact hole etch process under fluorocarbon plasmas

    Science.gov (United States)

    Chun, Poo-Reum; Lee, Se-Ah; Yook, Yeong-Geun; Choi, Kwang-Sung; Cho, Deog-Geun; Yu, Dong-Hun; Chang, Won-Seok; Kwon, Deuk-Chul; Im, Yeon-Ho

    2013-09-01

    Although plasma etch profile simulation has attracted much interest for developing reliable plasma etching, large gaps remain between the current state of research and predictive modeling due to the inherent complexity of plasma processes. As an effort to address this issue, we present a 3D feature profile simulation coupled with a well-defined plasma-surface kinetic model for the silicon dioxide etching process under fluorocarbon plasmas. To capture realistic plasma-surface reaction behaviors, a polymer-layer-based surface kinetic model was proposed to account for simultaneous polymer deposition and oxide etching. The realistic plasma surface model was then used to calculate the speed function for the 3D topology simulation, which consists of a multiple level-set based moving algorithm and a ballistic transport module. In addition, the time-consuming computations in the ballistic transport calculation were accelerated drastically by GPU-based numerical computation, enabling real-time computation. Finally, we demonstrated that the surface kinetic model could be coupled successfully to 3D etch profile simulations of high-aspect-ratio contact hole plasma etching.
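
    The moving-front half of such a simulator can be sketched with a first-order level-set update: the etch front is the zero contour of φ and advances along its normal with the speed F supplied by the surface kinetic model. A minimal 2D illustration (our simplification of the paper's 3D multiple level-set scheme; here F is simply constant):

```python
import numpy as np

def level_set_step(phi, speed, dt, dx=1.0):
    # one first-order Godunov upwind step of phi_t + F*|grad(phi)| = 0 for
    # F >= 0: the zero contour (the etch front) moves along its outward normal
    dm0 = (phi - np.roll(phi, 1, axis=0)) / dx   # backward differences, axis 0
    dp0 = (np.roll(phi, -1, axis=0) - phi) / dx  # forward differences, axis 0
    dm1 = (phi - np.roll(phi, 1, axis=1)) / dx
    dp1 = (np.roll(phi, -1, axis=1) - phi) / dx
    grad = np.sqrt(np.maximum(dm0, 0.0)**2 + np.minimum(dp0, 0.0)**2 +
                   np.maximum(dm1, 0.0)**2 + np.minimum(dp1, 0.0)**2)
    return phi - dt * speed * grad

# signed distance to a circle of radius 10; the front expands at unit speed
n = 64
i, j = np.mgrid[0:n, 0:n]
phi = np.sqrt((i - 32.0)**2 + (j - 32.0)**2) - 10.0
for _ in range(10):
    phi = level_set_step(phi, speed=1.0, dt=0.2)   # front now near radius 12
```

    In the etch simulator, the constant speed would be replaced by the flux- and kinetics-dependent speed evaluated at each front point.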

  16. Few-body system and particle resonances

    International Nuclear Information System (INIS)

    Mubarak, Ahmad.

    1979-01-01

    Techniques of few-body system in nuclear physics are exploited to analyze the spectrum of the T resonance and its family. Their relation to nuclear resonances are established so as to apply few-body dynamical techniques in the dynamical structure of particles carrying the truth quantum number. (author)

  17. Laboratory system for alpha particle spectroscopy

    International Nuclear Information System (INIS)

    Dean, J.R.; Chiu, N.W.

    1987-03-01

    An automated alpha particle spectroscopy system has been designed and fabricated. It consists of two major components, the automatic sample changer and the controller/data acquisition unit. It is capable of unattended analysis of ten samples for up to 65,000 seconds per sample

  18. Quantum theory of many-particle systems

    CERN Document Server

    Fetter, Alexander L

    2003-01-01

    ""Singlemindedly devoted to its job of educating potential many-particle theorists…deserves to become the standard text in the field."" - Physics Today""The most comprehensive textbook yet published in its field and every postgraduate student or teacher in this field should own or have access to a copy."" - EndeavorA self-contained, unified treatment of nonrelativistic many-particle systems, this text offers a solid introduction to procedures in a manner that enables students to adopt techniques for their own use. Its discussions of formalism and applications move easily between general theo

  19. Chapter 14. Systems of identical particles

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    For systems of identical particles it is necessary to introduce the symmetrization postulate to describe quantum systems. The physical implications of this postulate are presented: bosons, fermions and the Pauli exclusion principle; quantum statistics; interference between direct and exchange processes. Permutation operators are also discussed. Complementary sections cover: the central field approximation, electron configurations, the energy levels of the helium atom (configurations, terms, multiplets), and the physical properties of the electron gas (application to solids). [fr]
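
    For two identical particles the postulate takes its standard textbook form (a general statement, not specific to this chapter):

```latex
\Psi_{\pm}(1,2) = \frac{1}{\sqrt{2}}\bigl[\varphi_a(1)\,\varphi_b(2) \pm \varphi_b(1)\,\varphi_a(2)\bigr],
```

    with the plus sign for bosons and the minus sign for fermions. For fermions with a = b the antisymmetric combination vanishes identically, which is the Pauli exclusion principle.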

  20. Data extraction system for underwater particle holography

    Science.gov (United States)

    Nebrensky, J. J.; Craig, Gary; Hobson, Peter R.; Lampitt, R. S.; Nareid, Helge; Pescetto, A.; Trucco, Andrea; Watson, John

    2000-08-01

    Pulsed laser holography is an extremely powerful technique for the study of particle fields as it allows instantaneous, non-invasive, high-resolution recording of substantial volumes. By relaying the real image one can obtain the size, shape, position and - if multiple exposures are made - velocity of every object in the recorded field. Manual analysis of large volumes containing thousands of particles is, however, an enormous and time-consuming task, with operator fatigue an unpredictable source of errors. Clearly the value of holographic measurements also depends crucially on the quality of the reconstructed image: not only will poor resolution degrade the size and shape measurements, but aberrations such as coma and astigmatism can change the perceived centroid of a particle, affecting position and velocity measurements. For large-scale applications of particle field holography, specifically the in situ recording of marine plankton with Holocam, we have developed an automated data extraction system that can be readily switched between the in-line and off-axis geometries and provides optimised reconstruction from holograms recorded underwater. As a video camera is automatically stepped through the 200 by 200 by 1000 mm sample volume, image processing and object tracking routines locate and extract particle images for further classification by a separate software module.

  1. Deflection system for charged-particle beam

    Energy Technology Data Exchange (ETDEWEB)

    Bates, T

    1982-01-13

    A system is described for achromatically deflecting a beam of charged particles without producing net divergence of the beam, comprising three successive magnetic deflection means which deflect the beam alternately in opposite directions; the first and second deflect by angles of less than 50° and the third by an angle of at least 90°. Particles with different respective energies are transversely spaced as they enter the third deflection means, but emerge completely superimposed in both position and direction and may be brought to a focus in each of two mutually perpendicular planes a short distance thereafter. Such a system may be particularly compact, especially in the direction in which the beam leaves the system. It is suitable for deflecting a beam of electrons from a linear accelerator, thus producing a vertical beam of electrons (or, with an X-ray target, of X-rays) which can be rotated about a horizontal patient for radiation therapy.

  2. A GPU based high-resolution multilevel biomechanical head and neck model for validating deformable image registration

    International Nuclear Information System (INIS)

    Neylon, J.; Qi, X.; Sheng, K.; Low, D. A.; Kupelian, P.; Santhanam, A.; Staton, R.; Pukala, J.; Manon, R.

    2015-01-01

    Purpose: Validating the use of deformable image registration (DIR) for daily patient positioning is critical for adaptive radiotherapy (RT) applications pertaining to head and neck (HN) radiotherapy. The authors present a methodology for generating biomechanically realistic ground-truth data for validating DIR algorithms for HN anatomy by (a) developing a high-resolution deformable biomechanical HN model from a planning CT, (b) simulating deformations for a range of interfraction posture changes and physiological regression, and (c) generating subsequent CT images representing the deformed anatomy. Methods: The biomechanical model was developed using HN kVCT datasets and the corresponding structure contours. The voxels inside a given 3D contour boundary were clustered using a graphics processing unit (GPU) based algorithm that accounted for inconsistencies and gaps in the boundary to form a volumetric structure. While the bony anatomy was modeled as a rigid body, the muscle and soft tissue structures were modeled as mass-spring-damper models with elastic material properties that corresponded to the underlying contoured anatomies. Within a given muscle structure, the voxels were classified using a uniform grid and a normalized mass was assigned to each voxel based on its Hounsfield number. The soft tissue deformation for a given skeletal actuation was performed using an implicit Euler integration with each iteration split into two substeps: one for the muscle structures and the other for the remaining soft tissues. Posture changes were simulated by articulating the skeletal structure and enabling the soft structures to deform accordingly. Physiological changes representing tumor regression were simulated by reducing the target volume and enabling the surrounding soft structures to deform accordingly. Finally, the authors also discuss a new approach to generate kVCT images representing the deformed anatomy that accounts for gaps and antialiasing artifacts that may

  3. A GPU based high-resolution multilevel biomechanical head and neck model for validating deformable image registration

    Energy Technology Data Exchange (ETDEWEB)

    Neylon, J., E-mail: jneylon@mednet.ucla.edu; Qi, X.; Sheng, K.; Low, D. A.; Kupelian, P.; Santhanam, A. [Department of Radiation Oncology, University of California Los Angeles, 200 Medical Plaza, #B265, Los Angeles, California 90095 (United States); Staton, R.; Pukala, J.; Manon, R. [Department of Radiation Oncology, M.D. Anderson Cancer Center, Orlando, 1440 South Orange Avenue, Orlando, Florida 32808 (United States)

    2015-01-15

    Purpose: Validating the use of deformable image registration (DIR) for daily patient positioning is critical for adaptive radiotherapy (RT) applications pertaining to head and neck (HN) radiotherapy. The authors present a methodology for generating biomechanically realistic ground-truth data for validating DIR algorithms for HN anatomy by (a) developing a high-resolution deformable biomechanical HN model from a planning CT, (b) simulating deformations for a range of interfraction posture changes and physiological regression, and (c) generating subsequent CT images representing the deformed anatomy. Methods: The biomechanical model was developed using HN kVCT datasets and the corresponding structure contours. The voxels inside a given 3D contour boundary were clustered using a graphics processing unit (GPU) based algorithm that accounted for inconsistencies and gaps in the boundary to form a volumetric structure. While the bony anatomy was modeled as a rigid body, the muscle and soft tissue structures were modeled as mass-spring-damper models with elastic material properties that corresponded to the underlying contoured anatomies. Within a given muscle structure, the voxels were classified using a uniform grid and a normalized mass was assigned to each voxel based on its Hounsfield number. The soft tissue deformation for a given skeletal actuation was performed using an implicit Euler integration with each iteration split into two substeps: one for the muscle structures and the other for the remaining soft tissues. Posture changes were simulated by articulating the skeletal structure and enabling the soft structures to deform accordingly. Physiological changes representing tumor regression were simulated by reducing the target volume and enabling the surrounding soft structures to deform accordingly. Finally, the authors also discuss a new approach to generate kVCT images representing the deformed anatomy that accounts for gaps and antialiasing artifacts that may
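
    The spring-damper substep at the heart of the soft-tissue model can be sketched in one dimension. The code below is our simplified illustration, not the authors' implementation: parameter names and values are ours, and symplectic (semi-implicit) Euler stands in for their implicit Euler with muscle/soft-tissue substeps. It shows a two-mass chain relaxing back to its rest length:

```python
import numpy as np

def spring_step(x, v, rest, k, c, m, dt):
    # one integration substep for a 1D chain of masses joined by spring-dampers
    f = np.zeros_like(x)
    for i in range(len(x) - 1):
        # spring force on the pair plus viscous damping of their relative motion
        fs = k * ((x[i + 1] - x[i]) - rest) + c * (v[i + 1] - v[i])
        f[i] += fs
        f[i + 1] -= fs
    v = v + dt * f / m      # semi-implicit Euler: update velocity first,
    x = x + dt * v          # then position with the new velocity
    return x, v

# two masses start 0.5 beyond the rest length and relax back toward it
x, v = np.array([0.0, 1.5]), np.zeros(2)
for _ in range(5000):
    x, v = spring_step(x, v, rest=1.0, k=10.0, c=2.0, m=1.0, dt=0.01)
```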

  4. SU-F-J-204: Carbon Digitally Reconstructed Radiography (CDRR): A GPU Based Tool for Fast and Versatile Carbonimaging Simulation

    International Nuclear Information System (INIS)

    Dias, M F; Seco, J; Baroni, G; Riboldi, M

    2016-01-01

    Purpose: Research in carbon imaging has been growing over the past years as a way to improve treatment accuracy and patient positioning in carbon therapy. The purpose of this tool is to allow a fast and flexible way to generate CDRR data without the need to use Monte Carlo (MC) simulations. It can also be used to predict future clinically measured data. Methods: A python interface has been developed, which uses information from CT or 4DCT and the treatment calibration curve to compute the Water Equivalent Path Length (WEPL) of carbon ions. A GPU based ray tracing algorithm computes the WEPL of each individual carbon traveling through the CT voxels. A multiple peak detection method to estimate high contrast margin positioning has been implemented (described elsewhere). MC simulations have been used to generate carbon depth-dose curves in order to simulate the response of a range detector. Results: The tool allows the upload of CT or 4DCT images. The user can select the phase/slice of interest, as well as parameters such as position and angle. The WEPL is represented as a range detector which can be used to assess range dilution and multiple peak detection effects. The tool also provides knowledge of the minimum energy that should be considered for imaging purposes. The multiple peak detection method has been used in a lung tumor case, showing an accuracy of 1 mm in determining the exact interface position. Conclusion: The tool offers an easy and fast way to simulate carbon imaging data. It can be used for educational and for clinical purposes, allowing the user to test beam energies and angles before real acquisition. An analysis add-on is being developed, where the user will have the opportunity to select different reconstruction methods and detector types (range or energy). Fundacao para a Ciencia e a Tecnologia (FCT), PhD Grant number SFRH/BD/85749/2012

  5. SU-F-J-204: Carbon Digitally Reconstructed Radiography (CDRR): A GPU Based Tool for Fast and Versatile Carbonimaging Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dias, M F [Dipartamento di Elettronica, Informazione e Bioingegneria - DEIB, Politecnico di Milano (Italy); Department of Radiation Oncology, Francis H. Burr Proton Therapy Center Massachusetts General Hospital (MGH), Boston, Massachusetts (United States); Seco, J [Department of Radiation Oncology, Francis H. Burr Proton Therapy Center Massachusetts General Hospital (MGH), Boston, Massachusetts (United States); Baroni, G; Riboldi, M [Dipartamento di Elettronica, Informazione e Bioingegneria - DEIB, Politecnico di Milano (Italy); Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, Pavia (Italy)

    2016-06-15

    Purpose: Research in carbon imaging has been growing over the past years as a way to improve treatment accuracy and patient positioning in carbon therapy. The purpose of this tool is to allow a fast and flexible way to generate CDRR data without the need to use Monte Carlo (MC) simulations. It can also be used to predict future clinically measured data. Methods: A python interface has been developed, which uses information from CT or 4DCT and the treatment calibration curve to compute the Water Equivalent Path Length (WEPL) of carbon ions. A GPU based ray tracing algorithm computes the WEPL of each individual carbon traveling through the CT voxels. A multiple peak detection method to estimate high contrast margin positioning has been implemented (described elsewhere). MC simulations have been used to generate carbon depth-dose curves in order to simulate the response of a range detector. Results: The tool allows the upload of CT or 4DCT images. The user can select the phase/slice of interest, as well as parameters such as position and angle. The WEPL is represented as a range detector which can be used to assess range dilution and multiple peak detection effects. The tool also provides knowledge of the minimum energy that should be considered for imaging purposes. The multiple peak detection method has been used in a lung tumor case, showing an accuracy of 1 mm in determining the exact interface position. Conclusion: The tool offers an easy and fast way to simulate carbon imaging data. It can be used for educational and for clinical purposes, allowing the user to test beam energies and angles before real acquisition. An analysis add-on is being developed, where the user will have the opportunity to select different reconstruction methods and detector types (range or energy). Fundacao para a Ciencia e a Tecnologia (FCT), PhD Grant number SFRH/BD/85749/2012.
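
    The central computation of such a tool — accumulating each carbon's water-equivalent path length along a straight line through the CT volume — can be sketched with fixed-step ray marching. This is a simplified CPU stand-in for the GPU ray tracer described in the abstract; voxel sizes, units, and names are illustrative:

```python
import numpy as np

def wepl_along_ray(rsp, entry, direction, step=0.5):
    # accumulate water-equivalent path length through a voxelized relative
    # stopping power (RSP) map by marching along the ray in fixed steps
    pos = np.asarray(entry, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    shape = np.array(rsp.shape)
    wepl = 0.0
    while np.all(pos >= 0) and np.all(pos < shape):
        i, j, k = pos.astype(int)          # voxel currently traversed
        wepl += rsp[i, j, k] * step
        pos += d * step
    return wepl

# uniform water phantom (RSP = 1), 100 voxels deep: WEPL equals the path length
rsp = np.ones((100, 10, 10))
w = wepl_along_ray(rsp, entry=(0, 5, 5), direction=(1, 0, 0))
```

    A production tracer would instead compute exact voxel-boundary crossings (Siddon-style) and convert CT numbers to RSP via the treatment calibration curve.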

  6. A Fast Mixed-Precision Strategy for Iterative GPU-Based Solution of the Laplace Equation

    DEFF Research Database (Denmark)

    Our work is concerned with the development of a generic high-performance library for scientific computing. The library is targeted for assembling flexible-order finite-difference solvers for PDEs. Our goal is to enable fast solution of large PDE systems, fully exploiting the massively parallel ar...

  7. A Fast Mixed-Precision Strategy for Iterative GPU-Based Solution of the Laplace Equation

    DEFF Research Database (Denmark)

    Our work is concerned with the development of a generic high-performance library for scientific computing. The library is targeted for assembling flexible-order finite-difference solvers for PDEs. Our goal is to enable fast solution of large PDE systems, fully exploiting the massively parallel ar...

  8. Particle Systems and Partial Differential Equations I

    CERN Document Server

    Gonçalves, Patricia

    2014-01-01

    This book presents the proceedings of the international conference Particle Systems and Partial Differential Equations I, which took place at the Centre of Mathematics of the University of Minho, Braga, Portugal, from the 5th to the 7th of December, 2012.  The purpose of the conference was to bring together world leaders to discuss their topics of expertise and to present some of their latest research developments in those fields. Among the participants were researchers in probability, partial differential equations and kinetic theory. The aim of the meeting was to present to a varied public the subject of interacting particle systems, its motivation from the viewpoint of physics and its relation with partial differential equations or kinetic theory, and to stimulate discussions and possibly new collaborations among researchers with different backgrounds.  The book contains lecture notes written by François Golse on the derivation of hydrodynamic equations (compressible and incompressible Euler and Navie...

  9. Fluctuations in interacting particle systems with memory

    International Nuclear Information System (INIS)

    Harris, Rosemary J

    2015-01-01

    We consider the effects of long-range temporal correlations in many-particle systems, focusing particularly on fluctuations about the typical behaviour. For a specific class of memory dependence we discuss the modification of the large deviation principle describing the probability of rare currents and show how superdiffusive behaviour can emerge. We illustrate the general framework with detailed calculations for a memory-dependent version of the totally asymmetric simple exclusion process as well as indicating connections to other recent work

  10. Study of FPGA and GPU based pixel calibration for ATLAS IBL

    CERN Document Server

    Dopke, J; The ATLAS collaboration; Flick, T; Gabrielli, A; Grosse-Knetter, J; Krieger, N; Kugel, A; Polini, A; Schroer, N

    2010-01-01

    The insertable B-layer (IBL) is a new stage of the ATLAS pixel detector to be installed around 2014. 12 million pixels are attached to new FE-I4 readout ASICs, each controlling 26680 pixels. Compared to the existing FE-I3 based detector, the new system features a higher readout speed of 160 Mbit/s per ASIC and simplified control. For calibration, defined charges are applied to all pixels and the resulting time-over-threshold values are evaluated. In the present system multiple sets of two custom VME cards which employ a combination of FPGA and DSP technology are used for I/O interfacing, formatting and processing. The execution time of 51 s to perform a threshold scan on an FE-I3 module of 46080 pixels is composed of 8 s control, 29 s transfer, 7.5 s histogramming and 7 s analysis. Extrapolating to FE-I4, the times per module of 53760 pixels are 12 ms, 5.8 s, 9.4 s and 8.3 s, a total of 23.5 s. We present a proposal for a novel approach to the dominant tasks for FE-I4: histogramming and analysis. An FPGA-based histogramming uni...
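
    The histogramming and analysis tasks targeted here reduce to a simple per-pixel core: inject a known charge many times at each scan step, count discriminator hits, and read the threshold off the 50% point of the resulting S-curve. A toy model of that core (our own simulation with illustrative numbers, not the FE-I4 firmware):

```python
import numpy as np

def scan_occupancy(charges, threshold, noise, ninj=100, seed=0):
    # per-pixel threshold scan: inject ninj pulses at each charge step and
    # count discriminator hits; Gaussian noise smears the step into an S-curve
    rng = np.random.default_rng(seed)
    return np.array([(q + noise * rng.standard_normal(ninj) > threshold).sum()
                     for q in charges])

charges = np.arange(1000, 4000, 100)          # injected charge [electrons]
occ = scan_occupancy(charges, threshold=2500.0, noise=150.0)
est = charges[np.argmax(occ >= 50)]           # first bin at >= 50% occupancy
```

    A full analysis would fit an error function to the S-curve, recovering both the threshold (its midpoint) and the noise (its width) per pixel.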

  11. Heavy particle transport in sputtering systems

    Science.gov (United States)

    Trieschmann, Jan

    2015-09-01

    This contribution aims to discuss the theoretical background of heavy particle transport in plasma sputtering systems such as direct current magnetron sputtering (dcMS), high power impulse magnetron sputtering (HiPIMS), or multi frequency capacitively coupled plasmas (MFCCP). Due to inherently low process pressures below one Pa, only kinetic simulation models are suitable. In this work a model appropriate for the description of the transport of film-forming particles sputtered off a target material has been devised within the framework of the OpenFOAM software (specifically dsmcFoam). The three dimensional model comprises the ejection of sputtered particles into the reactor chamber, their collisional transport through the volume, as well as their deposition onto the surrounding surfaces (i.e. substrates, walls). An angular dependent Thompson energy distribution fitted to results from Monte-Carlo simulations is assumed initially. Binary collisions are treated via the M1 collision model, a modified variable hard sphere (VHS) model. The dynamics of sputtered and background gas species can be resolved self-consistently following the direct simulation Monte-Carlo (DSMC) approach or, whenever possible, simplified based on the test particle method (TPM) with the assumption of a constant, non-stationary background at a given temperature. Using the example of an MFCCP research reactor, the transport of sputtered aluminum is specifically discussed. For this particular configuration and under typical process conditions with argon as the process gas, the transport of aluminum sputtered off a circular target is shown to be governed by a one dimensional interaction of the imposed and backscattered particle fluxes. The results are analyzed and discussed on the basis of the obtained velocity distribution functions (VDF). This work is supported by the German Research Foundation (DFG) in the frame of the Collaborative Research Centre TRR 87.
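
    The test particle method (TPM) mentioned above has a compact consequence that is easy to verify: with free paths drawn from an exponential distribution, the fraction of sputtered atoms crossing a target-substrate gap L without a collision is exp(-L/λ). A toy sketch with illustrative numbers of our own choosing:

```python
import numpy as np

def ballistic_fraction(n, gap, mfp, seed=1):
    # test-particle picture in miniature: each sputtered atom's first free path
    # is sampled from an exponential distribution with mean free path mfp, and
    # we count the fraction crossing the gap without a collision
    rng = np.random.default_rng(seed)
    return float(np.mean(rng.exponential(mfp, size=n) > gap))

f = ballistic_fraction(200_000, gap=0.05, mfp=0.10)  # expect about exp(-0.5)
```

    A full TPM or DSMC run would additionally sample post-collision velocities and energies; this sketch only checks the collision-free statistics.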

  12. GPU-Based FFT Computation for Multi-Gigabit WirelessHD Baseband Processing

    Directory of Open Access Journals (Sweden)

    Nicholas Hinitt

    2010-01-01

    The next generation Graphics Processing Units (GPUs) are being considered for non-graphics applications. Millimeter wave (60 GHz) wireless networks that are capable of multi-gigabit per second (Gbps) transfer rates require a significant baseband throughput. In this work, we consider the baseband of WirelessHD, a 60 GHz communications system, which can provide a data rate of up to 3.8 Gbps over a short range wireless link. Thus, we explore the feasibility of achieving gigabit baseband throughput using the GPUs. One of the most computationally intensive functions commonly used in baseband communications, the Fast Fourier Transform (FFT) algorithm, is implemented on an NVIDIA GPU using their general-purpose computing platform called the Compute Unified Device Architecture (CUDA). The paper first investigates the implementation of an FFT algorithm using the GPU hardware and exploiting the computational capability available. It then outlines the limitations discovered and the methods used to overcome these challenges. Finally, a new algorithm to compute the FFT is proposed, which reduces interprocessor communication. It is further optimized by improving memory access, enabling the processing rate to exceed 4 Gbps, achieving a processing time of a 512-point FFT in less than 200 ns using a two-GPU solution.
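
    The Cooley-Tukey decomposition underlying any such implementation is worth stating explicitly, since its recursive even/odd split is exactly the structure a GPU version distributes across threads. A plain NumPy reference version (our sketch, not the paper's CUDA kernel):

```python
import numpy as np

def fft_radix2(x):
    # recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two
    n = len(x)
    if n == 1:
        return x
    even = fft_radix2(x[0::2])                 # FFT of even-indexed samples
    odd = fft_radix2(x[1::2])                  # FFT of odd-indexed samples
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd  # twiddle factors
    return np.concatenate([even + tw, even - tw])

# a 512-point transform, the size used in the WirelessHD baseband
x = np.random.default_rng(0).standard_normal(512).astype(complex)
X = fft_radix2(x)
```

    An iterative, in-place butterfly formulation of the same recursion is what maps well onto GPU thread blocks and shared memory.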

  13. Cellular structures in a system of interacting particles

    International Nuclear Information System (INIS)

    Lev, B.I.

    2009-01-01

    A general description of the formation of cellular structures in a system of interacting particles is proposed. Analytical results for possible cellular structures in usual colloidal systems, systems of particles immersed in a liquid crystal, and gravitational systems are presented. It is shown that the formation of a cellular structure in all systems of interacting particles, at different temperatures and particle concentrations, has the same physical nature

  14. Resonances in three-particle systems

    International Nuclear Information System (INIS)

    Moeller, K.; Orlov, Y.V.

    1989-01-01

    Studies of the theory of resonances in three-particle systems are reviewed. Particular attention is paid to a method that uses analytic continuation of the Faddeev integral equations to the unphysical sheets of the Riemann energy surface. The features of the method are studied using the example of the two-body potential problem. In this case, Fourier transformation, normalization, and calculation of the matrix elements in the momentum representation are generalized to include Gamow states. The main subject of study is systems of nucleons; for these systems the results of experimental investigations during the last 20 years are also summarized. Problems of allowance for the Coulomb interaction are briefly discussed. Applications of the theory to other hadronic systems, including mesons and antinucleons, are mentioned

  15. MO-C-17A-03: A GPU-Based Method for Validating Deformable Image Registration in Head and Neck Radiotherapy Using Biomechanical Modeling

    International Nuclear Information System (INIS)

    Neylon, J; Min, Y; Qi, S; Kupelian, P; Santhanam, A

    2014-01-01

    Purpose: Deformable image registration (DIR) plays a pivotal role in head and neck adaptive radiotherapy, but a systematic validation of DIR algorithms has been limited by a lack of quantitative high-resolution ground truth. We address this limitation by developing a GPU-based framework that provides a systematic DIR validation by generating (a) model-guided synthetic CTs representing posture and physiological changes, and (b) model-guided landmark-based validation. Method: The GPU-based framework was developed to generate massive mass-spring biomechanical models from patient simulation CTs and contoured structures. The biomechanical model represented soft tissue deformations for known rigid skeletal motion. Posture changes were simulated by articulating the skeletal anatomy, which subsequently applied elastic corrective forces upon the soft tissue. Physiological changes such as tumor regression and weight loss were simulated in a biomechanically precise manner. Synthetic CT data were then generated from the deformed anatomy. The initial and final positions of one hundred randomly chosen mass elements inside each of the internal contoured structures were recorded as ground truth data. The process was automated to create 45 synthetic CT datasets for a given patient CT. For instance, the head rotation was varied between ±4 degrees along each axis, and tumor volumes were systematically reduced by up to 30%. Finally, the original CT and deformed synthetic CT were registered using an optical-flow-based DIR. Results: Each synthetic data creation took approximately 28 seconds of computation time. The number of landmarks per dataset varied between two and three thousand. The validation method is able to perform sub-voxel analysis of the DIR and to report the results by structure, giving a much more in-depth investigation of the error. 
Conclusions: We presented a GPU based high-resolution biomechanical head and neck model to validate DIR algorithms by generating CT equivalent 3D
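    The abstract does not describe the mass-spring model in implementable detail; the following Python/NumPy sketch shows only the core update such a model relaxes with, namely damped springs integrated with semi-implicit Euler steps. The function name, spring constants, and integrator choice are all assumptions for illustration, not the authors' GPU implementation.

```python
import numpy as np

def mass_spring_step(pos, vel, edges, rest_len, k=50.0, damping=2.0,
                     mass=1.0, dt=0.01):
    """One semi-implicit Euler step of a mass-spring lattice: each edge
    (i, j) pulls its endpoints toward its rest length, and velocity
    damping removes energy so the soft tissue relaxes to equilibrium
    after the skeletal anatomy has been articulated."""
    force = np.zeros_like(pos)
    for i, j in edges:
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        if length > 0.0:
            f = k * (length - rest_len[(i, j)]) * d / length
            force[i] += f   # pull i toward j when the spring is stretched
            force[j] -= f
    force -= damping * vel
    vel = vel + dt * force / mass   # update velocity first (symplectic)
    return pos + dt * vel, vel
```

    A GPU version would evaluate all edge forces in parallel; iterating this step until the forces vanish is what produces the "elastic corrective" deformation applied to the soft tissue.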

  16. Fast-GPU-PCC: A GPU-Based Technique to Compute Pairwise Pearson's Correlation Coefficients for Time Series Data-fMRI Study.

    Science.gov (United States)

    Eslami, Taban; Saeed, Fahad

    2018-04-20

    Functional magnetic resonance imaging (fMRI) is a non-invasive brain imaging technique that has been regularly used for studying the brain's functional activities in the past few years. A widely used measure for capturing functional associations in the brain is Pearson's correlation coefficient, which is commonly employed for constructing functional networks and studying the dynamic functional connectivity of the brain. These are useful measures for understanding the effects of brain disorders on connectivity among brain regions. fMRI scanners produce a huge number of voxels, and using traditional central processing unit (CPU)-based techniques for computing pairwise correlations is very time consuming, especially when a large number of subjects are being studied. In this paper, we propose a graphics processing unit (GPU)-based algorithm called Fast-GPU-PCC for computing pairwise Pearson's correlation coefficients. Based on the symmetric property of Pearson's correlation, this approach returns the N(N-1)/2 correlation coefficients located in the strictly upper triangular part of the correlation matrix. Storing the correlations in a one-dimensional array in the order proposed in this paper is useful for further processing. Our experiments on real and synthetic fMRI data for different numbers of voxels and varying lengths of time series show that the proposed approach outperformed state-of-the-art GPU-based techniques as well as sequential CPU-based versions. We show that Fast-GPU-PCC runs 62 times faster than the CPU-based version and about 2 to 3 times faster than two other state-of-the-art GPU-based methods.
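    The upper-triangle layout the abstract describes can be sketched on the CPU with NumPy; standardizing each time series first turns Pearson correlation into a plain dot product, which is the same reduction a GPU kernel performs in parallel. The function name and shapes are illustrative, not the paper's API.

```python
import numpy as np

def pairwise_pcc_upper(data):
    """Pairwise Pearson correlations for N time series of length T
    (rows of `data`), returned as the N*(N-1)/2 coefficients of the
    strictly upper triangle in row-major order -- the flat layout the
    record describes.  After subtracting each row's mean and dividing
    by its norm, correlation reduces to a matrix product."""
    data = np.asarray(data, dtype=np.float64)
    z = data - data.mean(axis=1, keepdims=True)
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    corr = z @ z.T                        # full N x N correlation matrix
    iu = np.triu_indices(len(data), k=1)  # strictly upper triangle
    return corr[iu]
```

    For N voxels this stores N(N-1)/2 values instead of N², roughly halving memory, at the cost of the index arithmetic needed to look a pair (i, j) back up in the flat array.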

  17. TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Y [UT Southwestern Medical Center, Dallas, TX (United States); Southern Medical University, Guangzhou (China); Bai, T [UT Southwestern Medical Center, Dallas, TX (United States); Xi' an Jiaotong University, Xi' an (China); Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Zhou, L [Southern Medical University, Guangzhou (China)

    2014-06-15

    Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK result; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to the other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate the MC scatter noise caused by the low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impact of the number of photon histories and the volume down-sampling factor on the accuracy of the scatter estimation. A Fourier analysis was conducted to show that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time was 23.77 seconds on an NVIDIA GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error in a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. 
This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research
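    Step 4 of the pipeline, spreading scatter estimates from a few simulated view angles to every projection angle, can be sketched as periodic linear interpolation per detector pixel. The function name and array shapes are assumptions for illustration; the abstract's Fourier analysis suggests the real implementation may exploit the smooth angular spectrum more directly.

```python
import numpy as np

def interpolate_scatter(sparse_angles, sparse_scatter, all_angles):
    """Interpolate scatter signals simulated at sparse gantry angles to
    all projection angles.  Scatter varies smoothly with angle, so
    linear interpolation with periodic wrap over 360 degrees suffices.
    `sparse_scatter` has shape (n_sparse_angles, n_pixels)."""
    sparse_angles = np.asarray(sparse_angles, dtype=float)
    order = np.argsort(sparse_angles)
    a = sparse_angles[order]
    s = np.asarray(sparse_scatter, dtype=float)[order]
    # Pad one period on each side so np.interp wraps around 360 degrees.
    a_ext = np.concatenate([a - 360.0, a, a + 360.0])
    s_ext = np.concatenate([s, s, s], axis=0)
    out = np.empty((len(all_angles), s.shape[1]))
    for p in range(s.shape[1]):
        out[:, p] = np.interp(np.mod(all_angles, 360.0), a_ext, s_ext[:, p])
    return out
```

    With 31 sparse angles, as the record reports, this reduces the number of MC simulations per scan by roughly an order of magnitude relative to simulating every projection.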

  18. Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.

    Science.gov (United States)

    Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P

    2018-01-04

    clearly improved with MC-based OSEM reconstruction, e.g., the activity recovery was 88% for the largest sphere, while it was 66% for AC-OSEM and 79% for RRC-OSEM. The GPU-based MC code generated an MC-based SPECT/CT reconstruction within a few minutes, and reconstructed patient images of 177Lu-DOTATATE treatments revealed clearly improved resolution and contrast.

  19. AI systems approach in particle accelerators

    International Nuclear Information System (INIS)

    Kataria, S.K.; Bhagwat, P.V.; Kori, S.A.

    1992-01-01

    Large particle accelerator machines, like the pelletron accelerator at the Tata Institute of Fundamental Research (T.I.F.R.), have several levels of controls, with operators responsible for overall global control decisions and closed-loop feedback controllers for relatively small subsystems of the machines. As accelerator machines become more complicated and the requirements more stringent, there is a need to provide the operators with an artificial intelligence (AI) system to aid in tuning the machine and in failure diagnosis. There are a few major areas of pelletron operation that can be handled more efficiently using an AI systems approach, so that useful beam is available for much more time: 1) accelerator conditioning, 2) accelerator tuning, and 3) maintaining the tuned beams. A feasibility study on using expert systems for the above areas, and also for the safety evaluation of the various subsystems, is carried out. (author). 10 refs., 4 figs

  20. Optical system for trapping particles in air.

    Science.gov (United States)

    Kampmann, R; Chall, A K; Kleindienst, R; Sinzinger, S

    2014-02-01

    An innovative optical system for trapping particles in air is presented. We demonstrate an optical system specifically optimized for high precision positioning of objects with a size of several micrometers within a nanopositioning and nanomeasuring machine (NPMM). Based on a specification sheet, an initial system design was calculated and optimized in an iterative design process. By combining optical design software with optical force simulation tools, a highly efficient optical system was developed. Both components of the system, which include a refractive double axicon and a parabolic ring mirror, were fabricated by ultra-precision turning. The characterization of the optical elements and the whole system, especially the force simulations based on caustic measurements, represent an important interim result for the subsequently performed trapping experiments. The caustic of the trapping beam produced by the system was visualized with the help of image processing techniques. Finally, we demonstrated the unique efficiency of the configuration by reproducibly trapping fused silica spheres with a diameter of 10 μm at a distance of 2.05 mm from the final optical surface.

  1. Development of automatic flaw detection systems for magnetic particle examination

    International Nuclear Information System (INIS)

    Shirai, T.; Kimura, J.; Amako, T.

    1988-01-01

    Utilizing a video camera and an image processor, automatic flaw detection and discrimination techniques were developed for the purpose of achieving automated magnetic particle examination. Following this, fluorescent wet magnetic particle examination systems for blade roots and rotor grooves of turbine rotors, and a non-fluorescent dry magnetic particle examination system for butt welds, were developed. This paper describes these automatic magnetic particle examination (MT) systems and the functional test results

  2. Particle dispersing system and method for testing semiconductor manufacturing equipment

    Science.gov (United States)

    Chandrachood, Madhavi; Ghanayem, Steve G.; Cantwell, Nancy; Rader, Daniel J.; Geller, Anthony S.

    1998-01-01

    The system and method prepare a gas stream comprising particles at a known concentration using a particle disperser for moving particles from a reservoir of particles into a stream of flowing carrier gas. The electrostatic charges on the particles entrained in the carrier gas are then neutralized or otherwise altered, and the resulting particle-laden gas stream is then diluted to provide an acceptable particle concentration. The diluted gas stream is then split into a calibration stream and the desired output stream. The particles in the calibration stream are detected to provide an indication of the actual size distribution and concentration of particles in the output stream that is supplied to a process chamber being analyzed. Particles flowing out of the process chamber within a vacuum pumping system are detected, and the output particle size distribution and concentration are compared with the particle size distribution and concentration of the calibration stream in order to determine the particle transport characteristics of a process chamber, or to determine the number of particles lodged in the process chamber as a function of manufacturing process parameters such as pressure, flowrate, temperature, process chamber geometry, particle size, particle charge, and gas composition.

  3. Shielding requirements for particle bed propulsion systems

    Science.gov (United States)

    Gruneisen, S. J.

    1991-06-01

    Nuclear Thermal Propulsion systems present unique challenges in reliability and safety. Due to the radiation incident upon all components of the propulsion system, shielding must be used to keep nuclear heating in the materials within limits; in addition, electronic control systems must be protected. This report analyzes the nuclear heating due to the radiation and the shielding required to meet the established criteria while also minimizing the shield mass. Heating rates were determined in a 2000 MWt Particle Bed Reactor (PBR) system for all materials in the interstage region, between the reactor vessel and the propellant tank, with special emphasis on meeting the silicon dose criteria. Using a Lithium Hydride/Tungsten shield, the optimum shield design was found to be: 50 cm LiH/2 cm W on the axial reflector in the reactor vessel and 50 cm LiH/2 cm W in a collar extension of the inside shield outside of the pressure vessel. Within these parameters, the radiation doses in all of the components in the interstage and lower tank regions would be within acceptable limits for mission requirements.

  4. Particle simulation in curvilinear coordinate systems

    International Nuclear Information System (INIS)

    LeBrun, M.J.; Tajima, T.

    1989-01-01

    We present methods for particle simulation of plasmas in a nearly arbitrary coordinate metric and describe a toroidal electrostatic simulation code that evolved from this effort. A Mercier-type coordinate system is used, with a nonuniform radial grid for improved cross-field resolution. A fast iterative method for solving the Poisson equation is employed, and the interpolation/filtering technique is shown to be momentum- and energy-conserving in the continuum limit. Lorentz ion and drift electron species are used. The code has been thoroughly tested for its reproduction of linear and nonlinear physics, and has been applied to the toroidal drift wave problem and its impact on anomalous transport in tokamaks. 40 refs., 10 figs., 1 tab

  5. Particle ''swarm'' dynamics in triboelectric systems

    International Nuclear Information System (INIS)

    Vinay, Stephen J.; Jhon, Myung S.

    2001-01-01

    Using state-of-the-art flow/particle visualization and animation techniques, the time-dependent statistical distributions of charged-particle ''swarms'' exposed to external fields (both electrostatic and flow) are examined. We found that interparticle interaction and drag forces mainly influenced swarm dispersion in a Lagrangian reference frame, whereas the average particle trajectory was affected primarily by the external electric and flow fields

  6. INTERACTING MANY-PARTICLE SYSTEMS OF DIFFERENT PARTICLE TYPES CONVERGE TO A SORTED STATE

    DEFF Research Database (Denmark)

    Kokkendorff, Simon Lyngby; Starke, Jens; Hummel, N.

    2010-01-01

    We consider a model class of interacting many-particle systems consisting of different types of particles defined by a gradient flow. The corresponding potential expresses attractive and repulsive interactions between particles of the same type and different types, respectively. The introduced...... system converges by self-organized pattern formation to a sorted state where particles of the same type share a common position and those of different types are separated from each other. This is proved in the sense that we show that the property of being sorted is asymptotically stable and all other...... states are unstable. The models are motivated from physics, chemistry, and biology, and the principal investigations can be useful for many systems with interacting particles or agents. The models match particularly well a system in neuroscience, namely the axonal pathfinding and sorting in the olfactory...

  7. TU-AB-BRC-02: Accuracy Evaluation of GPU-Based OpenCL Carbon Monte Carlo Package (goCMC) in Biological Dose and Microdosimetry in Comparison to FLUKA Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Taleei, R; Peeler, C; Qin, N; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: One of the most accurate methods for radiation transport is Monte Carlo (MC) simulation. Long computation times prevent its wide application in the clinic. We have recently developed a fast MC code for carbon ion therapy called GPU-based OpenCL Carbon Monte Carlo (goCMC), and its accuracy in physical dose has been established. Since radiobiology is an indispensable aspect of carbon ion therapy, this study evaluates the accuracy of goCMC in biological dose and microdosimetry by benchmarking it against FLUKA. Methods: We performed simulations of carbon pencil beams at 150, 300 and 450 MeV/u in a homogeneous water phantom using goCMC and FLUKA. Dose and energy spectra for primary and secondary ions on the central beam axis were recorded. The repair-misrepair-fixation model was employed to calculate the Relative Biological Effectiveness (RBE). The Monte Carlo Damage Simulation (MCDS) tool was used to calculate microdosimetry parameters. Results: Physical dose differences on the central axis were <1.6% of the maximum value. Before the Bragg peak, differences in RBE and RBE-weighted dose were <2% and <1%. At the Bragg peak, the differences were 12.5%, caused by a small range discrepancy and the sensitivity of RBE to the beam spectra. Consequently, the RBE-weighted dose difference was 11%. Beyond the peak, RBE differences were <20% and primarily caused by differences in the Helium-4 spectrum. However, the RBE-weighted dose agreed within 1% due to the low physical dose. Differences in microdosimetric quantities were small except at the Bragg peak. The simulation time per source particle with FLUKA was 0.08 sec, while goCMC was approximately 1000 times faster. Conclusion: Physical doses computed by FLUKA and goCMC were in good agreement. Although relatively large RBE differences were observed at and beyond the Bragg peak, the RBE-weighted dose differences were considered to be acceptable.

  8. Collected abstracts on particle beam diagnostic systems

    International Nuclear Information System (INIS)

    Hickok, R.L.

    1979-01-01

    This report contains a compilation of abstracts on work related to particle beam diagnostics for high temperature plasmas. The abstracts were gathered in early 1978 and represent the status of the various programs as of that date. It is not suggested that this is a comprehensive list of all the work that is going on in the development of particle beam diagnostics, but it does provide a representative view of the work in this field. For example, no abstracts were received from the U.S.S.R. even though they have considerable activity in particle beam diagnostics

  9. New apparatus of single particle trap system for aerosol visualization

    Science.gov (United States)

    Higashi, Hidenori; Fujioka, Tomomi; Endo, Tetsuo; Kitayama, Chiho; Seto, Takafumi; Otani, Yoshio

    2014-08-01

    Control of the transport and deposition of charged aerosol particles is important in various manufacturing processes. Aerosol visualization is an effective method to directly observe the light scattering signal from a laser-irradiated single aerosol particle trapped in a visualization cell. A new single-particle trap system triggered by a light scattering pulse signal was developed in this study, and the performance of the device was evaluated experimentally. The experimental setup consisted of an aerosol generator, a differential mobility analyzer (DMA), an optical particle counter (OPC) and the single-particle trap system. Polystyrene latex (PSL) standard particles (0.5, 1.0 and 2.0 μm) were generated and classified according to charge by the DMA. Singly charged 0.5 and 1.0 μm particles and doubly charged 2.0 μm particles were used as test particles. The single-particle trap system was composed of a light scattering signal detector and a visualization cell. When a particle passed through the detector, a trigger signal with a given delay time was sent to the solenoid valves upstream and downstream of the visualization cell to trap the particle in the visualization cell. The motion of the particle in the visualization cell was monitored by a CCD camera, and the gravitational settling velocity and the electrostatic migration velocity were measured from the video image. The aerodynamic diameter obtained from the settling velocity was in good agreement with the Stokes diameter calculated from the electrostatic migration velocity for individual particles. It was also found that the aerodynamic diameter obtained from the settling velocity was a one-to-one function of the scattered light intensity of individual particles. The applicability of this system will be discussed.
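    The conversion from measured settling velocity to aerodynamic diameter can be sketched by inverting the Stokes settling relation for a unit-density sphere. This is a simplified illustration: the slip correction factor is neglected, which is only a few-percent effect for the micrometre-sized particles used in the record, and the constants are textbook values rather than the authors' calibration.

```python
import math

MU_AIR = 1.81e-5   # dynamic viscosity of air near 20 C [Pa s]
RHO_0 = 1000.0     # unit density defining the aerodynamic diameter [kg/m^3]
G = 9.81           # gravitational acceleration [m/s^2]

def aerodynamic_diameter(v_settle):
    """Invert the Stokes settling-velocity relation
    v = rho_0 * d_a**2 * g / (18 * mu) to obtain the aerodynamic
    diameter [m] from a measured gravitational settling velocity [m/s].
    Slip correction is neglected in this sketch."""
    return math.sqrt(18.0 * MU_AIR * v_settle / (RHO_0 * G))
```

    For a 1 μm unit-density sphere the relation gives a settling velocity of roughly 3e-5 m/s, which indicates the velocity resolution the video analysis must achieve.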

  10. Development of parallel GPU based algorithms for problems in nuclear area; Desenvolvimento de algoritmos paralelos baseados em GPU para solucao de problemas na area nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Adino Americo Heimlich

    2009-07-01

    Graphics Processing Units (GPUs) are high-performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of using GPUs for general purpose computing, their application has been extended to fields outside the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in two typical problems of the nuclear area: neutron transport simulation using the Monte Carlo method, and solving the heat equation in a two-dimensional domain by the finite difference method. To achieve this, we developed parallel GPU and CPU algorithms for the two problems described above. The comparison showed that the GPU-based approach is faster than the CPU-based one on a computer with two quad-core processors, without loss of precision. (author)
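    The second benchmark, the 2-D heat equation by finite differences, can be sketched with the classic explicit five-point stencil; each grid point is updated independently from its neighbours, which is exactly why the problem maps well onto one GPU thread per point. This NumPy version is a CPU reference sketch, not the thesis's GPU code.

```python
import numpy as np

def heat_step(u, alpha=1.0, dx=1.0, dt=0.2):
    """One explicit finite-difference step of the 2-D heat equation
    u_t = alpha * (u_xx + u_yy) with fixed (Dirichlet) boundary values.
    Each interior point is updated from its four neighbours via the
    five-point Laplacian stencil; the scheme is stable for
    alpha * dt / dx**2 <= 0.25."""
    un = u.copy()
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4.0 * u[1:-1, 1:-1]) / dx**2
    un[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * dt * lap
    return un
```

    Iterating `heat_step` marches the temperature field forward in time; a GPU kernel would compute the same stencil for all interior points in parallel.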

  11. Global Positioning System (GPS) Energetic Particle Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Energetic particle data from the CXD and BDD instrument on the GPS constellation are available to the space weather research community. The release of these data...

  12. [Construction of Lactobacillus rhamnosus GG particles surface display system].

    Science.gov (United States)

    Su, Runyu; Nie, Boyao; Yuan, Shengling; Tao, Haoxia; Liu, Chunjie; Yang, Bailiang; Wang, Yanchun

    2017-01-25

    To describe a novel particle surface display system consisting of gram-positive enhancer matrix (GEM) particles and anchor proteins for bacterium-like particle vaccines, we treated Lactobacillus rhamnosus GG bacteria with 10% heated TCA to prepare GEM particles, and then characterized the harvested GEM particles by electron microscopy, RT-PCR and SDS-PAGE. Meanwhile, Escherichia coli was induced to express the hybrid proteins PA3-EGFP and P60-EGFP, and the GEM particles were incubated with them. Binding of the anchor proteins was then determined by Western blotting, transmission electron microscopy, fluorescence microscopy and spectrofluorometry. The GEM particles preserved their original size and shape, while their protein and DNA contents were substantially released. The two anchor proteins were both efficiently immobilized on the surface of the GEM particles, and GEM particles bound by anchor proteins appeared brushy. The fluorescence of GEM particles anchoring PA3 was slightly brighter than that of P60, but the difference was not significant (P>0.05). GEM particles prepared from L. rhamnosus GG have a good binding efficiency with the anchor proteins PA3-EGFP and P60-EGFP. Therefore, this novel foreign protein surface display system could be used for bacterium-like particle vaccines.

  13. Laboratory evaluation of a gasifier particle sampling system using model compounds of different particle morphology

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, Patrik T.; Malik, Azhar; Pagels, Joakim; Lindskog, Magnus; Rissler, Jenny; Gudmundsson, Anders; Bohgard, Mats; Sanati, Mehri [Lund University, Division of Ergonomics and Aerosol Technology, P.O. Box 118, Lund (Sweden)

    2011-07-15

    The objective of this work was to design and evaluate an experimental setup to be used for field studies of particle formation in biomass gasification processes. The setup includes a high-temperature dilution probe and a denuder to separate solid particles from condensable volatile material. The efficiency of the setup in removing volatile material from the sampled stream and the influence of condensation on particles with different morphologies are presented. To study the sampling setup, model aerosols were created with a nebulizer to produce compact, solid KCl particles and with a diffusion flame burner to produce agglomerated, irregular soot particles. The nebulizer and soot generator were followed by an evaporation-condensation section where volatile material, dioctyl sebacate (DOS), was added to the system as a tar model compound. The model aerosol particles were heated to 200 °C to create a system containing both solid particles and volatile organic material in the gas phase. The heated aerosol particles were sampled and diluted at the same temperature with the dilution probe. Downstream of the probe, the DOS was adsorbed in the denuder. This was achieved by slowly decreasing the temperature of the diluted sample towards ambient level in the denuder, thereby reducing the supersaturation of organic vapors, which decreased the probability of tar condensation and of nucleation of new particles. Both the generation system and the sampling technique gave reproducible results. A DOS collection efficiency of >99% was achieved if the denuder inlet concentration was diluted to less than 1-6 mg/m³, depending on the denuder flow rate. Higher concentrations led to a significant impact on the resulting KCl size distribution. The model compounds were chosen to study the effect of the particle morphology on the resulting particle characteristics after the sampling setup. When similar amounts of volatile material condensed on soot agglomerates and

  14. WE-AB-204-11: Development of a Nuclear Medicine Dosimetry Module for the GPU-Based Monte Carlo Code ARCHER

    Energy Technology Data Exchange (ETDEWEB)

    Liu, T; Lin, H; Xu, X [Rensselaer Polytechnic Institute, Troy, NY (United States); Stabin, M [Vanderbilt Univ Medical Ctr, Nashville, TN (United States)

    2015-06-15

    Purpose: To develop a nuclear medicine dosimetry module for the GPU-based Monte Carlo code ARCHER. Methods: We have developed a nuclear medicine dosimetry module for the fast Monte Carlo code ARCHER. The coupled electron-photon Monte Carlo transport kernel included in ARCHER is built upon the Dose Planning Method code (DPM). The developed module manages the radioactive decay simulation by consecutively tracking several types of radiation on a per-disintegration basis using the statistical sampling method. Optimization techniques such as persistent threads and prefetching are studied and implemented. The developed module is verified against the VIDA code, which is based on the Geant4 toolkit and has previously been verified against OLINDA/EXM. A voxelized geometry is used in the preliminary test: a sphere made of ICRP soft tissue is surrounded by a box filled with water. A uniform activity distribution of I-131 is assumed in the sphere. Results: The self-absorption dose factors (mGy/MBq-s) of the sphere with varying diameters are calculated by ARCHER and VIDA, respectively. ARCHER's results are in agreement with VIDA's, which were obtained from a previous publication. VIDA takes hours of CPU time to finish the computation, while ARCHER takes 4.31 seconds for the 12.4-cm uniform activity sphere case. For a fairer CPU-GPU comparison, more effort will be made to eliminate the algorithmic differences. Conclusion: The coupled electron-photon Monte Carlo code ARCHER has been extended to radioactive decay simulation for nuclear medicine dosimetry. The developed code exhibits good performance in our preliminary test. The GPU-based Monte Carlo code is developed with grant support from the National Institute of Biomedical Imaging and Bioengineering through an R01 grant (R01EB015478)
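    The module's per-disintegration tracking can be illustrated with a toy sampler: each emission listed for the nuclide is launched independently with probability equal to its yield, so a single decay may start several particle histories. The emission table below is a deliberately simplified, illustrative subset of the I-131 decay scheme, not ARCHER's actual data, and the function name is hypothetical.

```python
import random

# Illustrative, simplified I-131 emission table:
# (type, mean energy in MeV, mean yield per disintegration).
# This is NOT the full decay scheme.
EMISSIONS = [("beta", 0.192, 0.894),
             ("gamma", 0.364, 0.815),
             ("gamma", 0.637, 0.072)]

def sample_disintegration(rng=random):
    """Sample one radioactive disintegration: each tabulated emission is
    produced independently with probability equal to its yield, so one
    decay can launch several particles for the transport kernel to
    track consecutively."""
    return [(kind, e) for kind, e, y in EMISSIONS if rng.random() < y]
```

    Averaged over many disintegrations, the sampler reproduces the tabulated yields, which is what ties the transport tally back to dose per unit activity.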

  15. Particle and particle systems characterization small-angle scattering (SAS) applications

    CERN Document Server

    Gille, Wilfried

    2016-01-01

    Small-angle scattering (SAS) is the premier technique for the characterization of disordered nanoscale particle ensembles. SAS is produced by the particle as a whole and does not depend in any way on the internal crystal structure of the particle. Since the first applications of X-ray scattering in the 1930s, SAS has developed into a standard method in the field of materials science. SAS is a non-destructive method and can be directly applied for solid and liquid samples. Particle and Particle Systems Characterization: Small-Angle Scattering (SAS) Applications is geared to any scientist who might want to apply SAS to study tightly packed particle ensembles using elements of stochastic geometry. After completing the book, the reader should be able to demonstrate detailed knowledge of the application of SAS for the characterization of physical and chemical materials.

  16. Inelastic two composite particle systems scattering at high energy

    International Nuclear Information System (INIS)

    Zhang Yushun.

    1986-11-01

    In this paper, by using Bohr's collective coordinates and phenomenological deformed optical potentials, the scattering amplitudes of two composite-particle systems are obtained, and the collective excitation of two composite-particle systems in the scattering process is discussed. (author). 10 refs, 6 figs, 2 tabs

  17. Transistor-based particle detection systems and methods

    Science.gov (United States)

    Jain, Ankit; Nair, Pradeep R.; Alam, Muhammad Ashraful

    2015-06-09

    Transistor-based particle detection systems and methods may be configured to detect charged and non-charged particles. Such systems may include a supporting structure contacting a gate of a transistor and separating the gate from a dielectric of the transistor, and the transistor may have a near pull-in bias and a sub-threshold region bias to facilitate particle detection. The transistor may be configured to change current flow through the transistor in response to a change in stiffness of the gate caused by securing of a particle to the gate, and the transistor-based particle detection system may be configured to detect the non-charged particle at least from the change in current flow.

  18. A study on the particle penetration in an RMS particle transport system

    International Nuclear Information System (INIS)

    Son, S. M.; Oh, S. H.; Choi, C. R.

    2014-01-01

    In nuclear facilities, a radiation monitoring system (RMS) monitors the exhaust gas containing radioactive material. Samples of exhaust gas are collected in the downstream region of the air cleaning units (ACUs) in order to examine radioactive materials. The amount of radioactive material can be predicted by analyzing the collected samples. The representativeness of the collected samples should be assured in order to accurately detect and measure radioactive materials. The radius of curvature is typically five times the tube diameter. Sometimes, a booster fan is additionally added to enhance the particle penetration rate... In this study, particle penetrations are calculated to evaluate the penetration rate for various design parameters (tube length, tube inclination angle, radius of curvature, etc.). The particle penetration rates have been calculated for several elements of the particle transport system. In general, the horizontal tube length and the number of bends have the largest impact on the penetration rate of the particle transport system. If the sampling location is far from the radiation monitoring system, additional installation of booster fans can be considered for large-diameter tubes, but is not recommended for small-diameter tubes. To enhance the particle penetration rate, the following measures are recommended, in order of priority: 1) reduce the distance between the sampling location and the radiation monitoring system; 2) reduce the number of bends in the tube
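
    The overall penetration of a sampling line like the one studied above is commonly estimated as the product of the penetrations of its individual elements (straight runs, bends, and so on). A minimal sketch of that bookkeeping, with purely illustrative per-element values, not figures from the study:

    ```python
    # Sketch: overall penetration of a sampling line as the product of
    # per-element penetration fractions (straight tubes, bends). The
    # element values below are illustrative, not data from the study.

    def line_penetration(element_penetrations):
        """Multiply per-element penetration fractions (each in [0, 1])."""
        total = 1.0
        for p in element_penetrations:
            total *= p
        return total

    # Hypothetical line: two horizontal runs and three 90-degree bends.
    elements = [0.95, 0.95, 0.90, 0.90, 0.90]
    print(round(line_penetration(elements), 4))  # -> 0.6579
    ```

    The multiplicative form makes the priority list above quantitative: removing one bend from this hypothetical line raises the overall penetration by roughly 11%.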

  19. REDUCE system in elementary particle physics

    International Nuclear Information System (INIS)

    Grozin, A.G.

    1990-01-01

    This preprint is the first part of the problem book on using REDUCE for calculations of cross sections and decay probabilities in elementary particle physics. It contains the review of the necessary formulae and examples of using REDUCE for calculations with vectors and Dirac matrices. 5 refs.; 11 figs

  20. Particle surface area and bacterial activity in recirculating aquaculture systems

    DEFF Research Database (Denmark)

    Pedersen, Per Bovbjerg; von Ahnen, Mathis; Fernandes, Paulo

    2017-01-01

    Suspended particles in recirculating aquaculture systems (RAS) provide surface area that can be colonized by bacteria. More particles accumulate as the intensity of recirculation increases, thus potentially increasing the bacterial carrying capacity of the systems. Applying a recent, rapid, culture-independent fluorometric detection method (Bactiquant®) for measuring bacterial activity, the current study explored the relationship between total particle surface area (TSA, derived from the size distribution of particles >5 μm) and bacterial activity in freshwater RAS operated at increasing intensity of recirculation... but may provide significant surface area. Hence, the study substantiates that particles in RAS provide surface area supporting bacterial activity, and that particles play a key role in controlling the bacterial carrying capacity, at least in less intensive RAS. Applying fast, culture-independent techniques...
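
    The TSA quantity used above is derived from a binned particle size distribution (particles >5 μm). A minimal sketch of such a computation, assuming spherical particles and using illustrative bin midpoints and counts, not data from the study:

    ```python
    import math

    # Sketch: total particle surface area (TSA) from a binned size
    # distribution, assuming spherical particles. Bin midpoints (um) and
    # counts per litre are illustrative values, not data from the study.

    def total_surface_area(bin_midpoints_um, counts_per_litre):
        """Sum of sphere surface areas (um^2 per litre) over size bins."""
        return sum(math.pi * d ** 2 * n  # pi*d^2 = surface of a sphere of diameter d
                   for d, n in zip(bin_midpoints_um, counts_per_litre))

    midpoints = [7.5, 15.0, 30.0, 60.0]      # particle diameters > 5 um
    counts = [5000.0, 1200.0, 300.0, 40.0]   # particles per litre
    print(total_surface_area(midpoints, counts))
    ```

    The d² weighting is why the study can find that small particles dominate the surface area even when they contribute little volume.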

  1. Mass spectrometer provided with an optical system for separating neutral particles from charged particles

    Energy Technology Data Exchange (ETDEWEB)

    Reeher, J R; Story, M S; Smith, R D

    1977-03-03

    This invention concerns a mass spectrometer with an ion-focusing optical system that efficiently separates charged and neutral particles. It concerns an apparatus that can be used with ionisation regions operating at relatively high pressure (>10^-2 Torr). The invention relates more particularly to a mass spectrometer with an inlet device for the samples to be identified, a sample ionisation system for forming charged and neutral particles, a mass analyser, and an optical system for focusing the ions formed into the mass analyser. The optics include several conducting components, of which at least one has sides formed of grids in the direction of the axis towards the analyser, the optics forming a potential well along the axis. The selected charged particles are focused into the analyser, and the remaining particles can escape through the openings in the conducting grids.

  2. Particle filtration in consolidated granular systems

    International Nuclear Information System (INIS)

    Schwartz, L.M.; Wilkinson, D.J.; Bolsterli, M.; Hammond, P.

    1993-01-01

    Grain-packing algorithms are used to model the mechanical trapping of dilute suspensions of particles by consolidated granular media. We study the distribution of filtrate particles, the formation of a damage zone (internal filter cake), and the transport properties of the host/filter-cake composite. At the early stages of filtration, our simulations suggest simple relationships between the structure of the internal filter cake and the characteristics of the underlying host matrix. These relationships are then used to describe the dynamics of the filtration process. Depending on the grain size and porosity of the host matrix, calculated filtration rates may either be greater than (spurt loss) or less than (due to internal clogging) those predicted by standard surface-filtration models

  3. Particle Tracking Model (PTM) with Coastal Modeling System (CMS)

    Science.gov (United States)

    2015-11-04

    Coastal Inlets Research Program Particle Tracking Model (PTM) with Coastal Modeling System (CMS). The Particle Tracking Model (PTM) is a Lagrangian... currents and waves. The Coastal Inlets Research Program (CIRP) supports the PTM with the Coastal Modeling System (CMS), which provides coupled wave and current forcing for PTM simulations. CMS-PTM is implemented in the Surface-water Modeling System, a GUI environment for input development

  4. Spherical and cylindrical particle resonator as a cloak system

    Science.gov (United States)

    Minin, I. V.; Minin, O. V.; Eremeev, A. I.; Tseplyaev, I. S.

    2018-05-01

    The concept of a dielectric spherical or cylindrical particle in resonant mode as a cloak system is offered. In fundamental modes (the modes with the smallest volume correspond to |m| = l and s = 1) the field is concentrated mostly in the equatorial plane and at the surface of the sphere. Thus, under resonant modes, the perturbation due to a cuboid particle inserted into the spherical or cylindrical particle has almost no effect on the field forming the resonance, regardless of the material of the internal particle (defect), as long as this material does not cover the region where the resonance takes place.

  5. Multi-particle correlations in quaternionic quantum systems

    International Nuclear Information System (INIS)

    Brumby, S.P.; Joshi, G.C.

    1994-01-01

    The authors investigated the outcomes of measurements on correlated, few-body quantum systems described by a quaternionic quantum mechanics that allows for regions of quaternionic curvature. It was found that a multi-particle interferometry experiment using a correlated system of four nonrelativistic, spin-half particles has the potential to detect the presence of quaternionic curvature. Two-body systems, however, are shown to give predictions identical to those of standard quantum mechanics when relative angles are used in the construction of the operators corresponding to measurements of particle spin components. 15 refs

  6. Two particle correlations in small systems

    CERN Document Server

    Palmeiro Pazos, Brais

    2015-01-01

    The present report summarizes the work on the Summer Student project within the ALICE Collaboration. The aim of the project is to study two-particle correlations in peripheral Pb-Pb collisions with the ALICE detector. The first part of the project is the development of a Toy Monte Carlo (MC) generator to reproduce and understand the underlying physics and to probe the analysis on a controlled data set. Then, once the Toy MC is fully understood, it is possible to move to real data, where some unexpected effects might appear and should be understood in order to have the whole physical picture of peripheral Pb-Pb collisions.
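
    A two-particle correlation study of this kind typically fills a histogram of azimuthal pair separations Δφ. A toy sketch in that spirit, with a purely illustrative event model (a few uniformly random particles plus one injected back-to-back pair), not the ALICE analysis itself:

    ```python
    import math
    import random

    # Toy sketch of a two-particle correlation in delta-phi. The event
    # model is illustrative only: each event has 4 particles at random
    # phi plus one injected back-to-back pair as a correlation signal,
    # which should show up as an away-side peak at delta-phi = pi.

    def delta_phi(phi1, phi2):
        """Wrap the azimuthal difference into [-pi/2, 3*pi/2)."""
        dphi = (phi1 - phi2) % (2 * math.pi)
        if dphi >= 1.5 * math.pi:
            dphi -= 2 * math.pi
        return dphi

    def correlation_histogram(n_events, n_bins=16, seed=1):
        random.seed(seed)
        hist = [0] * n_bins
        for _ in range(n_events):
            phis = [random.uniform(0, 2 * math.pi) for _ in range(4)]
            lead = random.uniform(0, 2 * math.pi)
            phis += [lead, (lead + math.pi) % (2 * math.pi)]  # injected pair
            for i in range(len(phis)):
                for j in range(i + 1, len(phis)):
                    dphi = delta_phi(phis[i], phis[j])
                    b = int((dphi + 0.5 * math.pi) / (2 * math.pi) * n_bins)
                    hist[min(b, n_bins - 1)] += 1
        return hist

    print(correlation_histogram(200))
    ```

    With this event model, the bin containing Δφ = π collects the 200 injected pairs on top of a flat combinatorial background, which is the basic signal-over-background structure a real correlation analysis extracts.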

  7. Engineering aspects of particle beam fusion systems

    International Nuclear Information System (INIS)

    Cook, D.L.

    1982-01-01

    The Department of Energy is supporting research directed toward demonstration of DT fuel ignition in an Inertial Confinement Fusion (ICF) capsule. As part of the ICF effort, two major Particle Beam Fusion Accelerators (PBFA I and II) are being developed at Sandia National Laboratories with the objective of providing energetic light ion beams of sufficient power density for target implosion. Supporting light ion beam research is being performed at the Naval Research Laboratory and at Cornell University. If the answers to several key physics and engineering questions are favorable, pulsed power accelerators will be able to provide an efficient and inexpensive approach to high target gain and eventual power production applications

  8. UHE particle production in close binary systems

    International Nuclear Information System (INIS)

    Hillas, A.M.

    1985-01-01

    Cygnus X-3 appears to generate so much power in the form of charged particles of up to ~10^17 eV that the galaxy may need ~1 such source on average to maintain its flux of ultra high energy cosmic rays. Accreting gas must supply the energy, and in a surprisingly ordered form, if it is correct to use a Vestrand-Eichler model for the radiation of gammas, modified by the introduction of an accretion wake. Certain relationships between 10^12 eV and 10^15 eV gamma rays are expected

  9. Fiber-Optic Monitoring System of Particle Counters

    Directory of Open Access Journals (Sweden)

    A. A. Titov

    2016-01-01

    Full Text Available The article considers the development of a fiber-optic system for monitoring particle counters. At present, optical particle counters, which are often arranged at a considerable distance from each other, are used to study the saltation phenomenon. Electric communication lines can be used for monitoring the counters; however, they complicate the system and raise its price. We therefore propose a fiber-optic system and particle counter free from these shortcomings. The proposed counter differs from the known one in that the radiation is brought in, and the radiation scattered by particles is brought out, through optical fibers, while the direct radiation, rather than being absorbed by a light trap, enters an optical fiber and can be used to illuminate the other counters, thereby allowing them to be connected in series. The work involved choosing a quartz multimode optical fiber for communication, defining the parameters of the optical fiber and lenses of the particle counter, and selecting the radiation source and photodetector. Using the theory of light diffraction on a particle, the measuring range of particle sizes has been determined. The system speed has been estimated, and it has been shown that the communication range can reach 200 km. It should be noted that the modulation noise of particle counters connected in series affects the useful signal. To assess the extent of this influence, we developed a calculation procedure showing that with ten counters connected in series the influence on the signal-to-noise ratio is insignificant. Thus, it has been shown that the proposed fiber-optic system can be used for monitoring particle counters across desertified territories.

  10. Optimization of a particle optical system in a mutilprocessor environment

    International Nuclear Information System (INIS)

    Wei Lei; Yin Hanchun; Wang Baoping; Tong Linsu

    2002-01-01

    In the design of a charged particle optical system, many geometrical and electric parameters have to be optimized to improve the performance characteristics. In every optimization cycle, the electromagnetic field and particle trajectories have to be calculated. The optimization of a charged particle optical system is therefore seriously limited by computer resources. Apart from this, numerical errors of the calculation may also influence the convergence of the merit function. This article studies how to improve the optimization of charged particle optical systems. A new method is used to determine the gradient matrix; with this method, the accuracy of the Jacobian matrix can be improved. In this paper, the charged particle optical system is optimized with the Message Passing Interface (MPI). The electromagnetic field, particle trajectories, and gradients of the optimization variables are calculated on networks of workstations, so the speed of optimization is greatly increased. It is thus possible to design a complicated charged particle optical system of optimum quality in an MPI environment. Finally, an electron gun for a cathode ray tube has been optimized in an MPI environment to verify the method proposed in this paper
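
    The gradient (Jacobian) matrix discussed above is computed column by column, and the columns are mutually independent, which is what makes distribution over MPI workers effective. A sketch using central differences (more accurate than one-sided differences, the kind of gain the paper attributes to its gradient method) on an illustrative stand-in residual function, not the electron-optics model of the paper:

    ```python
    # Sketch: central-difference Jacobian of a residual vector. Central
    # differences are O(h^2)-accurate versus O(h) for one-sided ones.
    # Each column depends only on its own parameter, so in an MPI setting
    # the columns can be evaluated on separate workers. demo_residuals is
    # an illustrative stand-in, not the paper's electron-optics model.

    def jacobian(residuals, x, h=1e-6):
        """J[i][j] = d residuals_i / d x_j by central differences."""
        m = len(residuals(x))
        n = len(x)
        J = [[0.0] * n for _ in range(m)]
        for j in range(n):  # independent: one MPI worker per column
            xp, xm = list(x), list(x)
            xp[j] += h
            xm[j] -= h
            rp, rm = residuals(xp), residuals(xm)
            for i in range(m):
                J[i][j] = (rp[i] - rm[i]) / (2 * h)
        return J

    def demo_residuals(x):
        return [x[0] ** 2 - x[1], x[0] * x[1]]

    print(jacobian(demo_residuals, [1.0, 2.0]))
    ```

    At x = (1, 2) the exact Jacobian is [[2, -1], [2, 1]], which the central-difference estimate reproduces to within rounding error.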

  11. Shock waves in collective field theories for many particle systems

    Energy Technology Data Exchange (ETDEWEB)

    Oki, F; Saito, T [Kyoto Prefectural Univ. of Medicine (Japan); Shigemoto, K

    1980-10-01

    We find shock wave solutions to collective field equations for quantum mechanical many-particle systems. The importance of the existence of a 'tension' acting on the surface of the shock-wave front is pointed out.

  12. Data simulation for the Associated Particle Imaging system

    International Nuclear Information System (INIS)

    Tunnell, L.N.

    1994-01-01

    A data simulation procedure for the Associated Particle Imaging (API) system has been developed by postprocessing output from the Monte Carlo Neutron Photon (MCNP) code. This paper compares the simulated results to our experimental data

  13. Initiator Systems Effect on Particle Coagulation and Particle Size Distribution in One-Step Emulsion Polymerization of Styrene

    Directory of Open Access Journals (Sweden)

    Baijun Liu

    2016-02-01

    Full Text Available Particle coagulation is a facile approach to produce large-scale polymer latex particles. This approach has been widely used in academic and industrial research owing to its higher polymerization rate and one-step polymerization process. Our work was motivated by the need to control the extent (or time) of particle coagulation. Depending on the reaction parameters, particle coagulation is also able to produce narrowly dispersed latex particles. In this study, a series of experiments were performed to investigate the role of the initiator system in determining particle coagulation and particle size distribution. Under optimal initiation conditions, such as cationic initiator systems or higher reaction temperature, the time of particle coagulation is advanced into the particle nucleation period, leading to narrowly dispersed polymer latex particles. By using a combination of the Smoluchowski equation and electrostatic stability theory, the relationship between the particle size distribution and particle coagulation was established: the earlier the particle coagulation, the narrower the particle size distribution, while the larger the extent of particle coagulation, the larger the average particle size. Combined with the results of previous studies, a systematic method for controlling the particle size distribution in the presence of particle coagulation was developed.
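
    The Smoluchowski equation invoked above describes how coagulation moves number density from small to large aggregates. A minimal sketch of the discrete equation with a constant kernel and a monodisperse start, both illustrative choices rather than parameters from the study:

    ```python
    # Sketch of the discrete Smoluchowski coagulation equation with a
    # constant kernel K, showing the size distribution shifting toward
    # larger aggregates. Kernel, time step, and initial distribution are
    # illustrative, not parameters from the study.

    def smoluchowski_step(n, K, dt):
        """One explicit Euler step; n[k] is the density of (k+1)-mers."""
        size = len(n)
        total = sum(n)
        gain = [0.0] * size
        loss = [0.0] * size
        for i in range(size):
            loss[i] = K * n[i] * total
            for j in range(i):
                # a (j+1)-mer and an (i-j)-mer merge into an (i+1)-mer
                gain[i] += 0.5 * K * n[j] * n[i - j - 1]
        return [n[k] + dt * (gain[k] - loss[k]) for k in range(size)]

    n = [1.0] + [0.0] * 19   # monodisperse start: only monomers
    for _ in range(100):     # integrate to t = 1 (in units of 1/(K*n0))
        n = smoluchowski_step(n, K=1.0, dt=0.01)
    print(n[0], n[1])        # monomers deplete, dimers appear
    ```

    For the constant kernel the total number density decays as N(t) = N0/(1 + K N0 t/2), so after this run roughly a third of the particles have merged away, which is the coagulation-driven growth of mean size described above.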

  14. Pair correlation of particles in strongly nonideal systems

    International Nuclear Information System (INIS)

    Vaulina, O. S.

    2012-01-01

    A new semiempirical model is proposed for describing the spatial correlation between interacting particles in nonideal systems. The developed model describes the main features in the behavior of the pair correlation function for crystalline structures and can also be used for qualitative and quantitative description of the spatial correlation of particles in strongly nonideal liquid systems. The proposed model is compared with the results of simulation of the pair correlation function.
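
    The pair correlation function that the model describes can be estimated directly from a particle configuration. A sketch for a 2D periodic box, using a random (ideal-gas-like) configuration as illustrative input, for which g(r) should stay close to 1 at all r:

    ```python
    import math
    import random

    # Sketch: estimating the pair correlation function g(r) of particles
    # in a 2D periodic box. The random configuration is illustrative; a
    # strongly nonideal system would instead show peaks in g(r).

    def pair_correlation(points, box, dr, r_max):
        n = len(points)
        nbins = int(r_max / dr)
        hist = [0] * nbins
        for i in range(n):
            for j in range(i + 1, n):
                dx = points[i][0] - points[j][0]
                dy = points[i][1] - points[j][1]
                dx -= box * round(dx / box)   # minimum-image convention
                dy -= box * round(dy / box)
                r = math.hypot(dx, dy)
                if r < r_max:
                    hist[int(r / dr)] += 1
        rho = n / box ** 2
        g = []
        for k in range(nbins):
            shell = math.pi * ((k + 1) ** 2 - k ** 2) * dr ** 2  # ring area
            ideal = rho * shell * n / 2                          # expected pairs
            g.append(hist[k] / ideal)
        return g

    random.seed(0)
    pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(400)]
    g = pair_correlation(pts, box=10.0, dr=0.25, r_max=4.0)
    ```

    Normalizing each shell's pair count by the ideal-gas expectation is what makes g(r) → 1 at large r for any fluid, with the short-range structure of a nonideal system appearing as deviations from 1.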

  15. Characterization of steady streaming for a particle manipulation system.

    Science.gov (United States)

    Amit, Roni; Abadi, Avi; Kosa, Gabor

    2016-04-01

    Accurate positioning of biological cells or microscopic particles without directly contacting them is a challenging task in biomedical engineering. Various trapping methods for controlling the position of a particle have been suggested; the common driving methods are based on laser and ultrasonic actuation principles. In this work we suggest a design for a hydrodynamic particle manoeuvring system. The system operates using steady streaming in a viscous fluid medium induced by the high-frequency vibration of piezoelectric cantilevers. A particle within the workspace of the system can be trapped and manipulated to a desired position by the fairly unidirectional flow field created by the beams. In this paper, the flow field in the particle manipulation system is characterized numerically and experimentally. We find that the flow field resembles the analytical solution for the flow field created by an oscillating sphere. Furthermore, we validate numerically the quadratic relation between the steady streaming velocity and the vibration amplitude of the beam. The calibration of the piezoelectric actuator's oscillation amplitudes enables effective positioning of particles with diameters of 20 μm to 1 mm. We find that a 30×0.8×2 mm³ piezoelectric beam vibrating at its first resonance frequency, 200 Hz, is able to move a particle at a typical flow velocity ranging between 0.05 mm/s and 0.13 mm/s in 430 cSt Si oil (Re = 0.2).
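
    The quadratic relation between steady streaming velocity and vibration amplitude can be checked by fitting the slope on log-log axes. A sketch on synthetic data generated from the assumed law itself, purely to illustrate the fitting step:

    ```python
    import math

    # Sketch: verifying a quadratic scaling u_s ~ A^2 between steady
    # streaming velocity and vibration amplitude via the slope on
    # log-log axes. The data points are synthetic, generated from the
    # assumed law itself; they are not measurements from the paper.

    def loglog_slope(amplitudes, velocities):
        """Least-squares slope of log(v) versus log(a)."""
        xs = [math.log(a) for a in amplitudes]
        ys = [math.log(v) for v in velocities]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return num / den

    amps = [1.0, 2.0, 4.0, 8.0]           # vibration amplitude (a.u.)
    vels = [0.05 * a ** 2 for a in amps]  # synthetic u_s obeying u_s ~ A^2
    print(loglog_slope(amps, vels))       # slope ~ 2 for quadratic scaling
    ```

    On real simulation output, a fitted slope near 2 is the numerical validation the abstract refers to.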

  16. Energy exchange in systems of particles with nonreciprocal interaction

    Energy Technology Data Exchange (ETDEWEB)

    Vaulina, O. S.; Lisina, I. I., E-mail: Irina.Lisina@mail.ru; Lisin, E. A. [Russian Academy of Sciences, Joint Institute for High Temperatures (Russian Federation)

    2015-10-15

    A model is proposed to describe the sources of additional kinetic energy and its redistribution in systems of particles with a nonreciprocal interaction. The proposed model is shown to explain the qualitative specific features of the dust particle dynamics in the sheath region of an RF discharge. Prominence is given to the systems of particles with a quasi-dipole–dipole interaction, which is similar to the interaction induced by the ion focusing effects that occur in experiments on a laboratory dusty plasma, and with the shadow interaction caused by thermophoretic forces and Le Sage’s forces.

  17. Gravitational instantons as models for charged particle systems

    Science.gov (United States)

    Franchetti, Guido; Manton, Nicholas S.

    2013-03-01

    In this paper we propose ALF gravitational instantons of types A_k and D_k as models for charged particle systems. We calculate the charges of the two families. These are -(k+1) for A_k, which is proposed as a model for k+1 electrons, and 2-k for D_k, which is proposed as a model for either a particle of charge +2 and k electrons or a proton and k-1 electrons. Making use of preferred topological and metrical structures of the manifolds, namely metrically preferred representatives of middle-dimensional homology classes, we construct two different energy functionals which reproduce the Coulomb interaction energy for a system of charged particles.

  18. A Magnetostrictive Tuning System for Particle Accelerators

    CERN Document Server

    Tai, Chiu-Ying; Daly, Edward; Davis, Kirk; Espinola, William; Han, Zhixiu; Joshi, Chandrashekhar; Mavanur, Anil; Racz, Livia; Shepard, Kenneth

    2005-01-01

    Energen, Inc. has designed, built, and demonstrated several fast and slow tuners based on its magnetostrictive actuators and stepper motor. These tuners are designed for Superconducting Radio Frequency (SRF) cavities, which are important structures in particle accelerators that support a wide spectrum of disciplines, including nuclear and high-energy physics and free electron lasers (FEL). In the past two years, Energen's work has focused on magnetostrictive fast tuners for microphonics and Lorentz detuning compensation on elliptical-cell and spoke-loaded cavities, including the capability for real-time closed-loop control. These tuners were custom designed to meet specific requirements, which included a stroke range from a few to 100 microns, operation frequencies from hundreds of hertz to kilohertz, and cryogenic temperature operation in vacuum or liquid helium. They have been tested in-house and at different laboratories, such as DESY, Argonne National Lab, and Jefferson Lab. Some recent results are presented in this paper.

  19. A Magnetostrictive Tuning System for Particle Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Chiu-Ying Tai; Jordan Cormier; William Espinola; Zhixiu Han; Chad Joshi; Anil Mavanur; Livia Racz; Kenneth Shepard; Edward Daly; Kirk Davis

    2005-05-16

    Energen, Inc. has designed, built, and demonstrated several fast and slow tuners based on its magnetostrictive actuators and stepper motor. These tuners are designed for Superconducting Radio Frequency (SRF) cavities, which are important structures in particle accelerators that support a wide spectrum of disciplines, including nuclear and high-energy physics and free electron lasers (FEL). In the past two years, Energen's work has focused on magnetostrictive fast tuners for microphonics and Lorentz detuning compensation on elliptical-cell and spoke-loaded cavities. These tuners were custom designed to meet specific requirements, which included a stroke range from a few to 100 microns, operation frequencies from hundreds of hertz to kilohertz, and cryogenic temperature operation in vacuum or liquid helium. They have been tested in-house and at different laboratories, such as DESY, Argonne National Lab, and Jefferson Lab. Some recent results are presented in this paper.

  20. Conservative interacting particles system with anomalous rate of ergodicity

    OpenAIRE

    Brzeźniak, Zdzislaw; Flandoli, Franco; Neklyudov, Misha; Zegarliński, Boguslaw

    2010-01-01

    We analyze a certain conservative interacting particle system and establish the ergodicity of the system for a family of invariant measures. Furthermore, we show that the convergence rate to equilibrium is exponential. This result is of interest because it presents a counterexample to the standard assumption of physicists that a conservative system implies a polynomial rate of convergence.

  1. AEi Systems designing power system for world's largest particle accelerator

    CERN Multimedia

    Weinberg, Lee

    2007-01-01

    "AEi Systems, a world leader in power systems analysis and design, announced today that the Large Hadron Collider (LHC) at CERN (the European Centre for Nuclear Research) near Geneva, Switzerland, has engaged AEi Systems to design and develop a radiation-hard power supply for CERN's giant ATLAS particle detector." (1 page)

  2. Cryogenic systems for detectors and particle accelerators

    International Nuclear Information System (INIS)

    Sondericker, J.H.

    1988-01-01

    It has been one hundred years since the first successful experiments were carried out leading to the liquefaction of oxygen, which gave birth to the field of cryogenics, and about sixty years since cryogenics went commercial. Originally, cryogenics referred to the technology and art of producing low temperatures, but today the definition adopted by the XII Congress of the International Institute of Refrigeration describes cryogenics as the study of phenomena, techniques, and concepts occurring at or pertaining to temperatures below 120 K. Modern acceptance of the importance and use of cryogenic fluids continues to grow. By far the bulk of cryogenic products are utilized by industry for metal making, agriculture, medicine, food processing, and the efficient storage of fuels. Cryogenics has found many uses in the scientific community as well, enabling the development of ultra-low-noise amplifiers, fast cold electronics, cryopumped ultra-high vacuums, the production of intense magnetic fields, and low-loss power transmission through the use of cryogenically cooled superconductors. High energy physics research has used and continues to use cryogenic hardware to produce liquids used as detector targets and to provide the refrigeration necessary to cool superconducting magnets to design temperature for particle accelerator applications. In fact, today's super accelerators achieve energies that would be impossible to reach with conventional copper magnets, demonstrating that cryogenics has become an indispensable ingredient in today's scientific endeavors

  3. Particle tracker system delivered to CERN

    CERN Multimedia

    Pitcher, Graham

    2006-01-01

    "The CCLRC Rutherford Appleton Laboratory (RAL) has delivered a system to CERN that will help to process the vast amounts of data generated by the silicon tracking detector within the Compact Muon Solenoid experiment." (1/2 page)

  4. PHITS-a particle and heavy ion transport code system

    International Nuclear Information System (INIS)

    Niita, Koji; Sato, Tatsuhiko; Iwase, Hiroshi; Nose, Hiroyuki; Nakashima, Hiroshi; Sihver, Lembit

    2006-01-01

    The paper presents a summary of the recent development of the multi-purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS. In particular, we discuss in detail the development of two new models, JAM and JQMD, for high energy particle interactions, incorporated in PHITS, and show comparisons between model calculations and experiments for the validation of these models. The paper presents three applications of the code: spallation neutron sources, heavy ion therapy, and space radiation. The results and examples shown indicate that PHITS is well suited to the radiation transport analysis of almost all particles, including heavy ions, within a wide energy range

  5. Kinetic theory of a longitudinally expanding system of scalar particles

    International Nuclear Information System (INIS)

    Epelbaum, Thomas; Gelis, François; Jeon, Sangyong; Moore, Guy; Wu, Bin

    2015-01-01

    A simple kinematical argument suggests that the classical approximation may be inadequate to describe the evolution of a system with an anisotropic particle distribution. In order to verify this quantitatively, we study the Boltzmann equation for a longitudinally expanding system of scalar particles interacting with a ϕ⁴ coupling, that mimics the kinematics of a heavy ion collision at very high energy. We consider only elastic 2→2 scatterings, and we allow the formation of a Bose-Einstein condensate in overpopulated situations by solving the coupled equations for the particle distribution and the particle density in the zero mode. For generic CGC-like initial conditions with a large occupation number, the solutions of the full Boltzmann equation cease to display the classical attractor behavior sooner than expected; for moderate coupling, the solutions appear never to follow a classical attractor solution.

  6. Entanglement for multipartite systems of indistinguishable particles

    Energy Technology Data Exchange (ETDEWEB)

    Grabowski, Janusz [Polish Academy of Sciences, Institute of Mathematics, Sniadeckich 8, PO Box 21, 00-956 Warsaw (Poland); Kus, Marek [Center for Theoretical Physics, Polish Academy of Sciences, Aleja Lotnikow 32/46, 02-668 Warszawa (Poland); Marmo, Giuseppe, E-mail: jagrab@impan.pl, E-mail: marek.kus@cft.edu.pl, E-mail: marmo@na.infn.it [Dipartimento di Scienze Fisiche, Università 'Federico II' di Napoli and Istituto Nazionale di Fisica Nucleare, Sezione di Napoli, Complesso Universitario di Monte Sant'Angelo, Via Cintia, I-80126 Napoli (Italy)

    2011-04-29

    We analyze the concept of entanglement for a multipartite system with bosonic and fermionic constituents and its generalization to systems with arbitrary parastatistics. We use the representation theory of symmetry groups to formulate a unified approach to this problem in terms of simple tensors with an appropriate symmetry. For an arbitrary parastatistics, we define the S-rank generalizing the notion of the Schmidt rank. The S-rank, defined for all types of tensors, serves for distinguishing entanglement of pure states. In addition, for Bose and Fermi statistics, we construct an analog of the Jamiolkowski isomorphism.

  7. GPU-based online track reconstruction for PANDA and application to the analysis of D→Kππ

    Energy Technology Data Exchange (ETDEWEB)

    Herten, Andreas

    2015-07-02

    The PANDA experiment is a new hadron physics experiment which is being built for the FAIR facility in Darmstadt, Germany. PANDA will employ a novel scheme of data acquisition: the experiment will reconstruct the full stream of events in real time to make trigger decisions based on the event topology. An important part of this online event reconstruction is online track reconstruction, whose algorithms need to reconstruct particle trajectories in nearly real time. This work uses the high-throughput devices of Graphics Processing Units to benchmark different online track reconstruction algorithms. The reconstruction of D± → K∓π±π± is studied extensively and one online track reconstruction algorithm is applied.
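
    An analysis of D → Kππ ultimately rests on combining reconstructed tracks into an invariant mass. A generic sketch of that step, with illustrative track momenta and standard mass hypotheses, not PANDA data or PANDA software:

    ```python
    import math

    # Sketch: the combinatorial step behind a D -> K pi pi analysis.
    # Tracks are combined under mass hypotheses into an invariant mass.
    # The track momenta below are illustrative, not PANDA data.

    def invariant_mass(tracks):
        """tracks: list of (mass, px, py, pz) in GeV. Returns M in GeV."""
        E = sum(math.sqrt(m ** 2 + px ** 2 + py ** 2 + pz ** 2)
                for m, px, py, pz in tracks)
        px = sum(t[1] for t in tracks)
        py = sum(t[2] for t in tracks)
        pz = sum(t[3] for t in tracks)
        return math.sqrt(max(E ** 2 - px ** 2 - py ** 2 - pz ** 2, 0.0))

    M_K, M_PI = 0.4937, 0.1396  # charged kaon and pion masses (GeV)
    tracks = [(M_K, 0.3, 0.1, 0.8),
              (M_PI, -0.2, 0.4, 0.5),
              (M_PI, 0.1, -0.3, 0.6)]
    print(invariant_mass(tracks))
    ```

    In the real analysis this mass is computed for every K∓π±π± track triple, and true D decays accumulate as a peak at the D mass over the combinatorial background.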

  8. Magnetic particle clutch controls servo system

    Science.gov (United States)

    Fow, P. B.

    1973-01-01

    Magnetic clutches provide alternative means of driving low-power rate or positioning servo systems. They may be used over wide variety of input speed ranges and weigh comparatively little. Power drain is good with overall motor/clutch efficiency greater than 50 percent, and gain of clutch is close to linear, following hysteresis curve of core and rotor material.

  9. Streaming and particle motion in acoustically-actuated leaky systems

    Science.gov (United States)

    Nama, Nitesh; Barnkob, Rune; Huang, Tony Jun; Kähler, Christian; Costanzo, Francesco

    2017-11-01

    The integration of acoustics with microfluidics has shown great promise for applications within biology, chemistry, and medicine. A commonly employed system to achieve this integration consists of a fluid-filled, polymer-walled microchannel that is acoustically actuated via standing surface acoustic waves. However, despite significant experimental advancements, the precise physical understanding of such systems remains a work in progress. In this work, we investigate the nature of the acoustic fields that are set up inside the microchannel as well as the fundamental driving mechanism governing the fluid and particle motion in these systems. We provide an experimental benchmark using state-of-the-art 3D measurements of fluid and particle motion and present a Lagrangian-velocity-based temporal multiscale numerical framework to explain the experimental observations. Following verification and validation, we employ our numerical model to reveal the presence of a pseudo-standing acoustic wave that drives the acoustic streaming and particle motion in these systems.

  10. SU-F-T-193: Evaluation of a GPU-Based Fast Monte Carlo Code for Proton Therapy Biological Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Taleei, R; Qin, N; Jiang, S [UT Southwestern Medical Center, Dallas, TX (United States); Peeler, C [UT MD Anderson Cancer Center, Houston, TX (United States); Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2016-06-15

    Purpose: Biological treatment plan optimization is of great interest for proton therapy. It requires extensive Monte Carlo (MC) simulations to compute physical dose and biological quantities. Recently, a gPMC package was developed for rapid MC dose calculations on a GPU platform. This work investigated its suitability for proton therapy biological optimization in terms of accuracy and efficiency. Methods: We performed simulations of a proton pencil beam with energies of 75, 150 and 225 MeV in a homogeneous water phantom using gPMC and FLUKA. Physical dose and energy spectra for each ion type on the central beam axis were scored. Relative Biological Effectiveness (RBE) was calculated using the repair-misrepair-fixation model. Microdosimetry calculations were performed using Monte Carlo Damage Simulation (MCDS). Results: Ranges computed by the two codes agreed within 1 mm. The physical dose difference was less than 2.5% at the Bragg peak. RBE-weighted dose agreed within 5% at the Bragg peak. Differences in microdosimetric quantities such as dose average lineal energy transfer and specific energy were <10%. The simulation time per source particle with FLUKA was 0.0018 s, while gPMC was ~600 times faster. Conclusion: Physical dose computed by FLUKA and gPMC were in good agreement. The RBE differences along the central axis were small, and the RBE-weighted dose difference was found to be acceptable. The combined accuracy and efficiency makes gPMC suitable for proton therapy biological optimization.

  11. Anyons, quantum particles in planar systems

    International Nuclear Information System (INIS)

    Monerat, Germano Amaral

    2000-03-01

    Our purpose here is to present a general review of the non-relativistic quantum-mechanical description of excitations that obey neither Fermi-Dirac nor Bose-Einstein statistics; they instead fulfill an intermediate statistics, the so-called 'any-statistics'. As we shall see, this is a peculiarity of (1+1) and (1+2) dimensions, due to the fact that, in two space dimensions, the spin is not quantised, since the rotation group is Abelian. The relevance of studying theories in (1+2) dimensions is justified by the evidence that, in condensed matter physics, there are examples of planar systems for which everything goes as if the third spatial dimension were frozen. (author)
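
    The intermediate statistics described in the record can be summarized by the exchange phase of the two-particle wavefunction. The following standard relation (not taken from the record itself) shows how anyons interpolate between the bosonic and fermionic cases:

```latex
\psi(\mathbf{r}_2,\mathbf{r}_1) \;=\; e^{i\theta}\,\psi(\mathbf{r}_1,\mathbf{r}_2),
\qquad
\theta =
\begin{cases}
0 & \text{bosons (Bose--Einstein)}\\[2pt]
\pi & \text{fermions (Fermi--Dirac)}\\[2pt]
\text{arbitrary} & \text{anyons, possible only in } (1+2) \text{ dimensions}
\end{cases}
```

    The arbitrary phase is admissible precisely because the planar rotation group is Abelian, so spin, and hence the exchange phase, is not restricted to integer or half-integer values.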

  12. Transient bimodality in interacting particle systems

    International Nuclear Information System (INIS)

    Calderoni, P.; Pellegrinotti, A.; Presutti, E.; Vares, M.E.

    1989-01-01

    The authors consider a system of spins which have values ±1 and evolve according to a jump Markov process whose generator is the sum of two generators, one describing a spin-flip Glauber process, the other a Kawasaki (stirring) evolution. It was proven elsewhere that if the Kawasaki dynamics is speeded up by a factor ε⁻², then, in the limit ε → 0 (continuum limit), propagation of chaos holds and the local magnetization solves a reaction-diffusion equation. They choose the parameters of the Glauber interaction so that the potential of the reaction term in the reaction-diffusion equation is a double-well potential with quartic maximum at the origin. They assume further that for each ε the system is in a finite interval of Z with ε⁻¹ sites and periodic boundary conditions. They specify the initial measure as the product measure with 0 spin average, thus obtaining, in the continuum limit, a constant magnetic profile equal to 0, which is a stationary unstable solution to the reaction-diffusion equation. They prove that at times of the order ε⁻¹/² propagation of chaos does not hold any more and, in the limit as ε → 0, the state becomes a nontrivial superposition of Bernoulli measures with parameters corresponding to the minima of the reaction potential. The coefficients of such a superposition depend on time (on the scale ε⁻¹/²) and at large times (on this scale) the coefficient of the term corresponding to the initial magnetization vanishes (transient bimodality). This differs from what was observed by De Masi, Presutti, and Vares, who considered a reaction potential with quadratic maximum and no bimodal effect was seen, as predicted by Broggi, Lugiato, and Colombo.

  13. A system for aerodynamically sizing ultrafine environmental radioactive particles

    International Nuclear Information System (INIS)

    Olawoyin, L.

    1995-09-01

    The unattached environmental radioactive particles/clusters, produced mainly by ²²²Rn in indoor air, are usually a few nanometers in size. The inhalation of these radioactive clusters can lead to deposition of radioactivity on the mucosal surface of the tracheobronchial tree. The ultimate size of the cluster, together with the flow characteristics, determines the depositional site in the human lung and thus the extent of damage that can be caused. There is therefore a need to determine the size of the radioactive clusters. However, existing particle-measuring devices have low resolution in the sub-nanometer range. In this research, a system for the alternative detection and measurement of the size of particles/clusters smaller than 2 nm has been developed. The system is a one-stage impactor that has a solid-state spectrometer as its impaction plate. Its major feature is the nozzle-to-plate separation, L. The particle size collected changes with L, and thus particle size spectroscopy is achieved by varying L. The number of collected particles is determined by alpha spectroscopy. The size-discriminating ability of the system was tested with laboratory-generated radon particles, and it was subsequently used to characterize the physical (size) changes associated with the interaction of radon progeny with water vapor and short-chain alcohols in various support gases. The theory of both traditional and high-velocity jet impactors, together with the design and evaluation of the system developed in this study, is discussed in various chapters of this dissertation. The major results obtained in the course of the study are also presented.

  14. A system for aerodynamically sizing ultrafine environmental radioactive particles

    Energy Technology Data Exchange (ETDEWEB)

    Olawoyin, L.

    1995-09-01

    The unattached environmental radioactive particles/clusters, produced mainly by {sup 222}Rn in indoor air, are usually a few nanometers in size. The inhalation of these radioactive clusters can lead to deposition of radioactivity on the mucosal surface of the tracheobronchial tree. The ultimate size of the cluster, together with the flow characteristics, determines the depositional site in the human lung and thus the extent of damage that can be caused. There is therefore a need to determine the size of the radioactive clusters. However, existing particle-measuring devices have low resolution in the sub-nanometer range. In this research, a system for the alternative detection and measurement of the size of particles/clusters smaller than 2 nm has been developed. The system is a one-stage impactor that has a solid-state spectrometer as its impaction plate. Its major feature is the nozzle-to-plate separation, L. The particle size collected changes with L, and thus particle size spectroscopy is achieved by varying L. The number of collected particles is determined by alpha spectroscopy. The size-discriminating ability of the system was tested with laboratory-generated radon particles, and it was subsequently used to characterize the physical (size) changes associated with the interaction of radon progeny with water vapor and short-chain alcohols in various support gases. The theory of both traditional and high-velocity jet impactors, together with the design and evaluation of the system developed in this study, is discussed in various chapters of this dissertation. The major results obtained in the course of the study are also presented.

  15. Two-particle correlations in reactor systems

    International Nuclear Information System (INIS)

    Mika, J.

    1975-01-01

    A study is made of the relationship between the general transport equation and the correlation matrix equation, in reactor systems. How some of the results obtained so far for the generalized transport equation can be simply translated to the correlation matrix equation is indicated. In particular, the semigroup formalism developed for the generalized transport equation is used to prove the existence and uniqueness of solution to the correlation matrix equation. The generalized transport equation is rigorously formulated in a Hilbert space of square integrable functions. The semigroup formalism for that equation is introduced and the solution expressed in terms of the semigroup. The correlation matrix equation is then formulated. It is shown how the semigroup formalism developed for the generalized transport equation can be applied to the correlation matrix equation and used to prove the existence theorem. Some applications of the semigroup formalism are then indicated. Firstly, the simple one point model obtained from the general equations is introduced. Secondly, the well known phenomenon of the linear increase with time of the components of the correlation matrix in a critical reactor is analyzed. Finally, it is shown how the singular perturbation method developed recently for the generalized transport equation can be applied to the correlation matrix equation. (U.K.)

  16. Development of GPU Based Parallel Computing Module for Solving Pressure Equation in the CUPID Component Thermo-Fluid Analysis Code

    International Nuclear Information System (INIS)

    Lee, Jin Pyo; Joo, Han Gyu

    2010-01-01

    In the thermo-fluid analysis code named CUPID, a linear system of pressure equations must be solved in each iteration step. The time spent repeatedly solving this linear system can be quite significant because large sparse matrices of rank greater than 50,000 are involved and the diagonal dominance of the system hardly holds. Parallelization of the linear system solver is therefore essential to reduce the computing time. Meanwhile, Graphics Processing Units (GPUs) have been developed as highly parallel, multi-core processors to meet the global demand for high-quality 3D graphics. Given a suitable interface, GPU-based parallelization becomes applicable to engineering computing. NVIDIA provides a Software Development Kit (SDK) named CUDA (Compute Unified Device Architecture) so that code developers can manage GPUs for parallelization using the C language. In this research, we implement parallel routines for the linear system solver using CUDA and examine the performance of the parallelization. In the next section, we describe the method of CUDA parallelization for the CUPID code, and then the performance of the CUDA parallelization is discussed.
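
    Pressure systems of this kind are typically attacked with a preconditioned Krylov iteration. As a rough, hypothetical illustration of the kernel that such GPU routines parallelize (this is not the CUPID solver; the matrix and preconditioner are invented for the sketch), here is a Jacobi-preconditioned conjugate gradient applied to a small 1D Poisson-like SPD system:

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    """Jacobi-preconditioned conjugate gradient for a symmetric
    positive-definite matrix A. Every operation here (mat-vec, axpy,
    dot product) is exactly the kind of kernel offloaded to a GPU."""
    x = np.zeros_like(b)
    r = b - A @ x
    Minv = 1.0 / np.diag(A)          # Jacobi preconditioner: inverse diagonal
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example: a 1D Poisson (pressure-like) system with 50 unknowns.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = jacobi_pcg(A, b)
```

    In practice the matrix would be stored in a sparse format and the dense `np.eye` construction replaced by the assembled pressure operator; the iteration logic is unchanged.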

  17. Particle Based Modeling of Electrical Field Flow Fractionation Systems

    Directory of Open Access Journals (Sweden)

    Tonguc O. Tasci

    2015-10-01

    Electrical Field Flow Fractionation (ElFFF) is a sub-method in the field flow fractionation (FFF) family that relies on an applied voltage on the channel walls to effect a separation. ElFFF has fallen behind some of the other FFF methods because of the complexity of optimizing its experimental parameters. To enable better optimization, a particle-based model of ElFFF systems has been developed and is presented in this work; it allows the optimization of the main separation parameters, such as electric field magnitude, frequency, duty cycle, offset, flow rate, and channel dimensions. The developed code allows visualization of individual particles inside the separation channel, generation of realistic fractograms, and observation of the effects of the various parameters on the behavior of the particle cloud. ElFFF fractograms have been generated via simulations and compared with experiments for both normal and cyclical ElFFF. The particle visualizations have been used to verify that high duty cycle voltages are essential to achieve long retention times and high-resolution separations. Furthermore, by simulating the particle motions at the channel outlet, it has been demonstrated that the top channel wall should be selected as the accumulation wall for cyclical ElFFF to reduce band broadening and achieve high-efficiency separations. While the generated particle-based model is a powerful tool to estimate the outcomes of ElFFF experiments and visualize particle motions, it can also be used to design systems with new geometries, which may lead to higher-efficiency ElFFF systems. Furthermore, this model can be extended to other FFF techniques by replacing the electrical field component of the model with the fields used in those techniques.
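
    A deliberately crude single-particle sketch of such a model (all parameters invented for illustration, diffusion omitted) shows why a high duty cycle lengthens retention: the square-wave field spends most of each period pushing the particle onto a wall, where the parabolic flow velocity vanishes, so the particle advances slowly down the channel:

```python
def simulate_particle(mobility, duty, period, E0, height, flow_max, t_end, dt=1e-3):
    """Euler integration of one particle in a parabolic channel flow with a
    square-wave transverse electric field. duty = fraction of each period the
    field pushes toward the wall at y = height. Returns (axial x, transverse y)."""
    y = height / 2.0   # start at mid-channel
    x = 0.0
    t = 0.0
    while t < t_end:
        phase = (t % period) / period
        E = E0 if phase < duty else -E0                 # square-wave field
        y += mobility * E * dt                           # electrophoretic drift
        y = min(max(y, 0.0), height)                     # particle stays in channel
        u = 4.0 * flow_max * (y / height) * (1.0 - y / height)  # parabolic profile
        x += u * dt                                      # advect downstream
        t += dt
    return x, y

# High duty cycle pins the particle to the slow wall; 50% duty lets it sample
# the fast mid-channel streamlines.
x_pinned, y_pinned = simulate_particle(1e-8, 0.95, 0.1, 1e5, 1e-4, 1e-3, 1.0)
x_balanced, _     = simulate_particle(1e-8, 0.50, 0.1, 1e5, 1e-4, 1e-3, 1.0)
```

    The real model tracks clouds of particles, includes diffusion, and resolves the outlet geometry, but the retention-time trend is already visible in this toy.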

  18. Chemical equilibrium between particles and complex particles in quantum many-body system at very low temperature

    International Nuclear Information System (INIS)

    Matsumoto, Atsushi

    2004-01-01

    The equilibrium state at very low temperature, and the phase state at 0 K, between particle 1, particle 2, and the composite particle 12 formed when particle 1 binds with particle 2, was investigated for an infinite uniform system. Bosons and fermions are considered as the particles, and three kinds of reactions are examined. In the case of boson + boson ⇌ boson, the system consists entirely of molecules or of atoms when ΔE≠0 and T=0, and the density is not determined below Tc when ΔE=0. In the case of boson + fermion ⇌ fermion, molecules and atoms are able to coexist at T=0. In the case of fermion + fermion ⇌ boson, molecules are formed and condense. The chemical equilibrium between particles and complex particles and the three cases of equilibrium are explained. (S.Y.)

  19. Fast GPU-based computation of the sensitivity matrix for a PET list-mode OSEM algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Nassiri, Moulay Ali; Carrier, Jean-Francois [Montreal Univ., QC (Canada). Dept. de Radio-Oncologie; Hissoiny, Sami [Ecole Polytechnique de Montreal, QC (Canada). Dept. de Genie Informatique et Genie Logiciel; Despres, Philippe [Quebec Univ. (Canada). Dept. de Radio-Oncologie

    2011-07-01

    One of the obstacles in introducing a list-mode PET reconstruction algorithm for routine clinical use is the long computation time required for the sensitivity matrix calculation. This matrix must be computed for each study because it depends on the object attenuation map. During the last decade, studies have shown that 3D list-mode OSEM reconstruction algorithms could be effectively performed and considerably accelerated by GPU devices. However, most of that preliminary work (1) was done for pre-clinical PET systems in which the number of LORs is small compared to modern human PET systems and (2) supposed that the sensitivity matrix is pre-calculated. The time required to compute this matrix can, however, be longer than the reconstruction time itself. The objective of this work is to investigate the performance of sensitivity matrix calculations in terms of computation time with modern GPUs, for clinical fully 3D LM-OSEM for modern PET scanners. For this purpose, sensitivity matrix calculations and full list-mode OSEM reconstruction for human PET systems were implemented on GPUs using the CUDA framework. The system matrices were built on-the-fly by using the multi-ray Siddon algorithm. The time to compute the sensitivity matrix for 288 x 288 x 57 arrays using 3 tangential LORs was 29 seconds. The 3D LM-OSEM algorithm, including the sensitivity matrix calculation, was performed for the same LORs in 71 seconds for 62 million events, 6 frames and 1 iteration. This work lets one envision fast reconstructions for advanced PET applications such as dynamic studies and parametric image reconstruction. (orig.)
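
    The Siddon algorithm mentioned here computes, for each line of response, the length of its intersection with every voxel it crosses, by merging the parametric values at which the ray crosses the grid planes. A simplified 2D sketch of that idea (the actual implementation is 3D, multi-ray, and GPU-parallel; this toy is for orientation only):

```python
import math

def ray_cell_lengths(p0, p1, nx, ny):
    """Intersection length of the segment p0 -> p1 with each cell of an
    nx-by-ny grid of unit cells (Siddon-style parametric traversal in 2D).
    Returns a dict {(ix, iy): length}."""
    (x0, y0), (x1, y1) = p0, p1
    L = math.hypot(x1 - x0, y1 - y0)
    alphas = {0.0, 1.0}
    # Parametric values where the ray crosses the x = i and y = j grid planes.
    if x1 != x0:
        for i in range(nx + 1):
            a = (i - x0) / (x1 - x0)
            if 0.0 < a < 1.0:
                alphas.add(a)
    if y1 != y0:
        for j in range(ny + 1):
            a = (j - y0) / (y1 - y0)
            if 0.0 < a < 1.0:
                alphas.add(a)
    alphas = sorted(alphas)
    lengths = {}
    # Each consecutive pair of alphas bounds one cell crossing; the segment
    # midpoint identifies which cell it is.
    for a, b in zip(alphas, alphas[1:]):
        mid = (a + b) / 2.0
        ix = int(x0 + mid * (x1 - x0))
        iy = int(y0 + mid * (y1 - y0))
        if 0 <= ix < nx and 0 <= iy < ny:
            lengths[(ix, iy)] = lengths.get((ix, iy), 0.0) + (b - a) * L
    return lengths
```

    On the GPU, one thread handles one LOR, accumulating these per-voxel lengths into the sensitivity image with atomic adds.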

  20. Algorithms for GPU-based molecular dynamics simulations of complex fluids: Applications to water, mixtures, and liquid crystals.

    Science.gov (United States)

    Kazachenko, Sergey; Giovinazzo, Mark; Hall, Kyle Wm; Cann, Natalie M

    2015-09-15

    A custom code for molecular dynamics simulations has been designed to run on CUDA-enabled NVIDIA graphics processing units (GPUs). The double-precision code simulates multicomponent fluids, with intramolecular and intermolecular forces, coarse-grained and atomistic models, holonomic constraints, Nosé-Hoover thermostats, and the generation of distribution functions. Algorithms to compute Lennard-Jones and Gay-Berne interactions, and the electrostatic force using Ewald summations, are discussed. A neighbor list is introduced to improve scaling with respect to system size. Three test systems are examined: SPC/E water; an n-hexane/2-propanol mixture; and a liquid crystal mesogen, 2-(4-butyloxyphenyl)-5-octyloxypyrimidine. Code performance is analyzed for each system. With one GPU, a 33-119 fold increase in performance is achieved compared with the serial code while the use of two GPUs leads to a 69-287 fold improvement and three GPUs yield a 101-377 fold speedup. © 2015 Wiley Periodicals, Inc.
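
    The neighbor list that gives the improved scaling is commonly built from a cell decomposition of the simulation box: particles are binned into cells at least as wide as the cutoff, so interaction candidates need only be sought in the 27 neighboring cells. A minimal CPU sketch of that construction (not the paper's CUDA code; cubic periodic box assumed):

```python
import itertools

def neighbor_pairs_cell_list(positions, box, cutoff):
    """All pairs (i, j), i < j, closer than `cutoff` under the minimum-image
    convention in a periodic cubic box, found via a cell list rather than
    the O(N^2) double loop."""
    ncell = max(1, int(box / cutoff))     # cell width >= cutoff
    size = box / ncell
    cells = {}
    for idx, p in enumerate(positions):
        key = tuple(int(c / size) % ncell for c in p)
        cells.setdefault(key, []).append(idx)

    def dist2(a, b):
        return sum(min(abs(ai - bi), box - abs(ai - bi)) ** 2
                   for ai, bi in zip(a, b))

    pairs = set()
    for (cx, cy, cz), members in cells.items():
        for dx, dy, dz in itertools.product((-1, 0, 1), repeat=3):
            neigh = cells.get(((cx + dx) % ncell, (cy + dy) % ncell,
                               (cz + dz) % ncell), [])
            for i in members:
                for j in neigh:
                    if i < j and dist2(positions[i], positions[j]) < cutoff ** 2:
                        pairs.add((i, j))
    return pairs
```

    On a GPU the same decomposition is done with a sort by cell index, which keeps particles in the same cell contiguous in memory and makes the force kernel's reads coalesced.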

  1. Failing in place for low-serviceability storage infrastructure using high-parity GPU-based RAID

    International Nuclear Information System (INIS)

    Curry, Matthew L.; Ward, H. Lee; Skjellum, Anthony

    2010-01-01

    In order to provide large quantities of high-reliability disk-based storage, it has become necessary to aggregate disks into fault-tolerant groups based on the RAID methodology. Most RAID levels do provide some fault tolerance, but there are certain classes of applications that require increased levels of fault tolerance within an array. Some of these applications include embedded systems in harsh environments that have a low level of serviceability, or uninhabited data centers servicing cloud computing. When describing RAID reliability, the Mean Time To Data Loss (MTTDL) calculations will often assume that the time to replace a failed disk is relatively low, or even negligible compared to rebuild time. For platforms that are in remote areas collecting and processing data, it may be impossible to access the system to perform system maintenance for long periods. A disk may fail early in a platform's life, but not be replaceable for much longer than typical for RAID arrays. Service periods may be scheduled at intervals on the order of months, or the platform may not be serviced until the end of a mission in progress. Further, this platform may be subject to extreme conditions that can accelerate wear and tear on a disk, requiring even more protection from failures. We have created a high parity RAID implementation that uses a Graphics Processing Unit (GPU) to compute more than two blocks of parity information per stripe, allowing extra parity to eliminate or reduce the requirement for rebuilding data between service periods. While this type of controller is highly effective for RAID 6 systems, an important benefit is the ability to incorporate more parity into a RAID storage system. Such RAID levels, as yet unnamed, can tolerate the failure of three or more disks (depending on configuration) without data loss. 
While this RAID system certainly has applications in embedded systems running applications in the field, similar benefits can be obtained for servers that are
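
    The extra parity blocks beyond plain XOR are computed with Galois-field arithmetic, which is exactly the bulk arithmetic the GPU accelerates. A toy sketch of RAID-6-style P/Q parity in GF(2^8) (hypothetical illustration, not the paper's implementation; the same scheme extends to three or more parity blocks by adding further generator weights):

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the common RAID-6 polynomial 0x11d."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def pq_parity(blocks):
    """P parity is a plain XOR of the data blocks; Q weights block i by 2^i
    in GF(256), so any two missing blocks can be solved for."""
    n = len(blocks[0])
    P, Q = bytearray(n), bytearray(n)
    for i, blk in enumerate(blocks):
        g = 1
        for _ in range(i):
            g = gf_mul(g, 2)              # generator weight 2^i
        for k in range(n):
            P[k] ^= blk[k]
            Q[k] ^= gf_mul(g, blk[k])
    return bytes(P), bytes(Q)

def recover_one(blocks, lost, P):
    """Rebuild a single missing data block: XOR P with all survivors.
    The entry at index `lost` is ignored, simulating its absence."""
    out = bytearray(P)
    for i, blk in enumerate(blocks):
        if i == lost:
            continue
        for k in range(len(out)):
            out[k] ^= blk[k]
    return bytes(out)
```

    The per-byte GF multiplications are independent across the stripe, which is why offloading them to thousands of GPU threads works so well.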

  2. Symmetries of the 2D magnetic particle imaging system matrix

    International Nuclear Information System (INIS)

    Weber, A; Knopp, T

    2015-01-01

    In magnetic particle imaging (MPI), the relation between the particle distribution and the measurement signal can be described by a linear system of equations. For 1D imaging, it can be shown that the system matrix can be expressed as a product of a convolution matrix and a Chebyshev transformation matrix. For multidimensional imaging, the structure of the MPI system matrix is not yet fully explored as the sampling trajectory complicates the physical model. It has been experimentally found that the MPI system matrix rows have symmetries and look similar to the tensor products of Chebyshev polynomials. In this work we will mathematically prove that the 2D MPI system matrix has symmetries that can be used for matrix compression. (paper)

  3. Blended particle filters for large-dimensional chaotic dynamical systems

    Science.gov (United States)

    Majda, Andrew J.; Qi, Di; Sapsis, Themistoklis P.

    2014-01-01

    A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886
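
    For orientation, the particle component that the blended filters generalize is the textbook bootstrap (SIR) filter, with its propagate/weight/resample cycle. A minimal sketch for an invented 1D random-walk model (nothing here is from the paper; noise levels and particle count are illustrative):

```python
import math, random

def bootstrap_filter(observations, n_particles=500, seed=1):
    """Bootstrap (SIR) particle filter for the toy model
    x_t = x_{t-1} + w,  w ~ N(0, 0.5^2);  y_t = x_t + v,  v ~ N(0, 0.5^2).
    Returns the posterior-mean state estimate at each step."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # 1. Propagate each particle through the dynamics.
        particles = [p + rng.gauss(0.0, 0.5) for p in particles]
        # 2. Weight by the observation likelihood.
        weights = [math.exp(-0.5 * ((y - p) / 0.5) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # 3. Multinomial resampling to combat weight degeneracy.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

    In large-dimensional chaotic systems this plain scheme collapses (all weight concentrates on one particle); the blended approach restricts the particle representation to an adaptively chosen low-dimensional subspace and handles the rest with Gaussian statistics.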

  4. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity.

    Science.gov (United States)

    Song, Chenchen; Martínez, Todd J

    2016-05-07

    We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N(2.6) for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).

  5. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity

    Energy Technology Data Exchange (ETDEWEB)

    Song, Chenchen; Martínez, Todd J. [Department of Chemistry and the PULSE Institute, Stanford University, Stanford, California 94305 (United States); SLAC National Accelerator Laboratory, Menlo Park, California 94025 (United States)

    2016-05-07

    We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N{sup 2.6} for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).

  6. Multi-GPU based acceleration of a list-mode DRAMA toward real-time OpenPET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kinouchi, Shoko [Chiba Univ. (Japan); National Institute of Radiological Sciences, Chiba (Japan); Yamaya, Taiga; Yoshida, Eiji; Tashima, Hideaki [National Institute of Radiological Sciences, Chiba (Japan); Kudo, Hiroyuki [Tsukuba Univ., Ibaraki (Japan); Suga, Mikio [Chiba Univ. (Japan)

    2011-07-01

    OpenPET, which has a physical gap between two detector rings, is our new PET geometry. In order to realize future radiation therapy guided by OpenPET, real-time imaging is required. Therefore we developed a list-mode image reconstruction method using general purpose graphic processing units (GPUs). For GPU implementation, the efficiency of acceleration depends on the implementation method which is required to avoid conditional statements. Therefore, in our previous study, we developed a new system model which was suited for the GPU implementation. In this paper, we implemented our image reconstruction method using 4 GPUs to get further acceleration. We applied the developed reconstruction method to a small OpenPET prototype. We obtained calculation times of total iteration using 4 GPUs that were 3.4 times faster than using a single GPU. Compared to using a single CPU, we achieved the reconstruction time speed-up of 142 times using 4 GPUs. (orig.)

  7. Accelerating solutions of one-dimensional unsteady PDEs with GPU-based swept time-space decomposition

    Science.gov (United States)

    Magee, Daniel J.; Niemeyer, Kyle E.

    2018-03-01

    The expedient design of precision components in aerospace and other high-tech industries requires simulations of physical phenomena often described by partial differential equations (PDEs) without exact solutions. Modern design problems require simulations with a level of resolution difficult to achieve in reasonable amounts of time, even in effectively parallelized solvers. Though the scale of the problem relative to available computing power is the greatest impediment to accelerating these applications, significant performance gains can be achieved through careful attention to the details of memory communication and access. The swept time-space decomposition rule reduces communication between sub-domains by exhausting the domain of influence before communicating boundary values. Here we present a GPU implementation of the swept rule, which modifies the algorithm for improved performance on this processing architecture by prioritizing use of private (shared) memory, avoiding interblock communication, and overwriting unnecessary values. It shows significant improvement in the execution time of finite-difference solvers for one-dimensional unsteady PDEs, producing speedups of 2 - 9 × for a range of problem sizes compared with simple GPU versions, and 7 - 300 × compared with parallel CPU versions. However, for a more sophisticated one-dimensional system of equations discretized with a second-order finite-volume scheme, the swept rule performs 1.2 - 1.9 × worse than a standard implementation for all problem sizes.
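
    The baseline being accelerated is an explicit finite-difference solver for a one-dimensional unsteady PDE; the swept rule reorders these stencil updates within each sub-domain's triangle of influence rather than changing them. A plain (unswept) FTCS sketch for the heat equation, with invented discretization parameters, shows the update the swept rule schedules:

```python
import math

def heat_ftcs(nx=64, r=0.25, steps=200):
    """Explicit (FTCS) finite-difference solution of u_t = u_xx on [0, 1]
    with u(0) = u(1) = 0 and u(x, 0) = sin(pi x).
    Stable for r = dt/dx^2 <= 1/2. Returns (solution, final time)."""
    dx = 1.0 / nx
    dt = r * dx * dx
    u = [math.sin(math.pi * i * dx) for i in range(nx + 1)]
    for _ in range(steps):
        # Three-point stencil: each interior node depends only on its
        # immediate neighbors at the previous step.
        u = ([0.0] +
             [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1]) for i in range(1, nx)] +
             [0.0])
    return u, steps * dt

u, t = heat_ftcs()
# Exact solution: u(x, t) = exp(-pi^2 t) * sin(pi x)
```

    Because the stencil width is one, a block of n interior points can advance roughly n/2 steps on data it already holds before needing neighbor values, which is the "domain of influence" the swept rule exhausts before communicating.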

  8. Pattern formation in annular systems of repulsive particles

    DEFF Research Database (Denmark)

    Marschler, Christian; Starke, Jens; Sørensen, Mads Peter

    2016-01-01

    General particle models with symmetric and asymmetric repulsion are studied and investigated for finite-range and exponential interaction in an annulus. In the symmetric case transitions from one- to multi-lane behavior including multistability are observed for varying particle density and for a varying curvature with fixed density. Hence, the system cannot be approximated by a periodic channel. In the asymmetric case, which is important in pedestrian dynamics, we reveal an inhomogeneous new phase, a traveling wave reminiscent of peristaltic motion.

  9. The Sun as a system of elementary particles

    International Nuclear Information System (INIS)

    Kleczek, J.

    1986-01-01

    The paper, based on known facts of solar physics, is an attempt to interpret the Sun as a self-gravitating system of about 10⁵⁷ nucleons and electrons. These elementary particles are endowed with strong, electromagnetic, weak, and gravitational interactions. The origin of the Sun, its evolution, structure, and physiology are consequences of the four interactions. Each structural property, every evolutionary process, and any activity phenomenon or event on the Sun can be traced back to the four fundamental forces of nature, viz. to the interactions of elementary particles.

  10. Particle size distribution of selected electronic nicotine delivery system products.

    Science.gov (United States)

    Oldham, Michael J; Zhang, Jingjie; Rusyniak, Mark J; Kane, David B; Gardner, William P

    2018-03-01

    Dosimetry models can be used to predict the dose of inhaled material, but they require several parameters including particle size distribution. The reported particle size distributions for aerosols from electronic nicotine delivery system (ENDS) products vary widely and don't always identify a specific product. A low-flow cascade impactor was used to determine the particle size distribution [mass median aerodynamic diameter (MMAD); geometric standard deviation (GSD)] from 20 different cartridge-based ENDS products. To assess losses and the vapor-phase amount, the collection efficiency of the system was measured by comparing the collected mass in the impactor to the difference in ENDS product mass. The levels of nicotine, glycerin, propylene glycol, water, and menthol in the formulations of each product were also measured. Regardless of the ENDS product formulation, the MMAD of all tested products was similar and ranged from 0.9 to 1.2 μm with a GSD ranging from 1.7 to 2.2. There was no consistent pattern of change in the MMAD and GSD as a function of number of puffs (cartridge life). The collection efficiency indicated that 9%-26% of the generated mass was deposited in the collection system or was in the vapor phase. The particle size distribution data are suitable for use in aerosol dosimetry programs. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
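
    MMAD and GSD are conventionally recovered from cascade-impactor stage masses by log-linear interpolation of the cumulative mass fraction below each stage cutoff diameter: the MMAD is the diameter at 50% cumulative mass, and for a lognormal aerosol GSD = d84/d50. A hypothetical sketch of that reduction (not the paper's exact procedure):

```python
import math

def mmad_gsd(stage_cutoffs, stage_masses):
    """Estimate (MMAD, GSD) from impactor data. `stage_cutoffs` are ascending
    aerodynamic cutoff diameters (um); `stage_masses` are the masses collected
    up to each cutoff (mass below the first cutoff lumped into stage 0)."""
    total = sum(stage_masses)
    cum, s = [], 0.0
    for m in stage_masses:
        s += m
        cum.append(s / total)   # cumulative mass fraction below each cutoff

    def diameter_at(frac):
        # Log-linear interpolation of the cumulative curve at `frac`.
        for k in range(1, len(cum)):
            if cum[k] >= frac:
                x0, x1 = math.log(stage_cutoffs[k - 1]), math.log(stage_cutoffs[k])
                y0, y1 = cum[k - 1], cum[k]
                return math.exp(x0 + (frac - y0) * (x1 - x0) / (y1 - y0))
        return stage_cutoffs[-1]

    d50 = diameter_at(0.5)
    d84 = diameter_at(0.841)    # one geometric standard deviation above median
    return d50, d84 / d50       # MMAD, GSD
```

    For a distribution that is close to lognormal, as reported here, these two numbers fully specify the size input a dosimetry model needs.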

  11. Development of particle and heavy ion transport code system

    International Nuclear Information System (INIS)

    Niita, Koji

    2004-01-01

    The Particle and Heavy Ion Transport code System (PHITS) is a three-dimensional, general-purpose Monte Carlo simulation code for describing the transport and reactions of particles and heavy ions in materials. It was developed on the basis of NMTC/JAM for the design and safety assessment of J-PARC. The nature of PHITS, its physical processes and models, and the development history of the code are described. As examples of application, evaluations of neutron optics, cancer treatment with heavy-particle beams, and cosmic radiation are presented. The JAM and JQMD models are used as the physical models. Neutron motion in a six-pole magnetic field and the gravitational field, PHITS simulations of the tracks of a ¹²C beam and of secondary neutrons in a small model of the cancer-treatment device at HIMAC, and the neutron flux in the Space Shuttle are explained. (S.Y.)
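
    At the core of any Monte Carlo transport code of this kind is the sampling of exponential free paths between interactions. A toy, purely absorbing slab (vastly simpler than PHITS, with invented cross-section and thickness) illustrates the kernel:

```python
import math, random

def transmitted_fraction(sigma_t, thickness, n=200_000, seed=7):
    """Fraction of particles crossing a purely absorbing slab, estimated by
    sampling exponential free paths: p(s) = sigma_t * exp(-sigma_t * s).
    The analytic answer is exp(-sigma_t * thickness)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # Inverse-CDF sampling of the free path; 1 - U avoids log(0).
        path = -math.log(1.0 - rng.random()) / sigma_t
        if path > thickness:
            hits += 1
    return hits / n
```

    A real transport code replaces absorption with a full reaction menu (scattering, nuclear reactions, secondary production) chosen from model cross sections at each sampled collision point, but the free-path sampling step is the same.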

  12. Pattern formation in annular systems of repulsive particles

    International Nuclear Information System (INIS)

    Marschler, Christian; Starke, Jens; Sørensen, Mads P.; Gaididei, Yuri B.; Christiansen, Peter L.

    2016-01-01

    General particle models with symmetric and asymmetric repulsion are studied and investigated for finite-range and exponential interaction in an annulus. In the symmetric case transitions from one- to multi-lane behavior including multistability are observed for varying particle density and for a varying curvature with fixed density. Hence, the system cannot be approximated by a periodic channel. In the asymmetric case, which is important in pedestrian dynamics, we reveal an inhomogeneous new phase, a traveling wave reminiscent of peristaltic motion. - Highlights: • An asymmetrically interacting repulsive particle model is introduced. • Multi-stability is found in a pedestrian dynamics model. • Transitions from one- to multi-lane behavior are studied numerically.

  13. Magnetic coupling mechanisms in particle/thin film composite systems

    Directory of Open Access Journals (Sweden)

    Giovanni A. Badini Confalonieri

    2010-12-01

    Magnetic γ-Fe2O3 nanoparticles with a mean diameter of 20 nm and a size distribution of 7% were chemically synthesized and spin-coated on top of a Si substrate. As a result, the particles self-assembled into a monolayer with hexagonal close-packed order. Subsequently, the nanoparticle array was coated with a Co layer of 20 nm thickness. The magnetic properties of this composite nanoparticle/thin film system were investigated by magnetometry and related to high-resolution transmission electron microscopy studies. Three systems were compared: a reference sample with only the particle monolayer; a composite system in which the particle array was ion-milled prior to the deposition of a thin Co film on top; and a similar composite system without ion-milling. The nanoparticle array showed a collective super-spin behavior due to dipolar interparticle coupling. In the composite system, we observed a decoupling into two nanoparticle subsystems. In the ion-milled system, the nanoparticle layer served as a magnetic flux guide, as observed by magnetic force microscopy. Moreover, an exchange bias effect was found, which is likely due to oxygen exchange between the iron oxide and the Co layer, thus forming an antiferromagnetic CoO layer at the γ-Fe2O3/Co interface.

  14. On the description of classical Einstein relativistic two-particle systems

    International Nuclear Information System (INIS)

    Aaberge, T.

    1978-01-01

    The author starts by considering the system of one free particle, and gives a sufficiently general description of this system to include the center of mass of systems of several particles. He then passes to the system of two particles. The coordinates separating the center of mass and the internal system are defined and the dynamics discussed. Finally the author outlines the construction of a more restrictive two-particle theory, and studies some consequences of the definition of a particle in an external field as a two-particle system in the limit where the mass of one of the particles becomes infinite. (Auth.)

  15. Traffic and related self-driven many-particle systems

    Science.gov (United States)

    Helbing, Dirk

    2001-10-01

    Since the subject of traffic dynamics has captured the interest of physicists, many surprising effects have been revealed and explained. Some of the questions now understood are the following: Why are vehicles sometimes stopped by ``phantom traffic jams'' even though drivers all like to drive fast? What are the mechanisms behind stop-and-go traffic? Why are there several different kinds of congestion, and how are they related? Why do most traffic jams occur considerably before the road capacity is reached? Can a temporary reduction in the volume of traffic cause a lasting traffic jam? Under which conditions can speed limits speed up traffic? Why do pedestrians moving in opposite directions normally organize into lanes, while similar systems ``freeze by heating''? All of these questions have been answered by applying and extending methods from statistical physics and nonlinear dynamics to self-driven many-particle systems. This article considers the empirical data and then reviews the main approaches to modeling pedestrian and vehicle traffic. These include microscopic (particle-based), mesoscopic (gas-kinetic), and macroscopic (fluid-dynamic) models. Attention is also paid to the formulation of a micro-macro link, to aspects of universality, and to other unifying concepts, such as a general modeling framework for self-driven many-particle systems, including spin systems. While the primary focus is upon vehicle and pedestrian traffic, applications to biological or socio-economic systems such as bacterial colonies, flocks of birds, panics, and stock market dynamics are touched upon as well.
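
    The microscopic (particle-based) models reviewed here treat each vehicle as a self-driven particle relaxing toward a gap-dependent desired speed. A minimal sketch in the spirit of the optimal-velocity family of car-following models; all parameter values here are illustrative assumptions, not taken from the article:

    ```python
    import math

    def optimal_velocity(gap, v_max=30.0, d_safe=25.0):
        # Desired speed as a smooth, increasing function of the gap to the car ahead.
        return v_max * (math.tanh(gap / d_safe - 1.0) + math.tanh(1.0)) / (1.0 + math.tanh(1.0))

    def step(positions, speeds, road_length=1000.0, tau=0.5, dt=0.1):
        # One Euler step: each car accelerates toward its optimal velocity
        # with relaxation time tau, on a ring road (periodic boundary).
        n = len(positions)
        new_p, new_v = [], []
        for i in range(n):
            gap = (positions[(i + 1) % n] - positions[i]) % road_length
            a = (optimal_velocity(gap) - speeds[i]) / tau
            v = max(0.0, speeds[i] + a * dt)
            new_v.append(v)
            new_p.append((positions[i] + v * dt) % road_length)
        return new_p, new_v

    # 20 cars on a 1000 m ring, evenly spaced and initially at rest.
    pos = [i * 50.0 for i in range(20)]
    vel = [0.0] * 20
    for _ in range(2000):
        pos, vel = step(pos, vel)
    ```

    With these parameters the uniform flow is linearly stable and all cars converge to the same speed; the review discusses how steeper optimal-velocity functions or denser traffic destabilize such flow into the stop-and-go waves mentioned above.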

  16. SU-E-T-673: Recent Developments and Comprehensive Validations of a GPU-Based Proton Monte Carlo Simulation Package, GPMC

    International Nuclear Information System (INIS)

    Qin, N; Tian, Z; Pompos, A; Jiang, S; Jia, X; Giantsoudi, D; Schuemann, J; Paganetti, H

    2015-01-01

    Purpose: A GPU-based Monte Carlo (MC) simulation package gPMC has been previously developed and high computational efficiency was achieved. This abstract reports our recent improvements on this package in terms of accuracy, functionality, and code portability. Methods: In the latest version of gPMC, nuclear interaction cross section database was updated to include data from TOPAS/Geant4. Inelastic interaction model, particularly the proton scattering angle distribution, was updated to improve overall simulation accuracy. Calculation of dose averaged LET (LETd) was implemented. gPMC was ported onto an OpenCL environment to enable portability across different computing devices (GPUs from different vendors and CPUs). We also performed comprehensive tests of the code accuracy. Dose from electro-magnetic (EM) interaction channel, primary and secondary proton doses and fluences were scored and compared with those computed by TOPAS. Results: In a homogeneous water phantom with 100 and 200 MeV beams, mean dose differences in EM channel computed by gPMC and by TOPAS were 0.28% and 0.65% of the corresponding maximum dose, respectively. With the Geant4 nuclear interaction cross section data, mean difference of primary proton dose was 0.84% for the 200 MeV case and 0.78% for the 100 MeV case. After updating inelastic interaction model, maximum difference of secondary proton fluence and dose were 0.08% and 0.5% for the 200 MeV beam, and 0.04% and 0.2% for the 100 MeV beam. In a test case with a 150 MeV proton beam, the mean difference between LETd computed by gPMC and TOPAS was 0.96% within the proton range. With the OpenCL implementation, gPMC is executable on AMD and Nvidia GPUs, as well as on Intel CPU in single or multiple threads. Results on different platforms agreed within statistical uncertainty. Conclusion: Several improvements have been implemented in the latest version of gPMC, which enhanced its accuracy, functionality, and code portability.
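
    The dose comparisons above appear to normalize the mean absolute difference to the maximum of the reference dose distribution. A small sketch of that kind of metric; the dose arrays are made-up numbers for illustration, not gPMC/TOPAS output:

    ```python
    def mean_pct_diff(test_dose, ref_dose):
        # Mean |test - ref|, expressed as a percentage of the reference maximum dose.
        d_max = max(ref_dose)
        diffs = [abs(a - b) for a, b in zip(test_dose, ref_dose)]
        return 100.0 * (sum(diffs) / len(diffs)) / d_max

    # hypothetical depth-dose samples (arbitrary units)
    ref = [10.0, 20.0, 40.0, 80.0, 100.0, 5.0]
    test = [10.1, 19.8, 40.2, 79.5, 100.4, 5.1]
    result = mean_pct_diff(test, ref)  # ≈ 0.25 (% of maximum dose)
    ```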

  17. SU-E-T-673: Recent Developments and Comprehensive Validations of a GPU-Based Proton Monte Carlo Simulation Package, GPMC

    Energy Technology Data Exchange (ETDEWEB)

    Qin, N; Tian, Z; Pompos, A; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Giantsoudi, D; Schuemann, J; Paganetti, H [Massachusetts General Hospital, Boston, MA (United States)

    2015-06-15

    Purpose: A GPU-based Monte Carlo (MC) simulation package gPMC has been previously developed and high computational efficiency was achieved. This abstract reports our recent improvements on this package in terms of accuracy, functionality, and code portability. Methods: In the latest version of gPMC, nuclear interaction cross section database was updated to include data from TOPAS/Geant4. Inelastic interaction model, particularly the proton scattering angle distribution, was updated to improve overall simulation accuracy. Calculation of dose averaged LET (LETd) was implemented. gPMC was ported onto an OpenCL environment to enable portability across different computing devices (GPUs from different vendors and CPUs). We also performed comprehensive tests of the code accuracy. Dose from electro-magnetic (EM) interaction channel, primary and secondary proton doses and fluences were scored and compared with those computed by TOPAS. Results: In a homogeneous water phantom with 100 and 200 MeV beams, mean dose differences in EM channel computed by gPMC and by TOPAS were 0.28% and 0.65% of the corresponding maximum dose, respectively. With the Geant4 nuclear interaction cross section data, mean difference of primary proton dose was 0.84% for the 200 MeV case and 0.78% for the 100 MeV case. After updating inelastic interaction model, maximum difference of secondary proton fluence and dose were 0.08% and 0.5% for the 200 MeV beam, and 0.04% and 0.2% for the 100 MeV beam. In a test case with a 150 MeV proton beam, the mean difference between LETd computed by gPMC and TOPAS was 0.96% within the proton range. With the OpenCL implementation, gPMC is executable on AMD and Nvidia GPUs, as well as on Intel CPU in single or multiple threads. Results on different platforms agreed within statistical uncertainty. Conclusion: Several improvements have been implemented in the latest version of gPMC, which enhanced its accuracy, functionality, and code portability.

  18. Contractive relaxation systems and interacting particles for scalar conservation laws

    International Nuclear Information System (INIS)

    Katsoulakis, M.A.; Tzavaras, A.E.

    1996-01-01

    We consider a class of semilinear hyperbolic systems with relaxation that are contractive in the L¹-norm and admit invariant regions. We show that, as the relaxation parameter ξ goes to zero, their solutions converge to a weak solution of the scalar multidimensional conservation law that satisfies the Kruzhkov conditions. In the case of one space dimension, we propose certain interacting particle systems whose mesoscopic limit is the system with relaxation and whose macroscopic dynamics are described by entropy solutions of a scalar conservation law. (author)

  19. The flow equation approach to many-particle systems

    CERN Document Server

    Kehrein, Stefan; Fujimori, A; Varma, C; Steiner, F

    2006-01-01

    This self-contained monograph addresses the flow equation approach to many-particle systems. The flow equation approach consists of a sequence of infinitesimal unitary transformations and is conceptually similar to renormalization and scaling methods. Flow equations provide a framework for analyzing Hamiltonian systems where these conventional many-body techniques fail. The text first discusses the general ideas and concepts of the flow equation method. In a second part these concepts are illustrated with various applications in condensed matter theory including strong-coupling problems and non-equilibrium systems. The monograph is accessible to readers familiar with graduate-level solid-state theory.

  20. Estimation of the sizes of hot nuclear systems from particle-particle large angle kinematical correlations

    International Nuclear Information System (INIS)

    La Ville, J.L.; Bizard, G.; Durand, D.; Jin, G.M.; Rosato, E.

    1990-06-01

    Light fragment emission, when triggered by large transverse momentum protons, shows specific kinematical correlations due to recoil effects of the excited emitting source. Such effects have been observed in azimuthal angular distributions of He particles produced in collisions induced by 94 MeV/u ¹⁶O ions on Al, Ni and Au targets. A model calculation assuming a two-stage mechanism (formation and sequential decay of a hot source) gives a good description of the whole data set. From this successful comparison, it is possible to estimate the size of the emitting system.

  1. The use of database management systems in particle physics

    CERN Document Server

    Stevens, P H; Read, B J; Rittenberg, Alan

    1979-01-01

    Examines data-handling needs and problems in particle physics and looks at three very different efforts by the Particle Data Group (PDG), the CERN-HERA Group in Geneva, and groups cooperating with ZAED in Germany at resolving these problems. The ZAED effort does not use a database management system (DBMS), the CERN-HERA Group uses an existing, limited-capability DBMS, and PDG uses the Berkeley Database Management System (BDMS), which PDG itself designed and implemented with scientific data-handling needs in mind. The range of problems each group tried to resolve was influenced by whether or not a DBMS was available and by what capabilities it had. Only PDG has been able to systematically address all the problems. The authors discuss the BDMS-centered system PDG is now building in some detail. (12 refs).

  2. Large deviations for noninteracting infinite-particle systems

    International Nuclear Information System (INIS)

    Donsker, M.D.; Varadhan, S.R.S.

    1987-01-01

    A large deviation property is established for noninteracting infinite particle systems. Previous large deviation results obtained by the authors involved a single I-function because the cases treated always involved a unique invariant measure for the process. In the context of this paper there is an infinite family of invariant measures and a corresponding infinite family of I-functions governing the large deviations

  3. A particle system with cooperative branching and coalescence

    Czech Academy of Sciences Publication Activity Database

    Sturm, A.; Swart, Jan M.

    2015-01-01

    Roč. 25, č. 3 (2015), s. 1616-1649 ISSN 1050-5164 R&D Projects: GA ČR GAP201/10/0752 Institutional support: RVO:67985556 Keywords : interacting particle system * cooperative branching * coalescence * phase transition * upper invariant law * survival * extinction Subject RIV: BA - General Mathematics Impact factor: 1.755, year: 2015 http://library.utia.cas.cz/separaty/2015/SI/swart-0442871.pdf

  4. Classical many-particle systems with unique disordered ground states

    Science.gov (United States)

    Zhang, G.; Stillinger, F. H.; Torquato, S.

    2017-10-01

    Classical ground states (global energy-minimizing configurations) of many-particle systems are typically unique crystalline structures, implying zero enumeration entropy of distinct patterns (aside from trivial symmetry operations). By contrast, the few previously known disordered classical ground states of many-particle systems are all high-entropy (highly degenerate) states. Here we show computationally that our recently proposed "perfect-glass" many-particle model [Sci. Rep. 6, 36963 (2016), 10.1038/srep36963] possesses disordered classical ground states with zero entropy: a highly counterintuitive situation. For all of the system sizes, parameters, and space dimensions that we have numerically investigated, the disordered ground states are unique such that they can always be superposed onto each other or their mirror image. At low energies, the density of states obtained from simulations matches those calculated from the harmonic approximation near a single ground state, further confirming ground-state uniqueness. Our discovery provides singular examples in which entropy and disorder are at odds with one another. The zero-entropy ground states provide a unique perspective on the celebrated Kauzmann-entropy crisis in which the extrapolated entropy of a supercooled liquid drops below that of the crystal. We expect our disordered unique patterns to be of value in fields beyond glass physics, including applications in cryptography as pseudorandom functions with tunable computational complexity.

  5. Effect of particle-size dynamics on properties of dense spongy-particle systems: Approach towards equilibrium

    Science.gov (United States)

    Zakhari, Monica E. A.; Anderson, Patrick D.; Hütter, Markus

    2017-07-01

    Open-porous deformable particles, often envisaged as sponges, are ubiquitous in biological and industrial systems (e.g., casein micelles in dairy products and microgels in cosmetics). The rich behavior of these suspensions is owing to the elasticity of the supporting network of the particle and the viscosity of the permeating solvent. Therefore, the rate-dependent size change of these particles depends on their structure, i.e., the permeability. This work aims at investigating the effect of the particle-size dynamics and the underlying particle structure, i.e., the particle permeability, on the transient and long-time behavior of suspensions of spongy particles in the absence of applied deformation, using the dynamic two-scale model developed by Hütter et al. [Farad. Discuss. 158, 407 (2012), 10.1039/c2fd20025b]. In the high-density limit, the transient behavior is found to be accelerated by the particle-size dynamics, even at average size changes as small as 1%. The accelerated dynamics is evidenced by (i) the higher short-time diffusion coefficient as compared to elastic-particle systems and (ii) the accelerated formation of the stable fcc crystal structure. Furthermore, after long times, the particle-size dynamics of spongy particles is shown to result in lower stationary values of the energy and normal stresses as compared to elastic-particle systems. This dependence of the long-time behavior of these systems on the permeability, which is essentially a transport coefficient and hence must not affect equilibrium properties, confirms that full equilibration has not been reached.

  6. High Resolution Spectrometer (HRS) particle-identification system

    International Nuclear Information System (INIS)

    Pratt, J.C.; Spencer, J.E.; Whitten, C.A.

    1977-08-01

    The functions of the particle-identification system (PIDS) designed for the High Resolution Spectrometer facility (HRS) at LAMPF are described, together with the mechanical layout, counter hardware, and associated electronics. The system was designed for easy use and to be applicable to currently proposed experiments at HRS. The several strobe signals that can be generated correspond to different event types or characteristics, and logic configuration and timing can be remotely controlled by computer. Concepts of discrete pattern recognition and multidimensional, analog pulse discrimination are used to distinguish between different event types

  7. Safety aspects of Particle Bed Reactor plutonium burner system

    International Nuclear Information System (INIS)

    Powell, J.R.; Ludewig, H.; Todosow, M.

    1993-01-01

    An assessment is made of the safety aspects peculiar to using the Particle Bed Reactor (PBR) as the burner in a plutonium disposal system. It is found that a combination of the graphitic fuel, high power density possible with the PBR and engineered design features results in an attractive concept. The high power density potentially makes it possible to complete the plutonium burning without requiring reprocessing and remanufacturing fuel. This possibility removes two hazardous steps from a plutonium burning complex. Finally, two backup cooling systems depending on thermo-electric converters and heat pipes act as ultimate heat removal sinks in the event of accident scenarios which result in loss of fuel cooling

  8. Diffusion and particle mobility in 1D system

    International Nuclear Information System (INIS)

    Borman, V.D.; Johansson, B.; Skorodumova, N.V.; Tronin, I.V.; Tronin, V.N.; Troyan, V.I.

    2006-01-01

    The transport properties of one-dimensional (1D) systems have been studied theoretically. Contradictory experimental results on molecular transport in quasi-1D systems such as zeolite structures, where both an acceleration of diffusive transport and the existence of a diffusion mode with lower particle mobility (single-file diffusion, ⟨x²⟩ ∼ t^(1/2)) have been reported, are consolidated in a consistent model. A transition from the single-file diffusion mode to Einstein-like diffusion (⟨x²⟩ ∼ t), with a diffusion coefficient that increases with density, is predicted to occur at large observation times.
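
    The two transport modes are distinguished by the exponent of the mean-square displacement: ⟨x²⟩ ∼ t for Einstein-like diffusion versus ⟨x²⟩ ∼ t^(1/2) for single-file motion. A sketch that recovers both exponents from a log-log slope; the constants D and F are illustrative, not values from the paper:

    ```python
    import math

    def slope_loglog(ts, ys):
        # Least-squares slope of log(y) vs log(t): the diffusion exponent.
        n = len(ts)
        lx = [math.log(t) for t in ts]
        ly = [math.log(y) for y in ys]
        mx, my = sum(lx) / n, sum(ly) / n
        return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
                / sum((a - mx) ** 2 for a in lx))

    D, F = 0.5, 0.8  # illustrative diffusion coefficient and single-file mobility
    ts = [10 ** k for k in range(1, 7)]
    einstein = [2 * D * t for t in ts]                # <x^2> = 2 D t
    single_file = [2 * F * math.sqrt(t) for t in ts]  # <x^2> = 2 F t^(1/2)
    # slope_loglog gives ~1.0 for the Einstein mode and ~0.5 for single-file
    ```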

  9. An Expert System For Tuning Particle-Beam Accelerators

    Science.gov (United States)

    Lager, Darrel L.; Brand, Hal R.; Maurer, William J.; Searfus, Robert M.; Hernandez, Jose E.

    1989-03-01

    We have developed a proof-of-concept prototype of an expert system for tuning particle beam accelerators. It is designed to function as an intelligent assistant for an operator. In its present form it implements the strategies and reasoning followed by the operator for steering through the beam transport section of the Advanced Test Accelerator at Lawrence Livermore Laboratory's Site 300. The system is implemented in the language LISP using the Artificial Intelligence concepts of frames, daemons, and a representation we developed called a Monitored Decision Script.

  10. Directing orbits of chaotic systems by particle swarm optimization

    International Nuclear Information System (INIS)

    Liu Bo; Wang Ling; Jin Yihui; Tang Fang; Huang Dexian

    2006-01-01

    This paper applies a novel evolutionary computation algorithm named particle swarm optimization (PSO) to direct the orbits of discrete chaotic dynamical systems towards a desired target region within a short time by adding only small bounded perturbations, a task which can be formulated as a high-dimensional, multi-modal numerical optimization problem. Moreover, the synchronization of chaotic systems is also studied, which can be treated as an online problem of directing orbits. Numerical simulations based on the Hénon map demonstrate the effectiveness and efficiency of PSO, and the effects of some parameters are also investigated.
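
    A minimal sketch of this orbit-directing task: each PSO particle encodes a sequence of bounded perturbations to the x-coordinate of the Hénon map, and the objective is the squared distance of the final state from a target near the map's fixed point. The start state, target, bounds, and PSO constants are illustrative assumptions, not values from the paper:

    ```python
    import random

    def henon(x, y, a=1.4, b=0.3):
        return 1 - a * x * x + y, b * x

    def cost(u, x0=0.5, y0=0.2, target=(0.6314, 0.1894)):
        # Apply bounded control perturbations u[t] to x at each step,
        # then measure the squared distance to the target.
        x, y = x0, y0
        for du in u:
            x, y = henon(x + du, y)
        return (x - target[0]) ** 2 + (y - target[1]) ** 2

    def pso(dim=8, n_particles=30, iters=200, bound=0.01, seed=7):
        rng = random.Random(seed)
        X = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_particles)]
        X[0] = [0.0] * dim                  # include the unperturbed orbit
        V = [[0.0] * dim for _ in range(n_particles)]
        P = [x[:] for x in X]               # personal bests
        pcost = [cost(x) for x in X]
        g = min(range(n_particles), key=lambda i: pcost[i])
        G, gcost = P[g][:], pcost[g]        # global best
        w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration constants
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    V[i][d] = (w * V[i][d]
                               + c1 * rng.random() * (P[i][d] - X[i][d])
                               + c2 * rng.random() * (G[d] - X[i][d]))
                    X[i][d] = max(-bound, min(bound, X[i][d] + V[i][d]))
                c = cost(X[i])
                if c < pcost[i]:
                    pcost[i], P[i] = c, X[i][:]
                    if c < gcost:
                        gcost, G = c, X[i][:]
        return G, gcost
    ```

    Because the unperturbed orbit is seeded into the initial swarm, the returned cost can only improve on doing nothing, while every perturbation stays within the ±0.01 bound.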

  11. A data acquisition system for elementary particle physics

    International Nuclear Information System (INIS)

    Grittenden, J.A.; Benenson, G.; Cunitz, H.; Hsuing, Y.B.; Kaplan, D.M.; Sippach, W.; Stern, B.

    1984-01-01

    The data acquisition system for experiment 605 at the Fermi National Accelerator Laboratory employs a set of data transfer protocols developed at Columbia University and implemented in the Nevis Laboratories Data Transport System. The authors describe the logical design of the Transport System, its physical realization, and its particular application during the Spring 1982 data run of experiment 605. During that run it served as the interface between the data latches and a megabyte of fast memory, operating at a data transfer rate of 200 nsec per 16-bit word. Up to two thousand events were read out during the one-second beam spill, each event consisting of about 250 words. Details of proposed improvements to the data acquisition system are included, and a brief comment is appended on the need for inexpensive, versatile readout systems in experimental elementary particle physics.

  12. Theoretical Studies of Strongly Interacting Fine Particle Systems

    Science.gov (United States)

    Fearon, Michael

    Available from UMI in association with The British Library. A theoretical analysis of the time-dependent behaviour of a system of fine magnetic particles as a function of applied field and temperature was carried out. The model used was based on a theory assuming Néel relaxation with a distribution of particle sizes. This theory predicted a linear variation of S_{max} with temperature and a finite intercept, which is not reflected by experimental observations. The remanence curves of strongly interacting fine-particle systems were also investigated theoretically. It was shown that the Henkel plot of the dc demagnetisation remanence vs the isothermal remanence is a useful representation of interactions. The form of the plot was found to be a reflection of the magnetic and physical microstructure of the material, which is consistent with experimental data. The relationship between the Henkel plot and the noise of a particulate recording medium, another property dependent on the microstructure, is also considered. The Interaction Field Factor (IFF), a single parameter characterising the non-linearity of the Henkel plot, is investigated. These results are consistent with a previous experimental study. Finally, the results of the noise power spectral density for erased and saturated recording media are presented, so that characterisation of interparticle interactions may be carried out with greater accuracy.
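
    For non-interacting uniaxial single-domain particles, the Wohlfarth relation links the two remanences plotted in a Henkel plot, I_d(H) = 1 − 2 I_r(H) (normalized dc-demagnetization and isothermal remanences), and the deviation ΔM = I_d − (1 − 2 I_r) is a common single-curve measure of interactions. A sketch with made-up remanence data, not values from the thesis:

    ```python
    def delta_m(ir, id_):
        # Deviation from the Wohlfarth relation I_d(H) = 1 - 2 I_r(H).
        # deltaM > 0: interactions stabilizing the magnetized state;
        # deltaM < 0: demagnetizing-like (e.g. dipolar) interactions.
        return [d - (1.0 - 2.0 * r) for r, d in zip(ir, id_)]

    # hypothetical normalized remanences vs increasing field
    ir = [0.00, 0.10, 0.30, 0.60, 0.85, 1.00]
    id_ = [1.00, 0.75, 0.30, -0.30, -0.75, -1.00]
    dm = delta_m(ir, id_)   # all <= 0: demagnetizing-like interactions
    ```

    A curved Henkel plot (nonzero ΔM) is exactly the non-linearity that the Interaction Field Factor condenses into a single parameter.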

  13. Los Alamos energetic particle sensor systems at geostationary orbit

    International Nuclear Information System (INIS)

    Baker, D.N.; Aiello, W.; Asbridge, J.R.; Belian, R.D.; Higbie, P.R.; Klebesadel, R.W.; Laros, J.G.; Tech, E.R.

    1985-01-01

    The Los Alamos National Laboratory has provided energetic particle sensors for a variety of spacecraft at the geostationary orbit (36,000 km altitude). The sensor system called the Charged Particle Analyzer (CPA) consists of four separate subsystems. The LoE and HiE subsystems measure electrons in the energy ranges 30 to 300 keV and 200 to 2000 keV, respectively. The LoP and HiP subsystems measure ions in the ranges 100 to 600 keV and 0.40 to 150 MeV, respectively. A separate sensor system called the spectrometer for energetic electrons (SEE) measures very high-energy electrons (2 to 15 MeV) using advanced scintillator design. In this paper we describe the relationship of operational anomalies and spacecraft upsets to the directly measured energetic particle environments at 6.6 R_E. We also compare and contrast the CPA and SEE instrument design characteristics with the next generation of Los Alamos instruments to be flown at geostationary altitudes.

  14. A system for designing and simulating particle physics experiments

    International Nuclear Information System (INIS)

    Zelazny, R.; Strzalkowski, P.

    1987-01-01

    In view of the rapid development of experimental facilities and their costs, the systematic design and preparation of particle physics experiments have become crucial. A software system is proposed as an aid for the experimental designer, mainly for experimental geometry analysis and experimental simulation. The following model is adopted: the description of an experiment is formulated in a language (here called XL) and put by its processor in a data base. The language is based on the entity-relationship-attribute approach. The information contained in the data base can be reported and analysed by an analyser (called XA) and modifications can be made at any time. In particular, Monte Carlo methods can be used in experiment simulation, both for the physical phenomena in the experimental set-up and for the detection analysis. The general idea of the system is based on the design concept of ISDOS project information systems. The characteristics of the simulation module are similar to those of the CERN Geant system, but some extensions are proposed. The system could be treated as a component of a larger, integrated software environment for the design of particle physics experiments, their monitoring and data processing. (orig.)

  15. Particle Swarms in Fractures: Open Versus Partially Closed Systems

    Science.gov (United States)

    Boomsma, E.; Pyrak-Nolte, L. J.

    2014-12-01

    In the field, fractures may be isolated or connected to fluid reservoirs anywhere along the perimeter of a fracture. These boundaries affect fluid circulation, flow paths and communication with external reservoirs. The transport of drop-like collections of colloidal-sized particles (particle swarms) in open and partially closed systems was studied. A uniform-aperture synthetic fracture was constructed using two blocks (100 x 100 x 50 mm) of transparent acrylic placed parallel to each other. The fracture was fully submerged in a tank filled with 100 cSt silicone oil. Fracture apertures were varied from 5 to 80 mm. Partially closed systems were created by sealing the sides of the fracture with plastic film. The four boundary conditions studied were: (Case 1) open, (Case 2) closed on the sides, (Case 3) closed on the bottom, and (Case 4) closed on both the sides and bottom of the fracture. A 15 μL dilute suspension of soda-lime glass particles in oil (2% by mass) was released into the fracture. Particle swarms were illuminated using a green (525 nm) LED array and imaged with a CCD camera. The presence of the additional boundaries modified the speed of the particle swarms (see figure). In Case 1, enhanced swarm transport was observed for a range of apertures, with swarms traveling faster than at either very small or very large apertures. In Case 2, swarm velocities were enhanced over a larger range of fracture apertures than in any of the other cases. Case 3 shifted the enhanced transport regime to lower apertures and also reduced swarm speed when compared to Case 2. Finally, Case 4 eliminated the enhanced transport regime entirely. Communication between the fluid in the fracture and an external fluid reservoir resulted in enhanced swarm transport in Cases 1-3. The non-rigid nature of a swarm enables drag from the fracture walls to modify the swarm geometry. The particles composing a swarm reorganize in response to the fracture, elongating the swarm and maintaining its density. Unlike a

  16. Two-particle microrheology of quasi-2D viscous systems.

    Science.gov (United States)

    Prasad, V; Koehler, S A; Weeks, Eric R

    2006-10-27

    We study the spatially correlated motions of colloidal particles in a quasi-2D system (human serum albumin protein molecules at an air-water interface) for different surface viscosities η_s. We observe a transition in the behavior of the correlated motion, from 2D interface-dominated at high η_s to bulk-fluid-dependent at low η_s. The correlated motions can be scaled onto a master curve which captures the features of this transition. This master curve also characterizes the spatial dependence of the flow field of a viscous interface in response to a force. The scale factors used for the master curve allow for the calculation of the surface viscosity η_s that can be compared to one-particle measurements.

  17. Deviation from the superparamagnetic behaviour of fine-particle systems

    CERN Document Server

    Malaescu, I

    2000-01-01

    Studies concerning the superparamagnetic behaviour of fine magnetic particle systems were performed using static and radiofrequency measurements, in the range 1-60 MHz. The samples were: a ferrofluid with magnetite particles dispersed in kerosene (sample A), magnetite powder (sample B) and the same magnetite powder dispersed in a polymer (sample C). Radiofrequency measurements indicated a maximum in the imaginary part of the complex magnetic susceptibility, for each of the samples, at frequencies of the order of tens of MHz, the origin of which was assigned to Néel-type relaxation processes. The static measurements showed a Langevin-type dependence of the magnetisation M and of the susceptibility χ on the magnetic field for sample A. For samples B and C, deviations from this type of dependence were found. These deviations were analysed qualitatively and explained in terms of interparticle interactions, the influence of the dispersion medium and surface effects.
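
    The "Langevin-type dependence" found for the ferrofluid sample is M(H) = M_s L(ξ) with L(ξ) = coth ξ − 1/ξ and ξ = μH/(k_B T). A sketch of this superparamagnetic magnetization curve; the saturation magnetization M_s and particle moment μ are assumed order-of-magnitude values for fine magnetite particles, not values from the paper:

    ```python
    import math

    def langevin(xi):
        # L(xi) = coth(xi) - 1/xi; use the series xi/3 - xi^3/45 near zero
        # to avoid catastrophic cancellation between the two terms.
        if abs(xi) < 1e-4:
            return xi / 3.0 - xi ** 3 / 45.0
        return 1.0 / math.tanh(xi) - 1.0 / xi

    def magnetization(H, Ms=4.8e5, mu=2.0e-19, T=300.0):
        # M(H) = Ms * L(mu*H / (kB*T)); Ms [A/m] and the particle moment
        # mu [A*m^2] are assumed, illustrative magnetite-like values.
        kB = 1.380649e-23
        return Ms * langevin(mu * H / (kB * T))
    ```

    The curve is linear in H at low field (initial susceptibility ∝ μM_s/3k_BT) and saturates at M_s, which is the dependence the static measurements on sample A follow.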

  18. Classical mechanics systems of particles and Hamiltonian dynamics

    CERN Document Server

    Greiner, Walter

    2010-01-01

    This textbook Classical Mechanics provides a complete survey of all aspects of classical mechanics in theoretical physics. An enormous number of worked examples and problems show students how to apply the abstract principles to realistic problems. The textbook covers Newtonian mechanics in rotating coordinate systems, mechanics of systems of point particles, vibrating systems and mechanics of rigid bodies. It thoroughly introduces and explains the Lagrange and Hamilton equations and the Hamilton-Jacobi theory. A large section on nonlinear dynamics and chaotic behavior of systems brings Classical Mechanics up to the newest developments in physics. The new edition is completely revised and updated. New exercises and new sections on canonical transformations and Hamiltonian theory have been added.

  19. A radioactive particle detection system for the Maralinga rehabilitation project

    International Nuclear Information System (INIS)

    Martin, L.J.; Baylis, S.H.

    1998-01-01

    Following the cessation of British nuclear testing at Maralinga, several sites were left contaminated with plutonium in various forms over areas of many square kilometres. The contamination included ²³⁹Pu and ²⁴¹Am together with other isotopes of plutonium, and was in the form of fine dust, discrete particles and contaminated fragments of metals, plastics and ceramics. Following a government decision to rehabilitate the area to a standard suitable for re-occupation by the traditional owners, the Maralinga Tjarutja, an expert committee, MARTAC, was convened to advise the Department of Primary Industries and Energy about suitable methods and standards for the cleanup. Criteria set by MARTAC required the removal of all discrete particles of contamination exceeding 100 kBq of ²⁴¹Am and imposed limits on the numbers of particles exceeding 20 kBq. A detection system was required which could detect any radioactive particles exceeding the criteria and, following rehabilitation, verify that none remained. The most useful indicator of contamination was the 60 keV gamma-ray from ²⁴¹Am. The system was fitted to a four-wheel-drive utility vehicle, and was based on four thin-crystal, sodium iodide detectors of 12.5 cm diameter held 30 cm above ground level. Electronic components from off-the-shelf hand-held ratemeters were used to provide the high voltage supply, amplifier and single channel analyser in a ruggedised form of proven reliability. The combined use of thin-crystal detectors and a single channel analyser allowed a significant reduction in background count rates while maintaining the efficiency for detection of the 60 keV gamma-ray. To allow efficient and reliable coverage, the vehicle was fitted with a high-resolution speedometer to allow the proper speed to be maintained, and a mapping display, based on the GPS system, which showed the previous path of the vehicle and the boundary of the area to be scanned. Whenever a radioactive particle was detected, its coordinates were

  20. Particle beam digital phase control system for COSY

    International Nuclear Information System (INIS)

    Schnase, A.

    1994-02-01

    Particle accelerators require that the orbit of the charged particles in the vacuum chamber is controlled within narrow limits. This is done by magnetic deflection systems and precisely adjusted rf acceleration. Until now, the necessary control functions were realised with analogue components. This work describes a digital phase control system that works in real time and is used with the proton accelerator COSY. The physical design of the accelerator sets the accuracy specifications for the revolution frequency (<1 Hz over the whole range from 400 kHz to 1.6 MHz), the phase difference (<0.01°), the signal-to-noise ratio (<-60 dBc) and the update rate (<1 μs) of the parameters. In a typical operation the beam is first bunched and synchronised to the reference oscillator. After that, the beam influences the rf-system with the help of charge detectors, and the rf-system is then synchronised with the bunched beam. This control loop is modelled and simulated with PSPICE. (orig.)

  1. Systems and methods for separating particles utilizing engineered acoustic contrast capture particles

    Science.gov (United States)

    Kaduchak, Gregory; Ward, Michael D.

    2018-03-06

    An apparatus for separating particles from a medium includes a capillary defining a flow path therein that is in fluid communication with a medium source. The medium source includes engineered acoustic contrast capture particles having a predetermined acoustic contrast. The apparatus includes a vibration generator that is operable to produce at least one acoustic field within the flow path. The acoustic field produces a force potential minimum for positive acoustic contrast particles and a force potential minimum for negative acoustic contrast particles in the flow path, and drives the engineered acoustic contrast capture particles to either the minimum for positive acoustic contrast particles or the minimum for negative acoustic contrast particles.

  2. DIRC, a new type of particle identification system For BABAR

    International Nuclear Information System (INIS)

    Schwiening, J.

    1997-12-01

    The DIRC, a new type of Cherenkov imaging device, has been selected as the primary particle identification system for the BABAR detector at the asymmetric B-factory, PEP-II. It is based on total internal reflection and uses long, rectangular bars made from synthetic fused silica as Cherenkov radiators and light guides. In this paper, the principles of the DIRC ring-imaging Cherenkov technique are explained and results from the prototype program are presented. The studies of the optical properties and radiation hardness of the quartz radiators are described, followed by a discussion of the detector design.

  3. Nonequilibrium Microscopic Distribution of Thermal Current in Particle Systems

    KAUST Repository

    Yukawa, Satoshi

    2009-02-15

    A nonequilibrium distribution function of microscopic thermal current is studied by direct numerical simulation in a thermally conducting steady state of particle systems. Two characteristic temperatures of the thermal current are investigated on the basis of the distribution. It is confirmed that the temperature depends on the current direction: the temperature parallel to the heat flux is higher than the antiparallel one. The difference between the parallel and antiparallel temperatures is proportional to the macroscopic temperature gradient. ©2009 The Physical Society of Japan.
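    The direction-dependent temperatures described above can be illustrated with a toy calculation that splits particle velocities by their sign along the heat-flux direction (an illustrative sketch only; the function name and the mean-squared-velocity definition of "temperature" are invented for the example, not taken from the paper):

```python
import numpy as np

def directional_temperatures(vx, mass=1.0, kb=1.0):
    """Toy direction-resolved 'temperatures' of the microscopic current.

    Particles are split by the sign of their velocity component along the
    heat-flux direction (taken as +x); each group's 'temperature' is defined
    here as mass * <v**2> / kb, purely for illustration.
    """
    vx = np.asarray(vx, dtype=float)
    parallel = vx[vx > 0]         # moving with the heat flux
    antiparallel = vx[vx < 0]     # moving against it
    t_par = mass * np.mean(parallel ** 2) / kb
    t_anti = mass * np.mean(antiparallel ** 2) / kb
    return t_par, t_anti

# A velocity sample skewed along the flux direction: the hotter tail makes
# the parallel temperature exceed the antiparallel one.
t_par, t_anti = directional_temperatures([-2.0, -1.0, -1.0, 1.0, 1.0, 3.0])
```

In a conducting steady state the velocity distribution is skewed along the flux, which is exactly what makes the two group averages differ.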

  4. Nonequilibrium Microscopic Distribution of Thermal Current in Particle Systems

    KAUST Repository

    Yukawa, Satoshi; Shimada, Takashi; Ogushi, Fumiko; Ito, Nobuyasu

    2009-01-01

    A nonequilibrium distribution function of microscopic thermal current is studied by direct numerical simulation in a thermally conducting steady state of particle systems. Two characteristic temperatures of the thermal current are investigated on the basis of the distribution. It is confirmed that the temperature depends on the current direction: the temperature parallel to the heat flux is higher than the antiparallel one. The difference between the parallel and antiparallel temperatures is proportional to the macroscopic temperature gradient. ©2009 The Physical Society of Japan.

  5. Integrable systems for particles with internal degrees of freedom

    CERN Document Server

    Minahan, J A; Minahan, Joseph A.; Polychronakos, Alexios P.

    1993-01-01

    We show that a class of models for particles with internal degrees of freedom are integrable. These systems are basically generalizations of the models of Calogero and Sutherland. The proofs of integrability are based on a recently developed exchange operator formalism. We calculate the wave-functions for the Calogero-like models and find the ground-state wave-function for a Calogero-like model in a position dependent magnetic field. This last model might have some relevance for matrix models of open strings.

  6. Statistical quasi-particle theory for open quantum systems

    Science.gov (United States)

    Zhang, Hou-Dao; Xu, Rui-Xue; Zheng, Xiao; Yan, YiJing

    2018-04-01

    This paper presents a comprehensive account of the recently developed dissipaton-equation-of-motion (DEOM) theory. This is a statistical quasi-particle theory for quantum dissipative dynamics. It accurately describes the influence of bulk environments with a small number of quasi-particles, the dissipatons. The novel dissipaton algebra then follows, which readily bridges the Schrödinger equation to the DEOM theory. As a fundamental theory of quantum mechanics in open systems, DEOM characterizes both the stationary and dynamic properties of system-and-bath interferences. It treats not only the quantum dissipative systems of primary interest, but also the hybrid environment dynamics that could be experimentally measurable. Examples are the linear or nonlinear Fano interferences and the Herzberg-Teller vibronic couplings in optical spectroscopies. This review covers the DEOM construction, the underlying dissipaton algebra and theorems, the physical meanings of dynamical variables, the possible identifications of dissipatons, and some recent advancements in efficient DEOM evaluations on various problems. The relations of the present theory to other nonperturbative methods are also critically presented.

  7. Computational transport phenomena of fluid-particle systems

    CERN Document Server

    Arastoopour, Hamid; Abbasi, Emad

    2017-01-01

    This book concerns the most up-to-date advances in computational transport phenomena (CTP), an emerging tool for the design of gas-solid processes such as fluidized bed systems. The authors examine recent work in kinetic theory and CTP and illustrate gas-solid processes’ many applications in the energy, chemical, pharmaceutical, and food industries. They also discuss the kinetic theory approach to developing constitutive equations for gas-solid flow systems and how it has advanced over the last decade, as well as the possibility of obtaining innovative designs for multiphase reactors, such as those needed to capture CO2 from flue gases. Suitable as a concise reference and a textbook supplement for graduate courses, Computational Transport Phenomena of Gas-Solid Systems is ideal for practitioners in industries involved with the design and operation of processes based on fluid/particle mixtures, such as energy, chemicals, pharmaceuticals, and food processing. Explains how to couple the population balance e...

  8. Irreversible data compression concepts with polynomial fitting in time-order of particle trajectory for visualization of huge particle system

    International Nuclear Information System (INIS)

    Ohtani, H; Ito, A M; Hagita, K; Kato, T; Saitoh, T; Takeda, T

    2013-01-01

    We propose in this paper a data compression scheme for large-scale particle simulations, which has favorable prospects for scientific visualization of particle systems. Our data compression concepts deal directly with the particle-orbit data obtained by simulation and have the following features: (i) Through control over the compression scheme, the difference between the simulation variables and the values reconstructed for visualization from the compressed data becomes smaller than a given constant. (ii) The particles in the simulation are regarded as independent particles, and the time-series data for each particle is compressed with an independent time-step for that particle. (iii) A particle trajectory is approximated by a polynomial function based on the characteristic motion of the particle. It is reconstructed as a continuous curve through interpolation from the values of the function for intermediate values of the sample data. We name this concept "TOKI" (Time-Order Kinetic Irreversible compression). In this paper, we present an example of an implementation of a data-compression scheme with the above features. Several application results are shown for plasma and galaxy formation simulation data.
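    Feature (iii) above, fitting each trajectory with a polynomial whose reconstruction error stays below a given constant, can be sketched as a greedy piecewise fit (a minimal illustration of the concept, not the authors' TOKI implementation; the function names and the segment-growth step of 8 samples are arbitrary choices for the example):

```python
import numpy as np

def compress(t, x, deg=3, tol=1e-3):
    """Greedy piecewise-polynomial compression of a particle trajectory.

    Each segment is grown while a degree-`deg` fit reconstructs every
    sample within `tol`, echoing feature (i) of the TOKI concept.
    Returns (start, stop, t0, coeffs) tuples.
    """
    segments, i, n = [], 0, len(t)
    while i < n:
        j_good = min(i + deg + 1, n)                    # smallest fit window
        c_good = np.polyfit(t[i:j_good] - t[i], x[i:j_good], deg)
        j = j_good + 8
        while j <= n:                                   # try longer windows
            c = np.polyfit(t[i:j] - t[i], x[i:j], deg)
            err = np.max(np.abs(np.polyval(c, t[i:j] - t[i]) - x[i:j]))
            if err > tol:
                break
            j_good, c_good = j, c
            j += 8
        segments.append((i, j_good, t[i], c_good))
        i = j_good
    return segments

def reconstruct(segments, t):
    """Rebuild the trajectory as a continuous piecewise-polynomial curve."""
    x = np.empty_like(t)
    for i, j, t0, c in segments:
        x[i:j] = np.polyval(c, t[i:j] - t0)
    return x

# Demo: a smooth "trajectory" of 500 samples compressed to a handful of
# cubic segments while keeping the reconstruction error below tol.
t = np.linspace(0.0, 1.0, 500)
x = np.sin(2 * np.pi * t)
segs = compress(t, x)
xr = reconstruct(segs, t)
```

Storing only the segment boundaries and coefficients replaces hundreds of samples with a few dozen numbers, while the error bound of feature (i) holds by construction.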

  9. Single particle dynamics of many-body systems described by Vlasov-Fokker-Planck equations

    International Nuclear Information System (INIS)

    Frank, T.D.

    2003-01-01

    Using Langevin equations we describe the random walk of single particles that belong to particle systems satisfying Vlasov-Fokker-Planck equations. In doing so, we show that Haissinski distributions of bunched particles in electron storage rings can be derived from a particle dynamics model.

  10. 21 CFR 892.5050 - Medical charged-particle radiation therapy system.

    Science.gov (United States)

    2010-04-01

    ...-particle radiation therapy system. (a) Identification. A medical charged-particle radiation therapy system... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Medical charged-particle radiation therapy system... equipment, patient and equipment supports, treatment planning computer programs, component parts, and...

  11. Particle and heavy ion transport code system; PHITS

    International Nuclear Information System (INIS)

    Niita, Koji

    2004-01-01

    Intermediate and high energy nuclear data are strongly required in design study of many facilities such as accelerator-driven systems, intense pulse spallation neutron sources, and also in medical and space technology. There are, however, few evaluated nuclear data for intermediate and high energy nuclear reactions. Therefore, we have to use models or systematics for the cross sections, which are essential ingredients of a high energy particle and heavy ion transport code to estimate neutron yield, heat deposition and many other quantities of the transport phenomena in materials. We have developed a general purpose particle and heavy ion transport Monte Carlo code system, PHITS (Particle and Heavy Ion Transport code System), based on the NMTC/JAM code, in a collaboration of Tohoku University, JAERI and RIST. PHITS has three important ingredients which enable us to calculate (1) high energy nuclear reactions up to 200 GeV, (2) heavy ion collisions and their transport in material, (3) low energy neutron transport based on evaluated nuclear data. In PHITS, the cross sections of high energy nuclear reactions are obtained with the JAM model. JAM (Jet AA Microscopic Transport Model) is a hadronic cascade model, which explicitly treats all established hadronic states including resonances, with all hadron-hadron cross sections parametrized based on the resonance model and string model by fitting the available experimental data. PHITS can describe the transport of heavy ions and their collisions by making use of the JQMD and SPAR codes. JQMD (JAERI Quantum Molecular Dynamics) is a simulation code for nucleus-nucleus collisions based on molecular dynamics. The SPAR code is widely used to calculate stopping powers and ranges for charged particles and heavy ions. PHITS also includes part of the MCNP4C code, by which the transport of low energy neutrons, photons and electrons based on evaluated nuclear data can be described. Furthermore, the high energy nuclear
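    As a minimal flavour of the Monte Carlo particle transport such code systems perform, the textbook example below estimates transmission through a purely absorbing slab by sampling exponential free paths (a generic illustration, not PHITS's physics models; all names are invented for the sketch):

```python
import numpy as np

def slab_transmission(sigma_t, thickness, n_particles, seed=0):
    """Monte Carlo estimate of transmission through a purely absorbing slab.

    Free paths are sampled from the exponential distribution with mean
    1/sigma_t; a particle is transmitted if its first flight exceeds the
    slab thickness. The analytic answer is exp(-sigma_t * thickness).
    """
    rng = np.random.default_rng(seed)
    free_paths = rng.exponential(1.0 / sigma_t, n_particles)
    return float(np.mean(free_paths > thickness))

# Two mean free paths of absorber: the estimate converges to exp(-2).
estimate = slab_transmission(sigma_t=1.0, thickness=2.0, n_particles=200_000)
```

Production codes add scattering, energy dependence and geometry on top of exactly this free-path sampling loop.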

  12. Interacting particle systems in time-dependent geometries

    Science.gov (United States)

    Ali, A.; Ball, R. C.; Grosskinsky, S.; Somfai, E.

    2013-09-01

    Many complex structures and stochastic patterns emerge from simple kinetic rules and local interactions, and are governed by scale invariance properties in combination with effects of the global geometry. We consider systems that can be described effectively by space-time trajectories of interacting particles, such as domain boundaries in two-dimensional growth or river networks. We study trajectories embedded in time-dependent geometries, and the main focus is on uniformly expanding or decreasing domains for which we obtain an exact mapping to simple fixed domain systems while preserving the local scale invariance properties. This approach was recently introduced in Ali et al (2013 Phys. Rev. E 87 020102(R)) and here we provide a detailed discussion on its applicability for self-affine Markovian models, and how it can be adapted to self-affine models with memory or explicit time dependence. The mapping corresponds to a nonlinear time transformation which converges to a finite value for a large class of trajectories, enabling an exact analysis of asymptotic properties in expanding domains. We further provide a detailed discussion of different particle interactions and generalized geometries. All our findings are based on exact computations and are illustrated numerically for various examples, including Lévy processes and fractional Brownian motion.

  13. Heavy particle scattering by atomic and nuclear systems

    International Nuclear Information System (INIS)

    Lazauskas, R.

    2003-10-01

    In this thesis the quantum mechanical non-relativistic few-body problem is discussed. Based on fundamental ideas of Faddeev and Yakubovsky, three- and four-body equations are formulated and solved for fermionic atomic and nuclear systems. The former equations are modified to include long-range interactions. Original results for nuclear and molecular physics were obtained: -) positively charged particle scattering on hydrogen atoms was considered; predictions for the π⁺–H, μ⁺–H and p–H scattering lengths were given, and the existence of an unknown, very weakly bound H₂⁺ state was predicted. -) Motivated by the possible observation of a bound four-neutron structure at GANIL, we studied the compatibility of such a state with current nuclear interaction models. -) Four-nucleon scattering at low energies was investigated. Results for the n–³H, p–³H and p–³He systems were compared with the experimental data, and the validity of realistic nucleon-nucleon interaction models is questioned. (author)

  14. Entanglement and nonlocality in multi-particle systems

    Science.gov (United States)

    Reid, Margaret D.; He, Qiong-Yi; Drummond, Peter D.

    2012-02-01

    Entanglement, the Einstein-Podolsky-Rosen (EPR) paradox and Bell's demonstration of the failure of local-hidden-variable (LHV) theories are three historically famous forms of "quantum nonlocality". We give experimental criteria for these three forms of nonlocality in multi-particle systems, with the aim of better understanding the transition from microscopic to macroscopic nonlocality. We examine the nonlocality of N separated spin-J systems. First, we obtain multipartite Bell inequalities that address the correlation between spin values measured at each site, and then we review spin-squeezing inequalities that address the degree of reduction in the variance of collective spins. The latter have been particularly useful as a tool for investigating entanglement in Bose-Einstein condensates (BEC). We present solutions for two topical quantum states: multi-qubit Greenberger-Horne-Zeilinger (GHZ) states, and the ground state of a two-well BEC.

  15. Time evolution of a system of two alpha particles

    International Nuclear Information System (INIS)

    Baye, D.; Herschkowitz, D.

    1996-01-01

    Motivated by interpretations of a broad structure at 32.5 MeV in the ¹²C(¹²C, ¹²C(0₂⁺))¹²C(0₂⁺) doubly inelastic scattering cross sections in terms of linear chains of α particles, we study, in a microscopic model with an exact account of antisymmetrization, the time evolution of a system of two α clusters. The evolution of the system is obtained from a time-dependent variational principle and visualized with matter densities. Even in the most favourable case, an initial two-cluster structure completely disappears in less than 2×10⁻²² s. This result casts doubt on the observability of longer α chains. (orig.)

  16. The simulation status of particle transport system JPTS

    International Nuclear Information System (INIS)

    Deng, L.

    2015-01-01

    Particle transport system JPTS has been developed by IAPCM. It is built on three support frameworks (JASMIN, JAUMIN and JCOGIN) and is used to simulate reactor full cores and radiation shielding problems. The system achieves high fidelity. In this presentation, analyses of the H-M, BEAVRS, VENUS-III and SG-III models are shown. HZP conditions of the BEAVRS model are analysed with the Monte Carlo codes JMCT, MC21 and OpenMC to assess code accuracy against available data and the feasibility of analysing a PWR using JMCT. The large-scale depletion solver is also shown, and the feasibility of radiation shielding analysis using JSNT is assessed. JPTS has demonstrated full-core pin-by-pin and radiation shielding capability. (author)

  17. The history of magnetization process influence on FMR response of particle systems

    International Nuclear Information System (INIS)

    Dumitru, Ioan; Stancu, Alexandru

    2007-01-01

    To express the dependence of the ferromagnetic resonance (FMR) response of a particle system on its magnetization history, we use a statistical model based on the Preisach model. The preceding magnetization processes define, in the Preisach plane, a configuration of particle magnetization orientations. The particles are considered single-domain and saturated and are modeled as Stoner-Wohlfarth particles. The FMR response of the system is computed by summing the individual dynamic susceptibilities of the particles, taking account of the initial directions of the particle magnetizations. The FMR spectra of the particle system are determined for three initial magnetization states: the demagnetized state, the positively saturated state, in which all the particles have their magnetization along the static field direction, and the negatively saturated state, in which all the particles have their magnetization opposite to the field direction. The static-field dependence of the resonance frequency and linewidth is determined as a function of the initial magnetization state.

  18. Energy spectrum structure and "trap" effects in a three-particle system

    International Nuclear Information System (INIS)

    Simenog, I.V.; Sitnichenko, A.I.

    1982-01-01

    The threshold energy spectrum structure in a system of three spinless particles is investigated as a function of the form of the two-particle interaction. The correlated dependence of the spectrum and the low-energy scattering parameters is shown. A new phenomenon of "traps" for the spectrum is established in a three-particle system with an interaction involving components of considerably different ranges.

  19. Quantum chaos and thermalization in isolated systems of interacting particles

    Energy Technology Data Exchange (ETDEWEB)

    Borgonovi, F., E-mail: fausto.borgonovi@unicatt.it [Dipartimento di Matematica e Fisica and Interdisciplinary Laboratories for Advanced Materials Physics, Universitá Cattolica, via Musei 41, 25121 Brescia, and INFN, Sezione di Pavia (Italy); Izrailev, F.M., E-mail: felix.izrailev@gmail.com [Instituto de Física, Universidad Autónoma de Puebla, Apt. Postal J-48, Puebla, Pue., 72570 (Mexico); NSCL and Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824-1321 (United States); Santos, L.F., E-mail: lsantos2@yu.edu [Department of Physics, Yeshiva University, 245 Lexington Ave, New York, NY 10016 (United States); Zelevinsky, V.G., E-mail: Zelevins@nscl.msu.edu [NSCL and Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824-1321 (United States)

    2016-04-15

    This review is devoted to the problem of thermalization in a small isolated conglomerate of interacting constituents. A variety of physically important systems of intensive current interest belong to this category: complex atoms, molecules (including biological molecules), nuclei, small devices of condensed matter and quantum optics on nano- and micro-scale, cold atoms in optical lattices, ion traps. Physical implementations of quantum computers, where there are many interacting qubits, also fall into this group. Statistical regularities come into play through inter-particle interactions, which have two fundamental components: mean field, that along with external conditions, forms the regular component of the dynamics, and residual interactions responsible for the complex structure of the actual stationary states. At sufficiently high level density, the stationary states become exceedingly complicated superpositions of simple quasiparticle excitations. At this stage, regularities typical of quantum chaos emerge and bring in signatures of thermalization. We describe all the stages and the results of the processes leading to thermalization, using analytical and massive numerical examples for realistic atomic, nuclear, and spin systems, as well as for models with random parameters. The structure of stationary states, strength functions of simple configurations, and concepts of entropy and temperature in application to isolated mesoscopic systems are discussed in detail. We conclude with a schematic discussion of the time evolution of such systems to equilibrium.

  20. Free boundary problems in PDEs and particle systems

    CERN Document Server

    Carinci, Gioia; Giardinà, Cristian; Presutti, Errico

    2016-01-01

    In this volume a theory for models of transport in the presence of a free boundary is developed. Macroscopic laws of transport are described by PDEs. When the system is open, there are several mechanisms for coupling the system with the external forces. Here a class of systems is considered where the interaction with the exterior takes place at a free boundary. Both continuous and discrete models sharing the same structure are analysed. In Part I a free boundary problem related to the Stefan problem is worked out in full detail. For this model a new notion of relaxed solution is proposed, for which global existence and uniqueness are proven. It is also shown that this is the hydrodynamic limit of the empirical mass density of the associated particle system. In Part II several other models are discussed. The expectation is that the results proved for the basic model extend to these other cases. All the models discussed in this volume have an interest in problems arising in several research fields...

  1. Iron free permanent magnet systems for charged particle beam optics

    International Nuclear Information System (INIS)

    Lund, S.M.; Halbach, K.

    1995-01-01

    The strength and astounding simplicity of certain permanent magnet materials allow a wide variety of simple, compact configurations of high-field-strength, high-quality multipole magnets. Here we analyze the important class of iron-free permanent magnet systems for charged particle beam optics. The theory of conventional segmented multipole magnets, formed from uniformly magnetized block magnets placed in regular arrays about a circular magnet aperture, is reviewed. The resulting practical multipole configurations, capable of high and intermediate aperture field strengths, are presented. A new class of elliptical-aperture magnets is presented within a model with continuously varying magnetization angle. Segmented versions of these magnets promise practical high-field dipole and quadrupole magnets with an increased range of applicability.

  2. Particle Swarm Optimization Approach in a Consignment Inventory System

    Science.gov (United States)

    Sharifyazdi, Mehdi; Jafari, Azizollah; Molamohamadi, Zohreh; Rezaeiahari, Mandana; Arshizadeh, Rahman

    2009-09-01

    Consignment Inventory (CI) is a kind of inventory which is in the possession of the customer, but is still owned by the supplier. This creates a condition of shared risk whereby the supplier risks the capital investment associated with the inventory while the customer risks dedicating retail space to the product. This paper considers both the vendor's and the retailers' costs in an integrated model. The vendor here is a warehouse which stores one type of product and supplies it at the same wholesale price to multiple retailers who then sell the product in independent markets at retail prices. Our main aim is to design a CI system which generates minimum costs for the two parties. Here a Particle Swarm Optimization (PSO) algorithm is developed to calculate the proper values. Finally, a sensitivity analysis is performed to examine the effects of each parameter on the decision variables, and PSO performance is compared with a genetic algorithm.
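    The canonical PSO update used in studies like this one can be sketched in a few lines (a generic textbook PSO minimising a test function, not the authors' consignment-inventory model; the inertia weight and acceleration constants are typical illustrative choices):

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over a box using canonical particle swarm optimisation."""
    rng = np.random.default_rng(seed)
    lo = np.asarray(bounds[0], dtype=float)
    hi = np.asarray(bounds[1], dtype=float)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()           # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive pull to pbest + social pull to global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())

# Example: the 2-D sphere function has its minimum (0) at the origin.
best_x, best_val = pso_minimize(lambda p: np.sum(p ** 2), ([-5, -5], [5, 5]))
```

In an inventory application the position vector would hold the decision variables (e.g. order quantities) and `f` the joint vendor-retailer cost.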

  3. Effects of heavy particle irradiation on central nervous system

    International Nuclear Information System (INIS)

    Nojima, Kumie; Nakadai, Taeko; Khono, Yukio

    2006-01-01

    The effects of low-dose heavy-particle radiation on the central nervous system were studied using human embryonal carcinoma (Ntera2 = NT2) cells and human neuroblastoma (NB1) cells. Cells at 80% confluence in culture bottles were exposed to heavy ions and X-rays. The two cell types differ in growth and neuronal differentiation. The apoptosis profile was measured by Annexin V-EGFP and PI staining with a fluorescence-activated cell sorter (FACS). Memory and learning function of adult mice were studied using the water maze test after carbon- or iron-ion irradiation. Memory function decreased rapidly after irradiation with both ions; the iron-ion group recovered 20 weeks after irradiation and the carbon-ion group recovered 25 weeks after irradiation. Their memory was still maintained at over 100 weeks after irradiation. (author)

  4. Collisional dynamics of perturbed particle disks in the solar system

    Science.gov (United States)

    Roberts, W. W.; Stewart, G. R.

    1987-01-01

    Investigations of the collisional evolution of particulate disks subject to the gravitational perturbation of a more massive particle orbiting within the disk are underway. Both numerical N-body simulations using a novel collision algorithm and analytical kinetic theory are being employed to extend our understanding of perturbed disks in planetary rings and during the formation of the solar system. Particular problems proposed for investigation are: (1) The development and testing of general criteria for a small moonlet to clear a gap and produce observable morphological features in planetary rings; (2) The development of detailed models of collisional damping of the wavy edges observed on the Encke division of Saturn's A ring; and (3) The determination of the extent of runaway growth of the few largest planetesimals during the early stages of planetary accretion.

  5. The Wonderful World of Active Many-Particle Systems

    Science.gov (United States)

    Helbing, Dirk

    Since the subject of traffic dynamics has captured the interest of physicists, many astonishing effects have been revealed and explained. Some of the questions now understood are the following: Why are vehicles sometimes stopped by so-called "phantom traffic jams", although they all like to drive fast? What are the mechanisms behind stop-and-go traffic? Why are there several different kinds of congestion, and how are they related? Why do most traffic jams occur considerably before the road capacity is reached? Can a temporary reduction of the traffic volume cause a lasting traffic jam? Why do pedestrians moving in opposite directions normally organize into lanes, while nervous crowds are "freezing by heating"? Why do panicking pedestrians produce dangerous deadlocks? All these questions have been answered by applying and extending methods from statistical physics and nonlinear dynamics to self-driven many-particle systems.

  6. Extending the Modelling Framework for Gas-Particle Systems

    DEFF Research Database (Denmark)

    Rosendahl, Lasse Aistrup

    , with very good results. Single particle combustion has been tested using a number of different particle combustion models applied to coal and straw particles. Comparing the results of these calculations to measurements on straw burnout, the results indicate that for straw, existing heterogeneous combustion...... models perform well, and may be used in high temperature ranges. Finally, the particle tracking and combustion model is applied to an existing coal and straw co- fuelled burner. The results indicate that again, the straw follows very different trajectories than the coal particles, and also that burnout...

  7. Relativistic "potential model" for N-particle systems

    International Nuclear Information System (INIS)

    Noyes, H.P.

    1986-08-01

    Neither quantum field theory nor S-matrix theory has a well-defined procedure for going over to an approximation that can be reliably used in non-relativistic models for nuclear physics. We meet the problem here by constructing a finite-particle-number relativistic scattering theory for (scalar) particles and mesons using integral equations of the Faddeev-Yakubovsky type. Restricted to N particles and one meson, we can go from the relativistic theory to a "potential theory" in the integral-equation formulation by using boundary states which do not contain the meson asymptotically. The meson-particle input amplitudes contain a pole at the particle mass, and the particle-particle input amplitudes are null. This gives a unique, numerically calculable definition to the particle-particle off-shell amplitude, and hence to the covariant "scattering potential" (but not to the noninvariant concept of "potential energy"). As we have commented before, if we take these scattering amplitudes as input for relativistic Faddeev equations, the results are identical to those obtained from the same model starting from three particles and one meson. In this paper we explore how far we can extend this relativistic "potential model" to higher numbers of particles and mesons. 10 refs

  8. DANTSYS: a system for deterministic, neutral particle transport calculations

    Energy Technology Data Exchange (ETDEWEB)

    Alcouffe, R.E.; Baker, R.S.

    1996-12-31

    The THREEDANT code is the latest addition to our system of codes, DANTSYS, which perform neutral-particle transport computations on a given system of interest. The codes in the system are distinguished by geometrical or symmetry considerations. For example, ONEDANT and TWODANT are designed for one- and two-dimensional geometries respectively. We have TWOHEX for hexagonal geometries, TWODANT/GQ for arbitrary quadrilaterals in XY and RZ geometry, and THREEDANT for three-dimensional geometries. The design of this system of codes is such that they share the same input and edit modules, and hence the input and output are uniform across all the codes (with the obvious additions needed to specify each type of geometry). The codes in this system are also designed to be general purpose, solving both eigenvalue and source-driven problems. In this paper we concentrate on the THREEDANT module, since there are special considerations that need to be taken into account when designing such a module. The main issues that need to be addressed in a three-dimensional transport solver are the computation time needed to solve a problem and the amount of storage needed to accomplish that solution. Both issues are directly related to the number of spatial mesh cells required to obtain a solution of a specified accuracy, but also to the spatial discretization method chosen and the requirements of the iteration acceleration scheme employed, as noted below. Another related consideration is the robustness of the resulting algorithms as implemented, because insistence on complete robustness has a significant impact on the computation time. We address each of these issues in the following and give reasons for the choices we have made in our approach to this code; this is also useful in outlining how the code is evolving to better address the shortcomings that presently exist.

  9. Biological effects of particles from the Paris subway system.

    Science.gov (United States)

    Bachoual, Rafik; Boczkowski, Jorge; Goven, Delphine; Amara, Nadia; Tabet, Lyes; On, Dinhill; Leçon-Malas, Véronique; Aubier, Michel; Lanone, Sophie

    2007-10-01

    Particulate matter (PM) from atmospheric pollution can easily deposit in the lungs and induce recruitment of inflammatory cells, a source of inflammatory cytokines, oxidants, and matrix metalloproteases (MMPs), which are important players in lung structural homeostasis. In many large cities, the subway system is a potent source of PM emission, but little is known about the biological effects of PM from this source. We performed a comprehensive study to evaluate the biological effects of PM sampled at two sites (RER and Metro) in the Paris subway system. Murine macrophages (RAW 264.7) and C57Bl/6 mice, respectively, were exposed to 0.01-10 microg/cm2 and 5-100 microg/mouse subway PM or reference materials [carbon black (CB), titanium dioxide (TiO2), or diesel exhaust particles (DEPs)]. We analyzed cell viability, production of cellular and lung proinflammatory cytokines [tumor necrosis factor alpha (TNFalpha), macrophage inflammatory protein (MIP-2), KC (the murine analog of interleukin-8), and granulocyte macrophage-colony stimulating factor (GM-CSF)], and mRNA or protein expression of MMP-2, -9, and -12 and heme oxygenase-1 (HO-1). Deferoxamine and polymyxin B were used to evaluate the roles of iron and endotoxin, respectively. Noncytotoxic concentrations of subway PM (but not CB, TiO2, or DEPs) induced a time- and dose-dependent increase in TNFalpha and MIP-2 production by RAW 264.7 cells, in a manner involving, at least in part, PM iron content (34% inhibition of TNF production 8 h after stimulation of RAW 264.7 cells with 10 microg/cm2 RER particles pretreated with deferoxamine). Similar increased cytokine production was transiently observed in vivo in mice and was accompanied by an increased neutrophil cellularity of bronchoalveolar lavage (84.83+/-0.98% of polymorphonuclear neutrophils for RER-treated mice after 24 h vs 7.33+/-0.99% for vehicle-treated animals). Subway PM induced an increased expression of MMP-12 and HO-1 both in vitro and in vivo. 
PM from the

  10. Panorama parking assistant system with improved particle swarm optimization method

    Science.gov (United States)

    Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong

    2013-10-01

    A panorama parking assistant system (PPAS) for the automotive aftermarket together with a practical improved particle swarm optimization method (IPSO) are proposed in this paper. In the PPAS system, four fisheye cameras are installed in the vehicle with different views, and four channels of video frames captured by the cameras are processed as a 360-deg top-view image around the vehicle. Besides the embedded design of PPAS, the key problem for image distortion correction and mosaicking is the efficiency of parameter optimization in the process of camera calibration. In order to address this problem, an IPSO method is proposed. Compared with other parameter optimization methods, the proposed method allows a certain range of dynamic change for the intrinsic and extrinsic parameters, and can exploit only one reference image to complete all of the optimization; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO method is a highly practical way to increase the efficiency of the installation and the calibration of PPAS in automobile 4S shops.
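The abstract does not describe the IPSO modifications (dynamic parameter ranges, single-reference-image calibration) in enough detail to reproduce, but the baseline they improve on can be sketched. Below is a minimal standard particle swarm optimization on a two-dimensional sphere function, with textbook coefficient values; it is a generic illustration, not the paper's IPSO.

```python
import numpy as np

# Minimal standard PSO minimizing the 2-D sphere function x^2 + y^2.
# Coefficients w, c1, c2 are common textbook values, not the IPSO settings.
rng = np.random.default_rng(0)
n, dim, iters = 30, 2, 200
w, c1, c2 = 0.72, 1.49, 1.49          # inertia and acceleration weights

def f(x):                             # objective: sphere function
    return np.sum(x * x, axis=-1)

pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), f(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # velocity update: inertia + cognitive pull + social pull
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = f(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(f(gbest))   # approaches 0 as the swarm converges
```

In a camera-calibration setting, `f` would instead measure reprojection error of the intrinsic and extrinsic parameters against the reference image.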

  11. Particle simulation of grid system for krypton ion thrusters

    Directory of Open Access Journals (Sweden)

    Maolin CHEN

    2018-04-01

    Full Text Available The transport processes of plasmas in grid systems of krypton (Kr) ion thrusters at different acceleration voltages were simulated with a 3D-PIC model, and the results were compared with xenon (Xe) ion thrusters. The variations of the screen grid transparency, the accelerator grid current ratio, and the divergence loss were explored. It is found that the screen grid transparency increases with the acceleration voltage and decreases with the beam current, while the accelerator grid current ratio and divergence loss first decrease and then increase with the beam current. This trend is the same as for Xe ion thrusters. Simulation results also show that Kr ion thrusters have several advantages over Xe ion thrusters, such as higher screen grid transparency, a smaller accelerator grid current ratio, a larger cut-off current threshold, and better divergence loss characteristics. These advantages mean that Kr ion thrusters can operate over a wide range of currents. Through comprehensive analyses, it can be concluded that using Kr as the propellant is very suitable for a multi-mode ion thruster design. Keywords: Grid system, Ion thrusters, Krypton, Particle in cell method, Plasma
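The particle-in-cell simulation the abstract describes rests on a charged-particle mover; a common choice in PIC codes is the Boris rotation step, sketched below. This is a generic illustration, not the authors' 3D-PIC model; the field values, time step, and charge-to-mass ratio are arbitrary illustrations.

```python
import numpy as np

# One Boris-rotation step for a charged particle in static E and B fields,
# the standard mover inside PIC codes.  Not the authors' code; the field
# values below are arbitrary.
def boris_push(v, E, B, q_over_m, dt):
    t = q_over_m * B * 0.5 * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_minus = v + q_over_m * E * 0.5 * dt    # first half electric kick
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # magnetic rotation
    return v_plus + q_over_m * E * 0.5 * dt  # second half electric kick

# With E = 0 the rotation conserves kinetic energy exactly:
v = np.array([1.0e5, 0.0, 0.0])              # m/s
B = np.array([0.0, 0.0, 0.01])               # T
for _ in range(1000):
    v = boris_push(v, np.zeros(3), B, q_over_m=-1.76e11, dt=1e-9)
print(np.linalg.norm(v))   # speed unchanged from 1e5
```

The exact energy conservation in the magnetic rotation is why the Boris scheme is favored for long-running thruster plume and grid-erosion simulations.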

  12. Resonating group calculation for a three particle system

    International Nuclear Information System (INIS)

    Kumar, Kiran; Jain, A.K.

    1979-01-01

    The elastic scattering of a projectile comprising a loosely bound pair of particles by a target has been investigated in the Resonating Group Method (RGM). An effective interaction between the projectile and the target has also been derived in terms of the individual particle-target interactions. Phenomenological potentials are employed to describe, with reasonable accuracy, the antisymmetrized particle-target wavefunctions. This simplifies the analysis from an N-particle calculation to a three-body RGM calculation. Results obtained for d-α scattering are compared with a full six-nucleon calculation as well as with experiment. Results on 6 Li scattering on 40 Ca are discussed. (auth.)

  13. Design of Compact Particle Detector System Using FPGA for Space Particle Environment Measurement

    Directory of Open Access Journals (Sweden)

    K. Ryu

    2007-06-01

    Full Text Available We have designed a high-resolution proton and electron telescope for the detection of high-energy particles, which constitute a major part of the space environment. The flux of these particles in satellite orbits can vary abruptly with position and solar activity. In this study, a conceptual design of the detector, able to accommodate these variations with high energy resolution, was produced and its performance estimated. In addition, a parallel processing algorithm was devised and implemented in an FPGA for high-speed on-board data processing, capable of handling high fluxes without loss of energy resolution.

  14. Complex dynamic and static structures in interconnected particle systems

    International Nuclear Information System (INIS)

    Kristiansen, Kai de Lange

    2004-01-01

    , and may also be a subject for future studies. The diffusive behaviour of a cluster of a semi-large number of spheres in a soft potential undergoes transitions in length scale from super-diffusion via normal diffusion to sub-diffusion. This analysis follows the motion of one sphere over a large time span. Knot theory can be used to obtain other measures of the collective behaviour; e.g., the linking number seems to be a promising measure and would be worth studying. This quantity represents the number of times the world lines of two spheres cross each other in a preferred direction of rotation. Random dense packing of spheres is a useful model for disordered and granular media. A monolayer of non-magnetic spheres in a ferrofluid is used to simulate this packing in 2D. Our experiments show packing structures similar to previous results. In 3D we have used a mechanical contraction method, paper 5, to simulate rapid sedimentation of a binary mixture of spherical colloidal particles. The densities as a function of sphere composition were found to be similar to results from the experiments. For a random dense packing it would be interesting to follow the idea of the excluded volume argument to explain quantitatively the density as a function of size and shape distributions. The mechanical contraction method seems to be ideal for these kinds of numerical calculations. The coordination number is difficult to find in a real system of colloidal particles, but is easily obtained in numerical simulations. Nucleation of a colloidal monolayer in an alternating electric field has been studied recently. The magnetic hole system may be used to show similar behaviour in a magnetic field. With this system we can study the nucleation process from the beginning and also investigate the nucleation rate. 
Preliminary experiments have also been done that show large differences in the behaviour in systems with only free spheres and systems with some obstacles or fixed spheres among the

  15. Complex dynamic and static structures in interconnected particle systems

    Energy Technology Data Exchange (ETDEWEB)

    Kristiansen, Kai de Lange

    2004-07-01

    -Mandelbrot relation is not fully understood, and may also be a subject for future studies. The diffusive behaviour of a cluster of a semi-large number of spheres in a soft potential undergoes transitions in length scale from super-diffusion via normal diffusion to sub-diffusion. This analysis follows the motion of one sphere over a large time span. Knot theory can be used to obtain other measures of the collective behaviour; e.g., the linking number seems to be a promising measure and would be worth studying. This quantity represents the number of times the world lines of two spheres cross each other in a preferred direction of rotation. Random dense packing of spheres is a useful model for disordered and granular media. A monolayer of non-magnetic spheres in a ferrofluid is used to simulate this packing in 2D. Our experiments show packing structures similar to previous results. In 3D we have used a mechanical contraction method, paper 5, to simulate rapid sedimentation of a binary mixture of spherical colloidal particles. The densities as a function of sphere composition were found to be similar to results from the experiments. For a random dense packing it would be interesting to follow the idea of the excluded volume argument to explain quantitatively the density as a function of size and shape distributions. The mechanical contraction method seems to be ideal for these kinds of numerical calculations. The coordination number < C > is difficult to find in a real system of colloidal particles, but is easily obtained in numerical simulations. Nucleation of a colloidal monolayer in an alternating electric field has been studied recently. The magnetic hole system may be used to show similar behaviour in a magnetic field. With this system we can study the nucleation process from the beginning and also investigate the nucleation rate. 
Preliminary experiments have also been done that show large differences in the behaviour in systems with only free spheres and systems with some

  16. Effects of heavy particle irradiation on central nervous system

    International Nuclear Information System (INIS)

    Nojima, Kumie; Nakadai, Taeko; Khono, Yukio; Nagaoka, Shunji

    2004-01-01

    Effects of low-dose heavy particle radiation on the central nervous system were studied using mouse neonatal brain cells in culture exposed to heavy ions and X-rays on the fifth day of culture. The subsequent biological effects were evaluated by the induction of apoptosis and the survivability of neurons, focusing on the dependence on animal strains with different genetic types and on the linear energy transfer (LET) of the different nucleons. Of the four mouse strains tested (SCID, B6, B6C3F1, and C3H) used for brain cell culture, SCID was the most sensitive. The radiation sensitivity of these cells was SCID > B6 > B6C3F1 > C3H for both X-rays and carbon ions (290 MeV/n) when compared at 10% apoptotic induction. The LET dependence was compared using SCID cells exposed to different radiation types (X-rays and C, Si, Ar, and Fe ions). Although no detectable LET dependence was observed at doses above 1 Gy, an enhancement was observed in the high-LET region at doses below 0.5 Gy. The survivability profiles of the neurons differed among the mouse strains and ions. Memory and learning function of adult mice were studied using a water-maze test after localized carbon- or iron-ion irradiation of the hippocampus area. Memory function decreased rapidly after irradiation with both ions. The carbon-ion group recovered 20 weeks after irradiation, but the iron-ion group behaved differently. (author)

  17. Effects of heavy particle irradiation on central nervous system

    International Nuclear Information System (INIS)

    Nojima, Kumie; Liu Cuihua; Nagaoka, Shunji

    2003-01-01

    Effects of low-dose heavy particle radiation on the central nervous system were studied using mouse neonatal brain cells in culture exposed to heavy ions and X-rays on the fifth day of culture. The subsequent biological effects were evaluated by the induction of apoptosis and the survivability of neurons, focusing on the dependence on animal strains with different genetic types and on the linear energy transfer (LET) of the different nucleons. Of the three mouse strains tested, severe combined immunodeficiency (SCID), B6 and C3H, used for brain cell culture, SCID was the most sensitive and C3H the least sensitive to both X-rays and carbon ions (290 MeV/n) when compared at 10% apoptotic induction. The LET dependence was compared using SCID cells exposed to different radiation types (X-rays and C, Si, Ar, and Fe ions). Although no detectable LET dependence was observed at doses above 1 Gy, an enhancement was observed in the high-LET region at doses below 0.5 Gy. The survivability profiles of the neurons differed among the mouse strains and ions. Memory and learning function of adult mice were studied using a water-maze test after localized carbon- or iron-ion irradiation of the hippocampus area. (author)

  18. Granular dynamics, contact mechanics and particle system simulations: a DEM study

    CERN Document Server

    Thornton, Colin

    2015-01-01

    This book is devoted to the Discrete Element Method (DEM) technique, a discontinuum modelling approach that takes into account the fact that granular materials are composed of discrete particles which interact with each other at the microscale level. This numerical simulation technique can be used both for dispersed systems, in which the particle-particle interactions are collisional, and for compact systems of particles with multiple enduring contacts. The book provides an extensive and detailed explanation of the theoretical background of DEM. Contact mechanics theories for elastic, elastic-plastic, adhesive elastic and adhesive elastic-plastic particle-particle interactions are presented. Other contact force models are also discussed, including corrections to some of these models as described in the literature, and important areas of further research are identified. A key issue in DEM simulations is whether or not a code can reliably simulate the simplest of systems, namely the single particle oblique impact wit...
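As an illustration of the elastic contact laws the book covers, the standard Hertz normal force-displacement relation for two spheres can be sketched as follows; the material constants (roughly those of steel) and the overlap value are illustrative only.

```python
import math

# Standard Hertz elastic normal contact between two spheres, the usual
# normal force-displacement law in DEM codes.  Material values are
# illustrative (roughly steel), not taken from the book.
def hertz_force(delta, R1, R2, E1, E2, nu1, nu2):
    """Normal force (N) at overlap delta (m) between two elastic spheres."""
    R_eff = 1.0 / (1.0 / R1 + 1.0 / R2)                    # effective radius
    E_eff = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # effective modulus
    return (4.0 / 3.0) * E_eff * math.sqrt(R_eff) * delta**1.5

F = hertz_force(delta=1e-6, R1=5e-3, R2=5e-3,
                E1=210e9, E2=210e9, nu1=0.3, nu2=0.3)
print(F)   # force grows as delta^(3/2): a stiffening, nonlinear spring
```

The 3/2-power stiffening is what distinguishes Hertzian DEM contact from the simpler linear-spring model, and the elastic-plastic and adhesive variants discussed in the book modify this baseline law.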

  19. Status of the Melbourne experimental particle physics DAQ, silicon hodoscope and readout systems

    International Nuclear Information System (INIS)

    Moorhead, G.F.

    1995-01-01

    This talk will present a brief review of the current status of the Melbourne Experimental Particle Physics group's primary data acquisition system (DAQ), the associated silicon hodoscope and trigger systems, and of the tests currently underway and foreseen. Simulations of the propagation of 106-Ru β particles through the system will also be shown

  20. Effects of types of ventilation system on indoor particle concentrations in residential buildings.

    Science.gov (United States)

    Park, J S; Jee, N-Y; Jeong, J-W

    2014-12-01

    The objective of this study was to quantify the influence of ventilation systems on indoor particle concentrations in residential buildings. Fifteen occupied, single-family apartments were selected from three sites. The three sites have three different ventilation systems: unbalanced mechanical ventilation, balanced mechanical ventilation, and natural ventilation. Field measurements were conducted between April and June 2012, when outdoor air temperatures were comfortable. Number concentrations of particles, PM2.5, and CO2 were continuously measured both outdoors and indoors. In the apartments with natural ventilation, I/O ratios of particle number concentrations ranged from 0.56 to 0.72 for submicron particles, and from 0.25 to 0.60 for particles larger than 1.0 μm. The daily average indoor particle concentration decreased to 50% below the outdoor level for submicron particles and 25% below the outdoor level for fine particles, when the apartments were mechanically ventilated. The two mechanical ventilation systems reduced the I/O ratios by 26% for submicron particles and 65% for fine particles compared with the natural ventilation. These results showed that mechanical ventilation can reduce exposure to outdoor particles in residential buildings. Results of this study confirm that mechanical ventilation with filtration can significantly reduce indoor particle levels compared with natural ventilation. The I/O ratios of particles substantially varied at the naturally ventilated apartments because of the influence of variable window opening conditions and unsteadiness of wind flow on the penetration of outdoor air particles. For better prediction of the exposure to outdoor particles in naturally ventilated residential buildings, it is important to understand the penetration of outdoor particles with variable window opening conditions. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Method of determining the characteristics of circulatory systems using tracer particles, making the particles and radioactive particles for use in the method

    International Nuclear Information System (INIS)

    Pratt, F.P.; Gagnon, D.L.

    1981-01-01

    In the method described, tracer particles consist of ion-exchange resin cores labelled with suitable radioactive ions or with a nuclide excitable by X-rays, and have a non-leaching polymeric coating. The particles are introduced into the system and are detected by visual inspection, radiation detection or X-ray fluorescence techniques. The cores are labelled using conventional batch ion-exchange techniques. Coated tracers are produced by contacting a monomer, preferably furfuryl alcohol, with cores bearing catalytic ions (hydroxyl or hydrogen) on the surface, which catalyse polymerization of the monomer. The tracer particles, in a physiologically acceptable liquid carrier, are useful in clinical and medical investigations of blood flow. They can also be used for flow measurement in chemical process control streams. (U.K.)

  2. Acoustic Manipulation of Particles and Fluids in Microfluidic Systems

    OpenAIRE

    Johansson, Linda

    2009-01-01

    The downscaling and integration of biomedical analyses onto a single chip offers several advantages in speed, cost, parallelism and de-centralization. Acoustic radiation forces are attractive to use in these applications since they are strong, long-range and gentle. Lab-on-a-chip operations such as cell trapping, particle fluorescence activated cell sorting, fluid mixing and particle sorting performed by acoustic radiation forces are exploited in this thesis. Two different platforms are desig...

  3. Structures and dynamics in a two-dimensional dipolar dust particle system

    Science.gov (United States)

    Hou, X. N.; Liu, Y. H.; Kravchenko, O. V.; Lapushkina, T. A.; Azarova, O. A.; Chen, Z. Y.; Huang, F.

    2018-05-01

    The effects of the electric dipole moment, the number of dipolar particles, and the system temperature on the structures and dynamics of a dipolar dust particle system are studied by molecular dynamics simulations. The results show that a larger electric dipole moment favors the formation of a long-chain structure, a larger number of dipolar dust particles promotes the formation of a multi-chain structure, and a higher system temperature causes a higher rotation frequency. The trajectories, mean square displacement (MSD), and the corresponding spectrum functions of the MSDs are also calculated to illustrate the dynamics of the dipolar dust particle system, which is also closely related to the growth of dust particles. Some simulations are qualitatively in agreement with our experiments and can provide a guide for the study of dust growth, especially of large-sized particles.
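The MSD analysis mentioned above can be sketched with a standard time-averaged estimator. In the sketch below a synthetic two-dimensional random walk stands in for a simulated dust-particle trajectory, so the fitted exponent comes out near 1 (normal diffusion); for real dust dynamics the exponent distinguishes super- and sub-diffusive regimes.

```python
import numpy as np

# Time-averaged mean square displacement from a particle trajectory.
# The synthetic random walk below is a stand-in for a simulated dust
# track; it is not the authors' data.
rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(size=(10000, 2)), axis=0)   # normal diffusion

def msd(traj, max_lag):
    lags = np.arange(1, max_lag)
    out = np.empty(len(lags))
    for k, lag in enumerate(lags):
        d = traj[lag:] - traj[:-lag]            # displacements at this lag
        out[k] = np.mean(np.sum(d * d, axis=1))
    return lags, out

lags, m = msd(traj, 100)
# slope of log(MSD) vs log(lag): ~1 normal, >1 super-, <1 sub-diffusive
alpha = np.polyfit(np.log(lags), np.log(m), 1)[0]
print(round(alpha, 2))
```

Restricting the fit to lags much shorter than the trajectory length, as here, keeps the time-averaged estimator statistically reliable.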

  4. EGUN, Charged Particle Trajectories in Electromagnetic Focusing System

    International Nuclear Information System (INIS)

    Herrmannsfeldt, W.B.

    2002-01-01

    1 - Description of problem or function: EGUN computes trajectories of charged particles in electrostatic and magnetostatic focusing systems including the effects of space charge and self-magnetic fields. Starting options include Child's Law conditions on cathodes of various shapes, user-specified conditions input for each ray, and a combination of Child's Law conditions and user specifications. Either rectangular or cylindrically symmetric geometry may be used. Magnetic fields may be specified using an arbitrary configuration of coils, or the output of a magnet program, such as Poisson, or by an externally calculated array of the axial fields. 2 - Method of solution: The program first solves Laplace's equation. Next, the first iteration of electron trajectories is started using one of the four starting options. On the first iteration cycle, space charge forces are calculated from the assumption of paraxial flow. As the rays are traced, space charge is computed and stored. After all the electron trajectories have been calculated, the program begins the second cycle by solving the Poisson equation with the space charge from the first iteration. Subsequent iteration cycles follow this pattern. The Poisson equation is solved by an alternate column relaxation technique known as the semi-iterative Chebyshev method. A fourth-order Runge-Kutta method is used to solve the relativistic differential equations of the trajectory calculations. 3 - Restrictions on the complexity of the problem - Maxima of: 9001 mesh points in a square mesh, 300 mesh points in the axial direction, 100 mesh points in the radial direction, 101 potentials, 51 rays. In the cylindrical coordinates, the magnetic fields are axially symmetric. In rectangular coordinates, the external field is assumed to be normal to the plane of the problem, which is assumed to be the median plane
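EGUN integrates its trajectory equations with a fourth-order Runge-Kutta method. The sketch below applies the same integrator to the simpler non-relativistic case of a uniform magnetic field, where the exact answer, a circular orbit of gyroradius r = m v / (|q| B), is known; it is a toy stand-in, not EGUN's relativistic equations or its space-charge coupling.

```python
import numpy as np

# RK4 integration of a non-relativistic charged particle in a uniform
# magnetic field.  Illustrative stand-in for EGUN's relativistic RK4
# trajectory step; field and time-step values are arbitrary.
q_m = -1.759e11                       # electron charge/mass (C/kg)
B = np.array([0.0, 0.0, 1e-3])        # uniform B field (T)

def deriv(y):                         # y = [x, y, z, vx, vy, vz]
    r, v = y[:3], y[3:]
    return np.concatenate([v, q_m * np.cross(v, B)])

def rk4_step(y, dt):
    k1 = deriv(y)
    k2 = deriv(y + 0.5 * dt * k1)
    k3 = deriv(y + 0.5 * dt * k2)
    k4 = deriv(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.array([0.0, 0.0, 0.0, 1e6, 0.0, 0.0])   # launched at 1e6 m/s along x
dt = 1e-11
for _ in range(5000):
    y = rk4_step(y, dt)

r_gyro = 1e6 / (abs(q_m) * 1e-3)      # analytic gyroradius (m)
center = np.array([0.0, r_gyro, 0.0]) # orbit center for this geometry
print(np.linalg.norm(y[:3] - center) / r_gyro)   # ~1.0: orbit radius matches
```

In EGUN itself this step is interleaved with the Poisson solve: each iteration cycle re-traces all rays through the fields computed from the previous cycle's space charge.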

  5. Development of a pneumatic transfer system for HTGR recycle fuel particles

    International Nuclear Information System (INIS)

    Mack, J.E.; Johnson, D.R.

    1978-02-01

    In support of the High-Temperature Gas-Cooled Reactor (HTGR) Fuel Refabrication Development Program, an experimental pneumatic transfer system was constructed to determine the feasibility of pneumatically conveying pyrocarbon-coated fuel particles of Triso and Biso designs. Tests were conducted with these particles in each of their nonpyrophoric forms to determine pressure drops, particle velocities, and gas flow requirements during pneumatic transfer as well as to evaluate particle wear and breakage. Results indicated that the material can be pneumatically conveyed at low pressures without excessive damage to the particles or their coatings

  6. An experimental and analytical study of a buoyancy driven cooling system for a particle accelerator

    International Nuclear Information System (INIS)

    Campbell, B.; Ranganathan, R.

    1993-05-01

    A buoyancy driven closed-loop cooling system that transports the heat generated in a particle accelerator to the ambient has been evaluated both through experiments performed earlier and analysis techniques developed elsewhere. Excellent comparisons between measurements and calculations have been obtained. The model illustrates the feasibility (from a heat transfer viewpoint) of such a cooling system for a particle accelerator

  7. An experimental and analytical study of a buoyancy driven cooling system for a particle accelerator

    International Nuclear Information System (INIS)

    Campbell, B.; Ranganathan, R.

    1993-01-01

    A buoyancy driven closed-loop cooling system that transports the heat generated in a particle accelerator to the ambient has been evaluated both through experiments performed earlier and analysis techniques developed elsewhere. Excellent comparisons between measurements and calculations have been obtained. The model illustrates the feasibility (from a heat transfer viewpoint) of such a cooling system for a particle accelerator

  8. A model for particle emission from a fissioning system

    International Nuclear Information System (INIS)

    Milek, B.; Reif, R.; Revai, J.

    1987-04-01

    The differential emission probability for a neutron emitted in a binary fission process due to non-adiabatic effects in the coupling of the single-particle degrees of freedom to the accelerated relative motion of the fragments is investigated within a model which represents each nucleus by a non-deformed one-term separable potential. The derivation of measurable quantities from the asymptotic solution of the time-dependent Schroedinger equation for the single-particle wave function is examined. Numerical calculations were performed for parameter values corresponding to 252 Cf(sf). The calculated energy spectra and angular distributions of the emitted particles are presented as functions of the mass asymmetry. (author)

  9. Fabrication, Characterization, and Biological Activity of Avermectin Nano-delivery Systems with Different Particle Sizes

    Science.gov (United States)

    Wang, Anqi; Wang, Yan; Sun, Changjiao; Wang, Chunxin; Cui, Bo; Zhao, Xiang; Zeng, Zhanghua; Yao, Junwei; Yang, Dongsheng; Liu, Guoqiang; Cui, Haixin

    2018-01-01

    Nano-delivery systems for the active ingredients of pesticides can improve the utilization rates of pesticides and prolong their control effects, owing to the nanocarrier envelope and its controlled-release function. However, particles containing active ingredients in controlled-release pesticide formulations are generally large and have wide size distributions. Few studies have examined the effect of particle size on the controlled-release properties and biological activities of pesticide delivery systems. In the current study, avermectin (Av) nano-delivery systems were constructed with different particle sizes and their performances were evaluated. The Av release rate in the nano-delivery system could be effectively controlled by changing the particle size, and the biological activity increased with decreasing particle size. These results suggest that Av nano-delivery systems can significantly improve controllable release, photostability, and biological activity, which will improve efficiency and reduce pesticide residues.

  10. Theoretical method for determining particle distribution functions of classical systems

    International Nuclear Information System (INIS)

    Johnson, E.

    1980-01-01

    An equation which involves the triplet distribution function and the three-particle direct correlation function is obtained. This equation was derived using an analogue of the Ornstein--Zernike equation. The new equation is used to develop a variational method for obtaining the triplet distribution function of uniform one-component atomic fluids from the pair distribution function. The variational method may be used with the first and second equations in the YBG hierarchy to obtain pair and triplet distribution functions. It should be easy to generalize the results to the n-particle distribution function
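The pair distribution function that serves as input to the variational method can be estimated from particle coordinates by histogramming pair distances. The sketch below uses uncorrelated random points in a periodic box, for which g(r) ≈ 1 at all r; the box size, particle count, and binning are arbitrary illustrations.

```python
import numpy as np

# Radial pair distribution function g(r) estimated by histogramming
# pair distances under the minimum-image convention.  Uncorrelated
# points give g(r) ~ 1, which this sketch verifies.
rng = np.random.default_rng(2)
N, Lbox = 1000, 10.0
pos = rng.uniform(0, Lbox, (N, 3))

def g_of_r(pos, Lbox, nbins=50, r_max=None):
    r_max = r_max or Lbox / 2
    d = pos[:, None, :] - pos[None, :, :]
    d -= Lbox * np.round(d / Lbox)                 # minimum-image convention
    dist = np.sqrt(np.sum(d * d, axis=-1))
    dist = dist[np.triu_indices(len(pos), k=1)]    # unique pairs only
    hist, edges = np.histogram(dist, bins=nbins, range=(0, r_max))
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    rho = len(pos) / Lbox**3
    # normalize pair counts by the ideal-gas expectation per shell
    g = hist / (0.5 * len(pos) * rho * shell)
    return 0.5 * (edges[1:] + edges[:-1]), g

r, g = g_of_r(pos, Lbox)
print(round(float(np.mean(g[10:])), 2))            # ~1.0 for uncorrelated points
```

For a real fluid the same estimator resolves the oscillatory shell structure of g(r), which is the quantity the paper's variational method maps to the triplet function.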

  11. Development of low level alpha particle counting system

    International Nuclear Information System (INIS)

    Minobe, Masao; Kondo, Hiraku; Chinuki, Takashi; Hirano, Hiromichi

    1987-01-01

    Much attention has been paid to the trace analysis of uranium and thorium contained in the base materials of LSI and VLSI devices, since the so-called ''soft errors'' of memory devices are known to be caused by alpha particles emitted by these radioactive elements. We have developed an apparatus to meet the need for estimating such very small quantities of U and Th, at the ppb level, by directly counting alpha particles using a gas-flow type proportional counter. This method requires no sophisticated analytical skill, and the accuracy of the result is satisfactory. The instrumentation and some applications of this apparatus are described. (author)

  12. Fracture of a Brittle-Particle Ductile Matrix Composite with Applications to a Coating System

    Science.gov (United States)

    Bianculli, Steven J.

    In material systems consisting of hard second-phase particles in a ductile matrix, failure initiating from cracking of the second-phase particles is an important failure mechanism. This dissertation applies the principles of fracture mechanics to this problem, first from the standpoint of fracture of the particles, and then from the standpoint of the onset of crack propagation from fractured particles. The research was inspired by observation of the failure mechanism of a commercial zinc-based anti-corrosion coating, and the analysis was initially approached as a coatings problem. As the work progressed it became evident that the failure mechanism was relevant to a broad range of composite material systems, and the research approach was generalized to consider failure of a system consisting of ellipsoidal second-phase particles in a ductile matrix. The starting point for the analysis is the classical Eshelby problem, which considered stress transfer from the matrix to an ellipsoidal inclusion. The particle fracture problem is approached by considering cracks within particles and how they are affected by the particle/matrix interface, by the difference in properties between the particle and the matrix, and by particle shape. These effects are mapped out for a wide range of material combinations. The trends developed show that, although the particle fracture problem is very complex, the potential for fracture can, for certain ranges of particle shape, be assessed easily on the basis of the Eshelby stress alone. Additionally, the evaluation of cracks near the curved particle/matrix interface adds to the existing body of work on cracks approaching bi-material interfaces in layered material systems. The onset of crack propagation from fractured particles is then considered as a function of particle shape and the mismatch in material properties between the particle and the matrix. This behavior is mapped out for a wide range of material combinations. 
The final section of

  13. System of automized determination of charged particle trajectories in extended magnetic fields

    International Nuclear Information System (INIS)

    Toumanian, A.R.

    1981-01-01

    An automated system for the determination of particle trajectories by the floating current-carrying wire method is described. The system is able to determine the trajectories of charged particles with energies above 100 MeV in magnetic systems of any configuration, with track lengths up to several tens of metres, and with a momentum resolution of up to 3×10⁻⁴. The system throughput averages 1500 tracks/hour [ru

  14. System of data bases on particle physics at IHEP

    International Nuclear Information System (INIS)

    Alekhin, S.I.; Bazeeva, V.V.; Ezhela, V.V.

    1987-01-01

    The up-to-date status of the IHEP DOCUMENTS and EXPERIMENTS data bases is described. These data bases are currently the most complete computerized catalogues of experimental particle physics publications. BDMS and PPDL provide extensive capabilities for any user to search for and retrieve the desired information

  15. Detection systems for high energy particle producing gaseous ionization

    International Nuclear Information System (INIS)

    Martinez, L.; Duran, I.

    1985-01-01

    This report contains a review of the most widely used detectors based on the collection of the ionization produced by high-energy particles: proportional counters, multiwire proportional chambers, Geiger-Muller counters, and drift chambers. In six sections, the fundamental principles, field configurations, and useful gas mixtures are discussed, and the most relevant devices are reported over 90 pages. (Author) 98 refs

  16. Detection systems for high energy particle producing gaseous ionization

    International Nuclear Information System (INIS)

    Duran, I.; Martinez, L.

    1985-01-01

    This report contains a review of the most widely used detectors based on the collection of the ionization produced by high-energy particles: proportional counters, multiwire proportional chambers, Geiger-Mueller counters, and drift chambers. In six sections, the fundamental principles, field configurations, and useful gas mixtures are discussed, and the most relevant devices are reported. (author)

  17. Connection of European particle therapy centers and generation of a common particle database system within the European ULICE-framework

    International Nuclear Information System (INIS)

    Kessel, Kerstin A; Pötter, Richard; Dosanjh, Manjit; Debus, Jürgen; Combs, Stephanie E; Bougatf, Nina; Bohn, Christian; Habermehl, Daniel; Oetzel, Dieter; Bendl, Rolf; Engelmann, Uwe; Orecchia, Roberto; Fossati, Piero

    2012-01-01

    The aim was to establish a common database on particle therapy for the evaluation of clinical studies integrating a large variety of voluminous datasets, different documentation styles, and various information systems, especially in the field of radiation oncology. We developed a web-based documentation system for transnational and multicenter clinical studies in particle therapy. 560 patients were treated from November 2009 to September 2011. Protons, carbon ions, a combination of both, or a combination with photons were applied. To date, 12 studies have been initiated and more are in preparation. It is possible to immediately access all patient information and to exchange, store, process, and visualize text data, any DICOM images, and multimedia data. Accessing the system and submitting clinical data is possible for internal and external users. Integrated into the hospital environment, data is imported both manually and automatically. Security and privacy protection as well as data validation and verification are ensured. Studies can be designed to fit individual needs. The described database provides a basis for documentation of large patient groups with specific and specialized questions to be answered. Having recently begun electronic documentation, it has become apparent that the benefits lie in the user-friendly and timely workflow for documentation. The ultimate goal is a simplification of research work, better quality of study analyses and, eventually, the improvement of treatment concepts by evaluating the effectiveness of particle therapy

  18. Connection of European particle therapy centers and generation of a common particle database system within the European ULICE-framework

    Directory of Open Access Journals (Sweden)

    Kessel Kerstin A

    2012-07-01

    Full Text Available Abstract Background To establish a common database on particle therapy for the evaluation of clinical studies integrating a large variety of voluminous datasets, different documentation styles, and various information systems, especially in the field of radiation oncology. Methods We developed a web-based documentation system for transnational and multicenter clinical studies in particle therapy. 560 patients were treated from November 2009 to September 2011. Protons, carbon ions, a combination of both, or a combination with photons were applied. To date, 12 studies have been initiated and more are in preparation. Results It is possible to immediately access all patient information and to exchange, store, process, and visualize text data, any DICOM images, and multimedia data. Accessing the system and submitting clinical data is possible for internal and external users. Integrated into the hospital environment, data is imported both manually and automatically. Security and privacy protection as well as data validation and verification are ensured. Studies can be designed to fit individual needs. Conclusions The described database provides a basis for documentation of large patient groups with specific and specialized questions to be answered. Having recently begun electronic documentation, it has become apparent that the benefits lie in the user-friendly and timely workflow for documentation. The ultimate goal is a simplification of research work, better quality of study analyses and, eventually, the improvement of treatment concepts by evaluating the effectiveness of particle therapy.

  19. Modelling and measurement of wear particle flow in a dual oil filter system for condition monitoring

    DEFF Research Database (Denmark)

    Henneberg, Morten; Eriksen, René Lynge; Fich, Jens

    2016-01-01

    … The quantity of wear particles in gear oil is analysed with respect to system running conditions. It is shown that the model fits the data in terms of the startup “particle burst” phenomenon, quasi-stationary conditions during operation, and clean-up filtration when placed out of operation. In order to establish … boundary condition for the particle burst phenomenon, the release of wear particles from a pleated mesh filter is measured in a test rig and included in the model. The findings show that a dual filter model, with the startup phenomenon included, can describe trends in the wear particle flow observed in the gear … particle generation is made possible by model parameter estimation and identification of an unintended lack of filter change. The model may also be used to optimise system and filtration performance, and to enable continuous condition monitoring. …

  20. Modeling the C. elegans nematode and its environment using a particle system.

    Science.gov (United States)

    Rönkkö, Mauno; Wong, Garry

    2008-07-21

    A particle system, as understood in computer science, is a novel technique for modeling living organisms in their environment. Such particle systems have traditionally been used for modeling the complex dynamics of fluids and gases. In the present study, a particle system was devised to model the movement and feeding behavior of the nematode Caenorhabditis elegans in three different virtual environments: gel, liquid, and soil. The results demonstrate that distinct movements of the nematode can be attributed to its mechanical interactions with the virtual environment. These results also revealed emergent properties associated with modeling organisms within environment-based systems.
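
    The abstract uses "particle system" in the computer-science sense: point particles advanced by simple local force rules. A minimal sketch of such an update loop is given below; the drag constant standing in for the gel/liquid/soil environments, and all numerical values, are illustrative assumptions, not taken from the paper.

    ```python
    import random

    class Particle:
        """A point particle with position and velocity."""
        def __init__(self, x, y):
            self.x, self.y = x, y
            self.vx = self.vy = 0.0

    def step(particles, dt, drag):
        """Advance the particle system one time step: each particle feels a
        random driving force plus environment-dependent viscous drag."""
        for p in particles:
            fx, fy = random.uniform(-1, 1), random.uniform(-1, 1)
            p.vx += (fx - drag * p.vx) * dt
            p.vy += (fy - drag * p.vy) * dt
            p.x += p.vx * dt
            p.y += p.vy * dt

    # A denser environment (higher drag) damps motion more strongly.
    random.seed(0)
    swarm = [Particle(0.0, 0.0) for _ in range(10)]
    for _ in range(100):
        step(swarm, dt=0.1, drag=2.0)  # "liquid"-like drag, illustrative value
    ```

    Swapping the drag (and adding contact forces with obstacle particles) is how such a model can make the same organism move differently in different virtual media.
    
    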

  1. An efficient venturi scrubber system to remove submicron particles in exhaust gas.

    Science.gov (United States)

    Tsai, Chuen-Jinn; Lin, Chia-Hung; Wang, Yu-Min; Hunag, Cheng-Hsiung; Li, Shou-Nan; Wu, Zong-Xue; Wang, Feng-Cai

    2005-03-01

    An efficient venturi scrubber system making use of heterogeneous nucleation and condensational growth of particles was designed and tested to remove fine particles from the exhaust of a local scrubber, where residual SiH4 gas was abated and many fine SiO2 particles were generated. In front of the venturi scrubber, a normal-temperature fine water mist mixes with the high-temperature exhaust gas to cool it to the saturation temperature, allowing submicron particles to grow to micron sizes. The grown particles are then scrubbed efficiently in the venturi scrubber. Test results show that the present venturi scrubber system is effective for removing submicron particles. For SiO2 particles greater than 0.1 μm, the removal efficiency is greater than 80-90%, depending on particle concentration. The corresponding pressure drop is relatively low. For example, the pressure drop of the venturi scrubber is approximately 15.4 ± 2.4 cm H2O when the liquid-to-gas ratio is 1.50 L/m³. A theoretical calculation has been conducted to simulate the particle growth process and the removal efficiency of the venturi scrubber. The theoretical results agree with the experimental data reasonably well when the SiO2 particle diameter is greater than 0.1 μm.

  2. Physical sputtering of metallic systems by charged-particle impact

    International Nuclear Information System (INIS)

    Lam, N.Q.

    1989-12-01

    The present paper provides a brief overview of our current understanding of physical sputtering by charged-particle impact, with emphasis on the sputtering of metals and alloys under bombardment with particles that produce knock-on collisions. Fundamental aspects of ion-solid interactions, and recent developments in the study of sputtering of elemental targets and preferential sputtering in multicomponent materials, are reviewed. We concentrate on a few specific topics of sputter emission, including the various properties of the sputtered flux and its depth of origin, and on connections between sputtering and other radiation-induced and -enhanced phenomena that modify the near-surface composition of the target. The synergistic effects of these diverse processes in changing the composition of the integrated sputtered-atom flux are described in simple physical terms, using selected examples of recent important progress. 325 refs., 27 figs

  3. Quasi-particle states of electron systems out of equilibrium

    Czech Academy of Sciences Publication Activity Database

    Velický, B.; Kalvová, Anděla; Špička, Václav

    2007-01-01

    Roč. 75, č. 19 (2007), 195125/1-195125/9 ISSN 1098-0121 R&D Projects: GA ČR GA202/04/0585 Institutional research plan: CEZ:AV0Z10100520; CEZ:AV0Z10100521 Keywords : non-equilibrium * Green’s functions * quantum transport equations * quasi-particles Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 3.172, year: 2007

  4. Experimental central nervous system injury after charged-particle irradiation

    International Nuclear Information System (INIS)

    Rodriguez, A.; Levy, R.P.; Fabrikant, J.I.

    1991-01-01

    This paper reports on the development of definite paralytic signs, used as an endpoint for the determination of latent periods, which reflect the presence of damage but do not reveal its pathologic characteristics. Paralysis was a nonstochastic effect for which both the probability and severity varied with dose and for which a dose-response threshold existed. Histologically, the primary lesion induced by both charged-particle irradiation and X- or γ-radiation was demyelination and necrosis of the white matter. This has generally been attributed to damage to the oligodendrocytes. The spinal cord tolerance to fractionated helium radiation was similar to that for X- or γ-radiation, but the spinal cord was much more sensitive to heavier charged-particle radiation. There was much less sparing and decreased tissue tolerance in the high-LET spread Bragg peak regions of carbon- and neon-ion beams than in the plateau regions. A radiologic model for effects in the CNS after charged-particle radiation indicated that the α/β values, a measure of tissue repair capacity, increased with LET as predicted. The α/β values for spinal cord injury with neon ranged from 2.52 to 12.0 depending on the LET; for helium, the α/β value was 1.52, similar to values for X rays

  5. Particles from wood smoke and road traffic differently affect the innate immune system of the lung.

    Science.gov (United States)

    Samuelsen, Mari; Cecilie Nygaard, Unni; Løvik, Martinus

    2009-09-01

    The effect of particles from road traffic and wood smoke on the innate immune response in the lung was studied in a lung challenge model with the intracellular bacterium Listeria monocytogenes. Female Balb/cA mice were instilled intratracheally with wood smoke particles, particles from road traffic collected during winter (studded tires used; St+), and during autumn (no studded tires; St-), or diesel exhaust particles (DEP). Simultaneously with, and 1 or 7 days after particle instillation, 10⁵ bacteria were inoculated intratracheally. Bacterial numbers in the lungs and spleen 1 day after Listeria challenge were determined, as an indicator of cellular activation. In separate experiments, bronchoalveolar lavage (BAL) fluid was collected 4 h and 24 h after particle instillation. All particles tested reduced the numbers of bacteria in the lung 24 h after bacterial inoculation. When particles were given simultaneously with Listeria, the reduction was greatest for DEP, followed by St+ and St-, and least for wood smoke particles. Particle effects were no longer apparent after 7 days. Neutrophil numbers in BAL fluid were increased for all particle exposed groups. St+ and St- induced the highest levels of IL-1beta, MIP-2, MCP-1, and TNF-alpha, followed by DEP, which induced no TNF-alpha. In contrast, wood smoke particles only increased lactate dehydrogenase (LDH) activity, indicating a cytotoxic effect of these particles. In conclusion, all particles tested activated the innate immune system as determined with Listeria. However, differences in kinetics of anti-Listeria activity and levels of proinflammatory mediators point to cellular activation by different mechanisms.

  6. Concentration and characterization of airborne particles in Tehran's subway system.

    Science.gov (United States)

    Kamani, Hosein; Hoseini, Mohammad; Seyedsalehi, Mahdi; Mahdavi, Yousef; Jaafari, Jalil; Safari, Gholam Hosein

    2014-06-01

    Particulate matter is an important air pollutant, especially in closed environments like underground subway stations. In this study, a total of 13 elements were determined from PM10 and PM2.5 samples collected at two subway stations (Imam Khomeini and Sadeghiye) in Tehran's subway system. Sampling was conducted from April to August 2011 to measure PM concentrations in the platform and adjacent outdoor air of the stations. In the Imam Khomeini station, the average concentrations of PM10 and PM2.5 were 94.4 ± 26.3 and 52.3 ± 16.5 μg m⁻³ in the platform and 81.8 ± 22.2 and 35 ± 17.6 μg m⁻³ in the outdoor air, respectively. In the Sadeghiye station, mean concentrations of PM10 and PM2.5 were 87.6 ± 23 and 41.3 ± 20.4 μg m⁻³ in the platform and 73.9 ± 17.3 and 30 ± 15 μg m⁻³ in the outdoor air, respectively. The relative contributions of elemental components in each particle fraction accounted for 43% (PM10) and 47.7% (PM2.5) in the platform of the Imam Khomeini station and 15.9% (PM10) and 18.5% (PM2.5) in the outdoor air of this station. At the Sadeghiye station, each fraction accounted for 31.6% (PM10) and 39.8% (PM2.5) in the platform and 11.7% (PM10) and 14.3% (PM2.5) in the outdoor air. At the Imam Khomeini station, Fe was the predominant element, representing 32.4 and 36% of the total mass of PM10 and PM2.5 in the platform and 11.5 and 13.3% in the outdoor air, respectively. At the Sadeghiye station, this element represented 22.7 and 29.8% of the total mass of PM10 and PM2.5 in the platform and 8.7 and 10.5% in the outdoor air, respectively. Other major crustal elements accounted for 5.8% (PM10) and 5.3% (PM2.5) in the Imam Khomeini station platform and 2.3 and 2.4% in the outdoor air, respectively. The proportion of other minor elements was significantly lower, less than 7% in the total samples, and V had the lowest concentration in the total mass of PM10 and PM2.5 at both station platforms.

  7. Fractional exclusion statistics: the method for describing interacting particle systems as ideal gases

    International Nuclear Information System (INIS)

    Anghel, Dragoş-Victor

    2012-01-01

    I show that if the total energy of a system of interacting particles may be written as a sum of quasiparticle energies, then the system of quasiparticles can be viewed, in general, as an ideal gas with fractional exclusion statistics (FES). The general method for calculating the FES parameters is also provided. The interacting particle system cannot be described as an ideal gas of Bose and Fermi quasiparticles except in trivial situations.
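
    For context, Haldane's fractional exclusion statistics defines the statistical interaction through the change in the number of available single-particle states as particles are added. A sketch in the standard notation (stated for orientation; not taken from the record itself):

    ```latex
    \Delta d_i = -\sum_j \alpha_{ij}\, \Delta N_j, \qquad
    W = \prod_i \frac{(d_i + N_i - 1)!}{N_i!\,(d_i - 1)!}
    ```

    where \(\alpha_{ij}\) are the FES parameters (\(\alpha_{ij}=0\) for bosons, \(\alpha_{ij}=\delta_{ij}\) for fermions) and \(W\) counts microstates. The abstract's claim is that the \(\alpha_{ij}\) can be computed whenever the total energy separates into a sum of quasiparticle terms.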

  8. Solar Particle Induced Upsets in the TDRS-1 Attitude Control System RAM During the October 1989 Solar Particle Events

    Science.gov (United States)

    Croley, D. R.; Garrett, H. B.; Murphy, G. B.; Garrard, T. L.

    1995-01-01

    The three large solar particle events, beginning on October 19, 1989 and lasting approximately six days, were characterized by high fluences of solar protons and heavy ions at 1 AU. During these events, an abnormally large number of upsets (243) were observed in the random access memory of the attitude control system (ACS) control processing electronics (CPE) on board the geosynchronous TDRS-1 (Telemetry and Data Relay Satellite). The RAM unit affected was composed of eight Fairchild 93L422 memory chips. The Galileo spacecraft, launched on October 18, 1989 (one day prior to the solar particle events), observed the fluxes of heavy ions experienced by TDRS-1. Two solid-state detector telescopes on board Galileo, designed to measure heavy-ion species and energy, were turned on during time periods within each of the three separate events. The heavy-ion data have been modeled and the time history of the events reconstructed to estimate heavy-ion fluences. These fluences were converted to effective LET spectra after transport through the estimated shielding distribution around the TDRS-1 ACS system. The number of single event upsets (SEUs) expected was calculated by integrating the measured cross section for the Fairchild 93L422 memory chip with the average effective LET spectrum. The expected number of heavy-ion-induced SEUs was 176. GOES-7 proton data, observed during the solar particle events, were used to estimate the number of proton-induced SEUs by integrating the proton fluence spectrum incident on the memory chips with the two-parameter Bendel cross section for proton SEUs. The proton fluence spectrum at the device level was obtained by transporting the protons through the estimated shielding distribution. The number of calculated proton-induced SEUs was 72, yielding a total of 248 predicted SEUs, very close to the 243 observed SEUs. These calculations uniquely demonstrate the roles that solar heavy ions and protons played in the production of SEUs
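
    The rate calculation described, folding a fluence spectrum with an upset cross section, can be sketched numerically. The Weibull shape used below is a common way to represent heavy-ion upset cross sections, but the functional form, all parameter values, and the toy spectrum are illustrative placeholders, not the measured 93L422 data or the Bendel proton model.

    ```python
    import math

    def sigma_weibull(let, let_th=3.0, width=20.0, shape=1.5, sigma_sat=1e-5):
        """Hypothetical Weibull-shaped upset cross section (cm^2/bit) versus
        effective LET (MeV*cm^2/mg); all parameters illustrative, not device data."""
        if let <= let_th:
            return 0.0  # below the upset threshold, no events
        return sigma_sat * (1.0 - math.exp(-((let - let_th) / width) ** shape))

    def expected_upsets(lets, fluences, sigma, bits):
        """Sum fluence(LET) * sigma(LET) over LET bins, times the number of bits."""
        return bits * sum(f * sigma(l) for l, f in zip(lets, fluences))

    # Toy binned fluence spectrum (ions/cm^2 per bin), illustrative only.
    lets = [1.0, 5.0, 10.0, 20.0, 40.0]
    fluences = [1e8, 1e6, 1e5, 1e4, 1e3]
    n = expected_upsets(lets, fluences, sigma_weibull, bits=8 * 1024)
    ```

    The analysis in the record is the same bookkeeping with the measured cross section, the shield-transported event spectra, and a separate proton term.
    
    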

  9. Efficiencies of dynamic Monte Carlo algorithms for off-lattice particle systems with a single impurity

    KAUST Repository

    Novotny, M.A.; Watanabe, Hiroshi; Ito, Nobuyasu

    2010-01-01

    The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given

  10. Efficiencies of dynamic Monte Carlo algorithms for off-lattice particle systems with a single impurity

    KAUST Repository

    Novotny, M.A.

    2010-02-01

    The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented to confirm the theoretical efficiencies. © 2010.

  11. Simulation of distributed parameter system consisting of charged and neutral particles

    International Nuclear Information System (INIS)

    Grover, P.S.; Sinha, K.V.

    1986-01-01

    The time-dependent behavior of positively charged light particles in an assembly of heavy gas atoms has been simulated. The system is formulated in terms of a partial differential equation. The stability and convergence of the numerical algorithm have been examined. Using this formulation, the effects of an external electric field and of temperature on the lifetime and distribution function characteristics of the charged particles have been investigated

  12. Technology for meat-grinding systems to improve removal of hard particles from ground meat.

    Science.gov (United States)

    Zhao, Y; Sebranek, J G

    1997-03-01

    With increased consumption of ground meat, especially ground beef, quality issues for these products have become more important to industry and consumers alike. Ground meats are usually obtained from relatively low-value cuts and trimmings, and may on occasion contain undesirable hard particles. Hard particles in coarse-ground meat products may include bone chips or fragments, cartilage and dense connective tissue; all of which are considered undesirable defects and which can be reduced by utilizing hard-particle removal systems during grinding operations. This review discusses the principles of hard-particle separation from ground meat, the factors which influence performance of particle separation and some commercially available particle removal systems. Product and processing parameters such as initial bone and connective tissue content, fat content, temperature, pre-grinding size and grinder knife design are considered important for removing hard particles effectively. Pressure gradient on the grinder knife/plate interface was found to play a significant role in particle separation from soft (fat and lean) tissue. Various commercial systems, which are classified as central removal and periphery removal systems, are also discussed. Finally, the authors suggest some processing considerations for meat grinding to help achieve the best quality ground meat for consumers' satisfaction.

  13. Study of efficiency of particles removal by different filtration systems in a municipal wastewater tertiary treatment

    International Nuclear Information System (INIS)

    Andreu, P. S.; Lardin Mifsut, C.; Farinas Iglesias, M.; Sanchez-Arevalo Serrano, J.; Perez Sanchez, P.; Rancano Perez, A.

    2009-01-01

    The disinfection of municipal wastewater using ultraviolet radiation depends greatly on the presence of suspended particles in the water. This work determines how the level of particle removal varies with the filtration technique used (open and closed sand filters, sand filters with continuous washing, and cloth, disk, and ring filters). All systems are very effective at removing particles larger than 25 microns and at removing helminth eggs. The membrane bioreactors with ultrafiltration membranes were superior in terms of particle removal when compared to conventional filters. (Author) 11 refs.

  14. Phase transitions in a system of hard Y-shaped particles on the triangular lattice

    Science.gov (United States)

    Mandal, Dipanjan; Nath, Trisha; Rajesh, R.

    2018-03-01

    We study the different phases and the phase transitions in a system of Y-shaped particles, examples of which include immunoglobulin-G and trinaphthylene molecules, on a triangular lattice interacting exclusively through excluded volume interactions. Each particle consists of a central site and three of its six nearest neighbors chosen alternately, such that there are two types of particles which are mirror images of each other. We study the equilibrium properties of the system using grand canonical Monte Carlo simulations that implement an algorithm with cluster moves that is able to equilibrate the system at densities close to full packing. We show that, with increasing density, the system undergoes two entropy-driven phase transitions with two broken-symmetry phases. At low densities, the system is in a disordered phase. As intermediate phases, there is a solidlike sublattice phase in which one type of particle is preferred over the other and the particles preferentially occupy one of four sublattices, thus breaking both particle symmetry as well as translational invariance. At even higher densities, the phase is a columnar phase, where the particle symmetry is restored, and the particles preferentially occupy even or odd rows along one of the three directions. This phase has translational order in only one direction, and breaks rotational invariance. From finite-size scaling, we demonstrate that both the transitions are first order in nature. We also show that the simpler system with only one type of particle undergoes a single discontinuous phase transition from a disordered phase to a solidlike sublattice phase with an increasing density of particles.
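
    The grand canonical sampling idea can be illustrated with a much simpler relative of this model: hard particles with nearest-neighbor exclusion on a periodic square lattice, sampled with single-site insertion/deletion moves. This sketch only shows the Metropolis bookkeeping; the actual study uses two species of Y-shaped particles on a triangular lattice and cluster moves to reach near-full packing.

    ```python
    import math
    import random

    def gc_sweep(occ, L, mu, beta=1.0, moves=2000):
        """Grand canonical Metropolis moves for a hard-core lattice gas with
        nearest-neighbor exclusion on an L x L periodic square lattice."""
        z = math.exp(beta * mu)  # activity
        for _ in range(moves):
            i, j = random.randrange(L), random.randrange(L)
            nbrs = [((i + 1) % L, j), ((i - 1) % L, j),
                    (i, (j + 1) % L), (i, (j - 1) % L)]
            if occ[i][j]:
                # Deletion: accept with probability min(1, 1/z).
                if random.random() < min(1.0, 1.0 / z):
                    occ[i][j] = False
            else:
                # Insertion: forbidden if any neighbor is occupied (hard core).
                if any(occ[a][b] for a, b in nbrs):
                    continue
                if random.random() < min(1.0, z):
                    occ[i][j] = True

    random.seed(1)
    L = 16
    occ = [[False] * L for _ in range(L)]
    gc_sweep(occ, L, mu=2.0)
    ```

    Single-site moves like these equilibrate poorly at high density, which is exactly why the study resorts to cluster moves.
    
    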

  15. Violation of local realism by a system with N spin-(1/2) particles

    International Nuclear Information System (INIS)

    Wu, Xiao-Hua; Zong, Hong-Shi

    2003-01-01

    Recently, it was found that Mermin's inequalities may not always be optimal for the refutation of a local realistic description [Phys. Rev. Lett. 88, 210402 (2002)]. To complete this work, we derive an inequality for the Greenberger-Horne-Zeilinger-type pure state of a system with N spin-(1/2) particles, and violation of the inequality can be shown for all non-product pure states. Mermin's inequality for a system of N spin-(1/2) particles and Gisin's theorem for a system of two spin-(1/2) particles are both included in our inequality

  16. Equilibrium distributions of free charged particles and molecules in systems with non-plane boundaries

    International Nuclear Information System (INIS)

    Usenko, A.S.

    1995-01-01

    The equilibrium space-inhomogeneous distributions of free and pair-bound charged particles are calculated in the dipole approximation for a plasma-molecular cylinder and sphere. It is shown that the spatial and orientational distributions of charged particles and molecules in these systems are similar to those for a plasma-molecular system bounded by one or two parallel planes. The influence of the parameters of the outer medium and of the plasma-molecular system on the spatial and orientational distributions of charged particles and molecules is studied in detail

  17. The Bumper Boats Effect: Effect of Inertia on Self Propelled Active Particles Systems

    Science.gov (United States)

    Dai, Chengyu; Bruss, Isaac; Glotzer, Sharon

    Active matter has been well studied using the standard Brownian dynamics model, which assumes that the self-propelled particles have no inertia. However, many examples of active systems, such as sub-millimeter bacteria and colloids, have non-negligible inertia. Using particle-based Langevin dynamics simulation with HOOMD-blue, we study the role of particle inertia on the collective emergent behavior of self-propelled particles. We find that inertia hinders motility-induced phase separation. This is because the effective speed of the particles is reduced by particle-particle collisions, much like bumper boats, which take time to reach terminal velocity after a crash. We are able to fully account for this effect by tracking a particle's average rather than terminal velocity, allowing us to extend the standard Brownian dynamics model to account for the effects of momentum. This study aims to inform experimental systems where the inertia of the active particles is non-negligible. We acknowledge the funding support from the Center for Bio-Inspired Energy Science (CBES), an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award # DE-SC0000989.
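
    The role of inertia can be seen in a one-particle underdamped Langevin sketch: the velocity relaxes toward the self-propulsion direction on the inertial time scale m/γ instead of instantaneously. The integrator, notation, and parameter values below are generic illustrations, not the HOOMD-blue setup used in the study.

    ```python
    import math
    import random

    def langevin_step(x, v, theta, dt, m=1.0, gamma=1.0,
                      f_active=1.0, kT=0.1, Dr=0.05):
        """One Euler-Maruyama step for a 2D self-propelled particle with inertia:
        m dv/dt = f_active * n(theta) - gamma * v + sqrt(2 gamma kT) xi(t),
        with rotational diffusion (coefficient Dr) on the heading theta."""
        ax = (f_active * math.cos(theta) - gamma * v[0]) / m
        ay = (f_active * math.sin(theta) - gamma * v[1]) / m
        amp = math.sqrt(2.0 * gamma * kT * dt) / m
        v = (v[0] + ax * dt + amp * random.gauss(0.0, 1.0),
             v[1] + ay * dt + amp * random.gauss(0.0, 1.0))
        x = (x[0] + v[0] * dt, x[1] + v[1] * dt)
        theta += math.sqrt(2.0 * Dr * dt) * random.gauss(0.0, 1.0)
        return x, v, theta

    # With noise switched off, the speed relaxes to the terminal value
    # f_active/gamma over a time of order m/gamma.
    x, v, theta = (0.0, 0.0), (0.0, 0.0), 0.0
    for _ in range(2000):
        x, v, theta = langevin_step(x, v, theta, dt=0.01, kT=0.0, Dr=0.0)
    ```

    In the overdamped (Brownian) limit m → 0 the velocity is always at its terminal value; with finite m, a collision that zeroes v costs the particle a relaxation time, which is the "bumper boats" effect described in the abstract.
    
    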

  18. Engineering aspects of particle-beam fusion systems

    International Nuclear Information System (INIS)

    Cook, D.L.

    1982-01-01

    The Department of Energy is supporting research directed toward demonstration of DT fuel ignition in an Inertial Confinement Fusion (ICF) capsule. As part of the ICF effort, two major Particle Beam Fusion Accelerators (PBFA I and II) are being developed at Sandia National Laboratories with the objective of providing energetic light ion beams of sufficient power density for target implosion. Supporting light ion beam research is being performed at the Naval Research Laboratory and at Cornell University. If the answers to several key physics and engineering questions are favorable, pulsed power accelerators will be able to provide an efficient and inexpensive approach to high target gain and eventual power production applications

  19. Experimental investigation on improving the removal effect of WFGD system on fine particles by heterogeneous condensation

    Energy Technology Data Exchange (ETDEWEB)

    Bao, Jingjing; Yang, Linjun; Yan, Jinpei; Xiong, Guilong; Shen, Xianglin [Southeast Univ., Nanjing (China). School of Energy and Environment

    2013-07-01

    Heterogeneous condensation of water vapor as a preconditioning technique for the removal of fine particles from flue gas was investigated experimentally in a wet flue gas desulfurization (WFGD) system. A supersaturated vapor phase, necessary for condensational growth of fine particles, was achieved in the SO{sub 2} absorption zone and at the top of the wet FGD scrubber by adding steam in the gas inlet and above the scrubbing liquid inlet of the scrubber, respectively. The condensationally grown droplets were then removed by the scrubbing liquid and a high-efficiency demister. The results show that the effectiveness of the WFGD system for removal of fine particles is related to the SO{sub 2} absorbent and the type of scrubber employed. Although the rotating-stream-tray scrubber is slightly more effective at removing fine particles at the same liquid-to-gas ratio, similar trends are obtained for the spray scrubber and the rotating-stream-tray scrubber. Due to the formation of aerosol particles in the limestone- and ammonia-based FGD processes, the fine particle removal efficiencies are lower than those for Na{sub 2}CO{sub 3} and water. The performance of the WFGD system for removal of fine particles can be significantly improved in both steam addition cases, for which the removal efficiency increases with increasing amount of added steam. A high liquid-to-gas ratio is beneficial for efficient removal of fine particles by heterogeneous condensation of water vapor.

  20. Removal of fine particles in wet flue gas desulfurization system by heterogeneous condensation

    Energy Technology Data Exchange (ETDEWEB)

    Yang, L.J.; Bao, J.J.; Yan, J.P.; Liu, J.H.; Song, S.J.; Fan, F.X. [Southeast University, Nanjing (China). School of Energy & Environment

    2010-01-01

    A novel process to remove fine particles with high efficiency by heterogeneous condensation in a wet flue gas desulfurization (WFGD) system is presented. A supersaturated vapor phase, necessary for condensational growth of fine particles, was achieved in the SO{sub 2} absorption zone and at the top of the wet FGD scrubber by adding steam in the gas inlet and above the scrubbing liquid inlet of the scrubber, respectively. The condensationally grown droplets were then removed by the scrubbing liquid and a high-efficiency demister. The results show that the effectiveness of the WFGD system for removal of fine particles is related to the SO{sub 2} absorbent employed. When using CaCO{sub 3} and NH{sub 3} {center_dot} H{sub 2}O to remove SO{sub 2} from flue gas, the fine particle removal efficiencies are lower than those for Na{sub 2}CO{sub 3} and water, and the morphology and elemental composition of fine particles are changed. This effect can be attributed to the formation of aerosol particles in the limestone- and ammonia-based FGD processes. The performance of the WFGD system for removal of fine particles can be significantly improved in both steam addition cases, for which the removal efficiency increases with increasing amount of added steam. A high liquid-to-gas ratio is beneficial for efficient removal of fine particles by heterogeneous condensation of water vapor.

  1. An analysis of multiple particle settling for LMR backup shutdown systems

    International Nuclear Information System (INIS)

    Brock, R.W.

    1992-05-01

    Backup shutdown systems proposed for future LMRs may employ discrete absorber particles to provide the negative reactivity insertion. When actuated, these systems release a dense packing of particles from an out-of-core region to settle into an in-core region. The multiple particle settling behavior is analyzed by the method of continuity waves. This method provides predictions of the dynamic response of the system, including the average particle velocity and the volume fraction of particles vs. time. Although hindered settling problems have previously been analyzed using continuity wave theory, this application represents an extension of the theory to conditions of unrestrained settling. Typical cases are analyzed and numerical results are calculated based on a semi-empirical drift-flux model. For 1/4-inch diameter boron-carbide particles in hot liquid sodium, the unrestrained settling problem reaches a steady-state solution when the average volume fraction of particles is 0.295 and the average particle velocity is 26.0 cm/s

  2. Impact of particles on sediment accumulation in a drinking water distribution system.

    Science.gov (United States)

    Vreeburg, J H G; Schippers, D; Verberk, J Q J C; van Dijk, J C

    2008-10-01

    Discolouration of drinking water is one of the main reasons customers complain to their water company. Though corrosion of cast iron is often seen as the main source of this problem, the particles originating from the treatment plant play an important and potentially dominant role in the generation of a discolouration risk in drinking water distribution systems. To investigate this hypothesis a study was performed in a drinking water distribution system. In two similar isolated network areas the effect of particles on discolouration risk was studied with particle counting, the Resuspension Potential Method (RPM) and assessment of the total accumulated sediment. In the 'Control Area', supplied with normal drinking water, the discolouration risk was regenerated within 1.5 years. In the 'Research Area', supplied with particle-free water, this will take 10-15 years. An obvious remedy for controlling the discolouration risk is to improve the treatment with respect to the short peaks that are caused by particle breakthrough.

  3. GPU-based, parallel-line, omni-directional integration of measured acceleration field to obtain the 3D pressure distribution

    Science.gov (United States)

    Wang, Jin; Zhang, Cao; Katz, Joseph

    2016-11-01

    A PIV-based method has been developed to reconstruct the volumetric pressure field by direct integration of the 3D material acceleration. Extending the 2D virtual-boundary omni-directional method (Omni2D, Liu & Katz, 2013), the new 3D parallel-line omni-directional method (Omni3D) integrates the material acceleration along parallel lines aligned in multiple directions, whose angles are set by a spherical virtual grid. The integration is parallelized on a Tesla K40c GPU, reducing the computing time from three hours to one minute for a single realization. To validate its performance, the method is used to calculate the 3D pressure fields in isotropic turbulence and channel flow using the JHU DNS databases (http://turbulence.pha.jhu.edu). Both integration of the DNS acceleration and of acceleration obtained from synthetic 3D particles are tested. Results are compared to other methods, e.g. the solution to the Pressure Poisson Equation (PPE, e.g. Ghaemi et al., 2012) with Bernoulli-based Dirichlet boundary conditions, and the Omni2D method. The error in the Omni3D prediction is uniformly low, and its sensitivity to acceleration errors is local. It agrees with the PPE/Bernoulli prediction away from the Dirichlet boundary. The Omni3D method is also applied to experimental data obtained using tomographic PIV, and the results are correlated with the deformation of a compliant wall. ONR.
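The Pressure Poisson Equation baseline mentioned in this record can be sketched in a few lines. The following is a minimal illustration only, not the Omni3D method or its GPU implementation: a 2D Jacobi solver for ∇²p = f with homogeneous Dirichlet boundaries, checked against a manufactured solution (the grid size, iteration count and test field are arbitrary choices):

```python
import numpy as np

def solve_ppe(rhs, h, n_iter=20000):
    """Jacobi iteration for laplacian(p) = rhs on a uniform grid,
    with homogeneous Dirichlet boundary conditions (p = 0 on all edges)."""
    p = np.zeros_like(rhs)
    for _ in range(n_iter):
        # the right-hand side is evaluated entirely from the previous iterate
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                p[1:-1, 2:] + p[1:-1, :-2] -
                                h * h * rhs[1:-1, 1:-1])
    return p

# manufactured solution: p = sin(pi x) sin(pi y), so laplacian(p) = -2 pi^2 p
n = 33
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
p_true = np.sin(np.pi * X) * np.sin(np.pi * Y)
p = solve_ppe(-2.0 * np.pi**2 * p_true, h)
err = float(np.max(np.abs(p - p_true)))   # dominated by the O(h^2) discretization error
```

Jacobi is chosen here only for brevity; it is also trivially parallelizable, which is why Poisson-type pressure solvers map well onto GPUs.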

  4. Design and development on automated control system of coated fuel particle fabrication process

    International Nuclear Information System (INIS)

    Liu Malin; Shao Youlin; Liu Bing

    2013-01-01

    With the trend toward large-scale production of HTR coated fuel particles, the original manual control system can no longer meet the requirements, and an industrial-grade automated control system for coated fuel particle fabrication needs to be developed. A comprehensive analysis of the successive 4-layer coating process of TRISO-type coated fuel particles was carried out. It was found that the coating process could be divided into five subsystems and nine operating states. The establishment of a DCS-type (distributed control system) automated control system was proposed. According to the rigorous requirements of the preparation process for coated particles, design considerations for the DCS were proposed, including the principles of coordinated control, safety and reliability, integration specification, practicality and ease of use, and openness and ease of updating. A complete automated control system for the coated fuel particle preparation process was built to fulfill these principles in manufacturing practice. The automated control system was put into operation in the production of irradiated samples for the HTRPM demonstration project. The experimental results prove that the system achieves better control of the coated fuel particle preparation process and meets the requirements of factory-scale production. (authors)

  5. Construction of the radiation oncology teaching files system for charged particle radiotherapy.

    Science.gov (United States)

    Masami, Mukai; Yutaka, Ando; Yasuo, Okuda; Naoto, Takahashi; Yoshihisa, Yoda; Hiroshi, Tsuji; Tadashi, Kamada

    2013-01-01

    Our hospital has performed charged particle therapy since 1996. New institutions for charged particle therapy are being planned around the world. Our hospital accepts many visitors from these newly planned medical institutions and has many opportunities to provide training to them. Based upon our experience, we have developed a radiation oncology teaching files system for charged particle therapy. We adopted Microsoft PowerPoint as the basic framework of our teaching files system. By using the export function of the viewer, any physician can create teaching files easily and effectively. Our teaching files system now has 33 cases covering clinical and physics contents. We expect that our teaching files system will substantially improve the safety and accuracy of charged particle therapy.

  6. Complete system of three-particle hyperspherical harmonics in collective variables

    International Nuclear Information System (INIS)

    Mukhtarova, M.I.; Ehfros, V.D.

    1983-01-01

    A complete system of three-particle hyperspherical harmonics (HH) is built in a simple closed form for arbitrary values of L, making use of collective variables including the Euler angles of the system. A method of expanding the HH product into an HH series is presented. A number of formulas are derived for differentiating Jacobi polynomials. The obtained results are, in particular, useful for the phenomenological analysis of three-particle reactions and for dynamical problems concerning three interacting atoms

  7. Fokker-action principle for a system of particles interacting through a linear potential

    International Nuclear Information System (INIS)

    Rivacoba, A.

    1984-01-01

    A Fokker-action principle for a system of scalar particles interacting through a time-symmetric relativistic generalization of the linear potential is obtained. From this action, the equations of motion and the conservation laws for the total energy and angular momentum of the system, in which field contributions are included, are derived. These equations are applied exactly to the problem, suggested by Schild, of two particles moving in concentric circular orbits

  8. History and Technology Developments of Radio Frequency (RF) Systems for Particle Accelerators

    Science.gov (United States)

    Nassiri, A.; Chase, B.; Craievich, P.; Fabris, A.; Frischholz, H.; Jacob, J.; Jensen, E.; Jensen, M.; Kustom, R.; Pasquinelli, R.

    2016-04-01

    This article attempts to give a historical account and review of technological developments and innovations in radio frequency (RF) systems for particle accelerators. The evolution from electrostatic field to the use of RF voltage suggested by R. Wideröe made it possible to overcome the shortcomings of electrostatic accelerators, which limited the maximum achievable electric field due to voltage breakdown. After an introduction, we will provide reviews of technological developments of RF systems for particle accelerators.

  9. Development of a utility system for charged particle nuclear reaction data by using intelligentPad

    International Nuclear Information System (INIS)

    Aoyama, Shigeyoshi; Ohbayashi, Yoshihide; Masui, Hiroshi; Kato, Kiyoshi; Chiba, Masaki

    2000-01-01

    We have developed a utility system, WinNRDF2, for the charged-particle nuclear reaction data of NRDF (Nuclear Reaction Data File) on the IntelligentPad architecture. Using the system, we can search the experimental data of charged-particle reactions in NRDF. Furthermore, we can also view the experimental data using graphic pads, which were made through the CONTIP project. (author)

  10. The Multi-Element Electrostatic Lens Systems for Controlling and Focusing Charged Particles

    International Nuclear Information System (INIS)

    Sise, O.

    2004-01-01

    Particle optics is a very close analog of photon optics, and most of the principles of a charged particle beam can be understood by thinking of the particles as rays of light. Particle and photon optics behave similarly in controlling beams of light and charged particles, for instance with lenses and mirrors. Extensive information is available on the properties of charged particle optics, from which appropriate systems can be designed for any specific problem. In this way electrostatic lens systems are used to control beams of charged particles of various energies and directions in several fields, for example electron microscopy, cathode ray tubes, ion accelerators and electron impact studies. In an electrostatic lens system quantitative information is required over a wide energy range, and a zoom type of optics is needed. If the magnification is to remain constant over a wide range of energies, quite complicated electrostatic lens systems are required, containing three, four, five, or even more lens elements. We first calculated the optical properties of three- and four-element cylinder electrostatic lenses with the help of the SIMION and LENSYS programs and developed a method for calculating the focal properties of five- and more-element lenses with an afocal mode. In this method we used combinations of three- and four-element lenses to derive the focal properties of multi-element lenses, and we present these data over a wide range of energies
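The combination idea described above, deriving the focal properties of multi-element columns from simpler sub-lenses, can be illustrated with standard ray-transfer (ABCD) matrices. This thin-lens sketch is far cruder than the thick cylinder lenses computed with SIMION/LENSYS, and the focal lengths and spacing are hypothetical:

```python
import numpy as np

def thin_lens(f):
    """Ray-transfer (ABCD) matrix of a thin lens with focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def drift(d):
    """Ray-transfer matrix of a field-free drift of length d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def effective_focal_length(*elements):
    """Compose elements in the order the beam meets them and read the
    effective focal length from the C entry of the system matrix."""
    m = np.eye(2)
    for e in elements:
        m = e @ m          # later elements multiply from the left
    return -1.0 / m[1, 0]

# hypothetical two-lens column: f1 = f2 = 100 mm, separated by 50 mm
f_eff = effective_focal_length(thin_lens(100.0), drift(50.0), thin_lens(100.0))
```

For this two-lens example the matrix result reproduces the thin-lens combination formula 1/f = 1/f1 + 1/f2 - d/(f1*f2), i.e. f = 200/3 mm; longer chains of lenses and drifts compose the same way.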

  11. Modeling of changes in particle size distribution of solids in multistage separation systems

    Directory of Open Access Journals (Sweden)

    Lagereva E.A.

    2016-09-01

    Full Text Available The presented method calculates the separation of solid particles from gas streams in multistage separation systems consisting of a number of sequentially installed separation devices of various designs and operating principles. It is based on a separate analysis of the sequential processes of capture and transmission of the individual fractions of polydisperse solid particles. The technique provides information about changes in the particle size distribution of the solids as the gas flow passes through the treatment system and allows one to purposefully select an effective combination of different types of separators.
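The fraction-by-fraction bookkeeping described in this record, capture and transmission of each size fraction at each sequential stage, can be sketched as follows; the grade efficiencies and inlet loading are hypothetical, not values from the paper:

```python
import numpy as np

def multistage_psd(inlet_mass, grade_eff):
    """Track each size fraction through sequential separators.
    inlet_mass: mass per size fraction entering stage 1 (length n_sizes)
    grade_eff:  capture efficiency per stage and size (n_stages x n_sizes)
    Returns the mass per fraction leaving each stage (n_stages x n_sizes)."""
    out = []
    m = np.asarray(inlet_mass, dtype=float)
    for eta in np.asarray(grade_eff, dtype=float):
        m = m * (1.0 - eta)      # each stage transmits only the uncaptured part
        out.append(m.copy())
    return np.array(out)

# hypothetical example: 3 size fractions through 2 stages
inlet = np.array([10.0, 5.0, 2.0])               # mass loading per fraction
eff = np.array([[0.5, 0.8, 0.95],                # stage 1 (coarse separator)
                [0.2, 0.6, 0.9]])                # stage 2 (finer separator)
stages = multistage_psd(inlet, eff)
outlet = stages[-1]
psd = outlet / outlet.sum()                      # outlet size distribution
```

Because the stages act multiplicatively on each fraction, the overall penetration of a fraction is the product of (1 - efficiency) over the stages, which is exactly what the loop accumulates.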

  12. Systems and methods of varying charged particle beam spot size

    Science.gov (United States)

    Chen, Yu-Jiuan

    2014-09-02

    Methods and devices enable shaping of a charged particle beam. A modified dielectric wall accelerator includes a high gradient lens section and a main section. The high gradient lens section can be dynamically adjusted to establish the desired electric fields to minimize undesirable transverse defocusing fields at the entrance to the dielectric wall accelerator. Once a baseline setting with a desirable output beam characteristic is established, the output beam can be dynamically modified to vary the output beam characteristics. The output beam can be modified by slightly adjusting the electric fields established across different sections of the modified dielectric wall accelerator. Additional control over the shape of the output beam can be exerted by introducing intentional timing de-synchronization offsets and producing an injected beam that is not fully matched to the entrance of the modified dielectric wall accelerator.

  13. Singularity and stability in a periodic system of particle accelerators

    Science.gov (United States)

    Cai, Yunhai

    2018-05-01

    We study the single-particle dynamics in a general and parametrized alternating-gradient cell with zero chromaticity using the Lie algebra method. To our surprise, the first-order perturbation of the sextupoles largely determines the dynamics away from the major resonances. The dynamic aperture can be estimated from the topology and geometry of the phase space. In the linearly normalized phase space, it scales according to Ā ∝ ϕ√L, where ϕ is the bending angle and L the length of the cell. For the 2 degrees of freedom with equal betatron tunes, the analytical perturbation theory leads us to the invariant or quasi-invariant tori, which play an important role in determining the stable volume in the four-dimensional phase space.

  14. Radiation formation of colloidal metallic particles in aqueous systems

    International Nuclear Information System (INIS)

    Cuba, Vaclav; Nemec, Mojmir; Gbur, Tomas; John, Jan; Pospisil, Milan; Mucka, Viliam

    2008-01-01

    Full text: Radiation and photochemical methods have been successfully utilized in various steps of nanoparticle preparation. The presented study deals with the formation of silver nanoparticles in various aqueous solutions initiated by UV and gamma radiation. Silver nitrate and silver cyanide were used as precursors for the radiation and/or photochemical reduction of Ag{sup +} ions to the metallic form. The influence of various parameters (dose of radiation, dose rate, exposure time) on the nucleation and formation of colloid particles was studied. Attention was also focused on the composition of the irradiated solution. Aliphatic alcohols were used as scavengers of OH radicals and other oxidizing species. Various organic stabilizers of the formed nanoparticles were used, among others ethylenediaminetetraacetic acid, citric acid and polyvinyl alcohol. Irradiation effects were evaluated using UV/Vis absorption spectra in the colloid solution; the solid phase formed after long-term irradiation was analysed via X-ray structural analysis

  15. Shape memory system with integrated actuation using embedded particles

    Science.gov (United States)

    Buckley, Patrick R [New York, NY; Maitland, Duncan J [Pleasant Hill, CA

    2009-09-22

    A shape memory material with integrated actuation using embedded particles. One embodiment provides a shape memory material apparatus comprising a shape memory material body and magnetic pieces in the shape memory material body. Another embodiment provides a method of actuating a device to perform an activity on a subject comprising the steps of positioning a shape memory material body in a desired position with regard to the subject, the shape memory material body capable of being formed in a specific primary shape, reformed into a secondary stable shape, and controllably actuated to recover the specific primary shape; including pieces in the shape memory material body; and actuating the shape memory material body using the pieces causing the shape memory material body to be controllably actuated to recover the specific primary shape and perform the activity on the subject.

  16. Parameter estimation for chaotic systems with a Drift Particle Swarm Optimization method

    International Nuclear Information System (INIS)

    Sun Jun; Zhao Ji; Wu Xiaojun; Fang Wei; Cai Yujie; Xu Wenbo

    2010-01-01

    Inspired by the motion of electrons in metal conductors in an electric field, we propose a variant of Particle Swarm Optimization (PSO), called the Drift Particle Swarm Optimization (DPSO) algorithm, and apply it to estimating the unknown parameters of chaotic dynamic systems. The principle and procedure of DPSO are presented, and the algorithm is used to identify the Lorenz system and the Chen system. The experimental results show that for the given parameter configurations, DPSO can identify the parameters of the systems accurately and effectively, and it may be a promising tool for chaotic system identification as well as for other numerical optimization problems in physics.
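A minimal sketch of swarm-based parameter identification in this spirit: a plain PSO (not the proposed drift variant), with the logistic map standing in for the Lorenz and Chen systems, and all tuning constants illustrative:

```python
import numpy as np

def logistic_series(r, x0=0.2, n=8):
    """Iterate the logistic map x -> r*x*(1-x) and return the trajectory."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return np.array(xs)

def pso_estimate(observed, lo, hi, n_particles=30, n_iter=200, seed=0):
    """Plain PSO over one scalar parameter, minimising the squared mismatch
    between the simulated and the observed trajectories."""
    rng = np.random.default_rng(seed)
    cost = lambda r: float(np.sum((logistic_series(r) - observed) ** 2))
    pos = rng.uniform(lo, hi, n_particles)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_c = np.array([cost(r) for r in pos])
    gbest = pbest[np.argmin(pbest_c)]
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # inertia + cognitive + social terms, standard PSO update
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        c = np.array([cost(r) for r in pos])
        better = c < pbest_c
        pbest[better], pbest_c[better] = pos[better], c[better]
        gbest = pbest[np.argmin(pbest_c)]
    return gbest

observed = logistic_series(3.7)      # synthetic "measurements", true r = 3.7
r_hat = pso_estimate(observed, 3.5, 4.0)
```

A short trajectory is used on purpose: for a chaotic map, long trajectories make the cost landscape extremely rugged in the parameter, which is exactly the difficulty DPSO-style variants aim to address.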

  17. Solar particle induced upsets in the TDRS-1 attitude control system RAM during the October 1989 solar particle events

    International Nuclear Information System (INIS)

    Croley, D.R.; Garrett, H.B.; Murphy, G.B.; Garrard, T.L.

    1995-01-01

    The three large solar particle events, beginning on October 19, 1989 and lasting approximately six days, were characterized by high fluences of solar protons and heavy ions at 1 AU. During these events, an abnormally large number of upsets (243) was observed in the random access memory of the attitude control system (ACS) control processing electronics (CPE) on board the geosynchronous TDRS-1 (Telemetry and Data Relay Satellite). The RAM unit affected was composed of eight Fairchild 93L422 memory chips. The Galileo spacecraft, launched on October 18, 1989 (one day prior to the solar particle events), observed the fluxes of heavy ions experienced by TDRS-1. Two solid-state detector telescopes on board Galileo, designed to measure heavy ion species and energy, were turned on during time periods within each of the three separate events. The heavy ion data have been modeled and the time history of the events reconstructed to estimate heavy ion fluences. These fluences were converted to effective LET spectra after transport through the estimated shielding distribution around the TDRS-1 ACS system. The number of single event upsets (SEUs) expected was calculated by integrating the measured cross section for the Fairchild 93L422 memory chip with the average effective LET spectrum; the expected number of heavy-ion-induced SEUs was 176. GOES-7 proton data observed during the solar particle events were used to estimate the number of proton-induced SEUs by integrating the proton fluence spectrum incident on the memory chips with the two-parameter Bendel cross section for proton SEUs. The proton fluence spectrum at the device level was obtained by transporting the protons through the estimated shielding distribution. The number of calculated proton-induced SEUs was 72, yielding a total of 248 predicted SEUs, very close to the 243 observed.
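The heavy-ion part of the calculation described above, integrating an upset cross section against an LET fluence spectrum, can be sketched generically. The Weibull form is a common fit for heavy-ion SEU cross sections, but every parameter below (threshold, width, shape, saturation cross section, fluence spectrum, bit count) is hypothetical rather than the TDRS-1/Galileo values:

```python
import numpy as np

def weibull_xs(L, L0, W, s, sigma_sat):
    """Weibull form widely used to fit heavy-ion SEU cross sections (cm^2/bit);
    zero below the threshold LET L0."""
    L = np.asarray(L, dtype=float)
    return sigma_sat * (1.0 - np.exp(-np.clip((L - L0) / W, 0.0, None) ** s))

def expected_upsets(L, diff_fluence, xs, n_bits):
    """N = n_bits * integral of sigma(L) * dPhi/dL over L (trapezoidal rule)."""
    y = xs * diff_fluence
    return n_bits * float(0.5 * np.sum((y[1:] + y[:-1]) * np.diff(L)))

# hypothetical event: power-law differential LET fluence spectrum
L = np.linspace(1.0, 40.0, 400)                 # LET grid, MeV.cm^2/mg
diff_fluence = 1.0e7 * L ** -3                  # ions/cm^2 per unit LET
xs = weibull_xs(L, L0=2.0, W=10.0, s=1.5, sigma_sat=1.0e-6)
n_seu = expected_upsets(L, diff_fluence, xs, n_bits=8 * 1024)
```

The expected count scales linearly with the fluence, so doubling the event fluence doubles the predicted upsets, a useful sanity check on any implementation.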

  18. Features and states of microscopic particles in nonlinear quantum-mechanics systems

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In this paper, we present the elementary principles of nonlinear quantum mechanics (NLQM), which is based on some problems in quantum mechanics. We investigate in detail the laws of motion and some main properties of microscopic particles in nonlinear quantum systems using these elementary principles. Concretely speaking, we study the wave-particle duality of the solution of the nonlinear Schrödinger equation, the stability of microscopic particles described by NLQM, invariances and conservation laws of the motion of particles, the Hamiltonian principle of particle motion and the corresponding Lagrangian and Hamilton equations, the classical rule of microscopic particle motion, the mechanism and rules of particle collision, the features of reflection and transmission of particles at interfaces, the uncertainty relation of particle motion, and the eigenvalues and eigenequations of particles, and so on. We obtained the invariance and conservation laws of mass, energy, momentum and angular momentum for the microscopic particles, which are also some elementary and universal laws of matter in NLQM, and we further give the methods and ways of solving the above questions. We also find that the laws of motion of microscopic particles in such a case are completely different from those in linear quantum mechanics (LQM). They have many new properties; for example, the particles possess a real wave-corpuscle duality, obey the classical rule of motion and the conservation laws of energy, momentum and mass, satisfy a minimum uncertainty relation, can be localized due to the nonlinear interaction, and their position and momentum can also be determined, etc. From these studies, we see clearly that the rules and features of microscopic particle motion in NLQM differ from those in LQM. Therefore, NLQM is a new physical theory, a necessary result of the development of quantum mechanics, and a correct representation for describing microscopic particles in nonlinear systems, which can

  19. On creating macroscopically identical granular systems with different numbers of particles

    Science.gov (United States)

    van der Meer, Devaraj; Rivas, Nicolas

    2015-11-01

    One of the fundamental differences between granular and molecular hydrodynamics is the enormous difference in the total number of constituents. The small number of particles implies that the role of fluctuations in granular dynamics is of paramount importance. To obtain more insight into these fluctuations, we investigate to what extent it is possible to create identical granular hydrodynamic states with different numbers of particles. A definition is given of macroscopically equivalent systems, and the dependency of the conservation equations on the particle size is studied. We show that, in certain cases, and by appropriately scaling the microscopic variables, we are able to compare systems with significantly different numbers of particles that present the same macroscopic phenomenology. We apply these scalings in simulations of a vertically vibrated system, namely the density-inverted granular Leidenfrost state and its transition to a buoyancy-driven convective state.

  20. Research on the method of information system risk state estimation based on clustering particle filter

    Science.gov (United States)

    Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua

    2017-05-01

    With the purpose of reinforcing the correlation analysis of risk assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, which takes threat analysis as its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining them with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced into it. By clustering all particles, the centroid is taken as the representative in subsequent operations, reducing the computational load. Empirical experience indicates that the method reasonably captures the relations of mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy.
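The clustering step can be sketched with a small weighted k-means (no external libraries): the particle set is replaced by k weighted centroids, and because each centroid is the weighted mean of its members, the weighted mean over centroids reproduces the full-set state estimate exactly. The particle cloud and k below are illustrative, not the authors' implementation:

```python
import numpy as np

def cluster_particles(particles, weights, k=5, n_iter=20, seed=0):
    """Weighted k-means over a particle set. Returns k centroids and the
    summed weight of each cluster; the centroids act as representative
    particles in subsequent filter operations."""
    rng = np.random.default_rng(seed)
    centroids = particles[rng.choice(len(particles), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(particles[:, None, :] - centroids[None, :, :], axis=2)
        label = np.argmin(d, axis=1)
        for j in range(k):
            m = label == j
            if m.any():  # weighted mean of the members keeps the estimate exact
                centroids[j] = np.average(particles[m], axis=0, weights=weights[m])
    cluster_w = np.array([weights[label == j].sum() for j in range(k)])
    return centroids, cluster_w

# illustrative particle cloud with normalized importance weights
rng = np.random.default_rng(1)
particles = rng.normal(size=(500, 2))
weights = rng.random(500)
weights /= weights.sum()
centroids, cw = cluster_particles(particles, weights)
estimate_full = np.average(particles, axis=0, weights=weights)
estimate_clustered = np.average(centroids, axis=0, weights=cw)
```

Downstream filter steps then operate on k representatives instead of the full particle set, which is the computational saving the record describes.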

  1. Research on the method of information system risk state estimation based on clustering particle filter

    Directory of Open Access Journals (Sweden)

    Cui Jia

    2017-05-01

    Full Text Available With the purpose of reinforcing the correlation analysis of risk assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, which takes threat analysis as its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining them with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced into it. By clustering all particles, the centroid is taken as the representative in subsequent operations, reducing the computational load. Empirical experience indicates that the method reasonably captures the relations of mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy.

  2. Elemental and organic carbon in flue gas particles of various wood combustion systems

    Energy Technology Data Exchange (ETDEWEB)

    Gaegauf, C.; Schmid, M.; Guentert, P.

    2005-12-15

    Airborne particulate matter (PM) in the environment is of ever-increasing concern to authorities and the public. The major fraction of particles from wood combustion processes is smaller than 1 micron in size, typically in the range of 30 to 300 nm. Of specific interest is the content of elemental carbon (EC) and organic carbon (OC) in the particles, since these substances are known for their particular carcinogenic potential. Various wood combustion systems were analysed (wood chip boiler, pellet boiler, wood log boiler, wood stove and open fire). The particles were sampled by means of a multi-stage particle-sizing cascade impactor, which classifies the collected particles according to their size; its 7 stages classify particles between 0.4 and 9 microns aerodynamic diameter. The analytical method for determining the EC and OC content of the particles is based on coulometry. The coulometer measures the conductivity of CO{sub 2} released by oxidation of EC in the samples at 650 {sup o}C. The OC content is determined by pyrolysis of the particle samples in a helium atmosphere.

  3. Lung clearance of inhaled particles after exposure to carbon black generated from a resuspension system

    International Nuclear Information System (INIS)

    Lee, P.S.; Gorski, R.A.; Hering, W.E.; Chan, T.L.

    1987-01-01

    A system to resuspend carbon black particles to provide submicron aerosols for inhalation exposure studies has been developed. The effect of continuous exposure to carbonaceous material (as a surrogate for the carbonaceous particles in diesel exhaust) on the pulmonary clearance of inhaled diesel tracer particles was studied in male Fischer 344 rats. Submicron carbon black particles with a mass median aerodynamic diameter (MMAD) of 0.22 micron and a size distribution similar to that of exhaust particles from a GM 5.7-liter diesel engine were successfully generated and administered to test animals at a nominal concentration of 6 mg/m3 for 20 hr/day, 7 days/week, for periods lasting 1 to 11 weeks. Immediately after the carbon black exposure, test animals were administered {sup 14}C-tagged diesel particles for 45 min in a nose-only chamber. The pulmonary retention of the inhaled radioactive tracer particles was determined at preselected time intervals. Based upon the data collected up to 1 year postexposure, prolonged exposure to carbon black particles exhibits an inhibitory effect on pulmonary clearance similar to that of prolonged exposure to diesel exhaust at a comparable particulate dose. This observation indicates that the excessive accumulation of carbonaceous material may be the predominant factor affecting lung clearance

  4. Computational Fluid and Particle Dynamics in the Human Respiratory System

    CERN Document Server

    Tu, Jiyuan; Ahmadi, Goodarz

    2013-01-01

    Traditional research methodologies in the human respiratory system have always been challenging due to their invasive nature. Recent advances in medical imaging and computational fluid dynamics (CFD) have accelerated this research. This book compiles and details recent advances in the modelling of the respiratory system for researchers, engineers, scientists, and health practitioners. It breaks down the complexities of this field and provides both students and scientists with an introduction and starting point to the physiology of the respiratory system, fluid dynamics and advanced CFD modeling tools. In addition to a brief introduction to the physics of the respiratory system and an overview of computational methods, the book contains best-practice guidelines for establishing high-quality computational models and simulations. Inspiration for new simulations can be gained through innovative case studies as well as hands-on practice using pre-made computational code. Last but not least, students and researcher...

  5. Coherent scattering of electromagnetic radiation by a polarized particle system

    International Nuclear Information System (INIS)

    Agre, M.Ya.; Rapoport, L.P.

    1996-01-01

    The paper deals with the development of the theory of coherent scattering of electromagnetic waves by a polarized atomic or molecular system. Peculiarities of the angular distribution and polarization of the scattered radiation are discussed

  6. Dynamics and Thermodynamics of Many Particle Cold Atom Systems

    Science.gov (United States)

    2016-05-05

    ... simulate their dynamics far from equilibrium. It is likely that these ideas will find many applications in many areas of physics and quantum chemistry. The focus of this proposal was theoretical research on various non-equilibrium phenomena in isolated quantum systems and applications to experimental setups, largely cold atoms.

  7. Equilibrium magnetization and microstructure of the system of superparamagnetic interacting particles: numerical simulation

    CERN Document Server

    Pshenichnikov, A F

    2000-01-01

    The Monte Carlo method is used to study the equilibrium magnetization of a 3D system of superparamagnetic particles taking into account the steric and dipole-dipole interparticle interactions. Two types of systems are considered: magnetic fluids and solidified ferrocolloids containing randomly spatially distributed particles with negligible energy of magnetic anisotropy. The results of numerical simulations confirm the universality of Langevin susceptibility as a main dimensionless parameter determining the influence of interparticle interactions on the magnetization of the system for moderate values of the aggregation parameter. The obtained results are in good agreement with theoretical and experimental data. At large values of the aggregation parameter, the clustering of particles in magnetic fluids is observed resulting in a reduction of their magnetization as compared to solidified systems. It is shown that the magnetization of solidified systems can be well described by the modified effective field appr...

  8. Equilibrium magnetization and microstructure of the system of superparamagnetic interacting particles: numerical simulation

    International Nuclear Information System (INIS)

    Pshenichnikov, A.F.; Mekhonoshin, V.V.

    2000-01-01

    The Monte Carlo method is used to study the equilibrium magnetization of a 3D system of superparamagnetic particles taking into account the steric and dipole-dipole interparticle interactions. Two types of systems are considered: magnetic fluids and solidified ferrocolloids containing randomly spatially distributed particles with negligible energy of magnetic anisotropy. The results of numerical simulations confirm the universality of Langevin susceptibility as a main dimensionless parameter determining the influence of interparticle interactions on the magnetization of the system for moderate values of the aggregation parameter. The obtained results are in good agreement with theoretical and experimental data. At large values of the aggregation parameter, the clustering of particles in magnetic fluids is observed resulting in a reduction of their magnetization as compared to solidified systems. It is shown that the magnetization of solidified systems can be well described by the modified effective field approximation within the whole investigated range of parameters
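The non-interacting (Langevin) limit that anchors these Monte Carlo results can be reproduced with a few lines of Metropolis sampling of a single dipole in a field; this sketch deliberately ignores the steric and dipole-dipole interactions that the paper actually studies:

```python
import numpy as np

def langevin(xi):
    """Langevin function L(xi) = coth(xi) - 1/xi: the magnetization of
    non-interacting dipoles in units of the saturation magnetization."""
    return 1.0 / np.tanh(xi) - 1.0 / xi

def metropolis_magnetization(xi, n_steps=200_000, seed=0):
    """Metropolis sampling of one superparamagnetic moment in a field.
    The energy per kT is -xi*cos(theta); cos(theta) is uniformly
    distributed on the sphere, so a uniform proposal in [-1, 1]
    needs no Jacobian correction. Returns the sampled mean cos(theta)."""
    rng = np.random.default_rng(seed)
    c = rng.uniform(-1.0, 1.0)
    total, count = 0.0, 0
    burn_in = n_steps // 10
    for step in range(n_steps):
        c_new = rng.uniform(-1.0, 1.0)
        # accept with min(1, exp(-delta E / kT))
        if rng.random() < np.exp(min(0.0, xi * (c_new - c))):
            c = c_new
        if step >= burn_in:          # discard burn-in samples
            total += c
            count += 1
    return total / count

m_mc = metropolis_magnetization(2.0)
```

For xi = 2 the sampled mean of cos(theta) should approach L(2) ≈ 0.537; departures from this Langevin baseline are what the dipole-dipole interaction terms in the paper quantify.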

  9. Prenatal exposure to diesel exhaust particles and effect on the male reproductive system in mice

    DEFF Research Database (Denmark)

    Hemmingsen, Jette Gjerke; Hougaard, Karin Sørig; Talsness, Chris

    2009-01-01

    In utero exposure to diesel exhaust particles may reduce sperm production in adulthood. We investigated the effect of prenatal exposure to diesel exhaust particles on the male reproductive system and assessed endocrine disruption and regulation of aquaporin expression as possible mechanisms of action. Dams inhaled 20 mg/m(3) of diesel exhaust particle standard reference material 2975 (SRM2975) or clean air for 1 h/day on days 7-19 of pregnancy. Male offspring were killed on day 170 after birth. The dams that had inhaled SRM2975 delivered offspring, which in adulthood had reduced daily sperm...

  10. Compact and portable system for evaluation of individual exposure at aerosol particle in urban area

    International Nuclear Information System (INIS)

    De Zaiacomo, T.

    1995-01-01

    A compact and portable system for real-time acquisition of aerosol concentration data in urban and extra-urban areas is presented. It is based on two optical aerosol monitors integrated with aerosol particle separating and collecting devices, assembled into a carrying case together with temperature and relative humidity sensors and a programmable analog data logger; data output is addressed to a dedicated printer or personal computer. Further data on particle size, morphological aspect and particle mass concentration are obtainable by weighing the supports used to concurrently collect aerosol particles and/or by means of microanalytical techniques. System performance is evaluated in terms of portability, possible use as a stationary sampler for long-term monitoring purposes, and coherence between optical response and ponderal mass. Finally, some tests are carried out to investigate the effect of relative humidity on the optical response of this type of instrument

  11. Advanced development of particle beam probe diagnostic systems

    International Nuclear Information System (INIS)

    Hickok, R.L.; Crowley, T.P.; Connor, K.A.

    1990-11-01

    This progress report covers the period starting with the approval to go ahead with the 2 MeV heavy ion beam probe (HIBP) for TEXT Upgrade to the submission of the grant renewal proposal. During this period the co-principal investigators, R. L. Hickok and T. P. Crowley, have each devoted 45% of their time to this Grant. Their effort has been almost exclusively devoted to the design and fabrication of the 2 MeV HIBP system. The 1989 report that described the advantages of a 2 MeV HIBP for TEXT Upgrade compared to the existing 0.5 MeV HIBP and outlined the design of the 2 MeV system is attached as Appendix A. Since the major effort under the renewal proposal will be the continued fabrication, installation and operation of the 2 MeV system on TEXT Upgrade, we describe some of the unique results that have been obtained with the 0.5 MeV system on TEXT. For completeness, we also include the preliminary operation of the 160 keV HIBP on ATF. We present the current fabrication status of the 2 MeV system with the exception of the electrostatic energy analyzer. The energy analyzer, which is designed to operate with 400 kV on the top plate, is a major development effort and is treated separately. Included in this section are the results obtained with a prototype no-guard-ring analyzer, the conceptual design for the 2 MeV analyzer, the status of the high-voltage testing of full-size analyzer systems, and backup plans in case it proves impossible to hold 400 kV on an analyzer of this size

  12. Nanolipoprotein particles and related methods and systems for protein capture, solubilization, and/or purification

    Energy Technology Data Exchange (ETDEWEB)

    Chromy, Brett A.; Henderson, Paul; Hoeprich, Jr, Paul D.

    2016-10-04

    Provided herein are methods and systems for assembling, solubilizing and/or purifying a membrane-associated protein in a nanolipoprotein particle, which comprise a temperature transition cycle performed in the presence of a detergent, wherein during the temperature transition cycle the nanolipoprotein components are brought to a temperature above and below the gel to liquid crystalline transition temperature of the membrane-forming lipid of the nanolipoprotein particle.

  13. A Nonlinear Schrödinger Model for Many-Particle Quantum Systems

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2012-01-01

    Full Text Available Considering both the effect of s-wave scattering and the atom-atom interaction, rather than only the effect of s-wave scattering, we establish a nonlinear Schrödinger model for many-particle quantum systems; we prove the global existence of a solution to the model and obtain the expression of the solution. Furthermore, we show that the Hamiltonian energy and the total particle number are both conserved quantities.
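
    For reference, a standard cubic (Gross-Pitaevskii-type) nonlinear Schrödinger equation and its two conserved quantities take the form below; the paper's model may include additional interaction terms, so this is only the generic template:

```latex
i\,\partial_t \psi = -\Delta \psi + g\,|\psi|^{2}\psi ,
\qquad
N = \int |\psi|^{2}\,\mathrm{d}x ,
\qquad
E = \int \Bigl( |\nabla \psi|^{2} + \tfrac{g}{2}\,|\psi|^{4} \Bigr)\,\mathrm{d}x ,
```

    with dN/dt = dE/dt = 0 along solutions, matching the conservation statement in the abstract.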

  14. Distribution functions and thermodynamic functions of many particle systems

    International Nuclear Information System (INIS)

    Isihara, A.; Rosa Junior, S.G.

    1976-01-01

    A method is given for determining an upper bound on the entropy of a classical interacting system. A family of Gaussian trial distribution functions is introduced for an electron gas. It is found that the ring diagram energy corresponds to the minimum free energy which the family produces. In contrast to the ring diagram method, the new approach is extremely simple and general [pt
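
    The bound obtained from trial distributions is an instance of the Gibbs-Bogoliubov variational principle; in standard notation (not the paper's), for any trial Hamiltonian H_0 with free energy F_0:

```latex
F \;\le\; F_0 + \langle H - H_0 \rangle_0 ,
```

    where the average is taken over the trial (here Gaussian) ensemble. Minimizing the right-hand side over the Gaussian family yields the tightest bound, and the record reports that this minimum reproduces the ring-diagram free energy.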

  15. Duality and hidden symmetries in interacting particle systems

    NARCIS (Netherlands)

    Giardinà, C.; Kurchan, J.; Redig, F.H.J.; Vafayi, K.

    2009-01-01

    In the context of Markov processes, both in discrete and continuous setting, we show a general relation between duality functions and symmetries of the generator. If the generator can be written in the form of a Hamiltonian of a quantum spin system, then the "hidden" symmetries are easily derived.

  16. Neural network based expert system for fault diagnosis of particle accelerators

    International Nuclear Information System (INIS)

    Dewidar, M.M.

    1997-01-01

    Particle accelerators are machines that produce beams of charged particles with different energies, depending on the accelerator type. The MGC-20 cyclotron is a cyclic particle accelerator used for accelerating protons, deuterons, alpha particles, and helium-3 to different energies. Its applications include isotope production, nuclear reaction studies, and mass spectroscopy. It is a complicated machine consisting of five main parts: the ion source, the deflector, the beam transport system, the concentric and harmonic coils, and the radio frequency system. The diagnosis of this device is a very complex task: it depends on the conditions of 27 indicators on the control panel of the device. Accurate diagnosis can lead to high system reliability and savings in maintenance costs, so an expert system for cyclotron fault diagnosis needs to be built. In this thesis, a hybrid expert system was developed for fault diagnosis of the MGC-20 cyclotron. Two intelligent techniques, a multilayer feed-forward back-propagation neural network and a rule-based expert system, are integrated as a loosely coupled pre-processor model to build the proposed hybrid expert system. The architecture of the developed hybrid expert system consists of two levels. The first level is a pair of feed-forward back-propagation neural networks used for isolating the faulty part of the cyclotron. The second level is the rule-based expert system, used for troubleshooting the faults inside the isolated faulty part. 4-6 tabs., 4-5 figs., 36 refs
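
    The first-level classifier can be illustrated with a minimal feed-forward back-propagation network: 27 inputs mirror the panel indicators and 5 outputs the five main parts, but the data, architecture, and training schedule below are synthetic stand-ins, not the thesis's network:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic stand-in data: 27 panel indicators -> one of 5 faulty subsystems
# (ion source, deflector, beam transport, coils, RF); labels are illustrative
n_in, n_hidden, n_out, n_samples = 27, 16, 5, 400
X = rng.normal(size=(n_samples, n_in))
labels = np.argmax(X @ rng.normal(size=(n_in, n_out)), axis=1)
Y = np.eye(n_out)[labels]                        # one-hot targets

# two-layer feed-forward network trained by plain back-propagation
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out)); b2 = np.zeros(n_out)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    z = h @ W2 + b2
    z -= z.max(axis=1, keepdims=True)            # numerically stable softmax
    p = np.exp(z); p /= p.sum(axis=1, keepdims=True)
    return h, p

lr, losses = 0.5, []
for epoch in range(200):
    h, p = forward(X)
    losses.append(-np.mean(np.sum(Y * np.log(p + 1e-12), axis=1)))
    dz = (p - Y) / n_samples                     # gradient of cross-entropy + softmax
    dW2 = h.T @ dz; db2 = dz.sum(0)
    dh = dz @ W2.T * (1 - h ** 2)                # back-propagate through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

    The rule-based second level would then take the predicted subsystem and apply troubleshooting rules specific to it.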

  17. Force fields of charged particles in micro-nanofluidic preconcentration systems

    Science.gov (United States)

    Gong, Lingyan; Ouyang, Wei; Li, Zirui; Han, Jongyoon

    2017-12-01

    Electrokinetic concentration devices based on the ion concentration polarization (ICP) phenomenon have drawn much attention due to their simple setup, high enrichment factor, and easy integration with many subsequent processes, such as separation, reaction, and extraction. Despite significant progress in experimental research, fundamental understanding and detailed modeling of preconcentration systems are still lacking. The mechanism of the electrokinetic trapping of charged particles is currently limited to the force-balance analysis between the electric force and the fluid drag force in an over-simplified one-dimensional (1D) model, which misses many signatures of the actual system. This letter studies the particle trapping phenomena that are not explainable in the 1D model through the calculation of two-dimensional (2D) force fields. The trapping of charged particles is shown to significantly distort the electric field and fluid flow pattern, which in turn leads to different trapping behaviors for particles of different sizes. The mechanisms behind the protrusions and instability of the focused band, which are important factors determining overall preconcentration efficiency, are revealed through analyzing the rotating fluxes of particles in the vicinity of the ion-selective membrane. The differences in the enrichment factors of differently sized particles are understood through the interplay between the electric force and convective fluid flow. These results provide insights into the electrokinetic concentration effect, which could facilitate the design and optimization of ICP-based preconcentration systems.
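
    The 1D force-balance picture that the letter starts from is easy to reproduce: at low Reynolds number, Stokes drag equilibrates instantly, so a particle moves at the fluid velocity plus its electrophoretic drift, and it is trapped where the two cancel. All profiles and numbers below are hypothetical placeholders chosen only to make the balance solvable:

```python
# Hypothetical 1D profile: the local field E(x) is amplified toward the
# ion-selective membrane at x = L_ch (all numbers are illustrative, not fitted)
L_ch = 1.0e-3            # channel length, m
E0 = 5.0e3               # field far from the membrane, V/m
amp = 50.0               # field amplification factor at the membrane

def E(x):
    return E0 * (1.0 + amp * (x / L_ch) ** 4)

u_flow = 3.0e-4          # convective (EOF + pressure-driven) velocity, m/s
mu_ep = -4.0e-8          # electrophoretic mobility of the particle, m^2/(V s)

def u_particle(x):
    # 1D force balance: particle velocity = fluid velocity + electrophoretic drift
    return u_flow + mu_ep * E(x)

# bisection for the trapping position u_particle(x*) = 0
a, b = 0.0, L_ch
for _ in range(100):
    mid = 0.5 * (a + b)
    if u_particle(a) * u_particle(mid) <= 0:
        b = mid
    else:
        a = mid
x_trap = 0.5 * (a + b)
```

    The 2D analysis in the letter replaces this single balance point with a full force field, which is what captures the band protrusions and size-dependent trapping.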

  18. Design of the TORCH detector: A Cherenkov based Time-of-Flight system for particle identification

    CERN Document Server

    AUTHOR|(CDS)2078663; Rademacker, Jonas

    The LHCb detector at the LHC collider has been very successfully operated over the past years, providing new and profound insights into the Standard Model, in particular through study of $b$-hadrons to achieve a better understanding of CP violation. One of the key components of LHCb is its particle identification system, comprised of two RICH detectors, which allow for high precision separation of particle species over a large momentum range. In order to retain and improve the performance of the particle identification system in light of the LHCb upgrade, the TORCH detector has been proposed to supplement the RICH system at low momentum (2-10 GeV/c). The TORCH detector provides (charged) particle identification through precision timing of particles passing through it. Assuming a known momentum from the tracking, it is possible to derive the species of a particle from the time of flight from its primary vertex. This measurement is achieved by timing and combining photons generated in a solid radiator. The geom...

  19. Log-Normal Distribution in a Growing System with Weighted and Multiplicatively Interacting Particles

    Science.gov (United States)

    Fujihara, Akihiro; Tanimoto, Satoshi; Yamamoto, Hiroshi; Ohtsuki, Toshiya

    2018-03-01

    A growing system with weighted and multiplicatively interacting particles is investigated. Each particle has a quantity that changes multiplicatively after a binary interaction, with its growth rate controlled by a weight parameter in a homogeneous symmetric kernel. We consider the system using moment inequalities and analytically derive the log-normal-type tail in the probability distribution function of quantities when the parameter is negative, which is different from the result for single-body multiplicative processes. We also find that the system approaches a winner-take-all state when the parameter is positive.
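
    A minimal numerical caricature of such a process shows how repeated binary multiplicative updates build a log-normal-shaped distribution of quantities; the symmetric kernel below is an assumption for illustration, not the paper's weighted homogeneous kernel:

```python
import numpy as np

rng = np.random.default_rng(0)

N, steps = 2000, 200_000
x = np.ones(N)                       # particle "quantities", all start at 1

for _ in range(steps):
    i, j = rng.integers(0, N, size=2)
    if i == j:
        continue
    # toy symmetric binary interaction: each partner is rescaled by an
    # independent random multiplicative factor
    x[i] *= rng.lognormal(0.0, 0.1)
    x[j] *= rng.lognormal(0.0, 0.1)

logs = np.log(x)
z = (logs - logs.mean()) / logs.std()
skewness = float(np.mean(z ** 3))    # near zero if log(x) is close to Gaussian
```

    Because each log-quantity is a sum of many independent increments, the distribution of log(x) is close to Gaussian, i.e. x is approximately log-normal; the paper's moment-inequality analysis makes the corresponding tail statement precise for negative weight parameters.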

  20. Implicit Particle Filter for Power System State Estimation with Large Scale Renewable Power Integration.

    Science.gov (United States)

    Uzunoglu, B.; Hussaini, Y.

    2017-12-01

    The Implicit Particle Filter is a sequential Monte Carlo method for data assimilation that guides the particles to high-probability regions by an implicit step. It optimizes a nonlinear cost function which can be inherited from legacy assimilation routines. Dynamic state estimation for near-real-time applications in power systems is becoming increasingly important with the integration of variable wind and solar power generation. New advanced state estimation tools that will replace the old generation of state estimators should be able to address legacy software, integrating it into a common mathematical framework, and should accommodate the power industry's need for cautious, evolutionary change rather than a complete revolution, while also addressing nonlinearity and non-normal behaviour. This work implements the implicit particle filter as a tool for estimating the states of a power system and presents the first implicit-particle-filter study of power system state estimation. The implicit particle filter is introduced into power systems, and simulations are presented for a three-node benchmark power system. The performance of the filter on this problem is analyzed and the results are presented.
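
    The sequential Monte Carlo idea underlying the method can be illustrated with a plain bootstrap particle filter on a scalar toy state-space model; the implicit step that steers particles toward high-probability regions is omitted here, and the model, noise levels, and particle count are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy scalar state-space model (a stand-in for a power-system state):
# x_t = a x_{t-1} + w_t,  y_t = x_t + v_t,  w ~ N(0, q), v ~ N(0, r)
a, q, r, T, Np = 0.9, 0.5, 0.5, 100, 500

# simulate truth and measurements
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + rng.normal(0, np.sqrt(q))
y = x_true + rng.normal(0, np.sqrt(r), size=T)

# bootstrap particle filter: propagate, weight by likelihood, resample
particles = rng.normal(0, 1, size=Np)
est = np.zeros(T)
for t in range(T):
    particles = a * particles + rng.normal(0, np.sqrt(q), size=Np)
    w = np.exp(-0.5 * (y[t] - particles) ** 2 / r)   # Gaussian likelihood
    w /= w.sum()
    est[t] = np.dot(w, particles)                    # weighted posterior mean
    idx = rng.choice(Np, size=Np, p=w)               # multinomial resampling
    particles = particles[idx]

rmse = np.sqrt(np.mean((est - x_true) ** 2))
```

    The implicit variant replaces the blind propagation step with the solution of an optimization problem per particle, which is what allows it to reuse cost functions from legacy assimilation routines.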

  1. Systemic characterization and evaluation of particle packings as initial sets for discrete element simulations

    Science.gov (United States)

    Morfa, Carlos Recarey; Cortés, Lucía Argüelles; Farias, Márcio Muniz de; Morales, Irvin Pablo Pérez; Valera, Roberto Roselló; Oñate, Eugenio

    2018-07-01

    A methodology that comprises several characterization properties for particle packings is proposed in this paper. The methodology takes into account factors such as the dimension and shape of particles, space occupation, homogeneity, connectivity and isotropy, among others. This classification and integration of several properties makes it possible to carry out a characterization process that systemically evaluates particle packings in order to guarantee the quality of the initial meshes in discrete element simulations, at both the micro- and the macroscale. Several new properties were created, and improvements to existing ones are presented. Properties from other disciplines were adapted for use in the evaluation of particle systems. The methodology makes it easy to characterize media at the level of the microscale (continuous geometries—steels, rock microstructures, etc., and discrete geometries) and the macroscale. A global, systemic and integral system for characterizing and evaluating particle sets, based on fuzzy logic, is presented. Such a system allows researchers to have a unique evaluation criterion based on the aim of their research. Examples of applications are shown.


  3. Particle damage sources for fused silica optics and their mitigation on high energy laser systems.

    Science.gov (United States)

    Bude, J; Carr, C W; Miller, P E; Parham, T; Whitman, P; Monticelli, M; Raman, R; Cross, D; Welday, B; Ravizza, F; Suratwala, T; Davis, J; Fischer, M; Hawley, R; Lee, H; Matthews, M; Norton, M; Nostrand, M; VanBlarcom, D; Sommer, S

    2017-05-15

    High energy laser systems are ultimately limited by laser-induced damage to their critical components. This is especially true of damage to critical fused silica optics, which grows rapidly upon exposure to additional laser pulses. Much progress has been made in eliminating damage precursors in as-processed fused silica optics (the advanced mitigation process, AMP3), and very high damage resistance has been demonstrated in laboratory studies. However, the full potential of these improvements has not yet been realized in actual laser systems. In this work, we explore the importance of additional damage sources, in particular particle contamination, for fused silica optics fielded in a high-performance laser environment, the National Ignition Facility (NIF) laser system. We demonstrate that the most dangerous sources of particle contamination in a system-level environment are laser-driven particle sources. In the specific case of the NIF laser, we have identified the two important particle sources which account for nearly all the damage observed on AMP3 optics during full laser operation and present mitigations for these particle sources. Finally, with the elimination of these laser-driven particle sources, we demonstrate essentially damage free operation of AMP3 fused silica for ten large optics (a total of 12,000 cm² of beam area) for shots from 8.6 J/cm² to 9.5 J/cm² of 351 nm light (3 ns Gaussian pulse shapes). Potentially many other pulsed high energy laser systems have similar particle sources, and given the insight provided by this study, their identification and elimination should be possible. The mitigations demonstrated here are currently being employed for all large UV silica optics on the National Ignition Facility.

  4. Reliability Models Applied to a System of Power Converters in Particle Accelerators

    OpenAIRE

    Siemaszko, D; Speiser, M; Pittet, S

    2012-01-01

    Several reliability models are studied when applied to a power system containing a large number of power converters. A methodology is proposed and illustrated in the case study of a novel linear particle accelerator designed for reaching high energies. The proposed methods result in the prediction of both reliability and availability of the considered system for optimisation purposes.

  5. The design and scale-up of spray dried particle delivery systems.

    Science.gov (United States)

    Al-Khattawi, Ali; Bayly, Andrew; Phillips, Andrew; Wilson, David

    2018-01-01

    The rising demand for pharmaceutical particles with tailored physicochemical properties has opened new markets for spray drying, especially for solubility enhancement, improving inhalation medicines and stabilization of biopharmaceuticals. Despite this, the spray drying literature is scattered and often does not address the principles underpinning robust development of pharmaceuticals. It is therefore necessary to present a clearer picture of the field and highlight the factors influencing particle design and scale-up. Areas covered: The review presents a systematic analysis of the trends in development of particle delivery systems using spray drying. This is followed by exploring the mechanisms governing particle formation in the process stages. Particle design factors, including those of equipment configurations and feed/process attributes, are highlighted. Finally, the review summarises the current industrial approaches to upscaling pharmaceutical spray drying. Expert opinion: Spray drying provides the ability to design particles of the desired functionality. This greatly benefits the pharmaceutical sector, especially as product specifications are becoming more encompassing and exacting. One of the biggest barriers to product translation remains scale-up/scale-down. A shift from trial-and-error approaches to model-based particle design helps to enhance control over product properties. To this end, process innovations and advanced manufacturing technologies are particularly welcome.

  6. A VLSI System-on-Chip for Particle Detectors

    CERN Document Server

    AUTHOR|(CDS)2078019

    In this thesis I present a System-on-Chip (SoC) I designed to offer a self-contained, compact data acquisition platform for micromegas detector monitoring. I carried out my work within the RD-51 collaboration at CERN. With a companion ADC, my architecture is capable of acquiring the signal from a detector electrode, processing the data and performing monitoring tests. The SoC is built around a custom 8-bit microprocessor with internal memory resources and embeds the peripherals to be interf...

  7. On the ``Matrix Approach'' to Interacting Particle Systems

    Science.gov (United States)

    de Sanctis, L.; Isopi, M.

    2004-04-01

    Derrida et al. and Schütz and Stinchcombe gave algebraic formulas for the correlation functions of the partially asymmetric simple exclusion process. Here we give a fairly general recipe for obtaining these formulas and extend them to the whole time evolution (starting from the generator of the process), for a certain class of interacting systems. We then analyze the algebraic relations obtained to show that the matrix approach does not work with some models, such as the voter and the contact processes.
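
    For concreteness, in the totally asymmetric case with injection rate α and extraction rate β, the algebraic formulas referred to above follow from the standard matrix-product (DEHP) form of the stationary state:

```latex
P(\tau_1,\dots,\tau_N) \;\propto\; \langle W |\, \prod_{i=1}^{N} \bigl(\tau_i D + (1-\tau_i) E\bigr) \,| V \rangle ,
\qquad DE = D + E ,
\qquad \langle W | E = \tfrac{1}{\alpha}\,\langle W | ,
\qquad D | V \rangle = \tfrac{1}{\beta}\,| V \rangle .
```

    The recipe discussed in the paper generalizes relations of this kind from stationary correlation functions to the full time evolution generated by the process.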

  8. Clogging transition of many-particle systems flowing through bottlenecks

    Science.gov (United States)

    Zuriguel, Iker; Parisi, Daniel Ricardo; Hidalgo, Raúl Cruz; Lozano, Celia; Janda, Alvaro; Gago, Paula Alejandra; Peralta, Juan Pablo; Ferrer, Luis Miguel; Pugnaloni, Luis Ariel; Clément, Eric; Maza, Diego; Pagonabarraga, Ignacio; Garcimartín, Angel

    2014-12-01

    When a large set of discrete bodies passes through a bottleneck, the flow may become intermittent due to the development of clogs that obstruct the constriction. Clogging is observed, for instance, in colloidal suspensions, granular materials and crowd swarming, where consequences may be dramatic. Despite its ubiquity, a general framework embracing research in such a wide variety of scenarios is still lacking. We show that in systems of very different nature and scale, including sheep herds, pedestrian crowds, assemblies of grains, and colloids, the probability distribution of time lapses between the passages of consecutive bodies exhibits a power-law tail with an exponent that depends on the system condition. Consequently, we identify the transition to clogging in terms of the divergence of the average time lapse. Such a unified description allows us to put forward a qualitative clogging state diagram whose most conspicuous feature is the presence of a length scale qualitatively related to the presence of a finite size orifice. This approach helps to understand paradoxical phenomena, such as the faster-is-slower effect predicted for pedestrians evacuating a room and might become a starting point for researchers working in a wide variety of situations where clogging represents a hindrance.
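
    The divergence criterion can be checked numerically: sampling time lapses with a power-law PDF tail p(t) ~ t^(-alpha) gives a finite mean for alpha > 2 and non-converging sample means for alpha <= 2. The sampler below is a generic sketch and the exponent values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_lapses(alpha, n):
    # time lapses with a power-law PDF tail p(t) ~ t^(-alpha) for t >= 1;
    # numpy's pareto(a) has PDF exponent a + 1, hence the shift by one below
    return 1.0 + rng.pareto(alpha - 1.0, size=n)

# alpha > 2: the mean lapse <t> is finite, so the flow has a well-defined rate
t_flow = sample_lapses(4.0, 200_000)     # theoretical mean is 3/2 for alpha = 4

# alpha <= 2: <t> diverges -- the clogging criterion; sample means do not
# converge but keep drifting upward as the sample size grows
t_clog_means = [sample_lapses(1.8, 10 ** k).mean() for k in (3, 4, 5, 6)]
```

    The clogged regime is thus signalled not by any single long lapse but by the failure of the average lapse to converge.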

  9. Multiscale simulations of patchy particle systems combining Molecular Dynamics, Path Sampling and Green's Function Reaction Dynamics

    Science.gov (United States)

    Bolhuis, Peter

    Important reaction-diffusion processes, such as biochemical networks in living cells, or self-assembling soft matter, span many orders in length and time scales. In these systems, the reactants' spatial dynamics at mesoscopic length and time scales of microns and seconds is coupled to the reactions between the molecules at microscopic length and time scales of nanometers and milliseconds. This wide range of length and time scales makes these systems notoriously difficult to simulate. While mean-field rate equations cannot describe such processes, the mesoscopic Green's Function Reaction Dynamics (GFRD) method enables efficient simulation at the particle level provided the microscopic dynamics can be integrated out. Yet, many processes exhibit non-trivial microscopic dynamics that can qualitatively change the macroscopic behavior, calling for an atomistic, microscopic description. The recently developed multiscale Molecular Dynamics Green's Function Reaction Dynamics (MD-GFRD) approach combines GFRD for simulating the system at the mesoscopic scale where particles are far apart, with microscopic Molecular (or Brownian) Dynamics, for simulating the system at the microscopic scale where reactants are in close proximity. The association and dissociation of particles are treated with rare event path sampling techniques. I will illustrate the efficiency of this method for patchy particle systems. Replacing the microscopic regime with a Markov State Model avoids the microscopic regime completely. The MSM is then pre-computed using advanced path-sampling techniques such as multistate transition interface sampling. I illustrate this approach on patchy particle systems that show multiple modes of binding. MD-GFRD is generic, and can be used to efficiently simulate reaction-diffusion systems at the particle level, including the orientational dynamics, opening up the possibility for large-scale simulations of e.g. protein signaling networks.

  10. Assessment of Available Particle Size Data to Support an Analysis of the Waste Feed Delivery System Transfer System

    International Nuclear Information System (INIS)

    JEWETT, J.R.

    2000-01-01

    Available data pertaining to the size distribution of the particulates in Hanford underground tank waste have been reviewed. Although considerable differences exist between measurement methods, it may be stated with 95% confidence that the median particle size does not exceed 275 µm in at least 95% of the ten tanks selected as sources of HLW feed for Phase 1 vitrification in the RPP. This particle size is recommended as a design basis for the WFD transfer system

  11. Pathway analysis of systemic transcriptome responses to injected polystyrene particles in zebrafish larvae.

    Science.gov (United States)

    Veneman, Wouter J; Spaink, Herman P; Brun, Nadja R; Bosker, Thijs; Vijver, Martina G

    2017-09-01

    Microplastics are a contaminant of emergent concern in the environment; however, to date there is a limited understanding of their movement within organisms and the response of organisms. In the current study zebrafish embryos at different development stages were exposed to 700 nm fluorescent polystyrene (PS) particles and the response pathway after exposure was investigated using imaging and transcriptomics. Our results show limited spreading of particles within the larvae after injection during the blastula stage. This is in contrast to injection of PS particles into the yolk of 2-day-old embryos, which resulted in redistribution of the PS particles throughout the bloodstream and accumulation in the heart region. Although injection was local, transcriptome profiling showed strong responses of zebrafish embryos exposed to PS particles, indicating a systemic response. We found several activated biological pathways related to an immune response in the PS-exposed zebrafish larvae. Most notably, the complement system was enriched, as indicated by upregulation of genes in the alternative complement pathway (e.g. cfhl3, cfhl4, cfb and c9). The fact that the complement pathway is activated indicates that plastic microparticles are integrated in immunological recognition processes. This was supported by fluorescence microscopy results, in which we observed co-localisation of neutrophils and macrophages around the PS particles. Identifying these key events can be a first building block for the development of an adverse outcome pathway (AOP). These data can subsequently be used within ecological and human risk assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Long lifetimes of ultrahot particles in interacting Fermi systems

    Science.gov (United States)

    Bard, M.; Protopopov, I. V.; Mirlin, A. D.

    2018-05-01

    The energy dependence of the relaxation rate of hot electrons due to interaction with the Fermi sea is studied. We consider 2D and 3D systems, quasi-1D quantum wires with multiple transverse bands, as well as single-channel 1D wires. Our analysis includes both spinful and spin-polarized setups, with short-range and Coulomb interactions. We show that, quite generally, the relaxation rate is a nonmonotonic function of the electron energy and decays as a power law at high energies. In other words, ultrahot electrons regain their coherence with increasing energy. Such behavior was observed in a recent experiment on multiband quantum wires, J. Reiner et al., Phys. Rev. X 7, 021016 (2017), doi:10.1103/PhysRevX.7.021016.

  13. Herd Behavior and Financial Crashes: An Interacting Particle System Approach

    Directory of Open Access Journals (Sweden)

    Vincenzo Crescimanna

    2016-01-01

    Full Text Available We provide an approach based on a modification of the Ising model to describe the dynamics of stock markets. Our model incorporates three different factors: imitation, the impact of external news, and private information; moreover, it is characterized by coupling coefficients, static in time, but not identical for each agent. By analogy with physical models, we consider the temperature parameter of the system, assuming that it evolves with memory of the past, hence considering how former news influences realized market returns. We show that a standard Ising potential assumption is not sufficient to reproduce the stylized facts characterizing financial markets; this is because it assigns low probabilities to rare events. Hence, we study a variation of the previous setting providing, also by concrete computations, new insights and improvements.
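
    The Ising-type dynamics described above can be caricatured with a mean-field Metropolis simulation: each agent buys (+1) or sells (-1) under a local field mixing imitation, public news, and static private information. All couplings and parameter values below are illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(5)

# mean-field Ising toy market: N agents with states +1 (buy) / -1 (sell)
N, J, h, T_temp = 200, 1.0, 0.05, 0.5   # imitation strength J, news h, temperature
s = rng.choice([-1, 1], size=N)
eps = 0.1 * rng.normal(size=N)          # static private information per agent

for sweep in range(400):
    for _ in range(N):
        i = rng.integers(N)
        # local field: imitation of the average opinion + news + private signal
        field = J * s.mean() + h + eps[i]
        dE = 2.0 * s[i] * field          # energy cost of flipping agent i
        if dE <= 0 or rng.random() < np.exp(-dE / T_temp):
            s[i] = -s[i]

m = s.mean()                             # aggregate opinion, a crude return proxy
```

    Below the ordering temperature the agents polarize into a herd (|m| close to 1); the paper's point is that this vanilla Ising potential underweights rare events, motivating their modified setting with memory in the temperature parameter.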

  14. Functional integrals for nuclear many-particle systems

    International Nuclear Information System (INIS)

    Lobanov, Yu.Yu.

    1996-04-01

    A new method for computing the physical characteristics of quantum systems with many degrees of freedom is described. The method is based on the representation of the matrix element of the evolution operator in Euclidean metrics in the form of a functional integral with a certain measure in the corresponding space, and on the use of approximation formulas which we constructed for this kind of integral. The method does not require preliminary discretization of space and time and allows the use of deterministic algorithms. This approach proved to have important advantages over the other known methods, including higher efficiency of computation. Examples of application of the method to the numerical study of some potential nuclear models, as well as comparisons of the results with experimental data and with values obtained by other authors, are presented. (author). 25 refs, 1 fig., 2 tabs

  15. A low-cost, high-magnification imaging system for particle sizing applications

    International Nuclear Information System (INIS)

    Tipnis, Tanmay J; Lawson, Nicholas J; Tatam, Ralph P

    2014-01-01

    A low-cost imaging system for high magnification and high resolution was developed as an alternative to long-working-distance microscope-based systems, primarily for particle sizing applications. The imaging optics, comprising an inverted fixed focus lens coupled to a microscope objective, were able to provide a working distance of approximately 50 mm. The system magnification could be changed by using an appropriate microscope objective. Particle sizing was achieved using shadow-based techniques with the backlight illumination provided by a pulsed light-emitting diode light source. The images were analysed using commercial sizing software which gave the particle sizes and their distribution. A range of particles, from 6 to 8 µm to over 100 µm, was successfully measured with a minimum spatial resolution of approximately 2.5 µm. This system allowed measurement of a wide range of particles at a lower cost and improved operator safety without disturbing the flow. (technical design note)

  16. Stable transformation via particle bombardment in two different soybean regeneration systems.

    Science.gov (United States)

    Sato, S; Newell, C; Kolacz, K; Tredo, L; Finer, J; Hinchee, M

    1993-05-01

    The Biolistics(®) particle delivery system for the transformation of soybean (Glycine max L. Merr.) was evaluated in two different regeneration systems. The first system was multiple shoot proliferation from shoot tips obtained from immature zygotic embryos of the cultivar Williams 82, and the second was somatic embryogenesis from a long term proliferative suspension culture of the cultivar Fayette. Bombardment of shoot tips with tungsten particles, coated with precipitated DNA containing the gene for β-glucuronidase (GUS), produced GUS-positive sectors in 30% of the regenerated shoots. However, none of the regenerants which developed into plants continued to produce GUS positive tissue. Bombardment of embryogenic suspension cultures produced GUS positive globular somatic embryos which proliferated into GUS positive somatic embryos and plants. An average of 4 independent transgenic lines were generated per bombarded flask of an embryogenic suspension. Particle bombardment delivered particles into the first two cell layers of either shoot tips or somatic embryos. Histological analysis indicated that shoot organogenesis appeared to involve more than the first two superficial cell layers of a shoot tip, while somatic embryo proliferation occurred from the first cell layer of existing somatic embryos. The different transformation results obtained with these two systems appeared to be directly related to differences in the cell types which were responsible for regeneration and their accessibility to particle penetration.

  17. Local System Matrix Compression for Efficient Reconstruction in Magnetic Particle Imaging

    Directory of Open Access Journals (Sweden)

    T. Knopp

    2015-01-01

    Magnetic particle imaging (MPI) is a quantitative method for determining the spatial distribution of magnetic nanoparticles, which can be used as tracers for cardiovascular imaging. For reconstructing a spatial map of the particle distribution, the system matrix describing the magnetic particle imaging equation has to be known. Due to the complex dynamic behavior of the magnetic particles, the system matrix is commonly measured in a calibration procedure. In order to speed up the reconstruction process, recently, a matrix compression technique has been proposed that makes use of a basis transformation in order to compress the MPI system matrix. By thresholding the resulting matrix and storing the remaining entries in compressed row storage format, only a fraction of the data has to be processed when reconstructing the particle distribution. In the present work, it is shown that the image quality of the algorithm can be considerably improved by using a local threshold for each matrix row instead of a global threshold for the entire system matrix.
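
    The row-wise compression described above can be sketched as follows. This is a minimal illustration only: it assumes a DCT as the compressing basis transformation and realizes the local threshold by keeping the largest coefficients of each row; the function name and the `keep` parameter are ours, not taken from the paper.

```python
import numpy as np
from scipy.fft import dct
from scipy.sparse import csr_matrix

def compress_system_matrix(S, keep=0.1):
    """Row-wise compression of a (dense) system matrix S.

    Each row is basis-transformed (DCT here as a stand-in), then
    thresholded with a *local* threshold: only the largest `keep`
    fraction of coefficients of that particular row survives.
    """
    T = dct(S, axis=1, norm='ortho')           # basis transform, row by row
    k = max(1, int(keep * T.shape[1]))         # coefficients kept per row
    out = np.zeros_like(T)
    for i, row in enumerate(T):
        top = np.argsort(np.abs(row))[-k:]     # local threshold = top-k of this row
        out[i, top] = row[top]
    return csr_matrix(out)                     # compressed row storage
```

    During reconstruction, only the stored entries of the sparse matrix are processed, which is where the speed-up over the full system matrix comes from.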

  18. Elimination of particle effects in SF/sub 6/ insulated transmission systems. First quarterly report

    Energy Technology Data Exchange (ETDEWEB)

    Dale, S.J.

    1979-01-01

    The purpose of this program is to develop methods and equipment to eliminate the adverse effects of particle contamination in SF/sub 6/-insulated compressed-gas transmission (CGIT) systems, so that the excellent dielectric properties of SF/sub 6/ can be fully exploited. Presently, CGIT systems are operated at about 10% of the dielectric strength capability of the SF/sub 6/ gas. The program includes theoretical and experimental evaluation of concepts, optimization and verification studies in CGIT systems, reliability analysis, documentation of designs, and economic analysis. Progress is being made on evaluating alternative conductor and sheath designs that minimize the effect of particles. Materials for solid insulation are being investigated for the same purpose; the effort is presently concentrated on obtaining reliable quantitative measurements of electrostatic properties. Computer calculations are being made to determine optimum particle trap configurations. A novel particle-trapping technique using adhesive materials is being investigated. Studies of manufacturing and field control techniques have commenced with a study of mechanical vibration techniques. An experimental test chamber consisting of a 9 m (30 foot) long 145 kV bus has been designed. This system will be used for testing particle control concepts and in migration and optimization studies.

  19. [The interaction of soil micromycetes with "hot" particles in a model system].

    Science.gov (United States)

    Zhdanova, N N; Lashko, T N; Redchits, T I; Vasilevskaia, A I; Borisiuk, L G; Siniavskaia, O I; Gavriliuk, V I; Muzalev, P N

    1991-01-01

    A model system that permits long-term observation and recording of the interaction of fungi with a radiation source has been created on the basis of an isolated "hot" particle, a deficient mineral medium (sucrose content 60 mg/l), and a suspension of fungal conidia. Five species (six strains) of micromycetes isolated from radionuclide-contaminated soils and fifteen "hot" particles were tested. It was found for the first time that Cladosporium cladosporioides and Penicillium roseo-purpureum are able to actively overgrow "hot" particles whose radioactivity, by gamma spectrum, did not exceed 3.1 × 10⁻⁷ Ci, and to destroy them 50-150 days later. Certain changes in the morphology of the fungi destroying the "hot" particles are revealed. The ecological significance of this phenomenon is discussed.

  20. Acoustically Driven Fluid and Particle Motion in Confined and Leaky Systems

    Science.gov (United States)

    Barnkob, Rune; Nama, Nitesh; Ren, Liqiang; Huang, Tony Jun; Costanzo, Francesco; Kähler, Christian J.

    2018-01-01

    The acoustic motion of fluids and particles in confined and acoustically leaky systems is receiving increasing attention for its use in medicine and biotechnology. A number of contradicting physical and numerical models currently exist, but their validity is uncertain due to the unavailability of hard-to-access experimental data for validation. We provide experimental benchmarking data by measuring 3D particle trajectories and demonstrate that the particle trajectories can be described numerically without any fitting parameter by a reduced-fluid model with leaky impedance-wall conditions. The results reveal the hitherto unknown existence of a pseudo-standing wave that drives the acoustic streaming as well as the acoustic radiation force on suspended particles.

  1. A microstructure-composition map of a ternary liquid/liquid/particle system with partially-wetting particles.

    Science.gov (United States)

    Yang, Junyi; Roell, David; Echavarria, Martin; Velankar, Sachin S

    2017-11-22

    We examine the effect of composition on the morphology of a ternary mixture comprising two molten polymeric liquid phases (polyisobutylene and polyethylene oxide) and micron-scale spherical silica particles. The silica particles were treated with silanes to make them partially wetted by both polymers. Particle loadings up to 30 vol% are examined while varying the fluid phase ratios across a wide range. Numerous effects of particle addition are catalogued: stabilization of Pickering emulsions and of interfacially jammed co-continuous microstructures, meniscus bridging of particles, particle-induced coalescence of the dispersed phase, and significant shifts in the phase-inversion composition. Many of the effects are asymmetric; for example, particle-induced coalescence is more severe and drop sizes are larger when polyisobutylene is the continuous phase, and particles promote phase continuity of the polyethylene oxide. These asymmetries are likely attributable to a slight preferential wettability of the particles towards the polyethylene oxide. A state map is constructed which classifies the various microstructures within a triangular composition diagram. Comparisons are made between this diagram and a previous one constructed for the case where the particles are fully wetted by polyethylene oxide.

  2. A New Class of Particle Filters for Random Dynamic Systems with Unknown Statistics

    Directory of Open Access Journals (Sweden)

    Joaquín Míguez

    2004-11-01

    In recent years, particle filtering has become a powerful tool for tracking signals and time-varying parameters of random dynamic systems. These methods require a mathematical representation of the dynamics of the system evolution, together with assumptions of probabilistic models. In this paper, we present a new class of particle filtering methods that do not assume explicit mathematical forms of the probability distributions of the noise in the system. As a consequence, the proposed techniques are simpler, more robust, and more flexible than standard particle filters. Apart from the theoretical development of specific methods in the new class, we provide computer simulation results that demonstrate the performance of the algorithms in the problem of autonomous positioning of a vehicle in a 2-dimensional space.
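
    For contrast, the standard particle filter that this new class relaxes can be sketched as below. The scalar benchmark model and the explicit Gaussian noise densities are illustrative assumptions of ours; the methods in the paper replace exactly these explicit probabilistic weights with cost functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(ys, n_particles=500, q=1.0, r=1.0):
    """Bootstrap (SIR) particle filter for the scalar model
        x_k = 0.5*x_{k-1} + 25*x_{k-1}/(1 + x_{k-1}**2) + v_k,
        y_k = x_k**2/20 + w_k,
    with v_k ~ N(0, q) and w_k ~ N(0, r)."""
    x = rng.normal(0.0, 1.0, n_particles)        # initial particle cloud
    estimates = []
    for y in ys:
        # propagate the particles through the known dynamics
        x = 0.5*x + 25*x/(1 + x**2) + rng.normal(0, np.sqrt(q), n_particles)
        # weight by the (assumed Gaussian) observation likelihood
        w = np.exp(-0.5 * (y - x**2/20)**2 / r) + 1e-300
        w /= w.sum()
        estimates.append(np.sum(w * x))          # posterior-mean estimate
        # multinomial resampling
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)
```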

  3. 3rd International Conference on Particle Systems and Partial Differential Equations

    CERN Document Server

    Soares, Ana

    2016-01-01

    The main focus of this book is on different topics in probability theory, partial differential equations and kinetic theory, presenting some of the latest developments in these fields. It addresses mathematical problems concerning applications in physics, engineering, chemistry and biology that were presented at the Third International Conference on Particle Systems and Partial Differential Equations, held at the University of Minho, Braga, Portugal in December 2014. The purpose of the conference was to bring together prominent researchers working in the fields of particle systems and partial differential equations, providing a venue for them to present their latest findings and discuss their areas of expertise. Further, it was intended to introduce a vast and varied public, including young researchers, to the subject of interacting particle systems, its underlying motivation, and its relation to partial differential equations. This book will appeal to probabilists, analysts and those mathematicians whose wor...

  4. Mixing rates of particle systems with energy exchange

    International Nuclear Information System (INIS)

    Grigo, A; Khanin, K; Szász, D

    2012-01-01

    A fundamental problem of non-equilibrium statistical mechanics is the derivation of macroscopic transport equations in the hydrodynamic limit. The rigorous study of such limits requires detailed information about rates of convergence to equilibrium for finite-sized systems. In this paper, we consider the finite lattice {1, 2, …, N}, with an energy x_i ∈ (0, ∞) associated with each site. The energies evolve according to a Markov jump process with nearest-neighbour interaction such that the total energy is preserved. We prove that for an entire class of such models the spectral gap of the generator of the Markov process scales as O(N⁻²). Furthermore, we provide a complete classification of reversible stationary distributions of product type. We demonstrate that our results apply to models similar to the billiard lattice model considered in Gaspard and Gilbert (2009 J. Stat. Mech.: Theory Exp. 2009 24), and hence provide a first step in the derivation of a macroscopic heat equation for a microscopic stochastic evolution of mechanical origin. (paper)

  5. Radon Daughters Background Reduction in Alpha Particles Counting System

    International Nuclear Information System (INIS)

    Dadon, S. S.; Pelled, O.; Orion, I.

    2014-01-01

    The ABPC method uses the serially occurring events of the beta decay of 214Bi followed by the alpha decay of 214Po, which take place almost simultaneously, to detect Pseudo Coincidence Events (PCE) from the RDP and to subtract them from the gross alpha counts. This work showed that it is possible to improve the efficiency of RDP background reduction, including subtracting the 218Po contribution, by using the ABPC method based on a single solid-state silicon PIPS detector. The false-count percentage obtained at the output of the PCE circuit was smaller than 0.1%. The results show that the PCE circuit was not influenced by non-RDP alpha emitters. The PCE system did not reduce the non-PCE counts of 218Po. After 20 minutes the 218Po had largely decayed, and its contribution became negligible. To overcome this disadvantage, a mathematical matching calculation for the 214Po and 218Po decay equations was employed, and a constant ratio APo214(0)/APo218(0) was obtained. This ratio can be used to estimate the count rate of 218Po during the first 20 minutes and to subtract it from the total count rate in order to obtain a correct RDP reduction

  6. Efficient GPU-based skyline computation

    DEFF Research Database (Denmark)

    Bøgh, Kenneth Sejdenfaden; Assent, Ira; Magnani, Matteo

    2013-01-01

    The skyline operator for multi-criteria search returns the most interesting points of a data set with respect to any monotone preference function. Existing work has almost exclusively focused on efficiently computing skylines on one or more CPUs, ignoring the high parallelism possible in GPUs. In...

  7. GPU based acceleration of first principles calculation

    International Nuclear Information System (INIS)

    Tomono, H; Tsumuraya, K; Aoki, M; Iitaka, T

    2010-01-01

    We present Graphics Processing Unit (GPU)-accelerated simulations of first-principles electronic structure calculations. The FFT, which is the most time-consuming part, is accelerated by about a factor of 10. As a result, the total computation time of a first-principles calculation is reduced to 15 percent of that of the CPU.

  8. Particle deposition due to turbulent diffusion in the upper respiratory system

    Science.gov (United States)

    Hamill, P.

    1979-01-01

    Aerosol deposition in the upper respiratory system (trachea to segmental bronchi) is considered and the importance of turbulent diffusion as a deposition mechanism is evaluated. It is demonstrated that for large particles (diameter greater than about 5 microns), turbulent diffusion is the dominant deposition mechanism in the trachea. Conditions under which turbulent diffusion may be important in successive generations of the pulmonary system are determined. The probability of particle deposition is compared with probabilities of deposition, as determined by the equations generally used in regional deposition models. The analysis is theoretical; no new experimental data is presented.

  9. Hybrid three-dimensional variation and particle filtering for nonlinear systems

    International Nuclear Information System (INIS)

    Leng Hong-Ze; Song Jun-Qiang

    2013-01-01

    This work addresses the problem of estimating the states of nonlinear dynamic systems with sparse observations. We present a hybrid three-dimensional variational (3DVar) and particle filtering (PF) method, which combines the advantages of 3DVar and particle-based filters. By minimizing the cost function, this approach produces a better proposal distribution of the state. Afterwards, the stochastic resampling step in the standard PF can be avoided through a deterministic scheme. The simulation results show that the performance of the new method is superior to that of traditional ensemble Kalman filtering (EnKF) and the standard PF, especially in highly nonlinear systems
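
    One assimilation cycle of such a hybrid can be sketched as follows, under two simplifying assumptions of ours: a linear observation operator H, so that the minimizer of the 3DVar cost J(x) = (x − x_f)ᵀB⁻¹(x − x_f) + (y − Hx)ᵀR⁻¹(y − Hx) has a closed form, and systematic resampling as the deterministic scheme. The paper's actual cost function and resampling rule may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def hybrid_3dvar_pf_step(particles, y, H, B, R):
    """One hedged hybrid 3DVar/PF cycle: nudge every particle to the
    minimizer of the 3DVar cost (closed form for linear H), weight by
    the observation likelihood, then resample deterministically."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)     # gain = argmin of J
    analysed = particles + (y - particles @ H.T) @ K.T
    innov = y - analysed @ H.T
    w = np.exp(-0.5 * np.einsum('ij,jk,ik->i', innov, np.linalg.inv(R), innov))
    w = (w + 1e-300) / (w + 1e-300).sum()
    # systematic (deterministic) resampling instead of random draws
    positions = (np.arange(len(w)) + 0.5) / len(w)
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), len(w) - 1)
    return analysed[idx]
```

    The nudging step plays the role of the 3DVar proposal; the systematic resampling removes the Monte Carlo variance of the usual multinomial draw.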

  10. Development of utility system of charged particle Nuclear Reaction Data on Unified Interface

    International Nuclear Information System (INIS)

    Aoyama, Shigeyoshi; Ohbayashi, Yosihide; Kato, Kiyoshi; Masui, Hiroshi; Ohnishi, Akira; Chiba, Masaki

    1999-01-01

    We have developed a utility system, WinNRDF, for the charged-particle nuclear reaction data of NRDF (Nuclear Reaction Data File) on a unified interface for Windows 95/98/NT. Using the system, we can easily search the experimental data of a charged-particle reaction in NRDF and also view the graphic data through a GUI (Graphical User Interface). Furthermore, we developed a mechanism for building a new index of keywords in order to accommodate the evolving character of the NRDF database. (author)

  11. Power System Stabilizer Design Based on a Particle Swarm Optimization Multiobjective Function Implemented Under Graphical Interface

    Directory of Open Access Journals (Sweden)

    Ghouraf Djamel Eddine

    2016-05-01

    Power system stability is considered a necessary condition for the normal functioning of an electrical network. The role of regulation and control systems is to ensure that stability by determining the essential elements that influence it. This paper proposes a Particle Swarm Optimization (PSO) based multiobjective function for tuning the optimal parameters of a Power System Stabilizer (PSS); the latter is used as an auxiliary to the generator excitation system in order to damp electromechanical oscillations of the rotor and consequently improve power system stability. The computer simulation results, obtained through a developed graphical user interface (GUI) realized in MATLAB, have proved the efficiency of the PSS optimized by Particle Swarm Optimization in comparison with a conventional PSS, showing stable system responses almost insensitive to large parameter variations.
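
    As an illustration of the optimizer itself (not of the paper's specific multiobjective function or tuning settings), a minimal global-best PSO can be written as below; the inertia and acceleration constants are common textbook defaults.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(cost, dim, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimizer: `cost` maps a
    parameter vector (e.g. candidate PSS gains) to a scalar objective."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros_like(x)                                  # velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[pbest_cost.argmin()].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)       # velocity update
        x = np.clip(x + v, lo, hi)                        # keep within bounds
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        g = pbest[pbest_cost.argmin()].copy()
    return g, pbest_cost.min()
```

    For PSS tuning, `cost` would evaluate a damping-based objective from a simulation of the closed-loop system at each candidate parameter set.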

  12. A Class of Hamiltonians for a Three-Particle Fermionic System at Unitarity

    Energy Technology Data Exchange (ETDEWEB)

    Correggi, M., E-mail: michele.correggi@gmail.com [Università degli Studi Roma Tre, Largo San Leonardo Murialdo 1, Dipartimento di Matematica e Fisica (Italy); Dell’Antonio, G. [“Sapienza” Università di Roma, P.le A. Moro 5, Dipartimento di Matematica (Italy); Finco, D. [Università Telematica Internazionale Uninettuno, Corso V. Emanuele II 39, Facoltà di Ingegneria (Italy); Michelangeli, A. [Scuola Internazionale Superiore di Studi Avanzati, Via Bonomea 265 (Italy); Teta, A. [“Sapienza” Università di Roma, P.le A. Moro 5, Dipartimento di Matematica (Italy)

    2015-12-15

    We consider a quantum mechanical three-particle system made of two identical fermions of mass one and a different particle of mass m, where each fermion interacts via a zero-range force with the different particle. In particular we study the unitary regime, i.e., the case of infinite two-body scattering length. The Hamiltonians describing the system are, by definition, self-adjoint extensions of the free Hamiltonian restricted on smooth functions vanishing at the two-body coincidence planes, i.e., where the positions of two interacting particles coincide. It is known that for m larger than a critical value m* ≃ (13.607)⁻¹ a self-adjoint and lower bounded Hamiltonian H₀ can be constructed, whose domain is characterized in terms of the standard point-interaction boundary condition at each coincidence plane. Here we prove that for m ∈ (m*, m**), where m** ≃ (8.62)⁻¹, there is a further family of self-adjoint and lower bounded Hamiltonians H₀,β, β ∈ ℝ, describing the system. Using a quadratic form method, we give a rigorous construction of such Hamiltonians and we show that the elements of their domains satisfy a further boundary condition, characterizing the singular behavior when the positions of all the three particles coincide.

  14. Particle Damping with Granular Materials for Multi Degree of Freedom System

    Directory of Open Access Journals (Sweden)

    Masanobu Inoue

    2011-01-01

    A particle damper consists of a bed of granular materials moving in cavities within a multi-degree-of-freedom (MDOF) structure. This paper deals with the damping effects on forced vibrations of an MDOF structure provided with vertical particle dampers. In the analysis, the particle bed is assumed to be a single mass, and the collisions between the granules and the cavities are completely inelastic, i.e., all energy dissipation mechanisms are wrapped into a zero coefficient of restitution. To predict the particle damping effect, equations of motion are developed in terms of an equivalent single-degree-of-freedom (SDOF) system and damper mass using a modal approach. In this report, a periodic vibration model comprising sustained contact on, or separation of, the damper mass from the vibrating structure is developed. A digital model is also formulated to simulate the damped motion of the physical system, taking account of all vibration modes. Numerical and experimental studies are made of the damping performance of plural dampers located at selected positions throughout a three-DOF system. The experimental results confirm the numerical prediction that collision between granules and structures is completely inelastic as the contributing mechanism of damping in vertical vibration. It is found that particle dampers with properly selected mass ratios and clearances effectively suppress the resonance peaks over a wide frequency range.

  15. Delocalization of Relativistic Dirac Particles in Disordered One-Dimensional Systems and Its Implementation with Cold Atoms

    International Nuclear Information System (INIS)

    Zhu Shiliang; Zhang Danwei; Wang, Z. D.

    2009-01-01

    We study theoretically the localization of relativistic particles in disordered one-dimensional chains. It is found that relativistic particles tend toward delocalization in comparison with nonrelativistic particles at the same disorder strength. More intriguingly, we reveal that massless Dirac particles are entirely delocalized at any energy due to the inherent chiral symmetry, causing the well-known result that particles are always localized in one-dimensional systems for arbitrarily weak disorder to break down. Furthermore, we propose a feasible scheme to detect the delocalization feature of the Dirac particles with cold atoms in a light-induced gauge field.

  16. Automatic calibration and signal switching system for the particle beam fusion research data acquisition facility

    Energy Technology Data Exchange (ETDEWEB)

    Boyer, W.B.

    1979-09-01

    This report describes both the hardware and software components of an automatic calibration and signal switching system (Autocal) for the data acquisition system of the Sandia particle beam fusion research accelerators Hydra, Proto I, and Proto II. The Autocal hardware consists of off-the-shelf commercial equipment. The various hardware components, special modifications, and the overall system configuration are described. Special software has been developed to support the Autocal hardware. Software operation and maintenance are described.

  18. Light scattering studies of lower dimensional colloidal particle and critical fluid systems. Final progress report

    International Nuclear Information System (INIS)

    O'Sullivan, W.J.; Mockler, R.C.

    1985-08-01

    We have completed a program of small-angle-scattering Rayleigh linewidth measurements on thin films of a 2,6-lutidine + water mixture. No statistically significant departures from three-dimensional dynamic response were seen, although the conditions set by the theory of Calvo and Ferrell were met. We have applied digital image processing to evaluate fractal scale invariance in two-dimensional particle aggregates arising from the induced coagulation of colloidal particle monolayer crystals. Our system gives us the capability of calculating the pair correlation function for both small and very large (2 × 10⁴ particles) particle clusters. We find evidence of an apparent crossover between kinetic clustering aggregation at small distances (about 20 particle diameters) and percolation or gel/sol transition behavior at large distances. This is evident in both isolated clusters and in final-state "giant" aggregates. We are carrying through a parallel program of computer calculations whose motivation is to assess the sensitivity of experimental measures of self-similarity to cluster size and image resolution, and to generate efficient algorithms which can be applied to calculate fractal "critical exponents" other than the Hausdorff dimension. We have succeeded in measuring the surface tension of a water surface covered by a colloidal particle monolayer crystal, in both its repulsive-dipole and close-packed van der Waals phases

  19. Tomographic reconstruction by using FPSIRT (Fast Particle System Iterative Reconstruction Technique)

    Energy Technology Data Exchange (ETDEWEB)

    Moreira, Icaro Valgueiro M.; Melo, Silvio de Barros; Dantas, Carlos; Lima, Emerson Alexandre; Silva, Ricardo Martins; Cardoso, Halisson Alberdan C., E-mail: ivmm@cin.ufpe.br, E-mail: sbm@cin.ufpe.br, E-mail: rmas@cin.ufpe.br, E-mail: hacc@cin.ufpe.br, E-mail: ccd@ufpe.br, E-mail: eal@cin.ufpe.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil)

    2015-07-01

    The PSIRT (Particle System Iterative Reconstruction Technique) is a method of tomographic image reconstruction primarily designed to work with configurations suitable for industrial applications. A particle system is an optimization technique, inspired by real physical systems, that associates with the reconstructing material a set of particles with certain physical features, subject to a force field, which can produce movement. The system constantly updates the set of particles by repositioning them in such a way as to approach equilibrium. The elastic potential along a trajectory is a function of the difference between the attenuation coefficient in the current configuration and the corresponding input data. PSIRT has been successfully used to reconstruct simulated and real objects subject to sets of parallel and fan-beam lines at different angles, representing typical gamma-ray tomographic arrangements. One of PSIRT's limitations was its performance, too slow for real-time scenarios. In this work, a reformulation of PSIRT's computational model is presented, which grants the new algorithm, the FPSIRT - Fast Particle System Iterative Reconstruction Technique - a performance up to 200 times faster than PSIRT's. A comparison of their application to real and simulated data from the HSGT, High Speed Gamma Tomograph, is presented. (author)

  1. Pulse pile-up in nuclear particle detection systems with rapidly varying counting rates

    International Nuclear Information System (INIS)

    Datlowe, D.W.

    1977-01-01

    Pulse pile-up in nuclear particle detection systems is the distortion of the measured pulse height distribution which occurs when there is a significant probability that more than one particle will arrive within the detector resolving time. This paper treats the problem in cases where the probability of pile-up varies on a time scale comparable to the rise time of the detector system electronics. These variations introduce structure into the pulse height distributions which cannot occur for a time-independent pile-up probability. Three classes of problems which exemplify these effects are as follows: 1) Pile-up rejection circuits. 2) Cascaded nuclear decays, in which the lifetime for emission of a second X-ray is comparable to the detector rise time. 3) Bursts of particles where the intensity is modulated on a time scale comparable to the detector rise time. These problems are solved computationally by an extension of a numerical technique previously developed. (Auth.)

  2. Core-Shell Particles as Building Blocks for Systems with High Duality Symmetry

    Science.gov (United States)

    Rahimzadegan, Aso; Rockstuhl, Carsten; Fernandez-Corbaton, Ivan

    2018-05-01

    Material electromagnetic duality symmetry requires a system to have equal electric and magnetic responses. Intrinsically dual materials that meet the duality conditions at the level of the constitutive relations do not exist in many frequency bands. Nevertheless, discrete objects like metallic helices and homogeneous dielectric spheres can be engineered to approximate the dual behavior. We exploit the extra degrees of freedom of a core-shell dielectric sphere in a particle optimization procedure. The duality symmetry of the resulting particle is more than 1 order of magnitude better than previously reported nonmagnetic objects. We use T-matrix-based multiscattering techniques to show that the improvement is transferred onto the duality symmetry of composite objects when the core-shell particle is used as a building block instead of homogeneous spheres. These results are relevant for the fashioning of systems with high duality symmetry, which are required for some technologically important effects.

  3. Development of Bioadhesive Chitosan Superporous Hydrogel Composite Particles Based Intestinal Drug Delivery System

    Directory of Open Access Journals (Sweden)

    Hitesh Chavda

    2013-01-01

    Full Text Available Bioadhesive superporous hydrogel composite (SPHC) particles were developed for intestinal delivery of metoprolol succinate and characterized by density, porosity, swelling, morphology, and bioadhesion studies. Chitosan and HPMC were used as the bioadhesive and release-retardant polymers, respectively. A 3² full factorial design was applied to optimize the concentrations of chitosan and HPMC. The drug-loaded bioadhesive SPHC particles were filled into capsules, and the capsules were coated with cellulose acetate phthalate and evaluated for drug content, in vitro drug release, and stability. To ascertain the drug release kinetics, the release profiles were fitted to mathematical models. The prepared system remains bioadhesive for up to eight hours in the intestine and showed Hixson-Crowell release kinetics with anomalous (non-Fickian) drug transport. The application of SPHC polymer particles as a biomaterial carrier opens new insight into bioadhesive drug delivery systems and could be a future platform for the intestinal delivery of other molecules.
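The Hixson-Crowell cube-root law mentioned above relates the drug mass remaining, Wt, to time via W0^(1/3) − Wt^(1/3) = κt. A minimal fitting sketch (a generic helper, not the authors' code; `released` is in the same units as `w0`, e.g. percent with `w0 = 100`):

```python
def hixson_crowell_kappa(times, released, w0=100.0):
    """Least-squares slope through the origin for the Hixson-Crowell
    cube-root law  W0^(1/3) - Wt^(1/3) = kappa * t,
    where Wt = w0 - released is the drug remaining."""
    ys = [w0 ** (1 / 3) - (w0 - r) ** (1 / 3) for r in released]
    return sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)
```

A release profile is consistent with Hixson-Crowell kinetics when these (t, y) pairs fall on a straight line through the origin.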

  4. Application of particle swarm optimization algorithm in the heating system planning problem.

    Science.gov (United States)

    Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi

    2013-01-01

    Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system over a given life cycle time. Because of the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the standard PSO algorithm. Moreover, the results also show the potential to provide useful information for decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem.
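For reference, the generic PSO that the paper's IPSO improves upon can be sketched as follows. This is the textbook algorithm, not the authors' improved variant; all names, parameter values, and the test objective are illustrative:

```python
import random

def pso(cost, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle's velocity blends
    inertia (w) with pulls toward its personal best (c1) and the swarm's
    global best (c2); positions are clamped to the search bounds."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pcost = [cost(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            c = cost(xs[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = xs[i][:], c
                if c < gcost:
                    gbest, gcost = xs[i][:], c
    return gbest, gcost
```

For an HSP-style application, `cost` would encode the life cycle cost of a candidate heating system layout; here a simple sphere function suffices to exercise the optimizer.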

  5. Water and sludge treatment device provided with a system for irradiating by accelerated charged particles

    International Nuclear Information System (INIS)

    Azam, Guy; Bensussan, Andre; Levaillant, Claude; Huber, Harry; Mevel, Emile; Tronc, Dominique.

    1977-01-01

    A treatment system for a fluid made up of water and sludge, provided with a system for irradiating the fluid with a beam of accelerated charged particles. It comprises means for obtaining a constant flow of the fluid to be treated, facilities for monitoring this flow, and an irradiation channel located on the path of the beam, in which the fluid to be treated can flow, a portion of this channel having at least one window transparent to the beam of accelerated particles. A safety system, associated with the system monitoring the characteristics of the beam and with the system monitoring the flow of the fluid to be treated, stops the flow of the fluid and triggers the recycling of any defectively treated fluid.

  6. A variational Bayesian multiple particle filtering scheme for large-dimensional systems

    KAUST Repository

    Ait-El-Fquih, Boujemaa

    2016-06-14

    This paper considers the Bayesian filtering problem in high-dimensional nonlinear state-space systems. In such systems, classical particle filters (PFs) are impractical due to the prohibitive number of particles required to obtain reasonable performance. One approach that has been introduced to overcome this problem is the concept of multiple PFs (MPFs), where the state-space is split into low-dimensional subspaces and a separate PF is applied to each subspace. The remarkable performance of MPF-like filters motivated our investigation into a new strategy that combines the variational Bayesian approach to splitting the state-space with random sampling techniques, to derive a new computationally efficient MPF. The propagation of each particle in the prediction step of the resulting filter requires generating only a single particle, in contrast with standard MPFs, for which a set of (children) particles is required. We present simulation results to evaluate the behavior of the proposed filter and compare its performance against the standard PF and an MPF.
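The baseline that MPF-style methods build on is the bootstrap PF: propagate particles through the dynamics, weight them by the observation likelihood, then resample. A minimal sketch for a scalar toy model (hypothetical model and parameters; an MPF would run one such filter per low-dimensional subspace of the state):

```python
import math, random

def bootstrap_pf(observations, n_particles=500, q=1.0, r=1.0, seed=0):
    """Bootstrap particle filter for the scalar toy model
        x_k = 0.5 * x_{k-1} + N(0, q),   y_k = x_k + N(0, r).
    Returns the posterior-mean estimate of x_k at each step."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # propagate each particle through the state dynamics
        xs = [0.5 * x + rng.gauss(0.0, math.sqrt(q)) for x in xs]
        # weight by the Gaussian likelihood of the observation y
        ws = [math.exp(-0.5 * (y - x) ** 2 / r) for x in xs]
        tot = sum(ws) or 1e-300
        ws = [w / tot for w in ws]
        estimates.append(sum(w * x for w, x in zip(ws, xs)))
        # multinomial resampling to avoid weight degeneracy
        xs = rng.choices(xs, weights=ws, k=n_particles)
    return estimates
```

In one dimension a few hundred particles are plenty; the point of the paper is precisely that this particle count must grow prohibitively with the state dimension, which is what splitting into subspaces avoids.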

  7. A variational Bayesian multiple particle filtering scheme for large-dimensional systems

    KAUST Repository

    Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim

    2016-01-01

    This paper considers the Bayesian filtering problem in high-dimensional nonlinear state-space systems. In such systems, classical particle filters (PFs) are impractical due to the prohibitive number of particles required to obtain reasonable performance. One approach that has been introduced to overcome this problem is the concept of multiple PFs (MPFs), where the state-space is split into low-dimensional subspaces and a separate PF is applied to each subspace. The remarkable performance of MPF-like filters motivated our investigation into a new strategy that combines the variational Bayesian approach to splitting the state-space with random sampling techniques, to derive a new computationally efficient MPF. The propagation of each particle in the prediction step of the resulting filter requires generating only a single particle, in contrast with standard MPFs, for which a set of (children) particles is required. We present simulation results to evaluate the behavior of the proposed filter and compare its performance against the standard PF and an MPF.

  8. Self-organized magnetic particles to tune the mechanical behavior of a granular system

    Science.gov (United States)

    Cox, Meredith; Wang, Dong; Barés, Jonathan; Behringer, Robert P.

    2016-09-01

    Above a certain density, a granular material jams. This property can be controlled either globally, by tuning the packing fraction or applying shear strain, or at the micro-scale, by tuning grain shape, inter-particle friction, or externally controlled organization. Here, we introduce a novel way to change a local granular property by adding a weak anisotropic magnetic interaction between particles. We measure the evolution of the pressure, P, and coordination number, Z, for a packing of 2D photo-elastic disks subject to uniaxial compression. A fraction R_m of the particles have embedded cuboidal magnets. The strength of the magnetic interactions between particles is too weak to have a strong direct effect on P or Z when the system is jammed. However, the magnetic interactions play an important role in the evolution of latent force networks when systems containing a large enough fraction of particles with magnets are driven from unjammed to jammed states. In this case, a statistically stable network of magnetic chains self-organizes before jamming and overlaps with force chains once jamming occurs, strengthening the granular medium. This property opens a novel way to control the mechanical properties of granular materials.
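The coordination number Z measured above is simply the average number of contacts per particle. From disk positions and radii it can be computed by naive pairwise contact counting (an O(N²) sketch with an invented tolerance parameter; production DEM codes use cell lists or neighbor lists instead):

```python
def coordination_number(centers, radii, tol=1e-9):
    """Average contacts per particle, Z, for a set of 2D disks:
    two disks are in contact when the gap between their surfaces
    is <= tol.  Each contact is shared by two particles, hence
    the factor of 2."""
    n = len(centers)
    contacts = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = centers[i][0] - centers[j][0]
            dy = centers[i][1] - centers[j][1]
            gap = (dx * dx + dy * dy) ** 0.5 - (radii[i] + radii[j])
            if gap <= tol:
                contacts += 1
    return 2.0 * contacts / n
```

For frictionless disks in 2D, jamming is conventionally associated with Z reaching the isostatic value of about 4.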

  9. Radial transport processes as a precursor to particle deposition in drinking water distribution systems.

    Science.gov (United States)

    van Thienen, P; Vreeburg, J H G; Blokker, E J M

    2011-02-01

    Various particle transport mechanisms play a role in the build-up of discoloration potential in drinking water distribution networks. In order to enhance our understanding of and ability to predict this build-up, it is essential to recognize and understand their role. Gravitational settling with drag has primarily been considered in this context. However, since flow in water distribution pipes is nearly always in the turbulent regime, turbulent processes should be considered as well. In addition, single-particle effects and forces may affect radial particle transport. In this work, we present an application of a previously published turbulent particle deposition theory to conditions relevant for drinking water distribution systems. We predict quantitatively under which conditions turbophoresis (including the virtual mass effect), the Saffman lift force, and the Magnus force may contribute significantly to sediment transport in the radial direction, and compare these results to experimental observations. The contribution of turbophoresis is mostly limited to large particles (>50 μm) in transport mains and is not expected to play a major role in distribution mains. The Saffman lift force may enhance this process to some degree. The Magnus force is not expected to play any significant role in drinking water distribution systems. © 2010 Elsevier Ltd. All rights reserved.
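As a rough quantitative guide, turbophoresis is usually assessed through the dimensionless particle relaxation time τ⁺ = τ_p·u*²/ν, with the Stokes relaxation time τ_p = ρ_p·d²/(18·ρ_f·ν). The helper below and its example values (a 50 μm sand-like particle, a modest friction velocity) are illustrative assumptions, not numbers from the paper:

```python
def dimensionless_relaxation_time(d, rho_p, u_star,
                                  nu=1.0e-6, rho_f=1000.0):
    """tau+ = tau_p * u*^2 / nu, with the Stokes relaxation time
    tau_p = rho_p * d^2 / (18 * rho_f * nu).
    Defaults are water at ~20 C (nu in m^2/s, rho_f in kg/m^3);
    d in m, rho_p in kg/m^3, u_star (friction velocity) in m/s."""
    tau_p = rho_p * d * d / (18.0 * rho_f * nu)
    return tau_p * u_star ** 2 / nu
```

In the turbulent-deposition literature, turbophoretic deposition is commonly taken to become significant for τ⁺ of order 10 and above, which is why only the largest particles at the higher velocities of transport mains qualify.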

  10. Conditional-sampling spectrograph detection system for fluorescence measurements of individual airborne biological particles

    Science.gov (United States)

    Nachman, Paul; Pinnick, R. G.; Hill, Steven C.; Chen, Gang; Chang, Richard K.; Mayo, Michael W.; Fernandez, Gilbert L.

    1996-03-01

    We report the design and operation of a prototype conditional-sampling spectrograph detection system that can record the fluorescence spectra of individual, micrometer-sized aerosols as they traverse an intense 488-nm intracavity laser beam. The instrument's image-intensified CCD detector is gated by elastic scattering or by undispersed fluorescence from particles that enter the spectrograph's field of view. It records spectra only from particles with preselected scattering-fluorescence levels (a fiber-optic-photomultiplier subsystem provides the gating signal). This conditional-sampling procedure reduces data-handling rates and increases the signal-to-noise ratio by restricting the system's exposures to brief periods when aerosols traverse the beam. We demonstrate these advantages by reliably capturing spectra from individual fluorescent microspheres dispersed in an airstream. The conditional-sampling procedure also permits some discrimination among different types of particles, so that spectra may be recorded from the few interesting particles present in a cloud of background aerosol. We demonstrate such discrimination by measuring spectra from selected fluorescent microspheres in a mixture of two types of microspheres, and from bacterial spores in a mixture of spores and nonfluorescent kaolin particles.

  11. Combining neural networks and signed particles to simulate quantum systems more efficiently

    Science.gov (United States)

    Sellier, Jean Michel

    2018-04-01

    Recently a new formulation of quantum mechanics has been suggested which describes systems by means of ensembles of classical particles provided with a sign. This novel approach mainly consists of two steps: the computation of the Wigner kernel, a multi-dimensional function describing the effects of the potential over the system, and the field-less evolution of the particles, which eventually create new signed particles in the process. Although this method has proved to be extremely advantageous in terms of computational resources - as a matter of fact, it is able to simulate many-body systems in a time-dependent fashion on relatively small machines - the Wigner kernel can represent the bottleneck of simulations of certain systems. Moreover, storing the kernel can be another issue, as the amount of memory needed is cursed by the dimensionality of the system. In this work, we introduce a new technique, based on an appropriately tailored neural network combined with the signed particle formalism, which drastically reduces the computation time and memory required to simulate time-dependent quantum systems. In particular, the suggested neural network is able to compute the Wigner kernel efficiently and reliably without any training, as its entire set of weights and biases is specified by analytical formulas. As a consequence, the amount of memory for quantum simulations drops radically, since the kernel no longer needs to be stored: it is computed by the neural network itself, only on the cells of the (discretized) phase-space which are occupied by particles. As is clearly shown in the final part of this paper, not only does this novel approach drastically reduce the computational time, it also remains accurate. The author believes this work opens the way towards the effective design of quantum devices, with incredible practical implications.

  12. Bell's Inequality for a System Composed of Particles with Different Spins

    International Nuclear Information System (INIS)

    Moradi, Shahpoor

    2009-01-01

    We derive Bell's inequality for two particles with different spins. The inequality is investigated for two systems: one combining spin-1 and spin-1/2, and one combining spin-1/2 and spin-3/2. We show that for these states Bell's inequality is violated.
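For the simplest case of two spin-1/2 particles in the singlet state (not the mixed-spin systems treated in the paper), the violation can be checked numerically from the quantum correlation E(a, b) = −cos(a − b) in the CHSH form of Bell's inequality, whose local bound is 2:

```python
import math

def chsh_singlet(a, a2, b, b2):
    """CHSH combination |E(a,b) - E(a,b') + E(a',b) + E(a',b')| for the
    spin-1/2 singlet, where quantum mechanics predicts the correlation
    E(x, y) = -cos(x - y) for analyzer angles x and y."""
    E = lambda x, y: -math.cos(x - y)
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
```

At the optimal angles (0, π/2) and (π/4, 3π/4) this evaluates to 2√2 ≈ 2.83, the Tsirelson bound, exceeding the local-realist limit of 2.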

  13. The effect of perception anisotropy on particle systems describing pedestrian flows in corridors

    NARCIS (Netherlands)

    Gulikers, L.; Evers, J.H.M.; Muntean, A.; Lyulin, A.

    2012-01-01

    We consider a microscopic model (a system of self-propelled particles) to study the behaviour of a large group of pedestrians walking in a corridor. Our point of interest is the effect of anisotropic interactions on the global behaviour of the crowd. The anisotropy we have in mind reflects the fact

  14. The effect of perception anisotropy on particle systems describing pedestrian flows in corridors

    NARCIS (Netherlands)

    Gulikers, L.; Evers, J.H.M.; Muntean, A.; Lyulin, A.

    2013-01-01

    We consider a microscopic model (a system of self-propelled particles) to study the behaviour of a large group of pedestrians walking in a corridor. Our point of interest is the effect of anisotropic interactions on the global behaviour of the crowd. The anisotropy we have in mind reflects the fact

  15. Daily micro particle distribution of an experimental recirculating aquaculture system – A case study

    DEFF Research Database (Denmark)

    Fernandes, Paulo; Pedersen, Lars-Flemming; Pedersen, Per Bovbjerg

    2014-01-01

    The particle size distribution (PSD) in a recirculating aquaculture system (RAS) was investigated during a 24-h cycle. PSD was analyzed in water sampled at several locations in a recirculation loop containing a 60-μm drum filter, a submerged fixed-bed biofilter and a trickling filter. In relation...

  16. Relativistic formulations with Blankenbecler-Sugar reduction technique for the three-particle system

    International Nuclear Information System (INIS)

    Morioka, S.; Afnan, I.R.

    1980-05-01

    A critical comparison of two types of three-dimensional covariant equations for the three-particle system, obtained by the Blankenbecler-Sugar reduction technique with the Wightman-Garding momenta and with the usual Jacobi variables, is presented. The relations between the relativistic and non-relativistic equations in the low-energy limit are discussed.

  17. A nuclear field treatment of the 3-particle system outside closed-shell nuclei

    International Nuclear Information System (INIS)

    Liotta, R.J.; Silvestre-Brac, B.

    1978-01-01

    The nuclear field treatment of the 3-particle system is carried out to all orders of perturbation theory but without including hole-excitations. It is shown that the theory properly corrects both the Pauli principle violations and the overcompleteness of the basis

  18. Improvement of the Stokesian Dynamics method for systems with finite number of particles

    NARCIS (Netherlands)

    Ichiki, K.

    2002-01-01

    An improvement of the Stokesian Dynamics method for many-particle systems is presented. A direct calculation of the hydrodynamic interaction is used rather than imposing periodic boundary conditions. The two major difficulties concern the accuracy and the speed of calculations. The accuracy discussed

  19. Performance of school bus retrofit systems: ultrafine particles and other vehicular pollutants.

    Science.gov (United States)

    Zhang, Qunfang; Zhu, Yifang

    2011-08-01

    This study evaluated the performance of retrofit systems for diesel-powered school buses, a diesel oxidation catalyst (DOC) muffler and a Spiracle crankcase filtration system (CFS), with regard to ultrafine particles (UFPs) and other air pollutants from tailpipe emissions and inside bus cabins. Tailpipe emissions and in-cabin air pollutant levels were measured before and after retrofitting while the buses were idling and during actual pick-up/drop-off routes. Retrofit systems significantly reduced tailpipe emissions, with a reduction of 20-94% of total particles with both DOC and CFS installed. However, no unequivocal decrease was observed for in-cabin air pollutants after retrofitting. The AC/fan unit and the surrounding air pollutant concentrations played more important roles in determining the in-cabin air quality of school buses than did the retrofit technologies. Although current retrofit systems reduce children's exposure while waiting to board at a bus station, retrofitting by itself does not protect children satisfactorily from in-cabin particle exposures. Turning on the bus engine increased in-cabin UFP levels significantly only when the wind blew from the bus' tailpipe toward its hood with its windows open. This indicates that wind direction and window position are significant factors determining how much self-released tailpipe emissions may penetrate into the bus cabin. The use of an air purifier was found to remove up to 50% of in-cabin particles, which might be an alternative short-to-medium-term strategy to protect children's health.

  20. Configurational entropy and effective temperature in systems of active Brownian particles

    NARCIS (Netherlands)

    Preisler, Zdeněk; Dijkstra, Marjolein

    2016-01-01

    We propose a method to determine the effective density of states and configurational entropy in systems of active Brownian particles by measuring the probability distribution function of potential energy at varying temperatures. Assuming that the entropy is a continuous and monotonically increasing

  1. Description of identical particles via gauged matrix models: a generalization of the Calogero-Sutherland system

    International Nuclear Information System (INIS)

    Park, Jeong-Hyuck

    2003-01-01

    We elaborate the idea that matrix models equipped with gauge symmetry provide a natural framework to describe identical particles. After demonstrating the general prescription, we study an exactly solvable harmonic-oscillator-type gauged matrix model. The model gives a generalization of the Calogero-Sutherland system where the strength of the inverse-square potential is not fixed but dynamical, bounded from below.

  2. The Taylor-expansion method of moments for the particle system with bimodal distribution

    Directory of Open Access Journals (Sweden)

    Liu Yan-Hua

    2013-01-01

    Full Text Available This paper derives the multipoint Taylor expansion method of moments for the bimodal particle system. The collision effects are modeled by internal and external coagulation terms. Theoretical analysis and numerical tests are performed to verify the current model.

  3. Quadratic integrals of motion for identical particle systems in quantum case

    International Nuclear Information System (INIS)

    Brije, I.; Gonera, S.; Kosinski, P.; Maslanka, P.; Giller, S.

    2005-01-01

    Quantum dynamical systems of identical particles admitting an additional integral of motion quadratic in the momenta are studied. It is found that there is an appropriate ordering prescription that makes it possible to convert the classical integrals of motion into their quantum analogues. The relation of these integrals to the separation of variables in the Schroedinger equation is analyzed.

  4. Development of charged particle nuclear reaction data retrieval system on IntelligentPad

    International Nuclear Information System (INIS)

    Ohbayashi, Yosihide; Masui, Hiroshi; Aoyama, Shigeyoshi; Kato, Kiyoshi; Chiba, Masaki

    1999-01-01

    A newly designed retrieval system for the charged-particle nuclear reaction database has been developed with the IntelligentPad architecture. We designed a network-based (server-client) data retrieval system, with the client system built on Windows 95/98/NT using IntelligentPad. We set the future aim of our database system toward the 'effective' use of nuclear reaction data: I. 'Re-produce, Re-edit, Re-use', II. 'Circulation, Evolution', III. 'Knowledge discovery'. Thus, further developments are under way. (author)

  5. An expert system for automatic mesh generation for Sn particle transport simulation in parallel environment

    International Nuclear Information System (INIS)

    Apisit, Patchimpattapong; Alireza, Haghighat; Shedlock, D.

    2003-01-01

    An expert system for generating an effective mesh distribution for the SN particle transport simulation has been developed. This expert system consists of two main parts: 1) an algorithm for generating an effective mesh distribution in a serial environment, and 2) an algorithm for inference of an effective domain decomposition strategy for parallel computing. For the first part, the algorithm prepares an effective mesh distribution considering problem physics and the spatial differencing scheme. For the second part, the algorithm determines a parallel-performance-index (PPI), which is defined as the ratio of the granularity to the degree-of-coupling. The parallel-performance-index provides expected performance of an algorithm depending on computing environment and resources. A large index indicates a high granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems. (authors)

  6. An expert system for automatic mesh generation for Sn particle transport simulation in parallel environment

    Energy Technology Data Exchange (ETDEWEB)

    Apisit, Patchimpattapong [Electricity Generating Authority of Thailand, Office of Corporate Planning, Bangkruai, Nonthaburi (Thailand); Alireza, Haghighat; Shedlock, D. [Florida Univ., Department of Nuclear and Radiological Engineering, Gainesville, FL (United States)

    2003-07-01

    An expert system for generating an effective mesh distribution for the SN particle transport simulation has been developed. This expert system consists of two main parts: 1) an algorithm for generating an effective mesh distribution in a serial environment, and 2) an algorithm for inference of an effective domain decomposition strategy for parallel computing. For the first part, the algorithm prepares an effective mesh distribution considering problem physics and the spatial differencing scheme. For the second part, the algorithm determines a parallel-performance-index (PPI), which is defined as the ratio of the granularity to the degree-of-coupling. The parallel-performance-index provides expected performance of an algorithm depending on computing environment and resources. A large index indicates a high granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems. (authors)

  7. Test of the photon detection system for the LHCb RICH Upgrade in a charged particle beam

    CERN Document Server

    Baszczyk, M.K.

    2017-01-16

    The LHCb detector will be upgraded to make more efficient use of the available luminosity at the LHC in Run III and extend its potential for discovery. The Ring Imaging Cherenkov detectors are key components of the LHCb detector for particle identification. In this paper we describe the setup and the results of tests in a charged particle beam, carried out to assess prototypes of the upgraded opto-electronic chain from the Multi-Anode PMT photosensor to the readout and data acquisition system.

  8. Numerical Treatment of the Boltzmann Equation for Self-Propelled Particle Systems

    Directory of Open Access Journals (Sweden)

    Florian Thüroff

    2014-11-01

    Full Text Available Kinetic theories constitute one of the most promising tools to decipher the characteristic spatiotemporal dynamics in systems of actively propelled particles. In this context, the Boltzmann equation plays a pivotal role, since it provides a natural translation between a particle-level description of the system’s dynamics and the corresponding hydrodynamic fields. Yet, the intricate mathematical structure of the Boltzmann equation substantially limits the progress toward a full understanding of this equation by solely analytical means. Here, we propose a general framework to numerically solve the Boltzmann equation for self-propelled particle systems in two spatial dimensions and with arbitrary boundary conditions. We discuss potential applications of this numerical framework to active matter systems and use the algorithm to give a detailed analysis to a model system of self-propelled particles with polar interactions. In accordance with previous studies, we find that spatially homogeneous isotropic and broken-symmetry states populate two distinct regions in parameter space, which are separated by a narrow region of spatially inhomogeneous, density-segregated moving patterns. We find clear evidence that these three regions in parameter space are connected by first-order phase transitions and that the transition between the spatially homogeneous isotropic and polar ordered phases bears striking similarities to liquid-gas phase transitions in equilibrium systems. Within the density-segregated parameter regime, we find a novel stable limit-cycle solution of the Boltzmann equation, which consists of parallel lanes of polar clusters moving in opposite directions, so as to render the overall symmetry of the system’s ordered state nematic, despite purely polar interactions on the level of single particles.

  9. Hydrosilylated Porous Silicon Particles Function as an Intravitreal Drug Delivery System for Daunorubicin

    Science.gov (United States)

    Hartmann, Kathrin I.; Nieto, Alejandra; Wu, Elizabeth C.; Freeman, William R.; Kim, Jae Suk; Chhablani, Jay; Sailor, Michael J.

    2013-01-01

    Abstract Purpose To evaluate in vivo ocular safety of an intravitreal hydrosilylated porous silicon (pSi) drug delivery system along with the payload of daunorubicin (DNR). Methods pSi microparticles were prepared from the electrochemical etching of highly doped, p-type Si wafers and an organic linker was attached to the Si-H terminated inner surface of the particles by thermal hydrosilylation of undecylenic acid. DNR was bound to the carboxy terminus of the linker as a drug-loading strategy. DNR release from hydrosilylated pSi particles was confirmed in the excised rabbit vitreous using liquid chromatography–electrospray ionization–multistage mass spectrometry. Both empty and DNR-loaded hydrosilylated pSi particles were injected into the rabbit vitreous and the degradation and safety were studied for 6 months. Results The mean pSi particle size was 30×46×15 μm with an average pore size of 15 nm. Drug loading was determined as 22 μg per 1 mg of pSi particles. An ex vivo drug release study showed that intact DNR was detected in the rabbit vitreous. An in vivo ocular toxicity study did not reveal clinical or pathological evidence of any toxicity during a 6-month observation. Hydrosilylated pSi particles, either empty or loaded with DNR, demonstrated slow elimination kinetics from the rabbit vitreous without ocular toxicity. Conclusions Hydrosilylated pSi particles can host a large quantity of DNR by a covalent loading strategy and DNR can be slowly released into the vitreous without the ocular toxicity that would appear if an equivalent quantity of free drug were injected. PMID:23448595

  10. Global Positioning System Energetic Particle Data: The Next Space Weather Data Revolution

    Science.gov (United States)

    Knipp, Delores J.; Giles, Barbara L.

    2016-01-01

    The Global Positioning System (GPS) has revolutionized the process of getting from point A to point B, and so much more. A large fraction of the world's population relies on GPS (and its counterparts from other nations) for precision timing, location, and navigation. Most GPS users are unaware that the spacecraft providing the signals they rely on are operating in a very harsh space environment: the radiation belts, where energetic particles trapped in Earth's magnetic field dash about at nearly the speed of light. These subatomic particles relentlessly pummel GPS satellites, so by design every GPS satellite and its sensors are radiation hardened. Each spacecraft carries particle detectors that provide health and status data to system operators. Although these data reveal much about the state of the space radiation environment, heretofore they have been available only to system operators and supporting scientists. Research scientists have long sought a policy shift to allow more general access. With the release of the National Space Weather Strategy and Action Plan organized by the White House Office of Science and Technology Policy (OSTP), a sample of these data has been made available to space weather researchers. Los Alamos National Laboratory (LANL) and the National Centers for Environmental Information released a month's worth of GPS energetic particle data from an interval of heightened space weather activity in early 2014, with the hope of stimulating integration of these data sets into the research arena. Even before the public data release, GPS support scientists from LANL showed the extraordinary promise of these data.

  11. Coordinate asymptotics of the (3→3) wave functions for a three charged particle system

    International Nuclear Information System (INIS)

    Merkur'ev, S.P.

    1977-01-01

    The coordinate asymptotics of the (3 → 3) wave functions is constructed for a three-particle system with Coulomb interaction in the scattering problem. The cases of (3 → 3) and (3 → 2) processes are considered, in which the particles are unbound in the initial state. To construct the coordinate asymptotics, basis functions satisfying the Schroedinger equation in the eikonal approximation are used. The method of constructing the coordinate asymptotics of the wave functions far from the special directions is described. The asymptotic form of the wave function is studied in the neighborhood of the special directions, and the singularities of the (3 → 3) scattering amplitude are described. All results are also given for a system with only two charged particles; this model is of special interest because it describes the ppn system, the study of which is of great importance in nuclear physics. The final formulae are discussed for the most general case of three charged particles. Boundary problems for the Schroedinger equation are shown to provide the only way of defining the (3 → 3) wave functions. It is pointed out that near the special directions the coordinate asymptotics of the wave function is given with an accuracy sufficient to pose such a boundary problem.

  12. Motion of the relativistic charged particle in an axisymmetric toroidal system

    Energy Technology Data Exchange (ETDEWEB)

    Chiyoda, K; Sugimoto, H [Electrotechnical Labs., Sakura, Ibaraki (Japan)

    1980-01-01

    The relativistic theory of the motion of a single particle by Morozov and Solov'ev is summarized for convenience of the present study. A drift equation is then given, and four constants of motion, E₀, J⊥, J, and J∥, are obtained. These constants of motion are used in analyzing the particle motion in an axisymmetric toroidal system. The displacement of the particle from the magnetic surface, Δr, and the period of the banana motion, τ, are obtained. The relativistic expressions for the displacement Δr and the period τ are obtained by multiplying the corresponding nonrelativistic expressions by (1 − v∥²/c²)^(−1/2), where the relativistic expression for Δr uses the relativistic mass in the Larmor radius r_L.
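The relativistic correction described above is the single factor (1 − v∥²/c²)^(−1/2) multiplying the nonrelativistic banana width and bounce period; a one-line sketch (hypothetical helper name):

```python
import math

def gamma_parallel(v_par, c=299792458.0):
    """Factor (1 - v_par^2 / c^2)^(-1/2) by which the nonrelativistic
    displacement and banana period are multiplied, per the abstract;
    v_par is the parallel velocity, c the speed of light in m/s."""
    return 1.0 / math.sqrt(1.0 - (v_par / c) ** 2)
```

For a thermal particle the factor is essentially 1; it only matters for runaway or otherwise highly energetic particles with v∥ approaching c.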

  13. Problem on eigenfunctions and eigenvalues for effective Hamiltonians in pair channels of four-particle systems

    International Nuclear Information System (INIS)

    Gurbanovich, N.S.; Zelenskaya, I.N.

    1976-01-01

    The solution of the eigenfunction and eigenvalue problem for the effective Hamiltonians H̄_p in the two-particle channels corresponding to the division of four particles into the groups (3,1) and (2,2) is very essential in the four-body problem as applied to nuclear reactions. The interaction V̄_p in each channel may be written in the form of an integral operator which takes account of the structure of the target nucleus or of the incident particle and satisfies an integral equation. Assuming the two-particle potentials to be central, it is possible to expand the effective interactions V̄_p in partial waves and write the radial equation for H̄_p. In the on-mass-shell approximation the radial equations for the eigenfunctions of H̄_p are reduced to a system of algebraic equations. The coefficients of the latter are expressed through the Fourier images of products of wave functions of bound clusters and the two-particle central potential, which are localized in momentum space

  14. Observation of particle morphology of colloidal systems by conventional SEM with an improved specimen preparation technique.

    Science.gov (United States)

    Xu, Jing; Hou, Zhaosheng; Yuan, Xiaojiao; Guo, Hong

    2011-08-01

    On the basis of our previous report that polymer emulsions with different viscosities can be investigated by conventional scanning electron microscopy (SEM), we have developed an improved specimen preparation technique for obtaining the particle morphology and size of colloidal silver, collagen, glutin, and polymer microspheres. In this study, we expect to provide a means for characterizing the three-dimensional surface microstructure of colloidal particles. Dilution of the samples with an appropriate volatile solvent such as ethanol is effective for SEM specimen preparation. At a proper ratio between sample and ethanol, the colloidal particles are dispersed uniformly in ethanol and then deposited evenly on the substrate. Different drying methods are studied to find a proper drying condition in which the small-molecule solvent is removed without destroying the natural particle morphology. The effects of ethanol in the specimen preparation process are described by analyzing the physicochemical properties of ethanol. The specimen preparation technique is simple and can be carried out in a common laboratory for characterizing the particle morphology of colloidal systems. Copyright © 2010 Wiley-Liss, Inc.

  15. Development of vapor deposited silica sol-gel particles for use as a bioactive materials system.

    Science.gov (United States)

    Snyder, Katherine L; Holmes, Hallie R; VanWagner, Michael J; Hartman, Natalie J; Rajachar, Rupak M

    2013-06-01

    Silica-based sol-gel and bioglass materials are used in a variety of biomedical applications including the surface modification of orthopedic implants and tissue engineering scaffolds. In this work, a simple system for vapor depositing silica sol-gel nano- and micro-particles onto substrates using nebulizer technology has been developed and characterized. Particle morphology, size distribution, and degradation can easily be controlled through key formulation and manufacturing parameters including water:alkoxide molar ratio, pH, deposition time, and substrate character. These particles can be used as a means to rapidly modify substrate surface properties, including surface hydrophobicity (contact angle changes >15°) and roughness (RMS roughness changes of up to 300 nm), creating unique surface topography. Ions (calcium and phosphate) were successfully incorporated into the particles and induced apatite-like mineral formation upon exposure to simulated body fluid. Preosteoblasts (MC3T3) cultured with these particles showed up to twice the adhesivity within 48 h when compared to controls, potentially indicating an increase in cell proliferation, with the effect likely due to both the modified substrate properties and the release of silica ions. This novel method has the potential to be used with implants and tissue engineering materials to influence cell behavior including attachment, proliferation, and differentiation via cell-material interactions to promote osteogenesis. Copyright © 2012 Wiley Periodicals, Inc.

  16. Patterns of particle distribution in multiparticle systems by random walks with memory enhancement and decay

    Science.gov (United States)

    Tan, Zhi-Jie; Zou, Xian-Wu; Huang, Sheng-You; Zhang, Wei; Jin, Zhun-Zhi

    2002-07-01

    We investigate the pattern of particle distribution and its evolution with time in multiparticle systems using the model of random walks with memory enhancement and decay. This model describes some biological intelligent walks. With decrease in the memory decay exponent α, the distribution of particles changes from a random dispersive pattern to a locally dense one, and then returns to the random one. Correspondingly, the fractal dimension D_f,p characterizing the distribution of particle positions increases from a low value to a maximum and then decreases to the low one again. This is determined by the degree of overlap of regions consisting of sites with remanent information. The second moment of the density, ρ^(2), was introduced to investigate the inhomogeneity of the particle distribution. The dependence of ρ^(2) on α is similar to that of D_f,p on α. ρ^(2) increases with time as a power law in the process of adjusting the particle distribution, and then tends to a stable equilibrium value.

  17. A study of Monte Carlo methods for weak approximations of stochastic particle systems in the mean-field?

    KAUST Repository

    Haji Ali, Abdul Lateef

    2016-01-01

    I discuss using single level and multilevel Monte Carlo methods to compute quantities of interest of a stochastic particle system in the mean-field. In this context, the stochastic particles follow a coupled system of Ito stochastic differential equations (SDEs). Moreover, this stochastic particle system converges to a stochastic mean-field limit as the number of particles tends to infinity. I start by recalling the results of applying different versions of Multilevel Monte Carlo (MLMC) for particle systems, both with respect to time steps and the number of particles and using a partitioning estimator. Next, I expand on these results by proposing the use of our recent Multi-index Monte Carlo method to obtain improved convergence rates.
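
    The telescoping multilevel construction recalled above can be sketched for a toy scalar SDE. The drift, diffusion, sample sizes, and function names below are illustrative assumptions, not values from the abstract; the essential MLMC ingredient is that the fine and coarse paths at each level are coupled through shared Brownian increments:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Illustrative scalar SDE dX = -X dt + 0.5 dW with X(0) = 1, so E[X(1)] = e^(-1)
T, X0 = 1.0, 1.0

def euler_mean(n_steps, n_paths):
    """Plain Euler-Maruyama estimate of E[X_T] at a single discretization level."""
    dt = T / n_steps
    x = np.full(n_paths, X0)
    for _ in range(n_steps):
        x += -x * dt + 0.5 * np.sqrt(dt) * rng.normal(size=n_paths)
    return x.mean()

def level_difference(n_fine, n_paths):
    """Estimate E[X_T^fine - X_T^coarse], coupling the two discretizations
    through shared Brownian increments -- the key MLMC ingredient."""
    dt_f = T / n_fine
    dW = np.sqrt(dt_f) * rng.normal(size=(n_paths, n_fine))
    xf = np.full(n_paths, X0)
    for k in range(n_fine):
        xf += -xf * dt_f + 0.5 * dW[:, k]
    dt_c = T / (n_fine // 2)
    xc = np.full(n_paths, X0)
    for k in range(n_fine // 2):
        # one coarse step consumes two fine Brownian increments
        xc += -xc * dt_c + 0.5 * (dW[:, 2 * k] + dW[:, 2 * k + 1])
    return (xf - xc).mean()

def mlmc_estimate(levels=5, m0=4, n_paths=20000):
    # telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)]
    est = euler_mean(m0, n_paths)
    for l in range(1, levels):
        est += level_difference(m0 * 2**l, n_paths // 2**l)
    return est

est = mlmc_estimate()
```

    Because the coupled differences have small variance, the higher levels need far fewer paths than the coarse level, which is where the MLMC cost saving comes from.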

  19. Irradiated-Microsphere Gamma Analyzer (IMGA): an integrated system for HTGR coated particle fuel performance assessment

    International Nuclear Information System (INIS)

    Kania, M.J.; Valentine, K.H.

    1980-02-01

    The Irradiated-Microsphere Gamma Analyzer (IMGA) System, designed and built at ORNL, provides the capability of making statistically accurate failure fraction measurements on irradiated HTGR coated particle fuel. The IMGA records the gamma-ray energy spectra from fuel particles and performs quantitative analyses on these spectra; then, using chemical and physical properties of the gamma emitters, it makes a failed/nonfailed decision concerning the ability of the coatings to retain fission products. Actual retention characteristics of the coatings are determined by measuring activity ratios for certain gamma emitters, such as 137Cs/95Zr and 144Ce/95Zr for metallic fission product retention and 134Cs/137Cs as an indirect measure of gaseous fission product retention. Data from IMGA (which can be put in the form of n failures observed in N examinations) can be accurately described by the binomial probability distribution model. Using this model, a mathematical relationship between IMGA data (n, N), failure fraction, and confidence level was developed. To determine failure fractions of less than or equal to 1% at confidence levels near 95%, this model dictates that from several hundred to several thousand particles must be examined. The automated particle handler of the IMGA system provides this capability. As a demonstration of failure fraction determination, fuel rod C-3-1 from the OF-2 irradiation capsule was analyzed and failure fraction statistics were applied. Results showed that, at the 1% failure fraction level with a 95% confidence level, the fissile particle batch could not meet requirements; however, the fertile particle batch exceeded these requirements for the given irradiation temperature and burnup
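
    The binomial relationship described above — n observed failures among N examined particles versus failure fraction and confidence level — can be sketched for the simplest zero-failure case. The function name is ours; the abstract itself only quotes the resulting sample sizes ("several hundred to several thousand particles"):

```python
import math

def min_particles_zero_failures(failure_fraction, confidence):
    """Smallest N such that observing zero failures among N examined
    particles rules out a true failure fraction >= failure_fraction at
    the given confidence level, under the binomial model:
        P(0 failures in N | f) = (1 - f)^N <= 1 - confidence
    """
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - failure_fraction))

# Demonstrating a failure fraction <= 1% at 95% confidence takes about
# 300 particles even when not a single failure is observed.
n_required = min_particles_zero_failures(0.01, 0.95)
```

    Tightening the target failure fraction by a factor of ten pushes the required sample into the thousands, consistent with the ranges quoted in the abstract.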

  20. Wave–particle interactions in a resonant system of photons and ion-solvated water

    Energy Technology Data Exchange (ETDEWEB)

    Konishi, Eiji, E-mail: konishi.eiji.27c@st.kyoto-u.ac.jp

    2017-02-26

    Highlights: • We consider a QED model of rotating water molecules with ion solvation effects. • The equations of motion are cast in terms of a conventional free electron laser. • We offer a new quantum coherence mechanism induced by collective instability. - Abstract: We investigate a laser model for a resonant system of photons and ion cluster-solvated rotating water molecules, in which the ions in the cluster are identical, have very low non-relativistic velocities, and move parallel to a static electric field induced in a single direction. This model combines Dicke superradiation with wave–particle interaction. As a result, we find that the equations of motion of the system are expressed in terms of a conventional free electron laser system. This result leads to a mechanism for dynamical coherence, induced by collective instability in the wave–particle interaction.

  1. openSE: a Systems Engineering Framework Particularly Suited to Particle Accelerator Studies and Development Projects

    Energy Technology Data Exchange (ETDEWEB)

    Bonnal, P. [CERN; Féral, B. [CERN; Kershaw, K. [CERN; Nicquevert, B. [CERN; Baudin, M. [Ecole Normale Superieure; Lari, L. [ESS, Lund; Le Cardinal, J. [Chatenay-Malabry, Ecole Centrale

    2016-07-15

    Particle accelerator projects share many characteristics with industrial projects. However, experience has shown that best practice of industrial project management is not always well suited to particle accelerator projects. Major differences include the number and complexity of technologies involved, the importance of collaborative work, development phases that can last more than a decade, and the importance of telerobotics and remote handling to address future preventive and corrective maintenance requirements due to induced radioactivity, to cite just a few. The openSE framework is a systems engineering and project management framework specifically designed for scientific facilities’ systems and equipment studies and development projects. Best practices in project management, in systems and requirements engineering, in telerobotics and remote handling, and in radiation safety management were used as sources of inspiration, together with analysis of current practices surveyed at CERN, GSI and ESS.

  2. An efficient plant viral expression system generating orally immunogenic Norwalk virus-like particles.

    Science.gov (United States)

    Santi, Luca; Batchelor, Lance; Huang, Zhong; Hjelm, Brooke; Kilbourne, Jacquelyn; Arntzen, Charles J; Chen, Qiang; Mason, Hugh S

    2008-03-28

    Virus-like particles (VLPs) derived from enteric pathogens like Norwalk virus (NV) are well suited to study oral immunization. We previously described stable transgenic plants that accumulate recombinant NV-like particles (rNVs) that were orally immunogenic in mice and humans. The transgenic approach suffers from long generation time and modest level of antigen accumulation. We now overcome these constraints with an efficient tobacco mosaic virus (TMV)-derived transient expression system using leaves of Nicotiana benthamiana. We produced properly assembled rNV at 0.8 mg/g leaf 12 days post-infection (dpi). Oral immunization of CD1 mice with 100 or 250 microg/dose of partially purified rNV elicited systemic and mucosal immune responses. We conclude that the plant viral transient expression system provides a robust research tool to generate abundant quantities of rNV as enriched, concentrated VLP preparations that are orally immunogenic.

  3. Passive Target Tracking in Non-cooperative Radar System Based on Particle Filtering

    Institute of Scientific and Technical Information of China (English)

    LI Shuo; TAO Ran

    2006-01-01

    We propose a target tracking method based on particle filtering (PF) to solve the nonlinear, non-Gaussian target-tracking problem in bistatic radar systems using external radiation sources. The traditional nonlinear state estimation method is extended Kalman filtering (EKF), which performs a first-order Taylor series expansion. This causes inaccurate or even divergent estimates when the target dynamics are highly nonlinear or the noise variance is large. Besides, Kalman filtering is the optimal solution only under a Gaussian noise assumption and is not suitable for non-Gaussian conditions. PF is a statistical filtering method based on Monte Carlo simulation that uses random samples (particles) to approximate the posterior probability density of the system's random variables. This method can be used in any nonlinear stochastic system. Simulations show that PF achieves higher accuracy than the traditional EKF.
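
    The contrast drawn above between EKF and PF can be illustrated with a minimal bootstrap (sequential importance resampling) particle filter. The scalar growth model and all parameters below are a standard textbook benchmark chosen for illustration; they are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard univariate nonlinear growth benchmark (illustrative only):
#   x_k = 0.5 x_(k-1) + 25 x_(k-1) / (1 + x_(k-1)^2) + v_k,   y_k = x_k^2 / 20 + w_k
def transition(x):
    return 0.5 * x + 25.0 * x / (1.0 + x**2) + rng.normal(0.0, 1.0, size=x.shape)

def likelihood(y, x):
    # p(y | x) up to a constant, for unit-variance Gaussian measurement noise
    return np.exp(-0.5 * (y - x**2 / 20.0) ** 2)

n_particles = 500
particles = rng.normal(0.0, 2.0, n_particles)
x_true, estimates = 0.1, []
for _ in range(30):
    # simulate the true system and one noisy measurement
    x_true = 0.5 * x_true + 25.0 * x_true / (1.0 + x_true**2) + rng.normal()
    y = x_true**2 / 20.0 + rng.normal()
    # 1. propagate each particle through the transition model
    particles = transition(particles)
    # 2. weight the particles by the measurement likelihood
    w = likelihood(y, particles) + 1e-300   # guard against total underflow
    w /= w.sum()
    # 3. form the posterior-mean estimate, then resample to fight degeneracy
    estimates.append(float(np.sum(w * particles)))
    particles = rng.choice(particles, size=n_particles, p=w)
```

    The multimodality induced by the squared measurement is exactly the situation where a first-order linearization (EKF) breaks down while the sample-based posterior of the PF does not.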

  4. A Time-of-Flight System for Low Energy Charged Particles

    Science.gov (United States)

    Giordano, Micheal; Sadwick, Krystalyn; Fletcher, Kurt; Padalino, Stephen

    2013-10-01

    A time-of-flight system has been developed to measure the energy of charged particles in the keV range. Positively charged ions passing through very thin carbon films mounted on grids generate secondary electrons. These electrons are accelerated by a -2000 V grid bias towards a grounded channeltron electron multiplier (CEM) which amplifies the signal. Two CEM detector assemblies are mounted 23.1 cm apart along the path of the ions. An ion generates a start signal by passing through the first CEM and a stop signal by passing through the second. The start and stop signals generate a time-of-flight spectrum via conventional electronics. Higher energy alpha particles from radioactive sources have been used to test the system. This time-of-flight system will be deployed to measure the energies of 15 to 30 keV ions produced by a duoplasmatron ion source that is used to characterize ICF detectors.
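
    The kinematics behind the 23.1 cm flight path can be sketched non-relativistically (E = ½mv², v = L/t). The function names are ours; only the baseline length and the 15 to 30 keV energy range come from the abstract:

```python
import math

AMU = 1.66053906660e-27   # kg per atomic mass unit
EV = 1.602176634e-19      # J per eV
L = 0.231                 # m, CEM separation quoted in the abstract

def flight_time_ns(energy_keV, mass_amu):
    """Flight time over the baseline L for a non-relativistic ion."""
    v = math.sqrt(2.0 * energy_keV * 1e3 * EV / (mass_amu * AMU))
    return L / v * 1e9

def ion_energy_keV(t_ns, mass_amu):
    """Invert the flight time back to kinetic energy (E = m v^2 / 2)."""
    v = L / (t_ns * 1e-9)
    return 0.5 * mass_amu * AMU * v**2 / EV / 1e3

# a 20 keV proton crosses the 23.1 cm baseline in roughly 118 ns
t_proton = flight_time_ns(20.0, 1.00728)
```

    Timescales of about a hundred nanoseconds are comfortably within reach of conventional start/stop timing electronics, which is consistent with the setup described.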

  5. A measure for isotropy-equilibrium degree of a multi-particle system

    International Nuclear Information System (INIS)

    Liu Zhiqing; Li Runze; Xu Mingmei; Liu Lianshou

    2008-01-01

    Aiming at using sphericity as a tool to study the isotropy-equilibrium property of a multi-particle system, in particular the hadronic final state (IFS) produced in instanton-induced DIS events, we discuss in detail the dependence of sphericity on the multiplicity and the multiplicity distribution, as well as on the isotropy degree of the system. A rotationally symmetric model with a fluctuating isotropy degree is constructed, which can fit the mean and width of the sphericity of the Monte Carlo IFS results simultaneously. The IFS from the Monte Carlo simulation is found to be not ideally isotropic but has a probability of 4.7% of being isotropic within an error of 5%. The results provide a description of how far the IFS departs from equilibrium. The method developed is applicable to any Monte Carlo generated multi-particle system for which the isotropy-equilibrium property is significant. (authors)

  6. Parameter estimation of fractional-order chaotic systems by using quantum parallel particle swarm optimization algorithm.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.
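
    A plain particle swarm optimizer can be sketched as follows; this is the classical PSO, not the quantum-parallel QPPSO variant, whose update rules the abstract does not specify in detail. The objective here is a toy stand-in for the parameter-estimation error surface, and all coefficients are conventional defaults:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Toy objective standing in for the parameter-estimation error."""
    return np.sum(x**2, axis=-1)

def pso(objective, dim=3, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), objective(pos)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        # velocity update: inertia + cognitive pull + social pull
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        # refresh personal and global bests
        val = objective(pos)
        improved = val < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

best, best_val = pso(sphere)
```

    In a real parameter-estimation setting the objective would instead integrate the candidate fractional-order system and return the trajectory-matching error against observed data.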

  7. Nonequilibrium mode-coupling theory for dense active systems of self-propelled particles.

    Science.gov (United States)

    Nandi, Saroj Kumar; Gov, Nir S

    2017-10-25

    The physics of active systems of self-propelled particles, in the regime of a dense liquid state, is an open puzzle of great current interest, both for statistical physics and because such systems appear in many biological contexts. We develop a nonequilibrium mode-coupling theory (MCT) for such systems, where activity is included as a colored noise with the particles having a self-propulsion force f_0 and a persistence time τ_p. Using the extended MCT and a generalized fluctuation-dissipation theorem, we calculate the effective temperature T_eff of the active fluid. The nonequilibrium nature of the systems is manifested through a time-dependent T_eff that approaches a constant in the long-time limit, which depends on the activity parameters f_0 and τ_p. We find, phenomenologically, that this long-time limit is captured by the potential energy of a single, trapped active particle (STAP). Through a scaling analysis close to the MCT glass transition point, we show that τ_α, the α-relaxation time, behaves as τ_α ∼ f_0^(−2γ), where γ = 1.74 is the MCT exponent for the passive system. τ_α may increase or decrease as a function of τ_p depending on the type of active force correlations, but the behavior is always governed by the same value of the exponent γ. Comparison with the numerical solution of the nonequilibrium MCT and simulation results gives excellent agreement with the scaling analysis.

  8. Final Report: Model interacting particle systems for simulation and macroscopic description of particulate suspensions

    Energy Technology Data Exchange (ETDEWEB)

    Peter J. Mucha

    2007-08-30

    Suspensions of solid particles in liquids appear in numerous applications, from environmental settings like river silt, to industrial systems of solids transport and water treatment, and biological flows such as blood flow. Despite their importance, much remains unexplained about these complicated systems. Mucha's research aims to improve understanding of basic properties of suspensions through a program of simulating model interacting particle systems with critical evaluation of proposed continuum equations, in close collaboration with experimentalists. Natural to this approach, the original proposal centered around collaboration with studies already conducted in various experimental groups. However, as was detailed in the 2004 progress report, following the first year of this award, a number of the questions from the original proposal were necessarily redirected towards other specific goals because of changes in the research programs of the proposed experimental collaborators. Nevertheless, the modified project goals and the results that followed from those goals maintain close alignment with the main themes of the original proposal, improving efficient simulation and macroscopic modeling of sedimenting and colloidal suspensions. In particular, the main investigations covered under this award have included: (1) Sedimentation instabilities, including the sedimentation analogue of the Rayleigh-Taylor instability (for heavy, particle-laden fluid over lighter, clear fluid). (2) Ageing dynamics of colloidal suspensions at concentrations above the glass transition, using simplified interactions. (3) Stochastic reconstruction of velocity-field dependence for particle image velocimetry (PIV). (4) Stochastic modeling of the near-wall bias in 'nano-PIV'. (5) Distributed Lagrange multiplier simulation of the 'internal splash' of a particle falling through a stable stratified interface. (6) Fundamental study of velocity fluctuations in sedimentation

  9. FIELD COMPARISONS OF DUAL SMPS-APS SYSTEMS TO MEASURE INDOOR-OUTDOOR PARTICLE SIZE DISTRIBUTIONS

    Science.gov (United States)

    Simultaneous measurements of particle size distributions across multiple locations can provide critical information to accurately assess human exposure to particles. These data are very useful to describe indoor-outdoor particle relationships, outdoor particle penetration thro...

  10. Particles and emissions from a diesel engine equipped with a humid air motor system

    Energy Technology Data Exchange (ETDEWEB)

    Nord, Kent; Zurita, Grover; Tingvall, Bror; Haupt, Dan [Luleaa Univ. of Technology (Sweden). Div. of Environmental Technology

    2002-02-01

    A system for the reduction of NOx, the humid air motor (HAM) system, has been connected to an eleven-litre diesel engine. Earlier studies have demonstrated the system's capacity to lower NOx emissions from diesel engines. The present study investigates the influence of the system on the emissions of particles, aldehydes and noise while at the same time monitoring essential engine parameters and water consumption and verifying the NOx-reducing ability. The system has been tested under the various conditions stated in the 13-mode cycle ECE R-49. Additional tests have been necessary for sampling and measurements of particles and noise. The results show that HAM caused a large reduction of the NOx emissions while the engine performance was almost unaffected. The average reduction of NOx during the different modes of ECE R-49 was 51.1%. The reduction was directly related to the humidity of the inlet air, and a further reduction can be anticipated with higher humidity. Samples have also been taken for acetaldehyde and formaldehyde. The results suggest a large reduction of aldehydes, in the range of 78 to 100%, when using HAM. Unfortunately it cannot be excluded that these results arise from a combination of high air humidity and the sampling technique used. The influence of the system on the emission of hydrocarbons was negligible, while a moderate increase in the emission of carbon monoxide was noticed. No confident relationship between air humidity and the observed effects could be detected. Particle number concentrations and size distributions have also been measured. The measurements showed that the particle number concentration was usually increased when HAM was coupled to the engine. The increase in particle number concentration, observed in five out of six running modes, varied between 46 and 148%. There was no trend indicating a shift in mean particle diameter when using HAM. Noise level and cylinder pressure have also

  11. Effects of Isospin on Pre-scission Particle Multiplicity of Heavy Systems and Its Excitation Energy Dependence

    Institute of Scientific and Technical Information of China (English)

    YE Wei; CHEN Na

    2004-01-01

    Isospin effects on particle emission of the fissioning isobaric sources 202Fr, 202Po, 202Tl and the isotopic sources 189,202,212Po, and their dependence on the excitation energy, are studied via Smoluchowski equations. It is shown that with increasing isospin of the fissioning systems, charged-particle emission is not sensitive to the strength of nuclear dissipation. In addition, we have found that increasing the excitation energy not only increases the influence of nuclear dissipation on particle emission but also greatly enhances the sensitivity of the emission of pre-scission neutrons or charged particles to the isospin of the system. Therefore, in order to extract the dissipation strength more accurately from light-particle multiplicities it is important to choose both a highly excited compound nucleus and a proper kind of particle for systems with different isospins.

  12. Effects of copper particles on a model septic system's function and microbial community.

    Science.gov (United States)

    Taylor, Alicia A; Walker, Sharon L

    2016-03-15

    There is concern surrounding the addition of nanoparticles into consumer products due to toxicity potential and the increased risk of human and environmental exposures to these particles. Copper nanoparticles are found in many common consumer goods; therefore, the disposal and subsequent interactions between potentially toxic Cu-based nanoparticles and microbial communities may have detrimental impacts on wastewater treatment processes. This study investigates the effects of three copper particles (micron- and nano-scale Cu particles, and a nano-scale Cu(OH)2-based fungicide) on the function and operation of a model septic tank. Septic system analyses included water quality evaluations and microbial community characterizations to detect changes in and relationships between the septic tank function and microbial community phenotype/genotype. As would be expected for optimal wastewater treatment, biological oxygen demand (BOD5) was reduced by at least 63% during nano-scale Cu exposure, indicating normal function. pH was reduced to below the optimum anaerobic fermentation range during the micro Cu exposure, suggesting incomplete degradation of organic waste may have occurred. The copper fungicide, Cu(OH)2, caused a 57% increase in total organic carbon (TOC), which is well above the typical range for septic systems and also corresponded to increased BOD5 during the majority of the Cu(OH)2 exposure. The changes in TOC and BOD5 demonstrate that the system was improperly treating waste. Overall, results imply individual exposures to the three Cu particles caused distinct disruptions in septic tank function. However, it was observed that the system was able to recover to typical operating conditions after three weeks post-exposure. These results imply that during periods of Cu introduction, there are likely pulses of improper removal of total organic carbon and significant changes in pH not in the optimal range for the system. Copyright © 2016 Elsevier Ltd. All rights

  13. The charged particle veto system of the cosmic ray electron synchrotron telescope

    Science.gov (United States)

    Geske, Matthew T.

    The Cosmic Ray Electron Synchrotron Telescope is a balloon-borne detector designed to measure cosmic electrons at energies from 2 to 50 TeV. CREST completed a successful 10-day Antarctic flight which launched on December 25, 2011. CREST utilizes a novel detection method, searching for the synchrotron radiation emitted by the interaction of TeV-energy electrons with the geomagnetic field. The main detector component for CREST is a 32 x 32 square array of BaF2 crystal detectors coupled to photomultiplier tubes, with an inter-crystal spacing of 7.5 cm. This document describes the design, construction and flight of the CREST experiment. A special focus is put upon the charged particle veto system and its use in the analysis of the CREST results. The veto system, consisting of a series of 27 large slabs of organic plastic scintillator read out through photomultiplier tubes, is designed as a passive mechanism for rejecting charged particle events that could contaminate the X-ray signal from synchrotron radiation. The CREST veto system has 99.15% geometric coverage, with individual detector components exhibiting a mean detection efficiency of 99.7%. In total, the veto system provides a charged particle rejection factor of better than 7 × 10³.

  14. A new MHD/kinetic model for exploring energetic particle production in macro-scale systems

    Science.gov (United States)

    Drake, J. F.; Swisdak, M.; Dahlin, J. T.

    2017-12-01

    A novel MHD/kinetic model is being developed to explore magnetic reconnection and particle energization in macro-scale systems such as the solar corona and the outer heliosphere. The model blends the MHD description with a macro-particle description. The rationale for this model is based on the recent discovery that energetic particle production during magnetic reconnection is controlled by Fermi reflection and betatron acceleration and not parallel electric fields. Since the former mechanisms are not dependent on kinetic scales such as the Debye length and the electron and ion inertial scales, a model that sheds these scales is sufficient for describing particle acceleration in macro-systems. Our MHD/kinetic model includes macroparticles laid out on an MHD grid that are evolved with the MHD fields. Crucially, the feedback of the energetic component on the MHD fluid is included in the dynamics. Thus, the energy of the total system, the MHD fluid plus the energetic component, is conserved. The system has no kinetic scales and therefore can be implemented to model energetic particle production in macro-systems with none of the constraints associated with a PIC model. Tests of the new model in simple geometries will be presented and potential applications will be discussed.

  15. Functional representation for the grand partition function of a multicomponent system of charged particles: Correlation functions of the reference system

    Directory of Open Access Journals (Sweden)

    O.V.Patsahan

    2006-01-01

    Based on the method of collective variables (CV) with a reference system, the exact expression for the functional of the grand partition function of an m-component ionic model with charge and size asymmetry is found. Particular attention is paid to the n-particle correlation functions of the reference system, which is represented as an m-component system of "colour" hard spheres of the same diameter. A two-component model is considered in more detail. In this case recurrence formulas for the correlation functions are found. A general case of an m-component inhomogeneous system of "colour" hard spheres is also analysed.

  16. Correlation between peak and median blocking temperatures by magnetization measurement on isolated ferromagnetic and antiferromagnetic particle systems

    DEFF Research Database (Denmark)

    Jiang, Jianzhong; Mørup, Steen

    1997-01-01

    The influence of the particle size distribution on the ratio of the peak temperature, T_peak, to the blocking temperature, T_Bm, in zero-field-cooled (ZFC) magnetization curves has been studied for both ferromagnetic and antiferromagnetic particle systems. In both systems the ratio β = T_peak/T_Bm does not depend on the median particle volume. However, T_Bm can be considerably different from T_peak in both systems. These results show that ZFC measurements can be used to determine T_Bm values only if the particle size distribution of the system is known. Otherwise, the estimated T_Bm values will have a large uncertainty, especially in systems with a broad particle size distribution.

  17. Repeating pulsed magnet system for axion-like particle searches and vacuum birefringence experiments

    Energy Technology Data Exchange (ETDEWEB)

    Yamazaki, T., E-mail: yamazaki@icepp.s.u-tokyo.ac.jp [International Center for Elementary Particle Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Inada, T.; Namba, T. [International Center for Elementary Particle Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Asai, S. [Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Kobayashi, T. [International Center for Elementary Particle Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Matsuo, A.; Kindo, K. [The Institute for Solid State Physics, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa-shi, Chiba 277-8581 (Japan); Nojiri, H. [Institute for Materials Research, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan)

    2016-10-11

    We have developed a repeating pulsed magnet system which generates magnetic fields of about 10 T in a direction transverse to an incident beam over a length of 0.8 m with a repetition rate of 0.2 Hz. This repetition rate is two orders of magnitude higher than that of conventional pulsed magnets. The system is composed of four low-resistance racetrack coils and a 30 kJ transportable capacitor bank as a power supply. It is aimed at axion-like particle searches with a pulsed light source and at vacuum birefringence measurements. We report on the details of the system and its performance.

  18. Monte Carlo simulation of the spectral response of beta-particle emitters in LSC systems

    International Nuclear Information System (INIS)

    Ortiz, F.; Los Arcos, J.M.; Grau, A.; Rodriguez, L.

    1992-01-01

    This paper presents a new method to evaluate the counting efficiency and the effective spectra at the output of any dynodic stage, for any pure beta-particle emitter, measured in a liquid scintillation counting system with two photomultipliers working in sum-coincidence mode. The process is carried out by a Monte Carlo simulation procedure that gives the electron distribution, and consequently the counting efficiency, at any dynode, in response to the beta particles emitted, as a function of the figure of merit of the system and the dynodic gains. The spectral outputs for ³H and ¹⁴C have been computed and compared with experimental data obtained with two sets of quenched radioactive standards of these nuclides. (orig.)
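The sum-coincidence efficiency calculation described above can be illustrated with a much-reduced Monte Carlo sketch in Python. This is not the authors' dynode-chain simulation: it assumes Poisson photoelectron statistics, an even light split between the two photomultipliers, and a caller-supplied list of sampled beta energies, and it evaluates only the coincidence condition (both tubes seeing at least one photoelectron).

```python
import random
import math

def detection_probability(energies_keV, figure_of_merit, n_samples=20000):
    """Estimate the sum-coincidence counting efficiency by Monte Carlo.

    For each simulated beta particle, the mean number of photoelectrons
    seen by each of the two PMTs is E * FOM / 2 (FOM in photoelectrons
    per keV, light split evenly).  An event is counted when both tubes
    register at least one photoelectron (coincidence condition); the
    probability of that is accumulated directly rather than sampled.
    """
    counted = 0.0
    for _ in range(n_samples):
        e = random.choice(energies_keV)          # sampled beta energy
        mean_pe = e * figure_of_merit / 2.0      # per-PMT mean photoelectrons
        p_zero = math.exp(-mean_pe)              # Poisson P(0 photoelectrons)
        p_coinc = (1.0 - p_zero) ** 2            # both PMTs fire
        counted += p_coinc
    return counted / n_samples
```

For a ³H-like spectrum (mean energy of a few keV) this gives a far lower efficiency than for a ¹⁴C-like one, which is the qualitative behaviour the quenched-standard comparison probes.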

  19. Form factor of relativistic two-particle system and covariant hamiltonian formulation of quantum field theory

    International Nuclear Information System (INIS)

    Skachkov, N.; Solovtsov, I.

    1979-01-01

    Based on the hamiltonian formulation of quantum field theory proposed by Kadyshevsky, the three-dimensional relativistic approach is developed for describing the form factors of composite systems. The main features of the diagram technique appearing in the covariant hamiltonian formulation of field theory are discussed. The three-dimensional relativistic equation for the vertex function is derived and its connection with that for the quasipotential wave function is found. The expressions are obtained for the form factor of the system through equal-time two-particle wave functions both in momentum and relativistic configurational representations. An explicit expression for the form factor is found for the case of two-particle interaction through the Coulomb potential.

  20. Dual-Particle Imaging System with Neutron Spectroscopy for Safeguard Applications

    Energy Technology Data Exchange (ETDEWEB)

    Hamel, Michael C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weber, Thomas M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    A dual-particle imager (DPI) has been designed that is capable of detecting gamma-ray and neutron signatures from shielded SNM. The system combines liquid organic and NaI(Tl) scintillators to form a combined Compton and neutron scatter camera. Effective image reconstruction of detected particles is a crucial component for maximizing the performance of the system; however, a key deficiency exists in the widely used iterative list-mode maximum-likelihood estimation-maximization (MLEM) image reconstruction technique. For MLEM a stopping condition is required to achieve a good quality solution but these conditions fail to achieve maximum image quality. Stochastic origin ensembles (SOE) imaging is a good candidate to address this problem as it uses Markov chain Monte Carlo to reach a stochastic steady-state solution. The application of SOE to the DPI is presented in this work.
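For context, the MLEM update whose stopping-condition problem motivates the SOE approach can be sketched in a few lines of Python. This is a generic binned-mode version with a caller-supplied system matrix, not the DPI's list-mode implementation:

```python
def mlem(system_matrix, counts, n_iter=50):
    """Basic MLEM image reconstruction (binned form).

    system_matrix[i][j] = probability that an emission from image
    pixel j is detected in measurement bin i.  Each iteration
    multiplies the current image by the back-projected ratio of
    measured to predicted counts; iterating too long amplifies noise,
    which is why a stopping condition is needed in practice.
    """
    n_bins = len(system_matrix)
    n_pix = len(system_matrix[0])
    # per-pixel sensitivity: total detection probability of pixel j
    sens = [sum(system_matrix[i][j] for i in range(n_bins)) for j in range(n_pix)]
    image = [1.0] * n_pix                        # flat initial estimate
    for _ in range(n_iter):
        # forward-project the current image into measurement space
        pred = [sum(system_matrix[i][j] * image[j] for j in range(n_pix))
                for i in range(n_bins)]
        # back-project the measured/predicted ratio and update
        image = [image[j] / sens[j] *
                 sum(system_matrix[i][j] * counts[i] / pred[i]
                     for i in range(n_bins) if pred[i] > 0)
                 for j in range(n_pix)]
    return image
```

With noise-free data the iterates converge toward the true source distribution; with noisy list-mode data the image quality peaks at some iteration and then degrades, which is the deficiency SOE sidesteps by sampling a stochastic steady state instead.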

  1. Cryogenic Microcalorimeter System for Ultra-High Resolution Alpha-Particle Spectrometry

    Science.gov (United States)

    Croce, M. P.; Bacrania, M. K.; Hoover, A. S.; Rabin, M. W.; Hoteling, N. J.; LaMont, S. P.; Plionis, A. A.; Dry, D. E.; Ullom, J. N.; Bennett, D. A.; Horansky, R. D.; Kotsubo, V.; Cantor, R.

    2009-12-01

    Microcalorimeters have been shown to yield unsurpassed energy resolution for alpha spectrometry, as good as 1.06 keV FWHM at 5.3 MeV. These detectors use a superconducting transition-edge sensor (TES) to measure the temperature change in an absorber from energy deposited by an interacting alpha particle. Our system has four independent detectors mounted inside a liquid nitrogen/liquid helium cryostat. An adiabatic demagnetization refrigerator (ADR) cools the detector stage to its operating temperature of 80 mK. Temperature regulation with ~15 μK peak-to-peak variation is achieved by PID control of the ADR. The detectors are voltage-biased, and the current signal is amplified by a commercial SQUID readout system and digitized for further analysis. This paper will discuss the design and operation of our microcalorimeter alpha-particle spectrometer, and will show recent results.

  2. An Alternative Derivation of the Energy Levels of the "Particle on a Ring" System

    Science.gov (United States)

    Vincent, Alan

    1996-10-01

    All acceptable wave functions must be continuous mathematical functions. This criterion limits the acceptable functions for a particle in a linear one-dimensional box to sine functions. If, however, the linear box is bent round into a ring, acceptable wave functions are those which are continuous at the 'join'. On this model some acceptable sine functions become unacceptable for the ring and some previously unacceptable cosine functions become acceptable. This approach can be used to produce a straightforward derivation of the energy levels and wave functions of the particle on a ring. These simple wave-mechanical systems can be used as models of linear and cyclic delocalised systems such as conjugated hydrocarbons or the benzene ring. The promotion energy of an electron can then be used to calculate the wavelength of absorption of UV light. The simple model gives results of the correct order of magnitude and shows that, as the chain length increases, the UV maximum moves to longer wavelengths, as found experimentally.
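The derivation sketched above leads to the standard particle-on-a-ring result, restated here for a ring of radius $r$ and electron mass $m_e$ (with the promotion taken from level $m_\ell$ to $m_\ell + 1$):

```latex
% Single-valuedness on the ring replaces the box boundary condition:
\psi(\phi + 2\pi) = \psi(\phi)
\quad\Rightarrow\quad
\psi_{m_\ell}(\phi) = \frac{1}{\sqrt{2\pi}}\, e^{i m_\ell \phi},
\qquad m_\ell = 0,\ \pm 1,\ \pm 2,\ \ldots

% Energy levels and the corresponding absorption wavelength:
E_{m_\ell} = \frac{m_\ell^2 \hbar^2}{2 m_e r^2},
\qquad
\lambda = \frac{hc}{E_{m_\ell + 1} - E_{m_\ell}}
        = \frac{8\pi^2 m_e c\, r^2}{h\,(2 m_\ell + 1)} .
```

The $\lambda \propto r^2$ dependence is what pushes the UV maximum to longer wavelengths as the ring (or chain) grows, as the abstract notes.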

  3. Optimization of heat pump system in indoor swimming pool using particle swarm algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Wen-Shing; Kung, Chung-Kuan [Department of Energy and Refrigerating Air-Conditioning Engineering, National Taipei University of Technology, 1, Section 3, Chung-Hsiao East Road, Taipei (China)

    2008-09-15

    When it comes to indoor swimming pool facilities, a large amount of energy is required to heat low-temperature outdoor air before it is introduced indoors to maintain indoor humidity. Since water evaporates from the pool surface, the exhaust air carries more moisture and a higher specific enthalpy. A heat pump is therefore generally used for heat recovery in indoor swimming pools. To reduce energy consumption costs, this paper uses a particle swarm algorithm to optimize the design of the heat pump system. The optimized parameters include continuous parameters and discrete parameters. The former consist of the outdoor air mass flow and the heat conductance of the heat exchangers; the latter comprise the compressor type and boiler type. In a case study, life cycle energy cost is considered as the objective function. In this regard, the optimized outdoor air flow and the optimized design for the heating system can be deduced by using the particle swarm algorithm. (author)
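The optimisation itself can be illustrated with a minimal particle swarm sketch in Python. This is a generic PSO over continuous variables with illustrative inertia and acceleration coefficients, not the authors' implementation; their discrete parameters (compressor and boiler type) would additionally need an encoding such as rounding those dimensions to catalogue indices, which is one common approach rather than a detail given in the abstract.

```python
import random

def pso_minimize(cost, bounds, n_particles=30, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimiser for a continuous cost function.

    Each particle tracks its personal best position; the swarm tracks
    the global best.  Velocities blend inertia (w), attraction to the
    personal best (c1) and attraction to the global best (c2).
    """
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = pbest_cost.index(min(pbest_cost))
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                # move, clamping to the feasible range
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

Here `cost` would play the role of the life cycle energy cost and `bounds` the feasible ranges of outdoor air flow and heat-exchanger conductance.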

  4. Single-Particle Momentum Distributions of Efimov States in Mixed-Species Systems

    DEFF Research Database (Denmark)

    T. Yamashita, M.; F. Bellotti, F.; Frederico, T.

    2013-01-01

    We solve the three-body bound state problem in three dimensions for mass-imbalanced systems of two identical bosons and a third particle in the universal limit where the interactions are assumed to be of zero range. The system displays the Efimov effect, and we use the momentum-space wave equation to derive formulas for the scaling factor of the Efimov spectrum for any mass ratio, assuming either that two or three of the two-body subsystems have a bound state at zero energy. We consider the single-particle momentum distribution analytically and numerically and analyse the tail of the momentum distribution to obtain the three-body contact parameter. Our findings demonstrate that the functional form of the three-body contact term depends on the mass ratio, and we obtain an analytic expression for this behavior. To exemplify our results, we consider mixtures of Lithium with either two Caesium or two Rubidium atoms.

  5. Electronic system for the search of particles with 1-0.1 ps lifetime

    International Nuclear Information System (INIS)

    Bobkov, S.G.; Kantserov, V.A.; Pershin, A.S.

    1986-01-01

    The algorithms for the search of short-lived particles with 1-0.1 ps lifetime in a vertex detector based on drift chambers are considered. The electronics supporting such selection by means of the suggested algorithms is described. The algorithms for useful-event selection in electron detectors are developed by pp-interaction simulation using the Monte Carlo method. Events of two types are simulated: charm particle generation with their further decay, and interactions without charm particle generation. A two-dimensional interaction pattern is considered. Some algorithms for data processing by the selecting (trigger) systems are developed. For all algorithms, an event is considered useful if more than one vertex is determined. The algorithms are based on geometrical relations for one-vertex events. Systematic deviation from these relations means that the event is multivertex.

  6. Characterization of invariant measures at the leading edge for competing particle systems

    CERN Document Server

    Ruzmaikina, A

    2004-01-01

    We study systems of particles on a line which have a maximum, are locally finite, and evolve with independent increments. `Quasi-stationary states' are defined as probability measures, on the $\\sigma$ algebra generated by the gap variables, for which the joint distribution of the gaps is invariant under the time evolution. Examples are provided by Poisson processes with densities of the form, $\\rho(dx) \\ =\\ e^{- s x} \\, s\\, dx$, with $ s > 0$, and linear superpositions of such measures. We show that conversely: any quasi-stationary state for the independent dynamics, with an exponentially bounded integrated density of particles, corresponds to a superposition of the above described probability measures, restricted to the relevant $\\sigma$-algebra. Among the systems for which this question is of some relevance are spin-glass models of statistical mechanics, where the point process represents the collection of the free energies of distinct ``pure states'', the time evolution corresponds to the addition of a spi...

  7. Micro-particle transporting system using galvanotactically stimulated apo-symbiotic cells of Paramecium bursaria.

    Science.gov (United States)

    Furukawa, Shunsuke; Karaki, Chiaki; Kawano, Tomonori

    2009-01-01

    It is well known that Paramecium species, including green paramecia (Paramecium bursaria), migrate towards the anode when exposed to an electric field in a medium. This type of cellular movement is known as galvanotaxis. Our previous study revealed that an electric stimulus given to P. bursaria is converted into a galvanotactic cellular movement through the involvement of a T-type calcium channel on the plasma membrane [Aonuma et al. (2007), Z. Naturforsch. 62c, 93-102]. This phenomenon has attracted the attention of bioengineers in the fields of biorobotics and micro-robotics in order to develop electrically controllable micromachinery. Here, we demonstrate the galvanotactic control of the cellular migration of P. bursaria in capillary tubes (diameter, 1-2 mm; length, 30-240 mm). Since Paramecium cells take up particles of various sizes, we attempted to use the electrically stimulated cells of P. bursaria as vehicles for the transportation of micro-particles in the capillary system. By using apo-symbiotic cells of P. bursaria obtained after forced removal of the symbiotic algae, the uptake of the particles could be maximized and visualized. Then, electrically controlled transportation of particle-filled apo-symbiotic P. bursaria cells was demonstrated. The particles transported by the electrically controlled cells (varying in size from the nm to the µm level) included re-introduced green algae, fluorescence-labeled polystyrene beads, magnetic microspheres, emerald green fluorescent protein (EmGFP)-labeled cells of E. coli, Indian ink, and crystals of zeolite (hydrated aluminosilicate minerals with a micro-porous structure) and some metal oxides. Since the above demonstrations were successful, we concluded that P. bursaria has the potential to be employed as a micro-biorobotic device in BioMEMS (biological micro-electro-mechanical systems).

  8. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    Science.gov (United States)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we explore the feasibility of porting a particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first such prototype that can be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  9. A novel straightness measurement system applied to the position monitoring of large Particle Physics Detectors

    OpenAIRE

    Goudard, R; Price, M J; Ribeiro, R; Klumb, F

    1999-01-01

    The Compact Muon Solenoid experiment, CMS, is one of the two general-purpose experiments foreseen to operate at the Large Hadron Collider, LHC, at CERN, the European Laboratory for Particle Physics. The experiment aims to study very high energy collisions of proton beams. Its scope is the investigation of the most fundamental properties of matter, in particular the nature of the electroweak symmetry breaking and the origin of mass. The central Tracking System, a six meter...

  10. Stability analysis of a Vlasov-Wave system describing particles interacting with their environment

    Science.gov (United States)

    De Bièvre, Stephan; Goudon, Thierry; Vavasseur, Arthur

    2018-06-01

    We study a kinetic equation of the Vlasov-Wave type, which arises in the description of the behavior of a large number of particles interacting weakly with an environment, composed of an infinite collection of local vibrational degrees of freedom, modeled by wave equations. We use variational techniques to establish the existence of large families of stationary states for this system, and analyze their stability.

  11. Hierarchical modelling of line commutated power systems used in particle accelerators using Saber

    International Nuclear Information System (INIS)

    Reimund, J.A.

    1993-01-01

    This paper discusses the use of hierarchical simulation models in the program Saber™ for the prediction of magnet ripple currents generated by the power supply/output filter combination. Modeling of an entire power system connected to output filters and particle accelerator ring magnets is presented. Special emphasis is placed on the modeling of power source imbalances caused by transformer impedance imbalances and utility variances. The effect that these imbalances have on the harmonic content of the ripple current is also investigated.

  12. Correlated wave functions for three-particle systems with Coulomb interaction - The muonic helium atom

    Science.gov (United States)

    Huang, K.-N.

    1977-01-01

    A computational procedure for calculating correlated wave functions is proposed for three-particle systems interacting through Coulomb forces. Calculations are carried out for the muonic helium atom. Variational wave functions which explicitly contain interparticle coordinates are presented for the ground and excited states. General Hylleraas-type trial functions are used as the basis for the correlated wave functions. Excited-state energies of the muonic helium atom computed from 1- and 35-term wave functions are listed for four states.

  13. Hybrid extended particle filter (HEPF) for integrated inertial navigation and global positioning systems

    International Nuclear Information System (INIS)

    Aggarwal, Priyanka; Syed, Zainab; El-Sheimy, Naser

    2009-01-01

    Navigation includes the integration of methodologies and systems for estimating the time-varying position, velocity and attitude of moving objects. Navigation incorporating an integrated inertial navigation system (INS) and global positioning system (GPS) generally requires extensive evaluation of nonlinear equations involving double integration. Currently, integrated navigation systems are commonly implemented using the extended Kalman filter (EKF). The EKF assumes linearized process and measurement models and Gaussian noise distributions. These assumptions are unrealistic for highly nonlinear systems like land vehicle navigation and may cause filter divergence. A particle filter (PF) can enhance integrated INS/GPS system performance as it easily deals with nonlinearity and non-Gaussian noise. In this paper, a hybrid extended particle filter (HEPF) is developed as an alternative to the well-known EKF to achieve better navigation data accuracy for low-cost microelectromechanical system sensors. The results show that the HEPF performs better than the EKF during GPS outages, especially when simulated outages are located in periods with high vehicle dynamics.
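The core predict/update/resample cycle that lets a PF handle nonlinear, non-Gaussian models can be sketched for a toy one-dimensional state. The function name, noise levels and Gaussian measurement likelihood are illustrative assumptions, not the HEPF itself; the motion model plays the INS role and the measurement the GPS fix:

```python
import random
import math

def particle_filter_step(particles, weights, control, measurement,
                         motion_noise=0.5, meas_noise=1.0):
    """One predict/update/resample cycle of a bootstrap particle filter
    for a 1-D position state."""
    n = len(particles)
    # predict: propagate each particle through the motion model
    particles = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # update: reweight by the measurement likelihood (Gaussian here, but
    # any non-Gaussian likelihood drops in with no change to the filter)
    weights = [w * math.exp(-0.5 * ((measurement - p) / meas_noise) ** 2)
               for p, w in zip(particles, weights)]
    total = sum(weights)
    if total == 0.0:
        weights = [1.0 / n] * n      # degenerate case: reset to uniform
    else:
        weights = [w / total for w in weights]
    # resample: draw particles in proportion to their weights
    particles = random.choices(particles, weights=weights, k=n)
    weights = [1.0 / n] * n
    return particles, weights
```

During a GPS outage the update step would simply be skipped, leaving the particle cloud to spread under the motion noise, which is where the filters' behaviours diverge in the comparison above.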

  14. Light scattering studies of lower dimensional colloidal particle and critical fluid systems

    International Nuclear Information System (INIS)

    O'Sullivan, W.J.; Mockler, R.C.

    1984-09-01

    The authors have studied the response to compression of colloidal particle crystals in monolayers on the surface of water. The crystals deform elastically as they are compressed in a Langmuir trough from a lattice spacing of ten microns to spacings of less than two microns. A phase transition to a close-packed triangular lattice phase occurs at very high densities, when the attractive van der Waals/steric interactions between particles dominate. The authors have found that the aggregates formed when a colloidal particle monolayer coagulates, following switching off of the repulsive electric dipole-dipole interactions, show scale invariance with a fractal dimension consistent with the prediction of a theory of diffusion-limited aggregation in two dimensions. The authors have made progress toward the development of a computer-processed array detector-spectrometer to be used in studies of melting and crystallization of two-dimensional colloidal particle films. Stable black bilipid membranes have been produced, both spherical and planar, with and without embedded microparticles. We have modified our heterodyne autocorrelation spectrometer, used for studies of the dynamic response of critical fluid films, to enable us to measure the intensity autocorrelation of light scattered at forward angles. Rayleigh linewidth data have been gathered from a 1.9 micron film of a 2,6-lutidine+water critical mixture, taken at a scattering angle of ten degrees. The preliminary results indicate that the film's dynamical response remains that of an equivalent three-dimensional system, in apparent disagreement with recent theoretical predictions of Calvo and Ferrell.

  15. Thermoresponsive copolymer-grafted SBA-15 porous silica particles for temperature-triggered topical delivery systems

    Directory of Open Access Journals (Sweden)

    S. A. Jadhav

    2017-02-01

    Full Text Available A series of poly(N-isopropylacrylamide-co-acrylamide) thermoresponsive random copolymers with different molecular weights and compositions were synthesized and characterized by attenuated total reflectance Fourier-transform infrared (ATR-FTIR) spectroscopy, differential scanning calorimetry (DSC), size exclusion chromatography (SEC) and proton nuclear magnetic resonance (NMR) spectroscopy. The lower critical solution temperatures (LCST) of the copolymers were tuned by changing the mole ratios of the monomers. The copolymer with the highest molecular weight and LCST (41.2 °C) was grafted onto SBA-15 type mesoporous silica particles by a two-step polymer grafting procedure. Bare SBA-15 and the thermoresponsive copolymer-grafted (hybrid) SBA-15 particles were fully characterized by scanning electron microscopy (SEM), ATR-FTIR, thermogravimetric analysis (TGA) and Brunauer-Emmett-Teller (BET) analyses. The hybrid particles were tested for their efficiency as temperature-sensitive systems for dermal delivery of the antioxidant rutin (quercetin-3-O-rutinoside). Improved control over rutin release by the hybrid particles was obtained, which makes them attractive hybrid materials for drug delivery.

  16. Alternate particle removal technologies for the Airborne Activity Confinement System at the Savannah River Site

    International Nuclear Information System (INIS)

    Brockmann, J.E.; Adkins, C.L.J.; Gelbard, F.

    1991-09-01

    This report presents a review of the filtration technologies available for the removal of particulate material from a gas stream. It was undertaken to identify alternate filtration technologies that may be employed in the Airborne Activity Confinement System (AACS) at the Savannah River Plant. This report is organized into six sections: (1) a discussion of the aerosol source term and its definition, (2) a short discussion of particle and gaseous contaminant removal mechanisms, (3) a brief overview of particle removal technologies, (4) a discussion of the existing AACS and its potential shortcomings, (5) an enumeration of issues to be addressed in upgrading the AACS, and (6) a detailed discussion of the identified technologies. The purpose of this report is to identify available options to the existing particle removal system. This system is in continuous operation during routine operation of the reactor. As will be seen, there are a number of options, and the selection of any technology or combination of technologies will depend on the design aerosol source term (yet to be appropriately defined) as well as the flow requirements and configuration. This report does not select a specific technology. It focuses on particulate removal and, qualitatively, on the removal of radio-iodine and mist elimination. Candidate technologies have been selected from industrial and nuclear gas cleaning applications.

  17. A System Based on the Internet of Things for Real-Time Particle Monitoring in Buildings

    Directory of Open Access Journals (Sweden)

    Gonçalo Marques

    2018-04-01

    Full Text Available Occupational health can be strongly influenced by the indoor environment, as people spend 90% of their time indoors. Although indoor air quality (IAQ) is not typically monitored, IAQ parameters can in many instances be very different from those defined as healthy values. Particulate matter (PM), a complex mixture of solid and liquid particles of organic and inorganic substances suspended in the air, is considered the pollutant that affects the most people. The most health-damaging particles are PM10 (diameter of 10 microns or less), which can penetrate and lodge deep inside the lungs, contributing to the risk of developing cardiovascular and respiratory diseases, as well as lung cancer. This paper presents an Internet of Things (IoT) system for real-time PM monitoring named iDust. The system is based on a WEMOS D1 mini microcontroller and a PMS5003 PM sensor that uses the light-scattering principle to measure the concentration of particles suspended in the air (PM10, PM2.5, and PM1.0). Through a Web dashboard for data visualization and remote notifications, the building manager can plan interventions for enhanced IAQ and ambient assisted living (AAL). Compared to other solutions, iDust is based on open-source technologies, providing an all-Wi-Fi system with several advantages such as modularity, scalability, low cost, and easy installation. The results obtained are very promising, representing a meaningful contribution to IAQ monitoring and occupational health.
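The PMS5003 that iDust builds on reports its readings over a serial link in fixed 32-byte frames. A host-side parsing sketch in Python, based on the publicly documented frame layout (the dictionary keys and function name are our own, and the iDust firmware itself is not shown in the abstract):

```python
import struct

def parse_pms5003_frame(frame):
    """Parse one 32-byte PMS5003 data frame and return the three
    'atmospheric environment' concentrations in ug/m3.

    Frame layout (per the PMS5003 datasheet): two start bytes
    0x42 0x4D, a 16-bit frame length, thirteen 16-bit big-endian
    data words, and a 16-bit checksum equal to the sum of all
    preceding bytes.
    """
    if len(frame) != 32 or frame[0:2] != b"\x42\x4d":
        raise ValueError("not a PMS5003 frame")
    checksum = struct.unpack(">H", frame[30:32])[0]
    if checksum != sum(frame[0:30]):
        raise ValueError("checksum mismatch")
    words = struct.unpack(">13H", frame[4:30])
    # words[0:3] are the CF=1 values; words[3:6] are atmospheric values
    return {"pm1.0": words[3], "pm2.5": words[4], "pm10": words[5]}
```

On the WEMOS D1 mini the equivalent logic would run in the firmware reading the sensor's UART; the sketch above is the same decoding done on a host for illustration.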

  18. 4th International Conference on Particle Systems and Partial Differential Equations

    CERN Document Server

    Soares, Ana

    2017-01-01

    This book addresses mathematical problems motivated by various applications in physics, engineering, chemistry and biology. It gathers the lecture notes from the mini-course presented by Jean-Christophe Mourrat on the construction of the various stochastic "basic" terms involved in the formulation of the dynamic Φ⁴ theory in three space dimensions, as well as selected contributions presented at the fourth meeting on Particle Systems and PDEs, which was held at the University of Minho's Centre of Mathematics in December 2015. The purpose of the conference was to bring together prominent researchers working in the fields of particle systems and partial differential equations, offering them a forum to present their recent results and discuss their topics of expertise. The meeting was also intended to present to a vast and varied public, including young researchers, the area of interacting particle systems, its underlying motivation, and its relation to partial differential equations. The book w...

  19. A System Based on the Internet of Things for Real-Time Particle Monitoring in Buildings.

    Science.gov (United States)

    Marques, Gonçalo; Roque Ferreira, Cristina; Pitarma, Rui

    2018-04-21

    Occupational health can be strongly influenced by the indoor environment, as people spend 90% of their time indoors. Although indoor air quality (IAQ) is not typically monitored, IAQ parameters can in many instances be very different from those defined as healthy values. Particulate matter (PM), a complex mixture of solid and liquid particles of organic and inorganic substances suspended in the air, is considered the pollutant that affects the most people. The most health-damaging particles are PM10 (diameter of 10 microns or less), which can penetrate and lodge deep inside the lungs, contributing to the risk of developing cardiovascular and respiratory diseases, as well as lung cancer. This paper presents an Internet of Things (IoT) system for real-time PM monitoring named iDust. The system is based on a WEMOS D1 mini microcontroller and a PMS5003 PM sensor that uses the light-scattering principle to measure the concentration of particles suspended in the air (PM10, PM2.5, and PM1.0). Through a Web dashboard for data visualization and remote notifications, the building manager can plan interventions for enhanced IAQ and ambient assisted living (AAL). Compared to other solutions, iDust is based on open-source technologies, providing an all-Wi-Fi system with several advantages such as modularity, scalability, low cost, and easy installation. The results obtained are very promising, representing a meaningful contribution to IAQ monitoring and occupational health.

  20. Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation

    Science.gov (United States)

    Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari

    2016-07-01

    In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiance. Due to the variations of the maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for a given number of PV modules, the optimal size and operating condition of a PV/EL system are achieved. The approach can be applied for different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and the PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.

  1. Development of a Dual-Particle Imaging System for Nonproliferation Applications

    Science.gov (United States)

    Poitrasson-Riviere, Alexis Pierre Valere

    A rising concern in our society is preventing the proliferation of nuclear weapons and fissionable material. This prevention can be incorporated at multiple levels, from the use of nuclear safeguards in nuclear facilities to the detection of threat objects in the field. At any level, systems used for such tasks need to be specially designed for use with Special Nuclear Material (SNM) which is defined by the NRC as plutonium and uranium enriched in U-233 or U-235 isotopes. These radioactive materials have the particularity of emitting both fast neutrons and gamma rays; thus, systems able to detect both particles simultaneously are particularly desirable. In the field of nuclear nonproliferation and safeguards, detection systems capable of accurately imaging various sources of radiation can greatly simplify any monitoring or detection task. The localization of the radiation sources can allow users of the system to focus their efforts on the areas of interest, whether it be for radiation detection or radiation characterization. This thesis describes the development of a dual-particle imaging system at the University of Michigan to address these technical challenges. The imaging system relies on the use of organic liquid scintillators that can detect both fast neutrons and gamma rays, and inorganic NaI(Tl) scintillators that are not very sensitive to neutrons yet yield photoelectric absorptions from gamma rays. A prototype of the imaging system has been constructed and operated. The system will aid the remote monitoring of nuclear materials within facilities, and it has the scalability for standoff detection in the field. A software suite has been developed to analyze measured data in real time, in an effort to obtain a system as close to field-ready as possible. The system's performance has been tested with various materials of interest, such as MOX and plutonium metal, measured at the PERLA facility of the Joint Research Center in Ispra, Italy. The robust and

  2. Design and characterization of a real time particle radiography system based on scintillating optical fibers

    International Nuclear Information System (INIS)

    Longhitano, F.; Lo Presti, D.; Bonanno, D.L.; Bongiovanni, D.G.; Leonora, E.; Randazzo, N.; Reito, S.; Sipala, V.; Gallo, G.

    2017-01-01

    The fabrication and characterization of a charged particle imaging system composed of a tracker and a residual range detector (RRD) is described. The tracker is composed of four layers of scintillating fibers (SciFi), of 500 μm square cross section, arranged to form two planes orthogonal to each other. The fibers are coupled to two Multi-Pixel Photon Counter (MPPC) arrays by means of a channel reduction system patented by the Istituto Nazionale di Fisica Nucleare (INFN) (Presti, 2015). Sixty parallel layers of the same fibers used in the tracker compose the RRD. The various layers are optically coupled to a MPPC array by means of wavelength shifting (WLS) fibers. The sensitive area of the two detectors is 9×9 cm². The results of the measurements, acquired by the prototypes with the CATANA (Cirrone, 2008) proton beam, and a comparison with the simulations of the detectors are presented. - Highlights: • A real time charged particle imaging system is described. • The system is composed of a position-sensitive detector and a residual range detector. • The sensitive area of the system is composed of submillimeter scintillating fibers. • The read-out is based on a patented channel reduction system. • The results of measurements with a proton beam are presented.

  3. The outline design of FEB-E particle exhaust and pumping system

    International Nuclear Information System (INIS)

    Zhu Yukun; Huang Jinhua; Feng Kaiming; Deng Peizhi; Li Yiqiang

    1999-01-01

    The particle exhaust of the Fusion Experimental Breeder FEB-E is carried out with a divertor. The FEB-E divertor consists of 48 wedge-shaped cassette modules connected with the primary pumping system and the cooling system. The FEB-E pumping system consists of two major subsystems, the torus rough pumping system and the torus high vacuum pumping system. The torus high vacuum pumping system consists of a series of internal cryopumps located in most of the lower ports (up to 20) and additional turbomolecular pumps located outside of the bio-shield. These cryopumps are capable of providing a nominal gross pumping speed of 576 m³·s⁻¹, regulated with inlet valves for throttle control of the exhaust particle flow in the case of high neutral pressure (>1 Pa) in the divertor. However, limited conductance through the divertor pumping slot and through the clearance between the underside of the divertor and the vacuum vessel results in an effective net pumping speed of 160 m³·s⁻¹ in the divertor private region. This pumping speed implies that a neutral pressure operating range of 0.5–1.0 Pa is required in the divertor private region to achieve an exhaust throughput range of 80–160 Pa·m³·s⁻¹. The regeneration of the cryopumps is activated at the end of the 1000 s of the breeder burning
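
    The quoted operating window follows from the steady-state relation Q = S_eff · p between exhaust throughput, effective pumping speed, and neutral pressure. A minimal check of the abstract's numbers (the relation is standard vacuum practice; nothing beyond the stated values is assumed):

```python
def exhaust_throughput(s_eff, pressure):
    """Steady-state gas throughput Q = S_eff * p, in Pa*m^3/s."""
    return s_eff * pressure

S_EFF = 160.0  # effective net pumping speed in the divertor private region, m^3/s

q_low = exhaust_throughput(S_EFF, 0.5)
q_high = exhaust_throughput(S_EFF, 1.0)
print(q_low, q_high)  # → 80.0 160.0, the quoted throughput range
```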

  4. Supersymmetric many-particle quantum systems with inverse-square interactions

    International Nuclear Information System (INIS)

    Ghosh, Pijush K

    2012-01-01

    The development in the study of supersymmetric many-particle quantum systems with inverse-square interactions is reviewed. The main emphasis is on quantum systems with dynamical OSp(2|2) supersymmetry. Several results related to the exactly solved supersymmetric rational Calogero model, including shape invariance, equivalence to a system of free superoscillators and non-uniqueness in the construction of the Hamiltonian, are presented in some detail. This review also includes a formulation of pseudo-Hermitian supersymmetric quantum systems with a special emphasis on the rational Calogero model. There are quite a few many-particle quantum systems with inverse-square interactions that are not exactly solved for a complete set of states in spite of the construction of infinitely many exact eigenfunctions and eigenvalues. The Calogero–Marchioro model with dynamical SU(1,1|2) supersymmetry and a quantum system related to the short-range Dyson model belong to this class, and certain aspects of these models are reviewed. Several other related and important developments are briefly summarized. (topical review)

  5. Design and characterization of a real time particle radiography system based on scintillating optical fibers

    Energy Technology Data Exchange (ETDEWEB)

    Longhitano, F., E-mail: fabio.longhitano@ct.infn.it [Istituto Nazionale di Fisica Nucleare (INFN), Sezione Catania (Italy); Lo Presti, D. [Istituto Nazionale di Fisica Nucleare (INFN), Sezione Catania (Italy); Department of Physics and Astronomy, University of Catania (Italy); Bonanno, D.L.; Bongiovanni, D.G.; Leonora, E.; Randazzo, N.; Reito, S. [Istituto Nazionale di Fisica Nucleare (INFN), Sezione Catania (Italy); Sipala, V. [University of Sassari, Sassari (Italy); Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari (Italy); Gallo, G. [Department of Physics and Astronomy, University of Catania (Italy)

    2017-02-11

    The fabrication and characterization of a charged particle imaging system composed of a tracker and a residual range detector (RRD) is described. The tracker is composed of four layers of scintillating fibers (SciFi), of 500 μm square cross section, arranged to form two planes orthogonal to each other. The fibers are coupled to two Multi-Pixel Photon Counter (MPPC) arrays by means of a channel reduction system patented by the Istituto Nazionale di Fisica Nucleare (INFN) (Presti, 2015). Sixty parallel layers of the same fibers used in the tracker compose the RRD. The various layers are optically coupled to a MPPC array by means of wavelength shifting (WLS) fibers. The sensitive area of the two detectors is 9×9 cm². The results of the measurements, acquired by the prototypes with the CATANA (Cirrone, 2008) proton beam, and a comparison with the simulations of the detectors are presented. - Highlights: • A real time charged particle imaging system is described. • The system is composed of a position-sensitive detector and a residual range detector. • The sensitive area of the system is composed of submillimeter scintillating fibers. • The read-out is based on a patented channel reduction system. • The results of measurements with a proton beam are presented.

  6. Tracking suspended particle transport via radium isotopes (226Ra and 228Ra) through the Apalachicola–Chattahoochee–Flint River system

    International Nuclear Information System (INIS)

    Peterson, Richard N.; Burnett, William C.; Opsahl, Stephen P.; Santos, Isaac R.; Misra, Sambuddha; Froelich, Philip N.

    2013-01-01

    Suspended particles in rivers can carry metals, nutrients, and pollutants downstream, which can become bioactive in estuaries and coastal marine waters. In river systems with multiple sources of both suspended particles and contamination, it is important to assess the hydrologic conditions under which contaminated particles can be delivered to downstream ecosystems. The Apalachicola–Chattahoochee–Flint (ACF) River system in the southeastern United States represents an ideal system to study these hydrologic impacts on particle transport through a heavily impacted river (the Chattahoochee River) and one much less impacted by anthropogenic activities (the Flint River). We demonstrate here the utility of natural radioisotopes as tracers of suspended particles through the ACF system, where particles contaminated with arsenic (As) and antimony (Sb) have been shown to be contributed from coal-fired power plants along the Chattahoochee River, and have elevated concentrations in the surficial sediments of the Apalachicola Bay Delta. Radium isotopes (²²⁸Ra and ²²⁶Ra) on suspended particles should vary throughout the different geologic provinces of this river system, allowing differentiation of the relative contributions of the Chattahoochee and Flint Rivers to the suspended load delivered to Lake Seminole, the Apalachicola River, and ultimately to Apalachicola Bay. We also use various geochemical proxies (⁴⁰K, organic carbon, and calcium) to assess the relative composition of suspended particles (lithogenic, organic, and carbonate fractions, respectively) under a range of hydrologic conditions. During low (base) flow conditions, the Flint River contributed 70% of the suspended particle load to both the Apalachicola River and the bay, whereas the Chattahoochee River became the dominant source during higher discharge, contributing 80% of the suspended load to the Apalachicola River and 62% of the particles entering the estuary. Neither of these hydrologic
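
    The attribution of suspended load between the two rivers is, at its core, a two-endmember mixing calculation. Below is a minimal sketch of how a conservative tracer ratio measured on mixed particles can be unmixed into source fractions; the endmember activity ratios are hypothetical, chosen only so the arithmetic reproduces a 70% Flint contribution, and are not the paper's measured values:

```python
def endmember_fraction(mix, end_a, end_b):
    """Fraction of endmember A in a two-source mixture, from a
    conservative tracer such as a 228Ra/226Ra activity ratio.
    Solves mix = f * end_a + (1 - f) * end_b for f."""
    if end_a == end_b:
        raise ValueError("endmembers must differ for unmixing")
    f = (mix - end_b) / (end_a - end_b)
    return min(1.0, max(0.0, f))  # clamp to the physical range

# Hypothetical activity ratios, for illustration only:
flint, chattahoochee = 2.0, 0.8
observed = 1.64  # ratio measured on downstream suspended particles
f_flint = endmember_fraction(observed, flint, chattahoochee)
print(round(f_flint, 2))  # → 0.7, i.e. ~70% of the load from the Flint
```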

  7. New Principles for Interfacial Engineering and Superstabilization of Biphase Systems by Using Particles with Engineered Structure and Properties

    Science.gov (United States)

    2014-09-27

    found in nature and accounts for ~1.5 × 10¹² tons of annual biomass production. As a result of its biodegradability, biocompatibility, and...would also be available to sterically stabilize the disperse (air) phase. As in most particle-stabilized systems, HP-55 particles will form a shell...particle-aggregates forming a shell around the bubbles in a secondary adsorption step. As the concentration of HP-55 is increased in the test

  8. Trap-size scaling in confined-particle systems at quantum transitions

    International Nuclear Information System (INIS)

    Campostrini, Massimo; Vicari, Ettore

    2010-01-01

    We develop a trap-size scaling theory for trapped particle systems at quantum transitions. As a theoretical laboratory, we consider a quantum XY chain in an external transverse field acting as a trap for the spinless fermions of its quadratic Hamiltonian representation. We discuss trap-size scaling at the Mott insulator to superfluid transition in the Bose-Hubbard model. We present exact and accurate numerical results for the XY chain and for the low-density Mott transition in the hard-core limit of the one-dimensional Bose-Hubbard model. Our results are relevant for systems of cold atomic gases in optical lattices.

  9. Rare event computation in deterministic chaotic systems using genealogical particle analysis

    International Nuclear Information System (INIS)

    Wouters, J; Bouchet, F

    2016-01-01

    In this paper we address the use of rare event computation techniques to estimate small over-threshold probabilities of observables in deterministic dynamical systems. We demonstrate that genealogical particle analysis algorithms can be successfully applied to a toy model of atmospheric dynamics, the Lorenz ’96 model. We furthermore use the Ornstein–Uhlenbeck system to illustrate a number of implementation issues. We also show how a time-dependent objective function based on the fluctuation path to a high threshold can greatly improve the performance of the estimator compared to a fixed-in-time objective function. (paper)
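
    As background, the genealogical (cloning/killing) selection idea named in the abstract can be sketched on an Ornstein–Uhlenbeck toy problem, the same illustrative system the authors use. This is a generic sketch, not the authors' implementation; the tilting constant C, the block length, and the multinomial resampling scheme are all assumptions of this sketch:

```python
import math
import random

def ou_step(x, dt=0.01, theta=1.0, sigma=1.0, rng=random):
    """One Euler-Maruyama step of an Ornstein-Uhlenbeck process."""
    return x - theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)

def genealogical_estimate(threshold, n_particles=1000, n_blocks=10,
                          steps_per_block=50, C=1.5, seed=1):
    """Estimate P(max_t X_t > threshold) for an OU trajectory by
    periodically cloning high-scoring and killing low-scoring particles,
    with selection weights exp(C * increment of the running maximum)."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles    # particle positions
    vmax = [0.0] * n_particles  # running maximum of each trajectory
    log_norm = 0.0              # log of the product of mean weights
    for _ in range(n_blocks):
        weights = []
        for i in range(n_particles):
            v_old = vmax[i]
            for _ in range(steps_per_block):
                xs[i] = ou_step(xs[i], rng=rng)
                if xs[i] > vmax[i]:
                    vmax[i] = xs[i]
            weights.append(math.exp(C * (vmax[i] - v_old)))
        log_norm += math.log(sum(weights) / n_particles)
        # Selection step: resample trajectories proportionally to weight.
        idx = rng.choices(range(n_particles), weights=weights, k=n_particles)
        xs = [xs[i] for i in idx]
        vmax = [vmax[i] for i in idx]
    # Undo the exponential tilting to recover an unbiased estimate.
    tilted = sum(math.exp(-C * v) for v in vmax if v > threshold) / n_particles
    return math.exp(log_norm) * tilted

p_hat = genealogical_estimate(2.0)
print(p_hat)  # a small over-threshold probability
```

The tilting concentrates the ensemble on trajectories with large running maxima, so rare exceedances are sampled far more often than under direct simulation; the final back-weighting restores an unbiased estimate.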

  10. Dependence of the quasipotential on the total energy of a two-particle system

    International Nuclear Information System (INIS)

    Kapshai, V.N.; Savrin, V.I.; Skachkov, N.B.

    1987-01-01

    For a system of two relativistic particles described in the Logunov-Tavkhelidze one-time approach the dependence of the quasipotential of one-boson exchange on the total energy of the system is calculated. It is shown that despite the nonlocal form of the obtained quasipotential the three-dimensional equations for the waves function can be reduced by a partial expansion to one-dimensional equations. The influence of the energy dependence of the quasipotential on its behavior in the coordinate representation is discussed

  11. On the dependence of quasipotential on the total energy of a two-particle system

    International Nuclear Information System (INIS)

    Kapshaj, V.N.; Savrin, V.I.

    1986-01-01

    For a system of two relativistic particles described in the framework of the Logunov-Tavkhelidze one-time approach the dependence is calculated of the one-boson exchange potential on the total energy of the system. It is shown that in spite of a nonlocal form of the quasipotential obtained, three-dimensional equations for the wave function are reduced to one-dimensional ones by means of partial expansion. Influence of the energy dependence of the quasipotential on its behaviour in the coordinate representation is discussed

  12. Stepwise optimization and global chaos of nonlinear parameters in exact calculations of few-particle systems

    International Nuclear Information System (INIS)

    Frolov, A.M.

    1986-01-01

    The problem of exact variational calculations of few-particle systems in the exponential basis of the relative coordinates using nonlinear parameters is studied. The techniques of stepwise optimization and global chaos of nonlinear parameters are used to calculate the S and P states of homonuclear muonic molecules with an error of no more than ±0.001 eV. The global-chaos technique has also proved to be successful in the case of the nuclear systems ³H and ³He

  13. Solution of charged particle transport equation by Monte-Carlo method in the BRANDZ code system

    International Nuclear Information System (INIS)

    Artamonov, S.N.; Androsenko, P.A.; Androsenko, A.A.

    1992-01-01

    Consideration is given to the use of the Monte-Carlo method for the solution of the charged particle transport equation and its implementation in the BRANDZ code system under the conditions of real 3D geometry, using all the data available on radiation-to-matter interaction in multicomponent and multilayer targets. For the solution of the implantation problem, comparisons of BRANDZ results with experiments and with calculations by other codes in complex systems are presented. The results of direct nuclear pumping process simulation for laser-active media by a proton beam are also included. 4 refs.; 7 figs
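
    The innermost loop of any Monte-Carlo particle transport code (sample a free path, advance the particle, tally the outcome) can be illustrated with a heavily simplified 1D purely absorbing slab. This is a generic sketch, not the BRANDZ implementation:

```python
import math
import random

def slab_transmission(mu_total, thickness, n_histories=200_000, seed=7):
    """Fraction of normally incident particles that cross a purely
    absorbing slab; the analytic answer is exp(-mu_total * thickness)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_histories):
        # Sample the free path from an exponential distribution;
        # 1 - random() lies in (0, 1], so the log is always defined.
        path = -math.log(1.0 - rng.random()) / mu_total
        if path > thickness:
            transmitted += 1
    return transmitted / n_histories

est = slab_transmission(mu_total=0.5, thickness=2.0)
print(est)  # close to exp(-1) ≈ 0.368
```

Real transport codes add 3D geometry tracking, scattering, energy loss, and secondary particle production on top of this sampling loop.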

  14. Image processing of integrated video image obtained with a charged-particle imaging video monitor system

    International Nuclear Information System (INIS)

    Iida, Takao; Nakajima, Takehiro

    1988-01-01

    A new type of charged-particle imaging video monitor system was constructed for video imaging of the distributions of alpha-emitting and low-energy beta-emitting nuclides. The system can display not only the scintillation image due to radiation on the video monitor but also the integrated video image becoming gradually clearer on another video monitor. The distortion of the image is about 5% and the spatial resolution is about 2 line pairs (lp) mm⁻¹. The integrated image is transferred to a personal computer and image processing is performed qualitatively and quantitatively. (author)
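
    The "integrated video image becoming gradually clearer" corresponds to pixel-wise frame accumulation: the signal grows linearly with frame count while uncorrelated noise grows only as its square root. A minimal sketch on synthetic frames (the 8×8 geometry, noise level, and source position are assumptions of the sketch, not the instrument's parameters):

```python
import random

def integrate_frames(frames):
    """Pixel-wise sum of video frames: signal grows as n_frames,
    uncorrelated noise only as sqrt(n_frames)."""
    rows, cols = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * cols for _ in range(rows)]
    for frame in frames:
        for r in range(rows):
            for c in range(cols):
                acc[r][c] += frame[r][c]
    return acc

rng = random.Random(0)

def make_frame():
    # Synthetic 8x8 frame: Gaussian noise plus a weak source at (3, 3).
    f = [[rng.gauss(0.0, 1.0) for _ in range(8)] for _ in range(8)]
    f[3][3] += 0.8
    return f

acc = integrate_frames([make_frame() for _ in range(200)])
peak = max((v, (r, c)) for r, row in enumerate(acc) for c, v in enumerate(row))
print(peak[1])  # the accumulated image localizes the hidden source
```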

  15. Fuzzy Adaptive Particle Swarm Optimization for Power Loss Minimisation in Distribution Systems Using Optimal Load Response

    DEFF Research Database (Denmark)

    Hu, Weihao; Chen, Zhe; Bak-Jensen, Birgitte

    2014-01-01

    Consumers may decide to shift their demand from high-price periods to low-price periods in order to reduce their electricity costs. This optimal load response to electricity prices for demand side management generates different load profiles and provides an opportunity to achieve power loss minimization in distribution systems. In this paper, a new method to achieve power loss minimization in distribution systems by using a price signal to guide the demand side management is proposed. A fuzzy adaptive particle swarm optimization (FAPSO) is used as a tool for the power loss minimization.
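
    The abstract gives no algorithmic detail of FAPSO, so as background a plain particle swarm optimization loop on a stand-in objective is sketched below. The fuzzy adaptation of the inertia weight is replaced here by a fixed weight, and the sphere function stands in for a feeder power-loss evaluation; both are assumptions of this sketch, not the authors' method:

```python
import random

def pso_minimize(loss, dim, n_particles=30, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Plain particle swarm optimization; FAPSO would instead adapt
    the inertia weight w via fuzzy rules on the swarm's progress."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + attraction toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Stand-in for a network power-loss evaluation (hypothetical):
sphere = lambda x: sum(v * v for v in x)
best, val = pso_minimize(sphere, dim=3)
print(val)  # near-zero loss at the optimum
```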

  16. Expert system strategies for the diagnostic in a particle physics experiment

    International Nuclear Information System (INIS)

    D'Antone, I.; Mandrioli, G.; Matteuzzi, P.

    1990-01-01

    Maintaining the functionality of a particle detector requires the knowledge of several experts: physicists and engineers for the detector and the electronic system. The integration of different kinds of knowledge and experience can be easily achieved using an Expert System. A real-time Expert System allows us to diagnose anomalies in the detector and data acquisition system; it makes an on-line diagnosis and, if an abnormal condition is identified, takes the appropriate action to reduce the unavailability of the apparatus. A method based on structural and behavioral reasoning is considered. By reasoning on the structure and on the functionality of the apparatus, all the possible failures that can explain the sensor readings are searched for. The behaviour of the apparatus components is described in qualitative terms to write the rules for the expert system
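
    The rule-based core of such a diagnostic expert system (qualitative conditions on sensor readings mapped to failure hypotheses) can be sketched in a few lines. The sensors and faults below are hypothetical, for illustration only:

```python
# Each rule maps a pattern of qualitative sensor readings to a
# candidate failure hypothesis (hypothetical rules, for illustration).
RULES = [
    ({"hv_ok": False}, "high-voltage supply failure"),
    ({"hv_ok": True, "rate": "zero"}, "dead readout channel"),
    ({"hv_ok": True, "rate": "high"}, "noisy channel or light leak"),
]

def diagnose(readings):
    """Return every failure hypothesis consistent with the readings."""
    return [fault for cond, fault in RULES
            if all(readings.get(k) == v for k, v in cond.items())]

print(diagnose({"hv_ok": True, "rate": "zero"}))  # → ['dead readout channel']
```

A production system would add priorities, follow-up actions, and structural reasoning over the apparatus model, but the match-all-consistent-hypotheses step is the same.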

  17. Contribution to a study of real time information systems for elementary particle physics

    International Nuclear Information System (INIS)

    Meyer, J.-M.

    1977-01-01

    The structure of data acquisition systems used in elementary particle physics experiments is formulated. The experiments and the equipment used are characterized from a data processing point of view, and the acquisition system is modeled to obtain an optimal architecture. Practical compromises are implemented, leading to a system with a new structure, now being used at the CERN SPS in a hyperon experiment. The realization of this system (FAS) is described using three computers: a NORD-10, a DDP and GESPRO. The latter is an original device built using INTEL-3000 integrated circuits. GESPRO can be microprogrammed with instructions specialized for use with CAMAC. Finally, the software for the entire FAS system is given. This includes the assembler, test programs for CAMAC, management programs for the memory, etc. [fr]

  18. Diagnostic system for measurement of particle balance in TMX-U

    International Nuclear Information System (INIS)

    Allen, S.L.; Correll, D.L.; Hill, D.N.; Wood, R.D.; Brown, M.D.

    1986-01-01

    Several diagnostics measure the particle sources and losses in the Tandem Mirror Experiment-Upgrade (TMX-U) plasma. An absolutely calibrated high-speed (0.5 ms per frame) filtered (6561 Å) video camera measures the total ionization source as a function of radius. An axial view of the plasma automatically integrates the axial variations within the depth of field of the system. Another camera, viewing the plasma radially, measures the axial source variations near the deuterium fueling source. Axial ion losses are measured by an array of Faraday cups that are equipped with grids for repelling electrons and are mounted at each end of the experiment. Unequal ion and electron (nonambipolar) radial losses are inferred from net current measurements on an array of grounded plates at each end. Any differences between the measured particle losses and sources may be attributed to ambipolar radial losses and/or azimuthal asymmetries in the particle-loss profiles. Methods of system calibration, along with details of computer data acquisition and processing of this relatively large set of data, are also presented. 6 refs., 1 fig
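
    The closing inference, which attributes to ambipolar radial losses whatever the ionization source supplies but the measured loss channels do not account for, is straightforward bookkeeping. A sketch with hypothetical rates (the numbers are illustrative, not TMX-U data):

```python
def ambipolar_radial_loss(ionization_source, axial_ion_loss,
                          nonambipolar_radial_loss):
    """Particle balance: source minus all measured loss channels;
    the residual is attributed to ambipolar radial transport (or to
    asymmetries in the loss profiles)."""
    return ionization_source - axial_ion_loss - nonambipolar_radial_loss

# Hypothetical particle rates (particles per second), illustration only:
residual = ambipolar_radial_loss(5.0e20, 3.2e20, 0.6e20)
print(residual)  # residual attributed to ambipolar radial transport
```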

  19. Beyond the relativistic point particle: A reciprocally invariant system and its generalisation

    International Nuclear Information System (INIS)

    Pavsic, Matej

    2009-01-01

    We investigate a reciprocally invariant system proposed by Low and Govaerts et al., whose action contains both the orthogonal and the symplectic forms and is invariant under global O(2,4) ∩ Sp(2,4) transformations. We find that the general solution to the classical equations of motion has no linear term in the evolution parameter, τ, but only the oscillatory terms, and therefore cannot represent a particle propagating in spacetime. As a remedy, we consider a generalisation of the action by adopting a procedure similar to that of Bars et al., who introduced the concept of a τ derivative that is covariant under local Sp(2) transformations between the phase space variables x^μ(τ) and p^μ(τ). This system, in particular, is similar to a rigid particle whose action contains the extrinsic curvature of the world line, which turns out to be helical in spacetime. Another possible generalisation is the introduction of a symplectic potential proposed by Montesinos. We show how the latter approach is related to Kaluza-Klein theories and to the concept of Clifford space, a manifold whose tangent space at any point is the Clifford algebra Cl(8), a promising framework for the unification of particles and forces.

  20. Characterization of the structural collapse undergone by an unstable system of ultrasoft particles

    Science.gov (United States)

    Prestipino, Santi; Malescio, Gianpietro

    2016-09-01

    The effective repulsion between macromolecules such as polymer chains or dendrimers is everywhere finite, implying that interaction centers can even coincide. If, in addition, the large-distance attraction is sufficiently strong, then the system is driven unstable. An unstable system lacks a conventional thermodynamics since, in the infinite-size limit, it eventually collapses to a finite-size cluster (for instance, a polymer dispersion undergoes irreversible coagulation when increasing the amount of dissolved salt beyond a certain limit). Using a double-Gaussian (DG) potential for demonstration, we study the phase behavior of a system of ultrasoft particles as a function of the attraction strength η. Above a critical threshold ηc, the DG system is unstable but its collective behavior is far from trivial since two separate regions of the thermodynamic plane can be identified, based on the value taken by the average waiting time for collapse: this is finite and small on one side of the boundary, while presumably infinite in the other region. In order to make sense of this evidence, we consider a stable system of particles interacting through a DG potential augmented with a hard core (stabilized DG, or SDG potential). We provide arguments supporting the view that the boundary line of the unstable DG model is the remnant of the spinodal line of a fluid-fluid phase transition occurring in the SDG model when the hard-core diameter is sent to zero.