WorldWideScience

Sample records for gpu-based calculation method

  1. A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.

    Science.gov (United States)

    Nagaoka, Tomoaki; Watanabe, Soichi

    2010-01-01

    Numerical simulations with numerical human models using the finite-difference time-domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation runs too slowly. We therefore focus on general-purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using the Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with that of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations on a GPU can significantly reduce the run time relative to a conventional CPU, even for a naive GPU implementation of the three-dimensional FDTD method, although the GPU/CPU speed ratio varies with the calculation domain and the thread block size.
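
    The GPU port of FDTD amounts to a stencil update per field component, with one thread per Yee cell. The paper does not publish its kernels, so the following CUDA sketch of an Ez update is an illustrative assumption (the grid layout, the coefficient names ca/cb, and the thread mapping are invented for the example); the thread block size it is launched with is exactly the tuning knob the abstract says the speed ratio depends on.

```cuda
#include <cuda_runtime.h>

// Minimal illustrative sketch of a 3D FDTD E-field update (Ez component only).
// One thread per grid cell; nx/ny/nz, coefficients ca/cb, and the flat memory
// layout are assumptions for illustration, not details from the paper.
__global__ void update_ez(float* ez, const float* hx, const float* hy,
                          int nx, int ny, int nz, float ca, float cb)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    int k = blockIdx.z * blockDim.z + threadIdx.z;
    if (i < 1 || j < 1 || i >= nx || j >= ny || k >= nz) return;

    int idx = (k * ny + j) * nx + i;
    // Curl of H evaluated with backward differences on the Yee grid.
    float curl_h = (hy[idx] - hy[idx - 1])     // dHy/dx
                 - (hx[idx] - hx[idx - nx]);   // dHx/dy
    ez[idx] = ca * ez[idx] + cb * curl_h;
}

// Launch example: the block shape is the tuning knob the abstract mentions.
// dim3 block(8, 8, 4);
// dim3 grid((nx + 7) / 8, (ny + 7) / 8, (nz + 3) / 4);
// update_ez<<<grid, block>>>(d_ez, d_hx, d_hy, nx, ny, nz, ca, cb);
```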

  2. GPU based acceleration of first principles calculation

    International Nuclear Information System (INIS)

    Tomono, H; Tsumuraya, K; Aoki, M; Iitaka, T

    2010-01-01

    We present Graphics Processing Unit (GPU)-accelerated simulations of first-principles electronic structure calculations. The FFT, which is the most time-consuming part, is accelerated by a factor of about 10. As a result, the total computation time of a first-principles calculation is reduced to 15 percent of that of the CPU.
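
    In plane-wave first-principles codes, the 3D FFT of the wavefunction grid dominates the run time, which is the step the authors offload. A minimal sketch of doing that step with cuFFT is shown below; the 64³ grid size is an assumption for illustration, not from the paper.

```cuda
#include <cufft.h>
#include <cuda_runtime.h>

// Minimal sketch: offloading the 3D FFT of a wavefunction grid to cuFFT.
// The 64^3 grid size is an illustrative assumption.
int main()
{
    const int n = 64;
    cufftComplex* d_psi;
    cudaMalloc(&d_psi, sizeof(cufftComplex) * n * n * n);

    cufftHandle plan;
    cufftPlan3d(&plan, n, n, n, CUFFT_C2C);           // plan once, reuse every SCF step
    cufftExecC2C(plan, d_psi, d_psi, CUFFT_FORWARD);  // in-place forward transform
    cudaDeviceSynchronize();

    cufftDestroy(plan);
    cudaFree(d_psi);
    return 0;
}
```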

  3. Validation of GPU based TomoTherapy dose calculation engine.

    Science.gov (United States)

    Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond

    2012-04-01

    The graphics processing unit (GPU) based TomoTherapy convolution/superposition (C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architectural difference between the GPU and CPU, several algorithm changes were made from the CPU dose engine to the GPU dose engine. These changes make the GPU dose slightly different from the CPU-cluster dose. Before the commercial release of the GPU dose engine, its accuracy had to be validated. Thirty-eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate their equivalency. Gamma indices (Γ) were used for the equivalency evaluation. The GPU dose was further verified against absolute point dose measurements with an ion chamber and against film measurements for the phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in a heterogeneous phantom and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine: the majority of cases had over 99.99% of voxels passing the Γ(1%, 1 mm) criterion. The GPU dose engine also showed a similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster-based dose engine without degradation in dose accuracy.

  4. Clinical implementation of a GPU-based simplified Monte Carlo method for a treatment planning system of proton beam therapy

    International Nuclear Information System (INIS)

    Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T

    2011-01-01

    We implemented the simplified Monte Carlo (SMC) method on a graphics processing unit (GPU) under the Compute Unified Device Architecture (CUDA) platform developed by NVIDIA. The GPU-based SMC was clinically applied to four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to computation time and discrepancy. In both the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar within statistical errors. The GPU-based SMC performed 12.30 to 16.00 times faster than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC ranged from 9 to 67 s for the clinical cases. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning. (note)

  5. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    International Nuclear Information System (INIS)

    Gu Xuejun; Jia Xun; Jiang, Steve B; Jelen, Urszula; Li Jinsheng

    2011-01-01

    Targeting the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on a graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations were conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, the 3D-density correction improves on the conventional FSPB algorithm, and for most cases the improvement is significant. Regarding efficiency, because of the appropriate arrangement of memory access and the use of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, the new algorithm, though slightly sacrificing computational efficiency (∼5-15% lower), significantly improves dose calculation accuracy, making it more suitable for online IMRT replanning.

  6. A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom

    International Nuclear Information System (INIS)

    Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H.

    2014-08-01

    As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross-section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms were considered: one composed of a homogeneous water-based medium, the second of bone, the third of lung, and the fourth of a heterogeneous bone-and-vacuum geometry. Simulations were performed for a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which whether the photon stops at a frontier depends on whether the material changes along the photon's travel line. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU. (Author)
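
    The abstract contrasts voxel-boundary stepping with Woodcock tracking. As a rough illustration of the latter, the CUDA sketch below samples free paths with a majorant cross section and accepts real interactions with probability σ(material)/σ_max, so no voxel-boundary intersection tests are needed. The geometry and cross-section stubs are placeholders for the sketch, not CUBMC code.

```cuda
#include <curand_kernel.h>

// Placeholder phantom: a 32 cm cube of material 0; -1 means "outside".
__device__ int voxel_material(float x, float y, float z)
{
    return (fabsf(x) < 16.f && fabsf(y) < 16.f && z >= 0.f && z < 32.f) ? 0 : -1;
}

// Placeholder total cross section (1/cm); real codes look up tabulated data.
__device__ float sigma_total(int material, float energy)
{
    return 0.07f;
}

// Minimal sketch of Woodcock (delta) tracking for photons in a voxel phantom.
// sigma_max is the majorant cross section over all materials.
__global__ void woodcock_track(curandState* states, float sigma_max,
                               float energy, int* n_real_interactions)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState rng = states[tid];

    float x = 0.f, y = 0.f, z = 0.f;     // photon position
    float ux = 0.f, uy = 0.f, uz = 1.f;  // photon direction
    for (int step = 0; step < 1000; ++step) {
        // Path length sampled with the majorant: no voxel-boundary checks needed.
        float s = -logf(curand_uniform(&rng)) / sigma_max;
        x += s * ux; y += s * uy; z += s * uz;

        int mat = voxel_material(x, y, z);
        if (mat < 0) break;              // photon left the phantom
        // Real interaction with probability sigma(mat)/sigma_max;
        // otherwise it is a virtual collision and the flight continues.
        if (curand_uniform(&rng) * sigma_max < sigma_total(mat, energy)) {
            atomicAdd(n_real_interactions, 1);
            break;                       // hand off to the interaction sampling
        }
    }
    states[tid] = rng;
}
```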

  7. A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom

    Energy Technology Data Exchange (ETDEWEB)

    Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)

    2014-08-15

    As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross-section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms were considered: one composed of a homogeneous water-based medium, the second of bone, the third of lung, and the fourth of a heterogeneous bone-and-vacuum geometry. Simulations were performed for a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which whether the photon stops at a frontier depends on whether the material changes along the photon's travel line. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU. (Author)

  8. SU-E-T-37: A GPU-Based Pencil Beam Algorithm for Dose Calculations in Proton Radiation Therapy

    International Nuclear Information System (INIS)

    Kalantzis, G; Leventouri, T; Tachibana, H; Shang, C

    2015-01-01

    Purpose: Recent developments in radiation therapy have focused on applications of charged particles, especially protons. Over the years, several dose calculation methods have been proposed in proton therapy. A common characteristic of all these methods is their extensive computational burden. In the current study we present, for the first time to the best of our knowledge, a GPU-based PBA for proton dose calculations in Matlab. Methods: We employed an analytical expression for the proton depth-dose distribution. The central-axis term is taken from the broad-beam central-axis depth dose in water modified by an inverse-square correction, while the off-axis term was considered Gaussian. The serial code was implemented in MATLAB and launched on a desktop with a quad-core Intel Xeon X5550 at 2.67 GHz with 8 GB of RAM. For the parallelization on the GPU, the Parallel Computing Toolbox was employed and the code was launched on a GTX 770 with Kepler architecture. The performance comparison was based on speedup factors. Results: The performance of the GPU code was evaluated for three energies: low (50 MeV), medium (100 MeV) and high (150 MeV). Four square fields were selected for each energy, and the dose calculations were performed with both the serial and parallel codes for a homogeneous water phantom of size 300×300×300 mm³. The resolution of the PBs was set to 1.0 mm. The maximum speedup of ∼127 was achieved for the highest energy and the largest field size. Conclusion: A GPU-based PB algorithm for proton dose calculations in Matlab was presented. A maximum speedup of ∼127 was achieved. Future directions of the current work include extension of our method to dose calculation in heterogeneous phantoms.
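
    The dose model described (a central-axis depth dose with an inverse-square correction, multiplied by a Gaussian off-axis term) parallelizes naturally with one thread per voxel. The original work used MATLAB's Parallel Computing Toolbox; the sketch below expresses the same model as a plain CUDA kernel, with placeholder depth-dose and beam-width functions standing in for the paper's analytical expressions.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Placeholder central-axis depth dose in water (not measured data):
// a crude plateau ending at a "range" of 15 cm.
__device__ float ddd(float z) { return z < 15.f ? 1.f + 0.05f * z : 0.f; }

// Placeholder lateral spread growing with depth (cm).
__device__ float sigma_z(float z) { return 0.3f + 0.02f * z; }

// One thread per voxel: dose = depth dose * inverse-square * Gaussian off-axis.
// Names, the voxel size, and the SAD parameterization are illustrative.
__global__ void pb_dose(float* dose, int nx, int ny, int nz, float voxel,
                        float x0, float y0, float sad)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    int k = blockIdx.z * blockDim.z + threadIdx.z;
    if (i >= nx || j >= ny || k >= nz) return;

    float z  = k * voxel;                                 // depth along the beam
    float dx = i * voxel - x0, dy = j * voxel - y0;       // off-axis distance
    float r2 = dx * dx + dy * dy;
    float s  = sigma_z(z);
    float invsq = (sad * sad) / ((sad + z) * (sad + z));  // inverse-square correction
    float gauss = expf(-r2 / (2.f * s * s)) / (6.2831853f * s * s);
    dose[(k * ny + j) * nx + i] += ddd(z) * invsq * gauss;
}
```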

  9. A GPU-based solution for fast calculation of the betweenness centrality in large weighted networks

    Directory of Open Access Journals (Sweden)

    Rui Fan

    2017-12-01

    Betweenness, a widely employed centrality measure in network science, is a decent proxy for investigating network loads and rankings. However, its extremely high computational cost greatly hinders its applicability in large networks. Although several parallel algorithms have been presented to reduce its calculation cost for unweighted networks, a fast solution for weighted networks, which are commonly encountered in many realistic applications, is still lacking. In this study, we develop an efficient parallel GPU-based approach to boost the calculation of the betweenness centrality (BC) for large weighted networks. We parallelize the traditional Dijkstra algorithm by selecting more than one frontier vertex each time and then inspecting the frontier vertices simultaneously. By combining the parallel SSSP algorithm with the parallel BC framework, our GPU-based betweenness algorithm achieves much better performance than its CPU counterparts. Moreover, to further improve performance we integrate the work-efficient strategy, and to address the load-imbalance problem we introduce a warp-centric technique, which assigns many threads rather than one to a single frontier vertex. Experiments on both realistic and synthetic networks demonstrate the efficiency of our solution, which achieves 2.9× to 8.44× speedups over the parallel CPU implementation. Our algorithm is open-source and freely available to the community at https://dx.doi.org/10.6084/m9.figshare.4542405. Considering the pervasive deployment and declining price of GPUs in personal computers and servers, our solution will offer unprecedented opportunities for exploring betweenness-related problems and will motivate follow-up efforts in network science.
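
    The key parallelization step the abstract describes is relaxing all frontier vertices of a Dijkstra pass at once. A minimal CUDA sketch of one such relaxation step over a CSR graph is given below; the atomicMin-via-CAS on float distances and the one-thread-per-frontier-vertex mapping are illustrative assumptions (the paper's warp-centric variant would instead assign a whole warp to each vertex's edge list).

```cuda
#include <cuda_runtime.h>

// atomicMin for float distances, emulated with compare-and-swap.
__device__ float atomicMinFloat(float* addr, float value)
{
    float old = *addr;
    while (value < old) {
        int assumed = __float_as_int(old);
        int prev = atomicCAS((int*)addr, assumed, __float_as_int(value));
        if (prev == assumed) break;
        old = __int_as_float(prev);
    }
    return old;
}

// One relaxation step: every vertex in the current frontier relaxes its
// outgoing edges in parallel. CSR arrays are illustrative assumptions.
__global__ void relax_frontier(const int* row_ptr, const int* col_idx,
                               const float* weight, const int* frontier,
                               int frontier_size, float* dist)
{
    int f = blockIdx.x * blockDim.x + threadIdx.x;
    if (f >= frontier_size) return;
    int u = frontier[f];
    // A warp-centric variant would spread u's edge list over a warp to
    // balance load across high-degree vertices.
    for (int e = row_ptr[u]; e < row_ptr[u + 1]; ++e)
        atomicMinFloat(&dist[col_idx[e]], dist[u] + weight[e]);
}
```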

  10. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy

    International Nuclear Information System (INIS)

    Tian, Zhen; Jia, Xun; Jiang, Steve B; Graves, Yan Jiang

    2014-01-01

    Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL it came from. The dose in water for each PSL was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the maximum dose for the open fields tested was improved on average from 70.56% to 99.36% for the 2%/2 mm criteria and from 32.22% to 89.65% for the 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan.

  11. GPU based contouring method on grid DEM data

    Science.gov (United States)

    Tan, Liheng; Wan, Gang; Li, Feng; Chen, Xiaohui; Du, Wenlong

    2017-08-01

    This paper presents a novel method to generate contour lines from grid DEM data based on the programmable GPU pipeline. Previous contouring approaches often use the CPU to construct a finite element mesh from the raw DEM data and then extract contour segments from the elements; they also need a tracing or sorting strategy to generate the final continuous contours. These approaches can be heavily CPU-intensive and time-consuming, and the generated contours are unsmooth if the raw data are sparsely distributed. Unlike the CPU approaches, we employ the GPU's vertex shader to generate a triangular mesh with arbitrary user-defined density, in which the height of each vertex is calculated through a third-order Cardinal spline function. Then, in the same frame, segments are extracted from the triangles by the geometry shader and transferred to the CPU side, with an internal order, in the GPU's transform feedback stage. Finally, we propose a "Grid Sorting" algorithm to obtain continuous contour lines by traversing the segments only once. Our method makes use of multiple stages of the GPU pipeline for computation, generates smooth contour lines, and is significantly faster than the previous CPU approaches. The algorithm can be easily implemented with the OpenGL 3.3 API or higher on consumer-level PCs.

  12. A GPU-based mipmapping method for water surface visualization

    Science.gov (United States)

    Li, Hua; Quan, Wei; Xu, Chao; Wu, Yan

    2018-03-01

    Visualization of water surfaces is a hot topic in computer graphics. In this paper, we present a fast method to generate a wide range of water surface with good image quality both near and far from the viewpoint. This method uses a uniform mesh and fractal Perlin noise to model the water surface. Mipmapping is applied to the surface textures, which adjusts the resolution with respect to the distance from the viewpoint and reduces the computing cost. The lighting effect is computed based on shadow mapping, Snell's law, and the Fresnel term. The rendering pipeline uses a CPU-GPU shared memory structure, which improves rendering efficiency. Experimental results show that our approach visualizes the water surface with good image quality at real-time frame rates.

  13. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    Science.gov (United States)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. To address the high computational complexity and poor real-time performance of blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm's data intensiveness and data-parallel structure with the GPU's single-instruction multiple-thread execution model, a new parallel midfrequency-based algorithm for blind image restoration is proposed, which is suitable for GPU stream computing. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. To better manage the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data so that the transmission rate stays within the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  14. SU-D-207-04: GPU-Based 4D Cone-Beam CT Reconstruction Using Adaptive Meshing Method

    International Nuclear Information System (INIS)

    Zhong, Z; Gu, X; Iyengar, P; Mao, W; Wang, J; Guo, X

    2015-01-01

    Purpose: Due to the limited number of projections at each phase, the image quality of four-dimensional cone-beam CT (4D-CBCT) is often degraded, which decreases the accuracy of subsequent motion modeling. One promising method is the simultaneous motion estimation and image reconstruction (SMEIR) approach. The objective of this work is to enhance the computational speed of the SMEIR algorithm using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate a tetrahedral mesh based on the features of a reference-phase 4D-CBCT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. After mesh generation, the updated motion model and the other phases of the 4D-CBCT can be obtained by matching the 4D-CBCT projection images at each phase with the corresponding forward projections of the deformed reference phase. The entire 4D-CBCT reconstruction process is implemented on the GPU, significantly increasing computational efficiency due to its tremendous parallel computing ability. Results: A 4D XCAT digital phantom was used to test the proposed mesh-based image reconstruction algorithm. The resulting images show that both bone structures and the inside of the lung are well preserved, and the tumor position is well captured. Compared to the previous voxel-based CPU implementation of SMEIR, the proposed method is about 157 times faster for reconstructing a 10-phase 4D-CBCT with dimensions 256×256×150. Conclusion: The GPU-based parallel 4D-CBCT reconstruction method uses a feature-based mesh for estimating the motion model and produces images equivalent to the previous voxel-based SMEIR approach, with significantly improved computational speed.

  15. MO-C-17A-03: A GPU-Based Method for Validating Deformable Image Registration in Head and Neck Radiotherapy Using Biomechanical Modeling

    International Nuclear Information System (INIS)

    Neylon, J; Min, Y; Qi, S; Kupelian, P; Santhanam, A

    2014-01-01

    Purpose: Deformable image registration (DIR) plays a pivotal role in head and neck adaptive radiotherapy, but systematic validation of DIR algorithms has been limited by a lack of quantitative high-resolution ground truth. We address this limitation by developing a GPU-based framework that provides systematic DIR validation by generating (a) model-guided synthetic CTs representing posture and physiological changes, and (b) model-guided landmark-based validation. Methods: A GPU-based framework was developed to generate massive mass-spring biomechanical models from patient simulation CTs and contoured structures. The biomechanical model represented soft-tissue deformations for known rigid skeletal motion. Posture changes were simulated by articulating the skeletal anatomy, which subsequently applied elastic corrective forces upon the soft tissue. Physiological changes such as tumor regression and weight loss were simulated in a biomechanically precise manner. Synthetic CT data were then generated from the deformed anatomy. The initial and final positions of one hundred randomly chosen mass elements inside each of the internal contoured structures were recorded as ground truth data. The process was automated to create 45 synthetic CT datasets for a given patient CT. For instance, the head rotation was varied between +/− 4 degrees along each axis, and tumor volumes were systematically reduced by up to 30%. Finally, the original CT and deformed synthetic CT were registered using an optical-flow based DIR. Results: Each synthetic data creation took approximately 28 seconds of computation time. The number of landmarks per dataset varied between two and three thousand. The validation method is able to perform sub-voxel analysis of the DIR and report the results by structure, giving a much more in-depth investigation of the error. Conclusions: We presented a GPU-based high-resolution biomechanical head and neck model to validate DIR algorithms by generating CT-equivalent 3D datasets with known ground-truth deformations.

  16. Comparison of GPU-Based Numerous Particles Simulation and Experiment

    International Nuclear Information System (INIS)

    Park, Sang Wook; Jun, Chul Woong; Sohn, Jeong Hyun; Lee, Jae Wook

    2014-01-01

    The dynamic behavior of numerous grains interacting with each other can be easily observed. In this study, this dynamic behavior was analyzed based on the contacts between numerous grains. The discrete element method was used for analyzing the dynamic behavior of each particle, and the neighboring-cell algorithm was employed for detecting their contacts. The Hertzian and tangential sliding-friction contact models were used for calculating the contact forces acting between the particles. A GPU-based parallel program was developed for conducting the computer simulation and calculating the numerous contacts. A dam-break experiment was performed to verify the simulation results. The reliability of the program was verified by comparing the results of the simulation with those of the experiment.
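
    For illustration, the per-pair force evaluation such a DEM code performs can be written as one CUDA thread per particle looping over its cell-list neighbors; the Hertzian normal force scales with overlap^(3/2). The names, the precomputed neighbor lists, and the viscous damping term are assumptions for the sketch, not the authors' implementation.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Per-particle Hertzian normal contact forces. Neighbor lists produced by the
// neighboring-cell search are assumed to exist already; kn (stiffness),
// cn (damping), and the uniform radius r are illustrative parameters.
__global__ void hertz_forces(const float3* pos, const float3* vel, float3* force,
                             const int* nbr_start, const int* nbr_list,
                             int n, float r, float kn, float cn)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float3 f = make_float3(0.f, 0.f, 0.f);
    for (int e = nbr_start[i]; e < nbr_start[i + 1]; ++e) {
        int j = nbr_list[e];
        float3 d = make_float3(pos[i].x - pos[j].x,
                               pos[i].y - pos[j].y,
                               pos[i].z - pos[j].z);
        float dist = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
        float overlap = 2.f * r - dist;               // penetration depth
        if (overlap <= 0.f || dist == 0.f) continue;  // no contact
        float3 nrm = make_float3(d.x / dist, d.y / dist, d.z / dist);
        // Normal relative velocity, for the viscous damping term.
        float vn = (vel[i].x - vel[j].x) * nrm.x
                 + (vel[i].y - vel[j].y) * nrm.y
                 + (vel[i].z - vel[j].z) * nrm.z;
        // Hertz: force grows with overlap^(3/2).
        float fn = kn * powf(overlap, 1.5f) - cn * vn;
        f.x += fn * nrm.x; f.y += fn * nrm.y; f.z += fn * nrm.z;
    }
    force[i] = f;
}
```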

  17. A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions

    Science.gov (United States)

    Liang, Yihao; Xing, Xiangjun; Li, Yaohang

    2017-06-01

    In this work we present an efficient implementation of canonical Monte Carlo simulation for Coulomb many-body systems on graphics processing units (GPUs). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architecture and adopts the sequential updating scheme of the Metropolis algorithm. It makes no approximation in the computation of energy and reaches a remarkable 440-fold speedup compared with the serial implementation on a CPU. We further use this method to simulate primitive-model electrolytes and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, the renormalized valences of the constituent ions, and the renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
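
    In a sequential-update Metropolis scheme, the natural GPU task is the exact energy change of one proposed single-ion move: a parallel sum of pairwise Coulomb terms over all other ions, with no cutoff or approximation. The CUDA sketch below (names, units, and the atomicAdd accumulation are illustrative assumptions) shows that reduction; the host would then apply the accept/reject test.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Energy change for moving particle `moved` to position `trial`:
// each thread sums its share of pairwise terms via a grid-stride loop,
// then a shared-memory tree reduction (blockDim.x must be a power of two)
// accumulates block results into *dE with atomicAdd.
// Launch: delta_energy<<<blocks, threads, threads * sizeof(float)>>>(...).
__global__ void delta_energy(const float3* pos, const float* charge, int n,
                             int moved, float3 trial, float* dE)
{
    extern __shared__ float partial[];
    int tid = threadIdx.x;
    float acc = 0.f;
    float3 old = pos[moved];

    for (int j = blockIdx.x * blockDim.x + tid; j < n; j += gridDim.x * blockDim.x) {
        if (j == moved) continue;
        float3 p = pos[j];
        float r_new = sqrtf((p.x - trial.x) * (p.x - trial.x) +
                            (p.y - trial.y) * (p.y - trial.y) +
                            (p.z - trial.z) * (p.z - trial.z));
        float r_old = sqrtf((p.x - old.x) * (p.x - old.x) +
                            (p.y - old.y) * (p.y - old.y) +
                            (p.z - old.z) * (p.z - old.z));
        acc += charge[moved] * charge[j] * (1.f / r_new - 1.f / r_old);
    }

    partial[tid] = acc;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {  // tree reduction
        if (tid < s) partial[tid] += partial[tid + s];
        __syncthreads();
    }
    if (tid == 0) atomicAdd(dE, partial[0]);  // host applies Metropolis test
}
```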

  18. GPU based Monte Carlo for PET image reconstruction: detector modeling

    International Nuclear Information System (INIS)

    Légrády; Cserkaszky, Á.; Lantos, J.; Patay, G.; Bükki, T.

    2011-01-01

    Given the similarities between visible-light transport and neutral-particle trajectories, Graphics Processing Units (GPUs) are almost like dedicated hardware designed for Monte Carlo (MC) calculations. A GPU-based MC gamma transport code has been developed for Positron Emission Tomography iterative image reconstruction, calculating the projection from unknowns to data at each iteration step while taking into account the full physics of the system. This paper describes the simplified scintillation detector modeling and its effect on convergence. (author)

  19. GPU-based Branchless Distance-Driven Projection and Backprojection.

    Science.gov (United States)

    Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong

    2017-12-01

    Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of this branch behavior makes it inefficient to implement on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behavior can be eliminated by factorizing the DD operation into three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone-beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved a 137-fold speedup for forward projection and a 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. The GPU-based branchless DD method was evaluated with iterative reconstruction algorithms on both simulated and real datasets. It obtained visually identical images to the CPU reference algorithm.
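
    The three-step factorization is simple to state in one dimension: build a cumulative sum of the input, sample it with linear interpolation at the projected cell boundaries, and difference adjacent samples. The CUDA sketch below illustrates exactly that for one detector row; it uses plain arrays, whereas the paper's implementation exploits texture memory and hardware interpolation for the sampling step.

```cuda
#include <cuda_runtime.h>

// Linear interpolation of a precomputed cumulative sum (the "integration"
// step's output) at fractional position x. On real hardware this maps onto
// texture interpolation; the plain-array version is an illustrative choice.
__device__ float sample_integral(const float* integ, int n, float x)
{
    if (x <= 0.f) return 0.f;
    if (x >= n - 1.f) return integ[n - 1];
    int i = (int)x;
    float t = x - i;
    return integ[i] * (1.f - t) + integ[i + 1] * t;
}

// One detector row: cell d covers [bounds[d], bounds[d+1]] in the integral's
// coordinate space (bounds has n_det + 1 entries). The same code path runs
// for every overlap geometry, eliminating the per-boundary branches of
// classic distance-driven loops.
__global__ void dd_project_row(const float* integ, int n_in,
                               const float* bounds, float* proj, int n_det)
{
    int d = blockIdx.x * blockDim.x + threadIdx.x;
    if (d >= n_det) return;
    float lo = bounds[d], hi = bounds[d + 1];
    // "Differentiation" step: difference of two interpolated samples.
    proj[d] = (sample_integral(integ, n_in, hi) -
               sample_integral(integ, n_in, lo)) / (hi - lo);
}
```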

  20. SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification

    International Nuclear Information System (INIS)

    Folkerts, M; Graves, Y; Tian, Z; Gu, X; Jia, X; Jiang, S

    2014-01-01

    Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute "delivered dose" from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application built on existing technologies (HTML5, Python, and Django). This tool interfaces with Python, C-code libraries, and command-line GPU applications to perform MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run an MC dose calculation. The resulting web app is powerful, easy to use, and able to re-compute both the plan dose (from DICOM data) and the delivered dose (from logfile data). Both dynalog and trajectory log file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistical uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additionally, the user may upload the delivery logfile data from the linac to compute a "delivered dose" calculation and the corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parsing, fluence map generation, CT image processing, GPU-based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file. The system takes both DICOM data and logfile data to compute the plan dose and delivered dose, respectively. Conclusion: We successfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA.

  1. SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification

    Energy Technology Data Exchange (ETDEWEB)

    Folkerts, M [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States); University of California, San Diego, La Jolla, CA (United States); Graves, Y [University of California, San Diego, La Jolla, CA (United States); Tian, Z; Gu, X; Jia, X; Jiang, S [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2014-06-01

    Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute "delivered dose" from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application built on existing technologies (HTML5, Python, and Django). This tool interfaces with Python, C-code libraries, and command-line GPU applications to perform MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run an MC dose calculation. The resulting web app is powerful, easy to use, and able to re-compute both the plan dose (from DICOM data) and the delivered dose (from logfile data). Both dynalog and trajectory log file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistical uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additionally, the user may upload the delivery logfile data from the linac to compute a "delivered dose" calculation and the corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parsing, fluence map generation, CT image processing, GPU-based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file. The system takes both DICOM data and logfile data to compute the plan dose and delivered dose, respectively. Conclusion: We successfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA.

  2. Moving-Target Position Estimation Using GPU-Based Particle Filter for IoT Sensing Applications

    Directory of Open Access Journals (Sweden)

    Seongseop Kim

    2017-11-01

    A particle filter (PF) has been introduced for effective position estimation of moving targets in non-Gaussian and nonlinear systems. The time difference of arrival (TDOA) method using an acoustic sensor array has normally been used to estimate the position of a moving target that conceals its location, especially underwater. In this paper, we propose a GPU-based acceleration of target position estimation using a PF and propose an efficient system and software architecture. The proposed graphics processing unit (GPU)-based algorithm has advantages in applying PF signal processing to a target system consisting of large-scale Internet of Things (IoT)-driven sensors because of its scalable parallelization. For the TDOA measurement from the acoustic sensor array, we use the generalized cross-correlation phase transform (GCC-PHAT) method to obtain the correlation coefficient of the signal using the Fast Fourier Transform (FFT), and we accelerate the calculation of GCC-PHAT based TDOA measurements using the FFT with GPU Compute Unified Device Architecture (CUDA). The proposed approach uses a parallelization method in the target position estimation algorithm based on GPU PF processing. In addition, it can efficiently estimate sudden changes in the target's movement using GPU-based parallel computing, which can also be used for multiple-target tracking. It also provides scalability in extending the detection algorithm to an increasing number of sensors; therefore, the proposed architecture can be applied in IoT sensing applications with a large number of sensors. The target estimation algorithm was verified using MATLAB and implemented using GPU CUDA. We implemented the proposed signal-processing acceleration system on the target GPU and analyzed its execution time. The execution time of the algorithm is reduced by 55% compared to standalone CPU operation on the target embedded board, an NVIDIA Jetson TX1.
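
    GCC-PHAT itself is compact: whiten the cross-spectrum of two sensor channels to unit magnitude and inverse-transform it; the location of the correlation peak gives the TDOA. A minimal cuFFT-based sketch of that core is given below (the signal length, names, and the in-place transforms are illustrative assumptions).

```cuda
#include <cufft.h>
#include <cuda_runtime.h>
#include <math.h>

// PHAT weighting: cross-spectrum X(k) * conj(Y(k)), normalized to unit magnitude.
__global__ void phat_weight(const cufftComplex* X, const cufftComplex* Y,
                            cufftComplex* R, int n)
{
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= n) return;
    float re = X[k].x * Y[k].x + X[k].y * Y[k].y;
    float im = X[k].y * Y[k].x - X[k].x * Y[k].y;
    float mag = sqrtf(re * re + im * im) + 1e-12f;  // avoid divide-by-zero
    R[k].x = re / mag;
    R[k].y = im / mag;
}

// Host driver: forward FFTs, PHAT whitening, inverse FFT (unnormalized);
// the peak index of |r| then maps to the time delay between the channels.
void gcc_phat(cufftComplex* d_x, cufftComplex* d_y, cufftComplex* d_r, int n)
{
    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);
    cufftExecC2C(plan, d_x, d_x, CUFFT_FORWARD);
    cufftExecC2C(plan, d_y, d_y, CUFFT_FORWARD);
    phat_weight<<<(n + 255) / 256, 256>>>(d_x, d_y, d_r, n);
    cufftExecC2C(plan, d_r, d_r, CUFFT_INVERSE);
    cufftDestroy(plan);
}
```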

  3. Implementation and Optimization of GPU-Based Static State Security Analysis in Power Systems

    Directory of Open Access Journals (Sweden)

    Yong Chen

    2017-01-01

    Static state security analysis (SSSA) is one of the most important computations to check whether a power system is in a normal and secure operating state. It is a challenge to satisfy real-time requirements with CPU-based concurrent methods due to the intensive computation. A sensitivity-analysis-based method using a graphics processing unit (GPU) is proposed for power systems; it can reduce calculation time by 40% compared to execution on a 4-core CPU. The proposed method involves load flow analysis and sensitivity analysis. In the load flow analysis, a multifrontal method for sparse LU factorization is explored on the GPU through dynamic frontal task scheduling between the CPU and GPU. The varying matrix operations during sensitivity analysis on the GPU are highly optimized in this study. The results of performance evaluations show that the proposed GPU-based SSSA with optimized matrix operations achieves a significant reduction in computation time.

  4. SU-E-T-395: Multi-GPU-Based VMAT Treatment Plan Optimization Using a Column-Generation Approach

    International Nuclear Information System (INIS)

    Tian, Z; Shi, F; Jia, X; Jiang, S; Peng, F

    2014-01-01

    Purpose: GPUs have been employed to speed up VMAT optimization from hours to minutes. However, their limited memory capacity makes it difficult to handle cases with a huge dose-deposition-coefficient (DDC) matrix, e.g. those with a large target size, multiple arcs, small beam-angle intervals and/or small beamlet size. We propose a multi-GPU-based VMAT optimization to solve this memory issue and make GPU-based VMAT more practical for clinical use. Methods: Our column-generation-based method generates apertures sequentially by iteratively searching for an optimal feasible aperture (referred to as the pricing problem, PP) and optimizing aperture intensities (referred to as the master problem, MP). The PP requires access to the large DDC matrix, which is implemented on a multi-GPU system. Each GPU stores a DDC sub-matrix corresponding to one fraction of the beam angles and is only responsible for the calculations related to those angles. Broadcast and parallel-reduction schemes are adopted for inter-GPU data transfer. The MP is a relatively small-scale problem and is implemented on one GPU. One head-and-neck cancer case was used for testing. Three different strategies for VMAT optimization on a single GPU were also implemented for comparison: (S1) truncating the DDC matrix to ignore its small-value entries; (S2) transferring the DDC matrix to the GPU part by part during optimization whenever needed; (S3) moving the DDC-matrix-related calculation onto the CPU. Results: Our multi-GPU-based implementation reaches a good plan within 1 minute. Although S1 was 10 seconds faster than our method, the obtained plan quality was worse. Both S2 and S3 handle the full DDC matrix and hence yield the same plan as our method; however, the computation times are longer, namely 4 minutes and 30 minutes, respectively. Conclusion: Our multi-GPU-based VMAT optimization effectively solves the limited-memory issue with good plan quality and high efficiency, making GPU-based ultra-fast VMAT planning practical for real clinical use.

  5. SU-E-T-395: Multi-GPU-Based VMAT Treatment Plan Optimization Using a Column-Generation Approach

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Z; Shi, F; Jia, X; Jiang, S [UT Southwestern Medical Ctr at Dallas, Dallas, TX (United States); Peng, F [Carnegie Mellon University, Pittsburgh, PA (United States)

    2014-06-01

    Purpose: GPUs have been employed to speed up VMAT optimization from hours to minutes. However, their limited memory capacity makes it difficult to handle cases with a huge dose-deposition-coefficient (DDC) matrix, e.g. those with a large target size, multiple arcs, small beam-angle intervals and/or small beamlet size. We propose a multi-GPU-based VMAT optimization to solve this memory issue and make GPU-based VMAT more practical for clinical use. Methods: Our column-generation-based method generates apertures sequentially by iteratively searching for an optimal feasible aperture (referred to as the pricing problem, PP) and optimizing aperture intensities (referred to as the master problem, MP). The PP requires access to the large DDC matrix, which is implemented on a multi-GPU system. Each GPU stores a DDC sub-matrix corresponding to one fraction of the beam angles and is only responsible for the calculations related to those angles. Broadcast and parallel-reduction schemes are adopted for inter-GPU data transfer. The MP is a relatively small-scale problem and is implemented on one GPU. One head-and-neck cancer case was used for testing. Three different strategies for VMAT optimization on a single GPU were also implemented for comparison: (S1) truncating the DDC matrix to ignore its small-value entries; (S2) transferring the DDC matrix to the GPU part by part during optimization whenever needed; (S3) moving the DDC-matrix-related calculation onto the CPU. Results: Our multi-GPU-based implementation reaches a good plan within 1 minute. Although S1 was 10 seconds faster than our method, the obtained plan quality was worse. Both S2 and S3 handle the full DDC matrix and hence yield the same plan as our method; however, the computation times are longer, namely 4 minutes and 30 minutes, respectively. Conclusion: Our multi-GPU-based VMAT optimization effectively solves the limited-memory issue with good plan quality and high efficiency, making GPU-based ultra-fast VMAT planning practical for real clinical use.

  6. GPU-based cone beam computed tomography.

    Science.gov (United States)

    Noël, Peter B; Walczak, Alan M; Xu, Jinhui; Corso, Jason J; Hoffmann, Kenneth R; Schafer, Sebastian

    2010-06-01

    The use of cone beam computed tomography (CBCT) is growing in the clinical arena due to its ability to provide 3D information during interventions, its high diagnostic quality (sub-millimeter resolution), and its short scanning times (60 s). In many situations, the short scanning time of CBCT is followed by a time-consuming 3D reconstruction. The standard reconstruction algorithm for CBCT data is the filtered backprojection, which for a volume of size 256³ takes up to 25 min on a standard system. Recent developments in the area of Graphics Processing Units (GPUs) make it possible to access high-performance computing solutions at low cost, allowing their use in many scientific problems. We have implemented an algorithm for 3D reconstruction of CBCT data using the Compute Unified Device Architecture (CUDA) provided by NVIDIA (NVIDIA Corporation, Santa Clara, California), executed on an NVIDIA GeForce GTX 280. Our implementation reduces reconstruction times from minutes, and perhaps hours, to a matter of seconds, while also giving the clinician the ability to view 3D volumetric data at higher resolutions. We evaluated our implementation on ten clinical data sets and one phantom data set to observe whether differences occur between CPU- and GPU-based reconstructions. Using our approach, the computation time for a 256³ volume is reduced from 25 min on the CPU to 3.2 s on the GPU. The GPU reconstruction time for 512³ volumes is 8.5 s.
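
    The dominant cost in filtered backprojection is the backprojection itself, which maps naturally to one GPU thread per voxel. The sketch below is an illustrative CUDA kernel for accumulating a single projection view through a 3x4 projection matrix with nearest-neighbor fetch; it is a simplification of implementations like the paper's, which typically use texture hardware for bilinear interpolation.

```cuda
#include <cuda_runtime.h>

// Voxel-driven backprojection for one view: each thread projects its voxel
// center onto the detector and accumulates the filtered projection value.
// P is a 3x4 row-major projection matrix; nearest-neighbor fetch and the
// centered voxel coordinates are illustrative simplifications.
__global__ void backproject(float* vol, int nx, int ny, int nz,
                            const float* proj, int nu, int nv,
                            const float* P)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    int k = blockIdx.z * blockDim.z + threadIdx.z;
    if (i >= nx || j >= ny || k >= nz) return;

    float x = i - nx / 2.f, y = j - ny / 2.f, z = k - nz / 2.f;
    // Homogeneous projection of the voxel center onto the detector plane.
    float u = P[0] * x + P[1] * y + P[2]  * z + P[3];
    float v = P[4] * x + P[5] * y + P[6]  * z + P[7];
    float w = P[8] * x + P[9] * y + P[10] * z + P[11];
    int ui = (int)(u / w + 0.5f), vi = (int)(v / w + 0.5f);
    if (ui < 0 || vi < 0 || ui >= nu || vi >= nv) return;

    // FDK-style distance weighting ~ 1/w^2 for this matrix parameterization.
    vol[(k * ny + j) * nx + i] += proj[vi * nu + ui] / (w * w);
}
```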

  7. Ultrafast cone-beam CT scatter correction with GPU-based Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Yuan Xu

    2014-03-01

    Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to automatically finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate the MC noise in the simulated scatter images caused by the low photon numbers. The method is validated on a simulated head-and-neck case with 364 projection angles. Results: We examined the variation of the scatter signal among projection angles using Fourier analysis. It is found that scatter images at 31 angles are sufficient to restore those at all angles with < 0.1% error. For the simulated patient case with a resolution of 512 × 512 × 100, we simulated 5 × 10⁶ photons per angle. The total computation time is 20.52 seconds on an Nvidia GTX Titan GPU, and the time at each step is 2.53, 0.64, 14.78, 0.13, 0.19, and 2.25 seconds, respectively. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. It accomplishes the whole procedure of scatter correction and reconstruction within 30 seconds.

  8. SU-E-T-806: Very Fast GPU-Based IMPT Dose Computation

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, A; Brand, M [Mitsubishi Electric Research Lab, Cambridge, MA (United States)

    2015-06-15

    Purpose: Designing particle therapy treatment plans is a dosimetrist-in-the-loop optimization wherein the conflicting constraints of achieving a desired tumor dose distribution must be balanced against the need to minimize the dose to nearby OARs. IMPT introduces an additional, inner, numerical optimization step in which the dosimetrist's current set of constraints is used to determine the weighting of beam spots. Very fast dose calculations are needed to enable the dosimetrist to perform many iterations of the outer optimization in a commercially reasonable time. Methods: We have developed a GPU-based convolution-type dose computation algorithm that handles heterogeneities more accurately than earlier algorithms by redistributing energy from dose computed in a water volume. The depth dependence of the beam size is handled by pre-processing Bragg curves using a weighted superposition of Gaussian bases. Additionally, scattering, the orientation of treatment ports, and the non-parallel propagation of beams are handled by large, but sparse, energy-redistribution matrices that implement affine transforms. Results: We tested our algorithm using a brain tumor dataset with 1 mm voxels and a single treatment port from the patient's anterior through the sinuses. The resulting dose volume is 100 × 100 × 230 mm with 66,200 beam spots on a 3 × 3 × 2 mm grid. The dose computation takes <1 msec on a GeForce GTX Titan GPU, with a Gamma passing rate for the 2 mm/2% criterion of 99.1% compared to dose calculated by an alternative pencil-beam dose algorithm. We will present comparisons to Monte Carlo dose calculations. Conclusion: Our high-speed dose computation method enables the IMPT spot weights to be optimized in <1 second, resulting in a nearly instantaneous response to user changes to dose constraints. This permits the creation of higher quality plans by allowing the dosimetrist to evaluate more alternatives in a short period of time.

  9. Fast GPU-based spot extraction for energy-dispersive X-ray Laue diffraction

    International Nuclear Information System (INIS)

    Alghabi, F.; Schipper, U.; Kolb, A.; Send, S.; Abboud, A.; Pashniak, N.; Pietsch, U.

    2014-01-01

    This paper describes a novel method for fast online analysis of X-ray Laue spots taken by means of an energy-dispersive X-ray 2D detector. Current pnCCD detectors typically operate at some 100 Hz (up to a maximum of 400 Hz) and have a resolution of 384 × 384 pixels; future devices head for even higher pixel counts and frame rates. The proposed online data analysis is based on a computer utilizing multiple Graphics Processing Units (GPUs), which allow for fast and parallel data processing. Our multi-GPU based algorithm is compliant with the rules of stream-based data processing, for which GPUs are optimized. The paper's main contribution is therefore an alternative algorithm for the determination of spot positions and energies over the full sequence of pnCCD data frames. Furthermore, an improved background suppression algorithm is presented. The resulting system is able to process data at the maximum acquisition rate of 400 Hz. We present a detailed analysis of the spot positions and energies deduced from a prior (single-core) CPU-based and the novel GPU-based data processing, showing that the results computed in parallel using the GPU implementation are at least of the same quality as the prior CPU-based results. Furthermore, the GPU-based algorithm speeds up the data processing by a factor of 7 (in comparison to the single-core CPU-based algorithm), which effectively makes the detector system more suitable for online data processing.

  10. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: an introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, and use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working-set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs.

  11. Uneconomical top calculation method

    International Nuclear Information System (INIS)

    De Noord, M.; Van Sambeek, E.J.W.

    2003-08-01

    The methodology used to calculate the financial gap of renewable electricity sources and technologies is described. This methodology is used for calculating the production subsidy levels (MEP subsidies) for new renewable electricity projects in 2004 and 2005 in the Netherlands.

  12. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up a calculation process that usually takes a long time. Graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication with matrices of arbitrary size, because graph partitioning assumes a square and symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on the GPU (graphics processing unit).
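
    Once the hypergraph partitioner has assigned matrix rows to processors (and replicated the x entries each part needs), the per-partition kernel is an ordinary CSR sparse matrix-vector product. A minimal CUDA sketch with a simple one-thread-per-row mapping follows; the mapping is an illustrative choice, not necessarily the authors'.

```cuda
#include <cuda_runtime.h>

// Sparse matrix-vector product y = A*x for one partition's local CSR block.
// row_ptr/col_idx/val are the standard CSR arrays; the partitioner is assumed
// to have made every referenced x entry locally available.
__global__ void spmv_csr(int n_rows, const int* row_ptr, const int* col_idx,
                         const double* val, const double* x, double* y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n_rows) return;
    double sum = 0.0;
    for (int e = row_ptr[row]; e < row_ptr[row + 1]; ++e)
        sum += val[e] * x[col_idx[e]];
    y[row] = sum;
}

// Launch sketch: spmv_csr<<<(n_rows + 255) / 256, 256>>>(...);
```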

  13. WE-AB-204-11: Development of a Nuclear Medicine Dosimetry Module for the GPU-Based Monte Carlo Code ARCHER

    Energy Technology Data Exchange (ETDEWEB)

    Liu, T; Lin, H; Xu, X [Rensselaer Polytechnic Institute, Troy, NY (United States); Stabin, M [Vanderbilt Univ Medical Ctr, Nashville, TN (United States)

    2015-06-15

    Purpose: To develop a nuclear medicine dosimetry module for the GPU-based Monte Carlo code ARCHER. Methods: We have developed a nuclear medicine dosimetry module for the fast Monte Carlo code ARCHER. The coupled electron-photon Monte Carlo transport kernel included in ARCHER is built upon the Dose Planning Method code (DPM). The developed module manages the radioactive decay simulation by consecutively tracking several types of radiation on a per-disintegration basis using the statistical sampling method. Optimization techniques such as persistent threads and prefetching are studied and implemented. The developed module is verified against the VIDA code, which is based on the Geant4 toolkit and has previously been verified against OLINDA/EXM. A voxelized geometry is used in the preliminary test: a sphere made of ICRP soft tissue is surrounded by a box filled with water. A uniform activity distribution of I-131 is assumed in the sphere. Results: The self-absorption dose factors (mGy/MBq·s) of the sphere with varying diameters are calculated by ARCHER and VIDA, respectively. ARCHER's results are in agreement with VIDA's, which were obtained from a previous publication. VIDA takes hours of CPU time to finish the computation, while ARCHER takes 4.31 seconds for the 12.4-cm uniform-activity sphere case. For a fairer CPU-GPU comparison, more effort will be made to eliminate the algorithmic differences. Conclusion: The coupled electron-photon Monte Carlo code ARCHER has been extended to radioactive decay simulation for nuclear medicine dosimetry. The developed code exhibits good performance in our preliminary test. The GPU-based Monte Carlo code is developed with grant support from the National Institute of Biomedical Imaging and Bioengineering through an R01 grant (R01EB015478).

  14. GPU Based Software Correlators - Perspectives for VLBI2010

    Science.gov (United States)

    Hobiger, Thomas; Kimura, Moritaka; Takefuji, Kazuhiro; Oyama, Tomoaki; Koyama, Yasuhiro; Kondo, Tetsuro; Gotoh, Tadahiro; Amagai, Jun

    2010-01-01

    Caused by historical separation and driven by the requirements of the PC gaming industry, Graphics Processing Units (GPUs) have evolved into massively parallel processing systems that have entered the area of non-graphics applications. Although a single processing core on the GPU is much slower and provides less functionality than its counterpart on the CPU, the huge number of these small processing entities outperforms classical processors when the application can be parallelized. Thus, in recent years various radio astronomy projects have started to make use of this technology, either to realize the correlator on this platform or to establish the post-processing pipeline with GPUs. Therefore, the feasibility of GPUs as a choice for a VLBI correlator is investigated, including the pros and cons of this technology. Additionally, a GPU-based software correlator will be reviewed with respect to energy consumption per GFlop/sec and cost per GFlop/sec.

  15. GPU-based high-performance computing for radiation therapy

    International Nuclear Information System (INIS)

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

    Recent developments in radiotherapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous number of studies have been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we first give a brief introduction to the GPU hardware structure and programming model. We then review the current applications of the GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of the GPU with other platforms is also presented. (topical review)

  16. Development of parallel GPU based algorithms for problems in nuclear area

    International Nuclear Information System (INIS)

    Almeida, Adino Americo Heimlich

    2009-01-01

    Graphics Processing Units (GPUs) are high-performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of using GPUs for general-purpose computing, their application was extended to fields outside the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs on two problems typical of the nuclear area: the simulation of neutron transport by the Monte Carlo method, and the solution of the heat equation in a two-dimensional domain by the finite-difference method. To achieve this, we developed parallel algorithms for both GPU and CPU for the two problems described above. The comparison showed that the GPU-based approach is faster than the CPU on a computer with two quad-core processors, without loss of precision. (author)
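
    As an illustration of the second benchmark, one explicit finite-difference time step for the two-dimensional heat equation maps naturally onto one CUDA thread per grid point. This is a generic sketch of the usual five-point stencil, not the author's code; here r = alpha*dt/h^2 and stability of the explicit scheme requires r <= 0.25.

        // One explicit Jacobi step of u_t = alpha * (u_xx + u_yy) on an
        // nx-by-ny grid with fixed boundary values; r = alpha*dt/(h*h).
        __global__ void heat_step(int nx, int ny, float r,
                                  const float *u, float *u_new)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            int j = blockIdx.y * blockDim.y + threadIdx.y;
            if (i > 0 && i < nx - 1 && j > 0 && j < ny - 1) {
                int k = j * nx + i;
                u_new[k] = u[k] + r * (u[k - 1] + u[k + 1]
                                     + u[k - nx] + u[k + nx] - 4.0f * u[k]);
            }
        }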

  17. GPU-based high performance Monte Carlo simulation in neutron transport

    Energy Technology Data Exchange (ETDEWEB)

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Inteligencia Artificial Aplicada], e-mail: cmnap@ien.gov.br

    2009-07-01

    Graphics Processing Units (GPUs) are high-performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of using GPUs for general-purpose computing, their application was extended to fields outside the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in neutron transport simulation by the Monte Carlo method. To accomplish this, GPU- and CPU-based (single- and multi-core) approaches were developed and applied to a simple but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)

  18. GPU-based high performance Monte Carlo simulation in neutron transport

    International Nuclear Information System (INIS)

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A.

    2009-01-01

    Graphics Processing Units (GPUs) are high-performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of using GPUs for general-purpose computing, their application was extended to fields outside the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in neutron transport simulation by the Monte Carlo method. To accomplish this, GPU- and CPU-based (single- and multi-core) approaches were developed and applied to a simple but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)

  19. Methods for magnetostatic field calculation

    International Nuclear Information System (INIS)

    Vorozhtsov, S.B.

    1984-01-01

    Two methods for magnetostatic field calculation, differential and integral, are considered. Both approaches are shown to have certain merits and drawbacks; the choice of method depends on the type of problem being solved. The possibility of combining these two methods in one algorithm (a hybrid method) is considered.

  20. Cobalt: A GPU-based correlator and beamformer for LOFAR

    Science.gov (United States)

    Broekema, P. Chris; Mol, J. Jan David; Nijboer, R.; van Amesfoort, A. S.; Brentjens, M. A.; Loose, G. Marcel; Klijn, W. F. A.; Romein, J. W.

    2018-04-01

    For low-frequency radio astronomy, software correlation and beamforming on general purpose hardware is a viable alternative to custom designed hardware. LOFAR, a new-generation radio telescope centered in the Netherlands with international stations in Germany, France, Ireland, Poland, Sweden and the UK, has successfully used software real-time processors based on IBM Blue Gene technology since 2004. Since then, developments in technology have allowed us to build a system based on commercial off-the-shelf components that combines the same capabilities with lower operational cost. In this paper, we describe the design and implementation of a GPU-based correlator and beamformer with the same capabilities as the Blue Gene based systems. We focus on the design approach taken, and show the challenges faced in selecting an appropriate system. The design, implementation and verification of the software system show the value of a modern test-driven development approach. Operational experience, based on three years of operations, demonstrates that a general purpose system is a good alternative to the previous supercomputer-based system or custom-designed hardware.

  1. High-throughput GPU-based LDPC decoding

    Science.gov (United States)

    Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin

    2010-08-01

    Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems, such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance, flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
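
    The abstract gives no kernel-level detail; as a hedged sketch of the kind of work a GPU LDPC decoder parallelizes, the kernel below performs the check-node update of the min-sum approximation to the sum-product algorithm, one thread per check node. The CSR-style adjacency arrays (chk_ptr, v2c, c2v) are assumptions made for illustration.

        // Min-sum check-node update: each outgoing check-to-variable
        // message carries the product of the signs and the minimum
        // magnitude of all *other* incoming variable-to-check messages.
        __global__ void check_node_minsum(int n_checks, const int *chk_ptr,
                                          const float *v2c, float *c2v)
        {
            int c = blockIdx.x * blockDim.x + threadIdx.x;
            if (c >= n_checks) return;
            int beg = chk_ptr[c], end = chk_ptr[c + 1];
            float min1 = 1e30f, min2 = 1e30f, sign_prod = 1.0f;
            int argmin = -1;
            for (int e = beg; e < end; ++e) {
                float a = fabsf(v2c[e]);
                sign_prod *= (v2c[e] < 0.0f) ? -1.0f : 1.0f;
                if (a < min1) { min2 = min1; min1 = a; argmin = e; }
                else if (a < min2) { min2 = a; }
            }
            for (int e = beg; e < end; ++e) {
                float mag  = (e == argmin) ? min2 : min1;   // exclude own edge
                float sign = sign_prod * ((v2c[e] < 0.0f) ? -1.0f : 1.0f);
                c2v[e] = sign * mag;
            }
        }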

  2. Development of a GPU-based high-performance radiative transfer model for the Infrared Atmospheric Sounding Interferometer (IASI)

    International Nuclear Information System (INIS)

    Huang Bormin; Mielikainen, Jarno; Oh, Hyunjong; Allen Huang, Hung-Lung

    2011-01-01

    Satellite-observed radiance is a nonlinear functional of surface properties and of atmospheric temperature and absorbing-gas profiles, as described by the radiative transfer equation (RTE). In the era of hyperspectral sounders with thousands of high-resolution channels, the computation of the radiative transfer model becomes more time-consuming. The performance of the radiative transfer model in operational numerical weather prediction systems still limits the number of channels we can use in hyperspectral sounders to only a few hundred. To take full advantage of such high-resolution infrared observations, a computationally efficient radiative transfer model is needed to facilitate satellite data assimilation. In recent years the programmable commodity graphics processing unit (GPU) has evolved into a highly parallel, multi-threaded, many-core processor with tremendous computational speed and very high memory bandwidth. The radiative transfer model is very suitable for GPU implementation, taking advantage of the hardware's efficiency and parallelism, since the radiances of many channels can be calculated in parallel on the GPU. In this paper, we develop a GPU-based high-performance radiative transfer model for the Infrared Atmospheric Sounding Interferometer (IASI), launched in 2006 onboard METOP-A, the first of the European meteorological polar-orbiting satellites. Each IASI spectrum has 8461 spectral channels. The IASI radiative transfer model consists of three modules. The first module, computing the regression predictors, takes less than 0.004% of the CPU time, while the second module, for transmittance computation, and the third module, for radiance computation, take approximately 92.5% and 7.5%, respectively. Our GPU-based IASI radiative transfer model is developed to run on a low-cost personal supercomputer with four GPUs totalling 960 compute cores, delivering nearly 4 TFlops theoretical peak performance. By massively parallelizing the second and third modules, we reached a 364x speedup.

  3. MO-A-BRD-10: A Fast and Accurate GPU-Based Proton Transport Monte Carlo Simulation for Validating Proton Therapy Treatment Plans

    Energy Technology Data Exchange (ETDEWEB)

    Wan Chan Tseung, H; Ma, J; Beltran, C [Mayo Clinic, Rochester, MN (United States)

    2014-06-15

    Purpose: To build a GPU-based Monte Carlo (MC) simulation of proton transport with detailed modeling of elastic and non-elastic (NE) proton-nucleus interactions, for use in a very fast and cost-effective proton therapy treatment plan verification system. Methods: Using the CUDA framework, we implemented kernels for the following tasks: (1) simulation of beam spots from our possible scanning nozzle configurations, (2) proton propagation through CT geometry, taking into account nuclear elastic scattering and multiple scattering, as well as energy straggling, (3) Bertini-style modeling of the intranuclear cascade stage of NE interactions, and (4) simulation of nuclear evaporation. To validate our MC, we performed: (1) secondary particle yield calculations in NE collisions with therapeutically relevant nuclei, (2) pencil-beam dose calculations in homogeneous phantoms, and (3) a large number of treatment plan dose recalculations, comparing the results with Geant4.9.6p2/TOPAS. A workflow was devised for calculating plans from a commercially available treatment planning system, with scripts for reading DICOM files and generating inputs for our MC. Results: Yields, energy and angular distributions of secondaries from NE collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D gamma pass rate at 2%-2 mm for 70-230 MeV pencil-beam dose distributions in water, soft tissue, bone and Ti phantoms is 100%. The pass rate at 2%-2 mm for treatment plan calculations is typically above 98%. The net computational time on an NVIDIA GTX680 card, including all CPU-GPU data transfers, is around 20 s for 1×10⁷ proton histories. Conclusion: Our GPU-based proton transport MC is the first of its kind to include a detailed nuclear model to handle NE interactions on any nucleus. Dosimetric calculations demonstrate very good agreement with Geant4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil beam scanning treatment plans.

  4. Methods for Melting Temperature Calculation

    Science.gov (United States)

    Hong, Qi-Jun

    Melting temperature calculation has important applications in the theoretical study of phase diagrams and in computational materials screening. In this thesis, we present two new methods, the improved Widom's particle insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly. We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals that provide the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first principles without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results agree closely with experiment. We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computational cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in practical applications. The method serves as a promising approach for large-scale automated materials screening.
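
    For reference, the quantity at the heart of Widom's particle insertion method is the excess chemical potential, which in its standard textbook form reads

        \mu_{\mathrm{ex}} = -k_B T \,\ln \left\langle e^{-\Delta U / k_B T} \right\rangle_N ,

    where ΔU is the potential-energy change upon inserting a test particle into an N-particle configuration. The improvement described above amounts to evaluating this ensemble average efficiently by directing insertions toward cavities, where the Boltzmann factor dominates the average.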

  5. GPU-based fast pencil beam algorithm for proton therapy

    International Nuclear Information System (INIS)

    Fujimoto, Rintaro; Nagamine, Yoshihiko; Kurihara, Tsuneya

    2011-01-01

    The performance of a treatment planning system is an essential factor in making sophisticated plans. Dose calculation is a major time-consuming process in planning operations. The standard algorithm for proton dose calculation is the pencil beam algorithm, which produces relatively accurate results but is time consuming. In order to shorten the computational time, we have developed a GPU (graphics processing unit)-based pencil beam algorithm. We implemented this algorithm and calculated dose distributions for a water phantom. The results were compared to those obtained by a traditional method with respect to computational time and the discrepancy between the two methods. The new algorithm is 5-20 times faster on an NVIDIA GeForce GTX 480 card than on an Intel Core-i7 920 processor. The maximum discrepancy of the dose distribution is within 0.2%. Our results show that GPUs are effective for proton dose calculations.
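
    Although the abstract does not spell out the algorithm, pencil beam dose calculation is commonly written as a superposition of laterally Gaussian kernels weighted by a measured integrated depth dose; a generic form (not necessarily the exact formulation used here) is

        D(x, y, d) = \sum_i w_i \,\mathrm{IDD}_i(d)\,
                     \frac{1}{2\pi\sigma_i^2(d)}
                     \exp\!\left(-\frac{(x - x_i)^2 + (y - y_i)^2}{2\sigma_i^2(d)}\right),

    where d is depth, w_i the weight of pencil beam i and σ_i(d) its depth-dependent lateral spread. The sum over pencil beams is independent for each voxel, which is what makes the method amenable to GPU parallelization.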

  6. Haptic Feedback for the GPU-based Surgical Simulator

    DEFF Research Database (Denmark)

    Sørensen, Thomas Sangild; Mosegaard, Jesper

    2006-01-01

    The GPU has proven to be a powerful processor for computing spring-mass based surgical simulations. It has not previously been shown, however, how to effectively implement haptic interaction with a simulation running entirely on the GPU. This paper describes a method to calculate haptic feedback with limited performance cost. It allows easy balancing of the GPU workload between the calculations of simulation, visualisation, and haptic feedback.

  7. Friction and wear calculation methods

    CERN Document Server

    Kragelsky, I V; Kombalov, V S

    1981-01-01

    Friction and Wear: Calculation Methods provides an introduction to the main theories of a new branch of mechanics known as "contact interaction of solids in relative motion." This branch is closely bound up with other sciences, especially physics and chemistry. The book analyzes the nature of friction and wear, and some theoretical relationships that link the characteristics of the processes and the properties of the contacting bodies essential for practical application of the theories in calculating friction forces and wear values. The effect of the environment on friction and wear is also considered.

  8. Methods for calculating nonconcave entropies

    International Nuclear Information System (INIS)

    Touchette, Hugo

    2010-01-01

    Five different methods which can be used to analytically calculate entropies that are nonconcave as functions of the energy in the thermodynamic limit are discussed and compared. The five methods are based on the following ideas and techniques: (i) microcanonical contraction, (ii) metastable branches of the free energy, (iii) generalized canonical ensembles with specific illustrations involving the so-called Gaussian and Betrag ensembles, (iv) the restricted canonical ensemble, and (v) the inverse Laplace transform. A simple long-range spin model having a nonconcave entropy is used to illustrate each method
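
    The need for such methods can be stated in one line: the canonical free energy is the Legendre-Fenchel transform of the entropy,

        \varphi(\beta) = \inf_u \left[ \beta u - s(u) \right],

    and transforming back yields only the concave envelope of s(u). A nonconcave entropy therefore cannot be recovered from φ(β) alone, which is what motivates the five alternative routes listed above.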

  9. GRay: A MASSIVELY PARALLEL GPU-BASED CODE FOR RAY TRACING IN RELATIVISTIC SPACETIMES

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal [Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721 (United States)

    2013-11-01

    We introduce GRay, a massively parallel integrator designed to trace the trajectories of billions of photons in a curved spacetime. This graphics-processing-unit (GPU)-based integrator employs the stream processing paradigm, is implemented in CUDA C/C++, and runs on nVidia graphics cards. The peak performance of GRay using single-precision floating-point arithmetic on a single GPU exceeds 300 GFLOPS (or 1 ns per photon per time step). For a realistic problem, where the peak performance cannot be reached, GRay is two orders of magnitude faster than existing central-processing-unit-based ray-tracing codes. This performance enhancement allows more effective searches of large parameter spaces when comparing theoretical predictions of images, spectra, and light curves from the vicinities of compact objects to observations. GRay can also perform on-the-fly ray tracing within general relativistic magnetohydrodynamic algorithms that simulate accretion flows around compact objects. Making use of this algorithm, we calculate the properties of the shadows of Kerr black holes and the photon rings that surround them. We also provide accurate fitting formulae for their dependence on black hole spin and observer inclination, which can be used to interpret upcoming observations of the black holes at the center of the Milky Way and in M87 with the Event Horizon Telescope.

  10. An optimization of a GPU-based parallel wind field module

    International Nuclear Information System (INIS)

    Pinheiro, André L.S.; Schirru, Roberto

    2017-01-01

    Atmospheric radionuclide dispersion systems (ARDS) are important tools for predicting the impact of radioactive releases from nuclear power plants and for guiding the evacuation of people from affected areas. Four modules comprise an ARDS: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The slowest is the Wind Field module, which was previously parallelized using the CUDA C language. The stated purpose of this work is to show the speedup gained by optimizing the already parallel code of the GPU-based Wind Field module, which is based on the WEST model (Extrapolated from Stability and Terrain). Due to the parallelization done in the Wind Field module, it was observed that some CUDA processors became idle, contributing to a reduction in speedup. This work proposes a way of allocating these idle CUDA processors in order to increase the speedup. An acceleration of about 4 times can be seen in the comparative case study between the regular CUDA code and the optimized CUDA code. These results are quite motivating and point out that, even after the parallelization of a code, parallel code optimization should be taken into account. (author)

  11. An optimization of a GPU-based parallel wind field module

    Energy Technology Data Exchange (ETDEWEB)

    Pinheiro, André L.S.; Schirru, Roberto [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Pereira, Cláudio M.N.A., E-mail: apinheiro99@gmail.com, E-mail: schirru@lmp.ufrj.br, E-mail: cmnap@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    Atmospheric radionuclide dispersion systems (ARDS) are important tools for predicting the impact of radioactive releases from nuclear power plants and for guiding the evacuation of people from affected areas. Four modules comprise an ARDS: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The slowest is the Wind Field module, which was previously parallelized using the CUDA C language. The stated purpose of this work is to show the speedup gained by optimizing the already parallel code of the GPU-based Wind Field module, which is based on the WEST model (Extrapolated from Stability and Terrain). Due to the parallelization done in the Wind Field module, it was observed that some CUDA processors became idle, contributing to a reduction in speedup. This work proposes a way of allocating these idle CUDA processors in order to increase the speedup. An acceleration of about 4 times can be seen in the comparative case study between the regular CUDA code and the optimized CUDA code. These results are quite motivating and point out that, even after the parallelization of a code, parallel code optimization should be taken into account. (author)

  12. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Liu Li

    2013-01-01

    Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively little work has been done on 3D ultrasound speckle suppression, where the whole volume rather than just one frame needs to be considered. The most crucial problem with 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM) filter provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit (GPU) based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma-distribution noise model, which is able to reliably capture the image statistics of log-compressed ultrasound images, is used for the 3D block-wise NLM filter within a Bayesian framework. The most significant aspect of our method is the adoption of the powerful data-parallel computing capability of the GPU to improve overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm.
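
    For orientation, the classical nonlocal means estimate restores a voxel as a weighted average over similar patches,

        NL[u](i) = \sum_{j} w(i, j)\, u(j), \qquad
        w(i, j) = \frac{1}{Z(i)} \exp\!\left(-\frac{\| u(N_i) - u(N_j) \|_2^2}{h^2}\right),

    where N_i is the patch around voxel i, h a filtering parameter and Z(i) a normalizing constant. This Gaussian-noise form is shown only for illustration; the work above replaces it with weights derived from a Gamma speckle model in a Bayesian framework. The weight computations for different voxels are independent of one another, which is why the filter parallelizes so well on the GPU.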

  13. TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Y [UT Southwestern Medical Center, Dallas, TX (United States); Southern Medical University, Guangzhou (China); Bai, T [UT Southwestern Medical Center, Dallas, TX (United States); Xi' an Jiaotong University, Xi' an (China); Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Zhou, L [Southern Medical University, Guangzhou (China)

    2014-06-15

    Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to automatically finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using the raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to the other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate the MC scatter noise caused by the low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We studied the impact of photon histories and volume down-sampling factors on the accuracy of scatter estimation. Fourier analysis showed that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01).

  14. The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran

    Directory of Open Access Journals (Sweden)

    Hamed Kargaran

    2016-04-01

    The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo-random number generator (GPPRNG) is proposed for use in high-performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL_MODE and SHARED_MODE. To generate parallel random numbers based on the independent sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, is employed. Implementation of our developed PPRNG on a single GPU showed speedups of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL_MODE and SHARED_MODE, respectively. To evaluate the accuracy of our developed GPPRNG, its performance was compared to that of some other commercially available PPRNGs, such as those of MATLAB, FORTRAN and the Miller-Park algorithm, using specific standard tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
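
    Of the generators combined above, the Xorshift core is simple enough to sketch. Below is Marsaglia's xorshift32 as a per-thread CUDA device function with independently seeded states; this is a minimal illustration only, since the paper's GPPRNG additionally mixes in the middle-square method and a chaotic map and distinguishes global- and shared-memory variants.

        // Marsaglia xorshift32: each thread advances its own 32-bit state.
        __device__ unsigned int xorshift32(unsigned int *state)
        {
            unsigned int x = *state;
            x ^= x << 13;
            x ^= x >> 17;
            x ^= x << 5;
            *state = x;
            return x;
        }

        // Fill out[] with uniform floats in (0,1); states[] holds one
        // nonzero seed per thread (the seeding scheme is an assumption here).
        __global__ void fill_uniform(unsigned int *states, float *out, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                out[i] = xorshift32(&states[i]) * 2.3283064365386963e-10f; // 1/2^32
        }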

  15. The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran

    Energy Technology Data Exchange (ETDEWEB)

    Kargaran, Hamed, E-mail: h-kargaran@sbu.ac.ir; Minuchehr, Abdolhamid; Zolfaghari, Ahmad [Department of nuclear engineering, Shahid Behesti University, Tehran, 1983969411 (Iran, Islamic Republic of)

    2016-04-15

    The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo-random number generator (GPPRNG) has been proposed for use in high-performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL-MODE and SHARED-MODE. To generate parallel random numbers based on the independent sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, has been employed. Implementation of our developed PPRNG on a single GPU showed speedups of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL-MODE and SHARED-MODE, respectively. To evaluate the accuracy of our developed GPPRNG, its performance was compared to that of some other commercially available PPRNGs, such as those of MATLAB, FORTRAN and the Miller-Park algorithm, using specific standard tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.

  16. Calculational methods for lattice cells

    International Nuclear Information System (INIS)

    Askew, J.R.

    1980-01-01

    At the current stage of development, direct simulation of all the processes involved in the reactor to the degree of accuracy required is not an economic proposition; the required accuracy is instead achieved by progressive synthesis of models for parts of the full space/angle/energy neutron behaviour. The split between reactor and lattice calculations is one such simplification. Most reactors are constructed of repetitions of similar geometric units, the fuel elements, having broadly similar properties. Thus the provision of detailed predictions of their behaviour is an important step towards overall modelling. We shall be dealing with these lattice methods in this series of lectures, but will refer back from time to time to their relationship with the overall reactor calculation. The lattice cell is itself composed of somewhat similar sub-units, the fuel pins, and its treatment will often rely upon a further breakdown of the modelling. Construction of a good model depends upon the identification, on physical and mathematical grounds, of the most helpful division of the calculation at this level.

  17. SU-E-T-500: Initial Implementation of GPU-Based Particle Swarm Optimization for 4D IMRT Planning in Lung SBRT

    International Nuclear Information System (INIS)

    Modiri, A; Hagan, A; Gu, X; Sawant, A

    2015-01-01

    Purpose: 4D-IMRT planning, combined with dynamic MLC tracking delivery, utilizes the temporal dimension as an additional degree of freedom to achieve improved OAR-sparing. The computational complexity of such optimization increases exponentially with dimensionality. In order to accomplish this task in a clinically feasible time frame, we present an initial implementation of GPU-based 4D-IMRT planning based on particle swarm optimization (PSO). Methods: The target and normal structures were manually contoured on ten phases of a 4DCT scan of an NSCLC patient with a 54 cm³ right-lower-lobe tumor (1.5 cm motion). Ten corresponding 3D-IMRT plans were created in the Eclipse treatment planning system (Ver-13.6). A vendor-provided scripting interface was used to export 3D-dose matrices corresponding to each control point (10 phases × 9 beams × 166 control points = 14,940), which served as input to PSO. The optimization task was to iteratively adjust the weights of each control point and scale the corresponding dose matrices. In order to handle the large amount of data in GPU memory, dose matrices were sparsified and placed in contiguous memory blocks with the 14,940 weight variables. PSO was implemented on CPU (dual-Xeon, 3.1GHz) and GPU (dual-K20 Tesla, 2496 cores, 3.52 Tflops each) platforms. NiftyReg, an open-source deformable image registration package, was used to calculate the summed dose. Results: The 4D-PSO plan yielded PTV coverage comparable to the clinical ITV-based plan and significantly higher OAR-sparing, as follows: lung Dmean=33%; lung V20=27%; spinal cord Dmax=26%; esophagus Dmax=42%; heart Dmax=0%; heart Dmean=47%. The GPU-PSO processing time for 14,940 variables and 7 PSO particles was 41% that of CPU-PSO (199 vs. 488 minutes). Conclusion: Truly 4D-IMRT planning can yield significant OAR dose-sparing while preserving PTV coverage. The corresponding optimization problem is large-scale, non-convex and computationally rigorous.
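
    The core of PSO is an embarrassingly parallel update of each optimization variable, which is why the 14,940 control-point weights map well to the GPU. A hedged sketch of the canonical velocity/position update follows; it is not the authors' implementation, and the array layouts and the source of the random numbers r1, r2 are assumptions.

        // Canonical PSO update for one particle's n-dimensional position x
        // and velocity v; w is inertia, c1/c2 the cognitive and social
        // weights, r1/r2 per-dimension uniform random numbers in [0,1].
        __global__ void pso_update(int n, float w, float c1, float c2,
                                   const float *r1, const float *r2,
                                   const float *pbest, const float *gbest,
                                   float *x, float *v)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) {
                v[i] = w * v[i]
                     + c1 * r1[i] * (pbest[i] - x[i])
                     + c2 * r2[i] * (gbest[i] - x[i]);
                x[i] += v[i];
            }
        }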

  18. SU-E-T-500: Initial Implementation of GPU-Based Particle Swarm Optimization for 4D IMRT Planning in Lung SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Modiri, A; Hagan, A; Gu, X; Sawant, A [UT Southwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    Purpose: 4D-IMRT planning, combined with dynamic MLC tracking delivery, utilizes the temporal dimension as an additional degree of freedom to achieve improved OAR-sparing. The computational complexity of such optimization increases exponentially with dimensionality. In order to accomplish this task in a clinically feasible time frame, we present an initial implementation of GPU-based 4D-IMRT planning based on particle swarm optimization (PSO). Methods: The target and normal structures were manually contoured on ten phases of a 4DCT scan of an NSCLC patient with a 54 cm³ right-lower-lobe tumor (1.5 cm motion). Ten corresponding 3D-IMRT plans were created in the Eclipse treatment planning system (Ver-13.6). A vendor-provided scripting interface was used to export 3D-dose matrices corresponding to each control point (10 phases × 9 beams × 166 control points = 14,940), which served as input to PSO. The optimization task was to iteratively adjust the weights of each control point and scale the corresponding dose matrices. In order to handle the large amount of data in GPU memory, dose matrices were sparsified and placed in contiguous memory blocks with the 14,940 weight variables. PSO was implemented on CPU (dual-Xeon, 3.1GHz) and GPU (dual-K20 Tesla, 2496 cores, 3.52 Tflops each) platforms. NiftyReg, an open-source deformable image registration package, was used to calculate the summed dose. Results: The 4D-PSO plan yielded PTV coverage comparable to the clinical ITV-based plan and significantly higher OAR-sparing, as follows: lung Dmean=33%; lung V20=27%; spinal cord Dmax=26%; esophagus Dmax=42%; heart Dmax=0%; heart Dmean=47%. The GPU-PSO processing time for 14,940 variables and 7 PSO particles was 41% that of CPU-PSO (199 vs. 488 minutes). Conclusion: Truly 4D-IMRT planning can yield significant OAR dose-sparing while preserving PTV coverage. The corresponding optimization problem is large-scale, non-convex and computationally rigorous.

  19. GPU based numerical simulation of core shooting process

    Directory of Open Access Journals (Sweden)

    Yi-zhong Zhang

    2017-11-01

    The core shooting process is the most widely used technique for making sand cores and plays an important role in the quality of sand cores. Although numerical simulation can potentially optimize the core shooting process, research on its numerical simulation is very limited. Based on a two-fluid model (TFM) and a kinetic-friction constitutive correlation, a program for 3D numerical simulation of the core shooting process was developed and achieved good agreement with in-situ experiments. To meet the needs of engineering applications, a graphics processing unit (GPU) was also used to improve calculation efficiency. The parallel algorithm, based on the Compute Unified Device Architecture (CUDA) platform, can significantly decrease computing time on a multi-threaded GPU. In this work, the program accelerated by the CUDA parallelization method was developed, and the accuracy of the calculations was ensured by comparison with in-situ experimental results photographed by a high-speed camera. The design and optimization of the parallel algorithm are discussed. The simulation result of a sand core test-piece indicated the improvement in calculation efficiency from the GPU. The developed program has also been validated by in-situ experiments with a transparent core-box, a high-speed camera, and a pressure measuring system. The computing time of the parallel program was reduced by nearly 95% while the simulation result remained quite consistent with experimental data. The GPU parallelization method successfully solves the problem of the low computational efficiency of the 3D sand shooting simulation program, making the developed GPU program appropriate for engineering applications.

  20. GPU-based simulation of optical propagation through turbulence for active and passive imaging

    Science.gov (United States)

    Monnier, Goulven; Duval, François-Régis; Amram, Solène

    2014-10-01

    IMOTEP is a GPU-based (Graphics Processing Units) software package relying on a fast parallel implementation of Fresnel diffraction through successive phase screens. Its applications include active imaging, laser telemetry and passive imaging through turbulence with anisoplanatic spatial and temporal fluctuations. Thanks to the parallel implementation on GPU, speedups ranging from 40x to 70x are achieved. The present paper gives a brief overview of the IMOTEP models, algorithms, implementation and user interface. It then focuses on major improvements recently brought to the anisoplanatic imaging simulation method. Previously, we took advantage of the computational power offered by the GPU to develop a simulation method based on large series of deterministic realisations of the PSF distorted by turbulence. The phase-screen propagation algorithm, by reproducing the higher moments of the incident wavefront distortion, provides realistic PSFs. However, we first used a coarse Gaussian model to fit the numerical PSFs and characterise their spatial statistics through only three parameters (the two-dimensional displacement of the centroid and the width). This approach was unable to reproduce the effects related to the details of the PSF structure, especially the 'speckles' leading to prominent high-frequency content in short-exposure images. To overcome this limitation, we recently implemented a new empirical model of the PSF, based on Principal Component Analysis (PCA), intended to capture most of the PSF's complexity. The GPU implementation allows the numerous (up to several hundred) principal components typically required under the strong-turbulence regime to be estimated and handled efficiently. A first, demanding computational step involves PCA, phase-screen propagation and covariance estimation. In a second step, realistic instantaneous images, fully accounting for anisoplanatic effects, are quickly generated. Preliminary results are presented.
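
    The propagation engine referred to above is the standard split-step phase-screen scheme: between screens the field is advanced with the paraxial (Fresnel) angular-spectrum transfer function, and at each screen it acquires the turbulent phase φ. In a generic form (constant phase factors and sign conventions vary between references),

        U(x, y, z + \Delta z) = \mathcal{F}^{-1}\!\left\{
            \mathcal{F}\!\left\{ U(x, y, z)\, e^{i\varphi(x, y)} \right\}
            \, e^{-i \pi \lambda \Delta z \,(f_x^2 + f_y^2)} \right\},

    where F denotes the two-dimensional Fourier transform over (x, y). The FFTs dominating this scheme are precisely the operations GPUs accelerate best.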

  1. Calculation methods in program CCRMN

    Energy Technology Data Exchange (ETDEWEB)

    Chonghai, Cai [Nankai Univ., Tianjin (China). Dept. of Physics; Qingbiao, Shen [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    CCRMN is a program for calculating complex reactions of a medium-heavy nucleus with six light particles. In CCRMN, the incoming particles can be neutrons, protons, ⁴He, deuterons, tritons and ³He. The CCRMN code is constructed within the framework of the optical model, pre-equilibrium statistical theory based on the exciton model, and the evaporation model. CCRMN is valid in the 1~ MeV energy region; it gives correct results for optical-model quantities and all kinds of reaction cross sections. This program has been applied in practical calculations and has produced reasonable results.

  2. State-of-the-Art in GPU-Based Large-Scale Volume Visualization

    KAUST Repository

    Beyer, Johanna

    2015-05-01

    This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera- and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. 'output-sensitive' algorithms and system designs. This leads to recent output-sensitive approaches that are 'ray-guided', 'visualization-driven' or 'display-aware'. In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks-the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context-the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey. © 2015 The Eurographics Association and John Wiley & Sons Ltd.

  3. State-of-the-Art in GPU-Based Large-Scale Volume Visualization

    KAUST Repository

    Beyer, Johanna; Hadwiger, Markus; Pfister, Hanspeter

    2015-01-01

    This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera- and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. 'output-sensitive' algorithms and system designs. This leads to recent output-sensitive approaches that are 'ray-guided', 'visualization-driven' or 'display-aware'. In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks-the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context-the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey. © 2015 The Eurographics Association and John Wiley & Sons Ltd.

  4. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation

    International Nuclear Information System (INIS)

    Jia Xun; Lou Yifei; Li Ruijiang; Song, William Y.; Jiang, Steve B.

    2010-01-01

    Purpose: Cone-beam CT (CBCT) plays an important role in image-guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients, who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. Methods: The CBCT is reconstructed by minimizing an energy functional consisting of a data-fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. Results: It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mAs/projection. Compared with the currently widely used full-fan head and neck scanning protocol of ~360 projections with 0.4 mAs/projection, it is estimated that an overall 36-72 times dose reduction has been achieved with this fast CBCT reconstruction algorithm. Conclusions: This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computational efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
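
    In the notation of the abstract, the reconstruction solves a problem of the form

        \min_x \; \tfrac{1}{2} \| A x - b \|_2^2 + \lambda \, \mathrm{TV}(x),

    where A is the projection operator, b the measured projections and TV the total variation seminorm. One forward-backward splitting iteration alternates a gradient step on the fidelity term with a proximal step on the regularizer,

        x^{k+1} = \mathrm{prox}_{\tau \lambda \mathrm{TV}} \!\left( x^k - \tau A^{T} (A x^k - b) \right),

    with step size τ. This generic form is given for orientation only and omits the paper's multigrid and GPU-specific details.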

  5. Methods in nuclear reactors calculations

    International Nuclear Information System (INIS)

    Velarde, G.

    1966-01-01

    Studies are made of the neutron transport equation corresponding to real and virtual reactors, as well as of the starting hypotheses. Methods are developed to solve the transport equation in slab geometry in the P_l, B_l, M_l and S_n (discrete ordinates) approximations. (Author)

  6. GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    Science.gov (United States)

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2018-01-01

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.

  7. A Performance/Cost Evaluation for a GPU-Based Drug Discovery Application on Volunteer Computing

    Science.gov (United States)

    Guerrero, Ginés D.; Imbernón, Baldomero; García, José M.

    2014-01-01

    Bioinformatics is an interdisciplinary research field that develops tools for the analysis of large biological databases, and thus the use of high performance computing (HPC) platforms is mandatory for the generation of useful biological knowledge. The latest generation of graphics processing units (GPUs) has democratized the use of HPC, as they push desktop computers to cluster-level performance. Many applications within this field have been developed to leverage these powerful and low-cost architectures. However, these applications still need to scale to larger GPU-based systems to enable remarkable advances in the fields of healthcare, drug discovery, genome research, etc. The inclusion of GPUs in HPC systems exacerbates power and temperature issues, increasing the total cost of ownership (TCO). This paper explores the benefits of volunteer computing for scaling bioinformatics applications, as an alternative to owning large GPU-based local infrastructures. As a benchmark, we use a GPU-based drug discovery application called BINDSURF, whose computational requirements go beyond those of a single desktop machine. Volunteer computing is presented as a cheap and valid HPC system for those bioinformatics applications that need to process huge amounts of data and where the response time is not a critical factor. PMID:25025055

  8. A Performance/Cost Evaluation for a GPU-Based Drug Discovery Application on Volunteer Computing

    Directory of Open Access Journals (Sweden)

    Ginés D. Guerrero

    2014-01-01

    Bioinformatics is an interdisciplinary research field that develops tools for the analysis of large biological databases, and thus the use of high performance computing (HPC) platforms is mandatory for the generation of useful biological knowledge. The latest generation of graphics processing units (GPUs) has democratized the use of HPC, as they push desktop computers to cluster-level performance. Many applications within this field have been developed to leverage these powerful and low-cost architectures. However, these applications still need to scale to larger GPU-based systems to enable remarkable advances in the fields of healthcare, drug discovery, genome research, etc. The inclusion of GPUs in HPC systems exacerbates power and temperature issues, increasing the total cost of ownership (TCO). This paper explores the benefits of volunteer computing for scaling bioinformatics applications, as an alternative to owning large GPU-based local infrastructures. As a benchmark, we use a GPU-based drug discovery application called BINDSURF, whose computational requirements go beyond those of a single desktop machine. Volunteer computing is presented as a cheap and valid HPC system for those bioinformatics applications that need to process huge amounts of data and where the response time is not a critical factor.

  9. Pile Load Capacity – Calculation Methods

    Directory of Open Access Journals (Sweden)

    Wrana Bogumił

    2015-12-01

    The article is a review of current problems in foundation-pile capacity calculations. It considers the main principles of pile capacity calculation presented in Eurocode 7, together with other methods, with adequate explanations. Two main methods are presented: the α-method, used to calculate the short-term load capacity of piles in cohesive soils, and the β-method, used to calculate the long-term load capacity of piles in both cohesive and cohesionless soils. Moreover, methods based on CPTu cone penetration test results are presented, as well as the pile capacity problem based on static load tests.
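
    In their usual textbook form (shown here for orientation, with symbols as commonly defined rather than as in any one design code), the ultimate pile capacity splits into shaft and base resistance,

        Q_{\mathrm{ult}} = Q_s + Q_b = \sum_i f_{s,i} A_{s,i} + q_b A_b ,

    with the unit shaft resistance f_s taken as f_s = α·s_u in the α-method (s_u the undrained shear strength of the cohesive soil) and f_s = β·σ'_v in the β-method (σ'_v the vertical effective stress, with β = K·tanδ).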

  10. Fast-GPU-PCC: A GPU-Based Technique to Compute Pairwise Pearson's Correlation Coefficients for Time Series Data-fMRI Study.

    Science.gov (United States)

    Eslami, Taban; Saeed, Fahad

    2018-04-20

    Functional magnetic resonance imaging (fMRI) is a non-invasive brain imaging technique that has been regularly used for studying the brain's functional activities in the past few years. A very widely used measure for capturing functional associations in the brain is Pearson's correlation coefficient. Pearson's correlation is widely used for constructing functional networks and studying the dynamic functional connectivity of the brain. These are useful measures for understanding the effects of brain disorders on connectivities among brain regions. fMRI scanners produce a huge number of voxels, and using traditional central processing unit (CPU)-based techniques for computing pairwise correlations is very time consuming, especially when large numbers of subjects are being studied. In this paper, we propose a graphics processing unit (GPU)-based algorithm called Fast-GPU-PCC for computing pairwise Pearson's correlation coefficients. Based on the symmetric property of Pearson's correlation, this approach returns the N(N−1)/2 correlation coefficients located in the strictly upper triangular part of the correlation matrix. Storing correlations in a one-dimensional array in the order proposed in this paper is useful for further processing. Our experiments on real and synthetic fMRI data for different numbers of voxels and varying lengths of time series show that the proposed approach outperforms state-of-the-art GPU-based techniques as well as sequential CPU-based versions. We show that Fast-GPU-PCC runs 62 times faster than the CPU-based version and about 2 to 3 times faster than two other state-of-the-art GPU-based methods.
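
    To make the flattened upper-triangle storage concrete, the sketch below computes pairwise correlations of z-normalized time series with one thread per (i, j) pair, writing each coefficient to its row-major upper-triangle slot. This is an unoptimized illustration of the output layout, not the Fast-GPU-PCC algorithm itself.

        // X holds n z-normalized time series of length T (row-major), so
        // r_ij reduces to a scaled dot product. Result array r has
        // n*(n-1)/2 entries: the strictly upper triangle, row-major.
        __global__ void pairwise_pcc(int n, int T, const float *X, float *r)
        {
            int i = blockIdx.y * blockDim.y + threadIdx.y;
            int j = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n && j < n && j > i) {
                float dot = 0.0f;
                for (int t = 0; t < T; ++t)
                    dot += X[i * T + t] * X[j * T + t];
                int k = i * (n - 1) - i * (i - 1) / 2 + (j - i - 1);
                r[k] = dot / T;
            }
        }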

  11. Normal estimation for pointcloud using GPU based sparse tensor voting

    OpenAIRE

    Liu , Ming; Pomerleau , François; Colas , Francis; Siegwart , Roland

    2012-01-01

    Normal estimation is the basis for most applications using point clouds, such as segmentation. However, it is still a challenging problem regarding computational complexity and observation noise. In this paper, we propose a normal estimation method for point clouds using results from tensor voting. Compared with other approaches, we show that it has a smaller estimation error. Moreover, by varying the voting kernel size, we find it is a flexible approach for structure extraction.

  12. GPU based Monte Carlo for PET image reconstruction: parameter optimization

    International Nuclear Information System (INIS)

    Cserkaszky, Á; Légrády, D.; Wirth, A.; Bükki, T.; Patay, G.

    2011-01-01

    This paper presents the optimization of a fully Monte Carlo (MC) based iterative image reconstruction of Positron Emission Tomography (PET) measurements. With our MC reconstruction method, all the physical effects in a PET system are taken into account, and thus superior image quality is achieved in exchange for increased computational effort. The method is feasible because we utilize the enormous processing power of Graphics Processing Units (GPUs) to solve the inherently parallel problem of photon transport. The MC approach regards the simulated positron decays as samples in the mathematical sums required in the iterative reconstruction algorithm, so, to complement the fast architecture, our optimization work focuses on the number of simulated positron decays required to obtain sufficient image quality. We have achieved significant results in determining the optimal number of samples for arbitrary measurement data; this allows the best image quality to be achieved with the least possible computational effort. Based on this research, recommendations can be given for the effective partitioning of computational effort into the iterations of time-limited reconstructions. (author)

  13. Fully iterative scatter corrected digital breast tomosynthesis using GPU-based fast Monte Carlo simulation and composition ratio update

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr [Bio Imaging and Signal Processing Laboratory, Department of Bio and Brain Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Lee, Taewon; Cho, Seungryong [Medical Imaging and Radiotherapeutics Laboratory, Department of Nuclear and Quantum Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Seong, Younghun; Lee, Jongha; Jang, Kwang Eun [Samsung Advanced Institute of Technology, Samsung Electronics, 130, Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 443-803 (Korea, Republic of); Choi, Jaegu; Choi, Young Wook [Korea Electrotechnology Research Institute (KERI), 111, Hanggaul-ro, Sangnok-gu, Ansan-si, Gyeonggi-do, 426-170 (Korea, Republic of); Kim, Hak Hee; Shin, Hee Jung; Cha, Joo Hee [Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro, 43-gil, Songpa-gu, Seoul, 138-736 (Korea, Republic of)

    2015-09-15

    […] accurate under a variety of conditions. Our GPU-based fast MCS implementation took approximately 3 s to generate each angular projection for a 6 cm thick breast, which is believed to make this process acceptable for clinical applications. In addition, the clinical preferences of three radiologists were evaluated; the preference for the proposed method over the convolution-based method was statistically significant (p < 0.05, McNemar test). Conclusions: The proposed fully iterative scatter correction method and the GPU-based fast MCS using tissue-composition-ratio estimation successfully improved the image quality within a reasonable computational time, which may potentially increase the clinical utility of DBT.

  14. TU-AB-202-05: GPU-Based 4D Deformable Image Registration Using Adaptive Tetrahedral Mesh Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Zhong, Z; Zhuang, L [Wayne State University, Detroit, MI (United States); Gu, X; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States); Chen, H; Zhen, X [Southern Medical University, Guangzhou, Guangdong (China)

    2016-06-15

    Purpose: Deformable image registration (DIR) is employed today as an automated and effective segmentation method to transfer tumor or organ contours from the planning image to daily images, instead of manual segmentation. However, the computational time and accuracy of current DIR approaches are still insufficient for online adaptive radiation therapy (ART), which requires real-time, high-quality image segmentation, especially on large 4D-CT datasets. The objective of this work is to propose a new DIR algorithm with fast computational speed and high accuracy by using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the adaptive tetrahedral mesh based on the image features of a reference phase of the 4D-CT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. Subsequently, the deformation vector fields (DVF) of the other phases of the 4D-CT can be obtained by matching each phase of the target 4D-CT images with the corresponding deformed reference phase. The proposed 4D DIR method is implemented on a GPU, significantly increasing computational efficiency through parallel computing. Results: A 4D NCAT digital phantom was used to test the efficiency and accuracy of our method. Both the image and DVF results show that the fine structures and shapes of the lung are well preserved and the tumor position is well captured, with a 3D distance error of 1.14 mm. Compared to a previous voxel-based CPU implementation of DIR, such as demons, the proposed method is about 160x faster for registering a 10-phase 4D-CT with a phase dimension of 256×256×150. Conclusion: The proposed 4D DIR method uses feature-based meshing and GPU-based parallelism, demonstrating the capability to compute both high-quality image and motion results with a significant improvement in computational speed.
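
    As a minimal sketch of the "diffusion" of the deformation from mesh vertices to image voxels, the CUDA kernel below interpolates per-vertex displacements to each voxel with barycentric weights, assuming a preprocessing step has already assigned every voxel to its enclosing tetrahedron. All array names and layouts are illustrative assumptions, not the authors' implementation.

        #include <cuda_runtime.h>

        // Diffuse mesh-vertex displacements to image voxels: each voxel knows
        // its enclosing tetrahedron (tetOfVoxel) and its barycentric weights
        // (bary); tetVerts holds the 4 vertex indices of each tetrahedron.
        __global__ void interpolateDVF(const int*    tetOfVoxel, // [nVoxels]
                                       const float4* bary,       // [nVoxels]
                                       const int4*   tetVerts,   // [nTets]
                                       const float3* vertexDisp, // [nVerts] DVF at vertices
                                       float3*       voxelDisp,  // [nVoxels] output DVF
                                       int nVoxels)
        {
            int v = blockIdx.x * blockDim.x + threadIdx.x;
            if (v >= nVoxels) return;
            int4   t = tetVerts[tetOfVoxel[v]];
            float4 w = bary[v];
            float3 a = vertexDisp[t.x], b = vertexDisp[t.y];
            float3 c = vertexDisp[t.z], d = vertexDisp[t.w];
            // Barycentric interpolation of the four vertex displacements.
            voxelDisp[v] = make_float3(
                w.x * a.x + w.y * b.x + w.z * c.x + w.w * d.x,
                w.x * a.y + w.y * b.y + w.z * c.y + w.w * d.y,
                w.x * a.z + w.y * b.z + w.z * c.z + w.w * d.z);
        }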

  15. TU-AB-202-05: GPU-Based 4D Deformable Image Registration Using Adaptive Tetrahedral Mesh Modeling

    International Nuclear Information System (INIS)

    Zhong, Z; Zhuang, L; Gu, X; Wang, J; Chen, H; Zhen, X

    2016-01-01

    Purpose: Deformable image registration (DIR) is employed today as an automated and effective segmentation method to transfer tumor or organ contours from the planning image to daily images, instead of manual segmentation. However, the computational time and accuracy of current DIR approaches are still insufficient for online adaptive radiation therapy (ART), which requires real-time, high-quality image segmentation, especially on large 4D-CT datasets. The objective of this work is to propose a new DIR algorithm with fast computational speed and high accuracy by using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the adaptive tetrahedral mesh based on the image features of a reference phase of the 4D-CT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. Subsequently, the deformation vector fields (DVF) of the other phases of the 4D-CT can be obtained by matching each phase of the target 4D-CT images with the corresponding deformed reference phase. The proposed 4D DIR method is implemented on a GPU, significantly increasing computational efficiency through parallel computing. Results: A 4D NCAT digital phantom was used to test the efficiency and accuracy of our method. Both the image and DVF results show that the fine structures and shapes of the lung are well preserved and the tumor position is well captured, with a 3D distance error of 1.14 mm. Compared to a previous voxel-based CPU implementation of DIR, such as demons, the proposed method is about 160x faster for registering a 10-phase 4D-CT with a phase dimension of 256×256×150. Conclusion: The proposed 4D DIR method uses feature-based meshing and GPU-based parallelism, demonstrating the capability to compute both high-quality image and motion results with a significant improvement in computational speed.

  16. GPU-based real-time trinocular stereo vision

    Science.gov (United States)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which can benefit applications like distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse the disparities computed in different directions, after various image processing steps, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
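
    The winner-take-all fusion step can be sketched as a per-pixel CUDA kernel: given matching-cost volumes from the horizontal and vertical camera pairs, each thread picks the disparity with the lowest combined cost. The cost-volume layout and names are assumptions for illustration, not the authors' code.

        #include <cuda_runtime.h>

        // Winner-take-all fusion for trinocular stereo: per pixel, combine the
        // matching costs of the two camera pairs and keep the cheapest disparity.
        __global__ void wtaFusion(const float* costH,       // [nDisp][h][w] horizontal pair
                                  const float* costV,       // [nDisp][h][w] vertical pair
                                  unsigned char* disparity, // [h][w] output
                                  int w, int h, int nDisp)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x >= w || y >= h) return;
            int pix = y * w + x, plane = w * h;
            float best = 1e30f; int bestD = 0;
            for (int d = 0; d < nDisp; ++d) {
                float c = costH[d * plane + pix] + costV[d * plane + pix];
                if (c < best) { best = c; bestD = d; }   // winner takes all
            }
            disparity[pix] = (unsigned char)bestD;
        }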

  17. GPU-based parallel computing in real-time modeling of atmospheric transport and diffusion of radioactive material

    International Nuclear Information System (INIS)

    Santos, Marcelo C. dos; Pereira, Claudio M.N.A.; Schirru, Roberto; Pinheiro, André; Coordenacao de Pos-Graduacao e Pesquisa de Engenharia

    2017-01-01

    Atmospheric radionuclide dispersion systems (ARDS) are essential mechanisms for predicting the consequences of unexpected radioactive releases from nuclear power plants: during an accident involving the release of radioactive material, an accurate forecast is vital to guide the evacuation of potentially affected areas. In order to predict the dispersion of the radioactive material and its impact on the environment, the model must process information about the source term (radioactive materials released, activities and location), weather conditions (wind, humidity and precipitation) and geographical characteristics (topography). An ARDS is basically composed of four main modules: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The Wind Field and Plume Dispersion modules are the ones that require high computational performance to achieve accurate results within an acceptable time. Taking this into account, this work focuses on the development of a GPU-based parallel Plume Dispersion module, centered on the radionuclide transport and diffusion calculations, which take a given wind field and a released source term as parameters. The program is developed in the C++ programming language with CUDA libraries. In a comparative case study between parallel and sequential versions of the slowest function of the Plume Dispersion module, a speedup of 11.63 times was observed. (author)
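
    A minimal CUDA sketch of the transport-and-diffusion step, assuming a Lagrangian particle formulation: each thread advances one tracer particle by advection in the given wind field plus a Gaussian random-walk diffusion term. The grid layout, constant diffusivity, and pre-initialized RNG states are illustrative assumptions, not the IEN model.

        #include <cuda_runtime.h>
        #include <curand_kernel.h>

        // Advance one tracer particle per thread: advection by the local wind
        // vector plus random-walk diffusion. rng must be initialized beforehand.
        __global__ void advanceParticles(float3* pos, curandState* rng,
                                         const float3* wind, // [nx*ny*nz] wind field (m/s)
                                         int nx, int ny, int nz,
                                         float cell, float dt, float diffusivity,
                                         int nParticles)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= nParticles) return;
            float3 p = pos[i];
            // Nearest-cell lookup of the wind vector at the particle position.
            int ix = min(max((int)(p.x / cell), 0), nx - 1);
            int iy = min(max((int)(p.y / cell), 0), ny - 1);
            int iz = min(max((int)(p.z / cell), 0), nz - 1);
            float3 u = wind[(iz * ny + iy) * nx + ix];
            // Advection plus Gaussian random-walk diffusion.
            float sigma = sqrtf(2.0f * diffusivity * dt);
            p.x += u.x * dt + sigma * curand_normal(&rng[i]);
            p.y += u.y * dt + sigma * curand_normal(&rng[i]);
            p.z += u.z * dt + sigma * curand_normal(&rng[i]);
            pos[i] = p;
        }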

  18. GPU-based parallel computing in real-time modeling of atmospheric transport and diffusion of radioactive material

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Marcelo C. dos; Pereira, Claudio M.N.A.; Schirru, Roberto; Pinheiro, André, E-mail: jovitamarcelo@gmail.com, E-mail: cmnap@ien.gov.br, E-mail: schirru@lmp.ufrj.br, E-mail: apinheiro99@gmail.com [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2017-07-01

    Atmospheric radionuclide dispersion systems (ARDS) are essential mechanisms for predicting the consequences of unexpected radioactive releases from nuclear power plants: during an accident involving the release of radioactive material, an accurate forecast is vital to guide the evacuation of potentially affected areas. In order to predict the dispersion of the radioactive material and its impact on the environment, the model must process information about the source term (radioactive materials released, activities and location), weather conditions (wind, humidity and precipitation) and geographical characteristics (topography). An ARDS is basically composed of four main modules: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The Wind Field and Plume Dispersion modules are the ones that require high computational performance to achieve accurate results within an acceptable time. Taking this into account, this work focuses on the development of a GPU-based parallel Plume Dispersion module, centered on the radionuclide transport and diffusion calculations, which take a given wind field and a released source term as parameters. The program is developed in the C++ programming language with CUDA libraries. In a comparative case study between parallel and sequential versions of the slowest function of the Plume Dispersion module, a speedup of 11.63 times was observed. (author)

  19. GPU based 3D feature profile simulation of high-aspect ratio contact hole etch process under fluorocarbon plasmas

    Science.gov (United States)

    Chun, Poo-Reum; Lee, Se-Ah; Yook, Yeong-Geun; Choi, Kwang-Sung; Cho, Deog-Geun; Yu, Dong-Hun; Chang, Won-Seok; Kwon, Deuk-Chul; Im, Yeon-Ho

    2013-09-01

    Although plasma etch profile simulation has attracted much interest for developing reliable plasma etching, large gaps remain between the current state of research and predictive modeling, owing to the inherent complexity of plasma processes. As an effort to address this issue, we present a 3D feature profile simulation coupled with a well-defined plasma-surface kinetic model for the silicon dioxide etching process under fluorocarbon plasmas. To capture realistic plasma-surface reaction behaviors, a polymer-layer-based surface kinetic model was proposed that considers simultaneous polymer deposition and oxide etching. The realistic plasma-surface model was then used to calculate the speed function for the 3D topology simulation, which consists of a multiple-level-set-based moving algorithm and a ballistic transport module. In addition, the time-consuming computations in the ballistic transport calculation were improved drastically by GPU-based numerical computation, enabling real-time computation. Finally, we demonstrated that the surface kinetic model could be coupled successfully to 3D etch profile simulations of high-aspect-ratio contact hole plasma etching.
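
    The front-advancing part of a level-set moving algorithm of the kind mentioned above can be sketched as one explicit upwind update of phi_t + F * |grad phi| = 0 on a 2D grid (the standard Osher-Sethian scheme). This is a generic sketch, not the authors' multiple-level-set code; in their setting the speed function F would come from the surface kinetic model.

        #include <cuda_runtime.h>

        // One explicit upwind (Osher-Sethian) level-set step, one thread per node.
        __global__ void levelSetStep(const float* phi, const float* F,
                                     float* phiNew, int w, int h,
                                     float dt, float dx)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x <= 0 || y <= 0 || x >= w - 1 || y >= h - 1) return;
            int p = y * w + x;
            float dxm = (phi[p] - phi[p - 1]) / dx, dxp = (phi[p + 1] - phi[p]) / dx;
            float dym = (phi[p] - phi[p - w]) / dx, dyp = (phi[p + w] - phi[p]) / dx;
            float f = F[p], g;
            if (f > 0.0f)   // upwind gradient for an advancing front
                g = sqrtf(fmaxf(dxm, 0.f) * fmaxf(dxm, 0.f) + fminf(dxp, 0.f) * fminf(dxp, 0.f)
                        + fmaxf(dym, 0.f) * fmaxf(dym, 0.f) + fminf(dyp, 0.f) * fminf(dyp, 0.f));
            else            // and for a receding front
                g = sqrtf(fminf(dxm, 0.f) * fminf(dxm, 0.f) + fmaxf(dxp, 0.f) * fmaxf(dxp, 0.f)
                        + fminf(dym, 0.f) * fminf(dym, 0.f) + fmaxf(dyp, 0.f) * fmaxf(dyp, 0.f));
            phiNew[p] = phi[p] - dt * f * g;
        }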

  20. Assessment of seismic margin calculation methods

    International Nuclear Information System (INIS)

    Kennedy, R.P.; Murray, R.C.; Ravindra, M.K.; Reed, J.W.; Stevenson, J.D.

    1989-03-01

    Seismic margin review of nuclear power plants requires that the High Confidence of Low Probability of Failure (HCLPF) capacity be calculated for certain components. The candidate methods for calculating the HCLPF capacity as recommended by the Expert Panel on Quantification of Seismic Margins are the Conservative Deterministic Failure Margin (CDFM) method and the Fragility Analysis (FA) method. The present study evaluated these two methods using some representative components in order to provide further guidance in conducting seismic margin reviews. It is concluded that either of the two methods could be used for calculating HCLPF capacities. 21 refs., 9 figs., 6 tabs

  1. GPU-based relative fuzzy connectedness image segmentation

    International Nuclear Information System (INIS)

    Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-01

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.
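
    The heart of a fuzzy-connectedness computation is a "weakest link" relaxation: a voxel's connectivity to the seeds is the maximum over its neighbors of the minimum of the edge affinity and the neighbor's connectivity, iterated to a fixed point. The CUDA sketch below shows one such sweep on a 2D grid; it is a generic FC relaxation, not the exact P-ORFC scheme.

        #include <cuda_runtime.h>

        // One fuzzy-connectedness sweep. affX[p] is the affinity between pixel p
        // and p+1 (x direction); affY[p] between p and p+w (y direction).
        __global__ void fcSweep(const float* affX, const float* affY,
                                const float* connIn, float* connOut,
                                int w, int h)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x >= w || y >= h) return;
            int p = y * w + x;
            float best = connIn[p];   // seeds start at 1, everything else at 0
            if (x > 0)     best = fmaxf(best, fminf(affX[p - 1], connIn[p - 1]));
            if (x < w - 1) best = fmaxf(best, fminf(affX[p],     connIn[p + 1]));
            if (y > 0)     best = fmaxf(best, fminf(affY[p - w], connIn[p - w]));
            if (y < h - 1) best = fmaxf(best, fminf(affY[p],     connIn[p + w]));
            connOut[p] = best;        // max over paths of the weakest link
        }

    A host loop would ping-pong connIn/connOut until no voxel changes; the segmented object is then obtained by comparing connectivities against those of competing seed sets.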

  2. GPU-based relative fuzzy connectedness image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W. [Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland 20892 (United States); Department of Mathematics, West Virginia University, Morgantown, West Virginia 26506 (United States) and Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland 20892 (United States)

    2013-01-15

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  3. GPU-based relative fuzzy connectedness image segmentation

    Science.gov (United States)

    Zhuge, Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-01

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology. PMID:23298094

  4. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    International Nuclear Information System (INIS)

    Min Yugang; Santhanam, Anand; Ruddy, Bari H; Neelakkantan, Harini; Meeks, Sanford L; Kupelian, Patrick A

    2010-01-01

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan generated with the Pinnacle Treatment Planning System (Philips) for one of the 3DCTs of the 4DCT, and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.
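
    The accumulation idea, summing the planned dose picked up by each (moving) voxel over the discrete lung-volume states of the beam delivery, can be sketched as below; the nearest-neighbour lookup, grids, and names are simplifying assumptions rather than the authors' implementation.

        #include <cuda_runtime.h>

        // For one lung-volume state during beam-on time: displace each voxel by
        // that state's deformation vector and accumulate the planned dose found
        // at the displaced location, weighted by the state's share of beam time.
        __global__ void accumulateDose(const float3* disp,    // per-voxel DVF, this state
                                       const float* planDose, // static planned dose grid
                                       float* accDose,        // running delivered dose
                                       int nx, int ny, int nz,
                                       float dtFraction)      // share of beam-on time
        {
            int v = blockIdx.x * blockDim.x + threadIdx.x;
            if (v >= nx * ny * nz) return;
            int x = v % nx, y = (v / nx) % ny, z = v / (nx * ny);
            // Displace the voxel (nearest-neighbour, in voxel units) and clamp.
            int dx = min(max(x + (int)lrintf(disp[v].x), 0), nx - 1);
            int dy = min(max(y + (int)lrintf(disp[v].y), 0), ny - 1);
            int dz = min(max(z + (int)lrintf(disp[v].z), 0), nz - 1);
            accDose[v] += dtFraction * planDose[(dz * ny + dy) * nx + dx];
        }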

  5. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    Energy Technology Data Exchange (ETDEWEB)

    Min Yugang; Santhanam, Anand; Ruddy, Bari H [University of Central Florida, FL (United States); Neelakkantan, Harini; Meeks, Sanford L [M D Anderson Cancer Center Orlando, FL (United States); Kupelian, Patrick A, E-mail: anand.santhanam@orlandohealth.co [Department of Radiation Oncology, University of California, Los Angeles, CA (United States)

    2010-09-07

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan generated with the Pinnacle Treatment Planning System (Philips) for one of the 3DCTs of the 4DCT, and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  6. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion.

    Science.gov (United States)

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H; Meeks, Sanford L; Kupelian, Patrick A

    2010-09-07

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan generated with the Pinnacle Treatment Planning System (Philips) for one of the 3DCTs of the 4DCT, and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  7. Broyden's method in nuclear structure calculations

    International Nuclear Information System (INIS)

    Baran, Andrzej; Bulgac, Aurel; Forbes, Michael McNeil; Hagen, Gaute; Nazarewicz, Witold; Schunck, Nicolas; Stoitsov, Mario V.

    2008-01-01

    Broyden's method, widely used in quantum chemistry electronic-structure calculations for the numerical solution of nonlinear equations in many variables, is applied in the context of the nuclear many-body problem. Examples include the unitary gas problem, the nuclear density functional theory with Skyrme functionals, and the nuclear coupled-cluster theory. The stability of the method, its ease of use, and its rapid convergence rates make Broyden's method a tool of choice for large-scale nuclear structure calculations
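
    For reference, a compact host-side C++ sketch of Broyden's ("good") method on a 2-variable test system: the inverse Jacobian is seeded from finite differences and then updated with the Sherman-Morrison formula, so no linear solve is needed inside the loop. The test system is illustrative, not from the paper.

        #include <cstdio>
        #include <cmath>

        const int N = 2;

        void f(const double x[N], double y[N]) {     // example nonlinear system
            y[0] = x[0] + x[1] - 3.0;                // roots at (1,2) and (2,1)
            y[1] = x[0] * x[1] - 2.0;
        }

        int main() {
            double x[N] = {1.5, 0.5}, fx[N];
            f(x, fx);

            // Seed: invert a forward-difference Jacobian analytically (2x2).
            double J[N][N], Binv[N][N], eps = 1e-7;
            for (int j = 0; j < N; ++j) {
                double xp[N] = {x[0], x[1]}, fp[N];
                xp[j] += eps; f(xp, fp);
                for (int i = 0; i < N; ++i) J[i][j] = (fp[i] - fx[i]) / eps;
            }
            double det = J[0][0] * J[1][1] - J[0][1] * J[1][0];
            Binv[0][0] =  J[1][1] / det; Binv[0][1] = -J[0][1] / det;
            Binv[1][0] = -J[1][0] / det; Binv[1][1] =  J[0][0] / det;

            for (int it = 0; it < 50; ++it) {
                double dx[N], xn[N], fn[N], df[N];
                for (int i = 0; i < N; ++i)          // quasi-Newton step
                    dx[i] = -(Binv[i][0] * fx[0] + Binv[i][1] * fx[1]);
                for (int i = 0; i < N; ++i) xn[i] = x[i] + dx[i];
                f(xn, fn);
                for (int i = 0; i < N; ++i) df[i] = fn[i] - fx[i];

                // Sherman-Morrison update:
                // Binv += (dx - Binv df)(dx^T Binv) / (dx^T Binv df)
                double Bdf[N], xtB[N], denom = 0.0;
                for (int i = 0; i < N; ++i) {
                    Bdf[i] = Binv[i][0] * df[0] + Binv[i][1] * df[1];
                    xtB[i] = dx[0] * Binv[0][i] + dx[1] * Binv[1][i];
                    denom += dx[i] * Bdf[i];
                }
                for (int i = 0; i < N; ++i)
                    for (int j = 0; j < N; ++j)
                        Binv[i][j] += (dx[i] - Bdf[i]) * xtB[j] / denom;

                for (int i = 0; i < N; ++i) { x[i] = xn[i]; fx[i] = fn[i]; }
                double res = std::hypot(fx[0], fx[1]);
                std::printf("iter %2d  residual %.3e\n", it + 1, res);
                if (res < 1e-12) break;
            }
            std::printf("root: (%.6f, %.6f)\n", x[0], x[1]);
            return 0;
        }

    From this starting point the iteration should converge to the root (2, 1) in a few steps; in large-scale self-consistent-field problems the same update is applied to the residual of the fixed-point map rather than to an explicit Jacobian.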

  8. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1988-01-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed

  9. Methods of bone marrow dose calculation

    International Nuclear Information System (INIS)

    Taboaco, R.C.

    1982-02-01

    Several methods of bone marrow dose calculation for photon irradiation were analysed. After a critical analysis, the author proposes the adoption, by the Instituto de Radioprotecao e Dosimetria/CNEN, of Rosenstein's method for dose calculations in radiodiagnostic examinations and Kramer's method in the case of occupational irradiation. It was verified by Eckerman and Simpson that, for monoenergetic gamma emitters uniformly distributed within the bone mineral of the skeleton, the dose at the bone surface can be several times higher than the dose in the skeleton. Accordingly, the calculation of tissue-air ratios for bone surfaces in selected irradiation geometries and photon energies is also proposed for inclusion in Rosenstein's method for organ dose calculation in radiodiagnostic examinations. (Author) [pt

  10. Simplified dose calculation method for mantle technique

    International Nuclear Information System (INIS)

    Scaff, L.A.M.

    1984-01-01

    A simplified dose calculation method for the mantle technique is described. In the routine treatment of lymphomas using this technique, the daily doses at the midpoints of five anatomical regions are different because the thicknesses are not equal. (Author) [pt

  11. Simple Calculation Programs for Biology Immunological Methods

    Indian Academy of Sciences (India)

    Simple Calculation Programs for Biology Immunological Methods. Computation of Ab/Ag concentration from ELISA data: graphical method; Raghava et al., 1992, J. Immuno. Methods 153: 263. Determination of affinity of monoclonal antibody using non-competitive ...

  12. Range calculations using multigroup transport methods

    International Nuclear Information System (INIS)

    Hoffman, T.J.; Robinson, M.T.; Dodds, H.L. Jr.

    1979-01-01

    Several aspects of radiation damage effects in fusion reactor neutron and ion irradiation environments are amenable to treatment by transport theory methods. In this paper, multigroup transport techniques are developed for the calculation of particle range distributions. These techniques are illustrated by analysis of Au-196 atoms recoiling from (n,2n) reactions with gold. The results of these calculations agree very well with range calculations performed with the atomistic code MARLOWE. Although some detail of the atomistic model is lost in the multigroup transport calculations, the improved computational speed should prove useful in the solution of fusion material design problems

  13. Eigenvalue translation method for mode calculations

    International Nuclear Information System (INIS)

    Gerck, E.; Cruz, C.H.B.

    1978-11-01

    A new method is described for calculating the first few modes of an interferometer; it has several advantages over the ALLMAT subroutine, the Prony method, and the Fox and Li method. The illustrative results shown for the same cases indicate that the eigenvalue translation method is typically about 100 times faster than the usual Fox and Li method and 10 times faster than ALLMAT [pt

  14. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1987-11-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 [1] methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed. The effective dose equivalent determined using ICRP-26 methods is significantly smaller than the dose equivalent determined by traditional methods. No existing personnel dosimeter or health physics instrument can determine effective dose equivalent. At the present time, the conversion of dosimeter response to dose equivalent is based on calculations for maximal or "cap" values using homogeneous spherical or cylindrical phantoms. The evaluated dose equivalent is, therefore, a poor approximation of the effective dose equivalent as defined by ICRP Publication 26. 3 refs., 2 figs., 1 tab

  15. Reactor perturbation calculations by Monte Carlo methods

    International Nuclear Information System (INIS)

    Gubbins, M.E.

    1965-09-01

    Whilst Monte Carlo methods are useful for reactor calculations involving complicated geometry, it is difficult to apply them to the calculation of perturbation worths because of the large amount of computing time needed to obtain good accuracy. Various ways of overcoming these difficulties are investigated in this report, with the problem of estimating absorbing control rod worths particularly in mind. As a basis for discussion a method of carrying out multigroup reactor calculations by Monte Carlo methods is described. Two methods of estimating a perturbation worth directly, without differencing two quantities of like magnitude, are examined closely but are passed over in favour of a third method based on a correlation technique. This correlation method is described, and demonstrated by a limited range of calculations for absorbing control rods in a fast reactor. In these calculations control rod worths of between 1% and 7% in reactivity are estimated to an accuracy better than 10% (3 standard errors) in about one hour's computing time on the English Electric KDF.9 digital computer. (author)

  16. Calculation Methods for Wallenius’ Noncentral Hypergeometric Distribution

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Two different probability distributions are both known in the literature as "the" noncentral hypergeometric distribution. Wallenius' noncentral hypergeometric distribution can be described by an urn model without replacement with bias. Fisher's noncentral hypergeometric distribution... is the conditional distribution of independent binomial variates given their sum. No reliable calculation method for Wallenius' noncentral hypergeometric distribution has hitherto been described in the literature. Several new methods for calculating probabilities from Wallenius' noncentral hypergeometric... distribution are derived. Range of applicability, numerical problems, and efficiency are discussed for each method. Approximations to the mean and variance are also discussed. This distribution has important applications in models of biased sampling and in models of evolutionary systems.

  17. Simple Calculation Programs for Biology Other Methods

    Indian Academy of Sciences (India)

    Simple Calculation Programs for Biology Other Methods. Hemolytic potency of drugs; Raghava et al., (1994) Biotechniques 17: 1148. FPMAP: methods for classification and identification of microorganisms by 16S rRNA. Graphical display of restriction and fragment map of ...

  18. Rapid simulation of X-ray transmission imaging for baggage inspection via GPU-based ray-tracing

    Science.gov (United States)

    Gong, Qian; Stoian, Razvan-Ionut; Coccarelli, David S.; Greenberg, Joel A.; Vera, Esteban; Gehm, Michael E.

    2018-01-01

    We present a pipeline that rapidly simulates X-ray transmission imaging for arbitrary system architectures using GPU-based ray-tracing techniques. The purpose of the pipeline is to enable statistical analysis of threat detection in the context of airline baggage inspection. As a faster alternative to Monte Carlo methods, we adopt a deterministic approach for simulating photoelectric absorption-based imaging. The highly-optimized NVIDIA OptiX API is used to implement ray-tracing, greatly speeding code execution. In addition, we implement the first hierarchical representation structure to determine the interaction path length of rays traversing heterogeneous media described by layered polygons. The accuracy of the pipeline has been validated by comparing simulated data with experimental data collected using a heterogeneous phantom and a laboratory X-ray imaging system. On a single computer, our approach allows us to generate over 400 2D transmission projections (125 × 125 pixels per frame) per hour for a bag packed with hundreds of everyday objects. By implementing our approach on cloud-based GPU computing platforms, we find that the same 2D projections of approximately 3.9 million bags can be obtained in a single day using 400 GPU instances, at a cost of only $0.001 per bag.
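
    A toy CUDA kernel in the same deterministic spirit: each detector pixel's parallel ray accumulates attenuation path length through a set of homogeneous axis-aligned boxes and applies Beer-Lambert absorption. The real pipeline traces rays against layered polygons with OptiX; the box geometry, units, and names here are simplifying assumptions.

        #include <cuda_runtime.h>

        struct Box { float3 lo, hi; float mu; };   // extents (cm), attenuation (1/cm)

        // Parallel-beam transmission along z: for each pixel, sum mu * path
        // length over the boxes its ray crosses, then apply Beer-Lambert.
        __global__ void transmission(const Box* boxes, int nBoxes,
                                     float* image, int w, int h, float pixelPitch)
        {
            int px = blockIdx.x * blockDim.x + threadIdx.x;
            int py = blockIdx.y * blockDim.y + threadIdx.y;
            if (px >= w || py >= h) return;
            float rx = (px - w * 0.5f) * pixelPitch;   // ray position in the xy plane
            float ry = (py - h * 0.5f) * pixelPitch;
            float tau = 0.0f;                          // integral of mu along the ray
            for (int b = 0; b < nBoxes; ++b) {
                const Box& B = boxes[b];
                if (rx >= B.lo.x && rx < B.hi.x && ry >= B.lo.y && ry < B.hi.y)
                    tau += B.mu * (B.hi.z - B.lo.z);   // path length through the box
            }
            image[py * w + px] = expf(-tau);           // transmitted fraction I/I0
        }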

  19. Simple method for calculating island widths

    International Nuclear Information System (INIS)

    Cary, J.R.; Hanson, J.D.; Carreras, B.A.; Lynch, V.E.

    1989-01-01

    A simple method for calculating magnetic island widths has been developed. This method uses only information obtained from integrating along the closed field line at the island center; thus, it is computationally less intensive than the usual method of producing surfaces of section in sufficient detail to locate and resolve the island separatrix. The method has been implemented numerically and used to analyze the buss work islands of ATF. In this case the method proves to be accurate to within 30%. 7 refs

  20. Willow growing - Methods of calculation and profitability

    International Nuclear Information System (INIS)

    Rosenqvist, H.

    1997-01-01

    The calculation method presented here makes it possible to conduct profitability comparisons between annual and perennial crops and, in addition, to take the planning situation into account. The method applied is a modified total step calculation. The difference between a traditional total step calculation and the modified version is the way in which payments and disbursements are taken into account over a period of several years; this is achieved by combining the present value method and the annuity method. The choice of interest rate has great bearing on the result in perennial calculations. The various components influencing the interest rate are analysed, and factors relating to the establishment of the interest rate in different situations are described. The risk factor can be an important variable component of the interest rate calculation; risk is also addressed using an approach based on portfolio theory. The application of the methods sheds light on the profitability of Salix cultivation from the viewpoint of business economics, and on how different factors influence that profitability. Aspects studied are harvesting intervals, the importance of yield level, the competitiveness of Salix versus grain cultivation, the influence of income taxes on profitability, etc. Methods for evaluating activities concerning the cultivation of a perennial crop are described, including the application of nitrogen fertilization to Salix cultivation. Studies have been performed using these methods to look into nitrogen fertilizer profitability in Salix cultivation during the first rotation period. Nitrogen fertilizer profitability has been investigated using both production functions and cost calculations, taking the year of fertilization into consideration. 72 refs., 2 figs., 52 tabs

  1. Hybrid numerical calculation method for bend waveguides

    OpenAIRE

    Garnier, Lucas; Saavedra, C.; Castro-Beltran, Rigoberto; Lucio, José Luis; Bêche, Bruno

    2017-01-01

    The knowledge of how light behaves in a waveguide with a radius of curvature is becoming more and more important because of the development of integrated photonics, which includes ring micro-resonators, phasars, and other devices with a radius of curvature. This work presents a numerical calculation method to determine the eigenvalues and eigenvectors of curved waveguides. The method is a hybrid one which first uses a conformal transformation of the complex plane gene...

  2. Monte Carlo methods for shield design calculations

    International Nuclear Information System (INIS)

    Grimstone, M.J.

    1974-01-01

    A suite of Monte Carlo codes is being developed for use on a routine basis in commercial reactor shield design. The methods adopted for this purpose include the modular construction of codes, simplified geometries, automatic variance reduction techniques, continuous energy treatment of cross section data, and albedo methods for streaming. Descriptions are given of the implementation of these methods and of their use in practical calculations. 26 references. (U.S.)

  3. Radiation transport calculation methods in BNCT

    International Nuclear Information System (INIS)

    Koivunoro, H.; Seppaelae, T.; Savolainen, S.

    2000-01-01

    Boron neutron capture therapy (BNCT) is used as a radiotherapy for malignant brain tumours. The radiation dose distribution must be determined individually for each patient. Radiation transport and dose distribution calculations in BNCT are more complicated than in conventional radiotherapy, since the total dose in BNCT consists of several different dose components. The most important dose component for tumour control is the therapeutic boron dose D_B. The other dose components are the gamma dose D_g, the incident fast neutron dose D_fast-n, and the nitrogen dose D_N. The total dose is a weighted sum of the dose components. Calculation of the neutron and photon flux is a complex problem and requires numerical methods, i.e. deterministic or stochastic simulation methods. Deterministic methods are based on the numerical solution of the Boltzmann transport equation; examples are the discrete ordinates (SN) and spherical harmonics (PN) methods. The stochastic simulation method for calculating radiation transport is known as the Monte Carlo method. In the deterministic methods the spatial geometry is partitioned into mesh elements. In the SN method, angular integrals of the transport equation are replaced with weighted sums over a set of discrete angular directions, and the flux is calculated iteratively for all mesh elements and for each discrete direction. Discrete ordinates transport codes used in the dosimetric calculations are ANISN, DORT and TORT. In the PN method a Legendre expansion of the angular flux is used instead of discrete direction fluxes, and the angular dependence becomes a property of the vector function space itself; thus, only spatial iterations are required for the resulting equations. A novel radiation transport code based on the PN method and a tree-multigrid technique (TMG) has been developed at VTT (Technical Research Centre of Finland). The Monte Carlo method solves the radiation transport by randomly selecting neutrons and photons from a prespecified boundary source and following the histories of the selected particles

  4. Comparative Study of Daylighting Calculation Methods

    Directory of Open Access Journals (Sweden)

    Mandala Ariani

    2018-01-01

    The aim of this study is to assess five daylighting calculation methods commonly used in architectural studies. The methods include hand calculation (the SNI/DPMB method and BRE daylighting protractors), scale models studied in an artificial sky simulator, and computer programs using the Dialux and Velux lighting software. The test room assumes uniform sky conditions and a simple room geometry, with variations in room reflectance (black, grey, and white). The analysis compares the results (including daylight factor, illuminance, and coefficient of uniformity values), examining their similarities and differences. The reflectance variations are used to analyse the contribution of the internal reflection factor to the result.

  5. Three-dimensional space charge calculation method

    International Nuclear Information System (INIS)

    Lysenko, W.P.; Wadlinger, E.A.

    1981-01-01

    A method is presented for calculating space-charge forces suitable for use in a particle tracing code. Poisson's equation is solved in three dimensions with boundary conditions specified on an arbitrary surface by using a weighted residual method. Using a discrete particle distribution as the source input, examples are shown of off-axis, bunched beams of noncircular cross-section in radio-frequency quadrupole (RFQ) and drift-tube linac geometries

  6. Efficient pseudospectral methods for density functional calculations

    International Nuclear Information System (INIS)

    Murphy, R. B.; Cao, Y.; Beachy, M. D.; Ringnalda, M. N.; Friesner, R. A.

    2000-01-01

    Novel improvements of the pseudospectral method for assembling the Coulomb operator are discussed. These improvements consist of a fast atom-centered multipole method and a variation of the Head-Gordon J-engine analytic integral evaluation. The details of the methodology are discussed and performance evaluations are presented for larger molecules within the context of DFT energy and gradient calculations. (c) 2000 American Institute of Physics

  7. Monte Carlo method for array criticality calculations

    International Nuclear Information System (INIS)

    Dickinson, D.; Whitesides, G.E.

    1976-01-01

    The Monte Carlo method for solving neutron transport problems consists of mathematically tracing paths of individual neutrons collision by collision until they are lost by absorption or leakage. The fate of the neutron after each collision is determined by the probability distribution functions that are formed from the neutron cross-section data. These distributions are sampled statistically to establish the successive steps in the neutron's path. The resulting data, accumulated from following a large number of batches, are analyzed to give estimates of k_eff and other collision-related quantities. The use of electronic computers to produce the simulated neutron histories, initiated at Los Alamos Scientific Laboratory, made the use of the Monte Carlo method practical for many applications. In analog Monte Carlo simulation, the calculation follows the physical events of neutron scattering, absorption, and leakage. To increase calculational efficiency, modifications such as the use of statistical weights are introduced. The Monte Carlo method permits the use of a three-dimensional geometry description and a detailed cross-section representation. Some of the problems in using the method are the selection of the spatial distribution for the initial batch, the preparation of the geometry description for complex units, and the calculation of error estimates for region-dependent quantities such as fluxes. The Monte Carlo method is especially appropriate for criticality safety calculations since it permits an accurate representation of interacting units of fissile material. Dissimilar units, units of complex shape, moderators between units, and reflected arrays may be calculated. Monte Carlo results must be correlated with relevant experimental data, and caution must be used to ensure that a representative set of neutron histories is produced

  8. Comparison of methods for calculating decay lifetimes

    International Nuclear Information System (INIS)

    Tobocman, W.

    1978-01-01

    A simple scattering model is used to test alternative methods for calculating decay lifetimes, or equivalently, resonance widths. We consider the scattering of s-wave particles by a square well with a square barrier. Exact values for resonance energies and resonance widths are compared with values calculated from Wigner-Weisskopf perturbation theory and from the Garside-MacDonald projection operator formalism. The Garside-MacDonald formalism gives essentially exact results while the predictions of the Wigner-Weisskopf formalism are fairly poor

  9. A numerical method for resonance integral calculations

    International Nuclear Information System (INIS)

    Tanbay, Tayfun; Ozgener, Bilge

    2013-01-01

    A numerical method has been proposed for resonance integral calculations, and a cubic fit based on a least squares approximation to compute the optimum Bell factor is given. The numerical method is based on the discretization of the neutron slowing down equation. The scattering integral is approximated by taking into account the location of the upper limit in the energy domain. The accuracy of the method has been tested by performing computations of resonance integrals for isolated uranium dioxide rods and comparing the results with empirical values. (orig.)

  10. Sputtering calculations with the discrete ordinates method

    International Nuclear Information System (INIS)

    Hoffman, T.J.; Dodds, H.L. Jr.; Robinson, M.T.; Holmes, D.K.

    1977-01-01

    The purpose of this work is to investigate the applicability of the discrete ordinates (S_N) method to light ion sputtering problems. In particular, the neutral particle discrete ordinates computer code, ANISN, was used to calculate sputtering yields. No modifications to this code were necessary to treat charged particle transport. However, a cross section processing code was written for the generation of multigroup cross sections; these cross sections include a modification to the total macroscopic cross section to account for electronic interactions and small-scattering-angle elastic interactions. The discrete ordinates approach enables calculation of the sputtering yield as functions of incident energy and angle and of many related quantities such as ion reflection coefficients, angular and energy distributions of sputtering particles, the behavior of beams penetrating thin foils, etc. The results of several sputtering problems as calculated with ANISN are presented

  11. SU-F-J-204: Carbon Digitally Reconstructed Radiography (CDRR): A GPU Based Tool for Fast and Versatile Carbonimaging Simulation

    International Nuclear Information System (INIS)

    Dias, M F; Seco, J; Baroni, G; Riboldi, M

    2016-01-01

    Purpose: Research in carbon imaging has been growing over the past years as a way to increase treatment accuracy and improve patient positioning in carbon therapy. The purpose of this tool is to allow a fast and flexible way to generate CDRR data without the need to use Monte Carlo (MC) simulations. It can also be used to predict future clinically measured data. Methods: A python interface has been developed, which uses information from CT or 4DCT and the treatment calibration curve to compute the Water Equivalent Path Length (WEPL) of carbon ions. A GPU based ray tracing algorithm computes the WEPL of each individual carbon traveling through the CT voxels. A multiple peak detection method to estimate high contrast margin positioning has been implemented (described elsewhere). MC simulations have been used to simulate carbon depth-dose curves in order to simulate the response of a range detector. Results: The tool allows the upload of CT or 4DCT images. The user can select the phase/slice of interest as well as the geometry (position, angle, ...). The WEPL is represented as a range detector, which can be used to assess range dilution and multiple peak detection effects. The tool also provides knowledge of the minimum energy that should be considered for imaging purposes. The multiple peak detection method has been used in a lung tumor case, showing an accuracy of 1 mm in determining the exact interface position. Conclusion: The tool offers an easy and fast way to simulate carbon imaging data. It can be used for educational and for clinical purposes, allowing the user to test beam energies and angles before real acquisition. An analysis add-on is being developed, where the user will have the opportunity to select different reconstruction methods and detector types (range or energy). Fundacao para a Ciencia e a Tecnologia (FCT), PhD Grant number SFRH/BD/85749/2012
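
    The WEPL ray tracing can be sketched as a fixed-step march of each carbon's straight line through the CT volume, converting HU to relative stopping power via a calibration lookup and accumulating water-equivalent length. The lookup-table layout and the fixed-step sampling (instead of an exact voxel traversal such as Siddon's algorithm) are illustrative assumptions, not the tool's implementation.

        #include <cuda_runtime.h>

        // One thread = one carbon ray. ctHU is a 3D texture (unnormalized voxel
        // coordinates assumed); calib maps shifted HU to relative stopping power.
        __global__ void weplKernel(cudaTextureObject_t ctHU,
                                   const float* calib,               // HU -> RSP, 4096 bins
                                   const float3* entry, const float3* dir,
                                   float* wepl, int nRays,
                                   float step, float pathLen)
        {
            int r = blockIdx.x * blockDim.x + threadIdx.x;
            if (r >= nRays) return;
            float3 p = entry[r], d = dir[r];
            float sum = 0.0f;
            for (float s = 0.0f; s < pathLen; s += step) {
                float hu = tex3D<float>(ctHU, p.x + d.x * s,
                                               p.y + d.y * s,
                                               p.z + d.z * s);
                int bin = min(max((int)(hu + 1024.0f), 0), 4095); // shift HU into table
                sum += calib[bin] * step;                         // RSP * geometric step
            }
            wepl[r] = sum;   // water-equivalent path length of this carbon
        }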

  12. SU-F-J-204: Carbon Digitally Reconstructed Radiography (CDRR): A GPU Based Tool for Fast and Versatile Carbonimaging Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dias, M F [Dipartamento di Elettronica, Informazione e Bioingegneria - DEIB, Politecnico di Milano (Italy); Department of Radiation Oncology, Francis H. Burr Proton Therapy Center Massachusetts General Hospital (MGH), Boston, Massachusetts (United States); Seco, J [Department of Radiation Oncology, Francis H. Burr Proton Therapy Center Massachusetts General Hospital (MGH), Boston, Massachusetts (United States); Baroni, G; Riboldi, M [Dipartamento di Elettronica, Informazione e Bioingegneria - DEIB, Politecnico di Milano (Italy); Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, Pavia (Italy)

    2016-06-15

    Purpose: Research in carbon imaging has been growing over the past years as a way to increase treatment accuracy and improve patient positioning in carbon therapy. The purpose of this tool is to allow a fast and flexible way to generate CDRR data without the need to use Monte Carlo (MC) simulations. It can also be used to predict future clinically measured data. Methods: A python interface has been developed, which uses information from CT or 4DCT and the treatment calibration curve to compute the Water Equivalent Path Length (WEPL) of carbon ions. A GPU based ray tracing algorithm computes the WEPL of each individual carbon traveling through the CT voxels. A multiple peak detection method to estimate high contrast margin positioning has been implemented (described elsewhere). MC simulations have been used to simulate carbon depth-dose curves in order to simulate the response of a range detector. Results: The tool allows the upload of CT or 4DCT images. The user can select the phase/slice of interest as well as the geometry (position, angle, ...). The WEPL is represented as a range detector, which can be used to assess range dilution and multiple peak detection effects. The tool also provides knowledge of the minimum energy that should be considered for imaging purposes. The multiple peak detection method has been used in a lung tumor case, showing an accuracy of 1 mm in determining the exact interface position. Conclusion: The tool offers an easy and fast way to simulate carbon imaging data. It can be used for educational and for clinical purposes, allowing the user to test beam energies and angles before real acquisition. An analysis add-on is being developed, where the user will have the opportunity to select different reconstruction methods and detector types (range or energy). Fundacao para a Ciencia e a Tecnologia (FCT), PhD Grant number SFRH/BD/85749/2012.

  13. Development of parallel GPU based algorithms for problems in nuclear area; Desenvolvimento de algoritmos paralelos baseados em GPU para solucao de problemas na area nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Adino Americo Heimlich

    2009-07-01

    Graphics Processing Units (GPU) are high performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of using GPUs for general-purpose computation, their application was extended to other fields beyond the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in two typical problems of the nuclear area: neutron transport simulation using the Monte Carlo method, and the solution of the heat equation in a two-dimensional domain by the finite difference method. To achieve this, we developed parallel algorithms for GPU and CPU for the two problems described above. The comparison showed that the GPU-based approach is faster than the CPU-based one on a computer with two quad-core processors, without loss of precision. (author)
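
    For the second problem, one explicit finite-difference time step of the 2D heat equation u_t = alpha (u_xx + u_yy) maps naturally to one CUDA thread per grid node; a minimal sketch of such a kernel follows (parameters and boundary handling are illustrative, not taken from the dissertation).

        #include <cuda_runtime.h>

        // One explicit time step of the 2D heat equation on an nx-by-ny grid
        // with spacing h and fixed (Dirichlet) boundaries. Stable for
        // dt <= h*h / (4*alpha); the host ping-pongs the u/uNew buffers.
        __global__ void heatStep(const float* u, float* uNew,
                                 int nx, int ny, float alpha, float dt, float h)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            int j = blockIdx.y * blockDim.y + threadIdx.y;
            if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;
            int p = j * nx + i;
            float lap = (u[p - 1] + u[p + 1] + u[p - nx] + u[p + nx]
                         - 4.0f * u[p]) / (h * h);   // 5-point Laplacian
            uNew[p] = u[p] + alpha * dt * lap;
        }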

  14. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung [Seoul National University, Seoul (Korea, Republic of)

    2009-10-15

    The maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although the ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of running iterations on a CPU (central processing unit). In this study, we developed a parallel computing technique on the GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection in the ML-EM algorithm were parallelized with NVIDIA's technology. The computation times for projection, for the errors between measured and estimated data, and for backprojection in one iteration were measured. Total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15 times on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135 times and was caused by delays in the CPU-based computation after a certain number of iterations. On the other hand, the GPU-based computation showed very little variation in time per iteration, owing to the use of shared memory. The GPU-based parallel computation for ML-EM significantly improved the computing speed and stability. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries
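
    The parallelization pattern can be illustrated with a dense-matrix toy version of one ML-EM iteration: one thread per projection bin for the forward projection and one thread per voxel for the backprojection and multiplicative update. A dense system matrix A stands in for the geometry-specific projectors used in the study.

        #include <cuda_runtime.h>

        // Forward projection: proj = A x, one thread per projection bin.
        __global__ void forwardProject(const float* A, const float* x,
                                       float* proj, int nBins, int nVox)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= nBins) return;
            float s = 1e-12f;                          // guard against divide-by-zero
            for (int j = 0; j < nVox; ++j) s += A[i * nVox + j] * x[j];
            proj[i] = s;
        }

        // Backprojection + multiplicative ML-EM update, one thread per voxel:
        //   x_j <- x_j * (sum_i A_ij * y_i / (A x)_i) / sens_j,  sens_j = sum_i A_ij
        __global__ void updateImage(const float* A, const float* y,
                                    const float* proj, const float* sens,
                                    float* x, int nBins, int nVox)
        {
            int j = blockIdx.x * blockDim.x + threadIdx.x;
            if (j >= nVox) return;
            float back = 0.0f;
            for (int i = 0; i < nBins; ++i)
                back += A[i * nVox + j] * y[i] / proj[i];
            x[j] *= back / fmaxf(sens[j], 1e-12f);
        }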

  15. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    International Nuclear Information System (INIS)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung

    2009-01-01

    The maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although the ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of running iterations on a CPU (central processing unit). In this study, we developed a parallel computing technique on the GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection in the ML-EM algorithm were parallelized with NVIDIA's technology. The computation times for projection, for the errors between measured and estimated data, and for backprojection in one iteration were measured. Total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15 times on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135 times and was caused by delays in the CPU-based computation after a certain number of iterations. On the other hand, the GPU-based computation showed very little variation in time per iteration, owing to the use of shared memory. The GPU-based parallel computation for ML-EM significantly improved the computing speed and stability. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries

  16. Direct Discrete Method for Neutronic Calculations

    International Nuclear Information System (INIS)

    Vosoughi, Naser; Akbar Salehi, Ali; Shahriari, Majid

    2002-01-01

    The objective of this paper is to introduce a new direct method for neutronic calculations. The method, named the Direct Discrete Method, is simpler than solving the neutron transport equation and is more compatible with the physical meaning of the problem. It is based on the physics of the problem: by meshing the desired geometry, writing the balance equation for each mesh interval, and accounting for the coupling between adjacent mesh intervals, it produces the final series of discrete equations directly, without deriving the neutron transport differential equation and without the mandatory passage through that differential-equation bridge. We have produced the neutron discrete equations for a cylindrical geometry with two boundary conditions in one energy group. The correctness of the results of this method was tested against MCNP-4B code runs. (authors)

  17. SU-E-T-673: Recent Developments and Comprehensive Validations of a GPU-Based Proton Monte Carlo Simulation Package, GPMC

    International Nuclear Information System (INIS)

    Qin, N; Tian, Z; Pompos, A; Jiang, S; Jia, X; Giantsoudi, D; Schuemann, J; Paganetti, H

    2015-01-01

    Purpose: A GPU-based Monte Carlo (MC) simulation package, gPMC, has been previously developed, achieving high computational efficiency. This abstract reports our recent improvements to this package in terms of accuracy, functionality, and code portability. Methods: In the latest version of gPMC, the nuclear interaction cross section database was updated to include data from TOPAS/Geant4. The inelastic interaction model, particularly the proton scattering angle distribution, was updated to improve overall simulation accuracy. Calculation of dose-averaged LET (LETd) was implemented. gPMC was ported to an OpenCL environment to enable portability across different computing devices (GPUs from different vendors, as well as CPUs). We also performed comprehensive tests of the code accuracy. Dose from the electromagnetic (EM) interaction channel, primary and secondary proton doses, and fluences were scored and compared with those computed by TOPAS. Results: In a homogeneous water phantom with 100 and 200 MeV beams, the mean dose differences in the EM channel computed by gPMC and by TOPAS were 0.28% and 0.65% of the corresponding maximum dose, respectively. With the Geant4 nuclear interaction cross section data, the mean difference in primary proton dose was 0.84% for the 200 MeV case and 0.78% for the 100 MeV case. After updating the inelastic interaction model, the maximum differences in secondary proton fluence and dose were 0.08% and 0.5% for the 200 MeV beam, and 0.04% and 0.2% for the 100 MeV beam. In a test case with a 150 MeV proton beam, the mean difference between LETd computed by gPMC and by TOPAS was 0.96% within the proton range. With the OpenCL implementation, gPMC is executable on AMD and Nvidia GPUs, as well as on Intel CPUs in single or multiple threads. Results on different platforms agreed within statistical uncertainty. Conclusion: Several improvements have been implemented in the latest version of gPMC, enhancing its accuracy, functionality, and code portability.

  18. SU-E-T-673: Recent Developments and Comprehensive Validations of a GPU-Based Proton Monte Carlo Simulation Package, GPMC

    Energy Technology Data Exchange (ETDEWEB)

    Qin, N; Tian, Z; Pompos, A; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Giantsoudi, D; Schuemann, J; Paganetti, H [Massachusetts General Hospital, Boston, MA (United States)

    2015-06-15

    Purpose: A GPU-based Monte Carlo (MC) simulation package, gPMC, has been previously developed, achieving high computational efficiency. This abstract reports our recent improvements to this package in terms of accuracy, functionality, and code portability. Methods: In the latest version of gPMC, the nuclear interaction cross section database was updated to include data from TOPAS/Geant4. The inelastic interaction model, particularly the proton scattering angle distribution, was updated to improve overall simulation accuracy. Calculation of dose-averaged LET (LETd) was implemented. gPMC was ported to an OpenCL environment to enable portability across different computing devices (GPUs from different vendors, as well as CPUs). We also performed comprehensive tests of the code accuracy. Dose from the electromagnetic (EM) interaction channel, primary and secondary proton doses, and fluences were scored and compared with those computed by TOPAS. Results: In a homogeneous water phantom with 100 and 200 MeV beams, the mean dose differences in the EM channel computed by gPMC and by TOPAS were 0.28% and 0.65% of the corresponding maximum dose, respectively. With the Geant4 nuclear interaction cross section data, the mean difference in primary proton dose was 0.84% for the 200 MeV case and 0.78% for the 100 MeV case. After updating the inelastic interaction model, the maximum differences in secondary proton fluence and dose were 0.08% and 0.5% for the 200 MeV beam, and 0.04% and 0.2% for the 100 MeV beam. In a test case with a 150 MeV proton beam, the mean difference between LETd computed by gPMC and by TOPAS was 0.96% within the proton range. With the OpenCL implementation, gPMC is executable on AMD and Nvidia GPUs, as well as on Intel CPUs in single or multiple threads. Results on different platforms agreed within statistical uncertainty. Conclusion: Several improvements have been implemented in the latest version of gPMC, enhancing its accuracy, functionality, and code portability.
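
    The dose-averaged LET scoring that was added can be written down directly from its definition, LETd = sum(d_i * L_i) / sum(d_i) over the energy-deposition events in each voxel. A minimal sketch (array names are illustrative, not gPMC's API):

```python
import numpy as np

def dose_averaged_let(voxel_idx, dose, let, n_voxels):
    """Dose-averaged LET per voxel: LETd = sum(d_i * L_i) / sum(d_i),
    accumulated over energy-deposition events. Inputs are flat per-event
    arrays; names are hypothetical, not gPMC's actual interface."""
    num = np.zeros(n_voxels)
    den = np.zeros(n_voxels)
    np.add.at(num, voxel_idx, dose * let)   # accumulate dose-weighted LET
    np.add.at(den, voxel_idx, dose)         # accumulate dose
    return np.divide(num, den, out=np.zeros(n_voxels), where=den > 0)
```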

  19. Burnup calculations using Monte Carlo method

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Degweker, S.B.

    2009-01-01

    In recent years, interest in burnup calculations using Monte Carlo methods has gained momentum. Previous burnup codes have used multigroup transport theory based calculations followed by diffusion theory based core calculations for the neutronic portion of the codes. The transport theory methods invariably make approximations in the treatment of the energy and angle variables involved in scattering, besides approximations related to geometry simplification. Cell homogenisation to produce diffusion theory parameters adds to these approximations. Moreover, while diffusion theory works for most reactors, it does not produce accurate results in systems that have strong gradients, strong absorbers or large voids. Also, diffusion theory codes are geometry limited (rectangular, hexagonal, cylindrical, and spherical coordinates). Monte Carlo methods are ideal for solving very heterogeneous reactors and/or lattices/assemblies in which considerable burnable poisons are used. The key feature of this approach is that Monte Carlo methods permit essentially 'exact' modeling of all geometrical detail, without resort to energy and spatial homogenization of neutron cross sections. The Monte Carlo method would also be better suited for Accelerator Driven Systems (ADS), which can have strong gradients due to the external source and a sub-critical assembly. To meet the demand for an accurate burnup code, we have developed a Monte Carlo burnup calculation code system in which a Monte Carlo neutron transport code is coupled with a versatile code (McBurn) for calculating the buildup and decay of nuclides in nuclear materials. McBurn was developed from scratch by the authors. In this article we discuss our effort in developing the continuous energy Monte Carlo burnup code, McBurn. McBurn is intended for entire reactor cores as well as for unit cells and assemblies. Generally, McBurn can handle burnup of any geometrical system which can be handled by the underlying Monte Carlo transport code.
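
    The abstract does not give McBurn's numerical scheme; the core step of any such coupling is a depletion solve of the Bateman equations dN/dt = A N over a burnup interval, with the matrix A built from decay constants and flux-weighted reaction rates frozen from the last transport solve. A generic matrix-exponential sketch, not McBurn's actual method:

```python
import numpy as np
from scipy.linalg import expm

def deplete(n0, decay_matrix, xs_matrix, flux, dt):
    """One burnup step: dN/dt = A N, with A collecting decay constants and
    flux-weighted transmutation rates from the last transport solve.
    Generic sketch with hypothetical inputs, not McBurn's scheme."""
    A = decay_matrix + flux * xs_matrix   # combined burnup matrix (1/s)
    return expm(A * dt) @ n0              # nuclide vector after time dt

# Toy 2-nuclide chain: species 0 captures into species 1, which decays away.
decay = np.array([[0.0, 0.0], [0.0, -1e-9]])        # decay constants (1/s)
capture = np.array([[-1e-24, 0.0], [1e-24, 0.0]])   # one-group cross sections (cm^2)
n = deplete(np.array([1e24, 0.0]), decay, capture, flux=1e14, dt=86400.0)
```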

  20. Acceleration methods and models in Sn calculations

    International Nuclear Information System (INIS)

    Sbaffoni, M.M.; Abbate, M.J.

    1984-01-01

    In some neutron transport problems solved by the discrete ordinates method, it is relatively common to observe certain peculiarities such as the generation of negative fluxes, slow and unreliable convergence, and solution instabilities. The commonly used models for neutron flux calculation and the acceleration methods included in the most widely used codes were analyzed with respect to their use in problems characterized by a strong upscattering effect. Some conclusions derived from this analysis are presented, as well as a new method of performing the upscattering scaling for solving the aforementioned problems in such cases. This method has been included in the DOT3.5 code (two-dimensional discrete ordinates radiation transport code), generating a new version of wider applicability. (Author) [es

  1. Criticality calculation method for mixer-settlers

    International Nuclear Information System (INIS)

    Gonda, Kozo; Aoyagi, Haruki; Nakano, Ko; Kamikawa, Hiroshi.

    1980-01-01

    A new criticality calculation code, MACPEX, has been developed to evaluate and manage the criticality of the process in extractors of the mixer-settler type. MACPEX can perform a combined calculation with the PUREX process calculation code MIXSET to obtain the neutron flux and the effective multiplication constant in the mixer-settlers. MACPEX solves the one-dimensional diffusion equation by the explicit difference method and the standard source-iteration technique. The characteristics of MACPEX are as follows. 1) Group constants of 4 energy groups for the ²³⁹Pu-H₂O solution, water, polyethylene and SUS 28 are provided. 2) The group constants of the ²³⁹Pu-H₂O solution are given by functional formulae of the plutonium concentration, which is less than 50 g/l. 3) Two boundary conditions, the vacuum condition and the reflective condition, are available in this code. 4) The geometrical bucklings can be calculated for a given energy group and/or region by using the three-dimensional neutron flux profiles obtained by CITATION. 5) A buckling correction search can be carried out in order to obtain a desired k_eff. (author)
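
    The explicit-difference-plus-source-iteration scheme the code uses can be sketched in one energy group. A minimal slab power iteration with zero-flux boundaries follows; the cross sections are placeholders, not MACPEX data.

```python
import numpy as np

def keff_1d(D, siga, nusigf, width, n=100, tol=1e-8):
    """One-group 1D slab diffusion eigenvalue by source iteration:
    -D phi'' + siga*phi = (1/k) nusigf*phi, zero-flux boundaries.
    Explicit finite differences; placeholder physics, not MACPEX data."""
    h = width / n
    off = -D / h**2
    A = (np.diag(np.full(n, 2 * D / h**2 + siga))
         + np.diag(np.full(n - 1, off), 1) + np.diag(np.full(n - 1, off), -1))
    phi, k = np.ones(n), 1.0
    for _ in range(500):
        phi_new = np.linalg.solve(A, nusigf * phi / k)   # diffusion solve
        k_new = k * (nusigf * phi_new).sum() / (nusigf * phi).sum()
        if abs(k_new - k) < tol:
            break
        k, phi = k_new, phi_new
    return k_new, phi_new / phi_new.max()

k, flux = keff_1d(D=1.0, siga=0.07, nusigf=0.08, width=60.0)  # illustrative
```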

  2. Methods for Calculating Empires in Quasicrystals

    Directory of Open Access Journals (Sweden)

    Fang Fang

    2017-10-01

    This paper reviews the empire problem for quasiperiodic tilings and the existing methods for generating the empires of the vertex configurations in quasicrystals, while introducing a new and more efficient method based on the cut-and-project technique. Using the Penrose tiling as an example, this method finds the forced tiles from the restrictions in the higher-dimensional lattice (the mother lattice) that can be cut-and-projected into the lower-dimensional quasicrystal. We compare our method to the two existing methods, namely the method that uses the algorithm of the Fibonacci chain to force the Ammann bars in order to find the forced tiles of an empire, and the method that follows the work of N.G. de Bruijn on constructing a Penrose tiling as the dual to a pentagrid. The new method is not only conceptually simple and clear, but it also allows us to calculate the empires of the vertex configurations in a defected quasicrystal by reversing the configuration of the quasicrystal to its higher-dimensional lattice, where we then apply the restrictions. These advantages may provide a key guiding principle for phason dynamics and an important tool for self error-correction in quasicrystal growth.
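
    The cut-and-project construction at the heart of the new method is easiest to demonstrate in the simplest setting: projecting the points of Z² that fall inside an acceptance window onto a line of slope 1/φ yields the Fibonacci chain. A minimal sketch follows; the empire computation itself, with its mother-lattice restrictions, is beyond this snippet.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2

def fibonacci_chain(n=20):
    """Cut-and-project sketch: keep the Z^2 points whose perpendicular
    coordinate lies inside the acceptance window, then read off their
    parallel coordinates. Produces the Fibonacci quasilattice."""
    norm = np.hypot(phi, 1)
    e_par = np.array([phi, 1]) / norm    # direction of the physical line
    e_perp = np.array([-1, phi]) / norm  # internal-space direction
    window = (abs(e_perp[0]) + abs(e_perp[1])) / 2  # projected unit cell half-width
    pts = []
    for m in range(-n, n + 1):
        for k in range(-n, n + 1):
            p = np.array([m, k], dtype=float)
            if abs(p @ e_perp) <= window:   # cut: inside the window
                pts.append(p @ e_par)       # project onto physical space
    return np.diff(np.sort(pts))            # two tile lengths, ratio ~ phi

gaps = fibonacci_chain()  # long (L) and short (S) intervals of the chain
```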

  3. A GPU based high-resolution multilevel biomechanical head and neck model for validating deformable image registration

    International Nuclear Information System (INIS)

    Neylon, J.; Qi, X.; Sheng, K.; Low, D. A.; Kupelian, P.; Santhanam, A.; Staton, R.; Pukala, J.; Manon, R.

    2015-01-01

    Purpose: Validating the usage of deformable image registration (DIR) for daily patient positioning is critical for adaptive radiotherapy (RT) applications pertaining to head and neck (HN) radiotherapy. The authors present a methodology for generating biomechanically realistic ground-truth data for validating DIR algorithms for HN anatomy by (a) developing a high-resolution deformable biomechanical HN model from a planning CT, (b) simulating deformations for a range of interfraction posture changes and physiological regression, and (c) generating subsequent CT images representing the deformed anatomy. Methods: The biomechanical model was developed using HN kVCT datasets and the corresponding structure contours. The voxels inside a given 3D contour boundary were clustered using a graphics processing unit (GPU) based algorithm that accounted for inconsistencies and gaps in the boundary to form a volumetric structure. While the bony anatomy was modeled as a rigid body, the muscle and soft tissue structures were modeled as mass-spring-damper models with elastic material properties that corresponded to the underlying contoured anatomies. Within a given muscle structure, the voxels were classified using a uniform grid and a normalized mass was assigned to each voxel based on its Hounsfield number. The soft tissue deformation for a given skeletal actuation was performed using an implicit Euler integration, with each iteration split into two substeps: one for the muscle structures and the other for the remaining soft tissues. Posture changes were simulated by articulating the skeletal structure and enabling the soft structures to deform accordingly. Physiological changes representing tumor regression were simulated by reducing the target volume and enabling the surrounding soft structures to deform accordingly. Finally, the authors also discuss a new approach to generate kVCT images representing the deformed anatomy that accounts for gaps and antialiasing artifacts that may

  4. A GPU based high-resolution multilevel biomechanical head and neck model for validating deformable image registration

    Energy Technology Data Exchange (ETDEWEB)

    Neylon, J., E-mail: jneylon@mednet.ucla.edu; Qi, X.; Sheng, K.; Low, D. A.; Kupelian, P.; Santhanam, A. [Department of Radiation Oncology, University of California Los Angeles, 200 Medical Plaza, #B265, Los Angeles, California 90095 (United States); Staton, R.; Pukala, J.; Manon, R. [Department of Radiation Oncology, M.D. Anderson Cancer Center, Orlando, 1440 South Orange Avenue, Orlando, Florida 32808 (United States)

    2015-01-15

    Purpose: Validating the usage of deformable image registration (DIR) for daily patient positioning is critical for adaptive radiotherapy (RT) applications pertaining to head and neck (HN) radiotherapy. The authors present a methodology for generating biomechanically realistic ground-truth data for validating DIR algorithms for HN anatomy by (a) developing a high-resolution deformable biomechanical HN model from a planning CT, (b) simulating deformations for a range of interfraction posture changes and physiological regression, and (c) generating subsequent CT images representing the deformed anatomy. Methods: The biomechanical model was developed using HN kVCT datasets and the corresponding structure contours. The voxels inside a given 3D contour boundary were clustered using a graphics processing unit (GPU) based algorithm that accounted for inconsistencies and gaps in the boundary to form a volumetric structure. While the bony anatomy was modeled as a rigid body, the muscle and soft tissue structures were modeled as mass-spring-damper models with elastic material properties that corresponded to the underlying contoured anatomies. Within a given muscle structure, the voxels were classified using a uniform grid and a normalized mass was assigned to each voxel based on its Hounsfield number. The soft tissue deformation for a given skeletal actuation was performed using an implicit Euler integration, with each iteration split into two substeps: one for the muscle structures and the other for the remaining soft tissues. Posture changes were simulated by articulating the skeletal structure and enabling the soft structures to deform accordingly. Physiological changes representing tumor regression were simulated by reducing the target volume and enabling the surrounding soft structures to deform accordingly. Finally, the authors also discuss a new approach to generate kVCT images representing the deformed anatomy that accounts for gaps and antialiasing artifacts that may
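
    The per-spring force evaluation that such a model parallelizes over the GPU is compact enough to sketch. The version below uses semi-implicit Euler for brevity, where the paper uses a two-substep implicit Euler; all names are illustrative.

```python
import numpy as np

def spring_step(pos, vel, mass, edges, rest_len, k_s, k_d, dt):
    """One semi-implicit Euler step of a mass-spring-damper mesh.
    pos, vel: (n_verts, 3); mass: (n_verts,); edges: (n_springs, 2)
    vertex indices; rest_len: (n_springs,). Illustrative sketch only."""
    force = np.zeros_like(pos)
    i, j = edges[:, 0], edges[:, 1]
    d = pos[j] - pos[i]                                   # spring vectors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    direction = d / np.maximum(length, 1e-9)
    rel_vel = ((vel[j] - vel[i]) * direction).sum(axis=1, keepdims=True)
    f = (k_s * (length - rest_len[:, None]) + k_d * rel_vel) * direction
    np.add.at(force, i, f)                                # equal and opposite
    np.add.at(force, j, -f)
    vel = vel + dt * force / mass[:, None]                # velocity first...
    pos = pos + dt * vel                                  # ...then position
    return pos, vel
```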

  5. Method for consequence calculations for severe accidents

    International Nuclear Information System (INIS)

    Nielsen, F.; Thykier-Nielsen, S.; Walmod-Larsen, O.

    1986-08-01

    This report was commissioned by the Swedish State Power Board, which wanted a method for calculating radiation doses in the surroundings of nuclear power plants caused by severe accidents. The PLUCON4 code was used for the calculations. A TC-SV accident at Ringhals 1 was chosen as the example. A transient without shutdown leads to core meltdown and failure of the reactor vessel. The pressure peak at the moment of vessel failure opens a safety valve in the dry well. Meteorological data for two years from the Ringhals meteorological tower were analysed to find representative weather situations. Pasquill D with a wind speed of 8 m/s was chosen as typical weather, and Pasquill F with a wind speed of 4.8 m/s as extreme weather. (author)
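
    PLUCON4's internals are not given in the abstract; dispersion codes of this kind are typically built on the Gaussian plume model, in which the Pasquill class fixes the dispersion parameters. A generic ground-level concentration sketch (illustrative sigma values, not PLUCON4's data):

```python
import numpy as np

def plume_concentration(Q, u, y, H, sigma_y, sigma_z):
    """Ground-level (z = 0) Gaussian plume concentration with ground
    reflection. Q: release rate (Bq/s), u: wind speed (m/s), H: effective
    release height (m); sigma_y, sigma_z: Pasquill-class dispersion
    parameters (m) evaluated at the downwind distance of interest."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = 2 * np.exp(-H**2 / (2 * sigma_z**2))   # image source at z = -H
    return Q * lateral * vertical / (2 * np.pi * u * sigma_y * sigma_z)

# Illustrative only: Pasquill D-like sigmas ~1 km downwind, 8 m/s wind.
chi = plume_concentration(Q=1e10, u=8.0, y=0.0, H=20.0, sigma_y=75.0, sigma_z=30.0)
```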

  6. Monte Carlo methods to calculate impact probabilities

    Science.gov (United States)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  7. A keff calculation method by Monte Carlo

    International Nuclear Information System (INIS)

    Shen, H; Wang, K.

    2008-01-01

    The effective multiplication factor (k_eff) is defined as the ratio between the numbers of neutrons in successive generations, the definition adopted by most Monte Carlo codes (e.g. MCNP). Alternatively, it can be thought of as the ratio of the neutron generation rate to the sum of the leakage rate and the absorption rate, where the balance should exclude the effect of neutron reactions such as (n,2n) and (n,3n). This article discusses a Monte Carlo method for k_eff calculation based on the second definition. A new code has been developed and the results are presented. (author)
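
    Under the second definition, k_eff is a simple ratio of tallied rates. A schematic estimator, with hypothetical tally names:

```python
def keff_from_rates(fission_production, leakage, absorption):
    """k_eff = (nu * Sigma_f reaction rate) / (leakage + absorption).
    Per the abstract, net gain from (n,2n)/(n,3n) must be excluded from
    this balance (e.g., treated as negative absorption) rather than
    counted as production. Tally names are hypothetical."""
    return fission_production / (leakage + absorption)

# e.g., tallied rates per source neutron:
k = keff_from_rates(fission_production=1.02, leakage=0.35, absorption=0.65)
```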

  8. Methods for calculating radiation attenuation in shields

    Energy Technology Data Exchange (ETDEWEB)

    Butler, J; Bueneman, D; Etemad, A; Lafore, P; Moncassoli, A M; Penkuhn, H; Shindo, M; Stoces, B

    1964-10-01

    In recent years the development of high-speed digital computers of large capacity has revolutionized the field of reactor shield design. For compact special-purpose reactor shields, Monte Carlo codes in two- and three-dimensional geometries are now available for the proper treatment of both the neutron and gamma-ray problems. Furthermore, techniques are being developed for the theoretical optimization of minimum-weight shield configurations for this type of reactor system. In the design of land-based power reactors, on the other hand, there is a strong incentive to reduce the capital cost of the plant, and economic considerations are also relevant to reactors designed for merchant ship propulsion. In this context simple methods are needed which are economic in their data input and computing time requirements and which, at the same time, are sufficiently accurate for design work. In general the computing time required for Monte Carlo calculations in complex geometry is excessive for routine design calculations, and the capacity of the present codes is inadequate for the proper treatment of large reactor shield systems in three dimensions. In these circumstances a wide range of simpler techniques are currently being employed for design calculations. The methods of calculation for neutrons in reactor shields fall naturally into four categories: multigroup diffusion theory; multigroup diffusion with removal sources; transport codes; and Monte Carlo methods. In spite of the numerous Monte Carlo techniques which are available for penetration and backscattering, serious problems are still encountered in practice with the scattering of gamma rays from walls of buildings which contain critical facilities, and also concrete-lined discharge shafts containing irradiated fuel elements. The considerable volume of data in the unclassified literature on the solution of problems of this type in civil defence work appears not to have been evaluated for reactor shield design. In
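
    Among the simple design methods the report calls for, the point-kernel technique is the classic gamma-ray example: exponential attenuation corrected by a buildup factor for scattered photons. A minimal sketch with a crude linear buildup factor (illustrative data, not from the report):

```python
import numpy as np

def point_kernel_flux(S, mu, r, buildup_a=1.0):
    """Point-kernel flux with a simple linear buildup factor
    B(mu*r) = 1 + a*mu*r standing in for tabulated buildup data:
    phi = S * B * exp(-mu*r) / (4*pi*r^2)."""
    B = 1.0 + buildup_a * mu * r               # crude buildup for scattered photons
    return S * B * np.exp(-mu * r) / (4 * np.pi * r**2)

# ~1 MeV photons through 30 cm of concrete (illustrative mu = 0.15 /cm).
phi = point_kernel_flux(S=1e10, mu=0.15, r=30.0)
```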

  9. New nonlinear methods for linear transport calculations

    International Nuclear Information System (INIS)

    Adams, M.L.

    1993-01-01

    We present a new family of methods for the numerical solution of the linear transport equation. With these methods, an iteration consists of an 'S_N sweep' followed by an 'S_2-like' calculation. We show, by analysis as well as numerical results, that iterative convergence is always rapid. We show that this rapid convergence does not depend on a consistent discretization of the S_2-like equations - they can be discretized independently from the S_N equations. We show further that independent discretizations can offer significant advantages over consistent ones. In particular, we find that in a wide range of problems, an accurate discretization of the S_2-like equation can be combined with a crude discretization of the S_N equations to produce an accurate S_N answer. We demonstrate this by analysis as well as numerical results. (orig.)

  10. TU-AB-BRC-02: Accuracy Evaluation of GPU-Based OpenCL Carbon Monte Carlo Package (goCMC) in Biological Dose and Microdosimetry in Comparison to FLUKA Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Taleei, R; Peeler, C; Qin, N; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: One of the most accurate methods for radiation transport is Monte Carlo (MC) simulation, but long computation times prevent its wide application in the clinic. We have recently developed a fast MC code for carbon ion therapy called GPU-based OpenCL Carbon Monte Carlo (goCMC), and its accuracy in physical dose has been established. Since radiobiology is an indispensable aspect of carbon ion therapy, this study evaluates the accuracy of goCMC in biological dose and microdosimetry by benchmarking it against FLUKA. Methods: We performed simulations of a carbon pencil beam with 150, 300 and 450 MeV/u in a homogeneous water phantom using goCMC and FLUKA. Dose and energy spectra for primary and secondary ions on the central beam axis were recorded. The repair-misrepair-fixation model was employed to calculate relative biological effectiveness (RBE). The Monte Carlo Damage Simulation (MCDS) tool was used to calculate microdosimetry parameters. Results: Physical dose differences on the central axis were <1.6% of the maximum value. Before the Bragg peak, differences in RBE and RBE-weighted dose were <2% and <1%. At the Bragg peak, the differences were 12.5%, caused by a small range discrepancy and the sensitivity of RBE to the beam spectra. Consequently, the RBE-weighted dose difference was 11%. Beyond the peak, RBE differences were <20% and primarily caused by differences in the Helium-4 spectrum. However, the RBE-weighted dose agreed within 1% due to the low physical dose. Differences in microdosimetric quantities were small except at the Bragg peak. The simulation time per source particle with FLUKA was 0.08 sec, while goCMC was approximately 1000 times faster. Conclusion: Physical doses computed by FLUKA and goCMC were in good agreement. Although relatively large RBE differences were observed at and beyond the Bragg peak, the RBE-weighted dose differences were considered to be acceptable.

  11. GPU-based real-time triggering in the NA62 experiment

    CERN Document Server

    Ammendola, R.; Cretaro, P.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P.S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-01-01

    Over the last few years the GPGPU (General-Purpose computing on Graphics Processing Units) paradigm has represented a remarkable development in the world of computing. Computing for High-Energy Physics is no exception: several works have demonstrated the effectiveness of integrating GPU-based systems in the high level triggers of different experiments. On the other hand, the use of GPUs in low level trigger systems, characterized by stringent real-time constraints such as tight time budgets and high throughput, poses several challenges. In this paper we focus on the low level trigger of the CERN NA62 experiment, investigating the use of real-time computing on GPUs in this synchronous system. Our approach aims at harvesting GPU computing power to build, in real time, refined physics-related trigger primitives for the RICH detector, as knowledge of the Cerenkov ring parameters allows stringent conditions for data selection to be built at trigger level. Latencies of all components of the trigger chain have...

  12. GPU-based online track reconstruction for the MuPix-telescope

    Energy Technology Data Exchange (ETDEWEB)

    Grzesik, Carsten [JGU, Mainz (Germany); Collaboration: Mu3e-Collaboration

    2016-07-01

    The MuPix telescope is a beam telescope consisting of High Voltage Monolithic Active Pixel Sensors (HV-MAPS). This type of sensor is going to be used for the Mu3e experiment, which aims to measure the lepton flavor violating decay μ→ eee with an ultimate sensitivity of 10⁻¹⁶. This sensitivity requires a high muon decay rate on the order of 1 GHz, leading to a data rate of about 1 TBit/s for the whole detector. This needs to be reduced by a factor of 1000 using online event selection algorithms on Graphics Processing Units (GPUs) before passing the data to storage. A test setup for the MuPix sensors and parts of the Mu3e tracking detector readout is realized in a four-plane telescope. The telescope can also be used to demonstrate the usability of online track reconstruction using GPUs. As a result, the telescope can provide online information about the efficiency of a device under test or the alignment of the telescope itself. This talk discusses the implementation of the GPU-based track reconstruction and shows some results from recent testbeam campaigns.

  13. Visualizing whole-brain DTI tractography with GPU-based Tuboids and LoD management.

    Science.gov (United States)

    Petrovic, Vid; Fallon, James; Kuester, Falko

    2007-01-01

    Diffusion Tensor Imaging (DTI) of the human brain, coupled with tractography techniques, enables the extraction of large collections of three-dimensional tract pathways per subject. These pathways and pathway bundles represent the connectivity between different brain regions and are critical for the understanding of brain-related diseases. A flexible and efficient GPU-based rendering technique for DTI tractography data is presented that addresses common performance bottlenecks and image-quality issues, allowing interactive render rates to be achieved on commodity hardware. An occlusion query-based pathway LoD management system for streamlines/streamtubes/tuboids is introduced that optimizes input geometry, vertex processing, and fragment processing loads, and helps reduce overdraw. The tuboid, a fully-shaded streamtube impostor constructed entirely on the GPU from streamline vertices, is also introduced. Unlike full streamtubes and other impostor constructs, tuboids require little to no preprocessing or extra space over the original streamline data. The supported fragment processing levels of detail range from texture-based draft shading to full raycast normal computation, Phong shading, environment mapping, and curvature-correct text labeling. The presented text labeling technique for tuboids provides adaptive, aesthetically pleasing labels that appear attached to the surface of the tubes. Furthermore, an occlusion query aggregating and scheduling scheme for tuboids is described that reduces the query overhead. Results for a tractography dataset are presented, and demonstrate that LoD-managed tuboids offer benefits over traditional streamtubes both in performance and appearance.

  14. Computational methods in calculating superconducting current problems

    Science.gov (United States)

    Brown, David John, II

    Various computational problems in treating superconducting currents are examined. First, field inversion in spatial Fourier transform space is reviewed to obtain both one-dimensional transport currents flowing down a long thin tape and a localized two-dimensional current. The problems associated with spatial high-frequency noise, created by finite resolution and experimental equipment, are presented and resolved with a smooth Gaussian cutoff in spatial frequency space. Convergence of the Green's functions for the one-dimensional transport current densities is discussed, and particular attention is devoted to the negative effects of performing discrete Fourier transforms alone on fields asymptotically dropping like 1/r. Results of imaging simulated current densities are favorably compared to the original distributions after the resulting magnetic fields undergo the imaging procedure. The behavior of high-frequency spatial noise and the behavior of the fields with a 1/r asymptote in the imaging procedure in our simulations are analyzed and compared to the treatment of these phenomena in the published literature. Next, we examine the calculation of Mathieu and spheroidal wave functions, solutions to the wave equation in elliptical cylindrical and oblate and prolate spheroidal coordinates, respectively. These functions are also solutions to Schrödinger's equation with certain potential wells, and are useful in solving time-varying superconducting problems. The Mathieu functions are Fourier expanded, and the spheroidal functions expanded in associated Legendre polynomials, to convert the defining differential equations into recursion relations. The infinite set of linear recursion equations is converted to an infinite matrix multiplying a vector of expansion coefficients, thus becoming an eigenvalue problem. The eigenvalue problem is solved with root solvers, and the eigenvector problem is solved using a Jacobi-type iteration method, after preconditioning the

  15. GPU-based RFA simulation for minimally invasive cancer treatment of liver tumours.

    Science.gov (United States)

    Mariappan, Panchatcharam; Weir, Phil; Flanagan, Ronan; Voglreiter, Philip; Alhonnoro, Tuomas; Pollari, Mika; Moche, Michael; Busse, Harald; Futterer, Jurgen; Portugaller, Horst Rupert; Sequeiros, Roberto Blanco; Kolesnik, Marina

    2017-01-01

    Radiofrequency ablation (RFA) is one of the most popular and well-standardized minimally invasive cancer treatments (MICT) for liver tumours, employed where surgical resection is contraindicated. Less-experienced interventional radiologists (IRs) require an appropriate treatment planning tool to help avoid incomplete treatment and thus reduce the risk of tumour recurrence. Although a few tools are available to predict the ablation lesion geometry, the process is computationally expensive. In our implementation, a few patient-specific parameters are also used to improve the accuracy of the lesion prediction. Advanced heterogeneous computing using personal computers, incorporating the graphics processing unit (GPU) and the central processing unit (CPU), is proposed to predict the ablation lesion geometry. The most recent GPU technology is used to accelerate the finite element approximation of Pennes' bioheat equation and a three-state cell model. Patient-specific input parameters are used in the bioheat model to improve the accuracy of the predicted lesion. A fast GPU-based RFA solver is developed to predict the lesion by performing most of the computational tasks on the GPU, while reserving the CPU for concurrent tasks such as lesion extraction based on the heat deposition at each finite element node. The solver takes less than 3 min for a treatment duration of 26 min. When the model receives patient-specific input parameters, the deviation between the real and predicted lesion is below 3 mm. A multi-centre retrospective study indicates that the fast RFA solver is capable of providing the IR with the predicted lesion in the short time period before the intervention begins, when the patient has been clinically prepared for the treatment.
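
    The GPU finite-element solver itself is not shown in the abstract; the structure of Pennes' bioheat equation, rho*c*dT/dt = k*lap(T) + rho_b*c_b*w_b*(T_a - T) + q_RF, is easiest to see in an explicit finite-difference sketch. The tissue constants below are generic illustrations, not the paper's patient-specific parameters.

```python
import numpy as np

def pennes_step(T, dt, dx, k=0.5, rho=1060.0, c=3600.0,
                w_b=0.004, rho_b=1060.0, c_b=3600.0, T_a=37.0, q_rf=0.0):
    """One explicit step of Pennes' bioheat equation on a 3D grid.
    Illustrative constants; boundary temperatures held fixed.
    Stability requires dt < rho*c*dx**2 / (6*k)."""
    lap = (-6 * T
           + np.roll(T, 1, 0) + np.roll(T, -1, 0)
           + np.roll(T, 1, 1) + np.roll(T, -1, 1)
           + np.roll(T, 1, 2) + np.roll(T, -1, 2)) / dx**2
    dT = (k * lap + rho_b * c_b * w_b * (T_a - T) + q_rf) / (rho * c)
    T_new = T + dt * dT
    T_new[[0, -1], :, :] = T[[0, -1], :, :]   # keep boundaries fixed
    T_new[:, [0, -1], :] = T[:, [0, -1], :]   # (np.roll wraps, so the
    T_new[:, :, [0, -1]] = T[:, :, [0, -1]]   # boundary layer is reset)
    return T_new
```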

  16. Computing and physical methods to calculate Pu

    International Nuclear Information System (INIS)

    Mohamed, Ashraf Elsayed Mohamed

    2013-01-01

    The main limitations due to enhanced plutonium content are related to the coolant void effect: as the spectrum becomes faster, the neutron flux in the thermal region tends towards zero and is concentrated in the region from 10 keV to 1 MeV. Thus, all captures by ²⁴⁰Pu and ²⁴²Pu in the thermal and epithermal resonances disappear, and the ²⁴⁰Pu and ²⁴²Pu contributions to the void effect become positive. The higher the Pu content and the poorer the Pu quality, the larger the void effect. Regarding core control in nominal or transient conditions, Pu enrichment leads to a decrease in β_eff and in the efficiency of soluble boron and control rods. Also, the Doppler effect tends to decrease when Pu replaces U, so that in transients the core could diverge again if the control is not effective enough. As for the voiding effect, plutonium degradation and the accumulation of ²⁴⁰Pu and ²⁴²Pu after multiple recycling lead to spectrum hardening and to a decrease in control. One solution would be to use enriched boron in the soluble boron and shutdown rods. In this paper, I discuss advanced computing and physical methods to calculate Pu inside nuclear reactors and gloveboxes, the different solutions that can be used to overcome the difficulties affecting safety parameters and reactor performance, and I analyse the consequences of plutonium management on the whole fuel cycle, such as raw material savings and the fraction of nuclear electric power involved in Pu management. This is done through two types of scenario: one involving a low fraction of the nuclear park dedicated to plutonium management, the other involving dilution of the plutonium across the whole nuclear park. (author)

  17. Overview of multifluid-flow-calculation methods

    International Nuclear Information System (INIS)

    Stewart, H.B.

    1981-01-01

    Two categories of numerical methods which may be useful in multiphase flow research are discussed. The first category includes methods specifically intended for the accurate computation of discontinuities, such as the method of characteristics, the particle-in-cell method, flux-corrected transport, and random choice methods. Methods in this category could be applied to research on rocket exhaust plumes and interior ballistics. The second category includes methods for smooth, subsonic flows, such as fractional step methods, the semi-implicit method, and methods which treat convection implicitly. The subsonic flow methods could be of interest for ice flows.

  18. Method for consequence calculations for severe accidents

    International Nuclear Information System (INIS)

    Nielsen, F.; Thykier-Nielsen, S.

    1987-03-01

    This report was commissioned by the Swedish State Power Board. The report contains a calculation of radiation doses in the surroundings caused by a theoretical core meltdown accident at Forsmark reactor No 3. The assumptions used for the calculations were a 0.06% release of iodine and cesium, corresponding to a 0.1% release through the FILTRA plant at Barsebaeck. The calculations were made by means of the PLUCON4 code. Meteorological data for two years from the Forsmark meteorological tower were analysed to find representative weather situations. As typical weather, Pasquill D was chosen with a wind speed of 5 m/s, and as extreme weather, Pasquill F with a wind speed of 2 m/s. 23 tabs., 36 ills., 21 refs. (author)

  19. Method for consequence calculations for severe accidents

    International Nuclear Information System (INIS)

    Nielsen, F.

    1987-01-01

    With the exception of the part about collective doses, this report was commissioned by the Swedish State Power Board. The part about collective doses was commissioned by the Swedish National Institute of Radiation Protection. The report contains a calculation of radiation doses in the surroundings caused by a theoretical core meltdown accident at one of the Barsebaeck reactors, with filtered venting through the FILTRA plant. The calculations were made by means of the PLUCON4 code. The assumptions used for the calculations were given by the Swedish National Institute of Radiation Protection as follows: Pasquill D with a wind speed of 3 m/s and a mixing layer at 300 m height. Elevation of the release: 100 m, with no energy release. The release starts 12 hours after shut-down and its duration is one hour. The release contains 100% of the noble gases and 0.1% of all other isotopes in an 1800 MW(t) reactor. (author)

  20. Method for consequence calculations for severe accidents

    International Nuclear Information System (INIS)

    Nielsen, F.

    1988-07-01

    This report was commissioned by the Swedish State Power Board. The report contains a calculation of radiation doses in the surroundings caused by a theoretical core meltdown accident at Forsmark reactor No 3. The accident sequence chosen for the calculation was a release caused by total power failure. The calculations were made by means of the PLUCON4 code. Meteorological data for two years from the Forsmark meteorological tower were analysed to find representative weather situations. As typical weather, Pasquill D was chosen with a wind speed of 5 m/s, and as extreme weather, Pasquill F with a wind speed of 2 m/s. 23 tabs., 37 ills., 20 refs. (author)

  1. Methods for thermal reactor lattice calculations

    International Nuclear Information System (INIS)

    Schneider, A.

    1976-12-01

    The American code HAMMER and the British code WIMS, for the analysis of thermal reactor lattices, have been investigated. The primary objective of this investigation was to identify the causes of the discrepancies that exist between the calculated and the experimentally determined reactivity of clean critical experiments. Three phases were undertaken in the research: (a) detailed comparison of the group cross-sections used by the codes; (b) definition of the various approximations incorporated into the codes; (c) comparison of the values of a variety of reaction rates calculated by the two codes. It was concluded that the main cause of the discrepancy between calculations and experiments is data inaccuracies, while the approximations introduced in solving the transport equation are of smaller importance.

  2. Evaluation bases for calculation methods in radioecology

    International Nuclear Information System (INIS)

    Bleck-Neuhaus, J.; Boikat, U.; Franke, B.; Hinrichsen, K.; Hoepfner, U.; Ratka, R.; Steinhilber-Schwab, B.; Teufel, D.; Urbach, M.

    1982-03-01

    The seven contributions in this book deal with the state and problems of radioecology. In particular, they analyse: the propagation of radioactive materials in the atmosphere; the transfer of radioactive substances from the soil into plants and from animal feed into meat; the exposure pathways for, and high-risk groups of, the population; the uncertainties and the bandwidth of the ingestion factor; and the treatment of questions of radioecology in practice. The calculation model is assessed, and the difficulty of laying down data in the general calculation basis is evaluated. (DG) [de

  3. Method for consequence calculations for severe accidents

    International Nuclear Information System (INIS)

    Nielsen, F.

    1988-01-01

    This report was commissioned by the Swedish State Power Board. The report contains a calculation of radiation doses in the surroundings caused by a theoretical core meltdown accident at Ringhals reactor No 3/4. The accident sequence chosen for the calculations was a release caused by total power failure. The calculations were made by means of the PLUCON4 code. A decontamination factor of 500 is used to account for the scrubber effect. Meteorological data for two years from the Ringhals meteorological tower were analysed to find representative weather situations. As typical weather, Pasquill D was chosen with a wind speed of 10 m/s, and as extreme weather, Pasquill E with a wind speed of 2 m/s. 19 refs. (author)

  4. Methods for calculating anisotropic transfer cross sections

    International Nuclear Information System (INIS)

    Cai, Shaohui; Zhang, Yixin.

    1985-01-01

    The Legendre moments of the group transfer cross section, which are widely used in the numerical solution of transport calculations, can be efficiently and accurately constructed from low-order (K = 1-2) successive partial-range moments. This is convenient for the generation of group constants. In addition, a technique to obtain the group-angle correlated transfer cross section without Legendre expansion is presented. (author)
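
    The full-range Legendre moments that the partial-range construction targets are a one-line quadrature, sigma_l = ((2l+1)/2) * integral from -1 to 1 of sigma_s(mu) P_l(mu) dmu. A sketch using Gauss-Legendre quadrature (the partial-range moment construction itself is not reproduced here):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def legendre_moments(sigma_s, l_max, n_quad=64):
    """Full-range Legendre moments of a scattering kernel sigma_s(mu):
    sigma_l = (2l+1)/2 * int_{-1}^{1} sigma_s(mu) P_l(mu) dmu."""
    mu, w = leggauss(n_quad)          # Gauss-Legendre nodes and weights
    vals = sigma_s(mu)
    return np.array([(2 * l + 1) / 2 * np.sum(w * vals * Legendre.basis(l)(mu))
                     for l in range(l_max + 1)])

# Example: a forward-peaked kernel sigma_s(mu) = exp(3*mu), arbitrary units.
moments = legendre_moments(lambda mu: np.exp(3 * mu), l_max=3)
```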

  5. Analyzed method for calculating the distribution of electrostatic field

    International Nuclear Information System (INIS)

    Lai, W.

    1981-01-01

    An analytical method for calculating the distribution of the electrostatic field under any given axial gradient in tandem accelerators is described. This method achieves satisfactory accuracy compared with the results of numerical calculation.

  6. Comprehensive evaluations of cone-beam CT dose in image-guided radiation therapy via GPU-based Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Montanari, Davide; Scolari, Enrica; Silvestri, Chiara; Graves, Yan Jiang; Cervino, Laura [Center for Advanced Radiotherapy Technologies, University of California San Diego, La Jolla, CA 92037-0843 (United States); Yan, Hao; Jiang, Steve B; Jia, Xun [Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390-9315 (United States); Rice, Roger [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA 92037-0843 (United States)

    2014-03-07

    Cone beam CT (CBCT) has been widely used for patient setup in image-guided radiation therapy (IGRT). Radiation dose from CBCT scans has become a clinical concern. The purposes of this study are (1) to commission a graphics processing unit (GPU)-based Monte Carlo (MC) dose calculation package, gCTD, for the Varian On-Board Imaging (OBI) system and test its calculation accuracy, and (2) to quantitatively evaluate CBCT dose from the OBI system in typical IGRT scan protocols. We first conducted dose measurements in a water phantom. X-ray source model parameters used in gCTD were obtained through a commissioning process. gCTD accuracy is demonstrated by comparing calculations with measurements in water and in CTDI phantoms. Twenty-five brain cancer patients are used to study dose in a standard-dose head protocol, and 25 prostate cancer patients are used to study dose in a pelvis protocol and a pelvis spotlight protocol. The mean dose to each organ is calculated. The mean dose to the 2% of voxels receiving the highest dose is also computed to quantify the maximum dose. It is found that the mean dose to an organ varies widely among patients. Moreover, the dose distribution is highly non-homogeneous inside an organ. The maximum dose is found to be 1-3 times higher than the mean dose depending on the organ, and up to eight times higher for the entire body due to the very high dose region in bony structures. High computational efficiency has also been observed in our studies: MC dose calculation time is less than 5 min for a typical case. (paper)

  7. Soil structure interaction calculations: a comparison of methods

    International Nuclear Information System (INIS)

    Wight, L.; Zaslawsky, M.

    1976-01-01

    Two approaches for calculating soil structure interaction (SSI) are compared: finite element and lumped mass. Results indicate that the calculations with the lumped mass method are generally conservative compared to those obtained by the finite element method. They also suggest that a closer agreement between the two sets of calculations is possible, depending on the use of frequency-dependent soil springs and dashpots in the lumped mass calculations. There is a total lack of suitable guidelines for implementing the lumped mass method of calculating SSI, which leads to the conclusion that the finite element method is generally superior for calculative purposes

  8. Soil structure interaction calculations: a comparison of methods

    Energy Technology Data Exchange (ETDEWEB)

    Wight, L.; Zaslawsky, M.

    1976-07-22

    Two approaches for calculating soil structure interaction (SSI) are compared: finite element and lumped mass. Results indicate that the calculations with the lumped mass method are generally conservative compared to those obtained by the finite element method. They also suggest that a closer agreement between the two sets of calculations is possible, depending on the use of frequency-dependent soil springs and dashpots in the lumped mass calculations. There is a total lack of suitable guidelines for implementing the lumped mass method of calculating SSI, which leads to the conclusion that the finite element method is generally superior for calculative purposes.

  9. The multigrid method for reactor calculations

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1991-07-01

    Iterative solutions to linear systems of equations are discussed. The emphasis is on the concepts that affect convergence rates of these solution methods. The multigrid method is described, including the smoothing property, restriction, and prolongation. A simple example is used to illustrate the ideas
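
    The smoothing, restriction, and prolongation ingredients are easiest to see in a concrete two-grid cycle. A minimal sketch for the 1D Poisson problem follows; it is an illustration of the idea, not the report's example.

```python
import numpy as np

def two_grid_poisson(f, u, n_smooth=3):
    """One two-grid cycle for tridiag(-1,2,-1) u = f on n+1 points
    (n even), with u[0] = u[-1] = 0. Damped-Jacobi smoothing,
    full-weighting restriction, linear prolongation, exact coarse solve."""
    def smooth(v, steps):
        for _ in range(steps):  # damped Jacobi, omega = 2/3
            v[1:-1] += (2 / 3) * 0.5 * (f[1:-1] - 2 * v[1:-1] + v[:-2] + v[2:])
        return v

    u = smooth(u.copy(), n_smooth)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:])     # fine residual
    rc = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])      # full weighting
    nc = rc.size                                           # coarse = even fine nodes
    Ac = 0.25 * (2 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1))
    ec = np.linalg.solve(Ac, rc)                           # exact coarse correction
    ecp = np.concatenate(([0.0], ec, [0.0]))               # pad with boundary zeros
    e = np.zeros_like(u)
    e[2:-1:2] = ec                                         # inject at coarse nodes
    e[1::2] = 0.5 * (ecp[:-1] + ecp[1:])                   # interpolate at odd nodes
    return smooth(u + e, n_smooth)
```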

  10. Advanced Computational Methods for Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  11. Criticality calculation by the LTSN method

    International Nuclear Information System (INIS)

    Batistela, Claudia H.F.; Vilhena, Marco T. de; Borges, Volnei

    1997-01-01

    This work evaluates criticality parameters (multiplication factor and critical thickness) by the LTSN method in one-dimensional homogeneous and heterogeneous slabs, considering a one-group model and isotropic scattering. The idea of the LTSN method encompasses the following steps: application of the Laplace transform to a set of discrete ordinates equations, analytical solution of the algebraic linear system for the transformed angular fluxes, and their reconstruction by the Heaviside expansion technique. The novel feature of the proposed method is the determination of the criticality parameters by solving a transcendental equation. Numerical results are reported. 12 refs., 2 tabs

  12. Convergent methods for calculating thermodynamic Green functions

    OpenAIRE

    Bowen, S. P.; Williams, C. D.; Mancini, J. D.

    1984-01-01

    A convergent method of approximating thermodynamic Green functions is outlined briefly. The method constructs a sequence of approximants which converges independently of the strength of the Hamiltonian's coupling constants. Two new concepts associated with the approximants are introduced: the resolving power of the approximation, and conditional creation (annihilation) operators. These ideas are illustrated on an exactly soluble model and a numerical example. A convergent expression for the s...

  13. COSTS CALCULATION OF TARGET COSTING METHOD

    Directory of Open Access Journals (Sweden)

    Sebastian UNGUREANU

    2014-06-01

    Cost information systems play an important role in every organization's decision-making process. An important task of management is ensuring control of operations, processes, sectors, and, not least, costs. Although several control systems (production control, quality control, etc.) contribute to achieving the objectives of an organization, the cost information system is important because it monitors the results of the others. Detailed analysis of costs, production cost calculation, quantification of losses, and estimation of work efficiency provide a solid basis for financial control. Knowledge of costs is a decisive factor in making decisions and planning future activities. Managers are concerned with the costs that will appear in the future, as their level underpins supply and production decisions as well as pricing policy. An important factor is the efficiency of the cost information system, such that the information it provides is useful for decision making and planning.

  14. Homotopy analysis method for neutron diffusion calculations

    International Nuclear Information System (INIS)

    Cavdar, S.

    2009-01-01

    The Homotopy Analysis Method (HAM), proposed in 1992 by Shi Jun Liao and developed since then, is based on a fundamental concept in differential geometry and topology: the homotopy. It has proved useful for problems involving algebraic, linear/non-linear, ordinary/partial differential and differential-integral equations, being an analytic, recursive method that provides a series-sum solution. It has the advantage of offering a certain freedom in the choice of its arguments, such as the initial guess, the auxiliary linear operator and the convergence control parameter, and it allows us to effectively control the rate and region of convergence of the series solution. In this work, HAM is applied to the fixed-source neutron diffusion equation. This is part of our research motivated by the question of whether there exist methods for solving the neutron diffusion equation that yield straightforward expressions yet provide solutions of reasonable accuracy, so that we could avoid analytic methods that are widely used but either fail to solve the problem or provide solutions through many intricate expressions likely to contain mistakes, as well as numerical methods that, by their very nature or intricate mathematical foundations, require powerful computational resources and advanced programming skills. A Fourier basis is employed for expressing the initial guess, owing to the structure of the problem and its boundary conditions. We present the results in comparison with the other widely used methods of Adomian decomposition and separation of variables.

  15. Calculation of radon concentration in water by toluene extraction method

    Energy Technology Data Exchange (ETDEWEB)

    Saito, Masaaki [Tokyo Metropolitan Isotope Research Center (Japan)

    1997-02-01

    The Noguchi method and the Horiuchi method have been used to calculate the radon concentration in water. Both methods, in their original forms, have two problems: the calculated concentration changes with the extraction temperature owing to incorrect solubility data, and the calculated concentrations are smaller than the correct values because the radon calculation equation does not conform to gas-liquid equilibrium theory. However, both problems are solved by improving the radon equation. I present the Noguchi-Saito equation and the constant B of the Horiuchi-Saito equation. Results calculated by the improved method show an error of about 10%. (S.Y.)

  16. SPATIOTEMPORAL VISUALIZATION OF TIME-SERIES SATELLITE-DERIVED CO2 FLUX DATA USING VOLUME RENDERING AND GPU-BASED INTERPOLATION ON A CLOUD-DRIVEN DIGITAL EARTH

    Directory of Open Access Journals (Sweden)

    S. Wu

    2017-10-01

    The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.

  17. GPU-based streaming architectures for fast cone-beam CT image reconstruction and demons deformable registration

    International Nuclear Information System (INIS)

    Sharp, G C; Kandasamy, N; Singh, H; Folkert, M

    2007-01-01

    This paper shows how to significantly accelerate cone-beam CT reconstruction and 3D deformable image registration using the stream-processing model. We describe data-parallel designs for the Feldkamp, Davis and Kress (FDK) reconstruction algorithm and the demons deformable registration algorithm, suitable for use on a commodity graphics processing unit. The streaming versions of these algorithms are implemented using the Brook programming environment and executed on an NVidia 8800 GPU. Performance results using CT data of a preserved swine lung indicate that the GPU-based implementations of the FDK and demons algorithms achieve a substantial speedup: up to 80 times for FDK and 70 times for demons when compared to an optimized reference implementation on a 2.8 GHz Intel processor. In addition, the accuracy of the GPU-based implementations was found to be excellent. Compared with CPU-based implementations, the RMS differences were less than 0.1 Hounsfield unit for reconstruction and less than 0.1 mm for deformable registration.
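
    What makes demons registration stream-friendly is that each voxel's displacement update is independent, driven only by local intensity differences and gradients. A minimal sketch of one iteration of the classic Thirion update with Gaussian regularization (a schematic, not the paper's Brook implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_iteration(fixed, moving, disp, sigma=1.5, eps=1e-6):
    """One demons step: warp the moving image with the current field,
    compute the force (m - f) * grad(f) / (|grad f|^2 + (m - f)^2),
    accumulate it, and smooth the field (Gaussian regularization)."""
    grid = np.indices(fixed.shape, dtype=float)
    warped = map_coordinates(moving, grid + disp, order=1)  # apply field
    diff = warped - fixed
    grad = np.array(np.gradient(fixed))
    denom = (grad ** 2).sum(axis=0) + diff ** 2 + eps
    disp = disp + diff * grad / denom                       # per-voxel update
    return np.array([gaussian_filter(d, sigma) for d in disp])
```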

  18. Transportation channels calculation method in MATLAB

    International Nuclear Information System (INIS)

    Averyanov, G.P.; Budkin, V.A.; Dmitrieva, V.V.; Osadchuk, I.O.; Bashmakov, Yu.A.

    2014-01-01

    Output devices and charged particle transport channels are necessary components of any modern particle accelerator. They differ both in size and in the choice of focusing elements, depending on the accelerator type and its purpose. A package of transport line design codes for magnet optical channels in the MATLAB environment is presented in this report. Charged particle dynamics in a focusing channel can be studied easily by means of the matrix technique. MATLAB is convenient because its native data objects are matrices, and it allows the software package to be built on a modular principle. The program blocks are small and easy to use; they can be executed separately or together, and the set of codes has a user-friendly interface. The transport channel is constructed from focusing lenses (doublets and triplets). The main magneto-optical channel parameters are the total length, the lens positions and parameters, and the parameters of the output beam in phase space (channel acceptance, beam emittance, beam transverse dimensions, particle divergence and image stigmaticity). The choice of channel operating parameters is based on satisfying mutually competing demands, and the calculation of the channel parameters is therefore carried out using optimization search techniques.
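
    The matrix technique the report relies on is compact: each optical element maps the trace-space vector (x, x') linearly, and a channel is the ordered product of its element matrices. A minimal Python sketch (the report's package is MATLAB; the thin-lens matrices and geometry below are purely illustrative):

```python
import numpy as np

def drift(L):
    """Drift space of length L (m) acting on (x, x')."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_lens(f):
    """Thin-lens approximation of a quadrupole with focal length f (m);
    a negative f defocuses (the other transverse plane of a real quad)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Doublet channel; the rightmost factor acts first:
# drift, focusing lens, drift, defocusing lens, drift.
M = drift(1.0) @ thin_lens(-0.8) @ drift(0.5) @ thin_lens(0.8) @ drift(1.0)
x_out = M @ np.array([1e-3, 0.0])   # transport a ray with a 1 mm offset
```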

  19. Nodal methods in numerical reactor calculations

    International Nuclear Information System (INIS)

    Hennart, J.P.; Valle, E. del

    2004-01-01

    The present work describes the antecedents, developments and applications started in 1972 with Prof. Hennart, who was invited to join the staff of the Nuclear Engineering Department at the School of Physics and Mathematics of the National Polytechnic Institute. From that time up to 1981, several master theses based on classical finite element methods were developed, with applications in point kinetics and in the steady-state as well as time-dependent multigroup diffusion equations. After this period the emphasis moved to nodal finite elements in 1, 2 and 3D Cartesian geometries. All the theses were devoted to the numerical solution of the neutron multigroup diffusion and transport equations, a few of them including time dependence, most of them related to steady-state diffusion equations. The main contributions were as follows: high order nodal schemes for the primal and mixed forms of the diffusion equations, block-centered finite-difference methods, post-processing, composite nodal finite elements for hexagons, and weakly and strongly discontinuous schemes for the transport equation. Some of these are now being used by several researchers involved in nuclear fuel management. (Author)

  20. Nodal methods in numerical reactor calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hennart, J P [UNAM, IIMAS, A.P. 20-726, 01000 Mexico D.F. (Mexico); Valle, E del [National Polytechnic Institute, School of Physics and Mathematics, Department of Nuclear Engineering, Mexico, D.F. (Mexico)

    2004-07-01

    The present work describes the antecedents, developments and applications started in 1972 with Prof. Hennart, who was invited to join the staff of the Nuclear Engineering Department at the School of Physics and Mathematics of the National Polytechnic Institute. From that time up to 1981, several master theses based on classical finite element methods were developed, with applications in point kinetics and in the steady-state as well as time-dependent multigroup diffusion equations. After this period the emphasis moved to nodal finite elements in 1, 2 and 3D Cartesian geometries. All the theses were devoted to the numerical solution of the neutron multigroup diffusion and transport equations, a few of them including time dependence, most of them related to steady-state diffusion equations. The main contributions were as follows: high order nodal schemes for the primal and mixed forms of the diffusion equations, block-centered finite-difference methods, post-processing, composite nodal finite elements for hexagons, and weakly and strongly discontinuous schemes for the transport equation. Some of these are now being used by several researchers involved in nuclear fuel management. (Author)

  1. New efficient methods for calculating watersheds

    International Nuclear Information System (INIS)

    Fehr, E; Andrade, J S Jr; Herrmann, H J; Kadau, D; Moukarzel, C F; Da Cunha, S D; Da Silva, L R; Oliveira, E A

    2009-01-01

    We present an advanced algorithm for the determination of watershed lines on digital elevation models (DEMs) which is based on the iterative application of invasion percolation (IP). The main advantage of our method over previously proposed ones is that it has a sub-linear time-complexity. This enables us to process systems comprising up to 10^8 sites in a few CPU seconds. Using our algorithm we are able to demonstrate, convincingly and with high accuracy, the fractal character of watershed lines. We find the fractal dimension of watersheds to be D_f = 1.211 ± 0.001 for artificial landscapes, D_f = 1.10 ± 0.01 for the Alps and D_f = 1.11 ± 0.01 for the Himalayas.

  2. Quantum Monte Carlo diagonalization method as a variational calculation

    International Nuclear Information System (INIS)

    Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio.

    1997-01-01

    A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary field Monte Carlo technique and diagonalization method. This method overcomes the limitation of conventional shell model diagonalization and greatly extends the feasibility of shell model calculations with realistic interactions for spectroscopic studies of nuclear structure. (author)

  3. Assessment of chemical exposures: calculation methods for environmental professionals

    National Research Council Canada - National Science Library

    Daugherty, Jack E

    1997-01-01

    ... on by scientists, businessmen, and policymakers. Assessment of Chemical Exposures: Calculation Methods for Environmental Professionals addresses the expanding scope of exposure assessments in both the workplace and environment...

  4. Comparison of different dose calculation methods for irregular photon fields

    International Nuclear Information System (INIS)

    Zakaria, G.A.; Schuette, W.

    2000-01-01

    In this work, four calculation methods (the Wrede method, the Clarkson method of sector integration, the beam-zone method of Quast and the pencil-beam method of Ahnesjoe) are introduced to calculate point doses in different irregular photon fields. The calculations cover a typical mantle field, an inverted Y-field and different blocked fields for 4 and 10 MV photon energies. The results are compared to those of measurements in a water phantom. The Clarkson and pencil-beam methods proved to be of comparable accuracy. Both methods are distinguished by minimal deviations and are applied in our clinical routine work. The Wrede and beam-zone methods deliver useful results on the central axis but show larger deviations for points off the central axis. (orig.) [de]
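
    The Clarkson sector-integration principle referenced above is easy to sketch: the irregular field is divided into equal angular sectors around the calculation point, and tabulated scatter-air ratios are averaged over the sector radii. The snippet below only illustrates that principle; sarLookup and sectorRadius are hypothetical placeholders for measured data and the field outline, not a clinical implementation:

    ```cuda
    #include <cmath>

    // Placeholder SAR table: linear in r for illustration only; real data
    // comes from measured scatter-air ratios.
    static float sarLookup(float r) { return 0.02f * r; }

    // Placeholder field boundary: a circular 5 cm field around the point.
    static float sectorRadius(float /*angle*/) { return 5.0f; }

    // Average the scatter-air ratio over nSectors equal angular sectors.
    float clarksonScatter(int nSectors)
    {
        float sum = 0.0f;
        for (int k = 0; k < nSectors; ++k) {
            float angle = 2.0f * float(M_PI) * k / nSectors;
            sum += sarLookup(sectorRadius(angle)); // SAR at this sector's edge
        }
        return sum / nSectors;                     // mean scatter contribution
    }
    ```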

  5. Core burn-up calculation method of JRR-3

    International Nuclear Information System (INIS)

    Kato, Tomoaki; Yamashita, Kiyonobu

    2007-01-01

    The SRAC code system is utilized for core burn-up calculation of JRR-3. The SRAC code system includes calculation modules such as PIJ, PIJBURN, ANISN and CITATION for generating effective cross sections, and modules such as COREBN and HIST for core burn-up calculation. As for the calculation method for JRR-3, PIJBURN (the cell burn-up calculation module) is used to generate effective cross sections of the fuel region at each burn-up step. PIJ, ANISN and CITATION are used to generate effective cross sections of non-fuel regions. COREBN and HIST are used for core burn-up calculation and fuel management. This paper presents details of the JRR-3 core burn-up calculation. FNCA participating countries are expected to carry out core burn-up calculations of their domestic research reactors with the SRAC code system by utilizing the information in this paper. (author)

  6. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy

    Science.gov (United States)

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-01

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
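
    For reference, the radial dose function g_L(r) and anisotropy function F(r,θ) validated above enter the dose rate through the standard TG-43 formalism, with air-kerma strength S_K, dose-rate constant Λ, geometry function G_L, and reference point r_0 = 1 cm, θ_0 = π/2:

    ```latex
    \dot{D}(r,\theta) = S_K \,\Lambda\,
      \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta)
    ```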

  7. NaNet: a low-latency NIC enabling GPU-based, real-time low level trigger systems

    International Nuclear Information System (INIS)

    Ammendola, Roberto; Biagioni, Andrea; Frezza, Ottorino; Cicero, Francesca Lo; Lonardo, Alessandro; Paolucci, Pier Stanislao; Rossetti, Davide; Simula, Francesco; Tosoratto, Laura; Vicini, Piero; Fantechi, Riccardo; Lamanna, Gianluca; Pantaleo, Felice; Piandani, Roberto; Sozzi, Marco; Pontisso, Luca

    2014-01-01

    We implemented the NaNet FPGA-based PCIe Gen2 GbE/APElink NIC, featuring GPUDirect RDMA capabilities and UDP protocol management offloading. NaNet is able to receive a UDP input data stream from its GbE interface and redirect it, without any intermediate buffering or CPU intervention, to the memory of a Fermi/Kepler GPU hosted on the same PCIe bus, provided that the two devices share the same upstream root complex. Synthetic benchmarks for latency and bandwidth are presented. We describe how NaNet can be employed in the prototype of the GPU-based RICH low-level trigger processor of the NA62 CERN experiment, to implement the data link between the TEL62 readout boards and the low level trigger processor. Results for the throughput and latency of the integrated system are presented and discussed.

  8. NaNet: a low-latency NIC enabling GPU-based, real-time low level trigger systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, Roberto [INFN, Rome – Tor Vergata (Italy); Biagioni, Andrea; Frezza, Ottorino; Cicero, Francesca Lo; Lonardo, Alessandro; Paolucci, Pier Stanislao; Rossetti, Davide; Simula, Francesco; Tosoratto, Laura; Vicini, Piero [INFN, Rome – Sapienza (Italy); Fantechi, Riccardo [CERN, Geneve (Switzerland); Lamanna, Gianluca; Pantaleo, Felice; Piandani, Roberto; Sozzi, Marco [INFN, Pisa (Italy); Pontisso, Luca [University, Rome (Italy)

    2014-06-11

    We implemented the NaNet FPGA-based PCIe Gen2 GbE/APElink NIC, featuring GPUDirect RDMA capabilities and UDP protocol management offloading. NaNet is able to receive a UDP input data stream from its GbE interface and redirect it, without any intermediate buffering or CPU intervention, to the memory of a Fermi/Kepler GPU hosted on the same PCIe bus, provided that the two devices share the same upstream root complex. Synthetic benchmarks for latency and bandwidth are presented. We describe how NaNet can be employed in the prototype of the GPU-based RICH low-level trigger processor of the NA62 CERN experiment, to implement the data link between the TEL62 readout boards and the low level trigger processor. Results for the throughput and latency of the integrated system are presented and discussed.

  9. NaNet: a low-latency NIC enabling GPU-based, real-time low level trigger systems

    CERN Document Server

    INSPIRE-00646837; Biagioni, Andrea; Fantechi, Riccardo; Frezza, Ottorino; Lamanna, Gianluca; Lo Cicero, Francesca; Lonardo, Alessandro; Paolucci, Pier Stanislao; Pantaleo, Felice; Piandani, Roberto; Pontisso, Luca; Rossetti, Davide; Simula, Francesco; Sozzi, Marco; Tosoratto, Laura; Vicini, Piero

    2014-01-01

    We implemented the NaNet FPGA-based PCIe Gen2 GbE/APElink NIC, featuring GPUDirect RDMA capabilities and UDP protocol management offloading. NaNet is able to receive a UDP input data stream from its GbE interface and redirect it, without any intermediate buffering or CPU intervention, to the memory of a Fermi/Kepler GPU hosted on the same PCIe bus, provided that the two devices share the same upstream root complex. Synthetic benchmarks for latency and bandwidth are presented. We describe how NaNet can be employed in the prototype of the GPU-based RICH low-level trigger processor of the NA62 CERN experiment, to implement the data link between the TEL62 readout boards and the low level trigger processor. Results for the throughput and latency of the integrated system are presented and discussed.

  10. Comments on Simplified Calculation Method for Fire Exposed Concrete Columns

    DEFF Research Database (Denmark)

    Hertz, Kristian Dahl

    1998-01-01

    The author has developed new simplified calculation methods for fire exposed columns, which are found in ENV 1992-1-2 chapter 4.3 and in the proposal for the Danish code of practice DS411 chapter 9. In the present supporting document the methods are derived, and 50 eccentrically loaded fire exposed columns are calculated and compared to results of full-scale tests. Furthermore, 500 columns are calculated in order to present each test result in relation to a variation of the calculated time of fire resistance.

  11. Current trends in methods for neutron diffusion calculations

    International Nuclear Information System (INIS)

    Adams, C.H.

    1977-01-01

    Current work and trends in the application of neutron diffusion theory to reactor design and analysis are reviewed. Specific topics covered include finite-difference methods, synthesis methods, nodal calculations, finite elements and perturbation theory.

  12. Evolution of calculation methods taking into account severe accidents

    International Nuclear Information System (INIS)

    L'Homme, A.; Courtaud, J.M.

    1990-12-01

    During the first decade of PWR operation in France, the calculation methods used for design and operation improved considerably. This paper gives a general analysis of the evolution of calculation methods in parallel with the evolution of the safety approach concerning PWRs. A comprehensive presentation of the principal calculation tools as applied during the past decade follows. An effort is made to predict the improvements expected in the near future.

  13. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation.

    Science.gov (United States)

    Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B

    2010-04-01

    Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved in our fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computation efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
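
    A common form of the energy functional described above is the following, with x the reconstructed volume, P the projection operator, y the measured projections and μ the regularization weight (the paper's exact notation may differ):

    ```latex
    E(x) = \frac{1}{2}\,\lVert P x - y \rVert_2^2
         + \mu \sum_{j} \lvert \nabla x_j \rvert
    ```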

  14. Method of characteristics - Based sensitivity calculations for international PWR benchmark

    International Nuclear Information System (INIS)

    Suslov, I. R.; Tormyshev, I. V.; Komlev, O. G.

    2013-01-01

    A method to calculate the sensitivity of fractional-linear neutron flux functionals to transport equation coefficients is proposed. An implementation of the method on the basis of the MOC code MCCG3D is developed. Sensitivity calculations of the fission intensity for the international PWR benchmark are performed. (authors)

  15. Manual method for dose calculation in gynecologic brachytherapy

    International Nuclear Information System (INIS)

    Vianello, Elizabeth A.; Almeida, Carlos E. de; Biaggio, Maria F. de

    1998-01-01

    This paper describes a manual method for dose calculation in brachytherapy of gynecological tumors, which allows the calculation of the doses at any plane or point of clinical interest. The method uses basic principles of vector algebra and the orthogonal simulation films taken of the patient with the applicators and dummy sources in place. The results obtained with the method were compared with the values calculated with the treatment planning system Theraplan, and the agreement was better than 5% in most cases. The critical points associated with the final accuracy of the proposed method are related to the quality of the image and the appropriate selection of the magnification factors. This method is strongly recommended to radiation oncology centers where no treatment planning systems are available and dose calculations are done manually. (author)

  16. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors

  17. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

  18. A New Thermodynamic Calculation Method for Binary Alloys: Part I: Statistical Calculation of Excess Functions

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An improved form of the calculation formula for the activities of the components in binary liquid and solid alloys has been derived, based on the free volume theory considering excess entropy and on Miedema's model for calculating the formation heat of binary alloys. A calculation method for the excess thermodynamic functions of binary alloys, with formulas for the integral molar excess properties and partial molar excess properties of ordered or disordered solid binary alloys, has been developed. The calculated results are in good agreement with the experimental values.

  19. Calculating the albedo characteristics by the method of transmission probabilities

    International Nuclear Information System (INIS)

    Lukhvich, A.A.; Rakhno, I.L.; Rubin, I.E.

    1983-01-01

    The possibility of using the method of transmission probabilities for calculating the albedo characteristics of homogeneous and heterogeneous zones is studied. The transmission probabilities method is a numerical method for solving the transport equation in its integral form. All calculations have been conducted in a one-group approximation for planes and rods with different optical thicknesses and capture-to-scattering ratios. The calculations for plane and cylindrical geometries have shown the possibility of using the numerical method of transmission probabilities for calculating the albedo characteristics of homogeneous and heterogeneous zones with high accuracy. In this case the computer time consumption is minimal, even for cylindrical geometry, if interpolation of the characteristics is used for first-flight neutrons.

  20. A finite element method for SSI time history calculation

    International Nuclear Information System (INIS)

    Ni, X.; Gantenbein, F.; Petit, M.

    1989-01-01

    The method which is proposed is based on a finite element modelization of the soil and the structure and a time history calculation. It has been developed for plane and axisymmetric geometries. The principle of this method is presented, then applications are given, first to a linear calculation for which results are compared to those obtained by standard methods. Then results for a nonlinear behavior are described.

  1. Acceleration methods for assembly-level transport calculations

    International Nuclear Information System (INIS)

    Adams, Marvin L.; Ramone, Gilles

    1995-01-01

    A family of acceleration methods for the iterations that arise in assembly-level transport calculations is presented. A single iteration in these schemes consists of a transport sweep followed by a low-order calculation which is itself a simplified transport problem. It is shown that a previously proposed method fitting this description is unstable in two and three dimensions. A family of methods is presented and some members are shown to be unconditionally stable. (author). 8 refs, 4 figs, 4 tabs

  2. A finite element method for SSI time history calculations

    International Nuclear Information System (INIS)

    Ni, X.M.; Gantenbein, F.; Petit, M.

    1989-01-01

    The method which is proposed is based on a finite element modelization of the soil and the structure and a time history calculation. It has been developed for plane and axisymmetric geometries. The principle of this method is presented, then applications are given, first to a linear calculation for which results are compared to those obtained by standard methods. Then results for a nonlinear behavior are described.

  3. GPU-BASED MONTE CARLO DUST RADIATIVE TRANSFER SCHEME APPLIED TO ACTIVE GALACTIC NUCLEI

    International Nuclear Information System (INIS)

    Heymann, Frank; Siebenmorgen, Ralf

    2012-01-01

    A three-dimensional parallel Monte Carlo (MC) dust radiative transfer code is presented. To overcome the huge computing-time requirements of MC treatments, the computational power of vectorized hardware is used, utilizing either multi-core computer power or graphics processing units. The approach is a self-consistent way to solve the radiative transfer equation in arbitrary dust configurations. The code calculates the equilibrium temperatures of two populations of large grains and stochastically heated polycyclic aromatic hydrocarbons. Anisotropic scattering is treated applying the Henyey-Greenstein phase function. The spectral energy distribution (SED) of the object is derived at low spatial resolution by a photon counting procedure and at high spatial resolution by a vectorized ray tracer. The latter allows computation of high signal-to-noise images of the objects at arbitrary frequencies and viewing angles. We test the robustness of our approach against other radiative transfer codes. The SED and dust temperatures of one- and two-dimensional benchmarks are reproduced at high precision. The parallelization capability of various MC algorithms is analyzed and included in our treatment. We utilize the Lucy algorithm for the optically thin case where the Poisson noise is high, the iteration-free Bjorkman and Wood method to reduce the calculation time, and the Fleck and Canfield diffusion approximation for extremely optically thick cells. The code is applied to model the appearance of active galactic nuclei (AGNs) at optical and infrared wavelengths. The AGN torus is clumpy and includes fluffy composite grains of various sizes made up of silicates and carbon. The dependence of the SED on the number of clumps in the torus and the viewing angle is studied. The appearance of the 10 μm silicate features in absorption or emission is discussed. The SED of the radio-loud quasar 3C 249.1 is fit by the AGN model and a cirrus component to account for the far-infrared emission.

  4. Comparison of calculational methods for EBT reactor nucleonics

    International Nuclear Information System (INIS)

    Henninger, R.J.; Seed, T.J.; Soran, P.D.; Dudziak, D.J.

    1980-01-01

    Nucleonic calculations for a preliminary conceptual design of the first wall/blanket/shield/coil assembly for an EBT reactor are described. Two-dimensional Monte Carlo, and one- and two-dimensional discrete-ordinates calculations are compared. Good agreement for the calculated values of tritium breeding and nuclear heating is seen. We find that the three methods are all useful and complementary as a design of this type evolves

  5. The development and validation of control rod calculation methods

    International Nuclear Information System (INIS)

    Rowlands, J.L.; Sweet, D.W.; Franklin, B.M.

    1979-01-01

    Fission rate distributions have been measured in the zero power critical facility, ZEBRA, for a series of eight different arrays of boron carbide control rods. Diffusion theory calculations have been compared with these measurements. The normalised fission rates differ by up to about 30% in some regions between the different arrays, and these differences are well predicted by the calculations. A development has been made to a method used to produce homogenised cross sections for lattice regions containing control rods. Calculations show that the method also reproduces the reaction rate within the rod and the fission rate dip at the surface of the rod in satisfactory agreement with the more accurate calculations which represent the fine structure of the rod. A comparison between diffusion theory and transport theory calculations of control rod reactivity worths in the CDFR shows that for the standard design method the finite mesh approximation and the difference between diffusion theory and transport theory (the transport correction) tend to cancel and result in corrections to be applied to the standard mesh diffusion theory calculations of about ±2% or less. This result applies for mesh centred finite difference diffusion theory codes and for the arrays of natural boron carbide control rods for which the calculations were made. Improvements have also been made to the effective diffusion coefficients used in diffusion theory calculations for control rod followers and these give satisfactory agreement with transport theory calculations. (U.K.)

  6. Quantum mechanical methods for calculation of force constants

    International Nuclear Information System (INIS)

    Mullally, D.J.

    1985-01-01

    The focus of this thesis is upon the calculation of force constants; i.e., the second derivatives of the potential energy with respect to nuclear displacements. This information is useful for the calculation of molecular vibrational modes and frequencies. In addition, it may be used for the location and characterization of equilibrium and transition state geometries. The methods presented may also be applied to the calculation of electric polarizabilities and infrared and Raman vibrational intensities. Two approaches to this problem are studied and evaluated: finite difference methods and analytical techniques. The most suitable method depends on the type and level of theory used to calculate the electronic wave function. Double point displacement finite differencing is often required for accurate calculation of the force constant matrix. These calculations require energy and gradient calculations on both sides of the geometry of interest. In order to speed up these calculations, a novel method is presented that uses geometry dependent information about the wavefunction. A detailed derivation for the analytical evaluation of force constants with a complete active space multiconfiguration self consistent field wave function is presented

  7. New method for calculation of integral characteristics of thermal plumes

    DEFF Research Database (Denmark)

    Zukowska, Daria; Popiolek, Zbigniew; Melikov, Arsen Krikor

    2008-01-01

    A method for calculation of integral characteristics of thermal plumes is proposed. The method allows for determination of the integral parameters of plumes based on speed measurements performed with omnidirectional low velocity thermoanemometers. The method includes a procedure for calculation of the directional velocity (upward component of the mean velocity). The method is applied for determination of the characteristics of an asymmetric thermal plume generated by a sitting person. The method was validated in full-scale experiments in a climatic chamber with a thermal manikin as a simulator of a sitting person.

  8. The pseudo-harmonics method applied to depletion calculation

    International Nuclear Information System (INIS)

    Silva, F.C. da; Amaral, J.A.C.; Thome, Z.D.

    1989-01-01

    In this paper, a new method for performing depletion calculations, based on the use of the pseudo-harmonics perturbation method, was developed. The fuel burnup was considered as a global perturbation and the multigroup diffusion equations were rewritten in such a way as to treat the soluble boron concentration as the eigenvalue. By doing this, the critical boron concentration can be obtained by a perturbation method. A test of the new method was performed for an H2O-cooled, D2O-moderated reactor. Comparison with direct calculation showed that this method is very accurate and efficient. (author) [pt]

  9. A GPU-based incompressible Navier-Stokes solver on moving overset grids

    Science.gov (United States)

    Chandar, Dominic D. J.; Sitaraman, Jayanarayanan; Mavriplis, Dimitri J.

    2013-07-01

    In pursuit of obtaining high fidelity solutions to the fluid flow equations in a short span of time, graphics processing units (GPUs) which were originally intended for gaming applications are currently being used to accelerate computational fluid dynamics (CFD) codes. With a high peak throughput of about 1 TFLOPS on a PC, GPUs seem to be favourable for many high-resolution computations. One such computation that involves a lot of number crunching is computing time accurate flow solutions past moving bodies. The aim of the present paper is thus to discuss the development of a flow solver on unstructured and overset grids and its implementation on GPUs. In its present form, the flow solver solves the incompressible fluid flow equations on unstructured/hybrid/overset grids using a fully implicit projection method. The resulting discretised equations are solved using a matrix-free Krylov solver using several GPU kernels such as gradient, Laplacian and reduction. Some of the simple arithmetic vector calculations are implemented using the CU++ approach (CU++: An Object Oriented Framework for Computational Fluid Dynamics Applications using Graphics Processing Units, Journal of Supercomputing, 2013, doi:10.1007/s11227-013-0985-9), where GPU kernels are automatically generated at compile time. Results are presented for two- and three-dimensional computations on static and moving grids.
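
    One of the building-block kernels named above, the Laplacian, can be sketched for a structured grid as follows. Uniform spacing and simple Cartesian indexing are simplifying assumptions for illustration; the solver itself applies the operator matrix-free on unstructured and overset meshes:

    ```cuda
    // Apply a 7-point 3D Laplacian to interior points of field p; invH2 is
    // 1/h^2 for uniform grid spacing h. One thread per grid point.
    __global__ void laplacian(const float* p, float* out,
                              int nx, int ny, int nz, float invH2)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        int j = blockIdx.y * blockDim.y + threadIdx.y;
        int k = blockIdx.z * blockDim.z + threadIdx.z;
        if (i <= 0 || j <= 0 || k <= 0 ||
            i >= nx - 1 || j >= ny - 1 || k >= nz - 1)
            return;                                  // interior points only

        int id = (k * ny + j) * nx + i;
        out[id] = invH2 * (p[id - 1] + p[id + 1]          // x neighbours
                         + p[id - nx] + p[id + nx]        // y neighbours
                         + p[id - nx * ny] + p[id + nx * ny] // z neighbours
                         - 6.0f * p[id]);
    }
    ```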

  10. Statistics of Monte Carlo methods used in radiation transport calculation

    International Nuclear Information System (INIS)

    Datta, D.

    2009-01-01

    Radiation transport calculations can be carried out using either deterministic or statistical methods. Radiation transport calculation based on statistical methods is the basic theme of the Monte Carlo methods. The aim of this lecture is to describe the fundamental statistics required to build the foundations of the Monte Carlo technique for radiation transport calculation. The lecture note is organized as follows. Section (1) introduces basic Monte Carlo and its classification within the field. Section (2) describes random sampling methods, a key component of Monte Carlo radiation transport calculation. Section (3) provides the statistical uncertainty of Monte Carlo estimates, and Section (4) describes briefly the importance of variance reduction techniques when sampling particles such as photons or neutrons in the process of radiation transport.

  11. Comparison study on cell calculation method of fast reactor

    International Nuclear Information System (INIS)

    Chiba, Gou

    2002-10-01

    Effective cross sections obtained by cell calculations are used in core calculations in current deterministic methods. Therefore, it is important to calculate the effective cross sections accurately, and several methods have been proposed. In this study, some of the methods are compared to each other using a continuous energy Monte Carlo method as a reference. The results show that the table look-up method used at the Japan Nuclear Cycle Development Institute (JNC) sometimes differs by over 10% in effective microscopic cross sections and is inferior to the sub-group method. The problem was overcome by introducing a new nuclear constant system developed at JNC, in which an ultra-fine energy group library is used. The system can also deal with resonance interaction effects between nuclides, which other methods are not able to consider. In addition, a new method was proposed to calculate effective cross sections accurately for power reactor fuel subassemblies, where the new nuclear constant system cannot be applied. This method uses the sub-group method and the ultra-fine energy group collision probability method. The microscopic effective cross sections obtained by this method agree with the reference values within 5% difference. (author)

  12. Methods for tornado frequency calculation of nuclear power plant

    International Nuclear Information System (INIS)

    Liu Haibin; Li Lin

    2012-01-01

    To support probabilistic safety assessment of tornado attack events at nuclear power plants, a method to calculate the tornado frequency of a nuclear power plant is introduced, based on the references HAD 101/10 and NUREG/CR-4839. The method can take into account the historical tornado frequency of the plant area, the construction dimensions, the variation of intensity along the tornado path, the area distribution and so on, and can calculate the frequency of tornadoes of different scales. (authors)

  13. Internal quality control of RIA with Tonks error calculation method

    International Nuclear Information System (INIS)

    Chen Xiaodong

    1996-01-01

    Based on the methodological features of RIA, an internal quality control chart using the Tonks error calculation method, suitable for RIA, is designed. The quality control chart defines the allowable error from the normal reference range. The method is simple to perform and easy to interpret at a glance. Taking the determination of T3 and T4 as an example, the calculation of the allowable error, the drawing of the quality control chart and the analysis of the results are introduced.

  14. Improvement of methods for calculation of sound insulation in buildings

    OpenAIRE

    Mašović, Draško B.

    2015-01-01

    The main subject of this work is the methods for calculation of sound insulation based on the classical model of sound propagation in buildings, together with single-number rating of sound insulation. The aim of the work is an inspection of the possibilities for improving the standard methods for quantification and calculation of sound insulation, in order to achieve higher accuracy of the obtained numerical values and better correlation with the subjective impression of acoustic comfort in buildings. Proc...

  15. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology.

    Science.gov (United States)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-02-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D Registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512×512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.
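
    At its core, DRR generation is a per-pixel line integral of attenuation through the CT volume. The deliberately simplified CUDA sketch below uses parallel rays along one axis; the algorithms described in the paper use perspective geometry, wobbled splatting and sub-sampled raycasting instead, and all names here are illustrative:

    ```cuda
    // One thread per detector pixel: integrate the volume along z and scale
    // by the sampling step length to approximate the attenuation line integral.
    __global__ void drrParallelRays(const float* vol, float* drr,
                                    int nx, int ny, int nz, float stepLen)
    {
        int u = blockIdx.x * blockDim.x + threadIdx.x; // detector column
        int v = blockIdx.y * blockDim.y + threadIdx.y; // detector row
        if (u >= nx || v >= ny) return;

        float sum = 0.0f;
        for (int k = 0; k < nz; ++k)                   // march through volume
            sum += vol[(k * ny + v) * nx + u];
        drr[v * nx + u] = sum * stepLen;               // discretized integral
    }
    ```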

  16. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph [Medical University of Vienna (Austria). Center of Medical Physics and Biomedical Engineering] [and others

    2012-07-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D Registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference X-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 x 512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. (orig.)

  17. Development of 3-D FBR heterogeneous core calculation method based on characteristics method

    International Nuclear Information System (INIS)

    Takeda, Toshikazu; Maruyama, Manabu; Hamada, Yuzuru; Nishi, Hiroshi; Ishibashi, Junichi; Kitano, Akihiro

    2002-01-01

    A new 3-D transport calculation method taking into account the heterogeneity of fuel assemblies has been developed by combining the characteristics method and the nodal transport method. In the axial direction the nodal transport method is applied, and the characteristics method is applied to take into account the radial heterogeneity of fuel assemblies. The numerical calculations have been performed to verify 2-D radial calculations of FBR assemblies and partial core calculations. Results are compared with the reference Monte-Carlo calculations. A good agreement has been achieved. It is shown that the present method has an advantage in calculating reaction rates in a small region

  18. 3D electric field calculation with surface charge method

    International Nuclear Information System (INIS)

    Yamada, S.

    1992-01-01

    This paper describes an outline and some examples of three-dimensional electric field calculations with a computer code developed at NIRS. In the code, a surface charge method is adopted because of its simplicity in the mesh establishing procedure. The charge density in a triangular mesh is assumed to vary as a linear function of position. The electric field distribution is calculated for a pair of drift tubes with focusing fingers on the opposing surfaces. The field distribution in an acceleration gap is analyzed with a Fourier-Bessel series expansion method. The calculated results excellently reproduce the measured data obtained with a magnetic model. (author)

  19. Validation of calculational methods for nuclear criticality safety - approved 1975

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    The American National Standard for Nuclear Criticality Safety in Operations with Fissionable Materials Outside Reactors, N16.1-1975, states in 4.2.5: In the absence of directly applicable experimental measurements, the limits may be derived from calculations made by a method shown to be valid by comparison with experimental data, provided sufficient allowances are made for uncertainties in the data and in the calculations. There are many methods of calculation which vary widely in basis and form. Each has its place in the broad spectrum of problems encountered in the nuclear criticality safety field; however, the general procedure to be followed in establishing validity is common to all. The standard states the requirements for establishing the validity and area(s) of applicability of any calculational method used in assessing nuclear criticality safety

  20. Comparison of electrical conductivity calculation methods for natural waters

    Science.gov (United States)

    McCleskey, R. Blaine; Nordstrom, D. Kirk; Ryan, Joseph N.

    2012-01-01

    The capability of eleven methods to calculate the electrical conductivity of a wide range of natural waters from their chemical composition was investigated. A brief summary of each method is presented including equations to calculate the conductivities of individual ions, the ions incorporated, and the method's limitations. The ability of each method to reliably predict the conductivity depends on the ions included, effective accounting of ion pairing, and the accuracy of the equation used to estimate the ionic conductivities. The performances of the methods were evaluated by calculating the conductivity of 33 environmentally important electrolyte solutions, 41 U.S. Geological Survey standard reference water samples, and 1593 natural water samples. The natural waters tested include acid mine waters, geothermal waters, seawater, dilute mountain waters, and river water impacted by municipal waste water. The three most recent conductivity methods predict the conductivity of natural waters better than other methods. Two of the recent methods can be used to reliably calculate the conductivity for samples with pH values greater than about 3 and temperatures between 0 and 40°C. One method is applicable to a variety of natural water types with a range of pH from 1 to 10, temperature from 0 to 95°C, and ionic strength up to 1 m.

  1. Calculation method for gamma dose rates from Gaussian puffs

    Energy Technology Data Exchange (ETDEWEB)

    Thykier-Nielsen, S; Deme, S; Lang, E

    1995-06-01

    The Lagrangian puff models are widely used for calculation of the dispersion of releases to the atmosphere. Basic output from such models is the concentration of material in the air and on the ground. The simplest method for calculation of the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method is, however, only applicable for puffs with large dispersion parameters, i.e. for receptors far away from the release point. The exact calculation of the cloud dose using a volume integral requires large computer time, usually exceeding what is available for real time calculations. The volume integral for gamma doses can be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but usually the accuracy is poor because only a few of the relevant parameters are considered. A multi-parameter method for calculation of gamma doses is described here. This method uses precalculated values of the gamma dose rates as a function of E_γ, σ_y, the asymmetry factor σ_y/σ_z, the height of the puff center H and the distance from the puff center R_xy. To accelerate the calculations, the released energy for each significant radionuclide in each energy group has been calculated and tabulated. Based on the precalculated values and a suitable interpolation procedure, the calculation of gamma doses needs only a short computing time and is almost independent of the number of radionuclides considered. (au) 2 tabs., 15 ills., 12 refs.

  2. Calculation method for gamma dose rates from Gaussian puffs

    International Nuclear Information System (INIS)

    Thykier-Nielsen, S.; Deme, S.; Lang, E.

    1995-06-01

    The Lagrangian puff models are widely used for calculation of the dispersion of releases to the atmosphere. Basic output from such models is the concentration of material in the air and on the ground. The simplest method for calculation of the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method is, however, only applicable for puffs with large dispersion parameters, i.e. for receptors far away from the release point. The exact calculation of the cloud dose using a volume integral requires large computer time, usually exceeding what is available for real time calculations. The volume integral for gamma doses can be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but usually the accuracy is poor because only a few of the relevant parameters are considered. A multi-parameter method for calculation of gamma doses is described here. This method uses precalculated values of the gamma dose rates as a function of E_γ, σ_y, the asymmetry factor σ_y/σ_z, the height of the puff center H and the distance from the puff center R_xy. To accelerate the calculations, the released energy for each significant radionuclide in each energy group has been calculated and tabulated. Based on the precalculated values and a suitable interpolation procedure, the calculation of gamma doses needs only a short computing time and is almost independent of the number of radionuclides considered. (au) 2 tabs., 15 ills., 12 refs.

  3. Classical Methods and Calculation Algorithms for Determining Lime Requirements

    Directory of Open Access Journals (Sweden)

    André Guarçoni

    Full Text Available ABSTRACT The methods developed for determination of lime requirements (LR) are based on widely accepted principles. However, the formulas used for calculation have evolved little over recent decades, and in some cases there are indications of their inadequacy. The aim of this study was to compare the lime requirements calculated by three classic formulas and three algorithms, defining those most appropriate for supplying Ca and Mg to coffee plants with the smallest possibility of causing overliming. The database used contained 600 soil samples, which were collected in coffee plantings. The LR was estimated by the methods of base saturation, neutralization of Al3+, and elevation of Ca2+ and Mg2+ contents (two formulas), and by the three calculation algorithms. Averages of the lime requirements were compared, determining the frequency distribution of the 600 lime requirements (LR) estimated through each calculation method. In soils with low cation exchange capacity at pH 7, the base saturation method may fail to adequately supply the plants with Ca and Mg in many situations, while the method of Al3+ neutralization and elevation of Ca2+ and Mg2+ contents can result in the calculation of application rates that will increase the pH above the suitable range. Among the methods studied for calculating lime requirements, the algorithm that predicts reaching a defined base saturation, with adequate Ca and Mg supply and the maximum application rate limited to the H+Al value, proved to be the most efficient calculation method, and it can be recommended for use under numerous crop conditions.
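
    For orientation, a common form of the base saturation formula discussed above (conventions vary between recommendation systems, so treat this as indicative rather than the authors' exact expression) is

    ```latex
    LR\ (\mathrm{t\ ha^{-1}}) \;=\; \frac{(V_2 - V_1)\, T}{100}
      \times \frac{100}{PRNT}
    ```

    where V_1 and V_2 are the current and target base saturations (%), T is the CEC at pH 7 (cmol_c/dm^3), and PRNT is the relative neutralizing power of the liming material.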

  4. Calculation method for gamma-dose rates from spherical puffs

    International Nuclear Information System (INIS)

    Thykier-Nielsen, S.; Deme, S.; Lang, E.

    1993-05-01

    The Lagrangian puff-models are widely used for calculation of the dispersion of atmospheric releases. Basic outputs from such models are concentrations of material in the air and on the ground. The simplest method for calculation of the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method is, however, only applicable for points far away from the release point. The exact calculation of the cloud dose using the volume integral requires significant computer time. The volume integral for the gamma dose can be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but usually the accuracy is poor due to the fact that the same correction factors are used for all isotopes. The authors describe a more elaborate correction method. This method uses precalculated values of the gamma-dose rate as a function of the puff dispersion parameter (δ_p) and the distance from the puff centre for four energy groups. The released energy for each radionuclide in each energy group has been calculated and tabulated. Based on these tables and a suitable interpolation procedure, the calculation of gamma doses takes very little time and is almost independent of the number of radionuclides. (au) (7 tabs., 7 ills., 12 refs.)

  5. Hybrid SN Laplace Transform Method For Slab Lattice Calculations

    International Nuclear Information System (INIS)

    Segatto, Cynthia F.; Vilhena, Marco T.; Zani, Jose H.; Barros, Ricardo C.

    2008-01-01

    In typical lattice cells where a highly absorbing, small fuel element is embedded in the moderator, a large weakly absorbing medium, high-order transport methods become unnecessary. In this paper we describe a hybrid discrete ordinates (S_N) method for slab lattice calculations. This hybrid S_N method combines the convenience of a low-order S_N method in the moderator with a high-order S_N method in the fuel. We use special fuel-moderator interface conditions based on an approximate angular flux interpolation analytical method and the Laplace transform (LTS_N) numerical method to calculate the neutron flux distribution and the thermal disadvantage factor. We present numerical results for a range of typical model problems. (authors)

  6. Application of nonparametric statistic method for DNBR limit calculation

    International Nuclear Information System (INIS)

    Dong Bo; Kuang Bo; Zhu Xuenong

    2013-01-01

    Background: Nonparametric statistical methods are statistical inference methods that do not depend on a particular distribution; they calculate tolerance limits at a given probability level and confidence through sampling. The DNBR margin is an important parameter of NPP (nuclear power plant) design, which represents the safety level of the NPP. Purpose and Methods: This paper uses a nonparametric statistical method based on the Wilks formula and the VIPER-01 subchannel analysis code to calculate the DNBR design limits (DL) of a 300 MW NPP during the complete loss of flow accident, and compares them with the DNBR DL obtained by means of ITDP to determine the DNBR margin. Results: The results indicate that this method can gain 2.96% more DNBR margin than that obtained by the ITDP methodology. Conclusions: Because of the reduced conservatism in the analysis process, the nonparametric statistical method can provide a greater DNBR margin, and the increased DNBR margin is beneficial for upgrading the core refueling scheme. (authors)
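
    The Wilks formula underlying this approach fixes the sample size: for a one-sided, first-order tolerance limit with coverage γ at confidence β, the smallest sample size N satisfies 1 − γ^N ≥ β. A short host-side sketch (the classic 95%/95% case yields the familiar N = 59):

    ```cuda
    // Smallest N such that the maximum of N independent runs bounds the
    // gamma-quantile with confidence beta: solve gamma^N <= 1 - beta.
    #include <cmath>
    #include <cstdio>

    int wilksSampleSize(double gamma, double beta)
    {
        return (int)std::ceil(std::log(1.0 - beta) / std::log(gamma));
    }

    int main()
    {
        // 95% coverage at 95% confidence, as used in DNBR analyses: N = 59.
        printf("N = %d\n", wilksSampleSize(0.95, 0.95));
        return 0;
    }
    ```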

  7. Use of the Local Variation Methods for Nuclear Design Calculations

    International Nuclear Information System (INIS)

    Zhukov, A.I.

    2006-01-01

    A new method is presented for solving the steady-state equations that describe neutron diffusion. The method is based on a variational principle for the steady-state diffusion equations and a direct search for the minimum of the corresponding functional. Benchmark problem calculations of fuel assembly power show ∼ 2% relative accuracy.

  8. Optimization method for quantitative calculation of clay minerals in soil

    Indian Academy of Sciences (India)

    However, no reliable method for quantitative analysis of clay minerals has been established so far. In this study, an attempt was made to propose an optimization method for the quantitative ... 2. Basic principles. The mineralogical constitution of soil is rather complex. ... K2O, MgO, and TFe as variables for the calculation.

  9. Efficient Calculation of Near Fields in the FDTD Method

    DEFF Research Database (Denmark)

    Franek, Ondrej

    2011-01-01

    When calculating frequency-domain near fields by the FDTD method, almost 50% reduction in memory and CPU operations can be achieved if only E-fields are stored during the main time-stepping loop and H-fields are computed later. An improved method of obtaining the H-fields from Faraday's law is presented...

  10. Linear augmented plane wave method for self-consistent calculations

    International Nuclear Information System (INIS)

    Takeda, T.; Kuebler, J.

    1979-01-01

    O.K. Andersen has recently introduced a linear augmented plane wave method (LAPW) for the calculation of electronic structure that was shown to be computationally fast. A more general formulation of an LAPW method is presented here. It makes use of a freely disposable number of eigenfunctions of the radial Schroedinger equation. These eigenfunctions can be selected in a self-consistent way. The present formulation also results in a computationally fast method. It is shown that Andersen's LAPW is obtained in a special limit from the present formulation. Self-consistent test calculations for copper show the present method to be remarkably accurate. As an application, scalar-relativistic self-consistent calculations are presented for the band structure of FCC lanthanum. (author)

  11. Comparison between ASHRAE and ISO thermal transmittance calculation methods

    DEFF Research Database (Denmark)

    Blanusa, Petar; Goss, William P.; Roth, Hartwig

    2007-01-01

    ... is proportional to the glazing/frame sightline distance, which is also proportional to the total glazing spacer length. An example calculation of the overall heat transfer and thermal transmittance (U-value or U-factor) using the two methods for a thermally broken, aluminum-framed slider window is presented. The fenestration thermal transmittance analyses presented in this paper show that small differences exist between the calculated thermal transmittance values produced by the ISO and ASHRAE methods. The results also show that the overall thermal transmittance difference between the two methodologies decreases as the total window area (glazing plus frame) increases; thus, the resulting difference in thermal transmittance values for the two methods is negligible for larger windows. This paper also shows algebraically that the differences between the ISO and ASHRAE methods turn out to be due to the way ...

  12. Introduction to quantum calculation methods in high resolution NMR

    International Nuclear Information System (INIS)

    Goldman, M.

    1996-01-01

    New techniques, such as polarization transfer, multiple-quantum coherence and double Fourier transformation, appeared some fifteen years ago. These techniques constitute a considerable advance in NMR: they allow the study of more complex molecules than was previously possible. With these advances, however, the classical description of NMR is no longer sufficient to understand precisely the physical phenomena these methods induce, and one must resort to quantum calculation methods. The aim of this work is to present these calculation methods. After some reminders of quantum mechanics, the author describes NMR in terms of the density matrix, reviews the main methods of double Fourier transformation and then gives the principle of the relaxation-time calculation. (O.M.)

  13. Pressure algorithm for elliptic flow calculations with the PDF method

    Science.gov (United States)

    Anand, M. S.; Pope, S. B.; Mongia, H. C.

    1991-01-01

    An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were in conjunction with conventional finite volume based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.

  14. Comparison of calculational methods for liquid metal reactor shields

    International Nuclear Information System (INIS)

    Carter, L.L.; Moore, F.S.; Morford, R.J.; Mann, F.M.

    1985-09-01

    A one-dimensional comparison is made between Monte Carlo (MCNP), discrete ordinates (ANISN), and diffusion theory (MlDX) calculations of neutron flux and radiation damage from the core of the Fast Flux Test Facility (FFTF) out to the reactor vessel. Diffusion theory was found to be reasonably accurate for the calculation of both total flux and radiation damage. However, for large distances from the core, the calculated flux at very high energies is low by an order of magnitude or more when diffusion theory is used. Particular emphasis was placed in this study on the generation of multitable cross sections for use in discrete ordinates codes that are self-shielded consistently with the self-shielding employed in the generation of cross sections for use with diffusion theory. The Monte Carlo calculation, with a pointwise representation of the cross sections, was used as the benchmark for determining the limitations of the other two calculational methods. 12 refs., 33 figs

  15. Temperature Calculation of Annular Fuel Pellet by Finite Difference Method

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yong Sik; Bang, Je Geon; Kim, Dae Ho; Kim, Sun Ki; Lim, Ik Sung; Song, Kun Woo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2009-10-15

    KAERI has started an innovative fuel development project for applying dual-cooled annular fuel to existing PWR reactors. In fuel design, fuel temperature is the most important factor affecting nuclear fuel integrity and safety. Many models and methodologies that can calculate the temperature distribution in a fuel pellet have been proposed. However, due to the geometrical characteristics and cooling-condition differences between the existing solid-type fuel and dual-cooled annular fuel, current fuel temperature calculation models cannot be applied directly. Therefore, a new heat conduction model of the fuel pellet was established. In general, the fuel pellet temperature is calculated by the FDM (Finite Difference Method) or FEM (Finite Element Method), because the temperature dependency of the fuel thermal conductivity and the spatial dependency of heat generation in the pellet due to self-shielding should be considered. In our study, the FDM was adopted because of its accuracy and short calculation time.
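
    As a concrete illustration of the FDM approach (not the authors' actual model), the sketch below solves steady-state radial heat conduction in an annular pellet with temperature-dependent conductivity and fixed temperatures on both cooled surfaces; the geometry, source strength and the k(T) correlation are rough placeholder values.

        import numpy as np

        # Steady-state radial conduction in an annular pellet:
        #   d/dr( k(T) * r * dT/dr ) = -q''' * r,
        # with fixed surface temperatures on the inner and outer coolant
        # sides (dual cooling). All numbers below are rough placeholders.
        r_in, r_out, n = 0.0045, 0.0070, 101      # radii [m], mesh points
        q3 = 4.0e8                                 # volumetric heat rate [W/m^3]
        T_inner, T_outer = 600.0, 600.0            # surface temperatures [K]

        r = np.linspace(r_in, r_out, n)
        dr = r[1] - r[0]

        def k_fuel(T):
            """Placeholder UO2-like conductivity, decreasing with temperature."""
            return 1.0 / (0.0375 + 2.165e-4 * T)   # [W/m-K]

        T = np.full(n, 600.0)
        for _ in range(50):                        # Picard iteration on k(T)
            A = np.zeros((n, n))
            b = np.zeros(n)
            for i in range(1, n - 1):
                kw = k_fuel(0.5 * (T[i - 1] + T[i])) * (r[i] - 0.5 * dr)
                ke = k_fuel(0.5 * (T[i] + T[i + 1])) * (r[i] + 0.5 * dr)
                A[i, i - 1], A[i, i], A[i, i + 1] = kw, -(kw + ke), ke
                b[i] = -q3 * r[i] * dr**2
            A[0, 0] = A[-1, -1] = 1.0              # Dirichlet boundary rows
            b[0], b[-1] = T_inner, T_outer
            T_new = np.linalg.solve(A, b)
            if np.max(np.abs(T_new - T)) < 1e-6:
                break
            T = T_new

        print(f"peak pellet temperature: {T_new.max():.1f} K")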

  16. On the resolvents methods in quantum perturbation calculations

    International Nuclear Information System (INIS)

    Burzynski, A.

    1979-01-01

    This paper gives a systematic review of resolvent methods in quantum perturbation calculations. Special attention is paid to the case of a Hamiltonian with a discrete spectrum (in the literature this is the least considered case). The calculation of quantum transitions using the resolvent formalism, of quantum transitions between states from particular subspaces, and of energy-level shifts is presented. The main ideas of the stationary perturbation theory developed by Lippmann and Schwinger are considered as well. (author)

  17. The analytic method for calculating the control rod worth

    International Nuclear Information System (INIS)

    Kim, Han Gon; Lee, Byeong Ho; Chang, Soon Heung

    1989-01-01

    We calculated the control rod worth in this paper. To avoid complexity, we did not consider burnable poisons or soluble boron, and the system was localized within one assembly. The control rod was treated not as an absorber but as another boundary; thus all of the group constants were unchanged before and after control rod insertion. We also discuss the method for calculating the reactivity of the whole core.

  18. The method of calculation of pipelines laid on supports

    OpenAIRE

    Benin D.M.

    2017-01-01

    This article focuses on the laying of pipelines on supports and the method of calculating the vertical and horizontal loads acting on a support. Such pipelines include water supply systems, heat networks, oil and mazut lines, condensate lines, steam lines, etc. The article describes the calculation of supports for pipelines laid above ground, in crowded channels, in premises, on racks, in impassable channels, on hanging supports, etc. The paper explores recommendations for placement of the s...

  19. Method for dose calculation in intracavitary irradiation of endometrical carcinoma

    International Nuclear Information System (INIS)

    Zevrieva, I.F.; Ivashchenko, N.T.; Musapirova, N.A.; Fel'dman, S.Z.; Sajbekov, T.S.

    1979-01-01

    A method of dose calculation was elaborated for the conditions of intracavitary gamma therapy of endometrial carcinoma using spherical and linear 60Co sources. Calculations of dose rates for different numbers and orientations of spherical radiation sources and for different planes were made with the aid of a BEhSM-4M computer. A dosimetric study of the dose fields was made using a phantom imitating the real conditions of irradiation. Discrepancies between experimental and calculated values are within the limits of the experimental accuracy.

  20. Nuclear data and multigroup methods in fast reactor calculations

    International Nuclear Information System (INIS)

    Gur, Y.

    1975-03-01

    The work deals with fast reactor multigroup calculations and the efficient treatment of the basic nuclear data that serve as raw material for the calculations. Its purpose is twofold: to build a computer code system that handles a large, detailed library of basic neutron cross section data (such as ENDF/B-III) and yields a compact set of multigroup cross sections for reactor calculations; and to use the code system for comparative analysis of different libraries, in order to discover basic uncertainties that still exist in the measurement of neutron cross sections and to determine their influence upon uncertainties in nuclear calculations. A program named NANICK, written in two versions, is presented. The first version handles the American basic data library, ENDF/B-III, while the second handles the German basic data library, KEDAK. The mathematical algorithm is identical in both versions; only the file management differs. The program calculates infinitely dilute multigroup cross sections and scattering matrices. It is complemented by the program NASIF, which calculates shielding factors from resonance parameters; different versions of NASIF were written to handle ENDF/B-III or KEDAK. New methods for evaluating the long-term behavior of the neutron flux in reactor calculations, as well as its fine structure, are described, and an efficient calculation of the shielding factors from resonance parameters is offered. (B.G.)

  1. Effectiveness of the current method of calculating member states' contributions

    CERN Document Server

    2002-01-01

    At its Two-hundred and eighty-sixth Meeting of 19 September 2001, the Finance Committee requested the Management to re-assess the effectiveness of the current method of forecasting Net National Income (NNI) for the purposes of calculating the Member States' contributions by comparing the results of the current weighted average method with a method based on a simple arithmetic average. The Finance Committee is invited to take note of this information.

  2. METHOD OF CALCULATING THE OPTIMAL HEAT EMISSION GEOTHERMAL WELLS

    Directory of Open Access Journals (Sweden)

    A. I. Akaev

    2015-01-01

    This paper presents a simplified method of calculating the optimal regimes of fountain (free-flow) and pumped exploitation of geothermal wells, reducing scaling and corrosion during operation. Comparative characteristics are given that quantify the heat recovered from the formation for these methods of operation under the same wellhead pressure. The problem is solved by a graphic-analytical method based on a pressure balance in the well with the heat pump.

  3. Efficient methods for time-absorption (α) eigenvalue calculations

    International Nuclear Information System (INIS)

    Hill, T.R.

    1983-01-01

    The time-absorption eigenvalue (α) calculation is one of the options found in most discrete-ordinates transport codes. Several methods have been developed at Los Alamos to improve the efficiency of this calculation. Two procedures, based on coarse-mesh rebalance, for accelerating the α-eigenvalue search are derived. A hybrid scheme that automatically chooses the more effective rebalance method is described. The α-rebalance scheme permits some simple modifications to the iteration strategy that eliminate many unnecessary calculations required in the standard search procedure. For several fast supercritical test problems, these methods resulted in convergence with one-fifth the number of iterations required by the conventional eigenvalue search procedure.
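
    The outer structure of such an α search is easy to illustrate on a toy model (a generic sketch, not the Los Alamos rebalance procedures): add an α/v time-absorption term to the loss operator and search for the α at which the multiplication factor k(α) equals one. All two-group numbers below are invented.

        import numpy as np

        M = np.array([[0.12, 0.0], [-0.05, 0.10]])   # losses/scatter [1/cm]
        F = np.array([[0.010, 0.24], [0.0, 0.0]])    # nu-Sigma_f [1/cm]
        v = np.array([1.0e9, 2.2e5])                 # group speeds [cm/s]

        def k_of_alpha(alpha):
            """Largest eigenvalue of (M + diag(alpha/v))^-1 F."""
            A = M + np.diag(alpha / v)
            return np.max(np.abs(np.linalg.eigvals(np.linalg.solve(A, F))))

        # Secant iteration on f(alpha) = k(alpha) - 1; the toy system is
        # supercritical (k(0) > 1), so the root is a positive alpha.
        a0, a1 = 0.0, 1.0e3
        f0, f1 = k_of_alpha(a0) - 1.0, k_of_alpha(a1) - 1.0
        for _ in range(50):
            a2 = a1 - f1 * (a1 - a0) / (f1 - f0)
            a0, f0 = a1, f1
            a1, f1 = a2, k_of_alpha(a2) - 1.0
            if abs(f1) < 1e-10:
                break
        print(f"alpha = {a1:.6g} 1/s with k(alpha) = {f1 + 1.0:.8f}")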

  4. Correlation expansion: a powerful alternative multiple scattering calculation method

    International Nuclear Information System (INIS)

    Zhao Haifeng; Wu Ziyu; Sebilleau, Didier

    2008-01-01

    We introduce a powerful alternative expansion method to perform multiple scattering calculations. In contrast to standard MS series expansion, where the scattering contributions are grouped in terms of scattering order and may diverge in the low energy region, this expansion, called correlation expansion, partitions the scattering process into contributions from different small atom groups and converges at all energies. It converges faster than MS series expansion when the latter is convergent. Furthermore, it takes less memory than the full MS method so it can be used in the near edge region without any divergence problem, even for large clusters. The correlation expansion framework we derive here is very general and can serve to calculate all the elements of the scattering path operator matrix. Photoelectron diffraction calculations in a cluster containing 23 atoms are presented to test the method and compare it to full MS and standard MS series expansion

  5. Development and application of advanced methods for electronic structure calculations

    DEFF Research Database (Denmark)

    Schmidt, Per Simmendefeldt

    This thesis relates to improvements and applications of beyond-DFT methods for electronic structure calculations that are applied in computational material science. The improvements are of both technical and principal character. The well-known GW approximation is optimized for accurate calculations of electronic excitations in two-dimensional materials by exploiting exact limits of the screened Coulomb potential. This approach reduces the computational time by an order of magnitude, enabling large scale applications. The GW method is further improved by including so-called vertex corrections. This turns ... For this reason, part of this thesis relates to developing and applying a new method for constructing so-called norm-conserving PAW setups that are applicable to GW calculations, by using a genetic algorithm. The effect of applying the new setups significantly affects the absolute band positions, both for bulk ...

  6. RCS Leak Rate Calculation with High Order Least Squares Method

    International Nuclear Information System (INIS)

    Lee, Jeong Hun; Kang, Young Kyu; Kim, Yang Ki

    2010-01-01

    As part of the action items for the application of Leak Before Break (LBB), the RCS Leak Rate Calculation Program was upgraded in Kori units 3 and 4. For real-time monitoring by operators, periodic calculation is needed, and a corresponding noise-reduction scheme is used. This kind of study arose in Korea, where upgraded real-time RCS leak rate calculation programs are in use in UCN units 3 and 4 and YGN units 1 and 2. For reduction of the noise in the signals, the linear regression method was used in those programs. Linear regression is a powerful noise-reduction method, but the system is not static, with some alternative flow paths, and this produces mixed trend patterns in the input signal values. Under these conditions, the trend of the signal and the average of the linear regression do not follow entirely the same pattern. In this study, a high-order least squares method is used to follow the trend of the signal, and the order of the calculation is rearranged. The resulting calculation yields a reasonable trend, and the procedure is physically consistent.
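
    The underlying idea is generic enough to sketch (an illustration of high-order least squares smoothing, not the plant program itself): a higher-order polynomial fitted by least squares tracks a drifting signal more faithfully than a straight-line fit over the same window. All signal numbers below are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 60.0, 121)                # minutes
        level = 100.0 - 0.02 * t**2 + 0.3 * t          # made-up tank level [%]
        noisy = level + rng.normal(0.0, 0.4, t.size)   # measurement noise

        def fitted_leak_rate(t, y, order):
            """Least squares polynomial fit; leak rate = -d(level)/dt at window end."""
            coef = np.polyfit(t, y, order)
            deriv = np.polyder(np.poly1d(coef))
            return -deriv(t[-1])

        for order in (1, 2, 3):
            print(f"order {order}: leak rate = {fitted_leak_rate(t, noisy, order):.3f} %/min")
        # The true rate at t = 60 is 2.1 %/min; the first-order (linear) fit
        # averages the trend over the window and underestimates it.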

  7. Cluster monte carlo method for nuclear criticality safety calculation

    International Nuclear Information System (INIS)

    Pei Lucheng

    1984-01-01

    One of the most important applications of the Monte Carlo method is the calculation of nuclear criticality safety. The fair source game problem was presented at almost the same time as the Monte Carlo method was first applied to nuclear criticality safety calculations. The source iteration cost may be reduced as much as possible, or source iteration may not be needed at all. All such problems belong to the fair source game problems, among which the optimal source game requires no source iteration. Although the single-neutron Monte Carlo method solves the problem without source iteration, it still has an apparent shortcoming: it solves the problem without source iteration only in the asymptotic sense. In this work, a new Monte Carlo method, called the cluster Monte Carlo method, is given to solve the problem further.

  8. Fast calculation method for computer-generated cylindrical holograms.

    Science.gov (United States)

    Yamaguchi, Takeshi; Fujii, Tomohiko; Yoshikawa, Hiroshi

    2008-07-01

    Since a general flat hologram has a limited viewable area, we usually cannot see the other side of a reconstructed object. There are some holograms that can solve this problem; a cylindrical hologram is well known to be viewable over 360 deg. Most cylindrical holograms are optical holograms, and there are few reports of computer-generated cylindrical holograms. The reason is that the spatial resolution of output devices is not great enough: one has to make a large hologram or use a small object to satisfy the sampling theorem. In addition, in calculating the large fringe pattern, the amount of computation increases in proportion to the hologram size. Therefore, we propose what we believe to be a new calculation method for fast calculation. We then print these fringes with our prototype fringe printer. As a result, we obtain a good reconstructed image from a computer-generated cylindrical hologram.

  9. Statistic method of research reactors maximum permissible power calculation

    International Nuclear Information System (INIS)

    Grosheva, N.A.; Kirsanov, G.A.; Konoplev, K.A.; Chmshkyan, D.V.

    1998-01-01

    A technique is presented for calculating the maximum permissible power of a research reactor, at which the probability of a thermal-process accident does not exceed a specified value. A statistical method is used for the calculations. The determining function related to reactor safety is taken to be a known function of the reactor power and of many statistically independent variables, including the reactor process parameters, the geometrical characteristics of the reactor core and fuel elements, and random factors connected with the specific features of the reactor. Heat flux density or temperature is taken as the limiting factor. The program implementation of the method is briefly described. As an example, the results of calculating the PIK reactor margin coefficients for different probabilities of a thermal-process accident are considered. It is shown that the probability of an accident with fuel element melting in the hot zone is lower than 10⁻⁸ per year at the reactor rated power [ru
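
    The statistical scheme lends itself to a small Monte Carlo illustration (placeholder numbers, not the PIK analysis): sample the independent parameters, evaluate the limiting function, and scan the power level until the estimated exceedance probability reaches the target.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200_000
        q_limit = 3.0e6                              # limiting heat flux [W/m^2]

        def exceedance_probability(power_mw):
            """P(peak heat flux > limit) from sampled independent uncertainties."""
            hot_channel = rng.normal(2.0, 0.10, n)   # hot-channel factor
            area = rng.normal(60.0, 1.5, n)          # heat transfer area [m^2]
            flux = power_mw * 1.0e6 * hot_channel / area
            return float(np.mean(flux > q_limit))

        # Scan the power; the maximum permissible power is the largest value
        # whose estimated accident probability stays below a chosen target
        # (here a placeholder of 1e-3 per demand).
        for p in range(60, 100, 5):
            print(f"{p} MW: P(exceed limit) = {exceedance_probability(p):.2e}")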

  10. A New Method to Calculate Internal Rate of Return

    Directory of Open Access Journals (Sweden)

    azadeh zandi

    2015-09-01

    A number of methods have been developed to choose the best capital investment projects, such as net present value, internal rate of return, etc. The internal rate of return method is probably the most popular method among managers and investors, but despite this popularity it has serious drawbacks and limitations. After decades of efforts made by economists and experts to improve the method and its shortcomings, Magni in 2010 revealed a new approach that solves most of the problems of the internal rate of return method. This paper presents a new method that originates from Magni's approach but has much simpler calculations and can resolve all the drawbacks of the internal rate of return method.
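
    For reference, the classical IRR that these approaches set out to improve is the discount rate at which the net present value of the cash flows vanishes; a minimal bisection sketch of that baseline definition (generic, not Magni's method or the authors' new one):

        def npv(rate, cash_flows):
            """Net present value of cash_flows[0..n] discounted at `rate`."""
            return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

        def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
            """Classical IRR by bisection; assumes one sign change of NPV."""
            f_lo = npv(lo, cash_flows)
            for _ in range(200):
                mid = 0.5 * (lo + hi)
                f_mid = npv(mid, cash_flows)
                if abs(f_mid) < tol:
                    return mid
                if (f_lo < 0) == (f_mid < 0):
                    lo, f_lo = mid, f_mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        print(irr([-1000, 300, 400, 500, 200]))   # about 0.153 (15.3% per period)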

  11. Report: GPU Based Massive Parallel Kawasaki Kinetics In Monte Carlo Modelling of Lipid Microdomains

    OpenAIRE

    Lis, M.; Pintal, L.

    2013-01-01

    This paper introduces a novel method of simulating lipid biomembranes, based on the Metropolis-Hastings algorithm and the computational power of graphics processing units. The method gives up to a 55-fold computational speed-up in comparison to classical computations. An extensive study of algorithm correctness is provided. An analysis of the simulation results, and of results obtained with classical simulation methodologies, is presented.

  12. Comparative Study of the Volumetric Methods Calculation Using GNSS Measurements

    Science.gov (United States)

    Şmuleac, Adrian; Nemeş, Iacob; Alina Creţan, Ioana; Sorina Nemeş, Nicoleta; Şmuleac, Laura

    2017-10-01

    This paper presents volumetric calculations for different mineral aggregates using different methods of analysis, together with a comparison of the results. For these comparative studies, two licensed software packages were chosen, namely TopoLT 11.2 and Surfer 13. TopoLT is a program dedicated to the development of topographic and cadastral plans: 3D terrain models, level curves, calculation of cut-and-fill volumes, and georeferencing of images. Surfer 13, produced by Golden Software in 1983, is actively used in various fields such as agriculture, construction, geophysics, geotechnical engineering, GIS and water resources. It can also build GRID terrain models, produce density maps using the isolines method, perform volumetric calculations and draw 3D maps, and it can read different file types, including SHP, DXF and XLSX. This paper presents a comparison of volumetric calculations performed with TopoLT by two methods: one in which a single 3D model is chosen for both the bottom and the top surface, and one in which a 3D terrain model is chosen for the bottom surface and another 3D model for the top surface. The two variants are compared against volumetric calculations performed with Surfer 13 on a GRID terrain model. The topographic measurements were made with Leica GPS 1200 Series equipment, using ROMPOS, the Romanian position determination system, which ensures accurate positioning in the ETRS reference frame through the National Network of GNSS Permanent Stations. GPS data processing was performed with the Leica Geo Office Combined program. For the volumetric calculations, the GPS points are in the 1970 stereographic projection system, with altitudes referred to the 1975 Black Sea system.
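
    In its simplest grid-based form, the volume computation that both packages perform reduces to summing the difference between the top and bottom surfaces over the cells of a common grid; a small synthetic sketch (not tied to either package):

        import numpy as np

        # Synthetic example: volume of a stockpile between a flat base and a
        # measured top surface, both sampled on a common 1 m x 1 m grid.
        x = np.arange(0.0, 50.0, 1.0)
        y = np.arange(0.0, 40.0, 1.0)
        X, Y = np.meshgrid(x, y)
        top = 2.0 * np.exp(-((X - 25)**2 + (Y - 20)**2) / 150.0)  # heap height [m]
        bottom = np.zeros_like(top)                                # flat base [m]

        cell_area = 1.0 * 1.0                                      # m^2 per cell
        dz = top - bottom
        fill = np.sum(dz[dz > 0]) * cell_area                      # material above base
        cut = -np.sum(dz[dz < 0]) * cell_area                      # material below base
        print(f"fill volume = {fill:.1f} m^3, cut volume = {cut:.1f} m^3")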

  13. ODYSSEY: A PUBLIC GPU-BASED CODE FOR GENERAL RELATIVISTIC RADIATIVE TRANSFER IN KERR SPACETIME

    Energy Technology Data Exchange (ETDEWEB)

    Pu, Hung-Yi [Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Taipei 10617, Taiwan (China); Yun, Kiyun; Yoon, Suk-Jin [Department of Astronomy and Center for Galaxy Evolution Research, Yonsei University, Seoul 120-749 (Korea, Republic of); Younsi, Ziri [Institut für Theoretische Physik, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main (Germany)

    2016-04-01

    General relativistic radiative transfer calculations coupled with the calculation of geodesics in the Kerr spacetime are an essential tool for determining the images, spectra, and light curves from matter in the vicinity of black holes. Such studies are especially important for ongoing and upcoming millimeter/submillimeter very long baseline interferometry observations of the supermassive black holes at the centers of Sgr A* and M87. To this end we introduce Odyssey, a graphics processing unit (GPU) based code for ray tracing and radiative transfer in the Kerr spacetime. On a single GPU, the performance of Odyssey can exceed 1 ns per photon, per Runge–Kutta integration step. Odyssey is publicly available, fast, accurate, and flexible enough to be modified to suit the specific needs of new users. Along with a Graphical User Interface powered by a video-accelerated display architecture, we also present an educational software tool, Odyssey-Edu, for showing in real time how null geodesics around a Kerr black hole vary as a function of black hole spin and angle of incidence onto the black hole.

  14. Emergy Algebra: Improving Matrix Methods for Calculating Transformities

    Science.gov (United States)

    Transformity is one of the core concepts in Energy Systems Theory, and it is fundamental to the calculation of emergy. Accurate evaluation of transformities and other emergy-per-unit values is essential for the broad acceptance, application and further development of the emergy method...

  15. Advances in computational methods for Quantum Field Theory calculations

    NARCIS (Netherlands)

    Ruijl, B.J.G.

    2017-01-01

    In this work we describe three methods to improve the performance of Quantum Field Theory calculations. First, we simplify large expressions to speed up numerical integrations. Second, we design Forcer, a program for the reduction of four-loop massless propagator integrals. Third, we extend the R*

  16. Perturbation method for calculating impurity binding energy in an ...

    Indian Academy of Sciences (India)

    Nilanjan Sil

    2017-12-18

    Dec 18, 2017 ... In the present paper, we have studied the binding energy of the shallow hydrogenic donor impurity confined in an inhomogeneous cylindrical quantum dot (CQD) of GaAs-AlxGa1−xAs. The perturbation method is used to calculate the binding energy within the framework of the effective mass ...

  17. Methods for calculating population dose from atmospheric dispersion of radioactivity

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, B L; Jow, H N; Lee, I S [Pittsburgh Univ., PA (USA)

    1978-06-01

    Curves are computed from which the population dose (man-rem) due to dispersal of radioactivity from a point source can be calculated in the Gaussian plume model by simple multiplication, and methods of using them and their limitations are considered. Illustrative examples are presented.

  18. Calculating Resonance Positions and Widths Using the Siegert Approximation Method

    Science.gov (United States)

    Rapedius, Kevin

    2011-01-01

    Here, we present complex resonance states (or Siegert states) that describe the tunnelling decay of a trapped quantum particle from an intuitive point of view that naturally leads to the easily applicable Siegert approximation method. This can be used for analytical and numerical calculations of complex resonances of both the linear and nonlinear…

  19. A quick method to calculate QTL confidence interval

    Indian Academy of Sciences (India)

    2011-08-19

    Aug 19, 2011 ... experimental design and analysis to reveal the real molecular nature of the ... strap sample form the bootstrap distribution of QTL location. The 2.5 and ..... ative probability to harbour a true QTL, hence x-LOD rule is not stable ... Darvasi A. and Soller M. 1997 A simple method to calculate resolv- ing power ...

  20. Simple Calculation Programs for Biology Methods in Molecular ...

    Indian Academy of Sciences (India)

    Simple Calculation Programs for Biology Methods in Molecular Biology. GMAP: a program for mapping potential restriction sites; RE sites in ambiguous and non-ambiguous DNA sequences; minimum number of silent mutations required for introducing RE sites; set ...

  1. LEGO-Method--New Strategy for Chemistry Calculation

    Science.gov (United States)

    Molnar, Jozsef; Molnar-Hamvas, Livia

    2011-01-01

    The presented strategy of chemistry calculation is based on the mole concept, but it uses only one fundamental relationship among the amounts of substance as a basic panel. The name LEGO method comes from the famous LEGO[R] toy, because solving equations by grouping formulas is similar to building with bricks. The relations of the mole and the molar amounts, as small…

  2. Further Stable methods for the calculation of partition functions

    International Nuclear Information System (INIS)

    Wilson, B G; Gilleron, F; Pain, J

    2007-01-01

    The extension to recursion over holes of the Gilleron and Pain method for calculating partition functions of a canonical ensemble of non-interacting bound electrons is presented, as well as a generalization for the efficient computation of collisional line broadening.

  3. Thick-Restart Lanczos Method for Electronic Structure Calculations

    International Nuclear Information System (INIS)

    Simon, Horst D.; Wang, L.-W.; Wu, Kesheng

    1999-01-01

    This paper describes two recent innovations related to the classic Lanczos method for eigenvalue problems, namely the thick-restart technique and dynamic restarting schemes. Combining these two new techniques we are able to implement an efficient eigenvalue problem solver. This paper will demonstrate its effectiveness on one particular class of problems for which this method is well suited: linear eigenvalue problems generated from non-self-consistent electronic structure calculations
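
    The classic (unrestarted) Lanczos iteration on which these innovations build can be sketched compactly; the restarting logic itself is omitted, so this is only the baseline that the thick-restart technique improves upon (no reorthogonalization or breakdown handling either):

        import numpy as np

        def lanczos(A, v0, m):
            """m-step Lanczos tridiagonalization of a symmetric matrix A.

            Eigenvalues of the tridiagonal T approximate extremal eigenvalues
            of A. Thick restarting would keep selected Ritz vectors and shrink
            the basis instead of letting it grow indefinitely.
            """
            n = v0.size
            V = np.zeros((m + 1, n))
            alphas, betas = np.zeros(m), np.zeros(m)
            V[0] = v0 / np.linalg.norm(v0)
            beta = 0.0
            for j in range(m):
                w = A @ V[j] - beta * V[j - 1]     # V[-1] is the zero row at j = 0
                alphas[j] = w @ V[j]
                w -= alphas[j] * V[j]
                beta = np.linalg.norm(w)
                betas[j] = beta
                V[j + 1] = w / beta
            return alphas, betas, V[:m]

        rng = np.random.default_rng(0)
        B = rng.standard_normal((500, 500))
        A = (B + B.T) / 2                          # symmetric test matrix
        a, b, _ = lanczos(A, rng.standard_normal(500), 60)
        T = np.diag(a) + np.diag(b[:-1], 1) + np.diag(b[:-1], -1)
        print("largest Ritz value :", np.linalg.eigvalsh(T)[-1])
        print("largest eigenvalue :", np.linalg.eigvalsh(A)[-1])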

  4. Calculation of neutron and gamma transport at the FOA:type of problems and calculation methods

    International Nuclear Information System (INIS)

    Lefvert, T.

    1975-11-01

    Protection against the effects of nuclear warfare involves the analysis of the neutron and gamma radiation produced by the explosion of a nuclear charge. This brings out problems leading to the calculation of criticality, leakage, and deep transmission. Methods have been developed for various kinds of particle transport problems. Applications to radiation therapy, storage of fissile materials, and fast reactors are discussed. A list (with brief descriptions) of all neutron and gamma transport programmes of the FOA is given. (J.S.)

  5. Cluster-cell calculation using the method of generalized homogenization

    International Nuclear Information System (INIS)

    Laletin, N.I.; Boyarinov, V.F.

    1988-01-01

    The generalized homogenization method (GHM), used for solving the neutron transport equation, was applied to calculating the neutron distribution in a cluster cell with a series of cylindrical cells with coaxial cylindrical zones. Single-group calculations of the technological channel of an RBMK reactor cell were performed using GHM. The technological channel was understood to be the reactor channel comprising the zirconium rod, the water or steam-water mixture, the uranium dioxide fuel element, and the zirconium tube, together with the adjacent graphite layer. Calculations were performed for channels with no internal sources and unit incoming current at the external boundary, as well as for channels with internal sources and zero current at the external boundary. The PRAKTINETs program was used to calculate the symmetric neutron distributions in the microcell and in channels with homogenized annular zones. The ORAR-TsM program was used to calculate the antisymmetric distribution in the microcell. The accuracy of the calculations was compared for the two channel versions.

  6. Some experience of shielding calculations by combinatorial method

    International Nuclear Information System (INIS)

    Korobejnikov, V.V.; Oussanov, V.I.

    1996-01-01

    Some aspects of shielding calculations for compound systems by a combinatorial approach are discussed. The effectiveness of such an approach rests on a fundamental characteristic of a compound system: if some element of the system has, in itself, mathematical or physical properties favorable for calculation, these properties may be exploited in a combinatorial approach, whereas they are lost when the system is calculated as a whole by a direct approach. The combinatorial technique applied is well known: a compound system is split into two or more auxiliary subsystems (so that calculating each of them is a simpler problem than the original one, or at least a soluble problem if the original is not), each subsystem is calculated by a suitable method and code, and the coupling is made through boundary conditions or a boundary source. Special consideration is given in the paper to the combinatorial analysis of fast reactor shielding and to the testing of the results obtained. (author)

  7. A simple method for calculation of Glauber's amplitude

    International Nuclear Information System (INIS)

    Omboo, Z.

    1983-01-01

    A method of calculating the terms of the Glauber series expansion for elastic scattering of composite systems is presented. The inclusion of a general scattering diagram simplifies the calculation procedure considerably: the complicated combinatorial problem of collecting similar terms in the Glauber series is solved easily, and the order of the determinant corresponding to the various terms of the series decreases by at least a factor of two if the numbers of constituents of the scattered systems are equal. If these numbers are not equal, the order of the determinant is equal to the smaller of them.

  8. Three-dimensional space-charge calculation method

    International Nuclear Information System (INIS)

    Lysenko, W.P.; Wadlinger, E.A.

    1980-09-01

    A method is presented for calculating space-charge forces on individual particles in a particle tracing simulation code. Poisson's equation is solved in three dimensions with boundary conditions specified on an arbitrary surface. When the boundary condition is defined by an impressed radio-frequency field, the external electric fields as well as the space-charge fields are determined. A least squares fitting procedure is used to calculate the coefficients of expansion functions, which need not be orthogonal nor individually satisfy the boundary condition
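
    The least squares fitting step can be illustrated generically: evaluate a set of expansion functions at sample points on the boundary surface and solve for the coefficients that best reproduce the prescribed boundary potential. The sketch below is a 2D toy with harmonic polynomial basis functions (each already satisfying Laplace's equation), not the actual 3D space-charge code:

        import numpy as np

        # Prescribed potential on an elliptical boundary, to be matched by a
        # combination of harmonic basis functions {r^m cos(m th), r^m sin(m th)}.
        th = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
        bx, by = np.cos(th), 0.7 * np.sin(th)            # boundary points
        v_boundary = np.cos(2 * th) + 0.3                # prescribed potential

        r = np.hypot(bx, by)
        phi = np.arctan2(by, bx)
        orders = list(range(0, 6))
        basis = np.column_stack(
            [r**m * np.cos(m * phi) for m in orders] +
            [r**m * np.sin(m * phi) for m in orders[1:]]
        )
        # Least squares fit of the expansion coefficients to the boundary data.
        coef, *_ = np.linalg.lstsq(basis, v_boundary, rcond=None)
        resid = basis @ coef - v_boundary
        print(f"rms boundary error: {np.sqrt(np.mean(resid**2)):.2e}")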

  9. A new method for the automatic calculation of prosody

    International Nuclear Information System (INIS)

    GUIDINI, Annie

    1981-01-01

    An algorithm is presented for the calculation of the prosodic parameters for speech synthesis. It uses the melodic patterns, composed of rising and falling slopes, suggested by G. CAELEN, and rests on: 1. an analysis into units of meaning to determine a melodic pattern; 2. the calculation of the numeric values for the prosodic variations of each syllable; 3. the use of a table of vocalic values for the three parameters for each vowel according to the consonantal environment, and of a table of standard durations for consonants. This method was applied in the 'SARA' synthesis program with satisfactory results. (author) [fr

  10. Comparison of matrix exponential methods for fuel burnup calculations

    International Nuclear Information System (INIS)

    Oh, Hyung Suk; Yang, Won Sik

    1999-01-01

    Series expansion methods to compute the exponential of a matrix have been compared by applying them to fuel depletion calculations. Specifically, Taylor, Padé, Chebyshev, and rational Chebyshev approximations have been investigated by approximating the exponentials of burnup matrices with truncated series of each method combined with the scaling and squaring algorithm. The accuracy and efficiency of these methods have been tested by performing various numerical tests using one thermal reactor and two fast reactor depletion problems. The results indicate that all four series methods are accurate enough to be used for fuel depletion calculations, although the rational Chebyshev approximation is relatively less accurate. They also show that the rational approximations are more efficient than the polynomial approximations. Considering the computational accuracy and efficiency, the Padé approximation appears to be better than the other methods: its accuracy is better than that of the rational Chebyshev approximation while being comparable to the polynomial approximations, and its efficiency is better than that of the polynomial approximations and similar to the rational Chebyshev approximation. In particular, for fast reactor depletion calculations, it is faster than the polynomial approximations by a factor of ∼ 1.7. (author). 11 refs., 4 figs., 2 tabs
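
    The scaling-and-squaring idea shared by these variants is easy to sketch: scale the matrix by 2^-s so the truncated series converges rapidly, evaluate the series, then square the result s times. Below, a truncated Taylor version is checked against SciPy's expm (a Padé implementation) on a made-up three-nuclide decay chain; the chain data are placeholders, not a real burnup matrix.

        import numpy as np
        from scipy.linalg import expm

        def expm_taylor_ss(A, terms=12):
            """exp(A) via truncated Taylor series with scaling and squaring."""
            norm = max(np.linalg.norm(A, 1), 1e-30)
            s = max(0, int(np.ceil(np.log2(norm))))   # scale so ||A/2^s|| <= 1
            B = A / 2.0**s
            E = np.eye(A.shape[0])
            term = np.eye(A.shape[0])
            for k in range(1, terms + 1):             # truncated series for exp(B)
                term = term @ B / k
                E = E + term
            for _ in range(s):                        # undo the scaling by squaring
                E = E @ E
            return E

        # Tiny made-up chain: nuclide 1 -> 2 -> 3 with decay/capture constants.
        lam = np.array([1e-4, 5e-5, 0.0])
        A = np.diag(-lam) + np.diag(lam[:-1], -1)     # burnup-like matrix [1/s]
        t = 30 * 24 * 3600.0                          # 30 days
        N0 = np.array([1.0, 0.0, 0.0])
        print("Taylor + scaling/squaring:", expm_taylor_ss(A * t) @ N0)
        print("SciPy Pade reference     :", expm(A * t) @ N0)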

  11. A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms.

    Science.gov (United States)

    Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein

    2017-01-01

    Real-time image processing is used in a wide variety of applications, such as medical care and industrial processes. In medical care, this technique can display important patient information graphically, which can supplement and help the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one way of achieving real-time image processing. Edge detection is an early stage in most image processing methods for the extraction of features and object segments from a raw image. The Canny method, the Sobel and Prewitt filters, and the Roberts cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified further to run in a fully parallel manner; this has been achieved by replacing the breadth-first search procedure with a parallel method. The algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on the GPU using the CUDA platform improves the speed of execution by 2-100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms.
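
    For a flavor of the kind of pipeline being accelerated, here is a plain CPU OpenCV version of the Canny and Sobel stages on a synthetic image (the GPU versions in the paper replace these calls with CUDA kernels; the test image is an invented stand-in for an OCT scan):

        import cv2
        import numpy as np

        # Synthetic test image: a bright disk over mild noise.
        rng = np.random.default_rng(0)
        img = np.zeros((256, 256), np.uint8)
        cv2.circle(img, (128, 128), 60, 200, -1)
        noise = rng.integers(0, 20, img.shape, dtype=np.uint8)
        img = cv2.add(img, noise)

        blurred = cv2.GaussianBlur(img, (5, 5), 1.4)    # noise suppression stage
        edges = cv2.Canny(blurred, 50, 150)             # hysteresis thresholds
        grad_x = cv2.Sobel(blurred, cv2.CV_16S, 1, 0)   # Sobel gradient in x
        print("edge pixels found:", int(np.count_nonzero(edges)))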

  12. BLAZE-DEM: A GPU based Polyhedral DEM particle transport code

    CSIR Research Space (South Africa)

    Govender, Nicolin

    2013-05-01

    ... expensive and cannot be done in real time. This paper discusses methods and algorithms that substantially reduce the computational run-time of such simulations. An example is the spatial partitioning and hashing algorithm that allows just the nearest ...

  13. Comparison between calculation methods of dose rates in gynecologic brachytherapy

    International Nuclear Information System (INIS)

    Vianello, E.A.; Biaggio, M.F.; D R, M.F.; Almeida, C.E. de

    1998-01-01

    In radiation treatments of gynecologic tumors it is necessary to evaluate the quality of the results obtained by different methods of calculating the dose rates at the points of clinical interest (point A, rectal, vesical). The present work compares the results obtained by two methods: the Manual Calculation Method (MCM), three-dimensional (Vianello E. et al. 1998), using orthogonal radiographs for each patient under treatment, and the Theraplan/TP-11 planning system (Theratronics International Limited 1990), the latter verified experimentally (Vianello et al. 1996). The results show that MCM can be used in physical-clinical practice, with percentage differences comparable to those of the computerized programs. (Author)

  14. Neutron flux calculation by means of Monte Carlo methods

    International Nuclear Information System (INIS)

    Barz, H.U.; Eichhorn, M.

    1988-01-01

    This report gives a survey of modern neutron flux calculation procedures by means of Monte Carlo methods. Due to progress in the development of variance reduction techniques and improvements in computational technique, the method is of increasing importance. The basic ideas in the application of Monte Carlo methods are briefly outlined. In more detail, various possibilities of non-analog games and estimation procedures are presented, and problems in optimizing the variance reduction techniques are discussed. In the last part, some important international Monte Carlo codes and the authors' own codes are listed, and special applications are described. (author)

  15. AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics

    Science.gov (United States)

    Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.

    2017-05-01

    We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs on the disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving hybrid model equations are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing well-tested hybrid model of plasma that runs in parallel using multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for computation of Faraday’s Equation, resulting in an explicit-implicit scheme for the hybrid model equation. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.

  16. The method of calculation of pipelines laid on supports

    Directory of Open Access Journals (Sweden)

    Benin D.M.

    2017-08-01

    This article focuses on the laying of pipelines on supports and the method of calculating the vertical and horizontal loads acting on a support. Such pipelines include water supply systems, heat networks, oil and mazut lines, condensate lines, steam lines, etc. The article describes the calculation of supports for pipelines laid above ground, in crowded channels, in premises, on racks, in impassable channels, on hanging supports, etc. The paper explores recommendations for the placement of the supports along the route of the pipelines, the calculation of loads on rotating and stationary supports of pipelines, and the inspection of stresses in the pipe metal resulting from elongation of the piping due to thermal expansion of the metal during operation.

  17. Use of results from microscopic methods in optical model calculations

    International Nuclear Information System (INIS)

    Lagrange, C.

    1985-11-01

    A concept of vectorization for coupled-channel programs based upon conventional methods is first presented; this has been implemented in our program for use on the CRAY-1 computer. In a second part we investigate the capabilities of a semi-microscopic optical model involving fewer adjustable parameters than phenomenological ones. The two main ingredients of our calculations are, for spherical or well-deformed nuclei, the microscopic optical-model calculations of Jeukenne, Lejeune and Mahaux, and nuclear densities from Hartree-Fock-Bogoliubov calculations using the density-dependent force D1. For transitional nuclei, deformation-dependent nuclear structure wave functions are employed to weight the scattering potentials for different shapes and channels [fr

  18. GPU-Based 3D Cone-Beam CT Image Reconstruction for Large Data Volume

    Directory of Open Access Journals (Sweden)

    Xing Zhao

    2009-01-01

    Currently, 3D cone-beam CT image reconstruction speed is still a severe limitation for clinical application. The computational power of modern graphics processing units (GPUs) has been harnessed to provide impressive acceleration of 3D volume image reconstruction. For extra-large data volumes exceeding the physical graphics memory of the GPU, a straightforward compromise is to divide the data volume into blocks. Different from the conventional octree partition method, a new partition scheme is proposed in this paper. This method divides both the projection data and the reconstructed image volume into subsets according to geometric symmetries in the circular cone-beam projection layout, and a fast reconstruction for large data volumes can be implemented by packing the subsets of projection data into the RGBA channels of the GPU, performing the reconstruction chunk by chunk, and combining the individual results at the end. The method is evaluated by reconstructing 3D images from computer-simulated data and real micro-CT data. Our results indicate that the GPU implementation can maintain the original precision and speed up the reconstruction process by 110-120 times for circular cone-beam scans, as compared to a traditional CPU implementation.

  19. GPU-Based Cloud Service for Smith-Waterman Algorithm Using Frequency Distance Filtration Scheme

    Directory of Open Access Journals (Sweden)

    Sheng-Ta Lee

    2013-01-01

    As the conventional means of analyzing the similarity between a query sequence and database sequences, the Smith-Waterman algorithm is feasible for a database search owing to its high sensitivity. However, this algorithm is still quite time consuming. CUDA programming can improve computations efficiently by using the computational power of massive computing hardware such as graphics processing units (GPUs). This work presents a novel Smith-Waterman algorithm with a frequency-based filtration method on GPUs, rather than merely accelerating the comparisons while expending computational resources on unnecessary comparisons. A user-friendly interface is also designed for potential cloud server applications with GPUs. Additionally, two data sets, H1N1 protein sequences (query sequence set) and the human protein database (database set), are selected, followed by a comparison of CUDA-SW and CUDA-SW with the filtration method, referred to herein as CUDA-SWf. Experimental results indicate that reducing unnecessary sequence alignments can improve the computational time by up to 41%. Importantly, by using CUDA-SWf as a cloud service, this application can be accessed from any computing environment of a device with an Internet connection, without time constraints.

  20. GPU-based cloud service for Smith-Waterman algorithm using frequency distance filtration scheme.

    Science.gov (United States)

    Lee, Sheng-Ta; Lin, Chun-Yuan; Hung, Che Lun

    2013-01-01

    As the conventional means of analyzing the similarity between a query sequence and database sequences, the Smith-Waterman algorithm is feasible for a database search owing to its high sensitivity. However, this algorithm is still quite time consuming. CUDA programming can improve computations efficiently by using the computational power of massive computing hardware as graphics processing units (GPUs). This work presents a novel Smith-Waterman algorithm with a frequency-based filtration method on GPUs rather than merely accelerating the comparisons yet expending computational resources to handle such unnecessary comparisons. A user friendly interface is also designed for potential cloud server applications with GPUs. Additionally, two data sets, H1N1 protein sequences (query sequence set) and human protein database (database set), are selected, followed by a comparison of CUDA-SW and CUDA-SW with the filtration method, referred to herein as CUDA-SWf. Experimental results indicate that reducing unnecessary sequence alignments can improve the computational time by up to 41%. Importantly, by using CUDA-SWf as a cloud service, this application can be accessed from any computing environment of a device with an Internet connection without time constraints.
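
    The dynamic-programming recurrence at the heart of Smith-Waterman is compact enough to sketch; the GPU versions parallelize the anti-diagonals of this same table, and the frequency-distance filtration additionally skips database sequences that cannot reach a score cutoff. A plain CPU reference with toy scoring and no filtration:

        import numpy as np

        def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
            """Plain Smith-Waterman local alignment score (CPU reference).

            H[i, j] is the best local alignment score ending at a[i-1], b[j-1];
            the zero floor lets alignments restart anywhere.
            """
            H = np.zeros((len(a) + 1, len(b) + 1), dtype=np.int32)
            best = 0
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    H[i, j] = max(0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
                    best = max(best, int(H[i, j]))
            return best

        print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))  # small toy sequence pair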

  1. Process control and optimization with simple interval calculation method

    DEFF Research Database (Denmark)

    Pomerantsev, A.; Rodionova, O.; Høskuldsson, Agnar

    2006-01-01

    Methods of process control and optimization are presented and illustrated with a real world example. The optimization methods are based on the PLS block modeling as well as on the simple interval calculation methods of interval prediction and object status classification. It is proposed to employ the series of expanding PLS/SIC models in order to support the on-line process improvements. This method helps to predict the effect of planned actions on the product quality and thus enables passive quality control. We have also considered an optimization approach that proposes the correcting actions ... for the quality improvement in the course of production. The latter is an active quality optimization, which takes into account the actual history of the process. The advocated approach is allied to the conventional method of multivariate statistical process control (MSPC), as it also employs the historical process ...

  2. A comparison of published methods of calculation of defect significance

    International Nuclear Information System (INIS)

    Ingham, T.; Harrison, R.P.

    1982-01-01

    This paper presents some of the results obtained in a round-robin calculational exercise organised by the OECD Committee on the Safety of Nuclear Installations (CSNI). The exercise was initiated to examine practical aspects of using documented elastic-plastic fracture mechanics methods to calculate defect significance. The extent to which the objectives of the exercise were met is illustrated using solutions to 'standard' problems produced by UKAEA and CEGB using the methods given in ASME XI, Appendix A, BSI PD6493, and the CEGB R/H/R6 Document. Differences in critical or tolerable defect size defined using these procedures are examined in terms of their different treatments and reasons for discrepancies are discussed. (author)

  3. Simple method to calculate percolation, Ising and Potts clusters

    International Nuclear Information System (INIS)

    Tsallis, C.

    1981-01-01

    A procedure (the 'break-collapse method') is introduced which considerably simplifies the calculation of two- or multirooted clusters like those commonly appearing in real-space renormalization group (RG) treatments of bond percolation and of pure and random Ising and Potts problems. The method is illustrated through two applications for the q-state Potts ferromagnet. The first of them concerns an RG calculation of the critical exponent ν for the isotropic square lattice: numerical consistency is obtained (particularly for q→0) with den Nijs' conjecture. The second application is a compact reformulation of the standard star-triangle and duality transformations, which provide the exact critical temperature for the anisotropic triangular and honeycomb lattices. (Author) [pt

  4. Thermal disadvantage factor calculation by the multiregion collision probability method

    International Nuclear Information System (INIS)

    Ozgener, B.; Ozgener, H.A.

    2004-01-01

    A multi-region collision probability formulation that is capable of applying the white boundary condition directly is presented and applied to thermal neutron transport problems. The disadvantage factors computed are compared with their counterparts calculated by SN methods with both direct and indirect application of the white boundary condition. The results of the ABH method and of the collision probability method with indirect application of the white boundary condition are also considered, and comparisons with benchmark Monte Carlo results are carried out. The studies show that the proposed formulation is capable of calculating the thermal disadvantage factor with sufficient accuracy, without resorting to the fictitious scattering outer shell approximation associated with the indirect application of the white boundary condition in collision probability solutions.

  5. The application of advanced rotor (performance) methods for design calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bussel, G.J.W. van [Delft Univ. of Technology, Inst. for Wind Energy, Delft (Netherlands)

    1997-08-01

    The calculation of loads and performance of wind turbine rotors has been a topic of research for the last century. The principles for the calculation of loads on rotor blades with a given specific geometry, as well as the development of optimally shaped rotor blades, were published in the decades in which significant aircraft development took place. Nowadays, advanced computer codes are used for specific problems regarding modern aircraft, and they have occasionally been applied to wind turbine rotors as well. The engineers designing rotor blades for wind turbines still use methods based upon global principles developed at the beginning of the century. The question of what type of methods to expect in a design environment in the near future is addressed here. (EG) 14 refs.

  6. Problems in radiation shielding calculations with Monte Carlo methods

    International Nuclear Information System (INIS)

    Ueki, Kohtaro

    1985-01-01

    The Monte Carlo method is a very useful tool for solving a large class of radiation transport problems. In contrast with deterministic methods, geometric complexity is a much less significant problem for Monte Carlo calculations. However, the accuracy of Monte Carlo calculations is, of course, limited by the statistical error of the quantities to be estimated. In this report, we point out some typical problems in solving large shielding systems involving radiation streaming. The Monte Carlo coupling technique was developed to settle such shielding problems accurately. However, the variance of the Monte Carlo results obtained with the coupling technique, for detectors located outside the radiation streaming, was still not small enough. To obtain more accurate results for detectors located outside the streaming, and also for a multi-legged-duct streaming problem, a practicable 'prism scattering technique' is proposed in this study. (author)

  7. Benchmark calculations for evaluation methods of gas volumetric leakage rate

    International Nuclear Information System (INIS)

    Asano, R.; Aritomi, M.; Matsuzaki, M.

    1998-01-01

    The containment function of radioactive material transport casks is essential for safe transportation, preventing the radioactive materials from being released into the environment. Regulations such as the IAEA standards set limits on the radioactivity that may be released. Since it is not practical for leakage tests to measure the radioactivity release from a package directly, gas volumetric leakage rates are used instead, as proposed in the ANSI N14.5 and ISO standards. In our previous work, gas volumetric leakage rates for several kinds of gas from various leaks were measured, and two evaluation methods, a 'simple evaluation method' and a 'strict evaluation method', were proposed based on the results. The simple evaluation method considers the friction loss of laminar flow with the expansion effect. The strict evaluation method considers an exit loss in addition to the friction loss. In this study, four worked examples were completed for an assumed large spent fuel transport cask (Type B package) with a wet or dry cavity and at three transport conditions: normal transport with intact fuel or failed fuel, and an accident in transport. The standard leakage rates and criteria for two kinds of leak test were calculated for each example by each evaluation method. The following observations are made based upon the calculations and evaluations: the choked flow model of the ANSI method greatly overestimates the criteria for the tests; the laminar flow models of both the ANSI and ISO methods slightly overestimate the criteria for the tests; the above two results are within the design margin for ordinary transport conditions, so all methods are useful for the evaluation; for severe conditions such as failed fuel transportation, care should be taken in applying the choked flow model of the ANSI method. (authors)
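
    For orientation, the friction-loss core of a laminar-flow evaluation reduces to the isothermal compressible Hagen-Poiseuille relation for a single capillary leak. The sketch below is that generic relation, not the specific ANSI N14.5 or ISO correlation, and the leak dimensions are hypothetical:

```python
import math

def laminar_leak_rate(D, length, mu, P_up, P_down, P_ref):
    """Isothermal compressible Hagen-Poiseuille flow through one capillary
    leak of diameter D [m] and length `length` [m]; returns the volumetric
    leakage rate [m^3/s] referenced to pressure P_ref [Pa]. Friction loss
    only; the exit loss of the 'strict' evaluation is not included."""
    return math.pi * D**4 * (P_up**2 - P_down**2) / (256.0 * mu * length * P_ref)

# Hypothetical leak: 10 um diameter, 5 mm long, air at room temperature,
# 200 kPa upstream against 100 kPa downstream, referenced to 1 atm.
print(laminar_leak_rate(D=10e-6, length=5e-3, mu=1.8e-5,
                        P_up=2.0e5, P_down=1.0e5, P_ref=101325.0))
```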

  8. Aerodynamic calculational methods for curved-blade Darrieus VAWT WECS

    Science.gov (United States)

    Templin, R. J.

    1985-03-01

    Calculation of aerodynamic performance and load distributions for curved-blade wind turbines is discussed. Double multiple streamtube theory and the uncertainties that remain in developing adequate methods are considered. The lack of relevant airfoil data at high Reynolds numbers and high angles of attack, and doubts concerning the accuracy of models of dynamic stall, are underlined. Wind tunnel tests of blade airbrake configurations are summarized.

  9. Applying probabilistic methods for assessments and calculations for accident prevention

    International Nuclear Information System (INIS)

    Anon.

    1984-01-01

    The guidelines for the prevention of accidents require plant design-specific and radioecological calculations to be made in order to show that maximum acceptable exposure values will not be exceeded in case of an accident. For this purpose, the main parameters affecting the accident scenario have to be determined by probabilistic methods. This offers the advantage that parameters can be quantified on the basis of unambiguous and realistic criteria, and final results can be defined in terms of conservativity. (DG)

  10. Testing the QA Method for Calculating Jet v_{2}

    CERN Document Server

    Mueller, Jason

    2014-01-01

    For the summer, I was assigned to work on the ALICE experiment with Alice Ohlson. I wrote several programs throughout the summer that were used to calculate jet v_2 using a non-standard method described by my supervisor in her Ph.D. thesis. Though the project is not yet complete, significant progress has been made, and the results so far seem promising.
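
    The standard way to estimate jet v_2 (as opposed to the non-standard method of the thesis, which is not reproduced here) is the event-plane average <cos 2(phi_jet - Psi_2)>, corrected by the event-plane resolution. A toy sketch:

```python
import numpy as np

def jet_v2(phi_jets, psi2, resolution=1.0):
    """Event-plane estimate: v2 = <cos 2(phi_jet - Psi_2)> / R, where R is
    the event-plane resolution correction (taken as 1 in this toy)."""
    dphi = np.asarray(phi_jets) - np.asarray(psi2)
    return np.mean(np.cos(2.0 * dphi)) / resolution

# Toy data with a second-harmonic correlation imprinted by hand:
rng = np.random.default_rng(0)
psi = rng.uniform(0.0, np.pi, 100000)
phi = psi + 0.5 * rng.vonmises(0.0, 0.5, 100000)   # 2*(phi-psi) ~ von Mises
print(f"v2 = {jet_v2(phi, psi):.3f}")               # ~ I1(0.5)/I0(0.5) ~ 0.24
```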

  11. Calculations of pair production by Monte Carlo methods

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.

    1991-01-01

    We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs
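
    One portable-random-number ingredient alluded to above is a linear congruential generator with O(log n) skip-ahead, so that each processor of a vector or parallel machine can jump to a disjoint slice of a single reproducible stream. A minimal sketch; the drand48 constants are a conventional choice, not necessarily those used in the paper:

```python
# Portable 48-bit linear congruential generator with O(log n) skip-ahead.
M = 1 << 48
A = 0x5DEECE66D
C = 0xB

def lcg_next(x):
    return (A * x + C) % M

def lcg_skip(x, n):
    """Advance the state by n steps in O(log n) using the closed form
    x_{k+n} = A^n x_k + C (A^n - 1)/(A - 1)  (mod M), built by binary
    decomposition of n (repeated squaring of the affine map)."""
    a, c = 1, 0
    step_a, step_c = A, C
    while n:
        if n & 1:
            a, c = (a * step_a) % M, (c * step_a + step_c) % M
        step_c = (step_c * (step_a + 1)) % M
        step_a = (step_a * step_a) % M
        n >>= 1
    return (a * x + c) % M

# Each of 4 'processors' draws from a disjoint slice of one global stream:
seed = 1234567
starts = [lcg_skip(seed, i * 1000000) for i in range(4)]
print([s / M for s in map(lcg_next, starts)])
```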

  12. An analytical method for neutron thermalization calculations in heterogenous reactors

    Energy Technology Data Exchange (ETDEWEB)

    Pop-Jordanov, J [Boris Kidric Institute of Nuclear Sciences, Vinca, Belgrade (Yugoslavia)

    1965-07-01

    It is well known that the use of the diffusion approximation for studying neutron thermalization in heterogeneous reactors may result in considerable errors. On the other hand, more exact numerical methods are rather laborious and require the use of large digital computers. In this paper, the use of the diffusion approximation in absorbing media has been avoided, but the treatment remained analytical, thus simplifying practical calculations.

  13. An analytical method for neutron thermalization calculations in heterogenous reactors

    International Nuclear Information System (INIS)

    Pop-Jordanov, J.

    1965-01-01

    It is well known that the use of the diffusion approximation for studying neutron thermalization in heterogeneous reactors may result in considerable errors. On the other hand, more exact numerical methods are rather laborious and require the use of large digital computers. In this paper, the use of the diffusion approximation in absorbing media has been avoided, but the treatment remained analytical, thus simplifying practical calculations.

  14. GPU-based Scalable Volumetric Reconstruction for Multi-view Stereo

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H; Duchaineau, M; Max, N

    2011-09-21

    We present a new scalable volumetric reconstruction algorithm for multi-view stereo using a graphics processing unit (GPU). It is an effectively parallelized GPU algorithm that simultaneously uses a large number of GPU threads, each of which performs voxel carving, in order to integrate depth maps with images from multiple views. Each depth map, triangulated from pair-wise semi-dense correspondences, represents a view-dependent surface of the scene. This algorithm also provides scalability for large-scale scene reconstruction in a high resolution voxel grid by utilizing streaming and parallel computation. The output is a photo-realistic 3D scene model in a volumetric or point-based representation. We demonstrate the effectiveness and the speed of our algorithm with a synthetic scene and real urban/outdoor scenes. Our method can also be integrated with existing multi-view stereo algorithms such as PMVS2 to fill holes or gaps in textureless regions.
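
    The per-voxel independence that makes voxel carving map well onto GPU threads can be shown with a drastically simplified sketch: hypothetical orthographic views, where a voxel survives a view only if it is not in the observed free space in front of the depth surface (real systems use perspective projection and per-thread kernels):

```python
import numpy as np

def carve(occupancy, depth_map, axis):
    """Remove voxels lying in observed free space for one orthographic
    view looking along +axis: a voxel survives the view only if its index
    along the viewing axis is at least the measured depth. Each voxel test
    is independent, which is what maps onto one GPU thread per voxel."""
    n = occupancy.shape[axis]
    idx = np.arange(n).reshape([-1 if a == axis else 1 for a in range(3)])
    return occupancy & (idx >= np.expand_dims(depth_map, axis))

# Hypothetical toy scene: a flat surface one quarter into the volume.
N = 64
occ = np.ones((N, N, N), dtype=bool)
occ = carve(occ, np.full((N, N), N // 4), axis=0)
print(occ.sum(), "of", N**3, "voxels remain")
```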

  15. Higher order methods for burnup calculations with Bateman solutions

    International Nuclear Information System (INIS)

    Isotalo, A.E.; Aarnio, P.A.

    2011-01-01

    Highlights: Average microscopic reaction rates need to be estimated at each step. Traditional predictor-corrector methods use zeroth- and first-order predictions. Increasing the predictor order greatly improves results. Increasing the corrector order does not improve results. Abstract: A group of methods for burnup calculations solves the changes in material compositions by evaluating an explicit solution to the Bateman equations with constant microscopic reaction rates. This requires predicting representative averages for the one-group cross-sections and flux during each step, which is usually done using zeroth- and first-order predictions for their time development in a predictor-corrector calculation. In this paper we present the results of using linear, rather than constant, extrapolation on the predictor and quadratic, rather than linear, interpolation on the corrector. Both of these are done by using data from the previous step, and thus do not affect the stepwise running time. The methods were tested by implementing them into the reactor physics code Serpent and comparing the results from four test cases to accurate reference results obtained with very short steps. Linear extrapolation greatly improved results for thermal spectra and should be preferred over the constant one currently used in all Bateman solution based burnup calculations. The effects of using quadratic interpolation on the corrector were, on the other hand, predominantly negative, although not enough so to conclusively decide between the linear and quadratic variants.
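
    The step structure described above can be sketched compactly. The matrix exponential below stands in for the Bateman solution with constant rates, and the chain, flux and feedback model are hypothetical toys; only the linear predictor/corrector logic is the point:

```python
import numpy as np
from scipy.linalg import expm

def deplete_step(N, A_of_sigma, sigma_prev, sigma_now, dt, transport):
    """One predictor-corrector depletion step. The matrix exponential
    stands in for the Bateman solution of dN/dt = A(sigma) N with
    constant rates. The predictor extrapolates the one-group rates
    linearly to the step midpoint (using the previous step) instead of
    holding them constant; the corrector averages begin- and end-of-step
    rates obtained from a new transport solve."""
    sigma_pred = sigma_now + 0.5 * (sigma_now - sigma_prev)   # linear predictor
    N_pred = expm(A_of_sigma(sigma_pred) * dt) @ N
    sigma_end = transport(N_pred)                             # end-of-step rates
    sigma_corr = 0.5 * (sigma_now + sigma_end)                # linear corrector
    return expm(A_of_sigma(sigma_corr) * dt) @ N, sigma_end

# Hypothetical two-nuclide chain: capture in nuclide 0 feeds nuclide 1.
flux = 1e14                                                   # n/cm^2/s
A = lambda s: np.array([[-s[0], 0.0], [s[0], -s[1]]]) * flux * 1e-24
transport = lambda N: np.array([50.0, 10.0]) * (1.0 + 1e-27 * N[1])  # toy feedback
N, sigma = np.array([1.0e24, 0.0]), np.array([50.0, 10.0])
N, sigma = deplete_step(N, A, sigma, sigma, dt=30 * 86400.0, transport=transport)
print(N)
```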

  16. Comparison of optimization methods for electronic-structure calculations

    International Nuclear Information System (INIS)

    Garner, J.; Das, S.G.; Min, B.I.; Woodward, C.; Benedek, R.

    1989-01-01

    The performance of several local-optimization methods for calculating electronic structure is compared. The fictitious first-order equation of motion proposed by Williams and Soler is integrated numerically by three procedures: simple finite-difference integration, approximate analytical integration (the Williams-Soler algorithm), and the Born perturbation series. These techniques are applied to a model problem for which exact solutions are known, the Mathieu equation. The Williams-Soler algorithm and the second Born approximation converge equally rapidly, but the former involves considerably less computational effort and gives a more accurate converged solution. Application of the method of conjugate gradients to the Mathieu equation is discussed

  17. MATH: A Scientific Tool for Numerical Methods Calculation and Visualization

    Directory of Open Access Journals (Sweden)

    Henrich Glaser-Opitz

    2016-02-01

    MATH is an easy-to-use application for various numerical method calculations, with a graphical user interface and an integrated plotting tool, written in Qt with extensive use of the Qwt library for plotting options and the Gsl and MuParser libraries as numerical and parser helper libraries. It can be found at http://sourceforge.net/projects/nummath. MATH is a convenient tool for use in the education process because of its capability of showing every important step in the solution process, to better understand how it is done. MATH also enables fast comparison of the speed and precision of similar methods.

  18. Nuclear calculation methods for light water moderated reactors

    International Nuclear Information System (INIS)

    Hicks, D.

    1961-02-01

    This report is intended as an introductory review. After a brief discussion of problems encountered in the nuclear design of water moderated reactors a comprehensive scheme of calculations is described. This scheme is based largely on theoretical methods and computer codes developed in the U.S.A. but some previously unreported developments made in this country are also described. It is shown that the effective reproduction factor of simple water moderated lattices may be estimated to an accuracy of approximately 1%. Methods for treating water gap flux peaking and control absorbers are presented in some detail, together with a brief discussion of temperature coefficients, void coefficients and burn-up problems. (author)

  19. GPU-based prompt gamma ray imaging from boron neutron capture therapy

    International Nuclear Information System (INIS)

    Yoon, Do-Kun; Jung, Joo-Young; Suk Suh, Tae; Jo Hong, Key; Sil Lee, Keum

    2015-01-01

    Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusions: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations
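
    The reconstruction kernel at the heart of such a system is the ordered-subset EM update, whose forward- and back-projections are the natural GPU kernels. A minimal numpy stand-in (the system matrix and subsets are hypothetical toys, and the paper's modifications of the algorithm are not reproduced):

```python
import numpy as np

def osem(A, y, subsets, n_iter=10):
    """Ordered-subset EM sketch: x <- x * (A_s^T (y_s / A_s x)) / (A_s^T 1),
    cycled over the subsets. The two matrix-vector products per subset
    (forward and back projection) are the natural GPU kernels; numpy
    stands in for them here."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for s in subsets:
            As, ys = A[s], y[s]
            fwd = np.maximum(As @ x, 1e-12)            # guard empty bins
            x *= (As.T @ (ys / fwd)) / np.maximum(As.T @ np.ones(len(s)), 1e-12)
    return x

# Tiny hypothetical system: 2 voxels viewed by 4 detector bins, 2 subsets.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.25, 0.75]])
x_true = np.array([2.0, 3.0])
print(osem(A, A @ x_true, subsets=[np.array([0, 2]), np.array([1, 3])]))
```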

  20. GPU-Based FFT Computation for Multi-Gigabit WirelessHD Baseband Processing

    Directory of Open Access Journals (Sweden)

    Nicholas Hinitt

    2010-01-01

    The next-generation Graphics Processing Units (GPUs) are being considered for non-graphics applications. Millimeter-wave (60 GHz) wireless networks that are capable of multi-gigabit per second (Gbps) transfer rates require a significant baseband throughput. In this work, we consider the baseband of WirelessHD, a 60 GHz communications system, which can provide a data rate of up to 3.8 Gbps over a short-range wireless link. Thus, we explore the feasibility of achieving gigabit baseband throughput using GPUs. One of the most computationally intensive functions commonly used in baseband communications, the Fast Fourier Transform (FFT) algorithm, is implemented on an NVIDIA GPU using their general-purpose computing platform called the Compute Unified Device Architecture (CUDA). The paper first investigates the implementation of an FFT algorithm using the GPU hardware and exploiting the computational capability available. It then outlines the limitations discovered and the methods used to overcome these challenges. Finally a new algorithm to compute the FFT is proposed, which reduces interprocessor communication. It is further optimized by improving memory access, enabling the processing rate to exceed 4 Gbps, achieving a processing time of a 512-point FFT in less than 200 ns using a two-GPU solution.
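
    The butterfly structure that such GPU implementations parallelize can be shown with a recursive radix-2 Cooley-Tukey sketch; production kernels are iterative and batched with coalesced memory access, which this toy omits:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (len(x) must be a power of two).
    Splits into even/odd halves and combines them with twiddle-factor
    butterflies; this sketch only shows the structure."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# Sanity check on 8 points; a WirelessHD-sized transform would use 512.
print(fft([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]))  # impulse -> flat spectrum
```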

  1. GPU-Based Simulation of Ultrasound Imaging Artifacts for Cryosurgery Training

    Science.gov (United States)

    Keelan, Robert; Shimada, Kenji

    2016-01-01

    This study presents an efficient computational technique for the simulation of ultrasound imaging artifacts associated with cryosurgery, based on nonlinear ray tracing. This study is part of an ongoing effort to develop computerized training tools for cryosurgery, with prostate cryosurgery as a development model. The capability of performing virtual cryosurgical procedures on a variety of test cases is essential for effective surgical training. Simulated ultrasound imaging artifacts include reverberation and reflection of the cryoprobes in the unfrozen tissue, reflections caused by the freezing front, shadowing caused by the frozen region, and tissue property changes in repeated freeze–thaw cycle procedures. The simulated artifacts appear to preserve the key features observed in a clinical setting. This study displays an example of how training may benefit from toggling between the undisturbed ultrasound image, the simulated temperature field, the simulated imaging artifacts, and an augmented hybrid presentation of the temperature field superimposed on the ultrasound image. The proposed method is demonstrated on a graphics processing unit at 100 frames per second, on a mid-range personal workstation, two orders of magnitude faster than a typical cryoprocedure. This performance is based on computation with C++ Accelerated Massive Parallelism (C++ AMP) and its interoperability with the DirectX rendering application programming interface. PMID:26818026

  2. GPU-based acceleration of computations in nonlinear finite element deformation analysis.

    Science.gov (United States)

    Mafi, Ramin; Sirouspour, Shahin

    2014-03-01

    The physics of deformation for biological soft tissue is best described by nonlinear continuum mechanics-based models, which can then be discretized by the FEM for a numerical solution. However, the computational complexity of such models has limited their use in applications requiring real-time or fast response. In this work, we propose a graphics processing unit-based implementation of the FEM using implicit time integration for dynamic nonlinear deformation analysis. This is the most general formulation of the deformation analysis. It is valid for large deformations and strains and can account for material nonlinearities. The data-parallel nature and the intense arithmetic computations of nonlinear FEM equations make them particularly suitable for implementation on a parallel computing platform such as the graphics processing unit. In this work, we present and compare two different designs based on the matrix-free and conventional preconditioned conjugate gradients algorithms for solving the FEM equations arising in deformation analysis. The speedup achieved with the proposed parallel implementations of the algorithms will be instrumental in the development of advanced surgical simulators and medical image registration methods involving soft-tissue deformation. Copyright © 2013 John Wiley & Sons, Ltd.
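
    The matrix-free variant mentioned above replaces the assembled stiffness matrix with a callable that applies it, which is what keeps GPU memory traffic low. A generic conjugate gradient sketch with a matrix-free operator (the 1D Laplacian below is a hypothetical stand-in for an FEM stiffness action):

```python
import numpy as np

def cg(apply_A, b, tol=1e-8, max_iter=1000):
    """Matrix-free conjugate gradients: apply_A computes A @ v without
    ever assembling A, so the operator can be evaluated element by
    element on a GPU instead of being stored."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# Hypothetical SPD operator: a 1D Dirichlet Laplacian applied matrix-free.
n = 100
def laplace(v):
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

print(np.allclose(laplace(cg(laplace, np.ones(n))), np.ones(n)))
```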

  3. Brain MR Image Restoration Using an Automatic Trilateral Filter With GPU-Based Acceleration.

    Science.gov (United States)

    Chang, Herng-Hua; Li, Cheng-Yuan; Gallogly, Audrey Haihong

    2018-02-01

    Noise reduction in brain magnetic resonance (MR) images has been a challenging and demanding task. This study develops a new trilateral filter that aims to achieve robust and efficient image restoration. Extended from the bilateral filter, the proposed algorithm contains one additional intensity similarity function, which compensates for the unique characteristics of noise in brain MR images. An entropy function adaptive to intensity variations is introduced to regulate the contributions of the weighting components. To hasten the computation, parallel computing based on the graphics processing unit (GPU) strategy is explored, with emphasis on memory allocations and thread distributions. To automate the filtration, image texture feature analysis associated with machine learning is investigated. Among the 98 candidate features, the sequential forward floating selection scheme is employed to acquire the optimal texture features for regularization. Subsequently, a two-stage classifier that consists of support vector machines and artificial neural networks is established to predict the filter parameters for automation. A speedup gain of 757 was reached to process an entire MR image volume of 256 × 256 × 256 pixels, which completed within 0.5 s. Automatic restoration results revealed high accuracy with an ensemble average relative error of 0.53 ± 0.85% in terms of the peak signal-to-noise ratio. This self-regulating trilateral filter outperformed many state-of-the-art noise reduction methods both qualitatively and quantitatively. We believe that this new image restoration algorithm is of potential in many brain MR image processing applications that require expedition and automation.
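
    The weighting structure of such a filter can be sketched generically: a spatial Gaussian multiplied by two intensity-similarity terms. The second intensity term below is a hypothetical stand-in; the paper's own function and its adaptive entropy regulation are not reproduced:

```python
import numpy as np

def trilateral(img, sigma_s, sigma_r1, sigma_r2, radius=2):
    """Trilateral weighting sketch: a spatial Gaussian times two intensity
    similarity terms (the second, Laplacian-shaped factor here is a
    hypothetical placeholder for the paper's MR-noise-specific function).
    On a GPU, each output pixel would be one thread."""
    out = np.zeros_like(img)
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_s = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            d = patch - img[i, j]
            w = w_s * np.exp(-d**2 / (2.0 * sigma_r1**2)) * np.exp(-np.abs(d) / sigma_r2)
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

noisy = np.random.default_rng(1).normal(1.0, 0.1, (16, 16))
print(trilateral(noisy, sigma_s=1.5, sigma_r1=0.2, sigma_r2=0.5).std())
```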

  4. Calculation of degenerated Eigenmodes with modified power method

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Peng; Lee, Hyun Suk; Lee, Deok Jung [School of Mechanical and Nuclear Engineering, Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2017-02-15

    The modified power method has been studied by many researchers to calculate higher eigenmodes and accelerate the convergence of the fundamental mode. Its application to multidimensional problems may be unstable due to degenerate or near-degenerate eigenmodes. Complex eigenmode solutions are occasionally encountered in such cases, and the shapes of the corresponding eigenvectors may change during the simulation. These issues must be addressed for the successful implementation of the modified power method. Complex components are examined and an approximation method to eliminate the usage of complex numbers is provided. A technique to fix the eigenvector shapes is also provided. The performance of the methods for dealing with the aforementioned problems is demonstrated with two-dimensional one-group and three-dimensional one-group homogeneous diffusion problems.

  5. Improvement of correlated sampling Monte Carlo methods for reactivity calculations

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Asaoka, Takumi

    1978-01-01

    Two correlated Monte Carlo methods, the similar flight path and the identical flight path methods, have been improved to evaluate up to the second-order change of the reactivity perturbation. Secondary fission neutrons produced by neutrons having passed through perturbed regions in both the unperturbed and perturbed systems are followed in a way that maintains a strong correlation between secondary neutrons in both systems. These techniques are incorporated into the general purpose Monte Carlo code MORSE, so that the statistical error of the calculated reactivity change can also be estimated. The control rod worths measured in the FCA V-3 assembly are analyzed with the present techniques, which are shown to predict the measured values within the standard deviations. The identical flight path method has proved more useful than the similar flight path method for the analysis of the control rod worth. (auth.)

  6. SOLAR OPACITY CALCULATIONS USING THE SUPER-TRANSITION-ARRAY METHOD

    International Nuclear Information System (INIS)

    Krief, M.; Feigel, A.; Gazit, D.

    2016-01-01

    A new opacity model has been developed based on the Super-Transition-Array (STA) method for the calculation of monochromatic opacities of plasmas in local thermodynamic equilibrium. The atomic code, named STAR (STA-Revised), is described and used to calculate spectral opacities for a solar model implementing the recent AGSS09 composition. Calculations are carried out throughout the solar radiative zone. The relative contributions of different chemical elements and atomic processes to the total Rosseland mean opacity are analyzed in detail. Monochromatic opacities and charge-state distributions are compared with the widely used Opacity Project (OP) code, for several elements near the radiation–convection interface. STAR Rosseland opacities for the solar mixture show a very good agreement with OP and the OPAL opacity code throughout the radiation zone. Finally, an explicit STA calculation was performed of the full AGSS09 photospheric mixture, including all heavy metals. It was shown that, due to their extremely low abundance, and despite being very good photon absorbers, the heavy elements do not affect the Rosseland opacity

  7. SOLAR OPACITY CALCULATIONS USING THE SUPER-TRANSITION-ARRAY METHOD

    Energy Technology Data Exchange (ETDEWEB)

    Krief, M.; Feigel, A.; Gazit, D., E-mail: menahem.krief@mail.huji.ac.il [The Racah Institute of Physics, The Hebrew University, 91904 Jerusalem (Israel)

    2016-04-10

    A new opacity model has been developed based on the Super-Transition-Array (STA) method for the calculation of monochromatic opacities of plasmas in local thermodynamic equilibrium. The atomic code, named STAR (STA-Revised), is described and used to calculate spectral opacities for a solar model implementing the recent AGSS09 composition. Calculations are carried out throughout the solar radiative zone. The relative contributions of different chemical elements and atomic processes to the total Rosseland mean opacity are analyzed in detail. Monochromatic opacities and charge-state distributions are compared with the widely used Opacity Project (OP) code, for several elements near the radiation–convection interface. STAR Rosseland opacities for the solar mixture show a very good agreement with OP and the OPAL opacity code throughout the radiation zone. Finally, an explicit STA calculation was performed of the full AGSS09 photospheric mixture, including all heavy metals. It was shown that, due to their extremely low abundance, and despite being very good photon absorbers, the heavy elements do not affect the Rosseland opacity.

  8. MERSENNE AND HADAMARD MATRICES CALCULATION BY SCARPIS METHOD

    Directory of Open Access Journals (Sweden)

    N. A. Balonin

    2014-05-01

    Purpose. The paper deals with the problem of basic generalizations of Hadamard matrices associated with maximum determinant matrices or determinant-suboptimal matrices with orthogonal columns (weighing matrices, Mersenne and Euler matrices, etc.); calculation methods for the quasi-orthogonal local maximum determinant Mersenne matrices have not been studied sufficiently. The goal of this paper is to develop the theory of Mersenne and Hadamard matrices on the basis of research into the generalized Scarpis method. Methods. Extreme solutions are found in general by minimization of the maximum of the absolute values of the elements of the studied matrices, followed by their classification according to the number of levels and their values depending on the order. Less universal but more effective methods are based on structural invariants of quasi-orthogonal matrices (the Sylvester, Paley, and Scarpis methods, etc.). Results. Generalizations of Hadamard and Belevitch matrices as a family of quasi-orthogonal matrices of odd orders are observed; they include, in particular, two-level Mersenne matrices. Definitions of section and layer on the set of generalized matrices are proposed. Calculation algorithms for matrices of adjacent layers and sections by matrices of lower orders are described. Approximation examples of the Belevitch matrix structures up to the 22nd critical order by a Mersenne matrix of the third order are given. A new formulation of the modified Scarpis method to approximate Hadamard matrices of high orders by lower-order Mersenne matrices is proposed. The Williamson method is described by an example of approximating one-modular-level matrices by matrices with a small number of levels. Practical relevance. The efficiency of this development direction for band-pass filter creation is justified. Algorithms for Mersenne matrix design by the Scarpis method are used in developing software of the research program complex. Mersenne filters are based on the suboptimal by
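
    Of the structural methods named above, the Sylvester doubling construction is the simplest to state: given a Hadamard matrix H of order n, the block matrix [[H, H], [H, -H]] is Hadamard of order 2n. A minimal sketch (the Scarpis method itself is more involved and is not reproduced here):

```python
import numpy as np

def sylvester(order):
    """Build a Hadamard matrix of the given power-of-two order by
    Sylvester doubling, starting from the trivial order-1 matrix."""
    H = np.array([[1]])
    while H.shape[0] < order:
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester(8)
print(H @ H.T)   # 8 * identity confirms the rows are orthogonal
```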

  9. Application of Indenting Method for Calculation of Activation Energy

    International Nuclear Information System (INIS)

    Kim, Jong-Seog; Kim, Tae-Ryong

    2006-01-01

    For the calculation of the activation energy of cable materials, we used to apply the break-elongation test in accordance with ASTM D412 (Standard Test Methods for Rubber Properties in Tension). For cable jackets and insulation of regular thickness, the break-elongation test had been preferred since it showed a linear character in the activation energy curve. But for cables with irregular thickness or a rugged inner surface, break-elongation tests show scattered data which cannot be used for the calculation of activation energy. It is also not easy to prepare a break-elongation specimen in accordance with ASTM D412 for cables smaller than 13 mm in diameter. In the cases above, we sometimes use the TGA method, which heats the specimen from 50 °C to 700 °C at heating rates of 10, 15, and 20 °C/min. However, TGA is of questionable representativeness for natural aging in the plant, since it measures the rate of weight loss during burning, which may involve an aging mechanism different from that of natural aging. To solve the above problems, we investigated alternatives such as the indenter test. The indenter test is very convenient since it does not require a special test specimen as the break-elongation test does; a regular outer cable surface is its only requirement. Experience with activation energy calculation using the indenter test is described herein.
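
    Whatever the degradation metric (elongation, indenter modulus, TGA weight loss), the activation energy step is the same Arrhenius fit: measure a degradation rate at several ageing temperatures and fit ln(rate) against 1/T. A sketch with hypothetical rate data:

```python
import numpy as np

# Arrhenius fit: rate = A * exp(-Ea / (R T)), so ln(rate) is linear in 1/T
# with slope -Ea/R. Temperatures and rates below are hypothetical.
R = 8.314                                          # J/(mol K)
T = np.array([90.0, 110.0, 130.0]) + 273.15        # ageing temperatures [K]
rate = np.array([0.011, 0.052, 0.198])             # e.g. indenter-modulus change /day

slope, _ = np.polyfit(1.0 / T, np.log(rate), 1)
Ea = -slope * R
print(f"activation energy = {Ea / 1000:.1f} kJ/mol")
```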

  10. A Novel TRM Calculation Method by Probabilistic Concept

    Science.gov (United States)

    Audomvongseree, Kulyos; Yokoyama, Akihiko; Verma, Suresh Chand; Nakachi, Yoshiki

    In a new competitive environment, it becomes possible for third parties to access a transmission facility. In this structure, to efficiently manage the utilization of the transmission network, a new definition of Available Transfer Capability (ATC) has been proposed. According to the North American Electric Reliability Council (NERC) definition, ATC depends on several parameters, i.e., Total Transfer Capability (TTC), Transmission Reliability Margin (TRM), and Capacity Benefit Margin (CBM). This paper focuses on the calculation of TRM, a security margin reserved against uncertainty in system conditions. A probabilistic TRM calculation method is proposed in this paper. Based on the modeling of load forecast error and error in transmission line limits, various cases of transmission transfer capability and its related probabilistic nature can be calculated. By applying the proposed concept of risk analysis, the appropriate required amount of TRM can be obtained. The objective of this research is to provide realistic information on the actual ability of the network, which may be an alternative choice for system operators to make appropriate decisions in the competitive market. The advantages of the proposed method are illustrated by application to the IEEJ-WEST10 model system.
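
    The probabilistic idea can be sketched in a few lines: sample the uncertain inputs, evaluate the transfer capability per sample, and size TRM so that the residual shortfall risk is acceptable. The distributions, capability model and numbers below are hypothetical:

```python
import numpy as np

# Monte Carlo sizing of TRM: sample load forecast error and line-limit
# error, compute the per-sample transfer capability, and reserve enough
# margin that the accepted shortfall risk is met.
rng = np.random.default_rng(7)
n = 100_000
ttc_nominal = 1200.0                                  # nominal TTC [MW]
load_error = rng.normal(0.0, 40.0, n)                 # MW
limit_error = rng.normal(0.0, 25.0, n)                # MW
capability = ttc_nominal - load_error + limit_error   # per-sample capability

accepted_risk = 0.05                                  # tolerate 5% shortfall
trm = ttc_nominal - np.quantile(capability, accepted_risk)
print(f"TRM = {trm:.0f} MW -> usable transfer {ttc_nominal - trm:.0f} MW")
```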

  11. Bulk Electric Load Cost Calculation Methods: Iraqi Network Comparative Study

    Directory of Open Access Journals (Sweden)

    Qais M. Alias

    2016-09-01

    It is vital in any industry to recover the invested capital plus running costs and a margin of profit for the industry to flourish. The electricity industry touches everyday life and follows the same financial-economic strategy. Cost allocation is a major issue in all sectors of the electric industry, viz., generation, transmission and distribution. Generation and distribution service costing is well documented in the literature, while the transmission share is still in need of research. In this work, the cost of supplying a bulk electric load connected to the EHV system is calculated. A basic lump-average method is used to provide a rough costing guide. Also, two transmission pricing methods are employed, namely, the postage-stamp and the load-flow based MW-distance methods, to calculate the transmission share in the total cost of each individual bulk load. The results of the three costing methods are then analyzed and compared for the 400 kV Iraqi power grid considered as a case study.
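
    The postage-stamp method is the simplest of the three to state: each bulk load pays the common transmission cost in proportion to its share of system peak demand, irrespective of actual power flows (the MW-distance method would instead weight each load's MW by the line distances its power travels). A sketch with hypothetical numbers:

```python
# Postage-stamp allocation of a shared transmission cost.
def postage_stamp(total_cost, peak_loads):
    """Split total_cost across loads in proportion to peak demand [MW]."""
    system_peak = sum(peak_loads.values())
    return {name: total_cost * mw / system_peak for name, mw in peak_loads.items()}

loads_mw = {"bulk load A": 300.0, "bulk load B": 450.0, "bulk load C": 250.0}
for name, cost in postage_stamp(25.0e6, loads_mw).items():
    print(f"{name}: {cost / 1e6:5.2f} M$/yr")
```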

  12. The New Performance Calculation Method of Fouled Axial Flow Compressor

    Directory of Open Access Journals (Sweden)

    Huadong Yang

    2014-01-01

    Fouling is the most important performance degradation factor, so it is necessary to accurately predict the effect of fouling on engine performance. In previous research, it has proved very difficult to accurately model a fouled axial flow compressor. This paper develops a new performance calculation method for fouled multistage axial flow compressors based on experimental results and operating data. For a multistage compressor, the whole compressor is decomposed into two sections. The first section includes the first 50% of the stages, which reflect the fouling level, and the second section includes the last 50% of the stages, which are treated as clean because they accumulate fewer deposits. In this model, the performance of the first section is obtained by combining the scaling law method and a linear progression model with the traditional stage stacking method; ambient conditions and engine configurations are considered at the same time. The performance of the second section, on the other hand, is calculated by the averaged infinitesimal stage method, which is based on Reynolds' law of similarity. Finally, the model is successfully applied to predict an 8-stage axial flow compressor and the 16-stage LM2500-30 compressor. The change of thermodynamic parameters such as pressure ratio and efficiency with operating time and stage number is analyzed in detail.

  13. Evaluation of cost estimates and calculation methods used by SKB

    International Nuclear Information System (INIS)

    1994-01-01

    The Swedish Nuclear Fuel Management Co. (SKB) has estimated the costs for decommissioning the Swedish nuclear power plants and managing the nuclear wastes in a 'traditional' manner, i.e., by handling uncertainties through percentage additions. A 'normal' addition is used for uncertainties in specified technical systems. 'Extra' additions are used for systems uncertainties. An alternative method is suggested, using top-down principles for uncertainties, which should be applied successively, giving higher precision as knowledge accumulates. This type of calculation can help project managers to identify and deal with areas common to different partial projects. A first step in this direction would be to perform sensitivity analyses for the most important calculation parameters. 21 refs

  14. A density gradient theory based method for surface tension calculations

    DEFF Research Database (Denmark)

    Liang, Xiaodong; Michelsen, Michael Locht; Kontogeorgis, Georgios

    2016-01-01

    The density gradient theory has become a widely used framework for calculating surface tension, within which the same equation of state is used for the interface and the bulk phases, because it is a theoretically sound, consistent and computationally affordable approach. Based on the observation that the optimal density path from the geometric-mean density gradient theory passes through the saddle point of the tangent plane distance to the bulk phases, we propose to estimate surface tension with an approximate density path profile that goes through this saddle point. The linear density gradient theory, which assumes linearly distributed densities between the two bulk phases, has also been investigated. Numerical problems do not occur with these density path profiles. These two approximation methods, together with the full density gradient theory, have been used to calculate the surface tension of various...

  15. Charged-particle calculations using Boltzmann transport methods

    International Nuclear Information System (INIS)

    Hoffman, T.J.; Dodds, H.L. Jr.; Robinson, M.T.; Holmes, D.K.

    1981-01-01

    Several aspects of radiation damage effects in fusion reactor neutron and ion irradiation environments are amenable to treatment by transport theory methods. In this paper, multigroup transport techniques are developed for the calculation of charged particle range distributions, reflection coefficients, and sputtering yields. The Boltzmann transport approach can be implemented, with minor changes, in standard neutral particle computer codes. With the multigroup discrete ordinates code, ANISN, determination of ion and target atom distributions as functions of position, energy, and direction can be obtained without the stochastic error associated with atomistic computer codes such as MARLOWE and TRIM. With the multigroup Monte Carlo code, MORSE, charged particle effects can be obtained for problems associated with very complex geometries. Results are presented for several charged particle problems. Good agreement is obtained between quantities calculated with the multigroup approach and those obtained experimentally or by atomistic computer codes

  16. Large-scale atomic calculations using variational methods

    Energy Technology Data Exchange (ETDEWEB)

    Joensson, Per

    1995-01-01

    Atomic properties, such as radiative lifetimes, hyperfine structures and isotope shift, have been studied both theoretically and experimentally. Computer programs which calculate these properties from multiconfiguration Hartree-Fock (MCHF) and configuration interaction (CI) wave functions have been developed and tested. To study relativistic effects, a program which calculates hyperfine structures from multiconfiguration Dirac-Fock (MCDF) wave functions has also been written. A new method of dealing with radial non-orthogonalities in transition matrix elements has been investigated. This method allows two separate orbital sets to be used for the initial and final states, respectively. It is shown that, once the usual orthogonality restrictions have been overcome, systematic MCHF calculations are able to predict oscillator strengths in light atoms with high accuracy. In connection with recent high-power laser experiments, time-dependent calculations of the atomic response to intense laser fields have been performed. Using the frozen-core approximation, where the atom is modeled as an active electron moving in the average field of the core electrons and the nucleus, the active electron has been propagated in time under the influence of the laser field. Radiative lifetimes and hyperfine structures of excited states in sodium and silver have been experimentally determined using time-resolved laser spectroscopy. By recording the fluorescence light decay following laser excitation in the vacuum ultraviolet spectral region, the radiative lifetimes and hyperfine structures of the 7p 2P states in silver have been measured. The delayed-coincidence technique has been used to make very accurate measurements of the radiative lifetimes and hyperfine structures of the lowest 2P states in sodium and silver. 77 refs, 2 figs, 14 tabs.

  17. Large-scale atomic calculations using variational methods

    International Nuclear Information System (INIS)

    Joensson, Per.

    1995-01-01

    Atomic properties, such as radiative lifetimes, hyperfine structures and isotope shift, have been studied both theoretically and experimentally. Computer programs which calculate these properties from multiconfiguration Hartree-Fock (MCHF) and configuration interaction (CI) wave functions have been developed and tested. To study relativistic effects, a program which calculates hyperfine structures from multiconfiguration Dirac-Fock (MCDF) wave functions has also been written. A new method of dealing with radial non-orthogonalities in transition matrix elements has been investigated. This method allows two separate orbital sets to be used for the initial and final states, respectively. It is shown that, once the usual orthogonality restrictions have been overcome, systematic MCHF calculations are able to predict oscillator strengths in light atoms with high accuracy. In connection with recent high-power laser experiments, time-dependent calculations of the atomic response to intense laser fields have been performed. Using the frozen-core approximation, where the atom is modeled as an active electron moving in the average field of the core electrons and the nucleus, the active electron has been propagated in time under the influence of the laser field. Radiative lifetimes and hyperfine structures of excited states in sodium and silver have been experimentally determined using time-resolved laser spectroscopy. By recording the fluorescence light decay following laser excitation in the vacuum ultraviolet spectral region, the radiative lifetimes and hyperfine structures of the 7p 2P states in silver have been measured. The delayed-coincidence technique has been used to make very accurate measurements of the radiative lifetimes and hyperfine structures of the lowest 2P states in sodium and silver. 77 refs, 2 figs, 14 tabs.

  18. A comparison of Nodal methods in neutron diffusion calculations

    Energy Technology Data Exchange (ETDEWEB)

    Tavron, Barak [Israel Electric Company, Haifa (Israel) Nuclear Engineering Dept. Research and Development Div.

    1996-12-01

    The nuclear engineering department at IEC uses three neutron diffusion codes based on nodal methods in its reactor analysis. The codes GNOMER, ADMARC and NOXER solve the neutron diffusion equation to obtain flux and power distributions in the core. The resulting flux distributions are used for the fuel cycle analysis and for fuel reload optimization. This work presents a comparison of the various nodal methods employed in the above codes. Nodal methods (also called coarse-mesh methods) have been designed to solve problems that contain relatively coarse areas of homogeneous composition. In the nodal method, the parts of the equation that represent the state in the homogeneous area are solved analytically while, according to various assumptions and continuity requirements, a general solution is sought. Thus the efficiency of the method for this kind of problem is very high compared with the finite element and finite difference methods. On the other hand, using this method one can get only approximate information about the node vicinity (or coarse-mesh area, usually a fuel assembly of about 20 cm size). These characteristics of the nodal method make it suitable for fuel cycle analysis and reload optimization. This analysis requires many subsequent calculations of the flux and power distributions for the fuel assemblies, while there is no need for a detailed distribution within the assembly. For obtaining a detailed distribution within the assembly, methods of power reconstruction may be applied. However, homogenization of fuel assembly properties, required for the nodal method, may cause difficulties when applied to fuel assemblies with many absorber rods, due to the resulting strong heterogeneity of neutron properties within the assembly. (author).

  19. An integral nodal variational method for multigroup criticality calculations

    International Nuclear Information System (INIS)

    Lewis, E.E.; Tsoulfanidis, N.

    2003-01-01

    An integral formulation of the variational nodal method is presented and applied to a series of benchmark criticality problems. The method combines an integral transport treatment of the even-parity flux within the spatial node with an odd-parity spherical harmonics expansion of the Lagrange multipliers at the node interfaces. The response matrices that result from this formulation are compatible with those in the VARIANT code at Argonne National Laboratory. Either homogeneous or heterogeneous nodes may be employed. In general, for calculations requiring higher-order angular approximations, the integral method yields solutions with comparable accuracy while requiring substantially less CPU time and memory than the standard spherical harmonics expansion using the same spatial approximations. (author)

  20. A sub-structure method for multidimensional integral transport calculations

    International Nuclear Information System (INIS)

    Kavenoky, A.; Stankovski, Z.

    1983-03-01

    A new method has been developed for fine-structure burn-up calculations of very heterogeneous, large-size media. It is a generalization of the well-known surface-source method, allowing the coupling of actual two-dimensional heterogeneous assemblies, called sub-structures. The method has been applied to a rectangular medium, divided into sub-structures, containing rectangular and/or cylindrical fuel, moderator and structure elements. The sub-structures are divided into homogeneous zones. A zone-wise flux expansion is used to formulate a direct collision probability problem within each sub-structure (linear or flat flux expansion in the rectangular zones, flat flux in the others). The coupling of the sub-structures is performed by making extra assumptions on the currents entering and leaving the interfaces. The accuracies and computing times achieved are illustrated by numerical results on two benchmark problems.

  1. Calculation of Multiphase Chemical Equilibrium by the Modified RAND Method

    DEFF Research Database (Denmark)

    Tsanas, Christos; Stenby, Erling Halfdan; Yan, Wei

    2017-01-01

    A robust and efficient algorithm for simultaneous chemical and phase equilibrium calculations is proposed. It combines two individual nonstoichiometric solving procedures: a nested-loop method with successive substitution for the first steps and final convergence with the second-order modified RAND method. The modified RAND extends the classical RAND method from single-phase chemical reaction equilibrium of ideal systems to multiphase chemical equilibrium of nonideal systems. All components in all phases are treated in the same manner and the system Gibbs energy can be used to monitor convergence. This is the first time that modified RAND was applied to multiphase chemical equilibrium systems. The combined algorithm was tested using nine examples covering vapor–liquid (VLE) and vapor–liquid–liquid equilibria (VLLE) of ideal and nonideal reaction systems. Successive substitution provided good initial...

  2. Nested element method in multidimensional neutron diffusion calculations

    International Nuclear Information System (INIS)

    Altiparmakov, D.V.

    1983-01-01

    A new numerical method is developed that is particularly efficient in solving the multidimensional neutron diffusion equation in geometrically complex systems. The needs for a generally applicable and fast running computer code have stimulated the inroad of a nonclassical (R-function) numerical method into the nuclear field. By using the R-functions, the geometrical components of the diffusion problem are a priori analytically implemented into the approximate solution. The class of functions, to which the approximate solution belongs, is chosen as close to the exact solution class as practically acceptable from the time consumption point of view. That implies a drastic reduction of the number of degrees of freedom, compared to the other methods. Furthermore, the reduced number of degrees of freedom enables calculation of large multidimensional problems on small computers

  3. A unique manual method for emergency offsite dose calculations

    International Nuclear Information System (INIS)

    Wildner, T.E.; Carson, B.H.; Shank, K.E.

    1987-01-01

    This paper describes a manual method developed for performing emergency offsite dose calculations for PP&L's Susquehanna Steam Electric Station. The method is based on a three-part carbonless form. The front page guides the user through selection of the appropriate accident case and inclusion of meteorological and effluent data. By circling the applicable accident descriptors, the user selects the dose factors on pages 2 and 3, which are then simply multiplied to yield the whole body and thyroid dose rates at the plant boundary and at two, five, and ten miles. The process used to generate the worksheet is discussed, including the method used to incorporate the observed terrain effects on airflow patterns caused by the Susquehanna River Valley topography.
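
    The arithmetic such a form reduces to is a chain of multiplications: dose rate = release rate x atmospheric dispersion factor x dose conversion factor, pre-multiplied per accident case into single dose factors. A sketch of that chain with hypothetical placeholder values, not plant data:

```python
# Sketch of the worksheet arithmetic. All values are hypothetical
# placeholders chosen only to illustrate the multiplication chain.
release_rate = 1.0e8               # noble gas release rate [Bq/s]
chi_over_q = {"boundary": 3.0e-4, "2 mi": 8.0e-5,
              "5 mi": 2.0e-5, "10 mi": 7.0e-6}   # dispersion factors [s/m^3]
dcf_whole_body = 5.0e-14           # immersion dose factor [Sv/s per Bq/m^3]

for location, xq in chi_over_q.items():
    dose_rate = release_rate * xq * dcf_whole_body * 3600.0   # [Sv/h]
    print(f"{location:>8}: {dose_rate * 1e6:6.2f} uSv/h")
```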

  4. Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.

    Science.gov (United States)

    Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P

    2018-01-04

    clearly improved with MC-based OSEM reconstruction, e.g., the activity recovery was 88% for the largest sphere, while it was 66% for AC-OSEM and 79% for RRC-OSEM. The GPU-based MC code generated an MC-based SPECT/CT reconstruction within a few minutes, and reconstructed patient images of 177Lu-DOTATATE treatments revealed clearly improved resolution and contrast.

  5. NaNet-10: a 10GbE network interface card for the GPU-based low-level trigger of the NA62 RICH detector

    International Nuclear Information System (INIS)

    Ammendola, R.; Biagioni, A.; Frezza, O.; Lonardo, A.; Cicero, F. Lo; Martinelli, M.; Paolucci, P.S.; Pastorelli, E.; Simula, F.; Tosoratto, L.; Vicini, P.; Fiorini, M.; Neri, I.; Lamanna, G.; Piandani, R.; Pontisso, L.; Sozzi, M.; Rossetti, D.

    2016-01-01

    A GPU-based low level (L0) trigger is currently integrated in the experimental setup of the RICH detector of the NA62 experiment to assess the feasibility of building more refined physics-related trigger primitives and thus improve the trigger discriminating power. To ensure the real-time operation of the system, a dedicated data transport mechanism has been implemented: an FPGA-based Network Interface Card (NaNet-10) receives data from detectors and forwards them with low, predictable latency to the memory of the GPU performing the trigger algorithms. Results of the ring-shaped hit patterns reconstruction will be reported and discussed

  6. [Evaluation of methods to calculate dialysis dose in daily hemodialysis].

    Science.gov (United States)

    Maduell, F; Gutiérrez, E; Navarro, V; Torregrosa, E; Martínez, A; Rius, A

    2003-01-01

    Daily dialysis has shown excellent clinical results because a higher frequency of dialysis is more physiological. Different methods have been described to calculate the dialysis dose taking the change in frequency into consideration. The aim of this study was to calculate all dialysis dose possibilities and evaluate the better and more practical options. Eight patients, 6 males and 2 females, on standard 4 to 5 hour thrice-weekly on-line hemodiafiltration (S-OL-HDF) were switched to daily on-line hemodiafiltration (D-OL-HDF), 2 to 2.5 hours six times per week. Dialysis parameters were identical during both periods and only the frequency and dialysis time of each session were changed. Time average concentration (TAC), time average deviation (TAD), normalized protein catabolic rate (nPCR), Kt/V, equilibrated Kt/V (eKt/V), equivalent renal urea clearance (EKR), standard Kt/V (stdKt/V), urea reduction ratio (URR), hemodialysis product and time off dialysis were measured. Daily on-line hemodiafiltration was well accepted and tolerated. Patients maintained the same TAC, although TAD decreased from 9.7 ± 2 mg/dl at baseline to 6.2 ± 2 mg/dl after six months, and time off dialysis was reduced to half. Dialysis frequency is an important urea kinetic parameter that must be taken into consideration. It is necessary to use EKR, stdKt/V or weekly URR to calculate the dialysis dose for an adequate comparison between dialysis schedules of different frequency.
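
    For concreteness, the per-session dose in such comparisons is commonly the second-generation Daugirdas single-pool Kt/V (a standard formula, though this paper's exact choice of equations is not reproduced here). The hypothetical numbers below illustrate why per-session values cannot simply be compared across schedules of different frequency, which is the abstract's point:

```python
import math

def sp_ktv(pre_urea, post_urea, hours, uf_litres, weight_kg):
    """Second-generation Daugirdas single-pool Kt/V from pre/post urea,
    session length and ultrafiltration volume. A per-session measure:
    it does not by itself account for treatment frequency."""
    R = post_urea / pre_urea
    return -math.log(R - 0.008 * hours) + (4.0 - 3.5 * R) * uf_litres / weight_kg

# Hypothetical sessions: long thrice-weekly vs short daily treatment.
long_session = sp_ktv(140.0, 45.0, hours=4.5, uf_litres=2.5, weight_kg=70.0)
short_session = sp_ktv(110.0, 60.0, hours=2.0, uf_litres=1.2, weight_kg=70.0)
print(f"3x/week: per-session Kt/V = {long_session:.2f}")
print(f"6x/week: per-session Kt/V = {short_session:.2f}")
```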

  7. Moment methods with effective nuclear Hamiltonians; calculations of radial moments

    International Nuclear Information System (INIS)

    Belehrad, R.H.

    1981-02-01

    A truncated orthogonal polynomial expansion is used to evaluate the expectation value of the radial moments of the one-body density of nuclei. The expansion contains the configuration moments <R^(k)>, <R^(k)H>, and <R^(k)H^2>, where R^(k) is the operator for the k-th power of the radial coordinate r, and H is the effective nuclear Hamiltonian, which is the sum of the relative kinetic energy operator and the Brueckner G matrix. Configuration moments are calculated using trace reduction formulae where the proton and neutron orbitals are treated separately in order to find expectation values of good total isospin. The operator averages are taken over many-body shell model states in the harmonic oscillator basis where all particles are active and single-particle orbitals through six major shells are included. The radial moment expectation values are calculated for the nuclei 16O, 40Ca, and 58Ni, and we find that <R^(k)H^2> is usually the largest term in the expansion, giving a large model-space dependence to the results. For each of the 3 nuclei, a model space is found which gives the desired rms radius, and the other 5 lowest moments then compare favorably with other theoretical predictions. Finally, we use a method of Gordon (5) to employ the lowest 6 radial moment expectation values in the calculation of elastic electron scattering from these nuclei. For low to moderate momentum transfer, the results compare favorably with the experimental data.

  8. Study on calculation methods for the effective delayed neutron fraction

    International Nuclear Information System (INIS)

    Irwanto, Dwi; Obara, Toru; Chiba, Go; Nagaya, Yasunobu

    2011-03-01

    The effective delayed neutron fraction β_eff is one of the important neutronic parameters from the viewpoint of reactor kinetics. Several Monte-Carlo-based methods to estimate β_eff have been proposed to date. In order to quantify the accuracy of these methods, we study calculation methods for β_eff by analyzing various fast neutron systems, including the bare spherical systems (Godiva, Jezebel, Skidoo, Jezebel-240), the reflected spherical systems (Popsy, Topsy, Flattop-23), MASURCA-R2 and MASURCA-ZONA2, and FCA XIX-1, XIX-2 and XIX-3. These analyses are performed by using SLAROM-UF and CBG for the deterministic method and MVP-II for the Monte Carlo method. We calculate β_eff with various definitions, such as the fundamental value β_0, the standard definition, Nauchi's definition and Meulekamp's definition, and compare these results with each other. Through the present study, we find the following. The largest difference among the standard definition of β_eff, Nauchi's β_eff and Meulekamp's β_eff is approximately 10%. The fundamental value β_0 is quite larger than the others in several cases. For all the cases, Meulekamp's β_eff is always higher than Nauchi's β_eff. This is because Nauchi's β_eff considers the average neutron multiplicity value per fission, which is large in the high energy range (1 MeV–10 MeV), while the definition of Meulekamp's β_eff does not include this parameter. Furthermore, we evaluate the multi-generation effect on β_eff values and demonstrate that this effect should be considered to obtain the standard definition values of β_eff. (author)

  9. A Method for Calculating the Mean Orbits of Meteor Streams

    Science.gov (United States)

    Voloshchuk, Yu. I.; Kashcheev, B. L.

    An examination of the published catalogs of orbits of meteor streams and of a large number of works devoted to the selection of streams, their analysis and interpretation, showed that elements of stream orbits are calculated, as a rule, as arithmetical (sometimes weighted) sample means. On the basis of these means, a search for parent bodies, a study of the evolution of swarms generating these streams, an analysis of one-dimensional and multidimensional distributions of these elements, etc., are performed. We show that systematic errors in the estimates of elements of the mean orbits are present in each of the catalogs. These errors are caused by the formal averaging of orbital elements over the sample, while ignoring the fact that they represent not only correlated, but dependent quantities, with interrelations between them that are in most cases nonlinear. Numerous examples are given of such inaccuracies, in particular, cases where the "mean orbit of the stream" recorded by ground-based techniques does not cross the Earth's orbit. We suggest a computation algorithm in which the averaging over the sample is carried out at the initial stage of the calculation of the mean orbit, and only for the variables required for subsequent calculations. After this, the known astrometric formulas are used to sequentially calculate all other parameters of the stream, considered now as a standard orbit. Variance analysis is used to estimate the errors in orbital elements of the streams, in the case that their orbits are obtained by averaging the orbital elements of meteoroids forming the stream, without taking into account their interdependence. The results obtained in this analysis indicate the behavior of systematic errors in the elements of orbits of meteor streams. As an example, the effect of the incorrect computation method on the distribution of elements of the stream orbits close to the orbits of asteroids of the Apollo, Aten, and Amor groups (AAA asteroids) is examined.

  10. Efficient parallel implicit methods for rotary-wing aerodynamics calculations

    Science.gov (United States)

    Wissink, Andrew M.

    Euler/Navier-Stokes Computational Fluid Dynamics (CFD) methods are commonly used for prediction of the aerodynamics and aeroacoustics of modern rotary-wing aircraft. However, their widespread application to large complex problems is limited by a lack of adequate computing power. Parallel processing offers the potential for dramatic increases in computing power, but most conventional implicit solution methods are inefficient in parallel and new techniques must be adopted to realize its potential. This work proposes alternative implicit schemes for Euler/Navier-Stokes rotary-wing calculations which are robust and efficient in parallel. The first part of this work proposes an efficient parallelizable modification of the Lower-Upper Symmetric Gauss-Seidel (LU-SGS) implicit operator used in the well-known Transonic Unsteady Rotor Navier-Stokes (TURNS) code. The new hybrid LU-SGS scheme couples a point-relaxation approach of the Data Parallel-Lower Upper Relaxation (DP-LUR) algorithm for inter-processor communication with the Symmetric Gauss-Seidel algorithm of LU-SGS for on-processor computations. With the modified operator, TURNS is implemented in parallel using the Message Passing Interface (MPI) for communication. Numerical performance and parallel efficiency are evaluated on the IBM SP2 and Thinking Machines CM-5 multi-processors for a variety of steady-state and unsteady test cases. The hybrid LU-SGS scheme maintains the numerical performance of the original LU-SGS algorithm in all cases and shows a good degree of parallel efficiency. It exhibits a higher degree of robustness than DP-LUR for third-order upwind solutions. The second part of this work examines the use of Krylov subspace iterative solvers for the nonlinear CFD solutions. The hybrid LU-SGS scheme is used as a parallelizable preconditioner. Two iterative methods are tested, Generalized Minimum Residual (GMRES) and Orthogonal s-Step Generalized Conjugate Residual (OSGCR). The Newton method demonstrates good

  11. Method for calculating annual energy efficiency improvement of TV sets

    International Nuclear Information System (INIS)

    Varman, M.; Mahlia, T.M.I.; Masjuki, H.H.

    2006-01-01

    The popularization of 24 h pay-TV, interactive video games, web-TV, VCD and DVD is poised to have a large impact on overall TV electricity consumption in Malaysia. Given this increased consumption, energy efficiency standards present a highly effective measure for decreasing electricity consumption in the residential sector. The main problem in setting an energy efficiency standard is identifying the annual efficiency improvement, due to the lack of time-series statistical data available in developing countries. This study presents a method of calculating the annual energy efficiency improvement for TV sets, which can be used for implementing an energy efficiency standard for TV sets in Malaysia and other developing countries. Although the presented result is only an approximation, it is certainly one practicable way of establishing an energy standard. Furthermore, the method can be used for other appliances without any major modification.
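
    The abstract does not reproduce the formula, but a common way to express an annual efficiency improvement between two reference points is as a compound annual rate of decline in unit energy consumption. A hedged sketch with invented figures (not data from the paper):

    ```python
    def annual_improvement_rate(uec_start, uec_end, years):
        """Compound annual rate of efficiency improvement implied by a drop in
        unit energy consumption (UEC) over a span of years. Illustrative only:
        the paper's exact formulation is not reproduced here."""
        return 1.0 - (uec_end / uec_start) ** (1.0 / years)

    # Hypothetical figures: a TV model's UEC falling from 120 to 95 kWh/yr over 8 years.
    rate = annual_improvement_rate(120.0, 95.0, 8)
    print(f"implied annual efficiency improvement: {rate:.2%}")   # about 2.9%
    ```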

  12. Method for calculating annual energy efficiency improvement of TV sets

    Energy Technology Data Exchange (ETDEWEB)

    Varman, M. [Department of Mechanical Engineering, University of Malaya, Lembah Pantai, 50603 Kuala Lumpur (Malaysia); Mahlia, T.M.I. [Department of Mechanical Engineering, University of Malaya, Lembah Pantai, 50603 Kuala Lumpur (Malaysia)]. E-mail: indra@um.edu.my; Masjuki, H.H. [Department of Mechanical Engineering, University of Malaya, Lembah Pantai, 50603 Kuala Lumpur (Malaysia)

    2006-10-15

    The popularization of 24 h pay-TV, interactive video games, web-TV, VCD and DVD is poised to have a large impact on overall TV electricity consumption in Malaysia. Given this increased consumption, energy efficiency standards present a highly effective measure for decreasing electricity consumption in the residential sector. The main problem in setting an energy efficiency standard is identifying the annual efficiency improvement, due to the lack of time-series statistical data available in developing countries. This study presents a method of calculating the annual energy efficiency improvement for TV sets, which can be used for implementing an energy efficiency standard for TV sets in Malaysia and other developing countries. Although the presented result is only an approximation, it is certainly one practicable way of establishing an energy standard. Furthermore, the method can be used for other appliances without any major modification.

  13. A new method for calculation of an air quality index

    Energy Technology Data Exchange (ETDEWEB)

    Ilvessalo, P. [Finnish Meteorological Inst., Helsinki (Finland). Air Quality Dept.

    1995-12-31

    Air quality measurement programs in Finnish towns have expanded during the last few years. As a result, it is more and more difficult to make use of all the measured concentration data. Citizens of Finnish towns are nowadays taking more of an interest in the air quality of their surroundings, and the need to describe air quality in a simplified form has increased. Air quality indices permit the presentation of air quality data in such a way that prevailing conditions are more easily understandable than when using the concentration data as such. Using an air quality index always means that some of the information about the concentrations of contaminants in the air will be lost; how much information can be extracted from a single index number depends on the calculation method. A new method for the calculation of an air quality index has been developed. This index always indicates the overstepping of an air quality guideline level. The index is calculated from the concentrations of all the contaminants measured, and it gives information both about the prevailing air quality and about the short-term trend. It can also warn of an expected exceedance of the guidelines due to one or several contaminants. The new index is especially suitable for real-time monitoring and notification of air quality. The behaviour of the index was studied using material from a measurement period in the spring of 1994 in Kaepylae, Helsinki. Material from a pre-operational period in the town of Oulu was also available. (author)
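
    A guideline-anchored index of this kind can be sketched as the maximum over contaminants of the measured concentration divided by its guideline value, so that any value above 1 signals an exceedance. The guideline figures below are invented placeholders, not the Finnish values, and the formula only illustrates the idea rather than the FMI method itself:

    ```python
    # Sub-index per contaminant = concentration / guideline, so 1.0 marks the
    # guideline level; the overall index is the worst sub-index.
    guidelines = {"NO2": 70.0, "SO2": 80.0, "PM10": 50.0, "CO": 8000.0}  # ug/m3, invented
    measured   = {"NO2": 84.0, "SO2": 12.0, "PM10": 41.0, "CO": 900.0}   # ug/m3, invented

    sub_indices = {c: measured[c] / guidelines[c] for c in guidelines}
    worst = max(sub_indices, key=sub_indices.get)
    index = sub_indices[worst]

    print(f"air quality index = {index:.2f} (driven by {worst})")
    if index > 1.0:
        print(f"guideline level exceeded for {worst}")
    ```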

  14. A new method for calculation of an air quality index

    Energy Technology Data Exchange (ETDEWEB)

    Ilvessalo, P [Finnish Meteorological Inst., Helsinki (Finland). Air Quality Dept.

    1996-12-31

    Air quality measurement programs in Finnish towns have expanded during the last few years. As a result, it is more and more difficult to make use of all the measured concentration data. Citizens of Finnish towns are nowadays taking more of an interest in the air quality of their surroundings, and the need to describe air quality in a simplified form has increased. Air quality indices permit the presentation of air quality data in such a way that prevailing conditions are more easily understandable than when using the concentration data as such. Using an air quality index always means that some of the information about the concentrations of contaminants in the air will be lost; how much information can be extracted from a single index number depends on the calculation method. A new method for the calculation of an air quality index has been developed. This index always indicates the overstepping of an air quality guideline level. The index is calculated from the concentrations of all the contaminants measured, and it gives information both about the prevailing air quality and about the short-term trend. It can also warn of an expected exceedance of the guidelines due to one or several contaminants. The new index is especially suitable for real-time monitoring and notification of air quality. The behaviour of the index was studied using material from a measurement period in the spring of 1994 in Kaepylae, Helsinki. Material from a pre-operational period in the town of Oulu was also available. (author)

  15. A drainage data-based calculation method for coalbed permeability

    International Nuclear Information System (INIS)

    Lai, Feng-peng; Li, Zhi-ping; Fu, Ying-kun; Yang, Zhi-hao

    2013-01-01

    This paper establishes a drainage-data-based calculation method for coalbed permeability. The method combines material balance and production equations. We use a material balance equation to derive the average pressure of the coalbed during production. The dimensionless water production index is introduced into the production equation for the water production stage. For the subsequent stage, in which both gas and water are produced, the gas-water production ratio is introduced to eliminate the effect of flush-flow radius, skin factor, and other uncertain factors in the calculation of coalbed methane permeability. By derivation, the relationship between permeability and surface cumulative liquid production can be described by a single-variable cubic equation. In ten wells of the southern Qinshui coalbed methane field, the computed permeability initially declines and then increases. The results show an exponential relationship between permeability and cumulative water production; the relationship between permeability and cumulative gas production is linear, and that between permeability and surface cumulative liquid production is a cubic polynomial. The regression of permeability against surface cumulative liquid production agrees with the theoretical mathematical relationship. (paper)
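
    The final regression step is a one-liner in practice. A sketch with synthetic data standing in for the drainage records (all numbers invented):

    ```python
    import numpy as np

    # Surface cumulative liquid production (m3) vs computed permeability (mD).
    Q = np.array([0.5, 1.0, 2.0, 3.5, 5.0, 7.0, 9.0, 12.0]) * 1e3
    k = np.array([1.8, 1.5, 1.2, 1.0, 1.1, 1.4, 1.9, 2.8])

    # Fit the single-variable cubic relationship derived in the paper:
    # k = c3*Q**3 + c2*Q**2 + c1*Q + c0
    coeffs = np.polyfit(Q, k, deg=3)
    k_fit = np.polyval(coeffs, Q)

    print("cubic coefficients (c3..c0):", coeffs)
    print(f"residual sum of squares: {np.sum((k - k_fit) ** 2):.4f}")
    ```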

  16. Physics methods for calculating light water reactor increased performances

    International Nuclear Information System (INIS)

    Vandenberg, C.; Charlier, A.

    1988-01-01

    The intensive use of light water reactors (LWRs) has induced modifications of their characteristics and performances in order to improve fissile material utilization and to increase their availability and flexibility in operation. From the conceptual point of view, adequate methods must be used to calculate core characteristics, taking into account present design requirements, e.g., the use of burnable poison, plutonium recycling, etc. From the operational point of view, nuclear plants that produce a large percentage of the electricity in some countries must adapt their planning to the needs of the electrical network and operate on a load-follow basis. Consequently, plant behavior must be predicted and accurately followed in order to improve the plant's capability within safety limits. The Belgonucleaire code system has been developed and extensively validated. It is an accurate, flexible, easily usable, fast-running tool for solving the problems related to LWR technology development. The methods and validation of the two computer codes LWR-WIMS and MICROLUX, which are the main components of the physics calculation system, are explained.

  17. Domain decomposition methods for core calculations using the MINOS solver

    International Nuclear Information System (INIS)

    Guerin, P.; Baudron, A. M.; Lautard, J. J.

    2007-01-01

    Cell-by-cell homogenized transport calculations of an entire nuclear reactor core are currently too expensive for industrial applications, even if a simplified transport (SPn) approximation is used. In order to take advantage of parallel computers, we propose here two domain decomposition methods using the mixed dual finite element solver MINOS. The first is a modal synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem on each sub-domain are taken as basis functions for the resolution of the global problem on the whole domain. The second is an iterative method based on a non-overlapping domain decomposition with Robin interface conditions: at each iteration, the problem is solved on each sub-domain with interface conditions given by the solutions on the adjacent sub-domains at the previous iteration. For both methods, we give numerical results which demonstrate their accuracy and efficiency for the diffusion model on realistic 2D and 3D cores. (authors)
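
    The iterate-and-exchange-interface-data structure of the second method is easiest to see in one dimension. The sketch below uses the classical overlapping alternating Schwarz method with Dirichlet transmission data on -u'' = 1, a simpler relative of the paper's non-overlapping Robin variant, shown only to illustrate how interface values from the previous iterate drive each sub-domain solve:

    ```python
    import numpy as np

    def solve_dirichlet(n, h, ua, ub, f=1.0):
        """Solve -u'' = f on n interior grid points with end values ua, ub."""
        A = (2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2
        b = np.full(n, f)
        b[0] += ua / h**2
        b[-1] += ub / h**2
        return np.linalg.solve(A, b)

    # Model problem: -u'' = 1 on (0,1), u(0)=u(1)=0; exact u(x) = x(1-x)/2.
    N = 101
    x = np.linspace(0.0, 1.0, N)
    h = x[1] - x[0]
    iL, iR = 60, 40        # sub-domain 1 = [0, x[iL]], sub-domain 2 = [x[iR], 1]
    g1 = g2 = 0.0          # interface data, updated from the other sub-domain

    for _ in range(30):
        u1 = solve_dirichlet(iL - 1, h, 0.0, g1)   # interior unknowns of domain 1
        g2 = u1[iR - 1]                            # trace of u1 at x[iR]
        u2 = solve_dirichlet(N - iR - 2, h, g2, 0.0)
        g1 = u2[iL - iR - 1]                       # trace of u2 at x[iL]

    print(f"interface values: {g1:.6f}, {g2:.6f}")
    print(f"exact values:     {x[iL]*(1-x[iL])/2:.6f}, {x[iR]*(1-x[iR])/2:.6f}")
    ```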

  18. An improved method for calculation of interface pressure force in PLIC-VOF methods

    International Nuclear Information System (INIS)

    Sefollahi, M.; Shirani, E.

    2004-08-01

    Conventional methods for modeling the surface tension force in Piecewise Linear Interface Calculation-Volume of Fluid (PLIC-VOF) methods, such as Continuum Surface Force (CSF), Continuum Surface Stress (CSS) and also Meier's method, convert the surface tension force into a body force. Not only do they include the force in the interfacial cells but also in the neighboring cells, and thus they produce spurious currents. The pressure jump due to the surface tension is also not calculated accurately by these methods. In this paper a more accurate method for applying the interface force in the computational modeling of free surfaces and interfaces with PLIC-VOF methods is developed. The method is based on evaluating the surface tension force only in the interfacial cells and not in the neighboring cells. In addition, the interface normal and the interface surface area needed for the calculation of the surface tension force are computed more accurately. The present method is applied to a two-dimensional motionless drop of liquid and a bubble of gas, as well as to a non-circular two-dimensional drop oscillating due to the surface tension force, in an initially stagnant fluid with no gravity. The results are compared with those obtained with the CSF, CSS and Meier methods. It is shown that the present method calculates the pressure jump at the interface more accurately and produces fewer spurious currents than the CSS and CSF models. (author)

  19. About possibilities using of theoretical calculation methods in radioecology

    International Nuclear Information System (INIS)

    Demoukhamedova, S.D.; Aliev, D.I.; Alieva, I.N.

    2002-01-01

    Full text: An increased radiation level in the environment is accompanied by the accumulation of radioactive compounds in organisms and/or their migration into the biosphere. Radiotoxins accumulate in irradiated plants and animals as a result of disturbed exchange processes, and they play an important role in the pathogenesis of irradiation. It is well known that even small quantities of pesticides can intensify the radiation effect. To understand the mechanism of the radiation effect on physiologically active compounds and their complexes, knowledge of the three-dimensional organization and electronic structure of such molecules is essential. This work is devoted to the pesticides of the carbamate range, i.e., 'sevin' and its derivatives, whose physiological activity has been connected with cholinesterase degradation. The spatial organization and conformational possibilities of the pesticides have been studied using the method of theoretical conformational analysis, on the basis of a computational program developed in the laboratory of Molecular Biophysics at Baku State University. The quantum-chemical methods CNDO/2, AM1 and PM3 and the program complex 'LEV' were used to study the electronic structures of 'sevin' and a number of its analogues. Charge distributions on the atoms, optimization of geometrical and electrooptic parameters, as well as molecular electrostatic potentials, electron densities and nuclear forces were calculated. Visual maps and surfaces of the valence electron density distribution in a given plane and of the projected distribution of electron-nuclear forces were constructed. The geometrical and energetic characteristics and the atomic charges of the investigated pesticides, as well as the maps and relief of the valence electron density distribution, have been obtained. According to the calculation results, a change of the charge distribution in the naphthalene ring is observed. The conclusion was made that the carbonyl group is essential for

  20. Hybrid Monte-Carlo method for ICF calculations

    International Nuclear Information System (INIS)

    Clouet, J.F.; Samba, G.

    2003-01-01

    … conduction and ray-tracing for laser description. Radiation transport is usually solved by a Monte-Carlo method. In coupling the diffusion approximation with the transport description, the difficult part comes from the need for an implicit discretization of the emission-absorption terms: this problem was solved by using the symbolic Monte-Carlo method. This means that at each step of the simulation a matrix is computed by a Monte-Carlo method which accounts for the radiation energy exchange between the cells. Because the time step is limited by the hydrodynamic motion, energy exchange is limited to a small number of cells and the matrix remains sparse. This matrix is added to the usual diffusion matrix for thermal and radiative conduction; finally we arrive at a non-symmetric linear system to invert. A generalized Marshak condition describes the coupling between transport and diffusion. In this paper we present the principles of the method and a numerical simulation of an ICF hohlraum. We illustrate the benefits of the method by comparing the results with fully implicit Monte-Carlo calculations. In particular we show how the spectral cut-off evolves during the propagation of the radiative front in the gold wall. Several issues are still to be addressed (a robust algorithm for the spectral cut-off calculation, coupling with ALE capabilities): we briefly discuss these problems. (authors)

  1. Calculation-experimental method justifies the life of wagons

    Directory of Open Access Journals (Sweden)

    Валерія Сергіївна Воропай

    2015-11-01

    Full Text Available The article proposes a method to evaluate the technical state of tank wagons operating in the chemical industry. An algorithm for evaluating the technical state of tank wagons was developed which makes it possible, on the basis of diagnosis and analysis of the current condition, to justify a further period of operation. A complex of works on testing the tanks, together with mathematical models for calculating the design strength and reliability, is proposed. The article is thus devoted to solving the problem of effective exploitation of the working fleet of tank wagons. Engineering research on the chemical industry's park has reduced the shortage of rolling stock for the transportation of ammonia. The numerous faults of the chassis and of the main elements of the tank wagons' supporting structure after 20 years of exploitation were analyzed. An algorithm for determining the residual life of specialized tank wagons operating in an industrial plant is proposed, and a procedure for the resource conservation of tank wagons carrying cargo under high pressure is proposed for the first time. The improved procedure for identifying residual life has both theoretical and practical importance.

  2. Acceleration and parallelization calculation of EFEN-SP_3 method

    International Nuclear Information System (INIS)

    Yang Wen; Zheng Youqi; Wu Hongchun; Cao Liangzhi; Li Yunzhao

    2013-01-01

    Since the exponential function expansion nodal-SP_3 (EFEN-SP_3) method needs further improvement in computational efficiency to routinely carry out PWR whole-core pin-by-pin calculations, coarse mesh acceleration and spatial parallelization were investigated in this paper. The coarse mesh acceleration was built by considering a discontinuity factor on each coarse mesh interface and preserving neutron balance within each coarse mesh in space, angle and energy. The spatial parallelization, based on MPI, was implemented with attention to load balancing and to minimizing communication costs, in order to take full advantage of modern computing and storage abilities. Numerical results based on a commercial nuclear power reactor demonstrate a speedup ratio of about 40 for the coarse mesh acceleration and a parallel efficiency of higher than 60% with 40 CPUs for the spatial parallelization. With these two improvements, the EFEN code can complete a PWR whole-core pin-by-pin calculation with 289 × 289 × 218 meshes and 4 energy groups within 100 s using 48 CPUs (2.40 GHz). (authors)

  3. Method of sections in analytical calculations of pneumatic tires

    Science.gov (United States)

    Tarasov, V. N.; Boyarkina, I. V.

    2018-01-01

    Analytical calculations in pneumatic tire theory are preferable to experimental methods. The method of sections applied to a pneumatic tire shell makes it possible to obtain equations for the intensities of internal forces in carcass elements and bead rings. Analytical dependencies for the intensity of the distributed forces have been obtained at tire equator points, on the side walls (poles) and at the pneumatic tire bead rings. For the first time, cylindrical surfaces are used as secant surfaces alongside secant planes. The tire capacity equation has been obtained using the method of sections, by cutting a contact body off from the tire carcass along the contact perimeter with the surface normal to the bearing surface. It has been established that the Laplace equation for this class of pneumatic tire problems contains two unknowns, which requires additional equations. The developed computational schemes of pneumatic tire sections and the new equations help accelerate the improvement of pneumatic tire structures during engineering.
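
    The Laplace equation referred to here is, in standard membrane-shell notation, a single relation between the inflation pressure, the two force intensities and the two principal curvature radii; writing it out makes the indeterminacy explicit (generic form, not copied from the paper):

    ```latex
    % Membrane (Laplace) equation for a shell of revolution such as a tire
    % carcass: inflation pressure p is balanced by the force intensities
    % N_1, N_2 (force per unit length) acting on curvature radii R_1, R_2.
    \[
      p = \frac{N_1}{R_1} + \frac{N_2}{R_2}
    \]
    % One scalar equation in the two unknown intensities N_1 and N_2;
    % this is the indeterminacy noted in the abstract, which the additional
    % section equations must resolve.
    ```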

  4. Calculation methods for advanced concept light water reactor lattices

    International Nuclear Information System (INIS)

    Carmona, S.

    1986-01-01

    In the last few years, several advanced concepts for fuel rod lattices have been studied. Improved fuel utilization is one of the major aims in the development of new fuel rod designs and lattice modifications. By these changes, better performance in fuel economics, fuel burnup and material endurance can be achieved within the frame of the well-known basic Light Water Reactor technology. Among the new concepts involved in these studies that have attracted serious attention are lattices consisting of arrays of annular rods, duplex pellet rods or tight multicells. These new designs of fuel rods and lattices present several computational problems. The treatment of resonance-shielded cross sections is a crucial point in the analyses of these advanced concepts. The purpose of this study was to assess adequate approximation methods for calculating, as accurately as possible, resonance shielding for these new lattices. Although detailed and exact computational methods for the evaluation of the resonance shielding in these lattices are possible, they are quite inefficient when used in lattice codes. The computer time and memory required for this kind of computation are too large for acceptable routine use. In order to overcome these limitations and to make the analyses possible with reasonable use of computer resources, approximation methods are necessary. The usual approximation methods for the resonance energy regions used in routine lattice computer codes cannot adequately handle the evaluation of these new fuel rod lattices. The main contribution of the present work to advanced lattice concepts is the development of an equivalence principle for the calculation of resonance shielding in the annular fuel pellet zone of duplex pellets; the duplex pellet in this treatment consists of two fuel zones with the same absorber isotope in both regions. In the transition from a single duplex rod to an infinite array of this kind of fuel rods, the similarity of the

  5. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems

  6. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    A variational method is developed for systematic numerical computation of physical quantities-bound state energies and scattering amplitudes-in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and present numerical results for simple quantum mechanical systems
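
    The Rayleigh-Ritz/Temple machinery described in these two abstracts can be exercised on the smallest possible "lattice" system: a harmonic oscillator Hamiltonian discretized on a spatial grid. A hedged numpy sketch (simple quantum mechanics, not field theory; the reference spectrum from exact diagonalization stands in for the excited-state gap estimate that Temple's formula needs):

    ```python
    import numpy as np

    # H = -1/2 d^2/dx^2 + 1/2 x^2 on a lattice; exact ground state energy is 0.5.
    n, L = 400, 20.0
    x = np.linspace(-L / 2, L / 2, n)
    h = x[1] - x[0]
    T = (np.eye(n) - 0.5 * np.diag(np.ones(n - 1), 1)
         - 0.5 * np.diag(np.ones(n - 1), -1)) / h**2
    H = T + np.diag(0.5 * x**2)

    # Trial state: Gaussian with a deliberately wrong width (the variational parameter).
    psi = np.exp(-0.6 * x**2)
    psi /= np.linalg.norm(psi)

    mu = psi @ H @ psi                    # Rayleigh-Ritz upper bound on E0
    var = psi @ H @ (H @ psi) - mu**2     # energy variance of the trial state

    E = np.linalg.eigvalsh(H)             # reference spectrum for the gap estimate
    delta = E[1]                          # any lower bound on E1 above mu would do
    temple = mu - var / (delta - mu)      # Temple's lower bound on E0

    print(f"Temple bound {temple:.6f} <= E0 = {E[0]:.6f} <= Rayleigh-Ritz {mu:.6f}")
    ```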

  7. Multi-GPU based acceleration of a list-mode DRAMA toward real-time OpenPET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kinouchi, Shoko [Chiba Univ. (Japan); National Institute of Radiological Sciences, Chiba (Japan); Yamaya, Taiga; Yoshida, Eiji; Tashima, Hideaki [National Institute of Radiological Sciences, Chiba (Japan); Kudo, Hiroyuki [Tsukuba Univ., Ibaraki (Japan); Suga, Mikio [Chiba Univ. (Japan)

    2011-07-01

    OpenPET, which has a physical gap between its two detector rings, is our new PET geometry. In order to realize future radiation therapy guided by OpenPET, real-time imaging is required. We therefore developed a list-mode image reconstruction method using general-purpose graphics processing units (GPUs). For a GPU implementation, the efficiency of the acceleration depends on the implementation method, which must avoid conditional statements. Therefore, in a previous study, we developed a new system model suited to GPU implementation. In this paper, we implemented our image reconstruction method on 4 GPUs to obtain further acceleration, and applied the developed reconstruction method to a small OpenPET prototype. The total iteration time with 4 GPUs was 3.4 times shorter than with a single GPU; compared to a single CPU, we achieved a reconstruction speed-up of 142 times using 4 GPUs. (orig.)

  8. Development of methods for burn-up calculations for LWR's

    International Nuclear Information System (INIS)

    Jaschik, W.

    1978-01-01

    This method is based on having all burnup-dependent data, namely particle densities and neutron spectra, available in a burn-up library. The library is created by means of a small number of easily performed cell burn-up calculations, in which the heterogeneous cell structure and self-shielding effects can be accounted for explicitly. The cluster burn-up is then simulated by adequate correlation of the burn-up data. The advantages of this method are: an exact determination of the real spectrum distribution in the individual fuel element clusters; an exact determination of the burn-up-related spectrum variations for each fuel rod and each burn-up value obtained; accounting for the heterogeneity of the fuel rod cells and the self-shielding in the fuel; high accuracy of the results at comparably low effort; and simple handling, since the computation process is largely automated. The method was implemented by establishing the RSYST modules ABRAJA, MITHOM, and SIMABB within the code system. (orig./HP) [de

  9. Methods for calculating the electrode position Jacobian for impedance imaging.

    Science.gov (United States)

    Boyle, A; Crabb, M G; Jehl, M; Lionheart, W R B; Adler, A

    2017-03-01

    Electrical impedance tomography (EIT) and electrical resistivity tomography (ERT) apply current and measure voltages at the boundary of a domain through electrodes. The movement or incorrect placement of electrodes may lead to modelling errors that result in significant reconstructed image artifacts. These errors may be accounted for by including electrode position estimates in the model. Movement may be reconstructed through a first-order approximation, the electrode position Jacobian. A reconstruction that incorporates electrode position estimates along with conductivity can significantly reduce image artifacts. Conversely, if electrode position is ignored, it can be difficult to distinguish true conductivity changes from reconstruction artifacts, which may increase the risk of a flawed interpretation. In this work, we aim to determine the fastest, most accurate approach for estimating the electrode position Jacobian. Four methods of calculating the electrode position Jacobian were evaluated on a homogeneous halfspace. Results show that the Fréchet derivative and rank-one update methods are competitive in computational efficiency but achieve different solutions for certain values of contact impedance and mesh density.
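
    The baseline against which such methods are judged is a direct perturbation (finite-difference) Jacobian: displace each electrode slightly, re-run the forward model, and difference the measurements. A toy sketch with an invented analytic forward model standing in for a full EIT forward solver (the names and the 1/(pi*sigma*r) potential are illustrative assumptions, not the paper's model):

    ```python
    import numpy as np

    def forward(elec_x, sigma=1.0):
        """Toy 'measurement': potential at each electrode due to a unit current
        source at x = 0 on a homogeneous halfspace-like model (illustrative)."""
        return 1.0 / (np.pi * sigma * np.abs(elec_x))

    def position_jacobian(elec_x, eps=1e-6):
        """Electrode-position Jacobian by central finite differences."""
        m = len(elec_x)
        J = np.zeros((m, m))
        for k in range(m):
            xp, xm = elec_x.copy(), elec_x.copy()
            xp[k] += eps
            xm[k] -= eps
            J[:, k] = (forward(xp) - forward(xm)) / (2.0 * eps)
        return J

    elec = np.array([0.5, 1.0, 1.5, 2.0])    # electrode x-positions (m)
    J = position_jacobian(elec)
    # Analytic check: dV_i/dx_k = -1/(pi*sigma*x_i**2) if i == k, else 0.
    print(np.allclose(np.diag(J), -1.0 / (np.pi * elec**2), rtol=1e-4))   # True
    ```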

  10. A refined method for calculating equivalent effective stratospheric chlorine

    Science.gov (United States)

    Engel, Andreas; Bönisch, Harald; Ostermöller, Jennifer; Chipperfield, Martyn P.; Dhomse, Sandip; Jöckel, Patrick

    2018-01-01

    Chlorine and bromine atoms lead to catalytic depletion of ozone in the stratosphere. Therefore the use and production of ozone-depleting substances (ODSs) containing chlorine and bromine are regulated by the Montreal Protocol to protect the ozone layer. Equivalent effective stratospheric chlorine (EESC) has been adopted as an appropriate metric to describe the combined effects of chlorine and bromine released from halocarbons on stratospheric ozone. Here we revisit the concept of calculating EESC. We derive a refined formulation of EESC based on an advanced concept of ODS propagation into the stratosphere and reactive halogen release. A new transit time distribution is introduced in which the age spectrum for an inert tracer is weighted with the release function for inorganic halogen from the source gases. This distribution is termed the release time distribution. We show that a much better agreement with the inorganic halogen loading from the chemistry transport model TOMCAT is achieved compared with the current formulation. The refined formulation yields EESC levels in the year 1980 for the mid-latitude lower stratosphere which are significantly lower than previously calculated. The year 1980 is commonly used as a benchmark to which EESC must return in order to mark significant progress towards halogen and ozone recovery. Assuming that, under otherwise unchanged conditions, the EESC value must return to the same level in order for ozone to fully recover, we show that in this region of the stratosphere this will take more than 10 years longer than estimated with the current method of calculating EESC. We also present a range of sensitivity studies to investigate the effect of changes and uncertainties in the fractional release factors and in the assumptions on the shape of the release time distributions. We further discuss the value of EESC as a proxy for the future evolution of inorganic halogen loading under changing atmospheric dynamics using simulations from
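
    In its conventional form, EESC is an age-spectrum-weighted sum over source gases of past tropospheric mixing ratios, scaled by the chlorine content and a fractional release factor (bromine species get an extra efficiency factor alpha). A single-species numpy sketch with invented numbers, using the inverse-Gaussian age spectrum that the paper's release time distribution generalizes:

    ```python
    import numpy as np

    def age_spectrum(t, mean_age=3.0, width=1.5):
        """Inverse-Gaussian transit-time (age) spectrum, numerically normalized."""
        lam = mean_age**3 / width**2
        G = np.sqrt(lam / (2.0 * np.pi * t**3)) * np.exp(
            -lam * (t - mean_age) ** 2 / (2.0 * mean_age**2 * t))
        return G / np.trapz(G, t)

    # Invented tropospheric history of one CFC-11-like source gas (ppt).
    years = np.arange(1950, 2101)
    chi = np.interp(years, [1950, 1995, 2100], [0.0, 270.0, 40.0])

    n_cl, f_release = 3, 0.47            # Cl atoms per molecule; fractional release
    t = np.linspace(0.05, 20.0, 400)     # transit times (years)
    G = age_spectrum(t)

    eesc = np.empty_like(chi)
    for i, y in enumerate(years):
        chi_lag = np.interp(y - t, years, chi, left=0.0)   # older air, older mixing ratio
        eesc[i] = n_cl * f_release * np.trapz(chi_lag * G, t)

    print(f"toy EESC in 1980: {eesc[years == 1980][0]:.1f} ppt")
    print(f"toy EESC peak:    {eesc.max():.1f} ppt in {years[eesc.argmax()]}")
    ```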

  11. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)

    Science.gov (United States)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun

    2015-09-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by

  12. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).

    Science.gov (United States)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-10-07

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by

  13. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)

    International Nuclear Information System (INIS)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-01-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon–electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783–97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48–0.53% for the electron beam cases and 0.15–0.17% for the photon beam cases. In terms of efficiency, goMC was ∼4–16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was

  14. Method and program for complex calculation of heterogeneous reactor

    International Nuclear Information System (INIS)

    Kalashnikov, A.G.; Glebov, A.P.; Elovskaya, L.F.; Kuznetsova, L.I.

    1988-01-01

    An algorithm and the GITA program for the complex one-dimensional calculation of a heterogeneous reactor are described; they permit calculations for the reactor and its cell to be conducted simultaneously using the same algorithm. Multigroup macroscopic cross sections for the reactor zones in the thermal energy range are determined according to the technique for calculating a cell with a complicated structure, and then the continuous multigroup calculation of the reactor is performed in the thermal energy range and in the range of neutron thermalization. The kinetic equation is solved using the Pi- and DSn-approximations. [fr

  15. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'indirect estimation method for calculation error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulsed neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated k_eff; in the pulsed neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated k_eff. (author)

  16. SU-F-T-193: Evaluation of a GPU-Based Fast Monte Carlo Code for Proton Therapy Biological Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Taleei, R; Qin, N; Jiang, S [UT Southwestern Medical Center, Dallas, TX (United States); Peeler, C [UT MD Anderson Cancer Center, Houston, TX (United States); Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2016-06-15

    Purpose: Biological treatment plan optimization is of great interest for proton therapy. It requires extensive Monte Carlo (MC) simulations to compute physical dose and biological quantities. Recently, a gPMC package was developed for rapid MC dose calculations on a GPU platform. This work investigated its suitability for proton therapy biological optimization in terms of accuracy and efficiency. Methods: We performed simulations of a proton pencil beam with energies of 75, 150 and 225 MeV in a homogeneous water phantom using gPMC and FLUKA. Physical dose and energy spectra for each ion type on the central beam axis were scored. Relative Biological Effectiveness (RBE) was calculated using the repair-misrepair-fixation model. Microdosimetry calculations were performed using the Monte Carlo Damage Simulation (MCDS) code. Results: Ranges computed by the two codes agreed within 1 mm. The physical dose difference was less than 2.5% at the Bragg peak, and the RBE-weighted dose agreed within 5% at the Bragg peak. Differences in microdosimetric quantities such as the dose-averaged lineal energy and specific energy were < 10%. The simulation time per source particle with FLUKA was 0.0018 s, while gPMC was ∼600 times faster. Conclusion: The physical dose computed by FLUKA and gPMC was in good agreement. The RBE differences along the central axis were small, and the RBE-weighted dose difference was found to be acceptable. The combined accuracy and efficiency makes gPMC suitable for proton therapy biological optimization.

  17. Lambda-guided calculation method (LGC method) for xenon/CT CBF

    Energy Technology Data Exchange (ETDEWEB)

    Sase, Shigeru [Anzai Medical Co., Ltd., Tokyo (Japan); Honda, Mitsuru; Kushida, Tsuyoshi; Seiki, Yoshikatsu; Machida, Keiichi; Shibata, Iekado [Toho Univ., Tokyo (Japan). School of Medicine

    2001-12-01

    A quantitative CBF calculation method for xenon/CT was developed by logically estimating the time-course change rate (rate constant) of the arterial xenon concentration from that of the end-tidal xenon concentration. A single factor (γ) was introduced to correlate the end-tidal rate constant (Ke) with the arterial rate constant (Ka) in a simplified equation. This factor (γ) is thought to reflect the diffusing capacity of the lung for xenon. When an appropriate value is given to γ, it is possible to calculate the arterial rate constant (Calculated Ka) from Ke. To determine γ for each xenon/CT CBF examination, a procedure was established which utilizes the characteristics of the white matter lambda; lambda refers to the xenon brain-blood partition coefficient. Xenon/CT studies were performed on four healthy volunteers. Hemispheric CBF values (47.0±9.0 ml/100 g/min) obtained with the Calculated Ka were close to reported normative values. For a 27-year-old healthy man, the rate constant for the common carotid artery was successfully measured and was nearly equal to the Calculated Ka. The authors conclude that the method proposed in this work, the lambda-guided calculation method, could make xenon/CT CBF substantially reliable and quantitative through the effective use of end-tidal xenon. (author)
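
    The downstream use of the calculated Ka can be sketched with the standard Kety equation, which is how xenon-enhancement curves are usually turned into CBF. Everything below is illustrative: the gamma*Ke line is a placeholder for the paper's (unstated) simplified Ke-to-Ka relation, and all numbers are invented:

    ```python
    import numpy as np
    from scipy.integrate import odeint

    Ke = 0.35          # end-tidal rate constant (1/min), invented
    gamma = 0.8        # lung diffusion factor; Ka = gamma*Ke is a PLACEHOLDER,
    Ka = gamma * Ke    # not the simplified equation of the paper
    lam = 0.8          # white-matter xenon brain-blood partition coefficient
    Ca_max = 30.0      # arterial saturation enhancement (HU), invented

    t = np.linspace(0.0, 6.0, 200)                  # minutes

    def kety(Cb, t, f):
        """Kety equation: brain uptake driven by the arterial wash-in curve."""
        Ca = Ca_max * (1.0 - np.exp(-Ka * t))
        return f * (Ca - Cb / lam)

    f = 0.47                                        # flow in ml/g/min
    Cb = odeint(kety, 0.0, t, args=(f,)).ravel()
    print(f"brain enhancement at 6 min: {Cb[-1]:.1f} HU "
          f"(CBF = {100.0 * f:.0f} ml/100 g/min)")
    ```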

  18. Lambda-guided calculation method (LGC method) for xenon/CT CBF

    International Nuclear Information System (INIS)

    Sase, Shigeru; Honda, Mitsuru; Kushida, Tsuyoshi; Seiki, Yoshikatsu; Machida, Keiichi; Shibata, Iekado

    2001-01-01

    A quantitative CBF calculation method for xenon/CT was developed by logically estimating the time-course change rate (rate constant) of the arterial xenon concentration from that of the end-tidal xenon concentration. A single factor (γ) was introduced to correlate the end-tidal rate constant (Ke) with the arterial rate constant (Ka) in a simplified equation. This factor (γ) is thought to reflect the diffusing capacity of the lung for xenon. When an appropriate value is given to γ, it is possible to calculate the arterial rate constant (Calculated Ka) from Ke. To determine γ for each xenon/CT CBF examination, a procedure was established which utilizes the characteristics of the white matter lambda; lambda refers to the xenon brain-blood partition coefficient. Xenon/CT studies were performed on four healthy volunteers. Hemispheric CBF values (47.0±9.0 ml/100 g/min) obtained with the Calculated Ka were close to reported normative values. For a 27-year-old healthy man, the rate constant for the common carotid artery was successfully measured and was nearly equal to the Calculated Ka. The authors conclude that the method proposed in this work, the lambda-guided calculation method, could make xenon/CT CBF substantially reliable and quantitative through the effective use of end-tidal xenon. (author)

  19. Comparison of Monte Carlo method and deterministic method for neutron transport calculation

    International Nuclear Information System (INIS)

    Mori, Takamasa; Nakagawa, Masayuki

    1987-01-01

    The report outlines the major features of the Monte Carlo method by citing various applications of the method and the techniques used in Monte Carlo codes. Major areas of application include the analysis of measurements on fast critical assemblies, nuclear fusion reactor neutronics analysis, criticality safety analysis, evaluation by the VIM code, and shielding calculations. Major techniques used in Monte Carlo codes include the random walk method, geometry representation (combinatorial geometry; 1st-, 2nd- and 4th-degree surfaces; lattice geometry), nuclear data representation, estimators (track length, collision, analog (absorption), surface crossing, point), and variance reduction (Russian roulette, splitting, exponential transform, importance sampling, correlated sampling). Major features of the Monte Carlo method are as follows: 1) neutron source distributions and systems of complex geometry can be simulated accurately, 2) physical quantities such as the neutron flux in a region, on a surface or at a point can be evaluated, and 3) calculation requires less time. (Nogami, K.)
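
    Two of the listed estimator types can be compared in a few lines. For a purely absorbing slab of thickness L with a unit normal-incidence source, the flux integral is (1 - exp(-sigma*L))/sigma exactly, and both the track-length and the collision estimator are unbiased for it (a toy illustration, not taken from the report):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    sigma, L, n = 1.0, 2.0, 200_000

    d = rng.exponential(1.0 / sigma, n)             # sampled free-flight distances

    track_length = np.minimum(d, L)                 # path length scored inside slab
    collision = np.where(d < L, 1.0 / sigma, 0.0)   # 1/sigma per collision inside

    exact = (1.0 - np.exp(-sigma * L)) / sigma
    print(f"exact:        {exact:.4f}")
    print(f"track length: {track_length.mean():.4f} "
          f"+/- {track_length.std(ddof=1) / np.sqrt(n):.4f}")
    print(f"collision:    {collision.mean():.4f} "
          f"+/- {collision.std(ddof=1) / np.sqrt(n):.4f}")
    # Here the collision estimator happens to have the smaller variance; which
    # estimator wins is problem-dependent, which is why codes offer several.
    ```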

  20. Reactor calculation in coarse mesh by finite element method applied to matrix response method

    International Nuclear Information System (INIS)

    Nakata, H.

    1982-01-01

    The finite element method is applied to the solution of a modified formulation of the matrix-response method, with the aim of performing reactor calculations on a coarse mesh. Good results are obtained with short running times. The method is applicable to problems where heterogeneity is predominant and to evolution problems on coarse meshes where the burnup varies within a single coarse mesh, making the cross sections vary spatially as the evolution proceeds. (E.G.) [pt

  1. TU-AB-BRC-09: Fast Dose-Averaged LET and Biological Dose Calculations for Proton Therapy Using Graphics Cards

    International Nuclear Information System (INIS)

    Wan, H; Tseung, Chan; Beltran, C

    2016-01-01

    Purpose: To demonstrate fast and accurate Monte Carlo (MC) calculations of proton dose-averaged linear energy transfer (LETd) and biological dose (BD) on a Graphics Processing Unit (GPU) card. Methods: A previously validated GPU-based MC simulation of proton transport was used to rapidly generate LETd distributions for proton treatment plans. Since this MC handles proton-nuclei interactions on an event-by-event basis using a Bertini intranuclear cascade-evaporation model, secondary protons were taken into account. The smaller contributions of secondary neutrons and recoil nuclei were ignored. Recent work has shown that LETd values are sensitive to the scoring method. The GPU-based LETd calculations were verified by comparing with a TOPAS custom scorer that uses tabulated stopping powers, following recommendations by other authors. Comparisons were made for prostate and head-and-neck patients. A python script is used to convert the MC-generated LETd distributions to BD using a variety of published linear quadratic models, and to export the BD in DICOM format for subsequent evaluation. Results: Very good agreement is obtained between TOPAS and our GPU MC. Given a complex head-and-neck plan with 1 mm voxel spacing, the physical dose, LETd and BD calculations for 10^8 proton histories can be completed in ∼5 minutes using an NVIDIA Titan X card. The rapid turnover means that MC feedback can be obtained on dosimetric plan accuracy as well as on BD hotspot locations, particularly with regard to their proximity to critical structures. In our institution the GPU MC-generated dose, LETd and BD maps are used to assess plan quality for all patients undergoing treatment. Conclusion: Fast and accurate MC-based LETd calculations can be performed on the GPU. The resulting BD maps provide valuable feedback during treatment plan review. Partially funded by Varian Medical Systems.
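
    The LETd-to-BD conversion step can be sketched directly. The LQ-based form below (a linear LETd dependence of RBE_max with unit RBE_min) is one common published shape, but the coefficient q and the tissue (alpha/beta)_x are placeholders, not the models used in this work:

    ```python
    import numpy as np

    def rbe_weighted_dose(dose, let_d, abx=2.0, q=0.04):
        """Convert physical dose (Gy) and LETd (keV/um) to RBE-weighted dose
        with a generic LQ-based RBE model; q and abx are illustrative values."""
        rbe_max = 1.0 + q * let_d / abx       # asymptotic RBE at vanishing dose
        rbe_min = 1.0                         # simplest assumption
        rbe = (np.sqrt(abx**2 / 4.0 + abx * rbe_max * dose
                       + rbe_min**2 * dose**2) - abx / 2.0) / dose
        return rbe * dose

    # Voxelized toy inputs: LETd rising toward the distal edge of a field.
    dose = np.array([2.0, 2.0, 1.5, 0.5])         # Gy
    let_d = np.array([2.5, 6.0, 9.0, 12.0])       # keV/um
    print("RBE-weighted dose (Gy(RBE)):", np.round(rbe_weighted_dose(dose, let_d), 3))
    ```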

  2. TU-AB-BRC-09: Fast Dose-Averaged LET and Biological Dose Calculations for Proton Therapy Using Graphics Cards

    Energy Technology Data Exchange (ETDEWEB)

    Wan, H; Tseung, Chan; Beltran, C [Mayo Clinic, Rochester, MN (United States)

    2016-06-15

    Purpose: To demonstrate fast and accurate Monte Carlo (MC) calculations of proton dose-averaged linear energy transfer (LETd) and biological dose (BD) on a Graphics Processing Unit (GPU) card. Methods: A previously validated GPU-based MC simulation of proton transport was used to rapidly generate LETd distributions for proton treatment plans. Since this MC handles proton-nuclei interactions on an event-by-event basis using a Bertini intranuclear cascade-evaporation model, secondary protons were taken into account. The smaller contributions of secondary neutrons and recoil nuclei were ignored. Recent work has shown that LETd values are sensitive to the scoring method. The GPU-based LETd calculations were verified by comparing with a TOPAS custom scorer that uses tabulated stopping powers, following recommendations by other authors. Comparisons were made for prostate and head-and-neck patients. A python script is used to convert the MC-generated LETd distributions to BD using a variety of published linear quadratic models, and to export the BD in DICOM format for subsequent evaluation. Results: Very good agreement is obtained between TOPAS and our GPU MC. Given a complex head-and-neck plan with 1 mm voxel spacing, the physical dose, LETd and BD calculations for 10^8 proton histories can be completed in ∼5 minutes using an NVIDIA Titan X card. The rapid turnover means that MC feedback can be obtained on dosimetric plan accuracy as well as on BD hotspot locations, particularly with regard to their proximity to critical structures. In our institution the GPU MC-generated dose, LETd and BD maps are used to assess plan quality for all patients undergoing treatment. Conclusion: Fast and accurate MC-based LETd calculations can be performed on the GPU. The resulting BD maps provide valuable feedback during treatment plan review. Partially funded by Varian Medical Systems.

  3. Accurate methods for calculating atomic processes in high temperature plasmas

    International Nuclear Information System (INIS)

    Keady, J.J.; Abdallah, J.A. Jr.; Clark, R.E.H.

    1992-01-01

    A technique for computing monochromatic X-ray absorption is described and compared to experimental data. Calculations of power loss from carbon plasmas with comprehensive new datasets confirm that the direct inclusion of metastable states can noticeably decrease the calculated power loss

  4. Structural reliability calculation method based on the dual neural network and direct integration method.

    Science.gov (United States)

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty receives wide attention from engineers and scholars because it reflects the structural characteristics and the actual bearing conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but mathematical difficulties remain in the calculation of the multiple integrals. Therefore, a dual neural network method for calculating multiple integrals is proposed in this paper. The dual neural network consists of two neural networks: network A is used to learn the integrand function, and network B is used to simulate the original (antiderivative) function. Using the derivative relationship between the network output and the network input, network B is derived from network A. On this basis, normalization of the performance function is employed in the proposed method to overcome the difficulty of the multiple integrations and to improve the accuracy of the reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean-value first-order second-moment method demonstrate that the proposed method is an efficient and accurate reliability method for structural reliability problems.
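
    The multiple integral at issue is the failure probability P_f = P(g(X) < 0). For a toy performance function it can still be evaluated by direct quadrature, which is exactly the computation the dual neural network is meant to scale to higher dimensions (a sketch with an invented g, plus the Monte Carlo cross-check named in the abstract):

    ```python
    import numpy as np
    from scipy import integrate, stats

    # g(x1, x2) = 3 - x1 - x2 with independent standard normal variables.
    # Conditioning on x1 reduces the double integral to a smooth single one.
    pf_quad, _ = integrate.quad(
        lambda x1: stats.norm.pdf(x1) * stats.norm.sf(3.0 - x1), -10.0, 10.0)

    # Monte Carlo cross-check (one of the benchmark methods in the paper).
    rng = np.random.default_rng(2)
    x = rng.standard_normal((1_000_000, 2))
    pf_mc = np.mean(3.0 - x[:, 0] - x[:, 1] < 0.0)

    exact = stats.norm.sf(3.0 / np.sqrt(2.0))     # x1 + x2 ~ N(0, 2)
    print(f"direct integration: {pf_quad:.5f}")
    print(f"Monte Carlo:        {pf_mc:.5f}")
    print(f"exact:              {exact:.5f}")
    ```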

  5. Comparison of Two-Block Decomposition Method and Chebyshev Rational Approximation Method for Depletion Calculation

    International Nuclear Information System (INIS)

    Lee, Yoon Hee; Cho, Nam Zin

    2016-01-01

    The code gives inaccurate results for nuclides needed in source term analysis, e.g., Sr-90, Ba-137m, Cs-137, etc. A Krylov subspace method was suggested by Yamamoto et al.; it is based on the projection of the solution space of the Bateman equation onto a lower-dimensional Krylov subspace, and it shows good accuracy in detailed burnup chain calculations if the dimension of the Krylov subspace is high enough. In this paper, the two-block decomposition (TBD) method and the Chebyshev rational approximation method (CRAM) are compared in depletion calculations, in terms of accuracy and computing time. In the two-block decomposition method, the system of Bateman equations is decomposed into short- and long-lived blocks according to the magnitude of the effective decay constants. The short-lived block is calculated by the general Bateman solution together with the importance concept, while a matrix exponential with a smaller norm is used for the long-lived block. In the Chebyshev rational approximation there is no decomposition of the Bateman equation system, and the accuracy of the calculation is determined by the order of expansion in the partial fraction decomposition of the rational form. The coefficients in the partial fraction decomposition are determined by a Remez-type algorithm.
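
    The system both methods approximate is N'(t) = M N(t) with a burnup/decay matrix M, whose formal solution is the matrix exponential N(t) = exp(Mt) N(0). For a chain short enough, the analytic Bateman solution gives an exact check (a sketch of the target problem, not of TBD or CRAM themselves):

    ```python
    import numpy as np
    from scipy.linalg import expm

    lam_a, lam_b = 2.0, 0.5          # decay constants (1/s), illustrative
    M = np.array([[-lam_a,    0.0, 0.0],     # chain A -> B -> C (C stable)
                  [ lam_a, -lam_b, 0.0],
                  [   0.0,  lam_b, 0.0]])
    N0 = np.array([1.0, 0.0, 0.0])
    t = 3.0

    N = expm(M * t) @ N0             # matrix-exponential depletion step

    # Analytic Bateman solution for the two-member chain:
    Na = np.exp(-lam_a * t)
    Nb = lam_a / (lam_b - lam_a) * (np.exp(-lam_a * t) - np.exp(-lam_b * t))
    Nc = 1.0 - Na - Nb
    print(np.allclose(N, [Na, Nb, Nc]))   # True
    ```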

  6. Comparison of Two-Block Decomposition Method and Chebyshev Rational Approximation Method for Depletion Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Hee; Cho, Nam Zin [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The code gives inaccurate results for nuclides needed in source term analysis, e.g., Sr-90, Ba-137m, Cs-137, etc. A Krylov subspace method was suggested by Yamamoto et al.; it is based on the projection of the solution space of the Bateman equation onto a lower-dimensional Krylov subspace, and it shows good accuracy in detailed burnup chain calculations if the dimension of the Krylov subspace is high enough. In this paper, the two-block decomposition (TBD) method and the Chebyshev rational approximation method (CRAM) are compared in depletion calculations, in terms of accuracy and computing time. In the two-block decomposition method, the system of Bateman equations is decomposed into short- and long-lived blocks according to the magnitude of the effective decay constants. The short-lived block is calculated by the general Bateman solution together with the importance concept, while a matrix exponential with a smaller norm is used for the long-lived block. In the Chebyshev rational approximation there is no decomposition of the Bateman equation system, and the accuracy of the calculation is determined by the order of expansion in the partial fraction decomposition of the rational form. The coefficients in the partial fraction decomposition are determined by a Remez-type algorithm.

  7. Calculation device for amount of heavy element nuclide in reactor fuels and calculation method therefor

    International Nuclear Information System (INIS)

    Naka, Takafumi; Yamamoto, Munenari.

    1995-01-01

    When the heavy element nuclides in reactor fuels have two or more origins, the device comprises a memory for storing the amount of heavy element nuclides of each origin in a given fuel segment at a certain time point, a device for calculating the amount of nuclides of each origin from the current neutron flux in that segment, and a device for separating and then displaying the amount of heavy element nuclides by origin. The burnup equations are solved separately for each origin of the heavy element nuclides, based on the stored amounts and the neutron flux, to calculate the current amount of nuclides of each origin. The amount of nuclides originating from uranium is calculated ignoring the α-decay of curium, while the amount originating from plutonium is calculated ignoring the plutonium formed from neptunium. The heavy element nuclides can thus be measured and controlled accurately for each origin of the reactor fuels. Even when the nuclear fuel materials originate from two or more countries, they can be accounted for country by country. (N.H.)

  8. Advances in supercell calculation methods and comparison with measurements

    Energy Technology Data Exchange (ETDEWEB)

    Arsenault, B [Atomic Energy of Canada Limited, Mississauga, Ontario (Canada); Baril, R; Hotte, G [Hydro-Quebec, Central Nucleaire Gentilly, Montreal, Quebec (Canada)]

    1996-07-01

    In the last few years, modelling techniques have been developed in new supercell computer codes and used to model the CANDU reactivity devices. One technique is based on one- and two-dimensional transport calculations with the WIMS-AECL lattice code, followed by superhomogenization and three-dimensional flux calculations in a modified version of the MULTICELL code. The second technique is based on two- and three-dimensional transport calculations in DRAGON, which calculates the lattice properties by solving the transport equation in a two-dimensional geometry and then performs supercell calculations in three dimensions. These two calculation schemes have been used to calculate the incremental macroscopic properties of CANDU reactivity devices. The supercell size has also been modified to define incremental properties over a larger region. The results show improved agreement for the reactivity worths of zone controllers and adjusters; at the same time, however, the agreement between measured and simulated flux distributions deteriorated somewhat. (author)

  9. Field calculations. Part I: Choice of variables and methods

    International Nuclear Information System (INIS)

    Turner, L.R.

    1981-01-01

    Magnetostatic calculations can involve (in order of increasing complexity) conductors only, material with constant or infinite permeability, or material with variable permeability. We consider here only the most general case: calculations involving ferritic material with variable permeability. Variables suitable for magnetostatic calculations are the magnetic field, the magnetic vector potential, and the magnetic scalar potential. For two-dimensional calculations the potentials, which each have only one component, have advantages over the field, which has two components. Because it is a single-valued variable, the vector potential is perhaps the best variable for two-dimensional calculations. In three dimensions, both the field and the vector potential have three components; the scalar potential, with only one component, provides a much smaller system of equations to be solved. However, the scalar potential is not single-valued. To circumvent this problem, a calculation with two scalar potentials can be performed. The scalar potential whose source is the conductors can be calculated directly by the Biot-Savart law, and the scalar potential whose source is the magnetized material is single-valued. However, in some situations the fields from the two potentials nearly cancel, and numerical accuracy is lost. The 3-D magnetostatic program TOSCA employs a single total scalar potential; the program GFUN uses the magnetic field as its variable.
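
    The Biot-Savart evaluation mentioned for the conductor-source contribution is straightforward to sketch numerically. The snippet below is an illustration, not code from TOSCA or GFUN: it integrates the law around a discretized circular loop (current, radius and field point are invented values) and checks the result against the analytic on-axis field.

        import numpy as np

        # Field of a circular current loop by direct Biot-Savart integration:
        # H(r) = (I / 4 pi) * sum over segments of dl x (r - r') / |r - r'|^3.
        I = 1000.0                  # loop current, A
        R = 0.5                     # loop radius, m
        phi = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
        dphi = phi[1] - phi[0]
        rp = np.column_stack([R * np.cos(phi), R * np.sin(phi),
                              np.zeros_like(phi)])            # segment positions
        dl = np.column_stack([-R * np.sin(phi), R * np.cos(phi),
                              np.zeros_like(phi)]) * dphi     # segment vectors

        def H_field(r):
            d = r - rp                                # separation vectors
            d3 = np.linalg.norm(d, axis=1) ** 3
            return I / (4.0 * np.pi) * (np.cross(dl, d) / d3[:, None]).sum(axis=0)

        # On the loop axis, H_z = I R^2 / (2 (R^2 + z^2)^(3/2)) analytically.
        z = 0.3
        print(H_field(np.array([0.0, 0.0, z])))
        print(I * R**2 / (2.0 * (R**2 + z**2) ** 1.5))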

  10. Current evaluation of dose rate calculation - analytical method

    International Nuclear Information System (INIS)

    Tello, Marcos; Vilhena, Marco Tulio

    1996-01-01

    The accuracy of dose calculations based on pencil beam formulas, such as the Fokker-Planck and Fermi equations for charged particle transport, is studied, and a methodology to solve the Boltzmann transport equation is suggested.

  11. A calculation method of cracking moment for the high strength ...

    Indian Academy of Sciences (India)

    ...normal stress and crack width for the tensional behaviour of concrete, as proposed by ... To calculate the concrete stress in a cross section of high strength concrete beams, the failure strain is ... American Concrete Institute, Detroit.

  12. Method of the characteristics for calculation of VVER without homogenization

    Energy Technology Data Exchange (ETDEWEB)

    Suslov, I.R.; Komlev, O.G.; Novikova, N.N.; Zemskov, E.A.; Tormyshev, I.V.; Melnikov, K.G.; Sidorov, E.B. [Institute of Physics and Power Engineering, Obninsk (Russian Federation)

    2005-07-01

    The first stage of the development of the characteristics code MCCG3D for calculation of VVER-type reactors without homogenization is presented. A parallel version of the code for MPI was developed and tested on a PC cluster running Linux. Further development of the MCCG3D code for design-level calculations with full-scale space-distributed feedbacks is discussed. For validation of the MCCG3D code we use the critical assembly VENUS-2. Geometrical models with and without homogenization have been used. With both models the MCCG3D results agree well with the experimental power distribution and with the results generated by the other codes, but the model without homogenization provides better results. The perturbation theory for the MCCG3D code has been developed and implemented in the module KEFSFGG. The calculations with KEFSFGG are in good agreement with direct calculations. (authors)

  13. Whole core calculations of power reactors by Monte Carlo method

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Mori, Takamasa

    1993-01-01

    Whole core calculations have been performed for a commercial-size PWR and a prototype LMFBR by using vectorized Monte Carlo codes. The core geometries were precisely represented in a pin-by-pin model. The calculated parameters were k-eff, control rod worth, power distribution and so on. Both multigroup and continuous-energy models were used, and the accuracy of the multigroup approximation was evaluated by comparing the two sets of results. One million neutron histories were tracked to reduce variances considerably. It was demonstrated that the high-speed vectorized codes could calculate k-eff, assembly power and some reactivity worths within practical computation time. For pin power and small reactivity worth calculations, on the order of 10 million histories would be necessary. The numbers of histories required to achieve target design accuracy were estimated for these neutronic parameters. (orig.)
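
    Such history estimates follow from the 1/sqrt(N) scaling of Monte Carlo statistical errors, with a pilot run fixing the constant. A minimal sketch, with purely illustrative numbers:

        # Monte Carlo statistical errors shrink as 1/sqrt(N), so the histories
        # needed for a target uncertainty follow from a pilot calculation.
        n_pilot = 1.0e6          # histories in the pilot run
        sigma_pilot = 0.008      # relative std. dev. of, e.g., a pin power
        sigma_target = 0.001     # design accuracy sought

        n_required = n_pilot * (sigma_pilot / sigma_target) ** 2
        print(f"{n_required:.2e} histories")   # tens of millions in this example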

  14. A method for calculating active feedback system to provide vertical

    Indian Academy of Sciences (India)

    The active feedback system is applied to control slow motions of the plasma. The objective of the ... The other problem is connected with the control of the plasma vertical position with an active feedback system. Calculation of ...

  15. A mathematical method to calculate efficiency of BF3 detectors

    International Nuclear Information System (INIS)

    Si Fenni; Hu Qingyuan; Peng Taiping

    2009-01-01

    In order to calculate the absolute efficiency of the BF3 detector, the MCNP/4C code is first applied to calculate the relative efficiency of the detector, and the absolute efficiency is then obtained through mathematical techniques. Finally, an energy response curve of the BF3 detector for 1-20 MeV neutrons is derived. It turns out that the efficiency of the BF3 detector is relatively uniform for 2-16 MeV neutrons. (authors)

  16. An approximate method for calculating the deformation of rotating nuclei

    International Nuclear Information System (INIS)

    Lind, P.

    1988-01-01

    The author presents a collective model in which the potential surface at spin I=0 is calculated with the Nilsson-Strutinsky model; an analytical expression for the moment of inertia is used, which depends on the deformation and on the pairing gaps for protons and neutrons, and the energy is minimized with respect to these gaps. Calculations in this model are performed for 160Yb. (HSI)

  17. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institute, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)]

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was taken as the importance sampling function. A Kriging metamodel was constructed in more detail in the vicinity of the limit state. The failure probability was then calculated by importance sampling performed on the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state, and a stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possible change in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.
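
    A compact sketch of the overall scheme follows. Everything in it is an assumption made for illustration: a toy linear limit state, a normal importance density centred at the design point standing in for the kernel density built from Markov chain samples, and scikit-learn's Gaussian process in place of a dedicated Kriging code; the paper's refinement of the design of experiments near the limit state is omitted.

        import numpy as np
        from scipy.stats import multivariate_normal, norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(1)

        # Toy limit state in two standard-normal variables; failure when g <= 0.
        def g(x):
            return 3.0 - x[:, 0] - x[:, 1]

        # Kriging (Gaussian process) metamodel from a small design of experiments.
        X_doe = rng.normal(size=(80, 2)) * 2.0
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                      alpha=1e-8, normalize_y=True)
        gp.fit(X_doe, g(X_doe))

        # Importance sampling density centred at the design point (1.5, 1.5).
        f = multivariate_normal(mean=[0.0, 0.0])    # true input density
        q = multivariate_normal(mean=[1.5, 1.5])    # sampling density
        X = q.rvs(size=20000, random_state=2)
        w = f.pdf(X) / q.pdf(X)                     # importance weights

        pf = np.mean((gp.predict(X) <= 0.0) * w)    # failure probability
        print(pf, norm.cdf(-3.0 / np.sqrt(2.0)))    # estimate vs exact for this g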

  18. Numerical methods for calculating thermal residual stresses and hydrogen diffusion

    International Nuclear Information System (INIS)

    Leblond, J.B.; Devaux, J.; Dubois, D.

    1983-01-01

    Thermal residual stresses and hydrogen concentrations are two major factors in cracking phenomena. These parameters were numerically calculated with a finite element computer programme (TITUS) for the deposition of a stainless cladding on a low-alloy plate. The calculation was performed with a two-dimensional option in four successive steps: a thermal transient, a metallurgical transient (determination of the metallurgical phase proportions), an elastic-plastic transient (plane strain conditions), and a hydrogen diffusion transient. The temperature and phase dependence of the hydrogen diffusion coefficient and of the solubility constant were taken into account. The following results were obtained. The thermal calculations are very consistent with experiments at higher temperatures (owing to the introduction of the fusion and solidification latent heats); the consistency is not as good (by 70 degrees) at temperatures below 650 degrees C, which was attributed to the omission of the gamma-alpha transformation latent heat. The metallurgical phase calculation indicates that the heat affected zone is almost entirely transformed into bainite after cooling down (the martensite proportion does not exceed 5%). The elastic-plastic calculations indicate that the stresses in the heat affected zone are compressive or slightly tensile; on the other hand, higher tensile stresses develop on the boundary of the heat affected zone. Transformation plasticity has a definite influence on the final stress level. The return of hydrogen to the cladding during the bainitic transformation is only a partial phenomenon, and the hydrogen concentration in the heat affected zone after cooling down to room temperature is therefore sufficient to cause cold cracking (if no heat treatment is applied). Heat treatments are efficient in lowering the hydrogen concentration. These results enable preliminary conclusions to be drawn on practical means of avoiding cracking. (orig.)

  19. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity.

    Science.gov (United States)

    Song, Chenchen; Martínez, Todd J

    2016-05-07

    We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N^2.6 for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).

  20. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity

    Energy Technology Data Exchange (ETDEWEB)

    Song, Chenchen; Martínez, Todd J. [Department of Chemistry and the PULSE Institute, Stanford University, Stanford, California 94305 (United States); SLAC National Accelerator Laboratory, Menlo Park, California 94025 (United States)

    2016-05-07

    We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N^2.6 for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).

  1. Reliable method for fission source convergence of Monte Carlo criticality calculation with Wielandt's method

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro; Miyoshi, Yoshinori

    2004-01-01

    A new algorithm for implementing Wielandt's method, one of the acceleration techniques used with deterministic source iteration methods, in Monte Carlo criticality calculations is developed; it can be successfully implemented in the MCNP code. In this algorithm, part of the fission neutrons emitted during the random walk processes are tracked within the current cycle, so that the fission source distribution used in the next cycle spreads more widely. Applying this method intensifies the neutron interaction effect even in a loosely coupled array, where conventional Monte Carlo criticality methods have difficulties, and a converged fission source distribution can be obtained with fewer cycles. The computing time spent per cycle, however, increases because of the tracking of fission neutrons within the current cycle, which eventually increases the total computing time up to convergence. In addition, the statistical fluctuations of the fission source distribution within a cycle are worsened by applying Wielandt's method to Monte Carlo criticality calculations. However, since fission source convergence is attained with fewer source iterations, a reliable determination of convergence can easily be made even in a system with slow convergence. This acceleration method is expected to contribute to the prevention of incorrect Monte Carlo criticality calculations. (author)
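
    In matrix form the effect of the shift is easy to demonstrate: iterating with (I - A/k_w)^(-1) A maps each eigenvalue lambda of A to lambda/(1 - lambda/k_w), which widens the gap between the fundamental and higher modes when k_w is chosen slightly above k-eff. The 3x3 fission-matrix analogue below is invented for the demo and is not the MCNP implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        # Fission-source eigenproblem A s = k s with a high dominance ratio
        # (k2/k1 = 0.98), mimicking a loosely coupled, slowly converging system.
        V = rng.normal(size=(3, 3))
        A = V @ np.diag([1.0, 0.98, 0.5]) @ np.linalg.inv(V)

        def power_iteration(M, tol=1e-8, max_it=10000):
            s, k_old, its = np.ones(3), 0.0, 0
            for its in range(1, max_it + 1):
                s = M @ s
                k = np.linalg.norm(s)
                s /= k
                if abs(k - k_old) < tol:
                    break
                k_old = k
            return k, its

        k_plain, it_plain = power_iteration(A)

        # Wielandt shift: same fundamental mode, far better mode separation.
        k_w = 1.05
        M = np.linalg.solve(np.eye(3) - A / k_w, A)
        mu, it_shift = power_iteration(M)
        k_shift = mu / (1.0 + mu / k_w)        # recover k from shifted eigenvalue

        print(k_plain, it_plain)   # slow: dominance ratio 0.98
        print(k_shift, it_shift)   # same k, far fewer iterations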

  2. A method to calculate spatial xenon oscillations in PWR reactors

    International Nuclear Information System (INIS)

    Ronig, H.

    1976-01-01

    The new digital computer programme SEXI for the calculation of spatial Xe oscillations is described. As an approach to the solution of the system of differential equations describing the feedback between neutron flux density and Xe particle density, a series expansion of the flux density and the particle densities in the geometrical eigenfunctions of a homogeneous block reactor is chosen. To calculate the neutron flux density, the time-dependent form of the diffusion equation is used instead of the more common stationary form. Integration is carried out using formal time differential quotients of the Fourier coefficients. (orig./RW) [de

  3. Load calculation methods for offshore wind turbine foundations

    DEFF Research Database (Denmark)

    Passon, Patrik; Branner, Kim

    2014-01-01

    Calculation of design loads for offshore wind turbine (OWT) foundations is typically performed in a joint effort between wind turbine manufacturers and foundation designers (FDs). Ideally, both parties would apply the same fully integrated design tool and model for that purpose. However, such solu...

  4. A balancing method for calculating a component RAW involving CCF

    International Nuclear Information System (INIS)

    Kim, K.; Kang, D.; Yang, J.E.

    2004-01-01

    In this paper, a method called the 'Balancing Method' for deriving a component RAW (Risk Achievement Worth) from basic event RAWs, including a CCF (Common Cause Failure) RAW, is summarized and compared with the method proposed by the NEI (Nuclear Energy Institute) by mathematically checking the background on which the two methods are based. It is proved that the Balancing Method has a strong mathematical background. While the NEI method significantly underestimates the component RAW and is somewhat ad hoc in handling the CCF RAW, the Balancing Method estimates the true component RAW very closely. The validity of the Balancing Method rests on the fact that taking a component out of service does not mean that the component is non-existent; the method integrates the possibility that the component might fail due to CCF. The validity of the Balancing Method is proved by comparing it to the exact component RAW generated from the fault tree model.

  5. Biexponential analysis of diffusion-weighted imaging: comparison of three different calculation methods in transplanted kidneys.

    Science.gov (United States)

    Heusch, Philipp; Wittsack, Hans-Jörg; Pentang, Gael; Buchbender, Christian; Miese, Falk; Schek, Julia; Kröpil, Patric; Antoch, Gerald; Lanzman, Rotem S

    2013-12-01

    Biexponential analysis has been used increasingly to obtain the contributions of both diffusion and microperfusion to the signal decay in diffusion-weighted imaging (DWI) of different parts of the body. The aim was to compare biexponential diffusion parameters of transplanted kidneys obtained with three different calculation methods. DWI was acquired in 15 renal allograft recipients (eight men, seven women; mean age, 52.4 ± 14.3 years) using a paracoronal EPI sequence with 16 b-values (b = 0-750 s/mm²) and six averages at 1.5T; no respiratory gating was used. Three different methods were used to calculate the biexponential diffusion parameters: Fp, ADCP and ADCD were calculated without fixing any parameter a priori (calculation method 1); ADCP was fixed to 12.0 µm²/ms, while Fp and ADCD were calculated using the biexponential model (calculation method 2); a multistep approach was used, with monoexponential fitting of the high b-value portion (b ≥ 250 s/mm²) for determination of ADCD and assessment of the low-b intercept for determination of Fp (calculation method 3). For quantitative analysis, ROI measurements were performed on the corresponding parameter maps. Mean ADCD values of the renal cortex using calculation method 1 were significantly lower than with calculation methods 2 and 3 (P < 0.001). There was a significant correlation between calculation methods 1 and 2 (r = 0.69; P < 0.005) and between calculation methods 1 and 3 (r = 0.59; P < 0.05), as well as between calculation methods 2 and 3 (r = 0.98; P < 0.001). Mean Fp values of the renal cortex were higher with calculation method 1 than with calculation methods 2 and 3 (P < 0.001). For Fp, only the correlation between calculation methods 2 and 3 was significant (r = 0.98; P < 0.001). Biexponential diffusion parameters thus differ significantly depending on the calculation method used.
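
    The three calculation methods can be sketched directly from the signal model S(b)/S0 = Fp*exp(-b*ADCP) + (1 - Fp)*exp(-b*ADCD). The snippet below uses synthetic data, and all parameter values and fitting details are illustrative assumptions rather than the authors' code; it contrasts the free fit, the fixed-ADCP fit and the segmented high-b fit.

        import numpy as np
        from scipy.optimize import curve_fit

        # Synthetic renal DWI signal; ADCs in mm^2/s (12.0 and 2.0 um^2/ms).
        b = np.linspace(0.0, 750.0, 16)
        Fp_true, ADCp_true, ADCd_true = 0.20, 12.0e-3, 2.0e-3

        def biexp(b, Fp, ADCp, ADCd):
            return Fp * np.exp(-b * ADCp) + (1.0 - Fp) * np.exp(-b * ADCd)

        S = biexp(b, Fp_true, ADCp_true, ADCd_true) \
            + np.random.default_rng(0).normal(0.0, 0.005, b.size)

        # Method 1: all three parameters free.
        p1, _ = curve_fit(biexp, b, S, p0=[0.1, 1e-2, 1e-3],
                          bounds=([0.0, 5e-3, 1e-4], [1.0, 5e-2, 5e-3]))

        # Method 2: ADCP fixed a priori to 12.0 um^2/ms.
        p2, _ = curve_fit(lambda b, Fp, ADCd: biexp(b, Fp, 12.0e-3, ADCd),
                          b, S, p0=[0.1, 1e-3])

        # Method 3: segmented fit; b >= 250 is essentially perfusion-free, so a
        # monoexponential fit there gives ADCD, and its b=0 intercept gives Fp.
        hi = b >= 250.0
        slope, intercept = np.polyfit(b[hi], np.log(S[hi]), 1)
        ADCd3 = -slope
        Fp3 = 1.0 - np.exp(intercept)      # S0 is normalised to 1 here

        print(p1, p2, (Fp3, ADCd3))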

  6. Classification of methods for annual energy harvesting calculations of photovoltaic generators

    International Nuclear Information System (INIS)

    Rus-Casas, C.; Aguilar, J.D.; Rodrigo, P.; Almonacid, F.; Pérez-Higueras, P.J.

    2014-01-01

    Highlights: • The paper presents a novel classification of methods for the annual energy harvesting calculation of grid-connected PV systems. • The methods are classified into direct and indirect methods. • Direct methods calculate the energy directly; indirect methods calculate the energy from the power. • The classification can help PV professionals choose the most suitable method for each application. - Abstract: Estimating the energy provided by the generators of grid-connected photovoltaic systems is important for analyzing their economic viability and supervising their operation. The energy harvesting calculation of a photovoltaic generator is not trivial, and many methods exist for it. The aim of this paper is to develop a novel classification of methods for the annual energy harvesting calculation of the generator of a grid-connected photovoltaic system. The methods are classified into two groups: (1) those that calculate the energy indirectly, i.e. they first calculate the power and from this the energy, and (2) those that calculate the energy directly. Furthermore, the indirect methods are grouped into two categories: those that first calculate the I-V curve of the generator and from this the power, and those that calculate the power directly. The study shows that the existing methods differ in simplicity and accuracy, so the proposed classification is useful for choosing the most suitable method for each specific application.
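
    The distinction can be made concrete with a toy example: the indirect route first builds an hourly power series from irradiance and module parameters and then integrates it, while the direct route goes from irradiation to energy in one step via a performance ratio. All module parameters and weather values below are illustrative assumptions, not data from the paper.

        import numpy as np

        # One synthetic day of hourly plane-of-array irradiance (W/m^2) and
        # ambient temperature (deg C); a real assessment would use a full year.
        h = np.arange(24)
        G = np.maximum(0.0, 900.0 * np.sin(np.pi * (h - 6) / 12))
        T_amb = 20.0 + 5.0 * np.sin(np.pi * (h - 8) / 12)

        P_stc, gamma, noct = 5000.0, -0.004, 45.0   # 5 kWp, typical values

        # Indirect route: first the hourly power series, then the energy.
        T_cell = T_amb + G * (noct - 20.0) / 800.0
        P = P_stc * (G / 1000.0) * (1.0 + gamma * (T_cell - 25.0))
        E_indirect = P.sum() / 1000.0               # kWh (hourly samples)

        # Direct route: energy straight from irradiation via a performance ratio.
        H_poa = G.sum() / 1000.0                    # kWh/m^2 = full-sun hours
        E_direct = (P_stc / 1000.0) * H_poa * 0.80  # PR ~ 0.80 assumed

        print(E_indirect, E_direct)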

  7. The calculation of neutron flux using Monte Carlo method

    Science.gov (United States)

    Günay, Mehtap; Bardakçı, Hilal

    2017-09-01

    In this study, a hybrid reactor system was designed using 99-95% Li20Sn80 + 1-5% RG-Pu, 99-95% Li20Sn80 + 1-5% RG-PuF4, and 99-95% Li20Sn80 + 1-5% RG-PuO2 fluids, the ENDF/B-VII.0 evaluated nuclear data library and the 9Cr2WVTa structural material. The fluids were used in the liquid first wall, liquid second wall (blanket) and shield zones of the designed fusion-fission hybrid reactor system. The neutron flux was calculated as a function of mixture composition, radial position and energy spectrum for the selected fluids, library and structural material. Three-dimensional nucleonic calculations were performed using the most recent version, 2.7.0, of the Monte Carlo code MCNPX.

  8. Method of calculating heat transfer in furnaces of small power

    Directory of Open Access Journals (Sweden)

    Khavanov Pavel

    2016-01-01

    Full Text Available This publication presents experience with, and the results of, generalizing the criterion equations that are important in the analysis of heat transfer processes and in the thermal design of the cooled combustion chambers of low-power heat generators. Generalized correlations estimate the contributions of the radiative and convective heat-transfer components for the combustion chambers of small-capacity boilers. Qualitative and quantitative dependences of the combined radiative-convective heat transfer on the main operating factors of small-volume combustion chambers are determined.

  9. Experiences with leak rate calculations methods for LBB application

    International Nuclear Information System (INIS)

    Grebner, H.; Kastner, W.; Hoefler, A.; Maussner, G.

    1997-01-01

    In this paper, three leak rate computer programs for application in leak-before-break (LBB) analyses are described and compared: PIPELEAK, FLORA, and PICEP. The programs are compared to each other, to the results of an HDR reactor experiment, and to two real crack cases. Generally, the different leak rate models are in agreement. To obtain reasonable agreement between measured and calculated leak rates, it was necessary also to use data from detailed crack investigations.

  10. Experiences with leak rate calculations methods for LBB application

    Energy Technology Data Exchange (ETDEWEB)

    Grebner, H.; Kastner, W.; Hoefler, A.; Maussner, G. [and others]

    1997-04-01

    In this paper, three leak rate computer programs for application in leak-before-break (LBB) analyses are described and compared: PIPELEAK, FLORA, and PICEP. The programs are compared to each other, to the results of an HDR reactor experiment, and to two real crack cases. Generally, the different leak rate models are in agreement. To obtain reasonable agreement between measured and calculated leak rates, it was necessary also to use data from detailed crack investigations.

  11. Substep methods for burnup calculations with Bateman solutions

    International Nuclear Information System (INIS)

    Isotalo, A.E.; Aarnio, P.A.

    2011-01-01

    Highlights: → Bateman solution based depletion requires constant microscopic reaction rates. → Traditionally, a constant approximation is used for each depletion step. → Here, depletion steps are divided into substeps which are solved sequentially. → This allows a piecewise constant, rather than constant, approximation for each step. → Discretization errors are almost completely removed with only a minor slowdown. - Abstract: When material changes in burnup calculations are solved by evaluating an explicit solution of the Bateman equations with constant microscopic reaction rates, one has first to predict the development of the reaction rates during the step and then to approximate these predictions further by their averages in the depletion calculation. Representing the continuously changing reaction rates by their averages introduces some error regardless of how accurately their development was predicted. Since neutronics solutions tend to be computationally expensive, the steps in typical calculations are long and the resulting discretization errors significant. In this paper we present a simple solution for reducing these errors: the depletion steps are divided into substeps that are solved sequentially, allowing a finer discretization of the reaction rates without additional neutronics solutions. This greatly reduces the discretization errors and, at least when combined with Monte Carlo neutronics, causes only a minor slowdown, as neutronics dominates the total running time.
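
    A toy version of the substep idea follows, with an invented two-nuclide burnup matrix whose predicted reaction rates change linearly over the step; the coupling to actual neutronics predictions is not reproduced. The single-step solution uses the step-averaged matrix, while the substep scheme applies a piecewise-constant matrix on each substep.

        import numpy as np
        from scipy.linalg import expm

        # One depletion step of length T (30 days) over which the predicted
        # reaction rates change linearly (matrices invented for the demo).
        T = 30 * 86400.0
        A0 = np.array([[-1.0e-6, 0.0], [1.0e-6, -1.0e-7]])   # rates at t = 0
        A1 = np.array([[-2.0e-6, 0.0], [2.0e-6, -1.0e-7]])   # predicted at t = T
        n0 = np.array([1.0e24, 0.0])

        def A_of(t):      # linear interpolation of the predicted reaction rates
            return A0 + (A1 - A0) * (t / T)

        def substep_solve(K):
            # K sequential Bateman solutions, each with the piecewise-constant
            # rates of its own substep (midpoint values).
            n = n0.copy()
            for k in range(K):
                n = expm(A_of((k + 0.5) * T / K) * (T / K)) @ n
            return n

        n_single = expm(0.5 * (A0 + A1) * T) @ n0   # one step, averaged rates
        n_sub = substep_solve(10)                   # 10 substeps
        n_ref = substep_solve(2000)                 # fine reference

        print(n_single)
        print(n_sub)     # closer to n_ref than the single averaged step
        print(n_ref)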

  12. Recently developed methods in neutral-particle transport calculations: overview

    International Nuclear Information System (INIS)

    Alcouffe, R.E.

    1982-01-01

    It has become increasingly apparent that successful, general methods for the solution of the neutral-particle transport equation involve a close connection between the spatial discretization method used and the source acceleration method chosen. For the first form of the transport equation, angular discretization by discrete ordinates is considered, as well as spatial discretization based upon a mesh arrangement; characteristic methods are considered briefly in the context of desirable future developments. The ideal spatial discretization method is described as having the following attributes: (1) positive boundary data yield a positive angular flux within the mesh, including its boundaries; (2) it satisfies the particle balance equation over the mesh, that is, the method is conservative; (3) it possesses the diffusion limit independent of spatial mesh size, that is, for a linearly isotropic flux assumption the transport differencing reduces to a suitable diffusion equation differencing; (4) the method is unconditionally acceleratable, i.e., for each mesh size the method is unconditionally convergent with source iteration acceleration. It is doubtful that a single method possesses all these attributes for a general problem. Some commonly used methods are outlined and their computational performance and usefulness are compared; recommendations for future development, including practical computational considerations, are detailed.

  13. A balancing method for calculating a component RAW involving CCF

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K.; Kang, D.; Yang, J.E. [Integrated Safety Assessment Division, Korea Atomic Energy Research Institute, Daejon (Korea, Republic of)]

    2004-07-01

    In this paper, a method called the 'Balancing Method' for deriving a component RAW (Risk Achievement Worth) from basic event RAWs, including a CCF (Common Cause Failure) RAW, is summarized and compared with the method proposed by the NEI (Nuclear Energy Institute) by mathematically checking the background on which the two methods are based. It is proved that the Balancing Method has a strong mathematical background. While the NEI method significantly underestimates the component RAW and is somewhat ad hoc in handling the CCF RAW, the Balancing Method estimates the true component RAW very closely. The validity of the Balancing Method rests on the fact that taking a component out of service does not mean that the component is non-existent; the method integrates the possibility that the component might fail due to CCF. The validity of the Balancing Method is proved by comparing it to the exact component RAW generated from the fault tree model.

  14. 7 CFR 51.308 - Methods of sampling and calculation of percentages.

    Science.gov (United States)

    2010-01-01

    7 Agriculture 2 2010-01-01 false Methods of sampling and calculation of percentages. 51... (..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples, Methods of Sampling and Calculation of Percentages, § 51.308 Methods of sampling and calculation of percentages. (a) When the numerical...

  15. Method of allowing for resonances in calculating reactivity values

    International Nuclear Information System (INIS)

    Kumpf, H.

    1985-01-01

    On the basis of the integral transport equation for the source density, an expression has been derived for calculating reactivity values that takes resonances in the core and in the sample into account. The model has been used for evaluating reactivities measured in the Rossendorf SEG IV configuration. It is shown that the influence of resonances in the core can be kept tolerable if a sufficiently thick buffer zone of only slightly absorbing, non-resonant material is arranged between the sample and the core. (author)

  16. Power operation, measurement and methods of calculation of power distribution

    International Nuclear Information System (INIS)

    Lindahl, S.O.; Bernander, O.; Olsson, S.

    1982-01-01

    During the initial fuel loading of a BWR core, extensive checks and measurements of the fuel are performed. The measurements are designed to verify that the reactor can always be safely operated in compliance with the regulatory constraints. The power distribution within the reactor core is evaluated by means of instrumentation and elaborate computer calculations. The power distribution forms the basis for the evaluation of thermal limits. The behaviour of the reactor during the ordinary modes of operation as well as during transients shall be well understood and such that the integrity of the fuel and the reactor systems is always well preserved. (author)

  17. Discussion on calculation method of overburden cover for radon reduction

    International Nuclear Information System (INIS)

    Liang Jianlong; Zhou Xinghuo; Zhou Ju; Liu Huijuan

    2010-01-01

    The article collects a large number of experimental results from domestic researchers on soil-overburden experimental methods. Based on an analysis of these results, some questions concerning the determination of the required overburden cover thickness, the data processing method and the negative intercept are discussed. (authors)

  18. An empirical method for calculating thermodynamic parameters for U(6) phases, applications to performance assessment calculations

    International Nuclear Information System (INIS)

    Ewing, R.C.; Chen, F.; Clark, S.B.

    2002-01-01

    Uranyl minerals form by oxidation and alteration of uraninite, UO_2+x, and of the UO_2 in used nuclear fuels. The thermodynamic database for these phases is extremely limited. However, the Gibbs free energies and enthalpies of uranyl phases may be estimated by a method that sums polyhedral contributions. The molar contributions of the structural components to Δ_fG_m^0 and Δ_fH_m^0 are derived by multiple regression using the thermodynamic data of phases for which the crystal structures are known. In comparison with experimentally determined values, the average residuals associated with the predicted Δ_fG_m^0 and Δ_fH_m^0 for the uranyl phases used in the model are 0.08 and 0.10%, respectively. There is also good agreement between the predicted mineral stability relations and field occurrences, providing confidence in this method for the estimation of Δ_fG_m^0 and Δ_fH_m^0 of U(VI) phases. This approach provides a means of generating estimated thermodynamic data for performance assessment calculations and a basis for making bounding calculations of phase stabilities and solubilities. (author)
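
    The regression step is ordinary multiple linear least squares on polyhedral counts. The sketch below uses entirely fictitious counts and free energies purely to show the mechanics; it is not the published data set.

        import numpy as np

        # Rows: counts of structural components (e.g. uranyl, silicate, H2O
        # polyhedra) in each reference phase; all numbers are fictitious.
        N = np.array([[1, 0, 2],
                      [1, 1, 0],
                      [2, 1, 5],
                      [1, 2, 4],
                      [3, 0, 1]], dtype=float)
        G_exp = np.array([-1632.0, -2361.0, -5370.0,
                          -4280.0, -3550.0])          # kJ/mol, fictitious

        # Molar contributions g_i by multiple regression: G ~ N @ g.
        g, res, rank, sv = np.linalg.lstsq(N, G_exp, rcond=None)

        # Predict the free energy of formation of a phase outside the fit set.
        n_new = np.array([2.0, 1.0, 3.0])
        print(n_new @ g)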

  19. Calculation of radiation exposure in diagnostic radiology. Method and surveys

    International Nuclear Information System (INIS)

    Duvauferrier, R.; Ramee, A.; Ezzeldin, K.; Guibert, J.L.

    1984-01-01

    A computerized method for evaluating the radiation exposure of the main target organs during various diagnostic radiologic procedures is described. The technique was used for educational purposes: studying exposure variations according to the technical modalities of a given procedure, and studying exposure variations according to various technical protocols (IVU, EGD barium study, etc.). The method was also used to study the exposure of patients hospitalized in the Rennes Regional Hospital Center (France) in 1982, by department (urology, neurology, etc.). The method and the results of these three studies are discussed. [fr

  20. A nonlinear analytic function expansion nodal method for transient calculations

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Han Gyn; Park, Sang Yoon; Cho, Byung Oh; Zee, Sung Quun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    The nonlinear analytic function expansion nodal (AFEN) method is applied to the solution of the time-dependent neutron diffusion equation. Since the AFEN method requires both the particular solution and the homogeneous solution of the transient fixed source problem, the derivation of the solution method focuses on finding the particular solution efficiently. To avoid complicated particular solutions, the source distribution is approximated by quadratic polynomials, and the transient source is constructed such that the error due to the quadratic approximation is minimized. In addition, this paper presents a new two-node solution scheme that is derived by imposing the constraint of current continuity at the interface corner points. The method is verified through a series of applications to the NEACRP PWR rod ejection benchmark problems. 6 refs., 2 figs., 1 tab. (Author)

  1. A nonlinear analytic function expansion nodal method for transient calculations

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Han Gyn; Park, Sang Yoon; Cho, Byung Oh; Zee, Sung Quun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1999-12-31

    The nonlinear analytic function expansion nodal (AFEN) method is applied to the solution of the time-dependent neutron diffusion equation. Since the AFEN method requires both the particular solution and the homogeneous solution of the transient fixed source problem, the derivation of the solution method focuses on finding the particular solution efficiently. To avoid complicated particular solutions, the source distribution is approximated by quadratic polynomials, and the transient source is constructed such that the error due to the quadratic approximation is minimized. In addition, this paper presents a new two-node solution scheme that is derived by imposing the constraint of current continuity at the interface corner points. The method is verified through a series of applications to the NEACRP PWR rod ejection benchmark problems. 6 refs., 2 figs., 1 tab. (Author)

  2. Criticism of the OPW method for band structure calculations

    International Nuclear Information System (INIS)

    Lendi, K.

    1977-01-01

    The OPW method is associated with a general eigenvalue problem of the type (A - λB)x = 0, in which the matrix B, and in particular its lowest eigenvalue, decides upon the stability of the solutions λ and, therefore, upon the applicability of the method, which may become very questionable for heavier substances. Analytical proofs as well as explicit numerical estimates for several solids are given. [pt

  3. Use of deterministic methods in survey calculations for criticality problems

    International Nuclear Information System (INIS)

    Hutton, J.L.; Phenix, J.; Course, A.F.

    1991-01-01

    The WIMS suite is a code package using deterministic methods for solving the Boltzmann transport equation. It has been very successful in a range of situations; in particular, it has been used with great success to analyse trends in reactivity under a range of changes in state. The WIMS suite of codes offers a range of methods and is very flexible in the way they can be combined: a wide variety of situations can be modelled, ranging through all the current thermal reactor variants to storage systems and items of chemical plant. These methods have recently been enhanced by the introduction of the CACTUS method, which is based on a characteristics technique for solving the transport equation and has the advantage that complex geometrical situations can be treated. In this paper the basis of the method is outlined and examples of its use are illustrated. In parallel with these developments, the validation for out-of-pile situations has been extended to include experiments relevant to criticality. The paper summarises this evidence and shows how the results point to a partial re-adoption of deterministic methods for some areas of criticality. The paper also presents results illustrating the use of WIMS in criticality situations, and in particular shows how it can complement codes such as MONK when surveying the reactivity effect of changes in geometry or materials. (Author)

  4. Energy conservation for houses and its calculation methods

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S H

    1981-04-01

    The concept of energy conservation in houses has been developed and applied widely since the first oil crisis. It can now be said definitely that insulating a house is the most effective way of saving energy, and that renewable energy sources are useful only when the demand for space heating and hot water has been minimized by insulation. If a house is well insulated, it will need a much smaller, simpler and cheaper heating system, so it is less sensible to put a solar collector or a wind generator on a poorly insulated house. Architects and engineers should have a certain level of practical knowledge of house insulation to persuade customers to use insulating materials and structures. Moreover, it is essential to amend the existing building codes in order to facilitate this basic necessity. For instance, the Building Regulations of Denmark require a U-value of 0.4 W/m² °C for a heavyweight external wall. If the cavity wall has outer and inner leaves of normal brick with an internal finish of 20 mm cement mortar, which is a typical wall construction for houses in Korea, the thickness of insulation material added to the cavity can be calculated so as to fulfil the U-value of 0.4 W/m² °C: expanded polyurethane 58 mm, urea formaldehyde foam 67 mm, expanded polystyrene 78 mm, mineral wool 94 mm. The economic feasibility of a solar heating system has also been calculated; with an annual fuel-cost inflation rate of 25%, conventional heating turns out to be economically comparable with solar heating systems.
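
    The quoted thicknesses follow from the series thermal-resistance formula d = k * (1/U_target - R_fixed). The sketch below roughly reproduces them under assumed surface resistances, brick layout and handbook conductivities; all layer data are assumptions chosen for illustration, not values from the paper.

        # Insulation thickness needed to reach the target U-value of
        # 0.4 W/(m^2 K) for the brick cavity wall described in the record.
        U_target = 0.4

        R_fixed = (0.13 + 0.04              # internal + external surface resistances
                   + 2 * 0.105 / 0.84       # outer and inner brick leaves
                   + 0.020 / 0.72)          # 20 mm cement mortar internal finish

        conductivity = {                    # W/(m K), typical handbook values
            "expanded polyurethane": 0.028,
            "urea formaldehyde foam": 0.032,
            "expanded polystyrene": 0.038,
            "mineral wool": 0.045,
        }

        for name, k in conductivity.items():
            d = k * (1.0 / U_target - R_fixed)
            print(f"{name:24s} {1000 * d:5.0f} mm")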

  5. Transport survey calculations using the spectral collocation method

    International Nuclear Information System (INIS)

    Painter, S.L.; Lyon, J.F.

    1989-01-01

    A novel transport survey code has been developed and is being used to study the sensitivity of stellarator reactor performance to various transport assumptions. Instead of following one of the usual approaches, the steady-state transport equations are solved in integral form using the spectral collocation method. This approach effectively combines the computational efficiency of global models with the general nature of 1-D solutions. A compact torsatron reactor test case was used to study the convergence properties and flexibility of the new method. The heat transport model combined Shaing's model for ripple-induced neoclassical transport, the Chang-Hinton model for axisymmetric neoclassical transport, and neo-Alcator scaling for the anomalous electron heat flux. Alpha particle heating, radiation losses, classical electron-ion heat flow, and external heating were included. For the test problem, the method exhibited remarkable convergence properties: as the number of basis functions was increased, the maximum pointwise error in the integrated power balance decayed exponentially until the numerical noise level was reached. Better than 10% accuracy in the globally averaged quantities was achieved with only 5 basis functions; better than 1% accuracy was achieved with 10 basis functions. The numerical method was also found to be very general. Extreme temperature gradients at the plasma edge, which sometimes arise from the neoclassical models and are difficult to resolve with finite-difference methods, were easily resolved. 8 refs., 6 figs

  6. Analytical method of spectra calculations in the Bargmann representation

    International Nuclear Information System (INIS)

    Maciejewski, Andrzej J.; Przybylska, Maria; Stachowiak, Tomasz

    2014-01-01

    We formulate a universal method for solving an arbitrary quantum system which, in the Bargmann representation, is described by a system of linear equations with one independent variable, such as the one- and multi-photon Rabi models, or N-level systems interacting with a single mode of the electromagnetic field, and their various generalizations. We explain three types of conditions that determine the spectrum and show their usage for two deformations of the Rabi model. We prove that the spectra of both models are just zeros of transcendental functions, which in one case are given explicitly in terms of confluent Heun functions. - Highlights: • An analytical method of spectrum determination in the Bargmann representation is proposed. • Three types of conditions determining the spectrum are identified. • The method is applied to two generalizations of the Rabi model.

  7. Application of γ field theory based calculation method to the monitoring of mine nuclear radiation environment

    International Nuclear Information System (INIS)

    Du Yanjun; Liu Qingcheng; Liu Hongzhang; Qin Guoxiu

    2009-01-01

    In order to assess the feasibility of calculating mine radiation doses based on γ field theory, this paper calculates the γ radiation dose of a mine by means of a γ field theory based calculation method. The results show that the calculated radiation dose has a small error and can be used to monitor the mine's nuclear radiation environment. (authors)

  8. Ab initio calculations of mechanical properties: Methods and applications

    Czech Academy of Sciences Publication Activity Database

    Pokluda, J.; Černý, Miroslav; Šob, Mojmír; Umeno, Y.

    2015-01-01

    Roč. 73, AUG (2015), s. 127-158 ISSN 0079-6425 R&D Projects: GA ČR(CZ) GAP108/12/0311 Institutional support: RVO:68081723 Keywords : Ab initio methods * Elastic moduli * Intrinsic hardness * Stability analysis * Theoretical strength * Intrinsic brittleness/ductility Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 31.083, year: 2015

  9. Spectral calculations in magnetohydrodynamics using the Jacobi-Davidson method

    NARCIS (Netherlands)

    Belien, A. J. C.; van der Holst, B.; Nool, M.; van der Ploeg, A.; Goedbloed, J. P.

    2001-01-01

    For the solution of the generalized complex non-Hermitian eigenvalue problems Ax = λBx occurring in the spectral study of linearized resistive magnetohydrodynamics (MHD), a new parallel solver based on the recently developed Jacobi-Davidson method [SIAM J. Matrix Anal. Appl. 17 (1996) 401] has been developed.

  10. Calculation method for control rod dropping time in reactor

    International Nuclear Information System (INIS)

    Nogami, Takeki; Kato, Yoshifumi; Ishino, Jun-ichi; Doi, Isamu.

    1996-01-01

    When a control rod starts dropping, its speed increases rapidly, then settles to a substantially constant value, and decreases rapidly when the rod reaches the dash pot. A second detection signal, generated by removing the AC component from the first detection signal, is differentiated twice. The time at which the maximum of the twice-differentiated signal occurs is taken as the time when the control rod starts dropping; the time at which the minimum occurs is taken as the time when the control rod reaches the dash pot of the reactor. The measuring time is the interval from the drop start to the arrival at the dash pot. As a result, the calculation of the drop start time and the dash-pot arrival time of the control rod can be automated. Furthermore, it suffices to differentiate twice only up to the arrival time, which simplifies the processing and enables a reliable time range to be determined. (N.H.)
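
    The detection logic described above reduces to smoothing, double differentiation, and locating the extreme values. A sketch on a synthetic position signal follows; all signal parameters (sampling rate, event times, 50 Hz pickup) are invented for the demo.

        import numpy as np

        # Synthetic rod-position signal: acceleration at ~0.5 s, roughly
        # constant velocity, deceleration in the dash pot at ~2.0 s.
        dt = 1.0e-3
        t = np.arange(0.0, 3.0, dt)
        v = np.where((t >= 0.5) & (t < 2.0), 1.0, 0.0)        # velocity, m/s
        v = np.convolve(v, np.ones(50) / 50.0, mode="same")   # smooth the ramps
        x = np.cumsum(v) * dt + 0.002 * np.sin(2 * np.pi * 50.0 * t)  # + pickup

        # First detection signal -> second signal: remove the AC component (a
        # 0.2 s moving average nulls 50 Hz exactly), then differentiate twice.
        x_f = np.convolve(x, np.ones(200) / 200.0, mode="same")
        a = np.gradient(np.gradient(x_f, dt), dt)

        inner = slice(200, -200)                  # skip filter edge effects
        t_start = t[inner][np.argmax(a[inner])]   # maximum -> drop start
        t_dash = t[inner][np.argmin(a[inner])]    # minimum -> dash-pot arrival
        print(t_start, t_dash, t_dash - t_start)  # measuring time ~1.5 s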

  11. Analytic moment method calculations of the drift wave spectrum

    International Nuclear Information System (INIS)

    Thayer, D.R.; Molvig, K.

    1985-11-01

    A derivation and approximate solution of renormalized mode-coupling equations describing the turbulent drift wave spectrum are presented. Arguments are given which indicate that a weak turbulence formulation of the spectrum equations fails for a system with negative dissipation. The inadequacy of the weak turbulence theory is circumvented by utilizing a renormalized formulation. An analytic moment method is developed to approximate the solution of the nonlinear spectrum integral equations. The solution method employs trial functions to reduce the integral equations to algebraic equations in basic parameters describing the spectrum. An approximate solution of the spectrum equations is obtained first for a mode dissipation with known solution, and second for an electron dissipation in the NSA.

  12. Method for Calculation of Steam-Compression Heat Transformers

    Directory of Open Access Journals (Sweden)

    S. V. Zditovetckaya

    2012-01-01

    Full Text Available The paper considers a method for the joint numerical analysis of the cycle parameters and the heat-exchange equipment of a steam-compression heat transformer contour that takes into account non-stationary operating modes and irreversible losses in the devices and the pipeline contour. The method has been realized in the form of a software package and can be used in the design or selection of a heat transformer, taking into account the coolant and the actual equipment included in its structure. The paper presents investigation results revealing the influence of the pressure losses in the evaporator and the condenser on the coolant side, caused by friction and local resistances, on the power efficiency of a heat transformer operating in the mode of a refrigerating or heating installation or a thermal pump. The actual operating parameters of the thermal pump in nominal and off-design operational modes depend on the structure of the specific contour equipment.

  13. The DV-Xα molecular-orbital calculation method

    CERN Document Server

    Ishii, Tomohiko; Ogasawara, Kazuyoshi

    2014-01-01

    This multi-author contributed volume contains chapters featuring the development of the DV-Xα method and its application to a variety of problems in materials science and spectroscopy, written by leaders of the respective fields. The volume contains a Foreword written by the Chairs of the Japanese and Korean DV-Xα Societies. This book is aimed at individuals working in quantum chemistry.

  14. The power series method in the effectiveness factor calculations

    OpenAIRE

    Filipich, C. P.; Villa, L. T.; Grossi, Ricardo Oscar

    2017-01-01

    In the present paper, exact analytical solutions are obtained for nonlinear ordinary differential equations which appear in complex diffusion-reaction processes. A technique based on the power series method is used. Numerical results were computed for a number of cases which correspond to boundary value problems available in the literature. Additionally, new numerical results were generated for several important cases. Fil: Filipich, C. P.. Universidad Tecnológica Nacional. Facultad Regiona...

  15. Methods for the calculation of uncertainty in analytical chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Sohn, S. C.; Park, Y. J.; Park, K. K.; Jee, K. Y.; Joe, K. S.; Kim, W. H.

    2000-07-01

    This report describes the statistical rules for evaluating and expressing uncertainty in analytical chemistry. The procedures for the evaluation of uncertainty in chemical analysis are illustrated by worked examples. In particular, the report gives guidance on how uncertainty can be estimated for various chemical analyses. It can also be used for planning the experiments which will provide the information required to obtain an estimate of uncertainty for the method.

  16. Feasibility study on heterogeneous method in criticality calculations

    International Nuclear Information System (INIS)

    Prati, A.

    1977-01-01

    The criticality of finite heterogeneous assemblies is analysed by heterogeneous methods employing eigenfunction analysis. Moderation is treated by Fermi age theory. The system is analysed in two-dimensional rectangular coordinates. The criticality and the fluxes are determined for systems with small and large numbers of fuel rods. The convergence and the residual error in the modal analysis are discussed. (author)

  17. A combination between the differential and the perturbation theory methods for calculating sensitivity coefficients

    International Nuclear Information System (INIS)

    Borges, Antonio Andrade

    1998-01-01

    A new method for the calculation of sensitivity coefficients is developed. The new method is a combination of two methodologies used for calculating these coefficients: the differential and the generalized perturbation theory methods. The method uses as its integral parameter the average flux in an arbitrary region of the system; thus, the sensitivity coefficient contains only the component corresponding to the neutron flux. To obtain the new sensitivity coefficient, the derivatives of the integral parameter, Φ, with respect to σ are calculated using the perturbation method, and the functional derivatives of this generic integral parameter with respect to σ and Φ are calculated using the differential method. (author)

  18. An improved filtered spherical harmonic method for transport calculations

    International Nuclear Information System (INIS)

    Ahrens, C.; Merton, S.

    2013-01-01

    Motivated by the work of R. G. McClarren, C. D. Hauck, and R. B. Lowrie on a filtered spherical harmonic method, we present a new filter for such numerical approximations to the multi-dimensional transport equation. In several test problems, we demonstrate that the new filter produces results with significantly less Gibbs phenomena than the filter used by McClarren, Hauck and Lowrie. This reduction in Gibbs phenomena translates into propagation speeds that more closely match the correct propagation speed and solutions that have fewer regions where the scalar flux is negative. (authors)
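
    The generic mechanism of such filters, damping the highest expansion moments to suppress the Gibbs oscillations produced by a discontinuous angular flux, can be shown in one dimension with Legendre polynomials. The exponential filter below is a standard textbook choice used for illustration, not the specific filter proposed in the paper.

        import numpy as np
        from numpy.polynomial import legendre as leg

        # Step "angular flux" on [-1, 1] expanded in Legendre polynomials.
        N = 21
        x = np.linspace(-1.0, 1.0, 2001)
        dx = x[1] - x[0]
        f = np.where(x > 0.0, 1.0, 0.0)

        # Moments c_l = (2l+1)/2 * integral of f(x) P_l(x) over [-1, 1].
        P = np.array([leg.legval(x, np.eye(N + 1)[l]) for l in range(N + 1)])
        c = (2.0 * np.arange(N + 1) + 1.0) / 2.0 * (P @ f) * dx

        unfiltered = leg.legval(x, c)

        # Exponential filter on the moments: sigma(l) = exp(-alpha (l/N)^p).
        alpha, p = 36.0, 4
        sigma = np.exp(-alpha * (np.arange(N + 1) / N) ** p)
        filtered = leg.legval(x, c * sigma)

        print(unfiltered.max(), filtered.max())   # Gibbs overshoot vs filtered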

  19. Simplified hourly method to calculate summer temperatures in dwellings

    DEFF Research Database (Denmark)

    Mortensen, Lone Hedegaard; Aggerholm, Søren

    2012-01-01

    with an ordinary distribution of windows and a “worst” case where the window area facing south and west was increased by more than 60%. The simplified method used Danish weather data and only needs information on transmission losses, thermal mass, surface contact, internal load, ventilation scheme and solar load...... program for thermal simulations of buildings. The results are based on one year simulations of two cases. The cases were based on a low energy dwelling of 196 m². The transmission loss for the building envelope was 3.3 W/m², not including windows and doors. The dwelling was tested in two cases, a case...

  20. Methods of calculating engineering parameters for gas separations

    Science.gov (United States)

    Lawson, D. D.

    1980-01-01

    A group additivity method has been generated which makes it possible to estimate, from the structural formulas alone, the energy of vaporization and the molar volume at 25 C of many nonpolar organic liquids. From these two parameters and appropriate thermodynamic relationships it is then possible to predict the vapor pressure of the liquid phase and the solubility of various gases in nonpolar organic liquids. The data are then used to evaluate organic and some inorganic liquids for use in gas separation stages or as heat exchange fluids in prospective thermochemical cycles for hydrogen production.
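
    The mechanics of such a group-additivity estimate, and the solubility-parameter-style quantity that follows from the two predicted properties, look roughly like this. The group values below are illustrative placeholders standing in for the published table, not Lawson's actual data.

        # Group-additivity estimate of the energy of vaporization and the molar
        # volume at 25 C, then a Hildebrand-type solubility parameter.
        groups = {"CH3": 2, "CH2": 6}                # e.g. n-octane

        dE_vap = {"CH3": 4.71, "CH2": 4.94}          # kJ/mol per group (assumed)
        v_molar = {"CH3": 33.5, "CH2": 16.1}         # cm^3/mol per group (assumed)

        E = sum(n * dE_vap[g] for g, n in groups.items())     # kJ/mol
        V = sum(n * v_molar[g] for g, n in groups.items())    # cm^3/mol

        delta = (1000.0 * E / V) ** 0.5              # (J/cm^3)^0.5
        print(E, V, delta)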

  1. A New Method for Calculating Counts in Cells

    Science.gov (United States)

    Szapudi, István

    1998-04-01

    In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed as measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm, which in practice achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.

  2. A Method for Correcting IMRT Optimizer Heterogeneity Dose Calculations

    International Nuclear Information System (INIS)

    Zacarias, Albert S.; Brown, Mellonie F.; Mills, Michael D.

    2010-01-01

    Radiation therapy treatment planning for volumes close to the patient's surface, in lung tissue and in the head and neck region, can be challenging for the planning system optimizer because of the complexity of the treatment and protected volumes, as well as striking heterogeneity corrections. Because it is often the goal of the planner to produce an isodose plan with uniform dose throughout the planning target volume (PTV), there is a need for improved planning optimization procedures for PTVs located in these anatomical regions. To illustrate such an improved procedure, we present a treatment planning case of a patient with a lung lesion located in the posterior right lung. The intensity-modulated radiation therapy (IMRT) plan generated using standard optimization procedures produced substantial dose nonuniformity across the tumor caused by the effect of lung tissue surrounding the tumor. We demonstrate a novel iterative method of dose correction performed on the initial IMRT plan to produce a more uniform dose distribution within the PTV. This optimization method corrected for the dose missing on the periphery of the PTV and reduced the maximum dose on the PTV to 106% from 120% on the representative IMRT plan.

  3. On the question of calculation methods of phase diagrams

    International Nuclear Information System (INIS)

    Vasil'ev, M.V.

    1983-01-01

A technique for determining the interaction parameters of the components of binary alloys is suggested, using the U-Mo and Cu-Al systems and their experimental state diagrams as examples. It is shown that a search for new regularities is necessary for the analytical description of state diagrams and for forecasting the shape of phase-equilibrium curves in real systems. An optimal combination of experimental investigations, aimed at the reliable determination of supporting points, with the forecasting possibilities of typical equations can considerably decrease the volume of experimental work when preparing state diagrams, and when re-determining state diagrams more reliably with more advanced methods of investigation. Translating state diagrams from geometric to analytical language with the use of typical equations opens up new possibilities for establishing a compact information bank for state diagrams.

  4. Performance of various mathematical methods for calculation of radioimmunoassay results

    International Nuclear Information System (INIS)

    Sandel, P.; Vogt, W.

    1977-01-01

Interpolation and regression methods are available for computer-aided determination of radioimmunological end results. We compared the performance of eight algorithms (weighted and unweighted linear logit-log regression, quadratic logit-log regression, Rodbard's logistic model in the weighted and unweighted form, smoothing spline interpolation with a large and a small smoothing factor, and polygonal interpolation) on the basis of three radioimmunoassays with different reference curve characteristics (digoxin, estriol, human chorionic somatomammotropin = HCS). Particular attention was paid to the accuracy of the approximation at the intermediate points on the curve, i.e. those points that lie midway between two standard concentrations. These concentrations were obtained by weighing and inserted as unknown samples. In the case of digoxin and estriol the polygonal interpolation provided the best results, while the weighted logit-log regression proved superior in the case of HCS. (orig.) [de
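
    The logit-log calibration that several of these algorithms share is compact enough to sketch. A minimal, unweighted version (the standards below are invented, and the weighting scheme of the weighted variants is omitted):

```python
import numpy as np

# Unweighted logit-log calibration sketch (one of the eight algorithms
# compared; the standard concentrations and responses below are made up).
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # standard concentrations
b_b0 = np.array([0.82, 0.68, 0.52, 0.36, 0.22])  # measured bound fraction B/B0

logit = np.log(b_b0 / (1.0 - b_b0))
slope, intercept = np.polyfit(np.log(conc), logit, 1)

def concentration(y):
    """Invert the fitted logit-log line for an unknown sample's B/B0."""
    z = np.log(y / (1.0 - y))
    return np.exp((z - intercept) / slope)

print(concentration(0.60))  # read an unknown sample off the reference curve
```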

  5. Comparison of hardenability calculation methods of the heat-treatable constructional steels

    Energy Technology Data Exchange (ETDEWEB)

    Dobrzanski, L.A.; Sitek, W. [Division of Tool Materials and Computer Techniques in Metal Science, Silesian Technical University, Gliwice (Poland)

    1995-12-31

Evaluation has been made of the consistency of calculation of the hardenability curves of selected heat-treatable alloyed constructional steels with the experimental data. The study was conducted based on an analysis of the present state of knowledge on hardenability calculation, employing neural network methods. Several calculation examples and a comparison of the consistency of the calculation methods employed are included. (author). 35 refs, 2 figs, 3 tabs.

  6. Different methods for calculation of LVEF: which is right?

    International Nuclear Information System (INIS)

    Blair, E.; McLean, R.; Dixson, H.

    1999-01-01

Full text: Before the introduction of quantitative gated SPET (QGS) software, our routine method of determining left ventricular ejection fraction (LVEF) was the manual processing of gated heart pool studies (GHPS). The purpose of this preliminary study was to evaluate four methods of LVEF determination available in our private practice. We compared the LVEF obtained from manual GHPS (mGHPS) with that from automated GHPS (aGHPS), and that from both manual and automated QGS (mQGS and aQGS respectively) in 20 patients with a mean age of 63.5 years. All studies were analysed using standard ADAC computers and proprietary software. Two observers determined mGHPS and mQGS, and the results were analysed using linear regression, Bland-Altman plots and visual analysis. The values determined by the two observers for mGHPS and mQGS differed by an average of 1.15% and -0.35% respectively and were strongly correlated (r = 0.95 and 0.94). For the automatic processing protocols (aGHPS and aQGS), there was a mean difference of 1.00% and a correlation of r = 0.63. The differences between mGHPS and aGHPS were greater than the differences between mQGS and aQGS. Comparison of Observer 1's mGHPS and mQGS gave a mean difference of 12.4% (range 2% to 24%), r = 0.75; comparison of Observer 2's GHPS and QGS gave a mean difference of 11.0% (range -11% to 22%), r = 0.66. Comparing the average mGHPS to aGHPS gave a mean difference of 2.4% (range -8.5% to 12%), r = 0.88, and comparing the average mQGS to aQGS gave a mean difference of -10.5% (range -18.5% to -5%), r = 0.96. From this study, we have found that the LVEF by mGHPS is substantially higher than that by mQGS, aGHPS and aQGS. Further investigation with a larger sample and different camera systems is needed.
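
    For reference, the Bland-Altman statistics used in such method comparisons reduce to a bias and limits of agreement; a sketch with invented LVEF pairs, not the study data:

```python
import numpy as np

# Sketch of the Bland-Altman agreement statistics used in the comparison
# (illustrative LVEF pairs only).
mghps = np.array([55.0, 48.0, 62.0, 35.0, 70.0])  # manual GHPS LVEF (%)
mqgs = np.array([44.0, 40.0, 50.0, 30.0, 52.0])   # manual QGS LVEF (%)

diff = mghps - mqgs
bias = diff.mean()                 # mean difference between the two methods
loa = 1.96 * diff.std(ddof=1)      # 95% limits of agreement half-width
print(f"bias {bias:.1f}%, limits of agreement {bias - loa:.1f}% to {bias + loa:.1f}%")
```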

  7. Method of calculation overall equipment effectiveness in fertilizer factory

    Science.gov (United States)

    Siregar, I.; Muchtar, M. A.; Rahmat, R. F.; Andayani, U.; Nasution, T. H.; Sari, R. M.

    2018-02-01

This research was conducted at a fertilizer company in Sumatra that produces fertilizer in large quantities to meet consumer demand. The company cannot avoid issues related to the performance and effectiveness of its machinery and equipment: the machines run every day without a break, with the result that not all products meet the quality standards set by the company. Therefore, to measure and improve the performance of the machinery in the Urea-1 plant unit as a whole, the Overall Equipment Effectiveness (OEE) method was used, OEE being a key element of Total Productive Maintenance (TPM) that measures the effectiveness of a machine so that measures can be taken to maintain that level. In July, August and September the OEE values were above the 85% standard, whereas in October, November and December the OEE values did not reach the standard. The low OEE values were due to a lack of machine availability for production, caused by shutdowns long enough to reduce the available production time.
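
    OEE itself is the product of three ratios, availability x performance x quality; a minimal sketch with invented shift data (not the plant's):

```python
# Sketch of the standard OEE decomposition (all numbers invented).
def oee(loading_time, downtime, ideal_cycle, units_produced, defects):
    operating_time = loading_time - downtime
    availability = operating_time / loading_time
    performance = (ideal_cycle * units_produced) / operating_time
    quality = (units_produced - defects) / units_produced
    return availability * performance * quality

print(f"OEE = {oee(720.0, 60.0, 0.5, 1200, 36):.1%}")  # vs. the 85% benchmark
```

    An OEE below the benchmark then traces directly to whichever factor is depressed; in the plant studied it was availability.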

  8. Theories and calculation methods for regional objective ET

    Institute of Scientific and Technical Information of China (English)

    QIN DaYong; LO JinYan; LIU JiaHong; WANG MingNa

    2009-01-01

The regional objective ET (Evapotranspiration) is a new concept in water resources research, which refers to the total amount of water that could be exhausted from a region in the form of vapor per year. Objective-ET based water resources management allocates water to different regions in terms of ET and controls the water exhausted from a region to meet the objective ET. The regional objective ET must be adapted to fit the region's locally available water resources. By improving water utilization efficiency and reducing the unrecoverable water in the social water cycle, water is saved so that water-related production is maintained or even increased under the same water consumption conditions. Regional water balance is realized by rationally deploying the available water among different industries, adjusting industrial structures, and adopting new water-saving technologies, thereby meeting the requirements for groundwater conservation and agricultural income stability and avoiding environmental damage. Furthermore, water competition among various departments and industries (including environmental and ecological water use) may be avoided. This paper proposes an innovative definition of objective ET, together with its principles and sub-index systems. A computational method for regional objective ET is also developed by combining a distributed hydrological model with a soil moisture model.

  9. Preconditioned Conjugate Gradient methods for low speed flow calculations

    Science.gov (United States)

    Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing

    1993-01-01

An investigation is conducted into the viability of using a generalized Conjugate Gradient-like method as an iterative solver to obtain steady-state solutions of very low-speed fluid flow problems. Low-speed flow at Mach 0.1 over a backward-facing step is chosen as a representative test problem. The unsteady form of the two-dimensional, compressible Navier-Stokes equations is integrated in time using discrete time-steps. The Navier-Stokes equations are cast in an implicit, upwind finite-volume, flux-split formulation. The new iterative solver is used to solve a linear system of equations at each step of the time-integration. Preconditioning techniques are used with the new solver to enhance the stability and the convergence rate of the solver and are found to be critical to the overall success of the solver. A study of various preconditioners reveals that a preconditioner based on the lower-upper (L-U) successive symmetric over-relaxation iterative scheme is more efficient than a preconditioner based on incomplete L-U factorizations of the iteration matrix. The performance of the new preconditioned solver is compared with a conventional line Gauss-Seidel relaxation (LGSR) solver. Overall speed-up factors of 28 (in terms of global time-steps required to converge to a steady-state solution) and 20 (in terms of total CPU time on one processor of a CRAY-YMP) are found in favor of the new preconditioned solver, when compared with the LGSR solver.
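
    For orientation, the skeleton of a preconditioned Conjugate Gradient iteration is sketched below on a tiny symmetric positive-definite system, with a Jacobi preconditioner standing in for the L-U SSOR preconditioner the study favours; the paper's actual solver is a CG-like variant suited to the nonsymmetric systems arising from the flux-split discretization.

```python
import numpy as np

# Minimal preconditioned CG sketch (Jacobi preconditioner M = diag(A)).
def pcg(A, b, tol=1e-8, max_iter=200):
    x = np.zeros_like(b)
    r = b - A @ x
    M_inv = 1.0 / np.diag(A)          # apply the preconditioner cheaply
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
print(pcg(A, np.array([1.0, 2.0])))   # expect roughly [0.0909, 0.6364]
```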

  10. OCOPTR, Minimization of Nonlinear Function, Variable Metric Method, Derivative Calculation. DRVOCR, Minimization of Nonlinear Function, Variable Metric Method, Derivative Calculation

    International Nuclear Information System (INIS)

    Nazareth, J. L.

    1979-01-01

1 - Description of problem or function: OCOPTR and DRVOCR are computer programs designed to find minima of non-linear differentiable functions f: Rⁿ → R with n-dimensional domains. OCOPTR requires that the user only provide function values (i.e. it is a derivative-free routine). DRVOCR requires the user to supply both function and gradient information. 2 - Method of solution: OCOPTR and DRVOCR use the variable metric (or quasi-Newton) method of Davidon (1975). For OCOPTR, the derivatives are estimated by finite differences along a suitable set of linearly independent directions. For DRVOCR, the derivatives are user-supplied. Some features of the codes are the storage of the approximation to the inverse Hessian matrix in lower trapezoidal factored form and the use of an optimally-conditioned updating method. Linear equality constraints are permitted subject to the initial Hessian factor being chosen correctly. 3 - Restrictions on the complexity of the problem: The functions to which the routine is applied are assumed to be differentiable. The routine also requires (n²/2) + O(n) storage locations, where n is the problem dimension
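
    A minimal sketch of the derivative-free pattern OCOPTR follows: forward-difference gradients feed a quasi-Newton inverse-Hessian update. Standard BFGS with a backtracking line search is used here for concreteness; Davidon's optimally-conditioned update and the factored trapezoidal storage differ in detail.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient estimate along the coordinate directions."""
    fx, g = f(x), np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def minimize(f, x, iters=100):
    H = np.eye(x.size)                       # inverse-Hessian approximation
    g = fd_gradient(f, x)
    for _ in range(iters):
        p = -H @ g
        t = 1.0                              # Armijo backtracking line search
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p) and t > 1e-10:
            t *= 0.5
        x_new = x + t * p
        g_new = fd_gradient(f, x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:                    # curvature condition, then BFGS update
            rho = 1.0 / (s @ y)
            V = np.eye(x.size) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

print(minimize(lambda v: (v[0] - 1.0)**2 + 10.0 * (v[1] + 2.0)**2,
               np.array([0.0, 0.0])))        # expect roughly [1, -2]
```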

  11. Fast GPU-based computation of the sensitivity matrix for a PET list-mode OSEM algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Nassiri, Moulay Ali; Carrier, Jean-Francois [Montreal Univ., QC (Canada). Dept. de Radio-Oncologie; Hissoiny, Sami [Ecole Polytechnique de Montreal, QC (Canada). Dept. de Genie Informatique et Genie Logiciel; Despres, Philippe [Quebec Univ. (Canada). Dept. de Radio-Oncologie

    2011-07-01

One of the obstacles to introducing a list-mode PET reconstruction algorithm for routine clinical use is the long computation time required for the sensitivity matrix calculation. This matrix must be computed for each study because it depends on the object attenuation map. During the last decade, studies have shown that 3D list-mode OSEM reconstruction algorithms can be effectively performed and considerably accelerated by GPU devices. However, most of that preliminary work (1) was done for pre-clinical PET systems in which the number of LORs is small compared to modern human PET systems and (2) supposed that the sensitivity matrix is pre-calculated. The time required to compute this matrix can, however, be longer than the reconstruction time itself. The objective of this work is to investigate the performance of sensitivity matrix calculations in terms of computation time with modern GPUs, for clinical fully 3D LM-OSEM for modern PET scanners. For this purpose, sensitivity matrix calculations and full list-mode OSEM reconstruction for human PET systems were implemented on GPUs using the CUDA framework. The system matrices were built on-the-fly by using the multi-ray Siddon algorithm. The time to compute the sensitivity matrix for 288 x 288 x 57 arrays using 3 tangential LORs was 29 seconds. The 3D LM-OSEM algorithm, including the sensitivity matrix calculation, was performed for the same LORs in 71 seconds for 62 million events, 6 frames and 1 iteration. This work lets one envision fast reconstructions for advanced PET applications such as dynamic studies and parametric image reconstruction. (orig.)

  12. Efficient k⋅p method for the calculation of total energy and electronic density of states

    OpenAIRE

    Iannuzzi, Marcella; Parrinello, Michele

    2001-01-01

An efficient method for calculating the electronic structure in large systems with a fully converged BZ sampling is presented. The method is based on a k.p-like approximation developed in the framework of density functional perturbation theory. The reliability and efficiency of the method are demonstrated in test calculations on Ar and Si supercells.

  13. An Efficient Method for Electron-Atom Scattering Using Ab-initio Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Yuan; Yang, Yonggang; Xiao, Liantuan; Jia, Suotang [Shanxi University, Taiyuan (China)

    2017-02-15

We present an efficient method based on ab-initio calculations to investigate electron-atom scattering. These calculations profit from methods implemented in standard quantum chemistry programs. The new approach is applied to electron-helium scattering. The results are compared with experimental and other theoretical references to demonstrate the efficiency of our method.

  14. Development of approximate shielding calculation method for high energy cosmic radiation on LEO satellites

    International Nuclear Information System (INIS)

    Sin, M. W.; Kim, M. H.

    2002-01-01

To calculate the total dose effect on semiconductor devices in a satellite over the period of a space mission effectively, two approximate calculation models for cosmic radiation shielding were proposed: a sectoring method and a chord-length distribution method. When an approximate method was applied in this study, the complex structure of the satellite was described by multiple 1-dimensional slabs, structural materials were converted to a reference material (aluminum), and a pre-calculated dose-depth conversion function was introduced to simplify the calculation process. Verification calculations were performed for the orbit location and structure geometry of KITSAT-1 and compared with detailed 3-dimensional calculation results and experimental values. The calculation results from the approximate method were conservative, with acceptable error. However, results for the satellite mission simulation underestimated the total dose rate compared with experimental values

  15. Development of approximate shielding calculation method for high energy cosmic radiation on LEO satellites

    Energy Technology Data Exchange (ETDEWEB)

    Sin, M. W.; Kim, M. H. [Kyunghee Univ., Yongin (Korea, Republic of)

    2002-10-01

To calculate the total dose effect on semiconductor devices in a satellite over the period of a space mission effectively, two approximate calculation models for cosmic radiation shielding were proposed: a sectoring method and a chord-length distribution method. When an approximate method was applied in this study, the complex structure of the satellite was described by multiple 1-dimensional slabs, structural materials were converted to a reference material (aluminum), and a pre-calculated dose-depth conversion function was introduced to simplify the calculation process. Verification calculations were performed for the orbit location and structure geometry of KITSAT-1 and compared with detailed 3-dimensional calculation results and experimental values. The calculation results from the approximate method were conservative, with acceptable error. However, results for the satellite mission simulation underestimated the total dose rate compared with experimental values.

  16. Non-iterative method to calculate the periodical distribution of temperature in reactors with thermal regeneration

    International Nuclear Information System (INIS)

    Sanchez de Alsina, O.L.; Scaricabarozzi, R.A.

    1982-01-01

A non-iterative matrix method to calculate the periodical temperature distribution in reactors with thermal regeneration is presented. In the case of an exothermic reaction, a source term is included. A computer code was developed to calculate the final temperature distribution in the solids and the outlet temperatures of the gases. Results obtained from an ethane-oxidation calculation in air, using the Dietrich kinetic data, are presented. This method is more advantageous than iterative methods. (E.G.) [pt

  17. Perturbation method for calculation of narrow-band impedance and trapped modes

    International Nuclear Information System (INIS)

    Heifets, S.A.

    1987-01-01

An iterative method for calculation of the narrow-band impedance is described for a system with a small variation in boundary conditions, such that the variation can be considered a perturbation. The results are compared with numerical calculations. The method is used to relate the origin of the trapped modes to the degeneracy of the spectrum of the unperturbed system. The method can also be applied to transverse impedance calculations. 6 refs., 6 figs., 1 tab

  18. Study on application of green's function method in thermal stress rapid calculation

    International Nuclear Information System (INIS)

    Zhang Guihe; Duan Yuangang; Xu Xiao; Chen Rong

    2013-01-01

This paper presents a quick and accurate thermal stress calculation method, the Green's function method, which combines the finite element method with a numerical algorithm. As a demonstration example, the thermal stress calculation for the safe injection nozzle of the reactor coolant line of a PWR plant is performed with the Green's function method for heatup and cooldown thermal transients, and the result is compared with the finite element method to verify the rationality and accuracy of the method. The advantages and disadvantages of the Green's function method and the finite element method are also compared. (authors)
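
    The core of such a method is a Duhamel convolution: a unit-step stress response is computed once with a detailed finite element model, and each new transient is then handled by a fast convolution. A sketch under assumed, illustrative values (the Green's function G and the cooldown transient below are invented):

```python
import numpy as np

# Duhamel-convolution sketch of the Green's function idea: the stress
# response to a unit step in fluid temperature, G (an invented exponential
# here), is convolved with the temperature rate of an invented transient.
t = np.linspace(0.0, 600.0, 601)               # s
G = 50.0 * np.exp(-t / 120.0)                  # MPa per K of step change (assumed)
T_fluid = 290.0 - 0.5 * t / 60.0               # slow cooldown, degC

dTdt = np.gradient(T_fluid, t)                 # K/s
dt = t[1] - t[0]
sigma = np.convolve(G, dTdt)[: t.size] * dt    # discrete Duhamel integral, MPa
print(f"peak thermal stress ~ {sigma.min():.1f} MPa")
```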

  19. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    OpenAIRE

    Yang, Shan; Tong, Xiangqian

    2016-01-01

    Power flow calculation and short circuit calculation are the basis of theoretical research for distribution network with inverter based distributed generation. The similarity of equivalent model for inverter based distributed generation during normal and fault conditions of distribution network and the differences between power flow and short circuit calculation are analyzed in this paper. Then an integrated power flow and short circuit calculation method for distribution network with inverte...

  20. Comparison of the methods for calculating the interfacial heat transfer coefficient in hot stamping

    International Nuclear Information System (INIS)

    Zhao, Kunmin; Wang, Bin; Chang, Ying; Tang, Xinghui; Yan, Jianwen

    2015-01-01

This paper presents a hot stamping experiment and three methods for calculating the interfacial heat transfer coefficient (IHTC) of 22MnB5 boron steel. Comparison of the calculation results shows an average error of 7.5% for the heat balance method, 3.7% for Beck's nonlinear inverse estimation method (Beck's method), and 10.3% for the finite-element-analysis-based optimization method (the FEA method). Beck's method is a robust and accurate method for identifying the IHTC in hot stamping applications, and numerical simulation using the IHTC identified by Beck's method can predict the temperature field with high accuracy. - Highlights: • A theoretical formula was derived for direct calculation of IHTC. • Beck's method is a robust and accurate method for identifying IHTC. • The finite element method can be used to identify an overall equivalent IHTC
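
    Of the three, the heat balance method is the simplest to sketch: under a lumped-capacitance assumption the IHTC follows from the slope of the logarithmic temperature excess of the cooling blank. All values below are invented stand-ins, and contact is treated as one-sided:

```python
import numpy as np

# Heat-balance IHTC sketch via lumped capacitance (illustrative numbers;
# the paper's Beck method solves the inverse problem more carefully).
rho, c, thickness = 7830.0, 650.0, 1.6e-3       # rough 22MnB5 properties, SI
t = np.linspace(0.0, 2.0, 21)                    # s
T_die = 80.0
T = T_die + (800.0 - T_die) * np.exp(-1.5 * t)   # synthetic blank temperature

theta = (T - T_die) / (T[0] - T_die)             # normalized temperature excess
slope = np.polyfit(t, np.log(theta), 1)[0]       # d ln(theta) / dt
h = -rho * c * thickness * slope                 # one-sided contact assumed
print(f"IHTC ~ {h:.0f} W/m^2K")
```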

  1. GPU-Based Computation of Formation Pressure for Multistage Hydraulically Fractured Horizontal Wells in Tight Oil and Gas Reservoirs

    Directory of Open Access Journals (Sweden)

    Rongwang Yin

    2018-01-01

Full Text Available A mathematical model for multistage hydraulically fractured horizontal wells (MFHWs) in tight oil and gas reservoirs was derived by considering the variations in the permeability and porosity of tight oil and gas reservoirs that depend on formation pressure and mixed fluid properties, and by introducing the pseudo-pressure; analytical solutions were presented using the Newman superposition principle. The CPU-GPU asynchronous computing model was designed based on the CUDA platform, and the analytic solution was decomposed into infinite-summation and integral forms for parallel computation. Implementation of this algorithm on an Intel i5 4590 CPU and an NVIDIA GT 730 GPU demonstrates that the computation speed increased by almost 80 times, which meets the requirement for real-time calculation of the formation pressure of MFHWs.

  2. A combination of differential method and perturbation theory for the calculation of sensitivity coefficients

    International Nuclear Information System (INIS)

    Santos, Adimir dos; Borges, A.A.

    2000-01-01

A new method for the calculation of sensitivity coefficients is developed. The new method is a combination of two methodologies used for calculating these coefficients: the differential and the generalized perturbation theory methods. The proposed method utilizes as integral parameter the average flux in an arbitrary region of the system; thus, the sensitivity coefficient contains only the component corresponding to the neutron flux. To obtain the new sensitivity coefficient, the derivatives of the integral parameter, φ(ξ), with respect to σ are calculated using the perturbation method, and the functional derivatives of this generic integral parameter with respect to σ and φ are calculated using the differential method. The new method merges the advantages of the differential and generalized perturbation theory methods and eliminates their disadvantages. (author)

3. An Effective Method to Accurately Calculate the Phase Space Factors for β⁻β⁻ Decay

    International Nuclear Information System (INIS)

    Horoi, Mihai; Neacsu, Andrei

    2016-01-01

    Accurate calculations of the electron phase space factors are necessary for reliable predictions of double-beta decay rates and for the analysis of the associated electron angular and energy distributions. We present an effective method to calculate these phase space factors that takes into account the distorted Coulomb field of the daughter nucleus, yet it allows one to easily calculate the phase space factors with good accuracy relative to the most exact methods available in the recent literature.

  4. Self-consistent field variational cellular method as applied to the band structure calculation of sodium

    International Nuclear Information System (INIS)

    Lino, A.T.; Takahashi, E.K.; Leite, J.R.; Ferraz, A.C.

    1988-01-01

    The band structure of metallic sodium is calculated, using for the first time the self-consistent field variational cellular method. In order to implement the self-consistency in the variational cellular theory, the crystal electronic charge density was calculated within the muffin-tin approximation. The comparison between our results and those derived from other calculations leads to the conclusion that the proposed self-consistent version of the variational cellular method is fast and accurate. (author) [pt

  5. PKI, Gamma Radiation Reactor Shielding Calculation by Point-Kernel Method

    International Nuclear Information System (INIS)

    Li Chunhuai; Zhang Liwu; Zhang Yuqin; Zhang Chuanxu; Niu Xihua

    1990-01-01

1 - Description of program or function: This code calculates gamma-ray radiation shielding problems in general geometries. 2 - Method of solution: PKI uses a point-kernel integration technique, describes the shielding geometry by means of a geometric configuration method and coordinate conversion, and makes use of the calculation results of the reactor primary shielding and of the coolant flow regularity in the loop system
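
    The point-kernel estimate at the heart of such codes is an attenuated inverse-square flux multiplied by a buildup factor. A sketch with an invented linear buildup form and illustrative constants:

```python
import numpy as np

# Point-kernel sketch: uncollided flux attenuated along the ray, times a
# crude linear buildup factor (constants are illustrative, not PKI's).
def point_kernel_flux(source_strength, mu, r, buildup_coeff=1.0):
    """Photon flux at distance r (cm) from a point source behind a shield.

    source_strength: photons/s, mu: attenuation coefficient (1/cm).
    """
    buildup = 1.0 + buildup_coeff * mu * r           # B(mu*r), assumed form
    return source_strength * buildup * np.exp(-mu * r) / (4.0 * np.pi * r**2)

print(point_kernel_flux(1e10, 0.06, 100.0))  # 1 m of shield at mu = 0.06/cm
```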

  6. New method of ionization energy calculation for two-electron ions

    International Nuclear Information System (INIS)

    Ershov, D.K.

    1997-01-01

A new method for calculation of the ionization energy of two-electron ions is proposed. The method is based on the calculation of the energy of the second electron's interaction with the field of a one-electron ion, the potential of which is well known

  7. Critical Values for Lawshe's Content Validity Ratio: Revisiting the Original Methods of Calculation

    Science.gov (United States)

    Ayre, Colin; Scally, Andrew John

    2014-01-01

The content validity ratio originally proposed by Lawshe is widely used to quantify content validity, and yet the methods used to calculate the original critical values were never reported. Methods for the original calculation of the critical values are suggested, along with tables of exact binomial probabilities.
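
    Both the ratio and the exact binomial tail behind such critical values fit in a few lines; a sketch with arbitrary panel numbers:

```python
from math import comb

# Lawshe's content validity ratio and the exact binomial tail probability
# that underlies critical-value tables of this kind (a sketch).
def cvr(n_essential, n_panel):
    return (n_essential - n_panel / 2) / (n_panel / 2)

def binomial_tail(n_essential, n_panel, p=0.5):
    """P(X >= n_essential) if panelists rated 'essential' at random."""
    return sum(comb(n_panel, k) * p**k * (1 - p)**(n_panel - k)
               for k in range(n_essential, n_panel + 1))

print(cvr(9, 10), binomial_tail(9, 10))  # CVR = 0.8, one-sided p ~ 0.011
```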

  8. Comparison of Different Numerical Methods for Quality Factor Calculation of Nano and Micro Photonic Cavities

    DEFF Research Database (Denmark)

    Taghizadeh, Alireza; Mørk, Jesper; Chung, Il-Sug

    2014-01-01

Four different numerical methods for calculating the quality factor and resonance wavelength of a nano or micro photonic cavity are compared. Good agreement was found for a wide range of quality factors. Advantages and limitations of the different methods are discussed.

  9. Conventional method for the calculation of the global energy cost of buildings; Methode conventionnelle de calcul du cout global energetique des batiments

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-05-01

A working group driven by the Electricite de France (EdF), Chauffage Fioul and Gaz de France (GdF) companies was set up, with the support of several building engineering companies, in order to clarify the use of the method of calculation of the global energy cost of buildings. This global cost is one economical decision-help criterion among others. This press kit presents, first, the content of the method (input data, calculation of annual expenses, calculation of the global energy cost, display of results and limitations of the method). Then it fully describes the method and the appendixes necessary for its implementation: economical and financial context, general data of the project in progress, environmental data, occupation and comfort level, variants, investment cost of energy systems, investment cost for the structure linked with the energy system, investment cost for other invariant elements of the structure, calculation of consumptions (space heating, hot water, ventilation), maintenance costs (energy systems, structure), operation and exploitation costs, tariffs and consumption costs and taxes, actualized global cost, annualized global cost, comparison between variants. The method is applied to a council building of 23 flats taken as an example. (J.S.)

  10. GPU-based stochastic-gradient optimization for non-rigid medical image registration in time-critical applications

    NARCIS (Netherlands)

    Staring, M.; Al-Ars, Z.; Berendsen, Floris; Angelini, Elsa D.; Landman, Bennett A.

    2018-01-01

    Currently, non-rigid image registration algorithms are too computationally intensive to use in time-critical applications. Existing implementations that focus on speed typically address this by either parallelization on GPU-hardware, or by introducing methodically novel techniques into

  11. Environment-based pin-power reconstruction method for homogeneous core calculations

    International Nuclear Information System (INIS)

    Leroyer, H.; Brosselard, C.; Girardi, E.

    2012-01-01

    Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assemblies calculations relying on a fundamental mode approach are used to generate cross-sections libraries for PWRs core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction so far as it is consistent with the core loading pattern. (authors)

  12. An Out-of-Core GPU based dimensionality reduction algorithm for Big Mass Spectrometry Data and its application in bottom-up Proteomics.

    Science.gov (United States)

    Awan, Muaaz Gul; Saeed, Fahad

    2017-08-01

Modern high resolution Mass Spectrometry instruments can generate millions of spectra in a single systems biology experiment. Each spectrum consists of thousands of peaks, but only a small number of peaks actively contribute to deduction of peptides. Therefore, pre-processing of MS data to detect noisy and non-useful peaks is an active area of research. Most of the sequential noise reducing algorithms are impractical to use as a pre-processing step due to high time-complexity. In this paper, we present a GPU based dimensionality-reduction algorithm, called G-MSR, for MS2 spectra. Our proposed algorithm uses novel data structures which optimize the memory and computational operations inside the GPU. These novel data structures include Binary Spectra and Quantized Indexed Spectra (QIS). The former helps in communicating essential information between the CPU and GPU using a minimum amount of data, while the latter enables us to store and process a complex 3-D data structure as a 1-D array while maintaining the integrity of the MS data. Our proposed algorithm also takes into account the limited memory of GPUs and switches between in-core and out-of-core modes based upon the size of the input data. G-MSR achieves a peak speed-up of 386x over its sequential counterpart and is shown to process over a million spectra in just 32 seconds. The code for this algorithm is available as GPL open-source at GitHub at the following link: https://github.com/pcdslab/G-MSR.

  13. Approximated calculation of the vacuum wave function and vacuum energy of the LGT with RPA method

    International Nuclear Information System (INIS)

    Hui Ping

    2004-01-01

The coupled cluster method is improved with the random phase approximation (RPA) to calculate the vacuum wave function and vacuum energy of 2+1-D SU(2) lattice gauge theory. In this calculation, the trial wave function is composed of single-hollow graphs. The calculated vacuum wave functions show very good scaling behavior in the weak coupling region 1/g² > 1.2 from the third order to the sixth order, and the vacuum energy obtained with the RPA method is lower than the vacuum energy obtained without it, which means that this method is a more efficient one

  14. The Impact of Harmonics Calculation Methods on Power Quality Assessment in Wind Farms

    DEFF Research Database (Denmark)

    Kocewiak, Lukasz Hubert; Hjerrild, Jesper; Bak, Claus Leth

    2010-01-01

Different methods of calculating harmonics in measurements obtained from offshore wind farms are shown in this paper. Appropriate data processing methods are suggested for harmonics with different origin and nature. Enhancements of discrete Fourier transform application in order to reduce measurement data processing errors are proposed and compared with classical methods. A comparison of signal processing methods for harmonic studies is presented, and application dependent on harmonics origin and nature is recommended. Certain aspects related to magnitude and phase calculation in stationary measurement data are analysed and described. Qualitative indices of measurement data harmonic analysis are suggested and used in order to assess the calculation accuracy.

  15. An algorithm of α-and γ-mode eigenvalue calculations by Monte Carlo method

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro; Miyoshi, Yoshinori

    2003-01-01

A new algorithm for Monte Carlo calculation was developed to obtain α- and γ-mode eigenvalues. The α is a prompt neutron time decay constant measured in subcritical experiments, and the γ is a spatial decay constant measured in an exponential method for determining the subcriticality. This algorithm can be implemented into existing Monte Carlo eigenvalue calculation codes with minimal modifications. The algorithm was implemented into the MCNP code, and the performance in calculating both mode eigenvalues was verified through comparison of the calculated eigenvalues with those obtained by fixed-source calculations. (author)

  16. GPU-based local interaction simulation approach for simplified temperature effect modelling in Lamb wave propagation used for damage detection

    International Nuclear Information System (INIS)

    Kijanka, P; Radecki, R; Packo, P; Staszewski, W J; Uhl, T

    2013-01-01

    Temperature has a significant effect on Lamb wave propagation. It is important to compensate for this effect when the method is considered for structural damage detection. The paper explores a newly proposed, very efficient numerical simulation tool for Lamb wave propagation modelling in aluminum plates exposed to temperature changes. A local interaction approach implemented with a parallel computing architecture and graphics cards is used for these numerical simulations. The numerical results are compared with the experimental data. The results demonstrate that the proposed approach could be used efficiently to produce a large database required for the development of various temperature compensation procedures in structural health monitoring applications. (paper)

  17. Virial-statistic method for calculation of atom and molecule energies

    International Nuclear Information System (INIS)

    Borisov, Yu.A.

    1977-01-01

A virial-statistical method has been applied to the calculation of the atomization energies of the following molecules: Mo(CO)6, Cr(CO)6, Fe(CO)5, MnH(CO)5, CoH(CO)4, Ni(CO)4. The principles of this method are briefly presented. Calculation results are given for the individual contributions to the atomization energies together with the calculated and experimental atomization energies (D). For the Mo(CO)6 complex D(calc) = 1759 and D(exp) = 1763 kcal/mole. Calculated and experimental combination heat values for carbonyl complexes are presented. These values are shown to be adequately consistent [ru

  18. A functional method for estimating DPA tallies in Monte Carlo calculations of Light Water Reactors

    International Nuclear Information System (INIS)

    Read, Edward A.; Oliveira, Cassiano R.E. de

    2011-01-01

There has been a growing need in recent years for the development of methodology to calculate radiation damage factors, namely displacements per atom (dpa), of structural components for Light Water Reactors (LWRs). The aim of this paper is to discuss the development and implementation of a dpa method using the Monte Carlo method for transport calculations. The capabilities of the Monte Carlo code Serpent, such as Woodcock tracking and fuel depletion, are assessed for radiation damage calculations, and its capability is demonstrated and compared with that of the Monte Carlo code MCNP for radiation damage calculations of a typical LWR configuration. (author)

  19. An accelerated hologram calculation using the wavefront recording plane method and wavelet transform

    Science.gov (United States)

    Arai, Daisuke; Shimobaba, Tomoyoshi; Nishitsuji, Takashi; Kakue, Takashi; Masuda, Nobuyuki; Ito, Tomoyoshi

    2017-06-01

Fast hologram calculation methods are critical in real-time holography applications such as three-dimensional (3D) displays. We recently proposed a wavelet transform-based hologram calculation called WASABI. Even though WASABI can decrease the calculation time of a hologram from a point cloud, its calculation time increases with increasing propagation distance. We also proposed a wavefront recording plane (WRP) method. This is a two-step fast hologram calculation in which the first step calculates the superposition of light waves emitted from a point cloud in a virtual plane, and the second step performs a diffraction calculation from the virtual plane to the hologram plane. A drawback of the WRP method is in the first step when the point cloud has a large number of object points and/or a long distribution in the depth direction. In this paper, we propose a method combining WASABI and the WRP method in which the drawbacks of each can be complementarily solved. Using a consumer CPU, the proposed method succeeded in performing a hologram calculation with 2048 × 2048 pixels from a 3D object with one million points in approximately 0.4 s.
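
    The WRP idea itself is compact: accumulate spherical waves in a virtual plane close to the object, then propagate that plane once with an FFT-based diffraction step. A sketch with invented grid, wavelength and object points, using angular-spectrum propagation as the second step:

```python
import numpy as np

# WRP-style two-step sketch: (1) superpose spherical waves from object
# points in a nearby virtual plane; (2) one FFT-based angular-spectrum
# propagation to the hologram plane. All parameters are invented.
N, wl, dx = 512, 532e-9, 8e-6
k = 2.0 * np.pi / wl
xs = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(xs, xs)

wrp = np.zeros((N, N), dtype=complex)
for px, py, pz in [(0.0, 0.0, 2e-3), (2e-4, -1e-4, 2.5e-3)]:  # object points
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    wrp += np.exp(1j * k * r) / r            # step 1: cheap, short distances

z = 0.1                                      # WRP -> hologram distance, m
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
kz2 = k ** 2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
H = np.exp(1j * z * np.sqrt(np.maximum(kz2, 0.0)))  # evanescent part clamped
hologram = np.fft.ifft2(np.fft.fft2(wrp) * H)       # step 2: one propagation
print(hologram.shape)
```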

  20. Hamming method for solving the delayed neutron precursor concentration for reactivity calculation

    International Nuclear Information System (INIS)

    Díaz, Daniel Suescún; Ospina, Juan Felipe Flórez; Sarasty, Jesús Andrés Rodríguez

    2012-01-01

Highlights: ► We present a new formulation to calculate the reactivity using the Hamming method. ► This method shows better accuracy than existing methods for reactivity calculation. ► The reactivity is calculated without limitation of the nuclear power form. ► The method can be implemented in reactivity meters with a time step of up to 0.1 s. - Abstract: We propose a new method for numerically solving the inverse point kinetic equation for a nuclear reactor using the Hamming method, without requiring the nuclear power history and without using the Laplace transform. This new method converges with accuracy of order h⁵, where h is the computation time step. The procedure is validated for different forms of the nuclear power and with different time steps. The results indicate that this method has better accuracy and lower computational effort compared with other conventional methods that use the nuclear power history.
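
    The inverse point kinetics underneath is compact: integrate the six delayed-neutron precursor equations along the measured power, then solve the power balance for reactivity. In the sketch below a trapezoidal step stands in for Hamming's predictor-corrector, and the kinetics parameters are illustrative:

```python
import numpy as np

# Inverse point kinetics sketch (illustrative six-group parameters).
beta_i = np.array([2.66e-4, 1.49e-3, 1.32e-3, 2.97e-3, 8.6e-4, 1.82e-4])
lam_i = np.array([0.0127, 0.0317, 0.116, 0.311, 1.40, 3.87])   # 1/s
beta, Lam = beta_i.sum(), 2.0e-5        # total delayed fraction, Lambda (s)

def reactivity(t, P):
    C = beta_i / (Lam * lam_i) * P[0]   # precursors at initial equilibrium
    rho = np.zeros_like(P)
    for n in range(1, t.size):
        h = t[n] - t[n - 1]
        # trapezoidal step of dC/dt = beta_i/Lam * P - lam_i * C
        C = (C * (1.0 - h * lam_i / 2.0)
             + h / 2.0 * beta_i / Lam * (P[n] + P[n - 1])) / (1.0 + h * lam_i / 2.0)
        dPdt = (P[n] - P[n - 1]) / h
        rho[n] = beta + Lam * (dPdt - lam_i @ C) / P[n]
    return rho

t = np.linspace(0.0, 10.0, 101)
print(reactivity(t, np.exp(0.1 * t))[-1])  # reactivity on a 10 s stable period
```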

  1. Failing in place for low-serviceability storage infrastructure using high-parity GPU-based RAID

    International Nuclear Information System (INIS)

    Curry, Matthew L.; Ward, H. Lee; Skjellum, Anthony

    2010-01-01

    In order to provide large quantities of high-reliability disk-based storage, it has become necessary to aggregate disks into fault-tolerant groups based on the RAID methodology. Most RAID levels do provide some fault tolerance, but there are certain classes of applications that require increased levels of fault tolerance within an array. Some of these applications include embedded systems in harsh environments that have a low level of serviceability, or uninhabited data centers servicing cloud computing. When describing RAID reliability, the Mean Time To Data Loss (MTTDL) calculations will often assume that the time to replace a failed disk is relatively low, or even negligible compared to rebuild time. For platforms that are in remote areas collecting and processing data, it may be impossible to access the system to perform system maintenance for long periods. A disk may fail early in a platform's life, but not be replaceable for much longer than typical for RAID arrays. Service periods may be scheduled at intervals on the order of months, or the platform may not be serviced until the end of a mission in progress. Further, this platform may be subject to extreme conditions that can accelerate wear and tear on a disk, requiring even more protection from failures. We have created a high parity RAID implementation that uses a Graphics Processing Unit (GPU) to compute more than two blocks of parity information per stripe, allowing extra parity to eliminate or reduce the requirement for rebuilding data between service periods. While this type of controller is highly effective for RAID 6 systems, an important benefit is the ability to incorporate more parity into a RAID storage system. Such RAID levels, as yet unnamed, can tolerate the failure of three or more disks (depending on configuration) without data loss. While this RAID system certainly has applications in embedded systems running applications in the field, similar benefits can be obtained for servers that are
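
    The MTTDL argument alluded to above is usually a Markov-chain approximation in which a long service interval enters as a large effective repair time, so extra parity compensates; a back-of-envelope sketch with invented disk parameters:

```python
from math import prod

# Back-of-envelope MTTDL for an n-disk array tolerating m failures
# (standard Markov-chain approximation; all numbers invented).
def mttdl(n_disks, m_parity, mttf_hours, mttr_hours):
    num = mttf_hours ** (m_parity + 1)
    den = prod(n_disks - k for k in range(m_parity + 1)) * mttr_hours ** m_parity
    return num / den

# A 90-day service window acts like a ~2160-hour repair time:
for m in (1, 2, 3):
    print(m, f"{mttdl(12, m, 1.0e6, 2160.0):.3e} hours")
```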

2. A finite element method for time-dependent soil-structure interaction calculations

    International Nuclear Information System (INIS)

    Ni, X.M.; Gantenbein, F.; Petit, M.

    1989-01-01

The proposed method is based on a finite element modelisation for the soil and the structure and a time-history calculation. It has been developed for plane and axisymmetric geometries. The principle of the method is presented, then applications are given: first a linear calculation, for which results are compared to those obtained by standard methods, and then results for non-linear behavior are described [fr

  3. GUESS-ing polygenic associations with multiple phenotypes using a GPU-based evolutionary stochastic search algorithm.

    Directory of Open Access Journals (Sweden)

    Leonardo Bottolo

Full Text Available Genome-wide association studies (GWAS) yielded significant advances in defining the genetic architecture of complex traits and disease. Still, a major hurdle of GWAS is narrowing down multiple genetic associations to a few causal variants for functional studies. This becomes critical in multi-phenotype GWAS where detection and interpretability of complex SNP(s)-trait(s) associations are complicated by complex Linkage Disequilibrium patterns between SNPs and correlation between traits. Here we propose a computationally efficient algorithm (GUESS) to explore complex genetic-association models and maximize genetic variant detection. We integrated our algorithm with a new Bayesian strategy for multi-phenotype analysis to identify the specific contribution of each SNP to different trait combinations and study genetic regulation of lipid metabolism in the Gutenberg Health Study (GHS). Despite the relatively small size of GHS (n = 3,175) when compared with the largest published meta-GWAS (n > 100,000), GUESS recovered most of the major associations and was better at refining multi-trait associations than alternative methods. Amongst the new findings provided by GUESS, we revealed a strong association of SORT1 with TG-APOB and LIPC with TG-HDL phenotypic groups, which were overlooked in the larger meta-GWAS and not revealed by competing approaches, associations that we replicated in two independent cohorts. Moreover, we demonstrated the increased power of GUESS over alternative multi-phenotype approaches, both Bayesian and non-Bayesian, in a simulation study that mimics real-case scenarios. We showed that our parallel implementation based on Graphics Processing Units outperforms alternative multi-phenotype methods. Beyond multivariate modelling of multi-phenotypes, our Bayesian model employs a flexible hierarchical prior structure for genetic effects that adapts to any correlation structure of the predictors and increases the power to identify

  4. Improvement of calculation method for temperature coefficient of HTTR by neutronics calculation code based on diffusion theory. Analysis for temperature coefficient by SRAC code system

    International Nuclear Information System (INIS)

    Goto, Minoru; Takamatsu, Kuniyoshi

    2007-03-01

The HTTR temperature coefficients required for core dynamics calculations had previously been calculated from HTTR core calculation results obtained with a diffusion code and corrected using core calculation results from the Monte Carlo code MVP. This calculation method for the temperature coefficients was considered to have some issues to be improved, so the method was improved to obtain temperature coefficients for which corrections by the Monte Carlo code are not required. Specifically, the lattice model used for the temperature coefficient calculations was revised from the point of view of the neutron spectrum obtained in lattice calculations. The HTTR core calculations were performed by the diffusion code with group constants generated by lattice calculations with the improved lattice model; both the core calculations and the lattice calculations were performed with the SRAC code system. The HTTR core dynamics calculation was then performed with the temperature coefficients obtained from the core calculation results. In consequence, the core dynamics calculation result showed good agreement with the experimental data, and valid temperature coefficients could be calculated by the diffusion code alone, without corrections by the Monte Carlo code. (author)

  5. Calculation of neutron importance function in fissionable assemblies using Monte Carlo method

    International Nuclear Information System (INIS)

    Feghhi, S.A.H.; Shahriari, M.; Afarideh, H.

    2007-01-01

The purpose of the present work is to develop an efficient solution method for the calculation of the neutron importance function in fissionable assemblies for all criticality conditions, based on Monte Carlo calculations. The neutron importance function plays an important role in perturbation theory and reactor dynamic calculations. Usually this function can be determined by calculating the adjoint flux while solving the adjoint-weighted transport equation based on deterministic methods; however, in complex geometries these calculations are very complicated. In this article, considering the capabilities of the MCNP code in solving problems with complex geometries and its closeness to physical concepts, a comprehensive method based on the physical concept of neutron importance has been introduced for calculating the neutron importance function in sub-critical, critical and super-critical conditions. For this purpose a computer program has been developed. The results of the method have been benchmarked against ANISN code calculations in 1- and 2-group modes for simple geometries, and their correctness has been confirmed for all three criticality conditions. Finally, the efficiency of the method for complex geometries has been shown by the calculation of neutron importance in the Miniature Neutron Source Reactor (MNSR) research reactor

  6. The analysis of RPV fast neutron flux calculation for PWR with three-dimensional SN method

    International Nuclear Information System (INIS)

    Yang Shouhai; Chen Yixue; Wang Weijin; Shi Shengchun; Lu Daogang

    2011-01-01

The discrete ordinates (SN) method is one of the most widely used methods for reactor pressure vessel (RPV) design. With the rapid growth of computer CPU speed and memory capacity and the maturing of three-dimensional discrete-ordinates codes, the 3-D SN method is now ready to be used in engineering design for nuclear facilities. This work was done specifically for a PWR model: starting from the results of a 3-D core neutron transport calculation, the 3-D RPV fast neutron flux distribution obtained by the 3-D SN method was compared with the distributions obtained by 1-D and 2-D SN methods and by the 3-D Monte Carlo (MC) method. In this paper, the application of the three-dimensional SN method to calculating the RPV fast neutron flux distribution for a pressurized water reactor (PWR) is presented and discussed. (authors)

  7. Iterative resonance self-shielding methods using resonance integral table in heterogeneous transport lattice calculations

    International Nuclear Information System (INIS)

    Hong, Ser Gi; Kim, Kang-Seog

    2011-01-01

This paper describes iteration methods that use resonance integral tables to estimate the effective resonance cross sections in heterogeneous transport lattice calculations. Basically, these methods have been devised to reduce the effort of converting resonance integral tables into the subgroup data used in the physical subgroup method. Since these methods use resonance integral tables directly rather than subgroup data, they do not include the error of converting resonance integrals into subgroup data. The effective resonance cross sections are estimated iteratively for each resonance nuclide through heterogeneous fixed-source calculations over the whole problem domain to obtain the background cross sections. These methods have been implemented in the transport lattice code KARMA, which uses the method of characteristics (MOC) to solve the transport equation. The computational results show that these iteration methods are quite promising in practical transport lattice calculations.

  8. GBOOST: a GPU-based tool for detecting gene-gene interactions in genome-wide case control studies.

    Science.gov (United States)

    Yung, Ling Sing; Yang, Can; Wan, Xiang; Yu, Weichuan

    2011-05-01

Collecting millions of genetic variations is feasible with advanced genotyping technology. With a huge amount of genetic variation data in hand, developing efficient algorithms to carry out gene-gene interaction analysis in a timely manner has become one of the key problems in genome-wide association studies (GWAS). Boolean operation-based screening and testing (BOOST), a recent work in GWAS, completes gene-gene interaction analysis in 2.5 days on a desktop computer. Compared with central processing units (CPUs), graphics processing units (GPUs) are highly parallel hardware and provide massive computing resources. We are, therefore, motivated to use GPUs to further speed up the analysis of gene-gene interactions. We implement the BOOST method based on a GPU framework and name it GBOOST. GBOOST achieves a 40-fold speedup compared with BOOST. It completes the analysis of the Wellcome Trust Case Control Consortium Type 2 Diabetes (WTCCC T2D) genome data within 1.34 h on a desktop computer equipped with an Nvidia GeForce GTX 285 display card. GBOOST code is available at http://bioinformatics.ust.hk/BOOST.html#GBOOST.

  9. Development of GPU Based Parallel Computing Module for Solving Pressure Equation in the CUPID Component Thermo-Fluid Analysis Code

    International Nuclear Information System (INIS)

    Lee, Jin Pyo; Joo, Han Gyu

    2010-01-01

In the thermo-fluid analysis code named CUPID, a linear system of pressure equations must be solved in each iteration step. The time spent repeatedly solving the linear system can be quite significant, because large sparse matrices of rank greater than 50,000 are involved and the diagonal dominance of the system hardly holds. Therefore, parallelization of the linear system solver is essential to reduce the computing time. Meanwhile, Graphics Processing Units (GPUs) have been developed as highly parallel, multi-core processors to meet the global demand for high quality 3D graphics. If a suitable interface is provided, parallelization using GPUs becomes available for engineering computing. NVIDIA provides a Software Development Kit (SDK) named CUDA (Compute Unified Device Architecture) so that code developers can manage GPUs for parallelization using the C language. In this research, we implement parallel routines for the linear system solver using CUDA, and examine the performance of the parallelization. In the next section, we will describe the method of CUDA parallelization for the CUPID code, and then the performance of the CUDA parallelization will be discussed
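
    For orientation, the same kind of solve can be expressed from Python with CuPy so that the arrays and arithmetic live on the GPU; the matrix-free 1-D Poisson operator below is merely a placeholder for the CUPID pressure matrix, and the snippet assumes a working CuPy installation:

```python
import cupy as cp

# GPU-side Conjugate Gradient sketch with CuPy arrays; matvec applies a
# 1-D Poisson operator as a stand-in for the real pressure matrix.
n = 2000
b = cp.ones(n)

def matvec(x):
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

x = cp.zeros(n)
r = b - matvec(x)
p = r.copy()
rs = r @ r
for _ in range(2 * n):
    Ap = matvec(p)
    alpha = rs / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if cp.sqrt(rs_new) < 1e-8:
        break
    p = r + (rs_new / rs) * p
    rs = rs_new

print(float(cp.linalg.norm(matvec(x) - b)))  # residual after the GPU solve
```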

  10. TU-FG-BRB-07: GPU-Based Prompt Gamma Ray Imaging From Boron Neutron Capture Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S; Suh, T; Yoon, D; Jung, J; Shin, H; Kim, M [The catholic university of Korea, Seoul (Korea, Republic of)

    2016-06-15

Purpose: The purpose of this research is to perform fast reconstruction of a prompt gamma ray image using graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: Image reconstruction using the GPU was 196 times faster than the conventional reconstruction using the CPU. For the four BURs, the area under the curve values from the ROC analysis were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusion: The tomographic image from the prompt gamma ray events of the BNCT simulation was acquired using GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of prompt gamma ray reconstruction using GPU computation for BNCT simulations.

  11. Cell homogenization methods for pin-by-pin core calculations tested in slab geometry

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Kitamura, Yasunori; Yamane, Yoshihiro

    2004-01-01

In this paper, the performance of spatial homogenization methods for fuel and non-fuel cells is compared in slab geometry in order to facilitate pin-by-pin core calculations. Since the spatial homogenization methods were mainly developed for fuel assemblies, a systematic study of their performance for cell-level homogenization has not been carried out. The importance of cell-level homogenization is increasing, since pin-by-pin mesh core calculation in actual three-dimensional geometry, which is a less approximate approach than the current advanced nodal methods, is becoming feasible. Four homogenization methods were investigated in this paper: flux-volume weighting, the generalized equivalence theory, the superhomogenization (SPH) method, and the nonlinear iteration method. The last one, the nonlinear iteration method, was tested as a homogenization method for the first time. The calculations were carried out for simplified colorset assembly configurations of a PWR, simulated by slab geometries, and homogenization performance was evaluated through comparison with reference cell-heterogeneous calculations. The results revealed that the generalized equivalence theory showed the best performance. Though the nonlinear iteration method can significantly reduce homogenization error, its performance was not as good as that of the generalized equivalence theory. Through comparison of the results obtained by the generalized equivalence theory and the superhomogenization method, an important byproduct was obtained: a deficiency of the current superhomogenization method, which could be improved by incorporating a 'cell-level discontinuity factor between assemblies', was clarified

  12. A New Power Calculation Method for Single-Phase Grid-Connected Systems

    DEFF Research Database (Denmark)

    Yang, Yongheng; Blaabjerg, Frede

    2013-01-01

    A new method to calculate average active power and reactive power for single-phase systems is proposed in this paper. It can be used in different applications where the output active power and reactive power need to be calculated accurately and quickly. For example, a grid-connected photovoltaic system in low voltage ride through operation mode requires a power feedback for the power control loop. Commonly, a Discrete Fourier Transform (DFT) based power calculation method can be adopted in such systems. However, the DFT method introduces at least a one-cycle time delay. The new power calculation method, which is based on the adaptive filtering technique, can achieve a faster response. The performance of the proposed method is verified by experiments and demonstrated in a 1 kW single-phase grid-connected system operating under different conditions. Experimental results show the effectiveness of the proposed method.
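
    The paper's adaptive filter is not reproduced in the record, so the sketch below instead shows the quantity being computed: average P and Q for a single-phase system obtained from orthogonal signal generation (here via a Hilbert transform, an assumption, not the paper's filter); the waveforms v and i are assumed to span an integer number of fundamental cycles.

        import numpy as np
        from scipy.signal import hilbert

        def single_phase_pq(v, i):
            # v, i: sampled voltage and current over whole fundamental cycles
            v_q = np.imag(hilbert(v))  # 90-degree shifted voltage
            i_q = np.imag(hilbert(i))  # 90-degree shifted current
            p = 0.5 * np.mean(v * i + v_q * i_q)  # average active power
            q = 0.5 * np.mean(v_q * i - v * i_q)  # average reactive power
            return p, q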

  13. Calculation methods for SPF for heat pump systems for comparison, system choice and dimensioning

    Energy Technology Data Exchange (ETDEWEB)

    Nordman, Roger; Andersson, Kajsa; Axell, Monica; Lindahl, Markus

    2010-09-15

    In this project, results from field measurements of heat pumps have been collected and summarised, and existing calculation methods have been compared and summarised. Analyses have been made of how the field measurements compare to existing calculation models for the heat pump Seasonal Performance Factor (SPF), and of what the deviations may depend on. Recommendations for new calculation models are proposed, which include combined systems (e.g. solar - HP), capacity-controlled heat pumps and combined DHW and heating operation.

  14. A grey diffusion acceleration method for time-dependent radiative transfer calculations: analysis and application

    International Nuclear Information System (INIS)

    Nowak, P.F.

    1993-01-01

    A grey diffusion acceleration (GDA) method is presented and is shown by Fourier analysis and test calculations to be effective in accelerating radiative transfer calculations. The spectral radius is bounded by 0.9 for the continuous equations, but is significantly smaller for the discretized equations, especially in the optically thick regimes characteristic of radiation transport problems. The GDA method is more efficient than the multigroup diffusion synthetic acceleration (DSA) method because its slightly higher iteration count is more than offset by the much lower cost per iteration. A wide range of test calculations confirms the efficiency of GDA compared to multifrequency DSA. (orig.)

  15. Assessment of New Calculation Method for Toxicological Sums-of-Fractions for Hanford Tank Farm Wastes

    International Nuclear Information System (INIS)

    Mahoney, Lenna A.

    2006-01-01

    The toxicological source terms used for potential accident assessment in the Hanford Tank Farms DSA are based on toxicological sums-of-fractions (SOFs) calculated from the Best Basis Inventory (BBI) of May 2002, using a method that depended on thermodynamic equilibrium calculations of the compositions of liquid and solid phases. The present report describes a simplified SOF-calculation method that is to be used in future toxicological updates and assessments, and compares its results (for the 2002 BBI) to those of the old method.

  16. A fast dose calculation method based on table lookup for IMRT optimization

    International Nuclear Information System (INIS)

    Wu Qiuwen; Djajaputra, David; Lauterbach, Marc; Wu Yan; Mohan, Radhe

    2003-01-01

    This note describes a fast dose calculation method that can be used to speed up the optimization process in intensity-modulated radiotherapy (IMRT). Most iterative optimization algorithms in IMRT require a large number of dose calculations to achieve convergence and therefore the total amount of time needed for the IMRT planning can be substantially reduced by using a faster dose calculation method. The method that is described in this note relies on an accurate dose calculation engine that is used to calculate an approximate dose kernel for each beam used in the treatment plan. Once the kernel is computed and saved, subsequent dose calculations can be done rapidly by looking up this kernel. Inaccuracies due to the approximate nature of the kernel in this method can be reduced by performing scheduled kernel updates. This fast dose calculation method can be performed more than two orders of magnitude faster than the typical superposition/convolution methods and therefore is suitable for applications in which speed is critical, e.g., in an IMRT optimization that requires a simulated annealing optimization algorithm or in a practical IMRT beam-angle optimization system. (note)
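
    A minimal sketch of the table-lookup idea follows, with a hypothetical accurate_engine callable standing in for the accurate dose engine: the kernels are computed once per beamlet, and dose evaluation inside the optimization loop then reduces to sparse matrix-vector products.

        import numpy as np
        from scipy import sparse

        def precompute_kernel(accurate_engine, beam, n_beamlets, n_voxels):
            # One accurate calculation per unit-weight beamlet, stored as columns.
            cols = [accurate_engine(beam, j) for j in range(n_beamlets)]
            return sparse.csc_matrix(np.column_stack(cols))

        def fast_dose(kernels, weights):
            # Lookup inside the optimization loop: D = sum_b K_b @ w_b
            return sum(K @ w for K, w in zip(kernels, weights))

    Scheduled kernel updates would then amount to re-running precompute_kernel with the current plan parameters.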

  17. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    Directory of Open Access Journals (Sweden)

    Shan Yang

    2016-01-01

    Power flow calculation and short circuit calculation are the basis of theoretical research for distribution networks with inverter-based distributed generation. The similarity of the equivalent model for inverter-based distributed generation during normal and fault conditions of the distribution network, and the differences between power flow and short circuit calculation, are analyzed in this paper. An integrated power flow and short circuit calculation method for distribution networks with inverter-based distributed generation is then proposed. The proposed method represents the inverter-based distributed generation as an equivalent Iθ bus, which makes it suitable for calculating the power flow of a distribution network with current-limited inverter-based distributed generation. The low voltage ride through capability of inverter-based distributed generation is considered as well. Finally, tests of power flow and short circuit current calculation are performed on a 33-bus distribution network. The results of the proposed method are contrasted with those of the traditional method and the simulation method, which verifies the effectiveness of the integrated method suggested in this paper.

  18. Calculation of neutron importance function in fissionable assemblies using Monte Carlo method

    International Nuclear Information System (INIS)

    Feghhi, S. A. H.; Afarideh, H.; Shahriari, M.

    2007-01-01

    The purpose of the present work is to develop an efficient solution method to calculate the neutron importance function in fissionable assemblies for all criticality conditions, using the Monte Carlo method. The neutron importance function plays an important role in perturbation theory and reactor dynamics calculations. Usually this function is determined by calculating the adjoint flux by solving the adjoint transport equation with deterministic methods; in complex geometries, however, these calculations are very difficult. In this article, considering the capabilities of the MCNP code in solving problems with complex geometries and its closeness to physical concepts, a comprehensive method based on the physical concept of neutron importance is introduced for calculating the neutron importance function in sub-critical, critical and supercritical conditions. For this purpose a computer program has been developed. The results of the method have been benchmarked against ANISN code calculations in 1- and 2-group modes for simple geometries, and their correctness has been confirmed for all three criticality conditions. Finally, the efficiency of the method for complex geometries is demonstrated by calculating the neutron importance in the MNSR research reactor

  19. Polarizable Embedded RI-CC2 Method for Two-Photon Absorption Calculations

    DEFF Research Database (Denmark)

    Hršak, Dalibor; Khah, Alireza Marefat; Christiansen, Ove

    2015-01-01

    We present a novel polarizable embedded resolution-of-identity coupled cluster singles and approximate doubles (PERI-CC2) method for calculation of two-photon absorption (TPA) spectra of large molecular systems. The method was benchmarked for three types of systems: a water-solvated molecule of formamide, a uracil molecule in aqueous solution, and a set of mutants of the channelrhodopsin (ChR) protein. The first test case shows that the PERI-CC2 method is in excellent agreement with the PE-CC2 method and in good agreement with the PE-CCSD method. The uracil test case indicates that the effects of hydrogen bonding on the TPA of a chromophore with the nearest environment are well-described with the PERI-CC2 method. Finally, the ChR calculation shows that the PERI-CC2 method is well-suited and efficient for calculations on proteins with medium-sized chromophores.

  20. The ion exchange and its connection with the industry II. Calculation methods for installations

    International Nuclear Information System (INIS)

    Uriarte Hueda, A.; Lopez Perez, B.; Gutierrez Jodra, L.

    1960-01-01

    Calculation methods for ion exchange installations, based on kinetic considerations and on similarity with other unit operations, are presented. Factors to be obtained experimentally, as well as difficulties that may occur in their determination, are also given. The calculation procedures most commonly used in industry are included and explained through the numerical solution of a water demineralization problem. (Author) 22 refs

  1. A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants

    Science.gov (United States)

    Cooper, Paul D.

    2010-01-01

    A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for the ground and excited electronic states.
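
    The sketch below reproduces the same multiple linear regression in Python rather than Excel, on synthetic band positions generated from assumed constants (the printed coefficients simply recover them); fitting measured positions of a vibronic progression would work identically.

        import numpy as np

        # Assumed, illustrative constants: nu(v') = nu_el + w_e*(v'+1/2) - wx_e*(v'+1/2)**2
        nu_el, w_e, wx_e = 15770.0, 132.1, 1.05
        v = np.arange(15, 45)                # upper-state vibrational quantum numbers
        x = v + 0.5
        nu = nu_el + w_e * x - wx_e * x**2   # synthetic band positions, cm^-1

        A = np.column_stack([np.ones_like(x), x, x**2])  # design matrix, as in LINEST
        coef, *_ = np.linalg.lstsq(A, nu, rcond=None)    # multiple linear regression
        print(coef)                          # -> [nu_el, w_e, -wx_e] recovered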

  2. A modified Gaussian integration method for thermal reaction rate calculation in U- and Pu-isotopes

    International Nuclear Information System (INIS)

    Bosevski, T.; Fredin, B.

    1966-01-01

    In advanced multi-group cell calculations a large amount of data is very often necessary; hence the data administration becomes elaborate and the spectrum calculation time consuming. We think it is possible to reduce the necessary data by using an effective reaction rate integration method well suited to U- and Pu-absorption (author)

  3. Implementation of a new calculation method of fuel depletion in the CITHAM code

    International Nuclear Information System (INIS)

    Alvarenga, M.A.B.

    1985-01-01

    The accuracy of the linear approximation method used in the CITHAM code to solve the depletion equations is evaluated, and the results are compared with a benchmark problem. The convenience of the depletion chain treatment before criticality calculations is analysed. The depletion calculation was modified using the technique of linear combination of linear chains. (M.C.K.)

  4. Numerical calculation of acoustic radiation from band-vibrating structures via FEM/FAQP method

    Directory of Open Access Journals (Sweden)

    GAO Honglin

    2017-08-01

    The Finite Element Method (FEM) combined with the Frequency Averaged Quadratic Pressure (FAQP) method is used to calculate the acoustic radiation of structures excited in a frequency band. The surface particle velocity of stiffened cylindrical shells under frequency band excitation is calculated using finite element software; the normal vibration velocity is derived from the surface particle velocity to calculate the frequency-averaged energy sources (frequency-averaged intensity, pressure and velocity), and the FAQP method is used to calculate the average sound pressure level within the bandwidth. The average sound pressure levels are then compared with results obtained using finite element and boundary element software. The results show that FEM combined with FAQP is more suitable for high frequencies and can be used to calculate the average sound pressure level in the 1/3 octave band with good stability, presenting an alternative to frequency-by-frequency calculation and frequency averaging. The FEM/FAQP method can be used as a prediction method for calculating acoustic radiation while taking the randomness of vibration at medium and high frequencies into consideration.

  5. A modified method of calculating the lateral build-up ratio for small electron fields

    International Nuclear Information System (INIS)

    Tyner, E; McCavana, P; McClean, B

    2006-01-01

    This note outlines an improved method of calculating dose per monitor unit values for small electron fields using Khan's lateral build-up ratio (LBR). The modified method obtains the LBR directly from the ratio of measured, surface-normalized, electron beam percentage depth dose curves. The LBR calculated using this modified method more accurately accounts for the change in lateral scatter with decreasing field size. The LBR is used along with Khan's dose per monitor unit formula to calculate dose per monitor unit values for a set of small fields. These calculated values agree with measured values to within 3.5% for all circular fields and electron energies examined. The modified method was further tested using a small triangular field, where a maximum difference of 4.8% was found. (note)

  6. A least squares calculational method: application to e±-H elastic scattering

    International Nuclear Information System (INIS)

    Das, J.N.; Chakraborty, S.

    1989-01-01

    The least squares calculational method proposed by Das has been applied to e±-H elastic scattering problems at intermediate energies. Some important conclusions are drawn on the basis of the calculation. (author). 7 refs., 2 tabs.

  7. Reactor theory and power reactors. 1. Calculational methods for reactors. 2. Reactor kinetics

    International Nuclear Information System (INIS)

    Henry, A.F.

    1980-01-01

    Various methods for the calculation of neutron flux in power reactors are discussed. Some mathematical models used to describe transients in nuclear reactors, and techniques for solving the relevant reactor kinetics equations, are also presented

  8. Improved method of generating bit reversed numbers for calculating fast fourier transform

    Digital Repository Service at National Institute of Oceanography (India)

    Suresh, T.

    Fast Fourier Transform (FFT) is an important tool required for signal processing in defence applications. This paper reports an improved method for generating the bit-reversed numbers needed in calculating the radix-2 FFT. The refined algorithm takes...
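
    The record is truncated before the algorithm is described, so the sketch below shows one common efficient approach (building the table for length 2n from the table for length n) rather than the paper's specific refinement.

        def bit_reversed_indices(n):
            # Bit-reversal permutation for a radix-2 FFT of length n (n = 2**m),
            # built by doubling: table(2n) = [2*r for r in table(n)] + [2*r + 1 ...]
            rev = [0]
            while len(rev) < n:
                rev = [2 * r for r in rev] + [2 * r + 1 for r in rev]
            return rev

        # Example: reorder input before an iterative in-place radix-2 FFT.
        data = list(range(8))
        print([data[j] for j in bit_reversed_indices(8)])  # [0, 4, 2, 6, 1, 5, 3, 7]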

  9. The adaptation of methods in multilayer optics for the calculation of specular neutron reflection

    International Nuclear Information System (INIS)

    Penfold, J.

    1988-10-01

    The adaptation of standard methods in multilayer optics to the calculation of specular neutron reflection is described. Their application is illustrated with examples which include a glass optical flat and a deuterated Langmuir-Blodgett film. (author)

  10. Research of coincidence method for calculation model of the specific detector

    Energy Technology Data Exchange (ETDEWEB)

    Guangchun, Hu; Suping, Liu; Jian, Gong [China Academy of Engineering Physics, Mianyang (China). Inst. of Nuclear Physics and Chemistry]

    2003-07-01

    The physical dimensions of a specific detector are normally known, but some dimensions related to detector performance, such as the well diameter, well depth and dead region, are proprietary to the manufacturer. A uniformly distributed surface source and a sampling method for isotropic source particle emission were established with the Monte Carlo method, and the gamma-ray response spectrum for a 152Eu surface source was calculated. An experiment was performed under the same conditions. The calculated and experimental results are compared using a relative efficiency coincidence method and a spectral similarity coincidence method. Based on this comparison, the detector model is revised repeatedly to determine the calculation model of the detector and to calculate the detector efficiency and spectra. (authors)

  11. Innovative methods for calculation of freeway travel time using limited data: executive summary report.

    Science.gov (United States)

    2008-08-01

    ODOT's policy for Dynamic Message Sign utilization requires travel time(s) to be displayed as a default message. The current method of calculating travel time involves a workstation operator estimating the travel time based upon observati...

  12. Research on neutron noise analysis stochastic simulation method for α calculation

    International Nuclear Information System (INIS)

    Zhong Bin; Shen Huayun; She Ruogu; Zhu Shengdong; Xiao Gang

    2014-01-01

    The prompt decay constant α has significant application in the physical design and safety analysis of nuclear facilities. To overcome the difficulty of calculating the α value with the Monte Carlo method, and to improve the precision, a new method based on neutron noise analysis technology is presented. This method employs stochastic simulation together with the theory of neutron noise analysis. Firstly, the evolution of the stochastic neutron population is simulated by a discrete-event Monte Carlo method based on the theory of generalized semi-Markov processes, and the neutron noise in detectors is obtained from the neutron signal. Secondly, neutron noise analysis methods such as the Rossi-α method, the Feynman-α method, the zero-probability method, and the cross-correlation method are used to calculate the α value. All of the parameters used in the neutron noise analysis methods are calculated with an auto-adaptive algorithm. The α values from these methods agree with each other, with a largest relative deviation of 7.9%, which proves the feasibility of the α calculation method based on neutron noise analysis and stochastic simulation. (authors)
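
    As an illustration of one of the listed techniques, the sketch below computes the Feynman-α variance-to-mean statistic from a binned count train, together with the standard point-kinetics model it is fitted to; the gate widths, bin size and initial guesses are assumptions, not the paper's parameters.

        import numpy as np
        from scipy.optimize import curve_fit

        def feynman_y(counts, base_dt, gate_multiples):
            # counts: detector counts in consecutive bins of width base_dt
            T, Y = [], []
            for k in gate_multiples:
                n = (len(counts) // k) * k
                gated = counts[:n].reshape(-1, k).sum(axis=1)
                Y.append(gated.var() / gated.mean() - 1.0)  # variance-to-mean - 1
                T.append(k * base_dt)
            return np.array(T), np.array(Y)

        def y_model(T, y_inf, alpha):
            # Point-kinetics form: Y(T) = Y_inf * (1 - (1 - exp(-alpha*T)) / (alpha*T))
            return y_inf * (1.0 - (1.0 - np.exp(-alpha * T)) / (alpha * T))

        # alpha is then obtained by fitting y_model to the measured (T, Y) pairs:
        # popt, _ = curve_fit(y_model, T, Y, p0=[1.0, 100.0])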

  13. Calculation of isotopic mass and energy production by a matrix operator method

    International Nuclear Information System (INIS)

    Lee, C.E.

    1976-08-01

    The Volterra method of the multiplicative integral is used to determine the isotopic density, mass, and energy production in linear systems. The solution method, assumptions, and limitations are discussed. The method allows a rapid, accurate calculation of the change in isotopic density, mass, and energy production, independent of the magnitude of the time steps, production or decay rates, or flux levels
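
    A modern equivalent of the matrix-operator solution can be sketched with a matrix exponential: for a linear chain dN/dt = A N, the step N(t) = exp(At) N(0) is exact for any step size, which is the property the abstract emphasizes. The chain and decay constants below are illustrative, not from the report.

        import numpy as np
        from scipy.linalg import expm

        lam1, lam2 = 1.0e-3, 5.0e-4             # decay constants (1/s), assumed
        A = np.array([[-lam1,   0.0, 0.0],      # nuclide 1 -> 2 -> 3 (stable)
                      [ lam1, -lam2, 0.0],
                      [  0.0,  lam2, 0.0]])
        N0 = np.array([1.0e20, 0.0, 0.0])       # initial number densities

        N = expm(A * 3600.0) @ N0               # one exact 1-hour step
        print(N)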

  14. The effective atomic numbers of some biomolecules calculated by two methods: A comparative study

    DEFF Research Database (Denmark)

    Manohara, S.R.; Hanagodimath, S.M.; Gerward, Leif

    2009-01-01

    The effective atomic numbers Z(eff) of some fatty acids and amino acids have been calculated by two numerical methods, a direct method and an interpolation method, in the energy range of 1 keV-20 MeV. The notion of Z(eff) is given a new meaning by using a modern database of photon interaction cross sections.

  15. Peculiarities of cyclotron magnetic system calculation with the finite difference method using two-dimensional approximation

    International Nuclear Information System (INIS)

    Shtromberger, N.L.

    1989-01-01

    The legitimacy of applying two-dimensional approximations to the design of a cyclotron magnetic system is discussed. In all the calculations the finite difference method is used, and the linearization method, followed by the conjugate gradient method, is used to solve the set of finite-difference equations. 3 refs.; 5 figs.

  16. Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services.

    Science.gov (United States)

    Rajabi, A; Dabiri, A

    2012-01-01

    Activity Based Costing (ABC) is one of the new methods that began appearing as a costing methodology in the 1990's. It calculates cost price by determining the usage of resources. In this study, the ABC method was used to calculate the cost price of remedial services in hospitals. To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalization. Second, activity centers were defined by the activity analysis method. Third, the costs of the administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of services from the activity centers by the cost objectives, the cost price of medical services was calculated. The cost price from the ABC method differs significantly from that of the tariff method. In addition, the high share of indirect costs in the hospital indicates that resource capacities are not used properly. The cost price of remedial services is not properly calculated with the tariff method when compared with the ABC method: ABC calculates cost price through suitable mechanisms, whereas the tariff method is based on a fixed price. In addition, ABC provides useful information about the amount and composition of the cost price of services.

  17. Calculation of one-loop anomalous dimensions by means of the background field method

    International Nuclear Information System (INIS)

    Morozov, A.Yu.

    1983-01-01

    The knowledge of propagators in background fields makes the calculation of anomalous dimensions (AD) straightforward and brief. The paper illustrates this statement by calculating the AD of many spin-zero and spin-one QCD operators up to and including dimension eight. The method presented does not simplify calculations in the case of four-quark operators, and therefore these are not discussed. Together with the calculational difficulties arising for operators with derivatives, this limits the capabilities of the whole approach and leads to the incompleteness of some mixing matrices found in the article

  18. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Randriantsizafy, R D; Ramanandraibe, M J [Madagascar Institut National des Sciences et Techniques Nucleaires, Antananarivo (Madagascar); Raboanary, R [Institute of Astro and High-Energy Physics Madagascar, University of Antananarivo, Antananarivo (Madagascar)]

    2007-07-01

    Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. The treatment time calculation for a prescribed dose is done manually. A Monte-Carlo method Python library written at the Madagascar INSTN is used experimentally to calculate the dose distribution on the tumour and around it. A first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution in the CT scan picture of individual patients, for better accuracy in the treatment time calculation for a prescribed dose.

  19. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    International Nuclear Information System (INIS)

    Randriantsizafy, R.D.; Ramanandraibe, M.J.; Raboanary, R.

    2007-01-01

    Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. The treatment time calculation for a prescribed dose is done manually. A Monte-Carlo method Python library written at the Madagascar INSTN is used experimentally to calculate the dose distribution on the tumour and around it. A first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution in the CT scan picture of individual patients, for better accuracy in the treatment time calculation for a prescribed dose.
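
    The INSTN library itself is not reproduced in the record; the sketch below is only a crude first-collision Monte Carlo estimate of the relative dose fall-off around an isotropic point source in water. The attenuation coefficient is an approximate value for 662 keV photons, and scatter and build-up are ignored.

        import numpy as np

        rng = np.random.default_rng(0)
        mu = 0.086                        # 1/cm, approx. attenuation of 662 keV in water
        edges = np.linspace(0.0, 10.0, 21)          # spherical shell boundaries, cm

        r = rng.exponential(1.0 / mu, 1_000_000)    # distance to first interaction
        hist, _ = np.histogram(r, bins=edges)
        shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
        rel_dose = hist / shell_vol                 # interactions per unit volume
        print(rel_dose / rel_dose[0])               # normalized radial profile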

  20. Comparison of the accuracy of three angiographic methods for calculating left ventricular volume measurement

    International Nuclear Information System (INIS)

    Hu Lin; Cui Wei; Shi Hanwen; Tian Yingping; Wang Weigang; Feng Yanguang; Huang Xueyan; Liu Zhisheng

    2003-01-01

    Objective: To compare the relative accuracy of three methods of measuring left ventricular volume by X-ray ventriculography: the single-plane area-length method, the biplane area-length method, and the single-plane Simpson's method. Methods: Left ventricular casts were obtained within 24 hours after death from 12 persons who died from non-cardiac causes. The true left ventricular cast volume was measured by water displacement. The calculated volume of the casts was obtained with the 3 angiographic methods. Results: The actual average volume of the left ventricular casts was (61.17±26.49) ml. The average calculated left ventricular volume was (97.50±35.56) ml with the single-plane area-length method, (90.51±36.33) ml with the biplane area-length method, and (65.00±23.63) ml with the single-plane Simpson's method. The volumes calculated with the single-plane and biplane area-length methods were significantly larger than the actual volumes (P<0.05), and significantly larger than those calculated with the single-plane Simpson's method (P<0.05). The over-estimation of left ventricular volume by the single-plane area-length method (36.34±17.98) ml and the biplane area-length method (29.34±15.59) ml was more pronounced than that by the single-plane Simpson's method (3.83±8.48) ml. Linear regression analysis showed close correlations between the left ventricular volumes calculated with all three methods and the true volume (all r>0.98). Conclusion: The single-plane Simpson's method is more accurate than the single-plane and biplane area-length methods for left ventricular volume measurement; however, both area-length methods could still be used in clinical practice, especially in those imaging modality...
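
    The three estimators compared in the study follow standard angiographic formulas, sketched below; conventions for axes and units vary between laboratories, so treat these as illustrative forms rather than the study's exact implementation.

        import numpy as np

        def vol_area_length_single(A, L):
            # Single-plane area-length: V = 8*A**2 / (3*pi*L), A in cm^2, L in cm
            return 8.0 * A**2 / (3.0 * np.pi * L)

        def vol_area_length_biplane(A1, A2, L):
            # Biplane area-length: V = 8*A1*A2 / (3*pi*L), L = longest long axis
            return 8.0 * A1 * A2 / (3.0 * np.pi * L)

        def vol_simpson(areas, h):
            # Method of discs (Simpson): sum of slice areas times slice thickness h
            return float(np.sum(areas) * h)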

  1. A study of the literature on nodal methods in reactor physics calculations

    International Nuclear Information System (INIS)

    Van de Wetering, T.F.H.

    1993-01-01

    During the last few decades several calculation methods have been developed for the three-dimensional analysis of a reactor core. A literature survey was carried out to gain insight into the starting points and mode of operation of the advanced nodal methods. These methods are applied in reactor core analyses of large nuclear power reactors because of their high computing speed. The so-called Nodal Expansion Method is described in detail

  2. Improved stiffness confinement method within the coarse mesh finite difference framework for efficient spatial kinetics calculation

    International Nuclear Information System (INIS)

    Park, Beom Woo; Joo, Han Gyu

    2015-01-01

    Highlights: • The stiffness confinement method is combined with multigroup CMFD with an SENM nodal kernel. • Systematic methods for determining the shape and amplitude frequencies are established. • Eigenvalue problems instead of fixed source problems are solved in the transient calculation. • It is demonstrated that much larger time step sizes can be used with the SCM-CMFD method. - Abstract: An improved Stiffness Confinement Method (SCM) is formulated within the framework of the coarse mesh finite difference (CMFD) formulation for efficient multigroup spatial kinetics calculation. The algorithm for searching for the amplitude frequency that makes the dynamic eigenvalue unity is developed in a systematic way, along with the methods for determining the shape and precursor frequencies. A nodal calculation scheme is established within the CMFD framework to incorporate the cross section changes due to thermal feedback and dynamic frequency update. A conditional nodal update scheme is employed such that the transient calculation is performed mostly with the CMFD formulation and the CMFD parameters are conditionally updated by intermittent nodal calculations. A quadratic representation of the amplitude frequency is introduced as another improvement. The performance of the improved SCM within the CMFD framework is assessed by comparing the solution accuracy and computing times for the NEACRP control rod ejection benchmark problems with those obtained with the Crank-Nicolson method with exponential transform (CNET). It is demonstrated that the improved SCM is beneficial for large time step size calculations, with enhanced stability and accuracy

  3. Bending Moment Calculations for Piles Based on the Finite Element Method

    Directory of Open Access Journals (Sweden)

    Yu-xin Jie

    2013-01-01

    Using the finite element analysis program ABAQUS, a series of calculations on a cantilever beam, pile, and sheet pile wall were made to investigate bending moment computational methods. The analyses demonstrated that shear locking is not significant for a passive pile embedded in soil; therefore, higher-order elements are not always necessary in the computation. The number of grids across the pile section is important for the bending moment calculated from stress, and less significant for that calculated from displacement. Although computing the bending moment from displacement requires fewer grids across the pile section, it sometimes results in fluctuation of the results. For displacement calculation, a pile row can be suitably represented by an equivalent sheet pile wall, whereas the resulting bending moments may differ. Calculated bending moments may differ greatly with different grid partitions and computational methods; therefore, a comparison of results is necessary when performing the analysis.

  4. The calculations of small molecular conformation energy differences by density functional method

    Science.gov (United States)

    Topol, I. A.; Burt, S. K.

    1993-03-01

    The differences in the conformational energies for the gauche (G) and trans (T) conformers of 1,2-difluoroethane and for the myo- and scyllo-conformers of inositol have been calculated by the local density functional method (LDF approximation) with geometry optimization, using different sets of calculation parameters. It is shown that, in contrast to Hartree-Fock methods, density functional calculations reproduce the correct sign and value of the gauche effect for 1,2-difluoroethane and the energy difference between the two conformers of inositol. The results of a normal vibrational analysis for 1,2-difluoroethane show that harmonic frequencies calculated in the LDF approximation agree with experimental data with the accuracy typical of scaled large-basis-set Hartree-Fock calculations.

  5. Transport calculation of medium-energy protons and neutrons by Monte Carlo method

    International Nuclear Information System (INIS)

    Ban, Syuuichi; Hirayama, Hideo; Katoh, Kazuaki.

    1978-09-01

    A Monte Carlo transport code, ARIES, has been developed for protons and neutrons at medium energy (25-500 MeV). Nuclear data provided by R.G. Alsmiller, Jr. were used for the calculation. To simulate the cascade development in the medium, each generation was represented by a single weighted particle, with the average number of emitted particles used as the weight. Neutron fluxes were stored by the collision density method. The cutoff energy was set to 25 MeV. Neutrons below the cutoff were stored to be used as the source for the low energy neutron transport calculation based on the discrete ordinates method. Transport calculations were then performed for both low energy neutrons (thermal - 25 MeV) and secondary gamma-rays. Energy spectra of emitted neutrons were calculated and compared with published experimental and calculated results. The agreement was good for incident particles with energies between 100 and 500 MeV. (author)

  6. Pellet by pellet neutron flux calculations coupled with nodal expansion method

    International Nuclear Information System (INIS)

    Aldo, Dall'Osso

    2003-01-01

    We present a technique whose aim is to replace the 2-dimensional pin-by-pin de-homogenization currently done in reactor core calculations with the nodal expansion method (NEM) by a 3-dimensional finite difference diffusion calculation. This fine calculation is performed as a zoom within each node, taking the results of the NEM calculation as boundary conditions. The fine-mesh size is of the order of a fuel pellet. The coupling between the fine and NEM calculations is realised by an albedo-like boundary condition. Some examples are presented showing the fine neutron flux shape near control rods or assembly grids. Other fine flux behaviour, such as the thermal flux rise in the fuel near the reflector, is emphasised. In general the results show the interest of the method in conditions where the separability of the radial and axial directions is not granted. (author)

  7. Nonlinear optimization method of ship floating condition calculation in wave based on vector

    Science.gov (United States)

    Ding, Ning; Yu, Jian-xing

    2014-08-01

    The ship floating condition in regular waves is calculated. New equations controlling any ship's floating condition are proposed by use of vector operations. The resulting problem is a nonlinear optimization problem, which is solved using the penalty function method with constant coefficients, and the solution process is accelerated by bisection. During the solution process, the ship's displacement and centre of buoyancy are calculated by integration of the ship surface according to the waterline. The ship surface is described using an accumulative chord length theory in order to determine the displacement, the centre of buoyancy and the waterline. The draught forming the waterline at each station is found by calculating the intersection of the ship surface and the wave surface. The results of an example indicate that this method is exact and efficient. It can calculate the ship floating condition in regular waves as well as simplify the calculation and improve the computational efficiency and the precision of the results.

  8. Structural system reliability calculation using a probabilistic fault tree analysis method

    Science.gov (United States)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

    The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.

  9. A method of paralleling computer calculation for two-dimensional kinetic plasma model

    International Nuclear Information System (INIS)

    Brazhnik, V.A.; Demchenko, V.V.; Dem'yanov, V.G.; D'yakov, V.E.; Ol'shanskij, V.V.; Panchenko, V.I.

    1987-01-01

    A method for parallel computer calculation and OSIRIS program complex realizing it and designed for numerical plasma simulation by the macroparticle method are described. The calculation can be carried out either with one or simultaneously with two computers BESM-6, that is provided by some package of interacting programs functioning in every computer. Program interaction in every computer is based on event techniques realized in OS DISPAK. Parallel computer calculation with two BESM-6 computers allows to accelerate the computation 1.5 times

  10. Lagrange polynomial interpolation method applied in the calculation of the J(ξ,β) function

    Energy Technology Data Exchange (ETDEWEB)

    Fraga, Vinicius Munhoz; Palma, Daniel Artur Pinheiro [Centro Federal de Educacao Tecnologica de Quimica de Nilopolis, RJ (Brazil)]. E-mails: munhoz.vf@gmail.com; dpalma@cefeteq.br; Martinez, Aquilino Senra [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br

    2008-07-01

    The explicit dependence of the Doppler broadening function creates difficulties in obtaining an analytical expression for the J function. The objective of this paper is to present a method for the quick and accurate calculation of the J function, based on recent advances in the calculation of the Doppler broadening function and on a systematic analysis of its integrand. The proposed methodology, of a semi-analytical nature, uses the Lagrange polynomial interpolation method and the Frobenius formulation in the calculation of the Doppler broadening function. The results have proven satisfactory from the standpoint of accuracy and processing time. (author)

  11. Lagrange polynomial interpolation method applied in the calculation of the J(ξ,β) function

    International Nuclear Information System (INIS)

    Fraga, Vinicius Munhoz; Palma, Daniel Artur Pinheiro; Martinez, Aquilino Senra

    2008-01-01

    The explicit dependence of the Doppler broadening function creates difficulties in obtaining an analytical expression for the J function. The objective of this paper is to present a method for the quick and accurate calculation of the J function, based on recent advances in the calculation of the Doppler broadening function and on a systematic analysis of its integrand. The proposed methodology, of a semi-analytical nature, uses the Lagrange polynomial interpolation method and the Frobenius formulation in the calculation of the Doppler broadening function. The results have proven satisfactory from the standpoint of accuracy and processing time. (author)
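
    For reference, a direct implementation of Lagrange polynomial interpolation is sketched below; in the paper's setting the tabulated values would be samples of the Doppler broadening function, evaluated cheaply inside the quadrature that defines J(ξ,β). The node choice and test function here are illustrative only.

        import numpy as np

        def lagrange_eval(x_nodes, y_nodes, x):
            # Evaluate the interpolating polynomial through (x_nodes, y_nodes) at x
            # by direct summation of the Lagrange basis polynomials.
            x_nodes = np.asarray(x_nodes, dtype=float)
            x = np.asarray(x, dtype=float)
            total = np.zeros_like(x)
            for j, yj in enumerate(y_nodes):
                basis = np.ones_like(x)
                for m, xm in enumerate(x_nodes):
                    if m != j:
                        basis *= (x - xm) / (x_nodes[j] - xm)
                total += yj * basis
            return total

        nodes = np.linspace(0.0, 4.0, 5)
        print(lagrange_eval(nodes, np.exp(-nodes**2), 1.3))  # interpolated value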

  12. Calculation of large ion densities under HVdc transmission lines by the finite difference method

    International Nuclear Information System (INIS)

    Suda, Tomotaka; Sunaga, Yoshitaka

    1995-01-01

    A calculation method for large ion densities (charged aerosols) under HVdc transmission lines was developed, considering both the charging mechanism of aerosols by small ions and the drifting process due to wind. Large ion densities calculated by this method agreed well with those measured under the Shiobara HVdc test line, on lateral profiles at ground level, up to about 70 m downwind from the line. Measured values decreased more quickly than calculated ones farther downwind from the line. Considering the effect of point discharge from ground cover (earth corona) improved the agreement in the farther downwind region

  13. Calculation of mixed mode stress intensity factors using an alternating method

    International Nuclear Information System (INIS)

    Sakai, Takayuki

    1999-01-01

    In this study, the mixed mode stress intensity factors (K I and K II) of a square plate with a notch were calculated using a finite element alternating method. The obtained results were compared with those from a finite element method, and it was shown that the finite element alternating method can accurately estimate mixed mode stress intensity factors. Then, using this finite element alternating method, mixed mode stress intensity factors were calculated while changing the size and position of the notch, and simplified equations were proposed. (author)

  14. A parallel orbital-updating based plane-wave basis method for electronic structure calculations

    International Nuclear Information System (INIS)

    Pan, Yan; Dai, Xiaoying; Gironcoli, Stefano de; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui

    2017-01-01

    Highlights: • Three parallel orbital-updating based plane-wave basis methods for electronic structure calculations are proposed. • These new methods avoid generating large-scale eigenvalue problems and thus reduce the computational cost. • These new methods allow for two-level parallelization, which is particularly interesting for large scale parallelization. • Numerical experiments show that these new methods are reliable and efficient for large scale calculations on modern supercomputers. - Abstract: Motivated by the recently proposed parallel orbital-updating approach in the real space method, we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to the traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large scale calculations on modern supercomputers.

  15. A method for calculating Bayesian uncertainties on internal doses resulting from complex occupational exposures

    International Nuclear Information System (INIS)

    Puncher, M.; Birchall, A.; Bull, R. K.

    2012-01-01

    Estimating uncertainties on doses from bioassay data is of interest in epidemiology studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework to calculate these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented with the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov Chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent), but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes and doses were calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using the WeLMoS method with the CIR and the MCMC method. The distributions are in good agreement: posterior means and Q(0.025) and Q(0.975) quantiles are typically within 20%. Furthermore, the WeLMoS method using the CIR converges quickly: a typical case history takes around 10-20 min on a fast workstation, whereas the MCMC method took around 12 hr. The advantages and disadvantages of the method are discussed. (authors)

  16. Resampling Approach for Determination of the Method for Reference Interval Calculation in Clinical Laboratory Practice

    Science.gov (United States)

    Pavlov, Igor Y.; Wilson, Andrew R.; Delgado, Julio C.

    2010-01-01

    Reference intervals (RI) play a key role in the clinical interpretation of laboratory test results. Numerous articles are devoted to analyzing and discussing various methods of RI determination. The two most widely used approaches are the parametric method, which assumes data normality, and a nonparametric, rank-based procedure. The decision about which method to use is usually made arbitrarily. The goal of this study was to demonstrate that using a resampling approach for the comparison of RI determination techniques could help researchers select the right procedure. Three methods of RI calculation (parametric, transformed parametric, and quantile-based bootstrapping) were applied to multiple random samples drawn from 81 values of complement factor B observations and from a computer-simulated normally distributed population. It was shown that differences in RI between legitimate methods could be up to 20% or even more. The transformed parametric method was found to be the best method for calculating the RI of the non-normally distributed factor B estimations, producing an unbiased RI and the lowest confidence limits and interquartile ranges. For the simulated Gaussian population, parametric calculations were, as expected, the best; quantile-based bootstrapping produced biased results at low sample sizes, and the transformed parametric method generated heavily biased RI. The resampling approach can help compare different RI calculation methods. An algorithm showing a resampling procedure for choosing the appropriate method for RI calculations is included. PMID:20554803
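
    A minimal version of the quantile-based bootstrap discussed above is sketched below; the 90% confidence limits on each RI endpoint are one common convention, not necessarily the authors' exact procedure.

        import numpy as np

        def bootstrap_ri(values, n_boot=5000, seed=1):
            # Resample with replacement, take the 2.5th/97.5th percentiles of each
            # resample, and average them to form the reference interval.
            rng = np.random.default_rng(seed)
            values = np.asarray(values, dtype=float)
            lows, highs = np.empty(n_boot), np.empty(n_boot)
            for b in range(n_boot):
                resample = rng.choice(values, size=values.size, replace=True)
                lows[b], highs[b] = np.percentile(resample, [2.5, 97.5])
            ri = (lows.mean(), highs.mean())              # reference interval
            return ri, np.percentile(lows, [5, 95]), np.percentile(highs, [5, 95])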

  17. Comparison of stress and total energy methods for calculation of elastic properties of semiconductors.

    Science.gov (United States)

    Caro, M A; Schulz, S; O'Reilly, E P

    2013-01-16

    We explore the calculation of the elastic properties of zinc-blende and wurtzite semiconductors using two different approaches: one based on stress and the other on total energy as a function of strain. The calculations are carried out within the framework of density functional theory in the local density approximation, with the plane wave-based package VASP. We use AlN as a test system, with some results also shown for selected other materials (C, Si, GaAs and GaN). Differences are found in convergence rate between the two methods, especially in low symmetry cases, where there is a much slower convergence for total energy calculations with respect to the number of plane waves and k points used. The stress method is observed to be more robust than the total energy method with respect to the residual error in the elastic constants calculated for different strain branches in the systems studied.

  18. Comparing Methods of Calculating Expected Annual Damage in Urban Pluvial Flood Risk Assessments

    DEFF Research Database (Denmark)

    Skovgård Olsen, Anders; Zhou, Qianqian; Linde, Jens Jørgen

    2015-01-01

    Estimating the expected annual damage (EAD) due to flooding in an urban area is of great interest for urban water managers and other stakeholders. It is a strong indicator for a given area, showing how vulnerable it is to flood risk and how much can be gained by implementing e.g. climate change adaptation measures. This study identifies and compares three different methods for estimating the EAD based on unit costs of flooding of urban assets. One of these methods was used in previous studies and calculates the EAD based on a few extreme events by assuming a log-linear relationship between the cost of an event and the corresponding return period. This method is compared to methods that are either more complicated or require more calculations. The choice of method by which the EAD is calculated appears to be of minor importance. At all three case study areas it seems more important that there is a shift...
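
    The simplest of the compared calculations can be sketched as numerical integration of damage over annual exceedance probability p = 1/T; the log-linear variant would instead interpolate cost linearly in ln(T) before integrating. The return periods and damages below are placeholders, not study data.

        import numpy as np

        def expected_annual_damage(return_periods, damages):
            # EAD = integral of damage over exceedance probability p = 1/T,
            # here by the trapezoidal rule over the supplied events only
            # (damage is assumed zero beyond the most frequent event).
            p = 1.0 / np.asarray(return_periods, dtype=float)
            D = np.asarray(damages, dtype=float)
            order = np.argsort(p)
            return np.trapz(D[order], p[order])

        # Hypothetical damages for 10-, 20-, 50- and 100-year events:
        print(expected_annual_damage([10, 20, 50, 100], [1.0e6, 2.5e6, 6.0e6, 9.0e6]))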

  19. TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR

    International Nuclear Information System (INIS)

    Kurosawa, M.

    2005-01-01

    For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but has difficulties with fine geometrical modelling and requires huge computer resources. On the other hand, the MCNP code enables calculation of the neutron flux with a detailed geometry model, but requires a very long sampling time to obtain a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems in each code. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using this flux distribution. The coupling method will be used as the DOT-DOMINO-MORSE code system. The TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from this method were compared with the measured data. (authors)

  20. TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.

    Science.gov (United States)

    Kurosawa, Masahiko

    2005-01-01

    For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but has difficulties with fine geometrical modelling and requires huge computer resources. On the other hand, the MCNP code enables calculation of the neutron flux with a detailed geometry model, but requires a very long sampling time to obtain a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems in each code. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using this flux distribution. The coupling method will be used as the DOT-DOMINO-MORSE code system. The TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from this method were compared with the measured data.