Computation cluster for Monte Carlo calculations
Two computation clusters based on the Rocks Clusters 5.1 Linux distribution, built from Intel Core Duo and Intel Core Quad computers, were set up at the Department of Nuclear Physics and Technology. The clusters were used for Monte Carlo calculations, specifically MCNP calculations applied to nuclear reactor core simulations. Optimization for computation speed was performed at both the hardware and the software level. Hardware parameters such as memory size, network speed, CPU speed, number of processors per calculation and number of processors per computer were tested for their effect on calculation time. For software optimization, different Fortran compilers, MPI implementations and CPU multi-core libraries were tested. Finally, the cluster was used to find the weighting functions of the neutron ex-core detectors of the VVER-440. (authors)
Calculation of profitability in computer tomography (CT)
The comments do not refer to a specific type of whole-body computer tomograph, which made it necessary to base the calculations on mean values for both initial costs and operating costs. The calculation of the receipts was based on the resulting costs, the mean long-term utilization of the unit and a reasonable amortization period. The model calculation indicates that the break-even point is reached with 1,920 annual examinations and a five-year amortization period. (orig.)
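The break-even arithmetic described above can be sketched in a few lines. The cost figures below are invented for illustration (chosen so that the assumed inputs reproduce the quoted 1,920-examination figure) and are not taken from the report.

```python
# Break-even point for a CT unit: annual examinations needed so that
# receipts cover amortized capital cost plus operating costs.
# All numbers below are illustrative assumptions, not data from the report.

def break_even_exams(capital_cost, amortization_years, annual_operating_cost,
                     revenue_per_exam):
    """Annual examinations at which receipts equal annual costs."""
    annual_cost = capital_cost / amortization_years + annual_operating_cost
    return annual_cost / revenue_per_exam

# Hypothetical figures: 2.0 M initial cost, 5-year amortization,
# 0.56 M operating cost per year, 500 per examination.
n = break_even_exams(2_000_000, 5, 560_000, 500.0)
print(round(n))  # 1920 with these assumed inputs
```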
Computer Program Development for House Cost Calculation
Korablev, Maxim
2010-01-01
The main purpose of this project was to develop a program that can calculate the cost of houses. This program should speed up the matching process between a company and its users. The program should also contain a database of building materials. The programming language is PHP, a modern computer language for the development of web programs. The writing of the program code was based on the official PHP manual and some support from a programmer in the company. For making the database of...
Atomic physics: computer calculations and theoretical analysis
Drukarev, E. G.
2004-01-01
It is demonstrated how the theoretical analysis preceding the numerical calculations helps to calculate the energy of the ground state of the helium atom, and makes it possible to avoid qualitative errors in calculating the characteristics of double photoionization.
Computational methods for probability of instability calculations
Wu, Y.-T.; Burnside, O. H.
1990-01-01
This paper summarizes the development of methods and a computer program to compute the probability of instability of a dynamic system that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria, based on the roots of the characteristic equation and on Routh-Hurwitz test functions, are investigated. Computational methods based on system reliability analysis methods and importance sampling concepts are proposed to perform efficient probabilistic analysis. Numerical examples are provided to demonstrate the methods.
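For a single second-order equation both deterministic criteria are easy to state: the roots of the characteristic equation a*s^2 + b*s + c = 0 must have negative real parts, which the Routh-Hurwitz conditions reduce to a sign test on the coefficients. A minimal sketch of the two criteria (not the paper's probabilistic machinery):

```python
import cmath

def roots_second_order(a, b, c):
    """Roots of the characteristic equation a*s^2 + b*s + c = 0."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + disc) / (2 * a), (-b - disc) / (2 * a))

def stable_by_roots(a, b, c):
    """Asymptotically stable iff every root has a negative real part."""
    return all(r.real < 0 for r in roots_second_order(a, b, c))

def stable_by_routh_hurwitz(a, b, c):
    """For a second-order system the Routh-Hurwitz test reduces to
    requiring all coefficients to share the same (positive) sign."""
    return a > 0 and b > 0 and c > 0

# Damped oscillator s^2 + 0.4 s + 4 = 0: both criteria agree.
print(stable_by_roots(1.0, 0.4, 4.0), stable_by_routh_hurwitz(1.0, 0.4, 4.0))
```

In the paper the coefficients are random variables, so the probability of instability is the probability that such a test fails.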
Computing tools for accelerator design calculations
Fischler, M.; Nash, T.
1984-01-01
This note is intended as a brief, summary guide for accelerator designers to the new generation of commercial and special processors that allow great increases in computing cost effectiveness. New thinking is required to take best advantage of these computing opportunities, in particular, when moving from analytical approaches to tracking simulations. In this paper, we outline the relevant considerations.
CACTUS: Calculator and Computer Technology User Service.
Hyde, Hartley
1998-01-01
Presents an activity in which students use computer-based spreadsheets to find out how much grain accumulates on a chess board when a grain of rice is put on the first square, the amount is doubled for each subsequent square, and the whole board is covered. (ASK)
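The doubling exercise can also be checked outside a spreadsheet; a short script reproduces the totals students would build up column by column.

```python
# Grains on a 64-square chess board, doubling each square:
# square k holds 2**(k-1) grains; the total is 2**64 - 1.
grains_per_square = [2 ** k for k in range(64)]
total = sum(grains_per_square)
print(total)                 # 18446744073709551615
print(total == 2 ** 64 - 1)  # the closed form the spreadsheet lets students discover
```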
Computer calculation of Witten's 3-manifold invariant
Freed, Daniel S.; Gompf, Robert E.
1991-10-01
Witten's 2+1 dimensional Chern-Simons theory is exactly solvable. We compute the partition function, a topological invariant of 3-manifolds, on generalized Seifert spaces. Thus we test the path integral using the theory of 3-manifolds. In particular, we compare the exact solution with the asymptotic formula predicted by perturbation theory. We conclude that this path integral works as advertised and gives an effective topological invariant.
Parallel computer calculation of quantum spin lattices
Numerical simulation allows theorists to convince themselves of the validity of the models they use. In particular, by simulating spin lattices one can judge the validity of a conjecture. Simulating a system defined by a large number of degrees of freedom requires highly sophisticated machines. This study deals with modelling the magnetic interactions between the ions of a crystal. Many exact results have been found for spin-1/2 systems, but not for systems of other spins, for which many simulations have been carried out. Interest in simulations has been renewed by Haldane's conjecture, which stipulates the existence of an energy gap between the ground state and the first excited states of a spin-1 lattice. The existence of this gap has been demonstrated experimentally. This report contains the following four chapters: 1. Spin systems; 2. Calculation of eigenvalues; 3. Programming; 4. Parallel calculation
Graphical representation of supersymmetry and computer calculation
A graphical representation of supersymmetry is presented. It clearly expresses the chiral flow appearing in SUSY quantities by representing spinors as directed lines (arrows). The chiral suffixes are expressed by the directions (up, down, left, right) of the arrows, and the SL(2,C) invariants are represented by wedges. This frees us from the messy symbols of spinor suffixes. The method is applied to 5D supersymmetry, and many applications are expected. The result is suitable for coding in a computer program and is expected to be applicable to various SUSY theories (including supergravity) in various dimensions. (author)
Newnes circuit calculations pocket book with computer programs
Davies, Thomas J
2013-01-01
Newnes Circuit Calculations Pocket Book: With Computer Programs presents equations, examples, and problems in circuit calculations. The text includes 300 computer programs that help solve the problems presented. The book is comprised of 20 chapters that tackle different aspects of circuit calculation. The coverage of the text includes dc voltage, dc circuits, and network theorems. The book also covers oscillators, phasors, and transformers. The text will be useful to electrical engineers and other professionals whose work involves electronic circuitry.
Calculations of angular momentum coupling coefficients on a computer code
In this study, Clebsch-Gordan coefficients, 3j symbols, Racah coefficients, and Wigner 6j and 9j symbols were calculated with the computer code COEFF. The program calculates angular momentum coupling coefficients and expresses each as the quotient of two integers multiplied by the square root of the quotient of two integers. The program includes subroutines to encode an integer into its prime factors, to decode the prime factors back into an integer, and to perform basic arithmetic operations on prime-coded numbers, as well as subroutines that calculate the coupling coefficients themselves. COEFF was originally written to run on a VAX; in this study we adapted the code to run on a PC and tested it successfully. The values obtained in this study were compared with those of other computer programs, and good agreement was found between our code and the other programs.
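As a cross-check on such coefficients, a 3j symbol can be evaluated directly from the Racah sum formula. The sketch below uses ordinary floating point rather than COEFF's exact prime-factor-coded arithmetic, so it illustrates the quantity being computed, not the program's method.

```python
from math import factorial, sqrt

def _f(x):
    """Factorial of an argument that must round to a non-negative integer."""
    return factorial(int(round(x)))

def wigner_3j(j1, j2, j3, m1, m2, m3):
    """Wigner 3j symbol via the Racah sum formula (floating point)."""
    if abs(m1 + m2 + m3) > 1e-9:
        return 0.0
    if j3 < abs(j1 - j2) - 1e-9 or j3 > j1 + j2 + 1e-9:
        return 0.0
    delta = sqrt(_f(j1 + j2 - j3) * _f(j1 - j2 + j3) * _f(-j1 + j2 + j3)
                 / _f(j1 + j2 + j3 + 1))
    pref = sqrt(_f(j1 + m1) * _f(j1 - m1) * _f(j2 + m2) * _f(j2 - m2)
                * _f(j3 + m3) * _f(j3 - m3))
    k_min = int(round(max(0, j2 - j3 - m1, j1 - j3 + m2)))
    k_max = int(round(min(j1 + j2 - j3, j1 - m1, j2 + m2)))
    s = sum((-1) ** k / (_f(k) * _f(j1 + j2 - j3 - k) * _f(j1 - m1 - k)
                         * _f(j2 + m2 - k) * _f(j3 - j2 + m1 + k)
                         * _f(j3 - j1 - m2 + k))
            for k in range(k_min, k_max + 1))
    return (-1) ** int(round(j1 - j2 - m3)) * delta * pref * s

# The Clebsch-Gordan coefficient <1/2 1/2; 1/2 1/2 | 1 1> = 1 corresponds
# to a 3j value of -1/sqrt(3).
print(wigner_3j(0.5, 0.5, 1, 0.5, 0.5, -1))
```

COEFF's prime-factor representation avoids the rounding error this floating-point version accumulates for large angular momenta.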
Analytical calculation of heavy quarkonia production processes in computer
Braguta, V. V.; Likhoded, A. K.; Luchinsky, A. V.; Poslavsky, S. V.
2013-01-01
This report is devoted to the analytical calculation, by computer, of heavy quarkonia production processes in modern experiments such as the LHC, B-factories and super-B-factories. The theoretical description of heavy quarkonia is based on the factorization theorem. This theorem leads to a special structure of the production amplitudes, which can be used to develop a computer algorithm that calculates these amplitudes automatically. This report describes this algorithm. As an example ...
CRACKLE: a computer code for CFR fuel management calculations
The CRACKLE computer code is designed to perform rapid fuel management surveys of CFR systems. The code calculates overall features such as reactivity, power distributions and breeding gain, and also calculates the plutonium content and power output of each sub-assembly. A number of alternative options are built into the code to permit different fuel management strategies to be calculated, and to perform more detailed calculations when necessary. A brief description is given of the methods of calculation and the input facilities of CRACKLE, with examples. (author)
Computer program developed for flowsheet calculations and process data reduction
Alfredson, P. G.; Anastasia, L. J.; Knudsen, I. E.; Koppel, L. B.; Vogel, G. J.
1969-01-01
The computer program PACER-65 is used for flowsheet calculations and is easily adapted to process data reduction. Each unit, vessel, meter, and processing operation in the overall flowsheet is represented by a separate subroutine, which the program calls in the order required to complete an overall flowsheet calculation.
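The one-subroutine-per-unit structure can be mimicked in miniature. The unit names, stream fields and numbers below are invented for illustration and are not PACER-65's actual units.

```python
# Toy version of the PACER-65 idea: each process unit is a separate routine,
# and the flowsheet driver calls them in the order required.

def feeder(stream):
    stream["mass_kg_h"] = 100.0          # assumed feed rate
    return stream

def reactor(stream):
    # assume 80 % conversion of feed to product
    stream["product_kg_h"] = 0.8 * stream["mass_kg_h"]
    return stream

def separator(stream):
    # assume 95 % recovery of product
    stream["recovered_kg_h"] = 0.95 * stream["product_kg_h"]
    return stream

def run_flowsheet(units, stream=None):
    """Call each unit subroutine in order, passing the stream along."""
    stream = {} if stream is None else stream
    for unit in units:
        stream = unit(stream)
    return stream

result = run_flowsheet([feeder, reactor, separator])
print(result["recovered_kg_h"])  # 76.0
```

Because each unit is an independent routine, reordering or swapping units changes the flowsheet without touching the driver, which is the property the abstract highlights.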
TRIGLAV - a computer programme for research reactor calculation
Persic, A.; Ravnik, M.; Slavic, S.; Zagar, T. (J.Stefan Institute, Ljubljana (Slovenia))
1999-12-15
TRIGLAV is a new computer programme for burn-up calculations of mixed cores of research reactors. The code is based on a two-dimensional diffusion model, solved by an iterative procedure. The material data used in the model are calculated with the transport programme WIMS. The burn-up increment of the fuel elements is determined from the fission density distribution and the energy produced by the reactor. (orig.)
Computer program for equilibrium calculation and diffusion simulation
(no author listed)
2000-01-01
A computer program called TKCALC (thermodynamic and kinetic calculation) has been successfully developed for the purpose of phase equilibrium calculation and diffusion simulation in ternary substitutional alloy systems. The program was subsequently applied to calculate the isothermal sections of the Fe-Cr-Ni system and to predict the concentration profiles of two γ/γ single-phase diffusion couples in the Ni-Cr-Al system. The results are in excellent agreement with the THERMO-CALC and DICTRA software packages. Detailed mathematical derivation of some important formulae involved is also elaborated
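For a constant diffusivity, the concentration profile across a single-phase diffusion couple has the classical error-function form, which is the kind of profile such a program predicts. The diffusivity and terminal concentrations below are assumed values for illustration, not results from TKCALC.

```python
from math import erf, sqrt

def couple_profile(x_m, t_s, d_m2_s, c_left, c_right):
    """Concentration in an infinite diffusion couple joined at x = 0
    (thin-interface analytical solution, constant diffusivity assumed)."""
    return (c_left + c_right) / 2 + (c_right - c_left) / 2 * erf(
        x_m / (2 * sqrt(d_m2_s * t_s)))

# Hypothetical couple: D = 1e-14 m^2/s, 100 h anneal, mole fractions 0.10/0.20.
t = 100 * 3600.0
for x in (-50e-6, 0.0, 50e-6):
    print(f"{x * 1e6:+6.1f} um -> {couple_profile(x, t, 1e-14, 0.10, 0.20):.4f}")
```

DICTRA-style simulations solve the multicomponent generalization of this numerically, with composition-dependent mobilities; the constant-D solution is only the limiting check.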
Quantum Computing Approach to Nonrelativistic and Relativistic Molecular Energy Calculations
Veis, Libor; Pittner, Jiří
Hoboken : John Wiley, 2014 - (Kais, S.), s. 107-135 ISBN 978-1-118-49566-7. - (Advances in Chemical Physics. Vol. 154) R&D Projects: GA ČR GA203/08/0626 Institutional support: RVO:61388955 Keywords : full configuration interaction (FCI) calculations * nonrelativistic molecular hamiltonians * quantum computing Subject RIV: CF - Physical ; Theoretical Chemistry
Computer calculation of bacterial survival during industrial poultry scalding
Computer simulation was used to model survival of bacteria during poultry scalding under common industrial conditions. Bacterial survival was calculated in a single-tank single-pass scalder with and without counterflow water movement, in a single-tank two-pass scalder, and in a three-tank two-pass ...
Development of a computational methodology for internal dose calculations
Yoriyaz, H
2000-01-01
A new approach for calculating internal dose estimates was developed through the use of a more realistic computational model of the human body and a more precise tool for radiation transport simulation. The present technique shows the capability to build a patient-specific phantom from tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods such as the MCNP-4B code. In order to utilize the segmented human anatomy as a computational model for the simulation of radiation transport, an interface program, SCMS, was developed to build the geometric configurations for the phantom from tomographic images. This procedure allows the calculation not only of average dose values but also of the spatial distribution of dose in regions of interest. With the present methodology, absorbed fractions for photons and electrons in various organs of the Zubal segmented phantom were calculated and compared to those reported for the mathematical phanto...
Shieldings for X-ray radiotherapy facilities calculated by computer
This work presents a computer-aided methodology for the calculation of X-ray shielding in radiotherapy facilities. Even today, in Brazil, shielding calculations for X-ray radiotherapy are based on the NCRP-49 recommendation, which establishes the methodology required for elaborating a shielding project. With regard to high energies, where the construction of a labyrinth is necessary, NCRP-49 is not very clear, and studies in this field have resulted in an article that proposes a solution to the problem. A user-friendly program was developed in the Delphi programming language that, through manual data entry of a basic architectural design and some parameters, interprets the geometry and calculates the shielding of the walls, ceiling and floor of an X-ray radiotherapy facility. As the final product, this program provides a graphical screen with all the input data, the calculated shielding and the calculation record. The program can be applied in the practical implementation of shielding projects for radiotherapy facilities and can be used didactically in comparison with NCRP-49.
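The NCRP-49-style primary-barrier calculation that such a program automates reduces, in its simplest form, to a transmission factor converted into a number of tenth-value layers. The sketch below uses assumed workload, use and occupancy factors and an assumed concrete TVL; it simplifies unit conversions and the separate treatment of the first TVL that a full calculation would include.

```python
from math import log10

def barrier_thickness_mm(p_sv_wk, d_m, w_gy_m2_wk, u, t, tvl_mm):
    """Primary-barrier thickness from the NCRP-49-style transmission factor
    B = P d^2 / (W U T), converted into tenth-value layers.
    A single TVL value is assumed; unit conversions are simplified."""
    b = p_sv_wk * d_m ** 2 / (w_gy_m2_wk * u * t)
    n_tvl = max(0.0, log10(1.0 / b))     # barrier needed only if B < 1
    return n_tvl * tvl_mm

# Illustrative numbers: P = 1e-4 Sv/wk, d = 3 m, W = 1000 Gy.m^2/wk,
# U = 1/4, T = 1, assumed concrete TVL = 345 mm.
print(round(barrier_thickness_mm(1e-4, 3.0, 1000.0, 0.25, 1.0, 345.0)), "mm")
```

With these assumed inputs the barrier comes out at roughly 1.9 m of concrete, the order of magnitude typical of a megavoltage primary barrier.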
CRONOS: A modular computational system for neutronic core calculations
The CRONOS code has been designed to provide all the computational means needed for Pressurized Water Reactor calculations, including design, fuel management, follow-up and accident studies. CRONOS allows steady-state, kinetic and transient multigroup calculations of the power distribution, taking thermal-hydraulic feedback effects into account. All this can be done without limitation on any parameter (energy groups, meshes...). The code solves either the diffusion equation or the even-parity transport equation with isotropic scattering and sources. Different geometries are available, such as 1D, 2D or 3D Cartesian geometries, 2D or 3D hexagonal geometries and cylindrical geometries. The numerical method is based on finite differences or finite elements. CRONOS 2 has been written with constant attention to portability; presently it runs on computers as different as the IBM 3090, CRAY 1, CRAY 2, SUN 4, MIPS RS2030 and IBM RS6000. A special data structure is used in order to improve vectorization. CRONOS is based on a modular structure that allows great flexibility of use. It is implemented in the SAPHYR system, which includes the assembly calculation code APOLLO and the thermal-hydraulic core calculation code FLICA IV. A special object-oriented language, named GIBIANE, and a common tool library have been developed to chain the various computation modules of these codes. (author). 11 refs, 1 fig., 5 tabs
Computer and engineering calculations of Brazilian Tokamak-II
Analytical and computer calculations carried out by researchers of the Physics Institute of the University of Sao Paulo (IFUSP) to define the engineering project and construct the TBR-II tokamak are presented. The hydrodynamic behaviour and the parameters determined for magnetic confinement of the plasma were analysed. The computer code was developed using magnetohydrodynamic (MHD) equations, which involve the interactions of the plasma, the magnetic field and the electrical current circulating in more than 20 coils distributed around the toroidal vessel of the plasma. The electromagnetic, thermal and mechanical couplings are also presented. The TBR-II will be fed by two turbo-generators of 15 MW each. (M.C.K.)
Methods and computer codes for nuclear systems calculations
B P Kochurov; A P Knyazev; A Yu Kwaretzkheli
2007-02-01
Some numerical methods for reactor cells, sub-critical systems and 3D models of nuclear reactors are presented. The methods are developed for steady-state and space–time calculations. The computer code TRIFON solves the space–energy problem in (, ) systems of finite height and calculates heterogeneous few-group matrix parameters of reactor cells. These parameters are used as input data in the computer code SHERHAN, which solves the 3D heterogeneous reactor equation for steady states and simulates 3D space–time neutron processes. A modification of TRIFON was developed for the simulation of space–time processes in sub-critical systems with external sources. An option of the SHERHAN code for systems with external sources is under development.
Computationally efficient implementation of combustion chemistry in parallel PDF calculations
In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2fmpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel ISAT
Computer Program for Point Location And Calculation of ERror (PLACER)
Granato, Gregory E.
1999-01-01
A program designed for point location and calculation of error (PLACER) was developed as part of the Quality Assurance Program of the Federal Highway Administration/U.S. Geological Survey (USGS) National Data and Methodology Synthesis (NDAMS) review process. The program provides a standard method to derive study-site locations from site maps in highway-runoff, urban-runoff, and other research reports. This report provides a guide for using PLACER, documents methods used to estimate study-site locations, documents the NDAMS Study-Site Locator Form, and documents the FORTRAN code used to implement the method. PLACER is a simple program that calculates the latitude and longitude coordinates of one or more study sites plotted on a published map and estimates the uncertainty of these calculated coordinates. PLACER calculates the latitude and longitude of each study site by interpolating between the coordinates of known features and the locations of study sites using any consistent, linear, user-defined coordinate system. This program will read data entered from the computer keyboard and(or) from a formatted text file, and will write the results to the computer screen and to a text file. PLACER is readily transferable to different computers and operating systems with few (if any) modifications because it is written in standard FORTRAN. PLACER can be used to calculate study site locations in latitude and longitude, using known map coordinates or features that are identifiable in geographic information data bases such as USGS Geographic Names Information System, which is available on the World Wide Web.
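The core interpolation step is simple to sketch: given two reference features with known map and geographic coordinates, a plotted point's latitude and longitude follow by linear interpolation. The coordinates below are invented for illustration; PLACER itself also estimates the uncertainty of the result.

```python
def interpolate_location(px, py, ref1, ref2):
    """Estimate (lon, lat) of a plotted point by linear interpolation
    between two reference features with known coordinates.
    Each ref is (map_x, map_y, lon, lat) in any consistent linear units."""
    x1, y1, lon1, lat1 = ref1
    x2, y2, lon2, lat2 = ref2
    fx = (px - x1) / (x2 - x1)   # fraction of the way along the x axis
    fy = (py - y1) / (y2 - y1)   # fraction of the way along the y axis
    return lon1 + fx * (lon2 - lon1), lat1 + fy * (lat2 - lat1)

# A point halfway between two map corners lands at the midpoint coordinates.
lon, lat = interpolate_location(5.0, 5.0,
                                (0.0, 0.0, -71.10, 42.30),
                                (10.0, 10.0, -71.00, 42.40))
print(round(lon, 3), round(lat, 3))  # -71.05 42.35
```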
Automatic computed tomography patient dose calculation using header metadata
The present work describes a method that calculates patient dose values in computed tomography (CT) based on metadata contained in DICOM images, in support of patient dose studies. The DICOM metadata is pre-processed to extract the necessary calculation parameters. Vendor-specific DICOM header information is harmonized using vendor translation tables, and unavailable DICOM tags can be completed with a graphical user interface. CT-Expo, an MS Excel application for calculating the radiation dose, is used to calculate the patient doses. All relevant data and calculation results are stored for further analysis in a relational database. Final results are compiled by utilizing data mining tools. This solution was successfully used for the 2009 CT dose study in Luxembourg. National diagnostic reference levels for standard examinations were calculated based on data from each of the country's hospitals. Compared with earlier questionnaire-based surveys, this new automatic system saved both time and resources during data acquisition and evaluation. (authors)
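Once the header parameters are extracted, the simplest dose estimate is the standard DLP-based one. The sketch below is a generic illustration, not the CT-Expo algorithm; the conversion coefficient is a commonly tabulated approximation for an adult chest scan and the header values are invented.

```python
# Sketch of a dose estimate from CT header metadata.
K_CHEST_MSV_PER_MGY_CM = 0.014   # assumed adult-chest conversion coefficient

def effective_dose_msv(header):
    """DLP = CTDIvol * scan length; effective dose E ~ k * DLP."""
    dlp = header["ctdi_vol_mgy"] * header["scan_length_cm"]
    return K_CHEST_MSV_PER_MGY_CM * dlp

# Illustrative values as they might be read from DICOM tags.
dicom_header = {"ctdi_vol_mgy": 10.0, "scan_length_cm": 30.0}
print(round(effective_dose_msv(dicom_header), 2))  # 4.2 mSv
```

A production system such as the one described would harmonize vendor-specific tag names before this step and store each result in the relational database for later data mining.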
Analytical calculations by computer in physics and mathematics
A review of the present status of analytical calculations by computer is given. Some programming systems for analytical computation are considered: SCHOONSCHIP, CLAM, REDUCE-2, SYMBAL, CAMAL and AVTO-ANALITIK, which are or will be implemented at JINR, and MACSYMA, one of the most developed systems. On the basis of the mathematical operations realized in these systems, it is shown that they are appropriate for various problems of theoretical physics and mathematics, for example in quantum field theory, celestial mechanics and general relativity. Some problems solved at JINR with programming systems for analytical computation are described. The review is intended for specialists in different fields of theoretical physics and mathematics
Hamiltonian lattice field theory: Computer calculations using variational methods
A variational method is developed for systematic numerical computation of physical quantities-bound state energies and scattering amplitudes-in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and present numerical results for simple quantum mechanical systems
Hamiltonian lattice field theory: Computer calculations using variational methods
I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems
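The Rayleigh-Ritz upper bound and Temple-type lower bound at the heart of the algorithm can be illustrated on a two-by-two toy Hamiltonian. The matrix and the assumed lower bound on the first excited energy below are invented for illustration.

```python
def matvec(m, v):
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rayleigh_ritz_bounds(h, trial, e1_lower):
    """Upper bound on the ground-state energy from the Rayleigh quotient of
    a trial vector, and a lower bound from Temple's inequality, given
    e1_lower, a known lower bound on the first excited energy (q < e1_lower)."""
    hv = matvec(h, trial)
    norm = dot(trial, trial)
    q = dot(trial, hv) / norm            # Rayleigh quotient <H>
    h2 = dot(hv, hv) / norm              # <H^2> (H symmetric)
    variance = h2 - q * q
    temple_lower = q - variance / (e1_lower - q)
    return temple_lower, q

# 2x2 toy Hamiltonian; exact ground energy is (1 - sqrt(1.04))/2 ~ -0.0099.
h = [[0.0, 0.1], [0.1, 1.0]]
lower, upper = rayleigh_ritz_bounds(h, [1.0, 0.0], 0.9)
print(lower, "<= E0 <=", upper)
```

Improving the trial vector shrinks the variance and tightens both bounds, which is exactly the error control the variational method exploits on the lattice.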
Automated objective thyroid ablation dose calculations using interactive computer program
Aim: Development of an interactive computer program allowing automatic calculation of the optimized dose of I-131 required for effective ablation of remnants of thyroid tissue. Materials and methods: The Standard Thyroid Uptake Neck Phantom (Nucl. Assoc.) was used for measurements of the efficiency of the high-energy (for I-131) and low-energy (for I-123) collimators mounted on the Picker Prism 2000 gamma camera. The efficiency was calculated for a wide range of distances between the patient's neck and the camera head and for different sizes and activities of remnant thyroid tissue. These data were built into the computer memory (Picker Odyssey FX 729) and then used for calculation of the percentage uptake in the neck (regular quality control and maintenance of the gamma camera secures the stability of its performance). On the basis of the uptake on an early and a late image after administration of the radioisotope, its biological and effective half-lives in the patient are calculated, and the dose required to deliver 50 mGy per gram of I-131 radiation to the remaining thyroid tissue is evaluated. Results: The technologist selects the appropriate isotope, enters the patient's dose and the neck-to-collimator distance, then draws the regions of interest around the thyroid remnants on each of the anterior images. No other operator interventions are required. When the regions are assigned, the percentage uptake, biological half-life, effective half-life and required I-131 activity in MBq per gram are calculated automatically. It was found that the efficiency is independent of activity over the range seen clinically. The need for a standard is eliminated, and the automated calculations ensure accuracy. Estimation of the remnant mass and the desired radiation dose is required to complete the dose calculations. The program works for both I-131 (using 1 to 3 day and 5 to 10 day images) and I-123 (using 6 and 24 h images). The program automatically corrects for the exact imaging time. Results are displayed
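The half-life step described above is a two-point exponential fit. A minimal sketch, with invented uptake values (the program's efficiency corrections, mass estimate and final activity calculation are omitted):

```python
from math import log

T_PHYS_I131_H = 8.02 * 24  # physical half-life of I-131, hours

def effective_half_life_h(u_early, t_early_h, u_late, t_late_h):
    """Effective half-life fitted from the uptake on an early and a late
    image, assuming single-exponential clearance between the two points."""
    lam_eff = log(u_early / u_late) / (t_late_h - t_early_h)
    return log(2) / lam_eff

def biological_half_life_h(t_eff_h):
    """1/T_eff = 1/T_phys + 1/T_bio, solved for T_bio."""
    return 1.0 / (1.0 / t_eff_h - 1.0 / T_PHYS_I131_H)

# Illustrative uptakes: 2.0 % at 24 h falling to 1.0 % at 168 h.
t_eff = effective_half_life_h(2.0, 24.0, 1.0, 168.0)
t_bio = biological_half_life_h(t_eff)
print(round(t_eff, 1), "h effective,", round(t_bio, 1), "h biological")
```

The required administered activity then follows from the target absorbed dose, the remnant mass and the fitted effective half-life.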
Computer program 'SOMC2' for spherical optical model calculations
This report is a description of the computer program 'SOMC2', a program for spherical optical model calculations of the nuclear scattering cross sections of neutrons, protons and α particles. In the first section, the formalism and the non-linear least-squares algorithm are presented. Section II is devoted to detailed explanations of all the routines of the program, with a brief explanation of the methods used to obtain not only the fitting parameters but also their uncertainties and correlations. In section III detailed explanations of the input-data cards and of the various outputs are given. Finally some examples of calculations are presented
TRING: a computer program for calculating radionuclide transport in groundwater
The computer program TRING is described which enables the transport of radionuclides in groundwater to be calculated for use in long term radiological assessments using methods described previously. Examples of the areas of application of the program are activity transport in groundwater associated with accidental spillage or leakage of activity, the shutdown of reactors subject to delayed decommissioning, shallow land burial of intermediate level waste and geologic disposal of high level waste. Some examples of the use of the program are given, together with full details to enable users to run the program. (author)
A computer program for calculating effective capture cross section
FORTRAN program CPCS (Computer Program to analyze Capture TOF Spectra) was developed to deduce effective neutron capture cross sections from raw data obtained by a time-of-flight facility at the JAERI Electron Linear Accelerator. The data processing system for capture experiments consists of three stages, i.e. data acquisition, data handling (summing, listing, plotting, etc.), and data analysis (background determination, flux determination, normalization, etc.). In the three stages of processing, three respective computers are used; USC-3, FACOM U-200, and FACOM 230/75. CPCS is included in the stage of data analysis. A feature of this program is that the magnetic disk file is effectively used as INPUT/OUTPUT data storage interconnecting with other programs to determine neutron flux, to average calculated cross sections and to fit data with strength functions. This program is able to handle eight sets of TOF spectra with 8192 channels including channel block option simultaneously. Particular attention is paid to determine a precise background in the wide neutron energy range. (author)
Million atom DFT calculations using coarse graining and petascale computing
Nicholson, Don; Odbadrakh, Kh.; Samolyuk, G. D.; Stoller, R. E.; Zhang, X. G.; Stocks, G. M.
2014-03-01
Researchers performing classical Molecular Dynamics (MD) on defect structures often find it necessary to use millions of atoms in their models. It would be useful to perform density functional calculations on these large configurations in order to observe electron-based properties such as local charge and spin and the Hellmann-Feynman forces on the atoms. The great number of atoms usually requires that a subset be ``carved'' from the configuration and terminated in a less than satisfactory manner, e.g. free space or inappropriate periodic boundary conditions. Coarse graining based on the Locally Self-consistent Multiple Scattering method (LSMS) and petascale computing can circumvent this problem by treating the whole system but dividing the atoms into two groups. In Coarse Grained LSMS (CG-LSMS) one group of atoms has its charge and scattering determined prescriptively based on neighboring atoms, while the remaining group of atoms has its charge and scattering determined according to DFT as implemented in the LSMS. The method will be demonstrated for a one-million-atom model of a displacement cascade in Fe, for which 24,130 atoms are treated with full DFT and the remaining atoms are treated prescriptively. Work supported as part of the Center for Defect Physics, an Energy Frontier Research Center funded by the U.S. DOE, Office of Science, Basic Energy Sciences; this work used the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory, DOE Office of Science.
Computational models for probabilistic neutronic calculation in TADSEA
The Very High Temperature Reactor is one of the main candidates for the next generation of nuclear power plants. In pebble bed reactors, the fuel is contained within graphite pebbles in the form of TRISO particles, which form a randomly packed bed inside a graphite-walled cylindrical cavity. In previous studies, the conceptual design of a Transmutation Advanced Device for Sustainable Energy Applications (TADSEA) was developed. The TADSEA is a pebble-bed ADS cooled by helium and moderated by graphite. In order to simulate the TADSEA correctly, the double heterogeneity of the system must be considered: pebbles randomly located in the core and TRISO particles randomly located within the fuel pebbles. These features are often neglected because they are difficult to model with the MCNP code, the main reason being the limited number of cells and surfaces that can be defined. In this paper a computational tool is presented that provides a new geometrical model of the fuel pebble for neutronic calculations with MCNPX. The heterogeneity of the system is considered, including the randomly located TRISO particles inside the pebble. Several neutronic computational models for TADSEA's fuel pebbles are also compared in order to study heterogeneity effects. On the other hand, the boundary effect given by the intersection between the pebble surface and the TRISO particles could significantly affect the multiplicative properties; a model to study this effect is also presented. (author)
Summaries of recent computer-assisted Feynman diagram calculations
Mark Fischler
2001-08-16
The AIHENP Workshop series has traditionally included cutting edge work on automated computation of Feynman diagrams. The conveners of the Symbolic Problem Solving topic in this ACAT conference felt it would be useful to solicit presentations of brief summaries of the interesting recent calculations. Since this conference was the first in the series to be held in the Western Hemisphere, it was decided that the summaries would be solicited both from attendees and from researchers who could not attend the conference. This would represent a sampling of many of the key calculations being performed. The results were presented at the Poster session; contributions from ten researchers were displayed and posted on the web. Although the poster presentation, which can be viewed at conferences.fnal.gov/acat2000/ placed equal emphasis on results presented at the conference and other contributions, here we primarily discuss the latter, which do not appear in full form in these proceedings. This brief paper can't do full justice to each contribution; interested readers can find details of the work not presented at this conference in references (1), (2), (3), (4), (5), (6), (7).
Comparison of computer code calculations with FEBA test data
The FEBA forced feed reflood experiments included base line tests with unblocked geometry. The experiments consisted of separate effect tests on a full-length 5x5 rod bundle. Experimental cladding temperatures and heat transfer coefficients of FEBA test No. 216 are compared with the analytical data postcalculated utilizing the SSYST-3 computer code. The comparison indicates a satisfactory matching of the peak cladding temperatures, quench times and heat transfer coefficients for nearly all axial positions. This agreement was made possible by the use of an artificially adjusted value of the empirical code input parameter in the heat transfer for the dispersed flow regime. A limited comparison of the test data with calculations using the RELAP4/MOD6 transient analysis code is also included. In this case the input data for the water entrainment fraction and the liquid weighting factor in the heat transfer for the dispersed flow regime were adjusted to match the experimental data. On the other hand, no fitting of the input parameters was made for the COBRA-TF calculations which are included in the data comparison. (orig.)
Computer code for shielding calculations of x-rays rooms
Building an effective barrier against the ionizing radiation present in radiographic rooms requires consideration of many variables. The methodology used to specify the thickness of the primary and secondary barriers of a conventional radiographic room considers the following factors: use factor, occupancy factor, distance between the source and the wall, workload, air kerma, and distance between the patient and the source. With these data it was possible to develop a computer code that identifies and uses these variables in functions obtained through regressions of graphs provided by the NCRP-147 report (Structural Shielding Design for Medical X-Ray Imaging Facilities) to calculate the shielding of the room walls, as well as the walls of the dark room and adjacent areas. With the implemented methodology, the code was validated by comparing its results with a case study provided by the report. The thicknesses obtained comprise different materials such as concrete, lead and glass. After validation, a case study of an arbitrary radiographic room was carried out. The development of the code resulted in a user-friendly tool for planning radiographic rooms that comply with the limits established by CNEN-NN-3:01, published in September 2011. (authors)
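The NCRP-147-style workflow described above can be sketched in two steps: compute the required broad-beam transmission B = P·d²/(W·U·T), then invert a fitted transmission curve for the barrier thickness. The Archer-model inversion below is the standard form used with NCRP-147 fits, but the numeric fitting parameters in the test are purely illustrative assumptions, not values taken from the report's tables:

```python
import math

def required_transmission(P, d, W, U, T):
    """Required broad-beam transmission factor B = P*d^2 / (W*U*T),
    where P is the shielding design goal, d the source-to-wall distance,
    W the workload-normalized kerma, U the use factor, T the occupancy."""
    return P * d**2 / (W * U * T)

def barrier_thickness(B, alpha, beta, gamma):
    """Invert the Archer transmission model
    B(x) = [(1 + beta/alpha)*exp(alpha*gamma*x) - beta/alpha]**(-1/gamma)
    for the thickness x (same length unit as 1/alpha)."""
    return (1.0 / (alpha * gamma)) * math.log(
        (B ** (-gamma) + beta / alpha) / (1.0 + beta / alpha))
```

By construction, B = 1 (no shielding needed) gives a thickness of zero, and smaller transmission factors give thicker barriers.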
Systems for neutronic, thermohydraulic and shielding calculation in personal computers
The MTR-PC (Materials Testing Reactors-Personal Computers) system has been developed by the Nuclear Engineering Division of INVAP S.E. with the aim of providing working conditions integrated with personal computers for design and neutronic, thermohydraulic and shielding analysis for reactors employing plate type fuel. (Author)
Computing NLTE Opacities -- Node Level Parallel
Holladay, Daniel [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-09-11
Presentation. The goal: to produce a robust library capable of computing reasonably accurate opacities in-line, with the assumption of LTE relaxed (non-LTE). Near term: demonstrate acceleration of the non-LTE opacity computation. Far term (if funded): connect to application codes with in-line capability, compute opacities, and study science problems. Use efficient algorithms that expose many levels of parallelism and utilize good memory access patterns for use on advanced architectures. Portability to multiple types of hardware, including multicore processors, manycore processors such as KNL, and GPUs. Easily coupled to radiation hydrodynamics and thermal radiative transfer codes.
GRUCAL, a computer program for calculating macroscopic group constants
Nuclear reactor calculations require material- and composition-dependent, energy-averaged nuclear data to describe the interaction of neutrons with individual isotopes in the material compositions of reactor zones. The code GRUCAL calculates these macroscopic group constants for given compositions from the material-dependent data of the group constant library GRUBA. The instructions for calculating group constants are not fixed in the program but are read at execution time from a separate instruction file. This makes it possible to adapt GRUCAL to various problems or different group constant concepts. (orig.)
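The core operation such a code performs, folding isotopic atom densities with microscopic group data, reduces to Σ_g = Σ_i N_i·σ_{i,g}. A minimal sketch of that fold (function names and the water-like numbers in the test are the author's assumptions, not GRUBA data):

```python
AVOGADRO = 0.6022  # x 1e24 atoms/mol, so densities come out in atoms/(barn*cm)

def number_density(rho_g_cm3, mass_fraction, atomic_mass):
    """Atom density N = rho * w * N_A / A, in atoms/(barn*cm)."""
    return rho_g_cm3 * mass_fraction * AVOGADRO / atomic_mass

def macroscopic_xs(atom_densities, micro_xs_barn):
    """Macroscopic group constant Sigma_g = sum_i N_i * sigma_{i,g} [1/cm]."""
    return sum(n * s for n, s in zip(atom_densities, micro_xs_barn))
```

With densities in atoms/(barn·cm) and microscopic cross sections in barns, the product conveniently comes out in cm⁻¹.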
Computer code for nuclear reactor core thermal reliability calculation
The RASTENAR program for computing the heat-engineering reliability of cores in nuclear reactors operating under stationary conditions is described. The following factors of heat-engineering reliability can be computed: rated critical margin; limiting critical margin; probability of initiation of critical heat removal in a channel (deteriorated heat transfer conditions); probability that no channel will be subject to critical heat removal; and reactor power reserve coefficient. The probability that no channel in the core experiences critical heat removal during operation of the reactor at a fixed power level is taken as the principal quantitative criterion. The structure and limitations of the program are described together with the computation algorithm. The program is written for an M-220 computer.
A computer program is proposed that allows the automatic calculation of control charts for accuracy and precision. The calculated charts enable the analyst to easily check the daily results of a given radioimmunoassay. (Auth.)
Computing energy expenditure from indirect calorimetry data: a calculation exercise
Alferink, S.J.J.; Heetkamp, M.J.W.; Gerrits, W.J.J.
2015-01-01
Energy expenditure (Q) can be accurately derived from the volume of O2 consumed (VO2), and the volume of CO2 (VCO2) and CH4 (VCH4) produced. When the measurements are performed using a respiration chamber, VO2, VCO2 and VCH4 are calculated by the difference between the inflow (l/h) and outflow rates (l/h), plus the change in volume of gas in the chamber between successive measurements. There are many steps involved in the calculation of Q from raw data. These steps are rarely published in ful...
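One common way to carry out the final step, converting the measured gas volumes to heat production, is the Brouwer (1965) equation; whether the exercise above uses exactly these coefficients is an assumption, so treat this as an illustrative sketch:

```python
def brouwer_heat_kj(vo2_l, vco2_l, vch4_l=0.0, urinary_n_g=0.0):
    """Heat production Q [kJ] from gas exchange (Brouwer, 1965):
    Q = 16.18*VO2 + 5.02*VCO2 - 2.17*VCH4 - 5.99*N_urine,
    with gas volumes in litres (STP) and urinary nitrogen in grams."""
    return 16.18 * vo2_l + 5.02 * vco2_l - 2.17 * vch4_l - 5.99 * urinary_n_g
```

The VO2, VCO2 and VCH4 inputs would themselves be derived from the inflow/outflow rates and the chamber volume change described in the abstract.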
Heuristic and computer calculations for the magnitude of metric spaces
Willerton, Simon
2009-01-01
The notion of the magnitude of a compact metric space was considered in arXiv:0908.1582 with Tom Leinster, where the magnitude was calculated for line segments, circles and Cantor sets. In this paper more evidence is presented for a conjectured relationship with a geometric measure theoretic valuation. Firstly, a heuristic is given for deriving this valuation by considering 'large' subspaces of Euclidean space and, secondly, numerical approximations to the magnitude are calculated for squares, disks, cubes, annuli, tori and Sierpinski gaskets. The valuation is seen to be very close to the magnitude for the convex spaces considered and is seen to be 'asymptotically' close for some other spaces.
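For a finite approximating subset {x_1, ..., x_n}, the magnitude used in such numerical experiments is obtained by solving Zw = 1 with Z_ij = exp(−d(x_i, x_j)) and summing the weighting w. A small self-contained sketch (the paper's own numerics for squares, disks, etc. are far more elaborate; the pure-Python solver here is just to avoid external dependencies):

```python
import math

def magnitude(points, dist):
    """Magnitude of a finite metric space: solve Z w = 1 with
    Z_ij = exp(-d(x_i, x_j)); the magnitude is sum(w)."""
    n = len(points)
    Z = [[math.exp(-dist(points[i], points[j])) for j in range(n)]
         for i in range(n)]
    b = [1.0] * n
    # Gaussian elimination with partial pivoting
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(Z[r][c]))
        Z[c], Z[p] = Z[p], Z[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = Z[r][c] / Z[c][c]
            for k in range(c, n):
                Z[r][k] -= f * Z[c][k]
            b[r] -= f * b[c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(Z[r][k] * w[k] for k in range(r + 1, n))
        w[r] = (b[r] - s) / Z[r][r]
    return sum(w)
```

For two points at distance d this reproduces the known closed form 2/(1 + e^(−d)), and widely separated points contribute one each, matching the "effective number of points" intuition.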
Quantum computing applied to calculations of molecular energies
Pittner, Jiří; Veis, L.
2011-01-01
Roč. 241, - (2011). ISSN 0065-7727. [National Meeting and Exposition of the American Chemical Society (ACS) /241./, 27.03.2011-31.03.2011, Anaheim]. Institutional research plan: CEZ:AV0Z40400503. Keywords: molecular energies; quantum computers. Subject RIV: CF - Physical; Theoretical Chemistry
Computer code for double beta decay QRPA based calculations
The computer code developed by our group some years ago for the evaluation of nuclear matrix elements, within the QRPA and PQRPA nuclear structure models, involved in neutrino-nucleus reactions, muon capture and β± processes, is extended to also include nuclear double beta decay.
Calculation of Linear Systems Metric Tensors via Algebraic Computation
Neto, Joao Jose de Farias
2002-01-01
A formula for the Riemannian metric tensor of differentiable manifolds of linear dynamical systems of same McMillan degree is presented in terms of their transfer function matrices. The necessary calculations for its application to ARMA and state space overlapping parametrizations are drafted. The importance of this approach for systems identification and multiple time series analysis and forecasting is explained.
A FORTRAN Computer Program for Q Sort Calculations
Dunlap, William R.
1978-01-01
The Q Sort method is a rank order procedure. A FORTRAN program is described which calculates a total value for any group of cases for the items in the Q Sort, and rank orders the items according to this composite value. (Author/JKS)
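The described procedure, sum each item's values across cases and then rank items by the composite, is short enough to restate as a sketch (in Python rather than FORTRAN; the function and variable names are the author's, not those of the original program):

```python
def q_sort_rank(items, sorts):
    """Sum each item's Q-sort values over all cases, then rank the items
    by composite value (rank 1 = highest total)."""
    totals = {item: sum(case[i] for case in sorts)
              for i, item in enumerate(items)}
    ordered = sorted(items, key=lambda it: -totals[it])
    return totals, {item: r + 1 for r, item in enumerate(ordered)}
```

Here `sorts` is a list of cases, each case giving one rank value per item in the same order as `items`.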
On the pressure calculation for polarizable models in computer simulation.
Kiss, Péter T; Baranyai, András
2012-03-14
We present a short overview of pressure calculation in molecular dynamics or Monte Carlo simulations. The emphasis is given to polarizable models in order to resolve the controversy caused by the paper of M. J. Louwerse and E. J. Baerends [Chem. Phys. Lett. 421, 138 (2006)] about pressure calculation in systems with periodic boundaries. We systematically derive expressions for the pressure and show that despite the lack of explicit pairwise additivity, the pressure formula for polarizable models is identical with that of nonpolarizable ones. However, a strict condition for using this formula is that the induced dipole should be in perfect mechanical equilibrium prior to pressure calculation. The perfect convergence of induced dipoles ensures conservation of energy as well. We demonstrate using more cumbersome but exact methods that the derived expressions for the polarizable model of water provide correct numerical results. We also show that the inaccuracy caused by imperfect convergence of the induced dipoles correlates with the inaccuracy of the calculated pressure. PMID:22423830
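The expression the authors defend, which is identical for polarizable and nonpolarizable models once the induced dipoles are fully converged, is the standard virial form of the pressure. A hedged sketch (the reduced-unit example in the test is the author's choice, not from the paper):

```python
def pressure(n_particles, volume, temperature, virial, k_b=1.380649e-23):
    """Instantaneous pressure from the standard virial expression:
    P = N*k_B*T/V + W/(3V), where W = sum_i r_i . f_i is the virial
    accumulated over all particles in the simulation box."""
    return n_particles * k_b * temperature / volume + virial / (3.0 * volume)
```

The paper's point is then a precondition, not a new formula: the forces entering W must be evaluated with the induced dipoles at mechanical equilibrium.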
On the calculation of dynamic derivatives using computational fluid dynamics
Da Ronch, Andrea
2012-01-01
In this thesis, the exploitation of computational fluid dynamics (CFD) methods for the flight dynamics of manoeuvring aircraft is investigated. It is demonstrated that CFD can now be used in a reasonably routine fashion to generate stability and control databases. Different strategies to create CFD-derived simulation models across the flight envelope are explored, ranging from combined low-fidelity/high-fidelity methods to reduced-order modelling. For the representation of the unsteady aerody...
TRANS-I: A fast calculating computer code for the calculation of reactivity transients
It has been shown in the literature that the adiabatic and quasistatic approximations to space-time neutron kinetics are generally fast and conservative methods for calculating reactivity transients. Nevertheless, when feedback reactivity is considered, these methods predict values of peak flux, energy production and temperature that are too high. It is demonstrated that this deficiency of the adiabatic and quasistatic methods can be removed if the mean fuel temperature is multiplied by a weighting factor to obtain a corrected temperature for calculating the Doppler feedback. The code TRANS-I including this modification is presented. (author)
Ozgun-Koca, S. Ash
2010-01-01
Although growing numbers of secondary school mathematics teachers and students use calculators to study graphs, they mainly rely on paper-and-pencil when manipulating algebraic symbols. However, the Computer Algebra Systems (CAS) on computers or handheld calculators create new possibilities for teaching and learning algebraic manipulation. This…
Computer code for calculating reliability/availability of technical systems
Three computer codes are reviewed, which can be applied to reliability analyses of technical systems. They are based on the fault tree and the laws of probability theory. The codes can be used for both non-repairable and repairable systems. The simulation code REMO 79 and the analytical code RELAV are based on the conception that a failure of system components is immediately detected and repaired. The model of the FUPRO2 code provides for failures to be detected and repaired only in periodic functional tests. Apart from code descriptions experience and far-reaching aspects resulting from modularization of the fault trees are summarized. (author)
Computational benchmark for calculation of silane and siloxane thermochemistry.
Cypryk, Marek; Gostyński, Bartłomiej
2016-01-01
Geometries of model chlorosilanes, R3SiCl, silanols, R3SiOH, and disiloxanes, (R3Si)2O, R = H, Me, as well as the thermochemistry of the reactions involving these species were modeled using 11 common density functionals in combination with five basis sets to examine the accuracy and applicability of various theoretical methods in organosilicon chemistry. As the model reactions, the proton affinities of silanols and siloxanes, hydrolysis of chlorosilanes and condensation of silanols to siloxanes were considered. As the reference values, experimental bonding parameters and reaction enthalpies were used wherever available. Where there are no experimental data, W1 and CBS-QB3 values were used instead. For the gas phase conditions, excellent agreement between theoretical CBS-QB3 and W1 and experimental thermochemical values was observed. All DFT methods also give acceptable values and the precision of various functionals used was comparable. No significant advantage of newer more advanced functionals over 'classical' B3LYP and PBEPBE ones was noted. The accuracy of the results was improved significantly when triple-zeta basis sets were used for energy calculations, instead of double-zeta ones. The accuracy of calculations for the reactions in water solution within the SCRF model was inferior compared to the gas phase. However, by careful estimation of corrections to the ΔHsolv and ΔGsolv of H(+) and HCl, reasonable values of thermodynamic quantities for the discussed reactions can be obtained. PMID:26781663
Parallel computation of automatic differentiation applied to magnetic field calculations
The author presents a parallelization of an accelerator physics application to simulate magnetic field in three dimensions. The problem involves the evaluation of high order derivatives with respect to two variables of a multivariate function. Automatic differentiation software had been used with some success, but the computation time was prohibitive. The implementation runs on several platforms, including a network of workstations using PVM, a MasPar using MPFortran, and a CM-5 using CMFortran. A careful examination of the code led to several optimizations that improved its serial performance by a factor of 8.7. The parallelization produced further improvements, especially on the MasPar with a speedup factor of 620. As a result a problem that took six days on a SPARC 10/41 now runs in minutes on the MasPar, making it feasible for physicists at Lawrence Berkeley Laboratory to simulate larger magnets
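The automatic differentiation underlying such derivative evaluations can be illustrated, in miniature, by first-order forward mode via dual numbers; the accelerator application above used dedicated AD software, high derivative orders and two variables, so this is only a conceptual sketch:

```python
class Dual:
    """Forward-mode AD: carries a value and its derivative together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f'(x) exactly (to machine precision) by seeding der=1."""
    return f(Dual(x, 1.0)).der
```

Every arithmetic operation propagates the derivative alongside the value, which is why AD gives exact derivatives rather than finite-difference approximations.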
A computer code for beam optics calculation--third order approximation
LU Jianqin; LI Jinhai
2006-01-01
To calculate the beam transport in ion optical systems accurately, a beam dynamics computer program of third order approximation has been developed. Many conventional optical elements are incorporated in the program. Particle distributions of uniform or Gaussian type in the (x, y, z) 3D ellipses can be selected by the users. Optimization procedures are provided to make the calculations reasonable and fast. The calculated results can be displayed graphically on the computer monitor.
Lamarcq, J. [Service de Physique Theorique, CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France)]
1998-07-10
Numerical simulation allows theorists to convince themselves of the validity of the models they use. In particular, by simulating spin lattices one can judge the validity of a conjecture. Simulating a system defined by a large number of degrees of freedom requires highly sophisticated machines. This study deals with modelling the magnetic interactions between the ions of a crystal. Many exact results have been found for spin 1/2 systems, but not for systems of other spins, for which many simulations have been carried out. Interest in simulations has been renewed by Haldane's conjecture stipulating the existence of an energy gap between the ground state and the first excited states of a spin 1 lattice. The existence of this gap has been demonstrated experimentally. This report contains the following four chapters: 1. Spin systems; 2. Calculation of eigenvalues; 3. Programming; 4. Parallel calculation. 14 refs., 6 figs.
Oyamatsu, Kazuhiro [Nagoya Univ. (Japan)
1998-03-01
Application programs for personal computers were developed to calculate the decay heat power and delayed neutron activity from fission products. The main programs can be used on any computer, from personal computers to mainframes, because their sources are written in Fortran. These programs have user-friendly interfaces so that they can easily be used not only for research activities but also for educational purposes. (author)
Burnup calculations using the ORIGEN code in the CONKEMO computing system
This article describes the CONKEMO computing system for kinetic multigroup calculations of nuclear reactors and their physical characteristics during burnup. The ORIGEN burnup calculation code has been added to the system. The results of an international benchmark calculation are also presented. (author)
Computer calculations in interstitial seed therapy: I. Radiation treatment planning
In interstitial seed therapy, computers can be used for radiation treatment planning and for dose control after implantation. In interstitial therapy with radioactive seeds there are much greater differences between planning and carrying out radiation treatment than in teletherapy with cobalt-60 or X-rays. Because of the short distance between radioactive sources and tumour tissue, even slight deviations from the planned implantation geometry cause considerable dose deviations. Furthermore, the distribution of seeds in an actual implant is inhomogeneous. During implantation the spatial distribution of seeds cannot be examined exactly, though X-rays are used to control the operation. The afterloading technique of Henschke allows a more exact implantation geometry, but I have no experience of this method. In spite of the technical difficulty of achieving optimum geometry, interstitial therapy still has certain advantages when compared with teletherapy: the dose in the treated volume can be kept smaller than in teletherapy, the radiation can be better concentrated in the tumour volume, the treatment can be restricted to one or two operations, and localized inoperable tumours may be cured more easily. The latter may depend on an optimal treatment time, a relatively high tumour dose and a continuous, exponentially decreasing dose rate during the treatment time. A disadvantage of interstitial therapy is the high personnel dose, which may be reduced by the afterloading technique of Henschke (1956). However, the afterloading method requires much greater personnel and instrumental expense than free implantation of radiogold seeds and causes greater trauma for the patient.
Parallel beam dynamics calculations on high performance computers
Faced with a backlog of nuclear waste and weapons plutonium, as well as an ever-increasing public concern about safety and environmental issues associated with conventional nuclear reactors, many countries are studying new, accelerator-driven technologies that hold the promise of providing safe and effective solutions to these problems. Proposed projects include accelerator transmutation of waste (ATW), accelerator-based conversion of plutonium (ABC), accelerator-driven energy production (ADEP), and accelerator production of tritium (APT). Also, next-generation spallation neutron sources based on similar technology will play a major role in materials science and biological science research. The design of accelerators for these projects will require a major advance in numerical modeling capability. For example, beam dynamics simulations with approximately 100 million particles will be needed to ensure that extremely stringent beam loss requirements (less than a nanoampere per meter) can be met. Compared with typical present-day modeling using 10,000-100,000 particles, this represents an increase of 3-4 orders of magnitude. High performance computing (HPC) platforms make it possible to perform such large scale simulations, which require 10's of GBytes of memory. They also make it possible to perform smaller simulations in a matter of hours that would require months to run on a single processor workstation. This paper will describe how HPC platforms can be used to perform the numerically intensive beam dynamics simulations required for development of these new accelerator-driven technologies
Computational challenges in large nucleosynthesis calculations in stars
Full text: The study of how the elements form in stars requires significant computational effort. The time scales of nuclear reactions in different evolutionary phases of stars change by several orders of magnitude and require the implementation of fully implicit solvers to obtain precise results, since a lack of accuracy can be a severe issue, in particular under explosive conditions such as in supernovae. Another important point to consider is the number of isotopic species that need to be included in the simulations. Neutron capture processes are mainly responsible for producing the abundances of elements heavier than iron. For the slow neutron capture process (i.e., the s process), the typical number of species is about 600, whereas for the explosive rapid neutron capture process (i.e., the r process) the dimension of the matrix that needs to be inverted to solve the nucleosynthesis equations is well above 1000. I aim to present these topics by providing a general overview of the astrophysical scenarios involved and showing meaningful examples to clarify the discussion. (author)
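The need for fully implicit solvers can be seen already in a two-species decay chain: backward Euler remains stable and conserves the total abundance even when the rate constant times the step size is enormous, which is exactly the stiff regime described above. A toy sketch (real networks solve the >1000-dimensional linear system mentioned in the abstract instead of this scalar update):

```python
def backward_euler_decay(y1, y2, lam, dt, steps):
    """Implicit (backward Euler) integration of the stiff chain
    dY1/dt = -lam*Y1, dY2/dt = +lam*Y1.
    Each step solves (1 + lam*dt)*Y1_new = Y1 exactly."""
    for _ in range(steps):
        y1_new = y1 / (1.0 + lam * dt)
        y2 = y2 + dt * lam * y1_new  # feed the decayed material into Y2
        y1 = y1_new
    return y1, y2
```

An explicit scheme with the same step size would oscillate or blow up once lam·dt exceeds ~2, whereas the implicit update stays positive and bounded for any step size.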
Direct Calculation of Protein Fitness Landscapes through Computational Protein Design.
Au, Loretta; Green, David F
2016-01-01
Naturally selected amino-acid sequences or experimentally derived ones are often the basis for understanding how protein three-dimensional conformation and function are determined by primary structure. Such sequences for a protein family comprise only a small fraction of all possible variants, however, representing the fitness landscape with limited scope. Explicitly sampling and characterizing alternative, unexplored protein sequences would directly identify fundamental reasons for sequence robustness (or variability), and we demonstrate that computational methods offer an efficient mechanism toward this end, on a large scale. The dead-end elimination and A(∗) search algorithms were used here to find all low-energy single mutant variants, and corresponding structures of a G-protein heterotrimer, to measure changes in structural stability and binding interactions to define a protein fitness landscape. We established consistency between these algorithms with known biophysical and evolutionary trends for amino-acid substitutions, and could thus recapitulate known protein side-chain interactions and predict novel ones. PMID:26745411
pH and conductivity of sodium phosphate solutions. [Computer calculation
Wright, J.M.; VonNieda, G.E.
1979-03-01
This paper describes a computer program for the calculation of the pH and conductivity of sodium phosphate solutions over the phosphate concentration range of 1 to 10,000 ppm and sodium-to-phosphate molar ratios of approximately 2 to 3. pH can be calculated over the temperature range of 0 to 300°C; conductivities can be calculated over the temperature range of 0 to 50°C. Calculated values of pH and conductivity are compared to measured values and found to be in excellent agreement. Several practical uses for the computer program are discussed.
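The pH part of such a calculation can be sketched at 25 °C as a charge-balance solve over the three phosphate dissociations. The pKa values, the bisection approach and all names below are the author's illustrative assumptions; the original program additionally handles temperature dependence and conductivity, which are not reproduced here:

```python
# Approximate 25 C dissociation constants (illustrative assumed values)
KA1, KA2, KA3, KW = 10**-2.15, 10**-7.20, 10**-12.35, 1e-14

def charge_imbalance(h, na, pt):
    """Charge balance residual: [Na+] + [H+] - [OH-]
    - [H2PO4-] - 2[HPO4 2-] - 3[PO4 3-], for total phosphate pt."""
    d = h**3 + KA1 * h**2 + KA1 * KA2 * h + KA1 * KA2 * KA3
    h2po4 = pt * KA1 * h**2 / d
    hpo4 = pt * KA1 * KA2 * h / d
    po4 = pt * KA1 * KA2 * KA3 / d
    return na + h - KW / h - h2po4 - 2 * hpo4 - 3 * po4

def phosphate_ph(na_molar, pt_molar):
    """Solve the charge balance for pH by bisection on [0, 14]."""
    lo, hi = 0.0, 14.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        # the imbalance decreases monotonically as pH rises
        if charge_imbalance(10**-mid, na_molar, pt_molar) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a Na/P ratio of 1 (NaH2PO4-like) this lands near pH 4.8, and raising the ratio toward 2 pushes the solution alkaline, consistent with the 2-to-3 ratio range the paper covers.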
Radiation therapy calculations using an on-demand virtual cluster via cloud computing
Keyes, Roy W; Arnold, Dorian; Luan, Shuang
2010-01-01
Computer hardware costs are the limiting factor in producing highly accurate radiation dose calculations on convenient time scales. Because of this, large-scale, full Monte Carlo simulations and other resource intensive algorithms are often considered infeasible for clinical settings. The emerging cloud computing paradigm promises to fundamentally alter the economics of such calculations by providing relatively cheap, on-demand, pay-as-you-go computing resources over the Internet. We believe that cloud computing will usher in a new era, in which very large scale calculations will be routinely performed by clinics and researchers using cloud-based resources. In this research, several proof-of-concept radiation therapy calculations were successfully performed on a cloud-based virtual Monte Carlo cluster. Performance evaluations were made of a distributed processing framework developed specifically for this project. The expected 1/n performance was observed with some caveats. The economics of cloud-based virtual...
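The "expected 1/n performance ... with some caveats" can be captured by a simple cost model: wall time falls as serial_time/n plus a fixed per-run overhead (VM provisioning, data staging), so total node-hours billed grow as nodes are added. The overhead figure and function names are hypothetical placeholders, not measurements from this study:

```python
def wall_time(n_nodes, serial_hours, per_run_overhead_hours=0.05):
    """Ideal 1/n Monte Carlo scaling plus a fixed overhead term that
    eventually dominates -- the 'caveat' to perfect speedup."""
    return serial_hours / n_nodes + per_run_overhead_hours

def cost(n_nodes, serial_hours, price_per_node_hour, overhead=0.05):
    """Pay-as-you-go cost: every node is billed for the full wall time."""
    return n_nodes * wall_time(n_nodes, serial_hours, overhead) * price_per_node_hour
```

Monte Carlo dose calculations parallelize this way because independent particle histories can be split across nodes with essentially no communication.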
ANIGAM: a computer code for the automatic calculation of nuclear group data
The computer code ANIGAM consists mainly of the well-known programmes GAM-I and ANISN, as well as a subroutine which reads the THERMOS cross section library and prepares it for ANISN. ANIGAM has been written for the automatic calculation of microscopic and macroscopic cross sections of light water reactor fuel assemblies. In a single computer run, both the cross sections representative of fuel assemblies in reactor core calculations and the cross sections of each cell type of a fuel assembly are calculated. The calculated data are delivered to EXTERMINATOR and CITATION by an auxiliary programme for subsequent diffusion or burnup calculations. This report contains a detailed description of the computer codes and methods used in ANIGAM, a description of the subroutines and of the OVERLAY structure, and an input and output description. (orig.)
Neutron spectra calculation in material in order to compute irradiation damage
This short presentation deals with neutron spectrum calculation methods used to compute the damage formation rate in irradiated structures. Three computation schemes are used in the French C.E.A.: (1) 3-dimensional calculations using the line-of-sight attenuation method (MERCURE IV code), the removal cross sections being obtained from an adjustment on a 1-dimensional transport calculation with the discrete ordinates code ANISN; (2) 2-dimensional calculations using the discrete ordinates method (DOT 3.5 code), with a 20- to 30-group library obtained by collapsing the 100-group library on fluxes computed by ANISN; (3) 3-dimensional calculations using the Monte Carlo method (TRIPOLI system). The cross sections, which originally came from UKNDL 73 and ENDF/B3, are now processed from ENDF/B-IV. (author)
Some questions of using coding theory and analytical calculation methods on computers
Main results of investigations devoted to the application of the theory and practice of error-correcting codes are presented. These results are used to create very fast units for the selection of events registered in multichannel detectors of nuclear particles. Using this theory together with analytical calculations, essentially new combinational devices, for example parallel encoders, have been developed. Questions concerning a new algorithm for the calculation of digital functions by computers and problems of devising universal, dynamically reprogrammable logic modules are also discussed.
Lesniak, Joseph; Behrman, Elizabeth; Zandler, Melvin; Kumar, Preethika
2008-03-01
Very few quantum algorithms are usable today. When calculating molecular energies, a quantum algorithm takes advantage of the quantum nature of both the algorithm and the calculation. A few small molecules have been used to show that this method is possible. The method will be applied to larger molecules and compared to classical computational methods.
SAMDIST: A computer code for calculating statistical distributions for R-matrix resonance parameters
Leal, L.C.; Larson, N.M.
1995-09-01
The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular forms.
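The level-spacing statistics SAMDIST tabulates can be illustrated with a short sketch. The snippet below is an editorial illustration, not SAMMY/SAMDIST code, and the resonance energies are invented for demonstration; it normalizes nearest-neighbor spacings and provides the Wigner surmise commonly compared against such distributions.

```python
import numpy as np

def normalized_spacings(levels):
    """Nearest-neighbor spacings of sorted energy levels, scaled to unit mean."""
    s = np.diff(np.sort(np.asarray(levels, dtype=float)))
    return s / s.mean()

def wigner_surmise(s):
    """GOE nearest-neighbor spacing density P(s) = (pi/2) s exp(-pi s^2 / 4)."""
    return 0.5 * np.pi * s * np.exp(-0.25 * np.pi * s**2)

# Invented resonance energies (eV), for illustration only.
levels = [6.7, 20.9, 36.7, 66.0, 80.7, 103.0, 116.9, 139.0]
s = normalized_spacings(levels)
```

A histogram of `s` against `wigner_surmise` is the usual graphical check for level repulsion; the surmise integrates to one and has unit mean by construction.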
Rasmussen, Claus P.; Krejbjerg, Kristian; Michelsen, Michael Locht; Bjurstrøm, Kersti E.
2006-01-01
Approaches are presented for reducing the computation time spent on flash calculations in compositional, transient simulations. In a conventional flash calculation, the majority of the simulation time is spent on stability analysis, even for systems far into the single-phase region. A criterion has been implemented for deciding when it is justified to bypass the stability analysis. With the implementation of the developed time-saving initiatives, it has been shown for a number of compositional, transient pipeline simulations that a reduction of the computation time spent on flash calculations by...
Tuncay Bayram
2012-04-01
Objective: In this study, we aimed to develop a computer program that calculates the approximate radiation dose received by the embryo/fetus in nuclear medicine applications. Material and Methods: Radiation dose values per MBq of administered activity received by the embryo/fetus in nuclear medicine applications were gathered from the literature for various stages of pregnancy. These values were embedded in the computer code, which was written in the Fortran 90 programming language. Results: The computer program, called nmfdose, covers almost all radiopharmaceuticals used in nuclear medicine. The approximate radiation dose received by the embryo/fetus can be calculated easily in a few steps using this program. Conclusion: Although there are some constraints on using the program in special cases, nmfdose is useful and provides a practical solution for calculating the approximate dose to the embryo/fetus in nuclear medicine applications. (MIRT 2012;21:19-22)
The purpose of this paper is to discuss the theories, techniques and computer codes that are frequently used in numerical reactor criticality and burnup calculations. It is part of an integrated nuclear reactor calculation scheme conducted by the Reactors Department, Inshas Nuclear Research Centre. The crucial part of numerical reactor criticality and burnup calculations is the determination of the neutron flux distribution, which can be obtained in principle as a solution of the Boltzmann transport equation. Numerical methods used for solving transport equations are discussed. Emphasis is placed on numerical techniques based on multigroup diffusion theory, including nodal, modal, and finite-difference techniques. The most commonly known computer codes utilizing these techniques are reviewed. Some of the main computer codes related to numerical reactor criticality and burnup calculations that have already been developed at the Reactors Department are also presented.
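The finite-difference technique named above can be made concrete with a minimal one-group sketch. This is an editorial illustration with arbitrary cross-section values, not one of the Reactors Department codes: it computes the multiplication factor of a bare homogeneous slab by power iteration on the fission source.

```python
import numpy as np

def keff_slab(d, sig_a, nu_sig_f, width, n=200, iterations=400):
    """k-effective of a bare 1-D slab: one-group finite-difference diffusion.

    Zero-flux boundary conditions; n interior mesh points; power iteration
    on the fission source. Units: cm and cm^-1.
    """
    h = width / (n + 1)
    # Loss operator  -D d2/dx2 + Sigma_a  as a tridiagonal matrix.
    a = (np.diag((2.0 * d / h**2 + sig_a) * np.ones(n))
         + np.diag((-d / h**2) * np.ones(n - 1), 1)
         + np.diag((-d / h**2) * np.ones(n - 1), -1))
    phi = np.ones(n)
    k = 1.0
    for _ in range(iterations):
        psi = np.linalg.solve(a, nu_sig_f * phi)  # invert losses against fission source
        k = psi.sum() / phi.sum()                 # dominant-eigenvalue estimate
        phi = psi / np.linalg.norm(psi)
    return k
```

For D = 1 cm, Sigma_a = 0.1 cm^-1, nu*Sigma_f = 0.11 cm^-1 and a 100 cm slab, this converges to the analytic bare-slab value nu*Sigma_f / (Sigma_a + D*(pi/a)^2), about 1.089.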
Calculation reduction method for color computer-generated hologram using color space conversion
Shimobaba, Tomoyoshi; Oikawa, Minoru; Takada, Naoki; Okada, Naohisa; Endo, Yutaka; Hirayama, Ryuji; Ito, Tomoyoshi
2013-01-01
We report a calculation reduction method for color computer-generated holograms (CGHs) using color space conversion. Color CGHs are generally calculated in RGB space. In this paper, we calculate color CGHs in other color spaces, for example YCbCr. In YCbCr space, an RGB image is converted to a luminance component (Y), a blue-difference chroma component (Cb) and a red-difference chroma component (Cr). The human eye readily perceives small differences in the luminance component but not in the chroma components. In this method, therefore, the luminance component is sampled at full resolution while the chroma components are down-sampled. The down-sampling allows us to accelerate the calculation of the color CGHs. We compute diffraction calculations from the components, and then convert the diffracted results in YCbCr space back to RGB space.
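The color-space step at the heart of this method is standard; the sketch below is my own illustration using the ITU-R BT.601 conversion matrix, not the authors' code, and shows the conversion plus the 2x chroma down-sampling that yields the speed-up.

```python
import numpy as np

# ITU-R BT.601 RGB -> YCbCr conversion matrix (full-range form).
_M = np.array([[ 0.299,     0.587,     0.114   ],
               [-0.168736, -0.331264,  0.5     ],
               [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) RGB image in [0, 1] to YCbCr."""
    ycbcr = rgb @ _M.T
    ycbcr[..., 1:] += 0.5          # shift chroma into [0, 1]
    return ycbcr

def ycbcr_to_rgb(ycbcr):
    """Inverse of rgb_to_ycbcr."""
    out = ycbcr.copy()
    out[..., 1:] -= 0.5
    return out @ np.linalg.inv(_M).T

def downsample_chroma(ycbcr):
    """Keep luminance at full resolution; down-sample chroma 2x per axis."""
    return ycbcr[..., 0], ycbcr[::2, ::2, 1], ycbcr[::2, ::2, 2]
```

With 2x down-sampling per axis, three full-resolution diffraction calculations shrink to one full plane plus two quarter-size planes, i.e. roughly half the work.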
Computer codes used in the calculation of high-temperature thermodynamic properties of sodium
Three computer codes - SODIPROP, NAVAPOR, and NASUPER - were written in order to calculate a self-consistent set of thermodynamic properties for saturated, subcooled, and superheated sodium. These calculations incorporate new critical parameters (temperature, pressure, and density) and recently derived single equations for enthalpy and vapor pressure. The following thermodynamic properties have been calculated in these codes: enthalpy, heat capacity, entropy, vapor pressure, heat of vaporization, density, volumetric thermal expansion coefficient, compressibility, and thermal pressure coefficient. In the code SODIPROP, these properties are calculated for saturated and subcooled liquid sodium. Thermodynamic properties of saturated sodium vapor are calculated in the code NAVAPOR. The code NASUPER calculates thermodynamic properties for super-heated sodium vapor only for low (< 1644 K) temperatures. No calculations were made for the supercritical region
Parallel diffusion calculation for the PHAETON on-line multiprocessor computer
The aim of the PHAETON project is the design of an on-line computer to improve the immediate knowledge of the main operating and safety parameters in power plants. A significant stage is the computation of the three-dimensional flux distribution. For cost and safety reasons, a computer based on a parallel microprocessor architecture has been studied. This paper presents a first approach to parallelized three-dimensional diffusion calculation. Computing software has been written and run on a four-processor demonstrator. The realization of the final equipment, currently in progress, is also presented. 8 refs
Efficient Computation of Power, Force, and Torque in BEM Scattering Calculations
Reid, M T Homer
2013-01-01
We present concise, computationally efficient formulas for several quantities of interest -- including absorbed and scattered power, optical force (radiation pressure), and torque -- in scattering calculations performed using the boundary-element method (BEM) [also known as the method of moments (MOM)]. Our formulas compute the quantities of interest directly from the BEM surface currents with no need ever to compute the scattered electromagnetic fields. We derive our new formulas and demonstrate their effectiveness by computing power, force, and torque in a number of example geometries. Free, open-source software implementations of our formulas are available for download online.
The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external dose from air submersion and inhalation doses following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factor supplied in a data library. Doses are reported for one and fifty year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contribution to dose by radionuclide and exposure mode are also printed if requested
A computer program, PABLM, was written to facilitate the calculation of internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. This report contains details of mathematical models used and calculational procedures required to run the computer program. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides in the environment after the releases have ended. Radioactive decay is considered during the release of radionuclides, after they are deposited on the plants or ground, and during holdup of food after harvest. The radiation dose models consider several exposure pathways. Doses may be calculated for either a maximum-exposed individual or for a population group. The doses calculated are accumulated doses from continuous chronic exposure. A first-year committed dose is calculated as well as an integrated dose for a selected number of years. The equations for calculating internal radiation doses are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and MPC's of each radionuclide. The radiation doses from external exposure to contaminated water and soil are calculated using the basic assumption that the contaminated medium is large enough to be considered an infinite volume or plane relative to the range of the emitted radiations. The equations for calculations of the radiation dose from external exposure to shoreline sediments include a correction for the finite width of the contaminated beach
Microcomputers, desk calculators and process computers for use in radiation protection
The goals achievable, or to be pursued, in radiation protection measurement and evaluation by using computers are explained. As there is a large variety of computers available, offering a likewise large variety of performance levels, the use of a computer is justified even for minor measuring and evaluation tasks. The subdivision into microcomputers as an installed part of measuring equipment, measuring and evaluation systems with desk calculators, and measuring and evaluation systems with process computers serves to explain the importance and extent of the measuring or evaluation tasks and the computing devices suitable for the various purposes. The special requirements to be met in order to fulfill the different tasks are discussed, both in terms of hardware and software and in terms of the skill and knowledge of the personnel, and are illustrated by an example showing the usefulness of computers in radiation protection. (orig./HP)
Calculation of Plutonium content in RSG-GAS spent fuel using IAFUEL computer code
The content of the isotopes Pu-239, Pu-240, Pu-241, and Pu-242 has been calculated for MTR-type reactor fuel containing about 250 g of U-235. The calculation was performed in three steps. The first step determines the library from the BOC (Beginning of Cycle) calculation output. The second step determines the core isotope densities, the weight of plutonium for one core, and the isotope densities of one fuel element. The third step calculates the weight of plutonium in grams. All calculations are performed with the IAFUEL computer code. The calculated content of each Pu isotope was: Pu-239, 6.7666 g; Pu-240, 1.4628 g; Pu-241, 0.52951 g; and Pu-242, 0.068952 g.
Development of dose calculation system of brachytherapy with a personal computer
A dose calculation system for brachytherapy was developed with a personal computer. The system was made up of a 16-bit personal computer and a digitizer with a light panel. MS-DOS version 2.1 was used as the operating system, and the programs were written in BASIC (compiler version) and assembler. The characteristics of the system are: (1) low cost; (2) high performance in calculation speed and data transfer; (3) high accuracy of the calculated dose distribution; (4) consideration of the absorption of gamma rays within the sources themselves and their containers. In this paper, the functions of the system and its performance are described in detail. Moreover, we show the results of an estimation of the accuracy of the calculated dose. 10 refs.; 5 figs.; 1 table
GENGTC-JB: a computer program to calculate temperature distribution for cylindrical geometry capsule
In the design of JMTR irradiation capsules containing specimens, a program named GENGTC has generally been used to evaluate temperature distributions in the capsules. The program was originally written by ORNL (U.S.A.) and consisted of very simple calculation methods. Owing to these simple methods, the program is easy to use and has many applications in capsule design. However, when the program was reviewed in light of recent computer capabilities, it was decided to replace the original computing methods with advanced ones and to simplify the complicated data input. The program was therefore upgraded with the aim of improving both the calculations and the input method. The present report describes the revised calculation methods and the input/output guide of the upgraded program. (author)
A symbolic computing environment for doing calculations in quantum field theory
A computational environment, in the form of a set of MapleV R.3 routines for doing symbolic calculations in Quantum Field Theory, is presented. The QFT package's routines extend the standard MapleV computational domain by introducing representations for anticommutative and noncommutative objects, tensors, spinors and gauge fields, as well as related objects and procedures (Dirac matrices, differential operators, functional differentiation w.r.t. indexed fields, the sum rule for repeated indices, etc.). Furthermore, the QFT routines permit the user definition of algebra rules for the commutation/anticommutation of operators, to be taken into account during the calculations. (author)
Radiation damage calculation by NPRIM computer code with JENDL3.3
The Neutron Damage Evaluation Group of the Atomic Energy Society of Japan has started an identification of neutron-induced radiation damage in materials for typical neutron fields. For this study, a computer code, NPRIM, has been developed to eliminate the tedious computational effort previously devoted to the calculation of derived quantities such as dpa and the helium production rate. Neutron cross sections for damage reactions based on JENDL3.3 are given in a 640-group structure. The impact of the JENDL3.3-based cross sections on damage calculation results is described in this paper. (author)
A FORTRAN computer code for calculating flows in multiple-blade-element cascades
Mcfarland, E. R.
1985-01-01
A solution technique has been developed for solving the multiple-blade-element, surface-of-revolution, blade-to-blade flow problem in turbomachinery. The calculation solves approximate flow equations which include the effects of compressibility, radius change, blade-row rotation, and variable stream sheet thickness. An integral equation solution (i.e., panel method) is used to solve the equations. A description of the computer code and computer code input is given in this report.
Off-site dose calculation computer code based on ICRP-60(II) - liquid radioactive effluents -
The computer code for calculating off-site doses (K-DOSE60) was developed based on ICRP-60 and the dose calculation equations of Reg. Guide 1.109. In this paper, the methodology to compute doses from liquid effluents is described. To examine the reliability of the K-DOSE60 code, the results obtained from it for liquid effluents were compared with analytic solutions; the results are in agreement.
Tian, Jiandong; Duan, Zhigang; Ren, Weihong; Han, Zhi; Tang, Yandong
2016-04-01
The spectral power distributions (SPD) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance variation of a scene and common natural illumination phenomena, such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or zenith angles) and under different atmospheric conditions is of interest to physically-based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculation method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be directly applied in computer vision. It can be applied in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of the applications demonstrate that our calculation method has practical value in computer vision. It establishes a bridge between image and physical environmental information, e.g., time, location, and weather conditions. PMID:27137018
FISPRO: a simplified computer program for general fission product formation and decay calculations
This report describes a computer program that solves a general form of the fission product formation and decay equations over given time steps for arbitrary decay chains composed of up to three nuclides. All fission product data and operational history data are input through user-defined input files. The program is very useful in the calculation of fission product activities of specific nuclides for various reactor operational histories and accident consequence calculations
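For a short chain, the formation-and-decay equations the report describes have a well-known closed-form solution (the Bateman equations). The sketch below is my own illustration, not FISPRO itself, with arbitrary decay constants; it solves a three-member chain via eigendecomposition of the decay matrix.

```python
import numpy as np

def decay_chain(decay_consts, n0, t):
    """Nuclide inventories N(t) for a linear chain 1 -> 2 -> ... -> m.

    Solves dN/dt = A N, where A holds -lambda_i on the diagonal and
    lambda_i on the first subdiagonal, via eigendecomposition (valid for
    distinct decay constants).
    """
    lam = np.asarray(decay_consts, dtype=float)
    a = np.diag(-lam)
    for i in range(len(lam) - 1):
        a[i + 1, i] = lam[i]
    w, v = np.linalg.eig(a)
    c = np.linalg.solve(v, n0)               # expand N(0) in eigenvectors
    return np.real((v * np.exp(w * t)) @ c)  # N(t) = V exp(Wt) V^-1 N(0)

# Arbitrary chain: parent -> daughter -> stable, starting from a pure parent.
inventories = decay_chain([0.1, 0.05, 0.0], [100.0, 0.0, 0.0], t=10.0)
```

A constant fission-yield source term, as handled by FISPRO, would add an inhomogeneous term to the same linear system; the homogeneous solution above reproduces the textbook Bateman result exactly.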
Shipboard fires both in the same ship hold and in an adjacent hold aboard a break-bulk cargo ship are simulated with a commercial finite-volume computational fluid mechanics code. The fire models and modeling techniques are described and discussed. Temperatures and heat fluxes to a simulated materials package are calculated and compared to experimental values. The overall accuracy of the calculations is assessed
Computer calculation of dose distributions in radiotherapy. Report of a panel
As in most areas of scientific endeavour, the advent of electronic computers has made a significant impact on the investigation of the physical aspects of radiotherapy. Since the first paper on the subject was published in 1955 the literature has rapidly expanded to include the application of computer techniques to problems of external beam, and intracavitary and interstitial dosimetry. By removing the tedium of lengthy repetitive calculations, the availability of automatic computers has encouraged physicists and radiotherapists to take a fresh look at many fundamental physical problems of radiotherapy. The most important result of the automation of dosage calculations is not simply an increase in the quantity of data but an improvement in the quality of data available as a treatment guide for the therapist. In October 1965 the International Atomic Energy Agency convened a panel in Vienna on the 'Use of Computers for Calculation of Dose Distributions in Radiotherapy' to assess the current status of work, provide guidelines for future research, explore the possibility of international cooperation and make recommendations to the Agency. The panel meeting was attended by 15 participants from seven countries, one observer, and two representatives of the World Health Organization. Participants contributed 20 working papers which served as the bases of discussion. By the nature of the work, computer techniques have been developed by a few advanced centres with access to large computer installations. However, several computer methods are now becoming 'routine' and can be used by institutions without facilities for research. It is hoped that the report of the Panel will provide a comprehensive view of the automatic computation of radiotherapeutic dose distributions and serve as a means of communication between present and potential users of computers
Calculations of reactor-accident consequences, Version 2. CRAC2: computer code user's guide
The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems
Gonzalez Portilla, M. I.; Marquez, J.
2011-07-01
Radiological protection aims to limit the ionizing radiation received by people and equipment, which on numerous occasions requires protection shields. Although analytical formulas exist for characterizing these shields in certain configurations, the design process may be very intensive in numerical calculations; therefore, the most efficient way to design the shields is by means of computer programs that calculate doses and dose rates. In the present article we review the codes most frequently used to perform these calculations and the techniques used by such codes. (Author) 13 refs.
Using the ORIGEN-2 computer code for near core activation calculations
The ORIGEN2 computer code is a useful tool for calculating radionuclide inventories resulting from irradiation of materials in a reactor. It is widely used to calculate activation products in irradiated metals that form the structural portion of fuel assemblies. The code is straightforward for materials within the active fuel region of a reactor core, which are subject to core average conditions. For materials outside the active core, ORIGEN2 cannot be used directly. However, ORIGEN2 can be used with the appropriate methodology to calculate the activation of materials in near core locations. This paper presents the background and a methodology for estimating radionuclide inventories in activated metals in near core locations
An approach to first principles electronic structure calculation by symbolic-numeric computation
Akihito Kikuchi
2013-04-01
There is a wide variety of electronic structure calculation cooperating with symbolic computation, the main purpose of the latter being to play an auxiliary (but not unimportant) role to the former. In the field of quantum physics [1-9], researchers sometimes have to handle complicated mathematical expressions whose derivation seems almost beyond human power. Thus one resorts to the intensive use of computers, namely, symbolic computation [10-16]. Examples of this can be seen in various topics: atomic energy levels, molecular dynamics, molecular energy and spectra, collision and scattering, lattice spin models and so on [16]. How to obtain molecular integrals analytically, or how to manipulate complex formulas in many-body interactions, is one such problem. In the former case, when one uses a special atomic basis for a specific purpose, it may sometimes be very difficult to express the integrals as combinations of already known analytic functions. In the latter, one must rearrange a number of creation and annihilation operators in a suitable order and calculate the analytical expectation value. It is usual that a quantitative and massive computation follows a symbolic one; for the convenience of the numerical computation, it is necessary to reduce a complicated analytic expression into a tractable and computable form. This is the main motive for the introduction of symbolic computation as a forerunner of the numerical one, and their collaboration has achieved considerable success. The present work should be classified as one such trial. Meanwhile, the use of symbolic computation in the present work is not limited to an indirect and auxiliary part of the numerical computation: the present work is applicable to a direct and quantitative estimation of the electronic structure, skipping conventional computational methods.
Easy calculations of lod scores and genetic risks on small computers.
Lathrop, G M; Lalouel, J M
1984-01-01
A computer program that calculates lod scores and genetic risks for a wide variety of both qualitative and quantitative genetic traits is discussed. An illustration is given of the joint use of a genetic marker, affection status, and quantitative information in counseling situations regarding Duchenne muscular dystrophy.
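For the simplest two-point, phase-known case, the lod-score computation itself is short enough to sketch directly. This is a textbook illustration under stated assumptions, not the program described in the abstract.

```python
import numpy as np

def lod_score(recombinants, nonrecombinants, theta):
    """Two-point lod score for phase-known meioses.

    Z(theta) = log10[ L(theta) / L(1/2) ], with the binomial likelihood
    L(theta) = theta^R * (1 - theta)^N for R recombinant and N
    nonrecombinant meioses at recombination fraction theta.
    """
    total = recombinants + nonrecombinants
    return (recombinants * np.log10(theta)
            + nonrecombinants * np.log10(1.0 - theta)
            - total * np.log10(0.5))
```

With 1 recombinant in 10 informative meioses, the score peaks at theta = 0.1 with Z of about 1.6, short of the conventional Z >= 3 threshold for declaring linkage.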
An improvement has been made to the LALA program to compute resonant frequencies and fields for all the modes of the lowest TM01 pass-band of multicell structures. The results are compared with those calculated by another popular rf cavity code and with experimentally measured quantities. (author)
CPS: a continuous-point-source computer code for plume dispersion and deposition calculations
Peterson, K.R.; Crawford, T.V.; Lawson, L.A.
1976-05-21
The continuous-point-source computer code calculates concentrations and surface deposition of radioactive and chemical pollutants at distances from 0.1 to 100 km, assuming a Gaussian plume. The basic input is atmospheric stability category and wind speed, but a number of refinements are also included.
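A minimal Gaussian-plume kernel of the kind such codes evaluate can be sketched as follows. This is the generic textbook formula with total ground reflection, not the CPS source; the dispersion parameters sigma_y and sigma_z must come from stability-class correlations supplied separately.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Concentration (g/m^3) from a continuous point source, Gaussian plume.

    q: source rate (g/s); u: wind speed (m/s); h: effective release height (m);
    y, z: crosswind and vertical receptor coordinates (m); sigma_y, sigma_z:
    dispersion parameters (m) evaluated at the receptor's downwind distance.
    Total reflection at the ground is included via the image-source term.
    """
    lateral = np.exp(-(y**2) / (2.0 * sigma_y**2))
    vertical = (np.exp(-((z - h) ** 2) / (2.0 * sigma_z**2))
                + np.exp(-((z + h) ** 2) / (2.0 * sigma_z**2)))
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical
```

At ground level on the centerline with a ground-level release, the reflection term doubles the free-space value, giving q / (pi * u * sigma_y * sigma_z).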
Shibata, C.S.; Montes, A. [Instituto de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil); Galvao, R.M.O. [Sao Paulo Univ., SP (Brazil). Inst. de Fisica
1994-04-01
This paper describes the 'FLINESH' computer code for magnetic field calculations developed for the simulation of field configurations in plasma magnetic confinement devices. The expressions for the poloidal field and flux, the program structure and the input parameter descriptions are presented, together with an analysis of the graphic output possibilities. (L.C.J.A.). 12 refs, 14 figs, 2 tabs.
This report describes a computer program which is useful in transmission electron microscopy. The program is written in FORTRAN and calculates kinematical electron diffraction patterns in any zone axis from a given crystal structure. Quite large unit cells, containing up to 2250 atoms, can be handled by the program. The program runs on both the Hercules graphics card and the standard IBM CGA card.
Computer Programs for Calculating the Isentropic Flow Properties for Mixtures of R-134a and Air
Kvaternik, Raymond G.
2000-01-01
Three computer programs for calculating the isentropic flow properties of R-134a/air mixtures which were developed in support of the heavy gas conversion of the Langley Transonic Dynamics Tunnel (TDT) from dichlorodifluoromethane (R-12) to 1,1,1,2-tetrafluoroethane (R-134a) are described. The first program calculates the Mach number and the corresponding flow properties when the total temperature, total pressure, static pressure, and mole fraction of R-134a in the mixture are given. The second program calculates tables of isentropic flow properties for a specified set of free-stream Mach numbers given the total pressure, total temperature, and mole fraction of R-134a. Real-gas effects are accounted for in these programs by treating the gases comprising the mixture as both thermally and calorically imperfect. The third program is a specialized version of the first program in which the gases are thermally perfect. It was written to provide a simpler computational alternative to the first program in those cases where real-gas effects are not important. The theory and computational procedures underlying the programs are summarized, the equations used to compute the flow quantities of interest are given, and sample calculated results that encompass the operating conditions of the TDT are shown.
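For a calorically perfect gas, the isentropic-flow relations the simplified third program rests on reduce to familiar closed forms. The sketch below shows the generic compressible-flow ratios, not the NASA programs themselves.

```python
def isentropic_ratios(mach, gamma=1.4):
    """Isentropic flow ratios for a calorically perfect gas at a given Mach number."""
    t0_t = 1.0 + 0.5 * (gamma - 1.0) * mach**2        # total/static temperature
    p0_p = t0_t ** (gamma / (gamma - 1.0))            # total/static pressure
    rho0_rho = t0_t ** (1.0 / (gamma - 1.0))          # total/static density
    a_over_astar = (1.0 / mach) * (
        (2.0 / (gamma + 1.0)) * t0_t
    ) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))      # area ratio A/A*
    return {"T0/T": t0_t, "p0/p": p0_p, "rho0/rho": rho0_rho, "A/A*": a_over_astar}
```

For a real R-134a/air mixture the effective gamma varies with temperature and composition, which is exactly why the first two programs treat the gases as thermally and calorically imperfect; the closed forms above correspond only to the simplified thermally perfect limit with constant specific heats.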
Calculating the Thermal Rate Constant with Exponential Speed-Up on a Quantum Computer
Lidar, D A; Lidar, Daniel A.; Wang, Haobin
1999-01-01
It is shown how to formulate the ubiquitous quantum chemistry problem of calculating the thermal rate constant on a quantum computer. The resulting exact algorithm scales exponentially faster with the dimensionality of the system than all known "classical" algorithms for this problem.
Gordon, Sanford; Mcbride, Bonnie J.
1994-01-01
This report presents the latest in a number of versions of chemical equilibrium and applications programs developed at the NASA Lewis Research Center over more than 40 years. These programs have changed over the years to include additional features and improved calculation techniques and to take advantage of constantly improving computer capabilities. The minimization-of-free-energy approach to chemical equilibrium calculations has been used in all versions of the program since 1967. The two principal purposes of this report are presented in two parts. The first purpose, which is accomplished here in part 1, is to present in detail a number of topics of general interest in complex equilibrium calculations. These topics include mathematical analyses and techniques for obtaining chemical equilibrium; formulas for obtaining thermodynamic and transport mixture properties and thermodynamic derivatives; criteria for inclusion of condensed phases; calculations at a triple point; inclusion of ionized species; and various applications, such as constant-pressure or constant-volume combustion, rocket performance based on either a finite- or infinite-chamber-area model, shock wave calculations, and Chapman-Jouguet detonations. The second purpose of this report, to facilitate the use of the computer code, is accomplished in part 2, entitled 'Users Manual and Program Description'. Various aspects of the computer code are discussed, and a number of examples are given to illustrate its versatility.
Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.
Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan
2015-10-01
Fast calculation and correct depth cues are crucial issues in the calculation of computer-generated holograms (CGHs) for high-quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as layer-corresponding sub-holograms based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yields accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062
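The non-paraxial angular-spectrum propagation step underlying the method can be sketched in a few lines. This is a generic scalar-diffraction implementation, not the authors' code; sampling parameters are illustrative.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a sampled complex field a distance z (same units as pitch).

    Exact (non-paraxial) transfer function
    H = exp(i 2 pi z sqrt(1/lambda^2 - fx^2 - fy^2));
    evanescent components (negative radicand) are suppressed.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    fxx, fyy = np.meshgrid(fx, fy)
    radicand = 1.0 / wavelength**2 - fxx**2 - fyy**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(radicand, 0.0))
    h = np.where(radicand > 0.0, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * h)
```

In a layer-oriented scheme, each depth layer is propagated by its own z to the hologram plane and the results are summed; when no evanescent components are present, propagating forward and back by the same distance recovers the input field.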
AQUAMAN is an interactive computer code for calculating values of dose (50-year dose commitment) to man from aqueous releases of radionuclides from nuclear facilities. The data base contains values of internal and external dose conversion factors and bioaccumulation (freshwater and marine) factors for 56 radionuclides. A maximum of 20 radionuclides may be selected for any one calculation. Dose and cumulative exposure index (CUEX) values are calculated for total body, GI tract, bone, thyroid, lungs, liver, kidneys, testes, and ovaries for each of three exposure pathways: water ingestion, fish ingestion, and submersion. The user is provided the option at the time of execution to change the default values of most of the variables, with the exception of the dose conversion factor values. AQUAMAN is written in FORTRAN for the PDP-10 computer.
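The ingestion-pathway structure described above can be sketched in a few lines. All numbers below (concentrations, dose conversion factors, bioaccumulation factors, intake rates) are illustrative placeholders, not values from the AQUAMAN data base.

```python
# Hypothetical data -- not AQUAMAN's actual dose conversion or
# bioaccumulation factors.
water_conc = {"Cs-137": 1.0e-2, "Sr-90": 5.0e-3}     # Bq per litre
dcf_ingestion = {"Cs-137": 1.3e-8, "Sr-90": 2.8e-8}  # Sv per Bq, illustrative
bioaccumulation = {"Cs-137": 2000.0, "Sr-90": 60.0}  # L/kg, freshwater fish

water_intake = 730.0   # litres per year
fish_intake = 6.9      # kg per year

def annual_dose(nuclide):
    """Ingestion dose (Sv/yr) from the water and fish pathways."""
    c_w = water_conc[nuclide]
    c_fish = c_w * bioaccumulation[nuclide]             # Bq per kg of fish
    intake = c_w * water_intake + c_fish * fish_intake  # Bq per year
    return intake * dcf_ingestion[nuclide]

total = sum(annual_dose(n) for n in water_conc)
print(total)
```

The submersion pathway would add an external-dose term with its own conversion factor; the structure is the same weighted sum over nuclides.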
Program POD. A computer code to calculate cross sections for neutron-induced nuclear reactions
A computer code, POD, was developed for neutron-induced nuclear data evaluations. The program is based on four theoretical models: (1) the optical model, to calculate shape-elastic scattering and reaction cross sections; (2) the distorted wave Born approximation, to calculate neutron inelastic scattering cross sections; (3) the preequilibrium model; and (4) the multi-step statistical model. With this program, cross sections can be calculated for the reactions (n, γ), (n, n'), (n, p), (n, α), (n, d), (n, t), (n, 3He), (n, 2n), (n, np), (n, nα), (n, nd), and (n, 3n) in the neutron energy range from above the resonance region up to 20 MeV. The computational methods and input parameters are explained in this report, with sample inputs and outputs. (author)
Zimmermann, Anke; Kuhn, Sandra; Richter, Marten
2016-01-01
Often, the calculation of Coulomb coupling elements for quantum dynamical treatments, e.g., in cluster or correlation expansion schemes, requires the evaluation of a six-dimensional spatial integral. Therefore, it represents a significant limiting factor in quantum mechanical calculations. If the size or the complexity of the investigated system increases, many coupling elements need to be determined. The resulting computational constraints require an efficient method for a fast numerical calculation of the Coulomb coupling. We present a computational method to reduce the numerical complexity by decreasing the number of spatial integrals for arbitrary geometries. We use a Green's function formulation of the Coulomb coupling and introduce a generalized scalar potential as the solution of a generalized Poisson equation with a generalized charge density as the inhomogeneity. That enables a fast calculation of Coulomb coupling elements and, additionally, a straightforward inclusion of boundary conditions and arbitrarily spatially dependent dielectrics through the Coulomb Green's function. Particularly if many coupling elements are included, the presented method, which is not restricted to specific symmetries of the model, offers a promising approach for increasing the efficiency of numerical calculations of the Coulomb interaction. To demonstrate the wide range of applications, we calculate internanostructure couplings, such as the Förster coupling, and illustrate the inclusion of symmetry considerations in the method for the Coulomb coupling between bound quantum dot states and unbound continuum states.
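The computational gain described, replacing a double spatial integral per coupling element by a single integral against a precomputed potential, can be illustrated with a discretized point-charge sketch. This uses the bare vacuum Coulomb kernel on random sample points, not the paper's generalized Poisson machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid = 400                                     # discretisation points
pts = rng.uniform(-1.0, 1.0, size=(n_grid, 3))   # sample positions (a.u.)

# Pairwise Coulomb Green's function 1/|r - r'| with the self-term removed
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
G = np.zeros_like(d)
mask = d > 0
G[mask] = 1.0 / d[mask]

rho1 = rng.standard_normal(n_grid)   # "charge density" of one state pair
phi1 = G @ rho1                      # potential: one O(N^2) step, then reused

# Each further coupling element against phi1 is only O(N):
couplings = []
for _ in range(5):
    rho2 = rng.standard_normal(n_grid)
    couplings.append(rho2 @ phi1)

# Check the last element against the direct double sum
direct = rho2 @ G @ rho1
print(abs(couplings[-1] - direct))
```

With many coupling elements sharing the same potential, the double sum is paid once instead of once per element, which is the essence of the speed-up claimed in the abstract.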
Calculation of shielding of X rays in radiotherapy facilities with computer aid
This work presents a methodology for the computer-aided calculation of X-ray shielding in radiotherapy facilities. A user-friendly program, called RadTeraX, was developed in the Delphi programming language; from manually entered data describing a basic architectural project and a few parameters, it interprets the geometry and calculates the shielding of the walls, floor and roof of an X-ray radiotherapy installation. As a final product, the program displays a graphic screen with all the input data and the calculated shielding, together with the corresponding calculation record. Even today in Brazil, shielding for radiotherapy facilities with X-rays has been calculated following the recommendations of NCRP-49, which establishes the calculation methodology needed to elaborate a shielding project. At high energies, however, where the construction of a maze is necessary, NCRP-49 is insufficient; studies in this area led to an article proposing a solution to the problem, and that solution was implemented in the program. The program can be applied to the practical execution of shielding projects for radiotherapy facilities, and didactically in comparison with NCRP-49. It has been registered under number 00059420 at INPI - Instituto Nacional da Propriedade Industrial (National Institute of Industrial Property). (author)
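For the NCRP-49 part of the calculation, a primary-barrier thickness follows from the workload, use factor, occupancy factor and distance. The sketch below uses the standard transmission-factor relation with an assumed round-number tenth-value layer; it is not RadTeraX's actual implementation, and all input values are illustrative.

```python
import math

def primary_barrier_thickness(W, U, T, P, d, tvl):
    """Primary-barrier thickness (same unit as tvl) from the NCRP-49-style
    relation  B = P d^2 / (W U T),  t = TVL * log10(1/B)."""
    B = P * d * d / (W * U * T)        # required transmission factor
    n_tvl = math.log10(1.0 / B)        # number of tenth-value layers
    return max(n_tvl, 0.0) * tvl

# Illustrative example (the TVL for concrete is an assumed round number):
t = primary_barrier_thickness(
    W=1000.0,   # workload, Gy/week at 1 m
    U=0.25,     # use factor
    T=1.0,      # occupancy factor
    P=1.0e-4,   # shielding design goal, Sv/week (controlled area)
    d=5.0,      # target-to-point distance, m
    tvl=37.0)   # tenth-value layer, cm of concrete (assumed)
print(t)        # barrier thickness in cm
```

The same relation, with leakage and scatter terms added, covers secondary barriers; maze design at high energies needs the additional treatment the abstract mentions.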
Napier, B.A.; Kennedy, W.E. Jr.; Soldat, J.K.
1980-03-01
A computer program, PABLM, was written to facilitate the calculation of internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. This report contains details of mathematical models used and calculational procedures required to run the computer program. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides in the environment after the releases have ended. Radioactive decay is considered during the release of radionuclides, after they are deposited on the plants or ground, and during holdup of food after harvest. The radiation dose models consider several exposure pathways. Doses may be calculated for either a maximum-exposed individual or for a population group. The doses calculated are accumulated doses from continuous chronic exposure. A first-year committed dose is calculated as well as an integrated dose for a selected number of years. The equations for calculating internal radiation doses are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and MPC's of each radionuclide. The radiation doses from external exposure to contaminated water and soil are calculated using the basic assumption that the contaminated medium is large enough to be considered an infinite volume or plane relative to the range of the emitted radiations. The equations for calculations of the radiation dose from external exposure to shoreline sediments include a correction for the finite width of the contaminated beach.
The CITHAN computer code was developed at IPEN (Instituto de Pesquisas Energeticas e Nucleares) to link the HAMMER computer code with a fuel depletion routine and to provide neutron cross sections in the appropriate format for the CITATION code. The need arose from efforts to adapt the new version, denominated HAMMER-TECHION, to the aforementioned routine. The HAMMER-TECHION computer code was elaborated by the Haifa Institute, Israel, within a project with EPRI. This version is at CNEN to be used for multigroup constant generation for neutron diffusion calculations within the scope of the new methodology to be adopted by CNEN. The theoretical formulation of the CITHAN computer code, tests and modifications are described. (Author)
Tetrahedral-mesh-based computational human phantom for fast Monte Carlo dose calculations
Although polygonal-surface computational human phantoms can address several critical limitations of conventional voxel phantoms, their Monte Carlo simulation speeds are much slower than those of voxel phantoms. In this study, we sought to overcome this problem by developing a new type of computational human phantom, a tetrahedral mesh phantom, by converting a polygonal surface phantom to a tetrahedral mesh geometry. The constructed phantom was implemented in the Geant4 Monte Carlo code to calculate organ doses as well as to measure computation speed; the values were then compared with those for the original polygonal surface phantom. It was found that using the tetrahedral mesh phantom significantly improved the computation speed, by factors of between 150 and 832 for all of the particles and simulated energies other than the low-energy neutrons (0.01 and 1 MeV), for which the improvement was less significant (17.2 and 8.8 times, respectively). (paper)
A computer code for calculating a γ-external dose from a randomly distributed radioactive cloud
A computer code (CIDE) has been developed to calculate the γ external dose from a randomly distributed radioactive cloud. Atmospheric dispersion of radioactive materials accidentally released from a nuclear reactor needs to be estimated considering time-dependent meteorological data and terrain heights. The Particle-in-Cell (PIC) model is useful for that purpose, but it is not easy to calculate the dose from the randomly distributed concentration by numerical integration. In this study, the mean concentration in a cell evaluated by the PIC model was assumed to be uniformly distributed over that cell and was integrated as a constant concentration by a point kernel method. The dose was obtained by summing the contributions of the individual cells. When the plume concentration had a Gaussian distribution, the results of the CIDE code agreed well with those of GAMPLE, the code for calculating the dose from a Gaussian distribution. The choice of cell size, which affects the accuracy of the calculated results, is discussed. (author)
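The cell-summation idea, treating each cell's mean concentration as a point source at its centre and attenuating it with a point kernel, can be sketched as follows. No buildup factor is included, and all source values are illustrative, not CIDE's data.

```python
import math

def point_kernel_dose_rate(cells, detector, mu, k):
    """Gamma dose rate at `detector` from a set of cells, each treated as a
    point source of strength conc*vol at its centre (no buildup factor)."""
    total = 0.0
    for (x, y, z, conc, vol) in cells:
        r = math.dist((x, y, z), detector)
        total += k * conc * vol * math.exp(-mu * r) / (4 * math.pi * r * r)
    return total

mu = 0.0093   # linear attenuation of air, 1/m (illustrative, ~0.7 MeV)
k = 1.0       # flux-to-dose conversion, lumped to 1 for the check

# Two plume cells: (x, y, z [m], mean concentration [Bq/m^3], volume [m^3])
cells = [(0.0, 0.0, 100.0, 5.0e3, 1.0e6),
         (50.0, 0.0, 120.0, 2.0e3, 1.0e6)]
detector = (0.0, 0.0, 1.5)
dose = point_kernel_dose_rate(cells, detector, mu, k)
first = point_kernel_dose_rate(cells[:1], detector, mu, k)

# Single-cell result must match the bare point-kernel formula
r = math.dist((0.0, 0.0, 100.0), detector)
single = k * 5.0e3 * 1.0e6 * math.exp(-mu * r) / (4 * math.pi * r * r)
print(dose, single)
```

A production code would subdivide cells near the detector and apply a buildup factor; the summation structure stays the same.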
TEMP: a computer code to calculate fuel pin temperatures during a transient
The computer code TEMP calculates fuel pin temperatures during a transient. It was developed to accommodate temperature calculations in any system of axi-symmetric concentric cylinders. When used to calculate fuel pin temperatures, the code will handle a fuel pin as simple as a solid cylinder or as complex as a central void surrounded by fuel that is broken into three regions by two circumferential cracks. Any fuel situation between these two extremes can be analyzed, along with additional cladding, heat sink, coolant or capsule regions surrounding the fuel. The one-region version of the code accurately calculates the solution to two problems having closed-form solutions. The code uses an implicit method, an explicit method and a Crank-Nicolson (implicit-explicit) method.
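A minimal sketch of the Crank-Nicolson option, here for a plane slab rather than the code's concentric-cylinder geometry, checked against the analytic decay of a sine mode:

```python
import numpy as np

def crank_nicolson_step(u, alpha, dx, dt):
    """One Crank-Nicolson step for u_t = alpha * u_xx, u = 0 at both ends."""
    n = len(u)
    r = alpha * dt / (2 * dx * dx)
    A = np.eye(n) * (1 + 2 * r)
    B = np.eye(n) * (1 - 2 * r)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -r
        B[i, i + 1] = B[i + 1, i] = r
    return np.linalg.solve(A, B @ u)

# Decay of a sine mode on [0, 1]: exact solution sin(pi x) exp(-pi^2 alpha t)
alpha, nx, nt = 0.01, 99, 200
dx = 1.0 / (nx + 1)
dt = 0.5 / nt
x = np.linspace(dx, 1 - dx, nx)      # interior nodes; boundaries held at 0
u = np.sin(np.pi * x)
for _ in range(nt):
    u = crank_nicolson_step(u, alpha, dx, dt)
u_exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * alpha * nt * dt)
err = np.max(np.abs(u - u_exact))
print(err)
```

The cylindrical version differs only in the spatial operator (a 1/r term in the conduction equation) and in the matrix coefficients; the implicit-explicit averaging is identical.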
Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics.
Seker, V.; Thomas, J. W.; Downar, T. J.; Purdue Univ.
2007-01-01
A computational code system based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in FORTRAN90 programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program and to write an output data file containing the temperature, density and indexing information to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code DeCART. Good agreement in the k_eff and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high fidelity modeling of nuclear reactor cores during the last several years. High-fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport method to CFD and their application to practical reactor analysis problems. One of the principal motivations of the work here was to utilize Monte Carlo methods to validate the coupled deterministic
A version of the CRAC2 computer code applicable for use in analyses of consequences and risks of reactor accidents in case work for environmental statements has been implemented for use on the Nuclear Regulatory Commission Data General MV/8000 computer system. Input preparation is facilitated through the use of an interactive computer program which operates on an IBM personal computer. The resulting CRAC2 input deck is transmitted to the MV/8000 by using an error-free file transfer mechanism. To facilitate the use of CRAC2 at NRC, relevant background material on input requirements and model descriptions has been extracted from four reports: "Calculations of Reactor Accident Consequences," Version 2, NUREG/CR-2326 (SAND81-1994); "CRAC2 Model Descriptions," NUREG/CR-2552 (SAND82-0342); "CRAC Calculations for Accident Sections of Environmental Statements," NUREG/CR-2901 (SAND82-1693); and "Sensitivity and Uncertainty Studies of the CRAC2 Computer Code," NUREG/CR-4038 (ORNL-6114). When this background information is combined with instructions on the input processor, this report provides a self-contained guide for preparing CRAC2 input data with a specific orientation toward applications on the MV/8000. 8 refs., 11 figs., 10 tabs.
CREST : a computer program for the calculation of composition dependent self-shielded cross-sections
A computer program, CREST, for the calculation of composition- and temperature-dependent self-shielded cross-sections using the shielding factor approach is described. The code includes the editing and formation of the data library, calculation of the effective shielding factors and cross-sections, and a fundamental mode calculation to generate the neutron spectrum of the system, which is further used to calculate the effective elastic removal cross-sections. Studies to explore the sensitivity of reactor parameters to changes in group cross-sections can also be carried out by using the facility available in the code to temporarily change the desired constants. The final self-shielded and transport-corrected group cross-sections can be dumped on cards or magnetic tape in a form suitable for direct use in a transport or diffusion theory code for detailed reactor calculations. The program is written in FORTRAN and can be accommodated in a computer with 32 K words of memory. The input preparation details, a sample problem and the listing of the program are given. (author)
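Per group and nuclide, the shielding factor (Bondarenko) approach reduces to interpolating a tabulated factor f(σ0) at the composition-dependent background cross-section σ0 and scaling the infinite-dilution cross-section. The f-table below is hypothetical, not real library data, and the log10 interpolation is a common evaluation convention, not necessarily CREST's.

```python
import math

def shielding_factor(sigma0, table_sigma0, table_f):
    """Interpolate the self-shielding factor f(sigma0) linearly in
    log10(sigma0) from a tabulated set."""
    logs = [math.log10(s) for s in table_sigma0]
    x = math.log10(sigma0)
    if x <= logs[0]:
        return table_f[0]
    if x >= logs[-1]:
        return table_f[-1]
    for i in range(len(logs) - 1):
        if logs[i] <= x <= logs[i + 1]:
            w = (x - logs[i]) / (logs[i + 1] - logs[i])
            return (1 - w) * table_f[i] + w * table_f[i + 1]

# Hypothetical f-table for one group of one nuclide:
table_sigma0 = [1.0, 10.0, 100.0, 1000.0, 1e10]   # background XS, barns
table_f      = [0.55, 0.70, 0.88, 0.97, 1.00]     # f -> 1 at infinite dilution

sigma_inf = 12.0   # infinite-dilution group cross-section, barns
sigma0 = 50.0      # background XS from the rest of the composition
sigma_eff = shielding_factor(sigma0, table_sigma0, table_f) * sigma_inf
print(sigma_eff)
```

A full code would also interpolate in temperature and iterate, since σ0 itself depends on the shielded cross-sections of the other nuclides.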
Wilson, J. W.; Khandelwal, G. S.
1976-01-01
Calculational methods for estimation of dose from external proton exposure of arbitrary convex bodies are briefly reviewed. All the necessary information for the estimation of dose in soft tissue is presented. Special emphasis is placed on retaining the effects of nuclear reaction, especially in relation to the dose equivalent. Computer subroutines to evaluate all of the relevant functions are discussed. Nuclear reaction contributions for standard space radiations are in most cases found to be significant. Many of the existing computer programs for estimating dose in which nuclear reaction effects are neglected can be readily converted to include nuclear reaction effects by use of the subroutines described herein.
Barber, Duncan Henry
During some postulated accidents at nuclear power stations, fuel cooling may be impaired. In such cases, the fuel heats up and the subsequent increased fission-gas release from the fuel to the gap may result in fuel sheath failure. After fuel sheath failure, the barrier between the coolant and the fuel pellets is lost or impaired, gases and vapours from the fuel-to-sheath gap and other open voids in the fuel pellets can be vented. Gases and steam from the coolant can enter the broken fuel sheath and interact with the fuel pellet surfaces and the fission-product inclusion on the fuel surface (including material at the surface of the fuel matrix). The chemistry of this interaction is an important mechanism to model in order to assess fission-product releases from fuel. Starting in 1995, the computer program SOURCE 2.0 was developed by the Canadian nuclear industry to model fission-product release from fuel during such accidents. SOURCE 2.0 has employed an early thermochemical model of irradiated uranium dioxide fuel developed at the Royal Military College of Canada. To overcome the limitations of computers of that time, the implementation of the RMC model employed lookup tables to pre-calculated equilibrium conditions. In the intervening years, the RMC model has been improved, the power of computers has increased significantly, and thermodynamic subroutine libraries have become available. This thesis is the result of extensive work based on these three factors. A prototype computer program (referred to as SC11) has been developed that uses a thermodynamic subroutine library to calculate thermodynamic equilibria using Gibbs energy minimization. The Gibbs energy minimization requires the system temperature (T) and pressure (P), and the inventory of chemical elements (n) in the system. In order to calculate the inventory of chemical elements in the fuel, the list of nuclides and nuclear isomers modelled in SC11 had to be expanded from the list used by SOURCE 2.0. A
RAP-4A Computer code for thermohydraulic calculation of liquid metal cooled fuel clusters
RAP-4A is a programme for calculating the thermal-hydraulic parameters of fuel clusters in a fast liquid-metal-cooled reactor. The code can calculate the steady-state axial distributions of temperature, enthalpy, pressure drop and mass velocity. A one-dimensional mathematical model along the cluster, allowing the study of single- and two-phase flow, is used, taking into account the mixing between adjacent subchannels. The physical and mathematical models, general features and an example are presented. The RAP-4A code is written in the FORTRAN-IV language for the IBM 370/135 computer.
A fast-running computer code, SHETEMP, has been developed for the analysis of reactivity-initiated accidents under constant core cooling conditions, i.e., fixed coolant temperature and heat transfer coefficient on the fuel rods. The code predicts core power and fuel temperature behaviour. Control rod movement can be taken into account in the power control system. The objective of the code is to provide the fast-running capability and ease of use required for audit and design calculations, where a large number of calculations are performed for parameter surveys within a short time period. The fast-running capability was achieved by omitting the fluid flow calculation. SHETEMP was constructed by extracting and combining the reactor kinetics and heat conduction routines of the transient reactor thermal-hydraulic analysis code ALARM-P1 and adding newly developed routines for the reactor power control system. Like ALARM-P1, SHETEMP solves the point reactor kinetics equations by a modified Runge-Kutta method and the one-dimensional transient heat conduction equations for slab and cylindrical geometries by the Crank-Nicolson method. The model of the reactor power control system accounts for the effects of the PID regulator and the control rod drive mechanism. To check for programming errors, results calculated by SHETEMP were compared with analytic solutions, and the correctness of the implementation was verified. In addition, a sample calculation for a typical model showed that the code satisfies the fast-running capability required for audit and design calculations. This report serves as the code manual of SHETEMP. It contains descriptions of a sample problem, the code structure, input data specifications and usage of the code, in addition to the analytical models and the results of code verification calculations. (author)
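The point-kinetics part of such a code can be sketched with one delayed-neutron group and a standard fourth-order Runge-Kutta step (SHETEMP itself uses a modified Runge-Kutta method and six delayed groups would be typical; the constants here are illustrative):

```python
def point_kinetics_rhs(n, c, rho, beta, lam, Lambda):
    """One-delayed-group point reactor kinetics equations."""
    dn = (rho - beta) / Lambda * n + lam * c
    dc = beta / Lambda * n - lam * c
    return dn, dc

def rk4_step(n, c, dt, rho, beta, lam, Lambda):
    k1 = point_kinetics_rhs(n, c, rho, beta, lam, Lambda)
    k2 = point_kinetics_rhs(n + dt/2*k1[0], c + dt/2*k1[1], rho, beta, lam, Lambda)
    k3 = point_kinetics_rhs(n + dt/2*k2[0], c + dt/2*k2[1], rho, beta, lam, Lambda)
    k4 = point_kinetics_rhs(n + dt*k3[0], c + dt*k3[1], rho, beta, lam, Lambda)
    n += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    c += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return n, c

beta, lam, Lambda = 0.0065, 0.08, 2.0e-5   # illustrative thermal-reactor constants
n, c = 1.0, beta / (lam * Lambda)          # equilibrium precursor concentration
for _ in range(1000):                      # 1 s at dt = 1 ms, zero reactivity
    n, c = rk4_step(n, c, 1.0e-3, 0.0, beta, lam, Lambda)
print(n)
```

Starting from precursor equilibrium with zero reactivity, the power stays at its initial value, which is a standard sanity check before coupling in the heat conduction and control-system feedback.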
Calculation and evaluation methodology for a flawed pipe and development of the computer program
Background: In a pressurized pipe, a crack will grow gradually under alternating load even when the load is below the fatigue strength limit. Purpose: Both the calculation and the evaluation methodology for a flawed pipe detected during in-service inspection are elaborated here, based on Elastic Plastic Fracture Mechanics (EPFM) criteria. Methods: The computation considers the interaction between flaw depth and length, and a computer program was developed in Visual C++. Results: The fluctuating loads of the Reactor Coolant System transients, the initial flaw shape and the initial flaw orientation are all accounted for. Conclusions: The calculation and evaluation methodology presented here is an important basis for deciding whether continued operation is acceptable. (authors)
Ablinger, J; Blümlein, J; De Freitas, A; von Manteuffel, A; Schneider, C
2015-01-01
Three loop ladder and $V$-topology diagrams contributing to the massive operator matrix element $A_{Qg}$ are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable $N$ and the dimensional parameter $\\varepsilon$. Given these representations, the desired Laurent series expansions in $\\varepsilon$ can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural ...
OPT13B and OPTIM4 - computer codes for optical model calculations
OPT13B is a computer code in FORTRAN for optical model calculations with automatic search. A summary of the different formulae used for computation is given. Numerical methods are discussed. The 'search' technique followed to obtain the set of optical model parameters which produces the best fit to experimental data in a least-squares sense is also discussed. The different subroutines of the program are briefly described. Input-output specifications are given in detail. A modified version of OPT13B is OPTIM4. It can be used for optical model calculations where the form factors of different parts of the optical potential are known point by point. A brief description of the modifications is given. (author)
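An automatic least-squares search of this kind can be illustrated on a toy problem: recovering the depth and radius of a Woods-Saxon form factor, the usual radial shape in optical potentials, from noisy synthetic data by successive grid refinement. OPT13B's actual search algorithm is not described in the abstract, so this is only a generic sketch.

```python
import numpy as np

def woods_saxon(r, V0, R0, a=0.65):
    """Woods-Saxon form factor with depth V0, radius R0, diffuseness a."""
    return -V0 / (1.0 + np.exp((r - R0) / a))

# Synthetic "experimental" data from known parameters plus noise
rng = np.random.default_rng(1)
r = np.linspace(0.0, 10.0, 60)
data = woods_saxon(r, V0=47.0, R0=4.2) + rng.normal(0.0, 0.05, r.size)

def chi2(V0, R0):
    return np.sum((woods_saxon(r, V0, R0) - data) ** 2)

# Least-squares "search": successive grid refinement around the best point
V_best, R_best, span = 50.0, 4.0, 10.0
for _ in range(20):
    Vs = np.linspace(V_best - span, V_best + span, 11)
    Rs = np.linspace(max(R_best - span, 0.1), R_best + span, 11)
    V_best, R_best = min(((V, R) for V in Vs for R in Rs),
                         key=lambda p: chi2(*p))
    span *= 0.5
print(V_best, R_best)
```

Production search codes use gradient or Marquardt-type minimization rather than grid refinement, but the objective, the chi-square between calculated and experimental values, is the same.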
Fotland, Åge; Mehl, Sigbjørn; Sunnanå, Knut
1995-01-01
Standard 0-group index distribution maps are now produced from hand-drawn maps using AutoCad with some additional procedures. This paper briefly describes the method. The paper further describes ways of importing coastlines and survey data directly into standard computer programs such as AutoCad and SAS. Standard methods are used for gridding data, producing isolines and further calculation of abundance indices and presentation of distributions. Interactive editing of distribution maps ...
Calculation of Heat-Kernel Coefficients and Usage of Computer Algebra
Bel'kov, A. A.; Lanyov, A. V.; Schaale, A.
1995-01-01
The calculation of heat-kernel coefficients with the classical DeWitt algorithm has been discussed. We present the explicit form of the coefficients up to $h_5$ in the general case and up to $h_7^{min}$ for the minimal parts. The results are compared with the expressions in other papers. A method to optimize the usage of memory for working with large expressions on universal computer algebra systems has been proposed.
A compilation of structural property data for computer impact calculation (5/5)
The paper describes structural property data for computer impact calculations of nuclear fuel shipping casks. Four kinds of material data are compiled: mild steel, stainless steel, lead and wood. These materials are the main structural elements of shipping casks. Structural data, such as the coefficient of thermal expansion, the modulus of longitudinal elasticity, the modulus of transverse elasticity, the Poisson's ratio and stress-strain relationships, have been tabulated against temperature or strain rate. This volume 5 contains the structural property data for wood. (author)
A compilation of structural property data for computer impact calculation (4/5)
The paper describes structural property data for computer impact calculations of nuclear fuel shipping casks. Four kinds of material data are compiled: mild steel, stainless steel, lead and wood. These materials are the main structural elements of shipping casks. Structural data, such as the coefficient of thermal expansion, the modulus of longitudinal elasticity, the modulus of transverse elasticity, the Poisson's ratio and stress-strain relationships, have been tabulated against temperature or strain rate. This volume 4 contains the structural property data for lead. (author)
Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Romero, Vicente J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rushdi, Ahmad A. [Univ. of Texas, Austin, TX (United States); Abdelkader, Ahmad [Univ. of Maryland, College Park, MD (United States)
2015-09-01
This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.
POPCYCLE: a computer code for calculating nuclear and fossil plant levelized life-cycle power costs
POPCYCLE, a computer code designed to calculate levelized life-cycle power costs for nuclear and fossil electrical generating plants is described. Included are (1) derivations of the equations and a discussion of the methodology used by POPCYCLE, (2) a description of the input required by the code, (3) a listing of the input for a sample case, and (4) the output for a sample case
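A levelized life-cycle power cost is essentially discounted total cost divided by discounted total generation. The sketch below uses that textbook definition with made-up plant figures; it is not POPCYCLE's exact formulation, which the report derives in detail.

```python
def levelized_cost(capital, annual_costs, annual_energy, rate):
    """Levelized cost of electricity: present value of all costs divided by
    the present value of all generation (standard textbook definition)."""
    years = len(annual_costs)
    pv_cost = capital + sum(annual_costs[t] / (1 + rate) ** (t + 1)
                            for t in range(years))
    pv_energy = sum(annual_energy[t] / (1 + rate) ** (t + 1)
                    for t in range(years))
    return pv_cost / pv_energy

# Illustrative 30-year plant, $ costs and kWh generation:
capital = 2.0e9                  # overnight capital, $
annual_costs = [9.0e7] * 30      # O&M plus fuel, $/yr
annual_energy = [7.0e9] * 30     # kWh/yr (about 900 MWe at 89% capacity factor)
lcoe = levelized_cost(capital, annual_costs, annual_energy, rate=0.05)
print(lcoe)   # $/kWh
```

Comparing nuclear and fossil plants then amounts to running the same formula with each plant's capital, fuel and O&M streams.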
The comparatively high dose and increasing frequency of computed tomography (CT) examinations have spurred the development of techniques for reducing radiation dose to imaging patients. Among these is the application of tube current modulation (TCM), which can be applied longitudinally along the body, rotationally around the body, or both. Existing computational models for calculating dose from CT examinations do not include TCM techniques. Dose calculations using Monte Carlo methods have been previously prepared for constant-current rotational exposures at various positions along the body and for the principal exposure projections for several sets of computational phantoms, including adult male and female and pregnant patients. Dose calculations from CT scans with TCM are prepared by appropriately weighting the existing dose data. Longitudinal TCM doses can be obtained by weighting the dose at the z-axis scan position by the relative tube current at that position. Rotational TCM doses are weighted using the relative organ doses from the principal projections as a function of the current at the rotational angle. Significant dose reductions of 15% to 25% to fetal tissues are found from simulations of longitudinal TCM schemes to pregnant patients of different gestational ages. Weighting factors for each organ in rotational TCM schemes applied to adult male and female patients have also been found. As the application of TCM techniques becomes more prevalent, the need for including TCM in CT dose estimates will necessarily increase. (author)
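The longitudinal TCM weighting described above, scaling each z-position's constant-current dose by the relative tube current there, reduces to a weighted sum. All numbers below are illustrative, not the paper's phantom data.

```python
def tcm_weighted_dose(slab_doses, tube_current, reference_current):
    """Longitudinal TCM: weight the constant-current dose computed at each
    z-axis slab position by the relative tube current at that position."""
    return sum(d * i / reference_current
               for d, i in zip(slab_doses, tube_current))

# Constant-current organ dose contributions per slab (mGy, illustrative)
slab_doses = [0.10, 0.30, 0.60, 0.30, 0.10]
# Modulated current along z (mA); the reference scan used 200 mA everywhere
tube_current = [180.0, 150.0, 140.0, 150.0, 180.0]

d_tcm = tcm_weighted_dose(slab_doses, tube_current, 200.0)
d_const = sum(slab_doses)
print(1.0 - d_tcm / d_const)   # fractional dose reduction
```

The rotational case works the same way, except the weights come from the relative organ doses of the principal projections as a function of the current at each rotation angle.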
Studies from the last 12 years on the CABRI, SCARABEE and PHEBUS projects are summarized. The report describes the purpose and genesis of the cores, the evolution of the core concept and the associated neutronic problems. The calculational scheme used is presented, together with its qualification. The formalism and the qualification of the different modules of GOLEM are presented: COXYS, a physical-analysis module that determines the best energy and spatial mesh for the case of interest; GOLU.B, the input data management module; VAREC, a calculation module for material perturbations, which computes the perturbed flux and the reactivity variation; VARYX, a calculation module for geometric perturbations; TRACASYN, a module for 3D power shape calculation; and finally TRACASTORE, a module for management and graphic exploitation of results. Directions for using these different modules are then given. Qualification results show that GOLEM is able to analyse the fine physics of many different cases, to calculate by perturbation effects greater than 5000 pcm, and to reconstruct the perturbed flux within margins near 3% for difficult situations, such as reactor voiding or spectral variation in a PWR. Furthermore, 3D hot spots are calculated within margins of a magnitude comparable to experimental ones
DCHAIN: A user-friendly computer program for radioactive decay and reaction chain calculations
A computer program for calculating the time-dependent daughter populations in radioactive decay and nuclear reaction chains is described. Chain members can have non-zero initial populations and can be produced from the preceding chain member as the result of radioactive decay, a nuclear reaction, or both. As presently implemented, chains can contain up to 15 members. Program input can be supplied interactively or read from ASCII data files. Time units for half-lives, etc. can be specified during data entry. Input values are verified and can be modified, if necessary, before being used in calculations. Output results can be saved in ASCII files in a format suitable for inclusion in reports or other documents. The calculational method, described in some detail, utilizes a generalized form of the Bateman equations. The program is written in the C language in conformance with current ANSI standards and can be used on multiple hardware platforms.
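The Bateman solution underlying such chain calculations can be sketched for the simplest case, distinct decay constants and only the first member initially populated; DCHAIN's generalized form additionally handles non-zero initial populations and production by nuclear reactions:

```python
import math

def bateman_populations(n0, lams, t):
    """Populations of each member of a linear decay chain at time t.
    Assumes distinct decay constants lams (1/s) and that only the first
    member is populated initially -- the classic Bateman solution."""
    pops = []
    for n in range(1, len(lams) + 1):
        prefactor = n0 * math.prod(lams[: n - 1])
        total = 0.0
        for i in range(n):
            denom = math.prod(lams[j] - lams[i] for j in range(n) if j != i)
            total += math.exp(-lams[i] * t) / denom
        pops.append(prefactor * total)
    return pops

# Two-member chain: the daughter population grows, peaks, then decays.
parent, daughter = bateman_populations(1.0, [1.0, 2.0], 1.0)
```

For a single nuclide the formula collapses to ordinary exponential decay, which makes a convenient sanity check.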
Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics
The interest in high-fidelity modeling of nuclear reactor cores has increased over the last few years and has become computationally more feasible because of the dramatic improvements in processor speed and the availability of low-cost parallel platforms. In the research described here, high-fidelity, multi-physics analyses were performed by solving the neutron transport equation using Monte Carlo methods and by solving the thermal-hydraulics equations using computational fluid dynamics. A computation tool based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower-order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR' along with the verification and validation efforts. McSTAR is written in the PERL programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous-energy cross-section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross-section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and two of them are implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density and indexing information needed to perform the mapping between MCNP and STAR-CD cells. The necessary input file manipulation, data file generation, normalization and multi-processor calculation settings are all handled through the program flow in McSTAR. Initial testing of the code was performed using a single pin-cell and a 3x3 PWR pin-cell problem. The preliminary results of the single pin-cell problem are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code De
The special-purpose computer PERCOLA is designed for long numerical simulations of a percolation problem in the statistical mechanics of disordered media. Our aim is to improve the actual values of the critical exponents characterizing the behaviour of random resistance networks at the percolation threshold. The architecture of PERCOLA is based on an efficient iterative algorithm used to compute the electric conductivity of such networks. The calculator has the characteristics of a general-purpose 64-bit floating-point micro-programmable computer that can run programs for various types of problems, with a peak performance of 25 Mflops. This high computing speed is a result of the pipeline architecture based on internal parallelism and separately micro-code-controlled units such as data memories, a micro-code memory, ALUs and multipliers (both WEITEK components), various data paths, a sequencer (ANALOG DEVICES component), address generators and a random number generator. Thus, the special-purpose computer runs the percolation program 10 percent faster than the supercomputer CRAY XMP. (author)
Brzuszek, Marcin; Daniluk, Andrzej
2006-11-01
Writing a concurrent program can be more difficult than writing a sequential program. The programmer needs to think about synchronisation, race conditions and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction which allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents multithreaded versions of the GROWTH program, which allow calculation of the layer coverages during the growth of thin epitaxial films and the corresponding RHEED intensities according to the kinematical approximation. The presented programs also contain graphical user interfaces, which enable displaying program data at run-time.
New version program summary
Titles of programs: GROWTHGr, GROWTH06
Catalogue identifier: ADVL_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Catalogue identifier of previous version: ADVL
Does the new version supersede the original program: No
Computer for which the new version is designed and others on which it has been tested: Pentium-based PC
Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT
Programming language used: Object Pascal
Memory required to execute with typical data: More than 1 MB
Number of bits in a word: 64
Number of processors used: 1
No. of lines in distributed program, including test data, etc.: 20 931
Number of bytes in distributed program, including test data, etc.: 1 311 268
Distribution format: tar.gz
Nature of physical problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the use of kinematical diffraction theory [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222. [1
Smith, P.D.
1978-02-01
A special-purpose computer program, TRAFIC, is presented for calculating the release of metallic fission products from an HTGR core. The program is based upon Fick's law of diffusion for radioactive species. One-dimensional transient diffusion calculations are performed for the coated fuel particles and for the structural graphite web. A quasi-steady-state calculation is performed for the fuel rod matrix material. The model accounts for nonlinear adsorption behavior in the fuel rod gap and on the coolant hole boundary. The TRAFIC program is designed to operate in a core survey mode; that is, it performs many repetitive calculations for a large number of spatial locations in the core. This is necessary in order to obtain an accurate volume-integrated release. For this reason the program has been designed with calculational efficiency as one of its main objectives. A highly efficient numerical method is used in the solution. The method makes use of the Duhamel superposition principle to eliminate interior spatial solutions from consideration. Linear response functions relating the concentrations and mass fluxes on the boundaries of a homogeneous region are derived. Multiple regions are numerically coupled through interface conditions. Algebraic elimination is used to reduce the equations as far as possible. The problem reduces to two nonlinear equations in two unknowns, which are solved using a Newton-Raphson technique.
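The final reduction to two nonlinear equations in two unknowns can be handled by a standard two-variable Newton-Raphson iteration. The residual functions below are a hypothetical stand-in (a circle and a line), not TRAFIC's actual equations:

```python
def newton2(f, g, x, y, tol=1e-12, max_iter=50, h=1e-7):
    """Solve f(x, y) = 0 and g(x, y) = 0 by Newton-Raphson with a
    finite-difference Jacobian (illustrative sketch)."""
    for _ in range(max_iter):
        fv, gv = f(x, y), g(x, y)
        fx = (f(x + h, y) - fv) / h           # approximate Jacobian entries
        fy = (f(x, y + h) - fv) / h
        gx = (g(x + h, y) - gv) / h
        gy = (g(x, y + h) - gv) / h
        det = fx * gy - fy * gx
        dx = (-fv * gy + fy * gv) / det       # Cramer's rule for the update
        dy = (-fx * gv + fv * gx) / det
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

# Hypothetical residuals: intersection of a circle of radius 2 with y = x.
f = lambda x, y: x * x + y * y - 4.0
g = lambda x, y: y - x
root_x, root_y = newton2(f, g, 1.0, 2.0)
```

The finite-difference Jacobian keeps the sketch self-contained; a production solver would typically supply analytic derivatives.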
This report describes the development of a wind field calculation code and an atmospheric dispersion and dose calculation code which can be used for real-time prediction in an emergency. The models used in the computer codes are a mass-consistent model for the wind field and a particle diffusion model for atmospheric dispersion. In order to attain quick response even when the codes are run on a small-scale computer, a high-speed iteration method (MILUCR) and a kernel density method are applied to the wind field model and the atmospheric dispersion and dose calculation model, respectively. In this report, the numerical models, computational codes, related files and calculation examples are presented. (author)
Leal, Allan; Saar, Martin
2016-04-01
Computational methods for geochemical and reactive transport modeling are essential for the understanding of many natural and industrial processes. Most of these processes involve several phases and components, and quite often require chemical equilibrium and kinetics calculations. We present an overview of novel methods for multiphase equilibrium calculations, based both on the Gibbs energy minimization (GEM) approach and on the solution of the law of mass-action (LMA) equations. We also employ kinetics calculations, assuming partial equilibrium (e.g., fluid species in equilibrium while minerals are in disequilibrium), using automatic time stepping to improve simulation efficiency and robustness. These methods are developed specifically for applications that are computationally expensive, such as reactive transport simulations. We show how efficient the new methods are compared to other algorithms, and how easy it is to use them for geochemical modeling via a simple script language. All methods are available in Reaktoro, a unified open-source framework for modeling chemically reactive systems, which we also briefly describe.
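As a minimal illustration of the GEM approach, the sketch below minimizes the Gibbs energy of a hypothetical two-species system A = B by golden-section search. The standard chemical potentials are invented for the example (chosen so that K = 2) and are not Reaktoro data:

```python
import math

R, T = 8.314, 298.15                              # J/(mol K), K
mu0 = {"A": 0.0, "B": -R * T * math.log(2.0)}     # invented standard potentials

def gibbs(xi):
    """Total Gibbs energy for n_A = 1 - xi, n_B = xi (1 mol total, ideal mix)."""
    g = 0.0
    for species, n in (("A", 1.0 - xi), ("B", xi)):
        if n > 0.0:                               # n*ln(n) -> 0 as n -> 0
            g += n * (mu0[species] + R * T * math.log(n))
    return g

def minimize_gibbs(lo=1e-9, hi=1.0 - 1e-9, tol=1e-12):
    """Golden-section search for the extent of reaction minimizing G;
    valid here because the ideal-mixing G is strictly convex in xi."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    while hi - lo > tol:
        a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
        if gibbs(a) < gibbs(b):
            hi = b
        else:
            lo = a
    return 0.5 * (lo + hi)

xi_eq = minimize_gibbs()
```

Setting dG/dxi = 0 gives xi/(1 - xi) = K = 2, i.e. xi = 2/3, so the numerical minimizer can be checked against the closed-form answer. Real GEM codes minimize over many species subject to elemental mass-balance constraints rather than a single reaction coordinate.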
Plummer, L.N.; Parkhurst, D.L.; Fleming, G.W.; Dunkle, S.A.
1988-01-01
The program named PHRQPITZ is a computer code capable of making geochemical calculations in brines and other electrolyte solutions to high concentrations using the Pitzer virial-coefficient approach for activity-coefficient corrections. Reaction-modeling capabilities include calculation of (1) aqueous speciation and mineral-saturation index, (2) mineral solubility, (3) mixing and titration of aqueous solutions, (4) irreversible reactions and mineral-water mass transfer, and (5) reaction path. The computed results for each aqueous solution include the osmotic coefficient, water activity, mineral saturation indices, mean activity coefficients, total activity coefficients, and scale-dependent values of pH, individual-ion activities and individual-ion activity coefficients. A data base of Pitzer interaction parameters is provided at 25°C for the system Na-K-Mg-Ca-H-Cl-SO4-OH-HCO3-CO3-CO2-H2O, and extended to include largely untested literature data for Fe(II), Mn(II), Sr, Ba, Li, and Br, with provision for calculations at temperatures other than 25°C. An extensive literature review of published Pitzer interaction parameters for many inorganic salts is given. Also described is an interactive input code for PHRQPITZ called PITZINPT. (USGS)
Computing Moment-Based Probability Tables for Self-Shielding Calculations in Lattice Codes
As part of the self-shielding model used in the APOLLO2 lattice code, probability tables are required to compute self-shielded cross sections for coarse energy groups (typically with 99 or 172 groups). This paper describes the replacement of the multiband tables (typically with 51 subgroups) with moment-based tables in release 2.5 of APOLLO2. An improved Ribon method is proposed to compute moment-based probability tables, allowing significant savings in CPU resources while maintaining the accuracy of the self-shielding algorithm. Finally, a validation is presented in which the absorption rates obtained with each of these techniques are compared with exact values obtained using a fine-group elastic slowing-down calculation in the resolved energy domain. Other results, relative to Rowland's benchmark and to three assembly production cases, are also presented.
BALANCE : a computer program for calculating mass transfer for geochemical reactions in ground water
Parkhurst, David L.; Plummer, L. Niel; Thorstenson, Donald C.
1982-01-01
BALANCE is a Fortran computer program designed to define and quantify chemical reactions between ground water and minerals. Using (1) the chemical compositions of two waters along a flow path and (2) a set of mineral phases hypothesized to be the reactive constituents in the system, the program calculates the mass transfer (amounts of the phases entering or leaving the aqueous phase) necessary to account for the observed changes in composition between the two waters. Additional constraints can be included in the problem formulation to account for mixing of two end-member waters, redox reactions, and, in a simplified form, isotopic composition. The computer code and a description of the input necessary to run the program are presented. Three examples typical of ground-water systems are described. (USGS)
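The mass-balance idea can be sketched as a small linear system: each element's change in concentration along the flow path equals the stoichiometry-weighted sum of the phase mass transfers. The phases and numbers below are hypothetical illustrations, not taken from the report:

```python
def solve_linear(a, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                factor = m[r][col] / m[col][col]
                m[r] = [x - factor * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

# Rows: elements Ca, C, S; columns: phases calcite (CaCO3), gypsum
# (CaSO4.2H2O), CO2 gas.  Entries are moles of element per mole of phase.
stoich = [
    [1.0, 1.0, 0.0],   # Ca
    [1.0, 0.0, 1.0],   # C
    [0.0, 1.0, 0.0],   # S
]
delta = [2.0, 1.5, 0.5]   # made-up mmol/kg changes between the two waters
transfer = solve_linear(stoich, delta)   # mmol of each phase dissolved
```

For these made-up deltas the balance attributes 1.5 mmol to calcite, 0.5 mmol to gypsum and none to CO2; BALANCE solves the same kind of system with user-chosen phases and the additional constraints mentioned above.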
The general-purpose Monte Carlo radiation transport code MCNPX has been used to simulate photon transport and energy deposition in anthropomorphic phantoms due to the x-ray exposure from the Philips iCT 256 and Siemens Definition CT scanners, together with the previously studied General Electric 9800. The MCNPX code was compiled with the Intel FORTRAN compiler and run on a Linux PC cluster. A patch has been successfully applied to reduce computing times by about 4%. The International Commission on Radiological Protection (ICRP) has recently published the Adult Male (AM) and Adult Female (AF) reference computational voxel phantoms as successors to the Medical Internal Radiation Dose (MIRD) stylised hermaphrodite mathematical phantoms that form the basis for the widely-used ImPACT CT dosimetry tool. Comparisons of normalised organ and effective doses calculated for a range of scanner operating conditions have demonstrated significant differences in results (in excess of 30%) between the voxel and mathematical phantoms as a result of variations in anatomy. These analyses illustrate the significant influence of choice of phantom on normalised organ doses and the need for standardisation to facilitate comparisons of dose. Further such dose simulations are needed in order to update the ImPACT CT Patient Dosimetry spreadsheet for contemporary CT practice. (author)
In the fuel rods of the first DUELL experiment, highly asymmetric fuel structures were found which had been caused by a steep transversal neutron flux gradient and eccentric pellet location. The TEXDIF-P computer code was developed to explain this phenomenon in quantitative terms. This computer code solves the two-dimensional heat conduction equation for an encapsulated fuel rod using the finite-difference method. Any distribution of the heat source density and of the gap between the fuel pellet and the cladding tube may be specified. Thanks to the modular structure, the material relations are easily exchangeable. The TEXDIF-P code can be applied both to oxide and to carbide fuel rods. Coupling of the POUMEC subprogram of SATURN-1 allows the dynamic calculation of pore migration. Independently of this, the program includes an option for determining the limit of the pore migration zone via a relation covering the minimum pore migration path according to Olander. TEXDIF-P has been used so far to verify the first start-up ramp experiment of DUELL. The agreement between the computation and the findings of post-examinations is quite satisfactory regarding the size and the location of the central void. Also, the limit of the compacted zone is fairly well reproduced by the computation. The assumption on the size of the transversal neutron flux gradient has been essentially confirmed retroactively by transversal γ-scanning. (orig.)
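A minimal sketch of the finite-difference approach for two-dimensional steady heat conduction is shown below (Jacobi iteration of the five-point stencil on a uniform Cartesian grid with fixed boundary temperatures). TEXDIF-P itself treats the actual rod geometry with spatially varying heat sources; the grid and boundary values here are illustrative only:

```python
def solve_laplace(nx, ny, boundary, n_iter=2000):
    """Steady 2-D heat conduction without sources: iterate the five-point
    finite-difference stencil until the interior temperatures relax."""
    t = [[boundary(i, j) for j in range(ny)] for i in range(nx)]
    for _ in range(n_iter):
        new = [row[:] for row in t]
        for i in range(1, nx - 1):
            for j in range(1, ny - 1):
                new[i][j] = 0.25 * (t[i - 1][j] + t[i + 1][j]
                                    + t[i][j - 1] + t[i][j + 1])
        t = new
    return t

# Hypothetical boundary: one hot edge at 600 K, the remaining edges at 300 K.
temps = solve_laplace(12, 12, lambda i, j: 600.0 if i == 0 else 300.0)
```

The converged field obeys the discrete maximum principle (interior values lie strictly between the boundary extremes) and inherits the left-right symmetry of the boundary data, both of which make convenient checks.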
Computational Issues Associated with Automatic Calculation of Acute Myocardial Infarction Scores
Destro-Filho, J. B.; Machado, S. J. S.; Fonseca, G. T.
2008-12-01
This paper presents a comparison among the three principal acute myocardial infarction (AMI) scores (Selvester, Aldrich, Anderson-Wilkins) as they are automatically estimated from digital electrocardiographic (ECG) files, in terms of memory occupation and processing time. Theoretical algorithm complexity is also provided. Our simulation study supposes that the ECG signal is already digitized and available within a computer platform. We perform 1,000,000 Monte Carlo experiments using the same input files, leading to average results that point out the drawbacks and advantages of each score. Since none of these calculations requires either large memory occupation or long processing, automatic estimation is compatible with the real-time requirements associated with AMI urgency and with telemedicine systems, being faster than manual calculation even on simple, low-cost personal microcomputers.
An approach to first principles electronic structure calculation by symbolic-numeric computation
Kikuchi, Akihito
2013-01-01
This article is an introduction to a new approach to first-principles electronic structure calculation. The starting point is the Hartree-Fock-Roothaan equation, in which molecular integrals are approximated by polynomials by way of Taylor expansion with respect to atomic coordinates and other variables. This leads to a set of polynomial equations whose solutions are eigenstates, designated as the algebraic molecular orbital equation. Symbolic computation, especially Gröbner basis theory, enables us to rewrite the polynomial equations into more trimmed and tractable forms with identical roots, from which we can unravel the relationship between physical parameters (wave function, atomic coordinates, and others) and numerically evaluate them one by one in order. Furthermore, this method is a unified way to treat the electronic structure calculation, the optimization of physical parameters, and the inverse problem as a forward problem.
Methods, algorithms and computer codes for calculation of electron-impact excitation parameters
Bogdanovich, P; Stonys, D
2015-01-01
We describe the computer codes, developed at Vilnius University, for the calculation of electron-impact excitation cross sections, collision strengths, and excitation rates in the plane-wave Born approximation. These codes utilize the multireference atomic wavefunctions which are also adopted to calculate radiative transition parameters of complex many-electron ions. This leads to consistent data sets suitable for plasma modelling codes. Two versions of the electron scattering codes are considered in the present work, both of them employing the configuration interaction method for the inclusion of correlation effects and the Breit-Pauli approximation to account for relativistic effects. The versions differ only in the one-electron radial orbitals: the first employs non-relativistic numerical radial orbitals, while the other uses quasirelativistic radial orbitals. The accuracy of the produced results is assessed by comparing radiative transition and electron-impact excitation data for neutral hydrogen, helium...
WOLF: a computer code package for the calculation of ion beam trajectories
The WOLF code solves Poisson's equation within a user-defined problem boundary of arbitrary shape. The code is compatible with ANSI FORTRAN and uses a two-dimensional Cartesian coordinate geometry represented on a triangular lattice. The vacuum electric fields and equipotential lines are calculated for the input problem. The user may then introduce a series of emitters from which particles of different charge-to-mass ratios and initial energies can originate. These non-relativistic particles are then traced by WOLF through the user-defined region. Effects of ion and electron space charge are included in the calculation. A subprogram, PISA, forms part of this code and enables optimization of various aspects of the problem. The WOLF package also allows detailed graphics analysis of the computed results to be performed.
One of the important phenomena in magnetically confined fusion plasma is plasma turbulence, which causes particle and heat transport and degrades plasma confinement. To address multi-scale turbulence covering the temporal and spatial scales of both electrons and ions, we extend our gyrokinetic Vlasov simulation code GKV to run efficiently on peta-scale supercomputers. A key numerical technique is the parallel Fast Fourier Transform (FFT) required for parallel spectral calculations, where masking the cost of inter-node transpose communications is essential to improve strong scaling. To mask communication costs, computation-communication overlap techniques are applied to the FFTs and transposes with the help of hybrid parallelization combining the message passing interface and open multi-processing. Integrated overlaps covering the whole spectral calculation procedure show better scaling than simple overlaps of FFTs and transposes. The masking of communication costs significantly improves the strong scaling of the GKV code and provides substantial speed-up toward multi-scale turbulence simulations. (author)
Computer codes for the calculation of vibrations in machines and structures
After an introductory paper on the typical requirements to be met by vibration calculations, the first two sections of the conference papers present universal as well as specific finite-element codes tailored to solve individual problems. For the calculation of dynamic processes, the method of multi-component systems, which takes into account rigid bodies or partial structures as well as linking and joining elements, is now increasingly applied in addition to finite elements. This method, too, is explained with reference to universal computer codes and to special versions. In mechanical engineering, rotary vibrations are a major problem, and under this topic the conference papers deal exclusively with codes that also take into account special effects such as electromechanical coupling, non-linearities in clutches, etc. (orig./HP)
A compilation of structural property data for computer impact calculation (3/5)
The paper describes structural property data for computer impact calculations of nuclear fuel shipping casks. Data for four kinds of materials, mild steel, stainless steel, lead and wood, are compiled. These materials are the main structural elements of shipping casks. Structural data such as the coefficient of thermal expansion, the modulus of longitudinal elasticity, the modulus of transverse elasticity, the Poisson's ratio and stress-strain relationships have been tabulated against temperature or strain rate. This volume 3 contains the structural property data of stainless steel. (author)
Algorithm and computer code for calculating the swelling of fuel elements with ceramic fuel
The algorithm and the OVERAT program, intended for calculating the stress-strain state of a cylindrical, axially symmetric fuel element with ceramic fuel and a thin-walled shell, are described. Calculations account for creep deformation, fuel swelling, and coolant and gas pressures in the axial cavity. At each moment of time the stresses and strains in the shell, as well as the spatial (radial) dependence of fuel swelling, are calculated. Fuel swelling is determined on the basis of a theoretical model in which gas swelling is related to the formation and development of intergrain porosity only. Reactor operation at constant power, with time-invariant temperature and energy-release distributions in the fuel element core rod, is considered. The processes taking place in a fuel element are described by a stiff system of ordinary first-order differential equations, which is solved by the Gear method. The OVERAT program is written in FORTRAN and was debugged on a BESM-6 computer. The results of test calculations of the stress-strain state and swelling of a fuel element with a hollow UO2 rod in a molybdenum shell are presented. It is pointed out that the described program, in combination with other programs, can be used for investigating the serviceability of fuel elements of various reactor types
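Gear-type methods handle stiff systems with implicit multistep formulas. As a minimal single-step stand-in, the backward Euler sketch below stays stable on a stiff linear test equation where explicit Euler would blow up; the equation and rate constant are illustrative, not the OVERAT model:

```python
def backward_euler_linear(k, y0, t_end, n_steps):
    """Integrate y' = -k*y implicitly: y_{n+1} = y_n / (1 + k*h).
    Unconditionally stable for k > 0, unlike explicit Euler, which
    diverges whenever k*h > 2."""
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y = y / (1.0 + k * h)   # implicit update, solvable in closed form here
    return y

# Stiff case: k*h = 100 per step, yet the solution stays bounded and decays.
y_stiff = backward_euler_linear(k=1.0e4, y0=1.0, t_end=1.0, n_steps=100)

# Mild case with small steps: the result approaches the exact exp(-1).
y_mild = backward_euler_linear(k=1.0, y0=1.0, t_end=1.0, n_steps=100000)
```

For a nonlinear stiff system like the fuel-element equations, each implicit step would instead require a Newton solve; Gear's method additionally varies the order and step size automatically.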
Analysis of shielding calculation methods for 16- and 64-slice computed tomography facilities
Moreno, C; Cenizo, E; Bodineau, C; Mateo, B; Ortega, E M, E-mail: c_morenosaiz@yahoo.e [Servicio de Radiofísica Hospitalaria, Hospital Regional Universitario Carlos Haya, Malaga (Spain)]
2010-09-15
The new multislice computed tomography (CT) machines require some new methods of shielding calculation, which need to be analysed. NCRP Report No. 147 proposes three shielding calculation methods based on the following dosimetric parameters: the weighted CT dose index for the peripheral axis (CTDIw,per), the dose-length product (DLP) and isodose maps. A survey of these three methods has been carried out. For this analysis, we have used measured values of the dosimetric quantities involved and also those provided by the manufacturer, making a comparison between the results obtained. The barrier thicknesses required when setting up either of two different multislice CT instruments, a Philips Brilliance 16 or a Philips Brilliance 64, in the same room are also compared. Shielding calculation from isodose maps provides more reliable results than the other two methods, since it is the only method that takes the actual scattered radiation distribution into account. It is concluded, therefore, that the most suitable method for calculating the barrier thicknesses of a CT facility is the one based on isodose maps. This study also shows that for different multislice CT machines the barrier thicknesses do not necessarily grow as the number of slices increases, because of the strong dependence on the technique used in CT protocols for different anatomical regions.
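Barrier thickness in NCRP-style shielding calculations is commonly obtained by inverting a fitted broad-beam transmission curve, e.g. the Archer model B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]^(-1/gamma). The sketch below inverts it in closed form for a required transmission factor; the fit parameters are placeholders for illustration, not published values for any barrier material:

```python
import math

def archer_transmission(x, a, b, g):
    """Broad-beam transmission through thickness x (Archer three-parameter fit)."""
    return ((1.0 + b / a) * math.exp(a * g * x) - b / a) ** (-1.0 / g)

def barrier_thickness(target_b, a, b, g):
    """Thickness giving transmission target_b, from the closed-form inverse
    of the Archer model."""
    return math.log((target_b ** (-g) + b / a) / (1.0 + b / a)) / (a * g)

# Placeholder fit parameters (per-mm units assumed, purely illustrative).
a, b, g = 2.2, 5.0, 0.6
x_req = barrier_thickness(1.0e-3, a, b, g)   # mm needed for B = 1/1000
```

A useful self-check is the round trip: plugging the computed thickness back into the transmission model must reproduce the target factor, and a smaller target transmission must demand a thicker barrier.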
Calculation of the properties of digital mammograms using a computer simulation
A Monte Carlo computer model of mammography has been developed to study and optimise the performance of digital mammographic systems. The program uses high-resolution voxel phantoms to model the breast, which simulate the adipose and fibro-glandular tissues, Cooper's ligaments, ducts and skin in three dimensions. The model calculates the dose to each tissue, as well as quantities such as the energy imparted to image pixels, the noise per image pixel and scatter-to-primary (S/P) ratios. It allows studies of the dependence of image properties on breast structure and on position within the image. The program has been calibrated by calculating and measuring the pixel values and noise for a digital mammographic system. The thicknesses of two components of this system were unknown, and were adjusted to obtain a good agreement between measurement and calculation. The utility of the program is demonstrated with calculations of the variation of the S/P ratio with and without a grid, and of the image contrast across the image of a 50-mm-thick breast phantom. (authors)
Monteray Mark-I: Computer program (PC-version) for shielding calculation with Monte Carlo method
A computer program for gamma-ray shielding calculations using the Monte Carlo method has been developed. The program is written in the WATFOR77 language. MONTERAY MARK-I was originally developed by James Wood; the program was modified by the authors so that the modified version is easily executed. Applying the Monte Carlo method, the program follows gamma photon transport in an infinite planar shield of various thicknesses. A photon is followed until it escapes from the shield or its energy falls below the cut-off energy. The pair production process is treated as pure absorption, i.e. the annihilation photons generated in the process are neglected in the calculation. The output data calculated by the program are the total albedo, build-up factor, and photon spectra. The calculated build-up factors for slab lead and water media with a 6 MeV parallel-beam gamma source agree with published data. Hence the program is adequate as a shielding design tool for studying gamma radiation transport in various media
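The core of such a slab calculation can be sketched by sampling exponential free paths and counting photons that cross the slab. This minimal version models absorption only (no scattering, hence no build-up), so the transmitted fraction should reproduce exp(-mu*t); the attenuation coefficient is an arbitrary illustrative value:

```python
import math
import random

def transmitted_fraction(mu, thickness, n_photons, seed=0):
    """Monte Carlo estimate of uncollided transmission through a slab:
    a photon whose sampled free path exceeds the slab thickness escapes."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_photons):
        # Exponential free-path sampling; 1 - U keeps the argument in (0, 1].
        free_path = -math.log(1.0 - rng.random()) / mu
        if free_path > thickness:
            escaped += 1
    return escaped / n_photons

# mu * t = 1, so the analytic answer is exp(-1) ~ 0.368 up to statistical noise.
frac = transmitted_fraction(mu=0.5, thickness=2.0, n_photons=200_000)
```

A full code like MONTERAY also samples interaction types and scattering angles at each collision, which is exactly what makes the transmitted flux exceed this uncollided estimate and yields the build-up factor.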
Yamaguchi, Kizashi [Institute for Nano Science Design Center, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan and TOYOTA Physical and Chemical Research Institute, Nagakute, Aichi, 480-1192 (Japan); Nishihara, Satomichi; Saito, Toru; Yamanaka, Shusuke; Kitagawa, Yasutaka; Kawakami, Takashi; Yamada, Satoru; Isobe, Hiroshi; Okumura, Mitsutaka [Department of Chemistry, Graduate School of Science, Osaka University, 1-1 Machikaneyama, Toyonaka, Osaka 560-0043 (Japan)
2015-01-22
First-principles calculations of effective exchange integrals (J) in the Heisenberg model for diradical species were performed by both symmetry-adapted (SA) multi-reference (MR) and broken-symmetry (BS) single-reference (SR) methods. Mukherjee-type (Mk) state-specific (SS) MR coupled-cluster (CC) calculations using natural orbital (NO) references of ROHF, UHF, UDFT and CASSCF solutions were carried out to elucidate J values for di- and poly-radical species. Spin-unrestricted Hartree-Fock (UHF) based coupled-cluster (CC) computations were also performed for these species. Comparison between UHF-NO(UNO)-MkMRCC and BS UHF-CC computational results indicated that spin contamination of the UHF-CC solutions still remains at the SD level. In order to eliminate the spin contamination, an approximate spin-projection (AP) scheme was applied to UCC, and the AP procedure indeed corrected the error to yield good agreement with MkMRCC in energy. CC doubles with spin-unrestricted Brueckner orbitals (UBD) was furthermore employed for these species, showing that the spin contamination involved in UHF solutions is largely suppressed, so that the AP scheme for UBCCD easily removed the rest of the spin contamination. We also performed spin-unrestricted pure- and hybrid-density functional theory (UDFT) calculations of diradical and polyradical species. Three different computational schemes for the total spin angular momenta were examined for the AP correction of hybrid (H) UDFT. HUDFT calculations followed by AP, HUDFT(AP), yielded S-T gaps that were qualitatively in good agreement with those of MkMRCCSD, UHF-CC(AP) and UB-CC(AP). Thus a systematic comparison among MkMRCCSD, UCC(AP), UBD(AP) and UDFT(AP) was performed concerning the first-principles calculation of J values in di- and poly-radical species. It was found that the BS (AP) methods reproduce MkMRCCSD results, indicating their applicability to large exchange-coupled systems.
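The approximate spin-projection correction referred to above is commonly evaluated with Yamaguchi's formula, J = (E_BS - E_HS) / (<S^2>_HS - <S^2>_BS). The sketch below applies it to invented energies and spin expectation values, purely for illustration; they are not results from the study:

```python
def exchange_integral_ap(e_bs, e_hs, s2_bs, s2_hs):
    """Effective exchange integral J (Heisenberg model) from broken-symmetry
    (BS) and high-spin (HS) energies via approximate spin projection (AP)."""
    return (e_bs - e_hs) / (s2_hs - s2_bs)

HARTREE_TO_CM1 = 219474.63          # conversion factor for reporting J

# Invented diradical example: energies in hartree, <S^2> values dimensionless.
j = exchange_integral_ap(e_bs=-150.2505, e_hs=-150.2500,
                         s2_bs=1.02, s2_hs=2.01)
j_cm1 = j * HARTREE_TO_CM1          # negative J: antiferromagnetic coupling
```

Dividing by the difference of the computed <S^2> values, rather than by the ideal value of 2 for a diradical, is what removes the residual spin contamination of the BS determinant from the estimate.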
A Geometric Computational Model for Calculation of Longwall Face Effect on Gate Roadways
Mohammadi, Hamid; Ebrahimi Farsangi, Mohammad Ali; Jalalifar, Hossein; Ahmadi, Ali Reza
2016-01-01
In this paper a geometric computational model (GCM) has been developed for calculating the effect of the longwall face on the extension of the excavation-damaged zone (EDZ) above the gate roadways (main and tail gates), considering the advance longwall mining method. In this model, the stability of the gate roadways is investigated based on loading effects due to the EDZ and the caving zone (CZ) above the longwall face, which can extend the EDZ size. The structure of the GCM depends on four important factors: (1) geomechanical properties of the hanging wall, (2) dip and thickness of the coal seam, (3) CZ characteristics, and (4) pillar width. The investigations demonstrated that the extension of the EDZ is a function of pillar width. Considering the effect of pillar width, new mathematical relationships were presented to calculate the face influence coefficient and the characteristics of the extended EDZ. Furthermore, taking the GCM into account, a computational algorithm for stability analysis of gate roadways was suggested. Validation was carried out through instrumentation and monitoring results of a longwall face at the Parvade-2 coal mine in Tabas, Iran, demonstrating good agreement between the new model and measured results. Finally, a sensitivity analysis was carried out on the effect of pillar width, the bearing capacity of the support system and the coal seam dip.
Ablinger, J.; Behring, A.; Blümlein, J.; De Freitas, A.; von Manteuffel, A.; Schneider, C.
2016-05-01
Honea, R.B.; Petrich, C.H.; Wilson, D.L.; Dillard, C.A.; Durfee, R.C.; Faber, J.A.
1979-04-01
This report documents methodology and computer software developed by Energy Division and Computer Sciences Division personnel at Oak Ridge National Laboratory (ORNL). The software is designed to quantify and automatically map geologic and other cost-related parameters as required to estimate coal mining costs. The software complements the detailed coal production cost models for both underground and surface mines which have been developed for the Electric Power Research Institute (EPRI) by NUS Corp. These models require input variables such as coal seam thickness, coal seam depth, surface slope, etc., to estimate mining costs. This report provides a general overview of the software and methodology developed by ORNL to calculate some of these parameters, along with sample map output which indicates the geographical distribution of these geologic characteristics. A detailed user guide for implementing the software has been prepared and is included in the appendixes. (Sample input data which may be used to verify the operation of the software are available from ORNL.) Also included is a brief review of coal production, coal recovery, and coal resource calculation studies. This system will be useful to utilities and coal mine operators alike in estimating costs through comprehensive assessment before mining takes place.
The Spallation Neutron Source (SNS) will provide an intense source of low-energy neutrons for experimental use. The low-energy neutrons are produced by the interaction of a high-energy (1.0 GeV) proton beam on a mercury (Hg) target and slowed down in liquid hydrogen or light water moderators. Computer codes and computational techniques are being benchmarked against relevant experimental data to validate and verify the tools being used to predict the performance of the SNS. The LAHET Code System (LCS), which includes LAHET, HTAPE, and HMCNP (a modified version of MCNP version 3b), has been applied to the analysis of experiments that were conducted in the Alternating Gradient Synchrotron (AGS) facility at Brookhaven National Laboratory (BNL). In the AGS experiments, foils of various materials were placed around a mercury-filled stainless steel cylinder, which was bombarded with protons at 1.6 GeV. Neutrons created in the mercury target activated the foils. Activities of the relevant isotopes were accurately measured and compared with calculated predictions. Measurements at BNL were provided in part by collaborating scientists from JAERI as part of the AGS Spallation Target Experiment (ASTE) collaboration. To date, calculations have shown good agreement with measurements.
DIST: a computer code system for calculation of distribution ratios of solutes in the purex system
Tachimori, Shoichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1996-05-01
Purex is a solvent extraction process for reprocessing spent nuclear fuel using tri-n-butyl phosphate (TBP). A computer code system, DIST, has been developed to calculate distribution ratios for the major solutes in the Purex process. The DIST system is composed of databases storing experimental distribution data, DISTEX for U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO{sub 3} and HNO{sub 2}, and DISTEXFP for Zr(IV) and Tc(VII), together with calculation programs: DIST1 for the DISTEX solutes and DIST2 for Zr(IV) and Tc(VII). DIST1 and DIST2 determine, by best-fit procedures, the most appropriate values of the many parameters of empirical equations, using the DISTEX data that fulfill the assigned conditions, and apply them to calculate distribution ratios of the respective solutes. Approximately 5,000 data points are stored in DISTEX and DISTEXFP. The present report describes: (1) specific features of the DIST1 and DIST2 codes and examples of calculations; (2) the databases DISTEX and DISTEXFP and the program DISTIN, which manages the data in DISTEX and DISTEXFP through input, search, correction and deletion functions; and, in the annex, (3) the programs DIST1 and DIST2 and the figure-drawing programs DIST1G and DIST2G; (4) a user manual for DISTIN; (5) the source programs of DIST1 and DIST2; and (6) the experimental data stored in DISTEX and DISTEXFP. (author). 122 refs.
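The abstract above describes best-fit determination of parameters in empirical distribution-ratio equations. A minimal sketch of that workflow, using a hypothetical power-law model (log10 D = a + b·log10[HNO3], which is NOT the actual DIST1 equation form) and invented illustrative data, could look like this:

```python
import numpy as np

# Hypothetical sketch, not the actual DIST1 empirical equations: fit the
# parameters of a simple power-law distribution-ratio model,
#   log10 D = a + b * log10 [HNO3],
# to stored experimental data by linear least squares.
hno3 = np.array([0.5, 1.0, 2.0, 3.0, 4.0])    # mol/L, illustrative values
d_u6 = np.array([1.2, 2.9, 6.5, 10.8, 15.0])  # U(VI) distribution ratios, illustrative

# Design matrix for the linear fit in log-log space.
A = np.vstack([np.ones_like(hno3), np.log10(hno3)]).T
(a, b), *_ = np.linalg.lstsq(A, np.log10(d_u6), rcond=None)

def dist_ratio(c_hno3: float) -> float:
    """Predict the distribution ratio D at a given HNO3 concentration (sketch)."""
    return 10.0 ** (a + b * np.log10(c_hno3))
```

Once fitted, the model interpolates D for any acid concentration inside the data range, which is the role the parameterized equations play in DIST1 and DIST2.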
Park, Peter C. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States); Fox, Tim [Varian Medical Systems, Palo Alto, California (United States); Zhu, X. Ronald [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Dong, Lei [Scripps Proton Therapy Center, San Diego, California (United States); Dhabaan, Anees, E-mail: anees.dhabaan@emory.edu [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States)
2015-03-15
Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
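The core idea of the correction, mapping HU from an artifact-free slice via paired MRI intensities, can be sketched as a simple intensity lookup. This is an illustration of the slice-mapping concept only; the authors' actual pipeline uses 3D/2D deformable registration and a more elaborate prediction model, and all data below are synthetic:

```python
import numpy as np

# Illustrative sketch of the slice-mapping idea: learn a lookup from MRI
# intensity bins to a representative HU on an artifact-free slice, then
# predict HU for the corrupted region of a neighbouring slice. Synthetic
# paired data stand in for the coregistered MRI/CT slices.
rng = np.random.default_rng(0)
mri_clean = rng.integers(0, 256, 5000)                        # MRI intensities, clean slice
hu_clean = 40.0 + 3.0 * mri_clean + rng.normal(0, 5, 5000)    # paired HU values (synthetic)

n_bins = 32
width = 256 // n_bins
bins = np.minimum(mri_clean // width, n_bins - 1)
lut = np.array([np.median(hu_clean[bins == b]) for b in range(n_bins)])

def predict_hu(mri_vals):
    """Map MRI intensities in a corrupted region to estimated HU (sketch)."""
    b = np.minimum(np.asarray(mri_vals) // width, n_bins - 1)
    return lut[b]
```

The median per bin makes the lookup robust to noise, loosely analogous to replacing artifact-corrupted HU with values consistent with nearby uncorrupted anatomy.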
Dekker, C. M.; Sliggers, C. J.
To spur on quality assurance for models that calculate air pollution, quality criteria for such models have been formulated. By satisfying these criteria the developers of these models and producers of the software packages in this field can assure and account for the quality of their products. In this way critics and users of such (computer) models can gain a clear understanding of the quality of the model. Quality criteria have been formulated for the development of mathematical models, for their programming—including user-friendliness, and for the after-sales service, which is part of the distribution of such software packages. The criteria have been introduced into national and international frameworks to obtain standardization.
Electronic Structure Calculations and Adaptation Scheme in Multi-core Computing Environments
Seshagiri, Lakshminarasimhan; Sosonkina, Masha; Zhang, Zhao
2009-05-20
Multi-core processing environments have become the norm in generic computing and are being considered for adding an extra dimension to the execution of any application. The T2 Niagara processor is a distinctive environment, consisting of eight cores, each capable of running eight threads simultaneously. Applications like the General Atomic and Molecular Electronic Structure System (GAMESS), used for ab initio molecular quantum chemistry calculations, can be good indicators of the performance of such machines and a guideline for both hardware designers and application programmers. In this paper we benchmark GAMESS performance on a T2 Niagara processor for a couple of molecules. We also show the suitability of using a middleware-based adaptation algorithm with GAMESS in such a multi-core environment.
The SMART-IST computer code models radionuclide behaviour in CANDU reactor containments during postulated accidents. It calculates nuclide concentrations in various parts of containment and releases of nuclides from containment to the atmosphere. The intended application of SMART-IST is safety and licensing analyses of public dose resulting from the releases of nuclides. SMART-IST has been developed and validated in accordance with the CSA N286.7 quality assurance standard, under the sponsorship of the Industry Standard Toolset (IST) partners, consisting of AECL and the Canadian nuclear utilities OPG, Bruce Power, NB Power and Hydro-Quebec. This paper presents an overview of the SMART-IST code, including its theoretical framework and models, and presents typical examples of code predictions. (author)
Ablinger, J.; Schneider, C. [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Behring, A.; Bluemlein, J.; Freitas, A. de [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Manteuffel, A. von [Mainz Univ. (Germany). Inst. fuer Physik
2015-09-15
Three loop ladder and V-topology diagrams contributing to the massive operator matrix element A{sub Qg} are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.
Strange, D. L.; Bander, T. J.
1981-04-01
The MILDOS computer code estimates impacts from radioactive emissions from uranium milling facilities. These impacts are presented as dose commitments to individuals and to the regional population within an 80 km radius of the facility. Only airborne releases of radioactive materials are considered: releases to surface water and to groundwater are not addressed in MILDOS. The code is multi-purposed and can be used to evaluate population doses for NEPA assessments, maximum individual doses for predictive 40 CFR 190 compliance evaluations, or maximum offsite air concentrations for predictive evaluations of 10 CFR 20 compliance. Emissions of radioactive materials from fixed point source locations and from area sources are modeled using a sector-averaged Gaussian plume dispersion model, which utilizes user-provided wind frequency data. Mechanisms such as deposition of particulates, resuspension, radioactive decay and ingrowth of daughter radionuclides are included in the transport model. Annual average air concentrations are computed, from which subsequent impacts to humans through various pathways are computed. Ground surface concentrations are estimated from deposition buildup and ingrowth of radioactive daughters, and are modified by radioactive decay, weathering and other environmental processes. MILDOS allows the user to vary the emission sources as a step function of time by adjusting the emission rates, which includes shutting them off completely; thus the results of a computer run can reflect changing processes throughout the facility's operational lifetime. The pathways considered for individual dose commitments and population impacts are inhalation, external exposure from ground concentrations, external exposure from cloud immersion, ingestion of vegetables, ingestion of meat, and ingestion of milk. Dose commitments are calculated using dose conversion factors, which are ultimately based
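The dispersion step described above uses a sector-averaged Gaussian plume. A minimal sketch of the standard 16-sector ground-level form (the constant 2.032 is sqrt(2/π)·16/(2π)) is given below; it takes the vertical dispersion parameter sigma-z as a direct input rather than deriving it from stability class and distance, which a full code like MILDOS would do:

```python
import math

# Sector-averaged Gaussian plume sketch for 16 wind sectors, a standard
# form of the dispersion model the abstract names. Inputs are simplified:
# sigma_z would normally come from a stability-class parameterization.
def chi_over_q(x_m: float, u_ms: float, freq: float, h_m: float, sigma_z_m: float) -> float:
    """Ground-level, sector-averaged concentration per unit release rate (s/m^3).

    x_m: downwind distance (m); u_ms: wind speed (m/s); freq: frequency of
    wind blowing into this sector; h_m: effective release height (m);
    sigma_z_m: vertical dispersion parameter at x_m (m).
    """
    # 2.032 = sqrt(2/pi) * 16 / (2*pi): sector averaging over a 22.5-degree sector
    return 2.032 * freq / (x_m * sigma_z_m * u_ms) * math.exp(-h_m ** 2 / (2.0 * sigma_z_m ** 2))
```

For a ground-level release (h = 0) the exponential term is 1, and elevated releases reduce the ground-level concentration, as expected.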
Measurements and computer calculations of pulverized-coal combustion at Asnaes Power Station 4
Biede, O.; Swane Lund, J.
1996-07-01
Measurements have been performed on a front-fired 270 MW (net electrical output) pulverized-coal utility furnace with 24 swirl-stabilized burners, placed in four horizontal rows. Apart from continuous operational measurements, special measurements were performed as follows. At one horizontal level above the upper burner row, gas temperatures were measured by an acoustic pyrometer. At the same level and at the level of the second upper burner row, irradiation to the walls was measured in ten positions by means of specially designed 2π thermal radiation meters. Fly-ash was collected and analysed for unburned carbon. The coal size distribution to each individual burner was measured. Eight different cases were measured. On a Colombian coal, three cases with different oxygen concentrations in the exit gas were measured at a load of 260 MW, and in addition, measurements were performed at reduced loads of 215 MW and 130 MW. On a South African coal blend, measurements were performed at a load of 260 MW with three different oxygen exit concentrations. Each case has been simulated by a three-dimensional numerical computer code for the prediction of the distribution of gas temperatures, species concentrations and thermal radiative net heat absorption on the furnace walls. Comparisons between measured and calculated gas temperatures, irradiation and unburned carbon are made. Measured results among the cases differ significantly, and the computational results agree well with the measured results. (au)
Hybrid approach for fast occlusion processing in computer-generated hologram calculation.
Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce
2016-07-10
A hybrid approach for fast occlusion processing in computer-generated hologram calculation is studied in this paper. The proposed method is based on the combination of two commonly used approaches that complement one another: the point-source and wave-field approaches. By using these two approaches together, the proposed method thus takes advantage of both of them. In this method, the 3D scene is first sliced into several depth layers parallel to the hologram plane. Light scattered by the scene is then propagated and shielded from one layer to another using either a point-source or a wave-field approach according to a threshold criterion on the number of points within the layer. Finally, the hologram is obtained by computing the propagation of light from the nearest layer to the hologram plane. Experimental results reveal that the proposed method does not produce any visible artifact and outperforms both the point-source and wave-field approaches. PMID:27409327
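The key decision in the hybrid method above is a per-layer threshold on the number of scene points. A toy sketch of that selection rule follows; the threshold value and the cost models are illustrative placeholders, not the paper's measured timings:

```python
import math

# Sketch of the hybrid selection rule: each depth layer is handled by the
# point-source approach when it contains few scene points and by the
# FFT-based wave-field approach otherwise. Threshold and cost models are
# illustrative placeholders only.
def choose_method(n_points: int, threshold: int = 1000) -> str:
    return "point-source" if n_points < threshold else "wave-field"

def estimated_cost(n_points: int, layer_pixels: int, threshold: int = 1000) -> float:
    """Rough per-layer work estimate under the chosen approach."""
    if choose_method(n_points, threshold) == "point-source":
        return float(n_points) * layer_pixels        # one spherical wavelet per point
    return layer_pixels * math.log2(layer_pixels)    # one FFT-based propagation per layer
```

The rationale is visible in the cost models: for sparse layers the point-source sum is cheap, while for dense layers a single wave-field propagation beats summing millions of spherical wavelets.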
SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations
Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using GMctdospp (IMPS, Germany), based on the EGSnrc user code. The X-ray spectra and a bowtie filter for the MC simulations were determined to coincide with measurements of the half-value layer (HVL) and the off-center ratio (OCR) profile in air. The MC dose was calibrated from absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images, and organ doses were evaluated from dose volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for the brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for the brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for the lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), mean doses for the kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for the kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses differed considerably from the CTDIvol values. Conclusion: MC dose distributions calculated using patient CT images are useful for evaluating the organ doses absorbed by individual patients.
Ono, T; Araki, F [Faculty of Life Sciences, Kumamoto University, Kumamoto (Japan)
2014-06-01
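The final evaluation step in this kind of study, extracting organ doses from an MC dose grid registered to the patient CT, reduces to masked statistics. A minimal sketch with synthetic values (not data from the study):

```python
import numpy as np

# Minimal sketch of the organ-dose evaluation step: given an MC-calculated
# dose grid aligned with the patient CT and a binary organ mask from the
# contoured CT, mean organ dose and simple dose-volume points follow
# directly. All values here are synthetic.
dose = np.zeros((4, 4, 4))            # mGy, toy dose grid
dose[1:3, 1:3, 1:3] = 20.0            # uniform dose inside the "organ"

organ_mask = np.zeros_like(dose, dtype=bool)
organ_mask[1:3, 1:3, 1:3] = True      # voxels belonging to the organ

mean_organ_dose = dose[organ_mask].mean()        # mean dose to the organ (mGy)
v15 = (dose[organ_mask] >= 15.0).mean()          # fraction of organ volume >= 15 mGy
```

A dose-volume histogram is just this thresholded fraction evaluated over a sweep of dose levels.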
Development of a Korean adult male computational phantom for internal dosimetry calculation
A Korean adult male computational phantom was constructed based on current anthropometric and organ volume data for the average Korean adult male, and was applied to calculate internal photon dosimetry data. The stylised models of the external body, skeleton, and a total of 13 internal organs (brain, gall bladder, heart, kidneys, liver, lungs, pancreas, spleen, stomach, testes, thymus, thyroid and urinary bladder) were redesigned based on the Oak Ridge National Laboratory (ORNL) adult phantom. The trunk height of the Korean phantom was 8.6% less than that of the ORNL adult phantom, and the volumes of all organs decreased by up to 65% (pancreas), except for the brain, gall bladder wall and thymus. The specific absorbed fraction (SAF) was calculated using the Korean phantom and a Monte Carlo code, and compared with values from the ORNL adult phantom. The SAFs of organs in the Korean phantom were overall higher than those from the ORNL adult phantom, caused by the smaller organ volumes and the shorter inter-organ distances in the Korean phantom. The self-SAF was dominantly affected by the difference in organ volume, while the SAF for different source and target organs was affected more by the inter-organ distance than by the organ volume difference. The SAFs of the Korean stylised phantom differ from those of the ORNL phantom by 10-180%. The comparison study of internal dosimetry will be extended to a tomographic phantom and electron sources in the future. (authors)
Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg
2016-02-01
We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
VVER 1000 SBO calculations with pressuriser relief valve stuck open with ASTEC computer code
Highlights: ► We modelled the ASTEC input file for the station blackout (SBO) accident scenario and focused the analyses on the behaviour of core degradation. ► We assumed opening and stuck-open failure of the pressurizer relief valve during the SBO scenario. ► ASTEC v1.3.2 was used as the reference code for the comparison study with the new version of the ASTEC code. - Abstract: The objective of this paper is to present the results obtained from calculations with the ASTEC computer code for the source term evaluation of a specific severe accident transient. The calculations have been performed with the new version of ASTEC. The ASTEC V2 code version is released by the French IRSN (Institut de Radioprotection et de Sûreté Nucléaire) and the Gesellschaft für Anlagen- und Reaktorsicherheit (GRS), Germany. This investigation has been performed in the framework of the SARNET2 project (under the Euratom 7th framework program) by the Institute for Nuclear Research and Nuclear Energy of the Bulgarian Academy of Sciences (INRNE-BAS).
Development of a computer code for shielding calculation in X-ray facilities
The construction of an effective barrier against ionizing radiation in X-ray rooms requires consideration of many variables. The methodology used to specify the thickness of the primary and secondary shielding of a traditional X-ray room considers the following factors: use factor, occupancy factor, distance between the source and the wall, workload, air kerma, and distance between the patient and the receptor. With these data, a computer program was developed to identify these variables and use them in functions obtained through regressions of the graphs provided by NCRP Report 147 (Structural Shielding Design for Medical X-Ray Imaging Facilities) to calculate the shielding of the room walls as well as the walls of the darkroom and adjacent areas. With the methodology in place, the program was validated by comparing results with a base case provided by that report. The thicknesses obtained cover various materials such as steel, wood and concrete. After validation, the program was applied to a real radiographic room; its visual construction was done with the help of software used for modeling indoor and outdoor environments. The barrier-calculation program resulted in a user-friendly tool for planning radiographic rooms that comply with the limits established by CNEN-NN-3:01, published in September 2011.
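The shielding workflow the abstract describes follows the NCRP Report 147 pattern: compute the required barrier transmission B from the design goal, workload, use factor, occupancy factor and distance, then invert an attenuation curve for the thickness. A hedged sketch is below; the Archer-type curve is the standard NCRP 147 fitting form, but the alpha/beta/gamma values used in any real design must come from the report's material- and kVp-specific tables, not from this example:

```python
import math

# Sketch of the NCRP Report 147 two-step calculation. The fit-parameter
# values passed to archer_thickness in practice are tabulated per
# material and beam quality in NCRP 147; none are hard-coded here.
def required_transmission(P: float, d: float, W: float, U: float, T: float) -> float:
    """B = P*d^2 / (W*U*T), with all quantities in consistent NCRP-147 units."""
    return P * d * d / (W * U * T)

def archer_thickness(B: float, alpha: float, beta: float, gamma: float) -> float:
    """Invert the Archer curve B(x) = [(1 + b/a)*exp(a*g*x) - b/a]^(-1/g) for x."""
    r = beta / alpha
    return (1.0 / (alpha * gamma)) * math.log((B ** (-gamma) + r) / (1.0 + r))
```

The inversion is exact: substituting B(x) back into archer_thickness recovers x, which makes the pair easy to unit-test.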
Katz, D.; Cwik, T.; Sterling, T.
1998-01-01
This paper uses the parallel calculation of the radiation integral for examination of performance and compiler issues on a Beowulf-class computer. This type of computer, built from mass-market, commodity, off-the-shelf components, has limited communications performance and therefore also has a limited regime of codes for which it is suitable.
Ballance, Connor
2013-05-01
Over the last couple of decades, a number of advanced non-perturbative approaches such as the R-matrix, TDCC and CCC methods have made great strides in terms of improved target representation and investigating fundamental 2-4 electron problems. However, for the electron-impact excitation of near-neutral species or complicated open-shell atomic systems we are forced to make certain compromises in terms of the atomic structure and/or the number of channels included in the close-coupling expansion of the subsequent scattering calculation. The availability of modern supercomputing architectures with hundreds of thousands of cores, and the emergence of new opportunities through GPU usage, offers one possibility to address some of these issues. To effectively harness this computational power will require significant revision of the existing code structures. I shall discuss some effective strategies within a non-relativistic and relativistic R-matrix framework using the examples detailed below. The goal is to extend existing R-matrix methods from 1-2 thousand close-coupled channels to 10,000 channels. With the construction of the ITER experiment in Cadarache, which will have tungsten plasma-facing components, there is an urgent diagnostic need for the collisional rates of the near-neutral ion stages. In particular, spectroscopic diagnostics of impurity influx require accurate electron-impact excitation and ionisation as well as a good target representation. There have been only a few non-perturbative collisional calculations for this system, and the open-f-shell ion stages provide a daunting challenge even for perturbative approaches. I shall present non-perturbative results for the excitation and ionisation of W3+ and illustrate how these fundamental calculations can be integrated into a meaningful diagnostic for the ITER device. We acknowledge support from DoE fusion.
A computer program written in FORTRAN for calculating the final results of specific surface area analysis based on BET theory is described. Two gases, nitrogen and krypton, were used. A technical description of the measuring apparatus is presented, as well as the theoretical basis of the calculations together with a statistical analysis of the results for uranium compound powders. (author)
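The BET evaluation such a program automates reduces to a linear fit. A sketch of the standard procedure follows, with synthetic adsorption data generated from a known monolayer volume so the fit can be checked by round-trip; the nitrogen cross-section and molar volume are the commonly quoted textbook values:

```python
import numpy as np

# Sketch of the BET evaluation: linearize the isotherm as
#   y = (P/P0) / (v * (1 - P/P0)) = (c-1)/(v_m*c) * (P/P0) + 1/(v_m*c),
# fit slope and intercept, recover the monolayer volume v_m = 1/(slope +
# intercept), and convert to specific surface area. Data are synthetic.
N_A = 6.022e23        # molecules per mol
SIGMA_N2 = 1.62e-19   # m^2, cross-sectional area of an adsorbed N2 molecule
V_MOLAR = 22414.0     # cm^3 (STP) per mol

x = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])    # relative pressures P/P0
vm_true, c_true = 10.0, 100.0                         # cm^3(STP)/g and BET constant (chosen)
v = vm_true * c_true * x / ((1 - x) * (1 + (c_true - 1) * x))  # BET isotherm, adsorbed volume

y = x / (v * (1.0 - x))                               # BET linearization
slope, intercept = np.polyfit(x, y, 1)
v_m = 1.0 / (slope + intercept)                       # recovered monolayer volume
surface_area = v_m / V_MOLAR * N_A * SIGMA_N2         # specific surface area, m^2/g
```

With v_m = 10 cm^3(STP)/g the conversion gives roughly 43.5 m^2 per gram of sample, illustrating the scale of the final result such a program reports.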
Marconi, F.; Salas, M.; Yaeger, L.
1976-01-01
A numerical procedure has been developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second-order accurate finite difference scheme is used to integrate the three-dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine-Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.
A small computer, the EC 1010, is proposed for the calculation of dosimetric parameters of irradiation procedures on β-beam therapeutic units. A specially designed program is intended for the calculation of dosimetric parameters for different methods of moving and static irradiation, taking into account tissue heterogeneity: multifield static irradiation, multizone rotation irradiation, and irradiation using dose-field-forming devices (V-shaped filters, edge blocks, a grid diaphragm). The computation of output parameters for each preset program of irradiation takes no more than 1 min. The use of the EC 1010 computer for the calculation of dosimetric parameters of irradiation procedures makes it possible to reduce calculation time considerably, to avoid possible errors, and to simplify the drawing up of documents
Goc, Roman
2004-09-01
This paper describes m2rc3, a program that calculates Van Vleck second moments for solids with internal rotation of molecules, ions or their structural parts. Only rotations about C3 axes of symmetry are allowed, but up to 15 axes of rotation per crystallographic unit cell are permitted. The program is very useful in interpreting NMR measurements in solids. Program summary: Title of program: m2rc3. Catalogue number: ADUC. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUC. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. License provisions: none. Computers: Cray SV1, Cray T3E-900, PCs. Installation: Poznań Supercomputing and Networking Center (http://www.man.poznan.pl/pcss/public/main/index.html) and Faculty of Physics, A. Mickiewicz University, Poznań, Poland (http://www.amu.edu.pl/welcome.html.en). Operating systems under which the program has been tested: UNICOS ver. 10.0.0.6 on Cray SV1; UNICOS/mk on Cray T3E-900; Windows 98 and Windows XP on PCs. Programming language: FORTRAN 90. No. of lines in distributed program, including test data, etc.: 757. No. of bytes in distributed program, including test data, etc.: 9730. Distribution format: tar.gz. Nature of physical problem: The NMR second moment reflects the strength of the nuclear magnetic dipole-dipole interaction in solids. This value can be extracted from the appropriate experiment and can be calculated on the basis of the Van Vleck formula. The internal rotation of molecules or their parts averages this interaction, decreasing the measured value of the NMR second moment. The analysis of internal dynamics based on NMR second moment measurements is as follows. The second moment is measured at different temperatures. On the other hand, it is also calculated for different models and frequencies of this motion. Comparison of experimental and calculated values permits the building of the most probable model of internal dynamics in the studied material. The program described
The large-scale construction of atomic power stations results in a need for trainers to instruct power-station personnel. The present work considers one problem of developing training computer software: the development of a high-speed algorithm for calculating the neutron field after a control-rod (CR) shift by the operator. The case considered here is that in which training units are developed on the basis of small computers of SM-2 type, which fall significantly short of the BESM-6 and EC-type computers used for the design calculations in terms of speed and memory capacity. Depending on the apparatus for solving the criticality problem, in a two-dimensional single-group approximation, the physical-calculation programs require ∼ 1 min of machine time on a BESM-6 computer, which translates to ∼ 10 min on an SM-2 machine. In practice, this time is even longer, since ultimately it is necessary to determine not the effective multiplication factor K_ef, but rather the local perturbations of the emergency-control (EC) system (to reach criticality) and the change in the neutron field on shifting the CR and EC rods. This long time means that it is very problematic to use physical-calculation programs to work in dialog mode with a computer. The algorithm presented below allows the neutron field following a shift of the CR and EC rods to be calculated in a few seconds on a BESM-6 computer (tens of seconds on an SM-2 machine). This high speed is achieved as a result of the preliminary calculation of the influence function (IF) for each CR. The IF may be calculated at high speed on a computer; it is then stored in the external memory (EM) and, where necessary, used as the initial information
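The influence-function approach reduces the rod-shift update to a superposition of precomputed field perturbations, which is why it runs in seconds rather than minutes. A minimal sketch of the idea (the function names, field discretization and numerical values below are illustrative, not from the original SM-2/BESM-6 programs):

```python
def field_after_rod_shift(base_field, influence, shifts):
    """Fast neutron-field update by superposing precomputed influence
    functions (IF): one stored field perturbation per control rod,
    scaled by that rod's shift. Schematic only, not the original code."""
    field = list(base_field)
    for rod, delta in shifts.items():
        for k, infl in enumerate(influence[rod]):
            field[k] += delta * infl
    return field

# Illustrative 4-node field and two rods with precomputed IFs:
base = [1.0, 1.0, 1.0, 1.0]
infl = {"CR1": [0.2, 0.1, 0.0, -0.1], "CR2": [0.0, 0.05, 0.1, 0.05]}
new_field = field_after_rod_shift(base, infl, {"CR1": -1.0, "CR2": 2.0})
```

The expensive criticality solve is done once per rod to tabulate the IFs; each interactive update is then just a weighted sum.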
Interpolation method for calculation of computed tomography dose from angular varying tube current
The scope and magnitude of radiation dose from computed tomography (CT) examinations has led to increased scrutiny and a focus on accurate dose tracking. The use of tube current modulation (TCM) complicates dose tracking by generating unique scans that are specific to the patient. Three methods of estimating the radiation dose from a CT examination that uses TCM are compared: using the average current for the entire scan, using the average current for each slice of the scan, and using an estimate of the angular variation of the dose contribution. To determine the impact of TCM on the radiation dose received, a set of angular weighting functions for each tissue of the body is derived by fitting a function to the relative dose contributions tabulated for the four principal exposure projections. This weighting function is applied to the angular tube-current function to determine the organ dose contributions from a single rotation. Since the angular tube-current function is not typically known, a method for estimating that function is also presented. The organ doses calculated using these three methods are compared to simulations that explicitly include the estimated TCM function. (authors)
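The angular method amounts to integrating the product of a tissue-specific weighting function and the tube-current function over one rotation. A sketch under stated assumptions: the truncated cosine series and all coefficients below are illustrative stand-ins, not the authors' fitted weighting functions.

```python
import math

def angular_weight(theta, c0, c1, c2):
    """Hypothetical weighting function fitted to the four principal
    projections (AP, lateral, PA, lateral): a truncated cosine series."""
    return c0 + c1 * math.cos(theta) + c2 * math.cos(2 * theta)

def organ_dose_per_rotation(tube_current, weight, n=360):
    """Numerically integrate weight(theta) * current(theta) over one
    rotation; both arguments are callables of the tube angle."""
    dtheta = 2 * math.pi / n
    return sum(weight(k * dtheta) * tube_current(k * dtheta)
               for k in range(n)) * dtheta

# With a constant current the integral reduces to 2*pi*c0*I:
flat = organ_dose_per_rotation(lambda t: 100.0,
                               lambda t: angular_weight(t, 0.25, 0.1, 0.05))
```

Replacing the constant-current lambda with an estimated TCM profile gives the angular-variation estimate; using the scan- or slice-averaged current recovers the two simpler methods.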
Zhishan Gao; Meimei Kong; Rihong Zhu; Lei Chen
2007-01-01
Interferometric optical testing using a computer-generated hologram (CGH) has provided an approach to highly accurate measurement of aspheric surfaces. While designing CGH null correctors, we should make them with as small an aperture and as low a spatial frequency as possible, and with no zero slope of phase except at the center, for the sake of ensuring low risk of substrate figure error and feasibility of fabrication. On the basis of classical optics, a set of equations for calculating the phase function of the CGH is obtained. These equations lead us to find the dependence of the aperture and spatial frequency on the axial distance from the tested aspheric surface for the CGH. We also simulate the optical path difference error of the CGH relative to the accuracy of controlling the laser spot during fabrication. Meanwhile, we discuss the constraints used to avoid zero slope of phase except at the center and give a design result of the CGH for the tested aspheric surface. The results ensure the feasibility of designing a useful CGH to test aspheric surfaces.
A mathematical model based on three-group theory for the theoretical calculation, by computer, of the calibration curves of neutron soil moisture probes with high-efficiency counters is described. Methods for experimental correction of the mathematical model are discussed and proposed. The computer programme described allows the calibration of neutron probes with high- or low-efficiency counters and central or end geometry, with or without linearizing of the calibration curve. The use of two calculation variants and printing of output data gives the possibility not only of calibration, but also of other research. The separate data inputs for soil and probe temperature allow analysis of the temperature influence. The computer programme and calculation examples are given. (author)
A computer program, HERMES, that provides the quantities usually needed in nuclear level density calculations has been developed. The applied model is the standard Fermi Gas Model (FGM), in which pairing correlations and shell effects are appropriately taken into account. The effects of additional nuclear structure properties, together with their inclusion in the computer program, are also considered. Using HERMES, a level density parameter systematics has been constructed for the mass range 41 ≤ A ≤ 253. (author)
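The core quantity such codes evaluate can be sketched with the textbook back-shifted Fermi-gas formula; the abstract does not list HERMES's actual expressions, so the formula, parameter names and values below are generic illustrations, not the program's implementation.

```python
import math

def fgm_level_density(E, a, delta, sigma=1.0):
    """Textbook back-shifted Fermi-gas level density (per MeV):
    rho(U) = exp(2*sqrt(a*U)) / (12*sqrt(2)*sigma*a**0.25*U**1.25),
    with U = E - delta the pairing-shifted excitation energy.
    Parameter names (a, delta, sigma) are illustrative, not HERMES's."""
    U = E - delta
    if U <= 0:
        return 0.0
    return math.exp(2 * math.sqrt(a * U)) / (
        12 * math.sqrt(2) * sigma * a ** 0.25 * U ** 1.25)

# The level density rises steeply with excitation energy:
rho_low = fgm_level_density(5.0, a=10.0, delta=1.0)
rho_high = fgm_level_density(10.0, a=10.0, delta=1.0)
```

The pairing shift delta enters here as the back-shift of the excitation energy; shell effects are typically folded into an energy-dependent level density parameter a.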
Modifications of the SEPHIS computer code for calculating the Purex solvent extraction system
The SEPHIS computer program was developed to simulate countercurrent solvent extraction. This report describes modifications to the program which result in an improved fit to experimental data, a decrease in computer storage requirements, and a decrease in execution time. Methods for applying the computer program to practical solvent extraction problems are explained
Plutonium Usage and Management in PWR and Computing and Physical Methods to Calculate Pu
Main limitations due to the enhancement of the plutonium content are related to: (1) The coolant void effect. As the spectrum becomes faster, the neutron flux in the thermal region tends towards zero and is concentrated in the region from 10 keV to 1 MeV. Thus, all captures by Pu240 and Pu242 in the thermal and epithermal resonances disappear, and the Pu240 and Pu242 contributions to the void effect become positive. The higher the Pu content and the poorer the Pu quality, the larger the void effect. (2) The core control in nominal or transient conditions. Pu enrichment leads to a decrease in βeff and in the efficiency of soluble boron and control rods. Also, the Doppler effect tends to decrease when Pu replaces U, so that in case of transients the core could diverge again if the control is not effective enough. As for the voiding effect, the plutonium degradation and the Pu240 and Pu242 accumulation after multiple recycling lead to spectrum hardening and to a decrease in control. One solution would be to use enriched boron in the soluble boron and shutdown rods. In this paper I discuss and show advanced computing and physical methods to calculate Pu inside nuclear reactors and the glovebox, and the different solutions to be used to overcome the difficulties that affect safety parameters and reactor performance. I also analyse the consequences of plutonium management on the whole fuel cycle, such as raw-material savings and the fraction of nuclear electric power involved in Pu management, all through two types of scenario: one involving a low fraction of the nuclear park dedicated to plutonium management, the other involving a dilution of the plutonium across the whole nuclear park. (author)
The RAP-3A computer code is designed for calculating the main steady-state thermo-hydraulic parameters of multirod fuel clusters with liquid-metal cooling. The programme provides a double-precision computation of temperature and axial enthalpy distributions, pressure losses, and axial heat flux distributions in fuel clusters before boiling conditions occur. Physical and mathematical models as well as a sample problem are presented. The code is written in FORTRAN-4 and runs on an IBM-370/135 computer.
Fast calculation of computer-generated-hologram on AMD HD5000 series GPU and OpenCL
Shimobaba, Tomoyoshi; Masuda, Nobuyuki; Ichihashi, Yasuyuki; Takada, Naoki
2010-01-01
In this paper, we report fast calculation of a computer-generated hologram using a new architecture of the HD5000 series GPU (RV870) made by AMD and its new software development environment, OpenCL. Using an RV870 GPU and OpenCL, we can calculate a CGH of 1,920 × 1,024 resolution from a 3-D object consisting of 1,024 points in 30 milliseconds. This is approximately two times faster than a GPU made by NVIDIA.
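A point-cloud CGH computation superposes a spherical wave from each object point at every hologram pixel; the GPU kernel parallelizes this sum over pixels. The paper's OpenCL kernel is not reproduced in the abstract, so the following is a CPU-side NumPy sketch of the same sum, with illustrative wavelength, pixel pitch and object points:

```python
import numpy as np

def cgh_amplitude(points, wavelength, nx, ny, pitch):
    """Superpose unit-amplitude spherical waves from a 3-D point cloud
    on the hologram plane z = 0. points: (N, 3) array of (x, y, z) in
    metres. A GPU kernel would evaluate the same sum per pixel."""
    ix = (np.arange(nx) - nx / 2) * pitch
    iy = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(ix, iy)
    field = np.zeros((ny, nx), dtype=complex)
    k = 2 * np.pi / wavelength
    for px, py, pz in points:
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += np.exp(1j * k * r) / r   # spherical wave from one point
    return field

# Two illustrative object points 10 cm from a small 64x64 hologram:
pts = np.array([[0.0, 0.0, 0.1], [1e-4, 0.0, 0.1]])
holo = np.abs(cgh_amplitude(pts, 633e-9, 64, 64, 10e-6)) ** 2
```

The loop over points is what the reported 30 ms figure parallelizes at 1,920 × 1,024 pixels and 1,024 points.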
The computer codes BROHR and SYSFIT are presented. Both codes are based on the first-order matrix formalism of ion optics. By means of the code BROHR, the trajectories of ions and electrons inside arbitrarily inclined-field accelerating tubes can be calculated. The influence of the stripping process at tandem accelerators is included by changing the mass and charge of the ions and by increasing the beam emittance. The code SYSFIT is used for the calculation of arbitrary beam transport systems and of the transported beam. Specially requested imaging properties can be realized by parameter variation. Calculated examples are given for both codes. (author)
Mayhall, D J; Stein, W; Gronberg, J B
2006-05-15
We have performed preliminary computer-based, transient, magnetostatic calculations of the eddy-current power loss in rotating titanium-alloy and aluminum wheels and wheel rims in the predominantly axially-directed, steady magnetic fields of two small, solenoidal coils. These calculations have been undertaken to assess the eddy-current power loss in various possible International Linear Collider (ILC) positron target wheels. They have also been done to validate the simulation code module against known results published in the literature. The commercially available software package used in these calculations is the Maxwell 3D, Version 10, Transient Module from the Ansoft Corporation.
The linear integral-equation-based computer code 'Roger Oleg Nikolai' (RON), which was recently developed at Argonne National Laboratory, was used to calculate the self-amplified spontaneous emission (SASE) performance of the free-electron laser (FEL) being built at Argonne. Signal growth calculations under different conditions were used to estimate tolerances of actual design parameters and to estimate optimal length of the break sections between undulator segments. Explicit calculation of the radiation field was added recently. The measured magnetic fields of five undulators were used to calculate the gain for the Argonne FEL. The result indicates that the real undulators for the Argonne FEL (the effect of magnetic field errors alone) will not significantly degrade the FEL performance. The capability to calculate the small-signal gain for an FEL-oscillator is also demonstrated
The condition of criticality safety (k_eff < 0.95) is fulfilled in all considered cases. Since all cases are undermoderated in the event of cavity flooding, the limit on the cavity volume in the fuel area, fixed by the construction, is essential for this result. The computer calculations were performed with the Monte Carlo version MCNP-3B. (orig./HP)
ZOCO V is a computer code which can calculate the time- and space- dependent pressure distribution in containments of water-cooled nuclear power reactors (both full pressure containments and pressure suppression systems) following a loss-of-coolant accident, caused by the rupture of a main coolant or steam pipe
Jothi, S., E-mail: s.jothi@swansea.ac.uk [College of Engineering, Swansea University, Singleton Park, Swansea SA2 8PP (United Kingdom); Winzer, N. [Fraunhofer Institute for Mechanics of Materials IWM, Wöhlerstraße 11, 79108 Freiburg (Germany); Croft, T.N.; Brown, S.G.R. [College of Engineering, Swansea University, Singleton Park, Swansea SA2 8PP (United Kingdom)
2015-10-05
Highlights: • Characterized the polycrystalline nickel microstructure using EBSD analysis. • Developed a meso-microstructural model based on the real microstructure. • Calculated effective diffusivity using an experimental electrochemical permeation test. • Calculated intergranular diffusivity of hydrogen using computational FE simulation. • Validated the computational simulation results against experimental results. - Abstract: Hydrogen-induced intergranular embrittlement has been identified as a cause of failure of aerospace components such as combustion chambers made from electrodeposited polycrystalline nickel. Accurate computational analysis of this process requires knowledge of the differential in hydrogen transport between the intergranular and intragranular regions. The effective diffusion coefficient of hydrogen may be measured experimentally, though experimental measurement of the intergranular grain-boundary (GB) diffusion coefficient of hydrogen requires significant effort. Therefore, an approach to calculate the intergranular GB hydrogen diffusivity using finite element analysis was developed. The effective diffusivity of hydrogen in polycrystalline nickel was measured using electrochemical permeation tests. Data from electron backscatter diffraction measurements were used to construct microstructural representative volume elements including details of grain size and shape and the volume fractions of grains and grain boundaries. A Python optimization code has been developed for the ABAQUS environment to calculate the unknown grain boundary diffusivity.
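The shape of such an optimization loop can be illustrated with a toy homogenization rule in place of the ABAQUS RVE simulation: a parallel mixing model for the effective diffusivity and a bisection on the unknown grain-boundary diffusivity. The mixing rule and every number below are stand-ins, not the paper's model.

```python
def effective_diffusivity(d_gb, d_lattice, f_gb):
    """Toy parallel mixing rule standing in for the FE/RVE model:
    D_eff = f_gb*D_gb + (1 - f_gb)*D_lattice."""
    return f_gb * d_gb + (1.0 - f_gb) * d_lattice

def fit_gb_diffusivity(d_eff_measured, d_lattice, f_gb,
                       lo=0.0, hi=1.0, tol=1e-14):
    """Bisect on the unknown grain-boundary diffusivity until the model
    reproduces the measured effective value, mirroring the optimization
    loop the authors wrap around the FE simulation."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if effective_diffusivity(mid, d_lattice, f_gb) < d_eff_measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative values in m^2/s: measured effective D and assumed lattice D.
d_gb = fit_gb_diffusivity(d_eff_measured=2e-10, d_lattice=1e-10, f_gb=0.2)
```

In the paper the forward model is an FE diffusion simulation on the EBSD-derived RVE rather than a closed-form mixing rule, but the inverse-fitting structure is the same.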
A numerical analysis of some neutronic parameters calculated by the LEOPARD computer code, compared with literature data, is presented. A computer code (LEOCIT), a modified version of LEOPARD, was developed, with subroutines that prepare cross-section libraries for 1, 2 or 4 energy groups, writing them on tape or on disk in a special format intended to be used directly by the CITATION computer code. Finally, a simulation of the first burnup cycle of Angra I is done with CITATION, modelling 1/4 of the core in XY geometry and calculating the soluble boron curve and the pin-by-pin power distribution for two energy groups. The more relevant results are compared with those supplied by Westinghouse, CNEN and FURNAS, and some recommendations aiming to perfect the developed system are made. (E.G)
A computer program for calculating relative-transmissivity input arrays to aid model calibration
Weiss, Emanuel
1982-01-01
A program is documented that calculates a transmissivity distribution for input to a digital ground-water flow model. Factors that are taken into account in the calculation are: aquifer thickness, ground-water viscosity and its dependence on temperature and dissolved solids, and permeability and its dependence on overburden pressure. Other factors affecting ground-water flow are indicated. With small changes in the program code, leakance also could be calculated. The purpose of these calculations is to provide a physical basis for efficient calibration, and to extend rational transmissivity trends into areas where model calibration is insensitive to transmissivity values.
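The factors listed combine multiplicatively in the transmissivity T = K·b, with hydraulic conductivity K = k·ρg/μ. A rough sketch of that calculation, with a common empirical water-viscosity correlation standing in for the report's own (the overburden-pressure and dissolved-solids corrections are noted in comments but omitted; all names and values are illustrative):

```python
def viscosity(temp_c):
    """Rough empirical water viscosity (Pa*s) vs temperature (deg C),
    a Vogel-type fit; a stand-in for the report's correlation, which
    also accounts for dissolved solids."""
    return 2.414e-5 * 10 ** (247.8 / (temp_c + 133.15))

def transmissivity(thickness_m, permeability_m2, temp_c,
                   rho=1000.0, g=9.81):
    """T = K*b with hydraulic conductivity K = k*rho*g/mu.
    An overburden-pressure correction would multiply permeability here."""
    K = permeability_m2 * rho * g / viscosity(temp_c)
    return K * thickness_m

# Warmer water is less viscous, so the same aquifer transmits more:
t_cold = transmissivity(50.0, 1e-12, 10.0)
t_warm = transmissivity(50.0, 1e-12, 60.0)
```

Evaluating this cell by cell over the model grid yields the relative-transmissivity input array the program produces.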
da Silveira, Pedro Rodrigo Castro
2014-01-01
This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…
Bakalov, Dimitar, E-mail: dbakalov@inrne.bas.bg [Bulgarian Academy of Sciences, INRNE (Bulgaria)
2015-08-15
The potential energy surface and the computational codes, developed for the evaluation of the density shift and broadening of the spectral lines of laser-induced transitions from metastable states of antiprotonic helium, fail to produce convergent results in the case of pionic helium. We briefly analyze the encountered computational problems and outline possible solutions of the problems.
Structure problems in the analog computation; Problemes de structure dans le calcul analogique
Braffort, P.L. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1957-07-01
Recent mathematical developments have shown the importance of the elementary structures (algebraic, topological, etc.) underlying the great domains of classical analysis. Such structures in analog computation are put in evidence, and possible developments of applied mathematics are discussed. The topological structures of the standard representation of analog schemes, such as addition triangles, integrators, phase inverters and function generators, are also studied. The analog method gives only functions of the variable time as the results of its computations; but the course of computation, for systems including reactive circuits, introduces order structures which are called 'chronological'. Finally, it is shown that the approximation methods of ordinary numerical and digital computation present the same structures as analog computation. This structural analysis permits fruitful comparisons between the several domains of applied mathematics and suggests important new domains of application for the analog method. (M.P.)
Celsis, P; Goldman, T; Henriksen, L; Lassen, N A
1981-01-01
Emission tomography of positron or gamma emitting inert gases allows calculation of regional cerebral blood flow (rCBF) in cross-sectional slices of human brain. An algorithm is presented for rCBF calculations from a sequence of time averaged tomograms using inhaled 133Xe. The approach is designe...
This program calculates the linear characteristics of a gyrotron. This program is capable of: (1) calculating the starting current or frequency detuning for each gyrotron mode, (2) generating mode spectra, (3) plotting these linear characteristics as a function of device parameters (e.g., beam voltage), and (4) doing the above for any axial rf field profile
Dejarnette, F. R.; Jones, M. H.
1971-01-01
A description of the computer program used for heating rate calculation for blunt bodies in hypersonic flow is given. The main program and each subprogram are described by defining the pertinent symbols involved and presenting a detailed flow diagram and complete computer program listing. Input and output parameters are discussed in detail. Listings are given for the computation of heating rates on (1) a blunted 15 deg half-angle cone at 20 deg incidence and Mach 10.6, (2) a blunted 70 deg slab delta wing at 10 deg incidence and Mach 8, and (3) the HL-10 lifting body at 20 deg incidence and Mach 10. In addition, the computer program output for two streamlines on the blunted 15 deg half-angle cone is listed. For Part 1, see N71-36186.
'Radcalc for Windows' is a menu-driven Microsoft Windows-compatible computer code that calculates the radiolytic production of hydrogen gas in high- and low-level radioactive waste. In addition, the code determines US Department of Transportation (DOT) transportation classifications, calculates the activities of parent and daughter isotopes for a specified period of time, calculates decay heat, and calculates pressure buildup from the production of hydrogen gas in a given package geometry. Radcalc for Windows was developed by Packaging Engineering, Transportation and Packaging, Westinghouse Hanford Company, Richland, Washington, for the US Department of Energy (DOE). It is available from Packaging Engineering and is issued with a user's manual and a technical manual. The code has been verified and validated
A computer programme which performs compound nucleus calculations using the Weisskopf-Ewing formalism is described. The programme will calculate the cross-sections for multi-particle emission by treating the process as a series of stages in the cascade. The relevant compound nucleus absorption cross-sections for particle channels are calculated with built-in optical model routines, and gamma ray emission is described by the giant dipole resonance formalism. Several choices for the final nucleus level density formula may be made using the level density routine contained in the programme. The total cross-section for the emission of a particle at any particular stage, is calculated together with the cross-section as a function of energy. The probability of leaving the final nucleus in a state of any particular energy is also obtained. (author)
Post-test calculation of LOFT test L6-5 using the RETRAN-02 computer code
This paper discusses a post-test calculation of Loss-of-Fluid Test (LOFT) L6-5, in which a loss of steam generator feedwater flow was simulated. This calculation, using RETRAN-02, is compared to the L6-5 pretest calculation and was performed to accommodate phenomena actually occurring during the test, in order to better understand the test apparatus and, therefore, the capabilities of the computer code used for this application. The RETRAN-02 calculation therefore employed model changes to improve the characterization of the test. These changes were needed to reflect differences between the advertised pretest initial conditions and the initial conditions which actually occurred at the start of the test, and to reflect differences between the boundary conditions that had been expected to occur during the test and those which actually did occur
A computationally efficient software application for calculating vibration from underground railways
The PiP model is a software application with a user-friendly interface for calculating vibration from underground railways. This paper reports on the software, with a focus on its latest version and plans for future developments. The software calculates the power spectral density of vibration due to a moving train on floating-slab track, with track irregularity described by typical spectra for tracks in good, average and bad condition. The latest version accounts for a tunnel embedded in a half-space by employing a toolbox developed at K.U. Leuven which calculates Green's functions for a multi-layered half-space.
BAC: A computer program for calculating shielding in buildings against initial radiation
Danielson, G.
1980-10-01
Calculation methodology and transmission data for BAC in the event of a nuclear explosion are considered. The shielding factor is the ratio between the radiation dose at a point in the building and the dose in open air. It is calculated separately for neutrons, gamma rays from fission products, and secondary gamma rays. For this calculation, BAC uses data for radiation transmission in concrete. The program is used for fallout shelters and other buildings whose walls and floors/roofs are mostly made of concrete and bricks. Instructions for the program are given, and BAC results are in certain cases compared with those obtained with the Monte Carlo method.
Strategic program field 5 was started in 2011 for the effective use of the K computer. In the field of nuclear research, large-scale nuclear structure studies by Monte Carlo shell model calculation are being carried out at the HPCI (High Performance Computing Infrastructure) Consortium. Since its introduction by Mayer and Jensen in 1949, the shell model has succeeded in explaining the magic numbers and has been a very powerful theory. Recently, however, the great progress of nuclear physics at RIBF (RIKEN RI Beam Factory) and elsewhere has made it clear that magic numbers can disappear in unstable nuclei while different ones appear, so that an evolution of shell structure must be considered. In this report the framework and recent results are described. The second section, 'Shell Model Computation and Monte Carlo Shell Model', covers '2.1 Model space and effective interactions', '2.2 Strict diagonalization by the Lanczos algorithm and its limitations' and '2.3 Framework of the Monte Carlo shell model', with a figure showing a calculation example. The third section, 'Structure Exploration of Neutron-Excess Nickel Isotopes by the Monte Carlo Shell Model', shows the energy surfaces of 68Ni for the 01+ and 02+ states. In the fourth section, 'Monte Carlo Shell Model Calculation without Assuming a Closed Shell and its Visualization', density distributions in 8Be are shown before and after angular momentum projection. The fifth section, 'Development of the Monte Carlo Shell Model Program at the K Computer', shows the speeding up of the Monte Carlo shell model by parallel computation. Finally it is pointed out that the HPCI program is planned to end in 2015. Further magic numbers are expected to be calculated before HPCI terminates. (S. Funahashi)
Use of symbolic computations for calculating logic circuits and specialized processors
Several applied problems are considered in which symbolic computations yield exact algebraic expressions describing schematic diagrams of standard logic modules, encoding and decoding devices, different types of complex circuits, and event-selection devices in high-energy physics experiments. Symbolic computations open new prospects for the complete automation of design work for discrete logic devices, starting from the tables describing circuit functioning and ending with printed-circuit interconnection or the topology of integrated circuits.
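The kind of exact algebraic expression such symbolic computation yields can be illustrated by deriving a sum-of-products formula directly from a truth table, here for the carry-out of a one-bit full adder. This is a generic illustration of the technique, not the paper's own software:

```python
from itertools import product

def sum_of_products(func, names):
    """Derive an exact sum-of-products expression from a truth table:
    one AND-term per input combination for which the function is true,
    OR-ed together. `~` denotes negation."""
    terms = []
    for bits in product([0, 1], repeat=len(names)):
        if func(*bits):
            lits = [n if b else "~" + n for n, b in zip(names, bits)]
            terms.append("&".join(lits))
    return " | ".join(terms)

# Carry-out of a one-bit full adder: true when at least two inputs are 1.
carry = sum_of_products(lambda a, b, c: (a + b + c) >= 2, ["a", "b", "c"])
```

Starting from exactly such a table-derived expression, algebraic simplification and technology mapping can then proceed symbolically, as the abstract describes.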
TRANS4: a computer code calculation of solid fuel penetration of a concrete barrier
The computer code, TRANS4, models the melting and penetration of a solid barrier by a solid disc of fuel following a core disruptive accident. This computer code has been used to model fuel debris penetration of basalt, limestone concrete, basaltic concrete, and magnetite concrete. Sensitivity studies were performed to assess the importance of various properties on the rate of penetration. Comparisons were made with results from the GROWS II code
Statistical model calculations with a double-humped fission barrier GIVAB computer code
Neutron and gamma emission probabilities and fission probabilities are computed, taking into account the special feature of the actinide fission barriers with two maxima. Spectra and cross sections are directly deduced from these probabilities. Populations of both wells are followed step by step. For each initial E and J, decay rates are computed and normalized in order to obtain the de-excitation probabilities imposed by the two-humped fission barrier
HTR-2000: Computer program to accompany calculations during reactor operation of HTGR's
HTR-2000, developed for the computational monitoring of pebble-bed high-temperature reactors with multi-pass fuel circulation, is closely coupled to the actual operation of the reactor. Using measured nuclear and thermo-hydraulic parameters, a detailed model of pebble flow, and exact information on fuel burnup, loading and discharge, it obtains an excellent simulation of the status of the reactor. The geometry is modelled in three dimensions, so asymmetries in core texture can be taken into account in the nuclear and thermo-hydraulic calculations. A continuous simulation was performed over five years of AVR operation. The comparison between calculated and measured data was very satisfactory. In addition, experiments which had been performed at AVR to re-calculate the control rod worth were simulated. The computational analysis shows that in the presence of a compensating absorber in the reactor core, the split reactivity worth for single absorbers can be determined by calculation but not by measurement. (orig.)
Fast neutron reaction data calculations with the computer code STAPRE-H
A description of the specific features of the STAPRE-H version is given. The influence of model options and parameters on the calculated results is illustrated, tracing the accurate reproduction of a large body of correlated data. (authors)
Highlights: ► The atomic densities of light and heavy materials are calculated. ► The solution is obtained using the Runge–Kutta–Fehlberg method. ► The material depletion is calculated for constant flux and constant power conditions. - Abstract: The present work investigates an appropriate way to calculate the variations of nuclide composition in the reactor core during operation. Specific software has been designed for this purpose using C#. The mathematical approach is based on the solution of the Bateman differential equations using a Runge–Kutta–Fehlberg method. Material depletion at constant flux and constant power can be calculated with this software. The inputs include reactor power, time step, initial and final times, order of the Taylor series used to calculate the time-dependent flux, time unit, core material composition at the initial condition (consisting of light and heavy radioactive materials), acceptable error criterion, decay constants library, cross sections database and calculation type (constant flux or constant power). The atomic density of light and heavy fission products during reactor operation is obtained with high accuracy as the program output. The results from this method, compared with the analytical solution, show good agreement.
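As an illustration of the numerical approach, the sketch below integrates a two-member Bateman decay chain with a fixed-step classic Runge-Kutta integrator (the code above uses the adaptive Runge-Kutta-Fehlberg pair; the decay constants and initial density here are hypothetical) and compares the result against the analytical Bateman solution:

```python
import math

def rk4_step(f, t, y, h):
    """One classic fourth-order Runge-Kutta step (fixed step for brevity;
    an adaptive Runge-Kutta-Fehlberg pair would also estimate the error)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Two-member decay chain N1 -> N2 -> (stable); constants are illustrative.
LAM1, LAM2 = 0.05, 0.01        # decay constants [1/s], hypothetical

def bateman(t, n):
    n1, n2 = n
    return [-LAM1 * n1, LAM1 * n1 - LAM2 * n2]

n = [1.0e10, 0.0]              # initial atomic densities, hypothetical
t, h = 0.0, 0.1
while t < 100.0 - 1e-9:
    n = rk4_step(bateman, t, n, h)
    t += h

# Analytical Bateman solution at t = 100 s for comparison
n2_exact = 1.0e10 * LAM1 / (LAM2 - LAM1) * (math.exp(-LAM1 * 100) - math.exp(-LAM2 * 100))
```

With this step size the fourth-order integrator reproduces the analytical daughter density to well below the "acceptable error criterion" an input deck would typically specify.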
Shielding calculations of advanced nuclear facilities such as accelerator based neutron sources or fusion devices of the tokamak type are complicated due to their complex geometries and their large dimensions, including bulk shields of several meters thickness. While the complexity of the geometry in the shielding calculation can hardly be handled by the discrete ordinates method, the deep penetration of radiation through bulk shields is a severe challenge for the Monte Carlo particle transport simulation technique. This work proposes a dedicated computational approach for coupled Monte Carlo - deterministic transport calculations to handle this kind of shielding problem. The Monte Carlo technique is used to simulate the particle generation and transport in the target region with both complex geometry and reaction physics, and the discrete ordinates method is used to treat the deep penetration problem in the bulk shield. To enable the coupling of these two different computational methods, a mapping approach has been developed for calculating the discrete ordinates angular flux distribution from the scored data of the Monte Carlo particle tracks crossing a specified surface. The approach has been implemented in an interface program and validated by means of test calculations using a simplified three-dimensional geometric model. Satisfactory agreement was obtained for the angular fluxes calculated by the mapping approach using the MCNP code for the Monte Carlo calculations and direct three-dimensional discrete ordinates calculations using the TORT code. In the next step, a complete program system has been developed for coupled three-dimensional Monte Carlo deterministic transport calculations by integrating the Monte Carlo transport code MCNP, the three-dimensional discrete ordinates code TORT and the mapping interface program. Test calculations with two simple models have been performed to validate the program system by means of comparison calculations using the
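A minimal sketch of the track-to-angular-flux mapping idea, assuming a flat list of surface crossings and equal-width direction-cosine bins in place of a real discrete-ordinates quadrature (all numbers are hypothetical):

```python
# Hypothetical Monte Carlo track crossings of the coupling surface:
# (statistical weight, direction cosine mu with respect to the surface normal).
crossings = [(1.0, 0.9), (0.5, 0.3), (0.8, 0.75), (0.2, 0.1), (1.1, 0.55)]

NBINS = 4        # equal-width mu bins standing in for an S_N quadrature
AREA = 2.0       # coupling-surface area [cm^2], assumed

def angular_flux(crossings, nbins=NBINS, area=AREA):
    """Score surface-crossing tracks into angular bins. Dividing each
    weight by |mu| converts a surface-crossing (current-like) tally into
    an angular-flux-like estimate per unit area and per unit mu."""
    dmu = 1.0 / nbins
    flux = [0.0] * nbins
    for w, mu in crossings:
        b = min(int(mu / dmu), nbins - 1)      # bin index for this direction
        flux[b] += w / (abs(mu) * area * dmu)
    return flux

flux = angular_flux(crossings)
```

Integrating the binned estimate back over angle and area recovers the total scored weight divided by |mu|, which is the conservation property a real interface program would check before handing the distribution to the discrete ordinates code.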
Donald, Jack Bradshaw
1998-01-01
The purpose of this descriptive study was to investigate the availability and distribution of calculators and computers for the mathematics classes in public high schools across the State of Virginia; examine professional development activities used by teachers to prepare for the use of calculators and computers in the classroom; explore factors that may guide and influence mathematics teachers in the use of calculators and computers; examine the familiarity and degre...
Bordy, J.M.; Kodeli, I.; Menard, St.; Bouchet, J.L.; Renard, F.; Martin, E.; Blazy, L.; Voros, S.; Bochud, F.; Laedermann, J.P.; Beaugelin, K.; Makovicka, L.; Quiot, A.; Vermeersch, F.; Roche, H.; Perrin, M.C.; Laye, F.; Bardies, M.; Struelens, L.; Vanhavere, F.; Gschwind, R.; Fernandez, F.; Quesne, B.; Fritsch, P.; Lamart, St.; Crovisier, Ph.; Leservot, A.; Antoni, R.; Huet, Ch.; Thiam, Ch.; Donadille, L.; Monfort, M.; Diop, Ch.; Ricard, M
2006-07-01
The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport, radiation source assessment or dosimetry. The presentations were divided into 2 sessions: 1) methodology and 2) uses in industrial, medical or research domains. It appears that 2 different calculation strategies are prevailing, both based on preliminary Monte-Carlo calculations with data storage: first, quick simulations made from a database of particle histories built through a previous Monte-Carlo simulation, and secondly, a neural network approach involving a learning platform generated through a previous Monte-Carlo simulation. This document gathers the slides of the presentations.
Emergency Doses (ED) - Revision 3: A calculator code for environmental dose computations
The calculator program ED (Emergency Doses) was developed from several HP-41CV calculator programs documented in the report Seven Health Physics Calculator Programs for the HP-41CV, RHO-HS-ST-5P (Rittman 1984). The program was developed to enable estimates of offsite impacts more rapidly and reliably than was possible with the software available for emergency response at that time. The ED - Revision 3, documented in this report, revises the inhalation dose model to match that of ICRP 30, and adds simple estimates for air concentration downwind from a chemical release. In addition, the method for calculating the Pasquill dispersion parameters was revised to match the GENII code within the limitations of a hand-held calculator (e.g., plume rise and building wake effects are not included). The summary report generator for printed output, which had been present in the code from the original version, was eliminated in Revision 3 to make room for the dispersion model, the chemical release portion, and the methods of looping back to an input menu until there are no further changes. This program runs on the Hewlett-Packard programmable calculators known as the HP-41CV and the HP-41CX. The documentation for ED - Revision 3 includes a guide for users, sample problems, detailed verification tests and results, model descriptions, code description (with program listing), and independent peer review. This software is intended to be used by individuals with some training in the use of air transport models. There are some user inputs that require intelligent application of the model to the actual conditions of the accident. The results calculated using ED - Revision 3 are only correct to the extent allowed by the mathematical models. 9 refs., 36 tabs
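The downwind air-concentration estimate rests on the Gaussian plume model. A sketch, with placeholder power-law dispersion coefficients rather than the fitted Pasquill-Gifford values used in ED or GENII:

```python
import math

def sigma_pasquill(x_m, stability="D"):
    """Rough power-law fits for sigma_y, sigma_z [m] versus downwind
    distance x [m]. The coefficients are illustrative placeholders, not
    the fitted Pasquill-Gifford values a real dispersion code would use."""
    coeff = {"D": (0.08, 0.9, 0.06, 0.85)}   # hypothetical fit constants
    a, p, b, q = coeff[stability]
    return a * x_m ** p, b * x_m ** q

def chi_over_q(x_m, wind_mps, release_height_m=0.0, stability="D"):
    """Ground-level centerline dilution factor chi/Q [s/m^3] from the
    standard Gaussian plume equation (no plume rise or building wake,
    matching the stated limitations of the calculator program)."""
    sy, sz = sigma_pasquill(x_m, stability)
    return (math.exp(-release_height_m ** 2 / (2.0 * sz ** 2))
            / (math.pi * sy * sz * wind_mps))

# Dilution should fall off with downwind distance for a ground-level release.
near = chi_over_q(100.0, wind_mps=2.0)
far = chi_over_q(1000.0, wind_mps=2.0)
```

Multiplying chi/Q by a release rate Q gives the downwind air concentration, for either a radionuclide or a chemical release.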
Calculating additional shielding requirements in diagnostics X-ray departments by computer
This report provides an extension of an existing method for the calculation of the barrier thickness required to reduce the three types of radiation exposure emitted from the source, the primary, secondary and leakage radiation, to a specified weekly design limit (MPD). Since each of these three types of radiation is of a different beam quality, with different shielding requirements, NCRP 49 has provided means to calculate the necessary protective barrier thickness for each type of radiation individually. Additionally, barrier requirements specified using the techniques stated in NCRP 49 show enormous variations among users. Part of the variation is due to different assumptions made regarding the use of the examined room and the characteristics of adjoining space. Many of the differences result from the difficulty of accurately relating information from the calculations to the graphs and tables involved in the calculation process specified by this report. Moreover, the latest technological developments such as mammography are not addressed, and attenuation data for three-phase generators, which are most widely used today, are not provided. The design of shielding barriers in diagnostic X-ray departments generally follows the ALARA principle. That means that, in practice, the exposure levels are kept 'as low as reasonably achievable', taking into account economical and technical factors. Additionally, the calculation of barrier requirements includes many uncertainties (e.g. the workload, the actual kVp used etc.). (author)
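The core of such a barrier calculation is attenuating each radiation component down to the design limit. A sketch assuming simple tenth-value-layer attenuation; the TVL value below is a placeholder, not NCRP 49 attenuation data:

```python
import math

def barrier_thickness_mm(unshielded_weekly_dose_mGy, design_limit_mGy, tvl_mm):
    """Thickness needed to attenuate an unshielded weekly dose down to the
    design limit, assuming pure exponential (tenth-value-layer) attenuation.
    A real NCRP 49 style calculation would repeat this for primary,
    secondary and leakage radiation and take the governing result."""
    ratio = unshielded_weekly_dose_mGy / design_limit_mGy
    if ratio <= 1.0:
        return 0.0                    # already below the design limit
    return tvl_mm * math.log10(ratio)

# E.g. reduce 10 mGy/week to 0.1 mGy/week with a hypothetical 0.3 mm Pb TVL:
t = barrier_thickness_mm(10.0, 0.1, tvl_mm=0.3)   # two tenth-value layers
```

Automating this per-component arithmetic is precisely what removes the graph-and-table reading step that the report identifies as a source of user-to-user variation.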
Hybrid data processing, associating computer calculation of the processing filters with their use in a coherent optical set-up, may lead to real-time filtering. In principle, it is shown that instantaneous filtering of all known and unknown defects in images can be attained using a well adapted electro-optical relay. Some synthetic holograms, holographic lenses with variable focusing, and a number of processing filters were calculated, all holograms being binary phase coded. The results were recorded on tape and displayed in deferred time on a 128x128 point liquid crystal electro-optical relay, allowing the quality of reproduction of the computed holograms to be tested on a simple diffraction bench, and on a double diffraction bench in the case of the results of the image filtering.
Sarkar, Kanchan; Sharma, Rahul; Bhattacharyya, S P
2010-03-01
A density matrix based soft-computing solution to the quantum mechanical problem of computing the molecular electronic structure of fairly long polythiophene (PT) chains is proposed. The soft-computing solution is based on a "random mutation hill climbing" scheme which is modified by blending it with a deterministic method based on a trial single-particle density matrix [P((0))(R)] for the guessed structural parameters (R), which is allowed to evolve under a unitary transformation generated by the Hamiltonian H(R). The Hamiltonian itself changes as the geometrical parameters (R) defining the polythiophene chain undergo mutation. The scale (λ) of the transformation is optimized by making the energy [E(λ)] stationary with respect to λ. The robustness and the performance levels of variants of the algorithm are analyzed and compared with those of other derivative free methods. The method is further tested successfully with optimization of the geometry of bipolaron-doped long PT chains. PMID:26613302
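A toy version of random mutation hill climbing on a quadratic surrogate, standing in for the energy of a polythiophene geometry; the surrogate function, step size and iteration count are all illustrative:

```python
import random

def hill_climb(energy, x0, step=0.1, iters=2000, seed=1):
    """Plain random-mutation hill climbing: perturb one coordinate at a
    time and keep the move only if the objective improves. The scheme in
    the paper additionally blends in a deterministic density-matrix step,
    which is omitted here."""
    rng = random.Random(seed)
    x, e = list(x0), energy(x0)
    for _ in range(iters):
        trial = list(x)
        i = rng.randrange(len(trial))          # mutate one parameter
        trial[i] += rng.uniform(-step, step)
        et = energy(trial)
        if et < e:                             # accept only downhill moves
            x, e = trial, et
    return x, e

# Surrogate "energy" with its minimum at (1, -2); purely illustrative.
surrogate = lambda r: (r[0] - 1.0) ** 2 + (r[1] + 2.0) ** 2
x_best, e_best = hill_climb(surrogate, [5.0, 5.0])
```

The derivative-free character is the point: only energy evaluations are needed, which is why such schemes can be compared directly against other derivative-free optimizers as the abstract describes.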
The modified calculation of the coolant temperature in the computer code TOODEE-2
The programme is intended for the calculation of the maximum cladding temperature of the hottest rod of a PWR and can be used to estimate the events after a leakage of coolant. The TOODEE-2 programme corresponds to the LOCTA code of Westinghouse, which gave a superior reproduction of reality. The new TOODEE-2 has been improved, and a comparison of the calculation of a large break with Westinghouse showed good agreement for the first 16 seconds of the course of events. The rest describes a serious incident under the conservative procedure according to 10 CFR 50 Appendix K. Most of the calculations have used the form factor of 2.32; 2.12, which is the highest form factor for Ringhals 2, has also been used. Further investigations are needed to clarify the difference in the results. (G.B.)
WASP: A flexible FORTRAN 4 computer code for calculating water and steam properties
Hendricks, R. C.; Peller, I. C.; Baron, A. K.
1973-01-01
A FORTRAN 4 subprogram, WASP, was developed to calculate the thermodynamic and transport properties of water and steam. The temperature range is from the triple point to 1750 K, and the pressure range is from 0.1 to 100 MN/m2 (1 to 1000 bars) for the thermodynamic properties and to 50 MN/m2 (500 bars) for thermal conductivity and to 80 MN/m2 (800 bars) for viscosity. WASP accepts any two of pressure, temperature, and density as input conditions. In addition, pressure and either entropy or enthalpy are also allowable input variables. This flexibility is especially useful in cycle analysis. The properties available in any combination as output include temperature, density, pressure, entropy, enthalpy, specific heats, sonic velocity, viscosity, thermal conductivity, surface tension, and the Laplace constant. The subroutine structure is modular so that the user can choose only those subroutines necessary to his calculations. Metastable calculations can also be made by using WASP.
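WASP's flexible input handling can be sketched with an ideal-gas stand-in; WASP itself uses real-fluid steam formulations, so the relation below only illustrates how any two of pressure, temperature, and density determine the third:

```python
R_WATER = 461.5   # specific gas constant of steam [J/(kg K)]

def resolve_state(pressure=None, temperature=None, density=None):
    """Given any two of (P [Pa], T [K], rho [kg/m^3]), return all three.
    The ideal-gas relation P = rho*R*T is used purely to illustrate the
    flexible-input idea; it is not WASP's property formulation."""
    known = [v is not None for v in (pressure, temperature, density)]
    if sum(known) != 2:
        raise ValueError("exactly two of P, T, rho must be given")
    if pressure is None:
        pressure = density * R_WATER * temperature
    elif temperature is None:
        temperature = pressure / (density * R_WATER)
    else:
        density = pressure / (R_WATER * temperature)
    return pressure, temperature, density

p, t, rho = resolve_state(pressure=1.0e5, temperature=500.0)
```

In cycle analysis this flexibility matters because different points of the cycle naturally fix different state-variable pairs; WASP additionally accepts pressure with entropy or enthalpy.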
SaiToh, Akira
2011-01-01
A C++ library, named ZKCM, has been developed for the purpose of multiprecision matrix calculations, which is based on the GNU MP and MPFR libraries. It is especially convenient for writing programs involving tensor-product operations, tracing-out operations, and singular-value decompositions. Its extension library, ZKCM_QC, for simulating quantum computing has been developed using the time-dependent matrix-product-state simulation method. This report gives a brief introduction to the libraries with sample programs.
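A double-precision numpy stand-in for the tensor-product and tracing-out operations that ZKCM performs in multiprecision (the Bell-state example is illustrative, not from the report):

```python
import numpy as np

def partial_trace_second(rho, d1, d2):
    """Trace out the second subsystem of a (d1*d2)x(d1*d2) density matrix
    by reshaping into a rank-4 tensor and summing the matched indices."""
    return np.trace(rho.reshape(d1, d2, d1, d2), axis1=1, axis2=3)

# Bell state (|00> + |11>)/sqrt(2); its reduced state is maximally mixed.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(bell, bell.conj())            # density matrix via outer product
reduced = partial_trace_second(rho, 2, 2)    # should equal I/2
```

For long matrix-product-state simulations the same operations are repeated at high precision, which is where the GNU MP/MPFR backing of ZKCM replaces the fixed double precision used here.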
Amani Tahat; Mahmoud Abu-Allaban; Safeia Hamasha
2011-01-01
In this study, a new atomic physics program (HTAC) is introduced and tested. It is a utility program designed to automate the computation of various atomic structure and spectral data. It is the first comprehensive code that enables performing atomic calculations based on three advanced theories: the fully relativistic configuration interactions approach, the multi-reference many body perturbation theory and the R-Matrix method. It has been designed to generate tabulated atomic data files tha...
ACRO was developed as a computer program to calculate internal exposure doses resulting from acute or chronic inhalation and oral ingestion of radionuclides. The ICRP Task Group Lung Model (TGLM) was used as the inhalation model in ACRO, and a simple one-compartment model was used as the ingestion model. The program is written in FORTRAN IV, and it requires about 260 KB of memory.
SOURCE 2.0 is a computer code being jointly developed within the Canadian nuclear industry. It will model the necessary mechanisms required to calculate the fission product release for a variety of accident scenarios, including large break loss of coolant accidents with or without emergency coolant injection. This paper presents the origin of SOURCE 2.0, describes the code structure, the fission product mechanisms modelled, and the quality assurance procedures that are being followed during the code's life cycle. (author)
A computer code system for fast calculation of activation and transmutation has been developed. The system consists of a driver code, cross-section libraries, flux libraries, a material library, and a decay library. The code is used to predict transmutations in a Ti-modified 316 stainless steel, a commercial ferritic alloy (HT9), and a V-15%Cr-5%Ti alloy in various magnetic fusion energy (MFE) test facilities and conceptual reactors
Du, Jiangfeng; Xu, Nanyang; Peng, Xinhua; Wang, Pengfei; Wu, Sanfeng; Lu, Dawei
2009-01-01
It is exponentially hard to simulate quantum systems by classical algorithms, while a quantum computer could in principle solve this problem polynomially. We demonstrate such a quantum-simulation algorithm on our NMR system to simulate a hydrogen molecule and calculate its ground-state energy. We utilize the NMR interferometry method to measure the phase shift and iterate the process to get a high precision. Finally we get 17 precise bits of the energy value, and we also analyze the source of...
GAPCON-THERMAL-2: a computer program for calculating the thermal behavior of an oxide fuel rod
A description is presented of the computer code GAPCON-THERMAL-2, a light water reactor (LWR) fuel thermal performance prediction code. GAPCON-THERMAL-2 is intended to be used as a calculational tool for reactor fuel steady-state thermal performance and to provide input for accident analyses. Some models used in the code provide best estimate as well as conservative predictions. Each of the individual models in the code is based on the best available data
This report presents the description of the computer code REACT/THERMIX, which was developed for calculations of graphite corrosion phenomena and accident transients in gas cooled High Temperature Reactors (HTR) under air and/or water ingress accident conditions. The two-dimensional code is characterized by direct coupling of thermodynamic, fluid-dynamic and chemical processes with separate handling of heterogeneous chemical reactions. (orig.)
Computer calculations of wire-rope tiedown designs for radioactive materials packages
This Regulatory Compliance Guide (RCG) provides guidance on the use and selection of appropriate wire rope type package tiedowns. It provides an effective way to encourage and to ensure uniform implementation of regulatory requirements applicable to tiedowns. It provides general guidelines for securing packages weighing 5,000 pounds or greater that contain radioactive materials onto legal weight trucks (exclusive of packagings having their own trailer with trunnion type tiedown). This RCG includes a computerized Tiedown Stress Calculation Program (TSCP) which calculates the stresses in the wire-rope tiedowns and specifies appropriate sizes of wire rope and associated hardware parameters (such as turnback length, number of cable clips, etc.)
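A sketch of the statics underlying such a tiedown calculation, assuming equal load sharing among identical ropes at a common angle; the g-factors and geometry are illustrative, not the RCG's detailed TSCP model:

```python
import math

def rope_tension_lbf(package_weight_lbf, n_ropes, g_long, g_lat, g_vert,
                     angle_deg_from_horizontal):
    """Peak tension per rope when the resultant quasi-static inertial load
    is shared equally by n ropes inclined at a common angle. Equal load
    sharing is a simplifying assumption; a real tiedown analysis resolves
    each rope's geometry individually."""
    resultant = package_weight_lbf * math.sqrt(g_long ** 2 + g_lat ** 2 + g_vert ** 2)
    # Only the rope component along the load direction resists it.
    return resultant / (n_ropes * math.cos(math.radians(angle_deg_from_horizontal)))

# A 5,000 lb package on four ropes at 45 degrees, with illustrative g-loads:
t = rope_tension_lbf(5000.0, n_ropes=4, g_long=2.0, g_lat=1.0, g_vert=1.0,
                     angle_deg_from_horizontal=45.0)
```

Comparing the resulting tension against the rated breaking strength (with a safety factor) is what drives the selection of wire-rope size, turnback length, and number of cable clips.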
Radiation doses from radiation sources of neutrons and photons by different computer calculation
In the present paper, aspects of the calculation technique for dose rates from neutron and photon radiation sources are covered, with reference both to the basic theoretical modelling of the MERCURE-4, XSDRNPM-S and MCNP-3A codes and, from a practical point of view, to safety analyses of the irradiation risk of two transportation casks. The input data sets of these calculations - regarding the CEN 10/200 HLW container and a dry PWR spent fuel assembly shipping cask - are commented on with respect to the connection between the input data and the underlying theoretical background
BETHSY 6.2TC test calculation with TRACE and RELAP5 computer code
The TRACE code is still under development and will eventually have all the capabilities of RELAP5. The purpose of the present study was therefore to assess the accuracy of the TRACE calculation of the BETHSY 6.2TC test, a 15.24 cm equivalent diameter horizontal cold leg break. For the calculations, TRACE V5.0 Patch 1 and RELAP5/MOD3.3 Patch 4 were used. The overall results obtained with TRACE were similar to those obtained with RELAP5/MOD3.3, and the discrepancies were reasonable. (author)
With the rapidly growing number of CT examinations, the consequential radiation risk has aroused more and more attention. The average dose in each organ during CT scans can only be obtained by using Monte Carlo simulation with computational phantoms. Since children tend to have higher radiation sensitivity than adults, the radiation dose of pediatric CT examinations requires special attention and needs to be assessed accurately. So far, studies on organ doses from CT exposures for pediatric patients are still limited. In this work, a 1-year-old computational phantom was constructed. The body contour was obtained from the CT images of a 1-year-old physical phantom and the internal organs were deformed from an existing Chinese reference adult phantom. To ensure the organ locations in the 1-year-old computational phantom were consistent with those of the physical phantom, the organ locations in 1-year-old computational phantom were manually adjusted one by one, and the organ masses were adjusted to the corresponding Chinese reference values. Moreover, a CT scanner model was developed using the Monte Carlo technique and the 1-year-old computational phantom was applied to estimate organ doses derived from simulated CT exposures. As a result, a database including doses to 36 organs and tissues from 47 single axial scans was built. It has been verified by calculation that doses of axial scans are close to those of helical scans; therefore, this database could be applied to helical scans as well. Organ doses were calculated using the database and compared with those obtained from the measurements made in the physical phantom for helical scans. The differences between simulation and measurement were less than 25% for all organs. The result shows that the 1-year-old phantom developed in this work can be used to calculate organ doses in CT exposures, and the dose database provides a method for the estimation of 1-year-old patient doses in a variety of CT examinations. (paper)
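Once such a single-axial-scan dose database exists, a helical-scan organ dose can be approximated by summing the axial doses covered by the scan range. A sketch with hypothetical dose values; the position indexing and scaling conventions are illustrative assumptions:

```python
# Hypothetical per-rotation organ doses [mGy] from single axial scans at
# successive couch positions, as a dose database of this kind might store.
axial_lung_dose = {0: 0.02, 1: 0.35, 2: 0.90, 3: 0.88, 4: 0.30, 5: 0.03}

def helical_organ_dose(axial_doses, start, stop, pitch=1.0, mAs_scale=1.0):
    """Approximate a helical-scan organ dose as the sum of the axial-scan
    doses in the scan range, scaled by 1/pitch and by the tube current
    relative to the database reference. Both scalings are assumptions of
    this sketch, not the paper's exact procedure."""
    total = sum(d for z, d in axial_doses.items() if start <= z <= stop)
    return total * mAs_scale / pitch

dose = helical_organ_dose(axial_lung_dose, start=1, stop=4, pitch=1.0)
```

This additivity is why the verification that axial-scan doses closely match helical-scan doses lets one database serve both scan modes.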
Santoyo, E. [Universidad Nacional Autonoma de Mexico, Centro de Investigacion en Energia, Temixco (Mexico); Garcia, A.; Santoyo, S. [Unidad Geotermia, Inst. de Investigaciones Electricas, Temixco (Mexico); Espinosa, G. [Universidad Autonoma Metropolitana, Co. Vicentina (Mexico); Hernandez, I. [ITESM, Centro de Sistemas de Manufactura, Monterrey (Mexico)
2000-07-01
The development and application of the computer code STATIC_TEMP, a useful tool for calculating static formation temperatures from actual bottomhole temperature data logged in geothermal wells is described. STATIC_TEMP is based on five analytical methods which are the most frequently used in the geothermal industry. Conductive and convective heat flow models (radial, spherical/radial and cylindrical/radial) were selected. The computer code is a useful tool that can be reliably used in situ to determine static formation temperatures before or during the completion stages of geothermal wells (drilling and cementing). Shut-in time and bottomhole temperature measurements logged during well completion activities are required as input data. Output results can include up to seven computations of the static formation temperature by each wellbore temperature data set analysed. STATIC_TEMP was written in Fortran-77 Microsoft language for MS-DOS environment using structured programming techniques. It runs on most IBM compatible personal computers. The source code and its computational architecture as well as the input and output files are described in detail. Validation and application examples on the use of this computer code with wellbore temperature data (obtained from specialised literature) and with actual bottomhole temperature data (taken from completion operations of some geothermal wells) are also presented. (Author)
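One classical analytical method of this kind is the Horner plot: fitting bottomhole temperature against the logarithm of the Horner time ratio and extrapolating to infinite shut-in time. A sketch on synthetic data (the circulation time and temperature build-up values below are invented, and this is only one of the kinds of method a code like STATIC_TEMP implements):

```python
import math

def horner_static_temperature(circulation_hours, shutins):
    """Least-squares fit of bottomhole temperature versus the Horner time
    ratio log((tc + dt)/dt); the intercept (the dt -> infinity limit)
    estimates the static formation temperature."""
    xs = [math.log((circulation_hours + dt) / dt) for dt, _ in shutins]
    ys = [temp for _, temp in shutins]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return ybar - slope * xbar          # intercept at log-ratio = 0

# Synthetic build-up data: (shut-in time [h], measured BHT [degC]),
# generated from T = 250 - 30*log((tc+dt)/dt) with tc = 6 h.
data = [(dt, 250.0 - 30.0 * math.log((6.0 + dt) / dt)) for dt in (6, 12, 18, 24)]
t_static = horner_static_temperature(6.0, data)
```

On these exactly linear synthetic data the intercept recovers the 250 degC static temperature used to generate them; real logged data would scatter about the fit.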
Quantum computing applied to calculations of molecular energies: CH2 benchmark
Veis, L.; Pittner, Jiří
2010-01-01
Vol. 133, No. 19 (2010), p. 194106. ISSN 0021-9606 R&D Projects: GA ČR GA203/08/0626 Institutional research plan: CEZ:AV0Z40400503 Keywords: computation * algorithm * systems Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.920, year: 2010
This investigation used symbolic manipulation in developing analytical methods and general computational strategies for solving both linear and nonlinear, regular and singular integral and integro-differential equations which appear in radiative and mixed-mode energy transport. Contained in this report are seven papers which present the technical results as individual modules
LWR-WIMS, a computer code for light water reactor lattice calculations
LWR-WIMS is a comprehensive scheme of computation for studying the reactor physics aspects and burnup behaviour of typical lattices of light water reactors. This report describes the physics methods that have been incorporated in the code, and the modifications that have been made since the code was issued in 1972. (U.K.)
CRITEX - a computer program to calculate criticality excursions in fissile liquid systems
A computer program CRITEX has been developed which models criticality excursions in fissile solutions. This report describes the numerical methods used to approximate the differential equations which are used to simulate the physical behaviour. A flow diagram is given together with a description of the subroutines, input and output variables. (author)
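The differential equations behind such an excursion calculation are the point kinetics equations. A minimal explicit-Euler sketch with one delayed neutron group and no feedback (CRITEX additionally models the solution physics and feedback effects; the parameters here are typical in order of magnitude but illustrative):

```python
def point_kinetics(rho, beta, lam, Lambda, dt, steps):
    """Explicit-Euler integration of one-delayed-group point kinetics:
    dp/dt = ((rho - beta)/Lambda) p + lam c,
    dc/dt = (beta/Lambda) p - lam c.
    A stand-in for the excursion equations a code like CRITEX integrates."""
    p = 1.0                                  # relative power
    c = beta * p / (Lambda * lam)            # equilibrium precursor level
    history = [p]
    for _ in range(steps):
        dp = ((rho - beta) / Lambda) * p + lam * c
        dc = (beta / Lambda) * p - lam * c
        p += dt * dp
        c += dt * dc
        history.append(p)
    return history

# Small positive reactivity step, below prompt critical.
h = point_kinetics(rho=0.001, beta=0.0065, lam=0.08, Lambda=1.0e-4,
                   dt=1.0e-5, steps=20000)
```

The very small time step reflects the stiffness of these equations: the prompt time constant Lambda is orders of magnitude shorter than the delayed-precursor time scale, which is exactly the numerical difficulty a production excursion code must handle.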
Computer code ANISN multiplying media and shielding calculation II. Code description (input/output)
The user manual of the ANISN computer code, describing the input and output subroutines, is presented. The ANISN code was developed to solve the one-dimensional transport equation for neutrons or gamma rays in slab, spherical or cylindrical geometry with general anisotropic scattering. The solution technique is the discrete ordinates method. (M.C.K.)
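The discrete ordinates idea can be sketched for the simplest case such a code covers: a purely absorbing slab swept over a few forward directions. The quadrature set, cross section and mesh below are illustrative, and scattering and multiplication are omitted:

```python
import math

def sn_slab_attenuation(sigma_t, length, ncells, mus, wts, inc_flux):
    """Sweep a purely absorbing slab cell by cell for each discrete
    direction mu, attenuating the angular flux analytically within each
    cell and accumulating a weighted scalar flux estimate - a toy version
    of the discrete ordinates sweep."""
    dx = length / ncells
    scalar = [0.0] * ncells
    for mu, w, psi in zip(mus, wts, inc_flux):
        for i in range(ncells):
            psi_out = psi * math.exp(-sigma_t * dx / mu)
            scalar[i] += w * 0.5 * (psi + psi_out)   # cell-average estimate
            psi = psi_out
    return scalar

# Two forward directions with equal weights (a half-range Gauss-like set).
flux = sn_slab_attenuation(sigma_t=1.0, length=5.0, ncells=50,
                           mus=[0.339981, 0.861136], wts=[0.5, 0.5],
                           inc_flux=[1.0, 1.0])
```

With scattering present, the sweep would be wrapped in source iterations until the scattering source converges; that iteration, together with anisotropic scattering expansions, is what the full code provides.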