WorldWideScience

Sample records for calculations computer

  1. Computation cluster for Monte Carlo calculations

    International Nuclear Information System (INIS)

    Two computation clusters based on the Rocks Clusters 5.1 Linux distribution, built from Intel Core Duo and Intel Core Quad computers, were set up at the Department of Nuclear Physics and Technology. The clusters were used for Monte Carlo calculations, specifically for MCNP calculations applied in nuclear reactor core simulations. Optimization for computation speed was carried out on both the hardware and the software side. Hardware parameters such as memory size, network speed, CPU speed, number of processors per computation and number of processors in one computer were tested to shorten the calculation time. For software optimization, different Fortran compilers, MPI implementations and CPU multi-core libraries were tested. Finally, the cluster was used to find the weighting functions of the neutron ex-core detectors of a VVER-440 reactor. (authors)

  2. Calculation of profitability in computer tomography (CT)

    International Nuclear Information System (INIS)

    The analysis does not refer to a specific type of whole-body computed tomography unit, which made it necessary to base the calculations on mean values for both initial costs and operating costs. The calculation of the receipts was based on the resulting costs, the mean long-term utilization of the unit and a reasonable amortization period. The model calculation indicates that the break-even point is reached with 1,920 annual examinations and a five-year amortization period. (orig.)
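
    A minimal sketch of the kind of break-even arithmetic described above. The cost and revenue figures below are purely illustrative assumptions, not the values used in the original study.

```python
# Illustrative break-even calculation for a CT unit (all figures are assumptions).
def break_even_examinations(initial_cost, annual_operating_cost,
                            amortization_years, revenue_per_exam):
    """Number of examinations per year at which receipts cover annual costs."""
    annual_capital_cost = initial_cost / amortization_years
    annual_total_cost = annual_capital_cost + annual_operating_cost
    return annual_total_cost / revenue_per_exam

# Hypothetical numbers chosen only to show the mechanics of the calculation.
n = break_even_examinations(initial_cost=1_500_000,
                            annual_operating_cost=250_000,
                            amortization_years=5,
                            revenue_per_exam=290)
print(f"Break-even at about {n:.0f} examinations per year")
```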

  3. Computer Program Development for House Cost Calculation

    OpenAIRE

    Korablev, Maxim

    2010-01-01

    The main purpose of this project was to develop a program that can calculate the cost of houses. The program should accelerate the matching process between the company and its users. The program should also contain a database of building materials. The programming language is PHP, a modern language for the development of web applications. The program code was written on the basis of the official PHP manual, with some support from a programmer in the company. For making the database of...

  4. Atomic physics: computer calculations and theoretical analysis

    OpenAIRE

    Drukarev, E. G.

    2004-01-01

    It is demonstrated how theoretical analysis preceding the numerical calculations helps to calculate the ground-state energy of the helium atom and makes it possible to avoid qualitative errors in calculations of the characteristics of double photoionization.

  5. Computational methods for probability of instability calculations

    Science.gov (United States)

    Wu, Y.-T.; Burnside, O. H.

    1990-01-01

    This paper summarizes the development of methods and a computer program to compute the probability of instability of a dynamic system that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria, based on the roots of the characteristic equation or on Routh-Hurwitz test functions, are investigated. Computational methods based on system reliability analysis methods and importance sampling concepts are proposed to perform efficient probabilistic analysis. Numerical examples are provided to demonstrate the methods.
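
    A minimal Monte Carlo sketch of the instability criterion mentioned above (a root of the characteristic equation with positive real part) for a single second-order equation with uncertain coefficients. The parameter distributions are illustrative assumptions, and plain Monte Carlo stands in here for the reliability-analysis and importance-sampling methods actually proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def unstable(m, c, k):
    """m*x'' + c*x' + k*x = 0 is unstable if any root of
    m*s**2 + c*s + k = 0 has a positive real part."""
    return bool(np.any(np.roots([m, c, k]).real > 0.0))

# Illustrative uncertain damping and stiffness (both can go negative).
n = 20_000
c = rng.normal(loc=0.05, scale=0.10, size=n)
k = rng.normal(loc=1.00, scale=0.20, size=n)

p_instab = np.mean([unstable(1.0, ci, ki) for ci, ki in zip(c, k)])
print(f"Estimated probability of instability: {p_instab:.4f}")
```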

  6. Computing tools for accelerator design calculations

    Energy Technology Data Exchange (ETDEWEB)

    Fischler, M.; Nash, T.

    1984-01-01

    This note is intended as a brief, summary guide for accelerator designers to the new generation of commercial and special processors that allow great increases in computing cost effectiveness. New thinking is required to take best advantage of these computing opportunities, in particular, when moving from analytical approaches to tracking simulations. In this paper, we outline the relevant considerations.

  7. CACTUS: Calculator and Computer Technology User Service.

    Science.gov (United States)

    Hyde, Hartley

    1998-01-01

    Presents an activity in which students use computer-based spreadsheets to find out how many grains of rice end up on a chessboard when one grain is placed on the first square, the amount is doubled on each subsequent square, and the whole board is covered. (ASK)
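
    A worked version of the spreadsheet exercise described in the abstract: one grain on the first square, doubling on each of the 64 squares of the board.

```python
# Total grains on a 64-square chessboard when the count doubles on each square.
squares = 64
grains_per_square = [2 ** i for i in range(squares)]   # 1, 2, 4, 8, ...
total = sum(grains_per_square)                          # equals 2**64 - 1
print(f"Grains on the last square: {grains_per_square[-1]:,}")
print(f"Total grains on the board: {total:,}")
assert total == 2 ** 64 - 1
```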

  8. Computer calculation of Witten's 3-manifold invariant

    International Nuclear Information System (INIS)

    Witten's 2+1 dimensional Chern-Simons theory is exactly solvable. We compute the partition function, a topological invariant of 3-manifolds, on generalized Seifert spaces. Thus we test the path integral using the theory of 3-manifolds. In particular, we compare the exact solution with the asymptotic formula predicted by perturbation theory. We conclude that this path integral works as advertised and gives an effective topological invariant. (orig.)

  9. Computer calculation of Witten's 3-manifold invariant

    Science.gov (United States)

    Freed, Daniel S.; Gompf, Robert E.

    1991-10-01

    Witten's 2+1 dimensional Chern-Simons theory is exactly solvable. We compute the partition function, a topological invariant of 3-manifolds, on generalized Seifert spaces. Thus we test the path integral using the theory of 3-manifolds. In particular, we compare the exact solution with the asymptotic formula predicted by perturbation theory. We conclude that this path integral works as advertised and gives an effective topological invariant.

  10. Parallel computer calculation of quantum spin lattices

    International Nuclear Information System (INIS)

    Numerical simulation allows theorists to convince themselves of the validity of the models they use. In particular, by simulating spin lattices one can judge the validity of a conjecture. Simulating a system defined by a large number of degrees of freedom requires highly sophisticated machines. This study deals with modelling the magnetic interactions between the ions of a crystal. Many exact results have been found for spin-1/2 systems, but not for systems of other spins, for which many simulations have been carried out. Interest in simulations has been renewed by Haldane's conjecture, which stipulates the existence of an energy gap between the ground state and the first excited states of a spin-1 lattice. The existence of this gap has been experimentally demonstrated. This report contains the following four chapters: 1. Spin systems; 2. Calculation of eigenvalues; 3. Programming; 4. Parallel calculation

  11. Graphical representation of supersymmetry and computer calculation

    International Nuclear Information System (INIS)

    A graphical representation of supersymmetry is presented. It clearly expresses the chiral flow appearing in SUSY quantities by representing spinors as directed lines (arrows). The chiral suffixes are expressed by the directions (up, down, left, right) of the arrows. The SL(2,C) invariants are represented by wedges. We are thus freed from the messy symbols of spinor suffixes. The method is applied to 5D supersymmetry, and many applications are expected. The result is suitable for coding in a computer program and is expected to be applicable to various SUSY theories (including supergravity) in various dimensions. (author)

  12. Newnes circuit calculations pocket book with computer programs

    CERN Document Server

    Davies, Thomas J

    2013-01-01

    Newnes Circuit Calculations Pocket Book: With Computer Programs presents equations, examples, and problems in circuit calculations. The text includes 300 computer programs that help solve the problems presented. The book is comprised of 20 chapters that tackle different aspects of circuit calculation. The coverage of the text includes dc voltage, dc circuits, and network theorems. The book also covers oscillators, phasors, and transformers. The text will be useful to electrical engineers and other professionals whose work involves electronic circuitry.
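
    As a flavour of the book's approach of pairing circuit calculations with short programs, here is a small sketch (not taken from the book) for series and parallel resistor combinations and an unloaded dc voltage divider.

```python
def series(*resistors):
    """Total resistance of resistors connected in series."""
    return sum(resistors)

def parallel(*resistors):
    """Total resistance of resistors connected in parallel."""
    return 1.0 / sum(1.0 / r for r in resistors)

def divider_output(v_in, r_top, r_bottom):
    """Output voltage of an unloaded dc voltage divider."""
    return v_in * r_bottom / (r_top + r_bottom)

print(series(100, 220, 330))             # 650 ohms
print(parallel(1000, 1000))              # 500 ohms
print(divider_output(12.0, 4700, 1000))  # roughly 2.1 V
```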

  13. Calculations of angular momentum coupling coefficients on a computer code

    International Nuclear Information System (INIS)

    In this study, Clebsch-Gordan coefficients, 3j symbols, Racah coefficients, and Wigner's 6j and 9j symbols were calculated with the computer code COEFF. The program COEFF calculates angular momentum coupling coefficients and expresses them as the quotient of two integers multiplied by the square root of the quotient of two integers. The program includes subroutines to encode an integer into its prime factors, to decode prime factors back into an integer, and to perform basic arithmetic operations on prime-coded numbers, as well as subroutines which calculate the coupling coefficients themselves. COEFF had originally been prepared to run on a VAX; in this study we adapted the code to run on a PC and tested it successfully. The values obtained in this study were compared with the values from other computer programmes, and good agreement was found.
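
    COEFF itself is a Fortran code. As a quick cross-check of the same quantities, the sympy library (an assumption here, not part of the original work) returns coupling coefficients exactly, as rationals times square roots, much like the quotient-of-integers form described above.

```python
from sympy import S
from sympy.physics.wigner import clebsch_gordan, wigner_3j, wigner_6j, wigner_9j

# Exact values, expressed as rationals times square roots.
print(clebsch_gordan(S(1)/2, S(1)/2, 1, S(1)/2, -S(1)/2, 0))  # sqrt(2)/2
print(wigner_3j(2, 6, 4, 0, 0, 0))                            # sqrt(715)/143
print(wigner_6j(1, 1, 1, 1, 1, 1))                            # 1/6
print(wigner_9j(1, 1, 1, 1, 1, 1, 1, 1, 1))                   # vanishes by symmetry
```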

  14. Analytical calculation of heavy quarkonia production processes in computer

    OpenAIRE

    Braguta, V. V.; Likhoded, A. K.; Luchinsky, A. V.; Poslavsky, S. V.

    2013-01-01

    This report is devoted to the computer-based analytical calculation of heavy-quarkonium production processes in modern experiments such as the LHC, B-factories and super-B-factories. The theoretical description of heavy quarkonia is based on the factorization theorem. This theorem leads to a special structure of the production amplitudes which can be used to develop a computer algorithm that calculates these amplitudes automatically. This report describes this algorithm. As an example ...

  15. CRACKEL: a computer code for CFR fuel management calculations

    International Nuclear Information System (INIS)

    The CRACKLE computer code is designed to perform rapid fuel management surveys of CFR systems. The code calculates overall features such as reactivity, power distributions and breeding gain, and also calculates the plutonium content and power output of each sub-assembly. A number of alternative options are built into the code, in order to permit different fuel management strategies to be calculated and to perform more detailed calculations when necessary. A brief description is given of the methods of calculation and the input facilities of CRACKLE, with examples. (author)

  16. Computer program developed for flowsheet calculations and process data reduction

    Science.gov (United States)

    Alfredson, P. G.; Anastasia, L. J.; Knudsen, I. E.; Koppel, L. B.; Vogel, G. J.

    1969-01-01

    The computer program PACER-65 is used for flowsheet calculations and is easily adapted to process data reduction. Each unit, vessel, meter, and processing operation in the overall flowsheet is represented by a separate subroutine, which the program calls in the order required to complete an overall flowsheet calculation.
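
    PACER-65 is a Fortran program; the sketch below is a loose Python analogue (not the original code) of its organizing idea: one subroutine per process unit, called in flowsheet order so that each unit's outlet stream feeds the next.

```python
# Each process unit is a separate function taking and returning a "stream"
# (here just a dict of flow rates); the driver calls them in flowsheet order.

def feed(stream):
    stream.update({"solids_kg_h": 100.0, "water_kg_h": 900.0})
    return stream

def dryer(stream):
    stream["water_kg_h"] *= 0.10          # assume 90 % of the water is removed
    return stream

def splitter(stream, fraction=0.5):
    a = {k: v * fraction for k, v in stream.items()}
    b = {k: v * (1.0 - fraction) for k, v in stream.items()}
    return a, b

flowsheet_order = [feed, dryer]           # the order the flowsheet requires
stream = {}
for unit in flowsheet_order:
    stream = unit(stream)

product, recycle = splitter(stream, fraction=0.8)
print(product, recycle)
```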

  17. TRIGLAV - a computer programme for research reactor calculation

    Energy Technology Data Exchange (ETDEWEB)

    Persic, A.; Ravnik, M.; Slavic, S.; Zagar, T. (J.Stefan Institute, Ljubljana (Slovenia))

    1999-12-15

    TRIGLAV is a new computer programme for burn-up calculations of mixed cores of research reactors. The code is based on a two-dimensional diffusion model, and an iterative procedure is applied for its solution. The material data used in the model are calculated with the transport programme WIMS. The burn-up increment of the fuel elements is determined from the fission density distribution and the energy produced by the reactor. (orig.)

  18. Computer program for equilibrium calculation and diffusion simulation

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A computer program called TKCALC (thermodynamic and kinetic calculation) has been successfully developed for the purpose of phase equilibrium calculation and diffusion simulation in ternary substitutional alloy systems. The program was subsequently applied to calculate the isothermal sections of the Fe-Cr-Ni system and predict the concentration profiles of two γ/γ single-phase diffusion couples in the Ni-Cr-Al system. The results are in excellent agreement with the THERMO-CALC and DICTRA software packages. Detailed mathematical derivation of some important formulae involved is also elaborated

  19. Quantum Computing Approach to Nonrelativistic and Relativistic Molecular Energy Calculations

    Czech Academy of Sciences Publication Activity Database

    Veis, Libor; Pittner, Jiří

    Hoboken : John Wiley, 2014 - (Kais, S.), s. 107-135 ISBN 978-1-118-49566-7. - (Advances in Chemical Physics. Vol. 154) R&D Projects: GA ČR GA203/08/0626 Institutional support: RVO:61388955 Keywords : full configuration interaction (FCI) calculations * nonrelativistic molecular hamiltonians * quantum computing Subject RIV: CF - Physical ; Theoretical Chemistry

  20. Computer calculation of bacterial survival during industrial poultry scalding

    Science.gov (United States)

    Computer simulation was used to model survival of bacteria during poultry scalding under common industrial conditions. Bacterial survival was calculated in a single-tank single-pass scalder with and without counterflow water movement, in a single-tank two-pass scalder, and in a three-tank two-pass ...
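
    The abstract does not state the survival model used; a common first-order (log-linear) thermal-inactivation sketch, offered here purely as an illustrative assumption, would look like this.

```python
def surviving_fraction(time_s, d_value_s):
    """First-order (log-linear) inactivation: log10 reduction = time / D-value."""
    return 10.0 ** (-time_s / d_value_s)

# Hypothetical numbers: 120 s immersion with a D-value of 40 s at scald temperature.
print(surviving_fraction(120.0, 40.0))   # 0.001, i.e. a 3-log reduction
```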

  1. Development of a computational methodology for internal dose calculations

    CERN Document Server

    Yoriyaz, H

    2000-01-01

    A new approach for calculating internal dose estimates was developed through the use of a more realistic computational model of the human body and a more precise tool for the radiation transport simulation. The present technique makes it possible to build a patient-specific phantom from tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods such as the MCNP-4B code. In order to use the segmented human anatomy as a computational model for the simulation of radiation transport, an interface program, SCMS, was developed to build the geometric configurations for the phantom from the tomographic images. This procedure allows the calculation not only of average dose values but also of the spatial distribution of dose in regions of interest. With the present methodology, absorbed fractions for photons and electrons in various organs of the Zubal segmented phantom were calculated and compared to those reported for the mathematical phanto...

  2. Shieldings for X-ray radiotherapy facilities calculated by computer

    International Nuclear Information System (INIS)

    This work presents a computer-aided methodology for calculating X-ray shielding in radiotherapy facilities. Even today, in Brazil, shielding calculations for X-ray radiotherapy are based on the NCRP-49 recommendation, which establishes the methodology required for the elaboration of a shielding project. For high energies, where the construction of a maze is necessary, NCRP-49 is not very clear, and studies in this field resulted in an article that proposes a solution to the problem. A user-friendly program was developed in the Delphi programming language which, from the manual entry of a basic architectural design and a few parameters, interprets the geometry and calculates the shielding of the walls, ceiling and floor of an X-ray radiotherapy facility. As the final product, the program provides a graphical screen with all the input data, the calculated shielding and the calculation record. The program can be applied in the practical implementation of shielding projects for radiotherapy facilities and can also be used didactically in comparison with NCRP-49.

  3. CRONOS: A modular computational system for neutronic core calculations

    International Nuclear Information System (INIS)

    The CRONOS code has been designed to provide all the computational means needed for pressurized water reactor calculations, including design, fuel management, follow-up and accidents. CRONOS allows steady-state, kinetic and transient multigroup calculations of the power distribution, taking into account thermal-hydraulic feedback effects. All this can be done without any limitation on any parameter (energy groups, meshes...). The code solves either the diffusion equation or the even-parity transport equation with isotropic scattering and sources. Different geometries are available, such as 1-, 2- or 3-dimensional Cartesian geometries, 2D or 3D hexagonal geometries, and cylindrical geometries. The numerical method is based on finite difference or finite element methods. CRONOS 2 has been written with constant attention to portability; presently it runs on very different computers such as the IBM 3090, CRAY 1, CRAY 2, SUN 4, MIPS RS2030 and IBM RS6000. A special data structure is used in order to improve vectorization. CRONOS is based on a modular structure that allows great flexibility of use. It is implemented in the SAPHYR system, which includes the assembly calculation code APOLLO and the thermal-hydraulic core calculation code FLICA IV. A special object-oriented language, named GIBIANE, and a common tool library have been developed to chain the various computation modules of those codes. (author). 11 refs, 1 fig., 5 tabs

  4. Computer and engineering calculations of Brazilian Tokamak-II

    International Nuclear Information System (INIS)

    Analytical and computer calculations carried out by researchers of the Physics Institute of the University of Sao Paulo (IFUSP) for defining the engineering project and constructing the TBR-II tokamak are presented. The hydrodynamic behaviour and the parameters determined for magnetic confinement of the plasma were analysed. The computer code was developed using magnetohydrodynamic (MHD) equations which involve the interactions of the plasma, the magnetic field and the electrical current circulating in more than 20 coils distributed around the toroidal vessel of the plasma. The electromagnetic, thermal and mechanical couplings are also presented. The TBR-II will be fed by two turbo-generators of 15 MW each. (M.C.K.)

  5. Methods and computer codes for nuclear systems calculations

    Indian Academy of Sciences (India)

    B P Kochurov; A P Knyazev; A Yu Kwaretzkheli

    2007-02-01

    Some numerical methods for reactor cells, sub-critical systems and 3D models of nuclear reactors are presented. The methods are developed for steady states and space-time calculations. The computer code TRIFON solves the space-energy problem in (, ) systems of finite height and calculates heterogeneous few-group matrix parameters of reactor cells. These parameters are used as input data in the computer code SHERHAN, which solves the 3D heterogeneous reactor equation for steady states and simulates 3D space-time neutron processes. A modification of TRIFON was developed for the simulation of space-time processes in sub-critical systems with external sources. An option of the SHERHAN code for systems with external sources is under development.

  6. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    International Nuclear Information System (INIS)

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2fmpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel ISAT

  7. Computer Program for Point Location And Calculation of ERror (PLACER)

    Science.gov (United States)

    Granato, Gregory E.

    1999-01-01

    A program designed for point location and calculation of error (PLACER) was developed as part of the Quality Assurance Program of the Federal Highway Administration/U.S. Geological Survey (USGS) National Data and Methodology Synthesis (NDAMS) review process. The program provides a standard method to derive study-site locations from site maps in highway-runoff, urban-runoff, and other research reports. This report provides a guide for using PLACER, documents methods used to estimate study-site locations, documents the NDAMS Study-Site Locator Form, and documents the FORTRAN code used to implement the method. PLACER is a simple program that calculates the latitude and longitude coordinates of one or more study sites plotted on a published map and estimates the uncertainty of these calculated coordinates. PLACER calculates the latitude and longitude of each study site by interpolating between the coordinates of known features and the locations of study sites using any consistent, linear, user-defined coordinate system. This program will read data entered from the computer keyboard and(or) from a formatted text file, and will write the results to the computer screen and to a text file. PLACER is readily transferable to different computers and operating systems with few (if any) modifications because it is written in standard FORTRAN. PLACER can be used to calculate study site locations in latitude and longitude, using known map coordinates or features that are identifiable in geographic information data bases such as USGS Geographic Names Information System, which is available on the World Wide Web.
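
    A minimal sketch of the interpolation idea PLACER implements (the original is FORTRAN and is not reproduced here): given two reference features with known map coordinates and known latitude/longitude, a plotted site's coordinates follow by linear interpolation along each axis. All numbers below are made up.

```python
def interpolate(x, x0, x1, y0, y1):
    """Linear interpolation of y at x, given the points (x0, y0) and (x1, y1)."""
    return y0 + (x - x0) * (y1 - y0) / (x1 - x0)

# Two reference features: map coordinates (any consistent linear units) plus
# known geographic coordinates. Values are illustrative only.
ref_a = {"mx": 10.0, "my": 5.0, "lon": -71.100, "lat": 42.300}
ref_b = {"mx": 90.0, "my": 65.0, "lon": -71.050, "lat": 42.360}

site_mx, site_my = 50.0, 35.0   # the study site as plotted on the map
site_lon = interpolate(site_mx, ref_a["mx"], ref_b["mx"], ref_a["lon"], ref_b["lon"])
site_lat = interpolate(site_my, ref_a["my"], ref_b["my"], ref_a["lat"], ref_b["lat"])
print(f"Estimated site location: {site_lat:.4f}, {site_lon:.4f}")
```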

  8. Automatic computed tomography patient dose calculation using header metadata

    International Nuclear Information System (INIS)

    The present work describes a method that calculates patient dose values in computed tomography (CT) based on metadata contained in DICOM images, in support of patient dose studies. The DICOM metadata are pre-processed to extract the necessary calculation parameters. Vendor-specific DICOM header information is harmonized using vendor translation tables, and unavailable DICOM tags can be completed through a graphical user interface. CT-Expo, an MS Excel application for calculating the radiation dose, is used to calculate the patient doses. All relevant data and calculation results are stored for further analysis in a relational database. Final results are compiled using data mining tools. This solution was successfully used for the 2009 CT dose study in Luxembourg. National diagnostic reference levels for standard examinations were calculated based on data from each of the country's hospitals. Compared with earlier questionnaire-based surveys, this new automatic system saved time and resources during data acquisition and evaluation. (authors)
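
    A sketch of the metadata-extraction step using the pydicom library; this is an assumption for illustration, and neither the study's own pre-processing pipeline nor CT-Expo is reproduced. Dose-relevant header fields are read when present and left as None otherwise.

```python
import pydicom

def dose_parameters(path):
    """Read dose-relevant CT header fields; missing tags come back as None."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    return {
        "manufacturer": ds.get("Manufacturer"),
        "kvp": ds.get("KVP"),
        "exposure_mAs": ds.get("Exposure"),
        "slice_thickness_mm": ds.get("SliceThickness"),
        "ctdi_vol_mGy": ds.get("CTDIvol"),
    }

# Usage (the path is hypothetical):
# print(dose_parameters("ct_series/slice_0001.dcm"))
```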

  9. Analytical calculations by computer in physics and mathematics

    International Nuclear Information System (INIS)

    A review of the present status of analytical calculations by computer is given. Some programming systems for analytical computations are considered: SCHOONSCHIP, CLAM, REDUCE-2, SYMBAL, CAMAL and AVTO-ANALITIK, which are implemented or will be implemented at JINR, and MACSYMA, one of the most developed systems. It is shown, on the basis of the mathematical operations realized in these systems, that they are appropriate for different problems of theoretical physics and mathematics, for example problems of quantum field theory, celestial mechanics, general relativity and so on. Some problems solved at JINR with programming systems for analytical computations are described. The review is intended for specialists in different fields of theoretical physics and mathematics

  10. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    A variational method is developed for systematic numerical computation of physical quantities-bound state energies and scattering amplitudes-in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and present numerical results for simple quantum mechanical systems

  11. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems

  12. Automated objective thyroid ablation dose calculations using interactive computer program

    International Nuclear Information System (INIS)

    Aim: Development of an interactive computer program allowing automatic calculation of the optimized dose of I-131 required for effective ablation of thyroid tissue remnants. Materials and methods: The Standard Thyroid Uptake Neck Phantom (Nucl. Assoc.) was used for measurements of the efficiency of the high-energy (for I-131) and low-energy (for I-123) collimators mounted on the Picker Prism 2000 gamma camera. The efficiency was calculated for a wide range of distances between the patient's neck and the camera head and for different sizes and activities of remnant thyroid tissue. These data were built into the computer memory (Picker Odyssey FX 729) and then used for calculation of the percentage uptake in the neck (regular quality control and maintenance of the gamma camera secures the stability of its performance). On the basis of the uptake on early and late images after administration of the radioisotope, its biological and effective half-lives in the patient are calculated and the dose required to deliver 50 mGy per gram of I-131 radiation to the remaining thyroid tissue is evaluated. Results: The technologist selects the appropriate isotope, enters the patient's dose and the neck-to-collimator distance, then draws the regions of interest around the thyroid remnants on each of the anterior images. No other operator interventions are required. When the regions are assigned, the percentage uptake, biological half-life, effective half-life and required I-131 activity in MBq per gram are calculated automatically. It was found that the efficiency is independent of activity over the range seen clinically. The need for a standard is eliminated and the automated calculations ensure accuracy. Estimation of the remnant mass and the desired radiation dose is required to complete the dose calculations. The program works for both I-131 (using 1 to 3 day and 5 to 10 day images) and I-123 (using 6 and 24 hr images). The program automatically corrects for the exact imaging time. Results are displayed
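
    The core half-life arithmetic described above, in sketch form: the effective half-life follows from the uptakes measured on the early and late images, and the biological half-life then follows from the physical decay constant of I-131. The uptake figures below are illustrative, and the program's remnant-mass and dose-per-gram handling is not reproduced.

```python
import math

T_PHYS_I131_H = 8.02 * 24.0   # physical half-life of I-131 in hours

def effective_half_life_h(uptake_early, t_early_h, uptake_late, t_late_h):
    """Effective half-life (h) from two uptake measurements."""
    lam_eff = math.log(uptake_early / uptake_late) / (t_late_h - t_early_h)
    return math.log(2.0) / lam_eff

def biological_half_life_h(t_eff_h, t_phys_h=T_PHYS_I131_H):
    """1/T_eff = 1/T_phys + 1/T_bio, solved for T_bio."""
    return 1.0 / (1.0 / t_eff_h - 1.0 / t_phys_h)

# Illustrative uptakes: 3.0 % at 24 h and 1.8 % at 96 h after administration.
t_eff = effective_half_life_h(3.0, 24.0, 1.8, 96.0)
print(f"Effective half-life: {t_eff:.1f} h; biological: {biological_half_life_h(t_eff):.1f} h")
```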

  13. Computer program 'SOMC2' for spherical optical model calculations

    International Nuclear Information System (INIS)

    This report is a description of the computer program 'SOMC2', a program for spherical optical model calculations of the nuclear scattering cross sections of neutrons, protons and α particles. In the first section, the formalism and the non-linear least-squares algorithm are presented. Section II is devoted to detailed explanations of all the routines of the program. A brief explanation is given of the methods used to obtain not only the fitting parameters but also their uncertainties and correlations. In Section III detailed explanations of the input-data cards and of the various outputs are given. Finally some examples of calculations are presented

  14. TRING: a computer program for calculating radionuclide transport in groundwater

    International Nuclear Information System (INIS)

    The computer program TRING is described which enables the transport of radionuclides in groundwater to be calculated for use in long term radiological assessments using methods described previously. Examples of the areas of application of the program are activity transport in groundwater associated with accidental spillage or leakage of activity, the shutdown of reactors subject to delayed decommissioning, shallow land burial of intermediate level waste and geologic disposal of high level waste. Some examples of the use of the program are given, together with full details to enable users to run the program. (author)

  15. A computer program for calculating effective capture cross section

    International Nuclear Information System (INIS)

    FORTRAN program CPCS (Computer Program to analyze Capture TOF Spectra) was developed to deduce effective neutron capture cross sections from raw data obtained by a time-of-flight facility at the JAERI Electron Linear Accelerator. The data processing system for capture experiments consists of three stages, i.e. data acquisition, data handling (summing, listing, plotting, etc.), and data analysis (background determination, flux determination, normalization, etc.). In the three stages of processing, three respective computers are used; USC-3, FACOM U-200, and FACOM 230/75. CPCS is included in the stage of data analysis. A feature of this program is that the magnetic disk file is effectively used as INPUT/OUTPUT data storage interconnecting with other programs to determine neutron flux, to average calculated cross sections and to fit data with strength functions. This program is able to handle eight sets of TOF spectra with 8192 channels including channel block option simultaneously. Particular attention is paid to determine a precise background in the wide neutron energy range. (author)

  16. Million atom DFT calculations using coarse graining and petascale computing

    Science.gov (United States)

    Nicholson, Don; Odbadrakh, Kh.; Samolyuk, G. D.; Stoller, R. E.; Zhang, X. G.; Stocks, G. M.

    2014-03-01

    Researchers performing classical Molecular Dynamics (MD) on defect structures often find it necessary to use millions of atoms in their models. It would be useful to perform density functional calculations on these large configurations in order to observe electron-based properties such as local charge and spin and the Hellmann-Feynman forces on the atoms. The great number of atoms usually requires that a subset be "carved" from the configuration and terminated in a less than satisfactory manner, e.g. free space or inappropriate periodic boundary conditions. Coarse graining based on the Locally Self-consistent Multiple Scattering method (LSMS) and petascale computing can circumvent this problem by treating the whole system but dividing the atoms into two groups. In Coarse Grained LSMS (CG-LSMS) one group of atoms has its charge and scattering determined prescriptively based on neighboring atoms, while the remaining atoms have their charge and scattering determined according to DFT as implemented in the LSMS. The method will be demonstrated for a one-million-atom model of a displacement cascade in Fe, for which 24,130 atoms are treated with full DFT and the remaining atoms are treated prescriptively. Work supported as part of the Center for Defect Physics, an Energy Frontier Research Center funded by the U.S. DOE, Office of Science, Basic Energy Sciences; used the Oak Ridge Leadership Computing Facility, Oak Ridge National Laboratory, of the DOE Office of Science.

  17. Computational models for probabilistic neutronic calculation in TADSEA

    International Nuclear Information System (INIS)

    The Very High Temperature Reactor is one of the main candidates for the next generation of nuclear power plants. In pebble-bed reactors, the fuel is contained within graphite pebbles in the form of TRISO particles, which form a randomly packed bed inside a graphite-walled cylindrical cavity. In previous studies, the conceptual design of a Transmutation Advanced Device for Sustainable Energy Applications (TADSEA) was developed. The TADSEA is a pebble-bed ADS cooled by helium and moderated by graphite. In order to simulate the TADSEA correctly, the double heterogeneity of the system must be considered: randomly located pebbles in the core and randomly located TRISO particles inside the fuel pebbles. These features are often neglected because they are difficult to model with the MCNP code, the main reason being the limited number of cells and surfaces that can be defined. In this paper a computational tool is presented which provides a new geometrical model of the fuel pebble for neutronic calculations with MCNPX. The heterogeneity of the system is considered, including the randomly located TRISO particles inside the pebble. Several neutronic computational models of TADSEA's fuel pebbles are also compared in order to study heterogeneity effects. On the other hand, the boundary effect given by the intersection between the pebble surface and the TRISO particles could be significant for the multiplicative properties; a model to study this effect is also presented. (author)

  18. Summaries of recent computer-assisted Feynman diagram calculations

    Energy Technology Data Exchange (ETDEWEB)

    Mark Fischler

    2001-08-16

    The AIHENP Workshop series has traditionally included cutting-edge work on automated computation of Feynman diagrams. The conveners of the Symbolic Problem Solving topic at this ACAT conference felt it would be useful to solicit brief summaries of interesting recent calculations. Since this conference was the first in the series to be held in the Western Hemisphere, it was decided that the summaries would be solicited both from attendees and from researchers who could not attend the conference. This represents a sampling of many of the key calculations being performed. The results were presented at the poster session; contributions from ten researchers were displayed and posted on the web. Although the poster presentation, which can be viewed at conferences.fnal.gov/acat2000/, placed equal emphasis on results presented at the conference and other contributions, here we primarily discuss the latter, which do not appear in full form in these proceedings. This brief paper cannot do full justice to each contribution; interested readers can find details of the work not presented at this conference in references (1), (2), (3), (4), (5), (6), (7).

  19. Comparison of computer code calculations with FEBA test data

    International Nuclear Information System (INIS)

    The FEBA forced feed reflood experiments included base line tests with unblocked geometry. The experiments consisted of separate effect tests on a full-length 5x5 rod bundle. Experimental cladding temperatures and heat transfer coefficients of FEBA test No. 216 are compared with the analytical data postcalculated utilizing the SSYST-3 computer code. The comparison indicates a satisfactory matching of the peak cladding temperatures, quench times and heat transfer coefficients for nearly all axial positions. This agreement was made possible by the use of an artificially adjusted value of the empirical code input parameter in the heat transfer for the dispersed flow regime. A limited comparison of test data and calculations using the RELAP4/MOD6 transient analysis code are also included. In this case the input data for the water entrainment fraction and the liquid weighting factor in the heat transfer for the dispersed flow regime were adjusted to match the experimental data. On the other hand, no fitting of the input parameters was made for the COBRA-TF calculations which are included in the data comparison. (orig.)

  20. Computer code for shielding calculations of x-rays rooms

    International Nuclear Information System (INIS)

    Building an effective barrier against the ionizing radiation present in radiographic rooms requires consideration of many variables. The methodology used for specifying the thickness of the primary and secondary barriers of a traditional radiographic room considers the following factors: use factor, occupancy factor, distance between the source and the wall, workload, air kerma, and distance between the patient and the source. With these data it was possible to develop a computer code which identifies and uses these variables in functions obtained through regression of the graphs provided in the NCRP-147 report (Structural Shielding Design for Medical X-Ray Imaging Facilities) to calculate the shielding of the room walls and of the walls of the darkroom and adjacent areas. With the implemented methodology, the code was validated by comparing its results with a case study provided by the report. The thicknesses obtained cover different materials such as concrete, lead and glass. After validation, a case study of an arbitrary radiographic room was carried out. The development of the code resulted in a user-friendly tool for planning radiographic rooms that comply with the limits established by CNEN-NN-3:01, published in September 2011. (authors)

  1. Systems for neutronic, thermohydraulic and shielding calculation in personal computers

    International Nuclear Information System (INIS)

    The MTR-PC (Materials Testing Reactors-Personal Computers) system has been developed by the Nuclear Engineering Division of INVAP S.E. with the aim of providing an integrated working environment on personal computers for the design and the neutronic, thermohydraulic and shielding analysis of reactors employing plate-type fuel. (Author)

  2. Computing NLTE Opacities -- Node Level Parallel Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Holladay, Daniel [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-09-11

    Presentation. The goal: to produce a robust library capable of computing reasonably accurate opacities inline with the assumption of LTE relaxed (non-LTE). Near term: demonstrate acceleration of non-LTE opacity computation. Far term (if funded): connect to application codes with in-line capability and compute opacities. Study science problems. Use efficient algorithms that expose many levels of parallelism and utilize good memory access patterns for use on advanced architectures. Portability to multiple types of hardware including multicore processors, manycore processors such as KNL, GPUs, etc. Easily coupled to radiation hydrodynamics and thermal radiative transfer codes.

  3. GRUCAL, a computer program for calculating macroscopic group constants

    International Nuclear Information System (INIS)

    Nuclear reactor calculations require material- and composition-dependent, energy-averaged nuclear data to describe the interaction of neutrons with the individual isotopes in the material compositions of reactor zones. The code GRUCAL calculates these macroscopic group constants for given compositions from the material-dependent data of the group constant library GRUBA. The instructions for calculating group constants are not fixed in the program but are read at execution time from a separate instruction file. This allows GRUCAL to be adapted to various problems or different group constant concepts. (orig.)
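
    The underlying arithmetic GRUCAL automates, sketched with made-up numbers: a macroscopic group constant is the sum over isotopes of number density times the microscopic, energy-averaged cross section for that group.

```python
# Sigma_g = sum_i N_i * sigma_{i,g}   (macroscopic group constant, in 1/cm)

# Hypothetical composition: number densities in atoms/(barn*cm).
number_density = {"U235": 0.0007, "U238": 0.022, "O16": 0.045}

# Hypothetical microscopic absorption cross sections (barn) for two energy groups.
sigma_abs = {
    "U235": [1.5, 45.0],
    "U238": [0.3, 1.2],
    "O16":  [0.0002, 0.0002],
}

n_groups = 2
macro = [sum(number_density[iso] * sigma_abs[iso][g] for iso in number_density)
         for g in range(n_groups)]
print([f"{sigma:.4f} 1/cm" for sigma in macro])
```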

  4. Computer code for nuclear reactor core thermal reliability calculation

    International Nuclear Information System (INIS)

    The RASTENAR program for computing the heat-engineering reliability of cores of nuclear reactors operating under stationary conditions is described. The following heat-engineering reliability factors were found to be computable: rated critical margin; limiting critical margin; probability of the onset of critical heat removal in a channel (degraded heat-transfer conditions); probability that no channel would be subject to critical heat removal; and reactor power reserve coefficient. The probability that no channel in the core would experience critical heat removal during boiling while the reactor operates at a fixed power level was taken as the principal quantitative criterion. The structure and limitations of the program are described together with the computation algorithm. The program was written for an M-220 computer

  5. Control of routine radioimmunoassays: a computer program for calculation of control charts for precision and accuracy

    International Nuclear Information System (INIS)

    A computer program is proposed which allows the automatic calculation of control charts for accuracy and precision. The calculated charts enable the analyst to easily check the daily results of a given radioimmunoassay. (Auth.)

  6. Computing energy expenditure from indirect calorimetry data: a calculation exercise

    OpenAIRE

    Alferink, S.J.J.; Heetkamp, M.J.W.; Gerrits, W.J.J.

    2015-01-01

    Energy expenditure (Q) can be accurately derived from the volume of O2 consumed (VO2), and the volume of CO2 (VCO2) and CH4 (VCH4) produced. When the measurements are performed using a respiration chamber, VO2, VCO2 and VCH4 are calculated by the difference between the inflow (l/h) and outflow rates (l/h), plus the change in volume of gas in the chamber between successive measurements. There are many steps involved in the calculation of Q from raw data. These steps are rarely published in ful...
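
    A common final step in such calculations is a Brouwer-type equation converting gas exchange into heat production; the sketch below uses the commonly quoted coefficients, but both the coefficients and the inclusion of urinary nitrogen are assumptions here, not necessarily the exact form used in the cited exercise.

```python
def heat_production_kj(vo2_l, vco2_l, vch4_l, urinary_n_g=0.0):
    """Heat production (kJ) from gas volumes (litres) and urinary nitrogen (g),
    using coefficients commonly attributed to Brouwer (1965)."""
    return 16.18 * vo2_l + 5.02 * vco2_l - 2.17 * vch4_l - 5.99 * urinary_n_g

# Illustrative hourly gas-exchange rates (made up):
print(f"{heat_production_kj(vo2_l=25.0, vco2_l=27.0, vch4_l=0.5):.0f} kJ/h")
```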

  7. Heuristic and computer calculations for the magnitude of metric spaces

    CERN Document Server

    Willerton, Simon

    2009-01-01

    The notion of the magnitude of a compact metric space was considered in arXiv:0908.1582 with Tom Leinster, where the magnitude was calculated for line segments, circles and Cantor sets. In this paper more evidence is presented for a conjectured relationship with a geometric measure theoretic valuation. Firstly, a heuristic is given for deriving this valuation by considering 'large' subspaces of Euclidean space and, secondly, numerical approximations to the magnitude are calculated for squares, disks, cubes, annuli, tori and Sierpinski gaskets. The valuation is seen to be very close to the magnitude for the convex spaces considered and is seen to be 'asymptotically' close for some other spaces.
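
    For a finite metric space the magnitude has a simple closed form: the sum of the entries of the inverse of the similarity matrix Z with Z[i, j] = exp(-d(x_i, x_j)). The paper's numerical approximations for squares, disks and so on come from evaluating this on increasingly fine point samples; the grid size below is an arbitrary choice for illustration.

```python
import numpy as np

def magnitude(points):
    """Magnitude of a finite metric space: sum of the entries of Z^{-1},
    where Z[i, j] = exp(-d(x_i, x_j))."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return float(np.sum(np.linalg.inv(np.exp(-d))))

# Approximate the magnitude of a square of side 5 by a regular grid of points.
n = 20
xs = np.linspace(0.0, 5.0, n)
grid = np.array([(x, y) for x in xs for y in xs])
print(magnitude(grid))
```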

  8. Quantum computing applied to calculations of molecular energies

    Czech Academy of Sciences Publication Activity Database

    Pittner, Jiří; Veis, L.

    2011-01-01

    Roč. 241, - (2011), 151-phys. ISSN 0065-7727. [National Meeting and Exposition of the American-Chemical-Society (ACS) /241./. 27.03.2011-31.03.2011, Anaheim] Institutional research plan: CEZ:AV0Z40400503 Keywords : molecular energie * quantum computers Subject RIV: CF - Physical ; Theoretical Chemistry

  9. Computer code for double beta decay QRPA based calculations

    International Nuclear Information System (INIS)

    The computer code developed by our group some years ago for the evaluation of nuclear matrix elements, within the QRPA and PQRPA nuclear structure models, involved in neutrino-nucleus reactions, muon capture and β± processes, is extended to include also the nuclear double beta decay

  10. Calculation of Linear Systems Metric Tensors via Algebraic Computation

    OpenAIRE

    Neto, Joao Jose de Farias

    2002-01-01

    A formula for the Riemannian metric tensor of differentiable manifolds of linear dynamical systems of same McMillan degree is presented in terms of their transfer function matrices. The necessary calculations for its application to ARMA and state space overlapping parametrizations are drafted. The importance of this approach for systems identification and multiple time series analysis and forecasting is explained.

  11. A FORTRAN Computer Program for Q Sort Calculations

    Science.gov (United States)

    Dunlap, William R.

    1978-01-01

    The Q Sort method is a rank order procedure. A FORTRAN program is described which calculates a total value for any group of cases for the items in the Q Sort, and rank orders the items according to this composite value. (Author/JKS)
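
    A minimal Python analogue of the calculation the FORTRAN program performs (the original code is not reproduced): sum each item's values across the selected cases and rank the items by that composite value.

```python
# Each key is a Q Sort item; each list holds that item's values across the cases.
item_scores = {
    "item_A": [5, 4, 6],
    "item_B": [2, 3, 1],
    "item_C": [7, 7, 5],
}

composite = {item: sum(values) for item, values in item_scores.items()}
ranked = sorted(composite.items(), key=lambda kv: kv[1], reverse=True)

for rank, (item, total) in enumerate(ranked, start=1):
    print(rank, item, total)
```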

  12. On the pressure calculation for polarizable models in computer simulation.

    Science.gov (United States)

    Kiss, Péter T; Baranyai, András

    2012-03-14

    We present a short overview of pressure calculation in molecular dynamics or Monte Carlo simulations. The emphasis is given to polarizable models in order to resolve the controversy caused by the paper of M. J. Louwerse and E. J. Baerends [Chem. Phys. Lett. 421, 138 (2006)] about pressure calculation in systems with periodic boundaries. We systematically derive expressions for the pressure and show that despite the lack of explicit pairwise additivity, the pressure formula for polarizable models is identical with that of nonpolarizable ones. However, a strict condition for using this formula is that the induced dipole should be in perfect mechanical equilibrium prior to pressure calculation. The perfect convergence of induced dipoles ensures conservation of energy as well. We demonstrate using more cumbersome but exact methods that the derived expressions for the polarizable model of water provide correct numerical results. We also show that the inaccuracy caused by imperfect convergence of the induced dipoles correlates with the inaccuracy of the calculated pressure. PMID:22423830

  13. On the calculation of dynamic derivatives using computational fluid dynamics

    OpenAIRE

    Da Ronch, Andrea

    2012-01-01

    In this thesis, the exploitation of computational fluid dynamics (CFD) methods for the flight dynamics of manoeuvring aircraft is investigated. It is demonstrated that CFD can now be used in a reasonably routine fashion to generate stability and control databases. Different strategies to create CFD-derived simulation models across the flight envelope are explored, ranging from combined low-fidelity/high-fidelity methods to reduced-order modelling. For the representation of the unsteady aerody...

  14. TRANS-I: A fast calculating computer code for the calculation of reactivity transients

    International Nuclear Information System (INIS)

    It is shown in the literature that the adiabatic and quasistatic approximations to space-time neutron kinetics are generally fast and conservative methods for calculating reactivity transients. Nevertheless, if feedback reactivity is considered, these methods predict too high values of peak flux, energy production and temperature. It is demonstrated that this deficiency of the adiabatic and quasistatic methods can be removed if the mean fuel temperature is multiplied by a weighting factor to obtain a corrected temperature for calculating the Doppler feedback. The code TRANS-I, which includes this modification, is presented. (author)

  15. Prospective Teachers' Views on the Use of Calculators with Computer Algebra System in Algebra Instruction

    Science.gov (United States)

    Ozgun-Koca, S. Ash

    2010-01-01

    Although growing numbers of secondary school mathematics teachers and students use calculators to study graphs, they mainly rely on paper-and-pencil when manipulating algebraic symbols. However, the Computer Algebra Systems (CAS) on computers or handheld calculators create new possibilities for teaching and learning algebraic manipulation. This…

  16. Computer code for calculating reliability/availability of technical systems

    International Nuclear Information System (INIS)

    Three computer codes that can be applied to reliability analyses of technical systems are reviewed. They are based on fault trees and the laws of probability theory, and can be used for both non-repairable and repairable systems. The simulation code REMO 79 and the analytical code RELAV are based on the assumption that a failure of system components is immediately detected and repaired. The model of the FUPRO2 code provides for failures to be detected and repaired only during periodic functional tests. In addition to the code descriptions, experience and further aspects resulting from modularization of the fault trees are summarized. (author)

  17. Computational benchmark for calculation of silane and siloxane thermochemistry.

    Science.gov (United States)

    Cypryk, Marek; Gostyński, Bartłomiej

    2016-01-01

    Geometries of model chlorosilanes, R3SiCl, silanols, R3SiOH, and disiloxanes, (R3Si)2O, R = H, Me, as well as the thermochemistry of the reactions involving these species were modeled using 11 common density functionals in combination with five basis sets to examine the accuracy and applicability of various theoretical methods in organosilicon chemistry. As the model reactions, the proton affinities of silanols and siloxanes, hydrolysis of chlorosilanes and condensation of silanols to siloxanes were considered. As the reference values, experimental bonding parameters and reaction enthalpies were used wherever available. Where there are no experimental data, W1 and CBS-QB3 values were used instead. For the gas phase conditions, excellent agreement between theoretical CBS-QB3 and W1 and experimental thermochemical values was observed. All DFT methods also give acceptable values and the precision of various functionals used was comparable. No significant advantage of newer more advanced functionals over 'classical' B3LYP and PBEPBE ones was noted. The accuracy of the results was improved significantly when triple-zeta basis sets were used for energy calculations, instead of double-zeta ones. The accuracy of calculations for the reactions in water solution within the SCRF model was inferior compared to the gas phase. However, by careful estimation of corrections to the ΔHsolv and ΔGsolv of H(+) and HCl, reasonable values of thermodynamic quantities for the discussed reactions can be obtained. PMID:26781663

  18. Parallel computation of automatic differentiation applied to magnetic field calculations

    International Nuclear Information System (INIS)

    The author presents a parallelization of an accelerator physics application to simulate magnetic field in three dimensions. The problem involves the evaluation of high order derivatives with respect to two variables of a multivariate function. Automatic differentiation software had been used with some success, but the computation time was prohibitive. The implementation runs on several platforms, including a network of workstations using PVM, a MasPar using MPFortran, and a CM-5 using CMFortran. A careful examination of the code led to several optimizations that improved its serial performance by a factor of 8.7. The parallelization produced further improvements, especially on the MasPar with a speedup factor of 620. As a result a problem that took six days on a SPARC 10/41 now runs in minutes on the MasPar, making it feasible for physicists at Lawrence Berkeley Laboratory to simulate larger magnets
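
    The application relies on automatic differentiation to obtain high-order derivatives. As a reminder of the basic idea only (first order, one variable, and not the software used in the work above), forward-mode AD can be sketched with dual numbers.

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    """Dual number a + b*eps with eps**2 = 0: carries a value and its derivative."""
    val: float
    der: float = 0.0

    def __add__(self, other):
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)

def sin(x: Dual) -> Dual:
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# d/dx [ x * sin(x) ] at x = 1.2, seeded with derivative 1.
x = Dual(1.2, 1.0)
y = x * sin(x)
print(y.val, y.der)   # derivative equals sin(1.2) + 1.2*cos(1.2)
```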

  19. A computer code for beam optics calculation--third order approximation

    Institute of Scientific and Technical Information of China (English)

    LÜ Jianqin; LI Jinhai

    2006-01-01

    To calculate beam transport in ion optical systems accurately, a beam dynamics computer program with third-order approximation has been developed. Many conventional optical elements are incorporated in the program. Particle distributions of uniform or Gaussian type in (x, y, z) 3D ellipses can be selected by the user. Optimization procedures are provided to make the calculations reasonable and fast. The calculated results can be displayed graphically on the computer monitor.
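
    The program described above works to third order; as a much simpler illustration of the transfer-map idea it generalizes, here is a first-order, one-plane sketch with a drift and a thin-lens quadrupole. Element lengths and the focal length are arbitrary assumed values.

```python
import numpy as np

def drift(length_m):
    """First-order transfer matrix of a drift for the (x, x') plane."""
    return np.array([[1.0, length_m],
                     [0.0, 1.0]])

def thin_quad(focal_m):
    """Thin-lens quadrupole (focusing for a positive focal length)."""
    return np.array([[1.0, 0.0],
                     [-1.0 / focal_m, 1.0]])

# Track a particle with x = 1 mm, x' = 0.5 mrad through drift-quad-drift;
# the rightmost matrix acts on the particle first.
beamline = drift(0.5) @ thin_quad(2.0) @ drift(0.5)
x0 = np.array([1.0e-3, 0.5e-3])
print(beamline @ x0)
```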

  20. Parallel computer calculation of quantum spin lattices; Calcul de chaines de spins quantiques sur ordinateur parallele

    Energy Technology Data Exchange (ETDEWEB)

    Lamarcq, J. [Service de Physique Theorique, CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France)]

    1998-07-10

    Numerical simulation allows theorists to convince themselves of the validity of the models they use. In particular, by simulating spin lattices one can judge the validity of a conjecture. Simulating a system defined by a large number of degrees of freedom requires highly sophisticated machines. This study deals with modelling the magnetic interactions between the ions of a crystal. Many exact results have been found for spin-1/2 systems, but not for systems of other spins, for which many simulations have been carried out. Interest in simulations has been renewed by Haldane's conjecture, which stipulates the existence of an energy gap between the ground state and the first excited states of a spin-1 lattice. The existence of this gap has been experimentally demonstrated. This report contains the following four chapters: 1. Spin systems; 2. Calculation of eigenvalues; 3. Programming; 4. Parallel calculation 14 refs., 6 figs.

  1. Easy-to-use application programs for decay heat and delayed neutron calculations on personal computers

    Energy Technology Data Exchange (ETDEWEB)

    Oyamatsu, Kazuhiro [Nagoya Univ. (Japan)]

    1998-03-01

    Application programs for personal computers have been developed to calculate the decay heat power and delayed neutron activity from fission products. The main programs can be used on any computer, from personal computers to mainframes, because their source code is written in Fortran. The programs have user-friendly interfaces so that they can be used easily not only for research activities but also for educational purposes. (author)

  2. Burnup calculations using the ORIGEN code in the CONKEMO computing system

    International Nuclear Information System (INIS)

    This article describes the CONKEMO computing system for kinetic multigroup calculations of nuclear reactors and their physical characteristics during burnup. The ORIGEN burnup calculation code has been added to the system. The results of an international benchmark calculation are also presented. (author)

  3. Computer calculations in interstitial seed therapy: I. Radiation treatment planning

    International Nuclear Information System (INIS)

    Interstitial seed therapy computers can be used for radiation treatment planning and for dose control after implantation. In interstitial therapy with radioactive seeds there are much greater differences between planning and carrying out radiation treatment than in teletherapy with cobalt-60 or X-rays. Because of the short distance between radioactive sources and tumour tissue, even slight deviations from the planned implantation geometry cause considerable dose deviations. Furthermore, the distribution of seeds in an actual implant is inhomogeneous. During implantation the spatial distribution of seeds cannot be examined exactly, though X-rays are used to control the operation. The afterloading technique of Henschke allows a more exact implantation geometry, but I have no experience of this method. In spite of the technical difficulty of achieving optimum geometry, interstitial therapy still has certain advantages when compared with teletherapy: the dose in the treated volume can be kept smaller than in teletherapy, the radiation can be better concentrated in the tumour volume, the treatment can be restricted to one or two operations, and localized inoperable tumours may be cured more easily. The latter may depend on an optimal treatment time, a relatively high tumour dose and a continuous exponentially decreasing dose rate during the treatment time. A disadvantage of interstitial therapy is the high personnel dose, which may be reduced by the afterloading technique of Henschke (1956). However, the afterloading method requires much greater personnel and instrumental expense than free implantation of radiogold seeds and causes greater trauma for the patient

  4. Parallel beam dynamics calculations on high performance computers

    International Nuclear Information System (INIS)

    Faced with a backlog of nuclear waste and weapons plutonium, as well as an ever-increasing public concern about safety and environmental issues associated with conventional nuclear reactors, many countries are studying new, accelerator-driven technologies that hold the promise of providing safe and effective solutions to these problems. Proposed projects include accelerator transmutation of waste (ATW), accelerator-based conversion of plutonium (ABC), accelerator-driven energy production (ADEP), and accelerator production of tritium (APT). Also, next-generation spallation neutron sources based on similar technology will play a major role in materials science and biological science research. The design of accelerators for these projects will require a major advance in numerical modeling capability. For example, beam dynamics simulations with approximately 100 million particles will be needed to ensure that extremely stringent beam loss requirements (less than a nanoampere per meter) can be met. Compared with typical present-day modeling using 10,000-100,000 particles, this represents an increase of 3-4 orders of magnitude. High performance computing (HPC) platforms make it possible to perform such large scale simulations, which require 10's of GBytes of memory. They also make it possible to perform smaller simulations in a matter of hours that would require months to run on a single processor workstation. This paper will describe how HPC platforms can be used to perform the numerically intensive beam dynamics simulations required for development of these new accelerator-driven technologies

  5. Computational challenges in large nucleosynthesis calculations in stars

    International Nuclear Information System (INIS)

    The study of how the elements form in stars requires significant computational effort. The time scale of nuclear reactions in different evolutionary phases of stars changes by several orders of magnitude and requires the implementation of fully implicit solvers to obtain precise results; a lack of accuracy can be a severe issue, in particular under explosive conditions such as in supernovae. Another important point to consider is the number of isotopic species that need to be included in the simulations. Neutron capture processes are mainly responsible for producing the abundances of elements heavier than iron. For the slow neutron capture process (i.e., the s process), the typical number of species is about 600, whereas for the explosive rapid neutron capture process (i.e., the r process) the dimension of the matrix that needs to be inverted to solve the nucleosynthesis equations is well above 1000. I aim to present these topics by providing a general overview of the astrophysical scenarios involved and showing meaningful examples to clarify the discussion. (author)

  6. Direct Calculation of Protein Fitness Landscapes through Computational Protein Design.

    Science.gov (United States)

    Au, Loretta; Green, David F

    2016-01-01

    Naturally selected amino-acid sequences or experimentally derived ones are often the basis for understanding how protein three-dimensional conformation and function are determined by primary structure. Such sequences for a protein family comprise only a small fraction of all possible variants, however, representing the fitness landscape with limited scope. Explicitly sampling and characterizing alternative, unexplored protein sequences would directly identify fundamental reasons for sequence robustness (or variability), and we demonstrate that computational methods offer an efficient mechanism toward this end, on a large scale. The dead-end elimination and A* search algorithms were used here to find all low-energy single mutant variants, and corresponding structures of a G-protein heterotrimer, to measure changes in structural stability and binding interactions to define a protein fitness landscape. We established consistency between these algorithms with known biophysical and evolutionary trends for amino-acid substitutions, and could thus recapitulate known protein side-chain interactions and predict novel ones. PMID:26745411
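
    For orientation, the classic dead-end elimination inequality mentioned above can be sketched on a toy random energy table; the positions, rotamer counts and energy values below are invented for illustration, and real protein-design codes combine far richer energy functions with an A* search over the surviving rotamers.

        import numpy as np

        rng = np.random.default_rng(0)
        n_pos, n_rot = 4, 3                       # positions and rotamers per position (toy sizes)
        E_self = rng.normal(size=(n_pos, n_rot))  # self energies E(i_r)
        E_pair = rng.normal(size=(n_pos, n_rot, n_pos, n_rot))  # pair energies E(i_r, j_s)

        def dee_eliminate(E_self, E_pair):
            """Original dead-end elimination: prune rotamers that can never be in the global minimum.

            Rotamer r at position i is eliminated if some competitor t satisfies
            E(i_r) + sum_j min_s E(i_r, j_s) > E(i_t) + sum_j max_s E(i_t, j_s).
            """
            n_pos, n_rot = E_self.shape
            alive = np.ones((n_pos, n_rot), dtype=bool)
            for i in range(n_pos):
                others = [j for j in range(n_pos) if j != i]
                for r in range(n_rot):
                    lower_r = E_self[i, r] + sum(E_pair[i, r, j].min() for j in others)
                    for t in range(n_rot):
                        if t == r:
                            continue
                        upper_t = E_self[i, t] + sum(E_pair[i, t, j].max() for j in others)
                        if lower_r > upper_t:     # r is "dead-ending" relative to t
                            alive[i, r] = False
                            break
            return alive

        print(dee_eliminate(E_self, E_pair))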

  7. Parallel beam dynamics calculations on high performance computers

    International Nuclear Information System (INIS)

    Faced with a backlog of nuclear waste and weapons plutonium, as well as an ever-increasing public concern about safety and environmental issues associated with conventional nuclear reactors, many countries are studying new, accelerator-driven technologies that hold the promise of providing safe and effective solutions to these problems. Proposed projects include accelerator transmutation of waste (ATW), accelerator-based conversion of plutonium (ABC), accelerator-driven energy production (ADEP), and accelerator production of tritium (APT). Also, next-generation spallation neutron sources based on similar technology will play a major role in materials science and biological science research. The design of accelerators for these projects will require a major advance in numerical modeling capability. For example, beam dynamics simulations with approximately 100 million particles will be needed to ensure that extremely stringent beam loss requirements (less than a nanoampere per meter) can be met. Compared with typical present-day modeling using 10,000-100,000 particles, this represents an increase of 3-4 orders of magnitude. High performance computing (HPC) platforms make it possible to perform such large scale simulations, which require 10's of GBytes of memory. They also make it possible to perform smaller simulations in a matter of hours that would require months to run on a single processor workstation. This paper will describe how HPC platforms can be used to perform the numerically intensive beam dynamics simulations required for development of these new accelerator-driven technologies. copyright 1997 American Institute of Physics

  8. pH and conductivity of sodium phosphate solutions. [Computer calculation

    Energy Technology Data Exchange (ETDEWEB)

    Wright, J.M.; VonNieda, G.E.

    1979-03-01

    This paper describes a computer program for the calculation of the pH and conductivity of sodium phosphate solutions over the phosphate concentration range of 1 to 10000 ppm and sodium-to-phosphate molar ratios of approximately 2 to 3. pH can be calculated over the temperature range of 0 to 300°C; conductivities can be calculated over the temperature range of 0 to 50°C. Calculated values of pH and conductivity are compared to measured values and found to be in excellent agreement. Several practical uses for the computer program are discussed.

  9. Radiation therapy calculations using an on-demand virtual cluster via cloud computing

    CERN Document Server

    Keyes, Roy W; Arnold, Dorian; Luan, Shuang

    2010-01-01

    Computer hardware costs are the limiting factor in producing highly accurate radiation dose calculations on convenient time scales. Because of this, large-scale, full Monte Carlo simulations and other resource intensive algorithms are often considered infeasible for clinical settings. The emerging cloud computing paradigm promises to fundamentally alter the economics of such calculations by providing relatively cheap, on-demand, pay-as-you-go computing resources over the Internet. We believe that cloud computing will usher in a new era, in which very large scale calculations will be routinely performed by clinics and researchers using cloud-based resources. In this research, several proof-of-concept radiation therapy calculations were successfully performed on a cloud-based virtual Monte Carlo cluster. Performance evaluations were made of a distributed processing framework developed specifically for this project. The expected 1/n performance was observed with some caveats. The economics of cloud-based virtual...

  10. ANIGAM: a computer code for the automatic calculation of nuclear group data

    International Nuclear Information System (INIS)

    The computer code ANIGAM consists mainly of the well-known programmes GAM-I and ANISN, as well as a subroutine which reads the THERMOS cross section library and prepares it for ANISN. ANIGAM has been written for the automatic calculation of microscopic and macroscopic cross sections of light water reactor fuel assemblies. In a single computer run, both the cross sections representative of fuel assemblies for reactor core calculations and the cross sections of each cell type of a fuel assembly are calculated. The calculated data are delivered by an auxiliary programme to EXTERMINATOR and CITATION for subsequent diffusion or burnup calculations. This report contains a detailed description of the computer codes and methods used in ANIGAM, a description of the subroutines and of the OVERLAY structure, and an input and output description. (orig.)

  11. Neutron spectra calculation in material in order to compute irradiation damage

    International Nuclear Information System (INIS)

    This short presentation deals with neutron spectrum calculation methods used to compute the damage rate in irradiated structures. Three computation schemes are used in the French C.E.A.: (1) 3-dimensional calculations using the line-of-sight attenuation method (MERCURE IV code), the removal cross section being obtained from an adjustment on a 1-dimensional transport calculation with the discrete ordinates code ANISN; (2) 2-dimensional calculations using the discrete ordinates method (DOT 3.5 code), with a 20- to 30-group library obtained by collapsing a 100-group library on fluxes computed by ANISN; (3) 3-dimensional calculations using the Monte Carlo method (TRIPOLI system). The cross sections, which originally came from UKNDL 73 and ENDF/B3, are now processed from ENDF/B-IV. (author)

  12. Some questions of using coding theory and analytical calculation methods on computers

    International Nuclear Information System (INIS)

    The main results of investigations devoted to the application of the theory and practice of correcting codes are presented. These results are used to create very fast units for the selection of events registered in multichannel detectors of nuclear particles. Using this theory together with analytical calculations on computers, essentially new combinational devices, for example parallel encoders, have been developed. Questions concerning the creation of a new algorithm for the calculation of digital functions by computers, and the problem of devising universal, dynamically reprogrammable logic modules, are discussed

  13. Comparison of molecular energies calculation using simulated quantum algorithm and classical computer methods

    Science.gov (United States)

    Lesniak, Joseph; Behrman, Elizabeth; Zandler, Melvin; Kumar, Preethika

    2008-03-01

    Very few quantum algorithms are usable today. When calculating molecular energies, a quantum algorithm takes advantage of the quantum nature of both the algorithm and the calculation. A few small molecules have been used to show that this method is feasible. The method will be applied to larger molecules and compared to classical computer methods.

  14. SAMDIST: A computer code for calculating statistical distributions for R-matrix resonance parameters

    Energy Technology Data Exchange (ETDEWEB)

    Leal, L.C.; Larson, N.M.

    1995-09-01

    The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular form.

  15. Increasing the computational speed of flash calculations with applications for compositional, transient simulations

    DEFF Research Database (Denmark)

    Rasmussen, Claus P.; Krejbjerg, Kristian; Michelsen, Michael Locht; Bjurstrøm, Kersti E.

    2006-01-01

    Approaches are presented for reducing the computation time spent on flash calculations in compositional, transient simulations. In a conventional flash calculation, the majority of the simulation time is spent on stability analysis, even for systems far into the single-phase region. A criterion has been implemented for deciding when it is justified to bypass the stability analysis. With the implementation of the developed time-saving initiatives, it has been shown for a number of compositional, transient pipeline simulations that a reduction of the computation time spent on flash calculations by...

  16. A Computer Program for Calculation of Approximate Embryo/Fetus Radiation Dose in Nuclear Medicine Applications

    Directory of Open Access Journals (Sweden)

    Tuncay Bayram

    2012-04-01

    Objective: In this study, we aimed to develop a computer program that calculates the approximate radiation dose received by the embryo/fetus in nuclear medicine applications. Material and Methods: Radiation dose values per MBq received by the embryo/fetus in nuclear medicine applications were gathered from the literature for various stages of pregnancy. These values were embedded in the computer code, which was written in the Fortran 90 programming language. Results: The computer program, called nmfdose, covers almost all radiopharmaceuticals used in nuclear medicine applications. The approximate radiation dose received by the embryo/fetus can be calculated easily in a few steps using this program. Conclusion: Although there are some constraints on using the program in certain special cases, nmfdose is useful and provides a practical solution for calculating the approximate dose to the embryo/fetus in nuclear medicine applications. (MIRT 2012;21:19-22)
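
    The underlying arithmetic of such a program is a table lookup of literature dose coefficients, presumably multiplied by the administered activity. The sketch below is not the nmfdose program; the coefficient values, keys and function name are invented for illustration.

        # Hypothetical dose coefficients (mGy per MBq administered), indexed by
        # radiopharmaceutical and gestational stage -- illustrative numbers only.
        DOSE_COEFF = {
            ("Tc-99m MDP", "early"): 6.1e-3,
            ("Tc-99m MDP", "9 months"): 1.8e-3,
        }

        def fetal_dose(radiopharmaceutical, stage, administered_MBq):
            """Approximate embryo/fetus dose in mGy = dose coefficient * administered activity."""
            return DOSE_COEFF[(radiopharmaceutical, stage)] * administered_MBq

        print(fetal_dose("Tc-99m MDP", "early", 740))  # e.g. a 740 MBq bone-scan administration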

  17. On the theories, techniques, and computer codes used in numerical reactor criticality and burnup calculations

    International Nuclear Information System (INIS)

    The purpose of this paper is to discuss the theories, techniques and computer codes that are frequently used in numerical reactor criticality and burnup calculations. It is part of an integrated nuclear reactor calculation scheme conducted by the Reactors Department, Inshas Nuclear Research Centre. The crucial part of numerical reactor criticality and burnup calculations is the determination of the neutron flux distribution, which can be obtained in principle as a solution of the Boltzmann transport equation. Numerical methods used for solving transport equations are discussed. Emphasis is placed on numerical techniques based on multigroup diffusion theory; these include nodal, modal, and finite difference techniques. The most commonly known computer codes utilizing these techniques are reviewed. Some of the main computer codes related to numerical reactor criticality and burnup calculations that have already been developed at the Reactors Department are also presented

  18. Calculation reduction method for color computer-generated hologram using color space conversion

    CERN Document Server

    Shimobaba, Tomoyoshi; Oikawa, Minoru; Takada, Naoki; Okada, Naohisa; Endo, Yutaka; Hirayama, Ryuji; Ito, Tomoyoshi

    2013-01-01

    We report a calculation reduction method for color computer-generated holograms (CGHs) using color space conversion. Color CGHs are generally calculated in RGB space. In this paper, we calculate color CGHs in other color spaces, for example YCbCr color space. In YCbCr color space, an RGB image is converted to the luminance component (Y), blue-difference chroma (Cb) and red-difference chroma (Cr) components. The human eye readily perceives even a small difference in the luminance component, but is much less sensitive to differences in the chroma components. In this method, the luminance component is therefore sampled at full resolution while the chroma components are down-sampled. The down-sampling allows us to accelerate the calculation of the color CGHs. We compute diffraction calculations from the components, and then convert the diffracted results in YCbCr color space to RGB color space.
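
    A sketch of the color-space step described above: converting an RGB image to YCbCr (BT.601 full-range coefficients assumed here), keeping the luminance at full resolution and down-sampling the chroma channels before any diffraction calculation. The hologram computation itself is omitted, and the image is a random stand-in.

        import numpy as np

        def rgb_to_ycbcr(rgb):
            """BT.601 full-range RGB -> YCbCr conversion; rgb is (H, W, 3) with values in [0, 1]."""
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            y  =  0.299 * r + 0.587 * g + 0.114 * b
            cb = -0.168736 * r - 0.331264 * g + 0.5 * b
            cr =  0.5 * r - 0.418688 * g - 0.081312 * b
            return y, cb, cr

        def downsample2(channel):
            """Naive 2x2 chroma down-sampling by block averaging (H and W assumed even)."""
            h, w = channel.shape
            return channel.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

        rgb = np.random.rand(256, 256, 3)            # stand-in for a colour scene
        y, cb, cr = rgb_to_ycbcr(rgb)
        cb_small, cr_small = downsample2(cb), downsample2(cr)
        # Diffraction (CGH) calculations would now be run on y at full resolution and on
        # cb_small / cr_small at quarter size, cutting the chroma workload roughly fourfold.
        print(y.shape, cb_small.shape, cr_small.shape)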

  19. Computer codes used in the calculation of high-temperature thermodynamic properties of sodium

    International Nuclear Information System (INIS)

    Three computer codes - SODIPROP, NAVAPOR, and NASUPER - were written in order to calculate a self-consistent set of thermodynamic properties for saturated, subcooled, and superheated sodium. These calculations incorporate new critical parameters (temperature, pressure, and density) and recently derived single equations for enthalpy and vapor pressure. The following thermodynamic properties have been calculated in these codes: enthalpy, heat capacity, entropy, vapor pressure, heat of vaporization, density, volumetric thermal expansion coefficient, compressibility, and thermal pressure coefficient. In the code SODIPROP, these properties are calculated for saturated and subcooled liquid sodium. Thermodynamic properties of saturated sodium vapor are calculated in the code NAVAPOR. The code NASUPER calculates thermodynamic properties for super-heated sodium vapor only for low (< 1644 K) temperatures. No calculations were made for the supercritical region

  20. Parallel diffusion calculation for the PHAETON on-line multiprocessor computer

    International Nuclear Information System (INIS)

    The aim of the PHAETON project is the design of an on-line computer to improve the immediate knowledge of the main operating and safety parameters in power plants. A significant stage is the computation of the three-dimensional flux distribution. For cost and safety reasons, a computer based on a parallel microprocessor architecture has been studied. This paper presents a first approach to a parallelized three-dimensional diffusion calculation. Computing software has been written and run on a four-processor demonstrator. The realization of the final equipment, currently in progress, is also presented. 8 refs

  1. Efficient Computation of Power, Force, and Torque in BEM Scattering Calculations

    CERN Document Server

    Reid, M T Homer

    2013-01-01

    We present concise, computationally efficient formulas for several quantities of interest -- including absorbed and scattered power, optical force (radiation pressure), and torque -- in scattering calculations performed using the boundary-element method (BEM) [also known as the method of moments (MOM)]. Our formulas compute the quantities of interest directly from the BEM surface currents with no need ever to compute the scattered electromagnetic fields. We derive our new formulas and demonstrate their effectiveness by computing power, force, and torque in a number of example geometries. Free, open-source software implementations of our formulas are available for download online.

  2. HADOC: a computer code for calculation of external and inhalation doses from acute radionuclide releases

    International Nuclear Information System (INIS)

    The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external dose from air submersion and inhalation doses following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factors supplied in a data library. Doses are reported for one- and fifty-year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contributions to dose by radionuclide and exposure mode are also printed if requested

  3. PABLM: a computer program to calculate accumulated radiation doses from radionuclides in the environment

    International Nuclear Information System (INIS)

    A computer program, PABLM, was written to facilitate the calculation of internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. This report contains details of mathematical models used and calculational procedures required to run the computer program. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides in the environment after the releases have ended. Radioactive decay is considered during the release of radionuclides, after they are deposited on the plants or ground, and during holdup of food after harvest. The radiation dose models consider several exposure pathways. Doses may be calculated for either a maximum-exposed individual or for a population group. The doses calculated are accumulated doses from continuous chronic exposure. A first-year committed dose is calculated as well as an integrated dose for a selected number of years. The equations for calculating internal radiation doses are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and MPC's of each radionuclide. The radiation doses from external exposure to contaminated water and soil are calculated using the basic assumption that the contaminated medium is large enough to be considered an infinite volume or plane relative to the range of the emitted radiations. The equations for calculations of the radiation dose from external exposure to shoreline sediments include a correction for the finite width of the contaminated beach

  4. Microcomputers, desk calculators and process computers for use in radiation protection

    International Nuclear Information System (INIS)

    The goals achievable, or to be pursued, in radiation protection measurement and evaluation by using computers are explained. As there is a large variety of computers available, offering a likewise large variety of performance levels, the use of a computer is justified even for minor measuring and evaluation tasks. The subdivision into microcomputers as an installed part of measuring equipment, measuring and evaluation systems with desk calculators, and measuring and evaluation systems with process computers serves to explain the importance and extent of the measuring or evaluation tasks and the computing devices suitable for the various purposes. The special requirements to be met in order to fulfill the different tasks are discussed, both in terms of hardware and software and in terms of the skill and knowledge of the personnel, and are illustrated by an example showing the usefulness of computers in radiation protection. (orig./HP)

  5. Calculation of Plutonium content in RSG-GAS spent fuel using IAFUEL computer code

    International Nuclear Information System (INIS)

    The content of the isotopes Pu-239, Pu-240, Pu-241, and Pu-242 has been calculated for MTR-type reactor fuel containing about 250 grams of U-235. The calculation was performed in three steps. The first step is to determine the library from the calculation output at BOC (Beginning of Cycle). The second step is to determine the core isotope density, the weight of plutonium for one core, and the isotope density for a single fuel element. The third step is to calculate the weight of plutonium in grams. All calculations are performed with the IAFUEL computer code. The calculated content of each Pu isotope is: Pu-239, 6.7666 g; Pu-240, 1.4628 g; Pu-241, 0.52951 g; and Pu-242, 0.068952 g

  6. Development of dose calculation system of brachytherapy with a personal computer. 138

    International Nuclear Information System (INIS)

    A dose calculation system for brachytherapy was developed on a personal computer. The system is made up of a 16-bit personal computer and a digitizer with a light panel. MS-DOS version 2.1 was used as the operating system, and the programs were written in BASIC (compiler version) and assembler. The characteristics of the system are: (1) low cost, (2) high performance in calculation speed and data transfer, (3) high accuracy of the calculated dose distribution, and (4) consideration of the absorption of gamma rays within the sources themselves and their containers. In this paper, the functions of the system and its performance are described in detail. Moreover, we show the results of an estimation of the accuracy of the calculated dose. 10 refs.; 5 figs.; 1 table

  7. GENGTC-JB: a computer program to calculate temperature distribution for cylindrical geometry capsule

    International Nuclear Information System (INIS)

    In the design of JMTR irradiation capsules containing specimens, a program named GENGTC has generally been used to evaluate temperature distributions in the capsules. The program was originally compiled by ORNL (U.S.A.) and consists of very simple calculation methods. Owing to these simple calculation methods, the program is easy to use and has many applications in capsule design. However, when the program was reviewed in light of current computer capabilities, it was considered desirable to replace the original computing methods with more advanced ones and to simplify the complicated data input. Therefore, the program was upgraded with the aim of improving both the calculations and the input method. The present report describes the revised calculation methods and gives an input/output guide for the upgraded program. (author)

  8. A symbolic computing environment for doing calculations in quantum field theory

    International Nuclear Information System (INIS)

    A computational environment, in the form of a set of MapleV R.3 routines for doing symbolic calculations in Quantum Field Theory, is presented. The QFT package's routines extend the standard MapleV computational domain by introducing representations for anticommutative and noncommutative objects, tensors, spinors and gauge fields, as well as related objects and procedures (Dirac matrices, differential operators, functional differentiation w.r.t. indexed fields, the summation rule for repeated indices, etc.). Furthermore, the QFT routines permit user definition of algebra rules for the commutation/anticommutation of operators, to be taken into account during the calculations. (author)

  9. Radiation damage calculation by NPRIM computer code with JENDL3.3

    International Nuclear Information System (INIS)

    The Neutron Damage Evaluation Group of the Atomic Energy Society of Japan has started an identification of neutron-induced radiation damage in materials for typical neutron fields. For this study, a computer code, NPRIM, has been developed to remove the tedious computational effort previously devoted to the calculation of derived quantities such as dpa and the helium production rate. Neutron cross sections for damage reactions, based on JENDL3.3, are given in a 640-group structure. The impact of the JENDL3.3-based cross sections on damage calculation results is described in this paper. (author)
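
    The derived quantity mentioned above, the displacement-per-atom (dpa) rate, reduces to a group-wise sum once group fluxes and dpa cross sections are available. The sketch below uses made-up three-group numbers and is not tied to NPRIM's 640-group library.

        import numpy as np

        # Illustrative 3-group data (NPRIM itself uses a 640-group structure).
        flux = np.array([1.0e13, 5.0e12, 2.0e12])      # group fluxes, n/cm^2/s
        sigma_dpa = np.array([300.0, 900.0, 1500.0])   # dpa cross sections, barns

        BARN = 1.0e-24                                 # cm^2 per barn
        dpa_rate = np.sum(flux * sigma_dpa * BARN)     # displacements per atom per second

        seconds_per_year = 3.156e7
        print(f"dpa rate = {dpa_rate:.3e} /s, "
              f"~{dpa_rate * seconds_per_year:.2f} dpa per full-power year")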

  10. A FORTRAN computer code for calculating flows in multiple-blade-element cascades

    Science.gov (United States)

    Mcfarland, E. R.

    1985-01-01

    A solution technique has been developed for solving the multiple-blade-element, surface-of-revolution, blade-to-blade flow problem in turbomachinery. The calculation solves approximate flow equations which include the effects of compressibility, radius change, blade-row rotation, and variable stream sheet thickness. An integral equation solution (i.e., panel method) is used to solve the equations. A description of the computer code and computer code input is given in this report.

  11. Off-site dose calculation computer code based on ICRP-60(II) - liquid radioactive effluents -

    International Nuclear Information System (INIS)

    The development of the computer code for calculating off-site doses (K-DOSE60) was based on ICRP-60 and the dose calculation equations of Reg. Guide 1.109. In this paper, the methodology for computing doses from liquid effluents is described. To examine the reliability of the K-DOSE60 code, the results obtained from K-DOSE60 were compared with analytic solutions. For liquid effluents, the results from K-DOSE60 are in agreement with the analytic solutions

  12. Simple and effective calculations about spectral power distributions of outdoor light sources for computer vision.

    Science.gov (United States)

    Tian, Jiandong; Duan, Zhigang; Ren, Weihong; Han, Zhi; Tang, Yandong

    2016-04-01

    The spectral power distributions (SPD) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance of a scene to vary and gives rise to common natural illumination phenomena such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or zenith angles) and under different atmospheric conditions is of interest to physically-based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculation method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be applied directly in computer vision. It can be used in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of these applications demonstrate that our calculation method has practical value in computer vision. It establishes a bridge between images and physical environmental information, e.g., time, location, and weather conditions. PMID:27137018
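
    The core of such transmittance-based models is Beer-Lambert attenuation of the extraterrestrial solar spectrum along the slant path. The sketch below assumes a simple plane-parallel air mass and made-up optical depths, which is much cruder than the per-process transmittance functions the paper analyses.

        import numpy as np

        wavelengths = np.linspace(400e-9, 700e-9, 4)   # visible-band samples (m)
        E0 = np.array([1.60, 1.90, 1.85, 1.45])        # extraterrestrial spectral irradiance, illustrative
        tau = np.array([0.45, 0.30, 0.22, 0.17])       # total optical depth per wavelength, illustrative

        def direct_spectrum(E0, tau, zenith_deg):
            """Beer-Lambert direct-beam spectrum: E(lambda) = E0(lambda) * exp(-m * tau(lambda))."""
            m = 1.0 / np.cos(np.radians(zenith_deg))   # plane-parallel air-mass approximation
            return E0 * np.exp(-m * tau)

        for z in (0, 48, 70):                          # higher zenith angle -> redder direct light
            print(z, direct_spectrum(E0, tau, z))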

  13. FISPRO: a simplified computer program for general fission product formation and decay calculations

    International Nuclear Information System (INIS)

    This report describes a computer program that solves a general form of the fission product formation and decay equations over given time steps for arbitrary decay chains composed of up to three nuclides. All fission product data and operational history data are input through user-defined input files. The program is very useful in the calculation of fission product activities of specific nuclides for various reactor operational histories and accident consequence calculations
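
    For a three-member chain, the general formation-and-decay equations such a program solves take the familiar Bateman form; the notation below (fission-product yields y_i, fission rate F(t), decay constants lambda_i) is generic and not taken from the report itself:

        \frac{dN_1}{dt} = y_1 F(t) - \lambda_1 N_1, \qquad
        \frac{dN_2}{dt} = y_2 F(t) + \lambda_1 N_1 - \lambda_2 N_2, \qquad
        \frac{dN_3}{dt} = y_3 F(t) + \lambda_2 N_2 - \lambda_3 N_3 .

    The activity of member i is A_i = \lambda_i N_i, and over any time step on which F(t) is held constant these linear equations admit the standard closed-form Bateman solution, so an operational history can be integrated step by step.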

  14. Calculation of shipboard fire conditions for radioactive materials packages with the methods of computational fluid dynamics

    International Nuclear Information System (INIS)

    Shipboard fires both in the same ship hold and in an adjacent hold aboard a break-bulk cargo ship are simulated with a commercial finite-volume computational fluid mechanics code. The fire models and modeling techniques are described and discussed. Temperatures and heat fluxes to a simulated materials package are calculated and compared to experimental values. The overall accuracy of the calculations is assessed

  15. Computer calculation of dose distributions in radiotherapy. Report of a panel

    International Nuclear Information System (INIS)

    As in most areas of scientific endeavour, the advent of electronic computers has made a significant impact on the investigation of the physical aspects of radiotherapy. Since the first paper on the subject was published in 1955 the literature has rapidly expanded to include the application of computer techniques to problems of external beam, and intracavitary and interstitial dosimetry. By removing the tedium of lengthy repetitive calculations, the availability of automatic computers has encouraged physicists and radiotherapists to take a fresh look at many fundamental physical problems of radiotherapy. The most important result of the automation of dosage calculations is not simply an increase in the quantity of data but an improvement in the quality of data available as a treatment guide for the therapist. In October 1965 the International Atomic Energy Agency convened a panel in Vienna on the 'Use of Computers for Calculation of Dose Distributions in Radiotherapy' to assess the current status of work, provide guidelines for future research, explore the possibility of international cooperation and make recommendations to the Agency. The panel meeting was attended by 15 participants from seven countries, one observer, and two representatives of the World Health Organization. Participants contributed 20 working papers which served as the bases of discussion. By the nature of the work, computer techniques have been developed by a few advanced centres with access to large computer installations. However, several computer methods are now becoming 'routine' and can be used by institutions without facilities for research. It is hoped that the report of the Panel will provide a comprehensive view of the automatic computation of radiotherapeutic dose distributions and serve as a means of communication between present and potential users of computers

  16. Calculations of reactor-accident consequences, Version 2. CRAC2: computer code user's guide

    International Nuclear Information System (INIS)

    The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems

  17. Shielding calculations using computer techniques; Calculo de blindajes mediante tecnicas de computacion

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez Portilla, M. I.; Marquez, J.

    2011-07-01

    Radiological protection aims to limit the ionizing radiation received by people and equipment, which on numerous occasions requires protective shields. Although analytical formulas exist to characterize these shields for certain configurations, the design may be very intensive in numerical calculations; therefore, the most efficient way to design the shields is by means of computer programs that calculate doses and dose rates. In the present article we review the codes most frequently used to perform these calculations, and the techniques used by such codes. (Author) 13 refs.

  18. Using the ORIGEN-2 computer code for near core activation calculations

    International Nuclear Information System (INIS)

    The ORIGEN2 computer code is a useful tool for calculating radionuclide inventories resulting from irradiation of materials in a reactor. It is widely used to calculate activation products in irradiated metals that form the structural portion of fuel assemblies. The code is straightforward for materials within the active fuel region of a reactor core, which are subject to core average conditions. For materials outside the active core, ORIGEN2 cannot be used directly. However, ORIGEN2 can be used with the appropriate methodology to calculate the activation of materials in near core locations. This paper presents the background and a methodology for estimating radionuclide inventories in activated metals in near core locations

  19. An approach to first principles electronic structure calculation by symbolic-numeric computation

    Directory of Open Access Journals (Sweden)

    Akihito Kikuchi

    2013-04-01

    There is a wide variety of electronic structure calculations cooperating with symbolic computation. The main purpose of the latter is to play an auxiliary role (but not without importance) to the former. In the field of quantum physics [1-9], researchers sometimes have to handle complicated mathematical expressions whose derivation seems almost beyond human power. Thus one resorts to the intensive use of computers, namely, symbolic computation [10-16]. Examples of this can be seen in various topics: atomic energy levels, molecular dynamics, molecular energy and spectra, collision and scattering, lattice spin models and so on [16]. How to obtain molecular integrals analytically, or how to manipulate complex formulas in many-body interactions, is one such problem. In the former case, when one uses a special atomic basis for a specific purpose, expressing the integrals as combinations of already known analytic functions may sometimes be very difficult. In the latter, one must rearrange a number of creation and annihilation operators into a suitable order and calculate the analytical expectation value. Usually a massive quantitative computation follows a symbolic one; for the convenience of the numerical computation, it is necessary to reduce a complicated analytic expression to a tractable and computable form. This is the main motive for introducing symbolic computation as a forerunner of the numerical one, and their collaboration has won considerable successes. The present work should be classified as one such trial. Meanwhile, the use of symbolic computation in the present work is not limited to an indirect and auxiliary part of the numerical computation. The present work is applicable to a direct and quantitative estimation of the electronic structure, skipping conventional computational methods.

  20. Easy calculations of lod scores and genetic risks on small computers.

    OpenAIRE

    Lathrop, G M; Lalouel, J M

    1984-01-01

    A computer program that calculates lod scores and genetic risks for a wide variety of both qualitative and quantitative genetic traits is discussed. An illustration is given of the joint use of a genetic marker, affection status, and quantitative information in counseling situations regarding Duchenne muscular dystrophy.

  1. LALAGE - a computer program to calculate the TM01 modes of cylindrically symmetrical multicell resonant structures

    International Nuclear Information System (INIS)

    An improvement has been made to the LALA program to compute resonant frequencies and fields for all the modes of the lowest TM01 band-pass of multicell structures. The results are compared with those calculated by another popular rf cavity code and with experimentally measured quantities. (author)

  2. CPS: a continuous-point-source computer code for plume dispersion and deposition calculations

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, K.R.; Crawford, T.V.; Lawson, L.A.

    1976-05-21

    The continuous-point-source computer code calculates concentrations and surface deposition of radioactive and chemical pollutants at distances from 0.1 to 100 km, assuming a Gaussian plume. The basic input is atmospheric stability category and wind speed, but a number of refinements are also included.
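
    A minimal ground-level Gaussian-plume evaluation of the kind CPS performs is sketched below, including reflection from the ground. The dispersion-coefficient expressions and all numbers are placeholders for one neutral-stability case, not the parameterization used in the code.

        import numpy as np

        def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
            """Concentration (per m^3) from a continuous point source with ground reflection.

            Q: release rate (e.g. Bq/s), u: wind speed (m/s), H: effective release height (m),
            y, z: crosswind and vertical receptor coordinates (m).
            """
            lateral = np.exp(-y**2 / (2 * sigma_y**2))
            vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                        + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
            return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # Placeholder dispersion coefficients for a neutral stability class at x = 1 km downwind.
        x = 1000.0
        sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)
        sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)

        chi = gaussian_plume(Q=1.0e9, u=3.0, y=0.0, z=0.0, H=50.0,
                             sigma_y=sigma_y, sigma_z=sigma_z)
        print(f"centerline ground-level concentration at 1 km: {chi:.3e} Bq/m^3")

    Surface deposition would then follow from the ground-level concentration multiplied by a deposition velocity, one of the refinements the abstract alludes to.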

  3. FLINESH computer code for magnetic fields calculation; Codigo de calculo de campos magneticos FLINESH

    Energy Technology Data Exchange (ETDEWEB)

    Shibata, C.S.; Montes, A. [Instituto de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil); Galvao, R.M.O. [Sao Paulo Univ., SP (Brazil). Inst. de Fisica

    1994-04-01

    This paper describes the `FLINESH` computer code for magnetic fields calculation developed for the simulation of field configurations in plasma magnetic confinement devices. The expressions for the poloidal field and flux, the program structure and the input parameters description are presented, and also the analysis of the graphic output possibilities. (L.C.J.A.). 12 refs, 14 figs, 2 tabs.

  4. A FORTRAN program for an IBM PC compatible computer for calculating kinematical electron diffraction patterns

    International Nuclear Information System (INIS)

    This report describes a computer program which is useful in transmission electron microscopy. The program is written in FORTRAN and calculates kinematical electron diffraction patterns in any zone axis from a given crystal structure. Quite large unit cells, containing up to 2250 atoms, can be handled by the program. The program runs on both the Hercules graphics card and the standard IBM CGA card
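
    In the kinematical approximation, the reflection intensities that such a program tabulates are proportional to |F(hkl)|^2. The sketch below computes structure factors for a toy cubic cell with constant atomic scattering factors; a real program would use tabulated, angle-dependent electron scattering factors and the full unit-cell contents.

        import numpy as np

        # Toy basis: fractional coordinates and constant scattering factors -- illustrative only.
        atoms = [
            ("Na", 4.0, np.array([0.0, 0.0, 0.0])),
            ("Cl", 6.0, np.array([0.5, 0.5, 0.5])),
        ]

        def structure_factor(h, k, l):
            """Kinematical structure factor F(hkl) = sum_j f_j exp(2*pi*i*(h x_j + k y_j + l z_j))."""
            g = np.array([h, k, l], dtype=float)
            return sum(f * np.exp(2j * np.pi * np.dot(g, xyz)) for _, f, xyz in atoms)

        for hkl in [(1, 0, 0), (1, 1, 0), (1, 1, 1), (2, 0, 0)]:
            F = structure_factor(*hkl)
            print(hkl, f"|F|^2 = {abs(F)**2:.2f}")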

  5. Computer Programs for Calculating the Isentropic Flow Properties for Mixtures of R-134a and Air

    Science.gov (United States)

    Kvaternik, Raymond G.

    2000-01-01

    Three computer programs for calculating the isentropic flow properties of R-134a/air mixtures which were developed in support of the heavy gas conversion of the Langley Transonic Dynamics Tunnel (TDT) from dichlorodifluoromethane (R-12) to 1,1,1,2 tetrafluoroethane (R-134a) are described. The first program calculates the Mach number and the corresponding flow properties when the total temperature, total pressure, static pressure, and mole fraction of R-134a in the mixture are given. The second program calculates tables of isentropic flow properties for a specified set of free-stream Mach numbers given the total pressure, total temperature, and mole fraction of R-134a. Real-gas effects are accounted for in these programs by treating the gases comprising the mixture as both thermally and calorically imperfect. The third program is a specialized version of the first program in which the gases are thermally perfect. It was written to provide a simpler computational alternative to the first program in those cases where real-gas effects are not important. The theory and computational procedures underlying the programs are summarized, the equations used to compute the flow quantities of interest are given, and sample calculated results that encompass the operating conditions of the TDT are shown.
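
    For orientation, the simplest member of this family of calculations is the calorically perfect ideal-gas case; the relations below are the standard isentropic formulas and are much simpler than the thermally and calorically imperfect treatment used for R-134a/air mixtures in the report. The value of gamma is only a placeholder.

        import math

        def isentropic_ratios(M, gamma=1.13):
            """Total-to-static ratios for a calorically perfect gas at Mach number M.

            gamma = 1.13 is merely a placeholder of the order of heavy-gas values; the TDT
            programs instead evaluate the mixture properties from real-gas relations.
            """
            t_ratio = 1.0 + 0.5 * (gamma - 1.0) * M**2       # T0/T
            p_ratio = t_ratio ** (gamma / (gamma - 1.0))     # p0/p
            rho_ratio = t_ratio ** (1.0 / (gamma - 1.0))     # rho0/rho
            return t_ratio, p_ratio, rho_ratio

        for M in (0.5, 0.8, 1.0, 1.2):
            print(M, isentropic_ratios(M))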

  6. Calculating the Thermal Rate Constant with Exponential Speed-Up on a Quantum Computer

    CERN Document Server

    Lidar, Daniel A.; Wang, Haobin

    1999-01-01

    It is shown how to formulate the ubiquitous quantum chemistry problem of calculating the thermal rate constant on a quantum computer. The resulting exact algorithm scales exponentially faster with the dimensionality of the system than all known "classical" algorithms for this problem.

  7. Computer program for calculation of complex chemical equilibrium compositions and applications. Part 1: Analysis

    Science.gov (United States)

    Gordon, Sanford; Mcbride, Bonnie J.

    1994-01-01

    This report presents the latest in a number of versions of chemical equilibrium and applications programs developed at the NASA Lewis Research Center over more than 40 years. These programs have changed over the years to include additional features and improved calculation techniques and to take advantage of constantly improving computer capabilities. The minimization-of-free-energy approach to chemical equilibrium calculations has been used in all versions of the program since 1967. The two principal purposes of this report are presented in two parts. The first purpose, which is accomplished here in part 1, is to present in detail a number of topics of general interest in complex equilibrium calculations. These topics include mathematical analyses and techniques for obtaining chemical equilibrium; formulas for obtaining thermodynamic and transport mixture properties and thermodynamic derivatives; criteria for inclusion of condensed phases; calculations at a triple point; inclusion of ionized species; and various applications, such as constant-pressure or constant-volume combustion, rocket performance based on either a finite- or infinite-chamber-area model, shock wave calculations, and Chapman-Jouguet detonations. The second purpose of this report, to facilitate the use of the computer code, is accomplished in part 2, entitled 'Users Manual and Program Description'. Various aspects of the computer code are discussed, and a number of examples are given to illustrate its versatility.
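
    The minimization-of-free-energy formulation referred to above can be stated compactly in generic notation (the symbols are not copied from the report). For an ideal-gas mixture of species i with mole numbers n_i, one minimizes the Gibbs energy subject to element balance:

        \min_{n_i \ge 0}\; \frac{G}{RT} \;=\; \sum_i n_i\left[\frac{\mu_i^{\circ}(T)}{RT}
          + \ln\frac{n_i}{n} + \ln\frac{P}{P^{\circ}}\right]
        \qquad\text{subject to}\qquad \sum_i a_{ij}\, n_i = b_j ,

        \text{stationarity with Lagrange multipliers } \pi_j:\qquad
        \mu_i \;=\; \sum_j \pi_j\, a_{ij}\quad\text{for every species } i ,

    where a_{ij} is the number of atoms of element j in species i and b_j the total amount of element j. The resulting nonlinear system is typically solved by Newton-Raphson iteration, with condensed phases added or removed according to the inclusion criteria the report discusses.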

  8. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.

    Science.gov (United States)

    Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan

    2015-10-01

    Fast calculation and correct depth cue are crucial issues in the calculation of computer-generated hologram (CGH) for high quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as a layer-corresponded sub-hologram based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yield accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062
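
    A bare-bones angular-spectrum propagation step of the kind each depth layer undergoes is sketched below with NumPy FFTs and the non-paraxial transfer function. Sampling, wavelength and propagation distance are illustrative, and the layer summation and encoding into a final hologram are omitted.

        import numpy as np

        def angular_spectrum_propagate(u0, wavelength, dx, z):
            """Propagate a sampled complex field u0 by distance z (non-paraxial transfer function)."""
            n, m = u0.shape
            fx = np.fft.fftfreq(m, d=dx)
            fy = np.fft.fftfreq(n, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 / wavelength**2 - FX**2 - FY**2
            kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent components suppressed
            H = np.exp(1j * kz * z) * (arg > 0)
            return np.fft.ifft2(np.fft.fft2(u0) * H)

        # One depth layer of a scene, propagated to the hologram plane (illustrative parameters).
        layer = np.zeros((512, 512), dtype=complex)
        layer[200:300, 200:300] = 1.0
        field_at_hologram = angular_spectrum_propagate(layer, wavelength=532e-9, dx=8e-6, z=0.05)
        print(np.abs(field_at_hologram).max())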

  9. AQUAMAN: a computer code for calculating dose commitment to man from aqueous releases of radionuclides

    International Nuclear Information System (INIS)

    AQUAMAN is an interactive computer code for calculating values of dose (50-year dose commitment) to man from aqueous releases of radionuclides from nuclear facilities. The data base contains values of internal and external dose conversion factors, and bioaccumulation (freshwater and marine) factors for 56 radionuclides. A maximum of 20 radionuclides may be selected for any one calculation. Dose and cumulative exposure index (CUEX) values are calculated for total body, GI tract, bone, thyroid, lungs, liver, kidneys, testes, and ovaries for each of three exposure pathways: water ingestion, fish ingestion, and submersion. The user is provided the option at the time of execution to change the default values of most of the variables, with the exception of the dose conversion factor values. AQUAMAN is written in FORTRAN for the PDP-10 computer

  10. Program POD. A computer code to calculate cross sections for neutron-induced nuclear reactions

    International Nuclear Information System (INIS)

    A computer code, POD, was developed for neutron-induced nuclear data evaluations. This program is based on four theoretical models, (1) the optical model to calculate shape-elastic scattering and reaction cross sections, (2) the distorted wave Born approximation to calculate neutron inelastic scattering cross sections, (3) the preequilibrium model, and (4) the multi-step statistical model. With this program, cross sections can be calculated for reactions (n, γ), (n, n'), (n, p), (n, α), (n, d), (n, t), (n, 3He), (n, 2n), (n, np), (n, nα), (n, nd), and (n, 3n) in the neutron energy range above the resonance region to 20 MeV. The computational methods and input parameters are explained in this report, with sample inputs and outputs. (author)

  11. Poisson Green's function method for increased computational efficiency in numerical calculations of Coulomb coupling elements

    Science.gov (United States)

    Zimmermann, Anke; Kuhn, Sandra; Richter, Marten

    2016-01-01

    Often, the calculation of Coulomb coupling elements for quantum dynamical treatments, e.g., in cluster or correlation expansion schemes, requires the evaluation of a six dimensional spatial integral. Therefore, it represents a significant limiting factor in quantum mechanical calculations. If the size or the complexity of the investigated system increases, many coupling elements need to be determined. The resulting computational constraints require an efficient method for a fast numerical calculation of the Coulomb coupling. We present a computational method to reduce the numerical complexity by decreasing the number of spatial integrals for arbitrary geometries. We use a Green's function formulation of the Coulomb coupling and introduce a generalized scalar potential as solution of a generalized Poisson equation with a generalized charge density as the inhomogeneity. That enables a fast calculation of Coulomb coupling elements and, additionally, a straightforward inclusion of boundary conditions and arbitrarily spatially dependent dielectrics through the Coulomb Green's function. Particularly, if many coupling elements are included, the presented method, which is not restricted to specific symmetries of the model, presents a promising approach for increasing the efficiency of numerical calculations of the Coulomb interaction. To demonstrate the wide range of applications, we calculate internanostructure couplings, such as the Förster coupling, and illustrate the inclusion of symmetry considerations in the method for the Coulomb coupling between bound quantum dot states and unbound continuum states.
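
    The structure of such a Green's-function reformulation can be sketched in generic notation (not the paper's own symbols, and with unit prefactors omitted): the six-dimensional Coulomb matrix element is split into one three-dimensional Poisson solve plus one three-dimensional overlap integral,

        V_{ij,kl} \;=\; \iint d^3r\, d^3r'\;
          \psi_i^*(\mathbf r)\,\psi_j(\mathbf r)\; G(\mathbf r,\mathbf r')\;
          \psi_k^*(\mathbf r')\,\psi_l(\mathbf r')
        \;=\; \int d^3r\;\psi_i^*(\mathbf r)\,\psi_j(\mathbf r)\,\Phi_{kl}(\mathbf r),

        \nabla\!\cdot\!\big[\varepsilon(\mathbf r)\,\nabla\Phi_{kl}(\mathbf r)\big]
          \;=\; -\,\rho_{kl}(\mathbf r),
        \qquad \rho_{kl}(\mathbf r) \;=\; \psi_k^*(\mathbf r)\,\psi_l(\mathbf r).

    Boundary conditions and a spatially varying dielectric enter only through the generalized Poisson equation, and each pair (k, l) requires a single solve whose potential is then reused for every (i, j), which is where the reduction in the number of spatial integrals comes from.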

  12. Calculation of shielding of X rays in radiotherapy facilities with computer aid

    International Nuclear Information System (INIS)

    This work presents a methodology for the calculation of X-ray shielding in radiotherapy facilities with computer aid. A user-friendly program, called RadTeraX, was developed in the Delphi programming language; through manual input of a basic architectural plan and a few parameters, it interprets the geometry and calculates the shielding of the walls, floor and roof of a radiotherapy installation for X-rays. As a final product, the program displays a graphic screen with all the input data and the calculated shielding, together with the corresponding calculation report. Even today, in Brazil, the shielding calculation for radiotherapy facilities with X-rays has been based on the recommendations of NCRP-49, which establishes the calculation methodology needed for the elaboration of a shielding project. However, at high energies, where the construction of a maze is necessary, NCRP-49 is insufficient; studies in this field resulted in an article that proposes a solution to the problem, and this solution was implemented in the program. The program can be applied in the practical execution of shielding projects for radiotherapy facilities and, in a didactic way, for comparison with NCRP-49, and has been registered under number 00059420 at INPI - Instituto Nacional da Propriedade Industrial (National Institute of Industrial Property). (author)
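
    The broad-beam arithmetic underlying such barrier calculations is short. The sketch below follows the generic NCRP-style primary-barrier recipe (required transmission from workload, use and occupancy factors, then a thickness from tenth-value layers); all numbers are invented for illustration and have no connection to RadTeraX itself.

        import math

        def primary_barrier_thickness(P, d, W, U, T, tvl_cm):
            """Generic broad-beam primary-barrier estimate.

            P: design dose limit behind the barrier (Gy/week), d: source-to-point distance (m),
            W: workload at 1 m (Gy/week), U: use factor, T: occupancy factor,
            tvl_cm: tenth-value layer of the barrier material (cm), taken constant here.
            """
            B = P * d**2 / (W * U * T)           # required transmission factor
            n_tvl = math.log10(1.0 / B)          # number of tenth-value layers
            return max(n_tvl, 0.0) * tvl_cm

        # Illustrative numbers only: 0.1 mGy/week behind the wall, 4 m distance,
        # 500 Gy/week workload, U = 0.25, T = 1, concrete TVL of 37 cm for a high-energy beam.
        print(f"{primary_barrier_thickness(1e-4, 4.0, 500.0, 0.25, 1.0, 37.0):.1f} cm of concrete")

    Maze and scattered-radiation calculations at high energies, which the abstract identifies as the gap in NCRP-49, require additional scatter and transmission terms beyond this primary-barrier estimate.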

  13. PABLM: a computer program to calculate accumulated radiation doses from radionuclides in the environment

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Kennedy, W.E. Jr.; Soldat, J.K.

    1980-03-01

    A computer program, PABLM, was written to facilitate the calculation of internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. This report contains details of mathematical models used and calculational procedures required to run the computer program. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides in the environment after the releases have ended. Radioactive decay is considered during the release of radionuclides, after they are deposited on the plants or ground, and during holdup of food after harvest. The radiation dose models consider several exposure pathways. Doses may be calculated for either a maximum-exposed individual or for a population group. The doses calculated are accumulated doses from continuous chronic exposure. A first-year committed dose is calculated as well as an integrated dose for a selected number of years. The equations for calculating internal radiation doses are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and MPC's of each radionuclide. The radiation doses from external exposure to contaminated water and soil are calculated using the basic assumption that the contaminated medium is large enough to be considered an infinite volume or plane relative to the range of the emitted radiations. The equations for calculations of the radiation dose from external exposure to shoreline sediments include a correction for the finite width of the contaminated beach.

  14. Citham a computer code for calculating fuel depletion-description, tests, modifications and evaluation

    International Nuclear Information System (INIS)

    The CITHAM computer code was developed at IPEN (Instituto de Pesquisas Energeticas e Nucleares) to link the HAMMER computer code with a fuel depletion routine and to provide neutron cross sections to be read in the appropriate format by the CITATION code. The problem arose from the effort to adapt the new version, called HAMMER-TECHION, to the aforementioned routine. The HAMMER-TECHION computer code was developed by the Haifa Institute, Israel, within a project with EPRI. This version is at CNEN to be used for multigroup constant generation for neutron diffusion calculations within the scope of the new methodology to be adopted by CNEN. The theoretical formulation of the CITHAM computer code, tests and modifications are described. (Author)

  15. Tetrahedral-mesh-based computational human phantom for fast Monte Carlo dose calculations

    International Nuclear Information System (INIS)

    Although polygonal-surface computational human phantoms can address several critical limitations of conventional voxel phantoms, their Monte Carlo simulation speeds are much slower than those of voxel phantoms. In this study, we sought to overcome this problem by developing a new type of computational human phantom, a tetrahedral mesh phantom, by converting a polygonal surface phantom to a tetrahedral mesh geometry. The constructed phantom was implemented in the Geant4 Monte Carlo code to calculate organ doses as well as to measure computation speed, the values were then compared with those for the original polygonal surface phantom. It was found that using the tetrahedral mesh phantom significantly improved the computation speed by factors of between 150 and 832 considering all of the particles and simulated energies other than the low-energy neutrons (0.01 and 1 MeV), for which the improvement was less significant (17.2 and 8.8 times, respectively). (paper)

  16. A computer code for calculating a γ-external dose from a randomly distributed radioactive cloud

    International Nuclear Information System (INIS)

    A computer code (CIDE) has been developed to calculate the γ external dose from a randomly distributed radioactive cloud. Atmospheric dispersion of radioactive materials accidentally released from a nuclear reactor needs to be estimated considering time-dependent meteorological data and terrain heights. The Particle-in-Cell (PIC) model is useful for that purpose, but it is not easy to calculate the dose from the randomly distributed concentration by numerical integration. In this study the mean concentration in a cell evaluated by the PIC model was assumed to be uniformly distributed over that cell, which was then integrated as a constant concentration by a point-kernel method. The dose was obtained by summing the contributions of the individual cells. When the plume concentration had a Gaussian distribution, the results of the CIDE code agreed well with those of GAMPLE, a code for calculating the dose from a Gaussian distribution. The choice of cell size, which affects the accuracy of the calculated results, is discussed. (author)
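
A minimal sketch of the cell-summing idea described above (uniform concentration per cell, point-kernel attenuation, sum over cells) is given below; the cell representation, the absence of a build-up factor, and all parameter names are simplifications for illustration, not the CIDE implementation.

```python
import math

def cell_dose_rate(receptor, cells, mu, kerma_coeff):
    """Sum point-kernel contributions from cells of assumed-uniform concentration.
    Each cell is represented here only by its centre and volume; the real code
    integrates the kernel over the cell and includes a build-up factor."""
    total = 0.0
    for (x, y, z, conc, volume) in cells:          # conc: Bq/m^3, volume: m^3
        r = math.dist(receptor, (x, y, z))         # distance from cell centre to receptor
        if r == 0.0:
            continue
        flux = conc * volume * math.exp(-mu * r) / (4.0 * math.pi * r * r)
        total += kerma_coeff * flux                # convert photon flux to dose rate
    return total

# one 100 m cube of contaminated air, 500 m from the receptor
print(cell_dose_rate((0.0, 0.0, 1.5),
                     [(500.0, 0.0, 50.0, 1.0e3, 1.0e6)],
                     mu=9.0e-3, kerma_coeff=3.0e-16))
```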

  17. TEMP: a computer code to calculate fuel pin temperatures during a transient

    International Nuclear Information System (INIS)

    The computer code TEMP calculates fuel pin temperatures during a transient. It was developed to accommodate temperature calculations in any system of axi-symmetric concentric cylinders. When used to calculate fuel pin temperatures, the code will handle a fuel pin as simple as a solid cylinder or as complex as a central void surrounded by fuel that is broken into three regions by two circumferential cracks. Any fuel situation between these two extremes can be analyzed along with additional cladding, heat sink, coolant or capsule regions surrounding the fuel. The one-region version of the code accurately calculates the solution to two problems having closed-form solutions. The code uses an implicit method, an explicit method and a Crank-Nicolson (implicit-explicit) method

  18. Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics

    International Nuclear Information System (INIS)

    A computational code system based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in FORTRAN90 programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program and to write an output data file containing the temperature, density and indexing information to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code DeCART. Good agreement in the keff and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high fidelity modeling of nuclear reactor cores during the last several years. High fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport methods to CFD and their application to practical reactor analysis problems. One of the principal motivations of the work here was to utilize Monte Carlo methods to validate the coupled deterministic neutron transport
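
The coupling described above is, at its core, a fixed-point iteration between a neutronics solve and a thermal-hydraulics solve. The following generic driver sketches that loop under stated assumptions: the three callables are placeholders supplied by the caller and are not McSTAR's actual interfaces to MCNP5 or STAR-CD.

```python
def couple_neutronics_cfd(run_neutronics, run_cfd, update_xs, t_init,
                          tol=1.0e-5, max_iter=20):
    """Schematic Picard iteration between a Monte Carlo neutronics solve and a
    CFD thermal-hydraulics solve.  run_neutronics() returns (keff, power_density);
    run_cfd(power_density) returns (temperature, density); update_xs(temperature)
    refreshes the temperature-dependent cross-section data per region."""
    temperature = t_init
    keff_prev = None
    for _ in range(max_iter):
        update_xs(temperature)                          # temperature-dependent cross sections
        keff, power_density = run_neutronics()          # eigenvalue and region-wise powers
        temperature, density = run_cfd(power_density)   # thermal-hydraulic feedback fields
        if keff_prev is not None and abs(keff - keff_prev) < tol:
            break
        keff_prev = keff
    return keff, temperature, density
```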

  19. Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics.

    Energy Technology Data Exchange (ETDEWEB)

    Seker, V.; Thomas, J. W.; Downar, T. J.; Purdue Univ.

    2007-01-01

    A computational code system based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in FORTRAN90 programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program and to write an output data file containing the temperature, density and indexing information to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code DeCART. Good agreement in the keff and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high fidelity modeling of nuclear reactor cores during the last several years. High fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport methods to CFD and their application to practical reactor analysis problems. One of the principal motivations of the work here was to utilize Monte Carlo methods to validate the coupled deterministic

  20. Guide for licensing evaluations using CRAC2: A computer program for calculating reactor accident consequences

    International Nuclear Information System (INIS)

    A version of the CRAC2 computer code applicable for use in analyses of consequences and risks of reactor accidents in case work for environmental statements has been implemented for use on the Nuclear Regulatory Commission Data General MV/8000 computer system. Input preparation is facilitated through the use of an interactive computer program which operates on an IBM personal computer. The resulting CRAC2 input deck is transmitted to the MV/8000 by using an error-free file transfer mechanism. To facilitate the use of CRAC2 at NRC, relevant background material on input requirements and model descriptions has been extracted from four reports: "Calculations of Reactor Accident Consequences," Version 2, NUREG/CR-2326 (SAND81-1994); "CRAC2 Model Descriptions," NUREG/CR-2552 (SAND82-0342); "CRAC Calculations for Accident Sections of Environmental Statements," NUREG/CR-2901 (SAND82-1693); and "Sensitivity and Uncertainty Studies of the CRAC2 Computer Code," NUREG/CR-4038 (ORNL-6114). When this background information is combined with instructions on the input processor, this report provides a self-contained guide for preparing CRAC2 input data with a specific orientation toward applications on the MV/8000. 8 refs., 11 figs., 10 tabs

  1. CREST : a computer program for the calculation of composition dependent self-shielded cross-sections

    International Nuclear Information System (INIS)

    A computer program CREST for the calculation of the composition and temperature dependent self-shielded cross-sections using the shielding factor approach has been described. The code includes the editing and formation of the data library, calculation of the effective shielding factors and cross-sections, a fundamental mode calculation to generate the neutron spectrum for the system which is further used to calculate the effective elastic removal cross-sections. Studies to explore the sensitivity of reactor parameters to changes in group cross-sections can also be carried out by using the facility available in the code to temporarily change the desired constants. The final self-shielded and transport corrected group cross-sections can be dumped on cards or magnetic tape in a suitable form for their direct use in a transport or diffusion theory code for detailed reactor calculations. The program is written in FORTRAN and can be accommodated in a computer with 32 K work memory. The input preparation details, sample problem and the listing of the program are given. (author)

  2. Computer subroutines for the estimation of nuclear reaction effects in proton-tissue-dose calculations

    Science.gov (United States)

    Wilson, J. W.; Khandelwal, G. S.

    1976-01-01

    Calculational methods for estimation of dose from external proton exposure of arbitrary convex bodies are briefly reviewed. All the necessary information for the estimation of dose in soft tissue is presented. Special emphasis is placed on retaining the effects of nuclear reaction, especially in relation to the dose equivalent. Computer subroutines to evaluate all of the relevant functions are discussed. Nuclear reaction contributions for standard space radiations are in most cases found to be significant. Many of the existing computer programs for estimating dose in which nuclear reaction effects are neglected can be readily converted to include nuclear reaction effects by use of the subroutines described herein.

  3. Implementation of a Thermodynamic Solver within a Computer Program for Calculating Fission-Product Release Fractions

    Science.gov (United States)

    Barber, Duncan Henry

    During some postulated accidents at nuclear power stations, fuel cooling may be impaired. In such cases, the fuel heats up and the subsequent increased fission-gas release from the fuel to the gap may result in fuel sheath failure. After fuel sheath failure, the barrier between the coolant and the fuel pellets is lost or impaired; gases and vapours from the fuel-to-sheath gap and other open voids in the fuel pellets can be vented. Gases and steam from the coolant can enter the broken fuel sheath and interact with the fuel pellet surfaces and the fission-product inclusions on the fuel surface (including material at the surface of the fuel matrix). The chemistry of this interaction is an important mechanism to model in order to assess fission-product releases from fuel. Starting in 1995, the computer program SOURCE 2.0 was developed by the Canadian nuclear industry to model fission-product release from fuel during such accidents. SOURCE 2.0 has employed an early thermochemical model of irradiated uranium dioxide fuel developed at the Royal Military College of Canada. To overcome the limitations of computers of that time, the implementation of the RMC model employed lookup tables of pre-calculated equilibrium conditions. In the intervening years, the RMC model has been improved, the power of computers has increased significantly, and thermodynamic subroutine libraries have become available. This thesis is the result of extensive work based on these three factors. A prototype computer program (referred to as SC11) has been developed that uses a thermodynamic subroutine library to calculate thermodynamic equilibria using Gibbs energy minimization. The Gibbs energy minimization requires the system temperature (T) and pressure (P), and the inventory of chemical elements (n) in the system. In order to calculate the inventory of chemical elements in the fuel, the list of nuclides and nuclear isomers modelled in SC11 had to be expanded from the list used by SOURCE 2.0. A

  4. RAP-4A Computer code for thermohydraulic calculation of liquid metal cooled fuel clusters

    International Nuclear Information System (INIS)

    RAP-4A is a programme for calculating the thermal-hydraulic parameters of fuel clusters in a fast liquid-metal-cooled reactor. The code calculates the steady-state axial distributions of temperature, enthalpy, pressure drop and mass velocity. A one-dimensional mathematical model along the cluster, allowing the study of single- and two-phase flow, is used, taking into account the mixing between adjacent subchannels. Physical and mathematical models, general features and an example are presented. The RAP-4A code is written in FORTRAN-IV for an IBM 370/135 computer

  5. SHETEMP: a computer code for calculation of fuel temperature behavior under reactivity initiated accidents

    International Nuclear Information System (INIS)

    A fast running computer code, SHETEMP, has been developed for the analysis of reactivity initiated accidents under constant core cooling conditions, i.e. constant coolant temperature and constant heat transfer coefficient on the fuel rods. The code predicts core power and fuel temperature behaviour. Control rod movement can be taken into account in the power control system. The objective of the code is to provide the fast running capability and easy handling required for audit and design calculations, where a large number of calculations are performed for parameter surveys within a short time period. The fast running capability of the code was realized by neglecting the fluid flow calculation. SHETEMP was constructed by extracting and combining the reactor kinetics and heat conduction routines of the transient reactor thermal-hydraulic analysis code ALARM-P1, and by adding newly developed routines for the reactor power control system. Like ALARM-P1, SHETEMP solves the point reactor kinetics equations by a modified Runge-Kutta method and the one-dimensional transient heat conduction equations for slab and cylindrical geometries by the Crank-Nicolson method. The model for the reactor power control system takes into account the effects of a PID regulator and the control rod drive mechanism. In order to check for programming errors, results calculated by SHETEMP were compared with analytic solutions; based on these comparisons, the correctness of the programming was verified. A sample calculation for a typical model also showed that the code satisfies the fast running capability required for audit and design calculations. This report serves as a code manual for SHETEMP. It contains descriptions of a sample problem, the code structure, input data specifications and usage of the code, in addition to the analytical models and the results of code verification calculations. (author)
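
To make the point-kinetics step concrete, the sketch below integrates the one-delayed-group point reactor kinetics equations with a classical fourth-order Runge-Kutta step; SHETEMP uses a modified Runge-Kutta method and realistic data, whereas the constants here are purely illustrative.

```python
import numpy as np

def point_kinetics_rhs(t, y, rho, beta=0.0065, lam=0.08, gen_time=1.0e-4):
    """One-delayed-group point reactor kinetics equations (illustrative constants)."""
    n, c = y
    dn = (rho(t) - beta) / gen_time * n + lam * c
    dc = beta / gen_time * n - lam * c
    return np.array([dn, dc])

def rk4_step(f, t, y, h, *args):
    """Classical fourth-order Runge-Kutta step."""
    k1 = f(t, y, *args)
    k2 = f(t + h / 2, y + h / 2 * k1, *args)
    k3 = f(t + h / 2, y + h / 2 * k2, *args)
    k4 = f(t + h, y + h * k3, *args)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# step reactivity insertion of 0.1 beta, precursors initially in equilibrium
rho = lambda t: 0.1 * 0.0065
y = np.array([1.0, 0.0065 / (1.0e-4 * 0.08)])   # n0 = 1, C0 = beta*n0/(Lambda*lambda)
t, h = 0.0, 1.0e-4
for _ in range(1000):
    y = rk4_step(point_kinetics_rhs, t, y, h, rho)
    t += h
print(t, y[0])   # relative power after 0.1 s
```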

  6. Calculation and evaluation methodology of the flawed pipe and the compute program development

    International Nuclear Information System (INIS)

    Background: In a pressurized pipe, a crack will grow gradually under alternating load even when the load is below the fatigue strength limit. Purpose: A calculation and evaluation methodology for a flawed pipe detected during in-service inspection is elaborated here based on Elastic-Plastic Fracture Mechanics (EPFM) criteria. Methods: The computation considers the interaction between flaw depth and length, and a computer program was developed in Visual C++. Results: The fluctuating loads of the Reactor Coolant System transients, the initial flaw shape and the initial flaw orientation are all accounted for. Conclusions: The calculation and evaluation methodology presented here is an important basis for deciding whether continued operation is acceptable. (authors)

  7. Calculating Three Loop Ladder and V-Topologies for Massive Operator Matrix Elements by Computer Algebra

    CERN Document Server

    Ablinger, J; Blümlein, J; De Freitas, A; von Manteuffel, A; Schneider, C

    2015-01-01

    Three loop ladder and $V$-topology diagrams contributing to the massive operator matrix element $A_{Qg}$ are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable $N$ and the dimensional parameter $\\varepsilon$. Given these representations, the desired Laurent series expansions in $\\varepsilon$ can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural ...

  8. OPT13B and OPTIM4 - computer codes for optical model calculations

    International Nuclear Information System (INIS)

    OPT13B is a computer code in FORTRAN for optical model calculations with automatic search. A summary of different formulae used for computation is given. Numerical methods are discussed. The 'search' technique followed to obtain the set of optical model parameters which produce best fit to experimental data in a least-square sense is also discussed. Different subroutines of the program are briefly described. Input-output specifications are given in detail. A modified version of OPT13B is OPTIM4. It can be used for optical model calculations where the form factors of different parts of the optical potential are known point by point. A brief description of the modifications is given. (author)

  9. Methods of index calculation and presentation of fish abundance data using standard computer programs

    OpenAIRE

    Fotland, Åge; Mehl, Sigbjørn; Sunnanå, Knut

    1995-01-01

    Standard 0-group indices distribution maps are now produced based on hand-drawn maps using AutoCad with some additional procedures. This paper briefly describes the method. The paper further describes ways of importing coastlines and survey data directly into standard computer programs such as AutoCad and SAS. Standard methods are used for gridding data, producing isolines and further calculation of abundance indices and presentation of distributions. Interactive editing of distribution maps ...

  10. Calculation of Heat-Kernel Coefficients and Usage of Computer Algebra

    OpenAIRE

    Bel'kov, A. A.; Lanyov, A. V.; Schaale, A.

    1995-01-01

    The calculation of heat-kernel coefficients with the classical DeWitt algorithm has been discussed. We present the explicit form of the coefficients up to $h_5$ in the general case and up to $h_7^{min}$ for the minimal parts. The results are compared with the expressions in other papers. A method to optimize the usage of memory for working with large expressions on universal computer algebra systems has been proposed.

  11. A compilation of structural property data for computer impact calculation (5/5)

    International Nuclear Information System (INIS)

    The paper describes structural property data for computer impact calculations of nuclear fuel shipping casks. Data for four kinds of materials, mild steel, stainless steel, lead and wood, are compiled; these materials are the main structural elements of shipping casks. Structural data such as the coefficient of thermal expansion, the modulus of longitudinal elasticity, the modulus of transverse elasticity, the Poisson's ratio and stress-strain relationships have been tabulated against temperature or strain rate. This volume 5 contains the structural property data for wood. (author)

  12. A compilation of structural property data for computer impact calculation (4/5)

    International Nuclear Information System (INIS)

    The paper describes structural property data for computer impact calculations of nuclear fuel shipping casks. Data for four kinds of materials, mild steel, stainless steel, lead and wood, are compiled; these materials are the main structural elements of shipping casks. Structural data such as the coefficient of thermal expansion, the modulus of longitudinal elasticity, the modulus of transverse elasticity, the Poisson's ratio and stress-strain relationships have been tabulated against temperature or strain rate. This volume 4 contains the structural property data for lead. (author)

  13. Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Romero, Vicente J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rushdi, Ahmad A. [Univ. of Texas, Austin, TX (United States); Abdelkader, Ahmad [Univ. of Maryland, College Park, MD (United States)

    2015-09-01

    This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.

  14. POPCYCLE: a computer code for calculating nuclear and fossil plant levelized life-cycle power costs

    International Nuclear Information System (INIS)

    POPCYCLE, a computer code designed to calculate levelized life-cycle power costs for nuclear and fossil electrical generating plants is described. Included are (1) derivations of the equations and a discussion of the methodology used by POPCYCLE, (2) a description of the input required by the code, (3) a listing of the input for a sample case, and (4) the output for a sample case
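
A generic levelized life-cycle cost calculation of the kind described above can be sketched as discounted lifetime costs divided by discounted lifetime energy; this is a textbook formulation with made-up numbers, not POPCYCLE's equations or input.

```python
def levelized_cost(annual_costs, annual_energy, discount_rate):
    """Levelized life-cycle power cost: present value of all costs divided by the
    present value of all energy generated over the plant lifetime."""
    pv_cost = sum(c / (1.0 + discount_rate) ** t
                  for t, c in enumerate(annual_costs, start=1))
    pv_energy = sum(e / (1.0 + discount_rate) ** t
                    for t, e in enumerate(annual_energy, start=1))
    return pv_cost / pv_energy          # e.g. $/kWh if costs are in $ and energy in kWh

# 30-year plant, constant annual costs and output, 5% discount rate
print(levelized_cost([2.0e8] * 30, [7.0e9] * 30, 0.05))
```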

  15. Modeling of tube current modulation methods in computed tomography dose calculations for adult and pregnant patients

    International Nuclear Information System (INIS)

    The comparatively high dose and increasing frequency of computed tomography (CT) examinations have spurred the development of techniques for reducing radiation dose to imaging patients. Among these is the application of tube current modulation (TCM), which can be applied longitudinally along the body, rotationally around the body, or both. Existing computational models for calculating dose from CT examinations do not include TCM techniques. Dose calculations using Monte Carlo methods have been previously prepared for constant-current rotational exposures at various positions along the body and for the principal exposure projections for several sets of computational phantoms, including adult male and female and pregnant patients. Dose calculations from CT scans with TCM are prepared by appropriately weighting the existing dose data. Longitudinal TCM doses can be obtained by weighting the dose at the z-axis scan position by the relative tube current at that position. Rotational TCM doses are weighted using the relative organ doses from the principal projections as a function of the current at the rotational angle. Significant dose reductions of 15% to 25% to fetal tissues are found from simulations of longitudinal TCM schemes to pregnant patients of different gestational ages. Weighting factors for each organ in rotational TCM schemes applied to adult male and female patients have also been found. As the application of TCM techniques becomes more prevalent, the need for including TCM in CT dose estimates will necessarily increase. (author)
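
The longitudinal weighting step described above is simple enough to sketch directly: pre-computed constant-current dose contributions at each z position are scaled by the relative tube current. The function and all values below are illustrative, not the authors' code or data.

```python
import numpy as np

def longitudinal_tcm_dose(dose_per_slice_at_ref, tube_current_ma, reference_ma):
    """Weight pre-computed constant-current organ dose contributions at each
    z-axis scan position by the relative tube current at that position."""
    weights = np.asarray(tube_current_ma, dtype=float) / reference_ma
    return float(np.sum(np.asarray(dose_per_slice_at_ref, dtype=float) * weights))

# hypothetical organ dose contributions (mGy) simulated at a 200 mA reference,
# and a hypothetical mA profile produced by the scanner's longitudinal modulation
print(longitudinal_tcm_dose([0.10, 0.25, 0.40, 0.25, 0.10],
                            [150, 180, 220, 180, 150], 200.0))
```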

  16. GOLEM: a versatile computer code for reactor neutronic calculation advances in qualification of the different modules

    International Nuclear Information System (INIS)

    Studies carried out over the last 12 years for the CABRI, SCARABEE and PHEBUS projects are summarized. The report describes the purpose and genesis of the cores, the evolution of the core concept and the associated neutronic problems. The calculational scheme used is presented, together with its qualification. The formalism and the qualification of the different modules of GOLEM are presented: COXYS, a physical analysis module used to determine the best energy and spatial mesh for the case of interest; GOLU.B, an input data management module; VAREC, a module for calculating perturbations due to materials, which computes the perturbed flux and the reactivity variation; VARYX, a module for calculating geometric perturbations; TRACASYN, a module for 3D power shape calculation; and finally TRACASTORE, a module for management and graphic exploitation of the results. Directions for using these different modules are then given. Qualification results show that GOLEM is able to analyse the fine physics of many varied cases, to calculate, by perturbation, effects greater than 5000 pcm, and to reconstruct the perturbed flux within margins of about 3% for difficult situations such as reactor voiding or spectral variation in a PWR. Furthermore, 3D hot spots are calculated within margins of a magnitude comparable to the experimental ones

  17. DCHAIN: A user-friendly computer program for radioactive decay and reaction chain calculations

    International Nuclear Information System (INIS)

    A computer program for calculating the time-dependent daughter populations in radioactive decay and nuclear reaction chains is described. Chain members can have non-zero initial populations and be produced from the preceding chain member as the result of radioactive decay, a nuclear reaction, or both. As presently implemented, chains can contain up to 15 members. Program input can be supplied interactively or read from ASCII data files. Time units for half-lives, etc. can be specified during data entry. Input values are verified and can be modified if necessary before being used in calculations. Output results can be saved in ASCII files in a format suitable for including in reports or other documents. The calculational method, described in some detail, utilizes a generalized form of the Bateman equations. The program is written in the C language in conformance with current ANSI standards and can be used on multiple hardware platforms
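
For reference, the classic (non-generalized) Bateman solution for a linear decay chain with only the first member initially present can be evaluated directly; the program above uses a generalized form that also handles non-zero initial daughter populations and production by nuclear reactions, which this sketch does not.

```python
import math

def bateman(lambdas, n1_0, t):
    """Population of each member of a linear decay chain at time t, assuming only
    the first member is present at t = 0 (classic Bateman solution)."""
    pops = []
    for n in range(1, len(lambdas) + 1):
        prod = math.prod(lambdas[:n - 1])          # product of the preceding decay constants
        total = 0.0
        for i in range(n):
            denom = math.prod(lambdas[j] - lambdas[i] for j in range(n) if j != i)
            total += math.exp(-lambdas[i] * t) / denom
        pops.append(n1_0 * prod * total)
    return pops

# illustrative chain Mo-99 -> Tc-99m -> Tc-99 (decay constants in 1/h), 24 h later
lams = [math.log(2) / 65.94, math.log(2) / 6.01, math.log(2) / 1.9e9]
print(bateman(lams, 1.0e6, 24.0))
```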

  18. Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics

    International Nuclear Information System (INIS)

    The interest in high fidelity modeling of nuclear reactor cores has increased over the last few years and has become computationally more feasible because of the dramatic improvements in processor speed and the availability of low cost parallel platforms. In the research here, high fidelity, multi-physics analyses were performed by solving the neutron transport equation using Monte Carlo methods and by solving the thermal-hydraulics equations using computational fluid dynamics. A computational tool based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR' along with the verification and validation efforts. McSTAR is written in the PERL programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and two of them are implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program and to write an output data file containing the temperature, density and indexing information to perform the mapping between MCNP and STAR-CD cells. The necessary input file manipulation, data file generation, normalization and multi-processor calculation settings are all done through the program flow in McSTAR. Initial testing of the code was performed using a single pin cell and a 3X3 PWR pin-cell problem. The preliminary results of the single pin-cell problem are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code De

  19. A special purpose computer for the calculation of the electric conductivity of a random resistor network

    International Nuclear Information System (INIS)

    The special purpose computer PERCOLA is designed for long numerical simulations of a percolation problem in the statistical mechanics of disordered media. Our aim is to improve the current values of the critical exponents characterizing the behaviour of random resistor networks at the percolation threshold. The architecture of PERCOLA is based on an efficient iterative algorithm used to compute the electric conductivity of such networks. The calculator has the characteristics of a general-purpose 64-bit floating-point micro-programmable computer that can run programs for various types of problems, with a peak performance of 25 Mflops. This high computing speed is a result of the pipeline architecture based on internal parallelism and separately micro-code-controlled units such as data memories, a micro-code memory, ALUs and multipliers (both WEITEK components), various data paths, a sequencer (ANALOG DEVICES component), address generators and a random number generator. The special purpose computer thus runs the percolation problem program 10 percent faster than the CRAY XMP supercomputer. (author)
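
The underlying computation, the conductance of a random resistor network between two bus bars, can be sketched by node analysis on a small lattice; this dense direct solve is only an illustration of the physics, not the iterative algorithm the machine implements, and the lattice setup is hypothetical.

```python
import numpy as np

def effective_conductance(gh, gv):
    """Effective conductance of an L x L node resistor network with unit voltage
    between the left and right columns, from Kirchhoff's current law at each
    interior node.  gh[i, j]: bond conductance between nodes (i, j) and (i, j+1);
    gv[i, j]: bond conductance between nodes (i, j) and (i+1, j)."""
    L = gh.shape[0]
    idx = lambda i, j: i * L + j
    A = np.zeros((L * L, L * L))
    b = np.zeros(L * L)
    for i in range(L):
        for j in range(L):
            k = idx(i, j)
            if j == 0:                       # left bus bar, V = 1
                A[k, k], b[k] = 1.0, 1.0
                continue
            if j == L - 1:                   # right bus bar, V = 0
                A[k, k], b[k] = 1.0, 0.0
                continue
            # current balance: sum over bonds of g * (V_neighbour - V_node) = 0
            for ni, nj, gbond in ((i, j - 1, gh[i, j - 1]), (i, j + 1, gh[i, j]),
                                  (i - 1, j, gv[i - 1, j] if i > 0 else 0.0),
                                  (i + 1, j, gv[i, j] if i < L - 1 else 0.0)):
                if gbond == 0.0:
                    continue
                A[k, k] -= gbond
                A[k, idx(ni, nj)] += gbond
    # lstsq tolerates isolated clusters (singular rows) that do not carry current
    v = np.linalg.lstsq(A, b, rcond=None)[0].reshape(L, L)
    return float(np.sum(gh[:, 0] * (1.0 - v[:, 1])))   # current out of the left bus bar

rng = np.random.default_rng(0)
L, p = 16, 0.6                               # bond occupation probability above threshold
gh = (rng.random((L, L - 1)) < p).astype(float)
gv = (rng.random((L - 1, L)) < p).astype(float)
print(effective_conductance(gh, gv))
```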

  20. Multithreaded transactions in scientific computing: New versions of a computer program for kinematical calculations of RHEED intensity oscillations

    Science.gov (United States)

    Brzuszek, Marcin; Daniluk, Andrzej

    2006-11-01

    Writing a concurrent program can be more difficult than writing a sequential program. The programmer needs to think about synchronisation, race conditions and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction which allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents multithreaded versions of the GROWTH program, which allow calculation of the layer coverages during the growth of thin epitaxial films and the corresponding RHEED intensities according to the kinematical approximation. The presented programs also contain graphical user interfaces, which enable displaying program data at run-time.
    New version program summary
    Titles of programs: GROWTHGr, GROWTH06
    Catalogue identifier: ADVL_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Catalogue identifier of previous version: ADVL
    Does the new version supersede the original program: No
    Computer for which the new version is designed and others on which it has been tested: Pentium-based PC
    Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT
    Programming language used: Object Pascal
    Memory required to execute with typical data: More than 1 MB
    Number of bits in a word: 64 bits
    Number of processors used: 1
    No. of lines in distributed program, including test data, etc.: 20 931
    Number of bytes in distributed program, including test data, etc.: 1 311 268
    Distribution format: tar.gz
    Nature of physical problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the use of kinematical diffraction theory [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222. [1

  1. TRAFIC, a computer program for calculating the release of metallic fission products from an HTGR core

    Energy Technology Data Exchange (ETDEWEB)

    Smith, P.D.

    1978-02-01

    A special purpose computer program, TRAFIC, is presented for calculating the release of metallic fission products from an HTGR core. The program is based upon Fick's law of diffusion for radioactive species. One-dimensional transient diffusion calculations are performed for the coated fuel particles and for the structural graphite web. A quasi steady-state calculation is performed for the fuel rod matrix material. The model accounts for nonlinear adsorption behavior in the fuel rod gap and on the coolant hole boundary. The TRAFIC program is designed to operate in a core survey mode; that is, it performs many repetitive calculations for a large number of spatial locations in the core. This is necessary in order to obtain an accurate volume integrated release. For this reason the program has been designed with calculational efficiency as one of its main objectives. A highly efficient numerical method is used in the solution. The method makes use of the Duhamel superposition principle to eliminate interior spatial solutions from consideration. Linear response functions relating the concentrations and mass fluxes on the boundaries of a homogeneous region are derived. Multiple regions are numerically coupled through interface conditions. Algebraic elimination is used to reduce the equations as far as possible. The problem reduces to two nonlinear equations in two unknowns, which are solved using a Newton Raphson technique.
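
The abstract above states that the boundary-coupled system reduces to two nonlinear equations in two unknowns solved by a Newton Raphson technique. The following generic two-variable Newton solver, applied to a toy system, illustrates that final step only; it is not the TRAFIC implementation or its equations.

```python
import numpy as np

def newton_2x2(f, jac, x0, tol=1.0e-10, max_iter=50):
    """Newton-Raphson iteration for two nonlinear equations in two unknowns."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = np.asarray(f(x), dtype=float)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(np.asarray(jac(x), dtype=float), r)
    return x

# toy system: x^2 + y^2 = 4 and x*y = 1
f = lambda v: [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0]
jac = lambda v: [[2 * v[0], 2 * v[1]], [v[1], v[0]]]
print(newton_2x2(f, jac, [2.0, 0.5]))
```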

  2. Development of an atmospheric dispersion and dose calculation code for real-time response by using small-scale computer

    International Nuclear Information System (INIS)

    This report describes the development of a wind field calculation code and an atmospheric dispersion and dose calculation code which can be used for real-time prediction in an emergency. The models used in the computer codes are a mass-consistent model for the wind field and a particle diffusion model for atmospheric dispersion. In order to attain quick response even when the codes are used on a small-scale computer, a high-speed iteration method (MILUCR) and a kernel density method are applied to the wind field model and the atmospheric dispersion and dose calculation model, respectively. In this report, the numerical models, computational codes, related files and calculation examples are shown. (author)

  3. Computational methods for multiphase equilibrium and kinetics calculations for geochemical and reactive transport applications

    Science.gov (United States)

    Leal, Allan; Saar, Martin

    2016-04-01

    Computational methods for geochemical and reactive transport modeling are essential for the understanding of many natural and industrial processes. Most of these processes involve several phases and components, and quite often require chemical equilibrium and kinetics calculations. We present an overview of novel methods for multiphase equilibrium calculations, based on both the Gibbs energy minimization (GEM) approach and on the solution of the law of mass-action (LMA) equations. We also employ kinetics calculations, assuming partial equilibrium (e.g., fluid species in equilibrium while minerals are in disequilibrium), using automatic time stepping to improve simulation efficiency and robustness. These methods are developed specifically for applications that are computationally expensive, such as reactive transport simulations. We show how efficient the new methods are, compared to other algorithms, and how easy it is to use them for geochemical modeling via a simple script language. All methods are available in Reaktoro, a unified open-source framework for modeling chemically reactive systems, which we also briefly describe.

  4. A computer program incorporating Pitzer's equations for calculation of geochemical reactions in brines

    Science.gov (United States)

    Plummer, L.N.; Parkhurst, D.L.; Fleming, G.W.; Dunkle, S.A.

    1988-01-01

    The program named PHRQPITZ is a computer code capable of making geochemical calculations in brines and other electrolyte solutions to high concentrations using the Pitzer virial-coefficient approach for activity-coefficient corrections. Reaction-modeling capabilities include calculation of (1) aqueous speciation and mineral-saturation index, (2) mineral solubility, (3) mixing and titration of aqueous solutions, (4) irreversible reactions and mineral-water mass transfer, and (5) reaction path. The computed results for each aqueous solution include the osmotic coefficient, water activity, mineral saturation indices, mean activity coefficients, total activity coefficients, and scale-dependent values of pH, individual-ion activities and individual-ion activity coefficients. A data base of Pitzer interaction parameters is provided at 25 °C for the system Na-K-Mg-Ca-H-Cl-SO4-OH-HCO3-CO3-CO2-H2O, and extended to include largely untested literature data for Fe(II), Mn(II), Sr, Ba, Li, and Br, with provision for calculations at temperatures other than 25 °C. An extensive literature review of published Pitzer interaction parameters for many inorganic salts is given. Also described is an interactive input code for PHRQPITZ called PITZINPT. (USGS)
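
One of the reported quantities, the mineral saturation index, is straightforward to illustrate once ion activities are available (in the program they come from the Pitzer activity-coefficient model); the helper and the gypsum numbers below are illustrative approximations, not PHRQPITZ output or its data base.

```python
import math

def saturation_index(ion_activities, stoichiometry, log_ksp):
    """Mineral saturation index SI = log10(IAP) - log10(Ksp), where the ion
    activity product IAP is built from the activities and stoichiometric
    coefficients of the dissolution reaction."""
    log_iap = sum(nu * math.log10(ion_activities[ion])
                  for ion, nu in stoichiometry.items())
    return log_iap - log_ksp

# gypsum, CaSO4.2H2O: IAP = a(Ca+2) * a(SO4-2) * a(H2O)^2, log Ksp ~ -4.58 at 25 C
activities = {"Ca+2": 1.0e-2, "SO4-2": 1.0e-2, "H2O": 0.98}
print(saturation_index(activities, {"Ca+2": 1, "SO4-2": 1, "H2O": 2}, -4.58))
```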

  5. Computing Moment-Based Probability Tables for Self-Shielding Calculations in Lattice Codes

    International Nuclear Information System (INIS)

    As part of the self-shielding model used in the APOLLO2 lattice code, probability tables are required to compute self-shielded cross sections for coarse energy groups (typically with 99 or 172 groups). This paper describes the replacement of the multiband tables (typically with 51 subgroups) with moment-based tables in release 2.5 of APOLLO2. An improved Ribon method is proposed to compute moment-based probability tables, allowing important savings in CPU resources while maintaining the accuracy of the self-shielding algorithm. Finally, a validation is presented where the absorption rates obtained with each of these techniques are compared with exact values obtained using a fine-group elastic slowing-down calculation in the resolved energy domain. Other results, relative to the Rowland's benchmark and to three assembly production cases, are also presented

  6. BALANCE : a computer program for calculating mass transfer for geochemical reactions in ground water

    Science.gov (United States)

    Parkhurst, David L.; Plummer, L. Niel; Thorstenson, Donald C.

    1982-01-01

    BALANCE is a Fortran computer program designed to define and quantify chemical reactions between ground water and minerals. Using (1) the chemical compositions of two waters along a flow path and (2) a set of mineral phases hypothesized to be the reactive constituents in the system, the program calculates the mass transfer (amounts of the phases entering or leaving the aqueous phase) necessary to account for the observed changes in composition between the two waters. Additional constraints can be included in the problem formulation to account for mixing of two end-member waters, redox reactions, and, in a simplified form, isotopic composition. The computer code and a description of the input necessary to run the program are presented. Three examples typical of ground-water systems are described. (USGS)
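
In its simplest form, the mass-transfer problem described above is a linear system: one mass-balance equation per element, one unknown mole transfer per hypothesized phase. The toy system below illustrates that idea with made-up stoichiometry and water-composition changes; it is not BALANCE's input format or one of its published examples.

```python
import numpy as np

# columns: mole transfer of each hypothesized phase (positive = dissolution);
# rows: element mass-balance constraints (changes in mmol/kg water along the path)
phases = ["calcite", "dolomite", "gypsum", "CO2(g)"]
coeff = np.array([            # moles of element per mole of phase
    [1.0, 1.0, 1.0, 0.0],     # Ca
    [0.0, 1.0, 0.0, 0.0],     # Mg
    [0.0, 0.0, 1.0, 0.0],     # S
    [1.0, 2.0, 0.0, 1.0],     # C
])
delta = np.array([1.2, 0.4, 0.5, 2.3])   # observed composition change between the two waters

transfer = np.linalg.solve(coeff, delta)
for name, x in zip(phases, transfer):
    print(f"{name}: {x:+.3f} mmol/kg")   # positive = dissolving, negative = precipitating
```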

  7. Calculation of normalised organ and effective doses to adult reference computational phantoms from contemporary computed tomography scanners

    International Nuclear Information System (INIS)

    The general-purpose Monte Carlo radiation transport code MCNPX has been used to simulate photon transport and energy deposition in anthropomorphic phantoms due to the x-ray exposure from the Philips iCT 256 and Siemens Definition CT scanners, together with the previously studied General Electric 9800. The MCNPX code was compiled with the Intel FORTRAN compiler and run on a Linux PC cluster. A patch has been successfully applied to reduce computing times by about 4%. The International Commission on Radiological Protection (ICRP) has recently published the Adult Male (AM) and Adult Female (AF) reference computational voxel phantoms as successors to the Medical Internal Radiation Dose (MIRD) stylised hermaphrodite mathematical phantoms that form the basis for the widely-used ImPACT CT dosimetry tool. Comparisons of normalised organ and effective doses calculated for a range of scanner operating conditions have demonstrated significant differences in results (in excess of 30%) between the voxel and mathematical phantoms as a result of variations in anatomy. These analyses illustrate the significant influence of choice of phantom on normalised organ doses and the need for standardisation to facilitate comparisons of dose. Further such dose simulations are needed in order to update the ImPACT CT Patient Dosimetry spreadsheet for contemporary CT practice. (author)

  8. The calculation of temperature asymmetries in encapsulated fuel rods using the TEXDIF-P computer code

    International Nuclear Information System (INIS)

    In the fuel rods of the first DUELL experiment highly asymmetric fuel structures were found which had been caused by a steep transversal neutron flux gradient and eccentric pellet location. The TEXDIF-P computer code was developed to explain this phenomenon in quantitative terms. This computer code solves for an encapsulated fuel rod the equation of two-dimensional heat conduction using the finite differences method. Any distribution may be specified of the heat source density and of the gap between the fuel pellet and the cladding tube. By use of the modular structure the material relations are easily exchangeable. The TEXDIF-P code can be applied both to oxide and to carbide fuel rods. Coupling of the POUMEC subprogram of SATURN-1 allows the dynamic calculation of pore migration. Independent of this, the program includes an option for determination of the limit of the pore migration zone via a relation covering the minimum pore migration path according to Olander. TEXDIF-P has been used so far to verify the first start-up ramp experiment of DUELL. The agreement between the computation and the findings of post-examinations is quite satisfactory regarding the size and the location of the central void. Also the limit of the compacted zone is fairly well reproduced by the computation. The assumption on the size of the transversal neutron flux gradient has been essentially confirmed retroactively by transversal γ-scanning. (orig.)

  9. Computational Issues Associated with Automatic Calculation of Acute Myocardial Infarction Scores

    Science.gov (United States)

    Destro-Filho, J. B.; Machado, S. J. S.; Fonseca, G. T.

    2008-12-01

    This paper presents a comparison among the three principal acute myocardial infarction (AMI) scores (Selvester, Aldrich, Anderson-Wilkins) as they are automatically estimated from digital electrocardiographic (ECG) files, in terms of memory occupation and processing time. Theoretical algorithm complexity is also provided. Our simulation study supposes that the ECG signal is already digitized and available within a computer platform. We perform 1 000 000 Monte Carlo experiments using the same input files, leading to average results that point out drawbacks and advantages of each score. Since none of these calculations requires either large memory occupation or long processing times, automatic estimation is compatible with the real-time requirements associated with AMI urgency and with telemedicine systems, being faster than manual calculation, even in the case of simple, low-cost personal microcomputers.

  10. An approach to first principles electronic structure calculation by symbolic-numeric computation

    CERN Document Server

    Kikuchi, Akihito

    2013-01-01

    This article is an introduction to a new approach to first principles electronic structure calculation. The starting point is the Hartree-Fock-Roothaan equation, in which molecular integrals are approximated by polynomials by way of Taylor expansion with respect to atomic coordinates and other variables. It leads to a set of polynomial equations whose solutions are eigenstate, which is designated as algebraic molecular orbital equation. Symbolic computation, especially, Gr\\"obner bases theory, enables us to rewrite the polynomial equations into more trimmed and tractable forms with identical roots, from which we can unravel the relationship between physical parameters (wave function, atomic coordinates, and others) and numerically evaluate them one by one in order. Furthermore, this method is a unified way to solve the electronic structure calculation, the optimization of physical parameters, and the inverse problem as a forward problem.
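
As a toy illustration of the kind of symbolic step the abstract describes (rewriting a polynomial system into a more trimmed, triangular form with the same roots), the following SymPy snippet computes a lexicographic Gröbner basis of a small system; the system is arbitrary and is not a molecular-orbital equation.

```python
from sympy import Rational, groebner, symbols

x, y = symbols("x y")
system = [x**2 + y**2 - 1, x*y - Rational(1, 4)]

# lexicographic order with y > x eliminates y first, leaving a univariate
# polynomial in x from which the roots can be evaluated one by one in order
G = groebner(system, y, x, order="lex")
print(G)
```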

  11. Methods, algorithms and computer codes for calculation of electron-impact excitation parameters

    CERN Document Server

    Bogdanovich, P; Stonys, D

    2015-01-01

    We describe the computer codes, developed at Vilnius University, for the calculation of electron-impact excitation cross sections, collision strengths, and excitation rates in the plane-wave Born approximation. These codes utilize the multireference atomic wavefunctions which are also adopted to calculate radiative transition parameters of complex many-electron ions. This leads to consistent data sets suitable in plasma modelling codes. Two versions of electron scattering codes are considered in the present work, both of them employing configuration interaction method for inclusion of correlation effects and Breit-Pauli approximation to account for relativistic effects. These versions differ only by one-electron radial orbitals, where the first one employs the non-relativistic numerical radial orbitals, while another version uses the quasirelativistic radial orbitals. The accuracy of produced results is assessed by comparing radiative transition and electron-impact excitation data for neutral hydrogen, helium...

  12. WOLF: a computer code package for the calculation of ion beam trajectories

    International Nuclear Information System (INIS)

    The WOLF code solves Poisson's equation within a user-defined problem boundary of arbitrary shape. The code is compatible with ANSI FORTRAN and uses a two-dimensional Cartesian coordinate geometry represented on a triangular lattice. The vacuum electric fields and equipotential lines are calculated for the input problem. The user may then introduce a series of emitters from which particles of different charge-to-mass ratios and initial energies can originate. These non-relativistic particles will then be traced by WOLF through the user-defined region. Effects of ion and electron space charge are included in the calculation. A subprogram PISA forms part of this code and enables optimization of various aspects of the problem. The WOLF package also allows detailed graphics analysis of the computed results to be performed

  13. Computation-communication overlap techniques for parallel spectral calculations in gyrokinetic Vlasov simulations

    International Nuclear Information System (INIS)

    One of the important phenomena in magnetically-confined fusion plasma is plasma turbulence, which causes particle and heat transport and degrades plasma confinement. To address multi-scale turbulence including the temporal and spatial scales of electrons and ions, we extend our gyrokinetic Vlasov simulation code GKV to run efficiently on peta-scale supercomputers. A key numerical technique is the parallel Fast Fourier Transform (FFT) required for parallel spectral calculations, where masking of the cost of inter-node transpose communications is essential to improve strong scaling. To mask communication costs, computation-communication overlap techniques are applied to the FFTs and transposes with the help of hybrid parallelization using the Message Passing Interface (MPI) and OpenMP. Integrated overlaps covering whole spectral calculation procedures show better scaling than simple overlaps of FFTs and transposes. The masking of communication costs significantly improves the strong scaling of the GKV code, and yields substantial speed-up toward multi-scale turbulence simulations. (author)

  14. Computer codes for the calculation of vibrations in machines and structures

    International Nuclear Information System (INIS)

    After an introductory paper on the typical requirements to be met by vibration calculations, the first two sections of the conference papers present universal as well as specific finite-element codes tailored to solve individual problems. The calculation of dynamic processes now increasingly applies, in addition to finite elements, the method of multi-component systems, which takes into account rigid bodies or partial structures and linking and joining elements. This method, too, is explained with reference to universal computer codes and to special versions. In mechanical engineering, rotary vibrations are a major problem, and under this topic the conference papers deal exclusively with codes that also take into account special effects such as electromechanical coupling, non-linearities in clutches, etc. (orig./HP)

  15. A compilation of structural property data for computer impact calculation (3/5)

    International Nuclear Information System (INIS)

    The paper describes structural property data for computer impact calculations of nuclear fuel shipping casks. Data for four kinds of materials, mild steel, stainless steel, lead and wood, are compiled; these materials are the main structural elements of shipping casks. Structural data such as the coefficient of thermal expansion, the modulus of longitudinal elasticity, the modulus of transverse elasticity, the Poisson's ratio and stress-strain relationships have been tabulated against temperature or strain rate. This volume 3 contains the structural property data for stainless steel. (author)

  16. Algorithm and computer code for calculating the swelling of the fuel elements with a ceramic fuel

    International Nuclear Information System (INIS)

    The algorithm and the OVERAT program, intended for calculating the stress-strain state of a cylindrical, axially symmetric fuel element with ceramic fuel and a thin-walled shell, are described. Calculations are performed with account taken of creep deformation, fuel swelling, and the coolant and gas pressures in the axial cavity. At each moment of time the deformations and strains in the shell, as well as the spatial (radial) dependence of fuel swelling, are calculated. Fuel swelling is determined on the basis of a theoretical model in which gas swelling is related to the formation and development of intergrain porosity only. Reactor operation at constant power, with temperature and energy-release distributions in the fuel element core rod invariable in time, is considered. The processes taking place in a fuel element are described by a stiff system of ordinary first-order differential equations, which is solved by the Gear method. The OVERAT program is written in FORTRAN and was debugged on a BESM-6 computer. Results of test calculations of the stress-strain state and swelling of a fuel element with a hollow UO2 rod in a molybdenum shell are presented. It is pointed out that the described program, in a complex with other programs, can be used for investigating the serviceability of fuel elements of various reactor types
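
The Gear method cited above is a backward-differentiation (BDF) multistep scheme for stiff systems. The sketch below integrates a deliberately stiff toy system with SciPy's BDF option merely to illustrate that class of solver; the equations and constants are invented and have nothing to do with the OVERAT fuel-element model.

```python
from scipy.integrate import solve_ivp

def rhs(t, y, k_fast=1.0e4, k_slow=1.0):
    """Toy stiff system: a fast relaxing variable coupled to a slowly growing one."""
    fast, slow = y
    return [-k_fast * (fast - 0.01 * slow), k_slow * (1.0 - slow)]

# BDF is a Gear-type multistep method suited to stiff problems like this one
sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], method="BDF", rtol=1.0e-8)
print(sol.t[-1], sol.y[:, -1])
```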

  17. Analysis of shielding calculation methods for 16- and 64-slice computed tomography facilities

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, C; Cenizo, E; Bodineau, C; Mateo, B; Ortega, E M, E-mail: c_morenosaiz@yahoo.e [Servicio de RadiofIsica Hospitalaria, Hospital Regional Universitario Carlos Haya, Malaga (Spain)

    2010-09-15

    The new multislice computed tomography (CT) machines require some new methods of shielding calculation, which need to be analysed. NCRP Report No. 147 proposes three shielding calculation methods based on the following dosimetric parameters: weighted CT dose index for the peripheral axis (CTDIw,per), dose-length product (DLP) and isodose maps. A survey of these three methods has been carried out. For this analysis, we have used measured values of the dosimetric quantities involved and also those provided by the manufacturer, making a comparison between the results obtained. The barrier thicknesses when setting up two different multislice CT instruments, a Philips Brilliance 16 or a Philips Brilliance 64, in the same room, are also compared. Shielding calculation from isodose maps provides more reliable results than the other two methods, since it is the only method that takes the actual scattered radiation distribution into account. It is concluded therefore that the most suitable method for calculating the barrier thicknesses of the CT facility is the one based on isodose maps. This study also shows that for different multislice CT machines the barrier thicknesses do not necessarily become bigger as the number of slices increases, because of the great dependence on the technique used in CT protocols for different anatomical regions.

  18. Calculation of the properties of digital mammograms using a computer simulation

    International Nuclear Information System (INIS)

    A Monte Carlo computer model of mammography has been developed to study and optimise the performance of digital mammographic systems. The program uses high-resolution voxel phantoms to model the breast, which simulate the adipose and fibro-glandular tissues, Cooper's ligaments, ducts and skin in three dimensions. The model calculates the dose to each tissue, and also quantities such as the energy imparted to image pixels, the noise per image pixel and scatter-to-primary (S/P) ratios. It allows studies of the dependence of image properties on breast structure and on position within the image. The program has been calibrated by calculating and measuring the pixel values and noise for a digital mammographic system. The thicknesses of two components of this system were unknown, and were adjusted to obtain a good agreement between measurement and calculation. The utility of the program is demonstrated with calculations of the variation of the S/P ratio with and without a grid, and of the image contrast across the image of a 50-mm-thick breast phantom. (authors)

  19. Monteray Mark-I: Computer program (PC-version) for shielding calculation with Monte Carlo method

    International Nuclear Information System (INIS)

    A computer program for gamma ray shielding calculation using the Monte Carlo method has been developed. The program is written in the WATFOR77 language. MONTERAY MARK-I was originally developed by James Wood; the program was modified by the authors so that the modified version is easily executed. Applying the Monte Carlo method, the program follows gamma photon transport in an infinite planar shield of various thicknesses. A gamma photon is followed until it escapes from the shield or its energy falls below the cut-off energy. The pair-production process is treated as pure absorption, so annihilation photons generated in the process are neglected in the calculation. The output data calculated by the program are the total albedo, the build-up factor, and photon spectra. Calculated build-up factors for lead and water slabs with a 6 MeV parallel-beam gamma source agree with published data. Hence the program is adequate as a shielding design tool for studying gamma radiation transport in various media
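
The history-following logic of such a slab calculation can be sketched with a deliberately over-simplified one-speed model (exponential path lengths, a fixed scattering probability, isotropic scattering, no energy loss); the real program samples actual interaction physics, so the numbers below are illustrative only.

```python
import math
import random

def slab_albedo_transmission(mu_total, scatter_prob, thickness,
                             n_histories=100_000, seed=1):
    """Grossly simplified one-speed photon Monte Carlo in an infinite planar slab,
    returning the fractions of histories reflected (albedo) and transmitted."""
    random.seed(seed)
    reflected = transmitted = 0
    for _ in range(n_histories):
        x, mu = 0.0, 1.0                      # start on the front face, travelling inward
        while True:
            # distance to next collision, sampled from an exponential distribution
            x += mu * (-math.log(1.0 - random.random()) / mu_total)
            if x < 0.0:
                reflected += 1
                break
            if x > thickness:
                transmitted += 1
                break
            if random.random() > scatter_prob:            # absorbed at the collision site
                break
            mu = 2.0 * random.random() - 1.0              # isotropic scatter: new direction cosine
    return reflected / n_histories, transmitted / n_histories

print(slab_albedo_transmission(mu_total=0.5, scatter_prob=0.6, thickness=5.0))
```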

  20. First principle calculations of effective exchange integrals: Comparison between SR (BS) and MR computational results

    Energy Technology Data Exchange (ETDEWEB)

    Yamaguchi, Kizashi [Institute for Nano Science Design Center, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan and TOYOTA Physical and Chemical Research Institute, Nagakute, Aichi, 480-1192 (Japan); Nishihara, Satomichi; Saito, Toru; Yamanaka, Shusuke; Kitagawa, Yasutaka; Kawakami, Takashi; Yamada, Satoru; Isobe, Hiroshi; Okumura, Mitsutaka [Department of Chemistry, Graduate School of Science, Osaka University, 1-1 Machikaneyama, Toyonaka, Osaka 560-0043 (Japan)

    2015-01-22

    First principle calculations of effective exchange integrals (J) in the Heisenberg model for diradical species were performed by both symmetry-adapted (SA) multi-reference (MR) and broken-symmetry (BS) single reference (SR) methods. Mukherjee-type (Mk) state specific (SS) MR coupled-cluster (CC) calculations by the use of natural orbital (NO) references of ROHF, UHF, UDFT and CASSCF solutions were carried out to elucidate J values for di- and poly-radical species. Spin-unrestricted Hartree Fock (UHF) based coupled-cluster (CC) computations were also performed for these species. Comparison between UHF-NO(UNO)-MkMRCC and BS UHF-CC computational results indicated that spin-contamination of UHF-CC solutions still remains at the SD level. In order to eliminate the spin contamination, an approximate spin-projection (AP) scheme was applied to UCC, and the AP procedure indeed corrected the error, yielding good agreement with MkMRCC in energy. The CC doubles method with spin-unrestricted Brueckner's orbitals (UBD) was furthermore employed for these species, showing that the spin contamination involved in UHF solutions is largely suppressed, and therefore the AP scheme for UBCCD easily removed the remaining spin contamination. We also performed spin-unrestricted pure- and hybrid-density functional theory (UDFT) calculations of diradical and polyradical species. Three different computational schemes for the total spin angular momenta were examined for the AP correction of the hybrid (H) UDFT. HUDFT calculations followed by AP, HUDFT(AP), yielded S-T gaps that were qualitatively in good agreement with those of MkMRCCSD, UHF-CC(AP) and UB-CC(AP). Thus a systematic comparison among MkMRCCSD, UCC(AP), UBD(AP) and UDFT(AP) was performed concerning the first-principles calculation of J values in di- and poly-radical species. It was found that BS (AP) methods reproduce MkMRCCSD results, indicating their applicability to large exchange coupled systems.
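
    As a minimal illustration of how a BS(AP) estimate of J is formed, the sketch below applies the widely used Yamaguchi-type projection J = (E_BS - E_HS) / (<S^2>_HS - <S^2>_BS). The energies and <S^2> expectation values in the example are invented, not results from this paper.

    def j_ap(e_bs, e_hs, s2_hs, s2_bs):
        """Approximate spin-projected effective exchange integral (same units as the energies)."""
        return (e_bs - e_hs) / (s2_hs - s2_bs)

    # Hypothetical diradical: BS singlet and high-spin triplet energies in hartree.
    J = j_ap(e_bs=-150.123456, e_hs=-150.121987, s2_hs=2.01, s2_bs=1.02)
    print(f"J = {J * 219474.6:.1f} cm^-1")   # hartree -> cm^-1 conversion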

  1. A Geometric Computational Model for Calculation of Longwall Face Effect on Gate Roadways

    Science.gov (United States)

    Mohammadi, Hamid; Ebrahimi Farsangi, Mohammad Ali; Jalalifar, Hossein; Ahmadi, Ali Reza

    2016-01-01

    In this paper a geometric computational model (GCM) has been developed for calculating the effect of the longwall face on the extension of the excavation-damaged zone (EDZ) above the gate roadways (main and tail gates), considering the advance longwall mining method. In this model, the stability of gate roadways is investigated based on loading effects due to the EDZ and caving zone (CZ) above the longwall face, which can extend the EDZ size. The structure of GCM depends on four important factors: (1) geomechanical properties of the hanging wall, (2) dip and thickness of the coal seam, (3) CZ characteristics, and (4) pillar width. The investigations demonstrated that the extension of the EDZ is a function of pillar width. Considering the effect of pillar width, new mathematical relationships were presented to calculate the face influence coefficient and the characteristics of the extended EDZ. Furthermore, taking GCM into account, a computational algorithm for stability analysis of gate roadways was suggested. Validation was carried out through instrumentation and monitoring results of a longwall face at Parvade-2 coal mine in Tabas, Iran, demonstrating good agreement between the new model and measured results. Finally, a sensitivity analysis was carried out on the effect of pillar width, bearing capacity of the support system and coal seam dip.

  2. Calculating three loop ladder and V-topologies for massive operator matrix elements by computer algebra

    International Nuclear Information System (INIS)

    Three loop ladder and V-topology diagrams contributing to the massive operator matrix element AQg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.
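
    For readers unfamiliar with the objects named above, the short sketch below evaluates nested harmonic sums S_{a1,...,ak}(N) by direct recursion. It only illustrates the simplest of the special sums mentioned; it contains nothing of the difference-ring machinery used in the paper.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def harmonic_sum(indices, N):
        """Nested harmonic sum S_{indices}(N); a negative index carries a factor (-1)^n."""
        if not indices:
            return 1.0
        a, rest = indices[0], indices[1:]
        total = 0.0
        for n in range(1, N + 1):
            sign = -1.0 if (a < 0 and n % 2) else 1.0
            total += sign * harmonic_sum(rest, n) / n**abs(a)
        return total

    print("S_1(10)   =", harmonic_sum((1,), 10))    # ordinary harmonic number H_10
    print("S_2,1(10) =", harmonic_sum((2, 1), 10))
    print("S_-1(10)  =", harmonic_sum((-1,), 10))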

  3. Calculating three loop ladder and V-topologies for massive operator matrix elements by computer algebra

    Science.gov (United States)

    Ablinger, J.; Behring, A.; Blümlein, J.; De Freitas, A.; von Manteuffel, A.; Schneider, C.

    2016-05-01

    Three loop ladder and V-topology diagrams contributing to the massive operator matrix element AQg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.

  4. Computer software to calculate and map geologic parameters required in estimating coal production costs. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Honea, R.B.; Petrich, C.H.; Wilson, D.L.; Dillard, C.A.; Durfee, R.C.; Faber, J.A.

    1979-04-01

    This report documents the methodology and computer software developed by Energy Division and Computer Sciences Division personnel at Oak Ridge National Laboratory (ORNL). The software is designed to quantify and automatically map geologic and other cost-related parameters as required to estimate coal mining costs. The software complements the detailed coal production cost models for both underground and surface mines which have been developed for the Electric Power Research Institute (EPRI) by NUS Corp. These models require input variables such as coal seam thickness, coal seam depth, surface slope, etc., to estimate mining costs. This report provides a general overview of the software and methodology developed by ORNL to calculate some of these parameters along with sample map output which indicates the geographical distribution of these geologic characteristics. A detailed user guide for implementing the software has been prepared and is included in the appendixes. (Sample input data which may be used to verify the operation of the software are available from ORNL.) Also included is a brief review of coal production, coal recovery, and coal resource calculation studies. This system will be useful to utilities and coal mine operators alike in estimating costs through comprehensive assessment before mining takes place.

  5. Computational Benchmark Calculations Relevant to the Neutronic Design of the Spallation Neutron Source (SNS)

    International Nuclear Information System (INIS)

    The Spallation Neutron Source (SNS) will provide an intense source of low-energy neutrons for experimental use. The low-energy neutrons are produced by the interaction of a high-energy (1.0 GeV) proton beam on a mercury (Hg) target and slowed down in liquid hydrogen or light water moderators. Computer codes and computational techniques are being benchmarked against relevant experimental data to validate and verify the tools being used to predict the performance of the SNS. The LAHET Code System (LCS), which includes LAHET, HTAPE and HMCNP (a modified version of MCNP version 3b), has been applied to the analysis of experiments that were conducted in the Alternating Gradient Synchrotron (AGS) facility at Brookhaven National Laboratory (BNL). In the AGS experiments, foils of various materials were placed around a mercury-filled stainless steel cylinder, which was bombarded with protons at 1.6 GeV. Neutrons created in the mercury target activated the foils. Activities of the relevant isotopes were accurately measured and compared with calculated predictions. Measurements at BNL were provided in part by collaborating scientists from JAERI as part of the AGS Spallation Target Experiment (ASTE) collaboration. To date, calculations have shown good agreement with measurements

  6. DIST: a computer code system for calculation of distribution ratios of solutes in the purex system

    Energy Technology Data Exchange (ETDEWEB)

    Tachimori, Shoichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1996-05-01

    Purex is a solvent extraction process for reprocessing spent nuclear fuel using tri-n-butyl phosphate (TBP). A computer code system, DIST, has been developed to calculate distribution ratios for the major solutes in the Purex process. The DIST system is composed of databases storing experimental distribution data of U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO{sub 3} and HNO{sub 2} (DISTEX) and of Zr(IV) and Tc(VII) (DISTEXFP), and of calculation programs that calculate the distribution ratios of U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO{sub 3} and HNO{sub 2} (DIST1) and of Zr(IV) and Tc(VII) (DIST2). DIST1 and DIST2 determine, by best-fit procedures, the most appropriate values of the many parameters of the empirical equations, using the DISTEX data that fulfil the assigned conditions, and these equations are applied to calculate the distribution ratios of the respective solutes. Approximately 5,000 data sets are stored in DISTEX and DISTEXFP. The present report describes the following items: 1) the specific features of the DIST1 and DIST2 codes and examples of calculations; 2) the databases DISTEX and DISTEXFP and the program DISTIN, which manages the data in DISTEX and DISTEXFP through input, search, correction and delete functions; and, in the annexes, 3) the programs DIST1 and DIST2 and the figure-drawing programs DIST1G and DIST2G; 4) the user manual for DISTIN; 5) the source programs of DIST1 and DIST2; 6) the experimental data stored in DISTEX and DISTEXFP. (author). 122 refs.

  7. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Park, Peter C. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States); Fox, Tim [Varian Medical Systems, Palo Alto, California (United States); Zhu, X. Ronald [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Dong, Lei [Scripps Proton Therapy Center, San Diego, California (United States); Dhabaan, Anees, E-mail: anees.dhabaan@emory.edu [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States)

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
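
    A schematic sketch of the central idea (learn an MRI-intensity to HU mapping on an artifact-free slice and use it to replace corrupted values) is given below in Python. The binned-median mapping and the synthetic images are our own simplifications and stand-ins, not the authors' registration-based analysis.

    import numpy as np

    def fit_intensity_to_hu(mri_clean, ct_clean, n_bins=64):
        """Median HU per MRI-intensity bin, learned on the artifact-free slice."""
        edges = np.linspace(mri_clean.min(), mri_clean.max(), n_bins + 1)
        idx = np.clip(np.digitize(mri_clean, edges) - 1, 0, n_bins - 1)
        lut = np.array([np.median(ct_clean[idx == b]) if np.any(idx == b) else 0.0
                        for b in range(n_bins)])
        return edges, lut

    def correct_slice(mri_corrupt, ct_corrupt, artifact_mask, edges, lut):
        """Replace HU inside the artifact mask using the learned intensity-to-HU mapping."""
        idx = np.clip(np.digitize(mri_corrupt, edges) - 1, 0, len(lut) - 1)
        corrected = ct_corrupt.copy()
        corrected[artifact_mask] = lut[idx][artifact_mask]
        return corrected

    # Tiny synthetic demo; random images stand in for registered MRI/CT slices.
    rng = np.random.default_rng(0)
    mri = rng.uniform(0, 1, (64, 64)); ct = 1000 * mri + rng.normal(0, 20, (64, 64))
    edges, lut = fit_intensity_to_hu(mri, ct)
    mask = np.zeros((64, 64), bool); mask[20:40, 20:40] = True
    fixed = correct_slice(mri, ct + 500 * mask, mask, edges, lut)
    print("mean HU error inside mask after correction:",
          float(np.abs(fixed[mask] - ct[mask]).mean()))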

  8. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    International Nuclear Information System (INIS)

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts

  9. Good manufacturing practice for modelling air pollution: Quality criteria for computer models to calculate air pollution

    Science.gov (United States)

    Dekker, C. M.; Sliggers, C. J.

    To spur on quality assurance for models that calculate air pollution, quality criteria for such models have been formulated. By satisfying these criteria the developers of these models and producers of the software packages in this field can assure and account for the quality of their products. In this way critics and users of such (computer) models can gain a clear understanding of the quality of the model. Quality criteria have been formulated for the development of mathematical models, for their programming—including user-friendliness, and for the after-sales service, which is part of the distribution of such software packages. The criteria have been introduced into national and international frameworks to obtain standardization.

  10. Electronic Structure Calculations and Adaptation Scheme in Multi-core Computing Environments

    Energy Technology Data Exchange (ETDEWEB)

    Seshagiri, Lakshminarasimhan; Sosonkina, Masha; Zhang, Zhao

    2009-05-20

    Multi-core processing environments have become the norm in generic computing and are being considered for adding an extra dimension to the execution of any application. The T2 Niagara processor is a unique environment, consisting of eight cores, each capable of running eight threads simultaneously. Applications like General Atomic and Molecular Electronic Structure (GAMESS), used for ab-initio molecular quantum chemistry calculations, can be good indicators of the performance of such machines and can serve as a guideline for both hardware designers and application programmers. In this paper we benchmark GAMESS performance on a T2 Niagara processor for a couple of molecules. We also show the suitability of using a middleware-based adaptation algorithm with GAMESS in such a multi-core environment.

  11. SMART- IST: a computer program to calculate aerosol and radionuclide behaviour in CANDU reactor containments

    International Nuclear Information System (INIS)

    The SMART-IST computer code models radionuclide behaviour in CANDU reactor containments during postulated accidents. It calculates nuclide concentrations in various parts of containment and releases of nuclides from containment to the atmosphere. The intended application of SMART-IST is safety and licensing analyses of public dose resulting from the releases of nuclides. SMART-IST has been developed and validated meeting the CSA N286.7 quality assurance standard, under the sponsorship of the Industry Standard Toolset (IST) partners consisting of AECL and Canadian nuclear utilities; OPG, Bruce Power, NB Power and Hydro-Quebec. This paper presents an overview of the SMART-IST code including its theoretical framework and models, and also presents typical examples of code predictions. (author)

  12. Calculating three loop ladder and V-topologies for massive operator matrix elements by computer algebra

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, J.; Schneider, C. [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Behring, A.; Bluemlein, J.; Freitas, A. de [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Manteuffel, A. von [Mainz Univ. (Germany). Inst. fuer Physik

    2015-09-15

    Three loop ladder and V-topology diagrams contributing to the massive operator matrix element A{sub Qg} are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.

  13. MILDOS - A Computer Program for Calculating Environmental Radiation Doses from Uranium Recovery Operations

    Energy Technology Data Exchange (ETDEWEB)

    Strange, D. L.; Bander, T. J.

    1981-04-01

    The MILDOS Computer Code estimates impacts from radioactive emissions from uranium milling facilities. These impacts are presented as dose commitments to individuals and to the regional population within an 80 km radius of the facility. Only airborne releases of radioactive materials are considered: releases to surface water and to groundwater are not addressed in MILDOS. This code is multi-purposed and can be used to evaluate population doses for NEPA assessments, maximum individual doses for predictive 40 CFR 190 compliance evaluations, or maximum offsite air concentrations for predictive evaluations of 10 CFR 20 compliance. Emissions of radioactive materials from fixed point source locations and from area sources are modeled using a sector-averaged Gaussian plume dispersion model, which utilizes user-provided wind frequency data. Mechanisms such as deposition of particulates, resuspension, radioactive decay and ingrowth of daughter radionuclides are included in the transport model. Annual average air concentrations are computed, from which subsequent impacts to humans through various pathways are computed. Ground surface concentrations are estimated from deposition buildup and ingrowth of radioactive daughters. The surface concentrations are modified by radioactive decay, weathering and other environmental processes. The MILDOS Computer Code allows the user to vary the emission sources as a step function of time by adjusting the emission rates, which includes shutting them off completely. Thus the results of a computer run can be made to reflect changing processes throughout the facility's operational lifetime. The pathways considered for individual dose commitments and for population impacts are: inhalation, external exposure from ground concentrations, external exposure from cloud immersion, ingestion of vegetables, ingestion of meat, and ingestion of milk. Dose commitments are calculated using dose conversion factors, which are ultimately based
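
    A much reduced sketch of the sector-averaged Gaussian plume step that such a code performs is given below; it returns an annual average air concentration only. The vertical dispersion power law and all example numbers are illustrative assumptions, not MILDOS coefficients.

    import math

    def sector_avg_concentration(Q, x, u, freq, h=0.0, n_sectors=16, a=0.08, b=0.9):
        """Ground-level annual-average concentration (Bq/m^3) at downwind distance x (m)
        for release rate Q (Bq/s), mean wind speed u (m/s), wind-direction frequency
        freq into this sector, and release height h (m)."""
        sigma_z = a * x**b                                   # toy vertical dispersion law
        sector_width = 2.0 * math.pi * x / n_sectors         # arc length of a 22.5 deg sector
        cwic = math.sqrt(2.0 / math.pi) * Q / (sigma_z * u) \
               * math.exp(-h**2 / (2.0 * sigma_z**2))        # crosswind-integrated concentration
        return freq * cwic / sector_width

    chi = sector_avg_concentration(Q=1.0e6, x=800.0, u=4.0, freq=0.15, h=10.0)
    print(f"annual average concentration ~ {chi:.3e} Bq/m^3")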

  14. Measurements and computer calculations of pulverized-coal combustion at Asnaes Power Station 4

    Energy Technology Data Exchange (ETDEWEB)

    Biede, O.; Swane Lund, J.

    1996-07-01

    Measurements have been performed on a front-fired 270 MW (net electrical output) pulverized-coal utility furnace with 24 swirl-stabilized burners, placed in four horizontal rows. Apart from continuous operational measurements, special measurements were performed as follows. At one horizontal level above the upper burner row, gas temperatures were measured by an acoustic pyrometer. At the same level and at the level of the second upper burner row, irradiation to the walls was measured in ten positions by means of specially designed 2{pi} thermal radiation meters. Fly-ash was collected and analysed for unburned carbon. The coal size distribution to each individual burner was measured. Eight different cases were measured. On a Colombian coal, three cases with different oxygen concentrations in the exit-gas were measured at a load of 260 MW, and in addition, measurements were performed at reduced loads of 215 MW and 130 MW. On a South African coal blend, measurements were performed at a load of 260 MW with three different oxygen exit concentrations. Each case has been simulated by a three-dimensional numerical computer code for the prediction of the distribution of gas temperatures, species concentrations and thermal radiative net heat absorption on the furnace walls. Comparisons between measured and calculated gas temperatures, irradiation and unburned carbon are made. Measured results among the cases differ significantly, and the computational results agree well with the measured results. (au)

  15. Hybrid approach for fast occlusion processing in computer-generated hologram calculation.

    Science.gov (United States)

    Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce

    2016-07-10

    A hybrid approach for fast occlusion processing in computer-generated hologram calculation is studied in this paper. The proposed method is based on the combination of two commonly used approaches that complement one another: the point-source and wave-field approaches. By using these two approaches together, the proposed method thus takes advantage of both of them. In this method, the 3D scene is first sliced into several depth layers parallel to the hologram plane. Light scattered by the scene is then propagated and shielded from one layer to another using either a point-source or a wave-field approach according to a threshold criterion on the number of points within the layer. Finally, the hologram is obtained by computing the propagation of light from the nearest layer to the hologram plane. Experimental results reveal that the proposed method does not produce any visible artifact and outperforms both the point-source and wave-field approaches. PMID:27409327
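
    A heavily simplified sketch of the layer-by-layer dispatch described above is given below: each depth layer occludes light from behind with a one-pixel opaque mask per point, and its own emission is added either as a point-source sum (few points) or as an FFT-based wave-field propagation (many points). The wavelength, pixel pitch, threshold and geometry are assumptions, and the occlusion and sampling handling are far cruder than in the paper.

    import numpy as np

    WAVELEN, PITCH, THRESHOLD = 532e-9, 8e-6, 200      # assumed wavelength, pixel pitch, switch point

    def angular_spectrum(field, dz):
        """Wave-field step: propagate a sampled complex field over a distance dz."""
        ny, nx = field.shape
        FX, FY = np.meshgrid(np.fft.fftfreq(nx, PITCH), np.fft.fftfreq(ny, PITCH))
        kz2 = 1.0 / WAVELEN**2 - FX**2 - FY**2
        H = np.where(kz2 > 0, np.exp(2j * np.pi * dz * np.sqrt(np.maximum(kz2, 0.0))), 0)
        return np.fft.ifft2(np.fft.fft2(field) * H)

    def point_source(points, shape, dz):
        """Point-source step: sum spherical waves of a few points at a plane dz away."""
        ny, nx = shape
        y, x = np.mgrid[0:ny, 0:nx] * PITCH
        field = np.zeros(shape, complex)
        for px, py, amp in points:
            r = np.sqrt((x - px)**2 + (y - py)**2 + dz**2)
            field += amp * np.exp(2j * np.pi * r / WAVELEN) / r
        return field

    def hybrid_hologram(layers, shape=(256, 256), dz=1e-3):
        """layers: farthest-to-nearest lists of (x, y, amplitude) points; the hologram
        plane is assumed to lie one layer spacing in front of the nearest layer."""
        field = np.zeros(shape, complex)
        for points in layers:
            for px, py, _a in points:                        # occlusion: opaque points block
                field[int(py / PITCH) % shape[0], int(px / PITCH) % shape[1]] = 0.0
            if len(points) < THRESHOLD:                      # sparse layer: point sources
                field = angular_spectrum(field, dz) + point_source(points, shape, dz)
            else:                                            # dense layer: wave field
                emission = np.zeros(shape, complex)
                for px, py, amp in points:
                    emission[int(py / PITCH) % shape[0], int(px / PITCH) % shape[1]] += amp
                field = angular_spectrum(field + emission, dz)
        return field

    far = [(0.8e-3, 0.8e-3, 1.0)]                            # sparse far layer
    near = [(i * 4 * PITCH, 1.0e-3, 0.5) for i in range(300)]  # dense near layer
    print("hologram field shape:", hybrid_hologram([far, near]).shape)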

  16. SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations

    International Nuclear Information System (INIS)

    Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using the GMctdospp (IMPS, Germany) based on the EGSnrc user code. The X-ray spectra and a bowtie filter for MC simulations were determined to coincide with measurements of half-value layer (HVL) and off-center ratio (OCR) profile in air. The MC dose was calibrated from absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images and organ doses were evaluated from dose volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), the mean doses for kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses differed considerably from CTDIvol values. Conclusion: MC dose distributions calculated by using patient CT images are useful to evaluate organ doses absorbed by individual patients

  17. SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Ono, T; Araki, F [Faculty of Life Sciences, Kumamoto University, Kumamoto (Japan)

    2014-06-01

    Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using the GMctdospp (IMPS, Germany) based on the EGSnrc user code. The X-ray spectra and a bowtie filter for MC simulations were determined to coincide with measurements of half-value layer (HVL) and off-center ratio (OCR) profile in air. The MC dose was calibrated from absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images and organ doses were evaluated from dose volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), the mean doses for kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses differed considerably from CTDIvol values. Conclusion: MC dose distributions calculated by using patient CT images are useful to evaluate organ doses absorbed by individual patients.

  18. Development of a Korean adult male computational phantom for internal dosimetry calculation

    International Nuclear Information System (INIS)

    A Korean adult male computational phantom was constructed based on the current anthropometric and organ volume data of Korean average adult male, and was applied to calculate internal photon dosimetry data. The stylised models of external body, skeleton, and a total of 13 internal organs (brain, gall bladder, heart, kidneys, liver, lungs, pancreas, spleen, stomach, testes, thymus, thyroid and urinary bladder) were redesigned based on the Oak Ridge National Laboratory (ORNL) adult phantom. The height of trunk of the Korean phantom was 8.6% less than that of the ORNL adult phantom, and the volumes of all organs decreased up to 65% (pancreas) except for brain, gall bladder wall and thymus. Specific absorbed fraction (SAF) was calculated using the Korean phantom and Monte Carlo code, and compared with those from the ORNL adult phantom. The SAF of organs in the Korean phantom was overall higher than that from the ORNL adult phantom. This was caused by the smaller organ volume and the shorter inter-organ distance in the Korean phantom. The self SAF was dominantly affected by the difference in organ volume, and the SAF for different source and target organs was more affected by the inter-organ distance than by the organ volume difference. The SAFs of the Korean stylised phantom differ from those of the ORNL phantom by 10-180%. The comparison study of internal dosimetry will be extended to tomographic phantom and electron source in the future. (authors)

  19. Computational methods for reactive transport modeling: A Gibbs energy minimization approach for multiphase equilibrium calculations

    Science.gov (United States)

    Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg

    2016-02-01

    We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
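
    As a minimal illustration of what a Gibbs energy minimization kernel does, the sketch below minimizes the total Gibbs energy of a small ideal-gas system subject to element balance using SciPy's SLSQP solver. The H2/O2/H2O system and the standard chemical potentials are rough illustrative values, not data or algorithms from Reaktoro or GEMS3K.

    import numpy as np
    from scipy.optimize import minimize

    R, T = 8.314, 1000.0                       # J/(mol K), K
    species = ["H2", "O2", "H2O"]
    mu0 = np.array([0.0, 0.0, -192_000.0])     # rough standard chemical potentials at 1000 K (J/mol)
    A = np.array([[2, 0, 2],                   # H balance
                  [0, 2, 1]])                  # O balance
    b = A @ np.array([1.0, 0.6, 0.0])          # element amounts from 1 mol H2 + 0.6 mol O2

    def gibbs(n):
        """Total Gibbs energy of one ideal-gas phase with mole amounts n."""
        n = np.maximum(n, 1e-12)               # keep the logarithms finite
        return float(np.sum(n * (mu0 + R * T * np.log(n / n.sum()))))

    res = minimize(gibbs, x0=np.full(3, 0.3), method="SLSQP",
                   bounds=[(1e-12, None)] * 3,
                   constraints={"type": "eq", "fun": lambda n: A @ n - b})
    for s, n in zip(species, res.x):
        print(f"{s:4s} {n:.4f} mol")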

  20. VVER 1000 SBO calculations with pressuriser relief valve stuck open with ASTEC computer code

    International Nuclear Information System (INIS)

    Highlights: ► We modelled the ASTEC input file for the accident scenario (SBO) and focused the analyses on the behaviour of core degradation. ► We assumed opening and sticking open of the pressurizer relief valve during the SBO scenario. ► ASTEC v1.3.2 has been used as a reference code for the comparison study with the new version of the ASTEC code. - Abstract: The objective of this paper is to present the results obtained from performing the calculations with the ASTEC computer code for the source term evaluation of a specific severe accident transient. The calculations have been performed with the new version of ASTEC. The ASTEC V2 code version is released by the French IRSN (Institut de Radioprotection et de Sûreté Nucléaire) and Gesellschaft für Anlagen- und Reaktorsicherheit (GRS), Germany. This investigation has been performed in the framework of the SARNET2 project (under the Euratom 7th framework program) by the Institute for Nuclear Research and Nuclear Energy – Bulgarian Academy of Sciences (INRNE-BAS).

  1. Development of a computer code for shielding calculation in X-ray facilities

    International Nuclear Information System (INIS)

    The construction of an effective barrier against the ionizing radiation present in X-ray rooms requires consideration of many variables. The methodology used for specifying the thickness of primary and secondary shielding of a traditional X-ray room considers the following factors: use factor, occupancy factor, distance between the source and the wall, workload, air kerma, and distance between the patient and the receptor. With these data it was possible to develop a computer program that identifies and uses these variables in functions obtained through regressions of the graphs provided in NCRP Report No. 147 (Structural Shielding Design for Medical X-Ray Imaging Facilities) to calculate the shielding of the room walls as well as the walls of the darkroom and adjacent areas. With this methodology in place, the program was validated by comparing its results with a base case provided by that report. The calculated thicknesses are given for various materials such as steel, wood and concrete. After validation, the program was applied to a real case of a radiographic room, whose visual construction was done with the help of software used for modelling interiors and exteriors. The barrier calculation program resulted in a user-friendly tool for planning radiographic rooms that comply with the limits established by CNEN-NN-3:01, published in September 2011

  2. An Examination of the Performance of Parallel Calculation of the Radiation Integral on a Beowulf-Class Computer

    Science.gov (United States)

    Katz, D.; Cwik, T.; Sterling, T.

    1998-01-01

    This paper uses the parallel calculation of the radiation integral for examination of performance and compiler issues on a Beowulf-class computer. This type of computer, built from mass-market, commodity, off-the-shelf components, has limited communications performance and therefore also has a limited regime of codes for which it is suitable.

  3. Electron-impact calculations of near-neutral atomic systems utilising Petascale computer architectures

    Science.gov (United States)

    Ballance, Connor

    2013-05-01

    Over the last couple of decades, a number of advanced non-perturbative approaches such as the R-matrix, TDCC and CCC methods have made great strides in terms of improved target representation and investigating fundamental 2-4 electron problems. However, for the electron-impact excitation of near-neutral species or complicated open-shell atomic systems we are forced to make certain compromises in terms of the atomic structure and/or the number of channels included in the close-coupling expansion of the subsequent scattering calculation. The availability of modern supercomputing architectures with hundreds of thousands of cores, and the emergence of new opportunities through GPU usage, offer one possibility to address some of these issues. To effectively harness this computational power will require significant revision of the existing code structures. I shall discuss some effective strategies within a non-relativistic and relativistic R-matrix framework using the examples detailed below. The goal is to extend existing R-matrix methods from 1-2 thousand close coupled channels to 10,000 channels. With the construction of the ITER experiment in Cadarache, which will have Tungsten plasma-facing components, there is an urgent diagnostic need for the collisional rates for the near-neutral ion stages. In particular, spectroscopic diagnostics of impurity influx require accurate electron-impact excitation and ionisation as well as a good target representation. There have been only a few non-perturbative collisional calculations for this system, and the open-f shell ion stages provide a daunting challenge even for perturbative approaches. I shall present non-perturbative results for the excitation and ionisation of W3+ and illustrate how these fundamental calculations can be integrated into a meaningful diagnostic for the ITER device. We acknowledge support from DoE fusion.

  4. The determination of surface of powders by BET method using nitrogen and krypton with computer calculation of the results

    International Nuclear Information System (INIS)

    A computer program written in FORTRAN for calculating the final results of specific surface analysis based on the BET theory is described. Two gases, nitrogen and krypton, were used. A technical description of the measuring apparatus is presented, as well as the theoretical basis of the calculations, together with a statistical analysis of the results for uranium compound powders. (author)
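
    The BET evaluation such a program automates can be sketched compactly: fit the linearized BET equation P/(v(P0-P)) = 1/(v_m c) + ((c-1)/(v_m c))(P/P0) and convert the monolayer volume v_m to a specific surface area. The adsorption data and sample mass below are invented for illustration; only the nitrogen cross-sectional area is a commonly quoted literature value.

    import numpy as np

    N_A, V_MOLAR = 6.022e23, 22_414.0          # molecules/mol, cm^3(STP)/mol
    SIGMA_N2 = 0.162e-18                       # m^2 per adsorbed N2 molecule (commonly used value)

    def bet_surface_area(p_rel, v_ads, sample_mass_g, sigma=SIGMA_N2):
        """p_rel: relative pressures P/P0 (typically 0.05-0.30), v_ads: adsorbed volume in cm^3 STP."""
        y = p_rel / (v_ads * (1.0 - p_rel))    # linearized BET ordinate
        slope, intercept = np.polyfit(p_rel, y, 1)
        v_m = 1.0 / (slope + intercept)        # monolayer capacity, cm^3 STP
        c = slope / intercept + 1.0            # BET constant
        area_m2 = v_m / V_MOLAR * N_A * sigma  # total surface area, m^2
        return area_m2 / sample_mass_g, c

    # Synthetic isotherm following the BET form with v_m = 2.0 cm^3 and c = 100.
    p = np.linspace(0.05, 0.30, 6)
    v = 2.0 * 100 * p / ((1 - p) * (1 + (100 - 1) * p))
    s, c = bet_surface_area(p, v, sample_mass_g=1.5)
    print(f"specific surface ~ {s:.2f} m^2/g, BET constant c ~ {c:.0f}")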

  5. Development of a computer code for calculating the steady super/hypersonic inviscid flow around real configurations. Volume 1: Computational technique

    Science.gov (United States)

    Marconi, F.; Salas, M.; Yaeger, L.

    1976-01-01

    A numerical procedure has been developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second order accurate finite difference scheme is used to integrate the three dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.

  6. Employment of a computer EC 1010 to calculate dosimetric parameters of irradiation procedures during β-beam therapy

    International Nuclear Information System (INIS)

    A small-size computer EC 1010 is proposed for the calculation of dosimetric parameters of irradiation procedures on β-beam therapeutic units. A specially designed program is intended for the calculation of dosimetric parameters for different methods of moving and static irradiation, taking into account tissue heterogeneity: multifield static irradiation, multizone rotation irradiation, and irradiation using dose field forming devices (V-shaped filters, edge blocks, a grid diaphragm). The computation of output parameters according to each preset program of irradiation takes no more than 1 min. The use of the computer EC 1010 for the calculation of dosimetric parameters of irradiation procedures gives an opportunity to reduce considerably the calculation time, to avoid possible errors and to simplify the drawing up of documents

  7. Computer calculation of the Van Vleck second moment for materials with internal rotation of spin groups

    Science.gov (United States)

    Goc, Roman

    2004-09-01

    This paper describes m2rc3, a program that calculates Van Vleck second moments for solids with internal rotation of molecules, ions or their structural parts. Only rotations about C3 axes of symmetry are allowed, but up to 15 axes of rotation per crystallographic unit cell are permitted. The program is very useful in interpreting NMR measurements in solids. Program summary: Title of the program: m2rc3. Catalogue number: ADUC. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUC. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. License provisions: none. Computers: Cray SV1, Cray T3E-900, PCs. Installation: Poznań Supercomputing and Networking Center (http://www.man.poznan.pl/pcss/public/main/index.html) and Faculty of Physics, A. Mickiewicz University, Poznań, Poland (http://www.amu.edu.pl/welcome.html.en). Operating system under which the program has been tested: UNICOS ver. 10.0.0.6 on Cray SV1; UNICOS/mk on Cray T3E-900; Windows 98 and Windows XP on PCs. Programming language: FORTRAN 90. No. of lines in distributed program, including test data, etc.: 757. No. of bytes in distributed program, including test data, etc.: 9730. Distribution format: tar.gz. Nature of physical problem: The NMR second moment reflects the strength of the nuclear magnetic dipole-dipole interaction in solids. This value can be extracted from the appropriate experiment and can be calculated on the basis of the Van Vleck formula. The internal rotation of molecules or their parts averages this interaction, decreasing the measured value of the NMR second moment. The analysis of internal dynamics based on NMR second moment measurements is as follows. The second moment is measured at different temperatures. On the other hand, it is also calculated for different models and frequencies of this motion. Comparison of experimental and calculated values permits the building of the most probable model of internal dynamics in the studied material. The program described
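
    As a minimal illustration of the rigid-lattice sum that precedes any motional averaging, the sketch below evaluates the powder-average Van Vleck second moment for like spins, M2 = (3/5)(mu0/4pi)^2 gamma^4 hbar^2 I(I+1) <sum r^-6>. The simple-cubic proton lattice is a toy structure, not an input format used by m2rc3.

    import numpy as np

    MU0_4PI, HBAR = 1e-7, 1.054571e-34         # SI units
    GAMMA_H, I_H = 2.675222e8, 0.5             # proton gyromagnetic ratio (rad s^-1 T^-1), spin

    def second_moment(positions_m):
        """Powder-average rigid-lattice second moment (rad^2 s^-2) for like spins."""
        r = positions_m[:, None, :] - positions_m[None, :, :]
        d2 = np.sum(r * r, axis=-1)
        np.fill_diagonal(d2, np.inf)           # exclude self-terms (inf**-3 -> 0)
        inv_r6 = np.mean(np.sum(d2**-3, axis=1))
        return 0.6 * MU0_4PI**2 * GAMMA_H**4 * HBAR**2 * I_H * (I_H + 1) * inv_r6

    # Toy structure: a 5x5x5 simple cubic lattice of protons with 2.5 angstrom spacing.
    g = np.arange(5) * 2.5e-10
    pos = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
    m2 = second_moment(pos)
    print(f"M2 ~ {m2:.3e} rad^2 s^-2  (~{m2 / (2 * np.pi * 1e3)**2:.1f} kHz^2)")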

  8. High-speed algorithm for calculating the neutron field in a reactor when working in dialog mode with a computer

    International Nuclear Information System (INIS)

    The large-scale construction of atomic power stations results in a need for trainers to instruct power-station personnel. The present work considers one problem of developing training computer software, associated with the development of a high-speed algorithm for calculating the neutron field after a control-rod (CR) shift by the operator. The case considered here is that in which training units are developed on the basis of small computers of SM-2 type, which fall significantly short of the BESM-6 and EC-type computers used for the design calculations in terms of speed and memory capacity. Depending on the apparatus for solving the criticality problem, in a two-dimensional single-group approximation, the physical-calculation programs require ∼ 1 min of machine time on a BESM-6 computer, which translates to ∼ 10 min on an SM-2 machine. In practice, this time is even longer, since ultimately it is necessary to determine not the effective multiplication factor K{sub ef}, but rather the local perturbations of the emergency-control (EC) system (to reach criticality) and the change in the neutron field on shifting the CR and the EC rods. This long time means that it is very problematic to use physical-calculation programs to work in dialog mode with a computer. The algorithm presented below allows the neutron field following a shift of the CR and EC rods to be calculated in a few seconds on a BESM-6 computer (tens of seconds on an SM-2 machine). This high speed may be achieved as a result of the preliminary calculation of the influence function (IF) for each CR. The IF may be calculated at high speed on a computer. Then it is stored in the external memory (EM) and, where necessary, used as the initial information
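
    The influence-function idea lends itself to a very small sketch: once per-rod influence functions have been precomputed and stored, the interactive flux update is just a linear combination. In the Python illustration below the mesh, base flux and influence functions are synthetic placeholders, not reactor data or the article's actual algorithm.

    import numpy as np

    NX = NY = 30
    phi0 = np.ones((NX, NY))                              # base two-dimensional flux map

    def make_influence_function(rod_x, rod_y, width=4.0):
        """Synthetic stand-in for a precomputed IF: local flux depression per unit insertion."""
        x, y = np.mgrid[0:NX, 0:NY]
        return -np.exp(-((x - rod_x)**2 + (y - rod_y)**2) / (2 * width**2))

    rods = {"CR1": (8, 8), "CR2": (20, 12), "EC": (15, 22)}
    influence = {name: make_influence_function(*pos) for name, pos in rods.items()}

    def flux_after_shift(insertions):
        """Fast update: phi ~ phi0 + sum_i delta_i * IF_i, clipped to stay non-negative."""
        phi = phi0 + sum(delta * influence[name] for name, delta in insertions.items())
        return np.clip(phi, 0.0, None)

    phi = flux_after_shift({"CR1": 0.6, "EC": 0.3})       # operator moves two rods
    print("min/max relative flux:", float(phi.min()), float(phi.max()))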

  9. Interpolation method for calculation of computed tomography dose from angular varying tube current

    International Nuclear Information System (INIS)

    The scope and magnitude of the radiation dose from computed tomography (CT) examinations have led to increased scrutiny and a focus on accurate dose tracking. The use of tube current modulation (TCM) complicates dose tracking by generating unique scans that are specific to the patient. Three methods of estimating the radiation dose from a CT examination that uses TCM are compared: using the average current for an entire scan, using the average current for each slice in the scan, and using an estimation of the angular variation of the dose contribution. To determine the impact of TCM on the radiation dose received, a set of angular weighting functions for each tissue of the body is derived by fitting a function to the relative dose contributions tabulated for the four principal exposure projections. This weighting function is applied to the angular tube current function to determine the organ dose contributions from a single rotation. Since the angular tube current function is not typically known, a method for estimating that function is also presented. The organ doses calculated using these three methods are compared to simulations that explicitly include the estimated TCM function. (authors)
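
    A rough sketch of the third method, weighting an estimated angular tube-current profile with a per-organ angular weighting function fitted through the four principal projections, is given below. The cosine-series weighting, the dose-per-mAs constant and the modulated current profile are illustrative assumptions only, not the authors' fitted functions.

    import numpy as np

    def angular_weight(theta, ap, rl, pa, ll):
        """Two-term cosine series through the relative dose contributions of the four
        principal projections (theta = 0: AP, pi/2: right lateral, pi: PA, 3*pi/2: left lateral)."""
        a0 = (ap + rl + pa + ll) / 4.0
        return a0 + (ap - pa) / 2.0 * np.cos(theta) + (rl - ll) / 2.0 * np.sin(theta)

    def organ_dose_per_rotation(theta, tube_current_mA, rel_contrib, dose_per_mAs=7e-3,
                                rotation_time_s=0.5):
        """Organ dose (mGy) for one rotation: the angular current profile weighted by
        the organ's angular weighting function and converted via an assumed mGy/mAs."""
        w = angular_weight(theta, *rel_contrib)
        mAs_per_rad = tube_current_mA * rotation_time_s / (2.0 * np.pi)
        return dose_per_mAs * np.trapz(w * mAs_per_rad, theta)

    theta = np.linspace(0.0, 2.0 * np.pi, 361)
    current = 200.0 + 80.0 * np.cos(2.0 * theta)          # assumed estimated TCM profile
    dose = organ_dose_per_rotation(theta, current, rel_contrib=(1.3, 0.9, 0.7, 0.9))
    print(f"estimated organ dose per rotation ~ {dose:.2f} mGy")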

  10. Problems on design of computer-generated holograms for testing aspheric surfaces: principle and calculation

    Institute of Scientific and Technical Information of China (English)

    Zhishan Gao; Meimei Kong; Rihong Zhu; Lei Chen

    2007-01-01

    Interferometric optical testing using computer-generated holograms (CGHs) has provided an approach to highly accurate measurement of aspheric surfaces. When designing CGH null correctors, we should make them with as small an aperture and as low a spatial frequency as possible, and with no zero slope of phase except at the center, in order to ensure a low risk of substrate figure error and feasibility of fabrication. On the basis of classical optics, a set of equations for calculating the phase function of the CGH is obtained. These equations show the dependence of the aperture and spatial frequency on the axial distance of the CGH from the tested aspheric surface. We also simulate the optical path difference error of the CGH relative to the accuracy of controlling the laser spot during fabrication. Meanwhile, we discuss the constraints used to avoid zero slope of phase except at the center and give a design result of the CGH for the tested aspheric surface. The results ensure the feasibility of designing a useful CGH for testing aspheric surfaces.

  11. Mathematical model and computer programme for theoretical calculation of calibration curves of neutron soil moisture probes with highly effective counters

    International Nuclear Information System (INIS)

    A mathematical model, based on three-group theory, for the theoretical calculation by computer of the calibration curves of neutron soil moisture probes with highly effective counters is described. Methods for experimental correction of the mathematical model are discussed and proposed. The computer programme described allows the calibration of neutron probes with highly or less effective counters and with central or end geometry, with or without linearization of the calibration curve. The use of two calculation variants and the printing of output data make the programme suitable not only for calibration but also for other research. The separate data inputs for soil and probe temperature allow analysis of the temperature influence. The computer programme and calculation examples are given. (author)

  12. HERMES: a personal-computer program for calculation of the Fermi-Gas Model parameters of nuclear level density

    International Nuclear Information System (INIS)

    A computer program, HERMES, that provides the quantities usually needed in nuclear level density calculations, has been developed. The applied model is the standard Fermi Gas Model (FGM) in which pairing correlations and shell effects are opportunely taken into account. The effects of additional nuclear structure properties together with their inclusion into the computer program are also considered. Using HERMES, a level density parameter systematics has been constructed for mass range 41 ≤ A ≤ 253. (author)
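
    As a minimal illustration of the quantities such a code tabulates, the sketch below evaluates a standard Fermi Gas Model level density with a pairing correction and a spin cut-off parameter. The a ≈ A/8 MeV^-1 estimate and the 12/sqrt(A) pairing term are textbook rules of thumb, not the systematics derived in the paper.

    import math

    def fgm_level_density(E_MeV, A, Z, a=None):
        """Total level density (1/MeV) at excitation energy E for nucleus (A, Z)."""
        N = A - Z
        delta = (12.0 / math.sqrt(A)) * ((Z % 2 == 0) + (N % 2 == 0))  # pairing correction (MeV)
        U = E_MeV - delta
        if U <= 0:
            return 0.0
        a = a if a is not None else A / 8.0                 # level density parameter, MeV^-1
        sigma2 = 0.0888 * math.sqrt(a * U) * A**(2.0 / 3.0)  # spin cut-off parameter
        return math.exp(2.0 * math.sqrt(a * U)) / (
            12.0 * math.sqrt(2.0 * sigma2) * a**0.25 * U**1.25)

    # Example: 56Fe at 10 MeV excitation energy.
    print(f"rho(10 MeV, 56Fe) ~ {fgm_level_density(10.0, A=56, Z=26):.3e} per MeV")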

  13. Modifications of the SEPHIS computer code for calculating the Purex solvent extraction system

    International Nuclear Information System (INIS)

    The SEPHIS computer program was developed to simulate countercurrent solvent extraction. This report describes modifications to the program which result in an improved fit to experimental data, a decrease in computer storage requirements, and a decrease in execution time. Methods for applying the computer program to practical solvent extraction problems are explained

  14. Plutonium Usage and Management in PWR and Computing and Physical Methods to Calculate Pu

    International Nuclear Information System (INIS)

    The main limitations due to the enhancement of the plutonium content are related to the coolant void effect: as the spectrum becomes faster, the neutron flux in the thermal region tends towards zero and is concentrated in the region from 10 keV to 1 MeV. Thus, all captures by Pu240 and Pu242 in the thermal and epithermal resonances disappear and the Pu240 and Pu242 contributions to the void effect become positive. The higher the Pu content and the poorer the Pu quality, the larger the void effect. - Core control in nominal or transient conditions: Pu enrichment leads to a decrease in βeff and in the efficiency of soluble boron and control rods. Also, the Doppler effect tends to decrease when Pu replaces U, so that in case of transients the core could diverge again if the control is not effective enough. - As for the voiding effect, the plutonium degradation and the Pu240 and Pu242 accumulation after multiple recycling lead to spectrum hardening and to a decrease in control. - One solution would be to use enriched boron in the soluble boron and shutdown rods. - In this paper I discuss the advanced computing and physical methods used to calculate Pu inside nuclear reactors and gloveboxes, show the different solutions that can be used to overcome the difficulties affecting safety parameters and reactor performance, and analyse the consequences of plutonium management on the whole fuel cycle, such as raw material savings and the fraction of nuclear electric power involved in Pu management. This is done through two types of scenario, one involving a low fraction of the nuclear park dedicated to plutonium management, the other involving dilution of the plutonium across the whole nuclear park. (author)

  15. RAP-3A Computer code for thermal and hydraulic calculations in steady state conditions for fuel element clusters

    International Nuclear Information System (INIS)

    The RAP-3A computer code is designed for calculating the main steady state thermo-hydraulic parameters of multirod fuel clusters with liquid metal cooling. The programme provides a double-precision computation of temperature and axial enthalpy distributions, pressure losses and axial heat flux distributions in fuel clusters before boiling conditions occur. Physical and mathematical models as well as a sample problem are presented. The code is written in FORTRAN-4 and runs on an IBM-370/135 computer

  16. Fast calculation of computer-generated-hologram on AMD HD5000 series GPU and OpenCL

    CERN Document Server

    Shimobaba, Tomoyoshi; Masuda, Nobuyuki; Ichihashi, Yasuyuki; Takada, Naoki

    2010-01-01

    In this paper, we report fast calculation of a computer-generated hologram using a new architecture of the HD5000 series GPU (RV870) made by AMD and its new software development environment, OpenCL. Using an RV870 GPU and OpenCL, we can calculate a 1,920 × 1,024 resolution CGH from a 3D object consisting of 1,024 points in 30 milliseconds. This is approximately two times faster than the calculation speed of a GPU made by NVIDIA.
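
    The kernel that such GPU codes accelerate is compact enough to sketch: every hologram pixel accumulates the Fresnel-approximated phase of every object point. The vectorized NumPy version below stands in for the OpenCL kernel; the wavelength, pixel pitch, resolution and the random object are assumptions chosen so that the example runs quickly on a CPU.

    import numpy as np

    WAVELEN, PITCH = 633e-9, 10e-6

    def cgh_point_source(points, ny=256, nx=256):
        """points: array of (x, y, z, amplitude) rows; returns the real-valued fringe pattern."""
        y, x = np.mgrid[0:ny, 0:nx]
        x = (x - nx / 2) * PITCH
        y = (y - ny / 2) * PITCH
        holo = np.zeros((ny, nx))
        for px, py, pz, amp in points:
            # Fresnel approximation of the spherical-wave phase from one object point
            phase = np.pi / (WAVELEN * pz) * ((x - px)**2 + (y - py)**2)
            holo += amp * np.cos(phase)
        return holo

    rng = np.random.default_rng(0)
    pts = np.column_stack([rng.uniform(-0.5e-3, 0.5e-3, (64, 2)),
                           rng.uniform(0.05, 0.06, 64), np.ones(64)])
    h = cgh_point_source(pts)
    print("CGH computed, shape", h.shape, "fringe range", float(h.min()), float(h.max()))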

  17. BROHR and SYSFIT - a system of computer codes for the calculation of the beam transport at electrostatic accelerators

    International Nuclear Information System (INIS)

    The computer codes BROHR and SYSFIT are presented. Both codes are based on the first-order matrix formalism of ion optics. By means of the code BROHR, the trajectories of ions and electrons inside any inclined-field accelerating tube can be calculated. The influence of the stripping process at tandem accelerators is included by changing the mass and charge of the ions and by increasing the beam emittance. The code SYSFIT is used for the calculation of arbitrary beam transport systems and of the transported beam. Specially requested imaging properties can be realized by parameter variation. Calculated examples are given for both codes. (author)
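
    The first-order matrix formalism both codes are built on reduces, in one transverse plane, to multiplying 2x2 element matrices. The drift/thin-lens example below is generic textbook ion optics, not a model of any particular accelerator or of the BROHR/SYSFIT input format.

    import numpy as np

    def drift(L):                 # field-free drift of length L (m)
        return np.array([[1.0, L], [0.0, 1.0]])

    def thin_lens(f):             # thin-lens (quadrupole) focusing with focal length f (m)
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    def beamline(*elements):      # ordered matrix product, first element applied first
        M = np.eye(2)
        for el in elements:
            M = el @ M
        return M

    M = beamline(drift(1.0), thin_lens(0.5), drift(0.75))
    x0 = np.array([2e-3, 1e-3])   # initial (x in m, x' in rad)
    print("total transfer matrix:\n", M)
    print("final (x, x'):", M @ x0)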

  18. Computer Calculations of Eddy-Current Power Loss in Rotating Titanium Wheels and Rims in Localized Axial Magnetic Fields

    Energy Technology Data Exchange (ETDEWEB)

    Mayhall, D J; Stein, W; Gronberg, J B

    2006-05-15

    We have performed preliminary computer-based, transient, magnetostatic calculations of the eddy-current power loss in rotating titanium-alloy and aluminum wheels and wheel rims in the predominantly axially-directed, steady magnetic fields of two small, solenoidal coils. These calculations have been undertaken to assess the eddy-current power loss in various possible International Linear Collider (ILC) positron target wheels. They have also been done to validate the simulation code module against known results published in the literature. The commercially available software package used in these calculations is the Maxwell 3D, Version 10, Transient Module from the Ansoft Corporation.

  19. A linear integral-equation-based computer code for self-amplified spontaneous emission calculations of free-electron lasers

    International Nuclear Information System (INIS)

    The linear integral-equation-based computer code 'Roger Oleg Nikolai' (RON), which was recently developed at Argonne National Laboratory, was used to calculate the self-amplified spontaneous emission (SASE) performance of the free-electron laser (FEL) being built at Argonne. Signal growth calculations under different conditions were used to estimate tolerances of actual design parameters and to estimate optimal length of the break sections between undulator segments. Explicit calculation of the radiation field was added recently. The measured magnetic fields of five undulators were used to calculate the gain for the Argonne FEL. The result indicates that the real undulators for the Argonne FEL (the effect of magnetic field errors alone) will not significantly degrade the FEL performance. The capability to calculate the small-signal gain for an FEL-oscillator is also demonstrated

  20. Criticality computer calculations on the conditioning of Rossendorf fuel elements in transport and storage flasks CASTOR MTR 2

    International Nuclear Information System (INIS)

    The condition of criticality safety (keff<0.95) is fulfilled in all considered cases. Since all cases are undermoderated in the event of cavity flooding, the limit on the cavity volume in the fuel area, fixed by the construction, is essential for this result. The computer calculations were performed with the Monte Carlo code MCNP-3B. (orig./HP)

  1. ZOCO V - a computer code for the calculation of time-dependent spatial pressure distribution in reactor containments

    International Nuclear Information System (INIS)

    ZOCO V is a computer code which can calculate the time- and space- dependent pressure distribution in containments of water-cooled nuclear power reactors (both full pressure containments and pressure suppression systems) following a loss-of-coolant accident, caused by the rupture of a main coolant or steam pipe

  2. Meso-microstructural computational simulation of the hydrogen permeation test to calculate intergranular, grain boundary and effective diffusivities

    Energy Technology Data Exchange (ETDEWEB)

    Jothi, S., E-mail: s.jothi@swansea.ac.uk [College of Engineering, Swansea University, Singleton Park, Swansea SA2 8PP (United Kingdom); Winzer, N. [Fraunhofer Institute for Mechanics of Materials IWM, Wöhlerstraße 11, 79108 Freiburg (Germany); Croft, T.N.; Brown, S.G.R. [College of Engineering, Swansea University, Singleton Park, Swansea SA2 8PP (United Kingdom)

    2015-10-05

    Highlights: • Characterized the polycrystalline nickel microstructure using EBSD analysis. • Developed a meso-microstructural model based on the real microstructure. • Calculated the effective diffusivity using an experimental electrochemical permeation test. • Calculated the intergranular diffusivity of hydrogen using computational FE simulation. • Validated the computational simulation results against experimental results. - Abstract: Hydrogen induced intergranular embrittlement has been identified as a cause of failure of aerospace components such as combustion chambers made from electrodeposited polycrystalline nickel. Accurate computational analysis of this process requires knowledge of the differential in hydrogen transport in the intergranular and intragranular regions. The effective diffusion coefficient of hydrogen may be measured experimentally, though experimental measurement of the intergranular grain boundary diffusion coefficient of hydrogen requires significant effort. Therefore, an approach to calculate the intergranular GB hydrogen diffusivity using finite element analysis was developed. The effective diffusivity of hydrogen in polycrystalline nickel was measured using electrochemical permeation tests. Data from electron backscatter diffraction measurements were used to construct microstructural representative volume elements including details of grain size and shape and volume fraction of grains and grain boundaries. A Python optimization code has been developed for the ABAQUS environment to calculate the unknown grain boundary diffusivity.
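
    As a loose illustration of the fitting idea, the sketch below adjusts an unknown grain-boundary diffusivity until a model effective diffusivity matches the measured one. A simple rule-of-mixtures expression stands in for the ABAQUS finite-element model of the representative volume element, and all numerical values are invented for illustration, not taken from the paper.

        from scipy.optimize import minimize_scalar

        D_eff_measured = 2.0e-13   # m^2/s, from the electrochemical permeation test (assumed)
        D_lattice = 1.0e-13        # m^2/s, intragranular (lattice) diffusivity (assumed)
        f_gb = 0.05                # grain-boundary volume fraction from the EBSD-based RVE (assumed)

        def d_eff_model(D_gb):
            # rule-of-mixtures surrogate for the finite-element homogenisation step
            return f_gb * D_gb + (1.0 - f_gb) * D_lattice

        def objective(D_gb):
            return (d_eff_model(D_gb) - D_eff_measured) ** 2

        result = minimize_scalar(objective, bounds=(D_lattice, 1.0e-9), method="bounded")
        print("fitted grain-boundary diffusivity:", result.x, "m^2/s")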

  3. Meso-microstructural computational simulation of the hydrogen permeation test to calculate intergranular, grain boundary and effective diffusivities

    International Nuclear Information System (INIS)

    Highlights: • Characterized the polycrystalline nickel microstructure using EBSD analysis. • Developed a meso-microstructural model based on the real microstructure. • Calculated the effective diffusivity using an experimental electrochemical permeation test. • Calculated the intergranular diffusivity of hydrogen using computational FE simulation. • Validated the computational simulation results against experimental results. - Abstract: Hydrogen induced intergranular embrittlement has been identified as a cause of failure of aerospace components such as combustion chambers made from electrodeposited polycrystalline nickel. Accurate computational analysis of this process requires knowledge of the differential in hydrogen transport in the intergranular and intragranular regions. The effective diffusion coefficient of hydrogen may be measured experimentally, though experimental measurement of the intergranular grain boundary diffusion coefficient of hydrogen requires significant effort. Therefore, an approach to calculate the intergranular GB hydrogen diffusivity using finite element analysis was developed. The effective diffusivity of hydrogen in polycrystalline nickel was measured using electrochemical permeation tests. Data from electron backscatter diffraction measurements were used to construct microstructural representative volume elements including details of grain size and shape and volume fraction of grains and grain boundaries. A Python optimization code has been developed for the ABAQUS environment to calculate the unknown grain boundary diffusivity.

  4. Calculation of boron curve and power distributions for a PWR reactor, using LEOPARD and CITATION computer codes

    International Nuclear Information System (INIS)

    A numerical analysis of some neutronic parameters calculated by the LEOPARD computer code, compared with literature data, is presented. A computer code (LEOCIT), a modified version of LEOPARD, was developed with subroutines that prepare cross-section libraries for 1, 2 or 4 energy groups, writing them on tape or on disk in a special format intended for direct use by the CITATION computer code. Finally, a simulation of the first cycle of Angra I burnup is done with CITATION, modelling 1/4 of the core in XY geometry and calculating the soluble boron curve and the pin-by-pin power distribution for two energy groups. The most relevant results are compared with those supplied by Westinghouse, CNEN and FURNAS, and some recommendations aiming to perfect the developed system are made. (E.G)

  5. A computer program for calculating relative-transmissivity input arrays to aid model calibration

    Science.gov (United States)

    Weiss, Emanuel

    1982-01-01

    A program is documented that calculates a transmissivity distribution for input to a digital ground-water flow model. Factors that are taken into account in the calculation are: aquifer thickness, ground-water viscosity and its dependence on temperature and dissolved solids, and permeability and its dependence on overburden pressure. Other factors affecting ground-water flow are indicated. With small changes in the program code, leakance also could be calculated. The purpose of these calculations is to provide a physical basis for efficient calibration, and to extend rational transmissivity trends into areas where model calibration is insensitive to transmissivity values.
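
    For a sense of the physics being encoded, transmissivity can be written as T = k rho g b / mu, so aquifer thickness, permeability and the temperature dependence of viscosity all enter the relative values used for calibration. The sketch below is a minimal stand-in with an assumed viscosity correlation; the documented program additionally treats dissolved solids and the overburden-pressure dependence of permeability, which are omitted here.

        def water_viscosity(temp_c):
            """Rough empirical water viscosity (Pa*s) versus temperature (deg C); illustrative only."""
            return 2.414e-5 * 10 ** (247.8 / (temp_c + 133.15))

        def relative_transmissivity(permeability, thickness, temp_c, rho=1000.0, g=9.81):
            """T = k * rho * g * b / mu; only relative values matter for calibration trends."""
            mu = water_viscosity(temp_c)
            return permeability * rho * g * thickness / mu

        # warmer water is less viscous, so the same aquifer transmits more readily
        print(relative_transmissivity(1e-13, 50.0, 10.0), relative_transmissivity(1e-13, 50.0, 40.0))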

  6. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    Science.gov (United States)

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  7. Calculation of the density shift and broadening of the transition lines in pionic helium: Computational problems

    Energy Technology Data Exchange (ETDEWEB)

    Bakalov, Dimitar, E-mail: dbakalov@inrne.bas.bg [Bulgarian Academy of Sciences, INRNE (Bulgaria)

    2015-08-15

    The potential energy surface and the computational codes, developed for the evaluation of the density shift and broadening of the spectral lines of laser-induced transitions from metastable states of antiprotonic helium, fail to produce convergent results in the case of pionic helium. We briefly analyze the encountered computational problems and outline possible solutions of the problems.

  8. Structure problems in the analog computation; Problemes de structure dans le calcul analogique

    Energy Technology Data Exchange (ETDEWEB)

    Braffort, P.L. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1957-07-01

    Recent mathematical developments have shown the importance of the elementary structures (algebraic, topological, etc.) that underlie the great domains of classical analysis. Such structures in analog computation are brought into evidence, and possible developments of applied mathematics are discussed. The topological structures of the standard representation of analog schemes, such as summing triangles (adders), integrators, phase inverters and function generators, are also studied. The analog method gives only functions of the variable time as results of its computations, but the course of the computation, for systems including reactive circuits, introduces order structures which are called 'chronological'. Finally, it is shown that the approximation methods of ordinary numerical and digital computation present the same structure as analog computation. This structural analysis permits fruitful comparisons between the several domains of applied mathematics and suggests important new domains of application for the analog method. (M.P.)

  9. A method for calculating regional cerebral blood flow from emission computed tomography of inert gas concentrations

    DEFF Research Database (Denmark)

    Celsis, P; Goldman, T; Henriksen, L; Lassen, N A

    1981-01-01

    Emission tomography of positron or gamma emitting inert gases allows calculation of regional cerebral blood flow (rCBF) in cross-sectional slices of human brain. An algorithm is presented for rCBF calculations from a sequence of time averaged tomograms using inhaled 133Xe. The approach is designe...

  10. User's manual for LINEAR, a computer program that calculates the linear characteristics of a gyrotron

    International Nuclear Information System (INIS)

    This program calculates the linear characteristics of a gyrotron. This program is capable of: (1) calculating the starting current or frequency detuning for each gyrotron mode, (2) generating mode spectra, (3) plotting these linear characteristics as a function of device parameters (e.g., beam voltage), and (4) doing the above for any axial rf field profile

  11. Calculation of inviscid surface streamlines and heat transfer on shuttle type configurations. Part 2: Description of computer program

    Science.gov (United States)

    Dejarnette, F. R.; Jones, M. H.

    1971-01-01

    A description of the computer program used for heating rate calculation for blunt bodies in hypersonic flow is given. The main program and each subprogram are described by defining the pertinent symbols involved and presenting a detailed flow diagram and complete computer program listing. Input and output parameters are discussed in detail. Listings are given for the computation of heating rates on (1) a blunted 15 deg half-angle cone at 20 deg incidence and Mach 10.6, (2) a blunted 70 deg slab delta wing at 10 deg incidence and Mach 8, and (3) the HL-10 lifting body at 20 deg incidence and Mach 10. In addition, the computer program output for two streamlines on the blunted 15 deg half-angle cone is listed. For Part 1, see N71-36186.

  12. Radcalc: A computer program to calculate the radiolytic production of hydrogen gas from radioactive wastes in packages

    International Nuclear Information System (INIS)

    Radcalc for Windows is a menu-driven Microsoft Windows-compatible computer code that calculates the radiolytic production of hydrogen gas in high- and low-level radioactive waste. In addition, the code also determines US Department of Transportation (DOT) transportation classifications, calculates the activities of parent and daughter isotopes for a specified period of time, calculates decay heat, and calculates pressure buildup from the production of hydrogen gas in a given package geometry. Radcalc for Windows was developed by Packaging Engineering, Transportation and Packaging, Westinghouse Hanford Company, Richland, Washington, for the US Department of Energy (DOE). It is available from Packaging Engineering and is issued with a user's manual and a technical manual. The code has been verified and validated.
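
    For a sense of the underlying arithmetic, radiolytic hydrogen generation is commonly estimated from a G-value (molecules of H2 per 100 eV of absorbed energy) applied to the decay energy deposited in the waste matrix. The sketch below shows only that bookkeeping; the decay heat, absorbed fraction and G-value are illustrative assumptions and not Radcalc defaults, and Radcalc itself layers isotope decay, DOT classification and pressure-buildup logic on top of this.

        AVOGADRO = 6.022e23
        EV_PER_JOULE = 1.0 / 1.602e-19

        def hydrogen_generation_rate(decay_heat_w, absorbed_fraction, g_value):
            """Moles of H2 per second from energy absorbed in hydrogenous material.

            g_value is in molecules of H2 per 100 eV of absorbed energy.
            """
            ev_per_second = decay_heat_w * absorbed_fraction * EV_PER_JOULE
            molecules_per_second = g_value * ev_per_second / 100.0
            return molecules_per_second / AVOGADRO

        # e.g. 5 W of decay heat, 30 % absorbed in hydrogenous material, G(H2) = 0.45
        rate = hydrogen_generation_rate(5.0, 0.3, 0.45)
        print(rate * 3.156e7, "mol of H2 per year")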

  13. CADE - A computer programme for the calculation of nuclear cross-sections from the Weisskopf-Ewing theory

    International Nuclear Information System (INIS)

    A computer programme which performs compound nucleus calculations using the Weisskopf-Ewing formalism is described. The programme will calculate the cross-sections for multi-particle emission by treating the process as a series of stages in the cascade. The relevant compound nucleus absorption cross-sections for particle channels are calculated with built-in optical model routines, and gamma ray emission is described by the giant dipole resonance formalism. Several choices for the final nucleus level density formula may be made using the level density routine contained in the programme. The total cross-section for the emission of a particle at any particular stage, is calculated together with the cross-section as a function of energy. The probability of leaving the final nucleus in a state of any particular energy is also obtained. (author)

  14. Post-test calculation of LOFT test L6-5 using the RETRAN-02 computer code

    International Nuclear Information System (INIS)

    This paper discusses a post-test calculation of Loss-of-Fluid Test (LOFT) L6-5, in which a loss of steam generator feedwater flow was simulated. This RETRAN-02 calculation is compared to the L6-5 pretest calculation and accommodates phenomena actually occurring during the test, in order to better understand the test apparatus and, therefore, the capabilities of the computer code used for this application. The RETRAN-02 calculation accordingly employed model changes to improve the characterization of the test. These changes were needed to reflect differences between the advertised pretest initial conditions and the initial conditions which actually occurred at the start of the test, and to reflect differences between the boundary conditions that had been expected to occur during the test and those which actually did occur.

  15. A computationally efficient software application for calculating vibration from underground railways

    International Nuclear Information System (INIS)

    The PiP model is a software application with a user-friendly interface for calculating vibration from underground railways. This paper reports on the software, with a focus on its latest version and the plans for future developments. The software calculates the power spectral density of vibration due to a moving train on floating-slab track, with track irregularity described by typical spectra for tracks in good, average and bad condition. The latest version accounts for a tunnel embedded in a half-space by employing a toolbox developed at K.U. Leuven which calculates Green's functions for a multi-layered half-space.

  16. BAC: A computer program for calculating shielding in buildings against initial radiation

    Science.gov (United States)

    Danielson, G.

    1980-10-01

    Calculation methodology and transmission data for BAC in the event of a nuclear explosion are considered. The shielding factor is the ratio between the radiation dose at one point in the building and the dose in open air. It is separately calculated for neutrons, gamma rays from fission products, and secondary gamma rays. For this calculation, BAC uses data for radiation transmission in concrete. The program is used for fallout shelters and other buildings whose walls and floors/roofs are mostly made of concrete and bricks. Instructions for the program are given, and BAC results are in certain cases compared with those obtained with the Monte Carlo method.

  17. Large scale nuclear structure calculation by Monte Carlo shell model. Frontier of nuclear research by K-computer

    International Nuclear Information System (INIS)

    Strategic program field 5 was started in 2011 for the effective use of the K-computer. In the field of nuclear research, large-scale nuclear structure studies by Monte Carlo shell model calculation are being carried out within the HPCI (High Performance Computing Infrastructure) Consortium. Since its introduction by Mayer and Jensen in 1949, the shell model has succeeded in explaining the magic numbers and has been a very powerful theory. Recently, however, the great progress of nuclear physics at RIBF (RIKEN RI Beam Factory) and elsewhere has made it clear that the familiar magic numbers disappear in unstable nuclei while different ones appear, so the evolution of shell structure has to be considered. In this report the framework and recent results are described. The second section, 'Shell Model Computation and Monte Carlo Shell Model', covers '2.1 Model space and effective interactions', '2.2 Strict diagonalization by the Lanczos algorithm and its limitations' and '2.3 Framework of the Monte Carlo shell model', with a figure showing a calculation example. The third section, 'Structure Exploration of Neutron Excess Nickel Isotopes by the Monte Carlo Shell Model', shows the energy surfaces of 68Ni for the 01+ and 02+ states. In the fourth section, 'Monte Carlo Shell Model Calculation without Assuming a Closed Shell and its Visualization', density distributions in 8Be are shown before and after angular momentum projection. In the fifth section, 'Development of the Monte Carlo Shell Model Program on the K-Computer', the speed-up of the Monte Carlo shell model by parallel computation is shown. Finally it is pointed out that the HPCI program is planned to end in 2015; further magic numbers are expected to be calculated before HPCI terminates. (S. Funahashi)

  18. Use of symbolic computations for calculating logic circuits and specialized processors

    International Nuclear Information System (INIS)

    Some applied problems are considered in which symbolic computations yield exact algebraic expressions describing schematic diagrams of standard logic modules, encoding and decoding devices, various types of complex circuits, and event-selection devices used in high-energy physics experiments. Symbolic computations open new prospects for the complete automation of design work on discrete logic devices, from the specification of tables describing circuit functioning to printed-circuit interconnection or integrated-circuit topology.

  19. TRANS4: a computer code calculation of solid fuel penetration of a concrete barrier

    International Nuclear Information System (INIS)

    The computer code, TRANS4, models the melting and penetration of a solid barrier by a solid disc of fuel following a core disruptive accident. This computer code has been used to model fuel debris penetration of basalt, limestone concrete, basaltic concrete, and magnetite concrete. Sensitivity studies were performed to assess the importance of various properties on the rate of penetration. Comparisons were made with results from the GROWS II code

  20. Statistical model calculations with a double-humped fission barrier GIVAB computer code

    International Nuclear Information System (INIS)

    Neutron and gamma emission probabilities and fission probabilities are computed, taking into account the special feature of the actinide fission barriers with two maxima. Spectra and cross sections are directly deduced from these probabilities. Populations of both wells are followed step by step. For each initial E and J, decay rates are computed and normalized in order to obtain the de-excitation probabilities imposed by the two-humped fission barrier

  1. HTR-2000: Computer program to accompany calculations during reactor operation of HTGR's

    International Nuclear Information System (INIS)

    HTR-2000, developed to accompany the operation of pebble-bed high-temperature reactors with multi-pass fuel circulation by calculation, is closely coupled to the actual operation of the reactor. Using measured nuclear and thermo-hydraulic parameters, a detailed model of pebble flow, and exact information on fuel burnup, loading and discharge, it obtains an excellent simulation of the status of the reactor. The geometry is modelled in three dimensions, so asymmetries in core texture can be taken into account for nuclear and thermo-hydraulic calculations. A continuous simulation was performed during five years of AVR operation. The comparison between calculated and measured data was very satisfactory. In addition, experiments which had been performed at the AVR for re-calculating the control rod worth were simulated. The computational analysis shows that, in the presence of a compensating absorber in the reactor core, the reactivity worth of individual absorbers can be determined by calculation but not by measurement. (orig.)

  2. Fast neutron reaction data calculations with the computer code STAPRE-H

    International Nuclear Information System (INIS)

    A description of the specific features of the STAPRE-H version is given. The model options and the influence of parameters on the calculated results are illustrated, to show how a large body of correlated data can be reproduced accurately. (authors)

  3. Design a computational program to calculate the composition variations of nuclear materials in the reactor operations

    International Nuclear Information System (INIS)

    Highlights: ► The atomic densities of light and heavy materials are calculated. ► The solution is obtained using the Runge–Kutta–Fehlberg method. ► The material depletion is calculated for constant-flux and constant-power conditions. - Abstract: The present work investigates an appropriate way to calculate the variations of nuclide composition in the reactor core during operation. Specific software has been designed for this purpose using C#. The mathematical approach is based on the solution of the Bateman differential equations using a Runge–Kutta–Fehlberg method. Material depletion at constant flux and at constant power can be calculated with this software. The inputs include reactor power, time step, initial and final times, order of the Taylor series used to calculate the time-dependent flux, time unit, core material composition at the initial condition (consisting of light and heavy radioactive materials), acceptable error criterion, decay constants library, cross-section database and calculation type (constant flux or constant power). The atomic densities of light and heavy fission products during reactor operation are obtained with high accuracy as the program outputs. The results from this method, compared with analytical solutions, show good agreement.
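
    As a loose illustration of the numerical approach described, the sketch below integrates a small hypothetical depletion chain at constant flux with SciPy's adaptive Runge-Kutta integrator (RK45, a Dormand-Prince pair, standing in for the Runge-Kutta-Fehlberg method named in the abstract). The chain, decay constants, cross sections and flux are invented for illustration.

        import numpy as np
        from scipy.integrate import solve_ivp

        lam = np.array([1.0e-5, 5.0e-6, 0.0])       # decay constants (1/s), hypothetical chain A -> B -> C
        sigma = np.array([1.0e-24, 2.0e-24, 0.0])   # absorption cross sections (cm^2), hypothetical
        phi = 1.0e14                                # constant neutron flux (n/cm^2/s), hypothetical

        def depletion_rhs(t, N):
            """Bateman-type balance: loss by decay and absorption, gain from decay of the parent."""
            loss = (lam + sigma * phi) * N
            gain = np.zeros_like(N)
            gain[1:] = lam[:-1] * N[:-1]            # each daughter is fed by its parent's decay
            return gain - loss

        N0 = np.array([1.0e22, 0.0, 0.0])           # initial atom densities (atoms/cm^3)
        sol = solve_ivp(depletion_rhs, (0.0, 3.15e7), N0, method="RK45", rtol=1e-8)
        print(sol.y[:, -1])                         # composition after roughly one year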

  4. Coupled Monte Carlo - Discrete ordinates computational scheme for three-dimensional shielding calculations of large and complex nuclear facilities

    International Nuclear Information System (INIS)

    Shielding calculations of advanced nuclear facilities such as accelerator-based neutron sources or fusion devices of the tokamak type are complicated due to their complex geometries and their large dimensions, including bulk shields several metres thick. While the complexity of the geometry in the shielding calculation can hardly be handled by the discrete ordinates method, the deep penetration of radiation through bulk shields is a severe challenge for the Monte Carlo particle transport simulation technique. This work proposes a dedicated computational approach for coupled Monte Carlo - deterministic transport calculations to handle this kind of shielding problem. The Monte Carlo technique is used to simulate the particle generation and transport in the target region, with both complex geometry and reaction physics, and the discrete ordinates method is used to treat the deep-penetration problem in the bulk shield. To enable the coupling of these two different computational methods, a mapping approach has been developed for calculating the discrete ordinates angular flux distribution from the scored data of the Monte Carlo particle tracks crossing a specified surface. The approach has been implemented in an interface program and validated by means of test calculations using a simplified three-dimensional geometric model. Satisfactory agreement was obtained between the angular fluxes calculated by the mapping approach, using the MCNP code for the Monte Carlo calculations, and direct three-dimensional discrete ordinates calculations using the TORT code. In the next step, a complete program system has been developed for coupled three-dimensional Monte Carlo - deterministic transport calculations by integrating the Monte Carlo transport code MCNP, the three-dimensional discrete ordinates code TORT and the mapping interface program. Test calculations with two simple models have been performed to validate the program system by means of comparison calculations using the

  5. Technology in Mathematics Education: A Descriptive Study of the Availability and Uses of Calculators and Computers in Public High School Mathematics Classes in the State of Virginia

    OpenAIRE

    Donald, Jack Bradshaw

    1998-01-01

    The purpose of this descriptive study was to investigate the availability and distribution of calculators and computers for the mathematics classes in public high schools across the State of Virginia; examine professional development activities used by teachers to prepare for the use of calculators and computers in the classroom; explore factors that may guide and influence mathematics teachers in the use of calculators and computers; examine the familiarity and degre...

  6. Computer codes in nuclear safety, radiation transport and dosimetry; Les codes de calcul en radioprotection, radiophysique et dosimetrie

    Energy Technology Data Exchange (ETDEWEB)

    Bordy, J.M.; Kodeli, I.; Menard, St.; Bouchet, J.L.; Renard, F.; Martin, E.; Blazy, L.; Voros, S.; Bochud, F.; Laedermann, J.P.; Beaugelin, K.; Makovicka, L.; Quiot, A.; Vermeersch, F.; Roche, H.; Perrin, M.C.; Laye, F.; Bardies, M.; Struelens, L.; Vanhavere, F.; Gschwind, R.; Fernandez, F.; Quesne, B.; Fritsch, P.; Lamart, St.; Crovisier, Ph.; Leservot, A.; Antoni, R.; Huet, Ch.; Thiam, Ch.; Donadille, L.; Monfort, M.; Diop, Ch.; Ricard, M

    2006-07-01

    The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport, radiation source assessment or dosimetry. The presentations were divided into two sessions: 1) methodology and 2) uses in industrial, medical or research domains. It appears that two different calculation strategies prevail, both based on preliminary Monte Carlo calculations with data storage: first, quick simulations made from a database of particle histories built through a previous Monte Carlo simulation and, secondly, a neural-network approach involving a learning platform generated through a previous Monte Carlo simulation. This document gathers the slides of the presentations.

  7. Emergency Doses (ED) - Revision 3: A calculator code for environmental dose computations

    International Nuclear Information System (INIS)

    The calculator program ED (Emergency Doses) was developed from several HP-41CV calculator programs documented in the report Seven Health Physics Calculator Programs for the HP-41CV, RHO-HS-ST-5P (Rittman 1984). The program was developed to enable estimates of offsite impacts to be made more rapidly and reliably than was possible with the software available for emergency response at that time. ED - Revision 3, documented in this report, revises the inhalation dose model to match that of ICRP 30, and adds simple estimates of the air concentration downwind from a chemical release. In addition, the method for calculating the Pasquill dispersion parameters was revised to match the GENII code within the limitations of a hand-held calculator (e.g., plume rise and building wake effects are not included). The summary report generator for printed output, which had been present in the code from the original version, was eliminated in Revision 3 to make room for the dispersion model, the chemical release portion, and the method of looping back to an input menu until there is no further change. This program runs on the Hewlett-Packard programmable calculators known as the HP-41CV and the HP-41CX. The documentation for ED - Revision 3 includes a guide for users, sample problems, detailed verification tests and results, model descriptions, a code description (with program listing), and an independent peer review. This software is intended to be used by individuals with some training in the use of air transport models. There are some user inputs that require intelligent application of the model to the actual conditions of the accident. The results calculated using ED - Revision 3 are only correct to the extent allowed by the mathematical models. 9 refs., 36 tabs

  8. Calculating additional shielding requirements in diagnostics X-ray departments by computer

    International Nuclear Information System (INIS)

    This report provides an extension of an existing method for calculating the barrier thickness required to reduce the three types of radiation emitted from the source - primary, secondary and leakage radiation - to a specified weekly design limit (MPD). Since each of these three types of radiation is of a different beam quality, with different shielding requirements, NCRP 49 provides means to calculate the necessary protective barrier thickness for each type of radiation individually. Additionally, barrier requirements specified using the techniques in NCRP 49 show large variations among users. Part of the variation is due to different assumptions made regarding the use of the examined room and the characteristics of adjoining space. Many of the differences result from the difficulty of accurately relating information from the calculations to the graphs and tables involved in the calculation process specified by that report. Moreover, the latest technological developments such as mammography are not addressed, and attenuation data for three-phase generators, which are the most widely used today, are not provided. The design of shielding barriers in diagnostic X-ray departments generally follows the ALARA principle. That means that, in practice, the exposure levels are kept 'as low as reasonably achievable', taking into account economic and technical factors. Additionally, the calculation of barrier requirements includes many uncertainties (e.g. the workload, the actual kVp used, etc.). (author)
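
    As a much-simplified illustration of the kind of arithmetic involved, the required barrier can be expressed as a number of tenth-value layers, one per factor of ten between the unshielded weekly dose and the design limit. The sketch below uses invented doses and TVLs and ignores the use, occupancy and distance factors that all enter the full NCRP 49 procedure.

        import math

        def required_thickness(weekly_dose_unshielded, design_limit, tvl):
            """Thickness attenuating a weekly unshielded dose to the design limit.

            n tenth-value layers, with n = log10(D_unshielded / P).
            """
            n_tvl = math.log10(weekly_dose_unshielded / design_limit)
            return max(0.0, n_tvl) * tvl

        # hypothetical weekly doses (mGy) and TVLs (mm of concrete) for the three components
        for name, dose, tvl in [("primary", 400.0, 30.0), ("secondary", 40.0, 22.0), ("leakage", 4.0, 22.0)]:
            print(name, round(required_thickness(dose, 0.1, tvl), 1), "mm")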

  9. Optical processing of images using computer calculated filters displayed on a liquid crystal electro-optical relay

    International Nuclear Information System (INIS)

    Hybrid data processing, which combines computer calculation of the processing filters with their use in a coherent optical set-up, may lead to real-time filtering. In principle, it is shown that instantaneous filtering of all known and unknown defects in images can be achieved using a well-adapted electro-optical relay. Some synthetic holograms, holographic lenses with variable focusing, and a number of processing filters were calculated, all holograms being binary phase-coded. The results were recorded on tape and displayed later on a 128x128-point liquid-crystal electro-optical relay, allowing the reproduction quality of the computed holograms to be tested on a simple diffraction bench, and on a double diffraction bench in the case of the image filtering results.

  10. Blending Determinism with Evolutionary Computing: Applications to the Calculation of the Molecular Electronic Structure of Polythiophene.

    Science.gov (United States)

    Sarkar, Kanchan; Sharma, Rahul; Bhattacharyya, S P

    2010-03-01

    A density-matrix-based soft-computing solution to the quantum mechanical problem of computing the molecular electronic structure of fairly long polythiophene (PT) chains is proposed. The soft-computing solution is based on a "random mutation hill climbing" scheme which is modified by blending it with a deterministic method based on a trial single-particle density matrix P(0)(R) for the guessed structural parameters (R), which is allowed to evolve under a unitary transformation generated by the Hamiltonian H(R). The Hamiltonian itself changes as the geometrical parameters (R) defining the polythiophene chain undergo mutation. The scale (λ) of the transformation is optimized by making the energy E(λ) stationary with respect to λ. The robustness and performance levels of variants of the algorithm are analyzed and compared with those of other derivative-free methods. The method is further tested successfully on the optimization of the geometry of bipolaron-doped long PT chains. PMID:26613302
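
    To make the stochastic part of the scheme concrete, the sketch below is a generic random-mutation hill climb on a toy energy surface: one randomly chosen parameter is perturbed and the move is kept only if the energy drops. The deterministic density-matrix evolution step that the authors interleave with each accepted move is omitted here.

        import random

        def random_mutation_hill_climb(energy, x0, step=0.05, n_iter=10_000, seed=0):
            """Minimise energy(x) by accepting only energy-lowering random mutations."""
            rng = random.Random(seed)
            x, e = list(x0), energy(x0)
            for _ in range(n_iter):
                trial = list(x)
                i = rng.randrange(len(trial))        # mutate one randomly chosen parameter
                trial[i] += rng.uniform(-step, step)
                e_trial = energy(trial)
                if e_trial < e:                      # hill climbing: keep only improvements
                    x, e = trial, e_trial
            return x, e

        # toy quadratic surface with its minimum at (1, 2)
        best, e_best = random_mutation_hill_climb(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2, [0.0, 0.0])
        print(best, e_best)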

  11. The modified calculation of the coolant temperature in the computer code TOODEE-2

    International Nuclear Information System (INIS)

    The programme is intended for the calculation of the maximum cladding temperature of the hottest rod of a PWR and can be used to estimate the course of events after a loss of coolant. The TOODEE-2 programme corresponds to the LOCTA code of Westinghouse, which gave a superior reproduction of reality. The new TOODEE-2 has been improved, and a comparison of the calculation of a large break with Westinghouse gave good agreement for the first 16 seconds of the course of events. The remainder describes a more serious course of events because of the conservative procedure according to 10 CFR 50 Appendix K. Most of the calculations have used a form factor of 2.32; 2.12, which is the highest form factor for Ringhals 2, has also been used. Further investigations are needed to clarify the difference in the results. (G.B.)

  12. WASP: A flexible FORTRAN 4 computer code for calculating water and steam properties

    Science.gov (United States)

    Hendricks, R. C.; Peller, I. C.; Baron, A. K.

    1973-01-01

    A FORTRAN 4 subprogram, WASP, was developed to calculate the thermodynamic and transport properties of water and steam. The temperature range is from the triple point to 1750 K, and the pressure range is from 0.1 to 100 MN/m2 (1 to 1000 bars) for the thermodynamic properties and to 50 MN/m2 (500 bars) for thermal conductivity and to 80 MN/m2 (800 bars) for viscosity. WASP accepts any two of pressure, temperature, and density as input conditions. In addition, pressure and either entropy or enthalpy are also allowable input variables. This flexibility is especially useful in cycle analysis. The properties available in any combination as output include temperature, density, pressure, entropy, enthalpy, specific heats, sonic velocity, viscosity, thermal conductivity, surface tension, and the Laplace constant. The subroutine structure is modular so that the user can choose only those subroutines necessary to his calculations. Metastable calculations can also be made by using WASP.
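
    For readers who want the same flexible input-pair behaviour in a modern setting, the sketch below uses the open-source CoolProp library as a stand-in (an assumption of this note; it is unrelated to WASP and uses its own property formulations). Like WASP, it accepts pressure-temperature or pressure-enthalpy pairs, which is convenient in cycle analysis.

        from CoolProp.CoolProp import PropsSI

        # density, enthalpy, viscosity and conductivity of water at 10 MPa and 600 K
        rho = PropsSI("D", "P", 10e6, "T", 600.0, "Water")   # kg/m^3
        h = PropsSI("H", "P", 10e6, "T", 600.0, "Water")     # J/kg
        mu = PropsSI("V", "P", 10e6, "T", 600.0, "Water")    # Pa*s
        k = PropsSI("L", "P", 10e6, "T", 600.0, "Water")     # W/(m*K)

        # pressure + enthalpy as the input pair, as in cycle calculations
        T_back = PropsSI("T", "P", 10e6, "H", h, "Water")
        print(rho, h, mu, k, T_back)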

  13. A multiprecision matrix calculation library and its extension library for a matrix-product-state simulation of quantum computing

    CERN Document Server

    SaiToh, Akira

    2011-01-01

    A C++ library, named ZKCM, has been developed for the purpose of multiprecision matrix calculations, which is based on the GNU MP and MPFR libraries. It is especially convenient for writing programs involving tensor-product operations, tracing-out operations, and singular-value decompositions. Its extension library, ZKCM_QC, for simulating quantum computing has been developed using the time-dependent matrix-product-state simulation method. This report gives a brief introduction to the libraries with sample programs.

  14. Developing a New Atomic Physics Computer Program (HTAC) to Perform Atomic Structure and Transition Rate Calculations in Three Advanced Methods

    OpenAIRE

    Amani Tahat; Mahmoud Abu-Allaban; Safeia Hamasha

    2011-01-01

    In this study, a new atomic physics program (HTAC) is introduced and tested. It is a utility program designed to automate the computation of various atomic structure and spectral data. It is the first comprehensive code that enables performing atomic calculations based on three advanced theories: the fully relativistic configuration interactions approach, the multi-reference many body perturbation theory and the R-Matrix method. It has been designed to generate tabulated atomic data files tha...

  15. ACRO: a computer program for calculating organ doses from acute or chronic inhalation and ingestion of radionuclides

    International Nuclear Information System (INIS)

    ACRO was developed as a computer program to calculate internal exposure doses resulting from acute or chronic inhalation and oral ingestion of radionuclides. The ICRP Task Group Lung Model (TGLM) was used as the inhalation model in ACRO, and a simple one-compartment model was used as the ingestion model. The program is written in FORTRAN IV, and it requires about 260 KB of memory.

  16. SOURCE 2.0: a computer program to calculate fission product release from multiple fuel elements for accident scenarios

    International Nuclear Information System (INIS)

    SOURCE 2.0 is a computer code being jointly developed within the Canadian nuclear industry. It will model the necessary mechanisms required to calculate the fission product release for a variety of accident scenarios, including large break loss of coolant accidents with or without emergency coolant injection. This paper presents the origin of SOURCE 2.0, describes the code structure, the fission product mechanisms modelled, and the quality assurance procedures that are being followed during the code's life cycle. (author)

  17. Transmutation of alloys in MFE facilities as calculated by REAC (a computer code system for activation and transmutation)

    International Nuclear Information System (INIS)

    A computer code system for fast calculation of activation and transmutation has been developed. The system consists of a driver code, cross-section libraries, flux libraries, a material library, and a decay library. The code is used to predict transmutations in a Ti-modified 316 stainless steel, a commercial ferritic alloy (HT9), and a V-15%Cr-5%Ti alloy in various magnetic fusion energy (MFE) test facilities and conceptual reactors

  18. Implementation of a Quantum-simulation Algorithm of Calculating Molecular Ground-state Energy on an NMR Quantum Computer

    OpenAIRE

    Du, Jiangfeng; Xu, Nanyang; Peng, Xinhua; Wang, Pengfei; Wu, Sanfeng; Lu, Dawei

    2009-01-01

    It is exponentially hard to simulate quantum systems with classical algorithms, while a quantum computer could in principle solve this problem polynomially. We demonstrate such a quantum-simulation algorithm on our NMR system to simulate a hydrogen molecule and calculate its ground-state energy. We utilize the NMR interferometry method to measure the phase shift and iterate the process to reach high precision. Finally we get 17 precise bits of the energy value, and we also analyze the source of...

  19. GAPCON-THERMAL-2: a computer program for calculating the thermal behavior of an oxide fuel rod

    International Nuclear Information System (INIS)

    A description is presented of the computer code GAPCON THERMAL-2, a light water reactor (LWR) fuel thermal performance prediction code. GAPCON-THERMAL-2, is intended to be used as a calculational tool for reactor fuel steady-state thermal performance and to provide input for accident analyses. Some models used in the code provide best estimate as well as conservative predictions. Each of the individual models in the code is based on the best available data

  20. REACT/THERMIX - a computer code to calculate graphite corrosion due to accidents in pebble-bed reactors

    International Nuclear Information System (INIS)

    This report presents the description of the computer code REACT/THERMIX, which was developed for calculations of the graphite corrosion phenomena and accident transients in gas-cooled High Temperature Reactors (HTR) under air and/or water ingress accident conditions. The two-dimensional code is characterized by direct coupling of thermodynamic, fluid-dynamic and chemical processes, with separate handling of heterogeneous chemical reactions. (orig.)

  1. Computer calculations of wire-rope tiedown designs for radioactive materials packages

    International Nuclear Information System (INIS)

    This Regulatory Compliance Guide (RCG) provides guidance on the use and selection of appropriate wire rope type package tiedowns. It provides an effective way to encourage and to ensure uniform implementation of regulatory requirements applicable to tiedowns. It provides general guidelines for securing packages weighing 5,000 pounds or greater that contain radioactive materials onto legal weight trucks (exclusive of packagings having their own trailer with trunnion type tiedown). This RCG includes a computerized Tiedown Stress Calculation Program (TSCP) which calculates the stresses in the wire-rope tiedowns and specifies appropriate sizes of wire rope and associated hardware parameters (such as turnback length, number of cable clips, etc.)

  2. Radiation doses from radiation sources of neutrons and photons by different computer calculation

    International Nuclear Information System (INIS)

    In the present paper, the calculational aspects of dose rates from neutron and photon radiation sources are covered, both with reference to the basic theoretical models of the MERCURE-4, XSDRNPM-S and MCNP-3A codes and, from a practical point of view, by performing safety analyses of the irradiation risk of two transportation casks. The input data sets of these calculations, regarding the CEN 10/200 HLW container and a dry PWR spent fuel assembly shipping cask, are commented on in detail as far as the connection between the input data and the underlying theory is concerned.

  3. BETHSY 6.2TC test calculation with TRACE and RELAP5 computer code

    International Nuclear Information System (INIS)

    The TRACE code is still under development and it will have all the capabilities of RELAP5. The purpose of the present study was therefore to assess the accuracy of the TRACE calculation of BETHSY test 6.2TC, a 15.24 cm equivalent-diameter horizontal cold leg break. For the calculations, TRACE V5.0 Patch 1 and RELAP5/MOD3.3 Patch 4 were used. The overall results obtained with TRACE were similar to the results obtained with RELAP5/MOD3.3. The results show that the discrepancies were reasonable. (author)

  4. Development of 1-year-old computational phantom and calculation of organ doses during CT scans using Monte Carlo simulation

    International Nuclear Information System (INIS)

    With the rapidly growing number of CT examinations, the associated radiation risk has attracted more and more attention. The average dose in each organ during CT scans can only be obtained by using Monte Carlo simulation with computational phantoms. Since children tend to have higher radiation sensitivity than adults, the radiation dose of pediatric CT examinations requires special attention and needs to be assessed accurately. So far, studies on organ doses from CT exposures for pediatric patients are still limited. In this work, a 1-year-old computational phantom was constructed. The body contour was obtained from the CT images of a 1-year-old physical phantom and the internal organs were deformed from an existing Chinese reference adult phantom. To ensure that the organ locations in the 1-year-old computational phantom were consistent with those of the physical phantom, the organ locations were manually adjusted one by one, and the organ masses were adjusted to the corresponding Chinese reference values. Moreover, a CT scanner model was developed using the Monte Carlo technique and the 1-year-old computational phantom was applied to estimate organ doses from simulated CT exposures. As a result, a database including doses to 36 organs and tissues from 47 single axial scans was built. It has been verified by calculation that doses of axial scans are close to those of helical scans; therefore, this database can be applied to helical scans as well. Organ doses were calculated using the database and compared with those obtained from measurements made in the physical phantom for helical scans. The differences between simulation and measurement were less than 25% for all organs. The result shows that the 1-year-old phantom developed in this work can be used to calculate organ doses in CT exposures, and the dose database provides a method for the estimation of 1-year-old patient doses in a variety of CT examinations. (paper)

  5. STATIC_TEMP: a useful computer code for calculating static formation temperatures in geothermal wells

    Energy Technology Data Exchange (ETDEWEB)

    Santoyo, E. [Universidad Nacional Autonoma de Mexico, Centro de Investigacion en Energia, Temixco (Mexico); Garcia, A.; Santoyo, S. [Unidad Geotermia, Inst. de Investigaciones Electricas, Temixco (Mexico); Espinosa, G. [Universidad Autonoma Metropolitana, Co. Vicentina (Mexico); Hernandez, I. [ITESM, Centro de Sistemas de Manufactura, Monterrey (Mexico)

    2000-07-01

    The development and application of the computer code STATIC_TEMP, a useful tool for calculating static formation temperatures from actual bottomhole temperature data logged in geothermal wells, is described. STATIC_TEMP is based on five analytical methods which are the most frequently used in the geothermal industry. Conductive and convective heat flow models (radial, spherical/radial and cylindrical/radial) were selected. The computer code is a useful tool that can be reliably used in situ to determine static formation temperatures before or during the completion stages of geothermal wells (drilling and cementing). Shut-in time and bottomhole temperature measurements logged during well completion activities are required as input data. Output results can include up to seven computations of the static formation temperature by each wellbore temperature data set analysed. STATIC_TEMP was written in Fortran-77 Microsoft language for MS-DOS environment using structured programming techniques. It runs on most IBM compatible personal computers. The source code and its computational architecture as well as the input and output files are described in detail. Validation and application examples on the use of this computer code with wellbore temperature data (obtained from specialised literature) and with actual bottomhole temperature data (taken from completion operations of some geothermal wells) are also presented. (Author)
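
    One of the classical analytical approaches for this problem is the Horner plot: the logged bottomhole temperature is linear in ln((tc + dt)/dt), and the intercept at infinite shut-in time gives the static formation temperature. The sketch below implements only that method with invented build-up data; whether it corresponds exactly to one of the five methods in STATIC_TEMP is an assumption.

        import numpy as np

        def horner_static_temperature(circulation_time, shutin_times, bht):
            """Static formation temperature from a Horner plot.

            BHT is regressed against ln((tc + dt) / dt); the intercept (dt -> infinity,
            i.e. Horner ratio -> 1) is the static temperature estimate.
            """
            x = np.log((circulation_time + shutin_times) / shutin_times)
            slope, intercept = np.polyfit(x, bht, 1)
            return intercept

        # hypothetical bottomhole temperature build-up data (hours, deg C)
        shutin = np.array([6.0, 12.0, 18.0, 24.0])
        bht = np.array([110.0, 118.0, 122.0, 125.0])
        print(horner_static_temperature(3.0, shutin, bht))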

  6. Quantum computing applied to calculations of molecular energies: CH2 benchmark

    Czech Academy of Sciences Publication Activity Database

    Veis, L.; Pittner, Jiří

    2010-01-01

    Vol. 133, No. 19 (2010), p. 194106. ISSN 0021-9606. R&D Projects: GA ČR GA203/08/0626. Institutional research plan: CEZ:AV0Z40400503. Keywords: computation * algorithm * systems. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 2.920, year: 2010

  7. The use of symbolic computation in radiative, energy, and neutron transport calculations. Final report

    International Nuclear Information System (INIS)

    This investigation used symbolic manipulation in developing analytical methods and general computational strategies for solving both linear and nonlinear, regular and singular integral and integro-differential equations which appear in radiative and mixed-mode energy transport. Contained in this report are seven papers which present the technical results as individual modules.

  8. LWR-WIMS, a computer code for light water reactor lattice calculations

    International Nuclear Information System (INIS)

    LWR-WIMS is a comprehensive scheme of computation for studying the reactor physics aspects and burnup behaviour of typical lattices of light water reactors. This report describes the physics methods that have been incorporated in the code, and the modifications that have been made since the code was issued in 1972. (U.K.)

  9. CRITEX - a computer program to calculate criticality excursions in fissile liquid systems

    International Nuclear Information System (INIS)

    A computer program CRITEX has been developed which models criticality excursions in fissile solutions. This report describes the numerical methods used to approximate the differential equations which are used to simulate the physical behaviour. A flow diagram is given together with a description of the subroutines, input and output variables. (author)

  10. Computer code ANISN multiplying media and shielding calculation II. Code description (input/output)

    International Nuclear Information System (INIS)

    The user manual of the ANISN computer code describing input and output subroutines is presented. ANISN code was developed to solve one-dimensional transport equation for neutron or gamma rays in slab, sphere or cylinder geometry with general anisotropic scattering. The solution technique is the discrete ordinate method. (M.C.K.)

  11. DCHAIN 2: a computer code for calculation of transmutation of nuclides

    International Nuclear Information System (INIS)

    DCHAIN2 is a one-point depletion code which solves the coupled equations of radioactive growth and decay for a large number of nuclides by the Bateman method. A library of nuclear data for 1170 fission products has been prepared to provide input data to this code. The Bateman method surpasses the matrix exponential method in computational accuracy and in saving computer storage. However, most existing computer codes based on the Bateman method have shown serious drawbacks in treating cyclic chains and more than a few specific types of decay chains. The present code has surmounted these drawbacks by improving the code FP-S, and has the following characteristics: (1) The code can treat any type of transmutation through decays or neutron-induced reactions. Multiple decays and reactions are allowed for a nuclide. (2) Unknown decay energies in the nuclear data library can be estimated. (3) The code constructs the decay scheme of each nuclide internally and breaks it up into linear chains. Nuclide names, decay types and branching ratios of mother nuclides are necessary as input data for each nuclide. The order of nuclides in the library is arbitrary because each nuclide is distinguished by its name. (4) The code can treat cyclic chains by an approximation. A library of nuclear data has been prepared for 1170 fission products, including data for half-lives, decay schemes, neutron absorption cross sections, fission yields, and disintegration energies. While DCHAIN2 is used to compute the compositions, radioactivity and decay heat of fission products, the gamma-ray spectrum of fission products can also be computed by a separate code, FPGAM, using the composition obtained from DCHAIN2. (J.P.N.)
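
    For reference, the analytical Bateman solution that such codes apply to each linear chain can be written down directly when the decay constants are distinct. The sketch below evaluates it for a chain with only the first member present initially; the two-member example at the end uses invented half-lives.

        import numpy as np

        def bateman(N1_0, lambdas, t):
            """Atom numbers of every member of a linear chain 1 -> 2 -> ... -> n at time t,
            assuming only nuclide 1 is present at t = 0 and all decay constants differ."""
            n = len(lambdas)
            N = np.zeros(n)
            for i in range(n):
                coeff = np.prod(lambdas[:i])         # product lambda_1 ... lambda_i
                s = 0.0
                for j in range(i + 1):
                    denom = np.prod([lambdas[k] - lambdas[j] for k in range(i + 1) if k != j])
                    s += np.exp(-lambdas[j] * t) / denom
                N[i] = N1_0 * coeff * s
            return N

        # parent with an 8-day half-life feeding a daughter with a 2-day half-life
        lam = np.log(2) / np.array([8.0, 2.0])       # per day
        print(bateman(1.0e20, lam, t=5.0))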

  12. PABS: A Computer Program to Normalize Emission Probabilities and Calculate Realistic Uncertainties

    International Nuclear Information System (INIS)

    The program PABS normalizes relative particle emission probabilities to an absolute scale and calculates the relevant uncertainties on this scale. The program is written in Java using the JDK 1.6 library. For additional information about system requirements, the code itself, and compiling from source, see the README file distributed with this program. The mathematical procedures used are given.

  13. A Computer Program for Calculation of Calibration Curves for Quantitative X-Ray Diffraction Analysis.

    Science.gov (United States)

    Blanchard, Frank N.

    1980-01-01

    Describes a FORTRAN IV program written to supplement a laboratory exercise dealing with quantitative x-ray diffraction analysis of mixtures of polycrystalline phases in an introductory course in x-ray diffraction. Gives an example of the use of the program and compares calculated and observed calibration data. (Author/GS)

  14. Computational Chemistry Laboratory: Calculating the Energy Content of Food Applied to a Real-Life Problem

    Science.gov (United States)

    Barbiric, Dora; Tribe, Lorena; Soriano, Rosario

    2015-01-01

    In this laboratory, students calculated the nutritional value of common foods to assess the energy content needed to answer an everyday life application; for example, how many kilometers can an average person run with the energy provided by 100 g (3.5 oz) of beef? The optimized geometries and the formation enthalpies of the nutritional components…

  15. Using the Metropolis Algorithm to Calculate Thermodynamic Quantities: An Undergraduate Computational Experiment

    Science.gov (United States)

    Beddard, Godfrey S.

    2011-01-01

    Thermodynamic quantities such as the average energy, heat capacity, and entropy are calculated using a Monte Carlo method based on the Metropolis algorithm. This method is illustrated with reference to the harmonic oscillator but is particularly useful when the partition function cannot be evaluated; an example using a one-dimensional spin system…
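
    A minimal version of the experiment, assuming a classical one-dimensional harmonic oscillator rather than the article's exact systems, samples the Boltzmann distribution with the Metropolis rule and recovers the average energy and the heat capacity from energy fluctuations.

        import numpy as np

        def metropolis_oscillator(kT=1.0, k_spring=1.0, n_steps=200_000, step=0.5, seed=1):
            """Metropolis sampling of U(x) = k x^2 / 2; analytically <U> = kT/2 and C = k_B/2."""
            rng = np.random.default_rng(seed)
            x, e = 0.0, 0.0
            energies = []
            for _ in range(n_steps):
                x_new = x + rng.uniform(-step, step)
                e_new = 0.5 * k_spring * x_new ** 2
                # accept with probability min(1, exp(-dE / kT))
                if e_new <= e or rng.random() < np.exp(-(e_new - e) / kT):
                    x, e = x_new, e_new
                energies.append(e)
            energies = np.array(energies[n_steps // 10:])   # discard early steps as burn-in
            return energies.mean(), energies.var() / kT ** 2

        print(metropolis_oscillator())   # approximately (0.5, 0.5) in units of kT and k_B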

  16. A computer program for unilateral renal clearance calculation by a modified Oberhausen method

    International Nuclear Information System (INIS)

    A FORTRAN program is presented which, on the basis of data obtained with the NUKLEOPAN M, calculates the glomerular filtration rate with 99mTc-DTPA, the unilateral effective renal plasma flow with 131I-hippuran, and the parameters describing the isotope nephrogram (ING) with 131I-hippuran. The results are calculated fully automatically upon entry of the data, and the results are processed and printed out. The theoretical fundamentals of the ING and of whole-body clearance calculation are presented, as well as the methods available for unilateral clearance calculation, and the FORTRAN program is described in detail. The standard values of the method are documented, as well as a comparative gamma camera study of 48 patients to determine the accuracy of unilateral imaging with the NUKLEOPAN M instrument, a comparison of unilateral clearances by the Oberhausen and Taplin methods, and a comparison between 7/17' plasma clearance and whole-body clearance. Problems and findings of the method are discussed. (orig./MG)

  17. Computer calculation of the limiting cycle of auto-oscillatory astrophysical objects

    International Nuclear Information System (INIS)

    A new method of seeking the limiting cycle of an auto-oscillatory system without making auxiliary graphic constructions is proposed. The results of calculations of the limiting cycle of the 11-year oscillations of solar activity are presented as an illustration

  18. PABS: A Computer Program to Normalize Emission Probabilities and Calculate Realistic Uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Caron, D. S.; Browne, E.; Norman, E. B.

    2009-08-21

    The program PABS normalizes relative particle emission probabilities to an absolute scale and calculates the relevant uncertainties on this scale. The program is written in Java using the JDK 1.6 library. For additional information about system requirements, the code itself, and compiling from source, see the README file distributed with this program. The mathematical procedures used are given below.

  19. Computer program for the sensitivity calculation of a CR-39 detector in a diffusion chamber for radon measurements

    International Nuclear Information System (INIS)

    Computer software for the calculation of the sensitivity to radon of a CR-39 detector enclosed in a diffusion chamber is described in this work. The software consists of two programs, both written in the standard Fortran 90 programming language. The physical background and a numerical example are given. The presented software is intended for researchers in the radon measurement community. Previously published computer programs TRACK-TEST.F90 and TRACK-VISION.F90 [D. Nikezic and K. N. Yu, Comput. Phys. Commun. 174, 160 (2006); D. Nikezic and K. N. Yu, Comput. Phys. Commun. 178, 591 (2008)] are used here as subroutines to calculate the track parameters and to determine whether a track is visible or not, based on the incident angle, impact energy, etching conditions, gray level, and visibility criterion. The results obtained by the software, using five different V functions, were compared with experimental data found in the literature. Two of the functions reproduced the experimental data very well, while the other three gave lower sensitivity than experiment.

  20. MASFLO: a computer code to calculate mass flow rates in the Thermal-Hydraulic Test Facility (THTF). Technical report

    International Nuclear Information System (INIS)

    This report documents a modular data-interpretation computer code. The MASFLO code is a Fortran code used in the Oak Ridge National Laboratory Blowdown Heat Transfer Program to convert measured quantities of density, volumetric flow, and momentum flux into a calculated quantity: mass flow rate. The code performs both homogeneous and two-velocity calculations. The homogeneous models incorporate various combinations of the Thermal-Hydraulic Test Facility instrumented spool piece turbine flow meter, gamma densitometer, and drag disk readings. The two-velocity calculations also incorporate these instruments, but in models developed by Aya, Rouhani, and Popper. Each subroutine is described briefly, and input instructions are provided in the appendix along with a sample of the code output.
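
    The record does not give MASFLO's instrument combinations in detail; as a rough illustration of how spool-piece readings are typically combined under a homogeneous-flow assumption, the sketch below shows two textbook relations (densitometer density with turbine volumetric flow, and densitometer density with drag-disk momentum flux). It is not the MASFLO formulation, and the numbers are illustrative.

```python
import math

def mass_flow_from_turbine(density, volumetric_flow):
    """Homogeneous model: m_dot = rho * Q (gamma densitometer + turbine meter)."""
    return density * volumetric_flow            # kg/m^3 * m^3/s -> kg/s

def mass_flow_from_drag_disk(density, momentum_flux, flow_area):
    """Homogeneous model: the drag disk gives rho*v^2, so m_dot = A * sqrt(rho * rho*v^2)."""
    return flow_area * math.sqrt(density * momentum_flux)

# Single-phase consistency check: fluid at 740 kg/m^3 moving at 2 m/s through 0.01 m^2
rho, v, area = 740.0, 2.0, 0.01
print(mass_flow_from_turbine(rho, v * area))             # ~14.8 kg/s
print(mass_flow_from_drag_disk(rho, rho * v**2, area))   # same value for homogeneous flow
```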

  1. Computer calculation of neutron cross sections with Hauser-Feshbach code STAPRE incorporating the hybrid pre-compound emission model

    International Nuclear Information System (INIS)

    Computer codes incorporating advanced nuclear models (optical, statistical and pre-equilibrium decay nuclear reaction models) were used to calculate neutron cross sections needed for fusion reactor technology. The elastic and inelastic scattering, (n,2n), (n,p), (n,n'p), (n,d) and (n,γ) cross sections for the stable molybdenum isotopes 92,94,95,96,97,98,100Mo and incident neutron energies from about 100 keV (or from threshold) to 20 MeV were calculated using a consistent set of input parameters. The hydrogen production cross section, which determines the radiation damage in structural materials of fusion reactors, can be simply deduced from the presented results. More elaborate microscopic models of the nuclear level density are required for high-accuracy calculations.

  2. An advanced computational scheme for the optimization of 2D radial reflector calculations in pressurized water reactors

    International Nuclear Information System (INIS)

    Highlights: • We present a computational scheme for the determination of reflector properties in a PWR. • The approach is based on the minimization of a functional. • We use a data assimilation method or a parametric complementarity principle. • The reference target is a solution obtained with the method of characteristics. • The simplified flux solution is based on diffusion theory or on the simplified Pn method. - Abstract: This paper presents a computational scheme for the determination of equivalent 2D multi-group spatially dependent reflector parameters in a Pressurized Water Reactor (PWR). The proposed strategy is to define a full-core calculation consistent with a reference lattice code calculation such as the Method Of Characteristics (MOC) as implemented in the APOLLO2 lattice code. The computational scheme presented here relies on the data assimilation module known as “Assimilation de données et Aide à l’Optimisation (ADAO)” of the SALOME platform developed at Électricité De France (EDF), coupled with the full-core code COCAGNE and with the lattice code APOLLO2. A first code-to-code verification of the computational scheme is made using the OPTEX reflector model developed at École Polytechnique de Montréal (EPM). As a result, we obtain 2D multi-group, spatially dependent reflector parameters, using either diffusion or SPN operators. We observe significant improvements in the power discrepancy distribution over the core when using reflector parameters computed with the proposed computational scheme, and the SPN operator enables additional improvements.
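
    The abstract describes fitting equivalent reflector parameters by minimizing a functional that measures the mismatch with a reference MOC solution. The sketch below is only a generic least-squares illustration of that idea, not the ADAO/COCAGNE implementation; `simplified_power` is a hypothetical stand-in for a diffusion or SPN full-core solve and `reference_power` for the MOC target.

```python
import numpy as np
from scipy.optimize import minimize

reference_power = np.array([1.02, 1.05, 0.98, 0.95])   # hypothetical MOC reference

def simplified_power(reflector_params):
    """Placeholder for a full-core diffusion/SPN solve parameterized by the reflector."""
    a, b = reflector_params
    return np.array([1.0 + 0.02 * a, 1.0 + 0.05 * b, 1.0 - 0.02 * a, 1.0 - 0.05 * b])

def functional(reflector_params):
    # Quadratic mismatch between the simplified solution and the reference solution
    residual = simplified_power(reflector_params) - reference_power
    return float(residual @ residual)

result = minimize(functional, x0=[0.0, 0.0], method="Nelder-Mead")
print(result.x)   # fitted equivalent reflector parameters
```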

  3. An advanced computational scheme for the optimization of 2D radial reflector calculations in pressurized water reactors

    Energy Technology Data Exchange (ETDEWEB)

    Clerc, T., E-mail: thomas.clerc2@gmail.com [Institut de Génie Nucléaire, P.O. Box 6079, Station “Centre-Ville”, Montréal, Qc., Canada H3C 3A7 (Canada); Hébert, A., E-mail: alain.hebert@polymtl.ca [Institut de Génie Nucléaire, P.O. Box 6079, Station “Centre-Ville”, Montréal, Qc., Canada H3C 3A7 (Canada); Leroyer, H.; Argaud, J.P.; Bouriquet, B.; Ponçot, A. [Électricité de France, R and D, SINETICS, 1 Av. du Général de Gaulle, 92141 Clamart (France)

    2014-07-01

    Highlights: • We present a computational scheme for the determination of reflector properties in a PWR. • The approach is based on the minimization of a functional. • We use a data assimilation method or a parametric complementarity principle. • The reference target is a solution obtained with the method of characteristics. • The simplified flux solution is based on diffusion theory or on the simplified Pn method. - Abstract: This paper presents a computational scheme for the determination of equivalent 2D multi-group spatially dependent reflector parameters in a Pressurized Water Reactor (PWR). The proposed strategy is to define a full-core calculation consistent with a reference lattice code calculation such as the Method Of Characteristics (MOC) as implemented in the APOLLO2 lattice code. The computational scheme presented here relies on the data assimilation module known as “Assimilation de données et Aide à l’Optimisation (ADAO)” of the SALOME platform developed at Électricité De France (EDF), coupled with the full-core code COCAGNE and with the lattice code APOLLO2. A first code-to-code verification of the computational scheme is made using the OPTEX reflector model developed at École Polytechnique de Montréal (EPM). As a result, we obtain 2D multi-group, spatially dependent reflector parameters, using either diffusion or SPN operators. We observe significant improvements in the power discrepancy distribution over the core when using reflector parameters computed with the proposed computational scheme, and the SPN operator enables additional improvements.

  4. Reduced computational cost in the calculation of worst case response time for real time systems

    OpenAIRE

    Urriza, José M.; Schorb, Lucas; Orozco, Javier D.; Cayssials, Ricardo

    2009-01-01

    Modern Real Time Operating Systems require reducing computational costs even though microprocessors become more powerful each day. It is usual that Real Time Operating Systems for embedded systems have advanced features to administer the resources of the applications that they support. In order to guarantee either the schedulability of the system or the schedulability of a new task in a dynamic Real Time System, it is necessary to know the Worst Case Response Time of the Real Time tasks ...
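
    The abstract is truncated before describing the paper's cost-reduction technique. As background only, the classical fixed-priority response-time recurrence (Joseph-Pandya analysis) that such worst-case response time computations build on is sketched below; it is not the reduced-cost method of the paper, and the task set is hypothetical.

```python
import math

def worst_case_response_time(task_index, costs, periods):
    """Classical fixed-priority response-time iteration.
    Tasks are sorted by priority (index 0 = highest); deadlines are assumed equal to periods."""
    C, T = costs, periods
    r = C[task_index]
    while True:
        interference = sum(math.ceil(r / T[j]) * C[j] for j in range(task_index))
        r_next = C[task_index] + interference
        if r_next == r:
            return r                      # fixed point reached: worst-case response time
        if r_next > T[task_index]:
            return None                   # response exceeds the period: task unschedulable
        r = r_next

# Hypothetical task set: (cost, period) pairs sorted by decreasing priority
costs, periods = [1, 2, 3], [4, 6, 12]
print([worst_case_response_time(i, costs, periods) for i in range(3)])   # [1, 3, 10]
```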

  5. Computer program for calculation of pressure, velocity and inertia force transients in liquid filled piping networks

    International Nuclear Information System (INIS)

    Transient flow phenomena have a wavy character due to pipe elasticity and the inertia and compressibility of the fluid. Disturbances of the boundary conditions result in local flow and pressure changes which are then propagated as waves along the whole hydraulic installation. A program based on the method of wave superposition was developed and implemented on PC/386 computers. The program was applied to the analysis of water hammer shock effects in the Emergency Core Cooling System at the Cernavoda Nuclear Power Plant.

  6. Involving High School Students in Computational Physics University Research: Theory Calculations of Toluene Adsorbed on Graphene

    Science.gov (United States)

    Borck, Øyvind; Gunnarsson, Linda; Lydmark, Pär

    2016-01-01

    To increase public awareness of theoretical materials physics, a small group of high school students is invited to participate actively in a current research project at Chalmers University of Technology. The Chalmers research group explores methods for filtering hazardous and otherwise unwanted molecules from drinking water, for example by adsorption in active carbon filters. In this project, the students use graphene as an idealized model for active carbon and estimate the energy of adsorption of the methylbenzene toluene on graphene with the help of the atomic-scale calculational method density functional theory. In the process the students develop an insight into applied quantum physics, a topic usually not taught at this educational level, and gain some experience with a couple of state-of-the-art calculational tools in materials research. PMID:27505418
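
    For reference, the adsorption energy the students estimate is conventionally defined as the total-energy difference between the combined system and its isolated parts (a standard definition, not a detail stated in the record):

$$E_{\mathrm{ads}} = E_{\mathrm{graphene+toluene}} - \left(E_{\mathrm{graphene}} + E_{\mathrm{toluene}}\right),$$

    where a negative value indicates that adsorption is energetically favourable.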

  7. Involving high school students in computational physics university research: Theory calculations of toluene adsorbed on graphene

    CERN Document Server

    Ericsson, Jonas; Mathiesen, Christoffer; Sepahvand, Benjamin; Borck, Øyvind; Gunnarsson, Linda; Lydmark, Pär; Schröder, Elsebeth

    2016-01-01

    To increase public awareness of theoretical materials physics, a small group of high school students is invited to participate actively in a current research project at Chalmers University of Technology. The Chalmers research group explores methods for filtering hazardous and otherwise unwanted molecules from drinking water, for example by adsorption in active carbon filters. In this project, the students use graphene as an idealized model for active carbon and estimate the energy of adsorption of the methylbenzene toluene on graphene with the help of the atomic-scale calculational method density functional theory. In the process the students develop an insight into applied quantum physics, a topic usually not taught at this educational level, and gain some experience with a couple of state-of-the-art calculational tools in materials research.

  8. Comparison of computer code calculations with experimental results obtained in the NSPP series of experiments

    International Nuclear Information System (INIS)

    Experiments were done on several aerosols in air atmospheres at varying temperature and humidity conditions of interest, in order to form a database for testing the aerosol behavior models used as part of the process of evaluating the "source term" in light water reactor accidents. This paper deals with the problems of predicting the observed experimental suspended aerosol concentrations with aerosol calculational codes. Comparisons of measured versus predicted data are provided.

  9. Efigie: a computer program for calculating end-isotope accumulation by neutron irradiation and radioactive decay

    International Nuclear Information System (INIS)

    Efigie is a program written in Fortran V which calculates the concentration of radionuclides produced by neutron irradiation of a target made of either a single isotope or several isotopes. The program includes optimization criteria that can be applied when the goal is the production of a single nuclide. The effect of a cooling time before chemical processing of the target is also accounted for. (author)
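
    The record gives no equations; as background, the standard single-nuclide activation and decay relations that such end-isotope calculations are typically built on (production under constant flux followed by free decay during cooling) are sketched below. The model ignores target burnup and decay chains, and the numbers are illustrative, not taken from the record.

```python
import math

def activity_after_irradiation_and_cooling(n_target, cross_section_cm2, flux,
                                           decay_const, t_irr, t_cool):
    """N(t_irr) = (phi*sigma*N_T/lambda) * (1 - exp(-lambda*t_irr)),
    then free decay exp(-lambda*t_cool) during the cooling period; returns activity in Bq."""
    production_rate = flux * cross_section_cm2 * n_target            # atoms/s
    n_end_of_irr = (production_rate / decay_const) * (1.0 - math.exp(-decay_const * t_irr))
    n_after_cooling = n_end_of_irr * math.exp(-decay_const * t_cool)
    return decay_const * n_after_cooling

# Illustrative numbers only
print(activity_after_irradiation_and_cooling(
    n_target=1e20, cross_section_cm2=1e-24, flux=1e13,
    decay_const=math.log(2) / (8.02 * 24 * 3600),   # ~8-day half-life
    t_irr=7 * 24 * 3600, t_cool=24 * 3600))
```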

  10. Stability Analysis of Large-Scale Incompressible Flow Calculations on Massively Parallel Computers

    International Nuclear Information System (INIS)

    A set of linear and nonlinear stability analysis tools has been developed to analyze steady state incompressible flows in 3D geometries. The algorithms have been implemented to be scalable to hundreds of parallel processors. The linear stability of steady state flows is determined by calculating the rightmost eigenvalues of the associated generalized eigenvalue problem. Nonlinear stability is studied by bifurcation analysis techniques. The boundaries between desirable and undesirable operating conditions are determined for buoyant flow in the rotating disk CVD reactor.
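
    The record does not name the eigensolver used; the sketch below only illustrates the general idea of locating the rightmost eigenvalues of a generalized problem Ax = λBx with a shift-invert Arnoldi iteration (here via SciPy), which is a common choice for this kind of linear stability analysis. The matrices are toy stand-ins, not the discretized flow operators.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Toy stand-ins for the linearized (Jacobian) operator A and mass matrix B
A = (sp.diags(-np.linspace(1.0, 5.0, n))
     + 0.01 * sp.random(n, n, density=0.01, random_state=0)).tocsc()
B = sp.identity(n, format="csc")

# Shift-invert Arnoldi: eigenvalues closest to the shift converge fastest, so a shift
# near zero targets the rightmost (least stable) part of the spectrum in this example.
vals, vecs = spla.eigs(A, k=5, M=B, sigma=0.0, which="LM")
rightmost = vals[np.argsort(-vals.real)]
print(rightmost)   # the steady state is linearly unstable if any Re(lambda) > 0
```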

  11. Multi-user software of radio therapeutical calculation using a computational network

    International Nuclear Information System (INIS)

    A hardware and software system has been designed for a radiotherapy department. It runs on a Novell network operating-system platform, sharing the existing resources of the server; it is centralized, multi-user, and offers greater safety. It resolves a variety of calculation problems and needs, as well as patient management and administration; it is very fast and versatile, and it contains a set of menus and options which may be selected with the mouse, direction arrows or keyboard shortcuts. (Author)

  12. Program to implement MNDO/vs with analytic first-derivative computation and vibrational-spectrum calculation

    International Nuclear Information System (INIS)

    Derivatives of the total energy with respect to the cartesian coordinates have been calculated with the ES computers for the MNDO/VS method. Standard atomic parameters enable one to calculate systems that include the following elements: H, Be, B, C, N, F, Al, Si, P, S, Cl, Sn, Br, I. It is possible to introduce the parameters for atoms in periods I-VI and to correct for hydrogen bonds formed by N, O, and F. Open-shell calculations have been performed by the unrestricted Hartree-Fock method with analytical derivative calculation or by Dewar's half-electron method from a standard scheme with numerical derivatives. The program envisages geometry optimization by the Davidon-Fletcher-Powell (DFP) method, Fletcher's method, and our modification of the Newton-Raphson method. It is possible to calculate the relative vibrational intensities in the dipole approximation; the derivatives of the dipole-moment components with respect to the cartesian coordinates are derived numerically from a three-point formula.
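
    The three-point formula mentioned for the numerical dipole-moment derivatives is the standard central-difference formula; a minimal illustration follows (the dipole function here is a hypothetical stand-in, not the MNDO/VS routine).

```python
def central_difference(f, x, h=1e-3):
    """Three-point formula: f'(x) ~ (f(x + h) - f(x - h)) / (2h), with O(h^2) error."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Stand-in for one dipole-moment component as a function of one cartesian coordinate
mu_x = lambda x: 0.5 * x**2 - 1.2 * x
print(central_difference(mu_x, 0.7))   # analytic derivative at x = 0.7 is -0.5
```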

  13. DITTY - a computer program for calculating population dose integrated over ten thousand years

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Peloquin, R.A.; Strenge, D.L.

    1986-03-01

    The computer program DITTY (Dose Integrated Over Ten Thousand Years) was developed to determine the collective dose from long term nuclear waste disposal sites resulting from the ground-water pathways. DITTY estimates the time integral of collective dose over a ten-thousand-year period for time-variant radionuclide releases to surface waters, wells, or the atmosphere. This document includes the following information on DITTY: a description of the mathematical models, program designs, data file requirements, input preparation, output interpretations, sample problems, and program-generated diagnostic messages.

  14. ALLDOS: a computer program for calculation of radiation doses from airborne and waterborne releases

    International Nuclear Information System (INIS)

    The computer code ALLDOS is described and instructions for its use are presented. ALLDOS generates tables of radiation doses to the maximally exposed individual and to the population in the region of the release site. Acute or chronic releases of radionuclides to airborne and waterborne pathways may be considered. The code relies heavily on data files of dose conversion factors and environmental transport factors for generating the radiation doses. A source inventory data library may also be used to generate the release terms for each pathway. Codes available for preparation of the dose conversion factors are described, and a complete sample problem is provided describing preparation of the data files and execution of ALLDOS.

  15. Parallel calculations on shared memory, NUMA-based computers using MATLAB

    Science.gov (United States)

    Krotkiewski, Marcin; Dabrowski, Marcin

    2014-05-01

    Achieving satisfactory computational performance in numerical simulations on modern computer architectures can be a complex task. Multi-core design makes it necessary to parallelize the code. Efficient parallelization on NUMA (Non-Uniform Memory Access) shared memory architectures necessitates explicit placement of the data in memory close to the CPU that uses it. In addition, using more than 8 CPUs (~100 cores) requires a cluster solution of interconnected nodes, which involves (expensive) communication between the processors. It takes significant effort to overcome these challenges even when programming in low-level languages, which give the programmer full control over data placement and work distribution. Instead, many modelers use high-level tools such as MATLAB, which severely limit the optimization/tuning options available. Nonetheless, the advantage of programming simplicity and a large available code base can tip the scale in favor of MATLAB. We investigate whether MATLAB can be used for efficient, parallel computations on modern shared memory architectures. A common approach to performance optimization of MATLAB programs is to identify a bottleneck and migrate the corresponding code block to a MEX file implemented in, e.g., C. Instead, we aim at achieving scalable parallel performance of MATLAB's core functionality. Some of MATLAB's internal functions (e.g., bsxfun, sort, BLAS3, operations on vectors) are multi-threaded. Achieving high parallel efficiency of those may potentially improve the performance of a significant portion of MATLAB's code base. Since we do not have MATLAB's source code, our performance tuning relies on the tools provided by the operating system alone. Most importantly, we use custom memory allocation routines, thread to CPU binding, and memory page migration. The performance tests are carried out on multi-socket shared memory systems (2- and 4-way Intel-based computers), as well as a Distributed Shared Memory machine with 96 CPU
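
    The abstract mentions thread-to-CPU binding but not the specific tooling. As a minimal illustration of the idea on Linux (a sketch under that assumption, not the authors' setup), a launching process can be restricted to the cores of one NUMA node before starting the threaded computation:

```python
import os

def pin_to_cores(core_ids):
    """Restrict the current process (and the threads it spawns) to the given CPU cores.
    Linux-only: os.sched_setaffinity is not available on all platforms."""
    os.sched_setaffinity(0, set(core_ids))      # 0 = the calling process
    return os.sched_getaffinity(0)

# Pin to the first four cores, e.g. those belonging to one NUMA node on this machine
print(pin_to_cores(range(4)))
```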

  16. DIRECT MINIMIZATION FOR CALCULATING INVARIANT SUBSPACES IN DENSITY FUNCTIONAL COMPUTATIONS OF THE ELECTRONIC STRUCTURE

    Institute of Scientific and Technical Information of China (English)

    Reinhold Schneider; Thorsten Rohwedder; Alexey Neelov; Johannes Blauert

    2009-01-01

    In this article, we analyse three related preconditioned steepest descent algorithms, which are popular in Hartree-Fock and Kohn-Sham theory as well as in invariant subspace computations, from the viewpoint of minimization of the corresponding functionals, constrained by orthogonality conditions. We exploit the geometry of the admissible manifold, i.e., the invariance with respect to unitary transformations, to reformulate the problem on the Grassmann manifold as the admissible set. We then prove asymptotic linear convergence of the algorithms under the condition that the Hessian of the corresponding Lagrangian is elliptic on the tangent space of the Grassmann manifold at the minimizer.
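
    As a schematic illustration of orthogonality-constrained steepest descent for an invariant subspace (not the authors' preconditioned algorithms), one can project the gradient onto the tangent space at the current orthonormal basis and restore orthonormality with a QR retraction. The operator below is a toy diagonal matrix; everything in the sketch is illustrative.

```python
import numpy as np

def projected_steepest_descent(H, k, steps=500, tau=0.1, rng=np.random.default_rng(0)):
    """Minimize trace(X^T H X) over orthonormal X (n x k): the invariant subspace of the
    k lowest eigenvalues of symmetric H. Unpreconditioned, schematic version."""
    n = H.shape[0]
    X, _ = np.linalg.qr(rng.standard_normal((n, k)))
    for _ in range(steps):
        G = H @ X                                 # Euclidean gradient (up to a constant factor)
        G_tan = G - X @ (X.T @ G)                 # projection onto the tangent space (Grassmann)
        X, _ = np.linalg.qr(X - tau * G_tan)      # descent step + QR retraction keeps X^T X = I
    return X

# Toy symmetric operator: the recovered subspace spans the lowest-eigenvalue eigenvectors
H = np.diag(np.arange(1.0, 11.0))
X = projected_steepest_descent(H, k=3)
print(np.round(np.linalg.eigvalsh(X.T @ H @ X), 3))   # ~ [1, 2, 3]
```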

  17. DITTY - a computer program for calculating population dose integrated over ten thousand years

    International Nuclear Information System (INIS)

    The computer program DITTY (Dose Integrated Over Ten Thousand Years) was developed to determine the collective dose from long term nuclear waste disposal sites resulting from the ground-water pathways. DITTY estimates the time integral of collective dose over a ten-thousand-year period for time-variant radionuclide releases to surface waters, wells, or the atmosphere. This document includes the following information on DITTY: a description of the mathematical models, program designs, data file requirements, input preparation, output interpretations, sample problems, and program-generated diagnostic messages

  18. Introduction to Radcalc: A computer program to calculate the radiolytic production of hydrogen gas from radioactive wastes in packages

    International Nuclear Information System (INIS)

    A calculational technique for quantifying the concentration of hydrogen generated by radiolysis in sealed radioactive waste containers was developed in a U.S. Department of Energy (DOE) study conducted by EG&G Idaho, Inc., and the Electric Power Research Institute (EPRI) TMI-2 Technology Transfer Office. The study resulted in report GEND-041, entitled "A Calculational Technique to Predict Combustible Gas Generation in Sealed Radioactive Waste Containers". The study also resulted in a presentation to the U.S. Nuclear Regulatory Commission (NRC) which gained acceptance of the methodology for use in ensuring compliance with NRC IE Information Notice No. 84-72 (NRC 1984) concerning the generation of hydrogen within packages. NRC IE Information Notice No. 84-72, "Clarification of Conditions for Waste Shipments Subject to Hydrogen Gas Generation", applies to any package containing water and/or organic substances that could radiolytically generate combustible gases. EPRI developed a simple computer program, termed Radcalc, in a spreadsheet format utilizing the GEND-041 calculational methodology to predict hydrogen gas concentrations in low-level radioactive waste containers. The computer code was extensively benchmarked against TMI-2 (Three Mile Island) EPICOR II resin bed measurements. The benchmarking showed that the model predicted hydrogen gas concentrations within 20% of the measured concentrations. Radcalc for Windows was developed using the same calculational methodology. The code is written in Microsoft Visual C++ 2.0 and includes a Microsoft Windows compatible menu-driven front end. In addition to hydrogen gas concentration calculations, Radcalc for Windows also provides transportation and packaging information such as pressure buildup, total activity, decay heat, fissile activity, TRU activity, and transportation classifications.
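
    The record describes the methodology only qualitatively. As background, radiolytic hydrogen production is conventionally estimated from the decay energy absorbed by the hydrogenous material and a G-value (molecules of H2 per 100 eV absorbed). The sketch below implements that textbook relation; it is not the Radcalc/GEND-041 algorithm, and the G-value and power are illustrative.

```python
AVOGADRO = 6.022e23
EV_PER_JOULE = 6.242e18

def hydrogen_generation_rate(absorbed_power_watts, g_value_per_100ev):
    """Moles of H2 produced per second from a G-value and the power absorbed
    by the radiolysis-susceptible material (water and/or organics)."""
    ev_per_second = absorbed_power_watts * EV_PER_JOULE
    molecules_per_second = ev_per_second * g_value_per_100ev / 100.0
    return molecules_per_second / AVOGADRO

# Example: 0.1 W of decay energy absorbed in water, G(H2) ~ 0.45 molecules per 100 eV
rate = hydrogen_generation_rate(0.1, 0.45)
print(rate * 3600 * 24, "mol H2 per day")
```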

  19. FLOWNET: A Computer Program for Calculating Secondary Flow Conditions in a Network of Turbomachinery

    Science.gov (United States)

    Rose, J. R.

    1978-01-01

    The program requires the network parameters, the flow component parameters, the reservoir conditions, and the gas properties as input. It will then calculate all unknown pressures and the mass flow rate in each flow component in the network. The program can treat networks containing up to fifty flow components and twenty-five unknown network pressures. The types of flow components that can be treated are face seals, narrow slots, and pipes. The program is written in both structured FORTRAN (SFTRAN) and FORTRAN 4. The program must be run in an interactive (conversational) mode.

  20. Computer code TOBUNRAD for PWR fuel bundle heat-up calculations

    International Nuclear Information System (INIS)

    The TOBUNRAD computer code was developed for the analysis of "fuel-bundle" heat-up phenomena in a loss-of-coolant accident of a PWR. The fuel bundle consists of fuel pins in a square lattice; its behavior differs from that of individual pins during heat-up. The code is based on the existing TOODEE2 code, which analyzes heat-up phenomena of single fuel pins, so the basic models of heat conduction, heat transfer and coolant flow are the same as in TOODEE2. In addition to the TOODEE2 features, unheated rods are modeled, and radiation heat loss is considered between fuel pins and between a fuel pin and other heat sinks. The TOBUNRAD code is developed using a new FORTRAN technique which makes it possible to interrupt the flow of program control wherever desired, thereby attaching several subprograms to the main code. The users' manual for TOBUNRAD is presented: the basic program structure based on the interruption method, the physical and computational models in each sub-code, usage of the code, and sample problems. (author)

  1. Construction of a computational exposure model for dosimetric calculations using the EGS4 Monte Carlo code and voxel phantoms

    International Nuclear Information System (INIS)

    The MAX phantom has been developed from existing segmented images of an adult male body, in order to achieve a representation as close as possible to the anatomical properties of the reference adult male specified by the ICRP. In computational dosimetry, MAX can simulate the geometry of a human body under exposure to ionizing radiation, internal or external, with the objective of calculating the equivalent dose in organs and tissues for occupational, medical or environmental radiation protection purposes. This study presents the methodology used to build the new computational exposure model MAX/EGS4: the geometric construction of the phantom; the development of algorithms for one-directional, divergent, and isotropic radioactive sources; new methods for calculating the equivalent dose in the red bone marrow and in the skin; and the coupling of the MAX phantom with the EGS4 Monte Carlo code. Finally, some radiation protection results, in the form of conversion coefficients between equivalent dose (or effective dose) and free air-kerma for external photon irradiation, are presented and discussed. Comparing the presented results with similar data from other human phantoms, it is possible to conclude that the MAX/EGS4 coupling is satisfactory for the calculation of the equivalent dose in radiation protection. (author)

  2. Initial experience with distributing structural calculations among computers operating in parallel

    Science.gov (United States)

    Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.

    1984-01-01

    An existing program is currently being adapted to perform finite element analysis by distributing substructures over a network of four Apple IIe microcomputers connected to a shared disk. In this network, one microcomputer controls the entire process while the others perform the analysis on each substructure in parallel. This substructure analysis is used in an iterative, fully stressed, structural resizing procedure. This procedure allows experimentation with resizing in which all analyses are not completed during a single iteration. This research gives some insight into how to configure multidisciplinary analysis and optimization procedures for decomposable engineering systems using either high-performance engineering workstations or a parallel-processor supercomputer. In addition, the operational experience gained facilitates the implementation of analysis programs on these new computers when they become available in an engineering environment.

  3. Computer programs for calculating partially cavitating blunt trailing edged cascade flows in nonlinear theory

    Science.gov (United States)

    Maekawa, S.; Furuya, O.

    1980-01-01

    In addition to the previously developed partially cavitating cascade theory, two new flow models were constructed in search of a better flow model for determining accurate force coefficients. Effort was made to obtain (1) physically acceptable flows, particularly the location of the cavity boundary, and (2) smooth matching of the flow characteristics between the partially cavitating and supercavitating flow regimes. Based on the numerical results obtained with these flow models for practical blade profiles taken from a supercavitating propeller, it was found that no single flow model developed could handle the complete set of cascade geometries and incidence angles. One theory was supplemental to the other, and no definite guideline was discovered for selecting an appropriate flow model for a specified flow condition, except for a few weak indications. This report is a users' manual for the computer programs developed above, describing the program structure, input data setup, typical output data, and a listing.

  4. NAIADQ, a computer program for calculating reactivity transients in low power experimental water reactors

    International Nuclear Information System (INIS)

    The computer code NAIADQ is designed to simulate the course and consequences of non-destructive reactivity accidents in low power, experimental, water-cooled reactor cores fuelled with metal plate elements. It is a coupled neutron kinetics-hydrodynamics-heat transfer code which uses point kinetics and one-dimensional thermohydraulic equations. Nucleate boiling, which occurs at the fuel surface during transients, is modelled by the growth of a superheated layer of water in which vapour is generated at a non-equilibrium rate. It is assumed that this vapour is formed at its saturation temperature and that it mixes homogeneously with the water in this layer. The code is written in FORTRAN IV and has been programmed to run as a catalogued procedure on an IBM operating system such as MVT or MVS, with facility for the inclusion of user routines

  5. Computational programs for shielding calculation with transport of one dimensional and monoenergetic SN

    International Nuclear Information System (INIS)

    This paper describes a computational program for simulating one-velocity neutron transport problems with isotropic scattering in one-dimensional Cartesian geometry. After describing the physical model, the next phase is the mathematical modelling of the physical problem for simulating the neutron distribution. The mathematical model uses the linearized Boltzmann equation, which represents a balance between the production and loss of particles. The discrete ordinates SN formulation consists of discretizing the angular variable in N directions (discrete ordinates) and using a set of angular quadratures to approximate the integral scattering source terms. The SN equations are solved numerically. This work describes three numerical methods: diamond difference, step, and characteristic step. The paper also presents numerical results illustrating the efficiency of the developed program.

  6. Computationally Efficient Calculations of Target Performance of the Normalized Matched Filter Detector for Hydrocoustic Signals

    CERN Document Server

    Diamant, Roee

    2016-01-01

    Detection of hydroacoustic transmissions is a key enabling technology in applications such as depth measurements, detection of objects, and undersea mapping. To cope with the long channel delay spread and the low signal-to-noise ratio, hydroacoustic signals are constructed with a large time-bandwidth product, $N$. A promising detector for hydroacoustic signals is the normalized matched filter (NMF). For the NMF, the detection threshold depends only on $N$, thereby obviating the need to estimate the characteristics of the sea ambient noise, which are time-varying and hard to estimate. While previous works analyzed the characteristics of the NMF, for hydroacoustic signals with large $N$ values the available expressions are computationally complicated to evaluate. Specifically for hydroacoustic signals of large $N$ values, this paper presents approximations for the probability distribution of the NMF. These approximations are found extremely accurate in numerical simulations. We also o...
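
    The abstract is truncated; for orientation, the normalized matched filter statistic itself is simply the magnitude of the correlation between the received block and the known signal, normalized by both norms, which is why the threshold can be set from the time-bandwidth product alone. The sketch below shows the statistic only, not the paper's distributional approximations; the signals are illustrative.

```python
import numpy as np

def nmf_statistic(received, template):
    """Normalized matched filter: |<s, x>| / (||x|| * ||s||), a value in [0, 1]."""
    x = np.asarray(received, dtype=complex)
    s = np.asarray(template, dtype=complex)
    return np.abs(np.vdot(s, x)) / (np.linalg.norm(x) * np.linalg.norm(s))

# Toy check: a chirp-like template buried in noise scores much higher than noise alone
rng = np.random.default_rng(1)
s = np.exp(1j * 0.01 * np.arange(1000) ** 2)          # large time-bandwidth product signal
noise = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
print(nmf_statistic(noise, s), nmf_statistic(s + 0.5 * noise, s))
```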

  7. Metadata management for distributed first principles calculations in VLab—A collaborative cyberinfrastructure for materials computation

    Science.gov (United States)

    da Silveira, Pedro R. C.; da Silva, Cesar R. S.; Wentzcovitch, Renata M.

    2008-02-01

    This paper describes the metadata and metadata management algorithms necessary to handle the concurrent execution of multiple tasks from a single workflow in a collaborative, service-oriented architecture environment. Metadata requirements are imposed by the distributed workflow that calculates thermoelastic properties of materials at high pressures and temperatures. The scientific relevance of this workflow is also discussed. We explain the basic metaphor, the receipt, underlying the metadata management. We show the actual Java representation of the receipt and explain how it is converted to XML in order to be transferred between servers and stored in a database. We also discuss how the collaborative aspect of user activity on running workflows could potentially lead to race conditions, how this affects requirements on metadata, and how these race conditions are precluded. Finally we describe an additional metadata structure, complementary to the receipts, that contains general information about the workflow.

  8. Electronic stopping power calculation for water under the Lindhard formalism for application in proton computed tomography

    Science.gov (United States)

    Guerrero, A. F.; Mesa, J.

    2016-07-01

    Because of the behavior of charged particles when they interact with biological material, proton therapy is shaping the future of radiation therapy in cancer treatment. The planning of radiation therapy is made up of several stages. The first one is the diagnostic image, which gives an idea of the density, size and type of the tumor being treated; to understand this it is important to know how the particle beam interacts with the tissue. In this work, using the Lindhard formalism and the Y.R. Waghmare model for the charge distribution of the proton, the electronic stopping power (SP) for a proton beam interacting with a liquid water target is calculated in the proton energy range 10^1 eV - 10^10 eV, taking into account all the charge states.

  9. Calculation of the RSG-GAS core using computer code citation-3D

    International Nuclear Information System (INIS)

    Since core reactivity is one of the reactor safety parameters, this R and D has been carried out. The WIMSD4 code was used to generate cross sections and diffusion parameters. The CITATION code was then applied to estimate the core reactivity of the RSG-GAS core. To verify the results of the calculation, data and information on the RSG-GAS Typical Working Core were used. To prove that the codes can be used reliably, the cases of all control elements down in the reactor core and of all control rods up in the core were considered. The results for those cases showed that Keff is, respectively, less than and greater than unity (Keff < 1 and Keff > 1).

  10. Protonation Sites, Tandem Mass Spectrometry and Computational Calculations of o-Carbonyl Carbazolequinone Derivatives

    Science.gov (United States)

    Martínez-Cifuentes, Maximiliano; Clavijo-Allancan, Graciela; Zuñiga-Hormazabal, Pamela; Aranda, Braulio; Barriga, Andrés; Weiss-López, Boris; Araya-Maturana, Ramiro

    2016-01-01

    A series of a new type of tetracyclic carbazolequinones incorporating a carbonyl group at the ortho position relative to the quinone moiety was synthesized and analyzed by tandem electrospray ionization mass spectrometry (ESI/MS-MS), using Collision-Induced Dissociation (CID) to dissociate the protonated species. Theoretical parameters such as the molecular electrostatic potential (MEP), local Fukui functions and the local Parr function for electrophilic attack, as well as proton affinity (PA) and gas-phase basicity (GB), were used to explain the preferred protonation sites. Transition states of some main fragmentation routes were obtained, and the energies calculated at the density functional theory (DFT) B3LYP level were compared with those obtained by ab initio quadratic configuration interaction with single and double excitation (QCISD). The results are in accordance with the observed distribution of ions. The nature of the substituents on the aromatic ring has a notable impact on the fragmentation routes of the molecules. PMID:27399676

  11. GOBLIN computer code. Comparison between calculations and TLTA small break test

    International Nuclear Information System (INIS)

    GOBLIN calculations have been performed for two simulation tests of boiling water reactor (BWR) small break loss-of-coolant accidents (LOCAs) which were conducted in the two-loop test apparatus (TLTA). The first test investigated the small break with nondegraded emergency core coolant (ECC) systems, and the second test studied the same small break but with degraded ECC systems, in which the high pressure core spray (HPCS) was assumed unavailable. Very good agreement between test data and calculations is achieved. The second test is the most challenging from a code comparison point of view, and the code prediction of the complicated mass distribution pattern, which changes with time, is very satisfactory. In the first test, and to some extent late in the second test, multidimensional subchannel effects are evident in the core bundle region. These are not and cannot be reproduced by the code since the bundle model of GOBLIN is strictly one-dimensional. (Author)

  12. Comparison of various mathematical procedures for calculating enzyme-immunological results with mini-computers

    International Nuclear Information System (INIS)

    The question of the most appropriate evaluation method is, for enzyme immunoassays, of decisive importance for the precision and accuracy of the results. The time-consuming graphical determination, which can easily contain errors, cannot be used here as far as quality assurance is concerned. The reaction principle on which the enzyme immunoassay is based is analogous to that used for radioimmunoassay, so it seems reasonable to carry out the evaluation with the same software. Our tests concerning the applicability of different, frequently used mathematical algorithms have shown, for radioimmunoassays, that no method can fit all curves with the same degree of accuracy and reproducibility. In this paper we compared a simple and a weighted linear regression after logit-log transformation, and polygonal and cubic spline interpolation, with respect to their applicability for the calculation of phenytoin results with the EMIT technique. (orig.)

  13. Structural Analysis of Char by Raman Spectroscopy: Improving Band Assignments through Computational Calculations from First Principles

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Matthew W.; Dallmeyer, Ian; Johnson, Timothy J.; Brauer, Carolyn S.; McEwen, Jean-Sabin; Espinal, Juan F.; Garcia-Perez, Manuel

    2016-04-01

    Raman spectroscopy is a powerful tool for the characterization of many carbon species. The complex heterogeneous nature of chars and activated carbons has confounded complete analysis due to the additional shoulders observed on the D-band and the high-intensity valley between the D and G-bands. In this paper the effects of various vacancy and substitution defects have been systematically analyzed via molecular modeling using density functional theory (DFT), together with how this is manifested in the calculated gas-phase Raman spectra. The accuracy of these calculations was validated by comparison with (solid-phase) experimental spectra, with a small correction factor being applied to improve the accuracy of the frequency predictions. The spectroscopic effects on the char species are best understood in terms of a reduced symmetry as compared to a “parent” coronene molecule. Based upon the simulation results, the shoulder observed in chars near 1200 cm-1 has been assigned to the totally symmetric A1g vibrations of various small polyaromatic hydrocarbons (PAH) as well as those containing rings of seven or more carbons. Intensity between 1400 cm-1 and 1450 cm-1 is assigned to A1g type vibrations present in small PAHs and especially those containing cyclopentane rings. Finally, band intensity between 1500 cm-1 and 1550 cm-1 is ascribed to predominantly E2g vibrational modes in strained PAH systems. A total of ten potential bands have been assigned between 1000 cm-1 and 1800 cm-1. These fitting parameters have been used to deconvolute a thermoseries of cellulose chars produced by pyrolysis at 300-700 °C. The results of the deconvolution show consistent growth of PAH clusters with temperature, development of non-benzyl rings as temperature increases, and loss of oxygenated features between 400 °C and 600 °C.

  14. RISKIND: A computer program for calculating radiological consequences and health risks from transportation of spent nuclear fuel

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Y.C. [Square Y, Orchard Park, NY (United States); Chen, S.Y.; LePoire, D.J. [Argonne National Lab., IL (United States). Environmental Assessment and Information Sciences Div.; Rothman, R. [USDOE Idaho Field Office, Idaho Falls, ID (United States)

    1993-02-01

    This report presents the technical details of RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel. RISKIND is a user-friendly, semi-interactive program that can be run on an IBM or equivalent personal computer. The program language is FORTRAN-77. Several models are included in RISKIND that have been tailored to calculate the exposure to individuals under various incident-free and accident conditions. The incident-free models assess exposures from both gamma and neutron radiation and can account for different cask designs. The accident models include accidental release, atmospheric transport, and the environmental pathways of radionuclides from spent fuels; these models also assess health risks to individuals and the collective population. The models are supported by databases that are specific to spent nuclear fuels and include a radionuclide inventory and dose conversion factors.

  15. RISKIND: A computer program for calculating radiological consequences and health risks from transportation of spent nuclear fuel

    International Nuclear Information System (INIS)

    This report presents the technical details of RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel. RISKIND is a user-friendly, semi-interactive program that can be run on an IBM or equivalent personal computer. The program language is FORTRAN-77. Several models are included in RISKIND that have been tailored to calculate the exposure to individuals under various incident-free and accident conditions. The incident-free models assess exposures from both gamma and neutron radiation and can account for different cask designs. The accident models include accidental release, atmospheric transport, and the environmental pathways of radionuclides from spent fuels; these models also assess health risks to individuals and the collective population. The models are supported by databases that are specific to spent nuclear fuels and include a radionuclide inventory and dose conversion factors.

  16. User's guide to SERICPAC: A computer program for calculating electric-utility avoided costs rates

    Energy Technology Data Exchange (ETDEWEB)

    Wirtshafter, R.; Abrash, M.; Koved, M.; Feldman, S.

    1982-05-01

    SERICPAC is a computer program developed to calculate average avoided-cost rates for decentralized power producers and cogenerators that sell electricity to electric utilities. SERICPAC works in tandem with SERICOST, a program to calculate avoided costs, and determines the appropriate rates for buying and selling electricity between electric utilities and qualifying facilities (QF), as stipulated under Section 210 of PURPA. SERICPAC contains simulation models for eight technologies including wind, hydro, biogas, and cogeneration. The simulations are converted into a diversified utility production, which can be either gross production or net production; the latter accounts for internal electricity usage by the QF. The program allows adjustments to the production to be made for scheduled and forced outages. The final output of the model is a technology-specific average annual rate. The report contains a description of the technologies and the simulations as well as a complete user's guide to SERICPAC.

  17. BUSH: A computer code for calculating steady state heat transfer in LWR rod bundles under accident conditions

    International Nuclear Information System (INIS)

    The computer code BUSH has been developed for the calculation of steady state heat transfer in a rod bundle. For a given power, flow and geometry it can calculate the temperatures in the rods, coolant and shroud, assuming that at any axial level each rod can be described by one temperature and that the coolant is also radially uniform at this level. Heat transfer by convection and radiation is handled, and the geometry is flexible enough to model nearly all types of envisaged shroud design for the SUPERSARA test series. The modular way in which BUSH has been written makes it suitable for future development, either within the present BUSH framework or as part of a more advanced code.

  18. CHILES 2: a finite element computer program that calculates the intensities of linear elastic singularities in isotropic and orthotropic materials

    International Nuclear Information System (INIS)

    CHILES 2 is a finite-element computer program that calculates the strength of singularities in linear elastic bodies. A generalized quadrilateral finite element that includes a singular point at a corner node is incorporated in the code. The displacement formulation is used and interelement compatibility is maintained so that monotone convergence is preserved. Plane stress, plane strain, and axisymmetric conditions are treated. Isotropic and orthotropic crack tip singularity problems are solved by this version of the code, but any type of singularity may be properly modeled by modifying selected subroutines in the program

  19. CHILES 2: a finite element computer program that calculates the intensities of linear elastic singularities in isotropic and orthotropic materials

    Energy Technology Data Exchange (ETDEWEB)

    Benzley, S.E.; Beisinger, Z.E.

    1978-02-01

    CHILES 2 is a finite-element computer program that calculates the strength of singularities in linear elastic bodies. A generalized quadrilateral finite element that includes a singular point at a corner node is incorporated in the code. The displacement formulation is used and interelement compatibility is maintained so that monotone convergence is preserved. Plane stress, plane strain, and axisymmetric conditions are treated. Isotropic and orthotropic crack tip singularity problems are solved by this version of the code, but any type of singularity may be properly modeled by modifying selected subroutines in the program.

  20. Implementation of a Quantum-simulation Algorithm of Calculating Molecular Ground-state Energy on an NMR Quantum Computer

    CERN Document Server

    Du, Jiangfeng; Peng, Xinhua; Wang, Pengfei; Wu, Sanfeng; Lu, Dawei

    2009-01-01

    It is exponentially hard to simulate quantum systems with classical algorithms, while a quantum computer could in principle solve this problem in polynomial time. We demonstrate such a quantum-simulation algorithm on our NMR system to simulate a hydrogen molecule and calculate its ground-state energy. We utilize the NMR interferometry method to measure the phase shift and iterate the process to reach a high precision. Finally we obtain the energy value to a precision of 17 bits, and we also analyze the sources of error in the simulation.

  1. WATEQF; a FORTRAN IV version of WATEQ : a computer program for calculating chemical equilibrium of natural waters

    Science.gov (United States)

    Plummer, L. Niel; Jones, Blair F.; Truesdell, Alfred Hemingway

    1976-01-01

    WATEQF is a FORTRAN IV computer program that models the thermodynamic speciation of inorganic ions and complex species in solution for a given water analysis. The original version (WATEQ) was written in 1973 by A. H. Truesdell and B. F. Jones in Programming Language/One (PL/1). With but a few exceptions, the thermochemical data, speciation, coefficients, and general calculation procedure of WATEQF are identical to those of the PL/1 version. This report notes the differences between WATEQF and WATEQ, demonstrates how to set up the input data to execute WATEQF, provides a test case for comparison, and makes available a listing of WATEQF. (Woodard-USGS)

  2. EQ3NR: a computer program for geochemical aqueous speciation-solubility calculations. User's guide and documentation

    International Nuclear Information System (INIS)

    EQ3NR is a geochemical aqueous speciation-solubility FORTRAN program developed for application with the EQ3/6 software package. The program models the thermodynamic state of an aqueous solution by using a modified Newton-Raphson algorithm to calculate the distribution of aqueous species such as simple ions, ion pairs, and aqueous complexes. Input to EQ3NR primarily consists of data derived from total analytical concentrations of dissolved components and can also include pH, alkalinity, electrical balance, phase equilibrium (solubility) constraints, and a default value for either Eh, pe, or the logarithm of oxygen fugacity. The program evaluates the degree of disequilibrium for various reactions and computes either the saturation index (SI = log Q/K) or thermodynamic affinity (A = -2.303 RT log Q/K) for minerals. Individual values of Eh, pe, equilibrium oxygen fugacity, and Ah (redox affinity, a new parameter) are computed for aqueous redox couples. Differences in these values define the degree of aqueous redox disequilibrium. EQ3NR can be used alone. It must be used to initialize a reaction-path calculation by EQ6, its companion program. EQ3NR reads a secondary data file, DATA1, created from a primary data file, DATA0, by the data base preprocessor, EQTL. The temperature range for the thermodynamic data in the file is 0 to 300°C. Addition or deletion of species or changes in the associated thermodynamic data are made by changing only the file; changes are not made to either EQ3NR or EQTL. Modification or substitution of equilibrium constant values can be selected on the EQ3NR INPUT file by the user at run time. EQ3NR and EQTL were developed for the FTN and CFT FORTRAN languages on the CDC 7600 and Cray-1 computers. Special FORTRAN conventions have been implemented for ease of portability to IBM, UNIVAC, and VAX computers.
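
    A minimal numeric illustration of the two disequilibrium measures quoted in the abstract, SI = log(Q/K) and A = -2.303·R·T·log(Q/K); the Q and K values below are illustrative only.

```python
import math

R = 8.314  # J / (mol K)

def saturation_index(Q, K):
    """SI = log10(Q/K): zero at equilibrium, positive when the solution is supersaturated."""
    return math.log10(Q / K)

def affinity(Q, K, T=298.15):
    """Thermodynamic affinity as defined in the abstract: A = -2.303 * R * T * log10(Q/K), in J/mol."""
    return -2.303 * R * T * math.log10(Q / K)

# Illustrative values: an ion activity product one tenth of the equilibrium constant
Q, K = 1.0e-9, 1.0e-8
print(saturation_index(Q, K))   # -1.0  (undersaturated)
print(affinity(Q, K))           # ~ +5.7e3 J/mol
```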

  3. A computer code for calculation of solvent-extraction separation in a multicomponent system with reference to nuclear fuel reprocessing

    International Nuclear Information System (INIS)

    Nuclear technology development has pointed out the need for a new assessment of the fuel cycle back-end. Treatment and disposal of radioactive wastes arising from nuclear fuel reprocessing is known as one of the problems not yet satisfactorily solved, together with the separation of uranium and plutonium from fission products in highly irradiated fuels. The aim of this work is to present an improvement of the computer code for solvent extraction process calculation previously designed by the authors. The modeling of the extraction system has been modified by introducing a new method for calculating the distribution coefficients. The new correlations are based on deriving empirical functions not only for the apparent equilibrium constants, but also for the solvation number. The mathematical model derived for calculating the separation performance has then been tested for up to ten components and twelve theoretical stages with minor modifications to the convergence criteria. Suitable correlations for the calculation of the distribution coefficients of uranium, plutonium, nitric acid and fission products were constructed and used to successfully simulate several experimental conditions. (Author)

  4. ERATO - a computer program for the calculation of induced eddy-currents in three-dimensional conductive structures

    International Nuclear Information System (INIS)

    The computer code ERATO is used for the calculation of eddy currents in three-dimensional conductive structures and of their secondary magnetic field. ERATO is a revised version of the code FEDIFF, developed at IPP Garching. The calculation uses the Finite-Element-Network (FEN) method, in which the structure is simulated by an equivalent electric network. In the ERATO code, the finite-element discretization, the eddy-current analysis, and the final evaluation of the results are done in separate programs, so that the eddy-current analysis, as the central step, is completely independent of any specific geometry. For the finite-element discretization there are two so-called preprocessors, which treat a torus segment and a rectangular, flat plate. For the final evaluation, postprocessors are used by which the current distributions can be printed and plotted. The report discusses the theoretical foundation of the FEN method, describes the structure and application of the programs (preprocessors, analysis program, postprocessors, supporting programs), and presents two calculation examples. (orig.)

  5. A fast computational approach for the determination of thermal properties of hollow bricks in energy-related calculations

    International Nuclear Information System (INIS)

    As successful products of the recent developments in the building industry aimed at increasing the energy efficiency of buildings, hollow clay brick blocks with complex systems of internal cavities present a prospective alternative to traditional solid bricks on the building ceramics market. Determination of their thermal properties, which are essential for any energy-related calculations, is not, however, an easy task. In contrast to solid bricks, the application of sophisticated methods is a necessity. In this paper, a fast computational approach for the determination of the equivalent thermal conductivity of hollow brick blocks with the cavities filled by air is presented, which can be used as an integral part of energy-related calculations. The thermal conductivity of the brick body is the main input parameter of the model; convection and radiation in the cavities are taken into account in a simplified form. The error range of the designed method is identified using a thorough uncertainty analysis. A direct comparison of the calculated equivalent thermal conductivity with the results obtained by two different experimental techniques for the same hollow brick block shows a satisfactory agreement, making the designed computational approach a viable alternative to the currently used methods. - Highlights: • A fast approach for determination of thermal properties of hollow bricks is given. • A simplified model including all significant heat transport phenomena is applied. • The error range of the method is identified using a thorough uncertainty analysis. • The verification is done by a comparison with two experimental techniques. • The approach is designed as a part of whole-building energy-related calculations.

  6. A computer code to calculate the fast induced signals by electron swarms in gases

    Energy Technology Data Exchange (ETDEWEB)

    Tobias, Carmen C.B. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Mangiarotti, Alessio [Universidade de Coimbra (Portugal). Dept. de Fisica. Lab. de Instrumentacao e Fisica Experimental de Particulas

    2010-07-01

    Full text: The study of electron transport parameters (i.e. drift velocity, diffusion coefficients and first Townsend coefficient) in gases is very important in several areas of applied nuclear science. For example, they are a relevant input to the design of particle detectors employing micro-structures (MSGCs, micromegas, GEMs) and RPCs (resistive plate chambers). Moreover, if the data are accurate and complete enough, they can be used to derive a set of electron impact cross-sections with their energy dependence, which are a key ingredient in micro-dosimetry calculations. Despite the fundamental need for such data and the long age of the field, the gases of possible interest are so many, and the effort of obtaining good quality data so time demanding, that an important contribution can still be made. As an example, the electron drift velocity at moderate field strengths (up to 50 Td) in pure isobutane (a tissue equivalent gas) has been measured only recently by the IPEN-LIP collaboration using a dedicated setup. The transport parameters are derived from the recorded electric pulse induced by a swarm started with a pulsed laser shining on the cathode. To aid the data analysis, a special code has been developed to calculate the induced pulse by solving the electron continuity equation including growth, drift and diffusion. A realistic profile of the initial laser beam is taken into account, as well as the boundary conditions at the cathode and anode. The approach is either semi-analytic, based on the expression derived by P. H. Purdie and J. Fletcher, or fully numerical, using a finite difference scheme improved over the one introduced by J. de Urquijo et al. The agreement between the two will be demonstrated under typical conditions for the mentioned experimental setup. A brief discussion on the stability of the finite difference scheme will be given. The new finite difference scheme allows a detailed investigation of the importance of back diffusion to

  7. A computer programmed model for calculation of fall and dispersion of particles in the atmosphere

    International Nuclear Information System (INIS)

    An atmospheric model has been designed and developed to provide estimates of air concentrations or ground deposit densities of particles released in the atmosphere up to 90-km altitude. Particle density and diameter may range from 1 to 10 g/cm3 and from about 3 to 300 μm, respectively, for given instantaneous point or line sources. The particle cloud is allowed to move horizontally in accordance with analytically simulated winds and to fall at terminal velocity plus vertical air velocity. Small-scale cloud growth rate is specified empirically at values based on past instantaneous tracer experiments while large-scale growth results from trajectory subdivision and divergence of new particle trajectories. Some specific computer runs at Sandia were done to assess hazards resulting from possible rocket abort situations and atmospheric re-entry from improper orbits of isotopic or reactor power supplies. The results have been compared with other modes of estimation derived from simpler models of world-wide contaminant spread. While existing data are insufficient for full verification, it is felt that the present model is one of the most comprehensive and realistic available. (author)
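    The record does not give the fall-velocity model itself. Purely for orientation, a Stokes-law terminal velocity estimate for particles in the stated 3-300 μm, 1-10 g/cm3 range can be sketched as below; this is an assumption for illustration only (Stokes drag breaks down for the largest particles, and the actual code may use other drag laws and altitude-dependent air properties).

        # Hedged sketch: Stokes terminal velocity v_t = (rho_p - rho_air) * g * d**2 / (18 * mu).
        # Air properties are sea-level values; the cited model is more general.
        def stokes_terminal_velocity(diameter_m, particle_density, air_density=1.2,
                                     air_viscosity=1.8e-5, g=9.81):
            return (particle_density - air_density) * g * diameter_m ** 2 / (18.0 * air_viscosity)

        for d_um in (3, 30, 300):
            v = stokes_terminal_velocity(d_um * 1e-6, particle_density=3000.0)  # 3 g/cm3
            print(f"d = {d_um:3d} um -> v_t ~ {v:.3g} m/s")  # the 300 um value is beyond the Stokes regime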

  8. WASTEMGMT: A computer model for calculation of waste loads, profiles, and emissions

    International Nuclear Information System (INIS)

    WasteMGMT is a computational model developed to provide waste loads, profiles, and emissions for the US Department of Energy's Waste Management Programmatic Environmental Impact Statement (WM PEIS). The model was developed to account for the considerable variety of waste types and processing alternatives evaluated for the WM PEIS. The model is table-driven, with three types of fundamental waste management data defining the input: (1) waste inventories and characteristics; (2) treatment, storage, and disposal facility characteristics; and (3) alternative definition. The primary output of the model consists of tables of waste loads and contaminant profiles at facilities, as well as contaminant air releases for each treatment and storage facility at each site for each waste stream. The model is implemented in Microsoft® FoxPro® for MS-DOS® version 2.5 and requires a microcomputer with at least a 386 processor and a minimum 6 Mbytes of memory and 10 Mbytes of disk space for temporary storage

  9. Occlusion culling and calculation for a computer generated hologram using spatial frequency index method

    International Nuclear Information System (INIS)

    A spatial frequency index method is proposed to cull occlusion and generate a hologram. Object points with the same spatial frequency are put into a set so that their mutual occlusion can be handled. The hidden surfaces of the three-dimensional (3D) scene are quickly removed by culling, within each set, the object points that are furthest from the hologram plane. The plane-wave phases, which depend only on the spatial frequencies, are precomputed and stored in a table. According to the spatial frequency of each object point, the plane-wave phases for generating the fringes are obtained directly from the table. Three 3D scenes are chosen to verify the spatial frequency index method. Both numerical simulation and optical reconstruction are performed. Experimental results demonstrate that the proposed method can cull the hidden surfaces of the 3D scene correctly. The occlusion effect of the 3D scene can be well reproduced. The computational speed is better than that of conventional methods but the calculation is still time-consuming. (paper)

  10. Experimental design of membrane sensor for selective determination of phenazopyridine hydrochloride based on computational calculations.

    Science.gov (United States)

    Attia, Khalid A M; El-Abasawi, Nasr M; Abdel-Azim, Ahmed H

    2016-04-01

    A computational study has been carried out, electronically and geometrically, to select the most suitable ionophore for designing a novel sensitive and selective electrochemical sensor for phenazopyridine hydrochloride (PAP). This study has revealed that sodium tetraphenylborate (NaTPB) fits better with PAP than potassium tetrakis(4-chlorophenyl)borate (KTClPB). The sensor design is based on the ion pair of PAP with NaTPB using dioctyl phthalate as a plasticizer. Under optimum conditions, the proposed sensor shows a slope of 59.5 mV per concentration decade in the concentration range of 1.0 × 10(-2)-1.0 × 10(-5) M with a detection limit of 8.5 × 10(-6) M. The sensor exhibits a very good selectivity for PAP with respect to a large number of interfering species such as inorganic cations and sugars. The sensor enables the determination of PAP in the presence of its oxidative degradation product 2,3,6-triaminopyridine, which is also its toxic metabolite. The proposed sensor has been successfully applied for the selective determination of PAP in pharmaceutical formulation. Also, the obtained results have been statistically compared to a reported electrochemical method, indicating no significant difference between the investigated method and the reported one with respect to accuracy and precision. PMID:26838908

  11. WASTE_MGMT: A computer model for calculation of waste loads, profiles, and emissions

    Energy Technology Data Exchange (ETDEWEB)

    Kotek, T.J.; Avci, H.I.; Koebnick, B.L. [Argonne National Lab., IL (United States). Environmental Assessment Div.

    1996-12-01

    Waste_MGMT is a computational model developed to provide waste loads, profiles, and emissions for the US Department of Energy's Waste Management Programmatic Environmental Impact Statement (WM PEIS). The model was developed to account for the considerable variety of waste types and processing alternatives evaluated for the WM PEIS. The model is table-driven, with three types of fundamental waste management data defining the input: (1) waste inventories and characteristics; (2) treatment, storage, and disposal facility characteristics; and (3) alternative definition. The primary output of the model consists of tables of waste loads and contaminant profiles at facilities, as well as contaminant air releases for each treatment and storage facility at each site for each waste stream. The model is implemented in Microsoft® FoxPro® for MS-DOS® version 2.5 and requires a microcomputer with at least a 386 processor and a minimum 6 Mbytes of memory and 10 Mbytes of disk space for temporary storage.

  12. Waste-Mgmt: A computer model for calculation of waste loads, profiles, and emissions

    Energy Technology Data Exchange (ETDEWEB)

    Kotek, T.J.; Avci, H.I.; Koebnick, B.L.

    1995-04-01

    WASTE-MGMT is a computational model that provides waste loads, profiles, and emissions for the U.S. Department of Energy's Waste Management Programmatic Environmental Impact Statement (WM PEIS). The model was developed to account for the considerable variety of waste types and processing alternatives evaluated by the WM PEIS. The model is table-driven, with three types of fundamental waste management data defining the input: (1) waste inventories and characteristics; (2) treatment, storage and disposal facility characteristics; and (3) alternative definition. The primary output of the model consists of tables of waste loads and contaminant profiles at facilities, as well as contaminant air releases for each treatment and storage facility at each site for each waste stream. The model is implemented in Microsoft® FoxPro® for MS-DOS® version 2.5 and requires a microcomputer with at least a 386 processor and a minimum 6 MBytes of memory and 10 MBytes of disk space for temporary storage.

  13. Establishment of scatter factors for use in shielding calculations and risk assessment for computed tomography facilities

    International Nuclear Information System (INIS)

    The specification of shielding for CT facilities in the UK and many other countries has been based on isodose scatter curves supplied by the manufacturers combined with the scanner's mAs workload. Shielding calculations for radiography and fluoroscopy are linked to a dose measurement of radiation incident on the patient called the kerma–area product (KAP), and a related quantity, the dose-length product (DLP), is now employed for assessment of CT patient doses. In this study the link between scatter air kerma and DLP has been investigated for CT scanners from different manufacturers. Scatter air kerma values have been measured and scatter factors established that can be used to estimate air kerma levels within CT scanning rooms. Factors recommended to derive the scatter air kerma at 1 m from the isocentre are 0.36 µGy (mGy cm)−1 for the body and 0.14 µGy (mGy cm)−1 for head scans. The CT scanner gantries only transmit 10% of the scatter air kerma level and this can also be taken into account when designing protection. The factors can be used to predict scatter air kerma levels within a scanner room that might be used in risk assessments relating to personnel whose presence may be required during CT fluoroscopy procedures.
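    The factors quoted above allow a simple estimate of scatter air kerma from the examination DLP, the distance from the isocentre and, where the gantry intervenes, the 10% transmission figure. The snippet below is a worked example of that arithmetic only; the DLP, distance and workload values are illustrative assumptions, not data from the study.

        # Scatter factors from the study: 0.36 uGy per (mGy cm) for body, 0.14 for head, at 1 m
        # from the isocentre; the gantry transmits about 10% of the scatter air kerma.
        SCATTER_FACTOR = {"body": 0.36, "head": 0.14}   # uGy / (mGy cm) at 1 m

        def scatter_air_kerma_uGy(dlp_mGycm, region, distance_m, behind_gantry=False):
            k = SCATTER_FACTOR[region] * dlp_mGycm / distance_m ** 2   # inverse-square scaling (assumed)
            return 0.10 * k if behind_gantry else k

        # Illustrative weekly workload: 100 body scans of 800 mGy cm each, at a point 2 m away.
        weekly = 100 * scatter_air_kerma_uGy(800.0, "body", distance_m=2.0)
        print(f"Estimated weekly scatter air kerma: {weekly:.0f} uGy")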

  14. Accuracy of patient dose calculation for lung IMRT: A comparison of Monte Carlo, convolution/superposition, and pencil beam computations.

    Science.gov (United States)

    Vanderstraeten, Barbara; Reynaert, Nick; Paelinck, Leen; Madani, Indira; De Wagter, Carlos; De Gersem, Werner; De Neve, Wilfried; Thierens, Hubert

    2006-09-01

    The accuracy of dose computation within the lungs depends strongly on the performance of the calculation algorithm in regions of electronic disequilibrium that arise near tissue inhomogeneities with large density variations. There is a lack of data evaluating the performance of highly developed analytical dose calculation algorithms compared to Monte Carlo computations in a clinical setting. We compared full Monte Carlo calculations (performed by our Monte Carlo dose engine MCDE) with two different commercial convolution/superposition (CS) implementations (Pinnacle-CS and Helax-TMS's collapsed cone model Helax-CC) and one pencil beam algorithm (Helax-TMS's pencil beam model Helax-PB) for 10 intensity modulated radiation therapy (IMRT) lung cancer patients. Treatment plans were created for two photon beam qualities (6 and 18 MV). For each dose calculation algorithm, patient, and beam quality, the following set of clinically relevant dose-volume values was reported: (i) minimal, median, and maximal dose (Dmin, D50, and Dmax) for the gross tumor and planning target volumes (GTV and PTV); (ii) the volume of the lungs (excluding the GTV) receiving at least 20 and 30 Gy (V20 and V30) and the mean lung dose; (iii) the 33rd percentile dose (D33) and Dmax delivered to the heart and the expanded esophagus; and (iv) Dmax for the expanded spinal cord. Statistical analysis was performed by means of one-way analysis of variance for repeated measurements and Tukey pairwise comparison of means. Pinnacle-CS showed an excellent agreement with MCDE within the target structures, whereas the best correspondence for the organs at risk (OARs) was found between Helax-CC and MCDE. Results from Helax-PB were unsatisfying for both targets and OARs. Additionally, individual patient results were analyzed. Within the target structures, deviations above 5% were found in one patient for the comparison of MCDE and Helax-CC, while all differences between MCDE and Pinnacle-CS were below 5%. For both

  15. Accuracy of patient dose calculation for lung IMRT: A comparison of Monte Carlo, convolution/superposition, and pencil beam computations

    International Nuclear Information System (INIS)

    The accuracy of dose computation within the lungs depends strongly on the performance of the calculation algorithm in regions of electronic disequilibrium that arise near tissue inhomogeneities with large density variations. There is a lack of data evaluating the performance of highly developed analytical dose calculation algorithms compared to Monte Carlo computations in a clinical setting. We compared full Monte Carlo calculations (performed by our Monte Carlo dose engine MCDE) with two different commercial convolution/superposition (CS) implementations (Pinnacle-CS and Helax-TMS's collapsed cone model Helax-CC) and one pencil beam algorithm (Helax-TMS's pencil beam model Helax-PB) for 10 intensity modulated radiation therapy (IMRT) lung cancer patients. Treatment plans were created for two photon beam qualities (6 and 18 MV). For each dose calculation algorithm, patient, and beam quality, the following set of clinically relevant dose-volume values was reported: (i) minimal, median, and maximal dose (Dmin, D50, and Dmax) for the gross tumor and planning target volumes (GTV and PTV); (ii) the volume of the lungs (excluding the GTV) receiving at least 20 and 30 Gy (V20 and V30) and the mean lung dose; (iii) the 33rd percentile dose (D33) and Dmax delivered to the heart and the expanded esophagus; and (iv) Dmax for the expanded spinal cord. Statistical analysis was performed by means of one-way analysis of variance for repeated measurements and Tukey pairwise comparison of means. Pinnacle-CS showed an excellent agreement with MCDE within the target structures, whereas the best correspondence for the organs at risk (OARs) was found between Helax-CC and MCDE. Results from Helax-PB were unsatisfying for both targets and OARs. Additionally, individual patient results were analyzed. Within the target structures, deviations above 5% were found in one patient for the comparison of MCDE and Helax-CC, while all differences between MCDE and Pinnacle-CS were below 5%. For both

  16. User's manual to the ICRP Code: a series of computer programs to perform dosimetric calculations for the ICRP Committee 2 report

    International Nuclear Information System (INIS)

    A computer code has been developed that implements the recommendations of ICRP Committee 2 for computing limits for occupational exposure to radionuclides. The purpose of this report is to describe the various modules of the computer code and to present a description of the methods and criteria used to compute the tables published in the Committee 2 report. The computer code contains three modules: (1) one computes specific effective energy; (2) one calculates cumulated activity; and (3) one computes dose and the series of ICRP tables. The description of the first two modules emphasizes the new ICRP Committee 2 recommendations in computing specific effective energy and cumulated activity. For the third module, the complex criteria are discussed for calculating the tables of committed dose equivalent, weighted committed dose equivalents, annual limit on intake, and derived air concentration

  17. HARAD: a computer code for calculating daughter concentrations in air following the atmospheric release of a parent radionuclide

    International Nuclear Information System (INIS)

    The HARAD computer code, written in FORTRAN IV, calculates concentrations of radioactive daughters in air following the atmospheric release of a parent radionuclide under a variety of meteorological conditions. It can be applied most profitably to the assessment of doses to man from the noble gases such as 222Rn, 220Rn, and Xe and Kr isotopes. These gases can produce significant quantities of short-lived particulate daughters in an airborne plume, which are the major contributors to dose from these chains with gaseous parent radionuclides. The simultaneous processes of radioactive decay, buildup, and environmental losses through wet and dry deposition on ground surfaces are calculated for a daughter chain in an airborne plume as it is dispersed downwind from a point of release of a parent. The code employs exact solutions of the differential equations describing the above processes over successive discrete segments of downwind distance. Average values for the dry deposition coefficients of the chain members over each of these distance segments were treated as constants in the equations. The advantage of HARAD is its short computing time
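    For the simplest case of a gaseous parent decaying into a single particulate daughter, the buildup that HARAD tracks has the familiar closed-form (Bateman) solution, which is easy to sketch. The snippet below is only a schematic two-member illustration: it omits the plume dispersion and the wet and dry deposition loss terms that the code itself includes, and the travel times are assumed.

        import numpy as np

        # Two-member Bateman solution: noble-gas parent decaying into a particulate daughter.
        # Dispersion and deposition losses of the real code are deliberately omitted.
        def parent_daughter_activity(A_parent0, lam_p, lam_d, t):
            A_parent = A_parent0 * np.exp(-lam_p * t)
            A_daughter = A_parent0 * lam_d / (lam_d - lam_p) * (np.exp(-lam_p * t) - np.exp(-lam_d * t))
            return A_parent, A_daughter

        lam_p = np.log(2) / 55.6     # Rn-220, half-life ~55.6 s
        lam_d = np.log(2) / 0.145    # Po-216, half-life ~0.145 s
        t = np.array([10.0, 60.0, 300.0])   # seconds of downwind travel (illustrative)
        print(parent_daughter_activity(1.0, lam_p, lam_d, t))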

  18. ALPHN: A computer program for calculating (α, n) neutron production in canisters of high-level waste

    Energy Technology Data Exchange (ETDEWEB)

    Salmon, R.; Hermann, O.W.

    1992-10-01

    The rate of neutron production from (α, n) reactions in canisters of immobilized high-level waste containing borosilicate glass or glass-ceramic compositions is significant and must be considered when estimating neutron shielding requirements. The personal computer program ALPHN calculates the (α, n) neutron production rate of a canister of vitrified high-level waste. The user supplies the chemical composition of the glass or glass-ceramic and the curies of the alpha-emitting actinides present. The output of the program gives the (α, n) neutron production of each actinide in neutrons per second and the total for the canister. The (α, n) neutron production rates are source terms only; that is, they are production rates within the glass and do not take into account the shielding effect of the glass. For a given glass composition, the user can calculate up to eight cases simultaneously; these cases are based on the same glass composition but contain different quantities of actinides per canister. In a typical application, these cases might represent the same canister of vitrified high-level waste at eight different decay times. Run time for a typical problem containing 20 chemical species, 24 actinides, and 8 decay times was 35 s on an IBM AT personal computer. Results of an example based on an expected canister composition at the Defense Waste Processing Facility are shown.
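    The structure of the calculation — summing, over the alpha-emitting actinides, the product of their activities and a glass-composition-dependent (α, n) yield — can be sketched as below. The inventory and yield values are placeholders invented for illustration; in ALPHN the yields follow from the glass composition and the alpha energies of each actinide.

        # Sketch of the (alpha,n) source-term sum: S = sum_i A_i [Bq] * Y_i [neutrons per alpha].
        CI_TO_BQ = 3.7e10

        def alpha_n_source(curies_by_actinide, yield_by_actinide):
            total = 0.0
            for nuclide, curies in curies_by_actinide.items():
                total += curies * CI_TO_BQ * yield_by_actinide[nuclide]   # neutrons per second
            return total

        inventory = {"Am-241": 50.0, "Cm-244": 20.0, "Pu-238": 10.0}      # Ci per canister (assumed)
        yields = {"Am-241": 1.0e-7, "Cm-244": 1.2e-7, "Pu-238": 1.1e-7}   # n per alpha (placeholders)
        print(f"(alpha,n) source: {alpha_n_source(inventory, yields):.3e} n/s")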

  19. ALPHN: A computer program for calculating (α, n) neutron production in canisters of high-level waste

    International Nuclear Information System (INIS)

    The rate of neutron production from (α, n) reactions in canisters of immobilized high-level waste containing borosilicate glass or glass-ceramic compositions is significant and must be considered when estimating neutron shielding requirements. The personal computer program ALPHN calculates the (α, n) neutron production rate of a canister of vitrified high-level waste. The user supplies the chemical composition of the glass or glass-ceramic and the curies of the alpha-emitting actinides present. The output of the program gives the (α, n) neutron production of each actinide in neutrons per second and the total for the canister. The (α, n) neutron production rates are source terms only; that is, they are production rates within the glass and do not take into account the shielding effect of the glass. For a given glass composition, the user can calculate up to eight cases simultaneously; these cases are based on the same glass composition but contain different quantities of actinides per canister. In a typical application, these cases might represent the same canister of vitrified high-level waste at eight different decay times. Run time for a typical problem containing 20 chemical species, 24 actinides, and 8 decay times was 35 s on an IBM AT personal computer. Results of an example based on an expected canister composition at the Defense Waste Processing Facility are shown

  20. Efficient computer program EPAS-J1 for calculating stress intensity factors of three-dimensional surface cracks

    International Nuclear Information System (INIS)

    A finite element computer program EPAS-J1 was developed to calculate the stress intensity factors of three-dimensional cracks. In the program, the stress intensity factor is determined by the virtual crack extension method together with distorted elements allocated along the crack front. The program also includes connection elements based on the Lagrange multiplier concept to connect different kinds of elements, such as solid and shell elements, or shell and beam elements. For a structure including three-dimensional surface cracks, the solid elements are employed only in the neighborhood of a surface crack, while the remainder of the structure is modeled with shell or beam elements, because the crack singularity is very local. Computer storage and computational time can be greatly reduced by applying this modeling technique to the calculation of the stress intensity factors of three-dimensional surface cracks, because the three-dimensional solid elements are required only around the crack front. Several numerical analyses were performed with the EPAS-J1 program. First, the accuracies of the connection element and the virtual crack extension method were confirmed using simple structures. Compared with other techniques for connecting different kinds of elements, such as the tying method or the method using anisotropic plate elements, the present connection element is found to provide better results than the others. It is also found that the virtual crack extension method provides an accurate stress intensity factor. Furthermore, results are also presented for the stress intensity factor analyses of cylinders with longitudinal or circumferential surface cracks using combinations of the various kinds of elements together with the connection elements. (author)

  1. Chemical solver to compute molecule and grain abundances and non-ideal MHD resistivities in prestellar core-collapse calculations

    Science.gov (United States)

    Marchand, P.; Masson, J.; Chabrier, G.; Hennebelle, P.; Commerçon, B.; Vaytet, N.

    2016-07-01

    We develop a detailed chemical network relevant for calculating the conditions characteristic of prestellar core collapse. We solve the system of time-dependent differential equations to calculate the equilibrium abundances of molecules and dust grains, with the grain size distribution represented by size bins. These abundances are used to compute the different non-ideal magneto-hydrodynamics resistivities (ambipolar, Ohmic and Hall) needed to carry out simulations of protostellar collapse. For the first time in this context, we take into account the evaporation of the grains, the thermal ionisation of potassium, sodium, and hydrogen at high temperature, and the thermionic emission of grains in the chemical network, and we explore the impact of various cosmic ray ionisation rates. All these processes significantly affect the non-ideal magneto-hydrodynamics resistivities, which will modify the dynamics of the collapse. Ambipolar diffusion and the Hall effect dominate at low densities, up to nH = 10^12 cm^-3, after which Ohmic diffusion takes over. We find that the time-scale needed to reach chemical equilibrium is always shorter than the typical dynamical (free fall) one. This allows us to build a large, multi-dimensional multi-species equilibrium abundance table over large ranges of temperature, density and ionisation rate. This table, which we make accessible to the community, is used during first and second prestellar core collapse calculations to compute the non-ideal magneto-hydrodynamics resistivities, yielding a consistent dynamical-chemical description of this process. The multi-dimensional multi-species equilibrium abundance table and a copy of the code are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/592/A18

  2. Computational Modeling and Theoretical Calculations on the Interactions between Spermidine and Functional Monomer (Methacrylic Acid in a Molecularly Imprinted Polymer

    Directory of Open Access Journals (Sweden)

    Yujie Huang

    2015-01-01

    Full Text Available This paper theoretically investigates interactions between a template and functional monomer required for synthesizing an efficient molecularly imprinted polymer (MIP). We employed density functional theory (DFT) to compute geometry, single-point energy, and binding energy (ΔE) of an MIP system, where spermidine (SPD) and methacrylic acid (MAA) were selected as template and functional monomer, respectively. The geometry was calculated by using the B3LYP method with the 6-31+G(d) basis set. Furthermore, the 6-311++G(d,p) basis set was used to compute the single-point energy of the above geometry. The optimized geometries at different template to functional monomer molar ratios, the mode of bonding between template and functional monomer, the changes in charge on natural bond orbital (NBO) analysis, and the binding energy were analyzed. The simulation results show that SPD and MAA form a stable complex via hydrogen bonding. At a 1 : 5 SPD to MAA ratio, the binding energy is minimum, while the amount of transferred charge between the molecules is maximum; SPD and MAA form a stable complex at the 1 : 5 molar ratio through six hydrogen bonds. Optimizing the structure of the template-functional monomer complex through computational modeling prior to synthesis significantly contributes towards choosing a suitable template-functional monomer pair that yields an efficient MIP with high specificity and selectivity.
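    The screening quantity in such imprinting studies is the complexation (binding) energy, usually evaluated as the energy of the complex minus the energies of the isolated template and monomers, optionally with a basis-set superposition correction. The sketch below only shows that bookkeeping; the single-point energies are placeholders, not values from the cited calculations.

        HARTREE_TO_KJ_MOL = 2625.5

        def binding_energy(e_complex, e_template, e_monomer, n_monomers, bsse_correction=0.0):
            # Delta E = E(complex) - E(template) - n * E(monomer) (+ optional BSSE term), in hartree
            return e_complex - e_template - n_monomers * e_monomer + bsse_correction

        # Placeholder energies in hartree; real values would come from the DFT single points.
        dE = binding_energy(e_complex=-2070.060000, e_template=-574.456789,
                            e_monomer=-299.100000, n_monomers=5)
        print(f"Binding energy: {dE * HARTREE_TO_KJ_MOL:.1f} kJ/mol")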

  3. Two computational approaches for Monte Carlo based shutdown dose rate calculation with applications to the JET fusion machine

    International Nuclear Information System (INIS)

    shortly after the deuterium-tritium experiment (DTE1) in 1997. Large computing power, both in terms of data handling and storage and of CPU computing time, is needed by the two methods, partly due to the complexity of the problem. With parallel versions of the MCNP code, running on two different platforms, a satisfactory calculation accuracy has been reached in reasonable times. (authors)

  4. Two computational approaches for Monte Carlo based shutdown dose rate calculation with applications to the JET fusion machine

    Energy Technology Data Exchange (ETDEWEB)

    Petrizzi, L.; Batistoni, P.; Migliori, S. [Associazione EURATOM ENEA sulla Fusione, Frascati (Roma) (Italy); Chen, Y.; Fischer, U.; Pereslavtsev, P. [Association FZK-EURATOM Forschungszentrum Karlsruhe (Germany); Loughlin, M. [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire, OX (United Kingdom); Secco, A. [Nice Srl Via Serra 33 Camerano Casasco AT (Italy)

    2003-07-01

    shortly after the deuterium-tritium experiment (DTE1) in 1997. Large computing power, both in terms of data handling and storage and of CPU computing time, is needed by the two methods, partly due to the complexity of the problem. With parallel versions of the MCNP code, running on two different platforms, a satisfactory calculation accuracy has been reached in reasonable times. (authors)

  5. Practically acquired and modified cone-beam computed tomography images for accurate dose calculation in head and neck cancer

    International Nuclear Information System (INIS)

    On-line cone-beam computed tomography (CBCT) may be used to reconstruct the dose for geometric changes of patients and tumors during the radiotherapy course. This study aims to establish a practical method to modify the CBCT for accurate dose calculation in head and neck cancer. Fan-beam CT (FBCT) and Elekta's CBCT were used to acquire images. The CT numbers for different materials on CBCT were mathematically modified to match them with FBCT. Three phantoms were scanned by FBCT and CBCT for image uniformity, spatial resolution, and CT numbers, and to compare the dose distribution from orthogonal beams. A Rando phantom was scanned and planned with intensity-modulated radiation therapy (IMRT). Finally, two nasopharyngeal cancer patients treated with IMRT had their CBCT image sets calculated for dose comparison. With 360° acquisition of CBCT and high-resolution reconstruction, the uniformity of CT number distribution was improved and the otherwise large variations for background and high-density materials were reduced significantly. The dose difference between FBCT and CBCT was < 2% in phantoms. In the Rando phantom and the patients, the dose-volume histograms were similar. The corresponding isodose curves covering ≥ 90% of the prescribed dose on FBCT and CBCT were close to each other (within 2 mm). Most dosimetric differences were from the setup errors related to the interval changes in body shape and tumor response. The specific CBCT acquisition, reconstruction, and CT number modification can generate accurate dose calculation for the potential use in adaptive radiotherapy.
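    The CT-number modification described above maps CBCT values onto the FBCT scale so that dose calculation sees the correct densities. One common way to implement such a mapping, shown here purely as a hedged sketch (the piecewise-linear form and the calibration pairs are assumptions, not the method of the cited work), is to interpolate between materials measured on both scanners.

        import numpy as np

        # Invented calibration pairs: the same inserts measured on CBCT and on FBCT (HU).
        cbct_values = np.array([-950.0, -80.0, 0.0, 250.0, 900.0])
        fbct_values = np.array([-1000.0, -100.0, 0.0, 200.0, 1000.0])

        def modify_cbct(cbct_image):
            # Map every CBCT voxel value onto the FBCT scale by piecewise-linear interpolation.
            return np.interp(cbct_image, cbct_values, fbct_values)

        corrected = modify_cbct(np.array([[-900.0, 50.0], [300.0, 800.0]]))
        print(corrected)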

  6. Theoretical calculations of X-ray absorption spectra of a copper mixed ligand complex using computer code FEFF9

    International Nuclear Information System (INIS)

    The terms X-ray absorption near edge structure (XANES) and extended X-ray absorption fine structure (EXAFS) refer, respectively, to the structure in the X-ray absorption spectrum at low and high energies relative to the absorption edge. Routine analysis of EXAFS experiments generally makes use of simplified models and several many-body parameters, e.g. mean free paths, many-body amplitude factors, and Debye-Waller factors, as incorporated in EXAFS analysis software packages like IFEFFIT which includes Artemis. Similar considerations apply to XANES, where the agreement between theory and experiment is often less satisfactory. The recently available computer code FEFF9 uses the real-space Green's function (RSGF) approach to calculate dielectric response over a broad spectrum including the dominant low-energy region. This code includes improved treatments of many-body effects such as inelastic losses, core-hole effects, vibrational amplitudes, and the extension to full spectrum calculations of optical constants including solid state effects. In the present work, using FEFF9, we have calculated the X-ray absorption spectrum at the K-edge of copper in a complex, viz., aqua (diethylenetriamine) (isonicotinato) copper(II), the crystal structure of which is unknown. The theoretical spectrum has been compared with the experimental spectrum, recorded by us at the XAFS beamline 11.1 at ELETTRA synchrotron source, Italy, in both XANES and EXAFS regions.

  7. A computer code for forward calculation and inversion of the H/V spectral ratio under the diffuse field assumption

    CERN Document Server

    García-Jerez, Antonio; Sánchez-Sesma, Francisco J; Luzón, Francisco; Perton, Mathieu

    2016-01-01

    During a quarter of a century, the main characteristics of the horizontal-to-vertical spectral ratio of ambient noise (HVSRN) have been extensively used for site effect assessment. In spite of the uncertainties about the optimum theoretical model to describe these observations, several schemes for inversion of the full HVSRN curve for near-surface surveying have been developed over the last decade. In this work, a computer code for forward calculation of H/V spectra based on the diffuse field assumption (DFA) is presented and tested. It takes advantage of the recently established connection between the HVSRN and the elastodynamic Green's function which arises from the ambient noise interferometry theory. The algorithm allows for (1) a natural calculation of the imaginary parts of the Green's functions by using suitable contour integrals in the complex wavenumber plane, and (2) separate calculation of the contributions of Rayleigh, Love, P-SV and SH waves as well. The stability of the algorithm at high frequencies is preserv...

  8. Lobi pre-prediction exercise test A1-04: results of calculations using Relap4/Mod.6 computer code

    International Nuclear Information System (INIS)

    The LOBI test facility is the only high-pressure, integral system test facility within the European Communities, built and operated at the Joint Research Centre, Ispra. Test A1-04 of the LOBI program (LOop Blowdown Investigations), simulating a double-ended cold-leg break of the primary cooling system of a four-loop PWR, was chosen for a blind Prediction Exercise (PREX) with large international participation; no experimental results from the LOBI facility were published before all calculated PREX results were supplied to Ispra. The 'Istituto di Impianti Nucleari' of Pisa University studied the thermohydraulic transient in test A1-04 with the RELAP4/mod.6 code, running on the IBM 370/158 computer of CNUCE (CNR-Pisa). This report, after a brief description of the adopted nodalization and input data, contains the comparison between the experimental results (in the form of graphs) and those of the pre-test and post-test calculations; the latter were obtained with only minor changes to the modelling (i.e. of SG behaviour) and on the basis of sensitivity studies on the multipliers of the critical flow models. The agreement between experimental and calculated results is generally good, although some discrepancies are noted and analyzed

  9. Practically acquired and modified cone-beam computed tomography images for accurate dose calculation in head and neck cancer

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Chih-Chung [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; Yuanpei Univ., Hsinchu (China). Dept. of Radiological Technology; Huang, Wen-Tao [Yuanpei Univ., Hsinchu (China). Dept. of Radiological Technology; Tsai, Chiao-Ling; Chao, Hsiao-Ling; Huang, Guo-Ming; Wang, Chun-Wei [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; Wu, Jian-Kuen [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; National Taiwan Normal Univ., Taipei (China). Inst. of Electro-Optical Science and Technology; Wu, Chien-Jang [National Taiwan Normal Univ., Taipei (China). Inst. of Electro-Optical Science and Technology; Cheng, Jason Chia-Hsien [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; National Taiwan Univ. Taipei (China). Graduate Inst. of Oncology; National Taiwan Univ. Taipei (China). Graduate Inst. of Clinical Medicine; National Taiwan Univ. Taipei (China). Graduate Inst. of Biomedical Electronics and Bioinformatics

    2011-10-15

    On-line cone-beam computed tomography (CBCT) may be used to reconstruct the dose for geometric changes of patients and tumors during the radiotherapy course. This study aims to establish a practical method to modify the CBCT for accurate dose calculation in head and neck cancer. Fan-beam CT (FBCT) and Elekta's CBCT were used to acquire images. The CT numbers for different materials on CBCT were mathematically modified to match them with FBCT. Three phantoms were scanned by FBCT and CBCT for image uniformity, spatial resolution, and CT numbers, and to compare the dose distribution from orthogonal beams. A Rando phantom was scanned and planned with intensity-modulated radiation therapy (IMRT). Finally, two nasopharyngeal cancer patients treated with IMRT had their CBCT image sets calculated for dose comparison. With 360° acquisition of CBCT and high-resolution reconstruction, the uniformity of CT number distribution was improved and the otherwise large variations for background and high-density materials were reduced significantly. The dose difference between FBCT and CBCT was < 2% in phantoms. In the Rando phantom and the patients, the dose-volume histograms were similar. The corresponding isodose curves covering ≥ 90% of the prescribed dose on FBCT and CBCT were close to each other (within 2 mm). Most dosimetric differences were from the setup errors related to the interval changes in body shape and tumor response. The specific CBCT acquisition, reconstruction, and CT number modification can generate accurate dose calculation for the potential use in adaptive radiotherapy.

  10. Calculated organ doses from selected prostate treatment plans using Monte Carlo simulations and an anatomically realistic computational phantom

    International Nuclear Information System (INIS)

    There is growing concern about radiation-induced second cancers associated with radiation treatments. Particular attention has been focused on the risk to patients treated with intensity-modulated radiation therapy (IMRT) due primarily to increased monitor units. To address this concern we have combined a detailed medical linear accelerator model of the Varian Clinac 2100 C with anatomically realistic computational phantoms to calculate organ doses from selected treatment plans. This paper describes the application to calculate organ-averaged equivalent doses using a computational phantom for three different treatments of prostate cancer: a 4-field box treatment, the same box treatment plus a 6-field 3D-CRT boost treatment and a 7-field IMRT treatment. The equivalent doses per MU to those organs that have shown a predilection for second cancers were compared between the different treatment techniques. In addition, the dependence of photon and neutron equivalent doses on gantry angle and energy was investigated. The results indicate that the box treatment plus 6-field boost delivered the highest intermediate- and low-level photon doses per treatment MU to the patient primarily due to the elevated patient scatter contribution as a result of an increase in integral dose delivered by this treatment. In most organs the contribution of neutron dose to the total equivalent dose for the 3D-CRT treatments was less than the contribution of photon dose, except for the lung, esophagus, thyroid and brain. The total equivalent dose per MU to each organ was calculated by summing the photon and neutron dose contributions. For all organs non-adjacent to the primary beam, the equivalent doses per MU from the IMRT treatment were less than the doses from the 3D-CRT treatments. This is due to the increase in the integral dose and the added neutron dose to these organs from the 18 MV treatments. However, depending on the application technique and optimization used, the required MU

  11. Calculated organ doses from selected prostate treatment plans using Monte Carlo simulations and an anatomically realistic computational phantom

    Science.gov (United States)

    Bednarz, Bryan; Hancox, Cindy; Xu, X. George

    2009-09-01

    There is growing concern about radiation-induced second cancers associated with radiation treatments. Particular attention has been focused on the risk to patients treated with intensity-modulated radiation therapy (IMRT) due primarily to increased monitor units. To address this concern we have combined a detailed medical linear accelerator model of the Varian Clinac 2100 C with anatomically realistic computational phantoms to calculate organ doses from selected treatment plans. This paper describes the application to calculate organ-averaged equivalent doses using a computational phantom for three different treatments of prostate cancer: a 4-field box treatment, the same box treatment plus a 6-field 3D-CRT boost treatment and a 7-field IMRT treatment. The equivalent doses per MU to those organs that have shown a predilection for second cancers were compared between the different treatment techniques. In addition, the dependence of photon and neutron equivalent doses on gantry angle and energy was investigated. The results indicate that the box treatment plus 6-field boost delivered the highest intermediate- and low-level photon doses per treatment MU to the patient primarily due to the elevated patient scatter contribution as a result of an increase in integral dose delivered by this treatment. In most organs the contribution of neutron dose to the total equivalent dose for the 3D-CRT treatments was less than the contribution of photon dose, except for the lung, esophagus, thyroid and brain. The total equivalent dose per MU to each organ was calculated by summing the photon and neutron dose contributions. For all organs non-adjacent to the primary beam, the equivalent doses per MU from the IMRT treatment were less than the doses from the 3D-CRT treatments. This is due to the increase in the integral dose and the added neutron dose to these organs from the 18 MV treatments. However, depending on the application technique and optimization used, the required MU

  12. Evaluation of open MPI and MPICH2 performances for the computation time in proton therapy dose calculations with Geant4

    Science.gov (United States)

    Kazemi, M.; Afarideh, H.; Riazi, Z.

    2015-11-01

    The aim of this research work is to use a better parallel software structure to improve the performance of the Monte Carlo Geant4 code in proton treatment planning. The hadron therapy simulation is rewritten to run in parallel on shared-memory multiprocessor systems by using the Message-Passing Interface (MPI). The speedup performance of the code has been studied by using two MPI-compliant libraries, Open MPI and MPICH2, separately. The speedup results are almost linear for both Open MPI and MPICH2; the latter was chosen because of its better characteristics and lower computation time. The Geant4 parameters, including the step limiter and the set cut, have been analyzed to minimize the simulation time as much as possible. For a reasonable compromise between the spatial dose distribution and the calculation time, the improvement in time reduction coefficient reaches about 157.
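    The essence of the parallelisation described above — distributing independent primary histories over MPI ranks and summing the partial dose tallies at the end — can be sketched with mpi4py as follows. This is a generic illustration of the MPI pattern, not the authors' Geant4 application; the history count, toy transport and tally shape are assumptions.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_histories = 1_000_000                        # total primaries (assumed)
        local_histories = n_histories // size + (1 if rank < n_histories % size else 0)

        rng = np.random.default_rng(seed=rank)         # independent random stream per rank
        local_dose = np.zeros(100)                     # 1D depth-dose tally (placeholder geometry)
        for _ in range(local_histories):
            depth_bin = min(int(rng.exponential(scale=20.0)), 99)   # toy transport, not Geant4
            local_dose[depth_bin] += 1.0

        total_dose = np.zeros_like(local_dose)
        comm.Reduce(local_dose, total_dose, op=MPI.SUM, root=0)      # sum partial tallies on rank 0
        if rank == 0:
            print("Summed tally, first bins:", total_dose[:5])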

  13. TRANCS, a computer code for calculating fission product release from high temperature gas-cooled reactor fuel, (2)

    International Nuclear Information System (INIS)

    This report describes the calculation procedure of the TRANCS code, which deals with fission product transport in the fuel rod of a high temperature gas-cooled reactor (HTGR). The fundamental equation modeled in the code is a cylindrical one-dimensional diffusion equation with generation and decay terms, and the non-stationary solution of the equation is obtained numerically by a finite difference method. The generation terms consist of the diffusional release from coated fuel particles, the recoil release from the outermost coating layer of the fuel particle and the generation due to contaminating uranium in the graphite matrix of the fuel compact. The decay term deals with neutron capture as well as beta decay. Factors affecting the computation error have been examined, and further extension of the code has been discussed for the radial transport of fission products from the graphite sleeve into the coolant helium gas and the axial transport in the fuel rod. (author)

  14. Automatic 2D scintillation camera and computed tomography whole-body image registration to perform dosimetric calculations

    International Nuclear Information System (INIS)

    Full text: In this work a software tool that has been developed to allow automatic registrations of 2D Scintillation Camera (SC) and Computed Tomography (CT) images is presented. This tool, used with a dosimetric software with Integrated Activity or Residence Time as input data, allows the user to advise physicians about the effects of radiodiagnostic or radiotherapeutic practices that involve nuclear medicine 'open sources'. Images are registered locally and globally, maximizing the Mutual Information coefficient between the regions being registered. In the regional case, whole-body images are segmented into five regions: head, thorax, pelvis, left and right legs. Each region has its own registration parameters, which are optimized through the Powell-Brent minimization method, which maximizes the Mutual Information coefficient. This software tool allows the user to draw ROIs, input isotope characteristics and finally calculate the Integrated Activity or Residence Time in one or more specific organs. These values can be introduced into various dosimetric software packages to finally obtain Absorbed Dose values. (author)
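    The registration metric named above, the Mutual Information between the two images, is computed from their joint intensity histogram; in the described tool it is maximized per body region by a Powell-Brent search over the transform parameters. The snippet below is a minimal sketch of the metric only (bin count and test images are assumptions).

        import numpy as np

        def mutual_information(img_a, img_b, bins=64):
            # Joint intensity histogram -> joint and marginal probabilities -> MI.
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        a = np.random.default_rng(0).random((128, 128))
        b = 0.7 * a + 0.3 * np.random.default_rng(1).random((128, 128))   # partially correlated image
        print(f"MI = {mutual_information(a, b):.3f}")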

  15. Automatic 2D scintillation camera and computed tomography whole-body image registration to perform dosimetry calculation

    International Nuclear Information System (INIS)

    In this paper we present a software tool that has been developed to allow automatic registrations of 2D Scintillation Camera (SC) and Computed Tomography (CT) images. This tool, used with a dosimetric software with Integrated Activity or Residence Time as input data, allows the user to advise physicians about the effects of radiodiagnostic or radiotherapeutic practices. Images are registered locally and globally, maximizing the Mutual Information coefficient between the regions being registered. In the regional case, whole-body images are segmented into five regions: head, thorax, pelvis, left and right legs. Each region has its own registration parameters, which are optimized through the Powell-Brent minimization method, which maximizes the Mutual Information coefficient. This software tool allows the user to draw ROIs, input isotope characteristics and finally calculate the Integrated Activity or Residence Time in one or more specific organs. These values can be introduced into various dosimetric software packages to finally obtain Absorbed Dose values

  16. Comparative calculations of the WWER fuel rod thermophysical characteristics employing the TOPRA-s and the TRANSURANUS computer codes

    International Nuclear Information System (INIS)

    A short description of the TOPRA-s computer code is presented. The code was developed to calculate the thermophysical cross-section characteristics of WWER fuel rods: fuel temperature distributions and fuel-to-cladding gap conductance. The TOPRA-s input does not require the fuel rod irradiation pre-history (time-dependent distributions of linear power, fast neutron flux and coolant temperature along the rod). The required input consists of the considered cross-section data (coolant temperature, burnup, linear power) and the overall fuel rod data (burnup and linear power). TOPRA-s is included in the KASKAD code package. Some results of the TOPRA-s code validation using the SOFIT-1 and IFA-503.1 experimental data are shown. A short description of the TRANSURANUS code for thermal and mechanical predictions of LWR fuel rod behavior at various irradiation conditions, and of its version for WWER reactors, is also presented. (Authors)

  17. RISKIND: A computer program for calculating radiological consequences and health risks from transportation of spent nuclear fuel

    International Nuclear Information System (INIS)

    This report presents the technical details of RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel. RISKIND is a user-friendly, interactive program that can be run on an IBM or equivalent personal computer under the Windows™ environment. Several models are included in RISKIND that have been tailored to calculate the exposure to individuals under various incident-free and accident conditions. The incident-free models assess exposures from both gamma and neutron radiation and can account for different cask designs. The accident models include accidental release, atmospheric transport, and the environmental pathways of radionuclides from spent fuels; these models also assess health risks to individuals and the collective population. The models are supported by databases that are specific to spent nuclear fuels and include a radionuclide inventory and dose conversion factors. In addition, the flexibility of the models allows them to be used for assessing any accidental release involving radioactive materials. The RISKIND code allows for user-specified accident scenarios as well as receptor locations under various exposure conditions, thereby facilitating the estimation of radiological consequences and health risks for individuals. Median (50% probability) and typical worst-case (less than 5% probability of being exceeded) doses and health consequences from potential accidental releases can be calculated by constructing a cumulative dose/probability distribution curve for a complete matrix of site joint-wind-frequency data. These consequence results, together with the estimated probability of the entire spectrum of potential accidents, form a comprehensive, probabilistic risk assessment of a spent nuclear fuel transportation accident

  18. RISKIND: A computer program for calculating radiological consequences and health risks from transportation of spent nuclear fuel

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Y.C. [Square Y Consultants, Orchard Park, NY (US); Chen, S.Y.; Biwer, B.M.; LePoire, D.J. [Argonne National Lab., IL (US)

    1995-11-01

    This report presents the technical details of RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel. RISKIND is a user-friendly, interactive program that can be run on an IBM or equivalent personal computer under the Windows™ environment. Several models are included in RISKIND that have been tailored to calculate the exposure to individuals under various incident-free and accident conditions. The incident-free models assess exposures from both gamma and neutron radiation and can account for different cask designs. The accident models include accidental release, atmospheric transport, and the environmental pathways of radionuclides from spent fuels; these models also assess health risks to individuals and the collective population. The models are supported by databases that are specific to spent nuclear fuels and include a radionuclide inventory and dose conversion factors. In addition, the flexibility of the models allows them to be used for assessing any accidental release involving radioactive materials. The RISKIND code allows for user-specified accident scenarios as well as receptor locations under various exposure conditions, thereby facilitating the estimation of radiological consequences and health risks for individuals. Median (50% probability) and typical worst-case (less than 5% probability of being exceeded) doses and health consequences from potential accidental releases can be calculated by constructing a cumulative dose/probability distribution curve for a complete matrix of site joint-wind-frequency data. These consequence results, together with the estimated probability of the entire spectrum of potential accidents, form a comprehensive, probabilistic risk assessment of a spent nuclear fuel transportation accident.

  19. 3-D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS)

    Energy Technology Data Exchange (ETDEWEB)

    Sofronov, I.D.; Voronin, B.L.; Butnev, O.I. [VNIIEF (Russian Federation)] [and others]

    1997-12-31

    The aim of the work performed is to develop a 3D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed-memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two basically different approaches to the structure of massively parallel computations have been developed. The first approach uses a 3D data matrix decomposition reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shared memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by the joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments have been carried out with different numbers of processors, up to 256, and the efficiency of parallelization has been evaluated as a function of the number of processors and their parameters.

  20. Performance of heterogeneous computing with graphics processing unit and many integrated core for hartree potential calculations on a numerical grid.

    Science.gov (United States)

    Choi, Sunghwan; Kwon, Oh-Kyoung; Kim, Jaewook; Kim, Woo Youn

    2016-09-15

    We investigated the performance of heterogeneous computing with graphics processing units (GPUs) and many integrated core (MIC) with 20 CPU cores (20×CPU). As a practical example toward large-scale electronic structure calculations using grid-based methods, we evaluated the Hartree potentials of silver nanoparticles with various sizes (3.1, 3.7, 4.9, 6.1, and 6.9 nm) via a direct integral method supported by the sinc basis set. The so-called work stealing scheduler was used for efficient heterogeneous computing via the balanced dynamic distribution of workloads between all processors on a given architecture without any prior information on their individual performances. 20×CPU + 1GPU was up to ∼1.5 and ∼3.1 times faster than 1GPU and 20×CPU, respectively. 20×CPU + 2GPU was ∼4.3 times faster than 20×CPU. The performance enhancement by CPU + MIC was considerably lower than expected because of the large initialization overhead of MIC, although its theoretical performance is similar to that of CPU + GPU. © 2016 Wiley Periodicals, Inc. PMID:27431905

  1. TRANCS, a computer code for calculating fission product release from high temperature gas-cooled reactor fuel, (1)

    International Nuclear Information System (INIS)

    The computer program, TRANCS, has been developed for evaluating the fractional release of long-lived fission products from coated fuel particles. This code numerically gives the non-stationary solution of the diffusion equation with birth and decay terms. The birth term deals with the fissile material in the fuel kernel, the contamination in the coating layers and the fission-recoil transfer from the kernel into the buffer layer; the decay term deals with effective decay due not only to beta decay but also to neutron capture, if appropriate input data are given. The code calculates the concentration profile, the release-to-birth ratios (R/B), and the release and residual fractions in the coated fuel particle. Results obtained numerically have been in good agreement with the corresponding analytical solutions based on the Booth model. Thus, the validity of the present code was confirmed, and further updates of the code have been discussed to extend its computational scope and models. (author)
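    The Booth equivalent-sphere model mentioned above has a well-known steady-state expression for the release-to-birth ratio of a radioactive species diffusing out of a sphere, which is convenient for checking a numerical solver of this kind. The sketch below uses that standard formula; the reduced diffusion coefficient, sphere radius and nuclide are illustrative values, not TRANCS input data.

        import numpy as np

        # Steady-state Booth formula: R/B = (3/x) * (coth(x) - 1/x), with x = a * sqrt(lambda / D).
        def booth_release_to_birth(D, a, lam):
            x = a * np.sqrt(lam / D)
            return (3.0 / x) * (1.0 / np.tanh(x) - 1.0 / x)

        D = 1.0e-17      # m^2/s, assumed reduced diffusion coefficient
        a = 2.5e-4       # m, equivalent-sphere radius (assumed)
        lam = np.log(2) / (30.1 * 365.25 * 24 * 3600)   # Cs-137 decay constant
        print(f"R/B ~ {booth_release_to_birth(D, a, lam):.3e}")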

  2. Recommendations for computer code selection of a flow and transport code to be used in undisturbed vadose zone calculations for TWRS immobilized wastes environmental analyses

    International Nuclear Information System (INIS)

    An analysis of three software proposals is performed to recommend a computer code for immobilized low activity waste flow and transport modeling. The document uses criteria established in HNF-1839, ''Computer Code Selection Criteria for Flow and Transport Codes to be Used in Undisturbed Vadose Zone Calculation for TWRS Environmental Analyses'', as the basis for this analysis

  3. A comparison of radiation dose measured in CT dosimetry phantoms with calculations using EGS4 and voxel-based computational models

    International Nuclear Information System (INIS)

    CT is a high-dose examination and possibly the dominant contributor to dose from diagnostic radiology. Estimates of organ doses are obtained from Monte Carlo calculations and used to quantify radiation risk. To ensure the validity of using Monte Carlo calculations to estimate actual dose, measurements must be compared with calculations. We have measured doses to CT head and chest dosimetry phantoms and compared them with Monte Carlo (EGS4) calculated doses in voxel-based computational models of the phantoms. The simulation used an x-ray spectrum calculated from the specified values of the scanner's x-ray tube parameters. The scanner's beam-shaping filter was included in the modelling. Measured and calculated doses to both the head and chest phantoms agreed to within 7%. The inclusion of Rayleigh scattering in the calculations has a significant effect if only one slice is scanned but not if multiple slices are scanned. (author)

  4. Spectroscopic (FT-IR, FT-Raman, UV and NMR) investigation on 1-phenyl-2-nitropropene by quantum computational calculations.

    Science.gov (United States)

    Xavier, S; Periandy, S

    2015-10-01

    In this paper, the spectral analysis of 1-phenyl-2-nitropropene is carried out using the FTIR, FT Raman, FT NMR and UV-Vis spectra of the compound with the help of quantum mechanical computations using ab-initio and density functional theories. The FT-IR (4000-400 cm(-1)) and FT-Raman (4000-100 cm(-1)) spectra were recorded in solid phase, the (1)H and (13)C NMR spectra were recorded in CDCl3 solution phase and the UV-Vis (200-800 nm) spectrum was recorded in ethanol solution phase. The different conformers of the compound and their minimum energies are studied using the B3LYP functional with the 6-311+G(d,p) basis set, and the two stable conformers with the lowest energy were identified and used for further computations. The computed wavenumbers from the different methods are scaled so as to agree with the experimental values, and the scaling factors are reported. All the modes of vibration are assigned and the structure of the molecule is analyzed in terms of parameters like bond length, bond angle and dihedral angle predicted by both B3LYP and B3PW91 methods with 6-311+G(d,p) and 6-311++G(d,p) basis sets. The values of dipole moment (μ), polarizability (α) and hyperpolarizability (β) of the molecule are reported, using which the non-linear property of the molecule is discussed. The HOMO-LUMO mappings are reported, which reveal the different charge transfer possibilities within the molecule. The isotropic chemical shifts predicted for (1)H and (13)C atoms using gauge invariant atomic orbital (GIAO) theory show good agreement with experimental shifts. NBO analysis is carried out to picture the charge transfer between the localized bonds and lone pairs. The local reactivity of the molecule has been studied using the Fukui function. The thermodynamic properties (heat capacity, entropy and enthalpy) at different temperatures are also calculated. PMID:25965169

  5. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    Directory of Open Access Journals (Sweden)

    Shahamatnia Ehsan

    2016-01-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.

  6. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    Science.gov (United States)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.
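
    A minimal sketch of the final step described above: turning the longitude drift of tracked features into a differential rotation profile. The two-term rotation law omega(phi) = A + B*sin^2(phi) and all numerical values are illustrative assumptions, not results from the paper:

```python
import numpy as np

# For each tracked feature: heliographic latitude (deg) and the measured
# angular velocity (deg/day) from its longitude drift between frames.
# Differential rotation is commonly modelled as omega(phi) = A + B*sin^2(phi);
# the coefficients are obtained here by linear least squares.
def fit_differential_rotation(lat_deg, omega_deg_per_day):
    s2 = np.sin(np.radians(lat_deg)) ** 2
    X = np.column_stack([np.ones_like(s2), s2])
    coeffs, *_ = np.linalg.lstsq(X, omega_deg_per_day, rcond=None)
    return coeffs  # A, B

# Illustrative tracked-feature data (not real measurements)
lat = np.array([5.0, 15.0, 25.0, 40.0, 55.0])
omega = np.array([14.4, 14.2, 13.8, 13.1, 12.3])
A, B = fit_differential_rotation(lat, omega)
print(f"omega(phi) ~ {A:.2f} + ({B:.2f})*sin^2(phi)  [deg/day]")
```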

  7. A chemical solver to compute molecule and grain abundances and non-ideal MHD resistivities in prestellar core collapse calculations

    CERN Document Server

    Marchand, Pierre; Chabrier, Gilles; Hennebelle, Patrick; Commerçon, Benoit; Vaytet, Neil

    2016-01-01

    We develop a detailed chemical network relevant to the conditions characteristic of prestellar core collapse. We solve the system of time-dependent differential equations to calculate the equilibrium abundances of molecules and dust grains, with the grain size distribution described by size bins. These abundances are used to compute the different non-ideal magneto-hydrodynamics resistivities (ambipolar, Ohmic and Hall), needed to carry out simulations of protostellar collapse. For the first time in this context, we take into account the evaporation of the grains, the thermal ionisation of potassium, sodium and hydrogen at high temperature, and the thermionic emission of grains in the chemical network, and we explore the impact of various cosmic ray ionisation rates. All these processes significantly affect the non-ideal magneto-hydrodynamics resistivities, which will modify the dynamics of the collapse. Ambipolar diffusion and Hall effect dominate at low densities, up to n_H = 10^12 cm^-3, after which Oh...

  8. A computer code TERFOC-N to calculate doses to the public due to atmospheric releases of radionuclides in normal operations of nuclear facilities

    International Nuclear Information System (INIS)

    A computer code TERFOC-N has been developed to calculate doses to the public due to atmospheric releases of radionuclides in normal operations of nuclear facilities. This code calculates the highest individual dose and the collective dose from four exposure pathways: internal doses from ingestion and inhalation, and external doses from cloudshine and groundshine. The foodchain models, which originally follow U.S. Nuclear Regulatory Guide 1.109, have been improved to apply not only to LWRs but also to other nuclear facilities. This report describes the employed models and the computer code, and gives a sample run performed with this code. (author)

  9. Študija delovanja programske opreme za izračun porabe energije v stavbah: Study of computer software performance for calculation of energy use in buildings:

    OpenAIRE

    Košir, Mitja; Krainer, Aleš; Kristl, Živa; Šestan, Primož

    2013-01-01

    In the following study we compared the results of four computer tools for calculation of energy use in buildings, taking into account the applicable Slovenian legislation and accompanying standards. The compared tools are TOST, URSA 4, Energy 2010 and ArchiMAID. We primarily intended to carry out verification of the newly developed tool TOST. The chosen example was a family house. However, already at an early stage we discovered that the calculated values between programs differ significantly, i...

  10. Integrated design of Nb-based superalloys: Ab initio calculations, computational thermodynamics and kinetics, and experimental results

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, G. [Department of Materials Science and Engineering, Robert R. McCormick School of Engineering and Applied Science, Northwestern University, 2220 Campus Drive, Evanston, IL 60208-3108 (United States)]. E-mail: g-ghosh@northwestern.edu; Olson, G.B. [Department of Materials Science and Engineering, Robert R. McCormick School of Engineering and Applied Science, Northwestern University, 2220 Campus Drive, Evanston, IL 60208-3108 (United States)

    2007-06-15

    An optimal integration of modern computational tools and efficient experimentation is presented for the accelerated design of Nb-based superalloys. Integrated within a systems engineering framework, we have used ab initio methods along with alloy theory tools to predict phase stability of solid solutions and intermetallics to accelerate assessment of thermodynamic and kinetic databases enabling comprehensive predictive design of multicomponent multiphase microstructures as dynamic systems. Such an approach is also applicable for the accelerated design and development of other high performance materials. Based on established principles underlying Ni-based superalloys, the central microstructural concept is a precipitation strengthened system in which coherent cubic aluminide phase(s) provide both creep strengthening and a source of Al for Al{sub 2}O{sub 3} passivation enabled by a Nb-based alloy matrix with required ductile-to-brittle transition temperature, atomic transport kinetics and oxygen solubility behaviors. Ultrasoft and PAW pseudopotentials, as implemented in VASP, are used to calculate total energy, density of states and bonding charge densities of aluminides with B2 and L2{sub 1} structures relevant to this research. Characterization of prototype alloys by transmission and analytical electron microscopy demonstrates the precipitation of B2 or L2{sub 1} aluminide in a (Nb) matrix. Employing Thermo-Calc and DICTRA software systems, thermodynamic and kinetic databases are developed for substitutional alloying elements and interstitial oxygen to enhance the diffusivity ratio of Al to O for promotion of Al{sub 2}O{sub 3} passivation. However, the oxidation study of a Nb-Hf-Al alloy, with enhanced solubility of Al in (Nb) than in binary Nb-Al alloys, at 1300 deg. C shows the presence of a mixed oxide layer of NbAlO{sub 4} and HfO{sub 2} exhibiting parabolic growth.

  11. Integrated design of Nb-based superalloys: Ab initio calculations, computational thermodynamics and kinetics, and experimental results

    International Nuclear Information System (INIS)

    An optimal integration of modern computational tools and efficient experimentation is presented for the accelerated design of Nb-based superalloys. Integrated within a systems engineering framework, we have used ab initio methods along with alloy theory tools to predict phase stability of solid solutions and intermetallics to accelerate assessment of thermodynamic and kinetic databases enabling comprehensive predictive design of multicomponent multiphase microstructures as dynamic systems. Such an approach is also applicable for the accelerated design and development of other high performance materials. Based on established principles underlying Ni-based superalloys, the central microstructural concept is a precipitation strengthened system in which coherent cubic aluminide phase(s) provide both creep strengthening and a source of Al for Al2O3 passivation enabled by a Nb-based alloy matrix with required ductile-to-brittle transition temperature, atomic transport kinetics and oxygen solubility behaviors. Ultrasoft and PAW pseudopotentials, as implemented in VASP, are used to calculate total energy, density of states and bonding charge densities of aluminides with B2 and L21 structures relevant to this research. Characterization of prototype alloys by transmission and analytical electron microscopy demonstrates the precipitation of B2 or L21 aluminide in a (Nb) matrix. Employing Thermo-Calc and DICTRA software systems, thermodynamic and kinetic databases are developed for substitutional alloying elements and interstitial oxygen to enhance the diffusivity ratio of Al to O for promotion of Al2O3 passivation. However, the oxidation study of a Nb-Hf-Al alloy, with enhanced solubility of Al in (Nb) than in binary Nb-Al alloys, at 1300 deg. C shows the presence of a mixed oxide layer of NbAlO4 and HfO2 exhibiting parabolic growth

  12. Efficient Energy and Electrostatic Properties Calculations at the MP2 Theory Level: A Case Study of Density Matrix-Based Computational Quantum Chemistry

    OpenAIRE

    Grzegorz Mazur; Marcin Makowski; Jakub Sumera; Krzysztof Kowalczyk

    2012-01-01

    A wavefunction-less, density matrix-based approach to computational quantum chemistry is briefly discussed. The implementation of second-order Møller-Plesset perturbation method energy and dipole moment calculations within the new paradigm is presented. The efficiency and reliability of the method are analyzed.

  13. TIMED: a computer program for calculating cumulated activity of a radionuclide in the organs of the human body at a given time, t, after deposition

    International Nuclear Information System (INIS)

    TIMED is a computer program designed to calculate cumulated radioactivity in the various source organs at various times after radionuclide deposition. TIMED embodies a system of differential equations which describes activity transfer in the lungs, gastrointestinal tract, and other organs of the body. This system accounts for delay of activity transfer between compartments of the body and for radioactive daughters.
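
    A minimal sketch of the kind of transfer model the abstract describes, reduced to two compartments with radioactive decay; the rate constants and the use of scipy's solve_ivp are illustrative assumptions, not the TIMED formulation itself:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-compartment transfer model: activity deposited in compartment 1 (e.g. lung)
# transfers to compartment 2 (e.g. GI tract) with rate k12; both compartments
# lose activity by physical decay (lam) and compartment 2 also by excretion (k_out).
lam, k12, k_out = 0.1, 0.3, 0.05          # per day, illustrative values

def dA_dt(t, A):
    A1, A2 = A
    return [-(lam + k12) * A1,
            k12 * A1 - (lam + k_out) * A2]

sol = solve_ivp(dA_dt, (0.0, 50.0), [1.0, 0.0], t_eval=np.linspace(0, 50, 501))

# Cumulated activity in each source organ = integral of A(t) dt up to time t
cumulated = np.trapz(sol.y, sol.t, axis=1)
print("cumulated activity (compartment 1, compartment 2):", cumulated)
```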

  14. RCMAT: A Computer Program to Calculate a Measure of Associative Verbal Relatedness. Interim Report. Occasional Paper No. 6.

    Science.gov (United States)

    Mead, Michael A.

    The report describes the characteristics and usage of a computer program, the Relatedness Coefficient Matrix Program (RCMAT), designed to summarize associative responses given to verbal stimuli by individual respondents and by groups of respondents. The computer program uses the response distributions for individuals, and the pooled response…

  15. Development of a computer code for calculating the steady super/hypersonic inviscid flow around real configurations. Volume 2: Code description

    Science.gov (United States)

    Marconi, F.; Yaeger, L.

    1976-01-01

    A numerical procedure was developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second-order accurate finite difference scheme is used to integrate the three-dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine-Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.

  16. Calculation of Metzner Constant for Double Helical Ribbon Impeller by Computational Fluid Dynamic Method%双螺带桨Metzner常数的模拟计算

    Institute of Scientific and Technical Information of China (English)

    张敏革; 张吕鸿; 姜斌; 尹玉国; 李鑫钢

    2008-01-01

    Using the multiple reference frames (MRF) impeller method, the three-dimensional non-Newtonian flow field generated by a double helical ribbon (DHR) impeller has been simulated. The velocity field calculated by the numerical simulation was similar to that of previous studies and the power constant agreed well with the experimental data. Three computational fluid dynamic (CFD) methods, labeled I, II and III, were used to compute the Metzner constant ks. The results showed that the calculated value from the slope method (method I) was consistent with the experimental data. Method II, which took the maximal circumference-average shear rate around the impeller as the effective shear rate to compute ks, also showed good agreement with the experiment. However, both methods suffer from the complexity of their calculation procedures. A new method (method III) was devised in this paper to use the area-weighted average viscosity around the impeller as the effective viscosity for calculating ks. Method III showed both good accuracy and ease of use.
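
    A minimal sketch of the idea behind method III as summarized above: backing out the Metzner constant from an area-weighted average viscosity, assuming a power-law fluid and the Metzner-Otto relation gamma_eff = ks*N; the numerical values are illustrative, not results from the paper:

```python
# For a power-law fluid mu = K * gamma**(n-1), an effective viscosity mu_eff
# taken around the impeller (e.g. area-weighted from a CFD solution) implies an
# effective shear rate, and the Metzner-Otto relation gamma_eff = ks * N then
# gives the Metzner constant:  ks = (mu_eff / K)**(1/(n-1)) / N
def metzner_constant(mu_eff, K, n, N):
    gamma_eff = (mu_eff / K) ** (1.0 / (n - 1.0))
    return gamma_eff / N

# Illustrative numbers (not from the paper): K = 10 Pa.s^n, n = 0.3,
# impeller speed N = 1.0 rev/s, area-weighted apparent viscosity 2.5 Pa.s
print(f"ks ~ {metzner_constant(2.5, 10.0, 0.3, 1.0):.1f}")
```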

  17. FORIG: a computer code for calculating radionuclide generation and depletion in fusion and fission reactors. User's manual

    International Nuclear Information System (INIS)

    In this manual we describe the use of the FORIG computer code to solve isotope-generation and depletion problems in fusion and fission reactors. FORIG runs on a Cray-1 computer and accepts more extensive activation cross sections than ORIGEN2 from which it was adapted. This report is an updated and a combined version of the previous ORIGEN2 and FORIG manuals. 7 refs., 15 figs., 13 tabs

  18. Pretest and posttest calculations of Semiscale Test S-07-10D with the TRAC computer program

    International Nuclear Information System (INIS)

    The Transient Reactor Analysis Code (TRAC) developed at the Los Alamos National Laboratory was used to predict the behavior of the small-break experiment designated Semiscale S-07-10D. This test simulates a 10 per cent communicative cold-leg break with delayed Emergency Core Coolant injection and blowdown of the broken-loop steam generator secondary. Both pretest calculations that incorporated measured initial conditions and posttest calculations that incorporated measured initial conditions and measured transient boundary conditions were completed. The posttest calculated parameters were generally between those obtained from pretest calculations and those from the test data. The results are strongly dependent on depressurization rate and, hence, on break flow

  19. Calculation of electromagnetic fields in electric machines by means of the finite element. Computational aspects; Calculo de campos electromagneticos en maquinas electricas mediante elemento finito. Aspectos computacionales

    Energy Technology Data Exchange (ETDEWEB)

    Rosales, Mario; De la Torre, Octavio [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1989-12-31

    This article describes the computational characteristics of the package CALIIE 2D of the Instituto de Investigaciones Electricas (IIE) for the calculation of bi-dimensional electromagnetic fields. The computational implementation of the package is based on the electromagnetic and numerical formulations previously published in this series. [Espanol] En este articulo se describen las caracteristicas computacionales del paquete CALIIE 2D del Instituto de Investigaciones Electricas (IIE), para calcular campos electromagneticos bidimensionales. La implantacion computacional del paquete se basa en los planteamientos electromagneticos y numericos antes publicados en esta serie.

  20. TRANGE: computer code to calculate the energy beam degradation in target stack; TRANGE: programa para calcular a degradacao de energia de particulas carregadas em alvos

    Energy Technology Data Exchange (ETDEWEB)

    Bellido, Luis F.

    1995-07-01

    A computer code to calculate the projectile energy degradation along a target stack was developed for an IBM or compatible personal microcomputer. A comparison of protons and deuterons bombarding uranium and aluminium targets was made. The results showed that the data obtained with TRANGE were in agreement with other computer codes such as TRIM and EDP, and also with the Williamson and Janni range and stopping power tables. TRANGE can be used for any charged ion, for energies between 1 and 100 MeV, in metal foils and solid compound targets. (author). 8 refs., 2 tabs.
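
    A minimal sketch of stepping a projectile's energy through a target stack; the stopping-power function is a hypothetical placeholder (a real code would interpolate tabulated stopping powers), and the stack composition and energies are illustrative:

```python
# Step each foil in small thickness increments and subtract stopping-power losses.
def stopping_power(energy_mev, material):
    # Placeholder with an illustrative 1/E-like dependence; NOT real data.
    scale = {"Al": 40.0, "U": 90.0}[material]
    return scale / max(energy_mev, 0.1)      # MeV per (g/cm^2)

def energy_after_stack(e0_mev, stack, steps=1000):
    """stack: list of (material, areal_density_g_per_cm2)."""
    e = e0_mev
    for material, thickness in stack:
        dx = thickness / steps
        for _ in range(steps):
            e -= stopping_power(e, material) * dx
            if e <= 0.0:
                return 0.0   # particle stopped inside the stack
    return e

print(f"exit energy ~ {energy_after_stack(30.0, [('Al', 0.05), ('U', 0.10)]):.2f} MeV")
```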

  1. A user's guide to LUGSAN 1.1: A computer program to calculate and archive lug and sway brace loads for aircraft-carried stores

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, W.N. [Sandia National Labs., Albuquerque, NM (United States). Experimental Structural Dynamics Dept.

    1994-07-01

    LUGSAN (LUG and Sway brace ANalysis) is an analysis and database computer program designed to calculate store lug and sway brace loads from aircraft captive carriage. LUGSAN combines the rigid body dynamics code, SWAY85, and the maneuver calculation code, MILGEN, with an INGRES database to function both as an analysis and archival system. This report describes the operation of the LUGSAN application program, including function description, layout examples, and sample sessions. This report is intended to be a user's manual for version 1.1 of LUGSAN operating on the VAX/VMS system. The report is not intended to be a programmer or developer's manual.

  2. ACDOS1: a computer code to calculate dose rates from neutron activation of neutral beamlines and other fusion-reactor components

    International Nuclear Information System (INIS)

    A computer code has been written to calculate neutron induced activation of neutral-beam injector components and the corresponding dose rates as a function of geometry, component composition, and time after shutdown. The code, ACDOS1, was written in FORTRAN IV to calculate both activity and dose rates for up to 30 target nuclides and 50 neutron groups. Sufficient versatility has also been incorporated into the code to make it applicable to a variety of general activation problems due to neutrons of energy less than 20 MeV
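
    A minimal sketch of the single-nuclide activation-and-decay relation underlying such calculations; the cross section, flux, half-life, and irradiation/cooling times below are illustrative values, and the full code sums contributions over many target nuclides and neutron groups:

```python
import math

# Induced activity after irradiation and a cooling period:
#   A(t_cool) = N * sigma * phi * (1 - exp(-lambda*t_irr)) * exp(-lambda*t_cool)
def activity_bq(n_atoms, sigma_cm2, flux, half_life_s, t_irr_s, t_cool_s):
    lam = math.log(2.0) / half_life_s
    saturation = 1.0 - math.exp(-lam * t_irr_s)
    return n_atoms * sigma_cm2 * flux * saturation * math.exp(-lam * t_cool_s)

# Illustrative inputs, not from the ACDOS1 documentation
A = activity_bq(n_atoms=1e23, sigma_cm2=1e-24, flux=1e13,
                half_life_s=15.0 * 3600, t_irr_s=24 * 3600, t_cool_s=3600)
print(f"induced activity ~ {A:.3e} Bq")
```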

  3. Pre-test calculation of reflooding experiments with wider lattice in APWR-geometry (FLORESTAN 2) using the advanced computer code FLUT-FDWR

    International Nuclear Information System (INIS)

    After the reflooding tests in an extremely tight bundle (p/d=1.06, FLORESTAN 1) have been completed, new experiments for a wider lattice (p/d=1.242, FLORESTAN 2), which is employed in the recent APWR design of KfK, are planned at KfK to obtain the benchmark data for validation and improvement of calculation methods. This report presents the results of pre-test calculations for the FLORESTAN 2 experiment using FLUT-FDWR, a modified version of the GRS computer code FLUT for analysis of the most important behaviour during the reflooding phase after a LOCA in the APWR design. (orig.)

  4. Ab initio quasi-particle approximation bandgaps of silicon nanowires calculated at density functional theory/local density approximation computational effort

    Energy Technology Data Exchange (ETDEWEB)

    Ribeiro, M., E-mail: ribeiro.jr@oorbit.com.br [Office of Operational Research for Business Intelligence and Technology, Principal Office, Buffalo, Wyoming 82834 (United States)

    2015-06-21

    Ab initio calculations of hydrogen-passivated Si nanowires were performed using density functional theory within LDA-1/2, to account for the excited states properties. A range of diameters was calculated to draw conclusions about the ability of the method to correctly describe the main trends of bandgap, quantum confinement, and self-energy corrections versus the diameter of the nanowire. Bandgaps are predicted with excellent accuracy if compared with other theoretical results like GW, and with the experiment as well, but with a low computational cost.

  5. Ab initio quasi-particle approximation bandgaps of silicon nanowires calculated at density functional theory/local density approximation computational effort

    International Nuclear Information System (INIS)

    Ab initio calculations of hydrogen-passivated Si nanowires were performed using density functional theory within LDA-1/2, to account for the excited states properties. A range of diameters was calculated to draw conclusions about the ability of the method to correctly describe the main trends of bandgap, quantum confinement, and self-energy corrections versus the diameter of the nanowire. Bandgaps are predicted with excellent accuracy if compared with other theoretical results like GW, and with the experiment as well, but with a low computational cost

  6. Quantification of the computational accuracy of code systems on the burn-up credit using experimental re-calculations

    International Nuclear Information System (INIS)

    In order to account for the reactivity-reducing effect of burn-up in the criticality safety analysis for systems with irradiated nuclear fuel (''burnup credit''), numerical methods to determine the enrichment- and burnup-dependent nuclide inventory (''burnup code'') and the resulting multiplication factor keff (''criticality code'') are applied. To allow for reliable conclusions, for both calculation systems the systematic deviations of the calculation results from the respective true values, the bias and its uncertainty, are quantified by calculation and analysis of a sufficient number of suitable experiments. This quantification is specific to the application case under scope and is also called validation. GRS has developed a methodology to validate a calculation system for the application of burnup credit in the criticality safety analysis for irradiated fuel assemblies from pressurized water reactors. This methodology was demonstrated by applying the GRS in-house KENOREST burnup code and the criticality calculation sequence CSAS5 from the SCALE code package. It comprises a bounding approach and, alternatively, a stochastic one, which have been exemplarily demonstrated using a generic spent fuel pool rack and a generic dry storage cask, respectively. Based on publicly available post-irradiation examination and criticality experiments, currently only the isotopes of the uranium and plutonium elements can be taken into account.
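
    A minimal sketch of the bias quantification step described above, assuming the bias is taken as the mean deviation of calculated from experimental keff over a set of benchmarks and its uncertainty as the sample standard deviation of those deviations; the keff values are illustrative, not real benchmark results:

```python
import statistics

# Calculated and experimental keff values for a set of benchmark experiments
# (illustrative numbers only).
k_calc = [0.9968, 0.9981, 0.9952, 0.9975, 0.9960]
k_exp  = [1.0000, 1.0002, 0.9998, 1.0001, 0.9999]

deviations = [c - e for c, e in zip(k_calc, k_exp)]
bias = statistics.mean(deviations)
uncertainty = statistics.stdev(deviations)

# A bounding upper subcritical limit would then subtract |bias| and a tolerance
# multiple of the uncertainty from the administrative keff limit.
print(f"bias = {bias:.4f} +/- {uncertainty:.4f}")
```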

  7. POTAMOS mass spectrometry calculator: computer aided mass spectrometry to the post-translational modifications of proteins. A focus on histones.

    Science.gov (United States)

    Vlachopanos, A; Soupsana, E; Politou, A S; Papamokos, G V

    2014-12-01

    Mass spectrometry is a widely used technique for protein identification and it has also become the method of choice in order to detect and characterize the post-translational modifications (PTMs) of proteins. Many software tools have been developed to deal with this complication. In this paper we introduce a new, free and user friendly online software tool, named POTAMOS Mass Spectrometry Calculator, which was developed in the open source application framework Ruby on Rails. It can provide calculated mass spectrometry data in a time saving manner, independently of instrumentation. In this web application we have focused on a well known protein family of histones whose PTMs are believed to play a crucial role in gene regulation, as suggested by the so called "histone code" hypothesis. The PTMs implemented in this software are: methylations of arginines and lysines, acetylations of lysines and phosphorylations of serines and threonines. The application is able to calculate the kind, the number and the combinations of the possible PTMs corresponding to a given peptide sequence and a given mass along with the full set of the unique primary structures produced by the possible distributions along the amino acid sequence. It can also calculate the masses and charges of a fragmented histone variant, which carries predefined modifications already implemented. Additional functionality is provided by the calculation of the masses of fragments produced upon protein cleavage by the proteolytic enzymes that are most widely used in proteomics studies. PMID:25450216

  8. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ - supplementary report

    International Nuclear Information System (INIS)

    The report describes a revision of the SFACTOR computer code, which has been developed to estimate the average dose equivalent to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with nuclear decay information. The SFACTOR code computes components of dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, gamma radiation, and spontaneous fission. The principal refinement to the program is the addition of a method for calculating components of the dose equivalent rate from alpha particles to endosteal cells and red bone marrow from a source in mineral bone. Other details of the calculations remain unchanged. Corrected tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 19 radionuclides in an adult
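
    A minimal sketch of an S-factor of the MIRD type that such a code tabulates: the dose equivalent to the target organ per decay in the source organ, summed over radiation components; the energies, absorbed fractions, quality factors and organ mass below are illustrative values only:

```python
# S = sum_i ( Delta_i * AF_i(target<-source) * Q_i ) / m_target
def s_factor(components, target_mass_kg):
    """components: list of (mean_energy_MeV_per_decay, absorbed_fraction, quality_factor)."""
    MEV_TO_J = 1.602e-13
    return sum(e * af * q for e, af, q in components) * MEV_TO_J / target_mass_kg  # Sv per decay

# Illustrative radiation components for one source-target organ pair
components = [(0.50, 0.30, 1.0),    # beta/electron component
              (0.80, 0.05, 1.0),    # gamma component
              (5.30, 0.01, 20.0)]   # alpha component
print(f"S ~ {s_factor(components, target_mass_kg=0.31):.3e} Sv per decay")
```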

  9. On the kinetics of the aluminum-water reaction during exposure in high-heat flux test loops: 1, A computer program for oxidation calculations

    International Nuclear Information System (INIS)

    The ''Griess Correlation,'' in which the thickness of the corrosion product on aluminum alloy surfaces is expressed as a function of time and temperature for high-flux-reactor conditions, was rewritten in the form of a simple, general rate equation. Based on this equation, a computer program that calculates oxide-layer thickness for any given time-temperature transient was written. 4 refs
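
    A minimal sketch of integrating a simple oxide-growth rate equation over an arbitrary time-temperature transient, which is the computational pattern the report describes; the rate-law form dX/dt = A*exp(-B/T)*X**(-m) and its constants are hypothetical placeholders, not the Griess correlation itself:

```python
import math

A, B, m = 5.0e3, 8.0e3, 1.0     # hypothetical constants (um/h, K, dimensionless)

def oxide_thickness(transient, x0=0.1, dt_h=0.01):
    """transient: list of (duration_h, temperature_K); returns thickness in um."""
    x = x0
    for duration, temp in transient:
        rate_coeff = A * math.exp(-B / temp)     # Arrhenius-type temperature dependence
        for _ in range(int(duration / dt_h)):
            x += rate_coeff * x ** (-m) * dt_h   # explicit time stepping of the rate law
    return x

# Illustrative two-step transient: 100 h at 420 K, then 50 h at 450 K
print(f"oxide thickness ~ {oxide_thickness([(100.0, 420.0), (50.0, 450.0)]):.3f} um")
```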

  10. ERWIN2: User's manual for a computer model to calculate the economic efficiency of wind energy systems

    International Nuclear Information System (INIS)

    During the last few years the Business Unit ESC-Energy Studies of the Netherlands Energy Research Foundation (ECN) developed calculation programs to determine the economic efficiency of energy technologies, which support several studies for the Dutch Ministry of Economic Affairs. Together these programs form the so-called BRET programs. One of these programs is ERWIN (Economische Rentabiliteit WINdenergiesystemen, or in English: Economic Efficiency of Wind Energy Systems), of which an updated manual (ERWIN2) is presented in this report. An outline is given of the possibilities and limitations of carrying out calculations with the model.

  11. Development of a computer program for calculation of flow velocity in the borehole and hydraulic conductivity of the formation

    International Nuclear Information System (INIS)

    During the investigation of the Mors salt dome, a site considered as a possible repository for Danish high-level radioactive wastes, a new method for testing low-permeability formations - the Labelled Slug Test - was developed. The large amount of data obtained during this test makes manual evaluation both difficult and time consuming. The principles of a computerized procedure for the evaluation of the results are given and problems arising during the calculation are discussed. Spinner flowmeter data are normally used to give a qualitative estimate of the permeability distribution. Formulas and procedures are proposed which make direct calculation of permeability from spinner readings possible.

  12. Assessment of effectiveness of geologic isolation systems. ARRRG and FOOD: computer programs for calculating radiation dose to man from radionuclides in the environment

    International Nuclear Information System (INIS)

    The computer programs ARRRG and FOOD were written to facilitate the calculation of internal radiation doses to man from the radionuclides in the environment and external radiation doses from radionuclides in the environment. Using ARRRG, radiation doses to man may be calculated for radionuclides released to bodies of water from which people might obtain fish, other aquatic foods, or drinking water, and in which they might fish, swim or boat. With the FOOD program, radiation doses to man may be calculated from deposition on farm or garden soil and crops during either an atmospheric or water release of radionuclides. Deposition may be either directly from the air or from irrigation water. Fifteen crop or animal product pathways may be chosen. ARRRG and FOOD doses may be calculated for either a maximum-exposed individual or for a population group. Doses calculated are a one-year dose and a committed dose from one year of exposure. The exposure is usually considered as chronic; however, equations are included to calculate dose and dose commitment from acute (one-time) exposure. The equations for calculating internal dose and dose commitment are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and Maximum Permissible Concentration (MPC) of each radionuclide. The radiation doses from external exposure to contaminated farm fields or shorelines are calculated assuming an infinite flat plane source of radionuclides. A factor of two is included for surface roughness. A modifying factor to compensate for finite extent is included in the shoreline calculations.

  13. Assessment of effectiveness of geologic isolation systems. ARRRG and FOOD: computer programs for calculating radiation dose to man from radionuclides in the environment

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Roswell, R.L.; Kennedy, W.E. Jr.; Strenge, D.L.

    1980-06-01

    The computer programs ARRRG and FOOD were written to facilitate the calculation of internal radiation doses to man from the radionuclides in the environment and external radiation doses from radionuclides in the environment. Using ARRRG, radiation doses to man may be calculated for radionuclides released to bodies of water from which people might obtain fish, other aquatic foods, or drinking water, and in which they might fish, swim or boat. With the FOOD program, radiation doses to man may be calculated from deposition on farm or garden soil and crops during either an atmospheric or water release of radionuclides. Deposition may be either directly from the air or from irrigation water. Fifteen crop or animal product pathways may be chosen. ARRRG and FOOD doses may be calculated for either a maximum-exposed individual or for a population group. Doses calculated are a one-year dose and a committed dose from one year of exposure. The exposure is usually considered as chronic; however, equations are included to calculate dose and dose commitment from acute (one-time) exposure. The equations for calculating internal dose and dose commitment are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and Maximum Permissible Concentration (MPC) of each radionuclide. The radiation doses from external exposure to contaminated farm fields or shorelines are calculated assuming an infinite flat plane source of radionuclides. A factor of two is included for surface roughness. A modifying factor to compensate for finite extent is included in the shoreline calculations.

  14. Investigation of an inventory calculation model for a solvent extraction system and the development of its computer programme - SEPHIS-J

    International Nuclear Information System (INIS)

    In order to improve the applicability of near-real-time materials accountancy (N.R.T.MA) to a reprocessing plant, it is necessary to develop an estimation method for the nuclear material inventory at a solvent extraction system under operation. For designing the solvent extraction system, such computer codes as SEPHIS, SOLVEX and TRANSIENTS had been used. Accuracy of these codes in tracing operations and predicting inventories in the extraction system had been discussed. Then, much better codes, e.g., SEPHIS Mod4 and PUBG, were developed. Unfortunately, SEPHIS Mod4 was not available in countries other than the USA and PUBG was not suitable for use with a mini-computer which would be practical as a field computer because of quite a lot of computing time needed. The authors investigated an inventory estimation model compatible with PUBG in functions and developed the corresponding computer programme, SEPHIS-J, based on the SEPHIS Mod3 code, resulting in a third of computing time compared with PUBG. They also validated the programme by calculating a static state as well as a dynamic one of the solvent extraction process and by comparing them among the programme, SEPHIS Mod3 and PUBG. Using the programme, it was shown that the inventory changes due to changes of feed flow and concentration were not so small that they might be neglected although the changes of feed flow and concentration were within measurement errors. (author)

  15. Development of computer codes to perform model calculations for thermomechanical interaction of rock salt with borehole liners in a HLW repository

    International Nuclear Information System (INIS)

    The principal objectives of the work are the development of computer codes with suitable material laws for rock salt and the performance of model calculations on the thermomechanical phenomena in the near field of a radioactive waste repository. In particular the work dealt with the following subjects: comparison of the computational capabilities of the finite element codes ADINA and MAUS (new version) by model calculations for the temperature test field 3; thermally induced convergence rates of boreholes in a high level waste (HLW) repository; investigation of the contact between salt and waste containers or borehole liners, and of the resulting pressure rise; investigation of the effect of inhomogeneities (anhydrite layers) in rock salt on the stress-strain field around a borehole in a HLW repository; development of a material model for backfill material and performance of model calculations on the convergence of backfilled storage rooms or tunnels in rock salt; development of a new version of the computer code MAUS by the Institut fuer Elektrische Anlagen und Energiewirtschaft of the RWTH-Aachen.

  16. User's guide to the SEPHIS computer code for calculating the Thorex solvent extraction system

    Energy Technology Data Exchange (ETDEWEB)

    Watson, S.B.; Rainey, R.H.

    1979-05-01

    The SEPHIS computer program was developed to simulate the countercurrent solvent extraction process. The code has now been adapted to model the Acid Thorex flow sheet. This report represents a practical user's guide to SEPHIS - Thorex containing a program description, user information, program listing, and sample input and output.

  17. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  18. A computer code for calculation of radioactive nuclide generation and depletion, decay heat and γ ray spectrum. FPGS90

    International Nuclear Information System (INIS)

    In a nuclear reactor, radioactive nuclides are generated and depleted as the nuclear fuel burns up. These radioactive nuclides, emitting γ rays and β rays, act as sources of decay heat in the reactor and of radiation exposure. In the safety evaluation of nuclear reactors and the nuclear fuel cycle, it is necessary to estimate the inventory of nuclides generated in nuclear fuel under the various burn-up conditions of the many kinds of nuclear fuel used in a reactor. FPGS90 is a code that calculates the nuclide inventory, decay heat and spectrum of emitted γ rays from fission products produced in nuclear fuel under various burn-up conditions. The nuclear data library used in the FPGS90 code is the 'JNDC Nuclear Data Library of Fission Products - second version -', compiled by a working group of the Japanese Nuclear Data Committee for evaluating decay heat in a reactor. The code can also process so-called evaluated nuclear data files such as ENDF/B, JENDL, ENSDF and so on, and it can plot the calculated results. Using the FPGS90 code it is possible to carry out the whole sequence of work, from preparing the library and calculating nuclide generation and decay heat through to plotting the calculated results. (author)
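
    A minimal sketch of the generation/depletion and decay-heat calculation for a short decay chain, solved with a matrix exponential; the chain, decay constants, and energies are illustrative assumptions, and the real code handles hundreds of fission products from its nuclear data library:

```python
import numpy as np
from scipy.linalg import expm

# Linear decay chain A -> B -> C (stable): dN/dt = M N, solved as N(t) = expm(M t) N0.
lam = np.array([1.0e-3, 5.0e-5, 0.0])          # decay constants (1/s), illustrative
q_per_decay = np.array([1.2, 0.8, 0.0])        # MeV released per decay, illustrative

M = np.array([[-lam[0],     0.0, 0.0],
              [ lam[0], -lam[1], 0.0],
              [    0.0,  lam[1], 0.0]])

N0 = np.array([1.0e20, 0.0, 0.0])              # initial atoms
t = 24 * 3600.0                                 # one day of decay
N = expm(M * t) @ N0

activity = lam * N                              # decays per second for each nuclide
decay_heat_w = float(np.sum(activity * q_per_decay) * 1.602e-13)
print("inventories:", N, f"decay heat ~ {decay_heat_w:.3e} W")
```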

  19. LEAF: a computer program to calculate fission product release from a reactor containment building for arbitrary radioactive decay chains

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C.E.; Apperson, C.E. Jr.; Foley, J.E.

    1976-10-01

    The report describes an analytic containment building model that is used for calculating the leakage into the environment of each isotope of an arbitrary radioactive decay chain. The model accounts for the source, the buildup, the decay, the cleanup, and the leakage of isotopes that are gas-borne inside the containment building.
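
    A minimal single-isotope sketch of the containment balance described above (source, decay, cleanup, leakage), with the environmental release obtained by integrating the leak rate; the rate constants and source term are illustrative assumptions, and the real code treats arbitrary decay chains:

```python
from scipy.integrate import solve_ivp

# dN/dt = source - (lambda_decay + lambda_cleanup + lambda_leak) * N
# dReleased/dt = lambda_leak * N
lam_decay, lam_cleanup, lam_leak = 2.1e-5, 1.0e-4, 1.0e-6   # 1/s, illustrative
source = 1.0e12                                              # atoms/s into containment air

def rhs(t, y):
    n, released = y
    return [source - (lam_decay + lam_cleanup + lam_leak) * n,
            lam_leak * n]

sol = solve_ivp(rhs, (0.0, 86400.0), [0.0, 0.0], t_eval=[86400.0])
print(f"airborne inventory: {sol.y[0, -1]:.3e} atoms, leaked to environment: {sol.y[1, -1]:.3e} atoms")
```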

  20. LEAF: a computer program to calculate fission product release from a reactor containment building for arbitrary radioactive decay chains

    International Nuclear Information System (INIS)

    The report describes an analytic containment building model that is used for calculating the leakage into the environment of each isotope of an arbitrary radioactive decay chain. The model accounts for the source, the buildup, the decay, the cleanup, and the leakage of isotopes that are gas-borne inside the containment building

  1. Computational modeling of the mathematical phantoms of the Brazilian woman to internal dosimetry calculations and for comparison of the absorbed fractions with specific reference women

    International Nuclear Information System (INIS)

    This work studies the concept of the mathematical phantom used in internal dosimetry and radiation protection, from the perspective of computer simulations. A mathematical phantom of the Brazilian woman was developed, to be used as the basis for calculations of Specific Absorbed Fractions (SAFs) in the organs and skeleton of the body for diagnostic or therapeutic purposes in nuclear medicine. The phantom developed here is similar in form to the Snyder phantom, making it more realistic for the anthropomorphic characteristics of Brazilian women. The Monte Carlo formalism was used, through computer modeling. As a contribution to the objectives of this study, the computer system cFAE (consultation of Specific Absorbed Fractions) was developed and implemented, which makes queries versatile for the user researcher.

  2. FRAPCON-3: A computer code for the calculation of steady-state, thermal-mechanical behavior of oxide fuel rods for high burnup

    International Nuclear Information System (INIS)

    FRAPCON-3 is a FORTRAN IV computer code that calculates the steady-state response of light water reactor fuel rods during long-term burnup. The code calculates the temperature, pressure, and deformation of a fuel rod as functions of time-dependent fuel rod power and coolant boundary conditions. The phenomena modeled by the code include (1) heat conduction through the fuel and cladding, (2) cladding elastic and plastic deformation, (3) fuel-cladding mechanical interaction, (4) fission gas release, (5) fuel rod internal gas pressure, (6) heat transfer between fuel and cladding, (7) cladding oxidation, and (8) heat transfer from cladding to coolant. The code contains necessary material properties, water properties, and heat-transfer correlations. The code's integral predictions of mechanical behavior have not been assessed against a data base, e.g., cladding strain or failure data. Therefore, it is recommended that the code not be used for analyses of cladding stress or strain. FRAPCON-3 is programmed for use on both mainframe computers and UNIX-based workstations such as DEC 5000 or SUN Sparcstation 10. It is also programmed for personal computers with FORTRAN compiler software and at least 8 to 10 megabytes of random access memory (RAM). The FRAPCON-3 code is designed to generate initial conditions for transient fuel rod analysis by the FRAPTRAN computer code (formerly named FRAP-T6).
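
    A back-of-envelope sketch of two of the heat-transfer pieces such a code models: the radial temperature rise across a fuel pellet with uniform volumetric heating, ΔT = q' / (4πk), and the drop across the fuel-cladding gap, ΔT = q' / (2πr·h_gap). The property values are illustrative assumptions, not FRAPCON correlations:

```python
import math

def fuel_delta_t(linear_power_w_per_m, k_fuel=3.0):
    # centerline-to-surface rise for uniform heat generation in a cylinder
    return linear_power_w_per_m / (4.0 * math.pi * k_fuel)

def gap_delta_t(linear_power_w_per_m, fuel_radius_m=4.7e-3, h_gap=5.0e3):
    # temperature drop across the gap for a given gap conductance h_gap
    return linear_power_w_per_m / (2.0 * math.pi * fuel_radius_m * h_gap)

q = 20.0e3   # linear heat rate, W/m (illustrative)
print(f"fuel dT ~ {fuel_delta_t(q):.0f} K, gap dT ~ {gap_delta_t(q):.0f} K")
```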

  3. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  5. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar in ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity discussing the impact and for addressing issues and solutions to the main challenges facing CMS computing. The lack of manpower is particul...

  6. Organization of the M-6000 computer calculating process in the CAMAC on-line measurement systems for a physical experiment

    International Nuclear Information System (INIS)

    The basic results of the work on designing the software of the computer measuring complex (CMC), which uses the M-6000 computer and operates on line with an accelerator, are discussed. All the CMC units comply with the CAMAC standard. The CMC incorporates a main memory, twenty-four kilobytes of 16-bit words in size, and external memory on magnetic disks, 1 megabyte in size. A modification of the technique for designing the CMC software is suggested, providing program complexes that an experimentalist can dynamically adjust to a given experiment within a short time. The CMC software comprises the following major portions: a software generator, a data acquisition program, on-line data processing routines, off-line data processing programs, and programs for data recording on magnetic tapes and disks. Testing of the designed CMC has revealed that the total data processing time ranges from 150 to 500 ms.

  7. The Cerebellum: New Computational Model that Reveals its Primary Function to Calculate Multibody Dynamics Conform to Lagrange-Euler Formulation

    OpenAIRE

    Kurtaj, Lavdim; Limani, Ilir; Shatri, Vjosa; Skeja, Avni

    2014-01-01

    The cerebellum is the part of the brain that occupies only 10% of the brain volume, but it contains about 80% of the total number of brain neurons. A new cerebellar function model is developed that sets cerebellar circuits in the context of multibody dynamics model computations, an important step in controlling balance and movement coordination, functions performed by the two oldest parts of the cerebellum. The model gives a new functional interpretation for the granule cell-Golgi cell circuit, including distinct function ...

  8. Development of computational methods for the prediction of protein structure, protein binding, and mutational effects using free energy calculations.

    OpenAIRE

    Becker, Caroline

    2014-01-01

    A molecular understanding of protein-protein or protein-ligand binding is of crucial importance for the design of proteins or ligands with defined binding characteristics. The comprehensive analysis of biomolecular binding and the coupled rational in silico design of protein-ligand interfaces requires both, accurate and computationally fast methods for the prediction of free energies. Accurate free energy methods usually involve atomistic molecular dynamics simulations that are computationall...

  9. ELKIN, a computer program for kinematic calculations of interactions among arbitrary number of particles with arbitrary spin

    International Nuclear Information System (INIS)

    ELKIN is based on a method of kinematic analysis that uses invariant amplitudes with two invariant indices for each particle. Differential cross sections can be calculated, expressed in invariant amplitudes and particle momenta. Conservation laws can be applied, reducing the number of amplitudes. ELKIN is written in LISP and the assembler language LAP. The simplification part of the program is an adaptation of the function SIMP from the algebraic language LAM

  10. Detecting number processing and mental calculation in patients with disorders of consciousness using a hybrid brain-computer interface system

    OpenAIRE

    Li, Yuanqing; Pan, Jiahui; He, Yanbin; Wang, Fei; Laureys, Steven; Xie, Qiuyou; Yu, Ronghao

    2015-01-01

    Background For patients with disorders of consciousness such as coma, a vegetative state or a minimally conscious state, one challenge is to detect and assess the residual cognitive functions in their brains. Number processing and mental calculation are important brain functions but are difficult to detect in patients with disorders of consciousness using motor response-based clinical assessment scales such as the Coma Recovery Scale-Revised due to the patients’ motor impairments and inabilit...

  11. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, not only by using opportunistic resources like the San Diego Supercomputer Center which was accessible, to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  12. Verification of computer code for calculation of coolant radiolysis in the VVER reactor core with regard for boiling in its upper part

    International Nuclear Information System (INIS)

    The code Bora, for calculating WWER coolant radiolysis considering boiling of single jets in the top part of the reactor core, has been developed on the basis of the computer codes MOPABA-H2 (radiolysis of aqueous solutions) and SteamRad (radiolysis of vapor). The physico-chemical processes taking place in boiling core coolant are complex and diverse; still, for the solution of certain problems their simulation can be simplified. This approach of reasonable simplification was used for the development of the code Bora: the mathematical model adopted is intended to simulate phenomena only in the area of interest; the number of simulated chemical reactions and particles is kept reasonably small; and the complexity of the interphase mass transfer calculation procedure is kept adequate to the actually achievable accuracy of modeling. An analysis of new experimental data on the initial yields of water radiolysis products and on the kinetic parameters of the elementary chemical reactions involving them has been carried out. On the basis of this analysis, some changes have been introduced into the mechanisms of radiolysis of liquid water and of aqueous ammonia solutions, which have been significantly revised. Examples of the calculations performed for the verification of the code Bora are presented. Despite the very simple simulation of interphase mass transfer, Bora makes it possible to obtain the average chemical composition of the two-phase coolant at the BWR core outlet with an accuracy sufficient for engineering calculations. The report also presents the results of a test calculation of the two-phase coolant chemical composition for coolant boiling in the top part of the core of a pressurized water reactor. (author)

  13. Computational tool for phase-shift calculation in an interference pattern by fringe displacements based on a skeletonized image

    Science.gov (United States)

    Rivera-Ortega, Uriel; Pico-Gonzalez, Beatriz

    2016-01-01

    In this manuscript an algorithm based on a graphic user interface (GUI) designed in MATLAB for an automatic phase-shifting estimation between two digitalized interferograms is presented. The proposed algorithm finds the midpoint locus of the dark and bright interference fringes in two skeletonized fringe patterns and relates their displacements with the corresponding phase-shift. In order to demonstrate the usefulness of the proposed GUI, its application to simulated and experimental interference patterns will be shown. The viability of this GUI makes it a helpful and easy-to-use computational tool for educational or research purposes in optical phenomena for undergraduate or graduate studies in the field of physics.
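
    A minimal sketch of the displacement-to-phase relation such a tool relies on: a lateral displacement d of the skeletonized fringe midpoints over a local fringe period P corresponds to a phase shift of 2πd/P; the midpoint positions below are illustrative values, not data from the paper:

```python
import numpy as np

def phase_shift(fringe_period_px, displacement_px):
    # phase shift in radians for a fringe displaced by displacement_px
    return 2.0 * np.pi * displacement_px / fringe_period_px

midpoints_before = np.array([12.0, 44.0, 76.0, 108.0])   # skeleton positions, pixels
midpoints_after  = np.array([20.0, 52.0, 84.0, 116.0])

period = float(np.mean(np.diff(midpoints_before)))        # local fringe period (~32 px)
shift  = float(np.mean(midpoints_after - midpoints_before))
print(f"estimated phase shift = {phase_shift(period, shift):.3f} rad")
```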

  14. Guide-lines for an early evaluation of a nuclear accident, calculated with the computer model park

    International Nuclear Information System (INIS)

    For a nuclear accident where large areas are contaminated, it is necessary to predict the exposure of the population as early as possible in order to plan appropriate countermeasures. The radioecological computer model PARK (Program System for the Assessment and Mitigation of Radiological Consequences) is part of the German decision support system IMIS (Integrated Measurement- and Information System for the Surveillance of Environmental Radioactivity) for a fast assessment of contaminations and doses. In this paper PARK is used to investigate the dose relevance of the exposure pathways, of ingested radionuclides, and of foodstuffs in relation to the date of the event. (author)

  15. How to Compute a Slot Marker - Calculation of Controller Managed Spacing Tools for Efficient Descents with Precision Scheduling

    Science.gov (United States)

    Prevot, Thomas

    2012-01-01

    This paper describes the underlying principles and algorithms for computing the primary controller managed spacing (CMS) tools developed at NASA for precisely spacing aircraft along efficient descent paths. The trajectory-based CMS tools include slot markers, delay indications and speed advisories. These tools are one of three core NASA technologies integrated in NASA's ATM technology demonstration-1 (ATD-1) that will operationally demonstrate the feasibility of fuel-efficient, high throughput arrival operations using Automatic Dependent Surveillance Broadcast (ADS-B) and ground-based and airborne NASA technologies for precision scheduling and spacing.

  16. Exit of a blast wave from a conical nozzle. [flow field calculations by Eulerian computer code DORF

    Science.gov (United States)

    Kim, K.; Johnson, W. E.

    1976-01-01

    The Eulerian computer code DORF was used in the analysis of a two-dimensional, unsteady flow field resulting from semi-confined explosions for propulsive applications. Initially, the ambient gas inside the conical shaped nozzle is set into motion due to the expansion of the explosion product gas, forming a shock wave. When this shock front exits the nozzle, it takes almost a spherical form while a complex interaction between the nozzle and compression and rarefaction waves takes place behind the shock. The results show an excellent agreement with experimental data.

  17. Computer

    CERN Document Server

    Atkinson, Paul

    2011-01-01

    The pixelated rectangle we spend most of our day staring at in silence is not the television, as many long feared, but the computer: the ubiquitous portal of our work and personal lives. At this point the computer is so common that we hardly notice it in our view. It is difficult to imagine that not so long ago it was a gigantic, room-sized structure accessible only to a few, inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little more noted than a toaster. These dramati

  18. User's guide for GAPCON-THERMAL-2: a computer program for calculating the thermal behavior of an oxide fuel rod

    International Nuclear Information System (INIS)

    This report is being published as a user's manual for GAPCON-THERMAL-2 and provides a general description of the code and instructions for its use. The GAPCON-THERMAL-2 code was developed for the Regulatory Staff, NRC, to use as a tool in estimating fuel-cladding gap conductances and fuel stored energy and represents a modification of the GAPCON-THERMAL-1 code. The goal of the modifications was to reduce uncertainties associated with calculating power history and burnup effects and yet retain a relatively flexible and fast running code for parametric studies. 15 references

  19. A Statistical Model and Computer program for Preliminary Calculations Related to the Scaling of Sensor Arrays; TOPICAL

    International Nuclear Information System (INIS)

    Recent advances in sensor technology and engineering have made it possible to assemble many related sensors in a common array, often of small physical size. Sensor arrays may report an entire vector of measured values in each data collection cycle, typically one value per sensor per sampling time. The larger quantities of data provided by larger arrays certainly contain more information; however, experience suggests that dramatic increases in array size do not always lead to corresponding improvements in the practical value of the data. The work leading to this report was motivated by the need to develop computational planning tools to approximate the relative effectiveness of arrays of different size (or scale) in a wide variety of contexts. The basis of the work is a statistical model of a generic sensor array. It includes features representing measurement error, both common to all sensors and independent from sensor to sensor, and the stochastic relationships between the quantities to be measured by the sensors. The model can be used to assess the effectiveness of hypothetical arrays in classifying objects or events from two classes. A computer program is presented for evaluating the misclassification rates that can be expected when arrays are calibrated using a given number of training samples, or the number of training samples required to attain a given level of classification accuracy. The program is also available via email from the first author for a limited time.
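
    The planning calculation the report describes can be illustrated with a small simulation. The sketch below is a rough stand-in rather than the program distributed with the report: it draws two-class sensor-array readings with both a common error term (shared by all sensors) and independent per-sensor errors, calibrates a nearest-class-mean rule on a fixed number of training samples, and estimates the resulting misclassification rate for several array sizes. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_array(n_samples, n_sensors, class_sep=0.5, common_sd=0.3, indep_sd=1.0):
    """Two-class sensor-array readings: class mean + error common to all
    sensors + independent per-sensor error."""
    labels = rng.integers(0, 2, n_samples)
    means = np.where(labels[:, None] == 1, class_sep, -class_sep)
    common = rng.normal(0.0, common_sd, (n_samples, 1))     # shared by all sensors
    indep = rng.normal(0.0, indep_sd, (n_samples, n_sensors))
    return means + common + indep, labels

def misclassification_rate(n_sensors, n_train=50, n_test=2000):
    """Calibrate a nearest-class-mean rule on n_train samples and estimate
    its error rate on an independent test set."""
    x_tr, y_tr = simulate_array(n_train, n_sensors)
    x_te, y_te = simulate_array(n_test, n_sensors)
    mu0, mu1 = x_tr[y_tr == 0].mean(axis=0), x_tr[y_tr == 1].mean(axis=0)
    d0 = np.linalg.norm(x_te - mu0, axis=1)
    d1 = np.linalg.norm(x_te - mu1, axis=1)
    return float(np.mean((d1 < d0) != (y_te == 1)))

for n in (1, 4, 16, 64):
    print(n, misclassification_rate(n))
```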

  20. Development of a computational model for the calculation of neutron dose equivalent in laminated primary barriers of radiotherapy rooms

    International Nuclear Information System (INIS)

    Many radiotherapy centers acquire 15 and 18 MV linear accelerators to perform more effective treatments for deep tumors. However, the acquisition of such equipment must be accompanied by additional care in the shielding planning of the rooms that will house them. In cases where space is restricted, it is common to find primary barriers made of concrete and metal. The drawback of this type of barrier is photoneutron emission when high-energy photons (e.g. 15 and 18 MV spectra) interact with the metallic material of the barrier. The emission of these particles constitutes a radiation protection problem inside and outside radiotherapy rooms, which should be properly assessed. Recent work has shown that the current model underestimates the neutron dose outside the treatment rooms. In this work, a computational model for the aforementioned problem was created using Monte Carlo simulations and artificial intelligence. The developed model is composed of three neural networks, each corresponding to a material-spectrum pair: Pb18, Pb15 and Fe18. In a direct comparison with the McGinley method, the Pb18 network exhibited the best responses for approximately 78% of the cases tested, the Pb15 network showed better results for 100% of the tested cases, and the Fe18 network produced better answers for 94% of the tested cases. Thus, the computational model composed of the three networks has shown more consistent results than the McGinley method. (author)

  1. Quantum Computational Calculations of the Ionization Energies of Acidic and Basic Amino Acids: Aspartate, Glutamate, Arginine, Lysine, and Histidine

    Science.gov (United States)

    de Guzman, C. P.; Andrianarijaona, M.; Lee, Y. S.; Andrianarijaona, V.

    An extensive knowledge of the ionization energies of amino acids can provide vital information on protein sequencing, structure, and function. Acidic and basic amino acids are unique because they have three ionizable groups: the C-terminus, the N-terminus, and the side chain. The effects of multiple ionizable groups can be seen in how Aspartate's ionizable side chain heavily influences its preferred conformation (J Phys Chem A. 2011 April 7; 115(13): 2900-2912). Theoretical and experimental data on the ionization energies of many of these molecules are sparse. Considering each atom of the amino acid as a potential departing site for the electron gives insight into how the three ionizable groups affect the ionization process of the molecule and the dynamic coupling between the vibrational modes. In the following study, we optimized the structure of each acidic and basic amino acid and then exported the three-dimensional coordinates of the amino acids. We used ORCA to calculate single-point energies for a region near the optimized coordinates and systematically went through the x, y, and z coordinates of each atom in the neutral and ionized forms of the amino acid. With these calculations, we were able to graph potential energy curves to better understand the quantum dynamic properties of the amino acids. The authors thank the Pacific Union College Student Association for providing funds.

  2. Nuclear magnetic resonance, vibrational spectroscopic studies, physico-chemical properties and computational calculations on (nitrophenyl) octahydroquinolindiones by DFT method.

    Science.gov (United States)

    Pasha, M A; Siddekha, Aisha; Mishra, Soni; Azzam, Sadeq Hamood Saleh; Umapathy, S

    2015-02-01

    In the present study, 2'-nitrophenyloctahydroquinolinedione and its 3'-nitrophenyl isomer were synthesized and characterized by FT-IR, FT-Raman, (1)H NMR and (13)C NMR spectroscopy. The molecular geometry, vibrational frequencies, (1)H and (13)C NMR chemical shift values of the synthesized compounds in the ground state have been calculated by using the density functional theory (DFT) method with the 6-311++G (d,p) basis set and compared with the experimental data. The complete vibrational assignments of wave numbers were made on the basis of potential energy distribution using GAR2PED programme. Isotropic chemical shifts for (1)H and (13)C NMR were calculated using gauge-invariant atomic orbital (GIAO) method. The experimental vibrational frequencies, (1)H and (13)C NMR chemical shift values were found to be in good agreement with the theoretical values. On the basis of vibrational analysis, molecular electrostatic potential and the standard thermodynamic functions have been investigated. PMID:25440584

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB of RAW per event. The central collisions are more complex and...

  4. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride. Edited by M-C. Sawley with contributions from: P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini, M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  5. IDAC2.0 a new generation of internal dosimetric calculations for diagnostic examinations in nuclear medicine using the adult ICRP/ICRU reference computational voxel phantoms

    International Nuclear Information System (INIS)

    Full text of publication follows. Aim and background: the International Commission on Radiological Protection (ICRP) Task Group 36 (TG 36) has been charged with proposing biokinetic data and estimates of absorbed doses to organs and tissues, and of the effective dose, to patients from various radiopharmaceuticals. To date, the program IDAC1.0 has been used to perform the dose calculations, with OLINDA/EXM as an independent validation of the calculations. Both calculations are based on photon specific absorption fractions (SAF) simulated with the mathematical phantoms created by Cristy and Eckerman in 1987, while the kinetic energy of electrons is mainly assumed to be absorbed locally. To improve the accuracy of the calculations, ICRP has now adopted a more realistic voxel phantom to incorporate in Monte Carlo (MC) simulations of new electron and photon SAF values. The internal dosimetry computer program IDAC has been substantially upgraded (IDAC2.0) and incorporates these new SAF values for calculation of the absorbed doses and the effective dose. Material and methods: with IDAC2.0 it is possible to calculate the dose from 1252 different radionuclides. The program uses the latest biokinetic models and assumptions of ICRP TG 36, which also include the incorporation of the Human Alimentary Tract Model (ICRP 100) and the latest tissue weighting factors (ICRP 103). The S-values are generated from mono-energetic photon and electron SAF values of the new voxel phantom and the decay data of ICRP Publication 107. The input data for the source regions included in the model for the absorbed dose and effective dose calculations in IDAC2.0 can be given as a descriptive biokinetic model, by constructing a compartment model with defined transfer coefficients, or simply as the total number of disintegrations per unit administered activity. Absorbed doses and the effective dose are calculated and presented here for 120 different radiopharmaceuticals based on earlier

  6. COXPRO-II: a computer program for calculating radiation and conduction heat transfer in irradiated fuel assemblies

    International Nuclear Information System (INIS)

    This report describes the computer program COXPRO-II, which was written for performing thermal analyses of irradiated fuel assemblies in a gaseous environment with no forced cooling. The heat transfer modes within the fuel pin bundle are radiation exchange among fuel pin surfaces and conduction by the stagnant gas. The array of parallel cylindrical fuel pins may be enclosed by a metal wrapper or shroud. Heat is dissipated from the outer surface of the fuel pin assembly by radiation and convection. Both equilateral triangle and square fuel pin arrays can be analyzed. Steady-state and unsteady-state conditions are included. Temperatures predicted by the COXPRO-II code have been validated by comparing them with experimental measurements. Temperature predictions compare favorably to temperature measurements in pressurized water reactor (PWR) and liquid-metal fast breeder reactor (LMFBR) simulated, electrically heated fuel assemblies. Also, temperature comparisons are made on an actual irradiated Fast-Flux Test Facility (FFTF) LMFBR fuel assembly

  7. Metadata Management for Distributed First Principles Calculations in VLab: a Collaborative Grid/Portal System for Geomaterials Computations

    Science.gov (United States)

    da Silveira, P. R.; da Silva, C. R.; Wentzcovitch, R. M.

    2006-12-01

    We describe the metadata and the metadata management algorithms necessary to handle the concurrent execution of multiple tasks from a single workflow in a collaborative, service-oriented architecture environment. The metadata requirements are imposed by the distributed workflow that calculates elastic properties of materials at high pressures and temperatures. We explain the basic metaphor underlying the metadata management, the receipt. We also show the actual Java representation of the receipt and explain how receipts are XML-serialized to be transferred between servers and stored in a database. We also discuss how the collaborative aspect of user activity on running workflows could potentially lead to race conditions, how this affects the requirements on metadata, and how these race conditions are avoided. Finally, we describe an additional metadata structure, complementary to the receipts, that contains general information about the workflow. Work supported by NSF/ITR 0428774 (VLab).

  8. Accident consequence calculations and risk assessments for pressurized light water reactors with the computer code UFOMOD/B3

    International Nuclear Information System (INIS)

    With respect to the application of the accident consequence model of the German Risk Study (GRS) for light water reactors to risk assessments of other reactor types (high temperature reactor HTR-1160, fast breeder reactor SNR-300), the improved version UFOMOD/B3 was developed. The modifications mainly concern the deposition parameters, the resuspension process, the ingestion model and the dose factors. To make results comparable, recalculations for pressurized light water reactors were performed with the release categories of the GRS. The results show in contrast to the findings of the GRS a significant reduction of the acute fatality risk by a factor of 3.6. This essentially results from the smaller deposition parameters. The latent fatality risk was calculated nearly unchanged. (orig.)

  9. MO-G-17A-04: Internal Dosimetric Calculations for Pediatric Nuclear Imaging Applications, Using Monte Carlo Simulations and High-Resolution Pediatric Computational Models

    International Nuclear Information System (INIS)

    Purpose: Our purpose is to evaluate the absorbed dose from administered radiopharmaceuticals in pediatric nuclear imaging studies. Monte Carlo simulations with the incorporation of pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the “IT'IS Foundation”. The series of phantoms used in our work includes 6 models in the range of 5–14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms in the GATE simulations. The resolution of the phantoms was set to 2 mm³. The most important organ densities were simulated according to the GATE “Materials Database”. Several radiopharmaceuticals used in SPECT and PET applications are being tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-sestamibi, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept below 5%. The S-factors for each target organ are calculated in Gy/(MBq·s), with the highest doses absorbed in the kidneys and pancreas (9.29×10^10 and 0.15×10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and evaluating the

  10. MO-G-17A-04: Internal Dosimetric Calculations for Pediatric Nuclear Imaging Applications, Using Monte Carlo Simulations and High-Resolution Pediatric Computational Models

    Energy Technology Data Exchange (ETDEWEB)

    Papadimitroulas, P; Kagadis, GC [University of Patras, Rion, Ahaia (Greece); Loudos, G [Technical Educational Institute of Athens, Aigaleo, Attiki (Greece)

    2014-06-15

    Purpose: Our purpose is to evaluate the absorbed dose from administered radiopharmaceuticals in pediatric nuclear imaging studies. Monte Carlo simulations with the incorporation of pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the “IT'IS Foundation”. The series of phantoms used in our work includes 6 models in the range of 5–14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms in the GATE simulations. The resolution of the phantoms was set to 2 mm³. The most important organ densities were simulated according to the GATE “Materials Database”. Several radiopharmaceuticals used in SPECT and PET applications are being tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-sestamibi, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept below 5%. The S-factors for each target organ are calculated in Gy/(MBq·s), with the highest doses absorbed in the kidneys and pancreas (9.29×10^10 and 0.15×10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and

  11. Acquisition of data from on-line laser turbidimeter and calculation of some kinetic variables in computer-coupled automated fed-batch culture

    International Nuclear Information System (INIS)

    Output signals of a commercially available on-line laser turbidimeter exhibit fluctuations due to air and/or CO2 bubbles. A simple data-processing algorithm and personal-computer software have been developed to smooth the noisy turbidity data acquired and to use them for on-line calculation of some kinetic variables involved in batch and fed-batch cultures of uniformly dispersed microorganisms. With this software, about 10³ instantaneous turbidity data acquired over 55 s are averaged and converted to dry cell concentration, X, every minute. The volume of the culture broth, V, is estimated from the averaged output of the weight loss, W, of the feed-solution reservoir, measured with an electronic balance on which the reservoir is placed. The software then performs linear regression analyses over the past 30 min of the total biomass, VX, the natural logarithm of the total biomass, ln(VX), and the weight loss, W, in order to calculate the volumetric growth rate, d(VX)/dt, the specific growth rate, μ [= d ln(VX)/dt], and the rate of weight loss, dW/dt, every minute in a fed-batch culture. The software performing these first-order regression analyses of VX, ln(VX) and W was applied to batch and fed-batch cultures of Escherichia coli on minimal synthetic or natural complex media. Sample determination coefficients of the three variables (VX, ln(VX) and W) were close to unity, indicating that the calculations are accurate. Furthermore, the growth yield, Yx/s, and the specific substrate consumption rate, qsc, were approximately estimated from dW/dt in a 'balanced' fed-batch culture of E. coli on the minimal synthetic medium, where the computer-aided substrate-feeding system automatically matches the cell growth well. (author)
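
    The regression step described above is straightforward to reproduce. The following sketch is a simplified stand-in for the authors' software: it estimates the specific growth rate μ = d ln(VX)/dt by a first-order regression of ln(VX) against time over the last 30 minutes of one-minute data; the function name and the example numbers are hypothetical.

```python
import numpy as np

def specific_growth_rate(times_min, total_biomass, window_min=30):
    """Estimate mu = d ln(VX)/dt by a first-order (linear) regression of
    ln(VX) against time over the last `window_min` minutes of data."""
    t = np.asarray(times_min, dtype=float)
    ln_vx = np.log(np.asarray(total_biomass, dtype=float))
    mask = t >= t[-1] - window_min
    slope, _ = np.polyfit(t[mask], ln_vx[mask], 1)   # slope in 1/min
    return slope

# Example: exponential growth at mu = 0.01 per minute with mild noise
t = np.arange(0, 61)                                  # one point per minute
vx = 0.5 * np.exp(0.01 * t) * (1 + 0.005 * np.random.randn(t.size))
print(specific_growth_rate(t, vx))                    # close to 0.01
```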

  12. Overcoming the existent computational challenges in the ab initio calculations of the two-photon circular dichroism spectra of large molecules using a fragment-recombination approach

    Science.gov (United States)

    Diaz, Carlos; Echevarria, Lorenzo; Hernández, Florencio E.

    2013-05-01

    Herein we report on the development of a fragment-recombination approach (FRA) that overcomes the computational limitations found in the ab initio calculation of the two-photon circular dichroism (TPCD) spectra of large optically active molecules. Through a comparative analysis of the theoretical TPCD spectra of the fragments and of the entire molecule, we prove that TPCD is an additive property. We also demonstrate that the same property applies to two-photon absorption (TPA). TPCD-FRA is expected to find wide application in the structural analysis of large catalysts and polypeptides, owing to its reduced computational complexity, cost and time, and to reveal fingerprints in the obscure spectral region between the near and far UV.

  13. Calculation of temperature fields formed in induction annealing of closing welded joint of jacket of steam generator for WWER 440 type nuclear power plant using ICL 2960 computer

    International Nuclear Information System (INIS)

    The problems of the mathematical description and simulation of temperature fields during annealing of the closing weld of the steam generator jacket of the WWER 440 nuclear power plant are discussed. The basic principles of induction annealing are given, the method of calculating the temperature fields is indicated, and the mathematical description is given of the boundary conditions on the outer and inner surfaces of the steam generator jacket for the computation of the temperature fields arising during annealing. Also described are the methods of determining the temperature of the exposed parts of the heat exchange tubes inside the steam generator, and the technical possibilities of the annealing equipment are assessed from the point of view of its computer simulation. Five alternatives are given for the computation of the temperature fields in the area around the weld for different boundary conditions. The values are given of the maximum differences in the temperature of the metal in the annealed part of the steam generator jacket, which allow the assessment of the individual computation variants, mainly with regard to maintaining the annealing temperature over the required width of the steam generator jacket on both sides of the closing weld. (B.S.)

  14. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ - supplementary report

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, Jr, D E; Pleasant, J C; Killough, G G

    1980-05-01

    The purpose of this report is to describe revisions in the SFACTOR computer code and to provide useful documentation for that program. The SFACTOR computer code has been developed to implement current methodologies for computing the average dose equivalent rate S(X ← Y) to specified target organs in man due to 1 µCi of a given radionuclide uniformly distributed in designated source organs. The SFACTOR methodology is largely based upon that of Snyder; however, it has been expanded to include components of S from alpha and spontaneous fission decay, in addition to electron and photon radiations. With this methodology, S-factors can be computed for any radionuclide for which decay data are available. The tabulations in Appendix II provide a reference compilation of S-factors for several dosimetrically important radionuclides which are not available elsewhere in the literature. These S-factors are calculated for an adult with characteristics similar to those of the International Commission on Radiological Protection's Reference Man. Corrections to tabulations from Dunning are presented in Appendix III, based upon the methods described in Section 2.3. 10 refs.
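
    Once S-factors of this kind are tabulated, the dose calculation itself is a simple sum in the MIRD scheme: the dose equivalent to a target organ is the cumulated activity (residence, in µCi-day) in each source organ multiplied by S(target ← source), summed over sources. The sketch below illustrates only that bookkeeping step, with hypothetical organ names and numbers; it is not the SFACTOR code itself.

```python
# Illustration only (not the SFACTOR code): how tabulated S-factors are applied.
# The dose equivalent to a target organ is the residence (cumulated activity,
# uCi-day) in each source organ times S(target <- source), summed over sources.

def target_dose(residence_uCi_day, s_factors):
    """residence_uCi_day : {source organ: cumulated activity, uCi-day}
    s_factors           : {source organ: S(target <- source), rem per uCi-day}
    Returns the dose equivalent to the target organ (rem)."""
    return sum(residence_uCi_day[src] * s_factors[src] for src in residence_uCi_day)

# Hypothetical residences and S-factors, for illustration only
residence = {"liver": 12.0, "kidneys": 3.5, "total body": 40.0}
s_to_liver = {"liver": 2.1e-4, "kidneys": 4.0e-6, "total body": 1.5e-6}
print(target_dose(residence, s_to_liver))   # rem
```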

  15. Computer code TERFOC-N to calculate doses to public using terrestrial foodchain models improved and extended for long-lived nuclides

    International Nuclear Information System (INIS)

    A computer code, TERFOC-N, has been developed to calculate doses to the public due to atmospheric releases of radionuclides during normal operation of nuclear facilities. The code calculates the highest individual dose and the collective dose from four exposure pathways: internal doses due to ingestion and inhalation, and external doses due to cloudshine and groundshine. A foodchain model, originally based on U.S. NRC Regulatory Guide 1.109, has been improved so that it applies not only to LWRs but also to other nuclear facilities. This report describes the models employed and gives a sample run performed with the code. The parameters to which the ingestion dose is sensitive were identified from the results of a sensitivity analysis, and the models that contribute significantly to the dose were identified among the models improved and extended here. (author)

  16. Short-term calculations to supplement the RS 16 B PWR experiments with internals (PWR1 to PWR5), using the LECK 4 computer code

    International Nuclear Information System (INIS)

    Within the framework of research project RS 16 B, sponsored by the German BMFT, a series of blowdown experiments, DWR1 to DWR5, was performed using a vessel with dummy internals under conditions similar to those in a PWR. The prime objective of these experiments was the investigation of the highly transient blowdown phenomena in the discharge nozzle and the determination of the induced loads on the internals. As a partner in the project, KWU carried out both pre-test predictions and post-test analyses of these experiments using, among other codes, the computer code LECK 4. For the most severe blowdown test, DWR5, the influence of the most important model parameters on the blowdown analysis was investigated in detail. These investigations suggest that, as in the long-term analyses, calculations using the homogeneous critical flow model would improve the agreement between calculation and experiment. (orig./RW)

  17. Application of computational fluid dynamics and fluid structure interaction techniques for calculating the 3D transient flow of journal bearings coupled with rotor systems

    Science.gov (United States)

    Li, Qiang; Yu, Guichang; Liu, Shulian; Zheng, Shuiying

    2012-09-01

    Journal bearings are important parts for maintaining the high dynamic performance of rotating machinery. Several methods have been proposed to analyse the flow field of journal bearings, and in most of them a simplified physical model and the classical Reynolds equation are applied. The application of general computational fluid dynamics (CFD)-fluid structure interaction (FSI) techniques is more beneficial for analysing the fluid field in a journal bearing when more detailed solutions are needed. This paper deals with the quasi-coupled calculation of the transient fluid dynamics of the oil film in journal bearings and of the rotor dynamics with CFD-FSI techniques. The fluid dynamics of the oil film is calculated by applying the so-called "dynamic mesh" technique. A new mesh-movement approach is presented, since the dynamic mesh models provided by FLUENT are not suitable for the transient oil flow in journal bearings. The proposed mesh-movement approach is based on a structured mesh: when the journal moves, the displacement of every grid node in the flow field of the bearing is calculated, and the update of the volume mesh is then handled automatically by a user-defined function (UDF). The journal displacement at each time step is obtained by solving the equations of motion of the rotor-bearing system under the known oil-film force. A case study is carried out to calculate the locus of the journal centre and the pressure distribution in order to prove the feasibility of this method. The results indicate that the proposed method can predict the transient flow field of a journal bearing in a rotor-bearing system where more realistic models are involved. The presented calculation method provides a basis for studying the nonlinear dynamic behaviour of a general rotor-bearing system.
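
    As a rough illustration of the kind of proportional node motion a structured-mesh update can use (not the authors' actual UDF), the sketch below blends the journal-centre displacement linearly across the oil film: nodes on the journal surface follow the journal, nodes on the bearing bore stay fixed, and nodes in between move in proportion to their radial position. All names and dimensions are hypothetical.

```python
import numpy as np

def node_displacement(node_r, journal_r, bearing_r, journal_center_shift):
    """Displacement of structured-mesh nodes in the oil film when the
    journal centre moves by `journal_center_shift` (a 2-vector, in metres).

    Nodes on the journal surface follow the journal rigidly, nodes on the
    bearing bore stay fixed, and nodes in between are blended linearly
    according to their radial position across the film."""
    node_r = np.asarray(node_r, dtype=float)
    # 0 at the journal surface, 1 at the bearing bore
    f = np.clip((node_r - journal_r) / (bearing_r - journal_r), 0.0, 1.0)
    return (1.0 - f)[..., None] * np.asarray(journal_center_shift, dtype=float)

# Example: journal centre moves 5 um in +x; nodes at the journal surface,
# mid-film and the bearing bore (journal R = 25.0 mm, bore R = 25.1 mm)
r_nodes = [0.0250, 0.02505, 0.0251]
print(node_displacement(r_nodes, 0.0250, 0.0251, [5e-6, 0.0]))
```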

  18. Subtraction Procedure for Calculation of Anomalous Magnetic Moment of Electron in QED and its Application to Numerical Computation at 3-loop Level

    CERN Document Server

    Volkov, S A

    2015-01-01

    A new subtraction procedure for removing both ultraviolet and infrared divergences in Feynman integrals is proposed. The method is developed for the computation of QED corrections to the electron anomalous magnetic moment. The procedure is formulated in the form of a forest formula with linear operators that are applied to the Feynman amplitudes of UV-divergent subgraphs. The contribution of each Feynman graph that contains propagators of electrons and photons is represented as a finite Feynman-parametric integral. The application of the developed method to the calculation of the 2-loop and 3-loop contributions is described.

  19. CALIPSO - a computer code for the calculation of fluiddynamics, thermohydraulics and changes of geometry in failing fuel elements of a fast breeder reactor

    International Nuclear Information System (INIS)

    The computer code CALIPSO was developed for the calculation of a hypothetical accident in an LMFBR (Liquid Metal Fast Breeder Reactor) in which the failure of fuel pins is assumed. It calculates, in two dimensions, the thermodynamics, fluid dynamics and changes in geometry of a single fuel pin and its coolant channel in the time period between failure of the pin and a state at which the geometry is nearly destroyed. The determination of temperature profiles in the fuel pin cladding and the channel wall makes it possible to take melting and freezing processes into account. Further features of CALIPSO are the variable channel cross-section, used to model disturbances of the channel geometry, and the calculation of two velocity fields including the consideration of virtual mass effects. The documented version of CALIPSO is especially suited for the calculation of the SIMBATH experiments carried out at the Kernforschungszentrum Karlsruhe, which simulate the above-mentioned accident. The report contains the complete documentation of the CALIPSO code: the modelling of the geometry, the equations used, the structure of the code and the solution procedure, as well as instructions for use with an application example. (orig.)

  20. Dynamic Thermal Loads and Cooling Requirements Calculations for VAC Systems in Nuclear Fuel Processing Facilities Using Computer Aided Energy Conservation Models

    International Nuclear Information System (INIS)

    In terms of nuclear safety, the most important function of ventilation and air conditioning (VAC) systems is to maintain safe ambient conditions for components and structures important to safety inside the nuclear facility and to maintain appropriate working conditions for the plant's operating and maintenance staff. As part of a study aimed at evaluating the performance of the VAC system of a nuclear fuel cycle facility (NFCF), a computer model was developed and verified to evaluate the thermal loads and cooling requirements for the different zones of a fuel processing facility. The program is based on the transfer function method (TFM) and is used to calculate, hour by hour, the dynamic heat gain through various multilayer wall constructions and windows at any orientation of the building. The developed model was verified by comparing the calculated solar heat gain of a given building with the corresponding values calculated using the finite difference method (FDM) and the total equivalent temperature difference method (TETD). As an example, the developed program was used to calculate the cooling loads of the different zones of a typical nuclear fuel facility; the results showed that the cooling capacities of the cooling units of each zone of the facility meet the design requirements according to the safety regulations for nuclear facilities.
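
    The hour-by-hour recursion at the heart of the transfer function method can be written compactly. The sketch below is a generic conduction-transfer-function wall heat gain calculation with hypothetical coefficients, not the program developed in the study: each hour's heat gain depends on current and past sol-air temperatures and on past heat gains.

```python
import numpy as np

def wall_heat_gain(sol_air_temp, room_temp, b, c, d, area):
    """Hourly conduction heat gain through a multilayer wall using a
    conduction-transfer-function recursion of the form

        q''(t) = sum_n b[n]*Te(t-n) - sum_{n>=1} d[n]*q''(t-n) - Trc*sum_n c[n]

    sol_air_temp : 24-h array of hourly sol-air temperatures Te (degC);
                   negative indices wrap, i.e. a steady-periodic design day
    room_temp    : constant room air temperature Trc (degC)
    b, c, d      : CTF coefficient lists (d[0] = 1 by convention)
    area         : wall area (m2); returns hourly heat gain in W"""
    te = np.asarray(sol_air_temp, dtype=float)
    q = np.zeros(len(te))                      # heat flux history, W/m2
    for t in range(len(te)):
        q[t] = (sum(b[n] * te[t - n] for n in range(len(b)))
                - sum(d[n] * q[t - n] for n in range(1, len(d)))
                - room_temp * sum(c))
    return area * q

# Hypothetical coefficients and a 24-h sol-air temperature profile
b = [0.005, 0.05, 0.03, 0.005]          # W/(m2 K)
c = b                                    # steady state requires sum(c) == sum(b)
d = [1.0, -0.9, 0.1]
te = 30 + 10 * np.sin(2 * np.pi * (np.arange(24) - 9) / 24)
print(wall_heat_gain(te, 24.0, b, c, d, area=20.0).round(1))
```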

  1. Development of a computational code for calculations of shielding in dental facilities; Desenvolvimento de um codigo computacional para calculos de blindagem em instalacoes odontologicas

    Energy Technology Data Exchange (ETDEWEB)

    Lava, Deise D.; Borges, Diogo da S.; Affonso, Renato R.W.; Guimaraes, Antonio C.F.; Moreira, Maria de L., E-mail: deise_dy@hotmail.com, E-mail: diogosb@outlook.com, E-mail: raoniwa@yahoo.com.br, E-mail: tony@ien.gov.br, E-mail: malu@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2014-07-01

    This paper addresses shielding calculations intended to minimize the exposure of patients and/or staff to ionizing radiation. The work makes use of the report Radiation Protection in Dentistry (NCRP-145), which establishes the calculations and standards to be adopted to ensure the safety of those who may be exposed to ionizing radiation in dental facilities, according to the dose limits established by the CNEN-NN-3.1 standard published in September 2011. The methodology comprises the use of a computer language to process the data provided by that report, together with a commercial application used for creating residential and decoration projects. The FORTRAN language was adopted for application to a real case. The result is a program capable of returning the thickness of materials such as steel, lead, wood, glass, plaster, acrylic and leaded glass, which can be used for effective shielding against single-pulse or continuous beams. Several variables are used to calculate the thickness of the shield: the number of films used per week, film load, use factor, occupancy factor, distance between the wall and the source, transmission factor, workload, area definition, beam intensity, and intraoral or panoramic examination. Before applying the methodology, the results were validated against examples provided by NCRP-145; the calculations redone from those examples give answers consistent with the report.
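
    A typical broad-beam shielding estimate of the kind such reports describe starts from the required transmission factor B = P·d²/(W·U·T) and converts it into a thickness through tenth-value layers. The sketch below shows that generic calculation with hypothetical numbers; it is not the FORTRAN program of the paper and it omits the distinction between the first and the equilibrium tenth-value layer.

```python
import math

def barrier_thickness(design_goal, distance_m, workload, use_factor,
                      occupancy, tvl_cm):
    """Generic broad-beam barrier estimate.

    design_goal : permitted dose beyond the barrier per week (e.g. mGy/week)
    distance_m  : source-to-occupied-area distance (m)
    workload    : dose output at 1 m per week (same dose unit as design_goal)
    use_factor, occupancy : dimensionless U and T factors
    tvl_cm      : tenth-value layer of the barrier material (cm)
    Returns the barrier thickness in cm (0 if no shielding is needed)."""
    # required transmission factor B = P * d^2 / (W * U * T)
    b = design_goal * distance_m ** 2 / (workload * use_factor * occupancy)
    if b >= 1.0:
        return 0.0
    n_tvl = math.log10(1.0 / b)          # number of tenth-value layers needed
    return n_tvl * tvl_cm

# Hypothetical numbers for a lead barrier, for illustration only
print(barrier_thickness(design_goal=0.02, distance_m=2.0, workload=1.0,
                        use_factor=1.0, occupancy=1.0, tvl_cm=0.03))
```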

  2. SP-FISPACT2001. A computer code for activation and decay calculations for intermediate energies. A connection of FISPACT with MCNPX

    International Nuclear Information System (INIS)

    The calculation of the number of atoms and the activity of materials following nuclear interactions at incident energies up to several GeV is necessary in the design of Accelerator Driven Systems, Radioactive Ion Beam facilities and proton accelerator facilities such as spallation neutron sources. As well as the radioactivity of the materials, this allows the evaluation of the formation of active gaseous elements and the assessment of possible corrosion problems. The particle energies involved here are higher than those used in typical nuclear reactors and fusion devices, for which many codes already exist. These calculations can be performed by coupling two different computer codes: MCNPX and SP-FISPACT. MCNPX performs Monte Carlo particle transport up to energies of several GeV. SP-FISPACT is a modification of FISPACT, a code designed for fusion applications and able to calculate neutron activation at energies <20 MeV. In this way it is possible to perform a hybrid calculation in which neutron activation data are used for neutron interactions at energies <20 MeV and intermediate-energy physics models for all other nuclear interactions.

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  6. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  8. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  9. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  10. Vibrational, NMR and UV-visible spectroscopic investigation and NLO studies on benzaldehyde thiosemicarbazone using computational calculations

    Science.gov (United States)

    Moorthy, N.; Prabakar, P. C. Jobe; Ramalingam, S.; Pandian, G. V.; Anbusrinivasan, P.

    2016-04-01

    In order to investigate the vibrational, electronic and NLO characteristics of the compound benzaldehyde thiosemicarbazone (BTSC), its XRD, FT-IR, FT-Raman, NMR and UV-visible spectra were recorded and analysed against spectra calculated using the HF and B3LYP methods with the 6-311++G(d,p) basis set. The XRD results revealed that the stabilized molecular systems crystallize in an orthorhombic unit cell. The causes of the changes in the chemical and physical properties of the compound are discussed in detail with the help of Mulliken charge levels and NBO analysis. The shift of the molecular vibrational pattern caused by fusing the thiosemicarbazone ligand with benzaldehyde has been closely observed. The occurrence of in-phase and out-of-phase molecular interactions over the frontier molecular orbitals was determined in order to evaluate the degeneracy of the electronic energy levels. Thermodynamic properties over the temperature range 100-1000 K were investigated to assess the thermal stability of the crystal phase of the compound. The NLO properties were evaluated by determining the polarizability and hyperpolarizability of the compound in the crystal phase. The physical stabilization of the geometry of the compound is explained by a geometry deformation analysis.

  11. Vibrational, NMR and UV-Visible spectroscopic investigation, VCD and NLO studies on Benzophenone thiosemicarbazone using computational calculations

    Science.gov (United States)

    Moorthy, N.; Jobe Prabakar, P. C.; Ramalingam, S.; Periandy, S.; Parasuraman, K.

    2016-04-01

    In order to explore the remarkable NLO properties of the prepared benzophenone thiosemicarbazone (BPTSC), an experimental and theoretical investigation has been made. The theoretical calculations were carried out using the RHF and CAM-B3LYP methods with the 6-311++G(d,p) basis set. The title compound contains a C=S ligand, which helps to improve the second-harmonic-generation (SHG) efficiency. The molecule has been examined in terms of its vibrational, electronic and optical properties. The entire molecular behaviour was studied through the fundamental IR and Raman wavenumbers and compared with the theoretical results. The molecular chirality has been studied by vibrational circular dichroism (circularly polarized infrared radiation). The Mulliken charge levels of the compound confirm the perturbation of the atomic charges by the ligand. The molecular interaction of the frontier orbitals emphasizes the modification of the chemical properties of the compound along the reaction path. A large NLO activity is induced by the benzophenone moiety in the thiosemicarbazone. The Gibbs free energy was evaluated at different temperatures, from which the enhancement of chemical stability was inferred. The VCD spectrum was simulated and the optical dichroism of the compound has been analysed.

  12. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used, especially in the analysis of very complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied, and procedures for the implementation of importance sampling are suggested.
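
    A minimal illustration of importance sampling in a reliability calculation, assuming a standard-normal input space and a linear limit state (this is not the program described in the record): samples are drawn from a normal density shifted towards the failure region and reweighted by the density ratio, so that a very small failure probability can be estimated with far fewer samples than crude Monte Carlo would need.

```python
import numpy as np

rng = np.random.default_rng(1)

def failure_probability(limit_state, shift, n=100_000):
    """Importance-sampling estimate of P(limit_state(x) < 0) for a 2-D
    standard-normal input, sampling from a normal density shifted towards
    the failure region and reweighting each sample by the density ratio."""
    x = rng.normal(0.0, 1.0, (n, 2)) + shift         # samples from the IS density
    # log weight = log phi(x) - log phi_shifted(x)
    log_w = -0.5 * np.sum(x**2, axis=1) + 0.5 * np.sum((x - shift)**2, axis=1)
    fails = limit_state(x) < 0
    return float(np.mean(fails * np.exp(log_w)))

# Example limit state g(x) = 4.5 - (x1 + x2)/sqrt(2); exact Pf = Phi(-4.5) ~ 3.4e-6
g = lambda x: 4.5 - (x[:, 0] + x[:, 1]) / np.sqrt(2.0)
shift = np.full(2, 4.5 / np.sqrt(2.0))               # the design point of g
print(failure_probability(g, shift))                 # close to 3.4e-6
```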

  13. A Monte Carlo program for calculating high energy spectra in cylindrical geometry on the IBM 709 computer

    International Nuclear Information System (INIS)

    The report describes an IBM 709 program written at the request of the Reactor Division, Harwell, to obtain high-energy spectra in a system containing a number of fissile and non-fissile materials arranged as concentric cylinders of infinite length, surrounded by an outer material with a square or rectangular boundary. At the cell boundary, neutrons can be lost by leakage or reflected back into the system. A specified number of fission neutrons born in the fissile materials, together with any descendants they may have, are tracked one by one through the system until they are absorbed, lost by leakage through the lattice boundary, or their energies have fallen below a specifiable cut-off energy. The neutrons may be started from anywhere in the system, and all neutron-nucleus reactions that occur in the nuclides supplied with the program are allowed. A description is given of the use of the program, the current version of which is available as a self-loading binary tape containing, in addition to the program, all the nuclear data at present available. Binary card decks are also available, and nuclear data for other nuclides can be added. A feature of the program is the flexibility with which the core storage available for input and output data can be allocated according to the requirements of the problem. The output of the program is in the form of a binary-coded-decimal (BCD) tape which can be used on the normal IBM off-line equipment to print out the results. An example is given of the results obtained for the spatial distribution of neutrons in a simple uranium-D2O system, for use in radiation damage calculations.
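
    The tracking loop the report describes can be caricatured in a few lines. The toy sketch below is not the IBM 709 program: it follows neutrons started on the axis of a single homogeneous infinite cylinder from collision to collision until they are absorbed, leak out radially, or fall below a cut-off energy; the cross-sections, the energy-loss rule and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def track_neutrons(n, radius_cm, sigma_t, absorb_frac, xi=0.8,
                   e_start=2.0, e_cut=0.1):
    """Toy analogue tracking loop: neutrons start on the axis of an infinite
    homogeneous cylinder and are followed collision by collision until they
    are absorbed, leak out radially, or fall below the cut-off energy (MeV).

    sigma_t     : total macroscopic cross-section (1/cm)
    absorb_frac : probability that a collision is an absorption
    xi          : average logarithmic energy decrement per scatter"""
    fates = {"absorbed": 0, "leaked": 0, "below cut-off": 0}
    for _ in range(n):
        pos, energy = np.zeros(3), e_start
        while True:
            mu = rng.uniform(-1.0, 1.0)                    # isotropic direction
            phi = rng.uniform(0.0, 2.0 * np.pi)
            s = np.sqrt(1.0 - mu**2)
            direction = np.array([s * np.cos(phi), s * np.sin(phi), mu])
            pos = pos + direction * rng.exponential(1.0 / sigma_t)
            if pos[0]**2 + pos[1]**2 > radius_cm**2:       # left the cylinder
                fates["leaked"] += 1
                break
            if rng.random() < absorb_frac:                 # absorbed at collision
                fates["absorbed"] += 1
                break
            energy *= np.exp(-xi)                          # mean energy loss per scatter
            if energy < e_cut:
                fates["below cut-off"] += 1
                break
    return fates

print(track_neutrons(20_000, radius_cm=10.0, sigma_t=0.3, absorb_frac=0.1))
```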

  14. Description of input and examples for PHREEQC version 3: a computer program for speciation, batch-reaction, one-dimensional transport, and inverse geochemical calculations

    Science.gov (United States)

    Parkhurst, David L.; Appelo, C.A.J.

    2013-01-01

    PHREEQC version 3 is a computer program written in the C and C++ programming languages that is designed to perform a wide variety of aqueous geochemical calculations. PHREEQC implements several types of aqueous models: two ion-association aqueous models (the Lawrence Livermore National Laboratory model and WATEQ4F), a Pitzer specific-ion-interaction aqueous model, and the SIT (Specific ion Interaction Theory) aqueous model. Using any of these aqueous models, PHREEQC has capabilities for (1) speciation and saturation-index calculations; (2) batch-reaction and one-dimensional (1D) transport calculations with reversible and irreversible reactions, which include aqueous, mineral, gas, solid-solution, surface-complexation, and ion-exchange equilibria, and specified mole transfers of reactants, kinetically controlled reactions, mixing of solutions, and pressure and temperature changes; and (3) inverse modeling, which finds sets of mineral and gas mole transfers that account for differences in composition between waters within specified compositional uncertainty limits. Many new modeling features were added to PHREEQC version 3 relative to version 2. The Pitzer aqueous model (pitzer.dat database, with keyword PITZER) can be used for high-salinity waters that are beyond the range of application for the Debye-Hückel theory. The Peng-Robinson equation of state has been implemented for calculating the solubility of gases at high pressure. Specific volumes of aqueous species are calculated as a function of the dielectric properties of water and the ionic strength of the solution, which allows calculation of pressure effects on chemical reactions and the density of a solution. The specific conductance and the density of a solution are calculated and printed in the output file. In addition to Runge-Kutta integration, a stiff ordinary differential equation solver (CVODE) has been included for kinetic calculations with multiple rates that occur at widely different time scales
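
    One of the quantities such speciation calculations report, the saturation index, is simple to state: SI = log10(IAP) - log10(K), where IAP is the ion-activity product of the dissolution reaction. The sketch below evaluates it for hypothetical calcite activities; it only illustrates the definition and does not use PHREEQC itself.

```python
import math

def saturation_index(ion_activities, log_k):
    """Saturation index of a mineral: SI = log10(IAP) - log10(K), where the
    ion-activity product IAP multiplies the activities of the dissolved
    products of the dissolution reaction."""
    log_iap = sum(math.log10(a) for a in ion_activities)
    return log_iap - log_k

# Calcite, CaCO3 = Ca(2+) + CO3(2-): hypothetical activities, log K = -8.48 at 25 degC
print(saturation_index([3.2e-4, 1.1e-5], log_k=-8.48))   # slightly positive: near saturation
```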

  15. Development of computer programme for the use of empirical calculation of mining subsidence; Desarrollo informatico para utilizacion de los metodos empiricos de calculo de subsidencia minera

    Energy Technology Data Exchange (ETDEWEB)

    1999-09-01

    The fundamental objective of the project is the development of a user-friendly computer programme which allows mining technicians to apply easily the empirical methods for the calculation of mining subsidence. As is well known, these methods use, together with a suitable theoretical basis, experimental data obtained during a long period of mining activity in areas of different geological and geomechanical nature. They can therefore incorporate into the calculation local parameters that could hardly be taken into account by purely theoretical methods. In general, the basic calculation method followed is the procedure developed by the VNIMI Institute of Leningrad, which is particularly suitable for application to the widely varying conditions that may occur in the mining of flat or steep seams. The computer programme has been built on the MicroStation system (version 5.0) of INTERGRAPH, which allows the development of new applications related to the basic aims of the project. An important feature of the programme is its easy adaptation to local conditions by adjusting the geomechanical or mining parameters according to the values obtained from the user's own working experience. (Author)

  16. Hepatic arterial phase and portal venous phase computed tomography for dose calculation of stereotactic body radiation therapy plans in liver cancer: a dosimetric comparison study

    International Nuclear Information System (INIS)

    To investigate the effect of computed tomography (CT) using hepatic arterial phase (HAP) and portal venous phase (PVP) contrast on dose calculation of stereotactic body radiation therapy (SBRT) for liver cancer. Twenty-one patients with liver cancer were studied. HAP, PVP and non-enhanced CTs were performed on subjects scanned in identical positions under active breathing control (ABC). SBRT plans were generated using seven-field three-dimensional conformal radiotherapy (7 F-3D-CRT), seven-field intensity-modulated radiotherapy (7 F-IMRT) and single-arc volumetric modulated arc therapy (VMAT) based on the PVP CT. Plans were copied to the HAP and non-enhanced CTs. Radiation doses calculated from the three phases of CTs were compared with respect to the planning target volume (PTV) and the organs at risk (OAR) using the Friedman test and the Wilcoxon signed ranks test. SBRT plans calculated from either PVP or HAP CT, including 3D-CRT, IMRT and VMAT plans, demonstrated significantly lower (p <0.05) minimum absorbed doses covering 98%, 95%, 50% and 2% of PTV (D98%, D95%, D50% and D2%) than those calculated from non-enhanced CT. The mean differences between PVP or HAP CT and non-enhanced CT were less than 2% and 1% respectively. All mean dose differences between the three phases of CTs for OARs were less than 2%. Our data indicate that though the differences in dose calculation between contrast phases are not clinically relevant, dose underestimation (i.e., delivery of higher-than-intended doses) resulting from CT using PVP contrast is larger than that resulting from CT using HAP contrast when compared against doses based upon non-contrast CT in SBRT treatment of liver cancer using VMAT, IMRT or 3D-CRT

  17. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  18. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns lead by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  19. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  20. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  3. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  4. Application of the ICRP/ICRU reference computational phantoms to internal dosimetry: calculation of specific absorbed fractions of energy for photons and electrons

    Science.gov (United States)

    Hadid, L.; Desbrée, A.; Schlattl, H.; Franck, D.; Blanchardon, E.; Zankl, M.

    2010-07-01

    The emission of radiation from a contaminated body region is connected with the dose received by radiosensitive tissue through the specific absorbed fractions (SAFs) of emitted energy, which is therefore an essential quantity for internal dose assessment. A set of SAFs were calculated using the new adult reference computational phantoms, released by the International Commission on Radiological Protection (ICRP) together with the International Commission on Radiation Units and Measurements (ICRU). Part of these results has been recently published in ICRP Publication 110 (2009 Adult reference computational phantoms (Oxford: Elsevier)). In this paper, we mainly discuss the results and also present them in numeric form. The emission of monoenergetic photons and electrons with energies ranging from 10 keV to 10 MeV was simulated for three source organs: lungs, thyroid and liver. SAFs were calculated for four target regions in the body: lungs, colon wall, breasts and stomach wall. For quality assurance purposes, the simulations were performed simultaneously at the Helmholtz Zentrum München (HMGU, Germany) and at the Institute for Radiological Protection and Nuclear Safety (IRSN, France), using the Monte Carlo transport codes EGSnrc and MCNPX, respectively. The comparison of results shows overall agreement for photons and high-energy electrons with differences lower than 8%. Nevertheless, significant differences were found for electrons at lower energy for distant source/target organ pairs. Finally, the results for photons were compared to the SAF values derived using mathematical phantoms. Significant variations that can amount to 200% were found. The main reason for these differences is the change of geometry in the more realistic voxel body models. For electrons, no SAFs have been computed with the mathematical phantoms; instead, approximate formulae have been used by both the Medical Internal Radiation Dose committee (MIRD) and the ICRP due to the limitations imposed
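
    For orientation, the MIRD-type relation that makes SAFs useful links them directly to absorbed dose: the dose rate in a target region is the source activity times the energy emitted per decay times the SAF (in kg^-1). The Python sketch below is a minimal illustration with made-up activity, emission and SAF values; none of the numbers are taken from the tabulated results of the paper.

```python
MEV_TO_J = 1.602176634e-13  # joules per MeV

def absorbed_dose_rate(activity_bq, emissions, saf_per_kg):
    """Absorbed dose rate (Gy/s) in a target region from one source region.

    emissions: list of (yield_per_decay, energy_MeV) pairs
    saf_per_kg: specific absorbed fraction SAF(target <- source), in kg^-1
    """
    energy_per_decay_mev = sum(y * e for y, e in emissions)
    return activity_bq * energy_per_decay_mev * MEV_TO_J * saf_per_kg

# Hypothetical numbers, purely for illustration (not taken from the paper):
dose_rate = absorbed_dose_rate(
    activity_bq=1.0e6,
    emissions=[(0.85, 0.364)],   # e.g. one dominant photon line (assumed)
    saf_per_kg=3.0e-3,           # assumed SAF for the source/target pair
)
print(f"{dose_rate:.3e} Gy/s")
```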

  5. Application of the ICRP/ICRU reference computational phantoms to internal dosimetry: calculation of specific absorbed fractions of energy for photons and electrons

    Energy Technology Data Exchange (ETDEWEB)

    Hadid, L; Desbree, A; Franck, D; Blanchardon, E [IRSN, Institute for Radiological Protection and Nuclear Safety, Internal Dosimetry Department, IRSN/DRPH/SDI, BP 17, F-92262 Fontenay-aux-Roses Cedex (France); Schlattl, H; Zankl, M, E-mail: lama.hadid@irsn.f [Institute of Radiation Protection, Helmholtz Zentrum Muenchen-German Research Center for Environmental Health, Neuherberg (Germany)

    2010-07-07

    The emission of radiation from a contaminated body region is connected with the dose received by radiosensitive tissue through the specific absorbed fractions (SAFs) of emitted energy, which is therefore an essential quantity for internal dose assessment. A set of SAFs were calculated using the new adult reference computational phantoms, released by the International Commission on Radiological Protection (ICRP) together with the International Commission on Radiation Units and Measurements (ICRU). Part of these results has been recently published in ICRP Publication 110 (2009 Adult reference computational phantoms (Oxford: Elsevier)). In this paper, we mainly discuss the results and also present them in numeric form. The emission of monoenergetic photons and electrons with energies ranging from 10 keV to 10 MeV was simulated for three source organs: lungs, thyroid and liver. SAFs were calculated for four target regions in the body: lungs, colon wall, breasts and stomach wall. For quality assurance purposes, the simulations were performed simultaneously at the Helmholtz Zentrum Muenchen (HMGU, Germany) and at the Institute for Radiological Protection and Nuclear Safety (IRSN, France), using the Monte Carlo transport codes EGSnrc and MCNPX, respectively. The comparison of results shows overall agreement for photons and high-energy electrons with differences lower than 8%. Nevertheless, significant differences were found for electrons at lower energy for distant source/target organ pairs. Finally, the results for photons were compared to the SAF values derived using mathematical phantoms. Significant variations that can amount to 200% were found. The main reason for these differences is the change of geometry in the more realistic voxel body models. For electrons, no SAFs have been computed with the mathematical phantoms; instead, approximate formulae have been used by both the Medical Internal Radiation Dose committee (MIRD) and the ICRP due to the limitations imposed

  6. SPEI Calculator

    OpenAIRE

    Beguería, Santiago; Vicente Serrano, Sergio M.

    2009-01-01

    [EN] *Objectives: The program calculates time series of the Standardised Precipitation-Evapotransporation Index (SPEI). *Technical Characteristics: The program is executed from the Windows console. From an input data file containing monthly time series of precipitation and mean temperature, plus the geographic coordinates of the observatory, the program computes the SPEI accumulated at the time interval specified by the user, and generates a new data file with the SPEI time serie...

  7. Computer programming to calculate the variations of characteristic angles of heliostats as a function of time and position in a central receiver solar power plant

    Energy Technology Data Exchange (ETDEWEB)

    Mehrabian, M.A.; Aseman, R.D. [Mechanical Engineering Dept., Shahid Bahonar Univ., Kerman (Iran, Islamic Republic of)

    2008-07-01

    The central receiver solar power plant is composed of a large number of individually steered mirrors (heliostats), focusing the solar radiation onto a tower-mounted receiver. In this paper, an algorithm is developed based on vector geometry to pick an individual heliostat and calculate its characteristic angles at different times of the day and different days of the year. The algorithm then picks the other heliostats one by one and performs the same calculations as for the first one. These data are used to control the orientation of the heliostats and so improve the performance of the field. This procedure is relatively straightforward and quite suitable for computer programming. The effect of major parameters such as shading and blocking on the performance of the heliostat field is also studied using this algorithm. The results of the computer simulation are presented in three sections: (1) the characteristic angles of individual heliostats, (2) the incidence angle of the sun rays striking individual heliostats, and (3) the blocking and shading effect of each heliostat. The calculations and comparisons of results show that: (a) the average incidence angle in the northern hemisphere at the north side of the tower is less than that at its south side, (b) the cosine losses are reduced as the latitude is increased or the tower height is increased, (c) the blocking effect is more important in winter and its effect is much more noticeable than shading for large fields, (d) the height of the tower does not considerably affect shading, but significantly reduces the blocking effect, and (e) to have no blocking effect throughout the year, the field design should be performed for the winter solstice noon. (orig.)
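
    A minimal sketch of one of the characteristic-angle calculations mentioned above: for a heliostat whose normal bisects the sun and receiver directions, the incidence angle is half the angle between the two unit vectors. The positions and sun vector below are assumed purely for illustration and are unrelated to the paper's field layout.

```python
import numpy as np

def incidence_angle_deg(sun_dir, heliostat_pos, receiver_pos):
    """Incidence angle (degrees) on a heliostat whose normal bisects the
    sun direction and the direction toward the receiver."""
    s = np.asarray(sun_dir, dtype=float)
    t = np.asarray(receiver_pos, dtype=float) - np.asarray(heliostat_pos, dtype=float)
    s /= np.linalg.norm(s)
    t /= np.linalg.norm(t)
    # The mirror normal bisects s and t, so the angle between s and t is 2*theta_i.
    cos_2theta = np.clip(np.dot(s, t), -1.0, 1.0)
    return np.degrees(0.5 * np.arccos(cos_2theta))

# Illustrative positions (metres) and a mid-morning sun vector; all values assumed.
print(incidence_angle_deg(sun_dir=[0.3, -0.5, 0.81],
                          heliostat_pos=[0.0, 100.0, 0.0],
                          receiver_pos=[0.0, 0.0, 80.0]))
```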

  8. Computational modeling of the mathematical dummy of the Brazilian woman for calculations of internal dosimetry and ends of comparison of the fractions absorbed specific with the woman reference

    International Nuclear Information System (INIS)

    Tools for dosimetric calculations are of the utmost importance for applying the basic principles of radiological protection, not only in nuclear medicine but also in other scientific fields. In this work a mathematical model of the Brazilian woman is developed to serve as a basis for calculations of Specific Absorbed Fractions (SAFs) in internal organs and in the skeleton, in accordance with the objectives of diagnosis or therapy in nuclear medicine. The model developed here is similar in form to that of Snyder, but modified to be more representative of the Brazilian woman. To do this, the formalism of the Monte Carlo method was used by means of the ALGAM-97R computational code. As a further contribution of this thesis, we developed the computational system cSAF - consultation of Specific Absorbed Fractions (cFAE, from the Portuguese acronym) - which furnishes several look-up facilities for the research user. The dialogue interface with the operator was designed following current practices for event-oriented languages. This interface lets the user navigate among the reference models, choose the source organ and the desired energy, and receive an answer through an efficient and intuitive dialogue. The system furnishes, in addition to the data for the Brazilian woman, data for the Snyder model and for the model of the Brazilian man, and it makes available not only the individual SAFs of the three models but also a comparison among them. (author)

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, are provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each others time zones by monitoring/debugging pilot jobs sent from the facto...

  10. Development of a computer program of fast calculation for the pre design of advanced nuclear fuel 10 x 10 for BWR type reactors

    International Nuclear Information System (INIS)

    At the National Institute of Nuclear Research (ININ), a methodology is being developed to optimize the design of 10x10 cells of fuel assemblies for boiling water reactors (BWR). A linear calculation formula based on a coefficient matrix (the rate of change of the relative power due to changes in the U-235 enrichment) was proposed to estimate the relative pin powers of a cell. On this basis, the fast-calculation computer program PreDiCeldas was developed. By means of a simple search algorithm, it minimizes the maximum relative power peak of the cell, or LPPF, by varying the distribution of U-235 inside the cell while keeping its average enrichment fixed. The accuracy in the estimation of the relative pin powers is of the order of 1.9% when compared with results of the 'best estimate' HELIOS code. With PreDiCeldas it was possible, in a minimal calculation time, to re-design a reference cell, lowering the LPPF at the beginning of life from 1.44 to 1.31. The low-LPPF cell design is intended to allow the design of cycles even longer than those currently reached in the BWRs of the Laguna Verde plant. (Author)
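
    The linear formula described above can be pictured as a first-order sensitivity model, p = p_ref + C (e - e_ref). The toy sketch below, with a hypothetical three-pin "cell" and made-up coefficients, only illustrates that idea; the real code works on 10x10 cells with coefficients derived from lattice calculations.

```python
import numpy as np

# First-order model: p = p_ref + C @ (e - e_ref), where C[i, j] is the
# sensitivity of the relative power of pin i to the enrichment of pin j.
# All numbers below are made up for illustration; they are not from the code.
p_ref = np.array([1.10, 0.95, 0.95])          # reference relative pin powers
e_ref = np.array([3.2, 3.2, 3.2])             # reference enrichments (wt% U-235)
C = np.array([[ 0.12, -0.04, -0.04],
              [-0.05,  0.11, -0.03],
              [-0.05, -0.03,  0.11]])

def relative_powers(enrichments):
    return p_ref + C @ (np.asarray(enrichments, dtype=float) - e_ref)

candidate = [2.9, 3.3, 3.4]                   # keeps roughly the same average enrichment
p = relative_powers(candidate)
print("pin powers:", p.round(3), " LPPF:", p.max().round(3))
```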

  11. MAIL3.0: a computer program calculating cross section sets for SIMCRI, ANISN, KENO-IV, MULTI-KENO and MULTI-KENO-II

    International Nuclear Information System (INIS)

    This paper is a user manual for the computer program MAIL3.0, which generates various types of cross-section sets for neutron transport theory programs. MAIL3.0 is a revised version of the MAIL program in the JACS code system and has the following new features: (1) Both the conventional MGCL library and the new memory-saved library with a P3-scattering matrix file can be processed. (2) A cross-section library for MULTI-KENO-II can be made. (3) A self-shielding factor, f(σ0, T), can be calculated at a specified temperature T by interpolating two f-tables with different temperatures in the MGCL library. (4) The interpolation method of f(σ0) for σ0 ≥ 10^5 barn is revised. (5) MCDAN, a program for calculating Dancoff correction factors by the Monte Carlo method, is included. (6) The h-table that compensates the narrow resonance approximation can be read and processed. (7) A program to calculate atomic number densities of various nuclear materials is included. (8) Atomic number densities of materials such as structural materials, moderators and poisons are available. (author)

  12. Evaluation of On-Board kV Cone Beam Computed Tomography–Based Dose Calculation With Deformable Image Registration Using Hounsfield Unit Modifications

    International Nuclear Information System (INIS)

    Purpose: The purpose of this study was to estimate the accuracy of the dose calculation of On-Board Imager (Varian, Palo Alto, CA) cone beam computed tomography (CBCT) with deformable image registration (DIR), using the multilevel-threshold (MLT) algorithm and histogram matching (HM) algorithm in pelvic radiation therapy. Methods and Materials: One pelvis phantom and 10 patients with prostate cancer treated with intensity modulated radiation therapy were studied. To minimize the effect of organ deformation and different Hounsfield unit values between planning CT (PCT) and CBCT, we modified CBCT (mCBCT) with DIR by using the MLT (mCBCTMLT) and HM (mCBCTHM) algorithms. To evaluate the accuracy of the dose calculation, we compared dose differences in dosimetric parameters (mean dose [Dmean], minimum dose [Dmin], and maximum dose [Dmax]) for planning target volume, rectum, and bladder between PCT (reference) and CBCTs or mCBCTs. Furthermore, we investigated the effect of organ deformation by comparing DIR and rigid registration (RR). We determined whether dose differences between PCT and mCBCTs were significantly lower than in CBCT by using the Student t test. Results: For patients, the average dose differences in all dosimetric parameters of CBCT with DIR were smaller than those of CBCT with RR (e.g., rectum: 0.54% for DIR vs 1.24% for RR). For the mCBCTs with DIR, the average dose differences in all dosimetric parameters were less than 1.0%. Conclusions: We evaluated the accuracy of the dose calculation in CBCT, mCBCTMLT, and mCBCTHM with DIR for 10 patients. The results showed that dose differences in Dmean, Dmin, and Dmax in mCBCTs were within 1%, which were significantly better than those in CBCT, especially for the rectum (P<.05). Our results indicate that the mCBCTMLT and mCBCTHM can be useful for improving the dose calculation for adaptive radiation therapy
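
    As a generic illustration of the histogram-matching idea behind the mCBCT_HM images (not the authors' implementation), the sketch below maps CBCT voxel values through their empirical CDF onto the planning-CT intensity distribution; the synthetic HU maps are assumed for demonstration only.

```python
import numpy as np

def histogram_match(cbct_hu, pct_hu):
    """Map CBCT voxel values so their histogram matches the planning-CT one.

    Generic histogram-matching sketch; not the specific algorithm
    evaluated in the study.
    """
    cbct = np.asarray(cbct_hu, dtype=float).ravel()
    pct = np.asarray(pct_hu, dtype=float).ravel()
    cbct_sorted = np.sort(cbct)
    pct_sorted = np.sort(pct)
    # Empirical CDF rank of each CBCT voxel...
    ranks = np.searchsorted(cbct_sorted, cbct, side="right") / cbct.size
    # ...pushed through the planning-CT quantile function.
    matched = np.interp(ranks, np.linspace(0.0, 1.0, pct_sorted.size), pct_sorted)
    return matched.reshape(np.shape(cbct_hu))

rng = np.random.default_rng(0)
cbct = rng.normal(40, 120, size=(64, 64))   # synthetic HU maps, illustration only
pct = rng.normal(0, 100, size=(64, 64))
print(histogram_match(cbct, pct).mean().round(1), pct.mean().round(1))
```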

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  14. Heterogeneous Calculation of ε

    International Nuclear Information System (INIS)

    A heterogeneous method of calculating the fast fission factor given by Naudet has been applied to the Carlvik-Pershagen definition of ε. An exact calculation of the collision probabilities is included in the programme developed for the Ferranti Mercury computer

  15. WAZA-ARI: Computational dosimetry system for x-ray CT examinations. I. radiation transport calculation for organ and tissue doses evaluation using JM phantom

    International Nuclear Information System (INIS)

    A web system, WAZA-ARI, is being developed to assess the radiation dose to a patient in a computed tomography examination. WAZA-ARI uses one of several organ dose data sets, corresponding to the options selected by a user to describe the examination conditions. The organ dose data have been derived with the Particle and Heavy Ion Transport code system, combined with the Japanese male (JM) phantom, whose configuration is adjusted to the average Japanese adult male. In addition, a new phantom, obtained by removing the arms from the JM phantom, is introduced for dose calculations in torso examinations. Some of the organ doses computed with the armless JM phantom are compared with results obtained using a MIRD-type phantom, which was applied in some previous dosimetry systems. (authors)

  16. Calculation of limits for significant bidirectional changes in two or more serial results of a biomarker based on a computer simulation model

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G; Söletormos, Georg

    2015-01-01

    BACKGROUND: Reference change values provide objective tools to assess the significance of a change in two consecutive results of a biomarker from an individual. However, in practice, more results are usually available, and using the reference change value concept on more than two results will increase the number of false positive results. METHODS: A computer simulation model was developed using Excel. Based on 10,000 simulated measurements among healthy individuals, a series of up to 20 results of a biomarker from each individual was generated using different values for the within-subject biological variation plus the analytical variation, and limits for significant bidirectional change were derived for constant cumulated probabilities of 95% (P < 0.05) and 99% (P < 0.01). RESULTS: Factors used to multiply the first result from an individual were calculated to create limits for constant cumulated significant changes. The factors were shown to be a function of the number of results included and the total coefficient of variation. CONCLUSIONS: The first result should be...
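
    For context, the classical two-result reference change value is RCV = sqrt(2) · Z · sqrt(CV_A² + CV_I²); the multiplying factors derived in the paper generalize this to series of more than two results. The sketch below computes only the classical two-result limit, with assumed illustrative coefficients of variation (not values from the study).

```python
import math

def reference_change_value(cv_analytical, cv_within_subject, z=1.96):
    """Classical two-result RCV (percent): sqrt(2) * Z * sqrt(CVa^2 + CVi^2)."""
    return math.sqrt(2.0) * z * math.hypot(cv_analytical, cv_within_subject)

# Illustrative CVs in percent (assumed, not taken from the paper):
print(f"RCV (95%): {reference_change_value(3.0, 6.0):.1f}%")
print(f"RCV (99%): {reference_change_value(3.0, 6.0, z=2.58):.1f}%")
```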

  17. Calculation of limits for significant unidirectional changes in two or more serial results of a biomarker based on a computer simulation model

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G; Söletormos, Georg

    2015-01-01

    BACKGROUND: Reference change values (RCVs) were introduced more than 30 years ago and provide objective tools for assessment of the significance of differences in two consecutive results from an individual. However, in practice, more results are usually available for monitoring, and using the RCV concept on more than two results will increase the number of false positive results. METHODS: Based on 10,000 simulated data from healthy individuals, a series of up to 20 results from an individual was generated using different values for the within-subject biological variation plus the analytical variation. Each new result in this series was compared to the initial measurement result. These successive serial relative differences were computed to give limits for significant unidirectional differences with a constant cumulated maximum probability of both 95% (P < 0.05) and 99% (P < 0.01). RESULTS: Factors used to multiply the first result from an individual were calculated to create the limits for constant...

  18. GORGON - a computer code for the calculation of energy deposition and the slowing down of ions in cold materials and hot dense plasmas

    International Nuclear Information System (INIS)

    The computer code GORGON, which calculates the energy deposition and slowing down of ions in cold materials and hot plasmas is described, and analyzed in this report. This code is in a state of continuous development but an intermediate stage has been reached where it is considered useful to document the 'state of the art' at the present time. The GORGON code is an improved version of a code developed by Zinamon et al. as part of a more complex program system for studying the hydrodynamic motion of plane metal targets irradiated by intense beams of protons. The improvements made in the code were necessary to improve its usefulness for problems related to the design and burn of heavy ion beam driven inertial confinement fusion targets. (orig./GG)

  19. Results of an international parallel calculations exercise comparing creep responses predicted with three computer codes for two excavations in rock salt

    International Nuclear Information System (INIS)

    Creep responses computed with the ANSALT finite element code, which was developed by the Federal Institute for Geosciences and Natural Resources in the Federal Republic of Germany (FRG), are compared to responses computed in the United States with the SANCHO and SPECTROM codes for two well defined boundary value problems. One boundary value problem is an idealization of an underground room configuration proposed as a repository for low level nuclear waste. The other is an idealization of a room configuration designed for testing the effect of heat on the structural response of rock salt. Both room configurations represent actual excavations in bedded salt at the Waste Isolation Pilot Plant facility near Carlsbad, New Mexico. The SANCHO and SPECTROM solutions to the boundary value problems have already been compared and analyzed extensively in an earlier parallel calculations exercise. The study presented here is an extension of the earlier exercise and was performed as part of a bilateral agreement between the US and the FRG to exchange technology related to the development of nuclear waste repositories in rock salt. Parameters on which the comparisons are based include both displacements and stresses. A rigorous set of procedures designed to eliminate input errors was followed. The comparisons indicate only minor differences between the ANSALT, SANCHO, and SPECTROM solutions, and in most cases reasons for these differences are clearly identified. 14 refs., 62 figs., 2 tabs

  20. Computer-assisted radiographic calculation of spinal curvature in brachycephalic "screw-tailed" dog breeds with congenital thoracic vertebral malformations: reliability and clinical evaluation.

    Science.gov (United States)

    Guevar, Julien; Penderis, Jacques; Faller, Kiterie; Yeamans, Carmen; Stalin, Catherine; Gutierrez-Quintana, Rodrigo

    2014-01-01

    The objectives of this study were: To investigate computer-assisted digital radiographic measurement of Cobb angles in dogs with congenital thoracic vertebral malformations, to determine its intra- and inter-observer reliability and its association with the presence of neurological deficits. Medical records were reviewed (2009-2013) to identify brachycephalic screw-tailed dog breeds with radiographic studies of the thoracic vertebral column and with at least one vertebral malformation present. Twenty-eight dogs were included in the study. The end vertebrae were defined as the cranial end plate of the vertebra cranial to the malformed vertebra and the caudal end plate of the vertebra caudal to the malformed vertebra. Three observers performed the measurements twice. Intraclass correlation coefficients were used to calculate the intra- and inter-observer reliabilities. The intraclass correlation coefficient was excellent for all intra- and inter-observer measurements using this method. There was a significant difference in the kyphotic Cobb angle between dogs with and without associated neurological deficits. The majority of dogs with neurological deficits had a kyphotic Cobb angle higher than 35°. No significant difference in the scoliotic Cobb angle was observed. We concluded that the computer assisted digital radiographic measurement of the Cobb angle for kyphosis and scoliosis is a valid, reproducible and reliable method to quantify the degree of spinal curvature in brachycephalic screw-tailed dog breeds with congenital thoracic vertebral malformations. PMID:25198374
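
    The Cobb angle itself is simply the angle between the two end-plate lines chosen as described above. A minimal sketch, assuming each end plate has been digitized as a pair of 2-D landmark points on the radiograph (the coordinates below are hypothetical, not study data):

```python
import numpy as np

def cobb_angle_deg(endplate_a, endplate_b):
    """Angle (degrees) between two end-plate lines, each given as a pair of
    2-D points digitized on the radiograph."""
    va = np.subtract(endplate_a[1], endplate_a[0]).astype(float)
    vb = np.subtract(endplate_b[1], endplate_b[0]).astype(float)
    cosang = abs(np.dot(va, vb)) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical end-plate landmarks (pixel coordinates), for illustration only.
cranial_endplate = [(102, 215), (180, 198)]
caudal_endplate = [(110, 340), (185, 372)]
print(f"{cobb_angle_deg(cranial_endplate, caudal_endplate):.1f} deg")
```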

  1. Computer-assisted radiographic calculation of spinal curvature in brachycephalic "screw-tailed" dog breeds with congenital thoracic vertebral malformations: reliability and clinical evaluation.

    Directory of Open Access Journals (Sweden)

    Julien Guevar

    Full Text Available The objectives of this study were: To investigate computer-assisted digital radiographic measurement of Cobb angles in dogs with congenital thoracic vertebral malformations, to determine its intra- and inter-observer reliability and its association with the presence of neurological deficits. Medical records were reviewed (2009-2013) to identify brachycephalic screw-tailed dog breeds with radiographic studies of the thoracic vertebral column and with at least one vertebral malformation present. Twenty-eight dogs were included in the study. The end vertebrae were defined as the cranial end plate of the vertebra cranial to the malformed vertebra and the caudal end plate of the vertebra caudal to the malformed vertebra. Three observers performed the measurements twice. Intraclass correlation coefficients were used to calculate the intra- and inter-observer reliabilities. The intraclass correlation coefficient was excellent for all intra- and inter-observer measurements using this method. There was a significant difference in the kyphotic Cobb angle between dogs with and without associated neurological deficits. The majority of dogs with neurological deficits had a kyphotic Cobb angle higher than 35°. No significant difference in the scoliotic Cobb angle was observed. We concluded that the computer assisted digital radiographic measurement of the Cobb angle for kyphosis and scoliosis is a valid, reproducible and reliable method to quantify the degree of spinal curvature in brachycephalic screw-tailed dog breeds with congenital thoracic vertebral malformations.

  2. Computer-Assisted Radiographic Calculation of Spinal Curvature in Brachycephalic “Screw-Tailed” Dog Breeds with Congenital Thoracic Vertebral Malformations: Reliability and Clinical Evaluation

    Science.gov (United States)

    Guevar, Julien; Penderis, Jacques; Faller, Kiterie; Yeamans, Carmen; Stalin, Catherine; Gutierrez-Quintana, Rodrigo

    2014-01-01

    The objectives of this study were: To investigate computer-assisted digital radiographic measurement of Cobb angles in dogs with congenital thoracic vertebral malformations, to determine its intra- and inter-observer reliability and its association with the presence of neurological deficits. Medical records were reviewed (2009–2013) to identify brachycephalic screw-tailed dog breeds with radiographic studies of the thoracic vertebral column and with at least one vertebral malformation present. Twenty-eight dogs were included in the study. The end vertebrae were defined as the cranial end plate of the vertebra cranial to the malformed vertebra and the caudal end plate of the vertebra caudal to the malformed vertebra. Three observers performed the measurements twice. Intraclass correlation coefficients were used to calculate the intra- and inter-observer reliabilities. The intraclass correlation coefficient was excellent for all intra- and inter-observer measurements using this method. There was a significant difference in the kyphotic Cobb angle between dogs with and without associated neurological deficits. The majority of dogs with neurological deficits had a kyphotic Cobb angle higher than 35°. No significant difference in the scoliotic Cobb angle was observed. We concluded that the computer assisted digital radiographic measurement of the Cobb angle for kyphosis and scoliosis is a valid, reproducible and reliable method to quantify the degree of spinal curvature in brachycephalic screw-tailed dog breeds with congenital thoracic vertebral malformations. PMID:25198374

  3. 3D neutron transport and HPC. A PWR full core calculation using PENTRAN SN code and IBM BLUEGENE/P computers

    International Nuclear Information System (INIS)

    When dealing with nuclear reactor calculation schemes, the need for 3D transport-based reference solutions is essential for validation and optimization purposes. As the SN transport method may be considered promising with respect to comprehensive parallel computations, a 3D full PWR core benchmark was proposed to challenge the capabilities of the PENTRAN parallel SN code utilizing an IBM-BG/P computer. After a brief description of the benchmark, a parallel performance analysis is carried out and shows that the parallelizable (Amdahl) fraction of PENTRAN lies in the range 0.994 ≤ f ≤ 0.996 for a number of BG/P nodes ranging from 17 to 1156. The associated speedup reaches a value greater than 200 with 1156 nodes. Using a best estimate model, PENTRAN results are then compared to Monte Carlo results rendered using the MCNP5 code. Good consistency is observed between the two methods (SN and Monte Carlo), with discrepancies less than 65 pcm for the keff, and less than 2.5% for the flux at the pincell level. (author)
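
    The quoted speedup follows directly from Amdahl's law, S = 1/((1 - f) + f/N). A one-line check with the parallel fractions and node count reported above:

```python
def amdahl_speedup(parallel_fraction: float, n_nodes: int) -> float:
    """Ideal Amdahl speedup S = 1 / ((1 - f) + f / N)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_nodes)

for f in (0.994, 0.996):
    print(f"f = {f}: speedup on 1156 nodes = {amdahl_speedup(f, 1156):.0f}")
```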

  4. SU-E-I-05: A Correction Algorithm for Kilovoltage Cone-Beam Computed Tomography Dose Calculations in Cervical Cancer Patients

    International Nuclear Information System (INIS)

    Purpose: To investigate the accuracy and feasibility of dose calculations using kilovoltage cone beam computed tomography in cervical cancer radiotherapy using a correction algorithm. Methods: The Hounsfield unit (HU) and electron density (HU-density) curves were obtained for both the planning CT (pCT) and the kilovoltage cone beam CT (CBCT) using a CIRS-062 calibration phantom. The pCT and kV-CBCT images have different HU values, and if the HU-density curve of CBCT is used directly to calculate dose on CBCT images, the dose distribution may deviate; it is therefore necessary to normalize the different HU values between pCT and CBCT. A HU correction algorithm was applied to the CBCT images (cCBCT). Fifteen intensity-modulated radiation therapy (IMRT) plans of cervical cancer were chosen, and the plans were transferred to the pCT and cCBCT data sets without any changes for dose calculations. Phantom and patient studies were carried out. The dose differences and dose distributions were compared between the cCBCT plan and the pCT plan. Results: The HU number of CBCT was measured several times, and the maximum change was less than 2%. Compared with pCT, both CBCT and cCBCT showed discrepancies: the dose differences in the CBCT and cCBCT images were 2.48%±0.65% (range: 1.3%∼3.8%) and 0.48%±0.21% (range: 0.1%∼0.82%) for the phantom study, respectively. For dose calculation in patient images, the dose differences were 2.25%±0.43% (range: 1.4%∼3.4%) and 0.63%±0.35% (range: 0.13%∼0.97%), respectively. For the dose distributions, the passing rate of cCBCT was higher than that of CBCT. Conclusion: The CBCT image for dose calculation is feasible in cervical cancer radiotherapy, and the correction algorithm offers acceptable accuracy. It will become a useful tool for adaptive radiation therapy

  5. SU-E-I-05: A Correction Algorithm for Kilovoltage Cone-Beam Computed Tomography Dose Calculations in Cervical Cancer Patients

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, J; Zhang, W; Lu, J [Cancer Hospital of Shantou University Medical College, Shantou, Guangdong (China)

    2015-06-15

    Purpose: To investigate the accuracy and feasibility of dose calculations using kilovoltage cone beam computed tomography in cervical cancer radiotherapy using a correction algorithm. Methods: The Hounsfield unit (HU) and electron density (HU-density) curves were obtained for both the planning CT (pCT) and the kilovoltage cone beam CT (CBCT) using a CIRS-062 calibration phantom. The pCT and kV-CBCT images have different HU values, and if the HU-density curve of CBCT is used directly to calculate dose on CBCT images, the dose distribution may deviate; it is therefore necessary to normalize the different HU values between pCT and CBCT. A HU correction algorithm was applied to the CBCT images (cCBCT). Fifteen intensity-modulated radiation therapy (IMRT) plans of cervical cancer were chosen, and the plans were transferred to the pCT and cCBCT data sets without any changes for dose calculations. Phantom and patient studies were carried out. The dose differences and dose distributions were compared between the cCBCT plan and the pCT plan. Results: The HU number of CBCT was measured several times, and the maximum change was less than 2%. Compared with pCT, both CBCT and cCBCT showed discrepancies: the dose differences in the CBCT and cCBCT images were 2.48%±0.65% (range: 1.3%∼3.8%) and 0.48%±0.21% (range: 0.1%∼0.82%) for the phantom study, respectively. For dose calculation in patient images, the dose differences were 2.25%±0.43% (range: 1.4%∼3.4%) and 0.63%±0.35% (range: 0.13%∼0.97%), respectively. For the dose distributions, the passing rate of cCBCT was higher than that of CBCT. Conclusion: The CBCT image for dose calculation is feasible in cervical cancer radiotherapy, and the correction algorithm offers acceptable accuracy. It will become a useful tool for adaptive radiation therapy.
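
    Dose engines use an HU-to-density calibration curve of the kind described above as a simple lookup. The sketch below shows piecewise-linear interpolation through a handful of calibration points; the points are invented for illustration and are not measured CIRS-062 values.

```python
import numpy as np

# Made-up calibration points (HU, relative electron density); a real curve
# would come from phantom measurements for each scanner and protocol.
calib_hu      = np.array([-1000.0, -500.0, 0.0, 300.0, 1200.0])
calib_density = np.array([0.0, 0.5, 1.0, 1.15, 1.7])

def hu_to_density(hu_image):
    """Piecewise-linear HU -> relative electron density lookup."""
    return np.interp(hu_image, calib_hu, calib_density)

cbct_slice = np.array([[-700.0, -20.0], [60.0, 450.0]])  # toy 2x2 "image"
print(hu_to_density(cbct_slice))
```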

  6. The computer-aided calculation of cocurrent multi-effect evaporation

    Institute of Scientific and Technical Information of China (English)

    陈文波; 陈华新; 施得志; 阮奇

    2001-01-01

    The model of cocurrent multi-effect evaporation with extra vapor extraction and condensate flash is established, and a computer-aided calculation method is presented. The algorithm is programmed in Visual Basic 5.0. A practical example shows that the calculation method is fast and accurate: with condensate flash and with the cane sugar solution preheated from 26.7 °C to 70 °C by extracted extra vapor, the fresh-vapor consumption of the cocurrent four-effect evaporation is decreased by about 11%. (Translated Chinese abstract: A mathematical model for the process calculation of a cocurrent multi-effect evaporation system with condensate flash and extra-vapor extraction is established and solved by a computer-aided method, implemented in Visual Basic 5.0. A worked example shows the method to be fast and accurate; for a four-effect cocurrent evaporation of a cane sugar solution, extracting extra vapor to preheat the feed from 26.7 °C to 60 °C and using condensate flash saves about 11% of the heating steam.)

  7. A computer program to calculate the resistivity and induced polarization response for a three-dimensional body in the presence of buried electrodes

    Science.gov (United States)

    Daniels, Jeffrey J.

    1977-01-01

    Three-dimensional induced polarization and resistivity modeling for buried electrode configurations can be achieved by adapting surface integral techniques for surface electrode configurations to buried electrodes. Modification of the surface technique is accomplished by considering the additional mathematical terms required to express the changes in the electrical potential and geometry caused by placing the source and receiver electrodes below the surface. This report presents a listing of a computer program to calculate the resistivity and induced polarization response from a three-dimensional body for buried electrode configurations. The program is designed to calculate the response for the following electrode configurations: (1) hole-to-surface array with a buried bipole source and a surface bipole receiver, (2) hole-to-surface array with a buried pole source and a surface bipole receiver, (3) hole-to-hole array with a buried, fixed pole source and a moving bipole receiver, (4) surface-to-hole array with a fixed pole source on the surface and a moving bipole receiver in the borehole, (5) hole-to-hole array with a buried, fixed bipole source and a buried, moving bipole receiver, (6) hole-to-hole array with a buried, moving bipole source and a buried, moving bipole receiver, and (7) single-hole, buried bipole-bipole array. Input and output examples are given for each of the arrays.

  8. PYFLOW: A computer code for the calculation of the impact parameters of Dilute Pyroclastic Density Currents (DPDC) based on field data

    Science.gov (United States)

    Dioguardi, Fabio; Dellino, Pierfrancesco

    2014-05-01

    PYFLOW is a computer code designed for quantifying the hazard related to Dilute Pyroclastic Density Currents (DPDCs). DPDCs are multiphase flows that form during explosive volcanic eruptions. They are a major source of volcanic hazard, as they exert significant stress on buildings and transport large amounts of hot, unbreathable volcanic ash. The program calculates the DPDC's impact parameters (e.g. dynamic pressure and particle volumetric concentration) and is founded on turbulent boundary layer theory adapted to a multiphase framework. Fluid-dynamic variables are obtained with a probabilistic approach, meaning that for each variable the average, maximum and minimum solutions are calculated. From these values, PYFLOW builds probability functions that allow each parameter to be calculated at a given percentile. The code is written in Fortran 90 and can be compiled and installed on Windows, Mac OS X and Linux operating systems (OS). A user's manual is provided, explaining the details of the theoretical background, the setup and running procedure, and the input data. The model inputs are DPDC deposit data, e.g. particle grain size, layer thickness, particle shape factor and density. PYFLOW reads input data from a specifically designed input file or from the user's direct typing at command lines. Guidelines for writing input data are also contained in the package. PYFLOW guides the user at each step of execution, asking for additional data and inputs. The program is a tool for DPDC hazard assessment and, as an example, an application to the DPDC deposits of the Agnano-Monte Spina eruption (4.1 ky BP) at Campi Flegrei (Italy) is presented.
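
    One of the impact parameters mentioned above, dynamic pressure, is conceptually just 0.5·rho_mix·u² for the particle-gas mixture. A minimal sketch with assumed values (not PYFLOW inputs or outputs):

```python
def dynamic_pressure_kpa(particle_conc, particle_density, gas_density, velocity):
    """Dynamic pressure 0.5 * rho_mix * u^2 (kPa) for a dilute particle-gas mixture."""
    rho_mix = particle_conc * particle_density + (1.0 - particle_conc) * gas_density
    return 0.5 * rho_mix * velocity**2 / 1000.0

# Illustrative values only: 0.1 vol% particles of 2500 kg/m3 suspended in
# hot gas of 0.5 kg/m3, with the current moving at 30 m/s.
print(f"{dynamic_pressure_kpa(0.001, 2500.0, 0.5, 30.0):.2f} kPa")
```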

  9. Cooperative and competitive concurrency in scientific computing. A full open-source upgrade of the program for dynamical calculations of RHEED intensity oscillations

    Science.gov (United States)

    Daniluk, Andrzej

    2011-06-01

    A computational model is a computer program which attempts to simulate an abstract model of a particular system. Computational models involve enormous calculations and often require supercomputer speed. As personal computers become more and more powerful, more laboratory experiments can be converted into computer models that can be interactively examined by scientists and students without the risk and cost of the actual experiments. The future of programming is concurrent programming: the threaded programming model provides application programmers with a useful abstraction of concurrent execution of multiple tasks. The objective of this release is to address the design of an architecture for scientific applications that may execute as multiple threads, as well as implementations of the related shared data structures. New version program summary: Program title: GrowthCP. Catalogue identifier: ADVL_v4_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v4_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 32 269. No. of bytes in distributed program, including test data, etc.: 8 234 229. Distribution format: tar.gz. Programming language: Free Object Pascal. Computer: multi-core x64-based PC. Operating system: Windows XP, Vista, 7. Has the code been vectorised or parallelized?: No. RAM: more than 1 GB; the program requires a 32-bit or 64-bit processor to run the generated code. Memory is addressed using 32-bit (on 32-bit processors) or 64-bit (on 64-bit processors with 64-bit addressing) pointers; the amount of addressed memory is limited only by the available amount of virtual memory. Supplementary material: the figures mentioned in the "Summary of revisions" section can be obtained here. Classification: 4.3, 7.2, 6.2, 8, 14. External routines: Lazarus [1]. Catalogue ...

  10. URR [Unresolved Resonance Region] computer code: A code to calculate resonance neutron cross-section probability tables, Bondarenko self-shielding factors, and self-indication ratios for fissile and fertile nuclides

    International Nuclear Information System (INIS)

    The URR computer code has been developed to calculate cross-section probability tables, Bondarenko self-shielding factors, and self-indication ratios for fertile and fissile isotopes in the unresolved resonance region. Monte Carlo methods are utilized to select appropriate resonance parameters and to compute the cross sections at the desired reference energy. The neutron cross sections are calculated by the single-level Breit-Wigner formalism with s-, p-, and d-wave contributions. The cross-section probability tables are constructed by sampling the Doppler-broadened cross sections. The various self-shielding factors are computed numerically as Lebesgue integrals over the cross-section probability tables
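
    A compact way to picture how a probability table yields a Bondarenko-type factor: weight each probability band by the narrow-resonance flux 1/(sigma_t + sigma_0) and normalize by the infinite-dilution cross section. The sketch below uses a made-up three-band table and is only an illustration of that weighting, not URR output or its exact integration scheme.

```python
def bondarenko_factor(probs, sig_total, sig_x, sigma0):
    """Self-shielding factor f(sigma0) from a cross-section probability table.

    f = [sum p*sig_x/(sig_t+sigma0) / sum p/(sig_t+sigma0)] / sum p*sig_x
    (narrow-resonance flux weighting; infinite-dilution value in the denominator).
    """
    num = sum(p * sx / (st + sigma0) for p, st, sx in zip(probs, sig_total, sig_x))
    den = sum(p / (st + sigma0) for p, st in zip(probs, sig_total))
    sig_inf = sum(p * sx for p, sx in zip(probs, sig_x))
    return (num / den) / sig_inf

# Tiny made-up 3-band table (barns), purely for illustration.
p  = [0.2, 0.5, 0.3]
st = [12.0, 45.0, 400.0]
sx = [2.0, 15.0, 250.0]
print(f"f(sigma0 = 50 b) = {bondarenko_factor(p, st, sx, 50.0):.3f}")
```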

  11. Relationship between renal volume calculated by using multislice computed tomography and glomerular filtration rate calculated by using the Cockcroft-Gault and modification of diet in renal disease equations in living kidney donors.

    Science.gov (United States)

    Adibi, Atoosa; Mortazavi, Mojgan; Shayganfar, Azin; Kamal, Sima; Azad, Roya; Aalinezhad, Marzieh

    2016-01-01

    It is essential to ascertain the state of health and renal function of potential kidney donors before organ removal. In this regard, one of the primary steps is to estimate the donor's glomerular filtration rate (GFR). For this purpose, the modification of diet in renal disease (MDRD) and the Cockcroft-Gault (CG) formulas have been used. However, these two formulas produce different results, and new techniques with greater accuracy are required. Measuring the renal volume from computed tomography (CT) scans may be a valuable index to assess renal function. This study was conducted to investigate the correlation between renal volume and GFR values in potential living kidney donors referred to the multislice imaging center at Alzahra Hospital during 2014. The study comprised 66 subjects whose GFR was calculated using the two aforementioned formulas. Their kidney volumes were measured using 64-slice CT angiography, and the correlation between renal volume and GFR values was analyzed using the Statistical Package for the Social Sciences software. There was no correlation between the volume of the left and right kidneys and the MDRD-based estimates of GFR (P = 0.772, r = 0.036 and P = 0.251, r = 0.143, respectively). A direct linear correlation was found between the volume of the left and right kidneys and the CG-based GFR values (P = 0.001, r = 0.397, ...). Kidney volume derived from multislice CT scan can help predict the GFR value in kidney donors with normal renal function. The limitations of our study include the small sample size and the medium resolution of 64-slice multislice scanners. Further studies with larger sample size and using higher resolution scanners are warranted to determine the accuracy of this method in potential kidney donors. PMID:27424682
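
    For reference, commonly quoted forms of the two estimating equations used in the study are sketched below; the example inputs are assumed for illustration and are not patient data from the study.

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Cockcroft-Gault creatinine clearance (mL/min), commonly quoted form."""
    crcl = (140.0 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

def mdrd_gfr(age_years, serum_creatinine_mg_dl, female=False, black=False):
    """4-variable MDRD estimate (mL/min/1.73 m^2), IDMS-traceable version."""
    gfr = 175.0 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

# Illustrative donor (values assumed):
print(f"CG:   {cockcroft_gault(35, 70, 0.9, female=True):.0f} mL/min")
print(f"MDRD: {mdrd_gfr(35, 0.9, female=True):.0f} mL/min/1.73 m2")
```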

  12. Calculator calculus

    CERN Document Server

    McCarty, George

    1982-01-01

    How THIS BOOK DIFFERS This book is about the calculus. What distinguishes it, however, from other books is that it uses the pocket calculator to illustrate the theory. A computation that requires hours of labor when done by hand with tables is quite inappropriate as an example or exercise in a beginning calculus course. But that same computation can become a delicate illustration of the theory when the student does it in seconds on his calculator. Furthermore, the student's own personal involvement and easy accomplishment give him reassurance and encouragement. The machine is like a microscope, and its magnification is a hundred millionfold. We shall be interested in limits, and no stage of numerical approximation proves anything about the limit. However, the derivative of f(x) = 67.5^x, for instance, acquires real meaning when a student first appreciates its values as numbers, as limits. A quick example is 1.1^10, 1.01^100, 1.001^1000, .... Another example is t = 0.1, 0.01, in the functio...

  13. Using the OECD/NRC Pressurized Water Reactor Main Steam Line Break Benchmark to Study Current Numerical and Computational Issues of Coupled Calculations

    International Nuclear Information System (INIS)

    Incorporating full three-dimensional (3-D) models of the reactor core into system transient codes allows for a 'best-estimate' calculation of interactions between the core behavior and plant dynamics. Recent progress in computer technology has made the development of coupled thermal-hydraulic (T-H) and neutron kinetics code systems feasible. Considerable efforts have been made in various countries and organizations in this direction. Appropriate benchmarks need to be developed that will permit testing of two particular aspects. One is to verify the capability of the coupled codes to analyze complex transients with coupled core-plant interactions. The second is to test fully the neutronics/T-H coupling. One such benchmark is the Pressurized Water Reactor Main Steam Line Break (MSLB) Benchmark problem. It was sponsored by the Organization for Economic Cooperation and Development, U.S. Nuclear Regulatory Commission, and The Pennsylvania State University. The benchmark problem uses a 3-D neutronics core model that is based on real plant design and operational data for the Three Mile Island Unit 1 nuclear power plant. The purpose of this benchmark is threefold: to verify the capability of system codes for analyzing complex transients with coupled core-plant interactions; to test fully the 3-D neutronics/T-H coupling; and to evaluate discrepancies among the predictions of coupled codes in best-estimate transient simulations. The purposes of the benchmark are met through the application of three exercises: a point kinetics plant simulation (exercise 1), a coupled 3-D neutronics/core T-H evaluation of core response (exercise 2), and a best-estimate coupled core-plant transient model (exercise 3). In this paper we present the three exercises of the MSLB benchmark, and we summarize the findings of the participants with regard to the current numerical and computational issues of coupled calculations. In addition, this paper reviews in some detail the sensitivity studies on

  14. Radiotherapy treatment planning with contrast-enhanced computed tomography: feasibility of dual-energy virtual unenhanced imaging for improved dose calculations

    International Nuclear Information System (INIS)

    In radiotherapy treatment planning, intravenous administration of an iodine-based contrast agent during computed tomography (CT) improves the accuracy of delineating target volumes. However, increased tissue attenuation resulting from the high atomic number of iodine may result in erroneous dose calculations because the contrast agent is absent during the actual procedure. The purpose of this proof-of-concept study was to present a novel framework to improve the accuracy of dose calculations using dual-energy virtual unenhanced CT in the presence of an iodine-based contrast agent. Simple phantom experiments were designed to assess the feasibility of the proposed concept. By utilizing a “second-generation” dual-source CT scanner equipped with a tin filter for improved spectral separation, four CT datasets were obtained using both a water phantom and an iodine phantom: “true unenhanced” images with attenuation values of 2 ± 11 Hounsfield Units (HU), “enhanced” images with attenuation values of 274 ± 23 HU, and two series of “virtual unenhanced” images synthesized from dual-energy scans of the iodine phantom, each with a different combination of tube voltages. Two series of virtual unenhanced images demonstrated attenuation values of 12 ± 29 HU (with 80 kVp/140 kVp) and 34 ± 10 HU (with 100 kVp/140 kVp) after removing the iodine component from the contrast-enhanced images. Dose distributions of the single photon beams calculated from the enhanced images and two series of virtual unenhanced images were compared to those from true unenhanced images as a reference. The dose distributions obtained from both series of virtual unenhanced images were almost equivalent to that from the true unenhanced images, whereas the dose distribution obtained from the enhanced images indicated increased beam attenuation caused by the high attenuation characteristics of iodine. Compared to the reference dose distribution from the true unenhanced images, the dose

  15. Analysis of core physics test data and sodium void reactivity worth calculation for MONJU core with ARCADIAN-FBR computer code system

    International Nuclear Information System (INIS)

    In order to evaluate the core characteristics of fast reactors, the computer code system ARCADIAN-FBR has been developed by utilizing existing analysis codes and the latest nuclear data library, JENDL-3.3. The validity of ARCADIAN-FBR was verified using the experimental data obtained in the MONJU core physics tests. The results of the analyses are in good agreement with the experimental data, and the applicability of ARCADIAN-FBR to fast reactor core analysis is confirmed. Using ARCADIAN-FBR, the sodium void reactivity worth, which is an important parameter in the safety analysis of fast reactors, was analyzed for the MONJU core. 241Pu in the core fuel is transmuted to 241Am by radioactive decay. Therefore, the effect of 241Am accumulation on the sodium void reactivity worth was evaluated for the MONJU core. As a result of the calculation, it was confirmed that the accumulation of 241Am significantly influences the sodium void reactivity worth and hence the safety analysis of sodium-cooled fast reactors. (author)

  16. INDAR: a computer code for the calculation of critical group radiation exposure from routine discharges of radioactivity to seas and estuaries - description and users' guide

    International Nuclear Information System (INIS)

    The computer program INDAR enables detailed estimates to be made of critical group radiation exposure arising from routine discharges of radioactivity for coastal sites where the discharge is close to the shore and the shoreline is reasonably straight, and for estuarine sites where radioactivity is rapidly mixed across the width of the estuary. Important processes which can be taken into account include the turbulence generated by the discharge, the effects of a sloping sea bed and the variation with time of the lateral dispersion coefficient. The significance of the timing of discharges can also be assessed. INDAR uses physically meaningful hydrographic parameters directly. For most sites the most important exposure pathways are seafood consumption, external exposure over estuarine sediments and beaches, and the handling of fishing gear. In addition to these primary pathways, INDAR enables direct calculations for several secondary exposure pathways: seaweed consumption, swimming, the handling of materials other than fishing gear, and the inhalation of activity. (author)
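
    Conceptually, the critical-group dose is a sum over pathways of an environmental concentration (or dose rate) multiplied by a habit parameter and, for ingestion, a dose coefficient. The sketch below shows only that bookkeeping; every number in it is an invented placeholder and none is taken from INDAR or from any site assessment.

        # Pathway bookkeeping for a critical-group dose estimate (placeholder values only).
        ingestion_pathways = [
            # (label, concentration [Bq/kg], consumption rate [kg/y], dose coefficient [Sv/Bq])
            ("fish consumption",    400.0, 100.0, 1.3e-8),
            ("seaweed consumption",  50.0,  20.0, 1.3e-8),
        ]
        external_pathways = [
            # (label, dose rate over sediment or gear [Sv/h], occupancy or handling time [h/y])
            ("occupancy over beach sediment", 2.0e-7, 500.0),
            ("handling of fishing gear",      1.0e-7, 200.0),
        ]

        total = 0.0
        for label, conc, rate, coeff in ingestion_pathways:
            dose = conc * rate * coeff
            total += dose
            print(f"{label:30s} {dose:.2e} Sv/y")
        for label, dose_rate, hours in external_pathways:
            dose = dose_rate * hours
            total += dose
            print(f"{label:30s} {dose:.2e} Sv/y")
        print(f"{'critical-group total':30s} {total:.2e} Sv/y")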

  17. Accident and safety analyses for the HTR-modul. Partial project 1: Computer codes for system behaviour calculation. Final report. Pt. 2

    International Nuclear Information System (INIS)

    The project encompasses the following project tasks and problems: (1) Studies relating to complete failure of the main heat transfer system. (2) Pebble flow. (3) Development of computer codes for detailed calculation of hypothetical accidents: (a) the THERMIX/RZKRIT temperature buildup code (covering, among other things, a variation to include exothermal heat sources); (b) the REACT/THERMIX corrosion code (a variation taking into account extremely severe air ingress into the primary loop); (c) the GRECO corrosion code (a variation for treating extremely severe water ingress into the primary loop); (d) the KIND transients code (for treating extremely fast transients during reactivity incidents). (4) Limiting devices for safety-relevant quantities. (5) Analyses relating to hypothetical accidents: (a) hypothetical air ingress; (b) effects on the fuel particles induced by fast transients. The problems of the various tasks are defined in detail and the main results obtained are explained. The contributions reporting the various project tasks and activities have been prepared for separate retrieval from the database. (orig./HP)

  18. Computer-simulation-based selection of optimal monomer for imprinting of tri-O-acetyl adenosine in a polymer matrix: calculations for benzene solution.

    Science.gov (United States)

    Douhaya, Ya V; Barkaline, V V; Tsakalof, A

    2016-07-01

    Molecular imprinting is a promising way to create polymer materials that can be used as artificial receptors and are anticipated to serve as synthetic imitations of natural antibodies. In the case of successful imprinting, the selectivity and affinity of the imprint for the substrate molecules are comparable with those of their natural counterparts. Various calculation methods can be used to estimate the effects of a large range of imprinting parameters under different conditions and to find better ways to improve polymer characteristics. However, one difficulty is that properties such as hydrogen bonding can be modeled only by quantum methods, which demand substantial computational resources. Combined quantum mechanics/molecular mechanics (QM/MM) methods allow the use of MM and QM for different parts of the modeled system. In the present study this approach was implemented with the NWChem package to compare estimates of the stability of tri-O-acetyl adenosine-monomer pre-polymerization complexes in benzene solution with previous results obtained under vacuum. PMID:27296451

  19. Program POD; A computer code to calculate nuclear elastic scattering cross sections with the optical model and neutron inelastic scattering cross sections by the distorted-wave born approximation

    International Nuclear Information System (INIS)

    The computer code POD was developed to calculate angle-differential cross sections and analyzing powers for shape-elastic scattering in collisions of neutrons or light ions with a target nucleus. The cross sections are computed with the optical model. Angle-differential cross sections for neutron inelastic scattering can also be calculated with the distorted-wave Born approximation. The optical model potential parameters are the most essential inputs for these model computations. In this program, the cross sections and analyzing powers are obtained using existing local or global parameters; the parameters can also be supplied by the user. In this report, the theoretical formulas, the computational methods, and the input parameters are explained. Sample inputs and outputs are also presented. (author)
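
    For orientation, the central ingredient of such an optical-model calculation is a complex potential of Woods-Saxon form. The snippet below merely evaluates such a potential on a radial grid; the depths, radius parameters and diffusenesses are generic illustrative numbers, not a recommended local or global parameter set and not POD input.

        # Evaluate a schematic complex Woods-Saxon optical potential (illustrative parameters).
        import numpy as np

        def woods_saxon(r, depth, radius, diffuseness):
            return -depth / (1.0 + np.exp((r - radius) / diffuseness))

        A = 56                                   # target mass number (example: 56Fe)
        r = np.linspace(0.0, 12.0, 121)          # radial grid, fm

        V = woods_saxon(r, depth=45.0, radius=1.20 * A ** (1 / 3), diffuseness=0.65)  # real part, MeV
        W = woods_saxon(r, depth=8.0,  radius=1.25 * A ** (1 / 3), diffuseness=0.60)  # imaginary part, MeV
        U = V + 1j * W                           # volume terms only; surface and spin-orbit terms omitted

        print("U(r = 4 fm) =", U[40], "MeV")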

  20. Calcul statistique du volume des blocs matriciels d'un gisement fissuré The Statistical Computing of Matrix Block Volume in a Fissured Reservoir

    Directory of Open Access Journals (Sweden)

    Guez F.

    2006-11-01

    The search for optimum production conditions for a fissured reservoir depends on having a good description of the fissure pattern. Hence the sizes and volumes of the matrix blocks must be defined at all points in a structure. However, the geometry of the medium (juxtaposition and shapes of the blocks) is usually too complex for such a computation. This is why, in a previous paper, we got around this problem by reasoning on the basis of averages (dips, azimuths, fissure spacing), which led us to an order of magnitude of the volumes. Yet a mean volume cannot account for the distribution law of the block volumes, and it is this distribution that governs the choice of one or several successive recovery methods. We therefore present here an original method for the statistical calculation of the distribution law of matrix-block volumes, applicable at any point of a reservoir; the share of the reservoir made up of blocks of a given volume is deduced from it. General knowledge of the fracturing phenomenon serves as the basis of the model, and subsurface observations of the reservoir's fracturing supply its data (histograms of fissure orientation and spacing). An application to the Eschau field (Alsace, France) is reported here to illustrate the method.

  1. Simulating biochemical physics with computers: 1. Enzyme catalysis by phosphotriesterase and phosphodiesterase; 2. Integration-free path-integral method for quantum-statistical calculations

    Science.gov (United States)

    Wong, Kin-Yiu

    We have simulated two enzymatic reactions with molecular dynamics (MD) and combined quantum mechanical/molecular mechanical (QM/MM) techniques. One reaction is the hydrolysis of the insecticide paraoxon catalyzed by phosphotriesterase (PTE). PTE is a bioremediation candidate for environments contaminated by toxic nerve gases (e.g., sarin) or pesticides. Based on the potential of mean force (PMF) and the structural changes of the active site during the catalysis, we propose a revised reaction mechanism for PTE. The other reaction is the hydrolysis of the second messenger cyclic adenosine 3',5'-monophosphate (cAMP) catalyzed by phosphodiesterase (PDE). Cyclic-nucleotide PDE is a vital protein in signal-transduction pathways and thus a popular target for inhibition by drugs (e.g., Viagra). A two-dimensional (2-D) free-energy profile has been generated showing that the catalysis by PDE proceeds via a two-step SN2-type mechanism. Furthermore, a direct experimental probe for characterizing a chemical reaction mechanism is the measurement of kinetic isotope effects (KIEs). KIEs primarily arise from internuclear quantum-statistical effects, e.g., quantum tunneling and the quantization of vibration. To systematically incorporate these quantum-statistical effects in MD simulations, we have developed an automated integration-free path-integral (AIF-PI) method based on Kleinert's variational perturbation theory for the centroid density of Feynman's path integral. Using this analytic method, we have performed ab initio path-integral calculations to study the origin of KIEs in several series of proton-transfer reactions from carboxylic acids to aryl-substituted alpha-methoxystyrenes in water. In addition, we demonstrate that the AIF-PI method can be used to systematically compute the exact value of the zero-point energy (beyond the harmonic approximation) by simply minimizing the centroid effective potential.
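
    For context, the simplest textbook estimate of a primary H/D KIE considers only the zero-point energy of the stretching mode lost at the transition state, in the harmonic, tunnelling-free limit; the frequencies below are typical values, not results of the AIF-PI calculations described in the abstract.

        # Semiclassical (harmonic, no-tunnelling) estimate of a primary H/D kinetic isotope effect.
        import numpy as np

        h = 6.62607015e-34    # Planck constant, J s
        c = 2.99792458e10     # speed of light, cm/s
        kB = 1.380649e-23     # Boltzmann constant, J/K
        T = 298.15            # temperature, K

        nu_CH = 2900.0                  # typical C-H stretch, cm^-1
        nu_CD = nu_CH / np.sqrt(2.0)    # crude reduced-mass scaling for C-D

        delta_zpe = 0.5 * h * c * (nu_CH - nu_CD)          # J, ZPE lost for H minus that for D
        kie = np.exp(delta_zpe / (kB * T))
        print(f"semiclassical k_H/k_D at {T:.0f} K: {kie:.1f}")   # roughly 7-8

    Path-integral treatments such as AIF-PI go beyond this limit by including tunnelling and anharmonicity.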

  2. Numerical Calculation of Permeability and Electrical Formation Factor from AN Oil Reservoir Rock Using Geometry Obtained from Synchrotron X-Ray Computed Microtomography

    Science.gov (United States)

    Butler, S. L.; Bird, M.; Hawkes, C.; Kotzer, T.

    2013-12-01

    Advanced imaging techniques and computational modeling are being used increasingly to investigate the transport characteristics of porous rocks. In this contribution, we describe modeling of fluid and electrical flow through the interstices of two rock samples from the Weyburn oilfield in Southwestern Saskatchewan, Canada, using commercially available software. Samples of Marley Dolostone and Vuggy Limestone were imaged at resolutions of 0.78 μm and 7.45 μm, respectively, using synchrotron X-ray tomography. The porosity, permeability and electrical formation factor of similar samples were measured in the laboratory. The connected pore space of the rock sample was extracted and converted to a standard CAD file representation using commercial software. This CAD file was then imported into a commercial finite-element modeling software package where the pore space was meshed and the Navier-Stokes equations and Laplace's equation describing fluid and electrical flows were solved with appropriate boundary conditions. An example solution of the fluid flow field is shown in figure 1. Streamlines follow the direction of fluid flow while colors indicate the magnitude of the velocity. Calculation of the fluxes in post-processing allowed us to determine the permeability and electrical formation factors which were similar to those found experimentally and fell on the same porosity-permeability and Archie's Law trends. [Figure 1 caption: Fluid flow through a 50 μm per side cubic sub-sample of a Marley Dolostone; the pressure gradient is applied vertically; streamlines indicate the direction of fluid flow and colors the magnitude of the flow velocity.]
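
    The post-processing step mentioned above reduces, in its simplest form, to Darcy's law for the permeability and to a conductivity ratio for the formation factor. The sketch below applies both relations to placeholder fluxes; the numbers are not the Weyburn results.

        # Permeability from Darcy's law and formation factor from a conductivity ratio
        # (placeholder fluxes; not the results reported for the Weyburn samples).
        L = 50e-6            # sample length along the pressure gradient, m
        A = (50e-6) ** 2     # cross-sectional area, m^2
        mu = 1.0e-3          # water viscosity, Pa s
        dP = 1.0             # applied pressure drop, Pa
        Q = 2.0e-17          # volumetric flux from the Navier-Stokes solution, m^3/s (placeholder)

        k = Q * mu * L / (A * dP)                     # permeability, m^2
        print(f"permeability: {k:.2e} m^2 = {k / 9.869e-16:.2f} mD")

        sigma_fluid = 5.0    # brine conductivity, S/m (placeholder)
        V = 1.0              # applied potential difference, V
        I = 1.0e-6           # total current from the Laplace solution, A (placeholder)
        sigma_eff = I * L / (A * V)                   # effective conductivity, S/m
        print(f"formation factor F = {sigma_fluid / sigma_eff:.0f}")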

  3. Core physics calculations

    International Nuclear Information System (INIS)

    In this paper, excerpts of the 'Core Design', 'Computational Chains' and 'Qualification of Computational Chains' lectures are presented. Basic nuclear reactor design concepts such as power distribution and reactivity are defined and analyzed from both the theoretical and the computational points of view. Emphasis is put on the physical meaning of both 'observables' and on their sensitivity to design parameters. Computational aspects, mainly regarding the effects of heterogeneity in space and energy on reactor calculations, are addressed as well. The structure and qualification of computational code packages are discussed, and a practical application to the FRAMATOME SCIENCE advanced computational chain is provided. (author)

  4. Reliability calculations

    International Nuclear Information System (INIS)

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during operation, with the purpose of improving safety or reliability. Due to plant complexity and to safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed over the last 20 years and must be continuously refined to meet growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have in some cases been introduced for calculating the reliability of structures or components; a new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used, especially for the analysis of very complex systems. In order to increase the applicability of these programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied, and procedures for the implementation of importance sampling are suggested. (author)
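
    As a minimal illustration of the importance-sampling idea mentioned above, the snippet below estimates a small failure probability P[g(X) < 0] for a toy limit state g(x) = 4 - x with a standard-normal variable, by sampling from a normal density shifted toward the failure region and re-weighting; the problem is purely illustrative and unrelated to the programs described in the abstract.

        # Importance sampling for a small failure probability (toy limit state g(x) = 4 - x).
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(42)
        n = 100_000
        shift = 4.0                                   # centre the sampling density near the design point

        x = rng.normal(loc=shift, scale=1.0, size=n)  # importance density N(shift, 1)
        failed = (4.0 - x) < 0.0                      # failure indicator
        weights = norm.pdf(x) / norm.pdf(x, loc=shift)

        p_f = np.mean(failed * weights)
        print(f"importance-sampling estimate: {p_f:.2e}")
        print(f"exact value for this toy case: {norm.sf(4.0):.2e}")

    Crude Monte Carlo would need on the order of 10^7 samples to see a handful of failures here, which is the motivation for the variance-reduction procedures discussed above.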

  5. Analysis and calculation on human-computer interaction system of emotion

    Institute of Scientific and Technical Information of China (English)

    杨杰; 赵强

    2012-01-01

    Human emotion carries a large amount of information, and facial-expression and gesture images in particular contain considerable interfering information, which reduces the accuracy of affective computing. This paper therefore proposes a method for systematically analysing emotional cues from the user's facial expressions and gestures. The analysis of facial expressions and gestures is a basic component of an emotionally rich human-computer interaction system; a non-verbal-cue algorithm is used to judge the user's emotional state. Expression-related features are extracted from image sequences, the user's emotional state is analysed with an intelligent rule system, and the emotional information obtained ultimately matches the user's true response. Finally, agent-based interface technology is used to handle specific emotional states such as frustration or anger. Experimental results show that the proposed method can accurately analyse and compute the user's emotional information.

  6. URR [Unresolved Resonance Region] computer code: A code to calculate resonance neutron cross-section probability tables, Bondarenko self-shielding factors, and self-indication ratios for fissile and fertile nuclides

    International Nuclear Information System (INIS)

    The URR computer code has been developed to calculate cross-section probability tables, Bondarenko self-shielding factors, and self-indication ratios for fertile and fissile isotopes in the unresolved resonance region. Monte Carlo methods are utilized to select appropriate resonance parameters and to compute the cross sections at the desired reference energy. The neutron cross sections are calculated with the single-level Breit-Wigner formalism including s-, p-, and d-wave contributions. The cross-section probability tables are constructed by sampling the Doppler-broadened cross sections. The various self-shielding factors are computed numerically as Lebesgue integrals over the cross-section probability tables. 6 refs
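
    The final tabulation step can be pictured as turning a large set of sampled cross-section values at the reference energy into a small equal-probability table. The sketch below does only that step, drawing the samples from a stand-in lognormal distribution rather than from Breit-Wigner resonance ladders, so the numbers carry no physical meaning.

        # Build an equal-probability cross-section table from sampled values
        # (stand-in lognormal samples, not Breit-Wigner resonance ladders).
        import numpy as np

        rng = np.random.default_rng(0)
        sigma = rng.lognormal(mean=np.log(12.0), sigma=0.6, size=50_000)   # barns

        n_bands = 8
        edges = np.quantile(sigma, np.linspace(0.0, 1.0, n_bands + 1))
        for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:]), start=1):
            band = sigma[(sigma >= lo) & (sigma <= hi)]
            print(f"band {i}: probability {1.0 / n_bands:.3f}, "
                  f"range [{lo:6.2f}, {hi:6.2f}] b, mean {band.mean():6.2f} b")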

  7. Soil interaction model. Manual 1. Seltra. Documentation of the computer programme for calculating the trafficability of terrain and the mobility of forest tractors.

    OpenAIRE

    Saarilahti, M.

    2002-01-01

    Documentation of a computer programme written in VisualBasic for comparing the trafficability of forest terrain and mobility of forest tractors using different WES-based mobility models and empirical rut depth models.

  8. Coupling calculation of CFD-ACE computational fluid dynamics code and DeCART whole-core neutron transport code for development of numerical reactor

    International Nuclear Information System (INIS)

    Code coupling activities have so far focused on coupling the neutronics modules with the CFD module. An interface module for the CFD-ACE/DeCART coupling was established as an alternative to the original STAR-CD/DeCART interface. The interface module for DeCART/CFD-ACE was validated with a single-pin model, and the optimized CFD mesh was determined through calculations with a multi-pin model. Accounting for the turbulent mixing between subchannels proved important for the calculation of fuel temperature. For parallel calculations, an optimized domain-decomposition process was necessary to reduce the computational cost, and the setting of the iteration and convergence criteria for each code was important as well.
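
    Schematically, each coupling step is a fixed-point (Picard) exchange: the neutronics code passes a power distribution to the T-H code, the T-H code returns fuel temperatures and coolant densities, and the pair is iterated until neither field changes. The loop below illustrates that exchange with two trivial stand-in 'solvers'; the feedback coefficients and the convergence tolerance are arbitrary and are not taken from DeCART or CFD-ACE.

        # Fixed-point (Picard) coupling loop with trivial stand-in solvers.
        def neutronics_solver(fuel_temp):
            # stand-in: relative power falls linearly with fuel temperature (Doppler-like feedback)
            return 1.0 - 2.0e-4 * (fuel_temp - 900.0)

        def thermal_hydraulics_solver(power):
            # stand-in: fuel temperature rises linearly with relative power
            return 600.0 + 300.0 * power

        power, fuel_temp = 1.0, 1000.0               # deliberately inconsistent initial guess
        for iteration in range(1, 51):
            new_power = neutronics_solver(fuel_temp)
            new_temp = thermal_hydraulics_solver(new_power)
            change = max(abs(new_power - power), abs(new_temp - fuel_temp) / 1000.0)
            power, fuel_temp = new_power, new_temp
            if change < 1.0e-8:
                print(f"converged after {iteration} coupling iterations: "
                      f"power = {power:.6f}, fuel temperature = {fuel_temp:.2f} K")
                break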

  9. Detailed calculations of fire progression in Almaraz NPP with the Fire Dynamics Simulator computational code; Calculos detallados de progresion de incendios en C.N. Almaraz mediante el codigo computacional Fire Dynamics Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Villar Sanchez, T.

    2012-07-01

    The Fire Dynamics Simulator (FDS) is an advanced computational fire-simulation model that numerically solves the Navier-Stokes equations in each cell of the mesh at each time interval, and it can accurately calculate fire parameters for which NUREG-1805 has only a limited capability. The objective of the analysis is to compare the results obtained with FDS against those obtained from the NUREG-1805 spreadsheets, and to carry out a broader and more realistic study of the propagation of a fire in different areas of Almaraz NPP.

  10. Quantification of the computational accuracy of code systems on the burn-up credit using experimental re-calculations; Quantifizierung der Rechengenauigkeit von Codesystemen zum Abbrandkredit durch Experimentnachrechnungen

    Energy Technology Data Exchange (ETDEWEB)

    Behler, Matthias; Hannstein, Volker; Kilger, Robert; Moser, Franz-Eberhard; Pfeiffer, Arndt; Stuke, Maik

    2014-06-15

    In order to account for the reactivity-reducing effect of burn-up in the criticality safety analysis of systems with irradiated nuclear fuel ('burnup credit'), numerical methods are applied to determine the enrichment- and burnup-dependent nuclide inventory ('burnup code') and the resulting multiplication factor k-eff ('criticality code'). To allow for reliable conclusions, the systematic deviations of the calculation results from the respective true values, i.e. the bias and its uncertainty, are quantified for both calculation systems by calculating and analysing a sufficient number of suitable experiments. This quantification is specific to the application case under consideration and is also called validation. GRS has developed a methodology to validate a calculation system for the application of burnup credit in the criticality safety analysis of irradiated fuel assemblies from pressurized water reactors. This methodology was demonstrated using the GRS in-house burnup code KENOREST and the criticality calculation sequence CSAS5 from the SCALE code package. It comprises a bounding approach and, alternatively, a stochastic approach, which were demonstrated for a generic spent-fuel pool rack and a generic dry storage cask, respectively. Based on the publicly available post-irradiation examination and criticality experiments, currently only the isotopes of the uranium and plutonium elements can be taken into account.

  11. The use of symbolic computation in radiative, energy, and neutron transport calculations. Technical report, 15 August 1992--14 August 1994

    International Nuclear Information System (INIS)

    This investigation uses symbolic computation in developing analytical methods and general computational strategies for solving both linear and nonlinear, regular and singular, integral and integro-differential equations which appear in radiative and combined mode energy transport. This technical report summarizes the research conducted during the first nine months of the present investigation. The use of Chebyshev polynomials augmented with symbolic computation has clearly been demonstrated in problems involving radiative (or neutron) transport, and mixed-mode energy transport. Theoretical issues related to convergence, errors, and accuracy have also been pursued. Three manuscripts have resulted from the funded research. These manuscripts have been submitted to archival journals. At the present time, an investigation involving a conductive and radiative medium is underway. The mathematical formulation leads to a system of nonlinear, weakly-singular integral equations involving the unknown temperature and various Legendre moments of the radiative intensity in a participating medium. Some preliminary results are presented illustrating the direction of the proposed research
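
    The polynomial-approximation ingredient referred to above can be illustrated with a short numerical check of how fast a Chebyshev expansion of a smooth, kernel-like function converges; the function below is a stand-in, and the symbolic manipulation of the transport equations themselves is not reproduced here.

        # Chebyshev interpolation of a smooth stand-in kernel and its error.
        import numpy as np
        from numpy.polynomial import chebyshev as C

        f = lambda x: np.exp(-x) / (1.0 + 0.5 * x ** 2)     # smooth stand-in function on [-1, 1]

        deg = 12
        nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))   # Chebyshev points
        coeffs = C.chebfit(nodes, f(nodes), deg)

        x = np.linspace(-1.0, 1.0, 1001)
        max_err = np.max(np.abs(C.chebval(x, coeffs) - f(x)))
        print("maximum interpolation error:", max_err)
        print("last three Chebyshev coefficients:", coeffs[-3:])

    The rapid decay of the trailing coefficients is what makes the expansion attractive for the transport kernels discussed above.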

  12. CODAR: a computer code for the calculation of collective and individual radiation exposure arising from the release of activity to an estuary or sea in the British Isles

    International Nuclear Information System (INIS)

    A description is given of the computer program CODAR, which enables estimates to be made of both collective and individual radiation exposure for releases of activity around the coasts of the British Isles. The program can be run for either a limited or a continuous release of activity, and for different assumptions about, for example, the loss of activity to bottom sediments, the external exposure arising from contaminated sediments, and the local hydrology. For collective exposure the world, EEC or UK populations can be considered. Full details are given to enable users to run the computer program. (author)

  13. Proposal for an IAPWS Guideline on the Fast Calculation of Steam and Water Properties in Computational Fluid Dynamics Using Spline Interpolation

    Czech Academy of Sciences Publication Activity Database

    Kretzschmar, H.; J.; Kunick, M.; Hrubý, Jan; Duška, Michal; Vinš, Václav; Di Mare, F.; Singh, A.

    Londýn: British & Irish Association for Properties of Water and Steam (BIAPWS) a Institution of Mechanical Engineers, 2013, s. 109-109. [International Conference on the Properties of Water and Steam /16./ ICPWS. University of Greenwich, Londýn (GB), 01.09.2013-05.09.2013] Institutional support: RVO:61388998 Keywords : computational fluid dynamics * thermodynamic properties * steam turbine

  14. Calculation of demands for nuclear fuels and fuel cycle services. Description of computer model and strategies developed by Working Group 1

    International Nuclear Information System (INIS)

    Working Group 1 examined a range of reactor deployment strategies and fuel cycle options, in order to estimate the range of nuclear fuel requirements and fuel cycle service needs which would result. The computer model, its verification in comparison with other models, the strategies to be examined through use of the model, and the range of results obtained are described.

  15. Computer program /P1-GAS/ calculates the P-0 and P-1 transfer matrices for neutron moderation in a monatomic gas

    Science.gov (United States)

    Collier, G.; Gibson, G.

    1968-01-01

    FORTRAN 4 program /P1-GAS/ calculates the P-0 and P-1 transfer matrices for neutron moderation in a monatomic gas. The equations used are based on the conditions that there is isotropic scattering in the center-of-mass coordinate system, the scattering cross section is constant, and the target nuclear velocities satisfy a Maxwellian distribution.

  16. TP2, a computer program for the calculation of reactivity and kinetic parameters by the two-dimensional neutron transport perturbation theory

    International Nuclear Information System (INIS)

    TP2 is a FORTRAN-IV program for the calculation of the reactivity, effective delayed neutron fractions and mean generation time by perturbation theory, using the angular fluxes calculated by a two-dimensional Sn transport code. Group cross sections, delayed neutron fractions and spectra, the isotope-dependent prompt neutron spectrum, and direct and adjoint angular fluxes are read from disk files. This code can treat x-y, r-z and r-theta geometry in two dimensions, and the code structure is nearly the same as that of the TP1 code for one-dimensional geometry. As in the TP1 code, there are two main options. One is for the exact perturbation calculation of the reactivity, where the direct and adjoint angular fluxes are used for the unperturbed and perturbed systems, respectively. The other option is for the first-order perturbation calculation of the probe reactivity, in which unperturbed direct and adjoint angular fluxes are usually used. In both cases, reactivities for each reaction process are printed in energy- and space-dependent form according to the input specification. The criticality factor calculated by the Sn transport code using an isotope-independent fission spectrum can be corrected by the TP2 code by taking into account the isotope dependency of the prompt fission spectrum and the delayed neutron spectrum. Numerical examples are presented to demonstrate the accuracy of the reactivity, the effect of the number of mesh points and the order of the Sn method. Comparisons with the diffusion method are given. (orig.)
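
    In standard perturbation-theory notation (written here for orientation, not copied from the TP2 documentation), the exact-perturbation reactivity and the kinetics parameters evaluated by such a code can be written as

        \rho = \frac{\langle \phi^{\dagger},\,(\Delta F - \Delta A)\,\phi' \rangle}
                    {\langle \phi^{\dagger},\,F'\,\phi' \rangle},
        \qquad
        \beta_{\mathrm{eff}} = \frac{\langle \phi^{\dagger},\,F_{d}\,\phi \rangle}
                                    {\langle \phi^{\dagger},\,F\,\phi \rangle},
        \qquad
        \Lambda = \frac{\langle \phi^{\dagger},\,v^{-1}\,\phi \rangle}
                       {\langle \phi^{\dagger},\,F\,\phi \rangle},

    where A and F are the transport and fission-production operators, Delta A and Delta F their perturbations, phi-dagger the unperturbed adjoint angular flux, phi' the perturbed direct flux (replaced by the unperturbed flux in the first-order option), F_d the delayed-neutron production operator, v the neutron speed, and the brackets denote integration over space, angle and energy.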

  17. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult.
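
    For orientation, the quantity tabulated by such codes is the standard MIRD-type S factor, written here in generic notation rather than as it appears in the SFACTOR report:

        S(T \leftarrow S) \;=\; \frac{k}{m_{T}} \sum_{i} \Delta_{i}\, \phi_{i}(T \leftarrow S),

    where the sum runs over the radiation types emitted by the nuclide, Delta_i is the mean energy emitted per nuclear transition as radiation type i, phi_i(T <- S) is the fraction of that energy emitted in the source organ S that is absorbed in the target organ T, m_T is the target-organ mass, and k is the unit-conversion constant that yields S in rem per μCi-day.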

  18. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    International Nuclear Information System (INIS)

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult

  19. Computer-Assisted Radiographic Calculation of Spinal Curvature in Brachycephalic “Screw-Tailed” Dog Breeds with Congenital Thoracic Vertebral Malformations: Reliability and Clinical Evaluation

    OpenAIRE

    Guevar, Julien; Penderis, Jacques; Faller, Kiterie; Yeamans, Carmen; Stalin, Catherine; Gutierrez Quintana, Rodrigo

    2014-01-01

    The objectives of this study were to investigate computer-assisted digital radiographic measurement of Cobb angles in dogs with congenital thoracic vertebral malformations, and to determine its intra- and inter-observer reliability and its association with the presence of neurological deficits. Medical records were reviewed (2009–2013) to identify brachycephalic screw-tailed dog breeds with radiographic studies of the thoracic vertebral column and with at least one vertebral malformation present...

  20. EQ3NR, a computer program for geochemical aqueous speciation-solubility calculations: Theoretical manual, user's guide, and related documentation (Version 7.0)

    International Nuclear Information System (INIS)

    EQ3NR is an aqueous solution speciation-solubility modeling code. It is part of the EQ3/6 software package for geochemical modeling. It computes the thermodynamic state of an aqueous solution by determining the distribution of chemical species, including simple ions, ion pairs, and complexes, using standard state thermodynamic data and various equations which describe the thermodynamic activity coefficients of these species. The input to the code describes the aqueous solution in terms of analytical data, including total (analytical) concentrations of dissolved components and such other parameters as the pH, pHCl, Eh, pe, and oxygen fugacity. The input may also include a desired electrical balancing adjustment and various constraints which impose equilibrium with special pure minerals, solid solution end-member components (of specified mole fractions), and gases (of specified fugacities). The code evaluates the degree of disequilibrium in terms of the saturation index (SI = log Q/K) and the thermodynamic affinity (A = -2.303 RT log Q/K) for various reactions, such as mineral dissolution or oxidation-reduction in the aqueous solution itself. Individual values of Eh, pe, oxygen fugacity, and Ah (redox affinity) are computed for aqueous redox couples. Equilibrium fugacities are computed for gas species. The code is highly flexible in dealing with various parameters as either model inputs or outputs. The user can specify modification or substitution of equilibrium constants at run time by using options on the input file
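
    As a minimal numerical illustration of the disequilibrium measures defined above (SI = log Q/K and A = -2.303 RT log Q/K), the snippet below evaluates both for placeholder values of the ion-activity product and equilibrium constant; the numbers are not EQ3NR input or output.

        # Saturation index and thermodynamic affinity for placeholder Q and K values.
        R = 8.314462618e-3    # gas constant, kJ/(mol K)
        T = 298.15            # temperature, K

        log10_Q = -8.10       # ion-activity product of a mineral dissolution reaction (placeholder)
        log10_K = -8.48       # equilibrium constant at 25 degC (placeholder)

        SI = log10_Q - log10_K                 # saturation index, log10(Q/K)
        A = -2.303 * R * T * SI                # thermodynamic affinity, kJ/mol
        print(f"SI = {SI:+.2f} (positive: supersaturated)")
        print(f"A  = {A:+.2f} kJ/mol")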